[Pure Storage] Add volume group support

Pure Storage FlashArrays have a construct called a Volume Group,
within which volumes can be created.

The volume group can have storage QoS levels assigned to it that
limit the combined bandwidth and/or IOPS of all volumes within
the volume group.

Adding volume group support requires vendor-specific volume type
extra-specs, which allow a specific volume type to be tied to a
volume group on a specified backend array. The volume group does
not require QoS settings, but these can be applied using the
vendor-specific volume type extra-specs.

This patch provides the ability to create, manage and delete
volume groups and to manage volumes in the volume group.
Additionally, this patch will allow existing volumes in a volume
group to be managed using the ``cinder manage`` command, including
VVOLs that are in volume groups in non-replicated FlashArray pods.

Retyping of volumes in and out of a volume group based volume type
is also supported.

Volume group based volumes can also be replicated like any other
volume on a FlashArray, and replication failover of these replicated
volume group based volumes is also supported.

Additional vendor-specific volume type extra specs are detailed in
the updated driver documentation and in the release notes.
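As a sketch of what the new extra specs look like from the driver's side
(the group name and limit values here are illustrative examples, not
defaults shipped by the driver), a volume type carrying volume group
settings would expose a dictionary along these lines:

```python
# Illustrative only: 'prod-vgroup' and the limit values are examples.
# vg_maxBWS is in bytes/s, matching the driver's MIN_BWS/MAX_BWS bounds.
extra_specs = {
    'flasharray:vg_name': 'prod-vgroup',   # volume group to place volumes in
    'flasharray:vg_maxIOPS': '50000',      # combined IOPS limit for the group
    'flasharray:vg_maxBWS': '1073741824',  # combined bandwidth limit, bytes/s
}
```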

Implements: blueprint pure-add-volume-groups
Change-Id: I65c7241febec740d727f330b3bc0ef1b80abdd78
Simon Dodsley 2023-12-07 11:36:39 -05:00 committed by Simon Dodsley
parent 7727bbfeb4
commit cd3a12006b
5 changed files with 783 additions and 54 deletions


@@ -992,6 +992,12 @@ VALID_AC_FC_PORTS = ValidResponse(200, None, 1,
DotNotation(AC_FC_PORTS[2]),
DotNotation(AC_FC_PORTS[3])], {})
MANAGEABLE_PODS = [
{
'name': 'somepod',
}
]
MANAGEABLE_PURE_VOLS = [
{
'name': 'myVol1',
@@ -1171,6 +1177,11 @@ QOS_INVALID = {"maxIOPS": "100", "maxBWS": str(512 * 1024 + 1)}
QOS_ZEROS = {"maxIOPS": "0", "maxBWS": "0"}
QOS_IOPS = {"maxIOPS": "100"}
QOS_BWS = {"maxBWS": "1"}
MAX_IOPS = 100000000
MAX_BWS = 549755813888
MIN_IOPS = 100
MIN_BWS = 1048576
VGROUP = 'puretest-vgroup'
ARRAY_RESPONSE = {
'status_code': 200
@@ -1584,6 +1595,8 @@ class PureBaseVolumeDriverTestCase(PureBaseSharedDriverTestCase):
vols.append(v[0])
vol_names.append(v[1])
self.driver._get_volume_type_extra_spec = mock.Mock(
return_value={})
model_updates, _ = self.driver.update_provider_info(vols, None)
self.assertEqual(len(test_vols), len(model_updates))
for update, vol_name in zip(model_updates, vol_names):
@@ -1605,6 +1618,8 @@ class PureBaseVolumeDriverTestCase(PureBaseSharedDriverTestCase):
vols.append(v[0])
vol_names.append(v[1])
self.driver._get_volume_type_extra_spec = mock.Mock(
return_value={})
model_updates, _ = self.driver.update_provider_info(vols, None)
self.assertEqual(1, len(model_updates))
self.assertEqual(vol_names[2], model_updates[0]['provider_id'])
@@ -1639,9 +1654,12 @@ class PureBaseVolumeDriverTestCase(PureBaseSharedDriverTestCase):
self.assertEqual(49, len(result))
self.assertIsNotNone(pure.GENERATED_NAME.match(result))
@mock.patch.object(volume_types, 'get_volume_type')
@mock.patch(DRIVER_PATH + ".flasharray.VolumePost")
def test_revert_to_snapshot(self, mock_fa):
def test_revert_to_snapshot(self, mock_fa,
mock_get_volume_type):
vol, vol_name = self.new_fake_vol(set_provider_id=True)
mock_get_volume_type.return_value = vol.volume_type
snap, snap_name = self.new_fake_snap(vol)
mock_data = self.flasharray.VolumePost(source=self.flasharray.
Reference(name=vol_name))
@@ -1655,9 +1673,12 @@ class PureBaseVolumeDriverTestCase(PureBaseSharedDriverTestCase):
self.driver.revert_to_snapshot,
context, vol, snap)
@mock.patch.object(volume_types, 'get_volume_type')
@mock.patch(DRIVER_PATH + ".flasharray.VolumePost")
def test_revert_to_snapshot_group(self, mock_fa):
def test_revert_to_snapshot_group(self, mock_fa,
mock_get_volume_type):
vol, vol_name = self.new_fake_vol(set_provider_id=True)
mock_get_volume_type.return_value = vol.volume_type
group, group_name = self.new_fake_group()
group_snap, group_snap_name = self.new_fake_group_snap(group)
snap, snap_name = self.new_fake_snap(vol, group_snap)
@@ -1665,15 +1686,177 @@ class PureBaseVolumeDriverTestCase(PureBaseSharedDriverTestCase):
Reference(name=vol_name))
context = mock.MagicMock()
self.driver.revert_to_snapshot(context, vol, snap)
self.array.post_volumes.assert_called_with(names=[snap_name],
volume=mock_data,
overwrite=True)
self.array.post_volumes.\
assert_called_with(names=[group_snap_name + '.' + vol_name],
volume=mock_data,
overwrite=True)
self.assert_error_propagates([self.array.post_volumes],
self.driver.revert_to_snapshot,
context, vol, snap)
@mock.patch(DRIVER_PATH + ".flasharray.VolumePost")
def test_create_in_vgroup(self, mock_fa_post):
vol, vol_name = self.new_fake_vol()
mock_data = self.array.flasharray.VolumePost(provisioned=vol["size"])
mock_fa_post.return_value = mock_data
self.driver.create_in_vgroup(self.array, vol_name,
vol["size"], VGROUP,
MAX_IOPS, MAX_BWS)
self.array.post_volumes.\
assert_called_with(names=[VGROUP + "/" + vol_name],
with_default_protection=False,
volume=mock_data)
iops_msg = (f"vg_maxIOPS QoS error. Must be more than {MIN_IOPS} "
f"and less than {MAX_IOPS}")
exc_out = self.assertRaises(exception.InvalidQoSSpecs,
self.driver.create_in_vgroup,
self.array, vol_name,
vol["size"], VGROUP,
1, MAX_BWS)
self.assertEqual(str(exc_out), iops_msg)
bws_msg = (f"vg_maxBWS QoS error. Must be between {MIN_BWS} "
f"and {MAX_BWS}")
exc_out = self.assertRaises(exception.InvalidQoSSpecs,
self.driver.create_in_vgroup,
self.array, vol_name,
vol["size"], VGROUP,
MAX_IOPS, 1)
self.assertEqual(str(exc_out), bws_msg)
@mock.patch(DRIVER_PATH + ".flasharray.VolumePost")
def test_create_from_snap_in_vgroup(self, mock_fa_post):
vol, vol_name = self.new_fake_vol()
snap, snap_name = self.new_fake_snap(vol)
src_data = pure.flasharray.Reference(name=snap_name)
mock_data = self.array.flasharray.VolumePost(source=src_data)
mock_fa_post.return_value = mock_data
self.driver.create_from_snap_in_vgroup(self.array, vol_name,
vol["size"], VGROUP,
MAX_IOPS, MAX_BWS)
self.array.post_volumes.\
assert_called_with(names=[VGROUP + "/" + vol_name],
with_default_protection=False,
volume=mock_data)
iops_msg = (f"vg_maxIOPS QoS error. Must be more than {MIN_IOPS} "
f"and less than {MAX_IOPS}")
exc_out = self.assertRaises(exception.InvalidQoSSpecs,
self.driver.create_from_snap_in_vgroup,
self.array, vol_name,
vol["size"], VGROUP,
1, MAX_BWS)
self.assertEqual(str(exc_out), iops_msg)
bws_msg = (f"vg_maxBWS QoS error. Must be between {MIN_BWS} "
f"and {MAX_BWS}")
exc_out = self.assertRaises(exception.InvalidQoSSpecs,
self.driver.create_from_snap_in_vgroup,
self.array, vol_name,
vol["size"], VGROUP,
MAX_IOPS, 1)
self.assertEqual(str(exc_out), bws_msg)
@mock.patch(DRIVER_PATH + ".LOG")
@mock.patch(DRIVER_PATH + ".flasharray.VolumeGroupPatch")
def test_delete_vgroup_if_empty(self, mock_vg_patch, mock_logger):
vol, vol_name = self.new_fake_vol()
vgname = VGROUP + "/" + vol_name
rsp = ValidResponse(200, None, 1, [DotNotation({"volume_count": 0,
"name": vgname})], {})
self.array.get_volume_groups.return_value = rsp
mock_data = pure.flasharray.VolumeGroupPatch(destroyed=True)
self.driver._delete_vgroup_if_empty(self.array, vgname)
self.array.patch_volume_groups.\
assert_called_with(names=[vgname], volume_group=mock_data)
self.mock_config.pure_eradicate_on_delete = True
self.driver._delete_vgroup_if_empty(self.array, vgname)
self.array.delete_volume_groups.assert_called_with(names=[vgname])
err_rsp = ErrorResponse(400, [DotNotation({'message':
'vgroup delete failed'})], {})
self.array.delete_volume_groups.return_value = err_rsp
self.driver._delete_vgroup_if_empty(self.array, vgname)
mock_logger.warning.\
assert_called_with("Volume group deletion failed "
"with message: %s", "vgroup delete failed")
@mock.patch.object(volume_types, 'get_volume_type_extra_specs')
def test__get_volume_type_extra_spec(self, mock_specs):
vol, vol_name = self.new_fake_vol()
vgname = VGROUP + "/" + vol_name
mock_specs.return_value = {'vg_name': vgname}
self.driver.\
_get_volume_type_extra_spec = mock.Mock(return_value={})
@mock.patch(DRIVER_PATH + ".LOG")
@mock.patch(DRIVER_PATH + ".flasharray.VolumeGroupPost")
def test_create_volume_group_if_not_exist(self, mock_vg_post, mock_logger):
vol, vol_name = self.new_fake_vol()
vgname = VGROUP + "/" + vol_name
mock_qos = pure.flasharray.Qos(iops_limit='MAX_IOPS',
bandwidth_limit='MAX_BWS')
mock_data = pure.flasharray.VolumeGroupPost(qos=mock_qos)
err_mock_data = pure.flasharray.VolumeGroupPatch(qos=mock_qos)
self.driver._create_volume_group_if_not_exist(self.array, vgname,
MAX_IOPS, MAX_BWS)
self.array.post_volume_groups.\
assert_called_with(names=[vgname], volume_group=mock_data)
err_rsp = ErrorResponse(400, [DotNotation({'message':
'already exists'})], {})
self.array.post_volume_groups.return_value = err_rsp
self.driver._create_volume_group_if_not_exist(self.array, vgname,
MAX_IOPS, MAX_BWS)
self.array.patch_volume_groups.\
assert_called_with(names=[vgname], volume_group=err_mock_data)
mock_logger.warning.\
assert_called_with("Skipping creation of vg %s since it "
"already exists. Resetting QoS", vgname)
patch_rsp = ErrorResponse(400, [DotNotation({'message':
'does not exist'})], {})
self.array.patch_volume_groups.return_value = patch_rsp
self.driver._create_volume_group_if_not_exist(self.array, vgname,
MAX_IOPS, MAX_BWS)
mock_logger.warning.\
assert_called_with("Unable to change %(vgroup)s QoS, "
"error message: %(error)s",
{"vgroup": vgname,
"error": 'does not exist'})
call_count = 0
def side_effect(*args, **kwargs):
nonlocal call_count
call_count += 1
if call_count > 1: # Return immediately on any recursive call
return None
else:
# Call the actual method logic for the first invocation
err_rsp = ErrorResponse(400, [DotNotation({'message':
'some error'})],
{})
self.array.post_volume_groups.return_value = err_rsp
rsp = ValidResponse(200, None, 1,
[DotNotation({"destroyed": "true",
"name": vgname})], {})
self.array.get_volume_groups.return_value = rsp
return original_method(*args, **kwargs)
original_method = self.driver._create_volume_group_if_not_exist
with mock.patch.object(self.driver,
'_create_volume_group_if_not_exist',
side_effect=side_effect):
self.driver._create_volume_group_if_not_exist(self.array, vgname,
MAX_IOPS, MAX_BWS)
mock_logger.warning.\
assert_called_with("Volume group %s is deleted but not"
" eradicated - will recreate.", vgname)
self.array.delete_volume_groups.assert_called_with(names=[vgname])
@mock.patch(DRIVER_PATH + ".flasharray.VolumePost")
@mock.patch(BASE_DRIVER_OBJ + "._add_to_group_if_needed")
@mock.patch(BASE_DRIVER_OBJ + "._get_replication_type_from_vol_type")
@@ -1692,6 +1875,44 @@ class PureBaseVolumeDriverTestCase(PureBaseSharedDriverTestCase):
self.assert_error_propagates([mock_fa],
self.driver.create_volume, vol_obj)
@mock.patch(DRIVER_PATH + ".LOG")
@mock.patch.object(volume_types, 'get_volume_type_extra_specs')
def test_get_volume_type_extra_spec(self, mock_specs, mock_logger):
vol, vol_name = self.new_fake_vol()
vgname = VGROUP + "/" + vol_name
mock_specs.return_value = {'flasharray:vg_name': vgname}
spec_out = self.driver._get_volume_type_extra_spec(vol.volume_type_id,
'vg_name')
assert spec_out == vgname
mock_specs.return_value = {}
default_value = "puretestvg"
spec_out = self.driver.\
_get_volume_type_extra_spec(vol.volume_type_id,
'vg_name',
default_value=default_value)
mock_logger.debug.\
assert_called_with("Returning default spec value: %s.",
default_value)
mock_specs.return_value = {'flasharray:vg_name': vgname}
possible_values = ['vgtest']
spec_out = self.driver.\
_get_volume_type_extra_spec(vol.volume_type_id,
'vg_name',
possible_values=possible_values)
mock_logger.debug.\
assert_called_with("Invalid spec value: %s specified.", vgname)
mock_specs.return_value = {'flasharray:vg_name': vgname}
possible_values = [vgname]
spec_out = self.driver.\
_get_volume_type_extra_spec(vol.volume_type_id,
'vg_name',
possible_values=possible_values)
mock_logger.debug.\
assert_called_with("Returning spec value %s", vgname)
@mock.patch(DRIVER_PATH + ".flasharray.VolumePost")
@mock.patch(BASE_DRIVER_OBJ + "._add_to_group_if_needed")
@mock.patch(BASE_DRIVER_OBJ + "._get_replication_type_from_vol_type")
@@ -1837,6 +2058,8 @@ class PureBaseVolumeDriverTestCase(PureBaseSharedDriverTestCase):
source=
pure.flasharray.
reference(name=src_name))
self.driver._get_volume_type_extra_spec = mock.Mock(
return_value={})
mock_fa.return_value = mock_data
mock_get_replication_type.return_value = None
# Branch where extend unneeded
@@ -1866,6 +2089,8 @@ class PureBaseVolumeDriverTestCase(PureBaseSharedDriverTestCase):
reference(name=src_name))
mock_fa.return_value = mock_data
# Branch where extend unneeded
self.driver._get_volume_type_extra_spec = mock.Mock(
return_value={})
self.driver.create_cloned_volume(vol, src_vol)
self.array.post_volumes.assert_called_with(names=[vol_name],
volume=mock_data)
@@ -1888,6 +2113,8 @@ class PureBaseVolumeDriverTestCase(PureBaseSharedDriverTestCase):
Reference(name=src_name),
name=vol_name)
mock_fa.return_value = mock_data
self.driver._get_volume_type_extra_spec = mock.Mock(
return_value={})
self.driver.create_cloned_volume(vol, src_vol)
mock_extend.assert_called_with(self.array, vol_name,
src_vol["size"], vol["size"])
@@ -1901,6 +2128,8 @@ class PureBaseVolumeDriverTestCase(PureBaseSharedDriverTestCase):
mock_add_to_group):
vol, vol_name = self.new_fake_vol(set_provider_id=False)
group = fake_group.fake_group_obj(mock.MagicMock())
self.driver._get_volume_type_extra_spec = mock.Mock(
return_value={})
src_vol, _ = self.new_fake_vol(spec={"group_id": group.id})
mock_get_replication_type.return_value = None
@@ -2719,6 +2948,8 @@ class PureBaseVolumeDriverTestCase(PureBaseSharedDriverTestCase):
self.array.get_volumes.return_value = MPV
self.array.get_connections.return_value = []
vol, vol_name = self.new_fake_vol(set_provider_id=False)
self.driver._get_volume_type_extra_spec = mock.Mock(
return_value={})
self.driver.manage_existing(vol, volume_ref)
mock_rename.assert_called_with(ref_name, vol_name,
raise_not_exist=True)
@@ -2730,6 +2961,8 @@ class PureBaseVolumeDriverTestCase(PureBaseSharedDriverTestCase):
self.array.get_volumes.return_value = MPV
self.array.get_connections.return_value = []
vol, _ = self.new_fake_vol(set_provider_id=False)
self.driver._get_volume_type_extra_spec = mock.Mock(
return_value={})
self.assert_error_propagates(
[mock_rename, mock_validate],
self.driver.manage_existing,
@@ -2765,10 +2998,16 @@ class PureBaseVolumeDriverTestCase(PureBaseSharedDriverTestCase):
self.driver.manage_existing,
vol, volume_ref)
def test_manage_existing_vol_in_pod(self):
def test_manage_existing_vol_in_repl_pod(self):
ref_name = 'somepod::vol1'
volume_ref = {'source-name': ref_name}
pod = deepcopy(MANAGEABLE_PODS)
pod[0]['array_count'] = 1
pod[0]['link_source_count'] = 1
pod[0]['link_target_count'] = 1
self.array.get_connections.return_value = []
self.array.get_pods.return_value = ValidResponse(
200, None, 1, [DotNotation(pod[0])], {})
vol, vol_name = self.new_fake_vol(set_provider_id=False)
self.assertRaises(exception.ManageExistingInvalidReference,
@@ -3428,6 +3667,8 @@ class PureBaseVolumeDriverTestCase(PureBaseSharedDriverTestCase):
get_voltype = "cinder.objects.volume_type.VolumeType.get_by_name_or_id"
with mock.patch(get_voltype) as mock_get_vol_type:
mock_get_vol_type.return_value = new_type
self.driver._get_volume_type_extra_spec = mock.Mock(
return_value={})
did_retype, model_update = self.driver.retype(
ctxt,
vol,
@@ -4203,6 +4444,8 @@ class PureBaseVolumeDriverTestCase(PureBaseSharedDriverTestCase):
get_voltype = "cinder.objects.volume_type.VolumeType.get_by_name_or_id"
with mock.patch(get_voltype) as mock_get_vol_type:
mock_get_vol_type.return_value = new_type
self.driver._get_volume_type_extra_spec = mock.Mock(
return_value={})
did_retype, model_update = self.driver.retype(
ctxt,
vol,
@@ -4229,6 +4472,8 @@ class PureBaseVolumeDriverTestCase(PureBaseSharedDriverTestCase):
get_voltype = "cinder.objects.volume_type.VolumeType.get_by_name_or_id"
with mock.patch(get_voltype) as mock_get_vol_type:
mock_get_vol_type.return_value = new_type
self.driver._get_volume_type_extra_spec = mock.Mock(
return_value={})
did_retype, model_update = self.driver.retype(
ctxt,
vol,


@@ -177,6 +177,11 @@ HOST_CREATE_MAX_RETRIES = 5
USER_AGENT_BASE = 'OpenStack Cinder'
MIN_IOPS = 100
MAX_IOPS = 100000000 # 100M
MIN_BWS = 1048576 # 1 MB/s
MAX_BWS = 549755813888 # 512 GB/s
class PureDriverException(exception.VolumeDriverException):
message = _("Pure Storage Cinder driver failure: %(reason)s")
@@ -346,20 +351,20 @@ class PureBaseVolumeDriver(san.SanDriver):
array.patch_volumes(names=[vol_name],
volume=flasharray.VolumePatch(
qos=flasharray.Qos(
iops_limit=100000000,
bandwidth_limit=549755813888)))
iops_limit=MAX_IOPS,
bandwidth_limit=MAX_BWS)))
elif qos['maxIOPS'] == 0:
array.patch_volumes(names=[vol_name],
volume=flasharray.VolumePatch(
qos=flasharray.Qos(
iops_limit=100000000,
iops_limit=MAX_IOPS,
bandwidth_limit=qos['maxBWS'])))
elif qos['maxBWS'] == 0:
array.patch_volumes(names=[vol_name],
volume=flasharray.VolumePatch(
qos=flasharray.Qos(
iops_limit=qos['maxIOPS'],
bandwidth_limit=549755813888)))
bandwidth_limit=MAX_BWS)))
else:
array.patch_volumes(names=[vol_name],
volume=flasharray.VolumePatch(
@@ -368,6 +373,75 @@ class PureBaseVolumeDriver(san.SanDriver):
bandwidth_limit=qos['maxBWS'])))
return
@pure_driver_debug_trace
def create_from_snap_in_vgroup(self,
array,
vol_name,
snap_name,
vgroup,
vg_iop,
vg_bw):
if not (MIN_IOPS <= int(vg_iop) <= MAX_IOPS):
msg = (_('vg_maxIOPS QoS error. Must be more than '
'%(min_iops)s and less than %(max_iops)s') %
{'min_iops': MIN_IOPS, 'max_iops': MAX_IOPS})
raise exception.InvalidQoSSpecs(message=msg)
if not (MIN_BWS <= int(vg_bw) <= MAX_BWS):
msg = (_('vg_maxBWS QoS error. Must be between '
'%(min_bws)s and %(max_bws)s') %
{'min_bws': MIN_BWS, 'max_bws': MAX_BWS})
raise exception.InvalidQoSSpecs(message=msg)
self._create_volume_group_if_not_exist(array,
vgroup,
int(vg_iop),
int(vg_bw))
vg_volname = vgroup + "/" + vol_name
if self._array.safemode:
array.post_volumes(names=[vg_volname],
with_default_protection=False,
volume=flasharray.VolumePost(
source=flasharray.Reference(
name=snap_name)))
else:
array.post_volumes(names=[vg_volname],
volume=flasharray.VolumePost(
source=flasharray.Reference(name=snap_name)))
return vg_volname
@pure_driver_debug_trace
def create_in_vgroup(self,
array,
vol_name,
vol_size,
vgroup,
vg_iop,
vg_bw):
if not (MIN_IOPS <= int(vg_iop) <= MAX_IOPS):
msg = (_('vg_maxIOPS QoS error. Must be more than '
'%(min_iops)s and less than %(max_iops)s') %
{'min_iops': MIN_IOPS, 'max_iops': MAX_IOPS})
raise exception.InvalidQoSSpecs(message=msg)
if not (MIN_BWS <= int(vg_bw) <= MAX_BWS):
msg = (_('vg_maxBWS QoS error. Must be between '
'%(min_bws)s and %(max_bws)s') %
{'min_bws': MIN_BWS, 'max_bws': MAX_BWS})
raise exception.InvalidQoSSpecs(message=msg)
self._create_volume_group_if_not_exist(array,
vgroup,
int(vg_iop),
int(vg_bw))
vg_volname = vgroup + "/" + vol_name
if self._array.safemode:
array.post_volumes(names=[vg_volname],
with_default_protection=False,
volume=flasharray.VolumePost(
provisioned=vol_size))
else:
array.post_volumes(names=[vg_volname],
volume=flasharray.VolumePost(
provisioned=vol_size))
return vg_volname
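Both `create_in_vgroup` and `create_from_snap_in_vgroup` open with the same
bounds check on the group-level QoS limits. A minimal, self-contained sketch
of that check (with `ValueError` standing in for cinder's
`exception.InvalidQoSSpecs` so the example runs on its own):

```python
# Sketch of the vgroup QoS validation; limits mirror the driver's constants.
MIN_IOPS, MAX_IOPS = 100, 100_000_000          # 100 .. 100M IOPS
MIN_BWS, MAX_BWS = 1_048_576, 549_755_813_888  # 1 MB/s .. 512 GB/s, in bytes/s

def validate_vgroup_qos(vg_iop, vg_bw):
    """Raise if either group-level QoS limit is outside Purity's range."""
    if not (MIN_IOPS <= int(vg_iop) <= MAX_IOPS):
        raise ValueError(
            f"vg_maxIOPS QoS error. Must be more than {MIN_IOPS} "
            f"and less than {MAX_IOPS}")
    if not (MIN_BWS <= int(vg_bw) <= MAX_BWS):
        raise ValueError(
            f"vg_maxBWS QoS error. Must be between {MIN_BWS} "
            f"and {MAX_BWS}")
```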
@pure_driver_debug_trace
def create_with_qos(self, array, vol_name, vol_size, qos):
if self._array.safemode:
@@ -630,7 +704,7 @@ class PureBaseVolumeDriver(san.SanDriver):
:return None
"""
vol_name = self._generate_purity_vol_name(volume)
if snapshot['cgsnapshot']:
if snapshot['group_snapshot'] or snapshot['cgsnapshot']:
snap_name = self._get_pgroup_snap_name_from_snapshot(snapshot)
else:
snap_name = self._get_snap_name(snapshot)
@@ -647,7 +721,16 @@ class PureBaseVolumeDriver(san.SanDriver):
@pure_driver_debug_trace
def create_volume(self, volume):
"""Creates a volume."""
"""Creates a volume.
Note that if a vgroup is specified in the volume type
extra_spec then we do not apply volume level qos as this is
incompatible with volume group qos settings.
We will force a volume group to have the maximum qos settings
if not specified in the volume type extra_spec as this can
cause retyping issues in the future if not defined.
"""
qos = None
vol_name = self._generate_purity_vol_name(volume)
vol_size = volume["size"] * units.Gi
@@ -656,7 +739,26 @@ class PureBaseVolumeDriver(san.SanDriver):
current_array = self._get_current_array()
if type_id is not None:
volume_type = volume_types.get_volume_type(ctxt, type_id)
qos = self._get_qos_settings(volume_type)
vg_iops = self._get_volume_type_extra_spec(type_id,
'vg_maxIOPS',
default_value=MAX_IOPS)
vg_bws = self._get_volume_type_extra_spec(type_id,
'vg_maxBWS',
default_value=MAX_BWS)
vgroup = self._get_volume_type_extra_spec(type_id, 'vg_name')
if vgroup:
vgroup = INVALID_CHARACTERS.sub("-", vgroup)
vg_volname = self.create_in_vgroup(current_array,
vol_name,
vol_size,
vgroup,
vg_iops,
vg_bws)
return self._setup_volume(current_array,
volume,
vg_volname)
else:
qos = self._get_qos_settings(volume_type)
if qos is not None:
self.create_with_qos(current_array, vol_name, vol_size, qos)
else:
@@ -687,7 +789,26 @@ class PureBaseVolumeDriver(san.SanDriver):
type_id = volume.get('volume_type_id')
if type_id is not None:
volume_type = volume_types.get_volume_type(ctxt, type_id)
qos = self._get_qos_settings(volume_type)
vg_iops = self._get_volume_type_extra_spec(type_id,
'vg_maxIOPS',
default_value=MAX_IOPS)
vg_bws = self._get_volume_type_extra_spec(type_id,
'vg_maxBWS',
default_value=MAX_BWS)
vgroup = self._get_volume_type_extra_spec(type_id, 'vg_name')
if vgroup:
vgroup = INVALID_CHARACTERS.sub("-", vgroup)
vg_volname = self.create_from_snap_in_vgroup(current_array,
vol_name,
snap_name,
vgroup,
vg_iops,
vg_bws)
return self._setup_volume(current_array,
volume,
vg_volname)
else:
qos = self._get_qos_settings(volume_type)
if self._array.safemode:
current_array.post_volumes(names=[vol_name],
@@ -710,8 +831,8 @@ class PureBaseVolumeDriver(san.SanDriver):
current_array.patch_volumes(names=[vol_name],
volume=flasharray.VolumePatch(
qos=flasharray.Qos(
iops_limit=100000000,
bandwidth_limit=549755813888)))
iops_limit=MAX_IOPS,
bandwidth_limit=MAX_BWS)))
return self._setup_volume(current_array, volume, vol_name)
@@ -854,6 +975,33 @@ class PureBaseVolumeDriver(san.SanDriver):
ctxt.reraise = False
LOG.warning("Volume deletion failed with message: %s",
res.errors[0].message)
# Now check to see if deleting this volume left an empty volume
# group. If so, we delete / eradicate the volume group
if "/" in vol_name:
vgroup = vol_name.split("/")[0]
self._delete_vgroup_if_empty(current_array, vgroup)
@pure_driver_debug_trace
def _delete_vgroup_if_empty(self, array, vgroup):
"""Delete volume group if empty"""
vgroup_volumes = list(array.get_volume_groups(
names=[vgroup]).items)[0].volume_count
if vgroup_volumes == 0:
# Delete the volume group
array.patch_volume_groups(
names=[vgroup],
volume_group=flasharray.VolumeGroupPatch(
destroyed=True))
if self.configuration.pure_eradicate_on_delete:
# Eradicate the volume group
res = array.delete_volume_groups(names=[vgroup])
if res.status_code == 400:
with excutils.save_and_reraise_exception() as ctxt:
ctxt.reraise = False
LOG.warning("Volume group deletion failed "
"with message: %s",
res.errors[0].message)
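The cleanup decision in `_delete_vgroup_if_empty` can be sketched as a
standalone function; the array client here is a simple stand-in, not the
purestorage SDK:

```python
# Sketch of the empty-vgroup cleanup flow with a faked array client.
def delete_vgroup_if_empty(array, vgroup, eradicate_on_delete):
    """Destroy (and optionally eradicate) a volume group once it is empty."""
    if array.volume_count(vgroup) != 0:
        return "kept"                  # still has volumes, do nothing
    array.destroy(vgroup)              # soft delete (destroyed=True)
    if eradicate_on_delete:
        array.eradicate(vgroup)        # permanent removal
        return "eradicated"
    return "destroyed"

class FakeArray:
    """Minimal stand-in recording no state beyond the volume count."""
    def __init__(self, count):
        self._count = count
    def volume_count(self, vgroup):
        return self._count
    def destroy(self, vgroup):
        pass
    def eradicate(self, vgroup):
        pass
```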
@pure_driver_debug_trace
def create_snapshot(self, snapshot):
@@ -1517,12 +1665,12 @@ class PureBaseVolumeDriver(san.SanDriver):
else:
ref_vol_name = existing_ref['source-name']
if not is_snap and '::' in ref_vol_name:
# Don't allow for managing volumes in a pod
raise exception.ManageExistingInvalidReference(
_("Unable to manage volume in a Pod"))
current_array = self._get_current_array()
if not is_snap and self._pod_check(current_array, ref_vol_name):
# Don't allow for managing volumes in a replicated pod
raise exception.ManageExistingInvalidReference(
_("Unable to manage volume in a Replicated Pod"))
volres = current_array.get_volumes(names=[ref_vol_name])
if volres.status_code == 200:
volume_info = list(volres.items)[0]
@@ -1743,8 +1891,54 @@ class PureBaseVolumeDriver(san.SanDriver):
current_array.patch_volumes(
names=[new_vol_name],
volume=flasharray.VolumePatch(
qos=flasharray.Qos(iops_limit=100000000,
bandwidth_limit=549755813888)))
qos=flasharray.Qos(iops_limit=MAX_IOPS,
bandwidth_limit=MAX_BWS)))
# If we are managing to a volume type that is a volume group
# make sure that the target volume group exists with the
# correct QoS settings.
if self._get_volume_type_extra_spec(volume.volume_type['id'],
'vg_name'):
target_vg = self._get_volume_type_extra_spec(
volume.volume_type['id'],
'vg_name')
target_vg = INVALID_CHARACTERS.sub("-", target_vg)
vg_iops = self._get_volume_type_extra_spec(
volume.volume_type['id'],
'vg_maxIOPS',
default_value=MAX_IOPS)
vg_bws = self._get_volume_type_extra_spec(
volume.volume_type['id'],
'vg_maxBWS',
default_value=MAX_BWS)
if not (MIN_IOPS <= int(vg_iops) <= MAX_IOPS):
msg = (_('vg_maxIOPS QoS error. Must be more than '
'%(min_iops)s and less than %(max_iops)s') %
{'min_iops': MIN_IOPS, 'max_iops': MAX_IOPS})
raise exception.InvalidQoSSpecs(message=msg)
if not (MIN_BWS <= int(vg_bws) <= MAX_BWS):
msg = (_('vg_maxBWS QoS error. Must be between '
'%(min_bws)s and %(max_bws)s') %
{'min_bws': MIN_BWS, 'max_bws': MAX_BWS})
raise exception.InvalidQoSSpecs(message=msg)
self._create_volume_group_if_not_exist(current_array,
target_vg,
vg_iops,
vg_bws)
res = current_array.patch_volumes(
names=[new_vol_name],
volume=flasharray.VolumePatch(
volume_group=flasharray.Reference(
name=target_vg)))
if res.status_code != 200:
LOG.warning("Failed to move volume %(vol)s, to volume "
"group %(vg)s. Error: %(mess)s", {
"vol": new_vol_name,
"vg": target_vg,
"mess": res.errors[0].message})
new_vol_name = target_vg + "/" + new_vol_name
if "/" in ref_vol_name:
source_vg = ref_vol_name.split('/')[0]
self._delete_vgroup_if_empty(current_array, source_vg)
# Check if the volume_type has QoS settings and if so
# apply them to the newly managed volume
qos = None
@@ -1775,6 +1969,20 @@ class PureBaseVolumeDriver(san.SanDriver):
return size
def _pod_check(self, array, volume):
"""Check if volume is in a replicated pod."""
if "::" in volume:
pod = volume.split("::")[0]
pod_info = list(array.get_pods(names=[pod]).items)[0]
if (pod_info.link_source_count == 0
and pod_info.link_target_count == 0
and pod_info.array_count == 1):
return False
else:
return True
else:
return False
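The `_pod_check` condition reads as a negative; the underlying rule is that
a pod counts as replicated when it spans more than one array or has any
pod-replica links. A self-contained sketch of just that predicate:

```python
# Sketch of the replicated-pod test using the three pod counters the
# driver inspects (array_count, link_source_count, link_target_count).
def is_replicated_pod(array_count, link_source_count, link_target_count):
    """True unless the pod sits on a single array with no replica links."""
    return not (link_source_count == 0
                and link_target_count == 0
                and array_count == 1)
```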
def _rename_volume_object(self,
old_name,
new_name,
@@ -1782,7 +1990,12 @@ class PureBaseVolumeDriver(san.SanDriver):
snapshot=False):
"""Rename a volume object (could be snapshot) in Purity.
This will not raise an exception if the object does not exist
This will not raise an exception if the object does not exist.
We need to ensure that if we are renaming to a different
container in the backend, eg a pod, volume group, or just
the main array container, we have to rename first and then
move the object.
"""
current_array = self._get_current_array()
if snapshot:
@@ -1790,6 +2003,45 @@ class PureBaseVolumeDriver(san.SanDriver):
names=[old_name],
volume_snapshot=flasharray.VolumePatch(name=new_name))
else:
if "/" in old_name and "::" not in old_name:
interim_name = old_name.split("/")[1]
res = current_array.patch_volumes(
names=[old_name],
volume=flasharray.VolumePatch(
volume_group=flasharray.Reference(name="")))
if res.status_code == 400:
LOG.warning("Unable to move %(old_name)s, error "
"message: %(error)s",
{"old_name": old_name,
"error": res.errors[0].message})
old_name = interim_name
if "/" not in old_name and "::" in old_name:
interim_name = old_name.split("::")[1]
res = current_array.patch_volumes(
names=[old_name],
volume=flasharray.VolumePatch(
pod=flasharray.Reference(name="")))
if res.status_code == 400:
LOG.warning("Unable to move %(old_name)s, error "
"message: %(error)s",
{"old_name": old_name,
"error": res.errors[0].message})
old_name = interim_name
if "/" in old_name and "::" in old_name:
# This is a VVOL which can't be moved, so have
# to take a copy
interim_name = old_name.split("/")[1]
res = current_array.post_volumes(
names=[interim_name],
volume=flasharray.VolumePost(
source=flasharray.Reference(name=old_name)))
if res.status_code == 400:
LOG.warning("Unable to copy %(old_name)s, error "
"message: %(error)s",
{"old_name": old_name,
"error": res.errors[0].message})
old_name = interim_name
res = current_array.patch_volumes(
names=[old_name],
volume=flasharray.VolumePatch(name=new_name))
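The three branches above key off the name separators Purity uses: `/` marks
a volume-group member, `::` marks a pod member, and a name containing both
is a VVOL that cannot be moved out of its group, only copied. A sketch of
that classification as a pure function:

```python
# Sketch of the container classification used before renaming a volume.
def move_action(vol_name):
    """Decide how a volume must leave its container before a rename."""
    in_vgroup = "/" in vol_name
    in_pod = "::" in vol_name
    if in_vgroup and in_pod:
        return "copy"                 # VVOL: copy out, cannot be moved
    if in_vgroup:
        return "move-out-of-vgroup"   # clear the volume_group reference
    if in_pod:
        return "move-out-of-pod"      # clear the pod reference
    return "rename-only"              # already in the main array container
```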
@@ -1896,7 +2148,7 @@ class PureBaseVolumeDriver(san.SanDriver):
connected_vols = {}
for connect in range(0, len(connections)):
connected_vols[connections[connect].volume.name] = \
connections[connect].host.name
getattr(connections[connect].host, "name", None)
# Put together a map of existing cinder volumes on the array
# so we can lookup cinder id's by purity volume names
@@ -1910,7 +2162,7 @@ class PureBaseVolumeDriver(san.SanDriver):
cinder_id = existing_vols.get(vol_name)
not_safe_msgs = []
host = connected_vols.get(vol_name)
in_pod = ("::" in vol_name)
in_pod = self._pod_check(array, vol_name)
is_deleted = pure_vols[pure_vol].destroyed
if host:
@@ -1920,7 +2172,7 @@ class PureBaseVolumeDriver(san.SanDriver):
not_safe_msgs.append(_('Volume already managed'))
if in_pod:
not_safe_msgs.append(_('Volume is in a Pod'))
not_safe_msgs.append(_('Volume is in a Replicated Pod'))
if is_deleted:
not_safe_msgs.append(_('Volume is deleted'))
@@ -2081,6 +2333,46 @@ class PureBaseVolumeDriver(san.SanDriver):
return REPLICATION_TYPE_ASYNC
return None
def _get_volume_type_extra_spec(self, type_id, spec_key,
possible_values=None,
default_value=None):
"""Get extra spec value.
If the spec value is not present in the input possible_values, then
default_value will be returned.
If the type_id is None, then default_value is returned.
The caller must not consider scope and the implementation adds/removes
scope. the scope used here is 'flasharray' e.g. key
'flasharray:vg_name' and so the caller must pass vg_name as an
input ignoring the scope.
:param type_id: volume type id
:param spec_key: extra spec key
:param possible_values: permitted values for the extra spec if known
:param default_value: default value for the extra spec incase of an
invalid value or if the entry does not exist
:return: extra spec value
"""
if not type_id:
return default_value
spec_key = ('flasharray:%s') % spec_key
spec_value = volume_types.get_volume_type_extra_specs(type_id).get(
spec_key, False)
if not spec_value:
LOG.debug("Returning default spec value: %s.", default_value)
return default_value
if possible_values is None:
return spec_value
if spec_value in possible_values:
LOG.debug("Returning spec value %s", spec_value)
return spec_value
LOG.debug("Invalid spec value: %s specified.", spec_value)
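The lookup semantics of `_get_volume_type_extra_spec` (scope prefix added
internally, default on a missing key, `None` on a value outside
`possible_values`) can be sketched against a plain dict; the real driver
fetches the spec dict via cinder's `volume_types` API:

```python
# Self-contained sketch of the scoped extra-spec lookup.
def get_extra_spec(specs, spec_key, possible_values=None, default_value=None):
    """Look up 'flasharray:<spec_key>' in a spec dict, with fallbacks."""
    value = specs.get('flasharray:%s' % spec_key)
    if not value:
        return default_value           # missing key: fall back to default
    if possible_values is None or value in possible_values:
        return value
    return None                        # invalid value: driver logs and bails
```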
def _get_qos_settings(self, volume_type):
"""Get extra_specs and qos_specs of a volume_type.
@@ -2106,16 +2398,18 @@ class PureBaseVolumeDriver(san.SanDriver):
if qos == {}:
return None
else:
# Chack set vslues are within limits
# Check set values are within limits
iops_qos = int(qos.get('maxIOPS', 0))
bw_qos = int(qos.get('maxBWS', 0)) * 1048576
if iops_qos != 0 and not (100 <= iops_qos <= 100000000):
msg = _('maxIOPS QoS error. Must be more than '
'100 and less than 100000000')
bw_qos = int(qos.get('maxBWS', 0)) * MIN_BWS
if iops_qos != 0 and not (MIN_IOPS <= iops_qos <= MAX_IOPS):
msg = (_('maxIOPS QoS error. Must be more than '
'%(min_iops)s and less than %(max_iops)s') %
{'min_iops': MIN_IOPS, 'max_iops': MAX_IOPS})
raise exception.InvalidQoSSpecs(message=msg)
if bw_qos != 0 and not (1048576 <= bw_qos <= 549755813888):
msg = _('maxBWS QoS error. Must be between '
'1 and 524288')
if bw_qos != 0 and not (MIN_BWS <= bw_qos <= MAX_BWS):
msg = (_('maxBWS QoS error. Must be between '
'%(min_bws)s and %(max_bws)s') %
{'min_bws': MIN_BWS, 'max_bws': MAX_BWS})
raise exception.InvalidQoSSpecs(message=msg)
qos['maxIOPS'] = iops_qos
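The range check above can be sketched in isolation; the constants below are taken from the ranges in the error messages (100-100000000 IOPS, 1 MB/s-512 GB/s expressed in bytes/s) and stand in for the driver's `MIN_IOPS`/`MAX_IOPS`/`MIN_BWS`/`MAX_BWS`:

```python
MIN_IOPS, MAX_IOPS = 100, 100000000
MIN_BWS, MAX_BWS = 1048576, 549755813888  # bytes/s: 1 MB/s .. 512 GB/s

def validate_qos(qos):
    """Sketch of the maxIOPS/maxBWS bounds check; raises on out-of-range."""
    iops = int(qos.get('maxIOPS', 0))
    # maxBWS is specified in MB/s and converted to bytes/s.
    bws = int(qos.get('maxBWS', 0)) * 1048576
    if iops != 0 and not (MIN_IOPS <= iops <= MAX_IOPS):
        raise ValueError('maxIOPS must be between %d and %d'
                         % (MIN_IOPS, MAX_IOPS))
    if bws != 0 and not (MIN_BWS <= bws <= MAX_BWS):
        raise ValueError('maxBWS must be between %d and %d'
                         % (MIN_BWS, MAX_BWS))
    return {'maxIOPS': iops, 'maxBWS': bws}
```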
@ -2142,8 +2436,15 @@ class PureBaseVolumeDriver(san.SanDriver):
repl_type = self._get_replication_type_from_vol_type(
volume.volume_type)
vgroup_type = self._get_volume_type_extra_spec(volume.volume_type_id,
'vg_name')
if repl_type in [REPLICATION_TYPE_SYNC, REPLICATION_TYPE_TRISYNC]:
base_name = self._replication_pod_name + "::" + base_name
if vgroup_type:
raise exception.InvalidVolumeType(
reason=_("Synchronously replicated volume group volumes "
"are not supported"))
else:
base_name = self._replication_pod_name + "::" + base_name
return base_name + "-cinder"
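The naming rule enforced above (sync/trisync replicated volumes are prefixed with the replication pod, and volume-group volumes may not be synchronously replicated) can be sketched as follows; the pod name and the lowercase replication-type strings are illustrative assumptions:

```python
def purity_vol_name(base_name, repl_type, vg_name=None,
                    pod_name='cinder-pod'):
    # Synchronously replicated volumes live in a pod ('pod::name');
    # volume-group volumes cannot be synchronously replicated.
    if repl_type in ('sync', 'trisync'):
        if vg_name:
            raise ValueError('Synchronously replicated volume group '
                             'volumes are not supported')
        base_name = pod_name + '::' + base_name
    return base_name + '-cinder'

print(purity_vol_name('volume-1234', 'sync'))   # cinder-pod::volume-1234-cinder
print(purity_vol_name('volume-1234', 'async'))  # volume-1234-cinder
```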
@ -2265,6 +2566,7 @@ class PureBaseVolumeDriver(san.SanDriver):
ERR_MSG_ALREADY_EXISTS in res.errors[0].message):
# Happens if the volume is already connected to the host.
# Treat this as a success.
ctxt.reraise = False
LOG.debug("Volume connection already exists for Purity "
"host with message: %s", res.errors[0].message)
@ -2309,6 +2611,8 @@ class PureBaseVolumeDriver(san.SanDriver):
prev_repl_type = None
new_repl_type = None
source_vg = False
target_vg = False
# See if the type specifies the replication type. If we know it is
# replicated but doesn't specify a type assume that it is async rep
@ -2381,10 +2685,82 @@ class PureBaseVolumeDriver(san.SanDriver):
self._get_current_array(), volume
)
current_array = self._get_current_array()
# Now check if we are retyping to/from a type with volume groups
if "/" in self._get_vol_name(volume):
source_vg = self._get_vol_name(volume).split('/')[0]
if self._get_volume_type_extra_spec(new_type['id'], 'vg_name'):
target_vg = self._get_volume_type_extra_spec(new_type['id'],
'vg_name')
if source_vg or target_vg:
if target_vg:
target_vg = INVALID_CHARACTERS.sub("-", target_vg)
vg_iops = self._get_volume_type_extra_spec(
new_type['id'],
'vg_maxIOPS',
default_value=MAX_IOPS)
vg_bws = self._get_volume_type_extra_spec(
new_type['id'],
'vg_maxBWS',
default_value=MAX_BWS)
if not (MIN_IOPS <= int(vg_iops) <= MAX_IOPS):
msg = (_('vg_maxIOPS QoS error. Must be more than '
'%(min_iops)s and less than %(max_iops)s') %
{'min_iops': MIN_IOPS, 'max_iops': MAX_IOPS})
raise exception.InvalidQoSSpecs(message=msg)
if not (MIN_BWS <= int(vg_bws) <= MAX_BWS):
msg = (_('vg_maxBWS QoS error. Must be more than '
'%(min_bws)s and less than %(max_bws)s') %
{'min_bws': MIN_BWS, 'max_bws': MAX_BWS})
raise exception.InvalidQoSSpecs(message=msg)
self._create_volume_group_if_not_exist(current_array,
target_vg,
vg_iops,
vg_bws)
current_array.patch_volumes(
names=[self._get_vol_name(volume)],
volume=flasharray.VolumePatch(
volume_group=flasharray.Reference(
name=target_vg)))
vol_name = self._get_vol_name(volume)
if source_vg:
target_vol_name = (target_vg +
"/" +
vol_name.split('/')[1])
else:
target_vol_name = (target_vg +
"/" +
vol_name)
model_update = {
'id': volume.id,
'provider_id': target_vol_name,
'metadata': {**volume.metadata,
'array_volume_name': target_vol_name,
'array_name': self._array.array_name}
}
# If we have emptied a VG by retyping out of it then delete VG
if source_vg:
self._delete_vgroup_if_empty(current_array, source_vg)
else:
current_array.patch_volumes(
names=[self._get_vol_name(volume)],
volume=flasharray.VolumePatch(
volume_group=flasharray.Reference(
name="")))
target_vol_name = self._get_vol_name(volume).split('/')[1]
model_update = {
'id': volume.id,
'provider_id': target_vol_name,
'metadata': {**volume.metadata,
'array_volume_name': target_vol_name,
'array_name': self._array.array_name}
}
if source_vg:
self._delete_vgroup_if_empty(current_array, source_vg)
return True, model_update
# If we are moving to a volume type with QoS settings then
# make sure the volume gets the correct new QoS settings.
# This could mean removing existing QoS settings.
current_array = self._get_current_array()
qos = self._get_qos_settings(new_type)
vol_name = self._generate_purity_vol_name(volume)
if qos is not None:
@ -2393,8 +2769,8 @@ class PureBaseVolumeDriver(san.SanDriver):
current_array.patch_volumes(names=[vol_name],
volume=flasharray.VolumePatch(
qos=flasharray.Qos(
iops_limit=100000000,
bandwidth_limit=549755813888)))
iops_limit=MAX_IOPS,
bandwidth_limit=MAX_BWS)))
return True, model_update
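The retype branch above moves a volume into or out of a volume group by rewriting its Purity name with a `<vgroup>/` prefix; a minimal sketch of just that name computation (a helper invented for illustration, not driver code):

```python
def retyped_vol_name(vol_name, target_vg=None):
    # Moving into a volume group: the Purity name becomes '<vg>/<vol>',
    # replacing any previous volume-group prefix.
    if target_vg:
        base = vol_name.split('/')[1] if '/' in vol_name else vol_name
        return target_vg + '/' + base
    # Moving out of a volume group: strip the '<vg>/' prefix.
    return vol_name.split('/')[-1]

print(retyped_vol_name('volume-1-cinder', 'vg-a'))       # vg-a/volume-1-cinder
print(retyped_vol_name('vg-a/volume-1-cinder', 'vg-b'))  # vg-b/volume-1-cinder
print(retyped_vol_name('vg-a/volume-1-cinder'))          # volume-1-cinder
```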
@ -2445,10 +2821,14 @@ class PureBaseVolumeDriver(san.SanDriver):
# Manager sets the active_backend to '' when secondary_id was default,
# but the driver failover_host method calls us with "default"
elif not active_backend_id or active_backend_id == 'default':
LOG.info('Failing back to %s', self._failed_over_primary_array)
self._swap_replication_state(current,
self._failed_over_primary_array,
failback=True)
if self._failed_over_primary_array is not None:
LOG.info('Failing back to %s', self._failed_over_primary_array)
self._swap_replication_state(current,
self._failed_over_primary_array,
failback=True)
else:
LOG.info('Failover did not occur - secondary array '
'cannot be the same as the primary')
else:
secondary = self._get_secondary(active_backend_id)
LOG.info('Failing over to %s', secondary.backend_id)
@ -2607,7 +2987,7 @@ class PureBaseVolumeDriver(san.SanDriver):
# part of the ActiveCluster and we need to reflect this in our
# capabilities.
self._is_active_cluster_enabled = False
self._is_replication_enabled = False
self._is_replication_enabled = True
if secondary_array.uniform:
if secondary_array in self._uniform_active_cluster_target_arrays:
@ -2787,13 +3167,16 @@ class PureBaseVolumeDriver(san.SanDriver):
snap_name = "%s:%s" % (source_array_name, pgroup_name)
LOG.debug("Looking for snap %(snap)s on array id %(array_id)s",
{"snap": snap_name, "array_id": target_array.array_id})
pg_snaps = list(
target_array.get_protection_group_snapshots_transfer(
names=[snap_name],
destroyed=False,
filter='progress="1.0"',
sort=["started-"]).items)
pg_snap = pg_snaps[0] if pg_snaps else None
try:
pg_snaps = list(
target_array.get_protection_group_snapshots_transfer(
names=[snap_name],
destroyed=False,
filter='progress="1.0"',
sort=["started-"]).items)
pg_snap = pg_snaps[0] if pg_snaps else None
except AttributeError:
pg_snap = None
LOG.debug("Selecting snapshot %(pg_snap)s for failover.",
{"pg_snap": pg_snap})
@ -2822,6 +3205,52 @@ class PureBaseVolumeDriver(san.SanDriver):
source_array.delete_pods(names=[name])
self._create_pod_if_not_exist(source_array, name)
@pure_driver_debug_trace
def _create_volume_group_if_not_exist(self,
source_array,
vgname,
vg_iops,
vg_bws):
res = source_array.post_volume_groups(
names=[vgname],
volume_group=flasharray.VolumeGroupPost(
qos=flasharray.Qos(
bandwidth_limit=vg_bws,
iops_limit=vg_iops)))
if res.status_code == 400:
with excutils.save_and_reraise_exception() as ctxt:
if ERR_MSG_ALREADY_EXISTS in res.errors[0].message:
# Happens if the vg already exists
ctxt.reraise = False
LOG.warning("Skipping creation of vg %s since it "
"already exists. Resetting QoS", vgname)
res = source_array.patch_volume_groups(
names=[vgname],
volume_group=flasharray.VolumeGroupPatch(
qos=flasharray.Qos(
bandwidth_limit=vg_bws,
iops_limit=vg_iops)))
if res.status_code == 400:
with excutils.save_and_reraise_exception() as ctxt:
if ERR_MSG_NOT_EXIST in res.errors[0].message:
ctxt.reraise = False
LOG.warning("Unable to change %(vgroup)s QoS, "
"error message: %(error)s",
{"vgroup": vgname,
"error": res.errors[0].message})
return
if list(source_array.get_volume_groups(
names=[vgname]).items)[0].destroyed:
ctxt.reraise = False
LOG.warning("Volume group %s is deleted but not"
" eradicated - will recreate.", vgname)
source_array.delete_volume_groups(names=[vgname])
self._create_volume_group_if_not_exist(source_array,
vgname,
vg_iops,
vg_bws)
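`_create_volume_group_if_not_exist` above follows a create-or-update pattern: POST the volume group, fall back to PATCHing its QoS if it already exists, and recreate it if it was deleted but not yet eradicated. A hedged sketch against a hypothetical in-memory stand-in for the array client:

```python
class FakeArray:
    """Hypothetical stand-in for the FlashArray REST client."""
    def __init__(self):
        self.vgroups = {}  # name -> {'qos': dict, 'destroyed': bool}

    def post_volume_group(self, name, qos):
        if name in self.vgroups:
            return 'already exists'
        self.vgroups[name] = {'qos': qos, 'destroyed': False}
        return 'ok'

def ensure_volume_group(array, name, qos):
    # Sketch of the create-or-reset-QoS pattern described above.
    if array.post_volume_group(name, qos) == 'already exists':
        if array.vgroups[name]['destroyed']:
            # Deleted but not eradicated: eradicate and recreate.
            del array.vgroups[name]
            return ensure_volume_group(array, name, qos)
        # Existing group: reset its QoS to match the volume type.
        array.vgroups[name]['qos'] = qos
    return array.vgroups[name]
```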
@pure_driver_debug_trace
def _create_protection_group_if_not_exist(self, source_array, pgname):
if not pgname:
@ -2948,6 +3377,21 @@ class PureBaseVolumeDriver(san.SanDriver):
LOG.debug('Creating volume %(vol)s from replicated snapshot '
'%(snap)s', {'vol': vol_name,
'snap': volume_snaps[snap].name})
if "/" in vol_name:
# We have to create the target vgroup with associated QoS
vg_iops = self._get_volume_type_extra_spec(
vol.volume_type_id,
'vg_maxIOPS',
default_value=MAX_IOPS)
vg_bws = self._get_volume_type_extra_spec(
vol.volume_type_id,
'vg_maxBWS',
default_value=MAX_BWS)
self._create_volume_group_if_not_exist(
secondary_array,
vol_name.split("/")[0],
int(vg_iops),
int(vg_bws))
if secondary_safemode:
secondary_array.post_volumes(
with_default_protection=False,
@ -2967,8 +3411,9 @@ class PureBaseVolumeDriver(san.SanDriver):
overwrite=True)
else:
LOG.debug('Ignoring unmanaged volume %(vol)s from replicated '
'snapshot %(snap)s.', {'vol': vol_name,
'snap': snap['name']})
'snapshot %(snap)s.',
{'vol': vol_name,
'snap': volume_snaps[snap].name})
# The only volumes remaining in the vol_names set have been left behind
# on the array and should be considered as being in an error state.
model_updates = []


@ -404,3 +404,18 @@ set in `cinder.conf`:
:config-target: Pure
cinder.volume.drivers.pure
Pure Storage-supported extra specs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Extra specs are associated with Block Storage volume types. When users request
volumes of a particular volume type, the volumes are created on storage
backends that meet the list of requirements. In the case of Pure Storage, these
vendor-specific extra specs can be used to bring all volumes of a specific
volume type into a construct known as a volume group. Additionally, the
storage quality of service limits can be applied to the volume group.
Use the specs in the following table to configure volume groups and
associate them with a volume type. Define Block Storage volume types by using
the :command:`openstack volume type set` command.
.. include:: ../../tables/manual/cinder-pure_storage_extraspecs.inc
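For example, a volume type could be tied to a volume group with QoS limits as follows (the type name, volume group name, and limit values are illustrative only):

```shell
openstack volume type create pure-vgroup
openstack volume type set --property flasharray:vg_name=tenant-vg pure-vgroup
openstack volume type set --property flasharray:vg_maxIOPS=50000 pure-vgroup
openstack volume type set --property flasharray:vg_maxBWS=8192 pure-vgroup
```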


@ -0,0 +1,16 @@
.. list-table:: Description of extra specs options for Pure Storage FlashArray
:header-rows: 1
* - Extra spec
- Type
- Description
* - ``flasharray:vg_name``
- String
- Specify the name of the volume group in which all volumes using this
volume type will be created.
* - ``flasharray:vg_maxIOPS``
- String
- Maximum number of IOPS allowed for the volume group. Range 100 - 100M
* - ``flasharray:vg_maxBWS``
- String
- Maximum bandwidth limit for the volume group. Range 1024 - 524288 (512GB/s)


@ -0,0 +1,8 @@
---
features:
- |
[Pure Storage] Volume Group support added through new vendor-specific volume type
extra-specs. Volume Groups can be used to isolate tenant volumes into their own area
in a FlashArray, and this volume group can have tenant-wide storage QoS for the
volume group. Full replication support is also available for volumes in volume groups
and existing volume group volumes (such as VMware vVols) can be managed directly.