Add subcloud secondary status support and migration
Definition of "Day-2": User can perform the operation post the initial deployment. Definition of "secondary": A secondary subcloud is just in the DB, will not do any other operations like management/sync. Update DB of subclouds add rehome_data column. Update "dcmanager subcloud add --secondary" to save data for day-2's rehome/migrate purpose. Update "dcmanager subcloud update --bootstrap-address --bootstrap-values" to save data for day-2's rehome/migrate purpose. Add "dcmanager subcloud migrate" for day-2's rehome/migrate Example of usage: dcmanager subcloud add --secondary --bootstrap-address \ 128.224.115.15 --bootstrap-values ./sub1-bootsrapvalues.yml dcmanager subcloud migrate sub1 --sysadmin-password PASSWORD EQUALS TO: dcmanager subcloud add --migrate --bootstrap-address \ 123.123.123.123 --bootstrap-values ./sub1-bootsrapvalues.yml \ --sysadmin-password password This commit updates the 'subcloud add' implementation to use the 'secondary' subcloud deployment operations. It saves rehome necessary data as JSON format into the 'subclouds' table's 'rehome_data' column in the DB. Additionally, it adds the 'migrate' subcommand for subcloud day-2's migration abilities. Test Plan: 1. PASS - Verify that 'secondary' option works for 'subcloud add' successfully. 2. PASS - Verify that 'bootstrap-values' could update subclouds' rehome_data through api. successfully. 3. PASS - Verify that 'bootstrap-address' could update subclouds' rehome_data through api. successfully. 4. PASS - Verify that 'migrate' command can migrate a 'secondary' subclouds successfully. 5. PASS - Verify original subcloud add/update functionalities successfully. 6. PASS - Verify 'subcloud add --secondary' can handle error And set secondary-failed successfully. 7. PASS - Verify delete a 'secondary-failed' subcloud successfully CLI example: dcmanager subcloud add --secondary --bootstrap-address 128.224.119.55 \ --bootstrap-values ./testsub.yml dcmanager subcloud update testsub --bootstrap-address 128.224.119.55 \ --bootstrap-values ./testsub.yml dcmanager subcloud migrate testsub --sysadmin-password PASSWORD API use case: PATCH /v1.0/subclouds/testsub/migrate Story: 2010852 Task: 48503 Task: 48484 Change-Id: I9a308a4e2cc5057091ba195c4d05e9d1eb4a950c Signed-off-by: Wang Tao <tao.wang@windriver.com>
@@ -139,6 +139,7 @@ serviceUnavailable (503)
   - management_start_address: management_start_ip
   - management_subnet: management_subnet
   - migrate: migrate
   - secondary: secondary
   - name: subcloud_name
   - release: release
   - sysadmin_password: sysadmin_password
@@ -226,6 +227,7 @@ This operation does not accept a request body.
   - management-end-ip: management_end_ip
   - management-subnet: management_subnet
   - management-gateway-ip: management_gateway_ip
   - rehome_data: rehome_data
   - created-at: created_at
   - updated-at: updated_at
   - data_install: data_install
@@ -289,6 +291,7 @@ This operation does not accept a request body.
   - management-subnet: management_subnet
   - management-gateway-ip: management_gateway_ip
   - oam_floating_ip: oam_floating_ip
   - rehome_data: rehome_data
   - created-at: created_at
   - updated-at: updated_at
   - data_install: data_install
@@ -327,6 +330,10 @@ The attributes of a subcloud which are modifiable:

-  management-end-ip

-  bootstrap_values

-  bootstrap_address

**Normal response codes**

200
@@ -352,6 +359,7 @@ serviceUnavailable (503)
   - management-end-ip: subcloud_management_end_ip
   - bootstrap-address: bootstrap_address
   - sysadmin-password: sysadmin_password
   - bootstrap-values: bootstrap_values_for_rehome

Request Example
----------------
@@ -729,6 +737,73 @@ Response Example
.. literalinclude:: samples/subclouds/subcloud-patch-update_status-response.json
   :language: json

*****************************************
Migrate a specific subcloud
*****************************************

.. rest_method:: PATCH /v1.0/subclouds/{subcloud}/migrate


**Normal response codes**

200

**Error response codes**

badRequest (400), unauthorized (401), forbidden (403), badMethod (405),
HTTPUnprocessableEntity (422), internalServerError (500),
serviceUnavailable (503)

**Request parameters**

.. rest_parameters:: parameters.yaml

   - subcloud: subcloud_uri
   - sysadmin_password: sysadmin_password

Request Example
----------------

.. literalinclude:: samples/subclouds/subcloud-patch-migrate-request.json
   :language: json

**Response parameters**

.. rest_parameters:: parameters.yaml

   - id: subcloud_id
   - group_id: group_id
   - name: subcloud_name
   - description: subcloud_description
   - location: subcloud_location
   - software-version: software_version
   - availability-status: availability_status
   - error-description: error_description
   - deploy-status: deploy_status
   - backup-status: backup_status
   - backup-datetime: backup_datetime
   - openstack-installed: openstack_installed
   - management-state: management_state
   - systemcontroller-gateway-ip: systemcontroller_gateway_ip
   - management-start-ip: management_start_ip
   - management-end-ip: management_end_ip
   - management-subnet: management_subnet
   - management-gateway-ip: management_gateway_ip
   - rehome_data: rehome_data
   - created-at: created_at
   - updated-at: updated_at
   - data_install: data_install
   - data_upgrade: data_upgrade
   - endpoint_sync_status: endpoint_sync_status
   - sync_status: sync_status
   - endpoint_type: sync_status_type

Response Example
----------------

.. literalinclude:: samples/subclouds/subcloud-patch-migrate-response.json
   :language: json

*****************************
Deletes a specific subcloud
*****************************
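For illustration, a minimal Python sketch of a client calling the migrate endpoint documented above; the dcmanager URL, port, and token handling are assumptions and not part of this change:

  import base64
  import requests

  # illustrative values; obtain the token and endpoint from your own keystone setup
  DCMANAGER_URL = 'http://192.168.204.2:8119'   # assumed dcmanager API endpoint
  TOKEN = '<keystone-token>'

  # the request body carries the base64-encoded sysadmin password,
  # matching the subcloud-patch-migrate-request.json sample
  body = {'sysadmin_password': base64.b64encode(b'My-Password1').decode('utf-8')}
  resp = requests.patch(
      f'{DCMANAGER_URL}/v1.0/subclouds/testsub/migrate',
      headers={'X-Auth-Token': TOKEN, 'Content-Type': 'application/json'},
      json=body,
  )
  print(resp.status_code, resp.json().get('deploy-status'))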
@@ -139,6 +139,14 @@ bootstrap_values:
  in: body
  required: true
  type: string
bootstrap_values_for_rehome:
  description: |
    The content of a file containing the bootstrap overrides such as subcloud
    name, management and OAM subnet. The sysadmin password of the subcloud.
    Must be base64 encoded.
  in: body
  required: false
  type: string
cloud_status:
  description: |
    The overall alarm status of the subcloud.
@@ -335,6 +343,12 @@ region_name:
  in: body
  required: true
  type: string
rehome_data:
  description: |
    JSON format data for rehoming a subcloud.
  in: body
  required: true
  type: string
release:
  description: |
    The subcloud software version.
@@ -348,6 +362,12 @@ restore_values:
  in: body
  required: true
  type: string
secondary:
  description: |
    A flag indicating if the subcloud is a secondary subcloud
  in: body
  required: false
  type: boolean
software_version:
  description: |
    The software version for the subcloud.
@@ -0,0 +1,3 @@
{
  "sysadmin_password": "XXXXXXX"
}
@@ -0,0 +1,26 @@
{
  "id": 20,
  "name": "testsub",
  "description": "des",
  "location": "CA",
  "software-version": "23.09",
  "management-state": "unmanaged",
  "availability-status": "offline",
  "deploy-status": "secondary",
  "backup-status": null,
  "backup-datetime": null,
  "error-description": "No errors present",
  "management-subnet": "192.168.97.0/24",
  "management-start-ip": "192.168.97.2",
  "management-end-ip": "192.168.97.200",
  "management-gateway-ip": "192.168.97.1",
  "openstack-installed": false,
  "systemcontroller-gateway-ip": "192.168.10.1",
  "data_install": null,
  "data_upgrade": null,
  "created-at": "2023-08-01 05:44:07.722249",
  "updated-at": "2023-08-01 05:47:32.950772",
  "group_id": 1,
  "peer_group_id": "123",
  "rehome_data": "{\"saved_payload\": {\"system_mode\": \"simplex\", \"name\": \"testsub\", \"description\": \"dddd\", \"location\": \"PEK SE Lab\", \"external_oam_subnet\": \"128.224.119.0/24\", \"external_oam_gateway_address\": \"128.224.119.1\", \"external_oam_floating_address\": \"128.224.119.55\", \"management_subnet\": \"192.168.97.0/24\", \"management_start_address\": \"192.168.97.2\", \"management_end_address\": \"192.168.97.200\", \"management_gateway_address\": \"192.168.97.1\", \"systemcontroller_gateway_address\": \"192.168.10.1\", \"docker_http_proxy\": \"http://147.11.252.42:9090\", \"docker_https_proxy\": \"http://147.11.252.42:9090\", \"docker_no_proxy\": [], \"bootstrap-address\": \"128.224.119.56\", \"software_version\": \"23.09\"}}"
}
@@ -186,7 +186,6 @@ class SubcloudGroupsController(restcomm.GenericPathController):
        except RemoteError as e:
            pecan.abort(httpclient.UNPROCESSABLE_ENTITY, e.value)
        except Exception as e:
            # TODO(abailey) add support for GROUP already exists (409)
            LOG.exception(e)
            pecan.abort(httpclient.INTERNAL_SERVER_ERROR,
                        _('Unable to create subcloud group'))
@@ -85,7 +85,7 @@ SUBCLOUD_REDEPLOY_GET_FILE_CONTENTS = [
]

BOOTSTRAP_VALUES_ADDRESSES = [
    'bootstrap-address', 'management_start_address', 'management_end_address',
    'bootstrap-address', 'bootstrap_address', 'management_start_address', 'management_end_address',
    'management_gateway_address', 'systemcontroller_gateway_address',
    'external_oam_gateway_address', 'external_oam_floating_address',
    'admin_start_address', 'admin_end_address', 'admin_gateway_address'
@@ -341,6 +341,38 @@ class SubcloudsController(object):
            else dccommon_consts.DEPLOY_CONFIG_UP_TO_DATE
        return sync_status

    def _validate_migrate(self, payload, subcloud):
        # Verify rehome data
        if not subcloud.rehome_data:
            LOG.exception("Unable to migrate subcloud %s, "
                          "required rehoming data is missing" % subcloud.name)
            pecan.abort(500, _("Unable to migrate subcloud %s, "
                               "required rehoming data is missing" % subcloud.name))
        rehome_data = json.loads(subcloud.rehome_data)
        if 'saved_payload' not in rehome_data:
            LOG.exception("Unable to migrate subcloud %s, "
                          "saved_payload is missing in rehoming data" % subcloud.name)
            pecan.abort(500, _("Unable to migrate subcloud %s, "
                               "saved_payload is missing in rehoming data" % subcloud.name))
        saved_payload = rehome_data['saved_payload']
        # Validate saved_payload
        if len(saved_payload) == 0:
            LOG.exception("Unable to migrate subcloud %s, "
                          "saved_payload is empty" % subcloud.name)
            pecan.abort(500, _("Unable to migrate subcloud %s, "
                               "saved_payload is empty" % subcloud.name))
        if 'bootstrap-address' not in saved_payload:
            LOG.exception("Unable to migrate subcloud %s, "
                          "bootstrap-address is missing in rehoming data" % subcloud.name)
            pecan.abort(500, _("Unable to migrate subcloud %s, "
                               "bootstrap-address is missing in rehoming data" % subcloud.name))
        # Validate sysadmin_password is in payload
        if 'sysadmin_password' not in payload:
            LOG.exception("Unable to migrate subcloud %s, "
                          "need sysadmin_password" % subcloud.name)
            pecan.abort(500, _("Unable to migrate subcloud %s, "
                               "need sysadmin_password" % subcloud.name))

    @staticmethod
    def _append_static_err_content(subcloud):
        err_dict = consts.ERR_MSG_DICT
@@ -505,7 +537,11 @@ class SubcloudsController(object):

        psd_common.validate_migrate_parameter(payload, request)

        psd_common.validate_sysadmin_password(payload)
        psd_common.validate_secondary_parameter(payload, request)

        # No need sysadmin_password when add a secondary subcloud
        if 'secondary' not in payload:
            psd_common.validate_sysadmin_password(payload)

        psd_common.pre_deploy_create(payload, context, request)

@@ -571,6 +607,9 @@ class SubcloudsController(object):
                                     SUBCLOUD_MANDATORY_NETWORK_PARAMS))

            if reconfigure_network:
                if utils.subcloud_is_secondary_state(subcloud.deploy_status):
                    pecan.abort(500, _("Cannot perform on %s "
                                       "state subcloud" % subcloud.deploy_status))
                system_controller_mgmt_pool = psd_common.get_network_address_pool()
                # Required parameters
                payload['name'] = subcloud.name
@@ -590,6 +629,8 @@ class SubcloudsController(object):
            group_id = payload.get('group_id')
            description = payload.get('description')
            location = payload.get('location')
            bootstrap_values = payload.get('bootstrap_values')
            bootstrap_address = payload.get('bootstrap_address')

            # Syntax checking
            if management_state and \
@@ -616,9 +657,8 @@ class SubcloudsController(object):
                    grp = db_api.subcloud_group_get_by_name(context,
                                                            group_id)
                    group_id = grp.id
                except exceptions.SubcloudGroupNameNotFound:
                    pecan.abort(400, _('Invalid group'))
                except exceptions.SubcloudGroupNotFound:
                except (exceptions.SubcloudGroupNameNotFound,
                        exceptions.SubcloudGroupNotFound):
                    pecan.abort(400, _('Invalid group'))

            if INSTALL_VALUES in payload:
@@ -634,7 +674,9 @@ class SubcloudsController(object):
                    context, subcloud_id, management_state=management_state,
                    description=description, location=location,
                    group_id=group_id, data_install=payload.get('data_install'),
                    force=force_flag)
                    force=force_flag,
                    bootstrap_values=bootstrap_values,
                    bootstrap_address=bootstrap_address)
                return subcloud
            except RemoteError as e:
                pecan.abort(422, e.value)
@@ -643,6 +685,9 @@ class SubcloudsController(object):
                LOG.exception(e)
                pecan.abort(500, _('Unable to update subcloud'))
        elif verb == 'reconfigure':
            if utils.subcloud_is_secondary_state(subcloud.deploy_status):
                pecan.abort(500, _("Cannot perform on %s "
                                   "state subcloud" % subcloud.deploy_status))
            payload = self._get_reconfig_payload(
                request, subcloud.name, subcloud.software_version)
            if not payload:
@@ -682,6 +727,9 @@ class SubcloudsController(object):
                LOG.exception("Unable to reconfigure subcloud %s" % subcloud.name)
                pecan.abort(500, _('Unable to reconfigure subcloud'))
        elif verb == "reinstall":
            if utils.subcloud_is_secondary_state(subcloud.deploy_status):
                pecan.abort(500, _("Cannot perform on %s "
                                   "state subcloud" % subcloud.deploy_status))
            psd_common.check_required_parameters(request,
                                                 SUBCLOUD_ADD_MANDATORY_FILE)

@@ -769,6 +817,9 @@ class SubcloudsController(object):
                LOG.exception("Unable to reinstall subcloud %s" % subcloud.name)
                pecan.abort(500, _('Unable to reinstall subcloud'))
        elif verb == "redeploy":
            if utils.subcloud_is_secondary_state(subcloud.deploy_status):
                pecan.abort(500, _("Cannot perform on %s "
                                   "state subcloud" % subcloud.deploy_status))
            config_file = psd_common.get_config_file_path(subcloud.name,
                                                          consts.DEPLOY_CONFIG)
            has_bootstrap_values = consts.BOOTSTRAP_VALUES in request.POST
@@ -839,6 +890,9 @@ class SubcloudsController(object):
            res = self.updatestatus(subcloud.name)
            return res
        elif verb == 'prestage':
            if utils.subcloud_is_secondary_state(subcloud.deploy_status):
                pecan.abort(500, _("Cannot perform on %s "
                                   "state subcloud" % subcloud.deploy_status))
            payload = self._get_prestage_payload(request)
            payload['subcloud_name'] = subcloud.name
            try:
@@ -871,6 +925,29 @@ class SubcloudsController(object):
            except Exception:
                LOG.exception("Unable to prestage subcloud %s" % subcloud.name)
                pecan.abort(500, _('Unable to prestage subcloud'))
        elif verb == 'migrate':
            try:
                # Reject if not in secondary/rehome-failed/rehome-prep-failed state
                if subcloud.deploy_status not in [consts.DEPLOY_STATE_SECONDARY,
                                                  consts.DEPLOY_STATE_REHOME_FAILED,
                                                  consts.DEPLOY_STATE_REHOME_PREP_FAILED]:
                    LOG.exception("Unable to migrate subcloud %s, "
                                  "must be in secondary or rehome failure state" % subcloud.name)
                    pecan.abort(400, _("Unable to migrate subcloud %s, "
                                       "must be in secondary or rehome failure state" %
                                       subcloud.name))
                payload = json.loads(request.body)
                self._validate_migrate(payload, subcloud)

                # Call migrate
                self.dcmanager_rpc_client.migrate_subcloud(context, subcloud.id, payload)
                return db_api.subcloud_db_model_to_dict(subcloud)
            except RemoteError as e:
                pecan.abort(422, e.value)
            except Exception:
                LOG.exception(
                    "Unable to migrate subcloud %s" % subcloud.name)
                pecan.abort(500, _('Unable to migrate subcloud'))

    @utils.synchronized(LOCK_NAME)
    @index.when(method='delete', template='json')
@@ -84,6 +84,10 @@ subclouds_rules = [
        {
            'method': 'PATCH',
            'path': '/v1.0/subclouds/{subcloud}/update_status'
        },
        {
            'method': 'PATCH',
            'path': '/v1.0/subclouds/{subcloud}/migrate'
        }
    ]
)
@@ -216,6 +216,8 @@ DEPLOY_STATE_PRE_REHOME = 'pre-rehome'
DEPLOY_STATE_REHOMING = 'rehoming'
DEPLOY_STATE_REHOME_FAILED = 'rehome-failed'
DEPLOY_STATE_REHOME_PREP_FAILED = 'rehome-prep-failed'
DEPLOY_STATE_SECONDARY = 'secondary'
DEPLOY_STATE_SECONDARY_FAILED = 'secondary-failed'
DEPLOY_STATE_DONE = 'complete'
DEPLOY_STATE_RECONFIGURING_NETWORK = 'reconfiguring-network'
DEPLOY_STATE_RECONFIGURING_NETWORK_FAILED = 'network-reconfiguration-failed'
@@ -179,7 +179,8 @@ class CertificateUploadError(DCManagerException):


class LicenseInstallError(DCManagerException):
    message = _("Error while installing license on subcloud: %(subcloud_id)s. %(error_message)s")
    message = _("Error while installing license on subcloud: "
                "%(subcloud_id)s. %(error_message)s")


class LicenseMissingError(DCManagerException):
@@ -163,6 +163,21 @@ def validate_migrate_parameter(payload, request):
                               'not allowed'))


def validate_secondary_parameter(payload, request):
    secondary_str = payload.get('secondary')
    migrate_str = payload.get('migrate')
    if secondary_str is not None:
        if secondary_str not in ["true", "false"]:
            pecan.abort(400, _('The secondary option is invalid, '
                               'valid options are true and false.'))
        if consts.DEPLOY_CONFIG in request.POST:
            pecan.abort(400, _('secondary with deploy-config is '
                               'not allowed'))
        if migrate_str is not None:
            pecan.abort(400, _('secondary with migrate is '
                               'not allowed'))


def validate_subcloud_config(context, payload, operation=None,
                             ignore_conflicts_with=None):
    """Check whether subcloud config is valid."""
@@ -1094,3 +1094,15 @@ def update_abort_status(context, subcloud_id, deploy_status, abort_failed=False)
    updated_subcloud = db_api.subcloud_update(context, subcloud_id,
                                              deploy_status=new_deploy_status)
    return updated_subcloud


def subcloud_is_secondary_state(deploy_state):
    if deploy_state in [consts.DEPLOY_STATE_SECONDARY,
                        consts.DEPLOY_STATE_SECONDARY_FAILED]:
        return True
    return False


def create_subcloud_rehome_data_template():
    """Create a subcloud rehome data template"""
    return {'saved_payload': {}}
@@ -123,7 +123,8 @@ def subcloud_db_model_to_dict(subcloud):
              "data_upgrade": subcloud.data_upgrade,
              "created-at": subcloud.created_at,
              "updated-at": subcloud.updated_at,
              "group_id": subcloud.group_id}
              "group_id": subcloud.group_id,
              "rehome_data": subcloud.rehome_data}
    return result


@@ -182,7 +183,8 @@ def subcloud_update(context, subcloud_id, management_state=None,
                    openstack_installed=None, group_id=None,
                    data_install=None, data_upgrade=None,
                    first_identity_sync_complete=None,
                    systemcontroller_gateway_ip=None):
                    systemcontroller_gateway_ip=None,
                    rehome_data=None):
    """Update a subcloud or raise if it does not exist."""
    return IMPL.subcloud_update(context, subcloud_id, management_state,
                                availability_status, software_version,
@@ -192,7 +194,7 @@ def subcloud_update(context, subcloud_id, management_state=None,
                                backup_datetime, error_description, openstack_installed,
                                group_id, data_install, data_upgrade,
                                first_identity_sync_complete,
                                systemcontroller_gateway_ip)
                                systemcontroller_gateway_ip, rehome_data)


def subcloud_bulk_update_by_ids(context, subcloud_ids, update_form):
@@ -391,7 +391,8 @@ def subcloud_update(context, subcloud_id, management_state=None,
                    data_install=None,
                    data_upgrade=None,
                    first_identity_sync_complete=None,
                    systemcontroller_gateway_ip=None):
                    systemcontroller_gateway_ip=None,
                    rehome_data=None):
    with write_session() as session:
        subcloud_ref = subcloud_get(context, subcloud_id)
        if management_state is not None:
@@ -435,6 +436,8 @@ def subcloud_update(context, subcloud_id, management_state=None,
        if systemcontroller_gateway_ip is not None:
            subcloud_ref.systemcontroller_gateway_ip = \
                systemcontroller_gateway_ip
        if rehome_data is not None:
            subcloud_ref.rehome_data = rehome_data
        subcloud_ref.save(session)
        return subcloud_ref

@@ -0,0 +1,21 @@
# Copyright (c) 2023 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#

import sqlalchemy

ENGINE = 'InnoDB',
CHARSET = 'utf8'


def upgrade(migrate_engine):
    meta = sqlalchemy.MetaData(bind=migrate_engine)

    subclouds = sqlalchemy.Table('subclouds', meta, autoload=True)
    # Add the 'rehome_data' column to the subclouds table.
    subclouds.create_column(sqlalchemy.Column('rehome_data', sqlalchemy.Text))


def downgrade(migrate_engine):
    raise NotImplementedError('Database downgrade is unsupported.')
@@ -138,6 +138,7 @@ class Subcloud(BASE, DCManagerBase):
    systemcontroller_gateway_ip = Column(String(255))
    audit_fail_count = Column(Integer)
    first_identity_sync_complete = Column(Boolean, default=False)
    rehome_data = Column(Text())

    # multiple subclouds can be in a particular group
    group_id = Column(Integer,
@@ -113,7 +113,9 @@ class DCManagerService(service.Service):
    @request_context
    def update_subcloud(self, context, subcloud_id, management_state=None,
                        description=None, location=None,
                        group_id=None, data_install=None, force=None):
                        group_id=None, data_install=None, force=None,
                        deploy_status=None,
                        bootstrap_values=None, bootstrap_address=None):
        # Updates a subcloud
        LOG.info("Handling update_subcloud request for: %s" % subcloud_id)
        subcloud = self.subcloud_manager.update_subcloud(context, subcloud_id,
@@ -122,7 +124,10 @@ class DCManagerService(service.Service):
                                                         location,
                                                         group_id,
                                                         data_install,
                                                         force)
                                                         force,
                                                         deploy_status,
                                                         bootstrap_values,
                                                         bootstrap_address)
        return subcloud

    @request_context
@@ -247,6 +252,12 @@ class DCManagerService(service.Service):
                                                          subcloud_id,
                                                          deploy_status)

    @request_context
    def migrate_subcloud(self, context, subcloud_ref, payload):
        LOG.info("Handling migrate_subcloud request for: %s",
                 subcloud_ref)
        return self.subcloud_manager.migrate_subcloud(context, subcloud_ref, payload)

    @request_context
    def subcloud_deploy_resume(self, context, subcloud_id, subcloud_name,
                               payload, deploy_states_to_run):
@@ -16,7 +16,9 @@
#
from __future__ import division

import base64
import collections
import copy
import datetime
import filecmp
import functools
@@ -35,6 +37,7 @@ from oslo_log import log as logging
from oslo_messaging import RemoteError
from tsconfig.tsconfig import CONFIG_PATH
from tsconfig.tsconfig import SW_VERSION
import yaml

from dccommon import consts as dccommon_consts
from dccommon.drivers.openstack.sdk_platform import OpenStackDriver
@@ -331,6 +334,55 @@ class SubcloudManager(manager.Manager):
            dccommon_consts.ANSIBLE_OVERRIDES_PATH, subcloud_name)]
        return rehome_command

    def migrate_subcloud(self, context, subcloud_ref, payload):
        '''migrate_subcloud function is for day-2's rehome purpose.

        This is called by 'dcmanager subcloud migrate <subcloud>'.
        This function is used to migrate those 'secondary' subcloud.

        :param context: request context object
        :param subcloud_ref: id or name of the subcloud
        :param payload: subcloud configuration
        '''
        subcloud = None
        try:
            # subcloud_ref could be int type id.
            subcloud = utils.subcloud_get_by_ref(context, str(subcloud_ref))
            if not subcloud:
                LOG.exception("Failed to migrate, non-existent subcloud %s" % subcloud_ref)
                raise Exception("Failed to migrate, non-existent subcloud %s" % subcloud_ref)
            if 'sysadmin_password' not in payload:
                raise Exception("Failed to migrate subcloud: %s, must provide sysadmin_password" %
                                subcloud.name)

            if subcloud.deploy_status not in [consts.DEPLOY_STATE_SECONDARY,
                                              consts.DEPLOY_STATE_REHOME_FAILED,
                                              consts.DEPLOY_STATE_REHOME_PREP_FAILED]:
                raise Exception("Failed to migrate subcloud: %s, "
                                "must be in secondary or rehome failure state" %
                                subcloud.name)

            rehome_data = json.loads(subcloud.rehome_data)
            saved_payload = rehome_data['saved_payload']
            # Update sysadmin_password/ansible_ssh_pass
            sysadmin_password = base64.b64decode(payload['sysadmin_password']).decode('utf-8')
            saved_payload['sysadmin_password'] = sysadmin_password
            saved_payload['ansible_ssh_pass'] = sysadmin_password

            # Re-generate ansible config based on latest rehome_data
            subcloud = self.subcloud_migrate_generate_ansible_config(
                context, subcloud.id,
                saved_payload)
            self.rehome_subcloud(context, subcloud, saved_payload)
        except Exception:
            # If we failed to migrate the subcloud, update the
            # deployment status
            if subcloud:
                LOG.exception("Failed to migrate subcloud %s" % subcloud.name)
                db_api.subcloud_update(
                    context, subcloud.id,
                    deploy_status=consts.DEPLOY_STATE_REHOME_FAILED)

    def rehome_subcloud(self, context, subcloud, payload):
        # Ansible inventory filename for the specified subcloud
        ansible_subcloud_inventory_file = self._get_ansible_filename(
@@ -354,12 +406,17 @@ class SubcloudManager(manager.Manager):
        LOG.info(f"Adding subcloud {payload['name']}.")

        rehoming = payload.get('migrate', '').lower() == "true"
        secondary = (payload.get('secondary', '').lower() == "true")

        # Create the subcloud
        subcloud = self.subcloud_deploy_create(context, subcloud_id,
                                               payload, rehoming,
                                               return_as_dict=False)

        # return if 'secondary' subcloud
        if secondary:
            return

        # Return if create failed
        if rehoming:
            success_state = consts.DEPLOY_STATE_PRE_REHOME
@@ -825,6 +882,70 @@ class SubcloudManager(manager.Manager):
            self.run_deploy_phases(context, subcloud_id, payload,
                                   deploy_states_to_run)

    def subcloud_migrate_generate_ansible_config(self, context, subcloud_id, payload):
        """Generate latest ansible config based on given payload for day-2 rehoming purpose.

        :param context: request context object
        :param subcloud_id: subcloud_id from db
        :param payload: subcloud configuration
        :return: resulting subcloud DB object
        """
        LOG.info("Generate subcloud %s ansible config." % payload['name'])

        deploy_state = consts.DEPLOY_STATE_PRE_REHOME
        subcloud = db_api.subcloud_update(
            context, subcloud_id,
            deploy_status=deploy_state)

        try:
            # Write ansible based on rehome_data
            m_ks_client = OpenStackDriver(
                region_name=dccommon_consts.DEFAULT_REGION_NAME,
                region_clients=None).keystone_client
            endpoint = m_ks_client.endpoint_cache.get_endpoint('sysinv')
            sysinv_client = SysinvClient(dccommon_consts.DEFAULT_REGION_NAME,
                                         m_ks_client.session,
                                         endpoint=endpoint)
            LOG.debug("Getting cached regionone data for %s" % subcloud.name)
            cached_regionone_data = self._get_cached_regionone_data(
                m_ks_client, sysinv_client)

            self._populate_payload_with_cached_keystone_data(
                cached_regionone_data, payload, populate_passwords=False)

            payload['users'] = {}
            for user in USERS_TO_REPLICATE:
                payload['users'][user] = \
                    str(keyring.get_password(
                        user, dccommon_consts.SERVICES_USER_NAME))

            # Ansible inventory filename for the specified subcloud
            ansible_subcloud_inventory_file = utils.get_ansible_filename(
                subcloud.name, INVENTORY_FILE_POSTFIX)

            # Create the ansible inventory for the new subcloud
            utils.create_subcloud_inventory(payload,
                                            ansible_subcloud_inventory_file)

            # create subcloud intermediate certificate and pass in keys
            self._create_intermediate_ca_cert(payload)

            # Write this subclouds overrides to file
            # NOTE: This file should not be deleted if subcloud migrate fails
            # as it is used for debugging
            self._write_subcloud_ansible_config(cached_regionone_data, payload)

            return subcloud

        except Exception:
            LOG.exception("Failed to generate subcloud %s config" % payload['name'])
            # If we failed to generate the subcloud config, update the deployment status
            deploy_state = consts.DEPLOY_STATE_REHOME_PREP_FAILED
            subcloud = db_api.subcloud_update(
                context, subcloud_id,
                deploy_status=deploy_state)
            return subcloud

    def subcloud_deploy_create(self, context, subcloud_id, payload,
                               rehoming=False, return_as_dict=True):
        """Create subcloud and notify orchestrators.
@@ -838,8 +959,17 @@ class SubcloudManager(manager.Manager):
        """
        LOG.info("Creating subcloud %s." % payload['name'])

        # cache original payload data for day-2's rehome usage
        original_payload = copy.deepcopy(payload)

        # Check the secondary option from payload
        secondary_str = payload.get('secondary', '')
        secondary = (secondary_str.lower() == 'true')

        if rehoming:
            deploy_state = consts.DEPLOY_STATE_PRE_REHOME
        elif secondary:
            deploy_state = consts.DEPLOY_STATE_SECONDARY
        else:
            deploy_state = consts.DEPLOY_STATE_CREATING

@@ -847,6 +977,7 @@ class SubcloudManager(manager.Manager):
            context, subcloud_id,
            deploy_status=deploy_state)

        rehome_data = None
        try:
            # Create a new route to this subcloud on the management interface
            # on both controllers.
@@ -960,7 +1091,22 @@ class SubcloudManager(manager.Manager):
            # as it is used for debugging
            self._write_subcloud_ansible_config(cached_regionone_data, payload)

            if not rehoming:
            # To add a 'secondary' subcloud, save payload into DB
            # for day-2's migrate purpose.
            if secondary:
                # remove unused parameters
                if 'secondary' in original_payload:
                    del original_payload['secondary']
                if 'ansible_ssh_pass' in original_payload:
                    del original_payload['ansible_ssh_pass']
                if 'sysadmin_password' in original_payload:
                    del original_payload['sysadmin_password']
                bootstrap_info = utils.create_subcloud_rehome_data_template()
                bootstrap_info['saved_payload'] = original_payload
                rehome_data = json.dumps(bootstrap_info)
                deploy_state = consts.DEPLOY_STATE_SECONDARY

            if not rehoming and not secondary:
                deploy_state = consts.DEPLOY_STATE_CREATED

        except Exception:
@@ -969,12 +1115,15 @@ class SubcloudManager(manager.Manager):

            if rehoming:
                deploy_state = consts.DEPLOY_STATE_REHOME_PREP_FAILED
            elif secondary:
                deploy_state = consts.DEPLOY_STATE_SECONDARY_FAILED
            else:
                deploy_state = consts.DEPLOY_STATE_CREATE_FAILED

        subcloud = db_api.subcloud_update(
            context, subcloud.id,
            deploy_status=deploy_state)
            deploy_status=deploy_state,
            rehome_data=rehome_data)

        # The RPC call must return the subcloud as a dictionary, otherwise it
        # should return the DB object for dcmanager internal use (subcloud add)
@@ -2145,7 +2294,10 @@ class SubcloudManager(manager.Manager):
                        location=None,
                        group_id=None,
                        data_install=None,
                        force=None):
                        force=None,
                        deploy_status=None,
                        bootstrap_values=None,
                        bootstrap_address=None):
        """Update subcloud and notify orchestrators.

        :param context: request context object
@@ -2156,6 +2308,9 @@ class SubcloudManager(manager.Manager):
        :param group_id: new subcloud group id
        :param data_install: subcloud install values
        :param force: force flag
        :param deploy_status: update to expected deploy status
        :param bootstrap_values: bootstrap_values yaml content
        :param bootstrap_address: oam IP for rehome
        """

        LOG.info("Updating subcloud %s." % subcloud_id)
@@ -2195,6 +2350,79 @@ class SubcloudManager(manager.Manager):
            LOG.error("Invalid management_state %s" % management_state)
            raise exceptions.InternalError()

        # update bootstrap values into rehome_data
        rehome_data_dict = None
        # load the existing data if it exists
        if subcloud.rehome_data:
            rehome_data_dict = json.loads(subcloud.rehome_data)
        # update saved_payload with the bootstrap values
        if bootstrap_values:
            _bootstrap_address = None
            if not rehome_data_dict:
                rehome_data_dict = utils.create_subcloud_rehome_data_template()
            else:
                # Since bootstrap-address is not original data in bootstrap-values
                # it's necessary to save it first, then put it back after
                # after bootstrap_values is updated.
                if 'bootstrap-address' in rehome_data_dict['saved_payload']:
                    _bootstrap_address = rehome_data_dict['saved_payload']['bootstrap-address']
            bootstrap_values_dict = yaml.load(bootstrap_values, Loader=yaml.SafeLoader)
            rehome_data_dict['saved_payload'] = bootstrap_values_dict
            # put bootstrap_address back into rehome_data_dict
            if _bootstrap_address:
                rehome_data_dict['saved_payload']['bootstrap-address'] = _bootstrap_address

        # update deploy status, ONLY apply for unmanaged subcloud
        new_deploy_status = None
        if deploy_status is not None:
            if subcloud.management_state != dccommon_consts.MANAGEMENT_UNMANAGED:
                raise exceptions.BadRequest(
                    resource='subcloud',
                    msg='deploy_status can only be updated on unmanaged subcloud')
            new_deploy_status = deploy_status
            # set all endpoint statuses to unknown
            # no endpoint will be audited for secondary
            # subclouds
            self.state_rpc_client.update_subcloud_endpoint_status_sync(
                context,
                subcloud_name=subcloud.name,
                endpoint_type=None,
                sync_status=dccommon_consts.SYNC_STATUS_UNKNOWN)

            # clear existing fault alarm of secondary subcloud
            for alarm_id, entity_instance_id in (
                    (fm_const.FM_ALARM_ID_DC_SUBCLOUD_OFFLINE,
                     "subcloud=%s" % subcloud.name),
                    (fm_const.FM_ALARM_ID_DC_SUBCLOUD_RESOURCE_OUT_OF_SYNC,
                     "subcloud=%s.resource=%s" %
                     (subcloud.name, dccommon_consts.ENDPOINT_TYPE_DC_CERT)),
                    (fm_const.FM_ALARM_ID_DC_SUBCLOUD_BACKUP_FAILED,
                     "subcloud=%s" % subcloud.name)):
                try:
                    fault = self.fm_api.get_fault(alarm_id,
                                                  entity_instance_id)
                    if fault:
                        self.fm_api.clear_fault(alarm_id,
                                                entity_instance_id)
                except Exception as e:
                    LOG.info(
                        "Failed to clear fault for subcloud %s, alarm_id=%s" %
                        (subcloud.name, alarm_id))
                    LOG.exception(e)

        # update bootstrap_address
        if bootstrap_address:
            if rehome_data_dict is None:
                raise exceptions.BadRequest(
                    resource='subcloud',
                    msg='Cannot update bootstrap_address into rehome data, '
                        'need to import bootstrap_values first')
            rehome_data_dict['saved_payload']['bootstrap-address'] = bootstrap_address

        if rehome_data_dict:
            rehome_data = json.dumps(rehome_data_dict)
        else:
            rehome_data = None
        subcloud = db_api.subcloud_update(
            context,
            subcloud_id,
@@ -2202,7 +2430,9 @@ class SubcloudManager(manager.Manager):
            description=description,
            location=location,
            group_id=group_id,
            data_install=data_install
            data_install=data_install,
            deploy_status=new_deploy_status,
            rehome_data=rehome_data
        )

        # Inform orchestrators that subcloud has been updated
@@ -135,7 +135,8 @@ class ManagerClient(RPCClient):

    def update_subcloud(self, ctxt, subcloud_id, management_state=None,
                        description=None, location=None, group_id=None,
                        data_install=None, force=None):
                        data_install=None, force=None,
                        deploy_status=None, bootstrap_values=None, bootstrap_address=None):
        return self.call(ctxt, self.make_msg('update_subcloud',
                                             subcloud_id=subcloud_id,
                                             management_state=management_state,
@@ -143,7 +144,10 @@ class ManagerClient(RPCClient):
                                             location=location,
                                             group_id=group_id,
                                             data_install=data_install,
                                             force=force))
                                             force=force,
                                             deploy_status=deploy_status,
                                             bootstrap_values=bootstrap_values,
                                             bootstrap_address=bootstrap_address))

    def update_subcloud_with_network_reconfig(self, ctxt, subcloud_id, payload):
        return self.cast(ctxt, self.make_msg('update_subcloud_with_network_reconfig',
@@ -230,6 +234,11 @@ class ManagerClient(RPCClient):
                                             payload=payload,
                                             deploy_states_to_run=deploy_states_to_run))

    def migrate_subcloud(self, ctxt, subcloud_ref, payload):
        return self.cast(ctxt, self.make_msg('migrate_subcloud',
                                             subcloud_ref=subcloud_ref,
                                             payload=payload))


class DCManagerNotifications(RPCClient):
    """DC Manager Notification interface to broadcast subcloud state changed
@@ -294,7 +294,9 @@ class SubcloudStateManager(manager.Manager):

        # Rules for updating sync status:
        #
        # Always update if not in-sync.
        # Skip audit any 'secondary' state subclouds
        #
        # For others, always update if not in-sync.
        #
        # Otherwise, only update the sync status if managed and online
        # (unless dc-cert).
@@ -306,10 +308,11 @@ class SubcloudStateManager(manager.Manager):
        # This means if a subcloud is going offline or unmanaged, then
        # the sync status update must be done first.
        #
        if (sync_status != dccommon_consts.SYNC_STATUS_IN_SYNC or
        if ((sync_status != dccommon_consts.SYNC_STATUS_IN_SYNC or
                ((subcloud.availability_status == dccommon_consts.AVAILABILITY_ONLINE) and
                 (subcloud.management_state == dccommon_consts.MANAGEMENT_MANAGED
                  or endpoint_type == dccommon_consts.ENDPOINT_TYPE_DC_CERT))):
                  or endpoint_type == dccommon_consts.ENDPOINT_TYPE_DC_CERT))) and
                subcloud.deploy_status != consts.DEPLOY_STATE_SECONDARY):
            # update a single subcloud
            try:
                self._do_update_subcloud_endpoint_status(context,
@@ -1233,7 +1233,9 @@ class TestSubcloudAPIOther(testroot.DCManagerApiTest):
            location=None,
            group_id=None,
            data_install=json.dumps(install_data),
            force=None)
            force=None,
            bootstrap_values=None,
            bootstrap_address=None)
        self.assertEqual(response.status_int, 200)

    @mock.patch.object(psd_common, 'get_network_address_pool')
@@ -1304,7 +1306,9 @@ class TestSubcloudAPIOther(testroot.DCManagerApiTest):
            location=None,
            group_id=None,
            data_install=json.dumps(install_data),
            force=None)
            force=None,
            bootstrap_values=None,
            bootstrap_address=None)
        self.assertEqual(response.status_int, 200)

    @mock.patch.object(subclouds.SubcloudsController, '_get_patch_data')
@@ -1341,7 +1345,9 @@ class TestSubcloudAPIOther(testroot.DCManagerApiTest):
            location=None,
            group_id=None,
            data_install=json.dumps(install_data),
            force=None)
            force=None,
            bootstrap_values=None,
            bootstrap_address=None)
        self.assertEqual(response.status_int, 200)

    @mock.patch.object(subclouds.SubcloudsController, '_get_patch_data')
@@ -1404,7 +1410,9 @@ class TestSubcloudAPIOther(testroot.DCManagerApiTest):
            location=None,
            group_id=None,
            data_install=None,
            force=True)
            force=True,
            bootstrap_values=None,
            bootstrap_address=None)
        self.assertEqual(response.status_int, 200)

    @mock.patch.object(subclouds.SubcloudsController, '_get_reconfig_payload')
@@ -97,7 +97,7 @@ class TestDCManagerService(base.DCManagerTestCase):
            self.context, subcloud_id=1,
            management_state='testmgmtstatus')
        mock_subcloud_manager().update_subcloud.assert_called_once_with(
            self.context, 1, 'testmgmtstatus', None, None, None, None, None)
            self.context, 1, 'testmgmtstatus', None, None, None, None, None, None, None, None)

    @mock.patch.object(service, 'SubcloudManager')
    @mock.patch.object(service, 'rpc_messaging')
@@ -2747,3 +2747,140 @@ class TestSubcloudManager(base.DCManagerTestCase):
        updated_subcloud = db_api.subcloud_get_by_name(self.ctx, subcloud.name)
        self.assertEqual(consts.DEPLOY_STATE_PRE_RESTORE,
                         updated_subcloud.deploy_status)

    @mock.patch.object(subcloud_manager, 'db_api', side_effect=db_api)
    @mock.patch.object(subcloud_manager.SubcloudManager,
                       'subcloud_migrate_generate_ansible_config')
    @mock.patch.object(subcloud_manager.SubcloudManager, 'rehome_subcloud')
    def test_migrate_subcloud(self, mock_rehome_subcloud,
                              mock_subcloud_migrate_generate_ansible_config,
                              mock_db_api):
        # Prepare the test data
        subcloud = self.create_subcloud_static(self.ctx)
        saved_payload = {
            "name": subcloud.name,
            "deploy_status": "secondary",
            "rehome_data": '{"saved_payload": {"system_mode": "simplex",\
                "name": "testsub", "bootstrap-address": "128.224.119.56"}}',
        }
        payload = {
            "sysadmin_password": "TGk2OW51eA=="
        }
        payload_result = {
            "name": subcloud.name,
            "deploy_status": "secondary",
            "rehome_data": {
                "saved_payload": {
                    "system_mode": "simplex",
                    "name": "testsub",
                    "bootstrap-address": "128.224.119.56",
                    "sysadmin_password": "Li69nux",
                    "ansible_ssh_pass": "Li69nux",
                }
            },
        }
        sm = subcloud_manager.SubcloudManager()
        db_api.subcloud_update(self.ctx, subcloud.id,
                               deploy_status=consts.DEPLOY_STATE_SECONDARY,
                               rehome_data=saved_payload['rehome_data'])
        sm.migrate_subcloud(self.ctx, subcloud.id, payload)

        mock_subcloud_migrate_generate_ansible_config.assert_called_once_with(
            mock.ANY, mock.ANY, payload_result['rehome_data']['saved_payload'])
        mock_rehome_subcloud.assert_called_once_with(
            mock.ANY, mock.ANY, payload_result['rehome_data']['saved_payload'])

        self.assertFalse(mock_db_api.subcloud_update.called)

    @mock.patch.object(subcloud_manager.SubcloudManager, 'subcloud_deploy_create')
    @mock.patch.object(subcloud_manager.SubcloudManager, 'rehome_subcloud')
    @mock.patch.object(subcloud_manager.SubcloudManager, 'run_deploy_phases')
    @mock.patch.object(subcloud_manager, 'db_api')
    def test_add_subcloud_with_secondary_option(self, mock_db_api,
                                                mock_run_deploy_phases,
                                                mock_rehome_subcloud,
                                                mock_subcloud_deploy_create):
        # Prepare the test data
        values = {
            'name': 'TestSubcloud',
            'sysadmin_password': '123',
            'secondary': 'true'
        }

        # Create an instance of SubcloudManager
        sm = subcloud_manager.SubcloudManager()

        # Call add_subcloud method with the test data
        sm.add_subcloud(mock.MagicMock(), 1, values)

        # Assert that the rehome_subcloud and run_deploy_phases methods were not called
        mock_rehome_subcloud.assert_not_called()
        mock_run_deploy_phases.assert_not_called()

        mock_subcloud_deploy_create.assert_called_once()

        # Assert that db_api.subcloud_update was not called for secondary subcloud
        self.assertFalse(mock_db_api.subcloud_update.called)

    def test_update_subcloud_bootstrap_values(self):

        fake_bootstrap_values = "{'name': 'TestSubcloud', 'system_mode': 'simplex'}"
        fake_result = '{"saved_payload": {"name": "TestSubcloud", "system_mode": "simplex"}}'

        subcloud = self.create_subcloud_static(
            self.ctx,
            name='subcloud1',
            deploy_status=consts.DEPLOY_STATE_DONE)
        db_api.subcloud_update(self.ctx,
                               subcloud.id,
                               availability_status=dccommon_consts.AVAILABILITY_ONLINE)

        fake_dcmanager_cermon_api = FakeDCManagerNotifications()

        p = mock.patch('dcmanager.rpc.client.DCManagerNotifications')
        mock_dcmanager_api = p.start()
        mock_dcmanager_api.return_value = fake_dcmanager_cermon_api

        sm = subcloud_manager.SubcloudManager()
        sm.update_subcloud(self.ctx,
                           subcloud.id,
                           bootstrap_values=fake_bootstrap_values)

        # Verify subcloud was updated with correct values
        updated_subcloud = db_api.subcloud_get_by_name(self.ctx, subcloud.name)
        self.assertEqual(fake_result,
                         updated_subcloud.rehome_data)

    def test_update_subcloud_bootstrap_address(self):
        fake_bootstrap_values = '{"name": "TestSubcloud", "system_mode": "simplex"}'
        fake_result = ('{"saved_payload": {"name": "TestSubcloud", '
                       '"system_mode": "simplex", '
                       '"bootstrap-address": "123.123.123.123"}}')

        subcloud = self.create_subcloud_static(
            self.ctx,
            name='subcloud1',
            deploy_status=consts.DEPLOY_STATE_DONE)

        db_api.subcloud_update(self.ctx,
                               subcloud.id,
                               availability_status=dccommon_consts.AVAILABILITY_ONLINE)

        fake_dcmanager_cermon_api = FakeDCManagerNotifications()

        p = mock.patch('dcmanager.rpc.client.DCManagerNotifications')
        mock_dcmanager_api = p.start()
        mock_dcmanager_api.return_value = fake_dcmanager_cermon_api

        sm = subcloud_manager.SubcloudManager()
        sm.update_subcloud(self.ctx,
                           subcloud.id,
                           bootstrap_values=fake_bootstrap_values)
        sm.update_subcloud(self.ctx,
                           subcloud.id,
                           bootstrap_address="123.123.123.123")

        # Verify subcloud was updated with correct values
        updated_subcloud = db_api.subcloud_get_by_name(self.ctx, subcloud.name)
        self.assertEqual(fake_result,
                         updated_subcloud.rehome_data)