Introduces Alembic database migration tool

This PR introduces the Alembic database migration tool. Trove
currently uses sqlalchemy-migrate, which does not work with
SQLAlchemy 2.0. Introducing the new migration tool prepares us for
the migration to SQLAlchemy 2.0.

This change does not affect the default trove-manage usage, but the
database schema versioning changes: alembic uses non-linear,
dependency-graph versioning, while sqlalchemy-migrate uses
integer-based versioning.

Depends-On: https://review.opendev.org/c/openstack/trove-tempest-plugin/+/921589

Co-Authored-By: wu.chunyang <wchy1001@gmail.com>

Story: 2010922
Task: 48782
Change-Id: Idd63f470a2b941720314b6356fe28cd8e394427e
Hirotaka Wakabayashi 2024-05-08 12:25:36 +00:00 committed by wu.chunyang
parent 7501804a9f
commit 114601edba
34 changed files with 1359 additions and 231 deletions
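The switch from integer to dependency-graph versioning described above can be sketched with a toy model (illustrative only; the two revision ids are the ones added by this PR, but the lists and helper are hypothetical, not Trove code):

```python
# Toy model of the two versioning schemes (not Trove code).

# sqlalchemy-migrate: versions are consecutive integers; "latest" is max().
migrate_versions = [46, 47, 48]

# alembic: each revision names the revision it builds on (down_revision),
# forming a graph; a "head" is any revision nothing else builds on.
alembic_revisions = {
    # revision id      -> down_revision (None = base)
    '906cffda7b29': None,            # initial schema (from this PR)
    '5c68b4fb3cd1': '906cffda7b29',  # registry extension (from this PR)
}


def heads(revisions):
    """Revisions that no other revision lists as its down_revision."""
    parents = set(revisions.values())
    return [rev for rev in revisions if rev not in parents]


print(max(migrate_versions))    # the latest integer version
print(heads(alembic_revisions))
```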


@@ -348,15 +348,22 @@ Finally, when trove-guestagent does backup/restore, it will pull this image with
 Initialize Trove Database
 ~~~~~~~~~~~~~~~~~~~~~~~~~
 
-This is controlled through `sqlalchemy-migrate
-<https://code.google.com/archive/p/sqlalchemy-migrate/>`_ scripts under the
-trove/db/sqlalchemy/migrate_repo/versions directory in this repository. The
+.. versionchanged:: Caracal
+
+   The database migration engine was changed from ``sqlalchemy-migrate`` to
+   ``alembic``, and the ``sqlalchemy-migrate`` code was removed.
+
+This is controlled through `alembic`__ scripts under the
+trove/db/sqlalchemy/migrations/versions directory in this repository. The
 script ``trove-manage`` (which should be installed together with Trove
 controller software) could be used to aid in the initialization of the Trove
 database. Note that this tool looks at the ``/etc/trove/trove.conf`` file for
 its database credentials, so initializing the database must happen after Trove
 is configured.
 
+.. __: https://alembic.sqlalchemy.org/en/latest/
+
 Launching the Trove Controller
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


@@ -0,0 +1,10 @@
---
features:
- Support Alembic database migration tool.
upgrade:
- Trove has removed support for SQLAlchemy-Migrate. Users upgrading
  from a version prior to Wallaby must first upgrade to a version
  between Wallaby and Bobcat using SQLAlchemy-Migrate to complete the
  database schema migration, and then use trove-manage to upgrade to
  the latest version.


@@ -1,12 +1,12 @@
+alembic>=1.8.0
 pbr!=2.1.0,>=2.0.0 # Apache-2.0
-SQLAlchemy!=1.1.5,!=1.1.6,!=1.1.7,!=1.1.8,>=1.0.10 # MIT
 eventlet!=0.18.3,!=0.20.1,>=0.18.2 # MIT
+SQLAlchemy>=1.4.0 # MIT
 keystonemiddleware>=4.17.0 # Apache-2.0
 Routes>=2.3.1 # MIT
 WebOb>=1.7.1 # MIT
 PasteDeploy>=1.5.0 # MIT
 Paste>=2.0.2 # MIT
-sqlalchemy-migrate>=0.11.0 # Apache-2.0
 netaddr>=0.7.18 # BSD
 httplib2>=0.9.1 # MIT
 lxml!=3.7.0,>=3.4.1 # BSD


@@ -0,0 +1,117 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# A generic, single database configuration.
[alembic]
# path to migration scripts
script_location = %(here)s/migrations
# template used to generate migration file names; The default value is %%(rev)s_%%(slug)s
# Uncomment the line below if you want the files to be prepended with date and time
# see https://alembic.sqlalchemy.org/en/latest/tutorial.html#editing-the-ini-file
# for all available tokens
# file_template = %%(year)d_%%(month).2d_%%(day).2d_%%(hour).2d%%(minute).2d-%%(rev)s_%%(slug)s
# sys.path path, will be prepended to sys.path if present.
# defaults to the current working directory.
prepend_sys_path = .
# timezone to use when rendering the date within the migration file
# as well as the filename.
# If specified, requires the python-dateutil library that can be
# installed by adding `alembic[tz]` to the pip requirements
# string value is passed to dateutil.tz.gettz()
# leave blank for localtime
# timezone =
# max length of characters to apply to the
# "slug" field
# truncate_slug_length = 40
# set to 'true' to run the environment during
# the 'revision' command, regardless of autogenerate
# revision_environment = false
# set to 'true' to allow .pyc and .pyo files without
# a source .py file to be detected as revisions in the
# versions/ directory
# sourceless = false
# version location specification; This defaults
# to migrations/versions. When using multiple version
# directories, initial revisions must be specified with --version-path.
# The path separator used here should be the separator specified by "version_path_separator" below.
# version_locations = %(here)s/bar:%(here)s/bat:migrations/versions
# version path separator; As mentioned above, this is the character used to split
# version_locations. The default within new alembic.ini files is "os", which uses os.pathsep.
# If this key is omitted entirely, it falls back to the legacy behavior of splitting on spaces and/or commas.
# Valid values for version_path_separator are:
#
# version_path_separator = :
# version_path_separator = ;
# version_path_separator = space
version_path_separator = os # Use os.pathsep. Default configuration used for new projects.
# the output encoding used when revision files
# are written from script.py.mako
# output_encoding = utf-8
sqlalchemy.url = sqlite:///trove.db
[post_write_hooks]
# post_write_hooks defines scripts or Python functions that are run
# on newly generated revision scripts. See the documentation for further
# detail and examples
# format using "black" - use the console_scripts runner, against the "black" entrypoint
# hooks = black
# black.type = console_scripts
# black.entrypoint = black
# black.options = -l 79 REVISION_SCRIPT_FILENAME
# Logging configuration
[loggers]
keys = root,sqlalchemy,alembic
[handlers]
keys = console
[formatters]
keys = generic
[logger_root]
level = WARN
handlers = console
qualname =
[logger_sqlalchemy]
level = WARN
handlers =
qualname = sqlalchemy.engine
[logger_alembic]
level = INFO
handlers =
qualname = alembic
[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic
[formatter_generic]
format = %(levelname)-5.5s [%(name)s] %(message)s
datefmt = %H:%M:%S
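trove-manage loads this file and overrides ``sqlalchemy.url`` at runtime rather than using the ``sqlite:///trove.db`` placeholder. That override pattern can be sketched with the stdlib configparser as a stand-in for ``alembic.config.Config.set_main_option`` (the MySQL URL below is a made-up example, not a real deployment value):

```python
import configparser

# A minimal stand-in for the [alembic] section of alembic.ini above.
ALEMBIC_INI = """
[alembic]
script_location = %(here)s/migrations
sqlalchemy.url = sqlite:///trove.db
"""

parser = configparser.ConfigParser()
parser.read_string(ALEMBIC_INI)

# At runtime the placeholder URL is replaced with the deployment's real
# connection string (analogous to Config.set_main_option in migrate.py).
parser.set('alembic', 'sqlalchemy.url', 'mysql+pymysql://trove:secret@db/trove')
print(parser.get('alembic', 'sqlalchemy.url'))
```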


@@ -13,12 +13,26 @@
 # License for the specific language governing permissions and limitations
 # under the License.
 
+from pathlib import Path
+
+from alembic import command as alembic_command
+from alembic import config as alembic_config
+from alembic import migration as alembic_migration
+from alembic.script import ScriptDirectory
+from oslo_log import log as logging
+import sqlalchemy as sa
 import sqlalchemy.exc
+from sqlalchemy import text
 
 from trove.common import exception
-from trove.db.sqlalchemy import migration
 from trove.db.sqlalchemy import session
 
+LOG = logging.getLogger(__name__)
+
+ALEMBIC_INIT_VERSION = '906cffda7b29'
+ALEMBIC_LATEST_VERSION = '5c68b4fb3cd1'
+
 
 def list(query_func, *args, **kwargs):
     query = query_func(*args, **kwargs)
@@ -128,12 +142,115 @@ def clean_db():
     session.clean_db()
 
 
+def _get_alembic_revision(config):
+    script = ScriptDirectory.from_config(config)
+    current_revision = script.get_current_head()
+    if current_revision is not None:
+        return current_revision
+    return "head"
+
+
+def _migrate_legacy_database(config):
+    """Check if database is a legacy sqlalchemy-migrate-managed database.
+
+    If it is, migrate it by "stamping" the initial alembic schema.
+    """
+    # If the database doesn't have the sqlalchemy-migrate legacy migration
+    # table, we don't have anything to do
+    engine = session.get_engine()
+    if not sa.inspect(engine).has_table('migrate_version'):
+        return
+
+    # Likewise, if we've already migrated to alembic, we don't have
+    # anything to do
+    with engine.begin() as connection:
+        context = alembic_migration.MigrationContext.configure(connection)
+        if context.get_current_revision():
+            return
+
+    # We have legacy migrations but no alembic migration. Stamp (dummy
+    # apply) the initial alembic migration.
+    LOG.info(
+        'The database is still under sqlalchemy-migrate control; '
+        'fake applying the initial alembic migration'
+    )
+    # In case we upgrade from a branch prior to stable/2023.2
+    if sa.inspect(engine).has_table('migrate_version'):
+        # for deployments prior to Bobcat
+        query = text("SELECT version FROM migrate_version")
+        with engine.connect() as connection:
+            result = connection.execute(query)
+            cur_version = result.first().values()[0]
+        LOG.info("current version is %s", cur_version)
+        if cur_version == 48:
+            alembic_command.stamp(config, ALEMBIC_INIT_VERSION)
+        elif cur_version > 48:
+            # we already upgraded to the latest branch, use the latest
+            # version (5c68b4fb3cd1)
+            alembic_command.stamp(config, ALEMBIC_LATEST_VERSION)
+        else:
+            message = ("You need to upgrade the trove database to a "
+                       "version between Wallaby and Bobcat, and then "
+                       "upgrade to the latest.")
+            raise exception.BadRequest(message)
+
+
+def _configure_alembic(options):
+    alembic_ini = Path(__file__).joinpath('..', 'alembic.ini').resolve()
+    if alembic_ini.exists():
+        # alembic configuration
+        config = alembic_config.Config(alembic_ini)
+        # override the database configuration from the file
+        config.set_main_option('sqlalchemy.url',
+                               options['database']['connection'])
+        # override the logger configuration from the file
+        # https://stackoverflow.com/a/42691781/613428
+        config.attributes['configure_logger'] = False
+        return config
+    else:
+        # return None if no alembic.ini exists
+        return None
+
+
 def db_sync(options, version=None, repo_path=None):
-    migration.db_sync(options, version, repo_path)
+    config = _configure_alembic(options)
+    if config:
+        # Check the version
+        if version is None:
+            version = _get_alembic_revision(config)
+        # Raise an exception in sqlalchemy-migrate style
+        if version is not None and version.isdigit():
+            raise exception.InvalidValue(
+                'You requested an sqlalchemy-migrate database version; '
+                'this is no longer supported'
+            )
+        # Upgrade to a later version using alembic
+        _migrate_legacy_database(config)
+        alembic_command.upgrade(config, version)
+    else:
+        raise exception.BadRequest('sqlalchemy-migrate is '
+                                   'no longer supported')
 
 
 def db_upgrade(options, version=None, repo_path=None):
-    migration.upgrade(options, version, repo_path)
+    config = _configure_alembic(options)
+    if config:
+        # Check the version
+        if version is None:
+            version = 'head'
+        # Raise an exception in sqlalchemy-migrate style
+        if version.isdigit():
+            raise exception.InvalidValue(
+                'You requested an sqlalchemy-migrate database version; '
+                'this is no longer supported'
+            )
+        # Upgrade to a later version using alembic
+        _migrate_legacy_database(config)
+        alembic_command.upgrade(config, version)
+    else:
+        raise exception.BadRequest('sqlalchemy-migrate is '
+                                   'no longer supported')
 
 
 def db_reset(options, *plugins):
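The legacy-database handling above boils down to a decision rule keyed on the last sqlalchemy-migrate version (48 was the final migrate script before this change). A self-contained sketch of that rule, reusing the revision ids defined in this file (the helper name is hypothetical):

```python
# Revision ids from the migrate.py change above.
ALEMBIC_INIT_VERSION = '906cffda7b29'
ALEMBIC_LATEST_VERSION = '5c68b4fb3cd1'


def stamp_target(cur_version):
    """Pick the alembic revision to stamp for a legacy database.

    cur_version is the integer read from the old migrate_version table.
    Returns a revision id, or None when the database is too old and the
    operator must first upgrade via a Wallaby..Bobcat release.
    """
    if cur_version == 48:
        # schema matches the initial alembic revision exactly
        return ALEMBIC_INIT_VERSION
    if cur_version > 48:
        # schema already includes the later changes
        return ALEMBIC_LATEST_VERSION
    return None


print(stamp_target(48))
```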


@@ -16,8 +16,11 @@
 from sqlalchemy import MetaData
 from sqlalchemy import orm
 from sqlalchemy.orm import exc as orm_exc
+from sqlalchemy.orm import registry
 from sqlalchemy import Table
 
+mapper_registry = registry()
+
 
 def map(engine, models):
     meta = MetaData()
@@ -25,59 +28,86 @@ def map(engine, models):
     if mapping_exists(models['instances']):
         return
 
-    orm.mapper(models['instances'], Table('instances', meta, autoload=True))
-    orm.mapper(models['instance_faults'],
-               Table('instance_faults', meta, autoload=True))
-    orm.mapper(models['root_enabled_history'],
-               Table('root_enabled_history', meta, autoload=True))
-    orm.mapper(models['datastores'],
-               Table('datastores', meta, autoload=True))
-    orm.mapper(models['datastore_versions'],
-               Table('datastore_versions', meta, autoload=True))
-    orm.mapper(models['datastore_version_metadata'],
-               Table('datastore_version_metadata', meta, autoload=True))
-    orm.mapper(models['capabilities'],
-               Table('capabilities', meta, autoload=True))
-    orm.mapper(models['capability_overrides'],
-               Table('capability_overrides', meta, autoload=True))
-    orm.mapper(models['service_statuses'],
-               Table('service_statuses', meta, autoload=True))
-    orm.mapper(models['dns_records'],
-               Table('dns_records', meta, autoload=True))
-    orm.mapper(models['agent_heartbeats'],
-               Table('agent_heartbeats', meta, autoload=True))
-    orm.mapper(models['quotas'],
-               Table('quotas', meta, autoload=True))
-    orm.mapper(models['quota_usages'],
-               Table('quota_usages', meta, autoload=True))
-    orm.mapper(models['reservations'],
-               Table('reservations', meta, autoload=True))
-    orm.mapper(models['backups'],
-               Table('backups', meta, autoload=True))
-    orm.mapper(models['backup_strategy'],
-               Table('backup_strategy', meta, autoload=True))
-    orm.mapper(models['security_groups'],
-               Table('security_groups', meta, autoload=True))
-    orm.mapper(models['security_group_rules'],
-               Table('security_group_rules', meta, autoload=True))
-    orm.mapper(models['security_group_instance_associations'],
-               Table('security_group_instance_associations', meta,
-                     autoload=True))
-    orm.mapper(models['configurations'],
-               Table('configurations', meta, autoload=True))
-    orm.mapper(models['configuration_parameters'],
-               Table('configuration_parameters', meta, autoload=True))
-    orm.mapper(models['conductor_lastseen'],
-               Table('conductor_lastseen', meta, autoload=True))
-    orm.mapper(models['clusters'],
-               Table('clusters', meta, autoload=True))
-    orm.mapper(models['datastore_configuration_parameters'],
-               Table('datastore_configuration_parameters', meta,
-                     autoload=True))
-    orm.mapper(models['modules'],
-               Table('modules', meta, autoload=True))
-    orm.mapper(models['instance_modules'],
-               Table('instance_modules', meta, autoload=True))
+    mapper_registry.map_imperatively(models['instances'],
+                                     Table('instances', meta,
+                                           autoload_with=engine))
+    mapper_registry.map_imperatively(models['instance_faults'],
+                                     Table('instance_faults', meta,
+                                           autoload_with=engine))
+    mapper_registry.map_imperatively(models['root_enabled_history'],
+                                     Table('root_enabled_history', meta,
+                                           autoload_with=engine))
+    mapper_registry.map_imperatively(models['datastores'],
+                                     Table('datastores', meta,
+                                           autoload_with=engine))
+    mapper_registry.map_imperatively(models['datastore_versions'],
+                                     Table('datastore_versions', meta,
+                                           autoload_with=engine))
+    mapper_registry.map_imperatively(models['datastore_version_metadata'],
+                                     Table('datastore_version_metadata',
+                                           meta, autoload_with=engine))
+    mapper_registry.map_imperatively(models['capabilities'],
+                                     Table('capabilities', meta,
+                                           autoload_with=engine))
+    mapper_registry.map_imperatively(models['capability_overrides'],
+                                     Table('capability_overrides', meta,
+                                           autoload_with=engine))
+    mapper_registry.map_imperatively(models['service_statuses'],
+                                     Table('service_statuses', meta,
+                                           autoload_with=engine))
+    mapper_registry.map_imperatively(models['dns_records'],
+                                     Table('dns_records', meta,
+                                           autoload_with=engine))
+    mapper_registry.map_imperatively(models['agent_heartbeats'],
+                                     Table('agent_heartbeats', meta,
+                                           autoload_with=engine))
+    mapper_registry.map_imperatively(models['quotas'],
+                                     Table('quotas', meta,
+                                           autoload_with=engine))
+    mapper_registry.map_imperatively(models['quota_usages'],
+                                     Table('quota_usages', meta,
+                                           autoload_with=engine))
+    mapper_registry.map_imperatively(models['reservations'],
+                                     Table('reservations', meta,
+                                           autoload_with=engine))
+    mapper_registry.map_imperatively(models['backups'],
+                                     Table('backups', meta,
+                                           autoload_with=engine))
+    mapper_registry.map_imperatively(models['backup_strategy'],
+                                     Table('backup_strategy', meta,
+                                           autoload_with=engine))
+    mapper_registry.map_imperatively(models['security_groups'],
+                                     Table('security_groups', meta,
+                                           autoload_with=engine))
+    mapper_registry.map_imperatively(models['security_group_rules'],
+                                     Table('security_group_rules', meta,
+                                           autoload_with=engine))
+    mapper_registry.map_imperatively(
+        models['security_group_instance_associations'],
+        Table('security_group_instance_associations', meta,
+              autoload_with=engine))
+    mapper_registry.map_imperatively(models['configurations'],
+                                     Table('configurations', meta,
+                                           autoload_with=engine))
+    mapper_registry.map_imperatively(models['configuration_parameters'],
+                                     Table('configuration_parameters',
+                                           meta, autoload_with=engine))
+    mapper_registry.map_imperatively(models['conductor_lastseen'],
+                                     Table('conductor_lastseen', meta,
+                                           autoload_with=engine))
+    mapper_registry.map_imperatively(models['clusters'],
+                                     Table('clusters', meta,
+                                           autoload_with=engine))
+    mapper_registry.map_imperatively(
+        models['datastore_configuration_parameters'],
+        Table('datastore_configuration_parameters', meta,
+              autoload_with=engine))
+    mapper_registry.map_imperatively(models['modules'],
+                                     Table('modules', meta,
+                                           autoload_with=engine))
+    mapper_registry.map_imperatively(models['instance_modules'],
+                                     Table('instance_modules', meta,
+                                           autoload_with=engine))
 
 
 def mapping_exists(model):


@@ -0,0 +1 @@
Generic single-database configuration.


@@ -0,0 +1,101 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from logging.config import fileConfig

from sqlalchemy import engine_from_config
from sqlalchemy import pool

from alembic import context

# this is the Alembic Config object, which provides
# access to the values within the .ini file in use.
config = context.config

# Interpret the config file for Python logging.
# This line sets up loggers basically unless we're told not to.
if config.attributes.get('configure_logger', True):
    fileConfig(config.config_file_name)

# add your model's MetaData object here
# for 'autogenerate' support
# from myapp import mymodel
# target_metadata = mymodel.Base.metadata
target_metadata = None

# other values from the config, defined by the needs of env.py,
# can be acquired:
# my_important_option = config.get_main_option("my_important_option")
# ... etc.


def run_migrations_offline() -> None:
    """Run migrations in 'offline' mode.

    This configures the context with just a URL
    and not an Engine, though an Engine is acceptable
    here as well. By skipping the Engine creation
    we don't even need a DBAPI to be available.

    Calls to context.execute() here emit the given string to the
    script output.
    """
    url = config.get_main_option("sqlalchemy.url")
    context.configure(
        url=url,
        target_metadata=target_metadata,
        literal_binds=True,
        dialect_opts={"paramstyle": "named"},
    )

    with context.begin_transaction():
        context.run_migrations()


def run_migrations_online() -> None:
    """Run migrations in 'online' mode.

    In this scenario we need to create an Engine
    and associate a connection with the context.

    This is modified from the default based on the below, since we want to
    share an engine when unit testing so in-memory database testing actually
    works.

    https://alembic.sqlalchemy.org/en/latest/cookbook.html#connection-sharing
    """
    connectable = config.attributes.get('connection', None)

    if connectable is None:
        # only create Engine if we don't have a Connection from the outside
        connectable = engine_from_config(
            config.get_section(config.config_ini_section),
            prefix="sqlalchemy.",
            poolclass=pool.NullPool,
        )

    # when connectable is already a Connection object, calling connect()
    # gives us a *branched connection*.
    with connectable.connect() as connection:
        context.configure(
            connection=connection, target_metadata=target_metadata
        )

        with context.begin_transaction():
            context.run_migrations()


if context.is_offline_mode():
    run_migrations_offline()
else:
    run_migrations_online()
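The connection-sharing override in run_migrations_online exists because each new connection to an in-memory SQLite database gets its own private, empty database; only the connection that ran the migrations can see the resulting schema. The underlying behavior is easy to demonstrate with the stdlib sqlite3 module (illustrative; the table name is arbitrary):

```python
import sqlite3

# Two independent connections to ":memory:" are two separate databases.
conn_a = sqlite3.connect(':memory:')
conn_b = sqlite3.connect(':memory:')

conn_a.execute('CREATE TABLE instances (id TEXT PRIMARY KEY)')

# conn_b never sees the schema created through conn_a.
tables_b = conn_b.execute(
    "SELECT name FROM sqlite_master WHERE type='table'").fetchall()
print(tables_b)  # []

# Only the connection that ran the DDL sees the table -- hence env.py
# reuses a connection passed in via config.attributes['connection'].
tables_a = conn_a.execute(
    "SELECT name FROM sqlite_master WHERE type='table'").fetchall()
print(tables_a)  # [('instances',)]
```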


@@ -0,0 +1,38 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""${message}
Revision ID: ${up_revision}
Revises: ${down_revision | comma,n}
Create Date: ${create_date}
"""
from typing import Sequence, Union
from alembic import op
import sqlalchemy as sa
${imports if imports else ""}
# revision identifiers, used by Alembic.
revision: str = ${repr(up_revision)}
down_revision: Union[str, None] = ${repr(down_revision)}
branch_labels: Union[str, Sequence[str], None] = ${repr(branch_labels)}
depends_on: Union[str, Sequence[str], None] = ${repr(depends_on)}
def upgrade() -> None:
    ${upgrades if upgrades else "pass"}


def downgrade() -> None:
    ${downgrades if downgrades else "pass"}


@@ -0,0 +1,100 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Add Datastore Version Registry Extension
Revision ID: 5c68b4fb3cd1
Revises: 906cffda7b29
Create Date: 2024-04-30 13:59:10.690895
"""
from typing import Sequence, Union
from alembic import op
from sqlalchemy.sql import table, column
from sqlalchemy import text, Column
from sqlalchemy import String, Text
from trove.common.constants import REGISTRY_EXT_DEFAULTS
# revision identifiers, used by Alembic.
revision: str = '5c68b4fb3cd1'
down_revision: Union[str, None] = '906cffda7b29'
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
repl_namespaces = {
    "mariadb": "trove.guestagent.strategies.replication.mariadb_gtid",
    "mongodb":
        "trove.guestagent.strategies.replication.experimental.mongo_impl",
    "mysql": "trove.guestagent.strategies.replication.mysql_gtid",
    "percona": "trove.guestagent.strategies.replication.mysql_gtid",
    "postgresql": "trove.guestagent.strategies.replication.postgresql",
    "pxc": "trove.guestagent.strategies.replication.mysql_gtid",
    "redis": "trove.guestagent.strategies.replication.experimental.redis_sync",
}

repl_strategies = {
    "mariadb": "MariaDBGTIDReplication",
    "mongodb": "Replication",
    "mysql": "MysqlGTIDReplication",
    "percona": "MysqlGTIDReplication",
    "postgresql": "PostgresqlReplicationStreaming",
    "pxc": "MysqlGTIDReplication",
    "redis": "RedisSyncReplication",
}


def upgrade() -> None:
    bind = op.get_bind()

    # 1. select id and manager from datastore_versions table
    connection = op.get_bind()
    # add columns before proceeding
    op.add_column("datastore_versions", Column('registry_ext', Text(),
                                               nullable=True))
    op.add_column("datastore_versions", Column('repl_strategy', Text(),
                                               nullable=True))

    ds_versions_table = table("datastore_versions",
                              column("id", String),
                              column("registry_ext", Text),
                              column("repl_strategy", Text))
    for dsv_id, dsv_manager in connection.execute(
            text("select id, manager from datastore_versions")):
        registry_ext = REGISTRY_EXT_DEFAULTS.get(dsv_manager, '')
        repl_strategy = "%(repl_namespace)s.%(repl_strategy)s" % {
            'repl_namespace': repl_namespaces.get(dsv_manager, ''),
            'repl_strategy': repl_strategies.get(dsv_manager, '')
        }
        op.execute(
            ds_versions_table.update()
            .where(ds_versions_table.c.id == dsv_id)
            .values({"registry_ext": registry_ext,
                     "repl_strategy": repl_strategy})
        )

    if bind.engine.name != "sqlite":
        op.alter_column("datastore_versions", "registry_ext", nullable=False,
                        existing_type=Text)
        op.alter_column("datastore_versions", "repl_strategy", nullable=False,
                        existing_type=Text)
    else:
        with op.batch_alter_table('datastore_versions') as bo:
            bo.alter_column("registry_ext", nullable=False,
                            existing_type=Text)
            bo.alter_column("repl_strategy", nullable=False,
                            existing_type=Text)


def downgrade() -> None:
    pass
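The data portion of this migration simply derives the two new column values from each datastore manager name. A self-contained sketch of that derivation (mappings excerpted from the migration above; REGISTRY_EXT_DEFAULTS lives in trove.common.constants and is not reproduced here, and the helper name is hypothetical):

```python
# Excerpt of the mappings from the migration above (two managers only).
repl_namespaces = {
    "mysql": "trove.guestagent.strategies.replication.mysql_gtid",
    "redis": "trove.guestagent.strategies.replication.experimental.redis_sync",
}
repl_strategies = {
    "mysql": "MysqlGTIDReplication",
    "redis": "RedisSyncReplication",
}


def repl_strategy_for(manager):
    """Build the repl_strategy column value, as the upgrade loop does."""
    return "%(repl_namespace)s.%(repl_strategy)s" % {
        'repl_namespace': repl_namespaces.get(manager, ''),
        'repl_strategy': repl_strategies.get(manager, ''),
    }


print(repl_strategy_for("mysql"))
```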


@@ -0,0 +1,654 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""init trove db
Revision ID: 906cffda7b29
Revises:
Create Date: 2024-04-30 13:58:12.700444
"""
from typing import Sequence, Union
from alembic import op
from sqlalchemy import Boolean, Column, DateTime, Float, ForeignKey, Integer, \
String, Text, UniqueConstraint
from sqlalchemy.sql import table, column
from trove.common import cfg
CONF = cfg.CONF
# revision identifiers, used by Alembic.
revision: str = '906cffda7b29'
down_revision: Union[str, None] = None
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade():
    bind = op.get_bind()
    """The schemas up to 048 are merged and treated as is.
    049 is saved as another revision
    """

    """001_base_schema.py
    """
    # NOTE(hiwkby)
    # move the 001_base_schema.py to 032_clusters.py because
    # instances references foreign keys in clusters and other tables
    # that are created at 032_clusters.py.

    """002_service_images.py
    """
    op.create_table(
        'service_images',
        Column('id', String(36), primary_key=True, nullable=False),
        Column('service_name', String(255)),
        Column('image_id', String(255))
    )

    """003_service_statuses.py
    """
    op.create_table(
        'service_statuses',
        Column('id', String(36), primary_key=True, nullable=False),
        Column('instance_id', String(36), nullable=False),
        Column('status_id', Integer(), nullable=False),
        Column('status_description', String(64), nullable=False),
        Column('updated_at', DateTime()),
    )
    op.create_index('service_statuses_instance_id', 'service_statuses',
                    ['instance_id'])

    """004_root_enabled.py
    """
    op.create_table(
        'root_enabled_history',
        Column('id', String(36), primary_key=True, nullable=False),
        Column('user', String(length=255)),
        Column('created', DateTime()),
    )

    """005_heartbeat.py
    """
    op.create_table(
        'agent_heartbeats',
        Column('id', String(36), primary_key=True, nullable=False),
        Column('instance_id', String(36), nullable=False, index=True,
               unique=True),
        Column('guest_agent_version', String(255), index=True),
        Column('deleted', Boolean(), index=True),
        Column('deleted_at', DateTime()),
        Column('updated_at', DateTime(), nullable=False)
    )

    """006_dns_records.py
    """
    op.create_table(
        'dns_records',
        Column('name', String(length=255), primary_key=True),
        Column('record_id', String(length=64))
    )

    """007_add_volume_flavor.py
    """
    # added the columns to instances table

    """008_add_instance_fields.py
    """
    # added the columns to instances table

    """009_add_deleted_flag_to_instances.py
    """
    # added the columns to instances table

    """010_add_usage.py
    """
    op.create_table(
        'usage_events',
        Column('id', String(36), primary_key=True, nullable=False),
        Column('instance_name', String(36)),
        Column('tenant_id', String(36)),
        Column('nova_instance_id', String(36)),
        Column('instance_size', Integer()),
        Column('nova_volume_id', String(36)),
        Column('volume_size', Integer()),
        Column('end_time', DateTime()),
        Column('updated', DateTime()),
    )

    """011_quota.py
    """
    op.create_table(
        'quotas',
        Column('id', String(36),
               primary_key=True, nullable=False),
        Column('created', DateTime()),
        Column('updated', DateTime()),
        Column('tenant_id', String(36)),
        Column('resource', String(length=255), nullable=False),
        Column('hard_limit', Integer()),
        UniqueConstraint('tenant_id', 'resource')
    )
    op.create_table(
        'quota_usages',
        Column('id', String(36),
               primary_key=True, nullable=False),
        Column('created', DateTime()),
        Column('updated', DateTime()),
        Column('tenant_id', String(36)),
        Column('in_use', Integer(), default=0),
        Column('reserved', Integer(), default=0),
        Column('resource', String(length=255), nullable=False),
        UniqueConstraint('tenant_id', 'resource')
    )
    op.create_table(
        'reservations',
        Column('created', DateTime()),
        Column('updated', DateTime()),
        Column('id', String(36),
               primary_key=True, nullable=False),
        Column('usage_id', String(36)),
        Column('delta', Integer(), nullable=False),
        Column('status', String(length=36))
    )

    """012_backup.py
    """
    # NOTE(hiwkby)
    # move the 012_backup.py to 029_add_backup_datastore.py because
    # backups references datastore_versions.id as a foreign key, which
    # has not been created yet at 012_backup.py.

    """013_add_security_group_artifacts.py
    """
    op.create_table(
        'security_groups',
        Column('id', String(length=36), primary_key=True, nullable=False),
        Column('name', String(length=255)),
        Column('description', String(length=255)),
        Column('user', String(length=255)),
        Column('tenant_id', String(length=255)),
        Column('created', DateTime()),
        Column('updated', DateTime()),
        Column('deleted', Boolean(), default=0),
        Column('deleted_at', DateTime()),
    )
    # NOTE(hiwkby)
    # move the security_group_instance_associations schema
    # to 032_clusters.py where instances table is created.
    op.create_table(
        'security_group_rules',
        Column('id', String(length=36), primary_key=True, nullable=False),
        Column('group_id', String(length=36),
               ForeignKey('security_groups.id', ondelete='CASCADE',
                          onupdate='CASCADE')),
        Column('parent_group_id', String(length=36),
               ForeignKey('security_groups.id', ondelete='CASCADE',
                          onupdate='CASCADE')),
        Column('protocol', String(length=255)),
        Column('from_port', Integer()),
        Column('to_port', Integer()),
        Column('cidr', String(length=255)),
        Column('created', DateTime()),
        Column('updated', DateTime()),
        Column('deleted', Boolean(), default=0),
        Column('deleted_at', DateTime()),
    )

    """014_update_instance_flavor_id.py
    """
    # updated the instances table schema

    """015_add_service_type.py
    """
    # updated the instances table schema
    # NOTE(hiwkby)
    # service_type was deleted in 016_add_datastore_type.py

    """016_add_datastore_type.py
    """
    op.create_table(
        'datastores',
        Column('id', String(36), primary_key=True, nullable=False),
        Column('name', String(255), unique=True),
        # NOTE(hiwkby) manager was dropped in 017_update_datastores.py
        # Column('manager', String(255), nullable=False),
        Column('default_version_id', String(36)),
    )
    op.create_table(
        'datastore_versions',
        Column('id', String(36), primary_key=True, nullable=False),
        Column('datastore_id', String(36), ForeignKey('datastores.id')),
        # NOTE(hiwkby)
        # unique=false in 018_datastore_versions_fix.py
        Column('name', String(255), unique=False),
        Column('image_id', String(36), nullable=True),
        Column('packages', String(511)),
        Column('active', Boolean(), nullable=False),
        # manager added in 017_update_datastores.py
        Column('manager', String(255)),
        # image_tags added in 047_image_tag_in_datastore_version.py
        Column('image_tags', String(255), nullable=True),
        # version added in 048_add_version_to_datastore_version.py
        Column('version', String(255), nullable=True),
        UniqueConstraint('datastore_id', 'name', 'version',
                         name='ds_versions')
    )

    """017_update_datastores.py
    """
    # updated the datastores and datastore_versions table schema.

    """018_datastore_versions_fix.py
    """
    # updated the datastore_versions table schema

    """019_datastore_fix.py
    """
    # updated the datastore_versions table schema

    """020_configurations.py
    """
    op.create_table(
        'configurations',
        Column('id', String(36), primary_key=True, nullable=False),
        Column('name', String(64), nullable=False),
        Column('description', String(256)),
        Column('tenant_id', String(36), nullable=False),
        Column('datastore_version_id', String(36), nullable=False),
        Column('deleted', Boolean(), nullable=False, default=False),
        Column('deleted_at', DateTime()),
        # NOTE(hiwkby)
        # created added in 031_add_timestamps_to_configurations.py
        Column('created', DateTime()),
        Column('updated', DateTime()),
    )
    op.create_table(
        'configuration_parameters',
        Column('configuration_id', String(36), ForeignKey('configurations.id'),
               nullable=False, primary_key=True),
        Column('configuration_key', String(128),
               nullable=False, primary_key=True),
        Column('configuration_value', String(128)),
        Column('deleted', Boolean(), nullable=False, default=False),
        Column('deleted_at', DateTime()),
    )

    """021_conductor_last_seen.py
    """
    op.create_table(
        'conductor_lastseen',
        Column('instance_id', String(36), primary_key=True, nullable=False),
        Column('method_name', String(36), primary_key=True, nullable=False),
        Column('sent', Float(precision=32))
    )

    """022_add_backup_parent_id.py
    """
    # updated the backups table schema
"""023_add_instance_indexes.py
"""
# updated the instances table schema
"""024_add_backup_indexes.py
"""
# updated the backups table schema
"""025_add_service_statuses_indexes.py
"""
# updated the service_statuses table schema
"""026_datastore_versions_unique_fix.py
"""
# updated the datastore_versions table schema
"""027_add_datastore_capabilities.py
"""
op.create_table(
'capabilities',
Column('id', String(36), primary_key=True, nullable=False),
Column('name', String(255), unique=True),
Column('description', String(255), nullable=False),
Column('enabled', Boolean()),
)
op.create_table(
'capability_overrides',
Column('id', String(36), primary_key=True, nullable=False),
Column('datastore_version_id', String(36),
ForeignKey('datastore_versions.id')),
Column('capability_id', String(36), ForeignKey('capabilities.id')),
Column('enabled', Boolean()),
UniqueConstraint('datastore_version_id', 'capability_id',
name='idx_datastore_capabilities_enabled')
)
"""028_recreate_agent_heartbeat.py
"""
# recreated the agent_heartbeats table
"""029_add_backup_datastore.py
"""
# NOTE(hiwkby)
# moved here from 012_backup.py because
# backups references datastore_versions.id as a foreign key
op.create_table(
'backups',
Column('id', String(36), primary_key=True, nullable=False),
Column('name', String(255), nullable=False),
Column('description', String(512)),
Column('location', String(1024)),
Column('backup_type', String(32)),
Column('size', Float()),
Column('tenant_id', String(36)),
Column('state', String(32), nullable=False),
Column('instance_id', String(36)),
Column('checksum', String(32)),
Column('backup_timestamp', DateTime()),
Column('deleted', Boolean(), default=0),
Column('created', DateTime()),
Column('updated', DateTime()),
Column('deleted_at', DateTime()),
# 022_add_backup_parent_id.py
Column('parent_id', String(36), nullable=True),
# 029_add_backup_datastore.py
Column('datastore_version_id', String(36),
ForeignKey(
"datastore_versions.id",
name="backups_ibfk_1")),
)
op.create_index('backups_instance_id', 'backups', ['instance_id'])
op.create_index('backups_deleted', 'backups', ['deleted'])
"""030_add_master_slave.py
"""
# updated the instances table schema
"""031_add_timestamps_to_configurations.py
"""
# updated the configurations table schema
"""032_clusters.py
"""
op.create_table(
'clusters',
Column('id', String(36), primary_key=True, nullable=False),
Column('created', DateTime(), nullable=False),
Column('updated', DateTime(), nullable=False),
Column('name', String(255), nullable=False),
Column('task_id', Integer(), nullable=False),
Column('tenant_id', String(36), nullable=False),
Column('datastore_version_id', String(36),
ForeignKey(
"datastore_versions.id",
name="clusters_ibfk_1"),
nullable=False),
Column('deleted', Boolean()),
Column('deleted_at', DateTime()),
Column('configuration_id', String(36),
ForeignKey(
"configurations.id",
name="clusters_ibfk_2")),
)
op.create_index('clusters_tenant_id', 'clusters', ['tenant_id'])
op.create_index('clusters_deleted', 'clusters', ['deleted'])
# NOTE(hiwkby)
# moved here from 001_base_schema.py because
# instances references clusters and other tables as foreign keys
op.create_table(
'instances',
Column('id', String(36), primary_key=True, nullable=False),
Column('created', DateTime()),
Column('updated', DateTime()),
Column('name', String(255)),
Column('hostname', String(255)),
Column('compute_instance_id', String(36)),
Column('task_id', Integer()),
Column('task_description', String(255)),
Column('task_start_time', DateTime()),
Column('volume_id', String(36)),
Column('flavor_id', String(255)),
Column('volume_size', Integer()),
Column('tenant_id', String(36), nullable=True),
Column('server_status', String(64)),
Column('deleted', Boolean()),
Column('deleted_at', DateTime()),
Column('datastore_version_id', String(36),
ForeignKey(
"datastore_versions.id",
name="instances_ibfk_1"), nullable=True),
Column('configuration_id', String(36),
ForeignKey(
"configurations.id",
name="instances_ibfk_2")),
Column('slave_of_id', String(36),
ForeignKey(
"instances.id",
name="instances_ibfk_3"), nullable=True),
Column('cluster_id', String(36),
ForeignKey(
"clusters.id",
name="instances_ibfk_4")),
Column('shard_id', String(36)),
Column('type', String(64)),
Column('region_id', String(255)),
Column('encrypted_key', String(255)),
Column('access', Text(), nullable=True),
)
op.create_index('instances_tenant_id', 'instances', ['tenant_id'])
op.create_index('instances_deleted', 'instances', ['deleted'])
op.create_index('instances_cluster_id', 'instances', ['cluster_id'])
# NOTE(hiwkby)
# the security_group_instance_associations table moved here
# because it references instances.id as a foreign key
op.create_table(
'security_group_instance_associations',
Column('id', String(length=36), primary_key=True, nullable=False),
Column('security_group_id', String(length=36),
ForeignKey('security_groups.id', ondelete='CASCADE',
onupdate='CASCADE')),
Column('instance_id', String(length=36),
ForeignKey('instances.id', ondelete='CASCADE',
onupdate='CASCADE')),
Column('created', DateTime()),
Column('updated', DateTime()),
Column('deleted', Boolean(), default=0),
Column('deleted_at', DateTime())
)
"""033_datastore_parameters.py
"""
op.create_table(
'datastore_configuration_parameters',
Column('id', String(36), primary_key=True, nullable=False),
Column('name', String(128), primary_key=True, nullable=False),
Column('datastore_version_id', String(36),
ForeignKey("datastore_versions.id"),
primary_key=True, nullable=False),
Column('restart_required', Boolean(), nullable=False, default=False),
Column('max_size', String(40)),
Column('min_size', String(40)),
Column('data_type', String(128), nullable=False),
UniqueConstraint(
'datastore_version_id', 'name',
name='UQ_datastore_configuration_parameters_datastore_version_id_name') # noqa
)
"""034_change_task_description.py
"""
# updated the instances table schema
"""035_flavor_id_int_to_string.py
"""
# updated the instances table schema
"""036_add_datastore_version_metadata.py
"""
op.create_table(
'datastore_version_metadata',
Column('id', String(36), primary_key=True, nullable=False),
Column(
'datastore_version_id',
String(36),
ForeignKey('datastore_versions.id', ondelete='CASCADE'),
),
Column('key', String(128), nullable=False),
Column('value', String(128)),
Column('created', DateTime(), nullable=False),
Column('deleted', Boolean(), nullable=False, default=False),
Column('deleted_at', DateTime()),
Column('updated_at', DateTime()),
UniqueConstraint(
'datastore_version_id', 'key', 'value',
name='UQ_datastore_version_metadata_datastore_version_id_key_value') # noqa
)
"""037_modules.py
"""
is_nullable = True if bind.engine.name == "sqlite" else False
op.create_table(
'modules',
Column('id', String(length=64), primary_key=True, nullable=False),
Column('name', String(length=255), nullable=False),
Column('type', String(length=255), nullable=False),
# contents was updated in 040_module_priority.py
Column('contents', Text(length=4294967295), nullable=False),
Column('description', String(length=255)),
Column('tenant_id', String(length=64), nullable=True),
Column('datastore_id', String(length=64), nullable=True),
Column('datastore_version_id', String(length=64), nullable=True),
Column('auto_apply', Boolean(), default=0, nullable=False),
Column('visible', Boolean(), default=1, nullable=False),
Column('live_update', Boolean(), default=0, nullable=False),
Column('md5', String(length=32), nullable=False),
Column('created', DateTime(), nullable=False),
Column('updated', DateTime(), nullable=False),
Column('deleted', Boolean(), default=0, nullable=False),
Column('deleted_at', DateTime()),
UniqueConstraint(
'type', 'tenant_id', 'datastore_id', 'datastore_version_id',
'name', 'deleted_at',
name='UQ_type_tenant_datastore_datastore_version_name'),
# NOTE(hiwkby)
# the following columns added in 040_module_priority.py
Column('priority_apply', Boolean(), nullable=is_nullable, default=0),
Column('apply_order', Integer(), nullable=is_nullable, default=5),
Column('is_admin', Boolean(), nullable=is_nullable, default=0),
)
op.create_table(
'instance_modules',
Column('id', String(length=64), primary_key=True, nullable=False),
Column('instance_id', String(length=64),
ForeignKey('instances.id', ondelete="CASCADE",
onupdate="CASCADE"), nullable=False),
Column('module_id', String(length=64),
ForeignKey('modules.id', ondelete="CASCADE",
onupdate="CASCADE"), nullable=False),
Column('md5', String(length=32), nullable=False),
Column('created', DateTime(), nullable=False),
Column('updated', DateTime(), nullable=False),
Column('deleted', Boolean(), default=0, nullable=False),
Column('deleted_at', DateTime()),
)
"""038_instance_faults.py
"""
op.create_table(
'instance_faults',
Column('id', String(length=64), primary_key=True, nullable=False),
Column('instance_id', String(length=64),
ForeignKey('instances.id', ondelete="CASCADE",
onupdate="CASCADE"), nullable=False),
Column('message', String(length=255), nullable=False),
Column('details', Text(length=65535), nullable=False),
Column('created', DateTime(), nullable=False),
Column('updated', DateTime(), nullable=False),
Column('deleted', Boolean(), default=0, nullable=False),
Column('deleted_at', DateTime()),
)
"""039_region.py
"""
instances = table("instances", column("region_id", String))
op.execute(
instances.update()
.values({"region_id": CONF.service_credentials.region_name})
)
"""040_module_priority.py
"""
# updated the modules table schema
"""041_instance_keys.py
"""
# updated the instances table schema
"""042_add_cluster_configuration_id.py
"""
# updated the clusters table schema
"""043_instance_ds_version_nullable.py
"""
# updated the instances table schema
"""044_remove_datastore_configuration_parameters_deleted.py
"""
# updated the datastore_configuration_parameters table schema
"""045_add_backup_strategy.py
"""
op.create_table(
'backup_strategy',
Column('id', String(36), primary_key=True, nullable=False),
Column('tenant_id', String(36), nullable=False),
Column('instance_id', String(36), nullable=False, default=''),
Column('backend', String(255), nullable=False),
Column('swift_container', String(255), nullable=True),
Column('created', DateTime()),
UniqueConstraint(
'tenant_id', 'instance_id',
name='UQ_backup_strategy_tenant_id_instance_id'),
)
op.create_index('backup_strategy_tenant_id_instance_id',
'backup_strategy', ['tenant_id', 'instance_id'])
"""046_add_access_to_instance.py
"""
# updated the instances table schema
"""047_image_tag_in_datastore_version.py
"""
# updated the datastore_versions table schema
"""048_add_version_to_datastore_version.py
"""
# updated the datastore_versions table schema
"""049_add_registry_ext_to_datastore_version.py
"""
def downgrade() -> None:
pass
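The consolidated migration above collapses the old integer-numbered sqlalchemy-migrate scripts into a single Alembic revision. Under Alembic, each revision names itself and its parent by id, forming the dependency graph mentioned in the commit message. A minimal sketch of such a revision module — the ids below are illustrative, not real Trove revisions:

```python
# Hypothetical Alembic revision module; the revision ids are
# illustrative only, not revisions from the Trove tree.
from typing import Sequence, Union

revision: str = 'a1b2c3d4e5f6'                      # this revision's id
down_revision: Union[str, None] = '5c68b4fb3cd1'    # parent in the graph
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None  # cross-branch dependency


def upgrade() -> None:
    # schema changes would be issued here via alembic.op
    pass


def downgrade() -> None:
    pass
```

`down_revision`, `branch_labels`, and `depends_on` are what make the history a graph rather than a line: two revisions can share a parent and be merged later, which is why the versioning is no longer a simple integer sequence.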

@@ -0,0 +1,42 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""drop-migrate-version-table

Revision ID: cee1bcba3541
Revises: 5c68b4fb3cd1
Create Date: 2024-06-05 14:27:15.530991

"""
from typing import Sequence, Union

from alembic import op
from sqlalchemy.engine import reflection

# revision identifiers, used by Alembic.
revision: str = 'cee1bcba3541'
down_revision: Union[str, None] = '5c68b4fb3cd1'
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None


def upgrade() -> None:
    conn = op.get_bind()
    inspector = reflection.Inspector.from_engine(conn)
    tables = inspector.get_table_names()
    if 'migrate_version' in tables:
        op.drop_table('migrate_version')


def downgrade() -> None:
    pass
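The `upgrade()` above drops sqlalchemy-migrate's bookkeeping table only when it is actually present. The same guard can be sketched with plain SQLAlchemy; note that `sqlalchemy.inspect()` is the non-deprecated counterpart of the `reflection.Inspector.from_engine` call used in the revision:

```python
# "Drop the legacy table only if it exists", sketched standalone
# against an in-memory SQLite database.
import sqlalchemy as sa

engine = sa.create_engine('sqlite:///:memory:')
with engine.begin() as conn:
    conn.execute(sa.text('CREATE TABLE migrate_version (v INTEGER)'))

# inspect() reflects the live schema; the membership test mirrors
# the revision's "if 'migrate_version' in tables" guard
if 'migrate_version' in sa.inspect(engine).get_table_names():
    with engine.begin() as conn:
        conn.execute(sa.text('DROP TABLE migrate_version'))

remaining = sa.inspect(engine).get_table_names()
```

Guarding the drop makes the revision safe both for databases upgraded from sqlalchemy-migrate and for fresh deployments that never had the table.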

@@ -89,7 +89,7 @@ def get_facade():
 def get_engine(use_slave=False):
-    _check_facade()
+    _create_facade(CONF)
     return _FACADE.get_engine(use_slave=use_slave)
@@ -105,7 +105,7 @@ def clean_db():
     engine = get_engine()
     meta = MetaData()
     meta.bind = engine
-    meta.reflect()
+    meta.reflect(bind=engine)
     with contextlib.closing(engine.connect()) as con:
         trans = con.begin()  # pylint: disable=E1101
         for table in reversed(meta.sorted_tables):
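The `clean_db` hunk reflects a SQLAlchemy 2.0 change: `MetaData` no longer carries an implicit bind, so `reflect()` must be handed the engine explicitly. A standalone sketch of the pattern:

```python
# MetaData.bind was removed in SQLAlchemy 2.0, so reflect() takes the
# engine explicitly, as the clean_db() hunk above now does.
import sqlalchemy as sa

engine = sa.create_engine('sqlite:///:memory:')
with engine.begin() as conn:
    conn.execute(sa.text('CREATE TABLE notes (id INTEGER PRIMARY KEY)'))

meta = sa.MetaData()
meta.reflect(bind=engine)  # explicit bind instead of meta.bind = engine
table_names = sorted(meta.tables)
```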

@@ -13,6 +13,8 @@
 # limitations under the License.
 import semantic_version
+from sqlalchemy.sql.expression import text
 from trove.common import cfg
 from trove.guestagent.datastore.mysql_common import service
 from trove.guestagent.utils import mysql as mysql_util
@@ -31,15 +33,16 @@ class MySqlApp(service.BaseMySqlApp):
     def _get_gtid_executed(self):
         with mysql_util.SqlClient(self.get_engine()) as client:
-            return client.execute('SELECT @@global.gtid_executed').first()[0]
+            return client.execute(
+                text('SELECT @@global.gtid_executed')).first()[0]
 
     def _get_slave_status(self):
         with mysql_util.SqlClient(self.get_engine()) as client:
-            return client.execute('SHOW SLAVE STATUS').first()
+            return client.execute(text('SHOW SLAVE STATUS')).first()
 
     def _get_master_UUID(self):
         slave_status = self._get_slave_status()
-        return slave_status and slave_status['Master_UUID'] or None
+        return slave_status and slave_status._mapping['Master_UUID'] or None
 
     def get_latest_txn_id(self):
         return self._get_gtid_executed()
@@ -57,8 +60,8 @@ class MySqlApp(service.BaseMySqlApp):
     def wait_for_txn(self, txn):
         with mysql_util.SqlClient(self.get_engine()) as client:
-            client.execute("SELECT WAIT_UNTIL_SQL_THREAD_AFTER_GTIDS('%s')"
-                           % txn)
+            client.execute(
+                text("SELECT WAIT_UNTIL_SQL_THREAD_AFTER_GTIDS('%s')" % txn))
 
     def get_backup_image(self):
         """Get the actual container image based on datastore version.

@@ -119,8 +119,9 @@ class BaseMySqlAdmin(object, metaclass=abc.ABCMeta):
         db_result = client.execute(t)
         for db in db_result:
             LOG.debug("\t db: %s.", db)
-            if db['grantee'] == "'%s'@'%s'" % (user.name, user.host):
-                user.databases = db['table_schema']
+            if db._mapping['grantee'] == "'%s'@'%s'" % (user.name,
+                                                        user.host):
+                user.databases = db._mapping['table_schema']
 
     def change_passwords(self, users):
         """Change the passwords of one or more existing users."""
@@ -264,7 +265,7 @@ class BaseMySqlAdmin(object, metaclass=abc.ABCMeta):
         if len(result) != 1:
             return None
         found_user = result[0]
-        user.host = found_user['Host']
+        user.host = found_user._mapping['Host']
         self._associate_dbs(user)
         return user
@@ -404,11 +405,11 @@ class BaseMySqlAdmin(object, metaclass=abc.ABCMeta):
                 if limit is not None and count >= limit:
                     break
                 LOG.debug("user = %s", str(row))
-                mysql_user = models.MySQLUser(name=row['User'],
-                                              host=row['Host'])
+                mysql_user = models.MySQLUser(name=row._mapping['User'],
+                                              host=row._mapping['Host'])
                 mysql_user.check_reserved()
                 self._associate_dbs(mysql_user)
-                next_marker = row['Marker']
+                next_marker = row._mapping['Marker']
                 users.append(mysql_user.serialize())
             if limit is not None and result.rowcount <= limit:
                 next_marker = None
@@ -478,6 +479,8 @@ class BaseMySqlApp(service.BaseDbApp):
     def execute_sql(self, sql_statement, use_flush=False):
         LOG.debug("Executing SQL: %s", sql_statement)
+        if isinstance(sql_statement, str):
+            sql_statement = text(sql_statement)
         with mysql_util.SqlClient(
             self.get_engine(), use_flush=use_flush) as client:
             return client.execute(sql_statement)
@@ -775,14 +778,14 @@ class BaseMySqlApp(service.BaseDbApp):
     def get_port(self):
         with mysql_util.SqlClient(self.get_engine()) as client:
-            result = client.execute('SELECT @@port').first()
+            result = client.execute(text('SELECT @@port')).first()
             return result[0]
 
     def wait_for_slave_status(self, status, client, max_time):
         def verify_slave_status():
-            ret = client.execute(
-                "SELECT SERVICE_STATE FROM "
-                "performance_schema.replication_connection_status").first()
+            ret = client.execute(text(
                "SELECT SERVICE_STATE FROM "
                "performance_schema.replication_connection_status")).first()
             if not ret:
                 actual_status = 'OFF'
             else:
@@ -803,7 +806,7 @@ class BaseMySqlApp(service.BaseDbApp):
     def start_slave(self):
         LOG.info("Starting slave replication.")
         with mysql_util.SqlClient(self.get_engine()) as client:
-            client.execute('START SLAVE')
+            client.execute(text('START SLAVE'))
             self.wait_for_slave_status("ON", client, 180)
 
     def stop_slave(self, for_failover):
@@ -811,13 +814,13 @@ class BaseMySqlApp(service.BaseDbApp):
         replication_user = None
         with mysql_util.SqlClient(self.get_engine()) as client:
-            result = client.execute('SHOW SLAVE STATUS')
+            result = client.execute(text('SHOW SLAVE STATUS'))
             replication_user = result.first()['Master_User']
-            client.execute('STOP SLAVE')
-            client.execute('RESET SLAVE ALL')
+            client.execute(text('STOP SLAVE'))
+            client.execute(text('RESET SLAVE ALL'))
             self.wait_for_slave_status('OFF', client, 180)
             if not for_failover:
-                client.execute('DROP USER IF EXISTS ' + replication_user)
+                client.execute(text('DROP USER IF EXISTS ' + replication_user))
 
         return {
             'replication_user': replication_user
@@ -826,7 +829,7 @@ class BaseMySqlApp(service.BaseDbApp):
     def stop_master(self):
         LOG.info("Stopping replication master.")
         with mysql_util.SqlClient(self.get_engine()) as client:
-            client.execute('RESET MASTER')
+            client.execute(text('RESET MASTER'))
 
     def make_read_only(self, read_only):
         with mysql_util.SqlClient(self.get_engine()) as client:
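The hunks above apply two SQLAlchemy 2.0 idioms throughout the MySQL guest code: raw SQL strings are wrapped in `text()` before execution, and row columns are read through `Row._mapping` instead of dict-style access. A standalone sketch against plain SQLAlchemy (not Trove's `SqlClient` wrapper):

```python
# Demonstrates the two SQLAlchemy 2.0 idioms used in this change:
# text() around raw SQL, and Row._mapping for name-based column access.
import sqlalchemy as sa

engine = sa.create_engine('sqlite:///:memory:')
with engine.connect() as conn:
    # a bare string is rejected by Connection.execute() in 2.0
    row = conn.execute(sa.text("SELECT 'repl_user' AS master_user")).first()

# row['master_user'] worked in 1.3 but is gone in 2.0
master_user = row._mapping['master_user']
```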

@@ -46,6 +46,8 @@ class SqlClient(object):
     def execute(self, t, **kwargs):
         LOG.debug('Execute SQL: %s', t)
+        if isinstance(t, str):
+            t = text(t)
         try:
             return self.conn.execute(t, kwargs)
         except Exception as err:

@@ -51,12 +51,14 @@ class ClusterTest(trove_testtools.TestCase):
         self.cluster_name = "Cluster" + self.cluster_id
         self.tenant_id = "23423432"
         self.dv_id = "1"
+        self.configuration_id = "2"
         self.db_info = DBCluster(ClusterTasks.NONE,
                                  id=self.cluster_id,
                                  name=self.cluster_name,
                                  tenant_id=self.tenant_id,
                                  datastore_version_id=self.dv_id,
-                                 task_id=ClusterTasks.NONE._code)
+                                 task_id=ClusterTasks.NONE._code,
+                                 configuration_id=self.configuration_id)
         self.context = trove_testtools.TroveTestContext(self)
         self.datastore = Mock()
         self.dv = Mock()

@@ -16,13 +16,11 @@ from unittest.mock import Mock
 from trove.common import template
 from trove.datastore.models import DatastoreVersion
 from trove.tests.unittests import trove_testtools
-from trove.tests.unittests.util import util
 
 
 class TemplateTest(trove_testtools.TestCase):
     def setUp(self):
         super(TemplateTest, self).setUp()
-        util.init_db()
         self.env = template.ENV
         self.template = self.env.get_template("mysql/config.template")
         self.flavor_dict = {'ram': 1024, 'name': 'small', 'id': '55'}

@@ -24,7 +24,6 @@ from trove.conductor import manager as conductor_manager
 from trove.instance import models as t_models
 from trove.instance.service_status import ServiceStatuses
 from trove.tests.unittests import trove_testtools
-from trove.tests.unittests.util import util
 
 # See LP bug #1255178
 OLD_DBB_SAVE = bkup_models.DBBackup.save
@@ -35,7 +34,6 @@ class ConductorMethodTests(trove_testtools.TestCase):
         # See LP bug #1255178
         bkup_models.DBBackup.save = OLD_DBB_SAVE
         super(ConductorMethodTests, self).setUp()
-        util.init_db()
         self.cond_mgr = conductor_manager.Manager()
         self.instance_id = utils.generate_uuid()

@@ -28,7 +28,6 @@ CONF = cfg.CONF
 class TestConfigurationsController(trove_testtools.TestCase):
     @classmethod
     def setUpClass(cls):
-        util.init_db()
         cls.ds_name = cls.random_name(
             'datastore', prefix='TestConfigurationsController')

@@ -19,13 +19,11 @@ from trove.datastore.models import DatastoreVersion
 from trove.datastore.models import DatastoreVersionMetadata
 from trove.datastore.models import DBCapabilityOverrides
 from trove.tests.unittests import trove_testtools
-from trove.tests.unittests.util import util
 
 
 class TestDatastoreBase(trove_testtools.TestCase):
     @classmethod
     def setUpClass(cls):
-        util.init_db()
         cls.ds_name = cls.random_name(name='test-datastore')
         cls.ds_version_name = cls.random_name(name='test-version')

@@ -0,0 +1,48 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import unittest
from unittest.mock import Mock, MagicMock

from trove.common import exception
from trove.db.sqlalchemy import api


class TestDbSqlalchemyApi(unittest.TestCase):

    def test_db_sync_alembic(self):
        api._configure_alembic = MagicMock(return_value=True)
        api._get_alembic_revision = MagicMock(return_value='head')
        api.alembic_command.upgrade = Mock()
        api.db_sync({})
        self.assertTrue(api.alembic_command.upgrade.called)

    def test_db_sync_sqlalchemy_migrate(self):
        api._configure_alembic = MagicMock(return_value=False)
        with self.assertRaises(exception.BadRequest) as ex:
            api.db_sync({})
        self.assertIn('sqlalchemy-migrate is no longer supported',
                      str(ex.exception))

    def test_db_upgrade_alembic(self):
        api._configure_alembic = MagicMock(return_value=True)
        api.alembic_command.upgrade = Mock()
        api.db_upgrade({})
        self.assertTrue(api.alembic_command.upgrade.called)

    def test_db_upgrade_sqlalchemy_migrate(self):
        api._configure_alembic = MagicMock(return_value=False)
        with self.assertRaises(exception.BadRequest) as ex:
            api.db_upgrade({})
        self.assertIn('sqlalchemy-migrate is no longer supported',
                      str(ex.exception))

@@ -1,110 +0,0 @@
# Copyright 2014 Tesora Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from unittest.mock import call
from unittest.mock import Mock
from unittest.mock import patch
from sqlalchemy.engine import reflection
from sqlalchemy.schema import Column
from trove.db.sqlalchemy.migrate_repo.schema import String
from trove.db.sqlalchemy import utils as db_utils
from trove.tests.unittests import trove_testtools
class TestDbMigrationUtils(trove_testtools.TestCase):
def setUp(self):
super(TestDbMigrationUtils, self).setUp()
def tearDown(self):
super(TestDbMigrationUtils, self).tearDown()
@patch.object(reflection.Inspector, 'from_engine')
def test_get_foreign_key_constraint_names_single_match(self,
mock_inspector):
mock_engine = Mock()
(mock_inspector.return_value.
get_foreign_keys.return_value) = [{'constrained_columns': ['col1'],
'referred_table': 'ref_table1',
'referred_columns': ['ref_col1'],
'name': 'constraint1'},
{'constrained_columns': ['col2'],
'referred_table': 'ref_table2',
'referred_columns': ['ref_col2'],
'name': 'constraint2'}]
ret_val = db_utils.get_foreign_key_constraint_names(mock_engine,
'table1',
['col1'],
'ref_table1',
['ref_col1'])
self.assertEqual(['constraint1'], ret_val)
@patch.object(reflection.Inspector, 'from_engine')
def test_get_foreign_key_constraint_names_multi_match(self,
mock_inspector):
mock_engine = Mock()
(mock_inspector.return_value.
get_foreign_keys.return_value) = [
{'constrained_columns': ['col1'],
'referred_table': 'ref_table1',
'referred_columns': ['ref_col1'],
'name': 'constraint1'},
{'constrained_columns': ['col2', 'col3'],
'referred_table': 'ref_table1',
'referred_columns': ['ref_col2', 'ref_col3'],
'name': 'constraint2'},
{'constrained_columns': ['col2', 'col3'],
'referred_table': 'ref_table1',
'referred_columns': ['ref_col2', 'ref_col3'],
'name': 'constraint3'},
{'constrained_columns': ['col4'],
'referred_table': 'ref_table2',
'referred_columns': ['ref_col4'],
'name': 'constraint4'}]
ret_val = db_utils.get_foreign_key_constraint_names(
mock_engine, 'table1', ['col2', 'col3'],
'ref_table1', ['ref_col2', 'ref_col3'])
self.assertEqual(['constraint2', 'constraint3'], ret_val)
@patch.object(reflection.Inspector, 'from_engine')
def test_get_foreign_key_constraint_names_no_match(self, mock_inspector):
mock_engine = Mock()
(mock_inspector.return_value.
get_foreign_keys.return_value) = []
ret_val = db_utils.get_foreign_key_constraint_names(mock_engine,
'table1',
['col1'],
'ref_table1',
['ref_col1'])
self.assertEqual([], ret_val)
@patch('trove.db.sqlalchemy.utils.ForeignKeyConstraint')
def test_drop_foreign_key_constraints(self, mock_constraint):
test_columns = [Column('col1', String(5)),
Column('col2', String(5))]
test_refcolumns = [Column('ref_col1', String(5)),
Column('ref_col2', String(5))]
test_constraint_names = ['constraint1', 'constraint2']
db_utils.drop_foreign_key_constraints(test_constraint_names,
test_columns,
test_refcolumns)
expected = [call(columns=test_columns,
refcolumns=test_refcolumns,
name='constraint1'),
call(columns=test_columns,
refcolumns=test_refcolumns,
name='constraint2')]
self.assertEqual(expected, mock_constraint.call_args_list)

@@ -31,7 +31,6 @@ from trove.tests.unittests.util import util
 class TestDatastoreVersionController(trove_testtools.TestCase):
     @classmethod
     def setUpClass(cls):
-        util.init_db()
         cls.ds_name = cls.random_name('datastore')
         cls.ds_version_number = '5.7.30'
         models.update_datastore(name=cls.ds_name, default_version=None)

View File

@@ -41,7 +41,6 @@ from trove.instance import service_status as srvstatus
 from trove.instance.tasks import InstanceTasks
 from trove import rpc
 from trove.tests.unittests import trove_testtools
-from trove.tests.unittests.util import util
 
 CONF = cfg.CONF
@@ -50,7 +49,6 @@ class MockMgmtInstanceTest(trove_testtools.TestCase):
     @classmethod
     def setUpClass(cls):
-        util.init_db()
         cls.version_id = str(uuid.uuid4())
         cls.datastore = datastore_models.DBDatastore.create(
             id=str(uuid.uuid4()),

View File

@@ -24,7 +24,6 @@ from trove.tests.unittests.util import util
 class TestMgmtInstanceController(trove_testtools.TestCase):
     @classmethod
     def setUpClass(cls):
-        util.init_db()
         cls.controller = ins_service.MgmtInstanceController()
         cls.ds_name = cls.random_name('datastore')

View File

@@ -23,7 +23,6 @@ from trove.tests.unittests.util import util
 class TestQuotaController(trove_testtools.TestCase):
     @classmethod
     def setUpClass(cls):
-        util.init_db()
         cls.controller = quota_service.QuotaController()
         cls.admin_project_id = cls.random_uuid()
         super(TestQuotaController, cls).setUpClass()

View File

@@ -34,7 +34,6 @@ from trove.instance.tasks import InstanceTasks
 from trove.taskmanager import api as task_api
 from trove.tests.fakes import nova
 from trove.tests.unittests import trove_testtools
-from trove.tests.unittests.util import util
 
 CONF = cfg.CONF
@@ -117,7 +116,6 @@ class CreateInstanceTest(trove_testtools.TestCase):
     @patch.object(task_api.API, 'get_client', Mock(return_value=Mock()))
     def setUp(self):
-        util.init_db()
         self.context = trove_testtools.TroveTestContext(self, is_admin=True)
         self.name = "name"
         self.flavor_id = 5
@@ -252,7 +250,6 @@ class TestInstanceUpgrade(trove_testtools.TestCase):
     def setUp(self):
         self.context = trove_testtools.TroveTestContext(self, is_admin=True)
-        util.init_db()
         self.datastore = datastore_models.DBDatastore.create(
             id=str(uuid.uuid4()),
@@ -329,7 +326,6 @@ class TestInstanceUpgrade(trove_testtools.TestCase):
 class TestReplication(trove_testtools.TestCase):
     def setUp(self):
-        util.init_db()
         self.datastore = datastore_models.DBDatastore.create(
             id=str(uuid.uuid4()),

View File

@@ -46,7 +46,6 @@ class FakeDBInstance(object):
 class BaseInstanceStatusTestCase(trove_testtools.TestCase):
     @classmethod
     def setUpClass(cls):
-        util.init_db()
         cls.db_info = FakeDBInstance()
         cls.datastore = models.DBDatastore.create(
             id=str(uuid.uuid4()),

View File

@@ -31,7 +31,6 @@ CONF = cfg.CONF
 class TestInstanceController(trove_testtools.TestCase):
     @classmethod
     def setUpClass(cls):
-        util.init_db()
         cls.ds_name = cls.random_name('datastore',
                                       prefix='TestInstanceController')

View File

@@ -23,14 +23,12 @@ from trove.datastore import models as datastore_models
 from trove.module import models
 from trove.taskmanager import api as task_api
 from trove.tests.unittests import trove_testtools
-from trove.tests.unittests.util import util
 
 
 class CreateModuleTest(trove_testtools.TestCase):
     @patch.object(task_api.API, 'get_client', Mock(return_value=Mock()))
     def setUp(self):
-        util.init_db()
         self.context = Mock()
         self.name = "name"
         self.module_type = 'ping'

View File

@@ -31,13 +31,11 @@ from trove.instance.models import InstanceServiceStatus
 from trove.instance.models import InstanceTasks
 from trove.instance.service_status import ServiceStatuses
 from trove.tests.unittests import trove_testtools
-from trove.tests.unittests.util import util
 
 
 class GaleraClusterTasksTest(trove_testtools.TestCase):
     def setUp(self):
         super(GaleraClusterTasksTest, self).setUp()
-        util.init_db()
         self.cluster_id = "1232"
         self.cluster_name = "Cluster-1234"
         self.tenant_id = "6789"

View File

@@ -61,7 +61,6 @@ from trove.instance.tasks import InstanceTasks
 from trove import rpc
 from trove.taskmanager import models as taskmanager_models
 from trove.tests.unittests import trove_testtools
-from trove.tests.unittests.util import util
 
 INST_ID = 'dbinst-id-1'
 VOLUME_ID = 'volume-id-1'
@@ -1205,7 +1204,6 @@ class RootReportTest(trove_testtools.TestCase):
     def setUp(self):
         super(RootReportTest, self).setUp()
-        util.init_db()
 
     def tearDown(self):
         super(RootReportTest, self).tearDown()

View File

@ -46,11 +46,6 @@
tempest_concurrency: 1 tempest_concurrency: 1
devstack_localrc: devstack_localrc:
TEMPEST_PLUGINS: /opt/stack/trove-tempest-plugin TEMPEST_PLUGINS: /opt/stack/trove-tempest-plugin
USE_PYTHON3: true
Q_AGENT: openvswitch
Q_PLUGIN: ml2
Q_ML2_TENANT_NETWORK_TYPE: vxlan
Q_ML2_PLUGIN_MECHANISM_DRIVERS: openvswitch
SYNC_LOG_TO_CONTROLLER: True SYNC_LOG_TO_CONTROLLER: True
TROVE_DATASTORE_VERSION: 5.7 TROVE_DATASTORE_VERSION: 5.7
TROVE_AGENT_CALL_HIGH_TIMEOUT: 1800 TROVE_AGENT_CALL_HIGH_TIMEOUT: 1800
@ -87,14 +82,6 @@
s-proxy: true s-proxy: true
tls-proxy: true tls-proxy: true
tempest: true tempest: true
q-svc: true
q-agt: true
q-dhcp: true
q-l3: true
q-meta: true
q-ovn-metadata-agent: false
ovn-controller: false
ovn-northd: false
tempest_test_regex: ^trove_tempest_plugin\.tests tempest_test_regex: ^trove_tempest_plugin\.tests
tempest_test_timeout: 3600 tempest_test_timeout: 3600
zuul_copy_output: zuul_copy_output: