keystone.common.sql package

Subpackages

Submodules

keystone.common.sql.core module

SQL backends for the various services.

Before using this module, call initialize(). This has to be done before CONF() because it sets up configuration options.

class keystone.common.sql.core.DateTimeInt(*args, **kwargs)[source]

Bases: sqlalchemy.sql.type_api.TypeDecorator

A column that automatically converts a datetime object to an Int.

Keystone relies on accurate (sub-second) datetime objects. In some cases the RDBMS drops sub-second accuracy (for example, some versions of MySQL). This field automatically converts the value to an INT when storing the data and back to a datetime object when it is loaded from the database.

NOTE: Any datetime object that has timezone data will be converted to UTC.

Any datetime object that has no timezone data will be assumed to be UTC and loaded from the DB as such.

epoch = datetime.datetime(1970, 1, 1, 0, 0, tzinfo=<UTC>)
impl

alias of sqlalchemy.sql.sqltypes.BigInteger

process_bind_param(value, dialect)[source]

Receive a bound parameter value to be converted.

Subclasses override this method to return the value that should be passed along to the underlying TypeEngine object, and from there to the DBAPI execute() method.

The operation could be anything desired to perform custom behavior, such as transforming or serializing data. This could also be used as a hook for validating logic.

This operation should be designed with the reverse operation in mind, which would be the process_result_value method of this class.

Parameters
  • value – Data to operate upon, of any type expected by this method in the subclass. Can be None.

  • dialect – the Dialect in use.

process_result_value(value, dialect)[source]

Receive a result-row column value to be converted.

Subclasses should implement this method to operate on data fetched from the database.

Subclasses override this method to return the value that should be passed back to the application, given a value that is already processed by the underlying TypeEngine object, originally from the DBAPI cursor method fetchone() or similar.

The operation could be anything desired to perform custom behavior, such as transforming or serializing data. This could also be used as a hook for validating logic.

Parameters
  • value – Data to operate upon, of any type expected by this method in the subclass. Can be None.

  • dialect – the Dialect in use.

This operation should be designed to be reversible by the “process_bind_param” method of this class.

class keystone.common.sql.core.JsonBlob(*args, **kwargs)[source]

Bases: sqlalchemy.sql.type_api.TypeDecorator

impl

alias of sqlalchemy.sql.sqltypes.Text

process_bind_param(value, dialect)[source]

Receive a bound parameter value to be converted.

Subclasses override this method to return the value that should be passed along to the underlying TypeEngine object, and from there to the DBAPI execute() method.

The operation could be anything desired to perform custom behavior, such as transforming or serializing data. This could also be used as a hook for validating logic.

This operation should be designed with the reverse operation in mind, which would be the process_result_value method of this class.

Parameters
  • value – Data to operate upon, of any type expected by this method in the subclass. Can be None.

  • dialect – the Dialect in use.

process_result_value(value, dialect)[source]

Receive a result-row column value to be converted.

Subclasses should implement this method to operate on data fetched from the database.

Subclasses override this method to return the value that should be passed back to the application, given a value that is already processed by the underlying TypeEngine object, originally from the DBAPI cursor method fetchone() or similar.

The operation could be anything desired to perform custom behavior, such as transforming or serializing data. This could also be used as a hook for validating logic.

Parameters
  • value – Data to operate upon, of any type expected by this method in the subclass. Can be None.

  • dialect – the Dialect in use.

This operation should be designed to be reversible by the “process_bind_param” method of this class.
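
Both column types are typically used when declaring a model. A minimal sketch (the Base, the Widget class and the table name here are hypothetical, for illustration only):

    import sqlalchemy as sa
    from sqlalchemy.ext.declarative import declarative_base

    from keystone.common.sql import core

    Base = declarative_base()


    class Widget(Base):
        # Hypothetical example model, not a real keystone table.
        __tablename__ = 'widget_example'
        id = sa.Column(sa.String(64), primary_key=True)
        # DateTimeInt: stored as a BigInteger offset from the 1970-01-01 UTC
        # epoch, exposed to Python as a datetime. Naive datetimes are assumed
        # to be UTC; aware datetimes are converted to UTC.
        expires_at = sa.Column(core.DateTimeInt(), nullable=True)
        # JsonBlob: stored as Text; values are serialized to JSON on the way
        # into the database and deserialized back to Python objects on load.
        extra = sa.Column(core.JsonBlob(), nullable=True)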

class keystone.common.sql.core.ModelDictMixin[source]

Bases: oslo_db.sqlalchemy.models.ModelBase

classmethod from_dict(d)[source]

Return a model instance from a dictionary.

to_dict()[source]

Return the model’s attributes as a dictionary.
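
A minimal sketch of the mixin in use (the model, table name and values are hypothetical):

    import sqlalchemy as sa
    from sqlalchemy.ext.declarative import declarative_base

    from keystone.common.sql import core

    Base = declarative_base()


    class Gadget(Base, core.ModelDictMixin):
        # Hypothetical example model, not a real keystone table.
        __tablename__ = 'gadget_example'
        id = sa.Column(sa.String(64), primary_key=True)
        name = sa.Column(sa.String(255))


    gadget = Gadget.from_dict({'id': 'g1', 'name': 'spanner'})
    # gadget.to_dict() returns the column values as a plain dict, e.g.
    # {'id': 'g1', 'name': 'spanner'}.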

class keystone.common.sql.core.ModelDictMixinWithExtras[source]

Bases: oslo_db.sqlalchemy.models.ModelBase

Mixin that makes a model behave with dict-like interfaces, including the extra column.

NOTE: DO NOT USE THIS FOR FUTURE SQL MODELS. The “Extra” column is a legacy concept that should not be carried forward with new SQL models, as the concept of “arbitrary” properties is not in line with the design philosophy of Keystone.

attributes = []
classmethod from_dict(d)[source]
to_dict(include_extra_dict=False)[source]

Return the model’s attributes as a dictionary.

If include_extra_dict is True, ‘extra’ attributes are literally included in the resulting dictionary twice, for backwards-compatibility with a broken implementation.
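
A sketch of the legacy extras behaviour described above, assuming a model that lists its first-class columns in attributes and stores everything else in an extra JsonBlob column (all names here are hypothetical):

    import sqlalchemy as sa
    from sqlalchemy.ext.declarative import declarative_base

    from keystone.common.sql import core

    Base = declarative_base()


    class LegacyThing(Base, core.ModelDictMixinWithExtras):
        # Hypothetical legacy-style model; new models should not use this mixin.
        __tablename__ = 'legacy_thing_example'
        attributes = ['id', 'name']
        id = sa.Column(sa.String(64), primary_key=True)
        name = sa.Column(sa.String(255))
        extra = sa.Column(core.JsonBlob())


    # Keys not listed in attributes are expected to end up in the extra column.
    thing = LegacyThing.from_dict({'id': 't1', 'name': 'doodad', 'colour': 'blue'})
    # to_dict() flattens the extras back into the result; with
    # include_extra_dict=True the raw extra dict is also included under an
    # 'extra' key, so those values appear twice.
    d = thing.to_dict(include_extra_dict=True)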

keystone.common.sql.core.cleanup()[source]
keystone.common.sql.core.enable_sqlite_foreign_key()[source]
keystone.common.sql.core.filter_limit_query(model, query, hints)[source]

Apply filtering and limit to a query.

Parameters
  • model – table model

  • query – query to apply filters to

  • hints – contains the list of filters and limit details. This may be None, indicating that there are no filters or limits to be applied. If it’s not None, then any filters satisfied here will be removed so that the caller will know if any filters remain.

Returns

query updated with any filters and limits satisfied
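
A minimal sketch of how a backend list method typically combines this with driver hints (list_gadgets is a hypothetical helper and Gadget stands in for any model, e.g. the one sketched above):

    from keystone.common import driver_hints
    from keystone.common import sql


    def list_gadgets(hints=None):
        # Gadget is a hypothetical model defined elsewhere.
        with sql.session_for_read() as session:
            query = session.query(Gadget)
            refs = sql.filter_limit_query(Gadget, query, hints)
            return [ref.to_dict() for ref in refs]


    # The caller describes the filters it wants; any filter satisfied at the
    # SQL layer is removed from hints, so the caller knows whether anything
    # still has to be filtered in Python.
    hints = driver_hints.Hints()
    hints.add_filter('name', 'spanner')
    gadgets = list_gadgets(hints)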

keystone.common.sql.core.handle_conflicts(conflict_type='object')[source]

Convert select sqlalchemy exceptions into HTTP 409 Conflict.
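
In keystone's SQL backends this is typically applied as a decorator on create/update methods, so a duplicate entry raised by the database layer surfaces as a 409 Conflict. A minimal sketch (the driver class, model and conflict_type value are hypothetical):

    from keystone.common import sql


    class GadgetDriver(object):

        @sql.handle_conflicts(conflict_type='gadget')
        def create_gadget(self, gadget_id, gadget):
            # Gadget is a hypothetical model; inserting a duplicate primary
            # key would raise a database duplicate-entry error, which the
            # decorator converts into an HTTP 409 Conflict.
            with sql.session_for_write() as session:
                ref = Gadget.from_dict(gadget)
                session.add(ref)
                return ref.to_dict()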

keystone.common.sql.core.initialize()[source]

Initialize the module.

keystone.common.sql.core.initialize_decorator(init)[source]

Ensure that the length of string fields does not exceed the limit.

This decorator checks the arguments passed to __init__ to make sure that the length of any string field does not exceed its length limit, and raises a ‘StringLengthExceeded’ exception if it does.

A decorator is used instead of inheritance because the metaclass checks __tablename__, primary key columns, etc. at class definition time.

keystone.common.sql.core.session_for_read()[source]
keystone.common.sql.core.session_for_write()[source]
keystone.common.sql.core.truncated(f)[source]

keystone.common.sql.upgrades module

class keystone.common.sql.upgrades.Repository(engine, repo_name)[source]

Bases: object

upgrade(version=None, current_schema=None)[source]
property version
keystone.common.sql.upgrades.add_constraints(constraints)[source]
keystone.common.sql.upgrades.contract_schema()[source]

Contract the database.

This is run manually by the keystone-manage command once the keystone nodes have been upgraded to the latest release and will remove any old tables/columns that are no longer required.

keystone.common.sql.upgrades.expand_schema()[source]

Expand the database schema ahead of data migration.

This is run manually by the keystone-manage command before the first keystone node is migrated to the latest release.

keystone.common.sql.upgrades.find_repo(repo_name)[source]

Return the absolute path to the named repository.

keystone.common.sql.upgrades.get_constraints_names(table, column_name)[source]
keystone.common.sql.upgrades.get_db_version(repo='migrate_repo')[source]
keystone.common.sql.upgrades.get_init_version(abs_path=None)[source]

Get the initial version of a migrate repository.

Parameters

abs_path – Absolute path to migrate repository.

Returns

initial version number or None, if DB is empty.

keystone.common.sql.upgrades.migrate_data()[source]

Migrate data to match the new schema.

This is run manually by the keystone-manage command once the keystone schema has been expanded for the new release.

keystone.common.sql.upgrades.offline_sync_database_to_version(version=None)[source]

Perform an offline sync of the database.

Migrate the database up to the latest version, doing the equivalent of the --expand, --migrate and --contract cycle, for when an offline upgrade is being performed.

If a version is specified, only the main database migration is carried out up to that version; the expand, migrate and contract phases will NOT be run. Downgrading is not supported.
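
The three phases also exist as module-level functions here; a minimal sketch of the ordering an offline upgrade performs (in practice these are driven by the keystone-manage command rather than called directly):

    from keystone.common.sql import upgrades

    upgrades.expand_schema()      # add the new tables/columns for the release
    upgrades.migrate_data()       # move data to match the expanded schema
    upgrades.contract_schema()    # drop tables/columns no longer required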

keystone.common.sql.upgrades.remove_constraints(constraints)[source]
keystone.common.sql.upgrades.validate_upgrade_order(repo_name, target_repo_version=None)[source]

Validate the state of the migration repositories.

This is run before allowing the db_sync command to execute. It ensures that the upgrade step and version specified by the operator remain consistent with the upgrade process, i.e. expand’s version is greater than or equal to migrate’s, and migrate’s version is greater than or equal to contract’s.

Parameters
  • repo_name – The name of the repository that the user is trying to upgrade.

  • target_repo_version – The version to upgrade the repo to. If not specified, the repo will be upgraded to the latest available version.

Module contents