MASAKARI(1) | masakari | MASAKARI(1) |
masakari - masakari 17.0.0
Masakari is an OpenStack project designed to ensure high availability of instances and compute processes running on hosts.
This documentation is intended to help explain the current scope of the Masakari project and the architectural decisions made to support this scope. The documentation will include the future architectural roadmap and the current development process and policies.
The Masakari API is extensive. We provide a concept guide which gives some of the high level details, as well as a more detailed API reference.
A detailed install guide for masakari.
Masakari provides Virtual Machine High Availability (VMHA) and rescues KVM-based Virtual Machines (VMs) from the failure events described below:
The services below enable deployers to integrate with Masakari directly or through custom plug-ins.
The Masakari service consists of the following components:
This section describes how to install and configure Masakari services on the compute node.
This section assumes that you already have a working OpenStack environment with the following components installed: Nova, Glance, Cinder, Neutron and Identity.
The installation and configuration vary by distribution.
This section describes how to install and configure Masakari for Ubuntu 18.04 (bionic).
Before you install and configure the masakari service, you must create databases, service credentials, and API endpoints.
# mysql
mysql> CREATE DATABASE masakari CHARACTER SET utf8;
mysql> GRANT ALL PRIVILEGES ON masakari.* TO 'username'@'localhost' \
  IDENTIFIED BY 'MASAKARI_DBPASS';
mysql> GRANT ALL PRIVILEGES ON masakari.* TO 'username'@'%' \
  IDENTIFIED BY 'MASAKARI_DBPASS';
Replace MASAKARI_DBPASS with a suitable password.
$ . admin-openrc
$ openstack user create --password-prompt masakari
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 8a7dbf5279404537b1c7b86c033620fe |
| name                | masakari                         |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
$ openstack role add --project service --user masakari admin
$ openstack service create --name masakari \
  --description "masakari high availability" instance-ha
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | masakari high availability       |
| enabled     | True                             |
| id          | 060d59eac51b4594815603d75a00aba2 |
| name        | masakari                         |
| type        | instance-ha                      |
+-------------+----------------------------------+
$ openstack endpoint create --region RegionOne \
  masakari public http://<CONTROLLER_IP>/instance-ha/v1/$\(tenant_id\)s
+--------------+-----------------------------------------------------+
| Field        | Value                                               |
+--------------+-----------------------------------------------------+
| enabled      | True                                                |
| id           | 38f7af91666a47cfb97b4dc790b94424                    |
| interface    | public                                              |
| region       | RegionOne                                           |
| region_id    | RegionOne                                           |
| service_id   | 060d59eac51b4594815603d75a00aba2                    |
| service_name | masakari                                            |
| service_type | instance-ha                                         |
| url          | http://<CONTROLLER_IP>/instance-ha/v1/$(tenant_id)s |
+--------------+-----------------------------------------------------+

$ openstack endpoint create --region RegionOne \
  masakari internal http://<CONTROLLER_IP>/instance-ha/v1/$\(tenant_id\)s
+--------------+-----------------------------------------------------+
| Field        | Value                                               |
+--------------+-----------------------------------------------------+
| enabled      | True                                                |
| id           | 38f7af91666a47cfb97b4dc790b94424                    |
| interface    | internal                                            |
| region       | RegionOne                                           |
| region_id    | RegionOne                                           |
| service_id   | 060d59eac51b4594815603d75a00aba2                    |
| service_name | masakari                                            |
| service_type | instance-ha                                         |
| url          | http://<CONTROLLER_IP>/instance-ha/v1/$(tenant_id)s |
+--------------+-----------------------------------------------------+

$ openstack endpoint create --region RegionOne \
  masakari admin http://<CONTROLLER_IP>/instance-ha/v1/$\(tenant_id\)s
+--------------+-----------------------------------------------------+
| Field        | Value                                               |
+--------------+-----------------------------------------------------+
| enabled      | True                                                |
| id           | 38f7af91666a47cfb97b4dc790b94424                    |
| interface    | admin                                               |
| region       | RegionOne                                           |
| region_id    | RegionOne                                           |
| service_id   | 060d59eac51b4594815603d75a00aba2                    |
| service_name | masakari                                            |
| service_type | instance-ha                                         |
| url          | http://<CONTROLLER_IP>/instance-ha/v1/$(tenant_id)s |
+--------------+-----------------------------------------------------+
NOTE:
# git clone https://opendev.org/openstack/masakari.git
Go to /opt/stack/masakari and execute the command below. This will generate masakari.conf.sample, a sample configuration file, at /opt/stack/masakari/etc/masakari/:
# tox -egenconfig
# masakari.conf.sample
[DEFAULT]
transport_url = rabbit://stackrabbit:admin@<CONTROLLER_IP>:5672/
graceful_shutdown_timeout = 5
os_privileged_user_tenant = service
os_privileged_user_password = admin
os_privileged_user_auth_url = http://<CONTROLLER_IP>/identity
os_privileged_user_name = nova
logging_exception_prefix = %(color)s%(asctime)s.%(msecs)03d TRACE %(name)s [01;35m%(instance)s[00m
logging_debug_format_suffix = [00;33mfrom (pid=%(process)d) %(funcName)s %(pathname)s:%(lineno)d[00m
logging_default_format_string = %(asctime)s.%(msecs)03d %(color)s%(levelname)s %(name)s [[00;36m-%(color)s] [01;35m%(instance)s%(color)s%(message)s[00m
logging_context_format_string = %(asctime)s.%(msecs)03d %(color)s%(levelname)s %(name)s [[01;36m%(request_id)s [00;36m%(project_name)s %(user_name)s%(color)s] [01;35m%(instance)s%(color)s%(message)s[00m
use_syslog = False
debug = True
masakari_api_workers = 2

[database]
connection = mysql+pymysql://root:admin@<CONTROLLER_IP>/masakari?charset=utf8

[keystone_authtoken]
memcached_servers = localhost:11211
cafile = /opt/stack/data/ca-bundle.pem
project_domain_name = Default
project_name = service
user_domain_name = Default
password = <MASAKARI_PASS>
username = masakari
auth_url = http://<CONTROLLER_IP>/identity
auth_type = password

[taskflow]
connection = mysql+pymysql://root:admin@<CONTROLLER_IP>/masakari?charset=utf8
NOTE:
Replace MASAKARI_PASS with the password you chose for the masakari user in the Identity service.
Copy masakari.conf file to /etc/masakari/
# cp -p etc/masakari/masakari.conf.sample /etc/masakari/masakari.conf
# cd masakari
# sudo python setup.py install
# masakari-manage db sync
# masakari-api
# masakari-engine
Verify Masakari installation.
$ . admin-openrc
NOTE:
$ openstack endpoint list
+-------------+----------------+----------------------------------------------------------+
| Name        | Type           | Endpoints                                                |
+-------------+----------------+----------------------------------------------------------+
| nova_legacy | compute_legacy | RegionOne                                                |
|             |                |   public: http://controller/compute/v2/<tenant_id>       |
|             |                |                                                          |
| nova        | compute        | RegionOne                                                |
|             |                |   public: http://controller/compute/v2.1                 |
|             |                |                                                          |
| cinder      | block-storage  | RegionOne                                                |
|             |                |   public: http://controller/volume/v3/<tenant_id>        |
|             |                |                                                          |
| glance      | image          | RegionOne                                                |
|             |                |   public: http://controller/image                        |
|             |                |                                                          |
| cinderv3    | volumev3       | RegionOne                                                |
|             |                |   public: http://controller/volume/v3/<tenant_id>        |
|             |                |                                                          |
| masakari    | instance-ha    | RegionOne                                                |
|             |                |   internal: http://controller/instance-ha/v1/<tenant_id> |
|             |                | RegionOne                                                |
|             |                |   admin: http://controller/instance-ha/v1/<tenant_id>    |
|             |                | RegionOne                                                |
|             |                |   public: http://controller/instance-ha/v1/<tenant_id>   |
|             |                |                                                          |
| keystone    | identity       | RegionOne                                                |
|             |                |   public: http://controller/identity                     |
|             |                | RegionOne                                                |
|             |                |   admin: http://controller/identity                      |
|             |                |                                                          |
| cinderv2    | volumev2       | RegionOne                                                |
|             |                |   public: http://controller/volume/v2/<tenant_id>        |
|             |                |                                                          |
| placement   | placement      | RegionOne                                                |
|             |                |   public: http://controller/placement                    |
|             |                |                                                          |
| neutron     | network        | RegionOne                                                |
|             |                |   public: http://controller:9696/                        |
|             |                |                                                          |
+-------------+----------------+----------------------------------------------------------+
$ openstack segment list
NOTE:
In this section you will find information on Masakari’s command line interface.
masakari-status <category> <command> [<args>]
masakari-status is a tool that provides routines for checking the status of a Masakari deployment.
The standard pattern for executing a masakari-status command is:
masakari-status <category> <command> [<args>]
Run without arguments to see a list of available command categories:
masakari-status
Categories are:
Detailed descriptions are below:
You can also run with a category argument such as upgrade to see a list of all commands in that category:
masakari-status upgrade
These sections describe the available categories and arguments for masakari-status.
Return Codes
Return code | Description |
0 | All upgrade readiness checks passed successfully and there is nothing to do. |
1 | At least one check encountered an issue and requires further investigation. This is considered a warning but the upgrade may be OK. |
2 | There was an upgrade status check failure that needs to be investigated. This should be considered something that stops an upgrade. |
255 | An unexpected error occurred. |
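As a sketch, these return codes could be handled in a wrapper script. The helper below maps each code from the table above to a short description; the function name and message texts are illustrative, not part of masakari itself:

```python
# Hedged sketch: dispatch on masakari-status upgrade check exit codes.
# Only the codes and their meanings come from the table above.

RETURN_CODES = {
    0: "all upgrade readiness checks passed; nothing to do",
    1: "at least one check raised a warning; upgrade may still be OK",
    2: "a check failed; investigate before upgrading",
    255: "an unexpected error occurred",
}


def describe_return_code(rc):
    """Translate a masakari-status exit code into a short description."""
    return RETURN_CODES.get(rc, "unknown return code")
```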
History of Checks
7.0.0 (Stein)
masakari-manage <category> <action> [<args>]
masakari-manage controls the database by managing various admin-only aspects of masakari.
The standard pattern for executing a masakari-manage command is:
masakari-manage <category> <command> [<args>]
Run without arguments to see a list of available command categories:
masakari-manage
You can also run with a category argument such as db to see a list of all commands in that category:
masakari-manage db
These sections describe the available categories and arguments for masakari-manage.
To control and manage masakari operations, an extended command list is available in the openstack command.
The masakari service stores its API configuration settings in the api-paste.ini file.
[composite:masakari_api]
use = call:masakari.api.urlmap:urlmap_factory
/: apiversions
/v1: masakari_api_v1

[composite:masakari_api_v1]
use = call:masakari.api.auth:pipeline_factory_v1
keystone = cors http_proxy_to_wsgi request_id faultwrap sizelimit authtoken keystonecontext osapi_masakari_app_v1
noauth2 = cors http_proxy_to_wsgi request_id faultwrap sizelimit noauth2 osapi_masakari_app_v1

# filters
[filter:cors]
paste.filter_factory = oslo_middleware.cors:filter_factory
oslo_config_project = masakari

[filter:http_proxy_to_wsgi]
paste.filter_factory = oslo_middleware.http_proxy_to_wsgi:HTTPProxyToWSGI.factory

[filter:request_id]
paste.filter_factory = oslo_middleware:RequestId.factory

[filter:faultwrap]
paste.filter_factory = masakari.api.openstack:FaultWrapper.factory

[filter:sizelimit]
paste.filter_factory = oslo_middleware:RequestBodySizeLimiter.factory

[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory

[filter:keystonecontext]
paste.filter_factory = masakari.api.auth:MasakariKeystoneContext.factory

[filter:noauth2]
paste.filter_factory = masakari.api.auth:NoAuthMiddleware.factory

# apps
[app:osapi_masakari_app_v1]
paste.app_factory = masakari.api.openstack.ha:APIRouterV1.factory

[pipeline:apiversions]
pipeline = faultwrap http_proxy_to_wsgi apiversionsapp

[app:apiversionsapp]
paste.app_factory = masakari.api.openstack.ha.versions:Versions.factory
The following is an overview of all available configuration options in Masakari.
This determines the strategy to use for authentication: keystone or noauth2. ‘noauth2’ is designed for testing only, as it does no actual credential checking. ‘noauth2’ provides administrative credentials only if ‘admin’ is specified as the username.
When True, the ‘X-Forwarded-For’ header is treated as the canonical remote address. When False (the default), the ‘remote_address’ header is used.
You should only enable this if you have an HTML sanitizing proxy.
As a query can potentially return many thousands of items, you can limit the maximum number of items in a single response by setting this option.
This string is prepended to the normal URL that is returned in links to the OpenStack Masakari API. If it is empty (the default), the URLs are returned unchanged.
Determine if monkey patching should be applied.
Related options:
monkey_patch_modules: This must have values set for this option to have any effect
List of modules/decorators to monkey patch.
This option allows you to patch a decorator for all functions in specified modules.
Related options:
monkey_patch: This must be set to True for this option to have any effect
This is the message queue topic that the masakari engine ‘listens’ on. It is used when the masakari engine is started up to configure the queue, and whenever an RPC call to the masakari engine is made.
WARNING:
Interval in seconds for identifying duplicate notifications. If a received notification is identical to previous ones whose status is either new or running, and the difference between its created_timestamp and the current timestamp is less than this config option value, then the notification is considered a duplicate and ignored.
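The duplicate-detection rule above can be sketched as follows; the interval value, function name, and parameter names are assumptions for illustration, not masakari's actual code:

```python
from datetime import datetime, timedelta

# Assumed value of duplicate_notification_detection_interval, in seconds.
DUPLICATE_NOTIFICATION_DETECTION_INTERVAL = 180


def is_duplicate(created_timestamp, now, identical_payload, previous_status):
    """Return True when a notification should be ignored as a duplicate.

    A notification counts as a duplicate when it matches a previous one
    whose status is 'new' or 'running' and it arrived within the
    configured detection interval.
    """
    within_window = (now - created_timestamp) < timedelta(
        seconds=DUPLICATE_NOTIFICATION_DETECTION_INTERVAL)
    return (identical_payload
            and previous_status in ("new", "running")
            and within_window)
```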
Number of seconds to wait after a service is enabled or disabled.
Number of seconds to wait for an instance to shut down.
Interval in seconds for processing notifications which are in error or new state.
Interval in seconds for identifying notifications which are in the new state. If a notification remains in the new state for this config option value after its generated_time, it is assumed that the notification was ignored by the messaging queue, and it will be processed by the ‘process_unfinished_notifications’ periodic task.
Interval in seconds for checking running notifications.
Interval in seconds for identifying running notifications that have expired.
Number of threads to be used for evacuating and confirming instances during execution of host_failure workflow.
Defines which driver to use for executing notification workflows.
Match this value when searching for nova in the service catalog. Format is colon-separated values of the form: <service_type>:<service_name>:<endpoint_type>
Location of ca certificates file to use for nova client requests.
OpenStack privileged account username. Used for requests to other services (such as Nova) that require an account with special rights.
Password associated with the OpenStack privileged account.
Tenant name associated with the OpenStack privileged account.
Auth URL associated with the OpenStack privileged account.
User domain name associated with the OpenStack privileged account.
Project domain name associated with the OpenStack privileged account.
Directory where the masakari python module is installed
Hostname, FQDN or IP address of this host. Must be valid within AMQP key.
Possible values:
Full class name for the Manager for masakari engine
Max interval time between periodic tasks execution in seconds.
Range of seconds to randomly delay when starting the periodic task scheduler to reduce stampeding. (Disable by setting to 0)
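The fuzzy-delay behaviour described above can be sketched as: delay the first run of the periodic task scheduler by a random number of seconds in [0, periodic_fuzzy_delay] so that many workers do not fire at once. The function name is illustrative, not masakari's actual code:

```python
import random


def initial_delay(periodic_fuzzy_delay):
    """Pick a random startup delay to spread out periodic task runs."""
    if periodic_fuzzy_delay <= 0:  # 0 disables the fuzz entirely
        return 0
    return random.randint(0, periodic_fuzzy_delay)
```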
The IP address on which the Masakari API will listen.
The port on which the Masakari API will listen.
Number of workers for Masakari API service. The default will be the number of CPUs available.
If set to true, the logging level will be set to DEBUG instead of the default INFO level.
The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging configuration files, see the Python logging module documentation. Note that when logging configuration files are used then all logging configuration is set in the configuration file and other logging configuration options are ignored (for example, log-date-format).
Group | Name |
DEFAULT | log-config |
DEFAULT | log_config |
Defines the format string for %(asctime)s in log records. Default: the value above. This option is ignored if log_config_append is set.
Group | Name |
DEFAULT | logfile |
Group | Name |
DEFAULT | logdir |
Uses logging handler designed to watch file system. When log file is moved or removed this handler will open a new log file with specified path instantaneously. It makes sense only if log_file option is specified and Linux platform is used. This option is ignored if log_config_append is set.
Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if log_config_append is set.
Enable journald for logging. If running in a systemd environment you may wish to enable journal support. Doing so will use the journal native protocol which includes structured metadata in addition to log messages. This option is ignored if log_config_append is set.
Syslog facility to receive log lines. This option is ignored if log_config_append is set.
Use JSON formatting for logging. This option is ignored if log_config_append is set.
Log output to standard error. This option is ignored if log_config_append is set.
Log output to Windows Event Log.
WARNING:
The amount of time before the log files are rotated. This option is ignored unless log_rotation_type is set to “interval”.
Rotation interval type. The time of the last file change (or the time when the service was started) is used when scheduling the next rotation.
Log file maximum size in MB. This option is ignored if “log_rotation_type” is not set to “size”.
Format string to use for log messages with context. Used by oslo_log.formatters.ContextFormatter
Format string to use for log messages when context is undefined. Used by oslo_log.formatters.ContextFormatter
Additional data to append to log message when logging level for the message is DEBUG. Used by oslo_log.formatters.ContextFormatter
Prefix each line of exception output with this format. Used by oslo_log.formatters.ContextFormatter
Defines the format string for %(user_identity)s that is used in logging_context_format_string. Used by oslo_log.formatters.ContextFormatter
List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set.
The format for an instance that is passed with the log message.
The format for an instance UUID that is passed with the log message.
Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or empty string. Logs with level greater or equal to rate_limit_except_level are not filtered. An empty string means that all levels are filtered.
Enable eventlet backdoor. Acceptable values are 0, <port>, and <start>:<end>, where 0 results in listening on a random tcp port number; <port> results in listening on the specified port number (and not enabling backdoor if that port is in use); and <start>:<end> results in listening on the smallest unused port number within the specified range of port numbers. The chosen port is displayed in the service’s log file.
Enable eventlet backdoor, using the provided path as a unix socket that can receive connections. This option is mutually exclusive with ‘backdoor_port’ in that only one should be provided. If both are provided then the existence of this option overrides the usage of that option. Inside the path {pid} will be replaced with the PID of the current process.
Enables or disables logging values of all registered options when starting a service (at DEBUG level).
Specify a timeout after which a gracefully shutdown server will exit. Zero value means endless wait.
File name for the paste.deploy config for api service
A python format string that is used as the template to generate log lines. The following values can be formatted into it: client_ip, date_time, request_line, status_code, body_length, wall_seconds.
Sets the value of TCP_KEEPIDLE in seconds for each server socket. Not supported on OS X.
Maximum line size of message headers to be accepted. max_header_line may need to be increased when using large tokens (typically those generated when keystone is configured to use PKI tokens with big service catalogs).
Timeout for client connections’ socket operations. If an incoming connection is idle for this number of seconds it will be closed. A value of ‘0’ means wait forever.
True if the server should send exception tracebacks to the clients on 500 errors. If False, the server will respond with empty bodies.
Group | Name |
DEFAULT | rpc_conn_pool_size |
Size of executor thread pool when executor is threading or eventlet.
Group | Name |
DEFAULT | rpc_thread_pool_size |
The network address and optional user credentials for connecting to the messaging backend, in URL format. The expected format is:
driver://[user:pass@]host:port[,[userN:passN@]hostN:portN]/virtual_host?query
Example: rabbit://rabbitmq:password@127.0.0.1:5672//
For full details on the fields in the URL see the documentation of oslo_messaging.TransportURL at https://docs.openstack.org/oslo.messaging/latest/reference/transport.html
The default exchange under which topics are scoped. May be overridden by an exchange name specified in the transport_url option.
Add an endpoint to answer to ping calls. Endpoint is named oslo_rpc_server_ping
The backend URL to use for distributed coordination. By default it’s None, which means that coordination is disabled. Coordination is implemented for distributed lock management and was tested with etcd. Coordination doesn’t work for the file driver because lock files aren’t removed after lock release.
Indicate whether this resource may be shared with the domain received in the requests “origin” header. Format: “<protocol>://<host>[:<port>]”, no trailing slash. Example: https://horizon.example.com
Indicate that the actual request can include user credentials
Indicate which headers are safe to expose to the API. Defaults to HTTP Simple Headers.
Indicate which methods can be used during the actual request.
Indicate which header field names may be used during the actual request.
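As a sketch, the CORS options above could be set in masakari.conf like this; the origin is the example value from the description, and all other values are illustrative assumptions rather than recommended defaults:

```ini
[cors]
allowed_origin = https://horizon.example.com
allow_credentials = true
expose_headers = X-Auth-Token
allow_methods = GET,PUT,POST,DELETE
allow_headers = Content-Type,X-Auth-Token
```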
The SQLAlchemy connection string to use to connect to the database.
The SQLAlchemy connection string to use to connect to the slave database.
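For illustration, a connection string following the same pattern as the sample configuration earlier in this guide might look like this (host name, user, and password are placeholders):

```ini
[database]
connection = mysql+pymysql://masakari:MASAKARI_DBPASS@controller/masakari?charset=utf8
```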
The SQL mode to be used for MySQL sessions. This option, including the default, overrides any server-set SQL mode. To use whatever SQL mode is set by the server configuration, set this to no value. Example: mysql_sql_mode=
For Galera only, configure wsrep_sync_wait causality checks on new connections. Default is None, meaning don’t configure any setting.
Connections which have been present in the connection pool longer than this number of seconds will be replaced with a new one the next time they are checked out from the pool.
Maximum number of SQL connections to keep open in a pool. Setting a value of 0 indicates no limit.
Maximum number of database connection retries during startup. Set to -1 to specify an infinite retry count.
Verbosity of SQL debugging information: 0=None, 100=Everything.
Enable the experimental use of database reconnect on connection lost.
If True, increases the interval between retries of a database operation up to db_max_retry_interval.
If db_inc_retry_interval is set, the maximum seconds between retries of a database operation.
Maximum retries in case of connection error or deadlock error before error is raised. Set to -1 to specify an infinite retry count.
Optional URL parameters to append onto the connection URL at connect time; specify as param1=value1&param2=value2&…
Group | Name |
DEFAULT | dbapi_use_tpool |
WARNING:
The path to respond to healthcheck requests on.
WARNING:
Show more detailed information as part of the response. Security note: Enabling this option may expose sensitive details about the service being monitored. Be sure to verify that it will not violate your security policies.
Additional backends that can perform health checks and report that information back as part of a request.
A list of network addresses to limit source ip allowed to access healthcheck information. Any request from ip outside of these network addresses are ignored.
Check the presence of a file to determine if an application is running on a port. Used by DisableByFileHealthcheck plugin.
Check the presence of a file based on a port to determine if an application is running on a port. Expects a “port:path” list of strings. Used by DisableByFilesPortsHealthcheck plugin.
Operators can decide whether all instances or only those instances which have [host_failure]\ha_enabled_instance_metadata_key set to True should be allowed for evacuation from a failed source compute node. When set to True, it will evacuate all instances from a failed source compute node. First preference will be given to those instances which have [host_failure]\ha_enabled_instance_metadata_key set to True, and then it will evacuate the remaining ones. When set to False, it will evacuate only those instances which have [host_failure]\ha_enabled_instance_metadata_key set to True.
Operators can decide on the instance metadata key naming that affects the per-instance behaviour of [host_failure]\evacuate_all_instances. The default is the same for both failure types (host, instance) but the value can be overridden to make the metadata key different per failure type.
Operators can decide whether error instances should be allowed for evacuation from a failed source compute node or not. When set to True, it will ignore error instances from evacuation from a failed source compute node. When set to False, it will evacuate error instances along with other instances from a failed source compute node.
Operators can decide whether reserved_host should be added to aggregate group of failed compute host. When set to True, reserved host will be added to the aggregate group of failed compute host. When set to False, the reserved_host will not be added to the aggregate group of failed compute host.
Compute disable reason in case Masakari detects host failure.
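A minimal sketch of a [host_failure] section using the options named above; the metadata key value shown is an assumption, and whether these settings suit a deployment depends on its HA policy:

```ini
[host_failure]
evacuate_all_instances = True
ha_enabled_instance_metadata_key = HA_Enabled
```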
Operators can decide whether all instances or only those instances which have [instance_failure]\ha_enabled_instance_metadata_key set to True should be taken into account to recover from instance failure events. When set to True, it will execute instance failure recovery actions for an instance irrespective of whether that particular instance has [instance_failure]\ha_enabled_instance_metadata_key set to True. When set to False, it will only execute instance failure recovery actions for an instance which has [instance_failure]\ha_enabled_instance_metadata_key set to True.
Operators can decide on the instance metadata key naming that affects the per-instance behaviour of [instance_failure]\process_all_instances. The default is the same for both failure types (host, instance) but the value can be overridden to make the metadata key different per failure type.
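Likewise, a sketch of an [instance_failure] section using the two options described above; the metadata key value is an illustrative assumption:

```ini
[instance_failure]
process_all_instances = False
ha_enabled_instance_metadata_key = HA_Enabled
```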
Complete “public” Identity API endpoint. This endpoint should not be an “admin” endpoint, as it should be accessible by all end users. Unauthenticated clients are redirected to this endpoint to authenticate. Although this endpoint should ideally be unversioned, client support in the wild varies. If you’re using a versioned v2 endpoint here, then this should not be the same endpoint the service user utilizes for validating tokens, because normal end users may not be able to reach that endpoint.
Group | Name |
keystone_authtoken | auth_uri |
Complete “public” Identity API endpoint. This endpoint should not be an “admin” endpoint, as it should be accessible by all end users. Unauthenticated clients are redirected to this endpoint to authenticate. Although this endpoint should ideally be unversioned, client support in the wild varies. If you’re using a versioned v2 endpoint here, then this should not be the same endpoint the service user utilizes for validating tokens, because normal end users may not be able to reach that endpoint. This option is deprecated in favor of www_authenticate_uri and will be removed in the S release.
WARNING:
Interface to use for the Identity API endpoint. Valid values are “public”, “internal” (default) or “admin”.
Do not handle authorization requests within the middleware, but delegate the authorization decision to downstream WSGI components.
Request timeout value for communicating with Identity API server.
How many times are we trying to reconnect when communicating with Identity API Server.
Request environment key where the Swift cache object is stored. When auth_token middleware is deployed with a Swift cache, use this option to have the middleware share a caching backend with swift. Otherwise, use the memcached_servers option instead.
A PEM encoded Certificate Authority to use when verifying HTTPs connections. Defaults to system CAs.
Optionally specify a list of memcached server(s) to use for caching. If left undefined, tokens will instead be cached in-process.
Group | Name |
keystone_authtoken | memcache_servers |
In order to prevent excessive effort spent validating tokens, the middleware caches previously-seen tokens for a configurable duration (in seconds). Set to -1 to disable caching completely.
(Optional) If defined, indicate whether token data should be authenticated or authenticated and encrypted. If MAC, token data is authenticated (with HMAC) in the cache. If ENCRYPT, token data is encrypted and authenticated in the cache. If the value is not one of these options or empty, auth_token will raise an exception on initialization.
(Optional, mandatory if memcache_security_strategy is defined) This string is used for key derivation.
(Optional) Number of seconds memcached server is considered dead before it is tried again.
(Optional) Maximum total number of open connections to every memcached server.
(Optional) Socket timeout in seconds for communicating with a memcached server.
(Optional) Number of seconds a connection to memcached is held unused in the pool before it is closed.
(Optional) Number of seconds that an operation will wait to get a memcached client connection from the pool.
(Optional) Use the advanced (eventlet safe) memcached client pool.
(Optional) Indicate whether to set the X-Service-Catalog header. If False, middleware will not ask for service catalog on token validation and will not set the X-Service-Catalog header.
Used to control the use and type of token binding. Can be set to: “disabled” to not check token binding. “permissive” (default) to validate binding information if the bind type is of a form known to the server and ignore it if not. “strict” like “permissive” but if the bind type is unknown the token will be rejected. “required” any form of token binding is needed to be allowed. Finally the name of a binding method that must be present in tokens.
A choice of roles that must be present in a service token. Service tokens are allowed to request that an expired token can be used and so this check should tightly control that only actual services should be sending this token. Roles here are applied as an ANY check so any role in this list must be present. For backwards compatibility reasons this currently only affects the allow_expired check.
For backwards compatibility reasons we must let valid service tokens pass that don’t pass the service_token_roles check as valid. Setting this true will become the default in a future release and should be enabled if possible.
The name or type of the service as it appears in the service catalog. This is used to validate tokens that have restricted access rules.
Group | Name |
keystone_authtoken | auth_plugin |
Config Section from which to load plugin specific options
DEPRECATED
This option is a list of all of the v2.1 API extensions to never load. However, it will be removed in the near future, after which all the functionality that was previously in extensions will be part of the standard API, and thus always accessible.
Group | Name |
osapi_v1 | extensions_blacklist |
WARNING:
DEPRECATED
This is a list of extensions. If it is empty, then all extensions except those specified in the extensions_blacklist option will be loaded. If it is not empty, then only those extensions in this list will be loaded, provided that they are also not in the extensions_blacklist option. Once this deprecated option is removed, all the functionality that was previously in extensions will be part of the standard API, and thus always accessible.
Group | Name |
osapi_v1 | extensions_whitelist |
WARNING:
DEPRECATED
This option is a string representing a regular expression (regex) that matches the project_id as contained in URLs. If not set, it will match normal UUIDs created by keystone.
Group | Name |
osapi_v1 | project_id_regex |
WARNING:
Name for the AMQP container. Must be globally unique. Defaults to a generated UUID.
Group | Name |
amqp1 | container_name |
Group | Name |
amqp1 | idle_timeout |
Group | Name |
amqp1 | trace |
Attempt to connect via SSL. If no other ssl-related parameters are given, it will use the system’s CA-bundle to verify the server’s certificate.
Group | Name |
amqp1 | ssl_ca_file |
Self-identifying certificate PEM file for client authentication
Group | Name |
amqp1 | ssl_cert_file |
Private key PEM file used to sign ssl_cert_file certificate (optional)
Group | Name |
amqp1 | ssl_key_file |
Group | Name |
amqp1 | ssl_key_password |
By default SSL checks that the name in the server’s certificate matches the hostname in the transport_url. In some configurations it may be preferable to use the virtual hostname instead, for example if the server uses the Server Name Indication TLS extension (rfc6066) to provide a certificate per virtual host. Set ssl_verify_vhost to True if the server’s SSL certificate uses the virtual host name instead of the DNS name.
Group | Name |
amqp1 | sasl_mechanisms |
Group | Name |
amqp1 | sasl_config_dir |
Group | Name |
amqp1 | sasl_config_name |
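The SSL-related options above all belong to the [amqp1] section. An illustrative fragment assuming mutual TLS (all paths are placeholders):

```ini
[amqp1]
ssl = true
# CA bundle used to verify the broker's certificate.
ssl_ca_file = /etc/masakari/ssl/ca-bundle.pem
# Client certificate and matching private key (optional, for mutual TLS).
ssl_cert_file = /etc/masakari/ssl/client-cert.pem
ssl_key_file = /etc/masakari/ssl/client-key.pem
```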
Increase the connection_retry_interval by this many seconds after each unsuccessful failover attempt.
Maximum limit for connection_retry_interval + connection_retry_backoff
Time to pause between re-connecting an AMQP 1.0 link that failed due to a recoverable error.
The maximum number of attempts to re-send a reply message which failed due to a recoverable error.
The deadline for an rpc cast or call message delivery. Only used when caller does not provide a timeout expiry.
The deadline for a sent notification message delivery. Only used when caller does not provide a timeout expiry.
The duration to schedule a purge of idle sender links. Detach link after expiry.
Indicates the addressing mode used by the driver. Permitted values: ‘legacy’ - use legacy non-routable addressing ‘routable’ - use routable addresses ‘dynamic’ - use legacy addresses if the message bus does not support routing otherwise use routable addressing
Enable virtual host support for those message buses that do not natively support virtual hosting (such as qpidd). When set to true the virtual host name will be added to all message bus addresses, effectively creating a private ‘subnet’ per virtual host. Set to False if the message bus supports virtual hosting using the ‘hostname’ field in the AMQP 1.0 Open performative as the name of the virtual host.
address prefix used when sending to a specific server
Group | Name |
amqp1 | server_request_prefix |
Group | Name |
amqp1 | broadcast_prefix |
Group | Name |
amqp1 | group_request_prefix |
Address prefix for all generated RPC addresses
Address prefix for all generated Notification addresses
Appended to the address prefix when sending a fanout message. Used by the message bus to identify fanout messages.
Appended to the address prefix when sending to a particular RPC/Notification server. Used by the message bus to identify messages sent to a single destination.
Appended to the address prefix when sending to a group of consumers. Used by the message bus to identify messages that should be delivered in a round-robin fashion across consumers.
Exchange name used in notification addresses. Exchange name resolution precedence: Target.exchange if set else default_notification_exchange if set else control_exchange if set else ‘notify’
Exchange name used in RPC addresses. Exchange name resolution precedence: Target.exchange if set else default_rpc_exchange if set else control_exchange if set else ‘rpc’
Send messages of this type pre-settled. Pre-settled messages will not receive acknowledgement from the peer. Note well: pre-settled messages may be silently discarded if the delivery fails. Permitted values: ‘rpc-call’ - send RPC Calls pre-settled ‘rpc-reply’- send RPC Replies pre-settled ‘rpc-cast’ - Send RPC Casts pre-settled ‘notify’ - Send Notifications pre-settled
Pool Size for Kafka Consumers
WARNING:
The pool size limit for connections expiration policy
WARNING:
The time-to-live in sec of idle connections in the pool
WARNING:
Group id for Kafka consumer. Consumers in one group will coordinate message consumption
Upper bound on the delay for KafkaProducer batching in seconds
The compression codec for all data generated by the producer. If not set, compression will not be used. Note that the allowed values of this depend on the kafka version
Protocol used to communicate with brokers
Group | Name |
DEFAULT | notification_driver |
A URL representing the messaging driver to use for notifications. If not set, we fall back to the same configuration used for RPC.
Group | Name |
DEFAULT | notification_transport_url |
Group | Name |
rpc_notifier2 | topics |
DEFAULT | notification_topics |
The maximum number of attempts to re-send a notification message which failed to be delivered due to a recoverable error. 0 - No retry, -1 - indefinite
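As a sketch, routing notifications to a dedicated transport with limited redelivery could combine the options above like this (URL and values are illustrative):

```ini
[oslo_messaging_notifications]
driver = messagingv2
# Separate broker for notifications; falls back to the RPC transport if unset.
transport_url = rabbit://masakari:secret@notify-broker:5672/
# Retry a failed delivery up to 3 times (0 = no retry, -1 = indefinite).
retry = 3
```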
Use durable queues in AMQP. If rabbit_quorum_queue is enabled, queues will be durable and this value will be ignored.
Group | Name |
DEFAULT | amqp_auto_delete |
Group | Name |
oslo_messaging_rabbit | rabbit_use_ssl |
SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some distributions.
Group | Name |
oslo_messaging_rabbit | kombu_ssl_version |
Group | Name |
oslo_messaging_rabbit | kombu_ssl_keyfile |
Group | Name |
oslo_messaging_rabbit | kombu_ssl_certfile |
Group | Name |
oslo_messaging_rabbit | kombu_ssl_ca_certs |
Global toggle for enforcing the OpenSSL FIPS mode. This feature requires Python support. This is available in Python 3.9 in all environments and may have been backported to older Python versions on select environments. If the Python executable used does not support OpenSSL FIPS mode, an exception will be raised.
Run the health check heartbeat thread through a native python thread by default. If this option is equal to False then the health check heartbeat will inherit the execution model from the parent process. For example if the parent process has monkey patched the stdlib by using eventlet/greenlet then the heartbeat will be run through a green thread. This option should be set to True only for the wsgi services.
How long to wait (in seconds) before reconnecting in response to an AMQP consumer cancel notification.
Group | Name |
DEFAULT | kombu_reconnect_delay |
EXPERIMENTAL: Possible values are: gzip, bz2. If not set compression will not be used. This option may not be available in future versions.
How long to wait for a missing client before abandoning sending it its replies. This value should not be longer than rpc_response_timeout.
Group | Name |
oslo_messaging_rabbit | kombu_reconnect_timeout |
Determines how the next RabbitMQ node is chosen in case the one we are currently connected to becomes unavailable. Takes effect only if more than one RabbitMQ node is provided in config.
The RabbitMQ login method.
Group | Name |
DEFAULT | rabbit_login_method |
How long to back off between retries when connecting to RabbitMQ.
Group | Name |
DEFAULT | rabbit_retry_backoff |
Maximum interval of RabbitMQ connection retries. Default is 30 seconds.
Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change this option, you must wipe the RabbitMQ database. In RabbitMQ 3.0, queue mirroring is no longer controlled by the x-ha-policy argument when declaring a queue. If you just want to make sure that all queues (except those with auto-generated names) are mirrored across all nodes, run: rabbitmqctl set_policy HA '^(?!amq\.).*' '{"ha-mode": "all"}'
Group | Name |
DEFAULT | rabbit_ha_queues |
Use quorum queues in RabbitMQ (x-queue-type: quorum). The quorum queue is a modern queue type for RabbitMQ implementing a durable, replicated FIFO queue based on the Raft consensus algorithm. It is available as of RabbitMQ 3.8.0. This option conflicts with HA queues (rabbit_ha_queues), aka mirrored queues; in other words, HA queues should be disabled when this option is set. Quorum queues are durable by default, so the amqp_durable_queues option is ignored when this option is enabled.
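A hedged sketch of enabling quorum queues consistently with the constraints above:

```ini
[oslo_messaging_rabbit]
rabbit_quorum_queue = true
# Quorum queues conflict with mirrored/HA queues, so keep these disabled.
rabbit_ha_queues = false
# amqp_durable_queues would be ignored here: quorum queues are durable
# by default.
```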
Each time a message is redelivered to a consumer, a counter is incremented. Once the redelivery count exceeds the delivery limit, the message gets dropped or dead-lettered (if a DLX exchange has been configured). Used only when rabbit_quorum_queue is enabled. The default of 0 means no limit is set.
By default all messages are maintained in memory; if a quorum queue grows in length, it can put memory pressure on a cluster. This option can limit the number of messages in the quorum queue. Used only when rabbit_quorum_queue is enabled. The default of 0 means no limit is set.
Group | Name |
oslo_messaging_rabbit | rabbit_quroum_max_memory_length |
By default all messages are maintained in memory; if a quorum queue grows in length, it can put memory pressure on a cluster. This option can limit the number of memory bytes used by the quorum queue. Used only when rabbit_quorum_queue is enabled. The default of 0 means no limit is set.
Group | Name |
oslo_messaging_rabbit | rabbit_quroum_max_memory_bytes |
Positive integer representing duration in seconds for queue TTL (x-expires). Queues which are unused for the duration of the TTL are automatically deleted. The parameter affects only reply and fanout queues. Setting the value to 0 will disable the x-expires. If doing so, make sure you have a RabbitMQ policy to delete the queues, or your deployment will create an infinite number of queues over time.
Specifies the number of messages to prefetch. Setting to zero allows unlimited messages.
Number of seconds after which the Rabbit broker is considered down if heartbeat’s keep-alive fails (0 disables heartbeat).
How many times during the heartbeat_timeout_threshold we check the heartbeat.
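For example, with the two heartbeat options above set as below, the broker is declared down after 60 seconds without a keep-alive, and the heartbeat is checked twice per timeout window (illustrative values):

```ini
[oslo_messaging_rabbit]
heartbeat_timeout_threshold = 60
heartbeat_rate = 2
```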
(DEPRECATED) Enable/Disable the RabbitMQ mandatory flag for direct send. The direct send is used for replies, so the MessageUndeliverable exception is raised in case the client queue does not exist. The MessageUndeliverable exception is then used to loop for a timeout, giving the sender a chance to recover. This flag is deprecated and it will no longer be possible to deactivate this functionality.
WARNING:
Enable the x-cancel-on-ha-failover flag so that the RabbitMQ server will cancel and notify consumers when a queue is down.
Group | Name |
DEFAULT | osapi_max_request_body_size |
DEFAULT | max_request_body_size |
Whether the application is behind a proxy or not. This determines if the middleware should parse the headers or not.
This option controls whether or not to enforce scope when evaluating policies. If True, the scope of the token used in the request is compared to the scope_types of the policy being enforced. If the scopes do not match, an InvalidScope exception will be raised. If False, a message will be logged informing operators that policies are being invoked with mismatching scope.
This option controls whether or not to use old deprecated defaults when evaluating policies. If True, the old deprecated defaults are not going to be evaluated. This means if any existing token is allowed for old defaults but is disallowed for new defaults, it will be disallowed. It is encouraged to enable this flag along with the enforce_scope flag so that you can get the benefits of new defaults and scope_type together. If False, the deprecated policy check string is logically OR’d with the new policy check string, allowing for a graceful upgrade experience between releases with new policies, which is the default behavior.
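A sketch of opting in to the stricter behavior described by the two options above, in the [oslo_policy] section:

```ini
[oslo_policy]
# Raise InvalidScope when token scope does not match the policy's scope_types.
enforce_scope = true
# Evaluate only the new defaults; stop OR-ing in deprecated check strings.
enforce_new_defaults = true
```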
The relative or absolute path of a file that maps roles to permissions for a given service. Relative paths must be specified in relation to the configuration file setting this option.
Group | Name |
DEFAULT | policy_file |
Default rule. Enforced when a requested rule is not found.
Group | Name |
DEFAULT | policy_default_rule |
Directories where policy configuration files are stored. They can be relative to any directory in the search path defined by the config_dir option, or absolute paths. The file defined by policy_file must exist for these directories to be searched. Missing or empty directories are ignored.
Group | Name |
DEFAULT | policy_dirs |
Content Type to send and receive data for REST based policy check
Server identity verification for REST based policy check.
Absolute path to ca cert file for REST based policy check
Absolute path to client cert for REST based policy check
Absolute path to client key file for REST based policy check.
Compute disable reason in case Masakari detects process failure.
Group | Name |
DEFAULT | ssl_ca_file |
Group | Name |
DEFAULT | ssl_cert_file |
Group | Name |
DEFAULT | ssl_key_file |
SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some distributions.
Sets the list of available ciphers. The value should be a string in the OpenSSL cipher list format.
The SQLAlchemy connection string to use to connect to the taskflow database.
File name for the paste.deploy config for masakari-api
Group | Name |
DEFAULT | api_paste_config |
A python format string that is used as the template to generate log lines. The following values can be formatted into it: client_ip, date_time, request_line, status_code, body_length, wall_seconds.
Group | Name |
DEFAULT | wsgi_log_format |
The HTTP header used to determine the scheme for the original request, even if it was removed by an SSL terminating proxy. Typical value is “HTTP_X_FORWARDED_PROTO”.
Group | Name |
DEFAULT | secure_proxy_ssl_header |
Group | Name |
DEFAULT | ssl_ca_file |
Group | Name |
DEFAULT | ssl_cert_file |
Group | Name |
DEFAULT | ssl_key_file |
Sets the value of TCP_KEEPIDLE in seconds for each server socket. Not supported on OS X.
Group | Name |
DEFAULT | tcp_keepidle |
Group | Name |
DEFAULT | wsgi_default_pool_size |
Maximum line size of message headers to be accepted. max_header_line may need to be increased when using large tokens (typically those generated by the Keystone v3 API with big service catalogs).
Group | Name |
DEFAULT | max_header_line |
Group | Name |
DEFAULT | wsgi_keep_alive |
Timeout for client connections’ socket operations. If an incoming connection is idle for this number of seconds it will be closed. A value of ‘0’ means wait forever.
Group | Name |
DEFAULT | client_socket_timeout |
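The WSGI tuning options above sit in the [DEFAULT] section. An illustrative combination (values are examples, not recommendations):

```ini
[DEFAULT]
# Close client connections idle for more than 15 minutes (0 = wait forever).
client_socket_timeout = 900
# Allow larger headers, e.g. big Keystone v3 tokens.
max_header_line = 16384
# TCP_KEEPIDLE per server socket, in seconds (not supported on OS X).
tcp_keepidle = 600
```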
The configuration for Masakari lies in the files described below.
Masakari has two main config files: masakari.conf and recovery_workflow_sample_config.conf.
Masakari, like most OpenStack projects, uses a policy language to restrict permissions on REST API actions.
WARNING:
The following is an overview of all available policies in masakari. For a sample configuration file, refer to Sample Masakari Policy File.
Decides what is required for the ‘is_admin:True’ check to succeed.
Default rule for most non-Admin APIs.
Default rule for most Admin APIs.
List available extensions.
Shows information for an extension.
Extension Info API extensions to change the API.
Lists IDs, names, type, reserved, on_maintenance for all hosts.
Shows details for a host.
Creates a host under given segment.
Updates the editable attributes of an existing host.
Deletes a host from given segment.
Host API extensions to change the API.
Lists IDs, notification types, host_name, generated_time, payload and status for all notifications.
Shows details for a notification.
Creates a notification.
Notification API extensions to change the API.
Lists IDs, names, description, recovery_method, service_type for all segments.
Shows details for a segment.
Creates a segment.
Updates the editable attributes of an existing segment.
Deletes a segment.
Segment API extensions to change the API.
List all versions.
Version API extensions to change the API.
Lists IDs, notification_id, instance_id, source_host, dest_host, status and type for all VM moves.
Shows details for one VM move.
VM moves API extensions to change the API.
The following is an overview of all available configuration options in Masakari.
This option allows the operator to customize the tasks executed in the host failure auto recovery workflow.
Provide a list of strings referring to the task classes that should be included in the host failure recovery workflow. The full class path of each task class should be defined under ‘masakari.task_flow.tasks’ in setup.cfg; these classes may be implemented by the OpenStack Masakari project team, a deployer, or a third party.
By default the following three tasks are part of this config option: 1. disable_compute_service_task 2. prepare_HA_enabled_instances_task 3. evacuate_instances_task
The allowed value for this option is a comma separated dictionary of task names enclosed in { and }.
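As an illustrative sketch of that dictionary syntax (the section name and the pre/main/post placement of the three default tasks are assumptions made for illustration):

```ini
# Section name is an assumption for illustration.
[taskflow_driver_recovery_flows]
host_auto_failure_recovery_tasks = {
    'pre': ['disable_compute_service_task'],
    'main': ['prepare_HA_enabled_instances_task'],
    'post': ['evacuate_instances_task']}
```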
This option allows the operator to customize the tasks executed in the host failure reserved_host recovery workflow.
Provide a list of strings referring to the task classes that should be included in the host failure recovery workflow. The full class path of each task class should be defined under ‘masakari.task_flow.tasks’ in setup.cfg; these classes may be implemented by the OpenStack Masakari project team, a deployer, or a third party.
By default the following three tasks are part of this config option: 1. disable_compute_service_task 2. prepare_HA_enabled_instances_task 3. evacuate_instances_task
The allowed value for this option is a comma separated dictionary of task names enclosed in { and }.
This option allows the operator to customize the tasks executed in the instance failure recovery workflow.
Provide a list of strings referring to the task classes that should be included in the instance failure recovery workflow. The full class path of each task class should be defined under ‘masakari.task_flow.tasks’ in setup.cfg; these classes may be implemented by the OpenStack Masakari project team, a deployer, or a third party.
By default the following three tasks are part of this config option: 1. stop_instance_task 2. start_instance_task 3. confirm_instance_active_task
The allowed value for this option is a comma separated dictionary of task names enclosed in { and }.
This option allows the operator to customize the tasks executed in the process failure recovery workflow.
Provide a list of strings referring to the task classes that should be included in the process failure recovery workflow. The full class path of each task class should be defined under ‘masakari.task_flow.tasks’ in setup.cfg; these classes may be implemented by the OpenStack Masakari project team, a deployer, or a third party.
By default the following two tasks are part of this config option: 1. disable_compute_node_task 2. confirm_compute_node_disabled_task
The allowed value for this option is a comma separated dictionary of task names enclosed in { and }.
If an operator wants a customized recovery workflow, the following guidelines describe how to associate custom tasks from a third-party library with the standard recovery workflows in Masakari:
from oslo_log import log as logging
from taskflow import task

LOG = logging.getLogger(__name__)


class Noop(task.Task):

    def __init__(self, novaclient):
        self.novaclient = novaclient
        super(Noop, self).__init__()

    def execute(self, **kwargs):
        LOG.info("Custom task executed successfully..!!")
        return
For example, a third-party library’s setup.cfg will have the following entry points:
masakari.task_flow.tasks =
    custom_pre_task = <custom_task_class_path_from_third_party_library>
    custom_main_task = <custom_task_class_path_from_third_party_library>
    custom_post_task = <custom_task_class_path_from_third_party_library>
Note: the entry point in the third-party library’s setup.cfg should have the same key as in Masakari’s setup.cfg for the respective failure recovery.
host_auto_failure_recovery_tasks = {
    'pre': ['disable_compute_service_task', 'custom_pre_task'],
    'main': ['custom_main_task', 'prepare_HA_enabled_instances_task'],
    'post': ['evacuate_instances_task', 'custom_post_task']}
WARNING:
The following is a sample masakari policy file for adaptation and use.
The sample policy can also be viewed in file form.
IMPORTANT:
# Decides what is required for the 'is_admin:True' check to succeed.
#"context_is_admin": "role:admin"

# Default rule for most non-Admin APIs.
#"admin_or_owner": "is_admin:True or project_id:%(project_id)s"

# Default rule for most Admin APIs.
#"admin_api": "is_admin:True"

# List available extensions.
# GET /extensions
#"os_masakari_api:extensions:index": "rule:admin_api"

# Shows information for an extension.
# GET /extensions/{extensions_id}
#"os_masakari_api:extensions:detail": "rule:admin_api"

# Extension Info API extensions to change the API.
#"os_masakari_api:extensions:discoverable": "rule:admin_api"

# Lists IDs, names, type, reserved, on_maintenance for all hosts.
# GET /segments/{segment_id}/hosts
#"os_masakari_api:os-hosts:index": "rule:admin_api"

# Shows details for a host.
# GET /segments/{segment_id}/hosts/{host_id}
#"os_masakari_api:os-hosts:detail": "rule:admin_api"

# Creates a host under given segment.
# POST /segments/{segment_id}/hosts
#"os_masakari_api:os-hosts:create": "rule:admin_api"

# Updates the editable attributes of an existing host.
# PUT /segments/{segment_id}/hosts/{host_id}
#"os_masakari_api:os-hosts:update": "rule:admin_api"

# Deletes a host from given segment.
# DELETE /segments/{segment_id}/hosts/{host_id}
#"os_masakari_api:os-hosts:delete": "rule:admin_api"

# Host API extensions to change the API.
#"os_masakari_api:os-hosts:discoverable": "rule:admin_api"

# Lists IDs, notification types, host_name, generated_time, payload
# and status for all notifications.
# GET /notifications
#"os_masakari_api:notifications:index": "rule:admin_api"

# Shows details for a notification.
# GET /notifications/{notification_id}
#"os_masakari_api:notifications:detail": "rule:admin_api"

# Creates a notification.
# POST /notifications
#"os_masakari_api:notifications:create": "rule:admin_api"

# Notification API extensions to change the API.
#"os_masakari_api:notifications:discoverable": "rule:admin_api"

# Lists IDs, names, description, recovery_method, service_type for all
# segments.
# GET /segments
#"os_masakari_api:segments:index": "rule:admin_api"

# Shows details for a segment.
# GET /segments/{segment_id}
#"os_masakari_api:segments:detail": "rule:admin_api"

# Creates a segment.
# POST /segments
#"os_masakari_api:segments:create": "rule:admin_api"

# Updates the editable attributes of an existing segment.
# PUT /segments/{segment_id}
#"os_masakari_api:segments:update": "rule:admin_api"

# Deletes a segment.
# DELETE /segments/{segment_id}
#"os_masakari_api:segments:delete": "rule:admin_api"

# Segment API extensions to change the API.
#"os_masakari_api:segments:discoverable": "rule:admin_api"

# List all versions.
# GET /
#"os_masakari_api:versions:index": "@"

# Version API extensions to change the API.
#"os_masakari_api:versions:discoverable": "@"

# Lists IDs, notification_id, instance_id, source_host, dest_host,
# status and type for all VM moves.
# GET /notifications/{notification_id}/vmoves
#"os_masakari_api:vmoves:index": "rule:admin_api"

# Shows details for one VM move.
# GET /notifications/{notification_id}/vmoves/{vmove_id}
#"os_masakari_api:vmoves:detail": "rule:admin_api"

# VM moves API extensions to change the API.
#"os_masakari_api:vmoves:discoverable": "rule:admin_api"
Masakari comprises two services, api and engine, each performing different functions. The user-facing interface is a REST API, while internally Masakari communicates via an RPC message passing mechanism.
The API servers process REST requests, which typically involve database reads/writes, sending RPC messages to the Masakari engine, and generating responses to the REST calls. RPC messaging is done via the oslo.messaging library, an abstraction on top of message queues. The Masakari engine runs on the same host as the Masakari api and has a manager that listens for RPC messages. The manager also runs periodic tasks.
Below you will find a helpful explanation of the key components of a typical Masakari deployment. (Architecture diagram not reproduced here.)
Similar to other OpenStack services, Masakari emits notifications to the message bus with the Notifier class provided by oslo.messaging. From the notification consumer’s point of view, a notification consists of two parts: an envelope with a fixed structure defined by oslo.messaging, and a payload defined by the service emitting the notification. The envelope format is the following:
{
    "priority": <string, selected from a predefined list by the sender>,
    "event_type": <string, defined by the sender>,
    "timestamp": <string, the isotime of when the notification emitted>,
    "publisher_id": <string, defined by the sender>,
    "message_id": <uuid, generated by oslo>,
    "payload": <json serialized dict, defined by the sender>
}
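To make the envelope structure concrete, here is an illustrative consumer-side sketch that builds a dict with the same fields; the real envelope is produced by oslo.messaging itself, not by application code:

```python
import json
import uuid
from datetime import datetime, timezone


def make_envelope(priority, event_type, publisher_id, payload):
    """Build a dict mirroring the oslo.messaging envelope described above.

    Illustrative only: field names follow the documented envelope format.
    """
    return {
        "priority": priority,
        "event_type": event_type,
        # isotime of when the notification is emitted
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "publisher_id": publisher_id,
        # generated by oslo in the real envelope
        "message_id": str(uuid.uuid4()),
        # payload travels as a JSON-serialized dict
        "payload": json.dumps(payload),
    }


envelope = make_envelope("INFO", "segment.update",
                         "masakari-api:node1", {"name": "test"})
```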
Driver | Description |
messaging | Send notifications using the 1.0 message format |
messagingv2 | Send notifications using the 2.0 message format (with a message envelope) |
routing | Configurable routing notifier (by priority or event_type) |
log | Publish notifications via Python logging infrastructure |
test | Store notifications in memory for test verification |
noop | Disable sending notifications entirely |
Notifications can therefore be completely disabled by setting the following in the Masakari configuration file:
[oslo_messaging_notifications] driver = noop
Masakari supports only Versioned notifications.
Masakari code uses the masakari.rpc.get_notifier call to get a configured oslo.messaging Notifier object, and it uses the oslo-provided functions on the Notifier object to emit notifications. The configuration of the returned Notifier object depends on the parameters of the get_notifier call and the value of the oslo.messaging configuration options driver and topics. In a versioned notification the payload is not a free-form dictionary but a serialized oslo.versionedobjects object.
For example the wire format of the segment.update notification looks like the following:
{
    "event_type": "api.update.segments.start",
    "timestamp": "2018-11-27 14:32:20.396940",
    "payload": {
        "masakari_object.name": "SegmentApiPayload",
        "masakari_object.data": {
            "description": null,
            "fault": null,
            "recovery_method": "auto",
            "name": "test",
            "service_type": "compute",
            "id": 877,
            "uuid": "89597691-bebd-4860-a93e-1b6e9de34b9e"
        },
        "masakari_object.version": "1.0",
        "masakari_object.namespace": "masakari"
    },
    "priority": "INFO",
    "publisher_id": "masakari-api:test-virtualbox",
    "message_id": "e6322900-025d-4dd6-a3a1-3e0e1e9badeb"
}
The serialized oslo versionedobject as a payload provides a version number to the consumer, so the consumer can detect if the structure of the payload has changed. Masakari provides the following contract regarding the versioned notification payload:
Event type | Notification class | Payload class | Sample |
error.exception | ExceptionNotification | ExceptionPayload | |
create.host.end | HostApiNotification | HostApiPayload | |
create.host.start | HostApiNotification | HostApiPayload | |
delete.host.end | HostApiNotification | HostApiPayload | |
delete.host.start | HostApiNotification | HostApiPayload | |
update.host.end | HostApiNotification | HostApiPayload | |
update.host.start | HostApiNotification | HostApiPayload | |
create.notification.end | NotificationApiNotification | NotificationApiPayload | |
create.notification.start | NotificationApiNotification | NotificationApiPayload | |
process.notification.end | NotificationApiNotification | NotificationApiPayload | |
process.notification.error | NotificationApiNotification | NotificationApiPayload | |
process.notification.start | NotificationApiNotification | NotificationApiPayload | |
create.segment.end | SegmentApiNotification | SegmentApiPayload | |
create.segment.start | SegmentApiNotification | SegmentApiPayload | |
delete.segment.end | SegmentApiNotification | SegmentApiPayload | |
delete.segment.start | SegmentApiNotification | SegmentApiPayload | |
update.segment.end | SegmentApiNotification | SegmentApiPayload | |
update.segment.start | SegmentApiNotification | SegmentApiPayload |
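Since every payload carries a masakari_object.version field, a consumer can guard against incompatible payload changes with a simple version check. A minimal sketch (the compatibility policy shown, "same major version is readable", is a common versioned-objects convention and an assumption here, not a documented Masakari guarantee):

```python
def payload_compatible(payload, expected_major):
    """Return True when the payload's major version matches what we expect.

    Assumption: within a major version, changes are additive (minor bumps),
    so a consumer written for 1.x can read any 1.y payload, while a major
    bump signals an incompatible structural change.
    """
    major, _minor = payload["masakari_object.version"].split(".")
    return int(major) == expected_major


# Hypothetical payload fragment shaped like the sample above.
sample = {
    "masakari_object.name": "SegmentApiPayload",
    "masakari_object.version": "1.0",
    "masakari_object.namespace": "masakari",
}
```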
2024, OpenStack Foundation
April 5, 2024 | 17.0.0 |