Krake Reference

This is the code reference for the Krake project.

Module hierarchy

This section presents the modules and sub-modules of the Krake project, located in the krake/ directory. The tests for Krake are added in the krake/tests/ directory; the pytest module is used to launch all unit tests.

- krake
  The krake module itself only contains a few utility functions, as well as functions for reading and validating the environment variables and the provided configuration. This module contains several sub-modules, presented in the following.
- krake.api
  This module contains the logic needed to start the API as an aiohttp application. It exchanges objects with the various clients defined in krake.client. These objects are the ones defined in krake.data.
- krake.client
  This module contains all the necessary logic for any kind of client to communicate with the API described in the krake.api module.
- krake.controller
  This module contains the base controller and the definitions of several controllers. Each of these controllers is a separate process that communicates with the API or the database. For this, the controllers use elements provided by the krake.client module. All new controllers should be added in this module.
  - krake.controller.kubernetes.application
    This sub-module contains the definition of the controller specialized in handling Kubernetes applications.
  - krake.controller.kubernetes.cluster
    This sub-module contains the definition of the controller specialized in handling Kubernetes clusters.
  - krake.controller.scheduler
    This sub-module defines the Scheduler controller, responsible for binding the Krake applications and Magnum clusters to specific backends.
  - krake.controller.gc
    This sub-module defines the Garbage Collector controller, responsible for handling dependencies during the deletion of a resource. It marks all dependents of a deleted resource as deleted, thus triggering their deletion.
  - krake.controller.magnum
    This sub-module defines the Magnum controller, responsible for managing Magnum cluster resources and creating their respective Kubernetes clusters.
- krake.data
  This module defines all elements used by the API and the controllers. It contains the definition of all these objects, and the logic to allow them to be serialized and deserialized.
Krake

class krake.ConfigurationOptionMapper(config_cls, option_fields_mapping=None)
Bases: object

Handle the creation of command-line options for a specific Configuration class. For each attribute of the Configuration, recursively, an option will be added to set it from the command line. A mapping between the option name and the hierarchical list of fields is created. Nested options keep the upper layers as prefixes, which are separated by a "-" character.

For instance, the following classes:

class SpaceShipConfiguration(Serializable):
    name: str
    propulsion: PropulsionConfiguration

class PropulsionConfiguration(Serializable):
    power: int
    engine_type: TypeConfiguration

class TypeConfiguration(Serializable):
    name: str

will be transformed into the following options:

--name str
--propulsion-power int
--propulsion-engine-type-name str

And the option-fields mapping will be:

{
    "name": [Field(name="name", ...)],
    "propulsion-power": [
        Field(name="propulsion", ...),
        Field(name="power", ...),
    ],
    "propulsion-engine-type-name": [
        Field(name="propulsion", ...),
        Field(name="engine_type", ...),
        Field(name="name", ...),
    ],
}

Then, from the parsed arguments, the default values of the configuration elements are replaced by the values set by the user through the parser, using this mapping.
The mapping of the option name to the list of fields is necessary here because a configuration element called "lorem-ipsum" with a "dolor-sit-amet" element will be transformed into a "--lorem-ipsum-dolor-sit-amet" option. It will then be parsed as "lorem_ipsum_dolor_sit_amet". This last string, if split on the "_" character, could be separated into "lorem" and "ipsum_dolor_sit_amet", or into "lorem_ipsum_dolor" and "sit_amet". Hence the idea of the mapping to get the right separation.

Parameters:
- config_cls (type) – the configuration class which will be used as a model to generate the options.
- option_fields_mapping (dict, optional) – a mapping of the option names, following the POSIX convention (with the "-" character), to the list of fields: <option_name_with_dash>: <hierarchical_list_of_fields>. This argument can be used to set the mapping directly, instead of creating it from a Configuration class.
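The naming scheme can be illustrated with a self-contained sketch built on plain dataclasses and argparse. This mirrors the behaviour described above but is not Krake's actual implementation; the helper name add_options and the default values are made up:

```python
import argparse
from dataclasses import dataclass, fields, is_dataclass

@dataclass
class TypeConfiguration:
    name: str = "ion"

@dataclass
class PropulsionConfiguration:
    power: int = 0
    engine_type: TypeConfiguration = None

@dataclass
class SpaceShipConfiguration:
    name: str = "unnamed"
    propulsion: PropulsionConfiguration = None

def add_options(parser, config_cls, prefix=()):
    # One option per attribute, recursively; nested options keep the upper
    # layers as "-"-separated prefixes, e.g. --propulsion-power.
    mapping = {}
    for field in fields(config_cls):
        path = prefix + (field.name,)
        if is_dataclass(field.type):
            mapping.update(add_options(parser, field.type, path))
        else:
            option = "-".join(part.replace("_", "-") for part in path)
            parser.add_argument("--" + option, type=field.type)
            # Keep the hierarchical field list so the "-"-joined name can be
            # split back unambiguously later.
            mapping[option] = list(path)
    return mapping

parser = argparse.ArgumentParser()
mapping = add_options(parser, SpaceShipConfiguration)
```

After this runs, the mapping holds "name", "propulsion-power" and "propulsion-engine-type-name", each pointing at its hierarchical field list, which is exactly the disambiguation role described above.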
add_arguments(parser)
Using the given configuration class, create automatically and recursively command-line options to set the different attributes of the configuration. Nested options keep the upper layers as prefixes, which are separated by a "-" character.

Generate the mapping between the option name and the hierarchy of the attributes of the Configuration.

Parameters: parser (argparse.ArgumentParser) – the parser to which the new command-line options will be added.

merge(config, args)
Merge the configuration taken from file with the command-line arguments. The arguments have priority and replace the values read from the configuration.

Returns: the result of merging the CLI arguments into the configuration, as a serializable object.

krake.load_yaml_config(filepath)
Load Krake base configuration settings from a YAML file.

Parameters: filepath (os.PathLike, optional) – Path to the YAML configuration file
Raises: FileNotFoundError – If no configuration file can be found
Returns: Krake YAML file configuration
Return type: dict
krake.search_config(filename)
Search for the configuration file in known directories.

The filename is searched in the following directories, in the given order:

1. Current working directory
2. /etc/krake

Returns: Path to the configuration file
Return type: os.PathLike
Raises: FileNotFoundError – If the configuration cannot be found in any of the search locations.
krake.setup_logging(config_log)
Set up Krake logging based on the logging configuration, applying the global config level to each logger without its own log-level configuration.

Parameters: config_log (dict) – dictschema logging configuration (see logging.config.dictConfig())
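The expected shape of config_log is the standard dictConfig schema; a minimal, made-up example (Krake's real configuration defines more handlers and loggers):

```python
import logging
import logging.config

# Illustrative dictConfig-style configuration.
config_log = {
    "version": 1,
    "disable_existing_loggers": False,
    "handlers": {
        "console": {"class": "logging.StreamHandler", "level": "DEBUG"},
    },
    "loggers": {
        # The logger name "krake.api" is used here only as an example.
        "krake.api": {"handlers": ["console"], "level": "INFO"},
    },
}

logging.config.dictConfig(config_log)
```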
API Server

This module provides the HTTP RESTful API of the Krake application. It is implemented as an aiohttp application.

This module defines the bootstrap function for creating the aiohttp server instance serving Krake's HTTP API.

Krake serves multiple APIs for different technologies: e.g., core functionality like roles and role bindings is served by the krake.api.core API, whereas the Kubernetes API is provided by krake.api.kubernetes.
Example
The API server can be run as follows:
from aiohttp import web
from krake.api.app import create_app
config = ...
app = create_app(config)
web.run_app(app)
krake.api.app.cors_setup(app)
Set the default CORS (Cross-Origin Resource Sharing) rules for all routes of the given web application.

Parameters: app (web.Application) – Web application

krake.api.app.create_app(config)
Create the aiohttp application instance providing the Krake HTTP API.

Parameters: config (krake.data.config.ApiConfiguration) – Application configuration object
Returns: Krake HTTP API
Return type: aiohttp.web.Application

krake.api.app.db_session(app, host, port)
Async generator creating a database krake.api.database.Session that can be used by other components (middleware, route handlers). The database session is available under the db key of the application.

This function should be used as a cleanup context (see aiohttp.web.Application.cleanup_ctx).

Parameters: app (aiohttp.web.Application) – Web application

krake.api.app.http_session(app, ssl_context=None)
Async generator creating an aiohttp.ClientSession HTTP(S) session that can be used by other components (middleware, route handlers). The HTTP(S) client session is available under the http key of the application.

This function should be used as a cleanup context (see aiohttp.web.Application.cleanup_ctx).

Parameters: app (aiohttp.web.Application) – Web application

krake.api.app.load_authentication(config)
Create the authentication middleware middlewares.authentication().

The authenticators are loaded from the "authentication" configuration key. If the server is configured with TLS, client certificates are also added as an authentication strategy (auth.client_certificate_authentication()).

Parameters: config (krake.data.config.ApiConfiguration) – Application configuration object
Returns: aiohttp middleware handling request authentication

Load the authorization function from the configuration.

Parameters: config (krake.data.config.ApiConfiguration) – Application configuration object
Raises: ValueError – If an unknown authorization strategy is configured
Returns: Coroutine function for authorizing resource requests
Authentication and Authorization

Authentication and Authorization module for Krake.

Access to the Krake API is controlled by two distinct mechanisms, performed one after the other:

- Authentication
  verifies the identity of a user (Who is requesting?)
- Authorization
  decides if the user has permission to access a resource

Authentication

Authentication is performed for every request. The krake.api.middlewares.authentication() middleware factory is used for this purpose. The concrete authentication implementation will be derived from the configuration.
# Anonymous authentication
authentication:
kind: static
name: system
# Keystone authentication
authentication:
kind: keystone
endpoint: http://localhost:5000/v3
An authenticator is a simple asynchronous function:
Currently, there are two authentication implementations available:

- Static authentication (static_authentication())
- Keystone authentication (keystone_authentication())
Authorization

Authorization is established with the help of the protected() decorator function. The decorator annotates a given aiohttp request handler with the required authorization information (see AuthorizationRequest).
An authorizer is a simple asynchronous function:
The concrete authorization implementation will be derived from the configuration and is stored under the authorizer key of the application.
# Authorization mode
#
# - RBAC (Role-based access control)
# - always-allow (Allow all requests. No authorization is performed.)
# - always-deny (Deny all requests. Only for testing purposes.)
#
authorization: always-allow
Currently, there are three authorization implementations available:

- Always allow (always_allow())
- Always deny (always_deny())
- Role-based access control / RBAC (rbac())
class krake.api.auth.AuthorizationRequest
Bases: tuple

Authorization request handled by authorizers.

verb
Verb that should be performed on the resource.
Type: krake.data.core.Verb

api
Alias for field number 0

namespace
Alias for field number 1

resource
Alias for field number 2

verb
Alias for field number 3

krake.api.auth.always_allow(request, auth_request)
Authorizer allowing every request.

Parameters:
- request (aiohttp.web.Request) – Incoming HTTP request
- auth_request (AuthorizationRequest) – Authorization request associated with the incoming HTTP request.

krake.api.auth.always_deny(request, auth_request)
Authorizer denying every request.

Parameters:
- request (aiohttp.web.Request) – Incoming HTTP request
- auth_request (AuthorizationRequest) – Authorization request associated with the incoming HTTP request.

Raises: aiohttp.web.HTTPForbidden – Always raised

krake.api.auth.client_certificate_authentication()
Authenticator factory for authenticating requests with client certificates.

The client certificate is loaded from the peercert attribute of the underlying TCP transport. The common name of the client certificate is used as the username.

Returns: Authenticator using client certificate information for authentication.
Return type: callable

krake.api.auth.keycloak_authentication(endpoint, realm)
Authenticator factory for Keycloak authentication.

The token in the Authorization header of a request sent to Krake will be sent as access token to the OpenID user information endpoint. The user name returned by Keycloak is used as the authenticated user name.

The authenticator requires an HTTP client session that is loaded from the http key of the application.

Returns: Authenticator for the given Keycloak endpoint.
Return type: callable

krake.api.auth.keystone_authentication(endpoint)
Authenticator factory for OpenStack Keystone authentication.

The token in the Authorization header of a request will be used as X-Auth-Token header for a request to the Keystone token endpoint. The user name returned by Keystone is used as the authenticated user name.

The authenticator requires an HTTP client session that is loaded from the http key of the application.

Parameters: endpoint (str) – Keystone HTTP endpoint
Returns: Authenticator for the given Keystone endpoint.
Return type: callable

krake.api.auth.protected(api, resource, verb)
Decorator function for aiohttp request handlers performing authorization.

The returned decorator can be used to wrap a given aiohttp handler and call the current authorizer of the application (loaded from the authorizer key of the application). If the authorizer does not raise any exception, the request is authorized and the wrapped request handler is called.

Example

from krake.api.auth import protected

@routes.get("/book/{name}")
@protected(api="v1", resource="book", verb="get", namespaced=False)
async def get_resource(request):
    assert "user" in request

Parameters:
- api (str) – Name of the API group
- resource (str) – Name of the resource
- verb (str, krake.data.core.Verb) – Verb that should be performed

Returns: Decorator that can be used to wrap a given aiohttp request handler.
Return type: callable

krake.api.auth.rbac(request, auth_request)
Role-based access control authorizer.

The roles of a user are loaded from the database. It checks if any role allows the verb on the resource in the namespace. Roles are only permissive; there are no denial rules.

Parameters:
- request (aiohttp.web.Request) – Incoming HTTP request
- auth_request (AuthorizationRequest) – Authorization request associated with the incoming HTTP request.

Returns: The role allowing access.
Raises: aiohttp.web.HTTPForbidden – If no role allows access.
Database Abstraction

Database abstraction for etcd. The key idea of the abstraction is to provide a declarative way of defining persistent data structures ("models") together with a simple interface for loading and storing these data structures.

This goal is achieved by leveraging the JSON-serializable data classes based on krake.data.serializable and combining them with a simple database session.
Example
from krake.api.database import Session
from krake.data import Key
from krake.data.serializable import Serializable
class Book(Serializable):
isbn: int
title: str
__etcd_key__ = Key("/books/{isbn}")
async with Session(host="localhost") as session:
book = await session.get(Book, isbn=9783453146976)
class krake.api.database.EtcdClient(host='127.0.0.1', port=2379, protocol='http', cert=(), verify=None, timeout=None, headers=None, user_agent=None, pool_size=30, username=None, password=None, token=None, server_version='3.3.0', cluster_version='3.3.0')
Bases: etcd3.aio_client.AioClient

Async etcd v3 client based on etcd3.aio_client.AioClient with some minor patches.

class krake.api.database.Event
Bases: tuple

Events that are yielded by Session.watch()

event
Alias for field number 0

rev
Alias for field number 2

value
Alias for field number 1

class krake.api.database.EventType
Bases: enum.Enum

Different types of events that can occur during Session.watch().
class krake.api.database.Revision
Bases: tuple

Etcd revision of a loaded key-value pair.

Etcd stores all keys in a flat binary key space. The key space has a lexically sorted index on byte string keys. The key space maintains multiple revisions of the same key. Each atomic mutative operation (e.g., a transaction operation may contain multiple operations) creates a new revision on the key space.

Every Session.get() request also returns the revision besides the model.

version
The version of the key. A deletion resets the version to zero, and any modification of the key increases its version.
Type: int

created
Alias for field number 1

key
Alias for field number 0

modified
Alias for field number 2

version
Alias for field number 3
class krake.api.database.Session(host, port=2379, loop=None)
Bases: object

Database session for managing krake.data.serializable.Serializable objects in an etcd database.

The serializable objects need to have one additional attribute:

__etcd_key__
A krake.data.Key template for the associated etcd key of a managed object.

Objects managed by a session have an attached etcd Revision when loaded from the database. This revision can be read with revision(). If an object has no revision attached, it is considered fresh or new. It is expected that the associated key of a new object does not already exist in the database.

The session is an asynchronous context manager. It takes care of opening and closing an HTTP session to the gRPC JSON gateway of the etcd server.

The etcd v3 protocol is documented by its protobuf definitions.

Example

async with Session(host="localhost") as session:
    pass
all(cls, **kwargs)
Fetch all instances of a given type.

The instances can be filtered by partial identities. Every identity can be specified as keyword argument and only instances with this identity attribute are returned. The only requirement for a filtered identity attribute is that all preceding identity attributes must also be given.

Example

class Book(Serializable):
    isbn: int
    title: str
    author: str

    __metadata__ = {
        "key": Key("/books/{author}/{isbn}")
    }

await db.all(Book)

# Get all books by Adam Douglas
await db.all(Book, author="Adam Douglas")

# This will raise a TypeError because the preceding "author"
# attribute is not given.
await db.all(Book, isbn=42)

Parameters:
- cls (type) – Serializable class that should be loaded
- **kwargs – Parameters for the etcd key

Yields: (object, Revision) – Tuple of deserialized model and revision
Raises: TypeError – If an identity attribute is given without all preceding identity attributes.
client
Lazy loading of the etcd client. It is only created when the first request is performed.

Returns: the client to connect to the database.
Return type: EtcdClient

delete(instance)
Delete a given instance from etcd.

A transaction is used to ensure the etcd key was not modified in between. If the transaction is successful, the revision of the instance will be updated to the revision returned by the transaction response.

Parameters: instance (object) – Serializable object that should be deleted
Raises:
- ValueError – If the passed object has no revision attached.
- TransactionError – If the key was modified in between

get(cls, **kwargs)
Fetch a serializable object from the etcd server, specified by its identity attributes.

Parameters: **kwargs – Parameters for the etcd key
Returns: Deserialized model with attached revision. If the key was not found in etcd, None is returned.
Return type: object, None

load_instance(cls, kv)
Load an instance and its revision from an etcd key-value pair.

Parameters:
- cls (type) – Serializable type
- kv – etcd key-value pair

Returns: Deserialized model with attached revision

put(instance)
Store a new revision of a serializable object on the etcd server.

If the instance does not have an attached Revision (see revision()), it is assumed that a new key-value pair should be created. Otherwise, it is assumed that the key-value pair is updated.

A transaction ensures that

- the etcd key was not modified in between if the key is updated
- the key does not already exist if a key is added

If the transaction is successful, the revision of the instance will be updated to the revision returned by the transaction response.

Parameters: instance (krake.data.serializable.Serializable) – Serializable object that should be stored.
Raises: TransactionError – If the key was modified in between or already exists

watch(cls, **kwargs)
Watch the namespace of a given serializable type and yield every change in this namespace.

Internally, it uses the etcd watch API. The created future can be used to signal successful creation of an etcd watcher.

Parameters:
- cls (type) – Serializable type of which the namespace should be watched
- **kwargs – Parameters for the etcd key

Yields: Event – Every change in the namespace will generate an event
exception krake.api.database.TransactionError

class krake.api.database.Watcher(session, model, **kwargs)
Bases: object

Async context manager for database watching requests.

This context manager is used internally by Session.watch(). It returns an async generator on entering. It is ensured that the watch is created on entering; this means inside the context, it can be assumed that the watch exists.

watch()
Async generator for watching a database prefix.

Yields: Event – Database event holding the loaded model (see model argument) and database revision.

krake.api.database.revision(instance)
Returns the etcd Revision of an object used with a Session. If the object is currently unattached – which means it was not retrieved from the database with Session.get() – this function returns None.

Parameters: instance (object) – Object used with Session.
Returns: The current etcd revision of the instance.
Return type: Revision, None
Helpers

Simple helper functions that are used by the HTTP endpoints.

class krake.api.helpers.Heartbeat(response, interval=None)
Bases: object

Asynchronous context manager for heartbeating long-running HTTP responses.

Writes newlines to the response body at a given heartbeat interval. If interval is set to 0, no heartbeat will be sent.

Parameters:
- response (aiohttp.web.StreamResponse) – Prepared HTTP response with chunked encoding
- interval (int, float, optional) – Heartbeat interval in seconds. Default: 10 seconds.

Raises: ValueError – If the response is not prepared or not chunked-encoded

Example

import asyncio
from aiohttp import web
from krake.helpers import Heartbeat

async def handler(request):
    # Prepare streaming response
    resp = web.StreamResponse()
    resp.enable_chunked_encoding()
    await resp.prepare(request)

    async with Heartbeat(resp):
        while True:
            await resp.write(b"spam\n")
            await asyncio.sleep(120)

heartbeat()
Indefinitely write a new line to the response body and sleep for interval.
class krake.api.helpers.HttpProblem(**kwargs)
Bases: krake.data.serializable.Serializable

Store the reasons for failures of the HTTP layers for the API.

The reason is stored as an RFC 7807 Problem. It is a way to define uniform, machine-readable details of errors in an HTTP response. See https://tools.ietf.org/html/rfc7807 for details.

type
A URI reference that identifies the problem type. It should point the Krake API users to the concrete part of the Krake documentation where the problem type is explained in detail. Defaults to about:blank.
Type: str

title
A short, human-readable summary of the problem type
Type: HttpProblemTitle

class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)
Bases: krake.data.serializable.ModelizedSchema

classmethod remove_none_values(data, **kwargs)
Remove attributes if the value equals None

__post_init__()
HACK: marshmallow.Schema allows registering hooks like post_dump. This is not allowed in Krake Serializable, therefore the __post_init__ method is registered directly within the hook.

remove_none_values(data, **kwargs)
Remove attributes if the value equals None
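For orientation, the attributes above serialize to an RFC 7807 problem document roughly like the following; the title, status, detail and instance values here are invented for illustration:

```python
import json

# Hypothetical serialized HttpProblem; the field names follow RFC 7807.
problem = {
    "type": "about:blank",
    "title": "transaction-error",  # an illustrative HttpProblemTitle value
    "status": 409,
    "detail": "The resource was modified concurrently.",
    "instance": "/kubernetes/namespaces/system/applications/echo",
}

body = json.dumps(problem)
```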
exception krake.api.helpers.HttpProblemError(exc: aiohttp.web_exceptions.HTTPException, problem: krake.api.helpers.HttpProblem = HttpProblem(type='about:blank', title=None, status=None, detail=None, instance=None), **kwargs)
Bases: Exception

Custom exception raised if failures occur on the HTTP layers

class krake.api.helpers.HttpProblemTitle
Bases: enum.Enum

Store the title of an RFC 7807 problem.

The RFC 7807 problem title is a short, human-readable summary of the problem type. The name defines the title itself. The value is used as part of the URI reference that identifies the problem type; see middlewares.problem_response() for details.

class krake.api.helpers.ListQuery
Bases: object

Simple mixin class for operation template classes.

Defines the default operation.query attribute for list and list-all operations.

class krake.api.helpers.QueryFlag(**metadata)
Bases: marshmallow.fields.Field

Field used for boolean query parameters.

If the query parameter exists, the field is deserialized to True regardless of the value. The field is marked as load_only.

deserialize(value, attr=None, data=None, **kwargs)
Deserialize value.

Parameters:
- value – The value to deserialize.
- attr – The attribute/key in data to deserialize.
- data – The raw input data passed to Schema.load.
- kwargs – Field-specific keyword arguments.

Raises: ValidationError – If an invalid value is passed or if a required value is missing.

krake.api.helpers.blocking()
Decorator function to enable function blocking. This allows a response to be returned only once the requested action is completed (e.g. deletion of a resource). The function logic is therefore executed after its decorated counterpart.

Returns: JSON-style response coming from the handler
Return type: Response
krake.api.helpers.load(argname, cls)
Decorator function for loading database models from URL parameters.

The wrapper loads the name parameter from the request's match_info attribute. If the match_info contains a namespace parameter, it is used as an etcd key parameter as well.

Example

from aiohttp import web
from krake.data import serialize
from krake.data.core import Role

@load("role", Role)
def get_role(request, role):
    return json_response(serialize(role))

Returns: Decorator for aiohttp request handlers
Return type: callable

krake.api.helpers.make_create_request_schema(cls)
Create a marshmallow.Schema excluding subresources and read-only fields.

Parameters: cls (type) – Data class with Schema attribute
Returns: Schema instance with excluded subresources
Return type: marshmallow.Schema

krake.api.helpers.session(request)
Load the database session for a given aiohttp request.

Internally, it just returns the value that was set as cleanup context by krake.api.app.db_session().

Parameters: request (aiohttp.web.Request) – HTTP request
Returns: Database session for the given request
Return type: krake.database.Session

krake.api.helpers.use_schema(argname, schema)
Decorator function for loading a marshmallow.Schema from the request body.

If the request body is not valid JSON, aiohttp.web.HTTPUnsupportedMediaType will be raised in the wrapper.

Parameters:
- argname (str) – Name of the keyword argument that will be passed to the wrapped function.
- schema (marshmallow.Schema) – Schema that should be used to deserialize the request body

Returns: Decorator for aiohttp request handlers
Return type: callable
Middlewares

This module defines aiohttp middlewares for the Krake HTTP API.

krake.api.middlewares.authentication(authenticators, allow_anonymous)
Middleware factory authenticating every request.

The concrete implementation is delegated to the passed asynchronous authenticator functions (see krake.api.auth for details). These functions return the username for an incoming request. If the request is unauthenticated – meaning every authenticator returns None – system:anonymous is used as username.

The username is registered under the user key of the incoming request.

Anonymous requests can be allowed. If no authenticator authenticates the incoming request, "system:anonymous" is assigned as user for the request. This behavior can be disabled. In that case, "401 Unauthorized" is raised if a request is not authenticated by any authenticator.

Parameters:
- authenticators (List[callable]) – List of asynchronous functions returning the username for a given request.
- allow_anonymous (bool) – If True, anonymous (unauthenticated) requests are allowed.

Returns: aiohttp middleware loading a username for every incoming HTTP request.

krake.api.middlewares.error_log()
Middleware factory for logging exceptions in request handlers.

Returns: aiohttp middleware catching every exception, logging it to the passed logger and re-raising the exception.

krake.api.middlewares.problem_response(problem_base_url=None)
Middleware factory for HTTP exceptions in request handlers.

Parameters: problem_base_url (str, optional) – Base URL of the Krake documentation where HTTP problems are explained in detail.
Returns: aiohttp middleware catching HttpProblemError or HTTPException based exceptions, transforming the exception text to the helpers.HttpProblem (RFC 7807 Problem representation of failure) and re-raising the exception.

krake.api.middlewares.retry_transaction(retry=1)
Middleware factory for transaction error handling.

If a database.TransactionError occurs, the request handler is retried for the specified number of times. If the transaction error persists, a "409 Conflict" HTTP exception is raised.

Parameters: retry (int, optional) – Number of retries if a transaction error occurs.
Returns: aiohttp middleware handling transaction errors.
Return type: coroutine
Client¶
This module provides a simple Python client to the Krake HTTP API. It
leverages the same data models as the API server from krake.data
.
-
class
krake.client.
ApiClient
(client)¶ Bases:
object
Base class for all clients of a specific Krake API.
-
client
¶ the lower-level client to use to create the actual connections.
Type: krake.client.Client
-
plurals
¶ contains the name of the resources handled by the current API and their corresponding names in plural: “<name_in_singular>”: “<name_in_plural>”
Type: dict[str, str]
Parameters: client (krake.client.Client) – client to use for the HTTP communications. -
-
class
krake.client.
Client
(url, loop=None, ssl_context=None)¶ Bases:
object
Simple async Python client for the Krake HTTP API.
The specific APIs are implemented in separate classes. Each API object requires an
Client
instance to interface the HTTP REST API.The client implements the asynchronous context manager protocol used to handle opening and closing the internal HTTP session.
Example
from krake.client import Client from krake.client.core import CoreApi async with Client("http://localhost:8080") as client: core_api = CoreApi(client) role = await core_api.read_role(name="reader")
-
close
()¶ Close the internal HTTP session and remove all resource attributes.
-
open
()¶ Open the internal HTTP session and initializes all resource attributes.
-
-
class
krake.client.
Watcher
(session, url, model)¶ Bases:
object
Async context manager used by
watch_*()
methods ofClientApi
.The context manager returns the async generator of resources. On entering it is ensured that the watch is created. This means inside the context a watch is already established.
Parameters: - session (aiohttp.ClientSession) – HTTP session that is used to access the REST API.
- url (str) – URL for the watch request
- model (type) – Type that will be used to deserialize
krake.data.core.WatchEvent.object
- watch()¶ Async generator yielding watch events.
Yields: krake.data.core.WatchEvent – Watch events where object is already deserialized according to the API definition (see the model argument).
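The async-context-manager-plus-generator protocol described above can be illustrated with stdlib asyncio alone. SimpleWatcher and this WatchEvent are illustrative stand-ins for the Krake classes, not their actual implementation:

```python
import asyncio

# Hypothetical event type standing in for krake.data.core.WatchEvent.
class WatchEvent:
    def __init__(self, obj):
        self.object = obj

class SimpleWatcher:
    """Sketch of the Watcher protocol: an async context manager that,
    on entering, establishes the watch and hands back an async
    generator of deserialized events."""

    def __init__(self, events):
        self._events = events
        self.established = False

    async def __aenter__(self):
        # On entering, the "watch" is established before any event is read.
        self.established = True
        return self.watch()

    async def __aexit__(self, *exc):
        self.established = False

    async def watch(self):
        for obj in self._events:
            # Here each raw object would be deserialized into the `model` type.
            yield WatchEvent(obj)

async def main():
    received = []
    async with SimpleWatcher(["role-a", "role-b"]) as watcher:
        async for event in watcher:
            received.append(event.object)
    return received

print(asyncio.run(main()))  # ['role-a', 'role-b']
```

Consumers only see the async generator; opening and tearing down the connection stays inside the context manager.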
Client APIs¶
- class krake.client.core.CoreApi(client)¶ Bases: krake.client.ApiClient
Core API client
Example
from krake.client import Client

async with Client(url="http://localhost:8080") as client:
    core_api = CoreApi(client)
Parameters: client (krake.client.Client) – API client for accessing the Krake HTTP API
- create_global_metric(body)¶ Create the specified GlobalMetric.
Parameters: body (GlobalMetric) – Body of the HTTP request. Returns: Body of the HTTP response. Return type: GlobalMetric
- create_global_metrics_provider(body)¶ Create the specified GlobalMetricsProvider.
Parameters: body (GlobalMetricsProvider) – Body of the HTTP request. Returns: Body of the HTTP response. Return type: GlobalMetricsProvider
- create_metric(body, namespace)¶ Create the specified Metric.
Parameters: - body (Metric) – Body of the HTTP request.
- namespace (str) – Namespace of the Metric.
Returns: Body of the HTTP response.
Return type: Metric
- create_metrics_provider(body, namespace)¶ Create the specified MetricsProvider.
Parameters: - body (MetricsProvider) – Body of the HTTP request.
- namespace (str) – Namespace of the MetricsProvider.
Returns: Body of the HTTP response.
Return type: MetricsProvider
- create_role(body)¶ Create the specified Role.
Parameters: body (Role) – Body of the HTTP request. Returns: Body of the HTTP response. Return type: Role
- create_role_binding(body)¶ Create the specified RoleBinding.
Parameters: body (RoleBinding) – Body of the HTTP request. Returns: Body of the HTTP response. Return type: RoleBinding
- delete_global_metric(name)¶ Delete the specified GlobalMetric.
Parameters: name (str) – name of the GlobalMetric. Returns: Body of the HTTP response. Return type: GlobalMetric
- delete_global_metrics_provider(name)¶ Delete the specified GlobalMetricsProvider.
Parameters: name (str) – name of the GlobalMetricsProvider. Returns: Body of the HTTP response. Return type: GlobalMetricsProvider
- delete_metric(name, namespace)¶ Delete the specified Metric.
Parameters: - name (str) – name of the Metric.
- namespace (str) – namespace of the Metric.
Returns: Body of the HTTP response.
Return type: Metric
- delete_metrics_provider(name, namespace)¶ Delete the specified MetricsProvider.
Parameters: - name (str) – name of the MetricsProvider.
- namespace (str) – namespace of the MetricsProvider.
Returns: Body of the HTTP response.
Return type: MetricsProvider
- delete_role(name)¶ Delete the specified Role.
Parameters: name (str) – name of the Role. Returns: Body of the HTTP response. Return type: Role
- delete_role_binding(name)¶ Delete the specified RoleBinding.
Parameters: name (str) – name of the RoleBinding. Returns: Body of the HTTP response. Return type: RoleBinding
- list_global_metrics()¶ List the GlobalMetrics in the namespace.
Returns: Body of the HTTP response. Return type: GlobalMetricList
- list_global_metrics_providers()¶ List the GlobalMetricsProviders in the namespace.
Returns: Body of the HTTP response. Return type: GlobalMetricsProviderList
- list_metrics(namespace=None)¶ List the Metrics in the namespace.
Parameters: namespace (str) – namespace of the Metric. Returns: Body of the HTTP response. Return type: MetricList
- list_metrics_providers(namespace=None)¶ List the MetricsProviders in the namespace.
Parameters: namespace (str) – namespace of the MetricsProvider. Returns: Body of the HTTP response. Return type: MetricsProviderList
- list_role_bindings()¶ List the RoleBindings in the namespace.
Returns: Body of the HTTP response. Return type: RoleBindingList
- list_roles()¶ List the Roles in the namespace.
Returns: Body of the HTTP response. Return type: RoleList
- read_global_metric(name)¶ Read the specified GlobalMetric.
Parameters: name (str) – name of the GlobalMetric. Returns: Body of the HTTP response. Return type: GlobalMetric
- read_global_metrics_provider(name)¶ Read the specified GlobalMetricsProvider.
Parameters: name (str) – name of the GlobalMetricsProvider. Returns: Body of the HTTP response. Return type: GlobalMetricsProvider
- read_metric(name, namespace)¶ Read the specified Metric.
Parameters: - name (str) – name of the Metric.
- namespace (str) – namespace of the Metric.
Returns: Body of the HTTP response.
Return type: Metric
- read_metrics_provider(name, namespace)¶ Read the specified MetricsProvider.
Parameters: - name (str) – name of the MetricsProvider.
- namespace (str) – namespace of the MetricsProvider.
Returns: Body of the HTTP response.
Return type: MetricsProvider
- read_role(name)¶ Read the specified Role.
Parameters: name (str) – name of the Role. Returns: Body of the HTTP response. Return type: Role
- read_role_binding(name)¶ Read the specified RoleBinding.
Parameters: name (str) – name of the RoleBinding. Returns: Body of the HTTP response. Return type: RoleBinding
- update_global_metric(body, name)¶ Update the specified GlobalMetric.
Parameters: - body (GlobalMetric) – Body of the HTTP request.
- name (str) – name of the GlobalMetric.
Returns: Body of the HTTP response.
Return type: GlobalMetric
- update_global_metrics_provider(body, name)¶ Update the specified GlobalMetricsProvider.
Parameters: - body (GlobalMetricsProvider) – Body of the HTTP request.
- name (str) – name of the GlobalMetricsProvider.
Returns: Body of the HTTP response.
Return type: GlobalMetricsProvider
- update_metric(body, name, namespace)¶ Update the specified Metric.
Parameters: - body (Metric) – Body of the HTTP request.
- name (str) – name of the Metric.
- namespace (str) – namespace of the Metric.
Returns: Body of the HTTP response.
Return type: Metric
- update_metrics_provider(body, name, namespace)¶ Update the specified MetricsProvider.
Parameters: - body (MetricsProvider) – Body of the HTTP request.
- name (str) – name of the MetricsProvider.
- namespace (str) – namespace of the MetricsProvider.
Returns: Body of the HTTP response.
Return type: MetricsProvider
- update_role(body, name)¶ Update the specified Role.
Parameters: - body (Role) – Body of the HTTP request.
- name (str) – name of the Role.
Returns: Body of the HTTP response.
Return type: Role
- update_role_binding(body, name)¶ Update the specified RoleBinding.
Parameters: - body (RoleBinding) – Body of the HTTP request.
- name (str) – name of the RoleBinding.
Returns: Body of the HTTP response.
Return type: RoleBinding
- watch_global_metrics(heartbeat=None)¶ Generate a watcher for the GlobalMetrics in the namespace.
Parameters: heartbeat (int) – Number of seconds after which the server sends a heartbeat in the form of an empty newline. Passing 0 disables the heartbeat. Default: 10 seconds. Returns: Body of the HTTP response. Return type: GlobalMetricList
- watch_global_metrics_providers(heartbeat=None)¶ Generate a watcher for the GlobalMetricsProviders in the namespace.
Parameters: heartbeat (int) – Number of seconds after which the server sends a heartbeat in the form of an empty newline. Passing 0 disables the heartbeat. Default: 10 seconds. Returns: Body of the HTTP response. Return type: GlobalMetricsProviderList
- watch_metrics(namespace=None, heartbeat=None)¶ Generate a watcher for the Metrics in the namespace.
Parameters: - namespace (str) – namespace of the Metric.
- heartbeat (int) – Number of seconds after which the server sends a heartbeat in the form of an empty newline. Passing 0 disables the heartbeat. Default: 10 seconds.
Returns: Body of the HTTP response.
Return type: MetricList
- watch_metrics_providers(namespace=None, heartbeat=None)¶ Generate a watcher for the MetricsProviders in the namespace.
Parameters: - namespace (str) – namespace of the MetricsProvider.
- heartbeat (int) – Number of seconds after which the server sends a heartbeat in the form of an empty newline. Passing 0 disables the heartbeat. Default: 10 seconds.
Returns: Body of the HTTP response.
Return type: MetricsProviderList
- watch_role_bindings(heartbeat=None)¶ Generate a watcher for the RoleBindings in the namespace.
Parameters: heartbeat (int) – Number of seconds after which the server sends a heartbeat in the form of an empty newline. Passing 0 disables the heartbeat. Default: 10 seconds. Returns: Body of the HTTP response. Return type: RoleBindingList
- watch_roles(heartbeat=None)¶ Generate a watcher for the Roles in the namespace.
Parameters: heartbeat (int) – Number of seconds after which the server sends a heartbeat in the form of an empty newline. Passing 0 disables the heartbeat. Default: 10 seconds. Returns: Body of the HTTP response. Return type: RoleList
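The watch_* methods above stream events and send an empty newline as a heartbeat. As a hedged sketch of how a consumer might separate heartbeats from events in such a newline-delimited stream (parse_watch_lines is a hypothetical helper, not part of the Krake client API):

```python
import json

def parse_watch_lines(lines):
    """Decode a newline-delimited JSON watch stream, skipping the empty
    lines that the server sends as heartbeats. Hypothetical helper for
    illustration only."""
    events = []
    for line in lines:
        if not line.strip():  # heartbeat: an empty newline carries no event
            continue
        events.append(json.loads(line))
    return events

stream = [
    '{"type": "ADDED", "object": {"name": "reader"}}',
    "",  # heartbeat
    "",  # heartbeat
    '{"type": "MODIFIED", "object": {"name": "reader"}}',
]
print(parse_watch_lines(stream))
```

With a heartbeat of 10 seconds, a client can treat a stream that stays silent for noticeably longer than that as a broken connection and reconnect.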
- class krake.client.infrastructure.InfrastructureApi(client)¶ Bases: krake.client.ApiClient
Infrastructure API client
Example
from krake.client import Client

async with Client(url="http://localhost:8080") as client:
    infrastructure_api = InfrastructureApi(client)
Parameters: client (krake.client.Client) – API client for accessing the Krake HTTP API
- create_cloud(body, namespace)¶ Create the specified Cloud.
Parameters: - body (Cloud) – Body of the HTTP request.
- namespace (str) – namespace in which the Cloud will be created.
Returns: Body of the HTTP response.
Return type: Cloud
- create_global_cloud(body)¶ Create the specified GlobalCloud.
Parameters: body (GlobalCloud) – Body of the HTTP request. Returns: Body of the HTTP response. Return type: GlobalCloud
- create_global_infrastructure_provider(body)¶ Create the specified GlobalInfrastructureProvider.
Parameters: body (GlobalInfrastructureProvider) – Body of the HTTP request. Returns: Body of the HTTP response. Return type: GlobalInfrastructureProvider
- create_infrastructure_provider(body, namespace)¶ Create the specified InfrastructureProvider.
Parameters: - body (InfrastructureProvider) – Body of the HTTP request.
- namespace (str) – namespace in which the InfrastructureProvider will be created.
Returns: Body of the HTTP response.
Return type: InfrastructureProvider
- delete_cloud(namespace, name)¶ Delete the specified Cloud.
Parameters: - namespace (str) – namespace of the Cloud.
- name (str) – name of the Cloud.
Returns: Body of the HTTP response.
Return type: Cloud
- delete_global_cloud(name)¶ Delete the specified GlobalCloud.
Parameters: name (str) – name of the GlobalCloud. Returns: Body of the HTTP response. Return type: GlobalCloud
- delete_global_infrastructure_provider(name)¶ Delete the specified GlobalInfrastructureProvider.
Parameters: name (str) – name of the GlobalInfrastructureProvider. Returns: Body of the HTTP response. Return type: GlobalInfrastructureProvider
- delete_infrastructure_provider(namespace, name)¶ Delete the specified InfrastructureProvider.
Parameters: - namespace (str) – namespace of the InfrastructureProvider.
- name (str) – name of the InfrastructureProvider.
Returns: Body of the HTTP response.
Return type: InfrastructureProvider
- list_all_infrastructure_providers()¶ List all InfrastructureProviders.
Returns: Body of the HTTP response. Return type: InfrastructureProviderList
- list_clouds(namespace)¶ List the Clouds in the namespace.
Parameters: namespace (str) – namespace of the Clouds. Returns: Body of the HTTP response. Return type: CloudList
- list_global_clouds()¶ List the GlobalClouds in the namespace.
Returns: Body of the HTTP response. Return type: GlobalCloudList
- list_global_infrastructure_providers()¶ List the GlobalInfrastructureProviders in the namespace.
Returns: Body of the HTTP response. Return type: GlobalInfrastructureProviderList
- list_infrastructure_providers(namespace)¶ List the InfrastructureProviders in the namespace.
Parameters: namespace (str) – namespace of the InfrastructureProviders. Returns: Body of the HTTP response. Return type: InfrastructureProviderList
- read_cloud(namespace, name)¶ Read the specified Cloud.
Parameters: - namespace (str) – namespace of the Cloud.
- name (str) – name of the Cloud.
Returns: Body of the HTTP response.
Return type: Cloud
- read_global_cloud(name)¶ Read the specified GlobalCloud.
Parameters: name (str) – name of the GlobalCloud. Returns: Body of the HTTP response. Return type: GlobalCloud
- read_global_infrastructure_provider(name)¶ Read the specified GlobalInfrastructureProvider.
Parameters: name (str) – name of the GlobalInfrastructureProvider. Returns: Body of the HTTP response. Return type: GlobalInfrastructureProvider
- read_infrastructure_provider(namespace, name)¶ Read the specified InfrastructureProvider.
Parameters: - namespace (str) – namespace of the InfrastructureProvider.
- name (str) – name of the InfrastructureProvider.
Returns: Body of the HTTP response.
Return type: InfrastructureProvider
- update_cloud(body, namespace, name)¶ Update the specified Cloud.
Parameters: - body (Cloud) – Body of the HTTP request.
- namespace (str) – namespace of the Cloud.
- name (str) – name of the Cloud.
Returns: Body of the HTTP response.
Return type: Cloud
- update_cloud_status(body, namespace, name)¶ Update the status of the specified Cloud.
Parameters: - body (Cloud) – Body of the HTTP request.
- namespace (str) – namespace of the Cloud.
- name (str) – name of the Cloud.
Returns: Body of the HTTP response.
Return type: Cloud
- update_global_cloud(body, name)¶ Update the specified GlobalCloud.
Parameters: - body (GlobalCloud) – Body of the HTTP request.
- name (str) – name of the GlobalCloud.
Returns: Body of the HTTP response.
Return type: GlobalCloud
- update_global_cloud_status(body, name)¶ Update the status of the specified GlobalCloud.
Parameters: - body (GlobalCloud) – Body of the HTTP request.
- name (str) – name of the GlobalCloud.
Returns: Body of the HTTP response.
Return type: GlobalCloud
- update_global_infrastructure_provider(body, name)¶ Update the specified GlobalInfrastructureProvider.
Parameters: - body (GlobalInfrastructureProvider) – Body of the HTTP request.
- name (str) – name of the GlobalInfrastructureProvider.
Returns: Body of the HTTP response.
Return type: GlobalInfrastructureProvider
- update_infrastructure_provider(body, namespace, name)¶ Update the specified InfrastructureProvider.
Parameters: - body (InfrastructureProvider) – Body of the HTTP request.
- namespace (str) – namespace in which the InfrastructureProvider will be updated.
- name (str) – name of the InfrastructureProvider.
Returns: Body of the HTTP response.
Return type: InfrastructureProvider
- watch_all_clouds(heartbeat=None)¶ Generate a watcher for all Clouds.
Parameters: heartbeat (int) – Number of seconds after which the server sends a heartbeat in the form of an empty newline. Passing 0 disables the heartbeat. Default: 10 seconds. Returns: Body of the HTTP response. Return type: CloudList
- watch_all_infrastructure_providers(heartbeat=None)¶ Generate a watcher for all InfrastructureProviders.
Parameters: heartbeat (int) – Number of seconds after which the server sends a heartbeat in the form of an empty newline. Passing 0 disables the heartbeat. Default: 10 seconds. Returns: Body of the HTTP response. Return type: InfrastructureProviderList
- watch_clouds(namespace, heartbeat=None)¶ Generate a watcher for the Clouds in the namespace.
Parameters: - namespace (str) – namespace of the Clouds.
- heartbeat (int) – Number of seconds after which the server sends a heartbeat in the form of an empty newline. Passing 0 disables the heartbeat. Default: 10 seconds.
Returns: Body of the HTTP response.
Return type: CloudList
- watch_global_clouds(heartbeat=None)¶ Generate a watcher for the GlobalClouds in the namespace.
Parameters: heartbeat (int) – Number of seconds after which the server sends a heartbeat in the form of an empty newline. Passing 0 disables the heartbeat. Default: 10 seconds. Returns: Body of the HTTP response. Return type: GlobalCloudList
- watch_global_infrastructure_providers(heartbeat=None)¶ Generate a watcher for the GlobalInfrastructureProviders in the namespace.
Parameters: heartbeat (int) – Number of seconds after which the server sends a heartbeat in the form of an empty newline. Passing 0 disables the heartbeat. Default: 10 seconds. Returns: Body of the HTTP response. Return type: GlobalInfrastructureProviderList
- watch_infrastructure_providers(namespace, heartbeat=None)¶ Generate a watcher for the InfrastructureProviders in the namespace.
Parameters: - namespace (str) – namespace of the InfrastructureProviders.
- heartbeat (int) – Number of seconds after which the server sends a heartbeat in the form of an empty newline. Passing 0 disables the heartbeat. Default: 10 seconds.
Returns: Body of the HTTP response.
Return type: InfrastructureProviderList
- class krake.client.kubernetes.KubernetesApi(client)¶ Bases: krake.client.ApiClient
Kubernetes API client
Example
from krake.client import Client

async with Client(url="http://localhost:8080") as client:
    kubernetes_api = KubernetesApi(client)
Parameters: client (krake.client.Client) – API client for accessing the Krake HTTP API
- create_application(body, namespace)¶ Creates the specified Application.
Parameters: - body (Application) – Body of the HTTP request.
- namespace (str) – namespace in which the Application will be created.
Returns: Body of the HTTP response.
Return type: Application
- create_cluster(body, namespace)¶ Creates the specified Cluster.
Parameters: - body (Cluster) – Body of the HTTP request.
- namespace (str) – namespace in which the Cluster will be created.
Returns: Body of the HTTP response.
Return type: Cluster
- delete_application(namespace, name)¶ Deletes the specified Application.
Parameters: - namespace (str) – namespace of the Application.
- name (str) – name of the Application.
Returns: Body of the HTTP response.
Return type: Application
- delete_cluster(namespace, name)¶ Deletes the specified Cluster.
Parameters: - namespace (str) – namespace of the Cluster.
- name (str) – name of the Cluster.
Returns: Body of the HTTP response.
Return type: Cluster
- list_all_applications()¶ Lists all Applications.
Returns: Body of the HTTP response. Return type: ApplicationList
- list_all_clusters()¶ Lists all Clusters.
Returns: Body of the HTTP response. Return type: ClusterList
- list_applications(namespace)¶ Lists the Applications in the namespace.
Parameters: namespace (str) – namespace of the Applications. Returns: Body of the HTTP response. Return type: ApplicationList
- list_clusters(namespace)¶ Lists the Clusters in the namespace.
Parameters: namespace (str) – namespace of the Clusters. Returns: Body of the HTTP response. Return type: ClusterList
- read_application(namespace, name)¶ Reads the specified Application.
Parameters: - namespace (str) – namespace of the Application.
- name (str) – name of the Application.
Returns: Body of the HTTP response.
Return type: Application
- read_cluster(namespace, name)¶ Reads the specified Cluster.
Parameters: - namespace (str) – namespace of the Cluster.
- name (str) – name of the Cluster.
Returns: Body of the HTTP response.
Return type: Cluster
- update_application(body, namespace, name)¶ Updates the specified Application.
Parameters: - body (Application) – Body of the HTTP request.
- namespace (str) – namespace in which the Application will be updated.
- name (str) – name of the Application.
Returns: Body of the HTTP response.
Return type: Application
- update_application_binding(body, namespace, name)¶ Updates the binding of the specified Application.
Parameters: - body (ClusterBinding) – Body of the HTTP request.
- namespace (str) – namespace in which the Application will be updated.
- name (str) – name of the Application.
Returns: Body of the HTTP response.
Return type: Application
- update_application_complete(body, namespace, name)¶ Updates the “complete” subresource of the specified Application.
Parameters: - body (ApplicationComplete) – Body of the HTTP request.
- namespace (str) – namespace in which the Application will be updated.
- name (str) – name of the Application.
Returns: Body of the HTTP response.
Return type: Application
- update_application_shutdown(body, namespace, name)¶ Updates the “shutdown” subresource of the specified Application.
Parameters: - body (ApplicationShutdown) – Body of the HTTP request.
- namespace (str) – namespace in which the Application will be updated.
- name (str) – name of the Application.
Returns: Body of the HTTP response.
Return type: Application
- update_application_status(body, namespace, name)¶ Updates the status of the specified Application.
Parameters: - body (Application) – Body of the HTTP request.
- namespace (str) – namespace in which the Application will be updated.
- name (str) – name of the Application.
Returns: Body of the HTTP response.
Return type: Application
- update_cluster(body, namespace, name)¶ Updates the specified Cluster.
Parameters: - body (Cluster) – Body of the HTTP request.
- namespace (str) – namespace in which the Cluster will be updated.
- name (str) – name of the Cluster.
Returns: Body of the HTTP response.
Return type: Cluster
- update_cluster_binding(body, namespace, name)¶ Updates the binding of the specified Cluster.
Parameters: - body (CloudBinding) – Body of the HTTP request.
- namespace (str) – namespace in which the Cluster will be updated.
- name (str) – name of the Cluster.
Returns: Body of the HTTP response.
Return type: Cluster
- update_cluster_status(body, namespace, name)¶ Updates the status of the specified Cluster.
Parameters: - body (Cluster) – Body of the HTTP request.
- namespace (str) – namespace in which the Cluster will be updated.
- name (str) – name of the Cluster.
Returns: Body of the HTTP response.
Return type: Cluster
- watch_all_applications(heartbeat=None)¶ Generates a watcher for all Applications.
Parameters: heartbeat (int) – Number of seconds after which the server sends a heartbeat in the form of an empty newline. Passing 0 disables the heartbeat. Default: 10 seconds. Returns: Body of the HTTP response. Return type: ApplicationList
- watch_all_clusters(heartbeat=None)¶ Generates a watcher for all Clusters.
Parameters: heartbeat (int) – Number of seconds after which the server sends a heartbeat in the form of an empty newline. Passing 0 disables the heartbeat. Default: 10 seconds. Returns: Body of the HTTP response. Return type: ClusterList
- watch_applications(namespace, heartbeat=None)¶ Generates a watcher for the Applications in the namespace.
Parameters: - namespace (str) – namespace of the Applications.
- heartbeat (int) – Number of seconds after which the server sends a heartbeat in the form of an empty newline. Passing 0 disables the heartbeat. Default: 10 seconds.
Returns: Body of the HTTP response.
Return type: ApplicationList
- watch_clusters(namespace, heartbeat=None)¶ Generates a watcher for the Clusters in the namespace.
Parameters: - namespace (str) – namespace of the Clusters.
- heartbeat (int) – Number of seconds after which the server sends a heartbeat in the form of an empty newline. Passing 0 disables the heartbeat. Default: 10 seconds.
Returns: Body of the HTTP response.
Return type: ClusterList
- class krake.client.openstack.OpenStackApi(client)¶ Bases: krake.client.ApiClient
OpenStack API client
Example
from krake.client import Client

async with Client(url="http://localhost:8080") as client:
    openstack_api = OpenStackApi(client)
Parameters: client (krake.client.Client) – API client for accessing the Krake HTTP API
- create_magnum_cluster(body, namespace)¶ Creates the specified MagnumCluster.
Parameters: - body (MagnumCluster) – Body of the HTTP request.
- namespace (str) – namespace in which the MagnumCluster will be created.
Returns: Body of the HTTP response.
Return type: MagnumCluster
- create_project(body, namespace)¶ Creates the specified Project.
Parameters: - body (Project) – Body of the HTTP request.
- namespace (str) – namespace in which the Project will be created.
Returns: Body of the HTTP response.
Return type: Project
- delete_magnum_cluster(namespace, name)¶ Deletes the specified MagnumCluster.
Parameters: - namespace (str) – namespace of the MagnumCluster.
- name (str) – name of the MagnumCluster.
Returns: Body of the HTTP response.
Return type: MagnumCluster
- delete_project(namespace, name)¶ Deletes the specified Project.
Parameters: - namespace (str) – namespace of the Project.
- name (str) – name of the Project.
Returns: Body of the HTTP response.
Return type: Project
- list_all_magnum_clusters()¶ Lists all MagnumClusters.
Returns: Body of the HTTP response. Return type: MagnumClusterList
- list_all_projects()¶ Lists all Projects.
Returns: Body of the HTTP response. Return type: ProjectList
- list_magnum_clusters(namespace)¶ Lists the MagnumClusters in the namespace.
Parameters: namespace (str) – namespace of the MagnumClusters. Returns: Body of the HTTP response. Return type: MagnumClusterList
- list_projects(namespace)¶ Lists the Projects in the namespace.
Parameters: namespace (str) – namespace of the Projects. Returns: Body of the HTTP response. Return type: ProjectList
- read_magnum_cluster(namespace, name)¶ Reads the specified MagnumCluster.
Parameters: - namespace (str) – namespace of the MagnumCluster.
- name (str) – name of the MagnumCluster.
Returns: Body of the HTTP response.
Return type: MagnumCluster
- read_project(namespace, name)¶ Reads the specified Project.
Parameters: - namespace (str) – namespace of the Project.
- name (str) – name of the Project.
Returns: Body of the HTTP response.
Return type: Project
- update_magnum_cluster(body, namespace, name)¶ Updates the specified MagnumCluster.
Parameters: - body (MagnumCluster) – Body of the HTTP request.
- namespace (str) – namespace of the MagnumCluster.
- name (str) – name of the MagnumCluster.
Returns: Body of the HTTP response.
Return type: MagnumCluster
- update_magnum_cluster_binding(body, namespace, name)¶ Updates the binding of the specified MagnumCluster.
Parameters: - body (MagnumClusterBinding) – Body of the HTTP request.
- namespace (str) – namespace of the MagnumCluster.
- name (str) – name of the MagnumCluster.
Returns: Body of the HTTP response.
Return type: MagnumCluster
- update_magnum_cluster_status(body, namespace, name)¶ Updates the status of the specified MagnumCluster.
Parameters: - body (MagnumCluster) – Body of the HTTP request.
- namespace (str) – namespace of the MagnumCluster.
- name (str) – name of the MagnumCluster.
Returns: Body of the HTTP response.
Return type: MagnumCluster
- update_project(body, namespace, name)¶ Updates the specified Project.
Parameters: - body (Project) – Body of the HTTP request.
- namespace (str) – namespace of the Project.
- name (str) – name of the Project.
Returns: Body of the HTTP response.
Return type: Project
- update_project_status(body, namespace, name)¶ Updates the status of the specified Project.
Parameters: - body (Project) – Body of the HTTP request.
- namespace (str) – namespace of the Project.
- name (str) – name of the Project.
Returns: Body of the HTTP response.
Return type: Project
- watch_all_magnum_clusters(heartbeat=None)¶ Generates a watcher for all MagnumClusters.
Parameters: heartbeat (int) – Number of seconds after which the server sends a heartbeat in the form of an empty newline. Passing 0 disables the heartbeat. Default: 10 seconds. Returns: Body of the HTTP response. Return type: MagnumClusterList
- watch_all_projects(heartbeat=None)¶ Generates a watcher for all Projects.
Parameters: heartbeat (int) – Number of seconds after which the server sends a heartbeat in the form of an empty newline. Passing 0 disables the heartbeat. Default: 10 seconds. Returns: Body of the HTTP response. Return type: ProjectList
- watch_magnum_clusters(namespace, heartbeat=None)¶ Generates a watcher for the MagnumClusters in the namespace.
Parameters: - namespace (str) – namespace of the MagnumClusters.
- heartbeat (int) – Number of seconds after which the server sends a heartbeat in the form of an empty newline. Passing 0 disables the heartbeat. Default: 10 seconds.
Returns: Body of the HTTP response.
Return type: MagnumClusterList
- watch_projects(namespace, heartbeat=None)¶ Generates a watcher for the Projects in the namespace.
Parameters: - namespace (str) – namespace of the Projects.
- heartbeat (int) – Number of seconds after which the server sends a heartbeat in the form of an empty newline. Passing 0 disables the heartbeat. Default: 10 seconds.
Returns: Body of the HTTP response.
Return type: ProjectList
Controllers¶
This module comprises the Krake controllers responsible for watching API resources and transferring the state of related real-world resources toward the desired state specified in the API. Controllers can be written in any language and with any technique. This module provides basic functionality and paradigms to implement a simple “control loop mechanism” in Python.
- class krake.controller.BurstWindow(name, burst_time, max_retry=0, loop=None)¶ Bases: object
Context manager that can be used to check the time arbitrary code took to run. This arbitrary code should be something that needs to run indefinitely. If this code fails too quickly, it is not restarted.
The criteria are as follows: every max_retry attempts, if the average running time of the task is more than the burst_time, the task is considered healthy and the context manager is exited normally. If not, an exception is raised.

window = BurstWindow("my_task", 10, max_retry=3)

while True:  # use any kind of loop
    with window:
        # code to retry
        # ...
Parameters: - name (str) – the name of the background task (for debugging purposes).
- burst_time (float) – maximal accepted average time for a retried task.
- max_retry (int, optional) – number of times the task should be retried before testing the burst time. If 0, the task will be retried indefinitely, without considering burst_time.
- loop (asyncio.AbstractEventLoop, optional) – Event loop that should be used.
- __exit__(*exc)¶ After the given number of tries, raise an exception if the content of the context manager failed too fast.
Raises: RuntimeError – if a background task keeps on failing more regularly than the burst time allows.
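The burst-window criterion can be sketched with the standard library alone. MiniBurstWindow below is an illustrative stand-in for the logic described above, not Krake's implementation:

```python
import time

class MiniBurstWindow:
    """Sketch of the burst-window criterion: after max_retry runs, if
    the average run time stays below burst_time, the wrapped code is
    failing too fast and a RuntimeError is raised."""

    def __init__(self, name, burst_time, max_retry=3):
        self.name = name
        self.burst_time = burst_time
        self.max_retry = max_retry
        self._durations = []

    def __enter__(self):
        self._start = time.monotonic()
        return self

    def __exit__(self, *exc):
        self._durations.append(time.monotonic() - self._start)
        if len(self._durations) >= self.max_retry:
            average = sum(self._durations) / len(self._durations)
            self._durations.clear()
            if average < self.burst_time:
                raise RuntimeError(f"task {self.name!r} keeps failing too fast")
        return False  # never swallow the wrapped code's own exceptions

window = MiniBurstWindow("my_task", burst_time=10.0, max_retry=3)
failed_fast = False
try:
    for _ in range(3):  # three near-instant "crashes" in a row
        with window:
            pass
except RuntimeError:
    failed_fast = True
print(failed_fast)  # True
```

A long-lived task that runs for minutes between failures would keep its average above burst_time and never trigger the error.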
- class krake.controller.Controller(api_endpoint, loop=None, ssl_context=None, debounce=0)¶ Bases: object
Base class for Krake controllers providing basic functionality for watching and enqueuing API resources.
The basic workflow is as follows: the controller holds several background tasks. The API resources are watched by a Reflector, which calls a handler on each received state of a resource. Any received new state is put into a WorkQueue. Multiple workers consume this queue. Workers are responsible for doing the actual state transitions. The work queue ensures that a resource is processed by one worker at a time (strictly sequential). The status of the real-world resources is monitored by an Observer (another background task).
However, this workflow is just one possibility. By modifying __init__() (or other functions), it is possible to add other queues, change the workers at will, add several Reflectors or Observers, or create additional background tasks.
Parameters: - api_endpoint (str) – URL to the API.
- loop (asyncio.AbstractEventLoop, optional) – Event loop that should be used.
- ssl_context (ssl.SSLContext, optional) – if given, this context will be used to communicate with the API endpoint.
- debounce (float, optional) – value of the debounce for the WorkQueue.
- cleanup()¶ Unregister all background tasks that are attributes.
- create_endpoint(api_endpoint)¶ Ensure the scheme (HTTP/HTTPS) of the endpoint used to connect to the API, depending on the existence of a given SSL context.
Parameters: api_endpoint (str) – the given API endpoint. Returns: the final endpoint with the right scheme. Return type: str
- prepare(client)¶ Start all API clients that the controller will be using. Create all necessary coroutines and register them as background tasks that will be started by the Controller.
Parameters: client (krake.client.Client) – the base client to use for the API client to connect to the API.
- register_task(corofactory, name=None)¶ Add a coroutine to the list of tasks that will be run in the background of the Controller.
Parameters: - corofactory (coroutine) – the coroutine that will be used as a task. It must run indefinitely and must not catch asyncio.CancelledError.
- name (str, optional) – the name of the background task, for logging purposes.
- retry(coro, name='')¶ Start a background task. If the task does not fail too regularly, restart it. A BurstWindow is used to decide whether the task should be restarted.
Parameters: - coro (coroutine) – the background task to try to restart.
- name (str) – the name of the background task (for debugging purposes).
Raises: RuntimeError – if a background task keeps on failing more regularly than the burst time allows.
- run()¶ Start at once all the registered background tasks with the retry logic.
- simple_on_receive(resource, condition=<class 'bool'>)¶ Example of a resource-receiving handler that accepts a resource under conditions and, if they are met, adds the resource to the queue. When listing values, you get a Resource, while when watching, you get an Event.
Parameters: - resource (krake.data.serializable.Serializable) – a resource received by listing.
- condition (callable, optional) – a condition to accept the given resource. The signature should be (resource) -> bool.
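The reflector-to-queue-to-worker workflow described for the Controller can be sketched with stdlib asyncio. All names here are illustrative; this is not Krake's Controller:

```python
import asyncio

async def control_loop():
    """Sketch of the basic workflow: a reflector-like producer enqueues
    each received resource state, and a worker consumes them one at a
    time to perform the state transition."""
    queue = asyncio.Queue()
    processed = []

    async def reflector():
        # Stands in for watching the API and enqueuing each new state.
        for resource in ("app-1", "app-2", "app-3"):
            await queue.put(resource)
        await queue.put(None)  # sentinel: no more events

    async def worker():
        # Performs the actual "state transition" for each resource.
        while True:
            resource = await queue.get()
            if resource is None:
                break
            processed.append(resource)

    await asyncio.gather(reflector(), worker())
    return processed

print(asyncio.run(control_loop()))  # ['app-1', 'app-2', 'app-3']
```

The real Controller additionally runs several workers and uses its WorkQueue to guarantee that a given resource is never handled by two workers at once.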
- exception krake.controller.ControllerError(message)¶ Bases: Exception
Base class for exceptions during handling of a resource.
- __str__()¶ Custom error message for the exception.
- class krake.controller.Executor(controller, loop=None, catch_signals=True)¶ Bases: object
Component used to encapsulate the Controller. It takes care of starting the Controller and handles all logic not directly dependent on the Controller, such as the handlers for the UNIX signals.
It implements the asynchronous context manager protocol. The controller itself can be awaited; the “await” call blocks until the Controller terminates.
executor = Executor(controller)

async with executor:
    await executor
Parameters: - controller (krake.controller.Controller) – the controller that the executor is tasked with starting.
- loop (asyncio.AbstractEventLoop, optional) – Event loop that should be used.
- catch_signals (bool, optional) – if True, the Executor will add handlers to catch killing signals in order to stop the Controller and the Executor gracefully.
- __aenter__()¶ Create the signal handlers and start the Controller as a background task.
- __aexit__(*exc)¶ Wait for the managed controller to be finished and clean up.
- stop()¶ Called as a signal handler. Stop the Controller managed by the instance.
- class krake.controller.Observer(resource, on_res_update, time_step=1)¶ Bases: object
Component used to watch the actual status of one instance of any resource.
Parameters: - resource – the instance of a resource that the Observer has to watch.
- on_res_update (coroutine) – a coroutine called when a resource’s actual status differs from the status sent by the database. Its signature is: (resource) -> updated_resource. updated_resource is the instance of the resource that is up-to-date with the API. The Observer’s internal instance of the resource to observe will be updated. If the API cannot be contacted, None can be returned; in this case the internal instance of the Observer will not be updated.
- time_step (int, optional) – how frequently the Observer should watch the actual status of the resources.
- observe_resource()¶ Update the watched resource if its status is different from the status observed. The status sent for the update is the observed one.
- poll_resource()¶ Fetch the current status of the watched resource.
Returns: the current status of the watched resource.
Return type: krake.data.core.Status
- run()¶ Start the observing process indefinitely, with the Observer time step.
- class krake.controller.Reflector(listing, watching, on_list=None, on_add=None, on_update=None, on_delete=None, resource_plural=None, loop=None)¶ Bases: object
Component used to contact the API, fetch resources and handle disconnections.
Parameters: - listing (coroutine) – the coroutine used to get the list of resources currently stored by the API. Its signature is: () -> <Resource>List.
- watching (coroutine) – the coroutine used to watch updates on the resources, as sent by the API. Its signature is: () -> watching object. This watching object should be usable as a context manager and as a generator.
- on_list (coroutine) – the coroutine called when listing all resources, with each fetched resource as parameter. Its signature is: (resource) -> None.
- on_add (coroutine, optional) – the coroutine called during watch, when an ADDED event has been received. Its signature is: (resource) -> None.
- on_update (coroutine, optional) – the coroutine called during watch, when a MODIFIED event has been received. Its signature is: (resource) -> None.
- on_delete (coroutine, optional) – the coroutine called during watch, when a DELETED event has been received. Its signature is: (resource) -> None.
- resource_plural (str, optional) – name of the resource that the reflector is monitoring, for logging purposes. Default is "resources".
- loop (asyncio.AbstractEventLoop, optional) – Event loop that should be used.
-
__call__
(min_interval=2)¶ Start the Reflector. Encapsulate the connections with retry logic, as disconnections are expected. Errors of any other kind are not swallowed.
Between two connection attempts, the Reflector waits for a delay. If the connection fails too fast, the delay is increased, to wait for the API to become ready. If a connection was held for a certain interval, the delay is reset to its base value.
Parameters: min_interval (int, optional) – if the connection was kept alive longer than this value, the delay is reset to the base value, as it is considered that a connection was possible.
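The retry-with-reset behaviour can be sketched as follows. This is a hypothetical, self-contained helper, not Krake's actual implementation: it assumes a simple doubling back-off (the real Reflector may compute the delay differently, e.g. with sigmoid_delay), and the `sleep`/`clock` parameters exist only to make the sketch easy to exercise.

```python
import time

def run_with_retries(connect, min_interval=2.0, base=1.0, maximum=60.0,
                     sleep=time.sleep, clock=time.monotonic):
    """Hypothetical retry loop: reconnect until `connect` succeeds."""
    delay = base
    delays = []  # delays actually waited, returned for illustration
    while True:
        start = clock()
        try:
            connect()  # stands in for the list-and-watch coroutine
            return delays
        except ConnectionError:
            if clock() - start > min_interval:
                delay = base  # the connection held long enough: reset
            else:
                delay = min(delay * 2, maximum)  # failed too fast: back off
            delays.append(delay)
            sleep(delay)
```

A connection that keeps failing immediately sees an increasing delay; one that held for longer than `min_interval` before dropping starts again from the base delay.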
-
list_and_watch
()¶ Start the given list and watch coroutines.
-
list_resource
()¶ Pass each resource returned by the current instance’s listing function as parameter to the receiving function.
-
watch_resource
(watcher)¶ Pass each resource returned by the current instance’s watching object as parameter to the event receiving functions.
Parameters: watcher – an object that returns a new event every time an update on a resource occurs
-
class
krake.controller.
WorkQueue
(maxsize=0, debounce=0, loop=None)¶ Bases:
object
Simple asynchronous work queue.
The queue manages a set of key-value pairs. It guarantees strict sequential processing of keys: a key-value pair retrieved via
get()
is not returned viaget()
again untildone()
with the corresponding key is called, even if a new key-value pair with the corresponding key was put into the queue during the time of processing.Parameters: - maxsize (int, optional) – Maximal number of items in the queue before
put()
blocks. Defaults to 0, which means the size is infinite. - debounce (float) – time in seconds for the debouncing of the values. A value higher than 0 means that the queue will wait for the given time before yielding a value. If a newer value is received, this timer is reset.
- loop (asyncio.AbstractEventLoop, optional) – Event loop that should be used
dirty
holds the last known value of a key i.e. the next value which will be given by theget()
method.timers
holds the current debounce coroutine for a key. Either this coroutine is canceled (if a new value for a key is given to the WorkQueue through the meth:put) or the value is added to thedirty
dictionary.active
ensures that a key isn’t added twice to thequeue
. Keys are added to this set when they are first added to thedirty
dictionary, and are removed from the set when the Worker calls thedone()
method.Todo
- Implement rate limiting and delays
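The strict sequential-processing guarantee can be illustrated with a minimal sketch. `MiniWorkQueue` and its internals are illustrative only (debounce is omitted), not Krake's actual WorkQueue:

```python
import asyncio

class MiniWorkQueue:
    """Illustrative queue with WorkQueue's sequential-processing guarantee."""

    def __init__(self):
        self.queue = asyncio.Queue()
        self.dirty = {}      # last known value per key
        self.active = set()  # keys queued or currently being processed

    async def put(self, key, value):
        self.dirty[key] = value
        if key not in self.active:   # never enqueue a key twice
            self.active.add(key)
            await self.queue.put(key)

    async def get(self):
        key = await self.queue.get()
        return key, self.dirty.pop(key)

    async def done(self, key):
        if key in self.dirty:        # a new value arrived while processing
            await self.queue.put(key)
        else:
            self.active.discard(key)
```

A key put twice before being retrieved yields only its latest value, and a value put while the key is being processed is only handed out after `done()` is called for that key.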
-
cancel
(key)¶ Cancel the corresponding debounce coroutine for the given key. An attempt to cancel the coroutine for a key which was not inserted into the queue does not raise any error, and is simply ignored.
Parameters: key – Key that identifies the value
-
close
()¶ Cancel all pending debounce timers.
-
done
(key)¶ Called by the Worker to notify that the work on the given key is done. This method first removes the key from the
active
set, and then puts the key back into the queue if a new value arrived in the meantime.Parameters: key – Key that identifies the value
-
empty
()¶ Check if the queue is empty
Returns: True if there are no dirty keys. Return type: bool
-
get
()¶ Retrieve a key-value pair from the queue.
The queue will not return this key as long as
done()
is not called with this key.Returns: (key, value) tuple
-
put
(key, value, delay=None)¶ Put a new key-value pair into the queue.
Parameters:
-
krake.controller.
create_ssl_context
(tls_config)¶ From a certificate, create an SSL Context that can be used on the client side for communicating with a Server.
Parameters: tls_config (krake.data.config.TlsClientConfiguration) – the “tls” configuration part of a controller. Returns: a default SSL Context tweaked with the given certificate elements Return type: ssl.SSLContext
-
krake.controller.
joint
(*aws, loop=None)¶ Start several coroutines together. Ensure that if one stops, all others are cancelled as well.
- FIXME: with asyncio.gather, if an error occurs in one of the “gathered” tasks,
- the other tasks are not necessarily stopped. @see https://stackoverflow.com/questions/59073556/how-to-cancel-all-remaining-tasks-in-gather-if-one-fails # noqa
Parameters: - aws (Awaitable) – a list of await-ables to start concurrently.
- loop (asyncio.AbstractEventLoop, optional) – Event loop that should be used.
-
krake.controller.
run
(controller)¶ Start the controller using an executor.
Parameters: controller (krake.controller.Controller) – the controller to start
-
krake.controller.
sigmoid_delay
(retries, maximum=60.0, steepness=0.75, midpoint=10.0, base=1.0)¶ Compute a waiting time (delay) depending on the number of retries already performed. The computing function is a sigmoid.
Parameters: - retries (int) – the number of attempts that happened already.
- maximum (float) – the maximum delay that can be attained. Maximum of the sigmoid.
- steepness (float) – how fast the delay increases. Steepness of the sigmoid.
- midpoint (float) – number of retries at which the delay is halfway between base and maximum. Midpoint of the sigmoid.
- base (float) – minimum value for the delay.
Returns: the computed next delay.
Return type: float
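A plausible reconstruction of this function, assuming a standard logistic curve shifted so that the delay stays between base and maximum, with the midpoint value halfway between them (the exact formula used by Krake may differ):

```python
import math

def sigmoid_delay(retries, maximum=60.0, steepness=0.75, midpoint=10.0, base=1.0):
    """Delay grows along a logistic (sigmoid) curve from `base` towards `maximum`."""
    return base + (maximum - base) / (1 + math.exp(-steepness * (retries - midpoint)))
```

The delay increases monotonically with the number of retries, stays strictly between `base` and `maximum`, and reaches the halfway point at `midpoint` retries.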
Controller Kubernetes Application¶
Module comprises Krake Kubernetes application controller logic.
-
class
krake.controller.kubernetes.application.
KubernetesApplicationController
(api_endpoint, worker_count=10, loop=None, ssl_context=None, debounce=0, hooks=None, time_step=2)¶ Bases:
krake.controller.Controller
Controller responsible for
krake.data.kubernetes.Application
resources. The controller manages Application resources in “SCHEDULED” and “DELETING” state.-
kubernetes_api
¶ Krake internal API to connect to the “kubernetes” API of Krake.
Type: KubernetesApi
-
hooks
¶ configuration to be used by the hooks supported by the controller.
Type: krake.data.config.HooksConfiguration
-
observer_time_step
¶ for the Observers: the number of seconds between two observations of the actual resource.
Type: float
-
observers
¶ mapping of all Application resources’ UIDs to their respective Observer and the task responsible for that Observer. The signature is:
<uid> --> <observer>, <reference_to_observer's_task>
.Type: dict[str, (Observer, Coroutine)]
Parameters: - api_endpoint (str) – URL to the API
- loop (asyncio.AbstractEventLoop, optional) – Event loop that should be used.
- ssl_context (ssl.SSLContext, optional) – if given, this context will be used to communicate with the API endpoint.
- debounce (float, optional) – value of the debounce for the
WorkQueue
. - worker_count (int, optional) – the amount of worker function that should be run as background tasks.
- time_step (float, optional) – for the Observers: the number of seconds between two observations of the actual resource.
-
check_external_endpoint
()¶ Ensure that the scheme in the external endpoint (if provided) matches the scheme used by the Krake API (“https” if TLS is enabled, “http” if it is disabled).
If they do not match, a warning is logged and the scheme is replaced in the endpoint.
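This scheme check can be sketched as follows; `align_endpoint_scheme` is a hypothetical helper name, not Krake's actual method:

```python
from urllib.parse import urlparse, urlunparse

def align_endpoint_scheme(endpoint, tls_enabled):
    """Replace the endpoint's scheme if it does not match the API's TLS setting."""
    expected = "https" if tls_enabled else "http"
    parts = urlparse(endpoint)
    if parts.scheme != expected:
        # the real controller logs a warning here before replacing the scheme
        parts = parts._replace(scheme=expected)
    return urlunparse(parts)
```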
-
cleanup
()¶ Unregister all background tasks that are attributes.
-
handle_resource
(run_once=False)¶ Infinite loop which fetches the resources and hands them over to the right coroutine. The specific exceptions and error handling have to be added here.
This function is meant to be run as background task. Lock the handling of a resource with the
lock
attribute.Parameters: run_once (bool, optional) – if True, the function only handles one resource, then stops. Otherwise, continue to handle each new resource on the queue indefinitely.
-
list_app
(app)¶ Accept the Applications that need to be managed by the Controller when listing them at startup. Start the observer for the Applications with actual resources.
Parameters: app (krake.data.kubernetes.Application) – the Application to accept or not.
-
on_status_update
(app)¶ Called when an Observer noticed a difference of the status of an application. Request an update of the status on the API.
Parameters: app (krake.data.kubernetes.Application) – the Application whose status has been updated.
Returns: the updated Application sent by the API.
Return type:
-
prepare
(client)¶ Start all API clients that the controller will be using. Create all necessary coroutines and register them as background tasks that will be started by the Controller.
Parameters: client (krake.client.Client) – the base client to use for the API client to connect to the API.
-
static
scheduled_or_deleting
(app)¶ Check if a resource should be accepted or not by the Controller to be handled.
Parameters: app (krake.data.kubernetes.Application) – the Application to check. Returns: True if the Application should be handled, False otherwise. Return type: bool
-
-
class
krake.controller.kubernetes.application.
KubernetesClient
(kubeconfig, custom_resources=None)¶ Bases:
object
Client for connecting to a Kubernetes cluster. This client:
- prepares the connection based on the information stored in the cluster’s kubeconfig file;
- prepares the connection to a custom resource’s API, if a Kubernetes resource to be managed relies on a Kubernetes custom resource;
- offers two methods:
-
apply()
: apply a manifest to create or update a resource -delete()
: delete a resource.
The client can be used as a context manager, with the Kubernetes client being deleted when leaving the context.
-
custom_resources
¶ names of all custom resources that are available on the current cluster.
Type: list[str]
-
resource_apis
¶ mapping of a Kubernetes’s resource name to the API object of the Kubernetes client which manages it (e.g. a Pod belongs to the “CoreV1” API of Kubernetes, so the mapping would be “Pod” -> <client.CoreV1Api_instance>), wrapped in an
ApiAdapter
instance.Type: dict
-
apply
(resource)¶ Apply the given resource on the cluster using its internal data as reference.
Parameters: resource (dict) – the resource to create, as a manifest file translated in dict. Returns: response from the cluster as given by the Kubernetes client. Return type: object
-
custom_resource_apis
¶ Determine the custom resource APIs for the given cluster.
If the given cluster supports custom resources, Krake determines the APIs from the custom resource definitions.
The custom resource APIs are requested only once and then cached by the cached-property decorator. This is an advantage in case the application contains multiple Kubernetes custom resources with the same kind but different content, see the example.
Example:
---
apiVersion: stable.example.com/v1
kind: CRD
metadata:
  name: cdr_1
spec:
  crdSpec: spec_1
---
apiVersion: stable.example.com/v1
kind: CRD
metadata:
  name: cdr_2
spec:
  crdSpec: spec_2
Returns: Custom resource apis Return type: dict Raises: InvalidCustomResourceDefinitionError
– If the request for the custom resource definition failed.
-
default_namespace
¶ From the kubeconfig file, get the default Kubernetes namespace where the resources will be created. If no namespace is specified, “default” will be used.
Returns: the default namespace in the kubeconfig file. Return type: str
-
delete
(resource)¶ Delete the given resource on the cluster using its internal data as reference.
Parameters: resource (dict) – the resource to delete, as a manifest file translated in dict.
Returns: response from the cluster as given by the Kubernetes client.
Return type: kubernetes_asyncio.client.models.v1_status.V1Status
Raises: InvalidManifestError
– if the kind or name is not present in the resource.ApiException
– by the Kubernetes API in case of malformed content or error on the cluster’s side.
-
get_immutables
(resource)¶ From a resource manifest, look for the group, version, kind, name and namespace of the resource.
If the latter is not present, the default namespace of the cluster is used instead.
Parameters: resource (dict[str, Any]) – the manifest file translated in dict of the resource from which the fields will be extracted.
Returns: the group, version, kind, name and namespace of the resource.
Return type: tuple
Raises: InvalidResourceError – if the apiVersion, kind or the name is not present.
Raises: InvalidManifestError – if the apiVersion, kind or name is not present in the resource.
Raises: ApiException – raised by the Kubernetes API in case of malformed content or an error on the cluster’s side.
-
get_resource_api
(group, version, kind)¶ Get the Kubernetes API corresponding to the given group and version. If not found, look for it among the supported custom resources for the cluster.
Parameters: Returns: the API adapter to use for this resource.
Return type: ApiAdapter
Raises: UnsupportedResourceError
– if the group and version given are not supported by the Controller, and given kind is not a supported custom resource.
-
static
log_response
(response, kind, action=None)¶ Utility function to parse a response from the Kubernetes cluster and log its content.
Parameters:
-
shutdown
(app)¶ Gracefully shut down the given application on the cluster by calling the app’s exposed shutdown address.
Parameters: app (krake.data.kubernetes.Application) – the app to gracefully shut down.
Returns: response from the cluster as given by the Kubernetes client.
Return type: kubernetes_asyncio.client.models.v1_status.V1Status
Raises: InvalidManifestError
– if the kind or name is not present in the resource.ApiException
– by the Kubernetes API in case of malformed content or error on the cluster’s side.
-
krake.controller.kubernetes.application.
register_service
(app, cluster, resource, response)¶ Register endpoint of Kubernetes Service object on creation and update.
Parameters: - app (krake.data.kubernetes.Application) – Application the service belongs to
- cluster (krake.data.kubernetes.Cluster) – The cluster on which the application is running
- resource (dict) – Kubernetes object description as specified in the specification of the application.
- response (kubernetes_asyncio.client.V1Service) – Response of the Kubernetes API
-
krake.controller.kubernetes.application.
unregister_service
(app, resource, **kwargs)¶ Unregister endpoint of Kubernetes Service object on deletion.
Parameters: - app (krake.data.kubernetes.Application) – Application the service belongs to
- resource (dict) – Kubernetes object description as specified in the specification of the application.
-
class
krake.controller.kubernetes.application.
KubernetesApplicationObserver
(cluster, resource, on_res_update, time_step=2)¶ Bases:
krake.controller.Observer
Observer specific for Kubernetes Applications. One observer is created for each Application managed by the Controller, but not one per Kubernetes resource (Deployment, Service…). If several resources are defined by an Application, they are all monitored by the same observer.
The observer gets the actual status of the resources on the cluster using the Kubernetes API, and compares it to the status stored in the Krake API.
- The observer is:
- started at initial Krake resource creation;
- deleted when a resource needs to be updated, then started again when it is done;
- simply deleted on resource deletion.
Parameters: - cluster (krake.data.kubernetes.Cluster) – the cluster on which the observed Application is created.
- resource (krake.data.kubernetes.Application) – the application that will be observed.
- on_res_update (coroutine) – a coroutine called when a resource’s actual status
differs from the status sent by the database. Its signature is:
(resource) -> updated_resource
.updated_resource
is the instance of the resource that is up-to-date with the API. The Observer internal instance of the resource to observe will be updated. If the API cannot be contacted,None
can be returned. In this case the internal instance of the Observer will not be updated. - time_step (int, optional) – how frequently the Observer should watch the actual status of the resources.
-
poll_resource
()¶ Fetch the current status of the Application monitored by the Observer.
Returns: the status object created using information from the real-world Application resource.
Return type: krake.data.core.Status
-
krake.controller.kubernetes.application.
get_kubernetes_resource_idx
(manifest, resource, check_namespace=False)¶ Get a resource identified by its resource API, kind and name, from a manifest file.
Raises: IndexError – if the resource is not present in the manifest.
Returns: Position of the resource in the manifest.
Return type: int
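The lookup can be sketched as follows; `find_resource_idx` is a simplified stand-in that matches on apiVersion, kind and metadata.name and ignores the `check_namespace` option:

```python
def find_resource_idx(manifest, resource):
    """Return the index of `resource` in `manifest`, matching on
    apiVersion, kind and metadata.name (simplified matching rule)."""
    target = (
        resource.get("apiVersion"),
        resource.get("kind"),
        resource.get("metadata", {}).get("name"),
    )
    for idx, item in enumerate(manifest):
        candidate = (
            item.get("apiVersion"),
            item.get("kind"),
            item.get("metadata", {}).get("name"),
        )
        if candidate == target:
            return idx
    # mirror the documented behaviour: absence raises IndexError
    raise IndexError("resource not found in manifest")
```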
-
krake.controller.kubernetes.application.
update_last_applied_manifest_from_resp
(app, response, **kwargs)¶ Hook run after the creation or update of an application in order to update the status.last_applied_manifest using the k8s response.
Parameters: - app (krake.data.kubernetes.Application) – Application the service belongs to
- response (kubernetes_asyncio.client.V1Status) – Response of the Kubernetes API
After a Kubernetes resource has been created/updated, the status.last_applied_manifest has to be updated. All fields already initialized (either from the mangling of spec.manifest, or by a previous call to this function) should be left untouched. Only observed fields which are not present in status.last_applied_manifest should be initialized.
-
krake.controller.kubernetes.application.
update_last_observed_manifest_from_resp
(app, response, **kwargs)¶ Handler to run after the creation or update of a Kubernetes resource to update the last_observed_manifest from the response of the Kubernetes API.
Parameters: - app (krake.data.kubernetes.Application) – Application the service belongs to
- response (kubernetes_asyncio.client.V1Service) – Response of the Kubernetes API
The target last_observed_manifest holds the value of all observed fields, plus the special control dictionaries for the list lengths.
Controller Kubernetes Cluster¶
Module comprises Krake Kubernetes cluster controller logic.
-
class
krake.controller.kubernetes.cluster.
KubernetesClusterController
(api_endpoint, worker_count=10, loop=None, ssl_context=None, debounce=0, time_step=2)¶ Bases:
krake.controller.Controller
Controller responsible for
krake.data.kubernetes.Application
andkrake.data.kubernetes.Cluster
resources. The controller manages Application resources in “SCHEDULED” and “DELETING” state and Clusters in any state.-
kubernetes_api
¶ Krake internal API to connect to the “kubernetes” API of Krake.
Type: KubernetesApi
-
observer_time_step
¶ for the Observers: the number of seconds between two observations of the actual resource.
Type: float
-
observers
¶ mapping of all Application or Cluster resources’ UIDs to their respective Observer and the task responsible for that Observer. The signature is:
<uid> --> <observer>, <reference_to_observer's_task>
.Type: dict[str, (Observer, Coroutine)]
Parameters: - api_endpoint (str) – URL to the API
- loop (asyncio.AbstractEventLoop, optional) – Event loop that should be used.
- ssl_context (ssl.SSLContext, optional) – if given, this context will be used to communicate with the API endpoint.
- debounce (float, optional) – value of the debounce for the
WorkQueue
. - worker_count (int, optional) – the amount of worker function that should be run as background tasks.
- time_step (float, optional) – for the Observers: the number of seconds between two observations of the actual resource.
-
static
accept_accessible
(cluster)¶ Check if a resource should be accepted or not by the Controller.
Parameters: cluster (krake.data.kubernetes.Cluster) – the Cluster to check. Returns: True if the Cluster should be handled, False otherwise. Return type: bool
-
cleanup
()¶ Unregister all background tasks that are attributes.
-
handle_resource
(run_once=False)¶ Infinite loop which fetches the resources and hands them over to the right coroutine. The specific exceptions and error handling have to be added here.
This function is meant to be run as background task. Lock the handling of a resource with the
lock
attribute.Parameters: run_once (bool, optional) – if True, the function only handles one resource, then stops. Otherwise, continue to handle each new resource on the queue indefinitely.
-
list_cluster
(cluster)¶ Accept the Clusters that need to be managed by the Controller on listing them at startup. Starts the observer for the Cluster.
Parameters: cluster (krake.data.kubernetes.Cluster) – the cluster to accept or not.
-
on_status_update
(cluster)¶ Called when an Observer noticed a difference of the status of a resource. Request an update of the status on the API.
Parameters: cluster (krake.data.kubernetes.Cluster) – the Cluster whose status has been updated.
Returns: the updated Cluster sent by the API.
Return type:
-
prepare
(client)¶ Start all API clients that the controller will be using. Create all necessary coroutines and register them as background tasks that will be started by the Controller.
Parameters: client (krake.client.Client) – the base client to use for the API client to connect to the API.
-
-
krake.controller.kubernetes.cluster.
register_service
(app, cluster, resource, response)¶ Register endpoint of Kubernetes Service object on creation and update.
Parameters: - app (krake.data.kubernetes.Application) – Application the service belongs to
- cluster (krake.data.kubernetes.Cluster) – The cluster on which the application is running
- resource (dict) – Kubernetes object description as specified in the specification of the application.
- response (kubernetes_asyncio.client.V1Service) – Response of the Kubernetes API
-
krake.controller.kubernetes.cluster.
unregister_service
(app, resource, **kwargs)¶ Unregister endpoint of Kubernetes Service object on deletion.
Parameters: - app (krake.data.kubernetes.Application) – Application the service belongs to
- resource (dict) – Kubernetes object description as specified in the specification of the application.
-
class
krake.controller.kubernetes.cluster.
KubernetesClusterObserver
(resource, on_res_update, time_step=2)¶ Bases:
krake.controller.Observer
Observer specific for Kubernetes Clusters. One observer is created for each Cluster managed by the Controller.
The observer gets the actual status of the cluster using the Kubernetes API, and compares it to the status stored in the Krake API.
- The observer is:
- started at initial Krake resource creation;
- deleted when a resource needs to be updated, then started again when it is done;
- simply deleted on resource deletion.
Parameters: - resource (krake.data.kubernetes.Cluster) – the cluster which will be observed.
- on_res_update (coroutine) – a coroutine called when a resource’s actual status
differs from the status sent by the database. Its signature is:
(resource) -> updated_resource
.updated_resource
is the instance of the resource that is up-to-date with the API. The Observer internal instance of the resource to observe will be updated. If the API cannot be contacted,None
can be returned. In this case the internal instance of the Observer will not be updated. - time_step (int, optional) – how frequently the Observer should watch the actual status of the resources.
-
poll_resource
()¶ Fetch the current status of the Cluster monitored by the Observer.
- Note regarding exceptions handling:
- The current cluster status is fetched by
poll_resource()
from its API. If the cluster API is shutting down, the API server responds with a 503 (service unavailable, apiserver is shutting down) HTTP response, which leads to a kubernetes client ApiException. If the cluster’s API has already been successfully shut down and there is an attempt to fetch the cluster status, a ClientConnectorError is raised instead. Therefore, both exceptions should be handled.
Returns: the status object created using information from the real-world Cluster.
Return type: krake.data.core.Status
Controller Scheduler¶
Module comprises the scheduling logic of the Krake application.
-
class
krake.controller.scheduler.
Scheduler
(api_endpoint, worker_count=10, reschedule_after=60, stickiness=0.1, ssl_context=None, debounce=0, loop=None)¶ Bases:
krake.controller.Controller
The scheduler is a controller that receives all pending and updated applications and selects the “best” backend for each one of them based on metrics of the backends and application specifications.
Parameters: - worker_count (int, optional) – the amount of worker function that should be run as background tasks.
- reschedule_after (float, optional) – number of seconds after which a resource should be rescheduled.
- ssl_context (ssl.SSLContext, optional) – SSL context that should be used to communicate with the API server.
- debounce (float, optional) – number of seconds the scheduler should wait before it reacts to a state change.
- loop (asyncio.AbstractEventLoop, optional) – Event loop that should be used.
-
cleanup
()¶ Unregister all background tasks that are attributes.
-
prepare
(client)¶ Start all API clients that the controller will be using. Create all necessary coroutines and register them as background tasks that will be started by the Controller.
Parameters: client (krake.client.Client) – the base client to use for the API client to connect to the API.
Controller Garbage Collector¶
This module defines the Garbage Collector, present as a background task on the API application. When a resource is marked as deleted, the garbage collector marks all its dependents as deleted. After cleanup is done by the respective Controller, the garbage collector handles the final deletion of resources.
Marking a resource as deleted (by setting the deleted timestamp of its metadata) is irreversible: if the garbage collector receives such a resource, it will start the complete deletion process, with no further user involvement.
The configuration should have the following structure:
api_endpoint: http://localhost:8080
worker_count: 5
debounce: 1
tls:
enabled: false
client_ca: tmp/pki/ca.pem
client_cert: tmp/pki/system:gc.pem
client_key: tmp/pki/system:gc-key.pem
log:
...
-
exception
krake.controller.gc.
DependencyCycleException
(resource, cycle, *args)¶ Bases:
krake.controller.gc.DependencyException
Raised in case a cycle in the dependencies has been discovered while adding or updating a resource.
Parameters: - resource (krake.data.core.ResourceRef) – the resource added or updated that triggered the exception.
- cycle (set) – the cycle of dependency relationships that has been discovered.
-
exception
krake.controller.gc.
DependencyException
¶ Bases:
Exception
Base class for dependency exceptions.
-
class
krake.controller.gc.
DependencyGraph
¶ Bases:
object
Representation of the dependencies of all Krake resources by an acyclic directed graph. This graph can be used to get the dependents of any resource that the graph received.
If an instance of a resource A depends on a resource B, A will have B in its owner list. In this case:
- A depends on B
- B is a dependency of A
- A is a dependent of B
The nodes of the graph are
krake.data.core.ResourceRef
, created from the actual resources. The edges are directed links from a dependency to its dependents.krake.data.core.ResourceRef
are used instead of the resources directly, as they are hashable and can be used as dictionary keys. Otherwise, we would need to make any newly added resource hashable for the sake of the dependency graph. The actual resources are still referenced in the
_resources
dictionary. This allows access to the actual owners of a resource, rather than their krake.data.core.ResourceRef
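The dependency direction can be illustrated with a toy graph in which edges point from a dependency to its dependents. This is a simplified sketch using plain strings instead of krake.data.core.ResourceRef, with no cycle checking:

```python
class MiniDependencyGraph:
    """Toy model of the dependency graph; edges point from a dependency
    to its dependents."""

    def __init__(self):
        self._dependents = {}  # node -> set of nodes that depend on it

    def add_resource(self, resource, owners):
        # ensure the resource has a node, then register it as a dependent
        # of each of its owners (its dependencies)
        self._dependents.setdefault(resource, set())
        for owner in owners:
            self._dependents.setdefault(owner, set()).add(resource)

    def get_direct_dependents(self, resource):
        # only direct dependents, no recursion
        return list(self._dependents.get(resource, ()))
```

For example, an Application owned by a Cluster is a direct dependent of that Cluster, so deleting the Cluster would mark the Application as deleted first.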
.-
add_resource
(resource, owners, check_cycles=True)¶ Add a resource and its dependencies relationships to the graph.
Parameters: - resource (krake.data.core.ResourceRef) – the resource to add to the graph.
- owners (list) – list of owners (dependencies) of the resource.
- check_cycles (bool, optional) – if False, does not check if adding the resource creates a cycle, and simply add it.
-
get_direct_dependents
(resource)¶ Get the dependents of a resource, but only the ones directly dependent, no recursion is performed.
Parameters: resource (krake.data.core.ResourceRef) – the resource for which the search will be performed. Returns: the list of krake.data.core.ResourceRef to the dependents of the given resource (i.e. the resources that depend on it). Return type: list
-
remove_resource
(resource, check_dependents=True)¶ If a resource has no dependent, remove it from the dependency graph, and from the dependents of other resources.
Parameters: - resource (krake.data.core.ResourceRef) – the resource to remove.
- check_dependents (bool, optional) – if False, does not check if the resource to remove has dependents, and simply remove it along with the dependents.
Raises: ResourceWithDependentsException
– if the resource to remove has dependents.
-
update_resource
(resource, owners)¶ Update the dependency relationships of a resource on the graph.
Parameters: - resource (krake.data.core.ResourceRef) – the resource whose ownership may need to be modified.
- owners (list) – list of owners (dependencies) of the resource.
-
-
class
krake.controller.gc.
GarbageCollector
(api_endpoint, worker_count=10, loop=None, ssl_context=None, debounce=0)¶ Bases:
krake.controller.Controller
Controller responsible for marking the dependents of a resource as deleted, and for deleting all resources without any finalizer.
Parameters: - api_endpoint (str) – URL to the API
- worker_count (int, optional) – the amount of worker function that should be run as background tasks.
- loop (asyncio.AbstractEventLoop, optional) – Event loop that should be used.
- ssl_context (ssl.SSLContext, optional) – if given, this context will be used to communicate with the API endpoint.
- debounce (float, optional) – value of the debounce for the
WorkQueue
.
-
cleanup
()¶ Unregister all background tasks that are attributes.
-
get_api_method
(reference, verb)¶ Retrieve the client method of the API of the given resource to do the given action.
Parameters: - reference (any) – a resource or reference to a resource for which a method of its API needs to be selected.
- verb (str) – the verb describing the action for which the method should be returned.
Returns: a method to perform the given action on the given resource (through its client).
Return type: callable
-
handle_resource
(run_once=False)¶ Infinite loop which fetches the resources and hands them over to the right coroutine. This function is meant to be run as a background task.
Parameters: run_once (bool, optional) – if True, the function only handles one resource, then stops. Otherwise, continue to handle each new resource on the queue indefinitely.
-
static
is_in_deletion
(resource)¶ Check if a resource needs to be deleted or not.
Parameters: resource (krake.data.serializable.ApiObject) – the resource to check. Returns: True if the given resource is in deletion state, False otherwise. Return type: bool
-
on_received_deleted
(resource)¶ To be called when a resource is deleted on the API. Remove the resource from the dependency graph and add its dependencies to the Worker queue.
Parameters: resource (krake.data.serializable.ApiObject) – the deleted resource.
-
on_received_new
(resource)¶ To be called when a resource is received for the first time by the garbage collector. Add the resource to the dependency graph and handle the resource if accepted.
If a cycle is detected when adding the resource, all resources of the cycle are removed.
Parameters: resource (krake.data.serializable.ApiObject) – the newly added resource.
-
on_received_update
(resource)¶ To be called when a resource is updated on the API. Update the resource on the dependency graph and handle the resource if accepted.
If a cycle is detected when adding the resource, all resources of the cycle are removed.
Parameters: resource (krake.data.serializable.ApiObject) – the updated resource.
-
prepare
(client)¶ Start all API clients that the controller will be using. Create all necessary coroutines and register them as background tasks that will be started by the Controller.
Parameters: client (krake.client.Client) – the base client to use for the API client to connect to the API.
-
resource_received
(resource)¶ Core functionality of the garbage collector. Mark the given resource’s direct dependents as to be deleted, or remove the deletion finalizer if the resource has no dependent.
Parameters: resource (krake.data.serializable.ApiObject) – a resource in deletion state.
-
exception
krake.controller.gc.
ResourceWithDependentsException
(dependents, *args)¶ Bases:
krake.controller.gc.DependencyException
Raise when an attempt to remove a resource from the dependency graph implies removing a resource that still has dependents, and thus should not be removed if the integrity of the dependency graph is to be kept.
For instance: if B depends on A, A should not be removed while B still exists.
Parameters: dependents (list) – The list of dependents that are now orphaned.
Controller Magnum¶
Module for Krake controller responsible for managing Magnum cluster resources and creating their respective Kubernetes cluster. It connects to the Magnum service of the Project on which a MagnumCluster has been scheduled.
python -m krake.controller.magnum --help
Configuration is loaded from the controllers.magnum
section:
api_endpoint: http://localhost:8080
worker_count: 5
debounce: 1.0
poll_interval: 30
tls:
enabled: false
client_ca: tmp/pki/ca.pem
client_cert: tmp/pki/system:magnum.pem
client_key: tmp/pki/system:magnum-key.pem
log:
...
-
exception
krake.controller.magnum.
CreateFailed
(message)¶ Bases:
krake.controller.ControllerError
Raised in case the creation of a Magnum cluster failed.
-
exception
krake.controller.magnum.
DeleteFailed
(message)¶ Bases:
krake.controller.ControllerError
Raised in case the deletion of a Magnum cluster failed.
-
exception
krake.controller.magnum.
InvalidClusterTemplateType
(message)¶ Bases:
krake.controller.ControllerError
Raised in case the given Magnum template is not a template for a Kubernetes cluster.
-
class
krake.controller.magnum.
MagnumClusterController
(*args, worker_count=5, poll_interval=30, **kwargs)¶ Bases:
krake.controller.Controller
The Magnum controller receives the MagnumCluster resources from the API and acts on them, by creating, updating or deleting their actual cluster counterparts. It uses the OpenStack Magnum client for this purpose.
Parameters: - api_endpoint (str) – URL to the API
- loop (asyncio.AbstractEventLoop, optional) – Event loop that should be used.
- ssl_context (ssl.SSLContext, optional) – if given, this context will be used to communicate with the API endpoint.
- debounce (float, optional) – value of the debounce for the
WorkQueue
. - worker_count (int, optional) – the number of worker functions that should be run as background tasks.
- poll_interval (float) – time in seconds between two attempts to modify a Magnum cluster (creation, deletion, update, change from FAILED state…).
-
cleanup
()¶ Unregister all background tasks that are attributes.
-
consume
(run_once=False)¶ Continuously retrieve new elements from the worker queue to be processed.
Parameters: run_once (bool, optional) – if True, the function only handles one resource, then stops. Otherwise, continue to handle each new resource on the queue indefinitely.
-
create_magnum_client
(cluster)¶ Create a client to communicate with the Magnum service API for the given Magnum cluster. The specifications defined in the OpenStack project of the cluster are used to create the client.
Parameters: cluster (krake.data.openstack.MagnumCluster) – the cluster whose project’s specifications will be used to connect to the Magnum service. Returns: the Magnum client to use to connect to the Magnum service on the project of the given Magnum cluster.
Return type: MagnumV1Client
-
delete_magnum_cluster
(cluster)¶ Initiate the deletion of the actual given Magnum cluster, and wait for its deletion. The finalizer specific to the Magnum Controller is also removed from the Magnum cluster resource.
Parameters: cluster (krake.data.openstack.MagnumCluster) – the Magnum cluster that needs to be deleted.
-
on_creating
(cluster, magnum)¶ Called when a Magnum cluster with the CREATING state needs reconciliation.
Watch over a Magnum cluster currently being created on its scheduled OpenStack project, and update the corresponding Kubernetes cluster created in the API.
As the Magnum cluster is in a stable state at the end, no further processing method needs to be returned.
Parameters: - cluster (krake.data.openstack.MagnumCluster) – the Magnum cluster that needs to be processed.
- magnum (MagnumV1Client) – the Magnum client to use to connect to the Magnum service on the project.
-
on_pending
(cluster, magnum)¶ Called when a Magnum cluster with the PENDING state needs reconciliation.
Initiate the creation of a Magnum cluster using the registered Magnum template, without ensuring that the creation succeeds.
Parameters: - cluster (krake.data.openstack.MagnumCluster) – the Magnum cluster to actually create on its scheduled OpenStack project.
- magnum (MagnumV1Client) – the Magnum client to use to connect to the Magnum service on the project.
Returns: the next function to be called, as the Magnum cluster changed its state. In this case, the Magnum cluster now has the CREATING state, thus the function returned is
on_creating()
.
Return type: callable
-
on_reconciling
(cluster, magnum)¶ Called when a Magnum cluster with the RECONCILING state needs reconciliation.
Watch over a Magnum cluster already created on its scheduled OpenStack project, and update the corresponding Kubernetes cluster created in the API.
As the Magnum cluster is in a stable state at the end, no further processing method needs to be returned.
Parameters: - cluster (krake.data.openstack.MagnumCluster) – the Magnum cluster that needs to be processed.
- magnum (MagnumV1Client) – the Magnum client to use to connect to the Magnum service on the project.
-
on_running
(cluster, magnum)¶ Called when a Magnum cluster with the RUNNING state needs reconciliation.
If the Magnum cluster needs to be resized, initiate the resizing. Otherwise, update the corresponding Kubernetes cluster created in the API.
Parameters: - cluster (krake.data.openstack.MagnumCluster) – the Magnum cluster that needs to be processed.
- magnum (MagnumV1Client) – the Magnum client to use to connect to the Magnum service on the project.
Returns: the next function to be called, as the Magnum cluster changed its state. In the case of resizing, the Magnum cluster now has the RECONCILING state, thus the function returned is
on_reconciling()
. Otherwise, as the state is stable at the end, no further processing is needed and None is returned.
Return type: callable
-
prepare
(client)¶ Start all API clients that the controller will be using. Create all necessary coroutines and register them as background tasks that will be started by the Controller.
Parameters: client (krake.client.Client) – the base client to use for the API client to connect to the API.
-
process_cluster
(cluster)¶ Process a Magnum cluster: if the given cluster is marked for deletion, delete the actual cluster. Otherwise, start the reconciliation between a Magnum cluster spec and its state.
Handle any
ControllerError
or supported OpenStack error that is raised during the processing.Parameters: cluster (krake.data.openstack.MagnumCluster) – the Magnum cluster to process.
-
reconcile_kubernetes_resource
(cluster, magnum)¶ Create or update the Krake resource of the Kubernetes cluster that was created from a given Magnum cluster.
Parameters: - cluster (krake.data.openstack.MagnumCluster) – the Kubernetes cluster will be created using the specifications of this Magnum cluster.
- magnum (MagnumV1Client) – the Magnum client to use to connect to the Magnum service on the project.
Raises: ClientResponseError
– raised when checking whether the Kubernetes cluster resource already exists, if any HTTP error other than 404 occurs.
-
reconcile_magnum_cluster
(cluster)¶ Depending on the state of the given Magnum cluster, start the reconciliation of its actual state with the desired one.
Parameters: cluster (krake.data.openstack.MagnumCluster) – the cluster whose actual state will be modified to match the desired one.
-
wait_for_running
(cluster, magnum)¶ Wait for an actual Magnum cluster to reach a stable state, that is, when its creation or update is finished.
Parameters: - cluster (krake.data.openstack.MagnumCluster) – the Magnum cluster on which an operation is performed that needs to be awaited.
- magnum (MagnumV1Client) – the Magnum client to use to connect to the Magnum service on the project.
Raises: ControllerError
– if the operation on the cluster failed, a corresponding error will be raised (for instance CreateFailed in case the creation of the cluster failed).
-
exception
krake.controller.magnum.
ReconcileFailed
(message)¶ Bases:
krake.controller.ControllerError
Raised in case the update of a Magnum cluster failed.
-
krake.controller.magnum.
concurrent
(fn)¶ Decorator function to turn a synchronous function into an asynchronous coroutine that runs in another thread, can be awaited, and thus does not block the main asyncio loop. It is particularly useful for long-running synchronous tasks that should run concurrently with the main asyncio loop.
Example
@concurrent
def my_function(arg1, arg2=value):
    # long synchronous processing...
    return result

await my_function(value1, arg2=value2)  # function runs in another thread
Parameters: fn (callable) – the function to run in parallel from the main loop. Returns: decorator around the given function. The returned callable is an asyncio coroutine.
Return type: callable
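A minimal sketch of what such a decorator can look like, using the event loop's default thread-pool executor. This is a hypothetical re-implementation for illustration, not Krake's code; the `slow_add` function is an invented example.

```python
import asyncio
import functools
import time

def concurrent(fn):
    # Awaiting the wrapper runs the blocking call in the default
    # thread-pool executor instead of the event-loop thread.
    @functools.wraps(fn)
    async def wrapper(*args, **kwargs):
        loop = asyncio.get_running_loop()
        return await loop.run_in_executor(
            None, functools.partial(fn, *args, **kwargs)
        )
    return wrapper

@concurrent
def slow_add(a, b):
    time.sleep(0.1)  # stands in for a long synchronous computation
    return a + b

result = asyncio.run(slow_add(1, 2))
```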
-
krake.controller.magnum.
create_client_certificate
(client, cluster, csr)¶ Create and get a certificate for the given Magnum cluster.
Parameters: - client (MagnumV1Client) – the Magnum client to use to connect to the Magnum service.
- cluster (krake.data.openstack.MagnumCluster) – the Magnum cluster for which a kubeconfig file will be created.
- csr (str) – the certificate signing request (CSR) to use on the Magnum service for the creation of the certificate.
Returns: the generated certificate.
Return type:
-
krake.controller.magnum.
create_magnum_cluster
(client, cluster)¶ Create an actual Magnum cluster by connecting to the Magnum service.
Parameters: - client (MagnumV1Client) – the Magnum client to use to connect to the Magnum service.
- cluster (krake.data.openstack.MagnumCluster) – the cluster to create.
Returns: the cluster created by the Magnum service.
Return type: magnumclient.v1.clusters.Cluster
-
krake.controller.magnum.
delete_magnum_cluster
(client, cluster)¶ Delete the actual Magnum cluster that corresponds to the given resource.
Parameters: - client (MagnumV1Client) – the Magnum client to use to connect to the Magnum service.
- cluster (krake.data.openstack.MagnumCluster) – the cluster to delete.
Returns: the cluster deleted by the Magnum service.
Return type: magnumclient.v1.clusters.Cluster
-
krake.controller.magnum.
format_openstack_error
(error)¶ Create a more readable error message using OpenStack specific errors.
Parameters: error (BaseException) – the exception whose information is used to create a message. Returns: the generated error message. Return type: str
-
krake.controller.magnum.
generate_magnum_cluster_name
(cluster)¶ Create a unique name for a Magnum cluster from its metadata. The name has the following structure: “<namespace>-<name>-<random_lowercase_digit_string>”. Any special character that the Magnum service would see as invalid will be replaced.
Parameters: cluster (krake.data.openstack.MagnumCluster) – the cluster to use to create a name. Returns: the name generated. Return type: str
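The naming scheme described above can be illustrated with a short sketch. The sanitization rule shown here (replacing anything outside lowercase letters, digits, and dashes) is an assumption for illustration, not the exact rule the Magnum service enforces.

```python
import random
import re
import string

def generate_magnum_cluster_name(namespace, name, suffix_length=7):
    # Build "<namespace>-<name>-<random_lowercase_digit_string>" and
    # replace characters the Magnum service might reject with dashes.
    alphabet = string.ascii_lowercase + string.digits
    suffix = "".join(random.choices(alphabet, k=suffix_length))
    raw = f"{namespace}-{name}-{suffix}".lower()
    return re.sub(r"[^a-z0-9-]", "-", raw)
```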
-
krake.controller.magnum.
make_csr
(key_size=4096)¶ Generate a private key and a corresponding certificate signing request (CSR).
Parameters: key_size (int) – Length of private key in bits Returns: private key, certificate signing request (CSR) Return type: (str, str)
-
krake.controller.magnum.
make_keystone_session
(project)¶ Create an OpenStack Keystone session using the authentication information of the given project resource.
Parameters: project (krake.data.openstack.Project) – the OpenStack project to use for getting the credentials and endpoint. Returns: the Keystone session created. Return type: Session
-
krake.controller.magnum.
make_kubeconfig
(client, cluster)¶ Create a kubeconfig for the Kubernetes cluster associated with the given Magnum cluster. For this, it uses (non-exhaustively) the name, address, and certificates associated with it.
Parameters: - client (MagnumV1Client) – the Magnum client to use to connect to the Magnum service.
- cluster (krake.data.openstack.MagnumCluster) – the Magnum cluster for which a kubeconfig will be created.
Returns: the kubeconfig created, returned as a dictionary.
Return type: dict
-
krake.controller.magnum.
make_magnum_client
(project)¶ Create a Magnum client to connect to the given OpenStack project.
Parameters: project (krake.data.openstack.Project) – the project to connect to. Returns: the client to connect to the Magnum service of the given project.
Return type: MagnumV1Client
-
krake.controller.magnum.
randstr
(length=7)¶ Create a random string of lowercase letters and digits of the given length.
Parameters: length (int) – specifies how many characters should be present in the returned string. Returns: the string randomly generated. Return type: str
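A minimal sketch of such a helper using only the standard library (an illustration, not Krake's exact implementation):

```python
import random
import string

def randstr(length=7):
    # Pick `length` characters uniformly from lowercase letters and digits.
    alphabet = string.ascii_lowercase + string.digits
    return "".join(random.choices(alphabet, k=length))
```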
-
krake.controller.magnum.
read_ca_certificate
(client, cluster)¶ Get the certificate authority used by the given Magnum cluster.
Parameters: - client (MagnumV1Client) – the Magnum client to use to connect to the Magnum service.
- cluster (krake.data.openstack.MagnumCluster) – the Magnum cluster for which the certificate authority will be retrieved.
Returns: the certificate authority of the given cluster.
Return type:
-
krake.controller.magnum.
read_magnum_cluster
(client, cluster)¶ Read the actual information of the given Magnum cluster resource.
Parameters: - client (MagnumV1Client) – the Magnum client to use to connect to the Magnum service.
- cluster (krake.data.openstack.MagnumCluster) – the resource whose actual cluster state will be read.
Returns: the current information regarding the given Magnum cluster.
Return type: magnumclient.v1.clusters.Cluster
-
krake.controller.magnum.
read_magnum_cluster_template
(client, cluster)¶ Get the actual template associated with the one specified in the given Magnum cluster resource.
Parameters: - client (MagnumV1Client) – the Magnum client to use to connect to the Magnum service.
- cluster (krake.data.openstack.MagnumCluster) – the template given is the one specified by this Magnum cluster.
Returns: the actual cluster template. Return type: magnumclient.v1.cluster_templates.ClusterTemplate
-
krake.controller.magnum.
resize_magnum_cluster
(client, cluster)¶ Update the given Magnum cluster by changing its node count.
Parameters: - client (MagnumV1Client) – the Magnum client to use to connect to the Magnum service.
- cluster (krake.data.openstack.MagnumCluster) – the cluster to resize.
Returns: the cluster updated by the Magnum service.
Return type: magnumclient.v1.clusters.Cluster
Data Abstraction¶
Data abstraction module for all REST resources used by the Krake API. This
module provides common data definitions for krake.api
and
krake.client
.
The core functionality is provided by serializable
, which offers a Python
API for declarative definitions of data models together with serialization and
deserialization functionality.
Domain-specific models are defined in corresponding submodules, e.g.
Kubernetes-related data models are defined in kubernetes
.
-
class
krake.data.
Key
(template, attribute=None)¶ Bases:
object
Etcd key template using the same syntax as Python’s standard format strings for parameters.
Example
key = Key("/books/{namespaces}/{isbn}")
The parameters are substituted in the corresponding methods by either attributes of the passed object or additional keyword arguments.
Parameters: - template (str) – Key template with format string-like parameters
- attribute (str, optional) – Load attributes in
format_object()
from this attribute of the passed object.
-
format_kwargs
(**kwargs)¶ Create a key from keyword arguments
Parameters: **kwargs – Keyword arguments for parameter substitution Returns: Key from the key template with all parameters substituted by the given keyword arguments. Return type: str
-
format_object
(obj)¶ Create a key from a given object
If
attribute
is given, attributes are loaded from this attribute of the object rather than the object itself.Parameters: obj (object) – Object from which attributes are looked up Returns: Key from the key template with all parameters substituted by attributes loaded from the given object. Return type: str Raises: AttributeError
– If a required parameter is missing
-
matches
(key)¶ Check if a given key matches the template
Parameters: key (str) – Key that should be checked Returns: True if the given key matches the key template Return type: bool
-
prefix
(**kwargs)¶ Create a partial key (prefix) for a given object.
Parameters: **kwargs – Parameters that will be used for substitution Returns: Partial key from the key template with some parameters substituted Return type: str Raises: TypeError
– If a parameter is passed as keyword argument but a preceding parameter is not given.
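The behavior of the Key methods above can be approximated with plain string formatting and a regular expression. This is a simplified stand-in for illustration, not Krake's implementation, and it ignores etcd-specific details.

```python
import re
import string

class Key:
    # Simplified stand-in: a template with {parameter} placeholders.
    def __init__(self, template):
        self.template = template
        self.parameters = [
            name for _, name, _, _ in string.Formatter().parse(template)
            if name is not None
        ]

    def format_kwargs(self, **kwargs):
        # Substitute every parameter; raises KeyError if one is missing.
        return self.template.format(**kwargs)

    def matches(self, key):
        # Replace each placeholder with a "no slash" wildcard and full-match.
        pattern = re.sub(r"\{[^}]+\}", r"[^/]+", self.template)
        return re.fullmatch(pattern, key) is not None

key = Key("/books/{namespace}/{isbn}")
```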
-
krake.data.
persistent
(key)¶ Decorator factory for marking a class with a template that should be used as etcd key.
The passed template will be converted into a
Key
instance using the metadata
attribute and will be assigned to the __etcd_key__
attribute of the decorated class.Example
from krake.data import persistent
from krake.data.serializable import Serializable
from krake.data.core import Metadata

@persistent("/books/{name}")
class Book(Serializable):
    metadata: Metadata
Parameters: key (str) – Etcd key template. Parameters will be loaded from the metadata
attribute of the decorated class.Returns: Decorator that can be used to assign an __etcd_key__
attribute to the decorated object based on the passed key template.Return type: callable
This module defines a declarative API for defining data models that are JSON-serializable and JSON-deserializable.
-
class
krake.data.serializable.
ApiObject
(**kwargs)¶ Bases:
krake.data.serializable.Serializable
Base class for objects manipulated via REST API.
api
and kind
should be defined as simple string class variables. They are automatically converted into dataclass fields with corresponding validators.
Example
from krake.data.serializable import ApiObject
from krake.data.core import Metadata, Status

class Book(ApiObject):
    api: str = "shelf"  # The book resource belongs to the "shelf" API
    kind: str = "Book"
    metadata: Metadata
    spec: BookSpec
    status: Status
-
class
Schema
(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶
-
class
-
class
krake.data.serializable.
ModelizedSchema
(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶ Bases:
marshmallow.schema.Schema
Simple marshmallow schema constructing Python objects in a
post_load
hook.Subclasses can specify a callable attribute
__model__
which is called with all deserialized attributes as keyword arguments.The
Meta.unknown
field is set to avoid considering unknown fields during validation. It mostly prevents create tests from failing.-
__model__
¶ Model factory returning a new instance of a specific model
Type: callable
-
-
class
krake.data.serializable.
PolymorphicContainer
(**kwargs)¶ Bases:
krake.data.serializable.Serializable
Base class for polymorphic serializable objects.
The polymorphic serializable has a string attribute
type
which is used as discriminator for the different types. There is an attribute named exactly like the value of thetype
attribute containing the deserialized subtype.Every new subclass will create its own
Schema
attribute. This means every subclass has its own internal subtype registry.-
Schema
¶ Schema that will be used for (de-)serialization of the class.
Type: PolymorphicContainerSchema
Example:
from krake.data.serializable import Serializable, PolymorphicContainer

class ProviderSpec(PolymorphicContainer):
    pass

@ProviderSpec.register("float")
class FloatSpec(Serializable):
    min: float
    max: float

@ProviderSpec.register("bool")
class BoolSpec(Serializable):
    pass

# Deserialization
spec = ProviderSpec.deserialize({
    "type": "float",
    "float": {
        "min": 0,
        "max": 1.0,
    },
})
assert isinstance(spec.float, FloatSpec)

# Serialization
assert ProviderSpec(type="bool", bool=BoolSpec()).serialize() == {
    "type": "bool",
    "bool": {},
}
-
class
Schema
(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)
-
classmethod
register
(name)¶ Decorator function for registering a class under a unique name.
Parameters: name (str) – Name that will be used as value for the type
field to identify the decorated class.Returns: Decorator that will register the decorated class in the polymorphic schema (see PolymorphicContainerSchema.register()
).Return type: callable
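The register() pattern can be sketched with a plain dictionary registry kept per subclass, mirroring the "own internal subtype registry" behavior described above. The class names here are illustrative, not Krake's internals.

```python
class PolymorphicBase:
    # Each direct subclass gets its own registry dictionary.
    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        cls._registry = {}

    @classmethod
    def register(cls, name):
        def decorator(subtype):
            if name in cls._registry:
                raise ValueError(f"{name!r} is already registered")
            cls._registry[name] = subtype
            return subtype
        return decorator

class ProviderSpec(PolymorphicBase):
    pass

@ProviderSpec.register("float")
class FloatSpec:
    pass
```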
-
update
(overwrite)¶ Update the polymorphic container with fields from the overwrite object.
A reference to the polymorphic field – the field called like the value of the
type
attribute – of the overwrite object is assigned to the current object even if the types of the current object and the overwrite object are identical.Parameters: overwrite (Serializable) – Serializable object will be merged with the current object.
-
-
class
krake.data.serializable.
PolymorphicContainerSchema
(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶ Bases:
marshmallow.schema.Schema
Schema that is used by
PolymorphicContainer
It declares just one string field
type
which is used as discriminator for the different types.There should be a field called exactly like the type. The value of this field is passed to the registered schema for deserialization.
---
type: float
float:
  min: 0
  max: 1.0
---
type: int
int:
  min: 0
  max: 100
Every subclass will create its own internal subtype registry.
-
classmethod
register
(type, dataclass)¶ Register a
Serializable
for the given type stringParameters: Raises: ValueError
– If the type name is already registered
-
classmethod
-
class
krake.data.serializable.
Serializable
(**kwargs)¶ Bases:
object
Base class for declarative serialization API.
Fields can be marked with the
metadata
attribute ofdataclasses.Field
. Currently the following markers exists:- readonly
- A field marked as “readonly” is automatically generated by the
API server and not controlled by the user. The user cannot update
this field. The corresponding marshmallow field allows
None
as a valid value. - subresource
- A field marked as “subresource” is ignored in update requests of a resource. Extra REST calls are required to update a subresource. A well-known subresource is “status”.
All field metadata attributes are also passed to the
marshmallow.fields.Field
instance. This means the user can control the generated marshmallow field with the metadata attributes.The class also defines a custom
__init__
method accepting every attribute as keyword argument in arbitrary order in contrast to the standard init method of dataclasses.Example
from dataclasses import field
from krake.data.serializable import Serializable

class Book(Serializable):
    author: str
    title: str
    isbn: str = field(metadata={"readonly": True})

assert hasattr(Book, "Schema")
There are cases where multiple levels need to be validated together. In this case, the
validates
metadata key for a single field is not sufficient anymore. One solution is to overwrite the auto-generated schema with a custom schema using the marshmallow.decorators.validates_schema()
decorator.Another solution is leveraging the
__post_init__()
method of dataclasses. The fields can be validated in this method and a raised marshmallow.ValidationError
will propagate to the Schema deserialization method.

from marshmallow import ValidationError

class Interval(Serializable):
    max: int
    min: int

    def __post_init__(self):
        if self.min > self.max:
            raise ValidationError("'min' must not be greater than 'max'")

# This will raise a ValidationError
interval = Interval.deserialize({"min": 2, "max": 1})
-
Schema
¶ Schema for this dataclass
Type: ModelizedSchema
-
class
Schema
(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)
-
__post_init__
()¶ The
__init__()
method calls this method after all fields are initialized.It is mostly useful for schema-level validation (see above).
For now,
Serializable
does not support init-only variables because they do not make much sense for objects stored in a database. This means no additional parameters are passed to this method.
-
classmethod
deserialize
(data, creation_ignored=False)¶ Load an instance of the class from JSON-encoded data.
Parameters: - data (dict) – JSON dictionary that should be deserialized.
- creation_ignored (bool, optional) – if True, fields that are not required at creation (read-only fields and subresources) can be omitted.
Raises: marshmallow.ValidationError
– If the data is invalid
-
classmethod
fields_ignored_by_creation
()¶ Return the name of all fields that do not have to be provided during the creation of an instance.
Returns: Set of name of fields that are either subresources or read-only, or nested read-only fields. Return type: set
-
classmethod
readonly_fields
(prefix=None)¶ Return the name of all read-only fields. Nested fields are returned in dot-notation; this also holds for lists, where the contained type is the one inspected for read-only fields.
Example:
from dataclasses import field
from datetime import datetime
from typing import List

from krake.data.serializable import Serializable

class Comment(Serializable):
    id: int = field(metadata={"readonly": True})
    content: str

class BookMetadata(Serializable):
    name: str = field(metadata={"readonly": True})
    published: datetime = field(metadata={"readonly": True})
    last_borrowed: datetime

class Book(Serializable):
    id: int = field(metadata={"readonly": True})
    metadata: BookMetadata
    status: str
    comments: List[Comment]

expected = {'id', 'metadata.name', 'metadata.published', 'comments.id'}
assert Book.readonly_fields() == expected
Parameters: prefix (str, optional) – Used for internal recursion Returns: Set of field names that are marked as with readonly
in their metadata.Return type: set
-
serialize
(creation_ignored=False)¶ Serialize the object using the generated
Schema
.Parameters: creation_ignored (bool) – if True, all attributes not needed at creation are ignored. This covers the read-only fields and subresources, which can only be created by the API. Returns: JSON representation of the object Return type: dict
-
classmethod
subresources_fields
()¶ Return the name of all fields that are defined as subresource.
Returns: Set of field names that are marked as
subresource
in their metadata.
Return type: set
-
update
(overwrite)¶ Update data class fields with corresponding fields from the overwrite object.
If a field is marked as _subresource_ or _readonly_ it is not modified. If a field is marked as _immutable_ and there is an attempt to update the value, the
ValueError
is raised. Otherwise, attributes from overwrite will replace attributes from the current object.The
update()
must ignore the _subresource_ and _readonly_ fields, to avoid accidentally overwriting e.g. status fields in read-modify-write scenarios.The function works recursively for nested
Serializable
attributes which means theupdate()
method of the attribute will be used. This means the identity of aSerializable
attribute will not change unless the current attribute or the overwrite attribute isNone
.All other attributes are updated by assigning references from the overwrite attributes to the current object. This leads to a behavior similar to “shallow copying” (see
copy.copy()
). If the attribute is mutable, e.g.list
ordict
, the attribute in the current object will reference the same object as in the overwrite object.Parameters: overwrite (Serializable) – Serializable object will be merged with the current object. Raises: ValueError
– If there is an attempt to update an _immutable_ field.
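The skip rules for _readonly_ and _subresource_ fields can be illustrated with a simplified update function over plain attribute objects. This is not Krake's dataclass-based implementation; the field sets are passed explicitly here for illustration.

```python
from types import SimpleNamespace

def update(current, overwrite, readonly=frozenset(), subresources=frozenset()):
    # Replace attributes by reference (shallow), skipping readonly and
    # subresource fields so e.g. a "status" field survives the merge.
    for name, value in vars(overwrite).items():
        if name in readonly or name in subresources:
            continue
        setattr(current, name, value)

book = SimpleNamespace(id=1, status="stored", title="Old Title")
update(
    book,
    SimpleNamespace(id=2, status="borrowed", title="New Title"),
    readonly={"id"},
    subresources={"status"},
)
```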
-
class
krake.data.serializable.
SerializableMeta
¶ Bases:
type
Metaclass for
Serializable
. It automatically converts a specified class into a dataclass (see dataclasses.dataclass()
) and creates a correspondingmarshmallow.Schema
class. The schema class is assigned to theSchema
attribute.
-
krake.data.serializable.
field_for_schema
(type_, default=<dataclasses._MISSING_TYPE object>, **metadata)¶ Create a corresponding
marshmallow.fields.Field
for the passed type.If
metadata
containsmarshmallow_field
key, the value will be used directly as field.If
type_
has aSchema
attribute which should be a subclass ofmarshmallow.Schema
a marshmallow.fields.Nested field will be returned wrapping the schema.If
type_
has aField
attribute which should be a subclass ofmarshmallow.fields.Field
an instance of this attribute will be returned.Parameters: Returns: Serialization field for the passed type
Return type: marshmallow.fields.Field Raises: NotImplementedError
– If the marshmallow field cannot be determined for the passed type
-
krake.data.serializable.
is_base_generic
(cls)¶ Detects generic base classes, for example
List
but not List[int]
.Parameters: cls – Type annotation that should be checked Returns: True if the passed type annotation is a generic base. Return type: bool
-
krake.data.serializable.
is_generic
(cls)¶ Detects any kind of generic, for example List or List[int]. This includes “special” types like Union and Tuple - anything that’s subscriptable, basically.
Parameters: cls – Type annotation that should be checked Returns: True if the passed type annotation is a generic. Return type: bool
-
krake.data.serializable.
is_generic_subtype
(cls, base)¶ Check if a given generic class is a subtype of another generic class
If the base is a qualified generic, e.g.
List[int]
, it is checked whether the types are equal. If the base or cls does not have the attribute __origin__, e.g. Union or Optional, it is checked whether the type of base or cls is equal to the other one. This is done for every possible case. If both the base and cls have the attribute __origin__, e.g. list
for typing.List
, it is checked if the class is equal to the original type of the generic base class.Parameters: - cls – Generic type
- base – Generic type that should be the base of the given generic type.
Returns: True if the given generic type is a subtype of the given base generic type.
Return type: bool
-
krake.data.serializable.
is_qualified_generic
(cls)¶ Detects generics with arguments, for example
List[int]
but not List
Parameters: cls – Type annotation that should be checked Returns: True if the passed type annotation is a qualified generic. Return type: bool
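The three helpers can be approximated with typing.get_origin() and typing.get_args(). This simplified sketch does not handle every special form (e.g. a bare Union), which the real implementations may treat differently.

```python
from typing import List, get_args, get_origin

def is_qualified_generic(cls):
    # A generic with arguments, e.g. List[int].
    return get_origin(cls) is not None and bool(get_args(cls))

def is_base_generic(cls):
    # A subscriptable generic without arguments, e.g. List.
    return get_origin(cls) is not None and not get_args(cls)

def is_generic(cls):
    # Any kind of generic, base or qualified.
    return is_base_generic(cls) or is_qualified_generic(cls)
```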
- class krake.data.core.BaseMetric(**kwargs)¶
  Bases: krake.data.serializable.ApiObject
  - class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶
- class krake.data.core.BaseMetricsProvider(**kwargs)¶
  Bases: krake.data.serializable.ApiObject
  - class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶
- class krake.data.core.Conflict(**kwargs)¶
  Bases: krake.data.serializable.Serializable
  - class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶
- class krake.data.core.CoreMetadata(**kwargs)¶
  Bases: krake.data.serializable.Serializable
  - class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶
- class krake.data.core.GlobalMetric(**kwargs)¶
  Bases: krake.data.core.BaseMetric
  - class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶
- class krake.data.core.GlobalMetricList(**kwargs)¶
  Bases: krake.data.serializable.ApiObject
  - class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶
- class krake.data.core.GlobalMetricsProvider(**kwargs)¶
  Bases: krake.data.core.BaseMetricsProvider
  - class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶
- class krake.data.core.GlobalMetricsProviderList(**kwargs)¶
  Bases: krake.data.serializable.ApiObject
  - class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶
- class krake.data.core.KafkaSpec(**kwargs)¶
  Bases: krake.data.serializable.Serializable
  Specifications to connect to a KSQL database, and retrieve a specific row from a specific table.
  - comparison_column¶
    Name of the column whose value is compared to the metric name, in order to select the right metric.
    Type: str
  - class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶
- class krake.data.core.ListMetadata(**kwargs)¶
  Bases: krake.data.serializable.Serializable
  - class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶
- class krake.data.core.Metadata(**kwargs)¶
  Bases: krake.data.serializable.Serializable
  - class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶
- class krake.data.core.Metric(**kwargs)¶
  Bases: krake.data.core.BaseMetric
  - class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶
- class krake.data.core.MetricList(**kwargs)¶
  Bases: krake.data.serializable.ApiObject
  - class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶
- class krake.data.core.MetricRef(**kwargs)¶
  Bases: krake.data.serializable.Serializable
  - class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶
- class krake.data.core.MetricSpec(**kwargs)¶
  Bases: krake.data.serializable.Serializable
  - class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶
- class krake.data.core.MetricSpecProvider(**kwargs)¶
  Bases: krake.data.serializable.Serializable
  - class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶
- class krake.data.core.MetricsProvider(**kwargs)¶
  Bases: krake.data.core.BaseMetricsProvider
  - class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶
- class krake.data.core.MetricsProviderList(**kwargs)¶
  Bases: krake.data.serializable.ApiObject
  - class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶
- class krake.data.core.MetricsProviderSpec(**kwargs)¶
  Bases: krake.data.serializable.PolymorphicContainer
  - class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶
- class krake.data.core.PrometheusSpec(**kwargs)¶
  Bases: krake.data.serializable.Serializable
  - class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶
- class krake.data.core.Reason(**kwargs)¶
  Bases: krake.data.serializable.Serializable
  - class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶
- class krake.data.core.ReasonCode¶
  Bases: enum.IntEnum
  An enumeration.
- class krake.data.core.ResourceRef(**kwargs)¶
  Bases: krake.data.serializable.Serializable
  - class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶
- class krake.data.core.Role(**kwargs)¶
  Bases: krake.data.serializable.ApiObject
  - class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶
- class krake.data.core.RoleBinding(**kwargs)¶
  Bases: krake.data.serializable.ApiObject
  - class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶
- class krake.data.core.RoleBindingList(**kwargs)¶
  Bases: krake.data.serializable.ApiObject
  - class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶
- class krake.data.core.RoleList(**kwargs)¶
  Bases: krake.data.serializable.ApiObject
  - class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶
- class krake.data.core.RoleRule(**kwargs)¶
  Bases: krake.data.serializable.Serializable
  - class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶
- class krake.data.core.StaticSpec(**kwargs)¶
  Bases: krake.data.serializable.Serializable
  - class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶
- class krake.data.core.Status(**kwargs)¶
  Bases: krake.data.serializable.Serializable
  - class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶
- class krake.data.core.WatchEvent(**kwargs)¶
  Bases: krake.data.serializable.Serializable
  - class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶
- krake.data.core.resource_ref(resource)¶
  Create a ResourceRef from an ApiObject.
  Parameters: resource (serializable.ApiObject) – API object that should be referenced
  Returns: Corresponding reference to the API object
  Return type: ResourceRef
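In spirit, resource_ref simply copies the identifying metadata of an API object into a small reference record. A minimal stand-in is sketched below; the field names (api, kind, namespace, name) are illustrative assumptions modelled on the usual identity of a Krake object, not the exact ResourceRef schema:

```python
from dataclasses import dataclass
from types import SimpleNamespace


@dataclass
class ResourceRef:
    # Illustrative fields; the real ResourceRef is defined in krake.data.core.
    api: str
    kind: str
    namespace: str
    name: str


def resource_ref(resource) -> ResourceRef:
    """Build a reference from an API object's identity and metadata."""
    return ResourceRef(
        api=resource.api,
        kind=resource.kind,
        namespace=resource.metadata.namespace,
        name=resource.metadata.name,
    )


# A stand-in for an ApiObject, just to exercise the function.
app = SimpleNamespace(
    api="kubernetes",
    kind="Application",
    metadata=SimpleNamespace(namespace="default", name="echo"),
)
ref = resource_ref(app)
```

The point of such a reference type is that it is cheap to store and compare, e.g. as a dictionary key, without holding the full object.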
- krake.data.core.validate_key(key)¶
  Validate the given key against the corresponding regular expression.
  Parameters: key – the string to validate
  Raises: ValidationError – if the given key does not conform to the regular expression.
- krake.data.core.validate_value(value)¶
  Validate the given value against the corresponding regular expression.
  Parameters: value – the string to validate
  Raises: ValidationError – if the given value does not conform to the regular expression.
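Both validators follow the common pattern of matching the input against a module-level regular expression and raising on mismatch. The sketch below illustrates that pattern for validate_key; the regular expression is a hypothetical one modelled on Kubernetes-style label keys, not the exact pattern Krake uses (validate_value would be the mirror image with its own expression):

```python
import re

# Hypothetical pattern: starts and ends with an alphanumeric character,
# with '-', '_' and '.' allowed in between.
_KEY_RE = re.compile(r"^[a-zA-Z0-9]([a-zA-Z0-9_.-]*[a-zA-Z0-9])?$")


class ValidationError(Exception):
    """Raised when a key or value does not conform to its expression."""


def validate_key(key):
    """Raise ValidationError if the key does not match the expression."""
    if not _KEY_RE.match(key):
        raise ValidationError(f"Invalid key: {key!r}")
```

For example, `validate_key("valid-key_1")` passes silently, while `validate_key("-invalid")` raises a ValidationError because the key starts with a separator.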
- class krake.data.infrastructure.Cloud(**kwargs)¶
  Bases: krake.data.serializable.ApiObject
  - class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶
- class krake.data.infrastructure.CloudBinding(**kwargs)¶
  Bases: krake.data.serializable.ApiObject
  - class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶
- class krake.data.infrastructure.CloudList(**kwargs)¶
  Bases: krake.data.serializable.ApiObject
  - class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶
- class krake.data.infrastructure.CloudSpec(**kwargs)¶
  Bases: krake.data.serializable.PolymorphicContainer
  - class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶
- class krake.data.infrastructure.CloudStatus(**kwargs)¶
  Bases: krake.data.serializable.Serializable
  Status subresource of GlobalCloud and Cloud.
  - state¶
    Current state of the cloud.
    Type: CloudState
  - metrics_reasons¶
    Mapping of the name of each metric for which an error occurred to the reason why it occurred.
    Type: dict[str, Reason]
  - class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶
- class krake.data.infrastructure.GlobalCloud(**kwargs)¶
  Bases: krake.data.serializable.ApiObject
  - class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶
  - __post_init__()¶
    Method automatically run at the end of the __init__() method, used to validate dependent attributes.
    Validations:
    - A non-namespaced GlobalCloud resource cannot reference the namespaced InfrastructureProvider resource, see #499 for details
    - A non-namespaced GlobalCloud resource cannot reference the namespaced Metric resource, see #499 for details
    Note: This validation cannot be achieved directly using the validate metadata, since validate must be a zero-argument callable, with no access to the other attributes of the dataclass.
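The constraint above (a global, non-namespaced resource must not point at namespaced resources) is the kind of cross-attribute check that only __post_init__ can perform. The dataclass below pictures it with illustrative names; it is a sketch of the idea, not the GlobalCloud implementation:

```python
from dataclasses import dataclass, field


@dataclass
class GlobalCloudSketch:
    # 'namespaced_refs' stands in for the infrastructure provider and
    # metric references a GlobalCloud may carry; purely illustrative.
    name: str
    namespaced_refs: list = field(default_factory=list)

    def __post_init__(self):
        # A non-namespaced resource may only reference other
        # non-namespaced (global) resources.
        for ref in self.namespaced_refs:
            if ref.get("namespace") is not None:
                raise ValueError(
                    f"GlobalCloud {self.name!r} cannot reference "
                    f"namespaced resource {ref['name']!r}"
                )
```

Because __post_init__ runs after all fields are assigned, it can inspect several attributes at once, which the per-field validate metadata cannot.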
- class krake.data.infrastructure.GlobalCloudList(**kwargs)¶
  Bases: krake.data.serializable.ApiObject
  - class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶
- class krake.data.infrastructure.GlobalInfrastructureProvider(**kwargs)¶
  Bases: krake.data.serializable.ApiObject
  - class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶
- class krake.data.infrastructure.GlobalInfrastructureProviderList(**kwargs)¶
  Bases: krake.data.serializable.ApiObject
  - class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶
- class krake.data.infrastructure.ImSpec(**kwargs)¶
  Bases: krake.data.serializable.Serializable
  IMSpec should contain the access data for the IM provider instance.
  - class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶
  - __post_init__()¶
    Method automatically run at the end of the __init__() method, used to validate dependent attributes.
    Validations:
    - At least one of the attributes from the following should be defined:
- class krake.data.infrastructure.InfrastructureProvider(**kwargs)¶
  Bases: krake.data.serializable.ApiObject
  - class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶
- class krake.data.infrastructure.InfrastructureProviderList(**kwargs)¶
  Bases: krake.data.serializable.ApiObject
  - class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶
- class krake.data.infrastructure.InfrastructureProviderRef(**kwargs)¶
  Bases: krake.data.serializable.Serializable
  - class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶
- class krake.data.infrastructure.InfrastructureProviderSpec(**kwargs)¶
  Bases: krake.data.serializable.PolymorphicContainer
  - class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶
- class krake.data.infrastructure.OpenstackAuthMethod(**kwargs)¶
  Bases: krake.data.serializable.PolymorphicContainer
  Container for the different authentication strategies of the OpenStack Identity service (Keystone).
  - class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶
- class krake.data.infrastructure.OpenstackSpec(**kwargs)¶
  Bases: krake.data.serializable.Serializable
  - class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶
- class krake.data.infrastructure.Password(**kwargs)¶
  Bases: krake.data.serializable.Serializable
  Data for the password authentication strategy of the OpenStack Identity service (Keystone).
  - user¶
    OpenStack user that will be used for authentication.
    Type: UserReference
  - project¶
    OpenStack project that will be used by Krake.
    Type: ProjectReference
  - class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶
- class krake.data.infrastructure.ProjectReference(**kwargs)¶
  Bases: krake.data.serializable.Serializable
  Reference to the OpenStack project that is used by the Password authentication strategy.
  - class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶
- class krake.data.infrastructure.UserReference(**kwargs)¶
  Bases: krake.data.serializable.Serializable
  Reference to the OpenStack user that is used by the Password authentication strategy.
  - class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶
Data model definitions for Kubernetes-related resources
- class krake.data.kubernetes.Application(**kwargs)¶
  Bases: krake.data.serializable.ApiObject
  - class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶
- class krake.data.kubernetes.ApplicationComplete(**kwargs)¶
  Bases: krake.data.serializable.ApiObject
  - class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶
- class krake.data.kubernetes.ApplicationList(**kwargs)¶
  Bases: krake.data.serializable.ApiObject
  - class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶
- class krake.data.kubernetes.ApplicationShutdown(**kwargs)¶
  Bases: krake.data.serializable.ApiObject
  - class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶
- class krake.data.kubernetes.ApplicationSpec(**kwargs)¶
  Bases: krake.data.serializable.Serializable
  Spec subresource of Application.
  - manifest¶
    List of Kubernetes resources to create. This attribute is managed by the user.
    Type: list[dict]
  - tosca¶
    The TOSCA template to create. A TOSCA template should be defined as a Python dict or as the URL where the template is located. This attribute is managed by the user.
    Type: Union[dict, str], optional
  - csar¶
    The CSAR archive to create. A CSAR file should be defined as the URL where the archive is located. This attribute is managed by the user.
    Type: str, optional
  - observer_schema¶
    List of dictionaries of fields that should be observed by the Kubernetes Observer. This attribute is managed by the user. Using this attribute as a basis, the Kubernetes Controller generates the status.mangled_observer_schema.
    Type: list[dict], optional
  - constraints¶
    Scheduling constraints.
    Type: Constraints, optional
  - backoff¶
    Multiplier applied to backoff_delay between attempts. Default: 1 (no backoff).
    Type: field, optional
  - backoff_delay¶
    Delay [s] between attempts. Default: 1.
    Type: field, optional
  - backoff_limit¶
    Maximal number of attempts. Default: -1 (infinite).
    Type: field, optional
  - class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶
  - __post_init__()¶
    Method automatically run at the end of the __init__() method, used to validate dependent attributes.
    Validations:
    1. At least one of the following attributes should be defined: manifest, tosca or csar. If the user specified multiple attributes at once, manifest has the highest priority, followed by tosca and csar.
    2. If a custom observer_schema and a manifest are specified by the user, the observer_schema needs to be validated, i.e. verified that resources are correctly identified and refer to resources defined in manifest, that fields are correctly identified, and that all special control dictionaries are correctly defined.
    Note: These validations cannot be achieved directly using the validate metadata, since validate must be a zero-argument callable, with no access to the other attributes of the dataclass.
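The first validation rule (at least one of manifest, tosca or csar must be given) is again a cross-attribute check that belongs in __post_init__. The sketch below illustrates it with simplified field types; it is not the actual ApplicationSpec class:

```python
from dataclasses import dataclass
from typing import Optional, Union


@dataclass
class AppSpecSketch:
    # Illustrative stand-ins for ApplicationSpec's alternative inputs.
    manifest: Optional[list] = None
    tosca: Union[dict, str, None] = None
    csar: Optional[str] = None

    def __post_init__(self):
        # At least one way of describing the application must be given;
        # per the reference, manifest takes priority over tosca,
        # which in turn takes priority over csar.
        if not (self.manifest or self.tosca or self.csar):
            raise ValueError("one of manifest, tosca or csar is required")
```

Constructing `AppSpecSketch(manifest=[{"kind": "Deployment"}])` succeeds, while `AppSpecSketch()` raises immediately, which is exactly the early feedback such validation is meant to give.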
- class krake.data.kubernetes.ApplicationStatus(**kwargs)¶
  Bases: krake.data.core.Status
  Status subresource of Application.
  - state¶
    Current state of the application.
    Type: ApplicationState
  - container_health¶
    Specific details of the application.
    Type: ContainerHealth
  - kube_controller_triggered¶
    Timestamp that represents the last time the current version of the Application was scheduled (version here meaning the Application after an update). It is only updated after an update of the Application led to a rescheduling, or at the first scheduling. It is used to keep a strict workflow between the Scheduler and the Kubernetes Controller: the former should always handle an Application creation or update before the latter. Only after this field has been updated by the Scheduler to be later than the modified timestamp can the Kubernetes Controller handle the Application.
    Type: datetime.datetime
  - scheduled¶
    Timestamp that represents the last time the application was scheduled to a different cluster, in other words when scheduled_to was modified. Thus, it is updated at the first binding to a cluster, or during a binding to a different cluster. This represents the timestamp when the current Application was scheduled to its current cluster, even if it has been updated in the meantime.
    Type: datetime.datetime
  - scheduled_to¶
    Reference to the cluster where the application should run.
    Type: ResourceRef
  - running_on¶
    Reference to the cluster where the application is currently running.
    Type: ResourceRef
  - mangled_observer_schema¶
    Actual observer schema used by the Kubernetes Observer, generated from the user input spec.observer_schema.
    Type: list[dict]
  - last_observed_manifest¶
    List of Kubernetes resources observed on the Kubernetes API.
    Type: list[dict]
  - last_applied_manifest¶
    List of Kubernetes resources created via Krake. The manifest is augmented by the additional resources needed for the functioning of internal mechanisms, such as the "Complete Hook".
    Type: list[dict]
  - shutdown_grace_period¶
    Time period the shutdown method waits after the shutdown command was issued to an object.
    Type: datetime
  - class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶
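The relation between spec.observer_schema and status.mangled_observer_schema is essentially "user selection of fields to watch, filled in by the controller". The toy function below conveys the idea of observing only the fields selected by a schema; it is a drastic simplification (the real observer schema also handles lists and special control dictionaries):

```python
def observe(manifest: dict, schema: dict) -> dict:
    """Keep only the fields of `manifest` that appear in `schema`.

    Nested dictionaries are filtered recursively; any other value present
    in the schema is copied through as-is.
    """
    observed = {}
    for key, sub in schema.items():
        if key not in manifest:
            continue
        if isinstance(sub, dict) and isinstance(manifest[key], dict):
            observed[key] = observe(manifest[key], sub)
        else:
            observed[key] = manifest[key]
    return observed


manifest = {"spec": {"replicas": 3, "paused": False}, "status": {"ready": 2}}
schema = {"spec": {"replicas": None}}
```

Here `observe(manifest, schema)` keeps only `spec.replicas`, so drift in unobserved fields (like `status.ready`) would not trigger a reconciliation.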
- class krake.data.kubernetes.CloudConstraints(**kwargs)¶
  Bases: krake.data.serializable.Serializable
  Constraints for the Cloud to which this cluster is scheduled.
  - class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶
- class krake.data.kubernetes.Cluster(**kwargs)¶
  Bases: krake.data.serializable.ApiObject
  - class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶
- class krake.data.kubernetes.ClusterBinding(**kwargs)¶
  Bases: krake.data.serializable.ApiObject
  - class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶
- class krake.data.kubernetes.ClusterCloudConstraints(**kwargs)¶
  Bases: krake.data.serializable.Serializable
  Constraints restricting the scheduling decision for a Cluster.
  - class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶
- class krake.data.kubernetes.ClusterConstraints(**kwargs)¶
  Bases: krake.data.serializable.Serializable
  - class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶
- class krake.data.kubernetes.ClusterList(**kwargs)¶
  Bases: krake.data.serializable.ApiObject
  - class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶
- class krake.data.kubernetes.ClusterNode(**kwargs)¶
  Bases: krake.data.serializable.Serializable
  Cluster node subresource of ClusterStatus.
  - status¶
    Current status of the cluster node.
    Type: ClusterNodeStatus, optional
  - class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶
-
class krake.data.kubernetes.ClusterNodeCondition(**kwargs)¶
Bases: krake.data.serializable.Serializable

Cluster node condition subresource of ClusterNodeStatus.

class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶
class krake.data.kubernetes.ClusterNodeMetadata(**kwargs)¶
Bases: krake.data.serializable.Serializable

Cluster node metadata subresource of ClusterNode.

class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶
class krake.data.kubernetes.ClusterNodeStatus(**kwargs)¶
Bases: krake.data.serializable.Serializable

Cluster node status subresource of ClusterNode.

conditions¶
List of currently observed node conditions.
Type: list[ClusterNodeCondition]

class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶
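A typical consumer of ClusterNodeStatus inspects the conditions list to decide whether a node is healthy. A minimal sketch, assuming Kubernetes-style conditions with `type` and `status` fields; the simplified Condition dataclass and the `node_is_ready` helper are hypothetical illustrations, not part of Krake's API.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Condition:
    """Simplified mirror of a Kubernetes-style node condition."""

    type: str    # e.g. "Ready", "MemoryPressure"
    status: str  # "True", "False" or "Unknown"


def node_is_ready(conditions: List[Condition]) -> bool:
    """Hypothetical helper: a node counts as ready if its "Ready"
    condition is reported with status "True"."""
    return any(c.type == "Ready" and c.status == "True" for c in conditions)


conditions = [
    Condition(type="MemoryPressure", status="False"),
    Condition(type="Ready", status="True"),
]
assert node_is_ready(conditions)
```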
class krake.data.kubernetes.ClusterSpec(**kwargs)¶
Bases: krake.data.serializable.Serializable

Spec subresource of Cluster.

backoff¶
Multiplier applied to backoff_delay between attempts. Default: 1 (no backoff).
Type: field, optional

backoff_delay¶
Delay [s] between attempts. Default: 1.
Type: field, optional

backoff_limit¶
Maximum number of attempts. Default: -1 (infinite).
Type: field, optional

class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶

__post_init__()¶
Method automatically run at the end of the __init__() method, used to validate dependent attributes.

Validations:
- At least one of the following attributes must be defined: kubeconfig, tosca

Note: This validation cannot be achieved directly using the validate metadata, since validate must be a zero-argument callable, with no access to the other attributes of the dataclass.
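The cross-field validation described above (at least one of kubeconfig or tosca must be set, which a zero-argument per-field validate callable cannot express) can be sketched with a plain dataclass. The class below is a simplified stand-in, not Krake's actual ClusterSpec.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ClusterSpecSketch:
    """Simplified stand-in for krake.data.kubernetes.ClusterSpec."""

    kubeconfig: Optional[dict] = None
    tosca: Optional[dict] = None

    def __post_init__(self):
        # Cross-field check: a per-field validator cannot see the other
        # attribute, so the check runs after __init__ instead.
        if self.kubeconfig is None and self.tosca is None:
            raise ValueError(
                "At least one of 'kubeconfig' or 'tosca' must be defined"
            )


ClusterSpecSketch(kubeconfig={"current-context": "minikube"})  # accepted
try:
    ClusterSpecSketch()  # neither attribute set: rejected
except ValueError:
    pass
```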
class krake.data.kubernetes.ClusterStatus(**kwargs)¶
Bases: krake.data.core.Status

Status subresource of Cluster.

kube_controller_triggered¶
Time when the Kubernetes controller was triggered. This is used to handle cluster state transitions.
Type: datetime

state¶
Current state of the cluster.
Type: ClusterState

metrics_reasons¶
Mapping of the name of each metric for which an error occurred to the reason for which it occurred.
Type: dict[str, Reason]

nodes¶
List of cluster nodes.
Type: list[ClusterNode]

cluster_id¶
UUID or name of the cluster (infrastructure) given by the infrastructure provider.
Type: str

scheduled¶
Timestamp that represents the last time the cluster was scheduled to a cloud.
Type: datetime.datetime

scheduled_to¶
Reference to the cloud where the cluster should run.
Type: ResourceRef

running_on¶
Reference to the cloud where the cluster is running.
Type: ResourceRef

retries¶
Count of remaining retries to access the cluster. Set via the backoff attribute in ClusterSpec.
Type: int

class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶
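The interplay between ClusterSpec's backoff fields and ClusterStatus.retries can be sketched as follows. The delay schedule shown (multiplying backoff_delay by backoff after each failed attempt) is an assumption for illustration only, not necessarily Krake's exact retry behaviour.

```python
def retry_delays(backoff: float = 1, backoff_delay: float = 1,
                 backoff_limit: int = -1, max_preview: int = 5):
    """Yield the assumed wait time [s] before each retry attempt.

    backoff_limit == -1 means retry forever; the preview is capped
    at max_preview attempts so the generator stays finite here.
    """
    attempts = max_preview if backoff_limit < 0 else min(backoff_limit, max_preview)
    delay = backoff_delay
    for _ in range(attempts):
        yield delay
        delay *= backoff  # backoff == 1 keeps the delay constant


# Defaults: constant 1 s delay between attempts (no backoff).
assert list(retry_delays()) == [1, 1, 1, 1, 1]
# backoff=2 doubles the delay after each failed attempt.
assert list(retry_delays(backoff=2, backoff_delay=1, backoff_limit=4)) == [1, 2, 4, 8]
```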
class krake.data.kubernetes.Constraints(**kwargs)¶
Bases: krake.data.serializable.Serializable

class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶
class krake.data.kubernetes.ContainerHealth(**kwargs)¶
Bases: krake.data.serializable.Serializable

class Schema(*, only: types.StrSequenceOrSet | None = None, exclude: types.StrSequenceOrSet = (), many: bool = False, context: dict | None = None, load_only: types.StrSequenceOrSet = (), dump_only: types.StrSequenceOrSet = (), partial: bool | types.StrSequenceOrSet = False, unknown: str | None = None)¶
-
class