Drycc Workflow's controller and passport components rely on a PostgreSQL database to store platform state.
By default, Drycc Workflow ships with the database component, which provides an in-cluster PostgreSQL database backed up to in-cluster or off-cluster object storage. Currently, for object storage, which is utilized by several Workflow components, only off-cluster solutions such as S3 or GCS are recommended in production environments. Experience has shown that many operators who already opt for off-cluster object storage prefer to host Postgres off-cluster as well, using Amazon RDS or a similar service. When exercising both options, a Workflow installation becomes entirely stateless and is thus easier to restore or rebuild should the need ever arise.
First, provision a PostgreSQL RDBMS using the cloud provider or other infrastructure of your choice. Take care to ensure that security groups or other firewall rules will permit connectivity from your Kubernetes worker nodes, any of which may play host to the Workflow controller component.
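One way to confirm that worker nodes can actually reach the database is to run a throwaway pod on the cluster and probe the endpoint from there. This is a hedged sketch: the pod name, image tag, and the `<host>`/`<port>` placeholders are illustrative, not prescribed by Workflow.

```shell
# Launch a temporary pod (removed on exit) and check that the database
# endpoint is reachable from inside the cluster. pg_isready ships with
# the standard postgres image.
kubectl run pg-check --rm -it --restart=Never \
  --image=postgres:15 -- \
  pg_isready -h <host> -p <port>
```

If the security groups are configured correctly, `pg_isready` reports the server as accepting connections; a timeout here usually points to a firewall rule rather than a database problem.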
Take note of the following connection details:

* The hostname or public IP of the database server
* The port on which PostgreSQL listens (typically 5432)
* The username and password Workflow will connect with
* The name of the database Workflow will use
Within the off-cluster RDBMS, manually provision the following:

* A database user (typically named `drycc`)
* A database owned by that user (typically named `drycc`)
If you are able to log into the RDBMS as a superuser or a user with appropriate permissions, this process will typically look like this:
```
$ psql -h <host> -p <port> -d postgres -U <"postgres" or your own username>
> create user <drycc username; typically "drycc"> with password '<password>';
> create database <database name; typically "drycc"> with owner <drycc username>;
> \q
```
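Once the user and database exist, it can be convenient to record the connection details in a single standard libpq-style URL, which many PostgreSQL tools accept. A minimal sketch, using placeholder values rather than real credentials:

```shell
# Assemble the connection details noted above into a PostgreSQL URL.
# All values below are placeholders for illustration only.
DB_USER="drycc"
DB_PASS="s3cret"
DB_HOST="mydb.example.com"
DB_PORT="5432"
DB_NAME="drycc"

DATABASE_URL="postgres://${DB_USER}:${DB_PASS}@${DB_HOST}:${DB_PORT}/${DB_NAME}"
echo "$DATABASE_URL"
```

The same individual values are what you will enter into the Helm chart's database configuration in the next step.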
The Helm chart for Drycc Workflow can be easily configured to connect the Workflow controller component to an off-cluster PostgreSQL database.
```
$ helm inspect values drycc/workflow > values.yaml
```
Then edit `values.yaml`:

* Update the `[database]` configuration section to properly reflect all connection details.
* Update the `[controller]` configuration section to properly reflect `platformDomain` details.
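As a rough illustration, the edited sections might look like the fragment below. The key names here are assumptions for the sake of example; take the authoritative names and structure from the `values.yaml` you just exported with `helm inspect values`.

```yaml
# Illustrative only -- consult the exported values.yaml for the real keys.
database:
  enabled: false              # disable the in-cluster database component
  host: mydb.example.com      # off-cluster PostgreSQL endpoint
  port: 5432
  username: drycc
  password: s3cret
  name: drycc

controller:
  platformDomain: example.com # domain under which apps are served
```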
You are now ready to run `helm install drycc/workflow --namespace drycc -f values.yaml` as usual.