A variety of Drycc Workflow components rely on an object storage system to do their work, including storing application slugs, Docker images, and database logs.
Drycc Workflow ships with Minio by default, which provides in-cluster, ephemeral object storage. This means that if the Minio server crashes, all data will be lost. Therefore, Minio should be used for development or testing only.
Every component that relies on object storage takes the same two configuration inputs.
The Helm chart for Drycc Workflow can be easily configured to connect Workflow components to off-cluster object storage. Drycc Workflow currently supports Google Cloud Storage, Amazon S3, Azure Blob Storage, and OpenStack Swift Storage.
Create storage buckets for each of the Workflow subsystems:
Depending on your chosen object storage, you may need to provide globally unique bucket names. If you are using S3, use hyphens instead of periods in the bucket names: using periods in a bucket name causes an SSL certificate validation issue with S3.
If you provide credentials with sufficient access to the underlying storage, Workflow components will create the buckets if they do not exist.
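If you are on AWS, creating the buckets up front might look like the following sketch using the AWS CLI. The bucket names are placeholders; substitute your own globally unique names, one per Workflow subsystem that stores data (builder slugs, registry images, database logs).

```shell
# Placeholder bucket names -- choose your own globally unique names.
# Note the hyphens: periods in S3 bucket names break SSL validation.
aws s3 mb s3://myorg-drycc-builder
aws s3 mb s3://myorg-drycc-registry
aws s3 mb s3://myorg-drycc-database
```

These commands require configured AWS credentials with permission to create buckets.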
If applicable, generate credentials that have create and write access to the storage buckets created in Step 1.
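On Google Cloud, for example, generating such credentials might look like the sketch below using the gcloud and gsutil CLIs. The service-account name, project ID, and bucket name are placeholders; the key file path matches the one used in the install example further below.

```shell
# Placeholder service-account name and project ID.
gcloud iam service-accounts create drycc-workflow \
    --display-name "Drycc Workflow object storage"
# Grant the account write access to each bucket (repeat per bucket):
gsutil iam ch \
    serviceAccount:drycc-workflow@my-project.iam.gserviceaccount.com:objectAdmin \
    gs://myorg-drycc-builder
# Export a JSON key for use in the Helm values:
gcloud iam service-accounts keys create /path/to/gcs_creds.json \
    --iam-account drycc-workflow@my-project.iam.gserviceaccount.com
```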
If you are using AWS S3 and your Kubernetes nodes are configured with appropriate IAM API keys via InstanceRoles, you do not need to create API credentials. Do, however, validate that the InstanceRole has appropriate permissions to the configured buckets!
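A minimal sketch of such an InstanceRole policy is shown below. The bucket names are placeholders, and the exact action list should be checked against your security requirements; the key point is that the components need to list, create, read, and write objects in the configured buckets.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetBucketLocation", "s3:CreateBucket"],
      "Resource": "arn:aws:s3:::myorg-drycc-*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::myorg-drycc-*/*"
    }
  ]
}
```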
If you haven't already added the Helm repo, do so with:

```
helm repo add drycc https://charts.drycc.cc/stable
```
Operators should configure object storage by editing the Helm values file before running `helm install`. To do so:

```
helm inspect values drycc/workflow > values.yaml
```
Update the `global/storage` parameter to reference the platform you are using, e.g. `s3`, `azure`, `gcs`, or `swift`.
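For S3, for instance, the relevant section of `values.yaml` might look like the sketch below. The values shown are placeholders, and the exact key names should be checked against the output of `helm inspect values` for your chart version.

```yaml
global:
  storage: s3

s3:
  accesskey: "your-access-key"
  secretkey: "your-secret-key"
  region: "us-west-1"
  builder_bucket: "myorg-drycc-builder"
  registry_bucket: "myorg-drycc-registry"
  database_bucket: "myorg-drycc-database"
```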
All values will be automatically (base64) encoded except the `key_json` values under `gcr`. These must be base64-encoded. This is to support cleanly passing said encoded text via the `helm --set` CLI functionality rather than attempting to pass the raw JSON data. For example:
```
$ helm install workflow --namespace drycc \
    --set global.platform_domain=yourdomain.com \
    --set global.storage=gcs,gcs.key_json="$(cat /path/to/gcs_creds.json | base64 -w 0)"
```
You are now ready to run the following using your desired object storage:

```
helm install drycc/workflow --namespace drycc -f values.yaml
```
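After the install completes, one way to confirm that the components picked up the storage settings is to check that the pods reach Running, and to inspect a component's logs if anything crash-loops. The deployment name below is illustrative; use the names reported by the first command.

```shell
kubectl --namespace drycc get pods
# If a component is crash-looping, its logs usually surface storage errors:
kubectl --namespace drycc logs deploy/drycc-builder
```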