# Deploy
ServiceProviders provide the actual services that customers can consume via their ManagedControlPlanes within the OpenControlPlane ecosystem.
They are deployed automatically via the `ServiceProvider` resource, for which the openmcp-operator is responsible.
All providers are cluster-scoped resources.
## Example ServiceProvider Resource

```yaml
apiVersion: openmcp.cloud/v1alpha1
kind: ServiceProvider
metadata:
  name: example-service
spec:
  image: ghcr.io/openmcp-project/images/service-provider-example:v1.0.0
  verbosity: INFO
```
## Common Provider Contract
All provider types (ClusterProviders, ServiceProviders, PlatformServices) follow the same deployment contract.
### Executing the Binary

#### Image
Each provider implementation must provide a container image with the provider binary set as its entrypoint.
#### Subcommands
The provider binary must support two subcommands:

- `init` initializes the provider. This usually means deploying CRDs for custom resources used by the controller(s).
  - The `init` subcommand is executed once as a Job whenever the deployed version of the provider changes.
- `run` runs the actual controller(s) required for the provider.
  - The `run` subcommand is executed in a pod as part of a Deployment.
  - The pods with the `run` command are only started after the init Job has run through successfully.
  - It may be run multiple times in parallel (high availability), so the provider implementation should support this, e.g. via leader election.
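The subcommand dispatch described above can be sketched in Go as follows. This is a minimal illustration, not the openmcp-operator's actual implementation; the `dispatch` helper and its stubbed bodies are hypothetical:

```go
package main

import (
	"fmt"
	"os"
)

// dispatch selects the provider mode based on the subcommand string.
// A real provider would deploy its CRDs in init and start its
// controllers (with leader election) in run; both are stubbed here.
func dispatch(subcommand string) error {
	switch subcommand {
	case "init":
		fmt.Println("init: deploying CRDs...")
		return nil
	case "run":
		fmt.Println("run: starting controllers...")
		return nil
	default:
		return fmt.Errorf("unknown subcommand %q, expected 'init' or 'run'", subcommand)
	}
}

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "expected 'init' or 'run' subcommand")
		os.Exit(1)
	}
	if err := dispatch(os.Args[1]); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```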
#### Arguments
Both subcommands take the same arguments, which are explained below. These arguments will always be passed into the provider.
- `--environment` (any lowercase string)
  - The environment argument is meant to distinguish between multiple environments (= platform clusters) watching the same onboarding cluster. For example, there could be a public environment and another, fenced one: both watch the same resources on the same cluster, but only one of them is meant to react to each resource, depending on its configuration.
  - Most setups will probably use only a single environment.
  - Will likely be set to the landscape name (e.g. `canary`, `live`) most of the time.
- `--provider-name` (any lowercase string)
  - This argument contains the name of the k8s provider resource from which this pod was created.
  - If multiple instances of the same provider are ever deployed in the same landscape, this value can be used to differentiate between them.
- `--verbosity` or `-v` (enum: `ERROR`, `INFO`, or `DEBUG`)
  - This value specifies the desired logging verbosity for the provider.
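Since both subcommands take the same arguments, a provider can register them once in a shared flag set. The sketch below uses Go's standard `flag` package; the `providerFlags` type and `parseProviderFlags` helper are illustrative assumptions, not part of any openMCP library:

```go
package main

import (
	"flag"
	"fmt"
	"os"
)

// providerFlags holds the arguments common to the init and run subcommands.
type providerFlags struct {
	Environment  string
	ProviderName string
	Verbosity    string
}

// parseProviderFlags registers and parses the shared contract arguments.
func parseProviderFlags(args []string) (providerFlags, error) {
	fs := flag.NewFlagSet("provider", flag.ContinueOnError)
	var pf providerFlags
	fs.StringVar(&pf.Environment, "environment", "", "environment (= platform cluster) this provider belongs to")
	fs.StringVar(&pf.ProviderName, "provider-name", "", "name of the k8s provider resource this pod was created from")
	fs.StringVar(&pf.Verbosity, "verbosity", "INFO", "log verbosity: ERROR, INFO, or DEBUG")
	fs.StringVar(&pf.Verbosity, "v", "INFO", "shorthand for --verbosity")
	err := fs.Parse(args)
	return pf, err
}

func main() {
	// os.Args[1] is the subcommand; the shared flags follow it.
	pf, err := parseProviderFlags(os.Args[2:])
	if err != nil {
		os.Exit(2)
	}
	fmt.Printf("environment=%s provider=%s verbosity=%s\n", pf.Environment, pf.ProviderName, pf.Verbosity)
}
```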
#### Environment Variables
The following environment variables can be expected to be set:
- `POD_NAME` - Name of the pod the provider binary runs in.
- `POD_NAMESPACE` - Namespace of the pod the provider binary runs in.
- `POD_IP` - IP address of the pod the provider binary runs in.
- `POD_SERVICE_ACCOUNT_NAME` - Name of the service account that is used to run the provider.
#### Customizations
While it is possible to customize some aspects of how the provider binary is executed (such as adding extra environment variables, overwriting the subcommands, or adding additional arguments), this should only be done in exceptional cases, to keep the complexity of setting up an openMCP landscape as low as possible.
### Configuration
Passing configuration into the provider binary via a command-line argument is not desired. If the provider requires configuration of some kind, it is expected to read it from one or more k8s resources, potentially even running a controller to reconcile these resources. The `init` subcommand can be used to register the CRDs for the configuration resources, although this has the disadvantage that the configuration resource is only known after the provider has already been started, which can cause problems with GitOps (or similar deployment methods that deploy all resources at the same time).
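As a sketch, such a provider-owned configuration resource could look like the following. The `ProviderConfig` kind, its API group, and the `spec` fields are entirely hypothetical; they would be defined by the provider's own CRDs, registered during `init`:

```yaml
# Hypothetical example: kind, apiVersion, and fields are illustrative only.
apiVersion: example-service.openmcp.cloud/v1alpha1
kind: ProviderConfig
metadata:
  name: default
spec:
  reconcileInterval: 5m
```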
## Tips and Tricks

### Getting Access to the Onboarding Cluster
Providers generally live on the platform cluster, so they can access it by simply using the in-cluster configuration. Getting access to the onboarding cluster is a little trickier: first, the `Cluster` resource of the onboarding cluster itself, or any `ClusterRequest` pointing to it, is required. The provider can simply create its own `ClusterRequest` with purpose `onboarding`; this little trick works because, due to the shared nature of the onboarding cluster, all requests to it resolve to a reference to the same `Cluster`. Then the provider needs to create an `AccessRequest` with the desired permissions and wait until it is ready. This results in a secret containing a kubeconfig for the onboarding cluster.
This flow is already implemented in the library function `CreateAndWaitForCluster`.
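The first step of this flow might look like the sketch below. Only `purpose: onboarding` is taken from the text above; the API group, resource names, and remaining field layout are assumptions, so consult the Cluster API of your landscape for the actual schema:

```yaml
# Sketch only: apiVersion, names, and fields other than `purpose` are assumptions.
apiVersion: clusters.openmcp.cloud/v1alpha1
kind: ClusterRequest
metadata:
  name: onboarding-access
  namespace: example-service-system
spec:
  purpose: onboarding
```

Once the request resolves to the shared onboarding `Cluster`, the provider would follow up with an `AccessRequest` and read the resulting kubeconfig secret.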
## Deployment Example
The `ServiceProvider` resource above will result in Job and Deployment resources similar to those of ClusterProviders, with the main difference being the `kind: ServiceProvider` annotation and corresponding labels.
The deployment structure follows the same pattern with an init job for CRD installation and a controller deployment for the actual service provider logic.