Installation
Capsule Proxy is an optional add-on to the main Capsule Operator, so make sure you have a working Capsule instance before attempting to install it. Use the capsule-proxy only if you want Tenant Owners to list their cluster-scoped resources.
The capsule-proxy can be deployed standalone, i.e. running as a pod that bridges any Kubernetes client to the API Server. Optionally, it can be deployed as a sidecar container in the backend of a dashboard.
We officially support installing capsule-proxy only via the Helm chart. The chart itself handles installation and upgrades of the needed CustomResourceDefinitions. The following Artifact Hub repository is official:
Perform the following steps to install the capsule-proxy:
Add repository:
helm repo add projectcapsule https://projectcapsule.github.io/charts
Install capsule-proxy:
helm install capsule-proxy projectcapsule/capsule-proxy -n capsule-system --create-namespace
or (OCI):
helm install capsule-proxy oci://ghcr.io/projectcapsule/charts/capsule-proxy -n capsule-system --create-namespace
Show the status:
helm status capsule-proxy -n capsule-system
Upgrade the Chart:
helm upgrade capsule-proxy projectcapsule/capsule-proxy -n capsule-system
or (OCI):
helm upgrade capsule-proxy oci://ghcr.io/projectcapsule/charts/capsule-proxy --version 0.13.0 -n capsule-system
Uninstall the Chart:
helm uninstall capsule-proxy -n capsule-system
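After installation, you can verify that the proxy is up and reachable inside the cluster. The label selector below assumes the chart's standard Helm labels:

```shell
# Check that the capsule-proxy pod is running
kubectl get pods -n capsule-system -l app.kubernetes.io/name=capsule-proxy

# Check the service exposing the proxy (listens on port 9001 by default)
kubectl get svc capsule-proxy -n capsule-system
```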
GitOps
There are no specific requirements for using Capsule with GitOps tools like ArgoCD or FluxCD. You can manage Capsule resources as you would with any other Kubernetes resource.
ArgoCD
Visit the ArgoCD Integration for more options to integrate Capsule with ArgoCD.
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: capsule
namespace: argocd
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: system
source:
repoURL: ghcr.io/projectcapsule/charts
targetRevision: 0.12.4
chart: capsule
helm:
valuesObject:
...
proxy:
enabled: true
webhooks:
enabled: true
certManager:
generateCertificates: true
options:
generateCertificates: false
oidcUsernameClaim: "email"
extraArgs:
- "--feature-gates=ProxyClusterScoped=true"
serviceMonitor:
enabled: true
annotations:
argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true
destination:
server: https://kubernetes.default.svc
namespace: capsule-system
syncPolicy:
automated:
prune: true
selfHeal: true
syncOptions:
- ServerSideApply=true
- CreateNamespace=true
- PrunePropagationPolicy=foreground
- PruneLast=true
- RespectIgnoreDifferences=true
retry:
limit: 5
backoff:
duration: 5s
factor: 2
maxDuration: 3m
---
apiVersion: v1
kind: Secret
metadata:
name: capsule-repo
namespace: argocd
labels:
argocd.argoproj.io/secret-type: repository
stringData:
url: ghcr.io/projectcapsule/charts
name: capsule
project: system
type: helm
enableOCI: "true"
FluxCD
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
name: capsule
namespace: flux-system
spec:
serviceAccountName: kustomize-controller
targetNamespace: "capsule-system"
interval: 10m
releaseName: "capsule"
chart:
spec:
chart: capsule
version: "0.12.4"
sourceRef:
kind: HelmRepository
name: capsule
interval: 24h
install:
createNamespace: true
upgrade:
remediation:
remediateLastFailure: true
driftDetection:
mode: enabled
values:
proxy:
enabled: true
webhooks:
enabled: true
certManager:
generateCertificates: true
options:
generateCertificates: false
oidcUsernameClaim: "email"
extraArgs:
- "--feature-gates=ProxyClusterScoped=true"
---
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
name: capsule
namespace: flux-system
spec:
type: "oci"
interval: 12h0m0s
url: oci://ghcr.io/projectcapsule/charts
Considerations
Considerations when deploying capsule-proxy
Exposure
Depending on your environment, you can expose the capsule-proxy by:
- Gateway API (Recommended)
- Ingress (Recommended)
- NodePort Service
- LoadBalancer Service
- HostPort
- HostNetwork
Gateway API
If you are using a Gateway API compliant controller, you first have to decide how TLS is terminated, or rather what is possible in your environment. There are two options.
Backend Termination (Recommended)
This is where TLS termination is performed by the capsule-proxy: the Gateway forwards the encrypted traffic to the capsule-proxy, which decrypts it and forwards the request to the Kubernetes API Server. This way, client certificate authentication is preserved and passed through to the upstream.
- On your Gateway, add a listener that forwards the encrypted traffic to the capsule-proxy (pass-through TLS):
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
name: service-gateway
namespace: solar-system
spec:
gatewayClassName: default
listeners:
- allowedRoutes:
namespaces:
from: Selector
selector:
matchLabels:
kubernetes.io/metadata.name: capsule-system
hostname: api.cluster-name.company.com
name: https-capsule-proxy
port: 443
protocol: TLS
tls:
mode: Passthrough
- Install a TLSRoute resource to forward the encrypted traffic to the capsule-proxy:
extraManifests:
- apiVersion: gateway.networking.k8s.io/v1
kind: TLSRoute
metadata:
name: capsule-proxy-tls-route
namespace: capsule-system
spec:
parentRefs:
- name: service-gateway
namespace: solar-system
sectionName: https-capsule-proxy
hostnames:
- api.cluster-name.company.com
rules:
- backendRefs:
- name: capsule-proxy
port: 9001
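With pass-through TLS in place, clients connect through the Gateway hostname while the capsule-proxy still terminates TLS itself, so client certificates keep working. As a sketch, a kubeconfig entry for this setup might look like the following (hostname, user name, and file paths are placeholders for your environment):

```yaml
apiVersion: v1
kind: Config
clusters:
  - name: capsule-proxy
    cluster:
      server: https://api.cluster-name.company.com
      # CA that signed the capsule-proxy serving certificate
      certificate-authority: /path/to/capsule-proxy-ca.crt
users:
  - name: alice
    user:
      # Client certificate auth is preserved with pass-through TLS
      client-certificate: /path/to/alice.crt
      client-key: /path/to/alice.key
contexts:
  - name: capsule-proxy
    context:
      cluster: capsule-proxy
      user: alice
current-context: capsule-proxy
```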
Gateway Termination
When the Gateway terminates TLS, you must ensure that users reference the corresponding CA/serving certificate in their kubeconfigs. Distribute the Gateway's CA certificate to your users so they can verify the identity of the capsule-proxy. In this setup, client certificate authentication is stripped and not passed through to the upstream.
- Create a listener on the Gateway and reference the TLS certificate. The following example uses the cert-manager integration:
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
annotations:
cert-manager.io/cluster-issuer: cluster-service-issuer
cert-manager.io/private-key-algorithm: RSA
cert-manager.io/private-key-size: "4096"
name: service-gateway
namespace: solar-system
spec:
gatewayClassName: default
listeners:
- allowedRoutes:
namespaces:
from: Selector
selector:
matchLabels:
kubernetes.io/metadata.name: capsule-system
hostname: api.cluster-name.company.com
name: https-capsule-proxy
port: 443
protocol: HTTPS
tls:
certificateRefs:
- group: ""
kind: Secret
name: capsule-proxy-tls
mode: Terminate
Ingress
When using an Ingress Controller, you can expose the capsule-proxy through an Ingress resource. The Ingress Controller will handle the TLS termination and forward the requests to the capsule-proxy. The capsule-proxy will then forward the requests to the Kubernetes API Server.
           +-----------+        +-----------+        +-----------+
kubectl -->|   :443    |------->|   :9001   |------->|   :6443   |
           +-----------+        +-----------+        +-----------+
         ingress-controller    capsule-proxy       kube-apiserver
You can use the Ingress Values provided in the Helm chart to configure the Ingress resource for the capsule-proxy:
ingress:
enabled: true
className: "nginx" # or your ingress class name
hosts:
- host: capsule-proxy.company.com
paths:
- "/"
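If you instead need to preserve client-certificate authentication behind an Ingress, ingress-nginx can pass the encrypted stream through to the capsule-proxy, which serves TLS itself on port 9001. A sketch of such values, assuming an ingress-nginx controller started with --enable-ssl-passthrough:

```yaml
ingress:
  enabled: true
  className: "nginx"
  annotations:
    # Forward the encrypted stream to capsule-proxy instead of terminating at the controller
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  hosts:
    - host: capsule-proxy.company.com
      paths:
        - "/"
```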
Certificate Management
By default, Capsule delegates certificate management to cert-manager. This is the recommended way to manage the TLS certificates for Capsule, and it covers both the proxy and the admission webhook server. Alternatively, you can use a job to generate self-signed certificates and store them in a Kubernetes Secret:
options:
generateCertificates: true
certManager:
generateCertificates: false
Distribute CA within the Cluster
The capsule-proxy requires its CA certificate to be distributed to clients. By default, the CA certificate is stored in a Secret named capsule-proxy in the capsule-system namespace. In most cases this Secret must be distributed to other clients within the cluster (e.g. the Tekton Dashboard). If you are using an Ingress or any other endpoint for all clients, this step is probably not required.
Here’s an example of how to distribute the CA certificate to the namespace tekton-pipelines by using kubectl and jq:
kubectl get secret capsule-proxy -n capsule-system -o json \
| jq 'del(.metadata["namespace","creationTimestamp","resourceVersion","selfLink","uid"])' \
| kubectl apply -n tekton-pipelines -f -
This can be used for development purposes, but it is not recommended for production environments, where a dedicated solution for distributing the CA certificate across namespaces should be preferred.
User Authentication
The capsule-proxy intercepts all requests from the kubectl client directed to the API Server. Users authenticating with a TLS client certificate and key can talk to the API Server, since the proxy forwards client certificates to the Kubernetes API Server.
It is possible to protect the capsule-proxy using a certificate provided by Let's Encrypt. Keep in mind that, in this case, TLS termination is performed by the Ingress Controller, so authentication based on client certificates is stripped and not passed through to the upstream.
If your prerequisite is exposing the capsule-proxy through an Ingress, you must rely on token-based authentication, for example OIDC or Bearer tokens. Users providing tokens can always reach the API Server.
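For token-based access behind a TLS-terminating Ingress, the kubeconfig user entry carries a bearer token instead of a client certificate. A sketch, with hostname, user name, and token as placeholders:

```yaml
apiVersion: v1
kind: Config
clusters:
  - name: capsule-proxy
    cluster:
      server: https://capsule-proxy.company.com
      # CA of the certificate served by the Ingress (e.g. Let's Encrypt chain)
      certificate-authority: /path/to/ingress-ca.crt
users:
  - name: alice
    user:
      # OIDC or ServiceAccount token; client certificates do not survive TLS termination
      token: <your-bearer-token>
contexts:
  - name: capsule-proxy
    context:
      cluster: capsule-proxy
      user: alice
current-context: capsule-proxy
```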
HTTP Support
NOTE: kubectl will not work against an HTTP server.
Capsule Proxy supports both HTTPS and HTTP. The latter is not recommended, but we understand it can be useful for some use cases (e.g. development, or running behind a TLS-terminating reverse proxy). Since the default behaviour is HTTPS, use the flag --enable-ssl=false to work over HTTP.
Once capsule-proxy is running over HTTP, requests must provide authentication using an allowed Bearer Token.
For example:
TOKEN=<type your TOKEN>
curl -H "Authorization: Bearer $TOKEN" http://localhost:9001/api/v1/namespaces
Metrics
Starting from the v0.3.0 release, Capsule Proxy exposes Prometheus metrics available at http://0.0.0.0:8080/metrics.
The offered metrics are related to the internal controller-manager code base, such as work queue and REST client requests, and the Go runtime ones.
Along with these, metrics capsule_proxy_response_time_seconds and capsule_proxy_requests_total have been introduced and are specific to the Capsule Proxy code-base and functionalities.
capsule_proxy_response_time_seconds offers a bucket representation of the HTTP request duration, with the following label:
- path: the HTTP path of every single request that Capsule Proxy passes to the upstream
capsule_proxy_requests_total counts the requests that Capsule Proxy passes to the upstream, with the following labels:
- path: the HTTP path of every single request that Capsule Proxy passes to the upstream
- status: the HTTP status code of the request
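As a sketch, both metrics can be consumed with standard PromQL; the _bucket suffix below is the conventional suffix Prometheus appends to histogram series:

```promql
# 95th percentile of proxy response time, per upstream path
histogram_quantile(0.95, sum(rate(capsule_proxy_response_time_seconds_bucket[5m])) by (le, path))

# Request rate per path and HTTP status over the last 5 minutes
sum(rate(capsule_proxy_requests_total[5m])) by (path, status)
```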