Isolating Secrets Management within Kubernetes Namespaces

Why should I care?

This piece will be useful to you if you need to ensure that workloads in your cluster don’t have cluster-scoped access to resources, even if they are controllers that one would typically consider closer to the control plane than to individual workloads. You are probably looking for ways to configure external secrets management with namespace isolation, or more specifically, how to configure ESO (External Secrets Operator – external-secrets) in a namespaced approach. In this guide, we’ll discuss the challenges of achieving this with the open-source version due to its current release model, and how we’ve simplified the process with the enterprise External Secrets Inc. offering.

Kubernetes controllers everywhere

In the cloud-native era, Kubernetes controllers and operators have become indispensable tools for managing complex applications. We use them to deploy stateful applications (like Prometheus or RabbitMQ), to handle API server admission control, and, as we’ll discuss here, to manage secret synchronization across clusters.

Cluster-Wide vs. Namespaced Secrets Management

When it comes to secrets management, the common approach is to use a cluster-wide controller like the External Secrets Operator (ESO), which synchronizes secrets across an entire cluster. Many teams adopt this method because it is relatively easy to set up and maintain. With this approach, tenancy is often managed by configuring namespaced SecretStores (resources that define the connection to an external secret provider) linked to ExternalSecrets, which define how to fetch and store secrets. However, some organizations require that no controller has cluster-wide visibility. While the open-source ESO documentation touches on other tenancy configurations, it won’t really help you deploy ESO with strict namespace-based isolation, since for now the project ships only Helm and OLM releases. This is one of the many things you get with our enterprise offering.
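
As a quick illustration of the namespaced tenancy model, a SecretStore and an ExternalSecret can live together in a single namespace. This is a minimal sketch; the Vault provider, server address, names, and paths are placeholders for your own backend:

apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: team-a-store        # namespaced store, visible only within team-a
  namespace: team-a
spec:
  provider:
    vault:                  # example provider; substitute your own backend
      server: "https://vault.example.com"
      path: "secret"
      version: "v2"
      auth:
        tokenSecretRef:
          name: vault-token
          key: token
---
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-credentials
  namespace: team-a
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: team-a-store
    kind: SecretStore       # namespaced kind, as opposed to ClusterSecretStore
  target:
    name: db-credentials    # name of the Kubernetes Secret ESO will create
  data:
    - secretKey: password
      remoteRef:
        key: database
        property: password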

The Challenges of Namespaced ESO Deployments

Configuring ESO in a namespaced manner (isolating the operator to only specific namespaces) is technically possible but difficult to implement. The open-source project provides releases through Helm and the Operator Lifecycle Manager (OLM), and these installation methods don’t make it easy to scale ESO deployments across multiple namespaces, especially when namespaces are created dynamically. Organizations needing namespaced ESO instances often end up creating custom in-house solutions to manage multiple ESO deployments, which can be time-consuming and error-prone.
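
For reference, a single namespace-scoped installation with the upstream Helm chart looks roughly like the sketch below. It assumes the chart’s scopedNamespace and scopedRBAC values (check your chart version), and you would have to repeat it for every namespace you add:

# Sketch: one namespace-scoped ESO release, installed with the upstream Helm chart
helm repo add external-secrets https://charts.external-secrets.io
helm install external-secrets external-secrets/external-secrets \
  --namespace team-a \
  --set scopedNamespace=team-a \
  --set scopedRBAC=true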

How External Secrets Inc. Simplifies Namespace Isolation

Our enterprise offering addresses this challenge through the External Secrets Inc. Agent, which makes deploying namespaced ESOs seamless. This agent deploys ESO in a namespaced manner by monitoring your cluster for namespaces with a specific label. It then automatically deploys ESO instances to those namespaces, ensuring secrets are only synchronized within the intended scope.

With our agent, any new namespaces labeled with a pre-configured label will automatically receive:

  • The necessary pull secrets
  • Proper roles and permissions
  • A namespaced ESO deployment, with options for high availability if desired
  • Configurable update strategy, so you can get automatic updates if you want
  • Configurable registry to pull ESO from

Setting Up Namespaced ESO with External Secrets Inc. Agent

Prerequisites

  • An External Secrets Inc. account
  • kubectl installed
  • Access to a Kubernetes cluster with permissions to create deployments and assign roles

Step-by-Step Guide

  1. Log in to your account at https://app.externalsecrets.com/login and create a new agent from the Agents screen.

  2. Install the Agent

Copy the command provided on the agent screen and run it in your terminal (make sure kubectl is configured and pointing to the cluster where you want to deploy ESO). The command will look like this:

# This is just an example, copy from your dashboard
curl https://api.prod.externalsecrets.com/public/agents/<id>/manifest/latest?token=$token | kubectl apply -f -

You should see output indicating that the agent resources were successfully created.

  3. Label Target Namespaces

Label the namespaces where you want ESO to be deployed:

kubectl label namespace <namespace-name> eso-deploy=true

  4. Create an ESODeployment Resource

Define an ESODeployment resource with the following sample YAML. This configures the deployment scope, resource limits, and auto-update strategy (for this guide, it is important that you set the scope to Namespaced):

apiVersion: eso.externalsecrets.com/v1
kind: EsoDeployment
metadata:
  name: esodeployment-sample
  labels:
    app.kubernetes.io/name: esodeployment-sample
    app.kubernetes.io/managed-by: esi-agent
spec:
  autoUpdateConfig:
    strategy: Fixed
    version: v0.11.1
  matchLabel:
    eso-deploy: "true"
  scope: Namespaced
  haEnabled: false
  resources:
    requests:
      memory: "64Mi"
      cpu: "100m"
    limits:
      memory: "128Mi"
      cpu: "100m"

  5. Apply this with kubectl apply -f <filename.yaml>, and after a short wait, ESO should be running in your labeled namespaces.

  6. Check ESO is Running

Verify that ESO is running in the desired namespaces by checking the pods or deployments with kubectl get pods -n <namespace>.

Doing the same with OSS ESO

You can also do this with the OSS version of ESO, or with ESO builds you might have in your internal registry. To do that, configure the registryConfig field (imgName and URL) of the ESODeployment resource. This currently works with the Fixed strategy only, since auto-updates would require some extra work to list tags from your registry.

Remove the previous ESODeployment with kubectl delete -f <filename.yaml>.

Then apply an updated ESODeployment YAML with the correct registry URL, image name, and a valid version:

apiVersion: eso.externalsecrets.com/v1
kind: EsoDeployment
metadata:
  labels:
    app.kubernetes.io/name: esodeployment-sample
    app.kubernetes.io/managed-by: esi-agent
  name: esodeployment-sample
spec:
  autoUpdateConfig:
    strategy: Fixed
    version: v0.10.4
  registryConfig:
    imgName: "external-secrets"
    URL: "ghcr.io/external-secrets"
  matchLabel:
    eso-deploy: "true"
  scope: Namespaced
  haEnabled: false
  resources:
    requests:
      memory: "64Mi"
      cpu: "100m"
    limits:
      memory: "128Mi"
      cpu: "100m"

Apply it and you will have the OSS version of ESO running in the labeled namespaces (or, if you prefer, an internally built ESO pulled from your own registry).

A Comparison with Manual Helm Installations

Without the External Secrets Inc. Agent, you would need to manually deploy ESO to each new namespace. This would involve creating and managing individual Helm releases for every namespace or building an in-house controller to automate the process. Our enterprise agent eliminates this hassle, offering a streamlined way to manage ESO deployments on a namespace basis.
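
Concretely, the manual alternative tends to end up as a loop like the one sketched below, re-run (or wired into automation) every time a namespace is added. It reuses the scoped Helm values assumed earlier:

# Sketch: repeating a scoped Helm release for each tenant namespace
for ns in team-a team-b team-c; do
  helm upgrade --install external-secrets external-secrets/external-secrets \
    --namespace "$ns" \
    --set scopedNamespace="$ns" \
    --set scopedRBAC=true
done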

External Secrets Inc. is committed to simplifying secrets management and reducing the burden of operating secure and compliant multi-tenant clusters. Try it out by signing up at https://app.externalsecrets.com/signup and explore how easy it is to keep your secrets isolated within namespaces while benefiting from FIPS-compliant image auto-updates, secure secret rotation, and more.

