Kubewarden

Not affected by cross-ns privilege escalation via policy api call

Author: VĂ­ctor Cuadrado Juan

Published:

Why Kubewarden is not affected by CVE-2026-22039

The recent vulnerability CVE-2026-22039 is doing the rounds in the Kubernetes security community, with dramatic titles such as “How an admission controller vulnerability turned Kubernetes namespaces into a security illusion”. You can read posts doubting admission controllers, claiming they have too much power or that they are too high-value a target.

In this blog post, we reassure Kubewarden users that they are not affected, and explain how our architecture prevents this class of attack.

In the admission controllers that are vulnerable, users with permission to create namespaced policies can manipulate the admission controller into accessing or modifying resources outside their intended scope, effectively bypassing namespace boundaries and undermining cluster security. You (or an attacker) could deploy a badly configured policy in your own namespace and, through it, gain access to all resources in the cluster. This is why the CVE carries a CVSS score of 9.9 out of 10 (Critical).

Kubewarden is not affected by this vulnerability.

From the very beginning, we designed our architecture to prevent such privilege escalations:

  • Namespaced admission policies in Kubewarden are strictly confined: they cannot access cluster-wide data or resources outside their namespace. Users can add namespaced policies to police their own namespace, but those policies remain confined by Kubernetes. This is part of our “persona” approach in Kubewarden: different personas, different permissions and constraints.

  • Only cluster-wide policies, intended for highly privileged users such as cluster operators, can request broader access. Even then, their capabilities are tightly controlled and must be explicitly granted by the cluster operator.

    This is implemented via our context-aware policies feature. Specifically:

    • The cluster operator must list the policy’s capabilities under spec.contextAwareResources, for example “read access to Secrets and ConfigMaps”.
    • The cluster operator must schedule the policy, via spec.policyServer, on a PolicyServer whose ServiceAccount has enough permissions for those capabilities.
  • The default PolicyServer ServiceAccount only has read permissions for Namespaces, Pods, Services, and Ingresses (see values.policyServer.permissions in the kubewarden-controller Helm chart). Cluster operators configure and deploy PolicyServers consciously and explicitly.

  • All violation attempts are logged by their PolicyServer.
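To make the two explicit grants concrete, here is a sketch of what a context-aware cluster-wide policy could look like. The policy name, module URL, and PolicyServer name are illustrative, not real artifacts; the field names follow the Kubewarden ClusterAdmissionPolicy CRD, but check our documentation for the exact schema of your version.

```yaml
# Illustrative ClusterAdmissionPolicy. The cluster operator explicitly
# grants read access to ConfigMaps via spec.contextAwareResources and
# schedules the policy, via spec.policyServer, on a PolicyServer whose
# ServiceAccount has matching RBAC permissions.
apiVersion: policies.kubewarden.io/v1
kind: ClusterAdmissionPolicy
metadata:
  name: example-context-aware-policy            # illustrative name
spec:
  module: registry://ghcr.io/kubewarden/policies/example:v0.1.0  # illustrative module
  policyServer: reserved-policy-server          # illustrative PolicyServer name
  mutating: false
  rules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      resources: ["pods"]
      operations: ["CREATE"]
  contextAwareResources:
    - apiVersion: v1
      kind: ConfigMap
```

A namespaced AdmissionPolicy offers no such escalation path: it only evaluates the admission requests it receives for its own namespace, and cannot request cluster-wide data.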

We like to compare our context-aware feature to Android or iPhone app permissions: permissions are explicit, per policy, reviewed, and logged.

This architecture is reinforced by running every policy inside a sandboxed WebAssembly host. This defense-in-depth approach ensures that namespace isolation is preserved and that no policy can overstep its intended boundaries, regardless of user intent or policy configuration.

Our commitment to secure defaults and explicit privilege boundaries is core to Kubewarden’s philosophy. We encourage users and operators to review our documentation for further details on policy permissions and security best practices.