Manage Custom Service Mesh Policies with PolicyResourceManager¶
This guide explains how to use the PolicyResourceManager class to create and manage custom service mesh authorization policies directly from your charm. This is an advanced feature for scenarios where the automatic policy generation provided by ServiceMeshConsumer is not sufficient.
Prerequisites¶
This guide assumes you have:
Basic knowledge of Juju charms and charm development
Understanding of service mesh concepts
Familiarity with adding mesh support to charms
Understanding of how traffic authorization works in charmed service meshes
Understanding automatic vs. custom policy management¶
Before using PolicyResourceManager, it’s important to understand how policy management works in Charmed Service Mesh:
Automatic policy management with ServiceMeshConsumer¶
When you add mesh support to your charm using ServiceMeshConsumer, your charm integrates with a beacon charm (like istio-beacon-k8s) via the service-mesh relation. In managed mode, the beacon charm automatically generates authorization policies based on your Juju relations and the AppPolicy or UnitPolicy definitions you provide.
The beacon charm manages these policies completely: creating, updating, and deleting them as relations change. This works well for typical charm-to-charm communication patterns.
Custom policy management with PolicyResourceManager¶
PolicyResourceManager gives you direct control to create policies that don’t follow the automatic relation-based pattern. Unlike ServiceMeshConsumer, where policies are managed by the beacon charm, PolicyResourceManager allows your charm to create and manage its own AuthorizationPolicy resources directly in Kubernetes.
When to use PolicyResourceManager¶
Consider using PolicyResourceManager in situations such as (but not limited to) the following:
Custom policy requirements: Your authorization policies cannot be expressed through the relation-based approach of ServiceMeshConsumer
Non-related applications: You need to manage policies between applications that are not related via Juju relations
Operating without managed mode: You’re working in an environment where the beacon’s managed mode is disabled
Note
For most charms, the ServiceMeshConsumer with AppPolicy and UnitPolicy is sufficient and recommended. Only use PolicyResourceManager if you have specific requirements that cannot be met by the automatic policy generation provided by the service-mesh relation.
How PolicyResourceManager identifies and owns resources¶
The PolicyResourceManager uses Kubernetes labels to identify and manage the policy resources it creates. This label-based ownership model is critical to understand:
Label-based resource identification¶
When you instantiate a PolicyResourceManager with specific labels:
PolicyResourceManager(
    charm=self,
    lightkube_client=client,
    labels={
        "app.kubernetes.io/instance": f"{self.app.name}-{self.model.name}",
        "kubernetes-resource-handler-scope": "cluster-internal",
    },
)
These labels serve two purposes:
Resource tagging: Every policy resource created by this PolicyResourceManager instance will be tagged with these labels
Resource querying: When calling reconcile() or delete(), the PolicyResourceManager queries Kubernetes for all resources matching these labels to determine what it currently owns
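To make the querying side concrete, the following is a minimal sketch of what a label-based ownership lookup looks like with lightkube. The AuthorizationPolicy generic-resource definition, the namespace, and the label values here are illustrative assumptions, not library API:

from lightkube import Client
from lightkube.generic_resource import create_namespaced_resource

# Istio's AuthorizationPolicy modelled as a lightkube generic resource
# (group/version/kind/plural follow Istio's conventions; adjust as needed).
AuthorizationPolicy = create_namespaced_resource(
    "security.istio.io", "v1", "AuthorizationPolicy", "authorizationpolicies"
)

client = Client(field_manager="my-app-my-model")
# Everything matching these labels is considered "owned" by the manager.
owned = client.list(
    AuthorizationPolicy,
    namespace="my-model",
    labels={
        "app.kubernetes.io/instance": "my-app-my-model",
        "kubernetes-resource-handler-scope": "cluster-internal",
    },
)
for policy in owned:
    print(policy.metadata.name)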
Why labels must be unique¶
The labels you provide must be unique to this specific PolicyResourceManager instance. This ensures:
Complete ownership: The PolicyResourceManager can safely delete any resource with these labels without affecting resources managed by other components
Clean reconciliation: During reconcile(), the manager can accurately determine which existing resources should be kept, updated, or deleted
No conflicts: Multiple PolicyResourceManager instances (even in the same charm) can coexist as long as they use different label sets
Warning
If you use the same labels for multiple PolicyResourceManager instances, they will conflict and may delete each other’s resources. Always ensure your label combination is unique to each policy manager instance.
Practical labeling strategy¶
A good labeling strategy combines:
labels = {
    # Identifies which charm/model created this resource
    "app.kubernetes.io/instance": f"{self.app.name}-{self.model.name}",
    # Identifies the purpose/scope within the charm
    "kubernetes-resource-handler-scope": "descriptive-scope-name",
}
For example, if a single charm needs to manage multiple sets of policies:
# Manager for cluster-internal policies
internal_prm = PolicyResourceManager(
    charm=self,
    lightkube_client=client,
    labels={
        "app.kubernetes.io/instance": f"{self.app.name}-{self.model.name}",
        "kubernetes-resource-handler-scope": "cluster-internal",
    },
)

# Manager for external service policies
external_prm = PolicyResourceManager(
    charm=self,
    lightkube_client=client,
    labels={
        "app.kubernetes.io/instance": f"{self.app.name}-{self.model.name}",
        "kubernetes-resource-handler-scope": "external-services",
    },
)
Each manager can independently reconcile its own set of policies without interfering with the other.
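For example, assuming each manager has its own list of MeshPolicy objects (internal_policies and external_policies are placeholder names) and a mesh_type obtained from ServiceMeshConsumer as shown in Step 4 below, a reconciliation pass might look like:

# Hypothetical usage: each manager only touches resources carrying its labels.
internal_prm.reconcile(internal_policies, mesh_type)
external_prm.reconcile(external_policies, mesh_type)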
Add PolicyResourceManager to your charm¶
Step 1: Import the required classes¶
First, fetch the service-mesh library and import the necessary classes in your charm:
from typing import List

from charms.istio_beacon_k8s.v0.service_mesh import (
    AppPolicy,
    Endpoint,
    MeshPolicy,
    Method,
    PolicyResourceManager,
    PolicyTargetType,
    ServiceMeshConsumer,
)
from lightkube import Client
Step 2: Instantiate the PolicyResourceManager¶
Create a method in your charm to instantiate the PolicyResourceManager:
import logging


class MyCharm(CharmBase):
    def __init__(self, *args):
        super().__init__(*args)
        # Logger passed to the PolicyResourceManager below
        self.logger = logging.getLogger(__name__)
        # Your existing ServiceMeshConsumer (optional but recommended)
        self._mesh = ServiceMeshConsumer(self)
        # Observe events where policies need reconciliation
        self.framework.observe(self.on.config_changed, self._reconcile_policies)
        self.framework.observe(self.on.remove, self._on_remove)

    def _get_policy_manager(self) -> PolicyResourceManager:
        """Return a PolicyResourceManager instance."""
        return PolicyResourceManager(
            charm=self,
            lightkube_client=Client(
                field_manager=f"{self.app.name}-{self.model.name}"
            ),
            labels={
                "app.kubernetes.io/instance": f"{self.app.name}-{self.model.name}",
                "kubernetes-resource-handler-scope": f"{self.app.name}-custom-policies",
            },
            logger=self.logger,
        )
Note
The lightkube_client must be instantiated with a field_manager parameter. This is required for Kubernetes server-side apply operations. A good practice is to use your application name combined with the model name to ensure uniqueness.
Step 3: Define your custom MeshPolicy objects¶
Create a method that returns the list of policies you want to manage:
def _get_custom_policies(self) -> List[MeshPolicy]:
    """Return the list of custom mesh policies to reconcile."""
    policies = []
    # Example 1: Allow app-a to access app-b's service on specific endpoints
    policies.append(
        MeshPolicy(
            source_namespace="model-a",
            source_app_name="app-a",
            target_namespace="model-b",
            target_app_name="app-b",
            target_type=PolicyTargetType.app,
            endpoints=[
                Endpoint(
                    ports=[8080, 443],
                    methods=[Method.get, Method.post],
                    paths=["/api/*", "/health"],
                )
            ],
        )
    )
    # Example 2: Allow app-a to access all units with specific labels
    policies.append(
        MeshPolicy(
            source_namespace="model-a",
            source_app_name="app-a",
            target_namespace="model-b",
            target_selector_labels={
                "app.kubernetes.io/name": "worker-app",
                "cluster-role": "worker",
            },
            target_type=PolicyTargetType.unit,
            endpoints=[
                Endpoint(ports=[9090])
            ],
        )
    )
    return policies
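Policies do not have to be hardcoded. As a sketch, _get_custom_policies could also derive entries from charm configuration; the allowed-source-app option and the port below are hypothetical, not part of the library:

# Hypothetical: build an extra policy from a charm config option
# ("allowed-source-app" is an assumed option declared in your charm's config).
source_app = self.config.get("allowed-source-app")
if source_app:
    policies.append(
        MeshPolicy(
            source_namespace=self.model.name,
            source_app_name=str(source_app),
            target_namespace=self.model.name,
            target_app_name=self.app.name,
            target_type=PolicyTargetType.app,
            endpoints=[Endpoint(ports=[8080])],
        )
    )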
Step 4: Reconcile policies in your charm’s event handlers¶
Call the reconcile() method to create or update the policies:
def _reconcile_policies(self, event):
    """Reconcile custom mesh policies."""
    if not self.unit.is_leader():
        return
    # Get the mesh type from ServiceMeshConsumer (if using it)
    mesh_type = self._mesh.mesh_type()
    if not mesh_type:
        self.logger.info("No active service mesh connection, skipping policy reconciliation")
        return
    prm = self._get_policy_manager()
    policies = self._get_custom_policies()
    # Reconcile will create, update, or delete policies as needed
    prm.reconcile(policies, mesh_type)
Step 5: Clean up on removal¶
Ensure policies are deleted when your charm is removed:
def _on_remove(self, event):
    """Clean up custom policies on charm removal."""
    if not self.unit.is_leader():
        return
    prm = self._get_policy_manager()
    prm.delete()
Understanding MeshPolicy configuration¶
A MeshPolicy defines a complete authorization policy with the following key fields. For more details on how these policies translate to actual authorization rules, see the traffic authorization documentation.
Source configuration¶
source_namespace: The Juju model (Kubernetes namespace) of the application making the request
source_app_name: The name of the Juju application making the request
Target configuration¶
target_namespace: The Juju model (Kubernetes namespace) of the target application
target_type: Either PolicyTargetType.app or PolicyTargetType.unit
App-targeted policies vs. Unit-targeted policies¶
The behavior differs significantly based on the target_type. For a detailed explanation of these policy types, see the charm mesh support guide.
For app-targeted policies (PolicyTargetType.app):
Traffic is directed to the target application’s Kubernetes Service address
Supports fine-grained Layer 7 (HTTP) access control
target_app_name: The name of the target Juju application
target_service: (Optional) The Kubernetes service name if different from the app name
endpoints: List of Endpoint objects with ports, methods, paths, and hosts
For unit-targeted policies (PolicyTargetType.unit):
Traffic is directed to individual Pods (units) of the target application
Supports Layer 4 (TCP) access control only
target_app_name: The name of the target Juju application, OR
target_selector_labels: A dictionary of Kubernetes labels to select target pods
endpoints: List of Endpoint objects with only ports (methods, paths, and hosts are not supported)
Note
Unit-targeted policies provide Layer 4 (TCP) access control to individual pods. They cannot restrict traffic by HTTP methods, paths, or hosts; only by ports. This limitation comes from the underlying Istio service mesh implementation. Use unit policies when you need to reach individual units directly, such as for metrics scraping from each pod.
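For instance, a unit-targeted policy letting a monitoring application scrape a metrics port on every pod might look like the following sketch (the source application name, model, and port are illustrative assumptions):

# Hypothetical: allow "prometheus" in the "cos" model to reach port 9100
# on each individual unit of this application.
MeshPolicy(
    source_namespace="cos",
    source_app_name="prometheus",
    target_namespace=self.model.name,
    target_app_name=self.app.name,
    target_type=PolicyTargetType.unit,
    endpoints=[Endpoint(ports=[9100])],  # ports only: Layer 4 control
)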
Best practices¶
Combining ServiceMeshConsumer and PolicyResourceManager¶
You can use both ServiceMeshConsumer and PolicyResourceManager together:
class MyCharm(CharmBase):
    def __init__(self, *args):
        super().__init__(*args)
        # ServiceMeshConsumer for standard relation-based policies
        # These are managed automatically by the beacon charm
        self._mesh = ServiceMeshConsumer(
            self,
            policies=[
                AppPolicy(
                    relation="database",
                    endpoints=[
                        Endpoint(ports=[5432], methods=[Method.get, Method.post])
                    ],
                )
            ],
        )

    def _reconcile_custom_policies(self, event):
        """Manage custom policies that can't be expressed via relations."""
        # Get mesh type from ServiceMeshConsumer
        mesh_type = self._mesh.mesh_type()
        if mesh_type:
            prm = self._get_policy_manager()
            # These policies are managed directly by your charm
            prm.reconcile(self._get_custom_policies(), mesh_type)
This approach gives you:
Automatic policy management for standard charm-to-charm communication via the service-mesh relation
Custom policy management for special cases that don't fit the standard pattern
Reconciliation timing¶
Call reconcile() in response to events that affect your policies:
When cluster topology changes (e.g., relation added/removed)
On config-changed if policies depend on configuration
On upgrade-charm to ensure policies are up to date
When the mesh connection is established (e.g., the service-mesh relation is created), as shown in the observer sketch below
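A sketch of wiring these observers in your charm's __init__, assuming the handler names used earlier in this guide and the service-mesh relation name:

# Hypothetical observer wiring for the events listed above.
self.framework.observe(self.on.config_changed, self._reconcile_policies)
self.framework.observe(self.on.upgrade_charm, self._reconcile_policies)
self.framework.observe(
    self.on["service-mesh"].relation_created, self._reconcile_policies
)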
Handling empty policy lists¶
The reconcile() method handles empty policy lists gracefully by deleting all managed resources:
# If no policies are needed, pass an empty list
prm.reconcile([], mesh_type)
# This is equivalent to:
prm.delete()
Further reading¶
Learn more about service mesh concepts
Learn about managed mode and automatic policy generation
Read the how-to guide for adding mesh support to understand AppPolicy and UnitPolicy
Explore the service_mesh library API documentation
See how authorization policies work in Istio documentation