# Host Filtering
Host filtering is Kloak's mechanism for restricting which TLS destinations can receive a secret's real value. Even if an attacker gains code execution inside your container, they cannot exfiltrate secrets to unauthorized hosts -- the eBPF program will refuse to perform the rewrite.
## Why Host Filtering Matters
Without host filtering, any outbound TLS connection from a Kloak-enabled pod could receive the real secret value. Consider this scenario:
- Your application sends an API key to `api.stripe.com` in the `Authorization` header
- An attacker exploits an SSRF vulnerability and makes your app send the same header to `evil.attacker.com`
- Without host filtering, the eBPF uprobe rewrites the `kloak:` placeholder for both destinations
With host filtering enabled, the eBPF program checks the TLS connection's destination hostname. If it does not match the allowed list, the placeholder is not rewritten -- the remote server receives the harmless `kloak:<ULID>` string instead of your real secret.
> **DANGER**
>
> Without host filtering, Kloak still protects secrets from being visible in application memory, but it does not prevent network-level exfiltration. Always configure `getkloak.io/hosts` for production secrets.
## Configuring Host Filtering
Add the `getkloak.io/hosts` label to your Secret with a comma-separated list of allowed hostnames:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: stripe-api-key
  labels:
    getkloak.io/enabled: "true"
    getkloak.io/hosts: "api.stripe.com"
type: Opaque
data:
  api-key: c2stbGl2ZS1rZXktMTIzNDU2  # sk-live-key-123456
```

Or using kubectl:
```shell
kubectl create secret generic stripe-api-key \
  --from-literal=api-key="sk-live-key-123456" \
  -n payments --dry-run=client -o yaml | \
kubectl label -f - \
  getkloak.io/enabled="true" \
  getkloak.io/hosts="api.stripe.com" \
  --local -o yaml | \
kubectl apply -f -
```

### Multiple Allowed Hosts
Separate multiple hostnames with commas:
```yaml
metadata:
  labels:
    getkloak.io/enabled: "true"
    getkloak.io/hosts: "api.stripe.com,api.stripe.com:443"
```

> **WARNING**
>
> Currently, only the first host in the comma-separated list is enforced in the eBPF map (due to the single `AllowedHost` field in the BPF value struct). Support for multiple hosts per secret is planned.
### No Host Filter (Wildcard)
If the `getkloak.io/hosts` label is omitted, the secret is allowed for all hosts:
```yaml
metadata:
  labels:
    getkloak.io/enabled: "true"
    # No getkloak.io/hosts = wildcard, rewrite for any destination
```

This is equivalent to `AllowedHosts: ["*"]` internally.
## How Host Resolution Works
Kloak uses DNS-verified host filtering — a language-agnostic approach that works identically for all TLS runtimes (Go, Python, Node.js, Rust, etc.) without depending on SNI or HTTP headers.
### DNS-Verified Trust Chain
The eBPF program builds a chain of trust from DNS resolution to TLS write:
1. **DNS Capture** — A kprobe on the kernel's `udp_recvmsg` function intercepts all DNS responses on the node. For hostnames listed in `getkloak.io/hosts` labels (the `watched_hosts` set), the resolved A/AAAA record IPs are stored in `dns_ip_map` with their TTL.
2. **Connection Tracking** — Tracepoints on `sys_enter_connect` and `sys_exit_connect` record every TCP connection's file descriptor → destination IP mapping in `conn_ip_map`. If the destination IP exists in `dns_ip_map`, the fd is cached in `last_verified_fd` for that process.
3. **Host Resolution at TLS Write Time** — When `SSL_write` or `crypto/tls.Write` is called, the `resolve_host()` function chains `last_verified_fd` → `conn_ip_map[{tgid, fd}]` → `dns_ip_map[ip]` to determine the hostname of the current TLS connection.
4. **Secret Filtering** — The resolved hostname is compared against the secret's `allowed_host`. Match → secret is rewritten. Mismatch → placeholder sent as-is.
5. **TTL Enforcement** — DNS entries include a TTL from the original DNS response. Expired entries are skipped on lookup, forcing re-verification through fresh DNS responses.
6. **Connection Cleanup** — A tracepoint on `sys_enter_close` removes `conn_ip_map` entries when file descriptors are closed, preventing stale mappings from being used after fd reuse.
> **TIP**
>
> This approach is language-agnostic — it works the same way for Go, Python, Node.js, and any OpenSSL/BoringSSL-based runtime. No SNI capture or HTTP header parsing is needed.
### Host Resolution Flow
| Runtime | TLS Hook | Host Resolution Method |
|---|---|---|
| Python (OpenSSL) | `SSL_write` uprobe | DNS-verified via `udp_recvmsg` kprobe |
| Node.js (BoringSSL) | `SSL_write` uprobe | DNS-verified via `udp_recvmsg` kprobe |
| Go (crypto/tls) | `crypto/tls.(*Conn).Write` uprobe | DNS-verified via `udp_recvmsg` kprobe |
| Rust, Ruby, PHP, curl | `SSL_write` / `SSL_write_ex` uprobe | DNS-verified via `udp_recvmsg` kprobe |
## Practical Examples
### Example 1: Stripe API Key (Single Host)
Only allow the secret to be sent to Stripe's API:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: stripe-key
  labels:
    getkloak.io/enabled: "true"
    getkloak.io/hosts: "api.stripe.com"
type: Opaque
data:
  key: c2stbGl2ZS0xMjM0NTY3ODkw  # sk-live-1234567890
```

Result:
- Request to `https://api.stripe.com/v1/charges` -- secret is rewritten with the real value
- Request to `https://evil.example.com/steal` -- secret remains as `kloak:<ULID>`
### Example 2: Two Secrets, Different Hosts
A common pattern: one secret for an allowed API, another restricted to a different host:
```shell
# Secret allowed for httpbin.org
kubectl create secret generic secret-allowed \
  --from-literal=api-key="REAL-ALLOWED-KEY-12345" \
  -n demo --dry-run=client -o yaml | \
kubectl label -f - getkloak.io/enabled="true" getkloak.io/hosts="httpbin.org" --local -o yaml | \
kubectl apply -f -

# Secret only allowed for example.com
kubectl create secret generic secret-blocked \
  --from-literal=api-key="REAL-BLOCKED-KEY-67890" \
  -n demo --dry-run=client -o yaml | \
kubectl label -f - getkloak.io/enabled="true" getkloak.io/hosts="example.com" --local -o yaml | \
kubectl apply -f -
```

When the application sends both secrets to `httpbin.org`:
```
X-Secret-Allowed: REAL-ALLOWED-KEY-12345  # Replaced -- host matches
X-Secret-Blocked: kloak:QN4FX8KJ...       # NOT replaced -- host mismatch
```

### Example 3: Raw TLS Filtering (Non-HTTP)
Host filtering works even for non-HTTP TLS protocols. The DNS resolution of the hostname is what enables host verification — no HTTP headers or SNI capture required:
```python
import ssl
import socket

ctx = ssl.create_default_context()

# DNS resolution of "api.stripe.com" is captured by the kprobe
# and stored in dns_ip_map for host verification
with socket.create_connection(("api.stripe.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="api.stripe.com") as tls:
        tls.sendall(b"secret data containing kloak:ULID here")
```

## Verifying Host Filtering
### Check Controller Logs
The controller logs show when secrets are synced to the eBPF map, including the host restriction:
```shell
kubectl logs -n kloak-system -l app.kubernetes.io/component=controller | grep "Synced secret"
```

Output:

```
Synced secret into eBPF map hash="kloak:MPZVR3GH..." hostLen=15
```

A `hostLen` greater than 0 confirms host filtering is active. A `hostLen` of 0 means wildcard (all hosts allowed).
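If you want to script this check, a small parser over the controller's log lines suffices. The log format is taken from the sample line above; the `host_filter_active` helper is a hedged sketch, not a Kloak-provided tool:

```python
import re


def host_filter_active(log_line: str) -> bool:
    """Return True if a 'Synced secret' log line reports hostLen > 0
    (i.e. host filtering is enforced, not wildcard)."""
    match = re.search(r"hostLen=(\d+)", log_line)
    return match is not None and int(match.group(1)) > 0


filtered = 'Synced secret into eBPF map hash="kloak:MPZVR3GH..." hostLen=15'
wildcard = 'Synced secret into eBPF map hash="kloak:ABCDEFGH..." hostLen=0'

assert host_filter_active(filtered) is True
assert host_filter_active(wildcard) is False
```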
### Test with httpbin
Deploy the demo application and check the response:
```shell
kubectl logs -l app=demo-python -n kloak-demo -c demo-app | grep -A5 "headers"
```

You should see the allowed secret replaced with the real value and the blocked secret still showing the `kloak:` ULID.
## Security Considerations
- **Host verification is DNS-based.** The trust chain depends on the integrity of DNS responses, so DNS spoofing could potentially trick the host filter. Use DNSSEC or trusted DNS resolvers to mitigate this.
- **DNS entries have TTL enforcement.** Expired entries are skipped, forcing re-verification through fresh DNS responses. This limits the window for stale IP → hostname mappings.
- **Hostname length is limited to 32 bytes in the BPF map.** Hostnames longer than 32 characters are truncated. This covers the vast majority of real-world API endpoints.
- **Wildcard matching is not supported.** You must specify exact hostnames: `*.stripe.com` will not work -- use `api.stripe.com` explicitly.
- **Host filtering is enforced in-kernel by eBPF.** Application code cannot bypass it, even with arbitrary code execution in the container.
- **DNS and connection tracking are global on the node.** All DNS responses and TCP connections are monitored (filtered by `watched_hosts` for DNS). This is necessary for containerized environments where DNS proxies may handle resolution in a different process context.
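The 32-byte limit noted above is worth keeping in mind when labeling secrets for long hostnames. A hedged sketch of the truncation behavior (the `MAX_HOST_LEN` constant and `stored_host` helper are illustrative names, not Kloak's actual code):

```python
MAX_HOST_LEN = 32  # size of the host field in the BPF value struct


def stored_host(hostname: str) -> str:
    """Hostname as it would land in the BPF map: truncated to 32 bytes."""
    return hostname.encode()[:MAX_HOST_LEN].decode(errors="ignore")


short = "api.stripe.com"
long_host = "extremely-long-subdomain.api.internal.example.com"  # 49 chars

assert stored_host(short) == short                   # fits, stored unchanged
assert len(stored_host(long_host)) == MAX_HOST_LEN   # truncated to 32 bytes
```

If an API endpoint's hostname exceeds 32 characters, verify the filter behaves as you expect before relying on it in production.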