k8s-aws-oidc republishes the Kubernetes service-account issuer discovery
document and JWKS so AWS IAM can validate AssumeRoleWithWebIdentity for a
private cluster. It exposes only the public OIDC metadata endpoints through
Tailscale Funnel, which lets AWS trust the cluster issuer without exposing the
Kubernetes API server publicly.
The full project documentation is published at k8s.oidc.meigma.dev.
- Features
- Installation
- Usage
- Configuration
- Documentation
- Repository Layout
- Development
- Contributing
- License
- Republishes `/.well-known/openid-configuration` and `/openid/v1/jwks` for a private Kubernetes issuer
- Uses Tailscale Funnel to publish the issuer URL without exposing the API server
- Supports active/passive HA with Kubernetes leader election
- Ships a Helm chart for cluster deployment
- Ships Terraform modules for the AWS IAM OIDC provider and trusted roles
- Includes Diataxis documentation for deployment, operations, and troubleshooting
Before installing the bridge, make sure you have:
- a Kubernetes cluster where you can set `--service-account-issuer`
- an API server audience list that includes `sts.amazonaws.com`
- a Tailscale tailnet where the chosen tag can use Funnel
- Tailscale OAuth client credentials for the bridge node
- an AWS account where you can create an IAM OIDC provider and IAM roles
- `kubectl`, `helm`, and `tofu` (a quick version check is sketched below)
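If you want a quick sanity check that the client tooling is in place before you start, the following just prints versions; the bridge does not pin specific tool versions:

```bash
# Confirm the CLI tools used in the steps below are installed.
kubectl version --client
helm version
tofu version
```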
The same public issuer URL must be used in all three places:
- the Kubernetes API server `--service-account-issuer` flag
- the bridge `issuerUrl` value
- the AWS IAM OIDC provider

The API server audience list must also include `sts.amazonaws.com`, for example through
`--api-audiences=https://kubernetes.default.svc.cluster.local,sts.amazonaws.com`.
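As a concrete sketch, using the example issuer URL from the install steps below (the hostname is illustrative and must match the Funnel hostname the bridge publishes), the relevant API server flags look like this:

```
# Illustrative kube-apiserver flags; the issuer value must match the bridge
# issuerUrl and the AWS IAM OIDC provider URL exactly.
--service-account-issuer=https://oidc.example.tailnet.ts.net
--api-audiences=https://kubernetes.default.svc.cluster.local,sts.amazonaws.com
```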
Export a minimal set of values:
```bash
export ISSUER_URL=https://oidc.example.tailnet.ts.net
export TS_HOSTNAME=oidc-example
export TS_TAG=tag:k8s-oidc
export NAMESPACE=oidc-system
```

Create the namespace and the Secret that holds the Tailscale OAuth client credentials:
```bash
kubectl create namespace "${NAMESPACE}"
kubectl -n "${NAMESPACE}" create secret generic tailscale-oauth \
  --from-literal=TS_API_CLIENT_ID='<client-id>' \
  --from-literal=TS_API_CLIENT_SECRET='<client-secret>'
```

Install the chart from GHCR:
```bash
helm upgrade --install oidc-bridge oci://ghcr.io/meigma/k8s-aws-oidc-chart \
  --namespace "${NAMESPACE}" \
  --set issuerUrl="${ISSUER_URL}" \
  --set tailscale.hostname="${TS_HOSTNAME}" \
  --set tailscale.tag="${TS_TAG}" \
  --set tailscale.oauthSecret.name=tailscale-oauth
```

For local chart development, replace the OCI reference with `./chart`.
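For example, a local-development install that keeps every flag from the command above but renders the chart from the working tree would look like this:

```bash
# Same install, but from the local chart sources instead of GHCR.
helm upgrade --install oidc-bridge ./chart \
  --namespace "${NAMESPACE}" \
  --set issuerUrl="${ISSUER_URL}" \
  --set tailscale.hostname="${TS_HOSTNAME}" \
  --set tailscale.tag="${TS_TAG}" \
  --set tailscale.oauthSecret.name=tailscale-oauth
```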
Verify that the bridge is serving the public metadata endpoints AWS needs:
```bash
kubectl -n "${NAMESPACE}" rollout status deployment/oidc-bridge --timeout=300s
curl "${ISSUER_URL}/.well-known/openid-configuration"
curl "${ISSUER_URL}/openid/v1/jwks"
```

Create the AWS IAM provider and a workload role from the included example:
```bash
cd terraform/examples/basic
tofu init
tofu apply \
  -var="issuer_url=${ISSUER_URL}" \
  -var="role_name=demo-app-role" \
  -var="kubernetes_namespace=demo" \
  -var="kubernetes_service_account=demo-app"
```

For a complete end-to-end walkthrough, including a projected service-account token test from a pod, use First deploy.
The chart values that matter most in real deployments are:
- `issuerUrl`
- `tailscale.hostname`
- `tailscale.tag`
- `tailscale.oauthSecret.name`
- `tailscale.stateSecret.name`
- `serviceAccount.*`
- `sourceIpAllowlist.*`
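As a sketch of how the fully named keys above nest in a values file (the Secret names are examples rather than chart defaults, and the `serviceAccount.*` and `sourceIpAllowlist.*` sub-keys are omitted because their exact shape is chart-specific):

```bash
# Hypothetical values file for the keys above; pass it to Helm with
# `helm upgrade --install ... -f values-oidc.yaml`.
cat > values-oidc.yaml <<'EOF'
issuerUrl: https://oidc.example.tailnet.ts.net
tailscale:
  hostname: oidc-example
  tag: tag:k8s-oidc
  oauthSecret:
    name: tailscale-oauth
  stateSecret:
    name: tailscale-state
EOF
```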
Runtime environment variables and validation rules are documented in the config reference.
The deployed documentation site at k8s.oidc.meigma.dev is the primary operator guide.
Repository-local docs live under `docs/`.
- `chart/` - Helm chart for deploying the bridge
- `terraform/` - Terraform modules and example configuration for AWS
- `docs/` - Docusaurus source for the published documentation site
- `internal/` - Go packages for config, OIDC metadata handling, network helpers, and Tailscale runtime integration
The repo uses moon for local task orchestration.
Run the Go and chart checks:
```bash
moon run :build
moon run :test
moon run :vet
moon run :lint
moon run :chart-lint
moon run :chart-validate
```

Run the Terraform checks:
```bash
moon run terraform:fmt
moon run terraform:lint
moon run terraform:validate
```

Run the local end-to-end smoke harness when you can satisfy the live prerequisites locally:
```bash
aws-vault exec --no-session <profile> -- just up
just down
```

The smoke harness is documented in Local smoke and keeps its working state under `tmp/smoke/`.
Work on the documentation site locally with Node.js 20 or newer:
```bash
cd docs
npm install
npm run start
```

Issues and pull requests should stay scoped to the existing project structure.
Before opening a change, run the relevant moon tasks for the code, chart,
Terraform, or docs areas you touched.
This repository is dual-licensed under either of:
You may use this repository under either license, at your option.