Evolution of pod privileges in EKS
Host role → Proxy roles → IRSA → Pod Identity
Host Role (pre-kube2iam)
An AWS EC2 instance has the concept of an instance profile with a linked IAM role it can assume. Initially, EKS simply leveraged this role to allow pods to talk to AWS services. A pod running in a Kubernetes cluster is just a process on an EC2 instance, so when it asks for credentials, the call is handled by the node's instance metadata service. The same is true for every pod on the node, so they all receive the same credentials.
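A quick way to see this from inside any pod on the node (a minimal illustration; IMDSv1 shown for brevity, IMDSv2 additionally requires a session token):
# List the role attached to the node's instance profile, then fetch its credentials
ROLE_NAME=$(curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/)
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/${ROLE_NAME}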

Proxy metadata servers (2017–2019)
To improve on this, projects such as uswitch/kiam, zalando/kube-aws-iam-controller, lyft/metadataproxy, and kube2iam took different approaches to giving different pods different roles. One thing was common to all of them: they put a proxy in front of the instance metadata service. Traffic destined for 169.254.169.254 was forwarded to another server (on localhost or running as a DaemonSet), which answered credential requests by assuming the role mapped to the requesting pod and returning those credentials to it.
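The redirection itself is usually an iptables rule installed on each node. A representative rule, close to what kube2iam documents (the interface name, proxy port, and NODE_IP variable here are deployment-specific assumptions):
# Send pod traffic destined for the IMDS to the proxy listening on the node instead
iptables --table nat --append PREROUTING --protocol tcp \
  --destination 169.254.169.254 --dport 80 --in-interface eni+ \
  --jump DNAT --to-destination ${NODE_IP}:8181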
Metadataproxy
A summary of one approach taken by metadataproxy —
For IAM routes, metadataproxy uses STS to assume roles for containers. To do so, it takes the source IP address of an incoming metadata request, finds the running Docker container associated with that IP, and uses the value of the container's IAM_ROLE environment variable as the role it will assume. It then assumes the role and returns the STS credentials in the metadata response. Credentials obtained from STS are cached and automatically rotated as they expire.
Kiam
Kiam split the whole process into two independent components.
Agent — runs as a DaemonSet and hosts an HTTP proxy that intercepts credential requests to the metadata service.
Server — watches Pods and communicates with AWS STS to request credentials. It also maintains a cache of credentials for roles currently in use by running pods, ensuring that credentials are refreshed every few minutes and stored in advance of Pods needing them.
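With both kube2iam and Kiam, the pod-to-role mapping is declared as a pod annotation; a minimal sketch (the role name and image below are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: example-app
  annotations:
    iam.amazonaws.com/role: arn:aws:iam::123456789123:role/my-pod-role
spec:
  containers:
  - name: app
    image: amazonlinux:2
    command: ['sleep', '3600']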

IRSA (2019–2023)
IRSA (IAM Roles for Service Accounts) leverages foundational AWS IAM constructs, such as OpenID Connect (OIDC) identity providers and IAM trust policies, to establish trust between an IAM role and an Amazon EKS cluster. The OIDC provider establishes trust between the cluster's OIDC-compatible Kubernetes identity provider and the AWS account in which it was created.
An application running in a pod can pass the projected service account token, along with a role ARN, to the STS AssumeRoleWithWebIdentity API and get back temporary role credentials. The mapping between pods and an IAM role is expressed as an annotation on the ServiceAccount they use.
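The exchange can be reproduced manually from inside a pod configured for IRSA (a sketch; the AWS SDKs and CLI do this automatically using the injected environment variables shown later):
# Trade the projected service account token for temporary role credentials
aws sts assume-role-with-web-identity \
  --role-arn "${AWS_ROLE_ARN}" \
  --role-session-name irsa-demo \
  --web-identity-token "$(cat ${AWS_WEB_IDENTITY_TOKEN_FILE})"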

IRSA enables EKS operators to comply with:
- The security principle of least privilege (PoLP): IAM permissions are scoped to a Kubernetes Service Account (SA), and only Pods using that service account have access to those permissions.
- Credential isolation (CI): a Pod’s containers can only fetch credentials for the IAM role associated with their service account, and they have no access to credentials used by other Pods (containers, really).
How to enable IRSA?
# An IAM OIDC Identity Provider must be created and associated with a cluster
# In the IAM console: confirm an OIDC provider has been created for the EKS cluster
# With AWS CLI: Verify the IAM OIDC Identity Provider.
aws iam list-open-id-connect-providers
# Confirm OIDC IP association with the Amazon EKS cluster
aws eks describe-cluster --name ${EKS_CLUSTER_NAME} --query 'cluster.identity'
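# If the OIDC provider does not exist yet, eksctl offers a one-line way to create it
# (a sketch; assumes eksctl is installed and configured for this cluster)
eksctl utils associate-iam-oidc-provider --cluster ${EKS_CLUSTER_NAME} --approve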
# Check IAM role providing the required permissions for the carts service
# Granting read and write to DynamoDB table
# View the policy
aws iam get-policy-version \
  --policy-arn arn:aws:iam::${AWS_ACCOUNT_ID}:policy/${EKS_CLUSTER_NAME}-carts-dynamo \
  --version-id v1 \
  --query 'PolicyVersion.Document' | jq .
# Check the role's trust relationship, which allows the OIDC provider associated
# with the EKS cluster to assume this role as long as the subject is the
# ServiceAccount for the carts component.
#
# Review role
aws iam get-role \
  --role-name ${EKS_CLUSTER_NAME}-carts-dynamo \
  --query 'Role.AssumeRolePolicyDocument' | jq .
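The returned trust policy generally has the following shape; the account ID, region, OIDC provider ID, and the carts namespace/service account below are placeholders, not values from this cluster:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<ACCOUNT_ID>:oidc-provider/oidc.eks.<REGION>.amazonaws.com/id/<OIDC_ID>"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.<REGION>.amazonaws.com/id/<OIDC_ID>:sub": "system:serviceaccount:<NAMESPACE>:<SERVICE_ACCOUNT>"
        }
      }
    }
  ]
}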
How does IRSA work?

Service Account definition
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789123:role/aws-eks-irsa-aws-node-irsa
  name: aws-node
  namespace: kube-system
The mutating webhook modifies the pod definition to the following:
apiVersion: v1
kind: Pod
metadata:
  name: aws-node-ghp8s
  namespace: kube-system
spec:
  serviceAccount: aws-node
  serviceAccountName: aws-node
  containers:
  - name: aws-node
    image: 602401143452.dkr.ecr.eu-west-1.amazonaws.com/amazon-k8s-cni:v1.10.4-eksbuild.1
    env:
    - name: AWS_STS_REGIONAL_ENDPOINTS
      value: regional
    - name: AWS_DEFAULT_REGION
      value: eu-west-1
    - name: AWS_REGION
      value: eu-west-1
    - name: AWS_ROLE_ARN
      value: arn:aws:iam::123456789123:role/aws-eks-irsa-aws-node-irsa
    - name: AWS_WEB_IDENTITY_TOKEN_FILE
      value: /var/run/secrets/eks.amazonaws.com/serviceaccount/token
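The webhook also mounts the projected service account token at the path referenced by AWS_WEB_IDENTITY_TOKEN_FILE (the volume definition is not shown above). Inside any container that uses the annotated service account and has an AWS SDK or the CLI available, the default credential chain picks these variables up automatically; one way to verify is to check the caller identity, which should report the IRSA role rather than the node's instance role:
# From a container using the annotated service account (and with the AWS CLI installed)
aws sts get-caller-identity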
IRSA approach limitations
There can be a multitude of issues while configuring IRSA, e.g. errors in the trust policy specifying the OIDC provider, or errors in the ServiceAccount annotation (the role ARN). Also, because the trust policy references a specific cluster's OIDC provider, an IAM role was effectively restricted to a single EKS cluster.
Pod Identity (2023–now)
Pod Identity is the easiest way to grant Kubernetes pods access to AWS services. AWS EKS Pod Identity, an evolution of IRSA, simplifies IAM permission management for EKS clusters by removing the OIDC limitations, shifting to a unified trust-policy principal (pods.eks.amazonaws.com), supporting STS session tags for granular access control, and separating AWS and Kubernetes API concerns, which means, for example, that you can use a single IAM role across multiple clusters.
How does it work?

- Install the Amazon EKS Pod Identity Agent add-on. The agent exposes an API on 169.254.170.23 (a link-local address) on port 80, which pods discover through these environment variables:
AWS_CONTAINER_CREDENTIALS_FULL_URI = http://169.254.170.23/v1/credentials
AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE = /var/run/secrets/pods.eks.amazonaws.com/serviceaccount/eks-pod-identity-token
- Modify the launch template of the worker nodes to set the IMDSv2 hop limit to 2.
- Create an IAM role that can be assumed by the EKS service principal pods.eks.amazonaws.com.
- Create an association between the IAM role and the Kubernetes service account.
Installing the agent and creating the association looks like this:
aws eks create-addon --cluster-name eks-pod-identity-demo --addon-name eks-pod-identity-agent
aws eks wait addon-active --cluster-name eks-pod-identity-demo --addon-name eks-pod-identity-agent
kubectl -n kube-system get daemonset eks-pod-identity-agent
aws eks create-pod-identity-association \
  --cluster-name eks-pod-identity-demo \
  --namespace demo-ns \
  --service-account demo-sa \
  --role-arn $IAM_ROLE_ARN
The IAM role's trust policy allows the pods.eks.amazonaws.com service principal to assume the role and tag the session:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "pods.eks.amazonaws.com"
      },
      "Action": [
        "sts:AssumeRole",
        "sts:TagSession"
      ]
    }
  ]
}
Then we deploy a test pod:
apiVersion: v1
kind: Namespace
metadata:
  name: demo-ns
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: demo-sa
  namespace: demo-ns
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
  namespace: demo-ns
spec:
  containers:
  - name: aws-cli
    image: amazon/aws-cli:latest
    command: ['sleep', '36000']
  restartPolicy: Never
  serviceAccountName: demo-sa
Mutated Pod definition (excerpt):
- env:
  - name: AWS_STS_REGIONAL_ENDPOINTS
    value: regional
  - name: AWS_DEFAULT_REGION
    value: us-west-2
  - name: AWS_REGION
    value: us-west-2
  - name: AWS_CONTAINER_CREDENTIALS_FULL_URI
    value: http://169.254.170.23/v1/credentials
  - name: AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE
    value: /var/run/secrets/pods.eks.amazonaws.com/serviceaccount/eks-pod-identity-token
  volumeMounts:
  - mountPath: /var/run/secrets/pods.eks.amazonaws.com/serviceaccount
    name: eks-pod-identity-token
    readOnly: true
volumes:
- name: eks-pod-identity-token
  projected:
    defaultMode: 420
    sources:
    - serviceAccountToken:
        audience: pods.eks.amazonaws.com
        expirationSeconds: 86400
        path: eks-pod-identity-token
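Inside the container, the SDK's container credential provider calls the agent with the projected token in the Authorization header. The exchange can be reproduced by hand (a sketch, assuming curl is available in the image), and the resulting identity can be checked with the AWS CLI that ships in the demo image:
# Fetch temporary credentials from the Pod Identity Agent using the projected token
TOKEN=$(cat "${AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE}")
curl -s -H "Authorization: ${TOKEN}" "${AWS_CONTAINER_CREDENTIALS_FULL_URI}"
# Or simply confirm the assumed role from outside the pod
kubectl -n demo-ns exec -it demo-pod -- aws sts get-caller-identity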
Important things to note
- Pods must use an AWS SDK version that supports assuming an IAM role from the EKS Pod Identity Agent.
- The EKS Pod Identity Agent runs as a DaemonSet pod on every eligible worker node. The agent is made available as an EKS add-on and is a prerequisite for using the EKS Pod Identity feature.
- The EKS Pod Identity webhook runs on the Amazon EKS cluster’s control plane and mutates pods that use an associated service account.
- Worker nodes must be Linux Amazon EC2 instances.
- AWS Fargate is not supported.
- You can associate only one IAM role with a Kubernetes service account using the EKS Pod Identity feature.
Differences from IRSA


In the case of IRSA, the number of service accounts/clusters per role is limited (to roughly 10) by the maximum size of the role's trust policy. In the case of Pod Identity, there is no such limit, since the trust policy only names the pods.eks.amazonaws.com principal. Also, Pod Identity supports attribute-based access control (ABAC) through STS session tags, while IRSA does not.
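A sketch of what that ABAC support enables, assuming the kubernetes-namespace session tag that Pod Identity attaches to the role session and a hypothetical bucket layout keyed by namespace:
{
  "Effect": "Allow",
  "Action": "s3:GetObject",
  "Resource": "arn:aws:s3:::my-app-bucket/${aws:PrincipalTag/kubernetes-namespace}/*"
}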