Provide SSO authenticated IAM user access to Kubernetes cluster

When an EKS cluster is created by an IAM user or role other than your own (for example, a service account), you might encounter an error like this when you try to access it:

“kubectl error You must be logged in to the server (Unauthorized) when accessing EKS cluster”

This is because when an EKS cluster is created, only one IAM user/role is bound to that cluster and granted access: the one that created it.

In order to give another user access to that EKS cluster, you need to add their role to the aws-auth ConfigMap in the kube-system namespace.
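For context, a freshly created cluster's aws-auth ConfigMap typically contains only the node instance role mapping, which is why no other identity can authenticate yet. The values below are generic placeholders, not taken from this cluster:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    # Mapping created automatically for worker nodes to join the cluster
    - rolearn: arn:aws:iam::<ACCOUNTID>:role/<NodeInstanceRole>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
```

Any user or role you want to grant access gets appended to this mapRoles list.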

First, switch to the IAM user that has access to the cluster using the following command.

aws configure --profile <PROFILE_NAME>

Note: I will be using a profile named corp for the IAM user; it is configured in the ~/.aws/credentials file.

Once credentials are configured, running kubectl get nodes should show the cluster nodes.

Now let's add the IAM user authenticated via SSO to the EKS cluster's RBAC.

First, get the ARN of the role that the SSO-authenticated user is assuming from IAM in the AWS console. To find which role the user is assuming, run the following command:

aws sts get-caller-identity


{
  "UserId": "AROAACBDHTN34QXZV7HL:prabesh",
  "Account": "<ACCOUNTID>",
  "Arn": "arn:aws:sts::<ACCOUNTID>:assumed-role/AWSReservedSSO_AdministratorAccess_4b272cfed6132d4f/prabesh"
}


From the above output, we can see the role is AWSReservedSSO_AdministratorAccess_4b272cfed6132d4f. Open that role in the IAM console and copy its ARN.
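If you prefer to stay in the terminal, the role name can also be pulled out of the assumed-role ARN directly. The ARN below is a placeholder shaped like the output above:

```shell
# Assumed-role ARNs have the form
# arn:aws:sts::<ACCOUNTID>:assumed-role/<ROLE_NAME>/<SESSION_NAME>
# so the role name is the second "/"-separated field.
ARN="arn:aws:sts::123456789012:assumed-role/AWSReservedSSO_AdministratorAccess_4b272cfed6132d4f/prabesh"
ROLE_NAME=$(echo "$ARN" | cut -d/ -f2)
echo "$ROLE_NAME"
```

In practice you would feed in `aws sts get-caller-identity --query Arn --output text` instead of the hard-coded variable.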


Now, edit the aws-auth ConfigMap in the kube-system namespace using the following command to give the user access to the cluster.

kubectl edit -n kube-system configmap/aws-auth

Once you run it, a YAML file will open in your editor. Add the following entry in the mapRoles section, pasting the role ARN that we copied from IAM.

- rolearn: arn:aws:iam::<AccountID>:role/AWSReservedSSO_AdministratorAccess_4b272cfed62352d4f
  username: <USERNAME>:{{SessionName}}
  groups:
    - system:masters

Note: the aws-reserved/sso.amazonaws.com/ path segment must be removed from the role ARN, because the aws-auth ConfigMap does not support role paths.
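As a quick sketch, stripping that path segment can be scripted rather than edited by hand; the ARN here is a placeholder:

```shell
# SSO role ARNs copied from IAM include a path that aws-auth rejects:
#   .../role/aws-reserved/sso.amazonaws.com/<ROLE_NAME>
# Strip the path so only "role/<ROLE_NAME>" remains.
FULL_ARN="arn:aws:iam::123456789012:role/aws-reserved/sso.amazonaws.com/AWSReservedSSO_AdministratorAccess_4b272cfed6132d4f"
CLEAN_ARN=$(echo "$FULL_ARN" | sed 's|role/aws-reserved/sso.amazonaws.com/|role/|')
echo "$CLEAN_ARN"
```

The cleaned ARN is what goes into the rolearn field of the mapRoles entry.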

Once you add the entry, save the ConfigMap YAML file.

Now switch to the regular IAM user using the aws configure command and enter your access key and secret key. Once that's done, run the following command to update your kubeconfig file. If everything is working, you will get a message stating that the config has been updated.

aws eks update-kubeconfig --name eks-cluster --region us-west-2 --profile corp

Now run the command “kubectl get nodes”; you should start seeing the cluster nodes.
