Provide SSO authenticated IAM user access to Kubernetes cluster

When an EKS cluster is created by an IAM user or role other than your own (for example, a service account), you might encounter an error like this:

"kubectl error: You must be logged in to the server (Unauthorized) when accessing EKS cluster"

This is because when an EKS cluster is created, only the IAM user or role that created it is bound to the cluster and granted access.

To grant other users access to the EKS cluster, you need to add their role to the aws-auth ConfigMap in the kube-system namespace.
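For reference, on a freshly created cluster the aws-auth ConfigMap typically contains only the node instance role mapping, something like the sketch below (the node role name is illustrative, not from this cluster):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    # Illustrative node instance role; your role name will differ
    - rolearn: arn:aws:iam::<ACCOUNTID>:role/eks-node-instance-role
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
```

The entry we add for the SSO role later in this post sits alongside this node mapping in the same mapRoles list.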

First, switch to an IAM user that has access to the cluster using the following command.

aws configure --profile <PROFILE_NAME>

Note: I will be using a profile named corp for the IAM user; it is configured inside the ~/.aws/credentials file.

Once the credentials are configured, running kubectl get nodes should show the cluster's nodes.

Now, let's add the SSO-authenticated IAM user to the EKS cluster's RBAC.

First, get the ARN of the role that the SSO-authenticated user is assuming. To find which role the user is assuming, run the following command:

aws sts get-caller-identity


{
    "UserId": "AROAACBDHTN34QXZV7HL:prabesh",
    "Account": "<ACCOUNTID>",
    "Arn": "arn:aws:sts::<ACCOUNTID>:assumed-role/AWSReservedSSO_AdministratorAccess_4b272cfed6132d4f/prabesh"
}


From the above output, we can see the role is AWSReservedSSO_AdministratorAccess. Open that role in the IAM console and copy its Role ARN.
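The role name can also be pulled straight out of the assumed-role ARN in the shell, without opening the console. A minimal sketch, using the example ARN above (the account ID is a placeholder):

```shell
# Example assumed-role ARN as returned by `aws sts get-caller-identity`
STS_ARN="arn:aws:sts::123456789012:assumed-role/AWSReservedSSO_AdministratorAccess_4b272cfed6132d4f/prabesh"

# The role name is the path segment between "assumed-role/" and the session name
ROLE_NAME=$(echo "$STS_ARN" | cut -d'/' -f2)
echo "$ROLE_NAME"   # AWSReservedSSO_AdministratorAccess_4b272cfed6132d4f
```

You can then search for that role name in IAM to get the full Role ARN.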


Now, edit the aws-auth ConfigMap in the kube-system namespace using the following command to grant the user access to the cluster.

kubectl edit -n kube-system configmap/aws-auth

Once you run it, the ConfigMap will open as a YAML file in your editor. Add the following entry to the mapRoles section, pasting the role ARN that we copied from IAM.

- rolearn: arn:aws:iam::<AccountID>:role/AWSReservedSSO_AdministratorAccess_4b272cfed62352d4f
  username: <USERNAME>:{{SessionName}}
  groups:
    - system:masters

Note: the /aws-reserved/sso.amazonaws.com/ path must be removed from the role ARN, because aws-auth does not support role ARNs that contain a path.
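In shell terms, the transformation looks like this; a sketch with a placeholder account ID:

```shell
# Role ARN as copied from the IAM console (contains the reserved SSO path)
IAM_ARN="arn:aws:iam::123456789012:role/aws-reserved/sso.amazonaws.com/AWSReservedSSO_AdministratorAccess_4b272cfed6132d4f"

# aws-auth cannot match role ARNs that contain a path, so strip it out
CLEAN_ARN=$(echo "$IAM_ARN" | sed 's|/aws-reserved/sso.amazonaws.com||')
echo "$CLEAN_ARN"   # arn:aws:iam::123456789012:role/AWSReservedSSO_AdministratorAccess_4b272cfed6132d4f
```

The cleaned ARN is the value that goes into the rolearn field of the mapRoles entry.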

Once you have added the entry, save the ConfigMap YAML file.

Now switch back to the regular IAM user using the aws configure command, entering your access key and secret key. Once that's done, run the following command to update your kubeconfig file. If everything is working, you will see a message stating that the config has been updated.

aws eks update-kubeconfig --name eks-cluster --region us-west-2 --profile corp

Now run "kubectl get nodes"; you should start seeing the cluster's nodes.




Prabesh Thapa

DevOps / SRE Engineer. Blog: System admin turned SRE. I love Linux.