To be completely honest, this article grew out of some troubleshooting frustration, so hopefully it will save others some headaches.
The scenario: after having configured an EKS cluster, I wanted to grant access to additional IAM users. After creating a new IAM user that belonged to the intended IAM groups, the following errors were thrown in the CLI:
kubectl get svc
error: the server doesn't have a resource type "svc"
kubectl get nodes
error: You must be logged in to the server (Unauthorized)
AWS profile config
First configure your local AWS profile. This is also useful if you want to test for different users and roles.
# aws configure --profile <profile-name> # for example: aws configure --profile dev
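For reference, the command walks you through an interactive prompt similar to the following (the keys are redacted and the region/output values are just examples):

aws configure --profile dev
AWS Access Key ID [None]: REDACTED
AWS Secret Access Key [None]: REDACTED
Default region name [None]: eu-west-1
Default output format [None]: json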
If this is your first time, this will generate two files,
~/.aws/config
and ~/.aws/credentials
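For illustration, assuming a profile named "dev", the generated files would look roughly like this (keys redacted, region just an example):

~/.aws/config:
[profile dev]
region = eu-west-1
output = json

~/.aws/credentials:
[dev]
aws_access_key_id = REDACTED
aws_secret_access_key = REDACTED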
Subsequent runs will simply append to them, which means you can also edit the files manually if you prefer. The way you alternate between these profiles in the CLI is:
# export AWS_PROFILE=<profile-name> # for example: export AWS_PROFILE=dev
Now before you move on to the next section, validate that you are referencing the correct user or role in your local aws configuration:
# aws --profile <profile-name> sts get-caller-identity
# for example: aws --profile dev sts get-caller-identity
{
    "Account": "REDACTED",
    "UserId": "REDACTED",
    "Arn": "arn:aws:iam:::user/john.doe"
}
Validate AWS permissions
Validate that your user has the correct permissions; in particular, make sure you can describe the cluster:
# aws eks describe-cluster --name=<cluster-name> # for example: aws eks describe-cluster --name=eks-dev
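If the call succeeds, the (trimmed) output contains the endpoint and certificate data that will later show up in the kube configuration, roughly along these lines:

{
    "cluster": {
        "name": "eks-dev",
        "endpoint": "https://REDACTED.eks.amazonaws.com",
        "certificateAuthority": {
            "data": "REDACTED"
        },
        "status": "ACTIVE",
        ...
    }
}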
Add IAM users/roles to cluster config
If you managed to add worker nodes to your EKS cluster, then this part should already be familiar. The AWS documentation describes how to map additional IAM users/roles via the aws-auth ConfigMap; after editing it to include your new IAM user, apply it:
kubectl apply -f aws-auth-cm.yaml
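For reference, a minimal aws-auth-cm.yaml that maps an extra IAM user could look like the following sketch (the worker node role ARN and the group membership are placeholders you need to adapt):

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <ARN of the worker node instance role>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
  mapUsers: |
    - userarn: arn:aws:iam:::user/john.doe
      username: john.doe
      groups:
        - system:masters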
While troubleshooting I saw some people trying to use the cluster's role in the "-r" part. However, you cannot assume a role used by the cluster, as this role is reserved/trusted for instances. You need to create your own role, add the root account as a trusted entity, and add permission for the user/group to assume it, for example as follows:
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": "eks.amazonaws.com", "AWS": "arn:aws:iam:::user/john.doe" }, "Action": "sts:AssumeRole" } ] }
Kubernetes local config
Then, generate a new kube configuration file. Note that the following command will create a new file in ~/.kube/config
aws --profile=dev eks update-kubeconfig --name eks-dev
AWS suggests isolating your configuration in a file named "config-<cluster-name>". So, assuming our cluster name is "eks-dev", then:
export KUBECONFIG=~/.kube/config-eks-dev
aws --profile=dev eks update-kubeconfig --name eks-dev
This will then create the config file in ~/.kube/config-eks-dev rather than in ~/.kube/config
As described in the AWS documentation, your kube configuration should look similar to the following:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <certificateAuthority.data from describe-cluster>
    server: <endpoint from describe-cluster>
  name: <cluster-name>
contexts:
- context:
    cluster: <cluster-name>
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
        - token
        - -i
        - <cluster-name>
      command: heptio-authenticator-aws
      env:
        - name: AWS_PROFILE
          value: <profile-name>
If you want to make sure you are using the correct configuration:
export KUBECONFIG=~/.kube/config-eks-dev
kubectl config current-context
This will print whatever alias you gave in the config file (in the example above, "aws").
Last but not least, update the new config file to set the profile used (the AWS_PROFILE environment variable in the users section).
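For example, with the "dev" profile used throughout this article, the env part of the users section would become:

env:
  - name: AWS_PROFILE
    value: dev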
The final step is to confirm you have the expected permissions:
export KUBECONFIG=~/.kube/config-eks-dev
kubectl auth can-i get pods
# Ideally you get "yes" as the answer.
kubectl get svc
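If everything is wired correctly, kubectl get svc should list at least the default kubernetes service, along these lines (the cluster IP and age will differ):

NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   1h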
Troubleshooting
To make sure you are not working in an environment with hidden AWS environment variables that you are not aware of and that may conflict, unset them as follows:
unset AWS_ACCESS_KEY_ID && unset AWS_SECRET_ACCESS_KEY && unset AWS_SESSION_TOKEN
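You can also double-check that nothing AWS-related is still lingering in your shell, for example:

env | grep AWS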
Also, if you are still getting an authentication error at this point, check whether you are specifying the "-r" flag in your kube config file; this flag should only be used for roles.
Hopefully this short article was enough to unblock you, but in case not, here is a collection of further potentially useful articles: