Harden Kubernetes Access Security with RBAC and AWS IAM
In this Collective Talk, we learned how to secure a Kubernetes cluster installed on AWS infrastructure.
We answered questions like:
- How does Kubernetes use RBAC to secure API access?
- What are Kubernetes Service Accounts?
- How do you integrate AWS IAM with Kubernetes authentication via aws-iam-authenticator?
- How do you update your kubeconfig to properly utilize your aws-cli config setup?
- How do you securely access multiple Kubernetes clusters from the same local machine?
This was the 3rd Collective Talk of the Cloud Native Series scheduled from September 2018 to November 2018.
Recap
Command Reference
Prerequisites:
1. awscli, go, and kubectl are installed on the local machine.
$ brew update
$ brew install awscli
$ brew install go
$ brew install kubernetes-cli
2. a Kubernetes cluster (let’s say k8s.ennate.academy) with the aws-iam-authenticator daemonset is already installed on AWS using kops. (How to Install k8s using kops?)
# make sure the kops Cluster spec has the following properties.
spec:
  ...
  authentication:
    aws: {}
  authorization:
    rbac: {}
  cloudProvider: aws
  ...
3. a k8s namespace is already created
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: dev
EOF
4. [optional] deploy Kubernetes Dashboard
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
# access k8s dashboard via proxy
$ kubectl proxy
$ open http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
AWS IAM Configuration
1. create a new IAM role (e.g. k8s.ennate.academy-developer) in the AWS Account where the k8s cluster is hosted:
Type of Trusted Entity: Another AWS Account
Account ID: YOUR_AWS_ACCOUNT_ID
Permissions: None (skip it)
Name: k8s.ennate.academy-developer
2. create a new IAM Policy in the AWS account where the IAM users are created
Service: STS
Action: AssumeRole
Role: ARN_OF_THE_ROLE_CREATED_IN_STEP_1
Name: k8s.ennate.academy-developer-policy
3. assign this policy to the IAM users who want to assume the developer role.
4. add IAM User’s ARN to the Trusted Entities list of the IAM Role created in step 1.
5. [optional] repeat steps 1-4 for more roles (e.g. admin, devops, readonly etc.)
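For reference, the role's trust relationship (steps 1 and 4) and the user-side policy (step 2) look roughly like this in JSON; the account ID, user name, and role ARN are placeholders, not values from the talk:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::YOUR_AWS_ACCOUNT_ID:user/SOME_IAM_USER" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

and the policy attached to the IAM users:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "ARN_OF_THE_ROLE_CREATED_IN_STEP_1"
    }
  ]
}
```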
Kubernetes Configuration
1. apply the following ConfigMap for the aws-iam-authenticator (make sure to replace the ROLE_ARN values in the config)
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: kube-system
  name: aws-iam-authenticator
  labels:
    k8s-app: aws-iam-authenticator
data:
  config.yaml: |
    # a unique-per-cluster identifier to prevent replay attacks
    # (good choices are a random token or a domain name that will be unique to your cluster)
    clusterID: k8s.ennate.academy
    server:
      # each mapRoles entry maps an IAM role to a username and set of groups
      # Each username and group can optionally contain template parameters:
      # 1) "{{AccountID}}" is the 12 digit AWS ID.
      # 2) "{{SessionName}}" is the role session name.
      mapRoles:
      - roleARN: <ROLE_ARN1>
        username: k8s.ennate.academy-admin:{{AccountID}}:{{SessionName}}
        groups:
        - system:masters
      - roleARN: <ROLE_ARN2>
        username: k8s.ennate.academy-developer:{{AccountID}}:{{SessionName}}
        groups:
        - k8s.ennate.academy-developer
# developer specific role binding
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: k8s.ennate.academy-developer-dev-binding
  namespace: dev
subjects:
- kind: Group
  name: k8s.ennate.academy-developer
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: admin
  apiGroup: rbac.authorization.k8s.io
EOF
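Binding the group to the built-in admin ClusterRole grants broad rights within the dev namespace. If you want something tighter, you can bind a custom namespaced Role instead; here is a minimal read-only sketch (the Role name, resources, and verbs are illustrative, not from the talk):

```yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: dev-read-only   # hypothetical name for illustration
  namespace: dev
rules:
- apiGroups: ["", "apps", "extensions"]
  resources: ["pods", "services", "deployments", "replicasets"]
  verbs: ["get", "list", "watch"]
```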
2. If aws-iam-authenticator doesn’t pick up the ConfigMap automatically, kill the pods; the daemonset will recreate them.
$ kubectl -n kube-system delete pods -l k8s-app=aws-iam-authenticator
Local Machine Configuration
1. install the aws-iam-authenticator binary on the local machine
$ go get -u -v github.com/kubernetes-sigs/aws-iam-authenticator/cmd/aws-iam-authenticator
2. add ~/go/bin to the PATH environment variable
$ export PATH="$HOME/go/bin:$PATH"
3. update ~/.kube/config with the following two sections:
# add new context (e.g. developer) under the "contexts" section
- context:
    cluster: k8s.ennate.academy
    namespace: dev
    user: k8s.ennate.academy-developer
  name: k8s.ennate.academy-dev

# add new user (e.g. k8s.ennate.academy-developer) under the "users" section
# make sure to replace ROLE_ARN with correct value
- name: k8s.ennate.academy-developer
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
      - token
      - -i
      - k8s.ennate.academy
      - -r
      - <ROLE_ARN>
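The same pattern extends to multiple clusters, which answers the talk's last question: each cluster gets its own context, user entry, and clusterID, so switching clusters is just a context switch with kubectl config use-context. A sketch of the contexts section with a hypothetical second cluster (k8s.example.org is a placeholder):

```yaml
contexts:
- context:
    cluster: k8s.ennate.academy
    namespace: dev
    user: k8s.ennate.academy-developer
  name: k8s.ennate.academy-dev
- context:
    cluster: k8s.example.org          # hypothetical second cluster
    namespace: dev
    user: k8s.example.org-developer   # its own exec-based user entry
  name: k8s.example.org-dev
```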
4. test the setup after changing the context to k8s.ennate.academy-dev
$ kubectl config use-context k8s.ennate.academy-dev
# should not throw a permission error (the developer group has admin access in the dev namespace)
$ kubectl get pods
# should throw a permission error (nodes are cluster-scoped, outside the dev namespace)
$ kubectl get nodes