Harden Kubernetes Access Security with RBAC and AWS IAM

Presented by

Praveen Salitra

Presented on

Oct 11, 2018

  • Cloud
  • Continuous Delivery
  • Security
  • Technical Deep Dive

In this Collective Talk, we learned how to secure a Kubernetes cluster installed on AWS infrastructure.

We answered questions like:

  • How does Kubernetes use RBAC to secure API access?
  • What are Kubernetes Service Accounts?
  • How do you integrate AWS IAM with the Kubernetes auth controller via aws-iam-authenticator?
  • How do you update your kubeconfig to properly utilize your aws-cli config setup?
  • How do you securely access multiple Kubernetes clusters from the same local machine?

This was the 3rd Collective Talk of the Cloud Native Series scheduled from September 2018 to November 2018.


Command Reference


1. Make sure awscli, go, and kubectl are installed on the local machine.

$ brew update
$ brew install awscli
$ brew install go
$ brew install kubernetes-cli

2. a Kubernetes cluster (let’s say k8s.ennate.academy) with the aws-iam-authenticator daemonset is already installed on AWS using kops. (How to Install k8s using kops?)

# make sure the kops Cluster spec has the following properties.
  authentication:
    aws: {}
  authorization:
    rbac: {}
  cloudProvider: aws
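If the spec still needs these properties, it can be edited and rolled out with kops (a sketch, using the example cluster name from above):

```shell
# open the Cluster spec in an editor, then apply the change
$ kops edit cluster k8s.ennate.academy
$ kops update cluster k8s.ennate.academy --yes
```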

3. a k8s namespace is already created

$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: dev
EOF

4. [optional] deploy Kubernetes Dashboard

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

# access k8s dashboard via proxy
$ kubectl proxy
$ open http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

AWS IAM Configuration

1. create a new IAM role (e.g. k8s.ennate.academy-developer) in the AWS Account where the k8s cluster is hosted:

Type of Trusted Entity: Another AWS Account
Permissions: None (skip it)
Name: k8s.ennate.academy-developer

2. create a new IAM Policy in the AWS account where the IAM users are created

Service: STS
Action: AssumeRole

3. assign this policy to the IAM users who want to assume the developer role.

4. add IAM User’s ARN to the Trusted Entities list of the IAM Role created in step 1.

5. [optional] repeat steps 1-4 for more roles (e.g. admin, devops, readonly etc.)
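For reference, the two JSON documents produced by steps 1–4 might look like the following sketch. The account ID 111122223333 and the user name alice are placeholders; substitute your own values.

```
# trust policy on the IAM Role (controls who may assume it, step 4)
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "AWS": "arn:aws:iam::111122223333:user/alice" },
    "Action": "sts:AssumeRole"
  }]
}

# policy attached to the IAM users (permission to call AssumeRole, steps 2-3)
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": "sts:AssumeRole",
    "Resource": "arn:aws:iam::111122223333:role/k8s.ennate.academy-developer"
  }]
}
```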

Kubernetes Configuration

1. apply the following ConfigMap for aws-iam-authenticator (make sure to replace the ROLE_ARN values in the config)

$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: kube-system
  name: aws-iam-authenticator
  labels:
    k8s-app: aws-iam-authenticator
data:
  config.yaml: |
    # a unique-per-cluster identifier to prevent replay attacks
    # (good choices are a random token or a domain name that will be unique to your cluster)
    clusterID: k8s.ennate.academy
    server:
      # each mapRoles entry maps an IAM role to a username and set of groups
      # Each username and group can optionally contain template parameters:
      #  1) "{{AccountID}}" is the 12 digit AWS ID.
      #  2) "{{SessionName}}" is the role session name.
      mapRoles:
      - roleARN: <ROLE_ARN1>
        username: k8s.ennate.academy-admin:{{AccountID}}:{{SessionName}}
        groups:
        - system:masters
      - roleARN: <ROLE_ARN2>
        username: k8s.ennate.academy-developer:{{AccountID}}:{{SessionName}}
        groups:
        - k8s.ennate.academy-developer
EOF

# developer specific role binding
$ cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: k8s.ennate.academy-developer-dev-binding
  namespace: dev
subjects:
- kind: Group
  name: k8s.ennate.academy-developer
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: admin
  apiGroup: rbac.authorization.k8s.io
EOF

2. If aws-iam-authenticator doesn’t pick up the ConfigMap automatically, delete its pods; the daemonset will recreate them.

$ kubectl -n kube-system delete pods -l k8s-app=aws-iam-authenticator
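To confirm the authenticator restarted with the new config, its logs can be checked (a sketch; the label selector matches the daemonset used above):

```shell
$ kubectl -n kube-system logs -l k8s-app=aws-iam-authenticator
```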

Local Machine Configuration

1. install aws-iam-authenticator binary on the local machine

$ go get -u -v github.com/kubernetes-sigs/aws-iam-authenticator/cmd/aws-iam-authenticator

2. export ~/go/bin path to the environment variable

$ export PATH="$HOME/go/bin:$PATH"
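To make this persist across shell sessions, the export can be appended to the shell profile (assuming bash, the macOS default at the time; adjust for your shell):

```shell
$ echo 'export PATH="$HOME/go/bin:$PATH"' >> ~/.bash_profile
```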

3. update ~/.kube/config with the following two sections:

# add new context (e.g. developer)
- context:
    cluster: k8s.ennate.academy
    namespace: dev
    user: k8s.ennate.academy-developer
  name: k8s.ennate.academy-dev

# add new user (e.g. k8s.ennate.academy-developer)
# make sure to replace ROLE_ARN with correct value
- name: k8s.ennate.academy-developer
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
      - token
      - -i
      - k8s.ennate.academy
      - -r
      - <ROLE_ARN>
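As a sanity check, the exec plugin can be invoked by hand with the same arguments kubectl will use (requires local AWS credentials allowed to assume the role; use the same ROLE_ARN as above). It should print an ExecCredential JSON containing a bearer token:

```shell
$ aws-iam-authenticator token -i k8s.ennate.academy -r <ROLE_ARN>
```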

4. test the setup after changing the context to k8s.ennate.academy-dev

$ kubectl config use-context k8s.ennate.academy-dev

# should not throw permission error
$ kubectl get pods

# should throw permission error
$ kubectl get nodes
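The same expectations can also be checked explicitly with kubectl auth can-i (a sketch, run in the k8s.ennate.academy-dev context):

```shell
# should print "yes": the developer group has the admin ClusterRole bound in dev
$ kubectl auth can-i list pods --namespace dev

# should print "no": no cluster-wide permissions were granted
$ kubectl auth can-i list nodes
```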
