Set Up a Highly Available Kubernetes Cluster on AWS Using kops
In this Collective Talk, we learned how to set up Kubernetes on the local machine, and then walked through a production-ready setup of Kubernetes on AWS infrastructure.
We answered questions like:
- How to set up Kubernetes on the local machine?
- What components need to be installed when setting up Kubernetes?
- How to set up a Kubernetes cluster on AWS using kops?
- How to make a Kubernetes cluster highly available (HA) on AWS?
- How does a kops setup compare to Amazon's managed service, EKS?
- What are some important things to keep in mind while working with kops and Kubernetes?
This was the second Collective Talk of the Cloud Native Series, which ran from September 2018 to November 2018.
Recap
Command Reference
Prerequisites:
$ brew update
$ brew install awscli
$ brew install kubernetes-cli
$ brew install kops
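A quick way to confirm the tools are on the PATH (the kubernetes-cli formula installs the kubectl binary):
$ aws --version
$ kubectl version --client
$ kops version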
Local Setup + Test
1. enable Kubernetes in the Docker Desktop preferences.
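Once Kubernetes finishes starting, point kubectl at the local cluster and check that the node is Ready. The context name below is an assumption; depending on the Docker version it may be docker-for-desktop or docker-desktop:
$ kubectl config use-context docker-for-desktop
$ kubectl get nodes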
2. deploy an nginx-based image
$ kubectl run nginx-app \
--replicas=2 \
--image=salitrapraveen/whale \
--port=80
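Optionally, confirm that the deployment came up with both replicas before exposing it:
$ kubectl get deployment nginx-app
$ kubectl get pods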
3. expose the deployment to the host
$ kubectl expose deployment nginx-app \
--type=NodePort \
--name=nginx-app
4. get the node port from the following command and open localhost:<NODE_PORT> in the browser
$ kubectl get service nginx-app
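If you only need the port number itself, a jsonpath query can pull it out directly (this assumes the service exposes a single port):
$ kubectl get service nginx-app -o jsonpath='{.spec.ports[0].nodePort}'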
Cluster Setup on AWS
1. set env vars for the k8s cluster name and the state-store bucket name:
$ export K8S_CLUSTER_NAME=k8s.ennate.academy
$ export S3_BUCKET_NAME=k8s-ennate-academy-state-store
2. create an AWS IAM user with programmatic access and attach the following managed policies:
AmazonEC2FullAccess
AmazonRoute53FullAccess
AmazonS3FullAccess
IAMFullAccess
AmazonVPCFullAccess
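The same step can also be scripted with the AWS CLI. The sketch below attaches the policies to a dedicated group and user, both named kops, which is purely an assumed naming convention:
$ aws iam create-group --group-name kops
$ aws iam attach-group-policy --group-name kops --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess
$ aws iam attach-group-policy --group-name kops --policy-arn arn:aws:iam::aws:policy/AmazonRoute53FullAccess
$ aws iam attach-group-policy --group-name kops --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
$ aws iam attach-group-policy --group-name kops --policy-arn arn:aws:iam::aws:policy/IAMFullAccess
$ aws iam attach-group-policy --group-name kops --policy-arn arn:aws:iam::aws:policy/AmazonVPCFullAccess
$ aws iam create-user --user-name kops
$ aws iam add-user-to-group --user-name kops --group-name kops
$ aws iam create-access-key --user-name kops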
3. configure the aws-cli with the access key ID and secret access key of the user created in the step above:
$ aws configure
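Since the cluster name doubles as the DNS zone in the kops commands further down, it can also help to confirm that a matching Route 53 hosted zone already exists (this assumes the zone has been created and delegated beforehand):
$ aws route53 list-hosted-zones-by-name --dns-name $K8S_CLUSTER_NAME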
4. create S3 bucket:
$ aws s3api create-bucket \
--bucket $S3_BUCKET_NAME \
--region us-east-1
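The call above works as-is for us-east-1; for any other region the bucket also needs an explicit location constraint (us-east-2 below is just an example region):
$ aws s3api create-bucket \
--bucket $S3_BUCKET_NAME \
--region us-east-2 \
--create-bucket-configuration LocationConstraint=us-east-2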
5. enable bucket versioning:
$ aws s3api put-bucket-versioning \
--bucket $S3_BUCKET_NAME \
--versioning-configuration Status=Enabled
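To double-check that versioning took effect:
$ aws s3api get-bucket-versioning --bucket $S3_BUCKET_NAME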
6. set env var for state store:
$ export KOPS_STATE_STORE=s3://$S3_BUCKET_NAME
7. dry-run cluster with default config:
$ kops create cluster \
--zones us-east-1a \
--name $K8S_CLUSTER_NAME \
--dry-run -oyaml
8. dry-run cluster with single master:
$ kops create cluster \
--dns-zone $K8S_CLUSTER_NAME \
--zones "us-east-1a" \
--master-size m5.large \
--master-count 1 \
--node-size m5.large \
--node-count 2 \
--image "kope.io/k8s-1.10-debian-stretch-amd64-hvm-ebs-2018-05-27" \
--networking kube-router \
--topology private \
--bastion \
--name $K8S_CLUSTER_NAME \
--dry-run -oyaml
9. dry-run a multi-master (highly available) cluster and save the output to a YAML file:
$ kops create cluster \
--dns-zone $K8S_CLUSTER_NAME \
--zones us-east-1a,us-east-1b,us-east-1c \
--master-size m5.large \
--master-count 3 \
--node-size m5.large \
--node-count 6 \
--image "kope.io/k8s-1.10-debian-stretch-amd64-hvm-ebs-2018-05-27" \
--networking kube-router \
--topology private \
--bastion \
--name $K8S_CLUSTER_NAME \
--dry-run -oyaml > cluster.yaml
10. create the kops config from the cluster manifest YAML file:
$ kops create -f cluster.yaml
11. set up SSH access to the AWS k8s cluster:
$ kops create secret --name $K8S_CLUSTER_NAME sshpublickey admin -i ~/.ssh/id_rsa.pub
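If there is no RSA key pair at that path yet, one can be generated first (the path here simply matches the one assumed by the command above):
$ ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa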
12. apply the configuration to the AWS cluster:
$ kops update cluster $K8S_CLUSTER_NAME --yes
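kops can also show the planned changes without touching AWS; running the same command without --yes prints a preview of what would be created:
$ kops update cluster $K8S_CLUSTER_NAME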
13. validate the cluster:
$ kops validate cluster
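Validation usually fails for the first several minutes while the EC2 instances boot and register, so re-run it until it passes; once it does, the nodes can also be inspected directly:
$ kubectl get nodes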
14. access the cluster via bastion:
# verify that your SSH public key is loaded in the `ssh-agent`
$ ssh-add -L
# add your public key to the ssh-agent
$ ssh-add ~/.ssh/id_rsa
# SSH into the bastion
$ ssh -A admin@bastion.$K8S_CLUSTER_NAME
# and then from the bastion shell, ssh to any node in the cluster
$ ssh admin@<master_or_node_ip>
15. delete the cluster:
$ kops delete cluster $K8S_CLUSTER_NAME --yes
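As with update, omitting --yes first gives a dry run that lists the AWS resources kops would remove:
$ kops delete cluster $K8S_CLUSTER_NAME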