Set up a Highly Available Kubernetes Cluster on AWS using kops
In this Collective Talk, we learned how to set up Kubernetes on the local machine, and then walked through the more involved, production-ready setup of Kubernetes on AWS infrastructure.
We answered questions like:
- How to set up Kubernetes on the local machine?
- Which components need to be installed when setting up Kubernetes?
- How to set up Kubernetes on an AWS cluster using kops?
- How to make a Kubernetes cluster highly available (HA) on AWS?
- How does a kops setup compare to Amazon's managed service, EKS?
- What are some important things to keep in mind while working with kops and Kubernetes?
This was the 2nd Collective Talk of the Cloud Native Series scheduled from September 2018 to November 2018.
$ brew update
$ brew install awscli
$ brew install kubernetes-cli
$ brew install kops
Local Setup + Test
1. enable Kubernetes in the Docker Engine (via Docker Desktop preferences).
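Once Kubernetes is enabled in Docker, it's worth confirming that kubectl is pointed at the local cluster before deploying anything (the context name `docker-for-desktop` is assumed here; newer Docker versions may call it `docker-desktop`):

```shell
# switch kubectl to the local Docker-provided cluster
$ kubectl config use-context docker-for-desktop

# confirm the control plane is reachable and the node is Ready
$ kubectl cluster-info
$ kubectl get nodes
```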
2. deploy an nginx-based image:
$ kubectl run nginx-app \
    --replicas=2 \
    --image=salitrapraveen/whale \
    --port=80
3. expose the deployment to the host:
$ kubectl expose deployment nginx-app \
    --type=NodePort \
    --name=nginx-app
4. get the node port from the following command and open localhost:<NODE_PORT> in the browser:
$ kubectl get service nginx-app
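If you'd rather grab the node port directly than read it out of the service table, kubectl's jsonpath output works:

```shell
# print just the node port assigned to the service
$ kubectl get service nginx-app \
    -o jsonpath='{.spec.ports[0].nodePort}'
```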
Cluster Setup on AWS
1. set env vars for names of k8s cluster and state store:
$ export K8S_CLUSTER_NAME=k8s.ennate.academy
$ export S3_BUCKET_NAME=k8s-ennate-academy-state-store
2. create an AWS IAM User with Programmatic Access and assign the following permissions:
- AmazonEC2FullAccess
- AmazonRoute53FullAccess
- AmazonS3FullAccess
- IAMFullAccess
- AmazonVPCFullAccess
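If you prefer the CLI over the AWS console, the IAM user and its access keys can also be created with awscli (the user name `kops` below is just an example):

```shell
# create a dedicated IAM user for kops
$ aws iam create-user --user-name kops

# attach each required managed policy
$ for policy in AmazonEC2FullAccess AmazonRoute53FullAccess \
    AmazonS3FullAccess IAMFullAccess AmazonVPCFullAccess; do
    aws iam attach-user-policy \
      --user-name kops \
      --policy-arn "arn:aws:iam::aws:policy/$policy"
  done

# generate the programmatic access keys for the next step
$ aws iam create-access-key --user-name kops
```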
3. configure awscli with the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY of the user created in the step above:
$ aws configure
4. create S3 bucket:
$ aws s3api create-bucket \
    --bucket $S3_BUCKET_NAME \
    --region us-east-1
5. enable bucket versioning:
$ aws s3api put-bucket-versioning \
    --bucket $S3_BUCKET_NAME \
    --versioning-configuration Status=Enabled
6. set env var for state store:
$ export KOPS_STATE_STORE=s3://$S3_BUCKET_NAME
7. dry-run cluster with default config:
$ kops create cluster \
    --zones us-east-1a \
    --name $K8S_CLUSTER_NAME \
    --dry-run -oyaml
8. dry-run cluster with a single master:
$ kops create cluster \
    --dns-zone $K8S_CLUSTER_NAME \
    --zones "us-east-1a" \
    --master-size m5.large \
    --master-count 1 \
    --node-size m5.large \
    --node-count 2 \
    --image "kope.io/k8s-1.10-debian-stretch-amd64-hvm-ebs-2018-05-27" \
    --networking kube-router \
    --topology private \
    --bastion \
    --name $K8S_CLUSTER_NAME \
    --dry-run -oyaml
9. dry-run cluster with multi-master (highly-available) and save the output to a YAML file:
$ kops create cluster \
    --dns-zone $K8S_CLUSTER_NAME \
    --zones us-east-1a,us-east-1b,us-east-1c \
    --master-size m5.large \
    --master-count 3 \
    --node-size m5.large \
    --node-count 6 \
    --image "kope.io/k8s-1.10-debian-stretch-amd64-hvm-ebs-2018-05-27" \
    --networking kube-router \
    --topology private \
    --bastion \
    --name $K8S_CLUSTER_NAME \
    --dry-run -oyaml > cluster.yaml
10. create kops config via cluster manifest YAML file:
$ kops create -f cluster.yaml
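To confirm the config actually landed in the S3 state store, `kops get` lists the registered clusters and can dump the stored spec back out:

```shell
# list clusters registered in the state store
$ kops get cluster

# inspect the full stored cluster spec
$ kops get cluster $K8S_CLUSTER_NAME -oyaml
```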
11. set up SSH access to the AWS k8s cluster:
$ kops create secret --name $K8S_CLUSTER_NAME \
    sshpublickey admin -i ~/.ssh/id_rsa.pub
12. apply the configuration to the AWS cluster:
$ kops update cluster $K8S_CLUSTER_NAME --yes
13. validate the cluster:
$ kops validate cluster
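Validation usually fails for the first several minutes after `kops update cluster` while the EC2 instances boot and join; one simple approach (a sketch, with an arbitrary 30-second interval) is to poll until it passes:

```shell
# re-run validation every 30s until the cluster reports healthy
$ until kops validate cluster; do sleep 30; done
```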
14. access the cluster via bastion:
# verify if your SSH public key is in the `ssh-agent`
$ ssh-add -L

# add your public key to the ssh-agent
$ ssh-add ~/.ssh/id_rsa

# SSH into the bastion
$ ssh -A admin@bastion.$K8S_CLUSTER_NAME

# and then, from the bastion shell, SSH to any node in the cluster
$ ssh admin@<master_or_node_ip>
15. delete the cluster:
$ kops delete cluster $K8S_CLUSTER_NAME --yes
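As a safety net, kops only previews what it would remove when `--yes` is omitted, so a dry pass first is a reasonable habit before destroying real infrastructure:

```shell
# preview the AWS resources that would be deleted
$ kops delete cluster $K8S_CLUSTER_NAME

# then actually delete them
$ kops delete cluster $K8S_CLUSTER_NAME --yes
```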