Deploy Stateless Workloads on Kubernetes
With Pods, Deployments, DaemonSets, and Services
In this Collective Talk, we learned how to deploy stateless workloads with Pods, Deployments, DaemonSets, and Services in a Kubernetes cluster running on AWS infrastructure.
We answered questions like:
- What is the smallest deployable unit (a.k.a. Pod) in Kubernetes?
- What are Deployments and DaemonSets? How are they related to Pods?
- How do you set resource requests, limits, and quotas (CPU and memory) for Pods and Namespaces?
- What is a Service? What are the different types of Services?
- What are a few important things to keep in mind while running stateless workloads on Kubernetes?
This was the 4th Collective Talk of the Cloud Native Series scheduled from September 2018 to November 2018.
Recap
Command Reference
Prerequisites:
1. awscli and kubectl are installed on the local machine.
2. a Kubernetes cluster (let's say k8s.ennate.academy) with the aws-iam-authenticator daemonset is already installed on AWS using kops. (How to Install k8s cluster with IAM support using kops?)
3. a k8s namespace is already created:
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: dev
EOF
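(Optional) To avoid passing -n dev with every command, the current kubeconfig context can be pointed at the new namespace:
$ kubectl config set-context $(kubectl config current-context) --namespace=dev
$ kubectl get namespaces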
Pods:
1. deploy a pod with two containers:
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: whale
    image: salitrapraveen/whale:latest
    ports:
    - containerPort: 80
      name: http
      protocol: TCP
  - name: alpine
    image: alpine:3.8
    command: ['sh', '-c', 'echo Hello Kubernetes! && sleep 3600']
EOF
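To confirm that both containers in the pod came up (assuming the manifests are applied in the dev namespace from the prerequisites):
$ kubectl get pod app -n dev
$ kubectl describe pod app -n dev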
2. deploy a pod with resource requests and limits:
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: alpine
spec:
  containers:
  - name: alpine
    image: alpine:3.8
    command: ['sh', '-c', 'echo Hello Kubernetes! && sleep 3600']
    resources:
      requests:
        memory: "64Mi"   # Ki, Mi, Gi, Ti, Pi, Ei
        cpu: "250m"      # 1 CPU = 1 AWS vCPU, 1 GCP Core, 1 Azure vCore
      limits:
        memory: "128Mi"
        cpu: "500m"
EOF
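The manifest above sets per-container requests and limits. To cap the dev namespace as a whole, a ResourceQuota can be applied as well; the numbers below are illustrative, not values from the talk:
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
  namespace: dev
spec:
  hard:
    requests.cpu: "2"
    requests.memory: 2Gi
    limits.cpu: "4"
    limits.memory: 4Gi
EOF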
3. pod-specific kubectl commands:
$ kubectl describe pod POD_NAME
$ kubectl logs POD_NAME [-c CONTAINER_NAME]
$ kubectl exec -it POD_NAME [-c CONTAINER_NAME] -- /bin/sh
$ kubectl delete pod POD_NAME
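For example, against the two-container pod created above:
$ kubectl logs app -c whale -n dev
$ kubectl exec -it app -c alpine -n dev -- /bin/sh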
Deployments:
1. create a deployment with 5 replicas of a pod:
$ cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whale-deployment
spec:
  replicas: 5
  strategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: whale-app
  template:
    metadata:
      labels:
        app: whale-app
    spec:
      containers:
      - name: whale
        image: salitrapraveen/whale:latest
        ports:
        - containerPort: 80
          name: http
      - name: alpine
        image: alpine:3.8
        command: ['sh', '-c', 'echo Hello Kubernetes! && sleep 3600']
        env:
        - name: SOME_ENV_VAR
          value: awesome
EOF
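Because the strategy type is RollingUpdate, changing the pod template rolls pods over gradually. One way to exercise and watch a rollout (the :1.1 tag is hypothetical):
$ kubectl set image deployment/whale-deployment whale=salitrapraveen/whale:1.1 -n dev
$ kubectl rollout status deployment/whale-deployment -n dev
$ kubectl rollout undo deployment/whale-deployment -n dev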
2. deployment-specific kubectl commands:
$ kubectl get deployments
$ kubectl get replicasets
$ kubectl scale deployment DEPLOYMENT_NAME --replicas=COUNT
$ kubectl describe deployment DEPLOYMENT_NAME
$ kubectl edit deployment DEPLOYMENT_NAME
$ kubectl delete deployment DEPLOYMENT_NAME
DaemonSets:
1. create a daemonset that deploys pods on the master nodes (see nodeSelector and tolerations in the spec):
$ cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: whale-daemon
spec:
  selector:
    matchLabels:
      app: whale-app
  template:
    metadata:
      labels:
        app: whale-app
    spec:
      nodeSelector:
        node-role.kubernetes.io/master: ""
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
      containers:
      - name: whale
        image: salitrapraveen/whale:latest
        ports:
        - containerPort: 80
          name: http
      - name: alpine
        image: alpine:3.8
        command: ['sh', '-c', 'echo Hello Kubernetes! && sleep 3600']
        env:
        - name: SOME_ENV_VAR
          value: awesome
EOF
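To verify where the daemonset pods landed (they should be scheduled only on the master node(s)):
$ kubectl get pods -n dev -o wide   # check the NODE column for the whale-daemon pods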
2. daemonset-specific kubectl commands:
$ kubectl get daemonsets
$ kubectl describe daemonset DAEMONSET_NAME
$ kubectl edit daemonset DAEMONSET_NAME
$ kubectl delete daemonset DAEMONSET_NAME
Services:
1. create a ClusterIP (default) type service:
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: whale-service
spec:
  selector:
    app: whale-app
  ports:
  - protocol: TCP    # supports TCP, UDP and SCTP
    port: 8080       # port the service listens on
    targetPort: 80   # containerPort of the pod; can also be the containerPort's string name
EOF
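A ClusterIP service is reachable only from inside the cluster; one way to test it is from a throwaway pod, using the service name (same namespace assumed):
$ kubectl run -it --rm --restart=Never curl-test --image=alpine:3.8 -n dev -- sh
/ # wget -qO- http://whale-service:8080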
2. create a NodePort type service:
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: whale-service
spec:
  selector:
    app: whale-app
  type: NodePort
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 80
    nodePort: 30270   # k8s assigns a random port (30000-32767) if not specified
EOF
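A NodePort service opens the same port on every node, so it can be reached from outside the cluster at NODE_IP:30270 (on AWS the node's security group must allow that port):
$ kubectl get nodes -o wide
$ curl http://NODE_IP:30270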
3. create a LoadBalancer type service:
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: whale-service
spec:
  selector:
    app: whale-app
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 80
EOF
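On AWS, a LoadBalancer service provisions an ELB; its hostname shows up under EXTERNAL-IP once it is ready:
$ kubectl get service whale-service -n dev
$ curl http://ELB_HOSTNAME:8080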
4. create an ExternalName type service:
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: whale-service
spec:
  type: ExternalName
  externalName: my.database.example.com
EOF
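An ExternalName service has no selector or cluster IP; cluster DNS simply returns a CNAME, so whale-service resolves to my.database.example.com from inside the cluster. This can be checked from any pod, e.g. the alpine pod created earlier:
$ kubectl exec -it alpine -n dev -- nslookup whale-service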
5. service-specific kubectl commands:
$ kubectl get services
$ kubectl get endpoints
$ kubectl edit service SERVICE_NAME
$ kubectl delete service SERVICE_NAME