

Kubernetes' basic architecture is made up of:

  • Nodes - a physical or virtual machine on which Pods are scheduled
  • Pods - a group of one or more containers, defined by a pod.yaml manifest
  • Deployments - used to manage the creation, scaling, and updating of Pods
  • Services - a Kubernetes Service consists of an (exposed) set of Pods and a policy that defines how they are accessed. Services commonly use label selectors to route traffic to the right subset of Pods.
  • The Control Plane

The control plane is Kubernetes' brain. It has an overall view of every container and pod running on the cluster, can schedule new pods (which can include containers with root access to their parent node), and can read all the secrets stored in the cluster.

Pod Configuration

A basic pod configuration might look like the following (a Node.js app served by Apache):

apiVersion: v1
kind: Pod
metadata:
  name: node-js-pod
spec:
  containers:
  - name: node-js-pod
    image: bitnami/apache:latest
    ports:
    - containerPort: 80

The pod configuration can be tested and created with

kubectl create -f nodejs-pod.yaml 

This prints a confirmation that includes the pod name.


Getting pod information

 kubectl describe pods/node-js-pod 

Running a command inside the pod

 kubectl exec node-js-pod -- curl <private ip address> 


Pods with matching labels are added to the list of endpoints to which the service forwards traffic.

apiVersion: v1
kind: Service
metadata:
  name: node-js
  labels:
    name: node-js
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    name: node-js

creating a service

kubectl create -f nodejs-rc-service.yaml 

On Google's GCE, starting a service will create an external load balancer and forwarding rules, but you may need to add additional firewall rules.

Replication Controller

Replication controllers (RCs) ensure that a specified number of replicas of a pod (and its container images) are running across the cluster at any one time.

apiVersion: v1
kind: ReplicationController
metadata:
  name: node-js
  labels:
    name: node-js
    deployment: demo
spec:
  replicas: 3
  selector:
    name: node-js
    deployment: demo
  template:
    metadata:
      labels:
        name: node-js
        deployment: demo
    spec:
      containers:
      - name: node-js
        image: jonbaier/node-express-info:latest
        ports:
        - containerPort: 80

The example replication controller file above can be created with

kubectl create -f nodejs-controller.yaml

and to see the services that are running

kubectl get services 

Local Setup



minikube start
minikube stop
minikube status

minikube status reports something like:

minikube: Running
cluster: Running
kubectl: Correctly Configured: pointing to minikube-vm at

To get the IP address minikube is running on:

minikube ip

Minikube context: the context determines which cluster kubectl is interacting with. You can see all your available contexts in the ~/.kube/config file.
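A minimal sketch of what a ~/.kube/config with a minikube context might contain (the server address and certificate paths are illustrative):

```yaml
apiVersion: v1
kind: Config
clusters:
- name: minikube
  cluster:
    server: https://192.168.99.100:8443
    certificate-authority: /home/user/.minikube/ca.crt
contexts:
- name: minikube
  context:
    cluster: minikube
    user: minikube
users:
- name: minikube
  user:
    client-certificate: /home/user/.minikube/client.crt
    client-key: /home/user/.minikube/client.key
current-context: minikube
```

Switching context simply changes which entry under contexts the current-context field points at.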

kubectl config use-context minikube

Verify that kubectl is configured to communicate with your cluster:

kubectl cluster-info

Open the Kubernetes dashboard in a browser:

minikube dashboard

that will open your browser to something like ..!/overview?namespace=default

To deploy an image to Kubernetes, use the run command:

kubectl run salesforce --image=wtr-ecomm-etl-salesforce/wtr-etl-salesforce:latest  --port=8080

To update the deployment to a new image version:

kubectl set image deployment/hello-node hello-node=hello-node:v2
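After setting a new image, the rollout can be watched and, if necessary, reverted. A sketch using the hello-node deployment above:

```shell
# watch the rollout until it completes
kubectl rollout status deployment/hello-node
# view past revisions of the deployment
kubectl rollout history deployment/hello-node
# roll back to the previous revision if the new image misbehaves
kubectl rollout undo deployment/hello-node
```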

Kubectl Pull Problem

Even though the image appeared in docker images, minikube failed to run it, saying it was unable to pull the Docker image.

It turns out that minikube has its own registry and cannot see the local Docker registry.

Hack #1

The command:

minikube docker-env

lists the minikube Docker environment variables; in the local shell they can be applied with

eval $(minikube docker-env)
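Typical output of minikube docker-env looks something like this (the IP address and paths are examples and will differ per machine):

```shell
# example environment exported by `minikube docker-env`
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="/home/user/.minikube/certs"
# Run this command to configure your shell:
# eval $(minikube docker-env)
```

Once sourced, the local docker client talks to minikube's Docker daemon, so locally built images land where minikube can see them.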

then, running in the same console, it is possible to

gradle build // after eval, will now install into minikube registry 
// now we run without going out to pull 
kubectl run salesforce --image=wtr/wtr-etl-salesforce-1.0-snapshot:1.0-snapshot --image-pull-policy=IfNotPresent

Hack #2 - Run a Local Registry

Use a local registry:

docker run -d -p 5000:5000 --restart=always --name registry registry:2

Now tag your image properly:

docker tag ubuntu localhost:5000/ubuntu

Note that localhost should be changed to the DNS name of the machine running the registry container.

Now push your image to local registry:

docker push localhost:5000/ubuntu

You should be able to pull it back:

docker pull localhost:5000/ubuntu

Create a Deployment

A Kubernetes Pod is a group of one or more Containers, tied together for the purposes of administration and networking. The Pod in this tutorial has only one Container. A Kubernetes Deployment checks on the health of your Pod and restarts the Pod’s Container if it terminates. Deployments are the recommended way to manage the creation and scaling of Pods.

Use the kubectl run command to create a Deployment that manages a Pod. The Pod runs a Container based on your hello-node:v1 Docker image:

kubectl run hello-node --image=hello-node:v1 --port=8080

View the Deployment:

kubectl get deployments
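The same Deployment can also be written declaratively. A minimal manifest sketch, matching the hello-node run command above (the manifest itself is illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-node
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-node
  template:
    metadata:
      labels:
        app: hello-node
    spec:
      containers:
      - name: hello-node
        image: hello-node:v1
        ports:
        - containerPort: 8080
```

It can be applied with kubectl apply -f, which also makes later image updates a matter of editing the file and re-applying.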

etcd

Kubernetes stores its cluster state in etcd: a distributed, reliable key-value store for the most critical data of a distributed system.

kubernates.txt · Last modified: 2019/09/08 07:43 by root