Docker in Detail

What is Docker?

Docker is a software technology providing operating-system-level virtualization, also known as containers, promoted by the company Docker, Inc. Docker provides an additional layer of abstraction and automation of operating-system-level virtualization on Windows and Linux. Docker uses the resource isolation features of the Linux kernel, such as cgroups and kernel namespaces, and a union-capable file system such as OverlayFS, to allow independent "containers" to run within a single Linux instance, avoiding the overhead of starting and maintaining virtual machines (VMs).

 If virtual machines are hardware virtualization, then containers are OS virtualization.
 We don't need a full OS inside the container to install our application.
Applications inside a container depend on the kernel of the host OS on which it is running.

For example, if we host a Java application inside a container, it will use the Java libraries and configuration files from the container's data, but for compute resources it relies on the host OS kernel.

Containers are like any other processes that run in an operating system, but they are isolated: their processes, files, libraries and configurations are contained within the boundaries of the container.

Containers have their own process tree and networking as well. Every container will have an IP address and a port on which the application inside the container runs. This may sound like a virtual machine, but it is not; remember that a VM has its own OS and a container does not.

Containers are not a modern technology; they have been around us in different forms and technologies. But Docker has taken them to a whole new level when it comes to building, shipping and managing containers.

                                                   Docker

Docker started its life as a platform-as-a-service provider called dotCloud.
Behind the scenes, the dotCloud platform leveraged Linux containers.

Docker relies on Linux kernel features, such as namespaces and cgroups, to ensure resource isolation and to package an application along with its dependencies. This packaging of the dependencies enables an application to run as expected across different Linux operating systems.
It's this portability that's piqued the interest of developers and systems administrators alike.

But when somebody says Docker, they can be referring to any of at least three things:

Docker, Inc the company.
Docker the container runtime and orchestration technology.
Docker the open source project.

Editions :
Docker Community Edition
Docker Enterprise Edition

When most people talk about Docker, they generally refer to the Docker Engine.
The Docker Engine runs and orchestrates containers.
For now, we can think of the Docker Engine like a hypervisor.

There are many Docker technologies that integrate with the Docker Engine to automate, orchestrate or manage Docker containers.

 Installation of Docker:

Docker can be installed on Windows, macOS and Linux.

In this scenario I have installed the Docker open source edition on CentOS 6.9.

To install this open source package we need the EPEL repository RPM.
Download and install the EPEL RPM on CentOS so that it automatically creates the epel repo in yum.
After installing EPEL, execute the command

# yum install docker-ce* -y

so that yum will automatically download and install the latest package with all the required dependencies.

Note : Before installing Docker, remove any existing Docker installation.

After the successful installation, we need to start the docker service.

# service docker status

# service docker start

# chkconfig docker on

The above command enables the docker service at run levels 3, 4 and 5 by default.

Docker commands can be executed by the root user and sudo users.
Normal users can execute Docker commands as well; to allow this, we need to add the user to the docker group.

# usermod -aG docker <username>
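As a quick sketch (the user name devuser below is hypothetical), adding a user to the docker group and verifying it could look like this. Note the change only applies to new login sessions:

```shell
# Append (-a) the supplementary group (-G) "docker" to the user's groups
usermod -aG docker devuser

# Verify: "docker" should now appear in the user's group list
id devuser

# An already-open shell can pick up the new group without re-login:
# newgrp docker
```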

After successful installation of docker engine, we will get mainly two components.

Docker Client
Docker Engine

Let's fetch a few more details about the Docker Engine:

We will operate or work with the Docker Engine mainly in two areas:

Docker Images
Docker Containers.

                         Docker Images :

We can think of images as being like a Vagrant box image or a VM image. An image is very different from a VM image, but it will feel similar initially. A Vagrant box is the stopped state of a VM, and an image is the stopped state of a container.

# docker images

The above command lists all the downloaded images on your machine.
Right now you won't see anything in the output, as we have not yet downloaded any image.
We have just installed and started the docker service.

So let us download a Docker image (pulling an image).

We download Docker images from Docker registries.
The most famous Docker registry is Docker Hub. There are a few other registries as well, from Google, Red Hat, etc.

From here onwards I will say "pull an image" instead of "download an image".
In the Docker world, we use pull instead of download.

# docker pull centos:latest

               or

# docker pull centos

Either of the above commands will pull the latest available centos image.
After the pull completes, execute the command docker images.

# docker images

This will now list the centos image pulled from Docker Hub.
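The output will look something like the following; the image ID, creation date and size shown here are illustrative, not real values:

```shell
docker images
# REPOSITORY   TAG      IMAGE ID       CREATED       SIZE
# centos       latest   9f38484d220f   2 weeks ago   202MB
```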


Containers :

We can run containers from the downloaded/pulled image.
In this scenario we have pulled the centos image.
To run the container, execute the command below.

# docker run -it centos:latest /bin/bash

If you observe closely, after executing the above command your shell prompt gets attached to the Docker container. Literally, you have started the centos container and you are inside the container now.

      docker run tells the Docker daemon to start a new container.
      The -it flag tells the daemon to connect to the container interactively with a terminal.
      Next, centos:latest is the name of the image from which we are starting the container.
      We are running the /bin/bash process inside the container.

Run the # ps -ef command inside the container, and you will see the /bin/bash process in its output.

It will show only two processes:
the first is the /bin/bash process we told the container to run,
the second is the ps -ef command itself.
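The process listing inside the container is roughly like this (PIDs and times are illustrative); note that /bin/bash runs as PID 1 in the container's own process tree:

```shell
ps -ef
# UID        PID  PPID  C STIME TTY          TIME CMD
# root         1     0  0 10:00 pts/0    00:00:00 /bin/bash
# root        15     1  0 10:02 pts/0    00:00:00 ps -ef
```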

Now, if you want to get out of the container without stopping it, press Ctrl+P followed by Ctrl+Q.
This detaches you from the container without killing it.

Now that you are back at your OS prompt, type # docker ps

# docker ps

It will list the containers running on the machine. For now it will show only one container.

If you want to get back into the container, we can attach to it using the # docker exec command.

# docker exec -it <container id> /bin/bash

With this you can connect to the Docker container and execute commands from inside it. In the above example I have used the -it option to attach our shell to the container's shell.

Again, press Ctrl+P followed by Ctrl+Q to get out of the container without killing it.
Execute the command # docker ps to see the currently running containers.

# docker ps

(you can observe the changes)

Now let us see how to stop and kill the containers.

# docker stop <container ID>

Note : Every container has a unique container ID.
          You can get the container ID by executing the command docker ps.

This will stop the container.

# docker ps

You can see that no containers are running now.





To remove a container, execute the below command # docker rm (the container must be stopped first, or you can force removal with docker rm -f).

# docker rm <container ID>

# docker ps

You can verify that the container was successfully deleted by executing the docker ps -a command, which lists all containers including stopped ones.
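Putting the whole container lifecycle together, here is a minimal sketch; the container ID a1b2c3 is an illustrative placeholder for whatever docker ps reports:

```shell
docker run -it centos:latest /bin/bash   # start an interactive container
# ... press Ctrl+P, Ctrl+Q to detach without killing it ...
docker ps                                # list running containers, note the ID
docker stop a1b2c3                       # stop the container by ID
docker rm a1b2c3                         # remove the stopped container
docker ps -a                             # verify it is gone (shows all containers)
```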


Docker Images :

An image is a read-only template for creating application containers.
Images are build-time constructs; containers are their run-time siblings.
    Think of a container as a running image, and an image as a stopped container.
    An image is a bunch of layers stacked on top of each other, presented as a unified file system.
    We can run multiple containers from a single image; each container gets its own unique read-write layer on top of the image.
    Images are stored in a registry. We pull them to our hosts using the "docker pull <image>" command.





Steps to Build an Image from a Container:

Pull the image
Run the Container.
Customize it as per our requirement.
Commit the container into an image and then ship it.
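The steps above can be sketched with the docker commit command; the repository/image name myrepo/custom-centos below is hypothetical:

```shell
docker pull centos:latest                   # 1. pull the base image
docker run -it centos:latest /bin/bash      # 2. run a container from it
# 3. customize inside the container, for example:
#      yum install -y java-1.8.0-openjdk
#    then detach with Ctrl+P, Ctrl+Q
docker commit <container id> myrepo/custom-centos:v1   # 4. commit the container to an image
docker push myrepo/custom-centos:v1         # ship it to a registry (requires docker login)
```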


         
After pulling an image, once you start running a container from it, the two become dependent on each other. You should not delete the image until every container running from it has been stopped and destroyed. Trying to delete an image without first stopping and deleting the containers based on it will result in errors.

Best Practices : The images we ship should be lightweight and should not contain any files or folders other than application-related libraries. For example, if we build an image with a Java application running on it, it should contain only Java-related files and libraries; nothing else should be there.



# docker pull node:latest

This will pull the latest node image uploaded to the Docker registry.

# docker images

We can see the downloaded images.

Now we have downloaded two images onto our Docker engine:
centos:latest
node:latest


Image Registries :

Docker images are stored in registries.
The most common image registry is Docker Hub.
More third-party image registries exist; for now we are using Docker Hub as our image registry.
https://hub.docker.com

Image registries contain multiple image repositories, and image repositories contain images.



Docker Hub contains official and unofficial images. Official images are verified by Docker and are safe to use. Unofficial images can be built by anyone and are not verified by Docker, Inc.
Our personal images live in unofficial repositories.


Image Tags:

While pulling an image we give imagename:tag; by default Docker pulls from the Docker Hub registry and finds the image with the specified tag.

imagename:tagname = the image name refers to the Docker image name, and the tag name refers to the version of the image.

eg:

docker pull centos:6.9

docker pull centos:latest

Images and layers :

All Docker images are made up of one or more read-only layers.
There are a few ways to see the layers of an image.
Let's take a look at the output of the docker pull node:latest command.

# docker pull node

 *********   Pull complete
 *********  Pull complete

Each line in the output that ends with "Pull complete" represents a layer in the image.
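Another way to list the layers of a pulled image is the docker history command, which shows one row per layer together with the instruction that created it. The output below is a rough sketch, not real values:

```shell
docker history node:latest
# IMAGE          CREATED        CREATED BY                               SIZE
# <image id>     2 weeks ago    /bin/sh -c #(nop)  CMD ["node"]          0B
# <missing>      2 weeks ago    /bin/sh -c apt-get update && ...         45MB
# ... one row per layer ...
```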


Each layer is only a set of differences from the layer before it. The layers are stacked on top of each other. When you create a new container, you add a new writable layer on top of the underlying layers.
This layer is often called the "Container Layer".

All changes made to the running container, such as writing new files, modifying existing files, and deleting files, are written to this thin writable container layer.


Container and Layers :

The major difference between a container and an image is this top writable layer.
All writes that add new data or modify existing data are stored in this writable layer.
So when we delete the container, its writable layer is also deleted.
The underlying image of the deleted container remains unchanged.



Each container has its own writable container layer, and all changes are stored in that layer.
We can run multiple containers from the same single image; in this case, each container will have its own data state.


Docker uses storage drivers to manage the contents of the image layers and the writable container layer.
Each storage driver handles the implementation differently, but all drivers use stackable image layers and the copy-on-write strategy.

We can inspect the layers by using the docker inspect command.

# docker inspect centos:latest
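docker inspect prints a large JSON document; to see just the layer digests, the --format option can pull out the RootFS.Layers field, which holds one digest per read-only layer:

```shell
# Print only the list of layer digests for the image
docker inspect --format '{{json .RootFS.Layers}}' centos:latest
# ["sha256:...", "sha256:..."]   <- one entry per read-only layer
```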


Deleting Images :

When you no longer need an image, you can delete it from your Docker host by using the docker rmi command; rmi stands for remove image.

# docker rmi <image id>


How Docker builds a customized image.

Docker can build images automatically by reading the instructions from a Dockerfile.
A Dockerfile is a text file that contains, in order, all the commands needed to build an image.
A Dockerfile adheres to a specific format and uses a specific set of instructions.








A Dockerfile defines what goes on in the environment inside your container. Access to resources like networking interfaces and disk drives is virtualized inside this environment, which is isolated from the rest of your system, so you have to map ports to the outside world and be specific about which files you want to copy into that environment.


Docker File Instructions :


We know that Docker images can be built using a Dockerfile, but we need to follow a set of instructions when writing one. There are around 13 instructions that we can use in a Dockerfile.

1. FROM
2. MAINTAINER
3. RUN
4. CMD
5. EXPOSE
6. ENV
7. COPY
8. ADD
9. ENTRYPOINT
10. VOLUME
11. USER
12. WORKDIR
13. ONBUILD
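As a minimal sketch of the format, here is a hypothetical Dockerfile that uses several of the instructions above to package a Node.js application; the file app.js and the image name myrepo/myapp are assumptions for illustration:

```dockerfile
# Build with: docker build -t myrepo/myapp:v1 .
FROM node:latest
MAINTAINER admin@example.com
# Set an environment variable inside the image
ENV NODE_ENV production
# Working directory for the following instructions
WORKDIR /app
# Copy the application file from the build context into the image
COPY app.js /app/
# Document the port the application listens on
EXPOSE 8080
# Default command executed when a container starts
CMD ["node", "app.js"]
```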

###################################################################################
Docker Swarm :

Install docker-ce on all 4 servers.
Now initialize the Docker swarm on the 1st server.
# docker swarm init
Note : The first node on which we run docker swarm init in a swarm group becomes the swarm manager and Leader of that group.

Next we need to join the other nodes of the swarm group as workers.

docker swarm join --token SWMTKN-1-45t0tqw6aozby5jyl48hgdda38pzyb7z6pdkb4jfaz0s9mznsi-8w3qme3ja8wms1bops94brv2h 10.0.0.95:2377

Execute the above command on each worker node to add it to the swarm.
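If you lose the join command, it can be reprinted at any time on the manager node:

```shell
# Run on the manager; prints the full "docker swarm join --token ..." command for workers
docker swarm join-token worker

# The join command for additional managers is available the same way
docker swarm join-token manager
```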

To verify the details of the docker nodes in a swarm  execute the command # docker node ls

[root@centos1 docker]# docker node ls
ID                            HOSTNAME              STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
qrkdspiu6nut6fvrp2ylu3jsp *   centos1.example.com   Ready               Active              Leader              18.03.0-ce-rc2
ixkn1z1g08wm9m5fattj4x4my     centos2.example.com   Ready               Active                                  18.03.0-ce-rc3
jo7g1ns912blaowjj5emnrcac     centos3.example.com   Ready               Active                                  18.03.0-ce-rc3
q5iltpvvc5kwu9pl4frde9cy4     centos4.example.com   Ready               Active                                  18.03.0-ce-rc3
[root@centos1 docker]#

Now let us establish a network in the swarm group.
Network :
two types
bridge network (for a single host)
overlay network (for the swarm group)

Overlay network : It is also called a multi-host network.
A bridge network is scoped to a single host; an overlay is a single layer-2 network spanning multiple hosts.
This is the only network scope that spans the whole swarm.

To create the network :

# docker network create -d overlay swarmnetwork

-d : driver type of the network, bridge or overlay

(this command should be run on the manager node only)

We have created the network; now we need to create a service (running containers on different hosts) to test the network, by running an alpine container.

docker service create -d --name pinger --replicas 3 --network swarmnetwork alpine sleep 1d

docker service ls (to verify the status of the services.)

docker service ps pinger (Provides the info of the containers with its respective hosts.)

(example task name from the output: pinger.1.ttfd9ntcg6kgqctwyrcph4vnf)

In order for containers on different hosts to ping each other, we need to allow the required UDP ports between the hosts.
For now, in AWS, I have configured the security group to allow all UDP ports.
(Swarm overlay networking uses TCP/UDP port 7946 for node communication and UDP port 4789 for the overlay network traffic itself.)
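Once the ports are open, the overlay network can be tested by pinging one service task from another; the container ID and target name below are placeholders for whatever docker ps and docker service ps report on your cluster:

```shell
# On a host running one of the pinger tasks, find the local container
docker ps

# Exec into it and ping another task's container name or overlay IP
docker exec -it <container id> ping -c 3 <other container name or IP>
```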
##################################################################################




#################################################################################


Installation of Docker Enterprise Edition :

Creation of Repository :

Syntax : export DOCKERURL='<DOCKER-EE-URL>'

Eg:
#export DOCKERURL='https://storebits.docker.com/ee/trial/sub-464b-b6e2-a24875b6e159'
 (URL taken from the Docker site setup page)

Now we need to store the Docker EE repository URL, which we took from the setup page, in a yum variable in /etc/yum/vars/

#echo "$DOCKERURL/rhel" > /etc/yum/vars/dockerurl

Store your OS version string in /etc/yum/vars/dockerosversion
As we are using RHEL 7 (Docker EE appears to be supported from RHEL 7 onwards)

#echo "7" > /etc/yum/vars/dockerosversion



Install required packages. yum-utils provides the yum-config-manager utility, and device-mapper-persistent-data and lvm2 are required by the devicemapper storage driver.

#yum install -y yum-utils  device-mapper-persistent-data  lvm2

Enable the extras RHEL repository. This ensures access to the container-selinux package which is required by docker-ee

#yum-config-manager --enable rhel-7-server-extras-rpms

Depending on your cloud provider, you may also need to enable another repository.

#yum-config-manager     --add-repo     "$DOCKERURL/rhel/docker-ee.repo"

Now Proceed to install Docker-EE

#yum install docker-ee -y

On production systems, you should install a specific version of Docker EE instead of always using the latest. List the available versions. This example uses the sort -r command to sort the results by version number, highest to lowest, and is truncated.

#yum list docker-ee  --showduplicates | sort -r

Docker is installed but not started. The docker group is created, but no users are added to the group.

If you need to use devicemapper, follow the procedure in the devicemapper storage driver guide before starting Docker.
For production systems using devicemapper, you must use direct-lvm mode, which requires you to prepare the block devices.

Start the Docker service

systemctl start docker

Verify that Docker EE is installed correctly by running the hello-world image.

#docker run hello-world

This command downloads a test image and runs it in a container. When the container runs, it prints an informational message and exits.
########################################################################################################################
Configuring Proxy to download Docker containers
########################################################################################################################
When we don't have a direct internet connection, we use a proxy to download images.
Here we configure the proxy that Docker uses to download images.

mkdir /etc/systemd/system/docker.service.d
cd /etc/systemd/system/docker.service.d
vi http-proxy.conf
[root@cdc-docker2 docker.service.d]# more http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://IPaddrss/URL/"
[root@cdc-docker2 docker.service.d]#
systemctl daemon-reload
systemctl show --property Environment docker
systemctl restart docker
docker run hello-world --- to check whether Docker can connect to the Docker site and download images
docker ps
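If HTTPS traffic and proxy exclusions also need to be handled, the same drop-in file can carry them; the proxy address below is a placeholder:

```shell
# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128/"
Environment="HTTPS_PROXY=http://proxy.example.com:3128/"
Environment="NO_PROXY=localhost,127.0.0.1"
```

After editing the file, run systemctl daemon-reload and systemctl restart docker as shown above for the change to take effect.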

###################################################################################

Docker Universal Control Plane :

It allows developer teams to manage the application lifecycle from a single platform.
It also integrates the native Docker tools (Engine, Compose and Swarm) into a graphical front end.
