What is Docker?

Docker is a tool designed to make it easier to create, deploy, and run applications by using containers. Containers allow a developer to package up an application with all of the parts it needs, such as libraries and other dependencies, and ship it all out as one package.

Docker is a containerization platform that packages an application and all its dependencies into a container so that the application runs seamlessly in any environment. Docker can be used on local machines as well as in the cloud.


Why do we need Docker?


As a DevOps engineer, I have been hearing this from many developers: “It works on my machine; I don’t know why it won’t work on the server.”

Another problem in hosting any application is configuring multiple environments, such as development, UAT, and production. It takes a lot of time to configure each environment for different kinds of applications.

We can avoid these problems by using Docker. Once we configure Docker for an application, we can deploy it to multiple environments quickly. We can also use it on a local machine to avoid dependency issues on the server while deploying.

Virtual Machines vs Docker Containers

Fig 1 shows the traditional architecture of deployment using virtual machines: a hypervisor divides the same hardware resources among multiple virtual machines (VMs). However, VMs can take up a lot of system resources. Each VM runs not just a full copy of an operating system, but a virtual copy of all the hardware that the operating system needs to run. This quickly adds up to a lot of RAM and CPU cycles. That is still economical compared to running separate physical machines, but for some applications it can be overkill, which led to the development of containers.

Operating system (OS) virtualization has grown in popularity over the last decade to enable software to run predictably and well when moved from one server environment to another. But containers provide a way to run these isolated systems on a single server or host OS.



Fig 2 shows containerization: containers sit on top of a physical server and its host OS, for example Linux or Windows. Each container shares the host OS kernel and, usually, the binaries and libraries too. Shared components are read-only. Containers are thus exceptionally “light”: they are only megabytes in size and take just seconds to start, versus gigabytes and minutes for a VM.

As the figure above illustrates, multiple applications run on the same machine. These applications are put into Docker containers, and any changes made inside one container do not affect the other containers. Docker helps you create, deploy, and run applications using containers.
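
As a quick illustration of this isolation, here is a minimal sketch you can try once Docker is installed (see below). It assumes the official nginx image and uses web1 and web2 as example container names; a file created inside one container is not visible in the other:
$ docker run -d --name web1 nginx
$ docker run -d --name web2 nginx
$ docker exec web1 touch /tmp/only-in-web1
$ docker exec web1 ls /tmp     # shows only-in-web1
$ docker exec web2 ls /tmp     # shows nothing: web2 is unaffected
$ docker rm -f web1 web2       # clean up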

Images and containers:

A Docker image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings.

A container is launched by running an image. An image is an executable package that includes everything needed to run an application: the code, a runtime, libraries, environment variables, and configuration files.
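
To make the image/container distinction concrete, here is a minimal sketch (assuming Docker is installed and using the small official alpine image): pulling downloads an image, and each docker run launches a new container from that image:
$ docker pull alpine                    # download the image from Docker Hub
$ docker image ls                       # the image is now stored locally
$ docker run alpine echo "hello"        # launch a container from the image
$ docker run alpine echo "hello again"  # a second, separate container
$ docker ps -a                          # lists both (now exited) containers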

Docker Installation on Ubuntu:

Update the apt package index:

$ sudo apt-get update
Install packages to allow apt to use a repository over HTTPS:
$ sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    software-properties-common

Add Docker’s official GPG key:

$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

Add the repository:

$ sudo add-apt-repository \
    "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
    $(lsb_release -cs) \
    stable"

Install Docker CE:

Update the apt package index.
$ sudo apt-get update
Install the latest version of Docker CE (any existing installation of Docker is replaced):
$ sudo apt-get install docker-ce
Docker should now be installed, the daemon started, and the process enabled to start on boot. Check that it's running:
$ sudo systemctl status docker
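
As an additional, optional sanity check, you can print the installed version and run the small hello-world test image (the exact version string will differ on your machine):
$ docker --version
$ sudo docker run hello-world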

Getting Started with Docker Example:

The first step in creating a Docker image is writing a Dockerfile. A Dockerfile defines what goes on in the environment inside your container. Access to resources like networking interfaces and disk drives is virtualized inside this environment, which is isolated from the rest of your host machine, so you need to map ports to the outside world and be specific about which files you want to “copy in” to that environment. After doing that, you can expect the build of your app defined in this Dockerfile to behave exactly the same wherever it runs. Let us take an example Dockerfile for a Flask application:

Dockerfile:

 
FROM python:3.6

RUN mkdir -p /usr/src/app

WORKDIR /usr/src/app

COPY mylocalfolder/requirements.txt .

RUN pip3 install --no-cache-dir -r requirements.txt

EXPOSE 8000

COPY  mylocalfolder  .

CMD ["python", "app.py"]
 

Let us go through each line of Dockerfile and understand what it does:

FROM is the Dockerfile instruction that indicates the base image. Here we are using the python:3.6 image from Docker Hub. Docker Hub is a cloud registry service that allows you to download Docker images built by other communities.
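
If you like, you can search Docker Hub and pull the base image ahead of the build to confirm it is available; the build would otherwise pull it automatically:
$ docker search python      # search Docker Hub for Python images
$ docker pull python:3.6    # pull the base image named in the FROM line
$ docker image ls python    # confirm it is stored locally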

The RUN instruction signals to Docker that the following command should be run as a normal Linux command during the Docker build. Here it creates the directory /usr/src/app, into which we copy the code from the host.

The WORKDIR instruction sets the working directory for any RUN, CMD, ENTRYPOINT, COPY and ADD instructions that follow it in the Dockerfile. Here it sets the working directory to /usr/src/app.

The first COPY instruction copies mylocalfolder/requirements.txt from the host into the container at /usr/src/app. The second COPY instruction copies all the contents of mylocalfolder into the container at /usr/src/app.

RUN pip3 install --no-cache-dir -r requirements.txt installs all the Python packages listed in requirements.txt into the container.

The EXPOSE instruction documents the port on which the application inside the container listens; it does not publish the port by itself, but it makes it easy to see which container port should be bound to the host OS (with -p at run time).

CMD specifies the default command to run when the container starts.
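
Note that CMD only sets the default command; once the image is built (as shown in the build step below), you can override it at run time. A small sketch, assuming the flaskapp tag used later:
$ docker run --rm flaskapp:latest python --version   # run a one-off command instead of app.py
$ docker run --rm -it flaskapp:latest /bin/bash      # open an interactive shell in the container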

requirements.txt

 
Flask

Redis
 

app.py (source: docker.com)

 
from flask import Flask
from redis import Redis, RedisError
import os
import socket

# Connect to Redis
redis = Redis(host="redis", db=0, socket_connect_timeout=2, socket_timeout=2)
app = Flask(__name__)

@app.route("/")
def hello():
    try:
        visits = redis.incr("counter")
    except RedisError:
        visits = "<i>cannot connect to Redis, counter disabled</i>"

    html = "<h3>Hello {name}!</h3>" \
           "<b>Hostname:</b> {hostname}<br/>" \
           "<b>Visits:</b> {visits}"
    return html.format(name=os.getenv("NAME", "world"), hostname=socket.gethostname(), visits=visits)

if __name__ == "__main__":
    # Listen on port 8000 to match EXPOSE 8000 in the Dockerfile
    app.run(host='0.0.0.0', port=8000)

To build the above Flask app and tag the image with a user-friendly name (here I’m naming it flaskapp):
$ docker build --tag=flaskapp .
To list the images we have built:
$ docker image ls
To run the above Flask application, mapping your machine’s port 4000 to the container’s exposed port 8000 using -p:
$ docker run -p 4000:8000 flaskapp:latest
To view the running container:
$ docker ps
If you are working on a local machine, you can check it in a browser at http://localhost:4000, or use the curl command in a shell to view the same content: curl http://localhost:4000. To stop the running container:
$ docker container stop container_id   (get the container ID from docker ps)
To remove a Docker image:
$ docker rmi image_id/image_name
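
Putting these commands together, here is a sketch of the full lifecycle; the container name flask_demo is just an example, chosen so you don’t have to look up the container ID:
$ docker run -d --name flask_demo -p 4000:8000 flaskapp:latest   # run in the background
$ docker ps                                                      # verify it is running
$ curl http://localhost:4000                                     # hit the app
$ docker container stop flask_demo                               # stop it by name
$ docker container rm flask_demo                                 # remove the stopped container
$ docker rmi flaskapp:latest                                     # then remove the image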

Advantages of Using Docker:

Rapid application deployment – containers include the minimal runtime requirements of the application, reducing their size and allowing them to be deployed quickly.

Simplified maintenance – Docker reduces effort and risk of problems with application dependencies.

Security - separating the different components of a large application into different containers can have security benefits: if one container is compromised the others remain unaffected.

Lightweight footprint and minimal overhead – Docker images are typically very small, which facilitates rapid delivery and reduces the time to deploy new application containers.

Sharing – you can use a remote registry to share your images with others, either a private registry or Docker Hub (see the sketch after this list).

Portability across machines – an application and all its dependencies can be bundled into a single container that is independent of the host environment.
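
For example, here is a minimal sketch of sharing the image built above via Docker Hub; replace <your-dockerhub-username> with your own account name:
$ docker login
$ docker tag flaskapp:latest <your-dockerhub-username>/flaskapp:latest
$ docker push <your-dockerhub-username>/flaskapp:latest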

When not to use Docker:

Multiple operating systems. Since Docker containers share the host computer’s operating system, if you want to run or test the same application on different operating systems, you will need to use virtual machines instead of Docker.

Your app is complicated and you are not (or do not have) a sysadmin. For large or complicated applications, using a pre-made Dockerfile or pulling an existing image will not be sufficient. Building, editing, and managing communication between multiple containers on multiple servers is a time-consuming task.

We will dive deeper into Docker, covering docker-compose and swarm, in the next part of this blog. Stay tuned…