Docker Tutorial For Beginners |  What is Docker | Online Devops Training | Intellipaat



Hello everyone this is Hemant from Intellipaat and I welcome you all to the session on docker tutorial for beginners.
Today in this session we are going to learn docker from scratch.
Yes you heard that right so if you’re one of those who have heard docker as a
buzz word and want to learn more about docker you have come to the right place.
So without wasting any more time let’s go ahead and start with the agenda to
see what all we’ll be covering in today’s session. So we’ll start this
session by an introduction to docker where I’ll make you understand why do we
need docker and what docker is exactly. Once you have understood that, we’ll look
at some of the common docker operations that you will be performing in your
day-to-day life. Once we’re done with that we’ll go ahead and understand what
is a docker file followed by what are docker volumes. After that I’ll show you
guys how you can break a monolithic application into microservices using
docker. Once we’re done with that we’ll move on and learn what is docker compose
then we'll learn what container orchestration is, and towards the end we'll
deploy a multi-tier application on Docker Swarm, which is a container
orchestration tool by Docker. Alright so guys this is our agenda for today, I hope
it’s clear to you so let’s go ahead and start with the first topic of today’s
session, which is: why do we need docker? So docker was essentially a need
which came out of the problem that we’re gonna discuss just now. So imagine
yourself as a developer and probably you’re just creating a website on say
PHP, so if you create a website on PHP the first thing that you would do is
write some code. Now when you want to test that code the kind of environment
that you would be putting it in let’s see what all components would be there
in that particular environment. So the first thing that you would need in this
environment that is around the PHP file is an operating system right so an
operating system which probably would have a browser or would have a text
editor, so you need an operating system to work on. Now
secondly because you are developing a PHP application of course you need the
PHP software installed on that operating system so that your file can actually
work. Now when you are developing a web application on a particular language
it’s not just the language which will suffice with what you are doing. You also
have to take into account the third-party applications, the third-party libraries
that you would be including in your code so the third component that you
would need when you are developing a website are libraries for example
if your PHP website has to connect to MySQL then it would need a PHP MySQL library for connecting to it, so these are the components that
are basically required by a developer to run his program. So what he does is
he writes a program he configures the environment according to the program and
when everything goes fine his program works well. Now the problem occurred when
this particular program had to be given to someone else so the developer
finished his job; now it was the job of the tester to check this program, so the
developer would give the PHP file to the operations guy and let’s see
what the operations guy would do. Now the operations guy is in the exact same
position that the developer was in when he was starting to develop his code,
but the difference over here is that now the PHP file is already written; it is just
that he has to run this file on his system, so the first thing that
the OPS guy would do is replicate the environment that the developer is having
right so he would get the same OS the same software and the same libraries to
run that PHP program. Now there are a lot of versions for a particular software
for example for PHP you have PHP 7 which is currently running but there are
companies which are still working on PHP 5.6 because of, you
know, probably legacy problems, or because the commands have changed and if they
want to move on to 7 they'd have to change all the commands in the codebase;
there are a lot of reasons, right? So there is a particular
version that this company is also following, so this ops guy has to have the exact same
version so as to make the developer's file work. Now he used his
best possible judgment and he installed the PHP software and he
installed the libraries and the OS, but still the problem was the ops guy
couldn’t make the file run and he came to a conclusion that the code is faulty
but the developer says it ran fine on my system it’s your system which has the
problem so fix it by yourself. So these were some of the statements which were
exchanged between developers and operations guy. I mean if you
think about it it was actually nobody’s fault. The operations guy did what he could in the best of his ability to
replicate the environment but it didn’t work out now you guys might like argue
that why don’t we you know use VMs in this case probably the developer could
work in a particular VM and then just give that VM to the OPS guy and then he
can work on the same VM and probably run the file right I mean that is a viable
solution but the problem is the size of VMs is too large to you know give to the
OPS guy and give to the testing guy on every feature that a developer adds. Because in that case every developer working on a VM
has to have his own version of the VM: if you are adding say feature A,
and feature A requires three or four more libraries that you have included in
your particular VM, it will become inconsistent with what other developers
have. Now imagine the ops guy having several versions of VMs and not
knowing what kind of VM he has to deploy on the production server or on the
testing server which basically will act as the final version of the image on
which all the features of all the developers will work now it’s a very
hectic situation. So the problem was that VMs were very large, and they were
complex as well to handle; when we are talking about a staging area, or the
deployment or production area, they were a little
difficult to handle. Now this was the problem that was there before docker. So
what did we need. We basically needed a wrapper like in the case of what we just
discussed the wrapper was a VM wherein it basically contained all the operating
system files all the software all the code and all the libraries together and
could be basically lifted and shifted to a different system where it would
execute but we needed something more portable we needed something more
smaller that could easily be given to some other person for testing right we
needed something like that we needed a wrapper around our files and the answer
to this was this particular scenario that now if we get a wrapper which will
basically contain all the files in that case each and every environment of ours
will become the same the developer environment is going to be the same and
the moment it gets passed on to the ops guy, he will again deploy the
same wrapper and his problem is also solved because the ops guy does not have
to match the versions of the OS, the software, the libraries because
everything is there inside that wrapper everything is there inside that
container. He just has to run that container and he can just see if the
code is working properly according to what has been specified or not. So
the answer was that all the environments that developer environment, the staging
or testing environment the production environment all of these environments
are now going to be the same and this was possible using docker. So what
is docker? Docker is basically a software which helps us create this wrapper
around the files of a code, the operating system and the libraries that we
mentioned are required for a particular code to work. But the
awesome thing about docker is that you do not need the environment to be in
GBs, right? You do not need it to be somewhere around 900 MB or 800 MB or
1 GB or 1.5 GB; a container could be as low as 50 MB, that is the size of
a container called Alpine, it's as low as 50 MB. So it becomes very easy to
basically give it to other people to test your code with all the environment
set up in that particular container. So docker is a software which
helps us enable that wrapper; docker is basically a tool which helps us in
containerizing an application. Now let's see how basically docker
is doing this how is it so portable how is it so low in size so basically this
is the architecture of docker so at the base level you have the hardware that
basically means that you would have a computer and on that computer you would
have an operating system present right so this is your operating system so it’s
as simple as you have a laptop and on that laptop you have Windows or Linux
installed right now once you have the operating system installed the next
thing is the container engine that you would install so the container engine is
nothing but the docker software that you would install on top of your operating
system and that is it once you have installed the container engine you can
run any type of container that is there in the repository on Docker Hub or on the
web. For example, if your code needs an Ubuntu container,
all you have to do is write docker pull ubuntu, and what would it do? It would
basically download the Ubuntu container and then you can add your code files
inside that Ubuntu container and start working. But how is it low in size,
how is it not in GBs? When we compare it to the Ubuntu operating system, it's
around 800-900 MB, right, the Ubuntu desktop version, but how come this
container is so small so the answer for that is the container does not contain
all the operating system files. As you can see, the container engine
actually shares its space with the operating system; a container
does not have a kernel in place, right, it shares the underlying
kernel of the operating system on which the container engine
is installed so if I talk about say you know you have an Ubuntu operating system
or you have a Windows operating system and basically what you do is you
download the container for CentOS. So CentOS is basically a Linux
distribution, right, so you download the container for CentOS, and now when you're
working in that container you feel as if you are in CentOS, but the point is
that the CentOS operating system is actually not installed. What is basically
happening is your container is sharing the resources from the operating system,
but your container has the minimum binaries and libraries that are required
to run the CentOS commands. For example, for installing anything on
CentOS you pass the command yum install and then the package name, right,
so it would have the yum package there in the container, it would
have all the repository URLs there in the container, and that's how it behaves
like a CentOS operating system, although the container is nothing but a set of
binaries and libraries which are important for the CentOS commands to
work. That is all a container is, but with the underlying
virtualization of the kernel of the operating system, your container
seems to be like an operating system although it is not. And it's for
this very reason, that it virtualizes the underlying kernel to make it seem as
if it's running on its own kernel, that is the
reason that the containers are very lightweight because they don’t need to
have all the files required for an operating system they just need the bare
minimum binaries or libraries required for that particular environment to work
ok so if you again come back to see this architecture you have the hardware on
top of the hardware you have the operating system on top of operating
system you install the docker software and on top of docker software you run
containers so all these containers have the minimum binaries or libraries and on
these you basically just put your code in my case I will put my PHP code on top
of the container and that would be app 1 and there could be multiple containers
running on my system; for example, one code file is running in an Ubuntu container, the other
code file could be running in a CentOS container, the third could be running
in some other operating system or some other flavor of Linux container, and
all this is possible using the docker software. Ok, so this is the architecture
for a docker container or a container in general and now let us compare it with
the VM right so I gave you the example of VM and I said that VMs are heavy in
nature so let us understand how containers are different from VMs
alright, so on the right side you can see the architecture of a VM, guys. So on the
VM side you have the hardware, which is the same as in the docker architecture that
we saw; then on top of hardware you have the host operating system, which is the
same as in the docker setup that we saw. Now on top of the host operating system
you have a hypervisor or a VirtualBox kind of software which will basically
virtualize the hardware from the operating system and give it to the VM
running on top of it right so you have hypervisor software in
case of VM architecture and then you have the container engine software in
case of a docker architecture. Then on top of the hypervisor you have the
guest operating system, and this is where all of the difference lies, right:
so in case of containers you just have the bare minimum binaries or libraries
which are present but in case of your VMs you have the whole operating system
which is basically installed on top of the hypervisor alright so because you
have the whole of the operating system installed your size of the VM is quite
bigger than what you have in the container. Now again, the VMs
could be multiple in nature; when we talk about how many VMs can be run,
it totally depends on the specs of the machine, right, what the
specifications of the machine are. So in VMs also you can run multiple VMs on a
single machine and also one thing to notice over here is that there could be
or there could not be a host operating system in case of VMs right so in case
of virtualization, technology has advanced so much that the hypervisor
itself does not require a host operating system; it could directly work
at the hardware level as well. But usually when we work in R&D, or when
we have the kind of laptops that we have right the kind of specification that we
have we go ahead with this kind of an architecture where we have an existing
operating system on top of that we install tools like VMware or VirtualBox
and on top of that we install the VMs. Alright, so this was the basic difference
between a docker architecture and a virtual machine architecture. Moving
forward now, so we have understood why we need docker
and we have understood what docker is exactly. Now let's go ahead and
see how we can install docker on our machines now when it comes to
installation there could be basically three kind of installations that you
might come across you could either have an apple or a Mac system that you are
working on you could have a Windows system that you’re working on and you
could have a Linux distribution that you’re working on so let me walk you
through all the three domains in basically installation so that we are on
the same page once we come out of installations and when we are doing
hands-on I hope you guys all will be able to you know go along with the video
as we are performing the hands-on all right so if you’re on Mac guys all you
have to do is go to this link so this link is basically going to download the
docker toolbox for Mac and once you have the docker toolbox
it basically installs everything: it will install Docker Compose, it will install the
docker software, it will install the other components of docker, so you don't
have to worry about what component is what. Basically just go to this
particular link, download the docker software,
and once you have the docker software just come back to the CLI and
pass the command docker version, and if you get the reply as the
version of docker which is running on your system, your software is
basically successfully installed. So this is the installation format; if
you talk about the installation in Windows just again just go to this link
download the docker toolbox and everything is going to be set up
automatically for you. When we talk about Ubuntu, things are a little
simpler on the Ubuntu side: all you have to do is pass these commands and your
docker would be up and ready. So for our sake we will basically
be doing all our hands-on today on the Ubuntu distribution of Linux. So what
I'm gonna do is I'm gonna just go back to my terminal, so that we can use the
PuTTY software to basically connect to our Ubuntu distribution, so just give me
one second. Alright guys, so this is basically my
Ubuntu distribution, so what I'm gonna do is I'm gonna install docker
in it. So the first thing that I would pass is the command sudo apt-get
update. So the first command that I'm gonna pass
is sudo apt-get update. Once I've done that, the next command that I'm gonna
pass is sudo apt-get install docker.io. All right, and then you'll be prompted
for a yes or no; just type in Y, which will mean yes, and this will install all the
packages required for docker to work on this particular system right so if your
internet is fast it will hardly take a minute to install docker, and once
docker is all installed and set up we can go ahead and check whether
docker is working on our system or not by typing in sudo docker version. So just
give it a minute and this will all be over soon
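To recap, the installation steps we just ran can be summarized like this (a sketch for Ubuntu; the docker.io package name comes from Ubuntu's own repositories, and the commands assume you have sudo privileges):

```shell
# Update the package index, then install docker from Ubuntu's repositories
sudo apt-get update
sudo apt-get install -y docker.io   # -y answers the yes/no prompt automatically

# Verify the installation by printing the installed version
sudo docker --version
```

If the last command prints a version string, docker is installed and ready to use.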
alright so docker is installed now and all we have to do now is check the
version, so it's docker --version,
and as you can see, the docker version is 18.06.1-ce
and the build is this. This basically confirms that docker has been
installed on our system, so we've successfully installed docker. So now let's go ahead
and see what is in store for us on our next slide. All right, so we have
installed docker on our system guys, and
depending on the kind of OS that you are working on, I hope you guys have also
installed it on your system; if not, you can pause this video, do that
first and come back because I would want you guys to learn as much as you can
from this session and I would want you to follow me
while I’m doing the hands-on so that you get the maximum knowledge possible from
this session alright so the next thing that we’re going to do in this session
is get acquainted with the docker container lifecycle alright so basically
this is how this this is the whole lifecycle of what a container goes
through when we talk about the docker ecosystem now if you guys know what
github is or if you guys have worked on github you guys might know that there is
a central repository whenever you start or whenever you have to start working on
a particular codebase the first thing that you have to do is pull the codebase
from the central repository alright so that central repository is
nothing but something on the cloud or something on the internet which holds
all the codebase for your organization or for your team that everyone is
working on right so that is basically a central place where you can download
everything from. And similarly in docker we have something called Docker Hub.
Now Docker Hub would contain all the container images that are present; there
are open source images that are present, for example I told you that you could
run a container on Ubuntu, you could run a container on CentOS, you can run a
container on Alpine, you can run a container on some other operating system,
so all of that is present on Docker Hub. So the first thing that you do is
pull an image from Docker Hub onto your system, and this is your system where
docker engine is installed and basically what you download
from docker hub is an image of a container okay so you get the image and
the next step that you do is you run that image, and the moment you run an
image it becomes a container. So you can run that image, and that would
basically be the normal state of a container, that it
is running. Then there is again a lifecycle
stage where, when you're done with working on the container, you can stop it, and
once you are done with everything and you don't need it on your system anymore, you
can remove it. So this is the lifecycle of a container: the first thing that you
do is you pull it from Docker Hub, it comes down as an image, then you run that image
and it becomes a container, and the containers can either be in the running
state, the stopped state, or the deleted state. So this is the life cycle;
this is basically, I would say, a summary of what happens inside docker, but of
course there is more to it and we will see as we move along in the session okay
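The lifecycle just described can be sketched as a short command sequence (this assumes docker is already installed; <container-id> is a placeholder for the ID that docker run prints):

```shell
sudo docker pull ubuntu            # pull: Docker Hub image -> local image
sudo docker run -it -d ubuntu      # run: image -> running container (prints its ID)
sudo docker stop <container-id>    # stop: running state -> stopped state
sudo docker rm <container-id>      # rm: stopped state -> removed from the system
```

Each command moves the container one step along the lifecycle: image, running, stopped, deleted.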
so we've understood the common ecosystem of docker, we've
understood what docker is, we have understood how the containers work
inside the docker ecosystem: you first download them, they become an image, then
you run that image and they become containers. So with this knowledge
now, and with docker installed on our systems, let's go ahead and perform some
common docker operations that you would be doing when you are working on
docker in your day-to-day life. All right, so this is the first command that we can
try out when we are working on docker all right so whenever you want to find
out what version of docker you're working on, you can simply pass
this command docker version and then you can basically get the version that
you're running on. Let me quickly change the color of the terminal because I feel
that this color is a little dull, so that it becomes better for us to see,
just give me one second all right much better so when we pass
this command docker version you get the docker version
that's installed on your system along with the build name, and this is exactly
what this command does. All right, the second command that we can go ahead and
try is docker pull. So we saw the container lifecycle: the first thing that
you do is pull the image from Docker Hub. The syntax for
pulling the image is docker pull and then the image name, so as you can see in
the screenshot, we passed the command docker pull ubuntu, and what it did was it
downloaded a container image from Docker Hub automatically. So what we can do is we
can also try that out so let me just clear the screen all right okay so we’ll type in docker pull and
then the image name, so this is the image name, and we'll hit enter. Right, so
sorry about that, forgot to use sudo, alright,
so sudo docker pull ubuntu, hit enter, and this would download the latest ubuntu
image on your system. Now remember guys, we have just downloaded the image, we
have not run the container yet right so our next command would be docker images
so now if you want to check what image did you download and is it existing on
your system, to verify, all you have to do is type in docker space images. So we'll do
that: sudo docker images, and then you would be able to see the image we
just downloaded. Okay, so this is the image that we just downloaded and you
can see the size, it's 86.2 MB. Eighty-six point two, well that's too low to be an
operating system, right? Imagine the beauty guys: an Ubuntu operating
system VM that you're working on is I think around 1.5 or 2 GB, but when we talk
about this container it's hardly 86.2 MB, awesome right? So
once you have seen the images the next step would be to run this image right
now to run this image, the command is docker run along with the
image name, and you use some flags as well when you are using docker run, so
let me explain while I am executing this command. So sudo docker run, this is
the syntax; the next flag that you add is -it, which basically means make the
terminal interactive, and -d, which means run the container as a daemon, run the
container in the background. So whenever I run the container, keep it running in
the background until basically I stop the container, that is what
-d does. So the command would be sudo
docker run -it, it means make the container interactive in the terminal so that I
can also pass on commands, -d, which basically means make the
container a daemon, that it should be running in the background even though
I'm not working on it, and then you pass in the image name. So the image name is
ubuntu and then you hit enter. So can you see, you just got an ID of the container,
so basically, if you get something like this, this means that your
container has just started and then to view all the running containers all you
have to do is pass the command docker PS and that would list down all the
containers which are currently running in your system
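The flags just described can be sketched like this (assuming docker is installed; these are the standard docker run options):

```shell
# -i keeps STDIN open, -t allocates a pseudo-terminal, -d detaches
# the container so it keeps running in the background
sudo docker run -it -d ubuntu   # prints the new container's ID

# List the containers that are currently running
sudo docker ps
```

The long container ID printed by docker run corresponds to the shortened CONTAINER ID column shown by docker ps.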
so let us do that as well so let me just clear the screen so just pass in sudo
docker PS and this would basically show you the container which you have just
started so as you can see I started Ubuntu container 29 seconds ago and this
is the container ID for that particular container alright so my container is
running, so what's next? Next step: docker ps
would basically show you the containers which are in the running state, but
what if you want to see all the containers which are there in the system
for example what I can do is I can do a sudo docker stop and I can stop this
container, so this container is now stopped, and what I can do is I can run
one more container, which would be sudo docker run -it -d and then ubuntu, sorry
about that. So if I do a docker ps, I would only be able to see the container
which is currently running so you can see it is up seven seconds ago but if I
want to see all the containers which are there on my system, that could either be
in the running state or stopped state, all I have to do is type in the command sudo
docker ps and then -a. With this I can see the containers which
are there, so this is a container which is running, which was made around 27
seconds ago, and this is a container which has exited, which we manually
stopped, and we can see that also by passing the command sudo docker ps
-a. Okay, now the next step would be to work with the containers. So we have
started the containers but the next step would be to start working on them and
how can we do that we can basically do that using the command docker exec so
what we'll do, we'll just first get the container ID, so that's docker ps. All right, so this is
the container which is currently running. Now if I want
to get inside this container, the command for that would be sudo docker
exec, so e-x-e-c, then -it, make it interactive,
give the container ID, and then bash. Bash means that I want to run this
container in the current terminal type that I am working in, and the current
terminal is bash, and I'll hit enter. So let me just clear the screen. So
can you see that you are now inside the container right so this is the container
ID and this is the user of the container, so we are basically acting as root
at that container ID, that is, we are inside the container and we are acting as root,
so this is basically the environment this is the environment that I was
talking about that the developer will start working in right and this is
basically an Ubuntu container, so all the Ubuntu commands are gonna work inside
this particular container so once you are inside the container we can do
whatever we like, so for example if I have to update the container I can do an
apt-get update and it will start updating the container as if it's a new
operating system which is running on the system right and also to show you guys
that this is completely independent from what we
were doing outside the container: you remember we have docker installed on
our host operating system; now if I try to access docker from here I will not be
able to do that. So if I do a sudo docker ps, you can see that it says, okay let me
do the command docker ps, you can see that it says docker command not found, right,
so basically docker is not installed inside of the container and that is the
reason it is not able to access docker also an interesting thing that you can
see over here is that I passed the command sudo docker ps and it says sudo command
not found, so the sudo package is not
installed in the container. So I told you, only the bare minimum libraries
which are required for a container to make it behave as a particular operating
system are present in the container,
and nothing else, and that is why you can see even sudo as a command is not
present inside the container all right so if you are inside the container and
you want to exit the container all you have to do is type in the command exit
and this will make you come out to your host operating system but mind you guys
your container is still running, so if you do a docker ps you can still see the
container running in the docker ps output, all right.
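The docker exec workflow just demonstrated looks like this as a sketch (replace <container-id> with the ID shown by docker ps; bash is assumed to exist inside the image, as it does in the ubuntu image):

```shell
# Open an interactive bash shell inside a running container
sudo docker exec -it <container-id> bash

# ...commands typed here run inside the container, e.g.:
#   apt-get update

# Leave the shell; the container itself keeps running in the background
exit
```

Note that exit only closes your shell session, it does not stop the container.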
so this is how you can get inside a container all you have to do is docker
exec – IT and then the container ID space the terminal that you are working
in which in my case was bash all right okay now again so this I already showed
you that if you want to stop a container all you have to do is sudo docker stop
and then the container ID hit enter and the container will be stopped and then
if you do a docker ps you will not be able to see any containers
running inside the system. As you can see, once we did a stop,
there are no containers running there in the docker ps output now. Okay, you
can also kill a container in case a container becomes you know
non-responsive and you’re stopping the container but it’s not able to exit what
you can do is you can kill the container. It's similar to stop, but when
you stop a container it basically gracefully exits the container, just
like shutting down your computer, whereas killing it is like switching off the power switch
at the computer's power outlet. So if you have stopped the container
but the container is still not stopping, because of some program which is
running in a loop inside the container or something like that, you can
immediately kill the container using the command docker kill and then the
container ID then you have something called as docker RM so basically I told
you guys that if you do a docker ps -a you can still see the containers
which were stopped, which were in the stopped state, but as we saw in the
lifecycle there's a third stage of docker containers, which is basically the
delete stage, and to reach the delete stage you pass in the command docker rm and
then the container ID and this would basically delete the container from your
system. Now let's see how we can do that: for example, if you want to remove
both these containers from the system, what I have to do is take the container
ID pass in the command sudo docker RM and then pass the container ID hit enter
and this would remove the container from my system so similarly if I want to
remove the second container as well I just have to pass in the command sudo
docker RM the container ID hit enter and the containers will be removed and now
if I pass in the command let me just quickly clear the screen now if I pass
in the command sudo docker ps -a, you can see there are no containers which
exist now, because we just removed them from the system using the command docker
rm. Alright, moving forward, now what to do when you have to remove an image? Now you
guys know we already have an image on our docker system — the Ubuntu image. If you want to remove this image from the system, the command for that is docker rmi and then the image ID, and this removes the image from your system. So if I want to remove it, all I have to do is type in sudo docker rmi — rm was to remove a container; if you want to remove an image, just add an i to the rm command — and then the image ID. Hit enter and this deletes the image from your system. So if you do a sudo docker images now, you can see that the image is not present on the system anymore. Alright, so this is how you can remove an image from your system. And that was it, guys — these are some of the common operations that you will do in your day-to-day life while you're working on docker, and I hope you are now well acquainted with all of them. What I would suggest is: pause this video, try out all of these commands yourself, and once you're done with that, resume the video.
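(A quick recap before we move on — the day-to-day operations from this section, collected in one place. The IDs are placeholders for whatever your own docker ps shows, and a running Docker daemon is assumed:)

```shell
docker ps                    # list running containers
docker ps -a                 # list all containers, including stopped ones

docker stop <container-id>   # gracefully stop a container
docker kill <container-id>   # force-stop an unresponsive container

docker rm <container-id>     # delete a stopped container
docker rmi <image-id>        # delete an image from the system
```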
basically to create a Docker Hub account. So, like I was comparing docker to GitHub: if you have worked on GitHub, you also have to create an account there if you want your own repository, where you push your own personal code for R&D or whatever you want to call it. Similarly, you have something like that with Docker Hub as well. If you have created a container image for your own testing or R&D purposes and you want to push it to Docker Hub — so that later you can pull it whenever you want, or share it with other people — all you have to do is create a Docker Hub account and push your container image to that account. Alright, so that is possible, but the first step is to create a Docker Hub account, and for that you have these steps. First you go to the website hub.docker.com — let me quickly show you. I go to the browser, open hub.docker.com, and you see a website which looks something like this. The next thing you have to do is sign up on this website, so just choose a Docker ID. This Docker ID is basically like your username, and it has to be unique. Enter the Docker ID, enter your email address and your password, agree to the terms and conditions, and just click on sign up. After this you will get a verification email on the email address that you specified; verify your email and that's it — your Docker Hub account is all set up. I already have a Docker Hub account, so let me quickly show you that; my Docker Hub account looks something like this, give me one second. Alright, so guys, this is my Docker Hub account, and as you can see I have some personal containers that I have uploaded over here — I'll show you how you can do this as well. But before that, let's come back: once you have set up your Docker Hub account, you will be able to see your account, which looks something like this. And always remember your user ID, guys, because it is going to help you out in the future when you are pushing your custom images onto your Docker Hub account — always keep your user ID handy. Alright, so this is how you can sign up for Docker Hub. Once you have signed up, let's go ahead and see how you
can save changes to a container. Alright, so what does this step mean? It means that, say I'm on my system and I run a docker container — so let me run one. Alright, this downloads and runs the container for me; sudo docker ps, and this is the container that I want to go into. Alright, so I'm now inside the container. Let us do an ls, and you can see these are all the directories which are present inside the container as of now. What I want to do is create a directory — say I create a directory called app. So if I do an ls now, you can see there is a new directory, app, which has been created inside this container. Now if I exit this container and do a sudo docker ps, I can see that the container is running. But here's the catch: say I delete this container. Okay, so one more thing, guys — if you want to remove or delete a container which is running, you can pass in the command sudo docker rm -f and then the container ID. The other way is to stop the container first and then delete it; that is the usual way of doing it, but if you are in a hurry and you want to delete a container which is in the running state right now, just type in sudo docker rm -f and then the container ID, and this removes your container. Alright, so I've deleted my container now. If I again run the image — sudo docker run -itd and then the image name — and go inside this new container and run an ls, you can see the directory that I created is nowhere present over here. The reason for that is that whatever changes you made were only present inside that container, and when the container is deleted, those changes do not propagate into the image that you originally downloaded. Alright, so if you want to make changes to a container and want those changes saved into an image — so that when you later launch a new container, it has all the files, folders, or software that you installed in the original container — you have to save these changes. So let's go ahead and learn how you can save these changes
inside a container. Alright, so for saving changes inside a container, the command is docker commit. You pass in the command docker commit, the ID of the container that you want to save, and then the new image name. Basically this creates a new image out of a container: after the container ID, you give the name that you want that custom image to have. So for example, in my case, I create the folder app; when I do an ls I can see the app folder is there now. I just exit the container and do a sudo docker ps — I can see this is the container ID, so I just copy that — and then sudo docker commit, paste in the container ID, and then the new image name; let's say the image name is test. Alright, so with this, if I now do a docker images, I can see there is a new image present, called test. And now I can run a container from the image test: all I have to do is sudo docker run -itd and then test. This runs the container, and if I go inside it now, my changes are there — if I do an ls, I can see the app folder is present. And this is how you can create a custom container image. For demo purposes I just created a folder, but you can also install software inside a container — for example you install Apache, you install MySQL, or any other kind of software that you want — and then all you have to do is save the container using the docker commit command and you're all set: whenever you run that image again, you will have all that software installed inside the container. Alright, and this is how you save changes to a container. Alright, so
now let's do this example. So basically what we're going to do is run an Ubuntu container, install Apache on it, and once Apache is installed, save that container into an image. Alright, so let's see how we can do this. Let's clear the screen, exit this container, and first clean everything up: sudo docker ps — so there are two containers running. Okay, let me show you one more command, which is basically a shortcut: if there are, say, five or more containers running on your system, you do not have to pass in sudo docker rm -f with each of the container IDs one by one. The shortcut to remove all the containers at once is the command sudo docker rm -f, fed with the output of another command, which is sudo docker ps -aq. When you pass in this command, it removes all the containers which are present on your system. So if I do a sudo docker ps now, you can see there are no containers running anymore. Alright, so this is a shortcut that you can use while working with docker. My next step is to install Apache.
with docker alright so my next step is to basically install
so first I have to run and open to container so I will do a sudo docker –
run I have an ID Ubuntu alright so my container is now
running the next thing that I have to do is install a patches over on this
container ok ok so let’s take it a little differently now so basically ok
so let it be like this we’ll come back to the path that I wanted to explain to
you guys later so basically now I’ll just exact into
this container okay I’m in first thing that I’ll do I’ll just update the
container so apt-get update once is updated the next step is to
install Apache and for other commanders apt-get install Apache to and this
install the Apache software all right and so
a partay to status would give us whether Apache is
installed or not so it says Apache 2 is not running so
let’s start the service so as Apaches to start hit enter and if I now check the
status status of budget – I can see that Apache 2 is running inside the container
now awesome this is what I wanted so let me just accept the container now and let
me save the container. So we have installed Apache on our container; the next step is to commit these changes so that it becomes an Apache container. Alright, so I'll do a sudo docker ps — this is the container ID — and then sudo docker commit, give the container ID, and now give the image name. Let's give it as apache — or, let me make it a little simpler for myself for when I move ahead. Whenever you want an image pushed to your Docker Hub account, there is a certain nomenclature you have to follow when naming your image: first you write the username of your Docker Hub account — remember I told you to keep your user ID handy, and this is exactly why — then a slash, and then whatever name you want to give to your image, which here would be apache. So the naming is your user ID, slash, the name you want to give. Alright: sudo docker commit, the container ID, and then the name that you want to give to your custom image. Hit enter, and this saves your image. So if I do a sudo docker images now, you can see that our image has been saved over here with the name hshar/apache. You should also notice that the size of the image is now 212 MB: it was 86 MB before, but because we have installed software on it, the size has gone up to 212 MB, which is fine.
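Putting that whole install-and-commit workflow in one place — a sketch, where the container ID and the Docker Hub user ID are placeholders to substitute with your own:

```shell
# 1. Launch a fresh Ubuntu container and open a shell inside it
docker run -itd ubuntu
docker exec -it <container-id> bash

# 2. Inside the container: install Apache, then leave the shell
apt-get update
apt-get install apache2
exit

# 3. Back on the host: freeze the container's state into a new image.
#    Name it <docker-hub-user>/apache so it can be pushed to Docker Hub later.
docker commit <container-id> <docker-hub-user>/apache
docker images    # the new, larger image shows up here
```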
Alright, so now we have the new image. What I'm going to do is remove all the containers which are running now: sudo docker ps — this is the container running — so let me just remove it with rm -f. Awesome. Now, just to check whether my image is functioning properly, I pass in the command sudo docker run -itd and then the image name, which is hshar/apache. Okay, and I'll introduce you to one more flag, which is -p. What -p does is port mapping. Now that I have installed Apache, I want to check whether Apache is working: normally Apache works on port 80, so on the container's port 80 Apache would be running, but if I want to check from outside whether everything inside the container is running fine, I have to map the internal port of the container to the outside host operating system. In this case, say I want to map port 82 of my host operating system to port 80 of my container: the command for that is -p, then the host port number that you want to expose, a colon, and then the container port that you want to link it with. So I want to link port 82 with the internal port 80 of my container, which is running the image hshar/apache.
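The -p flag in one place — a sketch using the same ports as this demo and a placeholder image name (82 is an arbitrary host port; Apache listens on 80 inside the container):

```shell
# -p <host-port>:<container-port> forwards traffic from the host port to
# the port inside the container, so host:82 reaches Apache on container:80
docker run -itd -p 82:80 <docker-hub-user>/apache

# Verify from the host that Apache answers on the mapped port
curl http://localhost:82
```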
I'll hit enter, and this creates the container; if I do a sudo docker ps, I can see the container is now running. So let's exec into this container: sudo docker exec -it, give the container ID, bash. If I now do a service apache2 start, it should either give me an error that apache2 was not found, or it should start the server — so let's see what happens. It says starting Apache web server, which basically means Apache was present inside this container, because we did the sudo docker commit after Apache was installed, and the service has now been started. Now remember, guys, we mapped it to port 82, so let's check whether we can access the Apache software on port 82 of this server. Alright, let me just go to the IP address of the server — this server is running on AWS, so this is its IP address, and I'm opening it on port 82. Right now, if I hit enter it will not work, the reason being that I first have to open the ports on this machine, so let me just do that — let me allow all traffic so that we are on the safe side. Okay: IP address, colon, the port number, which is 82, hit enter, and you can see Apache is successfully running. So mind you, guys, Apache is not installed on the server itself; it is installed inside the container, and that is what we are able to access in the browser, using the port that we mapped on the container. Alright, so for our verification, what I'll do is sudo docker ps, and I'm going to stop this container now: sudo docker stop, pass the ID. And now if I do a refresh over here, you can see that it says the site can't be reached, the reason being that I've stopped the container. Alright, and this is how you can save changes to a container, install software, and create a new image out of it. Now the next step would be to push
it to Docker Hub. So like I told you guys, once you are done working on your container and you have created a custom container image that you want your team to access, or that you want on Docker Hub for safekeeping, you can push it to Docker Hub, and the command for that is sudo docker push, which I'm going to explain in a little bit. But before that — we saw the lifecycle, right? We saw that we can push to or pull from Docker Hub, and this is the pushing stage: we had an Ubuntu container, we installed Apache on it, we committed the changes, and now we have an image on our local system. The next step is to push it to Docker Hub, so let me show you exactly how to do that. This is your Docker Hub, guys, and the first thing you have to do is log in to Docker Hub from your console. For that, you type in the command sudo docker login. On passing this command you will be asked for the username, so pass in the username of your Docker Hub account, and then the password. If there is any problem with your password, it gives you this error — so let me just try once more: I pass in the username, then the password, and on a successful login you get this message: login succeeded. Awesome, so I have logged in to my Docker Hub. Now what I want to do is push my image, so I do a sudo docker push — and what was our image name? It was hshar/apache, the name we gave to our custom container image. Hit enter, and it starts pushing your image to Docker Hub. Once it has pushed the image, you can just visit your Docker Hub repo and you will be able to see your image listed over here: just do a refresh, go to the last page, and you can see this is the image that I just pushed to the repository, which is hshar/apache. Awesome, so we have successfully pushed our image to Docker Hub — it's very easy, just type in the command sudo docker push and that should be it.
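Condensed, the publish workflow looks like this (it needs a Docker Hub account and network access; the user ID is a placeholder):

```shell
# Authenticate against Docker Hub (prompts for username and password),
# then upload the custom image. The image must be named
# <docker-hub-user>/<image-name> for the push to land in your account.
docker login
docker push <docker-hub-user>/apache
```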
Alright, so our next topic is an introduction to the Dockerfile — let us go ahead and learn what a Dockerfile is. But before that, let us recap what we have just learned: we understood why we need docker and what docker is exactly; then we went through some of the day-to-day docker operations and got acquainted with them; then we saw how to run a container, how to make changes to a container and save them on your local system, and finally how to push that container image to Docker Hub. That is what we have learned; the next step is the Dockerfile. So, a little introduction to the Dockerfile: you just saw that if I had to make changes to a container, I basically had to run the container first, go inside it, install whatever I want, and then come out and commit the container — only then does it get saved. But there is a shorter version of this. When you're working in a production-grade environment, everything has to be lightning fast, and for making changes to a particular container image you can also use something called a Dockerfile. So we're going to discuss now what a Dockerfile is. A Dockerfile is nothing but a text document in which you write down how you want your container to be customized. Like I just told you: when I did it manually, I ran a container, went inside it, installed software, came out, committed the container, and then pushed it to Docker Hub — all of these changes can be automated using a script-like file which we call the Dockerfile. A Dockerfile is very easy to write; there are some syntaxes that you have to learn, but I'm sure they are very easy to pick up, and once you learn them, it will only be the Dockerfile that you'll be using rather than doing everything manually. Alright, so let's go ahead and
see how we can create a Dockerfile. So guys, these are some of the important syntaxes that are relevant to creating a Dockerfile. The first line of a Dockerfile is always FROM — the image on which you want to make changes. For example, we made changes on the Ubuntu image when we installed Apache: what we did was docker run -itd and then ubuntu. In a Dockerfile, the base image that you want to work on has to be specified on the first line using the keyword FROM. This area of the presentation shows the Dockerfile content — this is exactly how the content of a Dockerfile should look. So the first command is FROM: FROM ubuntu, meaning the base image I'm going to work on is Ubuntu, and that is the first line. The second command is ADD, which is used to add files into the container. For example, say I create an HTML file and I want that HTML file added inside the Ubuntu container: the command is ADD, where the first argument is the place where the files are present and the second argument is the location inside the container where you want those files to be copied. So ADD, space, dot means take all the files from the current directory and put them inside /var/www/html. Alright, that is what the ADD command does. The third is RUN: whenever you want to run any command in the container — for example, I ran apt-get update and apt-get install apache2 inside the container — you can do that from the Dockerfile using the keyword RUN. Over here there are two commands that I want to run: the first is apt-get update and the second is apt-get -y install apache2. The -y answers the prompt we would otherwise get asking whether we want to go ahead with the installation: if we specify -y explicitly in the Dockerfile, it will not ask, and the installation continues without any prompts. Alright, so the RUN command is used for any command that you would otherwise have run on the container's terminal. Okay, then you
have a command called CMD. The CMD keyword is used to run a command at the start of the container, and this command runs only when no argument is specified while you are running the container. While running a container, you can also specify a command to run: the syntax is docker run -itd, then the image name, and after the image name you can optionally give a command that you want to run inside the container. If you don't specify anything there, then the command given in the Dockerfile's CMD line runs when the container starts; otherwise, whatever command you specify in the docker run command is what runs. Okay, so that was the CMD line. In this case, what do we want? Remember, when we started the container we always had to go inside it and start the Apache service manually — that is, once the container started, go inside and type in the command service apache2 start. That had to be done manually, but when you pass it via CMD — which is nothing but the command to run at startup time — then whenever the container starts, this command executes automatically. So CMD apachectl -D FOREGROUND does nothing but run Apache the moment the container starts, and that is exactly what we want. This is possible using CMD. So again, I'll repeat: CMD is used to run a command at the start time of a container, and it runs only when there is no argument specified in the docker run command — if there is no argument, the CMD command runs; otherwise it is skipped.
Alright, the next command is ENTRYPOINT. Now ENTRYPOINT is almost the same as CMD, in that it runs at the start of the container, but the difference is this: CMD will not run if you specify an argument in the docker run command, whereas ENTRYPOINT runs irrespective of whether you have specified an argument or not. Alright.
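A minimal sketch of the difference — two hypothetical images, each built from its own one-line Dockerfile (the echo commands are just illustrations):

```dockerfile
# Dockerfile for image A — CMD is only a default, so an argument
# given to docker run replaces it entirely:
#   docker run imageA           runs: echo "hello from CMD"
#   docker run imageA ls /      runs: ls /   (the CMD line is skipped)
FROM ubuntu
CMD ["echo", "hello from CMD"]
```

```dockerfile
# Dockerfile for image B — ENTRYPOINT always runs, and any
# docker run arguments are appended to it:
#   docker run imageB           runs: echo "hello from ENTRYPOINT"
#   docker run imageB again     runs: echo "hello from ENTRYPOINT" again
FROM ubuntu
ENTRYPOINT ["echo", "hello from ENTRYPOINT"]
```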
So CMD and ENTRYPOINT can often be used interchangeably, but if there are cases where you will be running the container with an argument, it's better to use ENTRYPOINT so that the command does not get skipped — the CMD command gets skipped if you specify an argument after the docker run command. Okay, so in our case we use apachectl -D FOREGROUND with the ENTRYPOINT command. Alright, the next command is ENV. So if there are any environment
variables that you want to set inside the container, you can pass them using the keyword ENV, then the name of the variable, a space, and the value of the variable. Alright, so in my case I specified a variable NAME which has the value "DevOps Intellipaat". Alright, so these are some of the commands that you can use to create a Dockerfile. Now let us create this Docker
file on our system as well. So what I'm going to do is go to my PuTTY session, and let me first create a directory for it — let me create a directory called dockerfile itself, and go inside it: cd dockerfile. Now, whenever you are creating a Dockerfile, the name of the file always has to be Dockerfile itself, right? So I'll type nano Dockerfile, and we're in. The first thing we want is the ubuntu base image to be pulled; then we want to update this image, so apt-get update; then we want to install Apache inside it, apt-get -y install apache2 — with the -y so that it doesn't prompt. Alright, sounds good. Then we are going to add all the files from this directory to /var/www/html — we'll create the HTML file in a moment, do not worry. Once we have done that, the next step is to run Apache in the foreground, that is apachectl -D FOREGROUND, which runs Apache automatically; we specify that with ENTRYPOINT. And say I also want to set an environment variable: let me create a variable called NAME and give it the value Intellipaat. Okay, sounds good — so this is my Dockerfile, and I'll just save it now. Let me create the HTML file as well: I create 1.html, and let's make it simple — this will be a hello-world HTML file. Okay: in the body I have an h1 which says "hello from Intellipaat"; close the h1, close the body, and then close the html tag. Alright, this is what I wanted; let me save this. We are now done with the Dockerfile.
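Both files can also be created in one shot from the shell — a sketch that mirrors what we just typed in nano (the directory name, file names, and the NAME value follow this demo; adjust them freely):

```shell
mkdir -p dockerfile-demo

# The Dockerfile: base image, update, install Apache without prompts,
# copy the build context into Apache's web root, start Apache in the
# foreground on container start, and set one environment variable.
cat > dockerfile-demo/Dockerfile <<'EOF'
FROM ubuntu
RUN apt-get update
RUN apt-get -y install apache2
ADD . /var/www/html
ENTRYPOINT apachectl -D FOREGROUND
ENV NAME Intellipaat
EOF

# The hello-world page that ADD will copy into /var/www/html
cat > dockerfile-demo/1.html <<'EOF'
<html><body><h1>hello from Intellipaat</h1></body></html>
EOF
```

With a Docker daemon available, `docker build dockerfile-demo -t new_dockerfile` would then build the image from these two files.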
We have the HTML page in place, so the next step is to build this Dockerfile — let's see how we can build it. For building it, all you have to do is docker build, then where you want to build from — I want to build from the current directory, so dot — and then -t with the name I want the resulting image to have, say new_dockerfile. So I'll hit enter — and I forgot to mention sudo. So let me just clear the screen. Alright, let me first teach you guys how to run the docker command without sudo: for doing that, type in sudo usermod -aG docker $USER, hit enter, and now all you have to do is re-log-in to your session and this should work. So if I do a docker ps now without sudo, you
know, you can see the command is running without sudo as well. So what I want to do is go inside the dockerfile directory, and now build the Dockerfile: docker build . -t new_dockerfile. You can see the Dockerfile is now being executed, and basically this is building the image for me — an image which will have all the changes that we just described. So now, if I do a docker images, I can see there is a new image which has been created, new_dockerfile. So let us run it: docker run -itd — and I've also got to specify the port mapping, so let us open it at port 84 — then new_dockerfile, and hit enter. Okay, the container is launched now: if I do a docker ps, I can see that a container was launched seven seconds ago, awesome, and it is being exposed on port 84. So let's check whether everything is working well for us — we just have to change the port in the browser to 84, and you can see Apache is running. Awesome, Apache is running, and now let's check whether our webpage is there in the container. We named it 1.html — and yes, this is the HTML page we created, which has been added inside the container. So let me show you, inside the container, what exactly happened: docker exec -it, the container ID, bash, and let me just clear the screen.
Okay, and let me compare it with my Dockerfile. So the first thing that we did was apt-get update, then apt-get -y install apache2 — this installed Apache, so that part is clear. Then we added everything from the current directory to /var/www/html. So if we go inside /var/www/html and do an ls over there, you can see that the 1.html file has been added — and at the same time the Dockerfile was also added, because inside the directory we had these two files, 1.html and the Dockerfile, and both of them were added into the container. If you do not want the Dockerfile inside the container, what you can do is, instead of the dot, specify ./1.html — that would solve the problem and only add 1.html into the container. Okay, so it added 1.html; index.html is just the default page of Apache that you see. So 1.html was added by the Dockerfile. At the same time, we defined Apache to run in the foreground, and as you can see, we did not invoke the Apache service by going inside the container — we went directly to the port number that we mapped the container to, and Apache was up and running, which is awesome. And the last thing we did was define an environment variable. So what I can do is echo the variable, echo $NAME, and you can see this is the value specified in the Dockerfile: I set the variable NAME to Intellipaat, and when I do echo $NAME this is the value I get automatically. So this was set by the Dockerfile in the container image that I just created. And now what I can do is
just exit this container. And if I want you guys to be able to use it, all I have to do is a docker push of this image to my Docker Hub, and you will also be able to access this particular container. For that I just have to change the name and do a docker push, and you should be able to use it — but I'm sure you will not be needing that, because if you write the same commands that are written over here, you will get the exact same container that I created. Alright, so let me know in the comment section if you face any problems, and we'll be happy to answer all your queries. See you! Alright, moving
forward: we now know how to create Dockerfiles and how to build images from a Dockerfile automatically, like a script. Alright, so now let's start with our next topic, which is an introduction to docker volumes. So what is a docker volume? A volume is basically used to persist data across the lifetime of a container. For example, I demonstrated to you that when you create a container, make some changes in its filesystem, delete the container, and launch it again, the changes are not there anymore. That can be fixed. Imagine it like this: say you are using an Apache container with a website inside it, and the container stops responding for some reason. What do you do? You delete the container and launch it again — but with that, your website content is also deleted, and if your website content is not present in the new container, that's a problem. To solve this kind of problem we have docker volumes. What a docker volume does is host the storage outside of the container and map it to the inside of the container: the storage lives on the host system rather than in the container, so rather than the files being written inside the container itself, they are written outside it, and that location is mapped into the container. So irrespective of whether the container is deleted or started again, any container attached to that volume sees the same filesystem as the older container — and that is how the problem of persisting data across the container lifecycle is solved.
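In practice this mapping is done with the -v flag on docker run; here is a sketch of the bind-mount variant we are about to walk through (the paths follow this demo, and the container ID is a placeholder):

```shell
# -v <host-path>:<container-path> bind-mounts a host directory into the
# container; both sides see the same files, immediately
docker run -itd -v /home/ubuntu/dockerfile:/app ubuntu

# A file created on the host...
touch /home/ubuntu/dockerfile/2.html
# ...is visible inside the container right away
docker exec -it <container-id> ls /app
```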
so there are basically two ways to do it one is called a bind mount and one is
called a docker volume now what are the differences between a bind Mountain a
docker volume let us see that so basically a bind mount would be that let
me just come out of the directory a bind mount would be that I mount a particular
file location inside the container okay for example I created this daka file
directory right so what I can do is I can map this docker file directory
inside a container now to mount this particular docker file folder inside my
container all I have to do is docker run – IT and then there’s a flag called – V
that I left i specify the location of the directory
with the slash shovel to slash home slash open – slash – aqua file now this
is the location of my folder and then what is the mount point or if inside my
container so say I say it slash up right so I specified that I’ll specify – D and
then I’ll specify a bun – I’ll hit enter and my container is now launched now if
I go inside this container I can I will be able to see that there is a folder
called slash up which has been created right so if I go to app and if I do an
LS you can see the folder the files inside this container are that of the
taça file now these files are not actually copied inside the container but
these are actually being mirrored from the directory on the host operating
system that is if i do if i duplicate this session
let me just duplicate this session so I just go inside the directory docker file
right if I do an LS and say attach a file say to dot HTML right so if I do LS
in the container now I can see that there is one more file to dot HTML right
so it is inside the container so whatever changes I make inside this
directory will automatically or dynamically be available inside this
container okay the reason for that is that this
directly directory is mapped to the directory slash up in the container all
right so this is called bind mount but there’s a disadvantage with this the
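The bind-mount workflow just demonstrated can be sketched as a few commands; the directory path, mount point, and image name are the ones from the demo, so adjust them to your own setup (these commands assume a running docker daemon):

```shell
# Bind-mount a host directory into a container at /app.
# /home/ubuntu/dockerfile is the host directory used in the demo.
docker run -it -d -v /home/ubuntu/dockerfile:/app ubuntu

# In another terminal, create a file on the host side...
touch /home/ubuntu/dockerfile/two.html

# ...and it is immediately visible inside the container:
# docker exec -it <container-id> ls /app    # two.html is listed
```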
disadvantage is that a bind mount will only work when the filesystem is the
same that you have specified like over here in this container right for example
what can what can happen is I can specify this mind mount to be in the
configuration of the container and I can push it to docker hub so anyone who is
downloading this container will automatically have this bind mount on
his operating system but the problem which will arise is that say you
download this container on a Windows operating system right in the Windows
operating system this file path that I gave slash home slash up unto slash
docker file is not going to exist and that is where where it is going to give
you an error it and it will cause you a problem right so this way of persons
persisting data is a little difficult or is a little I’d say complex when it
comes to different environment setups when you’re wheeling dealing with
different demands for example if I have to talk about say a different operating
system say I want to work incentives so in that the file structure would be a
little different right for for for any other operating system in addition for
Windows it is completely different right so a bind mount will not work in
different environment and that is where it has a disadvantage but then this
disadvantage is basically overcome by volumes. Now, what are volumes? Volumes are basically storage entities which are managed by docker. Bind mounts are not managed by docker, because you create the directory, you decide which directory your data is going to reside in, and the changes you make inside that directory are reflected in the container. But if you create a volume, the docker engine which has been installed on the host operating system automatically decides where this volume has to exist and creates the volume over there. Also, with a bind mount it is a little difficult to migrate the data; with volumes it is very easy, you just pick the volume up, put it on some other system, and it should work. In the case of a bind mount you will have to decide at which place you want to keep the data and then bind that place to the container. Right, let me show you what happens in the case of a volume.
If we want to create a volume, this is the syntax: docker volume create, and then the name of the volume. So let me show you. Let me just type docker volume create, and let us create a volume called, say, test. Okay, so this creates a volume called test, as you can see here; ignore the warning, guys. So this created a volume test, and if you want to see all the volumes which are present on your system, you can pass the command docker volume ls, which will list all the volumes on your system. On my system right now, only the test volume is present. I don't have to know where it lives, because docker manages the filesystem handling part itself; I just want to know that there is a volume on my system called test. Awesome. Alright, so the next thing I have to do is basically type this command: docker run -it --mount. If I were to give you the syntax, this is the syntax for mounting a volume: docker run -it --mount, source equal to the name of the volume, which in our case is test, then the target, which is where you want to mount it inside the container, so we want to mount it at /app, and then -d and then the image
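Put together, the volume commands from this part look like this (the volume and image names are the ones from the demo; a docker daemon is assumed to be running):

```shell
# Create a named volume; docker decides where on the host it lives.
docker volume create test

# List all volumes on this host.
docker volume ls

# Launch a container with the volume mounted at /app inside it.
docker run -it -d --mount source=test,target=/app ubuntu
```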
name. Alright, so once the container is launched, what we can do is go inside the container, check if everything is working fine, go inside /app and do an ls. There's nothing inside this volume as of now, right? So what we can do is create files, say touch one.html and then touch two.html. Then let me just duplicate the session, and what I'm going to do now is launch one more container: docker run -it --mount, source is equal to test, and the target could be some other folder as well, but for the sake of simplicity let's keep /app, and then -d and ubuntu. This launches one more container. Let's go inside this container: docker exec -it, the container ID, and then bash. Okay, let's go inside /app. If I do an ls, you can see that whatever changes were made in this volume are also reflected in this container. Say I create three.html over here; if I go to the other container and do an ls, I can see the three.html as well. Alright, so this volume is basically being shared between two containers. Even a bind mount can be shared between two or more containers, and so can a volume, but in the case of a volume you do not have to worry about where your data is being stored; it is automatically handled by the docker engine. The other cool thing over here is that if you delete this container and launch it again, your end user will not even realize that anything changed.
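The shared-volume behaviour just described can be condensed into a short sequence; the container names writer and reader are hypothetical, chosen only for readability:

```shell
# Two containers mounting the same named volume at /app.
docker run -d -it --name writer --mount source=test,target=/app ubuntu
docker run -d -it --name reader --mount source=test,target=/app ubuntu

# A file created through one container...
docker exec writer touch /app/three.html

# ...shows up through the other, because both see the same volume.
docker exec reader ls /app    # three.html is listed
```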
Let me give you an example. So let me just exit this container. We created an image called new_dockerfile, okay? Before launching it, let me create one more volume: docker volume create, and let us create a volume called apache. Now what I'm going to do is launch docker run -it and then --mount, with source apache. Look at this closely, look at the difference: source is apache, and the target I want is this particular directory, /var/www/html, so whatever is inside that directory automatically comes inside the volume. I don't have to add anything into the volume myself, and I'll show you how that works. So you specify this, then -d, then new_dockerfile, which was the image name, and let us hit enter. So it created the container. Let's go inside: docker exec -it, this container, and then bash. Okay, we are inside the container now. If I go to cd /var/www/html and do an ls, you can see the files are still here, but these files are also now mounted on the volume, and that is evident over here. Because if I exit this container and launch a new container with the mount source as apache and the target as, say, /app, and the image is the ubuntu image, it should not have the files inside it. But if I go inside this container now, can you see the files over here are the same as what was inside that container at /var/www/html? Okay.
Now the beauty is this, guys: say I create a two.html over here. What I do is come out of this directory and create a new file, nano two.html, and in it I write html, body, h1, "this is the new HTML file", then close the h1, close the body, close the html, and come out. Okay, now I want to copy this two.html inside the container, so I'll do a docker cp ./two.html, then the container ID, colon, /var/www/html. So now two.html is present inside the container. Next, I'll just delete the container, the new_dockerfile container: docker rm -f, delete. Okay, so ideally, if I now launch a new docker run -it --mount, source equal to apache, target equal to /var/www/html, and we also specify that we want port 81 to be exposed, and -d, and then the image name new_dockerfile, what will happen now is that if I go to port 81 in the browser, the container is running, and if I go to two.html you can see that the new file is also there in this container, the new_dockerfile one. So it does not matter whether your image has the file or not: if you have mounted the volume, and that volume at some point had the file inside it, the file will be inside this container as well. Originally, inside this image you don't have the two.html, right? Let me prove that to you as well: if I do not mount the volume and launch this container, and I go to this IP address but on port 82, you can see the container is up and running, but if I go to two.html it will say two.html was not found on this server. The reason being, the volume is not attached to this server; this image only has the one.html that we wrote using the docker file. With the volume, we have it in the container which is mapped to port 81, and that one has the file two.html, which is this. So awesome,
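The persistence workflow above, copy a file into a volume-backed path, destroy the container, relaunch, can be sketched like this; new_dockerfile is the image built earlier in the session, the container ID is a placeholder, and the port mapping matches the demo:

```shell
# Copy a file into the container's document root (backed by the apache volume).
docker cp ./two.html <container-id>:/var/www/html

# Remove the container entirely...
docker rm -f <container-id>

# ...and relaunch with the same volume: two.html survives, because it
# was written into the volume, not into the container's own layer.
docker run -d -it --mount source=apache,target=/var/www/html -p 81:80 new_dockerfile
```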
guys, we have successfully understood what docker volumes are, and I am sure you guys also understood. If there are any doubts, you can mention them in the comment box and we'll be more than happy to help you out. Alright, so with that, guys, we now come to the next topic of our session, which is: how are microservices relevant? Till now we have seen how to deploy a single container and do things with it, but usually, in a production-grade environment, we have multiple docker containers working with each other. Now why do we have that? Let us understand that in this topic. So
first, let us understand what a monolithic application is. So guys, a monolithic application is an application which has everything inside one particular program. For example, let us take the scenario of an Uber app. Our Uber app has things like notifications, mail, payments, location services, customer service and passenger management, and all of this is there inside one app, right? Now, just because you see it all inside one app does not necessarily mean it is actually one program that Uber has written, but for now let's consider everything is under one program. And when I say one program, probably you have written a Java file or a C file and everything is there in that file, or everything is there in the project in different files; basically, everything is on one server is what I mean. So if that is the scenario, what will happen is that if I want to change code in notifications, I will have to pull the whole codebase, go to the notifications file, change the code over there, and then push the whole code back to GitHub, from where it will get deployed to production after being tested and everything, right? But the problem over here is that if there is any problem in the notifications code that I just changed, it can have repercussions on all the components of my app. It could be that I made a mistake and now there is something wrong with the mail program: although my intention was not to touch the mail program, because I had to work with the codebase which had everything, probably something went wrong over here, or over there. So this was a
problem with a monolithic application: you had to redeploy the whole codebase even if you only changed the smallest thing in the code. This also led to downtime, because while you are changing the codebase there will be a certain amount of downtime, and there is also a lot of risk because, like I said, the other files of the same project could be impacted. For example, in our case, if notifications or, say, location services is not working, it will probably have an effect on payments or on mail, if the code is related, or if a function is being called from one module by another, and so on. That is how a normal program is: in one file you define a function, in the second file you call that function. So all of these are dependent on each other, and all of these small things, notifications, mail, payments, passenger management, customer service, are all small modules of one program. If these modules are interdependent on each other, it is called a monolithic application. Now, when we talk about the disadvantages of a monolithic application, they are the following. Like I said, if the application is large and complex, it will be very difficult to understand. So if your app keeps getting more and more features, and someone has to add a new feature, he first has to understand all the dependencies between the components; he has to understand the whole codebase, and only then can he add a feature, understanding what the repercussions will be and which files he has to handle if he's going to put code inside the codebase. So
it is a little difficult to understand. The second thing is that the whole application is redeployed: like I said, the whole code is deployed again, and there will be a certain amount of downtime while you are updating the application. The third thing is that if there is a bug in any module, because in a monolithic application everything is dependent on everything else, it can lead to downtime of your entire application; a bug in a single module can bring down your entire application. This is also a problem in a monolithic application, because the components are dependent on each other. And the last one is that it has a barrier to adopting new technologies, which basically means that if your notification code is in Java, then your passenger management program, because it's a monolithic application, also has to be in Java. All of the modules, all of the features that you have defined, have to be in one particular language; that is what a monolithic application means. So there was a restriction on adopting new technologies as well: it could not be that one module of mine is written in one language and another module is written in some other language. This is what a monolithic application was. Now, to solve this, we came up with the microservices architecture. So what
is the microservices architecture? Each and every module, that is the notifications function, the mail function, the payments function, the passenger management function, the location services function, is segregated from the others; they all exist independently, that is, they are not dependent on each other. Of course, they still have to interact: for example, in Uber, until and unless you make the payment you will not be allowed to book a cab, right? So communication still has to be there between those two components; the payment and the booking have to communicate with each other. But now they don't have to communicate from within one program: they can communicate over HTTP, by hitting each other's APIs, probably by sending JSON to each other and things like that. They don't have to communicate from within the program, and that is the beauty of microservices: the services we are defining now are not dependent on each other. For example, say the notifications module is not working; in that case, it's not like your whole Uber app will go down. Probably it will say the notifications service is having some problems, but you can still book a cab, because booking a cab is in no way related to notifications. If we compare that to a monolithic application, because the code was interdependent, something wrong in the notifications code could bring the whole application down, because the code was not isolated into its member modules. But now, because each and every function of the application can exist on a different server altogether, the downtime of the application becomes almost zero. When we have to update a feature, for example if a developer has to update a feature in, say, the payments app, say he has to add a new payment method, he only downloads the codebase for the payments app, makes the changes over there, and updates the code on the payments module. If there are any problems, there will be a problem only in the payments module; the other services will not be affected. Okay, and this is what microservices are.
Now you might be wondering: this is a docker session, why and how are these microservices related to docker? To answer that question: these applications, all these microservices, are deployed on docker containers. So they all act as separate entities, and these containers then interact with each other. This also solves the problem of the barrier to adopting new technologies, the reason being that your containers could use different technologies. For example, your customer service could be written in Python and your notifications could be written in Java, but if the customer service and notifications programs have to interact with each other, they can interact through JSON. JSON is a way of structuring your data, just like we have CSVs where the values are separated by commas; in JSON you have a structure, and any kind of program, in any language, can convert data into JSON, using libraries or inbuilt functions, and then pass it on to the other service which has to read the information. So with microservices, our problem of technology restriction was also solved. So the advantages are: because the application is distributed, it can be understood really well; it is not like a developer who has to work on a particular feature has to understand all the features of the application. He should know which modules the program he's developing has to communicate with, and of course the code of his own function as well. And of course, if he makes any changes and introduces any bugs in the program, it will not affect the whole application; it will probably affect only that one function. Okay, I guess we have covered all the advantages: the first one being that it is easy to understand; then, the code which has to change will only be changed for the microservice being worked on; then, a bug in one service will not get into all the components of the application, it will be isolated to that particular function; and of course you can use any technology you want with the microservice you are working on. Okay, and all of this is possible using containerization.
Now, because we are talking about deploying multiple containers, we have to talk about how to deploy them using docker compose. If we were to deploy multiple containers with what we have learned so far, the only way to do it is by either using docker run or probably creating a script file which builds multiple docker files and hence builds those images. But there is an even shorter way to build images and run them, and that is docker compose. Now, what is docker compose? Docker compose is basically a tool, used with docker, to create and configure multiple containers at once with a single command. The way you do that is by writing a YAML file: you write a YAML file with all the configuration required for the containers, and of course I am talking about an application that has more than one container; you can have 100 containers that you launch with one single command using the docker compose file. So this docker compose file is written in the YAML format. Now, if I want to demonstrate the power of docker compose to you, I can do that using a sample compose file that I have created.
So this sample compose file, what will it do? It will basically deploy a WordPress website. Now a WordPress website has a lot of dependencies, guys: it has to have MySQL in the backend, it has to have the WordPress container in the frontend, and of course you will have to configure the DB password and everything inside the container as well. One way of doing it is doing it manually: installing WordPress, configuring the variables inside it, and then doing the same with MySQL. There's a shorter way to do that, and that way is docker compose. Now, this docker compose file has actually been written in YAML, so let me take you through it. The version of the docker compose file is version 3.3, and there are basically two kinds of containers that we are launching: the first is the db container and the second is the wordpress container. So first we launch the db container, and the image we are pulling for it is mysql:5.7. In the mysql:5.7 container we define a volume, db_data, and this is the target inside the container where it should be mounted, that is /var/lib/mysql. Then the environment variables for this container are the MySQL root password, the MySQL database, the MySQL user and the MySQL password; the values of these environment variables are also configured inside the docker compose file itself. So inside this container, these are the environment values which will be configured, and
this is the end of it; this is what we have done in the db container. Now we come to the wordpress container, in which we specify that it depends on db; this will basically create a link between the wordpress container and the MySQL (db) container, so the wordpress container will be able to communicate with the db container. Then we specify the image of the container that we want to download, so it's wordpress, and that too the latest version; then the ports that we want to expose, so inside the container WordPress is available on port 80 and we are mapping it on the host to port 8000; and then the environment variables. We specify that the WordPress DB host is db:3306, where db is basically the hostname for the service that just got created which has MySQL inside it, and MySQL is always available on port number 3306, so that is what we specify over here. Then we have the WordPress DB user, so we specify the MySQL database user as wordpress, which is exactly the same value that we specified over here, and the DB password is again wordpress, which we also specified over here. Okay, and after that we specify the volume which has to be created, db_data; this is for the whole YAML file, that there is a volume being created called db_data. So this is a YAML file which will create two containers and configure them, and this will happen in like three seconds. So let us go ahead and try this sample compose file out.
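For reference, the compose file being walked through is the standard WordPress quickstart from the docker documentation, and it looks roughly like this (the passwords are the sample values from the docs, not something to use in production):

```yaml
version: '3.3'

services:
  db:
    image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: somewordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress

  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    ports:
      - "8000:80"
    restart: always
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress

volumes:
  db_data: {}
```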
But before that, guys, docker compose actually has to be installed on the system separately, even if you have already installed docker. Docker, when you install it, does not install docker compose automatically, except for users who installed it on Windows or Mac using docker toolbox; docker toolbox has everything included. But if you installed it on Linux and just installed docker.io, that only installs the community edition and docker swarm; docker compose is not installed by default. Now, to install docker compose, let me first demonstrate to you that it is not present: docker-compose --version, and you can see it is not there as of now. So what I can do is go to my browser, search for "install docker compose", and the first link is basically the official documentation. For installing it on Linux, just click on Linux, and these are the commands for installing compose. First we copy this command, that is, we curl the file; the file has been downloaded now, and then we change the permissions on the file which has just been downloaded so that it becomes an executable, and that's it. Now if I run docker-compose --version, I can see that the docker compose version is 1.23.1, along with the build, so docker compose has been installed.
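The two install commands from the official docs look like this; the version number is pinned in the URL (1.23.1 was current at the time of this recording, so check the docs for the latest release):

```shell
# Download the docker-compose binary for this OS and architecture.
sudo curl -L "https://github.com/docker/compose/releases/download/1.23.1/docker-compose-$(uname -s)-$(uname -m)" \
  -o /usr/local/bin/docker-compose

# Make it executable, then verify the install.
sudo chmod +x /usr/local/bin/docker-compose
docker-compose --version
```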
Now the next step is to create the YAML file. For that, let me first create a directory, compose, and cd inside compose, and now let's create a YAML file, wordpress.yaml. This WordPress compose file is available on the docker docs, so you can go there directly and try this YAML file on your own as well: just copy the YAML, paste it into the file you are creating, and save it. Let's verify everything was copied correctly; yes, it has, so finally save this YAML file. Now, there's one more thing that I want you to know, and it will be evident from the error: the docker compose file has to have a proper name. For example, if I pass the command for running your docker compose YAML file, which is docker-compose up and then -d, it will give you an error: can't find a suitable configuration file in this directory. So it says that the file can only have the name docker-compose.yml or docker-compose.yaml. So what we're going to do is just rename the file; I rename it to docker-compose.yaml. Okay, so the file has now been renamed.
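The sequence of commands from this part, again with the file names from the demo, would be:

```shell
mkdir compose && cd compose

# Compose insists on one of its two standard file names:
mv wordpress.yaml docker-compose.yaml

# Start every service defined in the file, detached.
docker-compose up -d

# Check what compose brought up.
docker-compose ps
```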
Now let's pass the command docker-compose up -d. I'll hit enter, and as you can see, everything has started automatically and it is pulling the images as of now. Once it has pulled them it will start configuring the containers. So the db container has been completed; now it is downloading the WordPress image, then it will configure it, and once the configuration has been done, yes, so it's done. Our WordPress website is now up, and all we have to do is go to the EC2 dashboard, go to the running instances, these are the instances that we launched, this is my instance, I'll just copy the IP address, and WordPress will be available on port 8000. Hit enter, and here you go, the website is up and running. If I click continue, I just have to specify the values: let's give the site title as Intellipaat, the username as intellipaat as well, the password would be intel123, and then your email, which can be something random, intellipaat@intellipaat.com, and we'll click on install WordPress, then log in with the username and password, so let us specify the same ones we gave. And now, if it logs in, that basically means it is able to communicate with the database as well, right? So your WordPress website is all up and running; it is now configured with your MySQL database, that is how it is interacting, and this is your dashboard. This is served from your WordPress container, while your data is all being stored in the MySQL container, and this is a classic case of a multi-tier app that you have just launched with docker compose. If I do a docker ps, you can easily see this is the WordPress container running, exposed on port 8000, and then you have the MySQL container running, exposed on port 3306, through which the WordPress container is able to interact with it. Alright, so guys, this is how you can launch multiple containers at once using docker compose, and this is also kind of like a microservice setup: although it can be broken down further, those broken-down services can also be launched through docker compose, and everything can be configured in the docker compose YAML file itself. Alright,
but again, microservices are actually not launched through docker compose; they are basically put on something called a container orchestration tool. So what exactly is a container orchestration tool? Container orchestration is for when you have launched multiple containers and you want to monitor their health as well. For example, if I come back to the example we just did: what I'll do is docker rm -f, specify the container ID, and hit enter; the container is removed, right? Now when I go over here and hit enter, it'll say the site can't be reached; if I hit enter again, it'll say the same thing, because your site is gone. You accidentally removed your container, or your container stopped working for some reason, and now you're not able to access your container; your website is down, it's gone. This was a problem with the traditional way of using containers or microservices on docker. But then there was something called container orchestration, which became very popular, which basically says that the health of all your containers will be monitored by docker itself. So if there is a container which goes down, or a container which is not healthy anymore, what docker swarm does is automatically repair it by stopping the container and launching a new one in its place, and the end user will not even realize what happened. Alright, so this automation led to what we today know as container orchestration. Now, there are a lot of container orchestration tools; with docker
prepackaged comes docker swarm, so we're going to discuss docker swarm now. So what is docker swarm? It's basically a clustering and scheduling tool used for container orchestration of docker containers. With docker swarm you get the functionality of automatically monitoring the health of containers, and it helps you keep the number of healthy containers that you have specified always in the running state; that is the basic aim of having a docker swarm up and running. Now, how does docker swarm basically work? A docker swarm cannot really work with just one machine, because, like containers, even machines can be faulty sometimes. Maybe you have configured docker swarm on a single machine which can automatically repair containers, but what happens if the machine itself goes down, right? So to mitigate those kinds of things as well, we came up with a distributed kind of architecture, where you have multiple machines running in the swarm. In the swarm there will be one machine called the leader, which will basically tell the workers what to do, and the workers will have the containers running on them. So you have the leader, like we have over here, and then you have multiple workers running in the cluster, and these workers will run the containers that we want to launch. So this was about docker swarm, but this is not it.
not it why let’s go ahead and start a container
or let us go ahead and start a cluster using docker swamp so let us see how we
can do that so for that let’s go ahead and first launch a machine on AWS so we
have the master let us launch a worker so for that let us launch in a bun to
system okay so now our Ubuntu system is now
launching let’s name this instance as a worker okay so while this is launching
you’ll have to do some steps on your leader so say this is the instance that
I want to become the leader for my daugher Swan cluster so what I’ll have
to do I’ll have to say docker swarm in it
and then I have to specify the advertised address so advertise address
is basically the private IP address of the instance so in my case the instance
is this is the private IP of my instance it shall have to specify this over here
and then I’ll hit enter so with this you will get this command swarm is
initialized current node is this and it is now a manager so manager is nothing
but a leader ok and for any node which has to join to this manager they’ll have
to pass this command I could add a docker swarm join and then this command
that they’ll have to pass so for this so basically I’ll have to login to this
particular worker of mine who installed docker first and then we’ll go ahead and
join it to the cluster so let us connect to our worker and let me make the font a little bigger
so that you can also see what I’m typing okay kid
So this is my instance. Now what I'll have to do
is first install docker, so let me do an update first. The
machine is updated; now let me install docker with sudo apt-get install
docker.io. Alright, so docker is installed; let us
verify that by typing sudo docker version. Great, so docker is installed. Now, swarm
comes automatically when you install docker, so all you have to do now
is go to the leader, copy the join command, and paste it over here. And also, while we are
doing that, I'll have to open the ports for these instances to interact
with each other, so let me allow all traffic over here, and now my instances
should be able to interact. I just paste the command here and hit enter. Alright, so I figured out what the
problem was: I had not specified the advertise
address correctly. My master is basically on this IP address; let me copy
that and run the init command again. Now
this is the join command that we get; let us copy it, paste it over here, hit
enter, and you can see it says this node joined a swarm as a worker, alright.
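So the per-worker preparation boils down to just a few commands. As a sketch (the docker.io package name is what Ubuntu's default repositories use; the join line is whatever your own manager printed, so the token below is a placeholder):

```shell
# On each Ubuntu worker:
sudo apt-get update
sudo apt-get install -y docker.io    # Docker engine from Ubuntu's repos
sudo docker version                  # verify the install

# Also open the swarm ports in the AWS security group
# (in this demo we simply allow all traffic), then paste the
# join command printed by the manager, e.g.:
# sudo docker swarm join --token <TOKEN> <LEADER_PRIVATE_IP>:2377
```

Note that port 2377 is the default port the swarm manager listens on for node joins, which is why the security group must allow it.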
Awesome! Now if I go here and do a docker node ls, you can see that I
have two nodes: this is my current node, which is the leader, and
it's in the ready state, and so is the second node, which is also in the ready
state and has joined the swarm. So this is how you can create a Docker Swarm
cluster, guys. So again, to be very clear, the first thing that you will have
to do is run docker swarm init on the master node; then you will get a command;
just run that command on the worker, and your worker will directly connect to
your master. Now, if you want a node to leave the cluster, all you have to do is say
sudo docker swarm leave,
and you can see it says the node left the swarm. And if you do a docker node ls on
the master, then in a few moments, when the health
check has failed, it'll say the status is down for this particular node, okay? As you
can see over here, because this node has left, the status is now down, which basically means
that the node is no longer reachable, okay? Now, if you want the master to also
leave the swarm cluster, all you have to do is say docker swarm leave and then add
--force, hit enter, and it'll say node left
the swarm, and your swarm cluster is now gone, okay? So this is how you can go
ahead and create a Docker Swarm cluster, and the command, like I said, is docker
swarm init with the advertise address equal to the IP address of the leader; specify that, hit
enter, and just copy the join command onto the worker, and it will work like a charm.
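Putting that whole recipe in one place, the commands look roughly like this. The IP address and the join token printed by your own docker swarm init will of course differ, so treat the values below as placeholders:

```shell
# On the leader (manager) node; <LEADER_PRIVATE_IP> is a placeholder
# for the instance's private IP address:
docker swarm init --advertise-addr <LEADER_PRIVATE_IP>

# The init output prints a ready-made join command with a token;
# run that on each worker:
docker swarm join --token <TOKEN> <LEADER_PRIVATE_IP>:2377

# Back on the leader, verify that all nodes show up as Ready:
docker node ls
```

The reason for --advertise-addr is that a cloud instance often has several addresses; this flag pins down the one the workers should use to reach the manager.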
Alright, so our next step is to deploy an app on the Docker Swarm. But
before that, let me quickly launch the swarm cluster again: docker swarm init
with the advertise address, and specify the IP address of the master, which is this.
So this is the IP address; specify that, hit enter,
and then copy this command and run it on the worker. Okay, and if I do a docker node ls over
here, you can see the cluster is ready. Awesome!
Now what I want to do is deploy an app on Docker Swarm, but before that
we'll have to understand how an application actually works on Docker
Swarm. An application works something like this: basically, you create
services on Docker Swarm, and each service will create containers
for you from the particular image that you have specified, right? So now
let's go ahead and create a service. So guys, this is the syntax for creating a
service in Docker Swarm: we'll have to type docker
service create, then specify the name of the service (say I name
the service apache), then specify the number of replicas that we
want to run (say I want to run five replicas), then the port mapping
(say I want it to be published on port 83), and then
the name of the image (say I'm launching it from hshar/webapp).
Okay, I hit enter. It will run five containers,
verify that everything is running fine, and once everything is running, it will
exit. So now, if you want to see what all services are running on your swarm cluster,
all you have to do is say docker service ls, and it will
show you that this is the service which is running, it is running in the
replicated mode, it has five out of five replicas running, and this is the image
name. Now I'll show you a very awesome thing. So basically this port 83 is
exposed, right? So what I'll do is go to the master's IP address, which is this;
I'll copy it, type it here with port 83, and this is working. Now what I will do is
go to the worker: I will again go to the worker's IP
address, on port 83. Awesome, isn't it?
So basically, in swarm, whichever IP address you go to, either the master's or the worker's,
they will have the application ready on both these servers on port 83.
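Condensing the steps above into a sketch: the image hshar/webapp is the one used in this demo, but the exact container-side port is an assumption on my part (the session only mentions the published port 83; I am assuming the image serves on port 80, hence the 83:80 mapping below):

```shell
# Create a service with 5 replicas, publishing host port 83
# (assumption: the container listens on 80, hence the 83:80 mapping)
docker service create --name apache --replicas 5 -p 83:80 hshar/webapp

docker service ls          # shows mode (replicated), replica count 5/5, image
docker service ps apache   # shows which node each replica landed on

# Thanks to swarm's routing mesh, http://<any-node-ip>:83 reaches the app,
# no matter which node the containers are actually running on.
```

That last point is exactly the behavior demonstrated here: the published port is open on every node in the swarm, and the routing mesh forwards the request to a node that has a replica.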
And the most awesome thing: I showed you guys that if I do a docker ps
over here, I can see that two out of the five containers
are running on the leader, and there are three containers
running over here on the worker. So what I can do is
basically just do a sudo docker rm
-f and remove all the containers. The containers are removed, and if I go to my master's IP
address again and hit enter, I still have the containers running over here, right?
Similarly, if I go here and say sudo docker rm -f, and I say I want to remove
everything (I forgot the sudo over here), it removes three containers, but if I go
to my web browser and refresh the worker's IP address, I still have the
containers running over here, right? This basically means that my containers are
being automatically healed, which means if they get deleted, they get created again. So you
can see I deleted the containers over here, but if I do a docker ps again, can you see it has three containers now
launched? And if we go on the worker and do a docker ps over here, now it
should have two containers, exactly as we saw, right? So it automatically creates
the containers again, and it will always maintain the number of replicas at
five, because that is what you have specified, right? That is what
it will always maintain. Now, you can always scale the replicas, and for that all you
have to do is docker service scale apache=<count>, where you just specify the
number of replicas that you want. Say I want two replicas:
I hit enter, and it will basically scale down to only two
replicas. Now if I do a docker service ls,
you can see there are only two replicas running. So if I do a docker ps here,
I have one container over here, and if I do a docker ps there, I have one container
running over there, alright? You can also scale
up, right? So I can again just go over here and type in the command with ten, and it
will start ten containers for that web app, right? And you can
verify that: if I do a sudo docker ps over here, it has around five containers
here, and if I do a docker ps there, it has around five containers there. If you want
to remove a service, that is, if you want to remove an application from the cluster,
all you have to do is docker service rm and then specify the name of the service,
and it will remove that service. And now if I do a docker ps, it will slowly
remove everything out of here; so see, the containers are gone over here,
and similarly, if I look here, the containers are also gone from the worker as
well, okay? And again, like I told you guys, if you want a node to leave, all you
have to say is sudo docker swarm leave, and it
will say this node left the swarm. And similarly, if I want the master to leave as well, I would have to
say docker swarm leave, and on the master you'll always have to add --force;
once you leave, the swarm is dissolved, and you'll have a clean machine again.
Alright, so this is how you can deploy an application on Docker Swarm.
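Recapping the scaling and teardown commands from this demo in one place (the service name apache and the replica counts are the values used above; these need a running swarm, so read them as a command sketch):

```shell
docker service scale apache=2    # scale down to 2 replicas
docker service scale apache=10   # scale back up to 10 replicas
docker service ls                # confirm the desired/actual replica count
docker service rm apache         # remove the service; its containers drain away

sudo docker swarm leave          # on a worker: leave the cluster
sudo docker swarm leave --force  # on the manager: --force is required
```

Notice that the swarm treats the replica count as desired state: scaling, self-healing, and removal are all just the swarm reconciling reality with that number.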
Alright, so our next topic is deploying a multi-tier app on Docker Swarm, and we would be
doing this in our next session, or in further sessions on Docker Swarm
exclusively, because this topic involves the know-how of how networking
works in Docker Swarm, so I'll be explaining it in detail in our next video,
which is going to come first, so stay tuned. Summarizing what all we
learned today: we got introduced to docker, we covered some common docker
operations, then we saw how we can create and save a container and push it to Docker
Hub, how to create containers using a dockerfile, then we saw how to create
multiple containers using docker compose, we learnt about microservices, we
saw how to orchestrate containers using Docker Swarm and how to deploy a
multi-container application, and then we also saw how to delete a swarm cluster once we are
done with it. Alright, so this was the summary of
today's session, and this topic of deploying a multi-tier app, that is, two
containers which can interact with each other irrespective of
where they are on the cluster, is something that we are going to do in our next
session. Alright, so thank you guys for attending today's session; I hope you
guys learned something new today. As always, if you liked this video, please
click on the like button and subscribe to our channel for any future updates,
and if you have any doubts regarding any of the topics that we discussed today, I
would request you all to put them in the comment section mentioned
below, and we'll be happy to answer them for you. Alright, so with that, guys, I will
take your leave. Have a great day, and goodbye!
