Containers
In this practice we will learn how to use docker to deploy containers.
Contents
- Step 1: Installing docker
- Step 2: Launch docker with our user
- Step 3: DockerHub container image repository
- Step 4: Check images on our machine
- Step 5: Launch a container
- Step 6: List running containers
- Step 7: Stopping and deleting containers
- Step 8: Connect to a container that is running in the background
- Step 9: Execute another command in a container
- Step 10: Persistence in containers
- Step 11: Create our own docker image
- Step 12: Delete images
- Step 13: Several docker at once (docker-compose)
- Step 14: Exercises
Step 1: Installing docker
We will install docker from the official Ubuntu repositories, where it is packaged as docker.io:
sudo apt update
sudo apt install docker.io
To see if everything is OK and it has been installed correctly, we will check that the service is active:
sudo service docker status
It should display something similar to the following:
● docker.service - Docker Application Container Engine.
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2019-08-26 09:41:40 UTC; 37s ago
Docs: https://docs.docker.com
Main PID: 3630 (dockerd)
Tasks: 8
CGroup: /system.slice/docker.service
└─3630 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
Press the 'q' key to exit.
If the service appears inactive, you can launch it with:
sudo service docker start
Step 2: Launch docker with our user
To avoid needing sudo when using docker, we will add our user to the group docker:
sudo usermod -aG docker ubuntu
For the change to take effect, we will need to log out of our user's session and log in again. We can easily do this by pressing CTRL + D and entering our credentials again.
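To check that the change has taken effect, we can list the groups of our user and try a docker command without sudo (a quick sketch, assuming the user is called ubuntu as in the command above):
groups ubuntu       # "docker" should appear in the list
docker ps           # should now work without sudo and without permission errors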
Step 3: DockerHub container image repository
Docker offers a cloud service called DockerHub that can be employed as a social network to share your Docker images. In DockerHub there are also preconfigured container images of software, many of them official (offered by the software manufacturer itself) that can be used as a basis for building new images tailored to our needs.
To search this DockerHub repository, we can use the following command:
docker search hello-world
To download the hello-world image we can use the pull command:
docker pull hello-world
We can also publish our own images. This requires user registration in the DockerHub cloud, which is not mandatory for this practice. To upload an image, you first have to log in as a DockerHub user:
docker login
After entering your login and password, you can start publishing your own images:
docker push MY_IMAGE
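Note that DockerHub expects images to be tagged with your user name before pushing. A minimal sketch, assuming a hypothetical DockerHub user myuser and a local image called myimage:
docker tag myimage myuser/myimage:latest
docker push myuser/myimage:latest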
Step 4: Check images on our machine
To see the images we have on our machine, we will use the command:
docker images
Initially we don't have any images stored locally; we can download the official Ubuntu image:
docker pull ubuntu:latest
Now the ubuntu image should appear in the list.
Note also that each image can have several tags. The ubuntu image, as we saw in the list of images, came with the tag latest, which is the tag downloaded by default, but we can specify another one, for example:
docker pull ubuntu:bionic
This downloads the bionic (18.04) version of the Ubuntu image.
Now we will have two images with different tags, which correspond to different versions of the same image.
Step 5: Launch a container
A container is nothing more than a running instance of a docker image; it is similar to a virtual machine, although much lighter.
To create a container we will use an image, in this case the hello-world image. We do the following:
docker run hello-world
This will show us the following message if everything is correct:
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/
For more examples and ideas, visit:
https://docs.docker.com/get-started/
When creating containers using the 'run' command, we have several options:
Step 5.1: Run a terminal on the container in interactive mode (-ti)
For this example we are going to use a different image, in this case the ubuntu image.
We are going to launch a container and run the bash program in it, which gives us a command interpreter:
docker run -ti ubuntu bash
The -t option allocates a terminal in the container, and the -i option runs the container in interactive mode.
When accessing the ubuntu container, a hash is displayed at the shell prompt, which identifies the container instance (this value is similar to the one displayed by the docker container ls command).
To exit the container, type 'exit' or press 'CTRL + D'.
Step 5.2: Running in the background (-d)
docker run -d hello-world
It will show us the container id and run in the background, so the output we saw before is not shown now. In order to see this output, we can use the logs command with the id displayed above:
docker logs 2ec1daae4676a4e84dc04fd91399c1dfe92119544ff12ee307991fe573d3db64
Step 5.3: Adding environment variables to a container
We will use the same image as before, and we will add an environment variable called TEST containing the value 'test'. In addition, we will also run the terminal as before, to check with the echo command that the environment variable has been created correctly:
docker run -ti -e TEST=test ubuntu bash
Now, from the container we can check the value of the $TEST environment variable.
echo $TEST
test
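If we need several variables, the -e option can be repeated, or the variables can be read from a file with --env-file. A small sketch (vars.env is a hypothetical file name):
docker run -ti -e TEST=test -e MODE=debug ubuntu bash

printf "TEST=test\nMODE=debug\n" > vars.env
docker run -ti --env-file vars.env ubuntu bash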
Step 6: List running containers
We have been testing and running containers in the previous step. Let's see where these containers we have run are, and then we will look at some more options of the run command.
We have two options for viewing the containers:
docker container ls -a
or the short version:
docker ps -a
Both do the same thing, and should show us output similar to the following:
CONTAINER ID        IMAGE     COMMAND   CREATED          STATUS                     PORTS     NAMES
e2bde5f5500c ubuntu "bash" 6 minutes ago Exited (0) 2 seconds ago pensive_sammet
544ad27dbc3c ubuntu "bash" 7 minutes ago Exited (0) 7 minutes ago serene_elgamal
In the output we will see that the container has the status exited, which means that it is not running and has already finished. Let's do a test to see a container running:
docker run -d ubuntu sleep 30
With the sleep 30 command, the container waits 30 seconds before the command finishes, so during those 30 seconds the container will be running. Let's see it:
docker ps -a
CONTAINER ID        IMAGE     COMMAND      CREATED          STATUS         PORTS     NAMES
eacceb27d52d ubuntu "sleep 30" 12 seconds ago Up 11 seconds sharp_pasteur
Let's see some more options of the run command.
Step 6.1: Give a name to the container (--name)
By default, docker will automatically create a name as we have seen in the outputs of the docker ps -a command. We can set the container name we want with the --name option:
docker run -d --name my_ubuntu ubuntu
Step 6.2: Remove container when finished (--rm)
Let's mix everything seen above:
- Launch in the background (-d)
- Run container for 30 seconds (sleep 30)
- Give a name to the container (--name)
- Delete container when finished (--rm)
docker run -d --rm --name bye-bye ubuntu sleep 30
While the 30 seconds are passing we can see the container with docker ps -a, and once they have passed we can check that it has disappeared:
docker ps -a
Step 7: Stopping and deleting containers
We can stop or delete containers; let's see how to do it. First of all, we will delete all the containers that have already finished:
docker container prune
docker ps -a
We have everything clean to continue. Now let's create a container using a linux command that doesn't terminate, so that the container keeps running indefinitely:
docker run -d -ti --name my_container ubuntu bash
docker ps -a
Let's stop it (we can use the given name or id):
docker stop my_container
docker stop e4901956f108
docker ps -a
We'll see that it has finished. Now we're going to delete it, but only this container, unlike before when we deleted all of them. As with stopping it, we can use either the name or the id:
docker container rm my_container
docker container rm e4901956f108
docker ps -a
We will see that we no longer have the container and it is correctly removed.
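As a shortcut, a running container can be stopped and removed in a single step with the -f (force) option of docker rm; a quick sketch using the same kind of container:
docker run -d -ti --name my_container ubuntu bash
docker rm -f my_container      # stops and removes it in one command
docker ps -a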
Step 8: Connect to a container that is running in the background
Let's re-run a container that does not terminate:
docker run -d -ti --name to_attach ubuntu bash
Now, let's connect to this container that is running in the background:
docker attach to_attach
We will see the bash command running. To finish, we press 'CTRL + C', and now we will see what has happened to the container:
docker ps -a
We will see it stopped, because what we did was connect to the container and stop the command that was running, so now the container has finished.
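If we want to disconnect from an attached container without stopping it, we can use the default detach key sequence instead of CTRL + C. A quick sketch (to_detach is just an illustrative name):
docker run -d -ti --name to_detach ubuntu bash
docker attach to_detach
# press CTRL + P followed by CTRL + Q to detach without stopping the container
docker ps -a      # to_detach should still appear as Up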
Step 9: Execute another command in a container
Let's see now how to execute a command in a container that is already running:
docker run -d -ti --name to_exec ubuntu bash
Now, let's run a command on that container. The exec command is similar to the run command, but it executes a command inside an already running container:
docker exec -ti to_exec bash
We are now inside the container created earlier:
ps -a
Here we will see the list of running processes inside the container.
Now let's exit the container, by typing 'exit' or 'CTRL + D', and let's check the status of the container:
docker ps -a
Now we will see that the container is still running, unlike in the previous step.
Step 10: Persistence in containers
We are going to see that containers do not keep data by default: they give us a service, but the data is NOT persistent. Let's create a container that we are going to leave running:
docker run -d -ti --name to_expire ubuntu bash
Now, let's create a folder in the container and check that this folder exists:
docker exec to_expire mkdir test
docker exec to_expire ls
We'll see that the folder exists, but what happens if we delete the container and create a new one?
docker container stop to_expire
docker container rm to_expire
docker run -d -ti --name to_expire ubuntu bash
docker exec to_expire ls
We can check that the folder created earlier does not exist once the container is removed. If we want to keep the data persistent, we have several options:
Step 10.1: Maintain the persistence by changing the base image
We repeat the previous steps, but before stopping the container, we are going to save the state of the container in the image:
docker run -d -ti --name to_save ubuntu bash
docker exec to_save mkdir test
docker commit to_save ubuntu:with_directory_test
docker stop to_save
Now let's run a new container from that image to test that the data has persisted:
docker run -d -ti --name to_save_v2 ubuntu:with_directory_test bash
docker exec to_save_v2 ls
The commit creates an ubuntu image with the tag "with_directory_test".
Step 10.2: Maintain persistence by creating volumes
The other option to maintain data is to mount a volume when running the container. First we will create a folder on our machine that will act as a volume, which will be the one we then mount for docker:
mkdir /home/ubuntu/volume
Now we are going to mount the folder when running the container. We do it with the -v option, which has 3 fields separated by ':':
- The volume on our machine.
- Where the volume will be mounted inside the container
- Mounting options, e.g. rw (read and write) or ro (read only)
docker run --rm -ti -v /home/ubuntu/volume:/volume:rw ubuntu bash
mkdir /volume/test
touch /volume/test/file
exit
Once we exit the container, we can see that the data is on our machine:
ls /home/ubuntu/volume/
ls /home/ubuntu/volume/test
The next time we run a container, if we mount that volume, the data will be there. In addition, this approach has the benefit of sharing data between our machine and the container in a simple way.
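Besides bind-mounting a folder of our machine, docker can also manage named volumes itself, so we don't have to choose a host path. A minimal sketch (mydata is a hypothetical volume name):
docker volume create mydata
docker run --rm -ti -v mydata:/volume ubuntu bash
# anything written under /volume is kept in the named volume between containers
docker volume ls
docker volume rm mydata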
Step 11: Create our own docker image
To create a container image with a Python hug application, we first create the folder that will contain the files that allow us to create the image.
mkdir miapp
cd miapp
We write a Dockerfile to define the image:
FROM ubuntu
ENV USERNAME="username"
RUN apt update
RUN apt install -y python3-hug
RUN mkdir /app
COPY endpoint.py /app
WORKDIR /app
CMD hug -f endpoint.py
Alternatively, you can create a Dockerfile using the python:alpine image as a base instead of Ubuntu.
And add the file endpoint.py which uses the Python hug framework with the following content:
import hug

@hug.get('/welcome')
def welcome(username="unknown"):
    """ Say welcome to username """
    return "Welcome " + username
This file is an example of using Python hug. If we make a request to /welcome?username="LSO", this will return the message "Welcome LSO".
Once we have the two files created in our virtual machine, we can build our image:
docker build -t app:v1 -f Dockerfile .
Let's explain what each line is for:
- FROM: bases our image on an existing one, in this case the ubuntu image.
- ENV: to create environment variables
- RUN: to execute commands, either to create a folder, install dependencies or any other option we need.
- COPY: to copy data from our machine to the image
- WORKDIR: to change the working directory.
- CMD: this command will be executed by default when starting a container from this image.
This will build a docker image based on ubuntu, in which we have saved our small application, and the application will run when we start a container based on that image. To check that the image has been created correctly, we list the images, and we should see an image with the name 'app' and tag 'v1':
docker images
Now let's create a container for our image, and we'll add a new option, -p 8001:8000. This option will cause the container's internal port 8000 to be exposed on port 8001 on our machine:
docker run --name my-app --rm -d -p 8001:8000 app:v1
Now let's test that our image and code works. On our machine we will do:
curl -X GET http://localhost:8001/welcome?username=LSO
We will check that the output of the curl command is "Welcome LSO".
Before we finish, let's check another small detail that we added to our image, the USERNAME environment variable:
docker exec -ti my-app bash
echo $USERNAME
It works correctly, but when running the container, we can change this environment variable as we saw in one of the previous steps:
docker run --rm -ti -e USERNAME=me app:v1 bash
echo $USERNAME
This will help us to have default options that the user can change at will, such as passwords or other details.
Step 12: Delete images
We will see that after all the tests we have been doing, we have many images, some of which will not be useful or will not be used again, so we can delete them to save space.
Just as with the containers, we have an option to delete images that are not used:
docker image prune
This will delete intermediate images that are sometimes generated when building our images, as well as images that we have tried to build and failed.
The other option, to delete images one at a time, is to use the docker rmi command with the image name or id. If this is the output of 'docker images':
REPOSITORY TAG IMAGE ID CREATED SIZE
app v1 18609c517d32 14 minutes ago 109MB
python alpine 39fb80313465 23 hours ago 98.7MB
debian latest 85c4fd36a543 13 days ago 114MB
hello-world latest fce289e99eb9 7 months ago 1.84kB
hello-world linux fce289e99eb9 7 months ago 1.84kB
We can remove the images from hello-world:
docker rmi hello-world
or
docker rmi fce289e99eb9
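When an image has several tags, as hello-world does above (latest and linux), removing it by id will complain that the image is referenced in multiple repositories, so it is easier to remove each tag by name. There is also a more aggressive variant of prune:
docker rmi hello-world:latest
docker rmi hello-world:linux
docker image prune -a    # removes every image not used by at least one container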
Step 13: Several docker at once (docker-compose)
Before we start, let's stop all the containers we have open, to leave the docker environment clean, which will come in handy for this part.
When we want to run several containers at the same time and have them easily connected to each other, we will use docker-compose, where with a simple configuration file in yaml we will have all the configuration of what we are going to run.
First we are going to install docker-compose:
sudo apt install docker-compose
Now let's see a small example where we explain all the details needed to create a docker-compose file. The default filename is docker-compose.yml, and the content will be similar to:
version: '3'
services:
  web:
    container_name: web
    restart: always
    build:
      context: .
    ports:
      - "8000:8000"
    volumes:
      - /home/ubuntu/volume:/volume:rw
    environment:
      USERNAME: test
  web2:
    container_name: web2
    build:
      dockerfile: Dockerfile
      context: .
    depends_on:
      - web
    command: touch web2
  web3:
    container_name: web3
    restart: on-failure
    image: pstauffer/curl
    depends_on:
      - web
    command: curl -X GET web:8000/welcome?username=web3
In the docker-compose file we define the different services to be executed; in our case we have defined three services: web, web2 and web3.
The first service, web:
- container_name: to give a default name to the container when it is created.
- restart: can be:
- "no": if the machine terminates, it is not restarted (default)
- always: whenever the container terminates, it tries to start again
- on-failure: only tries to restart if the container terminates on failure
- unless-stopped: always restarts the container unless it was explicitly stopped by the user
- It will use the Dockerfile created in step 11 to generate the image to use.
- It will expose port 8000 of the container on port 8000 of our machine.
- Mount a volume, it is done in a similar way to the command line.
- Create an environment variable
The second service, web2:
- It will also use the Dockerfile; the difference from the first service is that if the Dockerfile has a different name we must use this second form, since the first form only looks for a file named Dockerfile by default.
- depends_on means that, in order to run this service, its dependency has to be started first.
- We execute a command which replaces the default command of the image.
The third service, web3:
- It will use the pstauffer/curl image, which is a minimal image containing the curl command.
- restart: in this case, the container will keep restarting until its command finishes successfully.
- depends_on: the same as in service 2, it depends on the first service.
- We execute a command that calls service 1 from service 3.
Step 13.1: Build docker-compose
If there are services that have Dockerfiles, this command builds them beforehand, like doing a docker build for each service that contains a Dockerfile:
docker-compose build
This will generate the images needed to run all the services.
Step 13.2: Starting docker-compose
To start all the services, we will execute:
docker-compose up -d
After this we will be able to see the logs of all services:
docker-compose logs -f
Here we will see that service 3 has successfully called service 1.
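We can also follow the logs of a single service by naming it, or get a compact status of the services defined in the file:
docker-compose logs -f web3
docker-compose ps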
Now we will see how docker-compose has created the containers:
docker ps -a
The output will look similar to the following:
CONTAINER ID        IMAGE            COMMAND                    CREATED         STATUS                      PORTS                    NAMES
219a4118244c pstauffer/curl "curl -X GET web:800..." 6 seconds ago Exited (0) 3 seconds ago web3
c97e894cb3ae ubuntu_web2 "touch web2" 6 seconds ago Exited (0) 4 seconds ago web2
f35d034204fc ubuntu_web "/bin/sh -c 'hug -f ..." 6 seconds ago Up 5 seconds 0.0.0.0:8000->8000/tcp web
Here we see several details:
- The name we gave it in the docker-compose worked correctly.
- We see that the commands are correct as well.
- web2 is stopped because it has no restart policy, so it simply exits when its command finishes executing
- web3 is stopped because the command has finished with a correct output.
Let's check now that in the web service everything is correct:
docker exec -ti web bash
echo $USERNAME
ls /volume
exit
We can verify that both changing the environment variable and creating a volume worked correctly.
Step 13.3: Stop docker-compose
To stop all services:
docker-compose down
We check:
docker ps -a
We see that all the containers have disappeared.
Step 14: Exercises
Exercise 1: Variant of httpd
Build an httpd-hello-world image from the httpd image. The index.html file in the htdocs/ folder has to display the hello world message.
docker pull httpd
Edit the Dockerfile file:
nano Dockerfile
Add this content:
FROM httpd
COPY index.html htdocs/index.html
Edit the index.html file:
nano index.html
Add this content:
<html>hello world</html>
Build image:
docker build -t mihttpd -f Dockerfile .
Now let's create a container of our image:
sudo docker run --name mihttpd -d -p 8080:80 mihttpd
Now let's test that our image and code works:
curl -X GET http://localhost:8080
It should output something like this:
<html>hello world</html>
Exercise 2: flask application
Flask is a python framework for implementing web applications. Create a "miapp-flask" image from the "python" image:
- Install flask by: pip install flask
- Create the "app" folder
- Set the FLASK_APP variable to "hello.py".
- Add the file "hello.py"
- Set the working directory to "/app".
The hello.py file contains a "hello world" for Flask:
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello World!"

if __name__ == "__main__":
    app.run()
Solution with alpine:
FROM python:alpine
RUN mkdir /app
RUN pip install flask
ENV FLASK_APP="app/hello.py"
COPY hello.py /app
WORKDIR /
CMD flask run --host=0.0.0.0
Solution with Ubuntu:
FROM ubuntu:bionic
RUN apt-get update
RUN apt-get -y install python python-pip wget
RUN pip install Flask
ENV FLASK_APP="app/hello.py"
RUN mkdir /app
COPY hello.py /app
CMD flask run --host=0.0.0.0
Exercise 3: mysql
IN THE VIRTUAL MACHINE:
mkdir miapp
cd miapp
sudo docker pull mysql
nano Dockerfile
FROM mysql
ENV MYSQL_ROOT_PASSWORD="thepassswordwhatever"
RUN mkdir /app
WORKDIR /app
CTRL+O , ENTER AND CTRL+X (to save Dockerfile and exit)
sudo docker build -t mrfdocker -f Dockerfile .
sudo docker run --name mymrf --rm -p 8001:3306 mrfdocker
sudo docker images
ON THE HOST TERMINAL:
mysql installation:
sudo apt install mysql-client-core-5.7
sudo apt-get install mysql-server
sudo service mysql start
To login:
sudo mysql -h 127.0.0.1 -P 8001 -u root -p
Once inside mysql:
mysql> show databases;
To exit:
exit
Exercise 5: python pyramid
Pyramid is a python framework for implementing web applications. Create a "miapp-pyramid" image:
- Use the Ubuntu image as a reference, install the python and python-pip packages.
- Install Pyramid using: pip install pyramid
- Create the "app" folder
- Add the "hello.py" file
- Set the working directory to "/app".
- Launch the application with the command: python hello.py
- Check that it works with curl, the path to the web is http://127.0.0.1:8000/hello, assuming you have used port 8000 to expose the service.
The hello.py file contains a "hello world" for Pyramid:
from wsgiref.simple_server import make_server
from pyramid.config import Configurator
from pyramid.response import Response

def hello_world(request):
    print('Request inbound!')
    return Response('Docker works with Pyramid!')

if __name__ == '__main__':
    config = Configurator()
    config.add_route('hello', '/')
    config.add_view(hello_world, route_name='hello')
    app = config.make_wsgi_app()
    server = make_server('0.0.0.0', 6543, app)
    server.serve_forever()
This program receives requests on port 6543.
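A possible Dockerfile for this exercise, following the same pattern as the image from step 11 (the package names and the port mapping are assumptions that may need adapting to your Ubuntu version):
FROM ubuntu:bionic
RUN apt-get update
RUN apt-get install -y python python-pip
RUN pip install pyramid
RUN mkdir /app
COPY hello.py /app
WORKDIR /app
CMD python hello.py
And a possible way to build, launch and test it:
docker build -t miapp-pyramid -f Dockerfile .
docker run --rm -d -p 8000:6543 miapp-pyramid
curl http://127.0.0.1:8000/     # the route defined in hello.py is '/'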
Exercise 6
Create an Ubuntu derived image, and install the apache2, php and libapache2-mod-php packages. Activate the php module for apache2 using the command:
a2enmod php
Create the /var/www/php folder.
Create the index.php file
<?php
Print "Hello, World!"
?>
and copy it to /var/www/php/.
Create the 000-default.conf file and copy it to /etc/apache2/sites-enabled/.
This file contains:
<VirtualHost *:80>
DocumentRoot /var/www/php
<Directory /var/www/php/>
Options Indexes FollowSymLinks MultiViews
AllowOverride All
Order deny,allow
Allow from all
</Directory>
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
You have to launch apache with the command: apachectl -D FOREGROUND
After creating the image, launch it by mapping port 8888 to port 80, test that you can access http://127.0.0.1:8888/php/ using curl
Solution:
FROM ubuntu:latest
RUN apt-get update
RUN apt-get install -y tzdata
RUN apt-get install -y apache2
RUN apt-get install -y php
RUN apt-get install -y libapache2-mod-php
RUN a2enmod php7.2
COPY index.php /var/www/php
COPY 000-default.conf /etc/apache2/sites-enabled/
CMD apachectl -D FOREGROUND
And to launch it (building the image first; myphp is an assumed image name):
docker build -t myphp -f Dockerfile .
docker run --name myphp --rm -d -p 8888:80 myphp
And to test it:
curl -X GET http://127.0.0.1:8888/php/
Exercise 7
NGINX is a popular open-source web server. A possible solution sketch is given after the list of steps.
- Get the nginx container image.
- Run a container instance based on this image (in interactive mode and with display output on the terminal), use the name www for this container. Map local port TCP/8888 to container port TCP/80. Verify that the image is running and show logs.
- Copy an index.html file that contains <html>hello world</html> into /usr/share/nginx/html
- Create a container image which automates all of the steps described above, using the name mynginx. Run an instance of it with a port mapping of TCP/8888 to TCP/80. Verify that your image works.
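A possible solution sketch. First, the manual steps (index.html is the local file with the hello world content; the nginx image serves /usr/share/nginx/html by default):
docker pull nginx
docker run -ti --name www -p 8888:80 nginx
# from another terminal:
docker logs www
docker cp index.html www:/usr/share/nginx/html/index.html
curl http://localhost:8888/
And a Dockerfile that automates the same steps, plus the commands to build and run the image:
FROM nginx
COPY index.html /usr/share/nginx/html/index.html
docker build -t mynginx -f Dockerfile .
docker run --rm -d -p 8888:80 mynginx
curl http://localhost:8888/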
Exercise 8
- Get the ubuntu container image.
- Install the python3-bottle package. Bottle is a Python framework for web applications.
- Create a simple .py hello world program for Python Bottle:
from bottle import route, run
@route('/')
@route('/hello/<name>')
def greet(name='Stranger'):
    return 'Hello %s, how are you?' % name
run(host='localhost', port=8080, debug=True)
- Run the program.
- Create a container image that automates the steps above.
- Deploy the container image, mapping local port TCP/8888 to container port TCP/8080. Use curl to validate that the container is running. (A possible solution sketch is given below.)
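A possible solution sketch. For the container to be reachable from the host, the program should listen on 0.0.0.0 instead of localhost; assuming the program is saved as hello.py next to the Dockerfile:
FROM ubuntu
RUN apt update
RUN apt install -y python3-bottle
RUN mkdir /app
COPY hello.py /app
WORKDIR /app
CMD python3 hello.py
And to build, run and test it:
docker build -t mybottle -f Dockerfile .
docker run --rm -d -p 8888:8080 mybottle
curl http://localhost:8888/hello/LSO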