Jan 26, 2018
updated at: Apr 4, 2019
Docker ideally focuses on one app per container: one process in one container with the libraries and files it needs, nothing else. Docker makes it really easy to run containers. Originally it was based on LXC; now it has its own libraries and REST API, and it's written in Go.
The core idea is isolating systems, for example having a Java 7 and a Java 8 application running on the same machine.
A good place to learn about Docker is Docker Labs on GitHub.
$ docker pull ubuntu
You can set the alias below. With it, you can get the ID of the last-run container, which makes it easier to manage.
$ alias dl='docker ps -l -q'
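For example, you can then combine the alias with other commands (hypothetical usage):
$ docker logs $(dl)
$ docker rm $(dl)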
Run a container. Use Ctrl-p + Ctrl-q to detach from the tty while keeping the container running.
$ docker run -i -t image
$ docker run -i -t ubuntu /bin/bash
$ docker run -i -t --name=container1 ubuntu /bin/bash
$ docker run -d -p 80:80 image
$ docker run --rm -it alpine sh
$ docker stop "containerName"
$ docker start "containerName"
$ docker restart "containerName"
$ docker attach "containerName"
$ docker rm "containerName"
$ docker cp file1 "containerName":/file1
$ docker cp "containerName":/file1 .
One-liners to stop and remove all containers or images.
Stop all containers:
$ docker stop $(docker ps -a -q)
Remove all containers:
$ docker rm $(docker ps -a -q)
Remove all images:
$ docker rmi $(docker images -q)
Remove all exited containers:
$ docker ps -a | grep Exit | cut -d ' ' -f 1 | xargs docker rm
$ docker ps
$ docker ps -a
$ docker images
$ docker inspect "containerName"
$ docker top "containerName"
$ docker logs "containerName"
Docker container images can be stored in either a public or a private registry. Container images need to be signed so that the client knows an image comes from a trusted source and has not been tampered with. The content publisher takes care of signing the container image and pushing it to the registry.
Only allow pulling trusted (signed) images:
export DOCKER_CONTENT_TRUST=1
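With content trust enabled, Docker verifies image signatures on pull and refuses unsigned images, for example:
$ DOCKER_CONTENT_TRUST=1 docker pull alpine:latest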
Docker runs as root by default, but with user namespaces you can map the container's root UID to an unprivileged UID on the host. First you need to stop the Docker daemon, then start it again with remapping enabled:
$ dockerd --userns-remap=default &
Then you can run as a user:
$ docker run --rm --user 1000:1000 alpine id
$ docker run --rm -it --user 1000:1000 alpine sh
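When remapping is active, a process that looks like root (or the given UID) inside the container actually runs under a subordinate UID range on the host. With --userns-remap=default the remap user is called dockremap; the exact range in /etc/subuid varies per host:
$ grep dockremap /etc/subuid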
The Linux kernel is able to break down the privileges of the root user into distinct units referred to as capabilities. For example, the CAP_CHOWN capability is what allows the root user to make arbitrary changes to file UIDs and GIDs. The CAP_DAC_OVERRIDE capability allows the root user to bypass kernel permission checks on file read, write and execute operations. Almost all of the special powers associated with the Linux root user are broken down into individual capabilities.
$ docker run --rm -it --cap-add $CAP alpine sh
$ docker run --rm -it --cap-drop ALL --cap-add CHOWN alpine chown nobody /
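If you drop all capabilities without adding CHOWN back, the same chown fails with "Operation not permitted":
$ docker run --rm -it --cap-drop ALL alpine chown nobody /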
Seccomp is a sandboxing facility in the Linux kernel that acts like a firewall for system calls (syscalls). It uses Berkeley Packet Filter (BPF) rules to filter syscalls and control how they are handled. These filters can significantly limit a container's access to the Docker host's Linux kernel, especially for simple containers/applications.
You will need a host where seccomp is enabled.
$ docker run -it --rm --security-opt seccomp=default.json alpine sh
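For comparison you can also disable seccomp filtering entirely, which is generally not recommended:
$ docker run -it --rm --security-opt seccomp=unconfined alpine sh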
AppArmor (Application Armor) is a Linux Security Module (LSM). It protects the operating system by applying profiles to individual applications. In contrast to managing capabilities with --cap-drop and syscalls with seccomp, AppArmor allows for much finer-grained control. For example, AppArmor can restrict file operations on specified paths.
You will need a host where AppArmor is enabled in the kernel. If so, Docker automatically applies its default AppArmor profile to containers.
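You can also select a profile explicitly; docker-default is the profile Docker applies by default:
$ docker run --rm -it --security-opt apparmor=docker-default alpine sh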
Docker can build a local image by reading instructions from a Dockerfile, using a command like:
$ docker build .
$ docker build -f /path/to/Dockerfile .
$ docker build -t "imagename1" .
Before Docker sends the build context to the Docker daemon (for building an image), it first checks for a .dockerignore file in the context directory. Here is an example of the patterns you can add to a .dockerignore file:
*/temp*    --> Excludes files whose name starts with temp, one level below the root
*/*/temp*  --> Excludes files whose name starts with temp, two levels below the root
temp?      --> Excludes files in the root directory whose names are temp + one character
*.md       --> Excludes all markdown files
!README.md --> Exception, include README.md
Command | Meaning | Syntax |
---|---|---|
FROM | Set the base image for the container | FROM <image>:<tag> |
MAINTAINER | The author of the image | MAINTAINER <name> |
RUN | Execute commands in new layer | RUN <command> |
CMD | Provide defaults for an executing container | CMD ["exe", "arg"..] |
LABEL | Add metadata to image | LABEL <key>=<value> |
EXPOSE | Listen on the network ports | EXPOSE <port> |
ENV | Set environment variables | ENV <key> <value> |
ADD/COPY | Copies files from src to dest | ADD <src>... <dest> |
ENTRYPOINT | Configure container to run as executable | ENTRYPOINT ["exe", "arg"..] |
VOLUME | Create externally mounted volumes | VOLUME ["/data"] |
USER | Sets the user name or UID | USER <user> |
WORKDIR | Sets the working dir for instructions | WORKDIR /path/to/workdir |
ONBUILD | Instruction triggered when the image is used in another build | ONBUILD [INSTRUCTION] |
############################################################
# dockerfile to build a LAMP stack
# based on Ubuntu
# not complete
############################################################
# set the base image to Ubuntu
FROM ubuntu:trusty
# File Author / Maintainer
MAINTAINER Sebastiaan <sebastiaan.vanhoecke@hotmail.be>
################## BEGIN INSTALLATION ######################
# update the repository sources list
RUN apt-get update && \
apt-get -y install supervisor git apache2 libapache2-mod-php5 \
mysql-server php5-mysql pwgen php-apc php5-mcrypt
################## BEGIN CONFIGURATION ######################
echo "ServerName localhost" >> /etc/apache2/apache2.conf
# adding files to the image
ADD file1.sh /file1.sh
ADD file2.sh /file2.sh
ADD file3.conf /etc/dir1/dir2/file3.conf
RUN chmod 777 /*.sh
RUN git clone repo.git /app
RUN mkdir -p /app && rm -fr /var/www/html && ln -s /app /var/www/html
##################### CONFIGURATION END #####################
# set environment variables to configure
ENV var1 10
# expose the default ports
EXPOSE 80 3306
# run the last command
CMD ["/run.sh"]
############################################################
# dockerfile to build a NoVNC ubuntu docker for the browser
# based on Ubuntu
# go to http://localhost:6080
############################################################
# set the base image to Ubuntu
FROM ubuntu:trusty
# file Author / Maintainer
MAINTAINER Sebastiaan <sebastiaan.vanhoecke@hotmail.be>
################## BEGIN CONFIGURATION ######################
ENV DEBIAN_FRONTEND noninteractive
ADD startup.sh /startup.sh
RUN apt-get update -y && \
apt-get install -y git x11vnc wget python python-numpy \
unzip Xvfb firefox openbox geany menu && \
cd /root && git clone https://github.com/kanaka/noVNC.git && \
cd noVNC/utils && git clone https://github.com/kanaka/websockify \
websockify && \
cd /root && \
chmod 0755 /startup.sh && \
apt-get autoclean && \
apt-get autoremove && \
rm -rf /var/lib/apt/lists/*
##################### CONFIGURATION END #####################
# set environment variables to configure
ENV var1 10
# expose the default ports
EXPOSE 6080
# run the last command
CMD /startup.sh
startup.sh
#!/bin/bash
#save as startup.sh
export DISPLAY=:1
Xvfb :1 -screen 0 1600x900x16 &
sleep 5
openbox-session&
x11vnc -display :1 -nopw -listen localhost -xkb -ncache 10 -ncache_cr -forever &
cd /root/noVNC && ln -s vnc_auto.html index.html && ./utils/launch.sh --vnc localhost:5900
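Building and running the NoVNC image could look like this (novnc is just an example tag), after which you can browse to http://localhost:6080:
$ docker build -t novnc .
$ docker run -d -p 6080:6080 novnc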
docker-compose is a utility to manage Docker containers; it is an easier way of deploying linked containers. For example, if you have an nginx container and you want to link it to a php-fpm container, you can do this easily with docker-compose.
docker-compose configuration files are written in YAML; the default filename is docker-compose.yml.
An example:
version: '3'
networks:
  webproxy:
    driver: overlay
services:
  nginx:
    image: nginx:latest
    ports:
      - "80:80"
    volumes:
      - ./default.nginx:/etc/nginx/conf.d/default.conf
      - ./http:/http
    links:
      - php
    networks:
      - webproxy
  php:
    image: php:7-fpm
    volumes:
      - ./http:/http
    networks:
      - webproxy
NOTE: this example uses version 3 of the Compose file format, which is not backwards compatible with earlier versions.
To run:
$ docker network create webproxy # needed to set up the network
$ docker-compose up
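Some other docker-compose commands that come in handy, run from the directory that contains docker-compose.yml:
$ docker-compose up -d     # start the services in the background
$ docker-compose ps        # list the containers of this project
$ docker-compose logs php  # show logs of the php service
$ docker-compose down      # stop and remove the containers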
With version 3 of the Compose file format you can also deploy a Docker stack. The difference between docker-compose and a Docker stack is that docker-compose focuses on containers while a Docker stack focuses on services. A Docker stack is used to define a full application stack and scale it.
To make use of Docker stacks you will need to initialize a Docker swarm cluster:
$ docker swarm init
Next you can modify the YAML file from docker-compose:
version: '3'
networks:
  webproxy:
    driver: overlay
services:
  nginx:
    image: nginx:latest
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./default.nginx:/etc/nginx/conf.d/default.conf
      - ./http:/http
    links:
      - php
    deploy:
      replicas: 5
    networks:
      - webproxy
  php:
    image: php:7-fpm
    volumes:
      - ./http:/http
    deploy:
      replicas: 5
    networks:
      - webproxy
Now we can make use of the deploy section, which scales our services to as many replicas as we want.
Run:
$ docker stack deploy -c docker-compose.yml <stackname>
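Some related commands to inspect and remove the stack afterwards (using whatever <stackname> you chose above):
$ docker stack services <stackname>
$ docker stack ps <stackname>
$ docker stack rm <stackname>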
If you want to make the most of Docker swarm you will need some virtual machines. Let's say you have 4 VMs.
In a Docker swarm one VM is always the manager; on this VM you will run:
$ docker swarm init
NOTE: you need to make sure the VMs are all in the same subnet and that ports 2377, 7946 and 4789 are reachable.
Next you join the other VMs to the swarm. docker swarm init prints the exact join command, including a token:
$ docker swarm join --token <token> <manager-ip>:2377
That's it, now you can scale services over the VMs! What's cool about Docker swarm is that if one VM goes down, the swarm takes care of the affected services and rebalances them over the other VMs.
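On the manager you can verify that all nodes have joined:
$ docker node ls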
$ docker service create --name "test" --publish 80:80 <image>
$ docker service ps <name>
$ docker service update --replicas=10 <name>
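To list all services and their replica counts:
$ docker service ls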
SELinux or another MAC (mandatory access control) system can cause problems when starting Docker containers. Look at the SELinux (or other MAC) audit logs for more info. You can also look at the Docker daemon logs.
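For example, on a host with auditd and systemd these are common places to look (assuming auditd and the docker systemd unit are in use):
$ sudo ausearch -m avc -ts recent   # recent SELinux denials
$ journalctl -u docker.service      # Docker daemon logs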