Sunday, March 27, 2016

Installing docker.io on Mint Linux 17.2 (Rafaela)


Mint Linux is the OS I use as my desktop system, and sometimes I quickly need a 'little server' to do something. One of the solutions for spinning up that 'little server' is docker.

The Key Concepts
Daemon and Client
On Linux the docker engine runs directly on the host machine. You interact with the docker engine, called the docker daemon, via a docker client. It is the daemon that interacts with the containers, not the client.

If you were running on a Windows machine, you would first spin up a virtual machine that acts as the Linux host.

Localhost and Dockerhost
In the network world the concept localhost means the local computer; the term dockerhost means the machine that runs the containers. On a desktop setup like this they are one and the same machine.

You can use standard addressing like localhost:8000 or 0.0.0.0:8000.
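For example, once docker is installed you can publish a container port on the dockerhost and reach it via localhost (the nginx image and the port numbers are just placeholders for this illustration):

sudo docker run -d -p 8000:80 nginx   # publish container port 80 on port 8000 of the dockerhost
curl http://localhost:8000            # the containerized web server answers on localhost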

Docker Infrastructure
The infrastructure consists of images, registries and containers.

Images are read-only templates. Docker images are the build component of docker. You download or build the image you want to use. An image can come with pre-installed software, so you only need to push a configuration when you run it.

Registries are the places where you store images. You pull images from a registry and distribute images via a registry.

Containers are the things that actually run the image. A container can be started, stopped, moved and deleted.
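To make that concrete, a typical lifecycle looks roughly like this (the ubuntu image and the container name test are just examples):

sudo docker pull ubuntu                          # pull an image from a registry (Docker Hub by default)
sudo docker run -d --name test ubuntu sleep 300  # start a container from that image
sudo docker ps                                   # list running containers
sudo docker stop test                            # stop the container
sudo docker rm test                              # delete the container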

How Do Images Work?
Docker images consist of multiple layers, stacked on top of each other with a virtual file system. Every action on the base image creates a new layer, and the relation of each layer to the other layers is recorded. The result is that distribution is rather lightweight: to update an image you only need to update the layers that are out of date.

Every image starts from a base image; in my next post that will be an Ubuntu base. By default base images are downloaded from Docker Hub. To build an image the system uses a file with instructions called the Dockerfile. Each instruction basically creates a new layer in the image.
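As a minimal sketch of what such a file can look like (a generic example, not the image from the next post; the nginx package is just a placeholder):

FROM ubuntu:14.04                                # the base image layer
RUN apt-get update && apt-get install -y nginx   # each instruction adds a layer on top
CMD ["nginx", "-g", "daemon off;"]               # the main process when a container starts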

The images you pull or build are stored locally on the dockerhost. Docker Hub can be considered a public docker registry.

How Does A Container Work?
A container consists of the operating system, user-added files, and meta-data. The image tells docker what is inside the container and what process to run when the container is launched. The image itself is read-only, but when a container is started a read-write layer is added on top in which the application can run.

An important concept to grasp is that a running container has one process that is considered the main process; when this process is killed or exits, the container stops running.
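You can see this behaviour with a quick test (sleep stands in for a real main process here):

sudo docker run -d --name demo ubuntu sleep 10   # sleep is the container's main process
sudo docker ps                                    # the container shows up as running
sleep 15                                          # wait for the main process to exit
sudo docker ps                                    # the container is no longer running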

Installation

To spin up docker on Mint I did the following:
1. Figure out the kernel version
sudo uname -r

2. Figure out the Ubuntu version
sudo uname -a
As we can see from the kernel build string, this Mint Linux release is based on Ubuntu 14.04.1.
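If the uname output does not make the base release obvious, Mint should also record its Ubuntu base in a separate lsb-release file (the exact path is an assumption and may differ per release):

cat /etc/upstream-release/lsb-release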

3. Update the APT repo's
sudo apt-get update

4. Install apt-transport-https and ca-certificates
sudo apt-get install apt-transport-https ca-certificates

5. Add the docker key to the system
sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D

6. Make the docker repo
sudo vi /etc/apt/sources.list.d/docker.list

Add the following repo (Ubuntu 14.04 is called Trusty Tahr)
deb https://apt.dockerproject.org/repo ubuntu-trusty main

Save and close /etc/apt/sources.list.d/docker.list

7. Update the repos
sudo apt-get update

8. Install the linux-image-extra package (a prerequisite on Ubuntu 14.04)
sudo apt-get install linux-image-extra-$(uname -r)

9. Install apparmor (a prerequisite on Ubuntu 14.04)
sudo apt-get install apparmor

10. Install the docker engine
sudo apt-get install docker-engine

11. Start the service
sudo service docker start

12. Test the installation with a hello-world
sudo docker run hello-world

This outputs
Hello from Docker.
This message shows that your installation appears to be working correctly.

...

To run docker without the need for sudo you can add the user who has sudo rights to the docker group. Keep in mind that the group change only takes effect after that user logs out and back in.

sudo usermod -aG docker user
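To pick up the new group membership without logging out, you can start a shell under the docker group and test it (replace user above with your own username):

newgrp docker            # applies the new group membership in a new shell
docker run hello-world   # should now work without sudo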

If you have tried to execute without the appropriate permissions you will get the following message:
Cannot connect to the Docker daemon. Is 'docker daemon' running on this host?


The service comes automatically online after a (re)boot.

Memory Tuning  
According to the docker documentation, enabling memory and swap accounting has a memory overhead of around 1% of the total available memory and a performance degradation of around 10%. To enable it you can edit /etc/default/grub.

sudo vi /etc/default/grub

and add the value cgroup_enable=memory swapaccount=1 to GRUB_CMDLINE_LINUX.

For the change to take effect you need to run update-grub and reboot the machine.
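A minimal sketch of the end result (any values already present on the GRUB_CMDLINE_LINUX line should of course be kept):

GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"

sudo update-grub
sudo reboot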

Firewall

As always, it is recommended to run the local firewall (ufw). If you only connect from the host to the containers you do not need to change anything; if you want to reach the docker daemon from another host you'll need to open port 2376/tcp or 2375/tcp. The difference between the two is that 2376 is used with TLS and 2375 isn't.

Note that all communication over 2375 will thus be unencrypted.

To set the forward policy you edit /etc/default/ufw and set DEFAULT_FORWARD_POLICY="ACCEPT".
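If you prefer a one-liner, something like this should do it (this assumes the policy is still at the ufw default of DROP):

sudo sed -i 's/DEFAULT_FORWARD_POLICY="DROP"/DEFAULT_FORWARD_POLICY="ACCEPT"/' /etc/default/ufw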

You need to reload ufw to take effect
sudo ufw reload

Finally you need to allow the incoming connections on the right ports
sudo ufw allow 2376/tcp
sudo ufw allow 2375/tcp

DNS
Ubuntu-based desktop systems are configured in such a way that containers can't use the host's /etc/resolv.conf by default: it points to a local dnsmasq resolver (127.0.1.1) that is not reachable from inside a container.
To work around this on Ubuntu 14.04 you need to specify the DNS server that docker should use via DOCKER_OPTS in /etc/default/docker. Set your own DNS server's IP address instead of the Google IP addresses in the commented-out example line.
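As a sketch, the line in /etc/default/docker ends up looking like this (192.168.1.1 is a placeholder for your own DNS server), followed by a restart so the daemon picks up the change:

DOCKER_OPTS="--dns 192.168.1.1"

sudo service docker restart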

Sunday, March 20, 2016

Wifipwdump.sh

A small post to tell you about a little script I wrote to troubleshoot an issue I had with a wireless network the other day. Nobody was able to tell me what the WiFi password was, so I had to dump it from a system that was already connected.

Since I will most likely not remember how I did it the next time, I wrote a little script for it, which you can find at https://github.com/Xiobe/scripts/blob/master/wifipwdump.sh

Sunday, March 13, 2016

Timestamping history

Being an incident response person means you are sometimes also asked to help troubleshoot things. The other day I was looking at a config and wanted to know what the sysadmin had done on his Linux box before calling me in.

After I helped him (it was a typo in a config file) I told him something I only learned half a year ago but that I think is pretty handy: you can actually timestamp the commands in the history file.

The magical line is:
echo 'export HISTTIMEFORMAT="%d/%m/%y %T "' >> ~/.bash_profile
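To make it active in the shell you are already in and check the result (the entries you see will of course be your own commands):

source ~/.bash_profile   # pick up the new setting in the current shell
history | tail -n 3      # newly recorded entries now show date and time in front of each command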
I also put it in my Dockerfile with the following instruction:
ENV HISTTIMEFORMAT="%d/%m/%y %T "