Monday, November 28, 2016

makepasswd generating passwords on linux

I was writing a script the other day and had to generate a password, and found the nifty tool called makepasswd.

makepasswd is a command that generates true random passwords using /dev/random.

To install you do
sudo apt-get install makepasswd

To generate a password you do

makepasswd
and if you want a 16 character password you do

makepasswd --chars 16
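If makepasswd is not available, something similar can be had straight from /dev/urandom. This is only a sketch of an alternative, not part of makepasswd itself:

```shell
# Sketch: build a 16-character alphanumeric password from /dev/urandom
# (an alternative when makepasswd is not installed)
pw=$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 16)
echo "$pw"
```

Note that /dev/urandom does not block like /dev/random can, which is usually what you want in a script.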

Monday, November 14, 2016

FIR (fast incident response) in docker

FIR (Fast Incident Response) is a project by CERT Société Générale. It is a nice system for incident tracking and I have been using it on a regular basis for over a year now. After a year of daily use, I gathered the users and a series of issues and wanted features were expressed.

To make things go forward in an easy way I decided it was time to dockerize the installation so the end users can give quick feedback on features under development.

Although there is a Dockerfile in the repo, I decided to make my own based on the existing one:

# Dockerfile to build FIR container
# Original Dockerfile by Kyle Maxwell
# to build: docker build -t fir .
# to run: docker run -d -p 8000:8000 fir
# webinterface: http://x.x.x.x:8000
# default administrator: admin
# default password:  admin

# Based on ubuntu:latest
FROM ubuntu:16.04
MAINTAINER Erik Vanderhasselt

# Set environment variables
ENV DEBIAN_FRONTEND noninteractive

# Upgrade Ubuntu
RUN apt-get update && \
  apt-get dist-upgrade -y && \
  apt-get autoremove -y && \
  apt-get clean

# Set the timezone

RUN ln -fs /usr/share/zoneinfo/Europe/Brussels /etc/localtime

RUN dpkg-reconfigure -f noninteractive tzdata

# Install dependencies
RUN apt-get install -y python-dev
RUN apt-get install -y python-pip
RUN apt-get install -y python-lxml
RUN apt-get install -y git
RUN apt-get install -y libxml2-dev
RUN apt-get install -y libxslt1-dev
RUN apt-get install -y libz-dev

# Install the latest version of pip
RUN pip install --upgrade pip

# create the user and group
RUN groupadd -r fir
RUN useradd -r -g fir -d /home/fir -s /usr/sbin/nologin -c "FIR user" fir

# Download FIR from Github
RUN mkdir /home/fir
WORKDIR /home/fir
RUN git clone https://github.com/certsocietegenerale/FIR.git
RUN chown -R fir:fir /home/fir

# install the requirements
WORKDIR /home/fir/FIR
# remove psycopg2==2.6.2 from requirements.txt since we are not using PostgreSQL
RUN sed '/^psycopg2/d' /home/fir/FIR/requirements.txt > /home/fir/FIR/req1.txt
# run pip
RUN pip install -r /home/fir/FIR/req1.txt

# prepare to run
USER fir
ENV HOME /home/fir
WORKDIR /home/fir/FIR
RUN ./manage.py migrate
RUN ./manage.py loaddata incidents/fixtures/seed_data.json
RUN ./manage.py loaddata incidents/fixtures/dev_users.json


# make it run
ENTRYPOINT ["/home/fir/FIR/manage.py"]
CMD ["runserver", "0.0.0.0:8000"]

To build the container you do sudo docker build -t fir .
To run the container you do sudo docker run -d -p 8000:8000 fir
To access FIR you point your browser to http://localhost:8000; the default login is admin and the default password is admin too.

Now you have a nice system to record your incidents, which is a good start, but you also need incident response procedures. If you have no idea what I am talking about, I recommend you read up on the documents written by ENISA, NIST, etc.

Monday, October 31, 2016

Adding disks to an LVM

One of my virtual machines ran out of disk space the other day because I wasn't sure of disk sizing when I initially started playing with it. The solution was simple: the LVM volume had to be extended. This is how you do it:

  1. sudo apt-get install system-config-lvm
  2. sudo pvcreate /dev/your_disk
  3. sudo vgextend VG_Name /dev/your_disk
  4. sudo lvextend -l +100%FREE LV_PATH
  5. sudo resize2fs LV_PATH
  6. sudo init 6
That is it, simple right?

To determine the disk(s) you want to add you do ls /dev/sd*. It will return your disks; you will probably want to add the disks with no numbers at the end.

To figure out your volume group (VG_NAME) you do sudo vgdisplay and to figure out the logical volume path (LV_PATH) you do sudo lvdisplay.
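The whole sequence can be wrapped in a small helper. This is just a sketch: the disk, volume group name and LV path below are example values, and DRY_RUN=1 prints the commands instead of executing them, so nothing is touched until you run it for real.

```shell
#!/bin/sh
# Sketch: run the LVM extension steps in order.
# Set DRY_RUN=1 to print the commands instead of executing them with sudo.
extend_lv() {
  disk="$1"; vg="$2"; lv="$3"
  run() {
    if [ "${DRY_RUN:-0}" = "1" ]; then echo "$@"; else sudo "$@"; fi
  }
  run pvcreate "$disk"              # initialize the new disk for LVM
  run vgextend "$vg" "$disk"        # add it to the volume group
  run lvextend -l +100%FREE "$lv"   # grow the LV into the new free space
  run resize2fs "$lv"               # grow the ext filesystem to match
}

# Example with hypothetical names (check vgdisplay/lvdisplay for yours):
DRY_RUN=1 extend_lv /dev/sdb ubuntu-vg /dev/ubuntu-vg/root
```

Run it without DRY_RUN only once you have verified the disk and names against vgdisplay and lvdisplay output.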

Monday, October 17, 2016

Virtualbox guest additions on ubuntu

When I want to try something out I often use a VirtualBox VM to play in. My guest operating system is often Ubuntu. One of the things you want to do is share a folder with the guest operating system, and for that you need to install the VirtualBox guest additions.

Where in the past I used to work with the additions CD-ROM, I now use the package that Ubuntu offers called virtualbox-guest-additions-iso. It is important to be aware of the fact that Ubuntu has split things up a bit, and thus you have other packages to install too if you need certain functionality, like virtualbox-guest-dkms, virtualbox-guest-x11 and virtualbox-guest-utils.

These virtualbox-guest packages are DKMS aware, which means their kernel modules are rebuilt automatically when the kernel is updated, without you having to reinstall them.

A common operation is to share a directory with the virtual machine. You do this by setting up a directory and permanently mounting it. It will be mounted in /media.

Monday, October 3, 2016

Can you hack my mac?

A couple of weeks back this was the question I got by text message. It was from a friend who had some issues and had asked one of his other friends to have a look at it; since then he was totally locked out.

This is the procedure I used to access his Mac.

Removing the setup file.
I booted the system with Command (that weird Apple key for Windows users) + S. This boots the system into single user mode and gives you a terminal.

Next I did a file system check with fsck -fy. The file system was ok.
The following step was to mount the root drive as writable:
mount -uw /

Finally I renamed the .AppleSetupDone file:
mv /var/db/.AppleSetupDone /var/db/.AppleSetupDone.old

Creating a new Admin.
After the reboot the "Welcome wizard" screen came on and I made a new account, which automatically became an Admin account.

Reset of the old Admin's password.
The only thing left to do was reset the old admin password. This is done via the system preferences, accounts. You have to unlock the little lock icon at the bottom, and reset the original Admin account.

I logged out of the new admin and logged my friend in to his familiar session.

Monday, September 19, 2016

Now you screen me ... now you don't

screen is a little command that I use on a daily basis. It allows you to start a session, execute some commands, disconnect from the session while the command continues, and later reconnect to it.

Starting a screen session

To start a screen session you just hit screen. Then you start whatever you need to run. By hitting ctrl+a followed by d, you detach from your session. You see the session ID once you have detached from the screen session; you will need this ID to reconnect to it.

Listing your sessions

If you have multiple sessions and want to have an overview you run screen -ls.

Reattaching to a session

When you come back and want to reattach to a session you do screen -r <session_id>.

Killing a session from within a session

When you are in a session you can kill it by hitting ctrl+a and then k.

Killing a session from outside a session

When you want to kill a session from outside a screen session you do screen -X -S <session_id> quit.

Monday, August 15, 2016

Setting your editor in Ubuntu

Recently I needed to alter /etc/sudoers, which is done with visudo. The visudo default editor is Nano, and I have a personal preference for vi.

To change the default editor on a system you do:

sudo update-alternatives --config editor

It will present you with a list of editors and you simply choose the number of the editor you prefer.
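An alternative, if you only want to change the editor for your own user, is to set the EDITOR and VISUAL environment variables, which visudo and many other tools honor on Ubuntu. A small sketch:

```shell
# Per-user alternative: visudo (and many other tools) honor EDITOR/VISUAL.
# Add these lines to ~/.bashrc to make the choice permanent.
export EDITOR=vi
export VISUAL=vi
```

This leaves the system-wide default untouched for everybody else.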

Monday, August 1, 2016

Sysmon ... digging for gold

When things are bizarre, weird and strange, people often come and see their incident response team. This incident wasn't different: some process wasn't doing what the admin was expecting it to do, but he didn't know what it was doing.

He knows I've got a nice bag of little tools and thus I introduced the sysadmin to sysmon. I would recommend installing it on each and every Windows system. It logs much more than what a standard Windows system logs and is thus a treasure chest for any incident responder.

You can download the 32-bit and the 64-bit version from Sysinternals. I prefer to get my Sysinternals tools from the live Sysinternals site (live.sysinternals.com).

The installation is pretty straightforward. You open a command prompt with Administrator privileges and go to the directory where you've downloaded sysmon. During the rest of this post I will refer to sysmon.exe; depending on your platform you will need to reference the 32-bit or 64-bit version.

To install it run sysmon.exe -i -accepteula. This outputs

System Monitor v4.1 - System activity monitor
Copyright (C) 2014-2016 Mark Russinovich and Thomas Garnier
Sysinternals - www.sysinternals.com

Sysmon installed.
SysmonDrv installed.
Starting SysmonDrv.
SysmonDrv started.
Starting Sysmon..
Sysmon started.

The software needs to be configured. I like my logs verbose, so let's go over the options:

-c   Update configuration of an installed Sysmon driver or dump the current configuration if no other argument is provided. Optionally take a configuration file.
-h   Specify the hash algorithms used for image identification (default is SHA1). It supports multiple algorithms at the same time. Configuration entry: HashAlgorithms.
-i   Install service and driver. Optionally take a configuration file.
-l   Log loading of modules. Optionally take a list of processes to track.
-m   Install the event manifest (done on service install as well).
-n   Log network connections. Optionally take a list of processes to track.
-r   Check for signature certificate revocation. Configuration entry: CheckRevocation.
-u   Uninstall service and driver.

I configure my systems the following way:
sysmon -c -l -n -r

I like my hashes to be SHA1 because that makes them easy to submit to websites like VirusTotal.

The Logs
You can find the logs created by sysmon in the event viewer (you need administrative privileges).

  1. Open the event viewer
  2. Go to Applications and Services logs
  3. Go to Microsoft
  4. Go to Windows
  5. Go to Sysmon
  6. Go to Operational

Remember that it is good practice to split off your event logs to a separate disk if the I/O is a bottleneck. When you right-click on Operational and request the properties you can change the log path and the log size. Since I like verbose logs I've set mine to at least 250 MB (249984 KB) and cyclical.

Now that everything is configured it is time to restart the service. Open a powershell prompt with elevated privileges and do:

restart-service sysmon

Digging for Gold
The last step to figure out what is going on is of course log analysis. There are a couple of event IDs:

EventID 1 shows you process creation
Process Create:
UtcTime: 2016-08-01 14:24:12.390
ProcessGuid: {ddfd1a0f-5b8c-579f-0000-0010f4d2d004}
ProcessId: 7204
Image: C:\Windows\System32\mmc.exe
CommandLine: "C:\WINDOWS\system32\mmc.exe" "C:\WINDOWS\system32\eventvwr.msc" /s
CurrentDirectory: C:\WINDOWS\system32\
LogonGuid: {---}
LogonId: 0x4d0c45f
TerminalSessionId: 1
IntegrityLevel: High
Hashes: SHA1=F5DC12D658402900A2B01AF2F018D113619B96B8
ParentProcessGuid: {ddfd1a0f-62f2-579c-0000-0010f1060400}
ParentProcessId: 2940
ParentImage: C:\Windows\explorer.exe
ParentCommandLine: C:\WINDOWS\Explorer.EXE

Event ID 2 shows you when a file creation time was changed

File creation time changed:
UtcTime: 2016-08-01 14:24:22.358
ProcessGuid: {ddfd1a0f-3a92-579f-0000-0010c31a2804}
ProcessId: 2996
Image: C:\Users\\Desktop\portable\firefox\FirefoxPortable\App\firefox\firefox.exe
TargetFilename: C:\Users\
CreationUtcTime: 2015-12-18 08:35:35.991
PreviousCreationUtcTime: 2016-08-01 14:24:22.343

Event ID 3 shows you the network connections
Network connection detected:
UtcTime: 2016-08-01 14:24:19.240
ProcessGuid: {ddfd1a0f-62d5-579c-0000-0010eb030000}
ProcessId: 4
Image: System
Protocol: udp
Initiated: false
SourceIsIpv6: false
SourcePort: 137
SourcePortName: netbios-ns
DestinationIsIpv6: false
DestinationPort: 137
DestinationPortName: netbios-ns

Event ID 5 shows you when a process is terminated
Process terminated:
UtcTime: 2016-08-01 14:24:17.398
ProcessGuid: {ddfd1a0f-5b8c-579f-0000-00103dcfd004}
ProcessId: 5684
Image: C:\Windows\System32\dllhost.exe

As you can see there is a tremendous amount of info available for an incident responder. If you want some cool ideas about what you can do with the data, I recommend reading this excellent post by CrowdStrike; it will help you get amazing value out of the collected data.

Monday, July 18, 2016

Setting up a DNS Server in Ubuntu

This month I have a student, Yannick Merckx, sitting next to me who is specializing in Artificial Intelligence, and the goal is to leverage machine learning to detect malware using our DNS logs.

This DNS adventure gave me the idea to set up my own local DNS server so I can block a bunch of things by making a sinkhole. The theory is simple: your local DNS server intercepts the request and does the lookup instead of the one given to you by the network/internet provider.

Installing bind9

The first step is to install a DNS server. I chose bind9 because that is one I used in the past and thus have some experience with.

sudo apt-get install bind9 bind9utils

Configuring bind9
Once the software is installed you need to configure it. The configuration lives in /etc/bind.

named.conf is where your configuration starts. It contains a bunch of include statements.

named.conf.options is where you configure the forwarders. The forwarders are the name servers your DNS server will use if it doesn't know the answer. If you want for example Google's DNS servers to answer, you set it like this:

forwarders {
  8.8.8.8;
  8.8.4.4;
};

You can set multiple DNS servers; you separate them with a semicolon (;). If you want to use other DNS servers than Google's you can for example use OpenDNS's servers, which are 208.67.222.222 and 208.67.220.220.

In named.conf.local you configure what databases you want to use.

zone "your_domain" {
  type master;
  file "/etc/bind/db.your_domain";
};

zone "127.in-addr.arpa" {
  type master;
  file "/etc/bind/db.127";
};

logging {
  channel simple_log {
    file "/var/log/named/bind9.log" versions 3 size 5m;
    severity debug 10;
    print-time yes;
    print-severity yes;
    print-category yes;
  };

  category default {
    simple_log;
  };
};

I've set up a zone for my domain and said that the master database is located in a db file under /etc/bind. For the next zone I did exactly the same thing for the reverse lookup database.

The reason why I've set the severity to debug 10 is because this allows me to actually log the answer for the requested domain.

Finally I declared how the logging has to take place. The location of the log is specific, since there is already an entry for it in the apparmor profile (/etc/apparmor.d/usr.sbin.named).

You have to create the named directory and the log file:
sudo mkdir /var/log/named
sudo touch /var/log/named/bind9.log
sudo chown -R bind:bind /var/log/named

The db files are copies of the ones that come with /etc/bind; I just added the IP addresses for Xiobe's website so no further lookup needs to occur. In db.127 nothing changes, since I want it to keep pointing to localhost.
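As an illustration of what such a forward zone file contains, a minimal one could look like the sketch below. All names and addresses here are placeholders, not the actual Xiobe records:

```
; db.your_domain - hypothetical sketch, every value is a placeholder
$TTL    604800
@       IN      SOA     ns.your_domain. admin.your_domain. (
                        2016071801 ; Serial
                        604800     ; Refresh
                        86400      ; Retry
                        2419200    ; Expire
                        604800 )   ; Negative Cache TTL
@       IN      NS      ns.your_domain.
@       IN      A       192.0.2.10
www     IN      A       192.0.2.10
```

The A records are what makes bind answer directly without forwarding the lookup.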

Testing the configuration
Testing the configuration was done by doing an nslookup.


I got a reply, and in the log it looked like this:

;            IN    A

;        4178    IN    CNAME
;        2758    IN    A

In a next post I will explain how to set up the sinkhole.

small update
I made a little mistake in the logging part above. I adapted the post.

Monday, June 6, 2016

Dockerfiles ... creating your own docker images

In this post we are going to build our first image and go over the basics. We are going to build an OpenSSH server and expose it.
Creating a Dockerfile
The complete reference can be found in the official Dockerfile documentation, but I am going to use only a part of the possibilities.

The Dockerfile is a set of instructions that allows you to build an image that can be used to start a container.

The first step is to create the Dockerfile
touch Dockerfile

In your favorite editor you open your Dockerfile and write
FROM ubuntu
MAINTAINER your_name <your_email>

We need the following line so we can build the image with the SSH root password as an argument.
ARG SSHD_ROOT_PASSWORD

Since it is best practice to keep an OS up-to-date you will need to add the following

RUN apt-get update && \
  apt-get dist-upgrade -y && \
  apt-get autoremove -y && \
  apt-get clean

The instructions above will pull down all the updates for the OS when you build the image.

To install the OpenSSH server we do the following

RUN apt-get install -y openssh-server && \
  apt-get clean

Let's make a backup of the original config file and change the permissions so it can't be modified.

RUN cp /etc/ssh/sshd_config /etc/ssh/sshd_config.orig
RUN chmod a-w /etc/ssh/sshd_config.orig

We still need to set the root password based on the build argument. To change the root password you add the following line
RUN echo "root:$SSHD_ROOT_PASSWORD" | chpasswd

It is not a security best practice to allow a root user to log in over ssh, but for simplicity sake we are going to allow it in this configuration. Never use this for production.
RUN sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config

Now we need to specify the default command that runs when the container starts. It takes JSON-format input, which is why the command and argument are between double quotes.
CMD ["/usr/sbin/sshd", "-D"]

Finally we need to expose the SSH port to the network
EXPOSE 22
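Putting all the pieces together, the complete Dockerfile would look something like this. It is a sketch assembled from the fragments above; the /var/run/sshd step is an addition of mine that is commonly needed for sshd -D on Ubuntu images, not something from the walkthrough:

```dockerfile
# Sketch: the full Dockerfile assembled from the fragments above
FROM ubuntu
MAINTAINER your_name <your_email>

# root password is passed in at build time
ARG SSHD_ROOT_PASSWORD

# keep the OS up-to-date
RUN apt-get update && \
    apt-get dist-upgrade -y && \
    apt-get autoremove -y && \
    apt-get clean

# install the OpenSSH server
RUN apt-get install -y openssh-server && \
    apt-get clean

# back up the original config and make the backup read-only
RUN cp /etc/ssh/sshd_config /etc/ssh/sshd_config.orig
RUN chmod a-w /etc/ssh/sshd_config.orig

# set the root password and allow root logins (demo only, never in production)
RUN echo "root:$SSHD_ROOT_PASSWORD" | chpasswd
RUN sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config

# sshd needs its privilege separation directory (extra step, not in the post)
RUN mkdir -p /var/run/sshd

EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
```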

Now that the Dockerfile is ready we need to actually build an image from it.

Building the image
Building an image is the next step; it is done with docker build -t <image_name> .

Notice the "." at the end; it is something you can easily miss.

To make it concrete the command is
docker build -t xiobe/sshd --build-arg SSHD_ROOT_PASSWORD=demo .

xiobe/sshd is just the name I gave the image but it can be anything you like, and as you can see we give the build argument SSHD_ROOT_PASSWORD a value of demo.

Remember that it might be handy for an admin to have the same SSH password everywhere, but for an attacker it is even handier, since then only one password needs to be obtained. A better way would be to use SSH keys, but that is not the purpose of this demo.

Spinning the container up
The only thing that we still need to do is spin up the container. This is done with
docker run -d -P xiobe/sshd

the -d means that the docker container runs in the background; when you use this option it returns the ID of the container.

the -P is to publish the exposed port to a random port; this means that when you spin it up, docker will choose a random port on the host machine and map it to port 22 of the container.

To see that the container is running you do
docker ps 

which also shows something like 0.0.0.0:32768->22/tcp, which means the randomly selected port was in this case 32768.

When you are exposing real apps you will often do mappings with -p. In this case you would for example map port 22 on the host to the guest's 22. This would look like
docker run -d -p 22:22 xiobe/sshd

Removing an image
If you are building and it fails for some reason you will probably want to remove images after you are done. This can be done with

docker rmi <image_id>

To make your cleanup a bit faster you can clean up the untagged and unnamed images like this

docker images -q --no-trunc -f dangling=true | xargs docker rmi

Running the image after a successful build

Before you get to production grade you will most likely have to build a couple of images and run them to see if you are happy with the result. Instead of cleaning up containers after you are done, you can specify --rm on your docker run command. This will automatically clean up after a run.

Monday, May 23, 2016

Running an application in docker

In the last blog post I wrote about how to run a docker container, in this blog post I explore running applications.

Running an application
Running an application inside a container is done with docker run. For example
docker run ubuntu /bin/echo "hello world"
outputs hello world

In the command above we tell docker to run the image ubuntu and, in the container, to run /bin/echo with "hello world" as its parameter. When the application has executed, the container is shut down.

What is the point?
Okay, I admit hello world isn't the world's best example, but the point is that we ran /bin/echo in the container and passed it the parameter "hello world". It executed and, once finished, the container exited.

As we saw in the previous post docker run -i -t ubuntu /bin/bash does basically the same thing. It runs /bin/bash within the container and it is only because /bin/bash isn't finished executing that the container doesn't stop.

Cleaning up
Once your container has exited you can actually work with it again.

docker ps

Will show you all running containers and

docker ps -a

Will show you all running and exited containers.

If you have a container that isn't of any use to you any more you can just use

docker rm <container_id>

to remove the container from the system.

Once you are getting the hang of it you will have a number of containers with the status exited. To clean up my containers I use

docker ps -aq -f status=exited | xargs docker rm

Monday, May 9, 2016

Using docker for the first time ...

In my previous post I described how to install docker, in this post we are going to download a pre-built image and run it.

To check if the docker installation is working you can run
docker info

Downloading the Ubuntu Image
We start by downloading an Ubuntu Image to play with. Ubuntu posts its image to the docker repository of images called the docker hub. To pull the image from the hub you do
docker pull ubuntu

You will see that the image is getting downloaded to your local image cache; the hashes are shown along with the message "pull complete". The hash is called the image ID, and the first 12 characters of the full image ID are used as its short form. If you reissue the pull, you will get the information that the image already exists.

The Image Cache
To get an overview of your image cache you do
docker images

This will show you the name of the images, their ID, size and when they got created. To get more information about the image you can do docker inspect followed by the IMAGE ID.

Starting the Container
To start your first container you do docker run -i -t ubuntu /bin/bash. This will open a prompt in your container. The -i flag tells docker it is an interactive container. The -t creates a pseudo-TTY and attaches it to stdin and stdout.

To detach from the container without stopping it, you enter the sequence ctrl+p followed by ctrl+q. To have an overview of what is running you have docker ps -a.

To reattach to the container you do docker attach followed by the container ID.

Quitting the Container
When you are done with the container you simply type exit and it closes your session. If you run a docker ps -a after you have exited your container you will see it described as "exited x minutes ago".

Installing Software
We will come back to how to prepare the container, but I already want to point out that if you start a container, install software and then exit, the installed software is no longer there the next time, since every time you run docker run -i -t ubuntu /bin/bash a new container is started.

The same logic also applies to data, so if you want to use a container for running something that needs a configuration, you have to make sure the configuration is stored outside of your container.

What Happens When You Run a Container?
When you type docker run it tells the daemon you want it to run a container. The parameters we specified are the image it needs to run and what to start if the image is running. In our case above we specified the Ubuntu image and to run /bin/bash.

  1. If the image is not available on the system it will be pulled from the Docker Hub 
  2. Once the image exists on the system docker will create a container.
  3. The container is created with a read-write layer on top.
  4. The network interface is set up to communicate with the localhost.
  5. The IP address is set up.
  6. The requested process is executed.

Saturday, April 30, 2016

Git for Windows Users with Git GUI

In my previous post I've set up a git server. In this post I will focus on Windows and git from a never-used-before standpoint. How to use git is not part of this series, but there are some good YouTube videos where you can learn the basics.

Since my users are typical Windows users that like to point and click in a GUI, I went for Git GUI, which ships with Git for Windows.

After the installation some configuration needs to be done before you can start using it.

Setting up your local repository
To set up your local copy of the repository we need to create a directory. When you right-click in that directory, the shell menu has a "Git GUI Here" option you should click.

This opens the Git GUI window where you can choose "Create New Repository". It will ask you to select a directory to create the repository.

Choose "browse" and click immediately "select folder". This will select the folder you just created to create your repository in.

Click "Create" and this will create the git repository for you. A new Git GUI window will open up.

Coupling the remote repository to the local repository

The first action is to add the remote server, so we do Ctrl+A. This will pop up a new window asking you for the name of the repository and the location.

The name of the powershell repository I created in the previous post was "powershell" and the location was "".

A prompt appears for bob's password and the data is fetched from the repository.

Working with the local repository
When you are satisfied with your work you open up the Git GUI for the local repository and then you need to "stage all the changed files to commit" (Ctrl-I).

Next you add your commit message. Books could be written on the quality of commit messages, but the same principles of good communication always apply.

Finally you hit the commit button to commit to the local repository, and if you are happy with the end result you push it to the server. This last step will pop up a new window which is pretty straightforward.
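For reference, the same stage-and-commit flow looks like this from the command line. This is only a sketch: the directory, file name, user details and commit message are example values, and the push is left commented out since it needs a configured remote.

```shell
# Command-line equivalent of the Git GUI stage-and-commit flow.
# Directory, file, user details and message are example values.
mkdir -p powershell-demo && cd powershell-demo
git init -q .
echo 'Get-Date' > hello.ps1
git add hello.ps1                                   # "stage changed files" (Ctrl-I)
git -c user.name="bob" -c user.email="bob@example.org" \
    commit -q -m "add hello.ps1"                    # commit to the local repository
# git push <remote> master                          # push to the server when happy
```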

Fetching changes from the server
The idea of git is of course to work together on projects thus our last step is to explain how you get the changes from others to your local repository.

The first step is to go into remote, and select fetch from powershell, our repository.

You will be prompted for your password. This fetches the data from the remote repository and thus all the changes. The next step is to merge the changes with the data you already have in your repository.

The merge will show you what has changed since your last synchronization and then you are good to go.

It is not that hard to work with git, but it takes discipline to synchronize your repositories. When you develop new features it is of course recommended to make branches and merge them, but that is beyond the scope of this very basic tutorial.