Friday, February 26, 2016

Installing Kali 2 on an HP ZBook 17

The other day I had to install Kali 2 on an HP ZBook 17. Usually this goes without any problems, but this one was a harder nut to crack. I usually do the install a couple of times, just to make sure the process is repeatable for my colleagues if they ever need to redo it.

After the installation the computer sometimes froze up completely, and it froze systematically when I tried to shut it down. Troubleshooting was thus a pain, since the freezes were intermittent.

The solution was actually simple (but things usually are once you have figured them out). At boot, tell the boot loader to start in recovery mode. I was greeted by a shell instead of the GUI, and for some reason I immediately tried to shut down; that worked without any problem, so the actual problem had to do with the GUI.

I restarted the system and looked at my dmesg output. Hidden in between all the other lines, it said ACPI had issues. The solution was to edit /etc/default/grub and add the boot option acpi=off to the value of GRUB_CMDLINE_LINUX_DEFAULT.

To make the change take effect, the magic words are sudo update-grub and a reboot.
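For reference, the edit and the follow-up commands look like this (the exact contents of GRUB_CMDLINE_LINUX_DEFAULT will differ per install; keep whatever options are already there):

```shell
# In /etc/default/grub, append acpi=off to the existing default options, e.g.:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet acpi=off"
sudo vi /etc/default/grub

# Regenerate /boot/grub/grub.cfg, then reboot
sudo update-grub
sudo reboot
```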

After the reboot I got the message "A start job is running for dev-disk-by (...)". The issue was pretty well documented in this blogpost. It was caused by the swap entry not having the correct UUID; by simply looking up the real UUID with lsblk -f and manually correcting /etc/fstab, all problems were gone and the machine now runs smoothly.
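A minimal sketch of that fix (the UUID below is a placeholder; use whatever lsblk reports on your machine):

```shell
# Compare the UUIDs the kernel sees with what /etc/fstab expects
lsblk -f                 # lists partitions with filesystem type and UUID
grep swap /etc/fstab     # shows the (stale) UUID= entry for the swap partition

# Put the UUID reported by lsblk into the swap line of /etc/fstab, e.g.:
#   UUID=<uuid-from-lsblk>  none  swap  sw  0  0
sudo vi /etc/fstab
```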

Sunday, February 14, 2016

Setting your proxy for apt

In Debian/Ubuntu you have the command apt-get for updates and installations. It works fine when you have a direct connection to the Internet, but once you are behind a proxy you need a specific proxy configuration.

The reason an exported http_proxy has no effect is that apt-get runs in a sudo context, where the environment variable is simply dropped. To solve this you need to do the following:

sudo touch /etc/apt/apt.conf

Edit /etc/apt/apt.conf
sudo vi /etc/apt/apt.conf

Add the following content:
Acquire::http::Proxy "http://username:password@proxy:port";

For example (hypothetical credentials and proxy address):
Acquire::http::Proxy "http://jdoe:secret@10.0.0.1:8080";

The file is read the next time apt-get runs, so no service restart or reboot is needed. If you get the message "Extra junk at the end of file" it means something is wrong with the syntax, such as a missing semicolon.
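If you only need the proxy occasionally, apt can also take the setting as a one-off option on the command line instead of via apt.conf (the proxy host and port here are placeholders):

```shell
# Use the proxy for this invocation only
sudo apt-get -o Acquire::http::Proxy="http://proxy.example.com:8080" update
```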

Tuesday, February 2, 2016


Last weekend I visited FOSDEM in Brussels. I have been going to FOSDEM since the first edition and have seen it evolve over the years. As a security professional I regret that there is no security track anymore, but a couple of talks still piqued my curiosity.

What do code reviews at Microsoft and in Open Source Projects have in common?
The first talk I attended was given by Alberto Bacchelli, who did a very interesting study on code reviews. He started out with the observation that, regardless of the tool, modern code review is informal, tool-based and asynchronous.

In the good ol' days we had something like code inspection, a process that took forever; it evolved into code review. Alberto's main research question was "Why do we do code review?". He approached Microsoft Research and started out with observations, interviews and surveys of managers and developers, and the top reason turns out to be improving code quality rather than finding bugs, as one would expect.

When Alberto reviewed the comments submitted in CodeFlow, the system Microsoft uses for code review, he applied a clustering method. The interesting observation that came out of this analysis is that the comments focused on low-level defects and did not discuss design at all.

To make the comparison with the FOSS world, Alberto chose GROMACS and ConQAT, two projects I am not familiar with. His observation was that in both cases the majority of changes were non-functional, the same pattern observed at Microsoft. Personally I find that very interesting.

Currently Alberto is busy with studies in the field of software analytics, which is data science applied to software code. He takes data sources like IDE logs, versioning system logs, issue tracker logs and review data, classifies the data, looks for patterns and clusters it. The current questions he tries to answer are:
  1. Who is the optimal person to review my code?
  2. How many times do you have multiple changes in one iteration?
  3. Are there parts that are more likely to contain bugs, how can we focus on the risky things?
This last question is from a security perspective a very interesting question.

Rspamd
Rspamd is an open source spam filtering system developed by Vsevolod Stakhov. The origins of rspamd lie in Vsevolod's frustration with managing a big cluster of SpamAssassin machines that couldn't handle the load. He wrote rspamd in C, using an event-driven model with the possibility to write rules in Lua.

Vsevolod argues that there are basically two kinds of spam:
  • fraud: Nigerian fraud, phishing, ...
  • advertisement: classic Viagra, social networks, ...
Rspamd has three main filtering methods. You have the classic policies such as SPF, DKIM and DMARC. A second technique is static content matching using patterns, and the third is based on statistics and machine learning.

The second part of the talk was rather technical, covering how Vsevolod implemented his ideas. I really liked the talk but was a bit disappointed when he said there are issues finding package maintainers for the major Linux distros. It looks like a promising piece of software, but in a commercial environment you are often required to work with standard packages and can't tell management the spam filter is going down because you need to recompile the new version.


systemtap
The next talk I attended was about systemtap. The presenter, Frank Ch. Eigler, is a rather funny character on stage. The idea of virtual patching has been around for a while, but I haven't seen real-life implementations. Frank's idea finds its origins in DTrace and (scripted) gdb.

systemtap is also an event-driven system and implemented with a kernel module. The logic is:
  1. study the vulnerability
  2. analyze the conditions of the vulnerability
  3. draft an algorithm to make the hostile data safe or reject it
  4. express the algorithm in a script
  5. run the script
I liked the idea, but unless you understand the system really deeply it seems impossible for most production environments, and making hostile data safe is in my opinion not a good idea; rejecting it is better. From an incident response point of view, reject-and-alert is the correct option.
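As a rough illustration of steps 4 and 5, a systemtap one-liner can watch an event and log (rather than rewrite) suspicious input. This is only a sketch with a made-up condition; the available probe points and variables depend on your kernel and tapsets:

```shell
# Log open() calls whose path contains "../" -- detection only, no patching
sudo stap -e 'probe syscall.open {
  if (isinstr(filename, "../"))
    printf("suspicious open by %s (pid %d): %s\n", execname(), pid(), filename)
}'
```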

The evil side in my head was wondering how we could use such a system as a rootkit.

How to run a telco on free software
This talk, given by Dave Neary, was eye-opening. Although my previous employer was an ISP, there is still a big difference between an ISP and a telco.

Dave started out with the history of telcos in the Western world. According to him there have been two major revolutions in the telco world: the first was the addition of data to what used to be a voice-only story, and the second is the change in medium, where copper wire gave way to fibre, mobile, satellite, etc.

During his talk Dave explained OPNFV. NFV stands for network function virtualization; examples of NFVs are load balancers, firewalls and intrusion detection devices. OPNFV is based on devops principles, and it was interesting to see the projects Dave mentioned.

It was interesting to see how our world is evolving. From a security standpoint, all these technologies will bring both challenges and opportunities for infosec people.