Thursday, December 12, 2013

TomTom password reset issue

On the 6th of June 2013 I reported an issue with TomTom's website. I discovered it by accident while helping out a family member who had forgotten his password. I waited until now to disclose it publicly because I wanted to give TomTom the opportunity to fix it.

The TomTom application installed on my family member's computer allowed him to trigger a password reset (by entering the e-mail address coupled to the account). He opened his mailbox and found a new e-mail from TomTom with the password reset link.

I got distracted in the process by the cat (cats are masters of social engineering) and asked to reset it a second time. My family member's inbox thus contained two e-mails from the reset service. He isn't into computers, so when I asked him to click the link, he clicked the first mail he saw, which was the oldest one. The reset worked and he was happy, not realizing that this e-mail wasn't supposed to trigger the reset, since there was a newer reset request.

The link in the e-mail looks like this:
http://www.tomtom.com/myTomTom/password_reminder_confirm.php?frm_email=familymember@mail.com&frm_check=f4357e2fa574a1764edcf077eaaf95dd

As you can see, the format of the link is quite basic: an e-mail address and a hash.

On my way home I went over the situation again and asked my family member for a copy of the e-mails, to make sure I hadn't misinterpreted anything. I wondered if I could still perform a password reset now that the password had already been reset.

I just clicked the link (no proxies in between) and reset the password using the form. So basically anybody who had that link could reset the password. What exact information you can find in an account, and how valuable that information is, I considered out of scope.
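
So there were two problems: an older reset link kept working after a newer one was requested, and a link kept working after the password had already been reset. A minimal sketch of how a reset endpoint can avoid both (my own illustration, not TomTom's actual code; the names and the one-hour lifetime are made up):

```python
import secrets
import time
from hmac import compare_digest

# Hypothetical store: e-mail -> (token, issue time). Each new request
# replaces the stored token, so older e-mails stop working; a successful
# reset deletes it, so the link is single use.
_pending = {}
TOKEN_TTL = 3600  # made-up lifetime: one hour

def request_reset(email):
    token = secrets.token_hex(16)           # random, unguessable frm_check value
    _pending[email] = (token, time.time())  # invalidates any earlier token
    return token                            # would be e-mailed to the user

def confirm_reset(email, token):
    entry = _pending.get(email)
    if entry is None:
        return False
    stored, issued = entry
    if not compare_digest(stored, token) or time.time() - issued > TOKEN_TTL:
        return False
    del _pending[email]                     # single use: a second click fails
    return True
```

With this scheme, the oldest of the two e-mails my family member received would simply have led to an error page.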

Since I work for CERT.be, I am familiar with the responsible disclosure guideline of the NCSC. The first problem I had was finding out whom to contact at TomTom: no information on their website, but I was lucky, the whois contact worked.

I put the NCSC (the CERT of The Netherlands) and CERT.be in CC of my e-mail to TomTom, simply as a cover-my-ass strategy: TomTom is a company in The Netherlands and I am a Belgian citizen, which is why I chose to put both national CERT teams in copy. I do not want to get in trouble for discovering a problem; I just want it to get fixed.

I got a reply from TomTom on the 14th of June 2013. They thanked me, said they would look into the problem, and promised to keep me informed. The sad truth is that this last promise wasn't kept. I don't know whether the reason I never got a follow-up is that I was truthful about the fact that I would write this blog entry about it.

Monday, October 7, 2013

Adding repositories to your sources.list

Tonight I added a repo to my sources.list, but when I ran an update I got an error message telling me my system could not trust the content because the GPG key was unknown. If you are regularly confronted with this, you probably know how to handle it, but I have been installing Linux for quite a lot of people who are totally new to it, so I am going to use my blog to post the solution.

When you run apt-get update, it will tell you which key the system isn't sure about. You will need this value.

1. The first step is to get a copy of the key on your system. I found the following example online to illustrate this: gpg --keyserver pgpkeys.mit.edu --recv-key AED4B06F473041FA

This basically means get a copy of key AED4B06F473041FA from the key server at MIT. MIT is not the only key server in the world but it is a very popular one.

2. Now that the key is on your system you need to add it to apt's key ring:
gpg -a --export AED4B06F473041FA | sudo apt-key add -

Now that the key is known to apt, you can run apt-get update again and you will not get any errors for that key (there may be others, in which case you have to repeat the procedure). Remember to only add sources that you trust.

Tuesday, August 27, 2013

Fun with Google Safe Browsing

You have probably encountered it: you want to go to a website and you get a red page saying that something is wrong with the site and malware has been found on it.

Google Safe Browsing is part of your standard Mozilla Firefox and Google Chrome browser. Google isn't the only one playing this game. Microsoft has its SmartScreen filter and most major AV-solutions have something similar.

This is all fun, but what if, as a website owner, you are interested in whether you have been flagged? Well, you can actually get this report. If you surf to http://www.google.com/safebrowsing/diagnostic?site= followed by the domain, you get a nice overview of what was detected for that website.

An example:
http://www.google.com/safebrowsing/diagnostic?site=google.com

It tells me that for the domain google.com, 903341 pages were tested in the last 90 days:

  • 484 drive-by-downloads
  • 252 trojans
  • 103 exploits
  • 46 scripting exploits
So as you can see, this has some value in risk management. Personally I use this technique for information gathering when doing incident handling. You can use it in a risk management context to monitor your own website, and those of the parties you do business with, in a rather cheap way.
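
If you monitor more than a handful of sites, the lookup above is easy to script. A small sketch (the endpoint and parameter are the ones shown above; the helper name and domain list are made up):

```python
from urllib.parse import quote

# Base of the diagnostic page shown earlier in this post
DIAGNOSTIC = "http://www.google.com/safebrowsing/diagnostic?site="

def diagnostic_urls(domains):
    """Build one diagnostic URL per domain you want to keep an eye on."""
    return [DIAGNOSTIC + quote(domain, safe="") for domain in domains]

# e.g. your own site plus a business partner's site
urls = diagnostic_urls(["google.com", "example.org"])
```

You can then open (or periodically fetch) each of those URLs to see whether anything was flagged in the last 90 days.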

Another cool little trick is that you can get more information on an Autonomous System (AS). 

If you are the owner of an AS, like my current employer owns the Belnet AS with number 2611, Google has a nice little tool to generate alerts for your incident handlers.

Some of us don't own an AS, though. So I want to share one last toy for website owners: "Fetch as Google". It allows you to fetch up to 500 URLs a week for the sites you own and can be very handy to figure out whether the Googlebot still sees your website as infected.

Some people have trouble with HTTPS sites, but I haven't had that experience personally. I found a video on YouTube with Google's answer to people having this trouble; basically it works for HTTPS too.

Monday, August 19, 2013

Playing with Social Engineering at a music festival

It is summer in the Northern hemisphere of planet earth, and this means we have music festivals. Traditionally at the festival area there are two checkpoints: one for the entrance bracelet and one to inspect backpacks for drinks.

The funny part is that people smuggle in drinks because it is kind of a challenge. My theory was that if the man checking my backpack found something, he would be happy and stop looking through the rest of it.

I packed my bag with 2 glass bottles of Belgian beer, wrapped them inside my sweater, and put all the rest of my bicycle gear in my backpack. The thing I had planted for the man to discover was a deodorant spray: when you just pat down the backpack, it feels kind of like a can of coke if you are inexperienced.

I stood in the queue and when it was my turn, I presented the backpack and opened it cooperatively. I showed that I had my gear, like my helmet and everything you need to bike in a city, and the guy started patting down the backpack. He found the deodorant and immediately asked me what it was. Instead of answering him, I opened up the backpack, showed him the spray, and he was happy with the answer.

I gave him a frame of "the guy on his bike" so the big backpack made sense.

As expected, the man had a flow in his mind:
1. look into the bag; if no bottle is visible, go to 2, otherwise confiscate the bottle
2. pat down the bag; if nothing is found, let the person through; if something is found, ask a question

The security problem was clearly in this last part: he knew he had to confront me with the fact that he had found something, but when he was given an explanation other than "shit, bottle found", he was happy, because he had the positive feeling of having done his job.

For your information, my friends and I still buy our beers at festivals, but as I said before, it is kind of a challenge to see if you can beat the system.

Monday, August 12, 2013

Inverse diff - repeated malicious javascript code

I was looking into some pages with malicious JavaScript and needed to figure out, among all the instances we found online, how many were basically the same malicious code and how many were unique.

If you have been playing with Linux for a while, you have probably run into diff, a nice little command to figure out the differences between files. What I actually needed is the opposite of the "classic" diff: the lines two files have in common. After a little searching online I found the syntax:

diff --unchanged-group-format='%=' --new-group-format='' --old-group-format='' file1 file2

To make this a bit visual:
file1 contains:
123
abc
def
999

file2 contains:
123
def
ddd
lalala

and the output will be:
123
def
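
For those who prefer scripting it, the same "inverse diff" can be done with Python's difflib (a sketch of my own, using the example files above):

```python
import difflib

def unchanged_lines(lines1, lines2):
    """Return only the line groups that diff would consider unchanged."""
    matcher = difflib.SequenceMatcher(a=lines1, b=lines2)
    common = []
    # get_matching_blocks() yields (i, j, size): lines1[i:i+size] matches
    # lines2[j:j+size]; collecting those gives the common lines in order.
    for i, _, size in matcher.get_matching_blocks():
        common.extend(lines1[i:i + size])
    return common

file1 = ["123", "abc", "def", "999"]
file2 = ["123", "def", "ddd", "lalala"]
print(unchanged_lines(file1, file2))  # ['123', 'def']
```

Handy when you want to count identical malicious snippets across many files inside one script instead of shelling out to diff.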

Thursday, June 6, 2013

Your log capacity

In this post I want to talk about log capacity. The reason is that we noticed quite a lot of people understand that logs are pretty handy in incident response to figure out what happened, but don't always have an idea of how and what to log.

A lot of information is produced by your computer systems. During an incident response situation, the analyst needs to sift through these logs to figure out what happened. Since we live in networked times, this means you have to get these logs from multiple nodes in the network; these can be any devices you want. To handle this it is important to create a central log server. This makes the attacker's life more difficult, because now the logs need to be changed in two places.

When setting up a log server, take into account that this is traffic over a network, so you need to make sure that the protocol used for the log shipping is secure.
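
As an illustration (a sketch with a made-up host name; the TLS part is only hinted at here), shipping logs to a central server with rsyslog can be as simple as one forwarding rule on each client:

```
# /etc/rsyslog.d/forward.conf on each client ("loghost.example.com" is hypothetical)
# "@@" means forward over TCP; for confidentiality and integrity, configure
# the gtls netstream driver and certificates before shipping over the wire
*.* @@loghost.example.com:514
```

The point is the architecture, not this exact snippet: every node pushes its logs to one place the attacker does not control.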

A question I sometimes get is what to log, and the answer is the classic "that depends": on the operating system, the running services, and the applications. The internet is your friend (try searching for "log analysis" plus your subject), but the default settings are usually not enough.

Once you are collecting a nice amount of data on your log server, you might run into storage capacity limits. One of the important things to know here is that it used to take more than a year before an organization discovered it got compromised, and nowadays it is a bit less than a year.

When you look at an attack campaign like a supply chain model, things have to happen in a certain order. Let's say you discover that data is being stolen from your organization; this means that the attacker is at the end of the campaign, and if you want to learn how the bad person got in, you have to find it in your logs. When you only have a log capacity of one week or one month, chances are that most of that information is already gone.

I know disks for things like SANs are not the cheapest things in the world, which means choices have to be made. Depending on your situation, a cheaper solution like a NAS or a couple of terabytes of USB/FireWire storage kept offline might be an option. In this case it is better to have something than to have nothing, because an incident handler can't magically make logs appear.
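
To make the storage trade-off concrete, a quick back-of-the-envelope calculation (all numbers are made up; plug in your own measured log volume):

```python
# Back-of-the-envelope: how much disk does a year of logs need?
gb_per_day = 5        # hypothetical average log volume across all nodes
retention_days = 365  # keep a full year, given the discovery times above

total_gb = gb_per_day * retention_days
print(f"{total_gb} GB (~{total_gb / 1024:.1f} TB)")  # 1825 GB (~1.8 TB)
```

At those made-up rates, a year of retention fits on a couple of terabytes, which is NAS or offline-disk territory rather than SAN territory.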

Wednesday, May 22, 2013

Facebook and security

I discovered something interesting recently. I knew that Facebook checks whether you connect from a known device before giving you access to your homepage; otherwise it tells you it is an unknown device. Well, in my recent experience they made quite a lot of mistakes in the process.

I made a connection from abroad with my laptop, so it was the same device, but it still triggered that mechanism. To prove I was myself I had to identify people in pictures they had uploaded.

The interesting thing is that some of the people I know have put up family pictures. One series of photos had the dad in one picture, the mom in another, and a daughter in the last one. Looking at the names I could figure out which name I had to select, but it was interesting that it was a different person three times.

While going through the procedure I thought about how I would attack it, and it would actually be quite easy. You just have to do some homework, like figuring out family and friends, which is not that hard. I guess facial recognition software would be a possibility too.

Thursday, May 9, 2013

A simple OpenVPN setup with Zentyal

The other day I needed to come up with a VPN solution for somebody with no networking or IT knowledge whatsoever. The question was of course not formulated like that; the original question was "I need a way to surf the Internet from anywhere in the world and be sure I can do everything that requires security, like online banking etc."

I recently ran into Zentyal, a modified Ubuntu, and it looked like the right tool for this job. So I ran a test the other day, and I must say I was pretty impressed by the ease of setup.

The first step was the regular OS install. Once installed, I logged into the management console, which is completely web based; good, because that takes care of the simple part for the person I was building this solution for. The nice part is that I still have a command line when I need it; it is still a full-blown Linux.

As a second step I configured the network: I gave the machine a static IP address and the IP address of the gateway. A ping to www.google.com tested name resolution and network connectivity, and it worked, so I was up and running for some basic testing.

My first test was to check how software installation from the web interface worked, and I must say it was pretty slick. I installed the ClamAV module first; it installed, downloaded the latest virus definitions, and ran. The next day there was an update for ClamAV on Ubuntu (I know this because CERT.be published it in its advisories). When I got home, I saw it hadn't updated... I was not happy, of course. But it was my mistake: there is an auto-update option, which I tested, and it works fine.

Then it was time for the real test: setting up OpenVPN (we worked with dynamic DNS for the OpenVPN server). According to the manual it looked pretty straightforward. When selecting the package from the inventory, it said you also need the certificate authority. After creating the certificates and configuring the VPN with them, it was just a click to download the configuration file with the correct certificates in a tar.gz and copy them onto the other machine. The files can be produced for Windows, Linux, and Mac OS X.

I installed Tunnelblick on the Mac and dumped the contents of the tar.gz in the appropriate directory. The last step was to configure the gateway to allow Tunnelblick to connect to the OpenVPN server, and I was ready to run a test. It worked like a charm.

I must say I am pretty impressed, because the GUI allowed me to explain everything to the end user in a simple way: how to create other users, track their activities, etc. I would recommend checking it out if you are looking for a solution for a small environment.

Update: I forgot to mention that I needed an extra route in my router, because the VPN uses a different IP range.

May 2013 ISSA-BE Wrap Up

This week there was another ISSA-BE chapter meeting. The whole evening was themed around forensics.

The first talk was given by Sally Trivino and was called Forensics Technology Solutions for Litigation Support.

The first topic at hand was "what is evidence". Sally pointed out that two things come into play: the validation of suspicions, and legal facts admissible in court. A little side note I want to make here: I learned from my legal department that to prosecute somebody, you need to be able to show you suffered damage.


Life would be simple if there were just one kind of evidence, but there are different kinds. The first type is rather straightforward: direct evidence. The best example is "the smoking gun": you have actual direct proof of what happened and who caused it. Of course this is usually not the case in computer forensics, so you need to find circumstantial evidence. This means that you must correlate different sources to confirm a hypothesis.

For your circumstantial evidence to be admissible in court you have to follow forensic procedures: a set of procedures you must follow or you can't build a case. Which forensic tools you can use depends on the jurisdiction. By using scripts, you use a tool that everybody can read, so it is most likely to be acceptable. Standard tools like EnCase, FTK, and SANS SIFT are usually among the acceptable tools, but it is better to check than to be sorry.

As with any craft, you pick the tool based on the job you need to do. Depending on whether the data is structured or unstructured, volatile or static, direct or indirect, you will need different tools, and that subfield of forensics has a different name.

To explain to the audience how forensics takes place, Sally showed us the EDRM general approach as an example. Some things are obvious, others less so. It is hard to go into detail on everything she said, but there is one thing I think is important to take away: when you write your report, you need to write it for non-techies, a.k.a. "normal" people, not lawyers. It is your job to explain what happened to a judge, and to tell that story it is good to have an investigation trail.

As the last part of the talk, Sally highlighted a couple of best practices in forensics:
- make a safeguard (a secured copy of the evidence) as soon as possible
- make sure you have minimal impact on the system
- look at the different legal aspects
- make sure you have a chain of custody
- make sure you have an evidence trail
- make sure you have pristine copies; only work on copies
- make sure you use reputable tools
- make sure your files are cryptographically verifiable
- factor in clock skew (check against an atomic clock)
- correlate logs

Finally we briefly touched upon the challenges in the forensics field.

First of all there is wiretapping. In Belgium this is, according to the law, only allowed for certain purposes by law enforcement. If you sniff traffic on your company or private network, you might thus be committing a crime against the privacy laws. I recently attended a workshop where this was one of the topics addressed. From what I remember you can actually sniff, but before you do, talk to the legal department, because the matter is rather complex.

Of course, with the whole cloud computing story, forensics becomes more complicated. A piece of advice on this was that you have to be sure where your data is, because:
a) you can't "export" certain types of data
b) the rules of other countries may apply to that data

When you outsource tasks you are still liable for everything; you can't outsource legal responsibility.

Finally, Sally gave us a heads-up on the data breach notification act from the EU. If you are not familiar with how it works, it is rather simple: the EU makes directives, and the EU countries then have a certain period to implement them in national law. On the matter of data breach notification I can recommend ENISA's text on the subject.

The second talk was by Didier Stevens on network device forensics. Unless you have been living under a rock lately, or don't read Didier's blog, you must have heard about the new tool that Didier wrote recently.

His research was done on a Cisco ASA device, but it is not his intention to limit it to this type of device. He wanted to figure out what information he could forensically retrieve from a network device.

As with a regular computer system, the golden advice nowadays is "do not shut down". These devices have a small disk, but everything runs in memory. It might be a good idea to disconnect the network if possible, because the data coming in and out will change the memory.

The first step in forensics Didier mentioned confirmed what Sally said in her talk: you have to have logs, and not just stored locally but centralized with a syslog-like solution. What to log is important too: by default you get some but not all information, so it is recommended to change that in your configuration.

An example of events to log:

Connecting a laptop or a desktop to a switch is a "switch port state" change. It logs the physical connection and the logical connection. This can be useful, since you now have the physical location where a person was when he/she plugged in.

Some devices have special security features like NAC/NAP or DHCP snooping. When you use these in monitoring mode, they create a log but no policy is enforced. This log can then be used in forensic analysis.

Compromising a network device can be done on multiple levels:

A (running) configuration change can be made. Therefore it is important to have configuration and release management. You can dump the running configuration to a file and compare the hashes. You must know that on a Cisco device the "running" configuration and the "written" configuration are not the same: you have to specifically say "store on disk".
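
As an illustration (standard Cisco IOS commands; the TFTP server address and file names are made up), the dump-and-compare step could look like this:

```
! on the device: export the running configuration for offline comparison
copy running-config tftp://192.0.2.10/router1-running.cfg
! persist the running configuration to disk ("store on disk")
copy running-config startup-config
```

Offline you can then hash the exported file (for example with sha256sum) and compare it against the copy kept under release management.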

Scripts change the behavior of the device configuration. Again, when you use them you must be able to compare them with the scripts you put in, by doing configuration and release management.

You can compromise the OS image. Didier explained how he manipulated the function that calculates the hash of the image, so that it always reported that the hashes were OK.

Then we were in for a treat: a little demo of NAFT. Right now, NAFT and CIR are the only open source tools for doing this kind of forensic analysis, as far as Didier is aware. The demo where you see passwords being dumped from the image was pretty cool.

It was an interesting evening. The next ISSA-BE chapter meeting will be in June; check the website if you are interested in joining.