Daily Blog #223: Saturday Reading 2/1/14

Hello Reader,
         It's Saturday, and after another week of hard work you deserve a break. Use that break time to get even better at DFIR with this week's Saturday Reading!

1. We had another great forensic lunch this week! You can watch it here: http://www.youtube.com/watch?feature=player_embedded&v=2P5Sv6yyd5Y

This week's guests:

Ian Duffy, talking about his research into the Microsoft Office compound file format.
You can read Ian's blogs on this topic here: http://forensecurity.blogspot.com/2014/01/microsoft-office-compound-document.html

Andrew Case, discussing his work in memory forensics and Volatility.


Matthew and I showing the latest changes for this month's beta release of ANJP.
2. Jason Hale has a neat blog up this week about a new feature in Microsoft Word 2013. The feature tracks where a user left off reading a document, which can be very useful in showing more than just the opening of a Word document. Read more about it here: http://dfstream.blogspot.com/2014/01/ms-word-2013-reading-locations.html

3. Lenny Zeltser has a good blog up on the SANS Forensics blog this week talking about all the different specialties of DFIR that are forming: http://digital-forensics.sans.org/blog/2014/01/30/many-fields-of-dfir. This kind of knowledge deserves a wider audience, so more people understand just how wide and deep our field is.

4. Jamie Levy has posted the slides from her OMFW talk about profiling normal system memory: http://gleeda.blogspot.com/2014/01/omfw-2013-slides.html. This is something we've been talking about for the last two Forensic Lunches, so I'm very interested in learning more.

5. The BSides NOLA CFP ends today! Quick, get your submission in! http://www.securitybsides.com/w/page/71231585/BsidesNola2014

6. Jack Crook has a great analysis of the ADD-affected memory image up on his blog, http://blog.handlerdiaries.com/?p=363. This is a great post for understanding how to spot what's abnormal and track it down.

7. Here is a good post by Brian Moran showing how open source tools fare against the Target POS Malware, http://brimorlabs.blogspot.com/2014/01/target-pos-malware-vs-open-source-tools.html.

8. Julie Desautels has put up an interesting blog using her Google Glass forensic research to make the case that a driver was or was not operating the Glass device at the time she was pulled over. These types of devices are only going to grow in the future so get a head start and read this here, http://desautelsja.blogspot.com/2014/01/proving-case-of-cecilia-abadie-using.html.


Daily Blog #222: Forensic Lunch 1/31/14

Hello Reader,
         We had a very interesting Forensic Lunch this week! This week's guests:

Ian Duffy, talking about his research into the Microsoft Office compound file format.
You can read Ian's blogs on this topic here: http://forensecurity.blogspot.com/2014/01/microsoft-office-compound-document.html

Andrew Case, discussing his work in memory forensics and Volatility.

Matthew and I showing the latest changes for this month's beta release of ANJP.

Daily Blog #221: RHEL Forensics Part 3

Hello Reader,
        Today we talk about recovering deleted mlocate databases. This was actually harder than I expected: not only did ext3 set the size of the file to 0, but the direct block that istat -B returned was not the first block in our database. So instead I followed the instructions here: http://wiki.sleuthkit.org/index.php?title=FS_Analysis to do a manual recovery of deleted databases. There is still some work to be done to clean up what's been recovered back into parsable databases, but I'll leave that for next week.

Today let's go through the steps necessary to recover deleted mlocate databases on a RHEL v5 system using ext3 as the file system. Remember, this is necessary because the updatedb command runs daily and deletes the mlocate database before creating a new one.

Step 1. We need to figure out which group of inodes our parent directory belongs to. You can see in the screenshot below that the parent directory /var/lib/mlocate has inode number 1077298, so that is the inode whose group we need to find.


Step 2. Run fsstat to find out which group contains our inode; in this case Group 33 contains our inode, as shown below. We can use the group's block range to determine which blocks to recover for deleted databases.


Step 3. Use blkls to extract the unallocated blocks within Group 33 to a new file, as shown below:


Step 4. Use xxd to parse the recovered blocks and find the mlocate database signature of 'mlocate':
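
For those following along at home, here's a minimal sketch of the commands behind steps 1 through 4. The image name and the block range are placeholders; take the real range for Group 33 from your own fsstat output:

    # Step 1: note the inode of /var/lib/mlocate (1077298 on my test system);
    # on a live mount you can get it with: ls -id /var/lib/mlocate
    istat rhel5.dd 1077298

    # Step 2: find which block group contains that inode and note the
    # group's block range (Group 33 in my case)
    fsstat rhel5.dd | less

    # Step 3: extract the unallocated blocks in Group 33's range to a file
    blkls rhel5.dd 1081344-1114111 > group33_unalloc.bin

    # Step 4: look for the 'mlocate' signature within the recovered blocks
    xxd group33_unalloc.bin | grep mlocate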
  

This looks like an mlocate database, but right now it's stuck in the middle of the rest of the unallocated data. So next week, as we continue this series, we'll write some code to carve the mlocate database out of this chunk of unallocated data.

Make sure to come back tomorrow for the Forensic Lunch with guests Ian Duffy and Andrew Case!

Daily Blog #220: RHEL Forensics Part 2

Hello Reader,
             Yesterday we talked about extending this week's Sunday Funday answer using the mlocate database on RHEL. Today let's look at what we can determine from the mlocate database using Hal Pomeranz's mlocate-time script and set up tomorrow's entry regarding the recovery of deleted mlocate databases.

mlocate, the default locate implementation on RHEL since v4, queries a database of known files and directories called /var/lib/mlocate/mlocate.db. The database stores the full path to every file and directory it encounters, as well as the timestamps of the directories. According to the man page, the timestamp will be either the change time or the modification time of the directory, whichever is more recent. The timestamp is kept so that during the update process mlocate can determine whether it needs to re-index the contents of a directory. This raises the question: will timestamp manipulation prevent mlocate from indexing a file's existence? That's something we can test in this series.
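If you want to follow along, here's roughly how I'm querying the database. The mlocate-time invocation below is a sketch from memory, so check the usage statement in Hal's dfis repository; the strings one-liner is just a quick-and-dirty existence check:

    # parse a copy of the database with Hal's script from the dfis repo
    # (invocation is illustrative; check the script's own usage output)
    ./mlocate-time /var/lib/mlocate/mlocate.db

    # quick check that a path is still referenced in the database
    strings /var/lib/mlocate/mlocate.db | grep secret_file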

For today's example I have created a file in my home directory called 'secret_file' and then deleted it.
Since this filesystem is ext3 the name of the file 'secret_file' is now zero'd out within the directory inode. The only way to know it existed is to hope that there is another recent reference to the file within the ext3 journal to re-associate it or to search the mlocate database. There may be other artifacts but we will focus on those two for the moment.

Searching the mlocate database confirms the file entry still exists:

Looking into the parsed database records shows the last time the directory was modified when the file still existed within it:

So that's great: we can establish a timeframe when the file did exist, and we could compare the contents of the current filesystem against the mlocate database to determine which files have been deleted since the last daily cron run. This can be helpful for determining what has changed in the last day in a live response scenario. It does not help, though, when we want to know what is occurring on a longer-term basis.

The mlocate database is updated by default once daily, when /etc/cron.daily/mlocate.cron runs and executes updatedb. What Hal pointed out from his tests, though, is that when updatedb runs it does not overwrite the database but instead unlinks (deletes) it and then creates a new one. We can see that in the following screenshots showing the inode numbers of the mlocate database.

Before updatedb:
After updatedb:
Notice that the inode number of mlocate.db has changed from 1077692 to 1077693, meaning a new inode has been created and the old inode is still recoverable. As Hal also pointed out, the mlocate database has a unique signature that makes it easy to determine which deleted inodes contain mlocate databases, so tomorrow let's do that. Let's see if we can make a quick script that will find and recover deleted mlocate databases for longer historical comparisons of which files existed on our system.
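
You can reproduce the before/after check yourself with nothing more than ls; the inode numbers in the comments are from my test system:

    # note the current inode of the database
    ls -i /var/lib/mlocate/mlocate.db    # 1077692 before
    # force an update (normally run daily by /etc/cron.daily/mlocate.cron)
    /usr/bin/updatedb
    # a new inode number means the old database was unlinked, not overwritten
    ls -i /var/lib/mlocate/mlocate.db    # 1077693 after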

Also when I'm done with this series I'll be uploading my test image for download so you'll be able to recover the same data! Come back tomorrow and through the rest of this series as we determine:

1. How to identify and recover inodes containing mlocate databases
2. Examining the possibility of carving mlocate database entries from freespace

Daily Blog #219: RHEL forensics part 1

Hello Reader,
      If you read this weekend's Sunday Funday winning answer you learned a lot about how to do forensics on a Red Hat Enterprise Linux server. As with anything we do in digital forensics, though, there is always more to learn. Today we start a series on what we can do to go beyond this week's winning answer, which was very good. Let's start by looking into a tool that Hal Pomeranz introduced us to on last week's Forensic Lunch.

Hal's tool, mlocate-time, allows us to parse the mlocate database for all of the files, directories, and timestamps that existed on the system as of the last mlocate database update. By default there is a cron job set to run daily to update the mlocate database, so the live data will only contain those files that existed as of the last daily cron run. Comparing the files known to exist in the mlocate database to the files live on the current system can reveal files that have been deleted, but what we want to look at is the recovery of past mlocate databases. There are two sources for these:

1. Hope there are backups of the system stored somewhere. From the backups we can extract all copies of the mlocate database and then parse them with Hal's tool.

2. Recover the deleted inodes or carve them out of unallocated space (see the sketch below). During the lunch Hal tested and confirmed that each time the database is updated, the old database is deleted and a new one is created. That means that all the old locate databases, and their timestamps, are either recoverable inodes or carvable data blocks, allowing you to bring back proof of the prior existence of files and timestamps on your system. This is true not just for RHEL but for other Linux distributions making use of mlocate as well.
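
As a rough sketch of that second approach with The Sleuth Kit (the image name and inode number are placeholders, and note that ext3 zeroes block pointers on delete, so icat -r may come back empty and force us to carve instead):

    # by default ils lists the metadata entries of removed files
    ils rhel5.dd
    # inspect a candidate inode, then attempt recovery of its contents
    istat rhel5.dd 1077692
    icat -r rhel5.dd 1077692 > mlocate.db.recovered
    # check for the mlocate signature at the start of the recovered file
    xxd mlocate.db.recovered | head -1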

Tomorrow let's get into what the data contained within mlocate can do for our investigation and end with some mlocate database recovery. I'm downloading a RHEL v5 evaluation ISO tonight to start my tests!

Daily Blog #218: Sunday Funday 1/26/14 Winner!

Hello Reader,
   One of the great things about Sunday Fundays is that we get to find those individuals out there whose experience shines through in their answers. This week's challenge had a few great answers, but this week's winning answer was not only received before the other contenders, it also shone through as a winner. Take the time to read this one; you'll find some great ideas for your future Linux server investigations.

The Challenge:
You have a Red Hat Enterprise Linux v5 server running an eCommerce site. The server was breached when the attacker logged in as the root user two weeks ago and linked the shell history file to /dev/null. What other artifacts can you rely on to determine what the attacker did over the past two weeks?

The Winning Answer:
Anonymous

TL;DR: /var/log/secure, SSH log, syslog, wtmp & btmp, Apache logs, firewall logs, acct files, memory image, file system metadata & journal & deleted content.



RHEL 5, first released in 2007, uses kernel 2.6.18, even in the latest update (Update 10, October 2013).


My strategy for approaching this investigation would consist of two phases: first, identify the periods of potential attacker activity; second, drill into these suspicious time ranges to collect attacker commands and actions. Generally speaking, I would use multiple log sources to draft an initial list of suspicious time ranges. Then, I would use more specific tools to recover evidence of commands and actions within those ranges.



Due to the specific wording used in the scenario, I don’t have to worry about reviewing the system for evidence of a remote exploit such as SQL injection. Of course, the best place to start running that down is by reviewing application and server logs.



To begin, I would review the file /var/log/secure to identify how the attacker logged in for the first time. This is a log file that records entries associated with authentication requests, including timestamps, usernames, source processes, and error messages. According to the scenario, the system was compromised for the first time via the root login. So, I'd need to cross-reference all legitimate administrator activity with root logins since approximately two weeks ago. The outstanding entries should be associated with the attacker (or poorly configured services).
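
For illustration, a first pass over the secure logs might be as simple as the following; log locations and message formats vary, so treat this as a sketch:

    # pull root authentication events from the current and rotated logs
    grep -h root /var/log/secure* | grep -iE 'accepted|failed|session opened'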



If I saw a single authentication attempt leading to a successful login, I would suspect that the attacker acquired legitimate credentials (account password or SSH certificate) elsewhere, perhaps by compromising another system, phishing the administrators, etc. I'd have to track this down by expanding the scope of the investigation. It is also possible that the password fell to a brute force attack, in which case I'd expect to see many, many unsuccessful attempts before a single successful authentication. The answer to this question may give me some insight into the type of attack I was dealing with, and how I might expect to see the remainder of the system configured. For instance, a properly secured environment should not fall to a brute force attack targeting the entire internet.



I’d review and cross reference wtmp and btmp files for additional session information. wtmp tracks a history of logins and logouts by user, and btmp tracks failed authentication attempts. utmp could be helpful, but it typically tracks the current state of the system. All these files can be found in the /var/log directory. These are binary files, but the format is well known, and similar version of Linux (such as the Fedora release using kernel 2.6.18) can be used as effective analysis machines.



Once I had identified the first relevant login session, I would confirm the means of access: was it SSH, VNC, or some other remote access protocol? In each of these cases, I'd review the network architecture to determine from which network segments this protocol was allowed. Ideally, these administrative interfaces would not be exposed to the greater internet, but we've all seen that too often. If the administrative ports were not accessible to the internet, then it again means that the scope of the investigation should be expanded to include additional systems on the local network segment.



From the scenario description, the server is running an eCommerce site. An eCommerce site is typically composed of front end web services (serving static media like HTML, images, CSS, and Javascript, as well as dynamic pages generated by languages like PHP, Perl, or Python) and databases (MySQL or Postgres are popular). It is probably running at least some of the frontend services, and therefore it is probably accessible to the internet.



This internet connection might be direct through a firewall, or through a load balancer/reverse proxy and firewall. I would review logs from the firewalls and load balancers to identify requests and related activity from the same source IP address. This would help define additional periods of activity associated with the attacker.



I would timeline all application logs (usually /var/log/*/*), syslog entries (/var/log/messages, etc.), and file system activity. Log2timeline or Plaso are good tools for organizing all this information. Some types of interesting application log entries could be yum package manager entries (/var/log/yum.log) indicating that the attacker installed additional software, or Apache web server entries (usually /var/log/httpd/*) showing that the attacker tested access to web directories via a web browser.
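
A sketch of the Plaso invocation (flags change between versions, so check your local help output; paths are examples):

    # build a super-timeline from the mounted image, then output to CSV
    log2timeline.py timeline.plaso /mnt/evidence
    psort.py -o l2tcsv timeline.plaso > timeline.csv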



I would pay particular attention to the file system activity, reviewing file system metadata for newly created, modified, or deleted files. The Sleuthkit and loopback devices are my favorite tools for working with Linux images. I’d hope to find attacker tools and/or attacker archived data using the file system metadata. To recover further deleted files, I might try Foremost and extundelete. Foremost carves chunks from a binary stream using known file signatures. extundelete processes the journal on ext3/ext4 and attempts to recover old copies of inodes, and subsequently the file data.
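
For instance (the image name and output directory are examples):

    # carve files from the raw image by signature
    foremost -i rhel5.dd -o carved/
    # attempt journal-based recovery of deleted files on the ext3 volume
    extundelete rhel5.dd --restore-all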



Of course, I would also acquire a memory image of the server, and subsequently use Volatility to extract artifacts. I would first attempt to use the “linux_bash” plugin, which extracts Bash shell history entries from memory. These entries may still be in memory despite the /dev/null link. However, due to the duration of the compromise (two weeks), I would not consider this source authoritative of all activity. A number of the other plugins (for instance, linux_check_*) are also appropriate to use to identify the presence of rootkits and other suspicious processes.
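
A sketch of those Volatility invocations; note that Linux profiles are built per-kernel, so the profile name below is a placeholder:

    python vol.py -f rhel5.lime --profile=LinuxRHEL5x64 linux_bash
    python vol.py -f rhel5.lime --profile=LinuxRHEL5x64 linux_check_syscall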



Finally, I would review the process accounting information tracked by the “acct” service. This service typically stores its data in the file /var/account/pacct, which includes processes run and resources consumed. I would start by reviewing the data using the “lastcomm” and “sar” programs to identify process names I don't recognize. I could also correlate processes run before two weeks ago with those run after. Though the process accounting logs do not always contain verbose information, they could be effective in identifying Bitcoin miners or other rogue processes.
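
For illustration (the accounting file path is the RHEL default; options vary between acct versions):

    # process accounting history: command names, users, and run times
    lastcomm --file /var/account/pacct
    # system activity summary for the same period
    sar -A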

Daily Blog #217: Sunday Funday 1/26/14

Hello Reader,
            If you watched the forensic lunch this week you heard Hal Pomeranz talk about his newly released tools and scripts with a focus on Linux analysis. So let's extend the conversation into the challenges of dealing with Linux servers, as our prior Linux Sunday Funday focused on X Windows usage.

The Prize:
A $200 Amazon Gift Card



The Rules:
  1. You must post your answer before Monday 1/27/14 2AM CST (GMT -6)
  2. The most complete answer wins
  3. You are allowed to edit your answer after posting
  4. If two answers are too similar for one to win, the one with the earlier posting time wins
  5. Be specific and be thoughtful 
  6. Anonymous entries are allowed, please email them to dcowen@g-cpartners.com
  7. In order for an anonymous winner to receive a prize they must give their name to me, but I will not release it in a blog post

The Challenge:
You have a Red Hat Enterprise Linux v5 server running an eCommerce site. The server was breached when the attacker logged in as the root user two weeks ago and linked the shell history file to /dev/null. What other artifacts can you rely on to determine what the attacker did over the past two weeks?

Daily Blog #216: Saturday Reading 1/25/14

Hello Reader,
          It's cold! Stay inside, heat up some pizza, and let's get down to some DFIR reading! Time for more links to make you think on this week's Saturday Reading:

1. What, did we have a forensic lunch this week? Why, yes we did! This week we had:
Jacob Williams, @malwarejake, talking about his proof of concept code shown at Shmoocon. Check it out here: http://malwarejake.blogspot.com/2014/01/shmoocon-talk-and-add.html and download the tool/memory samples here: http://code.google.com/p/attention-deficit-disorder/

Hal Pomeranz, @hal_pomeranz,  talking about the scripts he's been sharing via GitHub for the DFIR Community: https://github.com/halpomeranz/dfis

Lee Whitfield, @lee_whitfield, talking about his new series of internet safety videos that you can show to your friends and family, found here: https://www.youtube.com/user/mrleewhitfield

2. Apple Examiner has been updating its analysis pages for OSX, http://www.appleexaminer.com/MacsAndOS/Analysis/InitialDataGathering/InitialDataGathering.html, give it a read and keep up to date.

3. Patrick Olsen has a great post on how to spot lateral movement from bad guys, http://sysforensics.org/2014/01/lateral-movement.html. I like how he breaks down categories of lateral movement techniques and shows their combinations for analysts to find.

4. SANS is hosting a photo contest to win a free simulcast seat to a training class of your choice, http://digital-forensics.sans.org/blog/2014/01/20/announcing-the-dfircon-photo-contest-changce-to-win-a-free-simulcast-course, a pretty sweet prize for just taking a photo.

5. Patrick Olsen also wrote a great post on knowing what's normal in a Windows system, http://sysforensics.org/2014/01/know-your-windows-processes.html. You have to know what's normal to know what's wrong! I certainly hope Patrick keeps blogging!

6. A firm called Cassidian CyberSecurity has put out a tool for carving $i30 entries, http://blog.cassidiancybersecurity.com/post/2014/01/Introducing-MftCrawler%2C-a-MFT-parser-with-%24i30-carving-capabilities. It's written in Lua; can't say I've seen that very often.

7. Here's a good read on securing logs so they can be reviewed later, http://www.scip.ch/en/?labs.20140123#null. If you are doing internal IR making sure the logs you need actually make it to be analyzed is kinda important.

8. Brian Moran has another post up, http://brimorlabs.blogspot.com/2014/01/identifying-truecrypt-volumes-for-fun.html, this time extending the TrueCrypt master key extraction plugin that the Volatility devs released, showing how to find these volumes.

9. Mandiant, now FireEye?, has a new blog up https://www.mandiant.com/blog/tracking-malware-import-hashing talking about their method of attribution via which modules a backdoor imports, as each team has its own preferred backdoor kits. Kinda neat!

That's all for this week! Did I miss something interesting? Leave it in the comments below so others can find it and I can add it to my feedly for next week!

Daily Blog #215: Forensic Lunch 1/24/14

Hello Reader,
       We had a fun forensic lunch this week with a group of guests who had as much fun before the show as they did during it. This week's guests are:

Jacob Williams talking about his proof of concept code shown at Shmoocon. Check it out here: http://malwarejake.blogspot.com/2014/01/shmoocon-talk-and-add.html and download the tool/memory samples here: http://code.google.com/p/attention-deficit-disorder/

Hal Pomeranz talking about the scripts he's been sharing via GitHub for the DFIR Community: https://github.com/halpomeranz/dfis

Lee Whitfield, talking about his new series of internet safety videos that you can show to your friends and family, found here: https://www.youtube.com/user/mrleewhitfield




Daily Blog #214: Let's talk about MTP Part 6

Hello Reader,
        I'm intending for this to be the last blog in the series, at least for now. Today let's look at recovering past interaction with files accessed from MTP devices. You may ask yourself why what we've seen so far isn't enough to determine past interaction; there are several reasons:

1. Windows 7 (I haven't tested other versions at this point) will delete the contents of the WPDNSE directory on reboot.
2. The MFT will only contain FILE records for the deleted WPDNSE content until they are overwritten or defrag runs, which is once a week on Windows 7.
3. Most applications will not make LNK files for the accesses.
4. Most applications will not make jumplists for the accesses.
5. The Shellbags will reveal directories on the MTP devices that were accessed, but not the files themselves.

So we are left with two possible sources for historical viewing after a week's worth of time.

1. MRU keys
No record of any of these files having an MRU entry was found. This isn't terribly surprising, as we didn't find a LNK file or a Jumplist entry for them.

2. USN Journal
The USN Journal is a great source of information here. Since GUID translation and the like are not an issue, you just need to find the MFT entry number for the WPDNSE directory, which in my testing was 57374, and then search for all USN entries with 57374 as the parent entry number. (If this were a real case I would also match the sequence number.) The following is what comes back:
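
(For reference, if you export the USN Journal to CSV with your parser of choice, the filtering can be as simple as the awk one-liner below; the column number is a placeholder for wherever your parser puts the parent entry number.)

    awk -F',' '$4 == 57374' usn_export.csv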


So now we can see that the folder GUID '{00000025-0001-0001-0000-000000000000}' was the only directory created within the WPDNSE directory in our USN Journal. We don't have the translation to a directory name here, but as we saw in the prior post we can look that up from the shellbags.

Looking deeper, searching for all files with a parent MFT entry of 57374 (which we got from the above screenshot), we can see the following files were accessed from the MTP device:
And there we go: all of the files I accessed and the different times I accessed them.

Now if this were an XP system you would be out of luck, so let's hope any MTP analysis you need to do is on Vista, 7, or 8!

Tomorrow is the forensic lunch, try to make time to watch it live and ask questions!

Daily Blog #213: Let's Talk about MTP Part 5

Hello Reader,
         Yesterday we went through the temporary directory that stores files accessed from MTP devices. In our first post in the series we talked about the ability to recover MTP accesses from shellbags, and if you read Nicole's post you'll see that she was able to recover files accessed from the WPDNSE directory. In my testing, using different applications than Nicole, I could not get a LNK file to be created from any of the following file types:
  • docx - MS Word 2010 
  • png - Microsoft Media Viewer
  • pl - ActiveState Komodo
  • txt - Notepad
I even checked the Office recent documents folder and found no LNK files that pointed to these files, those directories, or the MTP device.

I did find an entry in the Windows Explorer Pinned and Recent Jumplist, AppID 1b4dd67f29cb1962, by looking at each jumplist with a hex editor. What was interesting is how differently jumplist parsers handled this entry. I tested this jumplist with two different jumplist parsers.

TZWorks jmp v.25 (64-bit) did not show the entry.
Woanware's JumpLister provided the following in the 'destlist' entry but could not parse out the entry:
291
1/23/2014 2:44
1/1/0001 12:00:00 AM 1/1/0001 12:00:00 AM ::{20D04FE0-3AEA-1069-A2D8-08002B30309D}\\\?\usb#vid_19d2&pid_0307#p752a15#{6ac27878-a6fa-4155-ba85-f98f491d4f33}\SID-{10001,,2410917888}\{00000025-0001-0001-0000-000000000000}

This is very interesting, as the raw hex showed the following, providing a translation of the folder GUID to the name of the folder on the MTP device itself.


Here you can see the folder name 'Test' (yes, I'm very original in my directory naming) and the folder GUID found in the WPDNSE directory, '00000025-0001-0001-0000-000000000000'. This is similar to the shellbags entry Nicole found, which TZWorks now successfully parses in v.36 of sbags.


[1] New Folder; [2] {00000025-0001-0001-0000-000000000000}; [3] Name : New Folder; [4] ObjId : o25; [5] FuncObjId : s10001; [6] UniqueId : {00000025-0001-0001-0000-000000000000}
It looks like we need to get our jumplist parsing tools to also support the MTP structures, as other tools have had to do.

Tomorrow let's talk about what the USN Journal shows us.


Daily Blog #212: Let's talk about MTP Part 4

Hello Reader,
        Let's get back to this series. If you've read Nicole Ibrahim's blog you've already seen most of this data; I'm just doing my own testing to confirm her findings and see what else I find. Today let's look at artifacts of file access from an Android phone using MTP.

I again attached my AT&T Avail 2 and this time opened up the file I copied on to it, shellbags.pl. Following Nicole's research, found here, I went to the WPDNSE directory located under:
"C:\Users\\AppData\Local\Temp\WPDNSE\"
from there I found a folder with the GUID name:
"{00000025-0001-0001-0000-000000000000}"
located under it was the shellbags.pl file I accessed from the phone as expected. There will be one GUID folder created for every folder that a file is accessed from within the MTP device, for all MTP devices accessed. To determine which folder or device this GUID came from you'll have to go to the shellbags. We'll cover that tomorrow and look for other sources of this correlation.

 What was interesting to me, and something I didn't see Nicole mention, was the dates on the file located under the GUID folder. The creation date of the file was set to the time I accessed the file from the phone, not the time the file was copied to the phone.


The modification time of the file corresponded to the original modification date of the file I copied onto the MTP device in the prior test. When looking at the files through the MTP shell extension, I noticed that only the modification date is displayed in the properties.
I copied a file into the same directory on the Android phone via MTP again, this time with the WPDNSE directory open, but no temporary file was created. So we get artifacts within the WPDNSE directory from file accesses via MTP, but not from file copies to an MTP device.

Tomorrow let's look at what other artifacts are left from these file copies and accesses.

Daily Blog #211: Sunday Funday 1/19/14 Winner!

Hello Reader,
           Another Sunday Funday come and gone, and more great information for everyone to benefit from. I liked this answer because it went into depth on the differences between versions of the OS and directly spoke to the questions being asked. I've been doing my own research into this issue, which I'll be blogging about after the MTP series is finally completed, but this week's anonymous winning answer best responded to the challenge posed.

The Challenge:
Since Windows XP we've been able to create a registry key that will treat USB devices as read only. Answer any or all of the following questions to show how well you understand that functionality:

1. How does the write blocking become effective on XP, Vista, and 7? What steps need to take place between applying the registry key and the write protection coming into effect?
2. What Windows subsystem is enforcing the write protection?
3. What happens to USB devices already plugged in when the write protection is enabled?
4. Can anything bypass the write protection offered by this registry key?
5. Does this registry key protect MTP USB devices?
6. Why does this registry key not protect non-USB devices?

The Winning Answer:
Anonymous



  1. How does the write blocking become effective on XP, Vista, and 7? What steps need to take place between applying the registry key and the write protection coming into effect?
In Windows XP and later a user can add/modify the registry value “WriteProtect” found in HKLM\System\CurrentControlSet\Control\StorageDevicePolicies to enable write blocking for USB devices.
The StorageDevicePolicies key may not exist by default and must be added by an administrator. If the value is set to “00000001” then all newly connected USB drives will be write blocked.
In the test that I performed on Windows 7 the effect was immediate; however, according to an article on Howtogeek.com (1), on Windows XP a restart is required when the key is initially added.
1. http://www.howtogeek.com/howto/windows-vista/registry-hack-to-disable-writing-to-usb-drives/ - Note that the reg files provided are mixed up and the “EnableUSBWrite” one sets the value to 00000000.
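
For reference, the value can be set from an elevated command prompt; reg add creates the StorageDevicePolicies key if it does not already exist (use /d 0 to disable):

    reg add "HKLM\SYSTEM\CurrentControlSet\Control\StorageDevicePolicies" /v WriteProtect /t REG_DWORD /d 1 /f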
2. What Windows subsystem is enforcing the write protection?
Unsure.
The Plug-and-Play manager receives notification that a drive has been connected and then queries a number of keys in the SYSTEM hive. I imagine that it looks for the StorageDevicePolicies key if it exists and acts accordingly.
2. Windows Registry Forensics, Carvey, p 110.
3. What happens to USB devices already plugged in when the write protection is enabled?
If a USB device is currently connected when the registry key is changed it will remain writeable until it is removed and reconnected.
4. Can anything bypass the write protection offered by this registry key?
Yes, using a hex editor will bypass this kind of write protection (but not a physical write blocker).
5. Does this registry key protect MTP USB Devices?
No.
I performed a quick test using my Nexus 5 and saw that it mounted as a portable device. I then successfully copied a file onto the device even though write protection was enabled.
6. Why does this registry key not protect non-USB devices?
Unsure.
I imagine it has something to do with the way that Windows checks the registry key before it mounts USB drives but not before it mounts hard drives or portable devices.
It is possible to write protect hard disks using diskpart:
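
For example, from an elevated command prompt (the disk number is whatever "list disk" shows for your target):

    diskpart
    DISKPART> list disk
    DISKPART> select disk 1
    DISKPART> attributes disk set readonly
    DISKPART> attributes disk clear readonly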

Daily Blog #210: Sunday Funday 1/19/14

Hello Reader,
       If you watched the lunch this week you heard Sarah Edwards discuss her OSX class and a great conversation with Craig Ball regarding his work as a special master, among other topics. One of the things Craig and I discussed was the need for passion and deep knowledge in forensics, so I thought I'd let this week's challenge show off your deep knowledge.

The Prize:




The Rules:
  1. You must post your answer before Monday 1/20/14 2AM CST (GMT -6)
  2. The most complete answer wins
  3. You are allowed to edit your answer after posting
  4. If two answers are too similar for one to win, the one with the earlier posting time wins
  5. Be specific and be thoughtful 
  6. Anonymous entries are allowed, please email them to dcowen@g-cpartners.com
  7. In order for an anonymous winner to receive a prize they must give their name to me, but I will not release it in a blog post

The Challenge:
 Since Windows XP we've been able to create a registry key that will treat USB devices as read only. Answer any or all of the following questions to show how well you understand that functionality:

1. How does the write blocking become effective on XP, Vista, and 7? What steps need to take place between applying the registry key and the write protection coming into effect?
2. What Windows subsystem is enforcing the write protection?
3. What happens to USB devices already plugged in when the write protection is enabled?
4. Can anything bypass the write protection offered by this registry key?
5. Does this registry key protect MTP USB devices?
6. Why does this registry key not protect non-USB devices?

Daily Blog #209 Saturday Reading 1/18/14

Hello Reader,
     The DFIR world has been busy this week! I have so many links for you to look at that they might take you into Sunday! So put on a full pot of coffee, because it's time for links to make you think on this week's Saturday Reading.

1. Did you know we do a live Google On Air Hangout every Friday called the Forensic Lunch? We do! This week our guests were:
Sarah Edwards talking about her OSX forensics class for SANS; sign up for the beta here: http://computer-forensics.sans.org/blog/2014/01/14/introducing-mac-forensics-the-new-sans-dfir-course-in-beta-starting-in-april-2014

Craig Ball talking about his work as a Special Master within the Civil Courts and his perspectives on DFIR, you can read more from Craig at his website: http://craigball.com/

Matthew and I talking about the v3 Beta, the NCCDC Red Team intern position opening for CCDC alumni and more.

2. The Volatility team is always coming up with new and cool tools. This week's post is no exception; click the link to read how to recover TrueCrypt keys from memory! http://volatility-labs.blogspot.com/2014/01/truecrypt-master-key-extraction-and.html

3. This post on the Securosis blog, https://securosis.com/blog/cloud-forensics-101, is a great primer for those of you having to do an examination on an AWS (Amazon Web Services) virtual instance.

4. Corey has really been doing some seriously good posts lately, and this post about tying together all the sources of program execution is no exception. Read it here: http://journeyintoir.blogspot.com/2014/01/it-is-all-about-program-execution.html


5. I mentioned this post in the Forensic Lunch and I'll probably write about it again next week. The team that runs the National Collegiate Cyber Defense Competition has put together an 'intern seat' on my red team at nationals, open to alumni of the CCDC games. If you qualify, go here to find out how to apply and join Team Hillarious (two L's because we are extra funny): http://www.nationalccdc.org/blog/do-you-want-to-be-the-1st-red-team-intern/

6. I tend not to talk about malware and IR much, as this is a digital forensics blog for the most part, but I don't think any of us can help being fascinated by the Target breach. Brian Krebs has two great articles up looking into what he's uncovered: Part 1 is here http://krebsonsecurity.com/2014/01/a-first-look-at-the-target-intrusion-malware/ and Part 2 is here http://krebsonsecurity.com/2014/01/a-closer-look-at-the-target-malware-part-ii/

7. To follow up on Brian Krebs's post on the Target breach, here is the Volatility team's write-up on the POS malware and the technique of RAM scraping: http://volatility-labs.blogspot.com/2014/01/comparing-dexter-and-blackpos-target.html

8. Willi Ballenthin has released three tools this week, you should go get all of them... right now http://www.williballenthin.com/blog/2014/01/16/tool-release-fuse-mft/
http://www.williballenthin.com/blog/2014/01/15/tool-release-list-mft/
http://www.williballenthin.com/blog/2014/01/13/tool-release-get-file-info/

9. If you have to talk to lawyers regularly in your work, you may have been asked how many boxes of paper X amount of data would represent. Craig Ball has a new post up where he examines the issues in answering this question: http://ballinyourcourt.wordpress.com/2014/01/15/revisiting-how-many-documents-in-a-gigabyte/

10. Jesse Kornblum has a quick post up pointing to a new capability on hashsets.com to search the NSRL online; that's seriously cool. http://jessekornblum.livejournal.com/295268.html
 
11. I do a lot of examinations of MS Office documents, so when I see a blog post regarding new findings in them I pay attention. Check out this post on Jason Hale's blog to learn about some new artifacts in MS Excel 2013,  http://dfstream.blogspot.com/2014/01/ms-excel-2013-last-saved-location.html

12. Harlan has a new post up this week discussing the gap, or disconnect, between those doing IR and those reverse engineering the malware that responders find. In it he argues for the integration of these two distinct roles, or at least communication between them, to allow both parties to do their jobs better. http://windowsir.blogspot.com/2014/01/malware-re-ir-disconnect.html

That's all for this week, keep up the great work out there! Make sure to come back tomorrow for a chance to win a Write Protectable USB3 Flash drive on Sunday Funday!

Daily Blog #208: Forensic Lunch 1/17/14

Hello Reader,

This week we had another great forensic lunch! We had:

Sarah Edwards talking about her OSX forensics class for SANS; sign up for the beta here: http://computer-forensics.sans.org/blog/2014/01/14/introducing-mac-forensics-the-new-sans-dfir-course-in-beta-starting-in-april-2014

Craig Ball talking about his work as a Special Master within the Civil Courts and his perspectives on DFIR, you can read more from Craig at his website: http://craigball.com/

Matthew and I talking about the v3 Beta, the NCCDC Red Team intern position opening for CCDC alumni and more.

CCDC Alumni can apply for the red team intern slot here: http://www.nationalccdc.org/blog/do-you-want-to-be-the-1st-red-team-intern/


Daily Blog #207: SWGDE new best practices published

Hello Reader,
            If you've followed the blog for a while you know that I am a member and supporter of the efforts of the Scientific Working Group on Digital Evidence (SWGDE). We just finished our meeting for the quarter, and two documents have left public comment status:

This document provides tech notes on the examination of OSX systems:
https://www.swgde.org/documents/Released%20For%20Public%20Comment/2013-09-14%20SWGDE%20Mac%20OS%20X%20Tech%20Notes%20V1V1

This document makes examiners aware of potential issues with UEFI when imaging:
https://www.swgde.org/documents/Released%20For%20Public%20Comment/2013-09-14%20SWGDE%20UEFI%20Effect%20on%20Digital%20Imaging%20V1

Both have now moved into official public documents.

One document that should be released for public comment in the next few weeks is a best practices document for dealing with skimming devices. When it's up for review I'll link it so you can join in on the public comment period with any concerns or suggestions you have.

I like SWGDE because they are working hard to put out good best practices, training guidelines, and guidance to those of us in the field. SWGDE has put out a lot of great information, which you can see here: https://www.swgde.org/documents/Current%20Documents

For those of you like me who are in the private sector, you should know that SWGDE now allows us full membership. If you want your input and ideas to be included in future SWGDE documents you should consider filling out a guest request: https://www.swgde.org/documents/Application%20and%20Nomination%20Forms/Guest%20Invitation%20Letter%20Request%20(pdf)

and coming to a meeting to see if it's for you.

Daily Blog #206: Download our Multi Boot USB Drive

Hello Reader,
        Many of you have expressed interest in our project to create a thumbdrive that can boot multiple live distributions and also have a live response toolkit partition. In fact, yesterday's blog showing how to create your own has been one of the more popular posts this year. I thought I would follow that up with a link to download the thumbdrive image we've already made, so you can use ours if you don't want to make your own. You can download it here:

Update 1/23/14: Google Drive was shutting down the link due to excess traffic from the size and number of concurrent downloads. Here is a new link from Mega, which claims to give me 46TB of bandwidth.
https://mega.co.nz/#!3pIUQbzL!aM9VOSTWYNCoSb64TZZfQjOHML9vBZqT4tyctkegV3o


Things to know:
1. This thumbdrive image, when restored, is not write protected. If you want write protection against whatever nastiness is going to be on the live system you plug it into, get a thumb drive that has a write protect switch. The Kanguru SS3 http://www.amazon.com/Kanguru-Flash-Physical-Protect-switch/dp/B008OGNM8E/ref=sr_1_1?ie=UTF8&qid=1389798136&sr=8-1&keywords=kanguru+ss3 is the drive we are testing with and having good success with.

2. We removed Kali Linux from the image until we understand the licensing issues of some of the bundled software. We've emailed them asking for clarification and if we are free to redistribute their ISO in our image I'll update the link.

3. The live response partition is FAT32 and contains directories for natively compiled OSX/Linux/Windows tools.

4. We are not responsible for any issues that arise from the use of this; it is not a commercial or supported product. If you have questions you are welcome to send them to info@g-cpartners.com, but understand that this is just a fun side project for us right now that we thought others would find useful.

Have an ISO or tool you think should be included? Please leave a comment below and we'll see if it will work!

Daily Blog #205: How to make your own Multi Boot Thumbdrive

Hello Reader,
          If you watched the forensic lunch last week you would have seen us demonstrate a multi-boot USB key we've made. While we work out any potential licensing or permissions we need to receive before we distribute someone's work, I thought it would be helpful to explain how we did it so you can do it as well. Here is what Kevin Stokes in our lab wrote up:



In this walk-through, I’ll show you how to create a multi-boot USB drive to carry lots of great DFIR tools, or whatever else you want.

We started with a USB 3.0 32GB thumb drive. They are very cheap nowadays. You can use a smaller drive; we actually still have a lot of extra space, which leaves plenty of room for add-ons later.

To keep the tool compatible with older systems, we used FAT32 and added several Linux distros to cover many situations and configurations. Some of the distros will boot on USB 3 and some will not; however, they will all boot from USB 2. Here are the distros we are using:
  • SIFT 2.14
  • Kali Linux
  • Paladin 5
  • Raptor 3


These will give a lot of compatibility with multiple systems and many tools for multiple situations. Paladin and Raptor will even boot on Mac systems. Feel free to add your favorite!

To make this tool even more versatile, we will add a second FAT32 partition for any other tools we want to have available, such as Windows tools like the Sysinternals Suite and FTK Imager Lite, among many others.

You can partition it with whatever tool you find that will partition removable drives. I chose EaseUS Partition Master Free Edition, which has been pretty easy. It is recommended that you make all your partitions primary, however, as apparently Windows will only look at the first primary partition on a removable drive. We can use another program called RMPrepUSB to switch the order of the active partitions (Ctrl-O) so we can manipulate each partition individually. RMPrepUSB will do many of the other steps we need too; however, I found the other tools more intuitive. I did not find another tool that would swap the order of the partitions, though, which we will need.

Once you have the thumb drive partitioned how you like, use XBoot to create the multi-boot partition.  When you add the ISO file to XBoot, select “ISO files which support Live-media-path kernel parameter”. 



Then add as many distros as you would like in this manner. Once you have all your distros added, you can select “Create USB”, and a pop-up will appear to select the USB drive (make sure you get the right one!). The Syslinux bootloader is recommended for FAT32. Select “OK” and it will begin to create your bootable partition and add the distros you selected. Be sure to test this out! You can use QEMU to test.



It’s not difficult to edit the menu; just grab a text editor and make adjustments to the right .cfg files. For the image, I merely edited the default xboot.jpg image. It’s a fun way to further customize your toolkit. Add some extra information to assist you in choosing the right tool for the job. For example, so far in my testing only Paladin and Raptor would boot on the Mac Mini here in the lab, so I added that information to save time and trouble later.

To add tools to the second partition, use the RMPrepUSB tool (Ctrl-O) to switch the partition that Windows is showing you.



At this point you have access to the non-boot partition; just add whatever you would like. There are many portable apps available. Considering the forensic use of this device, I'd recommend that you create a separate folder for any programs that require installation, or just leave them out.

To keep the drive bootable and to always have access to the non-boot partition in Windows, make sure once you have finalized your customizations that you have the non-boot partition set as the first Primary partition.  That way Windows will always find it.  The computer will still see the boot partition when you’re booting from the thumbdrive, assuming you have the bios setup right.