Wednesday, July 31, 2013

Daily Blog #38: Web 2.0 Forensics Part 3

Hello Reader,
        This post is a bit late in the day but that happens sometimes when you are onsite and can't sneak away for some blog writing. In the last two posts we've discussed where to find JSON/AJAX fragments and how Gmail stores message data within them. Today we will discuss how these artifacts are created and what you can and cannot recover from them.

What you can recover
Much like other web artifacts we can only recover what was sent by the server and viewed by the custodian. This includes:

  • the content of emails read
  • the names and contents of attachments accessed
  • what was contained in each mailbox folder viewed (such as the inbox, sent mail, saved mail)
    • For some webmail clients (such as Gmail) you can also see a preview of the email messages contained in the mailbox, even if the custodian did not read them, as the data is precached.
    • Whether the message had been read
    • If the message had an attachment
  • a list of all the mailbox folders the custodian had in use
  • contacts
  • for Gmail specifically, Google Talk participants
  • for Gmail specifically, a list of all the circles they are in


What you can't recover
If the data was never sent from the server and viewed, it won't exist in cached form anywhere except live memory. The list of things you can't recover includes:


  • The text of emails sent by the custodian, unless they viewed a preview of the message, checked their sent mail, or read a reply to the message.
  • The content of attachments sent via email, though you can match up attachments by name to files on their system, as the attachment-upload success response is sent from the server to the browser.
  • The full contents of mail folders, if not all of the pages containing messages were viewed.
  • The contents of all webmail ever read; over time the data in the pagefile will be overwritten, the shadow copies will expire, and the hiberfil will be overwritten on the next hibernation.

The examples I'm showing here are for webmail; there are other popular AJAX/JSON services out there (Facebook, Twitter, etc.). I'm focusing on webmail because in my line of work it's a popular method for exfiltrating data and for discussing plans that people don't want saved in company email. I will look at expanding the series to other types of web 2.0 applications, likely after my HTML5 offline caching research with Blazer Catzen is complete.

Tomorrow we continue the web 2.0 forensic series, hopefully with an earlier posting time.

Tuesday, July 30, 2013

Daily Blog #37: Web 2.0 Forensics Part 2

Hello Reader,
             Sunday Funday is always fun for me for two reasons. One, it gets me two blog posts out of one, so I get more time to get work done; two, I like getting a general feeling of what level of understanding exists around certain artifacts. So while you get a prize, one that I strive to make worth your effort, I get to see what I can continue to help you learn by writing additional blog posts to fill those gaps. With that said, we are continuing the web 2.0 series today, a series I realized was needed after the IEF Sunday Funday challenge two weeks ago.

JSON Data Structures

JSON data structures are fairly easy to find; they are name/value pairs exchanged between the web server and the web client, for instance between the Gmail server and the Chrome browser. In this example the Chrome browser would then parse the data to generate the view that you see.

Here is what a message summary from your Gmail inbox looks like:

Index data for Gmail
["140303866b4ce541","140303866b4ce541","140303866b4ce541",1,0,["^all","^i","^o","^smartlabel_notification"]
,[]

Email from/subject/message preview and date
,"\u003cspan class\u003d\"yP\" email\u003d\"mail-noreply@google.com\" name\u003d\"Gmail Team\"\u003eGmail Team\u003c/span\u003e","\u0026raquo;\u0026nbsp;","Welcome to the new Gmail inbox","Hi David Meet the new inbox Inbox tabs put you back in control with simple organization so that you",0,"","","10:35 am","Tue, Jul 30, 2013 at 10:35 AM",1375198584460000,,[]
,,0,[]
,,[]
,,"3",[0]
,,"mail-noreply@google.com",,,,0,0]
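These escaped fields decode with any JSON parser. A minimal sketch in Python; the sender field and timestamp are copied from the index row above, and reading the long integer as epoch microseconds is my interpretation of the data, not anything Google documents:

```python
import json
from datetime import datetime, timezone

# The sender field from the index row above, exactly as it appears in the
# cached JSON: \u003c is '<', \u003d is '=', \" is an escaped quote.
raw_field = ('"\\u003cspan class\\u003d\\"yP\\" '
             'email\\u003d\\"mail-noreply@google.com\\" '
             'name\\u003d\\"Gmail Team\\"\\u003eGmail Team\\u003c/span\\u003e"')

decoded = json.loads(raw_field)
print(decoded)  # the original HTML span, angle brackets restored

# The long integer near the end of the row reads as a Unix epoch timestamp
# in microseconds; dividing by 1,000,000 recovers the message date.
when = datetime.fromtimestamp(1375198584460000 / 1_000_000, tz=timezone.utc)
print(when.isoformat())
```

The decoded timestamp lands on July 30, 2013, consistent with the "Tue, Jul 30, 2013 at 10:35 AM" string in the same row.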

Here is what the email header looks like when a full message is loaded. The cached page fragment, with the surrounding markup and whitespace stripped away, reduces to:

    Gmail Team
    <mail-noreply@google.com>
    10:35 AM (36 minutes ago)
    <img class="f T-KT-JX" src="images/cleardot.gif" alt="">
    to me
This is followed by the body of the message. In addition, on each page you have a listing of all the labels, email counts, circles and more data that is preloaded into each page, providing you with a large amount of data on your custodian's activities but also producing a large number of duplicates.

Tomorrow we will go into the important fields and their meanings, and I'll provide a regex for carving them out. Recovering webmail used to be simple: just find a JavaScript library known to the service and carve out the HTML before and after it. Now, with JSON/AJAX services like Gmail, we get fragments of emails and possibly entire messages, but we either have to manually carve them or use a tool like IEF to do it for us.

I start with IEF and let it find the fully formed messages, and then go back myself to find partials, knowing the user's email address.
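For hunting partials by hand, a simple pattern keyed on the escaped sender attribute can pull addresses out of raw data such as a pagefile export. This is a hedged sketch, not the signature IEF uses; the buffer and the regex here are illustrative:

```python
import re

# Simulated slice of unallocated space / pagefile containing a Gmail JSON
# fragment like the one shown above, surrounded by junk bytes.
buf = (b"\x00\x13garbage"
       b'"\\u003cspan class\\u003d\\"yP\\" email\\u003d\\"mail-noreply@google.com\\"'
       b' name\\u003d\\"Gmail Team\\"\\u003e'
       b"more junk\xff\xfe")

# Key on the escaped email\u003d"..." attribute Gmail's JSON uses for the
# sender span; the pattern is an illustrative guess, not a published signature.
pattern = re.compile(rb'email\\u003d\\"([^"\\]+)\\"')

senders = [m.group(1).decode("ascii", "replace") for m in pattern.finditer(buf)]
print(senders)  # ['mail-noreply@google.com']
```

Swapping the address capture group for a literal custodian address turns the same idea into a targeted sweep for that user's fragments.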

See you tomorrow! Leave comments or questions below if you're seeing data differently. I'm going to install Fiddler on my system tonight to show how the data looks as it's being transmitted.

Monday, July 29, 2013

Daily Blog #36: Sunday Funday 7/28/13 Winner!

Hello Reader,
                This Sunday Funday I thought was easier than the last, and we had several submissions, both posted on the blog and submitted anonymously, but only one was done before the deadline of Midnight PST. So congratulations go out to Jonathan Turner, who, while not having the most complete answer of all the ones submitted (that goes to Harlan Carvey this week), was the only one who submitted his answer before the cutoff!

I got a lot of answers after the deadline; do you need me to change the rules to give you more time to play? I thought 24 hours (I try to post at Saturday midnight CST) was enough time, but if you need more time to play I can change the rules to let more people participate. I'm hoping as these contests continue we will keep getting great prizes to give away that will tip you over the 'should I try this one' cliff.

Here was the challenge:
The Challenge:     I'm going to step down the difficulty from last week; I may have been asking for a bit much on a Sunday. So this week's question is going back to basics:
For a Windows 7 system:
Your client has provided you with a forensic image of a laptop computer that was used by an ex-employee at their new employer; it was obtained legally through discovery in litigation against them. You previously identified that the employee took data when they left. Where on the system would you look to determine the following:
1. The same external drive was plugged into both systems
2. What documents were copied onto the system
3. What documents were accessed on the system

Here is Jonathan's answer:
1) The manufacturer, model, and serial number of USB keys plugged into a system are stored in the registry at HKLM\SYSTEM\(CurrentControlSet|ControlSet001|ControlSet002)\Enum\USBSTOR. Comparing these keys on the two systems should show any common devices.
2) The created timestamp on the above registry key can be used to filter a timeline of file creation times to determine what files were added to the system around the time it was plugged in. These files could contain metadata about where they were originally created as well as other interesting information that can be manually collected.
3) Documents accessed on the system should show up in jump lists and (potentially) shellbag information stored in the users' ntuser.dat hive.

 Here is Harlan's answer:
Sorry this is late, but I was at a couple of events yesterday starting at around 2pm...I'm not sending it in so much as a submission, but more to just provide my response...

*1. The same external drive was plugged into both systems

This type of analysis starts with the Enum\USBStor keys.  I would locate the subkey that contained the device identifier for the external drive in question, and see if there is a serial number listed.  If not, that's okay...we have other correlating information available.  If there is a serial number pulled from the device firmware, then we're in luck.  

Beneath the device serial number key, I can get information about when the device was first plugged in, from the LastWrite time to the LogConf key, as well as the Data value (FILETIME time stamp) from the \Properties\{83da6326-97a6-4088-9453-a1923f573b29}\00000065\00000000 subkey.  I would correlate this time with the value in the setupapi.dev.log file, as well as with the first time for that device that I found in the Windows Event Log (for device connection events).    I could then get subsequent connection times via the Windows Event Log, as well as the final connection time from the NTUSER.DAT hive for the user, via the MountPoints2 key (for the device, given the volume GUID from the MountedDevices key) LastWrite time value.  

To be thorough, I would also check beneath the \Enum\WpdBusEnumRoot\UMB key for any volume subkeys whose names contained information (device ID, SN) about the device in question.

Getting the disk signature for the specific external drive can be difficult on Win7, using just the System hive file, as there is very little information to correlate the Enum\USBStor information to the information in the contents of the MountedDevices key.  However, further analysis will be of use, so keep reading.  ;-)

The "\Microsoft\Windows NT\CurrentVersion\EMDMgmt" key in the Software hive contains a good deal of information regarding both USB thumb drives and external drives; the subkeys will be identifiers for devices, and for external drives, you'd be interested in those that do NOT start with "_??USBSTOR".  The subkey names will have an identifier, as well as several underscores ("_"); if the name is split on underscores, the first to the last item, if there is one, will be the volume name, and the last item will be the volume serial number, listed in decimal format.  This final value changes if the device is reformatted, but it wouldn't make any sense to copy files to the device, reformat, and then connect it to the target device, so we can assume that this information won't change between the two systems.

I could then use this information to correlate to LNK files in the Windows\Recent and Office\Recent folder within the user profile, as well as LNK streams within the user's *.automaticDestinations-ms Jump Lists.

At this point, I will have a drive letter that the external drive was mapped to, so I can then return to the MountedDevices key in the system hive, and by accessing available VSCs, locate one in which the drive letter was available for the ext. drive.  This will provide me with the disk signature of the device itself, as well as the volume GUID.

At this point, I have device identifier, the device serial number, the volume serial number, potentially the disk signature, and the time(s) of when the external drive had been connected to the laptop.  I can then use this information to correlate to the other system.

*2. What documents were copied onto the system

I would create a timeline of system activity, correlating file creation dates on the system with times when device was connected to the system, based on the time-based information provided in the response to #1 above. 

*3. What documents were accessed on the system

The shellbags artifacts likely won't serve you much use this time, as on Win7, they tend to not contain the same sort (and volume) of information as they do on WinXP.  However, I would start by looking at the shortcut/LNK files in the user's profile (Windows\Recent and Office\Recent), as well as Jump Lists.  This information also helps us identify the application used to access the documents (Office, Adobe, etc).  I would also, for clarity's sake, verify this information via Registry MRUs, even though some of them (ie, RecentDocs) will not contain full path information.  However, now that we have information about the applications used (from the Jump Lists, after performing any required AppID lookups), I would be sure to examine any available application-specific MRUs.

Harlan gave a great answer but didn't get it in on time, so the winner of a Specialist Track ticket to PFIC is Jonathan Turner. There is still more to be said on this topic, though. I used specific operating systems for a reason, as artifacts change between them, and there are still artifacts and scenarios not clearly shown in either of these answers. When I'm done with the web 2.0 series I'll go into depth on it.
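One concrete detail from Harlan's answer is worth illustrating: the volume serial number at the end of an EMDMgmt subkey name is stored in decimal, while dir, chkdsk, and most forensic tools display it in the familiar XXXX-XXXX hex form. A small conversion sketch; the subkey name here is made up for illustration:

```python
def vsn_to_hex(decimal_vsn: int) -> str:
    """Convert a volume serial number from the decimal form seen in
    EMDMgmt subkey names to the XXXX-XXXX hex form shown by dir/chkdsk."""
    h = f"{decimal_vsn:08X}"
    return f"{h[:4]}-{h[4:]}"

# Hypothetical subkey name ending in an underscore-delimited decimal VSN
subkey = "SomeVendor_SomeDrive_MYVOLUME_1234567890"
vsn = int(subkey.rsplit("_", 1)[-1])
print(vsn_to_hex(vsn))  # 1234567890 decimal is 0x499602D2, so 4996-02D2
```

That hex value is what you'd then match against the volume serial numbers recorded in LNK files and Jump List streams.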

In the meantime, do you want to go to PFIC? I still have more tickets to give away next week. If two answers make it in on time that are both great (or I change the rules based on your feedback to extend the time), I can give away more than one! Tomorrow we resume the web 2.0 series, and I hope you follow along, as it continues to give me the motivation to keep these up daily! Only 316 more blogs before the year is up!

Sunday, July 28, 2013

Daily Blog #35: Sunday Funday 7/28/13

Hello Reader,
           It's that time again, Sunday Funday time! For those not familiar every Sunday I throw down the forensic gauntlet by asking a tough question. To the winner go the accolades of their peers and prizes hopefully worth the time they put into their answer. This week we have quite the prize from our friends at the Paraben Forensic Innovations Conference.

The Prize:
  • A free Specialist track ticket to PFIC (worth $399)
The Rules:
  1. You must post your answer before Midnight PST (GMT -7)
  2. The most complete answer wins
  3. You are allowed to edit your answer after posting
  4. If two answers are too similar for one to win, the one with the earlier posting time wins
  5. Be specific and be thoughtful 
  6. Anonymous entries are allowed, please email them to dcowen@g-cpartners.com
  7. In order for an anonymous winner to receive a prize they must give their name to me, but I will not release it in a blog post
The Challenge:
     I'm going to step down the difficulty from last week; I may have been asking for a bit much on a Sunday. So this week's question is going back to basics:

For a Windows 7 system:
Your client has provided you with a forensic image of a laptop computer that was used by an ex-employee at their new employer; it was obtained legally through discovery in litigation against them. You previously identified that the employee took data when they left. Where on the system would you look to determine the following:

1. The same external drive was plugged into both systems
2. What documents were copied onto the system
3. What documents were accessed on the system

As a reminder I'll be speaking at PFIC and the agenda is pretty great this year, I hope to see you there! This should allow everyone a good shot at playing, but this answer can go very, very deep. I'm excited to see your answers, good luck!

Friday, July 26, 2013

Daily Blog #34: Saturday Reading 7/26/13

Hello Reader,
        It's Saturday, time to put on a long movie for the little ones while you fire up the web browser to prepare for another week of deep dives into forensic images. This week we have links to deep reads on a wide range of topics so I hope you'll stay informed as we all move forward towards the quest for new knowledge of artifacts and deeper understanding of what's possible. Don't forget tomorrow's Sunday Funday where you can win a ticket to PFIC!

1. This is an older article but I certainly didn't hear anything about it at the time, http://geeknizer.com/pros-cons-of-html-5-local-database-storage-and-future-of-web-apps/, and it has to do with what Blazer Catzen and I are looking into now. Specifically we are researching HTML5 offline content cache databases and what data they are storing that you may currently not be paying attention to. The most relevant example was first described to me by Blazer; after some quick research on the database table names I found the article linked above. The article details how iOS browsers built on WebKit that visit Gmail will have a summary of the contents of the displayed messages stored in a SQLite database on the device.

The question in my mind is not just how we can extract and recover more Gmail than we knew about before on iOS devices, though that is a great thing, but what other web applications are making use of this feature, and on what platforms/browsers? I'll be updating our findings here on the blog and during the Forensic Lunch (hopefully Blazer will come on!) as we learn more, but I think there is a lot more here to discover.

2. Tomorrow, while you're working on your winning answer, you might find some insight from Lee Whitfield on the Forensic 4cast, https://plus.google.com/u/0/events/cle30c05m88rpjs467k4dnns27k. If you have the time, Lee is always informative and entertaining.

3. I do have interests outside of forensics and this article made me want to actually go outside, http://travisgoodspeed.blogspot.com/2013/07/hillbilly-tracking-of-low-earth-orbit.html, and monitor satellites. 

4. It's hard for me not to link to a post that mentions our research; when they add other good methods to detect anti-forensics then I can justify it. Harlan Carvey is still blogging up a storm of useful posts, and this one, http://windowsir.blogspot.com/2013/07/howto-determinedetect-use-of-anti.html, is one of my favorites in his How To series.

5. I don't often link straight to product pages on Saturdays, but I need to this week as this tool helped me out of a jam, http://www.lostpassword.com/kit-forensic.htm. If you are dealing with Windows 7 protected-storage encrypted files (IntelliForms, Chrome logins, Dropbox, etc.) you know that you need the user's password (something I hadn't had to care about in years). Using Passware's tool I was able to recover the plaintext password from the hiberfil in an hour, for a password that we had been trying to crack for two months. Do you know of an alternative or free/open source solution? Please comment and let me know!

6. If you're looking for something to listen to rather than read, check out our first recorded Forensic Lunchcast, http://www.youtube.com/watch?v=4A_GynQF3n0&list=PLzO8L5QHW0ME1xEyDBEAjmN_Ew30ewrgX&index=1. We are still trying to decide how often we should do these, but we will be doing it again on Friday if you want to participate.

7. Last one for the week. It's not often that the cyb3rcrim3 blog mentions a civil case, so when they do I pay attention: http://cyb3rcrim3.blogspot.com/2013/07/unauthorized-access-email-and-team.html. I'd like to know more about what happened in this case, but I thought it was odd that someone would access online webmail to get access to communications when we have such great tools and techniques to do so. I've certainly had people get confused and think I accessed their accounts to get cached webmail (or JSON fragments), but this is the first I've seen where someone actually did, and used the evidence!

That wraps up this Saturday reading; I hope these links will keep you busy until next Saturday. You won't have much time to read them tomorrow, though, because you'll be too busy competing in the next Sunday Funday contest to win a free ticket to PFIC! See you then!

Daily Blog #33: Web 2.0 Forensics Part 1

Hello Reader,
                 I've finished two series; I've never even finished one in the last 5 years, so I think this daily blog experiment is working. Thanks to all of you who are following along. I know it can be hard to keep up daily, and for those that do (I compulsively watch pageviews) it does help me keep going with the dailies.

Today we begin a new series on 'web 2.0' forensics. I don't mean to use buzzwords, but 'web 2.0' has come to represent a combination of technologies that have changed how custodians/suspects access data from web services and how the systems we analyze store it. It is the retrieval of these asynchronous transactions that we will be talking about over the next several posts. Based on the responses I got to the last Sunday Funday challenge, I take it that many of you don't feel comfortable with these artifacts and how they get created, so let's get into it so you can start recovering more evidence!

The key technology that allows web pages to update sections of a page without refreshing the entire page being viewed is AJAX. AJAX, or Asynchronous JavaScript and XML, a term coined in 2005, standardized a mechanism allowing JavaScript executed within the browser to make a request to the web server, receive the response, parse it, and update the content on the page, all seamlessly to the user. It is this technology that allowed many webmail systems to present a more fluid experience to their users and totally ruined the day of many a forensic examiner.

Before AJAX it was easy to write a carver to recognize the JavaScript in cached pages found all over the unallocated space of the disk, recovering scores of webmail views. I wrote my first such EnScript back in 2002, and it became one of my favorite ways of finding data exfiltration. After AJAX, all of the webmail views were being delivered via updates to single page loads, all of which occurred in memory and were never committed to the disk; this was a sad day. Suddenly the evidence we were all relying on was thought to be gone and unreachable.

Then someone started looking at the network traffic and what was being viewed, and found the data structure of the XML/JSON requests being sent back and forth (I don't know who started this research; if you do, please comment below). They found these fragments in memory and, more importantly, in the pagefile and hiberfil! We don't have the same length of history as we did when glorious cached pages were being written to disk, but we can again recover webmail, and no one can complain about that.

If you remember, one of the challenge questions was where we can recover JSON fragments from Gmail. Before Windows Vista the pagefile and hiberfil (and active memory of course, but I'm looking at past activity recovery) used to be the only locations, but now with shadow copies there's more! If you've heard me talk, I've mentioned that Volume Shadow Copies contain more data than most people expect. In fact they also contain the hiberfil and pagefile for each backup! That means for a shadow-copy-enabled disk you have, by default, weekly snapshots of possible JSON recovery available. If you are not extracting and searching this data (remember the hiberfil is compressed and will not be searchable unless extracted and decompressed, or a tool specifically supports it within the volume shadow copy) you are missing evidence.

Before we go on: I actually got a second answer to the contest. While he didn't win a prize (late submission), he did give a different answer that I wanted to highlight.

Seth Ludwig writes:
In response to your blog post:
For a Windows 7 system:
1. Describe the Gmail JSON format and how you would recover it
A typical gmail JSON capture might look like the following:
while(1);
[[["v","137s2mfg40boa","1c22e772e53ff3
de","-902218240","1","vaknsvbtjz8a"]
,["gn","gsi test502"]
,["cfs",[]
,[]
]
,["i",50]
,["st",1208038540]
,["qu","0","6616","0","#006633",
0,0,0,"0","6.5"]
,["ft","Send photos easily from Gmail with
Google\'s \u003ca href\u003d\"http://
picasa.google.com\"
Recovering the JSON data could be achieved using a variety of forensic tools, both commercial and open source, to carve for the files with the embedded JSON (EnCase, IEF, Helix3, etc.).
http://capec.mitre.org/data/definitions/111.html


2. Describe where in the disk you would expect to find Gmail JSON fragments.
Sometimes you simply cannot find them. The reason that this data is sometimes written to disk is largely because of browser bugs or lack of proper support for the no-cache HTML meta tag. This data isn't supposed to be written to disk in the first place, but due to various bugs it sometimes is. When the files are cached, you will find them named "mail[somenumber]", and is mainly located in Temporary Internet Files or other caches of unidentified data. Often you will be able to find these files in unallocated space. Additionally, you will find other files in the same places named "mail[somenumber].htm". There's often some JSON as described above contained within them.
Other possible and more likely locations:
Memory dumps
Pagefile
Hiberfil.sys (remember to decompress)

3. Which services popular in forensic investigations utilize JSON
Facebook, Twitter, Gmail, Skype, Google Talk, Yahoo Messenger and many others.

4. Provide a carve signature for the header and footer of a Gmail JSON
It's 1AM. You win this round.

5. Describe what Gmail's JSON would reveal to you
Utilizing JSON files, one has the potential to retrieve the following information:
Server name
Account Quota
Folders
Message List (Thread)
Conversation Summary
Message Information/Index
Message Body
Message Attachments
GMail Data Packet header
Invitation
Categories/Labels/Contacts
Thread Summary
End of Thread List
GMail Version
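Seth's point about "mail[somenumber]" cache files lends itself to a quick filename sweep once you've exported a cache directory. A hedged sketch; the directory contents below are fabricated for illustration:

```python
import re
import tempfile
from pathlib import Path

# Hypothetical exported cache directory seeded with the kinds of names
# Seth describes, plus a couple of decoys.
cache = Path(tempfile.mkdtemp())
for name in ("mail12345", "mail9.htm", "index.dat", "mailbox.txt"):
    (cache / name).touch()

# Cached Gmail responses reportedly show up as "mail[number]", with an
# optional .htm extension; anything else is ignored.
pattern = re.compile(r"^mail\d+(\.htm)?$")
hits = sorted(p.name for p in cache.iterdir() if pattern.match(p.name))
print(hits)  # ['mail12345', 'mail9.htm']
```

The same pattern can be handed to a forensic suite's file-filter feature instead of a script, but the logic is the same either way.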

That's enough for today; hopefully I've gotten you thinking. In the next post on Tuesday we will go into JSON data structures, how services use and store the data, and how you can recover it.

Stay tuned for tomorrow's Saturday reading and, more importantly, this Sunday Funday where you can win a free ticket to PFIC!

Thursday, July 25, 2013

Blog Post #32: Go Bag part 7 end of series

Guten tag Reader,
          It's time to wrap up this series and move on to other topics. I hope you've found these scenarios, and how I deal with them from my light go bag, helpful. Hopefully I can help you lighten your load when you are out in the field; it really is a more pleasant experience. In this post we will cover handling all the assorted storage locations you might receive and how I deal with them.
       
CDs/DVDs - Imaging these is fairly straightforward, as I'm not aware of any operating system that tries to write to a rewritable CD/DVD on insert. Remember, not all CDs/DVDs are simple write-once media; if the burns are layered in sessions you can recover the prior burned sessions once imaged.

MMC/SD Cards - Many of these actually have switches to make them read only, but otherwise I will either boot into Linux to acquire them read only or enable the USB write block hack and plug in a USB card reader.

External drives - I don't carry a USB write blocker because I haven't found a USB 3.0 one yet, and they don't always work with the random drives I encounter. So instead I use the USB write block hack to acquire the drive if I can't easily access the underlying drive and attach it via SATA. This is also why I make sure my acquisition system has eSATA, so I always have a writable external storage interface available to me and can leave my USB ports read only when acquiring.
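For reference, the "USB write block hack" mentioned here (and used again for card readers) is, as I understand it, the StorageDevicePolicies registry value, which tells Windows to mount newly attached USB mass-storage devices read only. A minimal .reg fragment; apply it before plugging the device in, and set the value back to 0 when you want your USB ports writable again:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\StorageDevicePolicies]
"WriteProtect"=dword:00000001
```

On some systems the StorageDevicePolicies key does not exist and must be created first, and always verify with a test write to a scratch device before touching evidence.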

Email accounts on mail servers - Many times I'll be asked to preserve the contents of a mailbox; I use a piece of software from Transend called Transend Forensic Migrator for this. Transend supports a large variety of mail servers (Exchange, Lotus, GroupWise, IMAP, etc.), so it makes my life easier to just plug in one or one hundred credentials (via batch mode) and have all the mail stored in your choice of output format (PST, mbox, etc.) with a log of its actions when it's done. You can even enable filter options to limit the data you're acquiring.

Webmail servers - One of the other types of email we are asked to grab is webmail. I've found the easiest way to grab someone's webmail is to search for the webmail provider's instructions for email access from a smartphone. Those instructions will typically identify an IMAP or POP3 server that a phone, and your software, can connect to and grab the data from.
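Once the smartphone-settings trick gives you an IMAP host, Python's standard imaplib is enough for a basic preservation pull. A hedged sketch; the host, credentials, and output path are placeholders, and the network function is defined but deliberately not executed here:

```python
import imaplib

def message_numbers(search_data):
    """Parse an IMAP SEARCH response, e.g. [b'1 2 3'], into a list of ints."""
    return [int(n) for n in search_data[0].split()]

def preserve_inbox(host, user, password, out_path):
    """Pull every INBOX message as raw RFC 822 into one evidence file.
    select(..., readonly=True) keeps us from flagging messages as read."""
    count = 0
    with imaplib.IMAP4_SSL(host) as conn:
        conn.login(user, password)
        conn.select("INBOX", readonly=True)
        _, data = conn.search(None, "ALL")
        with open(out_path, "ab") as out:
            for num in message_numbers(data):
                _, msg = conn.fetch(str(num), "(RFC822)")
                out.write(msg[0][1] + b"\r\n")
                count += 1
    return count

# Example call (placeholders, not run here):
# preserve_inbox("imap.example.com", "custodian@example.com", "secret", "acct.mbox")
print(message_numbers([b"1 2 3"]))  # [1, 2, 3]
```

A dedicated tool with logging is still preferable for evidence handling; this just shows how little is needed once IMAP access exists.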

SharePoint - There are two good ways to deal with SharePoint, neither of which involves grabbing the underlying database. You can access a SharePoint website through WebDAV and copy down the contents, or use a commercial tool like Ontrack PowerControls to grab the data if you have the budget.

That's all I can think of right now, but I think this captures 99% of what I deal with when out in the field and how I deal with it. Tomorrow we will switch topics to 'web 2.0' forensics and then make time Sunday for the weekly contest, where this week's prize is a free ticket to PFIC!

Wednesday, July 24, 2013

Daily Blog #31: Go Bag Part 6

Hello Reader,
                     Have I mentioned how good Civ 5: Brave New World is? It's really good, and the reason I'm writing this blog post this morning instead of last night, again. Tip: playing Venice is hard on King. I realized I missed a couple of scenarios we should go over, so I'm continuing the go bag series a little longer before we begin our discussion of 'Web 2.0' forensics. We focused on NASes in the last post, and now we are going to talk about how to deal with embedded devices, with a quick look at memory storage devices.

The system is an embedded storage device - You won't see this very often, but every so often you'll be told that there is an embedded device that contains logs you need.

When dealing with embedded devices, there are a few types you'll come across:

1. SoC (System on a chip) with sdcard storage

If you have this you are lucky: power down the embedded device and remove the SD card for standard imaging. When you image it you have two options: you can actually get a memory card write blocker, or you can use software write blocking. In Windows you can use the registry write block hack and then attach a memory card reader via USB, or you can boot off a forensically sound Linux distribution and image the device. In either case this is the best possible scenario.

2. SoC (System on a chip) with a maintenance port

You may come across an embedded device where the memory is soldered to the board and no removable storage options exist, but it may have a maintenance port. Whether through Ethernet, USB or a COM port, getting access to the maintenance port can also lead to shell access, as many of these devices are running embedded Unix variants and others are running DOS. Getting to this shell will vary by device, but the nice part about embedded systems is that they are rarely multi-user systems, meaning every process runs as root or administrator. Once you have the console you can then capture raw logs back to your system through it; sometimes, if you're lucky, there may be older Kermit/Zmodem transmission programs left on the image, or TFTP on network-connected systems (originally intended for network booting).

On embedded Unix systems you can get full disk images this way by dumping the contents of the physical memory devices; just remember that you need to use a protocol capable of transmitting the data without treating it as ASCII strings, or pipe it through a function to encode it first (base64 works well here).
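The base64 trick deserves a tiny demonstration: any binary dump survives an ASCII-only channel once encoded, and decodes back bit-for-bit on the examiner's side. A sketch using Python's standard library; on the device itself you'd use whatever encoder is actually present, for example `dd if=/dev/mtd0 | base64` on an embedded Linux that ships the base64 utility (the device path is illustrative):

```python
import base64

# Pretend this is a raw dump of an embedded device's flash: arbitrary
# binary, including NULs and high bytes that an ASCII-only console link
# (or a naive string capture) would mangle.
flash_dump = bytes(range(256)) * 4

# Device side: encode before sending over the console.
wire = base64.b64encode(flash_dump)
assert wire.isascii() and b"\x00" not in wire  # safe for a text console

# Examiner side: decode the captured text back to the original bytes.
recovered = base64.b64decode(wire)
print(recovered == flash_dump)  # True
```

The ~33% size overhead is the price of the safe channel; hash the recovered image against a hash computed on the device, if the device can compute one, to confirm the transfer.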

3. SoC (System on a chip) with no access

This happens, and it sucks. At this point you can hope that there is some kind of JTAG access or firmware flashing access. If there is no firmware flashing access (which you can use to download the current memory image) then you are stuck with JTAG. JTAG means you are going to have to find the JTAG pads (documented, if you're lucky), solder a JTAG connection to the board, and find a compatible app for the processor to dump the NVRAM to your system.

This isn't fun, and if you are not experienced with JTAG it's easy to mess up. At this point you should probably let your client know that you need to send this system off to a specialist shop for extraction.

4. SoC (System on a chip) locked down for security

This typically is only found in high security embedded devices (ATMs, Lottery terminals, etc.) where they have attempted to remove all internal access to the system and its underlying data. You have one option here, and you can't really go back from it: you have to de-solder the memory chips from the board and plug them into a raw reader. From that point it's up to you to reconstruct the file system and access the underlying files. If you are on this step you are likely dealing with a pretty serious case, and if you are not comfortable with what mobile forensic experts have termed 'chip-off' forensics I would send this to a lab that is. Once you remove the chips it's not likely you'll get them back on the board and get the device functioning again, so remember this is a one-way street, no going back.

That was longer than I thought it would be; as you can see I've dealt with a lot of weird systems over the last 14 years. We should talk about memory cards tomorrow and then move on to 'Web 2.0' forensics. Don't forget this Sunday you can win tickets to PFIC!

Tuesday, July 23, 2013

Daily Blog #30: Go Bag Part 5

Hello Reader,
             Another day another blog. I should have started this one last night but Civilization 5's Brave New World expansion is out, and it's really good. I am going to try to finish the Go Bag series before moving on to 'web 2.0 forensics' and dealing with JSON fragments. In other news I'm reaching out to more companies I like that provide forensic products I use and that want to provide prizes for the Sunday Funday contests. I'm happy to announce that Paraben is offering free tickets to the PFIC conference. The first of them, a $399 ticket, will be given away this Sunday to the best answer, so make sure to pencil in some time on Sunday if you are interested. I'll be speaking there along with some other very talented DFIR pros; the conference is a great deal of fun, and it's held in a ski resort!

The system is a NAS - You've imaged the systems the custodian used but are then informed that his network data is on a NAS

Note: Remember that most NASs are not Windows embedded systems and thus will likely not use the same file system internally that the custodian was using. This means the custodian's computer will treat the underlying file share as it would any Windows network file share, but which file system metadata actually gets recorded (change versus recorded time stamps, for instance) depends on what file system the NAS has formatted the volume to be.

There are three different types of NAS systems you'll commonly encounter:

1. The consumer grade NAS
These devices typically have a couple of drives internally and run embedded Linux; some will just have one drive. You can either remove the drive and image it or, on some models, attach it to your imaging laptop via USB. The important part here is realizing there is a difference between what the NAS exposes and what you can acquire.

Logically imaging the network drive - This will allow you to capture in a forensic container all of the data as it's currently seen within the NAS. However, what it will not allow you to do is acquire any of the deleted data or free space of the disk, as the NAS will only be providing you with a logical view of the file system. If your case does not mandate recovery of deleted data, a logical image may be all you need.

Physically imaging the drives - Typically consumer grade NAS systems don't have iSCSI, so I'll leave that option out of this section. You have two options at this point: you can remove the drives from the NAS and image them (on many models this is easy as they are meant to be swapped out) or, if you are lucky and there is a USB port, you can attach the NAS to your system for imaging. Remember to use a USB write blocker (software or hardware) to prevent writing to the drives.

2. The small business NAS
Small business NASs typically have more features but lack the USB option for direct connection. What they typically add, though, is iSCSI. iSCSI presents the local physical disk to another system over the network; this is how F-Response provides access to remote disks (though it does so in a read-only fashion). If you can create an iSCSI connection then you can get the physical image you want using any tool you have on your forensic workstation. If you're going to do this I would recommend doing it in Linux or WinFE to prevent the system from touching the disk, as I'm not aware of an iSCSI write-blocking solution outside of F-Response.

If iSCSI is not available then look at the other two options listed to determine what you have available to you.

3. The enterprise NAS
Enterprise NAS systems like those from NetApp may or may not have an iSCSI function, but what they typically do have is some type of maintenance connection giving you a command shell on the local system. With these systems I typically acquire the data logically, then log into the command shell, run dd locally, and send the output to my collection system via a netcat listener. This isn't fast, but when you get to proprietary systems it may become the only way to get the data out.
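The collection side of that dd-over-netcat workflow can be sketched in Python if you would rather hash the stream as it arrives instead of using plain netcat. This is a minimal illustration, assuming you have already bound a listening socket and accepted the connection; the port and device names in the usage note are placeholders, not anything mandated by a particular NAS.

```python
import hashlib

def receive_image(conn, out_file, bufsize=65536):
    """Read a raw dd stream from an open socket until the sender
    closes the connection, writing it to out_file and hashing it
    as it arrives. Returns (bytes_received, md5_hexdigest) so the
    image can be verified against a hash taken on the NAS side."""
    md5 = hashlib.md5()
    total = 0
    while True:
        block = conn.recv(bufsize)
        if not block:          # empty read means the sender closed: EOF
            break
        out_file.write(block)
        md5.update(block)
        total += len(block)
    return total, md5.hexdigest()
```

On the NAS shell you would then run something along the lines of `dd if=/dev/sda | nc <collector> <port>` (the device name and port are examples; the actual device path varies by vendor).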

If I can actually load a utility onto the box for execution, F-Response is a great option here.

If you want system logs or data you can also logically take the contents of the running NAS out over a netcat listener this way as well. 

Time to put together my notes and see what's left for this series before moving on. Have questions about handling onsite imaging situations? Ask them in the comments!

Monday, July 22, 2013

Daily Blog #29: 7/21/13 Sunday Funday Winner!

Hello Reader,
         I think I may have been a bit too harsh in the last contest; I'll work to make these either doable in a couple of hours or span them over more days in the future. For those who were hesitant to enter, you should know the winner was the only person who submitted an answer, and you might have been able to answer more completely! Also, this is the first time I've received a request from someone to submit an answer anonymously, a request I have accepted and will change the rules to allow going forward.

Why allow anonymous entries? Many of us are testifying experts and we still want to participate in the community without providing fodder for cross examination. I'm largely past this point; with the amount of written material I've put out it's just a fact of life that opposing counsel is going to quote something I've written in a book or blog to see if they can trip me up. So for those of you worried about your contest entries being used against you, I will handle anonymous entries as follows:

1. You must email me your response before the deadline
2. If you want to be eligible to receive the prize, I have to know where to send it
3. If you do win I need to know how you would like to be credited

Regardless of anonymous or not I will post the winning answer the following day.

So with that said, here was yesterday's challenge:
For a Windows 7 system:
1. Describe the Gmail JSON format and how you would recover it
2. Describe where in the disk you would expect to find Gmail JSON fragments
3. Which services popular in forensic investigations utilize JSON
4. Provide a carve signature for the header and footer of a Gmail JSON
5. Describe what Gmail's JSON would reveal to you

Here is the winning answer:
1. Describe the Gmail JSON format and how you would recover it

Gmail JSON (and a JSON primer)
As I understand it, the format changed as recently as this month: Gmail recently re-built their front end and I would expect that to result in new JSON.

As you know, JavaScript Object Notation works by pairing object names with their values; it can be thought of as tags and lists. Programmatic objects have names and content, and the content can be values, lists or other objects. The object names are referenced by the calling function and the JSON file can be used to populate the value(s). JSON files are ASCII by default and as such have no defined "file signature," but they will all contain data structures delimited by open and close square brackets "[" "]" and, for objects, opening "{" and closing "}" curly brackets.

The opening brace is generally followed by a CRLF, so we could grep for \x7b\x0d\x0a, though the CRLF is optional.

Old Gmail JSON used many documented tags and included server, account name, attachments and message body (to name a few).
Conveniently they all started with (no quotes) "while(1); "
The format for the value pairs was (and may still be…)
\["[a-z][a-z]?",
Of most interest is the ["mb", tag = message body
["gn", = account name

2. Describe where in the disk you would expect to find Gmail JSON fragments 

Allegedly this information is not supposed to be cached to disk, but (version dependent) it can be found in the temporary internet files (or wherever your browser of study puts its temp files, e.g. Mozilla\profiles\\cache).
The actual mail will often be found as mail[x].htm.
The pagefile, unallocated space and hiberfil are also good places to look for the fragments.
Still working on the footer question (which is in fact the piece of research I need to do for my case).
In short, JSON may be used to render the entire email, so not only will you get email content but also folders, quotas, version, display options and more…
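The carve approach sketched in the answer above (the "while(1);" header followed by two-letter tag pairs) can be illustrated in Python. This is a rough sketch of the idea, not IEF's actual method or a complete carver: the scan window after each header is an arbitrary assumption, and it does nothing about the still-unresolved footer.

```python
import re

# Old-style Gmail AJAX responses begin with the anti-hijacking
# prefix "while(1);" followed by nested JSON arrays.
HEADER = re.compile(rb"while\(1\);")
# The documented tags are one or two lowercase letters: ["mb", ["gn", etc.
TAG = re.compile(rb'\["([a-z]{1,2})",')

def carve_fragments(raw, window=4096):
    """Scan a raw byte buffer (pagefile, unallocated space, hiberfil)
    and return (offset, tags) for each Gmail packet header found,
    where tags are the two-letter markers seen in the bytes that
    follow the header."""
    hits = []
    for m in HEADER.finditer(raw):
        start = m.start()
        tags = TAG.findall(raw[start:start + window])
        hits.append((start, [t.decode() for t in tags]))
    return hits
```

Run against a raw dump, the offsets tell you where to carve and the tag lists tell you what each fragment holds; a hit containing "mb" is the one with the message body in it.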

5. Describe what Gmail's JSON would reveal to you

Balance of documented tags (from SANS' John McCash):

while(1);    GMail Data Packet header (beginning of file)
["gn",       Account Name
["st",       Server name
["qu",       Account Quota
["ds",       Folders
["t",        Message List (Thread)
["cs",       Conversation Summary
["mi",       Message Information/Index
["mb",       Message Body (This is where the meat is)
["ma",       Message Attachments (Number & Filenames)
["i",        Invitation
["ft",       Fast Tip (no, I don't know what that means)
["ct",       Categories/Labels/Contacts
["ts",       Thread Summary (Similar to Conversation Summary)
["te",       End of Thread List
["v",        GMail Version
Also not asked for, but very interesting, is the Apple WebKit path (?? away from forensic box and docs), along the lines of users/library/application data. What's cool is that this is a mail rendering engine (and that includes Gmail) that stores pieces of Gmail in a SQLite DB. The DB includes the first couple of lines of an email (as presented on the iOS device) as well as conversations, senders, recipients and dates. One caveat: WebKit builds conversations based on subject line, thus if we have an email subject "Sunday funday" and I send one to you and another totally different email to John Smith, the WebKit SQLite DB will include both names as part of the conversation when in fact no single email went to both parties. But of interest: this is the storage for the javascript and rendering of webmail. This becomes particularly valuable when dealing with iPad 2 and greater or iPhone 4S and greater, as no tools I am aware of are getting email off those devices, but WebKit data can be found on all iOS devices (I will check my MacBook and get back to you on that… I think it's there as well).
Now this was not a complete answer, but it was a good answer! I plan to take the time to fully write out what I would consider a full answer this week, as it seems this very important set of artifacts isn't as well understood as I thought. While Magnet Forensics' IEF tool solves this pain point of getting reviewable webmail results for me, you still need to understand the JSON format to find partial fragments that a carver won't locate and to understand what else is possible/available to recover.

Hope you enjoyed the contest and that you'll participate in this week's Forensic Lunch webcast on Friday and next week's Sunday Funday. I'm reaching out to other companies whose products I like and use in my own investigations to see if they want to step up as Magnet Forensics has and provide prizes to those of you willing to put in the time to share your knowledge through these Sunday Funday contests!

Sunday, July 21, 2013

Daily Blog #28: Sunday Funday! 7/21/13

Hello Reader,
           It's that time again, Sunday Funday time! For those not familiar every Sunday I throw down the forensic gauntlet by asking a tough question. To the winner go the accolades of their peers and prizes hopefully worth the time they put into their answer. This week we have quite the prize from our friends at Magnet Forensics.

The Prize:
The Rules:
  1. You must post your answer before Midnight PST (GMT -7)
  2. The most complete answer wins
  3. You are allowed to edit your answer after posting
  4. If two answers are too similar for one to win, the one with the earlier posting time wins
  5. Be specific and be thoughtful 
The Challenge:
    Since Magnet is providing the prize I wanted to create a challenge that would help others understand the pain point their tools can solve. I originally bought IEF because of its ability to turn JSON artifacts into well parsed output; before that I had to carve them out myself and write code to make sense of it. IEF has made dealing with these things much easier for me and in most cases is the second thing I run after making the forensic image. With that said, here is the challenge.

For a Windows 7 system:
1. Describe the Gmail JSON format and how you would recover it
2. Describe where in the disk you would expect to find Gmail JSON fragments
3. Which services popular in forensic investigations utilize JSON
4. Provide a carve signature for the header and footer of a Gmail JSON
5. Describe what Gmail's JSON would reveal to you

    'Web 2.0', as they call it, has been both good and bad for us as forensic examiners; it's time to see how much you know about its artifacts! Good luck!

Friday, July 19, 2013

Daily Blog #27: Saturday Reading 7/20/13

Hello Reader,
           It's Saturday, so it's time for another fresh batch of forensic reading. Time to bring the dogs in and put the kids outside, grab a good cup of coffee, and let's learn.

1. Harlan has been doing some serious blogging lately; one of his more recent posts goes into different scenarios in data exfiltration, read it here http://windowsir.blogspot.com/2013/07/howto-data-exfiltration.html. If you are faced with the task of determining exfiltration this is a great resource. If your suspected exfil route isn't listed, it's time to step back, think about what artifacts are relevant, and make a plan for yourself similar to the examples Harlan posted.

2. Over at the SANS forensics blog they have the results of their yearly DFIR survey, read it here http://computer-forensics.sans.org/blog/2013/07/19/sans-survey-of-digital-forensics-and-incident-response-dfir. If you want to see what other examiners are dealing with or look to bring some issues up to your management, this survey could be a great tool.

3. On the cyb3rcrim3 blog they cover legal cases that involve computer forensic issues. Typically these cases involve the appeal of a ruling to exclude evidence, a tool being challenged, etc., but this posting was something altogether different. This post covers a legal dispute between computer forensics company Vestige and a client over unpaid invoices and the actions that occurred during the litigation Vestige was retained in; you can read it here http://cyb3rcrim3.blogspot.com/2013/07/the-computer-forensic-company-evidence.html.

Needless to say it's pretty much every examiner's worst nightmare to have their faults linked to and published around the internet, but as it's already out there I don't feel I'm adding too much to their pain. Please, if you read any of these links this weekend, read this one and learn from the mistakes that occurred here. I don't know all the facts of this case or who is right or wrong, but the biggest takeaway is to make sure you understand the work of those who work with you and clearly communicate your findings, both good and bad, to counsel if you want to be the best expert you can be for them.

4. All of the SANS DFIR Summit 360 videos have been posted here, http://www.youtube.com/playlist?list=PLfouvuAjspToxKMa8DeLTEh5BppA_p_pG. The 360 talks are fun to watch as each speaker only has 6 minutes to talk meaning they get to the heart of things very quickly. I linked to Hal Pomeranz's talk last time but this playlist has all the videos now for you to watch.

5. If you are on the IR side of the fence and need to get management to understand the importance of your work and your need for resources, read this http://blogs.gartner.com/anton-chuvakin/2013/07/15/on-importance-of-incident-response/. Just having the Gartner name on it should get their interest; the fact that the information is good and interesting also helps.

That wraps up this week's reading list. It's been a good week in the lab and I hope you enjoyed this week's Go Bag series and the interview with SA Eric Zimmerman. We previewed some research we've done on USN journal artifacts of Outlook attachment access during our first 'Forensic Lunch' Google Hangout; I hope to get that data documented in a white paper and posted next week. If you are interested in an informal conversation with the people in my lab and others in the DFIR world, we are attempting another 'Forensic Lunch' Friday 7/26/13, details here https://plus.google.com/u/0/events/ce2rsd6sumer9laimu4s0hdoddo. The way Google+ currently handles this, we will post/tweet/broadcast/share the link to watch the broadcast live once it begins, and then the recording will be available on our YouTube account.

Tomorrow is Sunday Funday and the prize is being provided by Magnet Forensics who have graciously offered the following to the winner:

  • Three month license to IEF
  • A Magnet Forensics baseball cap
  • A gift card to Amazon
I reached out to Jad and asked him if he would be interested, and he and Magnet were very gracious in their response. I want to keep the prizes for Sunday Funday varied and interesting to keep driving great and thorough answers we can all benefit from. Good luck to the participants; now I have to make the question worth the prize!

Thursday, July 18, 2013

Daily Blog #26: Interview with SA Eric Zimmerman

Hi Reader,
          When I finished the milestone series I asked that those of you who have hit milestone 14 in your career email me. Eric Zimmerman was the first brave soul to do so and was willing to be interviewed for the blog about his career, his forensic interests and his views. I think he brought back some great answers and I hope you enjoy his insights. If you have reached milestone 14 and have been hesitating, email me at dcowen@g-cpartners.com for an interview, and read this to see what I'm interested in. I don't want to put your investigations, case or research at risk; I want to help others see how you got started and how you got to milestone 14 so they can do the same!

    If you are reading this before noon Central on Friday 7/19/13, then please come join us for our first Forensic Lunch, where we will talk about the Tri-Force, our USN research, ShadowKit and your questions:
https://plus.google.com/u/0/events/cedl2na1nqhvomfful00sad9teo

With that said, here is the interview with Special Agent Eric Zimmerman.
EZ: First of all I would like to thank you for the interview opportunity. I like the way you defined the milestones and it serves as a great barometer for people to use in their careers. My progression through the milestones and optional achievements wasn't linear, but I suspect that's the case with most people.
DC:  How did you get started in computer forensics?
EZ: I got started in forensics years ago as a byproduct of being a computer geek, but I didn't get serious about it until I became an agent and started needing digital forensics in my day to day work. I've been using hex editors for years for a variety of things and started using WinHex about 4 years ago for some case work. I got my EnCE about two years ago using EnCase 6. Soon after I fully transitioned to X-Ways Forensics. I have been fortunate enough to work violations where essentially everything involves a computer, so there was ample opportunity to learn and gain experience.
DC: What event in your career propelled you forward the most?
EZ: I would say the biggest benefit to my career was being the case agent and primary forensic examiner in a very technical case involving p2p networks and encryption. In addition to the trial itself, I went through a Daubert hearing and was qualified as an expert in federal court. I wrote hundreds of pages of reports for a wide variety of audiences. Being able to articulate information to people in a way they can relate to is a critical skill. It is not enough to be an expert in digital forensics; you have to be able to convey your findings in a meaningful way to the consumer of that information. Events like a big trial are where you get to finally use all the skills and knowledge you've built up and practiced over the years.
Another major event was winning the 2011 NCMEC award for my work in combating the online sexual exploitation of children. In 2011, my software led to the rescue of at least 45 children, the execution of 330 search warrants, and 222 arrests. To date my software is in use by at least 4000 people in 52 countries.
DC:  Do you remember what lighted your passion for computer forensics, what pushed you forward to Milestone 14?
EZ: My passion began with wanting to understand the underlying technology behind computers. Once you start peeling back the layers you begin to get an understanding of how deep the rabbit hole goes. I tend to get bored easily so having such a wide variety of things to learn keeps it interesting. There is always something new to learn and even more to discover for the first time.
As for what pushed me toward milestone 14, necessity was the biggest thing. After you see the fruits of solving a new problem or reversing a previously unknown artifact you start to see the potential in looking into the unknown. I do not have the ability to do pure research as much as I would like to, so most of my milestone 14 stuff revolves around either my own or my colleagues' cases. As new problems come up I work to solve them.
Beyond necessity was wanting to figure out something that was previously unknown. It was a challenge to solve a "puzzle" from scratch with nothing more than some network captures, binary files, a hex editor and some programming skill. A lot of my work involved reverse engineering proprietary, closed source protocols (sorry, I can't be more specific than that) and when I started looking into it very little was understood about the protocols and other artifacts. I wrote some cool custom software to assist with things as needed.
Passion for the work is really critical, almost more so than one's technical ability, because without it you may not have the stamina to follow through to the end. Frustration is inevitable but you just have to keep your head down and move the ball forward. Working with a team of people is also very helpful.
DC: What is your favorite forensic artifact?
EZ: In general, the registry, but as to a specific artifact it would be ShellBags by far. It is amazing how much detail is maintained in ShellBags. I have used them to show what was inside encrypted TrueCrypt containers in order to prove intent as well as corroborate other artifacts. In the case of encryption, they basically serve as a means to see the file names, time stamps and file sizes of things inside a container. If you can then tie that information to more concrete artifacts involving file hashes (and therefore the file size) you can peer inside the encryption and say with certainty what is in there.
DC: What are you researching now?
EZ: When I get a break from my cases, my primary focus is continuing to expand the abilities of my live response software, osTriage. Version 1 is for law enforcement/government only, but with version 2 I want it to be available to a wider audience. My approach for version 2 is to use plugins to provide functionality vs. a monolithic executable as I did with version 1.
By making the programming interfaces available to anyone, people can write plugins that are meaningful to them in case their particular issue isn't included out of the box. Plugin authors can choose to share their work or keep it in-house. I've written the main program in such a way that it works with the interfaces to automatically generate reports, bookmark items of interest, copy files from computers, etc. This lets plugin authors focus on new features and not basic plumbing.
In version 2, I have spent a lot of time focusing on performance and have seen some fantastic gains in speed. For example, my new code can iterate every file and directory on a 256GB hard drive (with over 276,000 files and 59,000 directories) in about 22 seconds, whereas version 1 took over 8 minutes to do the same search. That 22 seconds includes finding pictures, hashing them, generating thumbnails, exploring archive files, parsing .lnk files, and pulling dozens of pieces of live response data. I demonstrated this at the 2013 Boston Cyber Conference earlier this year in my talk on the need for improved triage techniques.
DC: What inspired you to write a book?
EZ: The inspiration for the book was to be able to unpack the X-Ways manual into a format that more people would be able to relate to based on their existing knowledge in forensics. Our goal wasn't to teach forensics, but rather to explain X-Ways.
The X-Ways manual is a fine piece of technical writing, but few people have the patience or time to penetrate its depths. I really think X-Ways is at the top of the pyramid when it comes to forensic suites, but in some circles it has the reputation of being hard to use. Where people can run into trouble is that there are a lot of ways to accomplish a goal in X-Ways rather than one linear path as found in other tools.
X-Ways puts incredible power in the hands of the forensic examiner and lets them wield that power in a way that makes sense to them and the case at hand. Once people try X-Ways and get comfortable with it they rarely go back to other tools. I found the best way to jump in was to work a case in X-Ways Forensics solo or in parallel with an existing tool.
With the book in hand you can begin the transition from other tools to X-Ways in a straightforward manner. The book is written in such a way to walk people through its use from initial installation, hard drive imaging, reporting and everything in between.
DC: Where can we buy the book?
EZ: The book is titled "X-Ways Forensics Practitioner's Guide" and is currently available for pre-order at Amazon (http://goo.gl/vWmqa) as well as Barnes and Noble (http://goo.gl/DIJO6). We recently sent our final proofs back to the publisher well ahead of schedule and hope to see the book shipping in August. We have more information as well as software programs I wrote for the book at http://xwaysforensics.wordpress.com/.
DC: What is next for your career? What is beyond Milestone 14 for you?
EZ: I would like to continue to expand the capabilities of first responders and raise the bar when it comes to triage as it relates to what we can cull from computers. I have been focusing on trying to define what the needs of the majority of people are when it comes to digital evidence (at least as it relates to law enforcement). My ultimate goal is to be able to deliver 90% of the relevant information for a case in 10 minutes or so.
For me, moving beyond milestone 14 involves thinking at a more strategic level vs. the day to day existence in the trenches. This involves defining and polishing best practices for colleagues and peers and automating common tasks to act as a force multiplier for understaffed or smaller departments.
DC: What are your favorite tools?
EZ: My favorite tools include X-Ways Forensics (of course) and WinHex, CommView, Wireshark, EditPad Pro, RegRipper, F-Response, Directory Opus (an Explorer replacement), Visual Studio, the Sysinternals tools, Volatility, and who can forget the Tri-Force! The amount of high quality software out there amazes me. There are many gifted developers and digital forensics people out there who put a ton of time into great tools. Some even choose to give their work away. Thanks to all the devs out there! Much of what we can do in digital forensics would not be possible without your contributions.
DC: What do you believe is the greatest challenge facing forensic examiners?
EZ: The ability to separate the wheat from the chaff when it comes to digital evidence. Related to this is a continued reliance on outdated workflows when it comes to processing data. I won't mention any names, but there are a lot of solutions out there that require a massive amount of up front processing before an exam can start. Combine this with a lack of checkpointing and you have a recipe for pain when things crash.
Storage capacities continue to increase exponentially while our ability to examine that data is only increasing linearly. It doesn't take long to realize we have to get smarter in how we look at data or the lead times for a full forensic review will continue to get longer and longer. In my estimation, the answer (or at least a partial answer) to this problem is better triage techniques. If we can identify the computers and digital devices that are relevant to us we can focus our efforts on those devices vs. the "examine everything" approach most often employed now. We have to find the balance between thoroughness and timeliness in our examinations. It's a tough problem for sure, but one I think the community can solve.
Thanks Eric for the interview, I hope everyone gets something out of it. Tomorrow is Saturday reading and I have some interesting links to share. The big event though is this coming Sunday Funday where we have a prize provided by Magnet Forensics that I think you will want to win!