Happy Holidays - Research update


Happy Holidays, Reader!
                                          As we get ready to actually take a couple of days off for Christmas and gear up for the next year, I wanted to give an update on our current file system journal research.

NTFS - Our NTFS Journal Parser has hit 1.0. We are still writing up a blog post to really encapsulate what we want to say, but you can get a copy now by emailing me at dcowen@g-cpartners.com. If you sent a request earlier and I missed it, I apologize; just send it again.

EXT3/4 - We have a working EXT3 journal parser now, and we can reassociate file names with deleted inodes. However, it's not as straightforward as the NTFS journal (as strange as that may sound), because the EXT3/4 journal is a redo-only journal: it does not store the ability to roll back a change, just the ability to redo one. We are able to recover file names because when a change is recorded, it's not just the changed inode that's written to the redo log, it's the entire block the inode is stored in! So we have a lot of ancillary data to parse through, and then we search all of those entries for directory entries pointing to the known deleted inode.
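
To make that search concrete, here is a minimal illustration (not our actual parser) of the idea: treat a raw block copied into the journal as a candidate directory block, walk the classic ext directory entry layout (inode, rec_len, name_len, file_type, name), and report any entry that references a known deleted inode. The 4 KB block size, the journal dump file name, and the inode number are assumptions for illustration; blocks that aren't directory data simply fail the sanity checks and get skipped.

    import struct

    def find_dirents_for_inode(block, target_inode):
        """Walk a candidate ext2/3/4 directory block and return (offset, name)
        pairs for entries that point at target_inode. Assumed layout per entry:
        inode (u32 LE), rec_len (u16 LE), name_len (u8), file_type (u8), name."""
        hits = []
        offset = 0
        while offset + 8 <= len(block):
            inode, rec_len, name_len, _ftype = struct.unpack_from("<IHBB", block, offset)
            if rec_len < 8 or offset + rec_len > len(block):
                break  # not a sane entry chain, so this block probably isn't directory data
            if inode == target_inode and 0 < name_len <= rec_len - 8:
                name = block[offset + 8:offset + 8 + name_len].decode("latin-1", "replace")
                hits.append((offset, name))
            offset += rec_len
        return hits

    # Scan every block of an extracted journal for names pointing at a deleted inode.
    BLOCK_SIZE = 4096            # assumed block size
    DELETED_INODE = 12345        # hypothetical inode number recovered elsewhere
    with open("journal.bin", "rb") as fh:   # hypothetical journal dump
        blkno = 0
        while True:
            block = fh.read(BLOCK_SIZE)
            if len(block) < BLOCK_SIZE:
                break
            for off, name in find_dirents_for_inode(block, DELETED_INODE):
                print(f"block {blkno}, offset {off}: '{name}' -> inode {DELETED_INODE}")
            blkno += 1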

We've actually had to switch to a database backend for this to work, as there is a lot of data to work through to get these changes. If you want in on the beta, please email me at dcowen@g-cpartners.com and we'll get you in the loop.

The only real change we've found between the ext3 and ext4 journals is an inserted CRC value, but we'll look at it further as we move forward to make sure of that. I'm excited about ext4 as it's the default file system for most Android phones.

HFS+ - This is next on our list. We got a shiny new Mac mini for testing, and we are looking forward to it.

That's it. I hope you understand it's been a busy year, so I haven't been able to write up all the other cool stuff we want to share with you. In book news, Computer Forensics: A Beginner's Guide is a third of the way through copy edit and should be out in the spring. I'm feeling pretty good about it, and it's already listed on the Amazon store (since it's soooo delayed, which is 100% my fault). I actually managed to snag www.learndfir.com for the book, and I'll have tools, documents, images and this blog mirrored there. We are also making tutorial videos for the cases in the book that will go up on our YouTube channel, LearnForensics.

I hope to see you all at conferences this year. If you are looking for a speaker on advanced file system forensics, please let me know!

Talk to you all in 2013!

PFIC 2012 Slides & Bsides DFW

Hello Reader,
                      With another presentation done, here are my slides from PFIC, where I again presented on Anti-Anti Forensics. This is similar to the presentation I gave at Bsides DFW, but with more detail on the actual structure of $logfile records.

Slides can be found here: Slides

We are getting close to the official release of ANJP (Advanced NTFS Journal Parser) as we write up our official blog post for the SANS blog. Until then, if you would like a copy of the free version 1 tool, please email me at dcowen@g-cpartners.com so I can get you going. Our goal is to get the community access to our research as quickly as possible!

I'm looking for conferences at which to spread the good word on journaled file system forensics next year, so if you are looking for advanced content, please let me know!


Updates and DFIR Conferences


Hello Readers,
                        I know I've been silent; our workload and conferences have kept me quite busy. Updates for you:

Book News

Computer Forensics: A Beginner's Guide is out to copy edit, or will be soon. We're looking at an early Q1 2013 release to bookstores. I've been working on this book for way too long, but having a child while writing a book will do that.

Hacking Exposed: Computer Forensics, Third Edition - we just signed the contract for this. Look for a new edition in 2014 with a lot of new content and new sections. We really want to not only keep this series relevant but also expand its scope from the US legal system to the world.

Conference News

I spoke at Derbycon this past weekend, but not on forensics. I spoke on running a successful red team, which draws on both my professional past and the work I do at the National Collegiate Cyber Defense Competition. People seemed to enjoy the content, and here are my slides!

My Derbycon slides with notes!

I'll be speaking next at Bsides DFW on November 3, 2012, on Anti-Anti Forensics. I won't be staying very long afterward, as I have to catch a plane to Utah, but I do plan to go to the movie screening the night before, so hopefully I'll see you there!

My last planned presentation of the year is at Paraben's Forensic Innovations Conference, so if you're going, I hope to see you there. I'll be doing my Anti-Anti Forensics talk again, but this time I'll be doing a live demonstration of the updated tool we've been showing on the blog here, which leads me to my next update.

NTFS $Logfile Parser

After a good response from our beta testers, we are feeling confident that we've eliminated the bugs in what we are getting ready to release as version 1.0. In addition, we got some great fixes after testing our parser on the NIST CFReDS project's deleted file recovery test images. If you are looking to validate a new tool or test a current one, the NIST CFReDS images are great and well documented as a control.

We've decided to call the parser ANJP, for Advanced NTFS Journal Parser, to have a clear acronym distinct from anything else. We plan to expand our research into Ext3 and HFS+ after this and will release AEJP and AHJP parsers at a later date, covering what we believe is a vital piece of information missing from your examinations. There is a lot of research around Ext3/HFS+ on recovering deleted files from the journal, but we can't find much focus on mapping out file creations, timestamps changing, or files being renamed, all things of possibly unique interest to the DFIR community. Our plan is to expand our research so you can take advantage of all the data available to you.

So what will be in version 1.0?
  • Identification of deleted files with full metadata; in our testing on the NIST CFReDS images we recovered all deleted file records with full metadata.
  • Identification of files being created with full metadata
  • Identification of files being renamed with metadata before and after the rename.
  • Log2timeline output

But Dave, what about all the other cool things you've mentioned? 

There is much more we can determine from the NTFS $logfile, but we've realized that understanding it isn't as simple as just reading the CSV the tool outputs. We don't want to release a tool that becomes a source of false positives and bad testimony, so we are going to follow the viaForensics model (thanks for thinking this up, guys!). We will be offering a one-day training class that explains NTFS, the MFT and, most importantly, the $logfile. That class will explain how to parse the log, the event records, and the redo/undo operation codes, and how to stitch those together to find the information we provide in version 1.0.
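
To give a tiny taste of what that parsing involves, here is a minimal first-step sketch (my illustration, not ANJP itself): walk a $logfile extracted from an image page by page and tally the restart ('RSTR') and log record ('RCRD') page signatures before doing any record-level parsing. The 4,096-byte page size and the export file name are assumptions; the class material goes much further into the record headers and redo/undo operation codes.

    import collections

    PAGE_SIZE = 4096  # typical $logfile page size; treat as an assumption

    def summarize_logfile_pages(path):
        """Tally $logfile pages by their 4-byte signature: 'RSTR' restart pages
        and 'RCRD' log record pages; anything else is counted as other/slack."""
        counts = collections.Counter()
        with open(path, "rb") as fh:
            while True:
                page = fh.read(PAGE_SIZE)
                if len(page) < PAGE_SIZE:
                    break
                sig = page[:4]
                if sig in (b"RSTR", b"RCRD"):
                    counts[sig.decode()] += 1
                else:
                    counts["other/slack"] += 1
        return counts

    # "LogFile.bin" is a hypothetical name for a $logfile exported from an image.
    for sig, count in summarize_logfile_pages("LogFile.bin").items():
        print(f"{sig}: {count} pages")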

Extending beyond that, we will explain how to take the Update Sequence Arrays, timestamp changes, and file/directory IDs and tie them back into the MFT; how to recover resident files; how to identify the approximate number of external drive changes; how to determine how many systems an external drive was plugged into; and how to draw good, reliable conclusions from all of this for use in your casework. At the end of the class you'll get a copy of the super-duper version 1.0 that gives you way more information, which you will then be qualified to draw opinions from. There won't be a dongle or a license or any other such thing. If you decide to give a copy to someone, we just hope they don't testify to its results without taking our class.

In the future, as we continue our research, we may be able to reduce the possibility of error in these additional evidence sources, and as we do, we will update the publicly released tool to include them. Until then, we think everyone is best served by this model: it gets the most reliable evidence into everyone's hands ASAP while giving those who want to go deeper a chance to do so.

I hope to have version 1.0 released in the next week or two and I'll be posting it here when I do.

If you are running a conference and want us to do the ANJP training at your event, let us know; we want to get as many people using this as possible! When you see everything we can determine from the $logfile, we think you'll agree.

What conferences do you get the most from?

I am planning my 2013 conference schedule. I've asked Twitter, and now I want to ask you, the reader: what conferences do you get the most from? I'm planning on CEIC, PFIC and possibly Black Hat, but otherwise I want to hear your suggestions! Leave a comment and let's talk.

Time to find a fancy hat, I'm speaking at Derbycon


Hello Reader,
                      I've been trying to find more conferences to speak at lately (if you are running a conference, let me know) to let more people know about fun forensic artifacts. I've been selected to speak at Derbycon 2.0, but not on a forensic topic this time (though I did submit one). Instead, the fine folks at Derbycon liked my talk titled 'How to run a successful red team'. If you've been following the blog, you know that once a year I lead the national red team for the National Collegiate Cyber Defense Competition, and I have been doing so for 5 years now. We've learned a lot about how to build and succeed as a competition red team, and I thought it would be a good idea to share what we've learned.

So if you are going to be at Derbycon and want to either:
a) have a beer with me and talk forensics, or
b) find out how to build a lethal red team full of 'I love it when a plan comes together' moments,

Then I'll see you there! Let me know you read the blog if you don't mind; it's always nice to know someone is on the other side of the screen.



Updates and status - NTFS $logfile


Hello Reader!
                       It's been a while since we've talked. Things here at G-C have been pretty busy; the legal sector, at least, appears to be in a full recovery (knock on wood). While I haven't had time to write up a full blog post on some of the new things we've found over the summer, I did want to take the time to show you how our NTFS $logfile parser is coming along. For those of you who attended my CEIC session on 'anti-anti forensics', or who downloaded the labs I posted afterwards, you know that we previously had a rough parser and tests to recover the names of wiped files.

I'm happy to say we've come a long way since then. The initial proof-of-concept parser was built to validate the artifact and divide up the pieces into something we could understand further. We now have a parser, still in development, that goes even further, creating CSVs of human-readable data extracted from those $logfile segments.

What does that mean? Well it means:

1. We can recover the names of deleted files and their metadata, even if they have been purged out of the MFT. This includes the metadata associated with the file (directory, creation, modification and access times).

2. We can recover the complete rename operation, showing cleanly which file became which file, including the directory, creation, modification and access times before and after the operation. This essentially allows you to undo what a wiper has done (except for recovering the contents of the file itself).

3. We can determine if files were written to other drives, and an approximation of how many. (This is not in the current version of the parser and will require its own blog post.)

4. We can recover the original metadata of a file when it was created.

5. We should be able to recover timestamps that have been altered.

It's all written in Perl (woo!) and we are going to release the source and documentation as soon as it's ready (tm). In the meantime, check out this awesome screenshot showing the parser recovering the metadata from 22 files that were wiped with Eraser:

[Screenshot: parser output showing the metadata recovered from 22 files wiped with Eraser]

If you need this tool for a case immediately, drop me a line and I'll see what we can do to help you out!



CEIC 2012 - Anti Anti Forensics Materials


Hello Possible CEIC Attendee,
   
           I always put my materials up after I give a presentation. This time, since I also made a couple of labs to show how to perform this type of investigation into identifying, detecting and recovering from anti-forensic tools, I am including those as well. There are 3 labs making up 10 GB of data compressed. The images are E01 and the cases are saved in EnCase v7.04, since this was a Guidance Software conference. There is also a lab manual for each lab in the root directory to walk you through what you are expected to find.

I'm putting this up on a Dropbox account, as they are the only file hosting service I could find without a max file size limit (that you couldn't pay to increase).

All three labs here:
https://dl.dropbox.com/s/no8w524ecshulz4/dcowen_ceic_labs.zip?dl=1

The PowerPoint slides are here:
https://dl.dropbox.com/s/c0u980a53ipaq7h/CEIC-2012_Anti-Anti_Forensics.pptx?dl=1
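
Once you've pulled the archive down and extracted the images, it's worth sanity-checking that the E01s survived the transfer before you load them into a case. Here is a minimal sketch using pyewf (the Python bindings for libewf) to hash the media data in an image; the 'Lab1.E01' file name is a placeholder for whichever lab image you're checking, and you'd compare the result against the hash stored in the evidence file or the lab manual.

    import hashlib
    import pyewf  # Python bindings for libewf

    # Expand the segment files (E01, E02, ...) and open them as one image.
    filenames = pyewf.glob("Lab1.E01")   # placeholder name
    handle = pyewf.handle()
    handle.open(filenames)

    md5 = hashlib.md5()
    remaining = handle.get_media_size()
    while remaining > 0:
        data = handle.read(min(32 * 1024 * 1024, remaining))
        if not data:
            break
        md5.update(data)
        remaining -= len(data)
    handle.close()

    print("MD5 of media data:", md5.hexdigest())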

As I've said in a prior post, I'm more of a talker than a PowerPoint slide maker. So if you have questions based on the presentation/labs, please leave them in the comments below and I'll do my best to answer them.

Also, Lab 3 contains a preview of our $logfile research, which we will hopefully be presenting at Black Hat (please pick me, Black Hat review board).

If this type of lab download/review thing is popular with you readers, I can put up more and we can do a forensic-challenge style of blogging for a bit!


New Project, Tool testing

One of the advantages of running a computer forensic company is that I get to buy lots of tools to use. When I was working for other companies I would have to wait for budget cycles and submit justification for tool purchases, but for the last 7 years I've been able to buy them as I needed them. In those 7 years we've accumulated a lot of tools that we use for different specializations, and a body of knowledge related to them that I feel could be put to better use by sharing it with all of you.


With that in mind, I think it would be interesting to see how all these tools compare when working on the same forensic image. I'm going to start making some test images to see how data from the same disk is interpreted across different image formats. I am going to start with the identification, not recovery, of deleted files and go from there.

My initial tool list to test includes:

EnCase v. 7.04

FTK v. 4.01

SMART 3-26-12

X-Ways Forensics v. 16.5

SIFT v. 2.13

Is there any other tool you want us to test? Let me know in the comments below.

I'll post my results as we finish each round of tests, and as always, a large case could easily distract me!


CEIC 2012 - Anti Anti Forensics


Hello possible CEIC attendee reader,
                                                            My class 'Anti-Anti Forensics' will be Tuesday at 2:00pm and is apparently full, from what I saw on the registration page. For those of you who wanted to attend but didn't get to sign up, they normally allow people to queue up at the door to take vacant spots/empty space.

So why would you want to queue up? I'm happy you asked! In this class I plan to preview some research we've been doing on the NTFS $logfile. While I'm not ready to give a presentation dedicated to that (I've submitted to Black Hat for exactly that, so please pick me, Black Hat reviewers), I will be showing what I consider to be amazing new tricks to defeat anti-forensic tools using the NTFS $logfile.

As with prior presentations, I will make my slides available on the blog afterwards for anyone's review, but I don't feel that they ever really capture everything that I talk about. I'm much more of a talker than a slide writer, so my slides typically just cover major topics and points rather than the details that I hope you want to hear.

See you there!


New Tools for Shadow Copies


Hello Readers,
                         I think a lot of us are still using the old tool ShadowExplorer to explore shadow copies from mounted forensic images. While we know how to mount volume shadow copies to mount points within our system, I personally always liked the interface ShadowExplorer provided for jumping between restore points and browsing their contents. Since ShadowExplorer hasn't been updated in over a year, I was happy to see someone else pick up the gauntlet of a GUI-based VSS explorer.
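
For anyone who hasn't used the mount-point approach, here is a minimal sketch of the usual manual method on a Windows analysis box (my illustration, not part of any of these tools): list the shadow copies with vssadmin, then expose one as a browsable folder with a directory symlink. It needs an elevated prompt, and the shadow copy index and link path below are placeholders you'd take from your own vssadmin output.

    import subprocess

    # List the shadow copies Windows currently knows about (elevated prompt required).
    listing = subprocess.run(["vssadmin", "list", "shadows"],
                             capture_output=True, text=True, check=True)
    print(listing.stdout)  # note the "Shadow Copy Volume" device paths in this output

    # Expose one shadow copy as a folder via a directory symlink; mklink is a cmd
    # builtin, and the trailing backslash on the device path is required.
    device = r"\\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy1" + "\\"   # placeholder index
    subprocess.run(["cmd", "/c", "mklink", "/d", r"C:\vss1", device], check=True)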

The entrant? Shadowkit, which is currently available for free at www.shadowkit.com. I've tested the tool on my own systems; it's quick and has some nice features that I've been wanting from ShadowExplorer, such as:

1. File list exporting.

2. Filtering shadow copies of your local system versus shadow copies from mounted images.

3. Drop-downs for machine names as well as individual volumes backed up, preventing confusion with imported shadow copies.

4. MAC times and attributes displayed within the interface for each file.

5. The ability to open files within the tool without having to export them, it places them in a temp directory for viewing.

6. Extension filtering.

In talking to the author, I've learned he has more features planned with an eye toward the needs of the DFIR community, so I'm expecting good things.


NCCDC 2012 Wrap up post


Hello Friends,
                        This post is here to give links to the two presentations I gave at NCCDC for those interested, and contact information for red team members for those who want to know more about what happened to them.

Red Team Debrief PPT: https://skydrive.live.com/redir.aspx?cid=c252230f74663370&resid=C252230F74663370!848&parid=C252230F74663370!126&authkey=!AEdgB9WG-fF3pd0

Responding to the incident: https://skydrive.live.com/redir.aspx?cid=c252230f74663370&resid=C252230F74663370!849&parid=C252230F74663370!126&authkey=!ADnYRFJV9VCMK08

Red Teamers:
Mubix: @mubix on Twitter
Chris Nickerson: @indi303 on Twitter
Ryan Jones: @lizborden on Twitter

I'll get more email addresses and Twitter names and update this post as I hear back from them.

If you would like to watch the red team debrief, albeit with some bad audio, you can watch it here:
http://t.co/4umtDJnB


See you at CEIC


Hello Readers,
                        Looks like I'll be speaking at CEIC on Tuesday afternoon at 2:00pm on "Anti-Anti Forensics". If you've enjoyed the blog/books/tweets, I hope you'll come, as we get into how I go about detecting and sometimes overcoming wiping/system cleaners in a hands-on lab.


Making Forensics Faster


Hola Reader,
                     There are always bigger hard drives, at least for the near future, and more devices available to individuals every day. While the everyday workflow for an individual is, hopefully, drastically improved by this additional storage and processing power, it only further compounds the amount of evidence a computer forensic examiner has to go through.

It's not uncommon now for a single person to have over a terabyte of data spread between their laptop, desktop, mobile phone, tablet, hosted email, file sharing and backup services. The question then becomes: how do we keep up with these larger volumes of data without letting each individual take a month to process?

There are several options out there, depending on your choice of tools (EnCase, SIFT, FTK, ProDiscover, SMART, etc.), to really speed up the processing of the evidence so you can get full-text indexes built and artifacts parsed. In most cases the answer comes down to:


1. Faster CPUs to process the evidence
2. More RAM to process the evidence
3. Distributing the workload across multiple systems (if your software permits this)
4. Faster storage to hold the evidence

In my lab we've approached this from all of these points.

1. Faster CPUs to process the evidence
We have multi-core, multi-processor systems. What we've found is that typically, unless we are password cracking, the disk I/O can't keep up with the resources available, meaning our CPUs are never maxed out. So the CPUs are not the bottleneck for getting more speed.

2. More RAM to process the evidence
We have machines with up to 49 GB of RAM. Under large loads we sometimes see usage into the 30 GB range, but again, because of the disk I/O, more than enough RAM is available. So the RAM is not the bottleneck for getting more speed.

3. Distributing the workload across multiple systems (if your software permits this)
We've tried using FTK's feature that distributes processing across multiple systems. The problem is that the evidence is still being stored on and accessed from a single system. Again the issue comes up that the disk I/O can't keep up with the number of requests now being made over the network by the additional systems asking for data to process. So distributed processing alone did not speed up processing.

4. Faster storage to hold the evidence
Here we find the greatest benefit. Initially we bought large direct-attached storage RAIDs to store and process our evidence. While these systems get us I/O speeds around 100-200 MB/s depending on the type of disks, that still wasn't enough to max out the CPUs and RAM, so we started looking at other options. When you start looking at larger/faster storage systems, things can get very expensive, very quickly.

If we had a very large budget we could have gone for a RAMSAN, http://www.ramsan.com/, and gotten a couple of terabytes of storage at 4 GB/s read and write speeds. That kind of I/O would clearly max out most systems attached to it and easily keep up with the demands of a distributed processing setup.

Unfortunately we don't have that large of a budget for storage, especially not with the amount of data that we are working with. A RAMSAN 810, according to this article, starts at $45,000 as of 8/23/11. So we looked at the midline prosumer range of solid state storage, as these drives are faster than comparable 15k SAS drives but priced around the same in most cases for larger sizes. Most prosumer SSDs can reach 300-400 MB/s and are connected via SATA, meaning you could easily turn them into a small RAID and load your images there. However, again, the cost of doing this can scale up quickly depending on the amount of storage you need to hold the evidence.

Instead, we have been implementing a middle-of-the-road approach. Rather than loading the entire evidence set onto faster storage, we purchased a PCI-E based SSD card from Amazon with read speeds of 1.5 GB/s and write speeds of 1.2 GB/s. It has a smaller amount of storage (we opted for 240 GB) but allows the heaviest part of the processing (dealing with all the data extracted from the forensic images) to be done on the fastest storage. To accomplish this in FTK, we pointed the ADTemp directory to the PCI-E SSD card and our processing speeds improved dramatically. We were able to complete a full-text index of a 149 GB full forensic image in 1 hour. This test wasn't even optimal, as the forensic image wasn't copied onto faster RAID storage but was instead just attached via USB3.
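
If you want to check where your own bottleneck is before spending money, a quick-and-dirty sequential read test of each storage tier is enough to rank them. Here is a minimal sketch (my own, not a vendor or FTK utility); the file paths are placeholders, and you should read a file large enough that the OS cache doesn't skew the number.

    import time

    def sequential_read_mb_s(path, total_bytes=1 << 30, chunk=8 << 20):
        """Rough sequential read throughput, in MB/s, for the device holding 'path'."""
        read = 0
        start = time.perf_counter()
        with open(path, "rb", buffering=0) as fh:
            while read < total_bytes:
                data = fh.read(chunk)
                if not data:
                    break
                read += len(data)
        elapsed = time.perf_counter() - start
        return (read / (1024 * 1024)) / elapsed

    # Placeholder paths: one large file on the DAS RAID, one on the PCI-E SSD.
    for label, path in [("RAID", r"D:\evidence\test.bin"), ("PCI-E SSD", r"F:\adtemp\test.bin")]:
        print(f"{label}: {sequential_read_mb_s(path):.0f} MB/s")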

We've since ordered PCI-E SSD cards for all of our evidence processing servers and will post more benchmarks as we move forward in our testing and processing. I would also like to expand our SSD storage to include the evidence storage and database media, but I'm doing this one step at a time to find out which parts are getting me the largest increases in speed for the dollar.

Paul Henry (@phenrycissp on Twitter) has already taken this a step further by putting all his pieces onto SATA SSD storage and integrating the PCI-E SSD card for his temporary directory. His new issue is finding test images large enough to show meaningful results, since it's processing so quickly. So if you have a test image 100 GB or larger, please let him know!


The best feature you never knew existed - Export LNK Contents & Export LNK metadata in FTK

Bonjour Reader!
I know I have large gaps between my blog posts; it's not for a lack of ideas but for a lack of time. With the economic recovery in full swing in the legal world, we are very busy.

However, I still need to finish my new book and get back to blogging more regularly, so please feel free to harass me on Twitter (@hecfblog) if I don't write a post once a week.

In this short post I am going to point out a feature in FTK that has existed since at least version 3.3, and that I never knew existed. The feature is called 'Export LNK Contents' in FTK 3.3 and 'Export LNK Metadata' in FTK 4.0, and it may be the one feature I've wished FTK had for the last 8 years of using it. When I've described this feature and what it does to fellow examiners, each of them has said the same two things:

1. "Whoa! This is going to save me so much time!"
2. "Why didn't they tell everyone this was here?!"

So in relation to point number 2, let me do that for them.

HEY EVERYONE, FTK will export all of the metadata of a LNK file and the contents of the parsed LNKs to a file (from at least 3.2-4.0)!

It can do this with one, some, or all LNK files: just highlight them, right-click a LNK, and the context menu will show the option! Suddenly all the manual copying and pasting into a spreadsheet, or running other tools (like TZWorks lslnk), is no longer necessary. This is especially great when it comes to carved LNK files that may not actually be valid and that break many third-party tools when they try to parse them.

What all does it export, you say?
Keep reading!

Surely there is no way they snuck in a feature everyone wanted and didn't tell anyone?
I sure didn't see it!

It must be missing something, right?
Not that I can see! It exports into a tab-separated file:

* Shortcut File - Name of the LNK file

* Local Path - The path to the file the LNK file is pointing to

* Volume Type - The type of the volume being accessed (Fixed, Removable, CDROM)

* Volume Label - The volume label for the volume being accessed

* Volume Serial Number - The VSN of the volume being accessed

* Network Path - If this was done over the network, the full UNC path to the file

* Short Name - The 8.3 name of the file

* File Size - Size of the file in bytes

* Creation time (UTC) - When the file the LNK file is pointing to was created

* Last write time (UTC) - When the file the LNK file is pointing to was modified

* Last access time (UTC) - When the file the LNK file is pointing to was accessed

* Directory - If the file the LNK file is pointing to is a directory

* Compressed - If the file the LNK file is pointing to is compressed

* Encrypted - If the file the LNK file is pointing to is encrypted

* Read-only - If the file the LNK file is pointing to is marked read-only

* Hidden - If the file the LNK file is pointing to is marked hidden

* System - If the file the LNK file is pointing to is marked as a system file

* Archive - If the file the LNK file is pointing to is marked to be archived

* Sparse - If the file the LNK file is pointing to is 'sparse'

* Offline - If the file the LNK file is pointing to is offline

* Temporary - If the file the LNK file is pointing to is an NTFS temporary file

* Reparse point - If the file the LNK file is pointing to has a reparse point set

* Relative Path - The relative path to the LNK file

* Program arguments - Any arguments stored for the execution of the program

* Working directory - Where the executable will default to for reads/writes without a path

* Icon - What icon is associated with the executable, if any

* Comment - This is an Outlook feature; not sure why it's included

* NetBIOS name - The network name of the system the LNK file was accessing

* MAC address - The MAC address of the system the LNK file was accessing

So the next time you are working a case in FTK and want to know what was being accessed from external drives (and you are checking shellbags and other artifacts separately, of course), make a filter for all files with the extension 'LNK', right-click on one, and export all of them to TSV. Import that TSV into Excel, sort by Local Path, and you're done! This may be one of the biggest time savers I've found in FTK in years, and I now use it on every case.
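
If you'd rather script that last step than sort in Excel, here is a minimal sketch that reads the exported TSV, keeps only the entries whose Volume Type mentions 'Removable', and sorts them by Local Path. The file name and the exact header strings are assumptions based on the field list above; adjust them to match what your FTK version actually writes.

    import csv

    # "lnk_metadata.tsv" is a placeholder for the file FTK exported; the headers
    # below mirror the field list above and may differ slightly between versions.
    with open("lnk_metadata.tsv", newline="", encoding="utf-8-sig") as fh:
        rows = list(csv.DictReader(fh, delimiter="\t"))

    removable = [r for r in rows if "Removable" in (r.get("Volume Type") or "")]
    removable.sort(key=lambda r: (r.get("Local Path") or "").lower())

    for r in removable:
        print(r.get("Shortcut File"), r.get("Local Path"),
              r.get("Volume Serial Number"), r.get("Last access time (UTC)"), sep="\t")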

Have you found a feature you love that everyone seems to miss? Leave it in the comments below.