Daily Blog #381: National CCDC Redteam Debrief




Hello Reader,
     The 11th year of the National Collegiate Cyber Defense Competition has ended; congratulations to the University of Central Florida on their third consecutive win. I hope you make it back next year for another test of your school's program and its ability to transfer knowledge to new generations of blue teams.

If you want to show your support for my efforts, there is an easy way to do that. 

Vote for me for Digital Forensic Investigator of the Year here: https://forensic4cast.com/forensic-4cast-awards/



However, the team that won over the Redteam was the University of Tulsa, who came with a sense of humor. Behold their hat and badges:
[Photo: the University of Tulsa red team hat and badges]


Also you have to check out the player cards they made here:
https://engineering.utulsa.edu/news/tu-cyber-security-expert-creates-trading-cards-collegiate-cyber-defense-competition/

Here is my favorite:
[Image: one of the University of Tulsa player trading cards]


You can download my Redteam debrief here:
https://drive.google.com/file/d/0B_mjsPB8uKOAcUQtOThUNUpTZ0k/view?usp=sharing



Daily Blog #380: National CCDC 2016




Hello Reader,
           I'm in San Antonio for the National Collegiate Cyber Defense Competition, which starts at 10am CST 4/22/16. If you didn't know, I lead the red team here at Nationals, where the top 10 college teams in the country come to find out who does the best job of defending their network while completing business objectives.

I'm hoping to follow up this post with some videos and links to what happens tomorrow. In the meantime, make sure to follow #CCDC or #NCCDC on Twitter to watch some of our funny business in real time.

Daily Blog #379: Automating DFIR with dfVFS Part 6




Hello Reader,
         It's time to continue our series by iterating through all the partitions within a disk or image, instead of just hard coding one. To start, you'll need another image: one that not only has more than one partition but also has shadow copies for us to interact with next.

You can download the image here:


If you want to show your support for my efforts, there is an easy way to do that. 

Vote for me for Digital Forensic Investigator of the Year here: https://forensic4cast.com/forensic-4cast-awards/

First let's look at the code now:

import sys
import logging

from dfvfs.analyzer import analyzer
from dfvfs.lib import definitions
from dfvfs.path import factory as path_spec_factory
from dfvfs.volume import tsk_volume_system
from dfvfs.resolver import resolver
from dfvfs.lib import raw

source_path="Windows 7 Professional SP1 x86 Suspect.vhd"

path_spec = path_spec_factory.Factory.NewPathSpec(
          definitions.TYPE_INDICATOR_OS, location=source_path)

type_indicators = analyzer.Analyzer.GetStorageMediaImageTypeIndicators(
          path_spec)

if len(type_indicators) > 1:
  raise RuntimeError((
      u'Unsupported source: {0:s} found more than one storage media '
      u'image types.').format(source_path))

if len(type_indicators) == 1:
  path_spec = path_spec_factory.Factory.NewPathSpec(
      type_indicators[0], parent=path_spec)

if not type_indicators:
  # The RAW storage media image type cannot be detected based on
  # a signature so we try to detect it based on common file naming
  # schemas.
  file_system = resolver.Resolver.OpenFileSystem(path_spec)
  raw_path_spec = path_spec_factory.Factory.NewPathSpec(
      definitions.TYPE_INDICATOR_RAW, parent=path_spec)

  glob_results = raw.RawGlobPathSpec(file_system, raw_path_spec)
  if glob_results:
    path_spec = raw_path_spec

volume_path_spec = path_spec_factory.Factory.NewPathSpec(
        definitions.TYPE_INDICATOR_TSK_PARTITION, location=u'/',
        parent=path_spec)

volume_system = tsk_volume_system.TSKVolumeSystem()
volume_system.Open(volume_path_spec)

volume_identifiers = []
for volume in volume_system.volumes:
  volume_identifier = getattr(volume, 'identifier', None)
  if volume_identifier:
    volume_identifiers.append(volume_identifier)
 
print(u'The following partitions were found:')
print(u'Identifier\tOffset\t\t\tSize')

for volume_identifier in sorted(volume_identifiers):
  volume = volume_system.GetVolumeByIdentifier(volume_identifier)
  if not volume:
    raise RuntimeError(
        u'Volume missing for identifier: {0:s}.'.format(volume_identifier))

  volume_extent = volume.extents[0]
  print(
      u'{0:s}\t\t{1:d} (0x{1:08x})\t{2:d}'.format(
          volume.identifier, volume_extent.offset, volume_extent.size))

  volume_path_spec = path_spec_factory.Factory.NewPathSpec(
        definitions.TYPE_INDICATOR_TSK_PARTITION, location=u'/'+volume_identifier,
        parent=path_spec)

  mft_path_spec = path_spec_factory.Factory.NewPathSpec(
        definitions.TYPE_INDICATOR_TSK, location=u'/$MFT',
        parent=volume_path_spec)

  file_entry = resolver.Resolver.OpenFileEntry(mft_path_spec)


  stat_object = file_entry.GetStat()

  print(u'Inode: {0:d}'.format(stat_object.ino))
  print(u'Name: {0:s}'.format(file_entry.name))
  outFile = volume_identifier+file_entry.name
  extractFile = open(outFile,'wb')
  file_object = file_entry.GetFileObject()

  data = file_object.read(4096)
  while data:
      extractFile.write(data)
      data = file_object.read(4096)

  extractFile.close()
  file_object.close()
  volume_path_spec=""
  mft_path_spec=""

Believe it or not, we didn't have to change much to go from looking at one partition and extracting the $MFT to extracting it from all the partitions. We had to do four things.

1. We moved our file extraction code over by one indent, allowing it to execute as part of the for loop we first wrote to print out the list of partitions in an image. Remember that in Python we don't use braces to determine how code will be executed; it's all indentation that decides how the code logic will be read and followed.

2. Next we changed the location our volume path specification object points to, from a hard coded /p1 to whatever volume identifier we are currently looking at in the for loop.

 volume_path_spec = path_spec_factory.Factory.NewPathSpec(
        definitions.TYPE_INDICATOR_TSK_PARTITION, location=u'/'+volume_identifier,
        parent=path_spec)

You can see that the location argument is now u'/' appended with the volume_identifier variable. This resolves to /p1, /p2, etc., for as many partitions as the image has.

3. Now that we are extracting this file from multiple partitions, we don't want to overwrite the file we previously extracted, so we need to make the file name unique. We do that by appending the partition number to the file name.

  outFile = volume_identifier+file_entry.name
  extractFile = open(outFile,'wb')

This results in files named p1$MFT, p2$MFT, and so on. To accomplish this we make a new variable called outFile, which is set to the partition number (volume_identifier) appended with the file name (file_entry.name). Then we pass that to the open call we wrote before.
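
A small variant (my own sketch; the 'extracted' directory name is hypothetical, not from the original script) that keeps the extracted files in a dedicated output directory instead of the current working directory:

import os

# Create the output directory (assumed name) if it doesn't exist yet.
output_dir = u'extracted'
if not os.path.isdir(output_dir):
  os.makedirs(output_dir)
# Build the output path inside it, still prefixed with the partition number.
outFile = os.path.join(output_dir, volume_identifier + file_entry.name)
extractFile = open(outFile, 'wb')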

4. One last simple change.

volume_path_spec=""
mft_path_spec=""

We are setting our partition and file path spec objects back to empty values. Why? Because otherwise they persist between loop iterations, and the next pass would keep layering new path specifications onto the objects left over from the prior partition.

That will result in very funny errors.
That's it! No more code changes. 
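
If you would rather avoid resetting the variables entirely, here is a minimal sketch (my own restructuring, not the post's code) that scopes the path spec creation inside a helper function so each partition gets fresh objects:

def extract_mft(base_path_spec, volume_identifier):
  # Build fresh partition and file path specs for this partition only.
  volume_path_spec = path_spec_factory.Factory.NewPathSpec(
      definitions.TYPE_INDICATOR_TSK_PARTITION,
      location=u'/' + volume_identifier, parent=base_path_spec)
  mft_path_spec = path_spec_factory.Factory.NewPathSpec(
      definitions.TYPE_INDICATOR_TSK, location=u'/$MFT',
      parent=volume_path_spec)
  # The locals go out of scope on return, so nothing carries over
  # between partitions.
  return resolver.Resolver.OpenFileEntry(mft_path_spec)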

You can get the code from Github: 


In the next post we will be iterating through shadow copies!

Daily Blog #378: Automating DFIR with dfVFS part 5




Hello Reader,

Wondering where yesterday's post is? Well, there was no winner of last weekend's Sunday Funday.
That's ok though because I am going to post the same challenge this Sunday so you have a whole week to figure it out!

-- Now back to our regularly scheduled series --

              I use Komodo from ActiveState as my IDE of choice when writing Perl and Python. I bring this up because one of the things I really like about it is the debugger it comes with, which lets you view all of the objects you have made and their current assignments. I was thinking about the layer cake example I crudely drew in ASCII in a prior post when I realized I could show it much better with the ActiveState debugger.

So here is what the path spec object we made to access the $MFT in a VHD looks like.

[Screenshot: the ActiveState debugger showing the layered path spec object for a VHD image]

I've underlined in red the important things to draw your attention to when you are trying to understand how that file path specification object we built can access the MFT and all the other layers involved.

So if you look you can see, from the top down, it's (the sketch after this list shows how to print the chain yourself):

  • TSK Type with a Location of /$MFT
    • With a parent of TSK Partition type with a location of /p1
      • With a parent of VHDI type 
        • With a parent of OS type with a location of the full path to where the vhd I'm working with sits.
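
If you don't have a debugger handy, here is a minimal sketch (my addition, not from the original post) that walks the parent chain of any path spec and prints each layer:

def print_path_spec_layers(path_spec):
  # Walk from the outermost layer down to the OS layer via .parent.
  layer = path_spec
  while layer is not None:
    location = getattr(layer, 'location', None)
    print(u'{0:s}  location: {1!s}'.format(layer.type_indicator, location))
    layer = getattr(layer, 'parent', None)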

Let's look at the same object with an E01 loaded.

[Screenshot: the same path spec object with an E01 image loaded]

Notice what I highlighted: the image type has changed from VHDI to EWF. Otherwise the object, its properties, and its methods are the same.

Let's do this one more time to really reinforce this with a raw/dd image.

[Screenshot: the same path spec object with a raw/dd image loaded]

Everything else is the same, except for the type changing to RAW. 

So no matter what type of image we are working with, dfVFS allows us to build an object in layers so the code that follows doesn't have to worry about what is underneath. It normalizes all the different image type access libraries, letting us avoid things like the workarounds we had to do in pyTSK.

Tomorrow, more code!

Daily Blog #377: Sunday Funday 4/17/16 - dfVFS Testing Script Challenge





Hello Reader,
              If you have been following the blog the last two weeks, you have seen it's been all about dfVFS. Phil, aka Random Access, posted something I was thinking about on his blog, https://thisweekin4n6.wordpress.com, that I thought was worthy of a Sunday Funday challenge. In short, Phil saw that I posted a video on how to verify dfVFS was installed correctly (and a whole post just on installing it) and mentioned that someone should automate the process. I agree, Phil, and now I turn it over to you, Reader: let's try out your scripting skills in this week's Sunday Funday challenge.

The Prize:
$200 Amazon Giftcard

The Rules:

  1. You must post your answer before Monday 4/18/16 3PM CST (GMT -5)
  2. The most complete answer wins
  3. You are allowed to edit your answer after posting
  4. If two answers are too similar for one to win, the one with the earlier posting time wins
  5. Be specific and be thoughtful 
  6. Anonymous entries are allowed; please email them to dcowen@g-cpartners.com and state in your email whether you would like to remain anonymous if you win.
  7. In order for an anonymous winner to receive a prize they must give their name to me, but I will not release it in a blog post



The Challenge:

Read the following blog post: https://www.hecfblog.com/2015/12/how-to-install-dfvfs-on-windows-without.html and then write a script, in your choice of scripting language, that will pull down and install those packages for a user. Second, the script should then run the dfVFS testing script shown in this video, https://www.hecfblog.com/2016/04/daily-blog-375-video-blog-showing-how.html, to validate the install.


Daily Blog #376: Saturday Reading 4/16/16


Hello Reader,

          It's Saturday! Soccer games, birthday parties, and forensics, oh my! That is my weekend; how's yours? If it's raining where you are and the kids are going nuts, here are some good links to distract you.

1. Didier Stevens posted an index of all the posts he made in March, https://blog.didierstevens.com/2016/04/17/overview-of-content-published-in-march/. If you are at all interested in malicious document deconstruction and reverse engineering, it's worth your time to read.

2. If you've done any work on ransomware and other drive-by malware deployments, this article by Brian Krebs on the sentencing of the Blackhole exploit kit author is worth a read: http://krebsonsecurity.com/2016/04/blackhole-exploit-kit-author-gets-8-years/

3. Harlan has a new blog post up this week with links to various incident response articles he's found interesting, http://windowsir.blogspot.com/2016/04/links.html. This includes a link to the newly published 2nd edition of Windows Registry Forensics!

4. Mary Ellen has a post up with a presentation she made on the analysis of phishing attacks, http://manhattanmennonite.blogspot.com/2016/04/gone-phishing.html. The presentation also links to a malware lab. Maybe this will lead to more posts from Mary Ellen.

5. Adam over at Hexacorn has a very interesting write-up on EICAR, http://www.hexacorn.com/blog/2016/04/10/a-few-things-about-eicar-that-you-may-be-not-aware-of/. I wasn't aware of EICAR until Adam posted about it, and I found the whole read fascinating. EICAR is a standard file created to let antivirus developers test their own software, and as Adam discusses, others have made their own variations.

6. In a bit of inception posting, Random Access has a weekly reading list of his own on his blog. This is his post from 4/10/16, https://thisweekin4n6.wordpress.com/2016/04/10/week-14-2016/. He does a very good job covering things I miss, and frankly I should just be copying and pasting his posts here, but I think that's looked down on.

So Phil, if you are reading this. Do you want to post here on Saturdays?

That's all for this week! Did I miss something? Post a link to a blog or site I need to add to my feedly below.

Daily Blog #375: Video Blog showing How to verify and test your dfVFS install


Hello Reader,
        This is a first for me: I've created a video blog today to show how to verify and test that your dfVFS installation was successful on Windows.

If you want to show your support for my efforts, there is an easy way to do that. 


Vote for me for Digital Forensic Investigator of the Year here: https://forensic4cast.com/forensic-4cast-awards/



Watch it here: https://youtu.be/GI8tbi74DFY


Daily Blog #374: Automating DFIR with dfVFS part 4


Hello Reader,
            In our last entry in this series we took our partition listing script and added support for raw images. Now our simple script should be able to work with forensic images, virtual disks, raw images and live disks.

If you want to show your support for my efforts, there is an easy way to do that. 

Vote for me for Digital Forensic Investigator of the Year here: https://forensic4cast.com/forensic-4cast-awards/


Now that we have that working let's actually get it to do something useful, like extract a file.

First let's look at the code now:

import sys
import logging

from dfvfs.analyzer import analyzer
from dfvfs.lib import definitions
from dfvfs.path import factory as path_spec_factory
from dfvfs.volume import tsk_volume_system
from dfvfs.resolver import resolver
from dfvfs.lib import raw

source_path="stage2.vhd"

path_spec = path_spec_factory.Factory.NewPathSpec(
          definitions.TYPE_INDICATOR_OS, location=source_path)

type_indicators = analyzer.Analyzer.GetStorageMediaImageTypeIndicators(
          path_spec)

if len(type_indicators) > 1:
  raise RuntimeError((
      u'Unsupported source: {0:s} found more than one storage media '
      u'image types.').format(source_path))

if len(type_indicators) == 1:
  path_spec = path_spec_factory.Factory.NewPathSpec(
      type_indicators[0], parent=path_spec)

if not type_indicators:
  # The RAW storage media image type cannot be detected based on
  # a signature so we try to detect it based on common file naming
  # schemas.
  file_system = resolver.Resolver.OpenFileSystem(path_spec)
  raw_path_spec = path_spec_factory.Factory.NewPathSpec(
      definitions.TYPE_INDICATOR_RAW, parent=path_spec)

  glob_results = raw.RawGlobPathSpec(file_system, raw_path_spec)
  if glob_results:
    path_spec = raw_path_spec

volume_path_spec = path_spec_factory.Factory.NewPathSpec(
        definitions.TYPE_INDICATOR_TSK_PARTITION, location=u'/',
        parent=path_spec)

volume_system = tsk_volume_system.TSKVolumeSystem()
volume_system.Open(volume_path_spec)

volume_identifiers = []
for volume in volume_system.volumes:
  volume_identifier = getattr(volume, 'identifier', None)
  if volume_identifier:
    volume_identifiers.append(volume_identifier)
 
print(u'The following partitions were found:')
print(u'Identifier\tOffset\t\t\tSize')

for volume_identifier in sorted(volume_identifiers):
  volume = volume_system.GetVolumeByIdentifier(volume_identifier)
  if not volume:
    raise RuntimeError(
        u'Volume missing for identifier: {0:s}.'.format(volume_identifier))

  volume_extent = volume.extents[0]
  print(
      u'{0:s}\t\t{1:d} (0x{1:08x})\t{2:d}'.format(
          volume.identifier, volume_extent.offset, volume_extent.size))

print(u'')

path_spec = path_spec_factory.Factory.NewPathSpec(
        definitions.TYPE_INDICATOR_TSK_PARTITION, location=u'/p1',
        parent=path_spec)

mft_path_spec = path_spec_factory.Factory.NewPathSpec(
        definitions.TYPE_INDICATOR_TSK, location=u'/$MFT',
        parent=path_spec)

file_entry = resolver.Resolver.OpenFileEntry(mft_path_spec)


stat_object = file_entry.GetStat()

print(u'Inode: {0:d}'.format(stat_object.ino))
print(u'Name: {0:s}'.format(file_entry.name))
extractFile = open(file_entry.name,'wb')
file_object = file_entry.GetFileObject()

data = file_object.read(4096)
while data:
          extractFile.write(data)
          data = file_object.read(4096)

extractFile.close()
file_object.close()

The first thing I changed was the image I'm working with, back to stage2.vhd.

source_path="stage2.vhd"

At this point, though, you should be able to pass it any type of supported image.
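
For instance, here is a minimal sketch (my addition, not the post's code) that takes the image path as an optional command line argument instead of hard coding it:

import sys

# Use the first command line argument as the image path when one is given.
if len(sys.argv) > 1:
  source_path = sys.argv[1]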

Next, after the code we first wrote to list out the partitions within an image, we added a new path specification layer to make an object that points to the first partition within the image.

path_spec = path_spec_factory.Factory.NewPathSpec(
        definitions.TYPE_INDICATOR_TSK_PARTITION, location=u'/p1',
        parent=path_spec)

You can see we are using the TSK_PARTITION type again because we know this is a partition, but the location has changed from the prior partition path spec object we made. That prior object pointed to the root of the image so we could iterate through the partitions; the new object references just the first partition.

Next we make another path specification object that builds on the partition object.

mft_path_spec = path_spec_factory.Factory.NewPathSpec(
        definitions.TYPE_INDICATOR_TSK, location=u'/$MFT',
        parent=path_spec)

Here we are creating a TSK object and telling it that we want it to point to the file $MFT at the root of the file system. Notice we didn't have to tell it the kind of file system, the offset to where it begins, or any other data. The resolver and analyzer helper classes within dfVFS will figure all of that out for us, if they can. In tomorrow's post we will put in some more conditional code to detect when they in fact cannot.

So now that we have a path spec object with a reference to a file we want to work with, let's get an object for that file.

file_entry = resolver.Resolver.OpenFileEntry(mft_path_spec)

The resolver helper class's OpenFileEntry function takes the path spec object we made that points to the $MFT and, if it can access the file, returns an object that references it.
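
One caveat: if the path spec cannot be resolved, OpenFileEntry returns None rather than raising, so a small defensive check (my addition) is worthwhile:

# Guard against an unresolvable path spec before using file_entry.
if file_entry is None:
  raise RuntimeError(u'Unable to open a file entry for /$MFT')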

Next we are going to gather some data about the file we are accessing.

stat_object = file_entry.GetStat()

First we use the GetStat function available from the file entry object to return information about the file into a new object called stat_object. This is similar to running the stat command on a file.

Next we are going to print what I'm referring to below as the inode number:
print(u'Inode: {0:d}'.format(stat_object.ino))

NTFS doesn't technically have inodes; this is actually the MFT record number, but the concept is the same. We access the stat_object property ino to get the MFT record number. You could also access the size of the file, its associated dates, and other data, but this is a good starting place.
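
For example, here is a hedged sketch (the extra attribute names are my assumption about the stat object, so getattr guards against any that are missing):

# Print a few more stat properties; the attribute names are assumed, so
# getattr avoids an AttributeError if one is absent.
print(u'Size: {0!s}'.format(getattr(stat_object, 'size', None)))
print(u'Modified: {0!s}'.format(getattr(stat_object, 'mtime', None)))
print(u'Created: {0!s}'.format(getattr(stat_object, 'crtime', None)))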

Next we want to print the name of the file we are accessing.
print(u'Name: {0:s}'.format(file_entry.name))


The file_entry object's name property contains the name. This is much easier than with pyTSK, where we had to walk a meta sub-object structure to get the file name out.

Now we need to open a file handle for where we want to write the MFT data out to.

extractFile = open(file_entry.name,'wb')

Notice two things. One, we are using the file_entry.name property directly in the open call, meaning our extracted file will have the same name as the file in the image. Two, we are passing in the mode 'wb', which means the file handle can be written to and, when it is written to, should be treated as binary. This is important on Windows systems: when you write out binary data, any newline bytes could be translated unless you pass in the binary mode flag.

Now we need to interact with not just the properties of the file in the image, but also the data it's actually storing.

file_object = file_entry.GetFileObject()

We do that by calling the GetFileObject function on the file_entry object. This gives us a file object, just like extractFile, that normal Python functions can read from. The handle is stored in the variable file_object.

Now we need to read the data from the file in the image and then write it out to a file on the disk.

data = file_object.read(4096)
while data:
          extractFile.write(data)
          data = file_object.read(4096)

First we read 4k of data from the file handle we opened into the image, then enter a while loop. The while loop says: as long as the read call on file_object returns data, keep reading 4k chunks. When we reach the end of the file, the read returns an empty value and the while loop stops iterating.

While there is data, the write function on the extractFile handle writes out what we just read, and then we read the next 4k chunk and iterate through the loop again.
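
As an aside, because file_object behaves like a normal Python file object, the standard library can do this chunked copy for us. A minimal alternative sketch (my suggestion, not the post's code):

import shutil

# Copy from the in-image file object to the output file in 4k chunks.
shutil.copyfileobj(file_object, extractFile, 4096)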

Lastly, for good measure, we close the handles to both the file within the image and the file we are writing to on our local disk.

extractFile.close()
file_object.close()

And that's it!

In future posts we are going to access volume shadow copies, take command line options, iterate through multiple partitions and directories, and add a GUI. Lots to do, but we will do it one piece at a time.

You can download this post's code here on GitHub:




Daily Blog #373: Automating DFIR with dfVFS part 3




Hello Reader,
           In our last post I expanded on the concept of path specification objects. Now let's expand the support of our dfVFS code beyond just forensic images and known virtual drives to live disks and raw images.

If you want to show your support for my efforts, there is an easy way to do that. 

Vote for me for Digital Forensic Investigator of the Year here: https://forensic4cast.com/forensic-4cast-awards/

Why is this not supported with the same function call, you ask? Live disks and raw images do not have any magic headers that dfVFS can parse to know what it is dealing with. So instead we need to add some conditional logic to help it test whether what we are working with is a forensic image or a raw disk.

First, as we did last time, let's see what the code looks like now:

import sys
import logging

from dfvfs.analyzer import analyzer
from dfvfs.lib import definitions
from dfvfs.path import factory as path_spec_factory
from dfvfs.volume import tsk_volume_system
## Adding Resolver
from dfvfs.resolver import resolver
## Adding raw support
from dfvfs.lib import raw

source_path="dfr-16-ntfs.dd"

path_spec = path_spec_factory.Factory.NewPathSpec(
          definitions.TYPE_INDICATOR_OS, location=source_path)

type_indicators = analyzer.Analyzer.GetStorageMediaImageTypeIndicators(
          path_spec)

if len(type_indicators) > 1:
  raise RuntimeError((
      u'Unsupported source: {0:s} found more than one storage media '
      u'image types.').format(source_path))

if len(type_indicators) == 1:
  path_spec = path_spec_factory.Factory.NewPathSpec(
      type_indicators[0], parent=path_spec)

if not type_indicators:
  # The RAW storage media image type cannot be detected based on
  # a signature so we try to detect it based on common file naming
  # schemas.
  file_system = resolver.Resolver.OpenFileSystem(path_spec)
  raw_path_spec = path_spec_factory.Factory.NewPathSpec(
      definitions.TYPE_INDICATOR_RAW, parent=path_spec)

  glob_results = raw.RawGlobPathSpec(file_system, raw_path_spec)
  if glob_results:
    path_spec = raw_path_spec

volume_system_path_spec = path_spec_factory.Factory.NewPathSpec(
        definitions.TYPE_INDICATOR_TSK_PARTITION, location=u'/',
        parent=path_spec)

volume_system = tsk_volume_system.TSKVolumeSystem()
volume_system.Open(volume_system_path_spec)

volume_identifiers = []
for volume in volume_system.volumes:
  volume_identifier = getattr(volume, 'identifier', None)
  if volume_identifier:
    volume_identifiers.append(volume_identifier)
 
print(u'The following partitions were found:')
print(u'Identifier\tOffset\t\t\tSize')

for volume_identifier in sorted(volume_identifiers):
  volume = volume_system.GetVolumeByIdentifier(volume_identifier)
  if not volume:
    raise RuntimeError(
        u'Volume missing for identifier: {0:s}.'.format(volume_identifier))

  volume_extent = volume.extents[0]
  print(
      u'{0:s}\t\t{1:d} (0x{1:08x})\t{2:d}'.format(
          volume.identifier, volume_extent.offset, volume_extent.size))

print(u'')


The first difference is that two more helper modules from dfVFS are being imported:
## Adding Resolver
from dfvfs.resolver import resolver
## Adding raw support
from dfvfs.lib import raw

The first one, resolver, is a helper class that attempts to resolve path specification objects to file system objects. You might remember that in pyTSK the first thing we did after getting a volume object was to get a file system object; the resolver is doing this for us.

The second is raw, the module that supports raw images in dfVFS. It defines the RawGlobPathSpec function, which creates a special path specification object for raw images.

Next we are changing what image we are working with to a raw image:
source_path="dfr-16-ntfs.dd"

We are now ready to deal with a raw image, aka a dd image or a live disk/partition.

First we are going to change the conditional logic around our type indicator helper function call. In the first version of the script we knew the type of image we were dealing with, so we didn't bother testing what the type indicator function returned. Now we could be dealing with multiple types of images (forensic images, raw images, unknown types), so we need to put in some conditional testing to deal with them.

if len(type_indicators) > 1: 
  raise RuntimeError((
      u'Unsupported source: {0:s} found more than one storage media '
      u'image types.').format(source_path))

if len(type_indicators) == 1:
  path_spec = path_spec_factory.Factory.NewPathSpec(
      type_indicators[0], parent=path_spec)

The first check we do with what is returned into type_indicators is to see if more than one type has been identified. Currently dfVFS only supports one type of image within a single file. I'm not quite sure when this would happen, but it's prudent to check for. If this condition were to occur, we raise a RuntimeError, printing a message to the user that we don't support this type of media.

The second check is what we saw in the first example: there is one known type of media stored within this image. You can tell we are checking for one type because we call the len function on the type_indicators list and check whether the length is 1. We use what is returned ([0] refers to the first element in the list) to create our path_spec object for the image. One thing does change here: we no longer assign what is returned from the NewPathSpec function to a new variable. Instead we take advantage of the layering described in the prior post and store the new object under the same variable name, knowing that the prior object has been layered in as the parent.

Only two more changes and our script is done. Next we need to check whether no known media format was found, meaning type_indicators is empty. We do that with the if not operator, which basically says: if the type_indicators variable is empty (nothing was returned from the function that populated it), run the following code.

if not type_indicators:
  # The RAW storage media image type cannot be detected based on
  # a signature so we try to detect it based on common file naming
  # schemas.
  file_system = resolver.Resolver.OpenFileSystem(path_spec)
  raw_path_spec = path_spec_factory.Factory.NewPathSpec(
      definitions.TYPE_INDICATOR_RAW, parent=path_spec)



That code does two things if no type was returned, indicating this is possibly a raw image. The first is to call the resolver helper class function OpenFileSystem with the path_spec object we have made. The second is to create a new path specification object, manually setting the type of the layer we are adding to TYPE_INDICATOR_RAW, a raw image.

The last change we make is taking that new raw image path specification and making it work with the rest of dfVFS. We do that by calling the raw module's RawGlobPathSpec function and passing it two objects: the file system object we made just above and the raw_path_spec object. RawGlobPathSpec looks for files matching common raw image naming schemas and, if it finds any, returns them, telling us the raw path specification is valid.

  glob_results = raw.RawGlobPathSpec(file_system, raw_path_spec)
  if glob_results:
    path_spec = raw_path_spec

We then test the glob_results variable to make sure something was returned, a sign the glob ran successfully. If it did, we assign raw_path_spec to our path_spec variable.
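
If you are curious what the glob actually matched, here is a small sketch (my addition; I'm assuming each result is a path spec carrying a location attribute) that prints the segment files it found:

# Print the location of each segment file path spec the glob matched.
for segment_path_spec in glob_results:
  print(u'Found segment: {0!s}'.format(
      getattr(segment_path_spec, 'location', segment_path_spec)))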

That's it!

After running the script this should be what you see:

The following partitions were found:
Identifier    Offset                  Size
p1            65536 (0x00010000)      314572800

You can download the image I'm testing with here: http://www.cfreds.nist.gov/dfr-images/dfr-16-ntfs.dd.bz2

You can download the source code for this example from GitHub here: https://github.com/dlcowen/dfirwizard/blob/master/dfvfsWizardv2.py

Tomorrow we continue to add more functionality!

Daily Blog #372: Automating DFIR with dfVFS part 2


Hello Reader,
        In this short post I want to get more into the idea of the path specification object we made in the prior part. If this post had a catchy title, it would be Zen and the Art of Path Specification.

If you want to show your support for my efforts, there is an easy way to do that. 

Vote for me for Digital Forensic Investigator of the Year here: https://forensic4cast.com/forensic-4cast-awards/

In the prior post, part 1 of the series, we made three path specification objects. I described path specification objects as the cornerstone of understanding dfVFS, which I believe to be true. What I didn't point out is that the path specification objects in that first example code were building on top of themselves like a layer cake.

Let's take a look at the three objects we created again.
path_spec = path_spec_factory.Factory.NewPathSpec(
          definitions.TYPE_INDICATOR_OS, location=source_path)

source_path_spec = path_spec_factory.Factory.NewPathSpec(
            type_indicators[0], parent=path_spec)

volume_system_path_spec = path_spec_factory.Factory.NewPathSpec(
        definitions.TYPE_INDICATOR_TSK_PARTITION, location=u'/',
        parent=source_path_spec)

If you were to look carefully, you would notice a couple of differences between the calls to the NewPathSpec function.

1. The type of path specification we are making changes. We start with an operating system file, then an image (whose type is set by the return of our indicator query), and lastly we are working with a partition.
2. Two of our path specifications declare a location; one does not.
3. Most importantly, source_path_spec and volume_system_path_spec have parents. Those parents are the path specification objects created prior.

So if you were to look at it as one single object with multiple layers, it would look something like this:


-----------------------------
|  OS File Path Spec        |
-----------------------------
|  TSK Image Type Path Spec |
-----------------------------
|  TSK Partition Path Spec  |
-----------------------------
The lowest layer in the object can reference the layers above it. This is why we don't just create one path specification object. Instead we initialize each layer of the object one call at a time, as we determine the type of image, directory, archive, etc. we are working with, so that our path specification object reflects the data we are trying to get dfVFS to work with.

Which part of the dfVFS framework you are working with determines how many of these layers need to exist before you call that function with your fully developed path specification object.

As we go further into the series I will show you how to interact with the files stored in the partitions we listed in part 1. Doing that will create yet another layer in our object, the file system layer. This is very similar to how we built our objects in pyTSK.
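
To preview that file system layer, here is a minimal sketch (consistent with the later parts of this series, not code from this post):

# Layer a TSK file system path spec on top of the partition layer to
# reference a file such as the $MFT.
file_path_spec = path_spec_factory.Factory.NewPathSpec(
    definitions.TYPE_INDICATOR_TSK, location=u'/$MFT',
    parent=volume_system_path_spec)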

If you want to read how Metz explains path specification objects, you can read about them here:

https://github.com/log2timeline/dfvfs/wiki/Internals

Tomorrow I will explain how we access raw images and then Thursday we will extract a file from an image.

Daily Blog #371: Sunday Funday 4/10/16 Winner! Powershell Script Challenge



Hello Reader,

           Another challenge has been answered by you, the readership. This week our anonymous winner claims a $200 Amazon gift card for showing the impact of installing and running PowerForensics. You too can join the ranks of Sunday Funday winners, and I think I'm going to do something special for all past and future winners so everyone can know of your deeds.

The Challenge:

The term "forensically sound" has a lot of vagueness to it. Let's get rid of the ambiguity regarding what changes when you run the PowerForensics PowerShell script to extract the MFT from a system. Explain what changes and what doesn't, from executing the PowerShell script to extracting the file.


The Winning Answer:
Anonymous Submission

This answer is based on the assumption that you are not connecting to the target system via F-Response or a similar method and that you are running the PowerForensics PowerShell script directly on the target system.  This also assumes that the PowerForensics module is already installed on the system.

When the powershell script is executed, program execution artifacts associated with PowerShell will be created.  These artifacts include the creation of a prefetch file (if application prefetching is enabled), a record in the application compatibility cache (the exact location/structure of which depends on the version of Windows installed), a record in the MUICache, and possibly a UserAssist entry (if the script was double-clicked in Explorer).  In addition, event log records may be created in the Security event log if process tracking is enabled. 

Installing the PowerForensics powershell module will result in different artifacts depending on the version of Powershell installed on the target system.  If the Windows Management Framework version 5 is not installed on the target system, the PowerForensics module can be installed by copying the module files to a directory in the PSModulePath.  Using this method will result in the creation of new files in a directory on the target system, which brings with it the file creation artifacts found in NTFS (e.g. $MFT record creation, USNJrnl record creations, parent directory $I30 updates, changes to the $BITMAP file, etc.).   If the Windows Management Framework version 5 is installed, the Install-Module cmdlet can be used to install.  This may require the installation of additional cmdlets in order to download/install the PowerForensics module, which would result in additional files and directories being created in a directory in the PSModulePath.

Since the script uses raw disk reads to determine the location of the $MFT on disk, it should not impact the $STANDARD_INFORMATION or $FILE_NAME timestamps of the files being copied.

