Building your own travel sized virtual lab with ESXi and the Intel SkullCanyon NUC

Hello Reader,
          It's been a while, and I know that. Sorry for not writing sooner, but to quote Ferris Bueller:

"Life moves pretty fast. If you don't stop and look around once in a while, you could miss it."

So while I've worked on a variety of cases, projects, and new artifacts to share, I've neglected the blog. Those of you who have been watching/listening know I've kept up the Forensic Lunch videocast/podcast, but to be fair, the blog is my first child and I've left it idle for too long.

Speaking of the Forensic Lunch, if you watched this episode:
https://www.youtube.com/watch?v=Ru8fLioIVlA

you would have seen me talk about building my own portable cloud for lab testing and research. People seem to have received this very well, and I've thoroughly enjoyed using it! To that end, I thought I would detail how I set this up in case you want to do the same.

Step 1. Make an account on vmware.com (https://my.vmware.com/web/vmware/registration)

Step 2. Using Chrome (I'm not sure why, but I had some errors in Firefox), go to this page to register for the free version of ESXi. (Note: this is the free version of ESXi that will generate a license key for life; the other version will expire after 60 days.)
https://my.vmware.com/en/group/vmware/evalcenter?p=free-esxi6

Step 3. Make a note of your license key as seen in the picture below. You'll want to copy and paste this and keep it, as it won't show up as a license key associated with your MyVMware account.


Step 4. Click to download the product named "ESXi ISO image (Includes VMware Tools)". You could also download the vSphere client at this point, or you can grab it from a link embedded within the ESXi homepage once you have it installed.

Step 5. After downloading the ISO you will need to put it onto some form of bootable media to install it onto your Intel Skull Canyon NUC, as the NUC has no optical drive of its own. I chose to use a USB thumb drive. To turn the ISO into a bootable USB drive I used Rufus, and you can too.

Step 5a. Download Rufus: https://rufus.akeo.ie/downloads/rufus-2.11.exe
Step 5b. Execute Rufus
Step 5c. Configure Rufus to look something like what I have below, where Device is the USB thumb drive you have plugged in and the ISO image is the ESXi ISO file you downloaded. Then click Start.






Step 6. With your ESXi media now on a bootable USB drive, you are ready to move on to the Intel Skull Canyon NUC itself. Start by actually getting one! I got mine at Fry's Electronics; Microcenter also carries them, and both price match Amazon now. If you want to buy it online I would recommend Amazon, and you can support a good charity while doing so by using smile.amazon.com. I support the Girl Scouts of Northeast Texas with my purchases.

Link to Intel Skull Canyon NUC:
https://smile.amazon.com/Intel-NUC-Kit-NUC6i7KYK-Mini/dp/B01DJ9XS52/ref=sr_1_1?ie=UTF8&qid=1474577754&sr=8-1&keywords=skull+canyon

The NUC comes with a processor, case, power supply, and fans, all either built in or included in the box. What you will need to provide is the RAM and storage.


Storage
I used the Samsung 950 Pro Series 512GB NVMe M.2 drive. The NUC can actually fit two of these, but one has been enough so far for my initial testing.

Link to storage drive:
https://smile.amazon.com/Samsung-950-PRO-Internal-MZ-V5P512BW/dp/B01639694M/ref=pd_bxgy_147_img_2?ie=UTF8&psc=1&refRID=7N9JV1CX8FJQ4Y3JT858

RAM
For RAM I used Kingston HyperX with two 16GB sticks to get the full 32GB of RAM this unit is capable of.
Link to the RAM here:
https://smile.amazon.com/Kingston-Technology-2133MHz-HX421S13IBK2-32/dp/B01BNJL96A/ref=pd_sim_147_2?ie=UTF8&pd_rd_i=B01BNJL96A&pd_rd_r=7N9JV1CX8FJQ4Y3JT858&pd_rd_w=eFJsO&pd_rd_wg=HPiy3&psc=1&refRID=7N9JV1CX8FJQ4Y3JT858

You can use other storage and RAM, of course. I used these because I wanted the speed of NVMe M.2 (2GB/sec reads and 1.5GB/sec writes) and all the memory I could get to feed the VMs that will be running on the NUC.

Step 7. Put the storage and RAM into the NUC, plug it into the wall, attach a USB keyboard and mouse, attach a monitor, and boot into the Intel Visual BIOS. You will need to disable the Thunderbolt controller on the NUC before installing ESXi; you can re-enable it after the installation is done.

To see specifically what to click in order to do this, go here:
http://www.virten.net/2016/05/esxi-installation-on-nuc6i7kyk-fails-with-fatal-error-10-out-of-resources/

Step 8. Pop in the bootable USB drive and install ESXi.

You are now ready to start loading ISOs and VMs into your datastore, and in the next blog post I'll show you how to create an isolated virtual network to put them on.

Daily Blog #381 National CCDC Redteam Debrief

Hello Reader,
     The 11th year of the National Collegiate Cyber Defense Competition has ended. Congratulations to the University of Central Florida on their third consecutive win. I hope you make it back next year for another test of your school's program and its ability to transfer knowledge to new generations of blue teams.

If you want to show your support for my efforts, there is an easy way to do that. 

Vote for me for Digital Forensic Investigator of the Year here: https://forensic4cast.com/forensic-4cast-awards/


However, the team that won over the red team was the University of Tulsa, who came with a sense of humor. Behold their hat and badges:


Also you have to check out the player cards they made here:
https://engineering.utulsa.edu/news/tu-cyber-security-expert-creates-trading-cards-collegiate-cyber-defense-competition/

Here is my favorite:


You can download my Redteam debrief here:
https://drive.google.com/file/d/0B_mjsPB8uKOAcUQtOThUNUpTZ0k/view?usp=sharing

Daily Blog #380: National CCDC 2016

Hello Reader,
           I'm in San Antonio for the National Collegiate Cyber Defense Competition, which starts at 10am CST 4/22/16. If you didn't know, I lead the red team here at Nationals, where the top 10 college teams in the country come to find out who does the best job of defending their network while completing business objectives.

I'm hoping to follow up this post with some videos and links to what happens tomorrow. In the meantime, make sure to follow #CCDC or #NCCDC on Twitter to watch some of our funny business in real time.

Daily Blog #379: Automating DFIR with dfVFS part 6

Hello Reader,
         It's time to continue our series by iterating through all the partitions within a disk or image, instead of just hard-coding one. To start, you'll need another image, one that not only has more than one partition but also has shadow copies for us to interact with next.

You can download the image here:
https://mega.nz/#!L45SRYpR!yl8zDOi7J7koqeGnFEhYV-_75jkVtI2CTrr14PqofBw


If you want to show your support for my efforts, there is an easy way to do that. 

Vote for me for Digital Forensic Investigator of the Year here: https://forensic4cast.com/forensic-4cast-awards/


First let's look at the code now:

import sys
import logging

from dfvfs.analyzer import analyzer
from dfvfs.lib import definitions
from dfvfs.path import factory as path_spec_factory
from dfvfs.volume import tsk_volume_system
from dfvfs.resolver import resolver
from dfvfs.lib import raw

source_path="Windows 7 Professional SP1 x86 Suspect.vhd"

path_spec = path_spec_factory.Factory.NewPathSpec(
          definitions.TYPE_INDICATOR_OS, location=source_path)

type_indicators = analyzer.Analyzer.GetStorageMediaImageTypeIndicators(
          path_spec)

if len(type_indicators) > 1:
  raise RuntimeError((
      u'Unsupported source: {0:s} found more than one storage media '
      u'image types.').format(source_path))

if len(type_indicators) == 1:
  path_spec = path_spec_factory.Factory.NewPathSpec(
      type_indicators[0], parent=path_spec)

if not type_indicators:
  # The RAW storage media image type cannot be detected based on
  # a signature so we try to detect it based on common file naming
  # schemas.
  file_system = resolver.Resolver.OpenFileSystem(path_spec)
  raw_path_spec = path_spec_factory.Factory.NewPathSpec(
      definitions.TYPE_INDICATOR_RAW, parent=path_spec)

  glob_results = raw.RawGlobPathSpec(file_system, raw_path_spec)
  if glob_results:
    path_spec = raw_path_spec

volume_path_spec = path_spec_factory.Factory.NewPathSpec(
        definitions.TYPE_INDICATOR_TSK_PARTITION, location=u'/',
        parent=path_spec)

volume_system = tsk_volume_system.TSKVolumeSystem()
volume_system.Open(volume_path_spec)

volume_identifiers = []
for volume in volume_system.volumes:
  volume_identifier = getattr(volume, 'identifier', None)
  if volume_identifier:
    volume_identifiers.append(volume_identifier)
 
print(u'The following partitions were found:')
print(u'Identifier\tOffset\t\t\tSize')

for volume_identifier in sorted(volume_identifiers):
  volume = volume_system.GetVolumeByIdentifier(volume_identifier)
  if not volume:
    raise RuntimeError(
        u'Volume missing for identifier: {0:s}.'.format(volume_identifier))

  volume_extent = volume.extents[0]
  print(
      u'{0:s}\t\t{1:d} (0x{1:08x})\t{2:d}'.format(
          volume.identifier, volume_extent.offset, volume_extent.size))

  volume_path_spec = path_spec_factory.Factory.NewPathSpec(
        definitions.TYPE_INDICATOR_TSK_PARTITION, location=u'/'+volume_identifier,
        parent=path_spec)

  mft_path_spec = path_spec_factory.Factory.NewPathSpec(
        definitions.TYPE_INDICATOR_TSK, location=u'/$MFT',
        parent=volume_path_spec)

  file_entry = resolver.Resolver.OpenFileEntry(mft_path_spec)


  stat_object = file_entry.GetStat()

  print(u'Inode: {0:d}'.format(stat_object.ino))
  print(u'Name: {0:s}'.format(file_entry.name))
  outFile = volume_identifier+file_entry.name
  extractFile = open(outFile,'wb')
  file_object = file_entry.GetFileObject()

  data = file_object.read(4096)
  while data:
      extractFile.write(data)
      data = file_object.read(4096)

  extractFile.close()
  file_object.close()
  volume_path_spec=""
  mft_path_spec=""

Believe it or not, we didn't have to change much here to go from looking at one partition and extracting the $MFT to extracting it from all the partitions. We had to do four things.

1. We moved our file extraction code over by one indent, allowing it to execute as part of the for loop we first wrote to print out the list of partitions in an image. Remember that in Python we don't use braces to determine how code will be executed; it's all indentation that decides how the code logic will be read and followed.
2. Next, we changed the location our volume path specification object points to from a hard-coded /p1 to whatever volume identifier we are currently looking at in the for loop.

 volume_path_spec = path_spec_factory.Factory.NewPathSpec(
        definitions.TYPE_INDICATOR_TSK_PARTITION, location=u'/'+volume_identifier,
        parent=path_spec)

You can see that the location argument is now set to u'/' with the volume_identifier variable appended. This resolves to /p1, /p2, and so on, for as many partitions as there are in the image.

3. Now that we are going to extract this file from multiple partitions, we don't want to overwrite the file we previously extracted, so we need to make the file name unique. We do that by prefixing the file name with the partition identifier.

  outFile = volume_identifier+file_entry.name
  extractFile = open(outFile,'wb')

This results in files named p1$MFT, p2$MFT, and so on. To accomplish this we make a new variable called outFile, which is set to the partition identifier (volume_identifier) followed by the file name (file_entry.name). Then we pass that to the open call we wrote before.
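
If you would rather keep these extracted files out of your working directory, a small variation works too. This is my own tweak, not part of the original script, and the 'extracted' directory name is just an example:

import os

# Optional variation (mine, not from the original script): write each
# extracted $MFT into a dedicated output directory, still prefixing the
# partition identifier so the names stay unique.
output_dir = 'extracted'  # example directory name
if not os.path.isdir(output_dir):
  os.makedirs(output_dir)

outFile = os.path.join(output_dir, volume_identifier + file_entry.name)
extractFile = open(outFile, 'wb')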

4. One last simple change.

volume_path_spec=""
mft_path_spec=""

We are setting our partition and file path spec objects back to empty values. Why? Because if we don't, they stay set from the prior pass through the loop and will just keep appending onto the prior object, which will result in very funny errors.

That's it! No more code changes. 

You can get the code from Github: 
https://github.com/dlcowen/dfirwizard/blob/master/dfvfsWizardv4.py


In the next post we will be iterating through shadow copies!

Daily Blog #378: Automating DFIR with dfVFS part 5

Hello Reader,

Wondering where yesterday's post is? Well, there was no winner of last weekend's Sunday Funday. That's OK, though, because I am going to post the same challenge this Sunday, so you have a whole week to figure it out!

-- Now back to our regularly scheduled series --

              I use Komodo from ActiveState as my IDE of choice when writing Perl and Python. I bring this up because one of the things I really like about it is the debugger it comes with, which allows you to view all of the objects you have made and their current assignments. I was thinking about the layer cake example I crudely drew in ASCII in a prior post when I realized I could show this much better with the ActiveState debugger.

So here is what the path spec object we made to access the $MFT in a VHD looks like.


I've underlined in red the important things to draw your attention to when you are trying to understand how that file path specification object we built can access the MFT and all the other layers involved.

So if you look, you can see from the top down it's:

  • TSK Type with a Location of /$MFT
    • With a parent of TSK Partition type with a location of /p1
      • With a parent of VHDI type 
        • With a parent of OS type with a location of the full path to where the vhd I'm working with sits.

Let's look at the same object with an E01 loaded.


Notice what I highlighted: the image type has changed from VHDI to EWF. Otherwise the object, its properties, and its methods are the same.

Let's do this one more time to really reinforce this with a raw/dd image.


Everything else is the same, except for the type changing to RAW. 

So no matter what type of image we are working with, dfVFS allows us to build an object in layers so the code that follows doesn't have to worry about what's underneath. It normalizes access across all the different image type libraries, letting us avoid things like the workarounds we had to do in pytsk.
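
If you don't have a debugger handy, you can get a similar view from the code itself. As far as I know, dfVFS path specs expose a comparable property that renders the whole chain as text; treat the exact output below as an approximation rather than a guarantee.

# Rough equivalent of the debugger view above, assuming the mft_path_spec
# object from the earlier posts. The comparable property (if present in your
# dfVFS version) prints each layer of the chain on its own line.
print(mft_path_spec.comparable)
# Approximate output for a VHD, outermost layer first:
# type: OS, location: <full path to the .vhd>
# type: VHDI
# type: TSK_PARTITION, location: /p1
# type: TSK, location: /$MFT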

Tomorrow, more code!

Daily Blog #377: Sunday Funday 4/17/16

Hello Reader,
              If you have been following the blog the last two weeks, you would have seen it's been all about dfVFS. Phil, aka Random Access, posted something I was thinking about on his blog, https://thisweekin4n6.wordpress.com, that I thought was worthy of a Sunday Funday challenge. In short, Phil saw that I posted a video on how to verify dfVFS was installed correctly (and that there is a whole post just on installing it) and mentioned that someone should automate this process. I agree, Phil, and now I turn it over to you, Reader; let's try out your scripting skills in this week's Sunday Funday challenge.

The Prize:
$200 Amazon Giftcard

The Rules:

  1. You must post your answer before Monday 4/18/16 3PM CST (GMT -5)
  2. The most complete answer wins
  3. You are allowed to edit your answer after posting
  4. If two answers are too similar for one to win, the one with the earlier posting time wins
  5. Be specific and be thoughtful 
  6. Anonymous entries are allowed; please email them to dcowen@g-cpartners.com. Please state in your email whether or not you would like to be anonymous if you win.
  7. In order for an anonymous winner to receive a prize they must give their name to me, but I will not release it in a blog post



The Challenge:
Read the following blog post: http://www.hecfblog.com/2015/12/how-to-install-dfvfs-on-windows-without.html and then write a script, in your choice of scripting language, that will pull down and install those packages for a user. Second, the script should then run the dfVFS testing script shown in this video http://www.hecfblog.com/2016/04/daily-blog-375-video-blog-showing-how.html to validate the install.

Daily Blog #376: Saturday Reading 4/16/16

Hello Reader,

          It's Saturday! Soccer games, birthday parties, and forensics, oh my! That is my weekend; how's yours? If it's raining where you are and the kids are going nuts, here are some good links to distract you.

1. Didier Stevens posted an index of all the posts he made in March, https://blog.didierstevens.com/2016/04/17/overview-of-content-published-in-march/. If you are at all interested in malicious document deconstruction and reverse engineering, it's worth your time to read.

2. If you've done any work on ransomware and other drive-by malware deployments, this article by Brian Krebs on the sentencing of the Blackhole exploit kit author is worth a read: http://krebsonsecurity.com/2016/04/blackhole-exploit-kit-author-gets-8-years/

3. Harlan has a new blog up this week with some links to various incident response articles he's found interesting, http://windowsir.blogspot.com/2016/04/links.html. This includes a link to the newly published 2nd edition of Windows Registry Forensics!

4. Mary Ellen has a post up with a presentation she made regarding the analysis of phishing attacks, http://manhattanmennonite.blogspot.com/2016/04/gone-phishing.html. The presentation also links to a malware lab. Hopefully we will see more posts from Mary Ellen.

5. Adam over at Hexacorn has a very interesting write-up on EICAR, http://www.hexacorn.com/blog/2016/04/10/a-few-things-about-eicar-that-you-may-be-not-aware-of/. I wasn't aware of EICAR until Adam posted about it, and I found the whole read fascinating. EICAR is apparently a standard file created to allow antivirus developers to test their own software, and as Adam discusses, others have made their own variations.

6. In a bit of inception posting, Random Access has a weekly reading list of his own on his blog. This is his post from 4/10/16, https://thisweekin4n6.wordpress.com/2016/04/10/week-14-2016/. He does a very good job covering things I miss, and frankly I should just be copying and pasting his posts here, but I think that's looked down upon.

So Phil, if you are reading this. Do you want to post here on Saturdays?

That's all for this week! Did I miss something? Post a link to a blog or site I need to add to my feedly below.

Daily Blog #375: Video Blog showing how to verify and test your dfVFS install

Hello Reader,
        This is a first for me: I've created a video blog today to show how to verify and test that your dfVFS installation was successful on Windows.

If you want to show your support for my efforts, there is an easy way to do that. 

Vote for me for Digital Forensic Investigator of the Year here: https://forensic4cast.com/forensic-4cast-awards/


Watch it here: https://youtu.be/GI8tbi74DFY

or below:

Daily Blog #374: Automating DFIR with dfVFS part 4

Hello Reader,
            In our last entry in this series we took our partition listing script and added support for raw images. Now our simple script should be able to work with forensic images, virtual disks, raw images and live disks.

If you want to show your support for my efforts, there is an easy way to do that. 

Vote for me for Digital Forensic Investigator of the Year here: https://forensic4cast.com/forensic-4cast-awards/


Now that we have that working let's actually get it to do something useful, like extract a file.

First let's look at the code now:

import sys
import logging

from dfvfs.analyzer import analyzer
from dfvfs.lib import definitions
from dfvfs.path import factory as path_spec_factory
from dfvfs.volume import tsk_volume_system
from dfvfs.resolver import resolver
from dfvfs.lib import raw

source_path="stage2.vhd"

path_spec = path_spec_factory.Factory.NewPathSpec(
          definitions.TYPE_INDICATOR_OS, location=source_path)

type_indicators = analyzer.Analyzer.GetStorageMediaImageTypeIndicators(
          path_spec)

if len(type_indicators) > 1:
  raise RuntimeError((
      u'Unsupported source: {0:s} found more than one storage media '
      u'image types.').format(source_path))

if len(type_indicators) == 1:
  path_spec = path_spec_factory.Factory.NewPathSpec(
      type_indicators[0], parent=path_spec)

if not type_indicators:
  # The RAW storage media image type cannot be detected based on
  # a signature so we try to detect it based on common file naming
  # schemas.
  file_system = resolver.Resolver.OpenFileSystem(path_spec)
  raw_path_spec = path_spec_factory.Factory.NewPathSpec(
      definitions.TYPE_INDICATOR_RAW, parent=path_spec)

  glob_results = raw.RawGlobPathSpec(file_system, raw_path_spec)
  if glob_results:
    path_spec = raw_path_spec

volume_path_spec = path_spec_factory.Factory.NewPathSpec(
        definitions.TYPE_INDICATOR_TSK_PARTITION, location=u'/',
        parent=path_spec)

volume_system = tsk_volume_system.TSKVolumeSystem()
volume_system.Open(volume_path_spec)

volume_identifiers = []
for volume in volume_system.volumes:
  volume_identifier = getattr(volume, 'identifier', None)
  if volume_identifier:
    volume_identifiers.append(volume_identifier)
 
print(u'The following partitions were found:')
print(u'Identifier\tOffset\t\t\tSize')

for volume_identifier in sorted(volume_identifiers):
  volume = volume_system.GetVolumeByIdentifier(volume_identifier)
  if not volume:
    raise RuntimeError(
        u'Volume missing for identifier: {0:s}.'.format(volume_identifier))

  volume_extent = volume.extents[0]
  print(
      u'{0:s}\t\t{1:d} (0x{1:08x})\t{2:d}'.format(
          volume.identifier, volume_extent.offset, volume_extent.size))

print(u'')

path_spec = path_spec_factory.Factory.NewPathSpec(
        definitions.TYPE_INDICATOR_TSK_PARTITION, location=u'/p1',
        parent=path_spec)

mft_path_spec = path_spec_factory.Factory.NewPathSpec(
        definitions.TYPE_INDICATOR_TSK, location=u'/$MFT',
        parent=path_spec)

file_entry = resolver.Resolver.OpenFileEntry(mft_path_spec)


stat_object = file_entry.GetStat()

print(u'Inode: {0:d}'.format(stat_object.ino))
print(u'Name: {0:s}'.format(file_entry.name))
extractFile = open(file_entry.name,'wb')
file_object = file_entry.GetFileObject()

data = file_object.read(4096)
while data:
          extractFile.write(data)
          data = file_object.read(4096)

extractFile.close()
file_object.close()

The first thing I changed was the image I'm working from, back to stage2.vhd.

source_path="stage2.vhd"

At this point, though, you should be able to pass it any type of supported image.

Next, after the code we first wrote to list the partitions within an image, we added a new path specification layer to make an object that points to the first partition within the image.

path_spec = path_spec_factory.Factory.NewPathSpec(
        definitions.TYPE_INDICATOR_TSK_PARTITION, location=u'/p1',
        parent=path_spec)
You can see we are using the TSK_PARTITION type again because we know this is a partition, but the location has changed from the prior partition path spec object we made. This is because our prior object pointed to the root of the image so we could iterate through the partitions, while the new object references just the first partition.

Next we make another path specification object that builds on the partition type object.

mft_path_spec = path_spec_factory.Factory.NewPathSpec(
        definitions.TYPE_INDICATOR_TSK, location=u'/$MFT',
        parent=path_spec)

Here we are creating a TSK object and telling it that we want it to point to the file $MFT at the root of the file system. Notice we didn't have to tell it the kind of file system, the offset to where it begins, or any other data. The resolver and analyzer helper classes within dfVFS will figure all of that out for us, if they can. In tomorrow's post we will put in some more conditional code to detect when they in fact cannot do that for us.

So now that we have a path spec object with a reference to the file we want to work with, let's get an object for that file.

file_entry = resolver.Resolver.OpenFileEntry(mft_path_spec)

The resolver helper class's OpenFileEntry function takes the path spec object we made that points to the $MFT and, if it can access it, will return an object that references it.

Next we are going to gather some data about the file we are accessing.

stat_object = file_entry.GetStat()

First we use the GetStat function available on the file entry object to return information about the file into a new object called stat_object. This is similar to running the stat command on a file.

Next we are going to print what I'm referring to below as the inode number:
print(u'Inode: {0:d}'.format(stat_object.ino))

MFTs don't have inodes; this is actually the MFT record number, but the concept is the same. We are reading the stat_object property ino to access the MFT record number. You could also access the size of the file, the dates associated with it, and other data, but this is a good starting place.
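
If you want to peek at a few of those other values, the sketch below uses getattr with defaults so it won't blow up if a given attribute isn't populated. The attribute names are my assumptions about the dfVFS stat object, so verify them against your version.

# Hypothetical peek at other stat attributes; the names (size, crtime) are
# assumptions about the dfVFS stat object and may vary between versions.
print(u'Size: {0!s}'.format(getattr(stat_object, 'size', 'unknown')))
print(u'Created: {0!s}'.format(getattr(stat_object, 'crtime', 'unknown')))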

Next we want to print the name of the file we are accessing.
print(u'Name: {0:s}'.format(file_entry.name))


The file_entry object's name property contains the name. This is much easier than with pyTSK, where we had to walk a meta sub-object property structure to get the file name out.

Now we need to open a file handle to the location where we want to write the MFT data:

extractFile = open(file_entry.name,'wb')

Notice two things. One, we are using the file_entry.name property directly in the open call; this means our extracted file will have the same name as the file in the image. Two, we are passing in the mode 'wb', which means the file handle can be written to and the data written should be treated as binary. This is important on Windows systems, as newline characters in binary data could otherwise be translated unless you pass in the binary mode flag.

Now we need to interact with not just the properties of the file in the image, but the data it's actually storing:

file_object = file_entry.GetFileObject()

We do that by calling the GetFileObject function on the file_entry object. This gives us a file object, just like extractFile, that normal Python functions can read from. The file handle is stored in the variable file_object.

Now we need to read the data from the file in the image and then write it out to a file on the disk.

data = file_object.read(4096)
while data:
          extractFile.write(data)
          data = file_object.read(4096)

First we read from the file handle we opened to the image. We read 4KB of data and then enter a while loop. The while loop says that as long as the read call on file_object returns data, keep reading 4KB chunks. When we reach the end of the file, our data variable will contain an empty result and the while loop will stop iterating.

While there is data, the write function on the extractFile handle writes the data we read, and then we read the next 4KB chunk and go through the loop again.

Lastly, for good measure, we close the handles to both the file within the image and the file we are writing to on our local disk.

extractFile.close()
file_object.close()

And that's it!
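
As an optional refinement (my own, not part of the original script), the read/write loop can be wrapped in a helper that uses a with statement so the output file gets closed even if something fails partway through:

def extract_file_entry(file_entry, output_path, chunk_size=4096):
  # Copy a dfVFS file entry's contents to output_path in chunk_size blocks.
  file_object = file_entry.GetFileObject()
  try:
    with open(output_path, 'wb') as out_file:
      data = file_object.read(chunk_size)
      while data:
        out_file.write(data)
        data = file_object.read(chunk_size)
  finally:
    file_object.close()

# Usage with the file_entry object from above:
# extract_file_entry(file_entry, file_entry.name)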

In future posts we are going to access volume shadow copies, take command line options, iterate through multiple partitions and directories, and add a GUI. Lots to do, but we will do it one piece at a time.

You can download this posts code here on GitHub: https://github.com/dlcowen/dfirwizard/blob/master/dfvfsWizardv3.py

Daily Blog #373: Automating DFIR with dfVFS part 3

Hello Reader,
           In our last post I expanded on the concept of path specification objects. Now let's expand the support of our dfVFS code to go beyond just forensic images and known virtual drives to live disks and raw images.

If you want to show your support for my efforts, there is an easy way to do that. 

Vote for me for Digital Forensic Investigator of the Year here: https://forensic4cast.com/forensic-4cast-awards/

Why is this not supported with the same function call, you ask? Live disks and raw images do not have any magic headers that dfVFS can parse to know what it is dealing with. So instead we need to add some conditional logic to help it test whether what we are working with is a forensic image or a raw disk.

First, as we did last time, let's see what the code looks like now:
import sys
import logging

from dfvfs.analyzer import analyzer
from dfvfs.lib import definitions
from dfvfs.path import factory as path_spec_factory
from dfvfs.volume import tsk_volume_system
## Adding Resolver
from dfvfs.resolver import resolver
## Adding raw support
from dfvfs.lib import raw

source_path="dfr-16-ntfs.dd"

path_spec = path_spec_factory.Factory.NewPathSpec(
          definitions.TYPE_INDICATOR_OS, location=source_path)

type_indicators = analyzer.Analyzer.GetStorageMediaImageTypeIndicators(
          path_spec)

if len(type_indicators) > 1:
  raise RuntimeError((
      u'Unsupported source: {0:s} found more than one storage media '
      u'image types.').format(source_path))

if len(type_indicators) == 1:
  path_spec = path_spec_factory.Factory.NewPathSpec(
      type_indicators[0], parent=path_spec)

if not type_indicators:
  # The RAW storage media image type cannot be detected based on
  # a signature so we try to detect it based on common file naming
  # schemas.
  file_system = resolver.Resolver.OpenFileSystem(path_spec)
  raw_path_spec = path_spec_factory.Factory.NewPathSpec(
      definitions.TYPE_INDICATOR_RAW, parent=path_spec)

  glob_results = raw.RawGlobPathSpec(file_system, raw_path_spec)
  if glob_results:
    path_spec = raw_path_spec

volume_system_path_spec = path_spec_factory.Factory.NewPathSpec(
        definitions.TYPE_INDICATOR_TSK_PARTITION, location=u'/',
        parent=path_spec)

volume_system = tsk_volume_system.TSKVolumeSystem()
volume_system.Open(volume_system_path_spec)

volume_identifiers = []
for volume in volume_system.volumes:
  volume_identifier = getattr(volume, 'identifier', None)
  if volume_identifier:
    volume_identifiers.append(volume_identifier)
 
print(u'The following partitions were found:')
print(u'Identifier\tOffset\t\t\tSize')

for volume_identifier in sorted(volume_identifiers):
  volume = volume_system.GetVolumeByIdentifier(volume_identifier)
  if not volume:
    raise RuntimeError(
        u'Volume missing for identifier: {0:s}.'.format(volume_identifier))

  volume_extent = volume.extents[0]
  print(
      u'{0:s}\t\t{1:d} (0x{1:08x})\t{2:d}'.format(
          volume.identifier, volume_extent.offset, volume_extent.size))

print(u'')


The first difference is two more helper modules from dfVFS being imported:
## Adding Resolver
from dfvfs.resolver import resolver
## Adding raw support
from dfvfs.lib import raw

The first one, resolver, is a helper that attempts to resolve path specification objects to file system objects. You might remember that in pytsk the first thing we did after getting a volume object was to get a file system object. Resolver is doing this for us.

The second is 'raw'. Raw is the module that supports raw images in dfVFS. It defines the RawGlobPathSpec function, which creates a special path specification object for raw images.

Next we are changing what image we are working with to a raw image:
source_path="dfr-16-ntfs.dd"

We are now ready to deal with a raw image, aka a dd image or a live disk/partition.

First we are going to change the conditional logic around our type indicator helper function call. In the first version of the script we knew the type of image we were dealing with, so we didn't bother testing what the type indicator function returned. Now we could be dealing with multiple types of images (forensic image, raw image, unknown types), so we need to put in some conditional testing to deal with it.

if len(type_indicators) > 1: 
  raise RuntimeError((
      u'Unsupported source: {0:s} found more than one storage media '
      u'image types.').format(source_path))

if len(type_indicators) == 1:
  path_spec = path_spec_factory.Factory.NewPathSpec(
      type_indicators[0], parent=path_spec)

The first check we do on what is returned into type_indicators is to see if more than one type has been identified. Currently dfVFS only supports one type of image within a single file. I'm not quite sure when this would happen, but it's prudent to check for. If this condition were to occur, we raise a RuntimeError, printing a message to the user that we don't support this type of media.

The second check is what we saw in the first example: there is one known type of media stored within this image. You can tell we are checking for one type because we are calling the length function on the type_indicators list object and checking to see if the length is 1. We are going to use what is returned ([0] refers to the first element in the list contained within type_indicators) and create our path_spec object for the image. One thing does change here: we are no longer storing what is returned from the NewPathSpec function in a new variable. Instead we are taking advantage of the layering described in the prior post to store the new object in the same variable name, knowing that the prior object has been layered in with the parent being set to path_spec.

Only two more changes and our script is done. Next we need to check whether no known media format was stored in type_indicators. We do that by checking whether nothing is stored in the variable type_indicators using the 'if not' operator. This basically says: if the type_indicators variable is empty (nothing was returned from the function called to populate it), run the following code.

if not type_indicators:
  # The RAW storage media image type cannot be detected based on
  # a signature so we try to detect it based on common file naming
  # schemas.
  file_system = resolver.Resolver.OpenFileSystem(path_spec)
  raw_path_spec = path_spec_factory.Factory.NewPathSpec(
      definitions.TYPE_INDICATOR_RAW, parent=path_spec)



There are two things this code is going to do if no type was returned, indicating this is possibly a raw image. The first is to call the resolver helper class function OpenFileSystem with the path_spec object we have made. If this is successful, we create a new path specification object, manually setting the type of the layer we are adding to TYPE_INDICATOR_RAW, i.e., a raw image.

The last change we make is taking that new raw image path specification and making it work with the rest of the dfVFS functions that may not explicitly work with a raw image object. We do that by calling the raw module's RawGlobPathSpec function and passing it two objects: the first is the file system object we made in the section just above, and the second is the raw_path_spec object we made. RawGlobPathSpec examines those objects and, if it is successful, returns results confirming a path specification that the rest of the library will work with.

  glob_results = raw.RawGlobPathSpec(file_system, raw_path_spec)
  if glob_results:
    path_spec = raw_path_spec

We then test the glob_results variable to make sure something was stored within it, a sign the call ran successfully. If there is in fact something contained within it, we assign the raw_path_spec we built to our path_spec variable.
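
If you plan on reusing this detection logic in later scripts, it can be folded into a single function. This is just a convenience sketch built from the code above; the function name is mine, not something dfVFS provides.

def build_source_path_spec(source_path):
  # Convenience sketch: wrap the image/raw detection logic above in one call.
  path_spec = path_spec_factory.Factory.NewPathSpec(
      definitions.TYPE_INDICATOR_OS, location=source_path)
  type_indicators = analyzer.Analyzer.GetStorageMediaImageTypeIndicators(
      path_spec)

  if len(type_indicators) > 1:
    raise RuntimeError((
        u'Unsupported source: {0:s} found more than one storage media '
        u'image types.').format(source_path))

  if len(type_indicators) == 1:
    return path_spec_factory.Factory.NewPathSpec(
        type_indicators[0], parent=path_spec)

  # No signature matched, so try the RAW globbing used for dd images and
  # live disks.
  file_system = resolver.Resolver.OpenFileSystem(path_spec)
  raw_path_spec = path_spec_factory.Factory.NewPathSpec(
      definitions.TYPE_INDICATOR_RAW, parent=path_spec)
  if raw.RawGlobPathSpec(file_system, raw_path_spec):
    return raw_path_spec
  return path_spec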

That's it!

After running the script this should be what you see:

The following partitions were found:
Identifier	Offset			Size
p1		65536 (0x00010000)	314572800

You can download the image I'm testing with here: http://www.cfreds.nist.gov/dfr-images/dfr-16-ntfs.dd.bz2

You can download the source code for this example from GitHub here: https://github.com/dlcowen/dfirwizard/blob/master/dfvfsWizardv2.py

Tomorrow we continue to add more functionality!

Daily Blog #372: Automating DFIR with dfVFS part 2

Hello Reader,
        In this short post I want to get more into the idea of the path specification objects we made in the prior part. If this post had a catchy title it would be "Zen and the Art of Path Specification".

If you want to show your support for my efforts, there is an easy way to do that. 

Vote for me for Digital Forensic Investigator of the Year here: https://forensic4cast.com/forensic-4cast-awards/

In the prior post, part 1 of the series, we made three path specification objects. I described path specification objects as the cornerstone of understanding dfVFS, which I believe to be true. What I didn't point out is that the path specification objects in that first example code were building on top of themselves like a layer cake.

Let's take a look at the three objects we created again.
path_spec = path_spec_factory.Factory.NewPathSpec(
          definitions.TYPE_INDICATOR_OS, location=source_path)

source_path_spec = path_spec_factory.Factory.NewPathSpec(
            type_indicators[0], parent=path_spec)

volume_system_path_spec = path_spec_factory.Factory.NewPathSpec(
        definitions.TYPE_INDICATOR_TSK_PARTITION, location=u'/',
        parent=source_path_spec)

If you look carefully you will notice there are a couple of differences between the calls to the NewPathSpec function.

1. The type of path specification we are making is changing. We start with an operating system file, then an image (whose type is set by the return of our indicator query), and lastly we are working with a partition.
2. Two of our path specifications declare a location; one does not.
3. Most importantly, source_path_spec and volume_system_path_spec have parents. Those parents are the path specification objects created prior.

So if you were to look at it as one single object with multiple layers, it would look something like this:


------------------------------
|  OS File Path Spec         |
------------------------------
|  TSK Image Type Path Spec  |
------------------------------
|  TSK Partition Path Spec   |
------------------------------
The lowest layer in the object can reference the upper layers. This is why we don't just create one path specification object. Instead, we initialize each layer of the object one call at a time as we determine the type of image, directory, archive, etc. we are working with, allowing our path specification object to reflect the data we are trying to get dfVFS to work with.
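
You can also see this layering directly from the objects themselves, without a debugger. Here is a minimal sketch that walks the parent chain of the last path spec we built and prints each layer; it assumes the type_indicator, location, and parent attributes that dfVFS path spec objects carry.

def print_path_spec_layers(path_spec):
  # Walk from the lowest layer up to the OS layer, printing each one.
  layer = path_spec
  while layer is not None:
    location = getattr(layer, 'location', None)
    print(u'{0:s}\tlocation: {1:s}'.format(
        layer.type_indicator, location if location else u'-'))
    layer = layer.parent

# For the volume_system_path_spec above this should print the TSK partition
# layer, then the image layer, then the OS layer.
print_path_spec_layers(volume_system_path_spec)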

Which part of the dfVFS framework you are working with determines how many of these layers need to exist prior to calling that function with your fully developed path specification object.

As we go farther into the series I will show you how to interact with the files stored in the partitions we listed in part 1. Doing that will add yet another layer to our object, the file system layer. This is very similar to how we built our objects in pyTSK.

If you want to read how Metz explains path specification objects, you can read about them here: https://github.com/log2timeline/dfvfs/wiki/Internals

Tomorrow I will explain how we access raw images, and then Thursday we will extract a file from an image.

Daily Blog #371: Sunday Funday 4/10/16 Winner!

Hello Reader,
           Another challenge has been answered by you, the readership. This week our anonymous winner claims a $200 Amazon gift card for showing what the impact of installing and running PowerForensics is. You too can join the ranks of Sunday Funday winners, and I think I'm going to do something special for all past and future winners so everyone can know of your deeds.




The Challenge:

The term Forensically Sound has a lot of vagueness to it. Let's get rid of the ambiguity regarding what changes when you run the PowerForensics PowerShell script to extract the MFT from a system. Explain what changes and what doesn't, from executing the PowerShell script to extracting the file.


The Winning Answer:
Anonymous Submission

This answer is based on the assumption that you are not connecting to the target system via F-Response or a similar method and that you are running the PowerForensics PowerShell script directly on the target system.  This also assumes that the PowerForensics module is already installed on the system.

When the powershell script is executed, program execution artifacts associated with PowerShell will be created.  These artifacts include the creation of a prefetch file (if application prefetching is enabled), a record in the application compatibility cache (the exact location/structure of which depends on the version of Windows installed), a record in the MUICache, and possibly a UserAssist entry (if the script was double-clicked in Explorer).  In addition, event log records may be created in the Security event log if process tracking is enabled. 

Installing the PowerForensics powershell module will result in different artifacts depending on the version of Powershell installed on the target system.  If the Windows Management Framework version 5 is not installed on the target system, the PowerForensics module can be installed by copying the module files to a directory in the PSModulePath.  Using this method will result in the creation of new files in a directory on the target system, which brings with it the file creation artifacts found in NTFS (e.g. $MFT record creation, USNJrnl record creations, parent directory $I30 updates, changes to the $BITMAP file, etc.).   If the Windows Management Framework version 5 is installed, the Install-Module cmdlet can be used to install.  This may require the installation of additional cmdlets in order to download/install the PowerForensics module, which would result in additional files and directories being created in a directory in the PSModulePath.

Since the script uses raw disk reads to determine the location of the $MFT on disk, it should not impact the $STANDARD_INFORMATION or $FILE_NAME timestamps of the files being copied.

Daily Blog #370: Sunday Funday 4/10/16

Hello Reader,
              If you watched the Forensic Lunch Friday, you would have heard us talking to Jared Atkinson about PowerForensics, his DFIR framework written entirely in PowerShell. Let's see what your determination of its forensic soundness is in this week's Sunday Funday challenge.

The Prize:
$200 Amazon Giftcard

The Rules:

  1. You must post your answer before Monday 4/11/16 3PM CST (GMT -5)
  2. The most complete answer wins
  3. You are allowed to edit your answer after posting
  4. If two answers are too similar for one to win, the one with the earlier posting time wins
  5. Be specific and be thoughtful 
  6. Anonymous entries are allowed; please email them to dcowen@g-cpartners.com. Please state in your email whether or not you would like to be anonymous if you win.
  7. In order for an anonymous winner to receive a prize they must give their name to me, but I will not release it in a blog post



The Challenge:
The term Forensically Sound has a lot of vagueness to it. Let's get rid of the ambiguity regarding what changes when you run the PowerForensics PowerShell script to extract the MFT from a system. Explain what changes and what doesn't, from executing the PowerShell script to extracting the file.

Daily Blog #369: Saturday Reading 4/9/16

Hello Reader,

          It's Saturday! I'm excited to post my first Saturday Reading in almost two years! While I get to work on seeing what's changed in the world of RSS feeds and Twitter tags since I last did this, here is this week's Saturday Reading!

1. We had a great Forensic Lunch this week, with Jared Atkinson talking all about how to do forensics on a live system or mounted image with his PowerShell framework PowerForensics.
You can watch the episode on YouTube here: https://www.youtube.com/watch?v=uCffFc4r4-k

2. Adam over at Hexacorn is continuing to update his tool DeXRAY, which can examine, extract, and detail information about the malware quarantined by 20 different antivirus products. If you've ever been frustrated that the very thing you need to analyze is being withheld by an antivirus product's quarantine, this should help.


3. On the CYB3RCRIM3 blog there is a neat post covering the basic facts and a judge's ultimate opinion regarding a civil case that involved the Computer Fraud and Abuse Act (CFAA). While there are a lot of criminal cases out there that have CFAA charges, there are few civil CFAA cases that I know of, outside of the ones I've been involved in.


4. Harlan has a new post up on his blog, Windows Incident Response. It covers some new WMI persistence techniques he's seen used by attackers in the wild. Not only does Harlan link to a blog he wrote for SecureWorks on the topic, but he also links to a presentation written by Matt Graeber from Mandiant.


5. Also on Harlan's blog, he's let us know that the 2nd edition of Windows Registry Forensics is out!

Read more about it here and get a copy for yourself: http://windowsir.blogspot.com/2016/04/windows-registry-forensics-2e.html

6. The 2016 Volatility Plugin Contest is live! If you have an idea or just want to go through the learning process of how to write a Volatility plugin for cash and prizes you should go here: http://volatility-labs.blogspot.com/2016/04/the-2016-volatility-plugin-contest-is.html

Did I miss something? Let me know in the comments below!

Daily Blog #368: Forensic Lunch 4/8/16 with Jared Atkinson talking about Forensics with Powershell

Hello Reader,
         What a great Forensic Lunch today, with Jared Atkinson talking all about how to do forensics on a live system or mounted image with his PowerShell framework PowerForensics.

You can grab your own copy of PowerForensics on Github here:
https://github.com/Invoke-IR/PowerForensics

Read his Blog here:
www.invoke-ir.com

Vote for him in the Forensic4Cast Awards here:
https://forensic4cast.com/forensic-4cast-awards/
Reminder: I'm up for voting in another category as well!

and of course you can follow him on Twitter here:
https://twitter.com/jaredcatkinson

By the way, if you want to learn Windows forensics with me, I'm scheduled to teach SANS FOR408 Windows Forensics in Houston May 9-14. You can find out more here:
https://www.sans.org/event/houston-2016/course/windows-forensic-analysis


You can watch the episode on YouTube here:
https://www.youtube.com/watch?v=uCffFc4r4-k

It's also on iTunes or you can just watch it below:

Daily Blog #367: Automating DFIR with dfVFS part 1

Hello Reader,
         Today we begin again with a new Automating DFIR series.

If you want to show your support for my efforts, there is an easy way to do that. 

Vote for me for Digital Forensic Investigator of the Year here: https://forensic4cast.com/forensic-4cast-awards/


The last time we started this series (you can read that here: http://www.hecfblog.com/2015/02/automating-dfir-how-to-series-on.html) we were using the pytsk library primarily to access images and live systems. This time around we are going to restart the series from the first steps and show how to do this with the dfVFS library, which makes use of pytsk and many, many other libraries.


         In comparison to dfVFS, pytsk is a pretty simple and straightforward library, but it does have its limitations. dfVFS (Digital Forensics Virtual Filesystem) is not just one library; it's a collection of DFIR libraries with all the glue in between so things work together without reinventing the wheel/processor/image format again. This post will start with opening a forensic image and printing the partition table, much like we did in part 1 of the original Automating DFIR with pytsk series. What is different is that this time our code will work with E01s, S01s, AFF, and other image formats without us having to write additional code for it. This is because dfVFS has all sorts of helper functions built in to determine the image format and load the right library for you to access the underlying data.

Before you get started on this series, make sure you have Python 2.7 x86 installed and have followed the steps in the following updated blog post about how to get dfVFS set up:
http://www.hecfblog.com/2015/12/how-to-install-dfvfs-on-windows-without.html

You'll also want to download our first forensic image we are working with located here:
https://mega.nz/#!ShhFSLjY!RawTMjJoR6mJgn4P0sQAdzU5XOedR6ianFRcY_xxvwY


When I got my new machine set up I realized that a couple of new libraries were not included in the original post, so I updated it. If you followed the post to get your environment set up before yesterday, you should check the list of modules to make sure you have them all installed. Second, on my system I had an interesting issue where the libcrypto library was being installed as crypto but dfVFS was calling it as Crypto (case matters). I had to rename the directory under \python27\lib\site-packages\crypto to Crypto, and then everything worked.

If you want to make sure everything works, then download the full dfVFS package from GitHub (linked in the installing dfVFS post) and run the tests before proceeding any further.

Let's start with what the code looks like:
import sys
import logging

from dfvfs.analyzer import analyzer
from dfvfs.lib import definitions
from dfvfs.path import factory as path_spec_factory
from dfvfs.volume import tsk_volume_system

source_path="stage2.vhd"

path_spec = path_spec_factory.Factory.NewPathSpec(
          definitions.TYPE_INDICATOR_OS, location=source_path)

type_indicators = analyzer.Analyzer.GetStorageMediaImageTypeIndicators(
          path_spec)

source_path_spec = path_spec_factory.Factory.NewPathSpec(
            type_indicators[0], parent=path_spec)

volume_system_path_spec = path_spec_factory.Factory.NewPathSpec(
        definitions.TYPE_INDICATOR_TSK_PARTITION, location=u'/',
        parent=source_path_spec)

volume_system = tsk_volume_system.TSKVolumeSystem()
volume_system.Open(volume_system_path_spec)

volume_identifiers = []
for volume in volume_system.volumes:
  volume_identifier = getattr(volume, 'identifier', None)
  if volume_identifier:
    volume_identifiers.append(volume_identifier)

 
print(u'The following partitions were found:')
print(u'Identifier\tOffset\t\t\tSize')

for volume_identifier in sorted(volume_identifiers):
  volume = volume_system.GetVolumeByIdentifier(volume_identifier)
  if not volume:
    raise RuntimeError(
        u'Volume missing for identifier: {0:s}.'.format(volume_identifier))

  volume_extent = volume.extents[0]
  print(
      u'{0:s}\t\t{1:d} (0x{1:08x})\t{2:d}'.format(
          volume.identifier, volume_extent.offset, volume_extent.size))

print(u'')

As you can tell, this is much larger than our first code example from the pytsk series, which was this:
import sys
import pytsk3
imagefile = "Stage2.vhd"
imagehandle = pytsk3.Img_Info(imagefile)
partitionTable = pytsk3.Volume_Info(imagehandle)
for partition in partitionTable:
  print partition.addr, partition.desc, "%ss(%s)" % (partition.start, partition.start * 512), partition.len


But the smaller, easier-to-read pytsk example is much more limited in functionality compared to what the dfVFS version can do. On its own our pytsk example could only work with raw images; our dfVFS example can work with multiple image types and already has built-in support for multipart images and shadow copies!

Let's break down the code:
import sys
import logging

Here we are just importing two standard Python libraries: sys, the default Python system library, and logging, which gives us a standardized mechanism for logging errors and informational messages that we can tweak to give different levels of detail based on what level of logging is being requested.
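
The script never configures logging explicitly, so if you want those messages to actually show up on the console while you experiment, one line of standard library setup is enough. This is optional and my own addition, not part of the original script:

# Optional: surface warnings/info from dfVFS and this script on the console.
logging.basicConfig(level=logging.INFO, format='%(levelname)s: %(message)s')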

Next we are bringing in multiple dfVFS functions:

from dfvfs.analyzer import analyzer

We are bringing in four helper modules from dfVFS. First is the analyzer, which can determine for us the type of image, archive, or partition we are attempting to access. In its current version it can auto-detect the following:

  • bde - bitlockered volumes
  • bzip2 - bzip2 archives
  • cpio - cpio archives
  • ewf - expert witness format aka e01 images
  • gzip - gzip archives
  • lvm - logical volume management, the linux partitioning system
  • ntfs - ntfs partitions
  • qcow - qcow images
  • tar - tar archives
  • tsk - tsk supported image types
  • tsk partition - tsk identified partitions
  • vhd - vhd virtual drives
  • vmdk - vmware virtual drives
  • shadow volume - windows shadow volumes
  • zip - zip archives

from dfvfs.lib import definitions

The next helper is the definitions module, which maps our named types and identifiers to the values that the underlying libraries expect or return.

from dfvfs.path import factory as path_spec_factory

The path_spec_factory helper library is one of the cornerstones of understanding the dfVFS framework. Path specs are what you pass into most of the dfVFS functions; they contain the type of object you are passing in (from the definitions helper library), the location of the thing (either on the file system you are running the script from or the location within the image you are pointing to), and the parent path spec if there is one. As we go through this code you'll notice we make multiple path specs as we work to build the object we need to pass to the right helper function to access the forensic image.

from dfvfs.volume import tsk_volume_system

This helper library creates a pytsk volume object for us, allowing us to use pytsk to enumerate and access volumes/partitions.

source_path="stage2.vhd"

Here we are creating a variable called source_path and storing within it the name of the forensic image we would like to work with. In future examples we will work with other image types and sizes, but this is a small and simple image. I've tested this script with VHDs and E01s, and both opened without issue and without changing any code other than the name of the file.

path_spec = path_spec_factory.Factory.NewPathSpec(
          definitions.TYPE_INDICATOR_OS, location=source_path)

Our first path_spec object. Here we are calling the path_spec_factory helper's NewPathSpec function to return a path spec object. We are passing in the type of file we are working with, TYPE_INDICATOR_OS, which is defined in the dfVFS wiki as a file contained within the operating system, and we are passing in the source_path variable we made in the line above as the location of that file.

type_indicators = analyzer.Analyzer.GetStorageMediaImageTypeIndicators(
          path_spec)

Next we are letting the Analyzer helper's GetStorageMediaImageTypeIndicators function figure out what kind of image, file system, partition, or archive we are dealing with by its signature. It returns the type into a variable called type_indicators.

source_path_spec = path_spec_factory.Factory.NewPathSpec(
            type_indicators[0], parent=path_spec)

Once we have the type of thing we are working with, we want to generate another path_spec object that has that information within it so our next helper library knows what it is dealing with. We do that by calling the same NewPathSpec function, but now for the type we are passing in the first result that was stored in type_indicators. Now, I am cheating a bit here to make things simple: I should be checking to see how many types are being returned and whether we know what we are dealing with. However, that won't make this program any easier to read, and I'm giving you an image that will work correctly with this code. In future blog posts we will put in the logic to detect and report such errors.
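
For reference, the checks added later in this series (part 3) look like this, so you can see where that error handling ends up:

# Preview of the type checking added in part 3 of this series:
if len(type_indicators) > 1:
  raise RuntimeError((
      u'Unsupported source: {0:s} found more than one storage media '
      u'image types.').format(source_path))

if len(type_indicators) == 1:
  source_path_spec = path_spec_factory.Factory.NewPathSpec(
      type_indicators[0], parent=path_spec)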

volume_system_path_spec = path_spec_factory.Factory.NewPathSpec(
        definitions.TYPE_INDICATOR_TSK_PARTITION, location=u'/',
        parent=source_path_spec)

Another path_spec object! Now that we have a path_spec object that identifies its type as a forensic image, we can create a path_spec object for the partitions contained within it. You can see we are passing in the type of TSK_PARTITION, meaning this is a TSK object that will work with the TSK volume functions. Again, in future posts we will write code to determine whether there are in fact partitions here to deal with, but for now just know it works.

volume_system = tsk_volume_system.TSKVolumeSystem()
volume_system.Open(volume_system_path_spec)

Now we are going back to our old buddy libtsk, aka pytsk. We are creating a TSK volume system object and storing it in volume_system. Then we take the path_spec object we just made, which describes a valid TSK volume layer, pass it into our new TSK volume object, and tell it to open it.

volume_identifiers = []
for volume in volume_system.volumes:
  volume_identifier = getattr(volume, 'identifier', None)
  if volume_identifier:
    volume_identifiers.append(volume_identifier)
Here we are initializing a list object called volume_identifiers and then making use of the volumes attribute on the tsk volume_system object to return a list of volumes, aka partitions, stored within the TSK volume object we just opened. Our for loop will then iterate through each volume returned, and for each volume it will grab the identifier attribute from the volume object and store the result in the volume_identifier variable.

The last line of code checks whether a volume_identifier was returned; if it was, we append it to the list of volume_identifiers we initialized prior to our for loop.

print(u'The following partitions were found:')
print(u'Identifier\tOffset\t\t\tSize')

for volume_identifier in sorted(volume_identifiers):
  volume = volume_system.GetVolumeByIdentifier(volume_identifier)
  if not volume:
    raise RuntimeError(
        u'Volume missing for identifier: {0:s}.'.format(volume_identifier))

  volume_extent = volume.extents[0]
  print(
      u'{0:s}\t\t{1:d} (0x{1:08x})\t{2:d}'.format(
          volume.identifier, volume_extent.offset, volume_extent.size))

print(u'')
In this last bit of code we are printing out the information we know so far about these partitions/volumes. We do that with a for loop over our volume_identifiers list. For each volume identifier stored within it, we call the GetVolumeByIdentifier function and store the returned object in the volume variable.

We then print three properties from the volume object returned: the identifier (the partition or volume number), the offset to where the volume begins (in decimal and hex), and lastly how large the volume is.

Woo, that's it! I know that is a lot to go through for an introduction post, but it all builds on this, and within a few posts you will really begin to understand the power of dfVFS.

You can download this Python script from GitHub here:
https://github.com/dlcowen/dfirwizard/blob/master/dfvfsWizardv1.py

Daily Blog #366: The return to Daily Blogging and pytsk vs dfvfs

Hello Reader,
               As crazy as it sounds, I've missed doing daily blogs. It forced me to keep looking, reading, and thinking about new things to write about and do. The Forensic Lunch podcast is still going strong and is not going away, but that is more me leaning on others in the community to talk about what they are doing, and less about forcing myself to document and share my own research.

So with that in mind, let's set our schedule for this blog.

Sunday - Sunday Funday returns; prepare yourself for more forensic fun and real prizes
Monday - Sunday Funday results
Tuesday - Daily Blog entry
Wednesday - Daily Blog entry
Thursday - Daily Blog entry
Friday - Either Forensic Lunch or a video tutorial depending on the broadcast schedule
Saturday - Saturday reading will return

This year you can expect more blogs about new artifacts, old artifacts, Triforce, journal forensics, Python programming for DFIR, and more.

If you want to show your support for my efforts, there is an easy way to do that. 

Vote for me for Digital Forensic Investigator of the Year here: https://forensic4cast.com/forensic-4cast-awards/

Otherwise, get involved! Leave comments, tell your friends about the blog/podcast, send me a tweet, or drop me an email (dcowen@g-cpartners.com); it's always more fun when we all talk and work together. Windows 10 is out, OS X keeps getting updated with new features, Ubuntu is running on Windows, iOS and Android keep getting more interesting, and so much more is out there to be researched!

So, with it being Wednesday, let's get into our first topic, which leads into my next planned blog posts.


PYTSK v DFVFS


If you read the blog last year you would have seen a series of posts under the series title Automating DFIR. As you may have noticed, I stopped after part 13 and haven't continued the series since. There is a reason for this, and the reason is not that I got tired of writing about it. Instead, I hit the wall that required us to use dfVFS in Triforce: shadow copies and E01s. Libvshadow is an amazing library, but as a standalone library it requires a raw disk image or a live disk; it does not support other forensic image formats directly.

I looked into ways around this by reading the Plaso code and seeing what glue they were using to shape the object and the superclasses in such a way that the libewf object would work with libvshadow, but I realized in doing so that I was just creating more problems for myself that were already solved. dfVFS (Digital Forensics Virtual Filesystem) was created to solve the known issues with all the different image formats and the libraries that need to access them, as a framework and wrapper that allows all of these things to work together. dfVFS is more than just shadow copy access in E01s: it provides a wrapper around all of the forensic image and virtual disk formats that Metz's libraries support, which means you can write one piece of code to load any disk type rather than writing five functions to deal with each image format and its care and feeding.


I was initially worried about using dfVFS on the blog because of the effort that it appears to take to get it up and going. However, with 13 blog posts already out there showing how to make pytsk work for simple solutions, I think it's time to switch gears and libraries to allow us to accomplish more interesting and complicated tasks together with dfVFS directly.

So with that in mind, your homework, dear reader, is to read this post: http://www.hecfblog.com/2015/12/how-to-install-dfvfs-on-windows-without.html and be prepared for tomorrow's first post showing how to work with this amazing collection of libraries.