Monday, March 20, 2017

DFIR Exposed #1: The crime of silence

Hello Reader,
          I've often been told I should commit to writing down some of the stories from the cases we've worked so as not to forget them. I've been told I should write a book of them, and maybe some day I will. Until then I wanted to share some of the cases where things went outside the norm, to help you be aware not of what usually happens, but of what happens when humans get involved.

Our story begins...
It's early January, the Christmas rush has just ended, and my client reaches out to me stating:

"Hey Dave, Our customer has suffered a breach and credit cards have been sent to an email address in Russia"

No problem; this is unfortunately fairly common, so I respond that we can meet the client as soon as they are ready. After contracts are signed we are informed there are two locations we need to visit.

1. The datacenter where the affected servers are hosted, which have not yet been preserved
2. The offices where the developers who noticed the breach worked

Now at this point you are saying, Dave... you do IR work? You don't talk about that much. No, we don't talk about it much, for a reason. We do IR work mainly through attorneys to preserve privilege, and I've always been worried that making it a public part of our offering would affect the DF part of our services, as my people would be flying around the country.

BTW, did you know that IR investigations led by an attorney are considered work product under case law, from a case I'm happy to say I worked on? Read more here: https://www.orrick.com/Insights/2015/04/Court-Says-Cyber-Forensics-Covered-by-Legal-Privilege

So we send out one examiner to the datacenter while we gather information from the developers. Now you may be wondering why we were talking to the developers and not the security staff. It was the developers who found the intrusion, after trying to track down an error in their code and comparing the production code to their checked-in repository. Once they compared them they found a change in their shopping cart showing that the form submitted with the payment instructions was being processed while also being emailed to a Russian-hosted email address.

The developers claimed this was their first knowledge of any change, and company management was quite upset with the datacenter, which in their view was supposed to provide security under the hosting contracts they had signed. Ideas of liability and litigation against the hosting provider were floating around, and I was put on notice to see what existed to support that.

It was then that I got a call from my examiner at the datacenter. He let me know that one of the hosting company's employees had handed him a thumb drive while he was imaging the systems, saying only:

 'You'll want to read this'

You know what? He was right!

On the thumb drive was a transcript of a ticket opened by the hosting company's SOC. The transcript revealed that a month earlier the SOC staff had informed the same developers, who claimed to have no prior knowledge of an intrusion, that a foreign IP had logged into their VPS as root ... and that probably wasn't a good thing.

I called the attorney right away and let her know she likely needed to switch her focus from possible litigation against the hosting provider to an internal investigation to find out what actually happened. Of course we still needed to finish our investigation of the compromise itself to make sure the damage was understood from a notification perspective.

Step 1. Analyzing the compromised server

Luckily for us, the SOC ticket showed us when the attacker had first logged in as the root account, which we were able to verify through the carved syslog files. We then went through the servers, located the affected files, established the mechanism used, and helped them define the time frame of compromise so they could go through their account records to find all the affected customers.
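
If you're curious what that verification looks like in practice, here is a minimal sketch of checking carved logs for root logins. It assumes the attacker came in over SSH and that the carved data contains standard Linux auth.log-style sshd entries; the directory name is hypothetical.

import glob
import re

# Hypothetical directory holding the carved syslog/auth.log fragments.
CARVED_LOGS = glob.glob("carved_logs/*.log")

# Standard sshd success lines record the account and the source IP address.
LOGIN_RE = re.compile(
    r"(?P<ts>\w{3}\s+\d+\s[\d:]+).*sshd\[\d+\]: "
    r"Accepted \S+ for (?P<user>\S+) from (?P<ip>[\d.]+)")

for log_path in CARVED_LOGS:
    with open(log_path, errors="replace") as log_file:
        for line in log_file:
            match = LOGIN_RE.search(line)
            if match and match.group("user") == "root":
                # Every root login and its source IP goes into the timeline.
                print(log_path, match.group("ts"), match.group("ip"))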

Unfortunately for our client, it was the Christmas season, one of their busiest times of the year. Luckily for the client, it happened after Black Friday, which IS their busiest time of the year. After identifying the access, modifications and exfil methods we turned our focus to the developers.

We talked to the attorney and came up with a game plan. First we would inform them that we needed to examine each of their workstations to make sure they were not compromised and open for re-exploitation, which was true. Then we would go back through their emails, chat logs and forensic artifacts to understand what they knew and/or did when they were first notified of the breach. Lastly we would bring them in to be interviewed to see who would admit to what.

Imaging the computers was uneventful, as you always hope it will be, but the examination turned out to be very interesting. The developers used Skype to talk to each other, and if you've ever analyzed Skype before you know that by default it keeps history forever. There in the Skype chats were the developers talking to each other about the breach when it happened, asking each other questions about the attacker's IP address, passing links and answers back and forth.
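
For context, the legacy Skype client stored that history in a SQLite database named main.db under the user's profile, with one row per message in a Messages table. A minimal sketch of pulling a conversation back out might look like the following; the evidence path is hypothetical and column names can vary between client versions.

import sqlite3
from datetime import datetime, timezone

# Hypothetical path to a collected copy of a developer's Skype profile database.
db = sqlite3.connect("evidence/dev1/main.db")

# Legacy Skype stores one row per message; timestamp is a Unix epoch value.
rows = db.execute(
    "SELECT timestamp, author, body_xml FROM Messages ORDER BY timestamp")

for ts, author, body in rows:
    when = datetime.fromtimestamp(ts, tz=timezone.utc)
    print(u"{0:%Y-%m-%d %H:%M:%S} {1}: {2}".format(when, author, body))

db.close()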

And then.... Nothing

Step 2. Investigating the developers

You see, investigations are not strictly about the technical analysis in many cases (some are, though); there is always the human element, which is why I've stayed so enthralled by this field for so long. In this case the developers were under the belief that they were going to be laid off after Christmas, so rather than take action they decided it wasn't their problem and went on with their lives. They did ask the hosting provider for recommendations on what to do next, but never followed up on them.

A month later they were informed they were not being laid off and instead were going to be transferred to a different department. With the knowledge that this was suddenly their problem again, they decided to actually look at the hosted system and found the modified code.


Step 3. Wrapping it up

So, knowing this and having compared notes with the attorney, we brought them in for interviews.

In the first round we simply asked questions to see what they would say, who would admit what, and possibly who could keep their jobs. When we finished talking to all the developers, all of whom pretended to know nothing of the earlier notification, we documented their 'facts' and thanked them.

Then we asked them back in and, one fact at a time, showed them what we knew. Suddenly memories returned, apologies were given and the chronology of events was established. As it turns out, the developers never notified management of the issue until they knew they were going to remain employed; they just sat on it.

Needless to say, they no longer had that transfer option open as they were summarily terminated.

So in this case a breach that should have lasted 4 hours at most (from the SOC's login notice to remediation) lasted through 30 days of Christmas shopping, because the developers of the eCommerce site committed the crime of silence for purely human reasons.


Wednesday, February 15, 2017

SOPs in DFIR

Hello Reader,
      It's been a while! Sorry I haven't written sooner. Things are great here at camp, aka G-C Partners, where the nerds run the show. Two years ago or so I was lucky enough to work with one of our favorite customers on generating some standard operating procedures for their DFIR lab. While we list forensic lab consulting as a service on our website, we don't get to engage in helping other labs improve as often as I'd like.

It may sound like a bad idea for a consulting company to help a potential client get better at what we both do; some may see this as a way of preventing future work for yourself. In my view this is short sighted. In my world of DFIR, and especially in the court testimony/expert witness world, the better the internal lab is, the better my life is when they decide to litigate a case. If I've helped you reach the same standard of work as my lab, then I can spend my time using the prior work as a cheat sheet to validate faster and then look for the newest techniques or research that could potentially find additional evidence.

Now, beyond the business aspects of helping another lab improve, I want to talk about the first reactions that I usually get, and used to have myself, regarding making SOPs for what we do. SOPs, or Standard Operating Procedures, are a good thing (TM) as they help set basic standards in methodology, quality and templated output without coming at the expense of creative solutions... if they are done right.

When I was still doing penetration testing work I was first asked to try to make SOPs for my job. I balked at the idea, stating that you can't proceduralize my work, there are too many variables! While this was true for the entire workflow, what I didn't want to admit at the time is that there were several parts of my normal work that were ripe for procedures to be created. I didn't want to admit it because that would mean additional work for myself in creating documentation I saw as something that would slow down my job. When I started doing forensic work in 1999 I was asked the same question for my DFIR work and again pushed back, stating there were too many variables in an investigation to try to turn it into a playbook.

I was wrong then and you may be wrong now. The first thing you have to admit to yourself, and your coworkers, is that regardless of the outliers in our work there are certain things we always do. Creating these SOPs will let new and existing team members do more consistent work and save you from continually repeating yourself about what to do and what they should give you at the end of it. This kind of SOP will work for you if you create a SOP that works more like a framework than a restrictive set of steps.

For examples of how SWGDE makes SOPs for DFIR look here:
https://www.swgde.org/documents/Current%20Documents/SWGDE%20QAM%20and%20SOP%20Manuals/SWGDE%20Model%20SOP%20for%20Computer%20Forensics

Read on to see what I do.

What you want:


  • You want the SOP for a particular task to work more like stored knowledge 
  • You want to explain what that particular task can and can't do to prevent confusion or misinformation
  • You want to establish what to check for to make sure everything ran correctly to catch errors early and often
  • You want to provide alternative tools or techniques that work in case your preferred tool or method fails
  • You want to link to tool documentation or blogs/articles that give more detail on what's being done in case people want to know more
  • You want to establish a minimum set of requirements for what the output is to prevent work being done twice
  • You want to store these somewhere accessible, like a wiki/SharePoint/Dropbox/Google Doc, so people can easily refer to them and, more importantly, you can easily refer people to them
  • You want to build internal training that works to teach people how to perform tasks with the SOPs so they become part of your day to day work not 'that thing' that no one wants to use
  • You want your team to be part of making the SOP more helpful while keeping to these guidelines of simplicity and usability



What you don't want:

  • You don't want to create a step by step process for someone to follow; that's not a SOP, those are instructions
  • You don't want to create a checklist of things to do; if you do that, people will only follow the checklist and feel confined to it
  • You don't want to use words like must/shall/always unless you really mean them; whatever you write in your SOP will be used to judge your work. Keep the language flexible and open so the SOPs serve as guidelines that you, as an expert, can navigate around when needed
  • You don't want to put a specific person's name or email address in; people move around and your SOPs will quickly fall out of date
  • You don't want to update your SOPs every time a tool version changes, so make sure you are not making them so specific that a change of one option or parameter breaks them
  • You don't want to make these by committee; just assign them to people with an example SOP you've already made and then show them to the team for approval/changes

In the end what you are aiming for is to have a series of building blocks of different common tasks you have in your investigative procedures that you can chain together for different kinds of cases.

If that's hard to visualize, let's go through an example SOP for prefetch files and how it fits into a larger case flow. In this example I am going to show the type of data I would put into the framework, not the actual SOP I would write.

Why not just give you my SOP for prefetch files? My SOP will be different from yours. We have an internal tool for prefetch parsing, I want different output to feed our internal correlation system, and I likely want more data than most people think is normal.



Example framework for Prefetch files:

Requirements per SOP:

  •     Scope
    •  This SOP covers parsing Prefetch files on Windows XP-8.1. 
  • Limitations
    • The prefetch subsystem may be disabled if the system you are examining is running an SSD and is Windows 7 or newer, or if it is running a server version of Windows. The prefetch directory only stores prefetch files for the last 128 executables run on Windows XP - 7. You will need to recover shadow copies of prefetch files and carve for prefetch files to find all possible prefetch entries on an image.
  •  Procedure
    • Extract prefetch files
    • Parse prefetch files with our preferred tool, gcprefetchparser.py
  •   Expected output
    • A json file for each prefetch
  •   Template for reporting
    • Excel spreadsheet, see prefetch.xlsx for an example 
  •  QA steps
    • Validate that timestamps are being correctly formatted in excel
    • If there are no prefetch files determine if a SSD was present or if anti forensics occurred
  •  Troubleshooting
    • Make sure timestamps look correct
    • Validate that paths and executable names fit within columns
    • Make sure the number of prefetch files present is equal to the number of files you parsed
    • Remember that the prefetch format changed in Windows 10; you must use the Win10 Prefetch SOP
    • Remember that Windows Server OSes do not have Prefetch enabled by default
  • Alternative tools 
    • Tzworks pf 
  • Next Steps
    • Shimcache parsing
  • References
    • Links to prefetch format from metz 
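
To make the QA and Troubleshooting bullets above concrete, here is a minimal sketch of the kind of automated check an SOP can point to: confirm that every extracted .pf file produced a parsed record and that each record carries a usable run timestamp. The directory names and the last_run_time field are hypothetical; substitute whatever your parser actually emits.

import glob
import json
import os

# Hypothetical locations for the extracted prefetch files and the parser's JSON output.
PF_DIR = "extracted/Prefetch"
JSON_DIR = "output/prefetch_json"

pf_files = glob.glob(os.path.join(PF_DIR, "*.pf"))
json_files = glob.glob(os.path.join(JSON_DIR, "*.json"))

# QA: the number of parsed results should match the number of .pf files.
if len(pf_files) != len(json_files):
    print("WARNING: {0} prefetch files but {1} parsed results".format(
        len(pf_files), len(json_files)))

# QA: spot check that each parsed record has a plausible run timestamp.
for json_path in json_files:
    with open(json_path) as json_file:
        record = json.load(json_file)
    if not record.get("last_run_time"):
        print("WARNING: {0} has no last run timestamp".format(
            os.path.basename(json_path)))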

Now you have a building block SOP for just parsing prefetch files. Now you can create workflows that combine a series of SOPs to guide an examiner without locking them into a series of steps. Here is an example for a malware case.

Malware workflow:
  1. Identify source of alert and indicators to look for
  2. Follow triage SOP
  3. Volatility processes SOP
  4. Prefetch SOP
  5. Shimcache SOP
  6. MFT SOP
  7. Userassist SOP
  8. timeline SOP
  9. Review prior reports to find likely malicious executable
You can then reuse the same SOPs for other workflows that range from intrusions to intellectual property cases, as sketched below. The goal is not to document how to do an entire case but to standardize and improve the parts you do over and over again for every case, with an eye on automating and eliminating errors to make your job easier/better and your team's work better.
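
If the building block idea is easier to see in code, here is a minimal sketch of chaining SOPs into named workflows. The SOP names and the second workflow are purely illustrative placeholders; the point is that each SOP stays an independent block you can reorder and reuse.

# Each workflow is just an ordered list of SOP names; the SOPs themselves
# remain independent documents (and, where automated, independent scripts).
WORKFLOWS = {
    "malware": [
        "triage", "volatility_processes", "prefetch",
        "shimcache", "mft", "userassist", "timeline",
    ],
    # Hypothetical second case type reusing most of the same blocks.
    "intellectual_property": [
        "triage", "lnk_files", "shellbags", "usb_devices", "timeline",
    ],
}

def run_workflow(case_type, case_id):
    """List the SOPs an examiner should follow, in order, for a case type."""
    for step, sop in enumerate(WORKFLOWS[case_type], start=1):
        print("[{0}] step {1}: follow the '{2}' SOP".format(case_id, step, sop))

run_workflow("malware", case_id="2017-001")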

Now, I don't normally do this, but if you are looking at this and saying to yourself, "I don't have the time or resources to do this, but I do have the budget for help," then reach out:

info@g-cpartners.com or me specifically dcowen@g-cpartners.com

We do some really cool stuff and we can help you do cool stuff as well. I try to keep my blogs technical and factual, but I feel that sometimes I hold back on talking about what we do to the detriment of you, the reader. So, to be specific, for customers who do engage us to help them improve their labs/teams we:

1. Provide customized internal training on SOPs, Triforce, internal tools, advanced topics
2. Create custom tools for use in your environment to automate correlation and workflows
3. Create SOPs and processes around your work
4. Provide Triforce and internal G-C tool site licenses and support
5. Do internal table top scenarios 
6. Do report and case validation to check on how your team is performing and what you could do better
7. Build out GRR servers and work with your team to teach you how to use it and look for funny business aka threat hunting
8. Act as a third party to evaluate vendors trying to sell you DFIR solutions


Ok, I'm done talking about what we do; hopefully this helps someone. I'll be posting again soon about my new forensic workstation, and hopefully there will be more posts in the near future.


Thursday, September 22, 2016

Building your own travel sized virtual lab with ESXi and the Intel SkullCanyon NUC

Hello Reader,
          It's been a while, I know; sorry for not writing sooner, but to quote Ferris Bueller:

"Life moves pretty fast. If you don't stop and look around once in a while, you could miss it."

So while I've worked on a variety of cases, projects and new artifacts to share, I've neglected the blog. For those of you who have been watching/listening, you know I've kept up the Forensic Lunch videocast/podcast, but to be fair the blog is my first child and I've left it idle for too long.

Speaking of the Forensic Lunch if you watched this episode:
https://www.youtube.com/watch?v=Ru8fLioIVlA

You would have seen me talk about building my own portable cloud for lab testing and research. People seem to have received this very well and I've thoroughly enjoyed using it! So to that end I thought I would detail how I set this up in case you wanted to do the same.

Step 1. Make an account on vmware.com (https://my.vmware.com/web/vmware/registration)

Step 2. Using Chrome (I had some errors in Firefox, not sure why), go to this page to register for the free version of ESXi. (Note: this is the free version of ESXi that will generate a license key for life; the other version will expire after 60 days.)
https://my.vmware.com/en/group/vmware/evalcenter?p=free-esxi6

Step 3. Make a note of your license key, as seen in the picture below. You'll want to copy and paste this and keep it, as it won't show up as a license key associated with your MyVmware account.


Step 4. Click to download the product named "ESXi ISO image (Includes VMware Tools)". You could also download the vSphere client at this point, or you can grab it from a link embedded within the ESXi homepage once you get it installed.

Step 5. After downloading the ISO you will need to put it onto some form of bootable media for it to install onto your Intel Skull Canyon NUC, as it has no optical drive of its own. I chose to use a USB thumb drive. To turn the ISO into a successfully booting USB drive I used Rufus, and you can too.

Step 5a. Download Rufus: https://rufus.akeo.ie/downloads/rufus-2.11.exe
Step 5b. Execute Rufus
Step 5c. Configure Rufus to look something like what I have below, where Device is the USB thumb drive you have plugged in and under ISO image the ESXi ISO file you downloaded is selected, then click Start.






Step 6. With your ESXi media now on a bootable USB drive you are ready to move on to the Intel Skull Canyon NUC itself. Start by actually getting one! I got mine at Fry's Electronics; Microcenter also carries them, and they both price match Amazon now. If you want to get it online I would recommend Amazon, and you can support a good charity while doing so by using smile.amazon.com. I support the Girl Scouts of Northeast Texas with my purchases.

Link to Intel Skull Canyon NUC:
https://smile.amazon.com/Intel-NUC-Kit-NUC6i7KYK-Mini/dp/B01DJ9XS52/ref=sr_1_1?ie=UTF8&qid=1474577754&sr=8-1&keywords=skull+canyon

The NUC comes with a processor, case, power supply and fans all built in or in the box. What you will need to provide is the RAM and storage.


Storage
I used the Samsung 950 Pro Series 512GB NVMe M.2 drive; the NUC can actually fit two of these, but one has been enough so far for my initial testing.

Link to storage drive:
https://smile.amazon.com/Samsung-950-PRO-Internal-MZ-V5P512BW/dp/B01639694M/ref=pd_bxgy_147_img_2?ie=UTF8&psc=1&refRID=7N9JV1CX8FJQ4Y3JT858

RAM
For RAM I used Kingston HyperX with two 16GB sticks to get the full 32GB of RAM this unit is capable of.
Link to the RAM here:
https://smile.amazon.com/Kingston-Technology-2133MHz-HX421S13IBK2-32/dp/B01BNJL96A/ref=pd_sim_147_2?ie=UTF8&pd_rd_i=B01BNJL96A&pd_rd_r=7N9JV1CX8FJQ4Y3JT858&pd_rd_w=eFJsO&pd_rd_wg=HPiy3&psc=1&refRID=7N9JV1CX8FJQ4Y3JT858

You can use other storage and RAM of course; I used these because I wanted the speed of NVMe M.2 (2GB/sec reads and 1.5GB/sec writes) with all the memory I could get to feed the VMs that will be running on the NUC.

Step 7. Put the storage and RAM into the NUC, plug it into the wall, attach a USB keyboard and mouse, attach a monitor, and boot into the Intel Visual BIOS. You will need to disable the Thunderbolt controller on the NUC before installing ESXi; you can re-enable it after you are done installing.

To see what to click specifically in order to do this go here:
http://www.virten.net/2016/05/esxi-installation-on-nuc6i7kyk-fails-with-fatal-error-10-out-of-resources/

Step 8. Pop in the bootable USB drive and install ESXi.

You are now ready to start loading ISOs and VMs into your datastore, and in the next blog post I'll show how to create an isolated virtual network to put them on.

Sunday, April 24, 2016

Daily Blog #381 National CCDC Redteam Debrief

Hello Reader,
     The 11th year of the National Collegiate Cyber Defense Competition has ended; congratulations to the University of Central Florida for their third consecutive win. I hope you make it back next year for another test of your school's program and its ability to transfer knowledge to new generations of blue teams.

If you want to show your support for my efforts, there is an easy way to do that. 

Vote for me for Digital Forensic Investigator of the Year here: https://forensic4cast.com/forensic-4cast-awards/


However, the team that won over the Red Team was the University of Tulsa, who came with a sense of humor. Behold their hat and badges:


Also you have to check out the player cards they made here:
https://engineering.utulsa.edu/news/tu-cyber-security-expert-creates-trading-cards-collegiate-cyber-defense-competition/

Here is my favorite:


You can download my Redteam debrief here:
https://drive.google.com/file/d/0B_mjsPB8uKOAcUQtOThUNUpTZ0k/view?usp=sharing

Friday, April 22, 2016

Daily Blog #380: National CCDC 2016

Hello Reader,
           I'm in San Antonio for the National Collegiate Cyber Defense Competition, which starts at 10am CST 4/22/16. If you didn't know, I lead the Red Team here at Nationals, where the top 10 college teams in the country come to find out who does the best job of defending their network while completing business objectives.

I'm hoping to follow up this post with some videos and links to what happens tomorrow. In the meantime, make sure to follow #CCDC or #NCCDC on Twitter to watch some of our funny business in real time.

Wednesday, April 20, 2016

Daily Blog #379: Automating DFIR with dfVFS part 6

Hello Reader,
         It's time to continue our series by iterating through all the partitions within a disk or image, instead of just hard coding one. To start with you'll need another image, one that not only has more than one partition but also has shadow copies for us to interact with next.

You can download the image here:
https://mega.nz/#!L45SRYpR!yl8zDOi7J7koqeGnFEhYV-_75jkVtI2CTrr14PqofBw


If you want to show your support for my efforts, there is an easy way to do that. 

Vote for me for Digital Forensic Investigator of the Year here: https://forensic4cast.com/forensic-4cast-awards/


First let's look at the code now:

import sys
import logging

from dfvfs.analyzer import analyzer
from dfvfs.lib import definitions
from dfvfs.path import factory as path_spec_factory
from dfvfs.volume import tsk_volume_system
from dfvfs.resolver import resolver
from dfvfs.lib import raw

source_path="Windows 7 Professional SP1 x86 Suspect.vhd"

path_spec = path_spec_factory.Factory.NewPathSpec(
          definitions.TYPE_INDICATOR_OS, location=source_path)

type_indicators = analyzer.Analyzer.GetStorageMediaImageTypeIndicators(
          path_spec)

if len(type_indicators) > 1:
  raise RuntimeError((
      u'Unsupported source: {0:s} found more than one storage media '
      u'image types.').format(source_path))

if len(type_indicators) == 1:
  path_spec = path_spec_factory.Factory.NewPathSpec(
      type_indicators[0], parent=path_spec)

if not type_indicators:
  # The RAW storage media image type cannot be detected based on
  # a signature so we try to detect it based on common file naming
  # schemas.
  file_system = resolver.Resolver.OpenFileSystem(path_spec)
  raw_path_spec = path_spec_factory.Factory.NewPathSpec(
      definitions.TYPE_INDICATOR_RAW, parent=path_spec)

  glob_results = raw.RawGlobPathSpec(file_system, raw_path_spec)
  if glob_results:
    path_spec = raw_path_spec

volume_path_spec = path_spec_factory.Factory.NewPathSpec(
        definitions.TYPE_INDICATOR_TSK_PARTITION, location=u'/',
        parent=path_spec)

volume_system = tsk_volume_system.TSKVolumeSystem()
volume_system.Open(volume_path_spec)

volume_identifiers = []
for volume in volume_system.volumes:
  volume_identifier = getattr(volume, 'identifier', None)
  if volume_identifier:
    volume_identifiers.append(volume_identifier)
 
print(u'The following partitions were found:')
print(u'Identifier\tOffset\t\t\tSize')

for volume_identifier in sorted(volume_identifiers):
  volume = volume_system.GetVolumeByIdentifier(volume_identifier)
  if not volume:
    raise RuntimeError(
        u'Volume missing for identifier: {0:s}.'.format(volume_identifier))

  volume_extent = volume.extents[0]
  print(
      u'{0:s}\t\t{1:d} (0x{1:08x})\t{2:d}'.format(
          volume.identifier, volume_extent.offset, volume_extent.size))

  volume_path_spec = path_spec_factory.Factory.NewPathSpec(
        definitions.TYPE_INDICATOR_TSK_PARTITION, location=u'/'+volume_identifier,
        parent=path_spec)

  mft_path_spec = path_spec_factory.Factory.NewPathSpec(
        definitions.TYPE_INDICATOR_TSK, location=u'/$MFT',
        parent=volume_path_spec)

  file_entry = resolver.Resolver.OpenFileEntry(mft_path_spec)


  stat_object = file_entry.GetStat()

  print(u'Inode: {0:d}'.format(stat_object.ino))
  print(u'Name: {0:s}'.format(file_entry.name))
  outFile = volume_identifier+file_entry.name
  extractFile = open(outFile,'wb')
  file_object = file_entry.GetFileObject()

  data = file_object.read(4096)
  while data:
      extractFile.write(data)
      data = file_object.read(4096)

  extractFile.close()
  file_object.close()
  volume_path_spec=""
  mft_path_spec=""

Believe it or not, we didn't have to change much here to go from looking at one partition and extracting the $MFT to extracting it from all the partitions. We only had to do four things.

1. We moved our file extraction code over by one indent, allowing it to execute as part of the for loop we first wrote to print out the list of partitions in an image. Remember that in Python we don't use braces to determine how the code will be executed; it's all indentation that decides how the code logic will be read and followed.
2. Next we changed the location our volume path specification object is set to, from a hard coded /p1 to whatever volume identifier we are currently looking at in the for loop.

 volume_path_spec = path_spec_factory.Factory.NewPathSpec(
        definitions.TYPE_INDICATOR_TSK_PARTITION, location=u'/'+volume_identifier,
        parent=path_spec)

You can see that the location argument is now set to u'/' appended with the volume_identifier variable. This resolves to /p1, /p2, etc., for as many partitions as we have on the image.

3. Now that we are going to extract this file from multiple partitions we don't want to overwrite the file we previously extracted, so we need to make the file name unique. We do that by prefixing the file name with the partition identifier.

  outFile = volume_identifier+file_entry.name
  extractFile = open(outFile,'wb')

This results in a file named p1$MFT, p2$MFT, and so on. To accomplish this we make a new variable called outFile, which is set to the partition identifier (volume_identifier) followed by the file name (file_entry.name). Then we pass that to the open() call we wrote before.

4. One last simple change.

volume_path_spec=""
mft_path_spec=""

We are setting our partition and file path spec objects back to empty values. Why? Because otherwise they remain set from the prior iteration and will just keep appending onto the prior object. That will result in very funny errors.

That's it! No more code changes. 

You can get the code from Github: 
https://github.com/dlcowen/dfirwizard/blob/master/dfvfsWizardv4.py


In the next post we will be iterating through shadow copies!

Tuesday, April 19, 2016

Daily Blog #378: Automating DFIR with dfVFS part 5

Hello Reader,

Wondering where yesterday's post is? Well, there was no winner of last weekend's Sunday Funday.
That's ok though because I am going to post the same challenge this Sunday so you have a whole week to figure it out!

-- Now back to our regularly scheduled series --

              I use Komodo from ActiveState as my IDE of choice when writing Perl and Python. I bring this up because one of the things I really like about it is the debugger it comes with, which allows you to view all of the objects you have made and their current assignments. I was thinking about the layer cake example I crudely drew in ASCII in a prior post when I realized I could show this much better from the ActiveState debugger.

So here is what the path spec object we made to access the $MFT in a VHD looks like.


I've underlined in red the important things to draw your attention to when you are trying to understand how that file path specification object we built can access the MFT and all the other layers involved.

So if you look you can see, from the top down, it's (a code sketch of the same chain follows the list):

  • TSK Type with a Location of /$MFT
    • With a parent of TSK Partition type with a location of /p1
      • With a parent of VHDI type 
        • With a parent of OS type with a location of the full path to where the vhd I'm working with sits.
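
For reference, that layer cake is the same chain we have been building with the path spec factory in this series. A condensed sketch of constructing it explicitly for a VHD (in the actual script the VHDI layer is picked automatically from the analyzer's type indicators) looks like this:

from dfvfs.lib import definitions
from dfvfs.path import factory as path_spec_factory

# Bottom layer: the image file as it sits on the examiner's operating system.
os_spec = path_spec_factory.Factory.NewPathSpec(
    definitions.TYPE_INDICATOR_OS,
    location=u"Windows 7 Professional SP1 x86 Suspect.vhd")

# Next layer: the VHD container (EWF or RAW would slot in here instead).
vhdi_spec = path_spec_factory.Factory.NewPathSpec(
    definitions.TYPE_INDICATOR_VHDI, parent=os_spec)

# Next layer: the first partition in the TSK volume system.
partition_spec = path_spec_factory.Factory.NewPathSpec(
    definitions.TYPE_INDICATOR_TSK_PARTITION, location=u'/p1', parent=vhdi_spec)

# Top layer: the $MFT inside that partition's file system.
mft_spec = path_spec_factory.Factory.NewPathSpec(
    definitions.TYPE_INDICATOR_TSK, location=u'/$MFT', parent=partition_spec)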

Let's look at the same object with an E01 loaded.


Notice what I highlighted: the image type has changed from VHDI to EWF. Otherwise the object, its properties, and its methods are the same.

Let's do this one more time to really reinforce this with a raw/dd image.


Everything else is the same, except for the type changing to RAW. 

So no matter what type of image we are working with, dfVFS allows us to build an object in layers that lets the code that follows not worry about the container underneath. It normalizes access across the different image type libraries, so we can avoid workarounds like the one we use in pytsk.

Tomorrow, more code!