
Daily Blog #379: Automating DFIR with dfVFS part 6

Hello Reader,
         It's time to continue our series by iterating through all of the partitions within a disk or image, instead of just hardcoding one. To start, you'll need another image: one that not only has more than one partition, but also has shadow copies for us to interact with next.

You can download the image here:
https://mega.nz/#!L45SRYpR!yl8zDOi7J7koqeGnFEhYV-_75jkVtI2CTrr14PqofBw


If you want to show your support for my efforts, there is an easy way to do that. 

Vote for me for Digital Forensic Investigator of the Year here: https://forensic4cast.com/forensic-4cast-awards/


First, let's look at the code:

import sys
import logging

from dfvfs.analyzer import analyzer
from dfvfs.lib import definitions
from dfvfs.path import factory as path_spec_factory
from dfvfs.volume import tsk_volume_system
from dfvfs.resolver import resolver
from dfvfs.lib import raw

# Path to the source image -- change this to point at your own image
source_path = "Windows 7 Professional SP1 x86 Suspect.vhd"

# Build an OS path spec for the image file, then ask the analyzer what
# kind of storage media image it is (VHD, E01, raw, etc.)
path_spec = path_spec_factory.Factory.NewPathSpec(
          definitions.TYPE_INDICATOR_OS, location=source_path)

type_indicators = analyzer.Analyzer.GetStorageMediaImageTypeIndicators(
          path_spec)

if len(type_indicators) > 1:
  raise RuntimeError((
      u'Unsupported source: {0:s} found more than one storage media '
      u'image type.').format(source_path))

if len(type_indicators) == 1:
  path_spec = path_spec_factory.Factory.NewPathSpec(
      type_indicators[0], parent=path_spec)

if not type_indicators:
  # The RAW storage media image type cannot be detected based on
  # a signature so we try to detect it based on common file naming
  # schemas.
  file_system = resolver.Resolver.OpenFileSystem(path_spec)
  raw_path_spec = path_spec_factory.Factory.NewPathSpec(
      definitions.TYPE_INDICATOR_RAW, parent=path_spec)

  glob_results = raw.RawGlobPathSpec(file_system, raw_path_spec)
  if glob_results:
    path_spec = raw_path_spec

# Open the partition table (the volume system) at the root of the image
volume_path_spec = path_spec_factory.Factory.NewPathSpec(
        definitions.TYPE_INDICATOR_TSK_PARTITION, location=u'/',
        parent=path_spec)

volume_system = tsk_volume_system.TSKVolumeSystem()
volume_system.Open(volume_path_spec)

# Collect the identifier (p1, p2, ...) of every partition in the image
volume_identifiers = []
for volume in volume_system.volumes:
  volume_identifier = getattr(volume, 'identifier', None)
  if volume_identifier:
    volume_identifiers.append(volume_identifier)
 
print(u'The following partitions were found:')
print(u'Identifier\tOffset\t\t\tSize')

# Loop over every partition we found, printing its details and
# extracting its $MFT
for volume_identifier in sorted(volume_identifiers):
  volume = volume_system.GetVolumeByIdentifier(volume_identifier)
  if not volume:
    raise RuntimeError(
        u'Volume missing for identifier: {0:s}.'.format(volume_identifier))

  volume_extent = volume.extents[0]
  print(
      u'{0:s}\t\t{1:d} (0x{1:08x})\t{2:d}'.format(
          volume.identifier, volume_extent.offset, volume_extent.size))

  # Build a path spec for this specific partition (/p1, /p2, ...)
  volume_path_spec = path_spec_factory.Factory.NewPathSpec(
        definitions.TYPE_INDICATOR_TSK_PARTITION, location=u'/'+volume_identifier,
        parent=path_spec)

  # Build a path spec for the $MFT inside that partition and open it
  mft_path_spec = path_spec_factory.Factory.NewPathSpec(
        definitions.TYPE_INDICATOR_TSK, location=u'/$MFT',
        parent=volume_path_spec)

  file_entry = resolver.Resolver.OpenFileEntry(mft_path_spec)


  # Print a couple of details about the file we found
  stat_object = file_entry.GetStat()

  print(u'Inode: {0:d}'.format(stat_object.ino))
  print(u'Name: {0:s}'.format(file_entry.name))
  # Prefix the output name with the partition identifier (p1$MFT, p2$MFT, ...)
  # so each partition's $MFT is written to its own file
  outFile = volume_identifier + file_entry.name
  extractFile = open(outFile, 'wb')
  file_object = file_entry.GetFileObject()

  # Copy the file out 4KB at a time
  data = file_object.read(4096)
  while data:
      extractFile.write(data)
      data = file_object.read(4096)

  extractFile.close()
  file_object.close()

  # Reset the path spec objects before the next pass through the loop
  volume_path_spec = ""
  mft_path_spec = ""

Believe it or not, we didn't have to change much here to go from looking at one partition and extracting its $MFT to extracting it from all of the partitions. We had to do four things.

1. We moved our file extraction code over by one indent, allowing it to execute as part of the for loop we first wrote to print out the list of partitions in an image. Remember that in Python we don't use braces to determine how the code will be executed; it's all indentation that decides how the code logic will be read and followed, as the short sketch below shows.
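Here is a minimal, simplified sketch of that idea (plain Python, not the actual dfVFS code; the identifiers are made up) showing how indentation alone decides what runs inside the loop:

volume_identifiers = ['p2', 'p1']

for volume_identifier in sorted(volume_identifiers):
    print('Partition: ' + volume_identifier)
    # Anything indented to this level runs once per partition,
    # which is exactly where our extraction code needs to live.

print('Done')  # back at the left margin: runs once, after the loop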
2. Next we changed the location our volume path specification object is set to from a hardcoded /p1 to whichever volume identifier we are currently looking at in the for loop.

 volume_path_spec = path_spec_factory.Factory.NewPathSpec(
        definitions.TYPE_INDICATOR_TSK_PARTITION, location=u'/'+volume_identifier,
        parent=path_spec)

You can see that the location keyword is now set to u'/' with the volume_identifier variable appended. This resolves to /p1, /p2, and so on, for as many partitions as the image contains.
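If it helps to see that resolution in isolation, the location string is just a simple concatenation (the identifiers below are illustrative):

for volume_identifier in ['p1', 'p2']:
    location = u'/' + volume_identifier
    print(location)   # prints /p1, then /p2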

3. Now that we are extracting this file from multiple partitions, we don't want to overwrite the file we previously extracted, so we need to make each file name unique. We do that by prefixing the file name with the partition identifier.

  outFile = volume_identifier + file_entry.name
  extractFile = open(outFile, 'wb')

This results in files named p1$MFT, p2$MFT, and so on. To accomplish this we make a new variable called outFile, which is set to the partition identifier (volume_identifier) followed by the file name (file_entry.name). Then we pass that to the open() call we wrote before.
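As a side note, if you would rather collect the extracted files in their own folder instead of your working directory, a small variation like this works (the 'extracted' folder name is my own choice here, not part of the original script):

import os

output_dir = 'extracted'   # hypothetical destination folder
if not os.path.isdir(output_dir):
    os.makedirs(output_dir)

outFile = os.path.join(output_dir, volume_identifier + file_entry.name)
extractFile = open(outFile, 'wb')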

4. One last simple change.

volume_path_spec=""
mft_path_spec=""

We are setting our partition and file path spec objects back to an empty value. Why? Because otherwise they remain set at the global level and will just keep appending onto the prior object, which results in some very strange errors.

That's it! No more code changes. 

You can get the code from Github: 
https://github.com/dlcowen/dfirwizard/blob/master/dfvfsWizardv4.py
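Assuming you already have dfVFS and its dependencies installed from earlier in this series, running the script is as simple as:

python dfvfsWizardv4.py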


In the next post we will be iterating through shadow copies!