Hello Reader,
In our last post I expanded on the concept of path specification objects. Now let's extend our dfVFS code to go beyond forensic images and known virtual drives to live disks and raw images.
If you want to show your support for my efforts, there is an easy way to do that.
Vote for me for Digital Forensic Investigator of the Year here: https://forensic4cast.com/forensic-4cast-awards/
Why isn't this supported with the same function call, you ask? Live disks and raw images do not have any magic headers that dfVFS can parse to figure out what it is dealing with. So instead we need to add some conditional logic that tests whether we are working with a forensic image or a raw disk.
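To see the problem in isolation, here is a minimal sketch (not part of our script) using the analyzer from the last post. The file names image.E01 and image.dd are just placeholders for a forensic image and a raw image sitting in the current directory.

from dfvfs.analyzer import analyzer
from dfvfs.lib import definitions
from dfvfs.path import factory as path_spec_factory

for name in [u'image.E01', u'image.dd']:
  os_spec = path_spec_factory.Factory.NewPathSpec(
      definitions.TYPE_INDICATOR_OS, location=name)
  # The E01 is identified by its signature; the dd image comes back
  # as an empty list because there is no signature to match on.
  type_indicators = analyzer.Analyzer.GetStorageMediaImageTypeIndicators(
      os_spec)
  print(u'{0:s} -> {1!s}'.format(name, type_indicators))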
First as we did last time let's see what the code looks like now:
import sys
import logging

from dfvfs.analyzer import analyzer
from dfvfs.lib import definitions
from dfvfs.path import factory as path_spec_factory
from dfvfs.volume import tsk_volume_system
## Adding Resolver
from dfvfs.resolver import resolver
## Adding raw support
from dfvfs.lib import raw

source_path="dfr-16-ntfs.dd"

path_spec = path_spec_factory.Factory.NewPathSpec(
    definitions.TYPE_INDICATOR_OS, location=source_path)

type_indicators = analyzer.Analyzer.GetStorageMediaImageTypeIndicators(
    path_spec)

if len(type_indicators) > 1:
  raise RuntimeError((
      u'Unsupported source: {0:s} found more than one storage media '
      u'image types.').format(source_path))

if len(type_indicators) == 1:
  path_spec = path_spec_factory.Factory.NewPathSpec(
      type_indicators[0], parent=path_spec)

if not type_indicators:
  # The RAW storage media image type cannot be detected based on
  # a signature so we try to detect it based on common file naming
  # schemas.
  file_system = resolver.Resolver.OpenFileSystem(path_spec)
  raw_path_spec = path_spec_factory.Factory.NewPathSpec(
      definitions.TYPE_INDICATOR_RAW, parent=path_spec)

  glob_results = raw.RawGlobPathSpec(file_system, raw_path_spec)
  if glob_results:
    path_spec = raw_path_spec

volume_system_path_spec = path_spec_factory.Factory.NewPathSpec(
    definitions.TYPE_INDICATOR_TSK_PARTITION, location=u'/',
    parent=path_spec)

volume_system = tsk_volume_system.TSKVolumeSystem()
volume_system.Open(volume_system_path_spec)

volume_identifiers = []
for volume in volume_system.volumes:
  volume_identifier = getattr(volume, 'identifier', None)
  if volume_identifier:
    volume_identifiers.append(volume_identifier)

print(u'The following partitions were found:')
print(u'Identifier\tOffset\t\t\tSize')

for volume_identifier in sorted(volume_identifiers):
  volume = volume_system.GetVolumeByIdentifier(volume_identifier)
  if not volume:
    raise RuntimeError(
        u'Volume missing for identifier: {0:s}.'.format(volume_identifier))

  volume_extent = volume.extents[0]
  print(
      u'{0:s}\t\t{1:d} (0x{1:08x})\t{2:d}'.format(
          volume.identifier, volume_extent.offset, volume_extent.size))

print(u'')
The first thing that is different is that two more dfVFS helpers are being imported:
## Adding Resolver
from dfvfs.resolver import resolver
## Adding raw support
from dfvfs.lib import raw
The first one, resolver, is a helper that resolves path specification objects into file system objects. You might remember that in pytsk the first thing we did after getting a volume object was to get a file system object; the resolver is doing that for us.
The second is 'raw'. Raw is the module that supports raw images in dfVFS. It defines the RawGlobPathSpec function that creates a special path specification object for raw images.
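If you want to see the resolver on its own, here is a minimal sketch, separate from our script, that resolves a plain OS path specification into a file system object and pulls a file entry back out of it. The /tmp location is just a placeholder directory.

from dfvfs.lib import definitions
from dfvfs.path import factory as path_spec_factory
from dfvfs.resolver import resolver

# Build a path spec for a directory on the local disk (placeholder path).
os_path_spec = path_spec_factory.Factory.NewPathSpec(
    definitions.TYPE_INDICATOR_OS, location=u'/tmp')

# The resolver turns the path spec into a file system object for us,
# the step we used to do by hand in pytsk.
file_system = resolver.Resolver.OpenFileSystem(os_path_spec)
file_entry = file_system.GetFileEntryByPathSpec(os_path_spec)
print(file_entry.name)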
Next we are changing what image we are working with to a raw image:
source_path="dfr-16-ntfs.dd"
We are now ready to deal with a raw image, a.k.a. a dd image or a live disk/partition.
First we are going to change the conditional logic around our type indicator helper function call. In the first version of the script we knew what type of image we were dealing with, so we didn't bother testing what the type indicator function returned. Now we could be dealing with multiple types of images (forensic images, raw images, unknown types), so we need to put in some conditional testing to deal with it.
if len(type_indicators) > 1:
  raise RuntimeError((
      u'Unsupported source: {0:s} found more than one storage media '
      u'image types.').format(source_path))
if len(type_indicators) == 1:
  path_spec = path_spec_factory.Factory.NewPathSpec(
      type_indicators[0], parent=path_spec)
The first check we do with what is returned into type_indicators is to see if more than one type has been identified. Currently dfVFS only supports one image type within a single file. I'm not quite sure when this would happen, but it's prudent to check for. If this condition were to occur, we use the built-in raise statement to raise a RuntimeError, printing a message to the user that we don't support this type of media.
The second check is what we saw in the first example: there is one known media type stored within this image. You can tell we are checking for one type because we call the len function on the type_indicators list and test whether the length is 1. We then use what was returned ([0] refers to the first element of the type_indicators list) to create our path_spec object for the image. One thing does change here: we are no longer assigning what the NewPathSpec function returns to a new variable. Instead we are taking advantage of the layering described in the prior post and storing the new object in the same variable name, knowing that the prior object has been layered underneath it as the parent.
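To make that layering concrete, here is a small sketch of what you could print right after that assignment; it assumes type_indicators came back with exactly one entry.

# path_spec now holds the storage media image layer, and the OS path
# spec we built first is still reachable underneath it as the parent.
print(path_spec.type_indicator)         # e.g. the detected image type
print(path_spec.parent.type_indicator)  # 'OS', the layer underneath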
Only two more changes and our script is done. Next we need to check whether there is no known media format stored in type_indicators. We do that by testing whether anything is stored in the type_indicators variable using the not operator. This basically says: if the type_indicators list is empty (nothing was returned from the function that populated it), run the following code.
if not type_indicators:
  # The RAW storage media image type cannot be detected based on
  # a signature so we try to detect it based on common file naming
  # schemas.
  file_system = resolver.Resolver.OpenFileSystem(path_spec)
  raw_path_spec = path_spec_factory.Factory.NewPathSpec(
      definitions.TYPE_INDICATOR_RAW, parent=path_spec)
There are two things this code does if no type was returned, indicating this is possibly a raw image. The first is to call the resolver helper class's OpenFileSystem function with the path_spec object we have already made. If that is successful, we create a new path specification object, manually setting the type of the layer we are adding to TYPE_INDICATOR_RAW, a raw image.
The last change we make is taking that new raw image path specification and making it work with the other dfVFS functions that may not explicitly work with a raw image object. We do that by calling the raw module's RawGlobPathSpec function and passing it two objects: the first is the file system object we made in the section just above, and the second is the raw_path_spec object we just created. RawGlobPathSpec then checks whether our source matches the file naming schemes it knows for raw images and, if it is successful, returns the path specifications it matched.
  glob_results = raw.RawGlobPathSpec(file_system, raw_path_spec)
  if glob_results:
    path_spec = raw_path_spec
We then test the glob_results variable to make sure something was stored within it, a sign the call succeeded. If there is in fact something contained within it, we assign our raw_path_spec to the path_spec variable so that the rest of the script works against the raw image.
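If you are curious what actually comes back, my understanding is that glob_results is a list of path specifications, one per file the glob matched (just one entry for a single dd image, more for a segmented image), so you could peek at it with something like this:

# Sketch: print out what the glob matched for the raw image.
for segment_path_spec in glob_results:
  print(getattr(segment_path_spec, 'location', segment_path_spec.comparable))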
That's it!
After running the script this should be what you see:
The following partitions were found:
Identifier      Offset                  Size
p1              65536 (0x00010000)      314572800

You can download the image I'm testing with here: http://www.cfreds.nist.gov/dfr-images/dfr-16-ntfs.dd.bz2
You can download the source code for this example from GitHub here: https://github.com/dlcowen/dfirwizard/blob/master/dfvfsWizardv2.py
Tomorrow we continue to add more functionality!
This is a 6-part series and here are links to all the parts:
Automating DFIR with dfVFS Part 6
Automating DFIR with dfVFS Part 5
Automating DFIR with dfVFS Part 4
Automating DFIR with dfVFS Part 3
Automating DFIR with dfVFS Part 2
Automating DFIR with dfVFS Part 1