Thursday, May 4, 2017

Contents in sparse mirror may be smaller than they appear

By Matthew Seyer

As many of you know, David Cowen and I are huge fans of file system journals! This love extends to the change journals provided by operating systems, such as FSEvents and the $UsnJrnl:$J. We have spent much of our development time writing tools to parse these journals, so needless to say we have a lot of experience with file system and change journals. Here is some USN Journal logic for you.

USN Journal Logic

First off, it is important to know that the USN Journal file is a sparse file. MSDN explains what a sparse file is: https://msdn.microsoft.com/en-us/library/windows/desktop/aa365564(v=vs.85).aspx. When the USN Journal (referred to as $J from here on out) is created, it is given a maximum size (the area of allocated data) and an allocation delta (the size in memory that stores records before they are committed to the $J on disk). This is described here: https://technet.microsoft.com/en-us/library/cc788042(v=ws.11).aspx.

The issue is that many forensic tools cannot export a file as a sparse file. Even if they could, only a few file systems support sparse files, and I don't even know if a sparse file on NTFS is the same as a sparse file on OSX. This leads to a common problem: the forensic tool sees the $J as larger than it really is:





While this file is 20,380,061,592 bytes in size, the allocated portion containing records is much smaller. Most forensic tools will export the entire file with the unallocated data as 0x00, which makes sense when you look at the MSDN Sparse File section (link above). When we extract this file with FTK Imager we can verify with the Windows command `fsutil sparse` that the exported file is not a sparse file (https://technet.microsoft.com/en-us/library/cc788025(v=ws.11).aspx):
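If you would rather do the same check from a script, here is a minimal sketch using Python's os.stat on Windows (Python 3.5 or newer exposes the NTFS file attribute flags); the path below is just a placeholder, not from the case above:

import os
import stat

def is_sparse(path):
    """Return True if the file carries the NTFS sparse attribute (Windows only)."""
    attrs = os.stat(path).st_file_attributes
    return bool(attrs & stat.FILE_ATTRIBUTE_SPARSE_FILE)

# A tool-exported copy of the $J will typically report False here,
# while the live $J on the source volume reports True.
print(is_sparse(r"C:\cases\exported\$J"))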


Trimming the $J

Once it's exported, what's a good way to find the start of the records? I like to use 010 Editor. I scroll toward the end of the file where there are still empty blocks (all 0x00s), then I search for 0x02 because I know I am looking for USN record version 2:



Now if I want to export just the record area, I can start at the beginning of this found record, select to the end of the file, and save the selection as a new file:


The resulting file is 37,687,192 bytes in size and contains just the record portion of the file.



This is significantly smaller in size! Now, how do we go about this programmatically?

Automation

While other sparse files can have interspersed data, the $J sparse file keeps all of its data at the end of the file. This is because you can associate the Update Sequence Number in a record with its offset in the file! If you want to look at the structure of the USN record, here it is: https://msdn.microsoft.com/en-us/library/windows/desktop/aa365722(v=vs.85).aspx. I will note that I would go about this in two different ways: one method for a file that has been extracted by another tool, and a different method for a file extracted using the TSK library. For now, we will just look at the first scenario.

Because the records are located in the last blocks of the file, I start from the end of the file and work backwards to find the first record, then write out just the records portion of the file. This saves a lot of time because you are not searching through potentially many gigs of zeros. You can find the code at https://github.com/devgc/UsnTrimmer. I have commented the code so that it is easy to understand what is happening.
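If you want to see the idea without digging through the repo, here is a minimal sketch of the same approach. This is not the UsnTrimmer code itself; the chunk size and the record-length sanity limits are my own assumptions, and the header layout (a 4-byte RecordLength followed by a 2-byte MajorVersion of 2) comes from the USN_RECORD_V2 structure linked above:

import struct

CHUNK = 1024 * 1024  # scan the file in 1 MiB windows

def find_record_start(j_path):
    """Walk backwards from the end of an exported $J until we reach the
    zero-filled (formerly sparse) region, then scan forward for the first
    thing that looks like a USN_RECORD_V2 header."""
    with open(j_path, "rb") as f:
        f.seek(0, 2)
        size = f.tell()
        pos = size
        boundary = 0
        while pos > 0:
            pos = max(0, pos - CHUNK)
            f.seek(pos)
            data = f.read(min(CHUNK, size - pos))
            if data.count(0) == len(data):  # all zeros: records begin after this point
                boundary = pos
                break
        f.seek(boundary)
        window = f.read(min(2 * CHUNK, size - boundary))
        offset = 0
        while offset + 8 <= len(window):
            rec_len, major = struct.unpack_from("<IH", window, offset)
            if major == 2 and 60 <= rec_len <= 4096:  # assumed sanity limits
                return boundary + offset
            offset += 8  # USN records are 8-byte aligned
    return None

def trim(j_path, out_path):
    start = find_record_start(j_path)
    if start is None:
        raise ValueError("no USN v2 records found")
    with open(j_path, "rb") as src, open(out_path, "wb") as dst:
        src.seek(start)
        for block in iter(lambda: src.read(CHUNK), b""):
            dst.write(block)

trim("$J", "usn.trim")

As always, validate the trimmed output against a manual extract before relying on it.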

Now let's use the tool:

We see that the usn.trim file is the same as the one we trimmed manually, but let's check the hashes to make sure we have the same results as the manual extract:
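If you'd rather compute the hashes from a script than a GUI tool, a quick hashlib sketch works; the file names below are placeholders for the manual 010 Editor export and the UsnTrimmer output:

import hashlib

def md5(path):
    """Hash a file in 1 MiB blocks so large extracts don't blow up memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(1024 * 1024), b""):
            h.update(block)
    return h.hexdigest()

print(md5("usn_manual_trim.bin"))  # manual 010 Editor selection (placeholder name)
print(md5("usn.trim"))             # UsnTrimmer output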


So far I have verified this on a $J extracted from the SANS 408 image and on some local $J files. But of course, make sure you use multiple techniques to verify; this was quick proof-of-concept code.

Questions? Ask them in the comments below or send me a tweet at @forensic_matt

Monday, May 1, 2017

Forensic Lunch with Paul Shomo, Matt Bromiley, Phil Hagen, Lee Whitfield and David Cowen

Hello Reader,
     It's been a while since I've cross-posted that the videocast/podcast went up. We had a pretty great Forensic Lunch with lots of details about programs that are relevant to everyone from academic students in forensics to serious artifact hunters. Here are the show notes:


Paul Shomo comes on to talk about Guidance Software's new Forensic Artifact Research Program where you can get $5,000 USD just for research you are already doing! Find out more here: https://bugcrowd.com/guidancesoftware?preview=114da7695ff86ae70ec01aaf2c6878b0&utm_campaign=9617-Forensic_artifact-20170426&utm_medium=Email&utm_source=Eloqua

Phil Hagen introduced the new SANS Network Forensics poster to be released later this month

Matt Bromiley talked about the Ken Johnson Scholarship set up by SANS and KPMG; you can learn more and apply here: https://digital-forensics.sans.org/blog/2017/03/03/ken-johnson-dfir-scholarship

Phil, Matt, Lee and I talked about the DFIR Summit

Lee Whitfield and I talked about the 4Cast Awards, Voting is open here: https://forensic4cast.com/forensic-4cast-awards/



You can watch the lunch here: https://www.youtube.com/watch?v=hyRNh78GY2M
You can listen to the podcast here: http://forensiclunch.libsyn.com/forensic-lunch-42817
Or you can subscribe to it on iTunes, Stitcher, Tunein and any other quality podcast provider. 

Wednesday, April 19, 2017

Windows, Now with built in anti forensics!

Hello Reader,

Update: TZWorks updated USP in November of 2016, and it now not only shows these removed devices but, if you pass in the -show_other_times flag, will tell you when the devices were removed. Very cool!

             If you've been using a tool to parse external storage devices that relies on USB, USBStor, WPDBUSENUM or STORAGE as its primary key for finding all external devices, you might be being tricked by Windows. Windows has been doing something new (to me at least) that I first observed in the Suncoast v Peter Scoppa et al case (Case No. 4:13-cv-03125) back in 2015: on its own, without user request, and on a regular schedule driven by Task Scheduler, Windows is removing unused device entries from the registry.

This behavior, which I've observed in my case work, started in Windows 8.1 and I've confirmed it in Windows 10. A PDF I found that references this behavior (found here) states the author has seen it in Windows 7, but I can't confirm that. The behavior is initiated from the Plug and Play scheduled task named 'Plug and Play Cleanup', as seen in the following screenshot:



I've found very few people talking about this online and even fewer DFIR people who seem to be aware of it; I know we are going to add it to the SANS Windows forensics course. According to this post on MSDN, the scheduled task will remove, from the most common device storage registry keys, all devices that haven't been plugged in for 30 days. When this removal happens it is logged in setupapi.dev.log, like all other PnP installs and uninstalls. Here is an example of such an entry:

">>>  [Device and Driver Disk Cleanup Handler]
>>>  Section start 2017/04/08 18:54:37.650
      cmd: taskhostw.exe
     set: Searching for not-recently detected devices that may be removed from the system.
     set: Devices will be removed during this pass.
     set: Device STORAGE\VOLUME\_??_USBSTOR#DISK&VEN_&PROD_&REV_PMAP#07075B8D9E409826&0#{53F56307-B6BF-11D0-94F2-00A0C91EFB8B} will be removed.
     set: Device STORAGE\VOLUME\_??_USBSTOR#DISK&VEN_&PROD_&REV_PMAP#07075B8D9E409826&0#{53F56307-B6BF-11D0-94F2-00A0C91EFB8B} was removed.
     set: Device SWD\WPDBUSENUM\_??_USBSTOR#DISK&VEN_&PROD_&REV_PMAP#07075B8D9E409826&0#{53F56307-B6BF-11D0-94F2-00A0C91EFB8B} will be removed.
     set: Device SWD\WPDBUSENUM\_??_USBSTOR#DISK&VEN_&PROD_&REV_PMAP#07075B8D9E409826&0#{53F56307-B6BF-11D0-94F2-00A0C91EFB8B} was removed.
     set: Device STORAGE\VOLUMESNAPSHOT\HARDDISKVOLUMESNAPSHOT13 will be removed.
     set: Device STORAGE\VOLUMESNAPSHOT\HARDDISKVOLUMESNAPSHOT13 was removed.
     set: Device STORAGE\VOLUMESNAPSHOT\HARDDISKVOLUMESNAPSHOT14 will be removed.
     set: Device STORAGE\VOLUMESNAPSHOT\HARDDISKVOLUMESNAPSHOT14 was removed.
     set: Device STORAGE\VOLUMESNAPSHOT\HARDDISKVOLUMESNAPSHOT15 will be removed.
     set: Device STORAGE\VOLUMESNAPSHOT\HARDDISKVOLUMESNAPSHOT15 was removed.
     set: Device STORAGE\VOLUMESNAPSHOT\HARDDISKVOLUMESNAPSHOT16 will be removed.
     set: Device STORAGE\VOLUMESNAPSHOT\HARDDISKVOLUMESNAPSHOT16 was removed.
     set: Device STORAGE\VOLUMESNAPSHOT\HARDDISKVOLUMESNAPSHOT17 will be removed.
     set: Device STORAGE\VOLUMESNAPSHOT\HARDDISKVOLUMESNAPSHOT17 was removed.
     set: Device STORAGE\VOLUMESNAPSHOT\HARDDISKVOLUMESNAPSHOT18 will be removed.
     set: Device STORAGE\VOLUMESNAPSHOT\HARDDISKVOLUMESNAPSHOT18 was removed.
     set: Device STORAGE\VOLUMESNAPSHOT\HARDDISKVOLUMESNAPSHOT19 will be removed.
     set: Device STORAGE\VOLUMESNAPSHOT\HARDDISKVOLUMESNAPSHOT19 was removed.
     set: Device STORAGE\VOLUMESNAPSHOT\HARDDISKVOLUMESNAPSHOT20 will be removed.
     set: Device STORAGE\VOLUMESNAPSHOT\HARDDISKVOLUMESNAPSHOT20 was removed.
     set: Device STORAGE\VOLUMESNAPSHOT\HARDDISKVOLUMESNAPSHOT21 will be removed.
     set: Device STORAGE\VOLUMESNAPSHOT\HARDDISKVOLUMESNAPSHOT21 was removed.
     set: Device STORAGE\VOLUMESNAPSHOT\HARDDISKVOLUMESNAPSHOT22 will be removed.
     set: Device STORAGE\VOLUMESNAPSHOT\HARDDISKVOLUMESNAPSHOT22 was removed.
     set: Device STORAGE\VOLUMESNAPSHOT\HARDDISKVOLUMESNAPSHOT23 will be removed.
     set: Device STORAGE\VOLUMESNAPSHOT\HARDDISKVOLUMESNAPSHOT23 was removed.
     set: Device STORAGE\VOLUMESNAPSHOT\HARDDISKVOLUMESNAPSHOT24 will be removed.
     set: Device STORAGE\VOLUMESNAPSHOT\HARDDISKVOLUMESNAPSHOT24 was removed.
     set: Device STORAGE\VOLUMESNAPSHOT\HARDDISKVOLUMESNAPSHOT25 will be removed.
     set: Device STORAGE\VOLUMESNAPSHOT\HARDDISKVOLUMESNAPSHOT25 was removed.
     set: Device STORAGE\VOLUMESNAPSHOT\HARDDISKVOLUMESNAPSHOT26 will be removed.
     set: Device STORAGE\VOLUMESNAPSHOT\HARDDISKVOLUMESNAPSHOT26 was removed.
     set: Device STORAGE\VOLUMESNAPSHOT\HARDDISKVOLUMESNAPSHOT27 will be removed.
     set: Device STORAGE\VOLUMESNAPSHOT\HARDDISKVOLUMESNAPSHOT27 was removed.
     set: Device USB\VID_13FE&PID_5200\07075B8D9E409826 will be removed.
     set: Device USB\VID_13FE&PID_5200\07075B8D9E409826 was removed.
     set: Device STORAGE\VOLUME\_??_USBSTOR#DISK&VEN_&PROD_&REV_PMAP#070A6C62772BB880&0#{53F56307-B6BF-11D0-94F2-00A0C91EFB8B} will be removed.
     set: Device STORAGE\VOLUME\_??_USBSTOR#DISK&VEN_&PROD_&REV_PMAP#070A6C62772BB880&0#{53F56307-B6BF-11D0-94F2-00A0C91EFB8B} was removed.
     set: Device USBSTOR\DISK&VEN_&PROD_&REV_PMAP\070A6C62772BB880&0 will be removed.
     set: Device USBSTOR\DISK&VEN_&PROD_&REV_PMAP\070A6C62772BB880&0 was removed.
     set: Device USB\VID_13FE&PID_5500\070A6C62772BB880 will be removed.
     set: Device USB\VID_13FE&PID_5500\070A6C62772BB880 was removed.
     set: Device SWD\WPDBUSENUM\_??_USBSTOR#DISK&VEN_&PROD_&REV_PMAP#070A6C62772BB880&0#{53F56307-B6BF-11D0-94F2-00A0C91EFB8B} will be removed.
     set: Device SWD\WPDBUSENUM\_??_USBSTOR#DISK&VEN_&PROD_&REV_PMAP#070A6C62772BB880&0#{53F56307-B6BF-11D0-94F2-00A0C91EFB8B} was removed.
     set: Device USBSTOR\DISK&VEN_&PROD_&REV_PMAP\07075B8D9E409826&0 will be removed.
     set: Device USBSTOR\DISK&VEN_&PROD_&REV_PMAP\07075B8D9E409826&0 was removed.
     set: Devices removed: 23
     set: Searching for unused drivers that may be removed from the system.
     set: Drivers will be removed during this pass.
     set: Recovery Timestamp: 11/11/2016 19:51:27:0391.
     set: Driver packages removed: 0
     set: Total size on disk: 0
<<<  Section end 2017/04/08 18:54:41.415
<<<  [Exit status: SUCCESS]"

This is followed by one of these entries for each device identified:
>>>  [Delete Device - STORAGE\VOLUME\_??_USBSTOR#DISK&VEN_&PROD_&REV_PMAP#07075B8D9E409826&0#{53F56307-B6BF-11D0-94F2-00A0C91EFB8B}]
>>>  Section start 2017/04/08 18:54:37.666
      cmd: taskhostw.exe
<<<  Section end 2017/04/08 18:54:37.704
<<<  [Exit status: SUCCESS]
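If you want to sweep setupapi.dev.log for these cleanup passes programmatically, here is a rough sketch. It keys off the '[Device and Driver Disk Cleanup Handler]' header and the 'was removed.' lines shown in the excerpt above; everything else about the log layout is an assumption, so verify against your own logs:

import re

def removed_devices(log_path):
    """Yield (section start time, device id) pairs for devices the Plug and Play
    Cleanup task reports as removed in setupapi.dev.log."""
    in_cleanup = False
    start_time = None
    with open(log_path, "r", errors="replace") as f:
        for line in f:
            if "[Device and Driver Disk Cleanup Handler]" in line:
                in_cleanup = True
                start_time = None
            elif in_cleanup and "Section start" in line:
                start_time = line.split("Section start", 1)[1].strip()
            elif in_cleanup and "[Exit status" in line:
                in_cleanup = False
            elif in_cleanup:
                m = re.search(r"Device (.+) was removed\.", line)
                if m:
                    yield start_time, m.group(1)

for ts, dev in removed_devices("setupapi.dev.log"):
    print(ts, dev)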

Now, setupapi.dev.log isn't the only place these devices will remain. In my testing you will also find them in the following registry keys:
System\Setup\Upgrade\PnP\CurrentControlSet\Control\DeviceMigration\Classes\
System\Setup\Upgrade\PnP\CurrentControlSet\Control\DeviceMigration\Devices\SWD\WPDBUSENUM
System\Setup\Upgrade\PnP\CurrentControlSet\Control\DeviceMigration\Devices\USBSTOR
System\MountedDevices\
NTUSER.DAT\Software\Microsoft\Windows\CurrentVersion\Explorer\MountPoints2\

Note that these removals do not affect the shell item artifacts (LNK files, jump lists, shellbags) that would point to files accessed from these devices, just the common registry entries that record the devices' existence.
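If you want to check the leftover DeviceMigration keys from a script, here is a rough sketch assuming the python-registry library. The key paths come from the list above (hive-relative, so the leading 'System\' is dropped when opening an exported SYSTEM hive), and the hive path is just a placeholder:

from Registry import Registry  # pip install python-registry

def list_migrated_devices(system_hive_path):
    """Print device entries preserved under the DeviceMigration keys, which can
    survive even after the usual USBSTOR/WPDBUSENUM keys are cleaned up."""
    reg = Registry.Registry(system_hive_path)
    for rel_path in (
        r"Setup\Upgrade\PnP\CurrentControlSet\Control\DeviceMigration\Devices\USBSTOR",
        r"Setup\Upgrade\PnP\CurrentControlSet\Control\DeviceMigration\Devices\SWD\WPDBUSENUM",
    ):
        try:
            key = reg.open(rel_path)
        except Registry.RegistryKeyNotFoundException:
            continue
        for subkey in key.subkeys():
            print(subkey.path(), subkey.timestamp())

list_migrated_devices(r"C:\cases\host1\SYSTEM")  # placeholder hive path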

So why is this important? If you are being asked to review external devices accessed on a Windows 8.1 or newer system, you will have to take additional steps to ensure that you account for any device that hasn't been plugged in for 30 days. In my testing the Woanware USBDeviceForensics tool will miss these devices in its reports.

So make sure to check! It's on by default and there could be a lot of devices you miss. 

Monday, March 20, 2017

DFIR Exposed #1: The crime of silence

Hello Reader,
          I've often been told I should commit to writing down some of the stories of the cases we've worked so as not to forget them. I've been told that I should write a book of them, and maybe some day I will. Until then, I wanted to share some of the cases we've worked where things went outside the norm, to help you be aware not of what usually happens, but of what happens when humans get involved.

Our story begins...
It's early January, the Christmas rush has just ended, and my client reaches out to me stating:

"Hey Dave, Our customer has suffered a breach and credit cards have been sent to an email address in Russia"

No problem, this is unfortunately fairly common so I respond we can meet the client as soon as they are ready. After contracts are signed we are informed there are two locations we need to visit.

1. The datacenter where the servers affected are hosted which have not been preserved yet
2. The offices where the developers worked who noticed the breach

Now at this point you are saying, Dave... you do IR work? You don't talk about that much. No, we don't talk about it much, for a reason. We do IR work through attorneys, mainly to preserve the privilege, and I've always been worried that making it a public part of our offering would affect the DF part of our services, as my people would be flying around the country.

BTW, did you know that IR investigations led by an attorney are considered work product, thanks to case law from a case I'm happy to say I worked on? Read more here: https://www.orrick.com/Insights/2015/04/Court-Says-Cyber-Forensics-Covered-by-Legal-Privilege

So we send one examiner out to the datacenter while we gather information from the developers. Now you may be wondering why we were talking to the developers and not the security staff. It was the developers who found the intrusion while trying to track down an error in their code by comparing the production code to their checked-in repository. Once they compared it, they found a change in their shopping cart showing that the form submitted with the payment instructions was being processed while also being emailed to a Russian-hosted email address.

The developers claimed this was their first knowledge of any change, and company management was quite upset with the datacenter, which in their view was supposed to provide security under the hosting contracts signed. Ideas of liability and litigation against the hosting provider were floating around, and I was put on notice to see what existed to support that.

It was then that I got a call from my examiner who went to the datacenter. He let me know that one of the employees of the hosting company had handed him a thumbdrive while he was imaging the systems, saying only:

 'You'll want to read this'

You know what? He was right!

On the thumbdrive was a transcript of a ticket that was opened by the hosting company's SOC. In the transcript it was revealed that a month earlier the SOC staff had informed the same developers who claimed to have no prior knowledge of an intrusion that a foreign IP had logged into their VPS as root... and that this probably wasn't a good thing.

I called the attorney right away and let her know she likely needed to switch her focus from possible litigation against the hosting provider and to an internal investigation to find out what actually happened. Of course we still needed to finish our investigation of the compromise itself to make sure the damage was understood from a notification perspective.

Step 1. Analyzing the compromised server

Luckily for us, the SOC ticket showed us when the attacker had first logged in as the root account, which we were able to verify through the carved syslog files. We then went through the servers, located the affected files, established the mechanism used, and helped them define the time frame of the compromise so they could go through their account records to find all the affected customers.

Unfortunately for our client, it was the Christmas season, one of their busiest times of the year. Luckily for the client, it happened after Black Friday, which IS their busiest time of the year. After identifying the access, modifications, and exfil methods, we turned our focus to the developers.

We talked to the attorney and came up with a game plan. First we would inform them that we needed to examine each of their workstations to make sure they were not compromised and open for re-exploitation, which was true. Then we would go back through their emails, chat logs and forensic artifacts to understand what they knew and or did when they were first notified of the breach. Lastly we would bring them in to be interviewed to see who would admit to what.

Imaging the computers was uneventful, as you always hope it will be, but the examination turned out to be very interesting. The developers used Skype to talk to each other, and if you've ever analyzed Skype before you know that by default it keeps history forever. There in the Skype chats were the developers talking to each other about the breach when it happened, asking each other questions about the attacker's IP address, passing links and answers back and forth.

And then.... Nothing

Step 2. Investigating the developers

You see, in many cases investigations are not strictly about the technical analysis (some are, though); there is always a human element, which is why I've stayed so enthralled by this field for so long. In this case the developers were under the belief that they were going to be laid off after Christmas, so rather than take action they decided it wasn't their problem and went on with their lives. They did ask the hosting provider for recommendations on what to do next, but never followed up on them.

A month later they were informed they were not being laid off and instead were going to be transferred to a different department. With the knowledge that this was suddenly their problem again, they decided to actually look at the hosted system and found the modified code.


Step 3. Wrapping it up

So knowing this and comparing notes with the attorney we brought them in for an interview.

The first round, we simply asked questions to see what they would say, who would admit what, and possibly who could keep their jobs. When we finished talking to all the developers, all of whom pretended to know nothing of the earlier date, we documented their 'facts' and thanked them.

Then we asked them back in and, one fact at a time, showed them what we knew. Suddenly memories returned, apologies were given, and the chronology of events was established. As it turns out, the developers never notified management of the issue until they knew they were going to remain employed; they just sat on the issue.

Needless to say, they no longer had that transfer option open as they were summarily terminated.

So in this case a breach that should have lasted 4 hours at most (from the time of the SOC's login notice to the time of remediation) lasted through 30 days of Christmas shopping, because the developers of the eCommerce site committed the crime of silence for purely human reasons.


Wednesday, February 15, 2017

SOPs in DFIR

Hello Reader,
      It's been a while! Sorry I haven't written sooner. Things are great here at camp, aka G-C Partners, where the nerds run the show. Two years ago or so I was lucky enough to work with one of our favorite customers on generating some standard operating procedures for their DFIR lab. While we list forensic lab consulting as a service on our website, we don't get to engage in helping other labs improve as often as I'd like.

It may sound like a bad idea for a consulting company to help a potential client get better at what we both do; some may see this as a way of preventing future work for yourself. In my view this is short-sighted. In my world of DFIR, and especially in the court testimony/expert witness world, the better the internal lab is, the better my life is when they decide to litigate a case. If I've helped you get to the same standard of work as my lab, then I can spend my time using the prior work as a cheat sheet to validate faster and then looking for the newest techniques or research that could potentially find additional evidence.

Now, beyond the business aspects of helping another lab improve, I want to talk about the general first reactions that I get, and used to have myself, regarding making SOPs for what we do. SOPs, or Standard Operating Procedures, are a good thing (TM) as they help set basic standards in methodology, quality, and templated output without coming at the expense of creative solutions... if they are done right.

When I was still doing penetration testing work I was first asked to try to make SOPs for my job. I balked at the idea, stating that you can't proceduralize my work, there are too many variables! While this was true for the entire workflow, what I didn't want to admit at the time is that there were several parts of my normal work that were ripe for procedures to be created. I didn't want to admit it because that would mean additional work for myself in creating documentation I saw as something that would slow down my job. When I started doing forensic work in 1999 I was asked the same question for my DFIR work and again pushed back, stating there were too many variables in an investigation to try to turn it into a playbook.

I was wrong then and you may be wrong now. The first thing you have to admit to yourself, and your coworkers, is that regardless of the outliers in our work there are certain things we always do. Creating these SOPs will let new and existing team members do more consistent work and create less work for yourself in continually repeating yourself on what to do and what they should give you at the end of it. This kind of SOP will work for you if you create a SOP that works more like a framework than a restrictive set of steps.

For examples of how SWGDE makes SOPs for DFIR look here:
https://www.swgde.org/documents/Current%20Documents/SWGDE%20QAM%20and%20SOP%20Manuals/SWGDE%20Model%20SOP%20for%20Computer%20Forensics

Read on to see what I do.

What you want:


  • You want the SOP for a particular task to work more like stored knowledge 
  • You want to explain what that particular task can and can't do to prevent confusion or misinformation
  • You want to establish what to check for to make sure everything ran correctly to catch errors early and often
  • You want to provide alternative tools or techniques that work in case your preferred tool or method fails
  • You want to link to tool documentation or blogs/articles that give more detail on what's being done, in case people want to know more
  • You want to establish a minimum set of requirements for what the output is to prevent work being done twice
  • You want to store these somewhere easy to access, like a wiki/sharepoint/dropbox/google doc, so people can easily refer to them and, more importantly, you can easily refer people to them
  • You want to build internal training that works to teach people how to perform tasks with the SOPs so they become part of your day to day work not 'that thing' that no one wants to use
  • You want your team to be part of making the SOP more helpful while keeping to these guidelines of simplicity and usability



What you don't want:

  • You don't want to create a step by step process for someone to follow, that's not a SOP those are instructions
  • You don't want to create a checklist of things to do, if you do that people will only follow the checklist and feel confined to it
  • You don't want to use words like must/shall/always unless you really mean it; whatever you write in your SOP will be used to judge your work. Keep the language flexible and open so the SOPs serve as guidelines that you as an expert can navigate around when needed
  • You don't want to put a specific persons name or email address in, people move around and your SOPs will quickly fall out of date
  • You don't want to update your SOPs every time a tool version changes, so make sure you are not making them so specific that change of one choice or parameter breaks them
  • You don't want to make these by committee; just assign them to people with an example SOP you've already made and then show them to the team for approval/changes

In the end what you are aiming for is to have a series of building blocks of different common tasks you have in your investigative procedures that you can chain together for different kinds of cases.

If that's hard to visualize, let's go through an example SOP for prefetch files and how it fits into a larger case flow. In this example I am going to show the type of data I would put into the framework, not the actual SOP I would write.

Why not just give you my SOP for prefetch files? My SOP will be different from yours. We have an internal tool for prefetch parsing, I want different output to feed our internal correlation system, and I likely want more data than most people think is normal.



Example framework for Prefetch files:

Requirements per SOP:

  •     Scope
    •  This SOP covers parsing Prefetch files on Windows XP-8.1. 
  • Limitations
    • The prefetch subsystem may be disabled if the system you are examining is running an SSD and is Windows 7 or newer, or if it is running a server version of Windows. The prefetch directory only stores prefetch files for the last 128 executables run on Windows XP - 7. You will need to recover shadow copies of prefetch files and carve for prefetch files to find all possible prefetch entries on an image.
  •  Procedure
    • Extract prefetch files
    • Parse prefetch files with our preferred tool, gcprefetchparser.py
  •   Expected output
    • A json file for each prefetch
  •   Template for reporting
    • Excel spreadsheet, see prefetch.xlsx for an example 
  •  QA steps
    • Validate that timestamps are being correctly formatted in excel
    • If there are no prefetch files determine if a SSD was present or if anti forensics occurred
  •  Troubleshooting
    • Make sure timestamps look correct
    • Validate that paths and executable names fit within columns
    • Make sure the number of prefetch files present is equal to the number of files you parsed
    • Remember for windows 10 that the prefetch format changed, you must use the Win10 Prefetch SOP
    • Remember that Windows Server OS's do not have Prefetch enabled by default
  • Alternative tools 
    • Tzworks pf 
  • Next Steps
    • Shimcache parsing
  • References
    • Links to prefetch format from metz 

Now you have a building block SOP for just parsing prefetch files. You can then create workflows that combine a series of SOPs to guide an examiner without locking them into a rigid series of steps. Here is an example for a malware case.

Malware workflow:
  1. Identify source of alert and indicators to look for
  2. Follow triage SOP
  3. Volatility processes SOP
  4. Prefetch SOP
  5. Shimcache SOP
  6. MFT SOP
  7. Userassist SOP
  8. timeline SOP
  9. Review prior reports to find likely malicious executable
You can then reuse the same SOPs for other workflows that range from intrusions to intellectual property cases. The goal is not to document how to do an entire case but to standardize and improve the parts you do over and over again for every case, with an eye on automating and eliminating errors to make your job easier/better and your team's work better.
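To make the building block idea concrete, here is a purely illustrative sketch of how SOPs can be defined once and chained into different case-type workflows; the names, documentation locations, and outputs are made up for the example, not our internal system:

# Illustrative only: SOPs as reusable building blocks chained into workflows.
SOPS = {
    "triage":     {"doc": "wiki/sops/triage",     "output": "triage_report.xlsx"},
    "prefetch":   {"doc": "wiki/sops/prefetch",   "output": "prefetch.xlsx"},
    "shimcache":  {"doc": "wiki/sops/shimcache",  "output": "shimcache.xlsx"},
    "mft":        {"doc": "wiki/sops/mft",        "output": "mft.xlsx"},
    "userassist": {"doc": "wiki/sops/userassist", "output": "userassist.xlsx"},
    "timeline":   {"doc": "wiki/sops/timeline",   "output": "timeline.xlsx"},
}

WORKFLOWS = {
    "malware":               ["triage", "prefetch", "shimcache", "mft", "userassist", "timeline"],
    "intellectual_property": ["triage", "mft", "userassist", "timeline"],
}

def checklist(case_type):
    """Print the SOP chain for a case type so an examiner knows which blocks to
    run and what deliverable each one is expected to produce."""
    for step, name in enumerate(WORKFLOWS[case_type], 1):
        sop = SOPS[name]
        print(f"{step}. {name}: see {sop['doc']} -> deliverable {sop['output']}")

checklist("malware")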

Now, I don't normally do this, but if you are looking at this and saying to yourself, "I don't have the time or resources to do this, but I do have the budget for help," then reach out:

info@g-cpartners.com or me specifically dcowen@g-cpartners.com

We do some really cool stuff, and we can help you do cool stuff as well. I try to keep my blogs technical and factual, but I feel that sometimes I hold back on talking about what we do to the detriment of you, the reader. So, to be specific, for customers who do engage us to help them improve their labs/teams, we:

1. Provide customized internal training on SOPs, Triforce, internal tools, advanced topics
2. Create custom tools for use in your environment to automate correlation and workflows
3. Create SOPs and processes around your work
4. Provide Triforce and internal G-C tool site licenses and support
5. Do internal table top scenarios 
6. Do report and case validation to check on how your team is performing and what you could do better
7. Build out GRR servers and work with your team to teach you how to use it and look for funny business aka threat hunting
8. Act as a third party to evaluate vendors trying to sell you DFIR solutions


OK, I'm done talking about what we do; hopefully this helps someone. I'll be posting again soon about my new forensic workstation, and hopefully there will be more posts in the near future.