FSEventsParser 3.1 Released

By Nicole Ibrahim

G-C Partners' FSEventsParser Python script version 3.1 has been released. Version 3.1 now supports parsing macOS High Sierra FSEvents.

You can get the updated script here: https://github.com/dlcowen/FSEventsParser 

Prior versions of the script do not support High Sierra parsing, so it's important to upgrade to the current version of FSEventsParser.

Other recent updates include:

  • Better handling of carved gzip files has been added. Invalid record entries in corrupted gzips are now excluded from the output reports.
  • Even more dates are found using the names of system and application logs within each fsevent file. The dates are stored in the column 'approx_dates(plus_minus_one_day)', which indicates the approximate date or date range that the event occurred, plus or minus one day.
  • The script now reads a JSON file that contains custom SQLite queries to filter and export targeted reports from the database during parsing.

macOS High Sierra 10.13 and FSEvents

With the release of High Sierra, updates to the FSEvents API resulted in the following changes:
  • Magic Header: In macOS versions prior to 10.13, the magic header within a decompressed FSEvents log was '1SLD'. Beginning with 10.13, the magic header is now '2SLD' (a quick check is sketched after this list).
  • ItemCloned Flag: The ItemCloned flag was introduced with macOS 10.13. When set, it indicates that the file system object at the specific path supplied in the event is a clone or was cloned.
  • File System Node ID: Beginning with 10.13, FSEvents records now contain a File System Node ID.
    • e.g., if the FSEvents came from an HFS+ formatted volume, this value would represent the Catalog Node ID.
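
You can see the header change for yourself by peeking at the first four bytes of a decompressed fsevents page. Here is a minimal Python sketch; the file path is a placeholder, and carved or corrupted files may not begin with a valid page header:

import gzip

# fsevents log files are gzip-compressed; the path below is a placeholder.
with gzip.open('/path/to/.fseventsd/0000000000abcdef', 'rb') as f:
    magic = f.read(4)

if magic == b'2SLD':
    print('macOS 10.13+ (High Sierra) format')
elif magic == b'1SLD':
    print('pre-10.13 format')
else:
    print('Unrecognized magic: %r' % magic)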

FSEventsParser Database Report Views

Within the SQLite database, report views have been added for common artifacts. The report views are defined in the 'report_queries.json' file. They include:

  • Downloads Activity
  • Mount Activity
  • Browser Activity
  • User Profile Activity
  • Dropbox Activity
  • Email Attachments Activity
  • and more..
To access the report views, open the SQLite database generated by the script in your SQLite viewer of choice and expand "Views".
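
You can also query a view directly with a few lines of Python. A minimal sketch is below; the database filename is illustrative (use the .sqlite file the script produced), and TrashActivity is one of the view names shown later in this post:

import sqlite3

# Open the database produced by FSEventsParser (filename is illustrative).
conn = sqlite3.connect('FSEvents.sqlite')
# Each report is a view inside the database; query it like a table.
for row in conn.execute('SELECT * FROM TrashActivity'):
    print(row)
conn.close()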

FSEventsParser Custom Reports

The FSEventsParser script now exports custom report views from the database to individual TSV files during processing.


The custom report views are defined in the file 'report_queries.json' which is also available on GitHub.

Users can modify the queries or add new ones to the json file using a text editor. Two examples are shown below: TrashActivity and MountActivity.

To add new queries to the JSON processing list, follow the JSON syntax shown below. Define the report views within the 'processing_list' array. To add a new item to the array, define:
1) 'report_name': The report/view name.
2) 'query': The SQLite query to be run.

Notes:

  • The report name must be unique and must match the view name in the SQLite query. e.g.
    • 'report_name': 'TrashActivity'
    • 'query':'CREATE VIEW TrashActivity AS ....'

  • The query follows standard SQLite syntax, must be valid, and is stored in the json file as a single-line string value.
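
As a concrete illustration, here is a hedged Python sketch that appends a new entry to the 'processing_list' array. The report name is hypothetical, and the table and column names in the SELECT are assumptions for illustration only, not the script's actual schema:

import json

new_report = {
    # Hypothetical report; the name must be unique and match the view name.
    "report_name": "ScreenshotActivity",
    # Single-line SQLite query; table/column names here are assumptions.
    "query": ("CREATE VIEW ScreenshotActivity AS "
              "SELECT * FROM fsevents "
              "WHERE filename LIKE '%/Desktop/Screen Shot%'")
}

with open('report_queries.json', 'r+') as f:
    config = json.load(f)
    config['processing_list'].append(new_report)
    f.seek(0)
    json.dump(config, f, indent=4)
    f.truncate()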


FSEventsParser Usage

All options are required when running the script. 

==========================================================================
FSEParser v 3.1  -- provided by G-C Partners, LLC
==========================================================================

Usage: FSEParser_V3.1.py -c CASENAME -q REPORT_QUERIES -s SOURCEDIR -o OUTDIR

Options:
  -h, --help        show this help message and exit
  -c CASENAME       The name of the current session, used for naming standards
  -q REPORTQUERIES  The location of the report_queries.json file containing custom report
                    queries to generate targeted reports
  -s SOURCEDIR      The source directory containing fsevent files to be parsed
  -o OUTDIR         The destination directory used to store parsed reports

Below is an example of running the script.
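
For instance, a run against an extracted .fseventsd directory might look like this (the case name and paths are placeholders, not from the original post):

python FSEParser_V3.1.py -c Case01 -q report_queries.json -s /evidence/.fseventsd -o /cases/Case01/reports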


For more information about FSEvents and how you can use them in your investigation visit http://nicoleibrahim.com/apple-fsevents-forensics/.

If you have any comments or questions, please feel free to leave them below.


National Collegiate Cyber Defense Competition Red Team Debrief 2017

Hello Reader,
        I've been busy lately, so busy I didn't get around to posting this year's red team debrief from the National CCDC. Having just left Black Hat/BSides LV/DEF CON and run our first DEF CON DFIR CTF, I thought it was important to get this up and talk about the lessons learned.

The Debrief

First of all, for those of you coming just to get the presentation, it's here: https://www.dropbox.com/s/fy23c7wi35qe81b/NCCDCRedTeamDebrief2017.pptx?dl=0

For those of you who have no idea what any of this means, let me take a step back.

What is CCDC?


The National Collegiate Cyber Defense Competition (CCDC) is a now 12-year-old competition in which colleges around the United States form student teams to defend networks. CCDC is different from other competitions involving network security in that it focuses strictly on defense. Students who play are put in charge of a working network that they must defend; the only offensive activity in the competition comes from a centralized red team.

The kind of enterprise network students take charge of changes each year. Past years' business scenarios have included:

  • Private Prison Operator
  • Electric Utility
  • Web hosting
  • Game Developer
  • Pharma
  • Defense Contractor
  • and more!
The idea is that the last IT team has been fired and the student team is coming in to keep things running and defend the network. While the students are working on making sure their systems are functioning, they also have to watch for, respond to, and defend against the competition red team.

Scoring happens a couple ways. 

Students get points for:
  • Keeping scored services running (websites, ecommerce sites, ssh access, email, etc..)
  • Completing business requests such as policy creation, password audits and disaster recovery plans
  • Presenting their work to the CEO of the fake company
  • Responding to customers 


Students lose points for:

  • Red team access to user or administrative credentials
  • Red team access to PII data
  • Services not responding to scoring checks (aka services being down)
  • SLA violations, which kick in if a service stays down for a period of time
There are now 160 universities competing in 10 regions across the United States. If a student team wins their region, they make it to nationals, where the top 10 teams in the country compete for some pretty amazing prizes, including on-the-spot job offers from Raytheon.


If you are a student or a professor who would like to know more about competing you can go here: http://nccdc.org/index.php/competition/competitors/rules

What is the National CCDC Red Team?

The National CCDC Red Team is a group of volunteers who work to bring custom malware, C2, exfiltration, and persistence strategies to bear each year to give the students the best real-world threat experience. I'm the captain of the red team and have been for the last 10 years.



How do I get on it? 

When the call for volunteers goes out send a resume to volunteer@nccdc.org.

Be advised our threshold for acceptance is very high and we look for the following:
- Active projects on GitHub or elsewhere that show your experience
- Real experience in developing, maintaining, and layering persistence
- Custom, unpublished malware kits to bring to bear

We don't care about certs, years of experience, or who you work for. We need people who can not only get in (the easy part) but also stay in over a two-day period of competition while an aggressive group of defenders seeks to keep you out.


Contents in Sparse Mirror may be Smaller than they Appear

By Matthew Seyer

As many of you know, David Cowen and I are huge fans of file system journals! This love also extends to the change journals implemented by operating systems, such as FSEvents and the $UsnJrnl:$J. We have spent much of our dev time writing tools to parse these journals. Needless to say, we have lots of experience with file system and change journals. Here is some USN Journal logic for you.

USN Journal Logic

First off, it is important to know that the USN Journal file is a sparse file. MSDN explains what a sparse file is: https://msdn.microsoft.com/en-us/library/windows/desktop/aa365564(v=vs.85).aspx. When the USN Journal (referred to as $J from here on out) is created, it is given a max size (the area of allocated data) and an allocation delta (the size in memory that stores records before they are committed to the $J on disk). This is described here: https://technet.microsoft.com/en-us/library/cc788042(v=ws.11).aspx.

The issue is that many forensics tools cannot export a file as a sparse file. Even if they could, only a few file systems support sparse files, and I don't even know if a sparse file on NTFS is the same as a sparse file on OSX. But this leads to a common problem: the forensic tool sees the $J as larger than it really is:

[Screenshot: the $J listed at 20,380,061,592 bytes in the forensic tool]

While this file is 20,380,061,592 bytes in size, the allocated portion of records is much smaller. Most forensic tools will export the entire file with the unallocated data as 0x00, which makes sense when you look at the MSDN Sparse File section (link above). When we extract this file with FTK Imager, we can verify with the Windows command `fsutil sparse` that the exported file is not a sparse file (https://technet.microsoft.com/en-us/library/cc788025(v=ws.11).aspx):

[Screenshot: fsutil sparse queryflag reporting that the exported $J is not sparse]
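
Typed out, that check looks like the following (the drive letter and path are placeholders); it should report that the file is not set as sparse:

fsutil sparse queryflag E:\Extracted\$J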

Trimming the $J

Once it's exported, what's a good way to find the start of the records? I like to use 010 Editor. I scroll towards the end of the file where there are still empty blocks (all 0x00s), then I search for 0x02, as I know I am looking for USN record version 2:

[Screenshot: 010 Editor search for 0x02 locating the first USN record]


Now if I want to export just the record area, I can start at the beginning of this found record, select to the end of the file, and save the selection as a new file:

[Screenshot: saving the selection as a new file in 010 Editor]

The resulting file is 37,687,192 bytes in size and contains just the record portion of the file.


[Screenshot: the trimmed file at 37,687,192 bytes]

This is significantly smaller in size! Now, how do we go about this programmatically?

Automation

While other sparse files can have interspersed data, the $J sparse file keeps all of its data at the end of the file. This works because you can associate the Update Sequence Number in a record with that record's offset in the file: a record whose USN is N begins N bytes into the $J data stream. If you want to look at the structure of the USN record, here it is: https://msdn.microsoft.com/en-us/library/windows/desktop/aa365722(v=vs.85).aspx. I will note that I would go about this two different ways: one method for a file that has been exported by a forensic tool, and a different method for a file extracted using the TSK lib. For now, we will just look at the first scenario.

Because the records are located in the last blocks of the file, I start from the end of the file and work backwards to find the first record, then write out just the records portion of the file. This saves a lot of time because you are not searching through potentially many gigs of zeros. You can find the code at https://github.com/devgc/UsnTrimmer. I have commented the code so that it is easy to understand what is happening.
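
To make the idea concrete, here is a minimal sketch of that backwards-scan approach, assuming version 2 USN records. This is my illustration, not the UsnTrimmer code itself; the chunk size and record-length sanity bounds are assumptions:

import struct
import sys

CHUNK = 4096  # scan granularity (assumed; any cluster-sized value works)

def find_records_start(f, size):
    # Walk backwards from the end of the file until we reach a chunk
    # that is entirely 0x00 -- the record area lies after this point.
    offset = (size // CHUNK) * CHUNK
    while offset > 0:
        offset -= CHUNK
        f.seek(offset)
        chunk = f.read(CHUNK)
        if chunk == b'\x00' * len(chunk):
            break
    # Scan forward for the first plausible USN_RECORD_V2 header:
    # RecordLength (4 bytes), MajorVersion == 2, MinorVersion == 0.
    # Records are 64-bit aligned; the length bounds are heuristics.
    f.seek(offset)
    data = f.read(size - offset)  # tail only; tens of MB for a typical $J
    for pos in range(0, len(data) - 7, 8):
        rec_len, major, minor = struct.unpack_from('<IHH', data, pos)
        if major == 2 and minor == 0 and 60 <= rec_len <= 4096 and rec_len % 8 == 0:
            return offset + pos
    return None

def main():
    # Usage: python usn_trim_sketch.py <exported_$J> <output.trim>
    src, dst = sys.argv[1], sys.argv[2]
    with open(src, 'rb') as f:
        f.seek(0, 2)  # seek to end to learn the file size
        size = f.tell()
        start = find_records_start(f, size)
        if start is None:
            sys.exit('No USN records found')
        f.seek(start)
        with open(dst, 'wb') as out:
            out.write(f.read())  # keep only the record portion

if __name__ == '__main__':
    main()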

Now let's use the tool:

[Screenshot: running UsnTrimmer against the exported $J]

We see that the usn.trim file is the same as the one we made manually, but let's check the hash to make sure we have the same results as the manual extract:
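
If you want to script that comparison, a minimal hashing sketch follows (the two filenames are placeholders for the manual and automated trims):

import hashlib

def md5sum(path):
    # Hash in 1 MiB blocks so large extracts are not loaded into memory.
    h = hashlib.md5()
    with open(path, 'rb') as f:
        for block in iter(lambda: f.read(1024 * 1024), b''):
            h.update(block)
    return h.hexdigest()

print(md5sum('manual.trim'))
print(md5sum('usn.trim'))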

[Screenshot: matching hashes for the manual and UsnTrimmer extracts]

So far I have verified this on a SANS 408 image's $J extract and some local $J files. But of course, make sure you use multiple techniques to verify; this was quick proof-of-concept code.

Questions? Ask them in the comments below or send me a tweet at @forensic_matt.