
Daily Blog #43: Sunday Funday Winner 8/5/13

Hello Reader,
      Another Sunday Funday is behind us, and some more great answers were given; thanks to everyone who submitted on Google+ and anonymously! I've learned from this week's challenge that I need to be a bit more specific to encourage more focused answers, and I'll make sure to do that for next week's challenge. This week Eric Zimmerman turned in a great answer, sharing the win with Jake Williams.

Here was the challenge:
The Challenge: Since we are giving away a copy of Triage, let's have a question related to manually triaging a system.
For a Windows XP system:
You have arrived onsite at a third-party company that is producing a product for your company. It is believed that one of its employees has exfiltrated the database of customer information you provided for mailing and processing sometime in the last 30 days. While the third-party company is cooperating with the investigation, they will not allow you to image every system and take the images back to your lab. However, they will allow you to extract forensic artifacts to determine whether there is evidence of exfiltration, and will then allow a forensic image to be created and taken offsite.
With only forensic artifacts available and a 32GB thumb drive, what artifacts would you target to gather the information you would need to prove exfiltration?

Here is Eric Zimmerman's winning answer:
Since this is a triage question, the goal is to get as much info in as short a time frame as possible. The idea is to cast as wide a net into a computer's data as possible and intelligently look at that data for indicators of badness.
I am not going to include every key and subkey, how to query last-write times and values, how to decode things from the registry, or other mundane details. These steps should be automated as much as possible for consistency and efficiency anyway.
The first thing I would do is interview management at the company to find out what kind of usage policies they have: Are employees allowed to install whatever software they want? Any access controls? Who has rights to where? What kind of database was my customer data stored in? Who has rights to that database? And so on.
I would also ask management who their competitors are and then locate their web sites, domain names, etc.
Once I had the basic info I would assemble a list of relevant keywords (competitor names, relevant file extensions, etc.). I would also look specifically for tools that can be used to connect to the database server and interact with it. This of course changes depending on which database it is (for MySQL I may look for PuTTY or other terminal programs; for Oracle, the Oracle client; for SQL Server, that client, LINQPad, etc.).
With that basic info in hand I would triage each computer as follows:
1. Collect basic system information such as when Windows was installed, when it was last booted, etc.
2. Check running processes for things like cloud storage and remote access software (Dropbox, SkyDrive, TeamViewer, other remote access tools).
3. Look for any out-of-the-ordinary file shares on the computer that could be used to access it from elsewhere on the network.
4. Check MRU keys for network shares, both mapped and accessed via the command line.
5. Dump the DNS cache and compare it against the keyword lists.
6. Dump open ports and compare them against a list of processes of interest. Are any remote access tools running? File sharing?
7. Look to see what data, if any, is present on the clipboard. Are there any suspicious email addresses, or the text of an email or other document? What about a file or a list of files?
8. Unpack all prefetch files and see what applications have been executed recently (certainly within the last 30 days, but expand as necessary). Again we key in on processes of interest, etc.
9. Look at all the installed applications on each computer, specifically those installed within the last 30 days.
10. Dump a list of every USB device ever connected to the machine, including make, model, and serial number. Also reference, when available, the last-inserted date of each device. Cross-reference this list with any company-issued thumb drives identified in the interviews. Make a note of any drive letters the devices were last mounted to. Also process and cross-reference setupapi.log for devices connected within the last 30 days.
11. Dump web browser history for IE, Firefox, Chrome, and Safari and look for keywords, competitor URLs, etc. Home in on the last 30 days, but look for keywords through the entire history in case things were initiated before the data was exfiltrated. Look for hits against cloud storage, VNC, and similar.
12. Dump web browser search history, including Google, Yahoo, YouTube, Twitter, social networks, etc., and again filter by the last 30 days with keyword hits across all date ranges. Also look for references to file activity such as file:///D:/somePath, etc.
13. Dump passwords for browsers (all of them), mail clients, remote access tools, and network passwords (RDP, etc.). Are any webmail addresses saved by the browsers?
14. Dump keys from the registry, including CIDSizeMRU, FirstFolder, LastVisitedMIDMRU, LastVisitedMIDMRULegacy, MUICache, OpenSavePidlMRU, RDP sessions, RecentDocs, TypedPaths, TypedURLs, UserAssist, AppCompatCache, and of course ShellBags. All of these keys should be checked for keyword hits as before. Specifically, look for any USB references.
15. Look for instant messaging programs and chat history; for Skype, include who they were talking to, whether any files were transferred, and so on.
16. Look for any P2P programs that could have been used to exfiltrate data.
17. Search the file systems for such things as archives, shortcut (LNK) files, evidence-eliminator-type programs, drive- and file-wiping programs, etc. Cross-reference any LNK files with paths used by USB devices and ShellBags to get an idea of what kinds of files were kept on any externally connected devices. Look inside any archives found (zip, rar, tar, 7zip, etc.) for any keywords of interest (like a text file containing my customer data). Filter based on MAC dates for files and of course look for keyword hits.
18. Look at event logs for relevant entries (what is relevant would be determined by how the computers are configured, what kind of auditing is enabled by the network admins, etc.). Things like remote access and logins, program execution, etc. would be key here.
19. Time permitting, and based upon the results from above, use a specialized tool to unpack restore points and look for files as outlined above (LNK files, programs installed, etc.).
20. Look in the recycle bin for files (hey, I've worked plenty of cases where the incriminating evidence was in there!).
21. Dump RAM and run a quick "strings" against the binary, then look for keywords. Going crazy with Volatility is beyond triage, so this will suffice.
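The strings-plus-keywords step is simple enough to sketch in a few lines of Python. The dump bytes and keyword list below are invented for illustration; real keywords would come from the interview step, and the input would be a memory image read as bytes.

```python
import re

def extract_strings(data: bytes, min_len: int = 6):
    """Pull printable-ASCII runs out of a raw dump, like the `strings` tool."""
    pattern = rb"[ -~]{%d,}" % min_len
    return [m.group().decode("ascii") for m in re.finditer(pattern, data)]

def keyword_hits(candidates, keywords):
    """Keep only the strings containing a case-insensitive keyword."""
    lowered = [k.lower() for k in keywords]
    return [s for s in candidates if any(k in s.lower() for k in lowered)]

# Synthetic dump for illustration only.
dump = b"\x00\x01ftp://competitor.example/upload\x00\xffcustomers.mdb\x00junk\x02"
hits = keyword_hits(extract_strings(dump), ["customers.mdb", "competitor"])
```

For triage purposes this is plenty; anything deeper than keyword hits on strings belongs in the full examination after imaging.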
Depending on where the database lives I would triage that system in the same way (if Windows based), but if it's MySQL on Linux or something similar I would review bash history files, sign-ins, FTP logs, etc. for signs of data being exfiltrated. I would look at the database log files for logins and, if available, SQL statements executed, errors, etc. from the last 30 days.
Finally, I would ask about and review any web proxy logs or other logging systems the company has to look for suspicious activity.
All of this data would be automatically added to a timeline that could then be used to further narrow in on interesting periods of activity on each system.
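The timeline step is conceptually tiny: if every artifact parser emits (timestamp, source, description) records, a unified timeline is just a sort across all sources. A minimal sketch, with events invented for illustration:

```python
from datetime import datetime

# Every artifact parser emits (timestamp, source, description) records;
# a unified timeline is then just a sort across all sources.
# These sample events are invented for illustration.
events = [
    (datetime(2013, 7, 20, 14, 2), "prefetch", "7Z.EXE executed"),
    (datetime(2013, 7, 20, 13, 50), "setupapi", "USB disk first inserted"),
    (datetime(2013, 7, 20, 13, 55), "shellbags", "E:\\exports folder browsed"),
]
timeline = sorted(events)  # tuples sort on the timestamp first
```

Even this toy ordering shows the value: insertion, browsing, then archiver execution reads as a coherent exfiltration story, where the same three facts scattered across three reports do not.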
With all the data collected I would want to start looking for default export names or extensions, keyword hits, and whatnot. The machines with more indicators would go up on my list of machines to image; machines with little to no indicators would be removed from consideration.
ShellBags are going to be a key artifact in this case because they contain so much good data on Win XP. What other files were on any external devices connected to the systems? Do I see the presence of "hacking" tools, FTP clients, PuTTY, etc.? Are there folders or files indicative of my data or any of my competitors?
32GB is more than enough space to triage all the computers found at the business, as there isn't much need to copy files off the computers.
Now, all those steps are a heck of a lot to do manually (and several of them would be near impossible to do by hand), so in my case I would just run osTriage on each computer and it would pull all that info (and more) in a few seconds. Add a bit of time to review the results, and I would know which machines I wanted to image for a more thorough review.
With that info in hand I would most likely already know who exfiltrated the data, but I would still request an image be made of each machine where suspicious activity was found.
(All of those steps could be further unpacked, but since this is a triage-based Funday question my response is kept in true triage style: fast and just enough of a deep dive to home in on computers of interest.)

However, Special Agent Zimmerman cannot accept the prize. So Jake Williams' hard work in his winning answer, seen below, wins the prize of a one-year license of AccessData Triage:
What artifacts would you look for across multiple Windows XP machines with only a 32GB USB drive to hold them all?
So we think that an evil user exfiltrated a database we provided to the business partner.  Because of the verbiage, we’re working under the assumption here that they were provided with an actual database file (.mdb).
Great. That probably wasn't bright. In the future, we should NOT provide the business partner with the database file, but rather provide secure and AUDITABLE access to the data. There are other issues here, such as revocation of access and keeping the data picture current (including opt-outs, for example), that further reinforce why auditable access is better than handing over a file.
For this writeup, I’ll focus on evidence of execution, evidence of access, and then touch on potential evidence of exfiltration.  Here’s why: under the best of circumstances, we can have a hard time finding evidence of exfiltration. But these aren’t the best of circumstances. 
1. We have no information about how the partner may have exfiltrated the data.  
2. We have limited space in which to collect our data for further probable cause.
We’re really looking for suspicious activity on the machines that will open the door to full images for a complete investigation.  For that reason, we have to keep the scope small and limit it to that which will cover the most ground.
Evidence of execution:
So the first thing I want is access to prefetch files on all the machines.  This is my first stop.  If the user exfiltrated the database AND we have a DLP solution in place, they may need to encrypt the file first. I’d want to look for rar.exe, winzip.exe, or 7z.exe to look for evidence of execution of those utilities. Also, we’re looking for evidence of execution of any anti-forensics tools (commonly used when users are doing illegal stuff).  As a side note here, I’ve performed forensic investigations where I’ve found stuff like wce.exe or other “hacking tools” in prefetch.  In at least one particular case, this discovery was not part of the investigation specifically.  However, the fact that we highlighted it bought us a lot of good will with the client (since this was an indicator of a compromise or an AUP violation).
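For XP, pulling execution evidence out of prefetch files is mechanical once you know the header layout. A minimal sketch, assuming the publicly documented version-17 (XP/2003) offsets: format version at 0x00, executable name at 0x10 (UTF-16LE, up to 60 bytes), last-run FILETIME at 0x78, and run count at 0x90. The buffer below is synthetic; a real header would be read from C:\Windows\Prefetch\*.pf.

```python
import struct
from datetime import datetime, timedelta

EPOCH_1601 = datetime(1601, 1, 1)  # FILETIME epoch

def parse_xp_prefetch(buf: bytes):
    """Minimal parse of an XP/2003 (version 17) prefetch header,
    using publicly documented offsets (an assumption of this sketch)."""
    version, = struct.unpack_from("<I", buf, 0x00)
    name = buf[0x10:0x10 + 60].decode("utf-16-le").split("\x00")[0]
    filetime, = struct.unpack_from("<Q", buf, 0x78)
    run_count, = struct.unpack_from("<I", buf, 0x90)
    last_run = EPOCH_1601 + timedelta(microseconds=filetime // 10)
    return version, name, last_run, run_count

# Build a synthetic header for illustration.
buf = bytearray(0x98)
struct.pack_into("<I", buf, 0x00, 17)            # version 17 = XP/2003
exe = "7Z.EXE".encode("utf-16-le")
buf[0x10:0x10 + len(exe)] = exe
ft = int((datetime(2013, 8, 4) - EPOCH_1601).total_seconds() * 10**7)
struct.pack_into("<Q", buf, 0x78, ft)            # last run: 2013-08-04
struct.pack_into("<I", buf, 0x90, 3)             # run count
```

A triage script would run this across every .pf file and flag names like RAR.EXE, 7Z.EXE, or known wiping tools.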
We’d want to know if the users used any cloud services that aren’t explicitly allowed by policy. For example, Dropbox, SkyDrive, GoogleDrive, etc. would be interesting finds.  While use of these services doesn’t necessarily imply evil, they can be used to exfil files.  Evidence of execution for any of these services would provide probable cause to get the logs from the devices.  For those who don’t know, this is a real passion of mine.  I did a talk at the SANS DFIR Summit looking at detecting data exfiltration in cloud file sharind services and the bottom line is that it isn’t easy. Because of the complexity, I expect criminals to use it more.  Those logs can contain a lot of information, but grabbing all logs in all possible user application directories might be too broad (especially given the 32gb USB drive limitation).  We’ll just start small with Prefetch. 
I’d also want to get uninstall registry keys (HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Uninstall). My thoughts here are that 32GB is so little data for an enterprise that I’d be looking for evidence of programs installed that may have been used to read the data from the database or exfiltrate the data.  Again, this is so little data that we can store it easily.
UserAssist registry keys from all users would also be on my shopping list.  If the company uses a domain (and honestly what business doesn’t) this will be easier if roaming profiles are enabled.  We want to pull from these two keys for windows XP:
▪ HKEY_USERS\{SID}\Software\Microsoft\Windows\CurrentVersion\Explorer\UserAssist\{GUID}\Count\
▪ HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\UserAssist\{GUID}\Count\
Where GUIDs are usually {75048700-EF1F-11D0-9888-006097DEACF9} or {5E6AB780-7743-11CF-A12B-00AA004AE837}
 Again, I’m focusing on evidence of execution because space is tight. These entries won’t cover everything that was executed, generally it only includes items opened via Explorer.exe (double click).  Also, the entries are ROT13 encoded, but that’s easily overcome. Because it is possible that users deleted data, we might also want to grab UserAssist from NTUSER.DAT files in restore points.  This might be pushing the limit of my storage depending on how many machines our target has to triage (and how many Restore Points they each have).
Evidence of Access:
In this category, I’d be looking at MRU keys for Access.  Now these change with the version of MS Office, but a good starting point is to look in these subkeys in the user’s profile (where X.X is the version):
• Software\Microsoft\Office\X.X\Common\Open Find\Microsoft Access\Settings\Open\File Name MRU
• Software\Microsoft\Office\X.X\Common\Open Find\Microsoft Access\Settings\File New Database\File Name MRU
• Software\Microsoft\Office\X.X\Access\Settings
Locating our filename doesn’t prove anything, presumably we gave it to them to open, but it gives us a start.
If we know that the file was placed on a network share with auditing enabled, we want to identify who had access to that share using the records in the Security event log. If auditing wasn't enabled, we may still be able to find evidence of failed logon attempts to the share in the event logs on the file server. Successful connections to the share may be found in the MountPoints2 key (Software\Microsoft\Windows\CurrentVersion\Explorer\MountPoints2), so we want to grab that from users' profiles. Of course, it goes without saying that just because someone mapped a share doesn't mean they even read our file (let alone exfiltrated it).
Event logs:
Depending on the event logs available, we may be able to tell if a user has accessed the database via an ODBC connector. Usually users just open an Access file, but they could add it as an ODBC data source. I don't have my systems available here at DEFCON to do testing, but if the file was added as an ODBC source, there should be some remnants left over to locate, and these will often show up in event logs. We want to check event logs for our database file name.
Possible Evidence of Exfiltration:
Firewall logs are another item I'd collect. Yes, I know some people will laugh at me here, but we are looking for data exfiltration, and that may have happened over the network. If we have some idea of where the data was exfiltrated to, firewall logs, if enabled, are a useful source of information. Fortunately for our case, with only a 32GB USB drive for the whole network, the logs are capped at 4MB by default. This allows us to collect a lot of them without taking up much space: logs from 100 machines would consume at most 400MB of our space.
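XP's pfirewall.log is W3C-style text whose #Fields header names the columns, so a generic parser stays tiny. A sketch; the sample log lines are invented:

```python
def parse_pfirewall(log_text: str):
    """Parse an XP pfirewall.log (W3C-style) into dicts keyed by #Fields."""
    fields, records = [], []
    for line in log_text.splitlines():
        if line.startswith("#Fields:"):
            fields = line.split()[1:]          # column names follow the tag
        elif line and not line.startswith("#"):
            records.append(dict(zip(fields, line.split())))
    return records

# Invented sample; the real file defaults to %windir%\pfirewall.log on XP.
sample = """#Version: 1.5
#Software: Microsoft Internet Connection Firewall
#Fields: date time action protocol src-ip dst-ip src-port dst-port
2013-07-20 14:05:01 OPEN TCP 192.168.1.15 203.0.113.9 49201 21
2013-07-20 14:06:30 CLOSE TCP 192.168.1.15 203.0.113.9 49201 21
"""
conns = parse_pfirewall(sample)
```

Keying off the #Fields line rather than hardcoded columns means the same parser survives configuration differences between machines.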
Setupapi.log is another file I'd like to collect. This log shows the first insertion time for USB devices (a common exfiltration point). While this log can't tell us if a file was copied to a USB device, analyzing setupapi.log files across an enterprise can show patterns of USB use (or misuse). Correlating that information with their security policy may yield suspicious behavior that could be probable cause for further forensic images.
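A rough sketch of mining setupapi.log for USB insertions: XP writes bracketed, timestamped section headers followed by hardware-ID lines, so pairing the two yields (first-insert time, device) tuples. Real logs vary by service pack, and the sample text here is synthetic:

```python
import re

def usb_first_inserts(log_text: str):
    """Pair bracketed section timestamps in an XP setupapi.log with any
    USBSTOR hardware-ID lines that follow them."""
    results, current_ts = [], None
    for line in log_text.splitlines():
        header = re.match(r"\[(\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2})", line)
        if header:
            current_ts = header.group(1)
        elif "usbstor\\" in line.lower() and current_ts:
            hw_id = line.split(":", 1)[-1].strip()
            results.append((current_ts, hw_id))
    return results

# Synthetic log text; real setupapi.log entries vary in wording.
sample = (
    "[2013/07/20 13:50:12 672.7 Driver Install]\n"
    "#-019 Searching for hardware ID(s): "
    "usbstor\\disksandisk_cruzer________1.26\n"
)
inserts = usb_first_inserts(sample)
```

Run across every collected setupapi.log, the output is exactly the cross-machine USB-usage pattern the paragraph above is after.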
If there are other logs (from an endpoint protection suite) that log connections, I’d want to see if I could pull those as well.  While we’re at it, we’d want to filter event logs (particularly application event logs) for detection notices from the AV software.  What we are looking for here is to determine if any of the machines in scope have had infections since we turned over our database file.  We can filter by the log provider and we probably want to eliminate startup, shutdown, and update messages for the AV software.
If I had more space, I’d grab index.dat files from profile directories.  Depending on the number of systems and profiles, we’d probably run out of space pretty quickly though.  What we’re looking for here are applications that may use WinInet APIs and inadvertently cache information in index.dat files.  This happens sometimes in malware and certainly data exfiltration applications might also fit the bill.  However, my spidey-sense tells me that these index.dat files alone from many profiles/machines could exhaust my 32GB of space.
Parting thoughts:
Forensics where we rely on minimal information is a pain. You have to adapt your techniques and triage large numbers of machines while collecting minimal data (32GB in this case). I'd like to do more disk forensics and build timelines; I might even use the NTFS TriForce tool. If this were a single machine we were performing triage on, then my answer would certainly involve pulling the $UsnJrnl, $LogFile, and $MFT files to start building timelines. The SYSTEM, SOFTWARE, and NTUSER.DAT hives on the machine would also be on my short shopping list. However, across the multiple machines I believe the scenario covers, this just isn't feasible in the space we've been given.

I'll follow up this contest with how I approached this case in real life in a later blog post. I will say that in my case the first thing I did was triage which systems showed access to the database itself to create a pool of possible exfiltrators. Then I went back and started pulling the data discussed in our two winning answers! From there I was able to discover enough suspicious activity and patterns of access to the underlying data, through the UserAssist, ShellBags, and LNK files, to get approval to create a forensic image.

Tomorrow we continue the web 2.0 forensics series as I look to see when I should stop and move on and then come back to it later with other services besides Gmail.
