Daily Blog #142: Finding new artifacts - Re-creation testing part 2: Isolation and Uniqueness

Hello Reader,
          As I write this I'm on a flight to PFIC, where I will be speaking on our further research into file system forensics. PFIC is a fun conference: it's big enough to reach a critical mass of people but small enough to allow for easy conversation. I'm looking forward to doing some demos and talking tech in the upcoming week. If you are at PFIC please don't hesitate to come up and say hi; it's always nice to know that the view count I watch to see if anyone is reading reflects more than just web crawlers :)

Today I wanted to continue the Finding New Artifacts series and get more into what we do. This is not the only way to do things, but it's a set method that has been successful in my lab and led to most of the research you've read on this blog and in the books. I'm currently typing on my Surface so this won't be the longest of posts, but I wanted to cover the concepts of Isolation and Uniqueness today.

Isolation 

When I say isolation here I don't mean process isolation, air gapping, or any other standard method. I mean isolating, as much as possible, what you're testing from what the operating system is generating in the background. When we first started our file system journaling research we did so on the main system disk within our virtual machine. Doing this led to mass confusion because we couldn't tell which changes in the unknown data structure we were trying to decode came from our own actions and which came from the system's background activity.

We solved this issue by creating a separate disk and partition where the only actions taken against it were our own and those of the file system drivers. Once we knew that every change in the data structure reflected our own actions, it was much easier to find patterns and timestamps.

I've since applied this method of isolation whenever possible, always trying to move whatever programs/files/methods I'm testing to a non-system disk not shared with any other test I'm running. When I do this my results are more reliable and they come faster as well. I know reading this it may seem obvious, but you never really appreciate just how much activity the operating system generates in the background until you try to go through every change looking for your test results.
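To make this concrete, here is a minimal sketch (in Python) of one way to verify that a dedicated test volume contains only your changes. The T:\ drive letter is a hypothetical example of a freshly formatted, non-system volume, and the before/after snapshot diff is just an illustration of the idea, not a specific tool we use:

import os
from pathlib import Path

TEST_VOLUME = Path("T:/")  # assumption: a dedicated, freshly formatted non-system volume

def snapshot(volume):
    """Map every file on the volume to its last-modified time."""
    state = {}
    for root, _dirs, files in os.walk(volume):
        for name in files:
            path = Path(root) / name
            state[str(path)] = path.stat().st_mtime
    return state

before = snapshot(TEST_VOLUME)
# ... perform the single action under test here, e.g. create or wipe a file ...
after = snapshot(TEST_VOLUME)

changed = sorted(p for p in after if p not in before or after[p] != before[p])
deleted = sorted(set(before) - set(after))
print("changed/created:", changed)
print("deleted:", deleted)

On an isolated volume everything in those two lists came from your own test; run the same diff on the system disk and your changes would be buried in background noise.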

Uniqueness

The concept of uniqueness applies to what you name the things you test with. The idea is that every directory, file, program, and DLL you create/call/reference should have a name unique enough that searching for it won't return any false positives. If you are going to run multiple tests in sequence, it's equally important that each run be identifiable as to which test it's part of. For instance, let's say you are testing a system cleaner (CCleaner, for instance) to determine what it does when it wipes a file. You would want to create a test plan where you document:
  • Each combination of options you are going to try
  • The operating system version and service pack you are testing
  • Which file system you are testing
  • What version of the program you are testing
  • The name of the file and directory you wiped
    • An example being UniqueFileToBeWipedTest1
  • The time to the second when you executed the test
  • The time to the second when the process ended
With these facts at hand you can easily separate the changes your test made in that window from those of other tests, and know exactly which files were affected by your testing. The worst thing you can do is document your testing poorly, leaving your results unverifiable by another examiner and forcing you to spend all the time to recreate your work. A minimal sketch of this kind of test logging follows below.
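As an illustration, here is a minimal sketch of a uniquely named, self-documenting test run, assuming the same hypothetical T:\ test volume. The wiper command, file names, and log layout are placeholder examples; in practice you would add columns for the option combination, file system, and tool version from your test plan:

import csv
import platform
import subprocess
from datetime import datetime, timezone
from pathlib import Path

TEST_VOLUME = Path("T:/")       # assumption: dedicated, isolated test volume
LOG_FILE = Path("testlog.csv")  # hypothetical log, kept off the test disk

def run_wipe_test(test_number, wiper_cmd):
    """Create a uniquely named target, run the wiping tool against it,
    and record to-the-second start/end times for the run."""
    target = TEST_VOLUME / f"UniqueFileToBeWipedTest{test_number}"
    target.write_text("known content to be wiped\n")

    started = datetime.now(timezone.utc).isoformat(timespec="seconds")
    subprocess.run(wiper_cmd + [str(target)], check=True)
    ended = datetime.now(timezone.utc).isoformat(timespec="seconds")

    first_row = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="") as f:
        writer = csv.writer(f)
        if first_row:
            writer.writerow(["test", "os", "target", "started", "ended"])
        writer.writerow([f"WipeTest{test_number}", platform.platform(),
                         str(target), started, ended])

A search for "UniqueFileToBeWipedTest1" across your evidence should then hit only that one run, and the logged times bound exactly which changes belong to it.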

That's all for today. Later this week I want to continue this topic, going into how we test, how we pick our tests, and the tools we use to isolate results.

