Thursday, September 22, 2016

Building your own travel sized virtual lab with ESXi and the Intel SkullCanyon NUC

Hello Reader,
          It's been a while, I know, and I'm sorry for not writing sooner, but to quote Ferris Bueller:

"Life moves pretty fast. If you don't stop and look around once in a while, you could miss it."

So while I've worked on a variety of cases, projects and new artifacts to share, I've neglected the blog. For those of you who have been watching/listening, you know I've kept up the Forensic Lunch videocast/podcast, but to be fair the blog is my first child and I've left it idle for too long.

Speaking of the Forensic Lunch if you watched this episode:
https://www.youtube.com/watch?v=Ru8fLioIVlA

You would have seen me talk about building my own portable cloud for lab testing and research. It seems to have been received very well, and I've thoroughly enjoyed using it! So I thought I would detail how I set it up in case you want to do the same.

Step 1. Make an account on vmware.com (https://my.vmware.com/web/vmware/registration)

Step 2. Using Chrome (I hit some errors in Firefox that I couldn't explain), go to this page to register for the free version of ESXi. (Note: this is the free version of ESXi that generates a license key for life; the other version expires after 60 days.)
https://my.vmware.com/en/group/vmware/evalcenter?p=free-esxi6

Step 3. Make a note of your license key, as seen in the picture below. You'll want to copy, paste, and keep it somewhere safe, as it won't show up as a license key associated with your MyVMware account.


Step 4. Click to download the product named "ESXi ISO image (Includes VMware Tools)". You could also download the vSphere client at this point, or you can grab it from a link embedded within the ESXi homepage once you get it installed.

Step 5. After downloading the ISO you will need to put it onto some form of bootable media for it to install onto your Intel Skull Canyon NUC, as it has no optical drive of its own. I chose to use a USB thumb drive. To turn the ISO into a successfully booting USB drive I used Rufus, and you can too.

Step 5a. Download Rufus: https://rufus.akeo.ie/downloads/rufus-2.11.exe
Step 5b. Execute Rufus
Step 5c. Configure Rufus to look something like what I have below, where Device is the USB thumb drive you have plugged in and under ISO image I've selected the ESXi ISO file I downloaded. Then click Start.






Step 6. With your ESXi media now on a bootable USB drive you are ready to move on to the Intel Skull Canyon NUC itself. Start by actually getting one! I got mine at Fry's Electronics; Microcenter also carries them, and they both price match Amazon now. If you want to order online I recommend Amazon, and you can support a good charity while doing so by using smile.amazon.com. I support the Girl Scouts of Northeast Texas with my purchases.

Link to Intel Skull Canyon NUC:
https://smile.amazon.com/Intel-NUC-Kit-NUC6i7KYK-Mini/dp/B01DJ9XS52/ref=sr_1_1?ie=UTF8&qid=1474577754&sr=8-1&keywords=skull+canyon

The NUC comes with a processor, case, power supply and fans, all built in or included in the box. You will need to provide the RAM and storage.


Storage
I used the Samsung 950 Pro Series 512GB NVMe M.2 drive, the NUC can actually fit two of these but one has been enough so far for my initial testing.

Link to storage drive:
https://smile.amazon.com/Samsung-950-PRO-Internal-MZ-V5P512BW/dp/B01639694M/ref=pd_bxgy_147_img_2?ie=UTF8&psc=1&refRID=7N9JV1CX8FJQ4Y3JT858

RAM
For RAM I used Kingston HyperX with two 16GB sticks to get the full 32GB of RAM this unit is capable of.
Link to the RAM here:
https://smile.amazon.com/Kingston-Technology-2133MHz-HX421S13IBK2-32/dp/B01BNJL96A/ref=pd_sim_147_2?ie=UTF8&pd_rd_i=B01BNJL96A&pd_rd_r=7N9JV1CX8FJQ4Y3JT858&pd_rd_w=eFJsO&pd_rd_wg=HPiy3&psc=1&refRID=7N9JV1CX8FJQ4Y3JT858

You can use other storage and RAM of course, I used these because I wanted the speed of NVMe M.2 (2GB/sec reads and 1.5GB/sec writes) with all the memory I could get to feed the VMs that will be running on the NUC.

Step 7. Put the storage and RAM into the NUC, plug it into the wall, attach a USB keyboard and mouse, attach a monitor, and boot into the Intel Visual BIOS. You will need to disable the Thunderbolt controller on the NUC before installing ESXi; you can re-enable it after the installation is done.

To see what to click specifically in order to do this go here:
http://www.virten.net/2016/05/esxi-installation-on-nuc6i7kyk-fails-with-fatal-error-10-out-of-resources/

Step 8. Pop in the bootable USB drive and install ESXi.

You are now ready to start loading ISOs and VMs into your datastore. In the next blog post I'll show how to create an isolated virtual network to put them on.

Sunday, April 24, 2016

Daily Blog #381 National CCDC Redteam Debrief

Hello Reader,
     The 11th year of the National Collegiate Cyber Defense Competition has ended. Congratulations to the University of Central Florida for their third consecutive win. I hope you make it back next year for another test of your school's program and its ability to transfer knowledge to new generations of blue teams.

If you want to show your support for my efforts, there is an easy way to do that. 

Vote for me for Digital Forensic Investigator of the Year here: https://forensic4cast.com/forensic-4cast-awards/


However, the team that won over the Redteam was the University of Tulsa, who came with a sense of humor. Behold their hat and badges:


Also you have to check out the player cards they made here:
https://engineering.utulsa.edu/news/tu-cyber-security-expert-creates-trading-cards-collegiate-cyber-defense-competition/

Here is my favorite:


You can download my Redteam debrief here:
https://drive.google.com/file/d/0B_mjsPB8uKOAcUQtOThUNUpTZ0k/view?usp=sharing

Friday, April 22, 2016

Daily Blog #380: National CCDC 2016

Hello Reader,
           I'm in San Antonio for the National Collegiate Cyber Defense Competition, which starts at 10am CST 4/22/16. If you didn't know, I lead the red team here at Nationals, where the top 10 college teams in the country come to find out who is best at defending their network while completing business objectives.

I'm hoping to follow up this post with some videos and links to what happens tomorrow. In the meantime, make sure to follow #CCDC or #NCCDC on Twitter to watch some of our funny business in real time.

Wednesday, April 20, 2016

Daily Blog #379: Automating DFIR with dfVFS part 6

Hello Reader,
         It's time to continue our series by iterating through all the partitions within a disk or image, instead of just hard coding one. To start you'll need another image, one that not only has more than one partition but also has shadow copies for us to interact with next.

You can download the image here:
https://mega.nz/#!L45SRYpR!yl8zDOi7J7koqeGnFEhYV-_75jkVtI2CTrr14PqofBw


If you want to show your support for my efforts, there is an easy way to do that. 

Vote for me for Digital Forensic Investigator of the Year here: https://forensic4cast.com/forensic-4cast-awards/


First let's look at the code now:

import sys
import logging

from dfvfs.analyzer import analyzer
from dfvfs.lib import definitions
from dfvfs.path import factory as path_spec_factory
from dfvfs.volume import tsk_volume_system
from dfvfs.resolver import resolver
from dfvfs.lib import raw

source_path="Windows 7 Professional SP1 x86 Suspect.vhd"

path_spec = path_spec_factory.Factory.NewPathSpec(
          definitions.TYPE_INDICATOR_OS, location=source_path)

type_indicators = analyzer.Analyzer.GetStorageMediaImageTypeIndicators(
          path_spec)

if len(type_indicators) > 1:
  raise RuntimeError((
      u'Unsupported source: {0:s} found more than one storage media '
      u'image types.').format(source_path))

if len(type_indicators) == 1:
  path_spec = path_spec_factory.Factory.NewPathSpec(
      type_indicators[0], parent=path_spec)

if not type_indicators:
  # The RAW storage media image type cannot be detected based on
  # a signature so we try to detect it based on common file naming
  # schemas.
  file_system = resolver.Resolver.OpenFileSystem(path_spec)
  raw_path_spec = path_spec_factory.Factory.NewPathSpec(
      definitions.TYPE_INDICATOR_RAW, parent=path_spec)

  glob_results = raw.RawGlobPathSpec(file_system, raw_path_spec)
  if glob_results:
    path_spec = raw_path_spec

volume_path_spec = path_spec_factory.Factory.NewPathSpec(
        definitions.TYPE_INDICATOR_TSK_PARTITION, location=u'/',
        parent=path_spec)

volume_system = tsk_volume_system.TSKVolumeSystem()
volume_system.Open(volume_path_spec)

volume_identifiers = []
for volume in volume_system.volumes:
  volume_identifier = getattr(volume, 'identifier', None)
  if volume_identifier:
    volume_identifiers.append(volume_identifier)
 
print(u'The following partitions were found:')
print(u'Identifier\tOffset\t\t\tSize')

for volume_identifier in sorted(volume_identifiers):
  volume = volume_system.GetVolumeByIdentifier(volume_identifier)
  if not volume:
    raise RuntimeError(
        u'Volume missing for identifier: {0:s}.'.format(volume_identifier))

  volume_extent = volume.extents[0]
  print(
      u'{0:s}\t\t{1:d} (0x{1:08x})\t{2:d}'.format(
          volume.identifier, volume_extent.offset, volume_extent.size))

  volume_path_spec = path_spec_factory.Factory.NewPathSpec(
        definitions.TYPE_INDICATOR_TSK_PARTITION, location=u'/'+volume_identifier,
        parent=path_spec)

  mft_path_spec = path_spec_factory.Factory.NewPathSpec(
        definitions.TYPE_INDICATOR_TSK, location=u'/$MFT',
        parent=volume_path_spec)

  file_entry = resolver.Resolver.OpenFileEntry(mft_path_spec)


  stat_object = file_entry.GetStat()

  print(u'Inode: {0:d}'.format(stat_object.ino))
  print(u'Name: {0:s}'.format(file_entry.name))
  outFile = volume_identifier+file_entry.name
  extractFile = open(outFile,'wb')
  file_object = file_entry.GetFileObject()

  data = file_object.read(4096)
  while data:
      extractFile.write(data)
      data = file_object.read(4096)

  extractFile.close()
  file_object.close()
  volume_path_spec=""
  mft_path_spec=""

Believe it or not, we didn't have to change much here to go from looking at one partition and extracting the $MFT to extracting it from all the partitions. We had to do four things.

1. We moved our file extraction code over by one indent, allowing it to execute as part of the for loop we first wrote to print the list of partitions in an image. Remember that Python doesn't use braces to determine how code is executed; indentation decides how the code logic is read and followed.
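To illustrate that indentation point on its own (this is a standalone sketch, not part of the dfVFS script), moving a statement one indent level to the right is exactly what places it inside the loop:

```python
# Illustration only: indentation, not braces, decides what runs inside the loop.
partitions = ['p1', 'p2']
extracted = []

for part in partitions:
    # Indented one level under the for: runs once per partition,
    # just like our moved extraction code.
    extracted.append(part)

# Back at the left margin: runs once, after the loop finishes.
print(extracted)  # ['p1', 'p2']
```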
2. Next we changed the location our volume path specification object points to, from a hard coded /p1 to whichever volume identifier we are currently looking at in the for loop.

 volume_path_spec = path_spec_factory.Factory.NewPathSpec(
        definitions.TYPE_INDICATOR_TSK_PARTITION, location=u'/'+volume_identifier,
        parent=path_spec)

You can see that the location variable is now set to u'/' appended to the volume_identifier variable. This resolves to /p1, /p2, and so on, for as many partitions as the image has.
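A quick sketch of how those location strings come out (the identifier values here are illustrative, not real dfVFS calls):

```python
# Hypothetical volume identifiers, in the form dfVFS reports them (p1, p2, ...).
volume_identifiers = ['p1', 'p2', 'p3']

# Same concatenation the script does when building each partition's location.
locations = [u'/' + volume_identifier for volume_identifier in volume_identifiers]
print(locations)  # ['/p1', '/p2', '/p3']
```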

3. Now that we are extracting this file from multiple partitions, we don't want to overwrite the file we previously extracted, so we need to make each file name unique. We do that by appending the partition number to the file name.

  outFile = volume_identifier+file_entry.name
  extractFile = open(outFile,'wb')

This results in files named p1$MFT, p2$MFT, and so on. To accomplish this we make a new variable called outFile, which is set to the partition number (volume_identifier) appended with the file name (file_entry.name). Then we pass that to the open() call we wrote before.
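The naming scheme itself is just string concatenation, as this tiny standalone sketch shows (values are illustrative stand-ins for what dfVFS would hand us):

```python
# Stand-ins for the values the real script gets from dfVFS:
volume_identifier = 'p2'   # which partition we're on
file_name = '$MFT'         # file_entry.name in the real script

# Concatenating them gives a per-partition output name, so p1's $MFT
# never overwrites p2's.
outFile = volume_identifier + file_name
print(outFile)  # p2$MFT
```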

4. One last simple change.

volume_path_spec=""
mft_path_spec=""

We reset our partition and file path spec objects. Why? Because if we don't, they remain set from the prior iteration and keep appending onto the prior object. That will result in very strange errors.

That's it! No more code changes. 

You can get the code from Github: 
https://github.com/dlcowen/dfirwizard/blob/master/dfvfsWizardv4.py


In the next post we will be iterating through shadow copies!

Tuesday, April 19, 2016

Daily Blog #378: Automating DFIR with dfVFS part 5

Hello Reader,

Wondering where yesterday's post is? Well, there was no winner of last weekend's Sunday Funday.
That's ok though because I am going to post the same challenge this Sunday so you have a whole week to figure it out!

-- Now back to our regularly scheduled series --

              I use Komodo from ActiveState as my IDE of choice when writing Perl and Python. I bring this up because one of the things I really like about it is the debugger it comes with, which lets you view all of the objects you have made and their current assignments. I was thinking about the layer cake example I crudely drew in ASCII in a prior post when I realized I could show this much better with the ActiveState debugger.

So here is what the path spec object we made to access the $MFT in a VHD looks like.


I've underlined in red the important things to draw your attention to when you are trying to understand how that file path specification object we built can access the MFT and all the other layers involved.

So if you look you can see from the top down its:

  • TSK Type with a Location of /$MFT
    • With a parent of TSK Partition type with a location of /p1
      • With a parent of VHDI type 
        • With a parent of OS type with a location of the full path to where the vhd I'm working with sits.

Let's look at the same object with an E01 loaded.


Notice what I highlighted: the image type has changed from VHDI to EWF. Otherwise the object, its properties, and its methods are the same.

Let's do this one more time to really reinforce this with a raw/dd image.


Everything else is the same, except for the type changing to RAW. 

So no matter what type of image we are working with, dfVFS lets us build an object in layers so the code that follows doesn't have to worry about the format underneath. It normalizes access across all the different image type libraries, letting us avoid workarounds like the ones we use in pytsk.
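You don't need dfVFS installed to see the idea. Here is a toy sketch of that layer cake (the PathSpec class here is hypothetical, not the real dfvfs.path API): only the image layer's type changes between formats, while everything above and below stays identical.

```python
# Toy model of a dfVFS-style path spec chain. PathSpec is a made-up
# stand-in for illustration, not the real dfvfs.path API.
class PathSpec(object):
    def __init__(self, type_indicator, location=None, parent=None):
        self.type_indicator = type_indicator
        self.location = location
        self.parent = parent

def build_chain(image_type, image_path):
    # Bottom to top: OS layer -> image layer (VHDI/EWF/RAW) ->
    # partition layer -> TSK file layer, each wrapping its parent.
    os_spec = PathSpec('OS', location=image_path)
    img_spec = PathSpec(image_type, parent=os_spec)
    part_spec = PathSpec('TSK_PARTITION', location='/p1', parent=img_spec)
    return PathSpec('TSK', location='/$MFT', parent=part_spec)

def chain_types(spec):
    # Walk from the file layer down through the parents, like the debugger view.
    types = []
    while spec is not None:
        types.append(spec.type_indicator)
        spec = spec.parent
    return types

for image_type in ('VHDI', 'EWF', 'RAW'):
    # Only the second entry differs; the surrounding layers never change.
    print(chain_types(build_chain(image_type, 'suspect_image')))
```

That invariance is exactly why the extraction code later in the series doesn't care whether it was handed a VHD, an E01, or a raw dd image.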

Tomorrow, more code!

Sunday, April 17, 2016

Daily Blog #377: Sunday Funday 4/17/16

Hello Reader,
              If you have been following the blog the last two weeks you will have seen it's been all about dfVFS. Phil, aka Random Access, posted something I was thinking about on his blog, https://thisweekin4n6.wordpress.com, that I thought was worthy of a Sunday Funday challenge. In short, Phil saw that I posted a video on how to verify dfVFS was installed correctly, plus a whole post just on installing it, and mentioned that someone should automate the process. I agree, Phil, and now I turn it over to you, Reader: let's try out your scripting skills in this week's Sunday Funday challenge.

The Prize:
$200 Amazon Giftcard

The Rules:

  1. You must post your answer before Monday 4/18/16 3PM CST (GMT -5)
  2. The most complete answer wins
  3. You are allowed to edit your answer after posting
  4. If two answers are too similar for one to win, the one with the earlier posting time wins
  5. Be specific and be thoughtful 
  6. Anonymous entries are allowed, please email them to dcowen@g-cpartners.com. Please state in your email if you would like to be anonymous or not if you win.
  7. In order for an anonymous winner to receive a prize they must give their name to me, but I will not release it in a blog post



The Challenge:
Read the following blog post: http://www.hecfblog.com/2015/12/how-to-install-dfvfs-on-windows-without.html and then write a script, in your choice of scripting language, that will pull down and install those packages for a user. Second, the script should then run the dfVFS testing script shown in this video http://www.hecfblog.com/2016/04/daily-blog-375-video-blog-showing-how.html to validate the install.

Saturday, April 16, 2016

Daily Blog #376: Saturday Reading 4/16/16

Hello Reader,

          It's Saturday! Soccer games, birthday parties, and forensics, oh my! That is my weekend; how's yours? If it's raining where you are and the kids are going nuts, here are some good links to distract you.

1. Didier Stevens posted an index of all the posts he made in March, https://blog.didierstevens.com/2016/04/17/overview-of-content-published-in-march/. If you are at all interested in malicious document deconstruction and reverse engineering, it's worth your time to read.

2. If you've done any work on ransomware and other drive-by malware deployments, this article by Brian Krebs on the sentencing of the Blackhole exploit kit author is worth a read: http://krebsonsecurity.com/2016/04/blackhole-exploit-kit-author-gets-8-years/

3. Harlan has a new blog post up this week with links to various incident response articles he's found interesting, http://windowsir.blogspot.com/2016/04/links.html. It includes a link to the newly published second edition of Windows Registry Forensics!

4. Mary Ellen has a post up with a presentation she made on the analysis of phishing attacks, http://manhattanmennonite.blogspot.com/2016/04/gone-phishing.html. The presentation also links to a malware lab. Hopefully this means more posts from Mary Ellen.

5. Adam over at Hexacorn has a very interesting write-up on EICAR, http://www.hexacorn.com/blog/2016/04/10/a-few-things-about-eicar-that-you-may-be-not-aware-of/. I wasn't aware of EICAR until Adam posted about it, and I found the whole read fascinating. EICAR is a standard file created to let antivirus developers test their own software, and as Adam discusses, others have made their own variations.

6. In a bit of inception posting, Random Access has a weekly reading list of his own on his blog. This is his post from 4/10/16, https://thisweekin4n6.wordpress.com/2016/04/10/week-14-2016/. He does a very good job covering things I miss; frankly, I should just be copying and pasting his posts here, but I think that's looked down on.

So Phil, if you are reading this. Do you want to post here on Saturdays?

That's all for this week! Did I miss something? Post a link to a blog or site I need to add to my feedly below.