Daily Blog #38: Web 2.0 Forensics Part 3

Web 2.0 Forensics Part 3 by David Cowen - Hacking Exposed Computer Forensics Blog


Hello Reader,
        This post is a bit late in the day but that happens sometimes when you are onsite and can't sneak away for some blog writing. In the last two posts we've discussed where to find JSON/AJAX fragments and how Gmail stores message data within them. Today we will discuss how these artifacts are created and what you can and cannot recover from them.

What you can recover

Much like other web artifacts we can only recover what was sent by the server and viewed by the custodian. This includes:

  • the content of emails read
  • the names and contents of attachments accessed
  • what was contained in each mailbox folder viewed (such as the inbox, sent, saved)
    • For some webmail clients (such as Gmail) you can also see a preview of the email messages contained in the mailbox, even if the custodian did not read them, as the data is precached.
    • Whether the message had been read
    • If the message had an attachment
  • a list of all the mailbox folders the custodian had in use
  • contacts
  • for Gmail specifically, Google Talk participants
  • for Gmail specifically, a list of all the Google+ circles they are in.


What you can't recover
If the data was never sent from the server and viewed, it won't exist in cached form anywhere except live memory. The list of things you can't recover includes:


  • The text of emails sent by the custodian, unless they viewed a preview of the message, checked their sent mail, or read a reply to the message.
  • The content of attachments sent via email, though you can match up the attachment by name to files on their system, as the 'attachment successful' message will be sent from the server to the browser.
  • The full contents of mail folders, if all the pages containing messages were not viewed.
  • The contents of all webmail ever read; over time the data in the pagefile will be overwritten, the shadow copies will expire, and the hiberfil will be overwritten on the next hibernation.

The examples I'm showing here are for webmail; there are other AJAX/JSON services out there (Facebook, Twitter, etc.) that are popular. I'm focusing on webmail because, in my line of work, it's a popular method for exfiltrating data and for discussing plans that custodians don't want saved in company email. I will see about expanding the series to other types of web 2.0 applications, likely after my HTML5 offline caching research with Blazer Catzen is complete.

Tomorrow we continue the web 2.0 forensic series, hopefully with an earlier posting time.

Daily Blog #37: Web 2.0 Forensics Part 2

Web 2.0 Forensics Part 2 by David Cowen - Hacking Exposed Computer Forensics Blog

Hello Reader,
             Sunday Funday is always fun for me for two reasons. One, it gets me two blog posts out of one, so I get more time to get work done; two, I like getting a general feeling of what level of understanding exists around certain artifacts.

So while you get a prize, which I strive to make worth your effort, I get to see what I can continue to help you learn by writing additional blog posts to fill those gaps. With that said, we are continuing the web 2.0 series today, a series I realized was needed after the IEF Sunday Funday challenge two weeks ago.

JSON Data Structures

JSON data structures are fairly easy to find; they are structured name/value pairs exchanged between the web server and the web client, for instance the Gmail server and the Chrome browser. In this example the Chrome browser would then parse the data to generate the view that you see.

Here is what a message summary from your Gmail inbox looks like:

Index data for Gmail
["140303866b4ce541","140303866b4ce541","140303866b4ce541",1,0,["^all","^i","^o","^smartlabel_notification"]
,[]

Email from/subject/message preview and date
,"\u003cspan class\u003d\"yP\" email\u003d\"mail-noreply@google.com\" name\u003d\"Gmail Team\"\u003eGmail Team\u003c/span\u003e","\u0026raquo;\u0026nbsp;","Welcome to the new Gmail inbox","Hi David Meet the new inbox Inbox tabs put you back in control with simple organization so that you",0,"","","10:35 am","Tue, Jul 30, 2013 at 10:35 AM",1375198584460000,,[]
,,0,[]
,,[]
,,"3",[0]
,,"mail-noreply@google.com",,,,0,0]
  
This is followed by the body of the message. In addition, on each page you have a listing of all the labels, email counts, circles, and more data that is preloaded to each page, providing you with a large amount of data on your custodian's activities but also producing a large number of duplicates.

Tomorrow we will go into the important fields and their meanings, and I'll provide a regex for carving them out. Recovering webmail used to be simple: just find a JavaScript library known to the service and carve out the HTML before and after it. Now, with JSON/AJAX services like Gmail, we get fragments of emails and possibly entire messages, but we either have to carve them manually or use a tool like IEF to do it for us.

I start with IEF and let it find the fully formed messages, and then go back myself to find partials, knowing the user's email address.
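
If you want to try the manual route yourself, here is a minimal Python sketch of the idea. The marker string comes straight from the index sample above, but the 512-byte window and the filename are my assumptions to tune against your own data:

import re

# Escaped sender span from Gmail's JSON index data -- taken from the sample above
MARKER = b'\\u003cspan class\\u003d\\"yP\\" email\\u003d\\"'

def carve_fragments(path, window=512):
    # Read the raw source (extracted pagefile.sys, decompressed hiberfil, unallocated dump)
    with open(path, 'rb') as f:
        data = f.read()
    # Yield a window of bytes around each hit for manual review
    for match in re.finditer(re.escape(MARKER), data):
        yield data[match.start():match.start() + window]

for fragment in carve_fragments('pagefile.sys'):
    print(fragment.decode('ascii', 'replace'))
    print('-' * 60)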

See you tomorrow! Leave comments or questions below if you're seeing data differently. I'm going to install Fiddler on my system tonight to show how the data looks as it's being transmitted.

Daily Blog #36: Sunday Funday 7/28/13 Winner!

Sunday Funday Winner by David Cowen - Hacking Exposed Computer Forensics Blog


Hello Reader,
                I thought this Sunday Funday was easier than the last, and we had several submissions, both posted on the blog and submitted anonymously, but only one was done before the deadline of midnight PST. So congratulations go out to Jonathan Turner, who, while not having the most complete answer of all the ones submitted (that goes to Harlan Carvey this week), was the only one who submitted his answer before the cutoff!

I got a lot of answers after the deadline; do you need me to change the rules to give you more time to play? I thought 24 hours (I try to post at midnight CST on Saturday) was enough time, but if you need more time to play I can change the rules to let more people participate. I'm hoping that as these contests continue we will keep getting great prizes to give away that will tip you over the 'should I try this one' cliff.

Here was the challenge:
The Challenge: I'm going to step down the difficulty from last week; I may have been asking for a bit much on a Sunday. So this week's question is going back to basics.

For a Windows 7 system:

Your client has provided you with a forensic image of a laptop computer that was used by an ex-employee at their new employer; it was obtained legally through discovery in litigation against them. You previously identified that the employee took data when they left. Where on the system would you look for the following:

1. The same external drive was plugged into both systems.

2. What documents were copied onto the system.

3. What documents were accessed on the system.

Here is Jonathan's answer:
1) The manufacturer, model, and serial number of USB keys plugged into a system are stored in the registry at HKLM\SYSTEM\(CurrentControlSet|ControlSet001|ControlSet002)\Enum\USBSTOR. Comparing these keys on the two systems should show any common devices.
2) The created timestamp on the above registry key can be used to filter a timeline of file creation times to determine what files were added to the system around the time it was plugged in. These files could contain metadata about where they were originally created as well as other interesting information that can be manually collected.
3) Documents accessed on the system should show up in jump lists and (potentially) shellbag information stored in the users' ntuser.dat hive.
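
To make the registry comparison in Jonathan's first point concrete, here's a quick Python sketch using Willi Ballenthin's python-registry module (my tool choice, not Jonathan's) run against a SYSTEM hive exported from each image:

from Registry import Registry  # Willi Ballenthin's python-registry module

# Parse a SYSTEM hive exported from each image and compare the output
reg = Registry.Registry('SYSTEM')
usbstor = reg.open('ControlSet001\\Enum\\USBSTOR')

for device in usbstor.subkeys():        # e.g. Disk&Ven_SanDisk&Prod_Cruzer&Rev_1.26
    for instance in device.subkeys():   # unique instance ID / device serial number
        print(device.name(), instance.name(), instance.timestamp())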

 Here is Harlan's answer:
Sorry this is late, but I was at a couple of events yesterday starting at around 2pm...I'm not sending it in so much as a submission, but more to just provide my response...

*1. The same external drive was plugged into both systems

This type of analysis starts with the Enum\USBStor keys.  I would locate the subkey that contained the device identifier for the external drive in question, and see if there is a serial number listed.  If not, that's okay...we have other correlating information available.  If there is a serial number pulled from the device firmware, then we're in luck.  

Beneath the device serial number key, I can get information about when the device was first plugged in, from the LastWrite time to the LogConf key, as well as the Data value (FILETIME time stamp) from the \Properties\{83da6326-97a6-4088-9453-a1923f573b29}\00000065\00000000 subkey.  I would correlate this time with the value in the setupapi.dev.log file, as well as with the first time for that device that I found in the Windows Event Log (for device connection events).    I could then get subsequent connection times via the Windows Event Log, as well as the final connection time from the NTUSER.DAT hive for the user, via the MountPoints2 key (for the device, given the volume GUID from the MountedDevices key) LastWrite time value.  

To be thorough, I would also check beneath the \Enum\WpdBusEnumRoot\UMB key for any volume subkeys whose names contained information (device ID, SN) about the device in question.

Getting the disk signature for the specific external drive can be difficult on Win7, using just the System hive file, as there is very little information to correlate the Enum\USBStor information to the information in the contents of the MountedDevices key.  However, further analysis will be of use, so keep reading.  ;-)

The "\Microsoft\Windows NT\CurrentVersion\EMDMgmt" key in the Software hive contains a good deal of information regarding both USB thumb drives and external drives; the subkeys will be identifiers for devices, and for external drives, you'd be interested in those that do NOT start with "_??USBSTOR".  The subkey names will have an identifier, as well as several underscores (""); if the name is split on underscores, the first to the last item, if there is one, will be the volume name, and the last item will be the volume serial number, listed in decimal format.  This final value changes if the device is reformatted, but it wouldn't make any sense to copy files to the device, reformat, and then connect it to the target device, so we can assume that this information won't change between the two systems.

I could then use this information to correlate to LNK files in the Windows\Recent and Office\Recent folder within the user profile, as well as LNK streams within the user's *.automaticDestinations-ms Jump Lists.

At this point, I will have a drive letter that the external drive was mapped to, so I can then return to the MountedDevices key in the system hive, and by accessing available VSCs, locate one in which the drive letter was available for the ext. drive.  This will provide me with the disk signature of the device itself, as well as the volume GUID.

At this point, I have the device identifier, the device serial number, the volume serial number, potentially the disk signature, and the time(s) of when the external drive had been connected to the laptop. I can then use this information to correlate to the other system.

*2. What documents were copied onto the system

I would create a timeline of system activity, correlating file creation dates on the system with the times when the device was connected to the system, based on the time-based information provided in the response to #1 above.

*3. What documents were accessed on the system

The shellbags artifacts likely won't be of much use to you this time, as on Win7 they tend to not contain the same sort (and volume) of information as they do on WinXP. However, I would start by looking at the shortcut/LNK files in the user's profile (Windows\Recent and Office\Recent), as well as Jump Lists. This information also helps us identify the application used to access the documents (Office, Adobe, etc). I would also, for clarity's sake, verify this information via Registry MRUs, even though some of them (i.e., RecentDocs) will not contain full path information. However, now that we have information about the applications used (from the Jump Lists, after performing any required AppID lookups), I would be sure to examine any available application-specific MRUs.
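
Since Harlan brings up Jump Lists: they are OLE compound files, so a short sketch with the olefile Python module (my choice of parser, and the filename here is just an example) will list the hex-numbered LNK streams plus the DestList index:

import olefile

# Jump List files are OLE compound files; the AppID filename is just an example
jl = olefile.OleFileIO('1b4dd67f29cb1962.automaticDestinations-ms')
for stream in jl.listdir():
    # Hex-numbered streams are individual LNK entries; DestList is the MRU index
    path = '/'.join(stream)
    print(path, jl.get_size(path))
jl.close()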

Harlan gave a great answer but didn't get it in on time, so the winner of a Specialist Track ticket to PFIC is Jonathan Turner. There is still more to be said on this topic, though. I specify operating systems for a reason, as artifacts change between them, and there are still artifacts and scenarios not clearly shown even in both of these answers. When I'm done with the web 2.0 series I'll go into depth on it.

In the meantime, do you want to go to PFIC? I still have more tickets to give away next week. If two answers make it in on time that are both great (or I change the rules based on your feedback to extend the time), I can give away more than one! Tomorrow we resume the web 2.0 series, and I hope you follow along, as it continues to give me the motivation to keep these up daily! Only 316 more blogs before the year is up!


Daily Blog #35: Sunday Funday 7/28/13 - Finding the Culprit

Finding the Culprit Challenge


Hello Reader,
           It's that time again, Sunday Funday time! For those not familiar, every Sunday I throw down the forensic gauntlet by asking a tough question. To the winner go the accolades of their peers and prizes hopefully worth the time they put into their answer. This week we have quite the prize from our friends at the Paraben Forensic Innovations Conference.

The Prize:


A free Specialist track ticket to PFIC (worth $399)

The Rules:


1. You must post your answer before midnight PST (GMT -7). The most complete answer wins.

2. You are allowed to edit your answer after posting.

3. If two answers are too similar for one to win, the one with the earlier posting time wins.

4. Be specific and be thoughtful.

5. Anonymous entries are allowed, please email them to dcowen@g-cpartners.com.

6. In order for an anonymous winner to receive a prize they must give their name to me, but I will not release it in a blog post.

The Challenge:


I'm going to step down the difficulty from last week; I may have been asking for a bit much on a Sunday. So this week's question is going back to basics:

For a Windows 7 system:


Your client has provided you with a forensic image of a laptop computer that was used by an ex-employee at their new employer; it was obtained legally through discovery in litigation against them. You previously identified that the employee took data when they left. Where on the system would you look for the following:

1. The same external drive was plugged into both systems.

2. What documents were copied onto the system.

3. What documents were accessed on the system.

As a reminder, I'll be speaking at PFIC, and the agenda is pretty great this year; I hope to see you there! This question should allow everyone a good shot at playing, but the answer can go very, very deep. I'm excited to see your answers. Good luck!


Daily Blog #34: Saturday Reading 7/26/13

Saturday Reading by David Cowen - Hacking Exposed Computer Forensics Blog

Hello Reader,
        It's Saturday, time to put on a long movie for the little ones while you fire up the web browser to prepare for another week of deep dives into forensic images. This week we have links to deep reads on a wide range of topics, so I hope you'll stay informed as we all move forward in the quest for new knowledge of artifacts and a deeper understanding of what's possible. Don't forget tomorrow's Sunday Funday, where you can win a ticket to PFIC!

1. This is an older article, but I certainly didn't hear anything about it at the time: http://geeknizer.com/pros-cons-of-html-5-local-database-storage-and-future-of-web-apps/. It has to do with what Blazer Catzen and I are looking into now. Specifically, we are researching HTML5 offline content cache databases and what data they are storing that you may currently not be paying attention to. The most relevant example was first described to me by Blazer, and after some quick research on the database table names I found the article linked above. The article details how iOS-based WebKit browsers that visit Gmail will have a summary of the contents of the displayed messages stored in a SQLite database on the device.

The question in my mind is not just how we can extract and recover more Gmail than we knew about before on iOS devices, though that is a great thing, but what other web applications are making use of this feature, and on what platforms/browsers? (If you want to poke at one of these databases yourself, see the short SQLite sketch at the end of this post.) I'll be updating our findings here on the blog and during the Forensic Lunch (hopefully Blazer will come on!) as we learn more, but I think there is a lot more here to discover.

2. Tomorrow, while you're working on your winning answer, you might find some insight from Lee Whitfield on the Forensic 4cast, https://plus.google.com/u/0/events/cle30c05m88rpjs467k4dnns27k. If you have the time, Lee is always informative and entertaining.

3. I do have interests outside of forensics and this article made me want to actually go outside, http://travisgoodspeed.blogspot.com/2013/07/hillbilly-tracking-of-low-earth-orbit.html, and monitor satellites. 

4. It's hard for me not to link to a post that mentions our research; when it adds other good methods to detect anti-forensics, I can justify it. Harlan Carvey is still blogging up a storm of useful posts, and this one, http://windowsir.blogspot.com/2013/07/howto-determinedetect-use-of-anti.html, is, given my interests, one of my favorites in his How To series.

5. I don't often just link to product pages on Saturdays, but I need to this week as this tool helped me out of a jam: http://www.lostpassword.com/kit-forensic.htm. If you are dealing with Windows 7 protected storage encrypted files (IntelliForms, Chrome logins, Dropbox, etc.), you know that you need the user's password (something I haven't had to care about in years). Using Passware's tool I was able to recover from the hiberfil, in an hour, the plaintext of a password that we had been trying to crack for two months. Do you know of an alternative or free/open source solution? Please comment and let me know!

6. If you're looking for something to listen to rather than read, check out our first recorded Forensic Lunchcast, http://www.youtube.com/watch?v=4A_GynQF3n0&list=PLzO8L5QHW0ME1xEyDBEAjmN_Ew30ewrgX&index=1. We are still trying to decide how often we should do these, but we will be doing it again on Friday if you want to participate.

7. Last one for the week. It's not often that the cyb3rcrim3 blog mentions a civil case, so when it does I pay attention: http://cyb3rcrim3.blogspot.com/2013/07/unauthorized-access-email-and-team.html. I'd like to know more about what happened in this case, but I thought it was odd that someone would access online webmail to get access to communications when we have such great tools and techniques to do so. I've certainly had people get confused and think I accessed their accounts to get cached webmail (or JSON fragments), but this is the first I've seen where someone actually did and used the evidence!
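
Before I wrap up, back to link number 1: those HTML5 caches are plain SQLite files, so here's a minimal Python sketch for poking at one. The database filename is made up (it varies by platform and browser); the schema query is standard SQLite:

import sqlite3

# Database path/filename varies by platform and browser; this one is made up
con = sqlite3.connect('ApplicationCache.db')
for (table,) in con.execute("SELECT name FROM sqlite_master WHERE type='table'"):
    print(table)
    # Peek at a few rows of each table to see what the web app cached
    for row in con.execute('SELECT * FROM "%s" LIMIT 5' % table):
        print('  ', row)
con.close()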

That wraps up this Saturday Reading; I hope these links will keep you busy until next Saturday. You won't have much time to read them tomorrow, though, because you'll be too busy competing in the next Sunday Funday contest to win a free ticket to PFIC! See you then!


Daily Blog #33: Web 2.0 Forensics Part 1

Web 2.0 Forensics Part 1 by David Cowen - Hacking Exposed Computer Forensics Blog

Hello Reader,
                 I've now finished two series; I've never even finished one in the last five years, so I think this daily blog experiment is working. Thanks to all of you who are following along. I know it can be hard to keep up daily, and for those who do (I compulsively watch pageviews), it helps me keep going with the dailies.

Today we begin a new series on 'web 2.0' forensics. I don't mean to use buzzwords, but 'web 2.0' has come to represent a combination of technologies that have changed how custodians/suspects access data from web services and how the systems we analyze store it. It is the retrieval of these asynchronous transactions that we will be talking about over the next few blog posts. Based on the responses I got from the last Sunday Funday challenge, I took it that many of you don't feel comfortable with these artifacts and how they get created, so let's get into it so you can start getting more evidence!

The key technology that allowed web pages to update sections of a page without refreshing the entire page being viewed is AJAX. AJAX, or Asynchronous JavaScript and XML, a term coined in 2005, standardized a mechanism allowing JavaScript executed within the browser to make a request to the web server, receive the response, parse it, and update the content on the page, all seamless to the user. It is this technology that allowed many webmail systems to present a more fluid experience to their users, and it totally ruined the day of many a forensic examiner.

Before AJAX it was easy to write a carver to recognize the JavaScript in cached pages found all over the unallocated space of the disk, recovering scores of webmail views. I wrote my first such EnScript back in 2002, and it became one of my favorite ways of finding data exfiltration. After AJAX, all of the webmail views were delivered via updates to a single page load, all of which occurred in memory and were not committed to the disk; this was a sad day. Suddenly the evidence we were all relying on was thought to be gone and unreachable.

Then someone started looking at the network traffic and what was being viewed, and found the data structure of the XML/JSON requests being sent back and forth (I don't know who started this research; if you do, please comment below). They found these fragments in memory and, more importantly, in the pagefile and hiberfil! We don't have the same reach back in time as we did when glorious cached pages were being written to disk, but we can again recover webmail, and no one can complain about that.

If you remember, one of the challenge questions was where we can recover Gmail JSON fragments. Before Windows Vista, the pagefile and hiberfil (and active memory of course, but I'm looking at past activity recovery) used to be the only locations, but now with shadow copies there's more! If you've heard me talk, I've mentioned that Volume Shadow Copies contain more data than most people expect. In fact, they also contain the hiberfil and pagefile for each backup! That means for a shadow copy enabled disk you have, by default, weekly snapshots of possible JSON recovery available. If you are not extracting and searching this data (remember the hiberfil is compressed and will not be searchable unless extracted and decompressed, or a tool specifically supports it in the volume shadow), you are missing evidence.
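
If you haven't pulled the pagefile and hiberfil out of shadow copies before, here's the usual approach sketched in Python: list the snapshots with vssadmin, then expose each one through a directory symlink (run from an elevated prompt; the link paths are just examples):

import re
import subprocess

# Enumerate shadow copies (requires an elevated prompt)
out = subprocess.check_output(['vssadmin', 'list', 'shadows'], text=True)
shadow_volumes = re.findall(r'Shadow Copy Volume: (\S+)', out)

for i, vol in enumerate(shadow_volumes):
    link = r'C:\vsc%d' % i
    # The trailing backslash on the device path is required for the link to resolve
    subprocess.run(['cmd', '/c', 'mklink', '/d', link, vol + '\\'], check=True)
    print('%s exposed at %s -- grab pagefile.sys and hiberfil.sys from here' % (vol, link))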

Before we go on: I actually got a second answer to the contest, and while he didn't win a prize (late submission), he did give a different answer that I wanted to highlight.

Seth Ludwig writes:

In response to your blog post:
For a Windows 7 system:
1. Describe the Gmail JSON format and how you would recover it
A typical gmail JSON capture might look like the following:
while(1);
[[["v","137s2mfg40boa","1c22e772e53ff3
de","-902218240","1","vaknsvbtjz8a"]
,["gn","gsi test502"]
,["cfs",[]
,[]
]
,["i",50]
,["st",1208038540]
,["qu","0","6616","0","#006633",
0,0,0,"0","6.5"]
,["ft","Send photos easily from Gmail with
Google\'s \u003ca href\u003d\"http://
picasa.google.com\"
Recovering the JSON data could be achieved using a variety of forensics tools, both commercial and open source, to carve for the files with the embedded JSON (EnCase, IEF, Helix3, etc.).
http://capec.mitre.org/data/definitions/111.html


2. Describe where in the disk you would expect to find Gmail JSON fragments.
Sometimes you simply cannot find them. The reason that this data is sometimes written to disk is largely because of browser bugs or lack of proper support for the no-cache HTML meta tag. This data isn't supposed to be written to disk in the first place, but due to various bugs it sometimes is. When the files are cached, you will find them named "mail[somenumber]", and they are mainly located in Temporary Internet Files or other caches of unidentified data. Often you will be able to find these files in unallocated space. Additionally, you will find other files in the same places named "mail[somenumber].htm". There's often some JSON as described above contained within them.
Other possible and more likely locations:
Memory dumps
Pagefile
Hiberfil.sys (remember to decompress)

3. Which services popular in forensic investigations utilize JSON
Facebook, Twitter, Gmail, Skype, Google Talk, Yahoo Messenger and many others.

4. Provide a carve signature for the header and footer of a Gmail JSON
It's 1AM. You win this round.

5. Describe what Gmail's JSON would reveal to you
Utilizing JSON files, one has the potential to retrieve the following information:
Server name
Account Quota
Folders
Message List (Thread)
Conversation Summary
Message Information/Index
Message Body
Message Attachments
GMail Data Packet header
Invitation
Categories/Labels/Contacts
Thread Summary
End of Thread List
GMail Version

That's enough for today; hopefully I've gotten you thinking. In the next post, on Tuesday, we will go into JSON data structures, how services use and store the data, and how you can recover it.

Stay tuned for tomorrow's Saturday Reading and, more importantly, this Sunday Funday where you can win a free ticket to PFIC!

Daily Blog #32: Go Bag Part 7 - End of series

Go Bag Part 7 - End of series by David Cowen - Hacking Exposed Computer Forensics Blog

Guten Tag Reader,
          It's time to wrap up this series and move on to other topics. I hope you've found these scenarios, and how I deal with them from my light go bag, helpful. Hopefully I can help you lighten your load when you are out in the field; it really is a more pleasant experience. In this post we will cover handling all the assorted storage locations you might receive and how I deal with them.
       
CDs/DVDs - Imaging these is fairly straightforward, as I'm not aware of any operating system that tries to write to a rewritable CD/DVD on insert. Remember, not all CDs/DVDs are simple write-once media; if the burns are layered in sessions, you can recover the prior burned sessions once the disc is imaged.

MMC/SD Cards - Many of these actually have switches to make them read only, but otherwise I will either boot into Linux to acquire them read only or enable the USB write block hack and plug in a USB card reader.

External drives - I don't carry a USB write blocker because I haven't found a USB 3.0 one yet, and they don't always work with the random drives I encounter. So instead I use the USB write block hack to acquire the drive if I can't easily access the underlying drive and attach it via SATA. This is also why I make sure my acquisition system has eSATA: I can always have a writable external storage interface available to me, and I can leave my USB ports read only when acquiring (the hack itself is sketched below).
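
For reference, the USB write block hack I keep mentioning is a single registry value; here it is sketched in Python (run elevated, re-plug the device after setting it, and remember to set it back to 0 when you're done):

import winreg

# Set the WriteProtect policy; re-plug the device afterwards for it to take effect
key = winreg.CreateKey(
    winreg.HKEY_LOCAL_MACHINE,
    r'SYSTEM\CurrentControlSet\Control\StorageDevicePolicies')
winreg.SetValueEx(key, 'WriteProtect', 0, winreg.REG_DWORD, 1)  # 1 = read only, 0 = writable
winreg.CloseKey(key)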

Email accounts on mail servers - Many times I'll be asked to preserve the contents of a mailbox; I use a piece of software from Transend called Transend Forensic Migrator for this. Transend supports a large variety of mail servers (Exchange, Lotus, GroupWise, IMAP, etc.), so it makes my life easier to just plug in one or one hundred credentials (via batch mode) and have all the mail stored in your choice of output format (PST, mbox, etc.) with a log of its actions when it's done. You can even enable filter options to limit the data you're acquiring.

Webmail servers - One of the other types of email we are asked to grab is webmail. I've found the easiest way to deal with grabbing someone's webmail is to search for the webmail provider's instructions for email access from a smartphone. Those instructions will typically identify an IMAP or POP3 server that a phone, and your software, can connect to and grab the data from; a scripted example follows.
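
Those same smartphone instructions translate directly into a scripted pull. Here's a bare-bones sketch using Python's standard imaplib; the server name and credentials are placeholders, and a real preservation tool adds format conversion and logging on top of this:

import imaplib

# Server and credentials are placeholders; use the provider's documented IMAP settings
imap = imaplib.IMAP4_SSL('imap.example.com')
imap.login('custodian@example.com', 'password')

imap.select('INBOX', readonly=True)       # readonly so we don't change \Seen flags
typ, nums = imap.search(None, 'ALL')
for num in nums[0].split():
    typ, data = imap.fetch(num, '(RFC822)')
    with open('msg_%s.eml' % num.decode(), 'wb') as f:
        f.write(data[0][1])               # raw RFC822 message, ready for your mail tool
imap.logout()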

SharePoint - There are two good ways to deal with SharePoint, neither of which involves grabbing the underlying database. You can access a SharePoint website through WebDAV and copy down the contents, or use a commercial tool like Ontrack PowerControls to grab the data if you have the budget.

That's all I can think of right now, but I think this captures 99% of what I deal with when out in the field and how I deal with it. Tomorrow we will switch topics to 'web 2.0' forensics and then make time Sunday for the weekly contest, where this week's prize is a free ticket to PFIC!



Daily Blog #31: Go Bag Part 6

Go Bag Part 6 by David Cowen - Hacking Exposed Computer Forensics Blog


Hello Reader,
                     Have I mentioned how good Civ 5: Brave New World is? It's really good, and it's the reason I'm writing this blog post this morning instead of last night, again. Tip: playing Venice is hard on King difficulty. I realized I missed a couple of scenarios we should go over, so I'm continuing the go bag series a little longer before we begin our discussion on 'Web 2.0' forensics. We focused on NAS devices in the last post, and now we are going to talk about how to deal with embedded devices and take a quick look at memory storage devices.

The system is an embedded storage device - You won't see this very often, but every so often you'll be told that there is an embedded device that contains logs you need.

When dealing with embedded devices you have a couple types you'll come across

1. SoC (System on a chip) with SD card storage

If you have this, you are lucky: power down the embedded device and remove the SD card for standard imaging. When you image it you have two options: you can get a memory card write blocker, or you can use software write blocking. In Windows you can use the registry write block hack and then attach a memory card reader via USB, or you can boot off a forensically sound Linux distribution and image the device (a quick imaging sketch follows). In either case this is the best possible scenario.
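
Once the card is attached read only, the imaging itself can be as simple as this Python sketch (the device node assumes a Linux boot; adjust the block size to taste):

import hashlib

dev, image = '/dev/sdb', 'sdcard.dd'      # device node assumes a Linux boot
md5 = hashlib.md5()

with open(dev, 'rb') as src, open(image, 'wb') as dst:
    while True:
        block = src.read(4 * 1024 * 1024)  # 4 MB reads
        if not block:
            break
        dst.write(block)
        md5.update(block)

print('MD5:', md5.hexdigest())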

2. SoC (System on a chip) with a maintenance port

You may come across an embedded device where the memory is soldered to the board and no removable storage options exist, but it may have a maintenance port. Whether through Ethernet, USB, or a COM port, getting access to the maintenance port can also lead to shell access, as many of these devices are running embedded Unix variants and others are running DOS. Getting to that shell will vary by device, but the nice part about embedded systems is that they are rarely multi-user systems, meaning every process runs as root or administrator. Once you have the console you can capture raw logs back to your system through it; sometimes, if you're lucky, there may be older Kermit/Zmodem transmission programs left on the image, or TFTP for network connected systems (originally intended for network booting).

On embedded Unix systems you can get full disk images this way by dumping the contents of the physical memory devices; just remember that you need to use a protocol capable of transmitting the data without treating it as ASCII strings, or pipe it through a function to encode it first (base64 works well here; see the decode sketch below).
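
On the receiving end, turning that captured base64 stream back into an image is one call; a minimal Python sketch (the filenames are examples):

import base64

# Console capture of something like: dd if=/dev/mtd0 | base64
with open('capture.b64', 'rb') as f, open('mtd0.img', 'wb') as out:
    # b64decode ignores the newlines the console capture inserts
    out.write(base64.b64decode(f.read()))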

3. SoC (System on a chip) with no access

This happens, and it sucks. At this point you can hope that there is some kind of JTAG access or firmware flashing access. If there is no firmware flashing access (which you can use to download the current memory image), then you are stuck with JTAG. JTAG means you are going to have to find the JTAG pads (documented, if you're lucky), solder a JTAG connection to the board, and find a compatible app for the processor to dump the NVRAM to your system.

This isn't fun, and if you are not experienced with JTAG it is easy to mess up. At this point you should probably let your client know that you need to send the system off to a specialist shop for extraction.

4. SoC (System on a chip) locked down for security

This is typically only found in high security embedded devices (ATMs, lottery terminals, etc.) where the makers have attempted to remove all internal access to the system and its underlying data. You have one option here, and you can't really go back from it: you have to de-solder the memory chips from the board and plug them into a raw reader. From that point it's up to you to reconstruct the file system and access the underlying files. If you are on this step you are likely dealing with a pretty serious case, and if you are not comfortable with what mobile forensic experts have termed 'chip-off' forensics, I would send this to a lab that is. Once you remove the chip it's not likely you'll get it back on the board and get the device functioning again, so remember this is a one way street, no going back.

That was longer than I thought it would be; as you can see, I've dealt with a lot of weird systems over the last 14 years. We should talk about memory cards tomorrow and then move on to 'Web 2.0' forensics. Don't forget, this Sunday you can win tickets to PFIC!