Daily Blog #223: Saturday Reading 2/1/14


Hello Reader,
         It's Saturday and after another week of hard work you deserve a break. Use that break time to get even better at DFIR with this week's Saturday Reading!

1. We had another great forensic lunch this week! You can watch it here: http://www.youtube.com/watch?feature=player_embedded&v=2P5Sv6yyd5Y This week's guests:

Ian Duffy, talking about his research into the Microsoft Office compound file format.
You can read Ian's blogs on this topic here: http://forensecurity.blogspot.com/2014/01/microsoft-office-compound-document.html

Andrew Case, discussing his work in memory forensics and Volatility.


Matthew and I showing the latest changes for this month's beta release of ANJP.
2. Jason Hale has a neat blog up this week about a new feature in Microsoft Word 2013. The feature tracks where a user left off reading a document, which can be very useful in showing more than just the opening of a Word document. Read more about it here: http://dfstream.blogspot.com/2014/01/ms-word-2013-reading-locations.html

3. Lenny Zeltser has a good blog up on the SANS Forensics blog this week talking about all the different specialties forming within DFIR, http://digital-forensics.sans.org/blog/2014/01/30/many-fields-of-dfir. I think this deserves a wider audience so more people understand just how wide and deep our field is.

4. Jamie Levy has posted the slides from her OMFW talk about profiling normal system memory: http://gleeda.blogspot.com/2014/01/omfw-2013-slides.html. This is something we've been talking about on the last two Forensic Lunches, so I'm very interested in learning more.

5. The BSides NOLA CFP ends today! Quick, get your submission in! http://www.securitybsides.com/w/page/71231585/BsidesNola2014

6. Jack Crook has a great analysis of the ADD-affected memory image up on his blog, http://blog.handlerdiaries.com/?p=363. This is a great post for understanding how to spot what's abnormal and track it down.

7. Here is a good post by Brian Moran showing how open source tools fare against the Target POS Malware, http://brimorlabs.blogspot.com/2014/01/target-pos-malware-vs-open-source-tools.html.

8. Julie Desautels has put up an interesting blog using her Google Glass forensic research to make the case that a driver was or was not operating the Glass device at the time she was pulled over. These types of devices are only going to grow in number, so get a head start and read it here, http://desautelsja.blogspot.com/2014/01/proving-case-of-cecilia-abadie-using.html.

Also Read: Daily Blog #222

Daily Blog #222: Forensic Lunch 1/31/14 - Discussion with Ian Duffy, Andrew Case, and Matthew



Hello Reader,
         We had a very interesting Forensic Lunch this week! This week's guests:

Ian Duffy, talking about his research into the Microsoft Office compound file format.
You can read Ian's blogs on this topic here: http://forensecurity.blogspot.com/2014/01/microsoft-office-compound-document.html

Andrew Case, discussing his work in memory forensics and Volatility.
Matthew and I showing the latest changes for this month's beta release of ANJP.


Also Read: Daily Blog #221

Daily Blog #221: RHEL Forensics Part 3


Hello Reader,
        Today we talk about recovering deleted mlocate databases. This was actually harder than I expected, as not only did ext3 set the size of the file to 0, but the direct block that istat -B returned was not the first block in our database. So instead I followed the instructions here: http://wiki.sleuthkit.org/index.php?title=FS_Analysis to do a manual recovery of the deleted databases. There is still some work to be done to clean up what's been recovered back into parsable databases, but I'll leave that bit for next week.

Today let's go through the steps necessary to recover deleted mlocate databases on a RHEL v5 system using Ext3 as the file system. Remember this is necessary as the updatedb command runs daily and deletes the mlocate database before creating a new one.

Step 1. We need to figure out which group of inodes our parent belongs to. You can see in the screenshot below that the parent directory /var/lib/mlocate has inode number 1077298, so that is the inode whose group we need to find.

[Screenshot: istat output showing inode 1077298 for /var/lib/mlocate]


Step 2. Run fsstat to find out which group contains our inode; in this case Group 33 contains it, as shown below. The group's block range tells us which blocks to recover for deleted databases.

[Screenshot: fsstat output showing Group 33 and its block range]


Step 3. Use blkls to extract the unallocated blocks within Group 33 to a new file, as shown below:

[Screenshot: blkls extracting Group 33's unallocated blocks to a file]

Step 4. Use xxd to examine the recovered blocks and find the mlocate database signature 'mlocate'.

[Screenshot: xxd output showing the 'mlocate' signature]

This looks like an mlocate database, but right now it's stuck in the middle of the rest of the unallocated data. So the next thing we need to do next week, as we continue this series, is write some code to carve the mlocate database out of this unallocated block chunk.
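As a preview of what that carver could look like, here's a minimal Python sketch that scans the blkls output for the 8-byte mlocate magic (a zero byte followed by 'mlocate') and carves from each hit to the next. The file names are placeholders, and each carved chunk will still carry trailing slack from the surrounding unallocated data:

```python
#!/usr/bin/env python3
"""Carve candidate mlocate databases out of a blkls extraction.

A minimal sketch: mlocate databases start with an 8-byte magic of
a zero byte followed by "mlocate" (see man 5 mlocate.db). We carve
from each magic hit to the next hit or end of file, so each carved
chunk will still include trailing slack to be trimmed later. File
names are placeholders for whatever Step 3 produced.
"""
import sys

MAGIC = b"\x00mlocate"

def carve(blob):
    hits = []
    off = blob.find(MAGIC)
    while off != -1:
        hits.append(off)
        off = blob.find(MAGIC, off + 1)
    for i, start in enumerate(hits):
        end = hits[i + 1] if i + 1 < len(hits) else len(blob)
        yield start, blob[start:end]

if __name__ == "__main__":
    data = open(sys.argv[1] if len(sys.argv) > 1 else "group33.blkls", "rb").read()
    for start, chunk in carve(data):
        out = f"carved_{start:#x}.mlocate.db"
        with open(out, "wb") as f:
            f.write(chunk)
        print(f"offset {start:#x}: wrote {len(chunk)} bytes to {out}")
```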

Make sure to come back tomorrow for the Forensic Lunch with guests Ian Duffy and Andrew Case!


Daily Blog #220: RHEL Forensics Part 2


Hello Reader,
             Yesterday we talked about extending this week's Sunday Funday answer using the mlocate database on RHEL. Today let's look at what we can determine from the mlocate database using Hal Pomeranz's mlocate-time script, and set up tomorrow's entry regarding the recovery of deleted mlocate databases.

mlocate, the default locate implementation on RHEL since v4, queries a database of known files and directories called /var/lib/mlocate/mlocate.db. The database stores the full path to every file and directory it encounters, as well as the timestamps of the directories. According to the man page, the timestamp will be either the change time or the modification time of the directory, whichever is more recent. The timestamp is kept so that during the update process mlocate can decide whether it should re-index the contents of a directory. This leads to the question: will timestamp manipulation get around mlocate indexing a file's existence? That's something we can test in this series.
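To make that structure concrete, here's a minimal Python sketch of the on-disk format as documented in man 5 mlocate.db; this is just an illustration, not Hal's mlocate-time script. It prints each directory path and its stored timestamp:

```python
#!/usr/bin/env python3
"""Print directory paths and timestamps from an mlocate.db file.

A sketch of the documented format (man 5 mlocate.db), not Hal
Pomeranz's mlocate-time script. Header: an 8-byte magic (zero byte
plus "mlocate"), big-endian uint32 configuration block size, uint8
format version (0), uint8 "require visibility" flag, 2 padding
bytes, then the NUL-terminated database root path and the
configuration block. Each directory entry: big-endian uint64
seconds, uint32 nanoseconds, 4 padding bytes, a NUL-terminated
path, then file entries, each a type byte (0 = file,
1 = subdirectory, 2 = end of directory) and, for types 0 and 1, a
NUL-terminated name.
"""
import struct
import sys
from datetime import datetime, timezone

def read_cstring(buf, off):
    end = buf.index(b"\x00", off)
    return buf[off:end], end + 1

def parse(path):
    data = open(path, "rb").read()
    assert data[:8] == b"\x00mlocate", "not an mlocate database"
    conf_size, version, _visibility = struct.unpack_from(">IBB", data, 8)
    assert version == 0, "unknown mlocate.db format version"
    _root, off = read_cstring(data, 16)
    off += conf_size                      # skip the configuration block
    while off < len(data):
        secs, _nsecs = struct.unpack_from(">QI", data, off)
        off += 16                         # 8 sec + 4 nsec + 4 padding bytes
        dirpath, off = read_cstring(data, off)
        ts = datetime.fromtimestamp(secs, tz=timezone.utc)
        print(f"{ts}  {dirpath.decode(errors='replace')}")
        while data[off] != 2:             # walk file entries to the end marker
            _name, off = read_cstring(data, off + 1)
        off += 1                          # consume the end-of-directory byte

if __name__ == "__main__":
    parse(sys.argv[1] if len(sys.argv) > 1 else "mlocate.db")
```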

For today's example I have created a file in my home directory called 'secret_file' and then deleted it.

[Screenshot: creating and deleting secret_file]

Since this filesystem is ext3, the name of the file 'secret_file' is now zeroed out within the directory inode. The only way to know it existed is to hope there is another recent reference to the file within the ext3 journal to re-associate it, or to search the mlocate database. There may be other artifacts, but we will focus on those two for the moment.

Searching the mlocate database confirms the file entry still exists:

[Screenshot: search hit for secret_file in the mlocate database]

Looking into the parsed database records shows the last time the directory was modified while the file still existed within it:

[Screenshot: parsed mlocate records showing the directory timestamp]

So that's great: we can establish a timeframe when the file existed, and we could compare the contents of the current filesystem against the mlocate database to determine which files have been deleted since the last daily cron run. This can be helpful for determining what has changed in the last day in a live-response scenario. It does not help, though, when we want to know what is occurring over a longer term.
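As a sketch of that comparison, assuming you've dumped the database's paths to a text file (for example with locate '*' run against the database in question):

```python
#!/usr/bin/env python3
"""List paths recorded in the mlocate database that no longer exist.

A quick sketch: feed it a text file of paths known to the database,
one per line (e.g. the output of `locate '*'`), and it reports
entries missing from the live filesystem.
"""
import os
import sys

def missing_paths(listing):
    with open(listing, encoding="utf-8", errors="replace") as f:
        for line in f:
            path = line.rstrip("\n")
            # lexists() so dangling symlinks don't show up as deleted
            if path and not os.path.lexists(path):
                yield path

if __name__ == "__main__":
    for path in missing_paths(sys.argv[1] if len(sys.argv) > 1 else "mlocate_paths.txt"):
        print(path)
```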

The mlocate database is updated by default once daily, when /etc/cron.daily/mlocate.cron runs and executes updatedb. What Hal pointed out from his tests, though, is that when the updatedb command runs it does not overwrite the database; instead it unlinks (deletes) it and then creates a new one. We can see that in the following screenshots showing the inode numbers of the mlocate database.

Before updatedb:

[Screenshot: mlocate.db inode number before updatedb]

After updatedb:

[Screenshot: mlocate.db inode number after updatedb]

Notice that the inode number of mlocate.db has changed from 1077692 to 1077693, meaning a new inode has been created and the old inode is still recoverable. As Hal also pointed out, the mlocate database has a unique signature that makes it easy to determine which deleted inodes contain mlocate databases, so tomorrow let's do that. Let's see if we can make a quick script that will find and recover deleted mlocate databases for longer-term historical comparisons of which files existed on our system.
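If you want to reproduce the unlink behavior yourself on a test system, a few lines of Python will show it (run as root, and note it really does run updatedb):

```python
#!/usr/bin/env python3
"""Show that updatedb unlinks mlocate.db rather than overwriting it.

A tiny sketch to reproduce the observation on a disposable test
system: run as root, and be aware it really does run updatedb.
"""
import os
import subprocess

DB = "/var/lib/mlocate/mlocate.db"

before = os.stat(DB).st_ino
subprocess.run(["updatedb"], check=True)
after = os.stat(DB).st_ino

# A changed inode number means the old database was deleted and a new
# file was created in its place, leaving the old inode recoverable.
verdict = "unlinked and recreated" if before != after else "overwritten in place"
print(f"inode before: {before}, after: {after} -> {verdict}")
```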

Also, when I'm done with this series I'll be uploading my test image for download so you'll be able to recover the same data! Come back tomorrow and through the rest of this series as we determine:

1. How to identify and recover inodes containing mlocate databases.

2. The possibility of carving mlocate database entries from free space.

This is a 6-part series. Also Read:

Daily Blog #219: RHEL Forensics Part 1


Hello Reader,
      If you read this weekend's Sunday Funday winning answer you learned a lot about how to do forensics on a Red Hat Enterprise Linux server. As with anything we do in digital forensics, though, there is always more to learn. Today we start a series on what we can do to go beyond this week's winning answer, which was very good. Let's start by looking into a tool that Hal Pomeranz introduced us to on last week's Forensic Lunch.

Hal's tool, mlocate-time, allows us to parse the mlocate database for the paths and timestamps of all files and directories that existed on the system as of the last mlocate database update. By default there is a cron job set to run daily to update the mlocate database, so the live data will only contain those files that existed as of the last daily cron run. Comparing the files known to exist in the mlocate database to the files live on the current system can reveal files that have been deleted, but what we want to look at is the recovery of past mlocate databases. There are two sources for these:

1. Hope there are backups of the system stored somewhere. From the backups we can extract all copies of the mlocate database and then parse them with Hal's tool.

2. Recover the deleted inodes or carve them out of unallocated space. While on the lunch, Hal tested and confirmed that each time the database is updated it deletes the old database and creates a new one. That means all the old locate databases and their timestamps are either recoverable inodes or carvable data blocks, letting you bring back proof of the prior existence of files and their timestamps on your system. This is true not just for RHEL but for other Linux distributions making use of mlocate as well; see the sketch below for one way to hunt those inodes.
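Here is a minimal Python sketch of that inode hunt built on The Sleuth Kit's ils and icat tools. One caveat: as Part 3 of this series shows, ext3 may zero the file size on delete, in which case icat returns nothing and you must fall back to carving unallocated blocks. The image name is a placeholder:

```python
#!/usr/bin/env python3
"""Find deleted inodes that still contain an mlocate database.

A sketch built on The Sleuth Kit's command-line tools: ils -r lists
removed inodes and icat dumps an inode's data blocks. We dump each
removed inode and keep the ones that begin with the mlocate magic.
"""
import re
import subprocess
import sys

MAGIC = b"\x00mlocate"

def removed_inodes(image):
    out = subprocess.run(["ils", "-r", image], check=True,
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        m = re.match(r"(\d+)\|", line)    # skip ils header lines
        if m:
            yield m.group(1)

def main(image):
    for inode in removed_inodes(image):
        data = subprocess.run(["icat", image, inode], check=True,
                              capture_output=True).stdout
        if data[:8] == MAGIC:
            name = f"mlocate_inode_{inode}.db"
            with open(name, "wb") as f:
                f.write(data)
            print(f"inode {inode}: recovered {len(data)} bytes -> {name}")

if __name__ == "__main__":
    main(sys.argv[1] if len(sys.argv) > 1 else "rhel5.img")
```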

Tomorrow let's get into what the data contained within mlocate can do for our investigation, and end with some mlocate database recovery. I'm downloading a RHEL v5 evaluation ISO tonight to start my tests!


Daily Blog #218: Sunday Funday 1/26/14 Winner!


Hello Reader,
   One of the great things about Sunday Fundays is that we get to find those individuals out there whose experience shines through in their answers. This week's challenge had a few great answers, but this week's winning answer was not only received before the other contenders, it also shone through as a winner. Take the time to read this one; you'll find some great ideas for your future Linux server investigations.

The Challenge:

You have a Red Hat Enterprise Linux v5 server running an eCommerce site. The server was breached when the attacker logged in as the root user two weeks ago and linked the shell history file to /dev/null. What other artifacts can you rely on to determine what the attacker did over the past two weeks?

The Winning Answer:

Anonymous

TL;DR: /var/log/secure, SSH log, syslog, wtmp & btmp, Apache logs, firewall logs, acct files, memory image, file system metadata & journal & deleted content.
RHEL 5, first released in 2007, uses kernel 2.6.18, even in the latest update (Update 10, October 2013).

My strategy for approaching this investigation would consist of two phases: first, identify the periods of potential attacker activity; second, drill into these suspicious time ranges to collect attacker commands and actions. Generally speaking, I would use multiple log sources to draft an initial list of suspicious time ranges. Then, I would use more specific tools to recover evidence of commands and actions within those ranges.
Due to the specific wording used in the scenario, I don’t have to worry about reviewing the system for evidence of a remote exploit such as SQL injection. Of course, the best place to start running that down is by reviewing application and server logs.
To begin, I would review the file /var/log/secure to identify how the attacker logged in for the first time. This is a log file that records entries associated with authentication requests, including timestamps, usernames, source processes, and error messages. According to the scenario, the system was compromised for the first time via the root login. So, I'd need to cross-reference all legitimate administrator activity with root logins since approximately two weeks ago. The outstanding entries should be associated with the attacker (or poorly configured services).
If I saw a single authentication attempt leading to a successful login, I would suspect that the attacker acquired legitimate credentials (account password or SSH certificate) elsewhere, perhaps by compromising another system, phishing the administrators, etc. I'd have to track this down by expanding the scope of the investigation. It is also possible that the password fell to a brute force attack, in which case I'd expect to see many, many unsuccessful attempts before a single successful authentication. The answer to this question may give me some insight into the type of attack I was dealing with, and how I might expect to find the remainder of the system configured. For instance, a properly secured environment should not fall to an internet-wide brute force attack.
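As a quick sketch of that review, here's a small Python script that tallies failed and accepted root logins per source IP from /var/log/secure; the regex assumes typical sshd log lines and may need adjusting for other services:

```python
#!/usr/bin/env python3
"""Summarize root authentication attempts from /var/log/secure.

A sketch assuming typical sshd log lines such as:
  Jan 26 03:14:15 host sshd[1234]: Failed password for root from 10.0.0.5 port 4022 ssh2
  Jan 26 03:20:01 host sshd[1240]: Accepted password for root from 10.0.0.5 port 4031 ssh2
Many failures followed by one success from the same source suggests
a successful brute force.
"""
import re
import sys
from collections import Counter

LINE = re.compile(r"(Failed|Accepted) \S+ for root from (\S+)")

def summarize(path):
    failed, accepted = Counter(), Counter()
    with open(path, errors="replace") as f:
        for line in f:
            m = LINE.search(line)
            if m:
                (failed if m.group(1) == "Failed" else accepted)[m.group(2)] += 1
    for ip in sorted(set(failed) | set(accepted)):
        print(f"{ip:>15}  failed={failed[ip]:<6} accepted={accepted[ip]}")

if __name__ == "__main__":
    summarize(sys.argv[1] if len(sys.argv) > 1 else "/var/log/secure")
```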

I’d review and cross reference wtmp and btmp files for additional session information. wtmp tracks a history of logins and logouts by user, and btmp tracks failed authentication attempts. utmp could be helpful, but it typically tracks the current state of the system. All these files can be found in the /var/log directory. These are binary files, but the format is well known, and similar version of Linux (such as the Fedora release using kernel 2.6.18) can be used as effective analysis machines.

Once I had identified the first relevant login session, I would confirm the means of access: was it SSH, VNC, or some other remote access protocol? In each of these cases, I'd review the network architecture to determine from which network segments this protocol was allowed. Ideally, these administrative interfaces would not be exposed to the greater internet, but we've all seen that too often. If the administrative ports were not accessible from the internet, then it again means the scope of the investigation should be expanded to include additional systems on the local network segment.

From the scenario description, the server is running an eCommerce site. An eCommerce site is typically composed of front-end web services (serving static media like HTML, images, CSS, and Javascript, as well as dynamic pages generated by languages like PHP, Perl, or Python) and databases (MySQL or Postgres are popular). It is probably running at least some of the front-end services, and therefore it is probably accessible to the internet.

This internet connection might be direct through a firewall, or through a load balancer/reverse proxy and firewall. I would review logs from the firewalls and load balancers to identify requests and related activity from the same source IP address. This would help define additional periods of activity associated with the attacker.

I would timeline all application logs (usually, /var/log/*/*), syslog entries (/var/log/messages, etc.), and file system activity. Log2timeline or Plaso are good tools for organizing all this information. Some types of interesting application log entries could be yum package manager entries (/var/log/yum.log) indicating that the attacker installed additional software, or Apache web server entries (usually, /var/log/httpd/*) showing that the attacker tested access to web directories via a web browser.

I would pay particular attention to the file system activity, reviewing file system metadata for newly created, modified, or deleted files. The Sleuth Kit and loopback devices are my favorite tools for working with Linux images. I'd hope to find attacker tools and/or attacker-archived data using the file system metadata. To recover further deleted files, I might try Foremost and extundelete. Foremost carves chunks from a binary stream using known file signatures. extundelete processes the journal on ext3/ext4 and attempts to recover old copies of inodes, and subsequently the file data.

Of course, I would also acquire a memory image of the server, and subsequently use Volatility to extract artifacts. I would first attempt to use the "linux_bash" plugin, which extracts Bash shell history entries from memory. These entries may still be in memory despite the /dev/null link. However, due to the duration of the compromise (two weeks), I would not consider this source an authoritative record of all activity. A number of the other plugins (for instance, linux_check_*) are also appropriate for identifying rootkits and other suspicious processes.

Finally, I would review the process accounting information tracked by the "acct" service. This service typically stores its data in /var/account/pacct, which records processes run and resources consumed. I would start by reviewing the data using the lastcomm and sar programs to identify process names I don't recognize. I could also correlate processes run before two weeks ago with those run after. Though the process accounting logs do not always contain verbose information, they can be effective in identifying Bitcoin miners or other rogue processes.


Also Read: Daily Blog #217

Daily Blog #217: Sunday Funday 1/26/14 - Red Hat Enterprise Linux v5 Server Challenge


Hello Reader,
            If you watched the forensic lunch this week you heard Hal Pomeranz talk about his newly released tools and scripts with a focus on Linux analysis. So let's extend the conversation into the challenges of dealing with Linux servers, as our prior Linux Sunday Funday focused on X Windows usage.

The Prize:

A $200 Amazon Gift Card

The Rules:
  1. You must post your answer before Monday 1/27/14 2AM CST (GMT -6)
  2. The most complete answer wins
  3. You are allowed to edit your answer after posting
  4. If two answers are too similar for one to win, the one with the earlier posting time wins
  5. Be specific and be thoughtful 
  6. Anonymous entries are allowed, please email them to dcowen@g-cpartners.com
  7. In order for an anonymous winner to receive a prize they must give their name to me, but I will not release it in a blog post

The Challenge:

You have a Red Hat Enterprise Linux v5 server running an eCommerce site. The server was breached when the attacker logged in as the root user two weeks ago and linked the shell history file to /dev/null. What other artifacts can you rely on to determine what the attacker did over the past two weeks?

Also Read: Daily Blog #216

Daily Blog #216: Saturday Reading 1/25/14


Hello Reader,
          It's cold! Stay inside and heat up some pizza; let's get down to some DFIR reading! Time for more links to make you think on this week's Saturday Reading:

1. What, did we have a forensic lunch this week? Why, yes we did! This week we had:
Jacob Williams, @malwarejake, talking about his proof-of-concept code shown at ShmooCon. Check it out here: http://malwarejake.blogspot.com/2014/01/shmoocon-talk-and-add.html and download the tool and memory samples here: http://code.google.com/p/attention-deficit-disorder/

Hal Pomeranz, @hal_pomeranz,  talking about the scripts he's been sharing via GitHub for the DFIR Community: https://github.com/halpomeranz/dfis

Lee Whitfield, @lee_whitfield, talking about his new series of internet safety videos that you can show to your friends and family, found here: https://www.youtube.com/user/mrleewhitfield

2. Apple Examiner has been updating its analysis pages for OSX, http://www.appleexaminer.com/MacsAndOS/Analysis/InitialDataGathering/InitialDataGathering.html, give it a read and keep up to date.

3. Patrick Olsen has a great post on how to spot lateral movement from bad guys, http://sysforensics.org/2014/01/lateral-movement.html. I like how he breaks down the categories of lateral movement techniques and shows their combinations for analysts to find.

4. SANS is hosting a photo contest to win a free simulcast seat to a training class of your choice, http://digital-forensics.sans.org/blog/2014/01/20/announcing-the-dfircon-photo-contest-changce-to-win-a-free-simulcast-course, a pretty sweet prize for just taking a photo.

5. Patrick Olsen also wrote a great post on knowing what's normal in a Windows system, http://sysforensics.org/2014/01/know-your-windows-processes.html, you have to know what's normal to know what's wrong! I certainly hope Patrick keeps blogging!

6. A firm called Cassidian CyberSecurity has put out a tool for carving $I30 entries, http://blog.cassidiancybersecurity.com/post/2014/01/Introducing-MftCrawler%2C-a-MFT-parser-with-%24i30-carving-capabilities. It's written in Lua; can't say I've seen that very often.

7. Here's a good read on securing logs so they can be reviewed later, http://www.scip.ch/en/?labs.20140123#null. If you are doing internal IR, making sure the logs you need actually make it to the analysis stage is kinda important.

8. Brian Moran has another post up, http://brimorlabs.blogspot.com/2014/01/identifying-truecrypt-volumes-for-fun.html, this time building on the TrueCrypt master password recovery plugin the Volatility devs released to show how to find these volumes.

9. Mandiant (now FireEye?) has a new blog up at https://www.mandiant.com/blog/tracking-malware-import-hashing talking about their method of attribution via the modules a backdoor imports, as each team has its own preferred backdoor kits. Kinda neat!

That's all for this week! Did I miss something interesting? Leave it in the comments below so others can find it and I can add it to my feedly for next week!

Also Read: Daily Blog #215

Daily Blog #215: Forensic Lunch 1/24/14 - Discussion with Jacob Williams, Hal Pomeranz, and Lee Whitfield



Hello Reader,
       We had a fun forensic lunch this week with a group of guests who had as much fun before the show as they did during it. This week's guests are:

Jacob Williams talking about his proof-of-concept code shown at ShmooCon. Check it out here: http://malwarejake.blogspot.com/2014/01/shmoocon-talk-and-add.html and download the tool and memory samples here: http://code.google.com/p/attention-deficit-disorder/

Hal Pomeranz talking about the scripts he's been sharing via GitHub for the DFIR Community: https://github.com/halpomeranz/dfis

Lee Whitfield, talking about his new series of internet safety videos that you can show to your friends and family, found here: https://www.youtube.com/user/mrleewhitfield


Daily Blog #214: Let's talk about MTP Part 6


Hello Reader,
        I'm intending for this to be the last blog in the series, at least for now. Today let's look at recovering past interaction with files accessed from MTP devices. You may ask yourself why what we've seen so far isn't enough to determine past interaction; well, there are several reasons:

1. Windows 7 (haven't tested other versions at this point) will delete the contents of the WPDNSE directory on reboot.

2. The MFT will only contain the FILE records for deleted WPDNSE content until they are overwritten or defrag runs, which is once a week on Windows 7.

3. Most applications will not make LNK files for the accesses.

4. Most applications will not make jumplists for the accesses.

5. The Shellbags will reveal directories on the MTP devices that were accessed but not the files themselves.

So we are left with two possible sources for historical viewing after a week's worth of time.

1. MRU keys
No record of any of these files having an MRU entry. This isn't terribly surprising, as we didn't find a LNK file or a jumplist entry for them.

2. USN Journal
The USN Journal is a great source of information here. Since GUID translation and the like are not an issue, you just need to find the MFT entry number of the WPDNSE directory, which in my testing was 57374, and then search for all USN entries with 57374 as the parent entry number. (If this were a real case I would also match the sequence number.) The following is what comes back:

[Screenshot: USN entries with parent entry 57374]

So now we can see that the folder GUID '{00000025-0001-0001-0000-000000000000}' was the only directory created under WPDNSE within our USN Journal. We don't have the translation to a directory name here, but as we saw in the prior post, we can look that up from the shellbags.

Looking deeper and now searching for all files with a parent MFT entry of 57374, which we got from the above screenshot, we can see the following files were accessed from the MTP device:

[Screenshot: files accessed from the MTP device and their access times]

And there we go: all of the files I accessed and the different times I accessed them.
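For reference, here's a minimal Python sketch (not the parser used above) that walks an extracted $UsnJrnl:$J stream and filters USN_RECORD_V2 entries by parent MFT entry number; the input file name and the 57374 default are placeholders from this testing:

```python
#!/usr/bin/env python3
"""Filter USN Journal records by parent MFT entry number.

A sketch: it walks an extracted $UsnJrnl:$J stream and prints the
USN_RECORD_V2 entries whose parent MFT entry matches. The entry
number is the low 48 bits of the 64-bit file reference; the high
16 bits are the sequence number, which a real case should also
verify.
"""
import struct
import sys
from datetime import datetime, timedelta

def usn_records(data):
    off = 0
    while off + 60 <= len(data):
        if data[off:off + 4] == b"\x00\x00\x00\x00":
            off += 8                      # $J is sparse; skip zero fill
            continue
        length, major = struct.unpack_from("<IH", data, off)
        if length < 60 or major != 2:
            off += 8                      # not a v2 record; resync
            continue
        _fref, pref, usn, ts, reason = struct.unpack_from("<QQQQI", data, off + 8)
        name_len, name_off = struct.unpack_from("<HH", data, off + 56)
        name = data[off + name_off:off + name_off + name_len].decode(
            "utf-16-le", errors="replace")
        when = datetime(1601, 1, 1) + timedelta(microseconds=ts / 10)  # FILETIME
        yield pref & 0xFFFFFFFFFFFF, usn, when, reason, name
        off += (length + 7) & ~7          # records are 8-byte aligned

if __name__ == "__main__":
    image = sys.argv[1] if len(sys.argv) > 1 else "UsnJrnl_J.bin"
    parent = int(sys.argv[2]) if len(sys.argv) > 2 else 57374
    data = open(image, "rb").read()
    for pent, usn, when, reason, name in usn_records(data):
        if pent == parent:
            print(f"{when}  usn={usn}  reason={reason:#010x}  {name}")
```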

Now if this were an XP system you would be out of luck, so let's hope any MTP analysis you need to do is on Vista, 7, or 8!

Tomorrow is the forensic lunch, try to make time to watch it live and ask questions!

Don't miss out on: Daily Blog #213


Daily Blog #213: Let's Talk about MTP Part 5



Hello Reader,
         Yesterday we went through the temporary directory that stores files accessed from MTP devices. In our first post in the series we talked about the ability to recover MTP accesses from shellbags, and if you read Nicole's post you'll see she was able to recover files accessed from the WPDNSE directory. In my testing, using different applications than Nicole, I could not get a LNK file to be created for any of the following file types:
  • docx - MS Word 2010 
  • png - Microsoft Media Viewer
  • pl - Activestate Komodo
  • txt - Notepad
I even checked the Office recent documents folder and found no LNK files that pointed to these files, those directories, or the MTP device.

I did find an entry in the Windows Explorer Pinned and Recent jumplist (AppID 1b4dd67f29cb1962) by examining each jumplist with a hex editor. What was interesting is how differently jumplist parsers handled this entry. I tested this jumplist with two different parsers:

TZWorks jmp v.25 (64-bit) did not show the entry.
Woanware JumpLister provided the following in the DestList entry but could not parse out the entry:
291
1/23/2014 2:44
1/1/0001 12:00:00 AM 1/1/0001 12:00:00 AM ::{20D04FE0-3AEA-1069-A2D8-08002B30309D}\\\?\usb#vid_19d2&pid_0307#p752a15#{6ac27878-a6fa-4155-ba85-f98f491d4f33}\SID-{10001,,2410917888}\{00000025-0001-0001-0000-000000000000}

This is very interesting, as the raw hex showed the following, providing a translation of the folder GUID to the name of the folder on the MTP device itself:

[Screenshot: raw hex of the jumplist entry]
Here you can see the folder name 'Test' (yes, I'm very original in my directory naming) and the folder GUID found in the WPDNSE directory, '00000025-0001-0001-0000-000000000000'. This is similar to the shellbags entry Nicole found, which TZWorks now successfully parses in v.36 of sbags.


[1] New Folder; [2] {00000025-0001-0001-0000-000000000000}; [3] Name : New Folder; [4] ObjId : o25; [5] FuncObjId : s10001; [6] UniqueId : {00000025-0001-0001-0000-000000000000}
It looks like our jumplist parsing tools need to add support for the MTP structures, as other tools have had to do.

Tomorrow let's talk about what the USN Journal shows us.

Also Read: Daily Blog #212

Daily Blog #212: Let's talk about MTP Part 4


Hello Reader,
        Let's get back to this series. If you've read Nicole Ibrahim's blog you've already seen most of this data; I'm just doing my own testing to confirm her findings and see what else I find. Today let's look at artifacts of file access from an Android phone using MTP.

I again attached my AT&T Avail 2 and this time opened up the file I had copied onto it, shellbags.pl. Following Nicole's research, found here, I went to the WPDNSE directory located under:
"C:\Users\\AppData\Local\Temp\WPDNSE\"
from there I found a folder with the GUID name:
"{00000025-0001-0001-0000-000000000000}"
Located under it was the shellbags.pl file I accessed from the phone, as expected. One GUID folder is created for every folder that a file is accessed from within the MTP device, across all MTP devices accessed. To determine which folder or device a GUID came from, you'll have to go to the shellbags. We'll cover that tomorrow and look for other sources of this correlation.

What was interesting to me, and something I didn't see Nicole mention, was the dates on the file located under the GUID folder. The creation date of the file was set to the time I accessed the file from the phone, not the time the file was copied to the phone.

[Screenshot: creation time of the file under the GUID folder]


The modification time of the file corresponded to the original modification date of the file I copied onto the MTP device in the prior test. When looking at the files through the MTP shell extension, I noticed that only the modification date is displayed in the properties.

[Screenshot: MTP properties dialog showing only the modification date]


I copied a file into the same directory on the Android phone via MTP again, this time with the WPDNSE directory open, but no temporary file was created. So we get artifacts within the WPDNSE directory from file accesses via MTP, but not from file copies to an MTP device.
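For live-response triage, a short Python sketch can enumerate the GUID folders under WPDNSE and print the creation and modification times discussed above; the fallback path and its username are placeholders:

```python
#!/usr/bin/env python3
"""List GUID folders under WPDNSE and the timestamps of their files.

A sketch for triaging a live Windows system. On Windows, st_ctime
is the file's creation time, which in this testing reflects when
the file was accessed from the MTP device, while st_mtime carries
the original modification date. The fallback path's username is a
placeholder.
"""
import os
from datetime import datetime
from pathlib import Path

WPDNSE = Path(os.environ.get("TEMP", r"C:\Users\someuser\AppData\Local\Temp")) / "WPDNSE"

def walk_wpdnse(base=WPDNSE):
    # One GUID folder is created per source folder on the MTP device
    for guid_dir in sorted(p for p in base.iterdir() if p.is_dir()):
        print(guid_dir.name)
        for f in guid_dir.rglob("*"):
            if f.is_file():
                st = f.stat()
                created = datetime.fromtimestamp(st.st_ctime)   # access from device
                modified = datetime.fromtimestamp(st.st_mtime)  # original mod time
                print(f"  {f.name}  accessed={created}  modified={modified}")

if __name__ == "__main__":
    walk_wpdnse()
```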

Tomorrow let's look at what other artifacts are left behind by these file copies and accesses.
