Monday, August 29, 2011

Updates and Links

Report Writing
One of the hardest parts of what we do is writing reports; technical people hate to write.  I've seen this fact demonstrated time and again over the years.

Paul Bobby wrote up a very interesting blog post about criteria for an effective report.  As I read through it, I found myself agreeing, and by the time I got to the end of the post, I noticed that there were a few things that I see in a lot of reports that hadn't been mentioned.

One section of the post that caught my eye was the "Withstand a barrage of employee objections" section...I think that this can be applied to a number of other examinations.  For example, CP cases will sometimes result in "Trojan Defense" or remote access claims (I've seen both).  Adding the appropriate checklists (and training) to your investigative process can make answering these questions before they're asked an easy task.

At the end of the post, Paul mentions adding opinions and recommendations; I don't really have an issue with this, per se, as long as the opinions are based on and supported by clearly documented analysis and findings, and are clearly and concisely described in the report.  In many of the reports I've reviewed over the years, the more prolific the author attempts to be, the less clear the report becomes.  Also, invariably, the report becomes more difficult for the author to write.

CyberSpeak Podcast
Ovie's posted another CyberSpeak podcast, this one with an interview of Chris Pogue, author of the "Sniper Forensics" presentations.  Chris talks about the components of "Sniper Forensics", including Locard's Exchange and the Alexiou Principles.

Another thing that Chris talks about is Occam's Razor...specifically, Chris (who loves bread pudding, particularly a serving the size of your head...) described a situation that we're all familiar with, in which an analyst will find one data point, and then jump to a conclusion as to the meaning of that data point, not realizing that the conclusion is supported by that one data point and a whole bunch of assumptions.  When I find something that is critical to addressing the primary goal of my examination, I tend to look for other supporting artifacts to provide context, as well as a stronger relative level of confidence, to the data I'm looking at, so that I can get a better understanding of what is actually happening.

At the beginning of the podcast, Ovie addresses having someone review your analysis report before heading off to court, sort of a peer review thing.  Ovie said that Keith's mention (in a previous podcast) of this review probably referenced folks in your office, but this sort of thing can also include trusted outside analysts.  Ovie mentioned that you have to be careful about this, in case the analyst then goes about talking/blogging about their input to your case.  I agree that this could be an issue, but I would also suggest that if the analyst were trusted, then you could trust them not to say anything.

One thing to remember from the podcast is that there is no such thing as a court-approved tool...the term is simply marketing hype.

Finally, Chris...HUGE thanks for the RegRipper (and ripXP) shout-out!  And a HUGE thanks to Ovie and the CyberSpeak team for putting together such a great resource to the community.

Morto
I recently blogged regarding Jump Lists, and in that post indicated which artifacts are available when a user uses the Remote Desktop Client to connect to other systems via RDP.  Another thought as to how this might be useful came with F-Secure's announcement of a worm called Morto, which appears to use RDP to spread.  Here's how Jump Lists might come into play: if RDP connections are observed between systems (or in the logs of the system being accessed), an examination might show no Jump Lists associated with the Remote Desktop Client for the primary user on the originating system.  This goes back to what I was referring to earlier in this post...let's say you see repeated RDP connections between systems, and go to the system from which they originated.  Do you assume that the connections were the result of malware, or of the user?  Examining the system will provide you with the necessary supporting information, giving you that context.

Mentions of Morto can also be found at Rapid7, as well as MMPC.


NoVA Forensics Meetup Reminder
The next NoVA Forensics Meetup is set for 7 Sept.  We're scheduled to have a presentation on botnets from Mitch Harris...I'm really looking forward to it!

Tools
I posted recently regarding StickyNotes analysis, and also recently completed my own StickyNotes parser.  It works very well, and I've written it so that the output is available in listing, CSV, and TLN formats.  Not only does it print out information about the embedded notes within the StickyNotes.snt file, but it also provides the modification date/time for the "Root Entry" of the .snt file itself.  This would be useful if the user had deleted all of the sticky notes, as it would provide an indication of user activity on the system (i.e., the user would have to be logged in to delete the sticky notes).  In order to write this tool, I followed the MS OLE/Compound Document binary format spec, and wrote my own module to parse the Sticky Notes.  As I didn't use any proprietary modules (only the core Perl seek(), read(), and unpack() functions), the tool should be cross-platform.
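
For anyone curious about what's involved in writing such a tool, the compound document header is easy to get at with just those core functions.  Here's a minimal sketch (an illustration of mine, not code from the parser itself) that validates the signature and pulls a few of the header fields, per the published format spec:

use strict;

my $file = shift || "StickyNotes.snt";
open(FH, "<", $file) || die "Could not open $file: $!\n";
binmode(FH);
my $bytes = read(FH, my $hdr, 512);
die "Could not read a full header\n" unless ($bytes == 512);

# The first 8 bytes of the file are the compound document signature
die "Not an OLE compound document\n"
  unless (unpack("H*", substr($hdr, 0, 8)) eq "d0cf11e0a1b11ae1");

# Sector and mini-sector sizes are stored as powers of 2 at offsets 30 and 32
my ($sec_shift, $mini_shift) = unpack("vv", substr($hdr, 30, 4));
# Offset 48: first sector of the directory table; offset 56: mini-stream cutoff
my $dir_sect    = unpack("V", substr($hdr, 48, 4));
my $mini_cutoff = unpack("V", substr($hdr, 56, 4));

printf "Sector size        : %d bytes\n", 2 ** $sec_shift;
printf "Mini-sector size   : %d bytes\n", 2 ** $mini_shift;
printf "Directory at sector: %d\n", $dir_sect;
printf "Mini-stream cutoff : %d bytes\n", $mini_cutoff;
close(FH);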

Anyway, the tool parses the notes out of the .snt file, and presents information such as the creation and modification dates, and the contents of the text stream (not the RTF stream) of the note.  It also displays the modification date for the Root Entry of the OLE document...

C:\Perl\sticky>sn.pl -f stickynotes.snt
Root Entry
  Mod Date     : Fri Aug 26 11:51:35 2011

Note: a4aed27b-cfd9-11e0-8
  Creation Date: Fri Aug 26 11:51:35 2011
  Mod Date     : Fri Aug 26 11:51:35 2011
  Text: Yet another test note||1. Testing is important!

Note: e3a17883-cfd8-11e0-8
  Creation Date: Fri Aug 26 11:46:18 2011
  Mod Date     : Fri Aug 26 11:46:18 2011
  Text: This is a test note

I also have CSV and TLN (shown below) output formats:

C:\Perl\sticky>sn.pl -f stickynotes2.snt -t
1314359573|StickyNote|||M... stickynotes2.snt Root Entry modified

In the above example, all of the notes had been deleted from the .snt file, so the only information that was retrieved was the modification date of the Root Entry of the document.
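
As an aside, the first field in a TLN entry is a 32-bit Unix epoch time, so translating one back into something human-readable (in UTC) is a Perl one-liner:

C:\Perl\sticky>perl -e "print scalar gmtime(1314359573)"
Fri Aug 26 11:52:53 2011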

Addendum: I've posted the Windows binary of the Sticky Notes parsing tool to my Google Code site.  Note that all times are displayed in UTC format.

Saturday, August 27, 2011

Sticky Notes Analysis

Another cool feature for Windows 7 systems is the built-in Sticky Notes application, which allows the user to create little reminders for themselves on the desktop, just like with regular Sticky Notes IRL.  Having written a Jump List parser and knowing (thanks to Troy Larson) that Sticky Notes also follow the MS compound document binary format, I decided to take a look at writing a parser for Sticky Notes.  One of the interesting aspects of the OLE format is the amount of metadata (particularly time stamps) that is simply a "feature" of the format.

When a user creates sticky notes, they appear on the desktop like...well...sticky notes.  Users can change fonts and colors for their notes, but for the most part, the available functionality is pretty limited.  Now, all of the sticky notes end up in a single file, found within the user's profile (path is "%UserProfile%\AppData\Roaming\Microsoft\Sticky Notes"), named StickyNotes.snt.

So what is the potential forensic value of sticky notes?  Well, it kind of depends on your case, what you're looking for, what you're trying to show, etc.  For example, it's possible that a user may have sticky notes that contain information regarding people they know (contacts), appointments or meetings that they may have, etc.  As far as visible content, we may not really get an idea of what's there until we start to see them used by the user.  Based on the format used, there is additional information available.  Remember that all sticky notes appear in one file, so the file system MACB times apply to the file as a whole.  However, each individual sticky note is held in an OLE storage stream, which has creation and modification dates associated with it.  Opening the Sticky Notes file in MiTeC's Structured Storage Viewer, you can see that the file has several streams: Version, Metafile, as well as the storage streams (i.e., folders with 17-character names) that each "contain" streams named 0, 1, and 3.  In each case, the "0" stream contains the complete RTF "document" for the sticky note (which can be extracted and opened in WordPad), and the "3" stream contains the text of the sticky note, in Unicode format.

Now, because the storage streams for each sticky note have creation and modification dates, we can use this information in timeline analysis to demonstrate user activity during specific time frames.  Extracting the "B" (creation) and "M" (modification) times, we can add this information to a timeline in order to demonstrate shell-based access to the system by a specific user.
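
To illustrate just how accessible this information is, here's a minimal stand-alone sketch (my illustration; it assumes the usual 512-byte sector size) that reads the Root Entry of an .snt file and emits its "B" and "M" times as TLN-style lines, using a common Perl idiom for the FILETIME-to-Unix-epoch conversion:

use strict;

my $file = shift || "stickynotes.snt";
open(FH, "<", $file) || die "Could not open $file: $!\n";
binmode(FH);
read(FH, my $hdr, 512);
# Offset 48 of the header holds the first sector of the directory table;
# sector 0 begins immediately after the 512-byte header
my $dir_sect = unpack("V", substr($hdr, 48, 4));
seek(FH, 512 + ($dir_sect * 512), 0);
# The Root Entry is the first 128-byte entry in the directory table
read(FH, my $entry, 128);
close(FH);

# Name length (in bytes, incl. the terminator) lives at offset 64 of the entry
my $name_len = unpack("v", substr($entry, 64, 2));
my $name = substr($entry, 0, $name_len - 2);
$name =~ s/\x00//g;      # crude UTF-16LE-to-ASCII conversion

# Creation and modification FILETIMEs live at offsets 100 and 108
my $b = getTime(unpack("VV", substr($entry, 100, 8)));
my $m = getTime(unpack("VV", substr($entry, 108, 8)));
print $b."|OLE|||".$name." B (created)\n";
print $m."|OLE|||".$name." M (modified)\n";

# Convert a 64-bit FILETIME (as two 32-bit little-endian values) to Unix epoch
sub getTime {
  my ($lo, $hi) = @_;
  my $t;
  if ($lo == 0 && $hi == 0) {
    $t = 0;
  }
  else {
    $lo -= 0xd53e8000;
    $hi -= 0x019db1de;
    $t = int($hi * 429.4967296 + $lo / 1e7);
  }
  $t = 0 if ($t < 0);
  return $t;
}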

Again, the usefulness of this information is predicated on the actual use of Sticky Notes, but automating the collection of this information allows us to quickly add context to a timeline with minimal effort.  That's where programming (Perl) comes into play.  I don't see Sticky Notes and Jump Lists being picked up as part of Windows 7 analysis processes any time soon, as analysts really don't seem to be seeing either of these as valuable forensic resources...yet. However, having an automated, cross-platform parsing capability now allows me to do further research and analysis, as well as incorporate it into a more comprehensive analysis framework.

For example, I wondered, "what happens if the user has no Sticky Notes on their desktop?"  Well, that doesn't mean that an analyst shouldn't look for the stickynotes.snt file.  Here's what I did...I created a bunch of sticky notes with various messages on my desktop and copied the *.snt file off of my system.  Then I deleted all of the Sticky Notes, and again, copied the *.snt file.  This second file only contained the Metafile and Version streams, but the Metafile stream still contained the names of all of the previously created sticky notes; however, as of yet, this stream doesn't appear to contain any recognizable time stamps.  The good news is that the modification time of the Root Entry reflected when the last sticky note was deleted.  As I mentioned earlier in this post, understanding the underlying format of a storage container allows an analyst to exploit available information wherever they may find it.

Wednesday, August 24, 2011

Jump List Analysis, pt II

I recently posted regarding Jump List Analysis, and also updated the ForensicsWiki page that Jesse K created.  Mark McKinnon has added a great list of AppIDs to the ForensicsWiki, as well.  So why am I talking about Jump Lists again?  Over two and a half years ago, I said the following in this blog:

...from a forensic perspective, this "Jump List" thing is just going to be a gold mine for an analyst...

The more I look into Jump Lists, the more convinced I am of that statement.  However, there's still a great deal that needs to be addressed regarding Jump Lists; for example, none of the publicly available tools at this point parse the DestList streams in the automaticDestinations files (I wrote a Perl script that does this).  Also, the available tools that do parse Jump Lists, including ProDiscover, do so in a format decided upon by the developer, not by analysts.  I've heard from several folks who have stated that the process that they have to go through to find relevant information in the Jump List files is very manual and very cumbersome, and we all know how repetitive processes like this can be prone to errors.  Even though I've developed code that quickly parses either a single Jump List file (or a whole directory full of them), I haven't yet settled on an output format (beyond TLN and CSV) and how to display or present relevant information derived from the Jump List files.  That brings something else to mind...the issue of what is relevant to the analyst; what I want to see as part of my analysis isn't necessarily the same thing someone else would want to see.

AppIDs
Another issue is the application identifiers for the Jump List files.  Thanks to Mark McKinnon, a nice list of AppIDs is publicly available, but I'm sure that anyone looking at it will recognize that it's far from complete.  This is going to be a community effort to keep building the list.

This page at the WindowsTeamBlog site discusses how AppIDs are created, and strongly suggests that they're created in a manner very similar to that of Prefetch file hashes.  For example, the hash that is part of a Prefetch file name is created using the path and command line for the application; it appears from this page that a similar approach is used for AppIDs, indicating that AppIDs can vary based on the file location and the command line used.

This page at MS talks about AppUserModelIDs; it doesn't explicitly discuss Jump Lists, but does reference exclusion lists for Taskbar pinning and recent/frequent lists.  The page also discusses Registry keys to use for exclusion lists, as well as provides a list of applications (including rundll32.exe and mmc.exe) that are excluded from the MFU list.  What this indicates is that the automaticDestinations Jump Lists are, in fact, MRU/MFU lists.

With respect to the exclusion lists discussed at the MS site, the page mentions a "Registry entry" as one means of preventing inclusion in the Start Menu MFU list, but not what kind of entry; i.e., key or value.  Reading on, there is mention that the type of the entry can be "Reg_None" and that the data is ignored, indicating that the "entry" is a value.

These pages do provide some insight as to how Jump List analysis needs to incorporate more than just looking at the files themselves; we have to include the Registry, as well.  For example, let's say that there are indications of an application on a Windows 7 system, but no Jump List files; do we just assume that the application was never used by the user in question if there are no Jump List files in their profile?  I would suggest that, based on the pages linked to above, conclusions regarding usage and attribution could not be drawn without including Registry analysis.
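
As an illustration, a quick check of a Software hive for excluded applications might look something like the following sketch.  It uses the Parse::Win32Registry module, and assumes that the exclusion in question is the "NoStartPage" value (of type REG_NONE, data ignored) beneath Classes\Applications, which is where HKCR\Applications lives within the hive:

use strict;
use Parse::Win32Registry;

# Hypothetical check of a Software hive for applications excluded from the
# Start Menu MFU list via a "NoStartPage" value (type REG_NONE, data ignored);
# HKCR\Applications maps to Classes\Applications in the Software hive
my $hive = shift || "software";
my $reg  = Parse::Win32Registry->new($hive) || die "Could not open $hive\n";
my $root = $reg->get_root_key();

if (my $apps = $root->get_subkey("Classes\\Applications")) {
  foreach my $app ($apps->get_list_of_subkeys()) {
    foreach my $val ($app->get_list_of_values()) {
      print $app->get_name()." has a NoStartPage value\n"
        if ($val->get_name() eq "NoStartPage");
    }
  }
}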


Now, I'm a bit skeptical about the command line being part of the AppID computation; when parsing Jump Lists created by the Remote Desktop Client, the numbered streams include a command line entry within the LNK stream (i.e., /v:"192.168.1.2"), and yet all of those streams are contained in the same file rather than in separate files.

What this means is that there's more we need to know about Jump Lists...

Analysis
So, how would you use Jump Lists and the information they contain during an examination?  My first thought would be: whenever there is some question of the user's activities on the system.  I would think that attempting to determine if a user had used an application (or had not...remember, forensic analysis can be used to exonerate someone, as well) would be the primary use of Jump List analysis.  There are also secondary uses, such as determining when the user was active on the system, or when the system itself was active (i.e., booted and running).

I would also include parsing Jump Lists and sorting the LNK streams based on the DestList stream time stamps in a timeline; again, this might be of particular value when there was some question regarding the user's activities.

Also, limited testing has demonstrated that Jump Lists persist on a system even after the application with which they are associated has been deleted and removed from the system.  I found this to be the case with iTunes 10, and others have observed similar behavior.  This behavior can prove to be invaluable during investigations, providing a significant amount of valuable information that persists long after a user has made attempts to hide their activity.

Something else to consider is this...what if the user connects a USB drive to the system, or inserts a DVD into the system, and the attached media includes files with a unique viewer application?  That viewer would not have to be installed on the system, but the user's use of the application to view the files (images, movies, etc.) would likely create a Jump List file that would persist long after the user removed the media.  This is definitely something that needs to be looked into and tested, and it also demonstrates how valuable Jump Lists could possibly be during an examination.

Carving
One of the things that many of us do during an examination is file carving.  For the most part, this is done through one of the commercial tools, or using a tool (foremost, scalpel) that looks for file headers and footers, or just the headers (and it grabs X bytes following the header).

I've taken a more targeted approach to carving.  For example, on Windows XP and 2003 systems, using blkls to extract the unallocated space from an image, I then searched for the event record magic number, and instead of grabbing X bytes from there, I followed the event record format specification and retrieved over 330 "deleted", albeit valid, Event Log event records from unallocated space.  I was able to do this because I know that there's more to the record format than finding a header and grabbing X bytes following that.
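
The core of that kind of search is pretty compact in Perl.  Here's a sketch of the approach (an illustration, not the actual code I used; it slurps the entire blkls output into memory, so treat it as a starting point):

use strict;

# Sketch: scan a blkls output file for the event record magic number ("LfLe"
# at offset 4 of each record) and validate candidate records per the format spec
my $file = shift || "unalloc.bin";
open(FH, "<", $file) || die "Could not open $file: $!\n";
binmode(FH);
my $data;
{
  local $/;
  $data = <FH>;
}
close(FH);

my $ofs = 0;
while (($ofs = index($data, "LfLe", $ofs)) > -1) {
  my $start = $ofs - 4;
  if ($start >= 0) {
    # The record length is the 4 bytes preceding the magic number, and a
    # valid record repeats that length in its final 4 bytes
    my $len = unpack("V", substr($data, $start, 4));
    if ($len >= 0x38 && ($start + $len) <= length($data)) {
      my $end_len = unpack("V", substr($data, $start + $len - 4, 4));
      if ($len == $end_len) {
        printf "Possible record at offset 0x%x, length %d bytes\n", $start, $len;
        # substr($data, $start, $len) can now be handed to a full record parser
      }
    }
  }
  $ofs += 4;
}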

I've started to take a similar look at carving Jump Lists from unallocated space, and getting anything that can be parsed is going to be a challenge.  Remember that the compound file format is referred to as a 'file system within a file', and that's exactly what it is.  The header of the file specifies the location of the directory table within the file, and for the most part, each of the streams is comprised of 64-byte sectors.  However, at a certain point, the file contains enough numbered stream entries that the DestList stream gets over 4KB (4096 bytes) in size, and its content is moved to 512-byte sectors.  Also, as numbered streams are added to the Jump List, the DestList stream becomes intermingled amongst the rest of the sectors in the file...one of the things I had to do in writing my code was build lookup arrays/lists, and as the file becomes larger, the DestList stream becomes more dispersed throughout the file.

Now, consider the numbered streams.  A numbered stream that is 203 bytes in length, per the directory table, will consume four 64-byte sectors, with 53 bytes of slack left over.  A numbered stream that is 456 bytes long will consume eight 64-byte sectors...and in neither case is there any requirement for these sectors to be contiguous.  This means that they could be dispersed within the Jump List file.  Reassembling those streams is enough of an issue without having to deal with hoping that you retrieved the correct sectors from unallocated space within the image.

Based on this, something does come to mind that would make an interesting honors or university project...carving within the Jump List file itself.  Locating deleted keys and values (as well as pulling out unallocated space) within Registry hive files has proved to be a very useful analysis technique, so maybe something of value could also be retrieved by locating and extracting unallocated space within Jump List files.  This would simply be a matter of reading the directory table and determining all of the 64- and 512-byte sectors used, and then extracting those that are currently not being used.
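
To illustrate what that might look like, here's a sketch that flags the free sectors within a compound file by walking the file's FAT; it assumes 512-byte sectors and no more than 109 FAT sectors (i.e., no DIFAT chain), and it ignores the 64-byte mini-sector space entirely, so consider it a starting point rather than a finished tool:

use strict;

# Sketch: flag the free ("unallocated") sectors within a compound file by
# walking its FAT; assumes 512-byte sectors and no DIFAT chain
my $file = shift || die "You must enter a filename\n";
open(FH, "<", $file) || die "Could not open $file: $!\n";
binmode(FH);
read(FH, my $hdr, 512);
die "Not an OLE compound document\n"
  unless (unpack("H*", substr($hdr, 0, 8)) eq "d0cf11e0a1b11ae1");

# Offset 44: number of FAT sectors; offset 76: the header DIFAT array, which
# lists the first 109 FAT sector numbers
my $num_fat = unpack("V", substr($hdr, 44, 4));
my @difat   = unpack("V109", substr($hdr, 76, 436));

my @fat = ();
foreach my $s (@difat[0..($num_fat - 1)]) {
  seek(FH, 512 + ($s * 512), 0);
  read(FH, my $sect, 512);
  push(@fat, unpack("V128", $sect));
}

# A FAT entry of 0xFFFFFFFF marks a free sector within the file
foreach my $i (0..(scalar(@fat) - 1)) {
  next unless ($fat[$i] == 0xFFFFFFFF);
  my $offset = 512 + ($i * 512);
  next if ($offset >= -s $file);   # the FAT can describe sectors beyond EOF
  seek(FH, $offset, 0);
  read(FH, my $sect, 512);
  printf "Free sector %d at file offset 0x%x\n", $i, $offset;
  # $sect now holds 512 bytes of file-internal "unallocated" space
}
close(FH);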

Resources
Alex Barnett's paper on Jump List Forensics
Mark Woan's JumpLister
Troy Larson's Forensic Examination of Windows 7 Jump Lists presentation
Mike Ahrendt's blog post on Jump Lists (3 Apr 2011)

Tuesday, August 23, 2011

Reconnoitre

I've had the opportunity recently, thanks to Paul Sanderson (of Sanderson Forensics), to take a look at his Reconnoitre tool for accessing files within Volume Shadow Copies (VSCs).  I'm not sure how many folks have a need to do this, but VSCs have been part of Windows systems since Vista, and can provide a wealth of forensically-valuable data.  I've posted in this blog regarding accessing VSCs, and I think that the more we address this topic, and demonstrate how valuable accessing VSCs can be, the more analysts will incorporate the technique into their analysis whenever possible.

I should also point out that the current versions of ProDiscover (with the exception of the Basic Edition) also allow you to access VSCs within an image, even if your analysis system is Windows XP.  Check out the TechPathways Resource Center for more info.

Installing Reconnoitre was straightforward...I installed it onto a Windows XP SP3 system, and it was up and running right away.  I then connected an external USB drive on which I have a logical image of a Windows 7 system, and added the image file to the case I created, and then sat back and watched the tool process the information.

That's right...I was accessing VSCs within a Windows 7 image, from a Windows XP system.

Paul shared the following with respect to Reconnoitre:

"1.  For those investigators who are used to working directly on an image it will be a more familiar experience, obviously this could be seen as  both good and bad.
2.  It allows you to view files in a vsc alongside the current live file and see at a glance how many variants there are, a sort of overview.
3.  It allows you to easily view the MFT entry and see where the changes are.


I think the last is possibly the most useful. I have an example image where I have changed a jpg once and there are 3 entries in shadows. One is obviously the original image, a second is also the original image but only the MFT entry has been changed (the addition of an Objid stream). The final example I have not yet got to the bottom of but I think it may be possibly due to the file being moved, defrag? The allocation for the file is different, as are some other bytes in the MFT. Changes to the MFT entry for the parent folder may also be relevant."

All of these capabilities can be extremely valuable to the analyst.  One of the things I really like about this tool is that it's a tool for analysts, written by an analyst...so the functionality of the tool is derived from the author's needs, developed from performing the same sorts of investigations that we all have encountered, and will continue to see.

Be sure to visit Paul's site and check out both his commercial products and free utilities.

Note: For the skeptical folks out there, let it be known that I receive NO benefit from this posting, other than the much-appreciated opportunity to see a new tool in action.  I gain nothing...monetarily or otherwise...from taking a look at this tool.

Monday, August 22, 2011

More Updates

Scanning Hosts
There's a great post over on the SANS ISC blog regarding how to find unwanted files on workstations...if you're a sysadmin for an organization and have any responsibilities regarding IR, this is a post you should really take a look at.

As a responder, one of the things I've run across is that we'd find something on a system that appeared to be a pretty solid indicator of compromise (IoC) or infection.  Sometimes this is a file, directory name, or even a Registry key or value.  This indicator may be something that we could use to sweep across the entire enterprise, looking for it on other systems...and very often the question becomes, is there a tool that I can use to scan my infrastructure for other compromised/infected systems?  Well, there is...it's called a "batch file".
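
If batch files aren't your thing, the same idea is only a few lines of Perl.  In the sketch below, the host names and the indicator path are hypothetical examples, and admin access to the C$ shares is assumed:

use strict;

# Sweep a list of hosts for a file-based indicator via the admin share
my @hosts     = ("host1", "host2", "host3");
my $indicator = "Windows\\System32\\bad.dll";

foreach my $host (@hosts) {
  my $path = "\\\\".$host."\\C\$\\".$indicator;
  if (-e $path) {
    print $host.": indicator found\n";
  }
  else {
    print $host.": indicator not found\n";
  }
}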

Forensics
One of the things most of us are aware of as analysts is that in many cases, deleted stuff really isn't; files, Registry keys, etc., get deleted but are often recoverable if you know what you're looking for.  Well, here's a great example of that being used recently.

CyberSpeak Podcast
I was listening to Ovie's latest CyberSpeak podcast, and very early on in the show, Ovie read a listener email from a LEO who does forensics as a collateral duty.  Now, this is really nothing new...I've met a number of LEOs for whom forensics is an additional duty.  Also, for a lot of LEOs, digital forensics isn't the primary task of the job, even if it is the primary assignment, as LEOs need to remain qualified as LEOs in order to get promoted, and they very often do.  This means that someone will come into the digital forensics field as a seasoned investigator, and several years later move out to another aspect of the law enforcement field. 

I had an opportunity to sit down with some LEOs a couple of weeks ago, and one of the things we came up with as a resource is the LEO's rolodex; if you run into something that you have a question or thought on, call someone you know and trust that may have detailed knowledge of the subject, or knows someone who does.  None of us knows everything, but there may be someone out there that you know who knows just a little bit more about something...they may have read one or two more articles, or they may have done one more bit of research or testing.

Ovie also mentioned the ForensicsWiki as a resource, and I completely agree.  This is a great resource that needs to be updated by folks with knowledge and information on the topic areas, so that it will become a much more credible resource.

Also, I have to say that I disagree with Ovie's characterization that there are two different types of forensics; "intrusion forensics" and "regular forensics".  I've heard this sort of categorization before and I don't think that that's really the way it should be broken out...or that it should be broken out at all.  For example, I spoke to a LEO at the first OSDFC who informed me that "...you do intrusion and malware forensics; we do CP and fraud cases."  My response at the time was that I, and others like me, solve problems, and we just get called by folks with intrusion problems.  In addition, there's a lot of convergence in the industry, and you really can't separate the aspects of our industry out in that way.  So let's say that as a LEO, you have a CP case, and the defense counsel alludes to the "Trojan Defense"...you now have a malware aspect to your case, and you have to determine if there is a Trojan/malware on the system and if it could have been responsible for the files having been placed on the system.  Like many examiners, I've done work on CP cases, and the claim was made that someone accessed the system remotely...so now I had an intrusion component of the examination to address.

I went on and listened to Ovie's interview with Drew Fahey...great job, guys!

Time and Timelines
When I give presentations or classes on timeline analysis, one of the things I discuss (because it's important to do so) is those things that can affect time and how it's recorded and represented on a system.  One of the things I refer to is file system tunneling, which is a very interesting aspect of file systems, particularly on Windows.  In short, on both FAT and NTFS systems, if you delete a file in a directory and create a new file of the same name within 15 seconds (the default window), then that new file retains the original file's creation dates in both the $STANDARD_INFORMATION and $FILE_NAME attributes.
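
This is easy to demonstrate for yourself.  On Windows, Perl's stat() reports the file creation time in the ctime slot, so a quick (and admittedly crude) test of tunneling might look like the following sketch:

use strict;

# Quick file system tunneling test; run from a directory on an NTFS volume.
# On Windows, (stat)[10] (ctime) reports the file creation time.
my $file = "tunnel_test.txt";

open(FH, ">", $file) || die "Could not create $file: $!\n";
print FH "first\n";
close(FH);
my $ctime1 = (stat($file))[10];

unlink($file);
sleep(5);                 # within the default 15-second tunneling window

open(FH, ">", $file) || die "Could not create $file: $!\n";
print FH "second\n";
close(FH);
my $ctime2 = (stat($file))[10];

print "First creation time : ".scalar(gmtime($ctime1))."\n";
print "Second creation time: ".scalar(gmtime($ctime2))."\n";
print "Creation time tunneled!\n" if ($ctime1 == $ctime2);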

This is just one of the things that can affect time on systems.  Grayson, a member of the Trustwave team along with Chris, recently posted to his blog regarding his MAC(b) Daddy presentation from DefCon19, and in that post, linked to this Security BrainDump blog post.

Tools
There's a "new" version of Autopsy available...v3.0.0beta.  Apparently, this one is Windows-only, according to the TSK download page.  I've been using the command line TSK tools for some time, in particular mmls, fls, and blkls...but this updated version of Autopsy brings the power of the TSK tools to the Windows platform in a more manageable manner.

Similarly, I received an email not long ago regarding a new version of OSForensics, beta version 0.99f, being available for testing.  I'd taken a look at this tool earlier this year...I haven't looked at this new version yet, but it does seem to have some very interesting capabilities, including the apparent ability to capture and parse memory.  The tool still seems to be written primarily to interact with a live system...I'll have to take another look at this latest version.

For mounting images, PassMark also makes their OSFMount tool available for free.  This tool is capable of mounting a variety of image formats, which is great...generally, I look for "read-only", but OSFMount has the ability to mount some image formats "read-write", as well.

In chapter 3 of Windows Registry Forensics, I mentioned some tools that you can use to gather information about passwords within an image acquired from a Windows system.  This included tools that would not only quickly illustrate whether or not a user account has a password, but also allow you to do some password cracking.  Craig Wright has started a thread of posts on his blog regarding password cracking tools; this thread goes into a bit more detail regarding the use, as well as the pros and cons, of each tool.

Thoughts on Tool Validation
Now and again, I see something posted in the lists and forums regarding tool validation, followed by a lot of agreement, but with little discussion regarding what that actually means.  I have also been contacted by folks who have asked about RegRipper validation, or have wanted to validate RegRipper.

When I ask what that means, often "validation" seems to refer to "showing expected results".  Okay, that's fine...but what, exactly, are the expected results?  What is the basis of the reviewer's expectation with respect to results?  When I was doing PCI work, for example, we'd have to scan acquired images for credit card numbers (CCNs), and we sort of knew what the "expected result" should look like; however, we very often found CCNs that weren't actually CCNs (they were GUIDs embedded in MS PE files), but had passed the three tests that we used.  When looking for track data, we were even more sure of the accuracy of the results, as the number of tests was increased.  Also, we found that a lot of CCNs were missed; we were using the built-in isValidCreditCard() function that was part of the commercial forensic analysis tool we used, and it turned out that what the vendor considered to be a valid CCN (at the time) and what Visa considered to be a valid CCN were not completely overlapping or congruent sets.  We sought assistance to rewrite the functionality of that built-in function, and ended up sacrificing speed for accuracy.

The point of this is that we found an issue with respect to the expected results returned by a specific bit of functionality, and compared those results to what we considered a "known good".  We knew what the expected result should look like, and as part of the test, we purposely seeded several test files in an image, containing examples of data that should have been correctly and accurately parsed, in order to run side-by-side tests between the built-in function and our home-brew function.
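
For reference, the Luhn (mod 10) check is one of the standard tests applied to candidate CCNs...and as the GUID example above illustrates, passing such a test is necessary, but not sufficient.  A minimal Perl version looks like this:

use strict;

# Luhn (mod 10) check: double every second digit from the right, subtract 9
# from any doubled digit over 9, and the sum must be evenly divisible by 10
sub luhnCheck {
  my $num = shift;
  return 0 unless ($num =~ m/^\d{13,19}$/);
  my ($sum, $alt) = (0, 0);
  foreach my $d (reverse(split(//, $num))) {
    if ($alt) {
      $d *= 2;
      $d -= 9 if ($d > 9);
    }
    $sum += $d;
    $alt = !$alt;
  }
  return (($sum % 10) == 0);
}

print luhnCheck("4111111111111111") ? "Passes the Luhn check\n"
                                    : "Fails the Luhn check\n";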

When someone takes it upon themselves to "validate" a tool, I have to ask myself, on what are they basing this validation?  For example, if someone says that they need to validate RegRipper (and by extension, rip.pl/.exe), what does that mean?  Does the person validating the tool understand the structures of Registry keys and values, and do they know what the expected result of a test (data extraction, I would presume) would be?  Validation should be performed against or in relation to something, such as a known-good standard...so, in this case, what standard would RegRipper be validated against?  If the validation is against another tool, then is the assumption made that the other tool is "correct"?

Another question to consider is, are the function and design of the tool itself understood?  With respect to RegRipper, if the test is to see if a certain value is extracted and it isn't, is the tool deemed a failure?  Did the person making the assessment ever check to see if there was a plugin to retrieve the value in question, or did they simply assume that any possible condition they established was accounted for by the tool?  The same thing would be true for tools such as Nessus...in a validation, are the properly constructed plugins available for the test?  RegRipper is open-source, and its functionality isn't necessarily limited by any arbitrary measure.

Why do we validate tools?  We should be validating our tools to ensure that not only do they return the expected results, but at the same time, we're validating that we understand what that expected result should be.  Let's say that you're looking at Windows 7 Jump Lists, and your tool-of-choice doesn't give the expected result; i.e., it chokes and spits out an error.  What do you do?  I know what I do, and when I hear from folks about tools (either ones I've written or ones I know about) that have coughed up an error or not returned an expected result, I often find myself asking the same round of questions over and over.  So, here's what I do:

1.  Based on my case notes, I attempt to replicate the issue.  Can I perform the same actions, and get the same results that I just saw?

2.  Attempt to contact the author of the tool.  As I have my case notes currently available (and since my case notes don't contain any sensitive data, I can take them home, so "...my case notes are in the office..." is never an excuse...), I know what the data was that I attempted to access, where/how that data was derived, which tool and version I was using, and any additional tools I may have used to perform the same task that may have given the same or similar results.  I can also specify to the author the process that I used...this is important because some tools require a particular process to be correctly employed in their use, and you can't simply use it the way you think it should be used.  An example of this is ripXP...in order to properly use the tool, you need to either mount the XP image via FTK Imager v3.0 in "file system/read-only" mode, or you have to extract all of the RPx subdirectories from the "_restore{GUID}" directory.  Doing it any other way, such as extracting the System hive files from each RP into the same directory (renaming each one) simply won't work, as the tool wasn't designed to address the situation in that manner.

3.  Many times, I'm familiar with what the format of the data should look like...in particular, Registry hive files, Jump Lists, etc.  Now, I do NOT expect every analyst to be intimately familiar with binary file formats.  However, as a professional analyst, I would hope that most of us would follow a troubleshooting process that doesn't simply start and end with posting to a list or forum.  At the very least, get up from your workbench or cubicle and get another analyst to look at what you're trying to do.  I've always said that I am not an expert and I don't know everything; even with a simple tool, I could be missing a critical step, so I'll ask someone.  In a lot of cases, it could be just that simple, and reaching out to a trusted resource to ask a question ends up solving the problem.  I once had a case where I was searching an image for CCNs, and got several hits "in" Registry hive files.  I then opened the suspect hive files in a Registry viewer and searched for those hits, but didn't find anything.  As I am somewhat familiar with the binary format of Registry keys and values, I was able to determine that the hits were actually in yet-to-be-allocated sections of the hives...the operating system had selected sectors from disk for use as the hive files grew in size, and those sectors had once been part of a file that contained the potential CCNs.  So, the sectors were "added" to the hive files, but hadn't been completely written to by the time the system was acquired.

So, the point is, when we're "validating" tools, what does that really mean?  I completely agree that tools need to be validated, but at what point is "validating" a buzzword, and at what point is it meaningful?

Saturday, August 20, 2011

Carbon Black

I was recently afforded an opportunity to download and install the Carbon Black (Cb) standalone server; for testing purposes, I installed it on a Windows 7 host system, and got it up and running right away. 

If you're not aware, Cb is a small sensor that you can load on a Windows system, and it monitors executables being launched.  When a new process is started, information about that process is collected by the sensor and sent to the server for correlation and presentation.  The server can be maintained by the Kyrus guys, or it can be maintained within your infrastructure.  If you choose to maintain the server yourself, that's fine...it just means that you're going to need to have someone work with the tool, get familiar with it, and monitor it.

Getting Cb set up is easy.  The process of generating a license and getting the necessary sensor from the guys at Kyrus was simple, straightforward, and very quick.  From there, I rolled out the first sensor to a Windows XP VM that I set up as a guest on the Windows 7 host system via VMPlayer.  Shortly after installing the sensor on the target system, I began seeing events populating the dashboard, just like what I saw in the demo I attended in July.  I even began doing things on the system that one might expect to see being done during the recon phase of an incident, such as running some native tools to start mapping the network just a bit.  My "infrastructure" is a bit limited, but I got to see the Cb functionality firsthand.

It's clear that a great deal of thought and effort has gone into creating Cb, as well as into structuring the interface, which is accessible via a browser.  I found that the interface and functionality are far more intuitive than those of other, similar tools I've seen, and in very short order, I was able to narrow down the information I wanted, based on my experience as an incident responder.

That's another thing I like about Cb...I can use the experience I've developed as an incident responder to get timely answers, and I don't have to learn someone else's framework or methodology.  For example, I look at the dashboard and see a "suspicious" process...in this case, I'd run "ipconfig /all" on the XP VM system...and all I have to do is click on an icon to see the parent process (in this case, cmd.exe).  I could also see loaded modules and files that had been modified.  I could even get a copy of the new executable that was 'seen' by the sensor.  Future enhancements to the sensor include monitoring of network connections and Registry keys.

All of this is very reminiscent of the demo...consider a "normal" incident involving a browser drive-by or some phishing attack against an employee.  Having Cb installed before an incident is the key, as it would allow a responder to very quickly navigate through the interface and once a suspicious process is located (based on a time hack, process name, or in the near future, network connection...), the responder can quickly identify the parent process or any child processes.  So why is this so special?  Well, most times for me, as a responder, I don't get called until after an incident occurs...which means processes have long since run and exited, and there may even have been efforts to clean up (i.e., files deleted, etc.).  However, with Cb, there would still be a history of process execution, copies of the EXEs, etc.  In the case of July's demo, there were three stages to the attack, the first two of which launched and exited once their work was complete.  Further, of the three EXEs, AV detection of each was spotty, at best.

Having something like Cb deployed in your environment before an incident occurs is really the best way to approach not just using tools like this, but incident response in general.  In addition, Cb has a range of other uses...just ask the Kyrus guys for their case studies and stories, particularly the one about the CIO who used Cb to save thousands of dollars on Office licenses, based on actual, hard data (Deming, anyone?).

Cb is also a valuable tool to deploy during an incident...it's like sending out a bunch of lightweight little LP/OPs (listening or observation posts) to cast a wide net for data collection and correlation.  Under normal conditions, responders need to start taking systems down or performing live acquisitions, and then start analyzing those acquisitions, along with log files, network captures, etc., in order to begin scoping the incident and identify other systems that need to be acquired and analyzed.  Deploying Cb could (depending upon the type of incident) provide more information in a quicker manner, and require fewer resources (i.e., fewer analysts/responders to send on-site, pay for travel, lodging, etc.), and reduce the overall impact on your infrastructure.  Deploying Cb in conjunction with F-Response would REALLY up your game to a whole new level.

Carbon Black is a tool that should be in your arsenal, as it changes the dynamics of incident response, in favor of the targets and responders, and takes away a lot of the advantages currently enjoyed by intruders.  If you have an infrastructure of any size, you should be calling the Kyrus guys.  It's not just large infrastructures that are being targeted...don't believe me?  Spend a rainy Sunday reading through Brian Krebs' blog, and then, if you can stop crying, look me straight in the eye and tell me sincerely that you're below the bad guy's radar.

In my experience as a responder, when a call came in, we'd start collecting information, not only about the incident, but we'd also have to get information from the customer that we'd use to populate the contract, so we'd have a couple of things going on in parallel.  Even so, it would still take us some time to get on-site (6 hrs, sometimes out to 3 or more days...), and then we'd have to start collecting data.  Even though we'd asked for network device and firewall logs to be collected, in some cases, it was only as a result of the incident itself that the customer found out that, "hey, wait a minute, what do you mean we aren't logging on the firewall (or web server)?"  I've had only one instance where I showed up and some data had actually been collected...in every other instance, when an incident occurred, the customer was completely unprepared and we didn't start collecting data until someone got on-site. 

Think about that, as well as any other incidents you may have encountered, for just a moment.  Now, imagine what it would be like if the customer who called you already had a contract and a relationship in place, and you'd helped them install Cb.  With the contract already set up, that's one thing you don't have to deal with...and with Cb rolled out, data is already being collected.  So, while they're on the phone, you can begin to assist them, or you could VPN into their infrastructure and access the Cb server yourself.  If a team needs to be deployed to assist, then you're already collecting information (if you have F-Response rolled out or the local IT staff has the training, memory and images may also be collected) even before the responders are able to get airline tickets!

Cb is like Burger King for IR...have it your way!

Wednesday, August 17, 2011

Jump List Analysis

Every now and again, I see questions about Windows forensic analysis such as "what's new/different in Windows 7?"  There are a number of things that are different about Windows 7, some of which may significantly impact how analysts approach an examination involving Windows systems.  While there are some aspects of Windows systems that are just different (Windows Event Logs, Registry, etc.), there are some things that are new technologies.

One of those new technologies is Jump Lists.  Windows 7 Jump Lists (see the "Jump Lists" section of this post) are a new and interesting artifact of system usage that may have some significant value during forensic analysis where user activities are of interest.  Jump Lists consist primarily of two file types; the first is the *.automaticDestinations-ms (autodest) files, which are created by the operating system when the user performs certain actions, such as opening files, using the Remote Desktop Connection tool, etc.  The specific Jump Lists produced appear to be associated through file extension analysis...if a user double-clicks a text file on one system, it may open in Notepad, whereas on another system it may open in another editor (I like UltraEdit).  The contents of these Jump Lists appear in application context menus on the TaskBar, as well as on the Start Menu.  According to Troy Larson, senior forensic dude for MS, these files follow the OLE/compound document format, with individual numbered streams following the LNK file format.  The autodest files also contain a DestList stream, which, according to research performed by Jimmy Weg, appears to be an MRU list of sorts.

There are tools available to view the contents of the autodest files.  For example, you can use MiTeC's SSViewer to open the files and see the various streams.  From here, you would then need to save the numbered streams and use an LNK file viewer to see the contents of the streams.  There's also Mark Woan's JumpLister, which allows you to view the contents of the numbered streams right there in the tool, automatically parsing the LNK formats.  Chris Brown also added this capability to ProDiscover, including a Jump List Viewer in the tool that parses the contents of the numbered streams.

There are also custom Jump Lists, the *.customDestinations-ms (customdest) files, which are created when a user "pins" a file to an application, such as via the TaskBar.  Per Troy, these files appear to consist of stacked segments (not in an OLE container) that follow the LNK file format.

The names of both types of files start with a series of hex characters that make up the application identifier, or AppID.  This is an identifier that refers to the specific application that the user was using.  While I've found some short lists of AppID references, I haven't yet found a comprehensive list.  Most of what I have found refers to "fixing" Jump Lists by deleting the appropriate files and starting over.

Addendum: Mark McKinnon recently updated the ForensicsWiki page for Jump List IDs.

In an effort to develop a better understanding of the autodest files, I began digging into the Jump List file structure, and wrote some Perl code that parses the *.automaticDestinations-ms (autodest) Jump List files on a binary level.  This parsing capability consists of two Perl modules; the first parses the autodest Jump List files (maintained in MS OLE/Compound File format) and the DestList stream within those files.  The second module parses the numbered streams, which are maintained in the Windows shortcut/LNK file format.  By combining these two modules, I'm able to parse the autodest Jump List files, correlate the DestList stream entries to the numbered streams, and present the available information in any format (TLN, CSV, XML, etc.) I choose.

So far, this is the only tool that I'm aware of that parses the DestList streams.  I had done some research into the format, and it appears that I was able to figure out at least part of the structure of these streams.  I've also found that various applications maintain different information within the contents of the streams...some maintain file names, others maintain string identifiers that appear to be used much like a GUID.  One thing of interest, and perhaps of significant value to an analyst, is that there's a FILETIME object embedded within each structure, and based on Jimmy Weg's research and input, this appears to be an MRU time.  Each individual structure within the DestList stream has a number that is associated with a numbered stream, so the information can easily be correlated to develop a complete picture of what the Jump List contains.
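
To illustrate the correlation step (with purely hypothetical data standing in for parsed results), the numbered streams appear to be named using the hexadecimal form of the DestList entry number, so tying the two together and sorting on the MRU time is straightforward:

use strict;

# Hypothetical example data: %destlist maps DestList entry numbers to their
# embedded MRU FILETIMEs (already converted to Unix epoch), and %streams maps
# the numbered streams to the path info parsed from their LNK contents
my %destlist = (10 => 1314359573,
                3  => 1314358990);
my %streams  = ("a" => "C:\\Windows\\System32\\mstsc.exe /v:\"10.1.1.23\"",
                "3" => "C:\\Users\\user\\Desktop\\notes.txt");

# Sort the DestList entries in MRU order (most recent first); the stream
# name appears to be the hex form of the entry number
foreach my $num (sort {$destlist{$b} <=> $destlist{$a}} keys %destlist) {
  my $stream = sprintf("%x", $num);
  next unless (exists $streams{$stream});
  print scalar(gmtime($destlist{$num}))."  ".$streams{$stream}."\n";
}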

Here's an interesting example of how the information in Jump Lists can be useful; when a user uses the Remote Desktop Connection tool, the "1bc392b8e104a00e.automaticDestinations-ms" Jump List file is created.  The DestList stream of the Jump List file contains the "MRU Time" for each connection, as well as an identifier string.  However, we can correlate each DestList entry to the corresponding numbered stream within the Jump List file, which is itself maintained in the Windows shortcut/LNK file format; as such, we can extract information such as the basename and command line (if it exists) of the shortcut.  If we combine the two, this would appear as:

C:\Windows\System32\mstsc.exe /v:"10.1.1.23"

The information that is available depends upon how the connection was made; for example, rather than an IP address, the command line element of the LNK stream may contain a system name.  However, what we do have is an action associated with a specific user, that occurred at a specific time.  As this is a Windows 7 system, we may also be able to find additional, historic MRU data in Jump Lists accessed via Volume Shadow Copies.

The code is Perl-based and doesn't use any proprietary or platform-specific modules; while it does make heavy use of seek(), read(), substr(), and unpack(), all of these functions are available in all versions of Perl.  Ideally, this code should run on Windows, Linux, and Mac systems equally well (I don't have a Mac to use for testing).

I opted to create Perl modules for this capability because it is a much more flexible method that allows me to incorporate the functionality into other tools.  For example, I can incorporate the modules into a Perl script (which I have done) that will parse through either individual autodest Jump List files or all such files found in a directory, and list the information they contain in any manner that I choose.  Or, I can write a ProDiscover ProScript.  Or, I can (and will) include this in my forensic scanner.  Or, to paraphrase Beyonce, "If you like it then you better put a GUI on it!"

Output formats are also a matter of personal choice now.  I'm focusing on TLN and CSV formats for the time being, but there's nothing that restricts me to these formats; XML is a possibility (I simply don't have a style sheet format in mind, so I may not pursue this output format).

Issues
Jump Lists are fairly new...although Windows 7 has been out for a while now, I haven't seen a great deal of discussion or questions in public forums or lists looking for more information about these artifacts.  However, some issues have already come up.  For example, I was contacted recently by someone who indicated that one of the available tools for parsing Jump Lists "didn't work".  Initial correspondence indicated that at least one Jump List may have been recovered from unallocated space, but it turned out that the three "problem" Jump Lists were from a live acquisition image, and the applications in question could have been open on the desktop during the acquisition.

This presents an interesting and valid issue...how do you deal with Jump Lists from live acquisition images, where the apps were open during the acquisition (live acquisition may be required for a number of reasons, such as whole disk encryption, etc.)?  Or, what about Jump Lists carved from unallocated space?

The answer is that you need to understand the binary format of the Jump Lists (or know someone who does), because that's really the only way to resolve these issues.  When a tool "doesn't work", you need to either have the understanding of the formats to troubleshoot the issue yourself, or go to the tool author for assistance, or go to another resource for that assistance.  If you're squeamish about sharing information about the issue, or the "problem" Jump List file, even with confidentiality agreements in place, then you're really limiting yourself, and by extension, your analysis.  However, this applies to every facet of an examination (Registry, Event Log, USB device analysis, etc.), not just Jump Lists.  So, the answer is to develop the capability internally, or develop trusted resources that you can reach to for assistance.

Summary
From an analyst's perspective, Jump Lists are a new technology and artifact that need to be better understood.  However, at this point, we have considerable information that clearly indicates that these artifacts have value and should be parsed, and the embedded information included in timelines for analysis.  In many ways, Jump Lists contain analytic attributes similar to the Registry and also to Prefetch files, and are tied to specific user actions.  Further research is required, but it appears at this point that Jump Lists also represent a persistent artifact that remains after files and applications are deleted.  In one test, I installed iTunes 10 on my system, and listened to two CyberSpeak podcasts via iTunes.  The Jump Lists persisted even after I removed the application from my system.

Resources
Code Project: Jump Lists
AppID list 1
ForensicsWiki Jump Lists page

Sunday, August 14, 2011

Updates and Links

ECSAP
I had a great time speaking on timeline analysis at an event last week...it was a great opportunity to get out in front of some folks and talk about this very important and valuable analysis technique.  My impression was that most of the folks in the room hadn't really done this sort of analysis beyond perhaps entering some interpreted times and data into a spreadsheet.

One take-away for me from the conference speaking is that people like to get free stuff.  In this case, I had one DiskLabs Find Evidence keyboard key left, and as I tend to do with conferences where I speak, I also gave away copies of my books...I gave away one copy of DFwOST and one of WRF (both of which were signed).  I hope that the continuing promises of free stuff kept folks coming back into the room from breaks... 

Along those lines, something I would offer back to conference attendees is that speakers are people just like you, and they like to get free stuff, too...in particular, feedback.  Did what they say make sense?  Was the presentation material something that you feel you can use?  A simple "yes" doesn't really constitute feedback.  Some of us (not just me) have also written books or tools, which we may refer to...and getting feedback on those is always a bonus.  But again..."cool" isn't really "feedback".

I can't speak for every presenter, but I value honest, considered feedback, even if it's negative, over a positive albeit empty statement.  If what I talked about simply isn't useful, please...let me know.  If it's too easy or too hard to understand...let me know.  I think that most folks who present would welcome some honest feedback on what they covered.

Investigation Plans
Chris posted recently on the need to develop an investigation plan prior to doing any analysis.  He even outlines an exercise to clarify that plan, and to keep it firmly planted in your mind while you conduct your analysis.  I tend to do something very similar...I copy what I'm supposed to do (the goals) from the statement of work or contract to the top of the MSWord document I use for case notes, usually right below my description of the exhibits I've received.  From there, I also write a description of my initial approach to the analysis...keyword searches I may want to run, as well as anything of note that I may have available, such as a time frame to work with (i.e., "online banking fraud was found to have begun on 20 March, so begin timeline analysis of the system prior to that date...").

Analysis plans are not set in stone...they are not rigid scripts that you need to follow lock-step, beginning to end.  We all know that no plan survives first contact...the idea of an analysis plan is to get us started and keep us focused on the end goal, what we hope to achieve and what question(s) we need to answer for our customer.  Too many times, we won't have a plan and we'll find something "interesting" and begin running ourselves down that rabbit hole, and by the time we take a breath to look around, the original goals of the exam are nowhere in sight, but we've consumed considerable time getting there.

Timelines
Having presented recently on the topic of timelines, and working on some code to more fully exploit Windows 7 Jump Lists as forensic resources, the creation and use of timelines have been on my mind a lot recently.  I prepared and delivered a 2-day, hands-on course in timelines in June, and recently (ECSAP) presented a 2-hr compressed version of the class...which really doesn't do the subject justice (next time I'll push for at least a 4 hr slot).  One of the things I've been thinking about is how useful timelines can be from both an investigative and an analytic perspective, and how a timeline can be used to answer a wide range of questions.

One example involves artifacts from the use of USB devices on Windows systems; I've seen a number of questions in forums and lists in which the original poster (OP) identifies an anomaly that is either interesting, or needs to be explained as part of the examination...something appears odd with respect to the observed artifacts.  Often the question is, "what could have caused this?", and the answer may be found by developing a timeline of system activity, and identifying the events and context surrounding the observed artifacts.
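For folks who haven't worked with timelines before, here's a minimal sketch (in Perl) of the core merge-and-sort idea; it assumes your various tools have already normalized their output into the five-field, pipe-delimited TLN format (time|source|system|user|description, with the time value as a 32-bit Unix epoch), and that pipes appear only as field separators:

#!/usr/bin/perl
# Minimal sketch: merge per-source event files into a single timeline.
# Assumes each input file contains pipe-delimited TLN records:
#   time|source|system|user|description  (time = 32-bit Unix epoch)
use strict;
use warnings;

my @events;
foreach my $file (@ARGV) {
    open(my $fh, '<', $file) or die "Cannot open $file: $!";
    while (my $line = <$fh>) {
        chomp $line;
        next unless $line =~ /^\d+\|/;   # skip anything that isn't a TLN record
        push @events, $line;
    }
    close($fh);
}

# Sort all events by the epoch value in the first field, print the merged timeline
foreach my $evt (sort { (split(/\|/, $a))[0] <=> (split(/\|/, $b))[0] } @events) {
    my ($time, $source, $system, $user, $desc) = split(/\|/, $evt, 5);
    print scalar(gmtime($time)) . " Z  [$source] $system/$user - $desc\n";
}

The real value comes from the breadth of the sources you feed into the merge...file system metadata, Event Log records, Registry key LastWrite times, Prefetch file metadata, etc.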

Malware
Here's a great write-up on the Malware FreakShow 3 presentation provided by two TrustWave SpiderLabs researchers at DefCon19.  The presentation addresses malware found on Windows-based point-of-sale ("POS"...take that any way you like...) devices.

Tools
Need to find Facebook artifacts?  Take a look over on the TrustedSignal blog...there's a post indicating that a Python script has been updated and is available.  You never know when you're going to need something like this...

Resources and Links
Ken's (re)view of GFIRST...what I really like about this post is the amount of perspective Ken encapsulates, giving his views and insights on what he saw and experienced.  Too many times in the community, when someone talks about an event they attended or a book they read, the review is very mechanical..."the speaker talked about..." or "the book contains eight chapters; chapter 1 covers...".  I find this odd, in a way, because when I talk to folks about what they want to see in a presentation or book, very often what they're looking for is the author's insights...so, in a way, this goes back to what I was saying about feedback earlier in this post.

Here's an interesting post regarding not just a trick used by malware to confuse a potential victim, but also the use of MoonSols DumpIt and the Volatility Framework.

The DFS guys posted their materials from GFIRST and OMFW...thanks to Andrew and Golden for doing that.  There are a couple of great slide decks available; if you want to see how they investigated a data exfil incident (speaking of analysis plans), take a look at their slide pack from GFIRST...it's like reading their case notes from an exam.  You'll have to excuse a misspelling or two (slide 19 refers to the "setup.api" log file; "setupapi.log" is spelled correctly on slide 40), but for the most part, their examination of the USB history, et al, from an XP system is a very good view into an actual investigation, and well worth writing into a process or checklist.

A couple of thoughts from the presentation:
slide 38 - instead of writing a "wrapper script" to import the information into Excel, it might be easier to modify the usbstor.pl plugin to emit CSV directly (use of a "wrapper script" is mentioned again in slide 79); see the sketch following this list
slide 40 - the LastWrite times of the USBStor subkeys should not be used to determine the last time the devices were plugged into the system; this point also applies to the USB analysis process illustrated in slide 43
slide 90 - path should read HKLM\System\CurrentControlSet\Services\lanmanserver\Shares
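Regarding the slide 38 comment, here's a minimal sketch of what I mean...it uses Parse::Win32Registry (the module RegRipper is built on), assumes the standard USBStor key layout, and hard-codes ControlSet001 for brevity (a real plugin would read the Select key to determine the current ControlSet):

#!/usr/bin/perl
# Minimal sketch of the idea from slide 38: emit USBStor data as CSV
# directly, rather than post-processing RegRipper output with a
# wrapper script.  ControlSet001 is assumed here for simplicity.
use strict;
use warnings;
use Parse::Win32Registry;

my $hive = shift or die "Usage: $0 <SYSTEM hive>\n";
my $reg  = Parse::Win32Registry->new($hive) or die "Cannot parse $hive\n";
my $root = $reg->get_root_key();

my $usbstor = $root->get_subkey('ControlSet001\Enum\USBStor')
    or die "USBStor key not found\n";

print "device_class,unique_id,friendly_name,key_lastwrite_utc\n";
foreach my $dev ($usbstor->get_list_of_subkeys()) {
    foreach my $inst ($dev->get_list_of_subkeys()) {
        my $friendly = '';
        if (my $v = $inst->get_value('FriendlyName')) {
            $friendly = $v->get_data();
        }
        printf "%s,%s,\"%s\",%s\n",
            $dev->get_name(), $inst->get_name(), $friendly,
            scalar(gmtime($inst->get_timestamp()));
    }
}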

Speaking of OMFW, gleeda posted to her blog with links to her own slides, as well as those from MHL and Moyix.

DFwOST
Richard Bejtlich posted his impressions (not a full-on review) of DFwOST.  Thanks for the vote of confidence on a second edition, Richard.  I also agree with what he mentioned with respect to the images; as with WRF, there's not a lot the authors can do about what the publisher or printer does with them.

Monday, August 08, 2011

Links and Updates

Working Remotely
Thanks to a tweet from Richard Bejtlich, I ran across this very interesting post titled "Working Remotely".  The post makes a great deal of sense to me, as I joined ISS (now part of IBM) in Feb 2006, and that's how we rolled at the time.  My boss lived about 2 miles from me, and there was an office in Herndon (with a REALLY great, albeit unused, classroom facility), but we had team members all over...Atlanta, Kansas City, Norfolk, and then as we expanded, Chicago, Corpus Christi, and Tulsa.  We lived near airports (our job was to fly out to perform emergency incident response), and FedEx (or insert your favorite shipping vendor) rounded out our "offices".

Even when we weren't flying, many of us were constantly in touch...so much so that when one person needed assistance with an engagement, it was easy for us to provide whatever support was needed.  Encryption made it very easy to send data off for analysis, whether for someone to provide insight or to write a script against a much wider sample of data.  Imagine being on an engagement and needing support...you send someone a sample of data, and when you wake up, there's a parsing tool in your inbox.

Something the article points out is that it takes a certain kind of person to work remotely, and that's very true...but when you find those people, you need to do everything you can to not just keep them, but grow them.  The article also points out that if you want the best of the best, don't restrict yourself to your local area, or to those who are willing to relocate.  And in today's age, remote communication is relatively easy...if you don't want to bring everyone together once a year (more or less) due to the cost of gas and air fare, Skype and a $20 web cam can do a LOT!

Jump Lists
Jimmy Weg has done some testing of Windows 7 Jump Lists (and shared his findings on the Win4n6 group list), and found (thus far) that the DestList stream structure within the Automatic Destination (autodest) Jump List does appear to be an MRU of sorts.  In his testing using Notepad to open text files, the FILETIME object written to the structure for each file correlated to when he opened the files.

When testing Windows Media Player, Jimmy found that there were no MRU entries for the application in the user's Registry hive, nor were any Windows shortcut/LNK files created in the user's Recent folder.  Jimmy also found that applications such as OpenOffice (AppID: 1b4dd67f29cb1962) created Jump Lists.

Jimmy mentions Mark Woan's JumpLister application in his post for viewing the numbered stream information found within the autodest Jump Lists; this is a very good tool, as is the MiTeC Structured Storage Viewer, although SSView doesn't parse the contents of each stream.  I like to use SSView at this point, although I have written Perl code that parses the "autodest" Jump List files (those ending in "*.automaticDestinations-ms"); each file is based on the MS OLE format, and each numbered stream within it follows the LNK file format.  I have also written code for parsing the DestList stream structure, and thanks to Jimmy's testing, the validity and usefulness of that code is beginning to come to light.  My hope is that by having shared what I've found with respect to the DestList structure thus far, others will continue the research, identify other structure elements that can be of value to an analyst, and share that information.  I've also run into some deprecation issues with Perl 5.12 and some of the current Perl modules that handle parsing OLE documents; as such, I've taken a look at the MS documentation on the compound document binary specification, and I'm working on a platform-independent Jump List parser.
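To give you an idea of what's involved, here's a minimal sketch that simply enumerates the streams within an autodest Jump List file via the OLE::Storage_Lite module; parsing each numbered stream as a LNK structure, and parsing the DestList stream itself, are the next steps:

#!/usr/bin/perl
# Minimal sketch: list the streams in an *.automaticDestinations-ms
# file.  Assumes OLE::Storage_Lite is installed; enumerates the hex-
# numbered (LNK-format) streams and the DestList stream, but does
# not parse their contents.
use strict;
use warnings;
use OLE::Storage_Lite;

my $file = shift or die "Usage: $0 <file.automaticDestinations-ms>\n";
my $ole  = OLE::Storage_Lite->new($file);
my $root = $ole->getPpsTree(1);    # 1 => load each stream's data, too
defined $root or die "$file does not appear to be an OLE compound document\n";

foreach my $pps (@{ $root->{Child} }) {
    next unless $pps->{Type} == 2;    # 2 => stream; 1 => storage
    my $name = OLE::Storage_Lite::Ucs2Asc($pps->{Name});
    printf "%-12s %6d bytes\n", $name, length($pps->{Data});
}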

Troy Larson, senior forensic analyst at Microsoft, added that the DestList stream entries are either an MRU or MFU (most frequently used) list, depending upon the application, and that the order of activities in the DestList stream is reflected when you right-click on an application pinned to the TaskBar.  The order of items in the DestList stream is apparently determined by how recently or frequently the activity (document opened, etc.) is performed.  Troy went on to mention that as of Windows 7, other methods of tracking files have been deprecated in favor of the API used to create Jump Lists.

CyberSpeak
Ovie's posted a new CyberSpeak podcast, this one addressing the launch of CDFS, which I mentioned in my last blog post.  Ovie interviews Det. Cindy Murphy, who's been a member of LE since 1985, and who invited me to WACCI last year.  If you have any questions about this organization, or simply want to learn more about CDFS, I'd recommend that you download the podcast and give it a listen.

Ovie, it's good to have you back, my friend.

Hostile Forensics
Mark Lachniet has released a whitepaper through the SANS Forensics blog site titled "Hostile Forensics".  This is the name given to "penetration-based forensics", in which the analyst uses penetration testing techniques to gain access to a computer system, and then exploits that access through forensic analysis techniques.

The PDF whitepaper, currently in version 1.0, is available online here.  The paper is 43 pages long, but if this is something you're interested in, it's well worth the time it takes to read.  Mark lays out the structure of his proposal, which he states is the result of a "thought experiment".

Tools

It looks as if x0ner has released PDF X-RAY, an API for static analysis of PDF documents for malicious code.


On a similar note, Cuckoo is a freely available, VirtualBox-based sandbox for analyzing malware, including malicious PDF files; Cuckoo has its own web site, as well.  If you're performing malware analysis, this may be something you'd like to take a look at, along with Yara.  These are all great examples of the use of open-source and free tools for solving problems.

Friday, August 05, 2011

Friday Updates

Meetup
This past Wed was a great NoVA Forensics Meetup, thanks to Sam Pena's efforts.  Sam pulled together some information about the background and exploits of LulzSec and Anonymous, and then put forth some great questions for discussion.  After the background material slides, we moved the chairs into a circle and carried on from there!  A great big thanks to Sam for stepping up and giving the presentation, and to everyone who attended.  Also, thanks to ReverseSpace and Richard Harman for hosting.

Next month's meeting will feature a presentation on botnets from Mitch Harris, and I've already received two offers for presentations on mobile devices, so stay tuned!

For those interested in attending, here's the FAQ:
- Anyone can attend...you don't need to be part of an organization or anything like that
- There are no fees
- We meet the first Wed of each month, starting at 7pm, at the ReverseSpace location; if you need more information, please see the NoVA Forensics Meetup page off of this blog

CDFS
"CDFS" stands for the "consortium of digital forensics specialists", and is a group dedicated to serving the DF community and providing leadership to guide the future of the profession.  Find out more about the focus and goals of the group by checking out the FAQ.  Also, see Eric Huber's commentary, as well (Eric's on the board).

Eric went on to describe the organization recently on G+:
CDFS isn't another organization offering certification, training, conferences and the like. It's an attempt by the various organizations and individuals to essentially act as a trade organization for the industry.

If you're like me and looking around the site, you're probably wondering...okay, I can become a member for $75 (for an individual) a year, but what does that get me?  Well, apparently, there are efforts afoot to yoke our profession with licensing...I say "yoke" because it sounds as if the licensing is being pursued without a great deal of involvement from our community, sort of like "taxation without representation".  I'm sure that, like 99.9999% of the community, I have no idea what's going on in that regard, but as I think about it, I do think that I'd like to have a vote in how that goes.  I'm not sure that I want to sit back, wait for someone else to make that decision for me, and then follow along (or not) with whatever licensing requirements are put in place, however arbitrarily.

If you're curious about how you can be involved as a member, I'm sure that the Objectives page offers some insight as to where efforts will likely be directed.

OMFW
The 2011 OMFW was held recently, ahead of the DFRWS conference in New Orleans.  I had the great fortune of attending the original OMFW in 2008, and from what I hear, this one was just as good, if not better.  OMFW brings the leaders in memory analysis together in one place.  I can't speak to the format of this year's workshop, but if it was anything like the one in 2008, I'm sure it was fast-paced and full of great information.

Speaking of information, MHL's presentation materials (and Prezi) can be accessed here (ignore the publication date of the blog post), and Moyix's presentation can be found here.

Gleeda has graciously made her slides available, as well...she covered timelines, the Registry, Volatility and memory analysis all in one presentation!  What's not to love about that!

Let's not forget that Volatility 2.0 is now available (and Rob has added it to the recently updated SIFT appliance).

Tools
Ever been looking for malware in an image, only to find Symantec AV logs indicating that the malware had been detected and quarantined?  Well, check out the Security Braindump blog post on carving the Symantec VBN files.  Based on what BugBear has provided in the post, it should be pretty straightforward for anyone with a modicum of coding skill to write a decoder for this, if it's something that they need.
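As an illustration of just how simple such a decoder can be, here's a minimal sketch; the single-byte XOR key of 0x5A reportedly applies to older versions of the Symantec products (newer versions reportedly use 0xA5), and the MZ carving assumes a PE payload, so be sure to verify both against your own data:

#!/usr/bin/perl
# Minimal sketch of a VBN decoder along the lines described in the
# Security Braindump post: XOR the file contents, then carve from
# the first MZ signature.  Verify the XOR key against your own data.
use strict;
use warnings;

my ($file, $key) = @ARGV;
die "Usage: $0 <file.vbn> [xor_key]\n" unless $file;
$key = defined $key ? hex($key) : 0x5A;

open(my $fh, '<:raw', $file) or die "Cannot open $file: $!";
my $data = do { local $/; <$fh> };
close($fh);

# XOR every byte in the file with the single-byte key
$data ^= chr($key) x length($data);

# Carve from the first MZ signature to end-of-file
if ($data =~ /MZ/g) {
    my $ofs = pos($data) - 2;
    open(my $out, '>:raw', "$file.decoded") or die "Cannot write: $!";
    print $out substr($data, $ofs);
    close($out);
    print "Wrote " . (length($data) - $ofs) . " bytes from offset $ofs to $file.decoded\n";
}
else {
    printf "No MZ signature found after XOR with key 0x%02X\n", $key;
}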

If you do any work at all with network traffic captures (e.g., capturing data, analyzing that data, analyzing data captured by others, etc.), then you must be sure to look at NetworkMiner.  Along with Wireshark, this is a very valuable (and free) addition to your network traffic analysis arsenal.

PFIC
I've mentioned before that I'll be speaking at PFIC 2011, along with Chad Tilbury.  It turns out that not only will I be speaking, I'll also be giving a lab.  My talk will be on "Scanning for Low-hanging Fruit during an Investigation", and my lab will be "Intro To Windows Forensics", which will be geared toward first responders.  I'm really looking forward to this opportunity to engage with other practitioners from across the DFIR spectrum...I had a great time at PFIC last year, and had a great dinner one night thanks to Chad.

Timelines
I'm sure that at some point during the conference, the topic of timelines will come up (BTW...I'm doing a lecture/demo next week on timelines).  I think that understanding the "why" and "how" of creating timelines is very important for any analyst or examiner, in part because I have seen a number of exams where malware on the system had taken steps to avoid detection and foil the responder's investigation.  For example, file names and Registry keys are created with random names, file MAC times ($STANDARD_INFORMATION attribute in the MFT) are "stomped", and there are even indications that the malware attempted to "clean up" its activity by deleting files.  In most cases, on-board AV never detected the infection, although in a few instances, the AV alerted on files being executed from a temp directory (there was only a detection event; no action was taken) rather than detecting the malware based on a file signature.  In all cases, the AV was up-to-date at the time of infection, although MRT wasn't.  Often, the malware itself isn't detected when the analyst mounts and scans the image; rather, a secondary or tertiary file is detected instead.

In every case, a timeline allowed the analyst to "see" a number of related events grouped together, and based on the types of events, evaluate the relative level of confidence in and context of that data, and determine what was missing.  For example, finding a Prefetch file for an executable, or a reference to an oddly-named file in a Registry autostart location, often leads the analyst to ask, "what's missing?"...and to go looking for it.
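As a simple illustration of the kinds of checks that support this analysis, consider comparing the $STANDARD_INFORMATION and $FILE_NAME creation times for a given MFT record...the $SI times are trivially modified via the Windows API, while the $FN times generally are not.  Here's a minimal sketch (the file name and epoch values below are hypothetical; in practice, they'd come from your MFT parser):

#!/usr/bin/perl
# Minimal sketch of one "stomping" check: if the $STANDARD_INFORMATION
# creation time predates the $FILE_NAME creation time for the same MFT
# record, the $SI times may have been altered.
use strict;
use warnings;

sub check_stomp {
    my ($si_crtime, $fn_crtime, $name) = @_;
    if ($si_crtime < $fn_crtime) {
        printf "%s: possible timestomping (\$SI born %s, \$FN born %s)\n",
            $name, scalar(gmtime($si_crtime)), scalar(gmtime($fn_crtime));
    }
}

# Hypothetical example: $SI claims 2004, $FN says 2011
check_stomp(1081337567, 1312567289, 'C:\WINDOWS\system32\bad.dll');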

Tuesday, August 02, 2011

Updates and Links

Meetup
Just a reminder to everyone who wasn't able to make it to any of the big conferences going on in New Orleans or Las Vegas this week (or who returned in time)...the NoVA Forensics Meetup for Aug 2011 will be Wed, 3 Aug, starting at 7pm.

Be sure to check out the NoVA Forensics Meetup page to see what's going on.

Remember, anyone can come, and you don't need to be part of a group or anything.  There are no fees or anything like that.

All Things Open Source
Sergio Hernando posted some Perl code for performing Chrome forensics, specifically processing the history file via Perl.  For me, it's not so much that Sergio wrote this in Perl, because I can follow instructions and get Python or whatever else installed...no, what I like about this is that Sergio not only took the time to explain what he was doing, but demonstrated it through an open-source mechanism.

I really like solutions to DFIR problems that use free or open-source tools, because in most cases, they don't add so many layers of abstraction that, ultimately, all you really know about what went on is, "I pushed a button."  Solutions such as what Sergio has provided give us more than that abstract view into what was done...in this case, it's more along the lines of "...I accessed this SQLite database because it contained this information, and this is what was found/determined, in the context of this other data over here...".

The script can be found at Sergio's Google Code site.
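To see just how thin that layer of abstraction really is, here's a minimal sketch along the same lines; it assumes the DBI and DBD::SQLite Perl modules are installed, the standard "urls" table layout, and that you're working with a copy of the Chrome History file (Chrome stores times as microseconds since 1 Jan 1601, hence the conversion):

#!/usr/bin/perl
# Minimal sketch: pull URLs and last-visited times from a copy of
# Chrome's SQLite History file, converting the WebKit timestamps
# (microseconds since 1601) to Unix epoch values.
use strict;
use warnings;
use DBI;

my $db  = shift or die "Usage: $0 <Chrome History file>\n";
my $dbh = DBI->connect("dbi:SQLite:dbname=$db", '', '', { RaiseError => 1 });

my $sth = $dbh->prepare(
    'SELECT url, visit_count, last_visit_time FROM urls ORDER BY last_visit_time'
);
$sth->execute();

while (my ($url, $count, $lvt) = $sth->fetchrow_array()) {
    # 11644473600 = seconds between 1 Jan 1601 and 1 Jan 1970
    my $epoch = int($lvt / 1_000_000) - 11644473600;
    printf "%s Z  (%d visits)  %s\n", scalar(gmtime($epoch)), $count, $url;
}
$dbh->disconnect();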

Also, be sure to take a look at Sergio's blog post on using Perl to parse the Firefox Download Manager database.

Techniques
For those of you who weren't able to make it to any of the conferences taking place around this time of the year (OMFW/DFRWS, BlackHat, etc.), a look across the landscape of presentations shows some very interesting topics and titles.  While actually being at a conference affords you the opportunity to experience the flavor of the moment, and to mingle with others in the community, many of the conferences do provide copies of the presentations afterward, and there's always supporting information available from additional sources.

For example, take this presentation on document exploitation attacks...it sounds very interesting.  However, there's other information available, as well...for example, take a look at this post from the Cisco Security blog; I found it to be a very interesting open-source solution for extracting EXEs from (in this case, MS Word) documents.  Let's also not forget that Didier Stevens has done considerable work on detecting and extracting suspicious elements from PDF documents.
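As a quick illustration of the triage end of that work, here's a minimal sketch in the spirit of Didier's pdfid tool; it simply counts a few of the PDF name keywords commonly associated with active content (keep in mind that names can be obfuscated within real samples, so a count of zero proves nothing):

#!/usr/bin/perl
# Minimal sketch: count a few PDF keywords commonly associated with
# active content.  Triage only; keywords can be obfuscated in real
# samples, so follow up with a real parser.
use strict;
use warnings;

my $file = shift or die "Usage: $0 <file.pdf>\n";
open(my $fh, '<:raw', $file) or die "Cannot open $file: $!";
my $data = do { local $/; <$fh> };
close($fh);

foreach my $kw ('/JavaScript', '/JS', '/OpenAction', '/Launch', '/AA', '/EmbeddedFile') {
    my $count = () = $data =~ /\Q$kw\E/g;
    printf "%-15s %d\n", $kw, $count;
}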

RegRipper
Speaking of open source and techniques, Corey Harrell put together a great post on how he uses RegRipper to gather information about the operating system he's analyzing.  This is a great use of the tool, and another great example of how an analyst can use the tools that are available to get the job done.

Volatility
For those of you who may not have known, the Open Memory Forensic Workshop (OMFW) was held recently, just prior to DFRWS in New Orleans.  Perhaps one of the most exciting things to come out of the conference (for those of us who couldn't attend) is Volatility 2.0!  If you look under Downloads, there's a standalone Win32 executable available.

Volatility is one of the best of the open source projects out there.  Not only is the framework absolutely amazing, providing the capability to analyze Windows physical memory in ways that aren't available anywhere else, but it's also a shining example of how a small community of dedicated folks can come together and make this into the project that it is.  If you have any questions at all, start by checking out the Wiki, and if you do use this framework, consider contributing back to the project.
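If you haven't tried the framework yet, getting started is as simple as pointing it at a memory image...for example (the plugin and profile names below are current as of version 2.0; run the --info switch to see what's available in your installation):

python vol.py -f mem.dd imageinfo
python vol.py -f mem.dd --profile=Win7SP0x86 pslist
python vol.py -f mem.dd --profile=Win7SP0x86 dlllist -p 868

The imageinfo plugin suggests a profile for the image; from there, pslist and dlllist (the "-p 868" process ID shown is just an example) are a good way to start getting oriented.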