Tuesday, January 31, 2012

WFA 3/e

Okay, Windows Forensic Analysis 3/e is out and on its way to those who've already purchased it through Syngress, as well as to the Amazon distribution centers.

I've posted previously in this blog regarding WFA 3/e (here, and here...).  As I've written each successive book, I've tried (with the help of folks like Jenn Kolde) to improve my writing and approach to the books, not just in the content, but in how the content is presented.  Sometimes, there just isn't enough time to put all that I would want into the book, and other times, you don't become aware of something until after the manuscript for that chapter is sent to the printer.

That earlier post discusses the chapters and what they cover, so if you're wondering whether this is a book you'd be interested in or need, I hope that post (and this one) help convince you.  I'm linking it here again for those folks who want to know...well...what the chapters are and what they cover.  ;-)  Seriously...I think that this is an important point to address, because WFA 3/e does NOT replace the second edition.  In fact, WFA 3/e is a companion book to the second edition.  That's right...the way I wrote the third edition, you will want to have both editions (as well as Windows Registry Forensics) on your bookshelf.  The third edition doesn't re-address things that haven't changed since the second edition (for example, the PE file format), and it covers things that are specific to Windows 7, such as StickyNotes and Jump Lists.

There are a couple of minor changes.  For example, chapter 2 is now "Immediate Response" (rather than "Live Response"), and focuses on the need for organizations to organically develop some capability for immediate response.  This is necessary not only because it's mandated by many compliance and regulatory guidelines, but because it just makes sense...the sooner you can start collecting information and reacting, the better off you'll be during an incident.  Also, I was able to do some additional research and coding regarding Jump Lists after I finished writing, so I included some Jump List parsing code in the archive.

The companion tools for the book can be found here, in "wfa3e.zip".  I've put the tools together in the archive by chapter, and there isn't a lot of explanation.  This isn't because I'm lazy (even though I am, in fact, lazy...); rather, it's because the tools are discussed in the book.  Now, I know many folks are going to think, "hey, this is just a ploy to drive up the sales of your book!"  That may be a side effect of my decision, but the reason I didn't provide detailed explanations of the tools is simply that I've already spent a considerable amount of time and effort writing about them once, and I don't want to spend a lot of time doing it again.  So, there's nothing nefarious or mysterious or underhanded about it...and honestly, folks who write books like these don't make a lot of money, anyway.  And if I were in it for the money, why would I have retweeted Syngress's discount code to get the book for half off, and re-posted it to every social media site for which I have an account?

Also, there are tools mentioned in chapter 4 of the book that you won't find in that folder within the archive.  This is because the tools are also discussed, albeit in a different capacity, in chapter 7 of the book.  You will find the tools in the ch. 7 folder in the archive.

Finally, now that the book is done and the code is available, there are opportunities to continue to develop and expand on much of what's in the book.  There are RegRipper plugins to be written, and new things to develop with respect to timeline creation and analysis.

Saturday, January 28, 2012

WFA 3/e

While I was in Atlanta recently for the DoD CyberCrime Conference, I received an email telling me that my advance copy of WFA 3/e was available...even though this is my sixth book, I still get excited/nervous/anxious.  After all, you never know how the book is going to be received.  Anyway, I arrived home last night to find a FedEx box sitting on my desk. 

However, I received an email from someone I know who was at ShmooCon...come to find out, the books had been printed and several copies sent to ShmooCon.  So, apparently, I didn't really get an "advance copy".

I had been informed that the book would not be available until 7 Feb.  Either way, I'll be posting the code associated with the book this week, and will provide a link off of the Books page on this blog.  I haven't been holding the code back...I've been adding some things to the distribution.  I'll let everyone know when it's up.

Friday, January 27, 2012

Revisiting "Forensic Value"

I posted some thoughts a while back on defining "forensic value"...in that post, I pondered the question of who defines the "forensic value" of an artifact or finding.  The post received 15 comments relatively quickly, and there seems to be something of a consensus that the forensic value of data is in the eyes of the analyst.  One would suppose, then, that the forensic value of a particular artifact can be determined by viewing that artifact through the lens of the analyst's knowledge and experience.  In my previous post, I used terms like relative and subjective value, and I think that these descriptive terms derive not only from the goals of the examination, but also depend heavily on the knowledge and experience of the analyst.

I recently had the opportunity to review a paper written by someone who had done some pretty comprehensive testing and documented some interesting artifacts.  In that paper, the author mentioned certain artifacts that were available but not pertinent to the analysis being performed...the artifacts were provided for informational purposes, and might be of value to other analyses.  For the paper and the analysis being performed, I thought that this was a valid distinction to make, as it addressed why those artifacts were not discussed further (how they got there, their format, etc.) in the paper.  In short, the author had made a clear distinction as to the relative value of certain artifacts.

The relative value of an artifact often has a lot to do with the context of that artifact.  Consider an email address found within an image, perhaps using something like bulk_extractor or a keyword search.  The email address may already have an intrinsic value to you, possibly as a piece of intelligence.  From that point, any additional value of that artifact relies heavily on the context in which that artifact resides.  Was that artifact found in a file?  In a PST file?  If the email address was found in an email within a PST file, what is the context?  The value of the email address can vary greatly depending not only upon the goals of your examination, but also upon whether the address was found in the To:, From:, or CC: block of the email, or in the body of the email.  Depending upon the goals of your examination, the fact that the email address was found in the body of an email may be more important and have more relative value than if it were found in the To: block.

The absence of an artifact can also be of significant value, but again, that will depend heavily upon (a) the goals of the examination, and (b) the skills of the examiner.  For example, I was performing some Registry analysis a while back to determine which files a user account had been used to access on that system (the goals).  I noticed immediately that RegRipper reported that the "RecentDocs key was not found."  This was a very significant finding, and not something I had expected...and my experience told me that it was something worth exploring, particularly given the goals of my exam.  I quickly determined that a "scrubbing" tool had been run against the system, and determined when that tool had been installed and run, and by which user.  I was also able to recover a significant amount of deleted data from within the hive file itself.
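As an aside, checking for something like this doesn't require a full RegRipper run.  Here's a minimal sketch (not an actual RegRipper plugin, and the output wording is just illustrative) that uses the same Parse::Win32Registry Perl module that RegRipper is built on to report whether the RecentDocs key is present in a user's hive:

#!/usr/bin/perl
# recentdocs_check.pl - minimal sketch; reports whether the RecentDocs key
# exists in an NTUSER.DAT hive, using the Parse::Win32Registry module
use strict;
use warnings;
use Parse::Win32Registry;

my $hive = shift or die "Usage: $0 <NTUSER.DAT>\n";
my $reg  = Parse::Win32Registry->new($hive) or die "Cannot open $hive\n";
my $root = $reg->get_root_key() or die "Cannot get root key\n";

my $path = 'Software\\Microsoft\\Windows\\CurrentVersion\\Explorer\\RecentDocs';
if (my $key = $root->get_subkey($path)) {
    my @values = $key->get_list_of_values();
    printf "RecentDocs LastWrite time: %s UTC\n", scalar gmtime($key->get_timestamp());
    printf "%d values found\n", scalar @values;
}
else {
    # the absence of an artifact where you expect one is itself an artifact
    print "RecentDocs key was not found.\n";
}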

Now for the big question...so what?  How does this apply to...well...anything?  Well, consider my recent post on Timeline Analysis...for those analysts who want to "see everything", how much of that "everything" is actually relevant and of forensic value?

In order to address that, I think that we need to look at a couple of things...and I'd start with, what are the goals of your exam?

I've heard some folks say that during analysis, it's important to understand an attacker's methods, as well as their intentions.  I'm not so sure that's something I'd hang my hat on...mostly because the attacker isn't sitting next to me so that I can ask them questions and understand their intentions.  When performing analysis, all we see is the results of their actions...maybe only some of those results...and we often look at those results and artifacts through the lens of our own experiences in order to attempt to determine the intentions of others.  Further, when it comes to understanding the methods an attacker uses, that's an understanding that's most often developed by observing various artifacts from within the system...but if we're not familiar with the system that we're analyzing, and we're using some automated tool to pull out all of the "relevant" artifacts for us, are we observing all of the artifacts, or at least the right ones to give us a view into what the attacker is trying to do?

Consider this...given an image acquired from a system...let's say, for the sake of discussion, a Windows 7 system...how do we know that when we run a particular tool (or set of tools) against that image, we're extracting all of the relevant data that we require in order to form informed opinions regarding our findings?  When an analyst says, "I want a timeline, and I want it with everything!", how does that analyst then know that they've got everything in that timeline?

The forensic value of any particular artifact is determined by the goals of the exam, and the relevance of that artifact, taken in context with other artifacts.  The terms "relevance" and "context" will often be subjective, based on the knowledge and skill level of the examiner.

Does this mean that every analyst needs to be an expert in Windows?  No, that's not entirely practical.  What it does mean is that there should be multiple levels of access to knowledge...training, mentors, and trusted advisers...available to your analysts, and that these should all be part of, or accessible to, your analysis team.

Friday, January 20, 2012

Stuff

DFIROnline Meetup
If you're interested purely in numbers, last night's DFIROnline meetup had, at one point, 97 attendees.  It might've helped that my presentation was addressing malware, and we ended up continuing Cory Altheide's drinking game from last year's OSDFC...every time I mispronounced the word as "mall wear", everyone had to take a drink.  I have to go back and review the tape, but my presentation may have ended up being more like a Ron White concert.  ;-)

My previous blog post includes a link to the slides I used, as well as the malware detection checklist that I mentioned in my presentation. 

There's an excellent write-up at the Digital Forensic Source blog regarding last night's meetup, if you're interested, and you can also search for the "#DFIROnline" hash tag on Twitter to see what comments folks made during the meetup.  I have to say, however, that most of the comments were made online, in chat window 3...

Again, a huge thanks to Mike for setting these up and making the resources available, and thanks to everyone who takes the time out of their evening (or day, depending on where you are) to attend and engage. 

Malware IOCs - Ramnit
Here's an excellent walk-through of creating an IOC for the Ramnit malware.  If you're interested in OpenIOC at all, or just want to see how someone would go about creating an IOC, take a look at the post...and be sure to read the first two parts, as well.

If you attended last night's DFIROnline presentation on malware detection within an acquired image, what would the malware characteristics be for Ramnit, based on the IOC?

Timelines
If you like case studies and discussions of practical analysis techniques, take a look at Rob's post on Digital Forensic SIFTing.  Rob provides some very good walk-throughs of how to use log2timeline effectively on several incident types, and it's well worth a look.

Tools
A bit ago I ran across something Yogesh had written on parsing IE RecoveryStore files.  As these files are based on the OLE format, and I've recently had some experience writing parsers for files that use this format (Jump Lists, StickyNotes), I thought I'd take a crack at this file, as well.  This is still something I'd like to do...I'm hoping Yogesh will release the specifics of parsing the various streams soon.

Along those lines, John Moan recently commented on a blog post and mentioned that he's written two tools, ParseRS and RipRS.  I haven't yet had a case that involved recovering this sort of information about a user's browser activity, but the approach he's taken is very interesting, and I'm sure that John would greatly appreciate it if folks would try the tools out and provide him with some valuable feedback.  I've added the tools to my FOSS Tools page, so they're available in one persistent place.

Case Studies
Speaking of case studies, this is one of those items of perennial interest within the community, and it has been for a while...in fact, I've tried to write my books to include case studies, and I also tend to look for similar approaches in other books.  Writing about a tool or technique is dry enough as it is, and the way to engage the reader (using the vehicle of the written word) is to include a case study that describes how the tool or technique was used.

On a number of forums, I see requests for case studies.  Not long ago, a thread was started in a forum that included a request that analysts post case studies; this is nothing new, I've seen it before.  What I haven't seen is those folks then posting case studies themselves.  Now, there are a number of what could be considered case studies online.  In fact, if you go to the FOSS Tools page off of my blog, and scroll down to the "Sample Images" section, you'll see links to several sample images that you can download...several of them have actual scenarios associated with them, as well as solutions.  These can serve as some pretty good case studies.

Wednesday, January 18, 2012

DFIROnline: Detecting Malware in an Acquired Image

The next DFIROnline meetup is on Thu, 19 Jan 2012, at 8pm EST.  Eric Huber and I will each be presenting, with my presentation being Malware Detection within an Acquired Image (the PDF for the presentation is linked below).  I thought that this would be a good presentation to give, as it seems to be fairly topical.  We'll be focusing on understanding malware and addressing malware detection within an image acquired from a Windows system.

For those attending the presentation tonight, I'm sure that Eric and Mike would appreciate questions, feedback, thoughts and comments.  During the presentation, please feel free to use the available chat windows for any interaction, and also feel free to contact folks via email during or after the presentations.

In particular, please feel free to either volunteer to give presentations, or to offer up ideas and/or requests for material to be covered in these presentations.  Who knows...there might be someone out there with some great material who simply doesn't think that anyone could possibly be interested in what they have to say...and all it takes is one or two people to send in, "...I'd really appreciate hearing more about this topic...".

Finally, a HUGE thanks to Mike for setting this up and providing the resources to make this event possible on a regular basis.

Resources
Presentation PDF for 19 Jan DFIROnline Meetup

Malware page on this blog
Malware Detection Checklist

Friday, January 13, 2012

Timeline Analysis

The DoD Cybercrime Conference is approaching, and I've been doing some thinking about my topic, Timeline Analysis.  I'll be presenting on Wed morning, starting at 8:30am...I remember Cory Altheide saying at one point that all tech conferences should start no sooner than 1pm and run no later than 3:30pm, or something like that.  Cool idea.

So, anyway...I've been thinking about some of the things that I put into pretty much all of my timeline analysis presentations.  When it comes to creating timelines, IMHO there are essentially two "camps", or approaches.  One is what I call the "kitchen sink" approach, which is basically, "Give me everything and let me do the analysis."  The other is what I call the "layered" or "overlay" approach, in which the analyst is familiar with the system being analyzed and adds successive "layers" to the timeline.  When I had a chance to chat with Chad Tilbury at PFIC 2011, he recommended a hybrid of the two approaches...get everything, and then view the data a layer at a time, using something he referred to as a "zoom" capability.  This is something I think is completely within reach...but I digress.

One of the things I've heard folks say about using the "everything" or "kitchen sink" approach is that they'd rather have everything so that they can look at it all when they're conducting analysis, because that's how we find new things.  I completely agree with that (the "finding new things" part), and I think it's a great idea.  After all, one of the core, foundational ideas behind creating timelines is that they can provide a great deal of context to the events we're seeing, and generally speaking, the more data we have, the more context there is likely to be available.  After all, a file modification can be pretty meaningless, in and of itself...but if you are able to see other events going on "nearby", you'll begin to see what events led up to and occurred immediately following the file modification.  For example, you may see that the user launched IE, began browsing the web, requested a specific page, Java was launched, a file was created, and the file in question was modified...all of which provides a great deal of context.

That leads me to this question...if you're running a tool that someone else designed and put together, and you're just pushing a button or launching a command, how do you know that the tool got everything?  How do you know that what you're looking at in the output of the tool is, in fact, everything?

The reason I prefer the layered approach is that it's predicated on (a) fully understanding the goals of your examination, and (b) understanding the system that you're analyzing.  Because you understand your goals, you know what it is you're trying to achieve.  And because you understand the system you're analyzing...Windows XP, Windows 7, etc...you also understand how various aspects of the operating system interact and are interconnected.  As such, you're able to identify where there may be additional data, and either request or create your own tools for extracting the data that you need.  Yes, this approach is more manually-intensive than a more automated approach, but it does have its positive points.  For one, you'll know exactly what should be in the timeline, because you added it.

By contrast, when talking to analysts about collecting data, the sense I most often get is "GET ALL THE THINGS!!", followed by digging through the volumes of data to perform "analysis".  I had a case a while back that involved SQL injection, and I created a timeline using only the file system metadata and the SQL injection statements from the web server logs; adding everything else available (including user profile data) would have simply made the timeline too cumbersome and too confusing to analyze effectively.  I understood the goals of my exam (i.e., determine what the bad guy did and/or was able to access), and I understood the system (in this case, how SQL injection works, particularly when the database and web server are on the same system).
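To give a sense of what that "layer" can look like in practice, here's a rough sketch along the lines of the web server log portion...purely illustrative, as it assumes a W3C-format IIS log with date and time as the first two fields, and the keyword list and host name are made up for the example...that pulls suspected SQL injection entries out of a log and writes them as five-field TLN events, ready to be combined with the file system metadata:

#!/usr/bin/perl
# sqli2tln.pl - illustrative sketch: pull suspected SQL injection entries from
# an IIS W3C web server log and emit them as 5-field TLN events
# (time|source|host|user|description); assumes date and time are the first two
# fields on each line (check the #Fields: header and adjust if necessary)
use strict;
use warnings;
use Time::Local;

my $host = "WEBSRV01";    # hypothetical server name for the TLN "host" field

while (<>) {
    chomp;
    next if /^#/;                                         # skip W3C header lines
    next unless /xp_cmdshell|DECLARE|CAST\(|%27|'--/i;    # crude SQLi indicators
    my @f = split(/\s+/);
    my ($yr, $mon, $day) = split(/-/, $f[0]);
    my ($hr, $min, $sec) = split(/:/, $f[1]);
    my $epoch = timegm($sec, $min, $hr, $day, $mon - 1, $yr);
    my $desc  = join(" ", @f[2 .. $#f]);                  # the rest of the entry
    print join("|", $epoch, "IIS", $host, "-", $desc), "\n";
}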

Now, some folks are going to say, "hey, but what if you missed something?"  To that I say...well, how would you know?  Or, what if you had the data available because you grabbed everything, and because you had no real knowledge of how the system acted, you had no idea that the event(s) you were looking at were important?

Something else to consider is this...what does it tell us when artifacts that we expect to see are not present?  Or...


The absence of an artifact where you would expect to find one is itself an artifact.

Sound familiar?  An example of this would be creating a timeline from an image acquired from a Windows system, and not seeing any indication of Prefetch file metadata in the timeline.  A closer look might reveal that there are no files ending in .pf in the timeline.  So...what does that tell you?  I'll leave that one to the reader...
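If you wanted to take that "closer look" programmatically, something as trivial as the sketch below would do it...the path pattern is an assumption about how file paths are recorded in your particular events file or bodyfile, so adjust accordingly:

#!/usr/bin/perl
# pf_check.pl - trivial sketch: scan a timeline events file (or bodyfile) for
# Prefetch (.pf) entries
use strict;
use warnings;

my $count = 0;
while (<>) {
    $count++ if /\\Prefetch\\[^\\|]+\.pf/i;
}
print $count
    ? "$count Prefetch entries found.\n"
    : "No Prefetch (.pf) entries found...so, what does that tell you?\n";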

My point is that while there are (as I see it) two approaches to creating timelines, I'm not saying that one is better than the other...I'm not advocating one approach over another.  I know from experience that there are a lot of analysts who are not comfortable operating on the command line (the "dark place"), and as such, might not create a timeline to begin with, particularly not one that is pretty command-line-intensive.  I also know that there are a good number of folks who use log2timeline pretty regularly, but don't necessarily understand the complete set of data that it collects, or how it goes about doing so.

What I am saying is that, from my perspective, each approach has its own strengths and weaknesses, and it's up to the analyst how they want to approach creating timelines.  You may not want to use a manually-intensive approach (which you can easily automate using batch files, a la Corey Harrell's approach), but if you end up using a more automated framework, how do you know you're getting everything?

Tuesday, January 10, 2012

Uncertainty

Not too long ago, I blogged about how you can contribute to the DFIR community, and that post seems to have sparked some discussion, leading to posts from other bloggers.  I saw via Twitter this morning that Christa Miller had posted her review of the Jonathan Fields book, Uncertainty.  Unfortunately, Twitter is a poor medium for commenting (although many seem to prefer it), as 140 characters simply is not enough space to offer comments, input or feedback on something.  Far too often, I think, for many forensicators it comes down to tweeting or nothing.  When that happens, I honestly believe that something is lost, and the community is less for it.  As such, I opted to post the thoughts that Christa's review percolated here on my own blog.

I won't rehash Christa's review here...there's really no point in doing that.  Christa is an excellent writer, and the only way to do her review and writing justice is to recommend that you go read what she's written and draw your own conclusions.

Two sentences in particular within Christa's review really caught my attention:

A forensicator’s fear of looking stupid or failing is not, on its face, all that irrational. Who wouldn’t worry about how one’s employer or a courtroom will react to the disclosure that you don’t have all the answers?

What I thought was interesting about this was not so much whether this fear is irrational or not; rather, what caught my attention was the "one's employer or a courtroom" part.  I'm sure that a lot of analysts are faced with this very situation or feeling, and as such, I wouldn't discount it as being irrational at all.  Now, I'm not saying that Christa's review did so...rather, I'm simply saying that, as a community, this is a place where a number of analysts find themselves.

When I was in graduate school, I was surrounded by other students, a few of whom were PhD candidates.  There were a great number of PhD faculty, of course, and perhaps one of the most powerful things I learned in my 2 1/2 years at NPS was something one of my instructors shared with me.  He had been an enlisted Marine, switched over to "the dark side" to become an officer, and was a Major by the time he left the Marine Corps to pursue his PhD.  In short, he told me that if I was struggling with a 6th order differential equation and hadn't made any headway after 15 minutes, I should ask for help.

That's right.  Admit that you need help, assistance, a gentle nudge...hey, we all find at times that we've worked ourselves into a tight corner by going down a rabbit hole, particularly the wrong one.  Why keep doing it, if all you really need is a little help?

So, I found myself thinking about that statement years later when I would be going over another analyst's case notes and report, and I'd see "Registry Analysis - 16 hrs" and nothing else.  No "this is what I was looking for" and no "this is what I found."  Why was that?  Why would a consultant spend 8 or 16 hours on something with no clear purpose and no discernible results, and then charge a customer for that time?  Particularly when someone who could provide assistance was a phone call or a cubicle away?

Whenever I've encountered a situation where I'm not familiar with something, I tend to reach out for some assistance.  While I was on the ISS ERS team, I was tasked with a Saturday morning response to address a FreeBSD firewall in a server room in another state.  Now, I have some familiarity with Linux, but hey, this is a firewall...so I asked the engagement manager to see about lining someone up with whom I could speak once I got on-site, got situated and got an idea of what was going on.  After all, I'm not an expert on much of anything, in particular FreeBSD firewalls.

Having worked with teams of analysts over the years, I've seen this "fear of failure" issue several times.  Each time, I see two sides to the issue...on one hand, you have the analyst who's afraid to even ask a question, because (as I've been told) they're afraid of "looking stupid" to their peers and boss.  So what happens is that instead of asking for help, they turn in a report that's incomplete, full of glaring holes in the analysis and conclusions, and essentially blank case notes.  That gig to analyze one image that was spec'd out at 48 hrs now takes 72 or even 96 (or more) hours to complete between multiple analysts, and while the customer ultimately gets a half-way decent deliverable, your team has lost money on the engagement.  On top of that, there's now some ill-will on the team...because one analyst didn't want to ask for help, now another analyst has to drop everything (including their family time after 5pm) to work late, in emergency mode.

On the other hand, there's the analyst who does ask questions, does ask for assistance, and in the process learns something that they can then carry forward on future engagements.  The customer receives a comprehensive report in a timely manner, and the analyst is able to meet their revenue numbers, allowing them the time to take a vacation or "mental health day", and receive a bonus.

My point is this...there's not one of us that knows everything, and regardless of what your individual perception may be, no one expects you to know everything.  If you have a passion for what you do, you learn when you ask questions and engage with others, you incorporate that new information into what you do, and you grow from it.  If you're worried about people thinking you'll "look stupid", an option would be to pursue a trusted adviser relationship with someone with whom you feel comfortable asking questions.

If you're concerned with someone seeing you ask a question publicly (potential employer, defense counsel), then find someone you can ask questions of "off the grid". 

Ultimately, as I see it, the question becomes, do you continue into the future not knowing something, or do you ask someone and at the least get a leg up on fully discovering the answer?  Would you rather look like you don't know something for a moment (as you ask the question) and then have an answer (or at least a pathway to it), or would your preference be to not know something at all, and have it discovered later, after the issue has grown?

My recommendation with respect to the two sentences from Christa's review is this...if you find yourself in a situation where you are telling yourself, "I don't want people to think I'm dumb", consider what happens if you don't ask that question.  Are you going to run over hours on your analysis, and ultimately provide a poor product to your customer?  Are you missing data that would lead to the conviction or exoneration of someone who's been accused of a crime?  Or, can you take a moment to frame your question, provide some meaningful background data ("I'm looking at a Windows XP system"), maybe do some online searches, and ask it...even if that means you're reaching out to someone you know rather than posting to a public forum? 

Monday, January 02, 2012

Stuff

Using RegRipper
Russ McRee let me know that the folks at Passmark recently posted a tutorial on how to use their OSForensics tool with RegRipper.

Speaking of RegRipper, I was contacted not long ago about setting up a German mirror for RegRipper...while it doesn't appear to be active yet, the domain has been set aside, and I'm told that the guys organizing it are going to use it not only as a mirror, but also as a site for some of the plugins they receive that are specific to what they've been doing.

If you're into Gentoo Linux, there's also this site from Stefan Reimer, which contains a RegRipper ebuild for that platform.


Updated tool:  Stefan over on the Win4n6 Yahoo group tried out the Jump List parser code and found out that, once again, I'd reversed two of the time stamps embedded in the LNK file parsing code.  I updated the code and reposted the archive.  Thanks!
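For anyone who wants to sanity-check that sort of thing themselves, here's a small standalone sketch (separate from the code in the archive) that reads the three FILETIMEs straight out of a shortcut file header, in the order the MS-SHLLINK documentation lists them:

#!/usr/bin/perl
# lnk_times.pl - sketch: print the three FILETIMEs from the header of a Windows
# shortcut (.lnk) file, in the order they appear on disk per MS-SHLLINK
# (creation @ 0x1C, last access @ 0x24, last write @ 0x2C)
use strict;
use warnings;

my $file = shift or die "Usage: $0 <file.lnk>\n";
open(my $fh, '<', $file) or die "Cannot open $file: $!\n";
binmode($fh);
seek($fh, 0x1C, 0) or die "Seek failed\n";
read($fh, my $buf, 24) == 24 or die "Short read\n";
close($fh);

my @names = ("Creation", "Access", "Write");
foreach my $i (0 .. 2) {
    my ($lo, $hi) = unpack("VV", substr($buf, $i * 8, 8));
    my $ft = ($hi * 4294967296) + $lo;                   # 64-bit FILETIME value
    my $epoch = int($ft / 10_000_000) - 11644473600;     # 100ns since 1601 -> Unix
    printf "%-8s : %s UTC\n", $names[$i], $ft ? scalar gmtime($epoch) : "0 (not set)";
}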

Meetups
With respect to the NoVA Forensics Meetups, I posted here asking what folks thought about moving them to the DFIROnline meetups, and I tweeted something similar.  Thus far, I have yet to receive a response to the blog post, and of the responses I've seen on Twitter, the vast majority (2 or 3...I've only seen like 4 responses...) indicate that moving to the online format is just fine.  I did receive one response from someone who seems to like the IRL format...although that person also admitted that they haven't actually been to a meetup yet.

So...it looks like for 2012, we'll be moving to the online format.  Looking at the lineup thus far, we already seem to be getting some good presentations coming along in the near future.

Speaking of which, offering to give a presentation, or asking for some specific content to be presented on, is a great way to contribute to the community.  Just something to keep in mind...if you're going to say, "...I'd like to hear about this topic", be prepared to engage in a discussion.  This isn't to say that someone's going to come after you and try to belittle your idea...not at all.  Instead, someone willing to present on the topic may need more information about your perspective, what you've tried (if anything), any research that you've already done, etc.  So...please be willing to share ideas of what you'd like to see presented, but keep in mind that "...what do you mean by that?" is NOT a slam.

New Tools
File this one under "oh, cr*p..."...

Seems setmace.exe has been released...if you haven't seen this yet, it apparently overcomes some of the issues with timestomp.exe; in particular, it is reportedly capable of modifying the time stamps in both the $STANDARD_INFORMATION and the $FILE_NAME attributes within the MFT.  However, it does so by creating a randomly-named subdirectory within the same volume, copying the file into the new directory, and then copying it back (Note: the description on the web page uses "copy" and "move" interchangeably).

Okay, so what does this mean to a forensic analyst, if something like this is used maliciously?  I'm going to leave that one to the community...

The folks at SimpleCarver have released a new tool to extract contents from the CurrentDatabase_327.wmdb file, a database associated with Windows Media Player on Windows 7.  If you're working an exam that involves the use of WMP (i.e., you've seen indications of the application's use via the Registry and/or Jump Lists...), then you may want to consider taking a look at this tool.

You might also want to check out some of their other free tools.

Melissa posted to her blog regarding a couple of interesting tools for pulling information from memory dumps; specifically, pdgmail and Skypeex.  Both tools apparently require that you run strings first, but that shouldn't be a problem...the cost-benefit analysis seems to indicate that it's well worth running another command line tool.  If what you have is a hibernation file rather than a memory dump, you could use Volatility or the MoonSols Windows Memory Toolkit to convert it to a raw dump format, and then run these tools against that.

Speaking of tools, Mike posted a list of non-forensics tools that he uses on Windows systems to his WriteBlocked blog.  This is a very good list, with a lot of useful tools on it (several of which I've used myself).  I recently used Wireshark to validate some network traffic...another tool that you might consider using alongside Wireshark is NetworkMiner...it's described as an NFAT tool, so I can see why it's not on Mike's list.  I also use VirtualBox...I have a copy of the developer build of Windows 8 running in it.

Wiping Utilities
Claus is back, and this time has a nice list of wiping utilities.  As forensic analysts, many times we have to sanitize the media that we're using, so having access to these tools is a very good thing.  I've always enjoyed Claus's posts, as well, and hope to see him posting more and more often in 2012.

Can anyone provide a technical reason why wiping with 7 passes (or more) is "better" than wiping with just 1 pass?

File Formats
I was reading Yogesh Khatri's posts over at SwiftForensics.com, and found this post on IE RecoveryStore files.  Most analysts who have done any work with browser forensics are aware of the value of files that allow the browser to recover previous sessions...these resources can hold a good deal of potentially valuable data.

About halfway down the post, Yogesh states:

All files are in the Microsoft OLE structured storage container format.

That's awesome...he's identified the format, which means that we can now parse these files.  Yogesh mentions free tools, and one of the ones I like to use to view the contents of OLE files is MiTeC's SSV, as it not only allows me to view the file format and streams, but I can also extract streams for further analysis. 

Another reason I think that this is cool is that I recently released the code I wrote to parse Windows 7 Jump Lists (I previously released code to parse Win7 Sticky Notes), and the RecoveryStore files follow a similar basic format.  Also, Yogesh mentions that there are GUIDs within the file that include 60-bit UUID v1 time stamps...cool.  The Jump List parser package includes LNK.pm, which contains some Perl code that I put together to parse these artifacts!
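For anyone curious what parsing one of those time stamps actually involves, here's a quick standalone sketch (not the LNK.pm code itself, and the default GUID below is a made-up sample) that pulls the 60-bit time stamp out of a version 1 UUID string and converts it to something human-readable:

#!/usr/bin/perl
# uuid1time.pl - sketch: extract the 60-bit time stamp from a version 1
# UUID/GUID string and convert it to a Unix epoch time (assumes 64-bit Perl)
use strict;
use warnings;

my $guid = shift || "e9e9e0c0-3d38-11e1-a0d4-005056c00008";   # made-up sample

# layout: time_low(8)-time_mid(4)-time_hi_and_version(4)-clock_seq(4)-node(12)
my ($time_low, $time_mid, $time_hi) =
    ($guid =~ /^([0-9a-f]{8})-([0-9a-f]{4})-([0-9a-f]{4})-/i)
    or die "$guid does not look like a GUID\n";

die "Not a version 1 UUID\n" unless (hex($time_hi) >> 12) == 1;

# reassemble the 60-bit count of 100-nanosecond intervals since 1582-10-15
my $uuid_time = ((hex($time_hi) & 0x0FFF) << 48) | (hex($time_mid) << 32) | hex($time_low);

# 12219292800 = seconds between 1582-10-15 and the Unix epoch (1970-01-01)
my $unix = int($uuid_time / 10_000_000) - 12219292800;
printf "%s -> %s UTC\n", $guid, scalar gmtime($unix);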

I don't currently have, nor do I have access to, any RecoveryStore files to work with (with respect to writing a parser)...however, over time, I'm sure that the value of these artifacts will reach the point where someone writes, or contributes to writing, a parser for these files.
  

Sunday, January 01, 2012

Contributing to the Community

So, here we go with my first post of 2012...

Not long ago, I posted some thoughts on how analysts can contribute to the DFIR community.  What I wanted to do was offer suggestions to those within the community who had read Rob's post and had maybe thought that writing a batch script or a full-on program was a pretty daunting endeavor, let alone standing up an entire project such as log2timeline.  Recently, there have been some interesting exchanges on Twitter, and I think that this is a good time for folks to consider how they might make a contribution to the DFIR community in 2012.

I think that one of the biggest misconceptions within the DFIR community has to do with how one person can make a contribution.  A lot of analysts seem to get themselves into a nice, comfortable little place called "complacency" through paralysis...they look around, see how some others contribute, and convince themselves, "I can't contribute to the community because I don't know how to program."  I think that this applies to other ways of contributing besides just programming, and I think it's just an excuse, and a pretty bad one at that.

At the first Open Source Digital Forensics Conference (put on by Basis Tech and Brian Carrier...thanks, Brian!) in 2010, toward the end of the event, a member of the audience asked, "Why can't I dump and parse memory from Windows 7 systems??"  To that, a prominent member of the community asked, "What have you contributed?"...the idea being that one can't...and shouldn't...simply sit back and expect everything to come to them for free without putting forth something.  But the response didn't center around someone's ability to code in Python...rather, it was about other aspects of how someone could go about making contributions.  Had the person asking the question offered up extra hardware to support development efforts, offered to write up documentation, or just said, "Thank you"?

Not everyone (and certainly not me) expects all DFIR analysts to be able to write code.  There are a lot of really good analysts out there who don't go beyond simple scripts and regexes, and others who don't code at all.  Corey Harrell has made some pretty fantastic contributions simply by writing a number of batch scripts that, in essence, tie together other tools.

There are a number of ways that anyone can make a contribution to the community, and most do not require the ability to write code.  Some of the ways you can do that in 2012 include, but are not limited to, the following:

Case Studies - One of the things that is definitely true about the DFIR community is that folks really love hearing how others have done things.  Many of us encounter the same or similar cases, or have those "one-offs" that don't get seen very often, and we all enjoy hearing about novel approaches to solving problems.  Admit it...just to be in this community, you have to have a little nerd in you, and there's a part of you that likes to hear how someone else may have overcome an obstacle that they encountered.  Keeping that in mind...that you like to hear those "war stories"...consider sharing yours with others.

If you can't program, but you are able to download and use a tool (commercial, open source), then a great way to make a contribution is to comment on how you used the tool.  Was it useful/sufficient/accurate?  Was it easy to use? 

Ask a question - Very often, this is a huge contribution!  Asking a question has the effect of sharing your perspective on things with others, and seeing different perspectives can be extremely beneficial.  For example, I like to dig into the Registry, but many times I don't really know what it is that other analysts find useful, or what would be most valuable to their case work.  If someone asks a question about the Registry (specific keys/values, how to locate something, etc.), that gives others a perspective on how they look at things, how they approach problems, and how they go about solving them...and many times, just this perspective can help someone else with an issue that they're working on.

If you download a free, open source tool and you're having trouble using it, start by asking the author for pointers or assistance.  Maybe there's something wrong with how you're using it...maybe you're missing a switch.  Or maybe you're running an MBR parsing tool against a .vmem file (hey, I can't make this stuff up).  Asking the author your question gives them insight into who's using the tool, how it's being used, and maybe how to improve it...and it's far better (and much more appreciated) than going to a public forum and stating, "...this tool don't work."

Here's a great example of how you can ask a question...the example is specific to the DFIROnline meetups, but it demonstrates how a number of folks can come together to provide different perspectives when addressing issues and answering questions.

Review a blog post, book or paper - Don't code, and can't share case studies?  Don't feel as if you can maintain a blog?  No problem.  How about this...have you read Corey's blog?  Did you find something interesting in one of his posts?  Did you think what he posted was cool?  Did you tell him about it?  Did you comment on it, and share your thoughts?  If you can't do so directly to the comment section of his blog, have you considered sending him an email?

If you decide to review a book, consider doing something just a little bit more than repeating the table of contents.  While authors appreciate knowing that someone picked up their book, they appreciate it even more knowing (a) if the material was useful, and (b) how it was useful.  Again, sharing your perspective can be very valuable.

What Not To Do
In 2012, consider what you can do, and consider not spending time worrying about (or stating) what you can't do.  "I can't program" isn't a contribution to the community.

If you feel strongly enough to download a tool that someone wrote, take the time to thank them.  Okay, you may not have time to do any in-depth testing of the tool, and you may have downloaded it just to have it for future use, but you had the interest and took the time to download the tool.  Now, this doesn't mean that you have to reach out and thank someone every time you actually run the tool, but just having the courtesy to thank someone for their efforts can go a long way toward the development of that tool, or others.

I think that most times, folks look at what others do to contribute within the community and think to themselves, "I can't program, I don't have the time to write books, and I don't like public speaking, so I can't contribute."...and to be honest, nothing could be further from the truth.  No one is expected to make contributions all the time...hey, we're people and have lives.  But there's really no reason why, if you're capable of doing the work that we do in this field, you can't make some contribution of some kind at some point over the course of a year, no matter how small.

Clicking "Like", "+1", or re-tweeting a post isn't a contribution, as it doesn't add anything to whatever it is you're commenting on.  If you like something enough to click a button, take a moment to say what you like about it, or how it was useful or valuable to you.  The same holds true for book reviews...if you're going to review a book, reiterating the table of contents isn't a review; however, describing what you found valuable (or not) and how it was valuable to you is what most of us look for in a review, right?  "The car has cup holders" isn't so much a review of a vehicle as "the car has three cup holders, none of which the driver can easily reach."

What if you're one of those folks who is bound by a corporate policy or something else that prevents you from contributing?  What does this policy prevent you from doing?  You can't talk about casework?  That's fine...in a lot of cases, many of us are thankful that you're the one dealing with the details of the specific case and not us.

Final Thoughts
No one of us is as smart as all of us, and the best way to get smarter within this community is to engage with each other, share perspectives and thoughts, and then build from there.

Sometimes, the biggest contribution you can make is to simply thank someone for their contribution and efforts.  Seriously.  This means a lot.  Think about it...if you did something, no matter how small, wouldn't you appreciate it if someone said, "thanks"?

As an example of mandating that members contribute, check out the NoVA Hackers blog...they follow the AHA model of participation...which, in short, says that if you want to remain a member, you have to participate.  The AHA page lists what you must do in order to remain a member of their group, and remember, membership is voluntary, so one must accept these conditions upon becoming part of the group.

Here's looking forward to a great 2012, everyone...

Addendum:
Erika posted on this topic, as did Ken...both are excellent posts that take the conversation regarding contributing to the community several steps further.  More than anything, I think that it's valuable to hear from others in this regard, in particular those within the community who might say, "...I want to contribute, but I don't think I have anything of value to share."  I've said it before and I'll say it again...sometimes, the best question to ask is "why?".  When I was on the IBM ERS team, we brought Don Weber on board, and besides just being a great guy, he'd ask me "why?" during engagements, and that got me to re-think (and in many cases, justify) my base assumptions with respect to next steps on the engagement.  That isn't to say that it changed what I was going to do as the engagement lead, but it did open up discussion so that Don could understand what I was thinking.  It also afforded me the opportunity to get Don's input, which was invaluable.  Sometimes, the most information can come from questions such as, "why did you do it this way?" or "how did you go about accomplishing this?"

An additional thought or two that might help...choose your circles and choose your medium.  If you don't feel comfortable posting to an open list, find some other medium.  One way to ask the questions you may have would be to send them directly to someone you trust, and either ask them to post them as a proxy, or just see if they know the answer.

What happens sometimes is that someone will ask a question, and the response will be terse or concise, or include a link to LMGTFY, or just be, "...which OS?"  These responses often come when little initial thought, effort or research has been put into the question, and they are often viewed by the recipient as a "slam".  I think that what folks really don't realize is that you can't convey tone in 140 characters or less, so many times tone gets assumed by the reader because it can't be conveyed.  If a medium like Twitter (with its limit of 140 characters) leaves you thinking to yourself, "...hey, I asked a question, but that guy's response was mean to me...", then maybe that isn't the right medium for what you're trying to accomplish.

Not every medium is suitable for everyone in this industry.  For example, the Win4n6 Yahoo group currently has 830 members listed in the forum, and only about a dozen or so "regular" contributors.  As I approve every membership application (solely as an effort to keep bots out), I see all of those who state during the application process that their reason for joining is to "contribute" and "take part" in discussions...and then we never hear from them again.  So maybe this medium isn't something that works for them.

Final thought...this time around, anyway...if you don't have the time to put into a question, maybe it's not a good time to ask it.  I'm not saying that you shouldn't ask your question; I'm just suggesting that if you don't have the time to do some research on your own, or to let folks know that you're looking at a Windows 7 system and not Windows XP (if you don't know why that matters, please feel free to ask...), then maybe it's better to hold off until you have more time to do a thorough job, rather than just throwing it out to "the collective".  Just something to think about...Ken referred to this when he mentioned "...ask stronger questions about forensic topics."  Excellent point, Ken.