Friday, March 15, 2024

Uptycs Cybersecurity Standup

I was listening to a couple of fascinating interviews on the Uptycs Cybersecurity Standup podcast recently, and I have to tell you, there were some pretty insightful comments from the speakers.

The first one I listened to was Becky Gaylord talking about her career transition from an investigative journalist into cybersecurity.

Check out Becky's interview, and be sure to check out the show notes, as well.

I also listened to Quinn Varcoe's interview, talking about Quinn's journey from zero experience in cybersecurity to owning and running her own consulting firm, Blueberry Security.

Check out Quinn's interview, and the show notes.

More recently, I listened to Olivia Rose's interview. Olivia and I crossed paths years ago at ISS, and she has now hung out her own shingle as a virtual CISO (vCISO). I joined ISS in Feb 2006, about 6 months before its purchase by IBM, which was announced in August 2006. Olivia and I met at the IBM ISS sales kick-off in Atlanta early in 2007.

All of these interviews are extremely insightful; each speaker brings something unique with them from their background and experiences, and every single one of them has a very different "up-bringing" in the industry.

There's no one interview that stands out as more valuable than the others. Instead, my recommendation is to listen to them all; in fact, do so several times. Take notes, and take note of what each speaker says.

Thursday, March 14, 2024

Investigative Scenario, 2024-03-12

Investigative Scenario
Chris Sanders posted another investigative scenario on Tues, 12 Mar, and this one, I thought, was interesting (see the image to the right).

First off, you can find the scenario posted on X/Twitter, and here on LinkedIn.

Now, let's go ahead and kick this off. In this scenario, a threat actor remotely wiped a laptop, and the sole source of evidence we have available is a backup of "the Windows Registry", made just prior to the system being wiped.

Goals
I try to make sure I have the investigative goals written out where I can see them and quickly refer back to them. 

Per the scenario, our goals are to determine:
1. How did the threat actor access the system?
2. What were their actions on objectives prior to wiping the system?

Investigation
The first thing I'd do is create a timeline from the Software and System hive files, in order to establish a pivot point. Per the scenario, the Registry was backed up "just before the attacker wiped the system". Therefore, by creating a timeline, we can assume that the last entry in the timeline was from just prior to the system being wiped. This would give us a starting point to work backward from, and provide an "aiming stake" for our investigation.
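As a rough illustration of this step, here's a minimal sketch of how such a timeline could be built, assuming the python-registry module and local copies of the backed-up hive files (the file names here are placeholders); the last line of the sorted output becomes the pivot point:

from Registry import Registry
import calendar

def walk(key, rows):
    # collect (LastWrite time as Unix epoch, key path) for every key in the hive
    rows.append((calendar.timegm(key.timestamp().timetuple()), key.path()))
    for sk in key.subkeys():
        walk(sk, rows)

rows = []
for hive in ("SOFTWARE", "SYSTEM"):      # placeholder file names for the backed-up hives
    walk(Registry.Registry(hive).root(), rows)

# emit TLN-style lines (time|source|host|user|description), sorted on time;
# the final entry approximates "just before the system was wiped"
for ts, path in sorted(rows):
    print(f"{ts}|REG|||Key LastWrite - {path}")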

The next thing I'd do is examine the NTUSER.DAT files for any indication of "proof of life" up to that point. What I'm looking for here is to determine the how of the access; specifically, was the laptop accessed via a means that provided shell- or GUI-based access? 

If I did find "proof of life", I'd definitely check the SAM hive to see if the account is local (not a domain account), and if so, try to see if I could get last login time info, as well as any indication that the account password was changed, etc. However, keep in mind that the SAM hive is limited to local accounts only, and does not provide information about domain accounts.

Depending upon the version/build of Windows (that info was not available in the scenario), I might check the contents of the BAM subkeys, for some indication of process execution or "proof of life" during the time frame of interest.
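A minimal sketch of what pulling the BAM entries might look like, again assuming python-registry, a copy of the System hive, and the commonly documented layout (value data beginning with a 64-bit FILETIME); ControlSet001 is assumed here, so check the Select key to confirm the current control set:

import struct
from Registry import Registry

reg = Registry.Registry("SYSTEM")     # placeholder file name for the backed-up hive
bam = reg.open("ControlSet001\\Services\\bam\\State\\UserSettings")

for sid_key in bam.subkeys():         # one subkey per user SID
    for val in sid_key.values():
        data = val.value()
        if isinstance(data, bytes) and len(data) >= 8:
            ft = struct.unpack("<Q", data[:8])[0]       # FILETIME: 100ns ticks since 1601
            epoch = ft // 10_000_000 - 11644473600      # convert to Unix epoch
            print(f"{epoch}|BAM||{sid_key.name()}|{val.name()}")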

If there are indications of "proof of life" from a user profile, and it's corroborated with the contents of the BAM subkeys, I'd definitely take a look at profile, and create a timeline of activity.

What we're looking for at this point is:
1. Shell-, GUI-based access, via RDP, or an RMM?
2. Network-, CLI-based access, such as via ssh, Meterpreter, user creds/PSExec/some variant, or a RAT

Shell-based access tends to provide us with a slew of artifacts to examine, such as RecentApps, RecentDocs, UserAssist, shellbags, WordWheelQuery, etc., all of which we can use to develop insight into a threat actor, via not just their activity, but the timing thereof, as well.

If there are indications of shell-based access, we'd check the Registry to determine if RDP was enabled, or if there were RMM tools installed, but without Windows Event Logs and other logs, we won't know definitively which means was used to access the laptop. Contrary to what some analysts seem to believe, the TSClients subkeys within the NTUSER.DAT hive do not show systems that have connected to the endpoint, but rather which systems were connected to from the endpoint.

Something else to consider is if the threat actor had shell-based access, and chose to perform their actions via a command prompt, or via PowerShell, rather than navigating the system via the Explorer shell and double-clicking files and applications. As we have only the backed up Registry, we wouldn't be able to examine the user's console history, nor the PowerShell Event Logs.

However, if there are no indications of shell-based access, and since we only have the Registry and no access to any other log files from the endpoint, it's likely going to be impossible to determine the exact means of access. Further, if all of the threat actor's activity was via network-based/type 3 logins to the laptop, such as via Meterpreter or PSExec, very little of that activity would be recorded in the Registry at all.

It doesn't do any good to parse the Security hive for the Security Event Log audit policy, because we don't have access to the Windows Event Logs. We could attempt to recover them via record parsing of the image, if we had a copy of the image. 

I would not put a priority on persistence; after all, if a threat actor is going to wipe a system, any persistence they create is not going to survive, unless the persistence they added was included in a system-wide or incremental backup, from which the system is restored. While this is possible, it's not something I'd prioritize at this point. I would definitely check autostart locations within the Registry for any indication of something that might look suspicious; for example, something that may be a RAT, etc. However, without more information, we wouldn't be able to definitively determine (a) if the entry was malicious, and (b) if it was used by the threat actor to access the endpoint. For example, without logs, we have no way of knowing if an item in an autostart location started successfully, or generated an error and crashed each time it was launched. Even with logs, we would have no way of knowing if the threat actor accessed the laptop via an installed RAT.

Something else I would look for would be indications of third-party applications added to the laptop. For example, LANDesk used to have a Software Monitoring module, and it would record information about programs executed on the system, along with how many times it was launched, the last time it was launched, and the user name associated with the last launch. 

Findings
So, where do we stand with our goals? I'd say that at the moment, we're at "inconclusive" because we simply do not have enough information to go on. There is no memory dump, no other files collected, no logs, etc., just the backed up Registry. While we won't know definitively how the threat actor was able to access the endpoint, we do know that if access was achieved via some means that allowed for shell-based access, we might have a chance at determining what actions the threat actor took while they were on the system. Of course, the extent to which we'd be able to do that also depends upon other factors, including the version of Windows, the software "load" (i.e., installed applications), and the actions taken by the threat actor (navigating/running apps via the Explorer shell vs. the command prompt/PowerShell). It's entirely possible that the threat actor accessed the endpoint via the network, through a means such as Meterpreter, or there was a RAT installed that they used to access the system.

Monday, February 26, 2024

PCAParse

I was doing some research recently regarding what's new to Windows 11, and ran across an interesting artifact, which seems to be referred to as "PCA". I found a couple of interesting references regarding this artifact, such as this one from Sygnia, and this one from AboutDFIR. Taking a look at the samples of files available from the DFIRArtifactMuseum, I wrote a parser for two of the files from the C:\Windows\appcompat\pca folder, converting the time stamps to Unix epoch format and sending the output to STDOUT, in TLN format so that it can be redirected to an events file.
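A minimal sketch of such a parser might look like the following; it assumes PcaAppLaunchDic.txt lines are "<path>|<YYYY-MM-DD HH:MM:SS.fff>" pairs, which may vary across builds:

import sys
from datetime import datetime, timezone

def parse_pca_applaunch(path):
    with open(path, "r", encoding="utf-8", errors="replace") as fh:
        for line in fh:
            line = line.strip()
            if not line or "|" not in line:
                continue
            exe, _, stamp = line.rpartition("|")
            dt = datetime.strptime(stamp, "%Y-%m-%d %H:%M:%S.%f").replace(tzinfo=timezone.utc)
            # TLN format: time|source|host|user|description
            print(f"{int(dt.timestamp())}|PCA|||{exe}")

if __name__ == "__main__":
    parse_pca_applaunch(sys.argv[1])   # e.g., python pcaparse.py PcaAppLaunchDic.txt > events.txt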

An excerpt from the output from the PcaAppLaunchDic.txt file:

1654524437|PCA|||C:\ProgramData\ProtonVPN\Updates\ProtonVPN_win_v2.0.0.exe
1661428304|PCA|||C:\Windows\SysWOW64\msiexec.exe
1671064714|PCA|||C:\Program Files (x86)\Proton Technologies\ProtonVPN\ProtonVPN.exe
1654780550|PCA|||C:\Program Files\Microsoft OneDrive\22.116.0529.0002\Microsoft.SharePoint.exe

An excerpt from the output from the PcaGeneralDb0.txt file:

1652387261|PCA|||%programfiles%\freefilesync\bin\freefilesync_x64.exe - Abnormal process exit with code 0x2
1652387261|PCA|||%programfiles%\freefilesync\freefilesync.exe - Abnormal process exit with code 0x2
1652391162|PCA|||%USERPROFILE%\appdata\local\githubdesktop\app-2.9.9\resources\app\git\cmd\git.exe - Abnormal process exit with code 0x80
1652391162|PCA|||%USERPROFILE%\appdata\local\githubdesktop\app-2.9.9\resources\app\git\mingw64\bin\git.exe - Abnormal process exit with code 0x80

This output can be redirected to an events file, and included in a timeline, so that we can validate that the artifact does, in fact, illustrate evidence of execution. Incorporating file system information, Prefetch and Windows Event Log data (and any other on-disk resources), as well as EDR telemetry (if available) will provide the necessary data to validate program execution.

Addendum, 2024-02-27: Okay, so I've been actively seeking out opportunities to use this parser in my role at my day job, and while I've been doing so, some things have occurred to me. First, there's nothing in either file that points to a specific user, so incorporating this data into an overall timeline that includes WEVTX data and EDR telemetry is going to help not only validate the information from the files themselves, but provide the necessary insight around process execution, depending of course on the availability of information. Fossilization on Windows systems is a wonderful thing, but not everyone takes advantage of it, nor really understands where it's simply not going to be available.

Not only is there no user information, there's also no information regarding process lineage. Still, I firmly believe that once we begin using this information in a consolidated timeline, and begin validating the information, we'll see that it adds yet another clarifying overlay to our timeline, as well as possible pivot points.

Saturday, February 24, 2024

A Look At Threat Intel, Through The Lens Of The r77 Rootkit

It's been almost a year, but this Elastic Security write-up on the r77 rootkit popped up on my radar recently, so I thought it would be useful to do a walk-through of how someone with my background would mine open reporting such as this for actionable intel. 

In this case, the r77 rootkit is described as an "open source userland rootkit used to deploy the XMRig crypto miner". I've seen XMRig before (several times), but not deployed alongside a rootkit.

The purpose of a rootkit is to hide stuff. Anyone who was around in the late '90s and early 2000s is familiar with the term "rootkit" and what it means. From the article, "r77’s primary purpose is to hide the presence of other software on a system by hooking important Windows APIs, making it an ideal tool for cybercriminals looking to carry out stealthy attacks. By leveraging the r77 rootkit, the authors of the malicious crypto miner were able to evade detection and continue their campaign undetected."

My point in sharing this definition/explanation is because many of us will see this, or generally accept that a rootkit is involved, and then not think critically about what we're seeing, but more importantly, what we're not seeing. For example, in this case, the Elastic Security write-up itself provides an illustration.

The installer module is described as being written to the Registry, which is a commonly observed technique, especially when it comes to "fileless malware". The article states that the installer "creates a new registry key called $77stager in the HKEY_LOCAL_MACHINE\SOFTWARE hive and writes the stager module to the key." However, the code in the image immediately following that statement (images are not numbered in the article) shows the RegSetValueExW function being called. As such, it's not a Registry key that's created, but a value. 

This may seem pedantic to many, but the distinction is important. Clearly, a different API function is used to create a value than a key; this is because keys and values are completely different structures altogether. You cannot write data to a key (i.e., "writes the stager module to the key"); that data has to be associated with a value. Many EDR frameworks, when monitoring or querying Registry keys vs values, use different API or function calls themselves. As such, monitoring for the creation of, or simply searching for, the $77stager key will miss this rootkit.

Every. 

Single. 

Time. 

What's interesting is that the article later states:
It then stores the current process ID running the service module as a value in a registry key named either “svc32” or “svc64” under the key HKEY_LOCAL_MACHINE\SOFTWARE\$77config\pid. The svc32/64 key name is based on the system architecture.

Here, it looks as if the correct nomenclature is used.

And then there's threat hunting; that is, if you're going to write PowerShell code to sweep across your infrastructure and look for malware similar to this, the code to look for a key is different than that to look for a value. The same is true for triage or 'dead box' analysis via tools such as RegRipper. Threat hunting with PowerShell across live systems for direct artifacts of this rootkit likely won't get you very far, because...well...it's a rootkit, and the key is hidden through the use of userland API hooking. Elastic's article even points out that data is filtered when using tools such as RegEdit that rely on the hooked API functions. As such, verifying that the rootkit is actually there may require the use of reg.exe or something like FTK Imager to copy the Software hive off of the endpoint, and then parsing that hive file.
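To make the distinction concrete, here's a minimal sketch using Python's standard winreg module; the names are taken from the write-up, and as noted above, on a live system with the hooks in place the results will be filtered, so the point is only that enumerating values is a different operation than opening a key:

import winreg

# looking for a *key* named $77stager misses this rootkit -- no such key is created
try:
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, r"SOFTWARE\$77stager"):
        print("key exists")
except FileNotFoundError:
    print("no such key")

# the stager is written as a *value* beneath HKLM\SOFTWARE, so enumerate values instead
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, "SOFTWARE") as k:
    i = 0
    while True:
        try:
            name, data, vtype = winreg.EnumValue(k, i)
        except OSError:
            break
        if name.startswith("$77"):
            print(f"suspicious value: {name}")
        i += 1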

Searching for indirect artifacts related to this rootkit, however, is an entirely different matter, and is the reason why indirect artifacts are so valuable. The PowerShell code that is launched is captured in the Windows PowerShell Event Log, in PowerShell/600 event records, as well as in the Microsoft-Windows-PowerShell/Operational Event Log, in Microsoft-Windows-PowerShell/4104 records. This activity/these artifacts allow us to validate that the activity actually occurred, while providing for additional detection opportunities.

Some aspects of the malware not covered in the article include initial access, or how the whole kit is deployed. The technical depth of the article is impressive but not entirely actionable. For example, what aspects (direct artifacts) of the infection are hidden by the rootkit, and what indirect artifacts are 'visible'?

Monday, January 22, 2024

Lists of Images

There're a lot of discussions out there on social media regarding how to get started, improve yourself, or set yourself apart in cybersecurity, and a lot of the advice centers around doing things yourself: setting up a home lab, using various tools, etc. A lot of this advice is also centered around pen testing and red teaming; while it's not discussed as much, there is a lot you can do if you're interested in digital forensics, and the cool thing is that you don't have to "set up a home lab" to fully engage in most of it. All you need is a way to download the images and any tools you want to a system where you can do the work.

Fortunately, there are a number of sites where you can find these images, to practice doing analysis, or to engage in tool testing. Also, many of these sites are on lists...I've developed a list of my own, for example. Amongst the various available lists, there's most assuredly going to be duplication, so just be aware of that going in. That being said, let's take a look at some of the lists...

The folks at ArsenalRecon posted a list of publicly available images, and Brett Shavers followed up by sharing a DFIR Training link of "test" images.

Dr. Ali Hadi has a list of challenge images (he graciously allowed me to use one of them in Investigating Windows Systems), as well as a blog with some very valuable posts.

While "test" and CTF images are a great way to practice using various tools, and even developing new techniques, they lack the fossilization of user and system activity seen in real-world images. There's not a great deal that can be done about that; suffice to say that this is just something that folks need to be aware of when working with the images. It's also possible within the limited scope of the "incident" to develop not just threat intel, but also discern insights into the threat actor; that is, to observe human behavior rendered from digital forensics.

Many of the CTF images will be accompanied by a list of questions that need to be answered (i.e., the flags), few of which are ever actually asked for by customers, IRL. I've seen CTFs with 37 or even 51 questions, and across 25 yrs of DFIR experience, I've never had customers ask more than 5 questions, with one or two of them being duplicates. 

The point is that CTF images are a great place to start, particularly if you take a more "real world" approach to the situation and define your own goals. "Is this system infected with malware? If so, how did this happen, what did the malware do, and was any data stolen as a result?"

It's also a great idea to do more than just answer the questions; go beyond them. For example, in the write-up of your findings, did you consider control efficacy? What controls were in place, did they work or not, and what controls would you recommend?

I once worked a case where the endpoint was infected due to a phishing email and the customer responded that this couldn't be the case, because they had a package specifically designed to address such things on their email gateway. However, the phishing email had gotten on the system because the user accessed their personal email via a browser, bypassing the email gateway all together.

Can you recommend controls or system configuration changes that may have inhibited or even obviated the attack/infection? What controls either on the network, or on the endpoint itself may have had an impact on the attack?

What about detections? How would you detect this malware or activity on future cases? Can you write a Yara or Sigma rule that would address the attack at any point? Is there one data source that proved to be more valuable than others, something you can clearly delineate as, "...if you see this, then the attack succeeded..."?

What can you tell about the "attacker", as a person? Was this a human operated attack, and if so, what insights can you develop about the attacker from your DF analysis? Hours of operations, capabilities, situational awareness are all aspects you can look at. Were there failed attempts to log in, run commands, or install applications, or did the attacker seem to be prepared and good to go when they got on the box? What insights can be rendered from your analysis, and are there any gaps that would shed more light on what was happening?

Finally, set up a Github site or blog, and share your experience and findings. Write up a blog post, a series of blog posts, or upload a document to a Github repo, and invite others to review, and ask questions, make comments, etc.

Monday, January 15, 2024

EDRSilencer

There's been a good bit of discussion in the cybersecurity community regarding "EDR bypasses", and most of these discussions have been centered around technical means a threat actor can use to "bypass" EDR. Many of these discussions do not seem to take the logistics of such a thing into account; that is, you can't suddenly "bypass EDR" on an endpoint without first accessing the endpoint, setting up a beachhead, and then bringing your tools over. Even then, where is the guarantee that it will actually work? I've seen ransomware threat actors fail to get their file encryption software to run on some endpoints.

Going unnoticed on an endpoint when we believe or feel that EDR is prevalent can be a challenge, and this could be the reason why these discussions have taken hold. However, the fact of the matter is that the "feeling" that EDR is prevalent is just that...a feeling, and not supported by data, nor situational awareness. If you look at other aspects of EDR and SOC operations, there are plenty of opportunities using minimal/native tools to achieve the same effect; to have your actions not generate alerts that a SOC analyst investigates.

Situational Awareness
Not all threat actors have the same level of situational awareness. I've seen threat actors where EDR has blocked their process from executing, and they respond by attempting to uninstall AV that isn't installed on the endpoint. Yep, that's right...this was not preceded by a query attempting to determine which AV product was installed; rather, the threat actor went right to uninstalling ESET. In another instance, the threat actor attempted to uninstall Carbon Black; the monitored endpoint was running <EDR>. Again, no attempt was made to determine what was installed.

However, I did see one instance where the threat actor, before doing anything else or being blocked/inhibited, ran queries looking for <EDR> running on 15 other endpoints. From our dashboard, we knew that only 4 of those endpoints had <EDR> running; the threat actor moved to one of the 11 that didn't.

The take-away from this is that even beyond "shadow IT", there are likely endpoints within an infrastructure that don't have EDR installed; 100% coverage, while preferred, is not guaranteed. I remember an organization several years ago that was impacted by a breach, and after discovering the breach, installed EDR on only about 200 endpoints, out of almost 15,000. They also installed the EDR in "learning mode", and several of the installed endpoints were heavily used by the threat actors. As such, the EDR "learned" that the threat actor was "normal" activity.

EDRSilencer
Another aspect of EDR is that for the tool to be effective, most need to communicate to "the cloud"; that is, send data off of the endpoint and outside of the network, where it will be processed. Yes, I know that Carbon Black started out with an on-prem approach, and that Sysmon writes to a local Windows Event Log file, but most EDR frameworks send data to "the cloud", in part so that users with laptops will still have coverage.

EDRSilencer takes advantage of this, not by stopping, altering or "blinding" EDR, but by preventing it from communicating off of the endpoint. See p1k4chu's write up here; EDRSilencer works by creating a WFP rule to block the EDR EXE from communicating off of the host, which, to be honest, is a great idea. 

Why a "great idea"? For one, it's neither easy nor productive to create a rule to alert when the EDR is no longer communicating. Some organizations will have hundreds or thousands of endpoints with EDR installed, and there's no real "heartbeat" function in many of them. Employees will disconnect laptops, offices (including WFH) may have power interruptions, etc., so there are LOT of reasons why an EDR agent may cease communicating. 

In 2000, I worked for an organization that had a rule that would detect significant time changes (more than a few minutes) on all of their Windows endpoints. The senior sysadmin and IT director would not do anything about the rules, and simply accepted that twice a year, we'd be inundated with these alerts for every endpoint. My point is that when you're talking about global/international infrastructures, or MDRs, having a means of detecting when an agent is not communicating is a tough nut to crack; do it wrong and don't plan well for edge cases, and you're going to crush your SOC. 

If you read the EDRSilencer Github page and p1k4chu's write-up closely, you'll see that EDRSilencer uses a hard-coded list of EDR executables, which doesn't include all possible EDR tools.

Fortunately, p1k4chu's write up provides some excellent insights as to how to detect the use of EDRSilencer, even pointing out specific audit configuration changes to ensure that the appropriate events are written to the Security Event Log.

As a bit of a side note, auditpol.exe is, in fact, natively available on Windows platforms.

Once the change is made, the two main events of interest are Security-Auditing/5441 and Security-Auditing/5157. P1k4chu's write-up also includes a Yara rule to detect the EDRSilencer executable, which is based in part on a list of the hard-coded EDR tools.
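As a rough sketch of what checking for those records might look like once auditing is enabled, the native wevtutil can be used to pull those two event IDs from the Security Event Log (requires admin rights; the count of 50 is arbitrary):

import subprocess

query = "*[System[(EventID=5441 or EventID=5157)]]"
result = subprocess.run(
    ["wevtutil", "qe", "Security", f"/q:{query}", "/f:text", "/c:50"],
    capture_output=True, text=True, check=False,
)
print(result.stdout or result.stderr)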

EDRNoiseMaker detects the use of EDRSilencer, by looking for filters blocking those communications.

Other "Opportunities"
There's another, perhaps more subtle way to inhibit communications off of an endpoint; modify the hosts file. Credit goes to Dray (LinkedIn, X) for reminding me of this sneaky way of inhibiting off-system communications. The difference is that rather than blocking by executable, you need to know where the communications are going, and add an entry so that the returned IP address is localhost.

I thought Dray's suggestion was both funny and timely; I used to do this for/to my daughter's computer when she was younger...I'd modify her hosts file right around 10pm, so that her favorite sites (MySpace, Facebook, whatever) resolved to localhost, but other sites, like Google, were still accessible.

One of the side effects would likely be the difficulty in investigating an issue like this; how many current or relatively new SOC/DFIR analysts are familiar with the hosts file? How many understand or know the host name resolution process followed by Windows? I think that the first time I became aware of MS's documentation of the host name resolution process was 1995, when I was attempting to troubleshoot an issue; how often is this taught in networking classes these days?
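For analysts doing triage, a minimal sketch of a hosts file check might look like the following; it simply flags entries that pin a name other than localhost to a loopback or null address (the path and the checks are illustrative, not definitive):

from pathlib import Path

HOSTS = Path(r"C:\Windows\System32\drivers\etc\hosts")

for line in HOSTS.read_text(errors="replace").splitlines():
    entry = line.split("#", 1)[0].strip()      # drop comments
    if not entry:
        continue
    parts = entry.split()
    ip, names = parts[0], parts[1:]
    flagged = [n for n in names if n.lower() not in ("localhost", "localhost.localdomain")]
    if ip.startswith(("127.", "0.0.0.0", "::1")) and flagged:
        print(f"possible redirect to loopback: {ip} -> {', '.join(flagged)}")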

Conclusion
Many of us have seen the use of offensive security tools (OSTs) by pen testers and threat actors alike, so how long do you think it will be before EDRSilencer, or something like it, makes its way into either toolkit? The question becomes, how capable is your team of detecting and responding to the use of such tools, particularly when used in combination with other techniques ("silence" EDR, then clear all Windows Event Logs)? Tools and techniques like this (EDRSilencer, or the technique it uses) shed a whole new light on initial recon (process/service listing, querying the Registry for installed applications, etc.) activities, particularly when they're intentionally and purposefully used to create situational awareness.

Wednesday, January 10, 2024

Human Behavior In Digital Forensics, pt III

So far, parts I and II of this series have been published, and at this point, there's something that we really haven't talked about.

That is, the "So, what?". Who cares? What are the benefits of understanding human behavior rendered via digital forensics? Why does it even matter?

Digital forensics can provide us insight into a threat actor's sophistication and situational awareness, which can, in turn, help us understand their intent. Are they new to the environment, and trying to get the "lay of the land", or are their actions extremely efficient, and do they appear to be going directly to the data they're looking for, as if they have been here before or had detailed prior knowledge?

Observing the threat actor's actions (or the impacts thereof) helps us understand not just their intent, but what else we should be looking for. For example, observing the Samas ransomware threat actors in 2016 revealed no apparent interest in data collection or theft; there was no searching or discovery, no data staging, etc. This is in contrast to the Non-PCI Case from my previous blog post; the threat actor was apparently interested in data, but did not appear to have an understanding of the infrastructure they'd accessed (searching for "banking" in a healthcare environment).

Carrying this forward, we can then use what we learn about the threat actor, by observing their actions and impacts, to better understand our own control efficacy; what worked, what didn't, and what can work better at preventing, or detecting and responding to, the threat actor?

Per the graphic to the left, understanding human behavior rendered via digital forensics is thought to provide insight into future attacks...but can it really? And if this is the case, how so?

Well, we've known for some time that there's really no single actor or group that focuses solely on one type of target. Consider this blog post from 2015, making it almost 9 yrs old at the time of this writing. The findings presented in the blog post remain true, and are repeated, even today. 

So, "profiling" a threat actor may not allow you to anticipate who (what target infrastructure) they're going to attack next, but within a limited window, it will provide a great deal of insight into how you can expect them to conduct the follow-on stages of an attack. The target may not be known, but the actions taken, particularly in the near term, will be illuminated by what was observed on a previous attack.

In 2016, the team I was with responded to about half a dozen Samas ransomware attacks, across a wide range of verticals; they were targeting vulnerable JBoss CMS systems, regardless of the underlying business. What we learned by looking across those multiple attacks allowed us to identify other potential targets, as well as respond to and shut down some attacks that were underway; we saw that the threat actors took an average of 4 months to go from initial access to deploying the ransomware. During this time, there was no apparent interest in data staging or theft; the intent appeared to be to identify "critical" systems within the infrastructure, and obtain the necessary privileges to deploy ransomware to those systems.

Reacting to Stimulus
Additional insight can be found by observing how a threat actor reacts to "stimulus". There may be times when a threat actor's activities are unfettered; they proceed about their actions without being inhibited or blocked in any way. They aren't blocked by EDR tools, nor AV. From these incidents, we can learn a good deal about the threat actor's playbook, and we may see how it evolves over time. However, there may be times where the threat actor encounters issues, either with security tooling blocking their efforts, or tools they bring in from the outside crashing and not executing on the endpoint. It's during these incidents that we get a more expansive view of the threat actor, as we observe their actions in response to stimulus.

While I was with Crowdstrike, we'd regularly "see", via the EDR telemetry, the actions taken by various threat actors when the Crowdstrike product blocked their processes from executing. In one instance, the Crowdstrike agent stopped the threat actor's process, and their reaction was to attempt to disable and remove Windows Defender. They then moved to another endpoint, and when they encountered the same issue, they attempted to remove an AV product that was not installed anywhere within the infrastructure. They finally moved to a third endpoint, and when their attempts continued to be blocked, they ran a batch file intended to remove several AV products, none of which were installed on the endpoint. Interestingly, they left the infrastructure without ever running a command to see what processes were running, nor what applications were installed.

We saw threat actors on endpoints monitored by the Crowdstrike agent doing queries to see if Carbon Black was installed. To be clear, the commands were not general, "...give me a list of processes..." commands, but were specific to identifying Carbon Black.

In another instance, we observed the threat actor land on a monitored endpoint, and begin querying other endpoints within the infrastructure to see if they were running the Falcon agent. They reached out to 15 endpoints, and while we could not see the responses, we knew from our dashboard that the agent was only on 4 of the queried endpoints. The threat actor then moved to one of the endpoints that did not have an agent installed. The interesting thing about this was that when they landed on the monitored endpoint, we saw no commands run nor any other indication of the threat actor checking that endpoint for the agent; it was as if they already knew. 

Even without EDR or AV blocking the threat actor's attempts, we may still be able to observe how the threat actor responds to stimulus. I've seen more than a few times where a threat actor will attempt to run something, and Windows Error Reporting kicks off because their EXE crashes. What do they do? I've seen ransomware threat actors unable to encrypt files on an endpoint, and running their tool with the "--debug" command switch, multiple times. They may also attempt to download newer or different copies of their tools, and try running them again. 

In other instances, I've seen commands fail, and the threat actor try something else. I've also seen tools crash, and the threat actor take no action. Seeing how a threat actor responds to the issues they encounter, watching their behavior and whether they encounter any issues, provides significant insight into their intent.

Other Aspects of the Attack
There are other aspects of an attack that we can look to to better understand the threat actor. For example, when the threat actor initially accesses an endpoint, how do they do so? RDP? MSSQL? Some other application, like TeamViewer?

Is the access preceded by failed login attempts, or does the source IP address for the threat actor's successful access to the system not appear on the list of IP addresses for failed login attempts?

Once they have access, what do they do, how soon/fast do they do it, and how do they go about their activities? If they access the endpoint via RDP, do they use all GUI tools, do they go to PowerShell, do they use cmd.exe, etc.? Do they use WSL, if it's installed? Do they use native utilities/LOLBins? Do they use batch files? 

Did they create any additional persistence? If so, what do they do? Create user accounts? Add services or Scheduled Tasks? Do they lay any "booby traps", akin to the Targeted Threat Actor from my previous blog post?

During their time on the endpoint, do they seem prepared, or do they "muck about", as if they're wandering around a dark room, getting the lay of the land? Do they make mistakes, and if so, how do they overcome them? 

Do they use LOLBins? Do they bring tools with them, and if so, are the tools readily available? When the Samas ransomware actors were attacking JBoss CMS systems in 2016, they used the JexBoss exploit, which was readily available. 

When they disconnect their access, how do they go about it? Do they simply break the connection and log out, or do they "salt the earth", clearing Windows Event Logs, deleting files, etc.?

An important caveat to these aspects is we have to be very careful about how we view and understand the actions we observe. There have been more than a few times where I've worked with analysts with red team experience, and have heard them say, "...if I were the attacker, I would have...". This sort of bias can be detrimental to understanding what's actually going on, and can lead to resources being deployed in the wrong direction. 

Conclusion
As Blade stated during the first movie (quote 3), "...when you understand the nature of a thing, you know what it's capable of." Understanding a threat actor's nature provides insight into what they're capable of, and what we should be looking for on endpoints and within the infrastructure.

This also helps us understand control efficacy; what controls did we have in place for prevention, detection, and response? Did they work, or did they fail? How could those controls be improved, or better implemented? 

Saturday, January 06, 2024

Human Behavior In Digital Forensics, pt II

On the heels of my first post on this topic, I wanted to follow up with some additional case studies that might demonstrate how digital forensics can provide insight into human activity and behavior, as part of an investigation.

Targeted Threat Actor
I was working a targeted threat actor response, and while we were continuing to collect information for scoping, so we could move to containment, we found that on one day, from one endpoint, the threat actor pushed their RAT installer to 8 endpoints, and had the installer launched via a Scheduled Task. Then, about a week later, we saw that the threat actor had pushed out another version of their RAT to a completely separate endpoint, by dropping the installer into the StartUp folder for an admin account.

Now, when I showed up on-site for this engagement, I walked into a meeting that served as the "war room", and before I got a chance to introduce myself, or find out what was going on, one of the admins came up to me and blurted out, "we don't use communal admin accounts." Yes, I know...very odd. No, "hi, I'm Steve", nothing like that. Just this comment about accounts. So, I filed it away.

The first thing we did once we got started was roll out our EDR tech, and begin getting insight into what was going on...which accounts had been compromised, which were the nexus systems the threat actor was operating from, how they were getting in, etc. After all, we couldn't establish a perimeter and move to containment until we determined scope, etc.

So we found this RAT installer in the StartUp folder for an admin account...a communal admin account. We found it because in the course of rolling out our EDR tech, the admins used this account to push out their software management platform, as well as our agent...and the initial login to install the software management platform activated the installer. When our tech was installed, it immediately alerted on the RAT, which had been installed by that point. It had a different configuration and C2 from what we'd seen from previous RAT installations, which appeared to be intentional. We grabbed a full image of that endpoint, so we were able to get information from VSCs, including a copy of the original installer file. 

Just because an admin told me that they didn't use communal admin accounts doesn't mean that I believed him. I tend to follow the data. However, in this case, the threat actor clearly already knew the truth, regardless of what the admins stated. On top of that, they planned out far enough in advance to have multiple means of access, including leaving behind "booby traps" that would be tripped through admin activity, but not have the same configuration. That way, if admins had blocked access to their first C2 IP address at the firewall, or were monitoring for that specific IP address via some other means, having the new, second C2 IP address would mean that they would go unnoticed, at least for a while.

What I took away from the totality of what we saw, largely through historical data on a few endpoints, was that the threat actor seemed to have something of a plan in place regarding their goals. We never saw any indication of search terms, wandering around looking for files, etc., and as such, it seemed that they were intent upon establishing persistence at that point. The customer didn't have EDR in place prior to our arrival, so there's a lot we likely missed out on, but from what we were able to assemble from host-based historical data, it seemed that the threat actor's plan, at the point we were brought in, was to establish a beachhead.

Pro Bono Legal Case
A number of years ago, I did some work on a legal case. The background was that someone had taken a job at a company, and on their first day, they were given an account and password on a system for them to use, but they couldn't change the password. The reason they were given was that this company had one licensed copy of an application, and it was installed on that system, and multiple people needed access.

Jump forward about a year, and the guy who got hired grew disillusioned, and went in one Friday morning, logged into the computer, wrote out a Word document where they resigned, effective immediately. They sent the document to the printer, then signed it, handed it in, and apparently walked out. 

So, as it turns out, several files on the system were encrypted with ransomware, and this guy's now-former employer claimed that he'd done it, basically "salting the earth" on his way out the door. There were suits and countersuits, and I was asked to examine the image of the system, after exams had already been performed by law enforcement and an expert from SANS.

What I found was that on Thursday evening, the day before the guy resigned, at 9pm, someone had logged into the system locally (at the console) and surfed the web for about 6 minutes. During that time, the browser landing on a specific web site caused the ransomware executable to be downloaded to the system, with persistence written to the user account's Run key. Then, when the guy returned the following morning and logged into the account, the ransomware launched, albeit without his knowledge. Using a variety of data sources, to include the Registry, Event Log, file system metadata, etc., I was able to demonstrate when the infection activity actually took place, and in this instance, I had to leave it up to others to establish who had actually been sitting at the keyboard. I was able to articulate a clear story of human activity and what led to the files being encrypted. As part of the legal battle, the guy had witness statements and receipts from the bar he had been at the evening prior to resigning, where he'd been out with friends celebrating. Further, the employer had testified that they'd sat at the computer the evening prior, but all they'd done was a short web browser session before logging out.

As far as the ransomware itself was concerned, it was purely opportunistic. "Damage" was limited to files on the endpoint, and no attempt was made to spread to other endpoints within the infrastructure. On the surface, what happened was clearly what the former employer described; the former employee came in, typed and printed their resignation, and launched the ransomware executable on their way out the door. However, file system metadata, Registry key LastWrite times, and browser history painted a different story altogether. The interesting thing about this case was that all of the activity occurred within the same user account, and as such, the technical findings needed to be (and were) supported by external data sources.

RAT Removal
During another targeted threat actor response engagement, I worked with a customer that had sales offices in China, and was seeing sporadic traffic associated with a specific variant of a well-known RAT come across the VPN from China. As part of the engagement, we worked out a plan to have the laptop in question sent back to the states; when we received the laptop, the first thing I did was remove and image the hard drive.

The laptop had run Windows 7, which ended up being very beneficial for our analysis. We found that, yes, the RAT had been installed on the system at one point, and our analysis of the available data painted a much clearer picture. 

Apparently, the employee/user of the endpoint had been coerced to install the RAT. Using all the parts of the buffalo (file system, WEVTX, Registry, VSCs, hibernation file, etc.), we were able to determine that, at one point, the user had logged into the console, attached a USB device, and run the RAT installer. Then, after the user had been contacted to turn the system over to their employer, we could clearly see where they made attempts to remove and "clean up" the RAT. Again, as with the RAT installation, the user account that performed the various "clean up" attempts logged in locally, and performed some steps that were very clearly manual attempts to remove and "clean up" the RAT by someone who didn't fully understand what they were doing. 

Non-PCI Breach
I was investigating a breach into corporate infrastructure at a company that was part of the healthcare industry. It turned out that an employee with remote access had somehow ended up with a keystroke logger installed on their home system, which they used to remote into the corporate infrastructure via RDP. This was about 2 weeks before they were scheduled to implement MFA.

The threat actor was moving around the infrastructure via RDP, using an account that hadn't accessed the internal systems, because there was no need for the employee to do so. This meant that on all of these systems, the login initiated the creation of the user profile, so we had a really good view of the timeline across the infrastructure, and we could 'see' a lot of their activity. This was before EDR tools were in use, but that was okay, because the threat actor stuck to the GUI-based access they had via RDP. We could see documents they accessed, shares and drives they opened, and even searches they ran. This was a healthcare organization, which the threat actor was apparently unaware of, because they were running searches for "password", as well as various misspellings of the word "banking" (i.e., "bangking", etc.).

The organization was fully aware that they had two spreadsheets on a share that contained unencrypted PCI data. They'd been trying to get the data owner to remove them, but at the time of the incident, the files were still accessible. As such, this incident had to be reported to the PCI Council, but we did so with as complete a picture as possible, which showed that the threat actor was both unaware of the files, as well as apparently not interested in credit card, nor billing, data. 

Based on the nature of the totality of the data, we had a picture of an opportunistic breach, one that clearly wasn't planned, and I might even go so far as to describe the threat actor as "caught off guard" that they'd actually gained access to an organization. There was apparently no research conducted, the breach wasn't intentional, and had all the hallmarks of someone wandering around the systems, in shock that they'd actually accessed them. Presenting this data to the PCI Council in a clear, concise manner led to a greatly reduced fine for the customer - yes, the data should not have been there, but no, it hadn't been accessed or exposed by the intruder. 

Wednesday, January 03, 2024

Human Behavior In Digital Forensics

I've always been a fan of books or shows where someone follows clues and develops an overall picture to lead them to their end goal. I've always liked the "hot on the trail" mysteries, particularly when the clues are assembled in a way to understand what the antagonist was going to do next, what their next likely move would be. Interestingly enough, a lot of the shows I've watched have been centered around the FBI, shows like "The X-Files" and "Criminal Minds". I know intellectually that these shows are contrived, but assembling a trail of technical bread crumbs to develop a profile of human behavior is a fascinating idea, and something I've tried to bring to my work in DFIR.

Former FBI Supervisory Special Agent and Behavioral Profiler Cameron Malin recently shared that his newest endeavor, Modus Cyberandi, has gone live! The main focus of his effort, cyber behavior profiling, is right there at the top of the main web page. In fact, the main web page even includes a brief history of behavioral profiling.

This seems to be similar to Len Opanashuk's endeavor, Motives Unlocked, which leads me to wonder, is this a thing?

Is this something folks are interested in?

Apparently, it is, as there's research to suggest that this is, in fact, the case. Consider this research paper describing behavioral evidence analysis as a "paradigm shift", or this paper on idiographic digital profiling from the Journal of Digital Forensics, Security, and Law, to name but a few. Further, Google lists a number of (mostly academic) resources dedicated to cyber behavioral profiling.

This topic seems to be talked about here and there, so maybe there is an interest in this sort of analysis, but the question is, is the interest more academic, is the focus more niche (law enforcement), or is this something that can be effectively leveraged in the private sector, particularly where digital forensics and intrusion intelligence intersect?

I ask the question, as this is something I've looked at for some time now, in order to not only develop a better understanding of targeted threat actors who are still active during incident response, but to also determine the difference between a threat actor's actions during the response, and those of others involved (IT staff, responders, legitimate users of endpoints, etc.). 

In a recent comment on social media, Cameron used the phrase, "...adversary analysis and how human behavior renders in digital forensics...", and it occurred to me that this really does a great job of describing going beyond just individual data points and malware analysis in DFIR, particularly when it comes to hands-on targeted threat actors. By going beyond just individual data points and looking at the multifaceted, nuanced nature of those artifacts, we can begin to discern patterns that inform us about the intent, sophistication, and situational awareness of the threat actor.

To that end, Joe Slowik has correctly stated that there's a need in CTI (and DFIR, SOC, etc.) to view indicators as composite objects, that things like hashes and IP addresses have greater value when other aspects of their nature are understood. Many times we tend to view IP addresses (and other indicators) one-dimensionally; however, there's so much more about those indicators that can provide insight into the threat actor behind them, such as when, how, and in what context that IP address was used. Was it the source of a login, and if so, what type? Was it a C2 IP address, or the source of a download or upload? If so, how...via HTTP, curl, msiexec, BITS, etc.?

Here's an example of an IP address; in this case, 185.56.83.82. We can get some insight on this IP address from VirusTotal, enough to know that we should probably pay attention. However, if you read the blog post, you'll see that this IP address was used as the target for data exfiltration. 

Via finger.exe.

Add to that the use of the LOLBin is identical to what was described in this 2020 advisory, and it should be easy to see that we've gone well beyond just an IP address, by this point, as we've started to unlock and reveal the composite nature of that indicator. 

The point of all this is that there's more to the data we have available than just the one-dimensional perspective through which we're used to viewing that data. Now, if we begin to incorporate other data sources that are available to us (EDR telemetry, endpoint data and configurations, etc.), we'll begin to see exactly how, as Cameron stated, human behavior renders in digital forensics. Some of the things I've pursued and been successful in demonstrating during previous engagements include hours of operations, preferred TTPs and approaches, enough so to separate the actions of two different threat actors on a single endpoint.

I've also gained insight into the situational awareness of a threat actor by observing how they reacted to stimulus; during one incident, the installed EDR framework was blocking the threat actor's tools from executing on different endpoints. The threat actor never bothered to query any of the three endpoints to determine what was blocking their attempts; rather, on one endpoint, they attempted to disable Windows Defender. On the second endpoint, they attempted to delete a specific AV product, without ever first determining if it was installed on the endpoint; the batch file they ran to delete all aspects and variations of the product was not preceded by any query commands. Finally, on the third endpoint, the threat actor ran a "spray-and-pray" batch file that attempted to disable or delete a variety of products, none of which were actually installed on the endpoint. When none of these succeeded in allowing them to pursue their goals, they left.

So, yes, viewed through the right lens, with the right perspective, human behavior can be discerned through digital forensics. But the question remains...is this useful? Is the insight that this approach provides valuable to anyone?

Sunday, December 31, 2023

2023 Wrap-up

Another trip around the sun is in the books. Looking back over the year, I thought I'd tie a bow on some of the things I'd done, and share a bit about what to expect in the coming year.

In August, I released RegRipper 4.0. Among the updates are some plugins with JSON output, and I found a way to integrate Yara into RegRipper.

I also continued updating Events Ripper, which I've got to say, has proven (for me) time and again to be well worth the effort, and extremely valuable. As a matter of fact, within the last week or so, I've used Events Ripper to great effect, specifically with respect to MSSQLServer, not to "save my bacon", as it were, but to quickly illuminate what was going on on the endpoint being investigated. 

For anyone who's followed me for a while, either via my blog or on LinkedIn or X, you'll know that I'm a fan of (to steal a turn of phrase from Jesse Kornblum) "using all the parts of the buffalo", particularly when it comes to LNK file metadata.

For next year, I'm working on an LNK parser that will allow you to automatically generate a bare-bones Yara rule for detecting other similar LNK files (if you have a repository from a campaign), or submitting as a retro-hunt to VirusTotal. 

Finally, I'm working on what I hope to be the first of several self-published projects. We'll see how the first one goes, as the goal is to provide the foundation of other subsequent projects.

That being said, I hope everyone had a great 2023, and that you're looking forward to a wonderful 2024...even though for many of us, it's probably going to be April before we realize that we're writing 2023 on checks, etc.

Monday, December 18, 2023

Round Up

MSSQL is still a thing
TheDFIRReport recently posted an article regarding BlueSky ransomware being deployed following MSSQL being brute forced. I'm always interested in things like this because it's possible that the author will provide clear observables so that folks can consider the information in light of their infrastructure, and write EDR detections, or create filter rules for DFIR work, etc. In this case, I was interested to see how they'd gone about determining that MSSQL had been brute forced.

You'll have to bear with me...this is one of those write-ups where images and figures aren't numbered. However, in the section marked "Initial Access", there's some really good information shared, specifically where it says, "SQL Server event ID 18456 Failure Audit Events in the Windows application logs:"...specifically, what they're looking at is MSSQLServer/18456 events in the Application Event Log, indicating a failed login attempt to the server (as opposed to the OS). This is why I wrote the Events Ripper mssql.pl plugin. I'd seen a number of systems running Veeam and MSSQL, and needed a straightforward, consistent, repeatable means to determine if  a compromise of Veeam was the culprit, or if something else had occurred.
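A minimal sketch of the same idea, assuming an intermediate events file in the five-field TLN format (time|source|host|user|description); it simply tallies the descriptions of MSSQL 18456 events so that brute force attempts stand out from the occasional failed login:

import sys
from collections import Counter

counts = Counter()
with open(sys.argv[1], "r", encoding="utf-8", errors="replace") as fh:
    for line in fh:
        fields = line.rstrip("\n").split("|", 4)
        # hedge on exact field layout: just require a TLN line mentioning MSSQL and 18456
        if len(fields) == 5 and "18456" in line and "MSSQL" in line.upper():
            counts[fields[4]] += 1

for desc, n in counts.most_common(10):
    print(f"{n:6d}  {desc}")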


LNK Files

TheDFIRSpot had an interesting write-up on using LNK files in your investigations, largely from the perspective of determining what a user or threat actor may have done or accessed while logged in via the Windows Explorer shell. Lining up creation and last modification times of shortcuts/LNK files in the account's Recent folder can provide insight into what might have occurred. Again, keep in mind that for this to work, for the LNK files to be present, access was obtained via the shell (Windows Explorer). If that's the case, then you're likely going to also want to look at the automatic JumpLists, as they will provide similar information; together with the LNK files in the Recent folder, and the RecentDocs and shellbags keys for the account, they can provide a great deal of insight into, and validation of, activity. Note that automatic JumpLists are OLE/structured storage format files, with the individual streams consisting of data that follows the LNK format.
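A minimal sketch of lining those times up; the path is illustrative (point it at an exported copy of the Recent folder when working from an image), and the interpretation in the comment follows the write-up's reasoning:

from pathlib import Path
from datetime import datetime, timezone

recent = Path(r"C:\Users\user\AppData\Roaming\Microsoft\Windows\Recent")   # placeholder path

def utc(ts):
    return datetime.fromtimestamp(ts, tz=timezone.utc).strftime("%Y-%m-%d %H:%M:%S")

for lnk in sorted(recent.glob("*.lnk"), key=lambda p: p.stat().st_mtime):
    st = lnk.stat()
    # creation time ~= first time the target was opened; modification ~= most recent time
    print(f"{utc(st.st_ctime)}  {utc(st.st_mtime)}  {lnk.name}")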

While I do agree that blog posts like this are extremely valuable in reminding us of the value/importance of certain artifacts, we need to take an additional step and normalize a more comprehensive approach; that is, we need to consistently drive home the point that we shouldn't be looking at any single artifact. We need to normalize and reinforce the understanding that there is no go-to artifact for any evidence category; instead, we should be considering artifact constellations, and that constellation will depend upon the base OS version and software load of the endpoint. Understanding default constellations, as part of a base software load (OS, minimal applications), is imperative, as is having a process to build out that constellation based on additional installed software (Sysmon, LANDesk Software Monitoring, etc.).

Something to keep in mind is that access via the shell has some advantages for the threat actor, one being that using GUI tools means that EDR is blind to most activity. EDR tools are great at recording process creation events, for example, but when the process (explorer.exe) already exists, what happens via the process that does not involve cmd.exe, PowerShell, WSL, or WSA (Windows Subsystem for Android) may not be visible to EDR. Yes, some EDR frameworks also monitor network connections, as well as Registry and file system modifications, but by necessity, those are often filtered. When a GUI tool is opened, EDR based on process creation events is largely blind to activity that occurs via drop-down boxes, check boxes, text fields, and buttons being pushed.

For example, check out this recent Huntress blog where curl.exe was observed being used for data exfil (on the heels of this Huntress blog showing finger.exe being used for data exfil). In the curl blog, there's a description of MemProcFS being used for memory dumping; using a GUI tool essentially "blinds" EDR, because you (the analyst) can't see which buttons the threat actor pushes. We can assume that the 4-digit number listed in the minidump file path was the process ID, but the creation of that process was beyond the data retention window (the endpoint had not been recently rebooted...), so we weren't able to verify which process the threat actor targeted for the memory dump.

Malware Write-ups
Malware and threat actor write-ups need to include clear observables so that analysts can implement them, whether they're doing DFIR work, threat hunting, or writing detections. Here is Simone Kraus's write-up on the Rhysida ransomware; I've got to tell you, it's chock full of detection and hunting opportunities. Like many write-ups, the images and listings aren't numbered, but about 1/4 of the way down the blog post, there's a listing of reg.exe commands meant to change the wallpaper to the ransom note, many of which are duplicates. What I mean by that is that you'll see a "cmd /c reg add" command, followed by a "reg.exe add" command with the same arguments in the command line. As Simone says, these are commands that the ransomware would execute; they're embedded in the executable itself. This is something we see with RaaS offerings, where the commands for disabling services and inhibiting system recovery are baked into the EXE. In 2020, a sample of the Sodinokibi ransomware contained 156 unique commands, just for shutting off various Windows services. If your EDR tech allows for monitoring the Registry and killing processes at the endpoint, this may be a good opportunity to enable automated response rules. Otherwise, detecting these commands can lead to isolating endpoints, or the values themselves can be used for threat hunting across the enterprise.

Something else that's interesting about the listing is that the first two entries are misspelled; since the key path doesn't exist by default, the command will fail. It's likely that Simone simply cut-n-pasted these commands, and since they're embedded within the EXE, they likely will not be corrected without the EXE being recompiled. This misspelling provides an opportunity for a high-fidelity threat hunt across EDR telemetry.

Monday, December 11, 2023

...and the question is...

I received an interesting question via LinkedIn not long ago, but before we dive into the question and the response...

If you've followed me for any amount of time, particularly recently, you'll know that I've put some effort forth in correcting the assumption that individual artifacts, particularly ShimCache and AmCache, provide "evidence of execution". This is a massive oversimplification of the nature and value of each of these artifacts, and viewing single artifacts in isolation to establish a finding is an extremely poor analytic process.

Okay, so now, the question I was asked was, what is my "go to" artifact to demonstrate evidence of execution?

First, let me say, I get it...I really do. During my time in the industry, I've heard customers ask, "..what is the product I need to purchase to protect my infrastructure?", so an analyst asking, "...what is the artifact that illustrates evidence of execution?" is not entirely unexpected. After all, isn't that the way things work sometimes? What is the one thing, which button do I push, which is the lever I pull, what is the one action I need to take, or one choice I need to make to move forward?

So, in a way, the question of the "go to" artifact to demonstrate...well, anything...is a trick question. Because there should not be one. Looking just at "evidence of execution", some might think, "...well, there's Prefetch files...right?", and that's a good option, but what do we know about application prefetching? 

We know that the prefetcher monitors the first 10 seconds of execution, and tracks files that are loaded.

We know that beginning with Windows 8, Prefetch files can hold up to 8 "last run" times, embedded within the file itself. 

We know that application prefetching is enabled by default on workstations, but not servers. 
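Given that last point, before you go looking for Prefetch files at all, it's worth verifying that application prefetching is actually enabled on the endpoint in question. On a live Windows box, a minimal Python sketch of that check might look like the following (the EnablePrefetcher value is the one to look at: 0 = disabled, 1 = application, 2 = boot, 3 = both); for a dead-box exam, you'd check the same value in the System hive:

# Quick check (live Windows endpoint): is application prefetching enabled?
# EnablePrefetcher: 0 = disabled, 1 = application, 2 = boot, 3 = both.
import winreg

key_path = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\PrefetchParameters"
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
    value, _ = winreg.QueryValueEx(key, "EnablePrefetcher")

print(f"EnablePrefetcher = {value}")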

Okay, this is great...but what happens after those first 10 seconds? What I mean is, what happens if code within the program throws an error, doesn't work, or the running application is detected by AV? Do we consider that the application "executed" only if it started, or do we consider "evidence of execution" to include the application completing, and impacting the endpoint in some manner?

So, again, the answer is that there is no "go to" artifact. Instead, there's a "go to" process, one that includes multiple, disparate data sources (file system, Registry, WEVTX, SRUM, etc.), normalized and correlated based on some common element, such as time. Windows Event Log records include time stamps, as do MFT records, Registry keys and some values.
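Just to illustrate what that normalization looks like, here's a minimal sketch: each parser emits events keyed on a common epoch time, and everything gets sorted together. The five-field layout shown here is just one way to structure it, and the entries themselves are purely illustrative:

# Minimal sketch of normalizing disparate data sources into one timeline:
# each parser emits (epoch_time, source, system, user, description) tuples,
# and everything is sorted together on the common time element.
def tln(epoch, source, system, user, description):
    return (epoch, f"{int(epoch)}|{source}|{system}|{user}|{description}")

events = [
    # illustrative entries only; real entries come from the individual parsers
    tln(1702300000, "EVTX", "HOST01", "-",      "Security/4624 logon, type 10"),
    tln(1702300042, "MFT",  "HOST01", "-",      "C:\\Windows\\Temp\\a.exe ($SI created)"),
    tln(1702300050, "REG",  "HOST01", "jsmith", "Run key value added: a.exe"),
    tln(1702300065, "SRUM", "HOST01", "jsmith", "a.exe network usage recorded"),
]

for _, line in sorted(events):
    print(line)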

Our analytic process needs to encompass two concepts...artifact constellations, and validation. First off, we don't ever look at single artifacts to establish findings; rather, we need to incorporate multiple, disparate data sources, through a process of parsing, normalization, decoration and enrichment to truly determine the context of an event. Looking at just a log entry, or entry from EDR telemetry by itself does not truly tell us if something executed successfully. If it was launched, did it complete successfully? Did it have the intended impact on the endpoint, leaving traces of its execution?

Second, artifact constellations lead to validation. By looking at multiple, disparate data sources, we can determine if what we thought was executed, what appeared to have been executed, was able to "survive". For example, I've seen malware launched, visible through EDR telemetry and log sources, that never succeeded; each time it launched, it generated an error, per Windows Error Reporting. I've seen malicious installation processes (MSI files) fail to install. I've seen threat actors push their ransomware EXE out to multiple endpoints and run each instance, resulting in files on those systems being encrypted, but be unable to get the executable to run on the nexus endpoint; I've seen threat actors run their ransomware EXE multiple times with the "--debug" option, and the files on that endpoint were never encrypted.

If you're going to continue to view single artifacts in isolation, then please understand the nature and nuance of the artifacts themselves. Thoroughly review (and understand) this research regarding AmCache, as well as Mandiant's findings regarding ShimCache. However, over the years, I've found it so much more straightforward to incorporate these artifacts into an overall analysis process, as it continually demonstrates the value of the individual artifacts, as well as provides insights into the intent and capabilities of the threat actor.

Tuesday, November 28, 2023

Roll-up

One of the things I love about the industry is that it's like fashion...given enough time, the style that came and went comes back around again. Much like the fashion industry, we see things time and again...just wait.

A good example of this is the finger application. I first encountered finger toward the end of 1994, during my first 6 months in grad school. I was doing some extracurricular research, and came across a reference to finger as making systems vulnerable, but it wasn't clear why. I asked the senior sysadmin in our department; they looked at me, smiled, and walked away.

Jump forward about 29 years to just recently, and I saw finger.exe, on a Windows system, used for data exfiltration. John Page/hyp3rlinx wrote an advisory (published 2020-09-11) describing how to do this, and yes, from the client side, what I saw looked like it was taken directly from John's advisory.

What this means to us is that the things we learn may feel like they fade with time, but wait long enough, and you'll see them, or some variation, again. I've seen this happen with ADSs; more recently, the specific MotW variations have taken precedence. I've also seen it happen with shell items (i.e., the "building blocks" of LNK files, JumpLists, and shellbags), as well as with the OLE file format. You may think, "...man, I spent all that time learning about that thing, and now it's no longer used..."; wait. It'll come back, like bell bottoms.

Deleted Things
In DFIR, we often say that just because you delete something, that doesn't mean that it's gone. For files, Registry keys and values, etc., this is all very true.

Scheduled Tasks
A while back, I blogged about an ops debrief call that I'd joined, and listened to an analyst discuss their findings from their engagement. At the beginning of the call, they'd mentioned something, almost in passing, glossing over it like it was inconsequential; however, some research revealed that it was actually an extremely high-fidelity indicator based on specific threat actor TTPs.

In many instances, threat actors will create Scheduled Tasks as a means of persisting on endpoints. In fact, not too long ago, I saw a threat actor create two Scheduled Tasks for the same command; one to run based on a time trigger, and the other to run ONSTART. 

In the case this analyst was discussing, the threat actor had created a Scheduled Task on a Windows 7 (like I said, this was a while back) system. The task was for a long-running application; essentially, the application would run until it was specifically stopped, either directly or by the system being turned off. Once the application was launched, the threat actor deleted the Scheduled Task, removing both the XML and binary task files; Windows 7 used a combination of the XML-format task files we see today on Windows 10 and 11 endpoints, as well as the binary *.job file format we saw on Windows XP.

Volume Shadow Copies
About 7 years ago, I published a blog post that included a reference to a presentation from 2016, and to a Carbon Black blog post that had been published in August 2015. The short version of what was discussed in both was that a threat actor performed the following:

1. Copied their malware EXE to the root of a file system.
2. Created a Volume Shadow Copy (VSC).
3. Mounted the VSC they'd created, and launched the malware/Trojan EXE from within the mounted VSC.
4. Deleted the VSC they'd created, leaving the malware EXE running in memory.

I tried replicating this...and it worked. Not a great persistence mechanism...reboot the endpoint and it's no longer infected...but fascinating nonetheless. What's interesting about this approach is that if the endpoint hadn't had an EDR agent installed, all a responder would have available to them by dumping process information from the live endpoint, or by grabbing a memory dump, is a process command line with a file path that didn't actually exist on the endpoint. 
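One quick live-response check this suggests (a sketch only, and it assumes the third-party psutil module is available): flag any running process whose image path or command line references a shadow copy device path, then compare against the output of "vssadmin list shadows" to see whether that VSC still exists:

# Quick live-response sketch: flag running processes whose image path or
# command line references a volume shadow copy device path.
import psutil

MARKER = "harddiskvolumeshadowcopy"

for proc in psutil.process_iter(attrs=["pid", "name", "exe", "cmdline"]):
    info = proc.info
    fields = [info.get("exe") or ""] + (info.get("cmdline") or [])
    haystack = " ".join(fields)
    if MARKER in haystack.lower():
        print(f"PID {info['pid']} {info['name']}: {haystack}")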

WSL
We've known about the Windows Subsystem for Linux (WSL) for a while. 

Not too long ago, an academic paper addressing WSL2 forensics was published illustrating artifacts associated with the installation and use of Linux distributions. The authors reference the use of RegRipper (version 3.0, apparently) in several locations, particularly when examining the System and Software Registry hives; for some reason, they chose to not use RegRipper to parse the AmCache.hve file. 

Now, let's keep our eyes open for a similar paper on the Windows Subsystem for Android...just sayin'...

Friday, November 10, 2023

Roll-up

I don't like checklists in #DFIR. 

Rather, I don't like how checklists are used in #DFIR. Too often, they're used as a replacement for learning and knowledge, and looked at as, "...if I do just this, I'm good...". Nothing could be further from the truth, which is why even in November 2023, we still see analysts retrieving just the Security, Application, and System Event Logs from Windows 10 & 11 endpoints.

I'm also not a fan of lists in #DFIR. Rather than a long list of links with no context or insight, I'd much rather see just a few links with descriptions of how useful they are (or, they aren't, as the case may be...), and how they were incorporated into an analysis workflow.

SRUM DB
Shanna Daly recently shared some excellent content regarding the SRUM DB; it was not only enjoyable to read, but thorough, particularly regarding the fact that the database contents are written on an hourly basis. As such, this data source is not a good candidate for inclusion in a timeline, but it is an excellent pivot point.

This is where timelines and artifact constellations cross paths, and lay a foundation for validation of findings. Most analysts are familiar with the ShimCache and AmCache artifacts, but many still mistakenly believe that these are "evidence of execution"; in fact, the recently published Windows Forensics Analysts Field Guide states this, as well. So, what happens is that analysts will see an entry in either artifact for apparent malware and declare victory, basing their finding on that one artifact, in isolation. All either of these artifacts tells us definitively is that the file existed on the endpoint; we need additional information, other elements of the constellation, to confirm execution. So, there's Prefetch files...unless you're examining a server. One place to pivot to for validation is the SRUM DB, which Shanna does a thorough job of addressing and describing.

Dev Drive
Grzegorz recently tweeted regarding Windows "dev drive" (LinkedIn post here), a capability that allows a developer to optimize an area of their hard drive for storage operations. Apparently, part of this allows the developer to "disallow" AV, which sounds similar to designating exclusions in Windows Defender. However, in this case, it sounds as if it's for all AV, not just Defender. 

MS provides information on "dev drive", including describing how to enable it via GPO.

Finger
I was doing some research recently for a blog post on the use of finger.exe for both file download, as well as exfil, and ran across a couple of very similar articles and posts, all of which seemed to be derived from a single resource (from hyp3rlinx).

And yes, you read that right...the LOLBin/LOLBAS finger.exe used for data exfil. When I was in graduate school and working on my master's thesis (late '95 through '96), I was teaching myself Java programming in order to facilitate data collection for my thesis. As part of my self-study, I wrote networking code to implement SMTP, finger, etc., clients on Windows (at the time, Windows 3.11 for Workgroups and Windows 95). However, at the time, I wasn't as focused on things like data exfil and digital forensics...rather, I was focused on implementing networking sockets and protocols to replicate various client applications. What's wild about this one is that I don't think I ever expected to see it "in the wild", but in October 2023, I did. 

Actively used, "in the wild". 

And to be quite honest, it's pretty freaking cool!  

Ancillary to this, something I've encountered/been thinking about for some time now is that there are things that have been around for years that have confounded current analysis and led to mistakes via assumptions. For example, about 40 or so years ago, I took a BASIC programming course (on the Apple IIe), and one of the first things we learned was preceding lines to be "commented out" with "REM". Commenting lines was part of the formal instruction; using "REM" as a "poor man's debugger" was part of the informal instruction. Anyway, I've seen "obfuscated" code that contained long strings of what looked like base64-encoded lines, only to see them preceded by "REM" or an apostrophe. And yet, instead of skipping those lines, some analysts have been bogged down trying to decode the apparent base64-encoded strings.
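The fix is simple enough; here's a quick Python sketch of the point...drop the commented-out lines before you burn time trying to decode them (the sample content is made up):

# Simple sketch: before trying to decode "obfuscated" script content, drop
# the lines that are commented out (REM or a leading apostrophe); they're
# never executed, no matter how much they look like base64.
def live_lines(script_text):
    for line in script_text.splitlines():
        stripped = line.strip()
        if not stripped:
            continue
        if stripped.upper().startswith("REM ") or stripped.startswith("'"):
            continue
        yield line

sample = """REM aGVsbG8gd29ybGQsIHRoaXMgbGluZSBpcyBuZXZlciBydW4=
' VGhpcyBvbmUgaXNuJ3QgZWl0aGVy
echo actual command here"""

for line in live_lines(sample):
    print(line)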

Another example is NTFS alternate data streams (ADSs). This NTFS file system artifact has been around since...well...NTFS, but there are more than a few analysts who haven't experienced them and aren't familiar with them. 

The point of this isn't to point out shortcomings in training, education, experience, or knowledge; rather, it's that threat actors can use (and have used) something "old" with great success, because it's not recognized by current analysts. Think about it for a second...think DOS batch files are "lame" when compared to PowerShell or some more "modern" scripting languages? They may be, but they work really well. There are two Windows Event Logs that PowerShell code can end up in, but batch files don't get "recorded" anywhere. Further, there are some pretty straightforward things you can do with DOS batch files that will not only work, but have the added benefit of confusing the crap out of "modern" analysts.

So, here's something to think about...there are a lot of different ways to exfiltrate data as part of recon activities, but one that folks may not be expecting is to do so via finger.exe. Do you employ EDR technology, or have an MDR? If so, how often is finger.exe launched in your infrastructure? Would it be a good idea to have a rule that simply monitors for the execution of that LOLBAS?
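If you're collecting Sysmon data, a starting point for that hunt might look something like the following sketch; again, it assumes the third-party python-evtx module, and the exported log file name is just illustrative:

# Starting point for a hunt: how often does finger.exe actually run?
# Walk an exported Sysmon Operational log for process creation (event ID 1)
# records that reference finger.exe.
from Evtx.Evtx import Evtx

with Evtx("Microsoft-Windows-Sysmon%4Operational.evtx") as log:
    for record in log.records():
        xml = record.xml()
        if ">1</EventID>" in xml and "finger.exe" in xml.lower():
            print(xml)

If the answer to "how often?" turns out to be "never", then any execution of finger.exe in your environment becomes a pretty high-fidelity signal.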