Securing AWS: Discover Cloud Vulnerabilities

This webcast was originally published on November 11, 2022.

In this video, Beau discusses securing AWS, focusing on lateral movement in the cloud. He explains initial access and post-compromise techniques, demonstrating AWS CLI usage for enumerating resources and escalating privileges. Throughout the webcast, Beau highlights the importance of understanding both identity and resource-based policies for effective penetration testing in cloud environments.

Highlights

  • The webinar focuses on advanced AWS security, specifically on lateral movement and post-compromise actions in the cloud.
  • It highlights the importance of understanding both initial access and subsequent potential actions an attacker can take using stolen credentials.
  • The discussion includes practical demonstrations using AWS CLI for command execution, illustrating real-world attack methodologies in cloud environments.

Transcript

Beau Bullock

Today I’m going to be talking about securing AWS. About a year ago I did a webcast on getting started pen testing Azure. That particular webcast was very heavy on the O365 side and a little bit more on the initial access side.

What I really want to focus on today is, first off, obviously AWS, but secondly, I wanted to talk more about what lateral movement looks like in the cloud. I think that throughout today’s webcast you should get at least a decent picture of how initial access looks.

And after an attacker gets credentials, what can they do with those, what does that look like, and what are some potential things you can look for in organizations if you’re a pen tester, or in your own environment as well?

Quick note about me: my name is Beau, I am a pen tester and red teamer here at Black Hills. I’ve been here for about eight and a half years now. I also wrote a cloud hacking training course called Breaching the Cloud that I teach.

I have a live run of that class coming up in December; you can sign up for it through Antisyphon if you enjoy this cloud talk. A lot of the stuff that I’m talking about today honestly comes from that class.

So if you enjoyed this webcast, you might like the class. I have also done a number of different talks, and I just recently started doing more YouTube videos again.

Years ago I used to do a show called Tradecraft Security Weekly, where I would do these ten-minute videos on basically any sort of tradecraft topic around security and hacking.

And I kind of fell off the bandwagon on those a bit, and I just recently started doing similar types of videos again. So that’s on YouTube. But yeah, that’s me, basically.

All right, so let’s talk about the roadmap. What are we going to talk about today? I actually have quite a bit to talk about and I’m going to try to get through it all. I think in the Azure webcast I did, I ended up going over quite a bit.

So hopefully I can condense this enough into this hour. This time I think I timed it correctly, roadmap-wise.

First off, we’re going to do a quick intro into AWS and authentication: talk about how IAM users are authenticating and what mechanisms they’re using.

Then we’re going to do a very short section on common initial access methods. Again, I want the focus of today to be more post-compromise. I want to talk about: we have credentials, what could an attacker do with those?

I’ll talk about some common initial access methods. Then we’ll move on to the post-compromise recon section. I’m going to give you a quick demo of the AWS CLI:

how to run command-line commands from a console against the AWS environment to enumerate various resources, to enumerate roles for users, and to help you determine a potential attack path.

Then we’re going to move on to privilege escalation: what can we do in terms of escalating privileges in an environment, specifically cloud environments? We’ll talk about some common privilege escalation and lateral movement techniques specific to the cloud.

And then we’ll do a quick scanning tools section, so I’ll talk about some of the ways that we can help automate a lot of our scanning. Then finally, I’m going to try to do a live demo at the end of this where I’m going to do a multi-resource pivot.

This is going to be something that hopefully helps connect all the pieces throughout today to help show what would happen if an attacker got access to a set of credentials. And how would that look in a lateral movement scenario across multiple different types of resources in the cloud?

First up, Amazon Web Services. Why are we talking about this today? Like I said, I did an entire webcast on Azure previously. This one I wanted to focus on AWS.

The big difference, in my own point of view, is that generally we see a lot of Azure environments really focus heavily on the SaaS side: things like productivity software, Outlook, the email side, SharePoint, that kind of thing.

On AWS, we typically see a lot of infrastructure. It is by far one of the most popular platforms for cloud-based infrastructure.

Think things like virtual machines, storage, networking, databases; it has a ton of different services. And there’s an interesting view when it comes to looking at how these resources can be exploited and how they are unique in terms of what information you can gather from them, as opposed to a general old-school on-prem virtual machine. What are the differences?

Many of the security features are baked in as well. I’m going to give you a quick high-level overview of what authentication looks like in AWS. First off, there are generally two primary methods for setting up authentication for a user.

There’s programmatic access, and then you have your standard AWS management console access. The difference between these two is basically GUI access, a graphical user interface, for the most part.

On the AWS management console side, that’s the way you would imagine logging into a web portal and being able to manage resources. On the programmatic access side, we’re talking about submitting requests to an API.

When you generate a programmatic set of access keys, you get an access key ID and a secret access key. What these allow you to do is effectively run commands, from either an API perspective or the command-line tools, that allow you to interact with resources within your AWS account.

This is commonly what we find on penetration tests. These are commonly the credentials we’re looking for in most cases and the ones that we oftentimes stumble upon.

When we look at the picture on the screen here, you can see that we have an access key ID and a secret access key. Now, this specific set of access keys is what is known as a temporary set of security credentials.

The reason I know that is because the first four letters start with ASIA. In general, long-term access keys will start with AKIA. I’ll talk about the difference between long-term and temporary momentarily, and that’ll come into play and be very important later on in the demo as well.
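The prefix convention described above is easy to turn into a quick triage check. Here's a minimal sketch; the key IDs used below are made-up examples following AWS's documented format:

```python
# Classify an AWS access key ID by its prefix. These prefixes are
# documented AWS conventions: AKIA = long-term IAM user key,
# ASIA = temporary (STS) security credentials.
def classify_access_key(access_key_id: str) -> str:
    prefixes = {
        "AKIA": "long-term access key (IAM user)",
        "ASIA": "temporary security credentials (STS)",
    }
    return prefixes.get(access_key_id[:4], "unknown or other key type")

print(classify_access_key("AKIAIOSFODNN7EXAMPLE"))  # → long-term access key (IAM user)
print(classify_access_key("ASIAXYZ1234567EXAMPLE"))  # → temporary security credentials (STS)
```

This matters operationally: a temporary key implies there is also a session token you need before the credentials will work.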

But for now, just know that there are ways that you can generate temporary sets of credentials, similar to the way you would have a session token mechanism for an application.

So the management console, this is what it looks like. If you were to log in to the web portal, you’d be presented with an interface that allows you to manage resources. This is something that, if we’re trying to phish for credentials, might be what we end up looking at as an attacker.

So, let’s say we’re targeting an AWS admin and we submit a phishing campaign, to them, that targets their credentials and we are successful in phishing their credentials.

Potentially we could log into this management console and be operating from here as opposed to the command line. Initial access: again, I’m going to make this somewhat short and sweet on the initial access side because I don’t want to focus too much on just initial access today.

I really want to focus heavily on the post-compromise. Also, by the way, of all the pictures you see, I would say 95% of them are AI art generated with Midjourney. I have kind of this new addiction to creating AI art for slides.

So when it comes to initial access, there’s a few common techniques for getting initial access into a cloud environment, and these are four that I’m going to run through in the next few slides here.

So first off, public accessibility of resources: having something exposed externally that shouldn’t be. Leaking secrets in code repositories: this is something that happens pretty commonly, actually, something that still comes up on occasion.

And phishing for creds, like I mentioned, with the concept of sending a phishing email to get credentials from an admin or somebody who is using the application or using AWS. And then resource exploitation.

So what can we do in terms of actually exploiting resources within a cloud environment as well? So first up, public accessibility of resources. The thing about cloud environments is that it’s very easy to configure them to be publicly exposed. In traditional internal networks,

you don’t typically have to worry about every web application you create or every file share you create being publicly accessible to the entire world. When it comes to cloud environments, though, you can easily deploy an S3 bucket, a storage resource, within AWS and configure it to be publicly exposed.

And a lot of the resources within AWS are the same way, where it’s very easy to misconfigure them to be public. So what happens is an organization will accidentally leak secrets occasionally because either (a) they didn’t mean to make it public, or (b) they made it public but then forgot it was public and then put something sensitive in it.

And then another example would be an insecure web app or function exposed externally; we’re talking about Lambda functions. One of the pieces of the demo I’m going to show later on involves an insecure web app that’s being publicly exposed. And database services with weak creds.

It’s literally a checkbox to make most databases public to the Internet from an AWS environment. Why would this matter? This can lead to compromise of things like sensitive data in the buckets, potentially credentials that maybe have been put in a config file in a bucket, additional access keys, and in the case of vulnerable services, potentially the underlying server infrastructure as well. Then there’s secrets in code repositories.

This is something that we see come up pretty often still. And it’s actually surprising, because a lot of the larger repository platforms like GitHub actually have functionality now that looks for secrets. If I were to go try to post a piece of code to GitHub today, there are actually some checks that happen to look for common secrets, things like AKIA for an access key.

However, even with those checks in place, we still see these come up, where a developer will go to publish a piece of code, they will have hard-coded a set of credentials into that piece of code, and they’ll make it public; they’ll actually publish it to a public GitHub repo.

And now those credentials are just exposed externally. What’s interesting about code repositories is they have commit histories. In most cases, anytime you go to update that piece of code, you are not only creating a new instance of that code, but the entire history of what was there before is still there.

In the case where, let’s say, a user published a password to a Git repo, that password will live there. They could go and override it with another commit, but if they didn’t go and actually scrub that commit history, then that password might still be there in the commit history.
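The access key prefix convention also makes leaked keys easy to grep for in old commit contents. Here's a toy sketch of that kind of scan; the "commits" and the key in them are fabricated, and real scanners (like GitHub's secret scanning) use much richer rule sets:

```python
import re

# Match the documented AKIA/ASIA prefix followed by 16 uppercase
# alphanumerics, the shape of an AWS access key ID.
KEY_RE = re.compile(r"\b(A(?:KIA|SIA)[0-9A-Z]{16})\b")

commits = [
    'config = {"aws_key": "AKIAIOSFODNN7EXAMPLE"}',  # old commit: hard-coded key
    'config = {"aws_key": os.environ["AWS_KEY"]}',   # later commit: "fixed"
]

# Even though the latest commit is clean, the key still lives in history.
leaks = [m for c in commits for m in KEY_RE.findall(c)]
print(leaks)  # → ['AKIAIOSFODNN7EXAMPLE']
```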

This is one of the things we look for as well. When it comes to phishing, standard phishing techniques apply here.

This is an example I saw the other day. It’s an Amazon-specific phish that says, hey, your service is about to be canceled, or it has now been suspended.

You can go pay at this payment page. If you’ve ever had an AWS account, you know that you get billed for even the tiniest little things, right? Spinning up an instance for just a few minutes, or having any sort of storage bucket out there, generates costs pretty quickly if you don’t shut anything down.

So it’s common to get emails from Amazon saying, hey, you’ve been charged X amount. This is one of the phishes that I think is pretty sneaky on this front, because it’s kind of playing off that idea of, hey, you didn’t pay your bill and now your service is going to be suspended.

So the idea here would be to phish for credentials. This could be not necessarily looking for programmatic access, but maybe access to that management console. And so this would be a case where maybe we spun up something like an Evilginx-style reverse proxy situation to try to get access to credentials. Next, resource exploitation.

So just like in traditional networks, you have the ability to spin up resources like virtual machines, databases, and web apps that can be exposed publicly within your own cloud environments.

Now these resources, these pieces of software are still vulnerable to the same attacks and same exploits that end up coming out for normal pieces of software.

Anytime you have an unpatched vulnerability, anytime something new comes out, these are things that we need to look for as well from an external context. The same thing applies to traditional web app vulnerabilities: things like weak authentication, being able to brute force credentials, being able to inject commands,

so SQL injection and command injection type vulnerabilities. And one vulnerability that in my eyes is not something you typically find all that interesting in most web app cases, but tends to be extremely interesting in cloud environments, is server-side request forgery.

Server-side request forgery is the ability to cause a web server to send a request on your behalf. Basically, the idea is that you find a vulnerability that causes the web server to submit a web request for you.

And that will be important when we come up to our metadata section later on, because there’s a specific web URL that only that server can hit itself.
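To make the SSRF idea concrete, here's a hedged sketch of what a probe might look like. The vulnerable app hostname and its `url` parameter are hypothetical; the 169.254.169.254 target is the real (non-routable) metadata service address covered later:

```python
from urllib.parse import quote

# The instance metadata service, only reachable from the instance itself.
IMDS = "http://169.254.169.254/latest/meta-data/"

def ssrf_probe(app_base: str, param: str, target: str) -> str:
    # The hypothetical vulnerable app fetches whatever URL is passed in
    # `param`, so we point it at the metadata service on its own host.
    return f"{app_base}?{param}={quote(target, safe='')}"

print(ssrf_probe("https://vulnerable-app.example.com/fetch", "url", IMDS))
```

If the app dutifully fetches that URL and returns the response, the attacker is effectively browsing the metadata service from outside.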

And I’ll show you that in just a bit. After exploiting a resource, though, what do we look for? We need to start talking about what we can do from a post-compromise recon perspective: what can we do after we’ve compromised a set of credentials?

Now, like I said, the initial access section is honestly an entire day of my class, so it’s very condensed here. But post-compromise is really what we want to focus on today. We’re going to talk about: what do we have access to?

What roles do we have? Things like, is MFA enabled? So if we had phished a credential, are we going to need to worry about MFA? What can we access? This is actually probably the biggest piece of this, right?

So when you start looking at recon from a post-compromise perspective in any environment, it comes down to, well, what can you get access to? Can you access these virtual machines over here? Can you access these storage buckets over here? Who are the admins?

And then, ultimately, how are we going to escalate privileges? One of the kind of interesting things about API-level access to cloud environments is you can generally see configurations around security products.

You can query things like: is CloudTrail enabled for this region? That would be the logging functionality. That’s one of the bigger differences.

Think: if you were to get access to an endpoint, you could open up Task Manager and see this AV or this EDR is enabled. Same thing, but in the cloud context we can query APIs to determine what security protections are there as well.

All right, so when it comes to AWS permissions, there are various types of AWS accounts. First of all, you have a root account, you can have IAM accounts that are created underneath the root account,

and then you can have temporary security credentials. They can all have policies and permissions applied to them. Now, the root account is basically the account from when you signed up for that AWS account; it’s the main, highest-level account that has all the privileges in the world to do anything it wants.

Now, underneath that account you can start to create what are known as IAM users, and for IAM users you can start to set permissions. In AWS, permissions are typically set with three different pieces.

So you have an effect, you have an action and a resource. There’s a few examples here on the right, and I’ll walk through what these mean, but in general the effect is an allow or deny rule. So you’re basically saying, are we going to allow this?

Are we going to deny this? The action is a set of specific parameters. For example, you could say, I want to allow the action of s3-anything:

you could write to an S3 bucket, you could create an S3 bucket, you could get items from an S3 bucket. So you can actually include wildcards like this, s3:*,

or you can be very specific about it. So you could give an action of something like iam:CreateUser, where you’re basically saying that you can leverage the IAM service to create users, but that’s it.

And then the last piece, resource. You can actually filter down to what specific resource you want to apply the policy to. So in this case we’re applying it to just an S3 bucket:

you can say we’re going to allow Get privileges on this one specific S3 bucket. Now, in general, this is how these policies look.

When you start looking at policies, you’ll see something like Action: *, Effect: Allow, Resource: *. That would be an administrative policy, because you’re basically saying I’m allowing all actions, because of the wildcard there, and on every resource.

So literally, that’d be like an administrative policy saying it has full access to everything. Now, temporary security credentials are kind of interesting, and we’ll talk about those in just a bit,

when we get to the section on assumed role policies. But in general, they’re short-term permissions that you can request for certain purposes.
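The Effect/Action/Resource pieces described above can be sketched as actual policy documents. This is an illustration of the two shapes from the talk, the full-admin wildcard policy and a scoped S3 read policy; the bucket name "example-config-bucket" is made up:

```python
import json

# A full-admin policy: every action, on every resource.
admin_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "*", "Resource": "*"}
    ],
}

# A tightly scoped policy: only s3:GetObject, only on one bucket.
scoped_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::example-config-bucket/*",
        }
    ],
}

print(json.dumps(admin_policy, indent=2))
```

Spotting the first shape attached to a user or role you control is usually game over; the second shape is what least privilege looks like.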

So one thing I really want to highlight here is the ability to apply policies to identities and resources.

If you were to get access to, let’s say, a user account on an internal Active Directory domain, you might be familiar with the concept of being able to query your groups from the domain.

You can say, hey, what groups am I in? And that will tell you what your specific user is a part of. But just like on an internal domain, a file share can have a permission that says, hey, that domain user has permission on the file share.

It’s not something you would necessarily query on your user object from the domain, but it’s something that you could query if you ask that specific system. Same thing in the cloud context: you can create identity-based policies that are applied to users, and you can create policies that are applied directly to resources.

Think things like an S3 bucket. You could apply a policy directly to an S3 bucket that says user X over here has access to it. Now, unless you enumerate, manually or automated, and go and try to access that S3 bucket, you might not realize you have access to it.

On the right here, there’s a good example showing identity-based policies versus resource-based policies. Take a look at Zhang Wei down here at the bottom. Zhang Wei has no identity policy applied to them at all.

If you were to query that user’s access, it would look like they literally can’t do anything. Now, if you look at the resource-based policies on the right, you can see that resource Z here has Zhang Wei allowed full access.

That user would be able to have full access on whatever resource that is by accessing it directly. This is why it becomes important to not only look at what roles you have, but also start to analyze resources and what policies are applied to other resources.

All right, so how do we start to query this stuff? How do we start to gain a little bit of information about these resources and better assess access? Well, the AWS CLI is pretty much the go-to tool for this.

On the topic of being able to connect to the API, there are a few different ways you could do it. First off, the AWS CLI is one; there is a great Python library called boto3, which is also a good way to do it.

But in our case, I’m going to just give you a high-level, I guess, crash course on the AWS CLI here. It’s a multi-platform tool. First of all, if you had a set of credentials, programmatic access keys that is, you can set up a profile on your system with the aws configure command.

When you run aws configure, it’ll walk you through the process of adding in your access key as well as your secret key; then you can set default regions if you want. Now, one of the interesting things about AWS is that there are certain regions that you might have to query for certain resources.

Things like EC2 tend to have resources spun up in specific regions. After you configure a set of access keys, then you would be able to leverage the AWS CLI.

Now one of the things that I think is really awesome about the AWS CLI is that you can create profiles so you can not only have your default profile, but let’s say that you’re on a pen test and you’ve got ten sets of creds.

Instead of just overriding the same default profile, you can specify specific profiles with --profile. So you could run aws configure --profile and name it "creds I got from a Git repo" or something like that.
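Under the hood, named profiles end up as entries in the AWS credentials file. Here's roughly what that looks like; the profile name is made up and the keys are AWS's documented placeholder examples:

```ini
# ~/.aws/credentials
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

[creds-from-git-repo]
aws_access_key_id = AKIAI44QH8DHBEXAMPLE
aws_secret_access_key = je7MtGbClwBF/2Zp9Utk/h3yCo8nvbEXAMPLEKEY
```

Commands can then target a set of creds with, for example, `aws s3 ls --profile creds-from-git-repo`.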

Right. And now whenever you go to query the AWS CLI, you can specify that profile for each command you run. So if you’ve ever been caught on a pen test where you ran local binaries, things like the net commands, ipconfig, whoami, that tends to be another one of those that gets caught on a lot of engagements.

This would honestly be one to watch out for if you’re a defender. I think in a lot of cases it gets called a lot, though, so it’s probably not going to be the easiest thing to identify.

However, the aws sts get-caller-identity command is effectively the whoami of the AWS CLI. Let’s say you got a set of access keys: you have your AKIA access key, you have the secret access key. That tells you nothing about that user.

There’s no username. Nothing says, hey, this is an admin, or hey, this is a backup user, or hey, this is dev account one, or something like that. There’s no identifiable information there. It’s just a long string of letters and numbers.

So how do you get at least a little bit of an idea of what that user is for? You can run the aws sts get-caller-identity command, and you will get the actual username for that IAM user that was specified in the console whenever they created that user.

But again, with some EDRs now, literally running the whoami command will get you burned. So how does the AWS CLI break up commands?

First off, when you run the aws command itself, the first section immediately following aws is the service you want to interact with. So in the picture I made here, we’ve got EC2.

So if you wanted to interact with EC2, you’d say aws ec2, and then you would complete whatever commands you wanted to send to the EC2 service. Now, let’s say you wanted to operate with IAM; you would put iam here.

If you want to operate with S3, you’d put s3 here. This is basically where you’d specify the service. The next section is the action. We want to describe-instances:

that is the AWS command for listing out EC2 instances and any information we can identify there as well: IPs, things like DNS names, network configuration information, that kind of thing.

And then, finally, region selection. Now, like I said, some AWS services require that you specify a region as well, and that’s just because of the nature of how AWS spins up resources in certain regions.

But for things like S3 buckets, you can run aws s3 ls and that should list out all S3 buckets globally for that account.
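The command anatomy above (service, then action, then optional region) can be sketched as a small helper. This is only an illustration of how the pieces fit together; it assembles the argv list locally and doesn't need the AWS CLI or any credentials to run:

```python
from typing import List, Optional

# Assemble an AWS CLI invocation: aws <service> <action> [--region R]
# [--profile P]. Running it is left out; we only build the argument list.
def aws_cmd(service: str, action: str,
            region: Optional[str] = None,
            profile: Optional[str] = None) -> List[str]:
    argv = ["aws", service, action]
    if region:
        argv += ["--region", region]
    if profile:
        argv += ["--profile", profile]
    return argv

print(" ".join(aws_cmd("ec2", "describe-instances", region="us-east-1")))
# → aws ec2 describe-instances --region us-east-1
```

A region-less, profile-less call like `aws_cmd("s3", "ls")` mirrors the global `aws s3 ls` example from the talk.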

When it comes to listing out IAM users, you could run aws iam list-users, or list-roles to list out those roles that we talked about, the ones we might want to identify whether or not we’re a part of.

We might want to identify if we’re part of a group, because not only can you directly apply roles and policies to a user, you can apply them to groups as well. That actually comes into play when we start to look at assumed role policies later on for privilege escalation.

Now, when we start to look at public resources: to start, this is something that honestly should be done externally. In most cases, you want to be able to attempt to identify any public resources externally.

Now, the way this is done is via predictable domains. If you go to create an S3 bucket on an Amazon account, you create a custom name for that S3 bucket, but it gets prepended to a predictable URL:

it gets prepended to s3.amazonaws.com. So that makes it a brute-forceable situation, where we can brute force the names of S3 buckets themselves.

This is something that’s hit or miss, though, because I could go spin up a bucket called FBI-surveillance-van-1, if that one’s not taken.

And same thing for your Googles and Twitters and whatever else: if those buckets had not previously been created, you can go create them. So this becomes a problem of attribution to an actual organization.
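The predictable-domain idea above reduces to generating candidate bucket URLs from a target name plus common suffixes. A minimal sketch, assuming the virtual-hosted-style `<bucket>.s3.amazonaws.com` form; the suffix list is illustrative, and real wordlists are far larger:

```python
# Candidate S3 bucket URLs for a target org name. Checking each URL
# (and interpreting 404 vs 403 vs 200) is left out of this sketch.
SUFFIXES = ["", "-backup", "-dev", "-prod", "-logs", "-data"]

def candidate_urls(org: str) -> list:
    return [f"https://{org}{s}.s3.amazonaws.com" for s in SUFFIXES]

for url in candidate_urls("glitchcloud"):
    print(url)
```

The attribution caveat from the talk applies to every hit: a bucket matching the name is not proof it belongs to the target.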

What I would recommend doing is running a tool like cloud_enum, the tool I’ve listed here on the screen, one of my favorite tools for identifying public resources like this. If I were to go on a red team engagement or a cloud assessment or an external today, this is probably the first tool I’m going to run, because it’s a great way to help identify any public resources that may be out there whose names you could potentially brute force.

Anyway, you’ll get something kind of like what’s on the right here. In this example, it found an S3 bucket called glitchcloud, which is one that I set up for this, and it immediately listed out the files within it.

Now, if I was testing company Glitch Cloud, I would have to ask, hey, is this actually yours or not? Or is this something that somebody else set up? You have to at least make sure that you are attacking the correct organization here, because again, anybody can spin these names up.

Now, this is something that I think is very, very important when it comes to public resources. When you’re external and trying to identify these public resources, there is a high likelihood that you’re not going to find everything. In most cases, unless they are very specific about the bucket names and just name them the company name,

there’s a likelihood that you’re not going to find everything. So after getting credentials, after doing any sort of authentication to an organization’s environment, I always try to enumerate all public resources: not only S3 buckets, but things like public IPs for EC2 instances and load balancers.

And I ended up putting together a few scripts to help with this. I have a repo on GitHub called Cloud Pentest Cheatsheets that I’ve been trying to keep updated with a lot of the stuff that I use on cloud pen tests.

And I just recently added in a few loops, a few while loops, to go through and dump things like all public IP addresses for EC2 instances, all elastic load balancer DNS addresses,

and all RDS DNS addresses. The reason is that after getting access, you now have the ability to see: okay, well, I didn’t brute force this S3 bucket, and the reason is that it’s got a really long, complicated name.
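A toy version of that "dump all public IPs" loop: instead of actually calling `aws ec2 describe-instances`, this parses a fabricated stub of the JSON that command returns, which is the same extraction logic a real loop would apply per region:

```python
import json

# Fabricated, trimmed-down describe-instances output. Instances with no
# public IP simply lack the PublicIpAddress field.
stub = json.loads("""
{"Reservations": [{"Instances": [
    {"InstanceId": "i-0abc", "PublicIpAddress": "203.0.113.10"},
    {"InstanceId": "i-0def"}
]}]}
""")

public_ips = [
    inst["PublicIpAddress"]
    for res in stub["Reservations"]
    for inst in res["Instances"]
    if "PublicIpAddress" in inst
]
print(public_ips)  # → ['203.0.113.10']
```

The same pattern, with different field paths, covers the ELB DNS names and RDS endpoints mentioned above.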

It’s not something that was easily brute-forceable externally, but nevertheless, it’s public. And I’ve had cases where I’ll be assessing a company after getting credentials,

find that there’s a public storage bucket, go look at the storage bucket, and it’s got a ton of sensitive stuff in it. Or go find an elastic load balancer web address and find that it’s running an old version of PHP or something.

Right? So there are additional vulnerabilities that can come from just identifying those public resources and then looking at them externally. Next up, privilege escalation.

So now, what are we going to leverage here, after authenticating, to actually gain privileges or permissions that we previously didn’t have? Is there a way for us to take what we learned from the previous commands, by enumerating resources and roles, to either (a) get access to additional credentials, or (b) find vulnerabilities in services, maybe some of those services externally exposed,

that we can go and exploit to some extent? So let’s walk through a few potential privilege escalation methods here.

All right, so first up, the instance metadata service. This is one of my favorite things to talk about when it comes to anything cloud related, because this is something I find honestly kind of fascinating about cloud environments: instance metadata services are ways for cloud systems to orient themselves in the cloud.

So let’s say you’re deploying 10,000 virtual machines within a cloud; you can specify certain pieces of metadata that get applied to every single one of them.

And what the metadata service is, is literally a web app of sorts that gets spun up on the non-routable IP address 169.254.169.254 on every single cloud virtual machine.

So the first time I saw it, I was like, wait, there’s a web service running on this virtual machine? You can’t access it; you’re not supposed to be able to access it externally.

It should only be reachable from the local host.

However, things like server compromise and certain vulnerabilities like SSRF might allow remote attackers to get to it. So why is that important? Well, first off, let’s say that you had an application, and you had an EC2 instance,

so a virtual machine that you wanted to authenticate to an S3 bucket, or that you wanted to have the permissions to retrieve something from that S3 bucket.

One of the ways could be to hard-code credentials into the VM. Likely not the best solution there. One of the ways that Amazon allows you to solve this problem is through role policies.

You can specify a role policy and you can assign it to a virtual machine and you can say, hey, this virtual machine, if it requests this role policy, we’re going to give them temporary security credentials that allow them to do x.

You’re not hard-coding credentials, necessarily, at that point. However, being able to query that role policy on the metadata service can allow you to actually gain access to those temporary sets of credentials.

First off, this is pretty much where it lives on the AWS metadata service: it’s at /latest/meta-data/iam/security-credentials/ and then whatever that role name is.

Now, this is literally a directory-indexed service as well. Whenever you start to dive through this, and I’ll show you in a minute in the demo, you can poke around quite a bit.

But once you navigate to that URL directly, if a role policy is applied to the instance, it will spit out a set of temporary security credentials for you. Now, that set of temporary security credentials can technically have more permissions than what you currently have as that instance.
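The credential path described above can be sketched as a couple of one-liners. The role name "cg-ec2-role" below is purely hypothetical; on a real instance you discover role names by listing the security-credentials directory first.

```shell
# A minimal sketch of where role credentials live on the instance metadata
# service (IMDSv1). "cg-ec2-role" is a made-up role name for illustration.
IMDS="http://169.254.169.254/latest"
ROLE="cg-ec2-role"
CRED_URL="$IMDS/meta-data/iam/security-credentials/$ROLE"

# On the instance itself (or through an SSRF primitive) you would run:
#   curl "$IMDS/meta-data/iam/security-credentials/"   # list attached role names
#   curl "$CRED_URL"                                   # dump the temporary keys
echo "$CRED_URL"
```

The second request returns a JSON body with an AccessKeyId, SecretAccessKey, Token, and Expiration.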

This could be a potential privilege escalation here. This might also be something you could hit externally via a vulnerable proxy as well. If there's a proxy service like Nginx running on an EC2 instance that's misconfigured to allow connections to the metadata service, you might be able to proxy a request directly through it, to itself, to hit that metadata service.

This is what happened in the Capital One hack as well. An attacker exploited SSRF on EC2 and accessed the metadata service. That allowed them to get a temporary set of credentials.

That temporary set of credentials allowed them to authenticate directly to AWS and hit an S3 bucket that was private, something that was not publicly exposed. But the ability to query that metadata service gave them the creds they needed to access a private S3 bucket that had a ton of sensitive data.

Yeah, I saw a mention of IMDSv2 in the chat. Yeah, I think a couple of years ago now at this point, AWS created what is known as IMDS version two.

And it's a way to mitigate a lot of the remote access to the metadata service via things like SSRF or via a proxy.

However, IMDSv1 is still the one that's enabled by default in most cases. And even if you did get remote server compromise, IMDSv2 isn't going to prevent you from accessing the metadata service locally.
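The IMDSv2 mitigation mentioned above can be sketched like this: every metadata read must present a session token, and that token can only be obtained with an HTTP PUT, which is what breaks most GET-only SSRF primitives.

```shell
# A hedged sketch of the IMDSv2 session-token flow.
IMDS="http://169.254.169.254/latest"
TOKEN_URL="$IMDS/api/token"

# On the instance itself you would run:
#   TOKEN=$(curl -s -X PUT "$TOKEN_URL" \
#     -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
#   curl -s -H "X-aws-ec2-metadata-token: $TOKEN" "$IMDS/meta-data/"
echo "$TOKEN_URL"
```

Without the PUT-issued token, IMDSv2 refuses the read, but an attacker with code execution on the box can still complete the flow locally.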

Another place we tend to find credentials, and this could be another privilege escalation opportunity, is in user data and environment variables tied to various services. A lot of times we'll see things like Lambda functions get deployed with environment variables that contain secrets.

There's a screenshot here. This is a Lambda function where a set of credentials is literally being passed to that Lambda function via environment variables.

When that Lambda function executes, it has its credentials available to it so it can do something. Now, the thing is, again, with the AWS API and using the CLI, we can query the configurations around Lambda functions.

And oftentimes we find things like this. Same thing on EC2 instances with user data: I just saw an engagement recently where there was an SSH key, the actual full SSH key, in clear text in user data.
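The two hunts described above can be sketched with a couple of CLI calls. The profile name and instance ID below are placeholders from this walkthrough, and the calls themselves need valid credentials, so they are shown commented out.

```shell
# A sketch of hunting for secrets in Lambda environment variables and
# EC2 user data with the AWS CLI.
PROFILE="config-creds"   # hypothetical profile from the demo
REGION="us-east-1"

# Dump every Lambda function's environment variables:
#   aws lambda list-functions --profile "$PROFILE" --region "$REGION" \
#     --query 'Functions[].{Name:FunctionName,Env:Environment.Variables}'
# Pull the (base64-encoded) user data for one EC2 instance:
#   aws ec2 describe-instance-attribute --attribute userData \
#     --instance-id i-0123456789abcdef0 --profile "$PROFILE" --region "$REGION"
echo "enumerating Lambda and user data as $PROFILE"
```

Anything that comes back, keys, SSH material, connection strings, is a candidate for the kind of credential pivot shown later in the demo.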

All right, so I mentioned temporary security credentials briefly a minute ago, and this is where assume role policies come into play. There are ways to create trust relationships with users that you want to give access to a specific service without actually granting it to them directly.

So it's similar to the way you would apply a role to an EC2 instance and say, hey, I'm not going to hard code creds for you, but you can request this role policy and get temporary creds.

Same thing, but for IAM users. So let's say that I wanted to provide somebody who wasn't working at my company access to a specific resource.

They can give me their Amazon Resource Name, the ARN, from their own Amazon account, and I can apply an assume role policy for their account.

And effectively, what that does is it allows them to call what is known as the Security Token Service's AssumeRole action. And whenever they call that, if they have the permissions that I've specified in my own assume role policy, then they can request temporary security credentials that give them the ability to go and access whatever I specified for them.

In the example here, there are two separate accounts. On the right is the account that is providing access to its own DynamoDB resource; on the left we've got a user called Joe who wants to authenticate to that DynamoDB.

However, instead of applying a direct policy to that user to say, hey, you have access to this DynamoDB, I give them an assume role policy. They authenticate using their keys and request temporary security credentials.

I give them that set of temporary security credentials, and they can now leverage it to authenticate to DynamoDB. And they're short-lived, typically about an hour.
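The flow Joe goes through can be sketched as a single STS call. The role ARN and session name below are made up for illustration; a real call requires sts:AssumeRole permission and a trust policy on the role that allows the caller's principal.

```shell
# A sketch of requesting temporary credentials via STS AssumeRole.
ROLE_ARN="arn:aws:iam::111111111111:role/dynamodb-reader"   # hypothetical

# With valid keys configured, Joe would run:
#   aws sts assume-role \
#     --role-arn "$ROLE_ARN" \
#     --role-session-name demo-session
# The response contains Credentials.AccessKeyId (an ASIA... key),
# Credentials.SecretAccessKey, and Credentials.SessionToken,
# valid for about an hour by default.
echo "$ROLE_ARN"
```

Those three values are then used together, which is exactly why overly permissive assume role policies make such a clean escalation path.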

This becomes kind of interesting because, let's say, an assume role policy was applied within a group in an actual Amazon account, you enumerated the policies tied to that group, and maybe they've created overly permissive permissions for that group.

Through that assume role policy, I might be able, as my own user, to assume the role and get a new set of permissions that I previously didn't have.

So let's talk a little bit about leveraging scanning tools to help us identify some of this stuff too, because so far we've talked about the CLI a little bit. But on the automation side, what can we do that can help us?

First off, whether you're trying to be evasive or not is where you have to draw the line if you want to start automating things. I would say in most cases where we're not trying to be evasive, we will start running scanning tools to identify as much as we can quickly.

First off, I would recommend at least starting with manual inspection using single commands one at a time. But if you really want to quickly assess cloud environments, there are a number of scanning tools that help us find things like IAM permission issues, publicly accessible resources, and things like VM instance storage encryption.

That's one that comes up pretty often. Network and egress rules, and then virtual machine metadata. There are a couple of tools here I'm going to walk through. Pacu is a tool from Rhino Security Labs, a great tool for quickly identifying privilege escalation vulnerabilities.

There's a privilege escalation scanner that will attempt to identify a number of common vulnerabilities, there's EC2 enumeration, or sorry, I already said that.

Persistence modules and exploitation modules as well. ScoutSuite is another one that I tend to use pretty often and find super useful for identifying a lot of best-practice issues in cloud environments.

So ScoutSuite, by NCC Group, is a multi-cloud auditing tool. It supports AWS, Azure, GCP, Alibaba Cloud, and Oracle as well.

If I had to compare it to something like a Nessus scanner, it's probably the closest-ish thing to a vulnerability scanner of sorts for cloud environments. However, I will say that it's more on the best-practice configuration side than actual vulnerabilities.
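Kicking off a ScoutSuite scan looks roughly like the sketch below. The profile name is the hypothetical one from this demo, and install/invocation details can differ by version, so check the ScoutSuite README rather than treating this as canonical.

```shell
# A sketch of running ScoutSuite against an AWS profile.
SCOUT_CMD="scout aws --profile lambda-creds"   # "lambda-creds" is hypothetical

# Typical setup, commented out since it needs network access and credentials:
#   python3 -m venv scout-venv && . scout-venv/bin/activate
#   pip install scoutsuite
#   $SCOUT_CMD    # writes an HTML report under scoutsuite-report/
echo "$SCOUT_CMD"
```

The HTML report groups findings by service, which is handy for that high-level best-practice pass.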

So it's a good tool to run, though, to get at least that high-level, best-practice look into a cloud environment. WeirdAAL, by Chris Gates: I love this tool. This is more of an enumeration tool than a vulnerability discovery tool.

So if we go back to one of the slides I talked about before, in regards to identity-based permissions versus resource-based policies: if I'm just looking at my own permissions with the various commands I could run to enumerate them, I'm going to miss out on a lot of the stuff where resources have actually had certain permissions applied to them.

The thing I love about this tool is it will go one by one through each service and attempt to run various commands to enumerate what permissions you have.

So even if you don't have full read access, you might still get some interesting data just by running WeirdAAL, and I'll show you here in a second as well. All right, so we're up to a demo, and I'm going to do this live.

So we're going to see how this goes. Hopefully I sacrificed enough chickens. I did have a chicken sandwich, so I think that counts, though.

I think it counts. All right, so we're going to pretend here that we've got an AWS config. We're going to start with this idea of us compromising a set of credentials through a code repository.

Now, what I'm going to walk through here is basically a demonstration of a tool called CloudGoat.

I have a link to that in, I think, the next slide. CloudGoat is awesome. It's a tool that's written to leverage Terraform to spin up resources within your account.

It's a way to generate vulnerable infrastructure within your own environment. We're going to walk through one of those scenarios from CloudGoat.

First off, the cool thing about CloudGoat is once you've spun it up, it will spit out a set of keys for you. Then it's up to you to figure out what to do. From there it becomes a CTF of sorts.

First off, we're going to pretend that we were assessing some code at an organization. We stumbled across an AWS config file, and sure enough, in this AWS config file we've got an AKIA access key and we've got a secret access key.

So one of the first things we could do: well, let's configure a new profile. So we'll do aws configure, and we're going to give it a profile, like I mentioned earlier,

and we're just going to call this config-creds. It should ask us for the access key, and then it should ask us for the secret access key.

I'm not going to set the default region or the output format. Now that we have this set of credentials configured, let's try them out.

Remember, I mentioned using aws sts get-caller-identity. Then we'll give it the profile of config-creds.

If it was successful, we should get something like this, where we now see the user ID tied to it. But more importantly, we get the actual name from the account. We see the user, solus,

and then there's an ID of sorts on the end here. This is initial access. We have initial access to this account. What's the first thing we could try to do? How about we try to describe EC2 instances: aws ec2 describe-instances --profile config-creds, and we're going to give it the region of us-east-1. I'll get rid of this because we don't need it anymore.
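The first few recon steps above can be consolidated into one sketch. The profile name matches this walkthrough, and the commented commands need the compromised AKIA key pair configured before they will actually run.

```shell
# The demo's initial recon steps, sketched as a sequence of CLI calls.
PROFILE="config-creds"

#   aws configure --profile "$PROFILE"                  # paste the key + secret
#   aws sts get-caller-identity --profile "$PROFILE"    # "whoami" for AWS
#   aws ec2 describe-instances --profile "$PROFILE" --region us-east-1
# An AccessDenied on the last call only means this principal lacks
# ec2:DescribeInstances; keep enumerating other services (lambda, s3, iam...).
echo "recon with profile $PROFILE"
```

The get-caller-identity call is the safest first move: it works with almost any valid key and tells you which account and principal you're holding.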

What happens if we run that? We get access denied. We don't have permission to read EC2 with this set of credentials. What about Lambda functions? Can we read Lambda functions?

Because that is another one where, like I said, we tend to find interesting stuff. And sure enough, we've got read access to Lambda.

If we start to look through the Lambda functions here, we might find something interesting, like environment variables. Now, this is honestly a super real-world scenario here.

This is something that happens pretty often, where we see environment variables tied to Lambda functions. So in the Lambda functions themselves, we've got a new set of access keys.

So this is a different set of access keys than we had previously. So now this is where the lateral movement comes in, right? This is where cloud pivoting comes in: we have a new set of credentials.

Let's go set those up and see what they have, right? Let's go try them out and see if they have any additional permissions that we didn't previously have.

So what I'm going to do is copy that access key and configure a new profile. So aws configure --profile, and we're going to call this one lambda-creds.

So this one is going to be the credentials we got from the Lambda function. So now we've got it: access key ID, secret access key again.

We can run aws sts get-caller-identity, give it the profile of lambda-creds, and it works. We can see that the user in this one is different.

It's not solus, it's wrex, whatever that is. All right, so if you remember, with the config-creds we tried to describe EC2 instances and we got denied.

What happens if we run that exact same command using the lambda-creds? So we run aws ec2 describe-instances, and we're going to give it the lambda-creds profile this time, right?

--region us-east-1. What happens? We have the ability to read EC2 with the lambda-creds. So you can see where the pivoting is coming into play, right?

Yeah. And of course, you’re reporting as you go. Yes. Thank you, time traveling nerd herder. Yes. That is a great point to make. You got to take those screenshots. You got to at least make some notes here.

Our first escalation here: we didn't have access to EC2 with the first set of creds. We found a set of Lambda creds. We're using those Lambda creds.

Now, that set of Lambda creds can read EC2. Let's start to look through EC2. We see an instance here, and we see that there is a public IP address tied to this instance.

What happens if we open up a browser and navigate to it? Is there anything interesting running on a web server there? So http:// and then we'll throw in the IP.

We get an error. So let me make it a little bigger. This is where BB's web app hacking class comes into play. So what do you do if you start to see errors on a web server?

Well, first off, we see this TypeError: URL must be a string, not defined. So, not defined. So what if we throw a url parameter on the end of it? What happens?

Okay, well, we've got a web server. It says, welcome to Seth's SSRF demo. I wanted to be useful, but I could not find blank for you.

Right. So again, this is BB's web app hacking class 101 here. But we've got to start poking around, right? We've got to start playing around with the URL.

So, the url parameter. What if we throw https://google.com in there? And it tries to find google.com. Sure enough, we could try some XSS,

probably, right? Like, we could see if this thing's vulnerable to XSS real quick, so we could throw some script tags in, if I can type correctly.

Let's see: alert, hacked, and... yeah. Okay, so it's vulnerable to XSS. All right, so now what else?

Now, remember what I mentioned earlier. We're really hyper-focused on cloud here. We know this is an EC2 instance. We know that we're working with a cloud-based system here. Can we hit that metadata service?

This is where server-side request forgery comes into play: http://169.254.169.254. We get something that looks like this here.

What I'm going to do is show you the source, because it looks a little better when you view the page source. We get a bit of a directory listing here.

This is the metadata service. This is the instance metadata service. Let’s go to latest on this metadata service. You should see a few additional directories here.

We've got dynamic, we've got meta-data, and we've got user-data. First up, user-data. A lot of times you can find really interesting stuff in user data, because that tends to be the place where custom scripts and things like settings being applied to that virtual machine get placed.

You can see there's a script here, a bash script. This is the place that we typically find things like SSH keys and credentials. That's one place that you should be looking.

Secondly, if we dive down into the meta-data URL, we can see that there are a number of settings available to instances.

We can start to look at things like the AMI ID, the profile, the public hostname, all these things. For the sake of time, because we're already getting close to the end here, we're just going to go straight to IAM, which is where we're going to look for the temporary set of security credentials.

So if we go to iam, we should see info and security-credentials. So if we dive into security-credentials, what's there?

We see a role. Now if you don’t see something here, then that typically means that there’s not a role policy applied to that specific instance. But in this case we have a role policy.

So now what happens if we navigate directly to that role policy URL? You should see something that looks like this. Let’s go back to the non source version real quick so you can see this a little better.

You should see something that looks like this. We’ve got an access key id. We’ve got a secret access key right here. And then we’ve got a token, long token.

If you remember back to when I first started talking about programmatic access keys, I mentioned how Asia is a temporary security credential. So that means that this is something that is short lived.

it’s not something that will last a long time. however, we could just request this URL again. Potentially get an updated set of, temporary security credentials. Now how do you use this? How do you use a temporary set of, security credentials?
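Before jumping into WeirdAAL's .env file, it's worth noting the simplest way to use temporary credentials with the plain AWS CLI: environment variables. All three values below are placeholders standing in for what the metadata service returned.

```shell
# A minimal sketch of using IMDS-harvested temporary credentials with the CLI.
export AWS_ACCESS_KEY_ID="ASIAEXAMPLEKEYID"       # placeholder value
export AWS_SECRET_ACCESS_KEY="examplesecretkey"   # placeholder value
export AWS_SESSION_TOKEN="exampletoken"           # required for ASIA (temporary) keys

# With those set, any CLI call picks them up automatically:
#   aws sts get-caller-identity
echo "session token present: ${AWS_SESSION_TOKEN:+yes}"
```

Forgetting the session token is the classic mistake here: the key pair alone will just return an InvalidClientTokenId-style error.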

I'm going to show you WeirdAAL real quick, because that is, like I said, one of my favorite tools when it comes to quickly enumerating information in an AWS environment.

I'm going to run python3 -m venv to start a virtual environment for WeirdAAL.

Then we will source the environment's bin/activate. Then finally, python3 create_dbs.py.

So WeirdAAL has a database structure, and it stores all of the results in a database.

Now, WeirdAAL leverages a .env file to authenticate. So if we are going to authenticate with that set of temporary security credentials, we need to copy the env.sample file over to .env and then modify .env with our credentials.

This is what a standard set of AWS credentials looks like in a file, so to speak. You typically have an AWS access key, which is going to be our ASIA key here.

And then we've got the secret access key. So we'll paste that in the aws_secret_access_key spot.

Then finally, we need one more thing. This is where temporary security credentials start to become interesting. With temporary security credentials, you not only need the access key ID and the secret access key, you also have to have a session token.

We've got a session token here that's presented in this metadata service URL. And you just add an extra line for aws_session_token.

And then we'll paste that in, and we should be good to go from there. That... I actually pasted it incorrectly. There we go. All right, so we paste it in and save the file.

And now we should be able to use WeirdAAL to authenticate with that temporary set of security credentials. A quick, again very noisy, way to enumerate everything here is to run python3 weirdAAL.py -m recon_all -t ssrf.

So we're creating a table called ssrf here, and if those credentials are valid, we should see that it starts to authenticate and starts to go to town on the various services.

What have we done here? We first had an AWS config file that we compromised via a GitHub repo. We added in those credentials.

We authenticated. We tried to access EC2, couldn't access EC2. We accessed Lambda; Lambda had an environment variable that had a set of credentials. We used those environment variable credentials to authenticate as well.

We saw that we could use those credentials to access EC2 instances. We found this web server on an EC2 instance, and we found that it's vulnerable to SSRF. We leveraged SSRF to access this temporary set of credentials.

And now we’re authenticating more. I’m not going to let this complete and we’re not going to stay here all day and run this.

But if you want to see the end of this, go do the CloudGoat demo and see where it takes you, because this is not the final step. I'll just say that. Let's go back to the slides here.

That's the demo. I mainly just wanted to show what a cloud pivot looks like. That is effectively how lateral movement looks a lot of times in the cloud: you're getting access to new sets of credentials.

You're moving from resource to resource and leveraging vulnerabilities like SSRF.

Resource-wise, what are some resources that might help with this? Again, I created these cloud pen test cheat sheets that have a lot of standard commands that I might run on a pen test, and there's CloudGoat, the tool I was just leveraging a minute ago to generate that vulnerable environment.

There are a few other vulnerable environments out there if you want to practice doing any sort of AWS-based assessment: flaws.cloud, flaws2.cloud, and Sadcloud is another good one.

What are some key takeaways here? Initial access can come in many forms. We talked about phishing, we talked about finding public resources, and we talked about getting access to credentials via GitHub repos.

Secondly, attackers can leverage the AWS CLI to send API requests to interact with resources. You could see, just having that set of credentials, we were able to authenticate to the API and understand the environment purely from the API perspective. Enumerate both identity-based and resource-based policies,

because a lot of times it's not going to be evident that you have access to something unless you try to access that thing. It's very important to enumerate both. Cloud environments can present a new and interesting method for lateral movement.

You could see here we pivoted from credential to credential via various resource permissions. That's typically how cloud pivoting looks.

And then finally, leverage scanning tools to help you automate the enumeration parts and any of that extra vulnerability discovery. So that's the end.

I teach a class on this stuff called Breaching the Cloud. I also make some YouTube videos, so I've got a link there, bit.ly, bo yt. I would be super grateful if you guys followed me, subscribed, that kind of stuff.

So thank you guys so much. Jason, I will pass it off to you. Man, I stayed under an hour.

Jason Blanchard

That was the fastest hour. Like, that just went by. So well done, Beau. I used to sit in on your four-hour classes. Like, the four-hour classes go by as well.

It's not like, oh, man, what just happened? But also, do you still stay after and do lab stuff, or, like, how do you do your labs? Because it's all hands-on labs.

Beau Bullock

Yeah. Yeah. So, with my class, I initially first wrote it during, like, prime Covid season, so it was all remote, right? And whenever I wrote it, I kind of wrote the labs for me to just demo them, just like that.

And then I was like, well, that's not good; the students should have the lab time. So what I ended up doing is saying, all right, well, I'm going to hang out for, like, a couple hours after each class, and before each class, to just answer questions for labs.

So. Yeah, I still do that. Okay. Huh.

Jason Blanchard

All right, so, hey, everyone, thank you so much for joining us for this Black Hills Information Security webcast. If you ever need a red team, threat hunt, or pen test, you know where to find us. But Beau, you'll be back in the future.

You’ll do more webcasts. We do webcasts pretty much every single week. So if this was your first time here, we hope you come back and enjoy one of these again.