Background: As the worldwide leader of importing and exporting, Vandalay Industries has been the target of many adversaries attempting to disrupt their online business. Recently, Vandalay has been experiencing DDoS attacks against their web servers.
Not only were web servers taken offline by a DDoS attack, but upload and download speeds were also significantly impacted after the outage. Your networking team provided the results of a network speed test run around the time of the latest DDoS attack.
Speed Test File
Screenshot of the 'server_speedtest.csv' file uploaded into Splunk:
Using the eval command, create a field called ratio that shows the ratio between the upload and download speeds. Hint: The format for creating a ratio is: | eval new_field_name = 'fieldA' / 'fieldB'
Query used in Splunk to generate a ratio between upload and download speeds - source="server_speedtest.csv" | eval ratio = DOWNLOAD_MEGABITS/UPLOAD_MEGABITS
We can see the ratio of downloads and uploads from this query.
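If a tidier ratio is preferred, the eval can be wrapped in Splunk's round function (a minimal sketch, assuming the same field names from the uploaded CSV; the two-decimal precision is an arbitrary choice):
source="server_speedtest.csv" | eval ratio = round(DOWNLOAD_MEGABITS/UPLOAD_MEGABITS, 2)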
Using the table command, display the results in a table that includes the following fields: _time, IP_ADDRESS, DOWNLOAD_MEGABITS, UPLOAD_MEGABITS and ratio. Hint: Use the following format for the table command: | table fieldA fieldB fieldC
Query used in Splunk to display these fields in a table - source="server_speedtest.csv" | eval ratio = DOWNLOAD_MEGABITS/UPLOAD_MEGABITS | table _time, IP_ADDRESS, DOWNLOAD_MEGABITS, UPLOAD_MEGABITS, ratio
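If the events do not come back in chronological order, a sort can be appended to the same query (a small optional addition, not part of the required answer):
source="server_speedtest.csv" | eval ratio = DOWNLOAD_MEGABITS/UPLOAD_MEGABITS | table _time, IP_ADDRESS, DOWNLOAD_MEGABITS, UPLOAD_MEGABITS, ratio | sort _time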
From the above screenshot, we can see the total event timeline of when downloads and uploads took place.
From the above screenshot, we can see the time in which downloads and uploads took place as well as the source IP address that was taking this action.
From the above screenshot, we can see a visual representation of these download and upload events as a column chart, with time displayed along the x-axis, which makes the data much more readable.
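A similar chart can also be produced directly from the search bar with the timechart command (a minimal sketch; the half-hour span is an assumption based on the sample interval seen in the data):
source="server_speedtest.csv" | timechart span=30m avg(DOWNLOAD_MEGABITS) AS download avg(UPLOAD_MEGABITS) AS upload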
Based on our findings in the above queries, we can see that download and upload speeds dropped drastically at approximately 2:30PM on the 23rd of February. We can see this in the below screenshot:
We can see that downloads dropped to 7.87 megabits at 2:30PM, a drastic drop from the 109.16 megabits recorded in the previous event. It's fair to assume that this is around when the attack started.
We can see from the above screenshot that downloaded megabits increased from 17.56 to 65.34, which is a drastic increase. Therefore, it's safe to assume this is around when the system started to recover and return to normal network traffic flow.
We can see from the above screenshot that downloaded megabits jumped from 78.34 to 123.91 at approximately 11:30PM on the 23rd of February. It is safe to assume that this is about the time when network traffic flow stabilised and returned to normal.
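If we want the search itself to isolate the outage window rather than reading it off the table, a where clause can be added (a minimal sketch; the 20-megabit cut-off is an assumed threshold picked from the values above, not something defined in the data):
source="server_speedtest.csv" | eval ratio = DOWNLOAD_MEGABITS/UPLOAD_MEGABITS | where DOWNLOAD_MEGABITS < 20 | table _time, IP_ADDRESS, DOWNLOAD_MEGABITS, UPLOAD_MEGABITS, ratio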
Submit a screenshot of your report and the answers to the questions above.
Background: Due to the frequency of attacks, your manager needs to be sure that sensitive customer data on their servers is not vulnerable. Since Vandalay uses Nessus vulnerability scanners, you have pulled the last 24 hours of scans to see if there are any critical vulnerabilities.
For more information on Nessus, read the following link: https://www.tenable.com/products/nessus
Nessus Scan Results
Screenshot of the Nessus scan results uploaded into Splunk:
Create a report that shows the count of critical vulnerabilities from the customer database server, whose IP address is 10.11.36.23. Hint: Use dest_ip="10.11.36.23" in the query.
It's worth noting that these logs use five severity levels, as shown below:
We want to filter for any vulnerabilities where the severity level is 'critical'. We can use severity="critical" in our query.
The entire query is source="nessus_logs.csv" dest_ip="10.11.36.23" severity="critical"
See the below screenshot for the entire query, which displays the count of critical vulnerabilities on the customer database server from this log file:
There were a total of 49 critical vulnerabilities found when the destination IP was 10.11.36.23 (which is the database server).
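To return the count directly rather than reading it from the event total, the same search can be piped into stats (a minimal sketch using the field names from the uploaded log file):
source="nessus_logs.csv" dest_ip="10.11.36.23" severity="critical" | stats count
Dropping the severity filter and using | stats count by severity instead also shows how the 49 critical findings compare against the other severity levels.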
The alert should send an email to [email protected] whenever Nessus detects a vulnerability on the database server. Create the alert by clicking 'save as' and then 'alert' after we've generated our results from the query above:
Enter the details to generate alerts whenever Nessus detects a vulnerability on the database server. Enter a name, description and any other relevant information:
Enter the email details to send the generated alert to - [email protected] - and enter the message details to be sent with the email:
Continue entering the details of the alert and then click 'save':
Once saved, we should be able to see the alert and edit it if we wish:
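For reference, the alert created through the UI corresponds roughly to a saved-search stanza like the one below (a hypothetical savedsearches.conf sketch; the stanza name, daily schedule, subject line and placeholder recipient are illustrative assumptions, not values taken from the screenshots):

[Vandalay - critical vulnerabilities on database server]
search = source="nessus_logs.csv" dest_ip="10.11.36.23" severity="critical"
enableSched = 1
cron_schedule = 0 8 * * *
dispatch.earliest_time = -24h
dispatch.latest_time = now
# fire whenever the scheduled search returns any results
counttype = number of events
relation = greater than
quantity = 0
action.email = 1
action.email.to = [email protected]
action.email.subject = Nessus vulnerability detected on 10.11.36.23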
Submit a screenshot of your report and a screenshot of proof that the alert has been created.
Background: A Vandalay server is also experiencing brute force attacks into their administrator account. Management would like you to set up monitoring to notify the SOC team if a brute force attack occurs again.
Admin Logins
Screenshot of the 'Administrator_logs.csv' file uploaded into Splunk:
Hints:
Look for the name field to find failed logins.
Note the attack lasted several hours.
Because we want to look for any indicators of a brute force attack, we want to drill down into the logs where accounts failed to log on. If we navigate to the field 'name', we can see the values this holds in the logs. In this case, we can see a count of 1004 for "An account failed to log on". This is a good indicator that a brute force attack has occurred. See screenshot:
Now, if we add this to our search, we can see all the events in which this occurred and when the majority of them took place, which is a good indicator of when the attack happened. See screenshot:
As we can see, there is quite a large spike in "An account failed to log on" events at around 9am on the 21st of February. Of the total 1004 events that occurred, 124 took place at 9am, with similar numbers appearing in the hours after. Therefore, I believe the attack started around this time - 9:00AM on the 21st of February 2020.
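The hourly breakdown behind this can also be pulled straight from the search bar (a minimal sketch, assuming the uploaded file is searched by its source name and that the event text matches the value shown in the name field):
source="Administrator_logs.csv" name="An account failed to log on" | timechart span=1h count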
We can see from the timeline of these events that 124 is quite a spike. In the hours preceding this event, the highest number of failed logins was around 23, which we can consider normal behaviour. See screenshot:
With this in mind, and taking into account that there were around 124-135 logon attempts per hour while the brute force attack was occurring, I would consider a baseline of about 40 failed logons per hour. I would set the ranges for failed logons per hour as follows:
The alert should send an email to [email protected] if triggered. To create an alert for this type of event, I went through the following steps:
Click 'save as' and then 'alert':
Fill in the details of the alert, including when to trigger it (anything above 25 attempts per hour):
Enter the details of an action. In this case we will be sending an email to [email protected]:
Save the alert and we can then view it:
We have now successfully created an alert that will trigger when there are over 25 events of an account failing to log in.
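If we preferred to bake the threshold into the search itself, the alert could instead run an hourly count and only return a row when the count exceeds the threshold (a minimal sketch; the 25-per-hour figure is the trigger value chosen above, and the alert would then fire on 'number of results greater than zero'):
source="Administrator_logs.csv" name="An account failed to log on" | stats count | where count > 25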
Submit the answers to the questions about the brute force timing, baseline and threshold. Additionally, provide a screenshot as proof that the alert has been created.