Vulnerability & Penetration Testing

This is a guide on vulnerability and penetration testing.

Vulnerability Assessments

A vulnerability assessment is an ongoing task, and it is sometimes required for compliance with various security standards or regulations. The purpose of a vulnerability assessment, of course, is to identify weaknesses before they get exploited so that we as cybersecurity analysts can do something about them. This means detecting potential weaknesses at the network level, perhaps with network perimeter devices like Wi-Fi routers, regular routers, or firewall appliances. But we can also use vulnerability assessments to gauge the security posture of individual hosts, whether it's a server, a client desktop or laptop, or a mobile device like a phone or a tablet.
 
All of these things need to be periodically scanned because vulnerability assessments are the best tool we have to truly assess risk, so they feed directly into risk management. We can also run vulnerability assessments against specific applications, like a particular web app, which ideally should be protected with a web application firewall against common web attacks: injection attacks of various types, cross-site scripting attacks, directory traversal attacks, and so on. We can also run vulnerability assessments against internal business processes, such as whether the payment of invoices is handled by more than one person internally, so having internal security controls that might not relate solely to IT systems. So, we need to run these vulnerability assessments periodically. You might ask, is this not like a security audit?
 
You can call it whatever you wish, and you might be required to do it quarterly or monthly, but there is no downside to the outcome of running frequent vulnerability assessments. Yes, it takes time. We have to allocate time for technicians or hire outside contractors, and that, of course, translates into a cost. But compared to the cost of non-compliance with regulations, fines, penalties, and loss of customer confidence due to sensitive data breaches, it's well worth having vulnerability assessments at the forefront of our minds. There are two main categories of vulnerability scanning. One is active. This means that we not only identify weaknesses or vulnerabilities, but we also use tools to attempt to exploit them in a controlled manner so that we can determine how vulnerable that system or network really is.
 
And really, what we're defining there is penetration testing, otherwise called pen testing. And you can't just conduct pen tests randomly whenever you want. There has to be an approval process in place because it could disrupt systems. If you plan on running a pen test against your cloud deployment in the public cloud, you'd better check your cloud provider's rules of engagement related to pen testing in the cloud. The next category is a passive type of vulnerability scan. This means we are identifying but not attempting to exploit discovered weaknesses. Instead, the idea is to identify those weaknesses and report on them, and then focus our energy on securing those discovered problems. So, where active is more of a penetration test, passive vulnerability scanning is basically a vulnerability assessment.
 
When you plan vulnerability scans, there are a number of things to consider, because we know scanning should be frequent, so we definitely have to plan this out. The details of planning a vulnerability scan might vary from one region of an organization to another, if it's an international firm, or even from one network to another, depending on what's being hosted on those networks. Either way, we need to begin by establishing a baseline. What is normal? Let's say on network A we plan on scheduling vulnerability scans every month. What's normal on network A for inbound and outbound traffic? Which devices normally appear on network A? Are they all patched? Are they all configured with the baseline security configuration that we've established?
 
This needs to be done because, as we've been saying over and over, how could you possibly detect suspicious activity or anomalies, at least some of the hidden ones, if you don't know what's normal? We also need to determine which scan targets we will test security against. Maybe it's not every host on a network; maybe it's only web application servers. It could be either one. What type of scan? Are we just scanning for the most common types of vulnerabilities, or do we plan on doing an in-depth, detailed scan of each and every discovered device on the network, which could take a long time? Maybe you would do that quarterly but do a quicker scan monthly.
 
You might also consider a credentialed scan. First, you need to make sure there is approval at the IT level and the management level to run vulnerability scans, because they might set off alarms for intrusion detection systems. Then, if you're going to do a very deep, intensive scan of every host, you might consider credentialed scans, where you put credentials into your vulnerability scanning tool, such as a general admin account used on stations. That allows the tool to perform not only an external scan of those hosts but an internal scan, which can shed light on a lot of vulnerabilities. There's no right or wrong; it just depends on what your requirements are. In a perfect world, you would run frequent, fully in-depth scans of everything. The vulnerability scanning process begins with step one: making sure your scanning tool is up to date.
 
That means not only the engine that actually runs the scan: vulnerability scanning tools usually also have a database of the latest vulnerabilities that they check scanned hosts against. Naturally, because threats change constantly, you want to keep this up to date. A good vulnerability scanning tool will allow you to schedule these updates so it's automated and you don't have to think about it. Once the tool is in place and we're ready to go, in step two, we run the scan. You can run this on demand, but you most certainly should have it scheduled automatically. The frequency will depend on your requirements, which could be influenced by regulations or security standards like PCI DSS. So, we could scan the entire network, and when you do that, you have to wonder: do we scan it from outside the network or from inside the network?
 
In a perfect world, both. Same with hosts. In a perfect world, we would scan them with no credentials to see what an attacker just scanning the network would see: would there be any vulnerabilities? But a credentialed vulnerability scan, where you're signed into the host, might point out vulnerabilities in components you might be using on a web server that are out of date, which you would not learn from an external scan that wasn't credentialed on that host. So, we see the benefit of performing multiple types of scans. And of course, you might choose to scan individual applications on hosts, so there could be a hybrid of targets. Then we prioritize the scan results.
 
Based on what? This is easy. Which assets, for example, which web servers, are considered more critical than others? Either because they have a very high dollar value to the organization, they generate revenue, they are used in research that results in valuable research data, or they process very sensitive customer financial data. Whatever the case, we need to prioritize the scan results to determine where we're going to focus our energy: to implement or modify security controls to protect high-value assets. But be very careful with that. For example, if you've got vulnerabilities on a revenue-generating web server and vulnerabilities discovered on a user desktop, don't just assume that the desktop vulnerabilities are unimportant, because if that desktop is compromised, it could lead to your revenue-generating server being compromised.
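As a toy illustration of this kind of triage, not any particular scanner's report format, findings could be kept in a simple text list with the CVSS score first and sorted highest-first before asset context is factored in. All of the scores, host names, and findings below are invented:

```shell
# Hypothetical findings list: "CVSS-score asset finding" (all values invented)
cat > findings.txt <<'EOF'
7.5 web01 TLS weak ciphers supported
9.8 desktop12 unpatched remote code execution flaw
5.3 web01 directory listing enabled
EOF

# Numeric sort on the leading CVSS score, highest first
sort -rn findings.txt
```

Note that the 9.8 on a user desktop sorts above the web server findings, which is exactly the point: a raw severity ranking is only a starting position, and a compromised desktop can still be the path to a revenue-generating server.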
 
You have to be very careful about how you prioritize these results. So, the vulnerability scan will have a number of outcomes. It'll identify any missing patches that should be applied. A vulnerability assessment might also include testing user awareness of things like phishing scams by sending out fake email messages so that we can gauge user responses. The scan might also point out any security misconfigurations we have, like the use of weak passwords or default settings not being changed, known flawed components being used, such as on a web server, or unnecessary public-facing services that really should only be visible privately on an internal network.
 
Running vulnerability scans can also result in defined actions we will take to address reported weaknesses. So, there are a lot of benefits in conducting vulnerability assessments: achieving legal, regulatory, or security standards compliance; preventing security incidents or reducing their impact; maintaining client and shareholder confidence in our business processes and supporting IT systems; avoiding fines and penalties, such as those due to data privacy breaches; and, of course, enhancing business continuity. If there is a downside to vulnerability assessments, it pales in comparison to the benefits.
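To make the scheduling idea discussed above concrete: on a Linux host, a recurring scan is often just a cron job wrapped around the scanning tool. The entry below is only a sketch; the path, timing, target network, and output location are all assumptions, and a full-featured scanner like Nessus has its own built-in scheduler that you would normally use instead:

```shell
# Hypothetical root crontab entry (installed with `crontab -e` as root):
#   0 2 1 * * /usr/bin/nmap -sV 192.168.2.0/24 -oX /var/log/scans/monthly.xml
# This would scan the 192.168.2.0/24 subnet at 02:00 on the 1st of every
# month and save versioned service results as XML. Paths, schedule, and
# target are placeholders, not a recommendation.
```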
 

Common Vulnerability Assessment Tools

We now know the importance of periodic vulnerability assessments, so let's take a few minutes to focus on vulnerability assessment tools and their features. The first thing to consider is that we've got vulnerability assessment tools that we can use to test the security of on-premises networks and hosts as well as cloud-based networks and hosts. And if we want to be even more granular, there are specific tools that will allow us to scan applications for vulnerabilities. Regardless of the specific vulnerability scanning tool we're using, there are some common features we should look for, such as the ability to schedule scans. Remember, for compliance with some regulations or security standards such as PCI DSS, you're going to have to conduct security scans perhaps more than just once per year.
 
Things change a lot on the threat landscape within a one-year period; performing vulnerability scans monthly would be a much better solution. You also want the option of running credentialed scans, where you feed credentials into your scanner tool so that when it reaches out to hosts it discovers on the network, it can use those credentials to sign in and do a detailed, thorough scan of each machine. Besides scheduling the scans themselves, we also want a way to schedule updates of the vulnerability database that the tool uses. Now, we also have network scanning and mapping tools. They're a little different because they aren't designed to focus on vulnerabilities, but rather on discovering what is actually up and running on the network.
 
These days a lot of vulnerability assessment tools have this built in, of course, but on their own, network scanning and mapping tools discover networks and the active hosts responding on them (unless those hosts are firewalled to the hilt, in which case they might not show up), and in some cases which services are running on those hosts, like an HTTP web server or an SMTP mail server. Network scanning and mapping is great from the security technician's standpoint to determine what's on the network and what might be running, but it's also used by malicious actors as part of reconnaissance. As a result, network and host scans are easy to detect with standard intrusion detection system (IDS) tools. So, if we're doing this legitimately on an internal network, we need to be aware of the IDS configuration so we don't unnecessarily set off alarm bells.
 
So, what are some common network scanning tools? Well, we know they can be used by security technicians, or by attackers during the reconnaissance phase. Tools include Nmap, the Angry IP Scanner, and Maltego, which integrates with many other security-based information sources and tools like Shodan, VirusTotal, and WHOIS. Then there are cloud scanners like Scout Suite, Prowler, and Pacu. Then there are vulnerability scanners and debuggers. Vulnerability scanners are more than just network scanners because they are designed to compare the list of vulnerabilities in a vulnerabilities database against what is discovered on hosts. Tools include Nessus and OpenVAS, to name but a few. Then there are debuggers, used by software developers and analysts to learn about the behavior of malware, in some cases to begin the reverse engineering process. Debuggers include the GNU Debugger and Immunity Debugger, again to name but a few.
 
Pictured on the screen, we've got a screenshot of the shodan.io website. The Shodan website is very interesting,
 
[Video description begins] A screenshot appears. It shows a website: Shodan. The menu bar contains Explore and Pricing options and a Search bar. The Explore option is active. Below, it contains various sections: CATEGORIES, RESEARCH, BROWSE SEARCH DIRECTORY, Popular Tags, and so on. [Video description ends]
 
in that it allows you to browse discovered IT services over the network, meaning over the Internet, that might be considered vulnerable: video surveillance systems, video cameras, industrial control systems exposed to the Internet, databases, RDP hosts, the list goes on and on. So, it's quite an eye-opening experience to explore the shodan.io website. There are also web application vulnerability scanners like the OWASP Zed Attack Proxy, otherwise just called OWASP ZAP, Arachni, Nikto, Recon-ng, and Burp Suite, again to name but a few. Is one tool better than another? Not necessarily. Some organizations might even use a combination of these tools for the utmost in security scanning.
 

Using Nmap to Conduct Port Scanning

In this demonstration, we're going to take a look at how to use the free Nmap tool to perform a host scan. Nmap is a host as well as a network scanner. You can scan a network or an individual host to see which machines respond and also to get a list of which services are running on them, such as HTTP, SMTP, DNS, and so on. Some Linux distributions might already include Nmap; otherwise, you can install it on a multitude of platforms beyond Linux. Here I've navigated to nmap.org and I'm looking at the Download page, where we can get the latest Nmap version for Windows, macOS, Linux, and other OSs, in which case you get the source code that you would then have to compile.
 
[Video description begins] A webpage appears with the heading: Download the Free Nmap Security Scanner. It contains various tabs: Download, Reference Guide, Book, Docs, Zenmap GUI, and In the Movies. The main pane displays: Get the latest Nmap for your system: Windows, macOS, Linux (RPM), Any other OS (source code). [Video description ends]
 
You also have a GUI version of that called Zenmap.
 
We'll be using Nmap in this demo, but in another demo, we'll take a look at the Zenmap GUI. Here I happen to be using the Kali Linux distribution, which includes, among many other tools, Nmap. I'm going to go ahead and click on the Terminal icon in the upper left to open a Terminal window. The first thing I want to do is run sudo service ssh status to see if the SSH daemon is running.
 
[Video description begins] A terminal window appears with the heading: kali@kali : ~. [Video description ends]
 
 
OK, it looks like it's inactive, so I'm going to go ahead and use the Up arrow key to bring up that previous command and change the word status to start.
 
[Video description begins] A command is added: sudo service ssh status. [Video description ends]
 
[Video description begins] Another command is added: sudo service ssh start. [Video description ends]
 
So, now if I check the status, the SSH daemon is active and running. Next I'll run sudo netstat -an to list all ports numerically, and pipe that to grep because I'm interested in :22, port 22. We do have port 22 in a listening state.
 
[Video description begins] A new command is added: sudo netstat -an | grep :22. [Video description ends]
 
That's TCP for IPv4 as well as IPv6. Now, the reason we're looking at this is that we want a few services running here, because we can run Nmap against the host to see what services are there. So, I'm going to run sudo nmap against the localhost. You don't have to do that, and we'll do a network scan after, but: 127.0.0.1 -sU, meaning I want to perform a UDP scan, plus uppercase T for a TCP scan as well, and I'll press Enter. What it's telling us is what we already expected: the SSH port is in an open state. So, the SSH daemon is running on this machine and accepting connections. There are no other services listed.
 
[Video description begins] A command is added: sudo nmap 127.0.0.1 -sUT. [Video description ends]
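The steps so far can be sketched together. The commands that need a live system (root privileges and a running SSH daemon) stay in comments; the grep filter itself can be exercised against a canned line shaped like typical netstat output, which is invented here for illustration:

```shell
# Live commands from the demo (they need root and a running SSH daemon):
#   sudo service ssh start
#   sudo netstat -an | grep :22
#   sudo nmap 127.0.0.1 -sUT
# Exercising the grep filter against a sample netstat-style line:
sample='tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN'
printf '%s\n' "$sample" | grep ':22'
```

On a real host, the same grep would match every socket line mentioning :22, which is why the demo pipes netstat into it.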
 
If I clear the screen and run ip a, notice that we are on the 192.168.2 network. I know that's the network because we have a /24 mask, so the first 24 bits, or the first 3 bytes, identify the network. So, let's do a network scan: sudo nmap 192.168.2.0/24, and we'll add -v for verbose.
 
[Video description begins] A new command is added: ip a. [Video description ends]
 
[Video description begins] A command is added: sudo nmap 192.168.2.0/24 -v. [Video description ends]
 
OK, notice that for each host it's either displaying that the host is down or showing open ports, with the port numbers listed, like port 8008, port 80, and 443. We also have the IP address of the host on which each service is listening. Now, as is usually the case with any powerful tool, this can be used for good or bad purposes. For legitimate purposes, security technicians can run periodic scans on the network so they know what is on their network and whether there are unnecessary ports open, like port 80 or port 443, which would indicate that we might have a web server. We could determine whether or not that should actually be running.
 
It's all about reducing the attack surface and only having what you need on your network, or perhaps also using network scans to discover new hosts on the network that shouldn't be there, which could be suspicious. OK, once the scan has completed, we can go back up through the output to get a listing of the devices. It might be able to discover a name or model for a device, and it shows things like the hardware or MAC address as well as any open ports. Some hosts may not have many open ports; others may have plenty of them. Now, of course, you could use standard output redirection in Linux, such as the greater-than symbol, to write that to a file. But you can also use -o for output, say with X for XML. Maybe I'll call this file nmap_results.txt. Once that's completed, if I do an ls, there's my nmap_results.txt.
 
[Video description begins] A new command is added: sudo nmap 192.168.2.0/24 -v -oX nmap_results.txt. [Video description ends]
 
[Video description begins] Another command is added: ls. [Video description ends]
 
Of course, I might want to rename that so it has an XML extension; it doesn't really matter in this case. I'm just going to run cat against that file so we can see what's inside it. And indeed, it is formatted as an XML file.
 
[Video description begins] A command is typed in: cat nmap_results.txt. [Video description ends]
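One advantage of XML results is that they're easy to post-process. As a small sketch, using a made-up fragment shaped like nmap's -oX output (a real file carries many more attributes per element), standard text tools can pull out the open port numbers:

```shell
# Made-up fragment resembling nmap -oX output (real files are richer)
cat > nmap_results.xml <<'EOF'
<host><address addr="192.168.2.1" addrtype="ipv4"/>
<ports><port protocol="tcp" portid="80"><state state="open"/></port>
<port protocol="tcp" portid="443"><state state="open"/></port></ports></host>
EOF

# Pull the port numbers out of the portid attributes
grep -o 'portid="[0-9]*"' nmap_results.xml | cut -d'"' -f2
```

For anything beyond a quick check you'd use a real XML parser rather than grep, but this shows why machine-readable output formats matter for automation.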
 
You can also perform OS fingerprinting, for example, nmap -O. So, we can try to determine what kind of an OS is running on a given host. We can then specify an individual host or a range. Let's say I specify an individual.
 
[Video description begins] Another command is added: nmap -O 192.168.2.1. [Video description ends]
 
Now, here it says we need root privileges. That's simply because I forgot to add the sudo prefix to run this as an elevated command.
 
[Video description begins] He modifies the previous command and types out: sudo nmap -O 192.168.2.1. [Video description ends]
 
Now, OS fingerprinting isn't always successful, but Nmap will attempt to determine what kind of OS it is.
 
So, the OS guesses returned here are Linux variants, and in fact, this happens to be a wireless router running an embedded Linux firmware OS. There are plenty of other command line options; we can't possibly cover them all. However, be aware that you can look at the manual page, the man page: sudo man nmap.
 
[Video description begins] A new command is added: sudo man nmap. [Video description ends]
 
So, of course, Nmap is a Network Mapper. It's not quite correct to call it a network vulnerability scanner because it doesn't actually probe each individual device, looking inside the OS or at apps that might be configured in an insecure manner or that might be missing patches. Some people will say it's kind of a vulnerability scanner, in that it shows you what's up and running on the network and what listening ports are available.
 
But generally, most people call Nmap a network scanner or a network mapper. At any rate, here in the man page, we have all kinds of examples of using command line switches like -A, and many other examples. And as we get further down, when it comes to host discovery, we have a lot of variations on command line parameters and various scanning techniques. One that we used here, you might recall, was performing a UDP scan. The man page tells us that that is done with -s and U together. Or you can specify only certain ports that you're interested in scanning with -p. So, I'll press Q to quit.
 
As an example, if we run sudo nmap -p 80 192.168.2.0/24, basically I want to see all of the web servers that are listening on port 80 on my subnet.
 
[Video description begins] A command is added: sudo nmap -p 80 192.168.2.0/24. [Video description ends]
 
So, if you see a state of open for port 80, it's a standard port 80 HTTP web server. Or if it's firewalled in some way, it might show up as filtered. As a cybersecurity analyst, you must have a sense of how the Nmap tool is used and what the results mean. Some of the questions on the Cybersecurity Analyst+ exam might test your knowledge of what you should do, such as how to test which web servers are available on the network at the current time, or how to read output from a command, where you have to determine which command resulted in that output.
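To make the open-versus-filtered distinction concrete, here's a canned sample shaped like nmap's per-host output (the addresses and states are invented), with grep counting the hosts that reported port 80 open rather than filtered:

```shell
# Sample output resembling an nmap -p 80 subnet scan (values invented)
cat > scan80.txt <<'EOF'
Nmap scan report for 192.168.2.1
80/tcp open     http
Nmap scan report for 192.168.2.20
80/tcp filtered http
EOF

# Count hosts that reported port 80 open rather than filtered
grep -c '^80/tcp open' scan80.txt
```

Quick text filters like this are how you'd turn a subnet-wide scan into an answer to "which hosts are serving web traffic right now?"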
 

Conducting a Network Vulnerability Assessment

In this demonstration, I will be using Nessus to execute a vulnerability scan. Nessus is a tool that, like Nmap, can scan the network to see which hosts are there and which services are running. But unlike Nmap, Nessus is actually a vulnerability scanner, meaning it has a list of common vulnerabilities that it checks against hosts it discovers on the network. So, it's doing a bit more of a deep dive than simply scanning the network and showing which ports are open on hosts. It can determine if there are insecure configurations at the OS level, or if there are missing patches, and so on. So, here in my web browser, I've navigated to the tenable.com/downloads page, where I can choose the specific Version and Platform for the Nessus download.
 
[Video description begins] A webpage appears with the heading: Download Tenable Nessus. The main pane displays: Download and
Install Nessus. Below, it shows: Choose Download. Further, it contains two field options: Version and Platform. [Video description ends]
 
 
Now, I've already downloaded and installed this to save time. On the host where you've installed it, if it's running Windows, when you go into your Services tool and scroll down to the T's, you will see Tenable Nessus. It's set as an Automatic start service, which is good because, of course, you can schedule recurring scans.
 
[Video description begins] A window appears titled: Services. It contains a left panel for description. The right panel contains a table with the column headings: Name, Description, Status, Startup Type, and so on. It contains various items such as Task Scheduler, Tenable Nessus, Themes, Time Broker, and so on. [Video description ends]
 
When Nessus is installed, it installs a web application that listens by default on the localhost where you've installed it on port 8834, and it should automatically open a web browser to allow you to log in.
 
[Video description begins] A webpage appears with the heading: Nessus Professional / Login. The main pane displays: tenable Nessus Professional. It contains two field options: Username and Password. [Video description ends]
 
So, for the credentials, I'm going to use admin as the username and admin as the password. If for some reason you forget the credentials for your Nessus installation, you can go to the Command Prompt.
 
Here I've navigated to Program Files\Tenable\Nessus, where you can run nessuscli chpasswd, for change password, and then give it the user whose password you would like to change, such as admin.
 
[Video description begins] A terminal window appears with the heading: Administrator: Command Prompt. He adds a command: Nessus>nessuscli chpasswd admin. [Video description ends]
 
It'll prompt you for the password, you'll confirm it, and then it's done. So, you can then use those credentials to sign in on port 8834. If I go to All Scans, New Scan, one of the things you can specify are Credentials.
 
[Video description begins] A webpage appears with the heading Nessus Professional / Scans. The left navigation menu contains various options: My Scans, All Scans, Trash, Policies, Plugin Rules, Customized Reports, and Terrascan. The main pane displays: New Scan / Malware Scan. Below, it contains three tabs: Settings, Credentials, and Plugins. Further, it contains two options: SSH and Windows. [Video description ends]
 
In this case, for Malware Scanning, I can specify SSH credentials for Linux-based hosts or Windows credentials. Now a credentialed scan means you're putting in the credentials, so that Nessus has the ability to go into that operating system and poke around, and that will give you a much more detailed perspective on things like malware.
 
If I don't specify any credentials, then the scan will only see what an outsider might see if they were scanning the network looking for vulnerabilities. With credentials, the scan sees more of what malware infecting a machine with a logged-on user would see. So, when you configure a scan, one of the things to consider is the target. You could specify a range of IPs or an entire subnet, for example, 192.168.2.0/24.
 
[Video description begins] The Settings tab is now active. The second left menu contains various options: BASIC, General, Schedule, Notifications, DISCOVERY, ASSESSMENT, REPORT, and ADVANCED. The main pane displays: General Settings. It contains various field options: Name, Description, Folder, Targets, Upload Targets, and so on. [Video description ends]
 
Or I could specify a range by using a dash between two IP addresses. You can also click Schedule to enable scheduling so you can have a recurring scan. Now if I go back to All Scans and choose New Scan, notice that there are plenty of options available.
 
[Video description begins] The main pane displays: Scan Templates. It contains various templates: Host Discovery, Basic Network Scan, Advanced Scan, Advanced Dynamic Scan, Malware Scan, Mobile Device Scan, Web Application Tests, WannaCry Ransomware, Active Directory Starter Scan, Internal PCI Network Scan, and More. [Video description ends]
 
There's a Basic Network Scan, an Advanced Scan, a Malware Scan, a Mobile Device Scan (which would require a product upgrade), Web Application Tests, a scan specifically for the WannaCry ransomware, a Microsoft Active Directory starter scan, and, if I go down under COMPLIANCE, even an Internal PCI Network Scan for PCI DSS compliance, where one of the control objectives is to have a vulnerability management program with periodic scans. I'll go to All Scans so I can see what the results are thus far. If I click on Advanced Scan, I see it's discovered a number of hosts on the network in the target range I specified.
 
[Video description begins] The Advanced Scan page from All Scans option is now open. It contains 3 tabs: Hosts, Vulnerabilities, and History. Below, it contains a table with a horizontal bar chart with the column headings: Host and Vulnerabilities. There are two sections in the right pane: Scan Details and Vulnerabilities. [Video description ends]
 
And based on our Vulnerabilities legend, it looks like we have just a handful of Critical, High, and Medium vulnerabilities on given hosts, shown here with their IP addresses. Let's say I click on the second one found here. It looks like we've got a high-severity SSL issue, and if I click on that, it gives me the details: SSL might be configured on the host using only medium-strength ciphers, or encryption functions.
 
[Video description begins] The main pane displays: Advanced Scan / 192.168.2.10/SSL (Multiple Issues). The Vulnerabilities tab is active. It contains a table with the column headings: Sev, CVSS, VPR, Name, and so on. [Video description ends]
 
[Video description begins] A page appears which displays: SSL Medium Strength Cipher Suites Supported (SWEET32). Below, it contains various information sections: Description, Solution, See Also, Output, and so on. The right pane shows Plugin Details and Risk Information. [Video description ends]
 
The great thing about vulnerability scanners like this is that they also provide references for further reading. I can use the links at the top to go back through my scans. Looks like I've got a lot of SSL-related issues for certificates and cipher suites. Always bear in mind that while we explore these tools and look at these things manually, a lot of threat-hunting, SIEM, and SOAR tools will do this automatically. You would perhaps configure alert thresholds, but a lot of this is automated. And on the Cybersecurity Analyst+ exam, you might get a scenario-based question that asks what you should do to enhance the security posture of an organization, and part of the answer might include scheduling credentialed vulnerability scans.
 

Using Zenmap for Network Scanning

We've briefly discussed already how to use the Nmap network scanning tool and the fact that you can download it for free from nmap.org. Now we can also work with the Zenmap GUI.
 
 
[Video description begins] A webpage appears with the heading: Download the Free Nmap Security Scanner. It contains various tabs: Download, Reference Guide, Book, Docs, Zenmap GUI, and In the Movies. [Video description ends]
 
So, this is basically a graphical user interface front end that runs Nmap in the background with various command line options, depending on the type of scan you tell it you want to run. I've already downloaded and installed Nmap and Zenmap. So, from my machine's Start menu, I'm going to launch Zenmap. The Zenmap GUI shows up, I'll click to launch it, and here's its interface.
 
 
[Video description begins] A new window appears with the heading: Zenmap. It contains a menu bar with 4 options: Scan, Tools, Profile, and Help. Below, it contains 3 field options: Target, Profile, and Command. Besides, it has two buttons: Scan and Cancel. The left pane contains two tabs: Hosts and Services. The right pane contains various tabs: Nmap Output, Ports/Hosts, Topology, Host Details, and Scans. [Video description ends]
 
Now, because we've installed Nmap and it uses Nmap in the background, if we were to pop out into a command line environment, we could also run nmap from here if we so chose.
 
[Video description begins] A Command Prompt window appears. He types in: nmap. [Video description ends]
 
However, I'm going to exit this and go back into the Zenmap GUI.
 
The first thing we have to determine here is the Target. What is it that we want to scan? It could be an individual host like 192.168.2.1, it could be a range of IPs specified using dash notation, or it could be an entire subnet specified using CIDR notation. Here I want to scan 192.168.2.0/24. Now, for the Profile, you can tell it you want to do an Intense scan, which will take the longest time, certainly longer than just doing a Quick scan.
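The target notations just described map directly onto plain nmap commands. As a sketch, using this lab's 192.168.2.x network (substitute your own addresses):

```shell
# Three ways to specify scan targets -- the same notation Zenmap's
# Target field accepts. Addresses below are from this lab network.
nmap 192.168.2.1        # a single host
nmap 192.168.2.1-50     # a range of IPs using dash notation
nmap 192.168.2.0/24     # an entire subnet using CIDR notation
```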
 
[Video description begins] A drop-down list appears below the Profile option. It contains various options: Intense scan, Intense scan plus UDP, Intense scan, no ping, Quick scan, Quick scan plus, Slow comprehensive scan, and so on. [Video description ends]
 
It'll be able to probe more deeply into the services running on the machine to see if there are any vulnerabilities. Or you can do the Slow comprehensive scan. So, for example, let's say we do an Intense scan. Now, what it's doing as I switch between these different profile types is changing the command line syntax for the underlying nmap command.
 
OK, we could run that at the command line if we so chose. When I've got the parameters set correctly, I could then click the Scan button to begin the nmap scan. You can also go to the Scan menu and Open Scans that you've saved in the past. I've got one here called Scan1 and it saves it in XML format, so I'm going to go ahead and Open that past scan. This is the result you get when you run an nmap scan.
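As a rough sketch of what that profile dropdown generates, here are the commands shown in my Zenmap Command field for two of the profiles (the exact flags may vary by Zenmap version):

```shell
# Intense scan: aggressive timing (-T4), plus -A for OS detection,
# version detection, default script scanning, and traceroute.
nmap -T4 -A -v 192.168.2.0/24

# Quick scan: same timing, but -F restricts it to the most common ports.
nmap -T4 -F 192.168.2.0/24
```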
 
[Video description begins] A dialog box appears with the heading: Open Scan. The left navigation menu contains various options: Recent, Home, Desktop, Documents, Downloads, and so on. The Recent option is highlighted. The main pane contains a table with the column headings: Name, Location, Size, Type, and Accessed. It contains two items. [Video description ends]
 
For example, over on the left, I could simply view either the Hosts that were discovered, so it's shown here by IP address and also sometimes by MAC address or by host name, such as a Microsoft XBOXONE device, an HP network printing device. You could also click on Services to organize it by the discovered services like http.
 
When I select http on the left, on the right, it exposes those discovered hosts from the scan, whatever the scan range was, that are running some form of an HTTP server. For example, it's telling me we've got an HP Officejet Pro 8610 network printer. That's a lot of returned detail. We've also got other hosts that are reporting that they are running, for example, the nginx HTTP web server stack or a Microsoft HTTP web server. If it's easy for us to do this, and this is a non-credentialed network scan, then it's just as easy for an attacker performing reconnaissance to get the same information, assuming they can get on the network somehow. Now we're looking at Ports/Hosts over on the right.
 
We can also go to Host Details to get details about the selected host, like the number of Open ports, the Last boot time (so it's time-stamped), its configured addressing information, and the version of the operating system, which is valuable information that we don't really want to have advertised, as well as the Ports that are in use on that host, like port 135 and port 32815.
 
[Video description begins] The Host Details tab is open in the right pane. It contains various sections: Host Status, Addresses, Operating System, Ports used, and More. The Host Status section contains various details: State, Opened ports, Filtered ports, Closed ports, Last boot, and so on. [Video description ends]
 
Again, if we view it from the host's perspective by clicking Hosts in the left-hand navigator, we can zoom directly into a particular device to get its individual details. What you're trying to achieve by running these types of network scans will determine whether it's a good thing that you can probe and get this detail. This way you kind of have an inventory; you know exactly what's on your network.
 
But that can also be achieved using host inventory tools that have an authenticated software agent running locally. Ideally, we don't want any information disclosed when someone performs an uncredentialed scan, such as was done here. Now, this is still just a network scanner. It's not going in-depth into each host looking for vulnerabilities at the OS level or at the installed-app level, such as missing patches. It's not doing that as other tools would do.
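For context, nmap itself can go one step beyond plain port discovery using version detection and its scripting engine, though this is still nothing like a full credentialed vulnerability scan. A sketch, assuming you have express written authorization for the lab target used here:

```shell
# Service version and OS detection (root privileges are needed
# for OS fingerprinting). Target IP is from this lab network.
sudo nmap -sV -O 192.168.2.10

# Run the NSE "vuln" script category for basic vulnerability checks.
sudo nmap --script vuln 192.168.2.10
```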
 
In closing, we can also go to the Tools menu and Compare scan results by selecting what they refer to as the A Scan and the B Scan. These are two scans from different points in time, ideally of the same network, of course, so you can pinpoint things that have been added to or removed from the network.
 
[Video description begins] A dialog box appears with the heading: Compare Results. It contains two drop-down menu options: A Scan and B Scan. [Video description ends]
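The same comparison can be done at the command line with ndiff, the scan-diffing tool that ships with Nmap; the file names here are hypothetical saved scans in XML format:

```shell
# Compare two saved Nmap XML scans of the same network taken at
# different times; the output shows hosts and ports added or removed.
ndiff scan-a.xml scan-b.xml
```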
 

Testing Web Application Security

So far, we've discussed the importance of scanning networks and then scanning hosts on those networks to see which ports are open, and also how important it can be to run vulnerability scans to do an in-depth analysis on given hosts, including all the software installed on them, making sure configurations are secure, that outdated components aren't being used, and that patches are applied. Here we're going to focus on testing web application security, and we're going to do that using the OWASP ZAP tool, the Zed Attack Proxy. First things first, I've downloaded the Metasploitable 2 virtual machine, which includes a collection of intentionally vulnerable web applications that can be set to low security for testing, like we're going to do.
 
So, I'm going to start here by clearing the screen and typing ip a to get the IP.
 
[Video description begins] A terminal window appears with the heading: 192.168.2.34 - PuTTY. [Video description ends]
 
[Video description begins] A command is added: msfadmin@metasploitable:~$ ip a. [Video description ends] So, this intentionally vulnerable web app is listening on IP 192.168.2.34. And if I go into a web browser and navigate to that IP address, indeed we've got the metasploitable2 intentionally vulnerable web application available.
 
[Video description begins] A webpage appears titled: Metasploitable2 - Linux. The main pane displays Warning, Contact, and Login. Below, it contains various links: TWiki, phpMyAdmin, Mutillidae, DVWA, and WebDAV. [Video description ends]
 
Now, one thing we can do here is go to DVWA and sign in with the credentials this is configured with; the default credentials are a Username of admin and a Password of password. What I can do then is go to DVWA Security on the left, set it down to low, and Submit.
 
[Video description begins] A new page displays: DVWA. It contains two fields: Username and Password. [Video description ends]
 
[Video description begins] A new page appears with the heading: Damn Vulnerable Web App. The left navigation menu contains various options: Home, Instructions, Setup, Brute Force, File Inclusion, SQL Injection, DVWA Security, Upload, PHP Info, About, and so on. The DVWA Security option is active. The main pane contains two sections: Script Security and PHPIDS. The Script Security section contains a drop-down menu field option which displays: low. [Video description ends]
 
By setting the Security level to low, all of these web pages with all these ways to test things like SQL injection attacks and brute-force attacks use an intentionally vulnerable version of the page, and that's what I want here for our web app security scan.
 
There are plenty of tools you can use to scan the security of a given web application. We're going to use the free Zed Attack Proxy, or OWASP ZAP, tool, which can be downloaded from zaproxy.org/download.
 
[Video description begins] A new webpage appears with the heading: ZAP - Download. The main pane displays: ZAP 2.13.0. Below, it contains various download links. [Video description ends]
 
In some cases, you might have a distribution such as Kali Linux, which is designed for pen testers and already includes the OWASP ZAP tool. So, here in Kali Linux, I'll open up my menu in the upper left and go down to item 03, Web Application Analysis; over on the right, among the list of tools, I have ZAP. I'm going to tell it No, I do not want to persist this session at this moment in time; every time I start the ZAP tool, I want a fresh new session. I'll click the Start button.
 
[Video description begins] A menu appears on the desktop. It contains various options: Favorites, Recently Used, All Applications, Information Gathering, Vulnerability Analysis, Web Application Analysis, Database Assessment, and so on. The menu besides it contains further options such as: Web Application Proxies, Web Vulnerability Scanners, burpsuite, ZAP, and More. [Video description ends]
 
OK, that puts me in the interface for ZAP where I'm going to click Automated Scan, and in the URL to attack field, I'm going to pop in the IP address of the web app server, 192.168.2, in our case, .34.
 
[Video description begins] A window opens with the heading: OWASP ZAP - OWASP ZAP 2.10.0. It contains a menu bar with various options: File, Edit, View, Analyse, Report, and so on. The main pane contains three sections. The left pane displays Sites. The main right pane contains three tabs: Quick Start, Request, and Response, The Quick Start tab is active. Below, it displays: Automated Scan. Further, it contains various field options: URL to attack, Use traditional spider, Use ajax spider, Progress, and so on. The Use ajax spider option contains a check box and a drop-down menu. Below, it has two buttons: Attack and Stop. [Video description ends]
 
Down below, I'm going to tell it to also use an ajax spider with a web browser to test the security. I'll tell it to use Firefox. Now, to start this scan for web vulnerabilities, I would click Attack. You want to make sure that you have express written consent before checking for these vulnerabilities; even though we're only checking, it can impact the performance of the web application. So, I'm going to go ahead and click the Attack button, and we already have some information here. Basically, it's scouring all of the pages on that web application server and going through them all.
 
[Video description begins] The panel at the bottom is active now. It contains various tabs: History, Search, Alerts, Output, and Spider. The Spider tab is active. Below, it contains a progress bar. Further, it contains three tabs: URLs, Added Nodes, and Messages. The URLs tab is active. It contains a table with the column headings: Processed, Method, URI, and Flags. [Video description ends]
 
So, notice it's going through different pages, different graphics. It's testing HTTP GETs and HTTP POSTs, and it's doing this using the Firefox browser. Now, how long this takes will depend on the web application: how many pages there are and how complex it is. One of the things we want to focus on here in the ZAP tool is the Alerts output down below, so I'm going to click on Alerts. We already have some issues that we need to pay attention to, and notice the counters; the numbers are rolling as we're looking at it and it discovers new related items. For example, Application Error Disclosure.
 
[Video description begins] The Alerts tab is now active. It contains two sections. The left panel contains various files: Alerts (13), Application Error Disclosure (53), Vulnerable JS Library, Cookie No HttpOnly Flag (14), and so on. The panel on the right displays a description of the selected alert. [Video description ends]
 
So, when we read the description, this is where you get the meat and potatoes. It's going to tell you what the problem is like specific pages. The page in question here you'll be able to gather from the URL at the top.
 
This is a Medium risk issue. It might be providing sensitive information, like the location of a file, if there's some kind of an error. The great thing is it's not all doom and gloom; it also offers a solution. We've got another problem here, Vulnerable JavaScript, or JS, Library. And it says over here on the right, for the Solution: Please upgrade to the latest version of jquery. That would be the responsibility of the web server technician. It even lists a couple of Common Vulnerabilities and Exposures, or CVE, documents for those specific problems. If I select Cookie No HttpOnly Flag, it looks like there's a PHP session ID cookie that is not using the HttpOnly flag, which means it might be vulnerable to being accessed by local scripts.
 
Now, notice it's still active and if I go back to the Spider tab, we get a sense of how far along we are, in this case, 59%. We also have a Report menu where we can Generate a variety of different types of reports in different formats, such as an HTML report of the findings of our web application vulnerability scanner, XML, or JSON.
 
[Video description begins] A drop-down list appears under the Report option on the menu bar. It contains various options: Generate HTML Report, Generate XML Report, Generate Markdown Report, Generate JSON Report, Compare with Another Session, and so on. [Video description ends]
 
You also have the option of comparing it with Another Session, so that if vulnerabilities were detected, let's say, the last time we scanned that web app a month ago, we can determine whether they've been addressed by running the scan again now. So, needless to say, this is a crucial tool for helping to ensure that web applications are kept safe, in addition to other techniques such as using a web application firewall.
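For scheduled, unattended runs of this kind of scan, the ZAP project also publishes packaged scan scripts. As a sketch, assuming Docker is available and using our lab target's IP, a passive baseline scan could look like this (the image and script names follow the ZAP project's Docker packaging and may differ by version):

```shell
# Passive baseline scan of the lab web app; -r writes an HTML report.
# Only run against systems you have express written consent to test.
docker run -t zaproxy/zap-stable zap-baseline.py \
    -t http://192.168.2.34/ -r zap-report.html
```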
 

Penetration Testing

Penetration testing is often just called pen testing. This is an active form of security scanning. We've discussed vulnerability scans, where we scan networks and hosts seeking vulnerabilities but not trying to exploit them; vulnerability scanning is passive by nature. Penetration testing, by contrast, can include vulnerability scanning first, after which the discovered weaknesses are exploited. We use tools that are designed to exploit these vulnerabilities to test security. Now, just because a vulnerability is detected doesn't mean that when you run pen tests you will immediately succeed in breaking into a system or crashing a server, whatever the case might be.
 
But we need to recognize how penetration testing provides value to the organization's security program. Of course, it might be required for compliance with contractual obligations or regulations related to things like data privacy. And if you've ever been involved in a security audit of an organization related to its IT systems, then penetration testing may or may not have been a part of that. Organizations can use in-house cybersecurity technicians to perform these types of tests, or they might contract it out to a third party. But penetration testing is not to be taken lightly. There are always rules of engagement to consider, such as the scope. What is it that we're pen testing? Is it one server? Is it a network infrastructure appliance like a router or a switch? Is it a web app?
 
Is it an entire network? And we have to think about when we are going to conduct pen testing. Certainly, we must be in constant and clear communication with the owners of the systems we are pen testing. Maybe part of the agreement is that they don't know exactly when pen testing will occur, in an attempt to simulate real-world attacks, or there might be a very specific time frame agreed upon by the system owners and the pen test team for when pen tests will occur. Now, the reason these rules of engagement exist, and the reason we have to be very careful, is that pen testing, as you know, involves exploiting the vulnerabilities that get discovered.
 
Exploiting those vulnerabilities could result in the disclosure of very sensitive information, which is why we'll sometimes have pen testing teams sign a non-disclosure agreement, or NDA, in case they see sensitive data: customer data, employee data, medical records, financial records, company trade secrets, whatever it is. But pen testing might also disrupt services, actually making them unavailable for legitimate use, such as line-of-business servers used internally by employees. That's a serious thing. And so, that's why we have to be very clear about what the rules of engagement are when we run pen tests or when we work with teams that will conduct pen tests. And then there's the cloud. Here we have a screenshot for the Microsoft Azure cloud. The web page is labeled Penetration Testing Rules of Engagement. [Video description begins] A screenshot appears of a webpage. The menu bar displays: Microsoft | MSRC. Besides, it contains various options: Report an issue, Customer guidance, Engage, Who we are, and so on. The main pane displays: Penetration Testing Rules of Engagement. Below, it displays: INTRODUCTION AND PURPOSE. [Video description ends]
 
Of course, we must abide by these rules if we plan on running vulnerability scans or penetration tests, or both, against our deployed resources in the Microsoft Azure cloud. Just because we pay a monthly subscription for our use of cloud services doesn't mean we have carte blanche to do whatever we want when it comes to pen testing. And each cloud provider might have a different set of rules. So, it's very important for cybersecurity analysts to be aware of those rules before pen tests are conducted. There are a couple of categories of pen testing, the first of which is called a known environment, or what some people will call white box testing. This means that the details of the environment being tested are known to the pen testing team. We might wonder, what good is that? How does that test security?
 
It's very handy and very important because it can simulate insider attacks, where the configuration and the environment are partially or fully known. Now, an unknown environment is also sometimes called black box testing. This means that from the penetration team's perspective, there are no details known about what will be tested. There are no network diagrams, no server names, no IP address ranges, nothing. So really, this allows us to simulate what external attackers would be able to do. Now, they might run reconnaissance scans to learn about IP address ranges, host names, and so on, but from the beginning, nothing is known. And then somewhere in the middle, you've also got partially known environments. This is often called gray box testing; some details might be known, such as the fact that there is a web server used by an organization to host an app and it's running Apache.
 
Regardless of the type of pen test being conducted, the goal is to uncover flaws, especially those that are easily exploitable, and mitigate those problems immediately. The pen test red team is otherwise called the offensive team. This is the team that executes the penetration testing, whether it's against a network, specific devices, applications, or databases. Pen tests can also be non-technical, in the sense that they might involve social engineering through telephone calls, trying to trick people into divulging sensitive information. The idea is that we want to simulate real-world attacks as much as possible. That's the red team. In penetration testing, the blue team is the defensive team. Now, this could be in-house security technicians that secure and monitor IT systems, or it could be contracted to an outside party, whatever the case might be.
 
But the blue team monitors for security events, and coincidentally, that is what being a cybersecurity analyst is about: monitoring, detecting, and responding. Ideally, in a perfect world, the blue team will be able to detect, prevent, and even stop red team attacks while they're occurring. Just imagine the wealth of information learned from conducting pen tests. It can also enhance security awareness; the results can be used in training materials to educate staff about security, such as giving a high-level overview of pen test findings and what was vulnerable, and of course doing that in a meaningful way that's engaging and interesting to the audience it's being reported to. We can also learn about new mitigation strategies and techniques. So, there's nothing bad about penetration testing when you look at the results, analyze them, and make improvements based on them.
 

Navigating the Metasploit Framework

It's important for cybersecurity analysts to have a general understanding of some of the commonly used tools to exploit vulnerabilities. And one of those is the Metasploit Framework, which is a collection of tools with a variety of different types of exploits that can be deployed against targets. Now, you can install the Metasploit Framework manually, but it's included automatically in this case with the Kali Linux distribution. So, I'm going to go into a Terminal window to a Command Prompt environment where the first thing I'll do is change directory into /usr/share/metasploit-framework.
 
[Video description begins] A terminal window appears with the heading: kali@kali : ~. [Video description ends]
 
[Video description begins] A command is added: cd /usr/share/metasploit-framework. [Video description ends]
 
If I type ls here, I've got a number of subdirectories which are shown here in blue.
 
[Video description begins] A new command is added: ls. [Video description ends] One of those is called modules.
 
So, I'll change directory to modules and if we do an ls in here, one of the folders or directories in here is called exploits.
 
[Video description begins] Two commands are added. The first one reads: cd modules. The second one reads ls. [Video description ends]
 
Now, notice we've got exploits, payloads, and we also have a post directory. As you might guess, an exploit is code that executes on a target machine to take advantage of a vulnerability. Normally, if this were an attacker doing all of this, they would first perform some kind of reconnaissance to determine what devices are on the network, or maybe infect a host with malware through an email phishing campaign. Whatever the method might be, the attacker discovers that there's a given vulnerability on a host, and that's how they would know which exploit to use against that target. So, if I change directory into the exploits folder and do an ls, here we've got a number of folders based on platform, such as aix, android, apple_ios, bsd, firefox, linux, unix, and windows.
 
[Video description begins] Two new commands are added. The first one reads: cd exploits. The second one reads ls. [Video description ends]
 
So, if I change directory into windows, for example, and do an ls, here we have a number of exploits specifically related to the windows platform.
 
[Video description begins] Two commands are added. The first one reads: cd windows. The second one reads ls. [Video description ends]
 
Or smtp, vnc, winrm for Windows Remote Management, mssql for Microsoft SQL, and iis for the IIS web server. And if I change directory into iis, here we have a number of exploit code files written in Ruby; that's why they have the .rb file extension. So, if we were to use the cat command here to view the contents of one of these Ruby files, I'll just pipe that to more. These are actual exploit files; here we've got the code that is used to run whatever this particular exploit happens to be. I'll press Q to get out of there. So, those are exploits. If we go back up to the modules directory level, you might recall that we also had a payloads directory.
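Rather than browsing the module tree on disk, these modules are normally driven through the msfconsole interface. A sketch of the typical workflow, with a placeholder module name since the specific exploit would depend on what reconnaissance found, and only against systems you are authorized to test:

```shell
msfconsole -q    # launch the Metasploit console quietly

# Then, at the msf prompt (the module name below is a placeholder):
#   search type:exploit platform:windows iis
#   use exploit/windows/iis/<some_module>
#   show options
#   set RHOSTS 192.168.2.34
#   exploit
```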