Logging and Monitoring

This is a guide to logging and monitoring: how logs are stored, rotated, and forwarded on Linux and Windows, and how honeypots and honeynets support threat hunting.


Linux Logs

As cybersecurity analysts, we recognize the importance of logging and being able to scan through logs and look for anomalies or anything out of the norm. We're going to focus here on Linux logging facilities.
 
So most Linux logs will be stored in or under the subdirectory structure of /var/log. There are many different Linux distributions and many versions, so there can be some variance in where specific log files are stored. But it's safe to say that most of the time they're going to be somewhere under /var/log.
 
If we focus, for example, on the Ubuntu Linux distribution, the system log file is called 'syslog' and it's a text file. Now, we can also use regular expressions to filter the contents of log files and return only the specific things we want to see. Regular expressions use specialized symbols for pattern matching, which is useful when you need to sift through large amounts of text for something very specific.
 
For example, we could use the sed command: sed -e '/cron/d'. The d means delete, so sed removes any line matching the /cron/ pattern. Delete from where? From the next parameter, in this case /var/log/syslog. So this would remove any lines containing the word 'cron' from the syslog output; perhaps we don't want to see anything related to that, so we filter those entries out.
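As a minimal sketch, assuming an Ubuntu host where the system log lives at /var/log/syslog, that filter and its inverse look like this:

  # print syslog with every line containing "cron" removed
  sed -e '/cron/d' /var/log/syslog

  # the inverse: show only the lines that mention cron
  grep 'cron' /var/log/syslog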
 
And that deletion is only temporary, in the output on screen or wherever you redirect it; the original log file is untouched. We then have the Linux lastlog command. This is designed to give you a list of user accounts and when they last logged in. [Video description begins] A screenshot with a list of user accounts is displayed on the slide. Most entries in the list are seen to have never logged in, except one. This is a log entry from user cblackwell. [Video description ends]
 
Notice in our screenshot that the bulk of these user accounts have never been used to log in. So there's really nothing suspicious there; those are probably accounts used by background services or daemons. However, the highlighted entry in our screenshot is for a user by the name of cblackwell. [Video description begins] The highlighted entry reads as cblackwell pts/1 192.168.2.11 Sun Jun 11 12:02:21 +0000 2023 [Video description ends]
 
This account signed into pseudo-terminal pts/1; that's how it shows up when someone SSHes in remotely to manage a host. We also have the IP address from which that occurred and, of course, the date and time stamp. Now, this can be very important output.
 
If, for instance, we are not using the 192.168 network prefix, why on earth would we have an entry in this log stemming from that location for remote connections to this Linux host? That's weird. Or if that is a regular user account that would never sign into a Linux host to remotely manage it, that is also abnormal.
 
So if we compare or correlate this with past logon activity, which a lot of threat-hunting tools will do automatically for you, whether on-premises or in cloud computing, it would tell us that this is abnormal login activity, perhaps from a different location.
 
It's an abnormal login to a host that is not normally connected to remotely. This is the purpose of scouring through logs, manually or automatically: to discover potential indicators of compromise.
 
We then have the dmesg command, spelled d m e s g. This is designed to show us log messages related to the operation of the Linux kernel. The Linux kernel, of course, is the heart and soul of the Linux OS. [Video description begins] A screenshot populated with Kernel log messages is displayed on the slide. Here are the first two lines on the screenshot. Line 1: [ 1.300111] hub 1-0:1.0: USB hub found Line 2: [ 1.300156] hub 1-0:1.0: 6 ports detected [Video description ends] So you'll see a lot of messages related to things like hardware detection and whatnot. Now, is this important? Of course it is.
 
We might have a record, for example, of a USB thumb drive being inserted at a certain point in time when that is not normally done. We just don't plug USB thumb drives into Linux servers, so why do we have that kind of entry in a log? Maybe it's a compromise. It could even be an employee, intentionally or unintentionally, either introducing malware to a host or copying sensitive data to the USB drive. We have to account for all possibilities.
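As a hedged example of hunting for that kind of event, you could filter the kernel ring buffer for USB activity; this assumes a typical Ubuntu host where reading the buffer requires sudo:

  # look for USB device activity among the kernel messages
  sudo dmesg | grep -i usb

  # -T adds human-readable timestamps, useful for building a timeline
  sudo dmesg -T | grep -i usb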
 
You can also rotate logs. Rotating a log means creating a new log for new log events to be written to while still retaining the old ones. You can't keep log files forever; they consume space.
 
However, you might be influenced by organizational security policies, which might in turn be influenced by laws, regulations, or contractual obligations stating that you must keep logs for a certain period of time on certain types of hosts or devices.
 
Remember, the Linux OS can be tailored, watered down, and built into a lot of IoT devices. So we're not just talking about installing Linux manually on a disk on a server; that's one part of it. Many IoT devices, even some network printers, run a watered-down, specialized version of Linux tailored for that type of hardware, and as a result there will almost always be logging options available.
 
So we can use the Linux 'logrotate' tool. When I say tool, I mean it's a command you can run at the command line. You can configure log rotation, and we'll see this in a demo because it's important, so that older logs are retained for a specified period of time, or compressed once they reach a certain size.
 
Or we might configure it to only keep recent logs. But let's not forget that every device, whether it's a smartphone, a specialized piece of medical hardware, a Linux server, whatever it is, we need to have logging going to a centralized location for data analysis and ultimately for threat hunting. That’s just one of those things you cannot go without.
 
So, Linux log rotation. We have a configuration file under /etc called logrotate.conf. It includes a directive to include the contents of the /etc/logrotate.d directory, where you can have individual configuration files for log rotation for things like the Apache web server, SSH, and all these different services.
 
So that's really the purpose of that. It's a main global configuration file that points to a directory where you have individual log rotation config files. So with Linux log rotation we could configure one of those specific files, let's say for the Apache web server. We might use the cat command to view the contents of /etc/logrotate.d/apache2.conf.
 
Now, inside that file we might have the following contents: an opening {, then a number of directives. For example, missingok means that if we don't already have an existing log file for this service, just ignore that fact and proceed.
 
We also have daily, which means we want to rotate log files daily. What does that mean? Each day we start a new, empty log and retain the previous one. We can choose whatever rotation period suits us.
 
Here we have rotate 3, which means we only want to keep three rotated or archived logs. Then we have the directive notifempty, which means that if the log file is empty, maybe the service wasn't running or there was no activity, then don't bother rotating, because there's nothing to rotate.
 
Of course, we could choose to compress logs with the compress directive, and then we close it all off with a }.
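Assembled from the directives just described, that example entry would look roughly like this; it's a sketch, and the /var/log/apache2/*.log path is borrowed from the demo later in this guide:

  /var/log/apache2/*.log {
      missingok
      daily
      rotate 3
      notifempty
      compress
  }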
 
This is just one example of log rotation settings for a particular service. In reality, there are many other directive statements you might have when you configure log rotation. This is just a small sample.
 
Now, log rotation is usually scheduled as a background cron job in Linux, so it runs on a schedule automatically. However, you can also run the logrotate command manually against a specific config file, such as the one we just modified in our little example. The -d flag performs a dry run in debug mode, reporting what would happen, while -f forces an immediate rotation. So it's either manual or automated with a cron job.
 
The CySA+ exam will not likely test your knowledge of specific [Video description begins] The manual command displayed on screen reads: logrotate -d /etc/logrotate.d/apache2.conf [Video description ends] syntax in Linux. While that's good to know, what's very important is being able to look at the configuration in a Linux config file, or at a command line, and determine whether it's valid. We should be able to look at the results of a command and determine what happened, or whether something is suspicious, just as we need to be able to view logs and determine whether there's anything suspicious in them.
 

Viewing Linux Logs

In this demonstration we will be viewing Linux logs. An important part of threat hunting is to be able to go through logs and look for suspicious activity. What does suspicious activity look like?
 
In one form, it could be the absence of log entries for a certain time frame, which could indicate that those logs were wiped by an attacker. Or numerous failed login attempts, or excessive traffic or port scans when that's not normal on a given network.
 
These are the types of things that can be considered suspicious. In this day and age, in a larger enterprise, you’re going to have a centralized SIEM system that analyzes ingested data from all these sources, looking for the types of things that we've just talked about and much more. So let's get familiar with Linux logging. [Video description begins] A terminal window titled cblackwell@ubuntu1:~ is open. The command prompt reads cblackwell@ubuntu1:~$ [Video description ends]
 
I’m going to change directory to /var/log. [Video description begins] The command run by the host reads cd /var/log which changes the command prompt to cblackwell@ubuntu1: /var/log$ [Video description ends] In most Linux distributions, this is where you’re going to find most log files. That can vary slightly depending on the specific software package you might be working with and whose log you might be looking for. [Video description begins] The host clears the screen and runs the ls command. The command returns a list of log files in three columns. Some of these are syslog, syslog.1, syslog.2.gz, auth.log, dmesg, and more. [Video description ends]
 
For example, here we've got syslog, but we’ve also got syslog.1, syslog.2.gz, so compressed. So log rotation, which we’ll look at separately later, allows you to retain older versions of logs to keep them archived based on your configuration while maintaining the current one, which in this case for the system log is just called syslog.
 
So if I were to cat syslog and let's pipe that to more to stop after the first screenful, here we’ve got logged information, which of course is date and time stamped. [Video description begins] The command run by the host is cat syslog | more. The log entries returned by the command populate the screen. The first log entry reads - Oct 6 10:41:00 ubuntu1 rsyslogd: [origin software="rsyslogd" swVersion="8.200 1.0" x-pid="664" x-info="https://www.rsyslog.com"] rsyslogd was HUPed [Video description ends]
 
We have the name of the host, in this case it's the server we are on, called ubuntu1. And we'll have the source of the logged entry, like the syslog daemon, or systemd, or containerd, or the kernel. And then we have some of the details related to that log entry.
 
Let's clear the screen and do an ls; we've also got a number of interesting logs here. And again, the specific file names may vary from one Linux distribution to another. But here in Ubuntu Linux we've also got auth.log. If I cat auth.log and pipe that to more, this gives us information related to authentication, such as the use of the sudo command and, of course, the command that was executed. [Video description begins] The command run by the host is cat auth.log | more. The log entries returned by the command populate the screen. The first log entry reads - Oct 6 10:41:08 ubuntu1 sudo: omsagent : TTY = unknown ; PWD=/opt/microsoft/omsconfig/Scripts/3.x ; USER=root ; COMMAND=/opt/microsoft/omsconfig/Scripts/OMSAPtUpdates.sh [Video description ends]
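If you're hunting for brute-force attempts, a couple of hedged one-liners against this log can help; the 'Failed password' string and the field positions assume the default OpenSSH message format:

  # list failed SSH password attempts
  grep 'Failed password' /var/log/auth.log

  # count failures per source IP address
  grep 'Failed password' /var/log/auth.log | awk '{print $(NF-3)}' | sort | uniq -c | sort -rn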
 
So as usual, each logged entry is date and time stamped with the host from which that log message is sourced from. Now that becomes very important when you start forwarding log information to a centralized SIEM system. We need to know which host originated that message. I’ll press q for quit.
 
If I do an ls once again, we also have a dpkg.log file. So that's going to be related to software package activity on this machine, such as installed packages and any related activity.
 
For example, let's say we cat dpkg.log and pipe that to more: we get no results, but that could be because it's a newly rolled-over log with no entries yet. If we go back to the previous one, which is dpkg.log.1, we do have activity in here.
 
[Video description begins] The host first runs cat dpkg.log | more, but the screen returns no results, after which he runs the command cat dpkg.log.1 | more. This command returns log entries. A typical entry reads 2023-09-13 18:02:08 startup archives unpack [Video description ends]
 
So here we have things like the installation of packages, and it might be suspicious if we have the installation of packages that are not normally part of what's allowed to be installed on our Linux hosts, or the removal of security based packages like virus scanners running in Linux. I’ll press q for quit and clear the screen.
 
If I move over to a different Linux host where I know the Apache web server is installed, then we would have the option of taking a look at logs related to that specific service. [Video description begins] A new terminal window titled cblackwell@Ubuntu2:~ is open. The command prompt reads cblackwell@Ubuntu2:~$ [Video description ends]
 
So I’m going to change directory to /var/log/apache2. I’ll do an ls. Here, we’ve got the logs related to the Apache web server. [Video description begins] The first command run by the host reads cd /var/log/apache2. The command prompt now becomes cblackwell@Ubuntu2: /var/log/apache2$ and the host runs an ls command. The command returns a series of log names. Some of these are access.log, error.log, error.log.1, and more. [Video description ends] And these logs also really should be forwarded to a centralized logging host, and ingestion and data analysis system, basically a SIEM system.
 
As an example, if I were to cat the error.log file, if there are no entries, that’s good. But let’s go back and take a look at error.log.1, here we’ve got errors related to the Apache web server stack. [Video description begins] The host first runs cat error.log but the command doesn't return any entries, so he then runs cat error.log.1 which returns a series of entries that populate the screen. [Video description ends] So if there's been some kind of an attack, maybe a denial-of-service attack to freeze the web server, we’ll get those types of entries shown here in the error.log file.
 
We also have an access.log. We’re going to take a look at access.log.1, the previous version which is pretty much guaranteed to have entries. So let’s cat access.log.1 and let’s pipe that to more. Sure enough, we have activity related to the actual functioning of the HTTP web server stack. [Video description begins] The command reads cat access.log.1 | more which returns a series of entries that populate the screen. The first entry reads 162.216.150.20 -- [21/Aug/2023:00:21:52 +0000] "GET / HTTP/1.1" 200 11173 "-" "Expanse, a Palo Alto Networks company, searches across the global IPv4 space multiple times per day to identify customers' presences on the Internet. If you would like to be excluded from our scans, please send IP addresses/domains to: [email protected]" [Video description ends]
 
So, connections and things like HTTP GET requests from various types of clients. Now, what might be suspicious here? Well, reconnaissance activity, and maybe attempts to break into websites. Here, we've got a date- and time-stamped entry from the Palo Alto Networks company, which is apparently searching the global IPv4 space with network scans. So is that suspicious? No, it doesn't look to be. Although an attacker might perform these types of scans and make it look like this.
 
Now, what's always important here, of course, is the IP address information that is logged. Even though we know that can be spoofed, when you're threat hunting it's always important to begin by looking at an IP and then the activity. An HTTP GET might be fine, but it would be very suspicious to see HTTP POST transactions when our web server is basically a read-only website that does not accept submissions. So why do we have POSTs? That would be abnormal and suspicious.
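As a sketch of that kind of check, assuming the default Apache combined log format, where field 6 holds the quoted request method:

  # count POST requests per client IP in the rotated access log
  awk '$6 ~ /POST/ {print $1}' /var/log/apache2/access.log.1 | sort | uniq -c | sort -rn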
 
There are so many other Linux logs. For example, if I type sudo dmesg, d-message, this is the kernel log for kernel messages on the Linux side. If we run sudo journalctl, journal control, this queries the systemd journal of system and service messages. If we run sudo lastlog, it shows the last login information related to individual user accounts.
 
For example, cblackwell logged in from pts/0, pseudo-terminal 0; that's how it shows up when you SSH into the host over the network to manage it. We see the IP address that management session came from, and the date and time stamp that goes along with it.
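For the journal specifically, here are a couple of hedged filtering examples; the ssh.service unit name is an assumption, since names vary by distribution:

  # only error-level messages from the last two hours
  sudo journalctl -p err --since "2 hours ago"

  # entries for a single service
  sudo journalctl -u ssh.service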
 
So these are some of the log files that cybersecurity analysts must be aware of. And ideally, they will be forwarded to a centralized logging host for data analysis and, ultimately, threat hunting.
 

Configuring Linux Log Rotation

In this demonstration, we will configure Linux log rotation. [Video description begins] The host runs the cd /var/log command which changes the command prompt to cblackwell@ubuntu1: /var/log$ and then runs the ls command. This returns a series of logs listed under sub directories like azure, journal, and more that are highlighted in blue. Some log entries are syslog, syslog.1, syslog.2.gz, and more. [Video description ends] There might be subdirectories that contain specific logs for certain types of services.
 
But the other thing to notice is that if we take a log, let's say syslog, the previous version will be called syslog.1. But notice that the previous version before that is called syslog.2.gz, so it’s been compressed. And then we’ve got a number of compressed syslog files kept from the past. This is the result of log rotation.
 
We can configure how many versions or how many copies of log files that we want to retain. Perhaps their maximum size, how often to check to see if logs get rotated, that type of thing. This is normally done in accordance with organizational security policies related to log retention on specific types of hosts.
 
Now let's switch to a Linux host that has log files related to the Apache web server. [Video description begins] A new terminal window titled cblackwell@Ubuntu2: / is open. The command prompt reads cblackwell@Ubuntu2: /$ [Video description ends] If I change directory to /var/log/apache2 and do an ls, here we have a number of logs. And notice that we've also got some previous compressed versions of logs, such as for the access and error logs.
 
[Video description begins] The host changes directory with cd /var/log/apache2 which changes the command prompt to cblackwell@Ubuntu2: /var/log/apache2$ and follows it up by running an ls command which returns log entries such as access.log.2.gz, error.log.2.gz and more. [Video description ends]
 
So let's say that we want to configure log rotation specifically for apache2. So I'm going to change directory to /etc and I'm going to run sudo nano logrotate.conf; that's the config file for logrotate. [Video description begins] The host first runs cd /etc which turns the command prompt to cblackwell@Ubuntu2: /etc$ and follows it by running the sudo nano logrotate.conf command to open the config file for logrotate. [Video description ends] So here we've got some global settings, such as keeping up to four weeks' worth of backlogs.
 
But what I’m interested in is the directive towards the bottom, the include statement. It says include /etc/logrotate.d, so what we can do is configure log rotation for various services, such as the Apache web server by creating a file in the logrotate.d directory. And you might have log rotation for SSH in a separate config file in that directory, and so on. Okay. So we’re not going to change this file. Ctrl+x to get out here.
 
Change directory here because I’m already in etc, into logrotate.d and let’s do an ls. [Video description begins] The host switches back to the terminal window and runs the cd logrotate.d command and then clears the screen. The command prompt is now cblackwell@Ubuntu2: /etc/logrotate.d$ which he follows up with the ls command. A series of file names are returned, some of which are apache2, ufw, rsyslog, and more. [Video description ends]
 
Here we have a number of files for log rotation specific to the Snort intrusion detection system, the Squid proxy, and the uncomplicated firewall or ufw; but I'm interested in the apache2 file. Now, you can just go ahead and create a file here for a given service if it doesn't already exist. The file name itself doesn't matter.
 
Okay, so let's run sudo nano and go into the apache2 file. Now, we said the name of this config file doesn't matter, and that's true. [Video description begins] The host runs the sudo nano apache2 command to open the apache2 file that displays a series of code lines. [Video description ends] So how does it know which logs to look at? That's part of what we have in this definition file: /var/log/apache2/*.log, followed by a bunch of settings within curly braces.
 
So we’ve got an { the closing of which is down at the bottom. Now we’ve got all of these directives. What does this mean? [Video description begins] The host points to the code lines on screen. The first few code lines are as follows. Line 1: /var/log/apache2/*.log { Line 2: daily Line 3: missingok Line 4: rotate 14 Line 5: compress Line 6: delaycompress Line 7: notifempty Line 8: create 640 root adm Line 9: sharedscripts [Video description ends]
 
We have daily; what does that mean? It means that the log files, in this case all of them for apache2, get rotated daily. missingok means that if a log file is missing, go on to the next one and don't issue any type of error. rotate 14 means we want to keep 14 rotated logs. compress means we want to compress them, while delaycompress postpones compressing the most recent rotated log until the next cycle. And notifempty means don't rotate the current log if it has no entries and is empty.
 
The create directive means each new log file is created with a specific set of permissions and ownership; here that's 640, owned by root, with group adm. The sharedscripts directive means the postrotate script runs once after all matching logs, because we've got *.log here at the top, have been rotated and compressed, rather than once for each individual apache2 log. [Video description begins] The host then points to the next set of code lines that are as follows. Line 10: postrotate Line 11: if invoke-rc.d apache2 status > /dev/null 2>&1; then \ Line 12: invoke-rc.d apache2 reload > /dev/null 2>&1; \ Line 13: fi; Line 14: endscript [Video description ends]
 
Then we have the postrotate statement, eventually closed by endscript, and in between go whatever commands you want executed, such as checking whether the service is running. Here the output is sent to /dev/null, and 2>&1 redirects error messages (file descriptor 2) to the same place as standard output. So we could reload the Apache web server after rotation.
 
You also have prerotate and endscript statements for anything you might want to execute prior to the log rotation taking place. And of course, as we know, after all that we've got the closing }.
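Putting the lines shown on screen together, the apache2 definition file reads as follows:

  /var/log/apache2/*.log {
          daily
          missingok
          rotate 14
          compress
          delaycompress
          notifempty
          create 640 root adm
          sharedscripts
          postrotate
                  if invoke-rc.d apache2 status > /dev/null 2>&1; then \
                          invoke-rc.d apache2 reload > /dev/null 2>&1; \
                  fi;
          endscript
  }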
 
So you can create these, you can modify them, there might already be one here by default depending on the software package that you've installed.
 
Normally, if we wanted to test log rotation for this service right now, we could: sudo logrotate -d /etc/logrotate.d/ and, in this case, point to our apache2 config file. The -d flag runs logrotate in debug mode, a dry run that reports what would happen without actually rotating anything. [Video description begins] The host switches back to the terminal window and runs the sudo logrotate -d /etc/logrotate.d/apache2 command. [Video description ends] And here we've got a message that says there doesn't need to be any log rotation because the log is empty, and so the pre and post scripts do not apply.
 
We know that threat hunting definitely involves detecting suspicious activity within log files. So therefore log rotation is a big deal in Linux as it relates to cybersecurity.
 

Enabling Linux Log Forwarding

Not only is it important that we have logs on individual devices, but it's also important that those logs be forwarded to some kind of a centralized logging server. Now, that server might also be a SIEM system that performs data analysis and threat hunting. Or you might have a central logging server hierarchy, where the centralized logging servers then send their data to a centralized SIEM system.
 
Whatever the case is, in this demo, we're going to be setting up log forwarding in Linux. Here I’m on a host called ubuntu1. [Video description begins] A terminal window titled cblackwell@ubuntu1: / is open. The command prompt reads cblackwell@ubuntu1: /$ [Video description ends] This is going to be our central logging server.
 
I’ve also got a second Ubuntu Linux host called Ubuntu2. [Video description begins] The terminal window that the host has now switched to is titled cblackwell@Ubuntu2: / and the command prompt on it reads cblackwell@Ubuntu2: /$ [Video description ends] In our example, this one will serve as the client, meaning its log entries will be forwarded to the central logging server, which is ubuntu1. So let’s go back to ubuntu1, our central logging server, and let’s begin the configuration.
 
First thing I’ll do here is run sudo service rsyslog status. Okay! [Video description begins] The command run by the host reads sudo service rsyslog status and the screen populates with its output that the host points towards. [Video description ends] The rsyslog service is what we will be using for log forwarding, and notice that it is indeed active and running. I’ll press q for quit. Next thing I’ll do is an ls of /etc/rsyslog.d, notice here we have a couple of conf files. [Video description begins] The host runs the ls /etc/rsyslog.d command and a couple of conf files are returned by the system. [Video description ends]
 
I'm interested in the default conf file. So if I cat /etc/rsyslog.d/50-default.conf, our default file, we've got some notation here for various sources of log entries, such as authorization, the cron daemon, or the Linux kernel. [Video description begins] The host runs the cat /etc/rsyslog.d/50-default.conf command. [Video description ends]
 
Some of these are commented out; the lines begin with a hash or pound symbol. You could remove that hash if you wanted to configure logging for log messages stemming from those sources. However, we're not going to change anything there.
 
Let's run sudo nano, again on /etc/rsyslog.conf; here you can determine whether you want to use UDP or TCP for the transmission of syslog messages to this central logging host. [Video description begins] The host runs the sudo nano /etc/rsyslog.conf command to open the /etc/rsyslog.conf file. [Video description ends]
 
Let's say we want to use TCP. [Video description begins] The host points to a set of code lines displayed on screen which are as follows. Line 1: # provides TCP syslog reception Line 2: #module (load="imtcp") Line 3: #input (type="imtcp" port="514") [Video description ends] So I'll uncomment the module(load="imtcp") line, and I'll also uncomment the line below it, input(type="imtcp" port="514"), which uses the default listening port of 514. [Video description begins] The host removes the # symbol from lines 2 and 3 of the aforementioned code. [Video description ends]
 
So given that we are configuring the centralized logging server, it will listen for log messages from other hosts on TCP port 514. Now the next thing I’m going to do, is let’s say go all the way down to the bottom where I'm going to add a couple of lines.
 
The idea here is I want a separate sub directory created here on the central logging server for each logging client whose log entries we receive, just to keep things organized and easy to find.
 
So I'm going to type in $template, then a space, then RemoteLogs, which is the name I'm giving this template, then a comma, and then the definition in quotes: "/var/log/%HOSTNAME%/Remote.log". What I'm doing is naming the subdirectory under /var/log on this host after the sending host; %HOSTNAME% is a variable I'm calling upon here, for each client that sends us log information. And within that directory, I want the file to be called Remote.log. [Video description begins] The code line added by the host reads $template RemoteLogs, "/var/log/%HOSTNAME%/Remote.log" [Video description ends]
 
I'll then add a line: *.*, where the first asterisk means all sources of log messages and the second asterisk means all types of messages, then a space, and then we tell it to use our template definition. The $template line above is just that, a definition of what to do. [Video description begins] The line above $template reads $IncludeConfig /etc/rsyslog.d/*.conf [Video description ends] To actively tell rsyslog to use it, we refer to it with a ? and the name of the definition, in this case RemoteLogs. [Video description begins] The line added by the host reads *.* ?RemoteLogs [Video description ends]
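Collected together, the server-side additions to /etc/rsyslog.conf from this demo come down to these four lines:

  # listen for forwarded log messages on TCP port 514
  module(load="imtcp")
  input(type="imtcp" port="514")

  # write each client's messages to /var/log/<hostname>/Remote.log
  $template RemoteLogs,"/var/log/%HOSTNAME%/Remote.log"
  *.* ?RemoteLogs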
 
Okay, I'll press ctrl+x and I'll press Y to write that file out. Now we're also going to want to make sure the firewall allows for that. So sudo ufw, that's the uncomplicated firewall, allow 514/tcp. [Video description begins] The host switches to the ubuntu1 terminal window and runs the sudo ufw allow 514/tcp command. The system returns a 'Rules Updated' message. [Video description ends]
 
Now if this is a cloud based virtual machine and there are other firewall components at play, such as network security groups in Microsoft Azure as an example, then make sure that those firewalls allow 514 TCP traffic, and you might be doing it solely that way in the cloud, as opposed to doing what I’m doing here in the OS. So know your environment.
 
Okay, let's run sudo service rsyslog restart, then bring that command back up with the up arrow key and change restart to status. Okay! So we are active and running. [Video description begins] The first command reads sudo service rsyslog restart while the second command reads sudo service rsyslog status and the screen populates with the output returned. [Video description ends] So the server configuration has been set.
 
Let's switch over to the client whose log entries will be forwarded to this server. So now I’m on server Ubuntu2. So I’m going to run sudo nano /etc/rsyslog.conf, we have to tell this client where to send log messages to. [Video description begins] The host switches to the Ubuntu2 terminal window and runs the sudo nano /etc/rsyslog.conf command to open the /etc/rsyslog.conf file. [Video description ends]
 
So I'm just going to go all the way down to the bottom. And I’m going to put *.* . The first * means the source of log entries such as kernel, mail, auth, cron. After the dot, the next * means any type of log entry from those sources, whether it’s a warning, or informational, and so on.
 
So I want all log entries, followed by a space, and then the forwarding target. Because our server is listening on TCP, we use two @ symbols; a single @ would forward over UDP instead. Then comes the IP address of our logging server. Back here on the server, if we run ip a, we'll see that our private IP, which is what we're going to use, is 172.17.0.1; so I'll go back to Ubuntu2 and pop that in as the target for receiving log entries. But I also have to put in :514, since we know the server is listening on port 514. [Video description begins] The line added by the host reads *.* @@172.17.0.1:514 [Video description ends]
 
Ctrl+x to save that, Y for yes to write it out. Then, on the client, the next thing we'll do is run sudo service rsyslog restart. [Video description begins] The host switches to the Ubuntu2 terminal window and runs the sudo service rsyslog restart command. [Video description ends] We can test logging by forcing a message using the logger command. I can do that with sudo logger, and in double quotes we put whatever we want logged. Let's just have it say "Testing log forwarding - does it work?"
 
[Video description begins] The command run by the host reads sudo logger "Testing log forwarding - does it work?" [Video description ends]
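Condensed, the client-side setup from this demo is three steps; the IP address and message text are the ones used here:

  # 1. append to /etc/rsyslog.conf: forward everything to the collector over TCP
  *.* @@172.17.0.1:514

  # 2. restart rsyslog so the change takes effect
  sudo service rsyslog restart

  # 3. generate a test message
  sudo logger "Testing log forwarding - does it work?"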
 
So if we go back to our centralized logging host that would be ubuntu1, we’re still in /var/log. If I do an ls, notice it has created a sub directory with the host name of ubuntu2.
 
If we change directory into ubuntu2 and do an ls, there's the Remote.log file. So if I run sudo tail Remote.log to view the last ten lines, the ten most recent entries, notice the messages are coming from host ubuntu2. And here it is: our Testing log forwarding - does it work? message. So we have successfully forwarded logs from Ubuntu2 to server ubuntu1.
 

Viewing and Configuring Windows Logs

We all know how important it is to have valid logs that we check, or that we have automated systems check, for indicators of compromise and anything that might look suspicious. Well, to truly have an effective way to look for threats or indicators of compromise through logs, we need to know where those logs are and how they're structured.
 
So let's take a few minutes and look at how to view and configure Windows logs. [Video description begins] A Windows Server desktop is open on screen. [Video description ends] Windows handles logging through the Windows Event Log service, which we view with Event Viewer. So from the Start menu here on my Windows Server, though it could just as well be a Windows client OS, it doesn't really matter, we can launch the Event Viewer.
 
[Video description begins] The Event Viewer home page is open. The left has a navigation menu, the middle lists the Overview and Summary, while the right lists Actions. The navigator has four options Custom Views, Windows Logs, Applications and Service Logs, and Subscriptions. [Video description ends]
 
In the Event Viewer, over on the left, by default, we are connected to our local machine. But take note we could right click and connect to another computer over the network. That should be a rarity because we should be using log forwarding to a centralized logging location where we can view everything.
 
But here we'll just look at a single Windows server. If I expand Windows Logs on the left, we have some standard categories, starting with the Windows Application log.
 
Not all applications you install in Windows will write here; however, it's a good starting point. [Video description begins] The host clicks on Application from within Windows Logs in the left hand navigator which opens a series of events in the middle pane. Event details are listed in different columns under headers such as Level, Date and Time, Source, Event ID and Task category. Once an event is selected, its details open up in a section below. [Video description ends] Now, how do we know if anything looks suspicious in here? Well, it might not be suspicious, but we definitely want to focus on things that are in an erroneous state.
 
Now, these errors specifically are simply about having drives locked by BitLocker Drive Encryption, and they’re simply not unlocked. You can unlock a BitLocker encrypted drive either by entering a PIN, or a password, or inserting a smart card, so those are not necessarily suspicious.
 
We’ve got the Security log, which can be very important for auditing things like File System access or Process Creation, such as malware launching a service or some kind of a process in the background without the user's knowledge. [Video description begins] The host has now moved on to the Security option within Windows Logs to display a different series of events that are listed in the middle pane under columns such as Keywords, Event ID, and more. The event that is currently selected has Audit Success mentioned under the Keywords header and Process Creation mentioned under Task Category. The section below event entries describes event details such as the Creator Subject, the Target Subject and the Process Information [Video description ends]
 
Now we're always going to be interested under which account this process would be running, in this case SYSTEM. And then of course, in this context, the details about the name of the process, the executable. If it looks like a name that’s not familiar, or if it's being loaded from a USB external drive, it could be considered suspicious if it's not already been picked up by your malware scanner.
 
We also have a Setup log here for installation of various packages on the machine. We’ve got the standard operating system log, so the System log. Now currently we're only viewing informational types of messages. [Video description begins] The host clicks on the Setup option from within Windows Logs to talk about it briefly before opening the System option from within Windows Logs. The middle pane displays a series of events that are listed under various headers. Most events have information listed as the level in these event entries. [Video description ends]
 
However you can click on column headers here to sort by any of these values, Date and Time, Source, the Event ID, whatever the case might be that would be of interest.
 
And then we have Forwarded Events, if we've got log forwarding configured. [Video description begins] The host clicks on the Forwarded Events option from within Windows Logs [Video description ends] And with each of these logs, you have the option of right-clicking the log on the left and clearing it, choosing whether you want to save it first or just clear it.
 
Most policies within an organization would require that logs be retained or saved, not just wiped.  [Video description begins] The host right clicks on the Application option from the left hand navigator to open a menu and click on the clear log option. A dialog box with options to Save and Clear, Clear, or Cancel opens, which the host cancels out of. [Video description ends] Naturally, if we have a time span of log entries that appear to be missing, that would definitely be an indicator that something suspicious is happening. So keep an eye out for that type of stuff if you're manually looking through logs.
 
However, I can right click on a log and also choose to filter it by a variety of things. [Video description begins] The host right clicks on the Application option from the left-hand navigator to click on the Filter Current log option from the menu. The Filter Current Log window opens on top of the event viewer. It has options to filter on the basis of Event level such as Critical or Error that can be check marked. It also has drop down options to choose from next to aspects like Task category, Keywords, User, and more. The bottom has Ok and Cancel options. [Video description ends] For example, I just want critical and error messages shown. Okay, and now that's all I'm seeing; of course, they're all date and time stamped.
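The same filter can be expressed from PowerShell, which is handy when you want to script the check. A minimal sketch; in the event system, Level 1 is Critical and Level 2 is Error:

  # show only Critical and Error entries from the Application log
  Get-WinEvent -FilterHashtable @{ LogName = 'Application'; Level = 1, 2 }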
 
We can also right click on a given log, I'll clear the filter, and find items. [Video description begins] The host clears the filter after selecting the Clear Filter option from the menu after right clicking on the Application log; and then clicks on the Find option from the same menu. The Find dialog box opens with an option to enter values against Find what. [Video description ends] So, for example, if I'm interested in searching through the log for occurrences of certificates, as in PKI certificates, we can step through the list of found results by clicking Find Next. [Video description begins] The Find Next button becomes active as soon as the host enters the value against Find what. [Video description ends] You can also right click on a log and save the events in a specified file format, such as an XML file.
 
Notice that we have specific event IDs in the Event ID column. An Event ID identifies a specific occurrence; event 8197, for example, is about File Server Resource Manager having a Service error: Unexpected error. What we can do is right click and choose to attach a task to that specific event.
 
When that event is raised in the future, that specific event, we can have it do things like Start a Program. Normally, what you would do here is have it launch a script of some kind that you've created that will mitigate or remediate the situation. So that’s another important part of Windows logging.
 
Now, those are just the standard Windows logs. [Video description begins] The host clicks on Applications and Services Logs in the left-hand navigator to expand a list of options such as DNS Server, Microsoft, OpenSSH, and more. [Video description ends] We can also go under Applications and Services Logs to get even more logged information, depending on what's installed on the server, such as Directory Service for Active Directory or DNS Server; and if I expand Microsoft and go down to Windows, we have logs for all the specific components within the Windows OS.
 
It's broken down here. For example, we’ve got Group Policy - Operational. And depending on what else you have installed, I’ve got OpenSSH installed here, you might have additional log entries shown here.
 
And finally, you can also build custom views. If I expand Custom Views on the left, I've already got something here called Administrative Events, and when I click it, it shows certain items. Now, what is this?
 
Well, when I right click on Custom Views, I can create a custom view that filters on things like the event type. So I only want errors or critical items from specific event logs. I can open up the Event logs list, or I can specify by event source which component generated that event, and what we could do is save that as a customized view. [Video description begins] The host right clicks on the Custom View option to open the Create Custom View window. The Filter option is selected by default, with various options mentioned by the host such as Event level, Event logs, and more to filter on the basis of. [Video description ends] So that's why my administrative events is only showing things like errors and so on. That way you're filtering out what you're looking at instead of having to peruse through everything.
 
Normally, you won't have very much interest in informational messages, as they are simply essentially a narrative of things running as they normally should.
 

Enabling Windows Log Forwarding

As cybersecurity analysts, we are well aware of the importance of logging and how it can be used to detect threats. We've taken a look already at how to enable log forwarding in Linux. We’ve also perused logs on an individual Windows station. Now it's time for us to figure out how can we forward logs to a centralized logging server in the Windows environment. [Video description begins] A windows server desktop is open. [Video description ends]
 
On a larger scale, such as within each branch location, you might have a centralized Windows logging server that all of the client devices in that location report to. And then, as a second tier, you might have each of those centralized Windows logging hosts send everything they have to a centralized SIEM system for data analysis and threat hunting. So let's take a look at how this works.
 
Here on my Windows server, I'm going to go into the Start menu and fire up the Event Viewer. What I'm going to do is click on Subscriptions on the left. [Video description begins] The Event Viewer home page is open. The left has a navigation menu, the middle lists the Overview and Summary, while the right lists Actions. The navigator has four options Custom Views, Windows Logs, Applications and Service Logs, and Subscriptions. [Video description ends] In order to configure centralized Windows logging, you create the subscription on the server.
 
For example, if I right click here, I can choose to create a subscription, CollectClientLogEvents. [Video description begins] The host selects the Create Subscription option after right clicking on Subscriptions. A dialog box opens up. It has fields such as Subscription name, Description, and Designation log that need to be populated. The host enters CollectClientLogEvents in the Subscription name field. The dialog box also has a section for Subscription type and source computers that has two options namely Collector initiated and Source computer initiated, with radio buttons next to them. [Video description ends] Now the destination log here when we receive client log events will be Forwarded Events, which will take a look at in a moment; that’s under Windows Logs over here in the left hand navigator.
 
A collector-initiated configuration means the collector, that is, the logging server we're sitting at, will periodically reach out and retrieve the log messages we've configured it to collect; but first we have to select computers. Before we do that, let's open up the Start menu on this host and go under Windows Administrative Tools, Active Directory Users and Computers. [Video description begins] The Active Directory Users and Computers window opens. It has numerous options in the left hand navigator, such as Domain Controllers, East, Users and more. The East option is expanded to display folders such as Computers, Groups, and more. The computers option is open and a singular file starting with DESKTOP is visible in the right pane. [Video description ends] This server is an Active Directory domain controller.
 
If I go to the Domain Controllers view, the computer name is shown there; it starts with WIN. However, I also have a desktop Windows computer joined to the domain here under the East Computers OU. Its name begins with DESKTOP. What I want to do is configure things such that the desktop computer's log events get sent to this centralized logging host. [Video description begins] The host switches back to the Event Viewer window, right into subscription properties. [Video description ends]
 
So I’m going to click Select Computers. I’ll click Add Domain Computers. [Video description begins] The Computers dialog box opens, with an option to Add Domain Computers, Remove, and Test options. Only the first option is active at the moment. [Video description ends] The list is filtered here for Computers. So I’ll click Advanced, Find Now, and I know my desktop starts with the name DESKTOP, so I’ll click that and add it here. [Video description begins] Remove and Test options become active as soon as the computer is added. [Video description ends] I can even click Test to test connectivity to that host.
 
If we get this kind of a message, it's telling us that either a firewall is blocking the connection or we need to start the WinRM service. [Video description begins] The host points to an error dialog box that pops up on screen when he clicks on Test. [Video description ends] Back on the Windows client, we’re going to run winrm qc for quick config. It says WinRM, Windows Remote Management, is not set up to receive requests on this machine.
 
Well no wonder we had a problem. Make these changes? Yes. Enable the firewall exception? Yes. [Video description begins] The host adds y for yes in both instances. [Video description ends] Okay. We should be good to go. Let’s go back to the server now. This time it worked. [Video description begins] The screen displays a dialog box with the message 'Connectivity test succeeded' [Video description ends]
 
Okay, let’s change the name to CollectLogEventsFromOthers. [Video description begins] The host changes the Subscription name to CollectLogEventsFromOthers in the Subscription Properties window. [Video description ends] Now we could choose this to be Source computer initiated, which would be from the client perspective, but we’re doing it from the server, so Collector Initiated. We have to click Select Events so we can determine which items we are interested in from client devices. [Video description begins] A Query Filter dialog box opens with the Filter option selected by default. It has options to filter on the basis of Event level such as Critical or Error that can be check marked. It also has drop down options to choose from next to aspects like Task category, Keywords, User, and more. [Video description ends]
 
So, how about Critical, Warning, and Error. And maybe I'll choose the Windows System log; you could choose any log from which you want to gather Critical, Warning, and Error messages. You could also specify Event IDs, special Keywords you're looking for in the log events you want to gather, Users, or Computers. I'll click OK, and I'll click OK again. Alright!
 
So we have an Active subscription here. [Video description begins] A subscription entry becomes visible on the otherwise empty middle pane of Event Viewer. Its status is Active. [Video description ends] If I right click on that and choose Runtime Status, we've got some kind of an access denied message on that station. Well there's a reason for that. Back on the Windows client machine, if I go to my list of Groups, there is an Event Log Readers group.
 
We need to click Add and make sure that our server has the ability to come in here and read event entries. So for Object Types at the top, I'm going to uncheck everything that's checked and turn on the check mark for Computers, and I’m going to select my Windows server, and then I’ll click OK.
 
Now, if I go back to the server, right click on that subscription, and choose Retry, then right click on it again and go into Runtime Status, this time it's Active. We're ready to go: the server is allowed in because it's a member of the Event Log Readers group.
 
So the result of all this is that client logs of the types we asked for will be forwarded to this host and will show up under Windows Logs, in Forwarded Events.
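If you would rather script this setup than click through Event Viewer, the same pieces can be configured from an elevated prompt. A hedged sketch, where MYDOMAIN\WIN-SERVER$ stands in for your collector's computer account:

  # on the collector: configure the Windows Event Collector service
  wecutil qc

  # on each source computer: enable WinRM and its firewall exception
  winrm quickconfig

  # on each source: let the collector's computer account read the event logs
  net localgroup "Event Log Readers" "MYDOMAIN\WIN-SERVER$" /add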
 

Honeypots and Honeynets

In our discussions so far, we have made references to honeypots and honeynets. Well, now it's time to go through the details of what these are and what value they provide. So what is this?
 
Well, a honeypot serves as a decoy for attackers: a host that mimics a production environment, maybe an app, and it might include some fake data that looks real. An entire mimicked production network would be a honeynet.
 
So in other words, honeypots or collectively honeynets would intentionally be configured to be vulnerable in order to attract attackers. Why would you want to attract attackers?
 
Well, first of all, it serves as a diversion from the real production network and data. But it also allows cybersecurity analysts to track, monitor, and learn about attacker techniques. So when we talk about a system being intentionally configured to look vulnerable, what might that look like?
 
That might include things like not applying patches for the operating system, for software, for daemons, or for components of a web server; whatever the case might be, we are intentionally not applying patches. Or we are not using strong passwords or multi-factor authentication. Permissions for databases and file system resources are not set securely. Default or insecure configuration settings are left in place for many services.
 
All of these things and many more could serve to make a host seem vulnerable. Remember that attackers would begin normally with reconnaissance. They might scan a network, determine if any services are listening, what version of those services, are there any missing patches, and so on.
 
So the benefits of honeypots and honeynets would include early incident detection on non-critical IT systems. Think about it. We've got a diversion where we can get indicators of compromise because we would be feeding in activity from these honeypots and honeynets to a security monitoring system.
 
We would get indications that something strange might be happening even at the reconnaissance phase, which gives us a heads up so that we can really be careful to protect assets that actually have value to the organization. So technicians would learn about attack patterns, attack vectors or entry points.
 
Maybe the honeypot would tell us that, okay, it appears that attackers are easily finding that the Apache Struts Java framework on the web server is not patched, and they seem to be focusing on that vulnerability. That's valuable to know.
 
In some cases, you might even be able to track the identities of attackers, maybe tracing activity back to an attacking group or a certain IP address, even though we know IPs can be spoofed. But it's not all rivers of milk and honey, so to speak.
 
There could be some drawbacks to the use of honeypots and honeynets, and we need to be aware of these. One would be potential legal liability. How would that happen? Well, imagine that we have a honeypot or a honeynet; attackers compromise it, but are somehow able, maybe through lateral movement and further digging into a network, to actually get their hands on sensitive data.
 
That could be a problem. Another potential drawback to consider: let's say we've got a honeypot that, of course, gets compromised by an attacker, but then the attacker uses it, maybe as a bot in a botnet, for a distributed denial-of-service attack against another victim. Now that attack is being traced back to your network, whether it's in the cloud or on-premises. So we have to consider this.
 
Another consideration is the cost. How much does it cost for us to deploy and manage the honeypot or the honeynet? All of these things must be considered together before embarking on this type of endeavor. Mind you, there are open-source honeypot and honeynet solutions, which means no licensing cost, although there's usually a bit more of a learning curve with those types of tools.
 
When we configure honeypots and honeynets, the specific tool we're using will determine exactly how to do it, but they all have common functionality. We can specify the OS version that we want to mimic in an intentionally vulnerable host. We can tell it which services we want to appear to be running, like HTTP, DNS, FTP, or Active Directory. We can even include sample fake data that looks real.
 
And of course, as we've mentioned, we want to make sure we've got log forwarding to a security information and event management or a SIEM solution that would be a secured host on a secured network elsewhere. We don't just want to have log data on the honeypot because of course if it gets compromised, then we can't trust those logs. They might have been wiped or tampered with by the attackers.
 
So common honeypot features would include the ability to forward logs to a centralized logging host, also to generate activity reports for malicious activity. In some cases, honeypots will be able to record attacks when they're happening, and then you have the ability to play back the sequence of events.
 
Most honeypots will include a web application with intentional common vulnerabilities. One common example of just that is called Metasploitable. It's a free downloadable virtual machine that has a website with many intentionally vulnerable features, although you can configure whether you want the web app to appear to be vulnerable to injection attacks or cross-site scripting and whatnot.
 
Other features include the correlation of attack patterns with a known list of attackers, and honeypots don’t just have to be server-side. You can also configure client-side honeypots, such as intentionally vulnerable web browsers, so that you could track how a compromised web server might somehow take advantage, perhaps of cookies, on a client device.
 
Honeypots often also have the ability to track, and quarantine, any malware that was deployed onto the host. So honeypots and honeynets can provide many benefits for an organization looking to improve its security posture.
 

Implementing a Honeypot

In this demonstration, I will be setting up a honeypot on the Linux platform. We’ve talked about honeypots and honeynets. Essentially, a honeypot is a machine or what appears to be a functional machine with running services, perhaps some sensitive data, or what appears to be sensitive data, when really that machine is intentionally made to look vulnerable so that it attracts attackers.
 
And the movements of the attackers can be tracked by security technicians. That's really the purpose of a honeypot or a collection of multiple honeypots on a honeynet. And there are free and open-source tools that you can use to configure this.
 
Always remember, you have to be very careful. The last thing you would want is for real sensitive data to mistakenly, somehow be disclosed through a honeypot, or for the honeypot to be used by attackers as a launching pad for illegal activity against other hosts.