
Cloud Network Security
This is a guide on cloud network security.
Cloud Network Segmentation
In this video, we'll examine how network segmentation can be used in a cloud environment to create smaller networks, which in turn can provide isolation or separation of networks. That is always a good practice, in that any given network should only have the necessary number of hosts on it. Now, bear in mind that in a cloud environment, you don't have access to any kind of physical hardware, but most cloud providers do support virtual network configurations, whereby you can assign an address space and then configure multiple subnets to reduce the number of hosts supported on each subnet, and this can usually be done within minutes.
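To make the subnetting idea a little more concrete, here is a minimal sketch in Python, using the standard ipaddress module and hypothetical tier names, of how an address space might be planned out before you create the equivalent virtual network and subnets in your provider's portal:

```python
# A minimal planning sketch (not tied to any specific cloud provider) of how an
# address space might be carved into smaller subnets before assigning hosts.
import ipaddress

# Hypothetical address space assigned to the virtual network.
address_space = ipaddress.ip_network("10.0.0.0/16")

# Split it into /24 subnets, each supporting up to 254 usable hosts.
subnets = list(address_space.subnets(new_prefix=24))

# Assign the first few subnets to separate workloads for isolation.
plan = {
    "web-tier": subnets[0],
    "app-tier": subnets[1],
    "db-tier": subnets[2],
}

for name, subnet in plan.items():
    print(f"{name}: {subnet} ({subnet.num_addresses - 2} usable host addresses)")
```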
Once you have the network configuration you need, you can start to place the appropriate host systems in the appropriate subnets. Many providers will also give you the option to configure additional security features such as firewalls, or the ability to route between the subnets should it become necessary.
In addition, since it's all configured virtually, changes to the configuration can be made at any time and implemented within minutes. The benefits of implementing this approach include restricting users or other systems to seeing only those systems that they need to see, and it can also hinder the efforts of attackers, as fewer targets simply represent a smaller attack surface.
For example, if they were able to penetrate the network and every system was on that same network, then of course, they would have access to all of them, but if they managed to penetrate any given isolated network, there would be far fewer systems and therefore, it would be less likely that they would find the target they're looking for. The inherently minimized traffic on the network would also minimize the amount of packet sniffing that could be done to obtain information.
Quite simply, smaller networks are more secure networks. Other advantages include the ability to isolate protocols from each other. For example, if you need one group of computers to communicate entirely using IPv6, while others should use only IPv4, then they could be placed within separate isolated networks so that only the necessary traffic over the appropriate protocol is occurring.
In addition, smaller, more isolated networks will simply perform better due to the lower amount of traffic and congestion. Any broadcasts that might occur will be limited in range and be heard by fewer systems. Fewer systems to manage also results in improved access control with respect to which systems should be added or removed from the network, and if a problem arises, it is much easier to contain because there are fewer systems available to be compromised.
So then, when it does come time to configure network segmentation or isolation in a cloud environment, the options will depend on the provider and the services they offer. But in general, you could expect to see the availability of firewalls that can isolate various services or applications onto different networks, or perhaps most notably, the ability to implement subnets, or what they might even refer to as VLANs, within a virtual network address space.
Or there may be other software-defined networking applications that can also be added to your subscription to allow a centralized configuration of all of your network components. Now, just a quick point in that regard: since no physical infrastructure is ever exposed to you in a cloud environment, everything that you do in terms of virtual network configuration is already being implemented through software-defined networking.
But the service may still be available for you to use to facilitate centralized management of all of your virtual networks, subnets and security configurations, so that you don't always have to go directly to the properties of each individual virtual network to perform management tasks. Regardless of the network configuration, segmentation and isolation are always good practices to observe, as smaller numbers are always easier to manage than larger ones.
Cloud Network Protocols
When it comes to securing any kind of network configuration in the cloud, the level of security can certainly be affected by the protocols that are in use on that network. So, in this presentation we'll examine several commonly used protocols for cloud-based services with respect to some of their security considerations. Now, clearly the use of some of them will be entirely dependent on the services you choose to run in your network, but some of them are made available to the network by the provider so that you don't necessarily have to implement them yourself. This often includes the Domain Name System, or DNS, protocol. Now, its function is no different in the cloud than it is anywhere else. It is still implemented as a distributed database that is broken down into a hierarchy to provide domain and host name resolution to IP addresses, and it runs over the UDP protocol. So, if your cloud-based network requires a DNS server, by all means feel free to implement one, but as mentioned, in many cases you actually don't have to. For example, if you create something like a virtual network and place virtual machines on that network, you can usually choose a configuration option that specifies that DNS will be handled by the cloud provider, in which case you literally do not need to implement anything. Yet the systems within that virtual network will be able to communicate with each other by name, and they'll be able to resolve any Internet-based names as well if they have external access. It might not give you as much control over the configuration, as the actual
DNS server is not exposed to you, but it's one less system that you have to manage.
And since DNS is a common point of attack, you don't have to worry about securing that system, as it's maintained entirely by the cloud provider. Other options from the provider might include the use of DNS over HTTPS, or DoH, which is a method for enhancing the security of DNS, particularly when performing remote name resolution, by wrapping DNS requests in the secured HTTP protocol, which of course can inherently operate over the internet and can encrypt the DNS traffic to prevent malicious activity such as eavesdropping or manipulation of the request through a man-in-the-middle attack.
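As a quick illustration of what a DoH lookup looks like in practice, here is a minimal sketch using the third-party requests library and Cloudflare's publicly documented DoH JSON endpoint; the resolver URL and record type are examples only, and your environment may use a different DoH service:

```python
# A minimal sketch of a DNS-over-HTTPS lookup: the query travels inside an
# encrypted HTTPS request rather than as plain-text UDP.
import requests

DOH_ENDPOINT = "https://cloudflare-dns.com/dns-query"  # example public resolver

def doh_lookup(name, record_type="A"):
    """Resolve a name over HTTPS so the query itself is encrypted in transit."""
    response = requests.get(
        DOH_ENDPOINT,
        params={"name": name, "type": record_type},
        headers={"accept": "application/dns-json"},
        timeout=5,
    )
    response.raise_for_status()
    answers = response.json().get("Answer", [])
    return [record["data"] for record in answers]

if __name__ == "__main__":
    print(doh_lookup("example.com"))
```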
Again, DNS reveals information about the names and IP addresses of the systems in your environment, so it can help attackers to identify their targets. And DNS in its default state is not secured, so DoH can provide that security. Similar to DoH is DNS over TLS, or DoT, which also provides encrypted DNS services but differs mainly in that it carries DNS queries directly over the Transport Layer Security protocol rather than inside HTTPS. TLS is quite simply the newer implementation of SSL, or Secure Sockets Layer, and is what provides the security of HTTPS. DoT also provides protection against eavesdropping and prevents tampering, very much like DoH.
It just runs over the security protocol directly and is generally considered by some to be more secure than DoH, but as such it might not offer as much compatibility. Now, if you do need a little more control over the DNS service, then again, by all means you can implement your own DNS server in the cloud network. But recall that on its own, DNS doesn't have any security. So, if there is any level of public access to that network, bearing in mind that it would of course still be behind a firewall, more layers of security are always recommended. Therefore, DNS Security Extensions, or DNSSEC, can be used to provide better security for the name resolution requests occurring within that IP network. DNSSEC works by authenticating the origin of the data with a cryptographic signature to ensure that answers provided by the DNS server are valid and authentic.
DNSSEC can also verify to a client system that a DNS name does not in fact exist, which protects the integrity of the data and can prevent malicious activity such as DNS cache poisoning, client redirection and man-in-the-middle attacks. The Network Time Protocol might also be available as a provided service, and of course you can implement it yourself if you want to. NTP is responsible for synchronizing the time for all systems on the network to within a few milliseconds of Coordinated Universal Time. It operates over UDP port 123 and is designed to help mitigate the effects of variations in network latency. But it can present a security concern, because if compromised, an attacker could take a server out of sync with other systems, which could make that system believe that certain operations that are time dependent have already occurred when they haven't.
Or that something is about to occur when in fact it won't. Or it could simply be used as a form of denial of service because if a system is knocked too far out of sync, there are several services that will no longer function correctly with respect to other servers on the network. But very much like DNS, NTP in its default state is not secure.
So, Network Time Security or NTS uses a key exchange between the client and server, which performs a standard TLS handshake using the same public key infrastructure mechanisms used on the public internet to authenticate to each other before any time synchronization occurs, which again prevents malicious attacks against the protocol. Internet Protocol Security, or IPsec is meant to provide security for standard IP networks by providing both authentication and encryption for traffic on any IP-based network. Again, IP in its default state is not a secure protocol. There may be many applications that can communicate securely over an IP network, but it would be because they're relying on higher level protocols to provide the security such as HTTPS.
Regular IP traffic, for lack of a better word, that is occurring on your network is otherwise not secured.
So, once IPsec is implemented, all IP traffic among the hosts on that network is secured, and it can also be extended to configurations such as VPNs, so that if you have on-premises clients that need to connect to the cloud-based network, they can do so securely. Hypertext Transfer Protocol Secure, or HTTPS, is of course the secured form of standard HTTP, which again in its default state is also not secured.
The S or the secured part of the protocol refers to SSL or Secure Sockets Layer which provides the encryption to secure the data transmissions. And this is the primary protocol used with secure web-based communications such as online banking or logging in to any kind of secured service. A more recent implementation, however, uses transport layer security in place of SSL to provide the security which essentially provides the same type of service, but it's generally considered to be more secure than SSL.
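If you want to confirm which protocol version a given service actually negotiates, here is a minimal sketch using only the Python standard library; the hostname is a placeholder, so substitute one of your own endpoints:

```python
# A minimal check of the negotiated protocol version and cipher suite for an
# HTTPS endpoint, useful for spotting services still falling back to older TLS.
import socket
import ssl

hostname = "example.com"  # placeholder endpoint; use one of your own services
context = ssl.create_default_context()

with socket.create_connection((hostname, 443), timeout=5) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls_sock:
        print("Negotiated protocol:", tls_sock.version())   # e.g. 'TLSv1.3'
        print("Cipher suite:", tls_sock.cipher()[0])
```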
So, newer implementations and solutions going forward will likely use TLS instead of SSL. Tunneling is the process of encapsulating standard data packets within another protocol, which can allow packets to cross a network that otherwise might not support the original protocol, or more commonly to provide secure connections by wrapping unsecured packets within secured packets.
This is most commonly done these days with VPNs, whereby you need to connect to a standard IP network in the cloud from a standard IP network in your home or office, using another standard IP network such as the internet as your means to connect. But none of those networks are inherently secure, most notably of course the Internet.
So, VPNs can be implemented using a variety of protocols, but the key aspect is that the two endpoints will use secure protocols to establish a connection first, before any data is transmitted, which is referred to as the tunnel. With the tunnel established, the systems can use the secure protocols to encapsulate the inherently unsecure protocols, so that the very public Internet can be used in a virtually private manner and the host systems can communicate with each other securely. The Point-to-Point Tunneling Protocol, or PPTP, is an example of a standard protocol used to establish VPN connections. It uses a standard TCP connection between hosts and the Generic Routing Encapsulation protocol to encapsulate standard Point-to-Point Protocol packets, and it draws its name from the type of connections that were used back in the days of dial-up, whereby you dialed directly into a remote server.
But TCP was used only to manage the session and GRE only to provide the encapsulation; neither of them provided encryption, so different providers would typically implement their own security protocols within the suite as well. For example, the version that ships with Microsoft Windows uses Microsoft Point-to-Point Encryption, or MPPE, to provide the security. In general, however, PPTP is quite outdated by today's standards and typically should only be used when backward compatibility requires it.
Layer 2 Tunneling Protocol or L2TP is a newer implementation for VPNs, but L2TP only encrypts its own control messages with respect to creating and maintaining the Secure Layer 2 tunnel between the hosts. It does not in itself provide encryption for the data passing through that tunnel. So, it has to be used in combination with a data encryption protocol such as IPsec. While this provides a more secure VPN than PPTP, it's also becoming somewhat dated but it is still in use.
Most modern VPN connections now rely on HTTP over SSL or TLS to create VPNs using nothing more than a browser to establish the connection, or they use proprietary client software, both of which can generally provide better security. Generic Routing Encapsulation, or GRE, is another tunneling protocol, originally developed by Cisco Systems but widely supported by many platforms, and it can encapsulate a wide variety of network layer protocols to create VPNs. But similar to L2TP, GRE in and of itself does not provide data encryption. It is used more to provide the compatibility component, in that it allows a protocol to traverse a network that would otherwise not support it. So, when used to establish a VPN, a separate protocol must be used to handle the data encryption, such as IPsec.
Lastly, the Secure Shell protocol, or SSH, can be used to establish a secure connection to a remote server to perform remote administration of that server, so that administrators need not be physically present at the console. It operates in a standard client-server manner, in that the service will typically run on the server and accept incoming requests from the client systems that invoke them, and it operates by default over TCP port 22. It's most commonly used with Unix or Linux hosts, and it can allow local administrators in the on-premises environment to establish secure connections to cloud-based servers to configure them without having to go through the web-based portal for the cloud service. So, clearly there are many different protocols available to be used either within your cloud-based networks or to connect to them remotely, but as is always the case when it comes to network security, always try to implement the fewest number of protocols and services possible, and always try to use the most up-to-date and secure versions of those protocols that your systems will support.
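Before moving on, here is a minimal sketch of the kind of SSH-based remote administration described above, assuming the third-party paramiko library is installed and using placeholder hostnames, account names, and key paths:

```python
# A minimal sketch of connecting to a cloud-based server over SSH and running
# a single administrative command over the encrypted channel.
import paramiko

client = paramiko.SSHClient()
# In production you would pre-load known host keys rather than auto-accepting.
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())

client.connect(
    hostname="vm.example.cloud",      # placeholder cloud server
    username="cloudadmin",            # placeholder account
    key_filename="/path/to/id_rsa",   # key-based auth is preferred over passwords
    port=22,
)

stdin, stdout, stderr = client.exec_command("uptime")
print(stdout.read().decode().strip())
client.close()
```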
Cloud Network Services
In this video, we'll take a look at methods used to secure cloud based solutions using various network services, and we'll begin by addressing a characteristic of an application known as its state, which refers to its condition or simply its quality of being if you will at any given point in time. And the various processes that are part of an application will operate in either a stateless or a stateful mode. Now, a stateless process can be thought of as being used to perform a single function or operation with no stored knowledge or record of that operation after it has completed. An example of a stateless operation would be opening a browser and accessing a website.
The server will return the site to you, and of course, you could still click on links on that site, but as far as the server is concerned, that is all you asked of it. You requested the site, and it delivered it, so the transaction is complete. From that point, the server doesn't really care what you do with the site, so it simply waits for the next operation. In other words, no state information is retained. By contrast, a stateful operation will remember, for lack of a better word, which operations or requests have been made, because in many cases, any given operation or transaction is dependent on a previous transaction or will affect the next transaction.
For example, in online banking, transferring from one account to another requires a debit from the first account and a credit to the second. So, it is in fact two separate operations, one being dependent on the success of the other.
If you then pay a bill from the first account, there has to be enough left in the account to pay the bill. So again, there is a dependency. So a stored history of transactions has to be maintained. Now, there are other characteristics of stateless versus stateful connections. For instance, a stateful connection requires that the client be connected to the same server for the duration of the session. Whereas with stateless connections, two sequential operations could be carried out by two different servers. In addition, stateful connections will typically be monitored and often reconnected if dropped.
Whereas a stateless connection will not reconnect. You would have to establish the connection manually yourself. So, the state of your services will have an impact on the means by which security is maintained in the cloud. And it's not that you can't conduct a secure single operation over a stateless connection. But in most cases, anything that involves security will use stateful connections. So, for services in the cloud, the applications will use components such as cookies to store the session ID between the client and the server, which remains valid for the duration of the session. The cookie also transmits a digital ID along with every request from the client, which can be validated by the server and compared against the stored information on the server to verify who is making the request, which of course verifies that it is the same user making all requests within that session.
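To illustrate the server-side validation idea, here is a deliberately simplified sketch of issuing and checking a signed session identifier. Real web frameworks handle this for you; the secret, token format, and session name here are purely illustrative:

```python
# A simplified sketch of validating that a session identifier presented by a
# client was issued by the server and has not been tampered with.
import hashlib
import hmac
import secrets

SERVER_SECRET = secrets.token_bytes(32)  # kept only on the server

def issue_token(session_id):
    signature = hmac.new(SERVER_SECRET, session_id.encode(), hashlib.sha256).hexdigest()
    return f"{session_id}.{signature}"

def validate_token(token):
    session_id, _, signature = token.partition(".")
    expected = hmac.new(SERVER_SECRET, session_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature, expected)

cookie_value = issue_token("user-42-session")
print(validate_token(cookie_value))           # True: same session, untampered
print(validate_token("user-42-session.bad"))  # False: signature does not match
```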
Other considerations for securing cloud services might include the use of a web application firewall, which can be placed between your applications and the internet to filter the HTTP traffic appropriately before it ever reaches the application. Now, to quickly clarify, a web application firewall specifically is not to be confused with something like a network-based firewall that protects the entire network. So, it's usually only part of the overall security equation.
But as an application filter, it operates at layer 7 of the OSI model and can identify attacks carried within application-layer protocols. So, it can provide protection against common application attacks, such as SQL injection, cross-site scripting or file inclusions, among many others. But the key consideration is that it is designed specifically to protect the applications running within your cloud service, not the entire network, nor all hosts on that network.
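To give a sense of what layer 7 filtering means, here is a deliberately simplified sketch of inspecting request content for suspicious patterns before it reaches an application. Production WAFs use far more sophisticated rule sets and anomaly scoring than the handful of patterns shown here:

```python
# A toy illustration of application-layer filtering: block requests whose
# content matches crude indicators of common application attacks.
import re

SUSPICIOUS_PATTERNS = [
    re.compile(r"(?i)\bunion\s+select\b"),   # crude SQL injection indicator
    re.compile(r"(?i)<script\b"),            # crude cross-site scripting indicator
    re.compile(r"\.\./"),                    # path traversal / file inclusion indicator
]

def inspect_request(path, body):
    """Return True if the request should be blocked."""
    payload = f"{path}\n{body}"
    return any(pattern.search(payload) for pattern in SUSPICIOUS_PATTERNS)

print(inspect_request("/search", "q=laptops"))                      # False: allowed
print(inspect_request("/search", "q=1' UNION SELECT password --"))  # True: blocked
```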
But many cloud providers will offer web application firewalls as part of their service offerings, which you can in turn offer to your customers as part of your solution. An Application Delivery Controller, or ADC, is typically implemented as a network device, or in the case of a cloud environment, as the services of such a device, and it can be used to implement load balancing, or to simply reduce the stress on something like a web server. Now, that said, while easing or distributing the load may be the end result, an ADC is typically placed within a demilitarized zone or perimeter network.
It is meant more to provide a means to direct and optimize traffic, and/or to provide features such as SSL offloading (the process of decrypting data), name resolution requests, proxying or network address translation, and many other functions, so that the web server doesn't have to do them, thereby lessening the workload on the web server and allowing it to simply focus on doing what it does, which is servicing client requests.
Intrusion detection systems will typically be available for use with your cloud solutions, and even though hardware is never directly exposed to you in the cloud, you can certainly still implement host-based intrusion detection systems, which are simply software-based installations that you could install on your virtual machines. But you can also still subscribe to the services of a network-based intrusion detection system, which is typically implemented in the form of a hardware-based appliance. Although you don't implement and manage the hardware directly, there is still a hardware device that provides you with the service.
So, you are simply configuring that service to protect all of the systems on the network. Now, all of that said, be mindful of the name: an Intrusion Detection System does just that. It detects attempts at intrusion so you can be made aware of them, but it does not inherently block the attempt. If you do require the attempt to be blocked, then you can also implement an Intrusion Prevention System, which also comes in the form of a host-based or software-based implementation, or a network-based, hardware-based solution.
Now, this does bring up the question, why wouldn't you want an attempted intrusion to be blocked? Well, for starters, these systems simply aren't always right. In some cases, legitimate traffic might be identified as questionable, so if everything identified as suspicious were always blocked, you could be dealing with a lot of unhappy users. And of course, on the flip side of that, something that is genuinely suspicious might be allowed.
So, fine tuning of the process can occur over time, and in fact, the service can usually learn what is normal and what isn't the longer it operates. But there can also be situations where you do want a suspicious connection to be allowed, so that you can see exactly what type of vulnerabilities might be present in your solution. Now, this would have to be done in conjunction with close monitoring, and possibly the implementation of honeypots to lure attackers to a false reward, but it can still identify those vulnerabilities that may have been missed. Cloud Data Loss Prevention does not refer to losing data due to an event such as a hard drive failure.
Rather, it's a service that can apply customizable classifications to the data that you store, to prevent users from taking the data outside of your environment. This is commonly used within any organization that maintains particularly sensitive data, such as financial information, health records, intellectual property, or any kind of top secret data.
Various policies can be set to examine the data and classify it, so that the data is only ever available to be consumed within the confines of your organization or your cloud subscription. Common activities that can be prevented include printing hard copies that could be taken anywhere by your users, forwarding content in an email to a competitor, or storing the data on removable storage, which again makes it portable and allows the user to take it with them.
Network Access Control can be thought of as a multi-function device or service that allows you to combine the capabilities of many other protection services, such as firewalls, intrusion detection, or anti-malware applications, into a centralized management platform that can deploy consistent security configurations and policy settings to all users and devices in your network or cloud environment. Such services would generally be more common in larger networks that have perhaps moved to a bring-your-own-device model, which of course can result in a much broader range of devices that need to be supported and secured. Given such a varied base, Network Access Control services can be used to centralize the management of all of these different platforms to defend your entire network perimeter, including both your on-premises environments and cloud subscriptions, and not only seek to proactively prevent incidents from occurring, but to also manage incident response when they do occur.
And finally, a network packet broker can be used to provide management for all of the network tools that are in use in any given environment, which these days can be significant, again, particularly in larger networks or cloud solutions. There are dedicated tools now for dealing with just about every aspect of network communications including monitoring, management, security, forensics, usage and metering and many others. So, an NPB can simplify the deployment of each of these tools by allowing them to interface with the broker through a plugin, which essentially eliminates tool sprawl by facilitating the deployment and management of all tools into a single application.
In that regard, an NPB can be thought of very much like a toolbox, a means to keep all tools in one place as opposed to having them scattered about. So, when it comes to implementing any type of cloud solution, clearly security and proper management of that solution needs to be considered from every angle. But in many cases there will be services available from the provider that can be used in conjunction with those that you implement yourself to address most any security concern.
Network Flow Analysis and Event Monitoring
In this video, we'll take a look at how you can improve the security of your cloud network through logging, analysis and event monitoring, which are practices that still have to be carried out in a cloud environment. Even though the provider is responsible for the underlying infrastructure of your subscription, they are not responsible for any kind of solution that you implement on that infrastructure.
So, the system activities that occur within your subscription will still need to be examined by reviewing logs, just like in any on-premises environment. Now, that said, there is a specific and distinct difference between logging and monitoring that should be noted. Logging is always the process of recording activity.
It will generally include any kind of event that occurred and associated data such as user activity, error messages or failures of any kind, and there can be many different devices, services and applications that can all generate logs. But logging in and of itself is not a proactive means of managing your network.
It is simply a record of what has been happening. Now, that is not to suggest that logs can't be used in a proactive way, but it is up to you to examine and analyze the logs to identify areas of problem or concern.
Then to take action to address those concerns going forward. Event monitoring, however, is much more proactive or at least much more real-time. Where logging reports what has happened, monitoring can identify what is happening right now.
So, you can see in real time events such as resource or configuration changes, which can then be traced to specific targets or processes.
These can in turn be used to create rules to respond to those events if desired, or to at least trigger an alert, so that you can respond to an event as quickly as possible. Examples of events that could be monitored as part of a cloud implementation will of course be dependent on what kind of services are being provided.
But they might include any kind of a change to the topology or resource allocation, routing or processing events, or simply the overall status of any component in terms of its health, performance level, configuration or security.
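As a simple illustration of turning monitored events into something actionable, here is a minimal sketch in which each incoming event is checked against a basic rule and matching events trigger an alert; the event structure, resource names, and thresholds are illustrative only:

```python
# A minimal sketch of rule-based event monitoring: evaluate incoming events
# and raise an alert when a rule matches.
from datetime import datetime, timezone

def alert(message):
    print(f"[ALERT {datetime.now(timezone.utc).isoformat()}] {message}")

# Hypothetical events as they might arrive from a provider's monitoring feed.
events = [
    {"resource": "vm-web-01", "type": "cpu_utilization", "value": 97},
    {"resource": "vnet-prod", "type": "route_table_changed", "value": None},
    {"resource": "vm-db-01", "type": "cpu_utilization", "value": 35},
]

for event in events:
    if event["type"] == "cpu_utilization" and event["value"] is not None and event["value"] > 90:
        alert(f"{event['resource']} CPU at {event['value']}%")
    if event["type"] == "route_table_changed":
        alert(f"Routing configuration changed on {event['resource']}")
```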
And regardless of the type of service you're consuming from the provider or providing to your customers, monitoring is critical to ensuring stable and reliable performance. It facilitates better visibility into the interaction among all interrelated components, and the gathering of metrics, metadata and events to determine which components are performing as expected versus those that require attention.
It also promotes better collaboration, particularly in very complex services, and provides insights and decision support through analysis, so that changes can be made going forward that help to improve your services.
In short, there is almost never a situation where a solution can be deployed and then just left on its own to perform its function. While it may work exactly as expected initially, at some point issues will arise. So, proper logging and monitoring are critical to ensuring the long term stability of your services.
Network Hardening
In this video, we'll examine the process of hardening a cloud network, which quite simply refers to the practice of disabling whatever is unnecessary to the services you provide. Hardening is a basic method of improving the level of security for your systems and mitigating the risk of becoming compromised, by ensuring that each system's configuration and settings are appropriate with respect to the services that the system provides. In essence, this is an extension of the principle of least privilege, which states that users should only have the minimum level of access or permission necessary for them to be able to do their jobs, but nothing more.
Similarly, a server or a system should only be running the processes necessary for it to do its job and nothing more. The reason behind this is simply to improve security by eliminating or at least reducing any security vulnerabilities by removing any and all unnecessary services and applications. But the reason why these services or applications might present a risk in the first place is because every service or application, while designed to improve functionality, also represents a means by which the system could become compromised.
Every running application requires processes, services, protocols, and ports to support it, which unfortunately presents an attacker with more possible avenues of attack. So, some common points to look for when hardening will generally include any and all programs that simply aren't being used on that system.
And services, which on a server are small executables that support the general network services that the server can supply for the clients. For example, if you have a DHCP server to assign IP addresses to your clients, there is a DHCP service that supports this process. And you might be surprised by how many are present and are running by default. Now, this will depend on the platform, of course, and what that server is designed to do, and you certainly need to have a good understanding of what all of these services are before you shut any of them down.
But as an example, on a typical Microsoft Windows server that has just been freshly installed with no other applications, there are well over 100 services present by default. Perhaps not all of them running, but they should be considered with respect to whether they are necessary. Ports are another very common point of hardening.
They're one of the first points of attack from the perspective of an attacker. Each open port represents a door, if you will, through which the attack could proceed. Unused accounts or weak or broadly defined permissions could expose access to resources, and, of course, physical access points to the network should be considered as well.
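On the subject of ports, here is a minimal sketch of auditing what a host is actually listening on, assuming the third-party psutil library is installed; anything listening that you cannot account for is a candidate for being disabled:

```python
# A minimal sketch of enumerating listening ports on a host as part of
# hardening. May require elevated privileges on some platforms.
import psutil

listening = set()
for conn in psutil.net_connections(kind="inet"):
    if conn.status == psutil.CONN_LISTEN and conn.laddr:
        listening.add((conn.laddr.ip, conn.laddr.port))

for ip, port in sorted(listening, key=lambda entry: entry[1]):
    print(f"Listening on {ip}:{port}")
```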
In short, if it can be disabled, it should be disabled. So, as mentioned earlier, this approach is quite similar to the principle of least privilege, and when beginning your hardening process, this should always be the lens applied to all aspects. And in fact, the actual principle of least privilege itself, with respect to the configuration of your user accounts, computer accounts, and service accounts, in terms of their level of access, should always be part of the hardening process.
In other words, hardening is not only the process of hardening systems. Every means of being able to access any component of your environment needs to be considered. Due to the ever-changing nature of any networking environment, particularly with the flexibility available in the cloud, changes are bound to happen. So, once you have completed your initial hardening process, a baseline should be captured so that changes can be compared to that baseline to see how far your configuration drifts over time.
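Here is a minimal sketch of that baseline idea: capture a snapshot of hardened settings, then compare a later snapshot against it to spot drift. The settings shown are placeholders; in practice the snapshot might come from your provider's APIs or configuration exports:

```python
# A minimal sketch of capturing a configuration baseline and detecting drift.
import json

def save_baseline(settings, path="baseline.json"):
    with open(path, "w") as fh:
        json.dump(settings, fh, indent=2, sort_keys=True)

def detect_drift(current, path="baseline.json"):
    with open(path) as fh:
        baseline = json.load(fh)
    return {
        key: (baseline.get(key), current.get(key))
        for key in set(baseline) | set(current)
        if baseline.get(key) != current.get(key)
    }

save_baseline({"rdp_enabled": False, "open_ports": [22, 443]})
print(detect_drift({"rdp_enabled": True, "open_ports": [22, 443]}))
# {'rdp_enabled': (False, True)}
```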
And access should also be limited, both from the perspective of what a server has access to and who has access to that server. For example, the server itself should only have the necessary operating system features installed, so that it cannot be used to access services that it doesn't run. And only the appropriate people should have access to the server to apply the desired configuration, including when changes do need to be made.
Ensure that only authorized users can make those changes, and be sure to audit or document any and all changes, so that it's fully known why any change was made. Some best practices to follow include the use of benchmarks that are generally publicly available on the internet, so that you can compare common configurations against yours and/or find configuration considerations that you may have missed. Examine the tool sets that are available to assist with hardening.
For example, there are several applications and management suites that are able to centralize all of your servers, so that they can be configured from a single location. And you can group them together based on common functionality or platforms, such as web servers or application servers.
You can also make use of predefined server images, which will ensure that at least the default installation will be identical every time a new server is created from that image. So, it will always give you a consistent starting point that represents a pre-hardened server. And many platforms also support the ability to create and distribute policy-based configurations, whereby you can specify the state of many different settings within the policy, then simply distribute that policy to many different target systems.
Every system that receives it will then have the exact same configuration. And you can have as many policies as you like, again, possibly based on functionality. For example, one policy for database servers, another for messaging servers, another for file and print servers, and any other type of server that you might need. Other best practices might include scheduled disconnection.
If, for example, there are some services that are only required during standard business hours, then they might be disconnected after hours. Now, that said, of course when it comes to cloud services, people can work from anywhere, anytime. But you still might be able to disconnect some services from, say, midnight to 6 AM. Firewalls should most certainly be implemented, and the logs should be evaluated regularly to determine if there have been any instances of questionable access.
Updates for any and all systems and services that are running should always be applied as well. Perhaps you might need to evaluate the updates first, but the bulk of updates that are released are meant to address possible vulnerabilities. Account or event logging should always be implemented as well, and the logs should be reviewed regularly, just like the firewall logs,
as you can determine if any accounts or services have been performing questionable activities. Lockout policies should also be configured on all accounts, whereby too many failed attempts will lock the account, as this could indicate that someone or something is trying to compromise that account. Ultimately, there are many considerations when it comes to hardening, but the basic principle is always consistent: minimization in every regard. If you don't need it, shut it down.
Network Security Tools
In this presentation, we'll provide an overview of some of the basic types of security tools that can be used to perform various tests to assess your overall level of network security in the cloud, by scanning applications, services and other processes, to identify weaknesses or vulnerabilities that might be present. Now, there are many different individual tools, of course, but specifically, when it comes to cloud services, a primary factor to bear in mind is that the physical infrastructure of the environment is the responsibility of the provider. So, while it may certainly be advantageous to be mindful of the security measures they themselves have implemented, ultimately, it does not fall to you to ensure the security at that level.
So, most of your efforts will most likely be focused on identifying weaknesses in software, services and configuration.
Toward that end, there is a category of tools known as Dynamic Application Security Testing tools. Now, this encompasses a large number of commercially available and even open source tools that can be implemented to assess your environment. A common example is a web application vulnerability scanner, which can be implemented outside of the site or application from your perspective. And can run continuous tests against the application in an automated manner to identify the strengths and weaknesses of the implementation.
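As a heavily simplified illustration of what such a scanner does, here is a sketch that sends a few crafted inputs to a target and flags responses that suggest a weakness. The target URL and payloads are hypothetical, and you should only ever scan applications you own or are authorized to test:

```python
# A toy sketch of automated web application probing: send test payloads and
# flag responses where the payload is reflected back unmodified.
import requests

TARGET = "https://app.example.com/search"  # placeholder application endpoint
PROBES = ["'", "<script>alert(1)</script>", "../../etc/passwd"]

for payload in PROBES:
    response = requests.get(TARGET, params={"q": payload}, timeout=5)
    reflected = payload in response.text
    print(f"payload={payload!r} status={response.status_code} reflected={reflected}")
```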
Another common example is a port scanner, which can be used to continually try port after port to identify any that might allow access to an application or service, so that you can shut down any that aren't necessary.
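A minimal TCP connect scan can be sketched with nothing more than the standard library; the host below is a placeholder inside your own environment, and as above, only scan systems you are authorized to assess:

```python
# A minimal TCP connect scan of a short port list.
import socket

HOST = "10.0.1.10"  # placeholder host inside your own environment
PORTS = [22, 80, 443, 3306, 3389]

for port in PORTS:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(1.0)
        result = sock.connect_ex((HOST, port))  # 0 means the port accepted the connection
        state = "open" if result == 0 else "closed/filtered"
        print(f"{HOST}:{port} {state}")
```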
Now, the benefits of implementing these types of tools don't just focus on improving network security. They can lead to better financial security as well, because of course, a breach in security could have serious repercussions for any organization in terms of lost revenue, significant downtime, or damage to your reputation and/or customer confidence. They can also identify where your services are most at risk, so you can focus your efforts and improve the efficiency of your operations, and they help to ensure that you remain in compliance with company policy, privacy laws or any other type of legal concern.
All of which helps to present a better posture to your customers. Another challenge presented by vulnerability assessment and management for any type of cloud solution is tracking the assets that you do have, which ones are being used and which ones aren't. And of course, this process becomes more and more complex with the size of your implementation.
But for very large environments, this can represent a significant challenge because there might literally be components that you aren't even aware of. You might have thousands upon thousands of resources. So, of course, this means that it's much more likely that any given component could escape your assessment efforts entirely.
You can't assess something when you don't even know it exists. So, proper inventory and documentation needs to be part of the process as well. So, when it does come time for you to put these tools to work, of course, it is also very helpful to know the types of attacks that are most likely to occur against your service and/or where the vulnerabilities are most likely to exist.
Now, clearly this will depend on what the solution is, but some common examples include SQL injection attacks, whereby a user interface would expect a value to be entered by a user, perhaps to be used as criteria when performing a search against a database, but instead SQL code is actually entered, which could allow access to records that should not otherwise be returned.
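The standard defense is to pass user input as a bound parameter rather than concatenating it into the query text. Here is a minimal illustration using the standard library sqlite3 module purely for demonstration; the table and input are hypothetical:

```python
# Contrast a vulnerable string-built query with a parameterized query.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer TEXT)")
conn.execute("INSERT INTO orders VALUES (1, 'alice'), (2, 'bob')")

user_input = "alice' OR '1'='1"  # a typical injection attempt

# Vulnerable pattern: the input becomes part of the SQL statement itself.
unsafe = conn.execute(
    f"SELECT * FROM orders WHERE customer = '{user_input}'"
).fetchall()

# Safe pattern: the input is treated strictly as data.
safe = conn.execute(
    "SELECT * FROM orders WHERE customer = ?", (user_input,)
).fetchall()

print("unsafe query returned:", unsafe)  # both rows leak
print("safe query returned:", safe)      # no rows match the literal string
```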
Similarly, a command of some type could be entered which could allow access to services or other internal processes through the interface. Simply assessing the configuration of the server should always be a consideration, particularly from the perspective of hardening. In other words, is it running any services or processes that are not needed with respect to what its primary function is? Path traversal should also be examined with respect to how data flows from one component to another, and what is done to process that data at each step.
And resistance to attacks, such as cross-site scripting, should certainly be assessed, whereby client-side scripts are injected into websites or web-based applications. These are similar in concept to SQL or command injection, but those types of attacks are typically targeted at compromising the server. Whereas, cross-site scripting is usually targeted at the users to gain access to personal information, for example.
Regardless of the circumstances, be sure to try to find an appropriate tool that can both identify and help you to address whatever vulnerabilities are discovered. And be sure to continue to use these tools on a regular basis, as things change all the time when it comes to cloud services. So, a secure configuration today may not remain secure for tomorrow and on.
Vulnerability Assessment
In this presentation, we'll take a closer look at performing vulnerability assessment against your cloud services, with respect to some of the common methods that can be used and best practices to observe. Now, of course, the focus of assessing vulnerability is to identify any areas of weakness that might present an avenue of attack for a malicious user. But this in itself relies on an understanding of the nature of the service and all of the components that make up your solution, which can be a daunting task, particularly if it's a very complex process overall. That said, once you do have a better understanding of what kind of vulnerabilities might exist, clearly you'll be better able to react to any vulnerabilities that are discovered before they're identified by an attacker. And of course, be sure to document and report, to ensure that there is a history of which weaknesses were identified and what was done to address them.
That said, at its most basic, the process of identifying vulnerabilities is essentially looking for security holes that could be exploited by attackers. Usually for the purposes of compromising software, devices or services to gain access to valuable information. Simple enough. However, what isn't quite so simple is realizing all of the components that are involved in identifying those weaknesses. For example, when the term vulnerability assessment comes up, most people tend to immediately think of technological components such as testing firewalls or scanning for open ports or looking for unnecessary services.
Which is all fine, of course, but that's hardly where the process stops. User accounts and their status have to be considered. The devices you support have to be assessed for how easily they could become compromised. Particularly in today's environments where so many users have mobile devices that can be easily lost or stolen. How secure are they if that happens? Permissions to resources and the level of access for your accounts needs to be assessed.
Particularly with respect to the personnel using those accounts, as they could be just as much of a target as your systems or services, if not more. Your services might be configured at the highest degree of technical security, but if your HR manager falls victim to a social engineering attack whereby the attacker obtains their username and password, the attacker would now likely have full access to personnel records, or maybe the payroll service, and many other resources that could reveal personally identifiable information about everyone in the organization. The point being that vulnerabilities can exist in every aspect of a service, so it has to be examined from top to bottom, if you will, or from start to finish.
Whatever approach you feel is appropriate, every point of interaction also represents a point of vulnerability. So then, with respect to mitigating some of the risks, there are other considerations to bear in mind when using cloud services, which includes the shared responsibility model. And this essentially states that in a cloud environment, both you as the consumer and the cloud provider share the overall responsibility of identifying vulnerabilities and managing risk.
And this is due to the fact that you're building your solution on top of their infrastructure; therefore, the cloud provider shares in some of the responsibility. That said, whatever you do build is, of course, your responsibility. So, there are pros and cons to this model, in that you as the customer do not have to worry about components such as the physical infrastructure. That is entirely the responsibility of the provider, so it can lessen your overall burden.
But there needs to be a clear boundary established between the two, so that if a breach does occur, one entity cannot blame the other. For example, if you create a virtual machine, and the physical host server falls victim to an internal attack at the actual data center, and your virtual machine is compromised as a result, then clearly this is the responsibility of the provider. Conversely, if you fail to harden your virtual machine, perhaps by neglecting to install an anti-malware application, and it becomes compromised,
then clearly this is your responsibility. Now, those are fairly obvious examples, but when it comes to cloud services, particularly in very complex solutions that may involve a lot of software as a service implementations where infrastructure is less of a consideration, the boundary between customer and provider responsibility might not be as clearly defined. So, particular attention needs to be paid to something like a service level agreement between you and the provider. Best practices that should be observed with respect to reducing your level of vulnerability should include focused and specific information sharing practices.
This is sometimes referred to as having a single source of truth, which in general refers to situations where there is a lot of collaboration or the use of teams to address specific areas of concern, but where a centralized approach toward achieving the same goal needs to be maintained.
For example, it might be the job of security administrators to identify the vulnerabilities, but then the actual remediation is handled by a DevOps team. More complex environments need to ensure that communication and delegation of responsibility are all implemented in a manner that avoids information skew as it's passed from person to person or team to team. A proactive approach that deals with configuration and management control should also be implemented.
Whereby it's not just about reacting to vulnerabilities that have been detected, but rather focusing on methods to prevent them from occurring in the first place. From a technical perspective, scanning host systems should also be included, as opposed to just focusing your efforts on potential network vulnerabilities such as weaknesses in your firewalls or intrusion detection systems. In most cases, if the attack is focused on personal gain,
as opposed to something like a denial of service attack, which is simply designed to take something down, then these attacks almost always involve data, which ultimately resides on some kind of host system somewhere. Network protection is only ever one layer of security. There needs to be multiple layers, so that if one layer is compromised, there are still many others to deal with. And again, do what you can to gather as much information as possible with respect to all areas of interaction and/or all components involved, which comes back to what was just mentioned a moment ago.
Every point of interaction also represents a point of vulnerability. So quite simply, you need to understand the entire system and the entire process. In addition, if you have implemented virtual machines that are based on preconfigured or prehardened images, then not only should you regularly scan the hosts;
the images themselves should also be scanned regularly. Most modern imaging methods allow you to apply updates to the images themselves without needing to mount them. For example, a new feature of the operating system could be introduced by injecting it directly into the image, which could possibly introduce a vulnerability. So, you might start stamping out many new systems based on an image that now includes a security flaw.
Similarly, profiles can be used to configure systems or services in a consistent manner in support of particular services, such as your web servers or your database servers. But you also need to be mindful of updates or new features here, as the profiles themselves cannot always be considered appropriate if new features are added or if older features are discontinued. Your own processes of detecting and remediating vulnerabilities also need to be treated as a vulnerability.
Again, due to the rapid nature of change in the cloud, any given process, procedure or policy that you may have been observing might itself fall out of compliance at some point. And if you can, or if it's warranted, try to automate responses wherever possible to minimize the response time, and in turn, minimize downtime. Clearly, vulnerability assessment can represent a significant undertaking for any organization, and perhaps the biggest vulnerability of all would be treating this entire process as just a one-off, if you will. Vulnerabilities can appear anywhere at any time. So, vulnerability assessment is a continual and ongoing process.
Security Patches
In this video, we'll be taking a look at deploying and installing security patches and updates, which can represent a significant component of your efforts to maintain security, because of course, any type of software or service will never be perfect, nor will it be immune to vulnerability. As new features are developed, those who would exploit them are often right on their heels so to speak. In fact, particularly so with new features, because in general, this is when they're most vulnerable. As an application matures, vulnerabilities are usually identified and addressed through the process of applying updates.
Therefore, the longer it is in use, the more stable it tends to become. But then of course, it will ultimately reach a point where it simply becomes outdated or obsolete. So, newer versions are always being released and the whole process starts over again. So, updating is a never ending process.
Now, there are different ways to address how you acquire, distribute and apply patches throughout your organization. But particularly these days, as cloud services proliferate faster and faster and new services and applications are released daily, it can very quickly become ineffective and unproductive to try to keep up with a manual approach. So, a patch management platform that centralizes and automates the process can be a much more efficient approach, and can also help to reduce the risk of exposure by ensuring that all systems receive updates in a timely manner.
And that no systems are missed, which of course could happen easily when using a manual approach.
Regardless of your implementation, however, a patch management strategy should include determining the level of vendor participation. In other words, how often does the original vendor or provider of an application or service release new updates, and for what kind of duration are they supported? What kind of notifications do they provide, if any, or is it incumbent upon you to manually check for updates?
Does your management practice include time to evaluate updates based on characteristics such as the severity level of the vulnerability they're designed to correct? And before they're deployed into production, are you able to assess their stability or their compatibility by installing them in a test environment first?
In addition, does your implementation give you the means to track which updates have already been deployed and to which systems? Or by which means are you able to notify your own users that updates or patches have been recently applied, and for example, might require the system to be rebooted.
And are you able to generate reports that can show the success or failure rate of updates that have been deployed? Clearly, if a system fails to receive an update, you need to be made aware of this so you can try to reinstall it. And of course, be mindful that applying patches also constitutes change to your environment. So, patch management itself should be integrated into your overall change management strategy, particularly if any given update causes an increment to the version of that application or service.
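Picking up the reporting question above, here is a minimal sketch of summarizing deployment results per update from a device inventory. The data structure, device names, and update identifiers are hypothetical; in practice this information would come from your patch management platform's reports or exported logs:

```python
# A minimal sketch of computing per-update success/failure rates from
# deployment records.
from collections import defaultdict

deployments = [
    {"device": "vm-web-01", "update": "KB500001", "status": "success"},
    {"device": "vm-web-02", "update": "KB500001", "status": "failed"},
    {"device": "vm-db-01",  "update": "KB500001", "status": "success"},
    {"device": "vm-db-01",  "update": "KB500002", "status": "success"},
]

totals = defaultdict(lambda: {"success": 0, "failed": 0})
for record in deployments:
    totals[record["update"]][record["status"]] += 1

for update, counts in totals.items():
    attempted = counts["success"] + counts["failed"]
    rate = 100 * counts["success"] / attempted
    print(f"{update}: {counts['success']}/{attempted} succeeded ({rate:.0f}%), "
          f"{counts['failed']} to retry")
```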
Does your solution also provide the means to deal with resolving issues that could be caused either by the update itself or with the management process itself? For example, would you be able to identify if two people were unknowingly assigned to address the same issue?
And does your method also facilitate the ability to audit past events and issues to identify how similar problems were addressed previously, and/or to provide accountability for actions that may not have been justified? Now, if you're dealing with a solution that you have developed yourself in the cloud, as opposed to just using other off the shelf solutions, then of course, you are responsible for managing updates and patches for that solution.
So, you will need to ensure that your developers and/or your DevOps teams engage in release tracking and version control mechanisms to identify which updates are applicable to which builds, and to be able to identify when any given instance is not up to date.
And if it's a matter of it being an application that is going to be deployed to all of the devices you support, then of course you need to ensure that this is extended to all devices, so that you are aware of which devices are current versus those that may have missed an update.
This will in turn require that you maintain not only an inventory of all of the patches and updates themselves, but also an inventory of all devices. Including any that might be used exclusively by remote users or external contractors, to ensure that you have a complete picture as to the state of your environment as a whole. Now, just to give you one practical example of this, if you are using Microsoft Cloud Services, they have features known as Intune and Endpoint Manager that, among many other services, allow you to manage the update infrastructure in a centralized manner.
So that all registered devices appear to you in a single management console, as do all available updates. So, for any update that needs to be deployed, you can simply select the appropriate update or updates.
Select the target groups of devices and simply deploy. From that point, there are built-in reports and logs that can be used to determine the state of any given update and the degree to which its deployment is complete. Now, that's only one example. But when it comes to implementing any kind of patch management platform, the ability to ensure consistency across all devices is one of the most effective ways to ensure that you are keeping your environment as current as possible.
Risk Registers and Patch Management Prioritization
In this presentation, we'll examine the implementation of what's referred to as a risk register as part of observing security best practices. Now, this involves the use of a variety of security tools and utilities that are designed to track and measure risk in a single, centralized manner. By doing so, you can avoid having different mechanisms that are implemented in different ways in different locations, and ending up with an incomplete picture of your security posture. Centralized management to any degree almost always allows for greater efficiency and results in the ability to save on resources, time, and effort.
A risk register should also facilitate the ability of your teams to evaluate risk from the perspective of scope, or how widespread the potential damage might be; the efficacy of the measures you have implemented;
and to what degree those measures indicate any level of compliance that has to be maintained for your organization. So, all that said, as mentioned a moment ago, a risk register is a collection of tools, utilities, and processes designed to assess your overall level of risk and the potential impact should an incident occur.
But they aren't specific applications that you download and install. There might be templates available online that you can certainly use as guidelines for creating a risk register, but even then you have to determine the content addressed by that particular template and compare what it addresses against your own requirements. But even in that regard, determining your own needs with respect to the risks you've identified can be a difficult process as well.
So, it might be worthwhile to include in your risk register an inventory of as many adverse events as possible, along with how likely any one event might be to occur and what the result might be, or how it would affect your business if it did. Then, at a basic level, after assessing or implementing a template, you should select the tools that can accommodate assessing and mitigating the risks themselves.
But your risk register should also include a separate log or register that can be used to identify and control situations that could contribute to increasing the risks that have already been identified, specifically through the policies and procedures that have been defined within your own organization. For example, if you have identified a risk of data being exposed to external sources, and your own policies allow for bring-your-own-device situations that don't support capabilities such as remote wipe, then your own policies, or lack thereof, have directly contributed to increasing that already identified risk. So, this might require some coordination with the stakeholders, along with significant monitoring and tracking of usage and behavior, but there shouldn't be any situations where your own activities are increasing the risk to your organization.
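As an illustration of the inventory idea above, here is a minimal sketch of what a single risk register entry might capture: the adverse event, its likelihood, its potential impact, and the mitigations and contributing policies associated with it. The fields, the 1-to-5 scales, and the example entries are assumptions for this sketch, not a prescribed format; real registers are often maintained in spreadsheets or governance tooling.

```python
# Minimal sketch of a risk register entry. The fields and the 1-5 scales
# are illustrative assumptions, not a standard.

from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    event: str                 # the adverse event being tracked
    likelihood: int            # 1 (rare) to 5 (almost certain)
    impact: int                # 1 (negligible) to 5 (severe)
    mitigations: list[str] = field(default_factory=list)
    contributing_policies: list[str] = field(default_factory=list)

    @property
    def score(self) -> int:
        """A simple likelihood x impact rating used to rank entries."""
        return self.likelihood * self.impact

register = [
    RiskEntry(
        event="Customer data exposed to external sources",
        likelihood=3,
        impact=5,
        mitigations=["Encryption at rest", "Data loss prevention policy"],
        contributing_policies=["BYOD allowed without remote wipe"],
    ),
    RiskEntry(event="Driver fault on a single laptop", likelihood=2, impact=1),
]

# Rank entries so the highest-scoring risks are reviewed first.
for entry in sorted(register, key=lambda e: e.score, reverse=True):
    print(f"{entry.score:>2}  {entry.event}")
```

Note how the entry also records contributing policies, which is exactly the kind of situation, such as the BYOD example above, that your own procedures should not be allowed to make worse.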
Another significant component of a risk register would be a means to manage patches and updates, as they remain one of the most effective ways to mitigate vulnerabilities. This should include the ability to identify the areas where the level of vulnerability is at its highest, and/or those that present the highest level of risk, so they can be addressed first. Now, in some cases that will be fairly obvious.
For example, if a critical update is released that addresses a vulnerability in a database server that houses all of your customer orders, versus an update that corrects a problem with a driver on a laptop, then clearly the database server needs to be addressed first. But in cases where it isn't quite so obvious, there are applications that can assess vulnerabilities against the Common Vulnerability Scoring System, or CVSS.
It quite literally rates known vulnerabilities and gives you a numerical score along with a categorical rating. For example, under CVSS version 3, scores from 9.0 to 10.0 are categorized as critical, while scores from 7.0 to 8.9 are rated high. In addition, your risk register should consider factors such as the versions of operating systems and the applications that run on them, to help ensure consistency across platforms and perhaps to identify any applications that might be considered legacy versions and no longer secure by today's standards. And lastly, any patch management strategy should allow you to test the patches first, then deploy them to production only after they have been verified to be stable.
The focus of the platform should be on patching, which in this context means that it should not be concerned with trying to identify and deal with zero-day issues that do not yet have known mitigations. In other words, while you should certainly test updates and patches, you shouldn't be trying to experiment with them. Ultimately, the form your risk register will take is up to you, and again, it's not a single tool, application, or process. It's the means by which you assemble and use all of the tools, applications, and processes that you feel are most appropriate into a cohesive unit that can best serve to reduce the overall level of risk throughout your cloud environment.
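To illustrate the prioritization idea, here is a minimal sketch that sorts a list of pending patches by their CVSS v3 base score and maps each score to its severity band. The patch names and scores are hypothetical; in practice the scores would come from your vulnerability scanner or from vendor advisories.

```python
# Minimal sketch: rank pending patches by CVSS v3 base score.
# Patch names and scores below are hypothetical examples.

def cvss_v3_severity(score: float) -> str:
    """Map a CVSS v3 base score to its severity band."""
    if score >= 9.0:
        return "Critical"
    if score >= 7.0:
        return "High"
    if score >= 4.0:
        return "Medium"
    if score > 0.0:
        return "Low"
    return "None"

pending_patches = [
    ("Database server remote code execution fix", 9.8),
    ("Laptop display driver stability fix", 2.1),
    ("Web front-end information disclosure fix", 6.5),
]

# Address the highest-scoring vulnerabilities first.
for name, score in sorted(pending_patches, key=lambda p: p[1], reverse=True):
    print(f"{score:>4}  {cvss_v3_severity(score):<8}  {name}")
```

The scoring bands follow the published CVSS v3 ranges; everything else in the sketch is simply an ordering exercise so the most severe items surface at the top of the work queue.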
Security Tools and the Impacts of Service Models
In this video, we'll examine some considerations for cloud-based security tools and the impact they can have on your choice of systems and service models, along with the effect that the different cloud service models can have on your choice of tools. So, beginning with the security tools themselves, one of the first considerations is where the tool comes from, for lack of a better phrase, which refers to the fact that there will likely be many vendor-provided tools. In other words, tools that are inherently available to be implemented within your subscription, or even already available by default.
But in some cases, you may need third-party tools from other companies, or even open source tools, and we'll come back to some specific examples and considerations in just a moment. With respect to the type of cloud service model being used, the consideration of Infrastructure as a Service versus Platform as a Service versus Software as a Service will certainly dictate, to a degree, which types of tools will be more suitable to use. An equally important consideration is the means by which your identity and access management is conducted, and which services you feel are most required for your users. For example, regardless of the service model, all cloud providers give you a means to manage your user identities. So, whether you're configuring a virtual machine in an IaaS model, developing a new service in a PaaS model, or configuring an application in a SaaS model, managing identities in a secure manner applies to all of those.
So, the question then becomes, what kind of common tasks are performed within each model? And what kind of tasks are applicable to every model? Coming back to the user identities, maybe you need to implement something like multi-factor authentication to increase the security for all cloud resources.
If so, this would most likely be an example of a vendor-supplied tool, as it is likely an integrated feature. But then, within a particular service model, you might need a specific monitoring tool to measure or meter the usage of a resource, and that tool might not be available from the provider.
So, you may need third-party or open source tools in those cases; the choice of tool will, of course, depend on the situation. Looking at some categories of tools, among the more common that would be natively available from the provider are monitoring tools, which allow you to monitor the specific events that are most important to you, to flag certain events for further analysis, to generate alerts when thresholds are crossed, or to analyze workloads to identify areas of performance concern such as bottlenecks or low resource availability.
But again, this would be dependent on the service model, as there would almost certainly be a means to detect something like low memory in a resource such as a virtual machine or a database, but that simply isn't an issue if you're using a purely SaaS model, as you only ever see the application itself, not the underlying resources.
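As a simple illustration of threshold-based alerting, here is a minimal sketch that checks hypothetical resource metrics against alert thresholds. The metric names, observed values, and thresholds are assumptions for the example; in a real deployment these checks would normally be configured in the provider's monitoring service rather than written by hand.

```python
# Minimal sketch: flag metrics that cross alert thresholds.
# Metric names, values, and thresholds are hypothetical examples.

# Latest observed values for a virtual machine (percent used).
observed = {
    "cpu_percent": 72.0,
    "memory_percent": 93.5,
    "disk_percent": 61.0,
}

# Alert when usage meets or exceeds these thresholds.
thresholds = {
    "cpu_percent": 85.0,
    "memory_percent": 90.0,
    "disk_percent": 80.0,
}

def breached(observed: dict, thresholds: dict) -> list[str]:
    """Return the metrics whose observed value crosses its threshold."""
    return [
        name
        for name, value in observed.items()
        if value >= thresholds.get(name, float("inf"))
    ]

for metric in breached(observed, thresholds):
    print(f"ALERT: {metric} at {observed[metric]}% (threshold {thresholds[metric]}%)")
```

The same pattern, observed value versus configured threshold, underlies most provider monitoring alerts; the difference is simply who hosts and evaluates the rule.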
However, when you start to get into more specific needs for your services, the tools that you require may not be inherently available, so this is when third-party or open source tools may come into play. Services such as firewalls will most likely be required to control network access and to protect against common attacks.
So, you'll need to assess what is available from the provider, and I'd say it's quite likely that there would be a firewall that is inherently available, but your particular solution may be better served by a third-party tool. If, for example, you're already familiar with a particular product that you've used before, then it might be easier for you to implement and manage. Likewise, encryption services are likely available from the provider, but there could be limitations on what they can encrypt.
For example, can they offer encryption services across all platforms such as Windows versus Linux? And is it offered selectively or automatically, which again, could be dependent on the service model?
In addition, do they offer key management services to protect the encryption keys, or do you have to manage them yourself? Is the encryption only available for data at rest, or for data both at rest and in transit? Again, there might be third-party or even open source utilities that you are already more familiar with that could more easily accommodate your needs or integrate into your solution.
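To make the at-rest versus in-transit distinction a bit more tangible, here is a minimal sketch of encrypting data at rest using the open source Python cryptography package's Fernet interface. The key handling is deliberately simplified for the example; in practice the key would be stored in a key management service rather than alongside the data.

```python
# Minimal sketch: symmetric encryption of data at rest with the open source
# "cryptography" package (pip install cryptography). Key handling is
# intentionally simplified; a real deployment would keep the key in a
# key management service, not next to the ciphertext.

from cryptography.fernet import Fernet

# Generate a key once and store it securely (hypothetically, in a KMS or vault).
key = Fernet.generate_key()
cipher = Fernet(key)

# Encrypt before writing to storage...
plaintext = b"customer order record 12345"
ciphertext = cipher.encrypt(plaintext)

# ...and decrypt when the data is read back.
assert cipher.decrypt(ciphertext) == plaintext
print("encrypted length:", len(ciphertext))
```

Protecting data in transit is a separate concern, typically handled by TLS on the connection itself, which is part of why it's worth checking exactly what a provider's encryption offering covers.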
And for whatever tools you do implement, is there a means to manage all of them in a centralized manner, such as through security dashboards or administrative centers? And are all of the tools that you've implemented able to be consolidated into those dashboards or administrative centers?
Are you able to manage options and settings for the tools themselves through the administrative interface or do you have to manage each one separately? If integration, centralized management, and consolidation are high on your list of priorities, then vendor provided tools are likely your best bet.
But again, many third-party tools may still be supported as well. Lastly, some specific types of tools that you might want to consider include a Network Access Control solution, which can enforce security policies on users and devices that are trying to gain access to your network, and Data Loss Prevention, which can help to ensure that data does not leave the confines of your organization or your cloud environment.
Intrusion Prevention Systems that can protect your network and host systems. Endpoint protection applications that can manage anti-malware, updates, patches, and security settings across the devices you support.
Cloud Access Security Brokers which can centralize security features such as authentication, authorization, single sign-on or multi-factor authentication across all users and devices. Mobile threat defense applications designed to enforce security features and settings specifically for mobile devices, such as remote locking and remote data wiping, and Endpoint Detection and Response services which can continually monitor and assess the behavior of client systems, and compare it against normal baselines to detect anomalous activities.
Again, all of these tools will be dependent on your service model and the tasks that your security team members need to perform. Ultimately, it's your call of course, but in general, vendor supplied tools might be easier to implement and manage, but they may not always provide the functionality you need. Just be sure to assess all options before making the call.
Creating Subnets in a Cloud Virtual Network
We're going to see a demonstration that will show you how to create a virtual network, and then subnets within that virtual network, in a cloud service. Now, this will help to isolate systems from each other when you need any given system or application to be isolated from any other, because the virtual network itself, like any network in an on-premises environment, can be subnetted down into smaller units to facilitate better isolation and better security.
Now, once again, I'm using Microsoft's Azure as the cloud provider here, but many of the same capabilities would be available in just about any type of cloud service. So, from my homepage here, I'm going to click on More services, just because the list of frequently used services here does not include Virtual networks. So let's just click More services.
[Video description begins] The All services page appears. It has the resource menu at the left-hand side and the working pane with the Featured and Free training from Microsoft sections on the right-hand side. [Video description ends]
And right here, we do see Virtual networks.
So, we'll click there,
[Video description begins] The Virtual networks page appears. [Video description ends]
and we'll click on Create.
[Video description begins] He selects the Create button from the command bar. The Create virtual network page appears with the Basics, IP Addresses, Security, Tags, and Review + create tabs. Currently, the Basics tab is selected. Underneath this tab, the following sections are visible: Project details and Instance details. At the bottom of this page, the following buttons are available: Review+create, Previous, and Next: IP Addresses. [Video description ends]
And in this case, I am going to use a new Resource group that I just created called MyResourceGroup.
[Video description begins] He selects the MyResourceGroup option from the Resource group drop-down menu. [Video description ends]
There is nothing in this Resource group. And again, the idea behind a Resource group is just to contain resources that are going to be used together.
So, all I need is a name for this network. So, I'm going to go with TestVirtualNetwork and the Region is fine.
[Video description begins] He enters the value TestVirtualNetwork in the Name field. [Video description ends]
So, let's just click on Next: IP Addresses > and go to the IP Addresses. Now, this is just a default configuration that is assigned, you can certainly change this if you like.
But the two main aspects here are the entire address space and the subnet that is created. So, up above, we do see the address space that is assigned by default, 10.1.0.0/16, which gives you approximately 65,000 addresses (65,536, to be exact) in that entire address space. Now, that's probably a lot, so we could go with a much smaller configuration if we wanted to, but you really don't have to, because what you can do is create smaller subnets.
Now, as far as the configuration of the address space and the subnets is concerned, again, this is entirely up to you, but it does at least give you this starting point. So, I'm going to leave that address space as it is, with roughly 65,000 addresses.
But for any given subnet, I probably don't need anywhere near that number of systems. So, just like in an internal environment, you might use subnetting, or maybe a combination of subnetting with VLANs to simply reduce the size of the networks to facilitate better isolation.
So, once again, there is a default subnet that has already been created for me. And it is called default. So, I can click on this and in fact, I can change the name if I want to.
[Video description begins] In the IP Addresses tab, he selects the default option under the Subnet name column. The Edit subnet panel appears on the right side of the page. This panel has the Subnet name field with the value, default, the Subnet address range field with the value, 10.1.0.0/24, and the SERVICE ENDPOINTS section. Finally, at the bottom there are two buttons: Save and Cancel. [Video description ends]
So, let's just backspace over that, and let's call this Subnet1,
[Video description begins] He updates the value to Subnet1 in the Subnet name field. [Video description ends]
and we'll leave the address space as it is.
And this facilitates up to 256 possible addresses in theory, but Azure reserves 5 of them in every subnet, including the network and broadcast addresses, which aren't assignable anyway. Now, that's getting into some of the logistics of subnetting, and that's beyond the scope of what we can cover in this short demonstration.
But suffice it to say, 251 addresses will be available for you to use in this configuration. So, let's just click on Save, and that simply renames the existing subnet. But I can create another one if I feel like I do still want a separate subnet within this total space. So, let's click Add subnet, and we'll call this one Subnet2. And this one will use the next available address space.
[Video description begins] The Add subnet pane appears on the right side of the screen. Next, he updates the value to Subnet2 in the Subnet name field. [Video description ends]
So, 10.1.0.0 has already been taken up by the first subnet, so this can be 10.1.1.0/24.
[Video description begins] Next, he enters the value, 10.1.1.0/24, in the Subnet address range field. [Video description ends]
And this will again create another subnet with 251 available addresses, but it's the next subnet in the series. So, let's just click on Add, and now I have two subnets.
So, I can place certain systems on Subnet1 and certain systems on Subnet2, and they will be isolated from each other. Now, since they are all within the same network space, you could still route between them if you wanted to. That's fine. But as it is, you have these separate and isolated subnets, and that's typically one of the first approaches to implementing overall network security: not having networks that are any larger than they need to be.
So, let's just click on Review + create and this will take a moment to actually process and create the object, but it shouldn't take very long.
[Video description begins] He clicks the Create button. The Microsoft.VirtualNetwork-20210119160937 | Overview page appears. [Video description ends]
So, let's just give that a minute and we'll come back to verify the config. There, it has already completed. So, let's just click on Go to resource and we'll see the properties.
[Video description begins] The Deployment succeeded pop-up box appears with the Go to resource and Pin to dashboard buttons. Next, he clicks the Go to resource button. The TestVirtualNetwork page appears. [Video description ends]
So again, the Address space is exactly as it was set by default. But if we go to Subnets over on the left-hand side, it gives us a breakdown of those subnets.
Then we can click on either one of them and we can make changes or
[Video description begins] He selects the Subnets option from the Settings section under the Resource menu. The TestVirtualNetwork | Subnets page appears. Next, he selects the Subnet1 option from the working pane. The Subnet1 pane appears on the right-hand side. [Video description ends]
set properties within here, and I don't really want to get into any of that at this point. But the key aspect is that you can have as many subnets as you like, within the capabilities of that address space, of course. We have now implemented separate and isolated subnetworks within this overall network space. Again, a process very similar to implementing subnetting in an on-premises environment or using features such as VLANs.
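As a footnote to the demonstration, the same address plan can be worked out ahead of time with Python's standard ipaddress module. This is only an illustration of the subnet math, not part of the Azure workflow itself; the 5-address reservation reflects Azure's per-subnet behavior, and the network and subnet ranges match the ones used in the demo.

```python
# Illustration of the subnet math used in the demo, with Python's standard
# library. This only plans the addressing; it does not talk to Azure.

import ipaddress

# The virtual network address space assigned by default in the demo.
vnet = ipaddress.ip_network("10.1.0.0/16")
print(f"{vnet} has {vnet.num_addresses} total addresses")  # 65536

# Carve the space into /24 subnets and take the first two, as in the demo.
subnets = list(vnet.subnets(new_prefix=24))[:2]

AZURE_RESERVED_PER_SUBNET = 5  # network, gateway, 2 x DNS, broadcast

for name, net in zip(["Subnet1", "Subnet2"], subnets):
    usable = net.num_addresses - AZURE_RESERVED_PER_SUBNET
    print(f"{name}: {net} -> {usable} usable addresses")  # 251 each
```

Sketching the plan this way before touching the portal can help confirm that the subnets you intend to create actually fit within the address space and leave room for future growth.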