
Infrastructure Planning
This is a guide on infrastructure planning.
Evolution of Cloud Infrastructure
Now when we talk about cloud computing, there are different aspects to it. It is about the hardware evolution, the Internet software evolution, and server virtualization. So let's look at the hardware evolution. Many years back, we had very basic chips, such as the 386 or 486, and then came the Pentium; these could perform only very basic functions. If we look at the kind of chips that we are dealing with today, they are capable of handling gigabytes of data.
These chips today run at gigahertz speeds and can perform very complex functions. And of course, this evolution has happened from the 1960s and 70s onwards and has continued to date. Then we are talking about the Internet software evolution. It all started with ARPANET, which was basically a trial project. But then ARPANET took on a bigger picture, things evolved, and it became the Internet for everybody. Whereas in the initial stage ARPANET was limited to only a few people, the Internet is open to everybody. So of course, with this evolution the TCP/IP stack came into the picture, and then came along IPv6, which also got merged into the Internet infrastructure. And now we have many devices which are running IPv6.
Of course, along the journey everything got standardized. So there are specifications which were laid down for IPv4, and specifications which were laid down for IPv6. And a lot of other applications, stacks, and e-services were standardized for the Internet. Now let's also talk about server virtualization. In the good old days, we had mainframes. Then we had a single server with dumb terminals, where the dumb terminals could only connect to the central controller, and all the processing would happen on the central controller. Then came virtualization. With virtualization, you have two components, which are the host and the guest. The host system is the physical server.
This particular server can run many other servers in the form of virtual machines. So the virtual machines are basically consuming the resources from the host system. There is no physical presence of any component within the virtual machines. They exist only in virtual form, so they consume the CPU, the network adapter, and the memory from the physical host. A virtual machine cannot have a physical component of its own. So on a big server, you could run maybe hundreds of virtual machines, depending on how big that server is in terms of system resources. It depends on the number of CPUs you have, the amount of memory you have, and the storage within that server.
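To make that concrete, here is a minimal sketch of the capacity arithmetic; the host and guest sizes are purely hypothetical numbers used for illustration.

```python
# A minimal sketch of how the number of guest VMs a host can run is bounded
# by whichever resource (CPU, memory, or storage) runs out first.

def max_guests(host_cpus, host_mem_gb, host_storage_gb,
               vm_cpus, vm_mem_gb, vm_storage_gb):
    """Return how many identical guests fit on the host."""
    by_cpu = host_cpus // vm_cpus
    by_mem = host_mem_gb // vm_mem_gb
    by_storage = host_storage_gb // vm_storage_gb
    return min(by_cpu, by_mem, by_storage)

# Hypothetical host: 64 CPUs, 512 GB RAM, 10 TB storage.
# Hypothetical guest: 2 CPUs, 8 GB RAM, 100 GB storage.
print(max_guests(64, 512, 10_000, 2, 8, 100))  # -> 32, limited here by CPU count
```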
So depending on all these resources, you could run a certain number of virtual machines or the guest systems within a server. So just to recap, in this video we talked about the evolution of cloud computing and we also looked at hardware evolution. Then we also talked about the Internet software evolution. And finally, we looked at the server virtualization.
Cloud Computing Software Security
When we talk about data security, it has three parts to it. One is confidentiality. Second is integrity. Third is availability. In this particular video, we will look at how confidentiality is achieved by applying encryption. We will also look at how availability can be achieved by replicating your environment from one availability zone to another availability zone. So let's head over to the AWS environment now.
[Video description begins] An AWS interface displays titled VPC Dashboard. The header panel displays two menus: Services and Resource Groups. There are two buttons: Create subnet and Actions. The Actions button has a drop-down icon. Below the buttons is a search bar, followed by a table. The table displays the information under the following column headers: IPv6 CIDR, Availability Zone, Availability Zone ID, Route table, Network ACL, and Default subnet (the rest of the column header information is not visible). [Video description ends]
So when we launch a particular virtual machine in the Amazon AWS environment, we first select it.
[Video description begins] The host switches to another open tab titled: Launch instance Wizard. It lists seven steps to complete the wizard. Currently, the screen displays Step 1: Choose an Amazon Machine Image (AMI). [Video description ends]
So once the selection of a particular virtual machine is done, we have certain parameters that we can define. So let's first launch the virtual machine.
[Video description begins] The screen displays different virtual machine names for selection such as Ubuntu server, Microsoft Windows server 2019 Base, and so on. There is a Select button against each name. [Video description ends]
So once we do that, there are certain parameters that we need to define. So for instance, we'll go ahead and define the purpose of it.
[Video description begins] The host clicks the Select button of the Microsoft Windows server 2019 Base. The Step 2: Choose an instance Type displays. There is a table with different column headers. The host selects one of the rows from this table. [Video description ends]
And then we have certain parameters like network parameters.
[Video description begins] The step 3: Configure Instance Details displays. There are different fields listed on this screen such as Number of Instances with current selection as 1. The host clicks the drop-down list against Network field and selects a vpc Mail server. [Video description ends]
Then we have a subnet.
[Video description begins] He clicks the drop-down list against the Subnet field. There are three options in this list. The host does not make any selection. [Video description ends]
By default, one particular availability zone is selected. So just go ahead with the default value. And once we proceed further, we have to define the storage.
[Video description begins] He scrolls down the screen and clicks a button which is not currently visible on the screen. The Step 4 : Add Storage displays. [Video description ends]
So by default, a particular storage volume is defined for the virtual machine that you launch.
[Video description begins] The screen displays a table with different column headers such as Volume Type, Device, Size and so on. [Video description ends]
Then we go ahead and define the tags.
[Video description begins] He scrolls down the window and clicks a button that is not visible on the screen. The Step 5: Add Tags displays. He again clicks a button which is not visible on the screen and next step, Step 6: Configure Security Group displays. [Video description ends]
And we also add rules for specific ports or services.
[Video description begins] The screen displays fields such as Assign a security group, Security group name, and Description. There is a table with different column headers and a button labelled Add Rule. [Video description ends]
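The same launch wizard flow can also be expressed programmatically. Below is a minimal boto3 sketch that mirrors the steps we just walked through; the AMI, subnet, and security group IDs are hypothetical placeholders.

```python
# A minimal sketch of launching an instance with the same choices as the wizard:
# AMI, instance type, subnet, storage, tags, and security group.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-2")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",           # Step 1: AMI (placeholder ID)
    InstanceType="t3.medium",                   # Step 2: instance type
    SubnetId="subnet-0123456789abcdef0",        # Step 3: network/subnet (placeholder)
    BlockDeviceMappings=[{                      # Step 4: storage
        "DeviceName": "/dev/sda1",
        "Ebs": {"VolumeSize": 30, "VolumeType": "gp2"},
    }],
    TagSpecifications=[{                        # Step 5: tags
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "MailServer01"}],
    }],
    SecurityGroupIds=["sg-0123456789abcdef0"],  # Step 6: security group (placeholder)
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```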
Now, let's go back to AWS and click on EC2.
[Video description begins] The host clicks the logo of AWS on this screen and it opens the page titled AWS Management Console. It lists the AWS services, a search bar to find a particular service, a link to access resources, explore AWS and Share feedback. [Video description ends]
[Video description begins] Under the All Services group on this screen, the host clicks EC2. [Video description ends]
So once we have the page loaded, we go into Running Instances.
[Video description begins] A new page, EC2 console, displays. It has a Resources section in the center of the screen, Account attributes on the right side of the screen, and a navigation panel on the left-hand side of the screen. The host clicks Running instances under Resources. [Video description ends]
Then once the particular page loads up, we select a particular virtual machine and see if we can connect to it.
[Video description begins] A screen loads that displays two virtual machine entries in the table. There is a Launch Instance button. The host selects one of the entries from the table and Connect button becomes enabled. He clicks the Connect button and a Connect to your instance pop-up window displays. [Video description ends]
So we connect to this particular system and try to get its password.
[Video description begins] The pop-up window displays the connection related information. It also has the following field information: Public DNS, Username and Password. For the Password field, it has a Get Password button. The host clicks this button. [Video description ends]
Notice there is no password, we have to provide the encryption key for
[Video description begins] Another pop-up window opens titled Connect to your instance Get Password. [Video description ends]
the password to be revealed. Even when the password is revealed, it is going to be in encrypted form, which means we have to decrypt the password first before we can use it. So we just close this particular dialog box and move further.
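As an aside, retrieving and decrypting that Windows administrator password can also be scripted. This is a minimal sketch, assuming boto3 and the cryptography package are available; the instance ID and key file path are hypothetical placeholders.

```python
# A minimal sketch: the API returns the password encrypted with the key pair's
# public key; we decrypt it locally with the matching private key (.pem file).
import base64
import boto3
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import padding

ec2 = boto3.client("ec2", region_name="us-east-2")

encrypted = ec2.get_password_data(InstanceId="i-0123456789abcdef0")["PasswordData"]

with open("my-key-pair.pem", "rb") as f:
    private_key = serialization.load_pem_private_key(f.read(), password=None)

password = private_key.decrypt(
    base64.b64decode(encrypted),
    padding.PKCS1v15(),
)
print(password.decode("utf-8"))
```

With the password decrypted, we can log in as before. We go back to the KMS Console now.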
[Video description begins] The host switches to another open tab titled: KMS Console. The Key Management Service (KMS) screen displays. The screen includes a search field for "Customer managed keys" and two buttons on top labelled: Key actions and Create key. The Key action button has a drop down menu. The table below displays columns labelled: Alias, Key ID, Status, and so on. The first row with the alias labelled: MyKey is selected by default. [Video description ends]
When you are talking about the KMS console, this is where you define your keys. Now, there is one key which I have, which is known as MyKey. Notice that it is using symmetric encryption by default.
So we can generate another key, which means we can generate a Symmetric or Asymmetric key.
[Video description begins] The host selects Create Key and the "Configure key" screen displays a left pane with the options labelled: AWS managed keys, Customer managed keys, and Custom key stores. The right pane displays Step 1 of 5 which includes "Key type," with two radio buttons labelled: Symmetric and Asymmetric. The Advanced options section appears at the bottom along with Cancel and Next buttons . [Video description ends]
It depends on what kind of functions we are going to be performing and what kind of requirements we have.
[Video description begins] The host clicks the KMS link and it opens a pop-up. It has an article on AWS Key Management Service listing the features of this service. [Video description ends]
So you can accordingly generate a particular key. Now, KMS is the default repository that can be used for storing the keys that you generate. It performs a centralized key management function.
[Video description begins] He scrolls down the window and explains from the Benefits and features section. [Video description ends]
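For reference, the same keys can be created programmatically. This is a minimal boto3 sketch rather than the exact console flow; the descriptions and alias are hypothetical.

```python
# A minimal sketch of creating symmetric and asymmetric keys in KMS.
import boto3

kms = boto3.client("kms", region_name="us-east-2")

# Symmetric key (the default), used for encrypt/decrypt.
sym = kms.create_key(Description="Demo symmetric key",
                     KeySpec="SYMMETRIC_DEFAULT",
                     KeyUsage="ENCRYPT_DECRYPT")

# Asymmetric key pair, for example RSA 2048 used for sign/verify.
asym = kms.create_key(Description="Demo asymmetric key",
                      KeySpec="RSA_2048",
                      KeyUsage="SIGN_VERIFY")

# Give the symmetric key a friendly alias, like MyKey in the console.
kms.create_alias(AliasName="alias/MyKey2",
                 TargetKeyId=sym["KeyMetadata"]["KeyId"])
```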
Now, let's go back to the VPC Dashboard.
[Video description begins] He switches back to another tab, VPC Dashboard. [Video description ends]
Once we are here, we will see that there are different availability zones. We are using us-east-2c and 2b. Now, there could be another availability zone, which could be 2a as well. These can be configured by clicking on Ohio in the upper right-hand corner. So availability zones can be chosen as per our requirement, depending on where we want to place our virtual machine or the data.
We can choose that particular Availability Zone within the AWS environment. So let's see how availability zones are configured. Here we have Availability Zone one and two. Now, notice everything is replicated between these two zones. So you have the NAT gateway here; in Availability Zone two, you have the Bastion host, and there is a Bastion host on this particular side as well. Everything is just a replica.
[Video description begins] The host explains a diagram on availability zones in the AWS environment. First there is an outer layer of AWS, then a layer of VPC, and finally the Amazon EFS. There are two zones which are exact replicas of each other. Each zone has a NAT Gateway and a Bastion host. There is a Public subnet assigned for each zone. [Video description ends]
So you have something called Elastic Load Balancing in between.
[Video description begins] In between these two zones, there is an internet gateway that is further connected to Elastic Load Balancing. It is connected to both the zones. [Video description ends]
Now, what happens if Availability Zone one goes down? Availability Zone two automatically takes over. So this is how you configure availability in the AWS environment.
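As a quick illustration, the Availability Zones available for placement in a region can also be listed programmatically. This is a minimal boto3 sketch using the Ohio region shown above.

```python
# A minimal sketch of listing the Availability Zones in a region, which is how
# you would decide where to place virtual machines or data for redundancy.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-2")  # the Ohio region shown above

for zone in ec2.describe_availability_zones()["AvailabilityZones"]:
    print(zone["ZoneName"], zone["State"])  # e.g. us-east-2a available
```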
Now, a similar function can be performed in the Azure environment also. Just to recap, in the AWS environment, we looked at how to create a key, which could be symmetric or asymmetric. Then we looked at the availability zones, zone one and zone two. So if either one of these zones goes down, the other zone takes over.
Web Services for the Cloud
So in various courses, we discussed the different types of cloud service models, which are Software as a Service, Infrastructure as a Service, and Platform as a Service. And they can be used as and when you have the requirements. So depending on a specific requirement, you would choose a particular cloud service model. For instance, if you have to deploy something as part of the infrastructure, you would choose the Infrastructure as a Service model. Or if you want to use a particular software within the cloud environment, then you would probably opt for a subscription in the Software as a Service model. That would be something like Office 365, or you could go to Google Apps. These are examples of Software as a Service.
Now, let's say you also want to deploy a particular operating system. You want to deploy Linux Mint and boot to it, or maybe you also want Windows. You can use the Infrastructure as a Service platform to deploy these operating systems. AWS is particularly famous for Infrastructure as a Service. Now, anything starting from the operating system up to the application, everything is in your control. When you talk about Platform as a Service, you are mainly dealing with the application and the data. And of course, there is governance that you have to apply to ensure whatever you're doing is in your control. So GitHub is an example of Platform as a Service, where you go and use that particular platform to build applications. Then you're talking about Software as a Service, where it is only the application that you get to use, and you can only do a certain level of configuration. Nothing is in your control except that you can manage your data within that application and do some basic configuration. That is the only thing you can perform in Software as a Service.
So let's now look at each one of them. [Video description begins] The AWS Management Console screen displays the different sections labelled: Migration & Transfer, Networking & Content Delivery, Security, Identity, & Compliance, and so on. Each section has various options listed under it. [Video description ends] First, when we talk about Amazon AWS, we need to go to the AWS console and [Video description begins] The host scrolls up and down the AWS Management Console screen. [Video description ends] from EC2, we can deploy a particular virtual machine. And accordingly, we can configure a particular type of virtual machine. Now this is what Infrastructure as a Service means, because you're setting up your infrastructure. Notice that there are a lot of options that are given for you to manage. You can do the storage configuration, you can do the network configuration, and you can do a lot of other things. You can also do analytics and machine learning.
Now all this comprises the infrastructure that you can deploy. So it actually depends on what you need, what you want to deploy, and how you want to deploy it. That is all in your control. And of course, the physical infrastructure still remains in the control of the cloud service provider. Now, going forward, let's move on to Azure, which is our Platform as a Service example. [Video description begins] The host switches to another open window labelled: All services. [Video description ends] [Video description begins] The Microsoft Azure window displays two panes, with the All services search field at the top. The left pane contains Overview and Categories such as: All, General, Compute, Networking, and so on. The right pane includes the General options labelled: All resources, Management groups, Resource groups, Recent, Subscriptions, Cost Management + Billing, and so on. Below these, the "Compute" options appear, labelled: Virtual machines, Virtual machine scale sets, App Services, and so on. [Video description ends] So here, we deploy particular applications. Now Azure can also work as Infrastructure as a Service, but it is more often used as a Platform as a Service, where you can deploy a particular application. [Video description begins] The host scrolls down to display the rest of the "Compute" section options such as: Service Fabric clusters, Batch accounts, Mesh applications, and so on. [Video description ends] If you go to App Services, you can select a particular type of application to be deployed. So let's just click on App Services now. Here you can add a particular type of application. [Video description begins] The App Services screen displays several options on top labelled: Add, Manage view, Refresh, and so on. The "Filter by name" field appears below and is followed by a table with columns such as: Name, Status, Location, and so on. Currently this table shows "No app services to display" message. [Video description ends] So for instance, if you just type Java, let's see what we come up with. So we simply click on Add to add this particular application. And we can choose the Subscription model. [Video description begins] The Web App screen displays four options on top labelled: Basics, Monitoring, Tags, Review + create. Below these is the Project Details section which contains: Subscription and Resource Group fields. This is followed by the Instance Details section which includes fields labelled: Name, Publish, and Runtime stack. [Video description ends] We also choose the Resource Group. And then we define the Instance Details.
Now when you define a particular name, it may already exist; if it does, you will have to define a new name. When you define the name, it becomes part of the complete FQDN, which is your name followed by azurewebsites.net. [Video description begins] The host scrolls down and the App Service Plan section is displayed. The host selects the drop down menu under "Runtime stack." [Video description ends] So here, you can also choose the Runtime stack. Remember, you don't have to deploy everything from scratch; you can simply select a particular type of application. Now we selected Java, so we got Linux. If we select anything else, let's say .NET, we should get Windows. Now you can select a particular region in which you want to deploy a particular application. This is what Azure is all about. It is about deploying your application, building your application right here in this platform. You don't have to set up the infrastructure on your own.
You don't have to set it up at your end. Now, if we move ahead, we have Software as a Service. [Video description begins] The host switches to another open window labelled: Home Salesforce. The screen now displays different menus on top such as: Home, Getting Started, Accounts, Contacts, and so on. Some of these menu items such as: Accounts and Contacts, have drop down options listed under each. The "Home" option is currently selected and the screen below displays the "Quarterly Performance" graph on the left and the "Assistant" details on the right. [Video description ends] This is Salesforce. Now just imagine a small company trying to set up an ERP system on their own. It is going to be very expensive, very effort-intensive, and difficult to manage. You also have to bear a lot of cost because you have to set up a complete infrastructure for the ERP. Then you have to have the hardware, for which you have to shell out a lot of dollars. It may not be possible for a small company. So what they can do is subscribe to something like Salesforce, which is an example of Software as a Service.
So here they can customize their own particular application. The data is saved in the cloud itself. You don't know which region that data is saved in. So you will have to be very cautious about the regulations or the laws that you have to abide by. [Video description begins] The host scrolls down to display more options on the left, labelled: Today's Tasks, Today's Events, and Recent Contacts. [Video description ends] So if there is a particular regulation that prohibits you or stops you from saving data in an unknown region, then you may not be able to use this particular platform. But otherwise, it works well for small, medium, or large enterprises. In fact, any sized organization can use Software as a Service. Just to recap, we looked at AWS for Infrastructure as a Service. We looked at Azure for Platform as a Service. And we looked at Salesforce as an example of Software as a Service.
Cloud and Risk
There are different concepts of security that are applied to a particular user or a component to ensure its privacy, confidentiality, and integrity. The first one is what we call identification, which means a user claims an identity to a particular system. This can be done through access control. So for instance, identification is required for a user to be authenticated and authorized in a system. When you talk about authentication, it is the verification of a user's claimed identity. Basically, this means a system recognizes you as who you claim to be. So it establishes the user's identity and ensures that users are who they say they are. Then you're talking about accountability. That would be something like: if I've deleted a file, then there should be a trail, a log, or some evidence that can confirm that I've deleted the file.
So audit trails and logs are meant for accountability. The next one we are talking about is authorization, which is the permission or the rights granted to an individual. It is the process that enables an individual's access to a resource. For instance, you may be authorized to access a particular folder in a system, or you may get an access denied message, which means you are not authorized. Then we are talking about privacy, which is the level of confidentiality given to a user in a system. For instance, you as a user have certain attributes, and you may not want others to know them. That would be the privacy of your information.
So basically, when we are talking about privacy, it not only upholds the fundamental tenet of confidentiality, but it also guarantees a level of privacy to the user. So let us now look at privacy and compliance risks. The first one is notice; it is about the collection, use, and disclosure of personally identifiable information, which we call PII. Then we have choice. It is basically a user's choice to either opt out or opt in regarding the disclosure of PII, which means personally identifiable information, to third parties. Many times, when you fill out a form on a website, they ask whether they can disclose your information, or certain parts of it, to third parties who are their partners. You can either choose to disclose that information or choose not to. Then we have access, which means the consumer can access their own personally identifiable information, or PII, and review it.
They can permit corrections to that information. Then we are talking about security. Now when we are talking about security, any organization that has collected your information by any means, whether through a handwritten form or a filled-out form, let's say in a hospital, needs to protect that information from any unauthorized disclosure. This means the information must be encrypted in a manner that makes it useless to anybody other than the authorized parties. Then we have enforcement. When we are talking about enforcement, here come the security policies, the regulations, and the laws that are meant to protect the privacy and the security of the information. Just to recap, in this video, we looked at the different concepts of security and risk, which included identification, authentication, accountability, authorization, and privacy. Then we also looked at privacy and compliance risks, which included notice, choice, access, security, and enforcement.
Service Provider Risk
Now we cannot assume that just because we have put our data in the cloud, it is going to be secure. There are going to be a lot of risks that you will end up facing. Some of these risks might materialize, some may not. When you're putting your data in the on-premise data center, you still have a lot of control over it. But in the cloud environment, you do not have that kind of control compared to the on-premise data center. Now many types of risks come into the picture, so let's see some of these risks. The first one is unauthorized access to customer and business data. Because the data is in the cloud, we don't know who has access in the background.
So at the front, we know we are controlling our own data. But in the backend, what if other customers accidentally gain access to your data, or vice versa, you gain access to their data, because of some misconfiguration in the permissions and privileges at the cloud service provider's end? Then there are security risks at the vendor, which means we don't know what kind of security has been put in place at the cloud service provider's end. We don't know their employees. We don't know whether there is an insider threat at the cloud service provider's end. And we don't know if the cloud service provider has access to the data that we have stored in the cloud environment.
So this is a big risk when we are putting our data in the cloud. Then of course, there are compliance and legal risks. This means we don't know where our data resides. In most cases, we don't know who is allowed to access that particular data, and sometimes we are not even aware of how the data is protected. So what happens if there is a breach at the cloud service provider's end and your data is stolen? That means you are no longer compliant with a specific regulation, which means your organization can be legally sued because the data has been breached.
Now you cannot put the blame on the cloud service provider and say, okay, it happened because of their mistake. You are equally responsible because you have to take care of your own data as per the compliance frameworks that you have opted for. Then there are risks related to lack of control.
This means you really don't have as much control over the data in the cloud environment as you have in the on-premise data center. When you are talking about the data in the on-premise data center, you have complete control. You can set whatever permissions you want; you can back up and restore data as you want. But in the cloud environment, that kind of control is no longer with you; it is with the cloud service provider.
When you're talking about availability risk, one thing we have to understand is that no service can be guaranteed 100% uptime. So when you rely on a cloud service provider for business-critical tasks, you are putting the viability of your business in the hands of two providers: the cloud service provider and your Internet service provider. Because if the Internet service provider does not provide proper connectivity, you are no longer able to connect to your data or applications which are in the cloud.
Now even if the cloud service provider provides the connectivity and you are able to connect to your own cloud environment, what if the applications have gone down? What if the data is no longer available because there has been a breach? So of course, data availability and application availability can be a big risk. Now let's talk about some cloud service provider risks. When you are using the cloud environment, you have to understand that it is a completely virtualized environment.
Of course, there are physical servers running in the backend, but on top of them, everything is virtualized. Now when you are using the virtualized environment, there will be some existing risks which are there in the physical environment anyway, like your on-premise data center. And then, there will be new risks which may get introduced when you're using the cloud environment that is running on the virtualized infrastructure.
So remember this: all the attacks that apply to your on-premise data centers exist in the cloud environment as well. Now even if you protect your virtual machines and your data, you have to understand that the hypervisor running on the physical server can also be at risk. If proper permissions are not granted, or if there is an incident with a virtual machine, your hypervisor may be attacked. And it might become a big risk because of the vulnerabilities that were introduced by a virtual machine. Now when you're aggregating multiple systems into virtual machines, that can also increase risk.
There could be a lack of system resources. It could also be a risk because one VM can actually take control of the other VMs: somebody breaches one virtual machine and is then able to take control of the other virtual machines. So these kinds of risks will still exist in the cloud.
So looking at some of these risks, we can identify several areas of risk to virtualized systems. One of these is complexity of configuration. Virtual machines add more layers of complexity to networks and systems. When you're talking about only physical systems, you have only a certain set of layers, but when you're putting virtual machines on top of the physical systems, then of course you're adding a hypervisor, and on top of it you're also adding the virtual machines, and there is a virtual network that is running. So of course, there is a complexity of configuration that gets introduced. The next thing is privilege escalation. A hacker may be able to escalate his or her privileges on a system by leveraging a virtual machine that uses a lower level of access rights.
And then the hacker may attack another virtual machine with a higher level of security controls through the hypervisor. Then we are talking about inactive virtual machines. Virtual machines that are not active could store data that is sensitive, and monitoring access to the data in an inactive virtual machine is virtually impossible.
But remember, it still poses a security risk, because if you lose track of that particular virtual machine, you have also lost track of the data that was stored within it. Also, tools for monitoring virtual machine systems are still not as mature as the tools for monitoring physical systems. Let's now look at segregation of duties, which is another area of risk. Virtual machines impose risk on the organization through improper definition of user access roles. This means that virtual machines can provide access to many types of components from many different directions. Proper segregation of duties may also be difficult to maintain: one person can have different types of access in this scenario, and therefore the data does become at risk. Then we are talking about poor access control.
Now a hypervisor facilitates hardware virtualization and mediates all hardware access for the running virtual machines. This creates a new attack vector into the virtual machines due to a single point of access. Therefore, the hypervisor itself can expose the trusted network through poorly designed access control systems, deficient patching, and lack of monitoring. This vulnerability also applies to virtualized databases. So now we move on to identifying the attacks. A back door can be left by the developers who have designed a particular application because they want a way into the application that bypasses all the security controls and gets them directly into it.
Now that is what developers do; what happens in the case of an attacker? Let's say an attacker has compromised your system and left a back door. This means the attacker can come back anytime, connect to your system, and remotely gain access through the back door, because it has gone undetected by the antivirus. Then there can be a spoofing attack.
Intruders use IP spoofing to convince a system that it is communicating with a known and trusted entity. When that happens, the intruder has access to the system, and the other system, assuming it is talking to a legitimate system, will interact with the intruder and share data. Then comes the replay attack, which occurs when an attacker intercepts and saves old messages and then tries to send them later, pretending to be the owner of those messages. Then come Trojan horses and malware. Now this is not something new. This has been around for many, many years, many decades. And this type of attack still impacts physical systems as well as virtual machines.
Malware is the bigger category; underneath it you have various different types. It could be Trojan horses, worms, or viruses; all of these fall into the malware category.
So what does a Trojan horse do? It hides a malicious program within a legitimate program. Once the Trojan horse is executed, it can be designed to do whatever damage to your system. So you have to be very cautious about what you are downloading onto these virtual machines in the cloud environment; it could be malware. Unless that application has been downloaded from a legitimate site or the vendor's site, you should avoid downloading anything in the cloud environment, as well as on any system in the on-premise data center. Then you have another attack, which is known as password guessing.
Now this problem typically happens when password policies are not put in place. In the absence of a password policy, an attacker can actually guess the password by conducting a dictionary attack or a brute force attack, and will probably be able to break the password and gain access to your account or the system. Now the reason you would put up a password policy is that you want the users to use a password of a certain length.
You also want these users to make the password complex enough that even if it is put through a dictionary or brute force attack, it cannot be broken in the near future, let's say within a week or ten days. Let's assume that you have defined a password policy that forces the users to define a password which is eight characters in length, and that they should also use a complex password, which means they have to use letters, numbers, special characters, and uppercase and lowercase. All of this combined can help prevent a brute force or dictionary attack, because even though attackers can still try these attacks, it would take far longer to break a password which is complex in nature.
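To see why length and complexity matter, here is a rough back-of-the-envelope sketch; the guesses-per-second rate is an arbitrary assumption for illustration, and real attack speeds vary widely.

```python
# The search space for a random password is (character-set size) ** length,
# so widening the character set and adding length both slow a brute force attack.

def years_to_brute_force(charset_size, length, guesses_per_second=1e9):
    combinations = charset_size ** length
    seconds = combinations / guesses_per_second
    return seconds / (60 * 60 * 24 * 365)

print(years_to_brute_force(26, 8))    # 8 lowercase letters: minutes at this rate
print(years_to_brute_force(94, 8))    # 8 chars from ~94 printable symbols: a couple of months
print(years_to_brute_force(94, 12))   # 12 complex characters: millions of years
```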
Then we come to man-in-the-middle attacks. This involves an attacker intercepting the communication between two parties and substituting his or her own data. Let's say there is a User A and there is a User B. The man-in-the-middle is the attacker who sits in the middle, intercepts the data, modifies it, and passes it on to User B, or vice versa.
So this is how the man-in-the-middle attack is conducted. Just to recap, we looked at the different types of risks, which are unauthorized access to customer and business data, security risks at the vendor, and compliance and legal risks. Then we moved on to looking at cloud provider risks, which are complexity of configuration, privilege escalation, and inactive virtual machines. Finally, towards the end of the video, we looked at different types of attacks, which are back door, spoofing, replay, Trojan horse, password guessing, and man-in-the-middle.
Cloud Data Center
When you talk about collaboration, this is what cloud computing is all about: multiple vendors, multiple users, and multiple companies can come together on a single platform and collaborate. Now, one of the basic examples is Office 365. It is a platform where multiple users can come together, work on a single document simultaneously, edit it, and save it, and all those changes get merged together into a single document. Let's see, to be able to achieve collaboration, what do you need? First, you need to have the right kind of corporate culture. If you don't have that kind of corporate culture, then collaboration is not possible.
Everybody needs to think from the angle of collaboration and foster collaboration within the company. Then you need to have your business processes aligned to collaboration, which means the way you work needs to incorporate collaboration, and your business priorities and decisions will need to be made accordingly. Finally, you will need to leverage the technologies that can overcome the barriers of distance and time. This means, if you have regional offices, let's say one in Europe, one somewhere in Africa, one in Asia, and you're sitting in the United States.
Now without collaboration, nobody can work; everybody would be working in isolation, and that is not how businesses run today. So how do you achieve that? Of course, you need to bring everybody onto a common platform and help them work together through collaboration. So what do we mean by SOA? SOA is service-oriented architecture, which is basically a collection of services. And when you talk about SOA, you bring together all these services onto a common platform. A service could be a business activity, or it could be an interface, or it could be message oriented. And many of these services may be put together to interact with each other. Let's see what some of the basic fundamentals of SOA are. You need to first plan for capacity, which means it is important to create a capacity plan for an SOA architecture. To accomplish this, it is necessary to set up an initial infrastructure and establish a baseline for capacity.
Moving on, you need to plan for availability, which means you need to include a business impact analysis and develop and implement a written availability plan. That is going to be important because you want to ensure system administrators adequately understand the criticality of a system and implement appropriate safeguards to protect it. Then you need to build message-level security. A lot of these services are going to be interacting with each other and sending messages among themselves, and those exchanges will need to be managed and authenticated. This means that you need to manage and authenticate message exchanges between different parties. This could be based on your session keys, or it could be done by other means.
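As a small illustration of message-level security, here is a minimal sketch using Python's standard library: the sender attaches an HMAC tag computed with a shared session key, and the receiver verifies the tag before trusting the message. The key and message shown are hypothetical; in practice the session key would be negotiated or issued by a key management service.

```python
# Message-level security with a shared session key: sign on send, verify on receive.
import hmac
import hashlib

session_key = b"hypothetical-shared-session-key"

def sign(message: bytes) -> str:
    return hmac.new(session_key, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    return hmac.compare_digest(sign(message), tag)

msg = b'{"order_id": 42, "action": "ship"}'
tag = sign(msg)
print(verify(msg, tag))                                        # True: message is authentic
print(verify(b'{"order_id": 42, "action": "cancel"}', tag))    # False: message was tampered with
```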
Then you need to have declarative, policy-based security. This means you need to have tools and techniques for use at the enterprise management level and at the service level. Both of these are going to be critical, and these tools and techniques should provide transparency for the security administrator, policy enforcement, and policy monitoring. You also need to have Security-as-a-Service, which means that security cannot run in isolation anymore. It has to be bundled with every single task you do, every kind of development you do, and every kind of infrastructure hosting you do. You need to have security from all corners; you need to think of Security-as-a-Service for every single aspect of your company. Then, you have to plan for SOA security.
Now, this means if everything is running as a service, then you need to ensure that there is an optimal level of security provided for the service-oriented architecture. Just to recap, in this video we talked about cloud data center collaboration, and we looked at the meaning of collaboration. Then we also looked at data center-based SOA, which is service-oriented architecture, and its different parts, which are planning for availability, message-level security, declarative and policy-based security, Security-as-a-Service, planning for SOA security, and planning for capacity.
Open Source Software
Whenever you're in the cloud environment, you will need to deploy an operating system and applications, which could either be commercial or open source. Now, most of the cloud service providers have a place known as the marketplace, from where you can get the applications.
[Video description begins] An AWS window displays, which includes the left and right panes. The left pane contains various sub-sections and options such as: EC2 Dashboard, Events, Tags, Instances, Instance Types, Reserved Instances, Images, and so on. The right pane displays the "Resources" section . This includes options such as: Running Instances, Dedicated Hosts, Volumes, Snapshots, Key pairs, Load balancers, and so on. The right-most area displays the: Account attributes and Explore AWS . [Video description ends]
So let's just go over to AWS and see what kind of marketplace they have. To proceed further, we will first click on Running Instances. Once we get into that, we launch an instance.
[Video description begins] The screen displays three buttons at the top of the right pane, labelled: Launch Instance, Connect, and Actions. The table below has columns such as: Name, Instance ID, Instance Type, and so on. The host selects the "Launch Instance" button. [Video description ends]
And notice that on the left panel we get options related to the marketplace and
[Video description begins] The screen displays Step 1: Choose an Amazon Machine Image (AMI). [Video description ends]
community AMIs. Now AMIs are Amazon Machine Images. In this list, you will see there are Windows-related images and Amazon Linux-related images.
[Video description begins] The host scrolls down and the screen displays different virtual machine names and there is a Select button against each name. [Video description ends]
Then if we go to AWS Marketplace, there are a lot of applications that we'll be able to find here.
Now these can be used as is. We have different sets of options, which are Infrastructure-as-a-Service related, DevOps, Business Applications, Machine Learning, IoT, and Industry-specific. So we can either use any of these, or we can also use open source images, which are our community AMIs; we can select any of these as well. Now, the AWS Marketplace gives you readily available images that you can deploy; in the community AMIs, similarly, notice that there is Ubuntu Linux,
[Video description begins] The host selects Community AMIs and the right pane displays different virtual machine names and there is a Select button against each name. [Video description ends]
there is Windows, and there is Red Hat as well. Now let's switch over to Azure. [Video description begins] The host switches to another open window labelled: Marketplace - Microsoft Azure. [Video description ends]
Now notice that in Azure there is also a marketplace.
[Video description begins] The Microsoft Azure Marketplace window has options such as: My Saved List, Service Providers, and other Categories listed in the left pane. The right pane contains different sub-sections labelled: Managed Services, AI + Machine Learning, and so on. Various options are listed within each sub-section. The host scrolls down to display more sub-sections. [Video description ends]
What we have to remember about the marketplace, for any cloud service provider, is that the images, applications, or operating systems it offers have been thoroughly checked for security, and only then are they put into the marketplace.
So it is not just anybody who can come and add a particular application to the marketplace in the cloud. There are a lot of applications available. They could be related to Managed Services, machine learning or artificial intelligence, analytics, blockchain, or compute containers. The list is endless. Now the options that you see in the left panel are basically the categories. So you can select a particular category, and you will get to the kinds of applications that are visible in the right pane. Just to recap, in this particular video we looked at the marketplaces in AWS and Azure. We looked at the various options that are available regarding the applications that we can use. And there are also open-source applications and operating systems available in the marketplaces.
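As an aside, the community AMIs we browsed earlier can also be searched programmatically rather than through the console. This is a minimal boto3 sketch; the name filter pattern is an assumption used for illustration.

```python
# A minimal sketch of searching for public (community) Ubuntu AMIs,
# similar to browsing the Community AMIs tab in the launch wizard.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-2")

images = ec2.describe_images(
    Filters=[
        {"Name": "name", "Values": ["ubuntu/images/hvm-ssd/*"]},  # hypothetical pattern
        {"Name": "is-public", "Values": ["true"]},
    ],
)["Images"]

# Show the five most recently created matches.
for image in sorted(images, key=lambda i: i["CreationDate"], reverse=True)[:5]:
    print(image["ImageId"], image["Name"])
```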
Cloud Security Challenges
So we can create an assessment here, and this assessment can be run against the behavior of your AWS resources. [Video description begins] The host opens the "Amazon Inspector" option in a new window. The screen displays different Dashboard options such as: Assessment targets, Assessment templates, and so on, on the left pane. The right pane contains the "Help me create an Assessment" link on top. Below this are sections such as: Notable findings, Assessment status, Account settings, and so on. Each section has relevant details listed below. [Video description ends] This means not only can you analyze the behavior of your AWS resources, but you can also identify potential security issues. This is done with rules: there are predefined rules that you can run to create the assessment. That is what you can do with Amazon Inspector.
The next tool is Amazon Macie, which is basically used to detect, identify, and classify your data. It is a DLP service, which means it is a data loss prevention service. [Video description begins] The host toggles to the AWS screen and opens the "Amazon Macie" option in a new window. The screen displays information on "Amazon Macie," and the region drop down menu, along with the "Get started" button and the "Getting started guide" below it. [Video description ends] But note that certain tools are available only in very specific regions. For instance, this one is available only in two regions in the United States. Not every tool will be available in every single region. So you will have to be very cautious when you are putting your data on the cloud; you may not be able to use all the services and tools. [Video description begins] The host scrolls down to display three sections, labelled: Discover, Classify, and Protect. [Video description ends] Now with Macie specifically, you can discover, classify, and protect your data. This is what you can use Macie for, and it is a good tool when you want to protect your data using a DLP, or data loss prevention, service.
Moving ahead, we have something called AWS Config, which is a great tool to [Video description begins] The host toggles to the AWS screen and selects the "AWS Config" option to open it in a new window. [Video description ends] check if there are any misconfigurations in the cloud. And if there are, how can you optimize them? How can you fix those misconfigurations? [Video description begins] The window displays information on "AWS Config," the region and Support drop down menus on top, along with the "Get started" button below. The host scrolls down to display three sections, labelled: Simple setup, Customize rules, and Continuous compliance. [Video description ends] So AWS Config is pretty easy to set up.
You can create custom rules, and you can ensure continuous compliance. Not only can you check for misconfigurations, but you can also maintain an inventory of your AWS resources and the history of configuration changes that have been made to these resources in the AWS environment. So this is what you can use AWS Config for. Moving ahead, you have a tool called Amazon GuardDuty, which is a [Video description begins] The host toggles to the AWS screen and selects the "Amazon GuardDuty" option to open it in a new window. [Video description ends] tool that allows you to correlate all the logs from the virtualization layer, and it provides you critical information from all these logs. So, you can see there are different logs that are already being analyzed [Video description begins] The screen displays information on "Amazon GuardDuty," the region and Support drop down menus on top, along with the "Get started" button and "Getting started guide" below. Three sections, labelled: Continuous, Comprehensive, and Customizable appear at the bottom of the screen. The host selects the "Get started" button. [Video description ends] at this particular point. [Video description begins] The screen displays options labelled: Findings, Settings, Lists, and so on on the left pane. The Findings option is currently selected. The right pane displays the Findings, with the Actions drop down menu on top along with the Suppress Findings button. There is a search filter field below this. A table with columns such as: Finding type, Resource, and so on is displayed at the bottom of the screen. [Video description ends]
You can capture failed logins, and you can also capture different kinds of critical events using GuardDuty. Just to recap, in this video we looked at various tools that we can use for compliance purposes in the AWS environment. We looked at Amazon Inspector, Amazon Macie, AWS Config, and Amazon GuardDuty.
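As a brief aside, the findings GuardDuty produces can also be pulled programmatically, for example to feed your own compliance reporting. This is a minimal boto3 sketch; it assumes a detector is already enabled in the region, as in the walkthrough above.

```python
# A minimal sketch of listing and printing GuardDuty findings.
import boto3

guardduty = boto3.client("guardduty", region_name="us-east-2")

# A detector must already be enabled in this region.
detector_id = guardduty.list_detectors()["DetectorIds"][0]

finding_ids = guardduty.list_findings(DetectorId=detector_id)["FindingIds"]
if finding_ids:
    findings = guardduty.get_findings(DetectorId=detector_id,
                                      FindingIds=finding_ids[:10])["Findings"]
    for finding in findings:
        print(finding["Severity"], finding["Type"], finding["Title"])
```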
Encryption and Security
Now, to be able to create a bucket, you need to first get into the S3 environment. You can do that by simply searching for S3 in the Amazon AWS portal; it will bring you to the Amazon S3 environment, and from there you can simply click on Buckets and get into it. Now there are two ways you can create a bucket: either you can click on the [Video description begins] The Amazon S3 screen displays options such as: Buckets, Batch Operations, Access analyzer for S3, and so on, in the left pane. The "Buckets" option is selected. In the right pane, on the top, there are four buttons labelled: Copy ARN, Empty, Delete, and Create bucket. On the bottom of the pane, there is a search field below which is a table with columns labelled: Name, Region, Access, and so on. [Video description ends]
orange button in the upper right-hand corner, or you can create the bucket using the Create bucket button that is located at the bottom of the right pane. So we'll just go ahead and click on this particular button. And then we will define a name, which is projectskillsoft, as the bucket name. [Video description begins] The Create bucket screen displays the General configuration fields, labelled: Bucket name and Region. This is followed by the Bucket settings for Block Public Access information at the bottom of the screen. [Video description ends] And for the Region we'll go ahead and select Asia Pacific (Mumbai). Now Block all public access is enabled by default; we'll keep that. And then we'll click on Advanced settings and we will enable Object lock. [Video description begins] The host scrolls down to display the Advanced settings section. The Cancel and Create bucket buttons are seen on the screen. The Advanced settings section has two radio buttons labelled: Disable and Enable. The host selects the Enable radio button. [Video description ends] Now when you enable Object lock, you have to confirm by typing in the word enable. [Video description begins] A pop up window displays the Enable Object Lock information. The host will now need to confirm the action by typing enable in the field provided at the bottom, along with Cancel or Confirm buttons. The host selects "Confirm" and then selects "Create bucket" from the main Amazon S3 screen. [Video description ends] Notice that now the bucket is created.
So we can simply click on this bucket, which is projectskillsoft, and get inside it. Now we are on the bucket-specific page where you can set a lot of properties, so [Video description begins] The projectskillsoft screen displays five tabs at the top of the screen, labelled: Overview, Properties, Permissions, Management, and Access points. Below these are different buttons such as: Upload, Create folder, Download, and so on. The region is displayed to the right of the screen. The bottom of the screen shows a message that the bucket is empty, and has three sections namely: Upload an object, Set object properties, and Set object permissions. [Video description ends] we click on the Properties tab. Notice that you can configure Versioning, Server access logging, Static website hosting, and Object-level logging, and that Default encryption is None. We select AES-256, and we can also view the bucket policy for this encryption; it says block all public access. We just go back by clicking on the Properties tab, again click on Default encryption, select AES-256, and click Save. Then when we click on Amazon S3 again, we come back to the Buckets page. Notice that this bucket is created in the Asia Pacific region and is not publicly accessible.
And then we go back, and again under Properties we'll notice that AES-256 encryption is enabled for this particular bucket. So just to recap, in this particular video, we created a bucket and then set its properties, which basically means applying encryption to this particular bucket. Now what will happen is that any object or resource that we add within this bucket will by default inherit the encryption that is applied at the bucket level.
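For reference, the same configuration can be applied with a few API calls. This is a minimal boto3 sketch rather than the exact console flow; the bucket name is hypothetical and would need to be globally unique.

```python
# Create a bucket in Asia Pacific (Mumbai) with object lock enabled,
# block public access, and apply AES-256 default encryption at the bucket level.
import boto3

s3 = boto3.client("s3", region_name="ap-south-1")
bucket = "projectskillsoft-demo"  # hypothetical, must be globally unique

s3.create_bucket(
    Bucket=bucket,
    CreateBucketConfiguration={"LocationConstraint": "ap-south-1"},
    ObjectLockEnabledForBucket=True,
)

s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True, "IgnorePublicAcls": True,
        "BlockPublicPolicy": True, "RestrictPublicBuckets": True,
    },
)

# Objects added to the bucket will inherit this default encryption.
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}],
    },
)
```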
BCDR
To be able to do this, we are going to configure something called a load balancer in between two servers. [Video description begins] The screen displays the AWS EC2 Management Console web page. The navigation pane has various sections, such as INSTANCES and LOAD BALANCING. The INSTANCES section has multiple options, including Instances. The Instances option is selected, and the Instances page displays on the right-hand side. At the top left of the Instances page are three buttons: Launch Instance, Connect, and Actions. The Launch Instance button is selected. A table of instances displays below. It has multiple columns, such as Name and Availability Zone. [Video description ends] So let's go ahead and do that. We are on the AWS console. Here we have two servers: there is Server01, which is configured in the us-east-2a Availability Zone. [Video description begins] He selects the second instance named Server01. Its Availability Zone is us-east-2a. [Video description ends] Then there is another server, Server02, which is configured in the us-east-2b Availability Zone.
[Video description begins] He selects the third instance named Server02. [Video description ends] Now, what happens if either one of the servers goes down? The server in the other Availability Zone takes over. Let's go ahead and see what happens. To be able to do that, we are going to connect to Server01. And to be able to connect to Server01, we need to first get our password. [Video description begins] He selects Server01 again and clicks the Connect button. The "Connect to your instance" pop-up box appears with connection-related information and fields, including Password. Next to the Password field is a "Get Password" button. Towards the bottom-right corner is a button labeled Close. [Video description ends] So we click on Get Password, and we have to provide our encryption key. [Video description begins] The pop-up box now displays "Get Password" related fields, such as "Key Pair Path." Next to "Key Pair Path" is a Browse button. An empty field for password displays below, followed by a "Decrypt Password" button. [Video description ends] [Video description begins] He clicks the Browse button and selects the key from the Downloads folder. [Video description ends]
We select that, and once we do, we have to decrypt the password and [Video description begins] The key contents display in the password field. He clicks the "Decrypt Password" button. The pop-up box now displays the "Connect to your instance" information. The decrypted password displays next to the Password field. [Video description ends] then we have to copy that particular password. [Video description begins] He clicks the "Copy to clipboard" icon at the end of the password to copy it. [Video description ends] Now, beyond this point, we need to connect to this particular server. [Video description begins] He clicks the Close button to close the "Connect to your instance" pop-up box. [Video description ends]
So we connect to that server, and once we are prompted, [Video description begins] The "Connecting to" pop-up message appears on the screen. [Video description ends] we need to provide the password which we had copied. Once we do that, we are connected to this particular server. [Video description begins] "Your connection may not be secure" warning message appears with three buttons, including Continue. He clicks the Continue button. A pop-up box titled "Enter Your User account" appears. It has the Username as Administrator and an empty Password field. At the bottom right are the Cancel and Continue buttons. [Video description ends] [Video description begins] He pastes the copied password and clicks the Continue button. [Video description ends] And once we get into this particular server, we will see that the Internet Information Services window is already open and there is a default website running.
[Video description begins] The screen displays the Internet Information Services (IIS) Manager window. He points to the Default Web Site in the navigation pane. [Video description ends] Now, we have configured the other server as well with the same website. So we are going to simply browse this particular website, which is the default homepage, and it is marked with the number one. [Video description begins] A browser window displays on the screen with the following URL: http://localhost/. It displays a page with the heading "Internet Information Services," followed by the digit 1. [Video description ends] For the other server, we have marked the same page as number two. [Video description begins] The control is back to the AWS EC2 Management Console. [Video description ends] Now that we are done with both these servers and have verified that they are marked as one and two, we can come back, as discussed earlier. [Video description begins] He deselects Server01 and then points to both Server01 and Server02. [Video description ends] Both servers are now configured in different Availability Zones. [Video description begins] He selects the Load Balancers option under LOAD BALANCING in the navigation pane. The Load Balancers page displays on the right. It has two buttons on the top-left side: Create Load Balancer and Actions. The "Create Load Balancer" button is selected. A table with multiple columns, such as Name, State, and Availability Zones, displays below. It has one load balancer named LB01, and its State is active. LB01 is selected. [Video description ends] We can see that we have already configured a load balancer, and we simply select it.
Now, if we look at it, it is already marked as active, and if we scroll down a bit towards the right side, we'll see that it is configured with two Availability Zones. [Video description begins] He points to the two Availability Zones: us-east-2a and us-east-2b. Below the table are five tabs, including Description and Monitoring. Under Description are various fields and their values, such as Name, ARN, and DNS name. [Video description ends] So we copy the FQDN, which is the DNS name of the load balancer. [Video description begins] He clicks the "Copy to clipboard" icon at the end of the DNS name to copy it. [Video description ends] Now let's go into the browser window. Notice that the webpage for Server01 is displayed as number one. When we paste this DNS name, notice that the webpage loaded in the web browser is now marked as number two, which means the load balancer is doing round-robin. [Video description begins] He pastes the copied DNS name in the address bar and presses Enter. He then points to the digit 2 at the end of the heading. [Video description ends] So the first request went to Server01, and the second request went to Server02. This is how the entire thing is configured. We can also monitor the incoming requests on the Monitoring tab. As requests come in, they are sent to whichever server is available, [Video description begins] The control is back to the AWS EC2 Management Console. He clicks the Monitoring tab. Various CloudWatch metrics display under the Monitoring tab. [Video description ends] and in case Server01 is getting overloaded or becomes unavailable, the requests will be diverted to the other server, which is Server02. [Video description begins] He clicks the Description tab. [Video description ends]
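For reference, a similar setup can be created programmatically. The following is a minimal sketch using boto3, assuming an Application Load Balancer; the subnet, VPC, and instance IDs shown are hypothetical placeholders rather than values from this demo.

import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-2")

# Hypothetical IDs; the two subnets stand in for us-east-2a and us-east-2b,
# and the two instances stand in for Server01 and Server02.
subnets = ["subnet-aaaa1111", "subnet-bbbb2222"]
instances = ["i-0aaaaaaaaaaaaaaa1", "i-0bbbbbbbbbbbbbbb2"]

# Create the load balancer across both Availability Zones.
lb = elbv2.create_load_balancer(
    Name="LB01", Subnets=subnets, Scheme="internet-facing", Type="application"
)["LoadBalancers"][0]

# Create a target group and register both web servers in it.
tg = elbv2.create_target_group(
    Name="web-servers", Protocol="HTTP", Port=80,
    VpcId="vpc-0123456789abcdef0", TargetType="instance",
)["TargetGroups"][0]
elbv2.register_targets(
    TargetGroupArn=tg["TargetGroupArn"],
    Targets=[{"Id": i} for i in instances],
)

# Forward HTTP traffic on port 80 to the target group.
elbv2.create_listener(
    LoadBalancerArn=lb["LoadBalancerArn"], Protocol="HTTP", Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
)

# This DNS name is what we paste into the browser to see requests alternate
# between the two servers.
print(lb["DNSName"])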
Just to recap: in this video, we saw how to configure load balancing between two servers. We saw what happens when one server is already serving requests and how a request reaches the second server using the round-robin format.
BCDR Plan
When we're talking about business continuity, it is a must-have capability for your business. It means that if your business goes down due to any kind of disruption, there is something in place that can keep it running, which could be, for example, a replica site that you have set up. Now, this is required whether you have an on-premises data center or infrastructure that you have configured in the cloud. You need to have business continuity in place to ensure that your business does not come to a halt in case an IT-related disaster strikes. Now, why would that be required?
Just imagine you have an on-premises data center. Now what happens if there is a fire or there are floods in your area? Your data center becomes non-functional, and your business stops because there is no IT backbone left to serve it. You don't want that to happen. So how do you keep your infrastructure, your data center, running? You have to have a business continuity plan in place. The business continuity plan will help you configure another replica; it will help you configure the infrastructure that can be used in case a disaster strikes, whether it hits your on-premises data center or a data center in the cloud. You need to have something as a backup that will continue to serve your business and meet its objectives.
Now, when you talk about an on-premises data center, there is of course a lot of money that you will have to spend to set up another replica, or to configure disaster recovery or business continuity so that it becomes active when a disaster takes place. Here, we are focusing on the cloud. In this case, AWS offers a disaster recovery service that helps with the business continuity of your data center.
[Video description begins] The AWS window displays the different options on top such as: Products, Solutions, Pricing, and so on. The screen below shows the Disaster Recovery information along with two buttons labelled: Get started with CloudEndure Disaster Recovery and Contact us. [Video description ends]
Now, you could also configure your on-premises data center with AWS disaster recovery services.
[Video description begins] The host scrolls down to display the Benefits of using AWS and CloudEndure for Disaster Recovery information which includes: TCO reduction, Non-disruptive, Any application from any source, and Minimal RTO and RPO subsections. [Video description ends]
Now, what you get out of this is a reduction in total cost of ownership, the service is non-disruptive, and in fact you can protect any application from any source in this environment. And of course, you get minimal RTO (recovery time objective) and RPO (recovery point objective). Now let's switch over to Azure, which also offers the same capability.
[Video description begins] The host switches to another open window. A Microsoft Azure best practices screen displays information regarding An example of paired regions for disaster recovery. [Video description ends]
Now, what they call this is the primary and the secondary region.
[Video description begins] The host scrolls down and the Primary Region and Secondary Region diagrams are displayed. Each diagram includes activities such as: Compute, Storage, Database, Management Services, and so on. [Video description ends]
The primary region has three levels of services, and so does the secondary region. Now, if any of the services fails at any of these levels, you could have a cross-functional service running in the secondary region, which will take over. Of course, there are recovery alerts that will be sent to the secondary region, which help the services continue. Why is this essential? Because you do not want the primary region to fail all of a sudden and your business to come to a halt. That can be taken care of by the secondary region, in case there is an alert from the primary region that a service has failed.
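To make the failover decision concrete, here is a minimal, hypothetical sketch of a health-check loop that prefers the primary region and falls back to the secondary one. The endpoint URLs are placeholders, and this only illustrates the idea; it is not how Azure or AWS implement region failover.

import time
import urllib.request

# Hypothetical health-check endpoints standing in for the two regions.
PRIMARY = "http://primary.example.com/health"
SECONDARY = "http://secondary.example.com/health"

def is_healthy(url: str) -> bool:
    # Treat any HTTP 200 answer within 5 seconds as healthy.
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except Exception:
        return False

def active_endpoint() -> str:
    # Prefer the primary region; fall back to the secondary when it fails.
    return PRIMARY if is_healthy(PRIMARY) else SECONDARY

# Re-evaluate once a minute, roughly playing the role of a recovery alert.
while True:
    print("Routing traffic to:", active_endpoint())
    time.sleep(60)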
Just to recap: in this video, we talked about business continuity and how it helps when you set it up within your own environment and also in the cloud. Then we looked at AWS and its disaster recovery services. Finally, we looked at Azure and its primary and secondary regions, which communicate with each other on a regular basis so that, in case the primary region goes down, the secondary region takes over.