Cloud Computing Basics

These are my notes on Amazon Web Services.

This is my favorite cloud computing book on Amazon; if you are interested in learning cloud computing, I highly recommend it.

 

Getting Started With Cloud Computing

Cloud terminology is everywhere these days, and it can mean many different things. The term cloud can be used in a generic way or to refer to a specific application. Cloud computing is the purchase of services that include varying degrees of automation and support, depending on the needs of the customer.

A cloud application is one that does not reside or run on a user’s device. It is accessed through a network. Cloud application portability is the ability to migrate a cloud application from one cloud to another.

Cloud computing is a network-accessible platform that delivers services from a large and scalable pool of systems. Cloud data portability is the ability to move data between cloud providers. The cloud deployment model is how cloud computing is delivered through a set of configurations and features of virtual resources.

The cloud deployment models are public, private, and hybrid. Data portability is the ability to move data from one system to another without having to re-enter it. 

Infrastructure as a service is a cloud service category where infrastructure level services are provided by a cloud service provider. Measured services are delivered and billed for in a metered way.

Multitenancy is having multiple customers and applications running within the same environment but in a way that they are isolated from each other and not visible to each other but share the same resources.

On-demand self service is where a customer can provision services in an automatic manner with minimal involvement from the provider. Platform as a service is a cloud service category where platform services are provided to the cloud customer and the cloud provider is responsible for the system up to the level of the actual application. Resource pooling is the aggregation of resources allocated to cloud customers by the cloud provider.

Reversibility is the ability of a cloud customer to remove all data and applications from a cloud provider and completely remove all data from their environment. Software as a service is a cloud service category in which a full application is provided to the cloud customer and the cloud service provider maintains responsibility for the entire infrastructure, platform, and application. A tenant is one or more cloud customers sharing access to a pool of resources.

Cloud Roles

A cloud auditor is someone that is specifically responsible for conducting audits of cloud systems and cloud applications. A cloud service broker is a partner that serves as an intermediary between a cloud service customer and cloud service provider. A cloud service customer is one that holds a business relationship for services with a cloud service provider. 

A cloud service partner is one that holds a relationship with either a cloud service provider or a cloud service customer to assist with cloud services and their delivery. A cloud service provider is one that offers cloud services to cloud service customers. A cloud service user is one that interacts with and consumes services under the account of a cloud service customer. 

Cloud Computing Characteristics

Cloud computing has a few attributes that are common to every system. The following characteristics are required for an environment to be considered a cloud.

  • On-demand self service
  • Broad network access
  • Resource pooling
  • Rapid elasticity
  • Metered service
  • Multitenancy

On-demand self service is where cloud services can be put into use by the customer through an automation system. Resources can be requested and provisioned as needed, and the customer should be able to do all of this without interacting with another person. Of course, the customer will need the technical skills to do these tasks. This is usually done through a web portal because that is the easiest interface to provide. 
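
To make the idea concrete, here is a minimal sketch of self-service provisioning through automation rather than the web portal, using Python with the boto3 library (the AMI ID is a placeholder and valid AWS credentials are assumed):

    import boto3

    # Talk to the EC2 API directly; no human interaction with the provider is needed.
    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Launch a single small virtual machine. The image ID below is a placeholder.
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # hypothetical AMI ID
        InstanceType="t2.micro",
        MinCount=1,
        MaxCount=1,
    )
    print(response["Instances"][0]["InstanceId"])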

Broad network access is when all cloud services are accessed over a network. Services can be accessed through thick or thin clients. You can use mobile devices, laptops, or desktops. 

Resource pooling is one of the most important concepts in cloud computing. In systems like these, you always have a mix of applications being used by different customers. Resources are dynamically allocated depending on the customer’s needs. Customers can request additional resources and pay for them as needed.

Some organizations have computing needs that vary through the year. They can increase or decrease their resource allocation with a few clicks. This is a great benefit and saves organizations a ton of money. 

Rapid elasticity is when new resources can be rapidly expanded at any time. It is also usually done through a web portal. 

Metered service is the type of service where resources are logged for billing and reporting. Services that can be included with a metered service include storage, networking, memory, and processing.

Multitenancy is where multiple customers share the same underlying resources while remaining isolated from one another. The separation is usually virtual or logical rather than physical, depending on the resource. 

Virtualization

Virtualization is a key component of cloud computing. It models a traditional data center, which consists of racks full of servers. These servers and their software allow many different customers to share the environment and enable resource pooling. Virtualization allows providers to allocate resources to customers virtually or logically when they need them, instead of physically adding a new data drive.

Different virtualization environments are what make this happen. Many companies offer virtualization products, and virtualization is the underlying technology of cloud computing.   

Cloud Categories

There are three main types of cloud service categories. They are:

  • Infrastructure as a service
  • Platform as a service
  • Software as a service

Infrastructure as a service is the base service. It allows the most control over the environment. Basically, you handle just about everything. You can customize almost everything in this model. You just have to know how to do it. You can scale this very quickly to whatever limits you can afford.

You do not have to own any physical hardware. You will have high availability and easily be able to meet any security requirements. Pricing is controlled by metered usage so you can use as much or as little as you want. There is usually a choice of hardware if you prefer it. 

Platform as a service is the next model. It offers slightly less control so the customer can focus on their business instead of having to worry about hardware and other configurations. This model will auto-scale as you need it and provision resources. The platform still allows a lot of control and customization. You can choose whatever software and operating system that benefits you the most.

You can easily upgrade any of the software yourself. This allows a lot of cost savings for your environment. Another advantage is licensing. The cloud provider is responsible for this. This takes a massive burden off the customer, as licensing can become quite the headache if you are using software that requires licenses. 

Software as a service is the last model we will talk about. This model allows the least control but the customer can just focus on the application itself that they need access to. They do not have to worry about anything else and do not need a system administrator to manage all of the other functions as they do in PaaS and IaaS models.

The customer can typically do everything themselves in SaaS models. SaaS is the most popular and widely known service category, and we use SaaS applications every day; examples are Gmail and Drive. This model is generally the cheapest way to use an application. You will only have support costs if you request support, so you are mainly paying for the licensing costs of the software. Customers do not need a system administrator or physical access to any hardware. Licensing will be the main cost, and you can choose what you need. 

Cloud Models

There are three main types of cloud deployment models. These are public, private, and hybrid models. 

A public cloud is one that provides services to the general public. Examples of this are AWS, Digital Ocean, and Rackspace. Anyone can pay for services and use them. Setup is very easy and inexpensive. The provider handles all of the hardware and virtualization needed to provide resources. Customers pay for only what they need and they can have as many resources as they are willing to pay for. 

A private cloud is different in that it is usually run by an organization and restricted to its own members. It is owned and managed by this single organization. The organization has complete control over this private cloud. This includes all hardware and software.  

A hybrid cloud is a mix of these together. This is done sometimes to meet the needs of the organization. There can be any combination of the previous models put together. You can manage certain parts by yourself and contract other parts of the model to someone else.

Anything critical can be maintained locally while non-critical parts can be outsourced. This type of model is a good way to handle disaster recovery. Since you can split your operations into multiple physical areas, recovering from a hurricane, for instance, is much easier. As in the other systems, scalability is always there as the organization is in complete control of it.  

Universal Concepts

There are several concepts that are common to most cloud models. These include interoperability, scalability, security, privacy, auditability, governance, maintenance, and reversibility.

Interoperability is the ease with which one can move or reuse components of an application. The underlying platform, operating system, location, API structure, or cloud provider should not be an impediment to moving services easily and efficiently to an alternative solution. An organization that has a high degree of interoperability with its systems is not bound to one cloud provider and can easily move to another if the level of service or price is not suitable.

Elasticity and scalability are similar concepts in terms of changing the resources allocated to a system or application to meet current demands. The difference between the two concepts relates to the manner in which the level of resources is altered. With scalability, the allocated resources are changed statically to meet anticipated demands or new deployments in services. Elasticity adds the ability for the dynamic modification of resources to meet demands as they evolve.

The concepts of performance, availability, and resiliency should be considered in any cloud environment due to the nature of cloud infrastructures and models. Given the size and scale of most cloud implementations, strong performance should be a given in a cloud. Resiliency and high availability are also important in a cloud environment. If any of these areas fall short, customers will not stay long with a cloud provider and will quickly move to other providers.

The easiest way to remember the difference between availability and resiliency is the extent to which a system is affected by outages. Availability pertains to the overall status of whether a system is up or down, whereas resiliency pertains to the ability of a system to continue to function when some aspect experiences an outage.

Portability is the key feature that allows systems to easily and seamlessly move between different cloud providers. An organization that has its systems optimized for portability opens up enormous flexibility to move between different providers and hosting models, and this can be leveraged in a variety of ways. From a cost perspective, portability allows an organization to continually shop for cloud hosting services.

Whereas a contract will spell out the general terms and costs for services, the SLA, or service level agreement, is where the real meat of the business relationship and concrete requirements come into play. The SLA spells out in clear terms the minimum requirements for uptime, availability, processes, customer service and support, security controls and requirements, auditing and reporting, and potentially many other areas that will define the business relationship and its success. 

Regulatory requirements are those imposed upon a business and its operations either by law, regulation, policy, or standards and guidelines. These requirements are specific to the locality in which the company or application is based or specific to the nature of the data and transactions conducted. These requirements can carry financial, legal, or even criminal penalties for failure to comply. Sanctions and penalties can apply to the company itself or even in some cases the individuals working for the company and on its behalf, depending on the locality and the nature of the violation. 

Security is always a paramount concern for any system or application. Within a cloud environment, there can be a lot of apprehension about using a newer technology, and many will be uncomfortable with the idea of having corporate and sensitive data not under the direct control of internal IT staff and hardware housed in proprietary data centers. Depending on company policy, different applications and systems will have their own specific security requirements and controls. Within a cloud environment, this becomes of particular interest because many customers are tenants within the same framework, and the cloud provider needs to assure each customer that their controls are being met, and done so in a way that the cloud provider can support, given varying requirements.

Privacy in the cloud environment requires particular care due to the large number of regulatory and legal requirements that can differ greatly by use and location. Adding even more complexity is the fact that laws and regulations may differ based on where the data is stored and where the data is exposed and consumed.

Cloud providers will very often have in place mechanisms to keep systems housed in geographic locations based on a customer's requirements and regulations, but it is incumbent on the cloud security professional to verify and ensure that these mechanisms are functioning properly.

Most leading cloud providers supply their customers with a good deal of auditing, including reports and evidence that show user activity, compliance with controls and regulations, the systems and processes that run, and an explanation of what they all do, as well as information, data access, and modification records. Auditability of a cloud environment is an area where the cloud security professional needs to pay particular attention because the customer does not have full control over the environment like they would in a proprietary and traditional data center model.

Governance at its core involves assigning jobs, tasks, roles, and responsibilities and ensuring they are satisfactorily performed. Whether in a traditional data center or a cloud model, governance is mostly the same and undertaken by the same approach, with a bit of added complexity in a cloud environment due to data protection requirements and the role of the cloud provider. Although the cloud environment adds complexity to governance and oversight, it also brings some benefits as well.

With the different types of cloud services, it is important for the contract and SLA to clearly spell out maintenance responsibilities. With SaaS, all upgrades, patching, and maintenance are handled by the cloud provider, whereas with PaaS and certainly IaaS, some duties belong to the cloud customer while the rest are retained by the cloud provider. Outlining maintenance and testing practices and timelines within the SLA is particularly important for applications that may not always work correctly because of new versions or changes to the underlying system.

Reversibility is the ability of a cloud customer to take all their systems and data out of a cloud provider and have assurances from the cloud provider that all of the data has been securely and completely removed within an agreed-upon timeline. In most cases, this will be done by the cloud customer by first retrieving all of their data and processes from the cloud provider, serving notice that all active and available files and systems should be deleted, and then removing all traces from long-term storage archives or storage at an agreed-upon point in time.

Security and Compliance in AWS

Security is a primary focus for AWS across all services and one of the most prominent benefits of using a cloud provider. AWS can implement extremely robust security through economies of scale that far exceed what most organizations could achieve on their own, given the finances and experience required.

Shared Responsibility Model

Any large and complex IT system is built upon multiple layers of services and components, and a cloud is certainly a prime example of that model. With any cloud offering, the underlying infrastructure is the sole responsibility of the cloud provider. This includes everything from the physical building and facilities to the power infrastructure and redundancy, physical security, and network cabling and hardware components. This also includes the underlying computing infrastructure such as hypervisors, CPU, memory, and storage.

Make sure to understand the shared responsibilities model and what the customer is responsible for in each service category.

With Infrastructure as a Service, the customer is responsible for everything beginning with the operating system. The cloud provider is responsible for the underlying host infrastructure into which the customer can deploy virtual services, whether they are virtual machines or virtual networking components.

With Platform as a Service, the cloud provider is responsible for an entire hosting platform, including all software, libraries, and middleware that the customer needs. The customer then deploys their application code and data into the environment. This is most heavily used for DevOps, where developers can quickly obtain fully featured hosting environments and only need to deploy their code and any needed data to test and develop with, and do not need to worry about any underlying operating system or middleware issues.

With Software as a Service, the cloud provider is responsible for everything except specific customer or user data. SaaS is a fully featured application into which a customer only needs to load users or do minimal configuration, along with possibly importing data about customers or services.

Managed vs. Unmanaged

A major question for any customer is whether to use managed or unmanaged resources within a cloud environment. While both can provide what is needed to meet the business needs of the customer, there are pros and cons of each approach.

Managed resources are those where the cloud provider is responsible for the installation, patching, maintenance, and security of a resource. Conversely, unmanaged resources are those hosted within a cloud environment but where the customer bears responsibility for those functions. Managed resources will typically cost more than unmanaged resources. 

Regulatory Compliance

If your application utilizes or stores any type of sensitive information, there will be specific regulatory requirements that you will need to comply with. This type of data can range from credit card and financial information to health records, academic records, or government systems and data.

To assist with meeting regulatory requirements, AWS offers their Artifact service, which can be accessed directly from the AWS management console. As part of the Artifact service, AWS undergoes certification reviews and audits by various governing bodies. An additional feature that AWS offers through Artifact is enabling a customer to review and accept agreements for their individual account and what they need to maintain compliance with, along with terminating the agreement if no longer needed. 

Data Security

Several toolsets and technologies are commonly used as data security strategies. These are: encryption, key management, masking, obfuscation, anonymization, and tokenization.

With the concepts of multitenancy and resource pooling being central to any cloud environment, the use of encryption to protect data is essential and required, as the typical protections of physical separation and segregation found in a traditional data center model are not available or applicable to a cloud environment. The architecture of an encryption system has three basic components: the data itself, the encryption engine that handles all the encryption activities, and the encryption keys used in the actual encryption and use of the data.
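
A minimal sketch of those three components in Python, using the third-party cryptography package as the encryption engine (the sample data is arbitrary):

    from cryptography.fernet import Fernet  # requires the cryptography package

    key = Fernet.generate_key()    # the encryption key
    engine = Fernet(key)           # the encryption engine
    data = b"customer record 42"   # the data itself

    ciphertext = engine.encrypt(data)
    assert engine.decrypt(ciphertext) == data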

Data in transit is the state of data while it is actually moving across networks or between systems, whether traversing systems within the cloud or going between the client and the actual application. Whether the data is being transmitted between systems within the cloud or going out to a user's client, data in transit is when data is most vulnerable to exposure or unauthorized capture. Within a cloud hosting model, the transmission between systems is even more important than with a traditional data center due to multitenancy; the other systems within the same cloud are potential security risks and vulnerable points where data capture could happen successfully. 

In order to maintain portability and interoperability, the cloud security professional should make the processes for the encryption of data in transit vendor neutral in regard to the capabilities or limitations of a specific cloud provider. The most common method for data in transit encryption is to use the well known SSL and TLS technologies under HTTPS. With many modern applications utilizing web services as the framework for communications, this has become the prevailing method, which is the same method used by clients and browsers to communicate with servers over the internet. 
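
As a small illustration of protecting data in transit with TLS under HTTPS, the sketch below uses only the Python standard library; example.com is just a stand-in host:

    import ssl
    import http.client

    # The default context enforces certificate validation and modern protocol versions.
    context = ssl.create_default_context()

    conn = http.client.HTTPSConnection("example.com", context=context)
    conn.request("GET", "/")
    response = conn.getresponse()
    print(response.status)
    conn.close()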

Data at Rest

Data at rest refers to information stored on a system or device. This data can be stored in many different forms to fit within this category.

Data residing on a system is potentially exposed and vulnerable far longer than short transmission and transaction operations would be, so special care is needed to ensure its protection from unauthorized access. 

While encrypting data is central to the confidentiality of any system, the availability and performance of data are equally important. It is important to ensure that encryption methods provide high levels of security and protection and do so in a manner that facilitates high performance and system speed.

With portability and vendor lock-in considerations, it is important to ensure that encryption systems do not effectively cause a system to be bound to a proprietary cloud offering. Data at rest encryption and security are very important in a cloud environment due to the reliance on virtual machines. In a traditional data center, you can have systems that are powered off and inaccessible. In a virtual environment, when a system is not powered on or started, the disk and memory are gone, but the underlying image still exists within storage and carries a possibility of compromise or corruption, especially if a developer has stored application or customer data on the VM image. 
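
For data at rest in object storage, one common approach is to request server-side encryption at upload time. A minimal boto3 sketch, with a hypothetical bucket and key name, asking S3 to encrypt the object with a KMS-managed key:

    import boto3

    s3 = boto3.client("s3")

    # Bucket and object names are placeholders; the object is encrypted at rest by S3.
    s3.put_object(
        Bucket="example-archive-bucket",
        Key="backups/app-backup-001",
        Body=b"backup contents",
        ServerSideEncryption="aws:kms",
    )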

Encryption with Data States

Encryption is used in various manners and through different technology approaches, depending on the state of the data at the time. With data in use, the data is being actively accessed and processed. Because this process is the most removed from and independent of the host system, technologies such as data rights management and information rights management are the most capable and mature approaches that can be taken at this time.

Challenges With Encryption

There are a myriad of challenges with implementing encryption. Some are applicable no matter where the data is housed, and others are specific to cloud environments. A central challenge to encryption implementations is the dependence on key sets to handle the actual encryption and decryption processes. Without the proper security of encryption keys, the entire encryption scheme could be rendered vulnerable and insecure. With any software based encryption scheme, core computing components such as processor and memory are vital, and within a cloud environment specifically, these components are shared across all of the hosted customers.

Encryption Implementations

The actual implementation of encryption and how it is applied will depend largely on the type of storage being used within the cloud environment. With database storage systems, two layers of encryption are typically applied and available. First, database systems will reside on volume storage systems, resembling a typical file system of a server model. The actual database files can be protected through encryption methods at the file system level, which also serves to protect the data at rest. Second, encryption can be applied within the database itself, for example to specific tables or columns that hold sensitive data.

For object storage, apart from the encryption at the actual file level, which is handled by the cloud provider, encryption can be used within the application itself. The most prevalent means for this is through IRM technologies or via encryption within the application itself. With IRM, encryption can be applied to the objects to control their usage after they have left the system. With application-level encryption, the application effectively acts as a proxy between the user and the object storage and ensures encryption during the transaction. However, once the object has left the application framework, no protection is provided. 

Lastly, with volume storage, many of the typical encryption systems used on a traditional server model can be employed within a cloud framework. This encryption is most useful with data at rest scenarios. Due to the application itself being able to read the encrypted data on the volume, any compromise of the application will render the file system encryption ineffective when it comes to protecting the data.

Hashing

Hashing involves taking data of arbitrary type, length, or size and using a function to map a value that is of a fixed size. Hashing can be applied to virtually any type of data object, including text strings, documents, images, binary data, and even virtual machine images. 

The main value of hashing is to quickly verify the integrity of data objects. Within a cloud environment this can offer great value with virtual machine images and the potentially large number of data locations within a dispersed environment. As many copies of a file are potentially stored in many different locations, hashing can be used to very quickly verify that the files are of identical composition and that their integrity has not been compromised.  

A large variety of hashing functions are commonly used and supported. The vast majority of users will have no problem using any of the freely and widely available options, which will suit their needs for data integrity and comparison without issue. 
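
A short sketch of using hashing to confirm that two copies of a large file, such as a VM image, are identical, using Python's standard hashlib (the file names are placeholders):

    import hashlib

    def sha256_of_file(path, chunk_size=1024 * 1024):
        """Stream a file through SHA-256 so even very large images fit in memory."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # The copies are identical only if their digests match.
    print(sha256_of_file("image-copy-a.img") == sha256_of_file("image-copy-b.img"))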

Key Management

Key management is the safeguarding of encryption keys and the access to them. Within a cloud environment, key management is an essential and highly important task, while also being very complex. One of the most important security considerations with key management is access to the keys and how they are stored. Access to keys in any environment is critical to security. In a cloud environment, where you have multitenancy and cloud provider personnel with broad administrative access to systems, there are more considerations than in a traditional data center concerning segregation and access control between the provider's staff and the customer's keys. 

No matter what hosting model is used by an organization, a few principles of key management are important. Key management should always be performed only on trusted systems and by trusted processes, whether in a traditional data center or in a cloud environment. In a cloud environment, careful consideration must be given to the level of trust that can be established within the environment of the cloud provider and whether that will meet management and regulatory requirements. If the externally hosted key management system becomes unavailable, for example because of an inadvertent firewall or ACL change, the entire system will be inaccessible.

Key storage can be implemented in a cloud environment within the same virtual machine as the encryption service or engine. Internal storage is the simplest implementation, as it keeps the entire process together.
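
A common pattern that keeps the master key on a trusted external system is envelope encryption: the key management service issues a data key, the data is encrypted locally, and only the wrapped copy of the key is stored with the ciphertext. A minimal sketch with boto3 and AWS KMS, where the key alias is hypothetical and the cryptography package provides the local encryption engine:

    import base64
    import boto3
    from cryptography.fernet import Fernet

    kms = boto3.client("kms")

    # "alias/app-data-key" is a hypothetical KMS key alias.
    data_key = kms.generate_data_key(KeyId="alias/app-data-key", KeySpec="AES_256")

    # Encrypt locally with the plaintext key; store only the wrapped key.
    engine = Fernet(base64.urlsafe_b64encode(data_key["Plaintext"]))
    ciphertext = engine.encrypt(b"sensitive payload")
    wrapped_key = data_key["CiphertextBlob"]  # safe to store next to the ciphertext

    # Later: ask KMS to unwrap the key, then decrypt locally.
    plaintext_key = kms.decrypt(CiphertextBlob=wrapped_key)["Plaintext"]
    engine = Fernet(base64.urlsafe_b64encode(plaintext_key))
    assert engine.decrypt(ciphertext) == b"sensitive payload"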

Tokenization

Tokenization is the practice of utilizing a random and opaque token value in data to replace what otherwise would be a sensitive or protected data object. The token value is usually generated by the application with a means to map it back to the actual real value, and then the token value is placed in the data set with the same formatting and requirements as the actual real value, so that the application can continue to function without modifications or code changes. Tokenization represents a way for an organization to remove sensitive data from an application without having to introduce more intensive processes such as encryption to meet regulatory or policy requirements. 
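
A toy sketch of the idea in Python: an in-memory token vault hands out opaque tokens and is the only component that can map them back. A real implementation would persist the vault securely and generate format-preserving tokens:

    import secrets

    class TokenVault:
        """Minimal in-memory token vault; real values never leave this class."""

        def __init__(self):
            self._by_token = {}

        def tokenize(self, value: str) -> str:
            token = secrets.token_hex(8)  # random and opaque
            self._by_token[token] = value
            return token

        def detokenize(self, token: str) -> str:
            return self._by_token[token]

    vault = TokenVault()
    token = vault.tokenize("4111-1111-1111-1111")
    print(token)                    # stored in the application's data set
    print(vault.detokenize(token))  # only the vault can recover the real value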

Data Loss Prevention

A major concept and approach employed in a cloud environment to protect data is known as data loss prevention. It is a set of controls and practices put in place to ensure that data is only accessible and exposed to those users and systems authorized to have it. The goals of this strategy for an organization are to manage and minimize risk, maintain compliance with regulatory requirements, and show due diligence on the part of the application and data owner. 

DLP Components

Any DLP implementation is composed of three common components: discovery and classification, monitoring, and enforcement. The discovery and classification stage is the first stage of the DLP implementation. It is focused on the actual finding of data that is pertinent to the DLP strategy, ensuring that all instances of it are known and able to be exposed to the DLP solution, and determining the security classification and requirements of the data once it has been found. This also allows the matching of data within the environment to any regulatory requirements for its protection and assurance. 

Once data has been discovered and classified, it can then be monitored with DLP implementations. The monitoring stage encompasses the core function and purpose of a DLP strategy. 

The final stage of a DLP implementation is the actual enforcement of policies and any potential violations caught as part of the monitoring stage. If any potential violations are detected by the DLP implementation, a variety of measures can be automatically taken, depending on the policies set forth by the management.
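
A toy sketch of the monitoring and enforcement stages in Python, using rough regular expressions as stand-ins for the much richer content inspection a real DLP product performs:

    import re

    # Deliberately rough patterns, for illustration only.
    PATTERNS = {
        "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    }

    def scan(text):
        """Monitoring: report which sensitive-data classes appear in the text."""
        return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

    def enforce(text):
        """Enforcement: block the content if any policy violation is found."""
        hits = scan(text)
        if hits:
            raise PermissionError(f"blocked by DLP policy: {hits}")
        return text

    enforce("meeting notes, nothing sensitive")   # passes
    # enforce("card 4111 1111 1111 1111")         # would raise PermissionError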

DLP Data States

With data at rest, the DLP solution is installed on the systems holding the data, which can be servers, laptops, desktops, workstations, or mobile devices. In many instances, this will involve archived data and long-term storage data.

With data in transit, the DLP solution is deployed near the network perimeter to capture traffic as it leaves the network through various protocols such as HTTP, HTTPS, and SMTP. It looks for data that is leaving or attempting to leave the area that does not conform to security policies. 

 Lastly, with data in use, the DLP solution is deployed on the workstations or devices in order to monitor the data access and use from the endpoints. The biggest challenges with this type of implementation are reach and the complexity of having all access points covered. 

DLP on end-user devices can be a particular challenge for any cloud application. Because it requires the end user to install an application or plug in to work, you will need to make sure you fully understand the types of devices your users will be utilizing, as well as any costs and requirements associated with the use of the technology.

DLP Cloud Implementations and Practices

The cloud environment brings additional challenges to DLP. The biggest difference is the way cloud environments store data. Data in a cloud is spread across large storage systems, with varying degrees of replication and redundancy, and oftentimes where the data will be stored and accessed is unpredictable. For a DLP strategy, this can pose a particular challenge because it makes properly discovering and monitoring all data used by a system or application more difficult, especially because the data can change locations over time.

Data De-identification

Data de-identification involves using masking, obfuscation, or anonymization. The theory behind masking or obfuscation is to replace, hide, or remove sensitive data from data sets. The most common use for masking is making available test datasets for nonproduction and development environments.  By replacing sensitive data fields with random or substituted data, these nonproduction environments can quickly utilize datasets that are similar to production for testing and development, without exposing sensitive information to systems with fewer security controls and less oversight.

Typically masking is accomplished either by entirely replacing the value with a new one or by adding characters to a data field. This can be done wholesale on the entire field or just portions of it.

The two primary methods for masking are static masking and dynamic masking. With static masking, a separate and distinct copy of the data set is created with masking in place. This is typically done through a script or other process that will take  a standard data set, process it to mask the appropriate and predefined fields, and then output the dataset as a new one with the completed masking done. The static method is most appropriate for data sets that are created for nonproduction environments. With dynamic masking, production environments are protected by the masking process being implemented between the application and data layers of the application. This allows for a masking translation to take place live in the system and during normal application processing of data. Dynamic masking is usually done where a system needs to have full and unmasked data but certain users should not have the same level of access.  
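
A small sketch of masking in Python, replacing all but the last four digits of a card number while keeping the original format. A static approach would run this over a copied data set before handing it to a test environment, while a dynamic approach would apply it at read time between the application and data layers:

    def mask_card_number(card_number: str) -> str:
        """Replace all but the last four digits with 'X', keeping the format."""
        total_digits = sum(ch.isdigit() for ch in card_number)
        seen = 0
        out = []
        for ch in card_number:
            if ch.isdigit():
                seen += 1
                out.append(ch if seen > total_digits - 4 else "X")
            else:
                out.append(ch)
        return "".join(out)

    print(mask_card_number("4111-1111-1111-1111"))  # XXXX-XXXX-XXXX-1111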

With data anonymization, data is manipulated in a way to prevent the identification of an individual through various data objects. It is often used in conjunction with other techniques such as masking. Data generally has direct and indirect identifiers, with direct identifiers being the actual personal and private data, and indirect identifiers being attributes such as demographic and location data. Data anonymization is the process of removing the indirect identifiers to prevent such an identification from taking place. 

AWS Identity and Access Management

Just like a root account on a computer system, the AWS root account has full access to everything under your account. It can create users, provision resources, and incur financial obligations for any activities that are done with it. As with superuser accounts on any computer system, it is a best practice to not use the root account unless absolutely necessary, but instead to provision accounts that have more limited access. 

The AWS IAM dashboard can be found at https://console.aws.amazon.com/iam and you can log into this address using the same email address and password for your root account.

Securing The Root User

When you created your root account, you established a password for it. This password is what you will use to access the AWS console when using the root account. Along with a strong password, MFA will add another layer of security to the account so it is recommended to do this. 

IAM User Groups and Roles

Groups are used to assign a standard set of permissions to users as they are added to the system. As you add more users, going through each user and assigning permissions can become a very labor-intensive process, and it is easy to make errors that way. Groups represent a way to create packages of settings that are maintained in a single location. As users are added to the system, they can be added to the appropriate groups and will automatically inherit the appropriate permissions in a consistent manner.

Roles in AWS are the granular permissions that users can be granted. Within each AWS service, there are multiple roles that allow different activities, such as reading data, creating data, deploying services, and provisioning access. The AWS system has predefined roles for every service offering that you can select to attach to groups. Within each service offering, there are several different roles that grant different types of access.
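
A minimal boto3 sketch of the group workflow: create a group, attach a managed policy, and add a user so the permissions are inherited automatically. The group and user names are illustrative; ReadOnlyAccess is one of the AWS managed policies:

    import boto3

    iam = boto3.client("iam")

    # Create the group and give it a standard package of permissions.
    iam.create_group(GroupName="Auditors")
    iam.attach_group_policy(
        GroupName="Auditors",
        PolicyArn="arn:aws:iam::aws:policy/ReadOnlyAccess",
    )

    # New users inherit the group's permissions consistently.
    iam.create_user(UserName="jane.doe")
    iam.add_user_to_group(GroupName="Auditors", UserName="jane.doe")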

Federated Access

A powerful way for provisioning user access to AWS is through federated access. With federated access, you can use technologies such as SAML or Microsoft Active Directory to provision users, rather than creating them manually through the  IAM account process in the console. The big advantage with using federated access is that users will use accounts and credentials they already have established to access AWS. This enables an organization to use already existing security and account practices, without having to worry about maintaining them in another system. 

SAML

SAML 2.0 is the latest standard put out by the nonprofit OASIS consortium and its Security Services Technical Committee and can be found at https://www.oasis-open.org/standards#samlv2.0. SAML is XML-based and is used to exchange information used in the authentication and authorization process between different parties. Specifically, it is used for information exchange between identity providers and service providers, and it contains within the XML block the required information that each system needs or provides. 

User Reporting

As with any system that has a number of users, you will want a way to keep track of what users you have, what access they have, when they last logged in, and the status of their keys and when they were last rotated. IAM provides a credential report for this purpose, offered as a CSV download that you can either review directly or import into any data or reporting tool you desire. The report can be accessed from the left menu with the credential report button.
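
The same report can be pulled programmatically instead of downloaded from the console. A short boto3 sketch; generation is asynchronous, so the code polls until the report is ready:

    import csv
    import io
    import time
    import boto3

    iam = boto3.client("iam")

    # Ask IAM to build the report and wait until it is complete.
    while iam.generate_credential_report()["State"] != "COMPLETE":
        time.sleep(2)

    report = iam.get_credential_report()  # the CSV document as bytes
    rows = csv.DictReader(io.StringIO(report["Content"].decode("utf-8")))
    for row in rows:
        print(row["user"], row["password_last_used"], row["mfa_active"])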

AWS Support

When we created an account, we selected the free support option. That is fine for learning, but it is not ideal for organizations that are more heavily invested in AWS and certainly not for anyone running production business services in AWS. 

Management Tools for AWS

The AWS management console is the main resource where you can control all of your services and perform any operations. To access the console, go to https://console.aws.amazon.com and log in with your credentials. The console has menus that point to the many AWS services. On any screen, the upper-right corner of the console has a drop-down menu to change the region you are viewing. For some services that are global in nature, you will not see regions displayed within the dashboard for that service. 

 

As you are learning about the AWS core services, keep track of which ones are global in nature and not bound to regions. Many services are offered at a global level, and no selection or configuration in regard to regions or availability zones is necessary.

 

AWS CLI

The AWS command line interface provides a way to manage AWS services and perform many administrative functions without having to use the web based management console. Through the use of the command line interface, users can also script and automate many functions through whatever programming language they are familiar with or desire to use for automation. Each AWS service has command line interface commands that are pertinent to it and can be found in the AWS documentation. 

 

Developer Tools

AWS CodeBuild is a fully featured code building service that will compile and test code as well as build deployment packages that are ready for implementation. CodeBuild is a fully managed service that will automatically scale to the needs of developers, alleviating their need to manage and scale a system. 

 

AWS CodeCommit is a managed service for secure Git repositories. With the popularity of Git for code versioning, the AWS service allows users to be up and running quickly and in a secure environment, without having to configure and manage their own repository systems. It will automatically scale to the needs of users and is completely compatible with any tools and software that have Git capabilities.

 

AWS CodeDeploy is a managed deployment service that can deploy code fully across AWS services or on-premises servers. The service is designed to handle complex deployments and ensure that all pieces and configurations are properly deployed, allowing a savings in time spent on verification after rollouts. It will fully scale to any resources that are needed.

 

Configuration Management

AWS Systems Manager allows you to consolidate data from AWS services and automate tasks across all of your services. It allows for a holistic view of all of your services, while also allowing you to create logical groups of resources that can then be viewed in a consolidated manner. Within Systems Manager there are many components that allow you to perform different administrative tasks. 

 

OpsCenter provides a consolidated view for developers and operations staff to view and investigate any operational issues. Data from many different resources are all centralized. It allows for a quick view of your entire environment and helps diagnose problems as quickly as possible. 

 

Explorer is a customizable dashboard that provides information on the health of your entire AWS environment and can consolidate data spanning multiple accounts and regions. 

 

AWS AppConfig provides an API and console method for applying configuration changes across AWS services from a centralized service. This is done in much the same way code is deployed out to multiple locations. AppConfig can quickly deploy configuration changes to different instances of compute services and ensure they are applied in a uniform and consistent manner.

 

Resource Groups allow for logical grouping of resources within AWS for how they are presented within Systems Manager. This allows a user to group services by application, department, tier, or any other manner they find useful, rather than looking at all resources collectively.

 

Keep in mind the concept of resource groups, especially with large deployments within AWS. The use of resource groups can help segment services to specific applications and groups and assist with monitoring your services within AWS.

 

Global Infrastructure

AWS runs a very large cloud infrastructure that is distributed throughout the world. This network is divided into different segments that are geographically based, such as regions and Availability Zones. AWS also runs a network of Edge locations throughout the world that serve a portion of AWS services and are optimized for low latency and responsiveness to requests.

 

AWS organizes resources throughout the world in regions. Each region is a group of logical data centers called Availability Zones. While each region may seem like it is a data center or physical location, it is actually a collection of independent data centers that are grouped and clustered together, providing redundancy and fault tolerance.

 

When you provision resources within AWS, they can exist in only one region and are hosted on the physical hardware present at it. That does not mean you cannot replicate instances and virtual machines across multiple regions and around the world, but each individual instance only exists in one region. 

 

Core AWS Services

AWS offers a large number of core services that are widely used and well known throughout the world. It offers robust monitoring and auditing tools that span the breadth of all AWS service offerings. Monitoring systems are designed to collect and consolidate event data and auditing information from any services allocated under your account and provide them to you from a uniform and centralized dashboard. 

 

CloudWatch is the AWS service for monitoring and measuring services running within the AWS environment. It provides data and insights on application performance and how it may change over time, resource utilization, and a centralized and consolidated view of the overall health of systems and services. It is very useful to developers, engineers, and managers. Within any IT system, large amounts of data are produced in the form of system and application logs, but also data on performance and metrics. 

 

Across large systems, this can result in a large amount of data that is coming from many different sources. This can pose considerable challenges ranging from anyone looking to synthesize the data and formulate a picture of system health and performance, down to developers looking for specific events or instances within applications.

 

CloudWatch collects and consolidates all of this data into a single service, making it much easier and more efficient to access. With this consolidation, developers and managers can see a picture of their overall systems and how they are performing, versus looking at individual systems or components of systems separately. 
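
Beyond the metrics AWS publishes automatically, applications can push their own data points into CloudWatch. A minimal boto3 sketch with a hypothetical namespace and metric name:

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    # Namespace, metric, and dimension values are illustrative.
    cloudwatch.put_metric_data(
        Namespace="MyApp/Checkout",
        MetricData=[{
            "MetricName": "OrdersProcessed",
            "Value": 1,
            "Unit": "Count",
            "Dimensions": [{"Name": "Environment", "Value": "production"}],
        }],
    )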

 

CloudTrail is the AWS service for performing auditing and compliance within your AWS account. It pairs with CloudWatch to analyze all of the logs and data collected from the services within your account, which can then be audited and monitored for all activities done by users and admins within your account. This enables a full compliance capability and will store a historical record of all account activities. Should any investigations become necessary, all of the data is preserved and easily searchable. 

 

CloudTrail will log all account activities performed, regardless of the method through which they are done. It logs all activity through the management console, command line interface, and any API calls that are made, along with the originating IP address and all time and date data. If unauthorized changes are made, or if a change causes a disruption in services or system problems, the logs and reports available can enable an admin to quickly determine what was done and by whom.
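
A small boto3 sketch of querying CloudTrail for recent activity, for example to see who terminated instances over the last day (the event name filter is just one illustrative attribute):

    from datetime import datetime, timedelta
    import boto3

    cloudtrail = boto3.client("cloudtrail")

    events = cloudtrail.lookup_events(
        LookupAttributes=[
            {"AttributeKey": "EventName", "AttributeValue": "TerminateInstances"},
        ],
        StartTime=datetime.utcnow() - timedelta(days=1),
        EndTime=datetime.utcnow(),
        MaxResults=50,
    )
    for event in events["Events"]:
        print(event["EventTime"], event.get("Username"), event["EventName"])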

 

AWS Shield

This provides protection from and mitigation of DDoS attacks on AWS services. It is always active and monitoring services, providing continual coverage without needing to engage AWS support for assistance should an attack occur. It comes in two different service categories, Standard and Advanced. Standard coverage is provided at no additional charge and is designed to protect against common DDoS attacks, especially for any accounts utilizing CloudFront or Route 53. This will protect websites and applications from the most frequently occurring attacks and virtually all known attacks on layers 3 and 4 against CloudFront and Route 53.

 

AWS WAF

AWS WAF is a web application firewall that protects web applications against many common attacks. It comes with an array of preconfigured rules from AWS that will offer comprehensive protection based on common top security risks, but you also have the ability to create your own rules. The WAF includes an API that can be used to automate rule creation and deployment of them to your allocated resources. Also included is a real time service view into your web traffic that you can then use to automatically create new rules and alerts. It is included at no additional cost for anyone who has purchased the AWS Shield Advanced tier. If you are not utilizing the Advanced Shield tier, you can use AWS WAF separately and will incur costs based on the number of rules you create and the number of requests they service. Remember the difference between Shield and WAF. Shield operates at the layer 3 and 4 network levels and is used to prevent DDOS attacks, versus WAF that operates at the Layer 7 level and can take action based on the specific contents of web traffic and requests. 

Services For Cloud Computing

  
AWS Services
When you provision a resource, the decision of which region to locate it in can
depend on a few different factors, such as customer locations, security
requirements, and regulatory requirements. It makes sense to host your
applications and resources closer to your customers, as this will yield the
fastest network times and responsiveness. It may also make sense, depending on
your application's needs and your appetite for risk, to completely separate
resources or instances. Lastly, many jurisdictions have regulatory requirements
that dictate how personal and financial data can be used and transported. In
many instances, such data is required to stay within certain geographic areas or
national borders.

The use of regions for regulatory compliance is very important. Most regulations
are built upon where the data resides or is being processed, and the ability
within AWS to control, with most services, where that happens makes compliance
much easier.
  
To keep order and make it easier to know what service you are using, as well as
the region hosting it, all AWS services use endpoints that are formulaic in
nature. This enables anyone with knowledge of the AWS topography to quickly know
where and what a service is just by seeing the endpoint.
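
As a rough illustration, many (though not all) service endpoints follow a
service.region.amazonaws.com pattern; a tiny helper makes the idea concrete:

    def service_endpoint(service: str, region: str) -> str:
        """Illustrative only: not every AWS endpoint follows this exact pattern."""
        return f"https://{service}.{region}.amazonaws.com"

    print(service_endpoint("ec2", "us-east-1"))  # https://ec2.us-east-1.amazonaws.com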
  
Availability Zones
While regions represent a group or cluster of physical data centers, an AWS
Availability Zone represents those actual physical locations. Each AWS data
center is built with fully independent and redundant power, cooling,
networking, and physical computing hardware. All network connections are
dedicated lines, supporting the highest possible throughput and lowest levels
of latency.
  
As each region is made up of multiple Availability Zones, there are direct
connections for networking access between them, and all traffic is encrypted.
This allows resources within a region to be spread out and clustered between the
Availability Zones, without worrying about latency or security.

When provisioning resources within AWS, you will select an Availability Zone
from the options given. The list will contain all of the Availability Zones that
are available within the selected region.
  
To provide optimal responsiveness for customers, AWS maintains a network of Edge
locations throughout the world to provide low latency access to data. These
locations are geographically dispersed throughout the world to be close to
customers and organizations in order to provide the fastest response times.
Unlike regular AWS regions and Availability Zones, Edge locations are optimized
to perform a narrow set of tasks and duties, allowing them to be optimally tuned
and maintained for their intended focus.
  
Edge locations run a minimal set of services to optimize delivery speeds. These
include Amazon CloudFront, Amazon Route 53, AWS Shield, AWS WAF, and
Lambda@Edge. CloudFront is a content delivery network that allows cached copies
of data and content to be distributed on Edge servers closest to customers.
Route 53 is a DNS service that provides very fast and robust lookup services.
Shield is a DDoS protection service that constantly monitors and reacts to any
attacks. WAF is a web application firewall that monitors and protects against
web exploits and attacks based on rules that inspect traffic and requests.
Lambda@Edge provides a runtime environment for application code to be run on a
CDN without having to provision systems or manage them.

 
Networking and Content Delivery
AWS offers robust networking and content delivery systems that are designed to
optimize low latency and responsiveness to any queries, as well as complete
fault tolerance and high availability.
 
With Amazon Virtual Private Cloud, you can create a logically defined space
within AWS to create an isolated virtual network. Within this network, you
retain full control over how the network is defined and allocated. You fully
control the IP space, subnets, routing tables, and network gateway settings
within your VPC, and you have full use of both IPv4 and IPv6.

Security Groups in AWS are virtual firewalls that are used to control inbound
and outbound traffic. Security groups are applied on the actual instance within
a VPC versus at the subnet level. This means that in a VPC where you have many
services or virtual machines deployed, each one can have different security
groups applied to them. In fact, each instance can have up to 5 security groups
applied to it, allowing different policies to be enforced and maintaining
granularity and flexibility for administrators and developers.
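
A minimal boto3 sketch of creating a security group inside a VPC and allowing only
inbound HTTPS; the VPC ID is a placeholder for one you have already created:

    import boto3

    ec2 = boto3.client("ec2")

    # Create the virtual firewall attached to a hypothetical VPC.
    sg = ec2.create_security_group(
        GroupName="web-tier",
        Description="Allow inbound HTTPS only",
        VpcId="vpc-0123456789abcdef0",
    )

    # Permit TCP 443 from anywhere; other inbound traffic stays blocked.
    ec2.authorize_security_group_ingress(
        GroupId=sg["GroupId"],
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
        }],
    )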
 
ACLs
Access control lists are security layers on the VPC that control traffic at the
subnet level. This differs from security groups that are on each specific
instance. However, many times both will be used for additional layers of
security.
 
Subnets
Within a VPC, you must define a block of IP addresses that are available to it.
This is called a Classless Inter-Domain Routing (CIDR) block. By default, a VPC
will be created with a CIDR of 172.31.0.0/16. This default block encompasses all
IP addresses from 172.31.0.0 to 172.31.255.255. While the default subnet
configuration for AWS uses IPv4 addressing, IPv6 is also available if desired or
required.
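
Python's standard ipaddress module is a quick way to sanity-check a CIDR block and
the subnets carved out of it, using the default 172.31.0.0/16 block mentioned above:

    import ipaddress

    default_vpc = ipaddress.ip_network("172.31.0.0/16")
    print(default_vpc.num_addresses)        # 65536 addresses
    print(default_vpc[0], default_vpc[-1])  # 172.31.0.0 172.31.255.255

    # A smaller subnet carved out of the VPC block.
    subnet = ipaddress.ip_network("172.31.1.0/24")
    print(subnet.subnet_of(default_vpc))    # True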

Elastic Load Balancing
Elastic Load Balancing is used to distribute traffic across the AWS
infrastructure. This can be done with varying degrees of granularity, ranging
from spanning multiple Availability Zones to operating within a single
Availability Zone. It is focused on fault tolerance by implementing high
availability, security, and auto-scaling capabilities. There are three different
types of load balancing under its umbrella: the application load balancer, the
network load balancer, and the classic load balancer.
 
Layer 7 of the OSI model pertains to the actual web traffic and content.
Developers can take advantage of data such as the HTTP method, URL, parameters,
headers, and so on, in order to tune load balancing based on the specifics of
their applications and the type of traffic and user queries they receive.
 
Route 53
Amazon Route 53 is a robust, scalable, and highly available DNS service. Rather
than running their own DNS services or being dependent on another commercial
service, an organization can utilize Route 53 to resolve domain names into their IP
addresses, with full IPv6 compatibility and access. Route 53 can be
used for services that reside inside AWS, as well as those outside of AWS.
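As an illustration of the automation Route 53 exposes, a simple A record can be
created or updated through boto3 roughly as shown below. The hosted zone ID,
domain name, and IP address are placeholders.

    import boto3

    route53 = boto3.client("route53")

    # UPSERT creates the record if it does not exist, or updates it if it does.
    route53.change_resource_record_sets(
        HostedZoneId="Z0123456789EXAMPLE",  # placeholder hosted zone ID
        ChangeBatch={
            "Changes": [
                {
                    "Action": "UPSERT",
                    "ResourceRecordSet": {
                        "Name": "www.example.com",
                        "Type": "A",
                        "TTL": 300,
                        "ResourceRecords": [{"Value": "198.51.100.10"}],
                    },
                }
            ]
        },
    )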

CloudFront
Amazon CloudFront is a CDN that allows for delivery of data and media to users
with the lowest levels of latency and the highest levels of transfer speeds.
This is done by having CloudFront systems distributed across the entire AWS
global infrastructure and fully integrated with many AWS services, such as S3,
EC2, and Elastic Load Balancing. CloudFront optimizes speed and delivery by
directing user queries to the closest location to their requests. This is
especially valuable and useful for high resource demand media such as live and
streaming video.
 
Storage
AWS offers extremely fast and expandable storage to meet the needs of any
application or system. These offerings range from block storage used by EC2
instances to the widely used object storage of S3. AWS offers different tiers of
storage to meet specific needs of production data processing systems versus
those used for archiving and long term storage.
 
Elastic Block Store
Amazon Elastic Block Store is a high-performance block storage service used in
conjunction with EC2 where high-throughput data operations are required. This
will typically include file systems, media services, and databases. There are
four types of EBS volumes that a user can pick from to meet their specific needs.
Two of the volume types feature storage backed by solid-state drives and two use
traditional hard disk drives.

S3
Amazon Simple Storage Service is the most prominent and widely used storage
service under AWS. It offers object storage at incredibly high availability
levels, with stringent security and backups, and is used for everything from
websites, backups, and archives to big data implementations.
 
Remember that bucket names must be globally unique within AWS, and each bucket
can only exist within one region.
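A minimal sketch of creating a bucket and uploading an object with boto3; the
bucket name is a placeholder and would need to be globally unique, and buckets
created outside us-east-1 also need a location constraint.

    import boto3

    s3 = boto3.client("s3")

    # Bucket names are global, so this placeholder name may already be taken.
    s3.create_bucket(Bucket="my-example-notes-bucket")

    # Upload a small object; the key acts like a file path within the bucket.
    s3.put_object(
        Bucket="my-example-notes-bucket",
        Key="backups/notes.txt",
        Body=b"hello from S3",
    )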

S3 Storage Classes
S3 offers four storage classes for users to pick from, depending on their needs.
Storage classes are set at the object level, and a bucket for a user may contain
objects using any of the storage classes concurrently.
 
S3 Standard is used for commonly accessed data and is optimized for high
throughput and low latency service. Used widely for cloud applications,
websites, content distribution, and data analytics. Encryption is supported for
data both at rest and in transit, and stored data is resilient to the loss of an
entire Availability Zone.

S3 Intelligent Tiering
Best used for data where usage patterns are unknown or may change over time.
This class works by spanning objects across two tiers: one optimized for
frequent access and the other for infrequent access. For a small fee, AWS will
automatically move an object between the two tiers based on its access patterns.
 
S3 Standard Infrequent Access
Ideally used where access to an object will be infrequent but, when access is
requested, a quick response is necessary. This is often used for backups and
disaster recovery files that are not accessed with any regularity, but when
needed there is an immediacy requirement.
 
S3 One Zone Infrequent Access
Ideal for data that is infrequently used, requires quick access when it is
accessed, but does not require the robust fault tolerance and replication of
other S3 classes. Rather than being spread across the typical three Availability
Zones, objects under this storage class are housed in a single Availability
Zone. This realizes cost savings for users, as it is cheaper than other S3
storage classes that span multiple Availability Zones.
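Because the storage class is set per object, it can simply be specified at
upload time. A rough boto3 example, with the bucket and key as placeholders:

    import boto3

    s3 = boto3.client("s3")

    # Store an infrequently accessed backup directly in the Standard-IA class.
    # Other values include "INTELLIGENT_TIERING" and "ONEZONE_IA".
    s3.put_object(
        Bucket="my-example-notes-bucket",
        Key="archive/2023-report.csv",
        Body=b"example,data",
        StorageClass="STANDARD_IA",
    )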

S3 Permissions
AWS S3 offers three different layers of permissions and security controls on S3
objects: bucket policies, user policies, and object access control lists. With bucket
policies, security controls are applied at the bucket level. These
policies can apply to all objects within the bucket or just some objects. For
example, for a private bucket that you desire to only allow internal access to
objects, a bucket-level policy can be applied that automatically protects every
object within. However, if you have a mix of objects in your bucket, you may
have some that are protected to specific users for access, while others, such as
those used for public web pages, are open and available to the entire internet.
As policies are applied to objects, the ability to read, write, and delete
objects can all be controlled separately.
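As a hedged example of a bucket policy, the following boto3 snippet allows
anonymous read access only to objects under a public/ prefix while leaving
everything else governed by the default private controls. The bucket name and
prefix are placeholders.

    import json
    import boto3

    s3 = boto3.client("s3")

    # Allow anonymous read access only to objects under the "public/" prefix.
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": "*",
                "Action": "s3:GetObject",
                "Resource": "arn:aws:s3:::my-example-notes-bucket/public/*",
            }
        ],
    }

    s3.put_bucket_policy(Bucket="my-example-notes-bucket", Policy=json.dumps(policy))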
 
S3 Encryption
When you upload any objects to S3, the data within the object is static and
written to storage within the AWS infrastructure. If you upload an object that
contains personal or sensitive data, that will now reside in AWS and be
accessible based upon the policies and security controls that you have applied
to it. While the bucket and user policies you have in place will be applied to
those objects, it is still possible for data that should be protected to slip
through those policies for various reasons.
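Server-side encryption can be requested per object when it is uploaded. A
minimal boto3 sketch using S3-managed keys (SSE-S3); the bucket and key are
placeholders.

    import boto3

    s3 = boto3.client("s3")

    # Ask S3 to encrypt this object at rest using S3-managed keys.
    s3.put_object(
        Bucket="my-example-notes-bucket",
        Key="sensitive/customer-list.csv",
        Body=b"example data",
        ServerSideEncryption="AES256",
    )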

S3 Versioning
By default, when you update an object in S3, it is overwritten and replaces what
you previously had uploaded. In many cases this is fine, but it puts the
responsibility on the user to ensure they have a backup copy of the object or to
otherwise preserve the copy. Without doing so, once they have uploaded a new
copy, whatever existed before is gone.
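Versioning is turned on at the bucket level; once enabled, each new upload of
the same key is kept as a separate version rather than silently replacing the
old copy. A minimal boto3 sketch, with the bucket name as a placeholder:

    import boto3

    s3 = boto3.client("s3")

    # Keep prior copies as older versions instead of overwriting them.
    s3.put_bucket_versioning(
        Bucket="my-example-notes-bucket",
        VersioningConfiguration={"Status": "Enabled"},
    )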
 
S3 Object Life Cycle
To help manage versioning in AWS S3, the service provides automation tools,
called actions, to handle how versions are stored and when they are removed from
the system. This will be particularly useful as the number of objects you have
increases or with objects that are regularly updated and will begin to accrue a
large number of versions.
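As a hedged sketch, a lifecycle rule might transition current objects to Glacier
after 90 days and permanently remove noncurrent versions 30 days after they are
superseded; the rule name, prefix, and day counts below are arbitrary examples.

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_lifecycle_configuration(
        Bucket="my-example-notes-bucket",
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "archive-and-prune",
                    "Status": "Enabled",
                    "Filter": {"Prefix": ""},  # apply to every object in the bucket
                    # Move current objects to Glacier after 90 days.
                    "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
                    # Delete noncurrent versions 30 days after they are superseded.
                    "NoncurrentVersionExpiration": {"NoncurrentDays": 30},
                }
            ]
        },
    )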
 
S3 Glacier and S3 Glacier Deep Archive
S3 Glacier is a special type of S3 storage that is intended to be a secure
solution for long term data archiving and backups. As compared to regular S3
storage options, Glacier is offered at significant cost savings. These savings
are much greater when compared to the costs of on-premises storage solutions for
long-term archiving. S3 Glacier Deep Archive is a subset of Glacier that is
intended for the longest-term storage with the least likely need for retrieval.

AWS Storage Gateway
The AWS Storage Gateway provides hybrid cloud storage that gives your
on-premises resources access to the full array of storage services in AWS. This
enables a customer to extend their storage capabilities into AWS seamlessly and
with very low latency. A common use of Storage Gateway is for customers to store
backups of images from their on-premises environment in AWS, or to use AWS cloud
storage to back up their file shares. Many
customers also utilize Storage Gateway as a key component to their disaster
recovery strategy and planning.
 
AWS Backup
AWS Backup provides backup services for all AWS services. It provides a single
resource to configure backup policies and monitor their usage and success across
any services that you have allocated. This allows administrators to access a
single location for all backup services without having to separately configure
and monitor on a per-service basis across AWS. From the AWS Backup console,
users can fully automate backups and perform operations such as encryption and
auditing. AWS Backup has been certified as compliant with many regulatory
requirements.

AWS Snow
AWS Snow is designed for offering compute and storage capabilities to those
organizations or places that are outside the areas where AWS regions and
resources operate. Snow is based on hardware devices that contain substantial
compute and storage resources that can be used both as devices for data
processing away from the cloud and as a means to get data into and out of AWS.
This is particularly useful in situations where high-speed or reliable
networking is not possible.
 
Compute Services
With any system or application, you need an underlying compute infrastructure to
actually run your code, content, or services. AWS offers the ability through EC2
to run full virtual instances that you maintain control over and can customize
as much as you like, as well as managed environments that allow you to just
upload your content or code and be quickly running, without having to worry
about the underlying environment. These include EC2, Lightsail, Elastic
Beanstalk, Lambda, and Containers.
 
EC2
Amazon Elastic Compute Cloud is the main offering for virtual servers in the
cloud. It allows users to create and deploy compute instances that they will
retain full control over and offers a variety of configuration options for
resources.

Amazon Machine Images are the basis of virtual compute instances in AWS. An
image is basically a data object that is a bootable virtual machine and can be
deployed throughout the AWS infrastructure. AMIs can be either those offered
by AWS through their Quick Start options, those offered by vendors through the
AWS Marketplace, or those created by users for their own specific needs.
 
Remember that costs for Marketplace applications will be presented as two costs:
the licensing costs from the vendor for use of the image, as well as the EC2
costs for hosting it and the compute/storage resources it will consume.

EC2 instance types are where the underlying hardware resources are married with
the type of image you are using. The instance type will dictate the type of CPU
used, how many virtual CPUs it has, how much memory, the type of storage used,
network bandwidth, and the underlying EBS bandwidth.
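Launching an instance largely comes down to choosing an AMI and an instance
type. A hedged boto3 sketch is shown below; the AMI ID is a placeholder, and a
real launch would normally also specify networking, storage, and a key pair.

    import boto3

    ec2 = boto3.client("ec2")

    # Launch one small instance from a chosen image; the AMI ID is a placeholder.
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
    )
    print(response["Instances"][0]["InstanceId"])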
 
It is not necessary to memorize all of the instance types and which purposes
they associate with.
 
Lightsail
Lightsail is the quickest way to get into AWS for new users. It offers
blueprints that will configure fully ready systems and application stacks for
you to immediately begin using and deploying your code or data into. Lightsail
is fully managed by AWS and is designed to be a one click deployment model to
get you up and running quickly at a low cost.

Elastic Beanstalk
Elastic Beanstalk is designed to be even easier and quicker for getting your
applications up and running than Lightsail is. With Elastic Beanstalk, you
choose the application platform that your code is written in. Once you provision
the instance, you can deploy your code into it and begin running. You only select
the platform you need; you do not select specific hardware or compute resources.
 
Lambda
AWS Lambda is a service for running code for virtually any application or back
end service. All you need to do is upload your code, and there are no systems or
resources to manage. The code can be called by services or applications, and you
will only incur costs based on the processing time and the number of times your
code is called, as well as the memory that you allocate. You will always have
the level of resources you need available to run your code without having to
provision or monitor anything.
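The unit of deployment is simply a handler function. A minimal Python handler is
sketched below; the event shape assumed here (a JSON object with a name field)
is only an example, since the real event depends on whichever service invokes
the function.

    # Lambda calls this function with the triggering event and a context object
    # that carries runtime metadata such as the remaining execution time.
    def lambda_handler(event, context):
        name = event.get("name", "world")  # assumes the caller sends {"name": ...}
        return {
            "statusCode": 200,
            "body": f"Hello, {name}",
        }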
 
Containers
With typical server models, there is an enormous duplication of resources when
replicas of systems are created. When you launch several instances of EC2, each
one has its own operating system and underlying functions and services that are
needed to run. Before your application code is even used, each instance is
consuming a certain amount of compute resources just to exist. A modern approach
to this problem has been the use of containers, such as Docker. This
allows a single system instance to host multiple virtual environments within it
while leveraging the underlying infrastructure.

Databases
As many modern applications are heavily dependent on databases, AWS has several
database service offerings that will fit any type of use needed, ranging from
relational databases to data warehousing. AWS provides robust tools for
migrating databases from legacy and on-premises systems into AWS, as well as
transitioning between different database services.
 
Database Models
Databases follow two general models: they can be either relational or
non-relational. Which one you use is entirely dependent on the needs of your
application and the type of data it accesses and depends on.
 
Relational databases are often described as storing structured data. A relational
database utilizes a primary key to track a unique record, but that record can
then have many data elements associated with it.
 
Non-relational databases are described as storing unstructured data. While their
tables also utilize a primary key, the data paired with that primary key is not
restricted to a particular type. This allows applications to store a variety of
data within their tables. However, it also restricts queries against these
tables to the primary key value, as the paired data could be in different
formats and structures and would not be efficient or stable for applications to
query generally.

Make sure you understand the differences between relational and non-relational
databases and what they are used for, especially the key aspects of how they may
be searched.
 
AWS Database Migration Service
The AWS Database Migration Service is a tool for migrating data into AWS
databases from existing databases with minimal downtime or other interruptions.
The DMS can move data from most of the popular and widely used databases into
various AWS database services while the source system remains fully operational.
DMS can perform migrations where the source and destination databases are the
same type, such as moving from a Microsoft SQL Server database in another
location into a Microsoft SQL Server database in AWS, but it can also perform
migrations where they differ.
 
The availability of DMS is a great opportunity for any users that have been
contemplating changing back-end databases but have been cautious about the level
of effort involved in the actual migration of data.

Amazon Relational Database Service
Amazon RDS is an umbrella service that incorporates several different kinds of
database systems. Each system is fully managed by AWS and is optimized within
the AWS infrastructure for memory, performance, and I/O. All aspects of database
management, such as provisioning, configuration, maintenance, performance
monitoring, and backups, are handled by AWS, which allows the user to fully
focus on their applications and data.
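As a rough illustration of how little the user has to specify, a small managed
MySQL instance could be requested with boto3 roughly as follows; the identifier,
credentials, and sizing are placeholders.

    import boto3

    rds = boto3.client("rds")

    # Request a small managed MySQL instance; AWS handles the underlying host,
    # patching, and backups. All names and credentials here are placeholders.
    rds.create_db_instance(
        DBInstanceIdentifier="example-notes-db",
        DBInstanceClass="db.t3.micro",
        Engine="mysql",
        MasterUsername="admin",
        MasterUserPassword="change-me-please",
        AllocatedStorage=20,
    )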

Amazon Aurora
Aurora is a subset of Amazon RDS that is compatible with both MySQL and
PostgreSQL databases. It combines the features and simplicity of open-source
databases with the robust management and security of AWS services. Aurora
leverages the AWS infrastructure to offer highly optimized and fast database
services, along with the robust security and reliability of AWS.
 
DynamoDB
DynamoDB is the AWS key-value and document database solution for applications
that do not need a SQL or relational database but do need extremely
high-performance and scalable access to their data. As with other AWS services,
DynamoDB is fully configured, maintained, and secured by AWS, so all the user
needs to do is create a table and populate their data.
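A rough sketch of writing and reading a single item with boto3, assuming a table
named Users with a partition key user_id has already been created (both names
are placeholders):

    import boto3

    # Assumes a table named "Users" with partition key "user_id" already exists.
    table = boto3.resource("dynamodb").Table("Users")

    # Store one item; non-key attributes can vary freely from item to item.
    table.put_item(Item={"user_id": "u-100", "email": "alice@example.com", "plan": "pro"})

    # Retrieve the item back by its primary key.
    item = table.get_item(Key={"user_id": "u-100"}).get("Item")
    print(item)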

Amazon Redshift

Redshift is a cloud-based data warehouse solution offered by AWS. Unlike traditional on-premises data warehouses, Redshift leverages AWS storage to whatever capacity a company needs, either now or in the future. Organizations only incur costs for the storage they actually use, as well as the compute power they need to analyze and retrieve data. With a traditional approach, an organization must spend money to have sufficient capacity up front and continually add both compute and storage infrastructure to support growth and expansion, resulting in expenses for idle systems.

Automation

In AWS, automation is essential for enabling systems to be rapidly and correctly deployed and configured. Amazon CloudFormation implements an automated way to model infrastructure and resources within AWS via either a text file or through the use of programming languages. This allows administrators to build out templates for the provisioning of resources that can then be repeated in a secure and reliable manner. As you build new systems or replicate services, you can be assured that they are being done in a consistent and uniform manner, negating the process of building out from a base image and then having to apply different packages, configurations, or scripts to bring systems to a ready state.

With the use of file-based templates, CloudFormation allows infrastructure and services to be treated as code. This allows administrators to use version control to track changes to the infrastructure and use build pipelines to deploy infrastructure changes. CloudFormation not only helps with infrastructure, it can also help build and deploy your applications.
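As a hedged illustration of treating infrastructure as code, a tiny template
declaring a single S3 bucket can be deployed as a stack through boto3 roughly as
follows; the stack name is a placeholder.

    import json
    import boto3

    cloudformation = boto3.client("cloudformation")

    # A minimal template declaring one S3 bucket as a resource (JSON form).
    template = json.dumps({
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "NotesBucket": {"Type": "AWS::S3::Bucket"},
        },
    })

    # Creating the stack provisions everything the template declares, repeatably.
    cloudformation.create_stack(StackName="example-notes-stack", TemplateBody=template)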

End-User Computing

AWS offers powerful tools to organizations to provide end-user computing such as virtual desktops and access to applications or internally protected websites. 

WorkSpaces

Amazon WorkSpaces is a Desktop as a Service implementation that is built, maintained, configured, and secured through AWS as a managed service. WorkSpaces offers both Windows and Linux desktop solutions that can be quickly deployed anywhere throughout the AWS global infrastructure. As many organizations have moved to virtual desktop infrastructure solutions, WorkSpaces enables them to offer the same capabilities to their users without the need to actually purchase and maintain the hardware required for the VDI infrastructure, as well as the costs of managing and securing it.

AppStream

AppStream is a service for providing managed and streaming applications via AWS. By streaming applications, the need to download and install applications is removed, as they will run through a browser. This eliminates the need for an organization to distribute software and support its installation and configuration for their users. This can be particularly useful to organizations like academic institutions that can offer a suite of software to their students without the need for them to actually obtain and install it.

WorkLink

WorkLink offers users the ability to access internal applications through the use of mobile devices. Traditionally this access would be controlled and secured through technologies like virtual private networks or mobile device management utilities. Both of these technologies must already be installed and configured on a user's device before they can be used.