Services For Cloud Computing

These are my notes on AWS services for cloud computing.


   
AWS runs a very large cloud infrastructure that is distributed throughout the
world. This network is divided into different segments that are geographically
based. AWS also runs a network of Edge locations throughout the world that serve
a portion of AWS services and are optimized for low latency and responsiveness
to requests.
   
When you provision resources within AWS, they can exist in only one region and
are hosted on the physical hardware present at it. That does not mean you cannot
replicate instances and virtual machines across multiple regions and around the
world, but each individual instance only exists in one region.
  
AWS Services
When you provision a resource, the decision of which region to locate it in can
depend on a few different factors such as: customer locations, security
requirements, and regulatory requirements. It makes sense to host your
applications and resources closer to your customers. This will yield the fastest
network times and responsiveness. It may also make sense, depending on your
application's needs and your appetite for risk, to completely separate resources
or instances. Lastly, many jurisdictions have regulatory requirements that
dictate how personal and financial data can be used and transported. In many
instances they are required to stay within geographic areas or within their own
borders.

The use of regions for regulatory compliance is very important. Most regulations
are built upon where the data resides or is being processed, and the ability
within AWS to control, with most services, where that happens makes compliance
much easier.
  
To keep order and make it easier to know what service you are using, as well as
the region hosting it, all AWS services use endpoints that are formulaic in
nature. This enables anyone with knowledge of the AWS topography to quickly know
where and what a service is just by seeing the endpoint.
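
As a rough sketch of that formula, most regional endpoints follow a
service.region.amazonaws.com pattern (a few services, such as IAM, use a
single global endpoint instead, and some older services support legacy
endpoint forms):

```python
def regional_endpoint(service: str, region: str) -> str:
    """Build the typical AWS regional endpoint hostname.

    Most, though not all, services follow this pattern; this is an
    illustration of the naming scheme, not an exhaustive rule.
    """
    return f"{service}.{region}.amazonaws.com"

# Reading an endpoint therefore tells you both the service and the region:
endpoint = regional_endpoint("ec2", "us-east-1")
```

Seeing "ec2.us-east-1.amazonaws.com" in a log or configuration file is enough
to know which service is being called and where it is hosted.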
  
Availability Zones
While regions represent a group or cluster of physical data centers, an AWS
Availability Zone represents those actual physical locations. Each AWS data
center is built with fully independent and redundant power, cooling,
networking, and physical computing hardware. All network connections are
dedicated lines, supporting the highest possible throughput and lowest levels
of latency.
  
As each region is made up of multiple Availability Zones, there are direct
connections for networking access between them, and all traffic is encrypted.
This allows resources within a region to be spread out and clustered between the
Availability Zones, without worrying about latency or security.

When provisioning resources within AWS, you will select the Availability Zone
from the options given. The list will contain all of the Availability Zones
available within the region you have chosen.
  
To provide optimal responsiveness for customers, AWS maintains a network of Edge
locations throughout the world to provide low latency access to data. These
locations are geographically dispersed throughout the world to be close to
customers and organizations in order to provide the fastest response times.
Unlike regular AWS regions and Availability Zones, Edge locations are optimized
to perform a narrow set of tasks and duties, allowing them to be optimally tuned
and maintained for their intended focus.
  
Edge locations run a minimal set of services to optimize delivery speeds. These
include Amazon CloudFront, Amazon Route 53, AWS Shield, AWS WAF, and
Lambda@Edge. CloudFront is a content delivery network that allows cached copies
of data and content to be distributed on Edge servers closest to customers.
Route 53 is a DNS service that provides very fast and robust lookup services.
Shield is a DDoS protection service that constantly monitors and reacts to any
attacks. WAF is a web application firewall that monitors and protects against
web exploits and attacks based on rules that inspect traffic and requests.
Lambda@Edge provides a runtime environment for application code to be run on a
CDN without having to provision or manage systems.

Core Services
AWS offers a large number of core services that are widely used and well known
throughout the IT world.
  
CloudWatch is the AWS service for monitoring and measuring services running
within the AWS environment. It provides data and insights on application
performance and how it may change over time, resource utilization, and a
centralized view of the overall health of systems and services. It is very
useful to developers, engineers, and managers.
  
CloudTrail is the AWS service for performing auditing and compliance within your
AWS account. It pairs with CloudWatch to analyze all the logs and data collected
from the services within your account, which can then be audited and monitored
for all activities done by users and admins within your account. This enables a
full compliance capability and will store an historical record of all account
activities. Should any investigations become necessary, all of the data is
preserved and easily searchable.
  
Shield provides protection from and mitigation of DDoS attacks on AWS services.
It is always active and monitoring AWS services, providing continual coverage
without needing to engage AWS support for assistance should an attack occur. AWS
Shield comes in two different service categories, Standard and Advanced.

AWS WAF is a web application firewall that protects web applications against
many common attacks. It comes with an array of preconfigured rules from AWS
that offer comprehensive protection based on common top security risks, but you
also have the ability to create your own rules. AWS WAF includes an API that
can be used to automate rule creation and deployment to your allocated
resources. Also included is a real-time view into your web traffic that you can
then use to automatically create new rules and alerts.
 
Remember the difference between Shield and WAF. Shield operates at the layer 3
and 4 network levels and is used to prevent DDoS attacks, versus WAF that
operates at the layer 7 content level and can take action based on the specific
contents of web traffic and requests.
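
To make the layer 7 distinction concrete, here is a toy sketch (not the real
AWS WAF API, and the pattern below is deliberately simplistic) of a
content-inspecting rule. It looks inside the request itself, something a layer
3/4 service like Shield never does:

```python
import re

# Hypothetical rule: flag query strings containing common SQL injection
# tokens. Real WAF rule sets are far more sophisticated than this.
SQLI_PATTERN = re.compile(r"('|--|;|\bUNION\b|\bDROP\b)", re.IGNORECASE)

def inspect_request(query_string: str) -> str:
    """Return 'BLOCK' if the query string matches the rule, else 'ALLOW'."""
    return "BLOCK" if SQLI_PATTERN.search(query_string) else "ALLOW"
```

A DDoS mitigation layer reasons about packet volume and connection state; a
rule like this reasons about the meaning of the bytes, which is why the two
services complement rather than replace each other.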
 
Networking and Content Delivery
AWS offers robust networking and content delivery systems that are designed to
optimize low latency and responsiveness to any queries, as well as complete
fault tolerance and high availability.
 
With Amazon Virtual Private Cloud, you can create a logically defined space
within AWS to create an isolated virtual network. Within this network, you
retain full control over how the network is defined and allocated. You fully
control the IP space, subnets, routing tables, and network gateway settings
within your VPC, and you have full use of both IPv4 and IPv6.

Security Groups in AWS are virtual firewalls that are used to control inbound
and outbound traffic. Security groups are applied on the actual instance within
a VPC versus at the subnet level. This means that in a VPC where you have many
services or virtual machines deployed, each one can have different security
groups applied to them. In fact, each instance can have up to five security
groups applied to it, allowing different policies to be enforced and
maintaining granularity and flexibility for administrators and developers.
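
The evaluation model can be sketched as follows. This is a simplified toy
model, not the real AWS API: each ingress rule allows a protocol, a port
range, and a source CIDR, and because security groups are allow-lists,
traffic matching no rule is denied.

```python
import ipaddress

# Hypothetical ingress rules: HTTPS from anywhere, SSH only from an
# internal 10.0.0.0/8 range.
RULES = [
    {"protocol": "tcp", "from_port": 443, "to_port": 443,
     "cidr": ipaddress.ip_network("0.0.0.0/0")},
    {"protocol": "tcp", "from_port": 22, "to_port": 22,
     "cidr": ipaddress.ip_network("10.0.0.0/8")},
]

def allows(protocol: str, port: int, source_ip: str) -> bool:
    """Allow traffic only if some rule matches; otherwise deny."""
    src = ipaddress.ip_address(source_ip)
    return any(
        r["protocol"] == protocol
        and r["from_port"] <= port <= r["to_port"]
        and src in r["cidr"]
        for r in RULES
    )
```

Under these rules, HTTPS is reachable from the internet while SSH from an
external address is dropped, which is a common baseline posture.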
 
ACLs
Access control lists are security layers on the VPC that control traffic at the
subnet level. This differs from security groups that are on each specific
instance. However, many times both will be used for additional layers of
security.
 
Subnets
Within a VPC, you must define a block of IP addresses that are available to it.
This is called a Classless Inter-Domain Routing (CIDR) block. By default, a VPC
will be created with a CIDR of 172.31.0.0/16. This default block encompasses
all IP addresses from 172.31.0.0 to 172.31.255.255. While the default subnet
configuration for AWS uses IPv4 addressing, IPv6 is also available if desired
or required.
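
The arithmetic behind that default block can be checked with Python's standard
ipaddress module:

```python
import ipaddress

# The default VPC CIDR block described above.
vpc = ipaddress.ip_network("172.31.0.0/16")

first, last = vpc[0], vpc[-1]   # 172.31.0.0 .. 172.31.255.255
total = vpc.num_addresses        # a /16 holds 2**16 = 65536 addresses

# Carving the VPC into /20 subnets (a common size for per-AZ subnets)
# yields 2**(20-16) = 16 subnets of 4096 addresses each.
subnets = list(vpc.subnets(new_prefix=20))
```

The same module is handy for sanity-checking any CIDR plan before you commit
it to a VPC configuration.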

Elastic Load Balancing
Elastic Load Balancing is used to distribute traffic across the AWS
infrastructure. This can be done at varying degrees of granularity, ranging
from spanning multiple Availability Zones to operating within a single
Availability Zone. It is focused on fault tolerance by implementing high
availability, security, and auto-scaling capabilities. There are three
different types of load balancing under its umbrella: application load
balancer, network load balancer, and classic load balancer.
 
The application load balancer operates at layer 7 of the OSI model, which
pertains to the actual web traffic and content. Developers can take advantage
of data such as the HTTP method, URL, parameters, headers, and so on, in order
to tune load balancing based on the specifics of their applications and the
type of traffic and user queries they receive.
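
A minimal sketch of that idea is path-based routing, one of the layer 7
decisions an application load balancer can make. The rule prefixes and target
group names below are made up for illustration:

```python
# Hypothetical path-based routing table: first matching prefix wins.
ROUTING_RULES = [
    ("/api/", "api-target-group"),
    ("/images/", "static-target-group"),
]
DEFAULT_TARGET = "web-target-group"

def route(path: str) -> str:
    """Pick a target group based on the request path (layer 7 data)."""
    for prefix, target in ROUTING_RULES:
        if path.startswith(prefix):
            return target
    return DEFAULT_TARGET
```

A layer 4 balancer could not make this choice, because the URL path only
exists once you inspect the HTTP request itself.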
 
Route 53
Amazon Route 53 is a robust, scalable, and highly available DNS service. Rather
than running their own DNS servers or being dependent on another commercial
service, an organization can utilize Route 53 to transform names into their IP
addresses, with full IPv6 compatibility. Route 53 can be used for services that
reside inside AWS, as well as those outside of it.

CloudFront
Amazon CloudFront is a CDN that allows for delivery of data and media to users
with the lowest levels of latency and the highest levels of transfer speeds.
This is done by having CloudFront systems distributed across the entire AWS
global infrastructure and fully integrated with many AWS services, such as S3,
EC2, and Elastic Load Balancing. CloudFront optimizes speed and delivery by
directing user queries to the closest location to their requests. This is
especially valuable and useful for high resource demand media such as live and
streaming video.
 
Storage
AWS offers extremely fast and expandable storage to meet the needs of any
applications or system. These offerings range from block storage used by EC2
instances to the widely used object storage of S3. AWS offers different tiers of
storage to meet specific needs of production data processing systems versus
those used for archiving and long term storage.
 
Elastic Block Store
Amazon Elastic Block Storage is a high performance storage that is used in
conjunction with EC2 where high throughput data operations are required. This
will typically include file systems, media services, and databases. There are
four types of EBS volumes that a user can pick from to meet their specific
needs.
Two of the volume types feature storage backed by solid-state drives and two use
traditional hard disk drives.

S3
Amazon Simple Storage Service is the most prominent and widely used storage
service under AWS. It offers object storage at incredibly high availability
levels, with stringent security and backups, and is used for everything from
websites, backups, and archives to big data implementations.
 
Remember that bucket names must be globally unique within AWS, and each bucket
can only exist within one region.

S3 Storage Classes
S3 offers four storage classes for users to pick from, depending on their needs.
Storage classes are set at the object level, and a bucket for a user may contain
objects using any of the storage classes concurrently.
 
S3 Standard is used for commonly accessed data and is optimized for high
throughput and low latency service. It is used widely for cloud applications,
websites, content distribution, and data analytics. Encryption is supported for
data both at rest and in transit, and the class is resilient to the loss of an
entire Availability Zone.

S3 Intelligent Tiering
Best used for data where usage patterns are unknown or may change over time.
This class works by spanning objects across two tiers: one that is optimized for
frequent access and the other for lesser access. For a small fee, AWS will
automatically move an object between the two tiers based on access.
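
The tiering decision can be sketched as a simple rule. This is a toy model of
the idea, and the 30-day threshold here is an illustrative assumption rather
than a quoted AWS figure:

```python
# Hypothetical threshold: objects untouched this long move to the
# infrequent tier; any access moves them back to the frequent tier.
THRESHOLD_DAYS = 30

def tier_for(days_since_last_access: int) -> str:
    """Pick a tier from how recently the object was accessed."""
    if days_since_last_access >= THRESHOLD_DAYS:
        return "infrequent-access"
    return "frequent-access"
```

The appeal is that the user never sets the tier by hand; the access pattern
itself drives the placement, and with it the storage cost.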
 
S3 Standard Infrequent Access
Ideally used where access will be infrequent for an object, but where access is
requested, a quick response is necessary. This is often used for backups and
disaster recovery files that are not accessed with any regularity, but when
needed there is an immediacy requirement.
 
S3 One Zone Infrequent Access
Ideal for data that is infrequently used, requires quick access when it is
accessed, but does not require the robust fault tolerance and replication of
other S3 classes. Rather than being spread across the typical three Availability
Zones, objects under this storage class are housed on a single Availability
Zone. This realizes cost savings for users, as it is cheaper than other S3
storage classes that span multiple Availability Zones.

S3 Permissions
AWS S3 offers three different layers of permissions and security controls on S3
objects: bucket policies, user policies, and object access control lists. With
bucket policies, security policies are applied at the bucket level. These
policies can apply to all objects within the bucket or just some objects. For
example, for a private bucket that you desire to only allow internal access to
objects, a bucket-level policy can be applied that automatically protects every
object within. However, if you have a mix of objects in your bucket, you may
have some that are protected to specific users for access, while others, such as
those used for public web pages, are open and available to the entire internet.
As policies are applied to objects, the ability to read, write, and delete
objects can all be controlled separately.
 
S3 Encryption
When you upload any objects to S3, the data within the object is static and
written to storage within the AWS infrastructure. If you upload an object that
contains personal or sensitive data, that will now reside in AWS and be
accessible based upon the policies and security controls that you have applied
to it. While the bucket and user policies you have in place will be applied to
those objects, it is possible for data that should be protected to slip through
your policies for various reasons. To protect the data itself, S3 supports
server-side encryption of objects at rest in addition to encryption in transit.

S3 Versioning
By default, when you update an object in S3, it is overwritten and replaces what
you previously had uploaded. In many cases this is fine, but it puts the
responsibility on the user to ensure they have a backup copy of the object or to
otherwise preserve the copy. Without doing so, once they have uploaded a new
copy, whatever existed before is gone.
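
With versioning enabled, an upload adds a new version instead of destroying
the old one. The class below is a toy in-memory model of that behavior, not
the real S3 API:

```python
class VersionedBucket:
    """Minimal sketch of a versioning-enabled object store."""

    def __init__(self):
        self._versions = {}  # key -> list of bodies, oldest first

    def put(self, key: str, body: str) -> None:
        # An overwrite appends a version rather than replacing the object.
        self._versions.setdefault(key, []).append(body)

    def get(self, key: str) -> str:
        # A plain read returns the latest version.
        return self._versions[key][-1]

    def versions(self, key: str) -> list:
        # The full history survives and can be listed or restored.
        return list(self._versions[key])
```

The earlier copy is never gone; recovering from a bad upload is just a matter
of reading an older version.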
 
S3 Object Life Cycle
To help manage versioning in AWS S3, the service provides automation tools,
called actions, to handle how versions are stored and when they are removed from
the system. This will be particularly useful as the number of objects you have
increases or with objects that are regularly updated and will begin to accrue a
large number of versions.
 
S3 Glacier and S3 Glacier Deep
S3 Glacier is a special type of S3 storage that is intended to be a secure
solution for long term data archiving and backups. As compared to regular S3
storage options, Glacier is offered at significant cost savings. These savings
are much greater when compared to the costs of on-premises storage solutions for
long term archiving. Depending on retrieval needs, S3 Glacier Deep is a subset
of Glacier that is intended for the longest term storage with the least likely
needs for retrieval.

AWS Storage Gateway
the AWS Storage Gateway provides storage for hybrid cloud services that gives
access to your on-premises resources to the full array of storage services in
AWS. This enables a customer to extend their storage capabilities into AWS
seamlessly and with very low latency. A common usage for Storage Gateway is for
customer to use AWS to store backups of images from their on-premises
environment, or to use the AWS cloud storage to backup their file shares. Many
customers also utilize Storage Gateway as a key component to their disaster
recovery strategy and planning.
 
AWS Backup
AWS Backup provides backup services for all AWS services. It provides a single
resource to configure backup policies and monitor their usage and success across
any services that you have allocated. This allows administrators to access a
single location for all backup services without having to separately configure
and monitor on a per-service basis across AWS. From the AWS Backup console,
users can fully automate backups and perform operations such as encryption and
auditing. AWS Backup has been certified as compliant with many regulatory
requirements.

AWS Snow
AWS Snow is designed for offering compute and storage capabilities to those
organizations or places that are outside the areas where AWS regions and
resources operate. Snow is based on hardware devices that contain substantial
compute and storage resources that can be used both as devices for data
processing away from the cloud and as a means to get data into and out of AWS.
This is particularly useful in situations where high speed or reliable
networking is not possible.
 
Compute Services
With any system or application, you need an underlying compute infrastructure to
actually run your code, content, or services. AWS offers the ability through EC2
to run full virtual instances that you maintain control over and can customize
as much as you like, as well as managed environments that allow you to just
upload your content or code and be quickly running, without having to worry
about the underlying environment. These include EC2, Lightsail, Elastic
Beanstalk, Lambda, and Containers.
 
EC2
Amazon Elastic Compute Cloud is the main offering for virtual servers in the
cloud. It allows users to create and deploy compute instances that they will
retain full control over and offers a variety of configuration options for
resources.

Amazon Machine Images are the basis of virtual compute instances in AWS. An
image is basically a data object that is a bootable virtual machine and can be
deployed throughout the AWS infrastructure. AMIs can be either those offered by
AWS through their Quick Start options, those offered by vendors through the AWS
Marketplace, or those created by users for their own specific needs.
 
Remember that costs for Marketplace applications will be presented as two
costs: the licensing cost from the vendor for use of the image, and the EC2
costs for hosting it and the compute/storage resources it will consume.

EC2 instance types are where the underlying hardware resources are married with
the type of image you are using. The instance type will dictate the type of CPU
used, how many virtual CPUs it has, how much memory, the type of storage used,
network bandwidth, and the underlying EBS bandwidth.
 
It is not necessary to memorize all of the instance types and which purposes
they associate with.
 
Lightsail
Lightsail is the quickest way to get into AWS for new users. It offers
blueprints that will configure fully ready systems and application stacks for
you to immediately begin using and deploying your code or data into. Lightsail
is fully managed by AWS and is designed to be a one click deployment model to
get you up and running quickly at a low cost.

Elastic Beanstalk
Elastic beanstalk is designed to be even easier and quicker to get your
applications up and running in that Lightsail is. With Elastic beanstalk, you
choose the application platform that your code is written in. Once you provision
the instance you can deploy your code into it and begin running. You only select
the platform you need, you do not select specific hardware or compute resources.
 
Lambda
AWS Lambda is a service for running code for virtually any application or back
end service. All you need to do is upload your code, and there are no systems
or
resources to manage. The code can be called by services or applications, and you
will only incur costs based on the processing time and the number of times your
code is called, as well as the memory that you allocate. You will always have
the level of resources you need available to run your code without having to
provision or monitor anything.
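
The billing model described above can be sketched as arithmetic: cost scales
with invocation count, duration, and allocated memory. The rates below are
placeholders for illustration, not current AWS pricing:

```python
# Illustrative rates only -- look up real AWS pricing before estimating.
PRICE_PER_REQUEST = 0.0000002    # hypothetical cost per invocation
PRICE_PER_GB_SECOND = 0.0000167  # hypothetical cost per GB-second

def monthly_cost(invocations: int, avg_ms: int, memory_mb: int) -> float:
    """Estimate cost from invocations, average duration, and memory."""
    gb_seconds = invocations * (avg_ms / 1000) * (memory_mb / 1024)
    return invocations * PRICE_PER_REQUEST + gb_seconds * PRICE_PER_GB_SECOND
```

The key property is that idle time costs nothing: zero invocations means zero
compute cost, unlike an always-on EC2 instance.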
 
Containers
With typical server models, there is an enormous duplication of resources when
replicas of systems are created. When you launch several instances of EC2, each
one has its own operating system and underlying functions and services that are
needed to run. Before your application code is even used, each instance is
consuming a certain amount of compute resources just to exist. A modern
approach
to this problem has been through the use of containers, such as Docker. This
allows a single system instance to host multiple virtual environments within it
while leveraging the underlying infrastructure.

Databases
As many modern applications are heavily dependent on databases, AWS has several
database service offerings that will fit any type of use needed, ranging from
relational databases to data warehousing. AWS provides robust tools for
migrating databases from legacy and on-premises systems into AWS, as well as
transitioning between different database services.
 
Database Models
Databases follow two general models: they can be either relational or
non-relational. Which one you use is entirely dependent on the needs of your
application and the type of data it is accessing and dependent on.
 
Relational databases are often referred to as structured data. A relational
database utilizes a primary key to track a unique record, but then can have many
data elements associated with it.
 
Non-relational databases are referred to as unstructured data. While their
tables also utilize a primary key, the data paired with that primary key is not
restricted to a particular type. This allows applications to store a variety of
data within their tables. However, it also restricts queries against these
tables to the primary key value, as the paired data could be in different
formats and structures, and it would not be efficient or stable to query it
from applications generally.

Make sure you understand the differences between relational and non-relational
databases and what they are used for, especially the key aspects of how they may
be searched.
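
The search difference can be shown side by side. Here in-memory SQLite stands
in for a relational engine and a plain dict stands in for a key-value store
like DynamoDB; all table and key names are made up:

```python
import sqlite3

# Relational: structured rows, queryable on any column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, city TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?, ?)",
                 [(1, "Ana", "Lisbon"), (2, "Ben", "Lisbon"), (3, "Cho", "Seoul")])
by_city = conn.execute(
    "SELECT name FROM users WHERE city = ? ORDER BY id", ("Lisbon",)
).fetchall()

# Key-value: each primary key maps to an arbitrarily shaped item, so
# efficient lookup is by key only -- note the two items differ in shape.
table = {
    "user#1": {"name": "Ana", "city": "Lisbon"},
    "user#3": {"name": "Cho", "favorite_color": "green"},
}
item = table["user#3"]
```

The relational query "everyone in Lisbon" has no cheap equivalent in the
key-value model, which is exactly the trade-off the text describes.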
 
AWS Database Migration Service
The AWS Database Migration Service is a tool for migrating data into AWS
databases from existing databases with minimal downtime or other interruptions.
The DMS can move data from most of the popular and widely used databases into
various AWs database services while the source system remains fully operational.
DMS can do migrations where the source and destination databases are the same,
such as moving from Microsoft SQL server from another location into a Microsoft
SQL server database in AWs, but it can also perform migrations where they
differ.
 
The availability of DMS is a great opportunity for any users that have been
contemplating changing back-end databases but have been cautious about the level
of effort involved in the actual migration of data.

Amazon Relational Database Service
Amazon RDS is an umbrella service that incorporates several different kinds of
database systems. Each system is fully managed by AWS and is optimized within
the AWS infrastructure for memory, performance, and I/O. All aspects of
database management, such as provisioning, configuration, maintenance,
performance monitoring, and backups, are handled by AWS, which allows the user
to fully focus on their applications and data.

Amazon Aurora
Aurora is a subset of Amazon RDS that is compatible with both MySQL and
PostgreSQL databases. It combines the features and simplicity of open-source
databases with the robust management and security of AWS services. Aurora
leverages the AWS infrastructure to offer highly optimized and fast database
services, along with the robust security and reliability of AWS.
 
DynamoDB
DynamoDB is the AWS key-value and document database solution for those
applications that do not need a SQL or relational database but do need
extremely high performance and scalable access to their data. As with other AWS
services, DynamoDB is fully configured, maintained, and secured by AWS, so all
the user needs to do is create a table and populate their data.

Amazon Redshift

Redshift is a cloud based data warehouse solution offered by AWS. Typically, an organization must spend money up front on sufficient capacity and continually add both compute and storage infrastructure to support growth and expansion, resulting in expenses for idle systems. Unlike traditional on-premises data warehouses, Redshift leverages AWS storage to any capacity that is needed by a company, either now or into the future. Organizations only incur costs for the storage they actually use, as well as the compute power they need to do analysis and retrieve data.

Automation

In AWS, automation is essential to enable systems to be rapidly and correctly deployed and configured. Amazon CloudFormation implements an automated way to model infrastructure and resources within AWS via either a text file or through the use of programming languages. This allows administrators to build out templates for the provisioning of resources that can then be repeated in a secure and reliable manner. As you build new systems or replicate services, you can be assured that they are being done in a consistent and uniform manner, negating the process of building out from a base image and then having to apply different packages, configurations, or scripts to fully build up systems to a ready state. With the use of file-based templates, CloudFormation allows infrastructure and services to be treated as code. This allows administrators to use version control to track changes to the infrastructure and use build pipelines to deploy the infrastructure changes. CloudFormation not only helps with infrastructure, it can also help build and deploy your applications.
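
A minimal sketch of infrastructure as code: a CloudFormation-style template built as data and serialized to JSON. The single resource and bucket name below are illustrative; real templates define many resource types and properties:

```python
import json

# A tiny CloudFormation-style template as a Python dict. The bucket
# name is a hypothetical example (real bucket names must be globally
# unique).
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "AppBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {"BucketName": "my-example-app-bucket"},
        }
    },
}

# Serializing the template to text is what lets it live in version
# control and flow through a build pipeline like any other code.
document = json.dumps(template, indent=2)
```

Because the template is plain text, a diff between two revisions shows exactly what infrastructure change is about to be deployed.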

End-User Computing

AWS offers powerful tools to organizations to provide end-user computing such as virtual desktops and access to applications or internally protected websites. 

WorkSpaces

Amazon WorkSpaces is a Desktop as a Service implementation that is built, maintained, configured, and secured through AWS as a managed service. WorkSpaces offers both Windows and Linux desktop solutions that can be quickly deployed anywhere throughout the AWS global infrastructure. As many organizations have moved to virtual desktop infrastructure solutions, WorkSpaces enables them to offer the same capabilities to their users without the need to actually purchase and maintain the hardware required for VDI, as well as the costs of managing and securing it.

AppStream

AppStream is a service for providing managed and streaming applications via AWS. By streaming applications, the need to download and install applications is removed, as they will be run through a browser. This eliminates the need for an organization to distribute software and support its installation and configuration for their users. This can be particularly useful to organizations like academic institutions, which can offer a suite of software to their students without the need for them to actually obtain and install it.

WorkLink

WorkLink offers users the ability to access internal applications through the use of mobile devices. Traditionally this access would be controlled and secured through the use of technologies like virtual private networks or mobile device management utilities. Both of these technologies must already be installed and configured on a user's device before they can be used.