Final Architecture Integration
How the Components Fit Together
What you built is a small but complete picture of how AWS compute works. An EC2 instance lives inside a subnet inside a VPC. The Internet Gateway attached to the default VPC is what makes public IP routing possible. The security group sits at the instance boundary, evaluating every inbound connection against its rules — in this case, permitting only your IP on port 22. The IAM instance profile is not a network component; it is an identity component, telling the EC2 service which role to make available via the metadata endpoint.
The data flow for an SSH connection is: your workstation issues a TCP SYN to the instance’s public IP on port 22. AWS routes the packet to the Internet Gateway, which maps the public IP to the private IP of your instance. The security group evaluates the source IP — if it matches your /32 rule, the packet is forwarded to the instance. The SSH daemon accepts the connection, validates your private key against the stored public key, and opens a shell session.
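That security-group evaluation is, at its core, a source-address match against a CIDR block. A minimal sketch of the check (illustrative only; the `Rule` type and `allows` function are ours, not an AWS API):

```python
import ipaddress
from dataclasses import dataclass

@dataclass
class Rule:
    protocol: str   # e.g. "tcp"
    port: int       # e.g. 22
    cidr: str       # e.g. "203.0.113.7/32" -- a single-address rule

def allows(rules, protocol, port, source_ip):
    """Return True if any inbound rule matches (security groups are allow-only)."""
    src = ipaddress.ip_address(source_ip)
    return any(
        r.protocol == protocol
        and r.port == port
        and src in ipaddress.ip_network(r.cidr)
        for r in rules
    )

rules = [Rule("tcp", 22, "203.0.113.7/32")]  # the /32 SSH rule from the lab
print(allows(rules, "tcp", 22, "203.0.113.7"))   # matching workstation IP -> True
print(allows(rules, "tcp", 22, "198.51.100.9"))  # any other source -> False
```

Security groups are also stateful: once the inbound SYN is permitted, reply traffic is allowed back out without a matching outbound rule.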
The IAM instance profile participates in a separate flow entirely. When a process on the instance calls an AWS API, the AWS SDK automatically queries http://169.254.169.254/latest/meta-data/iam/security-credentials/<role-name> to retrieve temporary credentials. (On instances that enforce IMDSv2, this request must first fetch a session token; the SDKs handle that handshake as well.) Those credentials — an access key ID, secret access key, and session token — are valid for up to six hours and are rotated automatically before they expire. The SDK handles all of this transparently. The key architectural point is that no credential file exists on disk. The credential lifecycle is managed entirely by the STS service on the AWS side.
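The credentials document returned by that endpoint is plain JSON. A sketch of what a consumer sees, with sample values standing in for real STS output (the field names match the metadata response format; `seconds_remaining` is our illustrative helper, not an SDK function):

```python
import json
from datetime import datetime, timezone

# Shape of the document served by the instance-metadata credentials endpoint.
# Sample values only; a real response carries live temporary STS credentials.
sample = """{
  "Code": "Success",
  "Type": "AWS-HMAC",
  "AccessKeyId": "ASIAEXAMPLE",
  "SecretAccessKey": "example-secret",
  "Token": "example-session-token",
  "Expiration": "2030-01-01T12:00:00Z"
}"""

def seconds_remaining(doc, now=None):
    """How much longer the temporary credentials in `doc` remain valid."""
    creds = json.loads(doc)
    expiry = datetime.fromisoformat(creds["Expiration"].replace("Z", "+00:00"))
    now = now or datetime.now(timezone.utc)
    return (expiry - now).total_seconds()
```

The SDKs watch that `Expiration` timestamp and re-query the endpoint well before it passes, which is why application code never sees a credential rotate.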
Security Boundaries
There are two distinct security boundaries in this architecture. The first is the network boundary: the security group controls which traffic can reach the instance at all. The second is the identity boundary: the IAM role controls what the instance can do once it is running. These boundaries are independent. A misconfigured security group that exposes the instance to the internet does not grant additional AWS API permissions — those are governed entirely by the IAM role. Conversely, an overly permissive IAM role does not widen the network attack surface.
In production, defense in depth means both boundaries are tightly controlled. An instance should have the minimum network exposure required for its function, and its IAM role should grant only the permissions explicitly required — nothing more.
Scaling Path
The instance you built cannot scale horizontally on its own. To move from this single instance to a horizontally scalable fleet, the steps are:
First, convert the manual launch configuration into a Launch Template. A Launch Template captures every parameter you set during launch — AMI ID, instance type, key pair, security group, IAM instance profile — as a versioned, reusable document. This is the artifact an Auto Scaling Group consumes.
Second, create an Auto Scaling Group referencing the Launch Template, configured to span at least two Availability Zones. Set a minimum capacity of 1, desired capacity of 2, and a maximum capacity appropriate to your expected load. The ASG will maintain the desired count, replacing unhealthy instances automatically.
Third, place an Application Load Balancer in front of the ASG. The ALB receives traffic on port 80 or 443 and distributes it across healthy instances. At this point, individual instances have no public IPs — all inbound traffic flows through the ALB, and the security group on the instances permits inbound traffic only from the ALB’s security group, not from the internet directly.
That three-step progression is the direct architectural evolution of what you built in this lab.
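Those three steps map onto a handful of API calls. As a sketch, the request parameters can be built as plain data in the shape boto3 expects (all IDs and names below are placeholders, not real resources):

```python
def launch_template_request(name, ami_id, instance_type, sg_id, profile_name, key_name):
    # Step one: capture the manual launch parameters as a reusable document.
    return {
        "LaunchTemplateName": name,
        "LaunchTemplateData": {
            "ImageId": ami_id,
            "InstanceType": instance_type,
            "KeyName": key_name,
            "SecurityGroupIds": [sg_id],
            "IamInstanceProfile": {"Name": profile_name},
        },
    }

def asg_request(name, template_name, subnet_ids, target_group_arn):
    # Step two: min 1 / desired 2 across subnets in at least two AZs,
    # registered with the ALB's target group (step three).
    return {
        "AutoScalingGroupName": name,
        "LaunchTemplate": {"LaunchTemplateName": template_name, "Version": "$Latest"},
        "MinSize": 1,
        "DesiredCapacity": 2,
        "MaxSize": 4,
        "VPCZoneIdentifier": ",".join(subnet_ids),  # comma-separated subnet list
        "TargetGroupARNs": [target_group_arn],
    }
```

These dicts would be passed to `ec2.create_launch_template(**...)` and `autoscaling.create_auto_scaling_group(**...)`; creating the ALB and its target group follows the same request-building pattern.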
Failure Paths in a Production Extension
Understanding failure behavior matters as much as understanding the happy path. Consider the following failure scenarios against the architecture you built, and how the production extension handles each.
If the underlying EC2 host fails, your single instance is gone and unrecoverable until you manually relaunch. In the ASG model, the ASG detects the instance health check failure within minutes and launches a replacement automatically, typically restoring capacity within three to five minutes.
If the Availability Zone itself experiences a disruption — which is rare but has occurred in AWS history — a single-AZ deployment is completely unavailable. A multi-AZ ASG will shift all traffic to instances in unaffected zones, provided the ALB is also multi-AZ, which it is by default.
If your AMI contains a bug that causes all new instances to fail their health checks, the ASG will enter a launch failure loop. This is why blue/green deployment patterns exist — you validate the new AMI against a parallel environment before shifting traffic.
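The behavior in all three scenarios reduces to a reconciliation loop: compare the healthy count to the desired count and launch until they match. A deliberately simplified toy (not the real ASG algorithm) shows why a bad AMI turns that loop into a launch-failure loop:

```python
def reconcile(healthy_count, desired, launch_succeeds, max_attempts=5):
    """One toy reconciliation pass: launch replacements until desired
    capacity is met or the attempt budget is spent. A bad AMI that fails
    every health check spends the whole budget without gaining capacity."""
    attempts = 0
    while healthy_count < desired and attempts < max_attempts:
        attempts += 1
        if launch_succeeds():
            healthy_count += 1
    return healthy_count, attempts

# Healthy AMI: one lost instance is replaced in a single attempt.
print(reconcile(1, 2, lambda: True))   # (2, 1)
# Bad AMI: every launch fails its health check; capacity never recovers.
print(reconcile(1, 2, lambda: False))  # (1, 5)
```

The real service keeps retrying indefinitely rather than stopping at a budget, which is exactly the loop blue/green validation is meant to prevent.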
Final Reflection
The instance you deployed in this lab is not production-ready. That is intentional. It is a clear, correct implementation of the foundational components — networking, compute, identity — that all production EC2 architectures are built from. Every decision you made here has a direct production analog: the /32 security group rule becomes an ALB-to-instance security group chain; the manually attached IAM role becomes a role with carefully scoped policies defined in Terraform; the console-based launch becomes a Launch Template consumed by an Auto Scaling Group.
The value of building this correctly from scratch, rather than clicking through a wizard without understanding the outputs, is that you now have a mental model for debugging. When an instance is unreachable, you know to check the security group before the instance itself. When an application on an instance cannot call S3, you know to check the IAM role before the network. When an instance fails to launch in an ASG, you know the Launch Template is the first place to look.
That debugging intuition is what this lab was designed to build.
