Posts

Returning my focus to the hands-on! Day 5

I'm really trying to work through the labs in the AWS re/Start graduate environment so I can move on to programming and some more complex material on Udemy. Not to mention, lots of the labs aren't working because of silly things like IAM permissions going awry: for instance, not being given permission to view objects in the S3 bucket. A real shame.

This lab involved creating a REST API with mock endpoints. I feel like this should have come before another lab I've done, which covered this plus connecting the API endpoints to a database, but nevertheless it has helped me solidify my API understanding once again. It also gave me further exposure to the AWS SDK for Python, aka Boto3, which I'm starting to see is the core underpinning of how automated and serverless architecture is built in AWS.

I'm starting to feel really excited, because the dots of everything I've learnt are genuinely starting to compound into some serious knowledge. It feels a little bit like learning to drive, basically.
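For anyone curious what a mock endpoint looks like through Boto3, here's a minimal sketch. The API name and the canned 200 response are my own placeholders, not from the lab; a production setup would also wire up method and integration responses, which I've left out to keep the sketch short.

```python
import json

def mock_integration_params(rest_api_id, resource_id):
    """Parameters for a GET method backed by a MOCK integration:
    API Gateway answers directly with no backend behind it."""
    return {
        "restApiId": rest_api_id,
        "resourceId": resource_id,
        "httpMethod": "GET",
        "type": "MOCK",
        # A MOCK integration needs a request template that sets statusCode
        "requestTemplates": {"application/json": json.dumps({"statusCode": 200})},
    }

if __name__ == "__main__":
    import boto3

    apigw = boto3.client("apigateway")
    api = apigw.create_rest_api(name="demo-mock-api")  # hypothetical name
    root_id = apigw.get_resources(restApiId=api["id"])["items"][0]["id"]
    apigw.put_method(restApiId=api["id"], resourceId=root_id,
                     httpMethod="GET", authorizationType="NONE")
    apigw.put_integration(**mock_integration_params(api["id"], root_id))
```

The nice thing about mock endpoints is that frontend work can start before any Lambda or database exists behind the API.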

Returning my focus to the hands-on! Day 4

Another day, some more hands-on labs! This one is pretty fun: we're introducing Lambda functions that trigger on our API calls, i.e. some serverless infrastructure, again for the benefits of being loosely coupled and highly available. Comparing the above and below diagrams, you can see we're introducing two Lambda functions to resolve the API functions (or, in this instance, basically queries). Though I've called it serverless infrastructure, this architecture can also be categorised more broadly as a dynamic website; the serverless element ultimately refers to how the backend is designed.

Using a wrapper around the AWS SDK, the script uploads the intended Lambda code to an S3 bucket, and by the end of the project we have both Lambda functions created and present alongside the API, as the image below demonstrates. Here we're able to see how DynamoDB reflects what is exhibited on the e-commerce website. Great, but now let's have a little look into the actual code.
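The "upload the code to S3, then create the function" flow can be sketched in Boto3 like this. The bucket name, function name, and IAM role ARN are placeholders I've made up, not the lab's actual values, and the lab's wrapper script will differ in its details.

```python
def lambda_create_kwargs(name, bucket, key, role_arn):
    """Arguments for lambda.create_function with the deployment
    package staged in S3 rather than uploaded inline."""
    return {
        "FunctionName": name,
        "Runtime": "python3.12",
        "Role": role_arn,
        "Handler": "index.handler",
        "Code": {"S3Bucket": bucket, "S3Key": key},
    }

if __name__ == "__main__":
    import boto3

    # Stage the zipped function code in S3 first (hypothetical names)
    s3 = boto3.client("s3")
    s3.upload_file("function.zip", "my-deploy-bucket", "function.zip")

    # Then point create_function at that object
    lam = boto3.client("lambda")
    lam.create_function(**lambda_create_kwargs(
        "get-products", "my-deploy-bucket", "function.zip",
        "arn:aws:iam::123456789012:role/lambda-exec"))
```

Staging via S3 rather than an inline zip is the usual route once packages grow beyond trivial size.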

Returning my focus to the hands-on! Day 3

So I mentioned that the next few labs were going to be topology bits. One lab environment didn't go smoothly (a regular occurrence, even during the instructor-led days), so I went on to the next lab. After doing some Terraform, using the console is just a horribly slow, boring and ineffective way of getting infrastructure up in AWS. Nevertheless, I did it anyway, as I want to exhaust everything in this AWS re/Start graduate environment and move on. Here I am pinging my bastion host. Yawn. Now hopefully on to something more interesting: Lambda!

The purpose of this lab, in short, is simply to add the Lambda functions and S3 bucket, and set up the SNS topic. I like these because, for me, it's further exposure to the concept of serverless infrastructure, but it also gives me greater scope to understand how I can leverage these tools for application purposes. In this example the application backend is already in place within the lab.
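The SNS side of a lab like this boils down to very little code. Here's a minimal sketch of creating a topic and attaching an email subscription; the topic name and email address are placeholders of mine, not the lab's.

```python
def subscription_kwargs(topic_arn, email):
    """Arguments for sns.subscribe: an email endpoint on the given topic.
    The recipient must confirm the subscription before messages flow."""
    return {"TopicArn": topic_arn, "Protocol": "email", "Endpoint": email}

if __name__ == "__main__":
    import boto3

    sns = boto3.client("sns")
    # create_topic is idempotent: calling it again with the same
    # name returns the existing topic's ARN
    topic = sns.create_topic(Name="inventory-alerts")  # hypothetical name
    sns.subscribe(**subscription_kwargs(topic["TopicArn"], "me@example.com"))
```

A Lambda function can then publish to the topic's ARN, which is what makes SNS such a handy fan-out point in serverless designs.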

Returning my focus to the hands-on! Day 2

Just a quick little lab before I give my significant other a lift: creating a VPC peering connection. Unlike yesterday, there was no need to faff about creating the infrastructure, as it's prepared in the lab. Not much to say here, except that it was at least the first time (to my recollection) setting this up. Unfortunately, like many pre-set labs in this AWS learning environment, the bootstrapping often seems to fail. Here I was supposed to log into the MySQL instance in the private subnet (whose VPC has no IGW) and connect via the application server. Never mind, onto the next one!

Actually, before doing this next lab, I had started another, but as mentioned before, the AWS learning environment sometimes doesn't deploy or requires workarounds for whatever reason. In this instance, the script was supposed to grant S3 bucket access to my IP (which I enter in the Cloud9 terminal), but unfortunately this didn't work. Unable to proceed, I moved on to the next lab.
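For reference, a peering connection involves three steps: request, accept, and routes on both sides. A minimal Boto3 sketch, with made-up VPC and route table IDs standing in for the lab's values:

```python
def peering_route(peer_cidr, peering_id):
    """Route table entry sending traffic destined for the peer
    VPC's CIDR block through the peering connection."""
    return {"DestinationCidrBlock": peer_cidr,
            "VpcPeeringConnectionId": peering_id}

if __name__ == "__main__":
    import boto3

    ec2 = boto3.client("ec2")
    # Request peering between two VPCs (IDs are hypothetical)
    pcx = ec2.create_vpc_peering_connection(VpcId="vpc-aaaa",
                                            PeerVpcId="vpc-bbbb")
    pcx_id = pcx["VpcPeeringConnection"]["VpcPeeringConnectionId"]
    ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

    # Traffic only flows once BOTH VPCs route to each other via the pcx
    ec2.create_route(RouteTableId="rtb-1111",
                     **peering_route("10.1.0.0/16", pcx_id))
    ec2.create_route(RouteTableId="rtb-2222",
                     **peering_route("10.0.0.0/16", pcx_id))
```

The routes are the bit most people forget: accepting the connection alone doesn't move a single packet.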

Returning my focus to the hands-on!

Having finished the bootcamp and passed the CCP, I wanted to work through and exhaust the resources available to me. The shortest of these was 3 months of free LinkedIn Learning. Unfortunately, however, this involved lots of theory and not a lot of hands-on; I'll be elaborating on this in a further blog post. Having done an initial Udemy course invoking infrastructure with code, this felt so slow! But nevertheless, it was a great reminder of the best and most appropriate way to deploy.

These two labs are just little "projects" in the graduate AWS re/Start learner environment we have for six months. The first little thing I did today was replicate the three-tier architecture above. In short, I established a Virtual Private Cloud (VPC) with four subnets (1 public, 3 private; hello again, CIDR blocks!) spread across two availability zones for redundancy, set up the internet and NAT gateways, and built the different security groups for each tier.
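Since the console route is so slow, here's roughly what the same VPC skeleton looks like through Boto3. The CIDR carving is real, but the region, AZs, and the choice of /24 subnets are my own assumptions, not the lab's spec, and NAT gateway and security group creation are omitted for brevity.

```python
import ipaddress

def carve_subnets(vpc_cidr, new_prefix, count):
    """Split a VPC CIDR into the first `count` equally sized
    subnet CIDRs of the given prefix length."""
    net = ipaddress.ip_network(vpc_cidr)
    return [str(n) for n in list(net.subnets(new_prefix=new_prefix))[:count]]

if __name__ == "__main__":
    import boto3

    ec2 = boto3.client("ec2")
    vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]

    # Spread 4 subnets (1 public, 3 private) across two AZs (hypothetical region)
    azs = ["eu-west-2a", "eu-west-2b"]
    for i, cidr in enumerate(carve_subnets("10.0.0.0/16", 24, 4)):
        ec2.create_subnet(VpcId=vpc["VpcId"], CidrBlock=cidr,
                          AvailabilityZone=azs[i % len(azs)])

    # Internet gateway for the public tier
    igw = ec2.create_internet_gateway()["InternetGateway"]
    ec2.attach_internet_gateway(InternetGatewayId=igw["InternetGatewayId"],
                                VpcId=vpc["VpcId"])
```

Even this sketch shows why scripting wins: the subnet maths is done once in code instead of by hand in four console forms.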

Managing your AWS Resource Consumption

AWS Organizations is a centralized account management service that allows you to consolidate multiple AWS accounts into a structured organization, offering consolidated billing and enhanced account management for improved budget, security, and compliance management. The organizational structure is a hierarchy of organizational units (OUs) under a root, resembling an upside-down tree. Policies attached to nodes in this hierarchy cascade down to affect all branches and leaves, ensuring consistent controls.

The service enables centrally managed access policies, controlled access to AWS services, and automated AWS account creation. However, it doesn't replace AWS Identity and Access Management (IAM) policies, which are applied to individual IAM users, groups, and roles within an account. In contrast, AWS Organizations uses Service Control Policies (SCPs) to regulate access to AWS services for entire accounts or groups of accounts within an OU, affecting all users, groups, and roles in those accounts.
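To make the SCP idea concrete, here's a sketch of a common pattern: a policy denying all activity outside one region, created and attached to an OU via Boto3. The policy content, names, and the OU ID are illustrative examples of mine, not recommendations.

```python
import json

# Hypothetical SCP: deny every action requested outside eu-west-2.
# SCPs set a ceiling; they never grant permissions on their own.
DENY_OTHER_REGIONS = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": "*",
        "Resource": "*",
        "Condition": {
            "StringNotEquals": {"aws:RequestedRegion": "eu-west-2"}
        },
    }],
}

if __name__ == "__main__":
    import boto3

    org = boto3.client("organizations")
    policy = org.create_policy(
        Name="deny-other-regions",
        Description="Restrict usage to eu-west-2",
        Type="SERVICE_CONTROL_POLICY",
        Content=json.dumps(DENY_OTHER_REGIONS),
    )
    org.attach_policy(
        PolicyId=policy["Policy"]["PolicySummary"]["Id"],
        TargetId="ou-example",  # hypothetical OU id
    )
```

Because the policy is attached at the OU, it cascades to every account below that node, exactly as the upside-down-tree model describes.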

A brief introduction to CloudWatch

Amazon CloudWatch monitors the performance and health of our resources and applications in AWS. As a result it lets us:

- Track resource and application performance
- Collect and monitor log files
- Get notified when an alarm goes off

CloudWatch consists of three primary components: metrics, alarms, and events.

When running applications on Amazon EC2 instances, monitoring workload performance is crucial. This involves addressing two key questions: ensuring sufficient EC2 resources for fluctuating performance requirements, and automating resource provisioning on demand. While Amazon CloudWatch facilitates performance monitoring and log file collection, it doesn't directly manage EC2 instances. Amazon EC2 Auto Scaling is the solution there, as it enables dynamic scaling to maintain fleet health and availability during demand fluctuations. Amazon CloudWatch serves as a distributed statistics-gathering system, collecting and tracking metrics, including custom ones, and triggering notifications via alarms.
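As a concrete example of the metric-plus-alarm pairing, here's a minimal Boto3 sketch of an alarm on an EC2 instance's CPU. The instance ID and 80% threshold are placeholders; a real setup would also add `AlarmActions` (e.g. an SNS topic or an Auto Scaling policy) so the alarm actually does something.

```python
def cpu_alarm_kwargs(instance_id, threshold=80.0):
    """Alarm that fires when average CPUUtilization exceeds the
    threshold for two consecutive 5-minute periods."""
    return {
        "AlarmName": f"high-cpu-{instance_id}",
        "Namespace": "AWS/EC2",
        "MetricName": "CPUUtilization",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "Statistic": "Average",
        "Period": 300,            # seconds per evaluation window
        "EvaluationPeriods": 2,   # breach must persist for 10 minutes
        "Threshold": threshold,
        "ComparisonOperator": "GreaterThanThreshold",
    }

if __name__ == "__main__":
    import boto3

    cw = boto3.client("cloudwatch")
    cw.put_metric_alarm(**cpu_alarm_kwargs("i-0123456789abcdef0"))
```

Wiring an alarm like this to an EC2 Auto Scaling policy is exactly how the "automate provisioning on demand" question above gets answered.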