API Challenge
Goal
Deploy an API application in a highly available and scalable manner, ensuring that the API can handle a significant load and recover from failures without downtime.
Tasks
- Deploy a sample API application on AWS. The API should be stateless to simplify scaling.
- Configure a load balancer to distribute incoming API requests across multiple instances of the API application.
- Set up auto-scaling for the API instances so the instance count adjusts automatically with load (a CDK sketch of the load balancer and Auto Scaling Group follows this list).
- Simulate API instance failures and demonstrate how the system automatically recovers, maintaining availability.
- Integrate a monitoring tool (AWS CloudWatch, Grafana, or any other) to track and alert on the metrics you consider vital for the application (a monitoring sketch also follows this list).
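
The sketch below is one possible CDK (TypeScript) shape for the first three tasks: a stateless fleet behind an Application Load Balancer with target-tracking auto-scaling. Stack and construct names, the instance size, the AMI, the /health path, and the 50% CPU target are illustrative assumptions, not requirements of the challenge.

```typescript
import { Stack, StackProps, Duration } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as autoscaling from 'aws-cdk-lib/aws-autoscaling';
import * as elbv2 from 'aws-cdk-lib/aws-elasticloadbalancingv2';

export class ApiChallengeStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Two AZs so the load balancer and the fleet survive a zone failure.
    const vpc = new ec2.Vpc(this, 'ApiVpc', { maxAzs: 2 });

    // Stateless API instances managed by an Auto Scaling Group.
    const asg = new autoscaling.AutoScalingGroup(this, 'ApiAsg', {
      vpc,
      instanceType: ec2.InstanceType.of(ec2.InstanceClass.T3, ec2.InstanceSize.MICRO),
      machineImage: ec2.MachineImage.latestAmazonLinux2023(),
      minCapacity: 2,
      maxCapacity: 6,
      // userData would install and start the sample API on port 80.
    });

    // Target-tracking policy: keep average CPU around 50%, adding or
    // removing instances as the load changes.
    asg.scaleOnCpuUtilization('CpuScaling', { targetUtilizationPercent: 50 });

    // Internet-facing load balancer distributing requests across the fleet.
    const alb = new elbv2.ApplicationLoadBalancer(this, 'ApiAlb', {
      vpc,
      internetFacing: true,
    });
    const listener = alb.addListener('HttpListener', { port: 80 });
    listener.addTargets('ApiTargets', {
      port: 80,
      targets: [asg],
      // Failed health checks take an instance out of rotation, which is
      // what makes the failure simulation in the tasks above observable.
      healthCheck: { path: '/health', interval: Duration.seconds(15) },
    });
  }
}
```

The same shape can be expressed in Terraform/OpenTofu or CloudFormation if you prefer those tools; the key properties to keep are multiple AZs, a minimum of two instances, and health-check-driven target registration.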
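For the monitoring task, a minimal standalone sketch of CloudWatch alarms on the load balancer metrics is shown below. The SNS topic, the thresholds, and the `loadBalancerFullName` parameter are assumptions; in practice that value would come from the stack that creates the ALB (e.g. `alb.loadBalancerFullName`).

```typescript
import { Stack, StackProps, Duration } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as cloudwatch from 'aws-cdk-lib/aws-cloudwatch';
import * as cw_actions from 'aws-cdk-lib/aws-cloudwatch-actions';
import * as sns from 'aws-cdk-lib/aws-sns';

interface MonitoringStackProps extends StackProps {
  loadBalancerFullName: string; // e.g. "app/api-alb/50dc6c495c0c9188" (placeholder)
}

export class MonitoringStack extends Stack {
  constructor(scope: Construct, id: string, props: MonitoringStackProps) {
    super(scope, id, props);

    // Alerts fan out through SNS (email, chat, paging, ...).
    const alertTopic = new sns.Topic(this, 'ApiAlerts');

    // 5xx responses returned by the load balancer itself usually mean
    // there were no healthy targets to route to.
    const elb5xx = new cloudwatch.Metric({
      namespace: 'AWS/ApplicationELB',
      metricName: 'HTTPCode_ELB_5XX_Count',
      dimensionsMap: { LoadBalancer: props.loadBalancerFullName },
      statistic: 'Sum',
      period: Duration.minutes(1),
    });
    new cloudwatch.Alarm(this, 'Elb5xxAlarm', {
      metric: elb5xx,
      threshold: 5,
      evaluationPeriods: 1,
    }).addAlarmAction(new cw_actions.SnsAction(alertTopic));

    // Rising latency is an early signal that the fleet is under-scaled.
    const latency = new cloudwatch.Metric({
      namespace: 'AWS/ApplicationELB',
      metricName: 'TargetResponseTime',
      dimensionsMap: { LoadBalancer: props.loadBalancerFullName },
      statistic: 'p95',
      period: Duration.minutes(1),
    });
    new cloudwatch.Alarm(this, 'LatencyAlarm', {
      metric: latency,
      threshold: 1, // seconds
      evaluationPeriods: 3,
    }).addAlarmAction(new cw_actions.SnsAction(alertTopic));
  }
}
```

Healthy host count, request count, and per-instance CPU are other natural candidates for a dashboard; Grafana can read the same CloudWatch metrics if you prefer it over the native console.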
Deliverables
- An architecture diagram illustrating the proposed solution. Feel free to include details about decisions and trade-offs.
- An Infrastructure as Code (IaC) project for deploying the solution (preferably CDK; Terraform/OpenTofu, CloudFormation, etc. are also acceptable).
- Failure simulation procedures or scripts to demonstrate the system's resilience (a sketch follows this list).
- Documentation on the monitoring setup, including key metrics to watch and how to access and interpret the data.
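
One possible failure-simulation script, using the AWS SDK for JavaScript v3: it terminates a random instance in the Auto Scaling Group without lowering desired capacity, so the group launches a replacement while the load balancer routes around the failure. The group name is a hypothetical placeholder matching the CDK sketch above.

```typescript
import {
  AutoScalingClient,
  DescribeAutoScalingGroupsCommand,
  TerminateInstanceInAutoScalingGroupCommand,
} from '@aws-sdk/client-auto-scaling';

const ASG_NAME = 'ApiAsg'; // hypothetical group name

async function simulateInstanceFailure(): Promise<void> {
  const client = new AutoScalingClient({});

  // Find the instances currently registered in the group.
  const { AutoScalingGroups } = await client.send(
    new DescribeAutoScalingGroupsCommand({ AutoScalingGroupNames: [ASG_NAME] })
  );
  const instances = AutoScalingGroups?.[0]?.Instances ?? [];
  if (instances.length === 0) {
    throw new Error(`No instances found in ${ASG_NAME}`);
  }

  // Terminate one at random; the group should detect the lost capacity
  // and replace it while the ALB keeps serving from healthy instances.
  const victim = instances[Math.floor(Math.random() * instances.length)];
  console.log(`Terminating ${victim.InstanceId} ...`);
  await client.send(
    new TerminateInstanceInAutoScalingGroupCommand({
      InstanceId: victim.InstanceId!,
      ShouldDecrementDesiredCapacity: false,
    })
  );
  console.log('Watch the target group health checks and ASG activity for recovery.');
}

simulateInstanceFailure().catch((err) => {
  console.error(err);
  process.exit(1);
});
```

Running the script while a load generator hits the API, and capturing the target group health and ASG activity history before and after, is a simple way to evidence the recovery behaviour.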
Evaluation Criteria
- Correctness: The API deployment works as intended, effectively balancing the load and scaling in response to traffic changes.
- Resilience: The system demonstrates robustness against instance failures, with the load balancer quickly rerouting traffic to healthy instances.
- Documentation: Clear and thorough documentation that allows for easy replication of the setup and understanding of the system architecture.
- Monitoring: Effective implementation of monitoring tools to track the API's health, performance, and usage patterns.
Bonus: