In the landscape of software development, staying current with technology stacks is essential. My recent project involved modernizing a legacy application from an older Symfony version to Symfony 7. I also implemented a deployment pipeline to facilitate Continuous Delivery as part of the transition to the AWS cloud environment.
The Symfony upgrade
Upgrading to Symfony 7 was a relatively smooth process, though it required some effort.
I made several key changes to the application to transition it to the cloud:
- I transitioned the storage of assets and image uploads to an external service, specifically Amazon S3.
- I configured the application for full containerization, setting up distinct containers for the web server, application server, and worker to ensure modularity and scalability.
- To enhance efficiency, I established a containerized proxy that manages the media flow from the storage system to the web, incorporating dynamic image resizing and caching functionalities.
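The container layout described above can be sketched roughly like this (service names, image names, and the worker command are illustrative, not the project's actual configuration):

```yaml
# docker-compose.yml — illustrative sketch of the web / app / worker / proxy split
services:
  web:
    image: nginx:1.25                              # web server in front of the app
    ports:
      - "80:80"
    depends_on:
      - app
  app:
    image: registry.example.com/app:latest         # PHP-FPM + Symfony application
    environment:
      APP_ENV: prod
  worker:
    image: registry.example.com/app:latest         # same image, different entrypoint
    command: php bin/console messenger:consume async
  media-proxy:
    image: registry.example.com/media-proxy:latest # resizing/caching proxy in front of S3
```

Running the worker from the same image as the application server keeps the deployable artifact to a single image per release.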
Cloud migration to AWS
Upon reviewing both the requirements and the budget, I selected AWS LightSail as the hosting solution. LightSail provides cost-effective instances that include a substantial bandwidth allowance, which suited our needs. Additionally, LightSail offers a managed database service.
Currently, combining EC2 with RDS or using Elastic Beanstalk doesn't present any notable benefits compared to LightSail for our purposes. While ECS does have its merits, the increased cost doesn't justify the switch at this stage. Transitioning from LightSail to EC2 would be uncomplicated, and moving to ECS could be done with relative ease, especially given the deployment pipelines already established for this project.
Continuous delivery with deployment pipelines
The highlight of this project was developing the deployment pipelines using AWS CodePipeline.
The strategy for the deployment pipeline involved crafting Docker images that are self-sufficient and deployable to any target equipped with an orchestration service. This setup is designed to be compatible with various systems, whether it's Docker Compose, Docker Stack, external servers, ECS, Kubernetes, or similar environments.
I developed two separate pipelines: the first for creating base images, which ensure uniformity and repeatability across development, testing, and production stages; the second for assembling the application, resulting in a completely self-contained runnable image.
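The application build can consume the base image along these lines (the registry URL, image name, and build steps are assumptions for illustration; in the pipeline the tag is supplied from Parameter Store rather than hardcoded):

```dockerfile
# Illustrative application Dockerfile building on the pipeline's base image
ARG BASE_IMAGE_TAG=latest
FROM 123456789012.dkr.ecr.eu-west-1.amazonaws.com/app-base:${BASE_IMAGE_TAG}

WORKDIR /var/www/app
COPY composer.json composer.lock ./
RUN composer install --no-dev --optimize-autoloader
COPY . .
RUN bin/console cache:warmup --env=prod
```

Passing the tag as a build argument is what makes the two pipelines independent: a new base image only requires bumping one value.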
Although the images are ready to deploy directly on ECS, the goal was to deploy them to external (LightSail) servers.
For this, I created a Lambda function that runs in the deploy stage and triggers a 'docker stack deploy' on the deployment target.
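A minimal sketch of such a Lambda, assuming the LightSail instance is registered with Systems Manager so it can receive Run Command invocations (the stack name, compose path, and target tag are illustrative, not the project's actual values):

```python
# Deploy-stage Lambda sketch: sends a `docker stack deploy` to the target
# host via SSM Run Command. All names and paths here are assumptions.

def build_deploy_commands(stack_name: str, compose_file: str) -> list[str]:
    """Shell commands the Lambda sends to the deployment target."""
    return [
        f"docker stack deploy --with-registry-auth "
        f"--compose-file {compose_file} {stack_name}",
    ]

def handler(event, context):
    import boto3  # available in the Lambda Python runtime
    ssm = boto3.client("ssm")
    ssm.send_command(
        Targets=[{"Key": "tag:Role", "Values": ["app-host"]}],  # assumed tag
        DocumentName="AWS-RunShellScript",
        Parameters={
            "commands": build_deploy_commands(
                "app", "/srv/app/docker-compose.yml"
            )
        },
    )
```

Keeping the command construction in its own function makes the Lambda easy to unit-test without touching AWS.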
An AWS feature I've started using recently is the Parameter Store, part of AWS Systems Manager. Essentially, it's a service that stores key-value pairs and encrypted secrets.
From what I've learned in my AWS DevOps training, the Parameter Store is ideal for centralizing application configurations. It can also be used to track image versions, such as base and currently deployed Docker images or AMI IDs.
In this project, the Parameter Store proved invaluable:
- I used it to store the version number of the base Docker image during the build process, which avoids hardcoding and simplifies rollbacks.
- The build process for the application retrieves necessary details from the Parameter Store, including the base image version and various application configuration parameters.
- A lightweight 'agent' application on the deployment target consults the Parameter Store to determine which images to deploy and to pull additional application and deployment configurations.
In essence, the Parameter Store acts as a single repository for the deployed versions of application components, credentials for the deployment platform, and environment configuration files.
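The agent's core logic can be sketched as follows, assuming a parameter path like /app/prod/image-tag (the path and function names are illustrative, not the project's actual implementation):

```python
# Sketch of the deployment-target agent: read the desired image tag from
# Parameter Store and redeploy only when it differs from the running one.

def needs_redeploy(running_tag: str, desired_tag: str) -> bool:
    """Deploy only when the Parameter Store value has actually changed."""
    return desired_tag != "" and desired_tag != running_tag

def fetch_desired_tag(path: str = "/app/prod/image-tag") -> str:
    import boto3  # requires instance credentials with ssm:GetParameter
    ssm = boto3.client("ssm")
    return ssm.get_parameter(Name=path)["Parameter"]["Value"]
```

Because the agent only reads parameters, its IAM policy can stay narrow: read access to one path prefix is enough.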
I also integrated a 'CDN gateway' into the LightSail setup, enhancing a legacy service primarily used for image resizing.
This setup changes how assets are delivered. Rather than serving directly from S3, or through S3 combined with Cloudflare or CloudFront, the gateway intermediates the delivery.
The CDN gateway functions as a reverse proxy and introduces several enhancements:
- Terminates SSL.
- Uses Nginx caching to optimize content delivery.
- Performs on-demand image resizing via Nginx, or other middleware if needed.
- Applies rate limiting to resizing operations to prevent overload.
- Routes requests to different storage locations based on predefined rules.
Essentially, this gateway separates asset delivery and resizing from the main application and storage system, managing resource load and preventing system stress at multiple points.
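A configuration along these lines could implement the gateway's core behaviors (bucket name, paths, zone sizes, and rate limits are assumptions, and it requires Nginx built with the image_filter module):

```nginx
# Illustrative Nginx gateway sketch, not the project's actual config
proxy_cache_path /var/cache/nginx/media levels=1:2 keys_zone=media:10m max_size=1g;
limit_req_zone $binary_remote_addr zone=resize:10m rate=10r/s;

server {
    listen 443 ssl;                                # SSL termination at the gateway
    ssl_certificate     /etc/ssl/gateway.crt;
    ssl_certificate_key /etc/ssl/gateway.key;

    # On-demand resizing: /resize/300/photo.jpg -> 300px-wide image
    location ~ ^/resize/(\d+)/(.+)$ {
        limit_req zone=resize burst=20;            # protect the resizer
        image_filter resize $1 -;                  # keep aspect ratio
        proxy_pass https://example-bucket.s3.amazonaws.com/$2;
        # Note: proxy_cache stores the upstream object; caching the resized
        # output itself would need a second, internal caching tier.
        proxy_cache media;
        proxy_cache_valid 200 7d;
    }

    # Everything else is proxied straight to storage and cached
    location / {
        proxy_pass https://example-bucket.s3.amazonaws.com;
        proxy_cache media;
        proxy_cache_valid 200 1h;
    }
}
```

Routing to multiple storage back ends would add further location blocks with different proxy_pass targets.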