One client encountered the limits of their traditional infrastructure.
We migrated their applications (a web application, an API and several background workers) entirely to AWS.
The system is now fully scalable and ready for growth!
We set up the infrastructure to autoscale as much as possible. During busy periods or traffic peaks, extra servers are plugged in automatically. Overnight, once traffic drops, these servers are automatically unplugged and no longer contribute to costs.
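The scale-out / scale-in decision described above can be sketched as follows. The thresholds, cooldown-free logic and server counts here are hypothetical illustrations, not the client's actual AWS Auto Scaling configuration:

```python
# Illustrative sketch of threshold-based autoscaling: add a server when
# average CPU load is high, remove one when it is low. All numbers are
# made up for the example.

def desired_capacity(current: int, avg_cpu: float,
                     scale_out_at: float = 70.0,
                     scale_in_at: float = 30.0,
                     minimum: int = 2, maximum: int = 10) -> int:
    """Return the new server count for the observed average CPU load."""
    if avg_cpu > scale_out_at:
        return min(current + 1, maximum)   # busy period: plug in a server
    if avg_cpu < scale_in_at:
        return max(current - 1, minimum)   # quiet overnight: unplug one
    return current                         # within the normal band: no change

print(desired_capacity(2, 85.0))  # peak traffic -> 3
print(desired_capacity(3, 12.0))  # overnight    -> 2
```

In the real AWS setup this decision is made by scaling policies reacting to CloudWatch metrics, but the principle is the same: capacity follows load.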
The RDS database is serverless and autoscales, so the database too is ready for high peak loads. Because the database is serverless, maintenance is minimal. The data is automatically replicated across multiple Availability Zones, and even database memory is automatically added and removed as required.
Load balancing, fault tolerance and deployment without downtime
The production clusters are load balanced: multiple servers handle the requests. If a server fails, a replacement is automatically launched to take its place; in the meantime, the remaining servers take over its load.
This setup also makes deployments without downtime possible. When there is a new application version, an entirely new cluster is built in the background. Only once this has succeeded without errors and passed a set of tests does the new cluster replace the old one.
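The swap described above can be sketched as a simple blue/green promotion: a candidate cluster is built and checked, and traffic only moves to it if the checks pass. The names and the health-check callback are hypothetical stand-ins, not the actual deployment tooling:

```python
# Hedged sketch of a blue/green deployment swap: the old cluster keeps
# serving traffic unless the new one passes its checks.

def deploy(live_cluster: dict, new_version: str, healthy) -> dict:
    """Build a candidate cluster; promote it only if its checks pass."""
    candidate = {"version": new_version, "servers": live_cluster["servers"]}
    if not healthy(candidate):
        return live_cluster   # tests failed: keep serving the old cluster
    return candidate          # swap succeeded: zero downtime for visitors

live = {"version": "1.0", "servers": 3}
print(deploy(live, "1.1", healthy=lambda c: True)["version"])   # "1.1"
print(deploy(live, "1.2", healthy=lambda c: False)["version"])  # "1.0"
```

The key design point is that a failed deployment never touches the running cluster, so an error in a new version cannot cause an outage.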
A setup like this requires a solid deployment pipeline. With this pipeline, developers can easily push a new application version and need not worry about any further infrastructure steps - those are automated.
The deployment pipeline automatically detects the new application version and starts a build process in which the application is configured and assembled.
The resulting application package is then deployed to the cluster.
Monitoring and logging with Artificial Intelligence
The system as a whole is set up to require as little attention as possible (a no-ops philosophy).
Nevertheless, we need metrics on how the system is functioning: are we under attack? Is the application still performing well? Is a new application version generating errors? Are we being cost effective?
For this we set up elaborate logging, so we can view application and server logs on the fly and see what is actually happening (this does require an expert eye).
For key metrics, we even use artificial intelligence to detect irregularities - for example, to detect attacks, cost increases or unusual loads early.
The AI signals when, for example, data volume, CPU usage, error count or request count exceeds the expected thresholds. What is unique is that the thresholds themselves are inferred and adjusted by the AI!
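A minimal sketch of such inferred thresholds: flag a metric as anomalous when it leaves a band the detector learns from recent history (here, rolling mean plus or minus three standard deviations). The real monitoring model is more sophisticated; this only illustrates thresholds being derived from the data rather than hand-set:

```python
# Hypothetical sketch of adaptive anomaly detection: the threshold band
# is computed from recent history, so it adjusts as normal traffic shifts.
from statistics import mean, stdev

def is_anomalous(history: list, value: float, k: float = 3.0) -> bool:
    """True if value falls outside mean +/- k standard deviations."""
    mu, sigma = mean(history), stdev(history)
    return abs(value - mu) > k * sigma  # the threshold adapts with the data

requests_per_min = [100, 104, 98, 101, 99, 103, 97, 102]
print(is_anomalous(requests_per_min, 101))  # normal load -> False
print(is_anomalous(requests_per_min, 450))  # possible attack -> True
```

Because the band is recomputed from history, a gradual organic traffic increase widens the "normal" range, while a sudden spike still triggers an alert.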
Is the cloud expensive?
Most services and usage in the cloud are billed on a per-usage basis. Few visits - low costs. Many visits - higher costs.
This contrasts with traditional systems, where costs are often the same regardless of whether the capacity is used.
For migrated applications, this makes it somewhat difficult to estimate costs in advance. After the migration, however, costs become easier to predict - for example, how they would rise in response to a 50% increase in visits.
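The predictability mentioned above comes down to simple arithmetic once pricing is usage-based. The rates and visit counts below are made up purely for illustration, not actual AWS prices:

```python
# Hypothetical cost model: a small fixed base plus a per-usage component.
# With such a model, translating a traffic forecast into a budget is easy.

def monthly_cost(visits: int, cost_per_1000_visits: float = 0.40,
                 fixed_base: float = 25.0) -> float:
    """Estimated monthly bill for a given number of visits."""
    return fixed_base + visits / 1000 * cost_per_1000_visits

current = monthly_cost(500_000)    # e.g. 25 + 200 = 225.0
projected = monthly_cost(750_000)  # 50% more visits: 25 + 300 = 325.0
print(current, projected)
```

Note that costs grow less than proportionally here because of the fixed base: 50% more visits raises the bill by roughly 44%, not 50%.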
Initially, for many existing applications, costs will be somewhat higher.
However, what you get in return is scalability and growth potential, not just servers. What is really expensive is a database that crashes when you suddenly get many visits, driving those visitors away (as happens in many traditionally set up infrastructures).
Especially suited to start-ups and rapid growth
This makes such an infrastructure especially suitable for brand-new applications and for existing applications with growth potential.
New applications will cost relatively little, because costs are allocated on a usage basis.
With a sound architecture, these low starting costs are no obstacle to quick and worry-free growth later on.