The way web-based applications have been engineered over the last few decades has changed. Historically, we primarily had monolithic applications, where all of the application's functionality was built and deployed as a single unit.
Sure, these web apps could be fast in operation, but a failure in code anywhere could render the entire system unusable, not to mention the operational business issues that arise once your project reaches a certain size.
Service-Oriented Architecture (SOA) began to split these systems up into services, creating distributed platforms that communicated using events through an Enterprise Service Bus (ESB) and mitigating some of the problems of the monolithic application architecture. Using event-driven SOA, services could be scaled independently, teams could work on code and release independently, and some failures wouldn't take your entire system down, in theory at least. It did, however, create a reliance on the ESB, which became a single point of failure and the bottleneck of the entire application.
Then came microservices, which shared a lot in common with SOA. The microservice architectural pattern broke down systems into components of related functionality, and exposed that functionality through APIs. Decoupling the web application into components simplified the development of large systems in many ways – although it also complicated it in others, which we'll explore later – and helped enable teams to work on different systems at their own cadence (by avoiding the use of shared state), speeding up feature development and the delivery of value to customers. (As a side note, Amazon executed this very well after Jeff Bezos' API Mandate.)
Serverless architecture is the next step in this evolution, moving away from old-style bare-metal server and virtual machine deployments, and breaking down services even further to deliver huge benefits to teams that are empowered to take advantage of the new technology and processes available.
Serverless architecture is a cloud execution model and an evolution of cloud services that further simplifies running code in the cloud.
In a serverless model, the deployable unit of value is a single code function that does its work in isolation, wired together through API gateways and event triggers (for example, triggering a document-analysis function when a user uploads a new document) to create serverless apps. Coupled with other serverless computing 'services' such as Amazon's DynamoDB, S3 and ElastiCache, you can create massively complex web apps while maintaining the simplicity of a single function as the deployable component. What this gives you is a ton of flexibility and elasticity in your deployment.
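To make that model concrete, here is a minimal sketch of an event-triggered function. The shape of the handler and event mirrors an AWS Lambda function receiving an S3 "object created" notification, but the bucket name and the `analyse_document` helper are hypothetical, purely for illustration:

```python
import json


def analyse_document(bucket: str, key: str) -> dict:
    # Hypothetical analysis step; a real function might download the
    # object from S3 and run text extraction or classification on it.
    return {"bucket": bucket, "key": key, "status": "analysed"}


def handler(event: dict, context=None) -> dict:
    # Entry point the platform invokes. An S3 event contains one or
    # more "Records", each describing an uploaded object.
    results = []
    for record in event.get("Records", []):
        s3 = record["s3"]
        results.append(analyse_document(s3["bucket"]["name"], s3["object"]["key"]))
    return {"statusCode": 200, "body": json.dumps(results)}
```

The important point is what is absent: no web server, no process management, no scaling logic. The platform invokes `handler` once per event, and everything else is someone else's problem.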
Much of this technology has been engineered by large tech firms who had real problems to solve, at scale. The likes of Google, Amazon and Microsoft have invested huge amounts of time and money into developing serverless technologies for their own benefit, using their learnings to provide serverless platforms and services as part of their cloud offering and allowing us to take advantage of tools like Azure Functions, Google Cloud Functions and AWS Lambda functions.
At the risk of stating the obvious, serverless architecture isn’t really serverless. There is a huge network of computers all over the world, connected on super fast global networks, with huge amounts of high-speed storage and RAM, with built in redundancy all over the place, and some of the best engineers on the planet taking care of them. The beauty of “serverless” technology is that you don’t have to worry about any of the stuff that doesn’t directly provide value to your customers; the cloud provider provides a fully-managed suite of services for you to be able to run your workloads and takes care of everything else.
Putting in place a serverless architecture can greatly simplify deploying code to production, and typically removes much of the need to worry about capacity planning, scaling, updating the operating system or other server maintenance tasks. However it’s important to take care when selecting a technology partner, as what you need will greatly depend on the size of the project you are trying to build.
Cost is often touted as a compelling reason to adopt serverless computing and indeed it is. Reducing the idle capacity of your platform can save on cost, and removing the need for server maintenance also decreases operational overheads.
Be aware, though, that it is also easy for the costs of a serverless architecture to spiral out of control. Just deploying a single serverless function behind an HTTP entry point isn't enough to run in production; to be sure your application is running as expected, there are several other things to consider, such as application and performance monitoring.
The "pay as you go" model of on-demand computing also means your costs can be fairly unpredictable, although with adequate monitoring and response processes in place, you can go some way towards mitigating this.
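One simple form that mitigation can take is a run-rate projection: track daily spend and flag when the month-end projection will exceed your budget. This sketch is illustrative only – it isn't tied to any real billing API, and the figures are hypothetical:

```python
def check_budget(daily_spend: list, monthly_budget: float, days_in_month: int = 30) -> dict:
    # Project month-end spend from the average daily run rate so far,
    # and flag when the projection exceeds the budget.
    run_rate = sum(daily_spend) / len(daily_spend)
    projected = run_rate * days_in_month
    return {"projected": projected, "over_budget": projected > monthly_budget}
```

In practice you would feed this from your cloud provider's billing data and wire the `over_budget` flag to an alert, so a runaway function is noticed in days rather than at the end of the billing cycle.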
Scalability is a win for serverless. When users send HTTP requests to your service, you can scale quickly, and almost without limit, to meet demand. Provided your application has been developed in a way that can scale out, this typically requires little to no effort from the development team and is handled by the vendor. This is complicated a little by 'cold starts', which can add a few seconds' latency while the application starts up when scaling, and may be unacceptable in some scenarios.
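A common pattern for softening cold starts (illustrative and vendor-agnostic; `_expensive_init` stands in for whatever your function actually needs) is to do expensive setup once at module scope, outside the handler, so that warm invocations of the same container reuse it:

```python
import time


def _expensive_init() -> dict:
    # Stands in for loading configuration, opening connection pools,
    # etc. Runs once per container, during the cold start.
    return {"initialised_at": time.time()}


# Module scope: executed once when the runtime loads the function.
_RESOURCES = _expensive_init()

_invocations = 0


def handler(event: dict, context=None) -> dict:
    # Warm invocations reuse _RESOURCES instead of re-initialising,
    # so only the first request on a container pays the startup cost.
    global _invocations
    _invocations += 1
    return {"warm": _invocations > 1, "init_time": _RESOURCES["initialised_at"]}
```

This doesn't eliminate cold starts – the first request on each new container still pays the price – but it keeps that price from being paid on every invocation.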
Reliability is an added attraction, as Microsoft, Google and Amazon have some of the best engineers on the planet and scaling, high availability, and server management are all essentially ‘free’, helping keep your platform running smoothly.
Several things also need to be true for a business to use serverless effectively:
Automation, using tools such as HashiCorp's open-source infrastructure automation tool Terraform or AWS CloudFormation, is a critical part of cloud-native deployments. Automation helps provide predictability to your deployments and needs to form part of your serverless architecture. This includes Continuous Integration/Continuous Deployment (CI/CD) pipelines, which ensure the quality of changes and allow them to be deployed safely and quickly through to production.
Training of the development team is essential. Using serverless is an architectural design choice of the platform and changes the way teams approach application development. You are moving complexity outside of the code base, not removing the complexity, and the trade-offs need to be understood by those writing the code. For example, managing predictable deployments increases in complexity for serverless web applications, as does debugging problems when they occur.
Monitoring throughout the entire application stack is therefore possibly one of the most important factors in successfully detecting and debugging issues in the system. This requires not only the correct tooling, but also for it to be set up correctly and for someone to act on the information available. Debugging a monolith is a simple task compared with debugging a fully-distributed system, and observability of the platform is critical.
In addition to being prepared for some upfront investment in upskilling, automation and architectural complexity, you'll also need to accept some level of vendor lock-in. This will most likely not cause any issues, as most businesses are already committed in some way to a cloud platform.
Going fully serverless will rarely suit every use case, and the complexities introduced by using serverless will not always be worth it. For me though, one of the biggest wins serverless computing has to offer as the future of application infrastructure is that it encourages development teams to build web applications in a way that further reduces tight coupling of services, reduces the application’s dependence on state, and improves the application’s handling of inevitable failures. These are all good engineering practices anyway.
Arguably too, most businesses are already using some serverless technology through their cloud native products. Amazon’s S3 has been around since 2006, and is one of hundreds of AWS ‘serverless’ services, so in that respect there is likely already a high level of investment in serverless technology.
So, should you be using serverless? Probably, for some things. Coupled with other cloud-native services it is a powerful tool, although it can increase complexity in many ways (if you're doing anything beyond a simple one-off helper function or "Hello World" application), so it's important to ensure you're going to put it to good use. You can also invest in serverless architecture partially, using Function as a Service (FaaS) only for the parts of the system that suit it.
This will help ensure you can get real value out of your investment, and build an architecture that supports your future growth objectives.