Understanding Serverless Computing

Most consumers think of cloud computing simply as off-site storage space. But over the last few years there has been a rising trend toward hosting and running code in the cloud, and as the offerings have matured, building applications on serverless computing services has become straightforward.


The primary advantage of serverless computing is simplicity. The developer doesn’t need to consider hardware requirements or worry about provisioning virtual servers; they just write the application’s functionality and upload it to the hosting provider. It lets the developer focus on developing and leaves the hardware and server maintenance tasks to professionals in that skill area.

The other advantages are flexibility and cost. Platforms like Amazon Web Services’ Lambda charge only for the computing time your application actually uses: hosting the code itself is free, and you are billed based solely on execution. So an application that runs only once per hour can be insanely cheap. An application with constant database triggers will run more frequently, but the elastic nature of the cloud lets the host scale on demand, so requests aren’t left waiting to execute as they might be on a fixed server.


With any great advantage come tradeoffs that have to be taken into account. Using a serverless setup means that, obviously, you give up control of the servers. For those with sensitive data or complex hardware requirements, dumping code into an opaque computing black box may not be acceptable. That said, the vast majority of applications can trust the hosts to provide fast and efficient hosting, simply because of the volume of applications and requests they handle.

The other disadvantage is that it requires a different way of architecting larger applications. Each function should be modular and run only when an appropriate trigger fires. Rather than weaving a spaghetti tangle of mini-applications constantly calling one another, execution should be driven by user input or database changes, so that each function has a well-defined input and result. While this just sounds like good coding practice, in my experience most legacy systems do not work this way.
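To make the "well-defined input and result" idea concrete, here is a minimal sketch of what a Lambda-style function handler might look like in Python. The event shape (a `name` field) is a hypothetical example, not any particular API; the point is that the function is stateless, receives one event, and returns one result.

```python
import json

def handler(event, context):
    """A minimal serverless-style handler: one defined input (the event),
    one defined result, and no shared state between invocations."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```

Because the handler depends only on its input event, it can be tested locally by calling it directly, and the platform can run as many copies in parallel as incoming triggers require.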

When to Use

Applications that run as integrations between existing systems work well. UDig has several implementations of this type running between internal applications to keep data in sync. When the system of record updates, the update is seamlessly pushed to the reliant systems.
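As a hedged sketch of how such a trigger-driven sync might work (not UDig's actual implementation): a function subscribed to a DynamoDB stream receives change records and forwards the new item images to the dependent system. The record fields below follow AWS's documented stream event format; the downstream push itself is left out as an assumption.

```python
def extract_updates(event):
    """Pull the new item images out of a DynamoDB stream event so a
    dependent system can be updated. Only inserts and modifications
    carry a new image; deletes are ignored in this sketch."""
    updates = []
    for record in event.get("Records", []):
        if record.get("eventName") in ("INSERT", "MODIFY"):
            updates.append(record["dynamodb"]["NewImage"])
    return updates
```

Each invocation handles one batch of changes and exits, so the sync stays in step with the system of record without any polling loop or long-running process.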

Back ends for small applications are also a great fit. If there is little traffic to the website, your cost stays extremely low while you still get the performance of a high-volume server. It can also be a great solution for side tasks that don’t fit into the standard architecture of your application, anything from user authentication to checking an external system for updates.


The primary providers of serverless application hosting are AWS Lambda, Microsoft Azure, and Google Cloud Functions. Amazon pioneered the space, but competitors have quickly realized the feasibility of the model and are catching up quickly.  And, as we recently experienced, even the big players can experience outages which can impact your uptime.  Read How to Avoid a Cloud Calamity by Andrew Duncan or check out a host of UDig cloud resources here.