What is Serverless Architecture: Benefits and Disadvantages

A serverless architecture is an application design that incorporates third-party “Backend-as-a-Service” (BaaS) services and can run custom code in containers managed by a “Function-as-a-Service” (FaaS) platform. In practice, serverless computing lets developers purchase backend services on a flexible, pay-as-you-go basis: they pay only for the services they actually use, while the provider takes care of capacity and availability.

(Image source: algorithmia.com)


Pricing is based on the actual amount of resources an application consumes. “Serverless” is a metaphor, of course: cloud providers still use servers to run the developers’ code. In this model, however, developers no longer worry about capacity planning, configuration, management, maintenance or operation of containers, virtual machines (VMs) or physical servers – something very attractive for IT operations, since that whole layer is abstracted away. The servers are not eliminated; they simply become the responsibility of the service provider.
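
To make the FaaS side of the model concrete, here is a minimal sketch of an event-driven function in the shape AWS Lambda expects from Node.js/TypeScript handlers. The event fields and the business logic are hypothetical; the point is only that the code runs per request and is billed per invocation.

```typescript
// Minimal sketch of a FaaS handler (AWS Lambda-style, TypeScript).
// The provider provisions, scales and bills this code per invocation;
// the developer manages no server or container.

// Hypothetical shape of the incoming event for this example.
interface OrderEvent {
  orderId: string;
  amount: number;
}

// The handler runs only when an event arrives (HTTP call, queue message,
// scheduled trigger, etc.) and the account is charged only for the time
// this function actually executes.
export const handler = async (event: OrderEvent) => {
  // Hypothetical business logic: validate and "process" the order.
  if (event.amount <= 0) {
    return { statusCode: 400, body: "invalid amount" };
  }
  return {
    statusCode: 200,
    body: JSON.stringify({ orderId: event.orderId, status: "processed" }),
  };
};
```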

In an article dated June 20, 2016, the Apple developer Badri Janakiraman stated:

“Serverless architectures are Internet-based systems where the application development does not use the usual server process. Instead, they rely solely on a combination of third-party services, client-side logic, and service hosted remote procedure calls (FaaS or Function-as-a-Service).”

Serverless architectures can significantly reduce operating costs, complexity and engineering lead time, which is why they tend to become more and more popular. But what is the price in terms of dependence on a locked-in vendor and the support services it offers? Simple, isolated functions make application development easier, and event-driven execution makes operations cheaper. But are there also disadvantages to this model?

Disadvantages of Serverless Architecture


The answer to that question is: yes! The serverless model requires a lot of caution and professionalism in its implementation, and there are several points to watch out for:

  • Vendor lock-in can be a risk

Allowing a single provider to supply all the backend services for an application – known as vendor lock-in – inevitably increases dependence on that vendor. A serverless architecture tied to a single provider can be difficult to move later, because each vendor offers slightly different features and workflows. In addition, serverless functions built on one platform can be a headache to migrate to another: code may need to be rewritten, APIs that exist on one platform may not exist on the other, and many extra hours of development may be needed to move, for example, from AWS to Microsoft Azure or Google Cloud.
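
One concrete symptom of this is that even the function signature is provider-specific. The sketch below contrasts the AWS Lambda and Azure Functions handler shapes as they are commonly written in TypeScript; it is an approximation, not a migration guide.

```typescript
// The same business logic has to be wrapped differently per provider,
// which is one concrete source of lock-in.

// Provider-neutral business logic (the part worth protecting from lock-in).
function greet(name: string): string {
  return `Hello, ${name}`;
}

// AWS Lambda: the platform passes a single event object and expects the
// response shape used by API Gateway integrations.
export const awsHandler = async (event: {
  queryStringParameters?: { name?: string };
}) => ({
  statusCode: 200,
  body: greet(event.queryStringParameters?.name ?? "world"),
});

// Azure Functions (Node.js programming model v3): the platform passes a
// context object plus the HTTP request, and the response is set on context.res.
export async function azureHandler(
  context: { res?: { status: number; body: string } },
  req: { query: { name?: string } }
): Promise<void> {
  context.res = { status: 200, body: greet(req.query.name ?? "world") };
}
```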

  • Problems due to third-party API systems

Using third-party APIs requires great caution around vendor control, multi-tenancy – a single instance of the software and its supporting infrastructure serving multiple customers – vendor lock-in and security. For example, giving up control of the system to an external API can lead to downtime, forced API upgrades, loss of functionality, unexpected limits and, above all, cost changes.

  • Testing and debugging are more challenging

Developers depend on vendors for debugging and monitoring tools. Debugging is more complicated because developers have little visibility into backend processes and because the application is split into smaller, separate functions. Debugging serverless functions is possible, but it is not a simple task and can consume a lot of time and resources.
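
One common mitigation is to keep handlers thin so that most of the logic can be unit-tested locally by calling the function with a hand-built event, without the provider in the loop. A minimal sketch, reusing the hypothetical order handler from the first example:

```typescript
// Minimal local test: call the handler directly with a fabricated event.
// This does not reproduce provider behaviour (timeouts, IAM, cold starts),
// which is exactly why debugging in the real environment remains harder.

import { handler } from "./order-handler"; // hypothetical module from the earlier sketch

async function runLocalTest(): Promise<void> {
  const fakeEvent = { orderId: "test-123", amount: 42 };
  const response = await handler(fakeEvent);

  if (response.statusCode !== 200) {
    throw new Error(`expected 200, got ${response.statusCode}`);
  }
  console.log("local test passed:", response.body);
}

runLocalTest().catch((err) => {
  console.error(err);
  process.exit(1);
});
```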

  • Serverless architectures are not built for long-running workflows

This limits the types of applications that can be run economically and efficiently on a serverless architecture. Because serverless providers charge for the time the code actually runs, an application with long-running workflows can cost more on serverless infrastructure than on, for example, a traditional infrastructure.
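
A rough back-of-the-envelope comparison illustrates the point. The prices below are purely illustrative assumptions, not current list prices of any provider.

```typescript
// Illustrative cost comparison for an always-busy, 1 GB workload over 30 days.
// Both prices are assumptions chosen only to show the order of magnitude.

const ASSUMED_FAAS_PRICE_PER_GB_SECOND = 0.0000167; // assumption, in USD
const ASSUMED_SMALL_VM_PRICE_PER_MONTH = 10;        // assumption, in USD

const secondsPerMonth = 30 * 24 * 60 * 60; // 2,592,000 s
const memoryGb = 1;

// If the code effectively runs all the time, pay-per-execution loses its appeal.
const faasCost = secondsPerMonth * memoryGb * ASSUMED_FAAS_PRICE_PER_GB_SECOND;

console.log(`FaaS, always running: ~$${faasCost.toFixed(2)} / month`); // roughly $43
console.log(`Small VM (flat rate): ~$${ASSUMED_SMALL_VM_PRICE_PER_MONTH} / month`);
```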

  • Performance may be affected

Because serverless code does not run continuously, it may need to be booted up when it is invoked, and this startup time can degrade performance. If a piece of code is used regularly, however, the serverless provider will keep it ready to be activated – a request served by this ready-to-use code is called a “warm start”. A request for code that has not been used in a while is called a “cold start”.
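
A common way to soften cold starts is to do expensive initialisation outside the handler body, so that warm invocations reuse it. A minimal sketch, assuming a hypothetical `createDatabaseClient` helper that performs slow connection work:

```typescript
// Expensive setup placed outside the handler runs once per container, so warm
// starts reuse it; only cold starts pay the full initialisation cost.

// Hypothetical helper that opens a (slow) database connection.
async function createDatabaseClient(): Promise<{
  query: (sql: string) => Promise<unknown>;
}> {
  // ...imagine ~500 ms of connection/TLS/auth work here...
  return { query: async (sql: string) => ({ sql, rows: [] }) };
}

// Cached in module scope: shared across warm invocations of the same container.
let clientPromise: ReturnType<typeof createDatabaseClient> | undefined;

export const handler = async (event: { userId: string }) => {
  if (!clientPromise) {
    clientPromise = createDatabaseClient(); // only a cold start triggers this
  }
  const db = await clientPromise;

  const rows = await db.query("SELECT now()");
  return { statusCode: 200, body: JSON.stringify({ user: event.userId, rows }) };
};
```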

  • Architectural complexity

Deciding how granular a serverless function should be takes time to assess, deploy and test, and there must be a balance in the number of serverless functions an application calls. Managing a large number of serverless functions is complicated, while ignoring the granularity of the application can lead to the creation of mini-monoliths.

  • Deployment issues

Integration testing of serverless applications is no easy task. With FaaS, the units of integration (i.e., each serverless function) are much smaller than in other architectures, so there are many more integration points to test than in other architectural styles, and problems related to deployment, versioning and packaging can arise.

  • Security concerns

Serverless providers often run code from several of their customers on the same server at any given time. Because companies do not run their own physical servers, serverless computing raises new security concerns. The issues associated with sharing machines with other parties are known as multi-tenancy. They can affect application performance and, if multi-tenant servers are not configured correctly, can result in data exposure. Multi-tenancy has little or no impact, however, on infrastructures where functions are sandboxed correctly and the underlying machines are powerful enough.

Therefore, if you are going to invest in a serverless platform, make sure the vendor you are considering offers everything you need: discovering after a few months or years of service that you are dissatisfied with your serverless computing provider can be a big problem.

A little bit of history


Launched by Austen Collins in October 2015 and maintained by a full-time team, the Serverless Framework is an open-source web framework written in Node.js. Originally developed for building applications on AWS Lambda – the serverless computing platform provided by Amazon Web Services – applications created with the framework can also be deployed to other Function-as-a-Service providers, including Google Cloud with Google Cloud Functions, Microsoft Azure with Azure Functions, IBM Bluemix with IBM Cloud Functions (based on Apache OpenWhisk), Oracle Cloud with Oracle Fn, and Kubeless (based on Kubernetes), among others.

A serverless application may consist of a few lambda functions that perform specific tasks, or of an entire backend made up of hundreds of lambda functions. The Serverless Framework supports all runtimes offered by the chosen cloud provider.
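
As an illustration of how the framework is typically used, the sketch below shows a handler in the Node.js/TypeScript style plus, in a comment, the kind of `serverless.yml` entry that would map an HTTP event to it for AWS Lambda. The service, function and file names are hypothetical.

```typescript
// handler.ts - a function deployed with the Serverless Framework (names hypothetical).
//
// The framework wires events to handlers through its configuration file,
// typically serverless.yml, roughly like this for the AWS provider:
//
//   service: hello-service
//   provider:
//     name: aws
//     runtime: nodejs18.x
//   functions:
//     hello:
//       handler: handler.hello
//       events:
//         - httpApi:
//             path: /hello
//             method: get
//
// Running `serverless deploy` would then package and publish the function.

export const hello = async (event: {
  queryStringParameters?: { name?: string };
}) => ({
  statusCode: 200,
  body: JSON.stringify({
    message: `Hello, ${event.queryStringParameters?.name ?? "world"}`,
  }),
});
```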
