
Minimize Vendor Lock-in with Edge Functions and Edge Orchestrator

The need for agile applications, on-demand scalability, high performance, and round-the-clock availability has driven increased adoption of cloud computing, edge computing, and serverless architecture. Despite the benefits of these technologies, some companies are still hesitant to adopt them for fear of vendor lock-in. Knowing how to mitigate that risk can help companies avoid unexpected costs and complications as they migrate to the edge and the cloud.

This post will define what constitutes lock-in, why it occurs, its associated risks, and how Azion’s Edge Functions and Edge Orchestrator can help reduce the risk of lock-in.

What is lock-in?

A study from the Journal of Cloud Computing describes lock-in as a “situation where customers are dependent (i.e. locked-in) on a single cloud provider technology implementation and cannot easily move in the future to a different vendor without substantial costs, legal constraints, or technical incompatibilities.”

Although lock-in is typically discussed in relation to exchanging one vendor for a competitor, Gregor Hohpe describes several other types of lock-in in a 2019 blog post on MartinFowler.com:

  • Product: heavy customization of configurations can make it hard to switch from one product to another, regardless of whether the product is proprietary or open source
  • Version: companies can deprecate or discontinue support for a specific version to force adoption of upgrades, which can be costly to install if they break existing customizations or extensions
  • Architecture: monolithic architecture negates many of the benefits of cloud adoption, causing many companies to re-architect applications to microservices as they move to the cloud

In other words, lock-in is often experienced as a combination of different (and often connected) dependencies, making it difficult to combat.

How does lock-in occur?

In his 2020 book What Is Serverless?, Mike Amundsen provides a simple explanation for the root cause of vendor lock-in. He writes, “Vendor lock-in happens when competitors solve shared problems in unique ways.” In other words, the lack of standardization across vendors results in solutions that are not only proprietary, but customized in a way that prohibits portability. In addition, some vendors’ offerings are tightly coupled so that one product’s configurations lock customers into the use of another proprietary product.

Factors leading to lock-in include:

  • use of different languages, frameworks, APIs, etc.
  • lack of standardization
  • lack of interoperability
  • tightly coupled services
  • proprietary processes/software/hardware

As a result, companies often need to substantially change code or even acquire new skills or talent in order to move their application or data to another platform or solution.

Risks of lock-in

To a certain extent, lock-in is inevitable. Any company that incorporates a provider’s services into its business processes will be locked in to some degree, since changing those processes requires changes to the skills and mindset of everyone involved with them. In addition, some companies voluntarily agree to long-term licensing contracts in exchange for discounted fees or other benefits.

The negative stigma associated with lock-in ultimately comes from the fear that the cost of changing vendors will be so prohibitive as to negate any potential benefits that could come from doing so, opening the customer up to risks such as:

  • diminished QoS
  • changing service offerings
  • changing business processes
  • price increases
  • vendor going out of business

Not only does a high barrier to change prevent customers from getting the best possible prices and solutions, it prohibits customers from responding to changes in the vendor’s processes or services that may not align with the customer’s needs. And although it would be impossible to avoid lock-in entirely, it is possible—and crucial—for a company to mitigate these negative impacts.

Mitigating the negative effects of lock-in

A 2019 Thoughtworks blog post calculated the true cost of lock-in as the cost of migration minus the opportunity gained by migrating. For example, serverless architecture is less portable than containers, but this cost may be offset by the benefits of added scalability, cost efficiency, resource efficiency, and ease of use. In a similar vein, Hohpe’s 2019 post states that lock-in risk is lowest when a service provides unique utility at a low switching cost. In other words, both articles suggest that reducing the negative impact of lock-in is a matter of not only minimizing the cost of migration, but also maximizing its benefits.
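The Thoughtworks framing can be written as a simple expression (the notation below is ours, introduced only to make the trade-off concrete):

```latex
\text{true cost of lock-in} \;=\; C_{\text{migration}} \;-\; G_{\text{opportunity}}
```

On this view, a migration with a high $C_{\text{migration}}$ (refactoring, retraining, downtime) can still have a low, or even negative, true cost if $G_{\text{opportunity}}$ (scalability, efficiency, reduced operational burden) is large enough.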

Minimizing cost

Ultimately, the true cost of migration extends beyond the cost of purchasing new services; it includes system downtime, updated security protocols, code refactoring, and developer training. As such, minimizing these costs, reducing complexity, and mitigating risk may involve:

  • Adopting standardized technology
  • Using languages supported by multiple providers
  • Choosing solutions that are easy to configure and use

Maximizing gain

Despite fears of lock-in, companies are increasingly adopting cloud technologies due to the opportunity gains they provide, such as:

  • Faster time to market
  • Lower latency
  • More scalability
  • Efficient resource use
  • Interoperability
  • 5G readiness

Companies that not only move to the cloud, but adopt cloud-native architectures and solutions designed for performance, agility, and efficiency can ensure that they gain the best possible value from their provider, making their choice well worth the investment.

Mitigating lock-in risk with Azion

At Azion, we believe the future of computing is with open standards; as such, we hope to continually create products that enable as much portability and interoperability as possible. Two products that illustrate our commitment to reducing vendor lock-in risks are Edge Orchestrator and Edge Functions.

Edge Orchestrator

Edge Orchestrator is an orchestration solution designed to simplify infrastructure management. With it, customers can remotely deploy, control, automate, and monitor infrastructure on the edge, in the cloud, or on-prem using VMs, containers, or bare metal. This provides customers with a single pane of glass to observe and orchestrate services from different vendors, minimizing lock-in by significantly reducing the technical complexities involved in migration.

In addition, Edge Orchestrator is easy to set up and use; it can be automatically installed along with the operating system and automatically configures and orchestrates services from the moment a device is authorized in the control panel. It is also compiled with all required libraries and core dependencies to simplify the process of software installation and updates.

Edge Functions

Edge Functions is a FaaS solution that mitigates the lock-in risk often faced by serverless customers. With it, customers can create event-driven functions in JavaScript, a language supported by all major cloud providers. It is also extensible to any cloud provider and built on standardized technology, such as ECMA-standard JavaScript and the web-standard Fetch API (as documented on MDN), for interoperability across platforms.
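Because the function is written against web standards rather than a proprietary SDK, the same handler logic can, in principle, run on any platform that exposes the Fetch API’s Request and Response objects. Below is a minimal, illustrative sketch; the route, query parameter, and response shape are our own examples, not part of any specific Azion API:

```javascript
// A minimal, standards-based edge function handler.
// It uses only Fetch API objects (Request, Response, URL), so the same
// logic is portable across platforms that support these web standards.
async function handleRequest(request) {
  const url = new URL(request.url);
  // "name" is an illustrative query parameter, not a platform convention.
  const name = url.searchParams.get("name") || "world";
  return new Response(JSON.stringify({ greeting: `Hello, ${name}!` }), {
    status: 200,
    headers: { "content-type": "application/json" },
  });
}

// On edge platforms, the handler is typically wired up through the
// service-worker-style "fetch" event, e.g.:
// addEventListener("fetch", (event) => event.respondWith(handleRequest(event.request)));
```

Keeping the business logic in a plain `handleRequest` function, separate from the platform’s event-registration call, is one way to keep the switching cost between providers low.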

In addition, we’ve designed Edge Functions to be as performant as possible, with zero cold starts, resulting in significantly lower latency and more consistent performance than container-based serverless solutions such as AWS Lambda. As a result, we hope to not only reduce vendor lock-in risks, but also provide customers with the maximum possible benefit for their investment.