The concept isn’t entirely new—containerisation has existed for years—but cloud platforms have made it accessible to mainstream businesses. As Elliott notes, “Very few organisations were actually using containerisation before Azure or before public cloud.” Those that did, however, already understood the benefits of short-lived applications that exist only when needed.
This ephemeral model works brilliantly in the cloud, where seemingly infinite scale and consumption-based pricing align perfectly with serverless computing principles. As Chris points out, this approach doesn’t translate well to on-premises infrastructure: “It’s difficult to kind of utilise [on-premises hardware] in a way where you’re spinning things up, winding them down, but getting the most out of it in terms of costs.”
The cloud’s elasticity means you can provision resources exactly when needed without worrying about physical limitations. If you need a particular resource for a set amount of time on a given date, it will be available.
Containerisation: A middle ground
Containers offer a potential middle ground between traditional VMs and fully serverless functions. As Elliott describes, “At its simplest level, a container—in the same way that a hypervisor has a virtual machine—the virtual machine could have a container. So we’re just going down another level.”
Services like Azure Container Instances or Container Apps allow you to run containerised workloads without managing the underlying infrastructure. These solutions provide greater portability than pure PaaS services, as the same container can run on-premises, in Azure, or in other cloud environments.
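As a minimal sketch of how little infrastructure management is involved, a public sample image can be launched on Azure Container Instances with a couple of CLI commands. The resource group, container name, DNS label, and region below are placeholder assumptions for illustration:

```shell
# Sketch: run a public sample image on Azure Container Instances.
# Resource group, names, and region are placeholders, not real resources.
az group create --name demo-rg --location uksouth

az container create \
  --resource-group demo-rg \
  --name hello-aci \
  --image mcr.microsoft.com/azuredocs/aci-helloworld \
  --dns-name-label hello-aci-demo \
  --ports 80
```

There is no VM to patch or host to size here; the platform provisions and bills for the container only while it exists.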
The changing role of infrastructure teams
With the shift to serverless computing, do we still need infrastructure specialists? In our opinion: absolutely.
“Development and infrastructure teams have always struggled to work together,” says Elliott. “Developers focus on creating a service and getting it out—that’s their only goal. Infrastructure teams focus on access control, security and the underlying platform.”
These different priorities are still crucial in a serverless world. In fact, infrastructure teams may need to manage more than before, particularly regarding security. “If we deploy a function app today, for example, we go into Azure and we click next, next, next, and then we deploy our app to it that is, by default, available to the internet,” Elliott warns.
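One way that default public exposure can be tightened, as a hedged sketch, is with access restrictions on the function app after deployment. The resource names and address range below are illustrative placeholders:

```shell
# Sketch: restrict a function app so only a private address range can reach it.
# Resource group, app name, and CIDR range are placeholder assumptions.
az functionapp config access-restriction add \
  --resource-group demo-rg \
  --name demo-func-app \
  --rule-name allow-internal \
  --action Allow \
  --ip-address 10.0.0.0/24 \
  --priority 100
```

Adding an Allow rule implicitly denies all other traffic, which is exactly the kind of platform-level control an infrastructure team would own.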
This security concern illustrates why infrastructure expertise remains essential even as we move away from traditional server management. We’re seeing the emergence of “platform ops” teams that bridge the gap between infrastructure and development, providing secure platforms that developers can use while maintaining the necessary controls.
In this model, infrastructure teams handle networking, security, and platform management—configuring private endpoints, vNet integration, and access controls—while developers focus on writing code. For smaller organisations without dedicated platform teams, close collaboration between development and infrastructure becomes even more critical.
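In practice, the platform side of that split often comes down to a handful of commands. A sketch of enabling vNet integration for a function app, with all resource names as placeholder assumptions:

```shell
# Sketch: route a function app's outbound traffic through a virtual network.
# Resource group, app, vNet, and subnet names are placeholders.
az functionapp vnet-integration add \
  --resource-group demo-rg \
  --name demo-func-app \
  --vnet demo-vnet \
  --subnet functions-subnet
```

The developer’s code is unchanged; the networking posture is configured entirely at the platform layer.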
As Elliott emphasises, “An infrastructure person should be amazing at security infrastructure and not cross into dev. And likewise, the developer shouldn’t be trying to cross the other way, because there’s always going to be some blind spots, and to be better at the bit that you do is far more valuable.”
The cloud vendor lock-in challenge
One big concern with PaaS and serverless approaches is cloud vendor lock-in. Moving a virtual machine between environments is relatively straightforward (albeit with potential egress costs), but migrating a function app or logic app between cloud providers is considerably more complex.
“If you want to move from a logic app to another service, there’s a lot more that needs to be considered,” Elliott explains. “A logic app doesn’t exist as the same thing in one of the other providers.” Function apps offer slightly more portability since they run standardised code like .NET applications but still need reconfiguration for different platforms.
While vendor lock-in is a legitimate concern, we believe the advantages of PaaS and serverless typically outweigh this risk for most organisations. As Elliott notes, “The chances of having an issue with the likes of Microsoft’s Azure cloud platform or GCP or AWS… is far lower than the impact of your business [having other problems].”
Containers can mitigate some lock-in concerns by providing greater portability across environments. “If we use something like container instances or container apps, and then put our image into that container, that is far more portable,” Elliott explains. “We could run that on-prem. We could run that in any cloud solution, very easily.”
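That portability is just the standard container workflow: build an image once, then run the identical artefact on a laptop, an on-premises host, or a cloud registry-backed service. The registry and image names below are placeholder assumptions:

```shell
# Sketch: build once, run the same image anywhere.
docker build -t myapp:1.0 .

# Run locally or on an on-prem Docker host
docker run -d -p 8080:80 myapp:1.0

# Push the identical image to an Azure Container Registry (placeholder name)
az acr login --name demoregistry
docker tag myapp:1.0 demoregistry.azurecr.io/myapp:1.0
docker push demoregistry.azurecr.io/myapp:1.0
```

From there, the same image reference can back Container Instances, Container Apps, or any other OCI-compatible runtime, in Azure or elsewhere.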
Edge computing: when the cloud is too far away
While cloud computing solves many challenges, certain scenarios need on-premises computing power. This is where edge computing and solutions like Azure Local come into play.
Edge computing brings processing closer to where data is generated, so you get real-time analysis without the latency of cloud transmission. We’re seeing compelling use cases across industries like:
- Manufacturing: Visual analysis for defect detection in production lines
- Retail: AI-powered product identification on scales without customers pressing buttons
- Hospitality: McDonald’s using real-time AI analytics for drive-through orders
Azure Local makes sense when a specific use case demands it, but it isn’t something a business should strive for otherwise. The cost makes it impractical for simply running virtual machines—you’d be better off with a traditional hybrid setup.
(But if you think it does make sense for your business, check out Chris’ in-depth guide to Azure Local.)