The Future of Cloud: Edge and Serverless Computing

The rapid adoption of cloud technology has revolutionised how businesses operate, but we’ve noticed many organisations are simply “lifting and shifting” their existing infrastructure without truly modernising. While moving to the cloud is a positive step, merely replicating on-premises servers as virtual machines in Azure misses the transformative potential that modern cloud services offer. 

As our Consultant Chris Bower and Principal Architect Elliott Leighton-Woodruff discussed in a recent episode of our podcast, the cloud is moving rapidly with serverless computing, edge technologies, and AI fundamentally changing what’s possible. 

But are businesses keeping pace with these changes? 

What does the modern cloud landscape look like beyond traditional VMs, and how can your organisation benefit from these advancements? Check out the video discussion below or read on to find out more. 

Is the Cloud Already Out of Date?

The evolution from VMs to PaaS: beyond “lift and shift” 

Many companies facing hardware refreshes look to the cloud as an attractive alternative to capital expenditure. However, we frequently see businesses simply migrating their virtual machines to Azure without reimagining their infrastructure. 

“What have you actually gained? You’re in the cloud, but you’re just running the same infrastructure that you were previously”, notes Chris. This approach fails to make use of the true benefits of cloud computing. 

Platform as a Service (PaaS) offerings present a dramatically more efficient alternative. Instead of managing virtual machines, businesses can focus on their applications while Microsoft handles the underlying infrastructure. This approach brings you some exceptional benefits: 

  • Scalability: PaaS services like Azure SQL elastic pools can dynamically scale based on demand—perfect for retailers during seasonal peaks like Black Friday. 
  • Improved cost efficiency: Consumption-based pricing means you only pay for what you use, potentially reducing costs significantly. 
  • Future-proofing: As Elliott points out, “If you ask a lot of people what is the future of SQL, or more generally databases in the cloud, it’s going to be driven by AI.” PaaS services are probably better positioned to benefit from these innovations than VMs.
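To make the consumption-pricing point concrete, here’s a toy comparison of an always-on VM against a pay-per-use PaaS service. The rates and workload figures are entirely hypothetical placeholders, not real Azure prices:

```python
# Illustrative sketch only: hypothetical rates, not real Azure pricing.

HOURS_PER_MONTH = 730

def vm_monthly_cost(hourly_rate: float) -> float:
    """An always-on VM is billed for every hour, busy or idle."""
    return hourly_rate * HOURS_PER_MONTH

def consumption_monthly_cost(rate_per_hour: float, busy_hours: float) -> float:
    """A consumption-based service bills only for the hours actually used."""
    return rate_per_hour * busy_hours

# A workload that is only busy ~60 hours a month (e.g. nightly batch jobs):
vm = vm_monthly_cost(0.20)                  # 0.20/hour, 24/7
paas = consumption_monthly_cost(0.25, 60)   # higher rate, far fewer hours

print(f"VM:   {vm:.2f}")    # 146.00
print(f"PaaS: {paas:.2f}")  # 15.00
```

The numbers are made up, but the shape of the saving is real: the less continuously busy a workload is, the more consumption-based pricing wins.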

The organisations that have it easiest are those “born in the cloud” who have built their applications directly on PaaS services without the baggage of legacy infrastructure. For businesses with traditional infrastructure—perhaps still running IIS servers with legacy SQL or Visual Basic applications—the modernisation path needs careful consideration. Key questions include: 

  • What applications are being delivered and what’s their expected lifetime? 
  • Does the vendor support PaaS solutions? 
  • Are SaaS alternatives available? 
  • For new builds and applications, how can we avoid virtual machines entirely? 

This isn’t just about applications. SQL databases are often among the largest costs we see in Azure environments. By moving to PaaS offerings like elastic pools, businesses gain the ability to scale resources dynamically based on demand while potentially reducing costs. 

Is serverless computing the future? 

Serverless computing is one of the most transformative cloud evolutions in recent history, but what does “serverless” actually mean? 

“Nothing’s ever really, actually serverless,” Elliott clarifies. “It just means that there’s no server for the individual to manage.” In serverless models, your application exists purely as storage until needed, at which point a container spins up, runs the required process, and disappears again. 

This approach brings some great benefits: 

  • No infrastructure management
  • Automatic scaling based on demand
  • Pay only for execution time rather than idle servers
  • Improved resilience through automatic failover

The concept isn’t entirely new—containerisation has existed for years—but cloud platforms have made it accessible to mainstream businesses. As Elliott notes, “Very few organisations were actually using containerisation before Azure or before public cloud.” But those who did already understand the benefits of short-lived applications that exist only when needed. 

This ephemeral model works brilliantly in the cloud, where seemingly infinite scale and consumption-based pricing align perfectly with serverless computing principles. As Chris points out, this approach doesn’t translate well to on-premises infrastructure: “It’s difficult to kind of utilise [on-premises hardware] in a way where you’re spinning things up, winding them down, but getting the most out of it in terms of costs.” 

The cloud’s elasticity means you can provision resources exactly when needed without worrying about physical limitations. You need a certain resource for a certain amount of time on a certain date—it’s always going to be available. 

Containerisation: A middle ground 

Containers offer a potential middle ground between traditional VMs and fully serverless functions. As Elliott describes, “At its simplest level, a container—in the same way that a hypervisor has a virtual machine—the virtual machine could have a container. So we’re just going down another level.” 

Services like Azure Container Instances or Container Apps allow you to run containerised workloads without managing the underlying infrastructure. These solutions provide greater portability than pure PaaS services, as the same container can run on-premises, in Azure, or in other cloud environments. 

The changing role of infrastructure teams 

With the shift to serverless computing, do we still need infrastructure specialists? In our opinion: absolutely. 

“Development and infrastructure teams have always struggled to work together,” says Elliott. “Developers focus on creating a service and getting it out—that’s their only goal. Infrastructure teams focus on access control, security and the underlying platform.” 

These different priorities are still crucial in a serverless world. In fact, infrastructure teams may need to manage more than before, particularly regarding security. “If we deploy a function app today, for example, we go into Azure and we click next, next, next, and then we deploy our app to it that is, by default, available to the internet,” Elliott warns. 

This security concern illustrates why infrastructure expertise remains essential even as we move away from traditional server management. We’re seeing the emergence of “platform ops” teams that bridge the gap between infrastructure and development, providing secure platforms that developers can use while maintaining the necessary controls. 

In this model, infrastructure teams handle networking, security, and platform management—configuring private endpoints, vNet integration, and access controls—while devs focus on writing code. For smaller organisations without dedicated platform teams, close collaboration between development and infrastructure becomes even more critical. 

As Elliott emphasises, “An infrastructure person should be amazing at security infrastructure and not cross into dev. And likewise, the developer shouldn’t be trying to cross the other way, because there’s always going to be some blind spots, and to be better at the bit that you do is far more valuable.” 

The cloud vendor lock-in challenge 

One big concern with PaaS and serverless approaches is cloud vendor lock-in. Moving a virtual machine between environments is relatively straightforward (albeit with potential egress costs), but migrating a function app or logic app between cloud providers is considerably more complex. 

“If you want to move from a logic app to another service, there’s a lot more that needs to be considered,” Elliott explains. “A logic app doesn’t exist as the same thing in one of the other providers.” Function apps offer slightly more portability since they run standardised code like .NET applications but still need reconfiguration for different platforms. 

While vendor lock-in is a legitimate concern, we believe the advantages of PaaS and serverless typically outweigh this risk for most organisations. As Elliott notes, “The chances of having an issue with the likes of Microsoft’s Azure cloud platform or GCP or AWS… is far lower than the impact of your business [having other problems].” 

Containers can mitigate some lock-in concerns by providing greater portability across environments. “If we use something like container instances or container apps, and then put our image into that container, that is far more portable,” Elliott explains. “We could run that on-prem. We could run that in any cloud solution, very easily.” 

Edge computing: when the cloud is too far away 

While cloud computing solves many challenges, certain scenarios need on-premises computing power. This is where edge computing and solutions like Azure Local come into play. 

Edge computing brings processing closer to where data is generated, so you get real-time analysis without the latency of cloud transmission. We’re seeing compelling use cases across industries like: 

  • Manufacturing: Visual analysis for defect detection in production lines 
  • Retail: AI-powered product identification on scales without customers pressing buttons
  • Hospitality: McDonald’s using real-time AI analytics for drive-through orders 

Azure Local makes sense when a specific use case demands it, but it isn’t something a business should strive for otherwise. The cost makes it impractical for simply running virtual machines—you’d be better off with a traditional hybrid setup. 

(But if you think it does make sense for your business, check out Chris’ in-depth guide to Azure Local.) 

For many organisations, Azure Arc provides sufficient hybrid capabilities without the complexity of Azure Local. Arc allows you to manage on-premises Windows or Linux virtual machines from Azure’s control plane alongside integrated security services like Defender. This provides the “single pane of glass” management experience without the full overhead of Azure Local. 

“If you’ve got Arc anyway, do you really need [Azure Local]?” Elliott questions. “The only reason you would want to go down that route is if you want to use a function on-prem, or you want to use this AI-based service that’s available in Azure, but it’s a local version of it.” 

The decision to build out your edge computing infrastructure should really be driven by specific latency requirements and use cases. For most scenarios, sending data to Azure for processing is still the most cost-effective approach. 

AI and cloud management 

Artificial intelligence is already transforming cloud operations, with Microsoft integrating intelligence into various services. 

One thing we like the look of is the new cost analysis capabilities. These tools identify trends, flag resources with dramatic cost increases, and offer optimisation recommendations. 

“Cost analysis within the billing section of a subscription… will now show you things like trends, resources that have dramatically increased in cost,” Elliott notes. As these capabilities evolve, we anticipate AI will increasingly assist with: 

  • Cost optimisation and right-sizing
  • Resource scaling based on usage patterns
  • Security posture improvement
  • Architectural guidance

Which is all seriously useful stuff—especially if you can’t dedicate loads of time to combing through your cloud bills. (If you want to learn more about keeping your Azure costs down, check out our favourite cost optimisation tips, as well as our ultimate guides to the Azure Pricing Calculator and Azure Cost Management, too. Don’t want to do it yourself? We’ll take Azure cost optimisation off your hands.) 
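Trend-flagging of the kind Elliott describes can be pictured with a short sketch. The billing data, resource names, and spike threshold below are all hypothetical, and this has nothing to do with how Azure’s cost analysis is actually implemented:

```python
def flag_cost_spikes(costs: dict[str, list[float]], threshold: float = 1.5) -> list[str]:
    """Flag resources whose latest monthly cost grew beyond `threshold` times
    the previous month's cost. `costs` maps resource name -> monthly costs."""
    flagged = []
    for resource, history in costs.items():
        if len(history) >= 2 and history[-2] > 0:
            if history[-1] / history[-2] > threshold:
                flagged.append(resource)
    return flagged

# Hypothetical billing history, oldest month first:
monthly = {
    "sql-elastic-pool": [420.0, 435.0, 410.0],  # stable
    "storage-account":  [12.0, 13.0, 45.0],     # sudden jump
}
print(flag_cost_spikes(monthly))  # ['storage-account']
```

Even a naive ratio check like this surfaces the resources worth a human look first, which is exactly the kind of triage AI-assisted tooling can take off your plate.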

We’re already seeing hints of this AI evolution in services like Dev Centre for Microsoft-managed hosted DevOps agents, which now offers automatic scaling based on historical usage patterns. “It’s using intelligence to work out when developers are deploying to these agents, and building up information about that so that it’ll then scale them at the times when it expects deployments,” Elliott explains. 

This approach could eventually extend to other Azure resources, learning from usage patterns to optimise scaling and availability. For organisations without deep Azure expertise, these intelligent defaults could provide serious value without needing manual configuration. 
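The usage-pattern scaling Elliott describes could look something like the following sketch: a naive hour-of-day frequency model learned from past deployment times. It is entirely hypothetical, not how Microsoft’s service works:

```python
from collections import Counter
from datetime import datetime

def busy_hours(deploy_times: list[datetime], min_deploys: int = 2) -> set[int]:
    """Learn which hours of the day historically see deployments, so agents
    can be pre-scaled just before those hours."""
    counts = Counter(t.hour for t in deploy_times)
    return {hour for hour, n in counts.items() if n >= min_deploys}

# Hypothetical deployment history: most pushes land around 9am.
history = [
    datetime(2025, 1, 6, 9, 15), datetime(2025, 1, 7, 9, 40),
    datetime(2025, 1, 8, 9, 5),  datetime(2025, 1, 8, 17, 30),
]
print(sorted(busy_hours(history)))  # [9]
```

A real service would weigh recency, day of week, and cold-start costs, but the principle is the same: learn when demand arrives and have capacity warm before it does.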

That said, we’re quite cautious about AI replacing human expertise. “If you are going to leverage AI within your organisation, businesses need to be very careful about how they use it, who’s using it, and with what oversight,” warns Elliott. AI tools will produce what you ask for, whether it’s right or wrong—potentially leading to security vulnerabilities or cost overruns without proper supervision. 

Without a proper understanding of what you’re deploying, you could accidentally expose internal data, increase costs, or create serious security vulnerabilities. 

Rather than replacing them, we see AI augmenting IT professionals, allowing them to accomplish more with less effort. 

Junior engineers may advance more quickly as AI handles routine tasks, getting their hands on complex challenges sooner, potentially without the lower-level grounding that today’s IT professionals pick up early in their careers. 

“IT engineers are going to get more senior quicker, and their jobs will be to think about possible outcomes, as opposed to just the low-level tasks, like creating users or managing virtual machines,” Elliott predicts. 

The real future of cloud technology 

Looking further ahead, we like to think we’re realistic about the pace of change. While quantum computing might eventually transform the landscape, we expect cloud technology to evolve incrementally, rather than through revolution, over the next decade. 

“I think we’ve seen over the last 10, 15 years that actually tech doesn’t move that quick,” Elliott observes. “Yeah, we’re in the cloud. Yeah, we’re using PaaS services. But actually, a lot of this technology has been available for a very long time.” 

As Bill Gates famously noted, we tend to overestimate progress in the short term while underestimating it in the long term. The adoption cycle where large enterprises implement new technologies first before they filter down to smaller businesses will likely continue. 

“Between five and 10 years from now, we’ll still be in the same position,” Elliott predicts. “We’ll probably still be leveraging public cloud. I don’t see that going anywhere, really.” Some organisations might increase their use of private cloud solutions, particularly for cost control, but the fundamental cloud model will persist. 

While we don’t anticipate dramatic disruptions like Skynet any time soon, improvements in computing power and capability will likely make sophisticated data analysis and decision-making more widely available. Business intelligence will remain important for most organisations, with smaller businesses gaining better access to it over time. 

Moving forward in the cloud 

The cloud continues to evolve at lightning pace—but let’s stay grounded. 

It’s an exciting time, as we move beyond just using virtual machines. You’ve got a whole host of transformative new capabilities through PaaS, serverless computing, and edge solutions. But fundamentally, it’s business needs that drive the tech you should be using. 

We think organisations should: 

  1. Look beyond “lift and shift” to embrace PaaS services where it makes sense 
  2. Consider serverless computing approaches for new applications and development  
  3. Maintain strong infrastructure expertise focused on security and governance 
  4. Look at edge computing only in specific use cases requiring real-time processing
  5. Approach AI as an enhancement to human expertise, not a replacement 

The future of cloud computing transforms how you deliver value—making infrastructure fade into the background while your business capabilities take centre stage. 

Ready to modernise your cloud approach beyond VMs? Get in touch with our cloud experts to discuss how we can help transform your Azure environment. 

 

Article By:
Elliott Leighton-Woodruff
Principal Architect