If you’re keen to cut through the AI hype and understand what it really means for your business, you’re in the right place. We’ve been thinking about the practical applications of artificial intelligence, particularly large language models (LLMs), and how they’re transforming the way companies operate.
Recently, our CEO, Chris, sat down with Alex Martin, IT Programmer and Developer at Movera, for our Experts in Polo Shirts podcast to discuss the realities of implementing AI in business.
Movera is a tech pioneer in the UK property sector (check out our work with them here) and Alex specialises in the use of LLMs.
Alex is a thoughtful, opinionated AI expert, and we think it’s really worth listening to the full conversation on our YouTube channel. And you can read below for some key insights on using AI in your business.
Large language models are one of the most accessible and versatile AI technologies available to businesses today. At their core, they’re pattern recognition systems operating on a massive scale, but understanding how they work doesn’t require a PhD in computer science.
Think of an LLM as a vast, multi-dimensional map of language. While we might struggle to visualise more than three dimensions, these models operate in hundreds or thousands of dimensions, creating intricate webs of relationships between words and concepts. As Alex explains it: “Imagine you have coordinates for the word ‘dog’ and ‘woof’ – there’s a distance and angle between them. Apply that same relationship starting from ‘cat’, and you’ll reach ‘meow’.”
This simple example illustrates a pretty spectacular capability: LLMs can understand context and relationships in language that previously only humans could grasp. The same principles apply to more complex relationships – “Germany” to “sauerkraut”, “Japan” to “sushi”, or “question” to “answer”. It’s this ability to understand and generate human-like text that makes LLMs so valuable for business applications.
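Alex's dog/woof analogy can be sketched in a few lines of Python. The numbers below are purely illustrative toy "embeddings" in three dimensions (a real model would use hundreds or thousands), but the mechanics are the same: the relationship between two words is a direction in the space, and applying that direction elsewhere lands you on the analogous word.

```python
import math

# Toy 3-D "embeddings" -- made-up numbers for illustration only;
# real models operate in hundreds or thousands of dimensions.
embeddings = {
    "dog":  (1.0, 0.2, 0.1),
    "woof": (1.0, 0.9, 0.1),
    "cat":  (0.2, 0.2, 0.8),
    "meow": (0.2, 0.9, 0.8),
}

def offset(a, b):
    """The direction in the space from word a to word b."""
    return tuple(y - x for x, y in zip(embeddings[a], embeddings[b]))

def apply_offset(word, direction):
    """Move from a word's coordinates along a given direction."""
    return tuple(x + d for x, d in zip(embeddings[word], direction))

def nearest(point):
    """The known word whose coordinates are closest to a point."""
    return min(embeddings, key=lambda w: math.dist(embeddings[w], point))

# The "animal -> its sound" direction, learned from dog/woof ...
sound_direction = offset("dog", "woof")

# ... applied to "cat", lands nearest to "meow".
print(nearest(apply_offset("cat", sound_direction)))  # -> meow
```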
But here’s what makes them truly powerful: LLMs can process and understand content at a scale and speed that humans simply can’t match. They can analyse documents, generate responses, classify information, and identify patterns across vast amounts of text – all while maintaining an understanding of context that simpler automation tools lack.
Despite their capabilities, LLMs aren’t magic. They have specific limitations and characteristics that need to be understood for successful implementation. They can be biased based on their training data, they sometimes “hallucinate” or generate incorrect information, and they need careful prompting to produce reliable results.
The way to make them work well for you lies in choosing the right applications. Rather than trying to revolutionise everything at once, successful firms are identifying specific, well-defined tasks where LLMs can add immediate value. This might be document classification, customer service automation, or data analysis – tasks that are repetitive enough to benefit from automation but complex enough to require the sophisticated understanding that LLMs provide.
So how much does it all cost?
Well… that’s not an easy question to answer. The financial implications of these tools aren’t as straightforward as many AI providers might suggest. While cloud-based solutions offer an attractive entry point, the costs can quickly mount up as usage grows. So does it make more sense to run them locally?
Running an LLM on local devices or servers might be more cost-effective than pay-as-you-go cloud services – but this calculation comes with several caveats.
“I was trying to work out the costing compared to using pay-as-you-go through Azure OpenAI,” Alex explains. “My rough estimate is that when you get to 1,700 prompts it becomes cheaper using something like DeepSeek than it does pay-as-you-go. But that’s pure working it out, not actually testing it in real life, making assumptions about prompt lengths and all sorts of things.”
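The shape of that back-of-the-envelope calculation is simple: divide your fixed monthly local running cost by the cloud's per-prompt price to find the break-even prompt count. The figures below are made-up assumptions chosen to land on Alex's rough 1,700 estimate — they are not real Azure OpenAI or hardware prices.

```python
import math

# Break-even sketch: the per-prompt rate and monthly local cost below are
# illustrative assumptions, not actual Azure OpenAI or hardware pricing.
def breakeven_prompts(cloud_cost_per_prompt, local_cost_per_month):
    """Monthly prompt count at which local hosting becomes the cheaper option."""
    return math.ceil(local_cost_per_month / cloud_cost_per_prompt)

# e.g. £0.02 per cloud prompt vs £34/month to run a model locally
print(breakeven_prompts(0.02, 34.0))  # -> 1700
```

In practice, as Alex says, prompt lengths and real workloads will move this number around considerably — treat it as a sanity check, not a quote.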
One interesting approach to managing costs is batch processing. This is a standard practice with other compute-heavy workloads – so why not do it with LLMs? Rather than running AI jobs continuously, companies can queue up tasks on virtual machines and process them during off-peak hours.
As Alex suggests: “You could batch them all up in the queue and then at a certain point, say 11pm, a function turns on, everything goes, does all its work, and then turns off. So the cost of running that VM is reduced because it’s only running in batch per day and not open all day waiting.”
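The pattern Alex describes boils down to two pieces: a cheap queue that accumulates work during the day, and a drain step that runs everything in one burst when the timer fires. In this minimal sketch, `run_llm_job` is a hypothetical stand-in for your real model call; actually starting and stopping the VM would be separate cloud API calls around the drain.

```python
# Minimal "queue by day, process at night" sketch. `run_llm_job` is a
# hypothetical stand-in for your actual LLM call; spinning the VM up and
# down would be separate cloud API calls around drain_queue().
job_queue = []

def enqueue(prompt):
    """Accept work during the day -- no compute running yet, just storage."""
    job_queue.append(prompt)

def drain_queue(run_llm_job):
    """Process everything in one burst (e.g. when an 11pm timer fires),
    then leave the queue empty for the next day."""
    results = [run_llm_job(p) for p in job_queue]
    job_queue.clear()
    return results
```

The VM only needs to be on for the duration of `drain_queue()` — which is exactly why the running cost drops compared to a machine sitting idle all day waiting for prompts.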
The hardware requirements for running your own LLM locally are pretty substantial. Your standard mid-range Dell laptop from the early 2020s probably won’t cut it. Higher-end devices are getting more powerful GPUs and CPUs these days, but they aren’t cheap, and many ‘AI-ready’ devices are still set up to mostly offload AI compute to cloud services. Recent developments like NVIDIA’s smaller AI-focused computers could be an option, but they’re still a significant investment.
The choice between cloud and local deployment often comes down to a complex calculation involving:
Your expected usage volume and typical prompt lengths
The upfront investment in capable local hardware
How pay-as-you-go cloud costs mount as usage grows
Whether workloads can be batched into off-peak windows
So, it’s not particularly easy to say “using AI in this way will cost you this much”. Just like cloud cost optimisation, businesses are potentially looking at a future involving ‘AI cost optimisation’.
The privacy implications of AI implementation deserve particular attention, especially for UK businesses subject to GDPR. While major providers offer enterprise-grade security assurances, there’s growing concern about how user data might be used to train future models.
“Big Tech have a history of not being transparent about what they’re doing and why,” Alex notes. “With regards to OpenAI using your personal data, my understanding is if you have a free account and you upload a document to it, unless you have changed the settings, the terms and conditions say they can use that for their training data. If you’re a business, it’s the other way around – it is set to not use that as your training data, but my personal belief is it doesn’t matter. They’re using it all.”
This creates a particular challenge for organisations handling sensitive data. The solution increasingly points toward local deployment of open-source models, where data remains entirely within your control. So, this is a technology concern, but you also need to think about governance and risk management.
For data privacy, you’ll need to think about things like:
Whether your provider’s terms allow your data to be used for model training
Your obligations under GDPR when sensitive data passes through third-party models
Whether local deployment of open-source models would keep data fully in your control
The governance and risk management processes that sit around all of this
The most successful AI implementations we’re seeing aren’t necessarily the most ambitious – they’re the most focused. At Movera, for example, they’ve developed an address-matching system that uses AI to compare and standardise addresses from multiple sources.
“We have a customer, but we get lots of information from the customer, their estate agent, their bank,” Alex explains. “It sounds quite simple, but take the address ‘22 Acacia Avenue, London SW11’ – that’s the same address as ‘22 Acacia A., Westminster, Middlesex, London, SW11’. A human can see that quite easily, but trying to code that is really hard. You just take the ‘Avenue’ for example – you could have ‘Avenue’, ‘A’, ‘A.’, ‘Ave’…”
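A sketch of what LLM-backed address matching can look like. Here, `call_llm` is a hypothetical stand-in for whichever model API you use, and the prompt wording is our own illustration, not Movera's actual implementation — the point is that the hard part (‘Avenue’ vs ‘Ave’ vs ‘A.’) moves out of hand-written rules and into the model's language understanding.

```python
# Sketch of LLM-backed address matching in the spirit of Movera's system.
# `call_llm` is a hypothetical stand-in for your real model API, and the
# prompt text is illustrative, not Movera's actual implementation.
def same_address(addr_a, addr_b, call_llm):
    prompt = (
        "Do these two strings describe the same postal address? "
        "Answer YES or NO only.\n"
        f"A: {addr_a}\nB: {addr_b}"
    )
    # Normalise the model's reply before trusting it.
    return call_llm(prompt).strip().upper().startswith("YES")
```

As with any LLM output, you'd still want to test the accuracy of those YES/NO answers against a human-labelled sample before relying on them.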
This seemingly simple application shows some of the key principles of great AI implementation:
It solves a specific, well-defined problem
It automates a task that’s easy for a human but genuinely hard to code by hand
It delivers immediate value without trying to revolutionise everything at once
Will you be able to say the same things about your AI initiatives?
If you want some inspiration for AI-driven work improvements, take a look at these examples. Practical applications we’ve seen include:
Document processing and analysis, such as invoice data extraction
Customer service, such as query classification and routing
Content and communication
Data analysis and reporting
Software development and DevOps
The key to success with any of these applications is starting small and scaling based on proven results. You need to be really sure that these systems are accurate, and that might take some time to test.
If an LLM hallucinates an extra zero when processing an invoice, for example, that could end up being an expensive mistake to fix. Alex notes, “If you don’t care that OpenAI or tech companies see your invoices, having an LLM to process them would be reasonable. But if you’re going to do that, you need to be really careful that you’re getting the answer that’s correct.”
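One cheap guard against exactly that kind of mistake is to cross-check the LLM-extracted total against the sum of its extracted line items before trusting either figure. The field names below are illustrative assumptions, not a standard schema.

```python
# Cross-check an LLM-extracted invoice total against the sum of its
# extracted line items. Field names are illustrative assumptions.
def total_is_consistent(extracted, tolerance=0.01):
    """Return True only if the stated total matches the line items."""
    line_sum = sum(item["amount"] for item in extracted["line_items"])
    return abs(line_sum - extracted["total"]) <= tolerance

invoice = {
    "line_items": [{"amount": 120.00}, {"amount": 80.00}],
    "total": 2000.00,  # a hallucinated extra zero on 200.00
}
print(total_is_consistent(invoice))  # -> False
```

A failed check doesn't tell you which number is wrong — but it does tell you a human needs to look before the invoice goes anywhere.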
The buffet of AI options is expanding at a breakneck pace, but that doesn’t mean businesses should wait to get started. Alex suggests that AI is currently being hyped to the max and will eventually find its natural position.
“I think currently LLMs are like an elastic band,” he explains. “At the moment it is being stretched – a lot of people, marketers, advertisers, CEOs of giant tech companies are all saying this is going to do everything. I don’t think it is. At one point the elastic band is going to be let go, it’s going to go right back and then it’s going to find its natural mode. And in 10 years’ time, LLMs will be ubiquitous for certain instances in certain places.”
And the organisations that will benefit most are those that begin building practical experience now, focusing on specific, valuable use cases rather than trying to revolutionise everything at once.
We’re already seeing interesting developments in the market. The recent emergence of DeepSeek, claiming to offer similar capabilities to established players at a fraction of the cost, shows how quickly the landscape can change. While some claims might be optimistic (their quoted $5-6 million training cost seems improbably low), the broader trend towards more efficient, accessible AI solutions is clear.
The competition is also driving innovation in different directions. Local deployment options are becoming more practical as open-source models are gaining sophistication and computing requirements are gradually decreasing. Specialised AI models for specific industries are emerging, and integration capabilities are improving. It’s all quite exciting—what are you waiting for?
The key to successful AI implementation isn’t about having the biggest budget or the most advanced tech. Instead, we think it’s about having the right approach. As Alex notes from his experience: “I think if you’re a small business, you don’t have technical knowledge, it can be quite daunting. It can be quite a steep learning curve where I’m sure you’ve got a million other things that you have to look out for.”
His recommendation? “Keep it simple. Start with something little.” This might mean:
Automating one repetitive, well-defined task, such as document classification or query routing
Trialling a cloud-based model before committing to any hardware
Testing accuracy on a small workload before scaling anything up
One major aspect that often gets overlooked in AI discussions is the human element. While AI can automate many tasks, it’s most effective when it augments human capabilities rather than replacing them. “LLMs can’t be creative,” Alex points out. On Hollywood and TV: “If you replaced all the screenwriters with language models, we’re just going to get the same stuff again and again.”
This insight applies across industries – the goal should be to use AI to handle routine tasks while freeing humans to focus on the areas where they add the most value: creative problem-solving, strategic decision-making, and situations where human-to-human communication is invaluable, like complex customer interactions and relationship-building. These are the areas where people really shine.
The future of AI in businesses like yours might involve dramatic transformations and replacing some human workers. But it also might not. Right now, it seems wise to look for practical, focused applications that deliver real value.
Start small, be selective about your use cases, and maintain a clear focus on business outcomes. Don’t let the hype cycle distract you from the practical realities. Whether you’re looking at customer service automation, document processing, or data analysis, the principles remain the same. Start with well-defined problems, make sure you have a clear understanding of the privacy implications, and build your capabilities gradually.
“In 10 years’ time, LLMs will be ubiquitous for certain instances in certain places,” Alex predicts. “I don’t think it will replace everybody’s job. I don’t think it’s going to take over the world and kill us all with killer robots. I think it will find its natural place.”
That natural place is likely to be as a powerful tool in our business arsenal – not a magic solution to every challenge, but a valuable capability that, when properly implemented, can drive game-changing efficiency and innovation.
Need help getting started? At Synextra, our friendly Azure cloud experts specialise in helping businesses of all sizes build practical AI strategies. Whether you’re just beginning to explore AI possibilities or looking to scale your existing implementation, we can help guide your journey and make sure you’re making the most of these awesome new technologies. Get in touch to see how we can help.