
What does the DeepSeek AI release mean for me and my business?

18 February 2025 By Frans Lytzen

The Chinese company DeepSeek caused quite a stir when they released their “DeepSeek R1” Large Language Model (LLM) in January. It is ostensibly as powerful as the best LLMs from OpenAI, including GPT-4o, but it costs significantly less to train and run.

With this release, DeepSeek challenged a lot of assumptions. There is a lot of breathless commentary about it at the moment, but what does it actually mean for normal businesses?

Here is my personal take. I don’t for one second claim that this is “the truth” - this is my opinion. Please let me know your thoughts in the comments.

TL;DR

  1. It won’t have any direct impact on you and your business. Just crack on with what you were doing. At best, you will have some more tools in your toolbox, but this should not affect your AI strategy (in most cases).
  2. This will very likely trigger a surge in AI innovation - keep an eye on other announcements and developments over the next few months.
  3. This is a positive for global stability - it helps to reduce the risk of a de-facto AI monopoly, which could eventually cause serious instability.
  4. DeepSeek uses a lot less energy than existing models. This gives some hope that the AI boom may end up contributing less to climate change than currently expected.

[Illustration: an open AI toolbox on a circuit-board background, with tools labelled GPT-4, DeepSeek R1, Mistral, LLaMa and Copilot.]

Does this mean I should change my plans around AI?

In short: No.

  1. Most AI frameworks use an abstraction around the actual LLM that is used. This allows you - up to a point - to swap different models into your bigger solution (see the sketch after this list). And remember, the actual LLM part is a very small part of the whole AI solution you have to build.
  2. Most AI business solutions benefit from using multiple different LLMs in the same solution, for different parts of the problem. Looked at that way, DeepSeek just becomes another wrench in the toolbox.
  3. I fully expect many more challengers to OpenAI to emerge in the next few months (see below). If you change your strategy every time a new LLM enters the market you will never get anything done.
  4. DeepSeek does lower the cost of running AI workloads. However, I think that the impact of this is relatively low for most people:
    1. The cost of even GPT-4o is already pretty low for most common business scenarios. It costs between £0.50 and £2 to process a couple of average books, depending on how you go about it (the latest o1 model costs about six times more, but it’s designed for something else so it isn’t really comparable).
      For some use cases that still adds up to a lot of cost, but I would argue that for many, if not most, common business use cases the LLM cost is not the limiting factor.
    2. It’s a competitive market and DeepSeek will put downwards pressure on prices. Alibaba already started this trend last year with several huge price cuts.
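
To make point 1 concrete, here is a minimal, hypothetical sketch of the kind of abstraction I mean. The ChatModel interface, the FakeModel stand-in and the summarise_contract function are illustrative names of my own rather than any particular framework’s API; frameworks such as LangChain or Semantic Kernel offer similar abstractions out of the box.

    from typing import Protocol

    class ChatModel(Protocol):
        """The minimal interface the business logic depends on."""
        def complete(self, prompt: str) -> str: ...

    class FakeModel:
        """Stand-in for any real LLM client (GPT-4o, DeepSeek R1, Mistral, ...)."""
        def __init__(self, name: str) -> None:
            self.name = name

        def complete(self, prompt: str) -> str:
            # A real implementation would call the model's API here.
            return f"[{self.name} would answer: {prompt[:40]}...]"

    def summarise_contract(model: ChatModel, contract_text: str) -> str:
        # The business logic only cares about the ChatModel interface,
        # so swapping GPT-4o for DeepSeek R1 is a one-line change at the call site.
        return model.complete(f"Summarise the key obligations in:\n{contract_text}")

    if __name__ == "__main__":
        print(summarise_contract(FakeModel("gpt-4o"), "The supplier shall..."))
        print(summarise_contract(FakeModel("deepseek-r1"), "The supplier shall..."))

The point of the pattern is that the rest of the solution never needs to know which model is behind the interface, which is why a new release like DeepSeek R1 should not force a change of strategy.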

In my experience, by far the biggest barrier to implementing AI in most businesses is simply getting started. As we like to say - it’s a Change Management problem, not a Technology problem (nor, in most cases, a cost problem).

So, just get started, today: Several more models will have been released before you get over the cultural and change management humps.

AI is evolving fast—are you keeping up? While DeepSeek R1 is a fascinating development, the key to AI success is having the right strategy. If you want to explore how AI can drive real value in your business, check out our solutions at neworbit.co.uk/ai. Let’s make AI work for you!

The accelerator effect

The thing that makes all LLMs work is the “Transformer” architecture. It was invented at Google and released to the world in 2017, in the paper “Attention Is All You Need”.

It was precisely because this crucial discovery was in the public domain that, over the next several years, multiple companies were able to build Large Language Models. Google releasing this knowledge kickstarted the entire Generative AI wave: the “AI” we had before hadn’t really developed in a very long time, other than by adding more compute power.

Similarly, in February 2023, Meta open-sourced their LLaMa language model, and two weeks after that its weights were leaked. The next ten weeks probably saw more advancement in AI than the preceding several years put together, because tens of thousands of researchers and hobbyists were all of a sudden able to tinker with these tools at home. See Steve Yegge’s essay for some fascinating insights.

Okay, that’s history - so what has this got to do with DeepSeek? Only that DeepSeek has also open-sourced their model. If history is anything to go by, I predict that this will cause a surge in innovation, similar to what we saw in the spring of 2023. It is entirely possible that some genuine leaps will happen in the next few months - watch this space.

If nothing else, I think it is almost guaranteed that several other OpenAI competitors will emerge in the next few months and LLMs will likely become more like interchangeable commodities.

The geo-political context

The 2021 Reith Lectures had a lot of interesting predictions about AI (I still recommend listening to them today). One particularly worrying topic was the risk that one nation or one organisation would “invent AI” and be able to monopolise it. The potential for global conflict arising from such a scenario cannot be overstated.

With the Transformer concept being in the public domain and the various Open Source models that were released, it looks like we avoided that particular pitfall. Nonetheless, OpenAI (in particular) have worked hard to try to build a de facto monopoly by:

  • Focusing on the “scale is all that matters” concept, which puts huge financial and logistical barriers in the path of the competition.
  • Keeping everything they do secret.

Every time someone comes along and open sources something that challenges the established players, it reduces the risk that we end up with someone having a monopoly on AI. This is undoubtedly a very good thing.

The environmental impact

The mantra for the ever-better AI models has long been “Scale is all that matters”. In essence, build bigger data centres, buy more Nvidia chips, give it more data and - above all - feed it more energy. There are many reasons for this, many of them commercial rather than technical, but I won’t go into the details of that argument, here.

All I will say is that there is a reason that so many large tech companies have recently been reversing their stance on becoming environmentally friendly: Growing AI - at least under the prevailing mantra - requires insane amounts of energy and that, with current energy technology, is simply incompatible with green targets (and, arguably, with the concept of a liveable world).

In many ways, by far the most important part of the DeepSeek story is not its capability, but the fact that it was built with a fraction of the energy that comparable models required.

This offers some real hope that, maybe, we can avoid boiling the earth while still developing AI technology.

Should I use DeepSeek?

Well, you could. There are some data privacy concerns with DeepSeek’s own hosted service, what with it being based in China. But then again, there are also data privacy concerns with models hosted by US companies, not least in the current political climate.

My key point is, really, that I don’t think it matters all that much which model you use. There are plenty available, including LLaMa, Phi, Mistral, Anthropic’s Claude and many, many others, with different hosting options, licence terms and strengths and weaknesses. And it’s not actually that hard to swap models, so experiment away.
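
If you want to experiment, here is a rough sketch of how such a swap can look in practice, given that DeepSeek exposes an OpenAI-compatible API. The base URL and model names below are my assumptions based on DeepSeek’s published documentation, so verify them before relying on this; the point is that the surrounding code stays the same whichever provider you point it at.

    # A rough sketch: the same calling code, pointed at two different providers.
    # Assumptions: DeepSeek's OpenAI-compatible endpoint at https://api.deepseek.com
    # and the model name "deepseek-reasoner" for R1 - check their docs before use.
    from openai import OpenAI

    openai_client = OpenAI(api_key="sk-...")  # hosted GPT-4o
    deepseek_client = OpenAI(api_key="sk-...", base_url="https://api.deepseek.com")

    def ask(client: OpenAI, model: str, question: str) -> str:
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": question}],
        )
        return response.choices[0].message.content

    # Same code path, different model behind it:
    # ask(openai_client, "gpt-4o", "Summarise our returns policy.")
    # ask(deepseek_client, "deepseek-reasoner", "Summarise our returns policy.")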

What do you think? Will DeepSeek R1 change the AI landscape, or is it just another tool in the box? Have you experimented with different LLMs in your business? Let’s discuss in the comments! And if you’re looking to harness AI effectively, visit neworbit.co.uk/ai to see how we can help you navigate the AI revolution. 🚀