
Ensuring variety in the AI-native era

Dr Yichuan Zhang, CEO and co-founder of Boltzbit, examines the flaws in a market dominated by homogenous AI foundation models, and why variety is necessary for the AI-native era.

A future where we achieve artificial general intelligence (AGI) is perhaps not quite as dystopian as one might think. AI has already proved to be one of the most powerful accelerators of human progress over the past few years, and we are already seeing exciting signals of just what is possible.

Researchers are making headway on problems that have puzzled us for decades. Nuclear fusion, long thought to be perpetually 30 years away, is now being modelled and optimised with AI in ways that materially change the pace of experimentation. If this trajectory continues, the implications extend far beyond energy into climate modelling, resource optimisation, and other global challenges.

The more immediate question, however, is not whether AI will deliver breakthroughs, but who will benefit from them. And, crucially, who gets to build with them. At present, the answer is surprisingly few.

The current boundaries of innovation

Most of today’s AI products are structurally the same, sitting atop a small number of foundation models trained on broadly similar datasets. The result is a market of apparent variety masking underlying homogeneity: different interfaces, but increasingly the same intelligence layer.

If the current trajectory holds, this small number of model providers will define more than just the capabilities of AI systems. They will define the boundaries of innovation itself and who is allowed to participate in it.

Unfortunately, a world where every AI product behaves similarly because it is powered by the same underlying intelligence is a world where differentiation erodes. More subtly, it is a world where individuality becomes harder to express through technology.

It is, therefore, imperative to redistribute this capacity without slowing the progress that has been made. That, though, requires a shift at the level that matters most: the intelligence layer.

Shaping the intelligence layer

If individuals and organisations are to meaningfully participate in the AI-native era, they need the ability to train and own the models that power their applications. This is where live learning becomes critical.

Static models, no matter how large, are snapshots. They improve through periodic retraining cycles controlled by their providers. Live learning models, however, evolve continuously in production, incorporating new data while users control what and how the models learn. It can be thought of as the difference between renting intelligence and owning it.
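The snapshot-versus-live distinction can be illustrated with a minimal sketch. This is a toy illustration under assumed names, not Boltzbit's API: a static model stays frozen after deployment, while a live one takes a small online update step each time the owner feeds it production feedback.

```python
# Hypothetical sketch: a static "snapshot" model vs a live-learning one.
# All class and method names here are illustrative assumptions.

class SnapshotModel:
    """Frozen at training time; improves only via provider-controlled retrains."""
    def __init__(self, weights):
        self.weights = dict(weights)

    def predict(self, key):
        return self.weights.get(key, 0.0)


class LiveModel(SnapshotModel):
    """Updates continuously in production, with the owner deciding what it learns."""
    def __init__(self, weights, learning_rate=0.1):
        super().__init__(weights)
        self.lr = learning_rate

    def learn(self, key, target):
        # A simple online gradient step toward the observed feedback value.
        current = self.predict(key)
        self.weights[key] = current + self.lr * (target - current)


static = SnapshotModel({"demand": 1.0})
live = LiveModel({"demand": 1.0})

for _ in range(20):
    live.learn("demand", 2.0)  # production feedback nudges the live model

print(static.predict("demand"))  # unchanged: still 1.0
print(live.predict("demand"))    # drifts toward 2.0 as feedback accumulates
```

The design point is where the update loop lives: in a snapshot model it sits with the provider's retraining pipeline, while in a live model the `learn` step runs inside the owner's production system.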

If we want an AI-native future that retains individuality, then every individual and organisation must have the ability to own and shape the intelligence layer powering their AI applications. It is important not to make that shift too late.

A bright future for the AI-native era

A word of warning. Building the technology is only part of the challenge. The real task is making it accessible at scale, in a way that is both usable and sustainable.

To make live learning systems truly valuable, tech teams need to understand how users interact with, adapt, and derive value from them in practice. That means observing how intelligence evolves when it is placed directly in the hands of users.

What matters is not the first iteration itself but what it reveals: how users shape their models, what kinds of feedback loops emerge, and how intelligence behaves when it is no longer centrally controlled. Those insights will inform what comes next and will ensure a bright future for all.

In the long term

The closest version of the AGI future that will benefit us all is not a single moment of transformation but a gradual shift. Without doubt, individuals will live increasingly AI-assisted lives. We are already seeing that happen. The organisations we all interact with daily will become increasingly AI-native, embedding AI into their core operating models.

This is unlikely to be the end state but, over the next few years, this is what an early version of an AGI world is likely to look like. And if that future is coming, as it most certainly is, then the current trajectory matters.

If the AI ecosystem continues to be dominated by a small number of foundation model providers, we are heading as a society towards a world where AI-assisted experiences are powerful but increasingly standardised. This would be a mistake.

It is important to ensure the democratisation of live learning: a world where any individual or organisation can train and own the intelligence layer powering their AI applications, and where that intelligence continues to evolve in production through feedback and new data.

When that happens, the performance and evolution of AI systems are no longer tied to the roadmap of a handful of providers. The intelligence powering them belongs to their creators and is shaped by the people who use them. That is the difference between participating in the AI-native era and inheriting someone else’s version of it.
