
Published 2025-01-27

The AI-futures we bet on

How can we relate to and manage the risks of AI as the technology matures? At Schibsted we evaluate where to be a first mover and where it makes sense to simply monitor and evaluate capability improvements. Karl Oskar Teien, Product Director of Subscription News, shares some insights from mapping out future AI scenarios.

Imagine you’re responsible for an internationally renowned news app, and with the press of an update button, iOS users worldwide receive outright misinformation on their home screens with your trusted brand name attached to it. Far from a dystopian future prediction, this scenario is in fact a real-life example of the risks news organizations run with AI-based disintermediation. A summary of BBC app notifications served by Apple Intelligence stated that high-profile murder suspect Luigi Mangione had shot himself, when in fact he had not. In a similar fashion, Google has been widely mocked for its inaccurate and erratic AI overviews in response to basic questions. No matter how robust a newsroom’s internal guardrails may be, AI functionality offered by third parties poses a real risk of eroding the trust news organizations have painstakingly built over decades. How can we leverage the opportunities of this technology without jeopardizing the defense that editorial oversight provides against AI slop?

Moving fast without breaking things

Undoubtedly, we must continue to demand rigorous quality assurance of new AI functionality by third parties that re-version our content. But when it comes to the experiments we run in our own products, there are many ways to manage the risks of generative AI. Across Schibsted, we have focused on setting criteria for quality assurance, rather than limiting the use of the technology as a matter of principle.

As the technology matures, we try to evaluate where there are significant first-mover advantages to be gained through early experimentation, and where it makes sense to simply monitor and evaluate capability improvements. Whether working with functionality offered by Apple or Google, or with companies that position themselves as journalism-friendly publishing platforms and content marketplaces, all publishers will be faced with major decisions about quality assurance, copyright, and distribution channels. The choices we make at this point in the evolution of the information ecosystem require clarity about which decisions are one-way doors that have “significant and often irrevocable consequences”, and which decisions are easily reversible ways of accelerating learning.

Thus far, (legacy) media organizations like Schibsted are still in the “AI efficiency phase” (as described by Caswell and Fang in their 2024 report), where “AI is applied primarily to existing tasks, workflows, and products in ways that essentially operate within the existing competitive environment”. There are (quite understandably) fewer efforts in established media companies to act decisively on a future where “the fundamental structure of the news and information ecosystem is different”. While we try to both predict and shape what that future might look like, there is already tremendous value in learning as much as possible about how we can create value today and in the near future.

When our team of leaders across product, design, and tech in Schibsted’s premium subscription newspapers met in Stockholm before Christmas, we sat down with the Swedish Omni team to dive deeper into AI scenarios they had mapped out through a Schibsted-wide evaluation of plausible futures that may play out in the coming years. Rather than addressing the broad societal implications of AI, our group zeroed in on how these changes affect the priorities of our product, design, and tech teams most directly in the coming 3-4 years. In the following sections, I’ll describe the most impactful scenarios we discussed, and how we might play to our strengths to face them.

Atomization of news will change workflows

Most newsrooms in Schibsted have already baked some AI functionality into tools for news gathering, content curation and versioning for different audiences and format preferences. We will undoubtedly need to continue rethinking internal workflows to support our ambition of adapting to changing expectations and habits among our users. 

A near-unanimous prediction in our group is that generic event reporting will continue to be commoditized, while expert analysis, commentary and boots-on-the-ground reporting will gain relative value. However, upon closer reflection, most participants in our discussions agreed that even our most celebrated journalistic formats might evolve into something different from what they are today. Although not inevitable, it seems likely that news stories will be further “atomized” into individual components such as facts, quotes, audio and video clips, ready to be remixed into new versions for particular audiences and format preferences (text/audio/video, long/short, detailed/simplified). The atomization of news also helps enable conversational news stories with a quality and dynamism that was previously not possible.
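To make the idea of atomization concrete, here is a minimal sketch of how an atomized story could be modelled as a pool of typed components that are remixed into versions per format and detail level. All names and the schema are illustrative assumptions, not an actual Schibsted data model.

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    kind: str          # "fact", "quote", "audio", "video"
    text: str
    detail_level: int  # 1 = essential, 3 = deep background

@dataclass
class AtomizedStory:
    headline: str
    components: list[Component] = field(default_factory=list)

    def render(self, fmt: str = "text", max_detail: int = 3) -> list[str]:
        """Remix the component pool into one version of the story."""
        allowed = {"text": {"fact", "quote"},
                   "audio": {"audio", "fact"},
                   "video": {"video", "fact"}}[fmt]
        return [c.text for c in self.components
                if c.kind in allowed and c.detail_level <= max_detail]

story = AtomizedStory("Election result", [
    Component("fact", "Candidate A won with 52% of the vote.", 1),
    Component("quote", "'A historic night', said the analyst.", 2),
    Component("fact", "Turnout details broken down by region.", 3),
])

# A short text version for readers who want only the essentials:
short_version = story.render(fmt="text", max_detail=1)
```

The same component pool can then be rendered as a long-form text piece, an audio brief, or a simplified summary, which is what makes the versioning scalability question discussed later so pressing.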

A lot of these changes to news storytelling will be enabled by content management systems that bake AI functionality into existing workflows. And while we at Schibsted have been fast movers in this field, largely because we have built these systems in-house, we also need to be open to the idea that specialized third-party solutions may, in the long run, challenge our current approach to newsroom tooling.

A revolution in content distribution and versioning

Algorithmic news feeds tailored to users’ needs and preferences are also widely implemented through passive and active personalization systems in Schibsted. In the future, it seems critical that our personalization efforts continue to bake editorial signals into content ranking and distribution. Most of our teams believe this approach effectively combines newsroom judgement with the strengths of algorithms.
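One simple way to picture how editorial signals can be baked into algorithmic ranking is a weighted blend of a model-predicted relevance score and an explicit newsroom priority, with a hard override for pinned stories. The weights, field names, and override rule below are hypothetical, for illustration only.

```python
# Hypothetical sketch: rank feed items by blending a model-predicted
# relevance score with an explicit editorial priority. Both scores are
# assumed to be normalized to [0, 1]; pinned items always rank first.

def rank_items(items, w_model=0.6, w_editorial=0.4):
    def blended(item):
        if item.get("pinned"):  # hard editorial override
            return float("inf")
        return (w_model * item["model_relevance"]
                + w_editorial * item["editorial_priority"])
    return sorted(items, key=blended, reverse=True)

feed = rank_items([
    {"id": "a", "model_relevance": 0.9, "editorial_priority": 0.1},
    {"id": "b", "model_relevance": 0.4, "editorial_priority": 0.9},
    {"id": "c", "model_relevance": 0.2, "editorial_priority": 0.2,
     "pinned": True},
])
# The pinned item comes first; the weighted blend orders the rest.
```

In practice the balance between the two weights is itself an editorial decision, which is why keeping newsroom signals as first-class inputs matters.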

In addition to content distribution, we will also need to create responsible and transparent systems for versioning and formatting of each news story. If we believe that the traditional article will in the long run lose its function as “the unit of news”, and that every story can be atomized and versioned in multiple ways, we quickly run into some fundamental questions about the media’s role in recording history as it unfolds. If not through news articles, how will our collective history be told and recorded? How will fact-checking and disputed claims be handled in a world of endless versioning? There may be lessons to learn from how organizations like Wikimedia handle differing opinions and disputes about what is ultimately true, but it is already clear that content versioning will require us to future-proof the guardrails we have in place to protect our mission.

As an example, verifying the accuracy of an article summary is relatively manageable, but once we have several versions of each article for different audiences and preferences, we quickly run into a scalability issue if we aim to maintain a “human-in-the-loop” principle. In addition to the BBC case mentioned earlier, we have also seen AI summaries falsely claiming that Israeli Prime Minister Benjamin Netanyahu had been arrested, and that a darts player had won a game that had yet to happen. Although humans also make mistakes, there is a risk that AI-generated false information will be spread at a scale we have not seen before, and that it could all happen before we have the right systems in place for error correction.

In Schibsted, we are focused on thoroughly testing our guardrails and human-in-the-loop routines. The question in the long run will be whether simple statistics on error rates are sufficient to justify expanding AI capabilities into a wider range of use cases. In our discussions on this topic, the group remained united in the idea that all experimentation must be reversible and must avoid jeopardizing the trust our users put in the accuracy of our reporting.

Preparing for a multitude of futures

Faced with several potential scenarios in the coming years, news organizations must embrace portfolio thinking to prepare for both incremental and disruptive changes. While AI may strengthen news destinations, we must also hedge against scenarios where user relationships are disintermediated. As user behaviours change gradually and then suddenly, we need to prepare for significant tipping points in the coming years. In that future, it is likely that users’ trust in our process, legacy, and world-class journalism will be monetized differently than today. 

Several of the scenarios we discussed are already here in some form today, yet the speed of change is highly uncertain. While we prepare for everything from incremental to disruptive changes to the information ecosystem, we continue to be bold about putting new experiences in front of users to accelerate learning. With the right guardrails and human-in-the-loop processes in place, we can fundamentally improve the way we produce world-class journalism, while ensuring that our users can access that journalism in ways that fit into changing habits. Assuming that this is just another version of a story we’ve seen before would be a naive and risky bet.