Fear and loathing (for family offices) in the LLM information economy

We have entered the LLM information economy, a messy period in which the foundations of truth and reputation are being destabilised. In this article, Simple Expert Martin Jenewein highlights the unpredictable nature of LLM outputs, discusses the threat of data poisoning and the legal exposures stemming from AI hallucinations, and proposes safeguards for family office advisors integrating LLMs into their work.

What you need to know

  • Truth and reputation are becoming unstable because LLMs interpret, rather than index, information, producing outputs that shift constantly with model updates and opaque training methods.
  • In the era of LLMs, reputation is no longer built on visibility alone. The real currency is trust and verifiability, which now supersede clicks and rankings.
  • Misuse of AI-generated content can lead to real-world consequences, including reputational damage and legal exposure, necessitating clear safeguards and verification.

When a journalist from a leading international newspaper told me that their site traffic had dropped 20% in a single month after Google introduced AI-generated search summaries, I was stunned. If a small algorithmic tweak can strip one of the world’s most reputable media outlets of a fifth of its audience, what does that say about the fragility of our information-based digital economy?

We are entering what I call the LLM information economy — a messy world where large language models (LLMs) no longer just index information but interpret it, thereby becoming an increasingly relied-on source of truth for users. LLMs synthesise, filter, and prioritise knowledge in ways even their developers can’t fully explain and are constantly trying to analyse. The once-familiar hierarchies of credibility — “page one of Google,” “most cited source,” “verified account” — are disintegrating behind a veil of unfathomable algorithms.

The volatility of truth

A partner at a reputation management firm recently noted that large language models rarely produce identical outputs, even for the same query. Their answers drift because of constant model updates and opaque training data. They are also unreliable: the European Broadcasting Union just published a report finding that 45% of AI assistant responses misrepresented news content and that about a third showed serious sourcing problems.

This volatility has profound implications. If information becomes non-repeatable, reputation becomes unstable. A recent academic paper showed that inserting as few as 250 falsified data fragments into a model’s training data can significantly distort an LLM’s output — a form of data poisoning. The result? The informational DNA of our systems can be hacked. For companies, governments and investors, this means that reputation risk has moved from the communications department towards the cybersecurity team. This is big news. Just as Web 2.0 once was, LLM manipulation has become the new playground for trolls all over the globe.

Visibility is no longer a hard currency

For two decades, digital reputation was built mainly on visibility — how high you ranked, how often you were cited, how widely your content was shared. This currency was often linked to revenue in a near-linear relationship, and a whole industry grew up around search engine optimisation in response to Google’s algorithms. In the LLM era, however, visibility is a poor proxy for reputation. Large language models don’t necessarily favour the loudest voices; they privilege content that best fits their way of reasoning and is consistently corroborated. Reputation, in this new ecosystem, is a moving average of trust, permanently recalculated.

The LLM revolution is barely two years old, yet it has already upended the flow of information across industries. Things will remain messy for the time being, with every new update or model published by the AI champions vying for market dominance. In this new economy, reputation will depend on how traceable, verifiable, and resilient your information footprint remains as the underlying models keep shifting. In this environment, it will not be enough to look only occasionally at the content that models produce: messaging needs to be reinforced in line with the algorithms’ reasoning and, of course, monitored regularly.

When the LLM turns out crap

With all of the above, it’s no surprise that mistakes happen. Where they affect service providers, they may result in legal liabilities. An advert from an insurer recently warned about negligence claims stemming from LLM-generated hallucinations. The warning was timely: in Australia, one of the Big Four consulting firms had to compensate a government agency after an AI-generated report was found to contain fabricated content. I’ve seen similar incidents first-hand, where LLMs merged unrelated data sources and produced polished but meaningless analyses. The lesson: AI hallucinations are no longer (only) embarrassing; they have consequences. This should also be a lesson for principals when tasking consultants: safeguards must be in place to prevent sloppy or careless use of LLM-generated information.

In summary:

  • Engagement letters should specify whether and how AI tools are used, and who bears responsibility for verifying their outputs.
  • For privacy reasons, it should be made clear which information shared by a principal may be processed by AI tools.
  • Any AI-generated content should be treated as a draft until verified.
  • Even verified information should be seen as a snapshot that may change over time.

While AI has streamlined and simplified the way we operate, its use carries hidden complexities that can prove costly for the naïve or uninformed. Family advisors will have to keep both sides of the coin in mind.


About the Author

Martin Jenewein

Risk Management & Strategic Advisor

Martin Jenewein is a trusted advisor with nearly two decades of experience in strategic communications, specialising in reputation management, high-stakes transactions and cross-border disputes.
