Clankers and content operations
Written by Simeon Griggs
This blog post is a recap of our presentation at Next.js Conf 2025.
Without context, AI-generated content is for people who hate their audience. AI tools don’t know your goals unless you give them context, and providing that context requires a human touch.
AI is awesome—the most significant technology upgrade in a generation. But we believe content operators are still required and as valuable to the content creation process as ever.
AI tooling feels like something given to us from the future, but we get to control what the future looks like. Do you want humans relegated to the role of bystanders while AI does mediocre work? Or do you believe in a future where human content operators use AI to level up the ambition and scale of their work?
We’re strong advocates for the latter.

Sanity, the Content Operating System
In the past, we’ve regularly described Sanity as a “platform for structured content,” a place to store everything your business knows, structured by what those things are, not what they look like.
One of the great features of LLMs is the ability to parse unstructured content and give it structure. For example, an LLM can take a string of HTML and extract entities like names, pictures, and prices. This should not be seen as a reason to settle for storing your content in presentational formats, but as an opportunity to store everything your business knows and values in a deliberate structure.
We’ve also said, “Content is data,” which is still true, but if we rewrote this tagline today, it would likely be “Content is context.”
It's basically the same idea, just with different words.
You can extract more value from your content when you author and query what your business knows—not what your website looks like.
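To make that concrete, here is a minimal sketch of querying content by what it is rather than by which page it appears on, using @sanity/client and GROQ. The project ID, dataset, and the “product” document type are assumptions for illustration, not a real content model.

```typescript
import {createClient} from '@sanity/client'

// Assumptions for illustration: project ID, dataset, and the "product"
// document type are placeholders.
const client = createClient({
  projectId: 'your-project-id',
  dataset: 'production',
  apiVersion: '2025-01-01',
  useCdn: true,
})

// GROQ queries content by what it is (products under a price),
// not by where it happens to be rendered.
const affordableProducts = await client.fetch(
  `*[_type == "product" && price < $maxPrice]{name, price, "imageUrl": image.asset->url}`,
  {maxPrice: 100},
)

console.log(affordableProducts)
```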
The age of context management systems
CMSes have traditionally focused on workflows where a single content operator edits a distinct web page, returning to make more changes to that same page over the lifecycle of that content.
This doesn’t scale.
And it isn’t how developers work.
Developers write “source code,” which is written to be easy for humans to understand, and then pass it through a compiler, which creates the final output rendered by web browsers.
As development work becomes more AI-enhanced, developers increasingly write product requirement documents, or “specs,” which are passed through an LLM to produce output code. In an ideal world, the output code never needs direct editing; instead, updates to the spec result in better output.
So why don’t content operators get to work like this?
It's time to let content teams write source content: individual pieces of structured content that combine to create context and generate relevant, accurate content on demand, at massive scale.
We can shift our thinking from a Content Management System to a Context Management System.
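As a rough sketch of what “source content” can look like in practice, here is a hypothetical Sanity schema for a reusable piece of knowledge rather than a page. The type and field names are illustrative assumptions.

```typescript
import {defineField, defineType} from 'sanity'

// A hypothetical "source content" type: a reusable fact about the business,
// structured by what it is rather than how any one page presents it.
export const productFact = defineType({
  name: 'productFact',
  title: 'Product fact',
  type: 'document',
  fields: [
    defineField({name: 'claim', type: 'string', title: 'Claim'}),
    defineField({name: 'evidence', type: 'text', title: 'Supporting evidence'}),
    defineField({
      name: 'audience',
      type: 'string',
      title: 'Intended audience',
      options: {list: ['developers', 'marketers', 'executives']},
    }),
  ],
})
```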

What are content operations?
Another sticking point with CMSes is that they’re often used only as a place for content teams to paste text they’ve prepared somewhere else and press publish.
Think of all the work required leading up to that moment: research, tasks, approval processes, rounds of review, and more. These are all content operations.

Further, consider how much work needs to happen after pressing publish: revalidating a cache, rebuilding a site, running workflow functions, translating, syndicating, and more. These are all content operations.
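As one small example of the “after publish” side, here is a sketch of a Next.js route handler that a Sanity webhook could call to revalidate cached content. The route path and payload shape are assumptions, and webhook signature verification is omitted for brevity.

```typescript
// app/api/revalidate/route.ts (path is an assumption)
import {revalidateTag} from 'next/cache'
import {NextResponse} from 'next/server'

export async function POST(request: Request) {
  // Assumption: the webhook is configured to send the published
  // document's _type in its payload. Verify the request signature
  // in production before trusting it.
  const {_type} = await request.json()

  // Revalidate any cached fetches tagged with this document type.
  revalidateTag(_type)

  return NextResponse.json({revalidated: true, tag: _type})
}
```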
Sanity is for everything before and after pressing publish.
How Sanity fosters human and robot collaboration
Since launching in 2018, Content Lake has powered real-time, collaborative authoring experiences, whether the collaborators are multiple humans or robot tokens.

We didn’t know it then, but large language models arrived sooner than we expected. Thankfully, the underlying infrastructure was already in place to power content creation where robots can work faster and at a greater scale than any human collaborator could.
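As a sketch of what a “robot” collaborator looks like in practice, the example below writes a document through the same API humans use, authenticated with a token. The project ID, the SANITY_WRITE_TOKEN variable, and the “article” type are assumptions for illustration.

```typescript
import {createClient} from '@sanity/client'

// Assumptions: project ID, dataset, and the "article" type are placeholders;
// SANITY_WRITE_TOKEN is an API token with write access (the "robot").
const client = createClient({
  projectId: 'your-project-id',
  dataset: 'production',
  apiVersion: '2025-01-01',
  token: process.env.SANITY_WRITE_TOKEN,
  useCdn: false,
})

// A robot creates content through the same real-time datastore as a human
// editor, so drafts, history, and collaboration behave the same way.
const doc = await client.create({
  _type: 'article',
  title: 'Drafted by a robot, reviewed by a human',
})

console.log('Created document', doc._id)
```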
This does not mean humans have a passive role while robots take the wheel. Instead, authors can fine-tune prompts and provide additional relevant context about their business and goals to better inform and guide content created by LLMs.
Sanity has built-in AI tools for schema-aware content creation, such as AI Assist, Agent Actions, and the new Sanity Agent.
Already in production
The concepts here aren’t theoretical or from the future. Sanity customers are already putting this into practice today.
Travel-tech start-up and Sanity customer loveholidays run almost their entire business logic through Sanity.
Their content model includes “prompts,” which are fine-tuned by content teams—not developers—to generate content at a scale otherwise impossible for such a small team.
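As a hypothetical sketch of that pattern (the type and field names are assumptions, not loveholidays’ actual model), a prompt can live in the content model where editors, not developers, refine it:

```typescript
import {defineField, defineType} from 'sanity'

// Hypothetical example of storing an editor-tunable prompt as content.
export const generationPrompt = defineType({
  name: 'generationPrompt',
  title: 'Generation prompt',
  type: 'document',
  fields: [
    defineField({name: 'title', type: 'string'}),
    defineField({
      name: 'instructions',
      type: 'text',
      title: 'Prompt instructions',
      description: 'Guidance an LLM follows when generating copy for this use case',
    }),
    defineField({
      name: 'tone',
      type: 'string',
      options: {list: ['friendly', 'formal', 'playful']},
    }),
  ],
})
```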
Go beyond: Sanity Agent

The focus of this talk was authors writing prompts stored within Sanity. But this is just one way to leverage AI in the content creation process, and creating content is only one benefit of AI tooling.
Sanity Agent is a way to research, refine, query, and create using your content as its context.