
When I read Alan Soon’s latest newsletter, this sentence immediately resonated:
“I’m truly exhausted by the seemingly unevolving narratives around media and journalism.”
Since the start of this year, the world has felt more chaotic than ever, making it almost impossible for media professionals to keep up. In parallel, changes in content production are accelerating under the influence of artificial intelligence. Alan’s impatience with incremental reform in the media landscape echoed with me because, as he makes crystal clear, the ecosystem is shifting faster than the discourse about it.
I’ve been experimenting quite a bit with AI in the last couple of months, mainly to see how I can work more efficiently without extra costs. I trained a few GPTs; one of them is called my ‘critical focus assistant’, a kind of mentor that keeps me aligned with my goals. I sometimes run opinions of people I admire through this GPT and ask: “Thoughts?” So I did that with Alan’s fiery email and some other insights on AI that I collected from conversations with media professionals in the last few weeks.
The work Inclusive Journalism does around power, ethics, the coloniality of knowledge systems, and decision-making and leadership under uncertainty isn’t directly about how fast anyone adapts to AI, but I’m always interested in the deeper layer. AI-curious founders, startup ecosystem actors, and innovation labs tend to be spaces of technocratic optimism. The urgency, though, is unmistakable: the ground has shifted, and there are no excuses for not wanting to pivot. I don’t necessarily thrive in an urgency culture, but there are key questions I can’t ignore.
From prompt to context engineering
The first is the distinction mentioned in Alan’s writing between being a content factory and being on a mission. If writing disappears or becomes automated, what value does journalism retain? What is journalism for?
Earlier this week, I interviewed a media founder from Latin America whose fact-checking newsroom has accomplished more in just three years than many teams twice its size. She was transparent about her use of AI from the start. Her outlet uses it not only as a tool to track quotes faster, but also for editorial planning and internal organization. When people were amazed, early on, that all this work comes from just two people, she simply said: “We use AI to optimize.”
It’s one of the few times I’ve heard a founder admit this so openly. The reality is that without AI, her fact-checking outlet, which performs with high credibility and strong audience engagement, wouldn’t have been nearly as impactful.
The second question concerns what Alan calls “context engineering”, which is a new concept to me.
The term comes from AI and developer communities, where people realized that getting good AI output is not just about prompting. Prompt engineering is crafting the perfect question, and context engineering is designing the information environment around the model.
In simple language: if you hire a brilliant assistant and every day you ask them to write an article, that would be prompt engineering. But if you give that assistant your strategy document, past reports, audience data, values, mission, vision, and keep everything organized in a shared system, that’s context engineering.
It’s in a way what I’ve done already with my trained assistant. If I went a step further and documented all my projects, archived my thinking, created instruction layers, and fed all my data into the system, the assistant would be even better at detecting patterns, catching inconsistencies, and suggesting improvements. I’m not sure I want to go that far, but I get the point.
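To make the distinction concrete for myself, I sketched it in a few lines of Python. Everything in it is invented for illustration: the call_model placeholder, the document fields, the newsroom details. It isn’t how any particular tool works, just the shape of the idea.

```python
def call_model(messages: list[dict]) -> str:
    """Placeholder standing in for whatever assistant or API you actually use."""
    return "(model response)"

# Prompt engineering: everything rides on one carefully worded request.
prompt_only = [
    {"role": "user", "content": "Write a 600-word explainer on the new housing law."}
]

# Context engineering: the same request, but wrapped in a curated, persistent
# information environment the assistant can draw on every time.
newsroom_context = {
    "mission": "Independent fact-checking for underserved communities.",
    "style_guide": "Plain language, sources linked, no unnamed officials.",
    "past_coverage": ["2023 housing series", "eviction data investigation"],
    "audience_notes": "Mostly mobile readers; many second-language speakers.",
}

with_context = [
    {"role": "system", "content": (
        "You assist a small fact-checking newsroom.\n"
        f"Mission: {newsroom_context['mission']}\n"
        f"Style guide: {newsroom_context['style_guide']}\n"
        f"Relevant past coverage: {', '.join(newsroom_context['past_coverage'])}\n"
        f"Audience: {newsroom_context['audience_notes']}"
    )},
    {"role": "user", "content": "Write a 600-word explainer on the new housing law."},
]

print(call_model(prompt_only))   # output shaped only by the question
print(call_model(with_context))  # output shaped by the whole environment
```

The question is identical in both cases; what changes is the environment the model sees before it answers. That environment is where the editorial choices, and the blind spots, live.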
Understanding the systems we’re building
One of the issues I detect immediately, though, is that all of this is part of a worldview that assumes progress through technology. If we structure all our knowledge properly, then our intelligence improves. I’m just not convinced it’s that straightforward.
In journalism and especially in inclusive and decolonial practices, context is always political. So, I find myself asking:
- When a newsroom trains AI on itself, what exactly is being made scalable?
- If AI learns from our existing newsroom patterns, does it amplify our blind spots as efficiently as it amplifies our strengths?
- What editorial assumptions are we formalizing when we “optimize” our workflows?
- What knowledge never makes it into the system?
- Whose documentation becomes “training data”?
- When we train AI on our internal context, are we improving journalism, or are we strengthening our existing editorial logic?
If production becomes automated and journalism turns into infrastructure, the role of leadership shifts from producing content to governing systems. Institutions already struggle with ethical AI integration. At a recent conference, the organizer told me that 80% of the session pitches they received were written by AI. She was devastated. To her, it felt inauthentic.
But I couldn’t help thinking: who benefits from “pure” writing standards? For journalists whose first language isn’t English, AI might be an access tool. The ability to optimize a pitch could mean the difference between being excluded and being heard.
A colleague who works as an editor for a newsroom said something similar: most story pitches they receive now show signs of AI assistance. They don’t get immediately dismissed, but it does take more effort to understand what the journalist actually thinks. More video calls and clarification. It reminded me of the article in Harvard Business Review about how AI doesn’t reduce work, it intensifies it.
This tension is not new. One of the largest newsrooms in the Netherlands struggled with diversity for years while using an outdated entrance test focused on general cultural knowledge. Applicants who hadn’t grown up in the Netherlands — even if they spoke three languages and had deep knowledge of different cultures — often failed that test. The system rewarded familiarity with the dominant context instead of the perspective the newsroom actually needed.
The risk of AI is that it can easily replicate these dynamics at scale. If we optimize based on existing norms, we may simply automate outdated standards.
So AI adoption is a leadership test, not only of speed but also of judgment. Training models, feeding them context, optimizing workflows: all of this requires a decolonial lens. Otherwise, efficiency becomes a new form of exclusion.
The Latin American founder didn’t hesitate to say AI will never replace journalism. It can be used as a resource, but human interaction, direct conversations, situated judgment, and the ability to read nuance will all remain essential to knowing what is true.
In that sense, the real leadership test is probably less about how fast we adapt and more about whether we understand the systems we are building and take responsibility for them.