
May 19th - May 23rd, 2025 || The moment AI became normal

Note from Joost about AI Co-Creation:
This article was created in dialogue with my co-creative AI sparring partner. It started with my personal reflections from the week. The AI helped identify a relevant theme, asked targeted questions to deepen the insights, and then drafted this piece using my input and answers.
While the AI structured and drafted the text based on our interaction, the core ideas, experiences, and insights are mine. I've edited the result carefully to ensure it accurately reflects my voice, perspective, and intent, turning raw reflection into a shareable 'field note'.
My aim remains to foster an environment where we can learn together, and to embrace curiosity about these new ways of working and the insights they can help surface.

The moment AI became normal (and what that actually means for how we work together)

Something shifted for me this week, and I'm still working through what it means.
I was in Amsterdam at a workshop for a mental health network transformation plan. Beautiful facilitation happening—people from the network were sharing authentic insights, building on each other's ideas, real depth in the room.
We were using Dembrane to capture everything, and I caught myself thinking: "This is just... normal now."
A few months ago, recording a conversation and having AI generate insights would have felt like magic. Now? It felt like using email. Just another tool.
But here's what got me: while I'm getting used to this technology, I watched the faces around the table. For many people, seeing AI synthesize their conversation into clear themes and action points still creates genuine "wow" moments.
That contrast made me wonder—what actually matters as AI stops being special and starts being infrastructure?

We're asking the wrong question about AI and facilitation

Everyone keeps asking "Will AI replace facilitators?" But watching this Amsterdam workshop, I think that's the wrong frame entirely. It's like asking "Will washing machines replace families?" when the real question is how tools change the nature of what we value.
The facilitators weren't just managing a conversation—they were architecting the conditions for collective wisdom to emerge. And that's not something you can automate, because it's fundamentally about reading people, creating safety, and knowing when to push or when to give space.
What struck me was how the quality of what the AI could produce depended entirely on how well the facilitators had designed the input. You can have the most sophisticated recording and synthesis technology, but if people aren't in the right mental space to contribute authentically, you're just automating mediocre insights.

The art of getting people into the right "thought flow"

Later this week, I was in a different preparation meeting—this one for another transformation plan where we were thinking about prioritization and network dynamics. And it hit me again: the real skill isn't in prompting the AI afterward, it's in "prompting" the humans beforehand.
How do you get people to respond from their genuine role, their lived experience, their best intentions for others? It's like giving someone a role description, but not literally—more like getting them to that ideal mental place where they're thinking from their authentic perspective.
I realized we often ask the wrong questions. Instead of "How would you prioritize this?" we might ask:
  • "What impact do you think this would actually have?"
  • "How much effort do you think this would take to execute?"
Same information, completely different quality of response. It's the difference between asking someone if they'd theoretically buy something for €50 versus watching them actually consider spending their own money in a real store.
When you work backwards from your desired outcome—like wanting a clear action plan—you start thinking about what puzzle pieces of insight you actually need. What options exist? What criteria matter for choosing? How do we make visible all the ingredients that go into a good decision?

When AI helped me think about thinking together

On the train back, I found myself with four different AI conversations from the week—workshop design, processing insights, refining approaches. These threads had gotten really deep, and I realized I was treating each one almost like a different thinking partner.
So I tried something: what if I made that explicit? What if I actually orchestrated these AI conversations like I would a human collaboration?
I asked each AI to summarize our work together. Then I fed those summaries to both ChatGPT and Gemini, asking them to synthesize everything and critique each other's output. I kept bouncing feedback between the models—each one reviewing and improving the other's work—until we reached what they rated as 98 out of 100 for a workshop design.
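For the technically curious, here's a minimal sketch of that bouncing-feedback loop, just to make the structure explicit. The `ask_model` helper is a hypothetical placeholder, not a real library call; swap in whichever chat API you actually use.

```python
# Sketch of the cross-model critique loop described above: two models
# take turns reviewing and improving a shared draft. `ask_model` is a
# hypothetical stand-in for whichever chat API you use.

def ask_model(model: str, prompt: str) -> str:
    """Hypothetical wrapper around your chat API of choice."""
    raise NotImplementedError("plug in your own API call here")

def cross_critique(summaries: list[str], rounds: int = 3) -> str:
    # Start from the per-conversation summaries gathered earlier.
    draft = ask_model(
        "model-a",
        "Synthesize these summaries into one workshop design:\n\n"
        + "\n\n---\n\n".join(summaries),
    )
    for _ in range(rounds):
        # Model B critiques Model A's draft and produces a new version...
        draft = ask_model(
            "model-b",
            "Critique this workshop design, then write an improved "
            f"version and rate it out of 100:\n\n{draft}",
        )
        # ...and Model A returns the favor, so feedback keeps bouncing
        # between the two until the ratings plateau.
        draft = ask_model(
            "model-a",
            "Critique this workshop design, then write an improved "
            f"version and rate it out of 100:\n\n{draft}",
        )
    return draft
```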
It felt a bit like conducting an orchestra, except the musicians were all algorithms and the music was process design.
What struck me wasn't that the technology was magic, but how it forced me to be more systematic about collaboration itself. The AI couldn't create the insights, but it could help me think more clearly about what makes different perspectives combine into something better than any single viewpoint.
And that's when the burger metaphor hit me.

The deconstructed burger of good process design

I love thinking about workshops like a deconstructed burger: all the ingredients arrive separately, and only when you stack them in the right order do they become something worth eating.
A good workshop does the same thing with the unique people in the room and their perspectives. The ingredients are the right questions, the right context, the right mental prompts, and when you stack them right, people contribute from their lived experience instead of just their analytical minds. Instead of asking "How would you prioritize this?" you design separate "ingredients": one question gets them thinking about real impact they've witnessed, another about effort they've actually expended, another about what matters to the people they serve.
Each ingredient is designed to access a different part of their authentic experience. Then when you "stack" those insights together, you get something delicious that's more than the sum of its parts—decisions that feel grounded in reality rather than theory.
And here's what gets me excited: AI actually seems to understand this ingredient-level thinking. It grasps different human perspectives, different value systems, and can help design questions that unlock genuine experience rather than surface-level opinions.
This is where I see real potential—not just for making existing facilitators better, but for democratizing good process design. Most smart people I know struggle with this. They're brilliant at their expertise, but designing the conversation to unlock everyone's wisdom? That's a different skill entirely.
Maybe AI can help more of us think systematically about creating those conditions. And maybe—this feels both exciting and slightly unsettling—AI will get so good at this that it starts facilitating sessions itself. Voice agents are already getting remarkably sophisticated.

What stays beautifully, irreplaceably human

Even as AI becomes "normal" infrastructure for me, I'm rediscovering what stays essentially human. On Tuesday I spent the day with an amazing team where we could spark off each other, adjust ideas in real time, and build on each other's thinking in ways that surprised even us.
That energy—the real-time dance of human minds working together—I don't think that's replaceable. AI can help us capture those insights, synthesize them, even suggest better questions. But it can't create the safety for people to be vulnerable, the trust that lets them share what really matters, or the intuition to read a room and shift course.
Actually, I'm wondering if AI might make us more aware of these distinctly human skills, not less. Like how GPS made us realize we'd lost our sense of direction, but also freed us to pay attention to other things while navigating.

A practical thing you can try

Here's something concrete from my week: Start with your desired outcome and work backwards.
Instead of thinking "How do I use AI in my workshop?" ask "What insights do I actually need to achieve my goal?" Then design the human experience to generate those insights authentically.
And here's something nerdy that's been game-changing: when your AI output feels flat, don't just fix it manually. Feed your improved version back and say, "This was the original. This is what I turned it into. How should I adjust the prompt to get closer to my desired output next time?"
You're teaching the AI to work better with your specific needs and style.
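If you like to script these loops, the same idea fits in a few lines. A sketch, again with a hypothetical `ask_model` placeholder rather than any specific API:

```python
# Sketch of the prompt-improvement trick: show the model its original
# output next to your edited version and ask how the prompt itself
# should change. `ask_model` is a hypothetical placeholder, as before.

def ask_model(model: str, prompt: str) -> str:
    raise NotImplementedError("plug in your own API call here")

FEEDBACK_TEMPLATE = """This was the original output:

{original}

This is what I turned it into:

{edited}

How should I adjust the prompt to get closer to my desired output next time?"""

def improve_prompt(original: str, edited: str) -> str:
    # The model's answer is a suggested prompt revision that you
    # fold into your next request.
    return ask_model(
        "model-a",
        FEEDBACK_TEMPLATE.format(original=original, edited=edited),
    )
```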

What I'm still figuring out

As AI becomes normal—and it will, faster than we think—I keep coming back to this: the fundamentally human work of creating connection and unlocking collective wisdom becomes more important, not less.
We might be moving toward a world where good process design is democratized. Where voice agents facilitate some conversations. Where AI helps us think more clearly about how to help humans think together.
That excites me because it could mean more voices heard, more collective intelligence unlocked, more bottom-up change becoming possible.
But I'm still working through the tensions. How do we leverage these tools without losing the messiness and unpredictability that often leads to the best insights? How do we design for effectiveness without optimizing away the human moments that actually matter?
I don't have clean answers yet. But I'm grateful to be experimenting alongside people who care about the same questions.
What conversations are you designing? And how might we create more space for the kind of authentic input that actually helps us think together?
This post emerged from my weekly reflection and was co-created with AI, starting from real experiences and sharpened through multiple rounds of cross-model collaboration.