Manifesto

AI in the service of human intelligence

Observation

Artificial intelligence has taken hold in our organisations in a matter of months. It generates, summarises and recommends at a speed humans can no longer match. Content production has exploded. Dashboards multiply. Answers arrive before the right questions are even asked.

But producing is not understanding. And speed does not guarantee relevance. What we observe is an illusion of understanding: organisations that believe they know because they have data, without ever taking the time to truly listen to the people concerned.

It is this tension — between the power of tools and the poverty of understanding — that grounds our approach.


Our position

We use AI. We own that choice. But we refuse to conflate technical capability with genuine understanding.

Understanding ≠ generating

Generating text, summaries or recommendations is not understanding a collective. Understanding demands listening, structuring and contextualising — not producing faster.

Structuring ≠ automating

AI can process volumes that no individual could absorb alone. But it is the methodological framework — not the algorithm — that gives meaning to the results.

Deciding remains human

No model decides on behalf of an organisation. AI structures information. Humans bear the responsibility of choice, with its blind spots and trade-offs.


The risks we acknowledge

Using AI to understand human dynamics is not trivial. We identify three risks that we take seriously.

Cognitive risk

The ease AI offers can encourage intellectual laziness. When answers arrive instantly, the temptation is strong to stop questioning, stop digging, and settle for the surface.

Organisational risk

A poorly framed tool can serve to manufacture false consensus, instrumentalise team voices, or validate decisions already made. Technology does not protect against manipulation.

Environmental risk

Every call to a language model carries a real energy cost. We refuse to multiply unnecessary requests and commit to sober, targeted use of AI.


Our principles

In the face of these risks, we have structured our approach around four non-negotiable principles.

Human primacy

AI is a lever, never an end. Every result is contextualised, nuanced and subject to human validation. No synthesis is delivered without critical review.
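
To make this principle concrete, here is a minimal sketch of what such a validation gate could look like in code. It is an illustration only, not Collective Insight's implementation; the `Synthesis` type, the `release` function and the `reviewed_by` field are hypothetical names.

```python
from dataclasses import dataclass, field


@dataclass
class Synthesis:
    """An AI-generated synthesis awaiting human review (hypothetical model)."""
    content: str
    reviewed_by: list[str] = field(default_factory=list)  # names of human reviewers


def release(synthesis: Synthesis) -> str:
    """Deliver a synthesis only after at least one human has critically reviewed it."""
    if not synthesis.reviewed_by:
        raise PermissionError("no synthesis is delivered without critical review")
    return synthesis.content
```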

Technological sobriety

We use AI only where it delivers real value: structuring volumes that humans alone cannot process. No gimmicks, no feature bloat.

Methodological transparency

The framework of each mission is shared with all stakeholders. Participants know why their input is sought, how their data is processed, and what the limits of the exercise are.

Protection of expression

Without confidentiality, no one truly speaks their mind. We guarantee that raw verbatims are never exposed and that anonymisation enforces strict minimum thresholds.
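
As a sketch of what "strict thresholds" can mean in practice, the fragment below suppresses any group too small to keep its members anonymous, in the spirit of k-anonymity. The threshold value and every name in it are illustrative assumptions, not the product's actual parameters.

```python
from collections import Counter

# Illustrative threshold (assumed value): groups smaller than this are
# suppressed entirely, so no count can be traced back to an individual.
MIN_GROUP_SIZE = 5


def publishable_counts(responses: list[tuple[str, str]]) -> dict[str, int]:
    """Return response counts per group, keeping only groups large enough to report.

    `responses` pairs a group label (e.g. a team name) with a raw verbatim.
    The verbatims themselves never leave this function; only aggregate counts
    for groups that meet the anonymisation threshold are returned.
    """
    counts = Counter(group for group, _verbatim in responses)
    return {group: n for group, n in counts.items() if n >= MIN_GROUP_SIZE}
```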


Looking ahead

Collective Insight is part of a broader vision: one of organisations that take the time to understand before acting. Not out of slowness, but out of lucidity.

We believe that human systems — teams, departments, enterprises — deserve tools of understanding commensurate with their complexity. AI can contribute, provided it is used with rigour, transparency and humility.

Collective Insight is the first product of the InsightEngine platform, designed to support the understanding of collective dynamics at scale.

Explore further

About

Who is behind Collective Insight, and why this approach.

Method

Our approach in detail: transparency, methodological architecture, success factors.

Collaborate

What we look for in our partners and how to work together.