AI is Making Bad Abstractions More Convincing

Leaders today are not short on data.

If anything, they are saturated—dashboards, simulations, metrics, forecasts.

The real risk is subtler: choosing the wrong level of abstraction.

Every model simplifies. That is its purpose. But simplification can mislead when it operates at the wrong level.

Treating an organization as an optimization problem is not the same as treating it as an adaptive system. One emphasizes efficiency and equilibrium; the other, evolution and interaction.

The distinction matters.

Simulation captures dynamics; conceptual models supply explanation.

Increasingly, AI systems generate answers that are internally consistent and empirically grounded—within a given framing. But if the framing is flawed, the answers can be precise, persuasive—and wrong in the ways that matter.

This is not a data problem. It is a thinking problem.

In an AI-rich world, leadership will hinge less on access to information and more on the ability to choose—and question—the lens through which that information is interpreted.

These ideas are part of a broader effort to understand how we think, decide, and act in a world of increasing complexity.

For those interested, I’ve explored related themes in my book, The Nexus.



In this provocative and visually striking book, Julio Mario Ottino and Bruce Mau offer a guide for navigating the intersections of art, technology, and science.