AI and the Adjacent Possible

July 4, 2023

I think many of us often ask What’s Next?, if only in a subconscious, semi-rhetorical kind of way. And when something new and unfamiliar emerges in our environment, our brains operate in two competing states: one born purely of survival instinct, the other racing with questions, driven to understand, explore, and be curious.

The last several weeks of the news cycle around AI have me thinking a lot about the question What’s Next? And I’m reminded of a concept from evolutionary biology.

First introduced by theoretical biologist Stuart Kauffman, and popularized by Steven Johnson, the Adjacent Possible refers to the set of all possible next steps or innovations that are one step away from the current state of things. From a biological perspective, it’s a nice way to understand evolution: we didn’t emerge from the primordial ooze as humans. Before our species could even exist, there had to have been countless predecessors, each conferring some beneficial attribute to carry forward. From initial protein chains, to early marine organisms, to small mammals, Homo sapiens is merely the current state in the chain of life that is evolution, borne of all previous adjacent possibles.

This concept is also quite apt for describing the evolution of all manner of systems: from societal attitudes to politics and, of course, technology. The current state is always the product of a series of previous possibilities. We can’t leap several steps ahead, but there are moments of what I like to think of as Accelerated Possibles.

Enter the age of AI

With a lot of hype generated in an incredibly short time, the headlines often bias toward the perceived negative impacts of AI, while the stock market bathes in the promise of profits born of efficiencies, rewarding the titans of tech who will wield the greatest power in applying these technologies to our day-to-day landscapes. It’s a contrast that reminds us that the stock market is NOT the economy.

The stock market is forward-looking, attempting to price in the future. Journalism, meanwhile, peddles anxiety, which is very much a now kind of thing. But these anxieties are about futures that are many possibles away.

Between the polarities of rampant optimism and hand-wringing pessimism lies the more likely scenario: a mix of benefits and value, unintended consequences and externalities. Such as it has always been, and always will be.

But both our optimism and anxiety are likely overstated, and I say this for two specific reasons: Timing and Evolution.

Timing and Limitations

Just as Homo sapiens couldn’t emerge from the primordial ooze fully formed, the AI of today is a long way from being the source of widespread job displacement or an immediate and fundamental shift in societal norms.

The reason has less to do with the tech itself than with our effective application of it, combined with the inherent limitations of both the technology and the hardware on which it runs. At the end of the day, AI is still a tool: a polymorphic tool, no less, but one still borne of countless adjacent possibles spanning both software and hardware evolutions.

We wouldn’t have AI today if not for the trend toward big data and machine learning a decade ago. Nor would we have AI had processing power failed to advance at the pace it has. AI is an evolution of technology delivered through hundreds of thousands of interactions across a number of sectors, spanning hardware and software, electrical engineering and cognitive psychology. And where it goes next will depend equally on what is adjacently possible in these domains, and likely others, while in turn influencing or birthing new arenas of study.

But all of this will take time: time to understand the value and applications that AI may enable. That process will itself create new opportunities (which should quell the fears of the most anxious).

Stasis is not an option

Evolution is the second reason the hype is misplaced. As tech evolves, so do we, both in our relationship to it and in how we understand and use it. We, as a species and a society, do not remain static in the face of change. The changes AI delivers will be incremental, and with each passing step we too will evolve our understanding and application of it; that understanding, in turn, will change our attitudes toward and relationships with the tech that leverages AI.

I suspect that a number of us have started to be able to identify AI-generated writing. That’s not to say that ALL writing by AI is easily identified, but without specific direction, I personally find a lot of AI-generated writing stiff and empty. And also oddly self-aggrandizing. (But the topic of AI’s narcissistic tendencies is a piece for another day.)

Yes, a machine’s ability to pass the Turing test is impressive. But did Alan Turing, who devised the test to judge a machine’s ability to exhibit intelligent behaviour, ever anticipate that we’d develop greater sensitivity to detecting the machines?

Will the quality of AI-generated outputs improve? Yes, of course, if only because it has to in order to move beyond its current state as a parlour trick and unreliable source of facts into a technology with material value to business and society at large.

Predictably human

As AI grabs headlines, we’re expressing a natural reaction to something new and unfamiliar. Biologically, we’re programmed to view the new with skepticism; it’s that trait that has kept us alive. But our other innately human traits soon take over: curiosity, imagination, experimentation, and ultimately innovation. Not overnight. Not in a vacuum. Not without consequences, positive and negative. But AI is still a machine. It’s a tool, something that extends human capacity and capabilities, incrementally. And it’s bound by what is immediately possible.

— Spencer Saunders, President & CEO at Art & Science
