Where AI and organisms differ and what it means for AGI

Organisms ranging from humble microbes to the humans reading this are capable of improvising. Why can't AI?

Artificial intelligence algorithms are beating humans at Go (one of the most complex games ever devised), writing viral blog posts, and disrupting the scientific method. These achievements in fields traditionally dominated by the most creative human minds raise the question of whether AI could soon replicate the creative potential of the human mind. Artificial general intelligence (AGI), the holy grail of AI research, seeks to do just that.

The AI that beat the human world champion at Go is great at playing the game, but it cannot do much else without heavy modification. Besides the exponential increase in computing power, this hyper-specialization (which, by definition, cannot carry over to AGI) is a major reason behind the success of AI.

The number of possible board configurations in a game of Go exceeds the number of atoms in the known universe, but it is still finite. In the real world, there are infinite possibilities for what might happen next, and uncertainty is rampant. How realistic, then, is AGI?
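For a rough sense of that scale, here is a back-of-the-envelope check (the figures are standard order-of-magnitude estimates, not taken from the article): even a crude upper bound on Go positions, three possible states for each of the 361 points on a 19-by-19 board, dwarfs the roughly 10^80 atoms usually estimated for the observable universe.

```python
# Back-of-the-envelope comparison (illustrative estimates, not exact figures):
# an upper bound on Go board configurations vs. atoms in the observable universe.
import math

board_points = 19 * 19              # standard 19x19 Go board
go_upper_bound = 3 ** board_points  # each point: empty, black, or white
atoms_in_universe = 10 ** 80        # commonly cited order of magnitude

print(f"Go positions (upper bound): ~10^{math.log10(go_upper_bound):.0f}")
print(f"Atoms in the universe:      ~10^{math.log10(atoms_in_universe):.0f}")
```

Vast as that number is, it is finite and enumerable in principle, which is exactly the kind of closed space of possibilities the real world refuses to fit into.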

Dealing with ambiguity

A recent research paper published in Frontiers in Ecology and Evolution explores the obstacles on the path to AGI. Biological systems with degrees of general intelligence, organisms ranging from humble microbes to the humans reading this, are capable of improvising to meet their goals. What prevents AI from improvising?

For one, the utter lack of motivation. For an AI to be motivated towards a goal, it must know what it wants. But an algorithm just cannot want something. As great as the Go-playing algorithm is at beating human players, it doesn’t strive to do anything else.

Even the most sophisticated AI systems, which may seem like magic to us, are entirely described by clever and often intricate algorithms. But three defining features of AGI are nearly impossible to build with algorithms: using common sense, dealing with ambiguity, and creating new knowledge.

Why can we not code them, you ask? Because we cannot even begin to define them.

Whether something makes sense, how ambiguous something is, or which knowledge is useful to whom depends heavily on context. AI systems operate only within the limits of the logic coded into them. They therefore cannot easily deal with ambiguity or produce new knowledge outside of pre-coded scenarios.

Humans, on the contrary, leverage ambiguity to produce new knowledge. Take mathematicians, for instance: to most of us they seem like computers personified, always testing the limits of reason and logic in their work.

If that were all they did, an AI algorithm could also create new mathematical knowledge, such as a proof of the Riemann hypothesis. However, many mathematicians attribute their biggest discoveries to intuition.

What true agency looks like

Artificial intelligence researchers often describe entities that make autonomous decisions as “agents.”

These agents are designed to be rational, in the sense that they make the best decision possible with whatever limited information they have. Like an organism, an AI agent can also read its environment and react to it. Think of a drone with an AI-based sensor that scans a facility and takes a picture when it detects suspicious activity.

How does the drone’s autonomy differ from that of an eagle that scans a landscape and swoops down when it spots prey? Unlike the drone, the eagle has a sense of what is good or bad for it and can swoop down just for the fun of it, if it so wishes. It can choose to forgo a large number of possibilities in favor of the ones it likes. The drone, on the other hand, is limited by the possibilities and probabilities wired into its algorithms and does not possess true agency.
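To make the drone example concrete, here is a minimal sketch of the kind of rule-based sense-act loop it describes (the Observation fields and the “suspicious activity” rule are hypothetical, chosen only for illustration). Every action such an agent can ever take is enumerated in advance by its designer.

```python
from dataclasses import dataclass

# Minimal sketch of a rule-based agent like the hypothetical surveillance drone.
@dataclass
class Observation:
    motion_detected: bool
    after_hours: bool

def is_suspicious(obs: Observation) -> bool:
    # Hand-coded rule: "suspicious" means whatever the designer decided it means.
    return obs.motion_detected and obs.after_hours

def drone_step(obs: Observation) -> str:
    # The agent's entire repertoire of actions is fixed here; it cannot
    # choose anything outside this list, whatever its environment offers.
    if is_suspicious(obs):
        return "take_picture"
    return "continue_patrol"

print(drone_step(Observation(motion_detected=True, after_hours=True)))   # take_picture
print(drone_step(Observation(motion_detected=True, after_hours=False)))  # continue_patrol
```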

True agency means being able to initiate actions internally, without necessarily requiring a stimulus from the environment. Organisms can regulate their own boundaries to gain autonomy over their interactions with the environment, in what is known as interactive autonomy.

As the authors describe it, “organisms can identify and exploit affordances in their umwelt” (that is, the world as they perceive it). Translation: an organism, or a true agent, can take advantage of any and all opportunities its environment provides.

What this means for evolution

A popular view of evolution, exemplified by evolutionary algorithms, is that it is a search strategy working through a space of possible solutions. But since organisms are more computationally capable (and creative) than AI agents, surely there must be more to it than that.
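To illustrate that “search through a space of solutions” view, here is a minimal sketch of a generic evolutionary algorithm (the bitstring encoding and the toy fitness function are arbitrary choices for the example, not anything taken from the paper).

```python
import random

# Minimal evolutionary algorithm: evolution framed as a search over a
# predefined space of candidates (here, fixed-length bitstrings).
GENOME_LENGTH = 20
POP_SIZE = 30
GENERATIONS = 50
MUTATION_RATE = 0.05

def fitness(genome):
    # Toy objective ("one-max"): count the 1s in the genome.
    return sum(genome)

def mutate(genome):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def evolve():
    population = [[random.randint(0, 1) for _ in range(GENOME_LENGTH)]
                  for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # Keep the fitter half, then refill the population with mutated copies.
        population.sort(key=fitness, reverse=True)
        survivors = population[:POP_SIZE // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(POP_SIZE - len(survivors))]
    return max(population, key=fitness)

best = evolve()
print("Best genome:", best, "fitness:", fitness(best))
```

However clever the mutation and selection operators get, the search never leaves the space of 20-bit strings defined up front, which is precisely the kind of predetermined space of possibilities the authors argue organisms can transcend.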

Unlike AI agents, organisms can undergo open-ended evolution. What this means is that organisms can find novel solutions that aren’t just combinations of existing solutions. Animal evolution, in fact, is replete with examples of the sudden and rapid emergence of novel features, such as during the Cambrian explosion.

The authors argue that “organismic agency is a fundamental prerequisite for open-ended evolution.” They say that without organisms exercising their agency on their perceived environment, “evolution cannot transcend its predetermined space of possibilities.” 

If they are right, the reason animals emerged so suddenly during the Cambrian explosion is that their ancestors could capitalize on what their environment provided (a popular hypothesis points to rising oxygen levels).

The most challenging claim the study makes is that achieving AGI is impossible with algorithms as we know them. Fears of AGI taking over the world are thus, as yet, unfounded.
