My near-philosophical musings about the world in general, its problems, and possible ways out.

2024-06-15

The third evolution

Collective Artificial Intelligence (CAI)

Well, CAI is not (yet) an established acronym, or at least not in the sense of an abbreviation of this contribution's subtitle.

Underlying the claim that comes along with that title is a lengthy thought which crossed my mind while listening to a panel discussion on “Intelligence in the Age of LLMs: Synthesizing with AI Toward Meta-Knowing” on Wednesday, June 05 at this year's European Identity Conference (EIC 2024).

The illustrious panel consisted of ...

  • Dr. Scott David, LL.M., Executive Director, Information Risk and Synthetic Intelligence Research Initiative, University of Washington - APL,

  • Daniel Friedman, Co-Founder and President, Active Inference Institute,

  • Trent McConaghy, Founder, Ocean Protocol, and

  • Dr. Mihaela Ulieru, Founder and President, IMPACT Institute for the Digital Economy.

Their talk promised to open a treasure chest of insights into the evolution of intelligence. After a while, my thoughts took on a life of their own and followed a storyline that went like this:

1 Collective Human Intelligence - The second evolution

The success of the human race stems not only from the various contributions of intelligent individuals, but possibly to an even greater extent from the intelligent interaction of these individuals with each other. Humans are highly social creatures, and much of our progress and survival has depended on our ability to co-operate, communicate, and share knowledge. Collective interaction has given rise to skills that far surpass those of a single individual.

These capabilities, which complexity theory refers to as emergent effects, triggered a second, cultural evolution based on the emergent properties of human social interaction. As a result, humanity ultimately became human. This position is supported by a wealth of literature from fields such as evolutionary biology, anthropology, cognitive science and complexity theory.

Complexity theory in particular studies how interactions among components of a system lead to complex behaviours and properties that are not evident from the properties of the individual components. 

Emergent effects in this context are defined as phenomena that arise from these interactions. They include:

  1. Nonlinearity - Small changes in initial conditions or interactions can lead to disproportionately large effects.

  2. Self-Organization - Systems can spontaneously organize into patterns or structures without external direction.

  3. Adaptation - Systems can adapt to changes in their environment through feedback mechanisms.

  4. Decentralization - Emergent behaviours typically arise from local interactions among components rather than being directed by a central authority.

In the context of human societies, emergent effects manifest as cultural norms, institutions, languages, and technologies that evolve from the collective behaviours and interactions of individuals.
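
How simple local interactions produce such emergent effects can be demonstrated in a few lines of code. The following minimal sketch, a toy illustration assuming nothing beyond the Python standard library, uses Conway's Game of Life: every cell obeys the same simple local rule, yet a five-cell "glider" pattern travels across the grid, a behaviour no individual cell exhibits.

```python
from collections import Counter

def step(live):
    """Advance one generation; `live` is a set of (x, y) live cells."""
    # count live neighbours for every cell adjacent to a live cell
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # birth on exactly three neighbours, survival on two or three
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):          # four purely local update steps ...
    glider = step(glider)
print(sorted(glider))       # ... move the whole pattern one cell diagonally
```

Decentralization and self-organization are plainly visible here: no rule ever mentions a "glider", yet the pattern behaves as a coherent unit.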

2 Collective Artificial Intelligence - The third evolution

Most artificial intelligence systems, on the other hand, are currently designed and implemented as individual, isolated models, unaware of the existence of others of their kind. They do not inherently communicate with each other, nor do they exchange knowledge.

Rather, the trend is to develop ever larger, more comprehensive models, trained on huge amounts of data and aiming for superhuman performance in certain tasks, in order to eventually achieve a "god-like" status. The concept of a god (not a pantheon) is ultimately that of a superior individual in no need of "a little help from his friends". Since such a being clearly has no equal peers, it is unlikely that this evolutionary path favours more intense communication.

However, if we want to apply the concept of collective behaviour to instances of artificial intelligence, at whatever stage of development, do we expect to observe emergent effects there as well, possibly leading to capabilities and behaviours that surpass those of individual AI systems? 

As cultural evolution builds on biological evolution, could a new form of evolution emerge from the interaction of AI systems? This "third evolution" could then be characterised by the development of a collective artificial intelligence that makes use of the emergent properties arising from the interaction of AI systems with one another.

Will we eventually see the development of a superintelligent AI in the form of a network or collective of interconnected systems, rather than a single, isolated superintelligent AI? This collective intelligence could potentially have capabilities that surpass those of any single AI system.

How would this AI collective interact with the incumbent collective, us humans? The interaction between an AI collective and human society could lead to a symbiotic relationship involving co-operation, competition or even co-evolution, where both benefit from each other's strengths. However, it also raises ethical and practical concerns about control, transparency and the alignment of AI goals with human values.

3 Speculation

There is much room for speculation indeed. 

If the course of thought up to this point has already been guided by quite esoteric considerations, then the path that may lie ahead of us is even more shrouded in thick fog.

Would such a scenario even be possible after all? Well, emergent effects can occur even from the interaction of individuals at a rather low level. Instances of individual intelligence, on the other hand, have by now reached a far higher degree of autonomy in their decisions, and even higher levels are expected to be achieved. While these levels are not a necessary precondition for the occurrence of emergent effects, to kick off an evolution of its own kind the individuals must at least be capable of processing feedback from those effects, so that successful group behaviour is incentivised and less promising set-ups are discouraged.
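
What such a feedback loop might look like in its most primitive form can be sketched in code. The toy model below is purely illustrative (the agent count, learning rate and reward scheme are arbitrary assumptions of mine): agents repeatedly choose between two conventions, are rewarded whenever they coordinate with a randomly drawn partner, and reinforce the action that earned the reward. A shared convention, an emergent group behaviour, arises without any central direction.

```python
import random

N, ROUNDS, LR = 50, 5000, 0.1
# each agent's propensity to choose convention "A" rather than "B"
p = [random.uniform(0.4, 0.6) for _ in range(N)]

for _ in range(ROUNDS):
    i, j = random.sample(range(N), 2)           # two agents meet
    a = 'A' if random.random() < p[i] else 'B'
    b = 'A' if random.random() < p[j] else 'B'
    if a == b:                                  # coordination succeeded:
        for k, act in ((i, a), (j, b)):         # reinforce the rewarded action
            target = 1.0 if act == 'A' else 0.0
            p[k] += LR * (target - p[k])

print(sum(p) / N)   # drifts towards ~0 or ~1: a shared convention has emerged
```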

Most probably, this scenario can be expected to work effectively even below the level of Artificial General Intelligence (AGI) in the participating individuals. The speed of this third evolution will certainly accelerate tremendously once the AGI level is reached, or even surpassed, by the majority of the members of the collective.

While there is obviously a certain probability that such a scenario might come to life one day, will it be desirable after all? Will humanity benefit from it, or should we rather be scared by such a possible outcome? In a survey earlier this year, just over half of the 2,778 researchers polled said that there's a five percent chance that humans will be driven to extinction, among other "extremely bad outcomes." At the same time, 68.3 percent of respondents said that "good outcomes from superhuman AI" are more likely than bad ones, showing there's little consensus on the topic among experts.

Most probably, collective AI was not in the scope of those experts' considerations. Nevertheless, to cope with the remaining risks, experts and regulators frequently demand that a human stay somewhere in the loop, having the last say. Given the history of mankind, however, it remains doubtful, to say the least, that we would be better off in such a case. Looking back, we cannot help but realize that the worst atrocities have been committed against us by our fellow human beings. Over the course of human history, the weapons we have at hand to inflict harm on each other have evolved to impressive levels – our moral standards and the ways we treat each other, however, have not.

3.1 How did human morality evolve?

While we may at least be a known evil to ourselves, the behaviour of autonomously acting AI instances, whether organised as a swarm or not, is completely unknown and still pure fiction. To approach a prediction, we could again take ourselves as a blueprint. For most of human history, the majority of us have been desperately preoccupied with the needs at the base of Maslow's pyramid. The struggle for survival dominated our days. To ensure survival, we had to band together in smaller or larger communities that competed fiercely for the scarce resources of food and shelter. But even within these communities, we competed with our fellow human beings for the same resources. We had to act as members of a community and at the same time as autonomous individuals. This dualism has never left us. This is how we have been socialised over the millennia.

On closer inspection, we find ample evidence and scientific theories to suggest that human morality has indeed evolved as a survival mechanism driven by the need to navigate a hostile environment while balancing individual autonomy and collective co-operation. Here are some key points and theories that echo these ideas:

3.2 Key Theories and Evidence

Evolutionary Psychology and Morality:

  • Theory: Evolutionary psychology posits that many human behaviours, including moral behaviours, have evolved to solve recurrent problems faced by our ancestors. These behaviours increase the chances of survival and reproduction.

  • Evidence: Studies show that moral emotions like empathy, guilt, and shame are universal across cultures, suggesting a common evolutionary origin. Research by Frans de Waal on primates indicates that behaviours such as empathy, fairness, and reciprocity are not uniquely human but shared with other social animals.

Social Contract Theory:

  • Theory: The social contract theory, proposed by philosophers like Thomas Hobbes, John Locke, and Jean-Jacques Rousseau, suggests that individuals agree to form societies and accept certain moral rules to ensure mutual protection and benefits.

  • Evidence: Archaeological and anthropological studies show early human societies had norms and rules that promoted cooperation and social order, which were essential for survival in harsh environments.

Kin Selection and Inclusive Fitness:

  • Theory: Proposed by William Hamilton, kin selection theory suggests that behaviours that help genetic relatives are favoured by natural selection because they increase the likelihood of shared genes being passed on.

  • Evidence: Altruistic behaviours are often observed among close relatives, supporting the idea that helping family members enhances the survival of shared genes. Studies on hunter-gatherer societies show that people are more likely to share resources and cooperate with kin.

Reciprocal Altruism:

  • Theory: Introduced by Robert Trivers, reciprocal altruism suggests that individuals help others with the expectation that the favour will be returned in the future, promoting long-term cooperation.

  • Evidence: Experiments and observations show that humans are more likely to help those who have helped them in the past, indicating that reciprocity is a fundamental aspect of human social behaviour.
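
Reciprocal altruism also has a well-known game-theoretic formalisation: Robert Axelrod's iterated prisoner's dilemma tournaments, in which the reciprocating "tit-for-tat" strategy proved remarkably successful. The minimal sketch below uses the conventional payoff values (R=3, T=5, P=1, S=0); everything else is illustrative.

```python
# 'C' = cooperate, 'D' = defect; entries are (payoff A, payoff B)
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def play(strat_a, strat_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)   # each sees the other's history
        pa, pb = PAYOFF[(a, b)]
        score_a += pa; score_b += pb
        hist_a.append(a); hist_b.append(b)
    return score_a, score_b

tit_for_tat = lambda opp: 'C' if not opp else opp[-1]   # reciprocate last move
defector = lambda opp: 'D'                              # never cooperate

print(play(tit_for_tat, tit_for_tat))   # (30, 30): reciprocity sustains cooperation
print(play(tit_for_tat, defector))      # (9, 14): exploited only in the first round
print(play(defector, defector))         # (10, 10): mutual defection pays poorly
```

Once interactions are repeated, reciprocating strategies stop being exploitable and mutual cooperation becomes the most profitable arrangement, which is exactly the dynamic Trivers' theory describes.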

Group Selection:

  • Theory: Group selection theory, proposed by David Sloan Wilson and E.O. Wilson, argues that natural selection operates not only at the individual level but also at the group level. Groups with cooperative and altruistic members are more likely to survive and reproduce.

  • Evidence: Anthropological evidence shows that human groups with strong social cohesion and cooperative norms were more successful in surviving environmental challenges and conflicts with other groups.

Cultural Evolution and Social Learning:

  • Theory: Cultural evolution theory suggests that moral values and norms are transmitted through social learning and cultural practices, evolving over time to enhance group cohesion and cooperation.

  • Evidence: Research shows that cultural norms and moral values can change rapidly in response to environmental and social pressures, indicating that morality is influenced by both genetic and cultural factors. The work of Joseph Henrich and others on cultural evolution highlights how cultural practices can shape moral behaviours.

3.3 Socialization and Moral Development

Social Learning Theory:

  • Theory: Proposed by Albert Bandura, social learning theory posits that individuals learn moral behaviours by observing and imitating others, especially role models and authority figures.

  • Evidence: Studies show that children develop moral values through interactions with parents, peers, and society, learning what behaviours are acceptable and rewarded.

Moral Development Stages:

  • Theory: Lawrence Kohlberg's stages of moral development describe how individuals progress through different levels of moral reasoning, from obedience to authority to principled reasoning based on universal ethical principles.

  • Evidence: Longitudinal studies support the idea that moral reasoning develops through stages influenced by cognitive development and social experiences.

3.4 Takeaways

Human morality appears to be the result of a complex interplay between evolutionary pressures, socialization, and cultural influences. The need to survive in a hostile environment, both as individuals and as members of a collective, has shaped our moral values and behaviours. Theories such as evolutionary psychology, social contract theory, kin selection, reciprocal altruism, group selection, and cultural evolution provide robust frameworks for understanding the development and function of human morality. Socialization processes further refine these moral values, making them adaptable to the specific cultural and social contexts in which individuals live.

For those inclined to explore these ideas a little further, the work of evolutionary biologists such as Richard Dawkins, anthropologists such as Christopher Boehm and psychologists such as Jonathan Haidt offers extensive insights into the evolution of human morality.

These considerations imply that if AI instances are socialised in a similar way to humans, the result could resemble human behaviour. However, these various theories, each attempting to explain certain observations, are still a long way from making predictions. Before we release swarms of autonomously acting AI individuals (and if they are AGI instances, they could have consciousness and should be recognised as persons), we should definitely run some more rigorous simulations of possible scenarios and carefully examine the moral standards that emerge. Only then can we decide whether a collective artificial intelligence (CAI) could become our terminator or our saviour. 

This should be incentive enough to invest a little time and effort to find answers to these existential questions before the third evolution finally takes off.

4 Evidence in Literature

4.1 Collective Intelligence:

  • Robin Dunbar: Dunbar's work on the social brain hypothesis suggests that the evolution of human intelligence is largely due to the demands of living in complex social groups. His research indicates that larger social groups require more sophisticated social cognition and communication skills.

  • Steven Pinker: In "The Better Angels of Our Nature," Pinker discusses how human cooperation and collective efforts have led to the reduction of violence and the advancement of societies.

4.2 Emergent Effects and Complexity Theory:

  • Murray Gell-Mann: In "The Quark and the Jaguar," Gell-Mann discusses complexity and emergent phenomena in nature, including how simple rules can lead to complex behaviours.

  • Melanie Mitchell: Her book "Complexity: A Guided Tour" explores how complex behaviours and properties emerge from simple interactions in systems ranging from biological organisms to human societies.

4.3 Cultural Evolution:

  • Richard Dawkins: Dawkins introduced the concept of memes in "The Selfish Gene," which describes how cultural information spreads and evolves similarly to genes.

  • Peter J. Richerson and Robert Boyd: Their book "Not By Genes Alone" discusses how culture has played a critical role in human evolution, supporting the idea that cultural evolution is a form of emergent property arising from human social interaction.

5 Prominent Research

5.1 Complex Systems and Emergence:

  1. Elinor Ostrom: Ostrom's research on collective action and the management of common resources demonstrates how groups of individuals can self-organize to create systems that outperform individual efforts.

  2. Francis Heylighen: Heylighen has written extensively on the principles of self-organization and the evolution of complex systems, including the role of collective intelligence in societal development.

  3. Herbert A. Simon: Simon's work on bounded rationality and his studies on the architecture of complexity are foundational in understanding emergent phenomena in human behaviour and organizational systems.

  4. John H. Holland's work on complex adaptive systems offers insights into how systems composed of multiple interacting agents can exhibit emergent behaviour.

  5. Murray Gell-Mann and Melanie Mitchell have written extensively on complexity and emergent phenomena, which can provide a theoretical foundation for understanding emergent effects in AI collectives.

5.2 Collective Intelligence in AI:

  1. Pierre Lévy introduced the concept of collective intelligence, which could be applied to AI. His book "Collective Intelligence: Mankind's Emerging World in Cyberspace" explores how collective intelligence emerges from networked interactions.

  2. Howard Bloom's "Global Brain: The Evolution of Mass Mind from the Big Bang to the 21st Century" discusses the idea of a global brain, which can be extended to consider AI systems as part of this global intelligence network.

5.3 Swarm Intelligence:

  1. Eric Bonabeau and colleagues' work on swarm intelligence, such as in "Swarm Intelligence: From Natural to Artificial Systems," explores how decentralized, self-organizing systems can solve complex problems collectively.

    The principles of swarm intelligence could be applied to develop AI systems that work together to achieve goals beyond the capability of individual AIs.
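
As a toy illustration of that principle, the following sketch, whose parameter values are my own arbitrary assumptions rather than anything taken from Bonabeau's book, lets a small particle swarm minimise a one-dimensional function by sharing each particle's best finding with the whole swarm:

```python
import random

def f(x):                      # toy objective with its minimum at x = 3
    return (x - 3.0) ** 2

n, w, c1, c2 = 20, 0.7, 1.5, 1.5          # swarm size, inertia, pull strengths
pos = [random.uniform(-10, 10) for _ in range(n)]
vel = [0.0] * n
best = pos[:]                              # each particle's own best position
gbest = min(pos, key=f)                    # the swarm's shared best position

for _ in range(100):
    for i in range(n):
        r1, r2 = random.random(), random.random()
        # velocity blends inertia, private knowledge and shared knowledge
        vel[i] = (w * vel[i]
                  + c1 * r1 * (best[i] - pos[i])
                  + c2 * r2 * (gbest - pos[i]))
        pos[i] += vel[i]
        if f(pos[i]) < f(best[i]):
            best[i] = pos[i]
    gbest = min(best, key=f)

print(round(gbest, 3))   # converges near 3.0
```

No single particle searches systematically; the solution is located by the swarm as a whole through the shared `gbest` signal.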

5.4 Multi-Agent Systems:

  1. Research on multi-agent systems (MAS) investigates how multiple AI agents can interact, cooperate, and compete. This field provides a practical framework for building and studying AI collectives.

    Michael Wooldridge's "An Introduction to MultiAgent Systems" is a comprehensive resource on this topic.

5.5 AI and Human Societies:

  1. Nick Bostrom's "Superintelligence: Paths, Dangers, Strategies" considers various scenarios for the development of superintelligent AI, including collective AI systems.

  2. Yoshua Bengio and other AI researchers have discussed the potential for AI to enhance human collective intelligence rather than replace it.