People of ACM - Marco Dorigo
January 27, 2026
How did swarm intelligence emerge as a research field within computer science?
Swarm intelligence emerged in the late 1980s and early 1990s at the intersection of computer science, operations research, and biology. Researchers were beginning to look for alternatives to centralized and highly structured approaches to problem solving. At the time, many important classes of computational problems, including optimization, routing, and scheduling, proved difficult to solve efficiently using traditional methods, especially as problem size increased or inputs became uncertain.
Natural systems such as ant colonies or bee swarms offered a striking contrast. They demonstrated how large numbers of simple individuals, interacting only through local rules and without centralized control, could collectively solve complex problems in a robust and adaptive way.
Drawing on insights from the study of these systems, researchers focused on extracting and formalizing the interaction principles underlying such collective behavior—such as positive and negative feedback and indirect communication—rather than pursuing biologically detailed models.
When these abstractions were translated into effective algorithms and engineering solutions, swarm intelligence became recognized as a distinct research field within computer science.
A frequently cited example of swarm intelligence is the process by which ants build a bridge using their own bodies to access food sources. Will you explain how the ant colony learns how to do this?
In this case, the ant colony does not learn in the cognitive sense. There is no planning, no representation of the goal, and no individual ant that understands the structure being built. Instead, each ant follows very simple local rules based on physical cues, such as the forces it experiences or the flow of other ants around it.
When ants encounter a gap, some of them attach to each other and form a temporary structure. If that structure proves useful, because many ants pass over it, it is reinforced. If it is inefficient or no longer needed, ants detach and move elsewhere. Through this continuous process of local interactions and feedback, the bridge adapts its shape and size to the environment. What looks like learning at the colony level is actually an emergent form of collective adaptation arising from many simple behaviors interacting with the physical world.
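The reinforce-if-used, detach-if-idle dynamic can be caricatured in a few lines of code. The model below is entirely hypothetical, invented for illustration rather than taken from the interview or from any published bridge study: ants join the structure at a rate tied to the traffic passing over it, and ants already in it detach at a constant background rate, so the structure's size tracks demand.

```python
import random

def bridge_adaptation(traffic_levels, n_steps=200, seed=1):
    """Toy model (hypothetical): bridge size grows with traffic over it
    (reinforcement) and shrinks through constant-rate detachment."""
    random.seed(seed)
    size = 5  # ants currently forming the bridge
    history = []
    for t in range(n_steps):
        traffic = traffic_levels[t % len(traffic_levels)]
        # Each passing ant may join the structure (positive feedback) ...
        joins = sum(random.random() < 0.1 for _ in range(traffic))
        # ... while ants already in it detach at a low constant rate
        # (negative feedback keeps the bridge no larger than needed).
        leaves = sum(random.random() < 0.05 for _ in range(size))
        size = max(1, size + joins - leaves)
        history.append(size)
    return history
```

Running the model with heavy versus light traffic shows the bridge settling at a correspondingly larger or smaller size, with no agent ever computing the "right" size explicitly.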
Without getting too technical, what is an example of an algorithm that emerged from the study of ant colony behavior?
The best-known example is ant colony optimization, a family of algorithms inspired by how ants collectively discover efficient paths between their nest and food sources. Individual ants leave chemical traces, called pheromones, as they move through the environment. When choosing their paths, ants tend to follow routes marked by stronger pheromone concentrations but do so probabilistically rather than deterministically. Over repeated foraging cycles, this probabilistic preference for pheromone-rich paths leads to the reinforcement of routes that connect to good food sources through increased traffic and stronger pheromone trails, while less useful paths gradually fade away.
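This feedback loop can be illustrated with a toy simulation in the spirit of the classic double-bridge experiment. The update rules and parameter values below are simplified assumptions, not taken from the interview: two branches of different length start with equal pheromone, ants choose probabilistically in proportion to pheromone, deposits are inversely proportional to branch length (shorter branches are crossed more often per unit time), and evaporation slowly erases unused trails.

```python
import random

def double_bridge(short_len=1.0, long_len=3.0, n_trips=4000,
                  evaporation=0.02, seed=7):
    """Toy double-bridge model: probabilistic, pheromone-biased choice
    plus length-dependent deposit makes the short branch dominate."""
    random.seed(seed)
    pher = {"short": 1.0, "long": 1.0}
    for _ in range(n_trips):
        # Probabilistic choice proportional to pheromone level.
        total = pher["short"] + pher["long"]
        branch = "short" if random.random() < pher["short"] / total else "long"
        # Shorter branch => faster round trips => pheromone laid down at a
        # higher rate (positive feedback).
        pher[branch] += 1.0 / (short_len if branch == "short" else long_len)
        # Evaporation removes pheromone everywhere (negative feedback).
        for b in pher:
            pher[b] *= 1.0 - evaporation
    return pher
```

After a few thousand trips the short branch carries far more pheromone, even though no individual ant ever compares the two path lengths.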
In ant colony optimization, these mechanisms are mapped onto a computational process in which many simple software agents, often called artificial ants, explore possible solutions to a given optimization problem in parallel. Each agent incrementally constructs a candidate solution by making a sequence of local choices, such as selecting the next node in a path or the next component of a partial solution. Information about solution quality is shared indirectly through numerical values associated with these choices, which play a role analogous to pheromone concentrations.
During the search, agents probabilistically favor choices that have been part of high-quality solutions in the past, while still allowing for exploration of less-used alternatives. Importantly, reinforcement operates at the level of solution components rather than entire solutions: individual decisions or transitions that contribute to good solutions become more likely to be reused by other agents in subsequent iterations.
This component-level, pheromone-mediated reinforcement process allows the system to combine good partial decisions from different agents and different solution attempts. By reinforcing promising building blocks while still encouraging exploration, ant colony optimization demonstrated how decentralized, collective processes could successfully tackle difficult optimization problems and compete with more classical algorithmic approaches.
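The component-level reinforcement described above can be sketched as a minimal ant colony optimization loop for a toy traveling-salesman instance. The distance matrix, parameter values, and function names below are illustrative assumptions, not code from the interview: pheromone lives on edges (the solution components), evaporation provides negative feedback, and each ant deposits pheromone on the edges of its tour in proportion to tour quality.

```python
import random

# Toy symmetric 5-city TSP instance (hypothetical distances).
DIST = [
    [0, 2, 9, 10, 7],
    [2, 0, 6, 4, 3],
    [9, 6, 0, 8, 5],
    [10, 4, 8, 0, 6],
    [7, 3, 5, 6, 0],
]
N = len(DIST)

def tour_length(tour):
    return sum(DIST[tour[i]][tour[(i + 1) % N]] for i in range(N))

def construct_tour(pher, alpha=1.0, beta=2.0):
    """One artificial ant builds a tour: each local choice probabilistically
    favors high-pheromone, short edges."""
    tour = [0]
    unvisited = set(range(1, N))
    while unvisited:
        cur = tour[-1]
        cand = sorted(unvisited)
        weights = [pher[cur][j] ** alpha * (1.0 / DIST[cur][j]) ** beta
                   for j in cand]
        nxt = random.choices(cand, weights=weights)[0]
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def aco(n_ants=10, n_iters=50, rho=0.5, q=1.0, seed=0):
    random.seed(seed)
    pher = [[1.0] * N for _ in range(N)]
    best_tour, best_len = None, float("inf")
    for _ in range(n_iters):
        tours = [construct_tour(pher) for _ in range(n_ants)]
        # Evaporation: all trails decay (negative feedback).
        for i in range(N):
            for j in range(N):
                pher[i][j] *= 1 - rho
        # Deposit: each ant reinforces the *edges* of its tour in
        # proportion to tour quality (component-level reinforcement).
        for tour in tours:
            length = tour_length(tour)
            if length < best_len:
                best_tour, best_len = tour, length
            for i in range(N):
                a, b = tour[i], tour[(i + 1) % N]
                pher[a][b] += q / length
                pher[b][a] += q / length
    return best_tour, best_len
```

Because reinforcement acts on individual edges rather than whole tours, good edges discovered by different ants can be recombined into tours no single ant has built before.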
What has been a significant challenge in applying swarm intelligence as observed in nature to coordinating large groups of autonomous robots?
One of the main challenges has been translating swarm intelligence principles originally observed in natural systems into engineered systems that must operate robustly under real-world constraints. In nature, collective behavior emerges from agents shaped by long-term evolutionary processes and tightly coupled to their environment. By contrast, in robotic systems collective behavior must be engineered explicitly, using agents whose sensing, actuation, and decision-making are limited by design and technological constraints.
As the size of a robotic swarm increases, small imperfections such as sensor noise, delays in communication, or hardware failures can have a large impact on collective behavior. Designing local interaction rules that remain effective under these conditions and that scale gracefully from a few robots to hundreds or thousands is a nontrivial engineering problem.
Another challenge is predictability and control. Swarm-based systems are attractive because they are flexible and robust, but these same properties make their behavior harder to analyze, debug, and formally reason about. Engineers are often accustomed to systems whose behavior can be precisely specified, whereas swarm intelligence relies on emergent dynamics that must be shaped indirectly.
As a result, applying swarm intelligence to robotics has required a shift from designing exact behaviors to designing interaction mechanisms that promote desirable collective outcomes across a wide range of environmental conditions, system scales, and levels of uncertainty.
What are the most exciting research directions you are currently working on?
I am currently working on the Self-Organizing Nervous System, which aims to endow robotic swarms with decentralized coordination mechanisms. A goal is to make them capable of dynamically generating temporary organizational structures such as ad hoc hierarchies when needed. Rather than relying on fixed architectures or permanent leaders, this approach investigates how self-organization can give rise to hierarchical arrangements that emerge in response to task demands, environmental conditions, or external inputs. When such hierarchies are present, they support more efficient internal coordination—for example, by structuring information flow, decision-making, and task allocation within the swarm.
At the same time, these structures provide a simplified and more intuitive interface between a human operator and the swarm, allowing high-level commands or goal specifications to be introduced into the system and propagated efficiently. Crucially, these hierarchies can dissolve when no longer useful, preserving the flexibility, robustness, and adaptability that characterize swarm-based systems.
A second research direction focuses on the use of blockchain technology in robotic swarms to introduce a baseline level of security against malicious behavior that is not explicitly addressed in current swarm robotics systems. By leveraging distributed ledgers and consensus mechanisms, this approach aims to make it difficult for compromised or adversarial agents to falsify shared state, manipulate collective decisions, or impersonate other members of the swarm, all without relying on centralized control.
Beyond security, blockchain-based infrastructures can provide tamper-resistant shared memory that supports coordination, accountability, and decentralized decision validation. These capabilities are particularly relevant in settings involving large-scale swarms, heterogeneous agents, or potentially adversarial environments. This research investigates how blockchain technology can be integrated with swarm intelligence principles in a lightweight manner, so as to preserve scalability, adaptability, and responsiveness.
Together, these directions address a common challenge: how to design large populations of autonomous robots that can coordinate reliably, adapt to uncertainty, and operate without centralized control, while remaining robust to failures and external interference.
Given the trajectory of this technology, what interesting application(s) do you predict we will see ten years from now?
Over the next decade, swarm intelligence is likely to play an increasingly important role in applications involving large collections of interacting agents that must adapt to changing conditions and tolerate failures. In applications of this kind, centralized control might become impractical while the robustness, flexibility, and scalability of swarm-based approaches become key advantages.
One promising area is environmental monitoring and management such as tracking pollution, monitoring ecosystems, or supporting precision agriculture. Swarms of relatively simple robots or sensors could collectively cover large areas, adapt their activity to local conditions, and continue operating even when individual units fail. Similar ideas apply to infrastructure inspection, for example in transportation networks or energy systems, where decentralized coordination can improve coverage and resilience.
I also expect swarm intelligence to be increasingly integrated into hybrid systems rather than deployed in isolation. Future applications will likely combine swarm-based coordination with learning techniques, higher-level planning, and human supervision. In this sense, swarm intelligence will function as a foundational layer that enables collective adaptation rather than as a standalone solution.
Finally, some of the most impactful applications may be largely invisible to end users. Swarm principles are already influencing logistics, communication networks, and distributed computing, and this trend will continue as systems grow in scale and complexity. The long-term significance of swarm intelligence may lie less in spectacular robotic demonstrations and more in quietly enabling systems that remain effective in uncertain and dynamic environments.
Marco Dorigo is a Research Director for the Belgian Funds for Scientific Research (FRS-FNRS) and Co-Director of IRIDIA, the artificial intelligence lab at the Free University of Brussels, Belgium. He was recently named an ACM Fellow for “establishing swarm intelligence as a research field.” Swarm intelligence is a subfield of artificial intelligence that studies how collective behavior in natural systems—such as ant colonies and beehives—can inform computational and engineering approaches. Scientists examine how local interactions among many simple parts yield effective solutions without any centralized or hierarchical control. Dorigo’s work has focused primarily on the engineering aspects of the field, including the design of robot swarms that cooperate to accomplish tasks beyond the capabilities of individual robots.
Dorigo has published widely in this area, including the book Ant Colony Optimization (co-authored with Thomas Stützle) and numerous other books and journal articles. Dorigo is also Founding Editor of Swarm Intelligence, the principal research journal in the field. This year he will serve as Honorary Chair of ANTS 2026, the 15th International Conference on Swarm Intelligence.