If you have ever sat in a traffic jam on a wide-open highway with no accident, no merging lane, and no construction, you have already met the problem. The cars ahead are not stopped because anything is in the way. They are stopped because every driver, individually following the sensible rule of "keep a safe gap and do not crash into the person in front of you," has collectively produced a wall of brake lights that travels backward through the traffic like a slow wave. Nothing is wrong with any single driver. Nothing is wrong with any single car. The system, as a whole, has jammed.
A paper published this month in the Proceedings of the National Academy of Sciences shows that robots do the same thing, that ants do the same thing, and that the fix in all three cases is the same: add a carefully calibrated amount of randomness to how each agent moves. Too little randomness and you get gridlock. Too much and you get wandering. In between, there is a Goldilocks zone, and the research team, working out of the lab of L. Mahadevan at Harvard's School of Engineering and Applied Sciences, managed to write down the equations that predict exactly where that zone sits.
This is one of those quietly remarkable results where the math is simultaneously obvious in hindsight and useful in about forty different fields that had no idea they were asking the same question.
The Setup That Breaks Intuition
The study, led by applied mathematics Ph.D. student Lucy Liu with guidance from SEAS senior research fellow Justin Werfel, started with a very clean experiment. Put a crowd of agents in a confined space. Give each agent a destination. Ask each agent to move toward that destination as efficiently as possible. Watch what happens.
If you have ever organized a group of small children to walk in a single direction, you can guess the result. If every agent heads directly for its goal, the agents collide. Collisions produce pauses. Pauses produce clusters. Clusters block the paths of agents who are still moving, which produces more pauses. Within a short time, the whole system freezes into a dense jam where most agents are neither moving nor reaching their destinations. Efficiency, measured as destinations reached per unit time, collapses to near zero.
The naive fix is to tell the agents to move randomly instead. That does prevent jams, but it also prevents arrivals. A perfect random walker eventually reaches its destination, but the expected time is enormous, and if you scale to dozens or hundreds of agents, the system is effectively non-productive. You have traded gridlock for futility.
The interesting question is what sits between those two extremes. The Mahadevan lab's answer, demonstrated in simulation and confirmed with real robots, is that a moderate amount of randomness layered on top of goal-directed motion vastly outperforms either pure strategy. The technical term is "noise," and the quantity that matters is how much of it, relative to the deterministic drive toward the goal, each agent carries.
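The trade-off can be seen even in a two-agent toy. The sketch below is illustrative only: the corridor setup, the stop-if-too-close collision rule, and the noise values are assumptions for demonstration, not the paper's actual model. Two agents head for each other's starting points; with zero heading noise they deadlock nose to nose, while moderate noise lets them slip past and arrive.

```python
import math
import random

def try_step(pos, goal, others, sigma, rng, speed=1.0, radius=1.0):
    """Propose a unit step toward the goal with Gaussian heading noise;
    refuse any move that would land within `radius` of another agent."""
    theta = math.atan2(goal[1] - pos[1], goal[0] - pos[0]) + rng.gauss(0.0, sigma)
    new = (pos[0] + speed * math.cos(theta), pos[1] + speed * math.sin(theta))
    if any(math.dist(new, o) < radius for o in others):
        return pos  # blocked: pause this tick -- the seed of a jam
    return new

def arrivals(sigma, n_steps=4000, seed=1, tol=1.5):
    """Two agents swap ends of a corridor; return how many reach their goals."""
    rng = random.Random(seed)
    pos = [(0.0, 0.0), (40.0, 0.0)]
    goals = [(40.0, 0.0), (0.0, 0.0)]
    done = [False, False]
    for _ in range(n_steps):
        for i in (0, 1):
            if not done[i]:
                pos[i] = try_step(pos[i], goals[i], [pos[1 - i]], sigma, rng)
                done[i] = math.dist(pos[i], goals[i]) < tol
    return sum(done)
```

With `sigma = 0`, `arrivals(0.0)` is 0: the agents meet head-on at the midpoint and block each other forever. With `sigma = 0.5`, the wobble lets both slip past and arrive well within the step budget; push `sigma` much higher and arrival becomes slow and unreliable, which is the other failure mode.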
What Ants Figured Out First
Mahadevan has spent a career finding mathematical principles in biological systems, so it is unsurprising that the inspiration for this study came partly from insect behavior. Army ants that forage in dense columns do not walk in perfectly straight lines. They wobble. Fire ants assembling into a raft wiggle. Honeybees heading back to the hive follow headings that are noisy around their true bearing.

For decades, this noise was treated as a nuisance by biologists modeling animal motion. The assumption was that each animal was "trying" to follow a perfect heading and that the deviations were sensory imperfection. The last fifteen years of work in collective behavior, capped by this new paper, suggests the opposite. The noise is the feature. A population of animals that all tried to follow identical perfect trajectories would jam at the first bottleneck. A population that wobbles survives the bottleneck. This fits a broader pattern in animal cognition, where individual "errors" often carry hidden group-level function, visible in phenomena like bees' surprisingly sophisticated ability to think in numbers.
Iain Couzin and his collaborators at the Max Planck Institute of Animal Behavior showed something similar in 2020 with simulations of swarming fish. Too much individual accuracy in tracking neighbors collapsed the group. A small amount of individual error preserved the swarm's cohesion under threat. The signal, if you read across these studies, is that evolved biological swarms are almost never perfectly tuned. They are noise-tuned.
The Mathematics of the Goldilocks Zone
What Liu and her coauthors added is an analytical formula. Earlier work on swarm jamming relied on running large computer simulations and plotting the results. That is useful for observing the phenomenon but not for predicting what happens in a new configuration without running the simulation again. The Harvard team worked backward from the simulation data to derive closed-form expressions for the rate at which agents reach their destinations, as a function of crowd density and noise level.
The core insight of the formula is that two competing effects set the optimum. Reducing noise improves each agent's individual efficiency because a more deterministic walker reaches its goal faster. But reducing noise also increases the rate at which agents collide and cluster, which reduces everybody's efficiency. These two curves, plotted against noise, cross at a specific value. That crossing point is the optimum, and it shifts predictably with the density of the crowd.
At low densities, the optimal noise is low, because collisions are rare and each agent should mostly just go where it wants. At high densities, the optimal noise is substantial, because the system is close to jamming and the cost of determinism dominates. Crucially, the optimum is never zero. Even a mostly empty arena benefits from a little noise, because the handful of collisions that occur take longer to clear if every agent is stubbornly locked onto its own heading.
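The shape of that argument can be sketched with a toy model. The functional forms below are stand-ins chosen for their shape, not the paper's derived expressions, and the constant `k` is an arbitrary assumption: individual progress toward the goal falls as noise rises, the fraction of time lost to jams falls as noise rises, and their product peaks at a nonzero noise level that grows with density.

```python
import math

def throughput(noise, density, k=4.0):
    """Toy throughput: exp(-noise^2/2) is the mean cosine of a Gaussian
    heading error (per-agent drift toward the goal), and
    density / (density + k * noise) stands in for the fraction of time
    lost to collisions and clusters. Both forms are illustrative only."""
    progress = math.exp(-noise ** 2 / 2)
    denom = density + k * noise
    jam_loss = density / denom if denom else 1.0
    return progress * (1.0 - jam_loss)

def optimal_noise(density, grid=None):
    """Scan a noise grid and return the argmax of the toy throughput."""
    grid = grid if grid is not None else [i / 1000 for i in range(1, 3000)]
    return max(grid, key=lambda s: throughput(s, density))
```

Scanning the grid reproduces the qualitative picture: at zero noise the toy throughput is zero (everyone jams), the optimum sits at a strictly positive noise level, and that optimum shifts upward as density rises. The actual closed-form expressions are in the paper and are not reproduced here.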
The prediction checks out against experiment. The team worked with physicist Federico Toschi at Eindhoven University of Technology to build a swarm of small wheeled robots in a physical arena. When the robots were programmed to follow the predicted optimal noise level for the arena's density, throughput matched the mathematical model. When they were programmed to be too deterministic, they jammed. When they were too random, they wandered. The equations had predicted reality.
Why This Matters Beyond the Lab
The obvious application is warehouse robotics. Amazon, Ocado, Symbotic, and every startup running fulfillment centers packed with hundreds of mobile robots face exactly the problem this paper describes. When demand spikes and you push more robots into the aisles, throughput does not rise linearly. It rises, plateaus, and then, if you keep adding robots, starts to fall as the robots spend more time waiting for each other than moving.
The industry has been aware of this for years and has mostly handled it with central coordination: a single scheduler assigns each robot a path that is guaranteed not to conflict with any other robot's path. That works at moderate scale but does not scale elegantly. Every added robot multiplies the scheduling problem, and a central scheduler is a single point of failure.
The Harvard result suggests a different architecture. Instead of top-down coordination, have each robot follow its own goal-seeking behavior with a tuned noise term baked in. The swarm will self-organize out of jams. The noise level becomes a knob the operator can turn up or down as crowd density changes, which is far simpler than reprogramming a central scheduler. Early implementations of this principle are already in testing at a handful of warehouse-robotics vendors, though none have published performance numbers.
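As a sketch of that knob, each robot could map a locally measured crowd density to its own heading-noise setting. The schedule below is hypothetical: the bounds, the linear interpolation, and the parameter names are illustrative choices, not anything a vendor has published.

```python
def noise_setting(local_density, low=0.05, high=0.6, d_max=1.0):
    """Hypothetical operator schedule: interpolate heading noise between a
    floor (never zero, per the finding) and a ceiling as density rises.
    Densities above d_max saturate at the ceiling."""
    frac = min(max(local_density / d_max, 0.0), 1.0)
    return low + (high - low) * frac
```

The key design choice, matching the paper's result, is that the floor is nonzero: even an empty warehouse gets a little wobble, and the setting only ever climbs from there as the aisles fill up.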

There is also a policy-relevant reading of the same math. Human traffic exhibits jamming in almost exactly the form the paper describes. When traffic density on a highway crosses a critical threshold, a wave of brake lights propagates backward because every driver is being slightly too deterministic about maintaining a particular following distance. The experimental data on phantom jams, going back to Yuki Sugiyama's famous 2008 ring-road demonstration in Japan, matches the Harvard swarm model with only modest tweaks. The practical implication is counterintuitive: a fleet of self-driving cars programmed to drive "perfectly" will jam more aggressively than human drivers, unless the self-driving software deliberately introduces variability into its own behavior.
Early autonomous vehicle researchers noticed this in a different form. A 2018 University of Illinois study on mixed human-robot traffic found that adding a small number of cars running adaptive cruise control on a jammed highway could smooth phantom jams. The Harvard paper provides the underlying theoretical framework for why that worked. The robot cars were, in effect, injecting a non-zero noise level into a system that had accidentally become too deterministic.
A Pattern That Keeps Showing Up
The reason the result feels bigger than a single study is that the same pattern has been quietly surfacing in adjacent fields for years. In network packet routing, some amount of random delay in retransmissions prevents synchronization collapses. In power grid management, tiny amounts of jitter in generator response times prevent oscillatory failures. In financial markets, enough diversity of strategy across traders prevents correlated panic. In ecology, slight variation in individual behavior keeps populations from crashing in unison. Even the newest hardware for mimicking biological cognition, like the neuromorphic chips designed to solve physics problems the way brains do, relies on injected noise to settle into useful solutions rather than deterministic dead ends.
These are not the same systems. A robot swarm is not an ecosystem, and a power grid is not a beehive. But they share a structural property: many agents making local decisions while competing for a shared resource. The Harvard paper's contribution is to write down the math of the jamming-unjamming transition cleanly enough that the same formulas might work in all of them.
That prediction is exactly the kind of claim mathematical biology tends to make and then take years to verify. Liu and Werfel are cautious about overselling it. "We have a framework that quantitatively predicts optimal noise levels for confined agent swarms with goal-seeking behavior," Werfel told ScienceDaily. "Whether that framework generalizes to systems that look similar from a distance but differ in the details is something other researchers will have to test."
Where This Leads
The most interesting implication might be philosophical rather than practical. Western engineering tradition has tended to treat randomness as a flaw, something to be designed out of systems through tighter tolerances, faster feedback, more accurate sensors. This paper is part of a growing body of work arguing that in crowded systems, randomness is not a flaw but a functional ingredient. A little bit of inefficiency at the individual level produces much more efficiency at the collective level.
Nature has known this for hundreds of millions of years. The ant that wobbles is not failing to walk a straight line. The ant is running the correct algorithm for the crowd it is in, because the alternative, a column of ants each walking perfectly toward the nest, is an ant column that jams at the first obstacle. Evolution did not tune the ant for individual perfection. It tuned the colony for collective flow.
Our warehouses, our highways, our cloud-computing schedulers, and eventually our self-driving vehicle fleets are now large enough and dense enough that the same logic applies. The Harvard paper, by giving us the equations, turns that folk wisdom into a design principle. The next step is to see how many systems can be upgraded, not by making their components more accurate, but by making them a little less so.
In the meantime, the next time your GPS routes you through a jam that nobody seems to have caused, you will know what happened. Everyone was trying too hard to drive the same way.
Sources
- Harvard SEAS, "Too Many Cooks, Or Too Many Robots?"
- Tech Xplore, "Too many cooks, or too many robots? Finding a Goldilocks level of randomness to keep robot swarms moving"
- ScienceDaily, "This simple change stops robot swarms from getting stuck"
- Harvard Science Review, "The Mathematics of the Swarm: Why Algorithmic 'Noise' is the Secret to Optimal Robotic Swarm Coordination"
- Wyss Institute, "A self-organizing thousand-robot swarm"
