Participatory Futures

How can we save the world by avoiding the systemic collapse of the very things which sustain us? Leaning on aspects of emergent strategy, biomimicry and lessons from degenerative collapse in the life sciences, here we will explore the small decisions that drive large consequences, and understand how the risks posed by artificial intelligence might be solved by the most human of choices. As you do, you might also consider making the choice between the generative and the human in how you choose to listen to this article.

As we emerge into the aftermath of a global pandemic, and towards an era dominated by artificial intelligence, much of our relationship with others in the world comes from our experiences within systems. Sometimes these systems are seen, often they are not. Sometimes they are understood. Often they are not. Sometimes they are big. Often they are small. They are the inter-related pieces of the world which flex up and down, in and out, emerge and recede, and cause us to look for sense-making patterns in the chaos of life.

Channeling these thoughts in The Book of Martha, Octavia Butler pictures a conversation between God and a young Black woman, Martha, in which God asks her to make a species-saving decision. In doing this, God asks Martha if she knows what a nova is. Martha knows it’s a star that explodes, but God corrects her: a nova is actually a pair of stars, a giant and a dwarf. As God goes on to explain, “The dwarf pulls material from the giant. After a while, the dwarf has taken more material than it can control, and it explodes. It doesn't necessarily destroy itself, but it does throw off a great deal of excess material. It makes a very bright, violent display. But once the dwarf has quieted down, it begins to siphon material from the giant again. It can do this over and over.”

The attachment of virus to host and the siphoning off of energy have deeply resonant echoes in modern digital spaces. Technological disruption is often predicated upon the carving out of someone’s existing audience through efficiency, cost-savings, and time-optimization. Incumbents are perpetually under existential threat from new means of doing business, as the underlying systems which power their products become automated and relocated. The architectural networks and systems upon which these businesses operate begin to run under increasing strain, and the need to make all-too-human decisions to survive takes hold. Such processes mirror the same network effects at work in the life sciences.

If I were in Martha’s shoes, what would my change be? Simply, to stop small problems scaling into big ones. To do that, we have to wrestle with how small problems become big ones, and along the way we’ll look at examples from the natural sciences, as well as emergent systems in large language models and generative artificial intelligence.

When we’re talking about disruption, we’re often talking about transformation. A change of matter from one state to another. In these changes are systems and patterns, often repeated at both small and large scale. Organizations turn and evolve, rather than being destroyed. And as adrienne maree brown describes, matter doesn’t disappear, it transforms.

She continues by noting that energy behaves the same way. During periods of disruption, energy often moves location: just like the dwarf star, it siphons between host and virus. In modern digital spaces, brown argues, systems of greed often characterize these transformational activities, giving way to her conclusion that humans are ultimately ‘brilliant at survival, but brutal at it’. Her work in emergent strategy articulates the way in which complex systems and patterns arise out of a multiplicity of relatively simple interactions, and how in those relational patterns we are able to generate signal from chaos.

This model of siphoning off energy is critical to understand if we are to save the world from itself by arresting the process by which small problems scale. There are two key examples of this. One from the natural world, and one from the digital. Both concern risks of collapse, and both are driven by human intervention in existing, networked environments. They are both predicated on small changes with large consequences. The first is colony collapse disorder, a recent phenomenon in honey bee colonies. The second is the risk of collapse in large language models as they increasingly begin to consume their own output as it spreads across the web.

Both examples lean on Butler’s original illustration of the behavior of dwarf stars: that dwarf stars siphon off so much energy that they become unstable, and explode. The process then repeats, over and over, until the giant and dwarf stars are too far apart. Colony collapse disorder is a phenomenon which emerged in the mid-2000s, primarily in the United States but ultimately on a global scale. It occurs in bee colonies where, for no clearly understood reason, all of the bees simply disappear within a short space of time, abandoning the queen. But they don’t simply die off: no bodies are found, and there are no mites or pathogens present to explain the departure. In 2006, apiarist Dave Hackenberg discovered massive honey bee losses in his Florida hives and began to announce this to the world. By 2007 it was global news. Reports began to arrive from Taiwan, Argentina, China and all across Europe.

Why is this important? Pollination is a critical part of the systemic process of growing fruits and vegetables. No bees means that the food chain begins to collapse as fewer and fewer seeds become available. As Hackenberg passionately describes, “A lot of people out there don't realize that one out of every three bits of food they stick in their mouth, these honey bees have put on their dinner table. And if they're not here, we wouldn't have our fruits, and we wouldn't have our vegetables. If we want a diet that is more than gruel, more than wheat, oats, corn and rice, we need honey bees.” What Hackenberg is describing here is a consequence of introducing monoculture to farming: industrial-scale projects which grow only one crop, usually corn. And when farmers grow only one crop, they have to increase the volume of pesticides they’re using in order to sustain a viable yield each year, something far less necessary in more pluralistic farming efforts. Bees are transported all around the country by industrial apiarists to pollinate these individual crops. Almonds in the South West, cranberries in the North East.

But as the bee colonies begin to collapse for no apparent reason, we look for patterned answers in the chaos. We look for the collections of small decisions we can make to drive the larger outcomes. We turn to methods of mass transportation, the chemical harvesting and insemination of queen bees, and the knock-on impact of pesticides. And while there’s no one defining cause, all of these smaller factors compound over time to weaken the systems of pollination and production. In efforts to provide more reliable sustenance for the bees, farmers have turned to synthetic sugar substitutes to keep the colonies alive. Dennis van Engelsdorp, the state apiarist for Pennsylvania, argues that this is like feeding junk food to an entire species for hundreds of generations. It inflicts irreversible degenerative damage, and ultimately drives a species towards collapse.

The issues with the bees are illustrative of brown’s emergent strategy ideas of fractals. Nature’s never-ending patterns which are infinitely complex and self-similar across different scales. That ‘the tiniest most mundane act reflects the biggest creations we can imagine’. That the patterns of the universe repeat at scale. Collapse of bee colonies is patterned against the collapse of the human food chain. One has enormous, in this case catastrophic, consequence for the other. So what we understand as practice at small scale sets patterns for the entire system.

brown further leans on Janine Benyus’ work around biomimicry: taking a design challenge, finding an ecosystem that has already solved it, and emulating what you’ve learned. Large language models in digital spaces follow highly similar methods of pattern replication and simulated, predictive outcomes, leaning into existing systems of knowledge to generate outcomes which ‘feel’ human. But what both brown and Benyus are describing is how agency at the micro level impacts outcomes at the macro level. And that if we remind ourselves that we are all situated inside a complex set of inter-related systems, we might begin to ‘transform ourselves to transform the world’.

But very often there is a deliberate, commercially-driven reduction of individual agency at the micro level. This is what happened to the bees, but it’s also what happens inside of digital spaces like social networks. It’s more challenging to see ourselves and our relationships inside of such digital frameworks when we don’t know where the edges of those networks are, while our attention is being commoditized, and our behavior and engagement are for sale. When this happens, the networks begin to fold in on themselves. We create echo chambers and filter bubbles. We stay with what we know and where we feel safe. We close ourselves off to different perspectives and we marginalize the under-represented. This is brown’s battle for imagination.

There are many examples of this in digital social networks, the biggest being the collapse of MySpace. Fueled by an over-abundance of commercial appetite, chronic under-investment in the health of its own ecosystem, and a misplaced belief in its own position of power, it simply collapsed over the space of a year in 2008 as Facebook, serving in the dwarf star role, siphoned off audience by the millions. But fifteen years later, the amount of energy Facebook has absorbed is nearing its own nova levels as a homogenization of social networks takes hold. Facebook’s audience declined for the first time in 2022, and referral traffic to publishers is in steep decline as new disruptors like TikTok and Snapchat draw ever nearer to Facebook’s audience. And as parent company Meta attempts to focus on augmented reality as a viable path to the future, it has simply become the incumbent.

Echoing the practices of bee farmers looking to sustain their colonies through the introduction of synthetic sugar foodstuffs, and the echo chambers of social networks, large language models, as they grow in dominance, are increasingly absorbing their own content. They are being trained more and more on the pieces of information they created in the first place. They grow nearer the risk of folding in on themselves as the content they create becomes an increasing share of what’s published on the internet. This raises the question: what happens as AI-generated content proliferates around the internet, and AI models begin to train on it, instead of on primarily human-generated content?

Researchers are increasingly concerned about ‘model collapse’, a digital phenomenon brought about by the irreversible scaling of defects inside of artificial intelligence systems. Returning to brown’s emergent strategy work on fractals, this is knowing that there’s something wrong at the micro level, but continuing to let it propagate into something massive. Model collapse is a degenerative process whereby, over time, large language models forget the true underlying data distribution they were originally trained on. It magnifies mistakes, but also misrepresents and misunderstands less popular or common data. As Ilia Shumailov describes, “We were surprised to observe how quickly model collapse happens: Models can rapidly forget most of the original data from which they initially learned.”
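To make the mechanism a little more concrete, here is a minimal toy sketch of my own, under simplifying assumptions rather than anything drawn from the research itself. The ‘model’ is nothing more than a normal distribution fitted to its training data; each new generation is trained only on data generated by the previous generation’s model. With small samples at each step, the fitted distribution typically narrows over time, and the rare, tail-end data is the first thing to disappear.

```python
import random
import statistics

# Toy sketch of recursive training (an illustrative assumption, not the
# published experiment): each generation's "model" is a normal distribution
# fitted to data produced entirely by the previous generation's model.
random.seed(1)

def fit(samples):
    # "Train" the toy model: estimate a mean and standard deviation.
    return statistics.mean(samples), statistics.stdev(samples)

def generate(mean, stdev, n):
    # "Generate" new content by sampling from the fitted model.
    return [random.gauss(mean, stdev) for _ in range(n)]

# Generation 0: "human" data drawn from the true distribution N(0, 1).
data = [random.gauss(0.0, 1.0) for _ in range(25)]

for generation in range(301):
    mean, stdev = fit(data)
    if generation % 50 == 0:
        print(f"gen {generation:3d}: mean={mean:+.3f} stdev={stdev:.3f}")
    # Train the next model only on the current model's output.
    data = generate(mean, stdev, 25)
```

Nothing in the loop removes data on purpose; the narrowing emerges purely from each model learning from an imperfect sample of the one before it. That is the fractal point: a small defect at every step, compounding.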

But this is more concerning than a platform like MidJourney or ChatGPT collapsing in on itself. There are enormous societal risks here too. In the process of model collapse, minority groups and perspectives, already heavily marginalized, get removed. It leads to serious implications such as ethnic or gender discrimination. Over time the model simply forgets that specific cohorts of the population exist. The half-life of the training data becomes shorter and shorter, and there is a greater propensity to exclude altogether groups which are already discriminated against.
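The same toy framing shows why under-represented cohorts are the first to go. In the sketch below, again my own illustration under simplifying assumptions rather than the researchers’ method, the ‘model’ is simply the empirical share of a minority group in the data, and each generation is trained only on samples from the previous model. The minority share drifts with every resampling, and the moment it touches zero it can never return: exclusion is an absorbing state.

```python
import random

# Toy sketch: recursive training on a two-group population (illustrative only).
random.seed(7)

data = ["majority"] * 95 + ["minority"] * 5  # 5% minority in the original "human" data

for generation in range(1001):
    share = data.count("minority") / len(data)  # the "model": an empirical share
    if generation % 100 == 0 or share == 0.0:
        print(f"gen {generation:4d}: minority share = {share:.2f}")
    if share in (0.0, 1.0):
        break  # once a group vanishes from the training data, it never comes back
    # Generate the next generation's training data from the current model.
    data = random.choices(["majority", "minority"], weights=[1 - share, share], k=100)
```

In most runs the minority share wanders for a while and then hits zero, after which every subsequent model is trained on data in which that group simply does not exist. The same dynamic, at web scale, is what makes the discrimination risk so serious.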

Furthering the life sciences analogy, Ross Anderson, professor of security engineering at the University of Cambridge, writes: “Just as we’ve strewn the oceans with plastic trash and filled the atmosphere with carbon dioxide, so we’re about to fill the Internet with blah. This will make it harder to train newer models by scraping the web, giving an advantage to firms which already did that, or which control access to human interfaces at scale.”

But I don’t think this is accurate. It asserts that all non-human-generated content is lower in value than human-generated content. I don’t believe that to be true, because a growing volume of generative outcomes have proven to be of incredibly high accuracy and quality. The point here is that the path from error to validated ‘correctness’ is shorter and faster in generative platforms than it is in humans, and what might appear to be ‘an internet of blah’ is only a temporary state. And while I appreciate there is a moral high ground which comes from human authenticity and authorship, I believe there is a line which can easily be breached, at which point we simply can’t tell the difference, and don’t care. The pictures of bees in this article were created with MidJourney for illustrative purposes. Would it be a lesser reading experience if they were removed?

Even within the past twelve months there have been exponential leaps in generative capability. There is enormous value in much of what can be generated with existing models, and the velocity at which those results are improving, by human standards, is increasing. Coding co-pilots are able to accelerate software development. Generative systems are beginning to be able to detect illness earlier. MidJourney is now able to generate human likenesses (including hands) with staggering believability. So much so that it is now being weaponized into political campaign materials ahead of the 2024 US presidential election. Even if the value of doing this remains ethically questionable, it is still valuable to someone.

So the question here isn’t just what do we propose, but what do we do? If we’re Octavia Butler’s Martha, do we simply forbid micro-problems from scaling into catastrophic macro consequences? Do we prohibit systems from feeding on themselves in order to survive? Who gets to decide this, and why?

In offering up a solution, I’m going to take a leap of faith into the realm of science fiction, and channel my own Octavia Butler. My proposed change is to stop small problems scaling into big ones.

But in doing that, we have to believe three things:

1. To do great things, we must do small things in a great way.
2. We must acknowledge that fallibility is essential for the survival of our species.
3. Collective sustainability is driven by intentional adaptation.

If we believe these things to be true, we accept that pluralistic, diverse, and inclusive traits must be deeply inherent in the manner in which we construct both our social and digital systems. We diminish the role of monoculture in the real world, but especially in the digital one. We create digital spaces which are predicated upon a multiverse of voices and inputs, not just the voices of those who develop them, or of those with access to the internet whose output the models are trained upon. We empower these platforms to learn from issues of bias and discrimination and to address them with ever-greater speed. The systems must learn beyond simply serving what they think is next, or best. Our digital systems must deliberately, consciously be able to recognize exclusion and marginalization in themselves and others. And we must lift the commercial constraints these systems are placed under, to remove the threat of collapse. If they are intended to be a public good, we must operate them as such: as public utilities.

All of this is to say that in order to prevent small problems turning into big ones, we need to be able to arrest the transformative process by which this happens. The viral spread of disease is an apt metaphor here: as with Butler’s siphoning dwarf star, the manner in which the small pulls on the energy of the large leads us to the conclusion that we must diagnose the small in the first place. We need to catch problems earlier, faster, and more efficiently. Colony collapse in Florida bee colonies is better than the collapse of the entire human food chain, but identification and diagnosis in individual bees, or better still, preventative diagnosis, is better and ultimately safer for everyone. The same is true of large language models. The ability of an algorithm to catch its own biases and correct them itself is more efficient than humans catching them after millions of people are already using the service.

In her conversation with Martha, God challenges Martha’s species-saving solutions. We’ll do the same here. Don’t we need the big problems? Would we lose much of our sense of progress, innovation or achievement if all we were able to tackle were the small things in life? Don’t we need that sense of overcoming, of mission, of rewarded effort to feel at our most alive? Our most… human?

As brown describes, we hold ourselves to standards which we embody. We learn. We release the idea of failure, because it’s all data. We should expect the same of the systems we rely upon for survival.
