
The issue: A combination of new technologies, changing social conditions, and evolving beliefs could affect our ability to understand and trust one another, and therefore our capacity for large-scale cooperation.

As forces such as social media, conspiracy theories, disinformation, and nostalgia drive the growth of distinct communities with conflicting ways of understanding the world, it may become harder to build broad social consensus or take collective action.


People may find it harder to agree on basic facts about the economy, politics, history, or the environment if society fragments into discrete groups with unique identities and incompatible ways of understanding the world. In such a future, productive debate on crucial issues might diminish. This may be because these groups retreat into echo chambers on separate digital platforms and therefore rarely encounter one another, or because their worldviews are so far apart that there is no starting point for conversation. Extreme fragmentation might make it almost impossible to design policies, programs, and messages that satisfy a majority of people. Similarly, establishing a national consensus on priorities and problems could be far more difficult than it is now—with societal mobilization toward any kind of common goal even less likely.


As technologies that make our emotions and thinking transparent become more common, social relations could become simpler or much more challenging.


Massive amounts of new data about us will come online as the Internet of Things (IoT) expands, connecting digital systems more closely to our bodies via consumables and wearables. This data could provide real-time insight into the behaviour of those around us. A perceived slight by a co-worker, for example, could be revealed as a manifestation of exhaustion driven by insomnia. An employee’s poor work performance could be attributed to sensory interference in their environment rather than laziness. Such insight could ease social relations. It might even allow us to anticipate how our actions could harm others and modify our behaviour accordingly. Better understanding and treatment of others could foster deeper trust, which could help revive civil society and democratic institutions. On the other hand, awareness that our inner thoughts and feelings are constantly exposed to others might heighten anxiety. Encounters with other people, corporations, and institutions could be deeply uncomfortable in such a future, with possible knock-on declines in social and institutional trust.


As AI technologies play a bigger part in our decision making, they could shape the tone of social relations based on whether they prioritize collective or individual benefits.


More powerful AI assistants could give users information-finding and processing superpowers that improve decision making. The social impacts of these changes may depend on whether our AI helpers prioritize decisions that benefit the individual, society, or something in between. AI optimized for social good could drive increases in civility, philanthropy, democratic engagement, tolerance, and support for evidence-based policy. Alternatively, systems set to maximize benefits to the individual could produce negative outcomes for society overall. AI assistants calibrated to boost a user’s social status at any cost could recommend actions that harm others. AI designed to keep us in a good mood might steer us away from alarming new evidence about climate change and toward disinformation that alleviates our fears of environmental disaster. Ultimately, the direction of these changes will depend on the priorities of AI designers, market forces, and regulations.


As digital technologies play a bigger role in our sense-making, they could lead people to understand the world in a similar way, making it easier for them to agree on priorities; or they could drive people into separate camps with competing worldviews.


The technologies that support our sense-making will have distinct features and rules that influence how we understand the world. In that sense, they may become the lenses that shape our view of reality. Search algorithms already shape the information we use to make decisions, often in ways that differ from one person to the next. If the future’s most popular sense-making technologies share enough features and rules to provide a consistent way of interpreting the world, they could foster a renewed sense of shared reality. This could make it much easier for people to talk to and understand each other, and therefore to build consensus. But there is no guarantee that these technologies will be consistent enough to promote any kind of common ground, much less the kind that reflects shared values such as multiculturalism. They may instead drive the spread of conflicting worldviews that produce misunderstanding, distrust, and intolerance.