
Our interactions with online tools and social media platforms entangle us in complex big data and surveillance systems. Public spaces and even social interactions increasingly subject us to recording and facial recognition systems.[i] Private spaces, increasingly integrated into the Internet of Things (IoT) via smart devices,[ii] open our domestic lives to surveillance. DNA (Deoxyribonucleic acid) and other biometric identifiers are already used by some governments, and may someday be keys to accessing certain social and economic benefits.


These systems already help us make better sense of our lives, but not without downsides. On the one hand, they reveal our screen habits and provide personal health information, suggest TV shows we will enjoy, and help us find romantic partners or people with similar hobbies. On the other, there are emerging concerns about the ways big data is used to generate actionable insights about us. These insights are used to sort us into groups[iii] based on our preferences, evaluate us against criteria,[iv] monitor us in the workplace,[v] spread disinformation,[vi] and offer targeted ads for products.[vii]


In the future, these systems could shape many aspects of our lives. They may determine access to things like education, insurance, employment, and dating opportunities, especially if they include biological data such as our DNA. They may also shape how institutions, corporations, and other people see and treat us. Enhanced surveillance and quantification could help track the spread of disease[viii] in future pandemics, reach equity targets,[ix] or support sustainability efforts.[x] On the other hand, they could intensify social divides by strengthening filter bubbles.[xi] Or they might provide the raw materials for new or more effective kinds of manipulation and control, including private sector “reputation systems” and marketing chatbots with insights on our emotional and physical states.


[i] Elizabeth Thompson, “Canada planning technological fixes to make crossing the border faster,” CBC, last modified Jan. 24, 2022.

[ii] Dan Greene, “The erosion of personal ownership,” Vox, last modified Apr. 21, 2021.

[iii] Charlene Chu, Kathleen Leslie, Rune Nyrup, and Shehroz Khan, “Artificial Intelligence can discriminate on the basis of race and gender, and also age,” The Conversation, last modified Jan. 18, 2022.

[iv] A. Fisher, “An Algorithm May Decide Your Next Pay Raise,” Fortune, last modified July 14, 2019.

[v] Matthew Finnegan, “Rise in employee monitoring prompts calls for new rules to protect workers,” Computerworld, last modified Nov. 30, 2021; Clive Thompson, “What AI exam proctors are really teaching our children,” Wired, last modified Oct. 20, 2020.

[vi] “How is Adtech funding misogynistic disinformation?” Global Disinformation Index, last modified Jan. 4, 2023.

[vii] “‘Surveillance capitalism’ by tech giants is a growing threat, says privacy watchdog,” Global News, last modified Dec. 9, 2021.

[viii] “BC professor to lead network aimed at helping Canada prepare for future pandemics,” CTV, last modified Apr. 12, 2021.

[ix] K. Hanegan, “How Data Can Supercharge Your DEI Program,” InformationWeek, last modified Dec. 26, 2022.

[x] J. Lou, N. Hultman, A. Patwardhan, and Y. Qiu, “Integrating sustainability into climate finance by quantifying the co-benefits and market impact of carbon projects,” Communications Earth & Environment 3 (2022).

[xi] A filter bubble is the “state of intellectual isolation” that results when search engine algorithms feed you a narrow range of information based on the data they hold about your location, search history, and previous clicks. “Filter Bubble,” Wikipedia, accessed Aug. 5, 2022.