The Future of Generative AI
Policy Horizons Canada (Policy Horizons) is the Government of Canada’s centre of excellence in foresight. Our mandate is to empower the Government of Canada with a future-oriented mindset and outlook to strengthen decision making. The content of this document does not necessarily represent the views of the Government of Canada, or participating departments and agencies.
This first iteration of the foresight brief (May 2023) explores potential shifts and disruptions that may arise from generative artificial intelligence (AI) technologies over the next five years.
As ChatGPT has captured the world’s attention, this foresight brief highlights eight key things to know about generative AI, along with important implications in three key areas: critical infrastructure, labour and market conditions, and content production and processing.
Generative AI could unleash scientific innovation, raise productivity, and change the way people find information. These technologies are also likely to create disruptions and challenges for multiple policy areas. By reflecting on what might happen in the future, Policy Horizons Canada aims to strengthen decision making within the Government of Canada.
Generative artificial intelligence (AI) is a paradigm-shifting technology that could unleash scientific innovation, raise productivity, and change the way people find information online. It is raising a number of concerns about how it could disrupt labour markets. It also poses important implications for critical Canadian infrastructure, as well as for content production and processing.
While generative AI developments date to the 1950s, various refinements have led to the emergence of the generative AI seen today. In late 2022, U.S.-based OpenAI released free-to-the-public versions of ChatGPT (a tool for AI-generated text) and DALL-E (a tool for AI-generated images and art). In 2023, OpenAI released GPT-4, the first in a series of multimodal AIs, meaning they can use a variety of media formats as inputs, outputs, or both.
ChatGPT is one commercial product that uses generative AI (see Table 1 for examples of generative AI capabilities and products). The “GPT” in ChatGPT stands for “generative pretrained transformer.” All generative AI products are built on pretrained foundation models, which are trained on broad sets of “unlabeled data that can be used for different tasks, with minimal fine-tuning.” This allows these technologies to identify patterns and structures in existing data to generate new, original content.
Generative AI algorithms can quickly generate various types of seemingly new content, including text, images, video, audio, and synthetic data. Generative AI tools are used to write computer code and college-level essays, and develop more realistic and interactive video game worlds. They also enable and scale the rapid creation of harmful content such as misinformation, revenge porn, and media content intended to sow social discord. Moreover, because AI algorithms reflect the values and assumptions of their creators, they can perpetuate conscious and unconscious biases that lead to exclusion or discrimination.
Table 1. Examples of generative AI capabilities and products
Mode | Use cases | AI models that power commercial products | Commercial products
Note: Some AI models are multimodal, meaning they can use a variety of media formats as inputs, outputs, or both. For example, GPT-4 can generate text based on a prompt containing both text and an image.
Eight things to know about generative AI
There is currently a lot of interest in generative AI, as well as a lot of media coverage. Here are some key points to keep in mind:
1. Generative AI is error-prone in ways that are challenging to detect.
While Large Language Models (LLMs) like ChatGPT may appear to be as “intelligent” as humans, they are more akin to well-developed autocomplete systems. They are therefore error-prone. They can generate text that looks credible but is incorrect or nonsensical. Some prominent AI researchers question whether there is a technical solution to this issue, or if a shift to new kinds of AI is required to overcome it.
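The "autocomplete" analogy can be made concrete with a toy next-word predictor. The sketch below is illustrative only: real LLMs predict the next token with neural networks trained on billions of parameters, not word-frequency tables, but the generate-one-token-at-a-time loop is the same basic idea, and it shows why fluent-looking output can still be incorrect (the model only ever extends a statistically likely sequence).

```python
from collections import Counter, defaultdict

# Toy "autocomplete": count which word follows which in a tiny corpus,
# then repeatedly emit the most frequent successor.
corpus = "the model predicts the next word and the next word follows the model".split()

successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def autocomplete(word, length=5):
    """Greedily extend `word` with its most common successors."""
    out = [word]
    for _ in range(length):
        if word not in successors:
            break  # no known continuation
        word = successors[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(autocomplete("the"))
```

The output is grammatical-sounding but meaningless, which mirrors the brief's point: plausibility of form is no guarantee of correctness of content.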
2. The data used to train generative AI models might infringe on privacy and intellectual property (IP) rights.
Many generative models are trained on large parts of the Internet. This raises implications for IP, privacy, fair use, and fair dealing. The Privacy Commissioner of Canada and Italy’s data protection authority are investigating ChatGPT to determine whether privacy violations may have arisen. Image generator Stable Diffusion is facing a class action copyright lawsuit from artists and another from Getty Images. OpenAI and Microsoft face copyright lawsuits related to GitHub Copilot. Claims of copyright infringement based on AI-generated voice content of musicians like Drake may prove difficult to defend, as AI-generated content is not a true copy of anything that exists. Rulings on such cases could undermine the foundations on which these technologies function.
3. The data used to train generative AI models might also perpetuate bias.
Using data taken from the internet to create new AI-generated content disproportionately amplifies the perspectives of those overrepresented on the internet: typically English speakers. Consequently, certain cultural perspectives are emphasized, while the perspectives of non-English speakers, as well as women, older people, Indigenous peoples, persons with disabilities, and other marginalized groups, tend to be excluded. Because many AI models are closed-source and therefore not transparent to auditors or users, these biases can be challenging to identify and address.
4. Generative AI could nonetheless unleash a new era of innovation and productivity gains.
Generative AI is a multi-purpose technology that could speed up scientific developments. It might disrupt certain domains of work. It may also enhance productivity across sectors, improve accessibility, and reshape education.
5. It may be difficult to harness innovation gains locally due to the dominance of U.S.-based companies, a lack of domestic computing power, and concerns over foreign-hosted data.
Training generative AI models requires massive amounts of data and computing power, making U.S.-based players such as OpenAI, Microsoft, and Amazon dominant in this space. A lack of computing power domestically could limit the ability of Canadian companies and researchers to develop generative AI models in Canada. Storage or use of Canadians’ personal data abroad carries legal and privacy risks. This may create barriers to the research and development of applications that rely on foreign-based data centres. Government-funded international collaborations on AI infrastructure development are emerging to counterbalance the dominance of private players.
6. It is unclear where value from the use of generative AI will be captured.
The generative AI tech stack includes apps, the generative AI models that power the apps, and the infrastructure that allows the models and apps to operate. In some scenarios, making money from proprietary, closed-source models like GPT-3 might be challenging. Open-access generative AI models allow anyone to develop their own apps for free. If enough people adopt these open-access models, and if these models perform well enough, closed-source models would be less attractive. There are various layers and actors involved. The trends toward open access, and the as-yet unresolved IP disputes described above, make it challenging to predict where and for whom generative AI will create the most value, and where this value will be captured. Infrastructure providers are likely among the most well-positioned to capture value in this ecosystem. The amount of value generated for and within Canada, however, remains unclear.
7. The rapid pace of advancement combined with gaps in regulation could lead to complex and unforeseen risks.
AI has impacts across society, the environment, the economy, politics, and values. Current regulatory approaches focus on finished products rather than embedding standards or norms into the technology development pipeline. For instance, identifying the kind of data used to train an AI model is not presently required. As this gap intersects with the rapid development in generative AI systems and applications, it could lead to complex and unforeseen risks. In addition, the rapid pace of development could make it challenging to future-proof AI regulations.
8. Generative AI could raise new concerns about the environmental costs of computation.
Generative AI is a particularly energy- and water-intensive technology, both to develop and to use. The information and communication technologies (ICT) that support the development and use of generative AI are also energy intensive. One estimate puts the total footprint of ICT at 2.5 to 3.7 percent of all global greenhouse gas emissions. This exceeds emissions from commercial flights (about 2.4%). The information technology (IT) industry could end up doubling its carbon footprint in the coming decade just as other industries are moving in the opposite direction.
Three key areas of impact over the next five years
Generative AI is said to be a transformative technology that could boost stagnating levels of productivity and unleash a new wave of scientific innovation. One recent estimate suggests that generative AI could drive a 7 percent increase in global gross domestic product over a ten-year period. It could add about $7 trillion per year and raise net labour productivity by 1.5 percentage points per year in Western countries over the next 10 years. The use of generative AI to solve problems in protein folding science could lead to new treatments for cancer, influenza, and COVID-19. Synthetic data could also speed up the training of AI models, especially for those relying on sensitive data like patient records.
While these potential gains are impressive, transformative technologies such as generative AI are also likely to create disruptions and challenges.
Here are some concerns to reflect on for the future:
Critical Canadian infrastructure
Generative AI could put pressure on existing vulnerabilities in critical IT infrastructure. The use of generative AI applications by business, government, and the public relies on domestic infrastructure like overland fibre-optic cables and data centres. Data centres are projected to grow tenfold from 2022 to 2032. A large portion of this growth will likely be due to generative AI. Data centre distribution in Canada is already highly concentrated: the majority of data centres are located in Montreal and Toronto. This means that a critical event in either of those cities could knock out the Internet in large parts of Canada. This could become problematic in a future where data centre attacks by state actors, extremist groups, or lone actors become more common.
Many areas of Canada could still attract data centre investors in the short term, but the need to access strategic resources may compete with the green transition. Compared to most countries, Canada has cheap and renewable hydroelectric power, and a cool climate. This might reduce data centre cooling costs and allow AI companies to advertise a reduced carbon footprint. However, access to strategic metals, land, water, and regional power supply may become an increasingly contentious policy issue as data centre demand increases over the next decade. More data centres, for example, could lead to more emissions and e-waste, and more calls for environmental regulation. Economic pressure to expand data centres may also compete with a transition to green energy. This is because data centre hardware uses many of the same strategic minerals found in solar panels, electric batteries, and other green tech.
Labour and market conditions
Generative AI could partially or wholly automate creative and language work. It is more likely to “augment” technical and knowledge work. Trades are least likely to be affected. Generative AI could dramatically reshape and perhaps automate a narrow subset of job roles that rely on creative and language work. These jobs include graphic design, copywriting, and to some extent, technical writing, editing, and music composition. In the next five years, augmentation is more likely for roles where generative AI adds to the quality and quantity of work done by humans: IT specialist, software developer, finance and accounting professional, architect, receptionist, customer service representative, technician, and engineer. Generative AI will also plausibly augment the roles of business executive, scientist, and policy analyst in areas such as ideation, hypothesis development, summarization, and decision making. Impacts are likely to be quite small in the trades.
Generative AI may partially or fully automate many technical coding tasks. This could reduce wages and lead to layoffs in what were once considered “highly-skilled” tech jobs. Generative AI systems can now solve complex coding problems in a variety of technical languages with minimal human prompting. The ability of GPT-4 to write targeted programs that rely on public code libraries, for example, “favorably compares to the average software engineer’s ability.” AI-generated code may also be easier for humans to read and follow. For example, it can automatically generate plain-language documentation that describes the function of each section of code in a way that can be standardized. These enhanced capabilities could lead to lower wages in highly paid tech jobs, or even reduced employment.
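The paragraph above mentions AI-generated, standardized plain-language documentation. The hypothetical function below illustrates what that output might look like: every name, parameter description, and sentence here is invented for illustration, not produced by or specified for any particular AI tool.

```python
def monthly_payment(principal, annual_rate, years):
    """Return the fixed monthly payment on an amortized loan.

    Plain-language summary (in the standardized style an AI
    documentation assistant might generate):
    - principal: the amount borrowed, in dollars.
    - annual_rate: the yearly interest rate as a decimal (0.06 = 6%).
    - years: the length of the loan term, in years.
    The result is the constant payment that repays the loan in full.
    """
    r = annual_rate / 12   # monthly interest rate
    n = years * 12         # total number of payments
    if r == 0:
        return principal / n  # no interest: split principal evenly
    return principal * r / (1 - (1 + r) ** -n)

# A 30-year, 6% loan of $100,000 costs roughly $600 per month.
print(round(monthly_payment(100_000, 0.06, 30), 2))
```

The value of such documentation, as the brief notes, lies less in any single function than in applying the same uniform description format across an entire codebase.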
It could be challenging for fields with concerns about liability due to privacy violations and errors of judgment to adopt generative AI support tools. Areas such as medicine, law, banking, and finance may require humans to verify AI-generated information in order to adhere to professional codes of conduct. Making such tools more reliable, with results that are easier to interpret, could promote greater adoption in these fields.
The use of specialized generative AI technologies could lead to market concentration or megafirms in fields where none existed previously. The development of tailor-made models of generative AI trained on proprietary data could promote uptake in specialized fields. Generative AI could spark profound changes in industries like law, if technology can handle a large proportion of the work reliably. However, the costs of adopting these technologies could limit their uptake to large organizations, which might lead to the development of megafirms.
Content production and processing
The democratization of high-quality content production could undermine social cohesion. The uptake of generative AI marks a watershed moment on par with the rise of social media and the transition to Web 2.0. While Web 2.0 permitted users to generate content and participate more actively in virtual and real life communities, it also brought concerns over filter bubbles, misinformation, extremism, and election interference. With generative AI, individuals will soon be able to create low-cost, professional-quality entertainment content. This could give rise to a flood of new amateur content unbounded by the norms set by established media production outlets. For example, this could lead to viral high-quality feature-length films expressing misinformation, grievances, and hatred against political leaders, immigrants, or women.
Generative AI could enhance accessibility for people with disabilities by mediating perception in new ways. Multimodal generative AI products are already emerging: the beta app from Danish startup Be My Eyes, for example, now relies on GPT-4 instead of human volunteers to describe images in real time for those with visual impairments. Users can take a photo, share it via the app, and have an AI-powered Virtual Volunteer answer any question about that image and provide instant assistance for a wide variety of tasks. It can describe the food in a fridge, for example, and offer recipes for what to cook with it. Live video inputs could offer real-time descriptions of the environment and of other people, changing the way that people with visual impairments perceive and engage with the world. In the same way, real-time language transcription technologies could alter the experiences of those with hearing impairments. But these technologies could also raise concerns around both consent and responsibility for errors.
Generative AI is a paradigm-shifting technology. Over the 2020s, it is set to grow exponentially, with impacts in many industries. Since generative AI may offer dramatically higher productivity than human labour, there may be a scramble to use it everywhere possible.
The ultimate performance of generative AI is unknown. However, the quality of output from larger models supported by new algorithms may continue to surprise. Over the next 10-15 years, newer and more powerful AI technologies will cause profound shifts in many areas of society. This is likely to bring bigger disruptions extending well beyond initial or obvious areas of impact like labour and productivity.
New technologies are often overestimated in the short term and underestimated in the long term. Foresight in this domain could help set the groundwork to prepare for even more disruptive futures. While no one can predict the future of generative AI, strategic foresight can help policy and decision makers better prepare.
This foresight brief synthesizes the thinking, ideas, and analysis of many contributors through research, interviews, and conversations. The project team would like to thank the experts who generously shared their time and expertise in support of the research, including those who chose to remain anonymous.
Postdoctoral Associate, Computer Science
New York University
Wendy Hui Kyong Chun
Canada 150 Research Chair in New Media and Professor in the School of Communication
Simon Fraser University
PhD candidate, Communications Studies
LLM Candidate for Artificial Intelligence Law and Business Law
Research Scientist and Climate Lead
Co-director, Applied AI Institute
Assistant Professor in Information and Communication Technology Policy in the Department of Communication Studies
Director, Artificial Intelligence and Emerging Technology Initiative
Fellow, Foreign Policy, Strobe Talbott Center for Security, Strategy, and Technology
Nicole Rigillo, Senior Foresight Analyst and Project Lead, Foresight Research
Simon Robertson, Director, Foresight Research
Tieja Thomas, Acting Manager, Foresight Research
Kristel Van der Elst, Director General
Meaghan Wester, Foresight Analyst, Foresight Research
Mélissa Chiasson, Communications Advisor
Laura Gauvreau, Manager, Communications
Nadia Zwierzchowska, Senior Communications Advisor
We would like to thank our colleagues Imran Arshad, John Beasy, Steffen Christensen, Nicole Fournier-Sylvester, Chris Hagerman, Jennifer Lee, Pascale Louis-Miron, Megan Pickup, Elisha Ram, and Julie-Anne Turner for their support on this project.
 Yihan Cao, Siyu Li, Yixin Liu, Zhiling Yan, Yutong Dai, Philip S. Yu, and Lichao Sun, “A Comprehensive Survey of AI-Generated Content (AIGC): A History of Generative AI from GAN to ChatGPT,” arXiv March 7, 2023, http://arxiv.org/abs/2303.04226.
 Dominique Cardon, Jean-Philippe Cointet, Antoine Mazières, and Liz Carey-Libbrecht, “Neurons Spike Back,” Réseaux 211 No. 5 (2018): 173-220.
 Sébastien Bubeck et al., “Sparks of Artificial General Intelligence: Early Experiments with GPT-4,” arXiv preprint arXiv:2303.12712 (2023).
 “What are Foundation Models?”, IBM, accessed April 28, 2023, https://research.ibm.com/blog/what-are-foundation-models.
 George Lawton, “What Is Generative AI? Everything You Need to Know,” Enterprise AI, accessed April 19, 2023, https://www.techtarget.com/searchenterpriseai/definition/generative-AI.
 Belle Lin, “Generative AI Helping Boost Productivity of Some Software Developers,” Wall Street Journal, February 22, 2023, sec. C Suite, https://www.wsj.com/articles/generative-ai-helping-boost-productivity-of-some-software-developers-731fa5a.
 Beatrice Nolan, “Two Professors Who Say They Caught Students Cheating on Essays with ChatGPT Explain Why AI Plagiarism Can Be Hard to Prove,” Business Insider, accessed April 19, 2023, https://www.businessinsider.com/chatgpt-essays-college-cheating-professors-caught-students-ai-plagiarism-2023-1.
 Tamulur @Tamulur, “ChatGPT-driven NPC Experiment #1,” TikTok, April 16, 2023, https://www.tiktok.com/@tamulur?_t=8bd8qABhs10&_r=1.
 Sophia Khatsenkova, “Criminals are Using AI to Sound like Family Members to Scam People,” euronews, March 25, 2023, https://www.euronews.com/next/2023/03/25/audio-deepfake-scams-criminals-are-using-ai-to-sound-like-family-and-people-are-falling-fo.
 CBS News, “‘Deepfake’ Porn Could Be a Growing Problem as AI Editing Programs Become More Sophisticated,” April 17, 2023, https://www.cbsnews.com/news/deepfake-porn-ai-technology/.
 Noor Al-Sibai, “Google Not Releasing New Video-Generating AI Because of Small Issue With Gore, Porn and Racism,” Futurism, accessed April 19, 2023, https://futurism.com/the-byte/google-ai-imagen-not-releasing.
 Katyanna Quach, “Canadian Privacy Watchdog Probes OpenAI’s ChatGPT,” The Register, April 6, 2023, https://www.theregister.com/2023/04/06/canadas_privacy_chatgpt/.
 Timothy B. Lee, “Stable Diffusion Copyright Lawsuits Could Be a Legal Earthquake for AI,” Ars Technica, April 3, 2023, https://arstechnica.com/tech-policy/2023/04/stable-diffusion-copyright-lawsuits-could-be-a-legal-earthquake-for-ai/.
 Nilay Patel, “AI Drake Just Set an Impossible Legal Trap for Google,” The Verge, April 19, 2023, https://www.theverge.com/2023/4/19/23689879/ai-drake-song-google-youtube-fair-use.
 Emma Charlton, “Chart of the Day: The Internet Has a Language Diversity Problem,” World Economic Forum, December 13, 2018, https://www.weforum.org/agenda/2018/12/chart-of-the-day-the-internet-has-a-language-diversity-problem/.
 Emily M. Bender et al., “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜,” in Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. https://dl.acm.org/doi/10.1145/3442188.3445922.
 Safiya Umoja Noble, “Algorithms of Oppression: How Search Engines Reinforce Racism,” in Algorithms of Oppression (New York University Press, 2018), https://doi.org/10.18574/nyu/9781479833641.001.0001.
 James Vincent, “Twitter Taught Microsoft’s AI Chatbot to Be a Racist Asshole in Less than a Day,” The Verge, March 24, 2016, https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist.
 Mario Kovač, “European Processor Initiative: The Industrial Cornerstone of EuroHPC for Exascale Era,” in Proceedings of the 16th ACM International Conference on Computing Frontiers, 2019: 319.
 Stefan Baack, “Datafication and Empowerment: How the Open Data Movement Re-Articulates Notions of Democracy, Participation, and Journalism,” Big Data & Society 2, no. 2 (December 1, 2015): 2053951715594634, https://doi.org/10.1177/2053951715594634.
 Starr Hoffman, “Open Source vs. Open Access (vs. Free),” Geeky Artist Librarian (blog), June 26, 2014, https://geekyartistlibrarian.wordpress.com/2014/06/26/open-source-vs-open-access-vs-free/.
 Martin Casado, Matt Bornstein, and Guido Appenzeller, “Who Owns the Generative AI Platform?,” Andreessen Horowitz, accessed April 23, 2023, https://a16z.com/2023/01/19/who-owns-the-generative-ai-platform/.
 Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daumé Iii, and Kate Crawford, “Datasheets for Datasets,” Communications of the ACM 64, no. 12 (2021): 86-92.
 Yanan Liu et al., “Energy Consumption and Emission Mitigation Prediction Based on Data Center Traffic and PUE for Global Data Centers,” Global Energy Interconnection 3, no. 3 (2020): 272-82.
 Chris Stokel-Walker, “The Generative AI Race Has a Dirty Secret,” Wired UK, accessed April 20, 2023, https://www.wired.co.uk/article/the-generative-ai-search-race-has-a-dirty-secret.
 The Shift Project, “‘Lean ICT: Towards Digital Sobriety’: Our New Report,” The Shift Project, 2019, https://theshiftproject.org/en/article/lean-ict-our-new-report/.
 Jeff Overton, “Issue Brief: The Growth in Greenhouse Gas Emissions from Commercial Aviation” (2019, Revised 2022), White Papers, EESI, 2022.
 Jennifer Riggins, “Next-Generation Sustainable Data Center Design,” The New Stack (podcast), January 2, 2020, https://thenewstack.io/next-generation-sustainable-data-center-design/.
 “Generative AI Could Raise Global GDP by 7%,” Goldman Sachs, April 18, 2023, https://www.goldmansachs.com/insights/pages/generative-ai-could-raise-global-gdp-by-7-percent.html.
 “Generative AI.”
 Cade Metz, “A.I. Turns Its Artistry to Creating New Human Proteins,” The New York Times, January 9, 2023, sec. Science, https://www.nytimes.com/2023/01/09/science/artificial-intelligence-proteins.html.
 Global Market Insights, “Graphics Processing Unit (GPU) market size by component, service, by deployment model, by application, COVID-19 impact analysis, regional outlook, application potential, competitive market share & forecast, 2023–2032”, 2023, https://www.gminsights.com/industry-analysis/gpu-market.
 Brian Barrett, “A Far-Right Extremist Allegedly Plotted to Blow up Amazon Data Centers,” Wired, April 9, 2021, https://www.wired.com/story/far-right-extremist-allegedly-plotted-blow-up-amazon-data-centers/.
 Barrett, “Far-Right Extremist.”
 Steve McLean, “Toronto, Montreal Data Centre Markets Grow, but Face Challenges,” Renx.ca, April 29, 2022, https://renx.ca/toronto-montreal-data-centre-markets-grow-but-face-challenges.
 Policy Horizons Canada analysis from US BEA’s Fixed Assets Accounts Tables, Section 3, Private fixed assets by industry, for Information and Data Processing Services Equipment, 2021 data (BEA 2023).
 Sébastien Bubeck et al., “Sparks of Artificial General Intelligence: Early Experiments with GPT-4,” arXiv preprint arXiv:2303.12712 (2023): 21.
 Stephanie Palazzolo, “How Anthropic Cofounder Daniela Amodei Plans to Turn Trust and Safety into a Feature, Not a Bug,” Business Insider, April 24, 2023, https://www.businessinsider.com/anthropic-cofounder-daniela-amodei-trust-safety-generative-artificial-intelligence-2023-4.
 Zeynep Tufekci, Twitter and Tear Gas: The Power and Fragility of Networked Protest (Yale University Press, 2017);
 Danah Boyd, It’s Complicated: The Social Lives of Networked Teens (Yale University Press, 2014).
 Thomas Macaulay, “New GPT-4 App Can Be ‘Life-Changing’,” TNW | Deep-Tech, March 17, 2023, https://thenextweb.com/news/be-my-eyes-app-uses-openai-GPT-4-help-visually-impaired.
 S. A. Applin, “Google Is Working on Language-to-Text AR Glasses. It’s a Complicated Idea,” Fast Company, May 18, 2022, https://www.fastcompany.com/90753311/google-is-working-on-language-to-text-ar-glasses-its-a-complicated-idea.