A Welcome Voice for Canada on the Future of AI

Duncan Cass-Beggs / Apr 29, 2024

By establishing an AI Safety Institute, Canada is joining an urgent global effort to ensure safe, secure, and trustworthy AI.

In its 2024 budget, the Government of Canada announced investments of $2.4 billion CAD for artificial intelligence (AI), including $2 billion for AI computing capacity, and $50 million for a new AI Safety Institute of Canada. This commitment brings Canada into line with key allies in addressing one of the most pressing global challenges of our time: how to mitigate the severe public safety risks posed by advanced AI. For these investments to be worthwhile, however, Canada will need to empower its new institute with a clear mandate and agile structure, and make a commensurate government commitment to turn research into policy action.

Even as such commitments are being made, there is evidence we are living in a Don’t Look Up world. Leading AI scientists, including Canada’s Yoshua Bengio and Geoffrey Hinton, warn that advanced AI, for all its potential benefits, could pose grave risks to humanity if not managed carefully. AI systems are becoming rapidly more powerful and could soon be misused to cause widespread harm, or even act autonomously in ways that humans can’t control.

What explains the disconnect between the potential urgency of AI risks and the relative lack of government focus? Two crucial knowledge gaps stand out: first, about the nature of the risks, and second, about how best to take action. The new institute and its fledgling counterparts in the US, UK, and Japan are intended to help fill these gaps.

How likely and severe are potential safety risks from AI? How soon could AI reach and then vastly exceed human-level proficiency across a broad range of cognitive capabilities? How likely is it that technical safeguards can ensure such systems are not misused to cause catastrophic harm and do not slip beyond humanity's control? The forthcoming International Scientific Report on Advanced AI Safety should answer some of these questions, but much work remains. The new institute could unite leading Canadian AI scientists to participate in this effort.

Governments also require answers on what technical and governance solutions may be needed to mitigate AI risk. Technical research by the new institute could help determine potential capabilities and risks of frontier AI and aid in the design of effective mitigations. More ambitiously, the institute could develop new approaches for reliably safe AI. Canada has contributed to a revolution in AI paradigms before and could do so again.

Safety also requires solutions to complex AI-related national and international governance challenges. What governance mechanisms would mitigate the risk of a single company or country using AI to achieve dominance at the expense of others? What would be needed to prevent rogue actors anywhere from creating AI systems that pose extreme risks to humanity? The new institute can mobilize the ingenuity of Canadians across sectors and disciplines to address these issues.

How should the new institute be designed? Potential principles include:

  1. Scientific independence, neutrality, and rigor to ensure trust and credibility across society.
  2. Close channels of communication to learn from and advise relevant government experts, including in sensitive areas of national security.
  3. Leveraging Canada’s leading assets, such as its AI institutes (Amii, Mila, and Vector) and policy and governance think tanks.
  4. Strong, dynamic, mission-led leadership.
  5. A highly focused research agenda targeted to the most important questions where Canadian expertise can contribute.
  6. Close collaboration with AI safety institutes in other countries to ensure complementary efforts.
  7. Privileged access to Canadian computing capacity to support technical research.
  8. Agility and flexibility in administration to attract and retain the right people.

To be effective and relevant, the new institute should be accompanied by the creation of a central counterpart body within government that can absorb the institute’s research and convert it into policy action. This counterpart should integrate a full range of legitimate perspectives and areas of action including public safety and national security as well as innovation, privacy, competition, global affairs and international cooperation.

Both the new institute and its counterpart in government should be established as soon as possible. Moving quickly would support Canada’s preparations for leadership of the G7 in 2025.

Canada is a proven AI leader. As the world sits on the cusp of being able to create technologies that surpass humans in all cognitive capabilities, Canada has an opportunity to play a globally crucial role in building safe, secure, and trustworthy AI. We must quickly rise to the challenge.


Duncan Cass-Beggs
Duncan Cass-Beggs is Executive Director of the Global AI Risks Initiative at the Centre for International Governance Innovation (CIGI).