What Lessons Did Canada Learn Before Creating Its Online Harms Bill?

Chris Tenove, Heidi Tworek / Mar 12, 2024

The Canadian Parliament in Ottawa. Shutterstock

The Canadian government introduced its landmark Online Harms Bill (Bill C-63) in parliament on February 26, 2024. The bill contains the Online Harms Act, along with changes to criminal and human rights laws. The legislation sets out significant changes in platform regulation in Canada, including the creation of a powerful new regulatory agency. If passed, it would bring Canada into the club of countries that have introduced comprehensive regulation of online harms, including the European Union, the United Kingdom, and Australia.

This is not the law that Prime Minister Justin Trudeau and his Liberal government had originally promised. In the 2021 federal election, the Liberal Party declared it would introduce legislation “within its first 100 days” in office. Drawing on Germany’s Network Enforcement Law (NetzDG), it proposed a regime of quick, mandatory takedowns of hate speech and many other forms of harmful speech.

The current legislation took over eight times longer than planned, as experts lambasted the original proposal and the government fundamentally rethought its approach. Many factors contributed to the revised bill, including citizens’ assemblies, an Expert Advisory Group convened in 2022 (on which Heidi Tworek served), and a change in the ministry shepherding the legislation from Canadian Heritage to the Department of Justice.

Canada is now at least a fourth or fifth mover amongst its peers, not a global leader as promised back in 2021. But was the legislation worth the wait? What did Canada learn from other jurisdictions?

Seven Harms, Three Duties, and a New Regulator

The proposed Online Harms Act applies to social media services, including livestreaming and user-uploaded adult content services, but not private messaging services like WhatsApp or gaming platforms. It focuses on seven types of harmful content:

  • content that sexually victimizes a child or revictimizes a survivor
  • intimate content shared without consent (including deepfakes)
  • content used to bully a child
  • content that induces a child to harm themselves
  • content that incites violent extremism or terrorism
  • content that incites violence
  • content that foments hatred

The Canadian bill would impose three duties on platforms, which vary depending on the types of harm anticipated and who is affected.

For all seven types of harm, social media platforms would first face a duty to act responsibly. They must adopt policies to identify and mitigate risks, file a digital safety plan with the regulator, and submit to investigations of their compliance with those plans. This duty comes on top of significant new transparency requirements, which oblige platforms to publish details on how they address harmful content and to share data with the government and with accredited independent researchers and civil society.

This duty and the transparency requirements resemble Europe’s Digital Services Act (DSA), particularly its requirement that very large online platforms and search engines address systemic risks. However, the DSA covers a broader set of harms, including disinformation and content that undermines democratic elections.

Second, the act imposes a duty to protect children, which applies to content intended for or likely to be encountered by children. This is a stronger duty, requiring that platforms implement safety and age-appropriate design measures.

The Act and the government’s promotion of it clearly focus on the protection of children. As elsewhere, concerns about child exploitation have increased in Canada. One high-profile case was that of Amanda Todd, a British Columbia teen who committed suicide in 2012 after being tormented and blackmailed by a Dutch national, who was eventually convicted in 2022.

The emphasis on protecting children aligns the legislation with the UK’s Online Safety Act. Not only does the focus on children justify stronger controls on communication, it may also expand political support. The UK law was created and passed by a Conservative government, and Canada’s Conservative Party has highlighted child safety as one of the few areas of interest for new online regulation.

A third duty, the most stringent, is a duty to make content inaccessible. It applies to two content types: child sexual exploitation and non-consensual sharing of intimate content. Platforms would be required to prevent access to that content upon an order by the regulator, and would likely need to do so within 24 hours.

This power to require removal of content within 24 hours of a regulator’s order is similar to the powers of Australia’s eSafety Commissioner. The explicit involvement of the regulator distinguishes it from Germany’s NetzDG, under which platforms can be penalized for failing to remove manifestly unlawful material even without notice from a regulator.

The Canadian law would create a Digital Safety Commission with the power to enforce all three duties and the transparency requirements, and to impose fines and other measures for noncompliance. A Digital Safety Ombudsperson would also be created to provide a forum for stakeholders and to hear user complaints, a fairly novel part of the legislation. A Digital Safety Office would support the Commission and the Ombudsperson.

While the tiers of regulation may seem complicated, they represent an attempt to move beyond a focus on takedowns and to strike a balance between freedom of expression and safety. Most expert commentary in Canada appears sanguine about this approach, though there is concern about the lack of detail on issues like the exact powers of the Digital Safety Commission and the accreditation process for independent researchers.

The real controversies have arisen around the provisions introduced simultaneously to buttress the Online Harms Act portion of Bill C-63.

Additional legal changes and other controversial elements

Beyond platform governance, the bill proposes significant changes to criminal and human rights law. The Criminal Code would be amended to include a new “hate crime” offense, which would apply more broadly than the current (and very infrequently applied) hate propaganda offenses, and to give authorities the power to seek a “peace bond” to prevent an anticipated hate crime. Penalties for the new hate crime offense could be as high as life imprisonment. These proposals have sparked concerns about overreach, including from some who otherwise support the bill.

Even more controversially, the bill would give the Canadian Human Rights Commission jurisdiction to hear complaints about online hate speech. This power was stripped from the Canadian Human Rights Act in 2013 by Prime Minister Stephen Harper’s Conservative government, which argued that it was being used to stifle freedom of expression. The new version sets a higher bar in its definition of hate speech, but concerns remain that the provision could be overused or even “weaponized” by citizens.

What’s next?

Bill C-63 has a long road to travel, through parliamentary votes, committee hearings, and review by the Senate. Each of these points will provide opportunities to strengthen, defend, and attack the legislation. The Liberals have formed a minority government that requires support from other parties, and they currently poll far behind the Conservatives. It’s unclear when an election might occur, or whether the Liberals would win it.

Unsurprisingly, the Conservative Party, the Liberals’ primary opposition, has strongly criticized the bill. In its first public riposte, the Conservatives stated, “We do not believe that the government should be banning opinions that contradict the Prime Minister’s radical ideology.” Some members have since made more nuanced comments. Whether the Conservatives’ approach to the legislation is purely political or engages with the policies will greatly influence the public debate.

Many other important questions remain.

It’s too soon to predict the final shape and fate of this bill. But Canada’s approach shows how democratic efforts at platform regulation are shifting from take-downs to transparency and responsibility.

Authors

Chris Tenove
Chris Tenove is the deputy director of the University of British Columbia’s Centre for the Study of Democratic Institutions, where he oversees research and policy engagement projects on topics including platform governance, journalist safety, and electoral integrity. He is also a research associate ...
Heidi Tworek
Heidi Tworek is the director of the Centre for the Study of Democratic Institutions as well as a Canada Research Chair, and an associate professor of international history and policy at the University of British Columbia. She is a senior fellow at the Centre for International Governance Innovation a...