Canada’s AI Strategy Must Address the Technology’s Use in K-12 Education
André Côté, Nancy Naylor / May 14, 2026
Minister of Artificial Intelligence and Digital Innovation and Minister responsible for the Federal Economic Development Agency for Southern Ontario Evan Solomon speaks during an announcement at Les Ateliers Beau Roc in Vars, Ontario, on Monday, May 4, 2026. (Spencer Colby/The Canadian Press via AP)
With Canada’s federal government set to release a new artificial intelligence strategy, all signs point to economy-wide AI adoption as a top priority, from small businesses and large companies to public institutions.
To date, however, K-12 classrooms have been on the front lines of AI adoption. Young Canadians are ‘power users’ of new generative AI tools like ChatGPT, Gemini and Claude, with three-quarters of students reporting AI use for schoolwork.
The problem is, school systems and educators aren’t ready.
K-12 education is only just coming to grips with the tsunami of smartphones and social media apps that flooded classrooms and cafeterias 10 years ago.
This new generation of AI technologies, from chatbots and Google search AI overviews to tools embedded in student learning management systems and edtech products, presents a thornier challenge. Unlike smartphones, which create distraction with little benefit, AI offers both great potential and real risk for education and students.
Readying Canada’s K-12 systems should be a goal of the new strategy.
Advocates for AI use in education point to the potential for improving pedagogy and administration. In the classroom, it can enable personalized instruction, virtual tutoring, simulation-based learning, and more. In schools chronically stretched by larger classes, shrinking budgets, and overburdened teachers, effective AI deployment can be part of the solution.
Yet, the immediate effect of widespread student AI use has been to deeply disrupt longstanding teaching and learning practice. Educators tell us they aren’t equipped to address AI use in everything from homework assignments to end-of-semester exam assessment. Some, concerned AI is compromising essential skills development, are turning back the clock—handing out pencils and paper.
Classroom concerns are compounded by growing worries about the mental health and safety risks of AI for youth, from coercive AI companions and nudifying deepfakes to deepening screen use, isolation, and digital privacy threats.
How to move forward? A comprehensive approach to ‘AI in education’ should include four elements.
First, developing AI literacy and skills.
The focal point of public debate about AI in K-12, AI literacy builds on the digital and media literacy foundations laid over the past generation.
This is essential to equip both students and educators with the fundamentals—what AI is and isn't; how it works at a basic level; the skills to assess AI outputs for accuracy, bias, and limitations; practical skills for effective use; and ethical awareness of AI's societal implications.
Second, reinforcing students’ AI-resilient soft skills.
The Dais' research on the AI exposure of Canadian jobs finds that a cluster of “human” skills (teamwork, communication, interpersonal and leadership abilities) is both commonly demanded by employers across all job types and uniquely prevalent in the most AI-resilient categories of jobs, such as senior managers, lawyers, engineers, and surgeons.
As AI is used to automate more repetitive, non-complex tasks in areas like finance and administration, “social-emotional” skills built on the foundations of reading, writing, and critical thinking will only become more valuable. Reengineering curriculum to protect the development of these skills will be essential.
Third, thoughtful use of AI in education delivery and administration.
This is about the deployment of AI in support of learning through lesson plans, grading, and pedagogical applications, as well as in the “back office” activities of teachers and school administrators.
In the classroom, this raises a core pedagogical question: where does AI helpfully reduce "friction" in learning, and where is that friction necessary? Thoughtfully embedded AI can, for example, ease the friction students face in accessing research resources, whereas unregulated student use can bypass the productive friction of research and critical analysis to get to answers, eroding skill development.
Few provincial governments and school boards have proactively set policies or guidance for responsible AI use. Experience with other education technologies raises concerns about school boards' capacity to oversee vendors that are embedding AI in existing services, like Google Classroom.
Last, ensuring AI governance is in place for youth safety, cybersecurity, and digital privacy.
This challenge extends beyond K-12 education to the broader debate about digital regulation in Canada. The outrage over OpenAI’s failure to report to law enforcement the troubling use of its tool in connection with a horrific school shooting in Tumbler Ridge is only the latest example of the need for online safety regulation that captures AI.
The stakes are too high, and the pace of change too rapid, for incremental adjustments. Meeting them will require coordinated action across governments, educators, technology companies, and civil society, with youth voices centrally involved in shaping solutions.
This should be a central part of Canada's new AI strategy.