
Investing in AI Safety through Training and Education

Maria Pepelassis / Jun 12, 2023

Maria Pepelassis is a technology policy analyst based in London.

Image: Alexa Steinbrück / Better Images of AI / CC-BY 4.0

The European Parliament recently adopted a political agreement on the long-awaited Artificial Intelligence Act (AI Act), which now awaits a plenary vote on 14 June. If enacted, the legislation introduces a clear legal framework establishing concrete obligations and restrictions for the development and deployment of AI-powered products and services. While the top-down approach of the AI Act may aid Europe in mitigating the most evident harms these novel technologies pose to citizens, the question remains whether it is sufficient to meet the challenge of preparing 447 million people to responsibly use the AI-powered tools that will drive the EU’s digital and green transformations.

Hurry Up and Regulate

Efforts to introduce horizontal measures on artificial intelligence have been ongoing since the AI Act was first proposed in April 2021; however, many of the most contentious elements of the bill respond to much more recent breakthroughs. Policymakers and regulators across the globe looked on warily as the AI-powered chatbot ChatGPT reached over 100 million monthly users within just two months of its launch. Italy only recently lifted a temporary ban on ChatGPT, imposing requirements on its developer, OpenAI, to introduce meaningful measures to comply with privacy and age-verification rules. In line with such measures, rapporteurs on the AI Act successfully introduced data processing safeguards and limitations on the application of models deemed high-risk, including those used in critical infrastructure. The AI Act also introduces explicit rules for generative AI systems like ChatGPT to bolster data privacy and to align the design and development of such products with EU values, from sustainability to fundamental rights.

Even as lawmakers race to respond to the explosive impact of ChatGPT, the chatbot's potential reach into all manner of human affairs is already clear. In early February, a judge in Colombia faced intense criticism for using ChatGPT in reaching an official decision. In Europe, the Romanian government introduced a generative AI “policy advisor” to aggregate citizen responses to proposed policies and facilitate decision making. Microsoft, which has a significant stake in OpenAI, seems poised to further capitalize on this success by integrating the bot into its Bing search engine. AI technologies are becoming further enmeshed in work and life, and people across industries, regions, and circumstances are increasingly interacting with and relying on artificial intelligence.

All Harms Are Not Equal

Generative AI products pose a novel challenge because the risk or benefit they incur depends almost entirely on the specific contexts (organizational, economic, political, geographic) in which they are applied, and not necessarily on the tasks they perform. The same applications of ChatGPT, and of generative AI models in general, will most likely have distinct practical and moral implications in different economies and across industries. For instance, AI has been envisioned as a much faster means of categorizing data, in physical and cognitive systems alike. Categorizing types of products gives rise to vastly different concerns about responsible use than applying the same systems to process data on employees. While the first use case might demand a focus on predictability to ensure safety, the latter raises concerns about reproducing biases and violating expectations of privacy.

Firms and individuals alike will have to grapple with difficult questions to ensure that they reap the rewards of these technologies while safeguarding fundamental rights in health, education, and democracy, as the AI Act intends. In recognition of this burgeoning reality, the best way to navigate the new technological landscape is to introduce formal training on AI, its uses, and, crucially, its limitations.

Getting Serious About Upskilling

As these technologies enter diverse sectors, their successful integration will depend on individuals' ability to make informed decisions about their use and to critically evaluate their impact. Introducing workplace training akin to the anti-discrimination and safety programmes already in place will help balance the competing demands of innovation and fundamental rights. Developing frameworks for such digital skills training means introducing supplementary measures to prepare citizens who will increasingly use and, by the nature of this novel technology, teach products powered by machine learning. These efforts will go a long way towards ensuring these tools are used responsibly.

Such initiatives might involve:

1. Expand the existing EU Digital Competence Framework for Citizens with updated information on AI systems, how they work, and best practices for their use in the workplace. Such guidelines will empower companies and providers of workplace training to share trustworthy, accurate information that helps workers integrate AI-powered technologies responsibly into their tasks.

2. Incentivize the development and deployment of such workplace training for companies operating in Europe. The goal for EU legislators ought to be the coherent and comprehensive education of workers in basic artificial intelligence skills and impacts. This requires encouraging, or even mandating, firms to introduce training on artificial intelligence that meets workers' needs and is responsive to the types of tasks they will complete with the aid of AI technologies.

3. Introduce certification systems for AI skills and education, including for specific systems such as ChatGPT, similar to the cybersecurity certification scheme outlined in the Cyber Resilience Act. Such EU-backed certification regimes would allow SMEs and major firms alike to easily find trustworthy information and training tools on AI, and would help ensure cohesion in training practices across the Union.

The European Union is in a uniquely advantageous position to introduce AI training, as the Commission has already identified digital skills as a key priority for 2023. While the AI Act outlines a framework governing the production of AI technologies, vigilance over their use must extend to individual users. Frameworks for digital literacy among students and workers can serve as a vehicle for establishing competencies and critical thinking as an innate part of working with AI, and for addressing skills gaps in the European workforce.

An accurate understanding of these systems and how they work will empower Europeans to become competent users of the technologies they may be increasingly expected to operate. With this knowledgeable application of generative AI, Europe can fully establish itself as a global leader in both the innovative application and regulation of artificial intelligence.
