
Unmasking Secret Cyborgs and Illuminating Their AI Shadows

Abhishek Gupta / Jun 7, 2024

Rick Payne and team / Better Images of AI / Ai is... Banner / CC-BY 4.0

Two weeks ago, Microsoft announced a new line of computers designed to “leverage powerful processors and multiple state-of-the-art AI models” and “to unlock a new set of experiences you can run locally, directly on the device.” As part of the Copilot+ PCs release, the Recall feature – which uses AI to collect a complete history of a user’s activity – raised a lot of eyebrows, and some excitement from the AI community, given the deep context the feature taps into at the operating system level, integrating data from apps, documents, and messages to provide better search and assistant capabilities.

A privacy nightmare in the making

Microsoft promises to keep the data localized and private. Yet privacy professionals and others in the Responsible AI ecosystem are sounding the alarm, warning of the looming privacy nightmare that such a panopticon feature poses to users who don’t fully understand how it works or how to configure it to meet their needs. Support forums are filling up with comments and posts asking how to disable the feature and expressing deep concern that Microsoft is ushering in a world reminiscent of The Entire History of You from Black Mirror.

Regulators are paying attention, with the Information Commissioner’s Office (ICO) in the UK launching an investigation. The ICO is seeking to understand the safeguards Microsoft has implemented to protect user privacy, emphasizing the need for transparency and robust data protection measures before such features are widely deployed. The Irish Data Protection Commission has raised similar concerns, as has the Office of the Australian Information Commissioner (OAIC); both want to know more about how the feature will work and what controls will be offered to users.

Microsoft is responding to concerns by highlighting user control and agency

Microsoft has responded to the investigations and concerns by emphasizing the feature's design and user control mechanisms. Snapshots are stored locally and encrypted, and they are processed by on-device AI models rather than being sent back to Microsoft servers. Users are given granular control: they can delete specific snapshots, exclude particular applications and websites from being recorded, or turn the feature off entirely. The feature is also opt-in; users are informed about it when setting up the device and can disable it or configure it according to their preferences. However, no content moderation is performed on what is captured, which leaves the potential for sensitive private information, such as financial details, to get swept up as Recall operates. Microsoft is also engaging with regulators and other advocates in the ecosystem to offer more transparency around how these features will work and how users can keep control over their data and privacy.

However, within organizations, the rise of Shadow AI (the use of AI tools without organizational approval or oversight) and secret cyborgs (employees quietly using AI without disclosing it) poses significant ethical and governance challenges that might not be adequately headed off by these protections from the AI developer. Policymakers and governance professionals must urgently establish robust transparency and accountability mechanisms to ensure responsible AI usage within their organizations. In particular, a proactive approach grounded in the principles of transparency and accountability can help avoid some of the worst outcomes that arise when the twin storms of secret cyborgs and their AI shadows knock on the doors of an organization.

Enhancing Transparency and Accountability Mechanisms

To address the challenges of Shadow AI and secret cyborgs, policymakers and governance professionals should focus on creating frameworks that require transparency and accountability in AI usage. This involves mandating that staff disclose the AI tools and applications they employ and establishing clear accountability mechanisms for work output produced with their assistance.

AI usage disclosure mandates can help foster a culture of transparency and accountability within an organization. At a minimum, they should cover the following:

  1. Staff regularly report how and when they use AI systems to assist with their work duties and attest that they follow internal guidelines and policies to ensure responsible usage.
  2. The IT governance and risk functions should establish a registry of AI systems across the organization (as best as possible, given Shadow AI) to track and manage all AI systems and tools, ensuring visibility and control (see the sketch after this list).
  3. Regular auditing via internal and external providers can add further confidence, verifying the accuracy and completeness of the AI usage disclosures and the thoroughness of the registry.
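
To make the first two items concrete, here is a minimal sketch of what a disclosure record and an AI system registry might look like in practice. The classes, field names, and the unregistered-tool check are illustrative assumptions, not a prescribed schema or any particular vendor's API.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class AISystemEntry:
    system_name: str             # e.g., "meeting summarizer", "coding assistant"
    vendor: str
    business_owner: str          # named person accountable for the system
    data_categories: list[str]   # kinds of data the tool can touch
    approved: bool               # sanctioned by IT governance and risk functions


@dataclass
class UsageDisclosure:
    employee_id: str
    system_name: str
    task_description: str        # how the AI assisted the work product
    followed_guidelines: bool    # attestation against internal policies
    disclosed_on: date


@dataclass
class AIRegistry:
    systems: dict[str, AISystemEntry] = field(default_factory=dict)
    disclosures: list[UsageDisclosure] = field(default_factory=list)

    def register(self, entry: AISystemEntry) -> None:
        self.systems[entry.system_name] = entry

    def record_disclosure(self, disclosure: UsageDisclosure) -> None:
        # Disclosures that reference unknown tools are one way Shadow AI surfaces.
        if disclosure.system_name not in self.systems:
            print(f"Unregistered tool disclosed: {disclosure.system_name}")
        self.disclosures.append(disclosure)
```

Internal or external auditors can then compare the disclosure log against the registry to verify accuracy and completeness, as the third item suggests.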

Bolstering this with appropriate accountability structures, grounded in good responsible AI practice, is equally essential:

  1. Assign specific roles and responsibilities for AI governance, such as AI ethics officers or committees, to oversee AI implementations and outcomes.
  2. Document decision-making processes and clearly designate who will be held accountable for them; this prevents the diffusion of responsibility that is common in such governance efforts.
  3. Develop and track performance metrics related to AI governance, including the effectiveness of oversight mechanisms and the alignment of AI usage with organizational goals (a minimal sketch follows this list).
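
As a sketch of how the decision log and metrics in the last two items might be operationalized, the snippet below builds on the registry structures sketched above; the names and example figures are hypothetical, not a recommended KPI set.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class GovernanceDecision:
    decision: str           # e.g., "approved a Recall-style tool for one team"
    accountable_owner: str  # a named person, not a committee, to avoid diffusion
    decided_on: date
    rationale: str


def disclosure_coverage(disclosed_outputs: int, ai_assisted_outputs: int) -> float:
    """Share of AI-assisted work products with a disclosure on file."""
    if ai_assisted_outputs == 0:
        return 1.0
    return disclosed_outputs / ai_assisted_outputs


# Hypothetical example: 42 of 60 AI-assisted deliverables were disclosed,
# giving 0.7 coverage -- a signal that oversight is not yet fully effective.
print(disclosure_coverage(42, 60))
```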

Leaning into the insights that impact assessments can offer is another powerful mechanism to boost the efficacy of the approaches above:

  1. Conduct thorough impact assessments before deploying AI tools, evaluating potential ethical, social, and economic consequences. Examples such as those available from the Canadian Federal Government and the US CIO Council provide guidance on how to get started, and there are also many domain-specific templates (a minimal sketch of such a record follows this list).
  2. Implement mechanisms for ongoing monitoring and assessment of AI systems' impacts, adjusting policies and practices as necessary based on findings.
  3. Publish impact assessment summaries to promote transparency and accountability to stakeholders, including employees, customers, and regulators. This ensures they are informed ahead of time about systems being primed for release and the potential impacts those systems might have on the ecosystem.
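
Below is a minimal sketch of a questionnaire-style assessment record, loosely modeled on public templates such as Canada's Algorithmic Impact Assessment; the questions, scoring, and the example system are illustrative assumptions rather than any official template.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class ImpactAssessment:
    system_name: str
    assessed_on: date
    answers: dict[str, int] = field(default_factory=dict)  # question -> risk score 0-3

    def risk_score(self) -> int:
        return sum(self.answers.values())

    def summary(self) -> str:
        """Short, publishable summary for employees, customers, and regulators."""
        return (f"{self.system_name}: assessed {self.assessed_on}, "
                f"aggregate risk score {self.risk_score()}")


# Hypothetical assessment of a Recall-style activity capture feature.
assessment = ImpactAssessment(
    system_name="Recall-style activity capture",
    assessed_on=date(2024, 6, 1),
    answers={
        "Processes sensitive personal data?": 3,
        "Users can opt out or exclude content?": 1,
        "Outputs reviewed before downstream use?": 2,
    },
)
print(assessment.summary())
```

Re-running the assessment on a schedule, or whenever the feature changes, supports the ongoing monitoring described in the second item, and the generated summary is the kind of artifact that would be published ahead of release.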

The advent of Microsoft's Recall feature in Copilot+ PCs underscores the double-edged nature of technological advancements, presenting both opportunities for enhanced user assistance and significant privacy concerns. As AI capabilities develop, integrating such features into operating systems demands rigorous scrutiny from privacy professionals and regulators to safeguard user data and ensure ethical deployment.

By adopting a proactive stance on transparency, accountability, and impact assessment, organizations can navigate the complexities of integrating advanced AI features while upholding responsible AI practices. This approach protects user privacy and fosters a culture of ethical AI usage, aligning technological innovation with societal values. And it may help companies stay on the right side of regulators, who are rightly concerned.

Authors

Abhishek Gupta
Abhishek Gupta holds the prestigious BCG Henderson Institute (BHI) Fellowship on Augmented Collective Intelligence (ACI) studying the complementary strengths in hybrid collectives of humans and machines. He serves as the Director for Responsible AI at the Boston Consulting Group (BCG) advising clien...
