Secret Cyborgs and Their AI Shadows: Navigating the Copilot+ PCs Frontier

Abhishek Gupta / May 29, 2024

At the recent Microsoft Build conference, Microsoft introduced the new Copilot+ PCs, a significant advancement in integrating AI capabilities directly into personal computing hardware. These devices, created in collaboration with major OEMs like Acer, ASUS, Dell, HP, Lenovo, and Samsung, as well as Microsoft's own Surface line, are designed to handle advanced AI processes on-device rather than relying solely on cloud computing. Key features of the Copilot+ PCs include:

1. Recall: This allows users to retrieve past information using natural language prompts, integrating data from apps, documents, and messages while keeping the data localized on the device.

2. Live Captions and Translations: Real-time translation of audio and video content into English from more than 40 languages, enhancing accessibility and collaboration across languages.

3. AI-Enhanced Applications: Integration with applications like DaVinci Resolve, CapCut, and LiquidText, enabling advanced features such as NPU-accelerated visual effects, auto cutout, and on-device AI annotations.

4. Windows Studio Effects: Improved features for video and audio, including automatic lighting adjustments, creative filters, and voice focus, aimed at improving user interaction and presentation.

These PCs also include the new Copilot app, which can be accessed through a dedicated Copilot key on the keyboard. This app provides users with streamlined access to AI functionalities, allowing for more intuitive and natural interactions with their devices.

Embedding advanced AI capabilities directly into personal computing hardware significantly amplifies the potential for Shadow AI and secret cyborgs within organizations. As I wrote previously for Tech Policy Press, Shadow AI refers to deploying and using artificial intelligence systems within organizations without formal approval or oversight. These unsanctioned AI applications operate outside the established IT governance frameworks, leading to potential risks related to data security, compliance, and ethical considerations. Secret cyborgs are employees who use AI tools to augment their work without their employers' explicit knowledge or endorsement. This covert use of AI enhances their productivity and effectiveness but can lead to organizational governance and ethical challenges.

Risk management and IT security functions within organizations, as well as policymakers developing AI-related regulations, should pay attention to this twin storm of deeply embedded AI capabilities and the phenomena of secret cyborgs and Shadow AI, and steer the ecosystem toward more comprehensive and robust governance of AI. Existing regulations and in-development regulatory efforts center largely on model development and capability enhancements, while this change in product and service design ushers in a fundamental shift in the usability and availability of AI capabilities to an even broader set of users, thanks to the native embedding of AI at the operating system (OS) level. Let's take a closer look at the key challenges before turning to potential solutions and processes that organizations and policymakers can adopt to mitigate negative outcomes.

1. Data privacy, compliance, and security risks

The ease of access to powerful AI tools on Copilot+ PCs can lead to a proliferation of AI applications used without formal approval or oversight. Employees may quickly deploy these tools to solve specific problems, bypassing established IT governance frameworks. This unregulated usage of AI can result in several issues. With employees using AI tools autonomously, sensitive data might be processed in ways that violate privacy regulations or internal policies. The lack of oversight can lead to compliance breaches and security vulnerabilities as these AI tools may not be vetted for adherence to industry standards.
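To make the privacy risk concrete, consider the kind of check a data-loss-prevention layer could apply before employee-entered text reaches an unvetted AI tool. The `screen_for_pii` helper and the regex patterns below are hypothetical and deliberately minimal; real DLP systems are far more sophisticated, and this is only a sketch of the idea:

```python
import re

# Hypothetical patterns a data-loss-prevention check might flag before
# text is sent to an unsanctioned AI tool. Illustrative only: real
# systems use far richer detection than these simple regexes.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}

def screen_for_pii(text: str) -> list[str]:
    """Return the names of PII categories detected in `text`."""
    return [name for name, pattern in PII_PATTERNS.items()
            if pattern.search(text)]

findings = screen_for_pii("Contact jane.doe@example.com, SSN 123-45-6789.")
print(findings)  # ['email', 'ssn']
```

A check like this could sit in a browser extension or endpoint agent, warning the employee (or blocking the request) before sensitive data leaves the device.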

2. Governance challenges related to accountability and alignment

Organizations need robust governance frameworks to manage the increased AI activity from Copilot+ PCs. However, traditional governance models might struggle to keep up with AI's rapid and decentralized adoption. Determining who is responsible for the outcomes of AI-driven decisions becomes complex when AI tools are used informally. AI applications developed or used in isolation may not align with the organization's strategic objectives, leading to fragmented or conflicting efforts.

3. Widening skill and productivity gaps

Copilot+ PCs empower employees to become "secret cyborgs," augmenting their work with AI without explicit disclosure. This can lead to employees using AI tools to outperform their peers, creating uneven productivity levels and potentially fostering workplace inequities. As some employees gain advanced AI skills informally, the skill gap within the organization can widen, challenging the uniform development of AI literacy among staff.

4. Ethical and social implications

The covert use of AI tools can raise ethical and social issues within the workplace. A lack of transparency about AI usage can erode trust among employees and between employees and management, undermining organizational cohesion and morale. Informal AI tools might also introduce unchecked biases, leading to unfair outcomes and potential ethical breaches that are harder to detect and mitigate without formal oversight.

Addressing the Combined Challenges

1. Developing Comprehensive AI Governance Policies

Organizations must extend their governance frameworks to explicitly cover the use of AI tools provided by Copilot+ PCs. This includes establishing clear guidelines on how and when AI tools can be used, ensuring these guidelines are communicated to all employees, and implementing regular audits to monitor AI usage and detect Shadow AI activities. Such audits can help identify and mitigate risks early on.

2. Fostering an Inclusive AI Culture

Creating an organizational culture that encourages transparency and responsible AI usage is crucial. This can begin with comprehensive training programs that ensure all employees understand the potential and risks of AI and know how to use these tools responsibly. Embedding ethical considerations into AI training and usage policies, to promote fairness, accountability, and transparency in AI applications, helps reinforce this goal and encourages the responsible, sanctioned use of AI within the organization.

3. Leveraging Technology for Oversight

Utilizing advanced monitoring and compliance tools can help organizations keep track of AI activities. This can be done by deploying tools that automatically check for compliance with data protection and ethical standards when AI tools are used, which is an effective way to scale responsible AI implementations and to address Shadow AI and secret cyborgs that might emerge in many parts of the organization unbeknownst to those responsible for enforcing policies. Organizations can also use AI itself to monitor and manage other AI applications, providing an additional layer of oversight and control.
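As a minimal sketch of what such oversight tooling might do, an IT team could compare application usage telemetry against an allowlist of sanctioned AI tools and surface anything else for review. The app names, log format, and `flag_shadow_ai` helper below are hypothetical, not a real product's API:

```python
# Hypothetical allowlist of AI tools the organization has vetted.
APPROVED_AI_TOOLS = {"copilot", "approved-summarizer"}

def flag_shadow_ai(usage_log: list[dict]) -> list[dict]:
    """Return log entries for apps not on the approved AI tool list."""
    return [entry for entry in usage_log
            if entry["app"].lower() not in APPROVED_AI_TOOLS]

# Illustrative endpoint telemetry; a real log would carry far more detail.
log = [
    {"user": "alice", "app": "Copilot"},
    {"user": "bob", "app": "UnvettedChatbot"},
]

for entry in flag_shadow_ai(log):
    print(f"Review: {entry['user']} used {entry['app']}")
```

The point is not the specific mechanism but the pattern: codifying the sanctioned-tool policy in a machine-checkable form so that Shadow AI usage becomes visible to governance functions rather than staying covert.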

So, what’s next?

The introduction of Copilot+ PCs marks a significant shift in the landscape of personal computing and organizational AI usage. While these devices offer tremendous potential for enhancing productivity and enabling advanced AI applications, they also present novel challenges like Shadow AI and secret cyborgs. As AI capabilities become more deeply embedded into everyday tools, organizations must proactively adapt their governance frameworks to ensure responsible and transparent usage.

Organizations can effectively navigate this new frontier by developing comprehensive AI policies, fostering an inclusive AI culture, and leveraging technology for oversight. It is crucial to balance empowering employees with advanced AI tools and maintaining necessary governance and ethical standards. As we move forward, ongoing dialogue and collaboration between IT departments, management, and employees will be essential to harnessing the benefits of Copilot+ PCs while mitigating the associated risks.

Ultimately, the rise of Shadow AI and secret cyborgs underscores the need for a proactive and adaptive approach to AI governance. By embracing this challenge head-on, organizations can position themselves to thrive in an increasingly AI-driven world, ensuring that the power of AI is wielded responsibly and in alignment with their core values and objectives.


Abhishek Gupta
Abhishek Gupta holds the prestigious BCG Henderson Institute (BHI) Fellowship on Augmented Collective Intelligence (ACI) studying the complementary strengths in hybrid collectives of humans and machines. He serves as the Director for Responsible AI at the Boston Consulting Group (BCG) advising clien...