US Congress Restricts Use of Microsoft’s Copilot AI Tool Amid Security Concerns

US Congress restricts employees from using Microsoft's Copilot AI tool due to security concerns.

  • House Chief Administrative Officer cites risks of data leaks to unauthorized cloud services as a primary reason for the ban.
  • Microsoft reassures commitment to developing AI tools meeting federal security standards amid the directive.

The US Congress has taken measures to address security concerns surrounding the use of Microsoft's Copilot AI tool by its employees. Following a memo issued by House Chief Administrative Officer Catherine Szpindor, staff members are now prohibited from using Copilot on government-issued devices. The decision stems from warnings by the Office of Cybersecurity that the tool could leak House data to unauthorized cloud services.

While this directive restricts the use of Copilot within government settings, Microsoft has reaffirmed its commitment to developing AI tools that meet stringent federal security standards, including a version of Copilot with enhanced security features planned for later in the year. In the interim, Congress permits only ChatGPT Plus, the paid version of OpenAI's AI chatbot, for research purposes. Notably, strict privacy settings must be enabled, and users are prohibited from pasting unpublished text into the service.

The move by Congress reflects a broader trend among tech companies and governmental bodies to prioritize data security and privacy. Similar concerns have prompted restrictions on generative AI tools like ChatGPT across several industries. As AI technology continues to develop, safeguarding sensitive data remains paramount, prompting proactive measures like these to mitigate potential risks.