Microsoft temporarily restricted its employees from using ChatGPT, citing “security and data concerns,” as reported by CNBC. The company communicated the rule through an internal website and went so far as to block corporate devices from accessing the AI chatbot. While several tech companies had previously prohibited or discouraged internal use of ChatGPT, Microsoft’s decision raised eyebrows given its status as OpenAI’s largest and most prominent investor.
In January, Microsoft had committed to investing $10 billion in ChatGPT’s developer over the next few years, following prior investments totaling $3 billion in the company. Microsoft’s own AI-powered tools, including Bing’s chatbot, are built on OpenAI’s large language models. However, in its note, Microsoft reportedly clarified that despite its investment in OpenAI and ChatGPT’s built-in safeguards, the chatbot is considered a third-party external service. Employees were advised to exercise caution and to extend the same approach to other external services, such as the AI image generator Midjourney.
Microsoft’s prohibition of ChatGPT was as sudden as it was short-lived. CNBC reported that after its story was published, Microsoft swiftly restored access to the chatbot. The company also reportedly removed language in its advisory that had initially mentioned blocking the chatbot and the design software Canva. A Microsoft spokesperson acknowledged that the ban was a mistake, despite the advisory explicitly naming ChatGPT, and said that access was reinstated as soon as the error was discovered. The spokesperson explained, “We were testing endpoint control systems for LLMs and inadvertently turned them on for all employees,” adding, “As we have said previously, we encourage employees and customers to use services like Bing Chat Enterprise and ChatGPT Enterprise that come with greater levels of privacy and security protections.”