OpenAI Updates Usage Policies, But Are They Shifting Stance on Military Collaboration?
OpenAI, the company behind the chatbot ChatGPT, has recently changed its usage policies in a way that has sparked discussion about its stance on military collaboration. The company has revised the fine print to remove explicit mention of using its AI technology or large language models for military and warfare purposes. Previously, OpenAI’s guidelines specifically prohibited the use of its models for weapons development, military applications, and content promoting self-harm.
OpenAI states that the updates aim to make the policies more readable and provide specific guidance for each service they offer. The new policies, known as Universal Policies, prohibit any use of OpenAI’s services to harm others and prevent the reuse or distribution of results from their models for harmful purposes.
Some perceive the changes as a potential shift in OpenAI’s willingness to collaborate with defense or military organizations, even as the company’s CEO, Sam Altman, along with other experts, has repeatedly voiced concerns about the risks associated with artificial intelligence.
It is worth noting that OpenAI currently offers no product capable of directly causing physical harm. However, as The Intercept has highlighted, its technology could be used for tasks such as writing code or processing procurement orders for items that could, in turn, be used to harm people.
OpenAI spokesperson Niko Felix explained that the intent behind the reworded policies was to create a set of universal principles that are easy to remember and apply, given that the company’s tools are now used globally by everyday users generating their own ChatGPT outputs.
The decision to drop the explicit mention of military use and warfare from the list of prohibited applications raises questions about OpenAI’s potential collaboration with government agencies such as the Department of Defense, which awards substantial contracts to private contractors.
The adjustment to OpenAI’s policy wording arrives as military agencies worldwide grow increasingly interested in leveraging AI capabilities. While the full implications of the changes remain unknown, they have ignited debate about the company’s evolving position on military partnerships.
In conclusion, OpenAI has updated its usage policies, condensing them into a set of Universal Policies. The removal of explicit reference to military uses has prompted speculation about OpenAI’s stance on military collaboration. While concerns about the risks of AI persist, OpenAI maintains that its intention is simply to create universally applicable principles. The impact of these policy changes remains to be seen as military agencies continue to explore the potential of AI technologies.