OpenAI Changes ‘Military and Warfare’ Usage Ban Wording

Until recently, OpenAI’s usage policies page firmly stated that the company prohibits using its technology for “military and warfare” purposes. Quietly, that statement was deleted. On January 10, the company posted an update. While it still prohibits its large language models (LLMs) from being used for any purpose that can cause harm and warns against using its services to “develop or use weapons,” the “military and warfare” language is missing.

OpenAI spokesperson Niko Felix said the company “aimed to create a set of universal principles that are both easy to remember and apply, especially as our tools are now globally used by everyday users who can now also build GPTs.” Felix explained that “a principle like ‘Don’t harm others’ is broad yet easily grasped and relevant in numerous contexts.” In its new wording, he said, OpenAI “specifically cites weapons and injury to others as examples.” Felix declined to clarify whether prohibiting the use of its technology to “harm” others covered all types of military use beyond weapons development.

An OpenAI spokesperson told Engadget, “Our policy does not allow our tools to be used to harm people, develop weapons, for communications surveillance, or to injure others or destroy property. There are, however, national security use cases that align with our mission. For example, we are already working with DARPA to spur the creation of new cybersecurity tools to secure open-source software that critical infrastructure and industry depend on. It was unclear whether these beneficial use cases would have been allowed under ‘military’ in our previous policies. So, the goal with our policy update is to provide clarity and the ability to have these discussions.”
