The Choice Between Open and Closed AI

In AI’s relatively brief history, there has been tremendous innovation, adoption, concern over risk, and infighting.

OpenAI initially launched as an open system, making its source code, algorithms, and data available to everyone, which fostered innovation and collaboration and reduced the potential for bias. This transparency allowed people to inspect, modify, and improve the code.

By contrast, competition has driven the more recent trend toward closed systems, which restrict access to source code.

Weighing the two: open AI delivers transparency and spurs broader innovation, while closed AI offers greater control over the quality of that innovation, stronger security, and the chance to develop a competitive edge. Closed development can also speed up release cycles and be easier for end users.

There are also concerns with both.

Open AI raises challenges around intellectual property and ownership rights (and we don’t yet know how to protect those rights) and can increase security risks and vulnerabilities. With everyone following their own open AI rules, there really are no rules.

Closed AI limits innovation to the private resources driving it. User options can be more constrained within a given closed system, and may narrow further over time. There is no transparency: processes and algorithms remain behind the curtain, making it difficult for users to understand exactly how the AI performs.

The debate over the risks and benefits of AI development is now global. Open AI systems are largely unregulated and can enable misuse of the technology’s capabilities; accountability is weak, and privacy breaches are more likely. Expect a balance to emerge between openness and the acceptance of more regulation, to alleviate societal concerns and promote more responsible development.

The Early Open Days

OpenAI was founded in 2015 by Sam Altman, Elon Musk, Ilya Sutskever, and Greg Brockman as a nonprofit organization to advance digital intelligence in ways most likely to benefit all of humanity. The company quickly showed results in deep learning and reinforcement learning, releasing OpenAI Gym, a toolkit for developing and comparing reinforcement learning algorithms, in 2016. In June 2018, the company’s paper, “Improving Language Understanding by Generative Pre-Training,” introduced the underlying architecture for what would eventually become ChatGPT.

Here’s where the company began to transition. In 2019, OpenAI moved from a nonprofit to a “capped-profit” model: it wanted to raise more capital while adhering to its mission. To combine both, it devised a structure in which the nonprofit controlled the direction of a for-profit arm of the business. This resulted in a $1 billion investment from Microsoft. OpenAI Inc. was the sole controlling shareholder of the new for-profit OpenAI Global LLC, which answered to the nonprofit board and retained a fiduciary responsibility to the nonprofit charter.

In 2020, OpenAI released GPT-3, a large language model (LLM) that could generate remarkably human-like text. In 2021, it unveiled Codex and DALL-E, followed in late 2022 by ChatGPT, a conversational interface built on the GPT-3.5 series. ChatGPT reached 100 million users within just two months and soon added a subscription tier. GPT-4, a substantially more capable model, could analyze text, images, and voice, and the innovation continues. And this is where the infighting began.

Out N In…

On Friday, November 17, 2023, OpenAI announced that it would remove co-founder Sam Altman as CEO, citing a lack of candor in his communications with the board and stating that it “no longer has confidence in Altman’s ability to continue leading OpenAI.”

Elon Musk had resigned from the board back in 2018, citing a “potential future conflict of interest” with Tesla’s AI development. Musk was also unhappy with the company’s for-profit turn and its dealings with Microsoft.

Altman’s removal prompted the resignations of President and Co-Founder Greg Brockman and three senior scientists. Brockman tweeted that Ilya Sutskever was a key figure in Altman’s removal. The coup, however, failed: Altman returned as CEO, and a new board was established that included Bret Taylor (chair), Larry Summers, and Adam D’Angelo. Clearly, Altman won.

Where Are We Now?

Look for the next OpenAI model this summer. Business Insider reports that GPT-5 is on track for a mid-2024 release, with much-improved, enterprise-centric features involving AI “agents” that can work within an interconnected ecosystem of OpenAI tools to complete complex tasks.

The open vs. closed argument continues. A suit by Elon Musk against OpenAI prompted the company to surface a startling admission from its early internal correspondence: “As we get closer to building AI, it will make sense to start being less open.”

“The ‘Open’ in OpenAI means that everyone should benefit from the fruits of AI after it’s built, but it’s totally OK to not share the science,” said OpenAI Chief Scientist Ilya Sutskever.

So, not exactly open.

Do You Need Open or Closed?

Like most things in technology, the choice comes down to your objectives. Open AI may be the best path for innovation, transparency, and collaboration. But if your list includes quality control, security, and IP protection, and your goal is proprietary development, closed it is.
