Microsoft CEO Satya Nadella (right) speaks as OpenAI CEO Sam Altman (left) looks on during the OpenAI DevDay event in San Francisco on November 6, 2023.
Justin Sullivan | Getty Images
Microsoft has stepped down from its observer seat on the OpenAI board. Apple, which had been expected to take a similar observer position, will no longer seek one, according to the Financial Times. But whatever clarity this week’s changes were meant to provide, many of the same concerns remain.
Regulators haven’t backed down, and for those focused on ethics in AI, the same concerns — about prioritizing profits over safety — persist. Amba Kak, co-executive director of the nonprofit AI Now Institute, called the announcement a “sham” designed to obscure the ties between big tech companies and emerging AI players.
“The timing of this move is significant, and should be seen as a direct response to global regulatory scrutiny of these unconventional relationships,” Kak wrote in a letter to CNBC.
The close relationship between Microsoft and OpenAI and the companies’ outsized control over the AI industry will continue to be scrutinized by the Federal Trade Commission, according to a person familiar with the matter, who asked not to be identified due to confidentiality issues.
Meanwhile, a large number of AI developers and researchers who are concerned about the safety and ethics of the increasingly profit-driven AI industry remain unmoved. Current and former OpenAI employees published an open letter on June 4, describing their concerns about the rapid progress being made in AI, despite a lack of oversight and whistleblower protections.
“AI companies have strong financial incentives to avoid effective oversight, and we do not believe that specially designed corporate governance structures are sufficient to change this,” the staff wrote in the letter. They added that AI companies “currently have only weak obligations to share some of this information with governments, and none with civil society,” and cannot be “relied upon to share it voluntarily.”
Days after the letter was published, a source familiar with the matter confirmed to CNBC that the Federal Trade Commission and the Justice Department were close to opening antitrust investigations into OpenAI, Microsoft and Nvidia, with a focus on the companies’ conduct.
FTC Chair Lina Khan described her agency’s move as “a market investigation into the investments and partnerships being formed between AI developers and major cloud computing providers.” Kak told CNBC that regulators’ efforts are helping to get answers and provide transparency.
Microsoft made no mention of regulators in its explanation for giving up its observer seat. The software giant said it could now step back because it was satisfied with the makeup of the startup’s board, which has been revamped in the eight months since a boardroom revolt led to the brief ouster of CEO Sam Altman and threatened Microsoft’s massive investment in OpenAI.
Microsoft initially gained a non-voting seat on OpenAI’s board in November, following the Altman saga. The new board includes Paul Nakasone, the former director of the National Security Agency, along with Quora CEO Adam D’Angelo, former Treasury Secretary Larry Summers, former Salesforce co-CEO Bret Taylor, and Altman. There are also new additions from March: Dr. Sue Desmond-Hellmann, former CEO of the Bill & Melinda Gates Foundation; Nicole Seligman, former executive vice president of Sony; and Fidji Simo, CEO of Instacart.
Following Microsoft’s announcement this week, OpenAI told Axios that the company is changing its approach to dealing with “strategic partners.” None of the three companies — Microsoft, OpenAI or Apple — provided comment to CNBC for this article.
João Sedoc, an associate professor of technology at New York University’s Stern School of Business, said Microsoft’s latest move was positive for the AI industry because of the company’s perceived influence at OpenAI. He said it was “critical” for Microsoft to step in and “help stabilize” OpenAI after the abrupt dismissal that was quickly followed by Altman’s reinstatement.
“Microsoft’s presence there represents a combination of conflict of interest and competitive advantage,” Sedoc said, adding that “Microsoft and OpenAI have a strange relationship of being both synergistic and competitive.”
“A huge amount of information”
In addition to Microsoft’s nearly $13 billion investment in OpenAI, the two companies work closely together to deliver generative AI products and services. OpenAI’s popular chatbot, ChatGPT, is built on large language models that run on Microsoft’s Azure cloud.
But the two companies aren’t exactly aligned. Earlier this year, Microsoft paid $650 million to license Inflection AI’s technology and hire key talent from the company, most notably CEO Mustafa Suleyman, who previously co-founded DeepMind, the AI startup that Google acquired in 2014.
Ben Miller, CEO of investment platform Fundrise, said after the Inflection deal that Microsoft was “now on its way to becoming a real competitor to OpenAI,” meaning it shouldn’t be on the startup’s board.
“Having a voice at the table has a huge impact on the company and gives Microsoft a tremendous amount of information about business activities,” Miller added.
Mustafa Suleyman, co-founder of Inflection.ai and DeepMind, speaks on CNBC's Squawk Box at the World Economic Forum Annual Meeting in Davos, Switzerland on January 17, 2024.
Adam Gallese | CNBC
The separation sets the right precedent, Sedoc told CNBC, as big tech companies become increasingly big investors in AI. He cited AI startups like Anthropic, which is backed by Amazon, and Hugging Face, whose investors include Google, Amazon, Nvidia and others.
“They may be thinking about the after-effects of what this might mean for the overall movement of the industry,” Sedoc said.
However, one area where Sedoc said Microsoft’s exit could pose a problem is AI safety.
“I think Microsoft has a lot of experience and a longer history of thinking about this in many different areas that OpenAI doesn’t have,” Sedoc said. “From that perspective, I think there would be some downsides to not having them at the table.”
AI safety practices were at the heart of a dispute between Altman and OpenAI's previous board, and continue to cause divisions at the company.
In May, OpenAI disbanded its team focused on the long-term risks of AI, just a year after the company announced the group. The news broke days after the team’s leaders, OpenAI co-founder Ilya Sutskever and Jan Leike, announced their departures from the company. In a post on X, Leike wrote that “OpenAI’s safety culture and processes have taken a backseat to shiny products” and that he was “worried” that the company was not on the right track.
“Building machines that are smarter than humans is an inherently dangerous endeavor,” Leike wrote. “OpenAI is shouldering an enormous responsibility on behalf of all of humanity.”
Last month, OpenAI announced the appointment of former National Security Agency director Nakasone to its board of directors, saying he would join a newly created safety and security committee. OpenAI said at the time that the group would spend 90 days assessing the company’s processes and safeguards before making recommendations to the board and eventually updating the public.
—CNBC's Ryan Browne, Matt Clinch and Steve Kovach contributed to this report.
WATCH: OpenAI hacked in April 2023 without public disclosure