Things are boiling behind the scenes at OpenAI

It's all about profit, and safety plays no role, former employees criticize. Meanwhile, OpenAI stands to benefit from the conflict.

At OpenAI, the company behind the AI model ChatGPT, things are boiling behind the scenes. SRF business editor Pascal Lago knows more.

Pascal Lago

Business Editor


Pascal Lago studied economics in Zurich and St. Gallen. He has been working as a business editor at SRF Radio since March 2022.

Why is there so much trouble at ChatGPT inventor OpenAI?

OpenAI has dissolved its safety team. This is the team that is supposed to ensure that the AI works for the benefit of humanity and does not rebel against its creator. Meanwhile, former board members Helen Toner and Tasha McCauley have spoken publicly for the first time, harshly criticizing OpenAI and its CEO, Sam Altman.

What are the allegations against Altman and OpenAI?

At OpenAI, profit comes first and there is little room for safety and ethical concerns, say the former board members. This cannot be independently verified. What is certain is that there were internal differences of opinion at OpenAI. CEO Sam Altman is said to have a manipulative and difficult leadership style, and there were internal power struggles.

What is OpenAI's new safety committee supposed to do?

OpenAI has responded to the criticism and announced a new safety committee, following the dissolution of its AI safety team. The new committee is to oversee the company's AI systems. CEO Sam Altman himself has a seat at the table. Toner and McCauley criticize this kind of self-regulation as a toothless tiger and are now calling for effective government regulation in the USA.

Will Sam Altman and OpenAI benefit from AI regulation?

Image caption: Controversial OpenAI CEO Sam Altman. (Image: imago)

Sam Altman is a member of the Artificial Intelligence committee at the US Department of Homeland Security, where he advises the government and the private sector. Future AI regulation is likely to benefit him, because OpenAI is no longer a small startup. Thanks to billions of dollars in investor money, above all from lead investor Microsoft, OpenAI can afford the expensive lawyers needed to comply with regulatory requirements more easily than its smaller competitors can.

Will the criticism hurt OpenAI and Altman?

On the contrary. The discussion currently revolves almost exclusively around superintelligence – the idea that artificial intelligence will become smarter than humans and thus outperform us. Whether such a superintelligence will ever emerge is disputed among experts. But the more it is talked about, the less it is questioned. This belief in superintelligence helps Altman and OpenAI raise money from investors, and it distracts from the question of whether OpenAI can actually be profitable in the long term.

Artificial superintelligence – a distraction from more real risks?


The discussion of superintelligence can distract from risks that artificial intelligence already poses today – and against which something can already be done. Deepfakes, for example: ChatGPT and other AI tools are being used, among other things, to spread Russian propaganda and influence elections. In February, on the sidelines of the Munich Security Conference, OpenAI and other tech companies promised to work together against this new wave of fake news, for example by labeling photos and videos produced by artificial intelligence. So far, however, this promise has not been fulfilled.