
Official ChatGPT App Released – Apple Has Issued an Internal Ban

ChatGPT developer OpenAI has launched its own smartphone app. Meanwhile, Apple fears that AI chatbots could leak secrets, even as it works on the same technology itself.

Daniel Schurter

According to US reporters, Apple has restricted or banned the use of ChatGPT and other generative AI tools in its own company.

As reported by The Wall Street Journal, the company fears that developers may inadvertently reveal confidential information while working on similar technology themselves.

What happened?

According to the report, journalists from the US outlet were able to see an internal memo restricting AI chatbots. In it, Apple instructs its employees to refrain from using not only OpenAI's ChatGPT but also GitHub Copilot, an AI-based programming tool.

GitHub, based in San Francisco, is owned by Microsoft and operates a world-famous software development platform. Microsoft has also invested billions in OpenAI, the company behind ChatGPT, and secured the right to integrate generative AI into its Bing search engine and other products.

When AI chatbots are used, the data is sent back to the developers to enable further improvements. This, however, can inadvertently lead to the leakage of proprietary or confidential information.

An OpenAI spokeswoman pointed the Wall Street Journal to an announcement made last month: at the time, the company introduced a ChatGPT option that lets users deactivate their chat history. According to OpenAI, this prevents unwanted further use of user data (see below).

In ChatGPT's settings, the “Chat History & Training” option can be deactivated. The developer says it will then not use conversations to further train its language models. Screenshot: chat.openai.com

OpenAI announced in March 2023 that it had temporarily taken ChatGPT offline due to a bug that allowed some users to see titles from other users' chat histories.

What do we know about the official ChatGPT app?

The Wall Street Journal’s report on Apple’s ban on third-party AI chatbots was published on Thursday. Separately, on the same day, OpenAI announced the launch of its own ChatGPT app for iPhone.

“We have started rolling out the product in the US and will be expanding to more countries in the coming weeks.”

Source: openai.com

There are already several AI chatbot apps in Apple's App Store, but none of them come from OpenAI; they merely use its technology. Now the official ChatGPT app has joined them. Screenshot: watson

The ChatGPT app is free to use and syncs history across devices. According to the announcement, it also integrates Whisper, OpenAI's open-source speech recognition system, which can be used for spoken input and as a transcription tool.
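As a rough illustration of what Whisper does (this sketch is not part of the announcement; the package installation and the file name "memo.mp3" are placeholder assumptions), the open-source model can be run locally with a few lines of Python:

```python
# Minimal sketch: local transcription with OpenAI's open-source Whisper model.
# Assumes the package is installed (pip install openai-whisper);
# "memo.mp3" is a placeholder audio file, not a real asset.
import whisper

model = whisper.load_model("base")      # small, general-purpose model
result = model.transcribe("memo.mp3")   # detects the language, then transcribes
print(result["text"])                   # the recognized text
```

The app presumably runs Whisper behind the scenes; this snippet only shows the kind of speech-to-text step involved.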

According to the announcement, subscribers to the ChatGPT Plus add-on service (20 francs per month) get exclusive access to the capabilities of OpenAI's GPT-4 language model, early access to new features, and faster response times.

According to the App Store listing, the app requires iOS 16.1, which means it runs on the iPhone 8 and iPhone X from 2017 as well as all later iPhone models.

And where is the Android app?

OpenAI promises Android smartphone users that ChatGPT will also be available on their devices “soon”.

What does Apple intend to do with generative AI?

According to the Wall Street Journal, Apple is working on its own generative AI implementations; this is said to have been corroborated by people familiar with the matter. The American tech blog 9to5Mac had previously reported that Apple is testing the new functionality under the codename “Bobcat”.

The company's AI efforts are led by a senior executive, British computer scientist John Giannandrea, whom Apple poached from Google in 2018. Under Giannandrea, who reports directly to CEO Tim Cook as a senior vice president, Apple has acquired a number of artificial intelligence startups, as the Wall Street Journal notes.

John Giannandrea, Apple's head of artificial intelligence development. Photo: LinkedIn

In Apple’s recent conference call with financial analysts, Cook also expressed some concerns about progress in this area.

“I think it’s very important to approach these things carefully and thoughtfully. And there are a number of issues that need to be addressed, that are being talked about in different places, but the potential is certainly very interesting.”

Apple CEO Tim Cook. Source: wsj.com

Indeed, Apple, Google and Amazon are under pressure with their voice assistants Siri, Google Assistant and Alexa, which are not based on generative AI.

In fact, Apple was the first to bring AI to a mobile device when it introduced the voice assistant Siri on the iPhone 4S in 2011. Competitors followed, but the triumph of voice-controlled computing that many had predicted has so far failed to materialize.

“Virtual assistants have had more than a decade to become indispensable. However, they were hampered by clunky design and miscalculations, which left room for chatbots to emerge.”

New York Times verdict on Siri and Co.

The once-popular voice assistants have already lost the race against AI systems, the US outlet judges, perhaps a little prematurely: AI chatbots offer great potential, but they also bring uncertainty.

The tech giants now face the challenge of retooling legacy technology, considered inadequate from today's perspective, for a new era of generative AI. According to experts, there are further serious problems: the complex software infrastructure behind Apple's Siri, for example, is said to make quick changes all but impossible.

In addition, all providers of AI applications have to reckon with the risk of damage to their image in the event of technical errors and misuse. Legal trouble is also to be expected.

What is the biggest social problem?

It is the unintentional spread of misinformation, as well as the intentional deception and manipulation of people through disinformation.

In order to explain how this happens, it is necessary to understand how generative AI works in principle.

ChatGPT, Google Bard, and many other AI chatbots are computer programs trained to understand and respond to human language.

Every AI chatbot is based on a so-called large language model (LLM); large language models are a sub-field of machine learning.

Machine learning refers to the development of algorithms and models that allow computers to learn from data and make decisions or predictions without being explicitly programmed.
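A toy example (not from the article) can make “learning from data” concrete: a few lines of Python with the scikit-learn library infer a spam rule from labelled examples instead of hand-written if/else logic. The texts and labels here are invented purely for illustration:

```python
# Toy machine learning: the model infers a rule from labelled examples
# instead of being explicitly programmed with one.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

texts = ["win money now", "cheap prize win", "meeting at noon", "see you at lunch"]
labels = ["spam", "spam", "ham", "ham"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)        # turn each text into word counts
model = MultinomialNB().fit(X, labels)     # learn the pattern from the examples

print(model.predict(vectorizer.transform(["win a cheap prize"])))  # -> ['spam']
```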

Essentially, LLMs are trained to understand human language, answer questions, compose text, or even hold a dialogue. To do this, AI chatbots must recognize and learn language patterns, so developers feed them huge amounts of text, which raises questions about copyright.
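To show what “learning language patterns” means in a deliberately simplified way (real LLMs use neural networks and vastly more text; this corpus is invented), a toy bigram model in Python counts which word follows which and then predicts the next word:

```python
# Toy "language model": learn word-pair statistics from a tiny corpus,
# then generate text by always picking the most frequent successor.
from collections import Counter, defaultdict

corpus = "the cat sat on a mat and the cat slept on a mat".split()

successors = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    successors[current][following] += 1       # count observed word pairs

word, output = "the", ["the"]
for _ in range(5):
    word = successors[word].most_common(1)[0][0]  # most likely next word
    output.append(word)

print(" ".join(output))   # -> "the cat sat on a mat"
```

Scaled up from word-pair counts to billions of learned parameters, this kind of next-word prediction is essentially what ChatGPT and Co. do.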

In addition, so-called AI hallucinations can occur when interacting with users. This means that ChatGPT and Co. put misinformation into the world that looks deceptively real (see box).

The World Health Organization warns of the dangers of artificial intelligence in the health sector

In the view of the World Health Organization (WHO), the use of AI in medicine can lead to treatment errors, misinformation, or misuse of data. For all the enthusiasm about potential applications, the United Nations body called for the responsible use of artificial intelligence on Tuesday.

AI applications such as ChatGPT, which access massive amounts of data, can help spread health information or generate diagnoses. The World Health Organization has expressed concern that the usual precautionary measures against new technologies will not be implemented consistently in the case of artificial intelligence.

According to the WHO, large language-based AI models built as neural networks generate seemingly competent responses. The organization warned that “these answers can be completely wrong or contain serious errors, especially when it comes to health.”

According to the World Health Organization, AI technology can also be abused to disguise disinformation as seemingly legitimate content. The WHO also fears that AI systems will access sensitive health data without permission.

According to the World Health Organization, AI should only be used in the medical field when reliable data is available about the benefits of the technology.

Sources

With materials from Keystone-SDA News
