Artificial intelligence experts say they’re watching Apple’s new partnership with the ChatGPT creator closely.
AUSTIN, Texas — After Apple announced a huge partnership with OpenAI, the creator of ChatGPT, the big question is: are there security concerns?
Apple said its upcoming iOS 18 will have savvy new features like creating personalized emojis and reading your emails to help you manage your schedule.
Former OpenAI board member Elon Musk wrote on social media, “Apple has no clue what’s actually going on once they hand your data over to OpenAI. They’re selling you down the river.”
Musk also threatened to ban Apple devices at his companies if this partnership moves forward, despite dropping his lawsuit against OpenAI on Tuesday.
“I don’t find anything super concerning in their press release, but they themselves have enough of a problematic track record here that some of their choices give me pause,” EFF-Austin President Kevin Welch said.
“I really don’t, honestly,” Joshua Pellicer, owner of Real-Time AI said. “I know that there’s a lot of potential for danger, but it’s kind of like two giant countries that have nuclear capabilities. They have this nuclear deterrent with each other.”
Both artificial intelligence experts say they’re watching Apple’s new partnership with ChatGPT, an artificial intelligence chatbot, closely. The built-in ChatGPT will help Siri answer more questions. Apple said it will first ask for a user’s permission before giving ChatGPT access.
RELATED: Apple leaps into AI with an array of upcoming iPhone features and a ChatGPT deal to smarten up Siri
“All of this AI stuff is moving to you have an assistant that’s an AI-powered super genius,” Pellicer said. “This is where things get very strange.”
Pellicer said the AI assistant could have the ability to take screenshots of the information on your device to get to know you better. But let’s say you’re a medical professional with patient information on your Apple device. That could cause issues.
“Let’s say, without your awareness, you accidentally send it to OpenAI to run a request,” Pellicer said. “You break a whole series of different laws now, and you can be held accountable for that.”
ChatGPT is trained on data from the internet, which can introduce bias. Other issues include plagiarism claims or simply making up information.
During the announcement, Apple said data from these AI-powered features will mostly stay on your device. If it needs to go to the cloud, it will be on Apple’s secure server and not stored by OpenAI.
Both experts said it won’t be perfect, but if Apple secures user data as promised and is transparent about what you can opt out of, it’ll be a great tool to have.
“It’s more a step in the right direction of where I would like to see this tech going,” Welch said.
“We’re going to see some new competitors pop up out of nowhere, and it’s going to be a fascinating next … six, eight months,” Pellicer added.
The new iOS features drop this fall.