Apple and OpenAI are working together to make your iPhone much smarter.
Bloomberg’s Mark Gurman reports that Apple has just made a deal with OpenAI to bring AI technology to iPhones through the upcoming iOS 18 update.
This isn’t just another update; it’s a huge deal that could make OpenAI billions of dollars. More importantly, it means your phone will soon understand you better than ever before.
Siri’s Big Brain Upgrade Thanks to OpenAI
By using OpenAI’s advanced language models, the same ones that power ChatGPT, Apple is quietly reworking how Siri thinks. The new prototype, reportedly being tested in Cupertino’s labs, shows a Siri that not only follows commands but also genuinely understands context, implied meaning, and the complex ways people talk.
Imagine asking your iPhone about complex topics like geopolitics or quantum physics and having a conversation that feels truly enlightening. Or think about turning to Siri during personal crises and getting not just information but also empathy and thoughtful advice.
This change from simple question-answering to deep understanding isn’t just a feature upgrade; it’s a big shift that makes your smartphone feel almost like a thinking companion, blurring the lines between digital tool and trusted advisor.
Apple Gives Credit Where It’s Due
In a time when it’s hard to tell where digital content comes from, Apple is setting a new standard for transparency. When the improved Siri uses OpenAI’s neural networks to answer tough questions, it will clearly label the answer as coming from OpenAI, just as it currently credits results drawn from Google’s search engine.
This honest approach is more than just ethical; it helps users understand how modern AI systems work together behind the scenes. By doing this, Apple is not only deploying advanced technology but also building AI literacy, leaving people better prepared to navigate our increasingly digital world.
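The attribution behavior described above could, in principle, be modeled as a response object that carries provenance metadata alongside the answer. This is a purely illustrative sketch; the class and field names here are hypothetical and do not reflect any real Apple or OpenAI API:

```python
from dataclasses import dataclass

@dataclass
class AssistantResponse:
    """Hypothetical container pairing an answer with its provenance."""
    text: str
    source: str  # e.g. "OpenAI", "Google", or "on-device"

    def display(self) -> str:
        # Surface the provider alongside the answer, as the article
        # says Siri would do for OpenAI-backed results.
        return f"{self.text}\n(Source: {self.source})"

reply = AssistantResponse(
    text="Quantum entanglement links the states of two particles.",
    source="OpenAI",
)
print(reply.display())
```

Tagging each response with its origin at the data-model level, rather than as an afterthought in the UI, is what would make the attribution consistent everywhere the answer appears.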
Inside Apple: Not Everyone Loves Chatbots
However, inside Apple’s glass-walled headquarters, the AI-driven future isn’t coming without some debate. John Giannandrea, the company’s machine learning chief, has been a vocal skeptic. He reportedly told staff, “The last thing people needed was another AI chatbot,” showing his weariness with the many new text-based AI assistants.
Yet, Giannandrea’s skepticism doesn’t mean he is against technology. He believes that large language models can do much more than just copy human conversation. Apple’s actions suggest they are committed to going beyond the typical chatbot, looking at how AI can improve user experience in more subtle, predictive, and deeply integrated ways within the operating system.
Your iPhone, Your Choice of AI Brain
In a smart move, Apple isn’t sticking with just one AI system. While its talks with OpenAI have made news, the company is also in discussions with Google. This suggests a future where your iPhone’s AI isn’t just one-size-fits-all.
Just like you can choose your default search engine or web browser, Apple may let you pick your preferred AI helper. You could choose OpenAI for its conversational skills or Google for its knowledge. Some think Apple might even allow other AI options like Anthropic or DeepMind, creating an “AI App Store” where different AIs compete.
This approach not only gives users more choices but also encourages competition among AIs. This competition could lead to rapid improvements, making AI systems more capable, secure, and aligned with human values.
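The “choose your AI brain” idea resembles a pluggable provider registry, much like selecting a default browser or search engine. Below is a minimal, entirely hypothetical sketch of that pattern; none of these names correspond to real Apple, OpenAI, or Google interfaces:

```python
from typing import Callable, Dict

# Registry mapping provider names to answer functions.
# In a real system these would call out to different AI backends.
providers: Dict[str, Callable[[str], str]] = {
    "OpenAI": lambda q: f"[OpenAI] answer to: {q}",
    "Google": lambda q: f"[Google] answer to: {q}",
}

def ask(question: str, preferred: str = "OpenAI") -> str:
    """Route a question to the user's preferred provider,
    falling back to the first registered one if unavailable."""
    handler = providers.get(preferred) or next(iter(providers.values()))
    return handler(question)

print(ask("What is quantum physics?", preferred="Google"))
```

The appeal of this design is that adding a new competitor, say Anthropic, means registering one more entry rather than rewriting the assistant, which is exactly what an “AI App Store” would require.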
Microsoft Worries: Will OpenAI’s Apple Deal Hurt Us?
Apple’s bold moves are shaking up Silicon Valley’s power structure. At Microsoft’s Redmond campus, CEO Satya Nadella is growing concerned about the Apple-OpenAI partnership. Microsoft has invested $13 billion in OpenAI, seeing itself as the main platform for bringing OpenAI’s innovations to the public.
This investment gave Microsoft more than just equity; it secured priority access to OpenAI’s best models, aimed at revolutionizing products from Windows to Xbox. The idea of these same AI technologies being used in Apple’s ecosystem — possibly making Siri better than Microsoft’s Cortana or boosting Final Cut Pro to compete with Adobe’s AI tools — has raised alarms.
Nadella’s recent meeting with Sam Altman, OpenAI’s CEO, highlights the high-stakes nature of this tech industry power struggle.
Trouble at OpenAI: Can Its Leader Handle the Heat?
For Sam Altman, the Apple deal is a crucial win in an internal struggle for control at OpenAI. Once seen as a visionary leading the way to friendly superintelligence, Altman now faces serious challenges from former colleagues.
Helen Toner, a former director who helped shape OpenAI’s ethics, accused Altman of lying to the board and hiding important information about the company’s governance.
Even more damaging is the departure of Jan Leike, a machine learning expert who co-led OpenAI’s safety team. Leike criticized OpenAI for prioritizing flashy products over a strong safety culture, striking at the core of the company’s original mission.
These conflicts are more than just personality clashes; they show a deep ideological divide. This split became very public last November during an attempt to remove Altman from his position — a coup that failed but left OpenAI divided.
After the coup, Microsoft, feeling the pressure of its huge investment, began pushing OpenAI to focus more on market-driven goals. This has intensified the conflict between OpenAI’s original members, who are dedicated to a cautious, safety-first approach, and newer members eager to turn research into global products.
OpenAI’s Big Question: Make Money or Do Good?
At the center of OpenAI’s internal conflict lies a bigger question: Can a company aiming for artificial general intelligence (AGI) — software that matches or surpasses human thinking — exist within the profit-driven world of modern capitalism?
OpenAI’s current setup, a profit-making company overseen by a nonprofit, was a bold attempt to balance these conflicting goals. The idea was to use private investment to drive innovation while having the nonprofit ensure ethical decisions. But now, with tensions rising, Altman and his supporters are considering a big change.
One option is to become a more typical, profit-focused company. This would give OpenAI more freedom to develop products quickly, but it might move the company away from its original mission. Another idea, gaining support from those skeptical of Wall Street’s focus on profit, is to become a Benefit Corporation or “B-Corp.”
This classification, part of a movement toward “stakeholder capitalism,” lets companies have goals beyond just making money. A B-Corp OpenAI could prioritize things like “ensuring AI aligns with human values” or “prioritizing global welfare over profits.” This legal status would protect leaders from lawsuits by shareholders who prioritize profit over ethics.
This change wouldn’t just be about paperwork; it would turn OpenAI into an experiment to see if groundbreaking technology can fit into a broader social responsibility framework.
AI in Your Pocket: Amazing Tech, Big Questions
As Apple, OpenAI, Microsoft, and Google navigate this high-stakes chess game, their decisions have consequences that reach far beyond financial reports or market dominance. The integration of AGI-class systems into smartphones — devices that shape our communications, news consumption, and worldviews — marks a significant shift in how humans interact with machines.
Soon, the AI on your phone won’t just set reminders or improve your commute; it may also analyze your emotions, influence your political views, or even engage you in deep conversations about life’s meaning. This isn’t just science fiction but the logical outcome of the technology being discussed in Silicon Valley boardrooms.
However, this AI-powered future, while promising, comes with unprecedented challenges:
- Safety & Control: Are we rushing AGI into our devices before it’s ready? While tests in labs show potential, deploying these systems in the real world exposes them to internet chaos from trolls, hackers, and government actors. One wrong move could turn your AI assistant from Samantha in “Her” to HAL 9000.
- Digital Epistemology: As AIs become our main source of information, how do we know what’s true? There’s debate over whether large language models really “understand” or just echo web content. If it’s the latter, are we building a global knowledge system on shaky ground?
- Algorithmic Governance: Tech companies argue that AGI’s complexity requires secretive development. Critics say when algorithms shape human culture at scale, their creation becomes a form of governance without public input. The Apple-OpenAI deal highlights this tension, with potentially society-changing code being negotiated behind closed doors.
- Corporate Ethics vs. Global Good: Are companies considering long-term human interests in their rush to dominate AGI? OpenAI’s internal conflict of prioritizing “shiny products” over “safety culture” reflects a broader industry dilemma. Is humanity’s well-being taking a backseat to corporate goals?
The silence from these companies when asked for comment is telling. It suggests they understand they’re not just making products but shaping a future with ethical, political, and even philosophical implications that are still up for debate.
This moment requires caution; unlike previous tech revolutions, AGI’s potential to reshape human-machine relationships demands careful consideration. The stakes aren’t just about business success but humanity’s future.
Your Phone, Your Future — It’s Up to Us
The collaboration between Apple and OpenAI goes beyond just a business deal. It’s a sign of AI’s rapid move from research labs to our smartphones: the core of our digital lives. This shift brings tools so advanced that they might feel less like gadgets and more like extensions of our minds.
But this closeness also means we need to get more involved. While AI experts work behind closed doors, it’s up to us — the users, the citizens — to shape the environment where these creations will exist. We need to go from passive users to active co-designers, asking not just what AI can do but what values should guide its development.
Should AI development be as open as open-source software or as secretive as national security projects? How do we make sure AI reflects humanity’s diversity, not just the values of a few in Silicon Valley? When does an AI’s influence on our decisions become manipulation?
These questions aren’t for some far-off future; they’re immediate concerns, as relevant as the next iPhone update. The code being written today won’t just improve your schedule or photos; it could reshape how you see the world, interact with others, and understand yourself.
So, the Apple-OpenAI deal isn’t just a business deal; it’s a challenge to every smartphone user. We’re being asked to think carefully about what we want from our AI future. The technology is ready, but are we?