Industry Insights

Apple Intelligence: Balancing Privacy Positioning with Embracing AI

June 11, 2024

Musk’s outburst at the news of Apple partnering with OpenAI may be an overreaction, but it’s symptomatic of some recent high-profile gaffes around AI and data privacy. In recent months alone, there has been public outcry over data privacy issues surrounding Microsoft Recall, Slack, and DocuSign.

Thus far, Apple has fallen short of providing clear guidance on what data will ultimately be sent to OpenAI and how privacy will be protected. This should give individuals and corporations pause.

Reading the Fine Print: Unanswered Privacy Questions

Apple states that “when requests are routed to Private Cloud Compute, data is not stored or made accessible to Apple, and is only used to fulfill the user's requests.” That’s about all we know for now. Information on how user data is used to train OpenAI models is conspicuous in its absence. Furthermore, connecting a personal ChatGPT account may introduce a second, potentially conflicting set of data privacy preferences, governed by OpenAI’s terms rather than Apple’s.

This is about what we have come to expect from GenAI-enabled SaaS companies. At the best of times, navigating data retention and content training declarations is hard work. More often, the information is missing outright, or buried in dense legal language and subject to change over time.

But Apple professes to be different and positions itself as the “privacy” company. Apple has been considered “behind” in its AI story and needs to catch up, but will this move raise questions about its privacy positioning?

The impact on enterprises

For end users, Apple Intelligence means some cooler functionality, but, at least for now, we don’t know to what extent your data will be used to train OpenAI models. If you connect a lower-tier ChatGPT account, you should probably expect OpenAI to train on your data.

For enterprises, it may get thornier still. The challenges presented by Apple Intelligence are not necessarily unique to enterprises, but they pile yet more pressure on security teams. IT and security teams will now have to work out how to manage these features through Apple Mobile Device Management (MDM).

Most organizations take the sensible approach of turning off risky features until they have fully understood the risk, and then rolling them out gradually; the sketch below shows what that looks like in practice. However, we may be approaching a situation where features are so baked into the operating system that enterprises are forced to engage with the risk rather than simply switch it off.
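To make that concrete, here is an illustrative MDM configuration profile of the kind a security team might push while a feature is under review. It is a sketch, not a recommendation: the com.example identifiers and placeholder UUIDs are invented, and at the time of writing Apple had not published Apple Intelligence-specific restriction keys, so it falls back on the long-standing allowAssistant key in the Restrictions payload, which disables Siri.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>PayloadContent</key>
    <array>
        <dict>
            <key>PayloadType</key>
            <string>com.apple.applicationaccess</string>
            <key>PayloadIdentifier</key>
            <string>com.example.restrictions</string>
            <key>PayloadUUID</key>
            <string>00000000-0000-0000-0000-000000000001</string>
            <key>PayloadVersion</key>
            <integer>1</integer>
            <!-- Long-standing Restrictions key that disables Siri.
                 Apple Intelligence-specific keys had not been
                 published when this was written. -->
            <key>allowAssistant</key>
            <false/>
        </dict>
    </array>
    <key>PayloadDisplayName</key>
    <string>Example: disable Siri pending risk review</string>
    <key>PayloadIdentifier</key>
    <string>com.example.profile</string>
    <key>PayloadType</key>
    <string>Configuration</string>
    <key>PayloadUUID</key>
    <string>00000000-0000-0000-0000-000000000002</string>
    <key>PayloadVersion</key>
    <integer>1</integer>
</dict>
</plist>

Until Apple documents equivalent keys for Apple Intelligence features, blanket restrictions like this remain the bluntest instrument available.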

Looking forward 

The truth is that we don’t yet know the details of how Apple Intelligence will train on user data. However, Apple has made privacy central to its brand and will want to keep it that way.

With Microsoft Recall and Apple Intelligence, we’ve been given a taste of how AI will become increasingly baked into everything we do. 

Can we (and should we) trust these companies to secure our data, or do we need to shift how we approach data protection?

Request a demo

Concerned about the data privacy implications of Generative AI? You're not alone. Get in touch to learn more about Harmonic Security's approach.
Alastair Paterson