Apple’s Missed Opportunity: How the New AI Features Let Developers Down

As Apple prepares to launch iOS 18 and macOS 15 Sequoia next week, excitement is building around the new Apple Intelligence features. With promises of a smarter Siri and AI-enhanced system apps with summarization capabilities, the future looks bright for users. As a developer, though, I can’t help but feel let down. Despite the advancements, there’s a glaring omission: no built-in generative language model that apps can call into. That missing piece could have empowered developers to create a whole new wave of innovative apps and solutions.

So, what exactly is Apple Intelligence?

  • A more personal Siri that can do more for you, like searching and performing actions across apps or prioritizing and summarizing your notifications. With some effort, we developers can let Siri reach into our apps and trigger their actions.
  • Writing tools that let users rewrite text, adjust its tone, or fix grammar. These tools will appear in most apps that let you edit text.
  • The ability to generate new emojis on the fly or create images using the Image Playground API.

The Glaring Omission: No On-Device LLM for Developers

So, basically, we can now generate images in our apps, but there’s no equivalent way to generate or process text. If Apple offered a text-generation API, the potential for creative ideas would have exploded. It could have triggered a new wave of innovative apps and a lot of fresh enthusiasm around its platforms.

The AI Dilemma for Developers

When you want to build an AI-powered app, you face a dilemma: rely on a cloud-based LLM such as OpenAI’s GPT models or Anthropic’s Claude, or run a model locally. Both options have considerable downsides.

Cloud-Based LLMs: Cost and Security Issues

If you choose the cloud-based route, the most significant factor is cost. These API vendors charge based on usage, and if your app gets popular, the bill can grow quickly. Deciding how to handle these costs is a challenge because you basically have two options:

  • Absorb the API costs and hope that the price of your product covers all its usage.
  • Pass the cost on to the user, either by selling usage-based in-app purchases or by having users provide their own API key, which they pay for.

The final challenge is security: your app could be hacked, your API keys stolen, and you could wake up to an unexpected bill from your provider.

On-Device Models: Licensing and Size Constraints

Should you choose the on-device approach, you have other problems. You could bundle an open-source model with your app, but this brings its own concerns. First, is the model licensed in a way that allows commercial distribution? If it is, we come to the second problem: these models are huge – several gigabytes huge. Include one in your app, and not only will the app take a long time to download, it will also eat up a ton of space on your users’ devices. If every app did this, we would all quickly run out of drive space, and each app would need to load its own copy of a model, gobbling up your device’s memory, too.

Exploring Alternatives: The Case of Ollama

There is a third option: an app called Ollama. It runs in the background on your machine and lets you download and run models locally – essentially a locally running AI API. I’ve been using it to write lots of useful scripts that help me day to day. For example, one small script on my Mac scans the news and pings me about stories it thinks I’ll find interesting. Another analyzes and summarizes files and their git history, which has come in handy when hunting for regressions in unfamiliar files or when getting up to speed on what some code does.
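To give a feel for how simple such scripts are, here is a minimal sketch of calling Ollama’s local REST API using nothing but the Python standard library. It assumes a stock Ollama install listening on its default port (11434) and that a model – I’m using `llama3.2` as a stand-in – has already been pulled:

```python
import json
import urllib.request

# Ollama's default local endpoint (assumes a stock install on port 11434)
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming generate request for Ollama's REST API."""
    payload = json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode("utf-8")
    return urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )

def generate(prompt: str, model: str = "llama3.2") -> str:
    """Send the prompt to the local Ollama server and return the reply text."""
    with urllib.request.urlopen(build_request(model, prompt)) as resp:
        return json.loads(resp.read())["response"]

# Example (only works with Ollama running and the model pulled):
# print(generate("Summarize today's top tech headline in one sentence."))
```

No API keys, no per-request billing – the model runs entirely on your own machine, which is exactly what makes this workflow so pleasant for personal scripts.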

The downside of building apps on Ollama appears when you want to distribute them. Users must install Ollama first, or your app won’t work. For best results, they’ll likely also need to install the exact model you have been tuning your prompts against, and they’ll need to use the terminal to do it. All of this makes it infeasible to distribute such apps on the App Store.
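At minimum, an app built this way has to probe for Ollama before doing anything else. A small sketch, assuming Ollama’s default behavior of answering plain HTTP requests on its root URL at port 11434:

```python
import urllib.error
import urllib.request

def ollama_available(base_url: str = "http://localhost:11434",
                     timeout: float = 2.0) -> bool:
    """Return True if a local Ollama server is reachable.

    A running Ollama instance answers a plain GET on its root URL;
    any connection error means it isn't installed or isn't running,
    and the app would have to walk the user through setup instead.
    """
    try:
        with urllib.request.urlopen(base_url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False
```

Having to ship this kind of "is the AI even there?" check – and a setup tutorial for when it fails – is precisely the friction a system-provided model would remove.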

Final Thoughts

To fix all these problems, Apple should have shipped a basic, system-wide local language model in the OS that any app could hook into and use. That alone would make these worries go away. It would eliminate the cost and licensing barriers, reduce app sizes, and open the floodgates for a ton of innovative native AI-powered apps. Doing so wouldn’t just benefit developers – it would enrich the entire Apple ecosystem with a wealth of new and advanced functionality.

Over the last few years, many developers have felt increasingly alienated by Apple. By leaving out a developer-accessible language model, Apple has missed a significant opportunity to empower its developer community.

