How to Run Ollama and Qwen 3.5 Locally on a Mac and Use It from iPhone with Cumbersome

Coming soon: a thorough guide to running Ollama with Qwen 3.5 locally on a Mac, connecting to it from Cumbersome on macOS and iPhone, tightening privacy, and preparing for remote access with Tailscale.

March 15, 2026

This guide is coming next. It will show how to run Ollama on a Mac, use a strong local model like Qwen 3.5, and connect to it from Cumbersome on the same Mac or from an iPhone on your local network.

I am working on a thorough how-to for people who want local AI on Apple silicon without giving up a decent client experience. The short version: Ollama makes it easy to run an open model locally on a Mac, Qwen 3.5 is now good enough to be useful for real writing and coding work, and Cumbersome is a much better front end than poking at localhost in a terminal all day.
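To give a flavor of how little is involved, here is the minimal happy path. This is a sketch, not the full guide: it assumes you install via Homebrew (the downloadable Mac app works too), and it assumes the model ships under the tag qwen3.5, which you should verify against the Ollama library before pulling.

```sh
# Install the Ollama CLI with Homebrew (the Mac app is an alternative)
brew install ollama

# Start the server; the Mac app runs this for you in the background
ollama serve

# In another terminal: download the model, then chat with it.
# "qwen3.5" is an assumed tag; check https://ollama.com/library
# for the exact name and size variants before pulling.
ollama pull qwen3.5
ollama run qwen3.5
```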

The full post will cover:

  • why you would run a local model at all: privacy, control, and keeping prompts on hardware you own
  • what kind of Mac or Mac mini you need for this to feel good instead of miserable
  • how to install Ollama on macOS
  • how to pull and run Qwen 3.5 (and similar models)
  • how to connect Cumbersome to Ollama on the same Mac
  • how to use that same setup from an iPhone or another Mac on the same network (a preview sketch follows this list)
  • how to clean up logs automatically if you want an even tighter privacy story
  • a part 2 on using Tailscale for remote access without exposing the machine to the public Internet
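As a preview of the iPhone piece: Ollama listens only on 127.0.0.1:11434 by default, and setting OLLAMA_HOST rebinds it to all interfaces so other devices on your network can reach it. The sketch below assumes you run the CLI server yourself; the full post will go into the Mac app case and the tradeoffs.

```sh
# Rebind Ollama from loopback to all interfaces.
# Only do this on a network you trust: anyone on the LAN can now query it.
OLLAMA_HOST=0.0.0.0 ollama serve

# If you run the Mac app instead of the CLI server, set the variable
# persistently and restart the app:
#   launchctl setenv OLLAMA_HOST "0.0.0.0"

# Find the Mac's LAN address to enter on the iPhone.
# en0 is typically Wi-Fi; adjust the interface name if yours differs.
ipconfig getifaddr en0

# In Cumbersome on the iPhone, use http://<that-address>:11434 as the endpoint.
```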

One reason I want to write this now: local models crossed an important threshold. They are no longer just a toy for benchmarks and screenshots. A model like Qwen 3.5 can get surprisingly close to GPT-4o-class usefulness for a lot of everyday work, especially if your priority is privacy and "good enough" on a machine you already own.

If you want to follow along when the full guide lands, this is the URL it will be published at.

In the meantime, if you have not tried it yet, Cumbersome already supports OpenAI-compatible providers. That is the plumbing this walkthrough will use for a local Ollama endpoint.
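Concretely, Ollama serves an OpenAI-compatible API under /v1 on its default port, which is what lets a client like Cumbersome treat it as just another provider. A quick smoke test from the Mac itself, again assuming the qwen3.5 tag:

```sh
# Hit Ollama's OpenAI-compatible chat endpoint. No real API key is
# needed; clients that insist on one can send any non-empty string.
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwen3.5",
    "messages": [{"role": "user", "content": "Say hello in five words."}]
  }'
```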

Bless up! 🙏✨