I Gave My Mac Studio an AI OpenClaw Brain Upgrade. Here’s Every Wall I Hit.
Last week I said I was replacing Notion with a self-hosted AI agent running on a Mac Studio. Today I actually did it. The install took six hours, not the two I planned, and it included three problems that no documentation warned me about. The result is a working AI assistant named TARS that responds to both Slack and iMessage. All local. Zero cloud dependency.
Here’s how it actually went.
The Setup
The plan was straightforward: install Ollama for local LLM inference, install OpenClaw as the agent framework, wire up the messaging channels. Done. I've got a Mac Studio M3 Ultra with 256 GB of unified memory. Plenty of horsepower.
Should have been a two-hour job. It wasn’t.
Not because the tools are bad. They’re actually solid. But self-hosting AI means you’re the sysadmin, the DevOps team, and the QA department all at once. Every integration has its own auth model, its own quirks, and its own way of failing without telling you.
Three Problems Nobody Warned Me About
Three problems ate most of my afternoon, and none of them were in the documentation.
Problem 1: OpenClaw couldn't authenticate with Ollama. The onboarding wizard created a config file, but the gateway runs as a macOS LaunchAgent, a background service with its own environment. The API key I set in the config never reached the running process. I spent two hours reading bundled JavaScript source before I figured it out: OpenClaw checks for an OLLAMA_API_KEY environment variable, and the LaunchAgent plist needed it injected directly.
<!-- The fix: add to ~/Library/LaunchAgents/ai.openclaw.gateway.plist,
     inside an EnvironmentVariables dict so launchd passes it to the process -->
<key>EnvironmentVariables</key>
<dict>
    <key>OLLAMA_API_KEY</key>
    <string>ollama-local</string>
</dict>

Then reload the LaunchAgent with launchctl so the new environment takes effect. The auth resolution chain goes: auth profiles.json first, then environment variables, then the config apiKey. You won't need to know that until you do.