Omodo monitors your email, files, notes, and calendar — extracting action items, creating reminders, and automating tasks in the background. Powered by a local LLM. Nothing ever leaves your Mac.
Omodo runs entirely on your Mac using a local LLM — no cloud, no accounts, no data ever leaving your machine. It works seamlessly with Apple Mail, Calendar, Reminders, and Notes, so you stay in the apps you already use while AI handles the busywork behind the scenes.
All processing happens locally via Ollama. No cloud. No accounts. No telemetry. Your data stays on your machine — period.
Extracts action items from emails, files, and notes. Creates reminders and calendar events automatically. Thread-aware and context-rich.
Built in Swift. Integrates natively with Apple Mail, Calendar, Reminders, and Notes. Lives in your menu bar — no new apps to learn.
Omodo watches your Apple Mail inbox and turns actionable emails into reminders and calendar events for you. It follows conversation threads — so when a meeting gets rescheduled or a task gets cancelled in a reply, your reminders update automatically.
Ask questions in plain English and get things done. Check your schedule, search through emails, analyze documents, or create tasks — all in one conversation. Omodo remembers your context, so you don't have to repeat yourself.
"Check my schedule tomorrow and create reminders for prep"
"Search emails from John about the contract"
"Analyze all invoices and summarize amounts owed"
Omodo aggregates relevant emails, calendar events, reminders, files, links, and notes for your contacts into a unified view — so you always have the full picture without digging through multiple apps.
Omodo watches your files, understands their content, and tags them automatically — so you can find that invoice, contract, or note by simply describing what you're looking for. It can even rename files based on their content to keep things tidy.
Omodo records and transcribes your meetings on-device, then generates a summary with key decisions and action items. Everything is saved to Apple Notes and linked to the right contacts automatically.
Unlike cloud AI tools, Omodo processes everything locally using Ollama. There are no API keys, no cloud inference, no accounts, no analytics, and no telemetry. Your emails, files, and notes stay exactly where they belong — on your machine.
That's it. No cloud icons. No arrows going offscreen. Everything stays local.
Ollama is a free, open-source tool that lets you run large language models locally on your Mac. Omodo uses Ollama to process your data — it's what makes the entire experience private. You'll need to install it and pull a model before using Omodo, but the setup takes just a few minutes.
This is expected during the beta — Omodo is not yet signed or notarized with Apple. When you first open the app, macOS may show a warning or suggest moving it to the trash. To allow it: open System Settings → Privacy & Security, scroll down, and you'll see a message about Omodo being blocked. Click "Open Anyway" to allow it. You only need to do this once.
Yes — 100%. Omodo processes everything on your Mac using Ollama running on localhost. There are no cloud servers, no API calls to external services, no accounts, no analytics, and no telemetry. Your emails, files, and notes never leave your machine.
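The localhost claim is something you can verify yourself: Ollama serves a small HTTP API on port 11434 of your own machine, and every request Omodo makes goes to that address. A minimal sketch of such a local call (the endpoint and JSON fields follow Ollama's generate API; the model name is whatever you've pulled):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint


def build_request(prompt, model="llama3"):
    # Payload shape for Ollama's one-shot (non-streaming) generate API.
    return {"model": model, "prompt": prompt, "stream": False}


def ask_local(prompt, model="llama3"):
    # The request goes to 127.0.0.1 only — nothing is sent off the machine.
    data = json.dumps(build_request(prompt, model)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

You can also watch your network monitor of choice while Omodo works: the only AI traffic is loopback traffic.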
Omodo requires a Mac with Apple Silicon (M1 or later) running macOS 14 (Sonoma) or newer. Apple Silicon is needed for efficient local LLM inference through Ollama.
Omodo works through Apple Mail, so any email account you've added to Mail — Gmail, Outlook, iCloud, Yahoo, or any IMAP provider — is supported. You choose which mailboxes to monitor.
Omodo is completely free during the beta period. We'll share more about pricing as we approach a stable release.
Yes. Since Omodo and Ollama both run locally, you don't need an internet connection for AI processing. You'll only need internet for receiving new emails or syncing calendar events through Apple's apps.
Omodo works with any model available through Ollama — including Llama, Mistral, Gemma, Phi, and more. You can choose your preferred model in Settings based on your performance and quality preferences.
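Ollama also exposes which models you have installed via a local endpoint, which is how a model picker can populate its list. A small sketch (the `/api/tags` endpoint and its response shape are part of Ollama's API; error handling is omitted for brevity):

```python
import json
import urllib.request

TAGS_URL = "http://localhost:11434/api/tags"  # local-only model listing endpoint


def parse_models(body):
    # Extract model names from an /api/tags response body.
    return [m["name"] for m in json.loads(body).get("models", [])]


def installed_models():
    # Ask the local Ollama server which models are available to choose from.
    with urllib.request.urlopen(TAGS_URL) as resp:
        return parse_models(resp.read())
```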
Download Omodo, install Ollama, and pull a model (e.g. ollama pull llama3). Launch Omodo and follow the onboarding wizard — it'll guide you through permissions, model selection, and choosing which sources to monitor. You'll be up and running in minutes.
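If you want to script a sanity check before launching, the two prerequisites above are easy to verify: the `ollama` CLI is on your PATH, and at least one model shows up in `ollama list`. A minimal sketch (assumes only the standard Ollama CLI; the `ollama list` output format is a header row followed by one model per line):

```python
import shutil
import subprocess


def ollama_ready():
    # True once the ollama CLI is installed and on PATH.
    return shutil.which("ollama") is not None


def pulled_models():
    # Model names reported by `ollama list`, or [] if Ollama isn't installed yet.
    if not ollama_ready():
        return []
    out = subprocess.run(["ollama", "list"], capture_output=True, text=True).stdout
    return [line.split()[0] for line in out.splitlines()[1:] if line.strip()]
```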