Your Meeting Data Should Be Yours to Program Against
Launching dealloop's Developer API, MCP Server, and Connected Tool Integrations
Last week a meeting tool quietly gated its API, and developers who had built workflows around their own data lost programmatic access overnight. We've been building something that goes in the opposite direction, and the timing felt right to ship it.
dealloop now has a local developer API, a local MCP server, and a set of connected tool integrations. Together they make your meeting data programmable in both directions: your tools can read dealloop, and dealloop's AI agent can reach into your tools.
Data out: the developer API and MCP server
The developer API is REST over localhost. It binds to 127.0.0.1, generates a bearer token in memory when you start it, and works with whatever you already use: curl, Postman, a Python script, an agent framework.
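Because it's plain REST over localhost, any HTTP client works. Here's a minimal sketch in Python; the port, the `DEALLOOP_TOKEN` variable, and the `/v1/meetings` path are illustrative assumptions, not documented endpoints:

```python
import json
import os
import urllib.request

# Assumed values for illustration; the app surfaces the real port and token.
BASE_URL = "http://127.0.0.1:8440"
TOKEN = os.environ.get("DEALLOOP_TOKEN", "dev-token")

def auth_headers(token: str) -> dict:
    """Bearer-token header the local API expects."""
    return {"Authorization": f"Bearer {token}"}

def get(path: str) -> dict:
    """GET a path on the local API and decode the JSON response."""
    req = urllib.request.Request(BASE_URL + path, headers=auth_headers(TOKEN))
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# e.g. meetings = get("/v1/meetings")  # hypothetical endpoint
```

The same shape works from curl, Postman, or an agent framework; the only thing the client needs is the token the app generated at startup.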
The MCP server handles Model Context Protocol clients like Claude Desktop, Cursor, and anything else that speaks the protocol. It runs as a headless subprocess that communicates with the main dealloop app over a Unix socket, and the app retains authority over whether to allow the connection.
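Clients like Claude Desktop and Cursor are configured with a JSON file that lists MCP servers under an `mcpServers` key. A hypothetical entry for dealloop might look like the following; the server name, binary path, and flag are placeholders, not dealloop's actual install layout:

```json
{
  "mcpServers": {
    "dealloop": {
      "command": "/path/to/dealloop-mcp",
      "args": ["--stdio"]
    }
  }
}
```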
We built both because they solve different problems. The API serves developers who want to call explicit endpoints, while MCP serves models that need to discover available tools and decide which ones to invoke. A script that syncs action items to Salesforce after every call is an API problem. An agent that pulls context from your last three meetings while you're on a fourth is an MCP problem.
Tools in: connected integrations
Connected tools go the other direction. They let dealloop's AI agent reach into services like Linear and Notion during a live conversation.
When the agent needs to pull up an issue or look something up in a wiki, it sends a tool call request down to your desktop app, which executes it against the locally running MCP server for that service and returns the result. The AI runs in the cloud, but every tool invocation still flows through your machine, through a server you've authenticated, and through a permission layer you control. The desktop app stays in the loop just as it does for the outbound API and MCP server.
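That round trip is small enough to sketch. Everything below, from the `ToolCall` shape to the server registry, is an illustration of the flow rather than dealloop's internal interface:

```python
from dataclasses import dataclass, field

@dataclass
class ToolCall:
    server: str   # e.g. "linear"
    tool: str     # e.g. "get_issue"
    args: dict = field(default_factory=dict)

class EchoServer:
    """Stand-in for a locally running MCP server."""
    def call(self, tool: str, args: dict) -> dict:
        return {"tool": tool, "args": args}

def relay(call: ToolCall, servers: dict) -> dict:
    """Execute a cloud-issued tool call entirely on the local machine.

    The agent never talks to the service directly: the desktop app
    routes the call to the local MCP server for that service and
    ships the result back up the same path."""
    server = servers.get(call.server)
    if server is None:
        return {"error": f"no connected server named {call.server!r}"}
    return {"result": server.call(call.tool, call.args)}
```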
Connecting a service takes a single OAuth flow handled inside the app. You can also add custom MCP servers or import ones you've already configured in Claude Desktop, Cursor, or VS Code.
Every tool from every server carries a permission you set: always allow, ask before using, or deny outright, with per-tool overrides when you want finer control. Unknown tools default to asking, so the system fails closed. During a meeting, approval requests appear as cards in the heads-up display, and you can approve or reject them without leaving the conversation.
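The resolution order is simple enough to state precisely. A sketch of the fail-closed lookup, with all names assumed for illustration:

```python
ALLOW, ASK, DENY = "always_allow", "ask", "deny"

def resolve(server_defaults: dict, tool_overrides: dict,
            server: str, tool: str) -> str:
    """Decide what happens when the agent requests a tool."""
    # A per-tool override beats the server-wide setting.
    if (server, tool) in tool_overrides:
        return tool_overrides[(server, tool)]
    # Fall back to the permission set for the whole server.
    if server in server_defaults:
        return server_defaults[server]
    # Unknown tools default to asking: the system fails closed.
    return ASK
```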
Why local first
Local was a deliberate architectural choice, not a shortcut.
When the trust boundary is your own machine and the enforcement point is an app you're already signed into, most of the hard problems in API design just go away. You inherit the existing session instead of managing credentials. You skip OAuth entirely for the developer surface. Nothing ever leaves your machine that you haven't explicitly allowed.
The same principle extends to connected tools. Even though the AI agent runs in the cloud, every tool call it makes is relayed to your desktop app, executed locally, and returned through the same path. The agent can't silently call a tool you haven't approved, and it can't reach a service you haven't connected.
We're building a remote API for teams and server workflows where local access falls short. But we wanted the default to be the path that requires the least trust infrastructure, so that developers who never need remote access never encounter that complexity at all.
Under the hood
The MCP bridge scans for stale socket files at startup and removes them after verifying they're safe to clean up. It checks file ownership and permissions before accepting any connection. And if you quit the app, or it crashes, or you sign out while a client is connected, that client receives a disconnect with a reason string so it knows exactly what happened.
We put this care into failure handling because the situations that cause failures are completely mundane. Laptops sleep, processes get killed, and apps restart throughout the day. A local integration surface that handles these transitions poorly is one that developers learn to route around rather than build on.
Both the IPC path and the REST API share a single handler layer, so validation, size limits, and error formatting all live in one place regardless of transport. Both authenticate through the same provider that powers the desktop app. There's one session, one identity, and one place where auth decisions are made.
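One way to picture that shared layer is a single dispatch table both transports feed, so size limits and error shapes are defined exactly once. The method names and error codes below are invented for illustration:

```python
import json

MAX_BODY = 1 << 20  # one size limit, enforced for both transports

HANDLERS = {
    "meetings.list": lambda params: {"meetings": []},  # placeholder handler
}

def dispatch(method: str, raw_body: bytes) -> dict:
    """Single handler layer: validation, limits, and error formatting
    live here whether the request arrived over REST or the IPC socket."""
    if len(raw_body) > MAX_BODY:
        return {"error": {"code": "too_large", "message": "body exceeds limit"}}
    handler = HANDLERS.get(method)
    if handler is None:
        return {"error": {"code": "unknown_method", "message": method}}
    try:
        params = json.loads(raw_body or b"{}")
    except json.JSONDecodeError:
        return {"error": {"code": "bad_json", "message": "invalid request body"}}
    return {"ok": handler(params)}
```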
Available now in dealloop for macOS. Download here.
