# Agent-First Email Design
Agent-first design is a software philosophy where AI agents are treated as primary users, not afterthoughts. The Nylas CLI was built from the ground up with this principle: every command, flag, and output format was designed so that both humans and AI agents can use the tool effectively. This guide explains the specific design decisions that make Nylas CLI agent-native.
## The problem with human-first CLIs
Most CLI tools were designed for humans sitting at a terminal. They produce colorful, formatted output that looks great but is impossible for machines to parse reliably. They ask interactive questions. They print progress spinners. They format tables with dynamic column widths.
When an AI agent tries to use a human-first CLI, it faces these problems:
- Output is unpredictable -- table formatting changes based on terminal width
- Interactive prompts block execution and require simulating keyboard input
- Error messages are human-readable but not machine-parseable
- Side effects (sending email, creating events) happen without explicit confirmation flags
- No structured data format -- the agent must regex-parse prose output
## Design decision 1: structured JSON output
Every Nylas CLI command that returns data supports the --json flag. When enabled, the output is a JSON array or object -- no color codes, no table borders, no dynamic formatting. The schema is stable across versions.
```shell
# Human-readable (default)
nylas email list --limit 3
# ┌──────────────────────────┬──────────────────────┬───────────┐
# │ Subject                  │ From                 │ Date      │
# ├──────────────────────────┼──────────────────────┼───────────┤
# │ Weekly sync              │ alice@company.com    │ Mar 12    │
# │ Invoice #4521            │ billing@vendor.com   │ Mar 11    │
# │ PR review request        │ bob@company.com      │ Mar 11    │
# └──────────────────────────┴──────────────────────┴───────────┘

# Machine-readable (--json)
nylas email list --limit 3 --json
# [
#   {
#     "id": "msg_abc123",
#     "subject": "Weekly sync",
#     "from": [{"name": "Alice", "email": "alice@company.com"}],
#     "date": 1741795200,
#     "unread": true
#   },
#   ...
# ]
```

This means an AI agent can do:

```shell
# Agent extracts unread count
nylas email list --json --unread | jq 'length'

# Agent gets sender emails from recent messages
nylas email list --json --limit 10 | jq '.[].from[0].email'

# Agent reads a specific message body
nylas email read msg_abc123 --json | jq '.body'
```

## Design decision 2: non-interactive mode
Any command that performs a side effect (sending email, creating an event, deleting a draft) asks for confirmation by default. This is a safety feature for humans. But agents need to bypass it.
The --yes flag skips all confirmation prompts, making the command fully non-interactive:
```shell
# Interactive (default) -- human sees a prompt
nylas email send --to alice@example.com --subject "Hello" --body "World"
# Send email to alice@example.com? [y/N]

# Non-interactive -- agent skips the prompt
nylas email send --to alice@example.com --subject "Hello" --body "World" --yes
# Message sent: msg_xyz789
```

The design is deliberate: dangerous by default (confirm), safe for automation when explicitly opted in (`--yes`). An agent must consciously choose non-interactive mode, which means the developer building the agent made an explicit decision to allow unsupervised sends.
## Design decision 3: built-in MCP server
The Model Context Protocol (MCP) is the standard for connecting AI assistants to external tools. Instead of requiring developers to write MCP tool definitions and manage server infrastructure, Nylas CLI includes the server directly:
```shell
# Start the MCP server
nylas mcp serve

# Or install for a specific assistant (writes config automatically)
nylas mcp install --assistant claude-code
nylas mcp install --assistant cursor
nylas mcp install --assistant claude-desktop
```

The MCP server exposes 16 tools across email, calendar, and utility categories. Each tool has a typed schema that the AI assistant can discover at runtime. The server handles:
- Automatic credential injection -- no API keys in tool calls
- Timezone detection -- all times are in the user's local timezone
- Regional routing -- US or EU endpoint based on configuration
- Grant resolution -- finds the right account without requiring email addresses in every call
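For Claude Desktop, for example, the install command writes an entry into the assistant's MCP configuration. The generated entry typically resembles the following sketch; the exact keys and file location depend on the assistant, so treat this as illustrative rather than authoritative:

```json
{
  "mcpServers": {
    "nylas": {
      "command": "nylas",
      "args": ["mcp", "serve"]
    }
  }
}
```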
## Design decision 4: subprocess tool pattern
Not every agent speaks MCP. Many custom agents (built with LangChain, OpenAI function calling, or plain Python) use subprocess calls as tools. Nylas CLI is designed for this pattern:
```python
import subprocess
import json

def list_emails(query: str = "", limit: int = 10) -> list:
    """Tool: List emails matching a query."""
    if query:
        cmd = ["nylas", "email", "search", query, "--json", f"--limit={limit}"]
    else:
        cmd = ["nylas", "email", "list", "--json", f"--limit={limit}"]
    result = subprocess.run(cmd, capture_output=True, text=True)
    return json.loads(result.stdout)

def send_email(to: str, subject: str, body: str) -> dict:
    """Tool: Send an email."""
    cmd = [
        "nylas", "email", "send",
        "--to", to,
        "--subject", subject,
        "--body", body,
        "--yes", "--json",
    ]
    result = subprocess.run(cmd, capture_output=True, text=True)
    return json.loads(result.stdout)

def read_email(message_id: str) -> dict:
    """Tool: Read a specific email by ID."""
    cmd = ["nylas", "email", "read", message_id, "--json"]
    result = subprocess.run(cmd, capture_output=True, text=True)
    return json.loads(result.stdout)
```

This pattern works because:
- `--json` guarantees parseable output
- `--yes` prevents blocking on prompts
- Exit codes are consistent: 0 for success, non-zero for failure
- Errors go to stderr, data goes to stdout -- clean separation
## Design decision 5: stdin/stdout composability
Unix philosophy says tools should read from stdin and write to stdout. Nylas CLI follows this for email body content:
```shell
# Pipe a file as email body
cat report.md | nylas email send --to team@example.com \
  --subject "Weekly Report" --yes

# Pipe command output as email body
git log --oneline -10 | nylas email send --to lead@example.com \
  --subject "Recent commits" --yes

# Chain with other tools
nylas email search "*" --from vendor@example.com --json | \
  jq '.[].subject' | \
  sort | uniq -c | sort -rn

# Agent generates body, pipes to send
echo "Meeting confirmed for Thursday at 2pm." | \
  nylas email send --to alice@example.com --subject "Re: Meeting" --yes
```

For AI agents, this means they can construct email bodies programmatically and pipe them directly into the send command. No temporary files, no escaping issues with shell arguments.
## Design decision 6: predictable error handling
Agents need to handle errors programmatically. Nylas CLI uses a consistent error structure:
```shell
# Exit codes
#   0 -- success
#   1 -- general error (bad arguments, missing config)
#   2 -- authentication error (not logged in, expired token)

# Error output goes to stderr, not stdout
nylas email send --to invalid --subject "Test" --yes 2>&1
# Error: invalid email address "invalid"

# With --json, errors are also structured
nylas email list --json --grant nonexistent 2>&1
# {"error": "grant_not_found", "message": "No grant found for 'nonexistent'"}
```

An agent can check the exit code, parse stderr for error details, and decide how to recover -- all without fragile string matching on human-readable output.
## Design decision 7: predictable command grammar
Every Nylas CLI command follows a consistent noun-verb pattern that LLMs can predict without memorizing documentation:
```shell
# Pattern: nylas <resource> <action> [flags]

# Email
nylas email list [--json] [--limit N] [--unread]
nylas email read <id> [--json]
nylas email send [--to] [--subject] [--body] [--yes]
nylas email search "query" [--json]

# Calendar
nylas calendar list [--json]
nylas calendar events [--json] [--from DATE] [--to DATE]
nylas calendar create [--title] [--start] [--end] [--yes]

# Auth
nylas auth login
nylas auth logout
nylas auth list
nylas auth whoami

# MCP
nylas mcp serve
nylas mcp install [--assistant NAME]
nylas mcp status
```

The grammar is regular enough that an LLM can construct valid commands from a description of what it wants to do, even if it has never seen the exact command before. This is by design: agent-first means the tool is learnable by inference.
## MCP mode vs subprocess mode
Nylas CLI supports two agent integration patterns. Choose based on your use case:
| Aspect | MCP mode | Subprocess mode |
|---|---|---|
| Connection | Persistent (stdio JSON-RPC) | Per-command (fork/exec) |
| Tool discovery | Automatic (protocol handshake) | Manual (define tool schemas) |
| Best for | Claude, Cursor, VS Code, Windsurf | Custom Python/Node agents |
| Latency | Lower (persistent connection) | Higher (process startup per call) |
| Setup | One command | Write tool wrappers |
| Flexibility | Fixed tool set (16 tools) | Full CLI surface area |
## Real-world agent workflow
Here is a complete example of an AI agent using Nylas CLI to handle a request like "summarize my unread emails and draft replies to anything urgent":
```shell
#!/bin/bash
# Agent workflow: summarize unread, draft replies to urgent

# Step 1: Get unread emails
UNREAD=$(nylas email list --json --unread --limit 20)

# Step 2: Agent processes JSON, identifies urgent emails
# (This happens in the LLM -- it reads the JSON and decides)

# Step 3: Read full content of an urgent email
FULL=$(nylas email read msg_urgent123 --json)

# Step 4: Agent generates reply body, sends as draft
nylas email send --to sender@example.com \
  --subject "Re: Urgent - Server outage" \
  --body "I have reviewed the incident report. Escalating to the on-call team now." \
  --reply-to msg_urgent123 \
  --yes
```

## Turn any CLI command into an agent tool
The pattern for wrapping any Nylas CLI command as an agent tool is always the same:
```python
import subprocess, json

def nylas_tool(args: list[str]) -> dict | list | str:
    """Generic wrapper for any Nylas CLI command."""
    result = subprocess.run(
        ["nylas"] + args + ["--json"],
        capture_output=True, text=True
    )
    if result.returncode != 0:
        return {"error": result.stderr.strip()}
    try:
        return json.loads(result.stdout)
    except json.JSONDecodeError:
        return result.stdout.strip()

# Use it
emails = nylas_tool(["email", "list", "--limit", "5"])
calendars = nylas_tool(["calendar", "list"])
whoami = nylas_tool(["auth", "whoami"])
```

## Frequently asked questions
### Does --json affect all commands?

All commands that return data support `--json`. Commands that only perform side effects (like `nylas auth login`) do not produce JSON output but still use structured exit codes.
### Can I use Nylas CLI with LangChain or OpenAI function calling?

Yes. Use the subprocess pattern shown above. Define each CLI command as a tool with a JSON schema for its parameters, then call `subprocess.run` in the tool implementation. See the Build an LLM agent guide for a complete example.
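As a sketch of that pattern, an OpenAI-style tool definition paired with its subprocess-backed implementation might look like this; the schema contents and helper names are illustrative assumptions, not part of the CLI:

```python
import json
import subprocess

# Illustrative function-calling schema the LLM sees
LIST_EMAILS_TOOL = {
    "type": "function",
    "function": {
        "name": "list_emails",
        "description": "List recent emails, optionally only unread ones.",
        "parameters": {
            "type": "object",
            "properties": {
                "limit": {"type": "integer", "description": "Max messages to return"},
                "unread": {"type": "boolean", "description": "Only unread messages"},
            },
        },
    },
}

def build_list_cmd(limit: int = 10, unread: bool = False) -> list[str]:
    """Translate the tool's arguments into a CLI invocation."""
    cmd = ["nylas", "email", "list", "--json", f"--limit={limit}"]
    if unread:
        cmd.append("--unread")
    return cmd

def list_emails(limit: int = 10, unread: bool = False) -> list:
    """Tool implementation: run the CLI and parse its JSON output."""
    result = subprocess.run(build_list_cmd(limit, unread),
                            capture_output=True, text=True)
    return json.loads(result.stdout)
```

The same shape works for LangChain: wrap `list_emails` in a tool decorator and hand the schema to the model.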
### Is the MCP server stateful?

The MCP server maintains a persistent connection but is stateless between tool calls. Each tool call is independent. The server does not remember previous calls or maintain conversation context -- that is the AI assistant's job.
### How does --yes interact with the MCP server?

In MCP mode, the server handles confirmation differently. For sensitive operations (sending email, creating events), the MCP protocol includes a confirmation step that the AI assistant presents to the user. The `--yes` flag only applies to direct CLI usage, not MCP tool calls.
### Can agents access multiple email accounts?

Yes. Run `nylas auth login` for each account. Then use `--grant` to specify which account to use in each command. The MCP server also supports multi-account access through the `get_grant` tool.
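For example, a per-account wrapper only needs to thread the `--grant` flag through each invocation. A sketch; the grant names and helper are placeholders for illustration:

```python
def build_cmd_for_grant(grant: str, args: list[str]) -> list[str]:
    """Prefix any nylas invocation with an explicit account grant."""
    return ["nylas"] + args + ["--grant", grant]

# e.g. list the work inbox, then the personal inbox
work_cmd = build_cmd_for_grant("work", ["email", "list", "--json"])
personal_cmd = build_cmd_for_grant("personal", ["email", "list", "--json"])
```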
## Next steps
- Set up the MCP server -- connect Claude, Cursor, or VS Code to your inbox in one command
- Build an LLM agent with email tools -- complete Python example with subprocess tools
- Connect voice agents to email -- use LiveKit or Vapi with Nylas CLI as the bridge
- Full command reference -- every flag and subcommand