Guide
Build an LLM Agent with Email & Calendar Tools
LLM agents are just an API call, a context array, and tools. Email and calendar are common tool needs. Instead of writing OAuth flows and provider-specific API clients for Gmail, Outlook, Exchange, Yahoo, iCloud, and IMAP, use the Nylas CLI as your tool backend. One subprocess call per tool. Same simplicity as giving your agent ping.
By Pouya Sanooei
Why use the CLI as agent tools?
When you build an agent, you give it tools. A tool is a function the LLM can invoke. The classic example is ping: define it, wire it in, and the agent figures out when to call it. You never wrote a loop to ping multiple hosts. The agent did.
The tool-use pattern — define tools, let the LLM decide when to call them — is now supported by every major provider: OpenAI's function calling API (June 2023), Anthropic's tool use API (April 2024), and Google's function calling in Gemini (December 2023). The interface differs slightly, but the core loop is identical: send tool definitions, receive tool calls, return results.
Email and calendar are the same idea. Your agent needs to read messages, send replies, check availability, and create events. You could build API clients, manage OAuth tokens, and handle Gmail vs Outlook vs Exchange vs Yahoo vs iCloud vs IMAP — roughly 300 lines of boilerplate for token management alone if you roll a custom Gmail OAuth integration. Or you could run `nylas email list` and `nylas calendar events list` from your tool handlers: one subprocess call per tool.
The Nylas CLI already handles authentication, provider abstraction, and connection management. Your agent code stays simple: subprocess in, JSON out.
1. Install and authenticate
```bash
# Install
brew install nylas/nylas-cli/nylas

# Authenticate (one-time)
nylas auth login

# Verify
nylas auth whoami
nylas email list --limit 3
```

2. The tool pattern
Every agent framework (OpenAI, Anthropic, etc.) expects tools as function definitions with JSON schemas. You implement each tool by calling the CLI. Here is the pattern:
```python
import subprocess

def list_emails(limit=10, unread_only=False):
    """List recent emails from the authenticated mailbox."""
    cmd = ["nylas", "email", "list", "--limit", str(limit), "--json"]
    if unread_only:
        cmd.append("--unread")
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        return f"Error: {result.stderr}"
    return result.stdout

def send_email(to, subject, body):
    """Send an email. Requires --yes to skip confirmation."""
    result = subprocess.run(
        ["nylas", "email", "send", "--to", to, "--subject", subject, "--body", body, "--yes"],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        return f"Error: {result.stderr}"
    return "Email sent successfully."
```

When your agent calls list_emails(), the CLI returns structured JSON. Here is what `nylas email list --json` looks like:
```json
[
  {
    "id": "a1b2c3d4e5f6g7h8",
    "subject": "Re: API design review",
    "from": [{"name": "Sarah Chen", "email": "sarah@example.com"}],
    "to": [{"name": "Alex Rivera", "email": "alex@example.com"}],
    "snippet": "I've updated the endpoint spec. The breaking change is in the auth middleware...",
    "date": "2026-03-25T14:22:18-04:00",
    "unread": true,
    "folders": ["INBOX"]
  },
  {
    "id": "b2c3d4e5f6g7h8i9",
    "subject": "Deployment complete: staging-v2.4.0",
    "from": [{"name": "CI Bot", "email": "ci@example.com"}],
    "to": [{"name": "Alex Rivera", "email": "alex@example.com"}],
    "snippet": "All 47 tests passed. Deployment to staging completed in 3m 22s.",
    "date": "2026-03-25T13:15:00-04:00",
    "unread": false,
    "folders": ["INBOX"]
  }
]
```

And when send_email() runs `nylas email send --yes --json`, the response confirms delivery:
```json
{
  "id": "msg_c3d4e5f6g7h8i9j0",
  "subject": "Re: API design review",
  "from": [{"name": "Alex Rivera", "email": "alex@example.com"}],
  "to": [{"name": "Sarah Chen", "email": "sarah@example.com"}],
  "date": "2026-03-25T14:25:00-04:00",
  "object": "message"
}
```

3. Tool definitions for the LLM
The LLM needs a description of each tool. This is the JSON blob your framework expects. Example for OpenAI-style tools:
```python
tools = [
    {
        "type": "function",
        "function": {
            "name": "list_emails",
            "description": "List recent emails from the user's inbox. Use unread_only=True to filter unread only.",
            "parameters": {
                "type": "object",
                "properties": {
                    "limit": {"type": "integer", "description": "Max number of emails to return", "default": 10},
                    "unread_only": {"type": "boolean", "description": "Only return unread emails", "default": False},
                },
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "send_email",
            "description": "Send an email. Use for replies or new messages.",
            "parameters": {
                "type": "object",
                "properties": {
                    "to": {"type": "string", "description": "Recipient email address"},
                    "subject": {"type": "string", "description": "Email subject"},
                    "body": {"type": "string", "description": "Email body (plain text)"},
                },
                "required": ["to", "subject", "body"],
            },
        },
    },
]
```

4. Wire tools into your agent loop
When the LLM returns a tool call, you run the corresponding function and append the result to context. Then call the LLM again. Same pattern as the ping example:
```python
import json
from openai import OpenAI

client = OpenAI()
context = []

def call():
    return client.chat.completions.create(
        model="gpt-4o",
        messages=context,
        tools=tools,
        tool_choice="auto",
    )

def handle_tool_call(item):
    name = item.function.name
    args = json.loads(item.function.arguments or "{}")
    if name == "list_emails":
        result = list_emails(**args)
    elif name == "send_email":
        result = send_email(**args)
    else:
        result = "Unknown tool"
    return {
        "role": "tool",
        "tool_call_id": item.id,
        "content": result,
    }

def process(user_input):
    context.append({"role": "user", "content": user_input})
    while True:
        response = call()
        message = response.choices[0].message
        tool_calls = message.tool_calls or []
        if not tool_calls:
            final = message.content or ""
            context.append({"role": "assistant", "content": final})
            return final
        context.append({
            "role": "assistant",
            "content": message.content or "",
            "tool_calls": [
                {
                    "id": tc.id,
                    "type": "function",
                    "function": {
                        "name": tc.function.name,
                        "arguments": tc.function.arguments,
                    },
                }
                for tc in tool_calls
            ],
        })
        for item in tool_calls:
            context.append(handle_tool_call(item))
```

5. Add calendar tools
Same pattern for calendar. The CLI exposes list, create, and availability:
```python
def list_events(days=7):
    """List upcoming calendar events."""
    result = subprocess.run(
        ["nylas", "calendar", "events", "list", "--days", str(days), "--json"],
        capture_output=True,
        text=True,
    )
    return result.stdout if result.returncode == 0 else f"Error: {result.stderr}"

def create_event(title, start, end, participants=None):
    """Create a calendar event."""
    cmd = ["nylas", "calendar", "events", "create", "--title", title, "--start", start, "--end", end]
    if participants:
        for p in participants:
            cmd.extend(["--participant", p])
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.stdout if result.returncode == 0 else f"Error: {result.stderr}"

def find_meeting_time(participants, duration="30m"):
    """Find when participants are free for a meeting."""
    result = subprocess.run(
        ["nylas", "calendar", "find-time", "--participants", ",".join(participants),
         "--duration", duration, "--json"],
        capture_output=True,
        text=True,
    )
    return result.stdout if result.returncode == 0 else f"Error: {result.stderr}"
```

When list_events() calls `nylas calendar events list --json`, the output includes participants, conferencing links, and timezone data:
```json
[
  {
    "id": "evt_9x8y7z6w5v4u3t2s",
    "title": "API Design Review",
    "when": {
      "start_time": 1774535400,
      "end_time": 1774537200,
      "start_timezone": "America/New_York",
      "object": "timespan"
    },
    "participants": [
      {"email": "alex@example.com", "status": "yes"},
      {"email": "sarah@example.com", "status": "yes"},
      {"email": "jordan@example.com", "status": "noreply"}
    ],
    "status": "confirmed",
    "conferencing": {
      "provider": "Google Meet",
      "details": {"url": "https://meet.google.com/abc-defg-hij"}
    }
  }
]
```

6. CLI commands you can wrap
These Nylas CLI commands map directly to agent tools:
Email

| Command | Use case |
|---|---|
| `nylas email list --json` | List messages (add `--unread`, `--limit`) |
| `nylas email search "query" --json` | Search by keyword |
| `nylas email read msg_id --json` | Read full message |
| `nylas email send --to X --subject Y --body Z --yes` | Send email |

Calendar

| Command | Use case |
|---|---|
| `nylas calendar events list --json` | List events (add `--days`, `--timezone`) |
| `nylas calendar events create --title X --start Y --end Z` | Create event |
| `nylas calendar find-time --participants X,Y --duration 30m --json` | Find free slots |
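Every row above follows the same subprocess shape, so rather than repeating error handling in each tool you can centralize the call in one helper. A minimal sketch; the `binary` parameter is not part of the CLI and exists only so you can point the helper at a different executable when testing:

```python
import subprocess

def run_cli(args, binary="nylas"):
    """Run one CLI subcommand and return stdout, or an error string the LLM can read.

    args: list of subcommand arguments, e.g. ["email", "list", "--limit", "5", "--json"].
    """
    try:
        result = subprocess.run([binary, *args], capture_output=True, text=True)
    except FileNotFoundError:
        return f"Error: {binary} not found on PATH"
    if result.returncode != 0:
        return f"Error: {result.stderr.strip()}"
    return result.stdout
```

Each tool then becomes a one-liner, e.g. `run_cli(["email", "list", "--limit", "5", "--json"])`, and every failure mode reaches the LLM as a readable string instead of an exception.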
7. Context engineering tips
Each tool output eats tokens. The CLI returns JSON. For long message lists, consider:

- Use `--limit 5` or `--limit 10` instead of fetching everything
- Summarize large outputs in a separate step before appending them to context
- Only expose the tools the agent needs for the task; an email-only agent does not need calendar tools
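The summarization step can be as simple as compacting the raw JSON before it enters context. A sketch that keeps only the fields an LLM usually needs, using the field names from the `nylas email list --json` sample above:

```python
import json

def compact_emails(raw_json, max_items=5):
    """Reduce the CLI's email-list JSON to one short line per message."""
    emails = json.loads(raw_json)
    lines = []
    for msg in emails[:max_items]:
        sender = msg.get("from", [{}])[0].get("email", "unknown")
        marker = "UNREAD " if msg.get("unread") else ""
        snippet = msg.get("snippet", "")[:80]  # cap snippet length to bound token use
        lines.append(f'{marker}[{msg["id"]}] {sender}: {msg.get("subject", "")} | {snippet}')
    return "\n".join(lines)
```

Appending this string instead of the full JSON keeps ids available for follow-up `nylas email read` calls while dropping the bulky to/date/folder fields.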
Using Cursor or Claude instead?
If you want email and calendar tools inside Claude Code, Cursor, or VS Code without building your own agent, use the Model Context Protocol (MCP) path. One command installs the Nylas MCP server and gives your assistant the same tools:
```bash
nylas mcp install --assistant claude-code
# or: cursor, windsurf, vscode
```

See Give AI Agents Email Access via MCP for full setup.
FAQ
Does this work with Anthropic, Gemini, or other LLM providers?
Yes. The tool pattern (define tools, handle tool calls, append results to context) is the same across providers. Swap the client.chat.completions.create call for your provider's equivalent. The CLI wrappers stay unchanged.
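For example, Anthropic's tool-use format carries the same information as the OpenAI definitions above, just reshaped: name and description at the top level, with the JSON schema under `input_schema` instead of a nested `function` object. A sketch of the translation (a pure dict reshape, no SDK required):

```python
def to_anthropic_tool(openai_tool):
    """Convert an OpenAI-style tool definition to Anthropic's tool-use shape."""
    fn = openai_tool["function"]
    return {
        "name": fn["name"],
        "description": fn["description"],
        "input_schema": fn["parameters"],
    }
```

So one list of CLI-backed tool definitions can feed both providers.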
What if the CLI is not in PATH?
Use the full path to the binary (e.g. /opt/homebrew/bin/nylas on macOS) or pass shell=True with the full command string. For Homebrew installs, which nylas shows the path.
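You can also resolve the path once at startup and fail fast with a clear message. A sketch using `shutil.which`; the fallback locations are the common Homebrew install paths, not something the CLI guarantees:

```python
import shutil

def find_binary(name="nylas", fallbacks=("/opt/homebrew/bin/nylas", "/usr/local/bin/nylas")):
    """Locate a CLI binary on PATH, then in common Homebrew locations."""
    path = shutil.which(name)
    if path:
        return path
    for candidate in fallbacks:
        # shutil.which also accepts absolute paths and checks they are executable
        if shutil.which(candidate):
            return candidate
    raise RuntimeError(f"{name} not found; use the full path or fix PATH")
```

Call it once and pass the result to your subprocess wrappers instead of the bare string "nylas".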
How do I use a specific mailbox when I have multiple grants?
Set NYLAS_GRANT_ID in the environment before running your agent, or pass the grant ID as the first argument to each command (e.g. nylas email list grant_xyz --json). Use nylas auth list to see your grants.
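In Python you can pin the grant per call by passing an env mapping to subprocess, so different tools (or different users) can target different mailboxes from one process. A sketch, assuming the NYLAS_GRANT_ID variable described above:

```python
import os
import subprocess

def run_with_grant(cmd, grant_id):
    """Run a command with NYLAS_GRANT_ID set for this subprocess only."""
    env = {**os.environ, "NYLAS_GRANT_ID": grant_id}  # copy, don't mutate os.environ
    return subprocess.run(cmd, capture_output=True, text=True, env=env)
```

The parent process environment is left untouched, so concurrent calls with different grants do not race.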
Why use --yes when sending email?
Without --yes, nylas email send prompts for confirmation interactively. In an agent loop, stdin is not available, so the command would hang. Always use --yes for non-interactive use.
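As a belt-and-suspenders measure, you can also close stdin and set a timeout so that no prompt, from this CLI or any other tool you wrap, can ever wedge the agent loop. A sketch using subprocess's built-in timeout:

```python
import subprocess

def run_nonblocking(cmd, timeout=30):
    """Run a command that can never hang on input: stdin is closed, runtime is capped."""
    try:
        result = subprocess.run(
            cmd, capture_output=True, text=True, timeout=timeout,
            stdin=subprocess.DEVNULL,  # a prompt reading stdin sees EOF instead of blocking
        )
    except subprocess.TimeoutExpired:
        return f"Error: command timed out after {timeout}s"
    return result.stdout if result.returncode == 0 else f"Error: {result.stderr}"
```

With stdin at DEVNULL, an unexpected confirmation prompt fails immediately with EOF, and the timeout catches anything else, so the error surfaces to the LLM as a string it can react to.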
Can I run this in a server or CI environment?
Yes. Authenticate with nylas auth config and set NYLAS_API_KEY in your environment. The CLI reads credentials from config and env vars, so no interactive login is needed after initial setup.
Next steps
- Give your AI coding agent an email address – setup for Claude Code, Cursor, Codex CLI, and OpenClaw
- Send email from the terminal – full CLI reference for email commands
- Manage calendar from the terminal – events, availability, timezone handling
- Give AI agents email access via MCP – plug into Claude, Cursor, or VS Code
- Build an AI email triage agent – classify, draft, and archive with Python + Nylas CLI
- Record meetings from the CLI – add meeting recording and transcription to agent workflows
- Receive inbound email – give your agent a dedicated email address for incoming messages
- Command reference – every flag and subcommand
- Email APIs for AI agents compared – Gmail API vs Graph vs SendGrid vs IMAP vs Nylas CLI