How to Build an LLM Agent with Email and Calendar Tools Using Nylas CLI
LLM agents are just an API call, a context array, and tools. Email and calendar are common tool needs. Instead of writing OAuth flows and provider-specific API clients, use the Nylas CLI as your tool backend. One subprocess call per tool. Same simplicity as giving your agent ping.
Why use the CLI as agent tools?
When you build an agent, you give it tools. A tool is a function the LLM can invoke. The classic example is ping: define it, wire it in, and the agent figures out when to call it. You never wrote a loop to ping multiple hosts. The agent did.
Email and calendar are the same idea. Your agent needs to read messages, send replies, check availability, create events. You could build API clients, manage OAuth tokens, and handle Gmail vs Outlook vs IMAP. Or you could run nylas email list and nylas calendar events list from your tool handlers.
The Nylas CLI already handles authentication, provider abstraction, and connection management. Your agent code stays simple: subprocess in, JSON out.
1. Install and authenticate
# Install
brew install nylas-cli/tap/nylas
# Authenticate (one-time)
nylas auth login
# Verify
nylas auth whoami
nylas email list --limit 3
2. The tool pattern
Every agent framework (OpenAI, Anthropic, etc.) expects tools as function definitions with JSON schemas. You implement each tool by calling the CLI. Here is the pattern:
import subprocess
import json

def list_emails(limit=10, unread_only=False):
    """List recent emails from the authenticated mailbox."""
    cmd = ["nylas", "email", "list", "--limit", str(limit), "--json"]
    if unread_only:
        cmd.append("--unread")
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        return f"Error: {result.stderr}"
    return result.stdout

def send_email(to, subject, body):
    """Send an email. Requires --yes to skip confirmation."""
    result = subprocess.run(
        ["nylas", "email", "send", "--to", to, "--subject", subject, "--body", body, "--yes"],
        capture_output=True,
        text=True
    )
    if result.returncode != 0:
        return f"Error: {result.stderr}"
    return "Email sent successfully."
3. Tool definitions for the LLM
The LLM needs a description of each tool. This is the JSON blob your framework expects. For the OpenAI Chat Completions API used below, each tool nests its name, description, and parameters under a function key:
tools = [
    {
        "type": "function",
        "function": {
            "name": "list_emails",
            "description": "List recent emails from the user's inbox. Use unread_only=True to filter unread only.",
            "parameters": {
                "type": "object",
                "properties": {
                    "limit": {"type": "integer", "description": "Max number of emails to return", "default": 10},
                    "unread_only": {"type": "boolean", "description": "Only return unread emails", "default": False}
                }
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "send_email",
            "description": "Send an email. Use for replies or new messages.",
            "parameters": {
                "type": "object",
                "properties": {
                    "to": {"type": "string", "description": "Recipient email address"},
                    "subject": {"type": "string", "description": "Email subject"},
                    "body": {"type": "string", "description": "Email body (plain text)"}
                },
                "required": ["to", "subject", "body"]
            }
        }
    }
]
4. Wire tools into your agent loop
When the LLM returns tool calls, append its message to the context, run each corresponding function, and append the results. Then call the LLM again. Same pattern as the ping example:
from openai import OpenAI
import json

client = OpenAI()
context = []

def call():
    return client.chat.completions.create(
        model="gpt-4o",
        messages=context,
        tools=tools,
        tool_choice="auto"
    )

def handle_tool_call(item):
    name = item.function.name
    args = json.loads(item.function.arguments)
    if name == "list_emails":
        result = list_emails(**args)
    elif name == "send_email":
        result = send_email(**args)
    else:
        result = "Unknown tool"
    context.append({
        "role": "tool",
        "tool_call_id": item.id,
        "content": result
    })

def process(user_input):
    context.append({"role": "user", "content": user_input})
    response = call()
    while response.choices[0].message.tool_calls:
        # The assistant message that requested the tools must precede the tool results in context
        context.append(response.choices[0].message)
        for item in response.choices[0].message.tool_calls:
            handle_tool_call(item)
        response = call()
    context.append({"role": "assistant", "content": response.choices[0].message.content})
    return response.choices[0].message.content
5. Add calendar tools
Same pattern for calendar. The CLI exposes list, create, and availability:
def list_events(days=7):
    """List upcoming calendar events."""
    result = subprocess.run(
        ["nylas", "calendar", "events", "list", "--days", str(days), "--json"],
        capture_output=True,
        text=True
    )
    return result.stdout if result.returncode == 0 else f"Error: {result.stderr}"

def create_event(title, start, end, participants=None):
    """Create a calendar event."""
    cmd = ["nylas", "calendar", "events", "create", "--title", title, "--start", start, "--end", end, "--yes"]
    if participants:
        for p in participants:
            cmd.extend(["--participant", p])
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.stdout if result.returncode == 0 else f"Error: {result.stderr}"

def find_meeting_time(participants, duration="30m"):
    """Find when participants are free for a meeting."""
    result = subprocess.run(
        ["nylas", "calendar", "find-time", "--participants", ",".join(participants),
         "--duration", duration, "--json"],
        capture_output=True,
        text=True
    )
    return result.stdout if result.returncode == 0 else f"Error: {result.stderr}"
6. CLI commands you can wrap
These Nylas CLI commands map directly to agent tools; a sketch wrapping the search and read commands follows the tables:
Email
| Command | Use case |
|---|---|
| nylas email list --json | List messages (add --unread, --limit) |
| nylas email search "query" --json | Search by keyword |
| nylas email read msg_id --json | Read full message |
| nylas email send --to X --subject Y --body Z --yes | Send email |
Calendar
| Command | Use case |
|---|---|
| nylas calendar events list --json | List events (add --days, --timezone) |
| nylas calendar events create --title X --start Y --end Z --yes | Create event |
| nylas calendar find-time --participants X,Y --duration 30m --json | Find free slots |
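The search and read commands follow the same wrapper pattern as the email tools in step 2. A minimal sketch, assuming only the flags shown in the table (adjust if your CLI version differs):
import subprocess

def search_emails(query):
    """Search the mailbox by keyword (wraps nylas email search)."""
    result = subprocess.run(
        ["nylas", "email", "search", query, "--json"],
        capture_output=True,
        text=True
    )
    return result.stdout if result.returncode == 0 else f"Error: {result.stderr}"

def read_email(message_id):
    """Read one full message by ID (wraps nylas email read)."""
    result = subprocess.run(
        ["nylas", "email", "read", message_id, "--json"],
        capture_output=True,
        text=True
    )
    return result.stdout if result.returncode == 0 else f"Error: {result.stderr}"
Register these in the tools list the same way as list_emails and send_email.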
7. Context engineering tips
Each tool output eats tokens. The CLI returns JSON. For long message lists, consider:
- Use --limit 5 or --limit 10 instead of fetching everything
- Summarize large outputs in a separate step before appending to context (a sketch of one approach follows this list)
- Only expose the tools the agent needs for the task. Email-only agents do not need calendar tools.
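Here is one way to handle the summarization step, as a sketch: pass small tool outputs through unchanged and summarize anything larger in a separate call. The 4000-character cutoff and the gpt-4o-mini model are illustrative choices, and client is the OpenAI client from step 4:
MAX_TOOL_OUTPUT_CHARS = 4000  # illustrative cutoff; tune for your model's context window

def compact_tool_output(output):
    """Pass small tool outputs through unchanged; summarize large ones in a separate LLM call."""
    if len(output) <= MAX_TOOL_OUTPUT_CHARS:
        return output
    summary = client.chat.completions.create(  # reuses the client from step 4
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": "Summarize this tool output. Keep message IDs, senders, subjects, and dates:\n\n" + output
        }]
    )
    return summary.choices[0].message.content
In handle_tool_call, append compact_tool_output(result) instead of result.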
Using Cursor or Claude instead?
If you want email and calendar tools inside Claude Code, Cursor, or VS Code without building your own agent, use the MCP path. One command installs the Nylas MCP server and gives your assistant the same tools:
nylas mcp install --assistant claude-code
# or: cursor, windsurf, vscode
See Give AI Agents Email Access via MCP for full setup.
FAQ
Does this work with Anthropic, Gemini, or other LLM providers?
Yes. The tool pattern (define tools, handle tool calls, append results to context) is the same across providers. Swap the client.chat.completions.create call for your provider's equivalent. The CLI wrappers stay unchanged.
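For example, a rough sketch with the Anthropic Python SDK: the list_emails wrapper from step 2 is reused as-is, and only the tool schema field names and response handling change (the model name and prompt here are illustrative):
import anthropic

anthropic_client = anthropic.Anthropic()

anthropic_tools = [{
    "name": "list_emails",
    "description": "List recent emails from the user's inbox.",
    "input_schema": {
        "type": "object",
        "properties": {
            "limit": {"type": "integer", "description": "Max number of emails to return"}
        }
    }
}]

response = anthropic_client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    tools=anthropic_tools,
    messages=[{"role": "user", "content": "Any unread email I should know about?"}]
)

for block in response.content:
    if block.type == "tool_use" and block.name == "list_emails":
        result = list_emails(**block.input)  # same CLI wrapper as in step 2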
What if the CLI is not in PATH?
Use the full path to the binary (e.g. /opt/homebrew/bin/nylas on macOS) or pass shell=True with the full command string. For Homebrew installs, which nylas shows the path.
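A small sketch of resolving the binary once at startup rather than hard-coding the path in every command:
import shutil

# Find the nylas binary on PATH; fall back to the common Homebrew location if it is not there.
NYLAS_BIN = shutil.which("nylas") or "/opt/homebrew/bin/nylas"

# Build commands with NYLAS_BIN instead of the bare name:
# cmd = [NYLAS_BIN, "email", "list", "--limit", "5", "--json"]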
How do I use a specific mailbox when I have multiple grants?
Set NYLAS_GRANT_ID in the environment before running your agent, or pass the grant ID as the first argument to each command (e.g. nylas email list grant_xyz --json). Use nylas auth list to see your grants.
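If you would rather not mutate the agent's own environment, you can also set the grant per subprocess call. A sketch, with grant_xyz standing in for a real grant ID:
import os
import subprocess

env = {**os.environ, "NYLAS_GRANT_ID": "grant_xyz"}  # grant_xyz is a placeholder
result = subprocess.run(
    ["nylas", "email", "list", "--limit", "5", "--json"],
    capture_output=True,
    text=True,
    env=env
)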
Why use --yes when sending email?
Without --yes, nylas email send prompts for confirmation interactively. In an agent loop, stdin is not available, so the command would hang. Always use --yes for non-interactive use.
Can I run this in a server or CI environment?
Yes. Authenticate with nylas auth config and set NYLAS_API_KEY in your environment. The CLI reads credentials from config and env vars, so no interactive login is needed after initial setup.
Next steps
- Send email from the terminal – full CLI reference for email commands
- Manage calendar from the terminal – events, availability, timezone handling
- Give AI agents email access via MCP – plug into Claude, Cursor, or VS Code
- Command reference – every flag and subcommand