Guide
Export Email Data to Salesforce
Salesforce captures relationships, but only when someone logs them. This guide exports email data from any email provider using Nylas CLI and pushes it into Salesforce — covering SOQL upserts, Bulk API 2.0 for large imports, governor limit strategies, and Apex triggers that fire on Task creation.
Written by Caleb Geene, Director of Site Reliability Engineering
Reviewed by Hazik
Einstein Activity Capture vs. CLI-based sync
Einstein Activity Capture (EAC) is Salesforce's built-in email logging feature, but it stores captured emails in a separate data store outside the standard Salesforce objects. Its records don't surface in SOQL queries, reports, or Apex trigger flows, and the feature requires Sales Cloud Einstein licenses at $75/user/month. A CLI-based sync that creates standard Task records avoids these limitations entirely.
According to Salesforce's own documentation, EAC activity records “are not the same as standard Task or Event records.” That means:
- EAC emails don't appear in SOQL queries or Salesforce reports
- Apex triggers don't fire when EAC logs an email
- Workflow rules and Process Builder can't reference them
- Data exports and backups won't include them
- EAC requires Sales Cloud Einstein licenses at $75/user/month
A CLI-based sync creates standard Task records. They show up in reports, trigger Apex code, and participate in every automation Salesforce offers. For a 50-person sales team, skipping EAC licenses also saves $45,000/year.
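The license math works out as follows (the $75/user/month figure comes from the EAC licensing note above; the 50-person team size is the illustrative example, not a Salesforce constant):

```python
# Annual EAC license cost avoided by creating standard Task records instead
users = 50
eac_per_user_monthly = 75  # USD per user per month for Sales Cloud Einstein
annual_savings = users * eac_per_user_monthly * 12
print(f"${annual_savings:,}/year")  # → $45,000/year
```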
Map to Salesforce Contact, Account, and Task objects
Salesforce uses a three-object model for email data: Contacts hold people, Accounts hold companies, and Tasks log email activities. Each email becomes a Task linked to both a Contact and an Account via the WhoId and WhatId lookup fields. Salesforce enforces a 255-character limit on the Task Subject field, so email subjects are truncated during import.
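A minimal sketch of that mapping in Python — the message fields (subject, date) follow the JSON shape used throughout this guide, and the Contact and Account IDs are placeholders:

```python
def email_to_task(msg: dict, contact_id: str, account_id: str) -> dict:
    """Map one exported email record to a Salesforce Task payload."""
    return {
        "WhoId": contact_id,    # Contact lookup
        "WhatId": account_id,   # Account (or Opportunity) lookup
        # Salesforce rejects Subjects over 255 characters, so truncate
        "Subject": (msg.get("subject") or "No subject")[:255],
        "ActivityDate": msg.get("date", "").split("T")[0],
        "Status": "Completed",
        "Type": "Email",
    }

task = email_to_task(
    {"subject": "Re: Q3 proposal", "date": "2026-03-10T14:30:00Z"},
    "003xx000004TmiQAAS",
    "001xx000003DGbEAAW",
)
print(task["Subject"])  # → Re: Q3 proposal
```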
The Nylas CLI exports contacts and emails as JSON, which can be mapped to Salesforce's field naming conventions. Salesforce uses camelCase field names (FirstName, LastName) rather than the snake_case format returned by the Nylas API (given_name, surname). The jq command below transforms a single contact record to show the mapping between Nylas fields and their Salesforce equivalents.
# Pull email history and contacts
nylas email list --json --limit 500 > emails.json
nylas contacts list --json --limit 500 > contacts.json
# Salesforce-specific: map to Contact fields (uses camelCase, not Title_Case like Zoho)
cat contacts.json | jq '.[0] | {
SF_FirstName: .given_name,
SF_LastName: .surname,
SF_Email: .emails[0].email,
SF_Phone: .phone_numbers[0].number,
SF_Title: .job_title
}'
Salesforce object relationships
Salesforce's data model extends beyond Lead, Contact, and Account with two junction objects that matter for email imports: AccountContactRelation, which lets one Contact belong to multiple Accounts, and OpportunityContactRole, which links Contacts to Opportunities with a role label like Decision Maker or Economic Buyer. Ignoring these objects during import creates orphaned records that break sales reporting.
AccountContactRelation is the junction object behind Salesforce's Contacts to Multiple Accounts feature. Before it existed, each Contact had a single AccountId lookup; now you can model the reality that a VP of Engineering at Acme might also sit on the board of three other companies. When importing email contacts, create AccountContactRelation records for every domain the contact emails from.
OpportunityContactRole links Contacts to Opportunities with a role: Decision Maker, Evaluator, Economic Buyer, or a custom value. When you log an email as a Task linked to an Opportunity, the Contact's role on that Opportunity determines how sales managers interpret the engagement. If the Decision Maker went silent for 2 weeks, that's a different signal than if the Evaluator did.
The SOQL queries below demonstrate three common lookups: finding all Opportunities for a Contact via OpportunityContactRole, listing all Accounts a Contact belongs to via AccountContactRelation, and identifying Contacts on open Opportunities who haven't received an email in the last 30 days. The third query uses LAST_N_DAYS:30, a Salesforce date literal that adjusts automatically.
# SOQL: Find all Opportunities for a Contact
SELECT OpportunityId, Role, IsPrimary
FROM OpportunityContactRole
WHERE ContactId = '003xx000004TmiQAAS'
# SOQL: Find all Accounts a Contact is related to
SELECT AccountId, Account.Name, Roles, IsActive
FROM AccountContactRelation
WHERE ContactId = '003xx000004TmiQAAS'
# SOQL: Find Contacts who haven't been emailed in 30 days
SELECT Id, Name, Email, Account.Name,
(SELECT Subject, ActivityDate FROM Tasks
WHERE Type = 'Email' ORDER BY ActivityDate DESC LIMIT 1)
FROM Contact
WHERE Id NOT IN (
SELECT WhoId FROM Task
WHERE Type = 'Email' AND ActivityDate > LAST_N_DAYS:30
)
AND AccountId IN (
SELECT AccountId FROM Opportunity WHERE IsClosed = false
)
Governor limits and Bulk API 2.0
Salesforce enforces governor limits on every transaction to protect its shared infrastructure. A synchronous transaction — including any Apex triggers your inserts fire — is capped at 10,000 DML rows and 100 SOQL queries, and every standard REST call counts against the org's daily API request limit. Bulk API 2.0 sidesteps the synchronous limits entirely by processing records in asynchronous server-side batches, supporting up to 150 million records per 24-hour rolling period according to Salesforce documentation.
- 10,000 DML rows per synchronous transaction
- 100 SOQL queries per synchronous transaction
- 6 MB heap size for Apex triggers fired by your inserts
- 100,000 REST API calls per day (a common org-wide daily limit; the exact quota varies by edition and user count)
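The routing rule that follows from these limits can be encoded directly in a sync script. A sketch — the 200-record threshold mirrors the guidance below; it's a judgment call, not a Salesforce constant:

```python
def choose_api(record_count: int) -> str:
    """Pick an ingestion path based on batch size."""
    if record_count <= 200:
        # Small batches: standard REST / Composite calls are simpler
        return "rest"
    # Larger batches: asynchronous Bulk API 2.0 avoids synchronous limits
    return "bulk_v2"

print(choose_api(50))    # → rest
print(choose_api(5000))  # → bulk_v2
```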
For imports under 200 records, the standard REST API works fine. For anything larger, Bulk API 2.0 is the right choice. The workflow has 5 steps: create a job, upload CSV data, close the job to start processing, poll for completion, and check results. Each Bulk API job accepts CSV files up to 150 MB per upload, and Salesforce processes the records asynchronously so your script doesn't block.
# Step 1: Create a Bulk API 2.0 job
JOB_ID=$(curl -s -X POST "$SF_INSTANCE/services/data/v59.0/jobs/ingest" \
-H "Authorization: Bearer $SF_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"object": "Contact",
"operation": "upsert",
"externalIdFieldName": "Email",
"contentType": "CSV",
"lineEnding": "LF"
}' | jq -r '.id')
echo "Created bulk job: $JOB_ID"
# Step 2: Upload CSV data
curl -s -X PUT "$SF_INSTANCE/services/data/v59.0/jobs/ingest/$JOB_ID/batches" \
-H "Authorization: Bearer $SF_TOKEN" \
-H "Content-Type: text/csv" \
--data-binary @salesforce_contacts.csv
# Step 3: Close the job to start processing
curl -s -X PATCH "$SF_INSTANCE/services/data/v59.0/jobs/ingest/$JOB_ID" \
-H "Authorization: Bearer $SF_TOKEN" \
-H "Content-Type: application/json" \
-d '{"state": "UploadComplete"}'
# Step 4: Poll for completion
while true; do
STATE=$(curl -s "$SF_INSTANCE/services/data/v59.0/jobs/ingest/$JOB_ID" \
-H "Authorization: Bearer $SF_TOKEN" | jq -r '.state')
echo "Job state: $STATE"
[[ "$STATE" == "JobComplete" || "$STATE" == "Failed" ]] && break
sleep 5
done
# Step 5: Check results
curl -s "$SF_INSTANCE/services/data/v59.0/jobs/ingest/$JOB_ID" \
-H "Authorization: Bearer $SF_TOKEN" \
| jq '{numberRecordsProcessed, numberRecordsFailed}'
Bulk API 2.0 accepts CSV files with headers matching Salesforce field names. The jq command below transforms the Nylas CLI JSON output into a CSV with five columns: FirstName, LastName, Email, Phone, and Title. Contacts missing a surname default to “Unknown” since Salesforce requires a non-empty LastName field on every Contact record.
# Transform contacts.json into Salesforce Bulk API CSV
cat contacts.json | jq -r '
["FirstName","LastName","Email","Phone","Title"],
(.[] | [
(.given_name // ""),
(.surname // "Unknown"),
((.emails // [])[0].email // ""),
((.phone_numbers // [])[0].number // ""),
(.job_title // "")
])
| @csv' > salesforce_contacts.csv
Composite requests for small batches
Salesforce's Composite API batches up to 25 subrequests into a single HTTP round-trip, and each subrequest can reference the ID returned by a previous one. This makes it ideal for imports under 200 records where you need to create an Account, a Contact, and a Task in one atomic operation. A Composite request counts as a single call against the org's daily REST API limit, regardless of how many subrequests it contains.
The example below creates an Account, links a new Contact to it using the @{newAccount.id} reference syntax, and logs a completed email Task tied to both records. Setting allOrNone: true rolls back all subrequests if any single one fails, preventing orphaned records in the org.
# Composite request: create Account + Contact + Task in one call
curl -s -X POST "$SF_INSTANCE/services/data/v59.0/composite" \
-H "Authorization: Bearer $SF_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"allOrNone": true,
"compositeRequest": [
{
"method": "POST",
"url": "/services/data/v59.0/sobjects/Account",
"referenceId": "newAccount",
"body": {
"Name": "Acme Corp",
"Website": "https://acme.com"
}
},
{
"method": "POST",
"url": "/services/data/v59.0/sobjects/Contact",
"referenceId": "newContact",
"body": {
"FirstName": "Sarah",
"LastName": "Chen",
"Email": "sarah@acme.com",
"AccountId": "@{newAccount.id}"
}
},
{
"method": "POST",
"url": "/services/data/v59.0/sobjects/Task",
"referenceId": "newTask",
"body": {
"WhoId": "@{newContact.id}",
"WhatId": "@{newAccount.id}",
"Subject": "Re: Q3 proposal follow-up",
"ActivityDate": "2026-03-10",
"Status": "Completed",
"Type": "Email"
}
}
]
}'
Because each subrequest can read IDs returned by earlier ones, and allOrNone rolls everything back on any failure, the entire Account → Contact → Task chain lands atomically in a single API call.
Apex triggers on Task creation
Apex triggers fire automatically when the sync creates standard Task records, which is one of the main advantages over Einstein Activity Capture. EAC records are stored outside standard objects and never fire Apex triggers or workflow rules. With CLI-based Task creation, every email logged can trigger custom Apex code that updates Contact fields, changes Opportunity stages, or sends Slack alerts in real time. Salesforce allows up to 200 records per trigger batch in a single DML operation.
The trigger below updates a custom Last_Email_Date__c field on the Contact whenever an email-type Task is created. It collects all affected Contact IDs in a Set to avoid duplicate processing, then issues a single bulk update that counts as 1 DML operation regardless of list size.
// Apex trigger: update Contact.Last_Email_Date__c when email Task is created
// Deploy via sfdx: sfdx force:source:push
trigger UpdateLastEmailDate on Task (after insert) {
Set<Id> contactIds = new Set<Id>();
for (Task t : Trigger.new) {
// Only process email-type Tasks linked to a Contact
if (t.Type == 'Email' && t.WhoId != null
&& String.valueOf(t.WhoId).startsWith('003')) {
contactIds.add(t.WhoId);
}
}
if (contactIds.isEmpty()) return;
// Batch the update to stay within governor limits
List<Contact> toUpdate = new List<Contact>();
for (Id cid : contactIds) {
toUpdate.add(new Contact(
Id = cid,
Last_Email_Date__c = Date.today()
));
}
// This single DML counts as 1 operation regardless of list size
update toUpdate;
}
More trigger ideas that fire on email Task creation:
- Auto-update Opportunity stage if a Contact with the Decision Maker role on an Opportunity receives an email after 14+ days of silence
- Create a follow-up Task if no reply is logged within 3 business days of an outbound email
- Update Account health score based on email frequency across all Contacts at the Account
- Send Slack alert via Salesforce outbound message when a dormant Account gets re-engaged
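The follow-up-Task idea hinges on a business-day calculation. A sketch in Python (in a real org this logic would live in Apex alongside the trigger, so treat it as pseudologic for the scheduling step):

```python
from datetime import date, timedelta

def add_business_days(start: date, days: int) -> date:
    """Return the date `days` business days after `start` (Mon–Fri only)."""
    current = start
    remaining = days
    while remaining > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday=0 .. Friday=4
            remaining -= 1
    return current

# Outbound email sent Friday 2026-03-13 → follow-up due Wednesday 2026-03-18
print(add_business_days(date(2026, 3, 13), 3))  # → 2026-03-18
```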
sfdx CLI interop
The Salesforce CLI (sfdx) can accept piped input from the Nylas CLI for record creation without a separate script. The sfdx force:data:record:create command takes field-value pairs inline, and sfdx force:data:bulk:upsert handles CSV files with the same Bulk API 2.0 backend that processes up to 150 million records per day. For teams already using sfdx for deployment and metadata management, this avoids adding curl-based API calls to the workflow.
The commands below show three patterns: creating a single Contact from Nylas CLI JSON output using jq to extract fields, bulk-upserting a CSV file with Email as the external ID for deduplication, and verifying the import with a SOQL query filtered to records created today.
# Create a Contact from Nylas CLI output using sfdx
CONTACT=$(nylas contacts list --json --limit 1 | jq '.[0]')
FIRST=$(echo "$CONTACT" | jq -r '.given_name // ""')
LAST=$(echo "$CONTACT" | jq -r '.surname // "Unknown"')
EMAIL=$(echo "$CONTACT" | jq -r '.emails[0].email')
sfdx force:data:record:create \
  --sobject Contact \
  --values "FirstName='$FIRST' LastName='$LAST' Email='$EMAIL'"
# Bulk upsert via sfdx using CSV
sfdx force:data:bulk:upsert \
--sobject Contact \
--csvfile salesforce_contacts.csv \
--externalid Email \
--wait 10
# Query to verify import
sfdx force:data:soql:query \
--query "SELECT Id, Name, Email, CreatedDate
FROM Contact
WHERE CreatedDate = TODAY
ORDER BY CreatedDate DESC
LIMIT 20"
Python sync with simple_salesforce
The simple_salesforce Python library wraps Salesforce's REST and Bulk APIs into a single client that handles authentication, session refresh, and JSON serialization. The script below runs a three-phase sync: bulk-upsert Contacts using Email as the external ID for deduplication, build a SOQL-based email-to-ContactId map in 50-record batches (keeping each query comfortably under SOQL's statement-length limit), and log each email as a completed Task linked to both the Contact and any open Opportunity.
Phase 1 uses the Bulk API, which bypasses the 10,000-DML synchronous governor limit. Phases 2 and 3 use standard REST calls, which are sufficient since they operate on already-imported records. The OpportunityContactRole lookup in Phase 3 connects email Tasks to open deals, giving sales managers visibility into which Opportunities have active email engagement.
#!/usr/bin/env python3
"""Sync Nylas CLI email data to Salesforce via Bulk API 2.0."""
import json
import subprocess
import os
from simple_salesforce import Salesforce
sf = Salesforce(
username=os.environ["SF_USERNAME"],
password=os.environ["SF_PASSWORD"],
security_token=os.environ["SF_SECURITY_TOKEN"],
)
# Export from Nylas CLI
emails = json.loads(subprocess.run(
["nylas", "email", "list", "--json", "--limit", "500"],
capture_output=True, text=True, check=True,
).stdout)
contacts = json.loads(subprocess.run(
["nylas", "contacts", "list", "--json", "--limit", "500"],
capture_output=True, text=True, check=True,
).stdout)
print(f"Exported {len(emails)} emails, {len(contacts)} contacts")
# Phase 1: Bulk upsert Contacts
contact_records = []
for c in contacts:
email_list = c.get("emails", [])
if not email_list:
continue
contact_records.append({
"Email": email_list[0]["email"],
"FirstName": c.get("given_name", ""),
"LastName": c.get("surname", "") or "Unknown",
"Phone": (c.get("phone_numbers") or [{}])[0].get("number", ""),
"Title": c.get("job_title", ""),
})
# Salesforce bulk upsert — bypasses synchronous governor limits
if contact_records:
result = sf.bulk.Contact.upsert(contact_records, "Email")
success = sum(1 for r in result if r.get("success"))
print(f"Bulk upserted {success}/{len(contact_records)} contacts")
# Phase 2: Build email-to-ContactId map via SOQL
email_addrs = [r["Email"] for r in contact_records if r["Email"]]
contact_map: dict[str, str] = {}
# Keep IN-clause batches small to stay well under SOQL query length limits
for i in range(0, len(email_addrs), 50):
batch = email_addrs[i:i+50]
quoted = ",".join(f"'{e}'" for e in batch)
results = sf.query(
f"SELECT Id, Email FROM Contact WHERE Email IN ({quoted})"
)
for rec in results["records"]:
contact_map[rec["Email"]] = rec["Id"]
print(f"Mapped {len(contact_map)} contacts by email")
# Phase 3: Log emails as Tasks with WhoId + WhatId
# Find open Opportunities for Contacts
opp_map: dict[str, str] = {}
if contact_map:
cids = ",".join(f"'{v}'" for v in contact_map.values())
ocr_results = sf.query(
f"""SELECT ContactId, OpportunityId
FROM OpportunityContactRole
WHERE ContactId IN ({cids})
AND Opportunity.IsClosed = false"""
)
for rec in ocr_results["records"]:
opp_map[rec["ContactId"]] = rec["OpportunityId"]
task_count = 0
for msg in emails:
sender_email = (msg.get("from") or [{}])[0].get("email", "")
contact_id = contact_map.get(sender_email)
if not contact_id:
continue
task_data = {
"WhoId": contact_id,
"Subject": (msg.get("subject") or "No subject")[:255],
"ActivityDate": msg.get("date", "").split("T")[0],
"Status": "Completed",
"Type": "Email",
"Description": (msg.get("body") or "")[:500],
}
# Link to Opportunity if Contact has one
opp_id = opp_map.get(contact_id)
if opp_id:
task_data["WhatId"] = opp_id
sf.Task.create(task_data)
task_count += 1
print(f"Logged {task_count} email Tasks")
print(f"  {len(opp_map)} linked to Opportunities")
The simple_salesforce package requires Python 3.7 or later and authenticates using a username, password, and security token. Salesforce security tokens are issued per user from Setup → My Personal Information → Reset My Security Token. Set the three environment variables below before running the sync script.
pip install simple-salesforce
export SF_USERNAME="you@company.com"
export SF_PASSWORD="your-password"
export SF_SECURITY_TOKEN="your-security-token"
python3 sync_salesforce.py
Data Cloud integration
Salesforce Data Cloud (formerly Customer Data Platform) unifies data from multiple sources into a single customer profile. When an org has Data Cloud enabled, email data imported via the Nylas CLI feeds into the unified profile automatically once it hits the Contact and Task objects. Data Cloud processes ingestion within 15 minutes of record creation, according to Salesforce's Data Cloud documentation, meaning email engagement data appears alongside web visits, support tickets, and product usage in near real-time.
For direct Data Cloud ingestion that bypasses standard objects entirely, Salesforce provides the Ingestion API. This approach is useful when you want to stream raw email events (open, reply, thread length) as custom data points without creating Task records. The Ingestion API accepts JSON payloads of up to 200 MB per request and maps them to Data Cloud data model objects defined in your org's schema.
# Data Cloud Ingestion API — stream email events directly
curl -s -X POST "$SF_INSTANCE/api/v1/ingest/sources/<connector-id>/<object-name>" \
-H "Authorization: Bearer $SF_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"data": [
{
"email_address": "sarah@acme.com",
"event_type": "email_received",
"subject": "Re: Q3 proposal",
"timestamp": "2026-03-12T14:30:00Z",
"thread_length": 5,
"reply_time_minutes": 45
}
]
}'
Next steps
- Export email data to HubSpot — the same workflow targeting HubSpot's API v3 batch endpoints and timeline events
- Organize emails by company and domain — group your inbox by sender domain before importing to Salesforce
- Enrich contacts from email signatures — extract job titles and phone numbers to fill in Salesforce fields
- Build contact hierarchy from email — infer reporting lines for Account planning
- CRM Email Workflows series — all 8 guides for extracting CRM intelligence from your inbox
- Full command reference — every Nylas CLI flag and subcommand documented
- Salesforce REST API Developer Guide — the official reference for the endpoints used in this guide
- Salesforce Bulk API 2.0 — for ingesting large CSV imports beyond the REST limits
- Salesforce Data Loader — the GUI alternative when scripting CSV import isn't needed