GitHub Actions Email Notifications
Send build, test, and deployment email alerts from GitHub Actions with Nylas CLI. Use workflow secrets for credentials and send only when the job fails or reaches the status you care about.
Written by Qasim Muhammad, Staff SRE
Reviewed by Qasim Muhammad
Why send email from GitHub Actions?
GitHub notifications are useful for developers who live in GitHub. Email is better when the alert must reach a shared operations inbox, a customer-facing team, or a stakeholder who does not watch repository notifications.
Nylas CLI keeps the workflow small: no SMTP password in the repository, no Postfix setup on the runner, and no provider-specific SDK in your CI scripts.
1. Add workflow secrets
Add these secrets to your GitHub repository or organization:
- `NYLAS_API_KEY` -- the API key used by the CLI
- `NYLAS_GRANT_ID` -- the account that sends the alert
- `ALERT_EMAIL_TO` -- the destination address or list
Keep the sender grant dedicated to automation when possible. That makes audit trails and revocation cleaner than using a developer's personal mailbox.
2. Send an email when tests fail
This workflow sends one failure notification with `nylas email send` after the test step fails. The CLI reads the API key from the environment, and the grant ID is passed as the first argument to the send command.
```yaml
name: CI
on:
  push:
    branches: [main]
  pull_request:
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 22
      - run: npm ci
      - run: npm test
      - name: Install Nylas CLI
        if: failure()
        run: curl -fsSL https://cli.nylas.com/install.sh | bash
      - name: Send failure email
        if: failure()
        env:
          NYLAS_API_KEY: ${{ secrets.NYLAS_API_KEY }}
          NYLAS_GRANT_ID: ${{ secrets.NYLAS_GRANT_ID }}
          ALERT_EMAIL_TO: ${{ secrets.ALERT_EMAIL_TO }}
        run: |
          nylas email send "$NYLAS_GRANT_ID" \
            --to "$ALERT_EMAIL_TO" \
            --subject "CI failed: ${{ github.repository }}" \
            --body "Run: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}" \
            --yes \
            --json
```

3. Send a richer deployment report
For deployment workflows, write a short HTML file and pass its contents as the body. This keeps YAML readable and makes it easy to include links to logs, environments, and release notes.
```bash
cat > email.html <<'HTML'
<h1>Deployment failed</h1>
<p>The production deployment did not finish.</p>
<p>Open the GitHub Actions run for logs and retry options.</p>
HTML

nylas email send "$NYLAS_GRANT_ID" \
  --to "$ALERT_EMAIL_TO" \
  --subject "Deployment failed" \
  --body "$(cat email.html)" \
  --yes \
  --json
```

If your workflow sends high-volume alerts, add a job-level condition so only the final status email sends. Avoid sending one email per failing matrix job unless the recipient list expects that noise.
Which CI events deserve an email?
Email is best for events that need attention outside GitHub. A failed pull request test usually belongs in the GitHub UI or chat. A failed production deployment, failed nightly data job, security scan result, or release rollback can justify email because the audience may include support, operations, product, or customer-facing teams.
Start with a short list of high-signal alerts. Send one email when the final job status is failed. Avoid sending one email for every matrix job unless each matrix target has a different owner. Alert fatigue is real in CI systems, and a noisy email workflow gets filtered or ignored quickly.
Use the email subject to answer three questions: what failed, where it failed, and which repository or environment was involved. A subject like `Production deploy failed: api-service` is more useful than `CI failed`. The body can include the run URL, branch, commit SHA, actor, environment, and next action.
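A sketch of a failure step built around that subject structure, reusing the secrets from step 1 (the "Production deploy" wording is illustrative; adjust it to the workflow's actual purpose):

```yaml
- name: Send failure email
  if: failure()
  env:
    NYLAS_API_KEY: ${{ secrets.NYLAS_API_KEY }}
    NYLAS_GRANT_ID: ${{ secrets.NYLAS_GRANT_ID }}
    ALERT_EMAIL_TO: ${{ secrets.ALERT_EMAIL_TO }}
  run: |
    # Subject answers: what failed, where it failed, and which repository.
    nylas email send "$NYLAS_GRANT_ID" \
      --to "$ALERT_EMAIL_TO" \
      --subject "Production deploy failed: ${{ github.repository }} (${{ github.ref_name }})" \
      --body "Run: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }} | Commit: ${{ github.sha }} | Actor: ${{ github.actor }}" \
      --yes \
      --json
```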
How should secrets be scoped?
Store the Nylas API key and sender grant ID as GitHub secrets. Use repository secrets for a single project and organization secrets when several repositories send alerts through the same automation account. If the sender account changes, rotate the grant secret without editing workflow files.
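If the GitHub CLI is installed and authenticated, the secrets can be set without the web UI; the organization and repository names below are placeholders:

```bash
# Repository secrets (run inside a clone of the repository;
# each command prompts for the secret value).
gh secret set NYLAS_API_KEY
gh secret set NYLAS_GRANT_ID
gh secret set ALERT_EMAIL_TO

# Organization secret shared by selected repositories.
gh secret set NYLAS_API_KEY --org your-org \
  --visibility selected --repos "api-service,web-app"
```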
Use a dedicated sender account for CI notifications. A shared automation mailbox is easier to audit and revoke than a developer mailbox. The From name and signature can make it clear that the message came from a workflow, not a human. That reduces confusion when recipients reply to an alert.
Do not echo secrets in the workflow. Keep `set -x` off for steps that call the CLI. If you build a multi-line body with shell variables, print a redacted preview only when needed. GitHub masks known secret values, but it cannot mask every derived value or message body you create at runtime.
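One way to cover derived values is GitHub's `add-mask` workflow command, which registers extra strings for the runner to redact. A minimal sketch, where the base64 transform stands in for any derivation that defeats automatic masking:

```yaml
- name: Register a derived value for masking
  env:
    API_TOKEN: ${{ secrets.NYLAS_API_KEY }}
  run: |
    # A transformed secret is no longer recognized by the automatic masker,
    # so register the derived value before it can reach the logs.
    TOKEN_B64="$(printf '%s' "$API_TOKEN" | base64)"
    echo "::add-mask::$TOKEN_B64"
```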
How do you avoid duplicate CI emails?
Put the email step near the end of the job and guard it with a status condition such as `if: failure()`. If multiple jobs can fail independently, create a final notification job that depends on them and sends one summary. That job can inspect `needs` results and include a short table in the email body.
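A sketch of that final notification job; the `test`, `build`, and `deploy` job names are illustrative, and the job runs only when at least one dependency failed:

```yaml
notify:
  needs: [test, build, deploy]
  if: ${{ always() && contains(needs.*.result, 'failure') }}
  runs-on: ubuntu-latest
  steps:
    - name: Install Nylas CLI
      run: curl -fsSL https://cli.nylas.com/install.sh | bash
    - name: Send summary email
      env:
        NYLAS_API_KEY: ${{ secrets.NYLAS_API_KEY }}
        NYLAS_GRANT_ID: ${{ secrets.NYLAS_GRANT_ID }}
        ALERT_EMAIL_TO: ${{ secrets.ALERT_EMAIL_TO }}
      run: |
        # One summary message covering every upstream job result.
        nylas email send "$NYLAS_GRANT_ID" \
          --to "$ALERT_EMAIL_TO" \
          --subject "CI failed: ${{ github.repository }}" \
          --body "test: ${{ needs.test.result }} | build: ${{ needs.build.result }} | deploy: ${{ needs.deploy.result }} | Run: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}" \
          --yes \
          --json
```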
For matrix builds, decide whether recipients need per-platform detail. A library maintainer may need to know that Windows failed while Linux passed. A stakeholder usually needs only the final result and a link to the run. Send the detailed matrix output in the body, not as separate messages, unless each platform has a different owner.
Use branch and event filters. You may want failure emails for `main`, release branches, scheduled jobs, and deployments, but not for every draft pull request. GitHub Actions can filter by event, branch, path, and job condition. That keeps email focused on events that require human response.
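A sketch of trigger filters for that policy (the cron time and branch pattern are illustrative); the email step itself keeps its `if: failure()` guard:

```yaml
on:
  push:
    branches:
      - main
      - 'release/**'
  schedule:
    - cron: '0 5 * * *'
  workflow_dispatch:
```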
What should the notification body include?
A CI email should be scannable in ten seconds. Put the status, repository, branch, commit, actor, workflow name, and run link near the top. Then include environment, job name, and any deploy target. If the message is about a release, add the version or tag. If it is about a scheduled job, include the schedule name and data window.
Use HTML sparingly. A heading, a few paragraphs, and a short list are enough. Avoid dumping entire logs into the email. Logs are better left in GitHub Actions where search, folding, and retention already exist. The email should point to the exact run and explain why someone is receiving it.
Add a clear ownership line. For example: `Owner: platform-oncall@example.com` or `Escalate in #release`. Email often gets forwarded, so the body should preserve the next step even after it leaves GitHub. That turns the alert into an action, not just a status report.
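A sketch of a body template that follows this structure; the owner address and escalation channel are placeholders, and the `GITHUB_*` variables are the runner's default environment variables (the heredoc delimiter is unquoted so the shell expands them):

```bash
cat > email.html <<HTML
<h1>Production deploy failed: ${GITHUB_REPOSITORY}</h1>
<p>Run: ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}/actions/runs/${GITHUB_RUN_ID}</p>
<p>Branch: ${GITHUB_REF_NAME} | Commit: ${GITHUB_SHA} | Actor: ${GITHUB_ACTOR}</p>
<p>Owner: platform-oncall@example.com | Escalate in #release</p>
HTML
```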
How do you operate workflow email over time?
Review the recipient list regularly. CI alerts often start with one team and slowly grow to include too many people. Keep a small default list and route specialized alerts to specialized owners. If a person cannot act on the email, they probably should not receive it.
Track send failures separately from job failures. If a deployment fails and the email step also fails, you need to know both facts. The workflow should not hide the original deployment error behind an email error. Consider writing the email command output to the job summary so maintainers can see whether the alert was sent.
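One pattern, sketched below: let the email step record its own outcome in the job summary, and mark it `continue-on-error` so a send failure never replaces the deployment failure as the job's reported error:

```yaml
- name: Send failure email
  if: failure()
  continue-on-error: true
  env:
    NYLAS_API_KEY: ${{ secrets.NYLAS_API_KEY }}
    NYLAS_GRANT_ID: ${{ secrets.NYLAS_GRANT_ID }}
    ALERT_EMAIL_TO: ${{ secrets.ALERT_EMAIL_TO }}
  run: |
    # Record whether the alert went out without masking the original failure.
    if nylas email send "$NYLAS_GRANT_ID" \
         --to "$ALERT_EMAIL_TO" \
         --subject "Deploy failed: ${{ github.repository }}" \
         --body "Run: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}" \
         --yes --json > send.json; then
      echo "Failure alert sent." >> "$GITHUB_STEP_SUMMARY"
    else
      echo "Failure alert did NOT send; check the Nylas secrets." >> "$GITHUB_STEP_SUMMARY"
    fi
```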
Rotate credentials on a schedule and after team changes. A dedicated automation sender makes this easier. Update the GitHub secret, run a manual workflow dispatch against a test branch, and confirm the message lands in the expected inbox. Keep the smoke test separate from production failure alerts so you can verify the channel without causing false alarms.
How should alerts differ by environment?
Production alerts should be short, direct, and routed to people who can act. Staging alerts can go to a smaller engineering list or chat archive. Pull request alerts usually belong in GitHub unless the repository has an external reviewer workflow. Treat each environment as a separate audience.
Include the environment name in the subject and body. A failed staging deploy and a failed production deploy have different urgency, even if the workflow name is the same. The recipient should not have to open the GitHub run to learn whether customers are affected.
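A sketch using a `workflow_dispatch` input to carry the environment into the subject and body; the `deploy.sh` script is hypothetical:

```yaml
on:
  workflow_dispatch:
    inputs:
      environment:
        description: Deploy target
        type: choice
        options: [staging, production]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./deploy.sh "${{ inputs.environment }}"
      - name: Install Nylas CLI
        if: failure()
        run: curl -fsSL https://cli.nylas.com/install.sh | bash
      - name: Send failure email
        if: failure()
        env:
          NYLAS_API_KEY: ${{ secrets.NYLAS_API_KEY }}
          NYLAS_GRANT_ID: ${{ secrets.NYLAS_GRANT_ID }}
          ALERT_EMAIL_TO: ${{ secrets.ALERT_EMAIL_TO }}
        run: |
          nylas email send "$NYLAS_GRANT_ID" \
            --to "$ALERT_EMAIL_TO" \
            --subject "[${{ inputs.environment }}] Deploy failed: ${{ github.repository }}" \
            --body "Environment: ${{ inputs.environment }} | Run: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}" \
            --yes \
            --json
```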
Use different sender identities if the audience needs that separation. A production incident sender can be reserved for high-priority operational messages, while routine nightly job failures can come from a lower-priority automation mailbox. That helps recipients build filters without losing important messages.
Why add a manual test workflow?
A manual dispatch workflow lets you test the email path without breaking CI on purpose. It can install the CLI, send a test message to the alert list, and include a clear subject such as `CI email channel test`. Run it after rotating secrets, changing recipients, or updating the install step.
Keep the manual test separate from failure alerts. If the same workflow both tests and reports production failures, people may confuse drills with incidents. A small `workflow_dispatch` job with a hard-coded safe subject is easier to audit and easier to explain.
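A sketch of such a test workflow; the optional weekly schedule (see the next section on scheduled checks) and cron time are illustrative:

```yaml
name: Email channel test
on:
  workflow_dispatch:
  schedule:
    - cron: '0 8 * * 1'

jobs:
  smoke-test:
    runs-on: ubuntu-latest
    steps:
      - name: Install Nylas CLI
        run: curl -fsSL https://cli.nylas.com/install.sh | bash
      - name: Send test email
        env:
          NYLAS_API_KEY: ${{ secrets.NYLAS_API_KEY }}
          NYLAS_GRANT_ID: ${{ secrets.NYLAS_GRANT_ID }}
          ALERT_EMAIL_TO: ${{ secrets.ALERT_EMAIL_TO }}
        run: |
          # Hard-coded safe subject so recipients never mistake it for an incident.
          nylas email send "$NYLAS_GRANT_ID" \
            --to "$ALERT_EMAIL_TO" \
            --subject "CI email channel test" \
            --body "Channel verification from ${{ github.repository }}. No action required." \
            --yes \
            --json
```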
Record the last successful channel test. A simple run log or team note helps during incidents because responders know the alert path was recently verified. If no one has tested the channel in months, a failed deployment is the wrong time to discover that a secret expired.
How do you keep CI alerts out of spam?
Use a real, trusted sender account and avoid spam-like subjects. CI messages should be plain about the failure and link to the GitHub run. Avoid all-caps subjects, excessive punctuation, and large HTML blocks. Operational email should look like operational email.
Keep recipient lists intentional. Sending every failure to a large group increases complaints, filters, and unread mail. Route alerts to teams that can act, then let those teams forward or escalate when needed. Lower complaint rates help the sender stay trusted.
If alerts are business-critical, monitor whether they arrive. A scheduled test message to a monitored inbox can verify the send path. For high-severity systems, pair email with another channel so one delivery problem does not hide a production issue.
What audit trail should CI email leave?
The GitHub run is the primary audit record. The email should point to it and include enough context to find it later: repository, workflow, run ID, branch, commit SHA, and actor. That makes the email useful even after it has been forwarded or archived.
Store message IDs when possible. If the CLI returns a sent message ID in JSON, write it to the job summary or logs. That gives support and engineering a way to confirm the alert was sent without searching inboxes manually.
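A sketch of that pattern, assuming the CLI's `--json` output includes a message `id` field (verify the field name against your CLI version before relying on it); `jq` is preinstalled on GitHub-hosted runners:

```bash
# Send the alert and keep the CLI's JSON response for the audit trail.
nylas email send "$NYLAS_GRANT_ID" \
  --to "$ALERT_EMAIL_TO" \
  --subject "CI failed: $GITHUB_REPOSITORY" \
  --body "Run: $GITHUB_SERVER_URL/$GITHUB_REPOSITORY/actions/runs/$GITHUB_RUN_ID" \
  --yes \
  --json > send.json

# Assumed field name: adjust '.id' to match the actual JSON structure.
MESSAGE_ID="$(jq -r '.id // empty' send.json)"
echo "Sent message ID: ${MESSAGE_ID:-unknown}" >> "$GITHUB_STEP_SUMMARY"
```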
Do not store email bodies with secrets in artifacts. If the body includes private deployment links or customer context, keep it out of public artifacts and use short retention. The alert should help incident response without creating another data exposure surface.
What makes a CI email useful to the recipient?
The recipient should know why they got the message. Include the workflow name, environment, status, and owner in the first few lines. If the alert is informational, say so. If action is required, name the action and the team that owns it.
Avoid sending raw terminal output in the body. A short error excerpt can help, but the GitHub run is the right place for full logs. The email should point to the source of truth and summarize the failure in terms a human can route.
Use the same structure every time. When alerts follow a predictable format, responders can scan them faster and build mail filters. Consistency matters more than decorative HTML. A plain, reliable message beats a styled email that hides the run link.
Include a quiet success path somewhere outside the alert email. A job summary, deployment dashboard, or release record can show that the workflow completed. Email should stay focused on exceptions and important milestones so recipients keep trusting it.
Let recipients opt into lower-priority categories. A release manager may want every deployment message, while an engineering lead may want only failures on protected branches. Separate routing keeps the channel useful for both groups.
Give each alert a clear owner. If an email goes to a shared inbox, the body should still name the team or rotation that should act. Shared recipients without ownership create bystander risk: everyone saw the failure, but nobody knew they were accountable for the next step.
Add one unsubscribe or routing note for non-critical streams. Operational failure alerts may need to stay mandatory, but release summaries, nightly reports, and informational build emails should tell recipients how to change routing. That keeps the channel focused and lowers the chance that people filter out the alerts that matter.
References for this workflow
- GitHub Actions workflow syntax -- YAML structure, jobs, steps, and conditions
- GitHub Actions expressions -- status checks such as `failure()`
- GitHub Actions secrets -- storing credentials for workflow steps
Next steps
- Send email from the terminal -- full send command guide
- Email deliverability with Nylas CLI -- verify CI mail reaches inboxes
- Email signature extraction from the CLI -- add stored signatures to automated sends
- Command reference -- email send flags and JSON output