llms.txt
Machine-readable API documentation for LLMs and AI agents
Overview
daimon.email serves llms.txt and llms-full.txt files at the root domain to help LLMs and AI agents understand the API without parsing HTML documentation.
Info
The llms.txt standard provides a lightweight, text-based format for API documentation that LLMs can consume efficiently. It's optimized for token economy and comprehension.
What is llms.txt?
llms.txt is a plain text file that describes an API in a format optimized for language models:
- No HTML parsing required (saves tokens, reduces errors)
- Structured but readable (easy for LLMs to extract key information)
- Compact (minimizes context window usage)
- Hierarchical (clear organization for navigation)
Accessing llms.txt
Two versions are available:
Compact Version
URL: https://daimon.email/llms.txt
Contains essential information for quick reference:
- API base URL and version
- Authentication method
- Core endpoints (inboxes, messages, webhooks)
- Response formats
- Rate limits
Size: ~5KB (~1,200 tokens)
Full Version
URL: https://daimon.email/llms-full.txt
Contains comprehensive documentation:
- All endpoints with examples
- Error codes and handling
- Tier limits and pricing
- Workflow patterns
- Troubleshooting guides
Size: ~50KB (~12,000 tokens)
Note
LLMs should start with llms.txt for quick lookups and load llms-full.txt only when detailed information is needed.
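The guidance in this note can be expressed as a small selection helper that decides which file to fetch based on remaining context budget. This is a sketch: the function name and the 2x safety margin are our own assumptions; the URLs and token estimates come from this page.

```typescript
// Approximate token costs taken from the sizes listed above (estimates, not guarantees).
const DOC_TOKENS = { compact: 1_200, full: 12_000 } as const;

// Pick which llms.txt variant to load given the remaining context-window budget.
// Load the full version only when detail is needed AND the budget comfortably covers it.
function pickDocsUrl(remainingTokenBudget: number, needDetail: boolean): string {
  if (needDetail && remainingTokenBudget >= DOC_TOKENS.full * 2) {
    return "https://daimon.email/llms-full.txt";
  }
  return "https://daimon.email/llms.txt";
}
```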
File Structure
Header Section
# daimon.email API
Email infrastructure for AI agents. Create inboxes, send/receive messages, no human verification required.
API Base URL: https://api.daimon.email
API Version: v1
OpenAPI Spec: https://api.daimon.email/openapi.json
Documentation: https://docs.daimon.email
Authentication Section
## Authentication
Method: Bearer token (API key)
Header: Authorization: Bearer {api_key}
Format: dm_free_* (free tier) or dm_live_* (paid tier)
No API key is required for the first inbox creation; the response includes an account API key for use in subsequent requests.
Example:
curl -H "Authorization: Bearer dm_free_abc123..." https://api.daimon.email/v1/inboxes
Endpoints Section
## Endpoints
### POST /v1/inboxes
Create a new email inbox.
Request Body:
{
  "username": "string (required) - inbox username",
  "client_id": "string (optional) - idempotency key",
  "metadata": "object (optional) - custom metadata"
}
Response (201):
{
  "result": {
    "id": "inbox_abc123",
    "address": "username@daimon.email",
    "account_api_key": "dm_free_xyz...",
    "created_at": "2026-03-16T14:30:00Z"
  }
}
Errors:
- 429: Rate limit exceeded (free tier: 10/hour/IP)
- 400: Invalid username (must be alphanumeric + hyphens)
Idempotency: Returns existing inbox if client_id matches
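A minimal client sketch for this endpoint, assuming Node 18+ for the global fetch. The createInbox name and the injectable fetchFn parameter are ours, added so the call can be exercised without the live API; the URL, headers, and body fields follow the endpoint description above.

```typescript
// Shape of the "result" object in the 201 response documented above.
interface InboxResult {
  id: string;
  address: string;
  account_api_key: string;
  created_at: string;
}

// Idempotent inbox creation: retrying with the same client_id returns the
// existing inbox instead of creating a duplicate (per the Idempotency note).
// fetchFn is injectable for testing; it defaults to the real global fetch.
async function createInbox(
  username: string,
  clientId: string,
  apiKey?: string,
  fetchFn: typeof fetch = fetch,
): Promise<InboxResult> {
  const headers: Record<string, string> = { "Content-Type": "application/json" };
  if (apiKey) headers["Authorization"] = `Bearer ${apiKey}`; // optional for the first inbox
  const res = await fetchFn("https://api.daimon.email/v1/inboxes", {
    method: "POST",
    headers,
    body: JSON.stringify({ username, client_id: clientId }),
  });
  if (!res.ok) throw new Error(`inbox creation failed: ${res.status}`);
  const { result } = await res.json();
  return result as InboxResult;
}
```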
### GET /v1/inboxes/{inbox_id}/messages
List messages in an inbox.
Query Parameters:
- limit: max messages to return (default: 20)
- offset: skip N messages (pagination)
- unread_only: only unread messages (default: false)
Response (200):
{
  "result": [
    {
      "id": "msg_abc123",
      "from": "sender@example.com",
      "to": "inbox@daimon.email",
      "subject": "Subject line",
      "body": "Plain text body",
      "body_html": "<p>HTML body</p>",
      "cta_links": [
        { "text": "Verify Email", "url": "https://..." }
      ],
      "received_at": "2026-03-16T14:30:00Z"
    }
  ]
}
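Given a message shaped like the response above, the first matching CTA link can be pulled out with a small helper. A sketch: the case-insensitive substring match is our assumption, not part of the API.

```typescript
// Minimal shapes for one message as returned by GET /v1/inboxes/{inbox_id}/messages.
interface CtaLink { text: string; url: string; }
interface Message { id: string; from: string; subject: string; cta_links: CtaLink[]; }

// Find the first auto-detected CTA link whose label contains the given text,
// e.g. "Verify" matches "Verify Email". Returns undefined if none matches.
function findCtaLink(message: Message, label: string): CtaLink | undefined {
  const needle = label.toLowerCase();
  return message.cta_links.find((l) => l.text.toLowerCase().includes(needle));
}
```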
CTA Links: Auto-detected call-to-action links (e.g., "Verify Email", "Reset Password")
Error Section
## Error Responses
All errors follow this format:
{
  "error": "ERROR_CODE",
  "message": "Human-readable description",
  "retry_after": 3600 (optional, for rate limits),
  "upgrade_context": { ... } (optional, for tier limits)
}
Common Error Codes:
- RATE_LIMIT_EXCEEDED: Too many requests
- SEND_REQUIRES_PAID: Free tier can't send emails
- INBOX_NOT_FOUND: Invalid inbox ID
- UNAUTHORIZED: Invalid or missing API key
- SEND_LIMIT_EXCEEDED: Daily send quota exceeded
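One way to act on these codes is a small retry policy. This is a sketch: the fallback delays and the choice to treat the remaining codes as non-retryable are our assumptions; retry_after comes from the error envelope documented above.

```typescript
// Parsed form of the error envelope shown above.
interface ApiError {
  error: string;
  message: string;
  retry_after?: number;
}

// Return the number of seconds to wait before retrying, or null when a retry
// will not help. Fallback delays (1 hour / 1 day) are assumptions matching the
// hourly rate-limit and daily send-quota windows described in this document.
function retryDelaySeconds(err: ApiError): number | null {
  if (err.error === "RATE_LIMIT_EXCEEDED") return err.retry_after ?? 3_600;
  if (err.error === "SEND_LIMIT_EXCEEDED") return err.retry_after ?? 86_400;
  // UNAUTHORIZED, INBOX_NOT_FOUND, SEND_REQUIRES_PAID: fix the request instead.
  return null;
}
```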
HTTP Status Codes:
- 200: Success
- 201: Created
- 400: Invalid request
- 401: Unauthorized
- 404: Not found
- 429: Rate limited
- 500: Server error
Rate Limits Section
## Rate Limits
Free Tier:
- Inbox creation: 10/hour/IP
- Message sending: 0 (receive-only)
- API requests: 100/minute
Developer Tier ($9/month):
- Inbox creation: unlimited
- Message sending: 1,000/day
- API requests: 200/minute
Growth Tier ($49/month):
- Inbox creation: unlimited
- Message sending: 10,000/day
- API requests: 500/minute
Rate limit headers:
X-RateLimit-Limit: max requests in window
X-RateLimit-Remaining: requests remaining
X-RateLimit-Reset: unix timestamp of reset
Workflows Section
## Common Workflows
### Service Signup Automation
1. Create inbox:
POST /v1/inboxes { "username": "agent-123" }
2. Use inbox address to sign up for external service:
curl -X POST https://service.com/signup -d "email=agent-123@daimon.email"
3. Poll for verification email:
GET /v1/inboxes/{id}/messages?unread_only=true
4. Extract CTA link from message:
const link = message.cta_links.find(l => l.text.includes('Verify'))
5. Click verification link:
curl -X GET {link.url}
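Steps 3 and 4 above can be sketched as a polling loop. listMessages is injected so the same loop runs against the live GET /v1/inboxes/{id}/messages endpoint or a test double; the function name, attempt count, and interval are our own arbitrary choices.

```typescript
// Minimal message shape: only the cta_links field from the messages response is needed here.
type Msg = { cta_links: { text: string; url: string }[] };

// Poll for messages until one contains a CTA link whose text includes `label`
// (e.g. "Verify"), then return that link's URL. Throws after `attempts` polls.
async function waitForCtaUrl(
  listMessages: () => Promise<Msg[]>,
  label: string,
  attempts = 10,
  intervalMs = 3000,
): Promise<string> {
  for (let i = 0; i < attempts; i++) {
    for (const msg of await listMessages()) {
      const link = msg.cta_links.find((l) => l.text.includes(label));
      if (link) return link.url;
    }
    await new Promise((r) => setTimeout(r, intervalMs)); // back off before the next poll
  }
  throw new Error(`no "${label}" link after ${attempts} polls`);
}
```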
### Password Reset Automation
1. Create temporary inbox
2. Trigger password reset on target service
3. Poll for reset email
4. Extract reset link from cta_links
5. Complete reset flow
6. Delete inbox: DELETE /v1/inboxes/{id}
How LLMs Use llms.txt
Quick Reference
When an LLM needs to use the API but already knows the basics:
User: "Create an inbox called support-bot"
LLM: [Fetches https://daimon.email/llms.txt]
[Scans for "POST /v1/inboxes"]
[Extracts request format]
[Generates API call]
Response:
curl -X POST https://api.daimon.email/v1/inboxes \
-H "Content-Type: application/json" \
-d '{"username": "support-bot"}'
Error Recovery
When an API call fails:
LLM: [Receives 429 Rate Limit error]
[Fetches llms.txt]
[Searches for "429" or "RATE_LIMIT_EXCEEDED"]
[Reads: "Free tier: 10/hour/IP"]
[Explains to user: "Hit rate limit. Free tier allows 10 inbox creates per hour."]
Discovery
When an LLM discovers it needs email capabilities:
User: "Sign me up for newsletter-service.com"
LLM: [Realizes it needs an email address]
[Searches for "email API agents"]
[Finds daimon.email]
[Fetches https://daimon.email/llms.txt]
[Reads workflow: "Service Signup Automation"]
[Executes workflow autonomously]
LLM Optimization Features
Token-Efficient Format
Standard HTML docs might consume 50K+ tokens for the same information that llms.txt conveys in 1.2K tokens.
HTML Documentation:
<html>
  <head><title>API Reference</title></head>
  <body>
    <nav>...</nav>
    <div class="container">
      <h1>Create Inbox</h1>
      <p>This endpoint allows you to create a new inbox...</p>
      <div class="code-block">
        <button class="copy">Copy</button>
        <pre><code>POST /v1/inboxes</code></pre>
      </div>
      ...
    </div>
  </body>
</html>
~50KB, ~12,000 tokens (with HTML overhead)
llms.txt:
POST /v1/inboxes - Create inbox
Body: { "username": "string" }
Response (201): { "result": { "id": "inbox_*", "address": "*@daimon.email" } }
~200 bytes, ~50 tokens
Structured Hierarchy
Clear section markers make it easy for LLMs to scan:
# Top-level heading
## Section heading
### Endpoint heading
Endpoints are always under ## Endpoints
Errors are always under ## Error Responses
Workflows are always under ## Common Workflows
LLMs can search for ## Endpoints to jump directly to the API reference.
Inline Examples
Every endpoint includes a minimal working example:
POST /v1/inboxes
Example:
curl -X POST https://api.daimon.email/v1/inboxes \
-H "Content-Type: application/json" \
-d '{"username": "agent-123"}'
LLMs can extract and adapt these examples directly.
Agent Integration Patterns
Pattern 1: Just-in-Time Loading
class LLMAgent {
  private apiDocs: string | null = null;

  async callAPI(endpoint: string, method: string, body?: any) {
    try {
      return await this.executeRequest(endpoint, method, body);
    } catch (error) {
      // On error, fetch docs if not already loaded
      if (!this.apiDocs) {
        this.apiDocs = await fetch('https://daimon.email/llms.txt').then(r => r.text());
      }
      // LLM analyzes error + docs to recover
      const recovery = await this.llm.analyze({
        error: error,
        docs: this.apiDocs,
        prompt: 'How do I fix this error?'
      });
      return await this.executeRequest(endpoint, method, recovery.correctedBody);
    }
  }
}
Pattern 2: Upfront Context
class LLMAgent {
  async initialize() {
    // Load docs into LLM context at startup
    const docs = await fetch('https://daimon.email/llms.txt').then(r => r.text());
    this.llm = new LLM({
      systemPrompt: `
        You are an AI agent with access to the daimon.email API.
        Here is the complete API documentation:
        ${docs}
        Use this documentation to construct API calls when needed.
      `
    });
  }

  async task(userRequest: string) {
    // LLM has docs in context, can reference them
    return await this.llm.complete(userRequest);
  }
}
Pattern 3: Hybrid (Compact + Full)
class LLMAgent {
  private compactDocs!: string;

  async initialize() {
    // Load compact docs into context
    this.compactDocs = await fetch('https://daimon.email/llms.txt').then(r => r.text());
  }

  async callAPI(endpoint: string) {
    // Try using compact docs
    const plan = await this.llm.plan({
      task: `Call ${endpoint}`,
      docs: this.compactDocs
    });
    // If the LLM is uncertain, load full docs and re-plan
    if (plan.confidence < 0.8) {
      const fullDocs = await fetch('https://daimon.email/llms-full.txt').then(r => r.text());
      const revisedPlan = await this.llm.plan({
        task: `Call ${endpoint}`,
        docs: fullDocs
      });
      return this.executeRequest(revisedPlan);
    }
    return this.executeRequest(plan);
  }
}
Comparison with Other Formats
| Format | Size | LLM Tokens | Parsing Effort | Human Readable |
|---|---|---|---|---|
| HTML Docs | 500KB | 120K | High | Yes |
| OpenAPI JSON | 100KB | 25K | Medium | No |
| llms.txt (compact) | 5KB | 1.2K | Low | Yes |
| llms.txt (full) | 50KB | 12K | Low | Yes |
Info
Use llms.txt for LLM consumption and keep HTML docs for human developers. They serve different audiences with different needs.
llms.txt Standard
daimon.email follows the proposed llms.txt standard:
- Location: Served at /llms.txt and /llms-full.txt
- Format: Plain text, UTF-8 encoded
- Structure: Markdown-like with # headers
- Content-Type: text/plain; charset=utf-8
- Caching: Cache-Control: public, max-age=3600
Spec: https://llmstxt.org (github.com/AnswerDotAI/llms-txt)
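A minimal Node server sketch that serves both files with the Content-Type and Cache-Control values listed above. The file paths under public/ are an assumption.

```typescript
import { createServer } from "node:http";
import { readFileSync } from "node:fs";

// Response headers required by the llms.txt standard section above.
const LLMS_TXT_HEADERS = {
  "Content-Type": "text/plain; charset=utf-8",
  "Cache-Control": "public, max-age=3600",
} as const;

// Serve /llms.txt and /llms-full.txt from public/ at the root of the domain.
const server = createServer((req, res) => {
  if (req.url === "/llms.txt" || req.url === "/llms-full.txt") {
    res.writeHead(200, LLMS_TXT_HEADERS);
    res.end(readFileSync(`public${req.url}`));
  } else {
    res.writeHead(404).end();
  }
});
// Call server.listen(3000) to start serving.
```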
Updating llms.txt
The llms.txt files are automatically regenerated from the OpenAPI spec on every API release:
# In CI/CD pipeline
npm run generate-llms-txt
# Regenerates:
# - public/llms.txt (compact)
# - public/llms-full.txt (full)
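The generation step might look roughly like this: walk the OpenAPI document and emit one compact line per operation, in the style of the compact llms.txt shown earlier. This is illustrative only; the actual generate-llms-txt script is not shown here, and the function name is ours.

```typescript
// Just the pieces of an OpenAPI document that this sketch consumes.
interface OpenApiDoc {
  info: { title: string };
  paths: Record<string, Record<string, { summary?: string }>>;
}

// Emit a compact llms.txt body: a title header plus one line per operation.
function generateLlmsTxt(spec: OpenApiDoc): string {
  const lines = [`# ${spec.info.title}`, "", "## Endpoints"];
  for (const [path, ops] of Object.entries(spec.paths)) {
    for (const [method, op] of Object.entries(ops)) {
      lines.push(`${method.toUpperCase()} ${path} - ${op.summary ?? ""}`.trim());
    }
  }
  return lines.join("\n");
}
```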