# LLM Integration

S3WORM publishes a machine-readable llms.txt file that describes its API surface for AI tools and large language models.
## What is llms.txt?

llms.txt is an emerging convention for websites to publish a concise, machine-readable description of their content and APIs. It serves a purpose similar to robots.txt (for crawlers) or sitemap.xml (for search engines), but is optimized for LLMs and AI agents.
When an AI tool encounters an S3WORM bucket or API endpoint, it can fetch /llms.txt to understand:
- What models and schemas are defined
- What CRUD operations are available
- What authentication methods are supported
- How to construct API calls
This allows AI coding assistants, chat agents, and automated tools to work with S3WORM APIs without prior training on the SDK.
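Because llms.txt always lives at the site root, a tool only needs the origin of any S3WORM URL to locate it. A minimal sketch (the `llmsTxtUrl` helper is illustrative, not part of the SDK):

```typescript
// Derive the conventional /llms.txt location from any API endpoint URL.
// new URL(path, base) resolves an absolute path against the origin of base.
function llmsTxtUrl(endpoint: string): string {
  return new URL("/llms.txt", endpoint).toString();
}

llmsTxtUrl("https://api.example.com/api/Customer/42");
// → "https://api.example.com/llms.txt"
```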
## The worm llms CLI Command

Generate an llms.txt file from your schema:

```bash
# Generate llms.txt from the current schema
worm llms

# Output to a specific file
worm llms --out ./public/llms.txt

# Include full API reference details
worm llms --verbose
```
The command reads your .worm/schema.json and generates a structured text file describing your models, fields, endpoints, and authentication requirements.
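Conceptually, the generation step is a straightforward mapping from schema entries to text. A rough sketch, not the CLI's actual implementation (the `ModelDef` shape is an assumption inferred from the example output):

```typescript
// Hypothetical shape of one model entry from .worm/schema.json.
interface ModelDef {
  name: string;
  path: string;
  mode: string;
  fields: Record<string, string>; // field name -> type/constraint description
}

// Render one model as an llms.txt section, mirroring the example output.
function renderModel(m: ModelDef): string {
  const fields = Object.entries(m.fields)
    .map(([name, desc]) => `${name} (${desc})`)
    .join(", ");
  return [
    `### ${m.name}`,
    `- Path: ${m.path}`,
    `- Mode: ${m.mode}`,
    `- Fields: ${fields}`,
  ].join("\n");
}
```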
### Example Output

```txt
# S3WORM API
> S3 Wrapped ORM -- typed JSON document database over S3-compatible buckets.

## Models

### Customer
- Path: #org/@customers/(id:uuid)
- Mode: readwrite
- Fields: name (string, required), email (string, required), status (string, enum: active|inactive)
- Endpoints: GET /api/Customer, GET /api/Customer/:id, POST /api/Customer, PUT /api/Customer/:id, DELETE /api/Customer/:id

### Invoice
- Path: #org/@invoices/(id:uuid)
- Mode: readwrite
- Fields: customerId (string, ref: Customer), amount (number, required), status (string, enum: draft|sent|paid)
- Endpoints: GET /api/Invoice, GET /api/Invoice/:id, POST /api/Invoice, PUT /api/Invoice/:id, DELETE /api/Invoice/:id

## Authentication
- JWT: Bearer token in Authorization header
- API Key: X-API-Key header
- Anonymous: allowed

## SDK
- Package: @decoperations/s3worm
- Docs: https://s3worm.wtf/docs
```
## Serving llms.txt

### As a Static File

Place llms.txt at the root of your site so it is accessible at https://yourdomain.com/llms.txt:

```bash
# Generate and place in your public directory
worm llms --out ./public/llms.txt
```

For Next.js projects, this goes in the public/ directory. For static sites built with worm site build, it is included in the output automatically.
### Via the Gateway

If you are using @decoperations/s3worm-gateway, the gateway can serve llms.txt at the base path:

```typescript
import { Gateway } from "@decoperations/s3worm-gateway";

const gateway = new Gateway({
  s3: { /* ... */ },
  schema: mySchema,
  basePath: "/api",
});

// GET /api/llms.txt returns the generated llms.txt
```
### Via Bucket Sites

When using worm serve or worm site build, the llms.txt file is auto-generated from the schema and served at /llms.txt.
## Live llms.txt

The S3WORM documentation site serves its own llms.txt at:

https://s3worm.wtf/llms.txt

This file describes the S3WORM SDK itself, including all packages, types, and CLI commands.
## How AI Tools Use S3WORM

### Code Generation

AI coding assistants can read llms.txt to generate correct API calls:

```txt
User: "Fetch all active customers from my S3WORM bucket"

AI reads /llms.txt, discovers:
- Model: Customer
- Endpoint: GET /api/Customer?filter[status]=active
- Auth: Bearer token required
```

AI generates:

```typescript
const response = await fetch("/api/Customer?filter[status]=active", {
  headers: { Authorization: `Bearer ${token}` },
});
const { data } = await response.json();
```
### Schema Discovery

AI agents can introspect the schema to understand data relationships:

```typescript
// An AI agent could programmatically parse llms.txt to discover
// that Invoice has a ref to Customer via customerId
const invoices = await fetch("/api/Invoice?populate=customerId");
```
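That parsing step can be sketched concretely: the function below scans the `- Fields:` lines of an llms.txt document for `ref:` annotations (the line format is taken from the example output above; the parser itself is illustrative, not part of the SDK):

```typescript
// Extract (field, referencedModel) pairs from "- Fields: ..." lines,
// e.g. "customerId (string, ref: Customer)" -> ["customerId", "Customer"].
function findRefs(llmsTxt: string): Array<[string, string]> {
  const refs: Array<[string, string]> = [];
  const refPattern = /(\w+) \([^)]*ref: (\w+)[^)]*\)/g;
  for (const line of llmsTxt.split("\n")) {
    if (!line.startsWith("- Fields:")) continue;
    for (const m of line.matchAll(refPattern)) refs.push([m[1], m[2]]);
  }
  return refs;
}

findRefs("- Fields: customerId (string, ref: Customer), amount (number, required)");
// → [["customerId", "Customer"]]
```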
### SDK Usage

With knowledge of the S3WORM TypeScript API, AI tools can generate correct SDK code:

```typescript
import { S3Worm } from "@decoperations/s3worm";

const worm = new S3Worm({
  bucket: "my-bucket",
  endpoint: "https://gateway.storjshare.io",
  credentials: {
    accessKeyId: process.env.S3_ACCESS_KEY!,
    secretAccessKey: process.env.S3_SECRET_KEY!,
  },
});

worm.loadSchema(".worm/schema.json");

const customers = worm.model("Customer");
const active = await customers.findAll({
  filter: { status: "active" },
  sort: { field: "createdAt", order: "desc" },
});
```
## llms.txt Specification

The llms.txt format follows the llmstxt.org specification:

- Title line: `# Project Name`
- Description: `> One-line description`
- Sections: markdown headers (`##`, `###`) for structure
- Links: standard markdown links to documentation pages
- Plain text: concise, machine-parseable descriptions

The goal is to be useful to both LLMs (for context) and humans (as a quick reference). Keep it concise -- llms.txt is not a full documentation site. It is an entry point that tells AI tools what is available and where to find details.