One npm package, every JS/TS framework. Import the adapter for your stack, point it at your ai.json, done.
## Install

```sh
npm install @aiendpoint/serve
```

All framework dependencies are optional peer dependencies; you only need the one you use.
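For context, optional peer dependencies are typically declared with `peerDependenciesMeta` in `package.json`. A sketch of how a package like this might mark them — the version ranges here are hypothetical, not taken from the package:

```json
{
  "peerDependencies": {
    "express": ">=4",
    "fastify": ">=4",
    "next": ">=13",
    "hono": ">=3"
  },
  "peerDependenciesMeta": {
    "express": { "optional": true },
    "fastify": { "optional": true },
    "next": { "optional": true },
    "hono": { "optional": true }
  }
}
```

With this shape, package managers skip the unmet peers instead of warning about them, which is what lets you install only the adapter you actually use.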
## Express

```ts
import express from 'express'
import { aiendpoint } from '@aiendpoint/serve/express'

const app = express()
app.use(aiendpoint({ spec: './ai.json' }))
app.listen(3000)
```

`GET /ai` now serves your spec with the proper `Content-Type` and `Cache-Control` headers.
## Fastify

```ts
import Fastify from 'fastify'
import { aiendpoint } from '@aiendpoint/serve/fastify'

const app = Fastify()
app.register(aiendpoint, { spec: './ai.json' })
app.listen({ port: 3000 })
```
## Next.js (App Router)

Create `app/ai/route.ts`:

```ts
import { aiendpoint } from '@aiendpoint/serve/next'

export const GET = aiendpoint({ spec: './ai.json' })
```
## Hono

```ts
import { Hono } from 'hono'
import { aiendpoint } from '@aiendpoint/serve/hono'

const app = new Hono()
app.use(aiendpoint({ spec: './ai.json' }))

export default app
```

Works with Cloudflare Workers, Deno, Bun, and Node.js.
## NestJS

NestJS uses Express under the hood, so the Express adapter works; register it in `main.ts`:

```ts
import { NestFactory } from '@nestjs/core'
import { AppModule } from './app.module'
import { aiendpoint } from '@aiendpoint/serve/express'

const app = await NestFactory.create(AppModule)
app.use(aiendpoint({ spec: './ai.json' }))
```
## Options

All adapters accept the same options:

| Option | Type | Default | Description |
|---|---|---|---|
| `spec` | `string \| object` | required | Path to `ai.json` or an inline spec object |
| `path` | `string` | `'/ai'` | Route path to serve the spec at |
| `maxAge` | `number` | `3600` | `Cache-Control` max-age in seconds |
Inline spec example:

```ts
app.use(aiendpoint({
  spec: {
    aiendpoint: '1.0',
    service: { name: 'My API', description: 'Does things' },
    capabilities: [
      { id: 'do_thing', description: 'Does a thing', endpoint: '/api/thing', method: 'GET' }
    ]
  }
}))
```
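Accepting either a file path or an inline object is simple to reason about; a minimal sketch of the normalization, assuming the adapter does something along these lines (the `resolveSpec` name is hypothetical, not part of the package):

```typescript
import { readFileSync } from 'node:fs'

// Mirrors the `spec: string | object` option: a string is treated
// as a path to a JSON file, anything else is used as-is.
function resolveSpec(spec: string | object): object {
  if (typeof spec === 'string') {
    // Path form: read and parse the JSON file.
    return JSON.parse(readFileSync(spec, 'utf8'))
  }
  // Object form: use the inline spec directly.
  return spec
}
```

Path resolution relative to the project root, caching, and validation are left out here; the real adapter presumably handles those as well.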
## Custom adapter

For frameworks not listed above, use the shared utilities:

```ts
import { loadSpec, cacheHeader } from '@aiendpoint/serve/handler'

const spec = loadSpec('./ai.json')

// In your framework's route handler:
response.setHeader('Content-Type', 'application/json')
response.setHeader('Cache-Control', cacheHeader(3600))
response.send(JSON.stringify(spec))
```
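If you would rather not depend on the package at all, the two utilities are small enough to sketch yourself. This is a hedged sketch, assuming `cacheHeader` emits a standard `public, max-age=N` directive — the behavior is an assumption, not taken from the package:

```typescript
import { readFileSync } from 'node:fs'

// Local stand-in for loadSpec: read and parse the spec file.
function loadSpecLocal(path: string): object {
  return JSON.parse(readFileSync(path, 'utf8'))
}

// Local stand-in for cacheHeader; assumed to build a standard
// Cache-Control directive from the max-age in seconds.
function cacheHeaderLocal(maxAge: number): string {
  return `public, max-age=${maxAge}`
}
```

Wire these into your framework's route handler exactly as in the snippet above: set `Content-Type` and `Cache-Control`, then send the serialized spec.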
## Generate ai.json

Use the CLI to create your spec from OpenAPI or interactively.