Token-first security
Scope access per token, toggle features, and rotate secrets instantly without shipping new keys to clients.
Orchestrate chat, completions, embeddings, and tool-calling flows with an OpenAI-compatible surface. Control quota, monitor usage, and rely on Kent Wynn's managed infrastructure for consistent low-latency responses.
Daily check-in bonuses
Log in every day to add +1,000 free tokens to your Kent Wynn account.
New: Social Account Launch Bonus
Register with GitHub or Google for the first time and unlock +100,000 tokens instantly.
curl -X POST \
  https://api.kentwynn.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "X-AI-Token: YOUR_TOKEN" \
  -d '{
    "model": "kentwynn/reasoning",
    "messages": [
      { "role": "system", "content": "I am Kent Wynn AI, created and trained by Kent Wynn." },
      { "role": "user", "content": "Summarise the Kent Wynn AI platform." }
    ],
    "stream": false
  }'

Sign in to the Kent Wynn console, generate an API token, and copy it securely—tokens are only revealed once.
Use `kentwynn/reasoning` for chat-style workloads or `kentwynn/embedding` for search pipelines.
Send a POST request to the `/v1/*` endpoint with your token in the `X-AI-Token` header. Responses mirror OpenAI schemas.
Track quotas and daily burn in the console or automate checks with the admin REST endpoints.
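The steps above can be sketched in code. A minimal TypeScript helper that assembles an embeddings request for `kentwynn/embedding`: the `/v1/embeddings` path and request shape are assumptions based on the page's promise that responses mirror OpenAI schemas, so verify them against the console docs before relying on them.

```typescript
// Minimal sketch (assumed endpoint path and OpenAI-style payload):
// build a request for the kentwynn/embedding model with the
// X-AI-Token header described in the quickstart.
function buildEmbeddingRequest(token: string, texts: string[]) {
  return {
    url: "https://api.kentwynn.com/v1/embeddings", // assumed path
    method: "POST" as const,
    headers: {
      "Content-Type": "application/json",
      "X-AI-Token": token, // token from the console, revealed only once
    },
    body: JSON.stringify({ model: "kentwynn/embedding", input: texts }),
  };
}

// Demo: inspect the request that would be sent.
const req = buildEmbeddingRequest("YOUR_TOKEN", ["hello world"]);
console.log(req.url);
```

To actually send it, pass the pieces to `fetch(req.url, { method: req.method, headers: req.headers, body: req.body })` and read the JSON response.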
Scope access per token, toggle features, and rotate secrets instantly without shipping new keys to clients.
Chat, completions, embeddings, and responses expose an OpenAI-compatible contract for effortless integration.
Add +1,000 tokens to your account every 24 hours with the console check-in bonus—perfect for ongoing experiments.
Fully managed inference from Kent Wynn infrastructure with a polished, reliable API surface and predictable performance.
Each button calls the /demo namespace on api.kentwynn.com with the sample payloads shown. Responses come directly from the hosted engines.
List available demo model aliases and engine IDs.
curl -X 'GET' \
  'https://api.kentwynn.com/demo/v1/models' \
  -H 'accept: application/json'

All endpoints accept X-AI-Token and return JSON responses. Streaming is available for chat completions.
Enumerate hosted models with their public aliases.
Stream or fetch assistant replies using chat-style prompts.
Generate classic text completions with temperature and stop controls.
Produce vector embeddings optimised for semantic search and clustering.
Unified endpoint that auto-falls back between chat and completion styles.
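Since streaming is available for chat completions, here is a small TypeScript sketch of consuming it. It assumes the stream uses OpenAI-style server-sent events (`data: {json}` frames ending in `data: [DONE]`), which the OpenAI-compatible contract implies but the page does not spell out.

```typescript
// Minimal sketch: extract assistant text from an OpenAI-style SSE
// stream body (assumed format: "data: {json}" lines, "[DONE]" sentinel).
function extractDeltas(sseText: string): string[] {
  const deltas: string[] = [];
  for (const line of sseText.split("\n")) {
    const trimmed = line.trim();
    if (!trimmed.startsWith("data:")) continue; // skip comments/blank lines
    const payload = trimmed.slice(5).trim();
    if (payload === "[DONE]") break; // end-of-stream sentinel
    const chunk = JSON.parse(payload);
    const piece = chunk.choices?.[0]?.delta?.content;
    if (typeof piece === "string") deltas.push(piece);
  }
  return deltas;
}
```

In practice you would read the response body incrementally and feed each decoded chunk through a parser like this, concatenating the deltas as they arrive.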
Anything that can call OpenAI-compatible APIs can call Kent Wynn. Drop in n8n, agent frameworks, SDKs, or straight REST and keep your workflows intact.
Trigger Kent Wynn calls inside n8n with HTTP Request or OpenAI nodes to orchestrate automations and agents.
Drop into LangChain or any agent framework by swapping the base URL and token. Tool calling remains intact.
Use the official OpenAI Node/Python SDKs with `baseURL` set to Kent Wynn. Keep your client code the same.
Prefer raw HTTP? Use curl, Postman, or serverless functions with `X-AI-Token` headers and streaming support.
Bring your preferred orchestration layer: n8n for automation, agent frameworks for tool calling, and the OpenAI SDK for drop-in compatibility. Kent Wynn handles routing, quotas, and model governance.
Swap the base URL and keep the rest of your client code unchanged.
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.KENTWYNN_TOKEN!,
  baseURL: "https://api.kentwynn.com/v1",
});

const response = await client.responses.create({
  model: "kentwynn/reasoning",
  input: "Draft 3 onboarding steps for a new enterprise customer.",
});

console.log(response.output_text);

Explore focused products that run on top of Kent Wynn AI. Add more modules over time without rebuilding your stack.
OpenQuery ingests contracts, SOPs, menus, and transcripts, then auto-classifies and embeds everything so teams can ask questions and get cited answers instantly. It's the OpenAI-class workflow for internal knowledge.
Keep everything on your hardware while offering a polished developer experience. Set quotas, monitor usage, and deliver private LLM features in minutes.