Platform
Stable model aliases
Integrate once with `kentwynn/reasoning` and `kentwynn/embedding` instead of binding clients to raw backend model IDs.
Ship chat, completions, and embedding workflows behind a stable API surface. Kent Wynn AI gives teams branded model aliases, scoped keys, token-aware usage controls, and a cleaner path from prototype to internal production rollout.
Model surface
Keep clients pinned to `kentwynn/reasoning` and `kentwynn/embedding` while the platform owns backend routing.
Token controls
Issue scoped keys, enforce quotas, and audit usage from one Kent Wynn control plane instead of distributing raw provider credentials.
curl -X POST \
https://api.kentwynn.com/v1/chat/completions \
-H "Content-Type: application/json" \
-H "X-AI-Token: YOUR_TOKEN" \
-d '{
"model": "kentwynn/reasoning",
"messages": [
{ "role": "system", "content": "You are a precise platform assistant." },
{ "role": "user", "content": "Summarise the Kent Wynn AI platform for an engineering manager." }
],
"stream": false
}'
Quickstart
Kent Wynn AI is designed to feel like a product platform, not a demo catalog. This flow keeps onboarding clear for engineers, internal platforms, and automation teams.
Step 1
Sign in to the Kent Wynn AI Console, issue a scoped key, and store it once. Keys are only revealed at creation time.
Step 2
Use `kentwynn/reasoning` for generation workflows and `kentwynn/embedding` for retrieval, clustering, and semantic search.
Step 3
Send requests to the `/v1/*` surface with `X-AI-Token`. Chat, completions, and embeddings follow an OpenAI-compatible contract.
Step 4
Track token consumption, rotate keys, and enforce account-level controls from the console or admin endpoints.
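Steps 1–3 above can be sketched as a small request builder. The endpoint path, `X-AI-Token` header, and `kentwynn/reasoning` alias come from this page; the helper itself is illustrative, not an official SDK.

```typescript
// Sketch: assemble a Kent Wynn chat request per Quickstart steps 1-3.
// buildChatRequest is a hypothetical helper, not part of any shipped client.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

function buildChatRequest(token: string, messages: ChatMessage[]) {
  return {
    url: "https://api.kentwynn.com/v1/chat/completions",
    init: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "X-AI-Token": token, // scoped key issued in the Console (Step 1)
      },
      body: JSON.stringify({
        model: "kentwynn/reasoning", // generation alias (Step 2)
        messages,
        stream: false,
      }),
    },
  };
}

// Usage: pass the result straight to fetch(req.url, req.init).
const req = buildChatRequest("YOUR_TOKEN", [
  { role: "user", content: "Ping" },
]);
console.log(JSON.parse(req.init.body).model);
```

Keeping request assembly in one place means a future alias or header change touches a single function rather than every call site.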
Platform
Integrate once with `kentwynn/reasoning` and `kentwynn/embedding` instead of binding clients to raw backend model IDs.
Platform
Use standard chat, completions, and embeddings flows with minimal client changes across Python, TypeScript, and automation tools.
Platform
Issue keys, apply quotas, and monitor token consumption from a single control plane instead of scattering secrets across systems.
Platform
Drop Kent Wynn AI into SDKs, workflow tools, and internal services without rebuilding your application architecture.
These examples call the `/demo` namespace on `api.kentwynn.com` and show the same branded aliases and response shapes developers can expect from the platform.
List the public demo model aliases exposed by Kent Wynn AI.
curl -X 'GET' \
'https://api.kentwynn.com/demo/v1/models' \
-H 'accept: application/json'
Kent Wynn AI keeps the public contract narrow and stable. Model aliases stay branded, authentication stays token-based, and the endpoint surface remains easy to integrate from existing OpenAI-style clients.
List stable public model aliases and supported capabilities.
Run chat-style generation with the Kent Wynn reasoning model.
Generate prompt-based completions for legacy and lightweight text flows.
Produce 2560-dimensional embeddings for retrieval, clustering, and semantic search.
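The embedding capability above typically feeds retrieval by ranking documents with cosine similarity. A minimal sketch: real `kentwynn/embedding` vectors are 2560-dimensional, so the 3-dimensional toy vectors and document IDs here are placeholders for illustration only.

```typescript
// Sketch: rank documents against a query by cosine similarity.
// Toy 3-dim vectors stand in for real 2560-dim embedding output.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

const query = [0.9, 0.1, 0.0]; // hypothetical query embedding
const docs: Record<string, number[]> = {
  "refund-policy": [0.8, 0.2, 0.1],
  "release-notes": [0.0, 0.3, 0.9],
};

const ranked = Object.entries(docs)
  .map(([id, vec]) => ({ id, score: cosine(query, vec) }))
  .sort((x, y) => y.score - x.score);

console.log(ranked[0].id); // most relevant document id
```

The same ranking loop applies unchanged once the placeholder vectors are replaced with real embedding responses.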
Kent Wynn AI is not just a model endpoint. It is the control layer for who can call the platform, how token consumption is governed, and how integrations stay observable over time.
Standard JSON responses keep integration surfaces predictable across chat, completions, and internal telemetry.
{
"choices": [
{
"message": {
"role": "assistant",
"content": "Kent Wynn AI exposes stable model aliases with token-aware access controls."
},
"finish_reason": "stop"
}
],
"usage": {
"prompt_tokens": 18,
"completion_tokens": 24,
"total_tokens": 42
}
}
The platform is designed to sit behind the tools your team already uses. Keep a stable API contract for engineering, route requests through governed model aliases, and preserve control over tokens and usage across every integration.
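The `usage` object in each response also enables client-side token accounting, for example to warn before a budget is exhausted. This is a hypothetical sketch: the `TokenMeter` class, quota value, and threshold are illustrative, and actual enforcement lives in the Kent Wynn control plane.

```typescript
// Sketch: accumulate the `usage` field from chat responses to track
// token spend against a local budget. Illustrative only; quota
// enforcement is handled server-side by the platform.
interface Usage {
  prompt_tokens: number;
  completion_tokens: number;
  total_tokens: number;
}

class TokenMeter {
  private used = 0;
  constructor(private quota: number) {}

  record(usage: Usage): void {
    this.used += usage.total_tokens;
  }

  remaining(): number {
    return Math.max(this.quota - this.used, 0);
  }

  nearLimit(threshold = 0.9): boolean {
    return this.used >= this.quota * threshold;
  }
}

const meter = new TokenMeter(100); // hypothetical 100-token budget
meter.record({ prompt_tokens: 18, completion_tokens: 24, total_tokens: 42 });
console.log(meter.remaining()); // 58 tokens left of the toy budget
```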
Use the official OpenAI Node or Python SDKs with a base URL swap. Kent Wynn preserves the surface developers already know.
Connect Kent Wynn endpoints into n8n, scheduled jobs, or internal automations without introducing a second AI integration layer.
Plug into LangChain or similar toolchains while keeping model aliases, token controls, and account-level governance centralized.
Use raw HTTP when you need full transport control for backend services, webhooks, and platform-to-platform integrations.
Kent Wynn AI sits between your applications and the underlying model runtime. That lets you standardize authentication, preserve stable product aliases, and manage token usage without rewriting every client each time the backend changes.
Use the official client, swap the base URL, and keep the rest of the integration straightforward.
import OpenAI from "openai";
const client = new OpenAI({
apiKey: process.env.KENTWYNN_TOKEN!,
baseURL: "https://api.kentwynn.com/v1",
});
const completion = await client.chat.completions.create({
model: "kentwynn/reasoning",
messages: [
{
role: "system",
content: "You are a precise platform assistant."
},
{
role: "user",
content: "Draft 3 onboarding steps for a new enterprise customer."
}
],
});
console.log(completion.choices[0].message?.content);
Kent Wynn AI is the model and governance layer. Products on top of it turn that platform into focused workflows for knowledge, automation, and future multimodal operations.
OpenQuery turns contracts, SOPs, support logs, and operational documents into a queryable system with retrieval, citations, and controlled reasoning on top of Kent Wynn model aliases.
Uses Kent Wynn model aliases, governed token access, and the same stable public API surface as the platform.
Designed for document-heavy teams that need structured ingestion, embedded search, and defensible answers.
OpenQuery demonstrates the intended Kent Wynn product pattern: one controlled AI platform underneath, multiple focused products above it.
A multimodal workspace for image, scan, and visual document analysis once the vision runtime is production-ready.
A controlled environment for multi-step task execution, tool routing, and auditable operator workflows.
A unified operational dashboard for token reporting, key lifecycle, quota policy, and model adoption across products.
Start with stable Kent Wynn model aliases, issue governed API keys, and integrate chat, completions, or embeddings without exposing your product to backend churn.