Kent Wynn AI Platform

Build with Kent Wynn AI—model access, token governance, and integration-ready AI endpoints.

Ship chat, completions, and embedding workflows behind a stable API surface. Kent Wynn AI gives teams branded model aliases, scoped keys, token-aware usage controls, and a cleaner path from prototype to internal production rollout.

Model surface

Keep clients pinned to `kentwynn/reasoning` and `kentwynn/embedding` while the platform owns backend routing.

Token controls

Issue scoped keys, enforce quotas, and audit usage from one Kent Wynn control plane instead of distributing raw provider credentials.

Quick reference

Models: reasoning + embeddings
Contract: OpenAI-compatible
Control: keys + quotas

curl -X POST \
  https://api.kentwynn.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "X-AI-Token: YOUR_TOKEN" \
  -d '{
    "model": "kentwynn/reasoning",
    "messages": [
      { "role": "system", "content": "You are a precise platform assistant." },
      { "role": "user", "content": "Summarise the Kent Wynn AI platform for an engineering manager." }
    ],
    "stream": false
  }'

Quickstart

Integrate in four controlled steps.

Kent Wynn AI should read like a product platform, not a demo catalog. This flow keeps onboarding clear for engineers, internal platforms, and automation teams.

  1. Create an API key

    Sign in to the Kent Wynn AI Console, issue a scoped key, and store it once. Keys are only revealed at creation time.

  2. Choose the model alias

    Use `kentwynn/reasoning` for generation workflows and `kentwynn/embedding` for retrieval, clustering, and semantic search.

  3. Call the compatible endpoint

    Send requests to the `/v1/*` surface with the `X-AI-Token` header. Chat, completions, and embeddings follow an OpenAI-compatible contract.

  4. Govern usage centrally

    Track token consumption, rotate keys, and enforce account-level controls from the console or admin endpoints.
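
The four steps above can be condensed into a small request builder. This is an illustrative sketch, not part of any SDK: `buildChatRequest` and `ChatMessage` are hypothetical names, and only the URL, the `X-AI-Token` header, and the model alias come from the documentation above.

```typescript
// Step 1: the scoped key is supplied by the caller (e.g. from the environment).
// Step 2: the branded alias is pinned in the payload.
// Steps 3-4: the request targets the /v1 surface with the X-AI-Token header,
// so usage is attributable to the issuing key.
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

function buildChatRequest(token: string, messages: ChatMessage[]) {
  return {
    url: "https://api.kentwynn.com/v1/chat/completions",
    headers: {
      "Content-Type": "application/json",
      "X-AI-Token": token,
    },
    body: JSON.stringify({
      model: "kentwynn/reasoning",
      messages,
      stream: false,
    }),
  };
}

const req = buildChatRequest("YOUR_TOKEN", [
  { role: "user", content: "Ping" },
]);
console.log(req.url);
```

The same object can be handed to `fetch` or any HTTP client; nothing about the shape is client-specific.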

Platform

Stable model aliases

Integrate once with `kentwynn/reasoning` and `kentwynn/embedding` instead of binding clients to raw backend model IDs.

OpenAI-compatible surface

Use standard chat, completions, and embeddings flows with minimal client changes across Python, TypeScript, and automation tools.

Token governance

Issue keys, apply quotas, and monitor token consumption from a single control plane instead of scattering secrets across systems.

Integration-ready platform

Drop Kent Wynn AI into SDKs, workflow tools, and internal services without rebuilding your application architecture.

Live Demo

Verify the public contract before you integrate.

These examples call the `/demo` namespace on `api.kentwynn.com` and show the same branded aliases and response shapes developers can expect from the platform.

GET /demo/v1/models

List the public demo model aliases exposed by Kent Wynn AI.

curl -X 'GET' \
  'https://api.kentwynn.com/demo/v1/models' \
  -H 'accept: application/json'

API Surface

Core API surface

Kent Wynn AI keeps the public contract narrow and stable. Model aliases stay branded, authentication stays token-based, and the endpoint surface remains easy to integrate from existing OpenAI-style clients.

View full reference →
GET /v1/models

List stable public model aliases and supported capabilities.

POST /v1/chat/completions

Run chat-style generation with the Kent Wynn reasoning model.

POST /v1/completions

Generate prompt-based completions for legacy and lightweight text flows.

POST /v1/embeddings

Produce 2560-dimensional embeddings for retrieval, clustering, and semantic search.
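
For the retrieval and semantic-search use cases, ranking candidates by cosine similarity over embedding vectors is the standard pattern. A minimal sketch, using short mock vectors in place of real 2560-dimensional `kentwynn/embedding` output so it stays self-contained:

```typescript
// Cosine similarity: dot product of the vectors divided by the product of
// their magnitudes. Works identically at 3 dimensions or 2560.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Mock query and document vectors; real ones would come from /v1/embeddings.
const query = [0.1, 0.8, 0.1];
const docs = [
  { id: "doc-a", vec: [0.1, 0.7, 0.2] },
  { id: "doc-b", vec: [0.9, 0.0, 0.1] },
];

// Rank documents by similarity to the query, highest first.
const ranked = docs
  .map((d) => ({ id: d.id, score: cosineSimilarity(query, d.vec) }))
  .sort((x, y) => y.score - x.score);
console.log(ranked[0].id); // doc-a: its vector is closest to the query
```

The ranking step is the same whether the vectors back a search index, a clustering job, or a retrieval-augmented pipeline.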

Control Plane

Token governance, account control, and usage visibility.

Kent Wynn AI is not just a model endpoint. It is the control layer for who can call the platform, how token consumption is governed, and how integrations stay observable over time.

  • API access is gated by issued keys and enforced token quotas.
  • Key lifecycle, account state, and request limits are managed from the Kent Wynn control plane.
  • Every request path is designed to support auditability and centralized governance.
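
As a rough sketch of how a token-aware gate behaves, consider the check below. The `KeyState` shape and `admitRequest` helper are illustrative assumptions, not a documented Kent Wynn API: the point is only that admission depends on key state plus accumulated usage versus quota.

```typescript
// Hypothetical per-key state tracked by a control plane.
type KeyState = { tokensUsed: number; tokenQuota: number; active: boolean };

// Admit a request only if the key is active and the estimated token cost
// would not push accumulated usage past the quota.
function admitRequest(key: KeyState, estimatedTokens: number): boolean {
  return key.active && key.tokensUsed + estimatedTokens <= key.tokenQuota;
}

const scopedKey: KeyState = { tokensUsed: 9500, tokenQuota: 10000, active: true };
console.log(admitRequest(scopedKey, 400)); // true: 9,900 <= 10,000
console.log(admitRequest(scopedKey, 800)); // false: would exceed the quota
```

Centralizing this decision is what lets keys be rotated or revoked without touching any client.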

Response shape

Standard JSON responses keep integration surfaces predictable across chat, completions, and internal telemetry.

{
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "Kent Wynn AI exposes stable model aliases with token-aware access controls."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 18,
    "completion_tokens": 24,
    "total_tokens": 42
  }
}
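
A minimal sketch of consuming that shape in TypeScript, typing only the fields shown above (the `ChatResponse` interface is an assumption covering just this example, not a published type):

```typescript
// Types for the subset of the response used here.
interface ChatResponse {
  choices: {
    message: { role: string; content: string };
    finish_reason: string;
  }[];
  usage: { prompt_tokens: number; completion_tokens: number; total_tokens: number };
}

// The sample response body from above, as it would arrive over the wire.
const raw = JSON.stringify({
  choices: [
    {
      message: {
        role: "assistant",
        content:
          "Kent Wynn AI exposes stable model aliases with token-aware access controls.",
      },
      finish_reason: "stop",
    },
  ],
  usage: { prompt_tokens: 18, completion_tokens: 24, total_tokens: 42 },
});

const parsed: ChatResponse = JSON.parse(raw);
console.log(parsed.choices[0].message.content);
console.log(parsed.usage.total_tokens); // 42 = prompt_tokens + completion_tokens
```

The `usage` block is what feeds quota accounting, so clients that report or budget tokens read it on every response.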

Integrations

Drop Kent Wynn AI into production workflows.

The platform is designed to sit behind the tools your team already uses. Keep a stable API contract for engineering, route requests through governed model aliases, and preserve control over tokens and usage across every integration.

OpenAI SDK · TypeScript · Python · n8n · LangChain · REST API

OpenAI SDK compatibility

Use the official OpenAI Node or Python SDKs with a base URL swap. Kent Wynn preserves the surface developers already know.

View docs →

Workflow orchestration

Connect Kent Wynn endpoints into n8n, scheduled jobs, or internal automations without introducing a second AI integration layer.

View docs →

Agent frameworks

Plug into LangChain or similar toolchains while keeping model aliases, token controls, and account-level governance centralized.

View docs →

Direct REST integration

Use raw HTTP when you need full transport control for backend services, webhooks, and platform-to-platform integrations.

View docs →

Integration model

Kent Wynn AI sits between your applications and the underlying model runtime. That lets you standardize authentication, preserve stable product aliases, and manage token usage without rewriting every client each time the backend changes.

  • One branded API surface for chat, completions, and embeddings
  • Centralized token lifecycle and request governance
  • Minimal client churn across SDKs, automations, and services

OpenAI SDK drop-in example

Use the official client, swap the base URL, and keep the rest of the integration straightforward.

import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.KENTWYNN_TOKEN!,
  baseURL: "https://api.kentwynn.com/v1",
});

const completion = await client.chat.completions.create({
  model: "kentwynn/reasoning",
  messages: [
    {
      role: "system",
      content: "You are a precise platform assistant."
    },
    {
      role: "user",
      content: "Draft 3 onboarding steps for a new enterprise customer."
    }
  ],
});

console.log(completion.choices[0].message?.content);

Product Ecosystem

Products built on the Kent Wynn AI platform.

Kent Wynn AI is the model and governance layer. Products on top of it turn that platform into focused workflows for knowledge, automation, and future multimodal operations.

OpenQuery (live product)

Document intelligence built on Kent Wynn AI for high-trust internal knowledge workflows.

OpenQuery turns contracts, SOPs, support logs, and operational documents into a queryable system with retrieval, citations, and controlled reasoning on top of Kent Wynn model aliases.

Retrieval · Embeddings · Citations · Internal knowledge

Powered by Kent Wynn AI

Uses Kent Wynn model aliases, governed token access, and the same stable public API surface as the platform.

Retrieval-first workflow

Designed for document-heavy teams that need structured ingestion, embedded search, and defensible answers.

Why it matters

OpenQuery demonstrates the intended Kent Wynn product pattern: one controlled AI platform underneath, multiple focused products above it.

Future platform products
Coming soon

Vision Workbench

A multimodal workspace for image, scan, and visual document analysis once the vision runtime is production-ready.

Coming soon

Agent Console

A controlled environment for multi-step task execution, tool routing, and auditable operator workflows.

Coming soon

Usage Hub

A unified operational dashboard for token reporting, key lifecycle, quota policy, and model adoption across products.

Build on a controlled AI platform.

Start with stable Kent Wynn model aliases, issue governed API keys, and integrate chat, completions, or embeddings without exposing your product to backend churn.

Kent Wynn AI — Model APIs, Token Governance & Developer Integrations