Overview
Text, markdown, raw JSON — these are the default outputs of an AI. They're also the wrong outputs for most real interactions.
taw-ui is the interface layer for the HAI era: AI-native React components that turn structured tool outputs into production-quality UI — with loading states, spring-physics motion, schema validation, and built-in affordances for uncertainty.
The future of AI products is not more text. It's better interfaces. The right component makes AI feel smarter — not because the model changed, but because the interface did.
The Gap
Most AI products treat text as the universal interface. When a user asks for metrics, choices, or a confirmation — text is the wrong answer. taw-ui fills the gap between what the LLM returns and what users should see.
{
  "stats": [{
    "key": "revenue",
    "label": "Revenue",
    "value": 142580,
    "format": { "kind": "currency", "currency": "USD" },
    "diff": { "value": 12.4 },
    "sparkline": { "data": [95000, 108000, 122000, 135000, 142580] }
  }],
  "source": {
    "label": "Stripe Dashboard",
    "freshness": "2 hours ago"
  }
}

Same data, same tool call. One line of code: <KpiCard part={part} />
How It Works
Three steps. No glue code.
Your tool returns JSON matching a taw-ui schema. That same schema validates on the server, validates at render time, and infers TypeScript types. One schema, no duplication.
The AI SDK delivers a part object to your app. Pass it directly. Loading, streaming, success, and error states are handled automatically — no conditionals, no wiring.
Components live in your codebase. Customize layout, styles, behavior — everything. shadcn theming works out of the box. No external runtime, no version conflicts.
Minimal Example
import { tool } from "ai"
import { z } from "zod"
import { KpiCardSchema } from "@/components/taw/kpi-card"

const getMetrics = tool({
  description: "Get business metrics",
  parameters: z.object({
    metric: z.string(),
  }),
  outputSchema: KpiCardSchema,
  execute: async ({ metric }) => {
    const data = await fetchMetric(metric)
    return {
      id: metric,
      stats: [{
        key: metric,
        label: data.name,
        value: data.value,
        format: { kind: "currency", currency: "USD" },
        diff: { value: data.change },
        sparkline: { data: data.history },
      }],
      source: {
        label: data.source,
        freshness: "just now",
      },
    }
  },
})

import { KpiCard } from "@/components/taw/kpi-card"
import type { ToolPart } from "@/components/taw/lib/types"

function ToolOutput({ part }: { part: ToolPart }) {
  switch (part.toolName) {
    case "getMetrics":
      return <KpiCard part={part} />
    default:
      return null
  }
}

The Architecture
taw-ui sits between your AI SDK and your users. LLM providers and runtimes change fast — taw-ui is designed to stay stable across them. You own the components. We maintain the contracts.
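The contract can be pictured as a small, stable shape that components depend on. The sketch below is illustrative only — the field names are assumptions, and the real type lives in @/components/taw/lib/types:

```typescript
// Hypothetical sketch of the part contract taw-ui consumes.
// Providers and runtimes vary; components only depend on this surface.
type ToolPartSketch = {
  toolName: string                  // which tool ran, e.g. "getMetrics"
  state: "input-streaming" | "input-available" | "output-available" | "output-error"
  input?: unknown                   // accumulated tool arguments
  output?: unknown                  // validated against the tool's schema
  errorText?: string                // present only in the error state
}

// Components can ask one question of any provider's output:
function isSettled(part: ToolPartSketch): boolean {
  return part.state === "output-available" || part.state === "output-error"
}
```

Because everything downstream keys off this one surface, swapping models or providers never touches component code.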
What Makes taw-ui Different
Every schema includes confidence (0–1) and source provenance as first-class fields. Few UI libraries surface AI uncertainty in the interface itself.
One part prop. Four states handled automatically: loading skeleton, progressive streaming, animated success, helpful error. No conditional rendering.
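Internally, that amounts to one exhaustive branch on the part's state. A minimal sketch (the state names follow AI SDK conventions; the real component renders JSX, not strings):

```typescript
type PartState = "input-streaming" | "input-available" | "output-available" | "output-error"

// The four-state dispatch a taw-ui component performs for you.
function renderFor(state: PartState): string {
  switch (state) {
    case "input-streaming": return "skeleton"  // loading shimmer
    case "input-available": return "streaming" // progressive partial UI
    case "output-available": return "success"  // animated final card
    case "output-error": return "error"        // inline, actionable error
  }
}
```

Your code passes the part once; the component owns the lifecycle.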
Numbers count up with spring dynamics. Skeletons shimmer with physics. Entrances are eased, not popped. Motion that feels earned.
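The count-up effect can be approximated with a damped spring. This is a sketch of the idea, not taw-ui's actual animation code, and the stiffness/damping constants are illustrative:

```typescript
// One semi-implicit Euler step of a spring pulling `value` toward `target`.
function springStep(
  value: number, velocity: number, target: number,
  dt = 1 / 60, stiffness = 170, damping = 26,
): [number, number] {
  const force = stiffness * (target - value) - damping * velocity
  const v = velocity + force * dt
  return [value + v * dt, v]
}

// Animate 0 → 142580 at 60fps until the spring settles.
let [v, vel] = [0, 0]
for (let i = 0; i < 600; i++) [v, vel] = springStep(v, vel, 142580)
console.log(Math.round(v)) // → 142580
```

Because the value follows physics rather than a fixed-duration tween, interruptions (a new target arriving mid-animation) blend naturally instead of snapping.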
Parse failures render inline with field-level details and "Did you mean?" suggestions. Every error is an opportunity to fix a prompt — not a silent mystery.
Built for the Vercel AI SDK. The ToolPart type is structurally compatible — pass AI SDK parts directly, no adapters needed. Works with any provider the SDK supports.
One Zod schema defines the tool output shape, validates server-side, validates client-side, and infers TypeScript types. No synchronization required.
Error Handling in Action
When the LLM returns { title: "Revenue", amount: 142580 } instead of the expected schema, taw-ui doesn't silently fail. It renders a helpful inline error with field-level details and correction suggestions. Every error is a prompt iteration opportunity.
KpiCard: Schema validation failed