Overview
When an LLM calls a tool, the result comes back as raw JSON. Today, developers either dump it as text or hand-write one-off UI for every tool. taw-ui solves this with schema-validated, motion-native React components that handle the full tool call lifecycle.
The Problem
```json
{
  "stats": [{
    "key": "revenue",
    "label": "Revenue",
    "value": 142580,
    "format": { "kind": "currency", "currency": "USD" },
    "diff": { "value": 12.4 },
    "sparkline": { "data": [95000, 108000, 122000, 135000, 142580] }
  }],
  "source": {
    "label": "Stripe Dashboard",
    "freshness": "2 hours ago"
  }
}
```
(Rendered output: a KPI card — "Monthly Revenue", "Current month to date".)
Same data, same tool call. One line of code: `<KpiCard part={part} />`
How It Works
1. Your tool returns JSON matching a taw-ui schema. The same schema serves as your tool's `outputSchema` and as the client-side validator.
2. Pass the tool call part to the component. It handles loading, streaming, success, and error states automatically.
3. Invalid data? Helpful error. Missing fields? Skeleton with shimmer. AI uncertain? Confidence badge. It just works.
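The lifecycle handling in step 2 can be sketched as a discriminated union over the part's state. The type and field names below are illustrative, not taw-ui's actual API:

```typescript
// Illustrative sketch of the four-state tool-part model that taw-ui
// components consume; the real TawToolPart type may differ.
type ToolPartState =
  | { state: "loading" }                     // call dispatched, no data yet
  | { state: "streaming"; partial: unknown } // partial JSON arriving
  | { state: "output"; output: unknown }     // complete, validated payload
  | { state: "error"; errorText: string }    // call or validation failed

// What a component renders for each state: a skeleton, a live-updating
// skeleton, the finished card, or an error panel.
function renderPlan(part: ToolPartState): string {
  switch (part.state) {
    case "loading":
      return "skeleton"
    case "streaming":
      return "skeleton+partial"
    case "output":
      return "card"
    case "error":
      return "error: " + part.errorText
  }
}
```

Because the state is a single discriminated field on the part, the component can branch internally and your app never writes these conditionals itself.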
Minimal Example
```ts
import { tool } from "ai"
import { z } from "zod"
import { KpiCardSchema } from "@/components/taw/kpi-card"

const getMetrics = tool({
  description: "Get business metrics",
  parameters: z.object({
    metric: z.string(),
  }),
  outputSchema: KpiCardSchema,
  execute: async ({ metric }) => {
    const data = await fetchMetric(metric)
    return {
      id: metric,
      stats: [{
        key: metric,
        label: data.name,
        value: data.value,
        format: { kind: "currency", currency: "USD" },
        diff: { value: data.change },
        sparkline: { data: data.history },
      }],
      source: {
        label: data.source,
        freshness: "just now",
      },
    }
  },
})
```

```tsx
import { KpiCard } from "@/components/taw/kpi-card"
import type { TawToolPart } from "taw-ui"

function ToolOutput({ part }: { part: TawToolPart }) {
  switch (part.toolName) {
    case "getMetrics":
      return <KpiCard part={part} />
    default:
      return null
  }
}
```

Where taw-ui Fits
Your LLM calls a tool and returns JSON. Your AI SDK delivers it to your app. taw-ui renders it — with a hybrid model: you own the components, we maintain the contracts.
What Makes taw-ui Different
Every schema supports confidence (0–1) and source provenance. No other library surfaces AI uncertainty in the UI.
Components handle 4 states — loading, streaming, output, error — from a single prop. No conditional rendering.
Numbers count up with springs. Skeletons shimmer with physics. Entrances are settled, not popped.
Parse failures render with field-level details and "Did you mean?" suggestions. Never silent null.
Works with Vercel AI SDK, Anthropic SDK, OpenAI SDK, or raw JSON. No vendor lock-in.
One Zod schema defines tool output, validates on server, validates on client, infers types.
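The confidence field from the first point above could drive a badge with logic roughly like this. The thresholds and labels here are invented for illustration; taw-ui's actual mapping may differ:

```typescript
// Map a 0-1 confidence score to a badge label.
// Thresholds and labels are illustrative, not taw-ui's real ones.
function confidenceBadge(confidence: number): string | null {
  if (confidence >= 0.9) return null // high confidence: no badge shown
  if (confidence >= 0.6) return "Moderate confidence"
  return "Low confidence"
}
```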
Error Handling in Action
When the LLM returns `{ title: "Revenue", amount: 142580 }` instead of the expected schema, taw-ui doesn't silently fail:

```
taw-ui: KpiCard received invalid data
```
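A "Did you mean?" suggestion like the one described earlier can be produced by matching an unrecognized key against the schema's expected keys. This is a simplified sketch using edit distance, not taw-ui's internals:

```typescript
// Levenshtein edit distance between two strings.
function editDistance(a: string, b: string): number {
  const dp = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  )
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,                                  // deletion
        dp[i][j - 1] + 1,                                  // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1) // substitution
      )
    }
  }
  return dp[a.length][b.length]
}

// Suggest the closest expected key for an unknown one, but only if it is
// close enough to plausibly be a typo or rename.
function didYouMean(unknownKey: string, expectedKeys: string[]): string | null {
  let best: string | null = null
  let bestDist = Infinity
  for (const key of expectedKeys) {
    const d = editDistance(unknownKey, key)
    if (d < bestDist) {
      bestDist = d
      best = key
    }
  }
  return bestDist <= 3 ? best : null
}
```

For the invalid payload above, a checker along these lines could flag `title` and `amount` as unknown and point at the nearest expected field names.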