Docs

Overview

When an LLM calls a tool, the result comes back as raw JSON. Today, developers either dump it as text or write one-off UI for every tool. taw-ui solves this with schema-validated, motion-native React components that handle the full tool call lifecycle.

The Problem

Without taw-ui, the tool's output is dumped into the chat as text:

What's our current revenue?
Here's the current revenue data:
{
  "stats": [{
    "key": "revenue",
    "label": "Revenue",
    "value": 142580,
    "format": { "kind": "currency", "currency": "USD" },
    "diff": { "value": 12.4 },
    "sparkline": { "data": [95000, 108000, 122000, 135000, 142580] }
  }],
  "source": {
    "label": "Stripe Dashboard",
    "freshness": "2 hours ago"
  }
}

With taw-ui, the same response renders as a KPI card:

Monthly Revenue
Current month to date
Revenue: $142,580 (+12.4%)
Based on partial data — full sync completes tonight
88% confidence · Stripe Dashboard · just now

Same data, same tool call. One line of code: <KpiCard part={part} />

How It Works

01 Define your tool

Your tool returns JSON matching a taw-ui schema. The same schema serves as your tool's outputSchema and powers client-side validation.

02 Render the part

Pass the tool call part to the component. It handles loading, streaming, success, and error states automatically.

03 Ship with confidence

Invalid data? Helpful error. Missing fields? Skeleton with shimmer. AI uncertain? Confidence badge. It just works.

Minimal Example

server.ts — define tool
import { tool } from "ai"
import { z } from "zod"
import { KpiCardSchema } from "@/components/taw/kpi-card"

const getMetrics = tool({
  description: "Get business metrics",
  parameters: z.object({
    metric: z.string(),
  }),
  outputSchema: KpiCardSchema,
  execute: async ({ metric }) => {
    const data = await fetchMetric(metric)
    return {
      id: metric,
      stats: [{
        key: metric,
        label: data.name,
        value: data.value,
        format: { kind: "currency", currency: "USD" },
        diff: { value: data.change },
        sparkline: { data: data.history },
      }],
      source: {
        label: data.source,
        freshness: "just now",
      },
    }
  },
})
chat.tsx — render component
import { KpiCard } from "@/components/taw/kpi-card"
import type { TawToolPart } from "taw-ui"

function ToolOutput({ part }: { part: TawToolPart }) {
  switch (part.toolName) {
    case "getMetrics":
      return <KpiCard part={part} />
    default:
      return null
  }
}
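The ToolOutput component above assumes the SDK hands you individual tool call parts. In practice a message usually carries a mixed parts array; a small helper can pull out just the tool parts for routing. The message shape below is an illustrative assumption (loosely modeled on the Vercel AI SDK's parts array), not taw-ui's actual types.

```typescript
// Hypothetical message shape for illustration; the real parts array
// from your SDK may use different field names.
interface ChatMessage {
  role: "user" | "assistant";
  parts: Array<{ type: string; toolName?: string; output?: unknown }>;
}

// Extract just the tool call parts so each can be routed to a
// component, as ToolOutput does above.
function toolParts(message: ChatMessage) {
  return message.parts.filter((p) => p.type === "tool-call");
}

const msg: ChatMessage = {
  role: "assistant",
  parts: [
    { type: "text" },
    { type: "tool-call", toolName: "getMetrics", output: { id: "revenue" } },
  ],
};

console.log(toolParts(msg).map((p) => p.toolName)); // [ 'getMetrics' ]
```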

Where taw-ui Fits

Your LLM calls a tool and returns JSON. Your AI SDK delivers it to your app. taw-ui renders it — with a hybrid model: you own the components, we maintain the contracts.

LLM Provider
OpenAI, Anthropic, Google — calls tools, returns structured JSON
Any AI SDK
Vercel AI SDK, Anthropic SDK, OpenAI SDK — delivers tool call parts to your app
Your App
Your Components (cli)
Copied into your project via npx taw-ui add — full ownership, customize anything
taw-ui (npm)
Schemas, types, validation, actions — versioned npm package that guarantees contracts

What Makes taw-ui Different

AI-native fields

Every schema supports confidence (0–1) and source provenance. No other library surfaces AI uncertainty in the UI.

Part-aware lifecycle

Components handle 4 states — loading, streaming, output, error — from a single prop. No conditional rendering.
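The idea can be sketched as a discriminated union over the part's lifecycle state: one switch replaces the conditional rendering, and each state maps to exactly one presentation. The field names here are illustrative assumptions, not taw-ui's actual API.

```typescript
// Sketch of a part-aware lifecycle, assuming a discriminated union;
// taw-ui's real TawToolPart type may differ.
type ToolPartSketch =
  | { state: "loading" }
  | { state: "streaming"; partial: Record<string, unknown> }
  | { state: "output"; output: Record<string, unknown> }
  | { state: "error"; errorText: string };

// One exhaustive switch: every lifecycle state gets a presentation,
// so no case can silently fall through to null.
function presentation(part: ToolPartSketch): string {
  switch (part.state) {
    case "loading":
      return "skeleton";
    case "streaming":
      return "skeleton + partial fields";
    case "output":
      return "full card";
    case "error":
      return `error card: ${part.errorText}`;
  }
}
```

Because the union is exhaustive, adding a fifth state would be a compile-time error at the switch rather than a silent rendering gap.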

Spring physics motion

Numbers count up with springs. Skeletons shimmer with physics. Entrances are settled, not popped.

Helpful errors, never null

Parse failures render with field-level details and "Did you mean?" suggestions. Never silent null.

SDK-agnostic

Works with Vercel AI SDK, Anthropic SDK, OpenAI SDK, or raw JSON. No vendor lock-in.

Schema = source of truth

One Zod schema defines tool output, validates on server, validates on client, infers types.
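The principle can be shown with a hand-rolled stand-in (not taw-ui's real KpiCardSchema, which uses Zod): one definition describes the static type, and the runtime check validates exactly that shape, so server and client can never drift apart.

```typescript
// Simplified stand-in for the single-schema idea; taw-ui's actual
// schemas carry more fields (format, diff, sparkline, source, ...).
interface KpiStat {
  key: string;
  label: string;
  value: number;
}

interface KpiData {
  id: string;
  stats: KpiStat[];
}

// Runtime validation of the same shape the type describes.
function isKpiData(x: unknown): x is KpiData {
  if (typeof x !== "object" || x === null) return false;
  const o = x as Record<string, unknown>;
  return (
    typeof o.id === "string" &&
    Array.isArray(o.stats) &&
    o.stats.every(
      (s) =>
        typeof s === "object" &&
        s !== null &&
        typeof (s as KpiStat).key === "string" &&
        typeof (s as KpiStat).label === "string" &&
        typeof (s as KpiStat).value === "number"
    )
  );
}

console.log(isKpiData({ id: "revenue", stats: [{ key: "revenue", label: "Revenue", value: 142580 }] })); // true
console.log(isKpiData({ title: "Revenue", amount: 142580 })); // false
```

With Zod, the type is inferred from the schema instead of being written twice, which is what makes the schema the single source of truth.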

Error Handling in Action

When the LLM returns { title: "Revenue", amount: 142580 } instead of the expected schema, taw-ui doesn't silently fail:

Schema Validation Failed

taw-ui: KpiCard received invalid data

id: missing (invalid_type)
stats: missing (invalid_type)
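A field-level report like this can be derived from validation issues. The formatter below is a hypothetical sketch that assumes issues shaped like Zod's ZodIssue objects; taw-ui's actual error component may format them differently.

```typescript
// Minimal shape of a validation issue, modeled on Zod's ZodIssue;
// an assumption for illustration only.
interface Issue {
  code: string;                 // e.g. "invalid_type"
  path: (string | number)[];    // location of the bad field
  received: string;             // "undefined" when the field is missing
}

// Turn each issue into a one-line, field-level message.
function formatIssues(issues: Issue[]): string[] {
  return issues.map((i) => {
    const field = i.path.join(".") || "(root)";
    const kind = i.received === "undefined" ? "missing" : "invalid";
    return `${field}: ${kind} (${i.code})`;
  });
}

// The { title, amount } payload from the example above would yield
// issues for the two required fields it lacks:
const issues: Issue[] = [
  { code: "invalid_type", path: ["id"], received: "undefined" },
  { code: "invalid_type", path: ["stats"], received: "undefined" },
];

console.log(formatIssues(issues));
// [ 'id: missing (invalid_type)', 'stats: missing (invalid_type)' ]
```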