Same research. Many voices.
AlphaInk is the rendering arm of the Haitu ecosystem: haituresearch.com generates the analytical viewpoint, alphahub.cc stores the skills agents use to think, and AlphaInk stores the formats agents use to render.
Two surfaces, one library
Every format on AlphaInk has two consumers: humans browsing the site (you, right now) and AI agents fetching format definitions at runtime when they need to render something. The library is the same; the interfaces differ.
What lives in the registry
- Charts — single visualizations with a declared data shape (a stacked-bar 13F holders chart, a multi-line margin chart).
- Vivid devices — non-chart UI building blocks (a before/after drama panel, a 3-up stat card row).
- Recipes — compositions: a "vivid infographic" recipe references a hero card + 3-up stats + a margin chart + a verdict panel, in a specific order, with a specific voice.
- Voice rules — headline-rewrite rules per audience.
- Layouts — page-level structure (vertical-scroll magazine, two-column research report).
- Audience styles — palette, type, density overlays per audience (retail-en-vivid, retail-cn-wechat, institutional-print).
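To make the registry's entry types concrete, here is a sketch of what a recipe entry might look like when it composes other registry items. The field names (`kind`, `composes`, `voice`) and IDs are illustrative assumptions, not AlphaInk's actual schema:

```python
# Hypothetical registry entry for a "vivid infographic" recipe.
# Field names and IDs are illustrative, not AlphaInk's real schema.
recipe = {
    "kind": "recipe",
    "id": "vivid-infographic",
    "composes": [  # ordered references to other registry entries
        {"kind": "vivid-device", "id": "hero-card"},
        {"kind": "vivid-device", "id": "stat-row-3up"},
        {"kind": "chart", "id": "multi-line-margin"},
        {"kind": "vivid-device", "id": "verdict-panel"},
    ],
    "voice": "retail-en-vivid",  # voice rules + audience style overlay
}

# The recipe pins both the order of components and the voice they render in.
order = [component["id"] for component in recipe["composes"]]
print(order)
```

The point of the sketch: a recipe does not contain rendering code itself; it is a named, ordered composition of other registry entries plus a voice.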
The render protocol
An AI agent asks AlphaInk: "Given this content, this audience, and this device, what should I render?" AlphaInk returns a format manifest; the agent fills the manifest's template with its own data and returns the finished artifact. When a new format is published, every agent can use it on its next fetch.
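The handshake above can be sketched as follows. This is a minimal illustration under stated assumptions: the function name, manifest fields, and template shape are invented for the example, since AlphaInk's agent-fetch API is not yet published (see Status below):

```python
# Minimal sketch of the render handshake. The function, field names,
# and manifest shape are assumptions, not AlphaInk's published API.

def fetch_manifest(content_type: str, audience: str, device: str) -> dict:
    """Stand-in for a registry lookup; returns a format manifest."""
    return {
        "format_id": "vivid-infographic",
        # An HTML skeleton with named slots the agent must fill.
        "template": "<h1>{headline}</h1><p>{verdict}</p>",
        "slots": ["headline", "verdict"],
    }

# 1. Agent asks: given this content, audience, and device, what do I render?
manifest = fetch_manifest("investment-thesis", "retail-en", "mobile")

# 2. Agent fills the template's slots with its own data -> the artifact.
data = {"headline": "Margins are widening", "verdict": "Hold"}
artifact = manifest["template"].format(**data)
print(artifact)
```

The division of labor is the key design point: the registry owns the format (template, slots, order), while the agent only supplies data, so a newly published format is usable without changing agent code.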
Why we built it
AI apps shouldn't carry rendering complexity in their core logic. The same investment thesis should be deliverable as a desktop research report, a mobile-first infographic, a 5-second card, an audio brief — without the agent re-implementing each format. AlphaInk separates the rendering library from the agent app.
Status
v1 is a read-only registry with a hand-curated seed corpus and HTML previews. Audio, video, code, and dynamic output types are in the schema but not yet rendered. Agent-fetch API and publishing UI are sequenced in SPEC.md phases 2-5.