
streamdown-rn

Edgar Pavlovsky (CEO), November 11, 2025

This week, we're launching a set of unique tools for building AI apps. We're calling it the TypeScript Collection. After we launched the collection yesterday, I considered that a more appropriate name might be the App Collection - ultimately, we created these tools to help us build more beautiful AI app experiences.

I think we landed on the TypeScript Collection, in part, to communicate that frontend and full stack development would only be a part of our work. There's such an interesting and irrational stigma around app-level development if you come from an infrastructure background - it's generally considered less serious. I come from backend, and it took me years not to hate any kind of frontend work with a fury. But there's something incredibly important about it, especially right now: this is the work that's making the biggest difference to users.

I love engineering, but I'm first and foremost a founder - I believe in finding asymmetric opportunities to build things that people want, and a key part of that is taking unpopular bets (it is, after all, by definition the only way you get asymmetric upside when you're betting). Right now, our bet is that there's an underinvestment in software that fosters beautiful AI app experiences, especially open source.

Hence, the TypeScript Collection.

Mobile-native streaming

Our first release from the TypeScript Collection is streamdown-rn, a react native version of Vercel's popular streamdown library, optimized in particular for LLM content streaming on mobile. (We also added a bonus feature I'm really excited about.)

Vercel solved a conceptually simple but very important problem with streamdown: AI-powered responses are more beautiful when they're rendered in markdown, and the most intuitive experience is one where you see formatted markdown render as the response is being printed out. Streamdown is great - it supports everything from traditional markdown to code blocks and mermaid diagrams. Importantly, responses look like markdown even while they're still being printed out - streamdown has the really nice property of intelligently closing incomplete markdown formatting on the client side as the LLM streams out its response. (Incidentally, react-markdown has, since streamdown's launch, also added much better support for streaming-based use cases like incomplete markdown rendering.)
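
To make "closing incomplete markdown" concrete, here's a deliberately naive sketch of the idea - this is not streamdown's actual logic, which is more sophisticated and lives inside the library; it just shows why partial chunks still render cleanly:

// Naive illustration only (NOT streamdown's implementation): close
// unbalanced markers so a partial chunk still renders as valid markdown.
function closeIncompleteMarkdown(partial: string): string {
  let out = partial;
  // If a code fence was opened but not yet closed, close it.
  const fences = (out.match(/```/g) ?? []).length;
  if (fences % 2 === 1) out += '\n```';
  // If a bold marker was opened but not yet closed, close it.
  const bolds = (out.match(/\*\*/g) ?? []).length;
  if (bolds % 2 === 1) out += '**';
  return out;
}

// "Here is **important" -> "Here is **important**" while the stream continues.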

There's just one problem: right now, streamdown only supports React web, which means it's not natively compatible with the react native stack. The reason is relatively straightforward: streamdown's dependencies rely on the browser DOM, which doesn't exist in mobile contexts. That's a perfectly reasonable way to build a web library, but it comes with the tradeoff of requiring a rebuild for mobile.

While I would've started with a web library if I were Vercel as well, this highlights another problem that I think about a lot (and wrote about in our introduction article) - AI apps today are primarily built for web, not mobile. It's generally easier to build for web, so this again is understandable, but the world still runs on mobile - a lot more people interact with software through a mobile phone than through a computer, and this will continue to hold true with the emerging generation of AI applications.

We ran into this streamdown gap building the mobile experience out for Scout, our upcoming consumer finance app - and it seemed like a perfect opportunity to contribute open source back to the community. Today, streamdown-rn powers our open source Mallory experience, which we built for Corbits to showcase the power of x402 APIs (a topic for another article).

streamdown-rn is designed to support all of the same features you'd get out of streamdown, just for react native:

  • Built-in typography, formatting, GFM
  • Interactive code blocks
  • Math equations
  • Mermaid diagrams
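
For a sense of how this slots into a chat screen, here's a minimal usage sketch. The exact export name and props may differ from what's shown here - check the docs for the real API; this is illustrative only:

// Illustrative sketch: the `Streamdown` export name and props here are
// assumptions, not the documented streamdown-rn API.
import React from 'react';
import { View } from 'react-native';
import { Streamdown } from 'streamdown-rn';

type Props = { partialResponse: string };

// Renders an LLM response as it streams in; unterminated markdown
// (open code fences, bold markers, etc.) gets closed on the fly.
export function AssistantMessage({ partialResponse }: Props) {
  return (
    <View>
      <Streamdown>{partialResponse}</Streamdown>
    </View>
  );
}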

Similar to the way streamdown was designed as a drop-in replacement for react-markdown, streamdown-rn is designed as a drop-in replacement for react-native-markdown-display, which is itself a fork of react-native-markdown-renderer. Without question, the react native markdown world is a bit more of the wild west than its web counterpart.

As we built streamdown-rn, we decided to add a unique additional feature that I'm incredibly excited to share:

The supercharge: Dynamic component injection

A major part of my belief on the future evolution of AI applications is that we will move past the chat-first experience and eventually consider it arcane. There are two main points driving my view:

  • For starters, text → GUI is how software 1.0 went. We started with the terminal, and moved to more interactive, immersive visual experiences. This makes sense from both a functional perspective (there are things you can do much faster in a GUI than in the CLI - sorry engineers) and from a user experience perspective (visuals can be more pleasant - again, sorry engineers).
  • Second, I don't think this perspective of mine is unique - dynamic user interfaces are a growing topic in AI, and existing chat interfaces are starting to play around with injecting visuals.

Since we've mentioned Vercel so much today, it's worth noting that the Vercel AI SDK has had support for Generative UI components for a while, which gave LLMs the ability to use tools that provided saturated react components back to the client. The flow would go something like this:

  1. The user asks the AI what the weather is today.
  2. The AI has a weather tool that:
    1. Fetches information about the weather
    2. Returns a saturated React component with the weather back to the user interface
  3. The user gets a beautiful UI!
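
In code, that flow looks roughly like the Generative UI pattern in the AI SDK's streamUI API. The sketch below is paraphrased, so exact signatures may differ across SDK versions, and fetchWeather and WeatherCard are hypothetical stand-ins:

// Rough sketch of the coupled tool pattern (signatures may vary across
// AI SDK versions). `fetchWeather` and `WeatherCard` are hypothetical.
import { streamUI } from 'ai/rsc';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';
import { WeatherCard } from './components/WeatherCard';
import { fetchWeather } from './lib/weather';

export async function answer(prompt: string) {
  const result = await streamUI({
    model: openai('gpt-4o'),
    prompt,
    tools: {
      getWeather: {
        description: 'Show the current weather for a city',
        parameters: z.object({ city: z.string() }),
        generate: async function* ({ city }) {
          yield <p>Fetching weather for {city}…</p>;
          const weather = await fetchWeather(city);
          // The same tool that fetched the data also decides how it renders:
          // data and visuals are coupled inside one tool.
          return <WeatherCard city={city} {...weather} />;
        },
      },
    },
  });

  return result.value;
}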

This was a great improvement and a key piece to breaking AI applications out of the chat-only, text-only construct. However, you'll notice that the weather tool coupled data with visuals. The AI doesn't have flexibility in this context, for example, to use one weather tool to fetch weather data and another mechanism to show that weather data. It's also limited to one-fetch-one-show: if we wanted to give the LLM a way to fetch weather data for multiple cities at once and show all of that data together, we'd have to spend time building specific weather tools that support these use cases - not very flexible.

We took the idea of dynamic user interface components a step further and made it generic.

How dynamic component injection works in streamdown-rn

streamdown-rn supports a double-bracket format that gives your AI the ability to inject arbitrary React Native components directly into markdown responses. For example:

Here's some **bold text** and a dynamic component:

{{component: "TokenCard", props: {
  "tokenSymbol": "BTC",
  "tokenPrice": 45000,
  "priceChange24h": 2.5
}}}

This turns your chat interface into a programmable UI platform where the LLM composes complex, interactive interfaces on the fly. For now, we've constrained streamdown-rn to work with a whitelisted component registry that the developer must set up and inform the LLM of - but you can imagine a future where this kind of format allows LLMs to arbitrarily create their own react components to render information from scratch. Today, components in the registry are defined with JSON schemas for validation (in the future, perhaps TOON schemas) and carry metadata that helps LLMs make educated decisions about when to use them:

import { ComponentRegistry, ComponentDefinition } from 'streamdown-rn';
import { TokenCard } from './components/TokenCard';
import { Chart } from './components/Chart';

// Define your components with JSON schemas for validation
const components: ComponentDefinition[] = [
  {
    name: 'TokenCard',
    component: TokenCard,
    category: 'dynamic',
    description: 'Displays token information',
    propsSchema: {
      type: 'object',
      properties: {
        tokenSymbol: { type: 'string' },
        tokenPrice: { type: 'number' },
        priceChange24h: { type: 'number' }
      },
      required: ['tokenSymbol', 'tokenPrice']
    }
  },
  {
    name: 'Chart',
    component: Chart,
    category: 'dynamic',
    description: 'Renders a chart',
    propsSchema: {
      type: 'object',
      properties: {
        data: { type: 'array' },
        type: { type: 'string', enum: ['line', 'bar', 'pie'] }
      },
      required: ['data', 'type']
    }
  }
];

You can see more in-depth documentation on the details of implementing a component registry here.

This setup decouples data and visuals for the LLM, which I think is incredibly powerful. It does rely on a more intelligent AI that can understand when and how to use visual components, but I think models are getting to the point where we can start betting on the reliability of such experiences.
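
Informing the LLM about the registry can be as simple as deriving a system-prompt section from the same definitions. The helper below is an application-level sketch (not part of streamdown-rn's API) that reuses the ComponentDefinition array from the registry example above:

import { ComponentDefinition } from 'streamdown-rn';

// Build a system-prompt section that teaches the model which components it
// may inject and which props each one accepts. Application-level sketch,
// not a streamdown-rn API.
export function describeComponents(defs: ComponentDefinition[]): string {
  const catalog = defs
    .map(
      (def) =>
        `- ${def.name}: ${def.description} (props schema: ${JSON.stringify(def.propsSchema)})`
    )
    .join('\n');

  return [
    'You can embed UI components in your markdown responses using',
    '{{component: "Name", props: { ... }}}, with props matching the schema.',
    'Available components:',
    catalog,
  ].join('\n');
}

// e.g. const systemPrompt = `${basePrompt}\n\n${describeComponents(components)}`;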

Open source doesn't happen without you

We'd love to hear what you think of streamdown-rn, and we'd love to see you contribute. We're prioritizing our investment in open source contributions at Dark right now, and as part of that, have up to $20,000 in funding available to open source contributors thanks to the open source funding infrastructure provided by our friends at Merit.

Let me know what you think of streamdown-rn.