For Firebase app builders using Realtime Database + Vertex AI.
FireGen turns Realtime Database into your universal Generative AI API. AI Request Analyzer understands your prompt and automatically selects the best model and valid parameters. Write a string. Watch results stream back. Polling, LROs, GCS URLs, and auth — handled.
"firegen-jobs", "your prompt") onValue(jobRef, ...) → response.url
```ts
// AI-assisted mode: just write a prompt string
import { getDatabase, ref, push, onValue } from "firebase/database";

const rtdb = getDatabase();
const jobRef = push(ref(rtdb, "firegen-jobs"), "Vertical video of a waterfall with ambient sound");

// Subscribe to updates
onValue(jobRef, (snap) => {
  const job = snap.val();
  if (!job) return;
  if (job.status === "succeeded") {
    // Media outputs provide a signed URL under response.url
    player.src = job.response.url;
  } else if (job.status === "failed") {
    console.error(job.response.error?.message);
  }
});
```
Zero SDK juggling. No model guessing. No schema errors. Just RTDB.
FireGen understands your natural‑language prompt, picks the best model across Veo/Imagen/Gemini/Lyria/TTS, and fills valid parameters automatically. It preserves your prompt verbatim and saves the reasoning trail for transparency.
The prompt `"Vertical video of a waterfall with ambient sound"` becomes a structured job, with the reasoning trail stored under `_meta.reasons`:
```json
{
  "status": "requested",
  "request": {
    "type": "video",
    "model": "veo-3.1-fast-generate-preview",
    "prompt": "Vertical video of a waterfall with ambient sound",
    "duration": 8,
    "aspectRatio": "9:16",
    "audio": true
  },
  "_meta": {
    "aiAssisted": true,
    "reasons": ["…two-step reasoning chain…"]
  }
}
```
Stored at `firegen-jobs/{jobId}/_meta/reasons` for transparency and debugging.
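Reading the reasoning trail back is an ordinary RTDB read. The sketch below models the job shape from the JSON above as a pure helper, so it can be used on any snapshot value; the type names (`Job`, `JobMeta`) and the helper itself are illustrative, not part of FireGen's API.

```ts
// Job shape as documented above; only the fields we need here.
type JobMeta = { aiAssisted?: boolean; reasons?: string[] };
type Job = { status: string; _meta?: JobMeta };

// Returns the analyzer's reasoning trail, or [] if the job
// was written in explicit mode (no AI-assisted analysis ran).
function reasoningTrail(job: Job): string[] {
  return job._meta?.aiAssisted ? job._meta.reasons ?? [] : [];
}

// With firebase/database (not executed here):
// const snap = await get(ref(rtdb, `firegen-jobs/${jobId}`));
// console.log(reasoningTrail(snap.val()));
```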
Deploy FireGen to your Firebase project (set region and bucket). Works with RTDB.
AI‑assisted: write a string prompt. Explicit: send a structured request. Both go to `firegen-jobs`.
Use onValue to read response.url (media) or response.text (text/STT) when status is succeeded.
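Since media jobs surface `response.url` and text/STT jobs surface `response.text`, a small helper can pick whichever output is present once the job succeeds. This is a sketch; the field names come from the docs above, but the status strings other than `succeeded` are assumptions.

```ts
// Minimal job shape for reading outputs (fields per the docs above).
type FireGenJob = {
  status: string; // e.g. "requested" | "succeeded" | "failed" (assumed set)
  response?: { url?: string; text?: string; error?: { message: string } };
};

// Returns the signed URL (media) or generated text (text/STT),
// or null while the job is still in flight or has failed.
function outputOf(job: FireGenJob): string | null {
  if (job.status !== "succeeded" || !job.response) return null;
  return job.response.url ?? job.response.text ?? null;
}
```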
A Firebase‑native pattern. Use AI‑assisted mode (string prompt) or explicit mode (structured request).
AI‑assisted: write a string prompt to `firegen-jobs/{jobId}`.
Explicit: write a structured object.
The AI Request Analyzer picks the best model and valid parameters, then transforms your node into a structured job.
Functions v2 + Task Queue handle sync/async models. You subscribe and read response.url or response.text when succeeded.
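Explicit mode skips the analyzer: you write the structured request yourself. The sketch below reuses the request shape from the AI-assisted job shown earlier; treat the exact field set per model as an assumption and check the extension docs for your target model.

```ts
// Explicit mode: a structured request instead of a prompt string.
// Field set mirrors the AI-assisted job JSON above (illustrative).
const explicitRequest = {
  type: "video",
  model: "veo-3.1-fast-generate-preview",
  prompt: "Vertical video of a waterfall with ambient sound",
  duration: 8,
  aspectRatio: "9:16",
  audio: true,
};

// With firebase/database (not executed here):
// const jobRef = push(ref(rtdb, "firegen-jobs"), explicitRequest);
// ...then subscribe with onValue exactly as in AI-assisted mode.
```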
```
Client ──▶ RTDB /firegen-jobs ──▶ Functions (onCreate)
  │                 │
  │ ◀───────────────┘ analyze prompt → structured request
  │
  │   ┌─▶ Task Queue Poller ──▶ Vertex Operations.get()
  │   │           │
  │   └───────────┴── (backoff, TTL, retries)
  │
  ◀──── real-time updates ◀── RTDB /firegen-jobs (status/response/error)
```
DX‑first, Firebase‑native, now with AI‑assisted routing.
Prompt‑to‑model routing with validation. No more guessing IDs or params; FireGen sets them correctly and saves the reasoning.
Realtime triggers, Task Queue, Functions v2, secure rules. No extra infra.
Exponential backoff + jitter. Cancel, TTL, and dead-letter patterns baked in.
Veo writes to GCS via storageUri. We return signed URLs you can stream.
Same job shape for video, image, text, audio. Sync and async flows unified.
Rules restrict per-user reads/writes. App Check ready. Keys kept server-side.
Install, set bucket & location, done. Write a string or an object — your choice.
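The "exponential backoff + jitter" the poller relies on can be sketched in a few lines. This is full jitter (delay drawn uniformly from zero up to the exponential cap); the constants are illustrative, not FireGen's actual values.

```ts
// Exponential backoff with full jitter, as used conceptually by the
// Task Queue poller. baseMs and capMs are illustrative defaults.
function backoffMs(attempt: number, baseMs = 1000, capMs = 60000): number {
  const exp = Math.min(capMs, baseMs * 2 ** attempt); // grow, then clamp
  return Math.random() * exp; // "full jitter": uniform in [0, exp)
}
```

Full jitter spreads retries across the whole window, which avoids thundering-herd spikes when many jobs poll the same long-running operation backend.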
We’ve been there: multiple SDKs, outdated examples, polling edge cases, storage hand-offs, and auth. FireGen compresses days into minutes.
| Approach | Dev Effort | Time-to-first-result |
|---|---|---|
| Direct Vertex SDKs | High (LROs, storage, auth, docs) | Days → weeks |
| Workflows / Scheduler DIY | Medium (new services, YAML/cron) | Days |
| FireGen (Extension) | Low (write node + subscribe) | Minutes |
Free for the hackathon & early access. Usage billed by Firebase & Vertex AI as usual.
Solo makers & prototypes.
Teams shipping production apps.
“Veo took us three days before FireGen. Now it’s two lines of code. Unreal.”
“The RTDB pattern is a universal API. My team didn’t touch Vertex docs once.”
Quick answers for builders.
Jobs are written to `firegen-jobs/{jobId}`. The AI Request Analyzer selects the best model (Veo/Imagen/Gemini/Lyria/TTS), fills valid parameters, and stores the reasoning at `_meta.reasons`.
On success, the job node carries the result with signed URLs.
Media is written to GCS (via `storageUri`), with optional mirroring to Firebase Storage.
Be the first to try FireGen, our Firebase Extension that turns RTDB into a universal Generative AI API.