Accelerate VR Development with AI & Immersive Web SDK
Last year at Meta Connect, we launched the Immersive Web SDK (IWSDK), our open-source framework for building VR on the web. It handles the hard parts of VR development (physics, hand tracking, grab interactions, movement, and spatial UI) so you can focus on your idea and your creativity.
Today, we’re introducing a fully integrated agentic workflow for IWSDK to further accelerate your VR development. The AI writes code, tests interactions, and fixes bugs alongside you. Developers, game designers, artists, and everyone in between can open an IWSDK project in an AI coding tool and go from idea to working VR experience without writing code yourself. No headset needed.
To test the efficacy of IWSDK with agentic workflows, we revisited one of our showcase WebXR apps, Project Flowerbed. This immersive VR gardening experience delighted users and inspired developers at release, and it originally required tens of thousands of lines of custom code.
Using the agentic workflow, we reused Project Flowerbed’s existing creative assets and rebuilt its codebase from the ground up in just 15 hours. This isn’t fixing a typo or generating boilerplate. It’s a full, interactive VR experience for the web, rebuilt by AI using IWSDK.
If you love making VR experiences, want to start prototyping for VR, or are looking to understand how to incorporate AI into your workflows, consider this your invitation to start.
Why Build VR on the Web?
WebXR (the open web standard for running VR experiences in a web browser) lets you ship to desktop and all VR devices at once, and users can access your experience instantly through a link. No app store, no downloads. On Meta Quest alone, an average of one million users every month engage with WebXR experiences.
The web is also uniquely suited for AI-assisted development. There's no lengthy compile step, so the AI can write code, reload, and see results instantly, iterating through multiple cycles without your input. That makes it a great place to start, especially if you're new to VR development.
The Agentic Loop
In practice, agentic workflows mean the AI does more than generate code; it also tests and validates it. This closed-loop system is essential for high-quality, reliable results. IWSDK’s AI integration closes this loop entirely, offering developers maximum productivity.
IWSDK works with your favorite AI coding assistant, including Claude Code, Cursor, GitHub Copilot, and Codex. With any of these tools, developers can build high-quality VR experiences through the full agentic loop:
Knowledgeable code generation: Agents generate high-quality code that follows framework patterns and best practices because they can access and traverse the complete IWSDK documentation and codebase.
Scene understanding and testing: Agents take screenshots of the VR scene and understand the position, placement, and name of every object, so they can make informed decisions. Beyond understanding the scene, agents can interact with the world: the AI moves around, picks up objects, presses buttons, and simulates hand gestures, testing whether things work in a dynamic VR environment.
Validation: When something is wrong, the agent can make changes on its own and retest, cycling through this loop until it finds a solution. The agentic loop also makes it easier for anyone on your team to fine-tune the experience with feedback like “that feels too fast” or “move those closer together.” It maps your words to the right objects, adjusts their settings, and visually confirms the change.
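The loop described above can be sketched as a simple generate-test-fix cycle. This is an illustrative sketch only: the function names (generatePatch, runSceneTest) are hypothetical stand-ins for what the agent and its tooling do, not IWSDK APIs.

```typescript
// Illustrative sketch of the agentic loop. All names here are
// hypothetical; IWSDK's actual integration works through your AI
// coding tool, not through these functions.

type TestResult = { passed: boolean; feedback: string };

async function agenticLoop(
  goal: string,
  generatePatch: (goal: string, feedback: string) => Promise<string>,
  runSceneTest: (patch: string) => Promise<TestResult>,
  maxAttempts = 5
): Promise<string | null> {
  let feedback = "";
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    // 1. Generate: the agent writes code toward the goal,
    //    incorporating any feedback from the last failed run.
    const patch = await generatePatch(goal, feedback);
    // 2. Test: load the scene and exercise the interaction.
    const result = await runSceneTest(patch);
    // 3. Validate: stop on success, otherwise loop with the errors.
    if (result.passed) return patch;
    feedback = result.feedback;
  }
  return null; // Out of attempts; hand back to the human.
}
```

The key property is that validation feeds back into generation, so the agent converges on working code without waiting for your input at every step.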
Getting Started: Try it for yourself
The best way to understand the value of the agentic loop and how you can apply it to your VR experience is to try it yourself. IWSDK makes it simple to dive in and start building:
Create your project. Run one command (npm create @iwsdk) and pick the AI tool you use (Claude Code, Cursor, GitHub Copilot, or Codex). Everything is configured automatically with no extra setup.
Describe what you want. Open the project in your AI tool and provide your idea as a prompt, such as: "Build me an RPG-style VR game with neon blocks and a laser sword."
Watch it build. The AI writes the code, loads the scene, tests interactions, finds bugs, fixes them, and repeats, all without you touching anything. You can also enable collaboration mode to work with the AI inside the same browser tab.
Refine by prompting. Say: "The blocks feel too fast." The AI adjusts the settings, confirms the change with you, and saves it.
Share with a link. Ask your AI coding tool to help you deploy to any hosting provider (GitHub Pages, Vercel, Netlify, etc.) and send the URL. Anyone can open it in VR.
That's all you need to get started. But if you want to know what's really happening under the hood, keep reading. 👇
Under the Hood
IWSDK has a purpose-built AI integration layer that gives coding assistants something they've never had before: spatial understanding of a live 3D scene.
The Framework
The SDK is built on Three.js with a high-performance Entity Component System (ECS) powered by ELICS, plus Vite-based tooling for instant hot reload. Out of the box it provides Havok physics, one- and two-hand and distance grab interactions, teleport and smooth locomotion, fully articulated hand tracking, spatial UI panels (UIKitML), 3D audio, AR scene understanding, and browser-based XR emulation via IWER, so no headset is required during development.
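To make the ECS concrete, here is a minimal toy sketch of the pattern: entities are plain IDs, components are data attached to them, and systems are functions that run over queries each frame. This is illustrative only, not the ELICS or IWSDK API.

```typescript
// Toy ECS sketch. Not ELICS: real ECS libraries add schemas,
// archetypes, and cache-friendly storage on top of this idea.

type Entity = number;

class World {
  private next: Entity = 0;
  private components = new Map<string, Map<Entity, unknown>>();

  createEntity(): Entity {
    return this.next++;
  }

  addComponent<T>(e: Entity, name: string, data: T): void {
    if (!this.components.has(name)) this.components.set(name, new Map());
    this.components.get(name)!.set(e, data);
  }

  // Query all entities that have every named component.
  query(...names: string[]): Entity[] {
    const maps = names.map((n) => this.components.get(n) ?? new Map());
    if (maps.length === 0) return [];
    return [...maps[0].keys()].filter((e) => maps.every((m) => m.has(e)));
  }

  get<T>(e: Entity, name: string): T {
    return this.components.get(name)!.get(e) as T;
  }
}

// A "system" is just a function run each frame over a query.
function movementSystem(world: World, dt: number): void {
  for (const e of world.query("Position", "Velocity")) {
    const p = world.get<{ x: number }>(e, "Position");
    const v = world.get<{ x: number }>(e, "Velocity");
    p.x += v.x * dt;
  }
}
```

Because behavior lives in systems and data lives in components, this structure is easy for both humans and agents to reason about: adding grab support to an object is attaching a component, not rewriting a class hierarchy.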
How the AI Closes the Loop
Your AI coding tool connects to an MCP (Model Context Protocol) server running inside the Vite dev server, which bridges to the browser over WebSocket. This gives the AI access to over 40 specialized tools across scene inspection, XR device emulation, ECS debugging, and semantic code search:
See the scene. Capture screenshots and query the full Three.js scene hierarchy to understand every object's position, components, and relationships.
Control XR input. Move the emulated headset, point controllers, press buttons, switch to hand tracking, and simulate pinch gestures, all programmatically through IWER's RemoteControlInterface.
Debug the ECS. Pause the simulation, step through frames, toggle individual systems on/off, patch component values live, and diff state snapshots to pinpoint exactly what changed.
Search real APIs. Query a local RAG server that performs semantic search over more than 3,000 indexed code chunks from the IWSDK codebase, so the AI always references real function signatures and patterns, not hallucinated ones.
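For a sense of what the WebSocket bridge carries, MCP tool calls are JSON-RPC 2.0 messages using the protocol's `tools/call` method. The sketch below shows that wire shape; the tool name `capture_screenshot` and its arguments are hypothetical examples, not necessarily tools that IWSDK's server actually exposes.

```typescript
// Sketch of an MCP tool call on the wire. MCP is built on JSON-RPC
// 2.0 and "tools/call" is part of the protocol; the specific tool
// name and arguments below are invented for illustration.

interface McpToolCall {
  jsonrpc: "2.0";
  id: number;
  method: "tools/call";
  params: { name: string; arguments: Record<string, unknown> };
}

function makeToolCall(
  id: number,
  name: string,
  args: Record<string, unknown>
): McpToolCall {
  return {
    jsonrpc: "2.0",
    id,
    method: "tools/call",
    params: { name, arguments: args },
  };
}

// The dev server forwards a message like this to the browser over
// WebSocket and returns the tool's result to the coding assistant.
const request = makeToolCall(1, "capture_screenshot", { view: "headset" });
```

Every tool in the list above (scene queries, input emulation, ECS debugging, code search) is reached through this same request shape, which is why any MCP-capable coding assistant can drive it.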
Every project also ships with pre-built agent skills for common workflows (architecture planning, asset browsing, UI panel building, pointer interaction, model preview, and XR mode testing), a project knowledge file with best practices and performance constraints, and pre-configured permissions so the AI works autonomously.