QuestCameraKit is a collection of template and reference projects demonstrating how to use Meta Quest's new Passthrough Camera API (PCA) for advanced AR/VR vision, tracking, and shader effects.
- PCA Samples
- Update Notes
- Getting Started with PCA
- Troubleshooting & Known Issues
- Community Contributions
- News
- Acknowledgements & Credits
- License
- Contact
- Purpose: Convert a 3D point in space to its corresponding 2D image pixel.
- Description: This sample shows the mapping between 3D space and 2D image coordinates using the Passthrough Camera API. We use MRUK's EnvironmentRaycastManager to determine a 3D point in our environment and map it to the corresponding location on the camera image. We then sample the pixel at that point to determine the color of a real-world object. A minimal sketch of this projection follows the steps below.
How to run this sample
- Open the `ColorPicker` scene.
- Build the scene and run the APK on your headset.
- Aim the ray onto a surface in your real space and press the A button or pinch your fingers to observe the cube changing its color to the color in your real environment.
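For orientation, here is a minimal sketch of the projection step, assuming you already have a CPU-readable copy of the camera frame plus the camera pose and pinhole intrinsics (fx, fy, cx, cy) that PCA exposes. The class and field names are illustrative, not the sample's actual code:

```csharp
using UnityEngine;

// Minimal sketch of the ColorPicker idea: project a 3D hit point into camera
// pixel coordinates with a pinhole model, then sample the pixel's color.
public class ColorPickerSketch : MonoBehaviour
{
    [SerializeField] private Texture2D cameraFrame;  // CPU-readable passthrough frame
    [SerializeField] private Renderer targetCube;    // cube tinted with the sampled color
    [SerializeField] private Vector2 focalLength;    // fx, fy in pixels (from intrinsics)
    [SerializeField] private Vector2 principalPoint; // cx, cy in pixels (from intrinsics)

    // Call with the 3D hit point from EnvironmentRaycastManager and the camera
    // pose belonging to the same frame.
    public void PickColorAt(Vector3 worldPoint, Pose cameraPose)
    {
        // Transform the world-space point into camera space...
        Vector3 local = Quaternion.Inverse(cameraPose.rotation) * (worldPoint - cameraPose.position);
        if (local.z <= 0f) return; // point is behind the camera

        // ...then apply the pinhole model to get pixel coordinates.
        // Depending on the texture origin you may need to flip y.
        int x = Mathf.RoundToInt(focalLength.x * (local.x / local.z) + principalPoint.x);
        int y = Mathf.RoundToInt(focalLength.y * (local.y / local.z) + principalPoint.y);
        if (x < 0 || y < 0 || x >= cameraFrame.width || y >= cameraFrame.height) return;

        targetCube.material.color = cameraFrame.GetPixel(x, y);
    }
}
```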
- Purpose: Convert 2D screen coordinates into their corresponding 3D points in space.
- Description: Use the Unity Inference Engine framework to run different ML models that detect and track objects. Learn how to convert detected image coordinates (e.g. bounding boxes) back into 3D points for dynamic interaction within your scenes. This sample also shows how to filter labels, so you can, for example, detect only humans and pets to create a safer play area for your VR game. The sample video below is filtered to monitor, person, and laptop. The sample runs at around 60 fps. A sketch of the 2D-to-3D reprojection follows the steps below.
How to run this sample
- Open the `ObjectDetection` scene.
- Install Unity AI Inference (use `com.unity.ai.inference@2.3.0`).
- Select the labels you want to track. Leaving the list empty tracks all objects.
Show all available labels
person bicycle car motorbike aeroplane bus train truck boat traffic light fire hydrant stop sign parking meter bench bird cat dog horse sheep cow elephant bear zebra giraffe backpack umbrella handbag tie suitcase frisbee skis snowboard sports ball kite baseball bat baseball glove skateboard surfboard tennis racket bottle wine glass cup fork knife spoon bowl banana apple sandwich orange broccoli carrot hot dog pizza donut cake chair sofa pottedplant bed diningtable toilet tvmonitor laptop mouse remote keyboard cell phone microwave oven toaster sink refrigerator book clock vase scissors teddy bear hair drier toothbrush
- Build and deploy to Quest. Use the trigger to scan the environment; markers will appear for detections above the configured confidence threshold.
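The reprojection is the inverse of the ColorPicker mapping: unproject the pixel into a ray, then intersect the ray with the environment. A hedged sketch, using plain physics colliders in place of the sample's MRUK environment raycast, with illustrative field names:

```csharp
using UnityEngine;

// Sketch of turning a detected bounding-box center back into a 3D point.
// The sample raycasts against MRUK's environment; Physics.Raycast is used
// here only to keep the sketch self-contained.
public class DetectionToWorldSketch : MonoBehaviour
{
    [SerializeField] private Vector2 focalLength;    // fx, fy in pixels
    [SerializeField] private Vector2 principalPoint; // cx, cy in pixels

    public bool TryGetWorldPoint(Vector2 pixel, Pose cameraPose, out Vector3 worldPoint)
    {
        // Unproject the pixel into a camera-space direction (pinhole model)...
        Vector3 dirLocal = new Vector3(
            (pixel.x - principalPoint.x) / focalLength.x,
            (pixel.y - principalPoint.y) / focalLength.y,
            1f);

        // ...and rotate/translate it into world space with the frame's camera pose.
        var ray = new Ray(cameraPose.position, cameraPose.rotation * dirLocal.normalized);

        if (Physics.Raycast(ray, out RaycastHit hit, 10f))
        {
            worldPoint = hit.point; // where the detection marker gets placed
            return true;
        }
        worldPoint = default;
        return false;
    }
}
```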
- Purpose: Detect and track QR codes in real time. Open webviews or log in to third-party services with ease.
- Description: Similarly to the object detection sample, this sample gets QR code coordinates and projects them into 3D space. Detect QR codes and call their URLs. You can select between a multiple- or single-QR-code mode. The sample runs at around 70 fps for multiple QR codes and a stable 72 fps for a single code. Users can choose between CenterOnly and PerCorner raycasting modes via an enum in the inspector. This enables more accurate rotation tracking for use cases that require it (PerCorner), while preserving a faster fallback (CenterOnly). A minimal decoding sketch follows the steps below.
How to run this sample
- Open the `QRCodeTracking` scene.
- Ensure ZXing DLLs are present (the editor script auto-adds the `ZXING_ENABLED` define).
- Choose Single or Multiple detection mode and the raycast mode (CenterOnly vs PerCorner).
- Build to Quest, point the headset toward QR codes, and interact with the spawned markers.
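For reference, here is a minimal decode sketch assuming the ZXing.Net Unity build (which can decode directly from `Color32[]` pixels) and a CPU-readable copy of the passthrough frame; the class is illustrative, not the sample's tracker:

```csharp
using UnityEngine;
using ZXing; // ZXing.Net — compiled when the ZXING_ENABLED define is set

// Minimal QR decode sketch; the actual sample adds marker raycasting on top.
public class QrDecodeSketch : MonoBehaviour
{
    private readonly IBarcodeReader reader = new BarcodeReader
    {
        AutoRotate = true,
        Options = { TryHarder = true } // trade a little speed for robustness
    };

    public string TryDecode(Texture2D frame)
    {
        // ZXing.Net's Unity build decodes straight from Color32 pixel data.
        Result result = reader.Decode(frame.GetPixels32(), frame.width, frame.height);
        return result?.Text; // e.g. the encoded URL, or null if no code was found
    }
}
```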
- Purpose: Apply stereo passthrough camera-mapped shader effects to virtual surfaces.
- Description: The shader sample is now consolidated into one scene that uses left/right passthrough feeds and per-eye calibration data. Current materials and shaders included in this flow are `StereoPassthroughCameraMapping`, `StereoPassthroughFrostedGlass`, and `StereoPassthroughWavyPortal`. An illustrative material-binding sketch follows the steps below.
How to run this sample
- Open the `CameraMappingForShaders` scene.
- Make sure your passthrough setup is active and the scene has both left and right `PassthroughCameraAccess` components.
- Build to Quest and run. Interact with the sample objects using the stereo shader materials to test camera mapping, frosted glass, and wavy portal effects.
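To illustrate the data flow only: per-eye camera textures and UV scale/offset data get pushed to a camera-mapped material each frame, roughly as below. The property names (`_LeftEyeTex`, `_RightEyeTex`, `_UVScaleOffset`) are placeholders, not the shaders' real uniforms; check the included materials for the actual names.

```csharp
using UnityEngine;

// Illustrative stereo feed binding; property names are assumed placeholders.
public class StereoShaderFeedSketch : MonoBehaviour
{
    [SerializeField] private Material stereoMaterial;
    [SerializeField] private RenderTexture leftEyeFeed;  // from the left PassthroughCameraAccess
    [SerializeField] private RenderTexture rightEyeFeed; // from the right PassthroughCameraAccess
    [SerializeField] private Vector4 uvScaleOffset = new Vector4(1f, 1f, 0f, 0f);

    private void Update()
    {
        stereoMaterial.SetTexture("_LeftEyeTex", leftEyeFeed);
        stereoMaterial.SetTexture("_RightEyeTex", rightEyeFeed);
        // Scale/offset keeps the mapping from mirroring or flipping vertically.
        stereoMaterial.SetVector("_UVScaleOffset", uvScaleOffset);
    }
}
```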
- Purpose: Ask OpenAI's vision model (or any other multimodal LLM) about your current scene.
- Description: We use the OpenAI speech-to-text API to create a command. We then send this command together with a screenshot to the vision model. Lastly, we get the response back and use the text-to-speech API to turn the response text into an audio file in Unity and speak the response. The user can select different speakers, models, and speeds. For the command we can add additional instructions for the model, as well as select an image, image & text, or text-only mode. The whole loop takes anywhere from 2–6 seconds, depending on the internet connection. A compact sketch of the vision request follows the steps below.
How to run this sample
- Open the `ImageLLM` scene.
- Create an OpenAI API key and enter it on the `OpenAI Manager` prefab.
- Select your desired model and optionally give the LLM additional instructions.
- Ensure your Quest headset is connected to a fast/stable network.
- Build the scene and run the APK on your headset.
- Use the voice input (controller or hand gesture) to issue commands; the headset captures a PCA frame and plays back the LLM response via TTS.
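A compact sketch of the vision step only (transcription and TTS omitted): send one captured frame plus the transcribed command to OpenAI's chat completions endpoint. The JSON shape matches OpenAI's public API; the model name and class are placeholders, not the sample's actual manager code.

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

// Sketch: one frame + one text command -> OpenAI chat completions.
public class VisionQuerySketch : MonoBehaviour
{
    [SerializeField] private string apiKey; // your OpenAI API key

    public IEnumerator Ask(Texture2D frame, string command)
    {
        string imageB64 = System.Convert.ToBase64String(frame.EncodeToJPG(75));
        // Note: real code must JSON-escape `command`.
        string body = "{\"model\":\"gpt-4o\",\"messages\":[{\"role\":\"user\",\"content\":[" +
            "{\"type\":\"text\",\"text\":\"" + command + "\"}," +
            "{\"type\":\"image_url\",\"image_url\":{\"url\":\"data:image/jpeg;base64," + imageB64 + "\"}}]}]}";

        using var request = new UnityWebRequest("https://api.openai.com/v1/chat/completions", "POST");
        request.uploadHandler = new UploadHandlerRaw(System.Text.Encoding.UTF8.GetBytes(body));
        request.downloadHandler = new DownloadHandlerBuffer();
        request.SetRequestHeader("Content-Type", "application/json");
        request.SetRequestHeader("Authorization", "Bearer " + apiKey);

        yield return request.SendWebRequest();
        if (request.result == UnityWebRequest.Result.Success)
            Debug.Log(request.downloadHandler.text); // response text feeds the TTS step
        else
            Debug.LogError(request.error);
    }
}
```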
> [!NOTE]
> File uploads are currently limited to 25 MB and the following input formats are supported: `mp3`, `mp4`, `mpeg`, `mpga`, `m4a`, `wav`, `webm`.
You can send commands and receive results in any of these languages:
Show all supported languages
| Afrikaans | Arabic | Armenian | Azerbaijani | Belarusian | Bosnian | Bulgarian | Catalan | Chinese |
| Croatian | Czech | Danish | Dutch | English | Estonian | Finnish | French | Galician |
| German | Greek | Hebrew | Hindi | Hungarian | Icelandic | Indonesian | Italian | Japanese |
| Kannada | Kazakh | Korean | Latvian | Lithuanian | Macedonian | Malay | Marathi | Maori |
| Nepali | Norwegian | Persian | Polish | Portuguese | Romanian | Russian | Serbian | Slovak |
| Slovenian | Spanish | Swahili | Swedish | Tagalog | Tamil | Thai | Turkish | Ukrainian |
| Urdu | Vietnamese | Welsh |
(Demo video: OpenAI.vision.whisper.model.mp4)
- Purpose: Stream the passthrough camera feed over WebRTC to another client, with WebSockets for signaling.
- Description: This sample uses SimpleWebRTC, a Unity-based WebRTC wrapper that facilitates peer-to-peer audio, video, and data communication over WebRTC using Unity's WebRTC package. It leverages NativeWebSocket for signaling and supports both video and audio streaming. You will need to set up your own WebSocket signaling server beforehand, either online or in LAN. You can find more information about the necessary steps here.
How to run this sample
- In Package Manager, click the + button → Add package from git URL and install, in order:
  - `https://github.com/endel/NativeWebSocket.git#upm`
  - `https://github.com/Unity-Technologies/com.unity.webrtc.git`
  - `https://github.com/FireDragonGameStudio/SimpleWebRTC.git?path=/Assets/SimpleWebRTC`
- Open the `WebRTC-Quest` scene.
- On `[BuildingBlock] Camera Rig/TrackingSpace/CenterEyeAnchor/Client-STUNConnection`, set your WebSocket signaling server address (LAN or cloud).
- Build and deploy `WebRTC-Quest` to your Quest 3.
- Open the `WebRTC-SingleClient` scene in the editor (or deploy to another device) to act as the receiving peer.
- Launch both apps. Perform the Start gesture with your left hand (or press the menu button) on Quest to begin streaming.
Troubleshooting
- If you hit compiler errors, re-add the three git packages listed above.
- Use Tools ▸ Update WebRTC Define Symbol after importing packages.
- Ensure your own WebSocket signaling server is running (tutorial here).
- For LAN streaming, leave the STUN address empty; otherwise keep the default.
- Enable the Web Socket Connection active toggle so Quest connects on start.
- WebRTC works with Vulkan and OpenGLES3. If you use GLES3:
  - In Project Settings ▸ XR Plug-in Management ▸ Oculus: disable Low Overhead Mode (GLES).
  - In XR Plug-in Management ▸ OpenXR: disable Meta Quest: Occlusion and Meta XR Subsampled Layout.
- Ignore PST warnings about opaque textures / low overhead / camera stack for this sample.
- Purpose: Detect and track QR codes without having to use a third-party library.
- Description: As this feature is still experimental, make sure to enable experimental mode on your Quest 3 when testing. Unity usually asks to enable it before building; you can also activate it via the command line or Meta Quest Hub (see the example after the steps below). More information can be found here - MR Utility Kit QRCode Detection - and here - Mobile Experimental Features.
How to run this sample
- Install TextMeshPro Essentials if prompted.
- Enable Experimental Mode on your Quest 3/3S (headset UI or Quest Hub CLI).
- Open the `QRCodeDetection` scene.
- Build and deploy to your device, then use the controllers to interact with the UI (hand tracking is not yet supported).
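If you prefer the command line over the headset UI, experimental mode has commonly been toggled with adb as shown below; treat this as an assumption and verify it against the Mobile Experimental Features docs linked above:

```bash
# Assumed adb toggle for experimental mode (verify against Meta's docs)
adb shell setprop debug.oculus.experimentalEnabled 1
```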
Camera-mapping and shader content has been consolidated for easier maintenance and a cleaner project structure.
- New primary shader scene: `Unity-QuestVisionKit/Assets/Samples/4 Shaders/CameraMappingForShaders.unity`
- Legacy folders/scenes removed:
  - `Unity-QuestVisionKit/Assets/Samples/CameraMapping/`
  - `Unity-QuestVisionKit/Assets/Samples/4 Shaders/FrostedGlass WIP/`
  - `Unity-QuestVisionKit/Assets/Samples/4 Shaders/Shader Samples.unity`
The legacy `WebCamTexture` helpers have been fully retired. Every sample now talks directly to the `PassthroughCameraAccess` component, which is part of the MRUK package, and consumes the native render texture, intrinsics, and per-frame pose data. It also provides timestamps and both eyes at the same time. A conceptual consumer sketch follows the list below.
- Single PCA component – All samples reference the same configured PCA component; no more bespoke permission scripts or manifest edits.
- Direct GPU textures – Inference Engine, QR tracking, Frosted Glass shaders, WebRTC, and OpenAI capture now pull the PCA render texture directly, preserving latency and aspect ratio.
- Pose-aware reprojection – Object/QR samples reconstruct rays with per-frame PCA pose + intrinsics, so markers no longer drift when the user nods.
- Environment-aware markers – Marker placement re-samples environment normals and optionally drops detections below a configurable confidence threshold.
- Shared marker assets – Marker prefab/pool/controller moved to `Assets/Samples/Common/Markers`, keeping inference + QR samples in sync.
- Shader updates – Camera-mapped shaders understand PCA UV scale/offsets so effects don't mirror, split into quadrants, or flip vertically.
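Conceptually, the single-PCA pattern looks like the sketch below. `IPassthroughFeed` is a hypothetical interface standing in for what the MRUK `PassthroughCameraAccess` component exposes (the real member names differ); the point is that every consumer reads one shared, already-configured component instead of owning its own capture path or permission code.

```csharp
using UnityEngine;

// Hypothetical stand-in for the data PassthroughCameraAccess provides.
public interface IPassthroughFeed
{
    RenderTexture CameraTexture { get; } // native GPU texture, no CPU readback
    Pose CameraPose { get; }             // per-frame pose for reprojection
    Vector2 FocalLength { get; }         // intrinsics: fx, fy
    Vector2 PrincipalPoint { get; }      // intrinsics: cx, cy
}

public class PcaConsumerSketch : MonoBehaviour
{
    [SerializeField] private MonoBehaviour feedSource; // assign the shared PCA component
    [SerializeField] private Material cameraMappedMaterial;

    private void Update()
    {
        if (feedSource is not IPassthroughFeed feed) return;

        // Direct GPU texture: bind it without a readback, preserving latency.
        cameraMappedMaterial.mainTexture = feed.CameraTexture;

        // CameraPose + intrinsics are what the object/QR samples use to rebuild
        // per-frame rays, so markers don't drift when the user moves.
    }
}
```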
- Meta Quest Device: Ensure you are running on a `Quest 3` or `Quest 3S` and your device is updated to `HorizonOS v74` or later.
- Unity: Recommended is `Unity 6`. Also runs on Unity `2022.3 LTS`.
- The Passthrough Camera API does not work in the Editor or XR Simulator.
- Get more information from the Meta Quest Developer Documentation.
- Clone the Repository: `git clone https://github.com/xrdevrob/QuestCameraKit.git`
- Open the Project in Unity: Launch Unity and open the cloned project folder.
- Configure Dependencies: Follow the instructions in the section below to run one of the samples.
- For sample 7 QR Code Detection, make sure experimental mode is active. You can find more information about the necessary steps here.
- Object Detection uses `Unity.InferenceEngine` with `com.unity.ai.inference@2.3.0`.
- If you switch between Unity 6 and other versions such as 2023 or 2022, your Android Manifest may be modified so that the app no longer runs. Should this happen, go to `Meta > Tools > Update AndroidManifest.xml` or `Meta > Tools > Create store-compatible AndroidManifest.xml`, then manually add the `horizonos.permission.HEADSET_CAMERA` permission back into your manifest file (see the snippet below).
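For reference, the re-added permission entry in `AndroidManifest.xml` should look like this; only the `uses-permission` line is the required addition, the surrounding element is just context:

```xml
<manifest xmlns:android="http://schemas.android.com/apk/res/android">
    <!-- Required for the Passthrough Camera API; re-add it if manifest
         regeneration stripped it out. -->
    <uses-permission android:name="horizonos.permission.HEADSET_CAMERA" />
</manifest>
```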
Tutorials
- XR Dev Rob - XR AI Tutorials, Watch on YouTube
- Dilmer Valecillos, Watch on YouTube
- Skarredghost, Watch on YouTube
- FireDragonGameStudio, Watch on YouTube
- xr masiso, Watch on YouTube
- Urals Technologies, Watch on YouTube
Object Detection
Shaders
Environment Understanding & Mapping
- Takashi Yoshinaga: Turning image into colored point cloud
- Alireza Bahremand: Quest Passthrough to MAST3R-SLAM for scene ply distribution
- うえださん: 3D Scanner
- Sander Sneek: Mixed Reality Voxel Demo
- Takashi Yoshinaga: Point cloud data from Quest in real time
- Bastion: Sobel edge detection + passthrough camera + R6 Lion scan SFX
- Takashi Yoshinaga: Map color images onto a point cloud by combining Quest's Depth API and Passthrough Camera API
Lighting and Reflection Estimation
Environment Sampling
Image to 3D
Image to Image, Diffusion & Generation
- Hugues Bruyère: MR + Diffusion prototype
- Hugues Bruyère: SAM 2 added to our workflow to segment people
- 水マヨ: AI image description
- 妹尾雄大: Img2Img process of SDXL
- Rolando Masís-Obando: Image to image with LCM and SDXL Turbo
- Hugues Bruyère: Mixed Reality + Diffusion prototype as a tool for exploring concepts, styles, and moods by transforming real-world surroundings into alternate realities.
Video recording and replay
OpenCV for Unity
- Takashi Yoshinaga: Using Passthrough Camera API with the OpenCV for Unity plugin
- Takashi Yoshinaga: OpenCV marker detection for object tracking
- Takashi Yoshinaga: OpenCV marker detection for multiple objects. You can find this project on his GitHub Repo
- Aurelio Puerta Martín: OpenCV with multiple trackers
- くりやま@システム開発: Positioning 3D objects on markers
QR Code Tracking
- (Mar 21, 2025) The Mysticle - One of Quest's Most Exciting Updates is Now Here!
- (Mar 18, 2025) Road to VR - Meta Releases Quest Camera Access for Developers, Promising Even More Immersive MR Games
- (Mar 17, 2025) MIXED Reality News - Quest developers get new powerful API for mixed reality apps
- (Mar 14, 2025) UploadVR - Quest's Passthrough Camera API Is Out Now, Though Store Apps Can't Yet Use It
- Thanks to Meta for the Passthrough Camera API and the Passthrough Camera API Samples.
- Thanks to shader wizard Daniel Ilett for helping me with the shader samples.
- Thanks to Michael Jahn for the ZXing.Net library used for the QR code tracking samples.
- Special thanks to Markus Altenhofer from FireDragonGameStudio for contributing the WebRTC sample scene.
- Special thanks to Thomas Ratliff for contributing his shader samples to the repo.
This project is licensed under the MIT License. See the LICENSE file for details. Feel free to use the samples in your own projects, though I would appreciate it if you left some credit to this repo in your work ❤️
For questions, suggestions, or feedback, please open an issue in the repository or contact me on X, LinkedIn, or at roberto@blackwhale.dev. Find all my info here, or join our growing XR developer community on Discord.
Happy coding and enjoy exploring the possibilities with QuestCameraKit!