Meta has just released its latest headset, the Quest 3, touting it as the world's first mixed reality headset. Apple will follow with its own mixed reality headset, the Apple Vision Pro, in 2024. Mixed reality is about to become a common talking point, so here's a quick list of terms you're likely to hear and what they mean.
Mixed reality (MR) refers to technology that projects digital assets into your real-world environment. The user can interact with these assets in real time as if they were physically present, and assets can be programmed to react to real-world objects in the room with you, like desks, tables, and chairs.
HMD is an acronym for Head-Mounted Display. It's simply a shorthand way to refer to a headset.
Mixed reality headsets are equipped with outward-facing cameras that let the user see their real-world environment. This 'mode' is called passthrough. Currently, the Quest 3, Pico 4, HTC Vive XR Elite, and (soon) the Apple Vision Pro all support full-color passthrough for native menu navigation and for apps and games designed for mixed reality.
Spatial anchors are points in the real world that a system or app tracks over time. As an example, imagine wearing an MR headset in a manufacturing warehouse and approaching a digital map of all the shelves and items in the warehouse. This map stays in the same place for anyone using the same system or app: it is 'anchored' to that real-world location, which the app consistently tracks and maintains.
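The idea can be sketched in a few lines of code. This is an illustrative sketch, not a real headset API: a spatial anchor boils down to a fixed world-space position, and content is placed relative to it, so it stays put as the headset moves through the room. The positions and the `place_relative_to_anchor` helper are assumptions for the example.

```python
def place_relative_to_anchor(anchor_pos, offset):
    """Return the world-space position of content offset from an anchor."""
    return tuple(a + o for a, o in zip(anchor_pos, offset))

# The warehouse map is anchored at a fixed spot on the floor (metres).
map_anchor = (4.0, 0.0, -2.5)
# A legend panel floats half a metre to the side and 1.2 m up from it.
legend_pos = place_relative_to_anchor(map_anchor, (0.5, 1.2, 0.0))
print(legend_pos)  # (4.5, 1.2, -2.5)
```

Because everything is expressed relative to the anchor, the headset only has to keep re-locating one point in the room for all the attached content to stay in place.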
When activating a mixed reality headset or app for the first time, you will most likely be asked to scan your workspace. As you look around the room, a program will overlay a mesh on all the objects and furniture in the room. Scene understanding refers to the headset recognizing different types of furniture so digital assets can interact with them. For example, when placing a digital lamp on your real-world living room table, scene understanding means your headset knows the table is there and makes the lamp "sit" on top of it instead of falling through it.
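The lamp example can be sketched roughly as follows. This is a simplified illustration, not a real scene API: the surface labels, the `top_y` field, and the `snap_to_surface` helper are all assumptions standing in for the labelled meshes a headset actually produces.

```python
def snap_to_surface(asset_pos, surfaces):
    """Place the asset on the highest labelled table surface beneath it."""
    x, y, z = asset_pos
    tops = [s["top_y"] for s in surfaces
            if s["label"] == "table" and s["top_y"] <= y]
    return (x, max(tops), z) if tops else asset_pos

# Surfaces the scan labelled in the room (heights in metres).
room = [{"label": "table", "top_y": 0.75},
        {"label": "floor", "top_y": 0.0}]

# Dropping a lamp from 1.5 m lands it on the tabletop, not the floor.
print(snap_to_surface((1.0, 1.5, 2.0), room))  # (1.0, 0.75, 2.0)
```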
At present, the most common input method is controllers. Hand tracking lets you use your hands for input without controllers: the tracking cameras detect quick hand and finger gestures and interpret your intended action. For instance, quickly pinching your thumb and forefinger together registers a 'click', as if you were clicking a PC mouse.
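In code, pinch detection often comes down to a distance check. This is a hedged sketch, not any runtime's actual API: hand tracking systems typically report 3D joint positions, and a pinch registers when the thumb tip and index fingertip come within a small distance of each other. The 2 cm threshold is an assumption for illustration.

```python
import math

PINCH_THRESHOLD_M = 0.02  # assumed threshold: 2 cm between fingertips

def is_pinching(thumb_tip, index_tip):
    """Detect a pinch from two 3D fingertip positions (metres)."""
    return math.dist(thumb_tip, index_tip) < PINCH_THRESHOLD_M

# Fingertips 1 cm apart: pinch. Fingertips 5 cm apart: no pinch.
print(is_pinching((0.10, 1.20, -0.30), (0.11, 1.20, -0.30)))  # True
print(is_pinching((0.10, 1.20, -0.30), (0.15, 1.20, -0.30)))  # False
```

Real runtimes add smoothing and hysteresis on top of this so a pinch doesn't flicker on and off at the threshold, but the core test is this simple.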
The current generation of mixed reality headsets allows users wearing separate headsets to 'exist' in the same space and interact with the same objects. As an example, think about playing a board game on a virtual board anchored to your living room table, with the other players in the room with you. Co-location makes this possible: each player interacts with the same shared board instead of their own separate virtual copy.
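A rough sketch of how this works, with illustrative names and positions only (real systems also share rotation and track drift): both headsets agree on one shared anchor, and each device converts that shared point into its own coordinate frame, so every player sees the board in the same physical spot.

```python
def to_local(world_pos, device_origin):
    """Express a shared world-space point in one device's local frame."""
    return tuple(w - o for w, o in zip(world_pos, device_origin))

board = (2.0, 0.75, 1.0)                  # shared anchor, world space
alice = to_local(board, (0.0, 0.0, 0.0))  # Alice's headset started at origin
bob = to_local(board, (1.0, 0.0, 3.0))    # Bob's headset started elsewhere
print(alice, bob)  # (2.0, 0.75, 1.0) (1.0, 0.75, -2.0)
```

The two local coordinates differ, but they describe the same spot on the table, which is the whole point of co-location.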
Occlusion, in plain terms, means something being blocked from the view of the headset's tracking cameras. For instance, when using hand tracking, covering one hand with the other so it cannot be tracked. In mixed reality, occlusion can also refer to digital assets being hidden behind real-world objects, like a virtual pet walking behind a chair to hide from your view.
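The second kind of occlusion is often handled with a per-pixel depth comparison. This is an illustrative sketch (1D "image", hypothetical `composite` helper), not a real renderer: the virtual object is drawn only where it is closer to the camera than the real-world geometry measured by the headset's depth sensing.

```python
def composite(virtual_depth, real_depth):
    """Return True for each pixel where the virtual content is visible.

    A pixel is visible when virtual content exists there (not None) and
    sits closer to the camera than the real-world surface at that pixel.
    """
    return [v is not None and v < r
            for v, r in zip(virtual_depth, real_depth)]

# A virtual pet 2.0 m away walks behind a chair 1.5 m away that covers
# the right half of the view, so only the left-half pixels stay visible.
virtual = [2.0, 2.0, 2.0, 2.0]
real = [3.0, 3.0, 1.5, 1.5]
print(composite(virtual, real))  # [True, True, False, False]
```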
Some developers are adding mixed reality features and gameplay modes to their existing virtual reality applications, and reading about those updates can be confusing when terms like these are used. We hope this clears some of that up. Be sure to check out our Quest 3 game, Grokit, a mixed reality hand-tracking multiplayer game that uses scene understanding and passthrough.
Featured image credit: Engadget
Written by 3lb Games Staff