Subdivisions in RealityKit
TL;DR: SubD can currently only be enabled in Reality Composer Pro, and only for geometry using physically based (not custom) materials.
When working with 3D, especially for real-time rendering engines like RealityKit (the engine currently powering visionOS), balancing geometry, material types, sizes, and render passes is critical for producing the best visual result efficiently. This concept is often referred to as the "budget", and different strategies have evolved over time to make sure that budget is spent well.
When it comes to geometry specifically, since current rendering engines represent shapes with triangles, organic/smooth models require a high polygon count to be perceived as continuous. Spending too much of your budget just to make things look smooth is a common issue, even for something as basic as a sphere.
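As a rough illustration of how quickly that budget disappears, here is a back-of-the-envelope sketch. The UV-sphere triangle-count formula is standard; the specific resolutions are just example values:

```swift
// Rough triangle count for a UV sphere with `slices` longitudinal segments
// and `stacks` latitudinal bands: two triangle-fan caps plus quad bands
// that are each split into two triangles.
func uvSphereTriangleCount(slices: Int, stacks: Int) -> Int {
    2 * slices + 2 * slices * (stacks - 2) // = 2 * slices * (stacks - 1)
}

// A sphere only starts to read as "smooth" up close at fairly high resolutions.
for (slices, stacks) in [(16, 8), (32, 16), (64, 32), (128, 64)] {
    print("\(slices)x\(stacks): \(uvSphereTriangleCount(slices: slices, stacks: stacks)) triangles")
}
// 16x8: 224, 32x16: 960, 64x32: 3968, 128x64: 16128
```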
Subdivisions
Subdivision surfaces are a geometry technique that Edwin Catmull and Jim Clark proposed in the 1970s and that has become widely used in animation and modeling, particularly in Pixar movies.
The subdivision technique works by recursively subdividing a polygon mesh, adding new vertices derived from existing ones to create smoother surfaces. This process can happen at the modeling stage (e.g., modifiers in Blender, dynamic topology tools) or at runtime. Runtime support potentially unlocks another technique, adaptive level of detail, which lets engines dynamically adjust the amount of geometry based on the camera's distance from the object (pretty much foveation for geometry). Because of this, runtime subdivision is considered a crucial feature in modern engines, and starting with visionOS 2, macOS 15, iOS 18, and iPadOS 18, RealityKit supports it at the API level.
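Catmull-Clark itself refines polygon meshes using face, edge, and vertex points, which is more than fits in a short snippet, but the core idea of inserting new vertices derived from existing ones is easy to see on a curve. Here is a minimal sketch using Chaikin corner-cutting on a closed 2D polyline; it is an analogy for intuition, not RealityKit's implementation:

```swift
/// One step of Chaikin corner-cutting subdivision on a closed 2D polyline.
/// Each edge is replaced by two new points at 1/4 and 3/4 along it, so every
/// pass doubles the vertex count while rounding off corners -- the same
/// "insert new vertices derived from existing ones" idea that Catmull-Clark
/// applies to polygon meshes.
func chaikinStep(_ points: [SIMD2<Float>]) -> [SIMD2<Float>] {
    var refined: [SIMD2<Float>] = []
    for i in points.indices {
        let p = points[i]
        let q = points[(i + 1) % points.count]
        refined.append(0.75 * p + 0.25 * q)
        refined.append(0.25 * p + 0.75 * q)
    }
    return refined
}

// A square converges toward a smooth closed curve after a few passes.
var outline: [SIMD2<Float>] = [[0, 0], [1, 0], [1, 1], [0, 1]]
for _ in 0..<3 { outline = chaikinStep(outline) }
print(outline.count) // 32 vertices after three passes (4 → 8 → 16 → 32)
```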
RealityKit's SubD implementation not only creates smoother geometry at a lower cost but also aligns with visionOS's trend of incorporating USD standard features. Here's an extract from the invaluable doc Validating feature support for USD files:
| USD Feature | RealityKit | SceneKit | Storm |
|---|---|---|---|
| Polygon meshes | ✓ | ✓ | ✓ |
| Vertex animation | ✓ | | |
| Primitive shapes | ✓ | ✓ | ✓ |
| Double-sided meshes | | ✓ | ✓ |
| Subdivision | ✓ | ✓ | ✓ |
| NURBS patches | | | |
| Basis curves | | | |
| Points | | ✓ | ✓ |
| Camera | | ✓ | ✓ |
| Geometry subsets | ✓ | ✓ | ✓ |
| Alembic | ✓ | ✓ | ✓ |
| Draco compression | | | |
| Vertex colors | ✓ | ✓ | ✓ |
| Purpose | ✓ | ✓ | ✓ |
In practical terms, when support for SubD was introduced at the last WWDC, it wasn't immediately clear to me how to use it, and on the project Roxana and I are currently working on, it quickly became clear that we would very much need this technique. Luckily, after some digging, we finally found the key ingredient in the forums.
> RealityKit supports subdivision on objects using a USD preview surface shader, starting with visionOS 2, macOS 15, iOS 18, and iPadOS 18. Objects with custom MaterialX materials use standard polygonal meshes.

and

> ...verify that you are using the "Physically Based" Shader (as selected in the Materials tab in Reality Composer Pro). Subdivision surfaces are not supported with custom materials at this time.
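In our case that meant double-checking the materials on the assets exported from Reality Composer Pro. If you want a quick runtime sanity check, something along these lines can help; the "Scene" name and the realityKitContentBundle reference are assumptions based on the default visionOS project template, and concrete material types can vary depending on how the asset was authored, so treat it as a diagnostic starting point rather than a definitive test:

```swift
import RealityKit
import RealityKitContent

/// Walks an entity hierarchy and reports which models carry a
/// PhysicallyBasedMaterial (eligible for SubD) versus custom/other materials
/// (currently rendered as standard polygonal meshes).
func auditMaterials(in entity: Entity, path: String = "") {
    let currentPath = path + "/" + entity.name
    if let model = entity.components[ModelComponent.self] {
        for material in model.materials {
            let kind = material is PhysicallyBasedMaterial ? "physically based" : "custom/other"
            print("\(currentPath): \(kind) material (\(type(of: material)))")
        }
    }
    for child in entity.children {
        auditMaterials(in: child, path: currentPath)
    }
}

// Usage, e.g. inside a RealityView's make closure:
// let scene = try await Entity(named: "Scene", in: realityKitContentBundle)
// auditMaterials(in: scene)
```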
So, even with these missing pieces and current limitations, the wins are significant, and with this ongoing USD adoption trend, it is easy to be optimistic about an even more impressive visionOS.
(🎶 For USD Christmas, my wishlist contains NURBS, double-sided meshes, and cameras 🎶)