This tutorial was originally written for and published in Futureproofd on Medium
This is part 1 of my technical exploration of RealityKit
What is RealityKit?
Two years ago, Apple introduced ARKit as a first-party framework for building AR apps. New this year is the simpler RealityKit: it allows you to build pretty incredible experiences with very little code.
When you build an app with ARKit alone, you have to set up lighting and camera effects yourself, which results in a lot of boilerplate code. RealityKit, on the other hand, gives you lots of effects, like motion blur, for free. Another feature exclusive to RealityKit is Reality Composer, a tool for AR prototyping. Overall, RealityKit feels like a simpler version of ARKit.
In this tutorial series, we’ll mostly focus on the RealityKit framework, not the Reality Composer editor. To start, create a project from the “Augmented Reality App” template in Xcode 11 and select RealityKit as the content technology. Then delete “Experience.rcproject”, since we’ll create our scene entirely in code. Also delete the default code in the main view controller until you’re left with an empty viewDidLoad() method.
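For reference, the cleaned-up view controller can look roughly like this (the class name and arView outlet come from the Xcode template; your project may differ):

```swift
import UIKit
import RealityKit

class ViewController: UIViewController {

    // The template wires this outlet to the ARView in the storyboard.
    @IBOutlet var arView: ARView!

    override func viewDidLoad() {
        super.viewDidLoad()
        // We'll build the scene here in code.
    }
}
```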
Anchors and Entities
While you add nodes directly to a scene in SceneKit, in RealityKit you add so-called entities (basically nodes) to an anchor. The anchor is then added to the scene.
This approach is better for AR since you always want your content to be “attached” to the world. While it might seem odd to anchor something like audio, you quickly realize how closely this concept mirrors the real world.
Now that we know about the basic structure of entities in RealityKit, we can start coding.
Creating an anchor is very simple in RealityKit:
let anchor = AnchorEntity(plane: .horizontal)
Using this initializer for AnchorEntity, you choose whether you want to attach to a horizontal or a vertical plane.
Now we can add the anchor to the scene:
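Assuming your ARView outlet is called arView (the name used by the Xcode template), this can look like:

```swift
// Attach the anchor (and everything parented to it) to the scene.
arView.scene.addAnchor(anchor)
```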
Any future entities we create and want to be attached to this plane will be added to the anchor.
Creating Meshes
RealityKit makes it very simple to create basic meshes. While not every SceneKit shape is available right now, the most important ones are already supported: a box generator, a sphere generator, a plane generator, and a text generator.
As far as I can tell, these are all the mesh generators that are supported (for now, at least). More specialized shapes like pyramids are unavailable.
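As a sketch, the other generators follow the same pattern as the box we’ll create below (parameter values here are just examples; sizes are in meters):

```swift
let sphere = MeshResource.generateSphere(radius: 0.1)           // Sphere with a 10 cm radius
let plane  = MeshResource.generatePlane(width: 0.3, depth: 0.3) // Flat 30 cm plane
let text   = MeshResource.generateText("Hello")                 // Extruded 3D text
```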
These generators generate a mesh, which we can then use to create an entity.
let box = MeshResource.generateBox(size: 0.3) // Generate mesh
let entity = ModelEntity(mesh: box) // Create an entity from the mesh
This entity can then be added to the anchor we created before:
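Using the addChild(_:) method on Entity, attaching the model might look like:

```swift
// Parent the model entity to the plane anchor.
anchor.addChild(entity)
```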
When we run the app now, we should see our cube. If you don’t see it right away, try moving your device around.
You can also change the color of the cube by creating a custom material…
let material = SimpleMaterial(color: .green, isMetallic: true)
…and passing it to the materials parameter of the entity initializer:
let entity = ModelEntity(mesh: box, materials: [material])
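Putting it all together, the whole scene setup fits into viewDidLoad() — this sketch again assumes the template’s arView outlet:

```swift
override func viewDidLoad() {
    super.viewDidLoad()

    // Anchor content to the first horizontal plane that gets detected.
    let anchor = AnchorEntity(plane: .horizontal)

    // A 30 cm green metallic cube.
    let box = MeshResource.generateBox(size: 0.3)
    let material = SimpleMaterial(color: .green, isMetallic: true)
    let entity = ModelEntity(mesh: box, materials: [material])

    anchor.addChild(entity)
    arView.scene.addAnchor(anchor)
}
```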
Camera Effects
RealityKit adds lots of camera effects to a scene without you having to enable them. These effects are needed to make your objects look realistic. Here are the effects you get for free:
- Motion Blur
- Depth of Field
While you don’t have to know how these effects work, it’s good to be aware that RealityKit always does post-processing on a scene.
This was the first part of my technical exploration of RealityKit. While I didn’t go into much detail on every topic, I hope you enjoyed this overview. In the next parts of this tutorial series, we’ll cover loading custom models, audio, animation, interactions, and physics. I’ll also briefly talk about Reality Composer.