The LIV SDK provides a spectator view of your application. It contextualizes what the user feels & experiences by capturing their body directly inside your world! Thanks to our software, creators can film inside your app and have full control over the camera. With the power of out-of-engine compositing, a creator can express themselves freely without limits as a real person or an avatar!

How It Works

The LIV SDK spawns a camera inside your app which is controlled by LIV. This camera then renders your app into a background and foreground, to allow the user's body to be composited in. The background & foreground are separated by clipping geometry, based on the user's location within the scene. These textures are then submitted for composition! The compositor takes in multiple timestamped sources, performs latency compensation, and composites them together. This output can then be recorded or streamed using software like OBS or Discord.

Doing this work out-of-engine comes with some significant benefits:

- Optimised resource use - we only do the bare minimum work required in the SDK, allowing it to stay lightweight and easy to maintain.
- Accurate latency compensation works without any additional effort from you, the developer.
- Additional camera types, effects, and layers can be added in future updates to the LIV App, making your SDK integration last longer.

For this to work well, we have developed a minimum-latency, high-performance transport layer that also handles resource management. LIV's footprint is almost entirely dependent on how well-optimised your application is!

The LIV Unity SDK is interested in only two GameObjects: your HMD camera & "Stage". The "Stage" object is where LIV's camera will be inserted in your hierarchy; it must be set to the GameObject that contains the player's hands. The HMD camera is the camera responsible for rendering the user's HMD; the LIV SDK, by default, clones this object to match your application's rendering setup.
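To make the compositing step above concrete, here is a minimal, hypothetical sketch (not LIV's actual API) of what an out-of-engine compositor does: for each output frame it picks, from every timestamped source, the frame nearest a target timestamp (latency compensation), then layers background, user's body, and foreground back-to-front with straight-alpha "over" blending. Frames are reduced to single RGBA pixels purely for illustration.

```python
# Hypothetical sketch of out-of-engine compositing: all names and the
# one-pixel "frames" are illustrative assumptions, not LIV's real interface.
from bisect import bisect_left

def nearest_frame(frames, target_ts):
    """frames: list of (timestamp, pixel) sorted by timestamp.
    Returns the pixel whose timestamp is closest to target_ts."""
    timestamps = [ts for ts, _ in frames]
    i = bisect_left(timestamps, target_ts)
    candidates = frames[max(0, i - 1):i + 1]
    return min(candidates, key=lambda f: abs(f[0] - target_ts))[1]

def over(top, bottom):
    """Straight-alpha 'over' blend of two RGBA pixels (floats in 0..1)."""
    tr, tg, tb, ta = top
    br, bg_, bb, ba = bottom
    out_a = ta + ba * (1 - ta)
    if out_a == 0:
        return (0.0, 0.0, 0.0, 0.0)
    blend = lambda t, b: (t * ta + b * ba * (1 - ta)) / out_a
    return (blend(tr, br), blend(tg, bg_), blend(tb, bb), out_a)

def composite(background, user_video, foreground, target_ts):
    """Layer back-to-front: game background, user's body, game foreground."""
    bg = nearest_frame(background, target_ts)
    body = nearest_frame(user_video, target_ts)
    fg = nearest_frame(foreground, target_ts)
    return over(fg, over(body, bg))

# Demo: opaque red background, half-transparent green "body" frame that
# arrived at a slightly different timestamp, fully transparent foreground.
background = [(0.0, (1.0, 0.0, 0.0, 1.0)), (33.3, (1.0, 0.0, 0.0, 1.0))]
user_video = [(5.0, (0.0, 1.0, 0.0, 0.5))]
foreground = [(0.0, (0.0, 0.0, 0.0, 0.0)), (33.3, (0.0, 0.0, 0.0, 0.0))]

pixel = composite(background, user_video, foreground, target_ts=10.0)
```

The timestamp matching is the essence of latency compensation: each source is sampled at the same moment in time before blending, so the user's body lines up with the game render even when the sources arrive with different delays.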
In the new Capture Actors entry, set the ActorSet property to the ConeAndCylinder layer that we created earlier. Because the entry's InclusionType is set to Include, it will render only those actors.

Include - Render only the actors in the specified layer.
Exclude - Render everything except the actors in the specified layer.

You can add as many Capture Actors layers to the Element as you wish. You can mix/match includes and excludes. For the background Element, we want everything except the ConeAndCylinder layer. So we use the same layer, but switch the InclusionType to Exclude.

For your CG renders to have the proper opacity for compositing, you will need to set "Enable alpha channel support in post processing" to "Linear color space only" in your Project Settings.

The top-level comp Element is responsible for merging all of the other Elements. Now that you have four Elements (the top-level comp, a Media Plate, and two CG Elements), you can layer them all to produce your comp. We're going to add a Transform Pass to the comp Element and set it up to composite the other three layers. Select the comp Element, and then in the Details panel find the Transform Passes property. Add an entry to the Transform Passes list. The default entry is Custom Material Pass, which is what we want.
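The Include/Exclude behaviour described above can be modelled in a few lines. This is a sketch of the filtering idea only, not Unreal's API, and the rule for mixing entries (includes define the pool, excludes always remove from it) is an assumption for illustration:

```python
# Illustrative model of Capture Actors filtering; InclusionType and the
# mixing rule here are assumptions, not Unreal Engine's actual semantics.
from enum import Enum

class InclusionType(Enum):
    INCLUDE = "Include"   # Render only the actors in the specified layer.
    EXCLUDE = "Exclude"   # Render everything except the actors in the layer.

def visible_actors(all_actors, capture_entries):
    """capture_entries: list of (actor_set, InclusionType) pairs.
    With only Exclude entries, start from everything; with any Include
    entry, start from the union of the included sets. Excludes then remove."""
    includes = [s for s, t in capture_entries if t is InclusionType.INCLUDE]
    excludes = [s for s, t in capture_entries if t is InclusionType.EXCLUDE]
    pool = set().union(*includes) if includes else set(all_actors)
    for s in excludes:
        pool -= s
    return pool

scene = {"Cone", "Cylinder", "Floor", "Wall"}
cone_and_cylinder = {"Cone", "Cylinder"}

# Foreground Element: Include the ConeAndCylinder layer.
fg = visible_actors(scene, [(cone_and_cylinder, InclusionType.INCLUDE)])
# fg == {"Cone", "Cylinder"}

# Background Element: same layer, InclusionType switched to Exclude.
bg = visible_actors(scene, [(cone_and_cylinder, InclusionType.EXCLUDE)])
# bg == {"Floor", "Wall"}
```

The two calls mirror the foreground/background split in the walkthrough: one layer definition, reused with the InclusionType flipped, cleanly partitions the scene between the two CG Elements.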