# Camera Switcher
The **CameraSwitcher** component allows you to define a List of Cameras in the Scene and then use the Debug Window to switch between them in Play Mode. This is useful when you want a set of different fixed views for profiling purposes where you need to guarantee that the Camera view is in the same position between sessions.
## Properties
| **Property** | **Description** |
| ------------ | ------------------------------------------------------------ |
| **Cameras** | Drag and drop GameObjects that have a Camera component attached to add them to this List of Cameras. The Debug Window can switch between the Cameras in this List. |

[comment]: # (If you modify this file make sure you modify the copy/paste file: com.unity.render-pipelines.universal and com.unity.render-pipelines.high-definition\Documentation~\lens-flare-data-driven-asset.md)
# Lens Flare (SRP) Asset
Unity's [Scriptable Render Pipeline (SRP)](https://docs.unity3d.com/Manual/ScriptableRenderPipeline.html) includes the **Lens Flare Element** asset. You can use this asset to create lens flares in your scene and control their appearance. <br/>To create a Lens Flare Element asset, navigate to **Assets > Create > SRP Lens Flare**. To use this asset, assign it to the **Lens Flare Data** property of an [SRP Lens Flare Override Component](srp-lens-flare-component.md).
## Properties
The Lens Flare Element asset has the following properties:
- [Type](#Type)
  - [Image](#Image)
  - [Circle](#Circle)
  - [Polygon](#Polygon)
- [Color](#Color)
- [Transform](#Transform)
- [AxisTransform](#AxisTransform)
- [Distortion](#Distortion)
- [Multiple Elements](#Multiple-Elements)
  - [Uniform](#Uniform)
  - [Curve](#Curve)
  - [Random](#Random)
<a name="Type"></a>
### Type
| **Property** | **Description** |
| ------------ | ------------------------------------------------------------ |
| Type | Select the type of Lens Flare Element this asset creates: <br />&#8226; [Image](#Image) <br />&#8226; [Circle](#Circle) <br />&#8226; [Polygon](#Polygon) |
<a name="Image"></a>
#### Image
![](images/LensFlareShapeImage.png)
| **Property** | **Description** |
| --------------------- | ------------------------------------------------------------ |
| Flare Texture | The Texture this lens flare element uses. |
| Preserve Aspect Ratio | Fixes the width and height (aspect ratio) of the **Flare Texture**. You can use [Distortion](#Distortion) to change this property. |
<a name="Circle"></a>
#### Circle
![](images/LensFlareShapeCircle.png)
| **Property** | **Description** |
| ------------ | ------------------------------------------------------------ |
| Gradient | Controls the offset of the circular flare's gradient. This value ranges from 0 to 1. |
| Falloff | Controls the falloff of the circular flare's gradient. This value ranges from 0 to 1, where 0 has no falloff between the tones and 1 creates a falloff that is spread evenly across the circle. |
| Inverse | Enable this property to reverse the direction of the gradient. |
<a name="Polygon"></a>
#### Polygon
![](images/LensFlareShapePolygon.png)
| **Property** | **Description** |
| ------------ | ------------------------------------------------------------ |
| Gradient | Controls the offset of the polygon flare's gradient. This value ranges from 0 to 1. |
| Falloff | Controls the falloff of the polygon flare's gradient. This value ranges from 0 to 1, where 0 has no falloff between the tones and 1 creates a falloff that is spread evenly across the polygon. |
| Side Count | Determines how many sides the polygon flare has. |
| Roundness | Defines how smooth the edges of the polygon flare are. This value ranges from 0 to 1, where 0 is a sharp polygon and 1 is a circle. |
| Inverse | Enable this property to reverse the direction of the gradient. |
<a name="Color"></a>
## Color
![](images/LensFlareColor.png)
| **Property** | **Description** |
| ----------------------- | ------------------------------------------------------------ |
| Tint | Changes the tint of the lens flare. If this asset is attached to a light, this property is based on the light's tint. |
| Modulate By Light Color | Allows light color to affect this Lens Flare Element. This only applies when the asset is used in an [SRP Lens Flare Override Component](srp-lens-flare-component.md) that is attached to a point, spot, or area light. |
| Intensity | Controls the intensity of this element. |
| Blend Mode | Select the blend mode of the Lens Flare Element this asset creates:<br />• Additive <br />• Screen <br />• Premultiplied <br />• Lerp |
<a name="Transform"></a>
## Transform
![](images/LensFlareTransform.png)
| **Property** | **Description** |
| ----------------------- | ------------------------------------------------------------ |
| Position Offset | Defines the offset of the lens flare's position in screen space, relative to its source. |
| Auto Rotate | Enable this property to automatically rotate the Lens Flare Texture relative to its angle on the screen. Unity uses the **Auto Rotate** angle to override the **Rotation** parameter. <br/><br/> To ensure the Lens Flare can rotate, assign a value greater than 0 to the [**Starting Position**](#AxisTransform) property. |
| Rotation | Rotates the lens flare. This value operates in degrees of rotation. |
| Size | Use this to adjust the scale of this lens flare element. <br/><br/> This property is not available when the [Type](#Type) is set to [Image](#Image) and **Preserve Aspect Ratio** is enabled. |
| Scale | The size of this lens flare element in world space. |
<a name="AxisTransform"></a>
## AxisTransform
![](images/LensFlareAxisTransform.png)
| **Property** | **Description** |
| ----------------- | ------------------------------------------------------------ |
| Starting Position | Defines the starting position of the lens flare relative to its source. This value operates in screen space. |
| Angular Offset | Controls the angular offset of the lens flare, relative to its current position. This value operates in degrees of rotation. |
| Translation Scale | Limits the size of the lens flare offset. For example, values of (1, 0) create a horizontal lens flare, and (0, 1) create a vertical lens flare. <br/><br/>You can also use this property to control how quickly the lens flare appears to move. For example, values of (0.5, 0.5) make the lens flare element appear to move at half the speed. |
<a name="Distortion"></a>
## Distortion
![](images/LensFlareRadialDistortion.png)
| **Property** | **Description** |
| --------------- | ------------------------------------------------------------ |
| Enable | Set this property to True to enable distortion. |
| Radial Edge Size | Controls the size of the distortion effect from the edge of the screen. |
| Radial Edge Curve | Blends the distortion effect along a curve from the center of the screen to the edges of the screen. |
| Relative To Center | Set this value to True to make distortion relative to the center of the screen. Otherwise, distortion is relative to the screen position of the lens flare. |
<a name="Multiple-Elements"></a>
## Multiple Elements
| **Property** | **Description** |
| --------------- | ------------------------------------------------------------ |
| Enable | Enable this to allow multiple lens flare elements in your scene. |
| Count | Determines the number of identical lens flare elements Unity generates.<br/>A value of **1** appears the same as a single lens flare element. |
| Distribution | Select the method that Unity uses to generate multiple lens flare elements:<br/>• [Uniform](#Uniform)<br/>• [Curve](#Curve)<br/>• [Random](#Random) |
| Length Spread | Controls how spread out multiple lens flare elements appear. |
| Relative To Center | If enabled, the distortion is relative to the center of the screen; otherwise, it is relative to the lens flare's source position in screen space. |
<a name="Uniform"></a>
### Uniform
![](images/LensFlareMultileElementUniform.png)
| **Property** | **Description** |
| --------------- | ------------------------------------------------------------ |
| Colors | The range of colors that this asset applies to the lens flares. |
| Rotation | The angle of rotation (in degrees) applied to each element incrementally. |
<a name="Curve"></a>
### Curve
![](images/LensFlareMultileElementCurve.png)
| **Property** | **Description** |
| ---------------- | ------------------------------------------------------------ |
| Colors | The range of colors that this asset applies to the lens flares. You can use the **Position Spacing** curve to determine how this range affects each lens flare. |
| Position Variation | Adjust this curve to change the placement of the lens flare elements in the **Lens Spread**. |
| Rotation | The uniform angle of rotation (in degrees) applied to each element distributed along the curve. This value ranges from -180° to 180°. |
| Scale | Adjust this curve to control the size range of the lens flare elements. |
<a name="Random"></a>
### Random
![](images/LensFlareMultileElementRandom.png)
| **Property** | **Description** |
| ------------------- | ------------------------------------------------------------ |
| Seed | The base value that this asset uses to generate randomness. |
| Intensity Variation | Controls the variation of brightness across the lens flare elements. A high value can make some elements invisible. |
| Colors | The range of colors that this asset applies to the lens flares. This property is based on the **Seed** value. |
| Position Variation | Controls the position of the lens flares. The **X** value is spread along the same axis as **Length Spread**. A value of 0 means there is no change in the lens flare position. The **Y** value is spread along the vertical screen space axis based on the **Seed** value. |
| Rotation Variation | Controls the rotation variation of the lens flares, based on the **Seed** value. The **Rotation** and **Auto Rotate** parameters inherit from this property. |
| Scale Variation | Controls the scale of the lens flares based on the **Seed** value. |

[comment]: # (If you modify this file make sure you modify the copy/paste file: com.unity.render-pipelines.universal and com.unity.render-pipelines.high-definition\Documentation~\lens-flare-data-driven-component.md)
# Lens Flare (SRP) Component
![](images/LensFlareHeader.png)
Unity's Scriptable Render Pipeline (SRP) includes the SRP Lens Flare Override component to control a [Lens Flare (SRP) Data](lens-flare-data-driven-asset.md) asset. You can attach a Lens Flare (SRP) component to any GameObject.
Some properties only appear when you attach this component to a light.
![](images/LensFlareComp.png)
## Properties
### General
| **Property** | **Description** |
| --------------- | ------------------------------------------------------------ |
| Lens Flare Data | Select the [Lens Flare (SRP) Asset](lens-flare-data-driven-asset.md) asset this component controls. |
| Intensity | Multiplies the intensity of the lens flare. |
| Scale | Multiplies the scale of the lens flare. |
| Attenuation by Light Shape | Enable this property to automatically change the appearance of the lens flare based on the type of light you attached this component to.<br/>For example, if this component is attached to a spot light and the camera is looking at this light from behind, the lens flare will not be visible. <br/>This property is only available when this component is attached to a light. |
| Attenuation Distance | The distance between the start and the end of the **Attenuation Distance Curve**.<br/>This value operates between 0 and 1 in world space. |
| Attenuation Distance Curve | Fades out the appearance of the lens flare over the distance between the GameObject this asset is attached to, and the Camera. |
| Scale Distance | The distance between the start and the end of the **Scale Distance Curve**.<br/>This value operates between 0 and 1 in world space. |
| Scale Distance Curve | Changes the size of the lens flare over the distance between the GameObject this asset is attached to, and the Camera. |
| Screen Attenuation Curve | Reduces the effect of the lens flare based on its distance from the edge of the screen. You can use this to display a lens flare at the edge of your screen. |
### Occlusion
| **Property** | **Description** |
| --------------- | ------------------------------------------------------------ |
| Enable | Enable this property to partially obscure the lens flare based on the depth buffer. |
| Occlusion Radius | Defines how far from the light source Unity occludes the lens flare. This value is in world space. |
| Sample Count | The number of random samples the CPU uses to generate the **Occlusion Radius**. |
| Occlusion Offset | Offsets the plane that the occlusion operates on. A higher value moves this plane closer to the Camera. This value is in world space. <br/>For example, if a lens flare is inside a light bulb, you can use this to sample occlusion outside the light bulb. |
| Occlusion Remap Curve | Specifies the curve used to remap the occlusion of the flare. By default, the occlusion is linear, between 0 and 1. This is especially useful for occluding the flare more drastically, for example when it is behind clouds. |
| Allow Off Screen | Enable this property to allow lens flares outside the Camera's view to affect the current field of view. |

# Light Anchor
![](Images/LightAnchor0.png)
You can use a Light Anchor to light a scene in Rendered Camera Space. To use a Light Anchor, you must connect it to a Light.
## Properties
| **Property** | **Description** |
| --------------- | ------------------------------------------------------------ |
| Orbit | Use the left icon to control the Orbit of the light. This tool becomes green when you move the icon. |
| Elevation | Use the middle icon to control the Elevation of the light. This tool becomes blue when you move the icon. |
| Roll | Use the right icon to control the Roll of the light. This tool becomes gray when you move the icon. This is especially useful if the light has an IES profile or a Cookie. |
| Distance | Controls the distance between the light and its anchor in world space. |
| Up Direction | Defines the space of the up direction of the anchor. When this value is set to Local, the Up Direction is relative to the camera. |
| Common | Assigns a preset to the light component based on the behavior of studio lights. |

# Free Camera
The **FreeCamera** component provides you with an implementation for a simple free camera. When you add this component to a Camera, you can use the keyboard and mouse, or a controller, to control the Camera's position and rotation in Play Mode.
## Properties
| **Property** | **Description** |
| ------------------------- | ------------------------------------------------------------ |
| **Look Speed Controller** | Set the look speed of the Camera when using a controller. |
| **Look Speed Mouse** | Set the look speed of the Camera when using a mouse. |
| **Move Speed** | Set the speed at which the Camera moves. |
| **Move Speed Increment** | Set the value of the increment that you can increase or decrease the **Move Speed** by. This is useful if you have a large Scene and the current **Move Speed** is not enough to easily traverse it. |
| **Turbo** | Set the value that this component multiplies the **Move Speed** by when you press the key or button assigned to "Fire1". |

# Environment Library
An Environment Library is an Asset that contains a list of environments that you can use in [Look Dev](Look-Dev.html) to simulate different lighting conditions. Each environment uses an HDRI (High Dynamic Range Image) for its skybox and also includes properties that you can use to fine-tune the environment.
<a name="Creation"></a>
![](Images/LookDevEnvironmentLibrary1.png)
## Creating an Environment Library
To create an Environment Library Asset, either:
- Select **Assets > Create > Rendering Environment Library (Look Dev)**.
- Open [Look Dev](Look-Dev.html) and click the **New Library** button.
## Creating and editing an environment
After you create an Environment Library, you can add environments to it, which you can then use in Look Dev. To create environments or edit their properties, you use the Look Dev window itself, which means you first need to open the Environment Library in Look Dev. To do this, either:
- Go to the Look Dev window (menu: **Window > Rendering > Look Dev**) and drag your Environment Library from your Project window into the sidebar.
- In your Project window, click on your Environment Library Asset. Then, in the Inspector, click the **Open in LookDev window** button.
If you already have environments in the Environment Library, you can see a list of them in the sidebar. When you click on any of the HDRI previews for an environment, a box appears at the bottom of the Environment Library list. This contains the selected environment's properties for you to edit.
To add, remove, or duplicate environments, use the toolbar at the bottom of the Environment Library list, which contains the following buttons.
| **Button** | **Function** | **Description** |
| ------------------------------------------------------------ | ------------- | ------------------------------------------------------------ |
| ![](Images/LookDevEnvironmentLibrary2.png) | **Add** | Click this button to add a new environment to the bottom of the list. |
| ![](Images/LookDevEnvironmentLibrary3.png) | **Remove** | Click this button to remove the currently selected environment. The selected environment is the one with the blue frame. |
| ![](Images/LookDevEnvironmentLibrary4.png) | **Duplicate** | Click this button to duplicate the currently selected environment and add it as a new environment to the bottom of the list. |
## Properties
![](Images/LookDevEnvironmentLibrary5.png)
| **Property** | **Description** |
| ------------------- | ------------------------------------------------------------ |
| **Sky with Sun** | Set the HDRI Texture that Look Dev uses for the sky lighting when using this environment. For information on how to import an HDRI Texture, see [Importing an HDRI Texture](#ImportingAnHDRI). |
| **Sky without Sun** | Set the HDRI Texture that Look Dev uses for compositing the shadows when simulating a sun in the view. If you do not assign this value, Look Dev uses a lower intensity version of the same HDRI Texture in **Sky with Sun**. For information on how to import an HDRI Texture, see [Importing an HDRI Texture](#ImportingAnHDRI). |
| **Rotation** | Set the offset longitude that Look Dev applies for the whole sky and sun position. |
| **Exposure** | Set the exposure that Look Dev uses when it renders the environment. |
| **Sun Position** | Set the position of the sun when compositing the shadows. The Sun button at the end of the line automatically places the sun on the brightest spot of the **Sky with Sun** HDRI Texture. |
| **Shadow Tint** | Use the color picker to set the color of the tint that Look Dev uses to color shadows. |
<a name="ImportingAnHDRI"></a>
## Importing an HDRI Texture
To import an HDRI Texture into the Unity Editor, load an **.hdr** or **.exr** file into your Unity Project like you would any other image. In the Texture Importer Inspector window, set **Texture Type** to **Default**, set **Texture Shape** to **Cube**, and set **Convolution Type** to **None**.
When you want to test an HDRI Texture Asset or a skybox cube map Material, drag and drop it into the Look Dev view.
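If you prefer to apply these importer settings from an Editor script, the following is a minimal sketch. The asset path `Assets/MySky.exr` and the menu item name are hypothetical; the settings mirror the Inspector values listed above.

```c#
using UnityEditor;

public static class HdriImportExample
{
    [MenuItem("Tools/Configure HDRI Importer")]
    static void ConfigureHdri()
    {
        // Hypothetical path; point this at your .hdr or .exr asset.
        var importer = (TextureImporter)AssetImporter.GetAtPath("Assets/MySky.exr");

        var settings = new TextureImporterSettings();
        importer.ReadTextureSettings(settings);
        settings.textureType = TextureImporterType.Default;                   // Texture Type: Default
        settings.textureShape = TextureImporterShape.TextureCube;             // Texture Shape: Cube
        settings.cubemapConvolution = TextureImporterCubemapConvolution.None; // Convolution Type: None
        importer.SetTextureSettings(settings);
        importer.SaveAndReimport();
    }
}
```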

# Look Dev
Look Dev is an image-based lighting tool that contains a viewer for you to check and compare Assets to ensure they work well in various lighting conditions. Look Dev uses the Scriptable Render Pipeline, so it can display the Asset in the same way as it looks in your Scene. You can load Assets into Look Dev either as Prefabs or from the Hierarchy window.
Look Dev is only available in Edit mode. The Look Dev window closes when you enter Play mode.
### Asset validation
Asset validation confirms whether Assets are authored correctly and behave as expected in different lighting environments.
You must use an HDRI (high dynamic range image) to validate your Assets in Look Dev. An HDRI contains real-world lighting with incredibly high detail. As such, it offers perfect lighting that is difficult to create by hand. By using such an accurate lighting environment to test an Asset, you can determine whether the Asset itself or your Project's lighting is reducing the visual quality of your Scene.
You can load two different Assets into Look Dev at the same time and compare them in two viewports. For example, an Art Director can check that a new Asset matches the art direction guidelines of a reference Asset.
## Using Look Dev
To open Look Dev in the Unity Editor, select **Window > Rendering > Look Dev**. The first time you use Look Dev, you must either create a new [Environment Library](Look-Dev-Environment-Library.html) or load one. For information on how to create an Environment Library, see the [Environment Library documentation](Look-Dev-Environment-Library.html).
### Viewports
By default, there is only one viewport in Look Dev, but you can choose from a selection of split-screen views (see the [Multi-view section](#MultiView)).
### Controls
Navigation with the Look Dev Camera works in a similar way to the [Scene view Camera](https://docs.unity3d.com/Manual/SceneViewNavigation.html):
- **Rotate around pivot:** Left click and drag (this is similar to the Scene view except that you need to press the Alt key for the Scene view Camera).
- **Pan camera:** Middle click and drag.
- **Zoom:** Alt + right click and drag.
- **Forward/backward:** Mouse wheel.
- **First Person mode:** Right click + W, A, S, and D.
### Loading Assets into Look Dev
Look Dev lets you view:
**Prefabs** - To load a Prefab into Look Dev, drag it from the Project window into the Look Dev viewport.
**GameObjects** - To load a copy of a Hierarchy GameObject, drag the GameObject from the Hierarchy into the Look Dev viewport.
## Viewport modes
Use the toolbar in the top-left of the window to change which viewing mode Look Dev uses.
### Single viewport
![](Images/LookDev1.png)
By default, Look Dev displays a single viewport which contains the Prefab or GameObject you are working with. If you are in another viewing mode, you can click either the number **1** or number **2** button to go back to single view. Each button corresponds to a viewport in Look Dev. Select button **1** to use viewport 1, and button **2** to use viewport 2.
<a name="MultiView"></a>
### Multi-viewport
![](Images/LookDev2.png)
Use multiple viewports to compare different environments and settings for the same Asset. You can arrange viewports:
- Vertically side-by-side. Use this mode to compare two different lighting conditions on the same Asset to check that the Asset behaves correctly.
- Horizontally side-by-side. Use this mode to compare two different lighting conditions for horizontal objects, like an environment Asset, to check that the Asset behaves correctly.
- Split-screen. Use this mode to investigate texture problems using a debug Shader mode (for example, use one screen to view Normal or Albedo shading, and the other for environment-lit mode).
- Side-by-side and split-screen. Use this mode to compare two different versions of the same Asset using the same lighting conditions to see which changes improve the Asset's quality.
All of these modes are useful for comparing two different versions of the same Asset under the same lighting conditions to see which changes improve the Asset's quality.
To load a different Prefab or Hierarchy GameObject into each split-screen view, drag and drop the Asset into the viewport that you want to view it in.
When using multiple viewports, it only makes sense to compare different Prefabs or GameObjects when you want to look at two versions of the same Asset. Comparing completely different Assets doesn't give you a good idea of the difference in lighting or visual effect.
##### Vertical or horizontal side-by-side
Vertical and horizontal side-by-side viewports show an identical view of your Asset.
![](Images/LookDev3.png)
##### Split-screen
In a split-screen view, there is a red/blue manipulation Gizmo that separates the two viewports. For information on how to use this Gizmo, see [Using the manipulation Gizmo](#ManipulationGizmo).
![](Images/LookDev4.png)
#### Multi-viewport Camera
By default, Look Dev synchronizes the camera movement for both views. To decouple the Cameras from one another and manipulate them independently, click the **Synchronized Cameras** button between the two numbered Camera buttons.
![](Images/LookDev5.png)
To align the cameras with each other, or reset them, click on the drop-down arrow next to the viewport **2** icon:
![](Images/LookDev6.png)
<a name="ManipulationGizmo"></a>
### Using the manipulation Gizmo
The manipulation Gizmo represents the separation plane between the two viewports. Its behavior differs slightly in split-screen mode, but you use it in the same way in both side-by-side and split-screen modes.
#### Moving the separator
To move the separator, click and drag the straight line of the Gizmo to the location you want.
![](Images/LookDev7.png)
#### Changing the orientation and length
To change the orientation and length of the manipulator Gizmo, click and drag the circle at either end of the manipulator. Changing the length of the Gizmo lets you set the orientation and [blending](#Blending) values more precisely.
![](Images/LookDev8.png)
#### Changing the split in increments
To change the split in increments, click and hold the circle on the end of the manipulation Gizmo, then hold Shift as you move the mouse. This snaps the manipulation Gizmo to set angles in increments of 22.5°, which is useful for a perfectly horizontal, vertical or diagonal angle.
<a name="Blending"></a>
#### Blending
The central white circle on the separator allows you to blend between the two views. Left click on it and drag along the red line to blend the left-hand view with the right-hand view. Drag along the blue line to blend the right-hand view with the left-hand view (as shown in the image below).
The white circle automatically snaps back into the center when you drag it back. This helps you get back to the default blending value quickly.
![](Images/LookDev9.png)
### HDRI environments in Look Dev
Lighting in Look Dev uses an HDRI. The Look Dev view allows you to manipulate and easily switch between HDRIs to simulate different environments for the Asset you are working on.
Look Dev uses the [Environment Library](Look-Dev-Environment-Library.html) Asset to store a list of environments, which are HDRIs with extra properties that you can use to further refine the environment. For information on how to create, edit, and assign Environment Libraries, see the [Environment Library documentation](Look-Dev-Environment-Library.html#Creation).
## Implementing Look Dev for your custom Scriptable Render Pipeline
To use Look Dev in your custom Scriptable Render Pipeline, you must implement the **UnityEngine.Rendering.LookDev.IDataProvider** interface.
| **Function** | **Description** |
| ------------------------------------------------------------ | ------------------------------------------------------------ |
| **void FirstInitScene(StageRuntimeInterface stage)** | Look Dev calls this function after it initializes the Scene with a Light and Camera. It uses this function to add and configure extra components according to the needs of your Scriptable Render Pipeline. |
| **void UpdateSky(Camera camera, Sky sky, StageRuntimeInterface stage)** | Look Dev uses this function to update the environment when you change something in Look Dev. You can handle the sky in various ways, so add code that corresponds to your Scriptable Render Pipeline. |
| **IEnumerable&lt;string&gt; supportedDebugModes { get; }** | Use this function to specify the list of supported debug modes. You do not need to add **None** because Look Dev handles that automatically. |
| **void UpdateDebugMode(int debugIndex)** | Use this function to update the debug mode based on what the user selects. The **debugIndex** matches the list in **supportedDebugModes**. If the user selects **None**, then the **debugIndex** is **-1**. |
| **void GetShadowMask(ref RenderTexture output, StageRuntimeInterface stage)** | This function computes a shadow map. The given **StageRuntimeInterface** contains access to the Camera and a Light simulating the sun. |
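As a rough sketch, an implementation based only on the members listed above might look like the following. The debug mode names and all method bodies are placeholders; what they do depends entirely on your pipeline.

```c#
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Rendering.LookDev;

public class MyLookDevDataProvider : IDataProvider
{
    public void FirstInitScene(StageRuntimeInterface stage)
    {
        // Add and configure extra components on the Camera and Light
        // that Look Dev created, as your SRP requires.
    }

    public void UpdateSky(Camera camera, Sky sky, StageRuntimeInterface stage)
    {
        // Apply the given sky the way your pipeline expects,
        // for example through a global shader property or a volume.
    }

    // Placeholder debug mode names; Look Dev adds "None" automatically.
    public IEnumerable<string> supportedDebugModes => new[] { "Albedo", "Normal" };

    public void UpdateDebugMode(int debugIndex)
    {
        // debugIndex matches supportedDebugModes; -1 means the user selected None.
    }

    public void GetShadowMask(ref RenderTexture output, StageRuntimeInterface stage)
    {
        // Render a shadow mask into output, using the Camera and the
        // Light simulating the sun that the StageRuntimeInterface exposes.
    }
}
```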

* [SRP Core](index.md)
* [What's new](whats-new.md)
* [12](whats-new-12.md)
* [13](whats-new-13.md)
* Camera components
* [Free Camera](Free-Camera.md)
* [Camera Switcher](Camera-Switcher.md)
* [Render Graph](render-graph-system.md)
* [Benefits of the render graph system](render-graph-benefits.md)
* [Render graph fundamentals](render-graph-fundamentals.md)
* [Writing a Render Pipeline](render-graph-writing-a-render-pipeline.md)
* [RTHandle system](rthandle-system.md)
* [RTHandle fundamentals](rthandle-system-fundamentals.md)
* [Using the RTHandle system](rthandle-system-using.md)
* [Custom Material Inspector](custom-material-inspector.md)
* [Adding properties in the menu](adding-properties.md)
* [Synchronizing shader code and C#](generating-shader-includes.md)
* [Look Dev](Look-Dev.md)
* [Environment Library](Look-Dev-Environment-Library.md)
* [Light Anchor](view-lighting-tool.md)

## Light Anchor
The Light Anchor can help to place light sources around subjects, in relation to a Camera and an anchor point. It's particularly effective for cinematic lighting, which often requires multiple light sources orbiting a subject.
## Using the Light Anchor Component
To add a Light Anchor component to a GameObject in your Scene:
1. Select a Light GameObject in the hierarchy to open its Inspector window.
2. Go to **Add Component** > **Rendering** > **Light Anchor**.
By default, the Anchor's position is the same as the position of the GameObject the Light Anchor Component is attached to.
**Note**: To use the Light Anchor, you must set the Tag of at least one Camera to "MainCamera".
Use the **Orbit** and **Elevation** to control the orientation of the light, in degrees, relative to the main Camera's and Anchor's positions. If the Light has a Cookie or an IES Profile, use the **Roll** to change their orientation. Use the **Distance** to control how far from the anchor, in meters, you want to place the Light.
You can use the **Anchor Position Override** to provide a GameObject's [Transform](https://docs.unity3d.com/ScriptReference/Transform.html) as an anchor point for the Light. This is useful if you want the Light to follow a specific GameObject in the Scene.
![](Images/LightAnchorAnimation.gif)
**Note**: The above example uses the Main Camera as the reference Camera that adjusts the light rotation. The Common presets might create a different result in the Scene View if your view isn't aligned with the Main Camera.
You can set a **Position Offset** for this custom Anchor. This is useful if the Transform position of the custom Anchor isn't centered appropriately for the light to orbit correctly around the custom Anchor.
![](Images/LightAnchor0.png)
The Light Anchor component also includes a list of **Presets** that you can use to set the Light's orientation relative to the main Camera.
## Properties
| **Property** | **Description** |
| --------------- | ------------------------------------------------------------ |
| **Orbit** | Use the left icon to control the Orbit of the light. This tool becomes green when you move the icon. |
| **Elevation** | Use the middle icon to control the Elevation of the light. This tool becomes blue when you move the icon. |
| **Roll** | Use the right icon to control the Roll of the light. This tool becomes gray when you move the icon. This is useful if the light has an IES profile or a Cookie. |
| **Distance** | Controls the distance between the light and its anchor in world space. |
| **Up Direction** | Defines the space of the up direction of the anchor. When you set this value to Local, the Up Direction is relative to the Camera. |
| **Anchor Position Override** | Allows you to use a GameObject's [Transform](https://docs.unity3d.com/ScriptReference/Transform.html) as anchor position instead of the LightAnchor's Transform. When the Transform of the GameObject you assigned to this property changes, the Light Anchor's Transform also changes. |
| **Common** | Assigns a preset to the light component based on the behavior of studio lights. |

# Adding properties to the Core Render Pipeline settings section
To add properties in the **Core Render Pipeline** settings section (**Edit > Preferences > Core Render Pipeline**), create a class that implements the interface `ICoreRenderPipelinePreferencesProvider`.
For example:
```c#
using System.Collections.Generic;
using UnityEditor;
using UnityEditor.Rendering;
using UnityEngine;

class MyPreference : ICoreRenderPipelinePreferencesProvider
{
    class Styles
    {
        public static readonly GUIContent myBoolLabel = EditorGUIUtility.TrTextContent("My check box", "The description of the property.");
    }

    // Search keywords that make the property discoverable in the Preferences window.
    public List<string> keywords => new List<string>() { Styles.myBoolLabel.text };

    // Header of the section that Unity shows under Core Render Pipeline.
    public GUIContent header => EditorGUIUtility.TrTextContent("My property section", "The description of my property section.");

    public static bool s_MyBoolPreference;

    // Draws the property and stores the new value when the user changes it.
    public void PreferenceGUI()
    {
        EditorGUI.BeginChangeCheck();
        var newValue = EditorGUILayout.Toggle(Styles.myBoolLabel, s_MyBoolPreference);
        if (EditorGUI.EndChangeCheck())
        {
            s_MyBoolPreference = newValue;
        }
    }
}
```
Unity shows the new properties in the **Core Render Pipeline** settings section:
![](Images/core_render_pipeline_preference_provider.png)

# Custom Material Inspector
Custom Material Inspectors enable you to define how Unity displays properties in the Material Inspector for a particular shader. This is useful if a shader includes a lot of properties and you want to organize them in the Inspector. The Universal Render Pipeline (URP) and High Definition Render Pipeline (HDRP) both support custom Material Inspectors, but the method to create them is slightly different.
## Creating a custom Material Inspector
The implementation for custom Material Inspectors differs between URP and HDRP. For example, for compatibility purposes, every custom Material Inspector in HDRP must inherit from `HDShaderGUI` which does not exist in URP. For information on how to create custom Material Inspectors for the respective render pipelines, see:
- **HDRP**: [HDRP custom Material Inspectors](https://docs.unity3d.com/Packages/com.unity.render-pipelines.high-definition@latest?subfolder=/manual/hdrp-custom-material-inspector.html).
- **URP**: [Unity Custom Shader GUI](https://docs.unity3d.com/Manual/SL-CustomShaderGUI.html).
## Assigning a custom Material Inspector
When you create a shader, either hand-written or using Shader Graph, both URP and HDRP provide a default editor for it to use. To override this default and provide your own custom Material Inspector, the method differs depending on whether you hand-wrote the shader or used Shader Graph.
### Using hand-written shaders
To set a custom Material Inspector for a hand-written shader:
1. Open the shader source file.
2. Assign a string that contains the class name of the custom Material Inspector to the **CustomEditor** shader instruction.
This is the same method as for the Built-in Renderer's [custom shader GUI](<https://docs.unity3d.com/Manual/SL-CustomShaderGUI.html>).
For an example of how to do this, see the following shader code sample. In this example, the name of the custom Material Inspector class is **ExampleCustomMaterialInspector**:
```c#
Shader "Custom/Example"
{
    Properties
    {
        // Shader properties
    }
    SubShader
    {
        // Shader code
    }
    CustomEditor "ExampleCustomMaterialInspector"
}
```
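For reference, a minimal sketch of the corresponding inspector class, assuming it inherits from Unity's `ShaderGUI` base class (HDRP inspectors must inherit from `HDShaderGUI` instead, as noted above):

```c#
using UnityEditor;
using UnityEngine;

// The class name must match the string passed to the CustomEditor instruction.
public class ExampleCustomMaterialInspector : ShaderGUI
{
    public override void OnGUI(MaterialEditor materialEditor, MaterialProperty[] properties)
    {
        // Organize and draw the material properties here.
        // Drawing the default inspector is the simplest starting point:
        base.OnGUI(materialEditor, properties);
    }
}
```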
### Using Shader Graph
To set a custom Material Inspector for a Shader Graph shader:
1. Open the Shader Graph.
2. In the [Graph Inspector](<https://docs.unity3d.com/Packages/com.unity.shadergraph@latest?subfolder=/manual/Internal-Inspector.html>), open the Graph Settings tab.
3. If **Active Targets** does not include the render pipeline your project uses, click the **plus** button then, in the drop-down, click the render pipeline.
4. In the render pipeline section (**HDRP** or **URP** depending on the render pipeline your project uses), find the **Custom Editor GUI** property and provide it with the name of the custom Material Inspector class.

# Synchronizing shader code and C#
Unity can generate HLSL code based on C# structs to synchronize data and constants between shaders and C#. In Unity, the process of generating the HLSL code from C# code is called generating shader includes. When Unity generates shader includes, it parses all the C# files in the project and, for every file that contains a struct with a `GenerateHLSL` attribute, generates corresponding HLSL code. It places this HLSL code in a file with the same name as the original, but with the `.cs.hlsl` file extension.
## Generating shader includes
To generate an HLSL equivalent for a C# struct:
1. Add the GenerateHLSL attribute to the struct. To do this, above the line that declares the struct, add `[GenerateHLSL(PackingRules.Exact, false)]`. For an example on how to do this, see the sample code below. For more information about the GenerateHLSL attribute, see the [API documentation](../api/UnityEngine.Rendering.GenerateHLSL.html).
2. In the Unity Editor, go to **Edit** > **Render Pipeline** > **Generate Shader Includes**.
The following code example is from the High Definition Render Pipeline (HDRP). It shows an extract of the C# representation of a directional light. The original file is `LightDefinition.cs`. When Unity generates the HLSL shader code, it places it in a new file called `LightDefinition.cs.hlsl`.
```c#
// LightDefinition.cs
[GenerateHLSL(PackingRules.Exact, false)]
struct DirectionalLightData
{
    public Vector3 positionRWS;
    public uint lightLayers;
    public float lightDimmer;
    public float volumetricLightDimmer; // Replaces 'lightDimmer'
    public Vector3 forward;
    public Vector4 surfaceTextureScaleOffset;
};
```
```hlsl
// LightDefinition.cs.hlsl
// Generated from UnityEngine.Rendering.HighDefinition.DirectionalLightData
// PackingRules = Exact
struct DirectionalLightData
{
    float3 positionRWS;
    uint lightLayers;
    float lightDimmer;
    float volumetricLightDimmer;
    float3 forward;
    float4 surfaceTextureScaleOffset;
};
```

# SRP Core
![](https://blogs.unity3d.com/wp-content/uploads/2018/01/image5_rs.png)
The Scriptable Render Pipeline (SRP) is a Unity feature that allows you to write C# scripts to control the way Unity renders each frame. SRP Core is a package that makes it easier to create or customize an SRP.
SRP Core contains reusable code, including boilerplate code for working with platform-specific graphics APIs, utility functions for common rendering operations, and the shader libraries used in the High Definition Render Pipeline (HDRP) and Universal Render Pipeline (URP).
If you are creating a custom SRP from scratch or customizing a prebuilt SRP, using SRP Core will save you time.
For more information on SRP, including a guide to getting started with a custom SRP, see the [SRP documentation](https://docs.unity3d.com/Manual/ScriptableRenderPipeline.html). For more information on Unity's prebuilt SRPs, see the [Universal Render Pipeline (URP) documentation](https://docs.unity3d.com/Packages/com.unity.render-pipelines.universal@latest), or the [High Definition Render Pipeline (HDRP) documentation](https://docs.unity3d.com/Packages/com.unity.render-pipelines.high-definition@latest).

# Benefits of the render graph system
## Efficient memory management
When you manage resource allocation manually, you have to account for scenarios when every rendering feature is active at the same time and thus allocate for the worst-case scenario. When particular rendering features are not active, the memory for their resources is still allocated even though the render pipeline does not use it. A render graph only allocates resources that the frame actually uses. This reduces the memory footprint of the render pipeline and removes the need for complicated logic to handle resource allocation. Another benefit of efficient memory management is that, because a render graph can reuse resources efficiently, there are more resources available to create features for your render pipeline.
## Automatic synchronization point generation
Asynchronous compute queues can run in parallel to the regular graphics workload and, as a result, may reduce the overall GPU time it takes to process a render pipeline. However, it can be difficult to manually define and maintain synchronization points between an asynchronous compute queue and the regular graphics queue. A render graph automates this process and, using the high-level declaration of the render pipeline, generates correct synchronization points between the compute and graphics queues.
## Maintainability
One of the most complex issues in render pipeline maintenance is the management of resources. Because a render graph manages resources internally, it makes it much easier to maintain your render pipeline. Using the RenderGraph API, you can write efficient self-contained rendering modules that declare their input and output explicitly and are able to plug in anywhere in a render pipeline.

# Render graph fundamentals
This document describes the main principles of a render graph and an overview of how Unity executes it.
## Main principles
Before you can write render passes with the [RenderGraph](../api/UnityEngine.Experimental.Rendering.RenderGraphModule.RenderGraph.html) API, you need to know the following foundational principles:
- You no longer handle resources directly and instead use render graph system-specific handles. All RenderGraph APIs use these handles to manipulate resources. The resource types a render graph manages are [RTHandles](rthandle-system.md), [ComputeBuffers](https://docs.unity3d.com/ScriptReference/ComputeBuffer.html), and [RendererLists](../api/UnityEngine.Experimental.Rendering.RendererList.html).
- Actual resource references are only accessible within the execution code of a render pass.
- The framework requires an explicit declaration of render passes. Each render pass must state which resources it reads from and/or writes to.
- There is no persistence between each execution of a render graph. This means that the resources you create inside one execution of the render graph cannot carry over to the next execution.
- For resources that need persistence (from one frame to another for example), you can create them outside of a render graph, like regular resources, and import them in. They behave like any other render graph resource in terms of dependency tracking, but the graph does not handle their lifetime.
- A render graph mostly uses `RTHandles` for texture resources. This has a number of implications for how you write shader code and how you set the textures up.
## Resource Management
The render graph system calculates the lifetime of each resource with the high-level representation of the whole frame. This means that when you create a resource via the RenderGraph API, the render graph system does not create the resource at that time. Instead, the API returns a handle that represents the resource, which you then use with all RenderGraph APIs. The render graph only creates the resource just before the first pass that needs to write it. In this case, “creating” does not necessarily mean that the render graph system allocates resources. Rather, it means that it provides the necessary memory to represent the resource so that it can use the resource during a render pass. In the same manner, it also releases the resource memory after the last pass that needs to read it. This way, the render graph system can reuse memory in the most efficient manner based on what you declare in your passes. If the render graph system does not execute a pass that requires a specific resource, then the system does not allocate the memory for the resource.
## Render graph execution overview
Render graph execution is a three-step process that the render graph system completes, from scratch, every frame. This is because a graph can change dynamically from frame to frame, for example, depending on the actions of the user.
### Setup
The first step is to set up all the render passes. This is where you declare all the render passes to execute and the resources each render pass uses.
### Compilation
The second step is to compile the graph. During this step, the render graph system culls render passes if no other render pass uses their outputs. This allows for a simpler setup: you can declare render passes unconditionally and rely on the render graph to cull the ones that are not needed. A good example of that is debug render passes. If you declare a render pass that produces a debug output that you don't present to the back buffer, the render graph system culls that pass automatically.
This step also calculates the lifetime of resources. This allows the render graph system to create and release resources in an efficient way as well as compute the proper synchronization points when it executes passes on the asynchronous compute pipeline.
### Execution
Finally, execute the graph. The render graph system executes all render passes that it did not cull, in declaration order. Before each render pass, the render graph system creates the proper resources and releases them after the render pass if later render passes do not use them.

# The render graph system
The render graph system sits on top of Unity's Scriptable Render Pipeline (SRP). It allows you to author a custom SRP in a maintainable and modular way. Unity's High Definition Render Pipeline (HDRP) uses the render graph system.
You use the [RenderGraph](../api/UnityEngine.Experimental.Rendering.RenderGraphModule.RenderGraph.html) API to create a render graph. A render graph is a high-level representation of the custom SRP's render passes, which explicitly states how the render passes use resources.
Describing render passes in this way has two benefits: it simplifies render pipeline configuration, and it allows the render graph system to efficiently manage parts of the render pipeline, which can result in improved runtime performance. For more information on the benefits of the render graph system, see [benefits of the render graph system](render-graph-benefits.md).
To use the render graph system, you need to write your code in a different way to a regular custom SRP. For more information on how to write code for the render graph system, see [writing a render pipeline](render-graph-writing-a-render-pipeline.md).
For information on the technical principles behind the render graph system, see [render graph fundamentals](render-graph-fundamentals.md).
**Note**: Render graph is currently experimental, which means Unity might change its API during future development.
This section contains the following pages:
- [Render graph benefits](render-graph-benefits.md)
- [Render graph fundamentals](render-graph-fundamentals.md)
- [Writing a render pipeline](render-graph-writing-a-render-pipeline.md)

# Writing a render pipeline
This page covers how to use the RenderGraph API to write a render pipeline. For information about the RenderGraph API, see [render graph system](render-graph-system.md) and [render graph fundamentals](render-graph-fundamentals.md).
### Initialization and cleanup of Render Graph
To begin, your render pipeline needs to maintain at least one instance of [RenderGraph](../api/UnityEngine.Experimental.Rendering.RenderGraphModule.RenderGraph.html). This is the main entry point for the API. You can use more than one instance of a render graph, but be aware that Unity does not share resources across `RenderGraph` instances, so for optimal memory usage, use only one instance.
```c#
using UnityEngine.Experimental.Rendering.RenderGraphModule;

public class MyRenderPipeline : RenderPipeline
{
    RenderGraph m_RenderGraph;

    void InitializeRenderGraph()
    {
        m_RenderGraph = new RenderGraph("MyRenderGraph");
    }

    void CleanupRenderGraph()
    {
        m_RenderGraph.Cleanup();
        m_RenderGraph = null;
    }
}
```
To initialize a `RenderGraph` instance, call the constructor with an optional name to identify the render graph. This also registers a render graph-specific panel in the SRP Debug window which is useful for debugging the RenderGraph instance. When you dispose of a render pipeline, call the `Cleanup()` method on the RenderGraph instance to properly free all the resources the render graph allocated.
### Starting a render graph
Before you add any render passes to the render graph, you first need to begin recording it. To do this, call the `RecordAndExecute` method. This method returns a disposable struct of type `RenderGraphExecution` that you can use within a `using` scope. When the `RenderGraphExecution` struct exits the scope, or when you call its `Dispose` function, Unity executes the render graph.
This pattern ensures that the render graph is always executed correctly even in the case of an exception during the recording of the graph.
For details about this method's parameters, see the [API documentation](../api/UnityEngine.Experimental.Rendering.RenderGraphModule.RenderGraph.html).
```c#
var renderGraphParams = new RenderGraphExecuteParams()
{
    scriptableRenderContext = renderContext,
    commandBuffer = cmd,
    currentFrameIndex = frameIndex
};

using (m_RenderGraph.RecordAndExecute(renderGraphParams))
{
    // Add your passes here
}
```
### Creating resources for the render graph
When you use a render graph, you never directly allocate resources yourself. Instead, the RenderGraph instance handles the allocation and disposal of its own resources. To declare resources and use them in a render pass, you use render graph-specific APIs that return handles to the resources.
There are two main types of resources that a render graph uses:
- **Internal resources**: These resources are internal to a render graph execution and you cannot access them outside of the RenderGraph instance. You also cannot pass these resources from one execution of a graph to another. The render graph handles the lifetime of these resources.
- **Imported resources**: These usually come from outside the render graph execution. Typical examples are the back buffer (provided by the camera) or buffers that you want the graph to use across multiple frames for temporal effects (like using the camera color buffer for temporal anti-aliasing). You are responsible for handling the lifetime of these resources.
After you create or import a resource, the render graph system represents it as a resource type-specific handle (`TextureHandle`, `ComputeBufferHandle`, or `RendererListHandle`). This way, the render graph can use internal and imported resources in the same way in all of its APIs.
```c#
public TextureHandle RenderGraph.CreateTexture(in TextureDesc desc);
public ComputeBufferHandle RenderGraph.CreateComputeBuffer(in ComputeBufferDesc desc);
public RendererListHandle RenderGraph.CreateRendererList(in RendererListDesc desc);
public TextureHandle RenderGraph.ImportTexture(RTHandle rt);
public TextureHandle RenderGraph.ImportBackbuffer(RenderTargetIdentifier rt);
public ComputeBufferHandle RenderGraph.ImportComputeBuffer(ComputeBuffer computeBuffer);
```
The main ways to create resources are described above, but there are variations of these functions. For the complete list, see the [API documentation](../api/UnityEngine.Experimental.Rendering.RenderGraphModule.RenderGraph.html). Note that the specific function for importing the camera back buffer is `ImportBackbuffer`, which takes a `RenderTargetIdentifier`.
To create resources, each API requires a descriptor structure as a parameter. The properties in these structures are similar to the properties in the resources they represent (respectively [RTHandle](rthandle-system.md), [ComputeBuffer](https://docs.unity3d.com/ScriptReference/ComputeBuffer.html), and [RendererLists](../api/UnityEngine.Experimental.Rendering.RendererList.html)). However, some properties are specific to render graph textures.
Here are the most important ones:
- **clearBuffer**: This property tells the graph whether to clear the buffer when the graph creates it. This is how you should clear textures when using the render graph. This is important because a render graph pools resources, which means any pass that creates a texture might get an already existing one with undefined content.
- **clearColor**: This property stores the color to clear the buffer to, if applicable.
There are also two notions specific to textures that a render graph exposes through the `TextureDesc` constructors:
- **xrReady**: This boolean indicates to the graph whether this texture is for XR rendering. If true, the render graph creates the texture as an array for rendering into each XR eye.
- **dynamicResolution**: This boolean indicates to the graph whether it needs to dynamically resize this texture when the application uses dynamic resolution. If false, the texture does not scale automatically.
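For example, a sketch of a texture declaration that uses these properties. It assumes the `(scale, dynamicResolution, xrReady)` constructor overload that also appears in the render pass example later on this page, and a hypothetical texture name:

```c#
// Full-screen texture with dynamic resolution and XR support enabled.
TextureDesc desc = new TextureDesc(Vector2.one, true, true)
{
    colorFormat = GraphicsFormat.R8G8B8A8_UNorm,
    clearBuffer = true,       // Pooled textures can contain stale data, so clear on creation.
    clearColor = Color.black, // Color to clear to when clearBuffer is true.
    name = "MyColorBuffer"    // Hypothetical name, used for debugging.
};
TextureHandle colorBuffer = renderGraph.CreateTexture(desc);
```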
You can create resources either outside of render passes or inside the setup code for a render pass, but not in the rendering code.
Creating a resource outside of all render passes can be useful for cases where the first pass uses a given resource that depends on logic in the code that might change regularly. In this case, you must create the resource before any of those passes. A good example is using the color buffer for either a deferred lighting pass or a forward lighting pass. Both of these passes write to the color buffer, but Unity only executes one of them depending on the current rendering path chosen for the camera. In this case, you would create the color buffer outside both passes and pass it to the correct one as a parameter.
Creating a resource inside a render pass is usually for resources the render pass produces itself. For example, a blur pass requires an already existing input texture, but creates the output itself and returns it at the end of the render pass.
Note that creating a resource like that does not allocate GPU memory every frame. Instead, the render graph system reuses pooled memory. In the context of the render graph, think of resource creation more in terms of data flow in the context of a render pass than actual allocation. If a render pass creates a whole new output then it “creates” a new texture in the render graph.
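A sketch of the color buffer example above; `AddDeferredLightingPass`, `AddForwardLightingPass`, `useDeferredRendering`, and `colorBufferDesc` are hypothetical names for illustration:

```c#
// Created outside both passes, because only one of the two will run.
TextureHandle colorBuffer = renderGraph.CreateTexture(colorBufferDesc);

if (useDeferredRendering)
    AddDeferredLightingPass(renderGraph, colorBuffer);
else
    AddForwardLightingPass(renderGraph, colorBuffer);
```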
### Writing a render pass
Before Unity can execute the render graph, you must declare all the render passes. You write a render pass in two parts: setup and rendering.
#### Setup
During setup, you declare the render pass and all the data it needs to execute. You represent this data with a class, specific to the render pass, that contains all the relevant properties. These can be regular C# constructs (structs, plain-old-data types, and so on) as well as render graph resource handles. This data structure is accessible during the actual rendering code.
```c#
class MyRenderPassData
{
    public float parameter;
    public Material material; // used by the example render pass at the end of this page
    public TextureHandle inputTexture;
    public TextureHandle outputTexture;
}
```
After you define the pass data, you can then declare the render pass itself:
```c#
using (var builder = renderGraph.AddRenderPass<MyRenderPassData>("My Render Pass", out var passData))
{
    passData.parameter = 2.5f;
    passData.inputTexture = builder.ReadTexture(inputTexture);
    TextureHandle output = renderGraph.CreateTexture(new TextureDesc(Vector2.one, true, true)
        { colorFormat = GraphicsFormat.R8G8B8A8_UNorm, clearBuffer = true, clearColor = Color.black, name = "Output" });
    passData.outputTexture = builder.WriteTexture(output);
    builder.SetRenderFunc(myFunc); // Details below.
}
```
You define the render pass in the `using` scope around the `AddRenderPass` function. At the end of the scope, the render graph adds the render pass to the internal structures of the render graph for later processing.
The `builder` variable is an instance of `RenderGraphBuilder`. This is the entry point to build the information relating to the render pass. There are several important parts to this:
- **Declaring resource usage**: This is one of the most important aspects of the RenderGraph API. Here you explicitly declare whether the render pass needs read and/or write access to the resources. This allows the render graph to have an overall view of the whole rendering frame and thus determine the best use of GPU memory and synchronization points between various render passes.
- **Declaring the rendering function**: This is the function in which you call graphics commands. It receives the pass data you define for the render pass as a parameter as well as the render graph context. You set the rendering function for a render pass via `SetRenderFunc` and the function runs after the graph compiles.
- **Creating transient resources**: Transient, or internal, resources are resources you create for the duration of this render pass only. You create them in the builder rather than the render graph itself to reflect their lifetime. Creating transient resources uses the same parameters as the equivalent function in the RenderGraph APIs. This is particularly useful when a pass uses temporary buffers that should not be accessible outside of the pass. Outside the pass where you declare a transient resource, the handle to the resource becomes invalid and Unity throws errors if you try to use it.
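For example, a blur pass could declare a temporary ping-pong target that is only valid inside the pass (a minimal sketch; `tempTarget` is a hypothetical field on the pass data):
```c#
// A minimal sketch: the handle is only valid inside the pass that declares it.
passData.tempTarget = builder.CreateTransientTexture(
    new TextureDesc(Vector2.one, true, true)
    { colorFormat = GraphicsFormat.R8G8B8A8_UNorm, name = "TempBlurTarget" });
```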
The `passData` variable is an instance of the type you provide when you declare the pass. This is where you set the data that the rendering code can access. Note that the render graph does not use the contents of `passData` right away, but later in the frame, after it registers all the passes and the render graph compiles and executes. This means that any reference the `passData` stores must be constant across the whole frame. Otherwise, if you change the content before the render pass executes, it does not contain the correct content during the render pass. For this reason, it is best practice to only store value types in the `passData` unless you are certain that a reference stays constant until the pass finishes execution.
For an overview of the `RenderGraphBuilder` APIs, see the table below. For more details, see the API documentation:
| Function | Purpose |
| ------------------------------------------------------------ | ------------------------------------------------------------ |
| TextureHandle ReadTexture(in TextureHandle input) | Declares that the render pass reads from the `input` texture you pass into the function. |
| TextureHandle WriteTexture(in TextureHandle input) | Declares that the render pass writes to the `input` texture you pass into the function. |
| TextureHandle UseColorBuffer(in TextureHandle input, int index) | Same as `WriteTexture` but also automatically binds the texture as a color render target at the provided binding index at the beginning of the pass. |
| TextureHandle UseDepthBuffer(in TextureHandle input, DepthAccess flags) | Same as `WriteTexture` but also automatically binds the texture as a depth texture with the access flags you pass into the function. |
| TextureHandle CreateTransientTexture(in TextureDesc desc) | Creates a transient texture. This texture only exists for the duration of the pass. |
| RendererListHandle UseRendererList(in RendererListHandle input) | Declares that this render pass uses the Renderer List you pass in. The render pass uses the `RendererList.Draw` command to render the list. |
| ComputeBufferHandle ReadComputeBuffer(in ComputeBufferHandle input) | Declares that the render pass reads from the `input` Compute Buffer you pass into the function. |
| ComputeBufferHandle WriteComputeBuffer(in ComputeBufferHandle input) | Declares that the render pass writes to the `input` Compute Buffer you pass into the function. |
| ComputeBufferHandle CreateTransientComputeBuffer(in ComputeBufferDesc desc) | Creates a transient Compute Buffer. This buffer only exists for the duration of the pass. |
| void SetRenderFunc<PassData>(RenderFunc<PassData> renderFunc) where PassData : class, new() | Set the rendering function for the render pass. |
| void EnableAsyncCompute(bool value) | Declares that the render pass runs on the asynchronous compute pipeline. |
| void AllowPassCulling(bool value) | Specifies whether Unity can cull the render pass (the default is true). Disabling culling can be useful when the render pass has side effects and you never want the render graph system to cull it. |
#### Rendering Code
After you complete the setup, you can declare the function to use for rendering via the `SetRenderFunc` method on the `RenderGraphBuilder`. The function you assign must use the following signature:
```c#
delegate void RenderFunc<PassData>(PassData data, RenderGraphContext renderGraphContext) where PassData : class, new();
```
You can pass the render function either as a `static` function or as a lambda. The benefit of using a lambda is better code clarity, because the rendering code sits right next to the setup code.
Note that if you use a lambda, be very careful not to capture any parameters from the enclosing scope, as that generates garbage, which Unity later locates and frees during garbage collection. If you use Visual Studio and hover over the arrow **=>**, it tells you whether the lambda captures anything from the scope. Avoid accessing members or member functions, because using either captures `this`.
The render function takes two parameters:
- `PassData data`: This data is of the type you pass in when you declare the render pass. This is where you can access the properties initialized during the setup phase and use them for the rendering code.
- `RenderGraphContext renderGraphContext`: This stores references to the `ScriptableRenderContext` and the `CommandBuffer` that provide utility functions and allow you to write rendering code.
##### Accessing resources in the render pass
Inside the rendering function, you can access all the render graph resource handles stored inside the `passData`. The conversion to actual resources is automatic so, whenever a function needs an RTHandle, a ComputeBuffer, or a RendererList, you can pass the handle and the render graph converts the handle to the actual resource implicitly. Note that doing such an implicit conversion outside of a rendering function results in an exception. This exception occurs because, outside of rendering, the render graph may not have allocated those resources yet.
##### Using the RenderGraphContext
The RenderGraphContext provides various functionality you need to write rendering code. The two most important are the `ScriptableRenderContext` and the `CommandBuffer`, which you use to call all rendering commands.
The RenderGraphContext also contains the `RenderGraphObjectPool`. This class helps you to manage temporary objects that you might need for rendering code.
##### Get temp functions
Two functions that are particularly useful during render passes are `GetTempArray` and `GetTempMaterialPropertyBlock`.
```c#
T[] GetTempArray<T>(int size);
MaterialPropertyBlock GetTempMaterialPropertyBlock();
```
`GetTempArray` returns a temporary array of type `T` and size `size`. This can be useful to allocate temporary arrays for passing parameters to materials, or to create a `RenderTargetIdentifier` array for multiple render target setups, without the need to manage the array's lifetime yourself.
`GetTempMaterialPropertyBlock` returns a clean material property block that you can use to set up parameters for a Material. This is particularly important because more than one pass might use a material, and each pass could use it with different parameters. Because rendering code execution is deferred via command buffers, copying material property blocks into the command buffer is mandatory to preserve data integrity on execution.
The render graph releases and pools all the resources these two functions return automatically after the pass execution. This means you don't have to manage them yourself, and no garbage is created.
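For example, a render function might combine both helpers like this (a minimal sketch; the `_SampleOffsets` property and the offset values are illustrative):
```c#
// Both objects return to the pool automatically after the pass executes.
var props = ctx.renderGraphPool.GetTempMaterialPropertyBlock();
Vector4[] offsets = ctx.renderGraphPool.GetTempArray<Vector4>(4);
for (int i = 0; i < offsets.Length; ++i)
    offsets[i] = new Vector4(i * 0.25f, 0.0f, 0.0f, 0.0f); // illustrative values
props.SetVectorArray("_SampleOffsets", offsets);
```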
#### Example render pass
The following code example contains a render pass with a setup and render function:
```c#
TextureHandle MyRenderPass(RenderGraph renderGraph, TextureHandle inputTexture, float parameter, Material material)
{
    using (var builder = renderGraph.AddRenderPass<MyRenderPassData>("My Render Pass", out var passData))
    {
        passData.parameter = parameter;
        passData.material = material;

        // Tells the graph that this pass will read inputTexture.
        passData.inputTexture = builder.ReadTexture(inputTexture);

        // Creates the output texture.
        TextureHandle output = renderGraph.CreateTexture(new TextureDesc(Vector2.one, true, true)
            { colorFormat = GraphicsFormat.R8G8B8A8_UNorm, clearBuffer = true, clearColor = Color.black, name = "Output" });
        // Tells the graph that this pass will write this texture, bound as render target 0.
        passData.outputTexture = builder.UseColorBuffer(output, 0);

        builder.SetRenderFunc(
            (MyRenderPassData data, RenderGraphContext ctx) =>
            {
                // The render target is already set via the use of UseColorBuffer above.
                // If you used builder.WriteTexture instead, you would need something like this:
                // CoreUtils.SetRenderTarget(ctx.cmd, data.outputTexture);

                // Set up the material for rendering.
                var materialPropertyBlock = ctx.renderGraphPool.GetTempMaterialPropertyBlock();
                materialPropertyBlock.SetTexture("_MainTexture", data.inputTexture);
                materialPropertyBlock.SetFloat("_FloatParam", data.parameter);
                CoreUtils.DrawFullScreen(ctx.cmd, data.material, materialPropertyBlock);
            });

        return output;
    }
}
```
### Ending the frame
Over the course of your application, the render graph needs to allocate various resources. It might use these resources for a time but then no longer need them. For the graph to free up those resources, call the `EndFrame()` method on your `RenderGraph` instance once per frame. This deallocates any resources that the render graph has not used since the last frame, and executes all the internal processing the render graph requires at the end of the frame.
Note that you should only call this once per frame and after all the rendering is complete (for example, after the last Camera renders). This is because different Cameras might have different rendering paths and thus need different resources. Purging after each Camera could cause the render graph to release resources too early, even though the next Camera might still need them.
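A minimal sketch of a frame loop, assuming `m_RenderGraph` is your `RenderGraph` instance and `RenderCamera` is a hypothetical function that declares all passes for one Camera:
```c#
// Render every camera first, then end the render graph frame exactly once.
foreach (var camera in cameras)
    RenderCamera(m_RenderGraph, camera);
m_RenderGraph.EndFrame(); // frees resources unused since the last frame
```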
## RTHandle system fundamentals
This document describes the main principles of the render target handle (RTHandle) system.
The RTHandle system is an abstraction on top of Unity's [RenderTexture](https://docs.unity3d.com/ScriptReference/RenderTexture.html) API. It makes it trivial to reuse render textures across Cameras that use various resolutions. The following principles are the foundation of how the RTHandle system works:
- You no longer allocate render textures yourself with a fixed resolution. Instead, you declare a render texture using a scale related to the full screen at a given resolution. The RTHandle system allocates the texture only once for the whole render pipeline so that it can reuse it for different Cameras.
- There is now the concept of reference size. This is the resolution the application uses for rendering. It is your responsibility to declare it before the render pipeline renders every Camera at a particular resolution. For information on how to do this, see the [Updating the RTHandle system](#updating-the-rthandle-system) section.
- Internally, the RTHandle system tracks the largest reference size you declare and uses it as the actual size of render textures. This largest reference size is also called the maximum size.
- Every time you declare a new reference size for rendering, the RTHandle system checks if it is larger than the current recorded largest reference size. If it is, the RTHandle system reallocates all render textures internally to fit the new size and replaces the largest reference size with the new size.
An example of this process is as follows. When you allocate the main color buffer, it uses a scale of **1** because it is a full-screen texture. You want to render it at the resolution of the screen. A downscaled buffer for a quarter-resolution transparency pass would use a scale of **0.5** for both the x-axis and y-axis. Internally the RTHandle system allocates render textures using the largest reference size multiplied by the scale you declare for the render texture. After that and before each Camera renders, you tell the system what the current reference size is. Based on that and the scaling factor for all textures, the RTHandle system determines if it needs to reallocate render textures. As mentioned above, if the new reference size is larger than the current largest reference size, the RTHandle system reallocates all render textures. By doing this, the RTHandle system ends up with a stable maximum resolution for all render textures, which is most likely the resolution of your main Camera.
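For instance, if the largest recorded reference size is 1920x1080, the RTHandle system allocates a render texture declared with a (0.5, 0.5) scale at 960x540. If a Camera then declares a reference size of 512x512, that same render texture renders into a 256x256 viewport while its allocation stays at 960x540.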
The key takeaway of this is that the actual resolution of the render textures is not necessarily the same as the current viewport: it can be bigger. This has implications when you write a renderer using RTHandles, which the [Using the RTHandle system](rthandle-system-using.md) documentation explains.
The RTHandleSystem also allows you to allocate textures with a fixed size. In this case, the RTHandle system never reallocates the texture. This allows you to use the RTHandle API consistently for both automatically-resized textures that the RTHandle system manages and regular fixed size textures that you manage.
## Using the RTHandle system
This page covers how to use the RTHandle system to manage render textures in your render pipeline. For information about the RTHandle system, see [RTHandle system](rthandle-system.md) and [RTHandle system fundamentals](rthandle-system-fundamentals.md).
### Initializing the RTHandle System
All operations related to `RTHandles` require an instance of the `RTHandleSystem` class. This class contains all the APIs necessary to allocate RTHandles, release RTHandles, and set the reference size for the frame. This means that you must create and maintain an instance of `RTHandleSystem` in your render pipeline or make use of the static RTHandles class mentioned later in this section. To create your own instance of `RTHandleSystem`, see the following code sample:
```c#
RTHandleSystem m_RTHandleSystem = new RTHandleSystem();
m_RTHandleSystem.Initialize(Screen.width, Screen.height);
```
When you initialize the system, you must supply the starting resolution. The above code example uses the width and height of the screen. Because the RTHandle system only reallocates render textures when a Camera requires a resolution larger than the current maximum size, the internal `RTHandle` resolution can only increase from the value you pass in here. It is good practice to initialize this resolution to be the resolution of the main display. This means the system does not need to unnecessarily reallocate the render textures (and cause unwanted memory spikes) at the beginning of the application.
You must only call the `Initialize` function once at the beginning of the application. After this, you can use the initialized instance to allocate textures.
Because you allocate the majority of `RTHandles` from the same `RTHandleSystem` instance, the RTHandle system also provides a default global instance through the `RTHandles` static class. Rather than maintain your own instance of `RTHandleSystem`, this allows you to use the same API that you get with an instance, but not worry about the lifetime of the instance. Using the static instance, the initialization becomes this:
```c#
RTHandles.Initialize(Screen.width, Screen.height);
```
The code examples in the rest of this page use the default global instance.
### Updating the RTHandle System
Before rendering with a Camera, you need to set the resolution the RTHandle system uses as a reference size. To do so, call the `SetReferenceSize` function.
```c#
RTHandles.SetReferenceSize(width, height);
```
Calling this function has two effects:
1. If the new reference size you provide is bigger than the current one, the RTHandle system reallocates all the render textures internally to match the new size.
2. After that, the RTHandle system updates internal properties that set viewport and render texture scales for when the system uses RTHandles as active render textures.
### Allocating and releasing RTHandles
After you initialize an instance of `RTHandleSystem`, whether this is your own instance or the static default instance, you can use it to allocate RTHandles.
There are three main ways to allocate an `RTHandle`. They all use the same `Alloc` method on the RTHandleSystem instance. Most of the parameters of these functions are the same as the regular Unity RenderTexture ones, so for more information see the [RenderTexture API documentation](https://docs.unity3d.com/ScriptReference/RenderTexture.html). This section focuses on the parameters that relate to the size of the `RTHandle`:
- `Vector2 scaleFactor`: This variant requires a constant 2D scale for width and height. The RTHandle system uses this to calculate the resolution of the texture against the maximum reference size. For example, a scale of (1.0f, 1.0f) generates a full-screen texture. A scale of (0.5f, 0.5f) generates a quarter-resolution texture.
- `ScaleFunc scaleFunc`: For cases when you don't want to use a constant scale to calculate the size of an `RTHandle`, you can provide a functor that calculates the size of the texture. The functor takes a `Vector2Int` as a parameter, which is the maximum reference size, and returns a `Vector2Int`, which represents the size you want the texture to be.
- `int width, int height`: This is for fixed-size textures. If you allocate a texture like this, it behaves like any regular RenderTexture.
There are also overrides that create RTHandles from a [RenderTargetIdentifier](https://docs.unity3d.com/ScriptReference/Rendering.RenderTargetIdentifier.html), [RenderTexture](https://docs.unity3d.com/ScriptReference/RenderTexture.html), or [Texture](https://docs.unity3d.com/Manual/Textures.html). These are useful when you want to use the RTHandle API to interact with all your textures, even though a texture might not be an actual `RTHandle`.
The following code sample contains example uses of the `Alloc` function:
```c#
// Simple Scale
RTHandle simpleScale = RTHandles.Alloc(Vector2.one, depthBufferBits: DepthBits.Depth32, dimension: TextureDimension.Tex2D, name: "CameraDepthStencil");
// Functor
Vector2Int ComputeRTHandleSize(Vector2Int screenSize)
{
    return DoSpecificResolutionComputation(screenSize);
}
RTHandle rtHandleUsingFunctor = RTHandles.Alloc(ComputeRTHandleSize, colorFormat: GraphicsFormat.R32_SFloat, dimension: TextureDimension.Tex2D);
// Fixed size
RTHandle fixedSize = RTHandles.Alloc(256, 256, colorFormat: GraphicsFormat.R8G8B8A8_UNorm, dimension: TextureDimension.Tex2D);
```
When you no longer need a particular RTHandle, you can release it. To do this, call the `Release` method.
```c#
myRTHandle.Release();
```
### Using RTHandles
After you allocate an RTHandle, you can use it exactly like a regular RenderTexture. There are implicit conversions to `RenderTargetIdentifier` and `RenderTexture`, which means you can use them with regular related Unity APIs.
However, when you use the RTHandle system, the actual resolution of the `RTHandle` might be different from the current resolution. For example, if the main Camera renders at 1920x1080 and a secondary Camera renders at 512x512, all RTHandle resolutions are based on the 1920x1080 resolution, even when rendering at lower resolutions. Because of this, take care when you set an RTHandle up as a render target. There are a number of APIs available in the [CoreUtils](../api/UnityEngine.Rendering.CoreUtils.html) class to help you with this. For example:
```c#
public static void SetRenderTarget(CommandBuffer cmd, RTHandle buffer, ClearFlag clearFlag, Color clearColor, int miplevel = 0, CubemapFace cubemapFace = CubemapFace.Unknown, int depthSlice = -1)
```
This function sets the `RTHandle` as the active render target but also sets up the viewport based on the scale of the `RTHandle` and the current reference size, not the maximum size.
For example, when the reference size is 512x512, even if the maximum size is 1920x1080, a texture of scale (1.0f, 1.0f) uses the 512x512 size and therefore sets up a 512x512 viewport. A (0.5f, 0.5f) scaled texture sets up a viewport of 256x256 and so on. This means that, when using these helper functions, the RTHandle system generates the correct viewport based on the `RTHandle` parameters.
This example is one of many different overrides for the `SetRenderTarget` function. For the full list of overrides, see the [documentation](../api/UnityEngine.Rendering.CoreUtils.html#UnityEngine_Rendering_CoreUtils_SetRenderTarget_CommandBuffer_RenderTargetIdentifier_RenderBufferLoadAction_RenderBufferStoreAction_RenderTargetIdentifier_RenderBufferLoadAction_RenderBufferStoreAction_UnityEngine_Rendering_ClearFlag_).
### Using RTHandles in shaders
When you sample from a full-screen render texture in a shader in the usual way, UVs span the whole 0 to 1 range. This is not always the case with `RTHandles`. The current rendering might only occur in a partial viewport. To take this into account, you must apply a scale to UVs when you sample `RTHandles` that use a scale. All the information necessary to handle `RTHandles` specificity inside shaders is in the `RTHandleProperties` structure that the `RTHandleSystem` instance provides. To access it, use:
```c#
RTHandleProperties rtHandleProperties = RTHandles.rtHandleProperties;
```
This structure contains the following properties:
```c#
public struct RTHandleProperties
{
    public Vector2Int previousViewportSize;
    public Vector2Int previousRenderTargetSize;
    public Vector2Int currentViewportSize;
    public Vector2Int currentRenderTargetSize;
    public Vector4 rtHandleScale;
}
```
This structure provides:
- The current viewport size. This is the reference size you set for rendering.
- The current render target size. This is the actual size of the render texture based on the maximum reference size.
- The `rtHandleScale`. This is the scale to apply to full-screen UVs to sample an RTHandle.
Values for previous frames are also available. For more information, see [Camera specific RTHandles](#camera-specific-rthandles). Generally, the most important property in this structure is `rtHandleScale`. It allows you to scale full-screen UV coordinates and use the result to sample an RTHandle. For example:
```hlsl
float2 scaledUVs = fullScreenUVs * rtHandleScale.xy;
```
However, because the partial viewport always starts at (0, 0), when you use integer pixel coordinates within the viewport to load content from a texture, there is no need to rescale them.
Another important thing to consider is that, when you render a full-screen quad into a partial viewport, you cannot rely on standard UV addressing mechanisms such as wrap or clamp, because the texture might be bigger than the viewport. For this reason, take care when you sample pixels outside of the viewport.
#### Custom SRP specific information
SRP does not provide any shader constants by default. So, when you use RTHandles with your own SRP, you must provide these constants to your shaders yourself.
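For example, a pipeline could push the scale as a global shader constant before rendering (a minimal sketch; `_RTHandleScale` is an assumed constant name that your shaders would need to declare):
```c#
// Bind the RTHandle scale so shaders can rescale full-screen UVs.
cmd.SetGlobalVector("_RTHandleScale", RTHandles.rtHandleProperties.rtHandleScale);
```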
### Camera specific RTHandles
Most of the render textures that a rendering loop uses can be shared by all Cameras, as long as their content does not need to carry over from one frame to the next. However, some render textures need persistence. A good example of this is using the main color buffer in subsequent frames for Temporal Anti-aliasing. This means that the Camera cannot share its RTHandle with other Cameras. Most of the time, this also means that these RTHandles must be at least double-buffered (written to during the current frame, read from during the previous frame). To address this problem, the RTHandle system includes `BufferedRTHandleSystem`.
A `BufferedRTHandleSystem` is an `RTHandleSystem` that can multi-buffer RTHandles. The principle is to identify a buffer by a unique ID and provide APIs to allocate a number of instances of the same buffer then retrieve them from previous frames. These are history buffers. Usually, you must allocate one `BufferedRTHandleSystem` for each Camera. Each one owns their Camera-specific RTHandles.
Not every Camera needs history buffers. For example, if a Camera does not need Temporal Anti-aliasing, you do not need to assign a `BufferedRTHandleSystem` to it. History buffers require memory which means you can save memory by not assigning history buffers to Cameras that do not need them. Another consequence is that the system only allocates history buffers at the resolution of the Camera that the buffers are for. If the main Camera is 1920x1080 and another Camera renders in 256x256 and needs a history color buffer, the second Camera only uses a 256x256 buffer and not a 1920x1080 buffer as the non-Camera specific RTHandles would. To create an instance of a `BufferedRTHandleSystem`, see the following code sample:
```c#
BufferedRTHandleSystem m_HistoryRTSystem = new BufferedRTHandleSystem();
```
To allocate an `RTHandle` using a `BufferedRTHandleSystem`, the process is different from a normal `RTHandleSystem`:
```c#
public void AllocBuffer(int bufferId, Func<RTHandleSystem, int, RTHandle> allocator, int bufferCount);
```
The `bufferId` is a unique ID that the system uses to identify the buffer. The `allocator` is a function you provide to allocate the `RTHandles` when needed (the system does not allocate all instances upfront), and the `bufferCount` is the number of instances requested.
From there, you can retrieve each `RTHandle` by its ID and instance index like so:
```c#
public RTHandle GetFrameRT(int bufferId, int frameIndex);
```
The frame index is between zero and the number of buffers minus one. Zero always represents the current frame buffer, one the previous frame buffer, two the one before that, and so on.
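Put together, allocating and fetching a double-buffered color history might look like the following sketch (the buffer ID, format, and names are illustrative):
```c#
// A minimal sketch: double-buffered, per-camera color history.
const int k_ColorHistory = 0; // illustrative unique buffer ID
m_HistoryRTSystem.AllocBuffer(
    k_ColorHistory,
    (system, index) => system.Alloc(Vector2.one, colorFormat: GraphicsFormat.R16G16B16A16_SFloat,
        dimension: TextureDimension.Tex2D, name: $"ColorHistory{index}"),
    2); // two instances: current and previous frame
RTHandle currentColor = m_HistoryRTSystem.GetFrameRT(k_ColorHistory, 0);  // this frame
RTHandle previousColor = m_HistoryRTSystem.GetFrameRT(k_ColorHistory, 1); // last frame
```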
To release a buffered RTHandle, call the `ReleaseBuffer` function on the `BufferedRTHandleSystem`, passing in the ID of the buffer to release:
```c#
public void ReleaseBuffer(int bufferId);
```
In the same way that you provide the reference size for regular `RTHandleSystems`, you must do this for each instance of `BufferedRTHandleSystem`.
```c#
public void SwapAndSetReferenceSize(int width, int height);
```
This works the same way as with a regular `RTHandleSystem`, but it also swaps the buffers internally so that index 0 in `GetFrameRT` still references the current frame buffer. This slightly different way of handling Camera-specific buffers also has implications when you write shader code.
With a multi-buffered approach like this, `RTHandles` from a previous frame might have a different size to the one from the current frame. For example, this can happen with dynamic resolution or even when you resize the window in the Editor. This means that when you access a buffered `RTHandle` from a previous frame, you must scale it accordingly. The scale Unity uses to do this is in `RTHandleProperties.rtHandleScale.zw`. Unity uses this in exactly the same way as `xy` for regular RTHandles. This is also the reason why `RTHandleProperties` contains the viewport and resolution of the previous frame. It can be useful when doing computation with history buffers.
### Dynamic Resolution
One of the byproducts of the RTHandle System design is that you can also use it to simulate software dynamic resolution. Because the current resolution of the Camera is not directly correlated to the actual render texture objects, you can provide any resolution you want at the beginning of the frame and all render textures scale accordingly.
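For example, a sketch of scaling the reference size before a Camera renders, assuming a `dynamicResScale` factor your pipeline computes each frame:
```c#
// Shrinking the reference size never reallocates; it only changes the viewports
// that scaled RTHandles use from now on.
int scaledWidth = Mathf.RoundToInt(camera.pixelWidth * dynamicResScale);
int scaledHeight = Mathf.RoundToInt(camera.pixelHeight * dynamicResScale);
RTHandles.SetReferenceSize(scaledWidth, scaledHeight);
```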
### Reset Reference Size
Sometimes, you might need to render to a higher resolution than normal for a short period of time. If your application does not require this resolution anymore, the additional memory allocated is wasted. To avoid that, you can reset the current maximum resolution of an `RTHandleSystem` like so:
```c#
RTHandles.ResetReferenceSize(newWidth, newHeight);
```
This forces the RTHandle system to reallocate all RTHandles to the new provided size. This is the only way to shrink the size of `RTHandles`.
# The RTHandle system
Render target management is an important part of any render pipeline. In a complicated render pipeline where there are many interdependent render passes that use many different render textures, it is important to have a maintainable and extendable system that allows for easy memory management.
One of the biggest issues occurs when a render pipeline uses many different Cameras, each with their own resolution. For example, off-screen Cameras or real-time reflection probes. In this scenario, if the system allocated render textures independently for each Camera, the total amount of memory would increase to unmanageable levels. This is particularly bad for complex render pipelines that use many intermediate render textures. Unity can use [temporary render textures](https://docs.unity3d.com/ScriptReference/RenderTexture.GetTemporary.html), but unfortunately, they do not suit this kind of use case because temporary render textures can only reuse memory if a new render texture uses the exact same properties and resolution. This means that when rendering with two different resolutions, the total amount of memory Unity uses is the sum of all resolutions.
To solve these issues with render texture memory allocation, Unity's Scriptable Render Pipeline includes the RTHandle system. This system is an abstraction layer on top of Unity's [RenderTexture](https://docs.unity3d.com/ScriptReference/RenderTexture.html) API that handles render texture management automatically.
This section contains the following pages:
- [RTHandle system fundamentals](rthandle-system-fundamentals.md)
- [Using the RTHandle system](rthandle-system-using.md)
# What's new in SRP Core version 12 / Unity 2021.2
This page contains an overview of new features, improvements, and issues resolved in version 12 of the Core Render Pipeline package, embedded in Unity 2021.2.
## Improvements
### RTHandle System and MSAA
The RTHandle System no longer requires you to specify the number of MSAA samples at initialization time. This means that you can now set the number of samples on a per-texture basis, rather than for the whole system.
In practice, this means that the initialization APIs no longer take MSAA-related parameters, and the `Alloc` functions replace the `enableMSAA` parameter with one that lets you set the number of samples explicitly.
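A sketch of what a per-texture allocation might look like (the `msaaSamples` parameter name and the `MSAASamples` enum are assumptions based on this change, not confirmed signatures):
```c#
// A minimal sketch: request a 4x MSAA color texture at allocation time.
RTHandle msaaColor = RTHandles.Alloc(Vector2.one, colorFormat: GraphicsFormat.R8G8B8A8_UNorm,
    msaaSamples: MSAASamples.MSAA4x, name: "MSAAColorBuffer");
```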
### New API to disable runtime Rendering Debugger UI
It is now possible to disable the Rendering Debugger UI at runtime by using [DebugManager.enableRuntimeUI](https://docs.unity3d.com/Packages/com.unity.render-pipelines.core@latest/api/UnityEngine.Rendering.DebugManager.html#UnityEngine_Rendering_DebugManager_enableRuntimeUI).
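For example:
```c#
// Disable the Rendering Debugger UI at runtime via the DebugManager singleton.
UnityEngine.Rendering.DebugManager.instance.enableRuntimeUI = false;
```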
## Added
### High performance sorting algorithms in CoreUnsafeUtils
New high-performance sorting algorithms are available in the CoreUnsafeUtils helper methods. The new sorting algorithms include:
* RadixSort - ideal for very large lists, more than 512 elements.
* MergeSort (non-recursive) - ideal for mid-sized lists, fewer than 512 elements.
* InsertionSort - ideal for very small lists, fewer than 32 elements.
The sorting algorithms only work on `uint` elements. They include methods that support standard C# arrays, `NativeArray` objects, or raw pointers.
RadixSort and MergeSort require a support array, which you can allocate yourself or have allocated automatically via `ref` parameter passing. InsertionSort works in place and does not require support data.
These algorithms are compatible with Burst kernels when you use raw pointers or `NativeArray`. Currently, HDRP uses them to sort lights in its CPU light loop.
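A sketch of sorting a `uint` array with these helpers, assuming the signatures implied above (InsertionSort sorts in place; MergeSort takes a support array by `ref` and allocates it when null):
```c#
using UnityEngine.Rendering;

uint[] keys = { 42u, 7u, 19u, 3u, 27u };
// In-place sort, best for very small lists.
CoreUnsafeUtils.InsertionSort(keys, keys.Length);
// Merge sort; the support array is allocated automatically via the ref parameter.
uint[] support = null;
CoreUnsafeUtils.MergeSort(keys, keys.Length, ref support);
```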
# What's new in SRP Core version 13 / Unity 2022.1
This page contains an overview of new features, improvements, and issues resolved in version 13 of the Core Render Pipeline package, embedded in Unity 2022.1.
## Added
### AMD FidelityFX Super Resolution helper API - FSRUtils
This version introduces a new streamlined API for AMD FidelityFX Super Resolution (FSR). The new API is located in the static `FSRUtils` class and lets Scriptable Render Pipelines directly access, implement, and integrate the FSR upscaler easily.
For more information, see the API located in `Runtime/Utilities/FSRUtils.cs`.
# What's new in SRP Core
This section contains information about changes to SRP Core. Each page contains a list of new features and, if relevant, a list of improvements and a list of resolved issues.
The list of pages is as follows:
- [12](whats-new-12.md)
- [13](whats-new-13.md)