Version: Unity 6 (6000.0)

ScriptableRenderContext.BeginRenderPass

Declaration

public void BeginRenderPass(int width, int height, int samples, NativeArray<AttachmentDescriptor> attachments, int depthAttachmentIndex);

Declaration

public void BeginRenderPass(int width, int height, int volumeDepth, int samples, NativeArray<AttachmentDescriptor> attachments, int depthAttachmentIndex);

Parameters

width The width of the render pass surfaces, in pixels.
height The height of the render pass surfaces, in pixels.
volumeDepth The number of slices of the render pass surfaces. The default value is 1.
samples MSAA sample count; set to 1 to disable antialiasing.
attachments The array of color attachments to use within this render pass. The values in the array are copied immediately.
depthAttachmentIndex The index of the attachment to use as the depth/stencil buffer for this render pass, or -1 to disable depth/stencil.

Description

Schedules the beginning of a new render pass. Only one render pass can be active at any time.

Render passes provide a new way to switch render targets in the context of a Scriptable Render Pipeline. In contrast to the SetRenderTargets function, a render pass specifies an explicit beginning and end of rendering, as well as explicit load/store actions on the rendering surfaces.

Render passes also allow running multiple subpasses within the same render pass, where the pixel shaders have read access to the current pixel value within the render pass. This enables efficient implementation of various rendering methods on tile-based GPUs, such as deferred rendering.
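As a minimal sketch of this lifecycle (the class, method, and attachment names below are illustrative, the draw calls are left as placeholder comments, and a ScriptableRenderContext is assumed to be available from a Scriptable Render Pipeline's render loop), a render pass with two subpasses, where the second subpass reads the output of the first at the current pixel, might look like this:

using Unity.Collections;
using UnityEngine;
using UnityEngine.Rendering;

public static class MinimalRenderPassExample
{
    public static void Render(Camera camera, ScriptableRenderContext context)
    {
        // Describe the attachments and their explicit load/store actions:
        // clear the intermediate color and depth attachments at the start of the pass,
        // and store only the output attachment (bound to the camera target) at the end.
        var color = new AttachmentDescriptor(RenderTextureFormat.ARGBHalf);
        var output = new AttachmentDescriptor(RenderTextureFormat.ARGB32);
        var depth = new AttachmentDescriptor(RenderTextureFormat.Depth);
        color.ConfigureClear(new Color(0.0f, 0.0f, 0.0f, 0.0f), 1.0f, 0);
        depth.ConfigureClear(new Color(), 1.0f, 0);
        output.ConfigureTarget(BuiltinRenderTextureType.CameraTarget, false, true);

        var attachments = new NativeArray<AttachmentDescriptor>(3, Allocator.Temp);
        const int depthIndex = 0, colorIndex = 1, outputIndex = 2;
        attachments[depthIndex] = depth;
        attachments[colorIndex] = color;
        attachments[outputIndex] = output;

        // Explicit beginning of rendering; only one render pass can be active at a time.
        context.BeginRenderPass(camera.pixelWidth, camera.pixelHeight, 1, attachments, depthIndex);
        attachments.Dispose();

        // First subpass: render into the intermediate color attachment.
        var firstColors = new NativeArray<int>(1, Allocator.Temp);
        firstColors[0] = colorIndex;
        context.BeginSubPass(firstColors);
        firstColors.Dispose();

        // ... schedule draw calls here, e.g. context.DrawRenderers(...) ...

        context.EndSubPass();

        // Second subpass: write to the camera target while reading the intermediate
        // attachment at the current pixel (via UNITY_READ_FRAMEBUFFER_INPUT in the shader).
        // The final argument marks the depth attachment as read-only for this subpass.
        var secondColors = new NativeArray<int>(1, Allocator.Temp);
        secondColors[0] = outputIndex;
        var secondInputs = new NativeArray<int>(1, Allocator.Temp);
        secondInputs[0] = colorIndex;
        context.BeginSubPass(secondColors, secondInputs, true);
        secondColors.Dispose();
        secondInputs.Dispose();

        // ... schedule a fullscreen draw that reads the input attachment ...

        context.EndSubPass();

        // Explicit end of rendering; the store/discard actions take effect here.
        context.EndRenderPass();
    }
}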

Render passes are implemented natively on Metal (iOS) and Vulkan, but the API is fully functional on all rendering backends via emulation (using legacy SetRenderTargets calls and reading the current pixel values via texel fetches).

The render pass mechanism has the following limitations:
- All attachments must have the same resolution and MSAA sample count
- The rendering results of previous subpasses are only available within the same screen-space pixel coordinate, via the UNITY_READ_FRAMEBUFFER_INPUT(x) macro in the shader; the attachments cannot be bound as textures or otherwise accessed until the render pass has ended
- iOS Metal does not allow reading from the Z-buffer, so an additional render target is needed to work around that
- The maximum number of attachments allowed per render pass is currently 8 + depth, but note that various GPUs may have stricter limits.


Additional resources: BeginSubPass, EndRenderPass, BeginScopedRenderPass, BeginScopedSubPass.
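The scoped variants can be paired with C# using statements so that the corresponding EndSubPass and EndRenderPass calls are issued automatically when the scopes are disposed. The following is only a sketch under the assumption that BeginScopedRenderPass and BeginScopedSubPass accept the same parameters as their non-scoped counterparts and return disposable scope objects:

using Unity.Collections;
using UnityEngine;
using UnityEngine.Rendering;

public static class ScopedRenderPassExample
{
    // Assumption: BeginScopedRenderPass and BeginScopedSubPass mirror the parameters of
    // BeginRenderPass and BeginSubPass and return disposable scopes that end the pass/subpass.
    public static void Render(Camera camera, ScriptableRenderContext context)
    {
        var color = new AttachmentDescriptor(RenderTextureFormat.ARGB32);
        var depth = new AttachmentDescriptor(RenderTextureFormat.Depth);
        color.ConfigureClear(new Color(0.0f, 0.0f, 0.0f, 0.0f), 1.0f, 0);
        color.ConfigureTarget(BuiltinRenderTextureType.CameraTarget, false, true);
        depth.ConfigureClear(new Color(), 1.0f, 0);

        var attachments = new NativeArray<AttachmentDescriptor>(2, Allocator.Temp);
        const int colorIndex = 0, depthIndex = 1;
        attachments[colorIndex] = color;
        attachments[depthIndex] = depth;

        var colors = new NativeArray<int>(1, Allocator.Temp);
        colors[0] = colorIndex;

        using (context.BeginScopedRenderPass(camera.pixelWidth, camera.pixelHeight, 1, attachments, depthIndex))
        using (context.BeginScopedSubPass(colors))
        {
            // ... schedule draw calls for this subpass; EndSubPass and EndRenderPass are
            // issued automatically when the scopes are disposed ...
        }

        attachments.Dispose();
        colors.Dispose();
    }
}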

A short example of how to use the render pass API within a Scriptable Render Pipeline to implement deferred rendering:

using UnityEngine;
using UnityEngine.Rendering;
using Unity.Collections;

public static class DeferredRenderer
{
    public static void ExecuteRenderLoop(Camera camera, CullingResults cullResults, ScriptableRenderContext context)
    {
        // Create the attachment descriptors. If these attachments are not specifically bound to any RenderTexture using the ConfigureTarget calls,
        // these are treated as temporary surfaces that are discarded at the end of the renderpass
        var albedo = new AttachmentDescriptor(RenderTextureFormat.ARGB32);
        var specRough = new AttachmentDescriptor(RenderTextureFormat.ARGB32);
        var normal = new AttachmentDescriptor(RenderTextureFormat.ARGB2101010);
        var emission = new AttachmentDescriptor(RenderTextureFormat.ARGBHalf);
        var depth = new AttachmentDescriptor(RenderTextureFormat.Depth);

        // At the beginning of the render pass, clear the emission buffer to all black, and the depth buffer to 1.0f
        emission.ConfigureClear(new Color(0.0f, 0.0f, 0.0f, 0.0f), 1.0f, 0);
        depth.ConfigureClear(new Color(), 1.0f, 0);

        // Bind the albedo surface to the current camera target, so the final pass will render the Scene to the screen backbuffer
        // The second argument specifies whether the existing contents of the surface need to be loaded as the initial values;
        // in our case we do not need that because we'll be clearing the attachment anyway. This saves a lot of memory
        // bandwidth on tiled GPUs.
        // The third argument specifies whether the rendering results need to be written out to memory at the end of
        // the renderpass. We need this as we'll be generating the final image there.
        // We could do this in the constructor already, but the camera target may change on the fly, esp. in the editor
        albedo.ConfigureTarget(BuiltinRenderTextureType.CameraTarget, false, true);

        // All other attachments are transient surfaces that are not stored anywhere. If the renderer allows,
        // those surfaces do not even have a memory allocated for the pixel values, saving RAM usage.

        // Start the renderpass using the given scriptable rendercontext, resolution, samplecount,
        // array of attachments that will be used within the renderpass and the depth surface
        var attachments = new NativeArray<AttachmentDescriptor>(5, Allocator.Temp);
        const int depthIndex = 0, albedoIndex = 1, specRoughIndex = 2, normalIndex = 3, emissionIndex = 4;
        attachments[depthIndex] = depth;
        attachments[albedoIndex] = albedo;
        attachments[specRoughIndex] = specRough;
        attachments[normalIndex] = normal;
        attachments[emissionIndex] = emission;
        context.BeginRenderPass(camera.pixelWidth, camera.pixelHeight, 1, 1, attachments, depthIndex);
        attachments.Dispose();

        // Start the first subpass, GBuffer creation: render to albedo, specRough, normal and emission, no need to read any input attachments
        var gbufferColors = new NativeArray<int>(4, Allocator.Temp);
        gbufferColors[0] = albedoIndex;
        gbufferColors[1] = specRoughIndex;
        gbufferColors[2] = normalIndex;
        gbufferColors[3] = emissionIndex;
        context.BeginSubPass(gbufferColors);
        gbufferColors.Dispose();

        // Render the deferred G-Buffer
        // RenderGBuffer(cullResults, camera, context);

        context.EndSubPass();

        // Second subpass, lighting: Render to the emission buffer, read from albedo, specRough, normal and depth.
        // The last parameter indicates whether the depth buffer can be bound as read-only.
        // Note that some renderers (notably iOS Metal) won't allow reading from the depth buffer while it's bound as Z-buffer,
        // so those renderers should write the Z into an additional FP32 render target manually in the pixel shader and read from it instead
        var lightingColors = new NativeArray<int>(1, Allocator.Temp);
        lightingColors[0] = emissionIndex;
        var lightingInputs = new NativeArray<int>(4, Allocator.Temp);
        lightingInputs[0] = albedoIndex;
        lightingInputs[1] = specRoughIndex;
        lightingInputs[2] = normalIndex;
        lightingInputs[3] = depthIndex;
        context.BeginSubPass(lightingColors, lightingInputs, true);
        lightingColors.Dispose();
        lightingInputs.Dispose();

        // PushGlobalShadowParams(context);
        // RenderLighting(camera, cullResults, context);

        context.EndSubPass();

        // Third subpass, tonemapping: Render to albedo (which is bound to the camera target), read from emission.
        var tonemappingColors = new NativeArray<int>(1, Allocator.Temp);
        tonemappingColors[0] = albedoIndex;
        var tonemappingInputs = new NativeArray<int>(1, Allocator.Temp);
        tonemappingInputs[0] = emissionIndex;
        context.BeginSubPass(tonemappingColors, tonemappingInputs, true);
        tonemappingColors.Dispose();
        tonemappingInputs.Dispose();

        // present frame buffer.
        // FinalPass(context);

        context.EndSubPass();

        context.EndRenderPass();
    }
}