4

From what I understand of Vulkan, we have a render pass with its associated image resources and subpasses, and each subpass accesses those resources and may synchronise with previous subpasses to process the scene. In a 3D scene we will have some geometry, lighting, shaders and so on; we can do a depth pass, deferred shading, etc., and that can all be encapsulated in a single render pass by chaining multiple subpasses and their results together.
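
For concreteness, this is roughly the kind of single-render-pass setup I have in mind. It is only a sketch with invented names: the device and the two attachment descriptions (attachment 0 as a G-buffer target, attachment 1 as the final colour target) are assumed to be created elsewhere, and error handling is omitted.

    #include <vulkan/vulkan.h>

    // Sketch: one render pass with two chained subpasses. Subpass 0 writes a
    // single G-buffer colour attachment; subpass 1 reads it back as an input
    // attachment while writing the final image.
    VkRenderPass createDeferredRenderPass(VkDevice device,
                                          const VkAttachmentDescription attachments[2])
    {
        VkAttachmentReference gbufferWrite = { 0, VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL };
        VkAttachmentReference gbufferRead  = { 0, VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL };
        VkAttachmentReference finalWrite   = { 1, VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL };

        VkSubpassDescription subpasses[2] = {};
        subpasses[0].pipelineBindPoint    = VK_PIPELINE_BIND_POINT_GRAPHICS;
        subpasses[0].colorAttachmentCount = 1;
        subpasses[0].pColorAttachments    = &gbufferWrite;   // geometry pass writes the G-buffer

        subpasses[1].pipelineBindPoint    = VK_PIPELINE_BIND_POINT_GRAPHICS;
        subpasses[1].inputAttachmentCount = 1;
        subpasses[1].pInputAttachments    = &gbufferRead;    // lighting pass reads it back
        subpasses[1].colorAttachmentCount = 1;
        subpasses[1].pColorAttachments    = &finalWrite;

        // Chain subpass 1 after subpass 0's colour writes, per pixel region.
        VkSubpassDependency dep = {};
        dep.srcSubpass      = 0;
        dep.dstSubpass      = 1;
        dep.srcStageMask    = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;
        dep.dstStageMask    = VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT;
        dep.srcAccessMask   = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT;
        dep.dstAccessMask   = VK_ACCESS_INPUT_ATTACHMENT_READ_BIT;
        dep.dependencyFlags = VK_DEPENDENCY_BY_REGION_BIT;

        VkRenderPassCreateInfo info = {};
        info.sType           = VK_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO;
        info.attachmentCount = 2;
        info.pAttachments    = attachments;
        info.subpassCount    = 2;
        info.pSubpasses      = subpasses;
        info.dependencyCount = 1;
        info.pDependencies   = &dep;

        VkRenderPass renderPass = VK_NULL_HANDLE;
        vkCreateRenderPass(device, &info, nullptr, &renderPass);
        return renderPass;
    }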

Excluding the scenario of tiled rendering, or when we are rendering to multiple final render targets, when would we ever need multiple render passes in a 3D scene? I understand that something like post-processing or UI compositing could be implemented in a second render pass, as they need the result of the first 3D pass to present the final image. I am at a loss to think of a situation where a game engine, say, would need multiple render passes, since all of its geometry, lighting, shading, etc. can be drawn within the same render pass, and that render pass will contain all the necessary final buffers, like the final image.

Other than using multiple render passes as you might use subpasses (each render pass relying on the previous one in some way, and each containing only one subpass), what is the point?

I am very much a beginner to Vulkan and to all of these concepts so please forgive any errors in the question itself or naive notions.

FShrike
  • 205
  • 2
  • 8
  • "I understand that something like post processing or UI compositing could be implemented in a second render pass, as they need the result from the first 3d pass to present the final image." Didn't you just answer your own question? – Nicol Bolas Aug 09 '20 at 20:20
  • @NicolBolas I mentioned that in order to exclude such suggestions from potential answers - I am asking for any OTHER use cases – FShrike Aug 09 '20 at 20:21
  • 1
    But why exclude them? It's like asking, "if you ignore all of the reasons to want to go to the store, why would anyone want to go to the store?" – Nicol Bolas Aug 09 '20 at 20:22
  • @NicolBolas Do you mean to say that the uses mentioned in that quote are the only common and reasonable use cases for multiple render passes? – FShrike Aug 09 '20 at 20:23
  • @NicolBolas because I do not know if they are all the reasons for “going to the store”- which is the exact purpose of my question – FShrike Aug 09 '20 at 20:24
  • @NicolBolas To be clear, you’re saying that those ARE all the reasons? And that that is your “answer”? – FShrike Aug 09 '20 at 20:25
  • If you were asking for any use case, that might be a legitimate question for this site. But asking for some comprehensive "every possible use case for X" kind of thing isn't appropriate for Stack Exchange sites. No one person can give you every possible use for anything where the answer isn't obvious. "And that that is your “answer”?" If I wanted it to be an answer, I'd have posted it as an answer, not as a comment. – Nicol Bolas Aug 09 '20 at 20:25
  • @NicolBolas Any use case that I am unaware of would be welcome as an answer - I did not request for a comprehensive list at any point. Do you have such a use case or are you merely questioning my question? – FShrike Aug 09 '20 at 20:27
  • @NicolBolas I cannot see a way to do so on the mobile app - apologies if I’m missing an obvious option to do so – FShrike Aug 09 '20 at 20:33
  • Click the "continue this discussion" link and see where it goes. Or perhaps go to the website through your browser. – Nicol Bolas Aug 09 '20 at 20:35

1 Answer

12

Instead of listing specific use cases, I will explain the technical limitations of a render pass structure which would prevent someone from putting all of their rendering within it. The use cases naturally fall out from those limitations.

The most important is this: you cannot arbitrarily access any of the images attached to the render pass at any point during the render pass. It doesn't matter that a particular subpass doesn't use an image as a render target; within that subpass you still cannot bind it as a texture/image and read from or write to it in a shader. This restriction covers the entirety of the render pass, from start to end.

Now, there are input attachments. These are attachment usages that allow you to read from an attachment in a shader. However, they aren't textures (though they are descriptors); they have more limitations than textures. The most important limitation is that you can only read the pixel that corresponds to the current fragment's location (and you can only read them from a fragment shader). You cannot fetch data arbitrarily from the image: a particular FS invocation can only read the specific pixel that was previously written at that invocation's location.

Now, you can do some things within that limitation. You can do programmatic blending, for example. You can do deferred lighting passes, since any particular FS invocation only needs to read gbuffer data from the specific pixel for that invocation's location.
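To make that concrete, here is a rough sketch (invented names; the device, descriptor pool and G-buffer image view are assumed to exist, and cleanup/error handling is omitted) of how an input attachment is wired up on the API side, with the corresponding GLSL shown in a comment. Note that subpassLoad takes no coordinates at all; the read is implicitly the current fragment's pixel, which is exactly the "only your own pixel" restriction described above.

    #include <vulkan/vulkan.h>

    // Sketch: descriptor plumbing for an input attachment. Unlike a sampled
    // texture, it uses the dedicated VK_DESCRIPTOR_TYPE_INPUT_ATTACHMENT type,
    // and on the GLSL side it is read with subpassLoad(), e.g.:
    //
    //   layout(input_attachment_index = 0, set = 0, binding = 0)
    //       uniform subpassInput uGBuffer;
    //   ...
    //   vec4 albedo = subpassLoad(uGBuffer);   // always this fragment's pixel
    //
    VkDescriptorSet makeInputAttachmentDescriptor(VkDevice device,
                                                  VkDescriptorPool pool,
                                                  VkImageView gbufferView)
    {
        VkDescriptorSetLayoutBinding binding = {};
        binding.binding         = 0;
        binding.descriptorType  = VK_DESCRIPTOR_TYPE_INPUT_ATTACHMENT;
        binding.descriptorCount = 1;
        binding.stageFlags      = VK_SHADER_STAGE_FRAGMENT_BIT;  // FS only

        VkDescriptorSetLayoutCreateInfo layoutInfo = {};
        layoutInfo.sType        = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO;
        layoutInfo.bindingCount = 1;
        layoutInfo.pBindings    = &binding;

        VkDescriptorSetLayout layout = VK_NULL_HANDLE;  // destroyed elsewhere in real code
        vkCreateDescriptorSetLayout(device, &layoutInfo, nullptr, &layout);

        VkDescriptorSetAllocateInfo allocInfo = {};
        allocInfo.sType              = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_ALLOCATE_INFO;
        allocInfo.descriptorPool     = pool;
        allocInfo.descriptorSetCount = 1;
        allocInfo.pSetLayouts        = &layout;

        VkDescriptorSet set = VK_NULL_HANDLE;
        vkAllocateDescriptorSets(device, &allocInfo, &set);

        VkDescriptorImageInfo imageInfo = {};
        imageInfo.imageView   = gbufferView;
        imageInfo.imageLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;

        VkWriteDescriptorSet write = {};
        write.sType           = VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET;
        write.dstSet          = set;
        write.dstBinding      = 0;
        write.descriptorCount = 1;
        write.descriptorType  = VK_DESCRIPTOR_TYPE_INPUT_ATTACHMENT;
        write.pImageInfo      = &imageInfo;
        vkUpdateDescriptorSets(device, 1, &write, 0, nullptr);

        return set;
    }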

But the main point is this: any effect which requires arbitrarily reading from an image that was rendered to must be in a different render pass from the render pass that rendered that image.
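
As a rough sketch of what that looks like at the command-buffer level (invented names; the render passes, framebuffers, pipelines and the descriptor set that binds the scene image as a sampled texture are all assumed to be created elsewhere, with the scene pass transitioning its colour attachment to SHADER_READ_ONLY_OPTIMAL via its finalLayout plus an external dependency; clear values are omitted):

    #include <vulkan/vulkan.h>

    // Sketch: a two-render-pass frame. Pass 1 renders the scene into an
    // offscreen colour image; pass 2 samples that image at arbitrary
    // coordinates (e.g. a blur or other post-process), which is what a
    // single render pass cannot express.
    void recordFrame(VkCommandBuffer cmd,
                     VkRenderPass scenePass, VkFramebuffer sceneFb,
                     VkRenderPass postPass,  VkFramebuffer postFb,
                     VkPipeline scenePipe,   VkPipeline postPipe,
                     VkPipelineLayout postLayout, VkDescriptorSet sceneColorSet,
                     VkRect2D area)
    {
        VkRenderPassBeginInfo begin = {};
        begin.sType       = VK_STRUCTURE_TYPE_RENDER_PASS_BEGIN_INFO;
        begin.renderPass  = scenePass;
        begin.framebuffer = sceneFb;
        begin.renderArea  = area;

        // Pass 1: draw the scene into the offscreen colour attachment.
        vkCmdBeginRenderPass(cmd, &begin, VK_SUBPASS_CONTENTS_INLINE);
        vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, scenePipe);
        // ... vkCmdDraw* calls for the scene geometry ...
        vkCmdEndRenderPass(cmd);

        // Pass 2: full-screen post-process that reads the scene image at
        // arbitrary coordinates through a regular sampled-image descriptor.
        begin.renderPass  = postPass;
        begin.framebuffer = postFb;
        vkCmdBeginRenderPass(cmd, &begin, VK_SUBPASS_CONTENTS_INLINE);
        vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, postPipe);
        vkCmdBindDescriptorSets(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, postLayout,
                                0, 1, &sceneColorSet, 0, nullptr);
        vkCmdDraw(cmd, 3, 1, 0, 0);   // full-screen triangle
        vkCmdEndRenderPass(cmd);
    }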

Nicol Bolas
  • 9,762
  • 18
  • 25