Could you imagine interviewing at a major VFX facility and mentioning that you prefer compositing in sRGB space?

In the world of VFX, accurate data is king, and maintaining that data fidelity throughout your workflow plays a crucial role in final image quality.

One of the main reasons for compositing in linear space is that color math (image algebra) requires a proper “vector space” (i.e. scene-referred linear with NO negative color primaries) to maintain a predictable, photo-accurate response through the various filters, color adjustments, layer interactions, etc.

Notice I mentioned filtering. When mipmapping a texture, we are doing exactly that, so we want to make sure our image is linear *before* processing it to ensure the result is correct. This means we can either start with a linear source image and mipmap it -or- promote an 8-bit map to 16-bit half float, linearize it (because 8 bits doesn’t have enough room to hold a proper linear representation), and then mipmap it. Either option will work, but mipmapping a display referred (gammad) image and leaving it to the renderer to degamma on the fly, *at render time*, is too late — the damage cannot be undone. Your rendering equation is only as good as your least accurate data.
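For instance, here’s a minimal sketch of that second option using OpenImageIO’s Python bindings (the file names are placeholders, and the “sRGB”/“linear” color space names assume your OIIO/OCIO config defines them):

```python
# Minimal sketch: promote an 8-bit sRGB map to half float, linearize it,
# then build the mip levels from the linear data.
import OpenImageIO as oiio
from OpenImageIO import ImageBuf, ImageBufAlgo, ImageSpec

src = ImageBuf("stripes_srgb.png")  # 8-bit, display referred (placeholder name)

# Promote to 16-bit half float first -- 8 bits can't hold a linear encoding well.
spec = src.spec()
linear = ImageBuf(ImageSpec(spec.width, spec.height, spec.nchannels, oiio.HALF))
ImageBufAlgo.colorconvert(linear, src, "sRGB", "linear")

# maketx-style texture generation; the mip levels are now filtered in linear space.
ok = ImageBufAlgo.make_texture(oiio.MakeTxTexture, linear, "stripes_linear.tx")
if not ok:
    print(oiio.geterror())
```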

We’ll use this display referred image with very fine black and white stripes as our diffuse source color map to demonstrate:

othello debug color map (the diffuse source image)

When the above image is mipmapped in linear space, the black and white lines average to 0.5 as soon as the image is scaled in half. Below, the first sub-image shows proper image algebra (please keep in mind that, since the texture is now linear, the image below has been gammad back to display referred space to look correct on the web):

othello debug map scaled in linear space

(1.0 + 0.0) / 2 == 0.5


Conversely, when color maps are mipmapped outside of scene-referred linear space (i.e. with the gamma baked in, such as sRGB), bad image math ensues. The black and white lines should NOT average out to 0.21404, yet that is exactly what the renderer ends up seeing, because the filtered value of 0.5 is stored in sRGB encoding and decodes to roughly 0.214 in linear light:

othello debug map scaled in sRGB space

(1.0 + 0.0) / 2 != 0.21404
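A quick sanity check of that number with the standard sRGB decode function:

```python
# Decoding the mistakenly-averaged 0.5 through the standard sRGB transfer
# function reproduces the value the renderer actually sees.
def srgb_to_linear(c):
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

print(round(srgb_to_linear(0.5), 5))  # 0.21404 -- not the 0.5 proper math gives
```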

*** EXTRA GOTCHA: To add insult to injury, when image math errors are present in the sub-images of a mipmap, they often go unnoticed because the TOP level image appears correct! Only the sub-images show the traits of improper filtering.

  • One quick way to confirm your mipmaps is to load your mipmapped texture in OpenImageIO’s “iv” tool and cycle through the sub-images, or walk the levels with the Python bindings, as sketched below.
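Here’s a rough sketch of that check with OpenImageIO’s Python bindings (the texture path is a placeholder); every level of a properly linearized stripe texture should average out near 0.5:

```python
# Walk every MIP level of a .tx file and print its resolution and average value.
import OpenImageIO as oiio
from OpenImageIO import ImageBuf, ImageBufAlgo

TEX = "stripes_linear.tx"  # placeholder path

inp = oiio.ImageInput.open(TEX)
level = 0
while inp and inp.seek_subimage(0, level):      # subimage 0, MIP level `level`
    buf = ImageBuf(TEX, 0, level)               # read just this MIP level
    stats = ImageBufAlgo.computePixelStats(buf)
    spec = buf.spec()
    print(f"mip {level}: {spec.width}x{spec.height}  avg R = {stats.avg[0]:.5f}")
    level += 1
if inp:
    inp.close()
```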

Happy rendering!


Assuming your renderer is able to use mipmapped textures, this magical little 8K .tx map is quite useful for visualizing exactly which level of the texture is being sampled for *each* pixel being rendered. It’s great for optimizing your UVs and textures.

While at first it looks similar to a sample “heat map”, this map leverages mipmapping to show the actual resolution of the mipmapped texture being pulled by the renderer, per pixel!
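To illustrate the principle only (this is not the recipe for the actual downloadable map), here’s a hypothetical sketch of how such a texture could be assembled with OpenImageIO: author a flat, easily identifiable color per level and hand each one to maketx as an explicit MIP image. The colors, file names, level count, and reliance on maketx’s --mipimage option are all illustrative assumptions:

```python
# Hypothetical sketch: build a resolution debug texture by writing a solid
# color for each MIP level, then letting maketx assemble them. The colors and
# level count are placeholders, not the actual legend of this map.
import subprocess
import OpenImageIO as oiio
from OpenImageIO import ImageBuf, ImageBufAlgo, ImageSpec

colors = [(1, 0, 0), (1, 1, 0), (0, 1, 0), (0, 1, 1), (0, 0, 1)]  # one per level
res = 8192
level_files = []

for idx, color in enumerate(colors):
    buf = ImageBuf(ImageSpec(res, res, 3, oiio.UINT8))
    ImageBufAlgo.fill(buf, color)               # flat color for this resolution
    name = f"debug_level_{idx}.tif"
    buf.write(name)
    level_files.append(name)
    res //= 2

# Top level plus explicit per-level overrides; any remaining levels are left
# for maketx to generate by ordinary filtering.
cmd = ["maketx", level_files[0], "-o", "uv_res_debug.tx"]
for name in level_files[1:]:
    cmd += ["--mipimage", name]
subprocess.run(cmd, check=True)
```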

The color/resolution associations in this debug map are as follows:

j3p uv resolution debug map legend

For example, I’ve applied this map to a simple sphere and rendered it in Arnold, RenderMan, and VRay — it’s quite interesting to see the differences:

Arnold: render of sphere with UV resolution debug texture applied

RenderMan 21: render of sphere with UV resolution debug mipmapped texture applied

VRay: render of sphere with UV resolution debug mipmapped texture applied


Again, click here to download J3P’s UV resolution debug map.

***PLEASE NOTE: It probably goes without saying that this is a lightweight 8-bit mipmapped texture, which is permissible for utility maps and debug passes. Color maps for beauty renders should *always* be mipmapped in linear space to maintain a photo-accurate rendering workflow.

Leave a comment and let us know what you think.

Enjoy!


This is a quick follow-up to the last video showing our utility’s ability to load multiple scenes into Nuke from multiple renderers, and then preview them directly in the context of the composite.

Some advantages of this workflow over using other methods to preview renders:

1. Instead of only being able to view your render passes one at a time, this method allows artists to view all passes at the same time, in the proper context.

2. Piping preview renders directly into Nuke allows artists to start planning and building a final composite much earlier in the production process. This makes optimal use of preview renders since they are able to see how each render pass will actually be implemented in the final composite. This allows faster turnarounds, fewer renders, and more optimized use of resources (disk space, network, and render farm).

At J3P we recently took advantage of the Cortex open source libraries for VFX to develop an ‘open framebuffer’ in Nuke which allows us to port practically any renderer to a Nuke framebuffer — we then donated it back to the Cortex project. This is *not* a read node that loads an image, it’s a LIVE connection to the render stream! This means using Nuke to preview renders directly from Maya, Softimage (XSI/ICE), Houdini, etc. and eliminating the need to use their render viewers…

Taking it one step further, we built a Nuke utility to load an exported scene file into Nuke (i.e. a RIB or an Arnold scene file), detect all AOVs (including any custom AOVs), and interactively activate/deactivate AOVs to finely tune our comp on the fly as the scene preview render completes. This means more efficient compositing because we can develop a comp plan using all our render passes from square one — we only render what we need, so we save on disk space and network traffic. It also means that, as long as workstations are on the same network, we can use Nuke to aggregate preview renders from multiple packages in real time and give instant feedback to the lighters about which passes we’ll need for the final comp… pretty cool.

Pardon the spelling mistakes and lack of production polish — we were so excited about the capabilities that we wanted to get this news out there… I’m sure we’ll be cleaning this up soon. Enjoy and please email us with any questions!
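As a rough, standalone illustration of the AOV-detection idea (this is not the utility described above), here’s a naive sketch that scans an exported Arnold .ass file for quoted output declarations; the regex and the data type list are assumptions, and a real tool would use a proper scene parser:

```python
# Naive illustration only -- not the utility described above. Scans an Arnold
# .ass file for quoted output declarations of the assumed form
#   "aov_name TYPE filter_node driver_node"
# and prints the AOV names it finds.
import re
import sys

OUTPUT_RE = re.compile(r'"(\w+)\s+(?:RGBA?|FLOAT|VECTOR)\s+\S+\s+\S+"')

def list_aovs(ass_path):
    with open(ass_path) as f:
        text = f.read()
    return sorted({m.group(1) for m in OUTPUT_RE.finditer(text)})

if __name__ == "__main__":
    for aov in list_aovs(sys.argv[1]):
        print(aov)
```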

Summary:
-Creates a LIVE connection to one OR MORE 3D renderers’/3D packages’ render streams
-This is NOT a read node loading image passes from disk, it’s an open framebuffer
-Start building your comp immediately in Nuke with test render preview framebuffers.
-No need to save out images from your 3D package’s test render view.
-Launch preview renders from your 3D package directly into a Nuke framebuffer.
-Render exported scenes directly into Nuke framebuffers (RIBs, Arnold scene files, etc.).
-Preview render all AOVs simultaneously and plan which passes you’ll need.
-Automatically detect all AOVs in scene file — including custom AOVs
-Turn AOVs on and off *EVEN* if they aren’t part of the exported scene file.
-Preview CG elements from different lighters/renderers on same network simultaneously.
-Plan your renders and comps much more efficiently.
-Let lighters know exactly which passes you need before they kick off renders.
-Harness the power of different packages without interrupting workflow.
-Allows for a package agnostic pipeline — allows lighters to work the way they choose.
-Framebuffers automatically update as soon as a new preview render is kicked off
-Write out comp’d test framebuffers