The word “noise” is a funny thing in English. It usually refers to sound you don’t want to hear. But it also applies to images marred by artifacts you don’t want to see, especially in high-end visual-effects and animation projects, where cleaning up the noise in computer-generated imagery costs serious time and money.
Chipmaker Intel’s engineers were dealing with a seemingly obscure but surprisingly foundational question in creating compelling movie imagery: how to more quickly display a “good enough” version of a scene so filmmakers can decide whether to keep rendering it at top quality, or to tweak and try to improve it. It’s not a simple question, and the answer can mean a difference of millions of dollars in production costs and delays.
The challenges all start with how modern animation software creates and renders sophisticated images that truly capture the way light operates in the world, mimicking how light rays bounce off all kinds of surfaces, from stainless steel to moving water to semi-translucent skin.
“Path tracing (also known as ray tracing) is a way to create much more interesting visuals,” said Hank Driskill, Cinesite’s Montreal-based Global Head of CG for Feature Animation.
But even for powerful computers, tracking all those rays, particularly as some portion of them becomes lost or occluded from view, can send a computerized system down a technological rabbit hole. That’s where de-noising tools come in.
“What’s really dramatically changed in the last four or five years is that de-noising was always something (animators) wanted or needed to use, but using ray tracing requires Monte Carlo sampling, a randomization of your process so you don’t home in on any one particular thing,” said Jim Jeffers, Intel’s senior principal engineer and senior director, advanced rendering engines and visualization.
“Over time, you collect all the best information in the fastest possible way,” Jeffers said. “That approach has the best performance except for one caveat. It allows you to use digital techniques to determine when you’re hitting that optimal point. The caveat is you may need to take 10 million passes to get there. That takes compute time.”
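To make that caveat concrete, here is a minimal sketch (in Python, with uniform random numbers standing in for real light-path contributions) of why Monte Carlo rendering needs so many passes: each pixel is an average of random samples, and the error shrinks only with the square root of the sample count.

```python
# Illustrative only: uniform random numbers stand in for the light paths a real
# path tracer would sample. The estimate converges to the true pixel value, but
# slowly -- halving the error takes roughly four times the samples.
import numpy as np

rng = np.random.default_rng(0)
true_value = 0.5  # the radiance this pixel would converge to

for samples in (16, 256, 4096, 65536):
    estimates = rng.uniform(0.0, 1.0, size=samples)  # random path contributions
    error = abs(estimates.mean() - true_value)
    print(f"{samples:>6} samples -> error ~ {error:.4f}")
```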
As in, it might take a super-computer-level machine a week and a half to fully render a 5-second scene. That won’t work for a big movie project on a strict budget and timeline.
Getting the “light-bleed effects and the things you see in nature” is possible with ray tracing, Driskill said, “but the end result is you get a lot of noisy pixels. Eventually you can resolve it if you let (the computer) run and run and run. But if you’re making animated movies, you don’t have time to let each of those frames run for a week.”
Intel took a different approach, training an artificial intelligence algorithm to short-cut the de-noising process of cleaning up images, said Jeffers. De-noisers have been used for a few years now with fast-turnaround TV animation.
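The learning idea can be illustrated with a toy example. The sketch below is emphatically not Intel’s production de-noiser, which is a far larger network trained on real rendered frames and auxiliary buffers; it simply shows the approach of training a small model on pairs of noisy and clean images, assuming the PyTorch library is available.

```python
# Toy illustration of training a de-noiser from (noisy, clean) image pairs.
# The data here is synthetic; a studio pipeline would use low-sample renders
# paired with fully converged "ground truth" frames.
import torch
import torch.nn as nn

# A tiny convolutional model: noisy RGB in, cleaned-up RGB out.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(200):
    clean = torch.rand(8, 3, 64, 64)               # stand-in for converged renders
    noisy = clean + 0.1 * torch.randn_like(clean)  # stand-in for low-sample renders
    loss = nn.functional.l1_loss(model(noisy), clean)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```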
The de-noiser technology has been a much heavier lift for feature-length animated and visual-effects-driven projects such as The Addams Family 2, which need to produce a far higher level of image quality for display on the largest screens.
“It just turns out for this process that we apply, it just makes better images,” Jeffers said. “It can learn where to look in the image to focus on improving that part. It creates the smarts to say this area is going to be problematic.”
Jeffers said he was initially skeptical about the ability of artificial intelligence to solve the problems de-noising faced. Now he’s a convert.
“These days, the de-noiser learns the areas most likely to pay off with shortcuts that will leave the quality there,” Jeffers said. “The algorithm only picks the spots that are most likely to be noisy. The actual benefit is the knobs you can turn to get close to that (finished product) so you can make that final frame exactly what you envision it to be. That is a big goal of what my team is doing.”
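One common way to “pick the spots that are most likely to be noisy” (a standard adaptive-sampling idea, not necessarily Intel’s exact method) is to estimate per-pixel variance from the samples gathered so far and spend the next batch of samples on the noisiest pixels, as in this small illustrative sketch:

```python
# Illustrative adaptive sampling: estimate per-pixel noise from the samples
# taken so far, then give the noisiest pixels most of the next sample budget.
import numpy as np

rng = np.random.default_rng(1)
h, w, base_samples = 4, 4, 8

# Pretend the right half of the image is much harder (higher variance) than the left.
noise_level = np.where(np.arange(w) < w // 2, 0.05, 0.5)
samples = 0.5 + noise_level * rng.standard_normal((base_samples, h, w))

variance = samples.var(axis=0)            # per-pixel noise estimate
budget_share = variance / variance.sum()  # fraction of the next pass per pixel
extra_samples = np.round(budget_share * 256).astype(int)

print(extra_samples)  # the noisy right half gets nearly all of the new budget
```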
The new approach also enables a “best-guess” mode in which the AI estimates, from a single random pass instead of thousands or even millions, what the finished image will look like.
“You can see (the result) within milliseconds. Before it would be like a minute,” Jeffers said.
“If you clean those (render effects) early, you can stop the rendering process pretty early,” Driskill said.
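That workflow can be sketched as a simple loop: render a few more samples, de-noise a quick preview, and stop as soon as consecutive previews stop changing. In the illustration below a plain box blur stands in for a real AI de-noiser such as Intel’s; the early-stopping logic is the point.

```python
# Workflow sketch: accumulate rendering passes, preview a de-noised average,
# and stop early once the preview has stabilized enough for a judgment call.
import numpy as np

rng = np.random.default_rng(2)

def render_pass(h=64, w=64):
    """One noisy sample per pixel -- a stand-in for a path-tracing pass."""
    return 0.5 + 0.3 * rng.standard_normal((h, w))

def denoise(img):
    """Cheap stand-in de-noiser: average each pixel with its 3x3 neighborhood."""
    padded = np.pad(img, 1, mode="edge")
    return sum(padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
               for dy in range(3) for dx in range(3)) / 9.0

accum, previous_preview = None, None
for passes in range(1, 1001):
    frame = render_pass()
    accum = frame if accum is None else accum + frame
    preview = denoise(accum / passes)
    if previous_preview is not None and np.abs(preview - previous_preview).mean() < 1e-3:
        print(f"stopped after {passes} passes")  # good enough for an early judgment call
        break
    previous_preview = preview
```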
Intel joined the Addams Family project part-way through, offering up finished code that could be applied to the film’s images. Even that part-way involvement proved immensely helpful, Driskill said.
“Our final estimates are that we made about a 20-percent savings, part-way through the project,” Driskill said. “The goal is to cut render times in half. You can get a noisy image very quickly, then it resolves as it goes. You can get something that is certainly good for early judgment calls very quickly.”
As Cinesite uses the tool through the entire life of a project, as the AI training becomes more efficient and advanced, and as its artists become more skilled at using it, Driskill said, Cinesite might reach that ambitious goal.
As Driskill pointed out, “the nice thing about a (machine-learning)-based system like Intel’s, it will get better and better at differentiating what is important to keep and what should be thrown out.”
The human side of the process will improve too, Driskill said, as animators get better at leveraging the de-noising tool to help their creative experimentation while building a final product more quickly.
“It does change the creative thinking, not so much in the final frame, but some of what de-noiser allows,” Driskill said. “You can run the image for a short time, run the de-noiser, and then you can make some really important judgment calls along the way.”
As Driskill points out, useful technology does at least one of two things: it either makes you more efficient at what you’re already doing or helps you do something you couldn’t do before.
“In my career, more often it turns out (with new technology) that you can turn the crank six times instead of three” on a creative endeavor, Driskill said. “In practice, though, it’s a combination of both more cranks and shorter time requirements. Any time you can allow the animators to turn that crank one more time, they love it.”
Cinesite is already using the technology on other projects, including a Disney production and an animated film based on a story by Pulitzer Prize-winning cartoonist Berkeley Breathed called Hitpig, Driskill said.
Intel has also partnered with graphic-software giant Autodesk to make the de-noising technology available within Autodesk’s widely used rendering and animation tools.
Separately, Jeffers said the de-noiser tools have a broad array of uses well beyond entertainment, though that sector’s need for both high efficiency and image quality makes it a perfect fit.
“Scientific visualization is another area” where the tech has applications, Jeffers said. “We’ve also done a ton of stuff with Bentley, the Volkswagen(-owned) company, working with them to enable customers to see the cars they’re going to build for them.”
More broadly, it’s a period of increasing innovation and transformation in the visual effects world, fueled in part by huge investments in immersive experiences and the inclusive vision of a Metaverse or Omniverse of interconnected immersive spaces.
Unity, the big developer tool company best known for its wide use in game development, just bought Peter Jackson’s Weta Digital for $1.63 billion, with plans to “create a pathway for any artist in any industry … to leverage these incredible creative tools.”
And Netflix just acquired Scanline, the company that did visual effects on the streamer’s live-action remake of Cowboy Bebop and seasons 3 and 4 of Stranger Things, with plans to “invest in Scanline’s pipeline, infrastructure and workforce to advance the streamer’s virtual production.”
Scanline will continue to work with other clients besides Netflix.