How long until A.I. de-noising in Blender?

General Forums › Blender and CG Discussions

Thesonofhendrix (Thesonofhendrix) December 29, 2017, 11:23am 1

It’s the new best thing since sliced bread for rendering. They use machine learning to train a neural network to predict the final average color value of each pixel from some rough starting data, say 10 samples; the network then skips straight to the predicted final value for each pixel, resulting in no noise and a render that looks like it has gone through thousands of samples. Or at least that’s the way I understand it. Realistically, when will this feature get built into Blender?

1 Like

FreeMind (FreeMind) December 29, 2017, 2:00pm 2

When someone codes it, basically. There are probably no volunteers to do that so far. The quickest way to get something like this done is to hire a programmer.

Ace_Dragon (Ace Dragon) December 29, 2017, 5:46pm 3

Thesonofhendrix: It’s the new best thing since sliced bread for rendering. […] Realistically, when will this feature get built into Blender?

The most difficult part for the BF (in something like this) would be training the network on hundreds and hundreds of renders (in order to generate the parameters the code needs to resolve any situation and any scene you throw at it).
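To see why such a shortcut is appealing: a Monte Carlo renderer's per-pixel error only shrinks as 1/√N in the sample count N, so "thousands of samples" is the brute-force price of a clean image. A small illustrative simulation (toy numbers, not actual Cycles behaviour):

```python
import random

random.seed(7)

# A path tracer's per-pixel error falls only as 1/sqrt(N) in the sample
# count N, so going from "noisy" to "clean" by brute force costs
# thousands of samples. Toy illustration: estimate a pixel whose true
# value is 0.5 from N noisy samples and measure the typical error.

def estimate(n):
    """Mean of n noisy Monte Carlo estimates of the true value 0.5."""
    return sum(0.5 + random.uniform(-0.5, 0.5) for _ in range(n)) / n

def rms_error(n, trials=200):
    """Root-mean-square error of the n-sample estimator over many trials."""
    return (sum((estimate(n) - 0.5) ** 2 for _ in range(trials)) / trials) ** 0.5

for n in (10, 1000):
    print(f"{n:5d} samples -> RMS error ~ {rms_error(n):.4f}")
```

A denoiser tries to jump from the left end of that curve to the right end without paying for the extra samples.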
Disney has the benefit of dozens of hours of CGI movies to sort through for good training scenes; the BF does not. But you can’t overdo it either, because you then run the risk of the denoiser doing something known as over-fitting. Perhaps if Blender goes the AI route, it should keep using the feature passes both as a verification and as an assist (for when the network can’t otherwise produce a good result).

KWD (Kévin Dietrich) December 29, 2017, 9:24pm 4

Ace Dragon: The most difficult part for the BF (in something like this) would be to train the network through hundreds and hundreds of renders […]

Or you could add noise to regular images/photographs and train your algorithm (neural network, filter, etc.) on that; this is a typical machine-learning technique. For example, you can downsize an image and teach a computer to go from the downsized version back to the original, which gives you a very effective upscaler/upsampler. (But this won’t account for fireflies and other MC-renderer specialties.)

Edit: I just read the Disney paper, and they seem to have used only 600 assorted frames for their demo, so I guess the Blender Institute does have a sufficient amount of data here.

anon98372585 December 30, 2017, 4:22am 5

KWD: Or you could add noise to regular images/photographs and train your algorithm (neural network, filter, etc.) on that. […]

Data augmentation heavily depends on the task at hand.
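The corrupt-then-restore recipe KWD describes can be sketched in a few lines. Everything here is an illustrative stand-in (1-D "images", additive Gaussian noise), which is precisely the limitation raised below: it is not the noise a path tracer actually produces.

```python
import random

random.seed(0)

# Manufacture training pairs by corrupting clean images yourself, so a
# network can be shown (corrupted, clean) examples without expensive
# renders. Both corruptions below are illustrative stand-ins.

def add_gaussian_noise(img, sigma=0.1):
    """Corrupt a clean image with additive Gaussian noise (denoising pair)."""
    return [px + random.gauss(0, sigma) for px in img]

def downscale(img, factor=2):
    """Average each block of `factor` pixels (upscaling pair, 1-D for brevity)."""
    return [sum(img[i:i + factor]) / factor for i in range(0, len(img), factor)]

clean = [x / 31 for x in range(32)]      # a clean gradient "image"
pairs = [
    (add_gaussian_noise(clean), clean),  # learn: noisy -> clean
    (downscale(clean), clean),           # learn: small -> large
]
for corrupted, target in pairs:
    print(len(corrupted), "->", len(target))
```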
Path tracers produce characteristic noise patterns, and the goal of the denoiser is obviously to get rid of those. Adding random noise can help if the task is to remove dust or small scratches from images, but that is very different from the noise a path tracer produces. The standard techniques of flipping the rendered images and rotating them by 90/180/270 degrees may help to add more training data; besides that, I am not aware of further low-hanging fruit for this task.

The Disney paper contains images demonstrating some of its shortcomings. They were able to get a rich representation from those 600 frames, but there are cases for which additional data is needed. It is an amazing paper, but we should not ignore the limitations, and we should be aware that it is still going to take time to learn how to use this kind of solution in practice.

I have been spending a lot of time working on a deep-learning denoiser for Blender. I am trying to denoise just the noisy passes from Cycles. Even though the amount of data I am using is quite limited, I am able to denoise the shadow and ambient-occlusion passes surprisingly well. Those passes are clearly a lot simpler than the remaining ones, but it still shows that this approach might work. There is an overwhelming amount of work ahead, so I can’t make any promises.

texasfunk101 (Brandon Funk) December 30, 2017, 9:37am 6

The thing is, it shouldn’t be hard to do at all. You would just need quite a few 10-sample images plus their fully rendered equivalents, and feed them into a denoising autoencoder network, which would train itself to get from the noisy image to the fully rendered one. Again, the hardest part would be collecting data from a wide range of scenes.

SterlingRoth (SterlingRoth) December 30, 2017, 10:05am 7

texasfunk101: The thing is, it shouldn’t be hard to do at all.
You would just need quite a few 10-sample images plus the fully rendered equivalents, and feed them into a denoising autoencoder network, which would train itself to get from the noisy image to the fully rendered one. Again, the hardest part would be collecting data from a wide range of scenes.

The complexity of the algorithm itself should not be underestimated. The Disney paper has 9 credited authors, most of whom have PhDs in computer graphics. There is certainly some labor involved in assembling a sample set, but that’s mostly machine time; it is far and away not the hardest part. The Blender Foundation has about half a dozen shorts from which they could very easily grind out datasets: they already have the full-sample renders, and churning out 10-sample versions would be pretty quick and easy. Cranking out state-of-the-art, PhD-level code is an order of magnitude more complicated.

texasfunk101 (Brandon Funk) December 30, 2017, 10:12am 8

SterlingRoth: The complexity of the algorithm itself should not be underestimated. […]

I feel like you over-estimate how difficult the implementation is; there are already prebuilt Python libraries that facilitate deep/machine learning. Using Keras, Theano and TensorFlow, one can easily build a network from the ground up. Here is a simple example from Tanmay Bakshi.
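(The example referenced above is not reproduced here. For flavour, a minimal Keras denoising-autoencoder setup might look like the following; the layer sizes, image size, and placeholder data are all arbitrary assumptions, and a production denoiser would be far deeper and would take the feature passes as extra input channels.)

```python
# Illustrative sketch of a convolutional "noisy -> clean" regression
# network in Keras. All shapes and sizes are arbitrary assumptions.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Conv2D(32, 3, padding="same", activation="relu"),
    layers.Conv2D(32, 3, padding="same", activation="relu"),
    layers.Conv2D(1, 3, padding="same"),   # predicted clean image
])
model.build(input_shape=(None, 64, 64, 1))  # noisy single-channel input
model.compile(optimizer="adam", loss="mse")  # regress noisy -> clean

# Training would pair low-sample renders with converged references;
# random placeholder data stands in for both here.
noisy = np.random.rand(8, 64, 64, 1).astype("float32")
clean = np.clip(noisy, 0.4, 0.6)             # placeholder targets
model.fit(noisy, clean, epochs=1, verbose=0)
```

The library calls are standard Keras; the hard part, as argued above, is everything this sketch leaves out.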
Implementing it into Blender may be a bit more difficult, making the integration smooth and everything, but I don’t think it’s nearly as hard as you think. Note that I am coming from a programming background, so maybe I’m oversimplifying it. Who knows.

JohnVV (JohnVV) December 30, 2017, 10:14am 9

Personally, I use G’MIC on renders to remove noise; the heat-flow PDE is very useful for removing basic random noise. There is also a special tool for the “hot pixels” that can appear in images.

anon98372585 December 30, 2017, 10:18am 10

texasfunk101: The thing is, it shouldn’t be hard to do at all. […]

If both the noisy and the fully rendered images are available, it makes more sense to train the network on those directly. Autoencoders tend to need a lot more examples.

SterlingRoth (SterlingRoth) December 30, 2017, 10:49am 11

texasfunk101: I feel like you over-estimate how difficult implementation is […] Note that I am coming from a programming background so maybe I’m oversimplifying it. Who knows.

Well, I’m no coder, just some basic Python scripting. But when I see a team of highly trained, highly talented individuals with the backing of a multi-billion-dollar corporation, I tend to assume that it isn’t simple. I’m happy to be wrong though; you seem pretty competent, maybe you could sling together a prototype?
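A bare-bones prototype of the training loop being discussed, in pure Python with no ML libraries, might look like the sketch below. The "network" is deliberately tiny: a single linear neuron over a 3-pixel window, trained by stochastic gradient descent on (noisy, clean) pairs; real denoisers are deep convolutional networks, so this only illustrates the supervised noisy-to-clean loop, not a usable denoiser.

```python
import random

random.seed(1)

# One linear neuron that looks at a 3-pixel window of a noisy 1-D
# "image" and predicts the clean centre pixel, trained by SGD.

def make_pair(n=64, sigma=0.3):
    """A clean striped image and a noisy 'low-sample' version of it."""
    clean = [0.5 + 0.5 * ((x // 8) % 2) for x in range(n)]
    noisy = [px + random.gauss(0, sigma) for px in clean]
    return noisy, clean

def windows(img):
    """3-pixel windows around each pixel, clamped at the borders."""
    n = len(img)
    return [(img[max(i - 1, 0)], img[i], img[min(i + 1, n - 1)]) for i in range(n)]

w = [0.0, 1.0, 0.0]          # start as the identity filter
lr = 0.05
for _ in range(200):         # training epochs, fresh noise each time
    noisy, clean = make_pair()
    for x, target in zip(windows(noisy), clean):
        pred = sum(wi * xi for wi, xi in zip(w, x))
        err = pred - target
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]  # SGD step

def mse(a, b):
    return sum((p - q) ** 2 for p, q in zip(a, b)) / len(a)

# Evaluate on a fresh pair: the learned filter should beat raw noise.
noisy, clean = make_pair()
denoised = [sum(wi * xi for wi, xi in zip(w, x)) for x in windows(noisy)]
print(f"noisy MSE:    {mse(noisy, clean):.4f}")
print(f"denoised MSE: {mse(denoised, clean):.4f}")
```

Even this toy beats the raw noisy image on its striped test pattern; the gap between it and a production-quality denoiser is the point of the disagreement above.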
Ace_Dragon (Ace Dragon) December 30, 2017, 11:08am 12

KWD: Edit: just read the Disney paper, and they seemed to have just used 600 various frames for their demo […]

But do the Blender Open Movies contain enough possible cases to prevent the algorithm from failing in various situations? None of them have caustics, for instance, and I don’t know if they contain any abuse of the shader node’s normal input for things like dispersion and glinting.

SterlingRoth (SterlingRoth) December 30, 2017, 11:27am 13

Any denoising technique will introduce bias, and any biased rendering technique will have corner cases. If you want a renderer that will catch every corner case, use a pure Monte Carlo renderer and let it cook until the heat death of the universe.

Ace_Dragon (Ace Dragon) December 30, 2017, 12:20pm 14

SterlingRoth: Any denoising technique will introduce bias. […]

Why the snark? The cases I mentioned show up many times in renders posted on this board. The current denoiser at least attempts to smooth them out (with decent results, depending on how defined they are at that stage); an AI denoiser that was never trained on that kind of data may just create a blurry mess (as shown in the Disney paper when their AI tried to resolve volumetric effects that weren’t in the training set).

SterlingRoth (SterlingRoth) December 30, 2017, 12:51pm 15

Sorry for snarking; I just get tired of hearing about caustics like they are a huge stumbling block for a denoiser. Caustics are a corner case, one that is well recognized for being particularly difficult for a path tracer. If real caustics are critical for your use case, you shouldn’t be using a path tracer.
Fire up any renderer with bidirectional path tracing and get those crystal-sharp caustics. Trying to denoise path-traced caustics is a hopeless endeavor, and I highly doubt, even with a ton of caustic samples in the dataset, that an AI denoiser could interpolate 10 samples of noisy caustics (though perhaps 10 bidirectional samples could do it).

Expecting this (or any other) denoising algorithm to magically make path-traced caustics render perfectly clear is like using sunglasses to protect your eyes while welding. Yes, sunglasses do a good job of protecting your eyes from too much light, but welding is a special case, and you need special protection for that case, just like you need a special renderer for rendering caustics.

Ace_Dragon (Ace Dragon) December 30, 2017, 1:48pm 16

SterlingRoth: Trying to denoise path traced caustics is a hopeless endeavor. […]

Who says the denoising has to completely resolve the render from 10 samples? I have gone up into many thousands of samples (for most categories, using BPT) and up to 25,000 samples using regular PT, and caustic effects resolve fine (a little bit of Filter Glossy and a little bit of node trickery go a long way). If an AI-based denoiser can’t scale beyond 10 samples, then that should be a clear argument against its implementation. On that note, I hope you’re not suggesting that users shouldn’t need to go beyond 10 samples for every possible scene.
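The Filter Glossy setting mentioned above is exposed in Blender's Python API as `blur_glossy` on the Cycles render settings. A 2.79-era configuration snippet might look like this (the specific values are arbitrary; it only runs inside Blender, where the `bpy` module exists):

```python
# Illustrative Blender configuration snippet: raise Filter Glossy so
# sharp caustic paths are blurred into something a denoiser can smooth,
# at the cost of some bias. Run from Blender's Python console.
import bpy

scene = bpy.context.scene
scene.cycles.blur_glossy = 0.5  # Filter Glossy; 0.0 disables the blur
scene.cycles.samples = 2000     # caustics still need plenty of samples
```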
SterlingRoth (SterlingRoth) December 30, 2017, 3:41pm 17

Who says a path tracer is a viable tool for rendering caustics?

Dantus (Dantus) December 30, 2017, 3:50pm 18

SterlingRoth: Who says a path tracer is a viable tool for rendering caustics?

A neural network might be a tool to make a path tracer more viable for caustics. We certainly don’t know yet whether this is actually true.

Ace_Dragon (Ace Dragon) December 30, 2017, 4:00pm 19

SterlingRoth: Who says a path tracer is a viable tool for rendering caustics?

No one says that, but I have a node-group trick, combined with the use of Filter Glossy, that makes them far more visible than would otherwise be possible (at the cost of some bias). It still takes a lot of samples and a lot of time (though optimizations in the last year, and post-2.79, help greatly with sample throughput). The trick also needs the Filter Glossy parameter at 0.1 or higher; without that parameter there would be no caustics for you in most cases. Proof below (and yes, the denoiser can handle things like this; the node-group trick is not used on top in this example, though).

FG_working.jpg 915×537 114 KB

MartinZ (Martynas Žiemys) December 30, 2017, 4:30pm 20

You are all talking about the programming/learning part as the hardest thing here, as if running the actual algorithms were an everyday, regular thing. Did I miss something? So how do those neural networks work? What hardware do they run on? Is it not pointless if it runs on GPUs and CPUs instead of actual neural chips like IBM’s TrueNorth? I don’t think you can find those on eBay yet. I mean, it’s a cool thing, it’s really amazing and I am sure it will change our lives in the future, but isn’t it a bit too early to ask how soon we will be able to use neural networks for specific things, as opposed to how soon we will be able to use them at all?