X265 - Always Choose 10bit? [Archive] - Doom9's Forum

Neillithan - 28th December 2017, 09:38
A month ago, I was told to enable 10-bit mode, without being asked whether my input footage was 8-bit. Is there a reason to encode in 10-bit even if the input footage is 8-bit?

microchip8 - 28th December 2017, 10:54
You'd get much less banding in your encode in 10-bit; x265 still suffers too much from that when using 8-bit. It doesn't matter much what bit depth your input has (8 or 10). Do note that a lot of gear still doesn't support 10-bit yet, computers excluded. If you have a HW decoder that can deal with 10-bit, displaying it on an 8-bit display (like a TV) is not an issue. If your TV is 10-bit, even better.

Selur - 29th December 2017, 19:17
> Is there a reason to encode in 10-bit even if the input footage is 8-bit?
Pro:
1. better compression
2. no banding introduced by limited compression precision
Con:
1. higher encoding complexity -> potentially more CPU usage
2. higher decoding complexity -> potentially more CPU usage
3. not all decoders support it
Personally, I only encode in 10-bit.
Cu Selur

GZZ - 30th December 2017, 22:59
I did a test on an 8-bit TV episode encoded with x265 10-bit (preset medium, CRF 19) and compared it to an 8-bit encode (same preset and CRF), and I couldn't see any difference, but the file size came out 200 MB bigger for the 10-bit encode. Is there a video (YouTube?) that shows the difference between a 10-bit encode of an 8-bit source and an 8-bit encode of the same source?

RanmaCanada - 31st December 2017, 03:04
There are actually some video blogs that will point you in the direction you are looking. You can quickly google something like "benefits of 10 bit x265 encoding". What you will be looking for are things you can really only see in frame-by-frame comparisons, in areas where there would normally be lots of dithering, like clouds, leaves and backgrounds: information that would easily be lost. The 10-bit profile makes gradients less objectionable.

Sparktank - 31st December 2017, 10:03
> I did a test on an 8-bit TV episode encoded with x265 10-bit (preset medium, CRF 19) and compared it to an 8-bit encode (same preset and CRF), and I couldn't see any difference, but the file size came out 200 MB bigger for the 10-bit encode. Is there a video (YouTube?) that shows the difference between a 10-bit encode of an 8-bit source and an 8-bit encode of the same source?
You're better off using 2-pass with equal bitrate/settings for comparing. For comparable quality, the CRF value in 10-bit can be relaxed compared to 8-bit: where you'd use CRF 19 in 8-bit, you can go CRF 21-23 in 10-bit.

GZZ - 31st December 2017, 16:12
> You're better off using 2-pass with equal bitrate/settings for comparing. For comparable quality, the CRF value in 10-bit can be relaxed compared to 8-bit: where you'd use CRF 19 in 8-bit, you can go CRF 21-23 in 10-bit.
That is what I can't get into my head. So if I encode an 8-bit movie as 10-bit, I can use a higher CRF and still keep the same quality as if I had encoded the movie in 8-bit with CRF 19. How can that be?

Asmodian - 31st December 2017, 20:06
Why does 10-bit save bandwidth (even when content is 8-bit)? (http://x264.nl/x264/10bit_02-ateme-why_does_10bit_save_bandwidth.pdf)

Motenai Yoda - 31st December 2017, 22:33
> That is what I can't get into my head. So if I encode an 8-bit movie as 10-bit, I can use a higher CRF and still keep the same quality as if I had encoded the movie in 8-bit with CRF 19. How can that be?
10-bit usually improves compression efficiency by about 5-7%. In x264, 10-bit can avoid banding; with x265 it's a bit different: better quality, but not banding-free.
Always compare 2-pass vs 2-pass at the same bitrate.
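For illustration, the kind of like-for-like test suggested above could be run with the x265 CLI roughly as follows (a sketch only; filenames, bitrate and preset are placeholders, and an x265 build capable of 10-bit output is assumed):

x265 --input source.y4m --preset medium --bitrate 5000 --pass 1 --stats 8bit.log --output-depth 8 -o pass1_8bit.hevc
x265 --input source.y4m --preset medium --bitrate 5000 --pass 2 --stats 8bit.log --output-depth 8 -o final_8bit.hevc
x265 --input source.y4m --preset medium --bitrate 5000 --pass 1 --stats 10bit.log --output-depth 10 -o pass1_10bit.hevc
x265 --input source.y4m --preset medium --bitrate 5000 --pass 2 --stats 10bit.log --output-depth 10 -o final_10bit.hevc

Because both encodes target the same average bitrate, they land at roughly the same file size, so any visible difference between final_8bit.hevc and final_10bit.hevc comes from the bit depth rather than from CRF settling on different bitrates.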
WhatZit - 1st January 2018, 01:13
> Is there a reason to encode in 10-bit even if the input footage is 8-bit?
Ultimately, your choice of encode methodology depends entirely on how you intend to display those encodes. If you're stuck supporting 8-bit-only hardware, then you're stuck with 8-bit encoding. Can't avoid it. However, if you only ever intend to have your encodes played on new 10-bit-capable hardware, then you should unequivocally encode in 10-bit at all times, for all of the reasons that everyone has mentioned. The tangible benefits of 10-bit encoding will exist forever, but the ephemeral problems of compatibility will vanish once hardware that supports 10-bit formats becomes commonplace. Just look at the spec sheets for existing 2017 models and upcoming 2018 models. So, simple choice: are you held hostage by ageing hardware, or are you free to explore the best possible options?

Asmodian - 1st January 2018, 07:44
Sadly, 10-bit H.264 will probably never be supported by decoding hardware. :( Happily, most mobile devices will probably be able to do software decoding of 10-bit H.264, albeit while using a lot more power.

GZZ - 1st January 2018, 17:43
> Sadly, 10-bit H.264 will probably never be supported by decoding hardware. :( Happily, most mobile devices will probably be able to do software decoding of 10-bit H.264, albeit while using a lot more power.
Why use H.264 when H.265 is out? H.265 10-bit is supported by many devices.

Sparktank - 1st January 2018, 18:05
> Why use H.264 when H.265 is out? H.265 10-bit is supported by many devices.
It does, but it still needs more work. x264 is still getting updates, but you can just use a build a few versions older than the latest and not look back for archival purposes. Dollars to donuts, if it's not going to have HDR in the end result, I'd rather go with x264. It'll be some time before I upgrade any piece of hardware to natively decode HEVC without frame drops. I'm not much for standalone hardware or media players, so I use my PC more often. And I have yet to upgrade my graphics card to support the 4K HDR movies, and then I still have to upgrade my CPU to handle all that far better. But if I want to upgrade my CPU to one of the newest generations out there, I'd have to upgrade my motherboard, and if I'm going to upgrade my motherboard, I might as well get one that supports better RAM and get a new case for it as well. And maybe look into water cooling. For my personal case? No HDR? Then go x264 Hi10P. Blu-rays to Hi10P are sufficient until I up my game and start working on the 4K BDs, by which time, hopefully, x265 will be on par. And, true, it's far more improved than it was a few years ago. But, alas, I have miles and miles to go before I sleep.

birdie - 2nd January 2018, 20:14
I had quite a bad experience with re-encoding H.264 sources to HEVC with 10-bit encoding enabled: zero quality improvement, while file sizes became significantly larger (at least 5%).

jd17 - 2nd January 2018, 22:33
Sounds like unrealistic expectations to me... An encode cannot be better than its source. It might look more pleasing in some regard (e.g. smoothed gradations by means of dithering), but it will never be better. If you are getting large file sizes, you are...
a) encoding a source that is already (highly) compressed, or
b) using the wrong settings/targets, or
c) expecting too much.
:)
microchip8 - 2nd January 2018, 22:40
> I had quite a bad experience with re-encoding H.264 sources to HEVC with 10-bit encoding enabled: zero quality improvement, while file sizes became significantly larger (at least 5%).
You're doing something wrong. I get almost half the size when using x265 10-bit @ CRF 21 compared to 8-bit x264 @ CRF 18, and am hard pressed to notice a difference in quality between the two. Post your settings, if you will.

Nico8583 - 2nd January 2018, 23:29
> You're doing something wrong. I get almost half the size when using x265 10-bit @ CRF 21 compared to 8-bit x264 @ CRF 18, and am hard pressed to notice a difference in quality between the two. Post your settings, if you will.
Do you compare "x264 8-bit @ CRF 18 encoded to x265 10-bit @ CRF 21", or "H.264 source encoded to x264 8-bit @ CRF 18 vs H.264 source encoded to x265 10-bit @ CRF 21"?

microchip8 - 2nd January 2018, 23:34
> Do you compare "x264 8-bit @ CRF 18 encoded to x265 10-bit @ CRF 21", or "H.264 source encoded to x264 8-bit @ CRF 18 vs H.264 source encoded to x265 10-bit @ CRF 21"?
The latter.

Motenai Yoda - 3rd January 2018, 02:57
> It'll be some time before I upgrade any piece of hardware to natively decode HEVC without frame drops.
Your CPU should be enough to software-decode HEVC without frame drops. Moreover, H.264 10-bit hardware support doesn't exist and luckily never will.

Sparktank - 3rd January 2018, 03:16
> Your CPU should be enough to software-decode HEVC without frame drops. Moreover, H.264 10-bit hardware support doesn't exist and luckily never will.
:o Worth checking out before diving into the rabbit hole. I should start with the UHD drive first.

birdie - 3rd January 2018, 13:27
> You're doing something wrong. I get almost half the size when using x265 10-bit @ CRF 21 compared to 8-bit x264 @ CRF 18, and am hard pressed to notice a difference in quality between the two. Post your settings, if you will.
Nope, I was doing everything OK. Again, I didn't encode an uncompressed source, I re-encoded H.264 videos. I never use any fancy encoding flags. Here are my usual encoding parameters (the tune is optional):
ffmpeg -i *mkv -c:audio copy -c:v libx265 -preset veryslow -x265-params crf=18:no-sao=1:tune=grain
Again, I got negative gains from using 10-bit encoding (increased encoding time and increased file sizes with zero improvement in detail retention).

jd17 - 3rd January 2018, 19:51
> I didn't encode an uncompressed source, I re-encoded H.264 videos.
H.264 can be anything from a Blu-ray @ 35000 kbit/s down to a bad x264 encode at 2000 kbit/s. So what exactly are we talking about? Also, what kind of video were the sources you tested? 35mm with a lot of grain, or squeaky-clean digital video?
> -preset veryslow -x265-params crf=18:no-sao=1:tune=grain
This is absolute overkill. (BTW, --no-sao is part of --tune grain anyhow.) --preset veryslow plus --tune grain will just create loads of artificial grain, i.e. blow up the bitrate without a real benefit. Also, combining those two will result in insanely low encoding speeds... I would recommend either going with a fast preset (not slower than medium) when using --tune grain, or using a slow(er) preset without --tune grain, just adding --no-sao. Try going --crf 18 --preset slow --no-sao @ 10-bit on a source that has not been compressed to death and you will see the benefits.

Asmodian - 3rd January 2018, 20:44
> I re-encoded H.264 videos.
Blu-ray or already highly compressed?
> Again, I got negative gains from using 10-bit encoding (increased encoding time and increased file sizes with zero improvement in detail retention).
If you were already transparent at the bitrate obtained with your 8-bit encode and simply used the same command line with 10-bit, then, sure, you probably won't notice a difference. Compare at the same bitrates, at a point where you can notice the drop in quality, and 10-bit will look better. At bitrates where increasing the bitrate of the 8-bit encode does not result in higher quality, using 10-bit probably won't increase the quality either (unless you are getting banding at 8-bit even at high bitrates; source dependent).
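As a concrete reading of jd17's suggestion, such a test encode might look like this with the standalone x265 CLI (a sketch only; the filename is a placeholder and a 10-bit-capable x265 build is assumed):

x265 --input source.y4m --crf 18 --preset slow --no-sao --output-depth 10 -o test_10bit.hevc

The point of the comparison is to keep everything else identical and only change --output-depth between 8 and 10, on a reasonably clean, high-bitrate source.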
birdie - 3rd January 2018, 22:58
> This is absolute overkill. (BTW, --no-sao is part of --tune grain anyhow.)
I wouldn't call veryslow/crf 18/no-sao/grain overkill - it's probably a must for transparent FullHD (re)encodes. I won't argue if you're OK with losing tons of detail. I'm not. I want my re-encodes to retain exactly as much detail as the source has.
> Blu-ray or already highly compressed?
The latter.

Asmodian - 4th January 2018, 00:12
If your 8-bit encode looks exactly like the source, then of course the 10-bit one doesn't look any better. :rolleyes: The 10-bit one is higher quality, you simply cannot tell - the same reason why you aren't using CRF 17 even though that would be higher quality. To compare fairly you need to determine new "optimal" settings for the 10-bit encode, the same way you did for the 8-bit one.

jd17 - 4th January 2018, 08:32
> The latter.
Re-encoding already highly compressed sources is such a useless thing to do that I am regretting ever engaging in this discussion...
> I wouldn't call veryslow/crf 18/no-sao/grain overkill - it's probably a must for transparent FullHD (re)encodes. I won't argue if you're OK with losing tons of detail. I'm not. I want my re-encodes to retain exactly as much detail as the source has.
Your premise is flawed.
- Re-encoding an already highly compressed source is nonsense. Expecting the result to look better is even more nonsense.
- Again: --no-sao is already part of --tune grain. No use adding it to the command line.
- Starting with --preset slow, --tune grain adds --psy-rdoq 10, which generates artificial grain with relatively clean sources. The result might appear sharper to your eye, but it is not closer to the source.
- As long as you are not being precise regarding sources, any further discussion is useless.
While --tune grain can surely help with noisy 16/35/65mm sources (or those with a grain filter), even with slower presets, it will hurt other, clean digital sources. High-grain/noise sources have never been the strong suit of x265 so far. Even if the source were lossless or low-compression / high-bitrate, opting for x265 might be a mistake. Anyhow, not trying different settings is probably the best thing you can do, considering your sources are already highly compressed. So I'll rest my case now...

Nico8583 - 4th January 2018, 21:45
Does anyone already compile an FFmpeg x265 10-bit Windows build?

Asmodian - 5th January 2018, 04:11
Don't ffmpeg builds usually support the --output-depth option?

Nico8583 - 5th January 2018, 08:24
It seems not to support it:
Unrecognized option '-output-depth'.
Error splitting the argument list: Option not found

Asmodian - 5th January 2018, 11:22
Hmm, you are correct. I don't know how to set the output bit depth for libx265 in ffmpeg. I usually pipe to x265.exe from Avisynth (avs2pipemod.exe). :o
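Whether an ffmpeg build can produce 10-bit HEVC depends on how its libx265 was compiled (8-bit only vs. multi-bit-depth). One way to check what a given build's encoder accepts (shown as a sketch; the exact output varies by build):

ffmpeg -h encoder=libx265

If the "Supported pixel formats" line includes yuv420p10le, adding -pix_fmt yuv420p10le before the output file should give a 10-bit encode; if it only lists 8-bit formats, that build cannot do it, and piping to a standalone 10-bit x265.exe is the usual workaround.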
Nico8583 - 5th January 2018, 20:46
Thank you, and is it necessary to install ffdshow or LAV or anything else to use avs2pipemod?

Asmodian - 5th January 2018, 22:00
No, but you do need to get a decoding plugin for Avisynth (I use ffms2: https://github.com/FFMS/ffms2/releases) and create the decoding script. I often use a single line like FFVideoSource("00260.mkv"). Example command line:
C:\Tools\avs2pipemod.exe -y4mp=1:1 input.avs | C:\Tools\x265.exe --input - --y4m -o "output.mkv" --crf 21 -D 10 -p 8

Motenai Yoda - 5th January 2018, 22:20
LoL, just add -pix_fmt yuv420p10le after the input file.

Nico8583 - 5th January 2018, 22:35
Thank you Asmodian, I'll try it :) And -pix_fmt doesn't work, you need to compile a 10-bit build: http://www.gregwessels.com/dev/2017/10/27/ffmpeg-x265.html But I don't have all the tools to do it.

Asmodian - 5th January 2018, 22:42
C:\Tools\ffmpeg\bin\ffmpeg.exe -i input.mkv -pix_fmt yuv420p10le -c:v libx265 output.mp4:
Incompatible pixel format 'yuv420p10le' for codec 'libx265', auto-selecting format 'yuv420p'

DotJun - 6th January 2018, 06:22
Is there a reason to use ffmsindex over L-SMASH Works for a UHD source?

Neillithan - 8th January 2018, 04:29
Wow, I posted this question and completely forgot about it. Came back, and it's 2 pages long. Haha. Thanks everyone for the responses. I will read through these.
Slightly off topic: for the time being, I'd like to point out that I did about 3 dozen tests on a particular piece of sample footage (Blu-ray source), converting to HEVC (x265) CRF, 8-bit, medium preset, with the grain tune enabled. Each and every single time, it completely butchered the sample. It over-compressed and caused the faces to become a complete blurry mess. It wasn't until I started doing a 2-pass (12,000 kbps) conversion that it was able to properly allocate the bitrate. For some reason, CRF with HEVC is too problematic to be reliable for me. It simply is unable to determine what the bitrate should be for my particular sample, and decided it should be less than 500 kbps, which resulted in a horrible, artifacty mess that didn't resolve itself until the people's heads moved slightly. 2-pass did not have this problem, and produced flawless results every time. So anyway, the reason I felt the need to bring that up is that I see a *lot* of comments here saying CRF results were problematic with x265, and my theory is simply that x265 CRF misses the mark by a mile. Use 2-pass for the best results. In my experience, x264 never had this problem with CRF. And before anybody says it, yes, I'm fully aware that 2-pass is intended to be used when you are trying to meet a specific target file size, such as DVD-5 or DVD-DL. At this point, I'm simply using 2-pass because it's more reliable than CRF, and target file size does not matter to me. Only intelligent bitrate allocation matters to me.
@birdie, if you're aiming for the highest-quality downconversion from a Blu-ray source, I recommend sticking to this x265 configuration: 8-bit, 2-pass, medium preset with a fast first pass (grain tune enabled depending on whether or not your source has grain), a bitrate of at least 12,000 kbps, with a VBV maxrate of 50,000. This has been yielding very positive results for me. Please note, I have not yet tested 10-bit so I can't yet recommend it. I plan to experiment with this. As for banding artifacts, I have not yet noticed any of that in any of my encodes. I do use ffdshow tryouts and enable anti-banding for sources that do have heavy banding problems, but I haven't needed to enable this for my own encodes.
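A sketch of the 2-pass settings described above, expressed as standalone x265 CLI options (filenames are placeholders, bitrates are in kbps, --tune grain should only be added for genuinely grainy sources, and x265 needs both --vbv-maxrate and --vbv-bufsize for VBV to take effect):

x265 --input source.y4m --preset medium --no-slow-firstpass --bitrate 12000 --vbv-maxrate 50000 --vbv-bufsize 50000 --pass 1 --stats stats.log -o NUL
x265 --input source.y4m --preset medium --bitrate 12000 --vbv-maxrate 50000 --vbv-bufsize 50000 --pass 2 --stats stats.log -o final.hevc

(-o NUL discards the first-pass bitstream on Windows; use /dev/null elsewhere.)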
Asmodian - 8th January 2018, 07:08
> Is there a reason to use ffmsindex over L-SMASH Works for a UHD source?
L-SMASH had an issue with returning incorrect frames during out-of-order frame requests in specific situations, and ffmsindex works well for me. :o They are both good, but I think that index file is a good thing to have. :)

DotJun - 8th January 2018, 14:39
> L-SMASH had an issue with returning incorrect frames during out-of-order frame requests in specific situations, and ffmsindex works well for me. :o They are both good, but I think that index file is a good thing to have. :)
Has this problem not been addressed yet? I've only used L-SMASH so far and haven't had any trouble with it yet, though as you said, maybe that is due to my particular source file.

Motenai Yoda - 8th January 2018, 17:20
> Does anyone already compile an FFmpeg x265 10-bit Windows build?
x86_64: https://expirebox.com/download/fb99827459c4f77e5b6e669b50c8f246.html should stay up for 48h. media-autobuild_suite describes it as GPL 2.1, but I found only an LGPL 2.1 license...

Nico8583 - 8th January 2018, 18:38
Thank you Motenai Yoda ;)

nghiabeo20 - 2nd November 2019, 06:51
Is banding (at the same file size) the only thing that 10-bit fixes? Could it also increase sharpness? If the 8-bit encode already has smooth enough tone, what could a 10-bit one gain?

microchip8 - 2nd November 2019, 07:08
> Is banding (at the same file size) the only thing that 10-bit fixes? Could it also increase sharpness? If the 8-bit encode already has smooth enough tone, what could a 10-bit one gain?
It reduces banding and has higher quantization precision (fewer errors), which should save bits. I always encode in 10-bit even if the input is 8-bit.

Boulder - 2nd November 2019, 14:18
> It reduces banding and has higher quantization precision (fewer errors), which should save bits. I always encode in 10-bit even if the input is 8-bit.
And if you do any preprocessing before feeding the data into x265, do it at 16 bits and set --dither in x265.
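To make that tip concrete, one possible Avisynth+ workflow might look like the following (a sketch only; the filename and filter chain are placeholders, and it assumes a multi-bit-depth x265 build plus a version of avs2pipemod that can pass high-bit-depth Y4M). The script keeps the intermediate data at 16 bits, and --dither tells x265 to dither when it reduces the piped 16-bit input to the 10-bit output depth:

input.avs:
FFVideoSource("source.mkv")
ConvertBits(16)   # process at 16-bit precision
# ... denoising / debanding / other filtering here ...

avs2pipemod -y4mp=1:1 input.avs | x265 --input - --y4m --output-depth 10 --dither --crf 18 --preset slow -o out.hevc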
Selur - 4th November 2019, 18:06
As a side note: aside from missing hardware decoding support on older hardware, how about using 12-bit? :)

microchip8 - 4th November 2019, 18:27
> As a side note: aside from missing hardware decoding support on older hardware, how about using 12-bit? :)
Well, my Samsung UHD 4K Blu-ray player does not support 12-bit, only 8- and 10-bit. I don't think 12-bit is widely supported yet (maybe on very recent HW only).

FranceBB - 4th November 2019, 18:41
> Well, my Samsung UHD 4K Blu-ray player does not support 12-bit, only 8- and 10-bit. I don't think 12-bit is widely supported yet (maybe on very recent HW only).
Yep; pretty much anything that doesn't support Dolby Vision doesn't support normal x265 12-bit encoded files either. My Panasonic UHD BD player from years ago doesn't support it either. (I'm still hoping for a software update, though.) Anyway, it seems we're slowly getting there... :)

Blue_MiSfit - 4th November 2019, 20:52
Well... Dolby Vision isn't 12-bit HEVC. It's (generally) 10-bit HEVC with metadata, or a second HEVC stream, that lets it reconstruct 16-bit RGB with quality similar to 12-bit YCbCr. There are a few different ways to do this, but in the online streaming world, for on-demand playback, it's Profile 5, which means a wacky-looking 10-bit HEVC encode containing IPT (similar to but distinct from ICtCp) encoded as if it were YCbCr (and dynamically shaped from 16-bit RGB using the Dolby Vision metadata), plus the metadata as separate NALs. It's different on UHD Blu-ray (Profile 7), where there are actually two HEVC bitstreams: one that's HDR10-compatible standard YCbCr, and another HEVC stream (often at 1/4 resolution) with additional data. Combining these lets the player reconstruct the original 16-bit RGB with higher precision. https://www.dolby.com/us/en/technologies/dolby-vision/dolby-vision-profiles-levels.pdf
TL;DR: 12-bit HEVC isn't really a thing yet anywhere outside of software decoders and GPU decoders, as far as I'm aware.

kuchikirukia - 5th November 2019, 02:56
> Sadly, 10-bit H.264 will probably never be supported by decoding hardware. :(
I have a UHD BD player that will play H.264 10-bit 1080p60.

Greenhorn - 5th November 2019, 03:25
Riffing on the same topic: what about scenarios where a 10-bit file made from an 8-bit source gets fed through an 8-bit display pipeline? I'm assuming the negatives of a proper dithering wouldn't be enough to outweigh the positives of a 10-bit encode, but what about truncation?

Blue_MiSfit - 5th November 2019, 18:44
> Riffing on the same topic: what about scenarios where a 10-bit file made from an 8-bit source gets fed through an 8-bit display pipeline? I'm assuming the negatives of a proper dithering wouldn't be enough to outweigh the positives of a 10-bit encode, but what about truncation?
That's a good point. Many consumer TVs, for example, have horrendous display pipelines filled with format conversions and many bit depth conversions. Often these will produce better results with 8-bit input over HDMI even if they have a real 10-bit panel. It's worth noting that one of the major benefits of Dolby Vision is that it takes over the display and has sensibly designed pipelines. In my experience, a well-crafted Dolby Vision encode will often have less banding than a well-crafted HDR10 encode.

microchip8 - 5th November 2019, 19:29
I've disabled all the image processing I could find in the settings on my 8-bit Panasonic 1080p TV, and 10-bit still looks better than the same encode done in 8-bit (same settings for both, just the bit depth differs). I'm especially pleased that I no longer encounter banding, as I sometimes used to with 8-bit encodes. So I see only benefits from 10-bit encodes, even when displayed on 8-bit displays. At least that's my own experience here.

RanmaCanada - 6th November 2019, 05:26
> As a side note: aside from missing hardware decoding support on older hardware, how about using 12-bit? :)
I don't know of any devices other than PCs that can decode 12-bit. No Android boxes, no game consoles, no phones, no tablets, nothing outside of PCs. Do you know of any? The 12-bit encodes I've seen have been mind-blowing, but playback can only be done on my computer :(

aymanalz - 7th November 2019, 03:42
> I don't know of any devices other than PCs that can decode 12-bit. No Android boxes, no game consoles, no phones, no tablets, nothing outside of PCs. Do you know of any? The 12-bit encodes I've seen have been mind-blowing, but playback can only be done on my computer :(
Mind-blowing in what sense? How are they better than 10-bit? Less bitrate required, or better visual quality, or both? Any specific aspect of visual quality that gets improved, like 10-bit having less banding than 8-bit? I only want to play back videos on my PC or laptop (connected to the TV), so if 12-bit offers any advantage over 10-bit, I might start using that, provided encoding times don't go up like crazy.

RanmaCanada - 7th November 2019, 05:55
> Mind-blowing in what sense? How are they better than 10-bit? Less bitrate required, or better visual quality, or both? Any specific aspect of visual quality that gets improved, like 10-bit having less banding than 8-bit?
> I only want to play back videos on my PC or laptop (connected to the TV), so if 12-bit offers any advantage over 10-bit, I might start using that, provided encoding times don't go up like crazy.
The encodes I have seen and have done are, to my eyes, visually transparent to the source material, with the usual shrink in size. No banding, no dithering, and no bleeding of bright colours into dark areas or vice versa - though the last could have been achieved with filtering before encoding, as it was not my encode that I viewed. As for encode times, I will be honest and say I do not know if it increases them or not, as I no longer do 12-bit encodes because of the lack of hardware support outside of PCs. I mainly do anime, and I would honestly use 12-bit HEVC over 10-bit Hi10P for anime (only a handful of devices support it in hardware) if there were 12-bit support. My current setup is just a placeholder until the 3950 is released (if I can even get my hands on one), a measly Ryzen 2700. Once I replace it with a real CPU, I may attempt 12-bit again.
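For anyone who does want to experiment on a PC-only setup, 12-bit output follows the same pattern as 10-bit with the standalone x265 CLI, provided the binary was built with 12-bit (Main 12) support (a sketch only; the filename is a placeholder):

x265 --input source.y4m --preset slow --crf 18 --output-depth 12 -o test_12bit.hevc

Playback would then rely on a software decoder, or a GPU decoder that handles 12-bit, as discussed above.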
