DaalaMeeting20150505
# Meeting 2015-05-05

Mumble: mf4.xiph.org:64738

# Agenda

- reviews
- full precision references
- Data for CTO review
- Quantizer interpolation
- PCS 2015
- MSVC 2013 build wanted?

# Attending

unlord, TD-Linux, derf, xiphmont, yushin, tmatth, MrZeus, jmspeex

# reviews

- j: thomas assigned me 735, but it should be tim
- d: presumably this is doing what you said worked, but that's fine, I'll take a look
- j: I'll give you a link so you can compare the two predictors
- d: 731, nathan, you owe thomas a review
- u: I'll get to that today
- d: 719, you owe dan a review also
- t: what about 634, can't we land that as is?
- u: I owe you some updates, but yes, that can land

# full precision references

- x: I am testing my patches right now, but considering that tim landed 31 patches in the code I am working on, it is coming along
- t: I finished my rebase yesterday
- j: what did you rebase?
- t: my multiple reference patches
- d: are those public anywhere?
- t: yes, on my github

# status

- d: azita has asked me to go around and get status updates from everyone, as she cannot be in the meeting today. so, monty?
- x: 20% <of what task>
- x: I had originally said that would be a couple of days, but maybe a little bit more than that, as it's already been a couple of days
- d: thomas?
- t: I was going to split that into multiple tasks so that I could give you a number
- d: when is that going to be done?
- t: today?
- d: nathan?
- u: I have started
- d: yushin?
- y: I compared SATD
- d: for SATD, do you have the patches for that up somewhere?
- y: oh yeah, I have a patch but it needs to be cleaned up
- d: that is okay, I just wanted to look at a few things
- y: do you have any free time to look?
- d: I have no free time, but I want to make sure you are unblocked
- d: on my own end, the 32x32 support has landed; I was going to do some small retuning of the probabilities of the motion vector flags
- j: you mean retuning the initial probabilities?
- d: or even figuring out if it's useful to have initial probabilities at all
- j: I think it may make a tiny difference on akiyo and other things

# Data for CTO review

- d: since jack is out, he was collecting data for the CTO review, which azita has now informed me is my responsibility
- d: I know that jack had asked several of you for data, specifically thomas for the comparison stuff
- t: I also know what he was running
- d: he was
- u: http://people.xiph.org/~unlord/rd_data/ntt1-fastssim.png
- j: in my case I have updated data for Haar; I fixed some issues and also learned that the blocksize switching is helping Haar, which I don't understand
- j: I also know he wanted to compare the small test set I made for screen shots with 265, unless that has been integrated
- u: do we have the same parameters in rd_collect_x265.sh as we do on AWCY?
- t: I believe I normalized them
- d: okay, so thomas, you know what you need to do there?
- t: yep
- d: I'll probably talk a bit more with you offline, jm, about what you and jack wanted to show
- d: the other thing that EKR wants to look at is actual visual comparisons
- j: crazy

# quantizer interpolation

- d: so before we do that, I want to unmuck our quantizer situation from when we landed fixed lapping
- j: which aspects?
- d: at least two: (1) the fact that we no longer have quantizer interpolation, and (2) the fact that chroma is broken
- j: yes, those two
- j: nothing that we have now is bad, but we need to add more stuff
- d: yes, but there are lots of places where things are bad; we have band structures where we are doing things by coefficients
- j: do you want to discuss this now?
- d: I have been looking at how I would actually want to do this, and I don't think the per band interpolation that is in there is what we want to do
- j: yes and no, it is not all we want to do, but we do want to do that one
- j: there are a bunch of things that changed meaning that are still needed for other reasons, e.g.
the per band control is how we do quantization and activity masking
- d: I had been grouping this into two pieces, interpolating the shape of the quantization matrix and the magnitude, and you can think about this per band or per coefficient block
- d: the normal way people do this is they have one shape and then scale that shape linearly for every quantizer; we know this is stupid, and that you want things much more skewed at high rates than at low rates, when you want things much flatter
- d: the way we did this with theora is that we had a mapping
- j: now I understand, so the magnitude is mostly useful for balancing chroma v. luma, but otherwise it's not that meaningful
- j: if you don't interpolate (if you only had luma), then things wouldn't move exactly like you want to, but it wouldn't be that bad
- d: that doesn't turn out to be true; if you look at what the math tells you, it says that to interpolate between two QMs that have the same magnitude you want to use weighted harmonic interpolation
- d: it worked really well for us in theora
- j: the one that I had to do last time, but that I had to remove for fixed lapping, was the geometric average, which had one nice property: assuming that the points you have in the input matrix make sense, it can never cause you to have a non-monotonic quantizer as a function of the bit rate, and it is the only way you can achieve that
- d: actually that's not true
- j: yeah, it is true
- d: no it's not, I did all the math yesterday
- j: true in that if you have all your quantizers set up at -v 10 so that a certain band gets this quantizer, and at -v 100 things are very skewed, it is the only interpolation that will give you a flat line
- d: so that's not actually a case I care about
- j: it is very extreme, but it's nice to have this property, and if you have enough points in your interpolation you don't care about the interpolation method
- d: so you could sample this at some points, but that doesn't tell you where you should be sampling
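The two interpolation schemes being argued over can be sketched in a few lines. This is hypothetical illustration code, not anything from the Daala tree: linear interpolation moves each band's quantizer by a constant difference per step, while the geometric average jm describes moves it by a constant ratio per step (linear in log of the quantizer), with `t` as the interpolation variable between the two endpoint matrices.

```python
# Hypothetical sketch (not Daala source) of interpolating between a
# near-flat low-quantizer matrix and a skewed high-quantizer one.
# t runs from 0 (q_lo endpoint) to 1 (q_hi endpoint).

def lerp_qm(q_lo, q_hi, t):
    # linear: constant *difference* per step
    return [(1 - t) * a + t * b for a, b in zip(q_lo, q_hi)]

def geometric_qm(q_lo, q_hi, t):
    # geometric average: constant *ratio* per step, i.e. linear in log(q),
    # so each band's quantizer moves monotonically between its endpoints
    return [a ** (1 - t) * b ** t for a, b in zip(q_lo, q_hi)]

# example endpoints: flat shape at high quality, skewed shape at low quality
q_lo = [8.0, 8.0, 8.0, 8.0]
q_hi = [16.0, 32.0, 64.0, 128.0]

mid_geo = geometric_qm(q_lo, q_hi, 0.5)  # [~11.31, 16.0, ~22.63, 32.0]
```

At the midpoint the geometric version gives each band the geometric mean of its endpoints, so the skew itself grows gradually instead of being a straight-line blend of a flat and a skewed matrix.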
- d: the thing is, we actually know what we want now, so why don't we just do the right thing
- j: I disagree that we want the inverse harmonic mean
- y: if you see some wrong quality as a result of an issue with the interpolation, I would think the real reason is adaptive quantization itself, not the interpolation method
- d: I don't think that is going to be that big of an issue; just because of the range of different quantizers we use, it isn't going to change much, particularly with activity masking, since we don't use a different shape quantization matrix with different AM
- d: one way to think about this is that when you apply activity masking you change the scale of the quantization matrix, but you don't change the shape at all
- j: nope
- d: but you could
- d: inside a single band at a single block size, but for a different gain, you could change the quantization matrix based on activity masking
- d: it basically has nothing to do with coding efficiency
- d: I am talking about, when you want to use larger or smaller weights, do you also change how they are skewed between large and smaller quantizers
- d: I want less skewing with small quantizers, essentially at low rates you want to optimize for PSNR
- d: currently we have one skewed matrix that we use for every rate
- y: mostly for high bit rate
- d: currently we have one QM and one skew for all rates
- j: we need to redo all the interpolation we did
- d: and yet all of the code is still in active use
- u: what?
- d: it still gets called
- d: geometric averaging is absolutely what you want to do for the scales; I'm not sure it's what you want to do for the shape. I looked at a few things yesterday
- j: scale of the entire DCT v.
per band, we had some balancing between the chroma and luma, but that also got removed with the fixed lapping
- d: there is only one interpolation function in the code, jm
- j: yeah, and it interpolated two things, it interpolated a scale and per band data
- d: my point is that we can decouple them, and you can choose the rate at which you interpolate between two shapes entirely independently of the rate at which you interpolate between two scales
- j: yeah
- d: and the natural question is, if I am doing harmonic interpolation between two scales, at what rate
- d: I wanted to see what it would take to get the same behavior from harmonic interpolation: if I am using harmonic interpolation between two flat matrices of different scales, then I solved for the speed at which you have to move between them to get the same result as doing geometric interpolation on the scales directly
- j: define speed
- d: you have an interpolation function that takes a parameter between 0 and 1
- j: the interpolation variable?
- d: yes; because I have these decoupled, saying I am such and such a distance between the scales, the scales have one of these and the shapes have another
- y: scale here means what?
- d: you take a matrix and multiply it by some value; the matrix defines the shape and the value is the scale
- d: so you have two variables between 0 and 1, one tells you how far you are between the two in scale and one in shape.
you can take them to be the same, but there is no reason for them to be the same
- j: the property I want is not there if these are not the same
- d: right, I don't care about that property
- j: I assumed that for your harmonic thing you want your x to be geometric and your y to be harmonic
- d: yes, that is true, but the difference is that the harmonic interpolation is interpolating between two matrices that have the same average quantizer and the
- d: let me continue, jm; like I said, I solved for the speed and it turned out to be exactly the wrong thing
- j: well, I didn't expect that it would work
- d: it produced matrices that get flat very quickly; I can write a function that makes this not happen
- d: the conclusion is that the two interpolations need to be decoupled, but it isn't clear if we want something faster or slower
- d: in any case, one of the other things I looked at is exactly this concern that jm had, that if things are highly skewed you can get things that are non-monotonic
- d: I was able to compute a bound that tells you exactly how skewed you can make your matrices before you have this property, so you can just clamp it
- j: ugly
- d: whatever, it works
- d: I looked at the curves for this stuff, and even when you are on the edge it looks fine
- j: you have a derivative that goes to zero
- d: yes, but it goes to zero right on the edge
- j: in any case, so far you have been talking about interpolating 2 things and I think we need to interpolate 3
- d: what are the things?
- j: the interpolation matrix, the scale between luma and chroma, and the PVQ band
- d: as far as I can tell, with the PVQ band all you are doing is rate matching
- j: right now yes, but hopefully we can do something smarter
- d: like what?
- j: I don't know yet, I was hoping to look at the images and do some tuning
- d: well, we have even less time now, so matching the rate is fine; I was looking at it as matching the scale on a smaller scale
- j: you could actually keep everything the way it is now and just add to the shape of the quantization matrices
- d: yeah, that is what I was looking at now; I am hoping to have something done by the end of the day, the question is who is going to review them
- j: I can do it

# PCS 2015

- d: we have 10 days to submit something to a picture coding challenge
- x: we should look at daala v. BPG and see if we are still beating them
- y: we give code?
- d: we give them an executable
- t: they also have a motion picture challenge, I hear we have good prediction
- d: it's better this week than it was last week

# MSVC 2013 builds

- d: I don't know who added this
- t: is it MrZeus? [it was.]
- d: I think the answer is YES, a 2013 MSVC build is wanted

# anything else

- j: I am trying to play with adaptively enabling the Haar stuff on images that could be photographic and non-photographic, including turning on and off lapping per edge
- t: that isn't enough information for me to have a conversation with you
- d: I don't have any insights to help you on your lapping v. no lapping
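The decoupling derf argues for in the quantizer interpolation section can be sketched as follows. All names here are hypothetical and nothing is taken from the Daala source: each quantization matrix is split into a scale (its mean) and a unit-mean shape; the scales are interpolated geometrically with one variable `s`, and the shapes with weighted harmonic interpolation using a second, independent variable `h`.

```python
# Hypothetical sketch of decoupled scale/shape interpolation between two
# quantization matrices q0 and q1 (represented as flat lists of per-band
# quantizers). s and h are independent interpolation variables in [0, 1].

def mean(v):
    return sum(v) / len(v)

def split_qm(q):
    """Split a QM into (scale, shape), where shape has unit mean."""
    m = mean(q)
    return m, [x / m for x in q]

def interp_qm(q0, q1, s, h):
    m0, sh0 = split_qm(q0)
    m1, sh1 = split_qm(q1)
    scale = m0 ** (1 - s) * m1 ** s                # geometric, on scales
    shape = [1.0 / ((1 - h) / a + h / b)           # weighted harmonic,
             for a, b in zip(sh0, sh1)]            # on normalized shapes
    return [scale * x for x in shape]
```

Setting `s == h` collapses this back to a single interpolation knob; the point made in the meeting is that there is no reason the two have to move at the same rate, and solving for the rate that makes them equivalent reportedly gave matrices that flatten too quickly.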