DaalaMeeting20150310

# Meeting 2015-03-10

Mumble:  mf4.xiph.org:64738

# Agenda

- reviews
- lapping approach data from jm, TD, (unlord?)
- quilting bugs

# Attending

azita, daala-person, derf, jack, jmspeex, TD-Linux, tmatth, unlord, xiphmont, yushin

# reviews

# lapping approaches

- x: Do we go with the Tran paper or JM's thing? It comes down to the data. Where's the data?
- td: I ran new results: apples to apples (or apples to some similar fruit). It was 16x16 max blocksize (because 32x32 doesn't work yet). I also ran PSNR-optimized because I don't have PSNR-HVS or any of the other metrics that JM has. I switched JM's to max 16x16 too. What I saw this morning is that mine does substantially worse, so I think there is still something wrong.
- d: Can you tell us which results those are?
- td: ntt-short-1 the top three. The first one is wrong, but the second one is the first one that isn't broken.
- jm: Did you compare to master?
- td: I did not do a run with master limited to 16x16 and with PSNR tuning.
- jm: There's master PSNR somewhere we could compare to.
- td: Mine seems to do worse than that one.
- jm: You are slightly better at high bitrate and worse at low bitrate. High bitrate is easier to select blocksize for with this kind of approach.
- d: Yesterday we were talking about the block sizes picked in intra frames sticking for inter frames. Is that still happening?
- td: Not sure yet. I need to run those tests again. I thought it was working, but maybe that is the problem.
- a: What percentage done are you on this task (wearing my PM hat)?
- td: 50%. (Probably 90%, but the last 10% will take half the time.)
- a: What about JM?
- jm: I have 100% of what's needed to make a decision about lapping. Once the decision is made, there will be more work if we pick my approach: merging it into the code and retuning everything. Assuming it will land, I'm about 66% of the way to landing this. Then there is another set of work to improve what I have once it's landed. Maybe 40% if you include that extra work.
- j: Is there a next set of steps for TD or just debugging?
- td: Maybe just tuning.
- jm: I find that FourPeople is a good clip for block size decision sanity.
- d: That one is easier to visually inspect for sure.
- td: My code tends to track block sizes more.
- jm: That doesn't make sense. If you skip multiple blocks it should make them a single block.
- td: The problem is that the lapping and basis are still changing, so it's more likely to still skip. I'm now undecided on this.
- d: It might help if you had some examples, like pictures.
- td: I can prepare that for tomorrow's watercooler.
- d: What work needs to be done to land JM's stuff?
- jm: I need to clean up the code. Guillaume has started cleaning it up, but more work is needed. Right now I've ended up with three different QMs that need to be combined. It's probably good to only apply one at a time.
- d: What are the three matrices?
- jm: One is to compensate for the magnitude of the basis functions (a 1xN array; the full matrix is its outer product; see the sketch after this section). The second one is the HVS matrix; I've been playing around with a few different ones, including the one from PSNR-HVS.
- d: What other ones have you been playing with for HVS?
- jm: I have four in the code right now. I've only looked at them a little bit, so I'm not claiming that I've enabled the best one. The ones I've tried are the two from the MPEG2 spec (something yushin pointed me to).
- d: There are two there, one for intra and inter.
- jm: I tried both. The other two in this second set are the one from PSNR-HVS and the one I'm using now, which is the intra MPEG2 matrix made symmetric and then averaged with the PSNR-HVS one.
- d: I thought before all this started we added interpolation because we wanted flatter matrices at lower rates.
- jm: I have a single matrix now, but I need to put back in interpolation. I don't think it's as bad as it was before, but I'm not sure exactly why. I've been focusing on making a decision about blocksize, and interpolating matrices isn't really part of that.
- d: Hmm. You did some testing originally with a flat matrix. But if you have a highly skewed matrix then it will try to flatten it by picking smaller blocks.
- jm: I'm using the same matrix at all blocksizes and interpolating them.
- d: What??
- jm: I'm using the 8x8 matrix and resampling it for different blocksizes.
- d: I am very confused. We can take this offline.
- u: How do we propose to compare these two approaches with different QMs?
- d: Unfortunately the problems cannot be separated.
- jm: That is why we originally agreed to compare on PSNR.
- d: That is a first step yes.
- jm: The real question is what would convince anyone else that we need to pick one or the other approach?
- y: I think JM's approach improves quality, but because we changed the QM, it adds another factor. If you isolate the QM change from your RDO, I think we can land it quicker.
- jm: Doing the lapping the way I do it changes all the QMs because we change all the basis functions. Once you do magnitude compensation you've changed all your matrices.
- u: At some point, shouldn't we take sample videos and figure out what QMs we really need?
- d: What are you optimizing for when you try to learn?
- u: That's what I'm asking.
- td: PSNR-HVS is the best we have.
- d: And that has a very specific QM bias built into it. I don't think any of our metrics do the right thing, in the sense that if you fed one into a numeric optimizer it would produce the right result.
- jm: Essentially if you take what's currently in master, it's a matrix that I hand optimized for PSNR-HVS.
- d: Producing that matrix seems like a process that could be automated.
- jm: Kind of hard. At some point I also factored in the other metrics. I hand tuned while still looking at the other ones.
- d: That still seems automatable. We will be playing with these for a long time, and having a tool to do this would make it a lot easier.
- jm: It's becoming less clear how we're going to make a decision here.
- d: It was never clear to me! I would like to see an apples-to-fruit comparison of the two techniques. I haven't seen any numbers from TD that he didn't think were broken. We should take a look at intra-only, monochrome, flat QM, PSNR-optimized, and AM off.
- j: Do we have this for JM's code?
- td: Yes, from smarter's patch we have togglable options that can do that.
- d: This seems like a comparison that we can actually do and get numbers that make any kind of sense.
- j: What's stopping us from doing that today?
- td: Nothing. I'm on it. I wanted to have that done this morning.
- d: Try intra-only first, I don't think you've tried that.
- jm: Intra's easier because of Haar.
- d: The point is he thinks the intra stuff is actually working.
- jm: Another consideration is how much pain it is to choose the blocksize.
- u: Don't we lose something by not using larger lapping?
- x: JM claimed that the loss was very small, and I certainly don't believe it visually, regardless of what the metrics say.
- d: It concerns me when people in IRC say that some images look worse when your metrics show huge gains.
- jm: Who and what?
- d: Fruits and not super low rate.
- jm: http://jmvalin.ca/video/fruits_master_4k.png and http://jmvalin.ca/video/fruits_rdo_4k.png give you the idea of the tradeoff.
- d: No one is ever going to encode fruits at 4k. If it's bad at 2.5bpp then who cares about 4k.
- jm: I'm not claiming this is better; it's showing the tradeoff that is happening.
- y: I noticed that the Q values for JM's branch are quite different.
- jm: Yeah, they are completely different from what's in master.
- y: I compared JM's branch at -v 35 on persimmon and master at -v 65 or 70. JM's uses less 4x4. It gives a lot of band skips in the analyzer. Those backgrounds are surprisingly represented by just one coeff in a 32x32 block.
- d: That's what you want to happen and what happens in both versions of lapping. Just that one of them looks better when you do that.
- jm: One thing I'm thinking of doing is tweaking it to give the first AC coeff a boost.
- u: Have we tried making the block size decisions the way you're doing it, and then going back and using the largest lapping possible? Do we have an RD curve for that?
- jm: We don't have a curve for that, but you've got the code.
- u: We should at least document that this is an experiment that needs to be run as well.
- j: Isn't that how the Tran paper worked?
- jm: That is how it works at the highest block level. And it does exhaustive search below that.
- j: So you're saying use JM's decisions and then maximize lapping on every edge?
- u: Right.
- td: I thought this experiment should be run as well.
- j: TD is busy so who will run this experiment, and do people think this is an experiment that needs to be run?
- u: There's little to be done, because you just do BSS as normal and then let the normal filters run. Our current code does the maximum filter size for every block edge, so all you need to do is change where you make your decision.
- d: It sounds like a simple experiment to run.
- j: I nominate Nathan.
- u: I think JM should do it.
- jm: Absolutely not.
- u: I can give it a go.
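
A minimal sketch of the two QM mechanics JM describes above: magnitude compensation as the outer product of a 1xN array with itself, and resampling the 8x8 matrix for other block sizes. The magnitude values below are made-up placeholders (not Daala's real basis magnitudes, which depend on the lapping), and the nearest-neighbor resampling is just one possible choice of interpolation.

```c
#include <stdio.h>

#define N 8

/* Hypothetical per-coefficient basis magnitudes; NOT Daala's values. */
static const double mag[N] = {
  1.00, 0.98, 0.95, 0.92, 0.90, 0.88, 0.86, 0.85
};

int main(void) {
  double qm8[N][N];
  double qm16[2*N][2*N];
  int i;
  int j;
  /* Magnitude compensation: the full NxN matrix is the outer product
     of the 1xN magnitude array with itself. */
  for (i = 0; i < N; i++) {
    for (j = 0; j < N; j++) {
      qm8[i][j] = mag[i]*mag[j];
    }
  }
  /* Resample the 8x8 matrix for a 16x16 block by index scaling
     (nearest neighbor); a real implementation might interpolate. */
  for (i = 0; i < 2*N; i++) {
    for (j = 0; j < 2*N; j++) {
      qm16[i][j] = qm8[i/2][j/2];
    }
  }
  printf("qm8[1][2]=%.4f qm16[2][4]=%.4f\n", qm8[1][2], qm16[2][4]);
  return 0;
}
```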

# Quilting

- x: We need to decide what to do about it and that can be done offline.
- u: I don't think I understand the asymmetric thing from the discussion in IRC.
- x: Did you see the code in IRC? Build it, run it, and plot the output. The basic problem is that when our ref frame is taken back down to 8 bits, that quantization means we get ringing in the lapping. It is not perfectly reversible. Maybe that's not the right word. All you need is the prefilter to see it happen.
- d: Reversible is the word I would use.
- x: The asymmetry means that the positive and negative rounding errors tend to pile up in the same places. If you have two blocks of different DC values, then you get a little bit of ringing when you go to the lower bit depth. Then if you bring in another DC block, that would be perfectly smooth, but unfortunately it's another step function, and when it goes through the lapping, the transform, and the reduced ref frame, the positive and negative rounding ends up in the same place and it's additive. And it's high-frequency content, so unless you're coding all the coeffs, at a high quantizer it will never damp out.
- d: That's not exactly true. We can always dampen high freqs if they show up.
- x: Yes, but we don't do it now.
- d: By using PVQ energy. You don't have to code all the coeffs, just the energy. But we'd rather not spend bits on that. The point is that this is inherent to the structure of the prefilters. Picking particular coeffs will not change this. One thing we can do is flip the filters in alternating frames. Another thing you can do is change the structure of the filters somewhat. If you look at the filter and ask what happens when you send the same stuff negated, you can already see the asymmetry easily in the 4x4 case. It's possible to design symmetric ones. The third thing I thought of is that when we do the quantization down to 8 bits we can round to even instead of straight rounding (see the sketch after this section). More expensive, but maybe cheaper than changing the filter structures.
- x: And there's always full depth reference.
- d: No there's not.
- x: We have full depth references when doing 10-bit, right?
- d: Yes, which is why everyone will do their anime encodes with it.
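
A minimal sketch of the round-to-even idea from the quilting discussion: when the reference frame is quantized from higher internal precision down to 8 bits, rounding ties toward even keeps the positive and negative rounding errors from systematically piling up in the same places. The function name and the 10-bit-to-8-bit example are illustrative, not Daala's actual code.

```c
#include <stdio.h>

/* Reduce a non-negative sample by 'shift' bits, rounding to nearest
   with ties going to the nearest even value. */
static int round_to_even(int val, int shift) {
  int half = 1 << (shift - 1);
  int q = (val + half) >> shift;
  /* Exact tie: clear the low bit so the result is even. */
  if ((val & ((1 << shift) - 1)) == half) q &= ~1;
  return q;
}

int main(void) {
  /* 10-bit samples reduced to 8 bits: 510 and 514 are both ties
     (127.5 and 128.5) and both round to the even value 128, so their
     rounding errors cancel instead of adding up. */
  printf("%d %d %d\n",
   round_to_even(510, 2), round_to_even(514, 2), round_to_even(518, 2));
  return 0;
}
```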

# MC lines

- d: What was that?
- x: I suspect pilot error. I went back and couldn't reproduce it. One possibility is that I screwed up with Git. I made sure that TD's patch that he told me I needed was applied. The next thing I was going to try is to go back to before that patch and see if the bug reappears.