# Meeting 2015-02-24
Mumble: mf4.xiph.org:64738
# Agenda
- reviews
- block size decision experiments
- Why are some edges not being split in intra frames?
http://jmvalin.ca/video/bsize_fixed/00000000bsize.png
- How do vert and horz prediction impact intra block decisions?
- quantization matrix experiments
- use a QM across the entire block (not per band)
- use a QM for block based on the lapping
# Attending
jack, jmspeex, smarter, td-linux, unlord, xiphmont, yushin, daala-person, MrZeus
# reviews
Nothing to report except Nathan has some backlog.
# block size decision experiments
- u: Want to coordinate what we're doing with different experiments.
- jm: I'm running only on the recursive fixed lapping branch that I have. This is what I get with the latest experiments: http://jmvalin.ca/video/bsize_fixed/
- x: Are those overlaid on the original frame?
- jm: The actual image you see is the original. One thing I've been experimenting with is doing more calibration with basis functions. What happens is that it's really hard to get exactly the same quantization for different block sizes and you don't want the block size decision to pick whatever block size is best calibrated. Also I've been having a hard time at high rate with 4x4 because AM is disabled and it has a flat QM. Picking the block size based on PSNR means it's picking 4x4 more than it should.
- y: I believe turning off all these factors is the right way. If you play with multiple factors you can hardly conclude anything.
- jm: You can't fix it first. Properly compensating for the basis functions is something you can only do with fixed lapping.
- x: That's not what we meant. There are a number of confounding factors that are making this more difficult.
- y: QM and AM are dynamic decisions and are risky to include in this experiment.
- jm: That's why the decisions I just shared are based on pure PSNR.
- y: I would be happy if you achieve a positive change in pure PSNR.
- jm: Some of the NTT stuff that I ran is already much better than master. Already a 5-10% improvement over master.
- x: We need to be comparing block size experiments to each other. You are working on something new and Thomas is trying to implement the Tran paper, and Nathan is working on his own set. We have three sets of experiments to cross compare. Turning off QM and AM....
- jm: You can't turn off QM. As soon as you change anything in the filter you change your QMs even if you don't change the numbers in the arrays.
- y: Not changing QM means as flat as possible.
- jm: There is no such thing as flat. You can make the values flat but what you're quantizing is not flat. When you change the lapping you change all the basis functions.
- x: Yes it's a joint optimization problem but let's try to make it as comparable as possible.
- jm: If you think you can separate them, then have a shot.
- u: At some point, won't we want to jointly optimize all these things when we're shipping the codec; what is the best set of constants to use for some set of lambdas?
- d: Nobody knows how to do that.
- jm: My experiments are trying to get QMs out of the way. I've tried it before and I'm trying it again: directly compensating for the scaling of the basis functions. It's not working exactly as I expected, but I'm unsure whether that's normal. Optimizing for PSNR and flattening everything, I get slightly worse PSNR when I compensate for the basis functions.
- y: Your RDO is not working at its best, right?
- jm: This is on still images. I'm trying to get intra-only working first.
- s: You're not doing RDO on intra?
- jm: Not yet.
- s: So it's normal you are not getting better PSNR.
- jm: No, what I mean is that I'm trying to have genuinely flat QMs as opposed to pretending they are flat. The last time I tried this, it was with master's lapping, and with that, scaling the basis functions only gets you so far.
- s: So you're doing it wrong.
- jm: I don't know if I'm doing it wrong or if there is some other effect I'm not seeing.
- s: Are you turning off AM?
- jm: Yes, and a flat QM.
- s: Does it have something to do with the way PVQ encodes things?
- jm: Maybe. It could be that, given that we're not operating at infinite rate, there is some deviation from the theoretically optimal flat matrices. The one encouraging thing I'm seeing is that if I look at PSNR as a function of bitrate, the PSNR difference between block sizes is smaller in the version where I compensate for the basis functions.
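(To make the compensation being discussed concrete, here is a minimal sketch, not the actual Daala code: assume mag[i] holds the magnitude of basis function i, as something like compute_basis() would report; all names are illustrative.)

```c
/* Hedged sketch: a "flat" QM is not flat once lapping changes the
 * norm of each basis function. Multiplying each coefficient by its
 * basis magnitude before a flat quantizer, and dividing it back out
 * afterwards, makes the effective quantization flat. */
#include <math.h>

static void quantize_flat(int *qcoeff, const double *coeff,
 const double *mag, int n, double q) {
  int i;
  for (i = 0; i < n; i++) {
    /* Compensate the basis magnitude, then use one flat step q. */
    qcoeff[i] = (int)lrint(coeff[i]*mag[i]/q);
  }
}

static void dequantize_flat(double *coeff, const int *qcoeff,
 const double *mag, int n, double q) {
  int i;
  for (i = 0; i < n; i++) {
    /* Undo the compensation on reconstruction. */
    coeff[i] = qcoeff[i]*q/mag[i];
  }
}
```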
- x: It looks like a strong horizontal or vertical edge right in the center of a block causes that block to not get split.
- jm: In the stuff I just posted?
- x: Yes. This doesn't look sane, so there may yet be a bug.
- jm: Pure vertical or pure horizontal edges can get away with slightly larger block sizes.
- x: The fact that the algorithm seems blind to it seems like a concern. The images don't fill me with a great deal of confidence.
- jm: Is there anything other than that? If you gave me this image without any segmentation I would also hesitate to split there.
- x: That should definitely be split.
- u: That's an experiment that needs to be run and we don't need to argue about this.
- u: One question I still have is what is a good RDO metric? Just PSNR in spatial domain?
- jm: I'm doing that as a first step, but I want to eventually do HVS and even AM in RDO.
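(For reference, the cost being tuned here is the standard Lagrangian one, with D the distortion under whichever metric is chosen, spatial-domain squared error for PSNR, R the rate in bits, and lambda the tradeoff multiplier mentioned earlier:)

```latex
J = D + \lambda R
```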
(missed some)
- jm: I'm starting to suspect this kind of issue is why most video codecs are mostly optimizing with flat QMs.
- d: It is, but you can see now why that's a bad idea right?
- jm: Why?
- d: Because if you do that, you can paint yourself into a corner where you can't design anything that's not flat.
- y: Yes. There is a long list of stuff for the second one.
- jm: Does anyone know of a variable block size encoder that has non-flat QMs?
- d: VP9
- s: H264
- jm: All of the AC is flatly quantized, right?
- d: Right.
- x: So that's JM's experiment. Nathan?
- u: The one I'm working on is a variation of what I did for intra-paint. I'm trying to do a Viterbi-style iteration: for some configuration, find a new partition that is an improvement in RDO space.
- d: Wouldn't you try to do that in a checkerboard pattern?
- u: It's not quite done done.
- x: This is using fixed lapping like jean-marc?
- u: No, variable lapping. You have to consider your 8 superblock neighbors and the RDO impact that has on them. This won't be optimal but it should converge and keep improving.
- x: Are you running into the same problems with scaling?
- u: I haven't gotten that far yet.
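(A rough sketch of the iteration Nathan describes, assuming that changing one superblock's partition only affects the RD cost of itself and its 8 neighbors; every name below is hypothetical, not Daala code:)

```c
/* Hedged sketch of the Viterbi-style refinement: sweep the grid of
 * superblocks and, for each one, try every candidate partition,
 * keeping whichever lowers the RD cost of the 3x3 neighborhood it can
 * affect. Each accepted change strictly lowers the global cost, so
 * the loop converges, though not necessarily to the optimum. */
typedef double (*nbhd_cost_fn)(const int *part, int nx, int ny,
 int sx, int sy);

void refine_partitions(int *part, int nx, int ny, int npart,
 nbhd_cost_fn cost) {
  int improved = 1;
  while (improved) {
    int sx, sy, p;
    improved = 0;
    for (sy = 0; sy < ny; sy++) {
      for (sx = 0; sx < nx; sx++) {
        int best = part[sy*nx + sx];
        double best_cost = cost(part, nx, ny, sx, sy);
        for (p = 0; p < npart; p++) {
          double c;
          part[sy*nx + sx] = p;
          c = cost(part, nx, ny, sx, sy);
          if (c < best_cost) {
            best_cost = c;
            best = p;
            improved = 1;
          }
        }
        part[sy*nx + sx] = best;
      }
    }
  }
}
```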
- jm: On the topic of comparing stuff, I believe that if we optimize for PSNR we have a chance of having a decent comparison.
- x: Fair point, but you mentioned that if you did that it gets worse.
- jm: We're talking about 1%. Compared to tuning for one thing or the other, metrics fly around by 20% one direction or the other.
- x: Compared to that it is indeed small. I'm just concerned about it becoming a more complex problem than is warranted.
- s: I think it makes sense that a flat matrix is worse, because you are compressing the coefficients less.
- jm: Maybe. You mean we're keeping more HFs than if we tilt the matrix?
- s: Yeah. If you look at how you spend your bits, you probably spend more bits on AC to get the same quality.
- jm: The effect is relatively small, so I'm still wondering whether I screwed up or not. I'm not that concerned otherwise.
- j: What about Thomas's experiment?
- t: I'm working on what the original paper does, based on Nathan's quadtree order of performing the lapping. I do the superblock-edge lapping with the smallest lapping. Then I try every possible combination of splitting inside that. So I'll pick a splitting, code it, and then wind back and determine distortion and rate.
- x: Did the Tran paper do that in raster scan order?
- t: Yes, and that's what I'm doing too. The whole point of 4pt lapping on the edges is so that superblocks don't influence other superblocks.
- u: As a last pass you just code everything normally and pick the right lapping for superblocks?
- t: Yes.
- u: That's a good idea. How does it perform?
- t: I'm actually having problems, probably bugs. I'm testing on intra because theirs worked on intra. My decisions looked wrong even though it does already work on inter, so I want to fix those first.
- x: Given how Thomas structured the code, I can't use it directly. It struck me that the Tran paper didn't use maximal borrowing on the upper and left borders.
- t: The Tran paper's scheme doesn't pick the smallest possible lapping across a whole edge; it will switch lapping in the middle of an edge. I don't do that.
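(A minimal sketch of the search Thomas describes: with the superblock-edge lapping pinned to the smallest filter, each superblock can be searched independently by recursing over every split. The cost functions below are toy placeholders, not the real distortion and rate measurements:)

```c
#include <stdio.h>

typedef struct { int x, y, size; } block;

/* Toy stand-ins; a real encoder would code the block both ways and
 * measure actual distortion and rate. */
static double rd_cost_leaf(block b, double lambda) {
  return 0.5*b.size*b.size + lambda*8.0; /* placeholder D + lambda*R */
}
static double split_flag_rate(block b) {
  (void)b;
  return 1.0; /* assume one bit signals a split */
}

/* Return the best RD cost for this block: coded whole, or split into
 * four children, recursively. */
static double rd_search(block b, double lambda, int min_size) {
  double best = rd_cost_leaf(b, lambda);
  if (b.size > min_size) {
    int h = b.size >> 1;
    block c[4] = {
      { b.x, b.y, h }, { b.x + h, b.y, h },
      { b.x, b.y + h, h }, { b.x + h, b.y + h, h }
    };
    double split = lambda*split_flag_rate(b);
    int i;
    for (i = 0; i < 4; i++) split += rd_search(c[i], lambda, min_size);
    if (split < best) best = split;
  }
  return best;
}

int main(void) {
  block sb = { 0, 0, 32 };
  printf("best RD cost for one superblock: %f\n", rd_search(sb, 0.2, 4));
  return 0;
}
```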
# small issues
- u: It looks like Jenkins builds are failing due to changes in the build scripts that enabled the DCT optimization. The BSD_SOURCE #define is breaking the build as well. I pinged Ralph.
- t: I can fix the BSD_SOURCE thing and I'll look at the DCT one.
# QMs
- jm: I've revised some previous thoughts that I had on how to do them with PVQ. I was thinking a few things backwards, and now I think we can apply a standard QM on the DCT coeffs before sending anything to PVQ and forget about the per-band thing completely.
- u: Revised in the last 10m?
- jm: No, I've been thinking this for a while. Mostly because the contrast sensitivity function for the noise should be kind of similar.
- s: How is that different from what we do now?
- jm: Right now all the bands are flat. You scale a band compared to another band, but not within the band. But I think now you can do it from outside PVQ and PVQ will preserve the right thing. For some reason, I thought the scaling would be different when I thought about this originally. Right now between two bands I use different scaling, and that is all I'm able to model. Instead I think we should scale the coeffs before PVQ and not bother with different scales per band.
- s: Is what we're doing currently any different from applying a QM before PVQ where every band uses the same coefficients?
- jm: It may be different with some effects with lambda, but conceptually it is the same.
- s: Isn't the first step then to use the same QMs, just applied before PVQ?
- jm: Something like that. I'd need to make sure there isn't some internal PVQ stuff that works around that. Of course we need to come up with equivalent QMs for different block sizes.
- s: That would solve the problem with 4x4, which is flat. So maybe we should start with these QMs before doing weird stuff with block sizes.
- u: That seems reasonable.
- jm: Yes and no. Because (missed).
- s: Oh right. Given fixed lapping like in your branch, can we generate the scaling directly instead of having magic numbers in the QMs?
- jm: There's a compute_basis and if you give it mag, it will give you the magnitude of the basis functions. What you want to do is compute the outer product of the vector by itself so you get a matrix of scaling factors.
- s: I think we should put that in the encoder to separate concerns with the optimal matrix. Right now when you're tweaking the QMs, you are tweaking both the scaling stuff and whatever you want to optimize for. It would be nice to separate that.
- jm: Absolutely. That's what I'm working on right now.
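(A minimal sketch of that last step; in Daala the 1-D magnitudes would come from compute_basis(), here mag is simply an input and the helper names are made up:)

```c
/* Hedged sketch: build the per-coefficient scaling factors as the
 * outer product of the 1-D basis magnitudes with themselves, then
 * apply them to the DCT coefficients before PVQ so that a flat
 * quantizer downstream behaves as if every basis function had unit
 * magnitude. */
void basis_scaling_matrix(double *scale, const double *mag, int n) {
  int i, j;
  for (i = 0; i < n; i++) {
    for (j = 0; j < n; j++) {
      scale[i*n + j] = mag[i]*mag[j]; /* outer product */
    }
  }
}

void apply_basis_scaling(double *coeff, const double *scale, int n) {
  int k;
  for (k = 0; k < n*n; k++) coeff[k] *= scale[k];
}
```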