DaalaMeeting20141104

# Meeting 2014-11-04

Mumble:  mf4.xiph.org:64738

# Agenda

- reviews

# Attending

unlord, derf, jmspeex, td-linux

# Reviews

d: jm, were you going to review 498?
j: that should go into a branch right?
d: sure, if we want to do that.  Someone should probably tell jack this.
j: yeah it was fine.
d: tell jack that too
u: I had the same question about the CfL PVQ tuning branch I reviewed; is that going to go into a branch?
j: yeah, all of these branches should be labeled dump something
d: thomas, any idea why 499 and 505 haven't gone in?
t: they are being blocked right now on upgrading the jenkins VM.
t: I can commit them to master
d: is that going to try to run them without the right software installed?
t: yeah
d: let's not do that

d: basically, we are applying lapping on the inside of the block for the 16x16 and then applying TF to make a 32x32 block, so what this means is that before TF your basis functions overlap. So consider, for example, just the DC coefficient: all the coefficients of your basis function are positive. When you apply TF to these 4 coefficients, one of the outputs will have all of them added up. Another will have the difference between the coefficients, and in the places where they overlap, TF will cancel out the energy of the basis functions.
d: what we hoped we'd be able to do is do something with TF where we turn off the interior lapping and use monty's second-stage TF. This still shows some blocking effects, which is why we are using an actual 32x32 DCT.
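
A minimal sketch of the 2x2 TF merge described here, assuming the usual unnormalized Walsh-Hadamard butterfly (the function name is illustrative and the scaling Daala applies is omitted):

```c
/* Combine the DC coefficients of four neighboring 16x16 blocks into the
 * four lowest-frequency coefficients of a merged 32x32 block. This is an
 * illustrative, unnormalized sketch, not Daala's actual TF code. */
static void tf_merge_dc_2x2(const int dc[4], int out[4]) {
  out[0] = dc[0] + dc[1] + dc[2] + dc[3]; /* sum of all four: the new DC */
  out[1] = dc[0] - dc[1] + dc[2] - dc[3]; /* horizontal difference */
  out[2] = dc[0] + dc[1] - dc[2] - dc[3]; /* vertical difference */
  out[3] = dc[0] - dc[1] - dc[2] + dc[3]; /* diagonal difference */
}
```

In the three difference outputs, energy from the overlapping parts of the basis functions cancels, which is the effect being described.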

u: that reminds me of the quantization of the PVQ gains based on the amount of energy that goes into the bands of each block.
d: the thing with counting all the different combinations was trying to evaluate whether we could do per-coefficient scaling, and I agree that if we fix the scaling with the 32x32 DCT like nathan is working on, we can do the scaling per band.
j: especially the DC case, I think, will be useful for Haar DC.
d: although we tried correcting it before and did not have any luck
j: yeah, because we were correcting it on the order of 5% when it was 50%, and that explains why it didn't work
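
As a sketch of the per-band scaling idea (the band count and compensation factors below are hypothetical; real values would have to be measured per transform size):

```c
#include <math.h>

/* Hypothetical per-band compensation for PVQ gain quantization: rescale
 * each band's gain by a measured factor before quantizing with step q. */
#define NBANDS 4
static const double band_scale[NBANDS] = {1.00, 0.95, 0.90, 0.85};

static int quantize_gain(double gain, int band, double q) {
  return (int)floor(gain*band_scale[band]/q + 0.5);
}
```

The point of the 5% vs. 50% remark above is that a correction like this only helps if the factors reflect the true energy mismatch.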

j: there were other things, nothing precise, but it's probably worth experimenting with how we do the lapping around the block size changes
d: okay what kinds of things were you thinking of
j: okay for example, how you lap when you have a bunch of 4x4 blocks that are adjacent to an 8x8 and how you lap the 4x4 boundary that is orthogonal to the 8x8
d: okay but *what* things did you want to experiment with
j: (1) trying to play with applying vertical and horizontal in different orders; (2) not applying some of the filters so as not to mess up larger blocks that are adjacent to smaller blocks (the example I had is four 4x4s adjacent to an 8x8: if you apply the 4-point filter between the 4x4s, there is one block boundary between the 4x4s that hits the middle of the 8x8, so there is a discontinuity); (3) modulating the filter
u: how much of a discontinuity does that actually cause in the 8x8?
j: you saw the basis functions I posted, it creates a bunch of funny things
d: yeah some of them look kind of terrible
j: at this point it's more like try messing around with a few things; if they are all within a few percent it's probably not worth doing, but if it's a lot more then it's worth doing
d: but now we have a list we can hand off to someone to look at
u: I don't know if that causes other blocking artifacts when you don't run the post filter there
j: we'll see
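
To make experiment (1) concrete, here is a sketch of the two pass orders, assuming a uniform block size; prefilter4() is a placeholder for Daala's 4-point lapping prefilter (the real kernel applies tuned lifting steps, omitted here):

```c
/* Placeholder for the 4-point lapping prefilter: filters the four samples
 * x[-2*stride], x[-stride], x[0], x[stride] straddling a block edge. */
static void prefilter4(int *x, int stride) {
  (void)x;
  (void)stride;
  /* lifting steps would go here */
}

/* One ordering: horizontal pass over all vertical block edges, then
 * vertical pass over all horizontal block edges. */
static void lap_image_hv(int *img, int w, int h, int stride, int bsize) {
  int i, j;
  for (i = 0; i < h; i++)            /* horizontal pass */
    for (j = bsize; j < w; j += bsize)
      prefilter4(&img[i*stride + j], 1);
  for (i = bsize; i < h; i += bsize) /* vertical pass */
    for (j = 0; j < w; j++)
      prefilter4(&img[i*stride + j], stride);
}
```

The other ordering swaps the two passes; with integer lifting steps and varying block sizes the result can differ, which is what the experiment would measure.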

j: for 485, my name is on it but I think monty should review it
d: okay assign it to him
j: it's already assigned to him
j: also I'm no longer sure what we should do with it, given that the HV filter is no longer providing such an improvement
u: do we want to back out the HV if it's not giving much of an improvement?
j: no, right now it's really really simple, and overall it's not doing much except in the pathological cases where it's doing really well
d: right
j: it's preventing us from looking really silly and not costing much, so I think it's worth it

d: looking at older stuff, thomas, whatever happened to andres' AVX2 patches?
t: they are still there
d: have they been checked in
t: no
d: can we fix that
t: I have a new version that is in my git somewhere; I suppose I can put that up for review
d: that seems like a good idea
t: I also have one for stieners' generic SSIM patch; the only issue is that I don't think it is used anywhere
d: I thought he had written one function to test with
d: it is better to get the framework in place so we can start using it with other stuff
d: I think if we add the multiply that all the DCTs use, we can convert everything to use this
t: yes

d: were you still playing around with trying to cap the distortion in the motion vector search?
t: yeah but I got sidetracked with trying to put multiple references into the prediction
d: I think it may be easier to support multiple reference frames where one of them is the most recent key frame
t: yeah, I can do that too; the reason I didn't is that I tried setting x265 to do that and the actual gain they got was small, so it's hard to tell if it's worth it
d: yeah fair enough
j: yeah I think starting with b-frames is easier because you can actually tell if its working
d: I don't think low-latency encoding is a priority for them. I think it took x264 five years to get around to handling that case; basically some company had to contact the devs to do it

https://people.xiph.org/~jm/daala/pvq_demo/

j: does anyone have any comments on the PVQ demo I posted
d: I also wasn't sure if you wanted to steal some images from my CELT presentation at LCA
j: do you have a link to that
d: let me find that 
j: yeah, slide 24. I actually tried reproducing something similar, but I didn't want to try drawing Voronoi regions because we are not actually using them due to RDO
d: that seems like a subtlety that will be lost on people
j: you think the figures are not clear
d: you say "eh this can be trained" but you are not clear on what happens when you do that
d: if I look at the left VQ latice and the one one the right, its not clear whey the one on the right is better. you look at slide 24 and its very clear why people want to do VQ
j: yeah I chose to not mention that because we are not using it, you have the 3 reasons to use PVQ and I didn't include it because none of them apply
d: the reason I did that was that I was talking to someone who actually knew what VQ was and had done his PhD research on it; it was specifically to contrast with what VQ is typically used for and to say explicitly that we are not doing that.
j: I can also completely remove the trained VQ figure
d: I have to think about that and look at the surrounding text, but it's not really essential to the story
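
For reference alongside the demo, a minimal sketch of the pyramid VQ being illustrated: the codebook is every integer vector y with sum(|y[i]|) == k, and the textbook quantizer projects onto that pyramid and then adds pulses greedily. This is the plain Fischer-style quantizer, not Daala's RDO-based search:

```c
#include <math.h>
#include <stdlib.h>

/* Quantize x[0..n-1] to the PVQ codebook
 * {y : |y[0]| + ... + |y[n-1]| == k} by greedy projection. */
static void pvq_quantize(const double *x, int n, int k, int *y) {
  double l1 = 0;
  int i, pulses = 0;
  for (i = 0; i < n; i++) l1 += fabs(x[i]);
  if (l1 <= 0) l1 = 1; /* all-zero input: avoid dividing by zero */
  /* initial projection onto the pyramid, truncating toward zero */
  for (i = 0; i < n; i++) {
    y[i] = (int)(k*x[i]/l1);
    pulses += abs(y[i]);
  }
  /* spend the remaining pulses where the residual is largest */
  while (pulses < k) {
    int best = 0;
    double best_r = -1;
    for (i = 0; i < n; i++) {
      double r = fabs(x[i])*k/l1 - abs(y[i]);
      if (r > best_r) { best_r = r; best = i; }
    }
    y[best] += x[best] < 0 ? -1 : 1;
    pulses++;
  }
}
```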