DaalaMeeting20131217

# Meeting 2013-12-17

Mumble:  mf4.xiph.org:64738

# Agenda

- Interns!
- Research grant
- Luma intrapredictor TF to 4x4
- JM's PVQ decoder

# Attending

- greg, monty, jmspeex, jack, derf

# Electrical engineering quiz

- do *you* know how to wind toroids?
- I know how to wind toroids

# Interns!

- derf: we met a promising candidate at PCS. she's worked on the same training problem nathan has worked on, but on predictors in the time domain.
- jack: i will talk to her about applying
- derf: the other interesting bit was that she worked on open loop stuff. like if we TFed down to 4x4, predicted each block, and then TFed back up to compute the residual at the end.
- jm: you mean avoiding being unstable?
- derf: exactly. she's not doing anything explicit to stabilize it, but she runs the open loop over the whole block and tries to minimize the least-squared error. (a toy open-loop sketch follows this list.)
- jm: i wouldn't be surprised if it was a little unstable but it wouldn't explode.
- jack: smarter's going to work on edge directed interpolation from mid-feb to august. doing paperwork for that now.
- jack: smarter won't count against our 2 interns, so we need to find more.
- derf: i talked to folks at Purdue and at a German university (Hanover). we can ask the northwestern folks.
- jack: need to follow up on this now, because interns are already getting offers from other companies.
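
The stability question jm raises is the classic open-loop versus closed-loop prediction tradeoff. Below is a minimal 1D toy (not Daala code; the signal and quantizer step are invented) showing why open loop can drift: the open-loop encoder predicts each sample from the original previous sample, so quantization error accumulates at the decoder, while the closed-loop encoder predicts from the decoder's own reconstruction and the error stays bounded.

```c
#include <math.h>
#include <stdio.h>

#define N 8
#define Q 10.0 /* deliberately coarse quantizer step */

/* Midtread scalar quantizer. */
static double quant(double v) { return Q*floor(v/Q + 0.5); }

int main(void) {
  double open_rec = 0;   /* decoder state for the open-loop stream */
  double closed_rec = 0; /* decoder state for the closed-loop stream */
  double prev = 0;       /* previous original sample (open loop only) */
  int i;
  for (i = 0; i < N; i++) {
    double x = 4.0*(i + 1); /* ramp: every difference is 4, below Q/2 */
    /* Open loop: predict from the original previous sample. Every
     * residual quantizes to 0 here, so the decoder never moves. */
    open_rec += quant(x - prev);
    prev = x;
    /* Closed loop: predict from the decoder's own reconstruction, so
     * quantization error feeds back and cannot accumulate. */
    closed_rec += quant(x - closed_rec);
    printf("x=%5.1f  open=%5.1f  closed=%5.1f\n", x, open_rec, closed_rec);
  }
  return 0;
}
```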

# Research grant

- derf: northwestern team has two ideas. one is adaptive pre/post filtering, i.e., use different filters for different block edges, which would be signaled. i warned them that there are potential IPR issues with that, but it's not impossible to do.
- jm: the idea is to balance the sharp/smooth filters with edges? or is there more to it?
- derf: he wasn't super clear about the effects he was after, but i think it's something like that. (a hypothetical sketch of per-edge filter signaling follows this list.)
- derf: the second part is decoder-as-estimator. by itself it doesn't seem that interesting, but when you optimize the filters in that framework he comes up with a different optimization goal, and we know ours isn't the best, so having more things to look at would be useful there.
- jack: for decoder-as-estimator, isn't decoding pixel-perfect? is this only useful for packet loss?
- derf: no, you can perhaps make up quantization loss with knowledge about the type of image
- greg: edge directed interpolation is an example of this class of technique: using prior information about the structure of the image to do a better reconstruction in the decoder.
- derf: but they're planning on doing this in the pre/post processing.
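
As i understand the per-edge filtering idea from the notes, the encoder would pick one of a small set of filters for each block edge and signal the choice so the decoder applies the same one. A hypothetical sketch of that selection loop, not the Northwestern proposal or Daala's lapped-filter API; the filter strengths, the distortion measure, and the write_bit() stand-in are all invented:

```c
#include <stdio.h>

#define NFILTERS 2
/* Smoothing strength applied across an edge; 0 keeps it sharp.
 * These numbers are invented for illustration. */
static const double strength[NFILTERS] = {0.0, 0.25};

/* Stand-in for a real entropy coder: just print the signaled bit. */
static void write_bit(int b) { printf("filter bit: %d\n", b); }

/* Smooth the pixel pair (*a, *b) straddling an edge. */
static void edge_filter(double *a, double *b, double s) {
  double d = *b - *a;
  *a += s*d;
  *b -= s*d;
}

/* Encoder side: try each filter on the reconstructed pair, keep the one
 * closest to the source pixels, and signal its index. */
static int choose_edge_filter(double src_a, double src_b,
                              double rec_a, double rec_b) {
  int best = 0, f;
  double best_err = -1;
  for (f = 0; f < NFILTERS; f++) {
    double a = rec_a, b = rec_b, err;
    edge_filter(&a, &b, strength[f]);
    err = (a - src_a)*(a - src_a) + (b - src_b)*(b - src_b);
    if (best_err < 0 || err < best_err) { best_err = err; best = f; }
  }
  write_bit(best); /* decoder reads this and applies the same filter */
  return best;
}

int main(void) {
  /* A blocking artifact: source is smooth, reconstruction has a step. */
  int f = choose_edge_filter(100, 104, 96, 110);
  printf("chose filter %d (strength %.2f)\n", f, strength[f]);
  return 0;
}
```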

# Luma intrapredictor

- greg: i reviewed nathan's patch (outstanding), and the patch works. but i have a general concern that i came to while testing it: i'm seeing a lot of worrying behavior with the new predictors, which no longer just use DC for the 16x16 case. intraprediction moves some low frequency energy into HF, and then there aren't enough bits to move that energy back out. it's really obvious in the background in fruits.
- jm: hopefully this is something that pvq can trivially fix. for scalar quantization this can be an issue. (see the toy gain/shape sketch after this list.)
- greg: good point. otherwise i was considering low-passing the predictions, or trying to address this in training.
- jm: is this really a problem even without pvq? the training minimizes energy, so when it does this it's getting something right that compensates for the times when it's wrong. if it's doing that, it means it's helping something, and i'd like to understand what.
- derf: we don't use adaptive quantization so we should be spending more bits than we actually are.
- greg: this pattern of copying LF to HF i expect to continue to be a problem.
- derf: you may be right
- greg: maybe pvq solves it or it can be improved in the training. i'm not saying it can't be solved.
- jm: if the large block sizes are trained on stuff that has edges, then that could cause the HF effect as opposed to training on stuff that should be 16x16.
- greg: i think we were escaping the problem previously because the predictors only predicted DC, due to previous training problems. it's real easy to churn out ideas for doing something better here. i don't think this should block landing this patch.
- jm: it's worth retraining while excluding the edges or weighting them very low.
- greg: training on all the offsets might help too. tim was getting some more images.
- derf: working on it. it would be nice to use larger intrapredictors for edges.
- greg: i don't want to exclude edges, but perhaps we should be doing some regularization that gives more weight to modes where we actually get improvement. i don't think we want to train it on edges that it doesn't even help on. nice long straight edges, sure.
- jm: we can even specialize based on that. it would be tricky because you're using different data for different modes, but you can have intra modes that are trying to do edges, and also modes that are explicitly trained to not have edges and only do vaguely directional gradients and that kind of thing.
- derf: we certainly could.
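
jm's point about pvq is that it codes a band's gain (energy) explicitly, separately from its shape, so a bad shape code can't silently move energy between frequencies the way independent scalar quantization of each coefficient can. A toy gain/shape split with an invented quantizer step, just to show the energy constraint (the shape here is not pulse-coded as in real pvq):

```c
#include <math.h>
#include <stdio.h>

#define N 16
#define GAIN_Q 0.5 /* invented gain quantizer step */

int main(void) {
  double x[N], xhat[N], g = 0, e = 0, gq;
  int i;
  /* Toy coefficient band with most of its energy at low frequency. */
  for (i = 0; i < N; i++) x[i] = 1.0/(1 + i);
  for (i = 0; i < N; i++) g += x[i]*x[i];
  g = sqrt(g); /* the gain: total energy in the band */
  /* The gain is quantized and coded on its own... */
  gq = GAIN_Q*floor(g/GAIN_Q + 0.5);
  /* ...and the reconstruction is the coded gain times a unit-norm shape,
   * so however crudely the shape is coded, the total energy matches the
   * coded gain; energy cannot silently leak from LF into HF. */
  for (i = 0; i < N; i++) xhat[i] = gq*(x[i]/g);
  for (i = 0; i < N; i++) e += xhat[i]*xhat[i];
  printf("coded gain %.3f, reconstruction energy %.3f\n", gq, sqrt(e));
  return 0;
}
```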

# PVQ decoder

- jm: it decodes. it works on two different images!
- derf: twice as much testing as i expected.
- jm: that's 2 more tests than when i committed it! it's better than the previous pvq decoder. the good news is i did not have to fix anything in the encoder to make it work.
- derf: the encoder was actually working?
- jm: yes. it encoded all the information.
- derf: twice?
- jm: can't say, but i don't think so :) i only have code for 8x8. if you go to encode.c and set run_pvq=1 it still works: you get scalar quantization for 4x4 and 16x16 but pvq for 8x8. i wouldn't guarantee you'd be able to export the files because it's using double precision, but it might work. double precision is probably safer than float for this kind of thing; you have to be pretty unlucky to end up at the rounding threshold. i have verified that the bitstream is actually correct. (a toy version of the pvq shape search follows this list.) what should the next thing be?
- jack: 4x4 and 16x16?
- jm: that or removing double precision or context adaptation.
- greg: how can we increase confidence that it's correct?
- derf: or useful
- greg: i meant that we're implementing the stuff we think we're implementing and that it works better than the prior pvq stuff.
- jm: it's been in review for 2 months, that would have been a first step.
- derf: i started going through it and have a bunch of comments but i haven't finished it yet.
- jm: ideally everyone would look at it and understand it, and see what i may have been doing wrong.
- derf: when are you doing that monty?
- xiphmont: after i skim the PCS papers.
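
For reference, the core of any pvq coder is the shape search: spend K unit pulses on the positions that most improve the match with the input, so the codeword y satisfies sum(|y[i]|) = K. A toy greedy version of that search with made-up input follows; it is not jm's actual implementation (which also handles the gain and prediction), just the basic idea.

```c
#include <stdio.h>

#define N 8
#define K 6

int main(void) {
  /* Toy input band to be shape-coded. */
  double x[N] = {4, -3, 2, 1, 0.5, -0.25, 0.1, 0};
  int y[N] = {0};     /* PVQ codeword: sum of |y[i]| will equal K */
  double xy = 0;      /* running <x, y> */
  double yy = 0;      /* running <y, y> */
  int k, i;
  for (k = 0; k < K; k++) {
    int best = 0;
    double best_score = -1;
    for (i = 0; i < N; i++) {
      /* Try one more pulse at i, signed to match x[i], and score the
       * resulting normalized correlation <x,y>^2/<y,y>. */
      double s = x[i] < 0 ? -1 : 1;
      double nxy = xy + s*x[i];
      double nyy = yy + 2*s*y[i] + 1;
      double score = nxy*nxy/nyy;
      if (score > best_score) { best_score = score; best = i; }
    }
    /* Commit the best pulse and update the running inner products. */
    double s = x[best] < 0 ? -1 : 1;
    xy += s*x[best];
    yy += 2*s*y[best] + 1;
    y[best] += (int)s;
  }
  printf("pulses: ");
  for (i = 0; i < N; i++) printf("%d ", y[i]);
  printf("\n");
  return 0;
}
```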