DaalaMeeting20140923

Revision as of 23:13, 23 September 2014 by Tomvanbraeckel (talk | contribs) (Add weekly meeting report of 20140923)
# Meeting 2014-09-23
 
Mumble:  mf4.xiph.org:64738
 
# Agenda
- VDD debrief (derf, unlord)
- reviews
- Deringing demo (jm)
- more intra pred tests (unlord) http://people.xiph.org/~unlord/intra/
- travel plans IETF-91
- ?
 
# Attending
 
unlord, jack, derf, bkoc, jmspeex, td-linux
 
# Reviews
 
- td-linux will review gal's patch
- monty will review td-linux's quantizer scaling
 
# VDD
 
- u: Tim gave a great talk.
- d: I thought it was good. We gave away some t-shirts and got a potential intern for next year.
- u: We have patches from new contributors in rietveld. Hopefully they will continue!
- d: Katarina has been trying to correct the scale factor between the different levels. She managed to make the curves go down a bit, but she hasn't gotten very far.
- jm: all our dcs have similar scaling now.
- d: they were within about 0.9.
- jm: Still awesome that someone is looking at this.
- u: I'm also looking at it. I think I figured out how to handle the Haar DC if we want to not code the blocks outside the frame. I'll try and send a patch for that in the next day or so.
 
# deringing
 
- jm: Who's seen the demo I came up with?
- j: I haven't seen it.
- jm: I haven't had any comments yet.
- d: In general I thought it was great and I used it in my presentation. I can give you more detailed feedback today.
- jm: While working on this I think I came up with a decent way to do the search. Basically, in terms of complexity, in each direction I would have only one add per pixel and two multiplies per line. Meaning for horizontal and vertical you have N lines, and for the 45-degree and other diagonal directions you have 2N-1, but it's still relatively cheap.
- d: What are you actually doing there to get that?
- jm: Nothing fundamentally different. I just did a bit of simplification. My criterion is the MSE compared to the painted image. The painted image is just an average along a certain direction, and MSE relative to the average is basically the variance. If I go back to the variance computation, you are doing the sum of the squares minus the sum squared divided by N, and the sum of the squares is constant across all the directions you search. In the end all you need to do is, for each direction, follow each line summing, then square, divide by N, and sum over all the lines.
- d: You are just recentering your variance calculation. That's fine.
- jm: You're just computing all the sums for each direction and then squaring and dividing.
- d: Do you really get away with squaring things along the line?
- jm: It all cancels out between directions. All the x^2 terms cancel each other.
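The trick jm describes can be sketched concretely: since the sum of x^2 over the block is the same for every direction, minimizing the MSE against the per-line averages is equivalent to maximizing the sum over lines of (line sum)^2 / (line length). This is an illustrative sketch only, not the actual Daala code from the pastebin; the block size, the four-direction set, and all function names are assumptions.

```python
import numpy as np

def direction_score(block, lines):
    """Score one direction: sum over its pixel lines of (line sum)^2 / length.

    Because sum(x^2) over the block is constant across directions, the
    direction that maximizes this score is the one that minimizes the
    variance (MSE) around the per-line averages."""
    score = 0.0
    for line in lines:
        s = sum(block[y, x] for (y, x) in line)
        score += s * s / len(line)
    return score

def search_direction(block):
    """Pick the best of four example directions for an NxN block."""
    n = block.shape[0]
    # Horizontal and vertical have N lines; the diagonals have 2N-1.
    directions = {
        "horizontal": [[(y, x) for x in range(n)] for y in range(n)],
        "vertical":   [[(y, x) for y in range(n)] for x in range(n)],
        "diag45":     [[(y, x) for y in range(n) for x in range(n) if x + y == d]
                       for d in range(2 * n - 1)],
        "diag135":    [[(y, x) for y in range(n) for x in range(n) if x - y == d]
                       for d in range(-(n - 1), n)],
    }
    return max(directions, key=lambda name: direction_score(block, directions[name]))
```

Per pixel this is one add (accumulating into its line's sum) plus, per line, one multiply for the square and one for the 1/length scaling, matching the cost jm quotes.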
- d: This is starting to sound more promising.
- jm: It runs pretty fast now.
- d: How fast?
- jm: I still haven't optimized the painting but if I look at the total time it goes from .5s to .7s instead of going to 3s. This is on fruits.
- d: I still think there should be a way to do this without computing the paint image for every direction.
- jm: I suspect most of the complexity is no longer the search. I suspect painting has become more complex because I'm still doing the square roots and divisions and stuff.
- d: I'd like to avoid doing the paints for all or most of the directions as part of the search.
- jm: I don't see a way to avoid that. Maybe there is, but I haven't seen it. I think the way I'm doing it now is viable. In terms of per-pixel operations it is 9 now.
- d: Yes, but maybe you can look at the first few AC coefficents and compute a gradient and then your per pixel operations is zero.
- jm: I think we're going to have a really hard time figuring out a direction in the general case that I'm trying to handle. Like I've got a boundary between texture and something smooth, or I have a whole bunch of lines in different directions. If we could see the AC coefficients, could we code it efficiently?
- d: I still think it is worth experimenting with. I don't think it will be as good as what you are doing now, but maybe it will be close and you'll only have to search a few directions.
- jm: It's hard to do much. If you have a few directions to search you haven't reduced the complexity very much. The base complexity is not high.
- d: I don't have to do all the painting for those directions.
- jm: I don't do the painting there.
- d: Oh, nevermind.
- jm: I paint only in the one direction, and that hasn't been optimized. Right now I'm painting the whole image even if some blocks are disabled. The decoder will be able to avoid painting disabled blocks.
- d: Maybe not as big of a deal as I thought, but probably still worth taking a look at.
- jm: It seems like anything you would try would be more complex than what I'm currently doing. Right now I'm just searching for 8 directions and considering whether to do a refinement to 16, but maybe the search for 8 could be 4 plus refinement. I'm not sure if it's worth it. http://pastebin.mozilla.org/6594773 This is my directional search.
- d: This looks like pretty good progress.
- jm: For the on/off switch I think I want to do something else. At low rates just coding one flag per 8x8 block would be bad. I need to figure out what I want to do instead. Either going to 16x16 or automatically disabling the postfilter in blocks where we don't think it will be useful.
- d: Either that or you do some hierarchical thing. As in, you code at 32x32 whether there are any blocks to dering, and split down from there.
- jm: Right now if I just let it choose which ones seem best it picks about half and half. At -v 80. At lower -v it picks fewer blocks. If I just let it choose with no constraint there's no way to reduce the rate; it's going to cost me one bit per block.
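The hierarchical signaling derf suggests can be sketched by counting the flags a quadtree scheme would spend versus one bit per 8x8 block. This is a hypothetical sketch of the idea, not anything implemented in Daala; the function name and the flag-counting model are assumptions.

```python
def count_dering_flags(enabled, x0, y0, size, min_size=8):
    """Count flags spent by hierarchical (quadtree) on/off dering signaling.

    `enabled` maps (x, y) coordinates of min_size blocks to True/False.
    At each level one flag says whether ANY block below is enabled; if
    none is, the whole subtree costs just that one flag, which is where
    the savings over flat per-block flags come from at low rates."""
    blocks = [(x, y) for x in range(x0, x0 + size, min_size)
                     for y in range(y0, y0 + size, min_size)]
    any_on = any(enabled.get(b, False) for b in blocks)
    if size == min_size:
        return 1
    if not any_on:
        return 1  # one flag covers the whole area
    half = size // 2
    return 1 + sum(count_dering_flags(enabled, x, y, half, min_size)
                   for x in (x0, x0 + half) for y in (y0, y0 + half))
```

For a 32x32 area with nothing to dering this spends 1 flag instead of 16; with one 8x8 block enabled it spends 9. When roughly half the blocks are on, as jm observes at -v 80, the hierarchy costs more than flat flags, which is the trade-off being discussed.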
(missed some)
- jm: Unlike other stuff I've done this one is basically just a post filter. Is anyone up for helping merge this?
- d: If I get time I'm interested. I'll see what tomorrow looks like.
- jm: Monty did you look at this?
- x: It was spooky because it looked like something I would write.
- jm: http://jmvalin.ca/daala/demo1/
 
# intrapred tests
 
- u: I continued the work from last week. There's a URL in the agenda above that has a bunch of images.
- j: This looks amazing. Unlapping closed most of the gap with VP8.
- u: I'm not including mode signaling cost yet, so still need to do that and Haar DC.
- jm: for the cp ones, you should try copying only the coefficients which have horizontal energy. I would like to see this graph as it gets closer to master's features.
- u: I'm working on that.
(missed some)
- jm: You would look at the left block and see horizontal energy, and look up and see vertical energy. Let's say it's the one on the left. You would take the N horizontal coeffs to predict the current block. If it's bad you just signal noref. In this case, you'd have no signaling for the mode.
- u: The mode selection has to take it into account.
- jm: There is no signaling needed here. The decoder picks automatically.
- u: We could try this right now.
- jm: It wouldn't be entirely free because you're signaling noref, but it's pretty cheap compared to 8 modes. There's no potential for making bad decisions like spending 3 bits to code a mode which is thrown away.
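The signaling-free scheme jm outlines can be sketched as follows: both encoder and decoder inspect already-decoded neighbors and make the same choice, so no mode bits are needed. This is a hypothetical illustration; the energy measures, the choice of N, and the function name are assumptions, not anything in the Daala codebase.

```python
import numpy as np

def implicit_predict(left, above, n=4):
    """Implicit (signaling-free) intra prediction sketch in the DCT domain.

    If the left block has more horizontal AC energy than the above block
    has vertical AC energy, copy the first `n` horizontal AC coefficients
    from the left block; otherwise copy the vertical ones from above. The
    decoder performs the identical comparison, so no mode is signaled;
    only a `noref` flag would be coded when the prediction is bad."""
    pred = np.zeros_like(left)
    # Horizontal energy: first row of AC coefficients (skip DC at [0, 0]).
    h_energy = np.sum(left[0, 1:] ** 2)
    # Vertical energy: first column of AC coefficients.
    v_energy = np.sum(above[1:, 0] ** 2)
    if h_energy >= v_energy:
        pred[0, 1:n + 1] = left[0, 1:n + 1]
    else:
        pred[1:n + 1, 0] = above[1:n + 1, 0]
    return pred
```

Since the decision depends only on decoded data, the only cost is the noref escape, which is cheap compared to signaling one of 8 modes.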
- u: Currently we should take noref into account when we spend those three bits.
- d: This needs research because it sounds vaguely like what MPEG-4 Part 2 did with copying one row or column of AC.
- u: There's more things I want to experiment with. 1) Take the intra predictors (TFing neighbors down to 4x4) and add them to this graph. 2) Go back and retrain where you are using just AC coeffs. 3) Instead of training as we do now, train which coeffs to copy.
- jm: Are these variable block size?
- d: These are all 8x8 but adding the rest is in my plan.