# Meeting 2014-09-16

Mumble:  mf4.xiph.org:64738

# Agenda

- reviews
- horizontal and vertical unlapping predictors
  http://people.xiph.org/~unlord/unlap/fruits-v10-res.png
  http://people.xiph.org/~unlord/unlap/fruits-v10-modes.png
  rd-curves
  http://people.xiph.org/~unlord/unlap/subset1-fastssim.png
  http://people.xiph.org/~unlord/unlap/subset1-psnr.png
  http://people.xiph.org/~unlord/unlap/subset1-psnrhvs.png
  http://people.xiph.org/~unlord/unlap/subset1-ssim.png
  fruits at ~0.1 bpp
  http://people.xiph.org/~unlord/unlap/fruits-v61-14105-pred.png
  http://people.xiph.org/~unlord/unlap/fruits-v82-14030-nopred.png
- ?

# Attending

azita, derf, gmaxwell, jack, jmspeex, tmatth, unlord

# Reviews

# horizontal and vertical unlapping predictors

- u: the idea is to find intra prediction modes that work given the lapping. one of the ideas tim had was to try to unlap the lapping on the current block. you want to predict what the lapped values of the current block would be given your lapped neighbors. if you look at the horizontal modes, it's easy to do horizontal and vertical. so i wrote a simple piece of code that does three different modes: no predictor, horizontal and vertical. and i measured SATD and used that as the metric for picking the mode (see the sketch at the end of this section). you don't do haar DC; you just use the normal dc and use the k tokenizer for the remainder of the coefficients.
http://people.xiph.org/~unlord/unlap/subset1-fastssim.png
- d: when you compare, is that with haar dc?
- u: no it's recording the dc as is.
- jm: can you compare to intra prediction and disabling coding of the mode? this is with modified intra prediction right? you could just do the rd curve where you disable haar dc and re-enable the old intra and comment out the part where we code the mode in the old intra.
- u: so you want to know what this looks like compared to the freq domain stuff?
- d: this is all fixed block size right?
- u: yes. 8x8. i think it's possible to do different block sizes.
- jm: you can actually have an apples to apples comparison if you compare to the old intra.
- u: i also put up a graph of what the modes are, and since i only did horizontal and vertical, doing some adaptive coding of the modes should be efficient.
- d: if you did some RDO the noise should go away.
- u: that's on my list of things to add. the thing i'm adding now is, for modes that are not horiz or vert, to just use the unlapped region outside of the block, i.e., pulling your predictors from farther away. and you push those across two block sizes and do the prefilter and dct and use that as your predictor. for some of these you see it does a poor job, and the modes that aren't horizontal or vertical look like good candidates for doing something else.
- d: the concern is merely that it is not as good. because you are predicting from farther away, the number of directions you have support for is smaller. but it may be better than nothing which is what we're currently doing.
- g: did you look at the actual output images?
- d: you have the two rd curves here, but what do the images look like at 0.1 bpp?
- u: I can tell you that in a minute.
- d: It looks like you have a good half db of separation at those rates, so we should be able to see it.
- u: you want with and without?
- d: yeah.
- g: fastssim is unlikely to be doing something weird, but we have cases where ...
- u: another question i had is even though you have fewer modes you can still adapt your context for that case right? you could just not have that in the cdf for those modes right?
- d: what you'd probably more likely do is make up some data over there so you still could do a prediction but it wouldn't be very good and not chosen very often. the idea is to have fewer special cases.
- u: do you think there would be real savings? have you played with trading those two things off?
- d: no i have not compared them. i'm arguing from design complexity at this point.
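
Below is a minimal sketch of the mode decision described in this section: build the three candidate predictions (no predictor, horizontal, vertical) for an 8x8 block and keep the one with the lowest SATD. The unlapping itself (reconstructing a prediction from the lapped neighbors via the prefilter and DCT) is hidden behind callbacks; pick_unlap_mode, satd8x8, and the pred_fn callbacks are hypothetical names for illustration, not actual Daala functions.

    #include <stdint.h>
    #include <string.h>

    #define B 8

    /* Candidate predictor: fills out[] with a prediction for the current
     * lapped 8x8 block, given the already-coded lapped neighbors. The
     * actual unlapping/prefilter/DCT work would live behind these. */
    typedef void (*pred_fn)(int16_t out[B*B],
                            const int16_t *left, const int16_t *up);

    /* Unnormalized 8-point fast Walsh-Hadamard transform, in place. */
    static void hadamard8(int32_t t[B]) {
      for (int step = 1; step < B; step <<= 1) {
        for (int i = 0; i < B; i += step << 1) {
          for (int j = i; j < i + step; j++) {
            int32_t a = t[j];
            int32_t b = t[j + step];
            t[j] = a + b;
            t[j + step] = a - b;
          }
        }
      }
    }

    /* SATD of the residual: Hadamard-transform the difference between
     * the block and its prediction (rows, then columns) and sum the
     * absolute values of the result. */
    static int64_t satd8x8(const int16_t blk[B*B], const int16_t pred[B*B]) {
      int32_t d[B*B];
      int64_t sum = 0;
      for (int i = 0; i < B*B; i++) d[i] = blk[i] - pred[i];
      for (int r = 0; r < B; r++) hadamard8(d + B*r);
      for (int c = 0; c < B; c++) {
        int32_t col[B];
        for (int r = 0; r < B; r++) col[r] = d[B*r + c];
        hadamard8(col);
        for (int r = 0; r < B; r++) d[B*r + c] = col[r];
      }
      for (int i = 0; i < B*B; i++) sum += d[i] < 0 ? -d[i] : d[i];
      return sum;
    }

    /* Try the three modes (0 = no predictor, 1 = horizontal,
     * 2 = vertical) and return the index of the cheapest one by SATD;
     * the winning prediction is left in best_pred. */
    static int pick_unlap_mode(const int16_t blk[B*B],
                               const int16_t *left, const int16_t *up,
                               const pred_fn modes[3],
                               int16_t best_pred[B*B]) {
      int best = 0;
      int64_t best_cost = -1;
      for (int m = 0; m < 3; m++) {
        int16_t pred[B*B];
        int64_t cost;
        modes[m](pred, left, up);
        cost = satd8x8(blk, pred);
        if (best_cost < 0 || cost < best_cost) {
          best_cost = cost;
          best = m;
          memcpy(best_pred, pred, sizeof(pred));
        }
      }
      return best;
    }

An RDO version of the same loop would add a rate term for coding the chosen mode, which is what derf suggests to reduce the noise in the mode map.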

# testing

- jm: who is doing tests for 265, 264 and vpx? would you be able to have runs that disable the non-realtime stuff?
- td: yeah, I can do that.
- jm: the patch i submitted makes the bitstreams robust to errors
- j: does that mean no vomit tigers?
- d: we'll still have bugs
- g: I think we're already comparable because 264 and 265 won't desync.
- d: jean-marc is also talking about the no lookahead case. if you look at the 265 development use cases, they had two: archival storage and realtime. it's perfectly reasonable to test these codecs under both scenarios.
- jm: at the IETF the primary use case is realtime.
- d: this will also make us look better. we only do 1-2% worse between the cases, but the other codecs do much worse if you turn off bframes. currently the only things we're planning to add that would impact this difference are some equivalent of bframes and actual rate control, but these comparisons are theoretically without rate control. if you look at vp9, it takes a large hit for the realtime use case because they have a bunch of things in addition to altref frames that they have to turn off to deal with dropped frames. they don't do any online updates of prob models, so you have to turn that off, etc. they were showing something like 12-20% drops for running in realtime mode.
- j: does vp8 have similar problems?
- d: vp8 has to shut off altrefs, but it doesn't do feed forward probability adaptation. instead of adapting the probabilities from the previous frame they code what probabilities to use at the beginning of each frame.
- jm: aside from bframe equivalents is there anything else that 26x does for realtime that we are not planning on doing?
- d: i'm trying to think of what the other two things were in vp9. but i'm pretty sure we're not planning on doing them, whatever they were. for 265 there was some stuff i was worried about, but i think they found ways to make them work without desync. 265 builds predictor lists and then signals which one to use as a predictor. if you do any kind of consolidation of those lists, then you have a problem if it includes the temporal predictor.

# unlapping (cont)

- d: nathan, you need to add dc prediction before this is a fair comparison.
- u: when i do the prediction i'm in the lapped domain. i do the dct and use that in my predictor. are you suggesting i predict dc when the mode is no predictor?
- d: when the mode is no predictor, you should still predict dc, and learn what the weight is.
- u: would I include blocks that are already horiz and vertical?
- g: i think what he is asking is whether he should train on all the blocks or just the ones that are horizontal and vertical. probably you want all the blocks, then go back and retrain with only the blocks that got assigned (see the sketch below). i can go over it with you a little later this morning.
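
A small sketch of the weighted DC prediction training discussed above, under simplifying assumptions: the predictor is a single scalar weight applied to a per-block "neighbor DC" value supplied by the caller, and the weight is fit by least squares over a training set. The two-pass procedure would fit over all blocks first, run the mode decision with that weight, then call this again with a mask selecting only the blocks that actually chose the mode being trained. fit_dc_weight and its arguments are illustrative names, not Daala code.

    #include <stddef.h>

    /* Least-squares fit of w in dc[i] ~= w * neighbor_dc[i], optionally
     * restricted to a subset of blocks (use_block[i] nonzero).
     * Pass use_block = NULL to train on every block (first pass); pass
     * a mask of the blocks that picked the mode in question for the
     * second, refined pass. */
    double fit_dc_weight(const double *neighbor_dc, const double *dc,
                         const unsigned char *use_block, size_t nblocks) {
      double sxy = 0;
      double sxx = 0;
      for (size_t i = 0; i < nblocks; i++) {
        if (use_block != NULL && !use_block[i]) continue;
        sxy += neighbor_dc[i]*dc[i];
        sxx += neighbor_dc[i]*neighbor_dc[i];
      }
      return sxx > 0 ? sxy/sxx : 0;
    }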

# multiple ref frames

- d: any progress greg?
- g: not yet, but should be able to make progress this weekend and next week.
- d: i'll be at VDD this week.
- u: I found a link for it. There's a wiki. (https://wiki.videolan.org/VDD14)
- d: Guillaume should be there. I should be talking about some opus and mp4 stuff. I'll be giving a talk on Daala. the way this works is that they invite people, since they don't have a lot of resources and they know who they want to show up.
- tmatth: i will be at vdd \o/
- j: we should have done this for xiph
- d: we kind of do. we have FOMS.

# awcy

- jm: what's the status of keeping them from getting shut down?
- td: I'm working on fixing it. i might disable shutdown until i get it fixed.
- jm: I'm using the web interface but there's a commandline tool?
- td: yep. there's a python script.
- jm: what's the name of the script?
- td: submit_awcy.py