Daala on Wheels
Revision as of 17:07, 9 October 2010

Daala is the current working name of a next-generation video codec— to be renamed once someone insists on something better. So far the best proposed alternative is PatentCake.


For now, the purpose of this page is to collect notes on topics discussed in informal public IRC discussions about the next-generation initiative. Participants in these discussions have included Timothy Terriberry, Jason Garrett-Glaser, Loren Merritt, Ben Schwartz, Greg Maxwell, and others.


The overall structure discussed so far is a variable-size lapped-DCT block-based codec, with lapping done via pre/post filtering using a specially structured (lifting) linear-phase transform along block edges, combined with overlapped block motion compensation and the expected trimmings. The lapping can be optimized for energy compaction and other useful properties, including invertibility, and yields excellent results with efficient finite-precision math.
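
A toy illustration of why the lifting structure matters (the coefficients below are invented for illustration, not Daala's trained filters): because each lifting step only adds a function of one value to the other, the post-filter can undo the steps exactly in reverse order, so the pre/post pair is lossless in integer arithmetic.

```c
#include <assert.h>

/* Toy 2-point lapped pre-filter across a block edge, built from two
 * lifting steps. The 3/8 and 5/8 coefficients are hypothetical; a
 * real codec would train them for energy compaction. Assumes
 * arithmetic right shift for negative values, as on common platforms. */
static void prefilter_pair(int *a, int *b)
{
    *b -= (*a * 3) >> 3;  /* lifting step 1 */
    *a += (*b * 5) >> 3;  /* lifting step 2 */
}

/* The post-filter applies the same lifting steps in reverse order with
 * the opposite sign, so the round trip is exact in finite precision. */
static void postfilter_pair(int *a, int *b)
{
    *a -= (*b * 5) >> 3;  /* undo step 2 */
    *b += (*a * 3) >> 3;  /* undo step 1 */
}
```

Any rounding inside a lifting step is harmless: the inverse step recomputes the identical rounded quantity and subtracts it, which is what makes exact invertibility cheap here.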

Other components which have been discussed include:

Techniques applicable to all frame types

  • Multisymbol arithmetic coding
    • Timothy has some trial code showing speed-up proportional to the number of bits coded at once. (ec_test.c)
  • Mode prediction using the previously decoded data, e.g. coding the mode using a probability function derived from trained predictors on the surrounding blocks.
    • This will be terrible for robustness but may significantly reduce signalling overhead, allowing many more modes, and provide continuous adaptation between signalling-free and fully signalled modes.
  • Explore Legendre polynomial basis transforms instead of the DCT
    • May have better perceptual properties and/or result in 'less compromised' efficient implementations.
  • Coefficient domain prediction to allow efficient energy preserving quantization.
  • Variable partition size/shape and the use of good predictors appear to remove most of the benefit of directional transforms.
    • Perhaps 45deg is still useful?
    • How does this change with partition sizes? Directional transforms are clearly not that useful with 4x4.
  • Transform post-filtering to allow merging smaller transform blocks (like TF merging in CELT) may allow more flexible partitioning than using mixed block sizes outright.
  • Perturbed quantization mode-signalling has been discussed but mostly laughed at. ;)
  • Special block modes well suited to solid color/cartoon like content— avoiding ringing.
    • Are pixel prediction modes too slow?
  • In general, which Markov random field techniques, if any, can be applied with acceptable performance?
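
A minimal sketch of the neighbor-driven mode prediction idea (the floor/boost values are invented, not trained): instead of signalling "predicted vs. explicit" mode, build a per-block probability table by boosting the modes of the already-decoded left and up neighbors, and let a multisymbol entropy coder code the actual mode against it. A correctly predicted mode then costs very few bits, and adaptation between "free" and fully signalled modes is continuous.

```c
#include <assert.h>

#define NMODES 8

/* Hypothetical context model: every mode starts at a small floor
 * frequency; the modes of the decoded left and up neighbors get a
 * boost. The resulting cumulative table (cdf) would be fed to a
 * multisymbol range coder to code the block's actual mode. */
static void mode_cdf(int left, int up, unsigned cdf[NMODES + 1])
{
    enum { FLOOR = 2, BOOST = 12 };
    unsigned freq[NMODES];
    int m;
    for (m = 0; m < NMODES; m++)
        freq[m] = FLOOR;
    freq[left] += BOOST;
    freq[up]   += BOOST;
    cdf[0] = 0;
    for (m = 0; m < NMODES; m++)
        cdf[m + 1] = cdf[m] + freq[m];
}
```

When both neighbors agree, the shared mode holds 26 of the 40 total counts, so coding it takes well under one bit; a mode neither neighbor used still has nonzero probability, which is what keeps the scheme from needing an escape flag.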

Techniques applicable to inter frames

  • Using x264 as a test-bed, Jason and Loren demonstrated 15% rate/distortion improvements from using 10-bit intermediates and references, estimated as roughly one third from quality calculation in the 10-bit space, one third from the higher-precision references, and one third from higher intermediate precision in calculations (e.g. MC filter processing).
    • Increased reference precision competes for memory with increased number of references. The improvements demonstrated appear to be a greater win than increasing the reference count once there are four references or so.
  • Super-resolution techniques for motion-compensation references have been discussed— in particular it appears that the half-pel location is where intelligent filtering matters the most so staged computation could be effectively used to allow more expensive filtering at that level.
    • Edge-directed interpolation techniques might be effectively applied to increase motion compensation accuracy, but most of the techniques known to be very effective are too slow.
    • Speculation has been offered that a significant part of MC inaccuracy may be due to blending in a physically incorrect (gamma-corrected) space, though no real conclusions were made. Academic papers on motion compensation accuracy seem to have ignored this issue.
  • Timothy has an example code base for a variable partition size blocking-free motion compensation scheme which merges OBMC (overlapped block motion compensation) and CGI (control-grid interpolation) with an interesting prediction/sub-division scheme and whole-frame trellis optimization of motion vectors. (daala-exp)
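
A 1-D sketch of the OBMC blending involved (linear ramp window, purely illustrative; the actual windows in daala-exp and the CGI coupling are more elaborate): pixels in the overlap region are a cross-fade between the predictions produced by the two neighboring blocks' motion vectors, so no hard block edge survives into the residual.

```c
#include <assert.h>

#define OVERLAP 8

/* Blend two motion-compensated predictions over an OVERLAP-pixel
 * region using complementary linear ramp weights. The weights always
 * sum to 2*OVERLAP, so flat regions pass through unchanged and the
 * transition between the two predictors is gradual. */
static void obmc_blend(const int *pred_a, const int *pred_b, int *out)
{
    int i;
    for (i = 0; i < OVERLAP; i++) {
        int w_b = 2 * i + 1;           /* 1, 3, ..., 15 (out of 16) */
        int w_a = 2 * OVERLAP - w_b;   /* complementary weight      */
        out[i] = (w_a * pred_a[i] + w_b * pred_b[i] + OVERLAP)
                 / (2 * OVERLAP);
    }
}
```

In 2-D the same idea applies with a separable window, and variable partition sizes simply change the ramp length; the trellis optimization in daala-exp then picks vectors knowing their prediction leaks into neighboring blocks.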