DaalaMeeting20140708
Latest revision as of 16:01, 6 February 2015
# Meeting 2014-07-08

Mumble:  mf4.xiph.org:64738

# Agenda

- reviews
- code party
- https://daala.etherpad.mozilla.org/coding-party-201408
- monty's intra plans
- monty's journal
- work week budget
- weird rd_collect results?

# Attending

bkoc, xiphmont, td-linux, jmspeex, derf, jack

# Reviews

gmaxwell is on vacation, so don't pick him as a reviewer until the 18th :)

# Coding party

SEND INVITES!

# Monty's intra plans

- x: Trying to see if my crazy closed-loop TFed 4x4 intra prediction works. It doesn't address a number of problems that Jean-Marc identified. I think it's a neat idea, but I haven't done it yet.
- d: By closed loop do you mean open loop?
- x: No. Closed-loop decoding is easy if you magically have all the encoded data. Unfortunately, in the encoder you have the problem that you are encoding with data that you don't have. Fortunately, the system is damped, so if I iterated a couple of times with a hybrid closed/open loop in the encoder, it should converge. The idea is that we run the encoding open loop and then repeat the process with the previous open-loop data, iterating to converge on what we would get. I don't see a closed-form solution, but the iterative solution converges. We can decode everything, run the prediction at 4x4, and we're closed loop because we have ???
- jm: How do we decode 16x16?
- x: You decode 16x16 and predict from TFed 4x4.
- jm: What does the decoder do?
- x: It does all its prediction in the frequency domain. When we decode a 16x16 block, we decode the residual, we TF the residual down to 4x4, and the prediction is done at 4x4. We run the prediction closed loop. We start at the top and go in prediction order. We predict the first block and have the residual.
- d: It doesn't work with PVQ.
- x: Why?
- d: In order to decode the residual you have to have the prediction already.
- x: OK. I'll think more about that.
- jm: If it ever works, which I doubt, it would only work at lower bitrates.
For obvious reasons you cannot do lossless or near-lossless.
- x: I don't see that at all, but I do understand Tim's objection.
- jm: You wouldn't be able to get the exact residual.
- x: I don't follow. The prediction is still invertible. It doesn't matter what the residual is.
- jm: There is no way to build a lossless encoder.
- x: It could be lossless, but it might not be efficient. Because the prediction process is still invertible, your residual is still invertible.

# monty's journal

- x: The idea is that when I'm working on things I jot down notes. Sometimes I do it on paper, but most often in an emacs buffer. What I'm suggesting is that I make the note-taking pad visible. Not putting any more effort into it, but sort of a live tumblr feed of the inside of my mind. Mostly this would be a record of what I'm doing, because normally I don't keep these buffers around. But part of what we want is for the process to be visible and recallable. The point is not showing everyone else, but remembering it myself so I can talk about it, and others can reference the notes if they are interested.
- jm: Keep this at a well-known URL?
- j: Yes, everyone keeps theirs at http://people.xiph.org/~user/journal.txt or something.
- x: The important part is the documentation, not the code. The other part is that they're all in the same place, so everyone knows exactly where they are.
- j: We already do some of this during work weeks, and those refs to directories float around in IRC for weeks afterwards, and it's nice.
- x: We all work in different ways, but this seems like something we can all do as a compromise.
- jm: Check out http://jmvalin.ca/journal/?C=M;O=A. Is that close?
- j: If those had one-sentence descriptions of each patch, I think you'd be there.
- x: We want some of the process documented, not just the results?

# weird rd_collect results

- jm: I've been wondering what changed in the rd_collect stuff around the time of the previous work week? Nathan?
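The hybrid open/closed-loop iteration xiphmont describes in the intra section can be illustrated with a toy 1-D model. This is purely a sketch of the fixed-point idea, not Daala code: block names, the prediction rule, and all constants here are hypothetical. Each block is predicted from both neighbors, including "future" ones that have not been decoded yet, which is exactly what makes a single closed-loop encoder pass impossible; the encoder starts open loop and re-encodes against the previous pass's reconstructions until they stop changing.

```python
import numpy as np

STEP = 8.0    # quantizer step; coarse quantization supplies the damping
ALPHA = 0.45  # hypothetical prediction gain per neighbor

def quantize(r):
    """Scalar quantizer standing in for residual coding."""
    return np.round(r / STEP) * STEP

def predict(recon):
    """Predict each block from both neighbors, including 'future' ones.

    Depending on not-yet-decoded neighbors is the circular dependency
    that forces the iteration below.
    """
    left = np.roll(recon, 1)
    left[0] = 0.0
    right = np.roll(recon, -1)
    right[-1] = 0.0
    return ALPHA * (left + right)

def encode_iteratively(x, max_iters=30):
    # Pass 0 is fully open loop: no prediction at all.
    recon = quantize(x)
    for _ in range(max_iters):
        pred = predict(recon)
        new = pred + quantize(x - pred)  # re-encode against the latest recon
        if np.array_equal(new, recon):
            break  # fixed point: encoder and decoder now agree
        recon = new
    return recon

rng = np.random.default_rng(0)
x = rng.normal(0.0, 20.0, size=64)
recon = encode_iteratively(x)
```

Note that whatever pass the loop stops on, the reconstruction error per block is bounded by half the quantizer step, since each pass re-quantizes the true residual against some prediction; convergence of the iteration itself is the open question raised in the meeting, not something this sketch proves.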
Because I've not been able to reproduce subset1 results across these changes. The best comparison I've been able to make, for two things that should have been the same, got Daala being 3-5% worse with no change to how we're doing the encoding.
- u: If you look at the actual out files, is that what you're seeing?
- jm: The out files are different, but I didn't check closely. I am judging by the bd_rate output.
- u: Maybe it's because we're picking different sampling points.
- jm: They mostly appear to be changes in rate, not distortion.
- u: Do you have a link?
- jm: http://jmvalin.ca/video/weird/
- u: One thing you can do with rd_collect is run the new scripts with the old build to confirm there is some change that happened.
- jm: How do you do that?
- j: This should probably be documented. People are going to want to do this pretty often.
- jm: Is it measuring the rate differently?
- u: It shouldn't be.
- jm: Look at my graph. I was testing a new patch and it was worse than back in May, and when I was bisecting to find this, I discovered this range of commits where the rd scripts appear to have changed.
- u: Can you confirm you get the same file after encode?
- jm: I'll dig a little deeper. I just wanted to check if there was anything obvious that had changed before I dug too deeply.

[discussion about measuring bits in planes other than 0]

- u: We ought to be running rd_collect on every commit and make sure nothing crazy is happening, to detect regressions.
- d: I agree that if you are making changes that affect chroma, you won't be able to see it if you don't look at chroma.
- u: There's a rough belief that improvements in luma will help prediction from luma.
- td: I thought maybe we should make -1 the default, or not code planes 1 and 2.
- d: -1 averages all planes by pixel count.
- jm: Ugh.
- d: Exactly.
- jm: We need to figure out some decent weighting.
- td: The simpler solution is not to encode planes 1 and 2.
- d: That doesn't solve any of the problems you're talking about.
- jm: How about we dump all the planes instead of just one? PSNR for red doesn't make sense, but maybe the others will be fine.
- d: That sounds better than averaging a bunch of things. If I had to decide to do one of these things, getting something running on every commit is far more useful.
- jm: You (derf) were working on something that is an actual test?
- d: I have a two-line patch that makes the lambda function change with quality, and I don't have a curve that shows things.
- u: What do you need to actually do the test?
- d: Come up with a set of videos that code and can be evaluated in a reasonable amount of time.
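The objection to the "-1" behavior above (averaging all planes by pixel count) is easy to make concrete. A minimal sketch, assuming 4:2:0 subsampling and 8-bit samples; the function names and the toy frame are hypothetical, and this is not the actual rd_collect metric code:

```python
import numpy as np

def psnr(ref, rec, peak=255.0):
    """Per-plane PSNR in dB (what 'dump all the planes' would report)."""
    mse = np.mean((ref.astype(float) - rec.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak * peak / mse)

def pixel_weighted_psnr(planes_ref, planes_rec, peak=255.0):
    """What a combined metric does if it averages MSE by pixel count.

    In 4:2:0 each chroma plane has a quarter of the luma pixels, so luma
    gets two thirds of the total weight and chroma damage is easy to miss.
    """
    sse = 0.0
    npix = 0
    for ref, rec in zip(planes_ref, planes_rec):
        sse += np.sum((ref.astype(float) - rec.astype(float)) ** 2)
        npix += ref.size
    return 10.0 * np.log10(peak * peak / (sse / npix))

# Toy 4:2:0 frame: luma coded losslessly, chroma heavily distorted.
rng = np.random.default_rng(1)
y = rng.integers(0, 256, (64, 64))
cb = rng.integers(0, 256, (32, 32))
cr = rng.integers(0, 256, (32, 32))
y_rec = y
cb_rec = np.clip(cb + 20, 0, 255)
cr_rec = np.clip(cr + 20, 0, 255)

combined = pixel_weighted_psnr([y, cb, cr], [y_rec, cb_rec, cr_rec])
```

With this frame, the combined pixel-weighted figure comes out several dB better than either chroma plane's own PSNR, because the clean luma plane carries two thirds of the weight; reporting each plane separately makes the chroma damage visible.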