# Meeting 2014-09-02

Mumble:  mf4.xiph.org:64738

# Agenda

- reviews
- Daala Demo 5?
- multi-frame motion compensation
- Deringing filter based on paint
- Other intra ideas
- Publishing

# Attending

unlord, smarter, tmatth, jack, jmspeex, derf, gmaxwell, td-linux, xiphmont

# Reviews

https://review.xiph.org/437/ - done!
https://review.xiph.org/438/ - asm

# Demo 5

- x: update1 is in the same spot, waiting on Andreas. Haven't done anything on demo5. Was working on the crazy idea from last week.
- d: What idea was that?
- x: It was the directional transform. The bit that didn't work was rotating the lapping, which made it not reversible. The rotational transform is interesting though. The bit I got distracted on was attempting to remove refractions, but I've convinced myself that isn't possible.
- d: I didn't see how you were going to make it reversible.

# Multi-frame motion compensation

- g: I have something kind of working, but I need to rebase it to master. Hopefully it will work better once it's caught up with the current code base. I don't think we have any clips that are great for testing it.
- d: I assume you are just using the keyframe?
- g: Yeah.
- d: I think the real benefit will be when you use frames that are close by.

# Deringing filter

- jm: I posted some images of my work in progress and I'll repaste: http://jmvalin.ca/video/fruits_v60a.png http://jmvalin.ca/video/fruits_v60b.png http://jmvalin.ca/video/fruits_v60c.png http://jmvalin.ca/video/fruits_v60d.png
- jm: a is with no deringing filter; b and c use 16x16 and d uses 8x8 (different settings of the deringing filter).
- g: what else did you test this on?
- jm: IENA and fruits, which are the easiest and hardest cases. Mostly IENA results in blurring right now.

(some discussion of various artifacts)

- jm: I'm not sure how and what to signal right now. In theory you could go with on/off, but it seems like we could do better than that.
- g: I assume it would be reasonable to parameterize the intensity right?
- jm: Yes.
- d: But the intensity can be derived from the quantizer used, right? All the activity masking is going to tell you where you used a coarser quantizer. Also, we should be doing less deringing at higher qualities.
- jm: That's the only thing I'm taking into account right now. There are two gains I can derive: 1) the optimal gain knowing the original, and 2) the theoretically optimal gain not knowing the original. The second is the one I'm applying now. It's the stddev of the noise divided by the stddev of the image.
- g: Currently activity masking will result in us using a less precise quantizer ...
- jm: Activity masking is disabled at 4x4. It's a known issue.
- g: I was going to suggest we could code a bit that says we need more quantizer here, etc.
- d: I think we code that bit and it's whether to use 4x4. 4x4 is not used exclusively for edges. Like the stars in the Quebec City bridge clip.
- jm: I don't think these would look very good with the current deringing filter. That will definitely need some signalling, mostly because the filter will not see the star itself.
- d: It does not match your model.
- g: How do we run this on moving images?
- jm: I don't see the problem, but I'm not there yet. We definitely need a way to make it more or less aggressive on both still and moving images. One possibility would be to compute the motion compensated frame difference after the fact. How much change did we compute? Then use it as side info for deringing.
- d: Those are two different things. The MC frame difference is comparing the predictor to the original image. Are you talking about that or the difference you actually encoded?
- jm: The latter. I derived the gain in the optimal case where I know the original image. That gain is basically the covariance of the coded image with the original image divided by the variance of the coded image. But you need to signal something because that info is just not available.
- d: Have you looked at the difference between that and the other gain?
- jm: Yeah, that's my next step - do the theoretically optimal gain. If it's not buggy, there's no way to make the image worse. It's optimal for MSE / PSNR.
- d: We all know that's the best thing to optimize for!
- g: How slow is this?
- jm: Right now it's very slow, because of the directional search. There are ways to make it faster but I'm not looking at that yet. I still have a few bugs. I get a segfault on one image; I'll have to fix that to get actual RD curves. On fruits it helps PSNR by up to 0.3 dB at v60.
- d: It's better than basar was seeing with his stuff. I think we should be expecting 0.5 to 1 dB. I didn't think you were expecting that much gain from side information.
- jm: Once you get below the quant threshold, your noise is your actual image, so it will apply at a gain of 1 blindly. Also, is anyone interested in checking my math? If so, I can transcribe from my whiteboard.
- j: Write it down somewhere!
- x: I'm interested, but probably not this week.
- d: I'm also in the worrying about the math later camp.
- jm: Right now the criterion is based on MSE. How familiar are you guys with audio denoising? You could theoretically optimize the gain for preserving the amount of texture.
- d: Don't we already do a great job of that?
- jm: I mean in the denoiser. Instead of having a Wiener filter, you could have something that minimizes the amplitude of the spectrum or whatever. Opus minimizes the log spectrum error. What this does is minimize the MSE of the signal, not the spectrum.
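
(A rough C sketch of the two gains jm describes above: the oracle gain that needs the original image, and the blind gain built from a noise estimate. The q*q/12 noise estimate from the quantizer step and all of the names here are illustrative assumptions, not the actual Daala code.)

    #include <math.h>

    /* Oracle gain (encoder side, original available), as described in the
       meeting: covariance of the coded image with the original, divided by
       the variance of the coded image. */
    static double dering_gain_oracle(const double *orig, const double *dec, int n) {
      double mo = 0, md = 0, cov = 0, var = 0;
      int i;
      for (i = 0; i < n; i++) { mo += orig[i]; md += dec[i]; }
      mo /= n; md /= n;
      for (i = 0; i < n; i++) {
        cov += (dec[i] - md)*(orig[i] - mo);
        var += (dec[i] - md)*(dec[i] - md);
      }
      return var > 0 ? cov/var : 0;
    }

    /* Blind gain (no original): stddev of the coding noise divided by the
       stddev of the image, with the noise variance estimated here as q*q/12
       for quantizer step q (the usual uniform-quantization assumption).
       Clamped to 1 so the filter never over-corrects. */
    static double dering_gain_blind(const double *dec, int n, double q) {
      double m = 0, var = 0, g;
      int i;
      for (i = 0; i < n; i++) m += dec[i];
      m /= n;
      for (i = 0; i < n; i++) var += (dec[i] - m)*(dec[i] - m);
      var /= n;
      g = var > 0 ? sqrt(q*q/12/var) : 1;
      return g > 1 ? 1 : g;
    }
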
- jm: The only concern I'm aware of is the inter one.
- d: That's a big one!
- jm: I have one fix that is guaranteed to work.
- d: Don't run it on inter frames?
- jm: Yeah.
- g: Certainly ringing problems on inter are less bad.
- j: How do you prevent progressive blurring on inter frames?
- d: That's what he said, you don't apply it to inter frames.
- jm: Imagine your video is a single image. You apply the filter. When you do the next frame, the gain would be zero because applying it would make the MSE worse.
- d: You hope it's zero. It's not floating point, so there will be rounding errors.
- jm: That is one approach, and the other approach is to look at blocks where we coded a non-zero theta. You'd not be allowed to dering on blocks with zero theta.
- d: Right. Anything it adds will be ghosting aligned with the edge so your thing will enhance them.
- jm: You're assuming the same block size which isn't going to be the case.
- d: The sort of artifacts during MC will be if you get the edge alignment wrong.
- jm: I can't remove that.
- d: Exactly.
- jm: That's why I think we automatically disable it for anything where we don't have any residual.
- d: Then you run into the fun problem of having lapping, and what that means.
- jm: If it doesn't work on keyframes we should get rid of it, so I want to work on that first.
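
(A minimal sketch of the second fix discussed above: gate the deringing filter off on inter blocks that carry no coded residual, so purely motion-copied areas are never re-filtered frame after frame. The struct and field names are hypothetical, not the Daala API.)

    /* Hypothetical per-block gate; not the actual Daala signalling. */
    typedef struct {
      int is_keyframe;         /* 1 on intra/key frames */
      int has_coded_residual;  /* 1 if the block coded a nonzero gain/theta */
    } dering_block_info;

    static int dering_allowed(const dering_block_info *b) {
      if (b->is_keyframe) return 1;    /* keyframes: always a candidate */
      return b->has_coded_residual;    /* inter: only where a residual was coded */
    }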

# Other intra ideas

- u: I should have RD curves soon. I wrote some code to do the three tests but I don't have it hooked into rd_collect yet. At some point I want to ask you about the Haar DC. I still don't understand exactly how that works. There's a bunch of magic constants that I don't understand.
- jm: There are two. The only place where there is prediction is across superblocks. There are constants for up, left, etc., only at the superblock level. The only other thing is trying to predict the gradient within the block. If it detects there's a gradient, it will attempt to continue that gradient within the block. That's the division by 5.
- u: I see a shift by 5.
- jm: No, there's a /5 somewhere.
- d: hgrad / 5 and vgrad / 5 in quantize_haar_dc. JM, your comment is on the second time you use them.
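
(A toy C illustration of the gradient continuation jm describes. It is not the actual quantize_haar_dc code; it only shows the idea that a DC gradient measured from already-coded neighbors is continued into the current block at a fifth of its strength, the hgrad/5 and vgrad/5 mentioned above. All names are hypothetical.)

    /* dc_left2, dc_left are the two superblock DCs to the left,
       dc_up2, dc_up the two above; hpred/vpred receive the predicted
       continuation of each gradient into the current block. */
    static void haar_dc_gradient_pred(int dc_left2, int dc_left,
                                      int dc_up2, int dc_up,
                                      int *hpred, int *vpred) {
      int hgrad = dc_left - dc_left2;  /* horizontal DC slope so far */
      int vgrad = dc_up - dc_up2;      /* vertical DC slope so far */
      /* Continue only a fifth of each gradient into the block. */
      *hpred = hgrad/5;
      *vpred = vgrad/5;
    }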

# Publishing

- g: What's the schedule of publishing update1?
- x: I'm going to publish demo pages again as soon as I work on them. Just not update1 until Andreas is ready.
- g: What about intrapaint?
- x: We could, but we don't know if it works yet.
- jm: It's starting to work now.
- g: I don't consider the post filter the same thing anymore.