DaalaMeeting20140805

# Meeting 2014-08-05

Mumble:  mf4.xiph.org:64738

# Agenda

- reviews
- coding party
- EDI progress: https://martres.me/tmp/edi0805/ now only slightly worse than current code!
- Merging intra paint?

# Attending

unlord
xiphmont
smarter
jack
bkoc
derf
jmspeex
TD-linux

# Reviews

- nothing to review

# Coding party

- j: looking good.

# EDI progress

- s: I've been playing with EDI and got it to work better. I took the same thetas we use and put them into every EDI filter (either 6 or 12 steps depending on what was used before). It works better, but it doesn't work great yet.
- j: what about the upsampling test from last week?
- s: I haven't done the up/downsampling tests yet, but i'll do that next.
- d: do you have a plan to move forward?
- s: I still need to deal correctly with matching and padding, but I don't have a better plan, no.
- d: One thing to try would be to train the filter instead of directly computing it. In theory, if you train a plain linear filter you end up with the windowed sinc function we already have. But because you have split this up into different classes, and you train a filter for each class, you should certainly be able to do no worse.
- jm: Please start with the actual test for reconstruction noise, otherwise you don't know where the problem lies. It's quite possible that EDI works quite well but EDI from an EDI frame is causing the problem. And you wouldn't know until you actually run the test.
- s: I'll try restricting motion to 1/2 pel and see how that works. Can you show me how to do it?
- d: Go into src/mcenc.c and od_mvs_init, down at the bottom. There is a TODO about allowing configuration, and est_mv_res_min is there. 0 is 1/8 pel, 1 is 1/4 pel, etc.
- j: Can you make that a feature flag like the others?
- d: Yeah, that is what the TODO is about.
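(A rough sketch of the half-pel restriction being discussed, in Python for illustration. The value 2 for 1/2 pel is inferred from the "0 is 1/8 pel, 1 is 1/4 pel, etc." description above, and the rounding helper is hypothetical, not Daala code.)

```python
# Illustration only: motion vectors stored in 1/8-pel units, snapped to a
# coarser resolution.  Resolution levels follow the description above:
# 0 -> 1/8 pel, 1 -> 1/4 pel, and (inferred) 2 -> 1/2 pel.

def restrict_mv(mv_x, mv_y, res_level):
    """Round a motion vector in 1/8-pel units to the given resolution level."""
    step = 1 << res_level          # 1, 2 or 4 eighth-pel units
    def snap(v):
        return ((v + step // 2) // step) * step
    return snap(mv_x), snap(mv_y)

# Restricting to 1/2 pel (level 2): the 1/8-pel vector (13, -5) becomes (12, -4).
print(restrict_mv(13, -5, 2))
```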
- s: Yeah, I'll do that. I have no idea how to train the filter.
- d: It's a least squares regression. There is code inside intra stuff that does it.
- u: I can point you to code that does that.
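(A minimal sketch of the kind of per-class least-squares filter training being suggested, using numpy. The class labels, tap count, and data layout are assumptions for illustration, not the actual Daala/EDI code.)

```python
import numpy as np

def train_class_filters(lo_patches, hi_targets, classes, n_classes):
    """For each direction class, solve min ||A w - b||^2 for the filter taps.

    lo_patches: (N, taps) array of low-resolution neighborhoods (filter input)
    hi_targets: (N,) array of the true high-resolution samples to predict
    classes:    (N,) array of class indices (e.g. the theta buckets above)
    """
    filters = []
    for c in range(n_classes):
        idx = classes == c
        A = lo_patches[idx]
        b = hi_targets[idx]
        # Least-squares fit per class.  With a single class and enough data
        # this tends toward the usual windowed-sinc style interpolator, so
        # per-class filters should do no worse than that baseline.
        w, *_ = np.linalg.lstsq(A, b, rcond=None)
        filters.append(w)
    return np.array(filters)
```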

# Merging Intra paint

- jm: We're approaching the point where it's going to be useful to have this merged, but how?
- x: How well does it work compared to current intra? What do the numbers look like?
- jm: For the fruits image I posted (luma only), it looks like with everything it would be 4-6k.
- td: Worse by the way. How are you coding modes?
- jm: Assuming flat coding of the modes. 5 bits to code each mode (32 modes).
- d: Sounds like we don't have coding yet.
- td: I have some coding, but it's way worse.
- jm: I have the entropy, and experience tells me that I get the entropy when I actually code stuff.
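(A small illustration of the flat-coding cost versus the entropy bound for the 32 modes; the example mode histogram is made up, not measured data.)

```python
import numpy as np

N_MODES = 32
flat_cost = np.log2(N_MODES)          # 5 bits per mode with flat coding

# Hypothetical mode histogram: a few dominant directions, a long tail.
counts = np.array([400, 300, 150, 80] + [10] * (N_MODES - 4), dtype=float)
p = counts / counts.sum()
entropy = -(p * np.log2(p)).sum()     # lower bound for an ideal mode coder

print(f"flat: {flat_cost:.2f} bits/mode, entropy: {entropy:.2f} bits/mode")
```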
- x: Branching makes this easy.
- jm: It's already in a branch in my personal repo.
- x: Can I check it out?
- jm: Yes, exp-intra-paint on my Daala repo on MF4.
- u: Can you show me how to set up a personal repo offline?
- jm: Yeah, sure. Basically I'm starting to think about merging it and want to know how that would be done. It's not actually hooked up to the encoder. From the numbers I'm seeing we can do better at low bitrate. For stuff like fruits or adventures with windmills, I'm getting pretty nice images. I don't know about PSNR and that sort of thing, but when there is ringing, it follows the branches instead of having ringing corners all over the place.
- d: Wasn't adventures the one that showed up the worst?
- jm: Sure, but the metrics mean nothing. Adventures is perfect white and perfect black and starts with a PSNR that is much worse than most other images. When we're coding it with the current master, it's already going to be much worse. Some of the images where we already have good PSNR I haven't tried.
- d: Do you have a proposal?
- j: Hooking it up to the encoder would be a good first step.
- jm: Do you want it merged yet?
- d: I don't know because I don't know if it works.
- jm: So first condition is full encode, decode and be better?
- d: Yes.
- jm: That postpones it to post coding party.
- j: I suggest emailing details about where to get the branch to the mailing list so people can follow along. That way when it's ready to merge, everyone has already had a chance to play with it and see the code.
- jm: Ok.
...
- jm: Right now my code supports 32, 16, 8
- d: Maybe 32x32 will decrease rate, but ???
- jm: Rate at equal quality. I think I may be able to live without 64x64. It was mostly useful for smooth gradients.
- d: We can always revisit adding it later, but it involves structural changes to the rest of the codec.
- jm: Probably it's not going to be useful just for intra. On the other hand, maybe for motion vectors.
- d: That's certainly a possibility.
- jm: The results in your paper clearly show that 16x16 was way better than 8x8, which suggests you at least want 32x32 but maybe even more than that.
- d: Part of that may be that our decisions were terrible. I'm trying to fix that right now, so I'm not sure I trust any of those results. ... The reason is that most encoders make decisions greedily. If you have relatively constant motion, you end up with all the blocks saying to use the prediction and then having small refinements. If you are not using a greedy optimizer you can do better than that.
- jm: Should we want 64x64 for 4k video?
- d: Absolutely. We'll see.
- jm: I changed all my code to be fixed at 32x32. I only recently changed the scanning order to superblock scanning instead of raster scanning, which is bad when you switch block sizes. Sometimes you haven't quite quantized everything you need. Do you have ideas for how to deal with DC?
- d: There are things you can do to handle discontinuities.
- jm: That's not what I'm worried about. Especially since I changed the way DC is encoded to only use the four corners, if there is something directional close to a DC block I end up smearing the directional stuff. Also, I'm not sure how to do the actual interpolation within the DC. It's a special case and I'm trying to fit it into the normal cases.
- d: I'd have to look at what you're doing more closely before I could suggest something constructive.
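(A minimal sketch of one way to fill a DC block from its four corner values, here plain bilinear interpolation; this is only an illustration of the four-corner idea, not the actual scheme in the intra paint code.)

```python
import numpy as np

def fill_dc_block(tl, tr, bl, br, n=32):
    """Bilinearly interpolate an n x n block from its four corner values."""
    t = np.linspace(0.0, 1.0, n)
    top = (1 - t) * tl + t * tr          # interpolate along the top edge
    bottom = (1 - t) * bl + t * br       # interpolate along the bottom edge
    s = np.linspace(0.0, 1.0, n)[:, None]
    return (1 - s) * top + s * bottom    # blend the two edges vertically

block = fill_dc_block(tl=100, tr=120, bl=90, br=110, n=32)
```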
- jm: Someone looking at my code would be helpful. It's about 900 lines long right now. Some of it is a bit repetitive and I wish someone could help figure out how to improve it.
- d: For comparison that is half a silk decoder. Remember how long it took me to document that?