DaalaMeeting20140506

# Meeting 2014-05-06

Mumble:  mf4.xiph.org:64738

# Agenda

- new meeting time
- reviews
- workweek debrief
- moto debrief (?)
- vp9 summit 6/6
- next workweek
- code party in august
- trained VQ

# Attending

jmspeex, jack, unlord, gmaxwell, derf

# new meeting time

- jack: what's the end of your range, jm?
- jm: 5 Eastern, 2 Pacific.
- jm: or after 9pm eastern

# reviews

- g: nathan has two patches that need a reviewer
- n: probably assigning to tim, but i forgot when i ran the tool

# workweek debrief

- j: how often should we do these?
- jm: more often is better, but need to balance family stuff
- g: this one was ok, but knowing dates in advance is better
- jm: ietf meetings, if we all need to go, should count as a workweek; i'm good with 6 weeks a year.
(recap of meeting with andreas)
- g: gpu rationale has been mostly about power savings, but gpus are not necessarily power efficient. we should look into that.

# next workweek / vp9 summit

some people will attend the vp9 summit and stay over the weekend, potentially in SF.

# code party

- jm: i have a conflict with 8/4, but could do the 3rd week of august.
- j: ok, we'll pencil that one in.

# vq training

- jm: did you do any experiments tim?
- d: i did the principal geodesic stuff. i was trying to get greg's code to print the eigenvectors, but it's on my laptop, which is in my apartment, which has no power. and my machine at work blew up too. the little i did look at was not exciting.
- jm: i tried PCA, not on the codebook but on the data, and i'm curious if you found the same thing i did: the principal components came out looking like the dct basis functions.
- d: that is not what i found. when you do geodesic analysis on a sphere there is no mean on the sphere that makes any sense.
- jm: so geodesic PCA just sucks then
- d: that may be true. i'm probably more interested in looking at what the results look like around codebook vectors. but i haven't gotten a chance to look yet. if it does suck, it doesn't suck too badly. i got something that looked roughly 4D.
- jm: the PCA i did looked roughly 4D as well. there's more energy in the first 4 basis functions.
- d: the character of the data changes remarkably after the first four. they are fairly broadly spread out. it's not a small cloud clustered around the mean. if you project the thing into 2D along the eigenvectors, you get stuff that is clustered around 2 axes. it looks like a cross. i have pictures i can show you later. the interesting thing is that the axes are not perpendicular.
- jm: what i did yesterday was to take really directional images in training. that seemed to produce better results.
- d: as in rate/distortion performance was better?
- jm: only at a single rate, but it seemed to improve PSNR and visuals while slightly decreasing the bitrate. i think we're on to something here. obviously i think it would be bad to train on just concentric circles. do you have any ideas how we should go about finding training data?
- d: not really. my original idea was that we'd have a large set of images.
- jm: anything that's textured would give us the same results no matter what it was. what's really important is to have the edges.
- d: one thing you could do is go look at the blocks where vp8 does well and we do badly.
- j: can you switch codebooks between different training sets?
- jm: there's detection in git, so the code switches between pure pvq and trained vq. trained vq replaces the prediction. could the same thing explain why the intra predictor doesn't work so well? the idea was to train on everything. what if we only trained the intra predictors on concentric circles; would they be better at predicting edges? what if we started from vp8, and instead of iterating we forced it to have the same modes as vp8, excluded the ones where the prediction didn't work very well, and left only the actual edges?
- g: vp8 will still classify modes where the prediction doesn't work well for vp8. stuff like this was the motivation behind the weighting stuff we had in the code.
- jm: i understand the reason it made sense, but who knows. we've been attempting to do something better than vp8 and it was worse, so maybe attempting to replicate vp8 would give us better results.
- g: the metrics we use in the training claim we are better than vp8, but we don't get the same visual effects.
- jm: the measurement is actually looking at it.
- u: we can put jean-marc in the loop.
- jm: mechanical turk! on the vq side, i'm pretty sure it would help to have many more edges in the training.
- g: the vq seems to be a little harder problem with training data. the non vq stuff can just copy its surrounding environments. for the non-vq stuff i'm confident we could train on machine generated lines and it would do sensible things, but i'm not sure it would do well with vq.
- jm: i'll look at it. not sure if the best thing is to hand-pick images or use lots of synthetic data.
- g: the middle ground is to take natural images and use vp8 to figure out which ones contain good prediction targets.
- jm: .... i picked only 4x4
- d: did you train it on 4x4 and use it on all or what?
- jm: the new experiment was trained on 4x4 only; would need to try per-blocksize training.
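
The PCA-on-data experiment discussed above can be sketched roughly as follows. This is a hypothetical illustration, not the actual Daala training code: it assumes blocks are flattened 4x4 (16-dimensional) vectors and just computes the eigendecomposition of their covariance; the "roughly 4D" observation corresponds to most of the energy landing in the first few eigenvalues.

```python
import numpy as np

def pca_basis(blocks):
    """PCA of a set of flattened image blocks.

    blocks: (n, d) array, one flattened block per row.
    Returns eigenvalues (descending) and the matching eigenvectors
    as columns -- the learned basis functions.
    """
    centered = blocks - blocks.mean(axis=0)
    cov = centered.T @ centered / len(blocks)
    evals, evecs = np.linalg.eigh(cov)   # eigh returns ascending order
    order = np.argsort(evals)[::-1]      # re-sort descending
    return evals[order], evecs[:, order]

# Toy data only: random 16-dim "blocks" to exercise the code.
# Real training data would be flattened 4x4 transform blocks.
rng = np.random.default_rng(0)
vals, basis = pca_basis(rng.standard_normal((1000, 16)))

# Fraction of total energy in the first 4 components; on real
# directional image data this is where a "roughly 4D" structure
# would show up (on white noise it stays near 4/16).
energy_frac = vals[:4].sum() / vals.sum()
```

Projecting the data onto pairs of the leading eigenvectors (`centered @ basis[:, :2]`) is how one would reproduce the 2D "cross" scatter plots mentioned above.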