# Meeting 2014-02-18
Mumble: mf4.xiph.org:64738
# Agenda
- scaling issues
- det1
- 32x32
- Prediction error numbers: https://pastebin.mozilla.org/4329150
# Attending
unlord, jmspeex, derf, jack, xiphmont, gmaxwell
# input scaling
- d: have a patch that works, but nathan is having trouble testing it. i'll take a look today.
- Currently set to 4 bits (seems good for now, just a #define; see the sketch after this list)
- Need to check for overflows in CfL
- Mostly disables RDO in intra prediction at the moment
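A minimal sketch of what the 4-bit input scaling above might look like; `OD_COEFF_SHIFT`, `od_scale_in`, and `od_scale_out` are illustrative names, not the actual patch:

```c
/* Hypothetical sketch of 4-bit input scaling; names are illustrative. */
#include <stdint.h>

typedef int32_t od_coeff;

/* Extra bits of precision added to the 8-bit input: "just a #define". */
#define OD_COEFF_SHIFT (4)

/* Scale an 8-bit pixel up to coefficient precision on input... */
static od_coeff od_scale_in(uint8_t px) {
  return (od_coeff)px << OD_COEFF_SHIFT;
}

/* ...and back down, with rounding and clamping, on output. */
static uint8_t od_scale_out(od_coeff c) {
  c = (c + (1 << (OD_COEFF_SHIFT - 1))) >> OD_COEFF_SHIFT;
  if (c < 0) c = 0;
  if (c > 255) c = 255;
  return (uint8_t)c;
}
```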
# det1
- g: I posted graphs in my status report. I have it trained and ready to go now. It didn't do anything terrible to the RD plots.
- g: https://people.xiph.org/~greg/det8/
- d: I will take "not terrible."
- j: is the training tool getting checked in?
- u: that already landed
- g: i've been on a bit of a training lark because I made the mistake of running the training again after updating the tools. i made some improvements to make it faster and then found better prefilters on friday, saturday and sunday. i haven't found a better one since sunday, so I left it there.
- jm: this is only 4x4?
- g: all sizes but only changing 8x8.
- jm: what about 4x4?
- g: that one i searched exhaustively. in the 8x16 case, the non-negativity constraint makes it somewhat sparse, so there still may be better ones.
- d: the important thing is to get something in.
- j: do we still need a 16x16 one?
- g: that's training right now with the new training code.
- j: how long does that take?
- g: I'll put in whatever it comes up with by tomorrow.
- j: is it possible to search 8x8 and 16x16 exhaustively?
- g: not at all.
- j: do we need to retrain the intra stuff now?
- g: that's already baked into this task. that's part of what i'm doing.
- u: these are still independent right?
- g: correct
- j: do we need to retrain with scaling patch?
- d: no, they were already scaled up during training.
- g: i thought you used integer transforms in training?
- u: i thought we used the double version. i'll find out.
- u: it's using integer filters and transforms. but we don't have a problem because we're in 32 bits.
- d: there are multiplies in the transform that multiply by 15 bits and produce a 32-bit result. so if od_coeff is more than 17 bits you have a problem.
- u: our input is in y4m space right?
- d: yes.
- u: we're scaling by 6 bits in the training.
- d: that's already 14 bits, and the prefilter adds 2 bits in each direction for 16x16.
- g: we'll definitely be overflowing somewhere there (see the worked budget after this section).
- d: that could be one reason that 16x16 is coming out crap.
- x: the appearance of stairs at the high end, is that innocuous?
- d: that patch hasn't landed, so that is innocuous.
- j: why train at 6 bits and run at 4 bits?
- jm: it makes sense to use more than necessary.
- j: how do we solve this problem?
- d: we have to fix the transform to work with 4 bits if that's what we're using.
- u: you were looking at trans tools with 16x16 and seeing overflow also?
- d: that wasn't overflow, that was precision error.
- g: i will check for overflow. we should turn the scaling down because it's risky in training.
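A worked sketch of the headroom arithmetic discussed above, assuming the numbers quoted in the meeting (15-bit transform constants, 32-bit od_coeff); the guarded multiply is illustrative, not Daala code:

```c
/* Bit budget from the discussion above: transform constants are 15-bit
   and products must fit in a signed 32-bit od_coeff, so transform inputs
   get at most 32 - 15 = 17 bits.
   Training today: 8-bit y4m input + 6-bit scale = 14 bits, and the 16x16
   prefilter adds ~2 bits in each direction: 14 + 2 + 2 = 18 > 17, hence
   the suspected overflow. With the 4-bit runtime scale the budget is
   8 + 4 + 2 + 2 = 16 <= 17, which fits. */
#include <assert.h>
#include <stdint.h>

typedef int32_t od_coeff;

#define OD_TRANS_CONST_BITS (15)                      /* multiplier size */
#define OD_COEFF_HEADROOM (32 - OD_TRANS_CONST_BITS)  /* 17 usable bits */

/* Illustrative guarded multiply: 17 + 15 = 32, so the product only fits
   in 32 bits when x fits in signed 17 bits. */
static od_coeff od_mul15(od_coeff x, int32_t c) {
  assert(x >= -(1 << (OD_COEFF_HEADROOM - 1)));
  assert(x < (1 << (OD_COEFF_HEADROOM - 1)));
  return x*c >> OD_TRANS_CONST_BITS;
}
```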
# 32x32
- j: made progress on this?
- d: not yet. waiting on det1.
- jm: i've been running experiments and 16x16 is performing noticeably worse than 8x8. maybe that's the cause of the 32x32 results.
- g: if that's true, why would we expect 32x32 to be better?
- d: that's concerning.
- jm: i have no idea what could cause that.
- d: is it an entropy coding issue?
- jm: i meant to check but if i did I don't remember
# prediction error
- d: this leads me into the next thing i wanted to talk about. i was collecting data from the quant matrix stuff, and one thing i was looking at is the square root of the average squared prediction error for each coeff for all 16x16, 8x8, and 4x4 blocks (the statistic is written out after this section). i ran with k=3 so it should be 1/3 intra and 2/3 inter, and https://pastebin.mozilla.org/4329150 shows what it looks like.
- g: i noticed that in the 8x16 training, none of the predictors predict the highest frequency coeff.
- d: greg is referring to something else i found on friday, but that effect is gone after running more video through it. the fact that the vertical prediction error is more than twice as large as horizontal is not gone.
- jm: i would tend to blame that on having more energy in that direction.
- d: yeah, this may be because video is wide. this also showed up in 4x4 and 8x8 but not as bad as 16x16. i'll go look at it and see whether that is true.
- jm: can we get the coding gain values for our transforms?
- u: the trans2d tool does it.
- g: on what input?
- jm: the coding gain we think we are achieving (see the formula at the end of these notes).
- g: without intra?
- jm: yes
- g: you want transform + prefilter?
- jm: yes.
- g: ok, will have in a few minutes.
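For reference, a reconstruction of the statistic derf describes above (square root of the average squared prediction error per coefficient); the notation here is ours, not from the meeting:

```latex
% RMS prediction error for coefficient i over the N blocks of a given size,
% where c_{b,i} is the coded coefficient and \hat{c}_{b,i} its prediction:
E_i = \sqrt{\frac{1}{N}\sum_{b=1}^{N}\bigl(c_{b,i} - \hat{c}_{b,i}\bigr)^2}
```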
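And the coding gain jm asks for, assuming the trans2d tool computes the standard transform coding gain, i.e. the ratio of the arithmetic to the geometric mean of the coefficient variances, in dB:

```latex
% Transform coding gain in dB from the per-coefficient variances \sigma_i^2:
C_g = 10\log_{10}\frac{\frac{1}{N}\sum_{i=1}^{N}\sigma_i^2}
                      {\bigl(\prod_{i=1}^{N}\sigma_i^2\bigr)^{1/N}}
```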