= Ambisonics =
''This page is part of the XiphWiki, and is aimed at people developing file
formats and associated software for Ambisonics. For a general introduction
to Ambisonics, please go to the
[[Wikipedia:Ambisonics|Wikipedia page on Ambisonics]].''
<br />
'''Ambisonics''' is a surround sound system first developed in the 1970s. <br />
Its main difference from other surround techniques is that it separates <br />
transmission channels from speaker feeds, the speaker feeds being derived <br />
using a decoder situated in the living room. Decoders can be implemented <br />
in either hardware or software. Typically more speakers are used than <br />
transmission channels, and the more speakers used, the more stable the
resulting soundfield. Speakers can be arranged in a number of configurations, <br />
regular polygons being the most popular.<br />
<br />
Ambisonic files can come in a number of different formats. The main one is <br />
called B-Format, the other formats being derived from this. UHJ format is <br />
mono- and stereo-compatible. G-Format is a set of speaker feeds, so can be <br />
enjoyed in surround sound without the need for a decoder in the living room.<br />
<br />
== Ambisonics and 5.1 ==<br />
<br />
Ambisonics and conventional 5.1 surround sound are very different. 5.1 is a
set of speaker feeds, the signal only being fully defined for sounds coming
from a speaker. Phantom images between speakers can be created, but the <br />
technique to do so is left unspecified. Many 5.1 releases use pair-wise <br />
mixing to create phantom images. This is understandable as almost all <br />
stereo recordings are mixed using pair-wise mixing.<br />
<br />
Pair-wise mixing is also called "pan-potting", "amplitude mixing" and <br />
"intensity stereophony". It mixes signals into the feeds for a pair of <br />
speakers to create the illusion that a sound is coming from a point <br />
somewhere between the speakers. During mixing, the apparent location of <br />
each sound is determined only by the relative amplitude of that sound in <br />
the two speakers.<br />
<br />
Unfortunately, pair-wise mixing works poorly when the speakers are to the
rear of the listener and not at all when they are to one side. You can
demonstrate this for yourself by performing <br />
[http://members.tripod.com/martin_leese/Ambisonic/exper.html a very simple experiment].<br />
Pair-wise mixing did not work in the quadraphonic era and it will not work <br />
now. Such an absolute statement can be made because the way that humans <br />
localise sound has not changed.<br />
<br />
Ambisonics is fundamentally different from 5.1. What is encoded in <br />
Ambisonics is not speaker feeds, but ''direction''. When mixing in <br />
Ambisonics, the positions of the speakers are unknown <br />
''and are of no interest''. Further, when Ambisonics is decoded to speaker <br />
feeds all of the speakers cooperate to localise a sound in its correct <br />
position so that, for example, when the speakers on the left push, those on the
right pull. The speakers all contribute to the creation of a single
coherent soundfield.<br />
<br />
=== Ambisonics to 5.1 ===<br />
Converting Ambisonics to 5.1 is straightforward, and is discussed below <br />
(see [[#G-Format|G-Format]]).<br />
<br />
=== 5.1 to Ambisonics ===<br />
Converting 5.1 to Ambisonics is more difficult. It is easy to render the
five speaker feeds as phantom images, called "virtual speakers". (The ".1"
channel can be folded into W.) The problem with this is that even if the <br />
Ambisonic rendering is perfect, the result will only be as good as the <br />
original 5.1 played through ''real'' speakers. It will not be an <br />
improvement. Nobody has yet come up with a way for Ambisonics to improve <br />
5.1; 5.1 is simply too broken.<br />
<br />
== B-Format ==<br />
<br />
B-Format is a single coherent soundfield composed of a set of related <br />
channels. The number of channels used depends on whether the soundfield
is horizontal-only or full-sphere, and on the order. These B-Format <br />
channels are transmission channels, not speaker feeds. Listening to <br />
B-Format requires a decoder in your living room. Some numbers of <br />
channels are tabulated below.<br />
<br />
=== Channel correlation ===<br />
Compression techniques typically make use of channel correlation to <br />
remove redundancy from the audio data, and so improve the compression <br />
ratio.<br />
<br />
The correlation between B-Format channels depends on the content. <br />
Four-channel B-Format consists of an <br />
omni-directional component, called W, and three figure-of-eight <br />
components pointing forward, left and up, called X, Y, Z. <br />
([http://members.tripod.com/martin_leese/Ambisonic/Harmonic.html Pictures are available].) <br />
Three-channel, horizontal-only B-Format simply omits the Z channel. This <br />
means that anything in X also appears in W. Same for Y and Z. (W is <br />
omni-directional; everything appears in W.) Also, if content comes from <br />
Front-Left then it appears equally in X and Y. Same for content from <br />
Front-Right, Back-Left, Back-Right; only the relative polarities change. <br />
So there can be a lot of correlation between B-Format channels, but it is <br />
content dependent.<br />
<br />
One problem with B-Format is that it depends heavily on low-frequency phase.
The phase relationships between the different B-Format channels are important
if the resulting soundfield is to "gel" correctly. This may be a problem
when B-Format channels are compressed using lossy compression.
<br />
There is a file specification in use for downloadable B-Format files <br />
called the <br />
[http://members.tripod.com/martin_leese/Ambisonic/B-Format_file_format.html ".amb" specification].<br />
<br />
=== Limitations of the ".amb" specification ===<br />
The [http://members.tripod.com/martin_leese/Ambisonic/B-Format_file_format.html ".amb" specification] <br />
for downloadable B-Format files is based on the WAVE-EX format. There are <br />
currently over 200 pieces available in this format <br />
[http://www.ambisonia.com for free download]. Most of these are <br />
first-order full-sphere soundfields. (The same website also has details of <br />
[http://www.ambisonia.com/wiki/index.php/Playback_Software ad hoc software decoders].) <br />
Some of the limitations of the specification are: <br />
<br />
#It is limited to 4 GByte files (2 GBytes if somebody screwed up).<br />
#It is limited to third-order soundfields and below. While third-order looks like a lot (16 channels), there already exists a prototype mic that can record up to fourth-order (25 channels).<br />
#No compression (particularly lossless).<br />
<br />
The reason that the ".amb" file specification is limited to third-order <br />
and below is because it uses the number of channels to uniquely define the <br />
soundfield order. Unfortunately this simple and elegant scheme does not <br />
work above third-order as ambiguities creep in. (One ambiguity is <br />
illustrated in the table below.)<br />
<br />
A more general file format will have to use something else, such as <br />
''Malham notation'', or storing both the horizontal-order and <br />
height-order. There is a one-to-one correspondence between Malham notation <br />
and the pair of orders, and either can generate the number of channels.<br />
<br />
==== Malham notation ====<br />
Malham notation specifies the order of a B-Format soundfield using a <br />
string of characters, each character being either '''f''' (for full-sphere) <br />
or '''h''' (for horizontal). The first character in the string specifies <br />
the type of the first-order components, the second character the type of <br />
the second-order components, etc.<br />
<br />
{| class="wikitable" style="text-align:center"<br />
|<br />
|-<br />
!<span style="font-size:80%">Horizontal<br>order</span><br />
!<span style="font-size:80%">Height<br>order</span><br />
!<span style="font-size:80%">Soundfield<br>type</span>
!<span style="font-size:80%">Malham<br>notation</span>
!<span style="font-size:80%">Number of<br>channels</span>
!<span style="font-size:80%">Channels</span><br />
|-<br />
| 1|| 0||horizontal||'''h'''|| 3||WXY<br />
|-<br />
| 1|| 1||full-sphere||'''f'''|| 4||WXYZ<br />
|-<br />
| 2|| 0||horizontal||'''hh'''|| 5||WXYUV<br />
|-<br />
| 2|| 1||mixed-order||'''fh'''|| 6||WXYZUV<br />
|-<br />
| 2|| 2||full-sphere||'''ff'''|| 9||WXYZRSTUV<br />
|-<br />
| 3|| 0||horizontal||'''hhh'''|| 7||WXYUVPQ<br />
|-<br />
| 3|| 1||mixed-order||'''fhh'''|| 8||WXYZUVPQ<br />
|-<br />
| 3|| 2||mixed-order||'''ffh'''|| 11||WXYZRSTUVPQ<br />
|-<br />
| 3|| 3||full-sphere||'''fff'''|| 16||WXYZRSTUVKLMNOPQ<br />
|-<br />
| 4|| 0||horizontal||'''hhhh'''|| 9||extra channels unlabelled
|}<br />
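
For illustration, the correspondence can be computed directly. The following is a minimal Python sketch (purely illustrative, not part of any file specification) that reproduces the table above:

<pre>
def malham(horizontal_order, height_order):
    """Return (Malham notation, number of channels) for a B-Format soundfield."""
    n, m = horizontal_order, height_order
    assert n >= 1 and 0 <= m <= n
    # Orders up to the height order carry full-sphere components ('f');
    # any remaining orders carry horizontal-only components ('h').
    notation = "f" * m + "h" * (n - m)
    # A full-sphere soundfield of order m has (m + 1)^2 channels;
    # each additional horizontal-only order adds 2 channels.
    channels = (m + 1) ** 2 + 2 * (n - m)
    return notation, channels

assert malham(3, 2) == ("ffh", 11)  # matches the table row above
</pre>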
<br />
=== Default channel conversions from B-Format ===<br />
Converting a B-Format file to a mono file is straightforward. Use Mono = <br />
W*sqrt(2).<br />
<br />
Converting a B-Format file to a stereo file is more difficult. The "proper" <br />
way to do this is to convert the W,X,Y channels to two-channel UHJ. <br />
Unfortunately this requires the use of wide-band 90-degree phase shifters. <br />
In the digital domain these are usually implemented as convolution filters.<br />
<br />
Assuming 90-degree phase shifters are unavailable then the problem is one of <br />
choice. Starting from B-Format, it is possible to synthesize ''any'' mic <br />
response pointing in ''any'' direction. Hence, it is possible to synthesize <br />
''all'' coincident stereo mic techniques. Two popular stereo techniques are <br />
''Blumlein Mid-Side'' and ''Blumlein Crossed Pair''.<br />
<br />
==== Blumlein Mid-Side ====<br />
<pre><br />
Mid = (W*sqrt(2)) + X /*This is a cardioid response pointing forward*/<br />
Left = Mid + Y<br />
Right = Mid - Y<br />
</pre><br />
<br />
==== Blumlein Crossed Pair ====<br />
<pre><br />
Left = (X + Y)/sqrt(2) /* (Left, Right) are just the (Y, X) */<br />
Right = (X - Y)/sqrt(2) /* responses rotated by -45 degrees */<br />
</pre><br />
<br />
Which conversion to stereo is better depends on the material and how it was <br />
recorded. A good suggestion is to not specify a ''particular'' default <br />
channel conversion; instead, simply specify that there must be one. If one <br />
has to be specified then Blumlein Crossed Pair is the simpler.<br />
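
For illustration, these default conversions can be written out in a few lines. The following is a minimal Python sketch (illustrative only), with W, X, Y held as numpy arrays of samples:

<pre>
import numpy as np

def bformat_to_mono(W):
    return W * np.sqrt(2.0)

def bformat_to_mid_side(W, X, Y):
    mid = W * np.sqrt(2.0) + X        # cardioid response pointing forward
    return mid + Y, mid - Y           # (Left, Right)

def bformat_to_crossed_pair(X, Y):
    # (Left, Right) are the (Y, X) responses rotated by -45 degrees
    return (X + Y) / np.sqrt(2.0), (X - Y) / np.sqrt(2.0)
</pre>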
<br />
== UHJ format ==<br />
<br />
B-Format is the main format for Ambisonic files. However, B-Format is <br />
not mono- or stereo-compatible. This is why the UHJ hierarchical system <br />
was developed. Depending on the number of channels available, the UHJ <br />
system can carry more or less information, but at all times it is fully <br />
mono- and stereo-compatible. Up to four channels (Left, Right, T, Q) may <br />
be used. The T-channel can also be band-limited but, as this <br />
"2&frac12;-channel UHJ" was only ever used for FM radio transmission, it <br />
will not be discussed further.<br />
<br />
To listen to UHJ files in surround requires a decoder in your living room. <br />
Also, UHJ is restricted to first-order soundfields, either horizontal (two- <br />
and three-channel UHJ) or full-sphere (four-channel UHJ).<br />
<br />
Converting B-Format channels to UHJ channels, and vice versa, requires the <br />
use of wide-band 90-degree phase shifters. In the digital domain these <br />
are usually implemented as convolution filters. Conversion between <br />
four-channel B-Format (W, X, Y, Z) and four-channel UHJ (Left, Right, T, <br />
Q) can be accomplished without loss of information. The same with <br />
three-channel to three-channel (W, X, Y) <=> (Left, Right, T). It is <br />
possible to recover three-channel B-Format (W, X, Y) from two-channel UHJ <br />
(Left, Right), but not without loss. It is also important for the Ambisonic <br />
decoder to be aware that the B-Format channels were recovered from <br />
two-channel UHJ (because of the need to apply different shelf filters).<br />
<br />
Several hundred <br />
[http://members.cox.net/surround/uhjdisc/ambindex.htm two-channel UHJ LPs and CDs] <br />
have been released. Three- and four-channel UHJ recordings have never been <br />
commercially released.<br />
<br />
=== UHJ encoding and decoding equations ===<br />
==== Encoding ====<br />
<pre><br />
S = 0.9396926*W + 0.1855740*X<br />
D = j(-0.3420201*W + 0.5098604*X) + 0.6554516*Y<br />
<br />
Left = (S + D)/2.0<br />
Right = (S - D)/2.0<br />
T = j(-0.1432*W + 0.6512*X) - 0.7071*Y<br />
Q = 0.9772*Z<br />
<br />
where j is a +90 degree phase shift<br />
</pre><br />
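
For illustration, here is a minimal Python sketch of these encoding equations. The wide-band +90 degree phase shifter j is implemented with an FFT-based Hilbert transform, which assumes the whole signal is in memory; real encoders use convolution filters instead, and Hilbert sign conventions vary, so check against a reference implementation:

<pre>
import numpy as np
from scipy.signal import hilbert

def phase_shift_90(x):
    # +90 degrees: cos(wt) -> -sin(wt), i.e. minus the Hilbert transform.
    # Sign convention is an assumption; verify before relying on it.
    return -np.imag(hilbert(x))

def bformat_to_uhj(W, X, Y, Z=None):
    S = 0.9396926 * W + 0.1855740 * X
    D = phase_shift_90(-0.3420201 * W + 0.5098604 * X) + 0.6554516 * Y
    left = (S + D) / 2.0
    right = (S - D) / 2.0
    T = phase_shift_90(-0.1432 * W + 0.6512 * X) - 0.7071 * Y
    Q = 0.9772 * Z if Z is not None else None
    return left, right, T, Q
</pre>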
<br />
==== Decoding ====<br />
For two-channel UHJ:<br />
<pre><br />
S = (Left + Right)/2.0<br />
D = (Left - Right)/2.0<br />
<br />
W = 0.982*S + j*0.164*D<br />
X = 0.419*S - j*0.828*D<br />
Y = 0.763*D + j*0.385*S<br />
<br />
where j is a +90 degree phase shift<br />
</pre><br />
Note that two-channel UHJ requires the player to use different shelf filters than for B-Format.<br />
<br />
For three- and four-channel UHJ:<br />
<pre><br />
S = (Left + Right)/2.0<br />
D = (Left - Right)/2.0<br />
<br />
W = 0.982*S + j*0.197*(0.828*D + 0.768*T)
X = 0.419*S - j(0.828*D + 0.768*T)<br />
Y = 0.796*D - 0.676*T + j*0.187*S<br />
Z = 1.023*Q<br />
<br />
where j is a +90 degree phase shift<br />
</pre><br />
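
Continuing the illustrative sketch from the encoding section, the two-channel UHJ decode looks like this (reusing the phase_shift_90 helper defined there; the different shelf filters mentioned above are not shown):

<pre>
def uhj2_to_bformat(left, right):
    S = (left + right) / 2.0
    D = (left - right) / 2.0
    W = 0.982 * S + phase_shift_90(0.164 * D)
    X = 0.419 * S - phase_shift_90(0.828 * D)
    Y = 0.763 * D + phase_shift_90(0.385 * S)
    return W, X, Y
</pre>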
<br />
There is a file specification for downloadable two-channel UHJ files <br />
called the <br />
[http://members.tripod.com/martin_leese/Ambisonic/UHJ_file_format.html ".uhj" specification], but it is not currently in use.<br />
<br />
=== Limitations of the ".uhj" specification ===<br />
The [http://members.tripod.com/martin_leese/Ambisonic/UHJ_file_format.html ".uhj" specification] <br />
for downloadable two-channel UHJ files is based on the WAVE or WAVE-EX <br />
format. A UHJ chunk is added to the file to indicate it is UHJ. As <br />
unrecognized chunks are always skipped, use of this chunk maintains stereo <br />
compatibility. Some of the limitations of the specification are: <br />
<br />
#It is limited to 4 GByte files (2 GBytes if somebody screwed up).<br />
#It is limited to two-channel UHJ files. Three- and four-channel UHJ are not accommodated.<br />
#No compression.<br />
<br />
The ".uhj" specification is only defined for two-channel UHJ to maintain
stereo compatibility. While it would be possible to add the UHJ chunk to
three- and four-channel WAVE-EX files, the recommendation from Microsoft
for playing such files is that the audio device should render the extra
channels to output ports not in use. This can happen even when the extra
channels are masked off. (Put simply, in WAVE-EX files the channel mask
does ''not'' mask channels.) Because of this, three- and four-channel
WAVE-EX files cannot be made stereo compatible.
<br />
In the Xiph world, it should be possible to use default channel conversions <br />
to ensure that three- and four-channel UHJ files remain stereo compatible.<br />
<br />
=== Default channel conversions from UHJ ===<br />
Converting a UHJ file to a mono file is straightforward. Use Mono = <br />
(Left + Right) / sqrt(2).<br />
<br />
Converting a UHJ file to a stereo file is even easier. Use Left = Left, Right = Right, and discard T and Q if present.<br />
<br />
== G-Format ==<br />
<br />
A G-Format file is any common multi-channel surround file containing an <br />
Ambisonic soundfield pre-decoded to its speaker feeds. This allows <br />
listeners who do not own an Ambisonic decoder to enjoy Ambisonics.<br />
<br />
The sound engineer creates a set of speaker feeds for a particular number
and arrangement of speakers. This is typically four speakers arranged in
a square, but other speaker arrangements are also possible.
<br />
In Ambisonics, all speakers cooperate to localise sounds in any particular <br />
direction; there are no "surround speakers" as such. Because of this, best <br />
results when playing G-Format recordings (and Ambisonics in general) are <br />
obtained when the speakers are matched. The easiest way to accomplish this <br />
is to use identical speakers. Unfortunately, many home theatre systems <br />
include a centre-front speaker which is different from the other speakers.<br />
<br />
An easy way to cope with this is adopted on G-Format recordings commercially
released on DVD-A by [http://www.wyastone.co.uk/nrl/dvd.html Nimbus Records]:
they use four speakers in a square, with the centre-front speaker left unused.
<br />
=== Recovering B-Format from G-Format ===<br />
It is sometimes possible to recover the original B-Format channels from <br />
the G-Format speaker feeds. The recovered B-Format channels can then be <br />
fed to a decoder in the listener's living room, and so accommodate a <br />
speaker arrangement different from the one used when the G-Format file <br />
was produced. Each B-Format channel is recovered using a weighted <br />
combination of the speaker feeds in the G-Format file. The conversion <br />
coefficients required for the B-Format recovery depend on the particular <br />
speaker arrangement chosen by the sound engineer. (Obviously, if a
B-Format version of the file also exists then it can be fed to the <br />
decoder directly without the need for G-Format.)<br />
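
As an illustration only, here is a minimal Python sketch of such a recovery. It assumes the feeds were produced by a plain first-order matrix decode to four speakers in a square, with no shelf filters; the decode matrix itself is an assumption, which is exactly why the proposed ".amg" format stores the real conversion coefficients:

<pre>
import numpy as np

speaker_azimuths = np.radians([45.0, 135.0, 225.0, 315.0])  # assumed square rig

# Assumed plain decode: feed_i = W/sqrt(2) + X*cos(az_i) + Y*sin(az_i)
decode = np.column_stack([
    np.full(4, 1.0 / np.sqrt(2.0)),
    np.cos(speaker_azimuths),
    np.sin(speaker_azimuths),
])

recover = np.linalg.pinv(decode)  # least-squares inverse: feeds -> (W, X, Y)

def gformat_to_bformat(feeds):
    """feeds: array of shape (4, num_samples); returns shape (3, num_samples)."""
    return recover @ feeds
</pre>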
<br />
File formats for G-Format include all multi-channel formats that contain <br />
speaker feeds. However, these will not contain information to allow the <br />
B-Format channels to be automatically recovered. A [http://members.tripod.com/martin_leese/Ambisonic/G-Format_chunk.html ".amg" file format] <br />
(based on WAVE-EX) for downloadable G-Format files, which will allow <br />
the B-Format channels to be automatically recovered, has been proposed. <br />
Such file formats have the advantage of storing the conversion <br />
coefficients at the time the G-Format file is created. This is the only <br />
time the required information is readily available.<br />
<br />
=== Default channel conversions from G-Format ===<br />
Converting a G-Format file to a mono or stereo file is straightforward. <br />
First, recover the B-Format channels using the conversion coefficients <br />
contained in the file. Second, follow the advice given above for <br />
[[#Default channel conversions from B-Format|Default channel conversions from B-Format]].<br />
<br />
== Resources on Ambisonics ==<br />
<br />
*There is a set of [http://en.wikipedia.org/wiki/Ambisonics Wikipedia articles on Ambisonics].<br />
*Of particular relevance is the [http://members.tripod.com/martin_leese/Ambisonic/B-Format_file_format.html ".amb" specification] in use for downloadable B-Format files. However the ".amb" spec has some limitations which it would be useful to overcome.<br />
*There is also the [http://members.tripod.com/martin_leese/Ambisonic/UHJ_file_format.html ".uhj" specification] for downloadable two-channel UHJ files, but it is not currently in use. The ".uhj" spec also has some limitations which it would be useful to overcome.<br />
*[http://members.tripod.com/martin_leese/Ambisonic/ This website] has many pages on Ambisonics (including at the bottom links to other Ambisonic websites).<br />
*[http://www.ambisonic.net/ Ambisonic.Net website] includes a detailed series of descriptive and practical articles on current and past Ambisonic techniques with links to tools, other sites and additional material.<br />
*[http://ambisonic.info/info/ricardo.html Richard Lee's page on Ambisonics] contains articles on shelf filters and the design of Ambisonic decoders.<br />
<br />
[[Category:Developers stuff]]

= OggKate =

== Disclaimer ==
This is not a Xiph codec, though it may be embedded in Ogg alongside other Xiph
codecs, such as Vorbis and Theora. As such, please do not assume that Xiph has
anything to do with this, much less any responsibility for it.
<br />
== What is Kate? ==<br />
<br />
Kate is an overlay codec, originally designed for karaoke and text, that can be
multiplexed in Ogg. Text and images can be carried by a Kate stream, and animated.
Most of the time, this would be multiplexed with audio/video to carry subtitles,
song lyrics (with or without karaoke data), etc, but it doesn't have to be.
<br />
Series of curves (splines, segments, etc) may be attached to various properties<br />
(text position, font size, etc) to create animated overlays. This allows scrolling<br />
or fading text to be defined. This can even be used to draw arbitrary shapes, so<br />
hand drawing can also be represented by a Kate stream.<br />
<br />
Example uses of Kate streams are movie subtitles for Theora videos, either text based,
as may be created by ffmpeg2theora, or image based, such as those created by
[http://thoggen.net Thoggen] (patching needed), and lyrics, as created by oggenc,
from vorbis-tools.
<br />
== Why a new codec? ==<br />
<br />
As I was adding support for Theora, Speex and FLAC to some software of mine, I found myself<br />
wanting to have song lyrics accompanying Vorbis audio. Since Vorbis comments are limited to<br />
the headers, one can't add them in the stream as they are sung, so another multiplexed stream<br />
would be needed to carry them.<br />
<br />
The three possible bases I found for such a codec were Writ, CMML, and OGM/SRT.
<br />
*[[OggWrit|Writ]] is an unmaintained start at an implementation of a very basic design; though I did find an encoder/decoder in py-ogg2 later on, it was quicker to write Kate from scratch anyway.
*[[CMML]] is more geared towards encapsulating metadata about an accompanying stream, rather than being a data stream itself, and seemed complex for a simple use, though I have now revised my view on this - besides, it seems designed for Annodex (which I haven't had a look at), though it does seems relatively generic for use outwith Annodex - though it is being "repurposed" as timed text now, bringing it closer to what I'm doing<br />
*OGM/SRT, which I only found when I added Kate support to MPlayer, is shoehorning various data formats into an Ogg stream, and just dumps the SRT subtitle format as is, AFAICS (though I haven't looked at this one in detail, since I'd already had a working Kate implementation by that time)<br />
<br />
I then decided to roll my own, not least because it's a fun thing to do.<br />
<br />
I found other formats, such as USF (designed for inclusion in Matroska) and various subtitle formats,<br />
but none were designed for embedding inside an Ogg container.<br />
<br />
== Overview of the Kate bitstream format ==<br />
<br />
I've taken much inspiration from Vorbis and Theora here.<br />
Headers and packets (as well as the API design) follow the design of these two codecs.<br />
<br />
A rough overview (see [[#Format specification|Format specification]] for more details) is:<br />
<br />
Headers packets:<br />
*ID header [BOS]: magic, version, granule fraction, encoding, language, etc<br />
*Comment header: Vorbis comments, as per Vorbis/Theora streams<br />
*Style definitions header: a list of predefined styles to be referred to by data packets<br />
*Region definitions header: a list of predefined regions to be referred to by data packets<br />
*Curves definitions header: a list of predefined curves to be referred to by data packets<br />
*Motion definitions header: a list of predefined motions to be referred to by data packets<br />
*Palette definitions header: a list of predefined palettes to be referred to by data packets<br />
*Bitmap definitions header: a list of predefined bitmaps to be referred to by data packets<br />
*Font mapping definitions header: a list of predefined font mappings to be referred to by data packets<br />
<br />
Other header packets are ignored, and left for future expansion.<br />
<br />
Data packets:<br />
*text data: text/image and optional motions, accompanied by optional overrides for style, region, language, etc<br />
*keepalive: can be emitted at any time to help a demuxer know where we're at, but those packets are optional<br />
*repeats: a verbatim repeat of a text packet's payload, in order to bound any backward seeking needed when starting to play a stream partway through. These are also optional.<br />
*end data [EOS]: marks the end of the stream, it doesn't have any useful payload<br />
<br />
Other data packets are ignored, and left for future expansion.<br />
<br />
The intent of the "keepalive" packet is to be sent at regular<br />
intervals when no other packet has been emitted for a while. This would be to help seeking code<br />
find a kate page more easily.<br />
<br />
Things of note:<br />
*Kate is a discontinuous codec, as defined in [http://www.xiph.org/ogg/doc/ogg-multiplex.html ogg-multiplex.html] in the Ogg documentation, which means it's timed by start granule, not end granule (as Theora and Vorbis are).
* All data packets are on their own page, for two reasons:<br />
**Ogg keeps track of granules at the page level, not the packet level<br />
**if no text event happens for a while after a particular text event, we don't want to delay it so a larger page can be issued<br />
<br />
See also [[#Seeking and memory|Problems to solve: Seeking and memory]].<br />
<br />
*The granule encoding is not a direct time/granule correspondence; see the granule encoding section.
*The EOS packet should have a granule pos higher or equal to the end time of all events.<br />
*User code doesn't have to know the number of headers to expect, this is moved inside the library code (as opposed to Vorbis and Theora).<br />
*The format contains hooks so that additional information may be added in future revisions while keeping backward compatibility (old decoders will correctly parse, but ignore, the new information).
<br />
== Format specification ==<br />
<br />
The Kate bitstream format consists of a number of sequential packets.<br />
Packets can be either header packets or data packets. All header packets<br />
must appear before any data packet.<br />
<br />
Header packets must appear in order. Decoding of a data packet is not<br />
possible until all header packets have been decoded.<br />
<br />
Each Kate packet starts with a one byte type. A type with the MSB set
(ie, between 0x80 and 0xff) indicates a header packet, while a type with
the MSB cleared (ie, between 0x00 and 0x7f) indicates a data packet.
All header packets then have the Kate magic, from byte offset 1 to byte<br />
offset 7 ("kate\0\0\0"). Note that this applies only to header packets:<br />
data packets do not contain the Kate signature.<br />
<br />
Since the ID header must appear first, a Kate stream can be recognized<br />
by comparing the first eight bytes of the first packet with the signature<br />
string "\200kate\0\0\0".<br />
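
For illustration, a minimal Python sketch of this detection, using exactly the byte values described above:

<pre>
KATE_SIGNATURE = b"\x80kate\x00\x00\x00"

def is_kate_bos_packet(packet: bytes) -> bool:
    """True if the first packet of a logical stream starts a Kate stream."""
    return packet[:8] == KATE_SIGNATURE

def is_kate_header_packet(packet: bytes) -> bool:
    # Header packets have the MSB of the type byte set and carry the magic.
    return (len(packet) >= 8 and packet[0] & 0x80 != 0
            and packet[1:8] == b"kate\x00\x00\x00")
</pre>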
<br />
<br />
When embedded in Ogg, the first packet in a Kate stream (always packet type 0x80,
the ID header packet) must be placed on a separate page. The corresponding Ogg
packet must be marked as beginning of stream (BOS). All subsequent header packets
must be on one or more pages. Each data packet must then be on a separate
page.
<br />
The last data packet must be the end of stream packet (packet type 0x7f).<br />
<br />
When embedded in Ogg, the corresponding Ogg packet must be marked as end of stream (EOS).<br />
<br />
As per the Ogg specification, granule positions must be non-decreasing
within the stream. Header packets have granule position 0.
<br />
Currently existing packet types are:<br />
:headers:<br />
::0x80 ID header (BOS)<br />
::0x81 Vorbis comment header<br />
::0x82 regions list header<br />
::0x83 styles list header<br />
::0x84 curves list header<br />
::0x85 motions list header<br />
::0x86 palettes list header<br />
::0x87 bitmaps list header<br />
::0x88 font ranges and mappings header<br />
:data:<br />
::0x00 text data (including optional motions and overrides)<br />
::0x01 keepalive<br />
::0x02 repeat<br />
::0x7f end packet (EOS)<br />
<br />
<br />
<br />
The format described here is for bitstream version 0.x.
As of 19 December 2008, the latest bitstream version is 0.4.
<br />
For more detailed information, refer to the format documentation<br />
in libkate (see URL below in the [[#Downloading|Downloading]] section).
<br />
Following is the definition of the ID header (packet type 0x80).<br />
This works out to a 64 byte ID header. This is the header that should be<br />
used to detect a Kate stream within an Ogg stream.<br />
<br />
<br />
<br />
<pre>
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 | Byte
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| packtype      | Identifier char[7]: 'kate\0\0\0'              | 0-3
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| kate magic continued                                          | 4-7
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| reserved - 0  | version major | version minor | num headers   | 8-11
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| text encoding | directionality| reserved - 0  | granule shift | 12-15
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| cw sh | canvas width          | ch sh | canvas height         | 16-19
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| reserved - 0                                                  | 20-23
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| granule rate numerator                                        | 24-27
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| granule rate denominator                                      | 28-31
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| language (NUL terminated)                                     | 32-35
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| language (continued)                                          | 36-39
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| language (continued)                                          | 40-43
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| language (continued)                                          | 44-47
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| category (NUL terminated)                                     | 48-51
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| category (continued)                                          | 52-55
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| category (continued)                                          | 56-59
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| category (continued)                                          | 60-63
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
</pre>
<br />
<br />
<br />
The fields cw sh, canvas width, ch sh, and canvas height were introduced
in bitstream 0.3. Earlier bitstreams will have 0 in these fields.
<br />
Language and category are NUL-terminated ASCII strings.
Language follows RFC 3066, though the 16-byte field obviously will not
accommodate language tags with lots of subtags.
<br />
Category is currently loosely defined, and I haven't yet found a nice way to
present it in a generic way, but it is meant for automatic classifying of
various multiplexed Kate streams (eg, to recognize that some streams are
subtitles (in a set of languages), and some others are commentary (in a
possibly different set of languages), etc).
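
For illustration, here is a minimal Python sketch that parses the fixed fields of the ID header laid out above. Multi-byte integers are assumed to be little-endian, and the packed canvas size fields (bytes 16-19) are skipped; the format documentation in libkate is authoritative:

<pre>
import struct

def parse_kate_id_header(packet: bytes) -> dict:
    """Parse the 64-byte Kate ID header; layout per the diagram above."""
    if len(packet) < 64 or packet[:8] != b"\x80kate\x00\x00\x00":
        raise ValueError("not a Kate ID header")
    return {
        "version_major": packet[9],
        "version_minor": packet[10],
        "num_headers": packet[11],
        "text_encoding": packet[12],
        "directionality": packet[13],
        "granule_shift": packet[15],
        # bytes 16-19 (packed canvas size fields) are skipped here
        "gps_numerator": struct.unpack_from("<I", packet, 24)[0],
        "gps_denominator": struct.unpack_from("<I", packet, 28)[0],
        "language": packet[32:48].split(b"\x00")[0].decode("ascii"),
        "category": packet[48:64].split(b"\x00")[0].decode("ascii"),
    }
</pre>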
<br />
== API overview ==<br />
<br />
libkate offers an API very similar to that of libvorbis and libtheora, as well as<br />
an extra higher level decoding API.<br />
<br />
Here's an overview of the three main modules:<br />
<br />
=== Decoding ===<br />
<br />
Decoding is done in a way similar to libvorbis. First, initialize a kate_info and a<br />
kate_comment structure. Then, read headers by calling kate_decode_headerin. Once<br />
all headers have been read, a kate_state is initialized for decoding using kate_decode_init,<br />
and kate_decode_packetin is called repeatedly with data packets. Events (eg, text) can be<br />
retrieved via kate_decode_eventout.<br />
<br />
=== Encoding ===<br />
<br />
Encoding is also done in a way similar to libvorbis. First initialize a kate_info<br />
and a kate_comment structure, and fill them out as needed. kate_encode_headers will<br />
create ogg packets from those. Then, kate_encode_text is called repeatedly for all<br />
the text events to add. When done, calling kate_encode_finish will create an end of<br />
stream packet.<br />
<br />
=== High level decoding API ===<br />
<br />
There are only 3 calls here:<br />
<br />
kate_high_decode_init<br />
kate_high_decode_packetin<br />
kate_high_decode_clear<br />
<br />
Here, all Ogg packets are sent to kate_high_decode_packetin, which does the right<br />
thing (header/data classification, decoding, and event retrieval). Note that you<br />
do not get access to the comments directly using this, but you do get access to the<br />
kate_info via events.<br />
<br />
The libkate distribution includes commented examples for each of those.<br />
<br />
Additionally, libkate includes a layer (liboggkate) to make it easier to use when<br />
embedded in Ogg. While the normal API uses kate_packet structures, liboggkate uses<br />
ogg_packet structures.<br />
<br />
The High level decoding API does not have an Ogg specific layer, but functions exist<br />
to wrap a kate_packet around a memory buffer (such as the one ogg_packet uses, for instance).<br />
<br />
== Support ==<br />
<br />
Among the software with Kate support:<br />
*VLC<br />
*ffmpeg2theora<br />
*liboggz<br />
*liboggplay<br />
*Cortado (wikimedia version)<br />
*vorbis-tools<br />
<br />
I have patches for the following with Kate support:<br />
*MPlayer<br />
*xine<br />
*GStreamer<br />
*Thoggen<br />
*Audacious<br />
*and more...<br />
<br />
These may be found in the libkate source distribution (see [[#Downloading|Downloading]]<br />
for links).<br />
<br />
In addition, libtiger is a rendering library for Kate streams using Pango and Cairo,<br />
though it is not quite yet API stable (though no major changes are expected).<br />
<br />
== Granule encoding ==<br />
<br />
=== Ogg ===<br />
<br />
Ogg leaves the encoding of granules up to a particular codec, only
mandating that granules be non-decreasing with time.
<br />
The Kate bitstream format uses a linear mapping between time and<br />
granule, described here.<br />
<br />
A Kate granule position is composed of two different parts:<br />
- a base granule, in the high bits<br />
- a granule offset, in the low bits<br />
<br />
+----------------+----------------+<br />
| base | offset |<br />
+----------------+----------------+<br />
<br />
The number of bits these parts occupy is variable, and each stream<br />
may choose how many bits to dedicate to each. The kate_info structure<br />
for a stream holds that information in the granule_shift field,<br />
so each part may be reconstructed from a granulepos.<br />
<br />
The timestamp T of a given Kate packet is split into a base B and an
offset O, and these are stored in the granulepos of that packet.
The split is done such that B is the time of the earliest event
still active at that time, and O is the time elapsed between B
and T. Thus, T = B + O. This mimics the way Theora stores its own
timestamps in granulepos, where the base acts as a keyframe, and
the offset acts as the position of a delta frame relative to the
previous keyframe. Since Kate allows time-overlapping events,
however, the choice of the base to use is slightly more complex:
it may not be the starting time of the previous event.
<br />
The kate_info structure for a stream holds a rational fraction<br />
representing the time span of granule units for both the base and<br />
the offset parts.<br />
<br />
The granule rate is defined by the two fields:<br />
<br />
kate_info::gps_numerator<br />
kate_info::gps_denominator<br />
<br />
<br />
The number of bits reserved for the offset is defined by the field:<br />
<br />
kate_info::granule_shift<br />
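
For illustration, a minimal Python sketch of splitting and rebuilding a granulepos under this scheme (the seconds conversion assumes the granule rate fraction expresses granules per second, as with Vorbis and Theora):

<pre>
def make_granulepos(base, offset, granule_shift):
    assert 0 <= offset < (1 << granule_shift)
    return (base << granule_shift) | offset

def split_granulepos(granulepos, granule_shift):
    base = granulepos >> granule_shift
    offset = granulepos & ((1 << granule_shift) - 1)
    return base, offset

def granules_to_seconds(granules, gps_numerator, gps_denominator):
    # granule rate = gps_numerator/gps_denominator granules per second
    return granules * gps_denominator / gps_numerator
</pre>

The timestamp of the packet itself is then base + offset, converted to seconds with the granule rate.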
<br />
=== Generic timing ===<br />
<br />
Kate data packets (data packet type 0x00) include timing information (start time,
end time, and time of the earliest event still active). All of these are stored
as 64-bit values at the rate defined by the granule rate, so they do not suffer
from the granule_shift space limitation.
<br />
This also allows for Kate streams to be stored in other containers.<br />
<br />
== Motion ==<br />
<br />
The Kate bitstream format includes motion definitions, originally for karaoke
purposes, but these can be used for more general purposes, such as line based
drawing, or animation of the text (position, color, etc).
<br />
Motions are defined by means of a series of curves (static points, segments, splines (Catmull-Rom, Bezier, and B-splines)).
A 2D point can be obtained from a motion for any timestamp during the lifetime of a text.
This can be used for moving a marker in 2D above the text for karaoke, or to use the x
coordinate to color text when the motion position passes each letter or word, etc.
Motions have an attached semantics so the client code knows how to use a particular motion.
Predefined semantics include text color, text position, etc.
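
For illustration, a minimal Python sketch (unrelated to the actual libkate API) of evaluating such a motion at a given timestamp, with linear curves only and 'none' curves producing no point:

<pre>
def eval_motion(curves, t):
    """curves: list of (start, end, kind, points); returns (x, y) or None."""
    for start, end, kind, points in curves:
        if start <= t <= end and end > start:
            if kind == "none":
                return None                    # discontinuous section: no point
            u = (t - start) / (end - start)    # normalized position in curve
            (x0, y0), (x1, y1) = points[0], points[-1]
            return (x0 + u * (x1 - x0), y0 + u * (y1 - y0))
    return None
</pre>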
<br />
Since a motion can be composed of an arbitrary number of curves, each of which may have<br />
an arbitrary number of control points, complex motions can be achieved. If the motion is<br />
the main object of an event, it is even possible to have an empty text, and use the motion<br />
as a virtual pencil to draw arbitrary shapes. Even on-the-fly handwriting subtitles could<br />
be done this way, though this would require a lot of control points, and would not be able<br />
to be used with text-to-speech.<br />
<br />
As a proof of concept, I also have a "draw chat" program where two people can draw, and<br />
the shapes are turned to b-splines and sent as a kate motion to be displayed on the other<br />
person's window.<br />
<br />
It is also possible for motions to be discontinuous - simply insert a curve of 'none' type.<br />
While the timestamp lies within such a curve, no 2D point will be generated. This can be<br />
used to temporarily hide a marker, for instance.<br />
<br />
It is worth mentioning that pauses in the motion can be trivially included by
inserting, at the right time and for the right duration, a simple linear
interpolation curve with only two equal points, equal to the position the
motion is supposed to pause at.
<br />
Kate defines a set of predefined mappings so that each decoder user interprets a motion in<br />
the same way. A mapping is coded on 8 bits in the bitstream, and the first 128 are reserved<br />
for Kate, leaving 128 for application specific mappings, to avoid constraining creative uses<br />
of that feature. Predefined mappings include frame (eg, 0-1 points are mapped to the size of
the current video frame), or region, to scale 0-1 to the current region. This allows curves
to be defined without knowing in advance the pixel size of the area they should cover.
<br />
For uses which require more than two coordinates (eg, text color, where 4 (RGBA) values are
needed), Kate predefines the semantics text_color_rg and text_color_ba, so a 4D point can be
obtained using two different motions.
<br />
There are higher level constructs, such as morphing between two styles, or predefined<br />
karaoke effects. More are planned to be added in the future.<br />
<br />
See also [[#Trackers|Trackers]].<br />
<br />
== Trackers ==<br />
<br />
Since attaching motions to text position, etc, makes it hard for the client to keep track of<br />
everything, doing interpolation, etc, the library supplies a tracker object, which handles the<br />
interpolation of the relevant properties.<br />
Once initialized with a text and a set of motions, the client code can give the tracker a new<br />
timestamp, and get back the current text position, text color, etc.<br />
<br />
Using a tracker is not necessary, if one wants to use the motions directly, or just ignore them,
but it makes life easier, especially considering that the order in which motions are applied
does matter (to be defined formally, but the current source code is informative at this point).
<br />
<br />
== The Kate file format ==<br />
<br />
Though this is not a feature of the bitstream format, I have created a text file format to<br />
describe a series of events to be turned into a Kate bitstream.<br />
At its minimum, the following is a valid input to the encoder:<br />
<br />
: kate {<br />
:: event { 00:00:05 --> 00:00:10 "This is a text" }<br />
: }<br />
<br />
This will create a simple stream with "This is a text" emitted at an offset of 5 seconds into<br />
the track, lasting 5 seconds to an end time at 10 seconds.<br />
<br />
Motions, regions, styles can be declared in a definitions block to be reused by events, or can<br />
be defined inline. Defining those in the definitions block places them in a header so they can<br />
be reused later, saving space. However, they can also be defined in each event, so they will be<br />
sent with the event. This allows them to be generated on the fly (eg, if the bitstream is being<br />
streamed from a realtime input).<br />
<br />
For convenience, the Kate file format also allows C style macros, though without parameters.<br />
<br />
Please note that the Kate file format is fully separate from the Kate bitstream format. The<br />
difference between the two is similar to the difference between a C source file and the resulting<br />
object file, when compiled.<br />
<br />
Note that the format is not based on XML for a very parochial reason: I very much dislike
editing XML by hand, as it's really hard to read. XML is really meant for machines to
generically parse text data in a shared syntax but with possibly unknown semantics, and I
need these text representations to be easily editable by hand.
<br />
This also implies that there could be an XML representation of a Kate stream, which would be<br />
useful if one were to make an editor that worked on a higher level than the current all-text<br />
representation, and it is something that might very well happen in the future, in parallel with<br />
the current format.<br />
<br />
== Karaoke ==<br />
<br />
Karaoke effects rely on motions, and there will be predefined higher level ways of specifying<br />
timings and effects, two of which are already done. As an example, this is a valid Karaoke script:<br />
<br />
:kate {<br />
:: simple_timed_glyph_style_morph {<br />
::: from style "start_style" to style "end_style"<br />
::: "Let " at 1.0<br />
::: "us " at 1.2<br />
::: "sing " at 1.4<br />
::: "to" at 2.0<br />
::: "ge" at 2.5<br />
::: "ther" at 3.0<br />
:: }<br />
:}<br />
<br />
The syllables will change from one style to another as time passes. The definition of the start_style
and end_style styles is omitted for brevity.<br />
<br />
<br />
== Problems to solve ==<br />
<br />
There are a few things to solve before the Kate bitstream format can be considered good<br />
enough to be frozen:<br />
<br />
Note: the following is mostly solved, and the bitstream is now stable, and has been<br />
backward and forward compatible since the first released version. This will be updated<br />
when I get some time.<br />
<br />
=== Seeking and memory ===<br />
<br />
When seeking to a particular time in a movie with subtitles, we may end up at a place where a subtitle has already started, but has not yet been removed. Pure streaming doesn't have this problem, as it remembers the subtitle being issued (as opposed to, say, Vorbis, for which all data valid now is decoded from the last packet). With Kate, a text string valid now may have been issued long ago.
<br />
I see three possible ways to solve this:<br />
*each data packet includes the granule of the earliest still active packet (if none, this will be the granule of this very packet)<br />
**this means seeks are two phased: first seek, find the next Kate packet, and seek again if the granule of the earlier still active packet is less than the original seeked granule. This implies support code on players to do the double seek.<br />
<br />
*use "reference frames", a bit like Theora does, where the granule position is split in several fields: the higher bits represent a position for the reference frame, and the lowest bits a delta time to the current position. When seeking to a granule position, the lower bits are cleared off, yielding the granule position of the previous reference frame, so the seek ends up at the reference frame. The reference frame is a sync point where any active strings are issued again. This is a variant of the method described in the Writ wiki page, but the granule splitting avoids any "downtime".<br />
**this requires reissuing packets, and it doesn't feel right (and wastes space).<br />
**it also requires "dummy" decoding of Kate data from the reference frame to the actual seek point to fully refresh the state "memory".<br />
<br />
*A variant of the two-granules-in-one system used by libcmml, where the "back link" points to the earliest still active string, rather than the previous one (this allows a two phase seek, rather than a multiphase seek, hopping back from event to event, with no real way to know if there is or not a previous event which is still active - I suppose CMML has no need to know this, if their "clips" do not overlap - mine can do).<br />
**Such a system considerably shortens the usable granule space, though it can do a one phase seek, if I understand the system correctly, which I am not certain.<br />
*** Well, it seems it can't do a one phase seek anyway.<br />
<br />
*Additionally, it could be possible to emit simple "keepalive" packets at regular intervals to help a seek algorithm to sync up to the stream without needing too much data reading - this helps for discontinuous streams where there could be no pages for a while if no data is needed at that time.<br />
<br />
=== Text encoding ===<br />
<br />
A header field declares the text encoding used in the stream. At the moment, only UTF-8 is<br />
supported, for simplicity. There are no plans to support other encodings, such as UTF-16,<br />
at the moment.<br />
<br />
Note that strings included in the header (language, category) are not affected by that
text encoding (rather obviously for language itself). These are ASCII.
<br />
The actual text in events may include simple HTML-like markup (at the moment, allowed markup<br />
is the same as the one Pango uses, but more markup types may be defined in the future).<br />
It is also possible to ask libkate to remove this markup if the client prefers to receive<br />
plain text without the markup.<br />
<br />
=== Language encoding ===<br />
<br />
A header field defines the language (if any) used in the stream (this can be overridden in a<br />
data packet, but this is not relevant to this point). At the moment, my test code uses<br />
ISO 639-1 two letter codes, but I originally thought to use RFC 3066 tags. However, matching<br />
a language to a user selection may be simpler for user code if the language encoding is kept<br />
simple. At the moment, I tend to favor allowing both two letter tags (eg, "en") and secondary<br />
tags (like "en_EN"), as RFC 3066 tags can be quite complex, but I welcome comments on this.<br />
<br />
If a stream contains more than one language, there usually is a predominant language, which<br />
can be set as the default language for the stream. Each event can then have a language<br />
override. If there is no predominant language, and it is not possible to split the stream<br />
into multiple substreams, each with its own language, then it is possible to use the "mul"<br />
language tag, as a last resort.<br />
<br />
=== Bitstream format for floating point values ===<br />
<br />
Floating point values are turned into a 16.16 fixed point format, then stored in a bitpacked
format, storing the number of zero bits at the head and tail of the fixed point values once
per stream, and the remainder bits for all values in the stream. This seems to yield good results
(typically a 50% reduction over 32 bit raw writes, and 70% over the snprintf based storage), and
has the big advantage of being portable (eg, independent of any IEEE format).
However, this means reduced precision due to the quantization to 16.16. I may add support for
variable precision (eg, 8.24 fixed point formats) to alleviate this. This would however mean less
space savings, though these are likely to be insignificant when Kate streams are interleaved with
a video.
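
For illustration, a minimal Python sketch of the analysis side of this packing (sign handling and the all-zero stream are glossed over):

<pre>
def pack_fixed_point(values):
    """Quantize floats to 16.16 and find the common head/tail zero bits."""
    fixed = [int(round(v * 65536.0)) & 0xFFFFFFFF for v in values]
    used = 0
    for f in fixed:
        used |= f                           # OR of all values: which bits are used
    head = 32 - used.bit_length()           # zero bits common to every head
    tail = (used & -used).bit_length() - 1  # zero bits common to every tail
    bits_per_value = 32 - head - tail       # bits actually written per value
    payload = [f >> tail for f in fixed]    # remainder bits for each value
    return head, tail, bits_per_value, payload
</pre>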
<br />
*Though this is not a Kate issue per se, the motion feature is very difficult to use without a curve editor. While tools may be coded to create a Kate bitstream for various existing subtitle formats, it is not certain it will be easy to find a good authoring tool for a series of curves. That said, it's not exactly difficult to do if you know a widget set.<br />
<br />
=== Higher dimensional curves/motions ===<br />
<br />
It is quite annoying to have to create two motions to control a color change, due to curves<br />
being restricted to two dimensions. I may add support for arbitrary dimensions. It would also<br />
help for 1D motions, like changing the time flow, where one coordinate is simply ignored at<br />
the moment.<br />
Alternatively, changes could be made to the Kate file format to hide the two dimensionality and<br />
allow simpler specification of non-2 dimensional motions, but still map them to 2D in the kate<br />
bitstream format.<br />
<br />
=== Category definition ===<br />
<br />
The category field in the BOS packet is a 16 byte text field (15 really, as it is zero terminated<br />
in the bitstream itself). Its goal is to provide the reader with a short description of what kind<br />
of information the stream contains, eg subtitles, lyrics, etc. This would be displayed to the user,<br />
possibly to allow the user to choose to turn some streams on and off.
<br />
Since categories are meant primarily for a machine to parse, they will be kept to ASCII. When
a player recognizes a category, it is free to replace its name with one in the user's language if<br />
it prefers. Even in English, the "lyrics" category could be displayed by a player as "Lyrics".<br />
<br />
Since this is a free text field rather than an enumeration, it would be good to have a list of<br />
common predefined category names that Kate streams can use.<br />
<br />
This is a list of proposed predefined categories, feedback/additions welcome:<br />
<br />
* subtitles - the usual movie subtitles, as text<br />
* spu-subtitles - movie subtitles in DVD style paletted images<br />
* lyrics - song lyrics<br />
<br />
Please remember the 15 character limit if proposing other categories.<br />
<br />
Note that the list of categories is subject to change, and will likely<br />
be replaced by new, more "identifier like" ones. The three above,
however, would be kept for backward compatibility as they're already used.<br />
<br />
== Text to speech ==<br />
<br />
One of the goals of the Kate bitstream format is that text data can be easily parsed
by the user of the decoder, so any additional information, such as style, placement,
karaoke data, etc, can be stripped to leave only the bare text. This is in view of
allowing text-to-speech software to use Kate bitstreams as a bandwidth-cheap way of
conveying speech data, and could also allow things like e-books which can be either
read or listened to from the same bitstream. I have seen no reference to this being
used anywhere, but I see no reason why the granule progression should be temporal
rather than user controlled, such as by using a "next" button which would bump a
granule position by a preset amount, simulating turning a page. (This would be close
to necessary for text-to-speech, as the wall time duration of the spoken speech is
not known in advance to the Kate encoder, and can't be mapped to a time based
granule progression.) All text strings triggered consecutively between the two
granule positions would then be read in order.
<br />
== Possible additions ==<br />
<br />
=== Embedded binary data ===<br />
<br />
Images and font mappings can be included within a Kate stream.<br />
<br />
==== Images ====<br />
<br />
Though this could be misused to interfere with the ability to render as text-to-speech,
Kate can use images as well as text. The same caveat as for fonts applies with regard
to data duplication.
<br />
Complex images might however be best left to a multiplexed OggSpots or OggMNG stream,
unless the images mesh with the text (eg, graphical exclamation points, custom fonts
(see next paragraph), etc).
<br />
There is support for simple paletted bitmap images, with a variable length palette of up<br />
to 256 colors (in fact, sized in powers of 2 up to 256) and matching pixel data in as<br />
many bits per pixel as can address the palette. Palettes and images are stored separately,<br />
so can be used with one another with no fixed assignment.<br />
<br />
Palettes and bitmaps are put in two separate headers for later use by reference, but can
also be placed in data packets, as with motions, etc, if they are not going to be reused.
<br />
PNG bitmaps can also be embedded in a Kate stream. These do not have associated palettes<br />
(but the PNGs themselves may or may not be paletted). There is no support for decoding PNG<br />
images in libkate itself, so a program will have to use libpng (or similar code) to decode<br />
the PNG image. For instance, the libtiger rendering library uses Cairo to decode and render<br />
PNG images in Kate streams.<br />
<br />
This can be used to have custom fonts, so that raw text is still available if the stream<br />
creator wants a custom look.<br />
<br />
I expect that the need for more than 256 colors in a bitmap, or non palette bitmap data,<br />
would be best handled by another codec, eg OggMNG or OggSpots. The goal of images in a<br />
Kate stream is to mesh the images with the text, not to have large images by themselves.<br />
<br />
On the other hand, interesting Karaoke effects could be achieved by having MNG images<br />
instead of simple paletted bitmaps in a Kate streams. Comments would be most welcome on<br />
whether this is going too far, however.<br />
<br />
I am also investigating SVG images. These allow for very small footprint images for simple<br />
vector drawings, and could be very useful for things like background gradients below text.<br />
<br />
A possible solution to the duplication issue is to have another stream in the container<br />
stream, which would hold the shared data (eg, fonts), which the user program could load,<br />
and which could then be used by any Kate (and other) stream. Typically, this type of stream<br />
would be a degenerate stream with only header packets (so it is fully processed before any<br />
other stream presents data packets that might make use of that shared data), and all payload<br />
such as fonts being contained within the headers. Thinking about it, it has parallels with<br />
the way Vorbis stores its codebooks within a header packet, or even the way Kate stores the<br />
list of styles within a header packet.<br />
<br />
==== Fonts ====<br />
<br />
Custom fonts are merely a set of ranges mapping Unicode code points to bitmaps. As this implies,<br />
fonts are bitmap fonts, not vector fonts, so scaling, if supported by the rendering client,<br />
may not look as good as with a vector font.<br />
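<br />
As an illustration of that idea (the structures and lookup function below are hypothetical, made up for this example, and not the libkate API):<br />
<br />
 /* A custom font as a set of ranges, each mapping a contiguous run of<br />
    Unicode code points to bitmaps (one bitmap per code point). */<br />
 typedef struct {<br />
     unsigned long first;   /* first Unicode code point in the range */<br />
     unsigned long last;    /* last code point, inclusive */<br />
     const int *bitmaps;    /* last-first+1 indices into the stream's bitmaps */<br />
 } font_range;<br />
 <br />
 typedef struct {<br />
     int nranges;<br />
     const font_range *ranges;  /* eg, one range for 'A'..'Z', one for kana */<br />
 } font_mapping;<br />
 <br />
 /* Find the bitmap index for a code point, or -1 if the font lacks it. */<br />
 static int font_lookup(const font_mapping *fm, unsigned long cp)<br />
 {<br />
     int i;<br />
     for (i = 0; i < fm->nranges; i++)<br />
         if (cp >= fm->ranges[i].first && cp <= fm->ranges[i].last)<br />
             return fm->ranges[i].bitmaps[cp - fm->ranges[i].first];<br />
     return -1;<br />
 }<br />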
<br />
A style may also refer to a font name to use (eg, "Tahoma"). These fonts may or may not be<br />
available on the playing system, however, since the font data is not included in the stream,<br />
just referenced by name. For this reason, it is best to keep to widely known fonts.<br />
<br />
== Reference encoder/decoder ==<br />
<br />
An encoder (kateenc) and a decoder (katedec) are included in the tools directory.<br />
The encoder supports input in several different formats:<br />
* a custom text-based file format (see [[#The Kate file format|The Kate file format]]), which is by no means meant to be part of the Kate bitstream specification itself<br />
* SubRip (.srt), the most common subtitle format I found<br />
* the LRC lyrics format.<br />
<br />
As an example, for the widely used SRT subtitle format, the following command line<br />
creates a Kate subtitles stream from an SRT file:<br />
<br />
 kateenc -l en -c subtitles -t srt -o subtitles.ogg subtitles.srt<br />
<br />
The reverse is also possible: katedec can recover an SRT file from a Kate stream.<br />
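<br />
For instance, something like the following should work (the exact option names here are an assumption based on kateenc's conventions; check katedec's built-in help for the actual set):<br />
<br />
 katedec -t srt -o subtitles.srt subtitles.ogg<br />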
<br />
Note that the subtitles.ogg file should then be multiplexed into the A/V stream,<br />
using either ogg-tools or oggz-tools.<br />
<br />
The Kate bitstreams encoded and decoded by those tools are (supposed to be) correct for this<br />
specification, provided their input is correct.<br />
<br />
== Next steps ==<br />
<br />
=== Continuations ===<br />
<br />
Continuations are a way to add to existing events, and are mostly meant for motions. When streaming<br />
in real time, the motions that may be applied to events may not be known in advance (for instance, in a<br />
draw chat program where two programs exchange Kate streams, the drawing motions are only known as<br />
they are drawn). Continuations will allow an event to be extended in time, and motions to be appended<br />
to it. This is only useful for streaming, as when stored in a file, everything is already known in<br />
advance.<br />
<br />
=== A rendering library ===<br />
<br />
This will allow easier integration in other packages (movie players, etc).<br />
I have started working on an implementation using Cairo and Pango, though I'm still at the early stages.<br />
I might add support for embedding vector fonts in a Kate stream if I go that way; I still need to think about this.<br />
Another point of note is that when this library is available, it would make it easier to add<br />
capabilities such as rotation, scaling, etc, to the bitstream, since this would not cause too<br />
much work for playing programs using the rendering library. It is expected that these additions<br />
would stay backward compatible (eg, an old player would ignore this information but still correctly<br />
decode the information it can work with from a newly encoded stream).<br />
<br />
=== An XML representation ===<br />
<br />
While I purposefully did not write Kate description files in XML, because I find editing XML such<br />
a chore, it would be nice to be able to losslessly convert between the more user-friendly representation<br />
and an XML document, so one can do what one does with XML documents, like transformations.<br />
<br />
And after all, some people might prefer editing the XML version.<br />
<br />
=== Packaging ===<br />
<br />
It would be really nice to have packages for libkate/libtiger for many distros.<br />
<br />
If you're a packager for a distro which doesn't yet have packages for libkate<br />
or libtiger, please consider helping :)<br />
<br />
In particular, packages for Debian would be grand.<br />
<br />
== Matroska mapping ==<br />
<br />
The codec ID is "S_KATE".<br />
<br />
As for Theora and Vorbis, Kate headers are stored in the private data as xiph-laced packets:<br />
<br />
Byte 0: number of packets present, minus 1 (there must be at least one packet) - let this number be NP<br />
Bytes 1..n: lengths of the first NP packets, coded in xiph style lacing<br />
Bytes n+1..end: the data packets themselves concatenated one after the other<br />
<br />
Note that the length of the last packet isn't encoded, it is deduced from the sizes of the other<br />
packets and the total size of the private data.<br />
<br />
This mapping is similar to the Vorbis and Theora mappings, with the caveat that one should not<br />
expect a set number of headers.<br />
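<br />
As an illustration, a minimal C sketch of splitting private data laid out this way could look like the following (the function name and interface are made up for this example, and error handling is kept minimal):<br />
<br />
 #include <stddef.h><br />
 <br />
 /* Splits xiph-laced private data into packets. offsets[] and sizes[]<br />
    must have room for up to 256 entries. Returns the number of packets,<br />
    or -1 on malformed data. */<br />
 static int parse_xiph_lacing(const unsigned char *priv, size_t priv_len,<br />
                              size_t *offsets, size_t *sizes)<br />
 {<br />
     size_t n, pos = 1, total = 0;<br />
     int np, i;<br />
     if (priv_len < 1) return -1;<br />
     np = priv[0] + 1;                 /* byte 0 is the packet count minus 1 */<br />
     for (i = 0; i < np - 1; i++) {    /* lengths of all packets but the last */<br />
         n = 0;<br />
         do {<br />
             if (pos >= priv_len) return -1;<br />
             n += priv[pos];<br />
         } while (priv[pos++] == 255); /* a 255 byte means "add the next byte" */<br />
         sizes[i] = n;<br />
         total += n;<br />
     }<br />
     if (pos + total > priv_len) return -1;<br />
     sizes[np - 1] = priv_len - pos - total;   /* last length is deduced */<br />
     for (i = 0; i < np; i++) { offsets[i] = pos; pos += sizes[i]; }<br />
     return np;<br />
 }<br />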
<br />
== Downloading ==<br />
<br />
libkate encodes and decodes Kate streams, and is API and ABI stable.<br />
<br />
The libkate source distribution is available at [http://libkate.googlecode.com/ http://libkate.googlecode.com/].<br />
<br />
A public git repository is available at [http://git.xiph.org/?p=users/oggk/kate.git;a=summary http://git.xiph.org/?p=users/oggk/kate.git;a=summary].<br />
<br />
libtiger renders Kate streams using Pango and Cairo, and is alpha, with API changes still possible.<br />
<br />
The libtiger source distribution is available at [http://libtiger.googlecode.com/ http://libtiger.googlecode.com/].<br />
<br />
A public git repository is available at [http://git.xiph.org/?p=users/oggk/tiger.git;a=summary http://git.xiph.org/?p=users/oggk/tiger.git;a=summary].<br />
<br />
== HOWTOs ==<br />
<br />
The following sections describe a few ways to use Kate streams:<br />
<br />
=== Text movie subtitles ===<br />
<br />
Kate streams can carry Unicode text (that is, text that can represent<br />
pretty much any existing language/script). If several Kate streams are<br />
multiplexed along with a video, subtitles in various languages can be<br />
made for that movie.<br />
<br />
An easy way to create such subtitles is to use ffmpeg2theora, which<br />
can create Kate streams from SubRip (.srt) format files, a simple but<br />
common text subtitles format. ffmpeg2theora 0.21 or later is needed.<br />
<br />
At its simplest:<br />
<br />
ffmpeg2theora -o video-with-subtitles.ogg --subtitles subtitles.srt<br />
video-without-subtitles.avi<br />
<br />
Subtitles in several languages may be created and tagged with their language codes<br />
for easy selection in a media player:<br />
<br />
ffmpeg2theora -o video-with-subtitles.ogg video-without-subtitles.avi<br />
--subtitles japanese-subtitles.srt --subtitles-language ja<br />
--subtitles welsh-subtitles.srt --subtitles-language cy<br />
--subtitles english-subtitles.srt --subtitles-language en_GB<br />
<br />
Alternatively, kateenc (which comes with the libkate distribution) can<br />
create Kate streams from SubRip files as well. These can then be merged<br />
with a video with oggz-tools:<br />
<br />
kateenc -t srt -c SUB -l it -o subtitles.ogg italian-subtitles.srt<br />
oggz merge -o movie-with-subtitles.ogg movie-without-subtitles.ogg subtitles.ogg<br />
<br />
This second method can also be used to add subtitles to a video which<br />
is already encoded to Theora, as it will not transcode the video again.<br />
<br />
<br />
=== DVD subtitles ===<br />
<br />
DVD subtitles are not text, but images. Thoggen, a DVD ripper program,<br />
can convert these subtitles to Kate streams (at the time of writing,<br />
the necessary patches have not yet been merged into Thoggen and GStreamer,<br />
so they will need to be patched for this to be possible out of the box).<br />
<br />
When configuring how to rip DVD tracks, any subtitles will be detected<br />
by Thoggen, and selecting them in the GUI will cause them to be saved as<br />
Kate tracks along with the movie.<br />
<br />
<br />
=== Song lyrics ===<br />
<br />
Kate streams carrying song lyrics can be embedded in an Ogg file. The<br />
oggenc Vorbis encoding tool from the Xiph.Org Vorbis tools allows lyrics<br />
to be loaded from an LRC or SRT text file and converted to a Kate stream<br />
multiplexed with the resulting Vorbis audio. At the time of writing,<br />
the patch to oggenc has not been merged yet, so oggenc will have to be patched<br />
manually with the patch found in the diffs directory.<br />
<br />
oggenc -o song-with-lyrics.ogg --lyrics lyrics.lrc --lyrics-language en_US song.wav<br />
<br />
So-called 'enhanced LRC' files (containing extra karaoke timing information)<br />
are supported, and a simple karaoke color change scheme will be saved<br />
out for these files. For more complex karaoke effects (such as more<br />
complex style changes, or sprite animation), kateenc should be used with<br />
a Kate description file to create a separate Kate stream, which can then<br />
be merged with a Vorbis-only song with oggz-tools:<br />
<br />
oggenc -o song.ogg song.wav<br />
kateenc -t kate -c LRC -l en_US -o lyrics.ogg lyrics-with-karaoke.kate<br />
 oggz merge -o song-with-karaoke.ogg lyrics.ogg song.ogg<br />
<br />
This latter method may also be used if you already have an encoded Vorbis song<br />
with no lyrics, and just want to add the lyrics without reencoding.<br />
<br />
<br />
=== Metadata ===<br />
<br />
Metadata can be attached to events, or to styles, bitmaps, regions, etc.<br />
Metadata are free-form tag/value pairs, and can be used to enrich the<br />
data they are attached to with extra information. However, how this information is<br />
interpreted is up to the application layer.<br />
<br />
It is worth noting that an event need not have attached text, so it is<br />
possible to create an empty timed event with attached metadata.<br />
<br />
For instance, let's say we have a documentary, with footage from various<br />
places, as well as short interviews, and we want two things:<br />
* tag footage with metadata about the location and date that footage was shot<br />
* subtitle the interviews and tag those subtitles with information about the speaker<br />
<br />
You can then create an empty Kate event for each footage part, synchronized<br />
with the footage, and attach a new metadata item called GEO_LOCATION, filled<br />
with the latitude and longitude of the place where the footage was shot.<br />
Similarly, for each subtitle event, a metadata item called SPEAKER can be<br />
attached.<br />
<br />
An empty event to tag a 4:20-long piece of footage shot in Tokyo on 2011-08-12, and<br />
inserted at 18:30 into the documentary, could look like:<br />
<br />
event {<br />
00:18:30,000 --> 00:22:50,000<br />
meta "GEO_LOCATION" = "35.42; 139.42"<br />
meta "DATE" = "2011-08-12"<br />
}<br />
<br />
Here's an example for a line spoken by Dr Joe Bloggs at 18:30 into the documentary:<br />
<br />
event {<br />
00:18:30,000 --> 00:18:32,000<br />
"Notice how the subtitles for my words have metadata attached to them"<br />
meta "SPEAKER" = "Dr Joe Bloggs"<br />
meta "URL" = "http://www.example.com/biography?name=Joe+Bloggs"<br />
}<br />
<br />
Notice how another metadata item, URL, is also present. The application<br />
will have to be aware of these metadata in order to do something with them,<br />
though. Since they are free-form, it is up to you to decide what<br />
metadata you want, and to make use of them.<br />
<br />
Note that metadata may be attached to other objects, such as regions.<br />
This way, you can for example create a region tagged with a name, and<br />
track a person's movements with that region. Or you can tag a bitmap<br />
with a copyright and a URL to a larger version of the image.<br />
<br />
<br />
=== Changing a Kate stream embedded in an Ogg stream ===<br />
<br />
If you need to change a Kate stream already embedded in an Ogg stream (eg, you have a movie with subtitles, and you want to fix a spelling mistake, or bring one of the subtitles forward in time), you can do this easily with KateDJ. This tool extracts the Kate streams, decodes them to a temporary location, and rebuilds the original stream after you've made whatever changes you want.<br />
<br />
KateDJ (included with the libkate distribution) is a GUI program using wxPython, a Python module for the wxWidgets GUI library, and the oggz tools (both need to be installed separately if they are not already).<br />
<br />
The procedure consists of:<br />
<br />
* Run KateDJ<br />
* Click 'Load Ogg stream' and select the file to load<br />
* Click 'Demux file' to decode Kate streams in a temporary location<br />
* Edit the Kate streams (a message box tells you where they are placed)<br />
* When done, click 'Remux file from parts'<br />
* If any errors are reported, continue editing until the remux step succeeds<br />
<br />
== Frequently Asked Questions ==<br />
<br />
=== Does libkate work on platforms other than Linux? ===<br />
<br />
Yes, libkate is not Linux-specific in any way. It optionally relies on libogg<br />
and libpng, two libraries widely ported to various platforms.<br />
It has been reported to work on Windows and Mac OS X as well as UNIX platforms.<br />
<br />
However, libtiger, a rendering library for Kate streams, relies on Pango and Cairo,<br />
which are not easy to build on Windows, though it can be done.<br />
The Tiger renderer is, however, completely separate from libkate, and is not needed<br />
for full encoding and decoding of Kate streams.<br />
<br />
=== Where can I find some example files? ===<br />
<br />
The libkate distribution can generate various examples, but already-built files<br />
can be found here:<br />
[http://people.xiph.org/~oggk/elephants_dream/elephantsdream-with-subtitles.ogg]<br />
[http://stallman.org/fry/Stephen_Fry-Happy_Birthday_GNU-nq_600px_425kbit.ogv]<br />
<br />
These files use raw text only.<br />
<br />
[[Category:Drafts]]<br />
[[Category:Ogg Mappings]]</div>WikiCleaner