== Disclaimer ==
This is not a Xiph codec, though it may be embedded in Ogg alongside other Xiph
codecs, such as Vorbis and Theora. As such, please do not assume that Xiph has
anything to do with this, much less responsibility.


== What is Kate? ==


Kate is an overlay codec, originally designed for karaoke and text, that can be
multiplexed in Ogg. Text and images can be carried by a Kate stream, and animated.
Most of the time, this would be multiplexed with audio/video to carry subtitles,
song lyrics (with or without karaoke data), etc, but doesn't have to be.

Series of curves (splines, segments, etc) may be attached to various properties
(text position, font size, etc) to create animated overlays. This allows scrolling
or fading text to be defined. This can even be used to draw arbitrary shapes, so
hand drawing can also be represented by a Kate stream.

Example uses of Kate streams are movie subtitles for Theora videos, either text based,
as may be created by [http://www.v2v.cc/~j/ffmpeg2theora ffmpeg2theora], or image
based, such as created by [http://thoggen.net Thoggen] (patching needed), and lyrics,
as created by oggenc, from vorbis-tools.


== Why a new codec? ==

As I was adding support for Theora, Speex and FLAC to some software of mine, I found
myself wanting to have song lyrics accompanying Vorbis audio. Since Vorbis comments
are limited to the headers, one can't add them in the stream as they are sung, so
another multiplexed stream would be needed to carry them.

The three possible bases usable for such a codec I found were Writ, CMML, and OGM/SRT.


*[[OggWrit|Writ]] is an unmaintained start at an implementation of a very basic design, though I did find an encoder/decoder in py-ogg2 later on - I'd have been quicker writing Kate from scratch anyway.
*[[CMML]] is more geared towards encapsulating metadata about an accompanying stream, rather than being a data stream itself, and seemed complex for a simple use, though I have now revised my view on this - besides, it seems designed for Annodex (which I haven't had a look at), though it does seem relatively generic for use outwith Annodex - though it is being "repurposed" as timed text now, bringing it closer to what I'm doing
*OGM/SRT, which I only found when I added Kate support to MPlayer, is shoehorning various data formats into an Ogg stream, and just dumps the SRT subtitle format as is, AFAICS (though I haven't looked at this one in detail, since I'd already had a working Kate implementation by that time)


I then decided to roll my own, not least because it's a fun thing to do.

I found other formats, such as USF (designed for inclusion in Matroska) and various
subtitle formats, but none were designed for embedding inside an Ogg container.

== Overview of the Kate bitstream format ==

I've taken much inspiration from Vorbis and Theora here.
Headers and packets (as well as the API design) follow the design of these two codecs.


A rough overview (see [[#Format specification|Format specification]] for more details) is:


Header packets:
*ID header [BOS]: magic, version, granule fraction, encoding, language, etc
*Comment header: Vorbis comments, as per Vorbis/Theora streams
*Style definitions header: a list of predefined styles to be referred to by data packets
*Region definitions header: a list of predefined regions to be referred to by data packets
*Curves definitions header: a list of predefined curves to be referred to by data packets
*Motion definitions header: a list of predefined motions to be referred to by data packets
*Palette definitions header: a list of predefined palettes to be referred to by data packets
*Bitmap definitions header: a list of predefined bitmaps to be referred to by data packets
*Font mapping definitions header: a list of predefined font mappings to be referred to by data packets


Other header packets are ignored, and left for future expansion.


Data packets:
*text data: text/image and optional motions, accompanied by optional overrides for style, region, language, etc
*keepalive: can be emitted at any time to help a demuxer know where we're at, but those packets are optional
*repeats: a verbatim repeat of a text packet's payload, in order to bound any backward seeking needed when starting to play a stream partway through. These are also optional.
*end data [EOS]: marks the end of the stream, it doesn't have any useful payload


Other data packets are ignored, and left for future expansion.
The intent of the "keepalive" packet is to be sent at regular
intervals when no other packet has been emitted for a while. This would be to help seeking code
find a Kate page more easily.


Things of note:
*Kate is a discontinuous codec, as defined in [http://www.xiph.org/ogg/doc/ogg-multiplex.html ogg-multiplex.html] in the Ogg documentation, which means it's timed by start granule, not end granule (as Theora and Vorbis).
*All data packets are on their own page, for two reasons:
**Ogg keeps track of granules at the page level, not the packet level
**if no text event happens for a while after a particular text event, we don't want to delay it so a larger page can be issued


See also [[#Seeking and memory|Problems to solve: Seeking and memory]].


*The granule encoding is not a direct time/granule correspondence, see the granule encoding section.
*The EOS packet should have a granule pos higher than or equal to the end time of all events.
*User code doesn't have to know the number of headers to expect, this is moved inside the library code (as opposed to Vorbis and Theora).
*The format contains hooks so that additional information may be added in future revisions while keeping backward compatibility (old decoders will correctly parse, but ignore, the new information).
 
== Format specification ==
 
The Kate bitstream format consists of a number of sequential packets.
Packets can be either header packets or data packets. All header packets
must appear before any data packet.
 
Header packets must appear in order. Decoding of a data packet is not
possible until all header packets have been decoded.
 
Each Kate packet starts with a one byte type. A type with the MSB set
(eg, between 0x80 and 0xff) indicates a header packet, while a type with
the MSB cleared (eg, between 0x00 and 0x7f) indicates a data packet.
All header packets then have the Kate magic, from byte offset 1 to byte
offset 7 ("kate\0\0\0"). Note that this applies only to header packets:
data packets do not contain the Kate signature.
 
Since the ID header must appear first, a Kate stream can be recognized
by comparing the first eight bytes of the first packet with the signature
string "\200kate\0\0\0".
 
 
When embedded in Ogg, the first packet in a Kate stream (always packet type 0x80,
the ID header packet) must be placed on a separate page. The corresponding Ogg
packet must be marked as beginning of stream (BOS). All subsequent header packets
must be on one or more pages. After the headers, each data packet must be on a
separate page.
 
The last data packet must be the end of stream packet (packet type 0x7f).
 
When embedded in Ogg, the corresponding Ogg packet must be marked as end of stream (EOS).
 
As per the Ogg specification, granule positions must be non decreasing
within the stream. Header packets have granule position 0.
 
Currently existing packet types are:
:headers:
::0x80  ID header (BOS)
::0x81  Vorbis comment header
::0x82  regions list header
::0x83  styles list header
::0x84  curves list header
::0x85  motions list header
::0x86  palettes list header
::0x87  bitmaps list header
::0x88  font ranges and mappings header
:data:
::0x00 text data (including optional motions and overrides)
::0x01 keepalive
::0x02 repeat
::0x7f end packet (EOS)
 
 
 
The format described here is for bitstream version 0.x.
As of 19 December 2008, the latest bitstream version is 0.4.
 
For more detailed information, refer to the format documentation
in libkate (see URL below in the [[#Downloading|Downloading]] section).
 
Following is the definition of the ID header (packet type 0x80).
This works out to a 64 byte ID header. This is the header that should be
used to detect a Kate stream within an Ogg stream.
 
 
 
  0                   1                   2                   3
  0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 | Byte
 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 | packtype      | Identifier char[7]: 'kate\0\0\0'              | 0-3
 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 | kate magic continued                                          | 4-7
 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 | reserved - 0  | version major | version minor | num headers   | 8-11
 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 | text encoding | directionality| reserved - 0  | granule shift | 12-15
 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 | cw sh | canvas width          | ch sh | canvas height         | 16-19
 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 | reserved - 0                                                  | 20-23
 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 | granule rate numerator                                        | 24-27
 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 | granule rate denominator                                      | 28-31
 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 | language (NUL terminated)                                     | 32-35
 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 | language (continued)                                          | 36-39
 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 | language (continued)                                          | 40-43
 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 | language (continued)                                          | 44-47
 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 | category (NUL terminated)                                     | 48-51
 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 | category (continued)                                          | 52-55
 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 | category (continued)                                          | 56-59
 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 | category (continued)                                          | 60-63
 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 
 
 
The fields cw sh, canvas width, ch sh, and canvas height were introduced
in bitstream 0.3. Earlier bitstreams will have 0 in these fields.
 
language and category are NUL terminated ASCII strings.
Language follows RFC 3066, though it obviously will not accommodate language tags
with lots of subtags.
 
Category is currently loosely defined, and I haven't yet found a nice way to
present it in a generic way, but it is meant for automatic classifying of
various multiplexed Kate streams (eg, to recognize that some streams are
subtitles (in a set of languages), and some others are commentary (in a
possibly different set of languages), etc).
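As an illustration, parsing these fields could look like the following sketch (the struct and helper names are hypothetical, and little endian byte order is assumed for the 32 bit fields, as the diagram above only gives byte offsets; the authoritative reader is in libkate itself):

 #include <stdint.h>
 #include <string.h>
 
 /* hypothetical holder for the ID header fields described above */
 typedef struct {
     uint8_t  version_major, version_minor, num_headers;
     uint8_t  text_encoding, directionality, granule_shift;
     uint32_t gps_numerator, gps_denominator;
     char     language[16], category[16];
 } kate_id_header;
 
 static uint32_t le32(const unsigned char *p)
 {
     return (uint32_t)p[0] | ((uint32_t)p[1] << 8)
          | ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
 }
 
 /* returns 0 on success, -1 if this is not a valid 64 byte ID header */
 static int parse_id_header(const unsigned char *d, size_t len, kate_id_header *h)
 {
     if (len < 64 || memcmp(d, "\200kate\0\0\0", 8)) return -1;
     h->version_major   = d[9];
     h->version_minor   = d[10];
     h->num_headers     = d[11];
     h->text_encoding   = d[12];
     h->directionality  = d[13];
     h->granule_shift   = d[15];
     /* bytes 16-19 pack the canvas size, bytes 20-23 are reserved */
     h->gps_numerator   = le32(d + 24);
     h->gps_denominator = le32(d + 28);
     memcpy(h->language, d + 32, 16);  /* NUL terminated ASCII */
     memcpy(h->category, d + 48, 16);  /* NUL terminated ASCII */
     return 0;
 }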
 
== API overview ==
 
libkate offers an API very similar to that of libvorbis and libtheora, as well as
an extra higher level decoding API.
 
Here's an overview of the three main modules:
 
=== Decoding ===
 
Decoding is done in a way similar to libvorbis. First, initialize a kate_info and a
kate_comment structure. Then, read headers by calling kate_decode_headerin. Once
all headers have been read, a kate_state is initialized for decoding using kate_decode_init,
and kate_decode_packetin is called repeatedly with data packets. Events (eg, text) can be
retrieved via kate_decode_eventout.
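A minimal sketch of this sequence follows (error handling omitted; the init/clear calls and the return value conventions shown are assumptions, see the commented examples in the libkate distribution for authoritative usage):

 #include <kate/kate.h>
 
 /* sketch: decode a whole Kate stream given its packets in order */
 static void decode_stream(kate_packet *packets, size_t n)
 {
     kate_info ki;
     kate_comment kc;
     kate_state k;
     size_t i = 0;
 
     kate_info_init(&ki);
     kate_comment_init(&kc);
 
     /* headers first; a positive return means more headers are expected */
     while (i < n && kate_decode_headerin(&ki, &kc, &packets[i]) > 0)
         ++i;
 
     kate_decode_init(&k, &ki);  /* now ready for data packets */
 
     for (; i < n; ++i) {
         const kate_event *ev = NULL;
         kate_decode_packetin(&k, &packets[i]);
         /* an event (eg, a text) may become available after a data packet */
         if (kate_decode_eventout(&k, &ev) == 0 && ev) {
             /* use ev (the event's text, motions, etc) here */
         }
     }
 
     kate_clear(&k);
     kate_info_clear(&ki);
     kate_comment_clear(&kc);
 }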
 
=== Encoding ===
 
Encoding is also done in a way similar to libvorbis. First initialize a kate_info
and a kate_comment structure, and fill them out as needed. kate_encode_headers will
create ogg packets from those. Then, kate_encode_text is called repeatedly for all
the text events to add. When done, calling kate_encode_finish will create an end of
stream packet.
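A matching sketch, under the same caveats (the header loop convention, the kate_info setup calls, and the -1 end time passed to kate_encode_finish are assumptions to check against the libkate examples):

 #include <kate/kate.h>
 
 /* sketch: encode a stream containing a single text event */
 static void encode_stream(void)
 {
     kate_info ki;
     kate_comment kc;
     kate_state k;
     kate_packet kp;
 
     kate_info_init(&ki);
     kate_info_set_language(&ki, "en");
     kate_info_set_category(&ki, "subtitles");
     kate_comment_init(&kc);
 
     kate_encode_init(&k, &ki);
 
     /* emit all header packets; each one should be written out,
        eg wrapped into Ogg pages */
     while (kate_encode_headers(&k, &kc, &kp) == 0) {
         /* write kp out */
     }
 
     /* one text event, active from 5 to 10 seconds */
     kate_encode_text(&k, 5.0, 10.0, "This is a text", 14, &kp);
     /* write kp out */
 
     /* the end of stream packet; -1 is assumed to mean no explicit end time */
     kate_encode_finish(&k, -1, &kp);
     /* write kp out */
 
     kate_clear(&k);
     kate_info_clear(&ki);
     kate_comment_clear(&kc);
 }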
 
=== High level decoding API ===
 
There are only 3 calls here:
 
 kate_high_decode_init
 kate_high_decode_packetin
 kate_high_decode_clear
 
Here, all Ogg packets are sent to kate_high_decode_packetin, which does the right
thing (header/data classification, decoding, and event retrieval). Note that you
do not get access to the comments directly using this, but you do get access to the
kate_info via events.
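A sketch of the whole loop (the event out parameter of kate_high_decode_packetin is an assumption based on the description above):

 #include <kate/kate.h>
 
 /* sketch: all packets, headers and data alike, go through one call */
 static void high_level_decode(kate_packet *packets, size_t n)
 {
     kate_state k;
     size_t i;
 
     kate_high_decode_init(&k);
     for (i = 0; i < n; ++i) {
         const kate_event *ev = NULL;
         kate_high_decode_packetin(&k, &packets[i], &ev);
         if (ev) {
             /* an event is active; the kate_info is reachable from it */
         }
     }
     kate_high_decode_clear(&k);
 }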
 
The libkate distribution includes commented examples for each of those.
 
Additionally, libkate includes a layer (liboggkate) to make it easier to use when
embedded in Ogg. While the normal API uses kate_packet structures, liboggkate uses
ogg_packet structures.
 
The high level decoding API does not have an Ogg specific layer, but functions exist
to wrap a kate_packet around a memory buffer (such as the one an ogg_packet uses).


== Support ==
Among the software with Kate support:
*VLC
*ffmpeg2theora
*liboggz
*liboggplay
*Cortado (wikimedia version)
*vorbis-tools


I have patches for the following with Kate support:
*MPlayer (for multiplexed per-language subtitles - all region/style info is ignored)
*GStreamer
*xine (text, and work-in-progress style/regions/motions support)
*Thoggen
*Audacious
*and more...


These may be found in the libkate source distribution (see [[#Downloading|Downloading]]
for links).
 
In addition, libtiger is a rendering library for Kate streams using Pango and Cairo,
though it is not quite API stable yet (no major changes are expected).


== Granule encoding ==


=== Ogg ===

Ogg leaves the encoding of granules up to a particular codec, only
mandating that granules be non decreasing with time.
The Kate bitstream format uses a linear mapping between time and
granule, described here.

A Kate granule position is composed of two different parts:
*a base granule, in the high bits
*a granule offset, in the low bits

 +----------------+----------------+
 |      base      |     offset     |
 +----------------+----------------+
The number of bits these parts occupy is variable, and each stream
may choose how many bits to dedicate to each. The kate_info structure
for a stream holds that information in the granule_shift field,
so each part may be reconstructed from a granulepos.
 
The timestamp T of a given Kate packet is split into a base B and
an offset O, and these are stored in the granulepos of that packet.
The split is done such that B is the time of the earliest event
still active at the time, and O is the time elapsed between B
and T. Thus, T = B + O. This mimics the way Theora stores its own
timestamps in granulepos, where the base acts as a keyframe, and
the offset acts as the position of an intra frame from the previous
keyframe. Since Kate allows time overlapping events, however, the
choice of the base to use is slightly more complex, as it may not
be the starting time of the previous event.
The kate_info structure for a stream holds a rational fraction
representing the time span of granule units for both the base and
the offset parts.
The granule rate is defined by the two fields:

 kate_info::gps_numerator
 kate_info::gps_denominator

The number of bits reserved for the offset is defined by the field:

 kate_info::granule_shift
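As a sketch, recovering the base, the offset, and a timestamp in seconds from a granulepos would look like this (hypothetical helper; this assumes the granule rate is expressed in granules per second, as for Theora):

 #include <stdint.h>
 
 /* split a granulepos into base and offset and convert to seconds;
    granule_shift, num and den mirror the kate_info fields
    granule_shift, gps_numerator and gps_denominator */
 static double granulepos_to_seconds(int64_t granulepos, int granule_shift,
                                     int64_t num, int64_t den)
 {
     int64_t base   = granulepos >> granule_shift;
     int64_t offset = granulepos - (base << granule_shift);
     /* T = B + O in granule units, scaled by the granule rate */
     return (double)(base + offset) * den / num;
 }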
 
=== Generic timing ===
 
Kate data packets (data packet type 0x00) include timing information (start time,
end time, and time of the earliest event still active). All these are stored as
64 bit values at the rate defined by the granule rate, so they do not suffer from the
granule_shift space limitation.
 
This also allows for Kate streams to be stored in other containers.


== Motion ==


The Kate bitstream format includes motion definition, originally for karaoke purposes, but
it can be used for more general purposes, such as line based drawing, or animation of
the text (position, color, etc).


Motions are defined by means of a series of curves (static points, segments, splines
(Catmull-Rom, Bezier, and B-splines)).
A 2D point can be obtained from a motion for any timestamp during the lifetime of a text.
This can be used for moving a marker in 2D above the text for karaoke, or to use the x
coordinate to color text when the motion position passes each letter or word, etc.
As the text is not necessarily
the main object of an event, it is even possible to have an empty text, and use the motion
as a virtual pencil to draw arbitrary shapes. Even on-the-fly handwriting subtitles could
be done this way, though this would require a lot of control points, and could not be used
with text-to-speech.
 
As a proof of concept, I also have a "draw chat" program where two people can draw, and
the shapes are turned into B-splines and sent as a Kate motion to be displayed on the other
person's window.


It is also possible for motions to be discontinuous - simply insert a curve of 'none' type.
Since curves are restricted to two dimensions, when a higher dimensional point is
needed, Kate predefines the semantics text_color_rg and text_color_ba, so a 4D point can be
obtained using two different motions.
There are higher level constructs, such as morphing between two styles, or predefined
karaoke effects. More are planned to be added in the future.


See also [[#Trackers|Trackers]].


== Trackers ==
libkate provides trackers to follow the animated properties of an event over time.
Once initialized with a text and a set of motions, the client code can give the tracker a new
timestamp, and get back the current text position, text color, etc.
Using a tracker is not necessary, if one wants to use the motions directly, or just ignore them,
but it makes life easier, especially considering that the order in which motions are applied
does matter (to be defined formally, but the current source code is informative at this point).
== The Kate file format ==
Though this is not a feature of the bitstream format, I have created a text file format to
describe a series of events to be turned into a Kate bitstream.
At its minimum, the following is a valid input to the encoder:
: kate {
::  event { 00:00:05 --> 00:00:10    "This is a text" }
: }
This will create a simple stream with "This is a text" emitted at an offset of 5 seconds into
the track, lasting 5 seconds, until its end time at 10 seconds.
Motions, regions, styles can be declared in a definitions block to be reused by events, or can
be defined inline. Defining those in the definitions block places them in a header so they can
be reused later, saving space. However, they can also be defined in each event, so they will be
sent with the event. This allows them to be generated on the fly (eg, if the bitstream is being
streamed from a realtime input).
For convenience, the Kate file format also allows C style macros, though without parameters.
Please note that the Kate file format is fully separate from the Kate bitstream format. The
difference between the two is similar to the difference between a C source file and the resulting
object file, when compiled.
Note that the format is not based on XML for a very parochial reason: I very much dislike
editing XML by hand, as it's really hard to read. XML is really meant for machines to
generically parse text data in a shared syntax but with possibly unknown semantics, and I need
those text representations to be easily editable.
This also implies that there could be an XML representation of a Kate stream, which would be
useful if one were to make an editor that worked on a higher level than the current all-text
representation, and it is something that might very well happen in the future, in parallel with
the current format.
== Karaoke ==
Karaoke effects rely on motions, and there will be predefined higher level ways of specifying
timings and effects, two of which are already done. As an example, this is a valid Karaoke script:
:kate {
::  simple_timed_glyph_style_morph {
:::  from style "start_style" to style "end_style"
:::  "Let "    at 1.0
:::  "us "    at 1.2
:::  "sing "  at 1.4
:::  "to"      at 2.0
:::  "ge"      at 2.5
:::  "ther"    at 3.0
::  }
:}
The syllables will change from one style to another as time passes. The definition of the start_style
and end_style styles is omitted for brevity.




== Problems to solve ==

There are a few things to solve before the Kate bitstream format can be considered good
enough to be frozen:
Note: the following is mostly solved, and the bitstream is now stable, and has been
backward and forward compatible since the first released version. This will be updated
when I get some time.


=== Seeking and memory ===
*A variant of the two-granules-in-one system used by libcmml, where the "back link" points to the earliest still active string, rather than the previous one (this allows a two phase seek, rather than a multiphase seek, hopping back from event to event, with no real way to know if there is or not a previous event which is still active - I suppose CMML has no need to know this, if their "clips" do not overlap - mine can do).
**Such a system considerably shortens the usable granule space, though it can do a one phase seek, if I understand the system correctly, of which I am not certain.
*** Well, it seems it can't do a one phase seek anyway.
*Additionally, it could be possible to emit simple "keepalive" packets at regular intervals to help a seek algorithm to sync up to the stream without needing too much data reading - this helps for discontinuous streams where there could be no pages for a while if no data is needed at that time.


=== Text encoding ===


A header field declares the text encoding used in the stream. At the moment, only UTF-8 is
supported, for simplicity; there are no plans to support other encodings, such as UTF-16.


Note that strings included in the header (language, category) are not affected by the
text encoding (rather obviously for language itself). These are ASCII.


The actual text in events may include simple HTML-like markup (at the moment, allowed markup
is the same as the one Pango uses, but more markup types may be defined in the future).
It is also possible to ask libkate to remove this markup if the client prefers to receive
plain text without the markup.


=== Language encoding ===
The language field follows RFC 3066. I am considering restricting it to simple language
tags (like "en_EN"), as RFC 3066 tags can be quite complex, but I welcome comments on this.


If a stream contains more than one language, there usually is a predominant language, which
can be set as the default language for the stream. Each event can then have a language
override. If there is no predominant language, and it is not possible to split the stream
into multiple substreams, each with its own language, then it is possible to use the "mul"
language tag, as a last resort.
 
=== Bitstream format for floating point values ===
 
Floating point values are turned into a 16.16 fixed point format, then stored in a bitpacked
format, storing the number of zero bits at the head and tail of the floating point values once
per stream, and the remainder bits for all values in the stream. This seems to yield good results
(typically a 50% reduction over 32 bits raw writes, and 70% over the snprintf based storage), and
has the big advantage of being portable (eg, independent of any IEEE format).
However, this means reduced precision due to the quantization to 16.16. I may add support for
variable precision (eg, 8.24 fixed point formats) to alleviate this. This would however mean less
space savings, though these are likely to be insignificant when Kate streams are interleaved with
a video.
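As a sketch, the quantization step itself is just a scale by 2^16 (the head/tail zero bit packing is omitted, and the helper names are hypothetical):

 #include <stdint.h>
 
 /* quantize to 16.16 fixed point: 16 integer bits, 16 fractional bits */
 static int32_t float_to_16_16(double v)
 {
     return (int32_t)(v * 65536.0);
 }
 
 static double fixed_16_16_to_float(int32_t q)
 {
     return q / 65536.0;
 }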
 
*Though this is not a Kate issue per se, the motion feature is very difficult to use without a curve editor. While tools may be coded to create a Kate bitstream for various existing subtitle formats, it is not certain it will be easy to find a good authoring tool for a series of curves. That said, it's not exactly difficult to do if you know a widget set.


=== Higher dimensional curves/motions ===


It is quite annoying to have to create two motions to control a color change, due to curves
being restricted to two dimensions. I may add support for arbitrary dimensions. It would also
help for 1D motions, like changing the time flow, where one coordinate is simply ignored at
the moment.
Alternatively, changes could be made to the Kate file format to hide the two dimensionality and
allow simpler specification of non-2 dimensional motions, but still map them to 2D in the Kate
bitstream format.


=== Category definition ===
 
The category field in the BOS packet is a 16 byte text field (15 really, as it is zero terminated
in the bitstream itself). Its goal is to provide the reader with a short description of what kind
of information the stream contains, eg subtitles, lyrics, etc. This would be displayed to the user,
possibly allowing the user to choose to turn some streams on and off.

Since this category is meant primarily for a machine to parse, categories are kept to ASCII. When
a player recognizes a category, it is free to replace its name with one in the user's language if
it prefers. Even in English, the "lyrics" category could be displayed by a player as "Lyrics".

Since this is a free text field rather than an enumeration, it would be good to have a list of
common predefined category names that Kate streams can use.

This is a list of proposed predefined categories, feedback/additions welcome:


* subtitles - the usual movie subtitles, as text
* spu-subtitles - movie subtitles in DVD style paletted images
* lyrics - song lyrics


Please remember the 15 character limit if proposing other categories.


Note that the list of categories is subject to change, and will likely
be replaced by new, more "identifier like" ones. The three above,
however, would be kept for backward compatibility as they're already used.


== Text to speech ==
=== Embedded binary data ===


Images and font mappings can be included within a Kate stream.
 
==== Images ====
 
Though this could be misused to interfere with the ability to render as text-to-speech, Kate
can use images as well as text. The same caveat as for fonts applies with regard to data
duplication.
 
Complex images might however be best left to a multiplexed OggSpots or OggMNG stream, unless the
images mesh with the text (eg, graphical exclamation points, custom fonts (see the Fonts
section below), etc).
 
There is support for simple paletted bitmap images, with a variable length palette of up
to 256 colors (in fact, sized in powers of 2 up to 256) and matching pixel data in as
many bits per pixel as can address the palette. Palettes and images are stored separately,
so can be used with one another with no fixed assignment.
 
Palettes and bitmaps are put in two separate headers for later use by reference, but can
also be placed in data packets, as with motions, etc, if they are not going to be reused.
 
PNG bitmaps can also be embedded in a Kate stream. These do not have associated palettes
(but the PNGs themselves may or may not be paletted). There is no support for decoding PNG
images in libkate itself, so a program will have to use libpng (or similar code) to decode
the PNG image. For instance, the libtiger rendering library uses Cairo to decode and render
PNG images in Kate streams.


I expect that the need for more than 256 colors in a bitmap, or non palette bitmap data,
would be best handled by another codec, eg OggMNG or OggSpots. The goal of images in a
Kate stream is to mesh the images with the text, not to have large images by themselves.


On the other hand, interesting karaoke effects could be achieved by having MNG images
instead of simple paletted bitmaps in a Kate stream. Comments would be most welcome on
whether this is going too far, however.

I am also investigating SVG images. These allow for very small footprint images for simple
vector drawings, and could be very useful for things like background gradients below text.


A possible solution to the duplication issue is to have another stream in the container
hold the shared data, to be referenced by the Kate streams; or such data could be stored
once in a header, much like
the way Vorbis stores its codebooks within a header packet, or even the way Kate stores the
list of styles within a header packet.
==== Fonts ====

This can be used to have custom fonts, so that raw text is still available if the stream
creator wants a custom look.

Custom fonts are merely a set of ranges mapping unicode code points to bitmaps. As this implies,
fonts are bitmap fonts, not vector fonts, so scaling, if supported by the rendering client,
may not look as good as with a vector font.
A style may also refer to a font name to use (eg, "Tahoma"). These fonts may or may not be
available on the playing system, however, since the font data is not included in the stream,
just referenced by name. For this reason, it is best to keep to widely known fonts.


== Reference encoder/decoder ==


An encoder (kateenc) and a decoder (katedec) are included in the tools directory.
The encoder supports input from several different formats:
* a custom text based file format (see [[#The Kate file format|The Kate file format]]), which is by no means meant to be part of the Kate bitstream specification itself
* SubRip (.srt), the most common subtitle format I found
* LRC lyrics format.
 
As an example for the widely used SRT subtitles format, the following command line
creates a Kate subtitles stream from an SRT file:

 kateenc -l en -c subtitles -t srt -o subtitles.ogg subtitles.srt
 
The reverse is possible, to recover an SRT file from a Kate stream, with katedec.
 
Note that the subtitles.ogg file should then be multiplexed into the A/V stream,
using either ogg-tools or oggz-tools.
 
The Kate bitstreams encoded and decoded by those tools are (supposed to be) correct for this
specification, provided their input is correct.
 
== Next steps ==
 
=== Continuations ===
 
Continuations are a way to add to existing events, and are mostly meant for motions. When streaming
in real time, the motions applied to events may not be known in advance (for instance, for a
draw chat program where two programs exchange Kate streams, the drawing motions are only known as
they are drawn). Continuations will allow an event to be extended in time, and motions to be appended
to it. This is only useful for streaming, as when stored in a file, everything is already known in
advance.
 
=== A rendering library ===
 
This will allow easier integration in other packages (movie players, etc).
I have started working on an implementation using Cairo and Pango, though I'm still at the early stages.
I might add support for embedding vector fonts in a Kate stream if I go that way; I still need to think about this.
Another point of note is that when this library is available, it would make it easier to add
capabilities such as rotation, scaling, etc, to the bitstream, since this would not cause too
much work for playing programs using the rendering library. It is expected that these additions
would stay backward compatible (eg, an old player would ignore this information but still correctly
decode the information it can work with from a newly encoded stream).
 
=== An XML representation ===
 
While I purposefully did not write Kate description files in XML, because I find editing XML
such a chore, it would be nice to be able to losslessly convert between the more user friendly
representation and an XML document, so one can do what one does with XML documents, like
transformations.
 
And after all, some people might prefer editing the XML version.
 
=== Packaging ===
 
It would be really nice to have packages for libkate/libtiger for many distros.
 
If you're a packager for a distro which doesn't yet have packages for libkate
or libtiger, please consider helping :)
 
In particular, packages for Debian would be grand.
 
== Matroska mapping ==
 
The codec ID is "S_KATE".
 
As for Theora and Vorbis, Kate headers are stored in the private data as xiph-laced packets:
 
 Byte 0: number of packets present, minus 1 (there must be at least one packet) - let this number be NP
 Bytes 1..n: lengths of the first NP packets, coded in xiph style lacing
 Bytes n+1..end: the data packets themselves concatenated one after the other
 
Note that the length of the last packet isn't encoded, it is deduced from the sizes of the other
packets and the total size of the private data.
 
This mapping is similar to the Vorbis and Theora mappings, with the caveat that one should not
expect a set number of headers.
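As a sketch, walking that private data could look like this (hypothetical helpers; bounds checking omitted):

 #include <stddef.h>
 
 /* xiph style lacing: a length is the sum of successive bytes,
    ending with the first byte that is not 255 */
 static size_t read_laced_length(const unsigned char **p)
 {
     size_t len = 0;
     unsigned char b;
     do {
         b = *(*p)++;
         len += b;
     } while (b == 255);
     return len;
 }
 
 /* calls on_packet once per header packet in the private data;
    the last packet's length is implied by the total size */
 static void parse_kate_private_data(const unsigned char *data, size_t size,
                                     void (*on_packet)(const unsigned char *, size_t))
 {
     const unsigned char *p = data + 1;
     size_t count = (size_t)data[0] + 1;  /* NP + 1 packets in total */
     size_t lengths[256];
     size_t total = 0, i;
 
     for (i = 0; i + 1 < count; ++i) {    /* the first NP lengths are laced */
         lengths[i] = read_laced_length(&p);
         total += lengths[i];
     }
     lengths[count - 1] = (size_t)(data + size - p) - total;
 
     for (i = 0; i < count; ++i) {
         on_packet(p, lengths[i]);
         p += lengths[i];
     }
 }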
 
== Downloading ==
 
libkate encodes and decodes Kate streams, and is API and ABI stable.
 
The libkate source distribution is available at [http://libkate.googlecode.com/ http://libkate.googlecode.com/].
 
A public git repository is available at [http://git.xiph.org/?p=users/oggk/kate.git;a=summary http://git.xiph.org/?p=users/oggk/kate.git;a=summary].
 
libtiger renders Kate streams using Pango and Cairo, and is alpha, with API changes still possible.
 
The libtiger source distribution is available at [http://libtiger.googlecode.com/ http://libtiger.googlecode.com/].
 
A public git repository is available at [http://git.xiph.org/?p=users/oggk/tiger.git;a=summary http://git.xiph.org/?p=users/oggk/tiger.git;a=summary].
 
== HOWTOs ==
 
These paragraphs describe a few ways to use Kate streams:
 
=== Text movie subtitles ===
 
Kate streams can carry Unicode text (that is, text that can represent
pretty much any existing language/script). If several Kate streams are
multiplexed along with a video, subtitles in various languages can be
made for that movie.
 
An easy way to create such subtitles is to use ffmpeg2theora, which
can create Kate streams from SubRip (.srt) format files, a simple but
common text subtitles format. ffmpeg2theora 0.21 or later is needed.
 
At its simplest:
 
    ffmpeg2theora -o video-with-subtitles.ogg --subtitles subtitles.srt
      video-without-subtitles.avi
 
Several languages may be created and tagged with their language code
for easy selection in a media player:
 
    ffmpeg2theora -o video-with-subtitles.ogg video-without-subtitles.avi
      --subtitles japanese-subtitles.srt --subtitles-language ja
      --subtitles welsh-subtitles.srt --subtitles-language cy
      --subtitles english-subtitles.srt --subtitles-language en_GB
 
Alternatively, kateenc (which comes with the libkate distribution) can
create Kate streams from SubRip files as well. These can then be merged
with a video with oggz-tools:
 
    kateenc -t srt -c SUB -l it -o subtitles.ogg italian-subtitles.srt
    oggz merge -o movie-with-subtitles.ogg movie-without-subtitles.ogg subtitles.ogg
 
This second method can also be used to add subtitles to a video which
is already encoded to Theora, as it will not transcode the video again.
 
 
=== DVD subtitles ===
 
DVD subtitles are not text, but images. Thoggen, a DVD ripper program,
can convert these subtitles to Kate streams (at the time of writing,
the necessary patches have not been applied to Thoggen and GStreamer,
so this is not possible out of the box and patching them will be required).
 
When configuring how to rip DVD tracks, any subtitles will be detected
by Thoggen, and selecting them in the GUI will cause them to be saved as
Kate tracks along with the movie.
 
 
=== Song lyrics ===
 
Kate streams carrying song lyrics can be embedded in an Ogg file. The
oggenc Vorbis encoding tool from the Xiph.Org Vorbis tools allows lyrics
to be loaded from an LRC or SRT text file and converted to a Kate stream
multiplexed with the resulting Vorbis audio. At the time of writing,
the patch to oggenc was not applied yet, so it will have to be patched
manually with the patch found in the diffs directory.
 
    oggenc -o song-with-lyrics.ogg --lyrics lyrics.lrc --lyrics-language en_US song.wav
 
So called 'enhanced LRC' files (containing extra karaoke timing information)
are supported, and a simple karaoke color change scheme will be saved
out for these files. For more complex karaoke effects (such as more
complex style changes, or sprite animation), kateenc should be used with
a Kate description file to create a separate Kate stream, which can then
be merged with a Vorbis only song with oggz-tools:
 
    oggenc -o song.ogg song.wav
    kateenc -t kate -c LRC -l en_US -o lyrics-with-karaoke.ogg lyrics-with-karaoke.kate
    oggz merge -o song-with-karaoke.ogg lyrics-with-karaoke.ogg song.ogg
 
This latter method may also be used if you already have an encoded Vorbis song
with no lyrics, and just want to add the lyrics without reencoding.
 
 
=== Metadata ===
 
Metadata can be attached to events, or to styles, bitmaps, regions, etc.
Metadata are free form tag/value pairs, and can be used to enrich their
attached data with extra information. However, how this information is
interpreted is up to the application layer.
 
It is worth noting that an event may not have attached text, so it is
possible to create an empty timed event with attached metadata.
 
For instance, let's say we have a documentary, with footage from various
places, as well as short interviews, and we want two things:
*tag footage with metadata about the location and date that footage was shot
*subtitle the interviews and tag those subtitles with information about the speaker
 
You can then create an empty Kate event for each footage part, synchronized
with the footage, and attach a new metadata item called GEO_LOCATION, filled
with latitude and longitude of the place the footage was shot at.
Similarly, for each subtitle event, a metadata item called SPEAKER can be
attached.
 
An empty event to tag a long 4:20 footage shot in Tokyo on 2011/08/12, and
inserted at 18:30 in the documentary could look like:
 
  event {
    00:18:30,000 --> 00:22:50,000
    meta "GEO_LOCATION" = "35.42; 139.42"
    meta "DATE" = "2011-08-12"
  }
 
Here's an example for a line spoken by Dr Joe Bloggs at 18:30 into the documentary:
 
  event {
    00:18:30,000 --> 00:18:32,000
    "Notice how the subtitles for my words have metadata attached to them"
    meta "SPEAKER" = "Dr Joe Bloggs"
    meta "URL" = "http://www.example.com/biography?name=Joe+Bloggs"
  }
 
Notice how another metadata item, URL, is also present. The application
will have to be aware of those metadata in order to do something with
them, though. Since those are free form, it is up to you to think of what
metadata you want, and make use of it.
 
Note that metadata may be attached to other objects, such as regions.
This way, you can for example create a region tagged with a name, and
track a person's movements with that region. Or you can tag a bitmap
with a copyright and a URL to a larger version of the image.
 
 
 
=== Changing a Kate stream embedded in an Ogg stream ===
 
If you need to change a Kate stream already embedded in an Ogg stream (eg, you have a movie with subtitles, and you want to fix a spelling mistake, or want to bring one of the subtitles forward in time, etc), you can do this easily with KateDJ, a tool that will extract Kate streams, decode them to a temporary location, and rebuild the original stream after you've made whatever changes you want.
 
KateDJ (included with the libkate distribution) is a GUI program using wxPython, a Python module for the wxWidgets GUI library, and the oggz tools (both of which need to be installed separately if they are not already).
 
The procedure consists of:
 
* Run KateDJ
* Click 'Load Ogg stream' and select the file to load
* Click 'Demux file' to decode Kate streams in a temporary location
* Edit the Kate streams (a message box tells you where they are placed)
* When done, click 'Remux file from parts'
* If any errors are reported, continue editing until the remux step succeeds
 
== Frequently Asked Questions ==
 
=== Does libkate work on other platforms than Linux? ===
 
Yes, libkate is not Linux specific in any way. It optionally relies on libogg
and libpng, two libraries widely ported to various platforms.
It has been reported to work on Windows and MacOS X as well as UNIX platforms.
 
However, libtiger, a rendering library for Kate streams, relies on Pango and Cairo,
which are not easy to build on Windows, though they can be.
The Tiger renderer is however completely separate from libkate, and is not needed
for full encoding and decoding of Kate streams.
 
=== Where can I find some example files ? ===
 
The libkate distribution can generate various examples, but already built files
can be found here:
[http://people.xiph.org/~oggk/elephants_dream/elephantsdream-with-subtitles.ogg]
[http://stallman.org/fry/Stephen_Fry-Happy_Birthday_GNU-nq_600px_425kbit.ogv]


The custom format It is just used as a quick way to define data to create a Kate bitstream.
These files use raw text only.
Tools might be created to create a Kate bitstream from various data formats, such as existing
subtitle formats (SSA, etc). I currently have a patch to mplayer that can create Kate streams
from any of the subtitle formats it supports, so creating Kate streams is easy.


The Kate bitstreams encoded and decoded by those tools, however, are (supposed to be)
[[Category:Ogg Mappings]]
correct for this specification, provided their input is correct.

Revision as of 15:12, 23 July 2013

Disclaimer

This is not a Xiph codec, though it may be embedded in Ogg alonside other Xiph codecs, such as Vorbis and Theora. As such, please do not assume that Xiph has anything to do with this, much less responsibility.

What is Kate?

Kate is an overlay codec, originally designed for karaoke and text, that can be multiplixed in Ogg. Text and images can be carried by a Kate stream, and animated. Most of the time, this would be multiplexed with audio/video to carry subtitles, song lyrics (with or without karaoke data), etc, but doesn't have to be.

Series of curves (splines, segments, etc) may be attached to various properties (text position, font size, etc) to create animated overlays. This allows scrolling or fading text to be defined. This can even be used to draw arbitrary shapes, so hand drawing can also be represented by a Kate stream.

Example uses of Kate streams are movie subtitles for Theora videos, either text based, as may be created by ffmpeg2theora, or image based, such as created by Thoggen (patching needed), and lyrics, as created by oggenc, from vorbis-tools.

Why a new codec?

As I was adding support for Theora, Speex and FLAC to some software of mine, I found myself wanting to have song lyrics accompanying Vorbis audio. Since Vorbis comments are limited to the headers, one can't add them in the stream as they are sung, so another multiplexed stream would be needed to carry them.

The three possible bases usable for such a codec I found were Writ, CMML, and OGM/SRT.

  • Writ is an unmaintained start at an implementation of a very basic design, though I did find an encoder/decoder in py-ogg2 later on - I'd been quicker to write Kate from scratch anyway.
  • CMML is more geared towards encapsulating metadata about an accompanying stream, rather than being a data stream itself, and seemed complex for a simple use, though I have now revised my view on this - besides, it seems designed for Annodex (which I haven't had a look at), though it does seems relatively generic for use outwith Annodex - though it is being "repurposed" as timed text now, bringing it closer to what I'm doing
  • OGM/SRT, which I only found when I added Kate support to MPlayer, is shoehorning various data formats into an Ogg stream, and just dumps the SRT subtitle format as is, AFAICS (though I haven't looked at this one in detail, since I'd already had a working Kate implementation by that time)

I then decided to roll my own, not least because it's a fun thing to do.

I found other formats, such as USF (designed for inclusion in Matroska) and various subtitle formats, but none were designed for embedding inside an Ogg container.

Overview of the Kate bitstream format

I've taken much inspiration from Vorbis and Theora here. Headers and packets (as well as the API design) follow the design of these two codecs.

A rough overview (see Format specification for more details) is:

Headers packets:

  • ID header [BOS]: magic, version, granule fraction, encoding, language, etc
  • Comment header: Vorbis comments, as per Vorbis/Theora streams
  • Style definitions header: a list of predefined styles to be referred to by data packets
  • Region definitions header: a list of predefined regions to be referred to by data packets
  • Curves definitions header: a list of predefined curves to be referred to by data packets
  • Motion definitions header: a list of predefined motions to be referred to by data packets
  • Palette definitions header: a list of predefined palettes to be referred to by data packets
  • Bitmap definitions header: a list of predefined bitmaps to be referred to by data packets
  • Font mapping definitions header: a list of predefined font mappings to be referred to by data packets

Other header packets are ignored, and left for future expansion.

Data packets:

  • text data: text/image and optional motions, accompanied by optional overrides for style, region, language, etc
  • keepalive: can be emitted at any time to help a demuxer know where we're at, but those packets are optional
  • repeats: a verbatim repeat of a text packet's payload, in order to bound any backward seeking needed when starting to play a stream partway through. These are also optional.
  • end data [EOS]: marks the end of the stream, it doesn't have any useful payload

Other data packets are ignored, and left for future expansion.

The intent of the "keepalive" packet is to be sent at regular intervals when no other packet has been emitted for a while, to help seeking code find a Kate page more easily.

Things of note:

  • Kate is a discontinuous codec, as defined in ogg-multiplex.html in the Ogg documentation, which means it's timed by start granule, not end granule (as Theora and Vorbis are).
  • All data packets are on their own page, for two reasons:
    • Ogg keeps track of granules at the page level, not the packet level
    • if no text event happens for a while after a particular text event, we don't want to delay it so a larger page can be issued

See also Problems to solve: Seeking and memory.

  • The granule encoding is not a direct time/granule correspondence, see the granule encoding section.
  • The EOS packet should have a granule position greater than or equal to the end time of all events.
  • User code doesn't have to know the number of headers to expect; this is moved inside the library code (as opposed to Vorbis and Theora).
  • The format contains hooks so that additional information may be added in future revisions while keeping backward compatibility (old decoders will correctly parse, but ignore, the new information).

Format specification

The Kate bitstream format consists of a number of sequential packets. Packets can be either header packets or data packets. All header packets must appear before any data packet.

Header packets must appear in order. Decoding of a data packet is not possible until all header packets have been decoded.

Each Kate packet starts with a one byte type. A type with the MSB set (ie, between 0x80 and 0xff) indicates a header packet, while a type with the MSB cleared (ie, between 0x00 and 0x7f) indicates a data packet. All header packets then have the Kate magic, from byte offset 1 to byte offset 7 ("kate\0\0\0"). Note that this applies only to header packets: data packets do not contain the Kate signature.

Since the ID header must appear first, a Kate stream can be recognized by comparing the first eight bytes of the first packet with the signature string "\200kate\0\0\0".
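
For example, a demuxer could detect a Kate stream along these lines (a minimal sketch, using only the byte layout described above):

 #include <string.h>
 #include <stddef.h>

 /* Returns 1 if the first packet of a logical stream looks like a
    Kate ID header, 0 otherwise. */
 static int is_kate_bos(const unsigned char *data, size_t len)
 {
     return len >= 8
         && data[0] == 0x80                          /* ID header packet type */
         && memcmp(data + 1, "kate\0\0\0", 7) == 0;  /* 7 byte codec magic */
 }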


When embedded in Ogg, the first packet in a Kate stream (always packet type 0x80, the ID header packet) must be placed on a separate page. The corresponding Ogg packet must be marked as beginning of stream (BOS). All subsequent header packets must be on one or more pages. Subsequently, each data packet must be on a separate page.

The last data packet must be the end of stream packet (packet type 0x7f).

When embedded in Ogg, the corresponding Ogg packet must be marked as end of stream (EOS).

As per the Ogg specification, granule positions must be non-decreasing within the stream. Header packets have granule position 0.

Currently existing packet types are:

headers:
0x80 ID header (BOS)
0x81 Vorbis comment header
0x82 regions list header
0x83 styles list header
0x84 curves list header
0x85 motions list header
0x86 palettes list header
0x87 bitmaps list header
0x88 font ranges and mappings header
data:
0x00 text data (including optional motions and overrides)
0x01 keepalive
0x02 repeat
0x7f end packet (EOS)


The format described here is for bitstream version 0.x. As of 19 December 2008, the latest bitstream version is 0.4.

For more detailed information, refer to the format documentation in libkate (see URL below in the Downloading section).

Following is the definition of the ID header (packet type 0x80). This works out to a 64 byte ID header. This is the header that should be used to detect a Kate stream within an Ogg stream.


 0               1               2               3              |
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1| Byte
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| packtype      | Identifier char[7]: 'kate\0\0\0'              | 0-3
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| kate magic continued                                          | 4-7
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| reserved - 0  | version major | version minor | num headers   | 8-11
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| text encoding | directionality| reserved - 0  | granule shift | 12-15
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| cw sh |  canvas width         | ch sh | canvas height         | 16-19
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| reserved - 0                                                  | 20-23
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| granule rate numerator                                        | 24-27
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| granule rate denominator                                      | 28-31
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| language (NUL terminated)                                     | 32-35
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| language (continued)                                          | 36-39
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| language (continued)                                          | 40-43
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| language (continued)                                          | 44-47
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| category (NUL terminated)                                     | 48-51
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| category (continued)                                          | 52-55
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| category (continued)                                          | 56-59
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| category (continued)                                          | 60-63
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+


The fields cw sh, canvas width, ch sh, and canvas height were introduced in bitstream 0.3. Earlier bitstreams will have 0 in these fields.

language and category are NUL-terminated ASCII strings. Language follows RFC 3066, though the field obviously will not accommodate language tags with lots of subtags.

Category is currently loosely defined, and I haven't yet found a nice way to present it generically, but it is meant for automatic classification of the various multiplexed Kate streams (eg, to recognize that some streams are subtitles (in a set of languages), and some others are commentary (in a possibly different set of languages), etc).

API overview

libkate offers an API very similar to that of libvorbis and libtheora, as well as an extra higher level decoding API.

Here's an overview of the three main modules:

Decoding

Decoding is done in a way similar to libvorbis. First, initialize a kate_info and a kate_comment structure. Then, read headers by calling kate_decode_headerin. Once all headers have been read, a kate_state is initialized for decoding using kate_decode_init, and kate_decode_packetin is called repeatedly with data packets. Events (eg, text) can be retrieved via kate_decode_eventout.
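
In outline, a decoding loop built on those calls might look like this (a sketch only: error handling is elided, get_next_packet is a hypothetical demuxer hook, and the return value conventions shown in the comments are assumptions to check against the libkate documentation):

 #include <kate/kate.h>

 extern int get_next_packet(kate_packet *kp); /* hypothetical demuxer hook */

 void decode_stream(void)
 {
     kate_info ki;
     kate_comment kc;
     kate_state k;
     kate_packet kp;

     kate_info_init(&ki);
     kate_comment_init(&kc);

     /* feed header packets; a positive return value is assumed to mean
        all headers have been decoded */
     while (get_next_packet(&kp)) {
         if (kate_decode_headerin(&ki, &kc, &kp) > 0) break;
     }

     kate_decode_init(&k, &ki);

     /* feed data packets, retrieving events as they become active */
     while (get_next_packet(&kp)) {
         const kate_event *ev = NULL;
         kate_decode_packetin(&k, &kp);
         if (kate_decode_eventout(&k, &ev) == 0 && ev) {
             /* use ev (text, timing, motions, etc) */
         }
     }

     kate_clear(&k);
     kate_info_clear(&ki);
     kate_comment_clear(&kc);
 }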

Encoding

Encoding is also done in a way similar to libvorbis. First initialize a kate_info and a kate_comment structure, and fill them out as needed. kate_encode_headers will create ogg packets from those. Then, kate_encode_text is called repeatedly for all the text events to add. When done, calling kate_encode_finish will create an end of stream packet.
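
A matching encoding sketch, under the same caveats (error handling elided, write_packet is a hypothetical muxer hook, and the kate_encode_headers loop convention is an assumption):

 #include <string.h>
 #include <kate/kate.h>

 extern void write_packet(const kate_packet *kp); /* hypothetical muxer hook */

 void encode_stream(void)
 {
     kate_info ki;
     kate_comment kc;
     kate_state k;
     kate_packet kp;
     const char *text = "This is a text";

     kate_info_init(&ki);
     kate_info_set_language(&ki, "en");
     kate_info_set_category(&ki, "subtitles");
     kate_comment_init(&kc);

     kate_encode_init(&k, &ki);

     /* emit header packets, assumed here to be produced one per call
        until a positive value signals the last one */
     while (kate_encode_headers(&k, &kc, &kp) == 0)
         write_packet(&kp);

     /* one text event, active from 5 to 10 seconds */
     kate_encode_text(&k, 5.0, 10.0, text, strlen(text), &kp);
     write_packet(&kp);

     /* end of stream packet */
     kate_encode_finish(&k, -1, &kp);
     write_packet(&kp);

     kate_clear(&k);
     kate_info_clear(&ki);
     kate_comment_clear(&kc);
 }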

High level decoding API

There are only 3 calls here:

kate_high_decode_init
kate_high_decode_packetin
kate_high_decode_clear

Here, all Ogg packets are sent to kate_high_decode_packetin, which does the right thing (header/data classification, decoding, and event retrieval). Note that you do not get access to the comments directly using this, but you do get access to the kate_info via events.

The libkate distribution includes commented examples for each of those.

Additionally, libkate includes a layer (liboggkate) to make it easier to use when embedded in Ogg. While the normal API uses kate_packet structures, liboggkate uses ogg_packet structures.

The high level decoding API does not have an Ogg specific layer, but functions exist to wrap a kate_packet around a memory buffer (such as the one an ogg_packet points to).
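
Combining these remarks, decoding from raw memory buffers with the high level API could look like this (a sketch; kate_packet_wrap is assumed to be the wrapping function alluded to above, and the kate_high_decode_packetin signature shown is an assumption to check against the libkate headers):

 #include <stddef.h>
 #include <kate/kate.h>

 extern int get_next_buffer(const void **data, size_t *len); /* hypothetical demuxer hook */

 void high_level_decode(void)
 {
     kate_state k;
     const void *data;
     size_t len;

     kate_high_decode_init(&k);

     while (get_next_buffer(&data, &len)) {
         kate_packet kp;
         const kate_event *ev = NULL;
         kate_packet_wrap(&kp, len, data);  /* wraps the buffer, no copy */
         /* header/data classification happens inside; ev is set when a
            new event becomes active */
         if (kate_high_decode_packetin(&k, &kp, &ev) < 0) break;
         if (ev) {
             /* ev gives access to the kate_info, as noted above */
         }
     }

     kate_high_decode_clear(&k);
 }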

Support

Among the software with Kate support:

  • VLC
  • ffmpeg2theora
  • liboggz
  • liboggplay
  • Cortado (wikimedia version)
  • vorbis-tools

I have patches for the following with Kate support:

  • MPlayer
  • xine
  • GStreamer
  • Thoggen
  • Audacious
  • and more...

These may be found in the libkate source distribution (see Downloading for links).

In addition, libtiger is a rendering library for Kate streams using Pango and Cairo, though it is not quite API stable yet (no major changes are expected, however).

Granule encoding

Ogg

Ogg leaves the encoding of granules up to a particular codec, only mandating that granules be non-decreasing with time.

The Kate bitstream format uses a linear mapping between time and granule, described here.

A Kate granule position is composed of two different parts:

- a base granule, in the high bits
- a granule offset, in the low bits
+----------------+----------------+
| base           | offset         |
+----------------+----------------+

The number of bits these parts occupy is variable, and each stream may choose how many bits to dedicate to each. The kate_info structure for a stream holds that information in the granule_shift field, so each part may be reconstructed from a granulepos.

The timestamp T of a given Kate packet is split into a base B and offset O, and these are stored in the granulepos of that packet. The split is done such that B is the time of the earliest event still active at that time, and O is the time elapsed between B and T. Thus, T = B + O. This mimics the way Theora stores its own timestamps in granulepos, where the base acts as a keyframe, and the offset acts as the position of an intra frame relative to the previous keyframe. Since Kate allows time overlapping events, however, the choice of the base is slightly more complex, as it may not be the starting time of the previous event.

The kate_info structure for a stream holds a rational fraction representing the time span of granule units for both the base and the offset parts.

The granule rate is defined by the two fields:

kate_info::gps_numerator
kate_info::gps_denominator


The number of bits reserved for the offset is defined by the field:

kate_info::granule_shift
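
As an illustration, reconstructing the two parts of a granule position and converting it back to a time could be done as follows (a sketch based on the description above, assuming the granule rate fraction expresses granules per second):

 #include <stdint.h>

 /* Split a granulepos into base and offset using the stream's
    granule_shift, then convert to seconds using the granule rate. */
 static double granule_time(int64_t granulepos,
                            int granule_shift,
                            int32_t gps_numerator,
                            int32_t gps_denominator)
 {
     int64_t base   = granulepos >> granule_shift;
     int64_t offset = granulepos - (base << granule_shift);
     /* T = B + O, in granule units, scaled by the granule rate */
     return (double)(base + offset) * gps_denominator / gps_numerator;
 }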

Generic timing

Kate data packets (data packet type 0) include timing information (start time, end time, and time of the earliest event still active). All of these are stored as 64-bit values at the rate defined by the granule rate, so they do not suffer from the granule_shift space limitation.

This also allows for Kate streams to be stored in other containers.

Motion

The Kate bitstream format includes motion definitions, originally for karaoke purposes, but which can be used for more general purposes, such as line based drawing, or animation of the text (position, color, etc).

Motions are defined by means of a series of curves (static points, segments, splines (catmull-rom, bezier, and b-splines)). A 2D point can be obtained from a motion for any timestamp during the lifetime of a text. This can be used for moving a marker in 2D above the text for karaoke, or to use the x coordinate to color text as the motion position passes each letter or word, etc. Motions have an attached semantics so the client code knows how to use a particular motion (predefined semantics include text color, text position, etc).

Since a motion can be composed of an arbitrary number of curves, each of which may have an arbitrary number of control points, complex motions can be achieved. If the motion is the main object of an event, it is even possible to have an empty text, and use the motion as a virtual pencil to draw arbitrary shapes. Even on-the-fly handwriting subtitles could be done this way, though this would require a lot of control points, and would not be able to be used with text-to-speech.

As a proof of concept, I also have a "draw chat" program where two people can draw, and the shapes are turned to b-splines and sent as a kate motion to be displayed on the other person's window.

It is also possible for motions to be discontinuous - simply insert a curve of 'none' type. While the timestamp lies within such a curve, no 2D point will be generated. This can be used to temporarily hide a marker, for instance.

It is worth mentioning that pauses in the motion can be trivially included by inserting, at the right time and for the right duration, a simple linear interpolation curve with only two equal points, equal to the position the motion is supposed to pause at.

Kate defines a set of predefined mappings so that each decoder user interprets a motion in the same way. A mapping is coded on 8 bits in the bitstream, and the first 128 are reserved for Kate, leaving 128 for application specific mappings, to avoid constraining creative uses of that feature. Predefined mappings include frame (eg, 0-1 points are mapped to the size of the current video frame), or region, to scale 0-1 to the current region. This allows curves to be defined without knowing in advance the pixel size of the area it should cover.

For uses which require more than two coordinates (eg, text color, where 4 (RGBA) values are needed), Kate predefines the semantics text_color_rg and text_color_ba, so a 4D point can be obtained using two different motions.
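
A renderer would then sample both motions at the same timestamp and recombine the two 2D points into one color, along these lines (a trivial sketch; the struct names are illustrative, not part of the libkate API):

 typedef struct { double x, y; } point2d;    /* a point sampled from a motion */
 typedef struct { double r, g, b, a; } rgba;

 /* Combine the points of the text_color_rg and text_color_ba motions,
    sampled at the same timestamp, into a single RGBA color. */
 static rgba combine_color(point2d rg, point2d ba)
 {
     rgba c = { rg.x, rg.y, ba.x, ba.y };
     return c;
 }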

There are higher level constructs, such as morphing between two styles, or predefined karaoke effects. More are planned to be added in the future.

See also Trackers.

Trackers

Since attaching motions to text position, etc, makes it hard for the client to keep track of everything, doing interpolation, etc, the library supplies a tracker object which handles the interpolation of the relevant properties. Once initialized with a text and a set of motions, the client code can give the tracker a new timestamp, and get back the current text position, text color, etc.

Using a tracker is not necessary, if one wants to use the motions directly, or just ignore them, but it makes life easier, especially considering that the order in which motions are applied does matter (to be defined formally, but the current source code is informative at this point).


The Kate file format

Though this is not a feature of the bitstream format, I have created a text file format to describe a series of events to be turned into a Kate bitstream. At its minimum, the following is a valid input to the encoder:

kate {
event { 00:00:05 --> 00:00:10 "This is a text" }
}

This will create a simple stream with "This is a text" emitted at an offset of 5 seconds into the track, lasting 5 seconds to an end time at 10 seconds.
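
Such a file can then be compiled into a Kate bitstream with the reference encoder described below, for instance (language and category chosen for illustration):

   kateenc -t kate -l en -c subtitles -o output.ogg input.kate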

Motions, regions, styles can be declared in a definitions block to be reused by events, or can be defined inline. Defining those in the definitions block places them in a header so they can be reused later, saving space. However, they can also be defined in each event, so they will be sent with the event. This allows them to be generated on the fly (eg, if the bitstream is being streamed from a realtime input).

For convenience, the Kate file format also allows C style macros, though without parameters.

Please note that the Kate file format is fully separate from the Kate bitstream format. The difference between the two is similar to the difference between a C source file and the resulting object file, when compiled.

Note that the format is not based on XML for a very parochial reason: I very much dislike editing XML by hand, as it's really hard to read. XML is really meant for machines to generically parse text data in a shared syntax but with possibly unknown semantics, and I need these text representations to be easily editable.

This also implies that there could be an XML representation of a Kate stream, which would be useful if one were to make an editor that worked on a higher level than the current all-text representation, and it is something that might very well happen in the future, in parallel with the current format.

Karaoke

Karaoke effects rely on motions, and there will be predefined higher level ways of specifying timings and effects, two of which are already done. As an example, this is a valid Karaoke script:

kate {
simple_timed_glyph_style_morph {
from style "start_style" to style "end_style"
"Let " at 1.0
"us " at 1.2
"sing " at 1.4
"to" at 2.0
"ge" at 2.5
"ther" at 3.0
}
}

The syllables will change from one style to another as time passes. The definition of the start_style and end_style styles is omitted for brevity.


Problems to solve

There are a few things to solve before the Kate bitstream format can be considered good enough to be frozen:

Note: the following is mostly solved, and the bitstream is now stable, and has been backward and forward compatible since the first released version. This will be updated when I get some time.

Seeking and memory

When seeking to a particular time in a movie with subtitles, we may end up at a place where a subtitle has already started, but has not been removed yet. Pure streaming doesn't have this problem, as the player will have seen the subtitle when it was issued (as opposed to, say, Vorbis, for which all data valid now is decoded from the last packet). With Kate, a text string valid now may have been issued long ago.

I see three possible ways to solve this:

  • each data packet includes the granule of the earliest still active packet (if none, this will be the granule of this very packet)
    • this means seeks are two phased: first seek, find the next Kate packet, and seek again if the granule of the earliest still active packet is less than the original seeked granule. This implies support code in players to do the double seek.
  • use "reference frames", a bit like Theora does, where the granule position is split in several fields: the higher bits represent a position for the reference frame, and the lowest bits a delta time to the current position. When seeking to a granule position, the lower bits are cleared off, yielding the granule position of the previous reference frame, so the seek ends up at the reference frame. The reference frame is a sync point where any active strings are issued again. This is a variant of the method described in the Writ wiki page, but the granule splitting avoids any "downtime".
    • this requires reissuing packets, and it doesn't feel right (and wastes space).
    • it also requires "dummy" decoding of Kate data from the reference frame to the actual seek point to fully refresh the state "memory".
  • A variant of the two-granules-in-one system used by libcmml, where the "back link" points to the earliest still active string, rather than the previous one (this allows a two phase seek, rather than a multiphase seek, hopping back from event to event, with no real way to know whether there is a previous event which is still active - I suppose CMML has no need to know this, if its "clips" do not overlap - mine can).
    • Such a system considerably shortens the usable granule space, though it can do a one phase seek, if I understand the system correctly, which I am not certain of.
      • Well, it seems it can't do a one phase seek anyway.
  • Additionally, it could be possible to emit simple "keepalive" packets at regular intervals to help a seek algorithm to sync up to the stream without needing too much data reading - this helps for discontinuous streams where there could be no pages for a while if no data is needed at that time.

Text encoding

A header field declares the text encoding used in the stream. At the moment, only UTF-8 is supported, for simplicity. There are no plans to support other encodings, such as UTF-16, at the moment.

Note that strings included in the header (language, category) are not affected by that text encoding (rather obviously for the language field itself). These are ASCII.

The actual text in events may include simple HTML-like markup (at the moment, allowed markup is the same as the one Pango uses, but more markup types may be defined in the future). It is also possible to ask libkate to remove this markup if the client prefers to receive plain text without the markup.

Language encoding

A header field defines the language (if any) used in the stream (this can be overridden in a data packet, but this is not relevant to this point). At the moment, my test code uses ISO 639-1 two letter codes, but I originally thought to use RFC 3066 tags. However, matching a language to a user selection may be simpler for user code if the language encoding is kept simple. At the moment, I tend to favor allowing both two letter tags (eg, "en") and secondary tags (like "en_EN"), as RFC 3066 tags can be quite complex, but I welcome comments on this.

If a stream contains more than one language, there usually is a predominant language, which can be set as the default language for the stream. Each event can then have a language override. If there is no predominant language, and it is not possible to split the stream into multiple substreams, each with its own language, then it is possible to use the "mul" language tag, as a last resort.

Bitstream format for floating point values

Floating point values are turned into a 16.16 fixed point format, then stored in a bitpacked format, storing the number of zero bits at the head and tail of the values once per stream, and the remainder bits for all values in the stream. This seems to yield good results (typically a 50% reduction over raw 32 bit writes, and 70% over snprintf based storage), and has the big advantage of being portable (eg, independent of any IEEE format). However, this means reduced precision due to the quantization to 16.16. I may add support for variable precision (eg, 8.24 fixed point formats) to alleviate this. This would however mean less space savings, though these are likely to be insignificant when Kate streams are interleaved with a video.
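
The quantization step itself is simple (a sketch; the head/tail zero bit packing is left out):

 #include <stdint.h>

 /* Quantize a floating point value to 16.16 fixed point, as described
    above. With rounding, the absolute quantization error is at most 2^-17. */
 static int32_t to_fixed_16_16(double v)
 {
     return (int32_t)(v * 65536.0 + (v >= 0 ? 0.5 : -0.5));
 }

 static double from_fixed_16_16(int32_t f)
 {
     return f / 65536.0;
 }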

  • Though this is not a Kate issue per se, the motion feature is very difficult to use without a curve editor. While tools may be coded to create a Kate bitstream for various existing subtitle formats, it is not certain it will be easy to find a good authoring tool for a series of curves. That said, it's not exactly difficult to do if you know a widget set.

Higher dimensional curves/motions

It is quite annoying to have to create two motions to control a color change, due to curves being restricted to two dimensions. I may add support for arbitrary dimensions. It would also help for 1D motions, like changing the time flow, where one coordinate is simply ignored at the moment. Alternatively, changes could be made to the Kate file format to hide the two dimensionality and allow simpler specification of non-2 dimensional motions, but still map them to 2D in the kate bitstream format.

Category definition

The category field in the BOS packet is a 16 byte text field (15 really, as it is zero terminated in the bitstream itself). Its goal is to provide the reader with a short description of what kind of information the stream contains, eg subtitles, lyrics, etc. This would be displayed to the user, possibly allowing some streams to be turned on and off.

Since this category is meant primarily for a machine to parse, categories will be kept to ASCII. When a player recognizes a category, it is free to replace its name with one in the user's language if it prefers. Even in English, the "lyrics" category could be displayed by a player as "Lyrics".

Since this is a free text field rather than an enumeration, it would be good to have a list of common predefined category names that Kate streams can use.

This is a list of proposed predefined categories, feedback/additions welcome:

  • subtitles - the usual movie subtitles, as text
  • spu-subtitles - movie subtitles in DVD style paletted images
  • lyrics - song lyrics

Please remember the 15 character limit if proposing other categories.

Note that the list of categories is subject to change, and will likely be replaced by new, more "identifier like" ones. The three above, however, would be kept for backward compatibility as they're already used.

Text to speech

One of the goals of the Kate bitstream format is that text data can be easily parsed by the user of the decoder, so any additional information, such as style, placement, karaoke data, etc, can be stripped to leave only the bare text. This is in view of allowing text-to-speech software to use Kate bitstreams as a bandwidth-cheap way of conveying speech data. It could also allow things like e-books which can be either read or listened to from the same bitstream. I have seen no reference to this being used anywhere, but I see no reason why the granule progression should be temporal rather than user controlled, such as by using a "next" button which would bump the granule position by a preset amount, simulating turning a page. This would be close to necessary for text-to-speech, as the wall time duration of the spoken speech is not known in advance to the Kate encoder, and can't be mapped to a time based granule progression. All text strings triggered consecutively between the two granule positions would then be read in order.

Possible additions

Embedded binary data

Images and font mappings can be included within a Kate stream.

Images

Though this could be misused to interfere with the ability to render as text-to-speech, Kate can use images as well as text. The same caveat as for fonts applies with regard to data duplication.

Complex images might however be best left to a multiplexed OggSpots or OggMNG stream, unless the images mesh with the text (eg, graphical exclamation points, custom fonts (see next paragraph), etc).

There is support for simple paletted bitmap images, with a variable length palette of up to 256 colors (in fact, sized in powers of 2 up to 256) and matching pixel data in as many bits per pixel as can address the palette. Palettes and images are stored separately, so can be used with one another with no fixed assignment.

Palettes and bitmaps are put in two separate headers for later use by reference, but can also be placed in data packets, as with motions, etc, if they are not going to be reused.

PNG bitmaps can also be embedded in a Kate stream. These do not have associated palettes (but the PNGs themselves may or may not be paletted). There is no support for decoding PNG images in libkate itself, so a program will have to use libpng (or similar code) to decode the PNG image. For instance, the libtiger rendering library uses Cairo to decode and render PNG images in Kate streams.

This can be used to have custom fonts, so that raw text is still available if the stream creator wants a custom look.

I expect that the need for more than 256 colors in a bitmap, or non palette bitmap data, would be best handled by another codec, eg OggMNG or OggSpots. The goal of images in a Kate stream is to mesh the images with the text, not to have large images by themselves.

On the other hand, interesting karaoke effects could be achieved by having MNG images instead of simple paletted bitmaps in a Kate stream. Comments would be most welcome on whether this is going too far, however.

I am also investigating SVG images. These allow for very small footprint images for simple vector drawings, and could be very useful for things like background gradients below text.

A possible solution to the duplication issue is to have another stream in the container stream, which would hold the shared data (eg, fonts), which the user program could load, and which could then be used by any Kate (and other) stream. Typically, this type of stream would be a degenerate stream with only header packets (so it is fully processed before any other stream presents data packets that might make use of that shared data), and all payload such as fonts being contained within the headers. Thinking about it, it has parallels with the way Vorbis stores its codebooks within a header packet, or even the way Kate stores the list of styles within a header packet.

Fonts

Custom fonts are merely a set of ranges mapping unicode code points to bitmaps. As this implies, fonts are bitmap fonts, not vector fonts, so scaling, if supported by the rendering client, may not look as good as with a vector font.

A style may also refer to a font name to use (eg, "Tahoma"). These fonts may or may not be available on the playing system, however, since the font data is not included in the stream, just referenced by name. For this reason, it is best to keep to widely known fonts.

Reference encoder/decoder

An encoder (kateenc) and a decoder (katedec) are included in the tools directory. The encoder supports input from several different formats:

  • a custom text based file format (see The Kate file format), which is by no means meant to be part of the Kate bitstream specification itself
  • SubRip (.srt), the most common subtitle format I found
  • LRC lyrics format.

As an example for the widely used SRT subtitles format, the following command line creates a Kate subtitles stream from an SRT file:

kateenc -l en -c subtitles -t srt -o subtitles.ogg subtitles.srt

The reverse is possible, to recover an SRT file from a Kate stream, with katedec.

Note that the subtitles.ogg file should then be multiplexed into the A/V stream, using either ogg-tools or oggz-tools.

The Kate bitstreams encoded and decoded by those tools are (supposed to be) correct for this specification, provided their input is correct.

Next steps

Continuations

Continuations are a way to add to existing events, and are mostly meant for motions. When streaming in real time, the motions applied to events may not be known in advance (for instance, in a draw chat program where two programs exchange Kate streams, the drawing motions are only known as they are drawn). Continuations will allow an event to be extended in time, and motions to be appended to it. This is only useful for streaming, as when stored in a file, everything is already known in advance.

A rendering library

This will allow easier integration in other packages (movie players, etc). I have started working on an implementation using Cairo and Pango, though I'm still at the early stages. I might add support for embedding vector fonts in a Kate stream if I go that way; I still need to think about this. Another point of note is that when this library is available, it will be easier to add capabilities such as rotation, scaling, etc, to the bitstream, since this would not cause too much work for playing programs using the rendering library. It is expected that these additions would stay backward compatible (eg, an old player would ignore this information but still correctly decode the information it can work with from a newly encoded stream).

An XML representation

While I purposefully did not write Kate description files in XML due to me finding editing XML such a chore, it would be nice to be able to losslessly convert between the more user friendly representation and an XML document, so one can do what one does with XML documents, like transformations.

And after all, some people might prefer editing the XML version.

Packaging

It would be really nice to have packages for libkate/libtiger for many distros.

If you're a packager for a distro which doesn't yet have packages for libkate or libtiger, please consider helping :)

In particular, packages for Debian would be grand.

Matroska mapping

The codec ID is "S_KATE".

As with Theora and Vorbis, Kate headers are stored in the private data as Xiph-laced packets:

Byte 0: number of packets present, minus 1 (there must be at least one packet) - let this number be NP
Bytes 1..n: lengths of the first NP packets, coded in xiph style lacing
Bytes n+1..end: the data packets themselves concatenated one after the other

Note that the length of the last packet isn't encoded, it is deduced from the sizes of the other packets and the total size of the private data.

This mapping is similar to the Vorbis and Theora mappings, with the caveat that one should not expect a set number of headers.
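
A parser for that private data could look like this (a sketch following the layout just described, with minimal bounds checking):

 #include <stddef.h>

 /* Parse Xiph-laced Matroska private data into packet offsets/sizes.
    offsets[] and sizes[] must have room for at least 256 entries
    (the packet count is at most 255 + 1).
    Returns the number of packets found, or -1 on malformed data. */
 static int parse_xiph_lacing(const unsigned char *data, size_t len,
                              size_t *offsets, size_t *sizes)
 {
     size_t p = 0, total = 0;
     int np, i;

     if (len < 1) return -1;
     np = data[p++] + 1;              /* byte 0 is the packet count minus 1 */

     /* lengths of all but the last packet, Xiph style: a run of 255
        bytes terminated by a byte below 255, summed together */
     for (i = 0; i < np - 1; ++i) {
         size_t sz = 0;
         for (;;) {
             if (p >= len) return -1;
             sz += data[p];
             if (data[p++] < 255) break;
         }
         sizes[i] = sz;
         total += sz;
     }
     if (total > len - p) return -1;

     /* the last packet's length is deduced from the remaining bytes */
     sizes[np - 1] = len - p - total;

     for (i = 0; i < np; ++i) {
         offsets[i] = p;
         p += sizes[i];
     }
     return np;
 }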

Downloading

libkate encodes and decodes Kate streams, and is API and ABI stable.

The libkate source distribution is available at http://libkate.googlecode.com/.

A public git repository is available at http://git.xiph.org/?p=users/oggk/kate.git;a=summary.

libtiger renders Kate streams using Pango and Cairo, and is alpha, with API changes still possible.

The libtiger source distribution is available at http://libtiger.googlecode.com/.

A public git repository is available at http://git.xiph.org/?p=users/oggk/tiger.git;a=summary.

HOWTOs

These paragraphs describe a few ways to use Kate streams:

Text movie subtitles

Kate streams can carry Unicode text (that is, text that can represent pretty much any existing language/script). If several Kate streams are multiplexed along with a video, subtitles in various languages can be made for that movie.

An easy way to create such subtitles is to use ffmpeg2theora, which can create Kate streams from SubRip (.srt) format files, a simple but common text subtitles format. ffmpeg2theora 0.21 or later is needed.

At its simplest:

   ffmpeg2theora -o video-with-subtitles.ogg --subtitles subtitles.srt
     video-without-subtitles.avi

Several languages may be created and tagged with their language code for easy selection in a media player:

   ffmpeg2theora -o video-with-subtitles.ogg video-without-subtitles.avi
     --subtitles japanese-subtitles.srt --subtitles-language ja
     --subtitles welsh-subtitles.srt --subtitles-language cy
     --subtitles english-subtitles.srt --subtitles-language en_GB

Alternatively, kateenc (which comes with the libkate distribution) can create Kate streams from SubRip files as well. These can then be merged with a video with oggz-tools:

   kateenc -t srt -c SUB -l it -o subtitles.ogg italian-subtitles.srt
   oggz merge -o movie-with-subtitles.ogg movie-without-subtitles.ogg subtitles.ogg

This second method can also be used to add subtitles to a video which is already encoded to Theora, as it will not transcode the video again.


DVD subtitles

DVD subtitles are not text, but images. Thoggen, a DVD ripper program, can convert these subtitles to Kate streams (at the time of writing, Thoggen and GStreamer have not applied the necessary patches for this to be possible out of the box, so patching them will be required).

When configuring how to rip DVD tracks, any subtitles will be detected by Thoggen, and selecting them in the GUI will cause them to be saved as Kate tracks along with the movie.


Song lyrics

Kate streams carrying song lyrics can be embedded in an Ogg file. The oggenc Vorbis encoding tool from the Xiph.Org Vorbis tools allows lyrics to be loaded from a LRC or SRT text file and converted to a Kate stream multiplexed with the resulting Vorbis audio. At the time of writing, the patch to oggenc was not applied yet, so it will have to be patched manually with the patch found in the diffs directory.

   oggenc -o song-with-lyrics.ogg --lyrics lyrics.lrc --lyrics-language en_US song.wav

So called 'enhanced LRC' files (containing extra karaoke timing information) are supported, and a simple karaoke color change scheme will be saved out for these files. For more complex karaoke effects (such as more complex style changes, or sprite animation), kateenc should be used with a Kate description file to create a separate Kate stream, which can then be merged with a Vorbis only song with oggz-tools:

   oggenc -o song.ogg song.wav
   kateenc -t kate -c LRC -l en_US -o lyrics.ogg lyrics-with-karaoke.kate
   oggz merge -o song-with-karaoke.ogg lyrics-with-karaoke.ogg song.ogg

This latter method may also be used if you already have an encoded Vorbis song with no lyrics, and just want to add the lyrics without reencoding.


Metadata

Metadata can be attached to events, or to styles, bitmaps, regions, etc. Metadata are free form tag/value pairs, and can be used to enrich their attached data with extra information. However, how this information is interpreted is up to the application layer.

It is worth noting that an event may not have attached text, so it is possible to create an empty timed event with attached metadata.

For instance, let's say we have a documentary, with footage from various places, as well as short interviews, and we want two things:

  • tag footage with metadata about the location and date that footage was shot
  • subtitle the interviews and tag those subtitles with information about the speaker

You can then create an empty Kate event for each footage part, synchronized with the footage, and attach a new metadata item called GEO_LOCATION, filled with latitude and longitude of the place the footage was shot at. Similarly, for each subtitle event, a metadata item called SPEAKER can be attached.

An empty event to tag a 4 minute 20 second piece of footage, shot in Tokyo on 2011-08-12 and inserted at 18:30 into the documentary, could look like:

 event {
   00:18:30,000 --> 00:22:50,000
   meta "GEO_LOCATION" = "35.42; 139.42"
   meta "DATE" = "2011-08-12"
 }

Here's an example for a line spoken by Dr Joe Bloggs at 18:30 into the documentary:

 event {
   00:18:30,000 --> 00:18:32,000
   "Notice how the subtitles for my words have metadata attached to them"
   meta "SPEAKER" = "Dr Joe Bloggs"
   meta "URL" = "http://www.example.com/biography?name=Joe+Bloggs"
 }

Notice how another metadata item, URL, is also present. The application will have to be aware of those metadata in order to do something with them, though. Since they are free form, it is up to you to decide what metadata you want, and make use of it.

Note that metadata may be attached to other objects, such as regions. This way, you can for example create a region tagged with a name, and track a person's movements with that region. Or you can tag a bitmap with a copyright and a URL to a larger version of the image.


Changing a Kate stream embedded in an Ogg stream

If you need to change a Kate stream already embedded in an Ogg stream (eg, you have a movie with subtitles, and you want to fix a spelling mistake, or bring one of the subtitles forward in time, etc), you can do this easily with KateDJ, a tool that extracts Kate streams, decodes them to a temporary location, and rebuilds the original stream after you've made whatever changes you want.

KateDJ (included with the libkate distribution) is a GUI program using wxPython, a Python module for the wxWidgets GUI library, and the oggz tools (both need to be installed separately if they are not already).

The procedure consists of:

  • Run KateDJ
  • Click 'Load Ogg stream' and select the file to load
  • Click 'Demux file' to decode Kate streams in a temporary location
  • Edit the Kate streams (a message box tells you where they are placed)
  • When done, click 'Remux file from parts'
  • If any errors are reported, continue editing until the remux step succeeds

Frequently Asked Questions

Does libkate work on other platforms than Linux ?

Yes, libkate is not Linux specific in any way. It optionally relies on libogg and libpng, two libraries widely ported to various platforms. It has been reported to work on Windows and MacOS X as well as UNIX platforms.

However, libtiger, a rendering library for Kate streams, relies on Pango and Cairo, which are not easy to build on Windows, though they can be. The Tiger renderer is however completely separate from libkate, and is not needed for full encoding and decoding of Kate streams.

Where can I find some example files ?

The libkate distribution can generate various examples, but already built files can be found there: [1] [2]

These files use raw text only.