Talk:OggPCM Draft1
Do we need a signed/unsigned data flag?
- Not really. The data can easily be converted losslessly to a signed default. Unsigned 8-bit data (where 128 is the midpoint) is easily changed to signed, and changed back if being saved as RIFF/WAV (which only supports unsigned 8-bit). However, it wouldn't hurt to support it. Applications can be built to support one or multiple formats, requesting conversion when a format is not supported by the codec.
--Arc
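A minimal sketch of the lossless conversion mentioned above, assuming plain C and 8-bit samples (the function name is only illustrative):

 #include <stdint.h>
 #include <stddef.h>
 
 /* Convert unsigned 8-bit PCM (midpoint 128) to signed 8-bit in place.
    The mapping is a constant offset, so it is exactly reversible;
    applying the same XOR again converts back to unsigned (e.g. for
    writing RIFF/WAV). */
 static void u8_to_s8(uint8_t *samples, size_t count)
 {
     size_t i;
     for (i = 0; i < count; i++)
         samples[i] ^= 0x80;   /* 128 -> 0, 0 -> -128, 255 -> 127 */
 }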
- I don't agree with that. It just puts more conditional code into packages that would normally have only one native format and it gives them more opportunity to fail to support variants of the format. If it's fixed then a few packages will always have to modify the data, and most will never get it wrong. If it's variable then every package will have to do something sometimes, or fail occasionally.
--Gumboot 01:28, 8 Nov 2005 (PST)
- I see no reason to support any unsigned PCM format other than 8 bit. For instance, I know of no container format which supports unsigned 16 bit.
--Erikd
Do we need to record an int/float data flag?
- Some codecs (Vorbis) use floating point samples natively. Others only support int. Support for int/float data flag is thus important.
--Arc
- Please don't make determination of the data format depend on multiple fields. Instead use an enumeration, so that something like little endian 16 bit PCM can be specified as OGG_PCM_LE_PCM_16 and big endian 64 bit doubles can be specified as OGG_PCM_BE_FLOAT_64. This scheme is far more transparent and self-documenting. If the format field is 8 bits, this scheme supports 256 formats; if it is 16 bits, it will support 65536 formats.
I also suggest leaving the format associated with a value of zero as an invalid format. --Erikd
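Roughly what such an enumeration could look like, as a sketch only; the names and numeric values here are assumptions for illustration, not anything from the draft:

 /* One field fully identifies the sample format; zero stays invalid. */
 typedef enum {
     OGG_PCM_FMT_INVALID = 0,
     OGG_PCM_LE_PCM_16   = 1,  /* little endian, 16-bit signed integer */
     OGG_PCM_BE_PCM_16   = 2,  /* big endian, 16-bit signed integer    */
     OGG_PCM_LE_FLOAT_32 = 3,  /* little endian, 32-bit IEEE float     */
     OGG_PCM_BE_FLOAT_64 = 4   /* big endian, 64-bit IEEE double       */
     /* ... further formats as agreed ... */
 } OggPCMFormat;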
Do we need to offer an endian data flag? If not, which is used?
- LSB/MSB ordering can be changed losslessly; one should probably be settled on for the data and stuck with. Changing the endianness on the application side is a fairly low-CPU process in any event, and if the application uses the bitpacker, this isn't even an issue. Supporting both is possible, too, but it adds complexity to a format intended to be simple.
--Arc
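For reference, a byte swap of this kind is only a few lines of C (a generic sketch, not code from any draft):

 #include <stdint.h>
 #include <stddef.h>
 
 /* Swap the byte order of a buffer of 16-bit samples in place.
    The same pattern extends to 24- and 32-bit words. */
 static void swap16_buffer(uint16_t *samples, size_t count)
 {
     size_t i;
     for (i = 0; i < count; i++)
         samples[i] = (uint16_t)((samples[i] >> 8) | (samples[i] << 8));
 }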
- We should just standardize on little endian ordering for the data. It's commonly used and well supported in hardware and software. Any cross-architecture application that can deal with WAVs will already know how to support it.
--Jkoleszar 11:48, 9 Nov 2005 (PST)
- I agree that we should use little endian as the standard; however, I'm questioning whether big endian should be supported as well... after all, it'd be trivial for a plugin to convert from one to the other.
--Arc 13:11, 9 Nov 2005 (PST)
- Big and little endian data formats should both be supported with equal status. There should not even be a default; the endian-ness should be explicit.
--Erikd
Is it worth supporting a vorbiscomment header?
- It'd be useful to be able to carry information like what was decoded, or CDDB IDs, or replaygain information. Besides, if you don't put it in then five other people will do it five different ways.
--Arc
How does one interpret a file where the Bits per Sample is neither 32 nor 64 and the Data Type is float?
- One doesn't. Standardize on IEEE floats and be done with it. Simple, remember? :)
--Jkoleszar 11:48, 9 Nov 2005 (PST)
- I'm uncertain exactly what this question is asking. Hopefully the submitter can clarify?
--Arc 13:11, 9 Nov 2005 (PST)
- Many file formats (WAV, AIFF, AU and others) support 64 bit float data. WAV stores floats as little endian data and AIFF stores them as big endian data. OggPCM should support both 32 and 64 bit floats of both endian-nesses (is that a word?). I don't know of any other floating point format that needs consideration.
--Erikd
Are samples padded to some round number of bits?
- I don't know of any PCM formats for non-octet-based samples, but if you want to specify something, I'd say pack them into the MSBs of the next larger byte boundary, rounding toward zero, on a per-channel basis. This should allow software that knows how to handle 16 bit audio but not 10 bit to operate on the data.
--Jkoleszar 11:48, 9 Nov 2005 (PST)
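A sketch of the packing described above, assuming unsigned 10-bit samples (the function name is hypothetical):

 #include <stdint.h>
 
 /* Place a 10-bit sample in the most significant bits of a 16-bit word;
    the low 6 bits are left zero.  Software that only understands 16-bit
    audio can then treat the result as an ordinary 16-bit sample. */
 static uint16_t pack_10_in_16(uint16_t sample10)
 {
     return (uint16_t)((sample10 & 0x03FF) << 6);
 }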
- The occurrence of N bit PCM where N is not a multiple of 8 is so rare that it should probably be ignored. In addition, there really isn't any reason to treat 10 bit data packed into the 10 most significant bits of a 16 bit int any differently from a real 16 bit value. So why make any distinction?
--Erikd
Do we want/need the 32-bit data packet header?
- The issue of whether this is necessary was raised on the ogg-dev mailing list. With only a single header packet, it could be considered an unneeded complication; however, additional header packets (current or future) will make this a requirement. --Arc
- I can definitely see people wanting to use comment pages, so I'd say leave the header on the data pages as well. On the other hand, if ogg provides guarantees about the alignment of packet data from packetout, I could see getting rid of it since there are benefits to working on buffers aligned to larger boundaries on some architectures. As far as I can tell, either no guarantees are made, or you'll get a buffer aligned to a word boundary, in which case having the header has no penalty.
--Jkoleszar 11:48, 9 Nov 2005 (PST)
- I believe that 64-bit platforms still use 32-bit memory space (I may be wrong!). Yes, libogg2 buffers should always begin on a 32-bit word boundary, so the beginning of the data should also be on a boundary. This was done intentionally, as was the choice to use a three letter codec identifier for raw codecs (since the packet ID + codec ID = 32 bits this way), after an extended IRC discussion on the subject. If ending on a 64-bit boundary is something we're really worried about, we could always add 4 bytes, but I really don't think it should be necessary.
--Arc 13:11, 9 Nov 2005 (PST)
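Purely to illustrate the arithmetic mentioned above (this layout is an assumption for the example, not the draft's actual header definition): a one-byte packet ID plus a three-byte codec identifier fills exactly 32 bits, so the payload that follows starts on a 32-bit boundary whenever the packet itself does.

 #include <stdint.h>
 
 /* Illustration only: 1-byte packet ID + 3-byte codec ID = 4 bytes,
    so the data after this prefix begins on a 32-bit word boundary. */
 struct raw_codec_prefix {
     uint8_t packet_id;    /* 1 byte                           */
     uint8_t codec_id[3];  /* 3 bytes, e.g. "PCM" (illustrative) */
 };                        /* total: 4 bytes = 32 bits         */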
- On UltraSparc and Alpha CPUs (both 64 bit) accessing a 64 bit double at an address that is not 8-byte aligned causes a segmentation fault. However, accessing unaligned doubles on x86 (i.e. 32 bit) is slower than accessing aligned doubles. You might want to consider this.
--Erikd
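One common way to sidestep the alignment problem described above is to copy unaligned data out before using it; this is a generic C idiom, not anything specific to libogg2:

 #include <string.h>
 
 /* Read a 64-bit double from a buffer that may not be 8-byte aligned.
    memcpy avoids the unaligned load that faults on strict-alignment
    CPUs such as UltraSparc and Alpha. */
 static double read_double_unaligned(const unsigned char *p)
 {
     double d;
     memcpy(&d, p, sizeof d);
     return d;
 }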