https://wiki.xiph.org/api.php?action=feedcontributions&user=Gnafu+the+Great&feedformat=atomXiphWiki - User contributions [en]2024-03-28T08:33:11ZUser contributionsMediaWiki 1.40.1https://wiki.xiph.org/index.php?title=Summer_of_Code_2021&diff=16751Summer of Code 20212021-03-03T17:18:12Z<p>Gnafu the Great: Consistency in "Difficulty" ratings</p>
<hr />
<div>== Introduction ==<br />
<br />
This year Xiph.org is focusing on the rav1e AV1 encoder for its GSoC participation. Both video and still images are currently hot topics, especially with the recent support of AVIF within browsers.<br />
<br />
Below you'll find descriptions of this year's GSoC project ideas.<br />
<br />
If you want to know more about a particular idea, please get in touch with the people listed under "possible mentors". While there is no guarantee that the person listed will be the actual mentor for the task, they know it well and will be happy to answer your questions.<br />
----<br />
<br />
== Detailed Project Descriptions ==<br />
<br />
These ideas were suggested by various members of the developer community as projects that would be beneficial and which we feel we can mentor. Students should feel free to select one of these, develop a variation, or propose their own ideas (ideally here on this page).<br />
<br />
----<br />
<br />
=== Grain synthesis implementation inside of the rav1e encoder ===<br />
<br />
Grain synthesis is based on the idea of modeling noise temporally and spatially using noise estimation.<br />
<br />
==== Problem / Intro ====<br />
<br />
Keeping high-frequency detail and noise (dithering, camera noise, grain) using traditional encoder techniques is very expensive in terms of bitrate allocation, and some tools implemented to deal with that problem can create additional artifacts that are unpleasant to the general viewer or detrimental to the fidelity of the image.<br />
<br />
==== Solution / Task ====<br />
<br />
Implementing grain synthesis that models the noise parameters of a video and applies the generated noise during the decoding process, saving a large amount of bitrate while providing very high subjective visual fidelity and appeal.<br />
<br />
Making it faster than other forms of grain synthesis via smarter algorithms and various forms of threading (such as tile threading and integration with rav1e-by-gop) to speed up its application, making it usable as part of any encoding workflow. This will help adoption of the technique become as widespread as possible.<br />
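<br />
The estimation step can be sketched in one dimension: subtract a smoothed copy of the signal from the source and keep only the variance of the residual as a model parameter. This is purely an illustrative sketch (the function names and the [1 2 1]/4 kernel are ours, not rav1e's); the real AV1 film grain model is considerably richer.<br />
<br />
```rust
// Illustrative 1-D sketch of noise estimation for grain synthesis.

/// Smooth a row of samples with a [1 2 1]/4 kernel (edges clamped).
fn smooth_row(row: &[f64]) -> Vec<f64> {
    (0..row.len())
        .map(|i| {
            let l = row[i.saturating_sub(1)];
            let r = row[(i + 1).min(row.len() - 1)];
            (l + 2.0 * row[i] + r) / 4.0
        })
        .collect()
}

/// Variance of the residual between source and smoothed signal.
/// Only this model parameter (not the noise samples themselves) would
/// be carried in the bitstream for the decoder to re-synthesise grain.
fn estimate_noise_variance(row: &[f64]) -> f64 {
    let smoothed = smooth_row(row);
    let n = row.len() as f64;
    let residual: Vec<f64> = row.iter().zip(&smoothed).map(|(a, b)| a - b).collect();
    let mean = residual.iter().sum::<f64>() / n;
    residual.iter().map(|d| (d - mean).powi(2)).sum::<f64>() / n
}
```
A flat signal yields zero estimated noise; a dithered one yields a positive variance that the decoder-side synthesis would reproduce.<br />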
<br />
==== Requirements ====<br />
<br />
The student should be familiar with Rust and C, and must have a light background in general visual media encoding, such as video and image compression.<br />
<br />
Difficulty: Medium.<br />
<br />
==== Possible Mentors ====<br />
<br />
[[User:Lu_zero]] && XX<br />
<br />
----<br />
=== Adaptive quantization ===<br />
<br />
Adaptive quantization is the process by which an encoder tries to efficiently allocate bitrate among the various blocks of a frame by varying the quantizer across each of them according to different visual targets.<br />
<br />
==== Problem / Intro ====<br />
<br />
Oftentimes, an encoder does not know the best way to allocate its bitrate budget across a frame, and may overspend a considerable amount of bitrate on regions that might not benefit from a low quantizer (low amounts of distortion, so less compression) while not giving enough bitrate to zones that actually need it. This can even cause issues temporally, as bitrate allocation within a group of pictures (GOP) may be skewed towards more complex, high-motion frames, leaving other frames with barely any bitrate to work with and creating visual artifacts such as blocking and banding.<br />
<br />
==== Solution / Task ====<br />
<br />
Implementing one to three forms of adaptive quantization in rav1e, based on variance, complexity, and/or variance with a bias towards low-contrast frames. This will make bitrate allocation more efficient, avoid overspending bitrate in areas that do not need it, reduce the presence of lower-quality frames that might detract from the viewer experience, and make objective/subjective quality targets easier to achieve. The combination of powerful adaptive quantization and grain synthesis would allow a higher subjective quality viewing experience at lower bitrates while potentially lowering computational complexity by a good margin.<br />
<br />
Making a smart AQ mode in which the encoder chooses which adaptive quantization algorithm to use depending on the scene featured in a GOP. Potentially difficult.<br />
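<br />
As a rough illustration of variance-based AQ, a per-block quantizer offset can be derived from the block's variance, pushing flat (banding-prone) blocks to a finer quantizer and busy blocks to a coarser one. The constants and names below are illustrative tuning knobs, not rav1e's:<br />
<br />
```rust
/// Variance of one block of luma samples.
fn block_variance(block: &[f64]) -> f64 {
    let n = block.len() as f64;
    let mean = block.iter().sum::<f64>() / n;
    block.iter().map(|p| (p - mean).powi(2)).sum::<f64>() / n
}

/// Map variance to a per-block quantizer offset: flat blocks (low
/// variance, prone to banding) get a negative offset, i.e. a finer
/// quantizer; busy blocks get a positive one. `strength` controls how
/// aggressive the redistribution is, `pivot` where it is neutral.
fn aq_offset(variance: f64, strength: f64, pivot: f64) -> f64 {
    strength * ((variance + 1.0).log2() - pivot)
}
```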
<br />
==== Requirements ====<br />
<br />
The student should be familiar with Rust and C, and a background in general visual media encoding, such as video and image compression, is recommended.<br />
<br />
Difficulty: Medium to high depending on which targets the student chooses to follow.<br />
<br />
==== Possible Mentors ====<br />
<br />
[[User:Lu_zero]] && XX<br />
<br />
----<br />
=== Integrate rav1e-by-gop into rav1e encoder ===<br />
<br />
rav1e-by-gop is an extended command line encoder that provides additional encoding strategies such as by-gop parallel encoding across multiple machines.<br />
<br />
==== Problem / Intro ====<br />
<br />
rav1e-by-gop is currently a command line program; some users might want to use its extended features from other programs, even if those features do not always belong in an encoder.<br />
<br />
==== Solution / Task ====<br />
<br />
Make rav1e-by-gop expose the same API as normal rav1e, making it easy to use the multiple-machine encoding features from other programs.<br />
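<br />
A minimal sketch of the idea, assuming a shared trait between the two encoders; the names here (Packet, send_frame, receive_packet) echo the style of rav1e's Context API but are illustrative, not the real bindings:<br />
<br />
```rust
struct Packet {
    data: Vec<u8>,
}

/// One interface both encoders would implement, so callers can swap
/// the in-process encoder for the distributed by-GOP one freely.
trait Encoder {
    fn send_frame(&mut self, frame: Vec<u8>);
    fn receive_packet(&mut self) -> Option<Packet>;
}

/// Toy stand-in for the local encoder; rav1e-by-gop would provide a
/// second implementation that farms GOPs out to worker machines
/// behind the same trait.
struct LocalEncoder {
    queue: Vec<Vec<u8>>,
}

impl Encoder for LocalEncoder {
    fn send_frame(&mut self, frame: Vec<u8>) {
        self.queue.push(frame);
    }
    fn receive_packet(&mut self) -> Option<Packet> {
        if self.queue.is_empty() {
            None
        } else {
            Some(Packet { data: self.queue.remove(0) })
        }
    }
}
```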
<br />
==== Requirements ====<br />
<br />
The student should be familiar with Rust or C programming.<br><br />
Video knowledge is not strictly necessary, however a basic understanding of the concepts is vastly beneficial.<br />
<br />
==== Possible Mentors ====<br />
<br />
[[User:Tdaede]] [[User:Lu_zero]]<br />
<br />
----<br />
=== Visual metric targeting in rav1e-by-gop ===<br />
<br />
Objective metrics are used to evaluate an encoder's performance in a diverse set of scenarios. Different metrics such as PSNR, SSIM, DSSIM, VMAF and some closed no-reference metrics are used in the field to record encoder performance changes across versions trying to correlate closely with human perception.<br />
<br />
==== Problem / Intro ====<br />
<br />
Classical methods of rate control such as ABR (Average BitRate), fixed quantizers, and even CRF (Constant Rate Factor) have the issue of not targeting a certain quality level. This can result in starved encodes where the bitrate budget has to be kept low so the stream stays watchable without interruption, leading to some scenes overspending bitrate on exceptionally good visuals while others have very poor visual appeal from too little bitrate, detracting from the viewer experience entirely. More advanced forms of rate control like CRF help somewhat, but they still have to overshoot so the lower-quality scenes do not suffer, and they do not adapt to the different types of content encoded, resulting in variable-quality encodes.<br />
<br />
==== Solution / Task ====<br />
<br />
Implementing visual metric targeting based on VMAF (mainly used for video) and butteraugli (mainly used for images) as part of rav1e-by-gop as a secondary rate control option. <br />
<br />
The application of visual metric targeting in rav1e-by-gop would take advantage of its adaptive keyframe placement and smart scene detection to its fullest. <br />
This would allow for the best rate control possible, as short scenes 1-15s in length are where visual metrics such as VMAF shine the most. The idea is to first encode with a very fast speed preset to gauge the quality at a preset quantizer. If the visual metric target is not achieved, the encoder tries again once or twice until it gets the right result. <br />
With this method, instead of targeting an average bitrate, you would target a visual score, getting higher efficiency and higher subjective quality. This would also be advantageous in terms of encoding time spent, as encoder complexity could be dialed back while keeping overall efficiency the same or higher, with efficiency being a function of both encoder efficiency and rate control.<br />
<br />
Implementing butteraugli quality targeting as an option using rav1e for AVIF images. Since visual quality requirements are considerably higher for intra-only (image-only) media, keeping high visual fidelity is even more important than in video compression. Quality targeting iterations would also be quite useful here.<br />
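<br />
The retry loop above can be sketched as a bounded search over the quantizer, where `measure` stands in for a fast probe encode followed by a VMAF/butteraugli measurement (all names illustrative, not part of rav1e-by-gop today):<br />
<br />
```rust
/// Find the coarsest quantizer whose measured score still meets the
/// target, within a fixed retry budget. Assumes `measure(q)` decreases
/// as the quantizer `q` increases (lower q = higher quality).
fn target_quality(measure: impl Fn(u8) -> f64, target: f64, max_tries: u32) -> u8 {
    let (mut lo, mut hi) = (0u8, 255u8);
    let mut best = 0u8; // fall back to the finest quantizer if the target is never met
    for _ in 0..max_tries {
        let q = lo + (hi - lo) / 2;
        if measure(q) >= target {
            // Target met: remember this quantizer and try a coarser one.
            best = q;
            lo = q.saturating_add(1);
        } else {
            // Target missed: refine with a finer (lower) quantizer.
            if q == 0 {
                break;
            }
            hi = q - 1;
        }
        if lo > hi {
            break;
        }
    }
    best
}
```
In practice each `measure` call is expensive (a probe encode), which is why the description above caps the number of retries rather than searching exhaustively.<br />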
<br />
==== Requirements ====<br />
<br />
The student should be familiar with Rust and C. General interest in image and video coding is recommended.<br />
<br />
Difficulty: Medium.<br />
<br />
==== Possible Mentors ====<br />
<br />
[[User:Lu_zero]] && XX<br />
----<br />
=== Improved cluster support for Icecast ===<br />
Icecast servers deliver streams to millions of users simultaneously worldwide. Each instance can handle many thousands of clients at the same time. However, redundancy, scalability, hardware requirements, and most importantly network connectivity often require using several instances in a professional deployment.<br />
==== Problem / Intro ====<br />
Icecast is designed as a standalone application. While basic support exists (such as master-slave mode), support for clusters can be improved. At this point a cluster-level management instance seems to be missing.<br />
==== Solution / Task ====<br />
A solution for cluster management should be developed. A cluster controller should be implemented, as well as support within Icecast. The focus is on Icecast itself at this point; however, the controller should at least demonstrate all features implemented in Icecast.<br />
Possible features:<br />
* Automatic master-slave, and relay configuration.<br />
* Load distribution.<br />
* Statistic data collection.<br />
* Log collection.<br />
* Node monitoring.<br />
* Signalling of cluster state to external components (e.g. for automatic cluster scaling)<br />
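As a tiny illustration of the load distribution item, the controller could route a new listener to the least-loaded relay; everything below (names, the listener-count policy) is hypothetical, not an existing Icecast interface:<br />
<br />
```rust
/// Pick the relay currently serving the fewest listeners -- the
/// simplest load distribution policy a cluster controller could apply.
/// Input pairs are (relay name, current listener count).
fn pick_relay(relays: &[(String, u32)]) -> Option<&str> {
    relays
        .iter()
        .min_by_key(|r| r.1) // fewest listeners wins
        .map(|r| r.0.as_str())
}
```
A real controller would feed this from the statistics collection item and combine it with node monitoring, so unhealthy relays drop out of the candidate list.<br />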
==== Requirements ====<br />
The student should be familiar with C. A basic understanding of HTTP, as well as other web technologies is helpful. Knowledge of visualisation technologies is '''not''' required.<br />
<br />
Complexity: Medium to high.<br />
==== Possible Mentors ====<br />
phschafft (teamed with someone else)<br />
----<br />
=== Development of advanced content navigation in Icecast ===<br />
Icecast currently supports navigation of listeners between different streams. This was developed mostly for fallbacks (providing alternative content if the primary source fails). This support should be improved to provide better interaction with content.<br />
==== Problem / Intro ====<br />
The current implementation is designed to be very robust for source-side events (such as fallbacks). However, it falls short of two requirements:<br />
* Listener initiated interaction such as adaptive streaming.<br />
* Exact timing.<br />
==== Solution / Task ====<br />
The current concept is capable of being extended to meet the new requirements. Code should be written to add the new features on top of the existing (and well-proven) infrastructure. The following additional features would be needed:<br />
* Detection and matching of features within and between streams. This is required for any kind of synchronisation.<br />
* Executing operations exactly at detected features.<br />
* Adding ways to communicate features and operations between listener and Icecast, source and Icecast, and between multiple Icecast instances of the same cluster.<br />
==== Requirements ====<br />
The student should be familiar with C. A basic understanding of HTTP, Ogg, and Matroska/WebM is helpful.<br />
<br />
Complexity: Medium to high.<br />
==== Possible Mentors ====<br />
phschafft (teamed with someone else)<br />
----<br />
=== Uniform return channel for Icecast ===<br />
Icecast supports broadcasting media to several thousand listeners per instance. In a classic setup this is a one way process from the source (such as a radio or TV studio) to the consumer. However it is sometimes useful to provide a return channel, such as for implementing polls.<br />
==== Problem / Intro ====<br />
Returning information from the listener to the source is part of classic media. The need has become more relevant with the development of more interactive web technologies. Several technologies have been used to implement this, including asking listeners to call in, send e-mails, or comment on a web page.<br />
<br />
Classic ways to implement feedback involve a media break and are only loosely bound to the forward channel.<br />
==== Solution / Task ====<br />
A uniform return channel should be implemented that allows several types of data to be sent from the listener to the source. This includes three major parts:<br />
* Improved session handling (both for listeners and for sources)<br />
* Implementing a return channel for listeners.<br />
* Implementing a return channel for sources.<br />
==== Requirements ====<br />
The student should be familiar with C and HTTP.<br />
<br />
Complexity: Medium.<br />
==== Possible Mentors ====<br />
phschafft (teamed with someone else)<br />
----<br />
=== (Live) listener statistics for Icecast ===<br />
Icecast supports writing a basic access.log that includes client information as well as connection time. In addition, a playlist log is supported, as is live statistic data via the STATS interface.<br />
<br />
A standard solution to use this data for detailed listener statistics is missing.<br />
==== Problem / Intro ====<br />
Currently there is no off-the-shelf solution to process the statistic data provided by Icecast. The best available options are standard access log analysers. A solution for live statistics is completely missing, and statistics taking content into account are also absent.<br />
==== Solution / Task ====<br />
To improve the situation, two major steps must be accomplished:<br />
* The statistic interface of Icecast must be enhanced to provide the required information.<br />
* A solution that analyses this data must be developed. The focus is on this part.<br />
This project allows a wide range of ideas from the participants to be incorporated. There is not yet any specific technical direction set. Evaluating different options is the first part of the project.<br />
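As one possible building block, the analyser could aggregate listener-seconds from access.log. This sketch assumes the connection duration in seconds is the last field of each line (verify against the actual log format of your Icecast build; the sample lines in the test are fabricated):<br />
<br />
```rust
/// Pull the trailing duration field (seconds connected) off one
/// access.log line, assuming it is the last space-separated field.
fn listen_seconds(line: &str) -> Option<u64> {
    line.rsplit(' ').next()?.parse().ok()
}

/// Total listener-seconds across a log fragment: one of the basic
/// aggregates a listener-statistics tool would report.
fn total_listener_seconds(log: &str) -> u64 {
    log.lines().filter_map(listen_seconds).sum()
}
```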
==== Requirements ====<br />
The student should be familiar with C. A basic understanding of log analysis, and monitoring and/or data collecting systems is helpful.<br />
<br />
Difficulty: Medium depending on which targets the student chooses to follow.<br />
<br />
==== Possible Mentors ====<br />
phschafft (teamed with someone else)<br />
----<br />
<br />
=== Support WebAssembly SIMD in rav1e ===<br />
<br />
rav1e supports the [https://wasi.dev/ WASI] platform, and its JavaScript API bindings rely on it.<br />
<br />
==== Problem / Intro ====<br />
<br />
WebAssembly SIMD is getting [https://github.com/WebAssembly/simd closer] to being available; we should support it.<br />
<br />
<br />
==== Solution / Task ====<br />
<br />
* Implement the dispatch logic for WASM SIMD as done already for x86_64 and aarch64.<br />
* Implement the sum of absolute differences (SAD) and sum of absolute transformed differences (SATD) functions.<br />
* Implement the inverse transforms (idct, iadst, identity, ...)<br />
* Implement the motion compensation.<br />
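<br />
For reference, the scalar form of SAD that a WASM-SIMD kernel would have to match bit-exactly; the dispatch function is only a placeholder for rav1e's real CPU-feature dispatch, not its actual code:<br />
<br />
```rust
/// Scalar reference for the sum of absolute differences; a WASM-SIMD
/// (v128) kernel must produce identical results for all inputs.
fn sad(a: &[u8], b: &[u8]) -> u64 {
    a.iter()
        .zip(b)
        .map(|(&x, &y)| (x as i32 - y as i32).unsigned_abs() as u64)
        .sum()
}

/// Placeholder for the dispatch step: rav1e already selects x86_64 and
/// aarch64 kernels this way, and the task is to add a wasm32 arm.
fn sad_dispatch(a: &[u8], b: &[u8]) -> u64 {
    // A cfg(all(target_arch = "wasm32", target_feature = "simd128"))
    // branch would route to the v128 kernel here.
    sad(a, b)
}
```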
<br />
==== Requirements ====<br />
<br />
The student should be familiar with Rust, WASM, wasmtime and related tools. Knowledge of x86 or arm assembly is not needed but will help.<br />
<br />
==== Possible Mentors ====<br />
<br />
[[User:Lu_zero]]<br />
----<br />
<br />
=== Implement butteraugli in av-metrics ===<br />
<br />
av-metrics is a collection of video quality metrics; [https://github.com/google/butteraugli butteraugli] is a promising psychovisual similarity metric. <br />
<br />
==== Problem / Intro ====<br />
<br />
Currently the implementation of butteraugli exists as a [https://github.com/google/butteraugli stand-alone] codebase. The code is readable, but it could be faster. <br />
<br />
==== Solution / Task ====<br />
<br />
* Implement rust bindings to the reference butteraugli.<br />
* Implement butteraugli in pure rust within av-metrics.<br />
* Write integration and unit tests to make sure the implementations do not diverge.<br />
* Write criterion benchmarks.<br />
* Implement x86_64 or aarch64 optimizations for it, using intrinsics or plain ASM.<br />
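<br />
The divergence tests could take the shape below, where a toy metric (mean absolute difference) stands in for butteraugli so the sketch is self-contained; the real tests would run the FFI binding against the pure-Rust port on actual frames:<br />
<br />
```rust
/// Stand-in for the reference implementation (here: mean absolute
/// difference, NOT butteraugli).
fn metric_reference(a: &[u8], b: &[u8]) -> f64 {
    a.iter()
        .zip(b)
        .map(|(&x, &y)| (x as f64 - y as f64).abs())
        .sum::<f64>()
        / a.len() as f64
}

/// Stand-in for the port, computed a slightly different way
/// (integer sums first, one division at the end).
fn metric_port(a: &[u8], b: &[u8]) -> f64 {
    let total: i64 = a.iter().zip(b).map(|(&x, &y)| (x as i64 - y as i64).abs()).sum();
    total as f64 / a.len() as f64
}

/// The kind of assertion the integration tests would make: the two
/// implementations must agree to within a tolerance on every input.
fn diverges(a: &[u8], b: &[u8], tolerance: f64) -> bool {
    (metric_reference(a, b) - metric_port(a, b)).abs() > tolerance
}
```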
<br />
==== Requirements ====<br />
<br />
The student should be familiar with Rust, C and C++. Knowledge of x86 or arm assembly is welcome.<br />
<br />
==== Possible Mentors ====<br />
<br />
[[User:Lu_zero]]</div>Gnafu the Greathttps://wiki.xiph.org/index.php?title=Todo&diff=16739Todo2021-03-02T13:35:19Z<p>Gnafu the Great: Added rav1e's x264 features page to the list of project todos</p>
<hr />
<div>Todo list for xiph.org.<br />
<br />
If you're interested in helping out, this is a good place to start. Also, asking on irc (''#vorbis'', ''#theora'', ''#annodex'' or ''#xiph'' on ''irc.freenode.net'') is a good way to get oriented.<br />
<br />
== Todos ==<br />
<br />
* Add [[Ogg_Skeleton_4]]/index support to Cortado.<br />
* Add [[Ogg_Skeleton_4]]/index support to VLC.<br />
* Add [[Ogg_Skeleton_4]]/index support to liboggz.<br />
* Add [[Ogg_Skeleton_4]]/index support to ffmpeg.<br />
* See [[OggIndex-Migration]] for other projects which need OggIndex support.<br />
* [http://github.com/cpearce/OggIndex OggIndex] needs Speex support.<br />
* [[Ices]] needs Speex support<br />
* Icecast toolchain needs support for WebM.<br />
* Icecast toolchain needs support for CELT. NOTE: CELT Ogg encapsulation may change.<br />
* Oggenc and Ogg123 need OggPCM support (encoding and playback respectively)<br />
* Test and fix 'downstream' applications<br />
* Create Xiph.Org conference swag: brochures, posters, toys, demo disks, etc. (don't do this without coordinating with folks)<br />
* Update the todos<br />
<br />
Several projects have their own todo lists in the wiki: <br />
<br />
* [[TheoraTodo]]<br />
* [[XSPF_Todo_list]]<br />
* [[Unimplemented x264 features in rav1e]]<br />
<br />
We always need people to help with the [http://xiph.org/ website] as well.<br />
<br />
== Website todos ==<br />
* Find and fix bugs, bad links, outdated, and incorrect information.<br />
* More HTML5 multimedia content for our websites, both useful things like presentation videos and the primer as well as (tasteful) dancing baloney.<br />
* Better integrate our web resources:<br />
** Mailing lists archives and Wiki could be more extensively integrated into the websites, e.g. sections automatically fed with recent posts/edits to relevant pages/lists.<br />
** Make planet.xiph.org more publicly visible? (need to reduce the offtopic posts that leak through)<br />
* Local blogging platform so JM / Xiphmont don't need to use LiveJournal; in particular, having good media support would be nice (ugh, more software to maintain) <br />
* Make Media.xiph.org more reasonably structured and prettier (Apng previews? WebM renders?)<br />
<br />
== Bounties ==<br />
* See also the [[Bounties]] page<br />
<br />
[[Category:Developers stuff]]</div>Gnafu the Greathttps://wiki.xiph.org/index.php?title=Summer_of_Code&diff=16738Summer of Code2021-03-02T13:32:20Z<p>Gnafu the Great: Rearrange list</p>
<hr />
<div>*see [[Summer of Code 2021]]<br />
*see [[Summer of Code 2015]]<br />
*see [[Summer of Code 2009]]<br />
*see [[Summer of Code 2008]]<br />
*see [[Summer of Code 2010]] (Xiph.org was not a mentoring organization in 2010)</div>Gnafu the Greathttps://wiki.xiph.org/index.php?title=Summer_of_Code&diff=16737Summer of Code2021-03-02T13:31:05Z<p>Gnafu the Great: Added 2021 page to the list</p>
<hr />
<div>*see [[Summer of Code 2015]]<br />
*see [[Summer of Code 2009]]<br />
*see [[Summer of Code 2008]]<br />
*see [[Summer of Code 2010]] (Xiph.org was not a mentoring organization in 2010)<br />
*see [[Summer of Code 2021]]</div>Gnafu the Greathttps://wiki.xiph.org/index.php?title=GST_cookbook&diff=13073GST cookbook2011-10-01T03:06:33Z<p>Gnafu the Great: Fix typo and wording of Fluendo plugin comment</p>
<hr />
<div>In addition to being a powerful multimedia infrastructure for applications, GStreamer is also a useful tool for general manipulation of multimedia data. By invoking gst-launch from the command line with a custom pipeline, many useful processing steps are possible.<br />
<br />
GStreamer also usually has good support for Xiph-related formats. <br />
<br />
Unfortunately, it can be rather difficult to figure out an appropriate pipeline without a starting point.<br />
<br />
Here are some useful examples: <!-- Don't complain about the selection of examples, I started this simply by grepping my shell history. Feel free to submit more and/or improve the existing ones --gmaxwell --><br />
<br />
===Encode a .wav to Vorbis:===<br />
*gst-launch filesrc location="INPUT.wav" ! wavparse ! audioconvert ! vorbisenc ! oggmux ! filesink location="OUTPUT.ogg"<br />
<br />
===Dump a Theora video to PNGs:===<br />
*gst-launch filesrc location="INPUT.ogv" ! oggdemux ! theoradec ! ffmpegcolorspace ! pngenc snapshot=false ! multifilesink location="OUTPUT%04d.png"<br />
<br />
===Transmux a MKV (containing vorbis and theora) to Ogg:===<br />
*gst-launch filesrc location="INPUT.mkv" ! matroskademux name=d ! video/x-theora ! queue ! theoraparse ! oggmux name=mux ! filesink location="OUTPUT.ogv" d. ! audio/x-vorbis ! queue ! vorbisparse ! queue ! mux.<br />
<br />
===Encode a y4m to lossless Dirac in Ogg:===<br />
*gst-launch filesrc location="INPUT.y4m" ! decodebin ! schroenc force-profile=vc2_main rate-control=lossless ! oggmux ! filesink location="OUTPUT.ogv"<br />
<br />
===Pull from a windows media stream, transcode to Ogg/Theora+Vorbis and send to a icecast server:===<br />
(may require purchasing Fluendo plugins for decoding the encumbered codecs)<br />
*gst-launch uridecodebin uri=mms://SOURCE.SERVER.COM/path name=d ! queue max-size-time=100000000 ! ffmpegcolorspace ! theoraenc bitrate=800 ! oggmux name=mux ! shout2send ip=YOURICECAST.SERVER.COM port=8000 password=YOURPASSWORD mount=/OUTPUTFILENAME.ogv d. ! queue max-size-time=100000000 ! audioconvert ! vorbisenc ! mux.<br />
<br />
===Capture video from a webcam, encode to an Ogg Theora file, decode and display on screen, write to a file whose name is the current date+time, and stream to an IceCast server===<br />
*gst-launch-0.10 --eos-on-shutdown v4l2src ! 'video/x-raw-yuv, width=640, height=480' ! videorate ! 'video/x-raw-yuv, framerate=15/1' ! queue max-size-bytes=100000000 max-size-time=0 ! theoraenc bitrate=150 ! oggmux ! tee name=ogged ! queue max-size-bytes=100000000 max-size-time=0 ! oggdemux ! theoradec ! xvimagesink sync=false force-aspect-ratio=true ogged. ! queue max-size-bytes=100000000 max-size-time=0 ! filesink location=`date +%F_%T`.ogv ogged. ! queue max-size-bytes=100000000 max-size-time=0 ! shout2send ip=YOURICECAST.SERVER.COM port=8000 password=YOURPASSWORD mount=/OUTPUTFILENAME.ogv streamname=YOURSTREAMNAME description=YOURDESCRIPTION genre=YOURGENRE url=YOURSTREAMURL ogged.<br />
<br />
===A v4l2 source + ALSA source -> Ogg Theora -> IceCast===<br />
*dov4l -i [0|1] -m NTSC<br />
*gst-launch-0.10 --eos-on-shutdown v4l2src ! videoscale ! video/x-raw-yuv,width=320,height=240,framerate=30000/1001,interlaced=true ! queue max-size-bytes=100000000 max-size-time=0 ! gamma gamma=1.2 ! queue max-size-bytes=100000000 max-size-time=0 ! videobalance saturation=1.9 brightness=0.00 contrast=1.4 hue=0.06 ! ffmpegcolorspace ! queue max-size-bytes=100000000 max-size-time=0 ! theoraenc bitrate=400 ! queue max-size-bytes=100000000 max-size-time=0 ! oggmux name=mux alsasrc device=hw:1,0 latency-time=100 ! audioconvert ! vorbisenc ! queue max-size-bytes=100000000 max-size-time=0 ! mux. mux. ! queue max-size-bytes=100000000 max-size-time=0 ! shout2send ip=icecast-server.com password=hackme mount=/mountpoint.ogv<br />
:Note that the above pipeline will slowly lose audio/video synchronization due to hardware-level limitations in audio vs. video capture of media samples. FireWire and SDI-based capture does not have these limitations.<br />
<br />
===Two v4l2 sources combined side-by-side into rectangular video + ALSA -> Ogg Theora -> IceCast===<br />
*mplayer tv:// -tv device=/dev/video0:driver=v4l2:norm=NTSC:width=320:height=240:outfmt=uyvy:input=1:buffersize=16: -ao null<br />
*mplayer tv:// -tv device=/dev/video1:driver=v4l2:norm=NTSC:width=320:height=240:outfmt=uyvy:input=1:buffersize=16: -ao null<br />
*gst-launch --eos-on-shutdown v4l2src device=/dev/video0 ! videoscale ! video/x-raw-yuv,width=320,height=240,interlaced=true ! queue max-size-bytes=100000000 max-size-time=0 ! gamma gamma=1.2 ! queue max-size-bytes=100000000 max-size-time=0 ! videobalance saturation=1.9 brightness=0.00 contrast=1.4 hue=0.06 ! ffmpegcolorspace ! queue max-size-bytes=100000000 max-size-time=0 ! videorate ! video/x-raw-yuv,framerate=15/1 ! queue max-size-bytes=100000000 max-size-time=0 ! ffmpegcolorspace ! queue max-size-bytes=100000000 max-size-time=0 ! videomixer name=mix ! queue max-size-bytes=100000000 max-size-time=0 ! ffmpegcolorspace ! queue max-size-bytes=100000000 max-size-time=0 ! theoraenc bitrate=600 ! queue max-size-bytes=100000000 max-size-time=0 ! oggmux name=mux alsasrc device=hw:1,0 latency-time=100 ! queue max-size-bytes=100000000 max-size-time=0 ! audioconvert ! vorbisenc ! queue max-size-bytes=100000000 max-size-time=0 ! mux. mux. ! queue max-size-bytes=100000000 max-size-time=0 ! shout2send ip=icecast-server.com password=hackme mount=/mountpoint.ogv v4l2src device=/dev/video1 ! videoscale ! video/x-raw-yuv,width=320,height=240,interlaced=true ! gamma gamma=1.2 ! queue max-size-bytes=100000000 max-size-time=0 ! videobalance saturation=1.9 brightness=0.00 contrast=1.4 hue=0.06 ! queue max-size-bytes=100000000 max-size-time=0 ! videorate ! video/x-raw-yuv,framerate=5/1 ! queue max-size-bytes=100000000 max-size-time=0 ! ffmpegcolorspace ! queue max-size-bytes=100000000 max-size-time=0 ! videobox border-alpha=0 fill=green left=-320 ! mix.<br />
<br />
===Create a video with an alpha channel from a sequence of PNG files===<br />
*gst-launch-0.10 multifilesrc location=images%05d.png caps="image/png,framerate=1/1,pixel-aspect-ratio=1/1" num-buffers=95 ! pngdec ! videorate ! alphacolor ! "video/x-raw-yuv,format=(fourcc)AYUV" ! matroskamux ! filesink location=images_raw.mkv<br />
<br />
===Capture from a 1680x1050 GNOME Desktop, Combine with ALSA -> Ogg Theora + Vorbis -> Icecast===<br />
*gst-launch --eos-on-shutdown ximagesrc ! capsfilter caps=video/x-raw-rgb,framerate=3/1,width=1680,height=1050 ! queue max-size-bytes=100000000 max-size-time=0 ! videoscale ! video/x-raw-rgb, width=1056, height=660 ! queue max-size-bytes=100000000 max-size-time=0 ! ffmpegcolorspace ! queue max-size-bytes=100000000 max-size-time=0 ! theoraenc bitrate=450 keyframe-auto=false keyframe-force=12 keyframe-freq=12 speed-level=0 drop-frames=false ! queue max-size-bytes=100000000 max-size-time=0 ! oggmux name=mux alsasrc device=hw:0,0 ! audio/x-raw-init,rate=44100,channels=1 ! queue max-size-bytes=100000000 ! audioconvert ! vorbisenc ! queue max-size-bytes=100000000 max-size-time=0 ! mux. mux. ! queue max-size-bytes=100000000 max-size-time=0 ! shout2send ip=icecast-server.com password=hackme mount=/mountpoint.ogv<br />
*Icecast will need to be built from SVN (ver. 2.3.2 fails to relay data due to a bug)<br />
*Viewers will need better-than-average internet connectivity (around 1.5Mbits/sec)<br />
*The simplest way to capture audio events is to place a mic next to computer speakers<br />
<br />
<br />
===Live-stream a high-resolution Mac OSX Desktop at 1 FPS===<br />
gst-launch --eos-on-shutdown osximagesrc ! queue max-size-bytes=100000000 max-size-time=0 ! ffmpegcolorspace ! videoscale method=4-tap ! video/x-raw-yuv, width=960, height=600 ! queue max-size-bytes=100000000 max-size-time=0 ! theoraenc bitrate=360 keyframe-auto=false keyframe-force=3 keyframe-freq=3 speed-level=0 drop-frames=false ! queue max-size-bytes=100000000 max-size-time=0 ! oggmux ! queue max-size-bytes=100000000 max-size-time=0 ! shout2send ip=host.com password=hackme mount=/mountpoint.ogv</div>Gnafu the Great