Browser vendors are developing media libraries for web apps that deal with heavy media streams. Real-time communication, P2P streaming and simple access to hardware features are the backbone of the WebRTC technology, now on its way to a public release in the major browsers, cross-platform and mobile. In this lab we will focus on the MediaStream part of WebRTC and on rendering it through a GPU pipeline.
Before GPU technologies, image processing was done in a per-pixel CPU loop, a real CPU killer. Even for smartphone apps, delegating all the processing to the GPU is not an obvious choice for one-shot filtering such as a photo effect: it makes the product more hardware dependent and therefore less flexible. But in a real-time context there is no alternative to the GPU. For the web and its apps, two solutions are available: shaders or CSS filters.
CSS filters hide the heavy work of coding for the GPU, but they offer nothing beyond a preset list of the most common and popular effects (blur, sepia, saturate, etc.).
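For instance, assuming a `<video>` element carrying the camera stream (the class name below is illustrative), several preset filters can be chained in a single CSS rule:

```css
/* Preset GPU-accelerated filters applied to a live video element.
   The selector is illustrative; any element works. */
video.preview {
  filter: sepia(1) saturate(1.4) blur(1px);
  /* WebKit-based builds currently need the vendor prefix: */
  -webkit-filter: sepia(1) saturate(1.4) blur(1px);
}
```

The browser compiles this into its own GPU pipeline; no shader code is exposed, which is exactly the convenience and the limitation.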
Shaders are a deeper solution: more complex, but the only way to build custom effects and post-process them. WebGL also stands out when you need dynamic effects, such as mouse-driven effects or per-pixel customisation.
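As a sketch of such a dynamic effect, here is a minimal GLSL fragment shader (the uniform and varying names are my own, not from any particular demo) that samples the video texture and darkens each pixel with its distance from the mouse position:

```glsl
precision mediump float;

uniform sampler2D u_video;   // current video frame uploaded as a texture
uniform vec2 u_mouse;        // mouse position in texture coordinates [0..1]
varying vec2 v_texCoord;     // interpolated from the vertex shader

void main() {
  vec4 color = texture2D(u_video, v_texCoord);
  // Darken with distance from the mouse: a per-pixel, input-driven
  // effect that the preset CSS filters cannot express.
  float d = distance(v_texCoord, u_mouse);
  gl_FragColor = vec4(color.rgb * (1.0 - 0.8 * clamp(d, 0.0, 1.0)), color.a);
}
```

The JavaScript side only has to update `u_mouse` each frame; all the per-pixel work stays on the GPU.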
The following live demo shows an implementation of the sepia effect on real-time video. Note that the WebGL version includes a dynamic sub-effect on the left side, where the sepia tone is piped into a dark soft-edge effect. You will need Google Chrome Canary to run the demo.
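The sepia tone itself is just a fixed colour-matrix multiply per pixel; the weights below are the ones used in the CSS Filter Effects definition of `sepia()`, and the same numbers drop straight into a fragment shader. A minimal sketch in plain JavaScript:

```javascript
// Sepia as a fixed colour-matrix multiply, applied per pixel.
// The weights match the sepia matrix of the CSS Filter Effects spec.
function sepia(r, g, b) {
  const clamp = (v) => Math.min(255, Math.round(v));
  return [
    clamp(0.393 * r + 0.769 * g + 0.189 * b),
    clamp(0.349 * r + 0.686 * g + 0.168 * b),
    clamp(0.272 * r + 0.534 * g + 0.131 * b),
  ];
}

// White picks up the warm tint; black stays black.
console.log(sepia(255, 255, 255)); // → [ 255, 255, 239 ]
console.log(sepia(0, 0, 0));       // → [ 0, 0, 0 ]
```

On the CPU this function would run once per pixel per frame, which is exactly the loop the GPU pipeline eliminates.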
As WebGL is essentially based on OpenGL ES, shaders written for smartphone apps can be loaded on the fly into an HTML5 version. This also makes the web great for prototyping shaders, since web technologies are more flexible than the smartphone ecosystem. Many shaders designed for image processing are already available in the open-source community, but the most interesting ones are certainly those related to real-time interaction based on pattern matching. That will be the subject of a later lab.
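One practical wrinkle when reusing shaders from elsewhere: GLSL ES (and therefore WebGL) requires a default float precision in fragment shaders, which sources written for desktop OpenGL usually omit. A small helper along these lines (the function name is my own) can normalise a fetched source before handing it to `gl.shaderSource`:

```javascript
// Normalise a fragment-shader source for WebGL (GLSL ES):
// GLSL ES mandates a default float precision, which desktop GLSL
// sources typically lack. Function name is illustrative.
function ensureFragmentPrecision(source) {
  if (/precision\s+(lowp|mediump|highp)\s+float/.test(source)) {
    return source; // already declares a float precision
  }
  return 'precision mediump float;\n' + source;
}

// In the browser, the normalised source is then compiled as usual:
//   gl.shaderSource(fragmentShader, ensureFragmentPrecision(src));
//   gl.compileShader(fragmentShader);
```

Shaders coming from mobile apps already declare a precision, so the helper leaves them untouched.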