SURPR!SE
2013.11.23
WebGLCL and JavaScript GPGPU
With modern browsers, the trend is to push more and more computation to the front end. For graphics, the GPU can now be put to good use and is increasingly becoming a requirement. The road opened by WebGL is naturally being followed by general-purpose GPU technologies: WebCL (Khronos Group) and River Trail (Intel). But WebGL is still quite young, and unfortunately we are not close to seeing full implementations of these parallel computing techniques this year.
So what can we do while waiting for those GPGPU solutions? Some smart people out there went ahead and created the WebGLCL JavaScript library. WebGLCL is a very simple yet clever use of a 2D FBO (framebuffer object) that mimics basic OpenCL-like functions. As is the state of the art for GPGPU on OpenGL, it hides the shaders and lets you write kernel-like GPU code in the GL Shading Language instead. Every array or vector therefore has to be allocated as a 2D texture. Once you get this GPGPU FBO trick, you can offload as much computation as your video card allows. There is no access to shared memory or thread IDs, yet WebGLCL is easy to use and straightforward for any quick GPU task.
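To make the trick concrete, here is a minimal sketch of the GPGPU FBO pattern in raw WebGL 1.0. This is not the WebGLCL API itself (for that, read its source); it is the underlying mechanism the library wraps: the data lives in a float texture, the "kernel" is a fragment shader drawn over a full-screen quad, and the result lands in a texture attached to an FBO.

// The "kernel": square every component of the input texel.
var fsSource =
    'precision highp float;' +
    'uniform sampler2D data;' +
    'uniform float side;' +
    'void main() {' +
    '  vec4 v = texture2D(data, gl_FragCoord.xy / side);' +
    '  gl_FragColor = v * v;' +
    '}';
var vsSource = 'attribute vec2 pos; void main() { gl_Position = vec4(pos, 0.0, 1.0); }';

var gl = document.createElement('canvas').getContext('webgl');
gl.getExtension('OES_texture_float');          // float textures are an extension in WebGL 1.0

var SIDE = 4;                                  // a 4x4 RGBA texture holds 64 floats
var input = new Float32Array(SIDE * SIDE * 4);
for (var i = 0; i < input.length; i++) input[i] = i;

function makeTexture(data) {                   // one 2D float texture = one "buffer"
    var t = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, t);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, SIDE, SIDE, 0, gl.RGBA, gl.FLOAT, data);
    return t;
}

function compile(type, src) {
    var s = gl.createShader(type);
    gl.shaderSource(s, src);
    gl.compileShader(s);
    return s;
}
var prog = gl.createProgram();
gl.attachShader(prog, compile(gl.VERTEX_SHADER, vsSource));
gl.attachShader(prog, compile(gl.FRAGMENT_SHADER, fsSource));
gl.linkProgram(prog);
gl.useProgram(prog);

// Full-screen quad: two triangles covering clip space, one fragment per texel.
gl.bindBuffer(gl.ARRAY_BUFFER, gl.createBuffer());
gl.bufferData(gl.ARRAY_BUFFER,
    new Float32Array([-1,-1, 1,-1, -1,1, -1,1, 1,-1, 1,1]), gl.STATIC_DRAW);
var loc = gl.getAttribLocation(prog, 'pos');
gl.enableVertexAttribArray(loc);
gl.vertexAttribPointer(loc, 2, gl.FLOAT, false, 0, 0);

// Render into an FBO whose color attachment is the output texture.
// (Rendering to a float texture is not formally guaranteed in WebGL 1.0;
// check gl.checkFramebufferStatus in real code.)
var outTex = makeTexture(null);
gl.bindFramebuffer(gl.FRAMEBUFFER, gl.createFramebuffer());
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, outTex, 0);

gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, makeTexture(input));
gl.uniform1i(gl.getUniformLocation(prog, 'data'), 0);
gl.uniform1f(gl.getUniformLocation(prog, 'side'), SIDE);
gl.viewport(0, 0, SIDE, SIDE);
gl.drawArrays(gl.TRIANGLES, 0, 6);

// Reading floats back is the fragile part: FLOAT readPixels support
// also varies by browser and GPU.
var result = new Float32Array(SIDE * SIDE * 4);
gl.readPixels(0, 0, SIDE, SIDE, gl.RGBA, gl.FLOAT, result);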
Take care with buffers that don't fill an entire 2D square texture: the remaining space is padded with noise unless it is initialized (a simple workaround is sketched below). You may also prefer reading the source code itself over the documentation, which doesn't fully explain all the features and functions.
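An easy way to sidestep that padding noise is to allocate the full square yourself before upload. A minimal sketch, assuming one float per texel (the variable names are illustrative, not WebGLCL's):

var data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];    // 10 values to store
var side = Math.ceil(Math.sqrt(data.length));  // 4: smallest square that fits them
var buffer = new Float32Array(side * side);    // 16 texels, zero-initialized by JavaScript
buffer.set(data);                              // texels 10..15 stay 0 instead of noise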
While using OpenGL as a workaround for CUDA or OpenCL has always been part of GPGPU computing culture, the technique doesn't appear to have spread to web technologies yet. Hopefully WebGL 2.0, based on OpenGL ES 3.0, will bring more of these solutions. Examples and demos can be found in Three.js and other GPGPU WebGL hacks (see links below). It would be interesting to see some "classic" GPU hacks implemented in WebGL, such as the ping-pong FBO, which emulates simultaneous read/write on a buffer by alternating between two textures.
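A minimal sketch of that ping-pong pattern, reusing the makeTexture helper from the sketch above plus a hypothetical makeFBO helper that wraps the framebuffer setup (initialState and STEPS are placeholders): two texture/FBO pairs alternate roles on each pass, so the shader always reads the previous state while writing the next one.

var texA = makeTexture(initialState), texB = makeTexture(null);
var fboA = makeFBO(texA), fboB = makeFBO(texB);

for (var step = 0; step < STEPS; step++) {
    gl.bindFramebuffer(gl.FRAMEBUFFER, fboB);  // write the next state into B...
    gl.bindTexture(gl.TEXTURE_2D, texA);       // ...while reading the current one from A
    gl.drawArrays(gl.TRIANGLES, 0, 6);
    var t = texA; texA = texB; texB = t;       // swap: last output becomes next input
    var f = fboA; fboA = fboB; fboB = f;
}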
We would like to underline the first bottleneck for "serious" GPGPU in JavaScript. The tacit cast between 64-bit JavaScript floating-point numbers on the CPU and WebGL single-precision (32-bit) floats leads to a loss of precision on the order of 10^-6. Storing the values in a JavaScript typed array beforehand makes no difference. For rough computing tasks like gaming this will not be an issue, but for more serious purposes such as encryption or compression it may lead to inconsistencies. Note that this issue appears in memory transfers between the CPU and the GPU, not during computation, where each processing unit remains internally consistent.
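The loss is easy to reproduce in plain JavaScript with a typed array, which performs the same 64-bit to 32-bit rounding that a WebGL float texture does:

var x = 1.000000123456789;         // a 64-bit double on the CPU
var y = new Float32Array([x])[0];  // round-tripped through 32-bit storage
console.log(x - y);                // ~4e-9: everything past the 7th significant digit is gone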
Here is a table of numbers tested under WebGLCL: the first column is the CPU value, the second the value returned by a tested GPU. (On the original page, a third column showed the result computed live on your own GPU.)
CPU | Tested GPU*
0 | -2.3650443381484365e-8
1 | 0.9999951930474822
2 | 2.000002376869503
3 | 2.9999975935674428
Reference sites
https://github.com/RiverTrail/RiverTrail