Using HTML5 WebGL Shaders for Computation

It seems to me that one could theoretically use WebGL for computation, such as computing primes or π or something along those lines. However, from what little I’ve seen, the shader itself isn’t written in JavaScript, so I have a few questions:

  1. What language are the shaders written in?
  2. Would it even be worthwhile to attempt to do such a thing, taking into account how shaders work?
  3. How does one pass variables back and forth during runtime? Or if not possible, how does one pass information back after the shader finishes executing?
  4. Since it isn’t JavaScript, how would one handle very large integers (BigInteger in Java, or a ported version in JavaScript)?
  5. I would assume this automatically compiles the script so that it runs across all the cores in the graphics card; can I get a confirmation?

If relevant, in this specific case, I’m trying to factor fairly large numbers as part of a [very] extended compsci project.


  1. WebGL shaders are written in GLSL.
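For the curious, GLSL is a small C-like language. A minimal fragment shader (the “hello world” of shaders, writing a constant colour for every pixel) looks like this:

```glsl
precision mediump float;

void main() {
  // Every fragment (pixel) runs this in parallel on the GPU.
  gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); // opaque red
}
```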

I’ve used shaders for computation from JavaScript in Chrome, using WebGL to solve the travelling salesman problem as a distributed set of smaller optimization problems solved in the fragment shader, and in a few other genetic optimization problems.


  1. You can put floats in (r: 1.00, g: 234.24234, b: -22.0), but you can only get integers out (r: 255, g: 255, b: 0). This can be overcome by encoding a single float into 4 integer channels per output fragment. The encode/decode is so heavy an operation that it almost defeats the purpose for 99% of problems. You’re better off solving problems with simple integer or boolean sub-solutions.

  2. Debugging is a nightmare of epic proportions, and at the time of writing the tooling around it is still immature.

  3. Injecting data into the shader as pixel data is VERY slow, and reading it back out is even slower. To give you an example, reading and writing the data to solve a TSP problem takes 200 ms and 400 ms respectively, while the actual ‘draw’ or ‘compute’ time on that data is 14 ms. For this to be worthwhile, your data set has to be large enough that the parallel compute savings outweigh that fixed transfer cost.

  4. JavaScript is weakly typed (on the surface…), whereas OpenGL ES is strongly typed. In order to interoperate we have to use things like Int32Array or Float32Array in JavaScript, which feels awkward and constraining in a language normally touted for its freedom.

  5. Big-number support comes down to using 5 or 6 textures of input data, combining all that pixel data into a single number structure (somehow…), then operating on that big number in a meaningful way. Very hacky, and not at all recommended.
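Points 1 and 4 above can be sketched together in plain JavaScript: a `Float32Array` and a `Uint8Array` viewing the same buffer show both the typed-array interop and the float-to-4-integers round trip. This is the CPU side only, and not any particular library’s packing scheme; the GLSL-side encode is considerably messier:

```javascript
// One 4-byte buffer viewed two ways: as a 32-bit float and as the
// 4 integer channels (r, g, b, a) you would read back with gl.readPixels.
const buf = new ArrayBuffer(4);
const asFloat = new Float32Array(buf);
const asBytes = new Uint8Array(buf);

asFloat[0] = 234.24234;                 // the value the shader would emit
const rgba = Array.from(asBytes);       // 4 integers, each 0..255

// Decode on the CPU: write the channels back and reread the float view.
const out = new ArrayBuffer(4);
new Uint8Array(out).set(rgba);
const decoded = new Float32Array(out)[0];
console.log(rgba, decoded);             // decoded ≈ 234.24234 (float32 precision)
```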
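The CPU-side half of the big-number hack in point 5 is at least straightforward: split a big integer into 8-bit limbs, each of which fits in one colour channel of a texture. A sketch using JavaScript’s `BigInt` (the shader-side arithmetic on those limbs is where it gets ugly):

```javascript
// Split a big integer into base-256 limbs (one per texture channel),
// least-significant limb first, and reassemble it.
function toLimbs(n) {
  const limbs = [];
  while (n > 0n) {
    limbs.push(Number(n & 0xFFn)); // low 8 bits -> one colour channel
    n >>= 8n;
  }
  return limbs;
}

function fromLimbs(limbs) {
  // Fold from the most significant limb down.
  return limbs.reduceRight((acc, b) => (acc << 8n) | BigInt(b), 0n);
}

const n = 123456789012345678901234567890n;
const limbs = toLimbs(n);
console.log(limbs.length, fromLimbs(limbs) === n); // round-trips exactly
```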


There’s a project currently being worked on to do pretty much exactly what you’re doing – WebCL. I don’t believe it’s live in any browsers yet, though.

To answer your questions:

  1. Already answered I guess!
  2. Probably not worth doing in WebGL. If you want to play around with GPU computation, you’ll probably have better luck doing it outside the browser for now, as the toolchains are much more mature there.
  3. If you’re stuck with WebGL, one approach might be to write your results into a texture and read that back.
  4. With difficulty. Much like CPUs, GPUs can only work with certain size values natively, and everything else has to be emulated.
  5. Yep.

I mucked around with this kind of stuff once. In answer to your third question, I passed vars back and forth with ‘uniforms’.

*edit – looking back now, I also used vector ‘attributes’ to pass data in from outside.
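On the shader side, those two input paths look like this in a vertex shader (a sketch; the names here are made up): uniforms are per-draw constants, attributes are per-vertex data fed from a buffer:

```glsl
// Per-draw constant, set from JS with gl.getUniformLocation + gl.uniform1f
uniform float u_time;

// Per-vertex input, fed from a buffer via gl.vertexAttribPointer
attribute vec2 a_position;

// Passed on to the fragment shader, interpolated per fragment
varying vec2 v_position;

void main() {
  v_position = a_position;
  gl_Position = vec4(a_position * u_time, 0.0, 1.0);
}
```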

You’ll need to run MAMP or another local web server for this to work locally (browsers won’t load textures from file:// URLs)…

I used pixels to represent letters of the alphabet and did string searching with shaders. It was amazingly fast, faster than CPU-based native search programs: searching an entire book for instances of a single word is faster in the browser on the GPU than in a lightweight program like TextEdit. And I was only using a single texture.
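The encoding half of that trick is simple enough to sketch on the CPU: map each character to a byte and lay the text out as single-channel pixel data, then test every offset for a match — the test below runs serially, but it is exactly the per-pixel comparison each fragment would perform in parallel. (A hypothetical sketch of the idea, not the answerer’s actual code.)

```javascript
// Encode text as byte "pixels" (one char -> one 8-bit luminance value),
// as you would before uploading it as a texture with gl.texImage2D.
function textToPixels(text) {
  const px = new Uint8Array(text.length);
  for (let i = 0; i < text.length; i++) px[i] = text.charCodeAt(i) & 0xFF;
  return px;
}

// The test each fragment would perform: does the needle match
// starting at my coordinate?
function matchAt(haystack, needle, i) {
  for (let j = 0; j < needle.length; j++) {
    if (haystack[i + j] !== needle[j]) return false;
  }
  return true;
}

const book = textToPixels('call me ishmael call me');
const word = textToPixels('me');
const hits = [];
for (let i = 0; i <= book.length - word.length; i++) {
  if (matchAt(book, word, i)) hits.push(i);
}
console.log(hits); // offsets where "me" starts
```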

The answers/resolutions are collected from Stack Overflow and are licensed under CC BY-SA 2.5, CC BY-SA 3.0 and CC BY-SA 4.0.
