I am trying to use a Float32Array as an input to GPGPU calculations in the browser.
Converting from Float32Array to Uint8Array works OK, and I am looking for a way to
- convert RGBA to float
- perform some calculations
- convert float to RGBA
in order to retrieve the results in JavaScript.
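The upload side already works, but for context it looks roughly like this (a sketch, not my exact code; gl, numInputs and the one-row texture layout are placeholders):
// reinterpret the Float32Array's bytes as a Uint8Array and upload them as an
// RGBA/UNSIGNED_BYTE texture, so each texel holds the 4 raw bytes of one float
const input = new Float32Array(numInputs);
const bytes = new Uint8Array(input.buffer);

const tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, numInputs, 1, 0,
              gl.RGBA, gl.UNSIGNED_BYTE, bytes);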
The fragment shader looks like this:
precision highp float;

// input data: one float packed into the RGBA bytes of each texel
uniform sampler2D aValues;
uniform vec2 aDimensions;

// fetch the texel that holds the index-th value
vec4 getPointX(in sampler2D tex, in vec2 dimensions, in float index) {
    vec2 uv = (
        vec2(
            floor(mod(index, dimensions.x)),
            floor(index / dimensions.x)) + 0.5
        ) / dimensions;
    return texture2D(tex, uv).rgba;
}

const vec4 bitEnc = vec4(1.0, 255.0, 65025.0, 16581375.0);
const vec4 bitDec = 1.0 / bitEnc;

vec4 EncodeFloatRGBA (float v) {
    vec4 enc = bitEnc * v;
    enc = fract(enc);
    enc -= enc.yzww * vec2(1.0 / 255.0, 0.0).xxxy;
    return enc;
}

float DecodeFloatRGBA (vec4 v) {
    return dot(v, bitDec);
}

void main(void) {
    vec4 a = getPointX(aValues, aDimensions, float(gl_FragCoord.x));
    float o = DecodeFloatRGBA(a);
    vec4 t = EncodeFloatRGBA(o);
    gl_FragColor = t;
}
with the encoding/decoding taken from this answer (the question there is about ints, but the answer seems to cover packing a float into RGBA and back).
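For reference, here is the same packing logic in plain JavaScript, as I read the scheme (a sketch only; it ignores the 8-bit quantisation that happens when the shader writes gl_FragColor):
const bitEnc = [1, 255, 65025, 16581375];

// same idea as the shader's EncodeFloatRGBA: fract(bitEnc * v),
// then remove the carry contributed by the next channel
function encodeFloatRGBA(v) {
  const enc = bitEnc.map(b => (b * v) % 1);
  for (let i = 0; i < 3; i++) enc[i] -= enc[i + 1] / 255;
  return enc;
}

// same as the shader's DecodeFloatRGBA: dot(v, 1.0 / bitEnc)
function decodeFloatRGBA(rgba) {
  return rgba.reduce((sum, c, i) => sum + c / bitEnc[i], 0);
}

console.log(decodeFloatRGBA(encodeFloatRGBA(0.1234))); // ≈ 0.1234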
Retrieving the pixels looks like this:
const pixels = new Uint8Array(numOutputs * 4);
const results = new Float32Array(pixels.buffer);
gl.readPixels(0, 0, numOutputs, 1, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
However, the round trip RGBA => float => RGBA does not seem to work.
For example, on an input of
Uint8Array(64) [ 222, 146, 192, 61, 116, 74, 28, 63, 56, 51, … ]
I get
Uint8Array(64) [ 222, 146, 192, 0, 116, 74, 28, 128, 56, 51, … ]
where every fourth element is not what I expect.
If, in the above code, I instead do
gl_FragColor = a;
and skip the decoding/encoding, the round trip works cleanly, so the problem appears to be in the decoding/encoding step.
Where is the issue then?