As mentioned in the previous article, WebGL 2 uses a programmable pipeline.
This article covers shaders and the fundamentals of GLSL - the essential building blocks before tackling fluid simulation.
Before creating complex effects, you need to understand how WebGL fundamentally renders anything to the screen. Unlike declarative approaches like HTML/CSS, WebGL requires you to define geometry and appearance explicitly using custom programs. These programs, called shaders, are written in GLSL and executed directly on the GPU.
Introduction to GLSL ES 3.0
GLSL ES 3.0 (OpenGL Shading Language for Embedded Systems version 3.0) is a high-level shading language with a C-like syntax, specifically designed for graphics processing units (GPUs) in embedded systems and web browsers.
The GLSL ES 3.00 specification requires every shader to begin with #version 300 es on its very first line (nothing may precede the directive, not even comments or blank lines). Without it, the source is treated as GLSL ES 1.00 and any ES 3.0 syntax will fail to compile. The specification also requires fragment shaders to declare a default float precision (e.g., precision mediump float;), since fragment shaders have no implicit one.
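A minimal sketch of the smallest valid fragment shader under these rules (the output name fragColor is just a convention):
#version 300 es
precision mediump float;  // Required: fragment shaders have no default float precision
out vec4 fragColor;       // User-declared output; ES 3.0 has no built-in gl_FragColor
void main() {
  fragColor = vec4(1.0, 0.0, 0.0, 1.0); // Solid red
}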
GLSL ES 3.0 Type System
The type system is the foundation of GLSL ES 3.0 code. Understanding it helps you declare variables correctly and use built-in functions without confusion.
Scalar Types
- float (32-bit IEEE 754)
- int (32-bit signed)
- uint (32-bit unsigned)
- bool
These are the simplest types: single numeric or boolean values.
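A quick sketch of scalar declarations and the conversion rules that most often trip people up (narrowing conversions are never implicit):
float a = 1.5;     // 32-bit IEEE 754 float
int b = -3;        // 32-bit signed integer
uint c = 7u;       // Unsigned literals take the 'u' suffix
bool d = a > 0.0;  // Comparisons yield bool
int e = int(a);    // Float-to-int needs an explicit constructor; the fraction is dropped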
Vector Types
- Floating-point: vec2, vec3, vec4
- Integer: ivec2, ivec3, ivec4
- Unsigned: uvec2, uvec3, uvec4
- Boolean: bvec2, bvec3, bvec4
GPUs process vector operations efficiently in parallel.
vec4 color = vec4(1.0, 0.5, 0.2, 1.0);
float r = color.r; // Same as color.x or color[0]
vec3 bgr = color.bgr; // Swizzle: vec3(0.2, 0.5, 1.0)
Matrix and Sampler Types
- Matrices: mat2, mat3, mat4 (plus non-square variants)
- Texture samplers: sampler2D, sampler3D, samplerCube
- Integer samplers: isampler2D, usampler2D
A sampler type (e.g. sampler2D) tells the compiler you want to sample a 2D texture inside your shader. You’ll see these used in fragment shaders when you write expressions like texture(uTexture, vUV).
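A small fragment shader sketch combining both families (uTexture and vUV are assumed names, not built-ins):
#version 300 es
precision mediump float;
uniform sampler2D uTexture; // Bound to a texture unit from JavaScript
in vec2 vUV;
out vec4 fragColor;
void main() {
  mat3 m = mat3(1.0);                           // Single-scalar constructor builds the identity matrix
  vec3 tinted = m * texture(uTexture, vUV).rgb; // Matrix * vector multiplication on a sampled texel
  fragColor = vec4(tinted, 1.0);
}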
Variable Qualifiers
In GLSL, you also need to tell the compiler how data moves into and out of each shader stage. That’s where qualifiers like in, out, and uniform come in.
- in: Input data (per-vertex in the vertex shader, interpolated in the fragment shader)
- out: Output data (interpolated outputs in the vertex shader, framebuffer outputs in the fragment shader)
- uniform: Constant values for the entire draw call, uploaded from JavaScript as sketched below
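A minimal sketch of the JavaScript side, assuming the linked program declares a float uniform named uTime:
// Query the location once after linking
const uTimeLocation = gl.getUniformLocation(program, 'uTime');
gl.useProgram(program);                                 // Uniform state is stored per program
gl.uniform1f(uTimeLocation, performance.now() / 1000);  // Upload elapsed seconds as a float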
Uniform Blocks
WebGL 2 supports grouping uniforms into blocks for efficient updates.
The GLSL code declares the uniform block structure in the shader:
layout(std140) uniform TransformBlock {
mat4 model;
mat4 view;
mat4 projection;
} transform;
The JavaScript code creates a buffer and uploads data to match that structure:
// Create and bind uniform buffer
const ubo = gl.createBuffer();
gl.bindBuffer(gl.UNIFORM_BUFFER, ubo);
gl.bufferData(gl.UNIFORM_BUFFER, transformData, gl.DYNAMIC_DRAW);
// Associate with shader program
const blockIndex = gl.getUniformBlockIndex(program, 'TransformBlock');
gl.uniformBlockBinding(program, blockIndex, 0);
gl.bindBufferBase(gl.UNIFORM_BUFFER, 0, ubo);
The std140 layout is a standardized memory layout rule. It ensures all GPUs organize uniform buffer data the same way. Without this standard, your code might work on one graphics card but fail on another due to different memory arrangements.
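For TransformBlock, std140 is simple: a mat4 occupies 16 consecutive floats with no extra padding, so the whole block is 48 floats. A sketch of building the transformData used above (modelMatrix, viewMatrix, and projectionMatrix are assumed 16-element column-major arrays):
// 3 x mat4 = 48 floats = 192 bytes under std140
const transformData = new Float32Array(48);
transformData.set(modelMatrix, 0);       // mat4 model      -> floats 0..15
transformData.set(viewMatrix, 16);       // mat4 view       -> floats 16..31
transformData.set(projectionMatrix, 32); // mat4 projection -> floats 32..47
// Caution: other types do pad, e.g. a vec3 member is aligned to 16 bytes under std140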
WebGL 2 Vertex Specification Enhancements
WebGL 2 (GLSL ES 3.0) added several conveniences that cut down on boilerplate JavaScript and reduce errors when setting up vertex data.
Vertex Array Objects (VAOs)
Instead of binding and configuring each attribute (position, normal, UV, etc.) every time you draw, you can store that entire setup in a VAO.
VAOs encapsulate vertex attribute configuration, storing buffer bindings, attribute pointers, and enable states. This eliminates repetitive setup calls:
const vao = gl.createVertexArray();
gl.bindVertexArray(vao);
// Configure all attributes once (see createFullScreenQuad below for a complete example)
gl.bindVertexArray(null);
// Later: single call to use
gl.bindVertexArray(vao);
gl.drawArrays(gl.TRIANGLES, 0, vertexCount);
Explicit Attribute Locations
Instead of querying attribute indices at runtime with gl.getAttribLocation, you can hard-code the location right in your shader. GLSL ES 3.0 supports explicit attribute location assignment, eliminating the need for getAttribLocation queries:
layout(location = 0) in vec2 aPosition;
layout(location = 1) in vec3 aColor;
JavaScript can then reference attributes numerically: gl.enableVertexAttribArray(0).
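A sketch of the matching buffer setup, assuming one interleaved buffer holding a vec2 position followed by a vec3 color (20 bytes per vertex):
// Locations 0 and 1 are fixed by the layout qualifiers; no lookups needed
gl.enableVertexAttribArray(0);
gl.vertexAttribPointer(0, 2, gl.FLOAT, false, 20, 0); // aPosition: 2 floats at byte offset 0
gl.enableVertexAttribArray(1);
gl.vertexAttribPointer(1, 3, gl.FLOAT, false, 20, 8); // aColor: 3 floats at byte offset 8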
Shaders
A shader is a program written in GLSL for processing graphics data. There are two main types of shaders that work together to render graphics: the vertex shader and the fragment shader.
GLSL provides predefined built-in variables that handle communication between shader stages and the GPU.
Vertex Shader Built-ins:
- gl_Position: Must be written by the vertex shader. It’s the clip-space (x, y, z, w) output.
- gl_VertexID: Automatically provides the index of the current vertex (useful for procedural geometry; see the sketch after this list).
- gl_InstanceID: When doing instanced draws, this tells you which instance you’re on.
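As a sketch of procedural geometry, gl_VertexID can produce a full-screen triangle with no vertex buffers at all; the host side just calls gl.drawArrays(gl.TRIANGLES, 0, 3):
#version 300 es
// One oversized triangle covering the viewport, derived purely from gl_VertexID
void main() {
  vec2 corner = vec2((gl_VertexID << 1) & 2, gl_VertexID & 2); // IDs 0,1,2 -> (0,0), (2,0), (0,2)
  gl_Position = vec4(corner * 2.0 - 1.0, 0.0, 1.0);            // -> (-1,-1), (3,-1), (-1,3)
}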
Fragment Shader Built-ins:
- gl_FragCoord: The window-space coordinates (x, y, z, w) of the current fragment center, where z is depth and w is 1/w from clip space. gl_FragCoord.y uses a bottom-left origin (OpenGL convention), so manual flipping may be needed for top-left-origin coordinates (see the sketch after this list).
- gl_FrontFacing: A boolean telling you whether the primitive was front-facing or back-facing.
- gl_FragDepth: Allows the fragment shader to write a custom value to the depth buffer.
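A common gl_FragCoord sketch: normalizing pixel coordinates into 0..1 UVs, assuming a uResolution uniform holding the canvas size in pixels:
#version 300 es
precision mediump float;
uniform vec2 uResolution; // Assumed uniform: viewport size in pixels
out vec4 fragColor;
void main() {
  vec2 uv = gl_FragCoord.xy / uResolution; // 0..1 across the viewport, bottom-left origin
  fragColor = vec4(uv, 0.0, 1.0);
}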
Vertex Shader
The vertex shader executes once per vertex, with the mandatory output gl_Position in clip-space coordinates. It receives per-vertex data through in variables and passes interpolated data to the fragment shader via out variables:
#version 300 es
in vec3 aPosition; // Per-vertex position from buffer
in vec3 aColor; // Per-vertex color from buffer
out vec3 vColor; // Interpolated color for fragments
uniform mat4 uModelViewMatrix; // Constant for all vertices
uniform mat4 uProjectionMatrix; // Constant for all vertices
void main() {
// Transform to clip space with w=1.0 for positions
gl_Position = uProjectionMatrix * uModelViewMatrix * vec4(aPosition, 1.0);
vColor = aColor; // Pass color to fragment shader
}
After vertex shader execution, the GPU clips primitives against the clip volume, and its fixed-function hardware then automatically performs perspective division (dividing by the w component) to convert clip-space coordinates to Normalized Device Coordinates (NDC). For example, a clip-space position of (2, 1, 0, 2) becomes the NDC position (1, 0.5, 0).
Compute in the vertex shader when possible: a mesh typically has far fewer vertices than the millions of fragments it covers, so per-vertex work is much cheaper.
Fragment Shader
After NDC is produced, the GPU’s fixed-function hardware maps NDC to window (pixel) coordinates. The rasterizer then converts triangles (defined by vertices) into fragments. Each fragment corresponds to a pixel location that the triangle covers and carries a screen position (x, y), a depth value (z), and the interpolated values from the vertex shader outputs.
The fragment shader runs once per fragment, receiving interpolated values from the vertex shader. The GPU automatically interpolates all vertex shader outputs across each triangle's surface:
#version 300 es
precision mediump float;
in vec3 vColor; // Interpolated from vertex shader
out vec4 fragColor; // RGBA output to framebuffer
void main() {
fragColor = vec4(vColor, 1.0); // Alpha = 1.0 for opaque
}
Output colors typically use a vec4 data type, representing red, green, blue, and alpha (RGBA) components, with values often ranging from 0.0 to 1.0.
GPUs can trade accuracy for speed: using mediump instead of highp can improve performance on mobile devices while maintaining acceptable visual quality.
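Precision can also be set per declaration; a small sketch (uResolution is an assumed name):
precision mediump float;        // Default precision for every float in this shader
uniform highp vec2 uResolution; // Opt specific variables back up to highp where accuracy matters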
GPUs execute fragments in groups of threads (sometimes called warps or wavefronts). If a branch splits those threads — some go one way, some go another — the GPU ends up running both sides of the branch. That extra work makes branching less efficient:
// Efficient: branchless operations
float t = step(0.5, value); // Returns 0.0 or 1.0
vec3 result = mix(colorA, colorB, t);
// Inefficient alternative: dynamic branching
// (a separate snippet; redeclaring result in the same scope would not compile)
vec3 branched = (value > 0.5) ? colorB : colorA;
Full-Screen Quad Geometry
Many advanced GPU effects, from motion blur to fluid dynamics, are implemented by drawing two triangles that form a rectangle (a quad). Rasterizing the quad invokes the fragment shader once for every pixel it covers; when the quad covers the whole screen, it is called a 'full-screen quad'. This technique lets you run a fragment shader on every pixel of the screen or of the quad's area.
A full-screen quad typically uses two triangles to cover the viewport because GPUs are highly optimized for processing triangles. This method ensures the entire rectangular area is consistently covered:
Full-Screen Quad Implementation
This implementation renders a full-screen quad. We'll create a visual test pattern by outputting UV coordinates as colors - this technique helps verify that our geometry covers the viewport correctly and that our coordinate mapping works as expected. This same setup will later serve as the foundation for post-processing effects.
Shaders
// Vertex Shader
#version 300 es
layout(location = 0) in vec2 aPosition;
out vec2 vUV;
void main() {
gl_Position = vec4(aPosition, 0.0, 1.0);
// Account for WebGL's bottom-left origin:
vUV = aPosition * 0.5 + 0.5;
// Flip Y for top-left texture origin
vUV.y = 1.0 - vUV.y;
}
// Fragment Shader
#version 300 es
precision mediump float;
in vec2 vUV;
out vec4 fragColor;
void main() {
fragColor = vec4(vUV, 0.5, 1.0); // RG gradient visualization
}
JavaScript Setup
// Context acquisition
const gl = canvas.getContext('webgl2');
if (!gl) throw new Error('WebGL 2 not supported');
// Shader compilation
function compileShader(gl, source, type) {
const shader = gl.createShader(type);
gl.shaderSource(shader, source);
gl.compileShader(shader);
if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
const info = gl.getShaderInfoLog(shader);
gl.deleteShader(shader);
throw new Error(`Compile failed: ${info}`);
}
return shader;
}
// Program creation
function createProgram(gl, vertSrc, fragSrc) {
const program = gl.createProgram();
gl.attachShader(program, compileShader(gl, vertSrc, gl.VERTEX_SHADER));
gl.attachShader(program, compileShader(gl, fragSrc, gl.FRAGMENT_SHADER));
gl.linkProgram(program);
if (!gl.getProgramParameter(program, gl.LINK_STATUS)) {
throw new Error(`Link failed: ${gl.getProgramInfoLog(program)}`);
}
return program;
}
// Full-screen quad VAO
function createFullScreenQuad(gl) {
const vao = gl.createVertexArray();
gl.bindVertexArray(vao);
// Two triangles: [-1,-1] to [1,1]
const vertices = new Float32Array([
-1, -1, 1, -1, -1, 1, // Triangle 1
-1, 1, 1, -1, 1, 1, // Triangle 2
]);
const buffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
gl.bufferData(gl.ARRAY_BUFFER, vertices, gl.STATIC_DRAW);
gl.enableVertexAttribArray(0);
gl.vertexAttribPointer(0, 2, gl.FLOAT, false, 0, 0);
gl.bindVertexArray(null);
return vao;
}
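The pieces above come together in a short render sequence; a sketch, assuming vertSrc and fragSrc hold the two shader sources:
// Compile once, then draw the quad
const program = createProgram(gl, vertSrc, fragSrc);
const quadVAO = createFullScreenQuad(gl);
gl.viewport(0, 0, gl.drawingBufferWidth, gl.drawingBufferHeight);
gl.useProgram(program);
gl.bindVertexArray(quadVAO);
gl.drawArrays(gl.TRIANGLES, 0, 6); // 6 vertices = 2 triangles covering the screen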
This implementation renders a full-screen quad with a UV coordinate gradient, serving as the foundation for post-processing effects and GPU-based computations.
Example: repo
Debugging Techniques
Debugging shaders presents unique challenges because they execute on the GPU, separate from the JavaScript environment. You can't simply use console.log() inside a GLSL shader to inspect values, so you have to rely on dedicated techniques to understand program flow and data and to track down issues.
Shader Compilation Checks
if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
console.error('Shader error:', gl.getShaderInfoLog(shader));
}
Visual Debugging
Output intermediate values as colors:
fragColor = vec4(vUV, 0.0, 1.0); // UV coordinates
fragColor = vec4(normalize(vNormal) * 0.5 + 0.5, 1.0); // Normals
Summary
Full-screen quads provide the foundation for GPU image processing and simulations. Two triangles covering the viewport create a one-to-one fragment-to-pixel mapping, enabling parallel computation across the framebuffer. This technique underlies post-processing effects, fluid simulations, and GPU-accelerated algorithms.
This is part of my series on implementing interactive 3D visualizations with WebGL 2.