WebGPU Engine from Scratch Part 5: More Pipeline Improvements
ndesmic


Publish Date: Aug 18

I was hoping to get to new features, but there was just too much I wanted to fix up in the pipeline; it was getting really taxing to try different things. So this will be another chapter of scattershot improvements, and admittedly they're more boring in nature. There's enough of them that I didn't want to bog down a feature post with so many enhancement diversions.

Surface Grid Mesh

I want to re-add the formerly named "terrain mesh" but I'm going to rename it "surface grid" because it's a little more general (I also fixed a bug in the uvSphere where the index buffer was too big).

/**
 * Generates a flat surface made up of multiple quads, faces +Y, each quad is 1x1
 * @param {number} height number of quads along Z
 * @param {number} width number of quads along X
 */
export function surfaceGrid(height, width){
    const vertexLength = (height + 1) * (width + 1);
    const positions = new Float32Array(vertexLength * 3);
    const uvs = new Float32Array(vertexLength * 2);
    const normals = new Float32Array(vertexLength * 3);
    const tangents = new Float32Array(vertexLength * 3);
    const indices = new Uint16Array(height * width * 6); //unsigned to match WebGPU's "uint16" index format

    let z = -(height / 2);

    for (let row = 0; row < height + 1; row++) {
        let x = -(width / 2);
        for (let col = 0; col < width + 1; col++) {
            positions.set([
                x, 0, z 
            ], (row * (width + 1) + col) * 3);
            uvs.set([
                col / width, row / height
            ], (row * (width + 1) + col) * 2);
            normals.set([
                0, 1, 0
            ], (row * (width + 1) + col) * 3);
            tangents.set([
                1, 0, 0
            ], (row * (width + 1) + col) * 3);
            x++;
        }
        z++;
    }

    for(let row = 0; row < height; row++){
        for(let col = 0; col < width; col++){
            const index = row * (width + 1) + col;
            indices.set([
                index, index + 1, index + width + 2, //take into account the extra vert at end of row
                index, index + width + 2, index + width + 1
            ], (row * width + col) * 6);
        }
    }

    return {
        positions,
        uvs,
        normals,
        indices,
        tangents,
        vertexLength
    };
}

This is much cleaner. It doesn't create extra overlapping vertices anymore and the code is more straightforward. This will let us generate a floor so we can see the shadows.
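As a quick sanity check of the sizing math, this little helper just restates the buffer-size formulas from surfaceGrid (it isn't part of the engine):

```javascript
//Restates the buffer-size formulas from surfaceGrid above:
//a height x width grid has (height+1)*(width+1) shared vertices and
//height*width quads, each quad contributing 2 triangles (6 indices).
function surfaceGridSizes(height, width) {
    const vertexLength = (height + 1) * (width + 1);
    return {
        vertexLength,
        positionFloats: vertexLength * 3, //x, y, z per vertex
        uvFloats: vertexLength * 2,       //u, v per vertex
        indexCount: height * width * 6
    };
}

//a 2x2 grid: 9 shared vertices rather than 24 duplicated ones
surfaceGridSizes(2, 2); // → { vertexLength: 9, positionFloats: 27, uvFloats: 18, indexCount: 24 }
```

The index count is the same as the duplicated-vertex version; it's only the vertex buffers that shrink.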

Mesh enhancements

Another thing to do is add some nice-to-haves to the Mesh class, as it's currently frustrating flipping back and forth between test models because they are inconsistent or use different sorts of attributes.

First I renamed length to vertexLength for clarity. Then I normalized all the properties to always have a default array so they can't be undefined, and made the size attributes return 0 if there is no corresponding data. Then I added a method useAttributes which lets you prune attribute data you aren't using.

//mesh.js
useAttributes(attrNames){
    for(const [attrName, _attrSizeName] of Mesh.attributeOrdering){
        if(!attrNames.includes(attrName)){
            this[attrName] = null;
        }
    }
    return this;
}

This is useful because we can save some space but also it means the mesh can handle the attribute sizes and thus we don't need to pass that in to packMesh.

//buffer-utils.js
/**
 * 
 * @param {{ positions: Float32Array, colors?: Float32Array, uvs?: Float32Array, normals?: Float32Array, tangents?: Float32Array, attributeLength: number, positionSize?: number, colorSize?: number, uvSize?: number, normalSize?: number, tangentSize?: number }} mesh 
 */
export function packMesh(mesh){
    const stride = (mesh.positionSize ?? 0) + (mesh.colorSize ?? 0) + (mesh.uvSize ?? 0) + (mesh.normalSize ?? 0) + (mesh.tangentSize ?? 0); //stride in terms of indices (not bytes, assume F32s)
    const buffer = new Float32Array(stride * mesh.attributeLength);

    const positionOffset = 0;
    const colorOffset = mesh.positionSize ?? 0;
    const uvOffset = colorOffset + (mesh.colorSize ?? 0);
    const normalOffset = uvOffset + (mesh.uvSize ?? 0);
    const tangentOffset = normalOffset + (mesh.normalSize ?? 0);

    for(let i = 0; i < mesh.attributeLength; i++){
        packAttribute(buffer, mesh.positions, i, positionOffset, mesh.positionSize, stride);
        packAttribute(buffer, mesh.colors, i, colorOffset, mesh.colorSize, stride);
        packAttribute(buffer, mesh.uvs, i, uvOffset, mesh.uvSize, stride);
        packAttribute(buffer, mesh.normals, i, normalOffset, mesh.normalSize, stride);
        packAttribute(buffer, mesh.tangents, i, tangentOffset, mesh.tangentSize, stride);
    }

    return buffer;
}
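packAttribute is referenced above but never shown. Here's a minimal sketch of what it presumably does; the name and signature come from the calls in packMesh, but the body is my assumption:

```javascript
//A sketch of the packAttribute helper used by packMesh (the real
//implementation isn't shown in this post): copies one vertex's worth of
//a tightly-packed attribute into its slot in the interleaved buffer.
function packAttribute(buffer, data, index, offset, size, stride) {
    if (!data || !size) return; //attribute not present on this mesh
    for (let i = 0; i < size; i++) {
        buffer[index * stride + offset + i] = data[index * size + i];
    }
}
```

With positions (size 3) and uvs (size 2) the stride is 5 floats, so vertex i's data starts at index i * 5 in the packed buffer.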

So now we can use our surface mesh like this:

{
    const mesh = new Mesh(surfaceGrid(1, 1))
        .useAttributes(["positions", "uvs"]);
    const { vertexBuffer, indexBuffer } = uploadMesh(this.#device, mesh, {
        label: "floor-mesh"
    });
    this.#meshContainers.set("floor", { vertexBuffer, indexBuffer, mesh });
}

And get the vertexBufferLayout like this (#meshes was renamed to #meshContainers because I kept thinking it contained Mesh objects):

const vertexBufferLayout = getVertexBufferLayout(this.#meshContainers.get("floor").mesh);
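getVertexBufferLayout isn't shown either, but for an interleaved positions + uvs mesh it plausibly produces something like this sketch (the function name, parameter shape, and shader locations here are all assumptions, not the engine's actual code):

```javascript
//A sketch of building a GPUVertexBufferLayout for interleaved float32
//attributes; takes ordered [name, componentCount] pairs for the
//attributes a mesh actually uses (component counts 2-4; a single float
//would need format "float32" instead).
const FLOAT_BYTES = 4;
function makeVertexBufferLayout(attributeSizes) {
    let offset = 0;
    const attributes = attributeSizes.map(([_name, size], i) => {
        const attribute = {
            shaderLocation: i,
            offset: offset * FLOAT_BYTES,
            format: `float32x${size}` //e.g. "float32x3" for positions
        };
        offset += size;
        return attribute;
    });
    return {
        arrayStride: offset * FLOAT_BYTES, //bytes between consecutive vertices
        attributes
    };
}
```

For positions (3 floats) + uvs (2 floats) this yields a 20-byte stride with the uvs at byte offset 12, which is what the packed buffer from packMesh expects.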

Baking transforms

The next thing I want to do is allow us to "bake" transforms, that is, directly modify the mesh position data. This can happen during a loading phase of the pipeline if necessary, but it lets us normalize the meshes better. We already do some normalization for scale to shrink everything down to a unit volume; baking does a similar thing. Since models tend to start at the origin, they usually occupy the upper-right distant octant (the 3D analog of a quadrant). We'd rather have them always start centered, so what we can do is apply that transform and then bake it.

//mesh.js
bakeTransforms(){
    const modelMatrix = this.getModelMatrix();
    const transformedPositions = chunk(this.positions, this.positionSize)
        .map(values => {
            const lengthToPad = 4 - values.length;
            switch(lengthToPad){
                case 1:{
                    return [...values, 1.0]
                }
                case 2:{
                    return [...values, 0.0, 1.0];
                }
                case 3: {
                    return [...values, 0.0, 0.0, 1.0];
                }
                default: {
                    return [...values]; //already has 4 components, nothing to pad
                }
            }
        })
        .map(values => multiplyMatrixVector(values, modelMatrix))
        .toArray();
    //collect
    const newPositionsBuffer = new Float32Array(this.vertexLength * this.positionSize);
    for(let i = 0; i < transformedPositions.length; i++){
        newPositionsBuffer.set(transformedPositions[i].slice(0, this.positionSize), i * this.positionSize)
    }
    this.positions = newPositionsBuffer;
    this.resetTransforms();
    return this;
}

We have to do a little bit of transformation to deal with different position sizes, and we need to apply the matrix point-by-point, which means re-collecting the points into vectors from the linear positions property, which is kinda annoying. Then we create a new position buffer and assign it. We should also remove the transforms since they are now applied to the data itself; it wouldn't make sense to keep them around.
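To make the padding + multiply step concrete, here's a standalone worked example. The engine's multiplyMatrixVector isn't shown in this post, so this uses its own helper and assumes a row-major 4x4 matrix layout:

```javascript
//Applying a 4x4 transform to a padded position, as bakeTransforms does
//per vertex. Assumes row-major matrix layout; the engine's own
//multiplyMatrixVector may order its arguments or memory differently.
function mulMatVec(matrix, vector) {
    const out = [0, 0, 0, 0];
    for (let row = 0; row < 4; row++) {
        for (let col = 0; col < 4; col++) {
            out[row] += matrix[row * 4 + col] * vector[col];
        }
    }
    return out;
}

const translateX2 = [
    1, 0, 0, 2,
    0, 1, 0, 0,
    0, 0, 1, 0,
    0, 0, 0, 1
];
const padded = [1, 1, 1, 1]; //vec3 position [1,1,1] padded with w = 1
mulMatVec(translateX2, padded); // → [3, 1, 1, 1]
```

The w = 1 padding is what makes the translation column take effect; with w = 0 the same matrix would leave a direction vector untranslated.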

The iterator helper to chunk:

//iterator-utils.js
export function* chunk(iterator, size) {
    let chunk = new Array(size);
    let i = 0;
    for (const element of iterator) {
        chunk[i] = element;
        i++;
        if (i === size) {
            yield chunk;
            chunk = new Array(size);
            i = 0;
        }
    }
    if (i > 0) {
        yield chunk.slice(0, i); //trim unfilled trailing slots on a partial chunk
    }
}

The intermediate Float32Arrays are annoying to deal with, but it's fine; I don't think it's a performance problem, just a verbose-code one.

Now that we can bake transforms, when we normalize positions we should also center the model.

//mesh.js
/**
 * Normalizes positions to be unit volume and centers
 * @param {{ scale?: boolean, center?: boolean }} options
 * @returns {Mesh}
 */
normalizePositions(options = {}){
    const shouldCenter = options.center ?? true;
    const shouldScale = options.scale ?? true;
    const max = new Array(this.positionSize).fill(-Infinity);
    const min = new Array(this.positionSize).fill(Infinity);
    for(let i = 0; i < this.vertexLength; i++){
        for(let j = 0; j < this.positionSize; j++){
            const coord = this.#positions[i * this.positionSize + j];
            if(coord > max[j]){
                max[j] = coord
            }
            if(coord < min[j]){
                min[j] = coord;
            }
        }
    }

    const length = subtractVector(max, min);
    const maxLength = Math.max(...length);

    let currentCenter;
    if(shouldScale){
        for(let i = 0; i < this.positions.length; i++){
            this.#positions[i] /= maxLength;
        }
        currentCenter = addVector(divideVector(min, maxLength), divideVector(divideVector(length, maxLength), 2));
    } else {
        currentCenter = addVector(min, divideVector(length, 2));
    }
    if(shouldCenter){
        for (let i = 0; i < this.positions.length; i++) {
            const dimension = i % this.positionSize;
            this.#positions[i] -= currentCenter[dimension];
        }
    }
    return this;
}

To reiterate: this takes the min and max along each dimension (previously there was a bug here where I always started at 0), then takes the distance between min and max to get the length per dimension. To normalize we divide everything by the maxLength. To center we need the center point, which is min + length / 2 per dimension; if we scaled, we first divide by maxLength to get the updated min and length. Once we have the center we subtract it from each point. This will let us easily center non-normalized models. I removed the parameter to set the scale value because we can now use bakeTransforms to do that.
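Here's a toy one-dimensional version of that scale-then-center math to sanity-check it (not engine code, just the same arithmetic on a single axis):

```javascript
//Scale values down by their extent, then subtract the scaled center,
//mirroring what normalizePositions does per dimension.
function normalize1D(values) {
    const min = Math.min(...values);
    const max = Math.max(...values);
    const length = max - min;
    const scaled = values.map(v => v / length);
    const center = min / length + 1 / 2; //scaled min + half the scaled length
    return scaled.map(v => v - center);
}

normalize1D([2, 6]); // → [-0.5, 0.5]
```

The extent [2, 6] has length 4, so scaling gives [0.5, 1.5], the scaled center is 1, and subtracting it lands the values symmetrically around 0.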

We can take extra stuff out of uploadObj, rename and move it to data-utils.js. We'll use the mesh directly to add transforms and normalization.

//data-utils.js
/**
 * Fetches an .obj file and parses it into a Mesh
 * @param {string} url 
 * @param {{ color?: [number, number, number, number], reverseWinding?: boolean }} options
 * @returns {Promise<Mesh>}
 */
export async function fetchObjMesh(url, options = {}) {
    const response = await fetch(url);
    if (!response.ok) throw new Error(`Could not fetch obj content from ${url}`);
    const objText = await response.text();
    const objContent = loadObj(objText, { color: options.color, reverseWinding: options.reverseWinding });
    const mesh = new Mesh(objContent);
    return mesh;
}   

Finally, we can get the teapot scaled, centered, and rotated when loaded:

//gpu-engine.js
{
    const mesh = await fetchObjMesh("./objs/teapot-low.obj", { reverseWinding: true });
    mesh.useAttributes(["positions", "uvs"])
        .normalizePositions()
        .rotate({ x: -Math.PI / 2 })
        .bakeTransforms();
    const { vertexBuffer, indexBuffer } = await uploadMesh(this.#device, mesh, { label: "teapot" });
    this.#meshContainers.set("teapot", { vertexBuffer, indexBuffer, mesh });
}

Rendered teapot in center of screen

Materials

So we can render a textured teapot, but what about something else in the same scene? In this case we'll need to make sure the textures get parameterized into bind groups. We'll give each mesh a property called "material". For now this will just be the name of the texture.

I found some free textures and wired them up.

//gpu-engine.js
async initializeTextures(){
    this.#textures.set("marble", await uploadTexture(this.#device, "./img/marble-white/marble-white-base.jpg"));
    this.#textures.set("red-fabric", await uploadTexture(this.#device, "./img/red-fabric/red-fabric-base.jpg"));
    //...other stuff here
}

And associate them to meshes

async initializeMeshes(){
    {
        const mesh = await fetchObjMesh("./objs/teapot-low.obj", { reverseWinding: true });
        mesh.useAttributes(["positions", "uvs"])
            .normalizePositions()
            .rotate({ x: -Math.PI / 2 })
            .bakeTransforms()
+           .setMaterial("marble");
        const { vertexBuffer, indexBuffer } = await uploadMesh(this.#device, mesh, { label: "teapot" });
        this.#meshContainers.set("teapot", { vertexBuffer, indexBuffer, mesh });
    }
    {
        const mesh = new Mesh(surfaceGrid(2, 2))
            .useAttributes(["positions", "uvs"])
            .translate({ y: -0.24 })
+           .setMaterial("red-fabric");
        const { vertexBuffer, indexBuffer } = uploadMesh(this.#device, mesh, {
            label: "floor-mesh"
        });
        this.#meshContainers.set("floor", { vertexBuffer, indexBuffer, mesh });
    }
    // {

setMaterial is identical to setting the material property but chainable. Lastly, when setting the texture bind group, we pass in the mesh (which setMainBindgroups already has) so we can access the material:

setMainTextureBindGroup(passEncoder, bindGroupLayouts, mesh){
    const textureBindGroup = this.#device.createBindGroup({
        layout: bindGroupLayouts.get("textures"),
        entries: [
            { binding: 0, resource: this.#samplers.get("main") },
-           { binding: 1, resource: this.#textures.get("earth").createView() }
+           { binding: 1, resource: this.#textures.get(mesh.material).createView() },
        ]
    });
    passEncoder.setBindGroup(1, textureBindGroup);
}
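For completeness, the setMaterial mentioned above would just be a chainable setter, something like this sketch (a minimal standalone class for illustration, not the full Mesh):

```javascript
//A minimal sketch of the chainable setMaterial described above.
class MeshSketch {
    material = null;
    setMaterial(name) {
        this.material = name; //a plain property set...
        return this;          //...that returns the mesh so calls can chain
    }
}

new MeshSketch().setMaterial("marble").material; // → "marble"
```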

This is enough to render two things.

A teapot on a carpet (unlit)

Of course without lighting this looks pretty bad. Next time we'll clean it up.

Code

https://github.com/ndesmic/geo/releases/tag/v0.5
