For this chapter I wanted to take a pause. The code so far was spelled out explicitly but could be more reusable. We'll clean it up now and add a few extra features that should get us to near parity with the old WebGL renderer. It'll be a hodge-podge of things.
Creating a texture upload helper
Creating textures, whether they are 2D or cubemaps, is almost identical, so we can create a helper to do both.
//wgpu-utils.js
import { loadImage } from "./image-utils.js";
/**
* Loads an image url, uploads to GPU and returns texture ref.
* Cubemaps defined like [+X, -X, +Y, -Y, +Z, -Z]
* @param {GPUDevice} device
* @param {string | string[]} urlOrUrls
* @param {*} options
*/
export async function uploadTexture(device, urlOrUrls, options) {
const urls = [].concat(urlOrUrls);
const images = await Promise.all(urls.map(url => loadImage(url)));
const size = {
width: images[0].width,
height: images[0].height,
depthOrArrayLayers: images.length
};
const texture = device.createTexture({
size,
dimension: "2d",
format: "rgba8unorm",
usage: GPUTextureUsage.COPY_DST | GPUTextureUsage.RENDER_ATTACHMENT | GPUTextureUsage.TEXTURE_BINDING
});
images.forEach((img, layer) => {
device.queue.copyExternalImageToTexture(
{
source: img,
flipY: true
},
{
texture,
origin: [0, 0, layer]
},
{
width: img.width,
height: img.height,
depthOrArrayLayers: 1
}
);
});
return texture;
}
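The loadImage helper imported from image-utils.js isn't shown in this chapter; a minimal browser-side sketch (hypothetical, the real version may differ) could be:

```javascript
//image-utils.js (sketch)
//Fetches a URL and decodes it into an ImageBitmap, which
//copyExternalImageToTexture accepts directly as a source.
async function loadImage(url) {
	const response = await fetch(url);
	if (!response.ok) throw new Error(`Could not fetch image from ${url}`);
	const blob = await response.blob();
	return createImageBitmap(blob, { colorSpaceConversion: "none" });
}
```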
Then everything gets a lot simpler
async initializeTextures(){
this.#textures.set("earth", await uploadTexture(this.#device, "./img/earth.png"));
this.#textures.set("space", await uploadTexture(this.#device, [
"./img/space_right.png",
"./img/space_left.png",
"./img/space_top.png",
"./img/space_bottom.png",
"./img/space_front.png",
"./img/space_back.png"
]));
}
The options parameter is kept in case we need it later.
Creating a mesh upload helper
We should do the same for meshes.
//wgpu-utils.js
/**
* This version is incomplete, see version in example below!
* @param {GPUDevice} device
* @param {Mesh} mesh
* @param {{ positionLength: number, uvLength?: number, label?: string }} options
*/
export function uploadMesh(device, mesh, options){
const vertices = packMesh(mesh, { positionLength: options.positionLength, uvLength: options.uvLength });
const vertexBuffer = device.createBuffer({
label: options.label,
size: vertices.byteLength,
usage: GPUBufferUsage.VERTEX | GPUBufferUsage.COPY_DST,
});
device.queue.writeBuffer(vertexBuffer, 0, vertices);
const indexBuffer = device.createBuffer({
label: `${options.label}-indices`,
size: mesh.indices.byteLength,
usage: GPUBufferUsage.INDEX | GPUBufferUsage.COPY_DST,
});
device.queue.writeBuffer(indexBuffer, 0, mesh.indices);
return {
vertexBuffer,
indexBuffer
};
}
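Note that packMesh is assumed here but never shown. As a sketch (assuming the Mesh exposes flat positions and uvs arrays plus a vertex count), it interleaves the attributes into a single Float32Array matching the vertex buffer layout:

```javascript
//Sketch of packMesh: interleave position and uv attributes per vertex.
//mesh.length is the vertex count; positions/uvs are flat arrays.
function packMesh(mesh, { positionLength, uvLength = 0 }) {
	const stride = positionLength + uvLength;
	const out = new Float32Array(mesh.length * stride);
	for (let i = 0; i < mesh.length; i++) {
		out.set(mesh.positions.slice(i * positionLength, (i + 1) * positionLength), i * stride);
		if (uvLength) {
			out.set(mesh.uvs.slice(i * uvLength, (i + 1) * uvLength), i * stride + positionLength);
		}
	}
	return out;
}
```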
Then meshes get simpler too
initializeMeshes(){
{
const mesh = new Mesh(uvSphere(8))
const { vertexBuffer, indexBuffer } = uploadMesh(this.#device, mesh, { positionLength: 3, uvLength: 2, label: "earth-mesh" });
this.#meshes.set("earth", { vertexBuffer, indexBuffer, mesh });
}
{
const mesh = new Mesh(quad());
const { vertexBuffer, indexBuffer } = uploadMesh(this.#device, mesh, { positionLength: 3, label: "background-mesh" });
this.#meshes.set("background", { vertexBuffer, indexBuffer, mesh });
}
}
However we should also apply the optimization from the cube map chapter to reduce the screen quad to a single triangle:
//mesh-generator.js
export function screenTri(){
return {
positions: new Float32Array([
-1.0, -1.0,
3.0, -1.0,
-1.0, 3.0
]),
uvs: new Float32Array([
0.0, -1.0,
3.0, 1.0,
0.0, 3.0,
]),
indices: [0,1,2],
length: 3
}
}
We could even embed this in the shader itself (something WebGL couldn't do!), but that would be inconsistent with the mesh pipeline (we'd need to invent a virtual mesh or something), so we'll leave it external for now. This does have a problem though: writeBuffer requires the write size to be a multiple of 4 bytes, and 3 uint16 indices are only 6 bytes.
export function uploadMesh(device, mesh, options){
const vertices = packMesh(mesh, { positionLength: options.positionLength, uvLength: options.uvLength });
const vertexBuffer = device.createBuffer({
label: options.label,
size: vertices.byteLength,
usage: GPUBufferUsage.VERTEX | GPUBufferUsage.COPY_DST,
});
device.queue.writeBuffer(vertexBuffer, 0, vertices);
const paddedIndexSize = getPaddedSize(mesh.indices.byteLength, 4);
const indexBuffer = device.createBuffer({
label: `${options.label}-indices`,
size: paddedIndexSize,
usage: GPUBufferUsage.INDEX | GPUBufferUsage.COPY_DST,
});
let indexData = mesh.indices;
if(paddedIndexSize !== mesh.indices.byteLength){
indexData = new ArrayBuffer(paddedIndexSize);
const indicesAsUint = new Uint16Array(indexData);
indicesAsUint.set(mesh.indices, 0);
}
device.queue.writeBuffer(indexBuffer, 0, indexData);
return {
vertexBuffer,
indexBuffer
};
}
This pads the index buffer up to the required multiple of 4 bytes for this case.
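getPaddedSize is used above but never defined; one way to write it:

```javascript
//Round size up to the nearest multiple of alignment
//(e.g. 6 bytes of uint16 indices pad up to 8 for writeBuffer).
function getPaddedSize(size, alignment) {
	return Math.ceil(size / alignment) * alignment;
}
```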
Simplify Pipelines
Pipelines are still pretty manual at this point, and we can't simplify them much because the shader code and vertex descriptors make up most of the actual code. But we can move to auto-binding, which should at least clean up the code a bit; we're really not doing anything fancy enough to warrant manual bind group layouts.
initializePipelines(){
{
const vertexBufferDescriptor = [{
attributes: [
{
shaderLocation: 0,
offset: 0,
format: "float32x3"
},
{
shaderLocation: 1,
offset: 12,
format: "float32x2"
}
],
arrayStride: 20,
stepMode: "vertex"
}];
const shaderModule = this.#device.createShaderModule({
code: `
struct VertexOut {
@builtin(position) position : vec4<f32>,
@location(0) uv : vec2<f32>
};
struct Uniforms {
view_matrix: mat4x4<f32>,
projection_matrix: mat4x4<f32>,
model_matrix: mat4x4<f32>,
normal_matrix: mat3x3<f32>,
camera_position: vec3<f32>
}
@group(0) @binding(0) var<uniform> uniforms : Uniforms;
@group(1) @binding(0) var main_sampler: sampler;
@group(1) @binding(1) var earth_texture: texture_2d<f32>;
@vertex
fn vertex_main(@location(0) position: vec3<f32>, @location(1) uv: vec2<f32>) -> VertexOut
{
var output : VertexOut;
output.position = uniforms.projection_matrix * uniforms.view_matrix * uniforms.model_matrix * vec4<f32>(position, 1.0);
output.uv = uv;
return output;
}
@fragment
fn fragment_main(fragData: VertexOut) -> @location(0) vec4<f32>
{
return textureSample(earth_texture, main_sampler, fragData.uv);
}
`
});
const pipelineDescriptor = {
label: "main-pipeline",
vertex: {
module: shaderModule,
entryPoint: "vertex_main",
buffers: vertexBufferDescriptor
},
fragment: {
module: shaderModule,
entryPoint: "fragment_main",
targets: [
{ format: "rgba8unorm" }
]
},
primitive: {
topology: "triangle-list",
frontFace: "ccw",
cullMode: "back"
},
layout: "auto"
};
const pipeline = this.#device.createRenderPipeline(pipelineDescriptor);
this.#pipelines.set("main", {
pipeline,
bindGroupLayouts: new Map([
["uniforms", pipeline.getBindGroupLayout(0)],
["textures", pipeline.getBindGroupLayout(1)]
]),
bindMethod: this.setMainBindGroups.bind(this)
});
}
{
const vertexBufferDescriptor = [{
attributes: [
{
shaderLocation: 0,
offset: 0,
format: "float32x3"
}
],
arrayStride: 12,
stepMode: "vertex"
}];
const shaderModule = this.#device.createShaderModule({
code: `
struct VertexOut {
@builtin(position) frag_position : vec4<f32>,
@location(0) clip_position: vec4<f32>
};
@group(0) @binding(0) var<uniform> inverse_view_matrix: mat4x4<f32>;
@group(1) @binding(0) var main_sampler: sampler;
@group(1) @binding(1) var space_texture: texture_cube<f32>;
@vertex
fn vertex_main(@location(0) position: vec3<f32>) -> VertexOut
{
var output : VertexOut;
output.frag_position = vec4(position, 1.0);
output.clip_position = vec4(position, 1.0);
return output;
}
@fragment
fn fragment_main(fragData: VertexOut) -> @location(0) vec4<f32>
{
var pos = inverse_view_matrix * fragData.clip_position;
return textureSample(space_texture, main_sampler, pos.xyz);
}
`
});
const pipelineDescriptor = {
label: "background-pipeline",
vertex: {
module: shaderModule,
entryPoint: "vertex_main",
buffers: vertexBufferDescriptor
},
fragment: {
module: shaderModule,
entryPoint: "fragment_main",
targets: [
{ format: "rgba8unorm" }
]
},
primitive: {
topology: "triangle-list",
frontFace: "ccw",
cullMode: "back"
},
layout: "auto"
};
const pipeline = this.#device.createRenderPipeline(pipelineDescriptor);
this.#pipelines.set("background", {
pipeline: pipeline,
bindGroupLayouts: new Map([
["uniforms", pipeline.getBindGroupLayout(0)],
["textures", pipeline.getBindGroupLayout(1)]
]),
bindMethod: this.setBackgroundBindGroups.bind(this)
});
}
}
This reduces the code by a fair bit, though we lose a little control; it's unclear if we'll need to add it back. The other thing I want to do here is move the shaders into external files because I don't like dealing with inline strings.
//wgpu-utils.js
/**
*
* @param {GPUDevice} device
* @param {string} url
* @param {*} options
* @returns
*/
export async function uploadShader(device, url, options = {}) {
const response = await fetch(url);
if (!response.ok) throw new Error(`Could not fetch text content from ${url}`);
const code = await response.text();
const shaderModule = device.createShaderModule({
label: options.label ?? url,
code
});
const compilationInfo = await shaderModule.getCompilationInfo();
if(compilationInfo.messages.length > 0){
throw new Error(`Failed to compile shader ${url}.`);
}
return shaderModule;
}
I'm not really handling the error messages because those are already sent to the console in Chrome, but we could do some error formatting if we really wanted. Then we can clean up a little more:
-const shaderModule = this.#device.createShaderModule({
- code: `
- ...
- `
-})
+const shaderModule = await uploadShader(this.#device, "./shaders/shader.wgsl");
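If we did want the error formatting mentioned above, a small helper over the compilation messages might look like this (a sketch; each GPUCompilationMessage exposes lineNum, linePos, type, and message per the WebGPU spec):

```javascript
//Format GPUCompilationInfo messages into readable one-line strings.
function formatCompilationMessages(url, messages) {
	return messages
		.map(m => `${url}:${m.lineNum}:${m.linePos} [${m.type}] ${m.message}`)
		.join("\n");
}
```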
Optimizing drawing
As in the cubemap chapter, we want to optimize the drawing so that we aren't drawing over the background but only filling each pixel once. For that we need a depth buffer.
//gpu-engine.js - initializeTextures
async initializeTextures(){
//...
this.#textures.set("depth", this.#device.createTexture({
size: {
width: this.#canvas.width,
height: this.#canvas.height,
depthOrArrayLayers: 1
},
format: "depth32float",
usage: GPUTextureUsage.RENDER_ATTACHMENT
}));
}
Here we're creating a new texture that's the same size as the canvas and giving it a 32-bit depth value per pixel (essentially a black/white image). This will be the depth buffer. I chose depth32float over depth24plus because apparently it's more compatible and I was having issues debugging. We need to set the depth test so that if we try to draw something behind what's already on screen, that pixel is discarded (you want less-equal rather than less because the background is always drawn at the max value 1.0, which is also the clear value, and we want that to stick).
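To see why the compare function matters, here is the comparison in plain JavaScript (a sketch of the logic, not WebGPU API): the background is drawn at depth 1.0 against a buffer cleared to 1.0, so a strict less would reject every background pixel while less-equal keeps them.

```javascript
//Software sketch of the depth test: pass means the pixel is written.
const lessEqual = (incoming, stored) => incoming <= stored;
const less = (incoming, stored) => incoming < stored;

//Background fragment at the far plane vs. the clear value:
lessEqual(1.0, 1.0); //passes -> background is drawn
less(1.0, 1.0);      //fails -> background would be discarded
```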
//gpu-engine.js initializePipelines
const pipelineDescriptor = {
label: "main-pipeline",
vertex: {
module: shaderModule,
entryPoint: "vertex_main",
buffers: vertexBufferDescriptor
},
fragment: {
module: shaderModule,
entryPoint: "fragment_main",
targets: [
{ format: "rgba8unorm" }
]
},
primitive: {
topology: "triangle-list",
frontFace: "ccw",
cullMode: "back"
},
+ depthStencil: {
+ depthWriteEnabled: true,
+ depthCompare: "less-equal",
+ format: "depth32float"
+ },
layout: "auto"
};
We add the depth stencil to the pipeline descriptor (both pipelines! only one is shown). Lastly, we actually need to use it in the render passes. This poses a bit of a problem: we want to clear on the first pass and load on subsequent passes, which means render passes must now be 1-to-1 with pipelines, so we need to shuffle the code around a bit.
render() {
const commandEncoder = this.#device.createCommandEncoder({
label: "main-command-encoder"
});
const camera = this.#cameras.get("main");
let isFirstPass = true;
const depthView = this.#textures.get("depth").createView();
for(const [pipelineName, meshNames] of this.#pipelineMesh.entries()){
const passEncoder = commandEncoder.beginRenderPass({
label: `${pipelineName}-render-pass`,
colorAttachments: [
{
storeOp: "store",
loadOp: "load",
view: this.#context.getCurrentTexture().createView()
}
],
depthStencilAttachment: {
view: depthView,
depthClearValue: 1.0,
depthStoreOp: "store",
depthLoadOp: isFirstPass ? "clear" : "load"
}
});
const pipelineContainer = this.#pipelines.get(pipelineName);
passEncoder.setPipeline(pipelineContainer.pipeline);
for(const meshName of meshNames){
const meshContainer = this.#meshes.get(meshName);
pipelineContainer.bindMethod(passEncoder, pipelineContainer.bindGroupLayouts, camera, meshContainer.mesh);
passEncoder.setVertexBuffer(0, meshContainer.vertexBuffer);
passEncoder.setIndexBuffer(meshContainer.indexBuffer, "uint16");
passEncoder.drawIndexed(meshContainer.mesh.indices.length);
}
passEncoder.end();
isFirstPass = false;
}
this.#device.queue.submit([commandEncoder.finish()]);
}
The first time through we clear the depth buffer (1.0 is the deepest value) and on subsequent passes we load the existing values. Since the compare is less-equal, when a pixel is written the new value overwrites if it is less than or equal to the stored value, and is discarded otherwise. Be aware that you should only call createView() on the depth texture once; doing it per pass seems to reset the values, and that mistake took a while to debug. One last thing is to flip the order so the Earth is drawn first.
initializePipelineMesh(){
this.#pipelineMesh.set("main", ["earth"]);
this.#pipelineMesh.set("background", ["background"]);
}
Now there is no overdraw. The Earth is drawn first, and the background is drawn only in the space around it that still has the default depth of 1.0 (the depth comes from the builtin(position) z value).
Debugging the depth test
While implementing this I kept getting the wrong result, and it's actually tricky to debug. What we want is a black/white image that represents the depth texture so we can see if it actually wrote correctly. The problem is that the depth test texture isn't directly accessible.
//debug utils
export function setupExtractDepthBuffer(device, context) {
const vertices = new Float32Array([
-1.0, -1.0,
3.0, -1.0,
-1.0, 3.0
]);
const vertexBuffer = device.createBuffer({
label: "depth-buffer-export-tri",
size: vertices.byteLength,
usage: GPUBufferUsage.VERTEX | GPUBufferUsage.COPY_DST,
});
device.queue.writeBuffer(vertexBuffer, 0, vertices);
const vertexBufferDescriptor = [{
attributes: [
{
shaderLocation: 0,
offset: 0,
format: "float32x2"
}
],
arrayStride: 8,
stepMode: "vertex"
}];
const shaderModule = device.createShaderModule({
label: "depth-buffer-export-shader",
code: `
struct VertexOut {
@builtin(position) frag_position : vec4<f32>,
@location(0) clip_position: vec4<f32>,
@location(2) uv: vec2<f32>
};
@group(0) @binding(0) var depthSampler: sampler;
@group(0) @binding(1) var depthTex: texture_depth_2d;
@vertex
fn vertex_main(@location(0) position: vec2<f32>) -> VertexOut
{
var output : VertexOut;
output.frag_position = vec4(position, 1.0, 1.0);
output.clip_position = vec4(position, 1.0, 1.0);
output.uv = position.xy * 0.5 + vec2<f32>(0.5, 0.5);
return output;
}
@fragment
fn fragment_main(fragData: VertexOut) -> @location(0) vec4<f32> {
let depth = textureSample(depthTex, depthSampler, fragData.uv);
let gamma_depth = pow(depth, 10.0);
return vec4<f32>(gamma_depth, gamma_depth, gamma_depth, 1.0); // grayscale output
}
`
});
const sampler = device.createSampler({
compare: undefined
});
const bindGroupLayout = device.createBindGroupLayout({
entries: [
{
binding: 0,
visibility: GPUShaderStage.FRAGMENT,
sampler: {
type: "non-filtering"
}
},
{
binding: 1,
visibility: GPUShaderStage.FRAGMENT,
texture: {
sampleType: "depth",
viewDimension: "2d"
}
}
]
});
const pipelineLayout = device.createPipelineLayout({
label: "depth-buffer-export-pipeline-layout",
bindGroupLayouts: [
bindGroupLayout
]
});
const pipelineDescriptor = {
label: "depth-buffer-export-pipeline",
vertex: {
module: shaderModule,
entryPoint: "vertex_main",
buffers: vertexBufferDescriptor
},
fragment: {
module: shaderModule,
entryPoint: "fragment_main",
targets: [
{ format: "rgba8unorm" }
]
},
primitive: {
topology: "triangle-list"
},
layout: pipelineLayout
};
const pipeline = device.createRenderPipeline(pipelineDescriptor);
return (depthBufferView) => {
const commandEncoder = device.createCommandEncoder({
label: "depth-buffer-export-command-encoder"
});
const passEncoder = commandEncoder.beginRenderPass({
label: `depth-buffer-export-render-pass`,
clearValue: { r: 0, g: 0, b: 0, a: 1 },
colorAttachments: [
{
storeOp: "store",
loadOp: "clear",
view: context.getCurrentTexture().createView()
}
]
});
const textureBindGroup = device.createBindGroup({
label: "depth-buffer-export-bind-group",
layout: bindGroupLayout,
entries: [
{ binding: 0, resource: sampler },
{ binding: 1, resource: depthBufferView },
]
});
passEncoder.setPipeline(pipeline);
passEncoder.setBindGroup(0, textureBindGroup);
passEncoder.setVertexBuffer(0, vertexBuffer);
passEncoder.draw(3);
passEncoder.end();
device.queue.submit([commandEncoder.finish()]);
}
}
I won't explain this too much since we've already gone over similar pieces. The function returns a function: this lets us set up the buffers once and avoid recreating them on each draw call. We call setupExtractDepthBuffer once, then call the returned function whenever we want to draw the depth buffer to the screen. We pass in the depth texture view and sample from it. In order to do this we need to add the TEXTURE_BINDING usage flag to the depth texture:
this.#textures.set("depth", this.#device.createTexture({
label: "depth-texture",
size: {
width: this.#canvas.width,
height: this.#canvas.height,
depthOrArrayLayers: 1
},
format: "depth32float",
usage: GPUTextureUsage.RENDER_ATTACHMENT | GPUTextureUsage.TEXTURE_BINDING //for debug
}));
The key here is that we're sampling a texture_depth_2d, which gives us a single float. Getting the various formats to agree was hard (and is why I went with depth32float). I also scale the values (let gamma_depth = pow(depth, 10.0);) because otherwise the differences would be too small to see; you'd just get a white screen.
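As a sanity check on that scaling: with a perspective projection most depth values sit very close to 1.0, so raising them to the 10th power spreads the near-white values apart.

```javascript
//Depth values cluster near 1.0; pow() stretches the visible range.
const nearWhite = Math.pow(0.999, 10.0);     //≈ 0.990, barely moves
const slightlyCloser = Math.pow(0.95, 10.0); //≈ 0.599, now clearly darker
```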
We set it up in initialize with
this.#extractDepthBuffer = setupExtractDepthBuffer(this.#device, this.#context);
We can call this at the bottom of the render method (this won't be in the final code since it was for debugging):
this.#extractDepthBuffer(depthView);
This will produce an image like the following, where the darker spots are closer to the viewer and pure white is the furthest value we can see.
Setting up the screen recorder
Also, for the sake of producing captures, I wanted to re-add the screen capture functionality. This time I've set it to use Shift+R to make it easier to toggle.
//wc-geo.js
onRKeyPressed(e) {
if (!e.shiftKey) return; //record function is shift+R
this.#isRecording = !this.#isRecording;
if (this.#isRecording) {
this.dom.message.textContent = "Recording video";
const stream = this.dom.canvas.captureStream(25);
this.#mediaRecorder = new MediaRecorder(stream, {
mimeType: 'video/webm;codecs=vp9'
});
this.#recordedChunks = [];
this.#mediaRecorder.ondataavailable = e => {
if (e.data.size > 0) {
this.#recordedChunks.push(e.data);
}
};
this.#mediaRecorder.start();
} else {
this.dom.message.textContent = "Recording stopped";
this.#mediaRecorder.stop();
setTimeout(() => {
const blob = new Blob(this.#recordedChunks, {
type: "video/webm"
});
downloadBlob(blob, "recording.webm");
}, 0);
setTimeout(() => {
this.dom.message.textContent = "";
}, 3000);
}
}
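downloadBlob is used above but not shown; a typical browser implementation (hypothetical, assuming a DOM) is:

```javascript
//Triggers a download of a Blob by clicking a temporary anchor element.
function downloadBlob(blob, fileName) {
	const url = URL.createObjectURL(blob);
	const anchor = document.createElement("a");
	anchor.href = url;
	anchor.download = fileName;
	anchor.click();
	URL.revokeObjectURL(url); //release the object URL once the click has fired
}
```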
Starting and stopping the simulation
One thing that would be useful is the ability to start and stop (pause/unpause) the engine. I do this with the Esc key:
//Remember to add to onKeyDown method switch!
onEscPressed(e){
if (this.engine.isRunning) {
this.engine.stop();
this.dom.message.textContent = "Paused";
} else {
this.engine.start();
this.setTemporaryMessage("Started");
}
return;
}
On the engine side:
#raf;
#isRunning = false;
start() {
this.#isRunning = true;
this.renderLoop();
}
stop(){
cancelAnimationFrame(this.#raf);
this.#isRunning = false;
}
get isRunning(){
return this.#isRunning;
}
Now when we press Esc the engine toggles between paused and unpaused. Note that any changes to the camera from key events still happen; they just don't render until the engine is running again.
OBJ Files
The last thing I want to add is .obj support so we can load some actual user-created models and not just geometric shapes. We already covered most of this.
//obj-loader.js
/**
*
* @param {string} txt
* @param {{ color?: [number, number, number, number], reverseWinding?: boolean}} options
* @returns
*/
export function loadObj(txt, options = {}) {
const positions = [];
const normals = [];
const uvs = [];
const colors = [];
const indices = [];
const faceCombos = [];
let positionSize = 3;
let normalSize = 3;
let colorSize = 4;
let uvSize = 2;
const lines = txt.split("\n");
for (const line of lines) {
const normalizedLine = line.trim();
if (!normalizedLine || normalizedLine.startsWith("#")) continue;
const parts = normalizedLine.split(/\s+/g);
const values = parts.slice(1);
switch (parts[0]) {
case "v": {
positions.push(values.map(x => parseFloat(x)));
positionSize = values.length;
break;
}
case "c": { //custom extension
if (!options.color) {
colors.push(values.map(x => parseFloat(x)));
colorSize = values.length;
}
break;
}
case "vt": {
uvs.push(values.map(x => parseFloat(x)));
uvSize = values.length;
break;
}
case "vn": {
normals.push(values.map(x => parseFloat(x)));
normalSize = values.length;
break;
}
case "f": {
if(values[0].includes("/")){
faceCombos.push(values.map(value => value.split("/").map(x => parseFloat(x) - 1)));
} else {
const zeroBasedIndices = values.map(x => parseFloat(x) - 1); //obj indices are 1-based
indices.push(
...(options.reverseWinding ? zeroBasedIndices.reverse() : zeroBasedIndices)
);
}
break;
}
}
}
if(faceCombos.length === 0){
return {
positions: positions.flat(Infinity),
uvs: uvs.flat(Infinity),
normals: normals.flat(Infinity),
indices,
colors: colors.flat(Infinity),
length: positions.length,
positionSize,
uvSize,
colorSize,
normalSize,
};
}
//For multi value faces we need to get position/uv/normal combos and put each into the pool of vertices
const comboPositions = [];
const comboUvs = [];
const comboNormals = [];
const comboIndices = [];
let startIndex = 0;
for(const combo of faceCombos){
for(const attrIndex of combo){
comboPositions.push(positions[attrIndex[0]]);
comboUvs.push(uvs[attrIndex[1]]);
comboNormals.push(normals[attrIndex[2]]);
}
if (combo.length === 3) {
if(options.reverseWinding){
comboIndices.push(startIndex + 2, startIndex + 1, startIndex);
} else {
comboIndices.push(startIndex, startIndex + 1, startIndex + 2);
}
} else if(combo.length === 4){
if(options.reverseWinding){
comboIndices.push(
startIndex + 2,
startIndex + 1,
startIndex,
startIndex + 3,
startIndex + 2,
startIndex);
} else {
comboIndices.push(
startIndex,
startIndex + 1,
startIndex + 2,
startIndex,
startIndex + 2,
startIndex + 3);
}
}
startIndex += combo.length;
}
return {
positions: comboPositions.flat(Infinity),
uvs: comboUvs.flat(Infinity),
normals: comboNormals.flat(Infinity),
indices: comboIndices.flat(Infinity),
colors: [],
length: comboPositions.length,
positionSize,
uvSize,
colorSize: 0,
normalSize
};
}
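The quad tessellation in the face loop can be checked in isolation; this standalone version mirrors the index pattern above:

```javascript
//Split a quad face (4 consecutive vertices starting at startIndex)
//into two triangles, matching the loop in loadObj.
function quadIndices(startIndex, reverseWinding = false) {
	return reverseWinding
		? [startIndex + 2, startIndex + 1, startIndex, startIndex + 3, startIndex + 2, startIndex]
		: [startIndex, startIndex + 1, startIndex + 2, startIndex, startIndex + 2, startIndex + 3];
}
```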
Starting with the existing code, this has been expanded a bit. It now properly handles faces that use multiple indices, like:
#square.obj
v -0.5 -0.5 0
v 0.5 -0.5 0
v 0.5 0.5 0
v -0.5 0.5 0
vt 0 0
vt 1 0
vt 1 1
vt 0 1
vn 0 0 -1
f 1/1/1 2/2/1 3/3/1 4/4/1
It can also handle models made of quads by doing some very simple tessellation into 2 triangles, and it allows for variable-size vertices (this info will probably come in handy later if we want to auto-build pipeline vertex buffer descriptions based on the geometry). Admittedly not the prettiest code, but it works. I also made an uploadObj helper.
//wgpu-utils.js
/**
* Loads an .obj file
* @param {GPUDevice} device
* @param {string} url
* @param {{ color?: [number, number, number, number], reverseWinding?: boolean, label?: string, normalizePositions?: boolean | number }} options
* @returns
*/
export async function uploadObj(device, url, options = {}){
const response = await fetch(url);
if (!response.ok) throw new Error(`Could not fetch obj content from ${url}`);
const objText = await response.text();
const objContent = loadObj(objText, { color: options.color, reverseWinding: options.reverseWinding });
const mesh = new Mesh(objContent);
if(options.normalizePositions){
mesh.normalizePositions(typeof options.normalizePositions === "number" ? options.normalizePositions : 1);
}
return {
mesh,
...uploadMesh(device, mesh, {
positionLength: objContent.positionSize,
uvLength: objContent.uvSize,
normalSize: objContent.normalSize,
colorSize: objContent.colorSize,
label: options.label ?? url
})
};
}
This loads the model and then shoves it through uploadMesh. It's not actually a great API in practice, though, because you don't have control over the Mesh. I added the ability to change the scale with normalizePositions, but it probably also makes sense to have rotations, because the teapot model is Z-up. This will probably get reworked into two calls: one to get the Mesh and one to upload it. But we can load the teapot and other models now, which means we can work with more interesting things.
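Mesh's normalizePositions isn't shown in this chapter; as a hypothetical sketch of the idea, it uniformly scales positions so the furthest vertex component maps to the given size:

```javascript
//Sketch of normalizePositions: scale a flat positions array uniformly
//so the largest absolute component becomes +/- size.
function normalizePositions(positions, size = 1) {
	const max = positions.reduce((m, v) => Math.max(m, Math.abs(v)), 0);
	const scale = max === 0 ? 1 : size / max;
	return positions.map(v => v * scale);
}
```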