WebGPU Basics (5) - Loading Models

In WebGPU development you will need to load all kinds of models. This post walks through the full flow for one of them: downloading, parsing, buffering, texturing, and drawing.

Downloading the Model

Here we use the common OBJ format as an example. The first step is loading the file:

//...

private modeObj?: any;
private modelData?: any;

//...

function loadFile(url: string): Promise<string> {
    return new Promise((resolve, reject) => {
        const xhr = new XMLHttpRequest();
        xhr.open('GET', url, true);
        xhr.onload = function () {
            if (xhr.status !== 200) {
                reject(new Error('Load failed: ' + xhr.status + ' / ' + xhr.statusText));
            } else {
                resolve(xhr.responseText);
            }
        };
        xhr.onerror = () => reject(new Error('Network error while loading ' + url));
        xhr.send();
    });
}

modeObj = await loadFile('https://hungking.cc/xxxx.obj');

//...

To discourage blind copy-paste, the example uses the raw XMLHttpRequest API. In real production code you would use fetch, a Promise-based wrapper, or a third-party library such as axios.
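For reference, a minimal fetch-based equivalent might look like this (the loadFileFetch name is my own; the URL is the same placeholder as above):

//...

async function loadFileFetch(url: string): Promise<string> {
    // fetch resolves on any HTTP response, so check ok explicitly.
    const response = await fetch(url);
    if (!response.ok) {
        throw new Error('Load failed: ' + response.status + ' / ' + response.statusText);
    }
    return response.text();
}

modeObj = await loadFileFetch('https://hungking.cc/xxxx.obj');

//...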

Processing the Model

Once loaded, the model text cannot be used directly; it has to be parsed into a data structure the renderer understands. The parsing itself is fairly simple:

//...

async function loadObj(obj: any) {
    if (!obj) throw new Error("Invalid data: " + obj);
    const objText = obj.trim() + "\n";
    // OBJ indices are 1-based, so slot 0 of each pool is a placeholder.
    const V: number[][] = [[]];
    const T: number[][] = [[]];
    const N: number[][] = [[]];
    const vertex: number[][] = [];
    const uv: number[][] = [];
    const normals: number[][] = [];
    const faces: number[] = [];
    let faceIndex = 0;
    const objLineArray = objText.split("\n");

    for (let indexLine = 0; indexLine < objLineArray.length; indexLine++) {
        const line = objLineArray[indexLine].trim().replace(/\s+/g, ' ');
        const lineData = line.split(" ");
        const typeVertexData = lineData.shift();
        if (typeVertexData === "v") {
            V.push(lineData.map(parseFloat));
        } else if (typeVertexData === "vt") {
            // Keep only u and v; a third texture coordinate, if present, is dropped.
            T.push(lineData.map(parseFloat).slice(0, 2));
        } else if (typeVertexData === "vn") {
            N.push(lineData.map(parseFloat));
        } else if (typeVertexData === "f") {
            // Triangulate the polygon as a fan around its last vertex:
            // (i, i + 1, last) for i = 0 .. n - 3.
            for (let index = 0; index < lineData.length - 2; index++) {
                const corners = [index, index + 1, lineData.length - 1];
                for (const corner of corners) {
                    const refs = lineData[corner].split("/").map((s) => parseInt(s));
                    // Negative OBJ indices count back from the elements
                    // defined so far (-1 is the most recent one).
                    if (refs[0] < 0) refs[0] = V.length + refs[0];
                    if (refs[1] < 0) refs[1] = T.length + refs[1];
                    if (refs[2] < 0) refs[2] = N.length + refs[2];

                    vertex.push(V[refs[0]]);
                    if (refs[1]) uv.push(T[refs[1]]);
                    if (refs[2]) normals.push(N[refs[2]]);
                    faces.push(faceIndex);
                    faceIndex++;
                }
            }
        }
    }

    return {
        vertex,
        uv,
        normals,
        faces,
    };
}

//...

modelData = await loadObj(modeObj);

After parsing, these arrays contain nested arrays and need to be flattened; this could also be done directly while parsing:

//...

model_vertex = new Float32Array(modelData.vertex.flat());
model_uv = new Float32Array(modelData.uv.flat());
model_index = new Uint32Array(modelData.faces.flat());

//...
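If you prefer to flatten while parsing instead, push the individual components rather than sub-arrays inside loadObj (a sketch; vertex and uv would then be plain number[] arrays):

//...

// Inside the face loop of loadObj:
vertex.push(...V[refs[0]]);
if (refs[1]) uv.push(...T[refs[1]]);

//...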

Uniform Data

Next comes the uniform data. First create the buffer:

//...

const uniformBuffer = device.createBuffer({
    // Three 4x4 float32 matrices, 64 bytes each.
    size: 64 + 64 + 64,
    usage: GPUBufferUsage.UNIFORM | GPUBufferUsage.COPY_DST
});

//...

let MODELMATRIX = mat4.create();
let VIEWMATRIX = mat4.create();
let PROJMATRIX = mat4.create();

// Camera at (0, 0, 5) looking at the origin, with +Y up.
mat4.lookAt(VIEWMATRIX, [0.0, 0.0, 5.0], [0.0, 0.0, 0.0], [0.0, 1.0, 0.0]);
mat4.identity(PROJMATRIX);
mat4.perspective(PROJMATRIX, fovy, canvas.width / canvas.height, 1, 25);

//...

This could be folded into the camera wrapper from the earlier posts.
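For example, a minimal sketch of such a wrapper (this Camera class is an assumption for illustration, not the actual code from the earlier posts):

//...

import { mat4 } from 'gl-matrix';

type Vec3 = [number, number, number];

class Camera {
    view = mat4.create();
    proj = mat4.create();

    constructor(
        public eye: Vec3,
        public target: Vec3,
        public fovy: number,
        public aspect: number,
        public near = 1,
        public far = 25,
    ) {
        this.update();
    }

    // Recompute both matrices after any parameter change.
    update() {
        mat4.lookAt(this.view, this.eye, this.target, [0, 1, 0]);
        mat4.perspective(this.proj, this.fovy, this.aspect, this.near, this.far);
    }
}

// Usage: replaces the loose VIEWMATRIX / PROJMATRIX setup above.
const camera = new Camera([0, 0, 5], [0, 0, 0], fovy, canvas.width / canvas.height);
device.queue.writeBuffer(uniformBuffer, 0, camera.proj as Float32Array);
device.queue.writeBuffer(uniformBuffer, 64, camera.view as Float32Array);

//...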

Then come the vertex, UV, and index buffers:

//...

const vertexBuffer = device.createBuffer({
    size: model_vertex.byteLength,
    usage: GPUBufferUsage.VERTEX | GPUBufferUsage.COPY_DST,
    mappedAtCreation: true
});

new Float32Array(vertexBuffer.getMappedRange()).set(model_vertex);

vertexBuffer.unmap();

const uvBuffer = device.createBuffer({
    size: model_uv.byteLength,
    usage: GPUBufferUsage.VERTEX | GPUBufferUsage.COPY_DST,
    mappedAtCreation: true
});

new Float32Array(uvBuffer.getMappedRange()).set(model_uv);

uvBuffer.unmap();

const indexBuffer = device.createBuffer({
    size: model_index.byteLength,
    usage: GPUBufferUsage.INDEX | GPUBufferUsage.COPY_DST,
    mappedAtCreation: true
});

new Uint32Array(indexBuffer.getMappedRange()).set(model_index);
indexBuffer.unmap();

//...

The Pipeline

Next, set up the render pipeline:

const pipeline = device.createRenderPipeline({
    // "layout" is required by the current spec; "auto" derives it from the shaders.
    layout: "auto",
    vertex: {
        module: device.createShaderModule({
            code: shaderVertexCode,
        }),
        entryPoint: "main",
        buffers: [{
            arrayStride: 4 * 3,
            attributes: [{
                shaderLocation: 0,
                format: "float32x3",
                offset: 0
            }]
        },
        {
            arrayStride: 4 * 2,
            attributes: [{
                shaderLocation: 1,
                format: "float32x2",
                offset: 0
            }]
        }
        ]
    },
    fragment: {
        module: device.createShaderModule({
            code: shaderFragmentCode,
        }),
        entryPoint: "main",
        targets: [{
            format: format,
        },],
    },
    primitive: {
        topology: "triangle-list",
        frontFace: "ccw",
        cullMode: "back"
    },
    depthStencil: {
        format: "depth24plus",
        depthWriteEnabled: true,
        depthCompare: "less"
    }
});
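The original doesn't show the shader source. Here is a minimal sketch of what shaderVertexCode and shaderFragmentCode could look like, matching the attribute locations, bindings, and uniform layout used in this post (the struct and variable names are assumptions):

//...

const shaderVertexCode = /* wgsl */ `
struct Uniforms {
    proj : mat4x4<f32>,
    view : mat4x4<f32>,
    model : mat4x4<f32>,
};
@group(0) @binding(0) var<uniform> uniforms : Uniforms;

struct VSOut {
    @builtin(position) position : vec4<f32>,
    @location(0) uv : vec2<f32>,
};

@vertex
fn main(@location(0) pos : vec3<f32>, @location(1) uv : vec2<f32>) -> VSOut {
    var out : VSOut;
    out.position = uniforms.proj * uniforms.view * uniforms.model * vec4<f32>(pos, 1.0);
    out.uv = uv;
    return out;
}
`;

const shaderFragmentCode = /* wgsl */ `
@group(0) @binding(1) var texSampler : sampler;
@group(0) @binding(2) var tex : texture_2d<f32>;

@fragment
fn main(@location(0) uv : vec2<f32>) -> @location(0) vec4<f32> {
    return textureSample(tex, texSampler, uv);
}
`;

//...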

Bind Group

//...

const uniformBindGroup = device.createBindGroup({
    layout: pipeline.getBindGroupLayout(0),
    entries: [{
        binding: 0,
        resource: {
            buffer: uniformBuffer,
            offset: 0,
            size: 64 + 64 + 64
        }
    },
    ]
});

//...

Textures

So far we have only loaded the model itself, with no texture. Loading a texture works like this:

//...

let img: any = new Image();
// Set crossOrigin before src so the image is fetched with CORS enabled.
img.crossOrigin = 'anonymous';
img.src = 'https://hungking.cc/xxxxxx.png';
await img.decode();

const imageBitmap = await createImageBitmap(img);

const sampler = device.createSampler({
    minFilter: 'linear',
    magFilter: 'linear',
    addressModeU: 'repeat',
    addressModeV: 'repeat'
});

const texture = device.createTexture({
    size: [imageBitmap.width, imageBitmap.height, 1],
    format: 'rgba8unorm',
    usage: GPUTextureUsage.TEXTURE_BINDING |
        GPUTextureUsage.COPY_DST |
        GPUTextureUsage.RENDER_ATTACHMENT
});

device.queue.copyExternalImageToTexture(
    { source: imageBitmap },
    { texture: texture },
    [imageBitmap.width, imageBitmap.height]
);

The sampler can be tuned as needed, for example:

const sampler = device.createSampler({
    //...
    mipmapFilter: "nearest",
    //...
});
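Note that mipmapFilter only takes effect if the texture actually has mip levels (mipLevelCount greater than 1 in createTexture). WebGPU does not generate mip chains automatically, so you would have to produce the levels yourself, for example with a small render or compute pass.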

Bind the sampler and texture into the bind group:

const uniformBindGroup = device.createBindGroup({
    //...
    entries: [
        //...
        {
            binding: 1,
            resource: sampler
        },
        {
            binding: 2,
            resource: texture.createView()
        }
    ]
});

The Queue

All subsequent operations are submitted through the queue, for example:

//...

// Matrix layout in the uniform buffer: proj at 0, view at 64, model at 128.
device.queue.writeBuffer(uniformBuffer, 0, PROJMATRIX);
device.queue.writeBuffer(uniformBuffer, 64, VIEWMATRIX);
device.queue.writeBuffer(uniformBuffer, 64 + 64, MODELMATRIX);

//...

commandEncoder and Draw

Next come the commandEncoder (a design borrowed from Apple's Metal), the render pass itself, and finally the interaction. All of these were covered briefly in earlier posts, so I won't repeat them; the lx / ly angles in the snippet below come from that interaction code.
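Since the interaction code is elided here, this is a minimal pointer-handling sketch that could produce lx and ly (the names and scale factor are assumptions):

//...

let lx = 0, ly = 0;
let dragging = false;

canvas.addEventListener('pointerdown', () => { dragging = true; });
canvas.addEventListener('pointerup', () => { dragging = false; });
canvas.addEventListener('pointermove', (e: PointerEvent) => {
    if (!dragging) { lx = 0; ly = 0; return; }
    // Per-frame incremental rotation, scaled down from pixel deltas.
    ly = e.movementX * 0.01;
    lx = e.movementY * 0.01;
});

//...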

//...

mat4.rotateY(MODELMATRIX, MODELMATRIX, ly);
mat4.rotateX(MODELMATRIX, MODELMATRIX, lx);
device.queue.writeBuffer(uniformBuffer, 64 + 64, MODELMATRIX);

const commandEncoder = device.createCommandEncoder();
textureView = context.getCurrentTexture().createView();
renderPassDescription.colorAttachments[0].view = textureView;

const renderPass = commandEncoder.beginRenderPass(renderPassDescription);

renderPass.setPipeline(pipeline);
renderPass.setVertexBuffer(0, vertexBuffer);
renderPass.setVertexBuffer(1, uvBuffer);
renderPass.setIndexBuffer(indexBuffer, "uint32");
renderPass.setBindGroup(0, uniformBindGroup);
renderPass.drawIndexed(model_index.length);
renderPass.end();

device.queue.submit([commandEncoder.finish()]);

//...

Under the current standard, some parameters have been deprecated or replaced, for example:

//...

let textureView = context.getCurrentTexture().createView();

let depthTexture = device.createTexture({
    size: [canvas.clientWidth * devicePixelRatio, canvas.clientHeight * devicePixelRatio, 1],
    format: "depth24plus",
    usage: GPUTextureUsage.RENDER_ATTACHMENT
});

//...

const renderPassDescription = {
    colorAttachments: [{
        view: textureView,
        // loadValue (deprecated) has been replaced by clearValue + loadOp.
        clearValue: { r: 0.5, g: 0.5, b: 0.5, a: 1.0 },
        loadOp: "clear",
        storeOp: "store",
    }],
    depthStencilAttachment: {
        view: depthTexture.createView(),
        // depthLoadValue (deprecated) has been replaced by depthClearValue + depthLoadOp.
        depthClearValue: 1.0,
        depthLoadOp: "clear",
        depthStoreOp: "store",
        // stencilLoadValue (deprecated) was replaced by stencilClearValue + stencilLoadOp;
        // with a depth-only format such as "depth24plus", the stencil fields are omitted.
    }
};

//...

Regarding stencilLoadValue, stencilStoreOp, and stencilLoadOp in renderPassDescription: you can keep using them for compatibility with today's stable browsers, but it is better to adapt to what each browser actually supports; that is not demonstrated here.

Note: the methods and properties in these examples may change at any time. For up-to-date usage, refer to the official W3C documentation.

Preview of the result (I just grabbed a random 🐸 screenshot):

[Image: the loaded model]

code enjoy! 🦖🦖🦖

Author: indeex

Link: https://indeex.club

Copyright belongs to the author. For commercial reproduction, please contact the author for authorization; for non-commercial reproduction, please credit the source.
