WebGPU基础(二)-旋转的立方体
According to the Khronos meetings, WebGPU is scheduled to ship version 1.0 in Q2 2022, and WGSL is maturing rapidly.
Going by how web standards have historically been adopted, it could in theory take 5 to 7 years after release before most developers devoted to React/Vue embrace it. But hardware keeps improving, the Web industry keeps growing, TypeScript adoption keeps rising, and the metaverse concept is everywhere. Above all, many developers who once built Flash-style games with WebGL started work on WebGPU engines early on, so that timeline should shrink dramatically, which would make this year the year of WebGPU.
So what can WebGPU do? In theory, almost anything: mining on servers, AAA-quality games, product showcases in e-commerce, physics and chemistry simulations in education, scientific modeling and demonstrations, building the metaverse, and so on. If you can imagine it, you can build it.
For developers worn down by 996/007 schedules, this technology may not be the best option, but it is certainly the fastest, and it can greatly cut development time and labor cost.
What follows are notes from my own usage. As the standard keeps evolving, the code will keep changing and may stop working at any moment, but the underlying theory and ideas stay the same.
W3C WebGPU spec: WebGPU official site
W3C WGSL spec: WGSL official site
A library that uses WebGPU: Babylon official site
I have written related notes before, compiled from the official docs of the time, Apple's developer blog, and various other examples. Because the official API changed frequently and drastically back then, even a simple example went through many revisions; the figures reference a foreign blog from that period. The final revised example is here: WebGPU Basics
Device detection
if (!this.canvas) {
  message.error("Canvas element not found", 10);
  throw new Error("Canvas element not found");
}
if (!navigator.gpu) {
  message.error("WebGPU is not supported on this device", 10);
  throw new Error("WebGPU is not supported on this device");
}
this.adapter = await navigator.gpu.requestAdapter({
  powerPreference: "high-performance", // high-performance mode; the other option is "low-power". Laptops or machines with only integrated graphics should prefer low-power, to keep the system from forcibly dropping the device
}) || undefined;
if (!this.adapter) {
  message.error("Failed to get a GPU adapter", 10);
  throw new Error("Failed to get a GPU adapter");
}
this.device = await this.adapter.requestDevice();
if (!this.device) {
  message.error("GPU device not found", 10);
  throw new Error("GPU device not found");
}
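The powerPreference option above is only a hint to the browser. As a sketch of the advice in the comment, here is a hypothetical helper (the name `pickPowerPreference` and its inputs are my own assumptions, not part of the WebGPU API):

```typescript
// The two values the WebGPU spec defines for GPURequestAdapterOptions.powerPreference.
type PowerPreference = "low-power" | "high-performance";

// Hypothetical helper: prefer "low-power" on battery or on machines with only
// integrated graphics, so the OS is less likely to force-drop the device.
function pickPowerPreference(hasDiscreteGpu: boolean, onBattery: boolean): PowerPreference {
  if (!hasDiscreteGpu || onBattery) {
    return "low-power";
  }
  return "high-performance";
}
```

How you detect a discrete GPU or battery state is up to the host application; the browser makes the final adapter choice either way.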
Canvas configuration
this.context = this.canvas.getContext("webgpu");
this.format = this.context?.getPreferredFormat(this.adapter);
this.context?.configure({
device: this.device, format: this.format, size: this.size
});
getPreferredFormat returns the adapter's preferred texture format
configure sets up the canvas texture
Vertex shader
For shader details, see the docs:
struct Uniforms {
modelViewProjectionMatrix : mat4x4<f32>;
};
@binding(0) @group(0) var<uniform> uniforms : Uniforms;
struct VertexOutput {
@builtin(position) Position : vec4<f32>;
@location(0) fragUV : vec2<f32>;
@location(1) fragPosition: vec4<f32>;
};
@stage(vertex)
fn main(@location(0) position : vec4<f32>,
@location(1) uv : vec2<f32>) -> VertexOutput {
var output : VertexOutput;
output.Position = uniforms.modelViewProjectionMatrix * position;
output.fragUV = uv;
output.fragPosition = 0.5 * (position + vec4<f32>(1.0, 1.0, 1.0, 1.0));
return output;
}
Fragment shader
@stage(fragment)
fn main(@location(0) fragUV: vec2<f32>,
@location(1) fragPosition: vec4<f32>) -> @location(0) vec4<f32> {
return fragPosition;
}
VerticesBuffer
Buffer operations; see the official docs for details:
const verticesBuffer = this.device?.createBuffer({
size: cubeVertexArray.byteLength,
usage: GPUBufferUsage.VERTEX,
mappedAtCreation: true,
});
new Float32Array(verticesBuffer!.getMappedRange()).set(cubeVertexArray);
verticesBuffer?.unmap();
this.verticesBuffer = verticesBuffer;
Pipeline
You can use the default configuration or configure it yourself as needed; see the official docs:
const pipeline = this.device?.createRenderPipeline({
vertex: {
module: this.device.createShaderModule({ code: vertShaderCode }),
entryPoint: 'main',
buffers: [
{
arrayStride: cubeVertexSize,
attributes: [
{
shaderLocation: 0,
offset: cubePositionOffset,
format: "float32x4",
},
{
shaderLocation: 1,
offset: cubeUVOffset,
format: "float32x2",
}
]
}
]
},
fragment: {
module: this.device.createShaderModule({ code: fragShaderCode }),
entryPoint: 'main',
targets: [
{
format: this.format,
}
]
},
primitive: {
topology: "triangle-list",
cullMode: "back",
},
depthStencil: {
depthWriteEnabled: true,
depthCompare: "less",
format: "depth24plus",
}
});
this.pipeline = pipeline;
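The arrayStride and offsets in the vertex buffer layout must match how cubeVertexArray interleaves its attributes. As a sketch, assuming each vertex is a vec4 position followed by a vec2 UV (the real cubeVertexArray may interleave more attributes, such as a color), the constants would be derived like this:

```typescript
const FLOAT_BYTES = 4; // byte size of one f32

// Hypothetical per-vertex layout: position vec4<f32>, then uv vec2<f32>.
const cubePositionOffset = 0;                 // position starts at byte 0
const cubeUVOffset = 4 * FLOAT_BYTES;         // uv follows 4 position floats: byte 16
const cubeVertexSize = (4 + 2) * FLOAT_BYTES; // stride per vertex: 24 bytes
```

If the offsets or stride disagree with the actual array layout, the GPU will read garbage positions and UVs without raising an error, so it is worth deriving them from one place.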
Texture
Texture configuration; see the docs for details:
const depthTexture = this.device?.createTexture({
size: this.size,
format: "depth24plus",
usage: GPUTextureUsage.RENDER_ATTACHMENT,
});
There are many formats; see GPUTextureFormat for the full list
BindGroup resource binding
Resources are usually bound in groups; docs:
const matrixSize = 4 * 16; // a mat4x4<f32> is 16 floats = 64 bytes
const offset = 256; // offsets must be 256-byte aligned
const uniformBufferSize = offset + matrixSize;
const uniformBuffer = this.device!.createBuffer({
size: uniformBufferSize,
usage: GPUBufferUsage.UNIFORM | GPUBufferUsage.COPY_DST,
});
this.uniformBuffer = uniformBuffer;
const uniformBindGroup = this.device?.createBindGroup({
layout: pipeline!.getBindGroupLayout(0),
entries: [
{
binding: 0,
resource: {
buffer: uniformBuffer,
offset: 0,
size: matrixSize,
}
}
]
});
this.uniformBindGroup = uniformBindGroup;
const uniformBindGroup2 = this.device?.createBindGroup({
layout: pipeline!.getBindGroupLayout(0),
entries: [
{
binding: 0,
resource: {
buffer: uniformBuffer,
offset: offset,
size: matrixSize,
}
}
]
});
this.uniformBindGroup2 = uniformBindGroup2;
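The 256 above comes from WebGPU's minimum uniform buffer offset alignment (minUniformBufferOffsetAlignment, 256 bytes by default): every binding's offset into a shared buffer must be a multiple of it, even though a mat4x4&lt;f32&gt; only occupies 64 bytes. A small sketch of computing aligned offsets (the helper name `alignTo` is my own):

```typescript
// Round `size` up to the next multiple of `alignment`.
function alignTo(size: number, alignment: number): number {
  return Math.ceil(size / alignment) * alignment;
}

const matrixSize = 4 * 16;                       // 64 bytes for a mat4x4<f32>
const alignedOffset = alignTo(matrixSize, 256);  // second matrix starts at byte 256
```

Packing both matrices into one buffer at aligned offsets is why a single uniformBuffer can back two bind groups.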
const renderPassDescriptor: any = {
colorAttachments: [
{
view: undefined,
clearValue: { r: 0.5, g: 0.5, b: 0.5, a: 1.0 },
loadOp: "clear",
storeOp: "store",
}
],
depthStencilAttachment: {
view: depthTexture!.createView(),
depthClearValue: 1.0,
depthStoreOp: "store",
depthLoadOp: "clear",
}
};
this.renderPassDescriptor = renderPassDescriptor;
const projectionMatrix = mat4.create();
this.projectionMatrix = projectionMatrix;
const aspect = this.canvas.clientWidth / this.canvas.clientHeight;
mat4.perspective(projectionMatrix, (2 * Math.PI) / 5, aspect, 1, 100.0);
Matrix transforms
getTransformationMatrix() {
const modelMatrix = mat4.create();
mat4.translate(modelMatrix, modelMatrix, vec3.fromValues(-2, 0, 0));
this.modelMatrix = modelMatrix;
const modelMatrix2 = mat4.create();
mat4.translate(modelMatrix2, modelMatrix2, vec3.fromValues(2, 0, 0));
this.modelMatrix2 = modelMatrix2;
const modelViewProjectionMatrix = mat4.create();
this.modelViewProjectionMatrix = modelViewProjectionMatrix;
const modelViewProjectionMatrix2 = mat4.create();
this.modelViewProjectionMatrix2 = modelViewProjectionMatrix2;
const viewMatrix = mat4.create();
mat4.translate(viewMatrix, viewMatrix, vec3.fromValues(0, 0, -7));
this.viewMatrix = viewMatrix;
const tmpMat4 = mat4.create();
const tmpMat42 = mat4.create();
this.tmpMat4 = tmpMat4;
this.tmpMat42 = tmpMat42;
}
Updating the matrices
updateTransfromationMatrix(){
const now = Date.now() / 1000;
mat4.rotate(this.tmpMat4, this.modelMatrix, 1, vec3.fromValues(Math.sin(now), Math.cos(now), 0));
mat4.rotate(this.tmpMat42, this.modelMatrix2, 1, vec3.fromValues(Math.cos(now), Math.sin(now), 0));
mat4.multiply(this.modelViewProjectionMatrix, this.viewMatrix, this.tmpMat4);
mat4.multiply(this.modelViewProjectionMatrix, this.projectionMatrix, this.modelViewProjectionMatrix);
mat4.multiply(this.modelViewProjectionMatrix2, this.viewMatrix, this.tmpMat42);
mat4.multiply(this.modelViewProjectionMatrix2, this.projectionMatrix, this.modelViewProjectionMatrix2);
}
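gl-matrix stores matrices in column-major order, and mat4.multiply(out, a, b) computes a times b, so the two chained multiplies above produce projection x view x model for each cube. A self-contained sketch of the same column-major 4x4 multiply, without gl-matrix:

```typescript
// Multiply two 4x4 matrices stored column-major in flat 16-element arrays,
// returning a * b (matching gl-matrix's mat4.multiply argument order).
function mat4Multiply(a: number[], b: number[]): number[] {
  const out = new Array<number>(16).fill(0);
  for (let col = 0; col < 4; col++) {
    for (let row = 0; row < 4; row++) {
      for (let k = 0; k < 4; k++) {
        // Column-major: element (row, col) lives at index col * 4 + row.
        out[col * 4 + row] += a[k * 4 + row] * b[col * 4 + k];
      }
    }
  }
  return out;
}
```

The multiplication order matters: swapping view and projection would transform vertices into the wrong space, which is why the code applies the view matrix first and the projection matrix last.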
Rendering
Drive updates and rendering with requestAnimationFrame:
render = () => {
this.updateTransfromationMatrix();
this.device?.queue.writeBuffer(this.uniformBuffer, 0, this.modelViewProjectionMatrix.buffer, this.modelViewProjectionMatrix.byteOffset, this.modelViewProjectionMatrix.byteLength);
this.device?.queue.writeBuffer(this.uniformBuffer, 256, this.modelViewProjectionMatrix2.buffer, this.modelViewProjectionMatrix2.byteOffset, this.modelViewProjectionMatrix2.byteLength);
this.renderPassDescriptor.colorAttachments[0].view = this.context?.getCurrentTexture().createView();
const commandEncoder: any = this.device?.createCommandEncoder();
const passEncoder = commandEncoder?.beginRenderPass(this.renderPassDescriptor);
passEncoder?.setPipeline(this.pipeline);
passEncoder?.setVertexBuffer(0, this.verticesBuffer);
passEncoder?.setBindGroup(0, this.uniformBindGroup);
passEncoder?.draw(cubeVertexCount, 1, 0, 0);
passEncoder?.setBindGroup(0, this.uniformBindGroup2);
passEncoder?.draw(cubeVertexCount, 1, 0, 0);
passEncoder?.end();
this.device?.queue.submit([commandEncoder.finish()]);
requestAnimationFrame(this.render);
}
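The two writeBuffer calls upload each 64-byte matrix at byte offsets 0 and 256, matching the bind group offsets configured earlier. A sketch of how the two matrices sit inside the shared uniform buffer, using a plain ArrayBuffer as a stand-in for the GPU buffer:

```typescript
// Stand-in for the uniform buffer: a 256-byte-aligned slot plus one mat4x4<f32>.
const uniformData = new ArrayBuffer(256 + 64);

// Views over each matrix slot, mirroring the writeBuffer offsets 0 and 256.
const matrixA = new Float32Array(uniformData, 0, 16);
const matrixB = new Float32Array(uniformData, 256, 16);

matrixA.fill(1); // stands in for modelViewProjectionMatrix
matrixB.fill(2); // stands in for modelViewProjectionMatrix2
```

Because both bind groups reference the same buffer at different offsets, one pair of writeBuffer calls per frame is enough to animate both cubes.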
Live demo:

code enjoy! 🐙🐙🐙🕸
Author: indeex
Copyright belongs to the author. Please contact the author for authorization before commercial reposting; for non-commercial reposting, please credit the source.