GPU Web 2024 09 04
Chair: CW
Scribe: KR
Location: Google Meet
- Administrivia
- CTS Update
- [M0] Add usage to GPUTextureViewDescriptor #4746 (issue #4426)
- [M0] What happens on the device timeline after GPUDevice.destroy is called? #4294
- [M0] More strongly push developers toward correct use of GPUAdapter #4784
- [editorial] [timebox] Rename image copies -> texel copies #4838 (issue #4573)
- Hardware feature levels (and compat) #4656
- Compatibility Mode: Support OpenGLES devices with zero SSBOs and image uniforms in vertex, fragment stages #4721
- [M1] Feature detection for HDR canvas #4828
- Agenda for next meeting
Attendance
- Apple
- Mike Wyrzykowski
- Myles C. Maxfield
- Google
- Brandon Jones
- Corentin Wallez
- Geoff Lang
- Gregg Tavares
- Kai Ninomiya
- Ken Russell
- Stephen White
- Microsoft
- Rafael Cintron
- Mozilla
- Jim Blandy
- Kelsey Gilbert
- Nvidia
- Anders Leino
- Mehmet Oguz Derin
- Xiaoshen X
Administrivia
- Add items to the F2F agenda!
- Not ray tracing please :)
- M0: 4 WGSL issues and 3 API issues
CTS Update
- GT: major work on texture tests. textureGather seems broken almost everywhere. Doesn't work on Mac AMD, Mac Intel, Pixel 6, …
- MM: can we remove it from the spec?
- GT: it works with 2D and RGBA8_unorm. Everywhere else, not sure. Offsets are also a problem - sometimes they work, sometimes not, depending on the device.
- KG: looking forward to the errata report.
- MM: Apple Silicon?
- GT: works there. One quirk: textureSampleLevel with NEAREST transitions at 0.517 instead of 0.5.
- CW: need to figure out what to do with this, e.g. filing bugs against everyone.
[M0] Add usage to GPUTextureViewDescriptor #4746 (issue #4426)
- Need to decide on the pattern for specifying that the view usage is inherited from the texture usage.
- BJ: generally this is the case where you don't define a specific view usage. How does this translate down to the native API? If I say 0 in view usages, is that analogous to inheriting the definition from the texture, or is it an error, like on the texture side? On the native side, NONE is like inheriting the texture's usage. But on the JS side, funneling both "undefined" and 0 down separately is difficult.
- BJ: my proposal: in dictionary for TextureViewDescriptor: default for usage is 0, and that's treated as INHERIT. You don't have a way to say I want zero usages, but that's useless anyway and we'd need a separate error for it anyway.
- BJ: alternative is "undefined" and 0 are separate on the JS side; need more validation rules then. No perceived benefit.
- KG: sure (sounds fine)
- Don't think there's a real argument that the native behavior of "no usages" is compelling
- MW: thumbs up
- Consensus: default to 0, make it mean inherited!
- KN: there was another thread on the PR
- BJ: if you only specify STORAGE usage and specify a view format that can't do STORAGE, that view format can't be used. Validate at texture creation?
- KG: think the consensus was that that was an error.
- BJ: not sure we talked about that and got consensus on it.
- KG: if there's something more to discuss here can we write it down in the bug
- BJ: behavior in the PR (which doesn't address this) is fine, you can use it as-is and you won't get into an invalid state. Just can't make texture with a view that won't be used down the road. If we want to tighten it up, can discuss that as a separate topic to this one.
- CW: problem was that it wasn't clear what to look at in the PR. Let's try to do better at that. Here there's agreement on everything except this one issue. Let's try to land something in the spec.
- BJ: the reason this didn't come up in the agenda, this was basically resolved with a non-normative note for warnings. We talked about it on the pull request, came to some resolution.
- CW: procedurally it's a validation error; we can add TODOs to the spec that have to be resolved before CR, and let's tightly scope agenda items for the next meeting.
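- For reference, a minimal sketch of what the agreed defaulting could look like from script, assuming the `usage` member lands on GPUTextureViewDescriptor roughly as discussed (the exact spec text is still pending in #4746):

```ts
// Sketch only: assumes GPUTextureViewDescriptor gains a `usage` member
// defaulting to 0, where 0 means "inherit the texture's usage".
const adapter = await navigator.gpu.requestAdapter();
const device = await adapter!.requestDevice();

const texture = device.createTexture({
  size: [256, 256],
  format: "rgba8unorm",
  usage: GPUTextureUsage.RENDER_ATTACHMENT | GPUTextureUsage.TEXTURE_BINDING,
});

// No usage given: defaults to 0, so the view inherits
// RENDER_ATTACHMENT | TEXTURE_BINDING from the texture.
const inheritedView = texture.createView();

// Explicit subset: this view can only be used as a sampled texture.
const sampledOnlyView = texture.createView({
  usage: GPUTextureUsage.TEXTURE_BINDING,
});
```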
[M0] What happens on the device timeline after GPUDevice.destroy is called? #4294
- JB: nice to have a summary of where you landed.
- KN: my question was: can you render to a canvas and then destroy the device? 2 questions. 1) If you destroy the device before it finishes, will it display? 2) If you destroy after rendering, will the canvas stay on the screen, or will it be cleared? Discussed in the past. If you destroy, it'll finish what you rendered, and canvas contents will be preserved. Works for us in Chromium because we allocate the canvas's texture outside the canvas, but can imagine has impl complexity for others. We have a reftest already for this. Would like to enhance the test - do a big draw, destroy the device.
- JB: that's what you meant by keeping #2599.
- KN: yes. 1) mapping and destroying; 2) rendering to canvas and destroying. We decided destroying cancels the map. Only relevant thing is the canvas.
- JB: our general feeling: wasn't our intention to revisit #2599. Only thing we're concerned about, preventing absurd things like creating buffers on the destroyed device. Destroy pending state -> ack that work's still ongoing, but prevent device from being used by further operations. How to spec, leave up to the editors.
- KN: did land the destroyStarted change. Spec as-is is what I described in keeping the original decision.
- JB: OK, then maybe no further work here.
- KN: agree
- MW: thumbs up
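- As a reference for the behavior being kept, a sketch of the reftest pattern KN describes, with `device` and an onscreen `canvas` assumed to be set up already:

```ts
declare const device: GPUDevice;          // assumed: obtained via requestDevice()
declare const canvas: HTMLCanvasElement;  // assumed: an onscreen canvas

// Sketch: render to the canvas, then destroy the device. Per the retained
// #2599 decision, the submitted work still completes and the canvas keeps
// its contents; any outstanding mapAsync() would be cancelled instead.
const context = canvas.getContext("webgpu") as GPUCanvasContext;
context.configure({ device, format: navigator.gpu.getPreferredCanvasFormat() });

const encoder = device.createCommandEncoder();
const pass = encoder.beginRenderPass({
  colorAttachments: [{
    view: context.getCurrentTexture().createView(),
    clearValue: { r: 1, g: 0, b: 0, a: 1 },
    loadOp: "clear",
    storeOp: "store",
  }],
});
pass.end();
device.queue.submit([encoder.finish()]);

// Destroyed immediately after submit: the red clear should still be displayed.
device.destroy();
```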
- KG: want as few states as possible. Nice to have device.destroy transition immediately on the client. Other thing, deliberately drop the canvas and then destroy the device. Device destruction happens before presentation to HTML page. Do we take that partially- or completely-rendered back buffer as a fully rendered frame, if device went invalid during that rAF?
- JB: that's the #2599 issue?
- KG: don't think so.
- KN: I get the concern. We have a case like this in our impl. Our backbuffer's a WebGPU texture, then later we copy into the canvas, rather than rendering directly into the canvas. If the copy happens after the destroy, won't work - src texture isn't there any more. Spec has non-normative notes about this.
- KG: don't care that much, just worrying we're adding complexity that we'll regret.
- JB: worth going to some effort to offer predictable behavior. Want API to behave similarly regardless of how quickly the GPU finishes its work.
- KG: that's understood.
- KN: there's a "wait for the queue to finish" part in our spec. Alternative is, assuming we want consistent behavior: once you destroy the device, the canvas goes blank. Then doesn't matter when it happens.
- CW: isn't that problematic if the device gets lost?
- KG: in the case of a lost canvas you try your best to preserve the content. But if you try to preserve during device destruction, may get a different result than that.
- GT: if you draw something and then wait 10 seconds and then destroy, would expect to see the previous draw.
- BJ: think it depends on whether you've called getCurrentTexture or not. Fetched a new texture? We'll clear the canvas and process things.
- KN: think spec says you'll get the last thing you rendered.
- CW: submitted.
- KN/BJ: yes.
- KG: queue submitted? OK. Problem is: queue submission doesn't put things on the screen. Presentation to HTML document does. That's a different step.
- CW: rewinding: textures returned by getCurrentTexture on most browsers are "externally allocated" from the WebGPU implementation. Lent to the WebGPU impl for the duration of getCurrentTexture until end of rAF / next getCurrentTexture. If you bind that to a texture, queue.submit, device.destroy - pipeline of commands to GPU process will do the IOSurface render before destruction. Then end of rAF will display it.
- KN: does presentation involve the device? If so, can't do it. Think it's nice behavior - smooths over displaying incomplete stuff because of device loss. Have to finish the work you just submitted. I'm interested in this idea - that getCurrentTexture removes the canvas back buffer, and end of rAF, if the device is still alive, takes the texture and puts it on the canvas.
- KN: not sure if we need to clear the canvas or not. Presentation would fail. What happens? 1) retain previous frame 2) get a blank one, because you called getCurrentTexture. Both are sound, both are nicer than what I've been trying to propose.
- CW: talked a lot. Kai you're volunteering to write a new issue about this?
- KR: personal opinion, I like the canvas going blank. It's simpler. With device loss it's pretty much impossible to preserve the canvas's contents.
- More discussion.
- BJ: for later: think it's important for developers to have a guaranteed way to render one frame. As long as there's a consistent way to do this (and we do, through ImageBitmap), just need to document that well.
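- A possible shape of the "guaranteed frame" pattern BJ mentions, sketched under the assumption that snapshotting the canvas into an ImageBitmap before destroying the device is the documented route (`device`, `canvas`, and `displayCanvas` are hypothetical names for things assumed to exist already):

```ts
declare const device: GPUDevice;                // assumed: the rendering device
declare const canvas: HTMLCanvasElement;        // assumed: the WebGPU canvas just rendered to
declare const displayCanvas: HTMLCanvasElement; // assumed: a separate 2D canvas for display

// Sketch: capture the rendered frame so it survives independently of how
// canvas presentation interacts with device.destroy().
await device.queue.onSubmittedWorkDone();

// The ImageBitmap is decoupled from the WebGPU device.
const bitmap = await createImageBitmap(canvas);

device.destroy();

// The captured frame can still be shown, e.g. via a 2D canvas.
const ctx2d = displayCanvas.getContext("2d") as CanvasRenderingContext2D;
ctx2d.drawImage(bitmap, 0, 0);
```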
[M0] More strongly push developers toward correct use of GPUAdapter #4784
- KG: Moz doesn't really care, at least not to an M0 degree.
- CW: does anyone, for M0?
- Consensus: no.
- Issue closed!
[editorial] [timebox] Rename image copies -> texel copies #4838 (issue #4573)
- CW: useful for people learning the spec to have names which make sense.
- KG: IMO a couple of the names aren't better than what currently exist. Think only solution is for me to review the PR. My preference, leave as is.
- GT: is the plan to break native apps?
- CW: yes. Multiple times in the next 6 months.
- Defer this.
Hardware feature levels (and compat) #4656
- CW: concerns that we won't know the feature levels, and whether they'll be linear. We hope they can be.
- KG: brainstorming result: one thing we think is probable: we'll want to sunset Compat & remove it, and not have to deal with it in the API any more, before the # of feature levels becomes onerous enough for Compat to be part of feature levels. So it's tolerable to have Compat not be baked into feature levels. Then we have more time to figure out feature levels. So move forward with Compat and add feature levels later.
- JB: thumbs up
- BJ: discussed with Jim and Kai on editors calls. There are aspects of feature levels proposal that are cleaner when Compat's not in it. Makes that discussion easier if they're separate. Understand the urge to make them the same. If Compat's not part of feature levels, then FLs can be more in terms of which features / limits they enable, and not which ones they take away - an odd concept.
- KN: I'm trying to phrase this so that Compat isn't phrased that way. Phrase it in terms of other feature levels. Prose-wise, exactly the same as other feature levels. Ultimately doesn't matter that much - like this proposal, think it's well founded.
- CW: one reason I like Compat being in there is that it's a forcing function to have feature levels in the API. Understand we want time to look at FLs. They're a bit at odds. Concern is that we'll take a long time to get to FLs if we don't do it now.
- KG: concerned we'll take a long time to get to Compat if we have to do FLs at the same time.
- CW: think FLs will be 2 years out if we don't do them now.
- KG: that's a project management issue. Think we should force ourselves to do it by making it a priority because of the design constraint.
- JB: my concern: not that I see something wrong in Kai's proposal - simply that we're extrapolating too far forward in the future. More comfortable having the convo when we want to introduce specific FLs with specific motivations. Maybe we will take Kai's current proposal at the time. But want to make the decision in the specific context - don't think Compat …
- KR: Would urge not dropping this now; we were so close to consensus 3 meetings ago, then backtracked. We shouldn't be afraid of putting in the capability now - we know we will use it in the future and we will be able to gather future WebGPU features. At that point we'll just be able to group the features. Knowing the mechanism now, we can put the structure in place for it. It would be a lot harder in the future; we regretted not doing it in WebGL because we never retrofitted it. It worked well for D3D as well. Think Compat is a good motivating use case for it.
- JB: why is it harder to add this later?
- KR: Because later when WebGPU has broader use, a change like this needs to be added to every impl before devs can use it. It takes time to get it to stable browsers so devs can use it reliably. That's why we never added it to WebGL. It would be ideal to have it in WebGPU from day one.
- KG: Was actually going to suggest a WebGL 1.1 to the WebGL group as a different context type. So it's just a project management issue.
- JB: So you want implementation to reject strings for feature levels that they don't know?
- KR: Just with compat in from the start.
- Discussion
- KN: don't think we need to be this worried about how forward looking this is. All we're adding in this PR is a dictionary member featureLevel, and an enum "core" (or "compat", later). All we're saying is that we'll continue using that dictionary member for whatever we do in the future. What that looks like isn't relevant - not adding that yet. Just adding featureLevel as a dictionary member to requestAdapter.
- JB: this specific spec change is not one that I object to.
- MW: WebKit likes this proposal - with WebKit not implementing Compat mode, it's important that we aren't out of spec compliance. We like Kai's proposal to automatically upgrade Compat to Core. Interested in avoiding a situation where Compat goes out and Safari is non-conformant because it doesn't implement the Compat validations.
- CW: that's always been a design constraint on Compat.
- KN: I can pare down the PR and remove everything about what it'll look like. Make it less scary? Leave the old thing as an historical artifact.
- MM: looking through the PR, there are mobile / desktop progressions you're envisioning - Apple Silicon has roots in both, how would you handle that?
- KN: didn't think that through that carefully. Just wrote down some ideas because they're nontrivial.
- BJ: feels there are a lot of folks scared by Kai putting a graph in the PR. Just a convo between Kai and me about what this might look like. Not a statement of future plans. Don't want folks to be too hung up on that.
- JB: understood, have brought that up. Bool -> enum change doesn't bother me at all. Just the entrainment of a lot of other discussion - not as clear to me as it seems to Ken and Kai. So, when Kai says let's trim back the PR, that's what I want to hear.
- KG: concrete concern is that we haven't thought well enough about this. If we had more time to investigate feature levels we would come up with a different API. Concerned that this might not be the right API and that we don't know it yet.
- CW: concretely you're against the FL enum now because it might not be the correct solution?
- MM: understand that concern, maybe we'd want multiple enums in the future. If we take this proposal and later add a bunch of values: some will auto-upgrade to others but some won't - will the spec have some big table and checkboxes about what'll get auto-upgraded to what?
- KN: yes, in the proposal - but we don't have to agree on that right now. W.r.t. extra complexity: don't think that's a problem. We're just adding a dictionary member. Not saying we can't put another dictionary in it later. Can change it from an enum to a union with an enum. To address Ken's point: what we want right now is a dictionary member that we validate and that the browser will break on if it isn't supported.
- CW: let's split off to a different PR. Nice also to have a comparison of the mechanisms in the various APIs, for precedent. (MM: thumbs up.)
- KR: actually think we made a lot of progress today. Let's look at the slimmer PR.
- KG: don't want to relay false hope that we're close to being there.
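- A sketch of the pared-down shape KN describes - a single `featureLevel` dictionary member with a "core" value ("compat" possibly later) - with the exact name and rejection behavior still up to the slimmed-down PR:

```ts
// Sketch only: assumes GPURequestAdapterOptions gains a `featureLevel` member.
// Unknown strings would be rejected by the browser, and (per MW) implementations
// without Compat support could transparently upgrade "compat" requests to core.
const adapter = await navigator.gpu.requestAdapter({
  featureLevel: "core",
});
if (!adapter) {
  throw new Error("requested feature level not available");
}
const device = await adapter.requestDevice();
```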
Compatibility Mode: Support OpenGLES devices with zero SSBOs and image uniforms in vertex, fragment stages #4721
[M1] Feature detection for HDR canvas #4828
Agenda for next meeting
- Meet next week
- Agenda