wgpu_hal/lib.rs
1//! A cross-platform unsafe graphics abstraction.
2//!
3//! This crate defines a set of traits abstracting over modern graphics APIs,
4//! with implementations ("backends") for Vulkan, Metal, Direct3D, and GL.
5//!
6//! `wgpu-hal` is a spiritual successor to
7//! [gfx-hal](https://github.com/gfx-rs/gfx), but with reduced scope, and
8//! oriented towards WebGPU implementation goals. It has no overhead for
9//! validation or tracking, and the API translation overhead is kept to the bare
10//! minimum by the design of WebGPU. This API can be used for resource-demanding
11//! applications and engines.
12//!
13//! The `wgpu-hal` crate's main design choices:
14//!
15//! - Our traits are meant to be *portable*: proper use
16//! should get equivalent results regardless of the backend.
17//!
18//! - Our traits' contracts are *unsafe*: implementations perform minimal
19//! validation, if any, and incorrect use will often cause undefined behavior.
20//! This allows us to minimize the overhead we impose over the underlying
21//! graphics system. If you need safety, the [`wgpu-core`] crate provides a
22//! safe API for driving `wgpu-hal`, implementing all necessary validation,
23//! resource state tracking, and so on. (Note that `wgpu-core` is designed for
24//! use via FFI; the [`wgpu`] crate provides more idiomatic Rust bindings for
25//! `wgpu-core`.) Or, you can do your own validation.
26//!
27//! - In the same vein, returned errors *only cover cases the user can't
28//! anticipate*, like running out of memory or losing the device. Any errors
29//! that the user could reasonably anticipate are their responsibility to
30//! avoid. For example, `wgpu-hal` returns no error for mapping a buffer that's
31//! not mappable: as the buffer creator, the user should already know if they
32//! can map it.
33//!
34//! - We use *static dispatch*. The traits are not
35//! generally object-safe. You must select a specific backend type
36//! like [`vulkan::Api`] or [`metal::Api`], and then use that
37//! according to the main traits, or call backend-specific methods.
38//!
39//! - We use *idiomatic Rust parameter passing*,
40//! taking objects by reference, returning them by value, and so on,
41//! unlike `wgpu-core`, which refers to objects by ID.
42//!
43//! - We map buffer contents *persistently*. This means that the buffer can
44//! remain mapped on the CPU while the GPU reads or writes to it. You must
45//! explicitly indicate when data might need to be transferred between CPU and
46//! GPU, if [`Device::map_buffer`] indicates that this is necessary.
47//!
//! - You must record *explicit barriers* between different usages of a
//! resource. For example, if a buffer is written to by a compute
//! shader, and then used as an index buffer in a draw call, you
//! must use [`CommandEncoder::transition_buffers`] between those two
//! operations; a sketch of this appears after this list.
53//!
54//! - Pipeline layouts are *explicitly specified* when setting bind groups.
55//! Incompatible layouts disturb groups bound at higher indices.
56//!
57//! - The API *accepts collections as iterators*, to avoid forcing the user to
58//! store data in particular containers. The implementation doesn't guarantee
59//! that any of the iterators are drained, unless stated otherwise by the
60//! function documentation. For this reason, we recommend that iterators don't
61//! do any mutating work.
62//!
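//! For example, a hedged sketch of the explicit-barrier rule above (the exact
//! shape of [`BufferBarrier`] and its usage-transition field is abbreviated
//! here; consult the type definitions before relying on it):
//!
//! ```ignore
//! unsafe {
//!     // The compute pass wrote `buffer` through a storage binding; the next
//!     // draw reads it as an index buffer, so a barrier must sit in between.
//!     encoder.transition_buffers(core::iter::once(BufferBarrier {
//!         buffer: &buffer,
//!         usage: StateTransition {
//!             from: wgt::BufferUses::STORAGE_READ_WRITE,
//!             to: wgt::BufferUses::INDEX,
//!         },
//!     }));
//! }
//! ```
//!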
63//! Unfortunately, `wgpu-hal`'s safety requirements are not fully documented.
64//! Ideally, all trait methods would have doc comments setting out the
65//! requirements users must meet to ensure correct and portable behavior. If you
66//! are aware of a specific requirement that a backend imposes that is not
67//! ensured by the traits' documented rules, please file an issue. Or, if you are
68//! a capable technical writer, please file a pull request!
69//!
70//! [`wgpu-core`]: https://crates.io/crates/wgpu-core
71//! [`wgpu`]: https://crates.io/crates/wgpu
72//! [`vulkan::Api`]: vulkan/struct.Api.html
73//! [`metal::Api`]: metal/struct.Api.html
74//!
75//! ## Primary backends
76//!
77//! The `wgpu-hal` crate has full-featured backends implemented on the following
78//! platform graphics APIs:
79//!
80//! - Vulkan, available on Linux, Android, and Windows, using the [`ash`] crate's
81//! Vulkan bindings. It's also available on macOS, if you install [MoltenVK].
82//!
83//! - Metal on macOS, using the [`metal`] crate's bindings.
84//!
85//! - Direct3D 12 on Windows, using the [`windows`] crate's bindings.
86//!
87//! [`ash`]: https://crates.io/crates/ash
88//! [MoltenVK]: https://github.com/KhronosGroup/MoltenVK
89//! [`metal`]: https://crates.io/crates/metal
90//! [`windows`]: https://crates.io/crates/windows
91//!
92//! ## Secondary backends
93//!
94//! The `wgpu-hal` crate has a partial implementation based on the following
95//! platform graphics API:
96//!
97//! - The GL backend is available anywhere OpenGL, OpenGL ES, or WebGL are
98//! available. See the [`gles`] module documentation for details.
99//!
100//! [`gles`]: gles/index.html
101//!
102//! You can see what capabilities an adapter is missing by checking the
103//! [`DownlevelCapabilities`][tdc] in [`ExposedAdapter::capabilities`], available
104//! from [`Instance::enumerate_adapters`].
105//!
106//! The API is generally designed to fit the primary backends better than the
107//! secondary backends, so the latter may impose more overhead.
108//!
109//! [tdc]: wgt::DownlevelCapabilities
110//!
111//! ## Traits
112//!
113//! The `wgpu-hal` crate defines a handful of traits that together
114//! represent a cross-platform abstraction for modern GPU APIs.
115//!
116//! - The [`Api`] trait represents a `wgpu-hal` backend. It has no methods of its
117//! own, only a collection of associated types.
118//!
119//! - [`Api::Instance`] implements the [`Instance`] trait. [`Instance::init`]
120//! creates an instance value, which you can use to enumerate the adapters
121//! available on the system. For example, [`vulkan::Api::Instance::init`][Ii]
122//! returns an instance that can enumerate the Vulkan physical devices on your
123//! system.
124//!
125//! - [`Api::Adapter`] implements the [`Adapter`] trait, representing a
126//! particular device from a particular backend. For example, a Vulkan instance
127//! might have a Lavapipe software adapter and a GPU-based adapter.
128//!
129//! - [`Api::Device`] implements the [`Device`] trait, representing an active
130//! link to a device. You get a device value by calling [`Adapter::open`], and
131//! then use it to create buffers, textures, shader modules, and so on.
132//!
133//! - [`Api::Queue`] implements the [`Queue`] trait, which you use to submit
134//! command buffers to a given device.
135//!
136//! - [`Api::CommandEncoder`] implements the [`CommandEncoder`] trait, which you
137//! use to build buffers of commands to submit to a queue. This has all the
138//! methods for drawing and running compute shaders, which is presumably what
139//! you're here for.
140//!
141//! - [`Api::Surface`] implements the [`Surface`] trait, which represents a
142//! swapchain for presenting images on the screen, via interaction with the
143//! system's window manager.
144//!
145//! The [`Api`] trait has various other associated types like [`Api::Buffer`] and
146//! [`Api::Texture`] that represent resources the rest of the interface can
147//! operate on, but these generally do not have their own traits.
148//!
149//! [Ii]: Instance::init
150//!
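//! As a minimal, hedged sketch of how these pieces fit together (error
//! handling is reduced to `expect`, and the descriptor, feature, and limit
//! values are assumed to be appropriate for the adapter):
//!
//! ```ignore
//! use wgpu_hal as hal;
//! use wgpu_types as wgt;
//!
//! unsafe fn open_first_adapter<A: hal::Api>(
//!     desc: &hal::InstanceDescriptor,
//! ) -> hal::OpenDevice<A> {
//!     unsafe {
//!         let instance = A::Instance::init(desc).expect("instance creation failed");
//!         let exposed = instance
//!             .enumerate_adapters(None)
//!             .into_iter()
//!             .next()
//!             .expect("no adapter available");
//!         exposed
//!             .adapter
//!             .open(
//!                 wgt::Features::empty(),
//!                 &wgt::Limits::downlevel_defaults(),
//!                 &wgt::MemoryHints::Performance,
//!             )
//!             .expect("failed to open device")
//!     }
//! }
//! ```
//!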
151//! ## Validation is the calling code's responsibility, not `wgpu-hal`'s
152//!
153//! As much as possible, `wgpu-hal` traits place the burden of validation,
154//! resource tracking, and state tracking on the caller, not on the trait
155//! implementations themselves. Anything which can reasonably be handled in
156//! backend-independent code should be. A `wgpu_hal` backend's sole obligation is
157//! to provide portable behavior, and report conditions that the calling code
158//! can't reasonably anticipate, like device loss or running out of memory.
159//!
160//! The `wgpu` crate collection is intended for use in security-sensitive
161//! applications, like web browsers, where the API is available to untrusted
162//! code. This means that `wgpu-core`'s validation is not simply a service to
163//! developers, to be provided opportunistically when the performance costs are
164//! acceptable and the necessary data is ready at hand. Rather, `wgpu-core`'s
165//! validation must be exhaustive, to ensure that even malicious content cannot
166//! provoke and exploit undefined behavior in the platform's graphics API.
167//!
168//! Because graphics APIs' requirements are complex, the only practical way for
169//! `wgpu` to provide exhaustive validation is to comprehensively track the
170//! lifetime and state of all the resources in the system. Implementing this
171//! separately for each backend is infeasible; effort would be better spent
172//! making the cross-platform validation in `wgpu-core` legible and trustworthy.
173//! Fortunately, the requirements are largely similar across the various
174//! platforms, so cross-platform validation is practical.
175//!
176//! Some backends have specific requirements that aren't practical to foist off
177//! on the `wgpu-hal` user. For example, properly managing macOS Objective-C or
178//! Microsoft COM reference counts is best handled by using appropriate pointer
179//! types within the backend.
180//!
181//! A desire for "defense in depth" may suggest performing additional validation
182//! in `wgpu-hal` when the opportunity arises, but this must be done with
183//! caution. Even experienced contributors infer the expectations their changes
184//! must meet by considering not just requirements made explicit in types, tests,
185//! assertions, and comments, but also those implicit in the surrounding code.
186//! When one sees validation or state-tracking code in `wgpu-hal`, it is tempting
187//! to conclude, "Oh, `wgpu-hal` checks for this, so `wgpu-core` needn't worry
188//! about it - that would be redundant!" The responsibility for exhaustive
189//! validation always rests with `wgpu-core`, regardless of what may or may not
190//! be checked in `wgpu-hal`.
191//!
192//! To this end, any "defense in depth" validation that does appear in `wgpu-hal`
193//! for requirements that `wgpu-core` should have enforced should report failure
194//! via the `unreachable!` macro, because problems detected at this stage always
195//! indicate a bug in `wgpu-core`.
196//!
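//! As a hedged illustration of that rule (the specific condition checked here
//! is made up for the example):
//!
//! ```ignore
//! // `wgpu-core` must validate push-constant alignment before we are called,
//! // so reaching this branch indicates a bug in `wgpu-core`, not in the caller.
//! if offset_bytes % wgt::PUSH_CONSTANT_ALIGNMENT != 0 {
//!     unreachable!("unaligned push-constant offset should have been rejected by wgpu-core");
//! }
//! ```
//!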
197//! ## Debugging
198//!
199//! Most of the information on the wiki [Debugging wgpu Applications][wiki-debug]
200//! page still applies to this API, with the exception of API tracing/replay
201//! functionality, which is only available in `wgpu-core`.
202//!
203//! [wiki-debug]: https://github.com/gfx-rs/wgpu/wiki/Debugging-wgpu-Applications
204
205#![no_std]
206#![cfg_attr(docsrs, feature(doc_cfg, doc_auto_cfg))]
207#![allow(
208 // this happens on the GL backend, where it is both thread safe and non-thread safe in the same code.
209 clippy::arc_with_non_send_sync,
210 // We don't use syntax sugar where it's not necessary.
211 clippy::match_like_matches_macro,
212 // Redundant matching is more explicit.
213 clippy::redundant_pattern_matching,
214 // Explicit lifetimes are often easier to reason about.
215 clippy::needless_lifetimes,
216 // No need for defaults in the internal types.
217 clippy::new_without_default,
218 // Matches are good and extendable, no need to make an exception here.
219 clippy::single_match,
220 // Push commands are more regular than macros.
221 clippy::vec_init_then_push,
222 // We unsafe impl `Send` for a reason.
223 clippy::non_send_fields_in_send_ty,
224 // TODO!
225 clippy::missing_safety_doc,
226 // It gets in the way a lot and does not prevent bugs in practice.
227 clippy::pattern_type_mismatch,
228)]
229#![warn(
230 clippy::alloc_instead_of_core,
231 clippy::ptr_as_ptr,
232 clippy::std_instead_of_alloc,
233 clippy::std_instead_of_core,
234 trivial_casts,
235 trivial_numeric_casts,
236 unsafe_op_in_unsafe_fn,
237 unused_extern_crates,
238 unused_qualifications
239)]
240
241extern crate alloc;
242extern crate wgpu_types as wgt;
243// Each of these backends needs `std` in some fashion; usually `std::thread` functions.
244#[cfg(any(dx12, gles_with_std, metal, vulkan))]
245#[macro_use]
246extern crate std;
247
248/// DirectX12 API internals.
249#[cfg(dx12)]
250pub mod dx12;
251/// GLES API internals.
252#[cfg(gles)]
253pub mod gles;
254/// Metal API internals.
255#[cfg(metal)]
256pub mod metal;
257/// A dummy API implementation.
258// TODO(https://github.com/gfx-rs/wgpu/issues/7120): this should have a cfg
259pub mod noop;
260/// Vulkan API internals.
261#[cfg(vulkan)]
262pub mod vulkan;
263
264pub mod auxil;
265pub mod api {
266 #[cfg(dx12)]
267 pub use super::dx12::Api as Dx12;
268 #[cfg(gles)]
269 pub use super::gles::Api as Gles;
270 #[cfg(metal)]
271 pub use super::metal::Api as Metal;
272 pub use super::noop::Api as Noop;
273 #[cfg(vulkan)]
274 pub use super::vulkan::Api as Vulkan;
275}
276
277mod dynamic;
278
279pub(crate) use dynamic::impl_dyn_resource;
280pub use dynamic::{
281 DynAccelerationStructure, DynAcquiredSurfaceTexture, DynAdapter, DynBindGroup,
282 DynBindGroupLayout, DynBuffer, DynCommandBuffer, DynCommandEncoder, DynComputePipeline,
283 DynDevice, DynExposedAdapter, DynFence, DynInstance, DynOpenDevice, DynPipelineCache,
284 DynPipelineLayout, DynQuerySet, DynQueue, DynRenderPipeline, DynResource, DynSampler,
285 DynShaderModule, DynSurface, DynSurfaceTexture, DynTexture, DynTextureView,
286};
287
288#[allow(unused)]
289use alloc::boxed::Box;
290use alloc::{borrow::Cow, string::String, sync::Arc, vec::Vec};
291use core::{
292 borrow::Borrow,
293 error::Error,
294 fmt,
295 num::NonZeroU32,
296 ops::{Range, RangeInclusive},
297 ptr::NonNull,
298};
299
300use bitflags::bitflags;
301use parking_lot::Mutex;
302use thiserror::Error;
303use wgt::WasmNotSendSync;
304
// - Vertex + Fragment
// - Compute
// - Task + Mesh + Fragment
308pub const MAX_CONCURRENT_SHADER_STAGES: usize = 3;
309pub const MAX_ANISOTROPY: u8 = 16;
310pub const MAX_BIND_GROUPS: usize = 8;
311pub const MAX_VERTEX_BUFFERS: usize = 16;
312pub const MAX_COLOR_ATTACHMENTS: usize = 8;
313pub const MAX_MIP_LEVELS: u32 = 16;
314/// Size of a single occlusion/timestamp query, when copied into a buffer, in bytes.
315/// cbindgen:ignore
316pub const QUERY_SIZE: wgt::BufferAddress = 8;
317
318pub type Label<'a> = Option<&'a str>;
319pub type MemoryRange = Range<wgt::BufferAddress>;
320pub type FenceValue = u64;
321#[cfg(supports_64bit_atomics)]
322pub type AtomicFenceValue = core::sync::atomic::AtomicU64;
323#[cfg(not(supports_64bit_atomics))]
324pub type AtomicFenceValue = portable_atomic::AtomicU64;
325
326/// A callback to signal that wgpu is no longer using a resource.
327#[cfg(any(gles, vulkan))]
328pub type DropCallback = Box<dyn FnOnce() + Send + Sync + 'static>;
329
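/// Owns a [`DropCallback`] and invokes it exactly once when dropped, letting
/// an external owner know that wgpu is no longer using the resource.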
330#[cfg(any(gles, vulkan))]
331pub struct DropGuard {
332 callback: Option<DropCallback>,
333}
334
335#[cfg(all(any(gles, vulkan), any(native, Emscripten)))]
336impl DropGuard {
337 fn from_option(callback: Option<DropCallback>) -> Option<Self> {
338 callback.map(|callback| Self {
339 callback: Some(callback),
340 })
341 }
342}
343
344#[cfg(any(gles, vulkan))]
345impl Drop for DropGuard {
346 fn drop(&mut self) {
347 if let Some(cb) = self.callback.take() {
348 (cb)();
349 }
350 }
351}
352
353#[cfg(any(gles, vulkan))]
354impl fmt::Debug for DropGuard {
355 fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
356 f.debug_struct("DropGuard").finish()
357 }
358}
359
360#[derive(Clone, Debug, PartialEq, Eq, Error)]
361pub enum DeviceError {
362 #[error("Out of memory")]
363 OutOfMemory,
364 #[error("Device is lost")]
365 Lost,
366 #[error("Creation of a resource failed for a reason other than running out of memory.")]
367 ResourceCreationFailed,
368 #[error("Unexpected error variant (driver implementation is at fault)")]
369 Unexpected,
370}
371
372#[allow(dead_code)] // may be unused on some platforms
373#[cold]
374fn hal_usage_error<T: fmt::Display>(txt: T) -> ! {
375 panic!("wgpu-hal invariant was violated (usage error): {txt}")
376}
377
378#[allow(dead_code)] // may be unused on some platforms
379#[cold]
380fn hal_internal_error<T: fmt::Display>(txt: T) -> ! {
381 panic!("wgpu-hal ran into a preventable internal error: {txt}")
382}
383
384#[derive(Clone, Debug, Eq, PartialEq, Error)]
385pub enum ShaderError {
386 #[error("Compilation failed: {0:?}")]
387 Compilation(String),
388 #[error(transparent)]
389 Device(#[from] DeviceError),
390}
391
392#[derive(Clone, Debug, Eq, PartialEq, Error)]
393pub enum PipelineError {
394 #[error("Linkage failed for stage {0:?}: {1}")]
395 Linkage(wgt::ShaderStages, String),
396 #[error("Entry point for stage {0:?} is invalid")]
397 EntryPoint(naga::ShaderStage),
398 #[error(transparent)]
399 Device(#[from] DeviceError),
400 #[error("Pipeline constant error for stage {0:?}: {1}")]
401 PipelineConstants(wgt::ShaderStages, String),
402}
403
404#[derive(Clone, Debug, Eq, PartialEq, Error)]
405pub enum PipelineCacheError {
406 #[error(transparent)]
407 Device(#[from] DeviceError),
408}
409
410#[derive(Clone, Debug, Eq, PartialEq, Error)]
411pub enum SurfaceError {
412 #[error("Surface is lost")]
413 Lost,
414 #[error("Surface is outdated, needs to be re-created")]
415 Outdated,
416 #[error(transparent)]
417 Device(#[from] DeviceError),
418 #[error("Other reason: {0}")]
419 Other(&'static str),
420}
421
422/// Error occurring while trying to create an instance, or create a surface from an instance;
423/// typically relating to the state of the underlying graphics API or hardware.
424#[derive(Clone, Debug, Error)]
425#[error("{message}")]
426pub struct InstanceError {
427 /// These errors are very platform specific, so do not attempt to encode them as an enum.
428 ///
429 /// This message should describe the problem in sufficient detail to be useful for a
430 /// user-to-developer “why won't this work on my machine” bug report, and otherwise follow
431 /// <https://rust-lang.github.io/api-guidelines/interoperability.html#error-types-are-meaningful-and-well-behaved-c-good-err>.
432 message: String,
433
434 /// Underlying error value, if any is available.
435 #[source]
436 source: Option<Arc<dyn Error + Send + Sync + 'static>>,
437}
438
439impl InstanceError {
440 #[allow(dead_code)] // may be unused on some platforms
441 pub(crate) fn new(message: String) -> Self {
442 Self {
443 message,
444 source: None,
445 }
446 }
447 #[allow(dead_code)] // may be unused on some platforms
448 pub(crate) fn with_source(message: String, source: impl Error + Send + Sync + 'static) -> Self {
449 Self {
450 message,
451 source: Some(Arc::new(source)),
452 }
453 }
454}
455
456pub trait Api: Clone + fmt::Debug + Sized {
457 type Instance: DynInstance + Instance<A = Self>;
458 type Surface: DynSurface + Surface<A = Self>;
459 type Adapter: DynAdapter + Adapter<A = Self>;
460 type Device: DynDevice + Device<A = Self>;
461
462 type Queue: DynQueue + Queue<A = Self>;
463 type CommandEncoder: DynCommandEncoder + CommandEncoder<A = Self>;
464
465 /// This API's command buffer type.
466 ///
467 /// The only thing you can do with `CommandBuffer`s is build them
468 /// with a [`CommandEncoder`] and then pass them to
469 /// [`Queue::submit`] for execution, or destroy them by passing
470 /// them to [`CommandEncoder::reset_all`].
471 ///
472 /// [`CommandEncoder`]: Api::CommandEncoder
473 type CommandBuffer: DynCommandBuffer;
474
475 type Buffer: DynBuffer;
476 type Texture: DynTexture;
477 type SurfaceTexture: DynSurfaceTexture + Borrow<Self::Texture>;
478 type TextureView: DynTextureView;
479 type Sampler: DynSampler;
480 type QuerySet: DynQuerySet;
481
482 /// A value you can block on to wait for something to finish.
483 ///
484 /// A `Fence` holds a monotonically increasing [`FenceValue`]. You can call
485 /// [`Device::wait`] to block until a fence reaches or passes a value you
486 /// choose. [`Queue::submit`] can take a `Fence` and a [`FenceValue`] to
487 /// store in it when the submitted work is complete.
488 ///
489 /// Attempting to set a fence to a value less than its current value has no
490 /// effect.
491 ///
492 /// Waiting on a fence returns as soon as the fence reaches *or passes* the
493 /// requested value. This implies that, in order to reliably determine when
494 /// an operation has completed, operations must finish in order of
495 /// increasing fence values: if a higher-valued operation were to finish
496 /// before a lower-valued operation, then waiting for the fence to reach the
497 /// lower value could return before the lower-valued operation has actually
498 /// finished.
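    ///
    /// A hedged sketch of the intended pattern (error handling is reduced to
    /// `expect`, and `device`, `queue`, and `cmd_buf` are assumed to exist):
    ///
    /// ```ignore
    /// unsafe {
    ///     let mut fence = device.create_fence().expect("fence creation failed");
    ///     let signal_value = 1; // must exceed any value previously signaled on `fence`
    ///     queue
    ///         .submit(&[&cmd_buf], &[], (&mut fence, signal_value))
    ///         .expect("submit failed");
    ///     // Block until the submission above has finished on the GPU.
    ///     let done = device.wait(&fence, signal_value, u32::MAX).expect("wait failed");
    ///     assert!(done, "timed out");
    /// }
    /// ```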
499 type Fence: DynFence;
500
501 type BindGroupLayout: DynBindGroupLayout;
502 type BindGroup: DynBindGroup;
503 type PipelineLayout: DynPipelineLayout;
504 type ShaderModule: DynShaderModule;
505 type RenderPipeline: DynRenderPipeline;
506 type ComputePipeline: DynComputePipeline;
507 type PipelineCache: DynPipelineCache;
508
509 type AccelerationStructure: DynAccelerationStructure + 'static;
510}
511
512pub trait Instance: Sized + WasmNotSendSync {
513 type A: Api;
514
515 unsafe fn init(desc: &InstanceDescriptor) -> Result<Self, InstanceError>;
516 unsafe fn create_surface(
517 &self,
518 display_handle: raw_window_handle::RawDisplayHandle,
519 window_handle: raw_window_handle::RawWindowHandle,
520 ) -> Result<<Self::A as Api>::Surface, InstanceError>;
521 /// `surface_hint` is only used by the GLES backend targeting WebGL2
522 unsafe fn enumerate_adapters(
523 &self,
524 surface_hint: Option<&<Self::A as Api>::Surface>,
525 ) -> Vec<ExposedAdapter<Self::A>>;
526}
527
528pub trait Surface: WasmNotSendSync {
529 type A: Api;
530
531 /// Configure `self` to use `device`.
532 ///
533 /// # Safety
534 ///
535 /// - All GPU work using `self` must have been completed.
536 /// - All [`AcquiredSurfaceTexture`]s must have been destroyed.
537 /// - All [`Api::TextureView`]s derived from the [`AcquiredSurfaceTexture`]s must have been destroyed.
538 /// - The surface `self` must not currently be configured to use any other [`Device`].
539 unsafe fn configure(
540 &self,
541 device: &<Self::A as Api>::Device,
542 config: &SurfaceConfiguration,
543 ) -> Result<(), SurfaceError>;
544
545 /// Unconfigure `self` on `device`.
546 ///
547 /// # Safety
548 ///
    /// - All GPU work using `self` must have been completed.
550 /// - All [`AcquiredSurfaceTexture`]s must have been destroyed.
551 /// - All [`Api::TextureView`]s derived from the [`AcquiredSurfaceTexture`]s must have been destroyed.
552 /// - The surface `self` must have been configured on `device`.
553 unsafe fn unconfigure(&self, device: &<Self::A as Api>::Device);
554
555 /// Return the next texture to be presented by `self`, for the caller to draw on.
556 ///
557 /// On success, return an [`AcquiredSurfaceTexture`] representing the
558 /// texture into which the caller should draw the image to be displayed on
559 /// `self`.
560 ///
561 /// If `timeout` elapses before `self` has a texture ready to be acquired,
562 /// return `Ok(None)`. If `timeout` is `None`, wait indefinitely, with no
563 /// timeout.
564 ///
565 /// # Using an [`AcquiredSurfaceTexture`]
566 ///
567 /// On success, this function returns an [`AcquiredSurfaceTexture`] whose
568 /// [`texture`] field is a [`SurfaceTexture`] from which the caller can
569 /// [`borrow`] a [`Texture`] to draw on. The [`AcquiredSurfaceTexture`] also
570 /// carries some metadata about that [`SurfaceTexture`].
571 ///
572 /// All calls to [`Queue::submit`] that draw on that [`Texture`] must also
573 /// include the [`SurfaceTexture`] in the `surface_textures` argument.
574 ///
575 /// When you are done drawing on the texture, you can display it on `self`
576 /// by passing the [`SurfaceTexture`] and `self` to [`Queue::present`].
577 ///
578 /// If you do not wish to display the texture, you must pass the
579 /// [`SurfaceTexture`] to [`self.discard_texture`], so that it can be reused
580 /// by future acquisitions.
581 ///
582 /// # Portability
583 ///
584 /// Some backends can't support a timeout when acquiring a texture. On these
585 /// backends, `timeout` is ignored.
586 ///
587 /// # Safety
588 ///
589 /// - The surface `self` must currently be configured on some [`Device`].
590 ///
591 /// - The `fence` argument must be the same [`Fence`] passed to all calls to
592 /// [`Queue::submit`] that used [`Texture`]s acquired from this surface.
593 ///
594 /// - You may only have one texture acquired from `self` at a time. When
595 /// `acquire_texture` returns `Ok(Some(ast))`, you must pass the returned
596 /// [`SurfaceTexture`] `ast.texture` to either [`Queue::present`] or
597 /// [`Surface::discard_texture`] before calling `acquire_texture` again.
598 ///
599 /// [`texture`]: AcquiredSurfaceTexture::texture
600 /// [`SurfaceTexture`]: Api::SurfaceTexture
601 /// [`borrow`]: alloc::borrow::Borrow::borrow
602 /// [`Texture`]: Api::Texture
603 /// [`Fence`]: Api::Fence
604 /// [`self.discard_texture`]: Surface::discard_texture
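    ///
    /// A hedged sketch of one frame (error handling is reduced to `expect`;
    /// `surface`, `queue`, `fence`, and the recording of command buffers are
    /// assumed to be handled elsewhere):
    ///
    /// ```ignore
    /// unsafe {
    ///     match surface
    ///         .acquire_texture(Some(core::time::Duration::from_millis(100)), &fence)
    ///         .expect("acquire failed")
    ///     {
    ///         Some(acquired) => {
    ///             // ... record and submit command buffers drawing to the texture
    ///             // borrowed from `acquired.texture`, listing `&acquired.texture`
    ///             // in `surface_textures` and signaling `fence` ...
    ///             queue.present(&surface, acquired.texture).expect("present failed");
    ///         }
    ///         None => {
    ///             // Timed out waiting for a texture; skip this frame.
    ///         }
    ///     }
    /// }
    /// ```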
605 unsafe fn acquire_texture(
606 &self,
607 timeout: Option<core::time::Duration>,
608 fence: &<Self::A as Api>::Fence,
609 ) -> Result<Option<AcquiredSurfaceTexture<Self::A>>, SurfaceError>;
610
611 /// Relinquish an acquired texture without presenting it.
612 ///
613 /// After this call, the texture underlying [`SurfaceTexture`] may be
614 /// returned by subsequent calls to [`self.acquire_texture`].
615 ///
616 /// # Safety
617 ///
618 /// - The surface `self` must currently be configured on some [`Device`].
619 ///
620 /// - `texture` must be a [`SurfaceTexture`] returned by a call to
621 /// [`self.acquire_texture`] that has not yet been passed to
622 /// [`Queue::present`].
623 ///
624 /// [`SurfaceTexture`]: Api::SurfaceTexture
625 /// [`self.acquire_texture`]: Surface::acquire_texture
626 unsafe fn discard_texture(&self, texture: <Self::A as Api>::SurfaceTexture);
627}
628
629pub trait Adapter: WasmNotSendSync {
630 type A: Api;
631
632 unsafe fn open(
633 &self,
634 features: wgt::Features,
635 limits: &wgt::Limits,
636 memory_hints: &wgt::MemoryHints,
637 ) -> Result<OpenDevice<Self::A>, DeviceError>;
638
639 /// Return the set of supported capabilities for a texture format.
640 unsafe fn texture_format_capabilities(
641 &self,
642 format: wgt::TextureFormat,
643 ) -> TextureFormatCapabilities;
644
645 /// Returns the capabilities of working with a specified surface.
646 ///
647 /// `None` means presentation is not supported for it.
648 unsafe fn surface_capabilities(
649 &self,
650 surface: &<Self::A as Api>::Surface,
651 ) -> Option<SurfaceCapabilities>;
652
653 /// Creates a [`PresentationTimestamp`] using the adapter's WSI.
654 ///
655 /// [`PresentationTimestamp`]: wgt::PresentationTimestamp
656 unsafe fn get_presentation_timestamp(&self) -> wgt::PresentationTimestamp;
657}
658
659/// A connection to a GPU and a pool of resources to use with it.
660///
661/// A `wgpu-hal` `Device` represents an open connection to a specific graphics
662/// processor, controlled via the backend [`Device::A`]. A `Device` is mostly
663/// used for creating resources. Each `Device` has an associated [`Queue`] used
664/// for command submission.
665///
666/// On Vulkan a `Device` corresponds to a logical device ([`VkDevice`]). Other
667/// backends don't have an exact analog: for example, [`ID3D12Device`]s and
668/// [`MTLDevice`]s are owned by the backends' [`wgpu_hal::Adapter`]
669/// implementations, and shared by all [`wgpu_hal::Device`]s created from that
670/// `Adapter`.
671///
672/// A `Device`'s life cycle is generally:
673///
674/// 1) Obtain a `Device` and its associated [`Queue`] by calling
675/// [`Adapter::open`].
676///
677/// Alternatively, the backend-specific types that implement [`Adapter`] often
678/// have methods for creating a `wgpu-hal` `Device` from a platform-specific
679/// handle. For example, [`vulkan::Adapter::device_from_raw`] can create a
680/// [`vulkan::Device`] from an [`ash::Device`].
681///
682/// 1) Create resources to use on the device by calling methods like
683/// [`Device::create_texture`] or [`Device::create_shader_module`].
684///
685/// 1) Call [`Device::create_command_encoder`] to obtain a [`CommandEncoder`],
686/// which you can use to build [`CommandBuffer`]s holding commands to be
687/// executed on the GPU.
688///
689/// 1) Call [`Queue::submit`] on the `Device`'s associated [`Queue`] to submit
690/// [`CommandBuffer`]s for execution on the GPU. If needed, call
691/// [`Device::wait`] to wait for them to finish execution.
692///
693/// 1) Free resources with methods like [`Device::destroy_texture`] or
694/// [`Device::destroy_shader_module`].
695///
696/// 1) Drop the device.
697///
/// [`VkDevice`]: https://registry.khronos.org/vulkan/specs/1.3-extensions/html/vkspec.html#VkDevice
699/// [`ID3D12Device`]: https://learn.microsoft.com/en-us/windows/win32/api/d3d12/nn-d3d12-id3d12device
700/// [`MTLDevice`]: https://developer.apple.com/documentation/metal/mtldevice
701/// [`wgpu_hal::Adapter`]: Adapter
702/// [`wgpu_hal::Device`]: Device
703/// [`vulkan::Adapter::device_from_raw`]: vulkan/struct.Adapter.html#method.device_from_raw
704/// [`vulkan::Device`]: vulkan/struct.Device.html
705/// [`ash::Device`]: https://docs.rs/ash/latest/ash/struct.Device.html
706/// [`CommandBuffer`]: Api::CommandBuffer
707///
708/// # Safety
709///
710/// As with other `wgpu-hal` APIs, [validation] is the caller's
711/// responsibility. Here are the general requirements for all `Device`
712/// methods:
713///
714/// - Any resource passed to a `Device` method must have been created by that
715/// `Device`. For example, a [`Texture`] passed to [`Device::destroy_texture`] must
716/// have been created with the `Device` passed as `self`.
717///
718/// - Resources may not be destroyed if they are used by any submitted command
719/// buffers that have not yet finished execution.
720///
721/// [validation]: index.html#validation-is-the-calling-codes-responsibility-not-wgpu-hals
722/// [`Texture`]: Api::Texture
723pub trait Device: WasmNotSendSync {
724 type A: Api;
725
726 /// Creates a new buffer.
727 ///
728 /// The initial usage is `wgt::BufferUses::empty()`.
729 unsafe fn create_buffer(
730 &self,
731 desc: &BufferDescriptor,
732 ) -> Result<<Self::A as Api>::Buffer, DeviceError>;
733
734 /// Free `buffer` and any GPU resources it owns.
735 ///
736 /// Note that backends are allowed to allocate GPU memory for buffers from
737 /// allocation pools, and this call is permitted to simply return `buffer`'s
738 /// storage to that pool, without making it available to other applications.
739 ///
740 /// # Safety
741 ///
742 /// - The given `buffer` must not currently be mapped.
743 unsafe fn destroy_buffer(&self, buffer: <Self::A as Api>::Buffer);
744
745 /// A hook for when a wgpu-core buffer is created from a raw wgpu-hal buffer.
746 unsafe fn add_raw_buffer(&self, buffer: &<Self::A as Api>::Buffer);
747
748 /// Return a pointer to CPU memory mapping the contents of `buffer`.
749 ///
750 /// Buffer mappings are persistent: the buffer may remain mapped on the CPU
751 /// while the GPU reads or writes to it. (Note that `wgpu_core` does not use
752 /// this feature: when a `wgpu_core::Buffer` is unmapped, the underlying
753 /// `wgpu_hal` buffer is also unmapped.)
754 ///
755 /// If this function returns `Ok(mapping)`, then:
756 ///
757 /// - `mapping.ptr` is the CPU address of the start of the mapped memory.
758 ///
759 /// - If `mapping.is_coherent` is `true`, then CPU writes to the mapped
760 /// memory are immediately visible on the GPU, and vice versa.
761 ///
762 /// # Safety
763 ///
764 /// - The given `buffer` must have been created with the [`MAP_READ`] or
765 /// [`MAP_WRITE`] flags set in [`BufferDescriptor::usage`].
766 ///
767 /// - The given `range` must fall within the size of `buffer`.
768 ///
769 /// - The caller must avoid data races between the CPU and the GPU. A data
770 /// race is any pair of accesses to a particular byte, one of which is a
771 /// write, that are not ordered with respect to each other by some sort of
772 /// synchronization operation.
773 ///
774 /// - If this function returns `Ok(mapping)` and `mapping.is_coherent` is
775 /// `false`, then:
776 ///
777 /// - Every CPU write to a mapped byte followed by a GPU read of that byte
778 /// must have at least one call to [`Device::flush_mapped_ranges`]
779 /// covering that byte that occurs between those two accesses.
780 ///
781 /// - Every GPU write to a mapped byte followed by a CPU read of that byte
782 /// must have at least one call to [`Device::invalidate_mapped_ranges`]
783 /// covering that byte that occurs between those two accesses.
784 ///
785 /// Note that the data race rule above requires that all such access pairs
786 /// be ordered, so it is meaningful to talk about what must occur
787 /// "between" them.
788 ///
789 /// - Zero-sized mappings are not allowed.
790 ///
791 /// - The returned [`BufferMapping::ptr`] must not be used after a call to
792 /// [`Device::unmap_buffer`].
793 ///
794 /// [`MAP_READ`]: wgt::BufferUses::MAP_READ
795 /// [`MAP_WRITE`]: wgt::BufferUses::MAP_WRITE
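    ///
    /// A hedged sketch of a CPU write to a mappable buffer (error handling is
    /// reduced to `expect`; `device`, `buffer`, and `data: &[u8]` are assumed
    /// to exist, with `data` no larger than the buffer):
    ///
    /// ```ignore
    /// unsafe {
    ///     let size = data.len() as wgt::BufferAddress;
    ///     let mapping = device.map_buffer(&buffer, 0..size).expect("map failed");
    ///     core::ptr::copy_nonoverlapping(data.as_ptr(), mapping.ptr.as_ptr(), data.len());
    ///     if !mapping.is_coherent {
    ///         // Make the CPU write visible to the GPU before any GPU read.
    ///         device.flush_mapped_ranges(&buffer, core::iter::once(0..size));
    ///     }
    ///     device.unmap_buffer(&buffer);
    /// }
    /// ```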
796 unsafe fn map_buffer(
797 &self,
798 buffer: &<Self::A as Api>::Buffer,
799 range: MemoryRange,
800 ) -> Result<BufferMapping, DeviceError>;
801
802 /// Remove the mapping established by the last call to [`Device::map_buffer`].
803 ///
804 /// # Safety
805 ///
806 /// - The given `buffer` must be currently mapped.
807 unsafe fn unmap_buffer(&self, buffer: &<Self::A as Api>::Buffer);
808
809 /// Indicate that CPU writes to mapped buffer memory should be made visible to the GPU.
810 ///
811 /// # Safety
812 ///
813 /// - The given `buffer` must be currently mapped.
814 ///
815 /// - All ranges produced by `ranges` must fall within `buffer`'s size.
816 unsafe fn flush_mapped_ranges<I>(&self, buffer: &<Self::A as Api>::Buffer, ranges: I)
817 where
818 I: Iterator<Item = MemoryRange>;
819
820 /// Indicate that GPU writes to mapped buffer memory should be made visible to the CPU.
821 ///
822 /// # Safety
823 ///
824 /// - The given `buffer` must be currently mapped.
825 ///
826 /// - All ranges produced by `ranges` must fall within `buffer`'s size.
827 unsafe fn invalidate_mapped_ranges<I>(&self, buffer: &<Self::A as Api>::Buffer, ranges: I)
828 where
829 I: Iterator<Item = MemoryRange>;
830
831 /// Creates a new texture.
832 ///
833 /// The initial usage for all subresources is `wgt::TextureUses::UNINITIALIZED`.
834 unsafe fn create_texture(
835 &self,
836 desc: &TextureDescriptor,
837 ) -> Result<<Self::A as Api>::Texture, DeviceError>;
838 unsafe fn destroy_texture(&self, texture: <Self::A as Api>::Texture);
839
840 /// A hook for when a wgpu-core texture is created from a raw wgpu-hal texture.
841 unsafe fn add_raw_texture(&self, texture: &<Self::A as Api>::Texture);
842
843 unsafe fn create_texture_view(
844 &self,
845 texture: &<Self::A as Api>::Texture,
846 desc: &TextureViewDescriptor,
847 ) -> Result<<Self::A as Api>::TextureView, DeviceError>;
848 unsafe fn destroy_texture_view(&self, view: <Self::A as Api>::TextureView);
849 unsafe fn create_sampler(
850 &self,
851 desc: &SamplerDescriptor,
852 ) -> Result<<Self::A as Api>::Sampler, DeviceError>;
853 unsafe fn destroy_sampler(&self, sampler: <Self::A as Api>::Sampler);
854
855 /// Create a fresh [`CommandEncoder`].
856 ///
857 /// The new `CommandEncoder` is in the "closed" state.
858 unsafe fn create_command_encoder(
859 &self,
860 desc: &CommandEncoderDescriptor<<Self::A as Api>::Queue>,
861 ) -> Result<<Self::A as Api>::CommandEncoder, DeviceError>;
862
863 /// Creates a bind group layout.
864 unsafe fn create_bind_group_layout(
865 &self,
866 desc: &BindGroupLayoutDescriptor,
867 ) -> Result<<Self::A as Api>::BindGroupLayout, DeviceError>;
868 unsafe fn destroy_bind_group_layout(&self, bg_layout: <Self::A as Api>::BindGroupLayout);
869 unsafe fn create_pipeline_layout(
870 &self,
871 desc: &PipelineLayoutDescriptor<<Self::A as Api>::BindGroupLayout>,
872 ) -> Result<<Self::A as Api>::PipelineLayout, DeviceError>;
873 unsafe fn destroy_pipeline_layout(&self, pipeline_layout: <Self::A as Api>::PipelineLayout);
874
875 #[allow(clippy::type_complexity)]
876 unsafe fn create_bind_group(
877 &self,
878 desc: &BindGroupDescriptor<
879 <Self::A as Api>::BindGroupLayout,
880 <Self::A as Api>::Buffer,
881 <Self::A as Api>::Sampler,
882 <Self::A as Api>::TextureView,
883 <Self::A as Api>::AccelerationStructure,
884 >,
885 ) -> Result<<Self::A as Api>::BindGroup, DeviceError>;
886 unsafe fn destroy_bind_group(&self, group: <Self::A as Api>::BindGroup);
887
888 unsafe fn create_shader_module(
889 &self,
890 desc: &ShaderModuleDescriptor,
891 shader: ShaderInput,
892 ) -> Result<<Self::A as Api>::ShaderModule, ShaderError>;
893 unsafe fn destroy_shader_module(&self, module: <Self::A as Api>::ShaderModule);
894
895 #[allow(clippy::type_complexity)]
896 unsafe fn create_render_pipeline(
897 &self,
898 desc: &RenderPipelineDescriptor<
899 <Self::A as Api>::PipelineLayout,
900 <Self::A as Api>::ShaderModule,
901 <Self::A as Api>::PipelineCache,
902 >,
903 ) -> Result<<Self::A as Api>::RenderPipeline, PipelineError>;
904 #[allow(clippy::type_complexity)]
905 unsafe fn create_mesh_pipeline(
906 &self,
907 desc: &MeshPipelineDescriptor<
908 <Self::A as Api>::PipelineLayout,
909 <Self::A as Api>::ShaderModule,
910 <Self::A as Api>::PipelineCache,
911 >,
912 ) -> Result<<Self::A as Api>::RenderPipeline, PipelineError>;
913 unsafe fn destroy_render_pipeline(&self, pipeline: <Self::A as Api>::RenderPipeline);
914
915 #[allow(clippy::type_complexity)]
916 unsafe fn create_compute_pipeline(
917 &self,
918 desc: &ComputePipelineDescriptor<
919 <Self::A as Api>::PipelineLayout,
920 <Self::A as Api>::ShaderModule,
921 <Self::A as Api>::PipelineCache,
922 >,
923 ) -> Result<<Self::A as Api>::ComputePipeline, PipelineError>;
924 unsafe fn destroy_compute_pipeline(&self, pipeline: <Self::A as Api>::ComputePipeline);
925
926 unsafe fn create_pipeline_cache(
927 &self,
928 desc: &PipelineCacheDescriptor<'_>,
929 ) -> Result<<Self::A as Api>::PipelineCache, PipelineCacheError>;
930 fn pipeline_cache_validation_key(&self) -> Option<[u8; 16]> {
931 None
932 }
933 unsafe fn destroy_pipeline_cache(&self, cache: <Self::A as Api>::PipelineCache);
934
935 unsafe fn create_query_set(
936 &self,
937 desc: &wgt::QuerySetDescriptor<Label>,
938 ) -> Result<<Self::A as Api>::QuerySet, DeviceError>;
939 unsafe fn destroy_query_set(&self, set: <Self::A as Api>::QuerySet);
940 unsafe fn create_fence(&self) -> Result<<Self::A as Api>::Fence, DeviceError>;
941 unsafe fn destroy_fence(&self, fence: <Self::A as Api>::Fence);
942 unsafe fn get_fence_value(
943 &self,
944 fence: &<Self::A as Api>::Fence,
945 ) -> Result<FenceValue, DeviceError>;
946
947 /// Wait for `fence` to reach `value`.
948 ///
949 /// Operations like [`Queue::submit`] can accept a [`Fence`] and a
950 /// [`FenceValue`] to store in it, so you can use this `wait` function
951 /// to wait for a given queue submission to finish execution.
952 ///
    /// The `value` argument must be a value that some operation you have
    /// already submitted to the device is going to store in `fence`. You cannot
955 /// wait for values yet to be submitted. (This restriction accommodates
956 /// implementations like the `vulkan` backend's [`FencePool`] that must
957 /// allocate a distinct synchronization object for each fence value one is
958 /// able to wait for.)
959 ///
960 /// Calling `wait` with a lower [`FenceValue`] than `fence`'s current value
961 /// returns immediately.
962 ///
963 /// Returns `Ok(true)` on success and `Ok(false)` on timeout.
964 ///
965 /// [`Fence`]: Api::Fence
966 /// [`FencePool`]: vulkan/enum.Fence.html#variant.FencePool
967 unsafe fn wait(
968 &self,
969 fence: &<Self::A as Api>::Fence,
970 value: FenceValue,
971 timeout_ms: u32,
972 ) -> Result<bool, DeviceError>;
973
974 /// Start a graphics debugger capture.
975 ///
976 /// # Safety
977 ///
978 /// See [`wgpu::Device::start_graphics_debugger_capture`][api] for more details.
979 ///
980 /// [api]: ../wgpu/struct.Device.html#method.start_graphics_debugger_capture
981 unsafe fn start_graphics_debugger_capture(&self) -> bool;
982
983 /// Stop a graphics debugger capture.
984 ///
985 /// # Safety
986 ///
987 /// See [`wgpu::Device::stop_graphics_debugger_capture`][api] for more details.
988 ///
989 /// [api]: ../wgpu/struct.Device.html#method.stop_graphics_debugger_capture
990 unsafe fn stop_graphics_debugger_capture(&self);
991
992 #[allow(unused_variables)]
993 unsafe fn pipeline_cache_get_data(
994 &self,
995 cache: &<Self::A as Api>::PipelineCache,
996 ) -> Option<Vec<u8>> {
997 None
998 }
999
1000 unsafe fn create_acceleration_structure(
1001 &self,
1002 desc: &AccelerationStructureDescriptor,
1003 ) -> Result<<Self::A as Api>::AccelerationStructure, DeviceError>;
1004 unsafe fn get_acceleration_structure_build_sizes(
1005 &self,
1006 desc: &GetAccelerationStructureBuildSizesDescriptor<<Self::A as Api>::Buffer>,
1007 ) -> AccelerationStructureBuildSizes;
1008 unsafe fn get_acceleration_structure_device_address(
1009 &self,
1010 acceleration_structure: &<Self::A as Api>::AccelerationStructure,
1011 ) -> wgt::BufferAddress;
1012 unsafe fn destroy_acceleration_structure(
1013 &self,
1014 acceleration_structure: <Self::A as Api>::AccelerationStructure,
1015 );
1016 fn tlas_instance_to_bytes(&self, instance: TlasInstance) -> Vec<u8>;
1017
1018 fn get_internal_counters(&self) -> wgt::HalCounters;
1019
1020 fn generate_allocator_report(&self) -> Option<wgt::AllocatorReport> {
1021 None
1022 }
1023}
1024
1025pub trait Queue: WasmNotSendSync {
1026 type A: Api;
1027
1028 /// Submit `command_buffers` for execution on GPU.
1029 ///
1030 /// Update `fence` to `value` when the operation is complete. See
1031 /// [`Fence`] for details.
1032 ///
1033 /// A `wgpu_hal` queue is "single threaded": all command buffers are
1034 /// executed in the order they're submitted, with each buffer able to see
1035 /// previous buffers' results. Specifically:
1036 ///
1037 /// - If two calls to `submit` on a single `Queue` occur in a particular
1038 /// order (that is, they happen on the same thread, or on two threads that
1039 /// have synchronized to establish an ordering), then the first
1040 /// submission's commands all complete execution before any of the second
1041 /// submission's commands begin. All results produced by one submission
1042 /// are visible to the next.
1043 ///
1044 /// - Within a submission, command buffers execute in the order in which they
1045 /// appear in `command_buffers`. All results produced by one buffer are
1046 /// visible to the next.
1047 ///
1048 /// If two calls to `submit` on a single `Queue` from different threads are
1049 /// not synchronized to occur in a particular order, they must pass distinct
1050 /// [`Fence`]s. As explained in the [`Fence`] documentation, waiting for
1051 /// operations to complete is only trustworthy when operations finish in
1052 /// order of increasing fence value, but submissions from different threads
1053 /// cannot determine how to order the fence values if the submissions
1054 /// themselves are unordered. If each thread uses a separate [`Fence`], this
1055 /// problem does not arise.
1056 ///
1057 /// # Safety
1058 ///
1059 /// - Each [`CommandBuffer`][cb] in `command_buffers` must have been created
1060 /// from a [`CommandEncoder`][ce] that was constructed from the
1061 /// [`Device`][d] associated with this [`Queue`].
1062 ///
1063 /// - Each [`CommandBuffer`][cb] must remain alive until the submitted
1064 /// commands have finished execution. Since command buffers must not
1065 /// outlive their encoders, this implies that the encoders must remain
1066 /// alive as well.
1067 ///
1068 /// - All resources used by a submitted [`CommandBuffer`][cb]
1069 /// ([`Texture`][t]s, [`BindGroup`][bg]s, [`RenderPipeline`][rp]s, and so
1070 /// on) must remain alive until the command buffer finishes execution.
1071 ///
1072 /// - Every [`SurfaceTexture`][st] that any command in `command_buffers`
1073 /// writes to must appear in the `surface_textures` argument.
1074 ///
1075 /// - No [`SurfaceTexture`][st] may appear in the `surface_textures`
1076 /// argument more than once.
1077 ///
1078 /// - Each [`SurfaceTexture`][st] in `surface_textures` must be configured
1079 /// for use with the [`Device`][d] associated with this [`Queue`],
1080 /// typically by calling [`Surface::configure`].
1081 ///
1082 /// - All calls to this function that include a given [`SurfaceTexture`][st]
1083 /// in `surface_textures` must use the same [`Fence`].
1084 ///
1085 /// - The [`Fence`] passed as `signal_fence.0` must remain alive until
1086 /// all submissions that will signal it have completed.
1087 ///
1088 /// [`Fence`]: Api::Fence
1089 /// [cb]: Api::CommandBuffer
1090 /// [ce]: Api::CommandEncoder
1091 /// [d]: Api::Device
1092 /// [t]: Api::Texture
1093 /// [bg]: Api::BindGroup
1094 /// [rp]: Api::RenderPipeline
1095 /// [st]: Api::SurfaceTexture
1096 unsafe fn submit(
1097 &self,
1098 command_buffers: &[&<Self::A as Api>::CommandBuffer],
1099 surface_textures: &[&<Self::A as Api>::SurfaceTexture],
1100 signal_fence: (&mut <Self::A as Api>::Fence, FenceValue),
1101 ) -> Result<(), DeviceError>;
1102 unsafe fn present(
1103 &self,
1104 surface: &<Self::A as Api>::Surface,
1105 texture: <Self::A as Api>::SurfaceTexture,
1106 ) -> Result<(), SurfaceError>;
1107 unsafe fn get_timestamp_period(&self) -> f32;
1108}
1109
1110/// Encoder and allocation pool for `CommandBuffer`s.
1111///
1112/// A `CommandEncoder` not only constructs `CommandBuffer`s but also
1113/// acts as the allocation pool that owns the buffers' underlying
1114/// storage. Thus, `CommandBuffer`s must not outlive the
1115/// `CommandEncoder` that created them.
1116///
1117/// The life cycle of a `CommandBuffer` is as follows:
1118///
1119/// - Call [`Device::create_command_encoder`] to create a new
1120/// `CommandEncoder`, in the "closed" state.
1121///
1122/// - Call `begin_encoding` on a closed `CommandEncoder` to begin
1123/// recording commands. This puts the `CommandEncoder` in the
1124/// "recording" state.
1125///
1126/// - Call methods like `copy_buffer_to_buffer`, `begin_render_pass`,
1127/// etc. on a "recording" `CommandEncoder` to add commands to the
1128/// list. (If an error occurs, you must call `discard_encoding`; see
1129/// below.)
1130///
1131/// - Call `end_encoding` on a recording `CommandEncoder` to close the
1132/// encoder and construct a fresh `CommandBuffer` consisting of the
1133/// list of commands recorded up to that point.
1134///
1135/// - Call `discard_encoding` on a recording `CommandEncoder` to drop
1136/// the commands recorded thus far and close the encoder. This is
1137/// the only safe thing to do on a `CommandEncoder` if an error has
1138/// occurred while recording commands.
1139///
1140/// - Call `reset_all` on a closed `CommandEncoder`, passing all the
1141/// live `CommandBuffers` built from it. All the `CommandBuffer`s
1142/// are destroyed, and their resources are freed.
1143///
1144/// # Safety
1145///
1146/// - The `CommandEncoder` must be in the states described above to
1147/// make the given calls.
1148///
1149/// - A `CommandBuffer` that has been submitted for execution on the
1150/// GPU must live until its execution is complete.
1151///
1152/// - A `CommandBuffer` must not outlive the `CommandEncoder` that
1153/// built it.
1154///
/// It is the user's responsibility to meet these requirements. This
/// allows `CommandEncoder` implementations to keep their state
/// tracking to a minimum.
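///
/// A hedged sketch of this life cycle (error handling is reduced to `expect`;
/// `device` and `encoder_desc` are assumed to exist):
///
/// ```ignore
/// unsafe {
///     let mut encoder = device
///         .create_command_encoder(&encoder_desc)
///         .expect("encoder creation failed");
///     encoder.begin_encoding(Some("frame")).expect("begin failed");
///     // ... record copies, render passes, compute passes, and so on ...
///     let cmd_buf = encoder.end_encoding().expect("end failed");
///     // ... submit `cmd_buf` and wait for the submission to finish ...
///     encoder.reset_all(core::iter::once(cmd_buf));
/// }
/// ```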
1158pub trait CommandEncoder: WasmNotSendSync + fmt::Debug {
1159 type A: Api;
1160
1161 /// Begin encoding a new command buffer.
1162 ///
1163 /// This puts this `CommandEncoder` in the "recording" state.
1164 ///
1165 /// # Safety
1166 ///
1167 /// This `CommandEncoder` must be in the "closed" state.
1168 unsafe fn begin_encoding(&mut self, label: Label) -> Result<(), DeviceError>;
1169
1170 /// Discard the command list under construction.
1171 ///
1172 /// If an error has occurred while recording commands, this
1173 /// is the only safe thing to do with the encoder.
1174 ///
1175 /// This puts this `CommandEncoder` in the "closed" state.
1176 ///
1177 /// # Safety
1178 ///
1179 /// This `CommandEncoder` must be in the "recording" state.
1180 ///
1181 /// Callers must not assume that implementations of this
1182 /// function are idempotent, and thus should not call it
1183 /// multiple times in a row.
1184 unsafe fn discard_encoding(&mut self);
1185
1186 /// Return a fresh [`CommandBuffer`] holding the recorded commands.
1187 ///
1188 /// The returned [`CommandBuffer`] holds all the commands recorded
1189 /// on this `CommandEncoder` since the last call to
1190 /// [`begin_encoding`].
1191 ///
1192 /// This puts this `CommandEncoder` in the "closed" state.
1193 ///
1194 /// # Safety
1195 ///
1196 /// This `CommandEncoder` must be in the "recording" state.
1197 ///
1198 /// The returned [`CommandBuffer`] must not outlive this
1199 /// `CommandEncoder`. Implementations are allowed to build
1200 /// `CommandBuffer`s that depend on storage owned by this
1201 /// `CommandEncoder`.
1202 ///
1203 /// [`CommandBuffer`]: Api::CommandBuffer
1204 /// [`begin_encoding`]: CommandEncoder::begin_encoding
1205 unsafe fn end_encoding(&mut self) -> Result<<Self::A as Api>::CommandBuffer, DeviceError>;
1206
1207 /// Reclaim all resources belonging to this `CommandEncoder`.
1208 ///
1209 /// # Safety
1210 ///
1211 /// This `CommandEncoder` must be in the "closed" state.
1212 ///
1213 /// The `command_buffers` iterator must produce all the live
1214 /// [`CommandBuffer`]s built using this `CommandEncoder` --- that
1215 /// is, every extant `CommandBuffer` returned from `end_encoding`.
1216 ///
1217 /// [`CommandBuffer`]: Api::CommandBuffer
1218 unsafe fn reset_all<I>(&mut self, command_buffers: I)
1219 where
1220 I: Iterator<Item = <Self::A as Api>::CommandBuffer>;
1221
1222 unsafe fn transition_buffers<'a, T>(&mut self, barriers: T)
1223 where
1224 T: Iterator<Item = BufferBarrier<'a, <Self::A as Api>::Buffer>>;
1225
1226 unsafe fn transition_textures<'a, T>(&mut self, barriers: T)
1227 where
1228 T: Iterator<Item = TextureBarrier<'a, <Self::A as Api>::Texture>>;
1229
1230 // copy operations
1231
1232 unsafe fn clear_buffer(&mut self, buffer: &<Self::A as Api>::Buffer, range: MemoryRange);
1233
1234 unsafe fn copy_buffer_to_buffer<T>(
1235 &mut self,
1236 src: &<Self::A as Api>::Buffer,
1237 dst: &<Self::A as Api>::Buffer,
1238 regions: T,
1239 ) where
1240 T: Iterator<Item = BufferCopy>;
1241
1242 /// Copy from an external image to an internal texture.
1243 /// Works with a single array layer.
1244 /// Note: `dst` current usage has to be `wgt::TextureUses::COPY_DST`.
1245 /// Note: the copy extent is in physical size (rounded to the block size)
1246 #[cfg(webgl)]
1247 unsafe fn copy_external_image_to_texture<T>(
1248 &mut self,
1249 src: &wgt::CopyExternalImageSourceInfo,
1250 dst: &<Self::A as Api>::Texture,
1251 dst_premultiplication: bool,
1252 regions: T,
1253 ) where
1254 T: Iterator<Item = TextureCopy>;
1255
1256 /// Copy from one texture to another.
1257 /// Works with a single array layer.
1258 /// Note: `dst` current usage has to be `wgt::TextureUses::COPY_DST`.
1259 /// Note: the copy extent is in physical size (rounded to the block size)
1260 unsafe fn copy_texture_to_texture<T>(
1261 &mut self,
1262 src: &<Self::A as Api>::Texture,
1263 src_usage: wgt::TextureUses,
1264 dst: &<Self::A as Api>::Texture,
1265 regions: T,
1266 ) where
1267 T: Iterator<Item = TextureCopy>;
1268
1269 /// Copy from buffer to texture.
1270 /// Works with a single array layer.
1271 /// Note: `dst` current usage has to be `wgt::TextureUses::COPY_DST`.
1272 /// Note: the copy extent is in physical size (rounded to the block size)
1273 unsafe fn copy_buffer_to_texture<T>(
1274 &mut self,
1275 src: &<Self::A as Api>::Buffer,
1276 dst: &<Self::A as Api>::Texture,
1277 regions: T,
1278 ) where
1279 T: Iterator<Item = BufferTextureCopy>;
1280
1281 /// Copy from texture to buffer.
1282 /// Works with a single array layer.
1283 /// Note: the copy extent is in physical size (rounded to the block size)
1284 unsafe fn copy_texture_to_buffer<T>(
1285 &mut self,
1286 src: &<Self::A as Api>::Texture,
1287 src_usage: wgt::TextureUses,
1288 dst: &<Self::A as Api>::Buffer,
1289 regions: T,
1290 ) where
1291 T: Iterator<Item = BufferTextureCopy>;
1292
1293 unsafe fn copy_acceleration_structure_to_acceleration_structure(
1294 &mut self,
1295 src: &<Self::A as Api>::AccelerationStructure,
1296 dst: &<Self::A as Api>::AccelerationStructure,
1297 copy: wgt::AccelerationStructureCopy,
1298 );
1299 // pass common
1300
1301 /// Sets the bind group at `index` to `group`.
1302 ///
1303 /// If this is not the first call to `set_bind_group` within the current
1304 /// render or compute pass:
1305 ///
1306 /// - If `layout` contains `n` bind group layouts, then any previously set
1307 /// bind groups at indices `n` or higher are cleared.
1308 ///
1309 /// - If the first `m` bind group layouts of `layout` are equal to those of
1310 /// the previously passed layout, but no more, then any previously set
1311 /// bind groups at indices `m` or higher are cleared.
1312 ///
1313 /// It follows from the above that passing the same layout as before doesn't
1314 /// clear any bind groups.
1315 ///
1316 /// # Safety
1317 ///
1318 /// - This [`CommandEncoder`] must be within a render or compute pass.
1319 ///
1320 /// - `index` must be the valid index of some bind group layout in `layout`.
1321 /// Call this the "relevant bind group layout".
1322 ///
1323 /// - The layout of `group` must be equal to the relevant bind group layout.
1324 ///
1325 /// - The length of `dynamic_offsets` must match the number of buffer
1326 /// bindings [with dynamic offsets][hdo] in the relevant bind group
1327 /// layout.
1328 ///
1329 /// - If those buffer bindings are ordered by increasing [`binding` number]
1330 /// and paired with elements from `dynamic_offsets`, then each offset must
1331 /// be a valid offset for the binding's corresponding buffer in `group`.
1332 ///
1333 /// [hdo]: wgt::BindingType::Buffer::has_dynamic_offset
1334 /// [`binding` number]: wgt::BindGroupLayoutEntry::binding
1335 unsafe fn set_bind_group(
1336 &mut self,
1337 layout: &<Self::A as Api>::PipelineLayout,
1338 index: u32,
1339 group: &<Self::A as Api>::BindGroup,
1340 dynamic_offsets: &[wgt::DynamicOffset],
1341 );
1342
1343 /// Sets a range in push constant data.
1344 ///
1345 /// IMPORTANT: while the data is passed as words, the offset is in bytes!
1346 ///
1347 /// # Safety
1348 ///
1349 /// - `offset_bytes` must be a multiple of 4.
1350 /// - The range of push constants written must be valid for the pipeline layout at draw time.
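    ///
    /// A hedged sketch (assuming `layout` declares a vertex-stage push-constant
    /// range covering bytes 8..16):
    ///
    /// ```ignore
    /// // Two 32-bit words (8 bytes of data), written starting at BYTE offset 8.
    /// unsafe {
    ///     encoder.set_push_constants(&layout, wgt::ShaderStages::VERTEX, 8, &[1, 2]);
    /// }
    /// ```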
1351 unsafe fn set_push_constants(
1352 &mut self,
1353 layout: &<Self::A as Api>::PipelineLayout,
1354 stages: wgt::ShaderStages,
1355 offset_bytes: u32,
1356 data: &[u32],
1357 );
1358
1359 unsafe fn insert_debug_marker(&mut self, label: &str);
1360 unsafe fn begin_debug_marker(&mut self, group_label: &str);
1361 unsafe fn end_debug_marker(&mut self);
1362
1363 // queries
1364
1365 /// # Safety:
1366 ///
1367 /// - If `set` is an occlusion query set, it must be the same one as used in the [`RenderPassDescriptor::occlusion_query_set`] parameter.
1368 unsafe fn begin_query(&mut self, set: &<Self::A as Api>::QuerySet, index: u32);
1369 /// # Safety:
1370 ///
1371 /// - If `set` is an occlusion query set, it must be the same one as used in the [`RenderPassDescriptor::occlusion_query_set`] parameter.
1372 unsafe fn end_query(&mut self, set: &<Self::A as Api>::QuerySet, index: u32);
1373 unsafe fn write_timestamp(&mut self, set: &<Self::A as Api>::QuerySet, index: u32);
1374 unsafe fn reset_queries(&mut self, set: &<Self::A as Api>::QuerySet, range: Range<u32>);
1375 unsafe fn copy_query_results(
1376 &mut self,
1377 set: &<Self::A as Api>::QuerySet,
1378 range: Range<u32>,
1379 buffer: &<Self::A as Api>::Buffer,
1380 offset: wgt::BufferAddress,
1381 stride: wgt::BufferSize,
1382 );
1383
1384 // render passes
1385
1386 /// Begin a new render pass, clearing all active bindings.
1387 ///
1388 /// This clears any bindings established by the following calls:
1389 ///
1390 /// - [`set_bind_group`](CommandEncoder::set_bind_group)
1391 /// - [`set_push_constants`](CommandEncoder::set_push_constants)
1392 /// - [`begin_query`](CommandEncoder::begin_query)
1393 /// - [`set_render_pipeline`](CommandEncoder::set_render_pipeline)
1394 /// - [`set_index_buffer`](CommandEncoder::set_index_buffer)
1395 /// - [`set_vertex_buffer`](CommandEncoder::set_vertex_buffer)
1396 ///
1397 /// # Safety
1398 ///
1399 /// - All prior calls to [`begin_render_pass`] on this [`CommandEncoder`] must have been followed
1400 /// by a call to [`end_render_pass`].
1401 ///
1402 /// - All prior calls to [`begin_compute_pass`] on this [`CommandEncoder`] must have been followed
1403 /// by a call to [`end_compute_pass`].
1404 ///
1405 /// [`begin_render_pass`]: CommandEncoder::begin_render_pass
1406 /// [`begin_compute_pass`]: CommandEncoder::begin_compute_pass
1407 /// [`end_render_pass`]: CommandEncoder::end_render_pass
1408 /// [`end_compute_pass`]: CommandEncoder::end_compute_pass
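///
/// A typical render-pass encoding sequence might look like this (a sketch;
/// `desc`, `pipeline`, `pipeline_layout`, and `bind_group` are assumed to be
/// prepared elsewhere):
///
/// ```ignore
/// unsafe {
///     encoder.begin_render_pass(&desc);
///     encoder.set_render_pipeline(&pipeline);
///     encoder.set_bind_group(&pipeline_layout, 0, &bind_group, &[]);
///     encoder.draw(0, 3, 0, 1); // three vertices, one instance
///     encoder.end_render_pass();
/// }
/// ```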
1409 unsafe fn begin_render_pass(
1410 &mut self,
1411 desc: &RenderPassDescriptor<<Self::A as Api>::QuerySet, <Self::A as Api>::TextureView>,
1412 );
1413
1414 /// End the current render pass.
1415 ///
1416 /// # Safety
1417 ///
1418 /// - There must have been a prior call to [`begin_render_pass`] on this [`CommandEncoder`]
1419 /// that has not been followed by a call to [`end_render_pass`].
1420 ///
1421 /// [`begin_render_pass`]: CommandEncoder::begin_render_pass
1422 /// [`end_render_pass`]: CommandEncoder::end_render_pass
1423 unsafe fn end_render_pass(&mut self);
1424
1425 unsafe fn set_render_pipeline(&mut self, pipeline: &<Self::A as Api>::RenderPipeline);
1426
1427 unsafe fn set_index_buffer<'a>(
1428 &mut self,
1429 binding: BufferBinding<'a, <Self::A as Api>::Buffer>,
1430 format: wgt::IndexFormat,
1431 );
1432 unsafe fn set_vertex_buffer<'a>(
1433 &mut self,
1434 index: u32,
1435 binding: BufferBinding<'a, <Self::A as Api>::Buffer>,
1436 );
1437 unsafe fn set_viewport(&mut self, rect: &Rect<f32>, depth_range: Range<f32>);
1438 unsafe fn set_scissor_rect(&mut self, rect: &Rect<u32>);
1439 unsafe fn set_stencil_reference(&mut self, value: u32);
1440 unsafe fn set_blend_constants(&mut self, color: &[f32; 4]);
1441
1442 unsafe fn draw(
1443 &mut self,
1444 first_vertex: u32,
1445 vertex_count: u32,
1446 first_instance: u32,
1447 instance_count: u32,
1448 );
1449 unsafe fn draw_indexed(
1450 &mut self,
1451 first_index: u32,
1452 index_count: u32,
1453 base_vertex: i32,
1454 first_instance: u32,
1455 instance_count: u32,
1456 );
1457 unsafe fn draw_indirect(
1458 &mut self,
1459 buffer: &<Self::A as Api>::Buffer,
1460 offset: wgt::BufferAddress,
1461 draw_count: u32,
1462 );
1463 unsafe fn draw_indexed_indirect(
1464 &mut self,
1465 buffer: &<Self::A as Api>::Buffer,
1466 offset: wgt::BufferAddress,
1467 draw_count: u32,
1468 );
1469 unsafe fn draw_indirect_count(
1470 &mut self,
1471 buffer: &<Self::A as Api>::Buffer,
1472 offset: wgt::BufferAddress,
1473 count_buffer: &<Self::A as Api>::Buffer,
1474 count_offset: wgt::BufferAddress,
1475 max_count: u32,
1476 );
1477 unsafe fn draw_indexed_indirect_count(
1478 &mut self,
1479 buffer: &<Self::A as Api>::Buffer,
1480 offset: wgt::BufferAddress,
1481 count_buffer: &<Self::A as Api>::Buffer,
1482 count_offset: wgt::BufferAddress,
1483 max_count: u32,
1484 );
1485 unsafe fn draw_mesh_tasks(
1486 &mut self,
1487 group_count_x: u32,
1488 group_count_y: u32,
1489 group_count_z: u32,
1490 );
1491 unsafe fn draw_mesh_tasks_indirect(
1492 &mut self,
1493 buffer: &<Self::A as Api>::Buffer,
1494 offset: wgt::BufferAddress,
1495 draw_count: u32,
1496 );
1497 unsafe fn draw_mesh_tasks_indirect_count(
1498 &mut self,
1499 buffer: &<Self::A as Api>::Buffer,
1500 offset: wgt::BufferAddress,
1501 count_buffer: &<Self::A as Api>::Buffer,
1502 count_offset: wgt::BufferAddress,
1503 max_count: u32,
1504 );
1505
1506 // compute passes
1507
1508 /// Begin a new compute pass, clearing all active bindings.
1509 ///
1510 /// This clears any bindings established by the following calls:
1511 ///
1512 /// - [`set_bind_group`](CommandEncoder::set_bind_group)
1513 /// - [`set_push_constants`](CommandEncoder::set_push_constants)
1514 /// - [`begin_query`](CommandEncoder::begin_query)
1515 /// - [`set_compute_pipeline`](CommandEncoder::set_compute_pipeline)
1516 ///
1517 /// # Safety
1518 ///
1519 /// - All prior calls to [`begin_render_pass`] on this [`CommandEncoder`] must have been followed
1520 /// by a call to [`end_render_pass`].
1521 ///
1522 /// - All prior calls to [`begin_compute_pass`] on this [`CommandEncoder`] must have been followed
1523 /// by a call to [`end_compute_pass`].
1524 ///
1525 /// [`begin_render_pass`]: CommandEncoder::begin_render_pass
1526 /// [`begin_compute_pass`]: CommandEncoder::begin_compute_pass
1527 /// [`end_render_pass`]: CommandEncoder::end_render_pass
1528 /// [`end_compute_pass`]: CommandEncoder::end_compute_pass
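///
/// A typical compute-pass encoding sequence might look like this (a sketch;
/// `desc`, `pipeline`, `pipeline_layout`, and `bind_group` are assumed to be
/// prepared elsewhere):
///
/// ```ignore
/// unsafe {
///     encoder.begin_compute_pass(&desc);
///     encoder.set_compute_pipeline(&pipeline);
///     encoder.set_bind_group(&pipeline_layout, 0, &bind_group, &[]);
///     encoder.dispatch([64, 1, 1]);
///     encoder.end_compute_pass();
/// }
/// ```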
1529 unsafe fn begin_compute_pass(
1530 &mut self,
1531 desc: &ComputePassDescriptor<<Self::A as Api>::QuerySet>,
1532 );
1533
1534 /// End the current compute pass.
1535 ///
1536 /// # Safety
1537 ///
1538 /// - There must have been a prior call to [`begin_compute_pass`] on this [`CommandEncoder`]
1539 /// that has not been followed by a call to [`end_compute_pass`].
1540 ///
1541 /// [`begin_compute_pass`]: CommandEncoder::begin_compute_pass
1542 /// [`end_compute_pass`]: CommandEncoder::end_compute_pass
1543 unsafe fn end_compute_pass(&mut self);
1544
1545 unsafe fn set_compute_pipeline(&mut self, pipeline: &<Self::A as Api>::ComputePipeline);
1546
1547 unsafe fn dispatch(&mut self, count: [u32; 3]);
1548 unsafe fn dispatch_indirect(
1549 &mut self,
1550 buffer: &<Self::A as Api>::Buffer,
1551 offset: wgt::BufferAddress,
1552 );
1553
1554 /// To get the required sizes for the buffer allocations, use `get_acceleration_structure_build_sizes` per descriptor.
1555 /// All buffers must be synchronized externally.
1556 /// Each buffer region that is written to may only be passed once per function call,
1557 /// with the exception of updates in the same descriptor.
1558 /// Consequences of this limitation:
1559 /// - scratch buffers must be unique
1560 /// - a TLAS can't be built in the same call as a BLAS it contains
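///
/// For example, a BLAS referenced by a TLAS must therefore be built in an
/// earlier call, with a barrier in between (a sketch; the two build
/// descriptors are assumed to be prepared elsewhere):
///
/// ```ignore
/// unsafe {
///     encoder.build_acceleration_structures(1, [blas_build_descriptor]);
///     encoder.place_acceleration_structure_barrier(AccelerationStructureBarrier {
///         usage: StateTransition {
///             from: AccelerationStructureUses::BUILD_OUTPUT,
///             to: AccelerationStructureUses::BUILD_INPUT,
///         },
///     });
///     encoder.build_acceleration_structures(1, [tlas_build_descriptor]);
/// }
/// ```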
1561 unsafe fn build_acceleration_structures<'a, T>(
1562 &mut self,
1563 descriptor_count: u32,
1564 descriptors: T,
1565 ) where
1566 Self::A: 'a,
1567 T: IntoIterator<
1568 Item = BuildAccelerationStructureDescriptor<
1569 'a,
1570 <Self::A as Api>::Buffer,
1571 <Self::A as Api>::AccelerationStructure,
1572 >,
1573 >;
1574
1575 unsafe fn place_acceleration_structure_barrier(
1576 &mut self,
1577 barrier: AccelerationStructureBarrier,
1578 );
1579 // Modeled on DX12, because this can be polyfilled in Vulkan, but not the other way round.
1580 unsafe fn read_acceleration_structure_compact_size(
1581 &mut self,
1582 acceleration_structure: &<Self::A as Api>::AccelerationStructure,
1583 buf: &<Self::A as Api>::Buffer,
1584 );
1585}
1586
1587bitflags!(
1588 /// Pipeline layout creation flags.
1589 #[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
1590 pub struct PipelineLayoutFlags: u32 {
1591 /// D3D12: Add support for `first_vertex` and `first_instance` builtins
1592 /// via push constants for direct execution.
1593 const FIRST_VERTEX_INSTANCE = 1 << 0;
1594 /// D3D12: Add support for `num_workgroups` builtins via push constants
1595 /// for direct execution.
1596 const NUM_WORK_GROUPS = 1 << 1;
1597 /// D3D12: Add support for the builtins that the other flags enable for
1598 /// indirect execution.
1599 const INDIRECT_BUILTIN_UPDATE = 1 << 2;
1600 }
1601);
1602
1603bitflags!(
1604 /// Bind group layout creation flags.
1605 #[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
1606 pub struct BindGroupLayoutFlags: u32 {
1607 /// Allows for bind group binding arrays to be shorter than the array in the BGL.
1608 const PARTIALLY_BOUND = 1 << 0;
1609 }
1610);
1611
1612bitflags!(
1613 /// Texture format capability flags.
1614 #[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
1615 pub struct TextureFormatCapabilities: u32 {
1616 /// Format can be sampled.
1617 const SAMPLED = 1 << 0;
1618 /// Format can be sampled with a linear sampler.
1619 const SAMPLED_LINEAR = 1 << 1;
1620 /// Format can be sampled with a min/max reduction sampler.
1621 const SAMPLED_MINMAX = 1 << 2;
1622
1623 /// Format can be used as storage with read-only access.
1624 const STORAGE_READ_ONLY = 1 << 3;
1625 /// Format can be used as storage with write-only access.
1626 const STORAGE_WRITE_ONLY = 1 << 4;
1627 /// Format can be used as storage with both read and write access.
1628 const STORAGE_READ_WRITE = 1 << 5;
1629 /// Format can be used as storage with atomics.
1630 const STORAGE_ATOMIC = 1 << 6;
1631
1632 /// Format can be used as color and input attachment.
1633 const COLOR_ATTACHMENT = 1 << 7;
1634 /// Format can be used as color (with blending) and input attachment.
1635 const COLOR_ATTACHMENT_BLEND = 1 << 8;
1636 /// Format can be used as depth-stencil and input attachment.
1637 const DEPTH_STENCIL_ATTACHMENT = 1 << 9;
1638
1639 /// Format can be multisampled by x2.
1640 const MULTISAMPLE_X2 = 1 << 10;
1641 /// Format can be multisampled by x4.
1642 const MULTISAMPLE_X4 = 1 << 11;
1643 /// Format can be multisampled by x8.
1644 const MULTISAMPLE_X8 = 1 << 12;
1645 /// Format can be multisampled by x16.
1646 const MULTISAMPLE_X16 = 1 << 13;
1647
1648 /// Format can be used for render pass resolve targets.
1649 const MULTISAMPLE_RESOLVE = 1 << 14;
1650
1651 /// Format can be copied from.
1652 const COPY_SRC = 1 << 15;
1653 /// Format can be copied to.
1654 const COPY_DST = 1 << 16;
1655 }
1656);
1657
1658bitflags!(
1659 /// Texture format aspect flags.
1660 #[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
1661 pub struct FormatAspects: u8 {
1662 const COLOR = 1 << 0;
1663 const DEPTH = 1 << 1;
1664 const STENCIL = 1 << 2;
1665 const PLANE_0 = 1 << 3;
1666 const PLANE_1 = 1 << 4;
1667 const PLANE_2 = 1 << 5;
1668
1669 const DEPTH_STENCIL = Self::DEPTH.bits() | Self::STENCIL.bits();
1670 }
1671);
1672
1673impl FormatAspects {
1674 pub fn new(format: wgt::TextureFormat, aspect: wgt::TextureAspect) -> Self {
1675 let aspect_mask = match aspect {
1676 wgt::TextureAspect::All => Self::all(),
1677 wgt::TextureAspect::DepthOnly => Self::DEPTH,
1678 wgt::TextureAspect::StencilOnly => Self::STENCIL,
1679 wgt::TextureAspect::Plane0 => Self::PLANE_0,
1680 wgt::TextureAspect::Plane1 => Self::PLANE_1,
1681 wgt::TextureAspect::Plane2 => Self::PLANE_2,
1682 };
1683 Self::from(format) & aspect_mask
1684 }
1685
1686 /// Returns `true` if only one flag is set
1687 pub fn is_one(&self) -> bool {
1688 self.bits().is_power_of_two()
1689 }
1690
1691 pub fn map(&self) -> wgt::TextureAspect {
1692 match *self {
1693 Self::COLOR => wgt::TextureAspect::All,
1694 Self::DEPTH => wgt::TextureAspect::DepthOnly,
1695 Self::STENCIL => wgt::TextureAspect::StencilOnly,
1696 Self::PLANE_0 => wgt::TextureAspect::Plane0,
1697 Self::PLANE_1 => wgt::TextureAspect::Plane1,
1698 Self::PLANE_2 => wgt::TextureAspect::Plane2,
1699 _ => unreachable!(),
1700 }
1701 }
1702}
1703
1704impl From<wgt::TextureFormat> for FormatAspects {
1705 fn from(format: wgt::TextureFormat) -> Self {
1706 match format {
1707 wgt::TextureFormat::Stencil8 => Self::STENCIL,
1708 wgt::TextureFormat::Depth16Unorm
1709 | wgt::TextureFormat::Depth32Float
1710 | wgt::TextureFormat::Depth24Plus => Self::DEPTH,
1711 wgt::TextureFormat::Depth32FloatStencil8 | wgt::TextureFormat::Depth24PlusStencil8 => {
1712 Self::DEPTH_STENCIL
1713 }
1714 wgt::TextureFormat::NV12 => Self::PLANE_0 | Self::PLANE_1,
1715 _ => Self::COLOR,
1716 }
1717 }
1718}
1719
1720bitflags!(
1721 #[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
1722 pub struct MemoryFlags: u32 {
1723 const TRANSIENT = 1 << 0;
1724 const PREFER_COHERENT = 1 << 1;
1725 }
1726);
1727
1728//TODO: it's not intuitive for the backends to consider `LOAD` being optional.
1729
1730bitflags!(
1731 #[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
1732 pub struct AttachmentOps: u8 {
1733 const LOAD = 1 << 0;
1734 const STORE = 1 << 1;
1735 }
1736);
1737
1738#[derive(Clone, Debug)]
1739pub struct InstanceDescriptor<'a> {
1740 pub name: &'a str,
1741 pub flags: wgt::InstanceFlags,
1742 pub backend_options: wgt::BackendOptions,
1743}
1744
1745#[derive(Clone, Debug)]
1746pub struct Alignments {
1747 /// The alignment of the start of the buffer used as a GPU copy source.
1748 pub buffer_copy_offset: wgt::BufferSize,
1749
1750 /// The alignment of the row pitch of the texture data stored in a buffer that is
1751 /// used in a GPU copy operation.
1752 pub buffer_copy_pitch: wgt::BufferSize,
1753
1754 /// The finest alignment of bound range checking for uniform buffers.
1755 ///
1756 /// When `wgpu_hal` restricts shader references to the [accessible
1757 /// region][ar] of a [`Uniform`] buffer, the size of the accessible region
1758 /// is the bind group binding's stated [size], rounded up to the next
1759 /// multiple of this value.
1760 ///
1761 /// We don't need an analogous field for storage buffer bindings, because
1762 /// all our backends promise to enforce the size at least to a four-byte
1763 /// alignment, and `wgpu_hal` requires bound range lengths to be a multiple
1764 /// of four anyway.
1765 ///
1766 /// [ar]: struct.BufferBinding.html#accessible-region
1767 /// [`Uniform`]: wgt::BufferBindingType::Uniform
1768 /// [size]: BufferBinding::size
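///
/// For example (illustrative numbers): if this alignment is 64 and a uniform
/// binding's stated size is 100 bytes, the accessible region is 128 bytes,
/// since 100 rounded up to the next multiple of 64 is 128.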
1769 pub uniform_bounds_check_alignment: wgt::BufferSize,
1770
1771 /// The size of a raw TLAS instance.
1772 pub raw_tlas_instance_size: usize,
1773
1774 /// The required alignment of the scratch buffer used to build an acceleration structure.
1775 pub ray_tracing_scratch_buffer_alignment: u32,
1776}
1777
1778#[derive(Clone, Debug)]
1779pub struct Capabilities {
1780 pub limits: wgt::Limits,
1781 pub alignments: Alignments,
1782 pub downlevel: wgt::DownlevelCapabilities,
1783}
1784
1785#[derive(Debug)]
1786pub struct ExposedAdapter<A: Api> {
1787 pub adapter: A::Adapter,
1788 pub info: wgt::AdapterInfo,
1789 pub features: wgt::Features,
1790 pub capabilities: Capabilities,
1791}
1792
1793 /// Describes a `Surface`'s presentation capabilities.
1794 /// Fetch this with [`Adapter::surface_capabilities`].
1795#[derive(Debug, Clone)]
1796pub struct SurfaceCapabilities {
1797 /// List of supported texture formats.
1798 ///
1799 /// Must be at least one.
1800 pub formats: Vec<wgt::TextureFormat>,
1801
1802 /// Range for the number of queued frames.
1803 ///
1804 /// Depending on the backend, this either sets the swapchain frame count to `value + 1`, calls `SetMaximumFrameLatency` with the given value,
1805 /// or uses a wait-for-present in the acquire method to limit rendering so that it behaves like a swapchain with `value + 1` frames.
1806 ///
1807 /// - `maximum_frame_latency.start` must be at least 1.
1808 /// - `maximum_frame_latency.end` must be greater than or equal to `maximum_frame_latency.start`.
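///
/// For example, a sketch (`caps` is this `SurfaceCapabilities` value): clamping a
/// desired latency into the supported range before filling out a
/// [`SurfaceConfiguration`]:
///
/// ```ignore
/// let desired: u32 = 2;
/// let latency = desired
///     .clamp(*caps.maximum_frame_latency.start(), *caps.maximum_frame_latency.end());
/// ```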
1809 pub maximum_frame_latency: RangeInclusive<u32>,
1810
1811 /// Current extent of the surface, if known.
1812 pub current_extent: Option<wgt::Extent3d>,
1813
1814 /// Supported texture usage flags.
1815 ///
1816 /// Must include at least `wgt::TextureUses::COLOR_TARGET`.
1817 pub usage: wgt::TextureUses,
1818
1819 /// List of supported V-sync modes.
1820 ///
1821 /// Must be at least one.
1822 pub present_modes: Vec<wgt::PresentMode>,
1823
1824 /// List of supported alpha composition modes.
1825 ///
1826 /// Must be at least one.
1827 pub composite_alpha_modes: Vec<wgt::CompositeAlphaMode>,
1828}
1829
1830#[derive(Debug)]
1831pub struct AcquiredSurfaceTexture<A: Api> {
1832 pub texture: A::SurfaceTexture,
1833 /// `true` if the presentation configuration no longer matches
1834 /// the surface properties exactly, but it can still be used to present
1835 /// to the surface successfully.
1836 pub suboptimal: bool,
1837}
1838
1839#[derive(Debug)]
1840pub struct OpenDevice<A: Api> {
1841 pub device: A::Device,
1842 pub queue: A::Queue,
1843}
1844
1845#[derive(Clone, Debug)]
1846pub struct BufferMapping {
1847 pub ptr: NonNull<u8>,
1848 pub is_coherent: bool,
1849}
1850
1851#[derive(Clone, Debug)]
1852pub struct BufferDescriptor<'a> {
1853 pub label: Label<'a>,
1854 pub size: wgt::BufferAddress,
1855 pub usage: wgt::BufferUses,
1856 pub memory_flags: MemoryFlags,
1857}
1858
1859#[derive(Clone, Debug)]
1860pub struct TextureDescriptor<'a> {
1861 pub label: Label<'a>,
1862 pub size: wgt::Extent3d,
1863 pub mip_level_count: u32,
1864 pub sample_count: u32,
1865 pub dimension: wgt::TextureDimension,
1866 pub format: wgt::TextureFormat,
1867 pub usage: wgt::TextureUses,
1868 pub memory_flags: MemoryFlags,
1869 /// Allows views of this texture to have a different format
1870 /// than the texture does.
1871 pub view_formats: Vec<wgt::TextureFormat>,
1872}
1873
1874impl TextureDescriptor<'_> {
1875 pub fn copy_extent(&self) -> CopyExtent {
1876 CopyExtent::map_extent_to_copy_size(&self.size, self.dimension)
1877 }
1878
1879 pub fn is_cube_compatible(&self) -> bool {
1880 self.dimension == wgt::TextureDimension::D2
1881 && self.size.depth_or_array_layers % 6 == 0
1882 && self.sample_count == 1
1883 && self.size.width == self.size.height
1884 }
1885
1886 pub fn array_layer_count(&self) -> u32 {
1887 match self.dimension {
1888 wgt::TextureDimension::D1 | wgt::TextureDimension::D3 => 1,
1889 wgt::TextureDimension::D2 => self.size.depth_or_array_layers,
1890 }
1891 }
1892}
1893
1894/// TextureView descriptor.
1895///
1896/// Valid usage:
1897 /// - `format` has to be the same as `TextureDescriptor::format`
1898 /// - `dimension` has to be compatible with `TextureDescriptor::dimension`
1899 /// - `usage` has to be a subset of `TextureDescriptor::usage`
1900 /// - `range` has to be a subset of parent texture
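///
/// For example, a sketch (`texture_desc` is the parent texture's
/// [`TextureDescriptor`]): a 2D view covering the whole texture, reusing the
/// parent's format and usage:
///
/// ```ignore
/// let view_desc = TextureViewDescriptor {
///     label: None,
///     format: texture_desc.format,
///     dimension: wgt::TextureViewDimension::D2,
///     usage: texture_desc.usage,
///     // Assumes the default subresource range selects the whole texture.
///     range: wgt::ImageSubresourceRange::default(),
/// };
/// ```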
1901#[derive(Clone, Debug)]
1902pub struct TextureViewDescriptor<'a> {
1903 pub label: Label<'a>,
1904 pub format: wgt::TextureFormat,
1905 pub dimension: wgt::TextureViewDimension,
1906 pub usage: wgt::TextureUses,
1907 pub range: wgt::ImageSubresourceRange,
1908}
1909
1910#[derive(Clone, Debug)]
1911pub struct SamplerDescriptor<'a> {
1912 pub label: Label<'a>,
1913 pub address_modes: [wgt::AddressMode; 3],
1914 pub mag_filter: wgt::FilterMode,
1915 pub min_filter: wgt::FilterMode,
1916 pub mipmap_filter: wgt::FilterMode,
1917 pub lod_clamp: Range<f32>,
1918 pub compare: Option<wgt::CompareFunction>,
1919 // Must be in the range [1, 16].
1920 //
1921 // Anisotropic filtering must be supported if this is not 1.
1922 pub anisotropy_clamp: u16,
1923 pub border_color: Option<wgt::SamplerBorderColor>,
1924}
1925
1926/// BindGroupLayout descriptor.
1927///
1928/// Valid usage:
1929/// - `entries` are sorted by ascending `wgt::BindGroupLayoutEntry::binding`
1930#[derive(Clone, Debug)]
1931pub struct BindGroupLayoutDescriptor<'a> {
1932 pub label: Label<'a>,
1933 pub flags: BindGroupLayoutFlags,
1934 pub entries: &'a [wgt::BindGroupLayoutEntry],
1935}
1936
1937#[derive(Clone, Debug)]
1938pub struct PipelineLayoutDescriptor<'a, B: DynBindGroupLayout + ?Sized> {
1939 pub label: Label<'a>,
1940 pub flags: PipelineLayoutFlags,
1941 pub bind_group_layouts: &'a [&'a B],
1942 pub push_constant_ranges: &'a [wgt::PushConstantRange],
1943}
1944
1945/// A region of a buffer made visible to shaders via a [`BindGroup`].
1946///
1947/// [`BindGroup`]: Api::BindGroup
1948///
1949/// ## Accessible region
1950///
1951/// `wgpu_hal` guarantees that shaders compiled with
1952/// [`ShaderModuleDescriptor::runtime_checks`] set to `true` cannot read or
1953/// write data via this binding outside the *accessible region* of [`buffer`]:
1954///
1955/// - The accessible region starts at [`offset`].
1956///
1957/// - For [`Storage`] bindings, the size of the accessible region is [`size`],
1958/// which must be a multiple of 4.
1959///
1960/// - For [`Uniform`] bindings, the size of the accessible region is [`size`]
1961/// rounded up to the next multiple of
1962/// [`Alignments::uniform_bounds_check_alignment`].
1963///
1964/// Note that this guarantee is stricter than WGSL's requirements for
1965/// [out-of-bounds accesses][woob], as WGSL allows them to return values from
1966/// elsewhere in the buffer. But this guarantee is necessary anyway, to permit
1967/// `wgpu-core` to avoid clearing uninitialized regions of buffers that will
1968/// never be read by the application before they are overwritten. This
1969/// optimization consults bind group buffer binding regions to determine which
1970/// parts of which buffers shaders might observe. This optimization is only
1971/// sound if shader access is bounds-checked.
1972///
1973/// [`buffer`]: BufferBinding::buffer
1974/// [`offset`]: BufferBinding::offset
1975/// [`size`]: BufferBinding::size
1976/// [`Storage`]: wgt::BufferBindingType::Storage
1977/// [`Uniform`]: wgt::BufferBindingType::Uniform
1978/// [woob]: https://gpuweb.github.io/gpuweb/wgsl/#out-of-bounds-access-sec
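///
/// For example, a sketch (`my_buffer` is assumed to be a suitable storage
/// buffer): binding the 256 bytes starting at offset 1024:
///
/// ```ignore
/// let binding = BufferBinding {
///     buffer: &my_buffer,
///     offset: 1024,
///     size: wgt::BufferSize::new(256), // `None` would extend to the end of the buffer
/// };
/// ```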
1979#[derive(Debug)]
1980pub struct BufferBinding<'a, B: DynBuffer + ?Sized> {
1981 /// The buffer being bound.
1982 pub buffer: &'a B,
1983
1984 /// The offset at which the bound region starts.
1985 ///
1986 /// This must be less than the size of the buffer. Some back ends
1987 /// cannot tolerate zero-length regions; for example, see
1988 /// [VUID-VkDescriptorBufferInfo-offset-00340][340] and
1989 /// [VUID-VkDescriptorBufferInfo-range-00341][341], or the
1990 /// documentation for GLES's [glBindBufferRange][bbr].
1991 ///
1992 /// [340]: https://registry.khronos.org/vulkan/specs/1.3-extensions/html/vkspec.html#VUID-VkDescriptorBufferInfo-offset-00340
1993 /// [341]: https://registry.khronos.org/vulkan/specs/1.3-extensions/html/vkspec.html#VUID-VkDescriptorBufferInfo-range-00341
1994 /// [bbr]: https://registry.khronos.org/OpenGL-Refpages/es3.0/html/glBindBufferRange.xhtml
1995 pub offset: wgt::BufferAddress,
1996
1997 /// The size of the region bound, in bytes.
1998 ///
1999 /// If `None`, the region extends from `offset` to the end of the
2000 /// buffer. Given the restrictions on `offset`, this means that
2001 /// the size is always greater than zero.
2002 pub size: Option<wgt::BufferSize>,
2003}
2004
2005impl<'a, T: DynBuffer + ?Sized> Clone for BufferBinding<'a, T> {
2006 fn clone(&self) -> Self {
2007 BufferBinding {
2008 buffer: self.buffer,
2009 offset: self.offset,
2010 size: self.size,
2011 }
2012 }
2013}
2014
2015#[derive(Debug)]
2016pub struct TextureBinding<'a, T: DynTextureView + ?Sized> {
2017 pub view: &'a T,
2018 pub usage: wgt::TextureUses,
2019}
2020
2021impl<'a, T: DynTextureView + ?Sized> Clone for TextureBinding<'a, T> {
2022 fn clone(&self) -> Self {
2023 TextureBinding {
2024 view: self.view,
2025 usage: self.usage,
2026 }
2027 }
2028}
2029
2030/// cbindgen:ignore
2031#[derive(Clone, Debug)]
2032pub struct BindGroupEntry {
2033 pub binding: u32,
2034 pub resource_index: u32,
2035 pub count: u32,
2036}
2037
2038/// BindGroup descriptor.
2039///
2040/// Valid usage:
2041 /// - `entries` has to be sorted by ascending `BindGroupEntry::binding`
2042 /// - `entries` has to have the same set of `BindGroupEntry::binding` as `layout`
2043 /// - each entry has to be compatible with the `layout`
2044 /// - each entry's `BindGroupEntry::resource_index` is within range
2045/// of the corresponding resource array, selected by the relevant
2046/// `BindGroupLayoutEntry`.
2047#[derive(Clone, Debug)]
2048pub struct BindGroupDescriptor<
2049 'a,
2050 Bgl: DynBindGroupLayout + ?Sized,
2051 B: DynBuffer + ?Sized,
2052 S: DynSampler + ?Sized,
2053 T: DynTextureView + ?Sized,
2054 A: DynAccelerationStructure + ?Sized,
2055> {
2056 pub label: Label<'a>,
2057 pub layout: &'a Bgl,
2058 pub buffers: &'a [BufferBinding<'a, B>],
2059 pub samplers: &'a [&'a S],
2060 pub textures: &'a [TextureBinding<'a, T>],
2061 pub entries: &'a [BindGroupEntry],
2062 pub acceleration_structures: &'a [&'a A],
2063}
2064
2065#[derive(Clone, Debug)]
2066pub struct CommandEncoderDescriptor<'a, Q: DynQueue + ?Sized> {
2067 pub label: Label<'a>,
2068 pub queue: &'a Q,
2069}
2070
2071/// Naga shader module.
2072#[derive(Default)]
2073pub struct NagaShader {
2074 /// Shader module IR.
2075 pub module: Cow<'static, naga::Module>,
2076 /// Analysis information of the module.
2077 pub info: naga::valid::ModuleInfo,
2078 /// Source code, for debugging.
2079 pub debug_source: Option<DebugSource>,
2080}
2081
2082// Custom implementation avoids the need to generate Debug impl code
2083// for the whole Naga module and info.
2084impl fmt::Debug for NagaShader {
2085 fn fmt(&self, formatter: &mut fmt::Formatter) -> fmt::Result {
2086 write!(formatter, "Naga shader")
2087 }
2088}
2089
2090/// Shader input.
2091#[allow(clippy::large_enum_variant)]
2092pub enum ShaderInput<'a> {
2093 Naga(NagaShader),
2094 Msl {
2095 shader: String,
2096 entry_point: String,
2097 num_workgroups: (u32, u32, u32),
2098 },
2099 SpirV(&'a [u32]),
2100}
2101
2102pub struct ShaderModuleDescriptor<'a> {
2103 pub label: Label<'a>,
2104
2105 /// # Safety
2106 ///
2107 /// See the documentation for each flag in [`ShaderRuntimeChecks`][src].
2108 ///
2109 /// [src]: wgt::ShaderRuntimeChecks
2110 pub runtime_checks: wgt::ShaderRuntimeChecks,
2111}
2112
2113#[derive(Debug, Clone)]
2114pub struct DebugSource {
2115 pub file_name: Cow<'static, str>,
2116 pub source_code: Cow<'static, str>,
2117}
2118
2119/// Describes a programmable pipeline stage.
2120#[derive(Debug)]
2121pub struct ProgrammableStage<'a, M: DynShaderModule + ?Sized> {
2122 /// The compiled shader module for this stage.
2123 pub module: &'a M,
2124 /// The name of the entry point in the compiled shader. There must be a function with this name
2125 /// in the shader.
2126 pub entry_point: &'a str,
2127 /// Pipeline constants
2128 pub constants: &'a naga::back::PipelineConstants,
2129 /// Whether workgroup scoped memory will be initialized with zero values for this stage.
2130 ///
2131 /// This is required by the WebGPU spec, but may have overhead which can be avoided
2132 /// for cross-platform applications.
2133 pub zero_initialize_workgroup_memory: bool,
2134}
2135
2136impl<M: DynShaderModule + ?Sized> Clone for ProgrammableStage<'_, M> {
2137 fn clone(&self) -> Self {
2138 Self {
2139 module: self.module,
2140 entry_point: self.entry_point,
2141 constants: self.constants,
2142 zero_initialize_workgroup_memory: self.zero_initialize_workgroup_memory,
2143 }
2144 }
2145}
2146
2147/// Describes a compute pipeline.
2148#[derive(Clone, Debug)]
2149pub struct ComputePipelineDescriptor<
2150 'a,
2151 Pl: DynPipelineLayout + ?Sized,
2152 M: DynShaderModule + ?Sized,
2153 Pc: DynPipelineCache + ?Sized,
2154> {
2155 pub label: Label<'a>,
2156 /// The layout of bind groups for this pipeline.
2157 pub layout: &'a Pl,
2158 /// The compiled compute stage and its entry point.
2159 pub stage: ProgrammableStage<'a, M>,
2160 /// The cache which will be used and filled when compiling this pipeline
2161 pub cache: Option<&'a Pc>,
2162}
2163
2164pub struct PipelineCacheDescriptor<'a> {
2165 pub label: Label<'a>,
2166 pub data: Option<&'a [u8]>,
2167}
2168
2169/// Describes how the vertex buffer is interpreted.
2170#[derive(Clone, Debug)]
2171pub struct VertexBufferLayout<'a> {
2172 /// The stride, in bytes, between elements of this buffer.
2173 pub array_stride: wgt::BufferAddress,
2174 /// How often this vertex buffer is "stepped" forward.
2175 pub step_mode: wgt::VertexStepMode,
2176 /// The list of attributes which comprise a single vertex.
2177 pub attributes: &'a [wgt::VertexAttribute],
2178}
2179
2180/// Describes a render (graphics) pipeline.
2181#[derive(Clone, Debug)]
2182pub struct RenderPipelineDescriptor<
2183 'a,
2184 Pl: DynPipelineLayout + ?Sized,
2185 M: DynShaderModule + ?Sized,
2186 Pc: DynPipelineCache + ?Sized,
2187> {
2188 pub label: Label<'a>,
2189 /// The layout of bind groups for this pipeline.
2190 pub layout: &'a Pl,
2191 /// The format of any vertex buffers used with this pipeline.
2192 pub vertex_buffers: &'a [VertexBufferLayout<'a>],
2193 /// The vertex stage for this pipeline.
2194 pub vertex_stage: ProgrammableStage<'a, M>,
2195 /// The properties of the pipeline at the primitive assembly and rasterization level.
2196 pub primitive: wgt::PrimitiveState,
2197 /// The effect of draw calls on the depth and stencil aspects of the output target, if any.
2198 pub depth_stencil: Option<wgt::DepthStencilState>,
2199 /// The multi-sampling properties of the pipeline.
2200 pub multisample: wgt::MultisampleState,
2201 /// The fragment stage for this pipeline.
2202 pub fragment_stage: Option<ProgrammableStage<'a, M>>,
2203 /// The effect of draw calls on the color aspect of the output target.
2204 pub color_targets: &'a [Option<wgt::ColorTargetState>],
2205 /// If the pipeline will be used with a multiview render pass, this indicates how many array
2206 /// layers the attachments will have.
2207 pub multiview: Option<NonZeroU32>,
2208 /// The cache which will be used and filled when compiling this pipeline
2209 pub cache: Option<&'a Pc>,
2210}
2211pub struct MeshPipelineDescriptor<
2212 'a,
2213 Pl: DynPipelineLayout + ?Sized,
2214 M: DynShaderModule + ?Sized,
2215 Pc: DynPipelineCache + ?Sized,
2216> {
2217 pub label: Label<'a>,
2218 /// The layout of bind groups for this pipeline.
2219 pub layout: &'a Pl,
2220 pub task_stage: Option<ProgrammableStage<'a, M>>,
2221 pub mesh_stage: ProgrammableStage<'a, M>,
2222 /// The properties of the pipeline at the primitive assembly and rasterization level.
2223 pub primitive: wgt::PrimitiveState,
2224 /// The effect of draw calls on the depth and stencil aspects of the output target, if any.
2225 pub depth_stencil: Option<wgt::DepthStencilState>,
2226 /// The multi-sampling properties of the pipeline.
2227 pub multisample: wgt::MultisampleState,
2228 /// The fragment stage for this pipeline.
2229 pub fragment_stage: Option<ProgrammableStage<'a, M>>,
2230 /// The effect of draw calls on the color aspect of the output target.
2231 pub color_targets: &'a [Option<wgt::ColorTargetState>],
2232 /// If the pipeline will be used with a multiview render pass, this indicates how many array
2233 /// layers the attachments will have.
2234 pub multiview: Option<NonZeroU32>,
2235 /// The cache which will be used and filled when compiling this pipeline
2236 pub cache: Option<&'a Pc>,
2237}
2238
2239#[derive(Debug, Clone)]
2240pub struct SurfaceConfiguration {
2241 /// Maximum number of queued frames. Must be in
2242 /// `SurfaceCapabilities::maximum_frame_latency` range.
2243 pub maximum_frame_latency: u32,
2244 /// Vertical synchronization mode.
2245 pub present_mode: wgt::PresentMode,
2246 /// Alpha composition mode.
2247 pub composite_alpha_mode: wgt::CompositeAlphaMode,
2248 /// Format of the surface textures.
2249 pub format: wgt::TextureFormat,
2250 /// Requested texture extent. Must be in
2251 /// `SurfaceCapabilities::extents` range.
2252 pub extent: wgt::Extent3d,
2253 /// Allowed usage of surface textures.
2254 pub usage: wgt::TextureUses,
2255 /// Allows views of swapchain texture to have a different format
2256 /// than the texture does.
2257 pub view_formats: Vec<wgt::TextureFormat>,
2258}
2259
2260#[derive(Debug, Clone)]
2261pub struct Rect<T> {
2262 pub x: T,
2263 pub y: T,
2264 pub w: T,
2265 pub h: T,
2266}
2267
2268#[derive(Debug, Clone, PartialEq)]
2269pub struct StateTransition<T> {
2270 pub from: T,
2271 pub to: T,
2272}
2273
2274#[derive(Debug, Clone)]
2275pub struct BufferBarrier<'a, B: DynBuffer + ?Sized> {
2276 pub buffer: &'a B,
2277 pub usage: StateTransition<wgt::BufferUses>,
2278}
2279
2280#[derive(Debug, Clone)]
2281pub struct TextureBarrier<'a, T: DynTexture + ?Sized> {
2282 pub texture: &'a T,
2283 pub range: wgt::ImageSubresourceRange,
2284 pub usage: StateTransition<wgt::TextureUses>,
2285}
2286
2287#[derive(Clone, Copy, Debug)]
2288pub struct BufferCopy {
2289 pub src_offset: wgt::BufferAddress,
2290 pub dst_offset: wgt::BufferAddress,
2291 pub size: wgt::BufferSize,
2292}
2293
2294#[derive(Clone, Debug)]
2295pub struct TextureCopyBase {
2296 pub mip_level: u32,
2297 pub array_layer: u32,
2298 /// Origin within a texture.
2299 /// Note: for 1D and 2D textures, Z must be 0.
2300 pub origin: wgt::Origin3d,
2301 pub aspect: FormatAspects,
2302}
2303
2304#[derive(Clone, Copy, Debug)]
2305pub struct CopyExtent {
2306 pub width: u32,
2307 pub height: u32,
2308 pub depth: u32,
2309}
2310
2311#[derive(Clone, Debug)]
2312pub struct TextureCopy {
2313 pub src_base: TextureCopyBase,
2314 pub dst_base: TextureCopyBase,
2315 pub size: CopyExtent,
2316}
2317
2318#[derive(Clone, Debug)]
2319pub struct BufferTextureCopy {
2320 pub buffer_layout: wgt::TexelCopyBufferLayout,
2321 pub texture_base: TextureCopyBase,
2322 pub size: CopyExtent,
2323}
2324
2325#[derive(Clone, Debug)]
2326pub struct Attachment<'a, T: DynTextureView + ?Sized> {
2327 pub view: &'a T,
2328 /// Contains either a single mutating usage as a target,
2329 /// or a valid combination of read-only usages.
2330 pub usage: wgt::TextureUses,
2331}
2332
2333#[derive(Clone, Debug)]
2334pub struct ColorAttachment<'a, T: DynTextureView + ?Sized> {
2335 pub target: Attachment<'a, T>,
2336 pub resolve_target: Option<Attachment<'a, T>>,
2337 pub ops: AttachmentOps,
2338 pub clear_value: wgt::Color,
2339}
2340
2341#[derive(Clone, Debug)]
2342pub struct DepthStencilAttachment<'a, T: DynTextureView + ?Sized> {
2343 pub target: Attachment<'a, T>,
2344 pub depth_ops: AttachmentOps,
2345 pub stencil_ops: AttachmentOps,
2346 pub clear_value: (f32, u32),
2347}
2348
2349#[derive(Clone, Debug)]
2350pub struct PassTimestampWrites<'a, Q: DynQuerySet + ?Sized> {
2351 pub query_set: &'a Q,
2352 pub beginning_of_pass_write_index: Option<u32>,
2353 pub end_of_pass_write_index: Option<u32>,
2354}
2355
2356#[derive(Clone, Debug)]
2357pub struct RenderPassDescriptor<'a, Q: DynQuerySet + ?Sized, T: DynTextureView + ?Sized> {
2358 pub label: Label<'a>,
2359 pub extent: wgt::Extent3d,
2360 pub sample_count: u32,
2361 pub color_attachments: &'a [Option<ColorAttachment<'a, T>>],
2362 pub depth_stencil_attachment: Option<DepthStencilAttachment<'a, T>>,
2363 pub multiview: Option<NonZeroU32>,
2364 pub timestamp_writes: Option<PassTimestampWrites<'a, Q>>,
2365 pub occlusion_query_set: Option<&'a Q>,
2366}
2367
2368#[derive(Clone, Debug)]
2369pub struct ComputePassDescriptor<'a, Q: DynQuerySet + ?Sized> {
2370 pub label: Label<'a>,
2371 pub timestamp_writes: Option<PassTimestampWrites<'a, Q>>,
2372}
2373
2374/// Stores the text of any validation errors that have occurred since
2375/// the last call to `get_and_reset`.
2376///
2377/// Each value is a validation error and a message associated with it,
2378 /// or `None` if the error has no message from the API.
2379///
2380/// This is used for internal wgpu testing only and _must not_ be used
2381/// as a way to check for errors.
2382///
2383/// This works as a static because `cargo nextest` runs all of our
2384/// tests in separate processes, so each test gets its own canary.
2385///
2386/// This prevents the issue of one validation error terminating the
2387/// entire process.
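///
/// A test might drain it like this (a sketch, for internal test code only):
///
/// ```ignore
/// let errors = VALIDATION_CANARY.get_and_reset();
/// assert!(errors.is_empty(), "unexpected validation errors: {errors:?}");
/// ```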
2388pub static VALIDATION_CANARY: ValidationCanary = ValidationCanary {
2389 inner: Mutex::new(Vec::new()),
2390};
2391
2392/// Flag for internal testing.
2393pub struct ValidationCanary {
2394 inner: Mutex<Vec<String>>,
2395}
2396
2397impl ValidationCanary {
2398 #[allow(dead_code)] // in some configurations this function is dead
2399 fn add(&self, msg: String) {
2400 self.inner.lock().push(msg);
2401 }
2402
2403 /// Returns any API validation errors that have occurred in this process
2404 /// since the last call to this function.
2405 pub fn get_and_reset(&self) -> Vec<String> {
2406 self.inner.lock().drain(..).collect()
2407 }
2408}
2409
2410#[test]
2411fn test_default_limits() {
2412 let limits = wgt::Limits::default();
2413 assert!(limits.max_bind_groups <= MAX_BIND_GROUPS as u32);
2414}
2415
2416#[derive(Clone, Debug)]
2417pub struct AccelerationStructureDescriptor<'a> {
2418 pub label: Label<'a>,
2419 pub size: wgt::BufferAddress,
2420 pub format: AccelerationStructureFormat,
2421 pub allow_compaction: bool,
2422}
2423
2424#[derive(Debug, Clone, Copy, Eq, PartialEq)]
2425pub enum AccelerationStructureFormat {
2426 TopLevel,
2427 BottomLevel,
2428}
2429
2430#[derive(Debug, Clone, Copy, Eq, PartialEq)]
2431pub enum AccelerationStructureBuildMode {
2432 Build,
2433 Update,
2434}
2435
2436 /// Information about the required sizes for a corresponding entries struct (plus flags).
2437#[derive(Copy, Clone, Debug, Default, Eq, PartialEq)]
2438pub struct AccelerationStructureBuildSizes {
2439 pub acceleration_structure_size: wgt::BufferAddress,
2440 pub update_scratch_size: wgt::BufferAddress,
2441 pub build_scratch_size: wgt::BufferAddress,
2442}
2443
2444 /// Updates use `source_acceleration_structure` if present; otherwise the update is performed in place.
2445 /// For updates, only the data is allowed to change (not the metadata or sizes).
2446#[derive(Clone, Debug)]
2447pub struct BuildAccelerationStructureDescriptor<
2448 'a,
2449 B: DynBuffer + ?Sized,
2450 A: DynAccelerationStructure + ?Sized,
2451> {
2452 pub entries: &'a AccelerationStructureEntries<'a, B>,
2453 pub mode: AccelerationStructureBuildMode,
2454 pub flags: AccelerationStructureBuildFlags,
2455 pub source_acceleration_structure: Option<&'a A>,
2456 pub destination_acceleration_structure: &'a A,
2457 pub scratch_buffer: &'a B,
2458 pub scratch_buffer_offset: wgt::BufferAddress,
2459}
2460
2461/// - All buffers, buffer addresses and offsets will be ignored.
2462/// - The build mode will be ignored.
2463 /// - Reducing the number of instances, triangle groups, or AABB groups (or the number of triangles/AABBs in the corresponding groups)
2464 ///   may result in reduced size requirements.
2465/// - Any other change may result in a bigger or smaller size requirement.
2466#[derive(Clone, Debug)]
2467pub struct GetAccelerationStructureBuildSizesDescriptor<'a, B: DynBuffer + ?Sized> {
2468 pub entries: &'a AccelerationStructureEntries<'a, B>,
2469 pub flags: AccelerationStructureBuildFlags,
2470}
2471
2472 /// Entries for a single descriptor:
2473 /// * `Instances` - Multiple instances for a top level acceleration structure
2474 /// * `Triangles` - Multiple triangle meshes for a bottom level acceleration structure
2475 /// * `AABBs` - List of lists of axis-aligned bounding boxes for a bottom level acceleration structure
2476#[derive(Debug)]
2477pub enum AccelerationStructureEntries<'a, B: DynBuffer + ?Sized> {
2478 Instances(AccelerationStructureInstances<'a, B>),
2479 Triangles(Vec<AccelerationStructureTriangles<'a, B>>),
2480 AABBs(Vec<AccelerationStructureAABBs<'a, B>>),
2481}
2482
2483/// * `first_vertex` - offset in the vertex buffer (as number of vertices)
2484/// * `indices` - optional index buffer with attributes
2485/// * `transform` - optional transform
2486#[derive(Clone, Debug)]
2487pub struct AccelerationStructureTriangles<'a, B: DynBuffer + ?Sized> {
2488 pub vertex_buffer: Option<&'a B>,
2489 pub vertex_format: wgt::VertexFormat,
2490 pub first_vertex: u32,
2491 pub vertex_count: u32,
2492 pub vertex_stride: wgt::BufferAddress,
2493 pub indices: Option<AccelerationStructureTriangleIndices<'a, B>>,
2494 pub transform: Option<AccelerationStructureTriangleTransform<'a, B>>,
2495 pub flags: AccelerationStructureGeometryFlags,
2496}
2497
2498/// * `offset` - offset in bytes
2499#[derive(Clone, Debug)]
2500pub struct AccelerationStructureAABBs<'a, B: DynBuffer + ?Sized> {
2501 pub buffer: Option<&'a B>,
2502 pub offset: u32,
2503 pub count: u32,
2504 pub stride: wgt::BufferAddress,
2505 pub flags: AccelerationStructureGeometryFlags,
2506}
2507
2508pub struct AccelerationStructureCopy {
2509 pub copy_flags: wgt::AccelerationStructureCopy,
2510 pub type_flags: wgt::AccelerationStructureType,
2511}
2512
2513/// * `offset` - offset in bytes
2514#[derive(Clone, Debug)]
2515pub struct AccelerationStructureInstances<'a, B: DynBuffer + ?Sized> {
2516 pub buffer: Option<&'a B>,
2517 pub offset: u32,
2518 pub count: u32,
2519}
2520
2521/// * `offset` - offset in bytes
2522#[derive(Clone, Debug)]
2523pub struct AccelerationStructureTriangleIndices<'a, B: DynBuffer + ?Sized> {
2524 pub format: wgt::IndexFormat,
2525 pub buffer: Option<&'a B>,
2526 pub offset: u32,
2527 pub count: u32,
2528}
2529
2530/// * `offset` - offset in bytes
2531#[derive(Clone, Debug)]
2532pub struct AccelerationStructureTriangleTransform<'a, B: DynBuffer + ?Sized> {
2533 pub buffer: &'a B,
2534 pub offset: u32,
2535}
2536
2537pub use wgt::AccelerationStructureFlags as AccelerationStructureBuildFlags;
2538pub use wgt::AccelerationStructureGeometryFlags;
2539
2540bitflags::bitflags! {
2541 #[derive(Clone, Copy, Debug, PartialEq, Eq, Hash)]
2542 pub struct AccelerationStructureUses: u8 {
2543 // For blas used as input for tlas
2544 const BUILD_INPUT = 1 << 0;
2545 // Target for acceleration structure build
2546 const BUILD_OUTPUT = 1 << 1;
2547 // Tlas used in a shader
2548 const SHADER_INPUT = 1 << 2;
2549 // Blas used to query compacted size
2550 const QUERY_INPUT = 1 << 3;
2551 // BLAS used as a src for a copy operation
2552 const COPY_SRC = 1 << 4;
2553 // BLAS used as a dst for a copy operation
2554 const COPY_DST = 1 << 5;
2555 }
2556}
2557
2558#[derive(Debug, Clone)]
2559pub struct AccelerationStructureBarrier {
2560 pub usage: StateTransition<AccelerationStructureUses>,
2561}
2562
2563#[derive(Debug, Copy, Clone)]
2564pub struct TlasInstance {
2565 pub transform: [f32; 12],
2566 pub custom_data: u32,
2567 pub mask: u8,
2568 pub blas_address: u64,
2569}