wgpu_hal/lib.rs
1//! A cross-platform unsafe graphics abstraction.
2//!
3//! This crate defines a set of traits abstracting over modern graphics APIs,
4//! with implementations ("backends") for Vulkan, Metal, Direct3D, and GL.
5//!
6//! `wgpu-hal` is a spiritual successor to
7//! [gfx-hal](https://github.com/gfx-rs/gfx), but with reduced scope, and
8//! oriented towards WebGPU implementation goals. It has no overhead for
9//! validation or tracking, and the API translation overhead is kept to the bare
10//! minimum by the design of WebGPU. This API can be used for resource-demanding
11//! applications and engines.
12//!
13//! The `wgpu-hal` crate's main design choices:
14//!
15//! - Our traits are meant to be *portable*: proper use
16//! should get equivalent results regardless of the backend.
17//!
18//! - Our traits' contracts are *unsafe*: implementations perform minimal
19//! validation, if any, and incorrect use will often cause undefined behavior.
20//! This allows us to minimize the overhead we impose over the underlying
21//! graphics system. If you need safety, the [`wgpu-core`] crate provides a
22//! safe API for driving `wgpu-hal`, implementing all necessary validation,
23//! resource state tracking, and so on. (Note that `wgpu-core` is designed for
24//! use via FFI; the [`wgpu`] crate provides more idiomatic Rust bindings for
25//! `wgpu-core`.) Or, you can do your own validation.
26//!
27//! - In the same vein, returned errors *only cover cases the user can't
28//! anticipate*, like running out of memory or losing the device. Any errors
29//! that the user could reasonably anticipate are their responsibility to
30//! avoid. For example, `wgpu-hal` returns no error for mapping a buffer that's
31//! not mappable: as the buffer creator, the user should already know if they
32//! can map it.
33//!
34//! - We use *static dispatch*. The traits are not
35//! generally object-safe. You must select a specific backend type
36//! like [`vulkan::Api`] or [`metal::Api`], and then use that
37//! according to the main traits, or call backend-specific methods.
38//!
39//! - We use *idiomatic Rust parameter passing*,
40//! taking objects by reference, returning them by value, and so on,
41//! unlike `wgpu-core`, which refers to objects by ID.
42//!
43//! - We map buffer contents *persistently*. This means that the buffer can
44//! remain mapped on the CPU while the GPU reads or writes to it. You must
45//! explicitly indicate when data might need to be transferred between CPU and
46//! GPU, if [`Device::map_buffer`] indicates that this is necessary.
47//!
48//! - You must record *explicit barriers* between different usages of a
49//! resource. For example, if a buffer is written to by a compute
//! shader, and then used as an index buffer to a draw call, you
51//! must use [`CommandEncoder::transition_buffers`] between those two
52//! operations.
53//!
//! - Pipeline layouts are *explicitly specified* when setting bind groups.
//! Incompatible layouts disturb (clear) groups bound at higher indices; see
//! [`CommandEncoder::set_bind_group`] for the exact rules.
56//!
57//! - The API *accepts collections as iterators*, to avoid forcing the user to
58//! store data in particular containers. The implementation doesn't guarantee
59//! that any of the iterators are drained, unless stated otherwise by the
60//! function documentation. For this reason, we recommend that iterators don't
61//! do any mutating work.
62//!
63//! Unfortunately, `wgpu-hal`'s safety requirements are not fully documented.
64//! Ideally, all trait methods would have doc comments setting out the
65//! requirements users must meet to ensure correct and portable behavior. If you
66//! are aware of a specific requirement that a backend imposes that is not
67//! ensured by the traits' documented rules, please file an issue. Or, if you are
68//! a capable technical writer, please file a pull request!
69//!
70//! [`wgpu-core`]: https://crates.io/crates/wgpu-core
71//! [`wgpu`]: https://crates.io/crates/wgpu
72//! [`vulkan::Api`]: vulkan/struct.Api.html
73//! [`metal::Api`]: metal/struct.Api.html
74//!
75//! ## Primary backends
76//!
77//! The `wgpu-hal` crate has full-featured backends implemented on the following
78//! platform graphics APIs:
79//!
80//! - Vulkan, available on Linux, Android, and Windows, using the [`ash`] crate's
81//! Vulkan bindings. It's also available on macOS, if you install [MoltenVK].
82//!
83//! - Metal on macOS, using the [`metal`] crate's bindings.
84//!
85//! - Direct3D 12 on Windows, using the [`windows`] crate's bindings.
86//!
87//! [`ash`]: https://crates.io/crates/ash
88//! [MoltenVK]: https://github.com/KhronosGroup/MoltenVK
89//! [`metal`]: https://crates.io/crates/metal
90//! [`windows`]: https://crates.io/crates/windows
91//!
92//! ## Secondary backends
93//!
94//! The `wgpu-hal` crate has a partial implementation based on the following
95//! platform graphics API:
96//!
97//! - The GL backend is available anywhere OpenGL, OpenGL ES, or WebGL are
98//! available. See the [`gles`] module documentation for details.
99//!
100//! [`gles`]: gles/index.html
101//!
102//! You can see what capabilities an adapter is missing by checking the
103//! [`DownlevelCapabilities`][tdc] in [`ExposedAdapter::capabilities`], available
104//! from [`Instance::enumerate_adapters`].
105//!
106//! The API is generally designed to fit the primary backends better than the
107//! secondary backends, so the latter may impose more overhead.
108//!
109//! [tdc]: wgt::DownlevelCapabilities
110//!
111//! ## Traits
112//!
113//! The `wgpu-hal` crate defines a handful of traits that together
114//! represent a cross-platform abstraction for modern GPU APIs.
115//!
116//! - The [`Api`] trait represents a `wgpu-hal` backend. It has no methods of its
117//! own, only a collection of associated types.
118//!
119//! - [`Api::Instance`] implements the [`Instance`] trait. [`Instance::init`]
120//! creates an instance value, which you can use to enumerate the adapters
121//! available on the system. For example, [`vulkan::Api::Instance::init`][Ii]
122//! returns an instance that can enumerate the Vulkan physical devices on your
123//! system.
124//!
125//! - [`Api::Adapter`] implements the [`Adapter`] trait, representing a
126//! particular device from a particular backend. For example, a Vulkan instance
127//! might have a Lavapipe software adapter and a GPU-based adapter.
128//!
129//! - [`Api::Device`] implements the [`Device`] trait, representing an active
130//! link to a device. You get a device value by calling [`Adapter::open`], and
131//! then use it to create buffers, textures, shader modules, and so on.
132//!
133//! - [`Api::Queue`] implements the [`Queue`] trait, which you use to submit
134//! command buffers to a given device.
135//!
136//! - [`Api::CommandEncoder`] implements the [`CommandEncoder`] trait, which you
137//! use to build buffers of commands to submit to a queue. This has all the
138//! methods for drawing and running compute shaders, which is presumably what
139//! you're here for.
140//!
141//! - [`Api::Surface`] implements the [`Surface`] trait, which represents a
142//! swapchain for presenting images on the screen, via interaction with the
143//! system's window manager.
144//!
145//! The [`Api`] trait has various other associated types like [`Api::Buffer`] and
146//! [`Api::Texture`] that represent resources the rest of the interface can
147//! operate on, but these generally do not have their own traits.
148//!
149//! [Ii]: Instance::init
150//!
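//! For a sense of how these pieces fit together, here is an illustrative
//! sketch of opening a device and queue on the Vulkan backend. It is not
//! compiled as a doctest, and the `InstanceDescriptor` contents as well as the
//! `ExposedAdapter` field accesses shown here (`.adapter`, `.capabilities`)
//! are assumptions to be checked against the definitions later in this crate.
//!
//! ```ignore
//! use wgpu_hal as hal;
//! use wgpu_types as wgt;
//! use hal::{Adapter as _, Instance as _};
//!
//! unsafe fn open_first_adapter(
//!     instance_desc: &hal::InstanceDescriptor,
//! ) -> Option<hal::OpenDevice<hal::api::Vulkan>> {
//!     let instance = <hal::api::Vulkan as hal::Api>::Instance::init(instance_desc).ok()?;
//!     // `exposed.capabilities` describes what each adapter can and cannot do.
//!     let exposed = instance.enumerate_adapters(None).pop()?;
//!     exposed
//!         .adapter
//!         .open(
//!             wgt::Features::empty(),
//!             &wgt::Limits::default(),
//!             &wgt::MemoryHints::Performance,
//!         )
//!         .ok()
//! }
//! ```
//!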
151//! ## Validation is the calling code's responsibility, not `wgpu-hal`'s
152//!
153//! As much as possible, `wgpu-hal` traits place the burden of validation,
154//! resource tracking, and state tracking on the caller, not on the trait
155//! implementations themselves. Anything which can reasonably be handled in
156//! backend-independent code should be. A `wgpu_hal` backend's sole obligation is
157//! to provide portable behavior, and report conditions that the calling code
158//! can't reasonably anticipate, like device loss or running out of memory.
159//!
160//! The `wgpu` crate collection is intended for use in security-sensitive
161//! applications, like web browsers, where the API is available to untrusted
162//! code. This means that `wgpu-core`'s validation is not simply a service to
163//! developers, to be provided opportunistically when the performance costs are
164//! acceptable and the necessary data is ready at hand. Rather, `wgpu-core`'s
165//! validation must be exhaustive, to ensure that even malicious content cannot
166//! provoke and exploit undefined behavior in the platform's graphics API.
167//!
168//! Because graphics APIs' requirements are complex, the only practical way for
169//! `wgpu` to provide exhaustive validation is to comprehensively track the
170//! lifetime and state of all the resources in the system. Implementing this
171//! separately for each backend is infeasible; effort would be better spent
172//! making the cross-platform validation in `wgpu-core` legible and trustworthy.
173//! Fortunately, the requirements are largely similar across the various
174//! platforms, so cross-platform validation is practical.
175//!
176//! Some backends have specific requirements that aren't practical to foist off
177//! on the `wgpu-hal` user. For example, properly managing macOS Objective-C or
178//! Microsoft COM reference counts is best handled by using appropriate pointer
179//! types within the backend.
180//!
181//! A desire for "defense in depth" may suggest performing additional validation
182//! in `wgpu-hal` when the opportunity arises, but this must be done with
183//! caution. Even experienced contributors infer the expectations their changes
184//! must meet by considering not just requirements made explicit in types, tests,
185//! assertions, and comments, but also those implicit in the surrounding code.
186//! When one sees validation or state-tracking code in `wgpu-hal`, it is tempting
187//! to conclude, "Oh, `wgpu-hal` checks for this, so `wgpu-core` needn't worry
188//! about it - that would be redundant!" The responsibility for exhaustive
189//! validation always rests with `wgpu-core`, regardless of what may or may not
190//! be checked in `wgpu-hal`.
191//!
192//! To this end, any "defense in depth" validation that does appear in `wgpu-hal`
193//! for requirements that `wgpu-core` should have enforced should report failure
194//! via the `unreachable!` macro, because problems detected at this stage always
195//! indicate a bug in `wgpu-core`.
196//!
197//! ## Debugging
198//!
199//! Most of the information on the wiki [Debugging wgpu Applications][wiki-debug]
200//! page still applies to this API, with the exception of API tracing/replay
201//! functionality, which is only available in `wgpu-core`.
202//!
203//! [wiki-debug]: https://github.com/gfx-rs/wgpu/wiki/Debugging-wgpu-Applications
204
205#![cfg_attr(docsrs, feature(doc_cfg, doc_auto_cfg))]
206#![allow(
207 // this happens on the GL backend, where it is both thread safe and non-thread safe in the same code.
208 clippy::arc_with_non_send_sync,
209 // We don't use syntax sugar where it's not necessary.
210 clippy::match_like_matches_macro,
211 // Redundant matching is more explicit.
212 clippy::redundant_pattern_matching,
213 // Explicit lifetimes are often easier to reason about.
214 clippy::needless_lifetimes,
215 // No need for defaults in the internal types.
216 clippy::new_without_default,
217 // Matches are good and extendable, no need to make an exception here.
218 clippy::single_match,
219 // Push commands are more regular than macros.
220 clippy::vec_init_then_push,
221 // We unsafe impl `Send` for a reason.
222 clippy::non_send_fields_in_send_ty,
223 // TODO!
224 clippy::missing_safety_doc,
225 // It gets in the way a lot and does not prevent bugs in practice.
226 clippy::pattern_type_mismatch,
227)]
228#![warn(
229 clippy::ptr_as_ptr,
230 trivial_casts,
231 trivial_numeric_casts,
232 unsafe_op_in_unsafe_fn,
233 unused_extern_crates,
234 unused_qualifications
235)]
236
237/// DirectX12 API internals.
238#[cfg(dx12)]
239pub mod dx12;
240/// A dummy API implementation.
241pub mod empty;
242/// GLES API internals.
243#[cfg(gles)]
244pub mod gles;
245/// Metal API internals.
246#[cfg(metal)]
247pub mod metal;
248/// Vulkan API internals.
249#[cfg(vulkan)]
250pub mod vulkan;
251
252pub mod auxil;
253pub mod api {
254 #[cfg(dx12)]
255 pub use super::dx12::Api as Dx12;
256 pub use super::empty::Api as Empty;
257 #[cfg(gles)]
258 pub use super::gles::Api as Gles;
259 #[cfg(metal)]
260 pub use super::metal::Api as Metal;
261 #[cfg(vulkan)]
262 pub use super::vulkan::Api as Vulkan;
263}
264
265mod dynamic;
266
267pub(crate) use dynamic::impl_dyn_resource;
268pub use dynamic::{
269 DynAccelerationStructure, DynAcquiredSurfaceTexture, DynAdapter, DynBindGroup,
270 DynBindGroupLayout, DynBuffer, DynCommandBuffer, DynCommandEncoder, DynComputePipeline,
271 DynDevice, DynExposedAdapter, DynFence, DynInstance, DynOpenDevice, DynPipelineCache,
272 DynPipelineLayout, DynQuerySet, DynQueue, DynRenderPipeline, DynResource, DynSampler,
273 DynShaderModule, DynSurface, DynSurfaceTexture, DynTexture, DynTextureView,
274};
275
276use std::{
277 borrow::{Borrow, Cow},
278 fmt,
279 num::NonZeroU32,
280 ops::{Range, RangeInclusive},
281 ptr::NonNull,
282 sync::Arc,
283};
284
285use bitflags::bitflags;
286use parking_lot::Mutex;
287use thiserror::Error;
288use wgt::WasmNotSendSync;
289
290// - Vertex + Fragment
291// - Compute
292pub const MAX_CONCURRENT_SHADER_STAGES: usize = 2;
293pub const MAX_ANISOTROPY: u8 = 16;
294pub const MAX_BIND_GROUPS: usize = 8;
295pub const MAX_VERTEX_BUFFERS: usize = 16;
296pub const MAX_COLOR_ATTACHMENTS: usize = 8;
297pub const MAX_MIP_LEVELS: u32 = 16;
298/// Size of a single occlusion/timestamp query, when copied into a buffer, in bytes.
299pub const QUERY_SIZE: wgt::BufferAddress = 8;
300
301pub type Label<'a> = Option<&'a str>;
302pub type MemoryRange = Range<wgt::BufferAddress>;
303pub type FenceValue = u64;
304pub type AtomicFenceValue = std::sync::atomic::AtomicU64;
305
306/// A callback to signal that wgpu is no longer using a resource.
307#[cfg(any(gles, vulkan))]
308pub type DropCallback = Box<dyn FnOnce() + Send + Sync + 'static>;
309
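/// Guard that calls a [`DropCallback`] when dropped, signaling that wgpu is no
/// longer using the resource the callback was registered for.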
310#[cfg(any(gles, vulkan))]
311pub struct DropGuard {
312 callback: Option<DropCallback>,
313}
314
315#[cfg(all(any(gles, vulkan), any(native, Emscripten)))]
316impl DropGuard {
317 fn from_option(callback: Option<DropCallback>) -> Option<Self> {
318 callback.map(|callback| Self {
319 callback: Some(callback),
320 })
321 }
322}
323
324#[cfg(any(gles, vulkan))]
325impl Drop for DropGuard {
326 fn drop(&mut self) {
327 if let Some(cb) = self.callback.take() {
328 (cb)();
329 }
330 }
331}
332
333#[cfg(any(gles, vulkan))]
334impl fmt::Debug for DropGuard {
335 fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
336 f.debug_struct("DropGuard").finish()
337 }
338}
339
340#[derive(Clone, Debug, PartialEq, Eq, Error)]
341pub enum DeviceError {
342 #[error("Out of memory")]
343 OutOfMemory,
344 #[error("Device is lost")]
345 Lost,
346 #[error("Creation of a resource failed for a reason other than running out of memory.")]
347 ResourceCreationFailed,
348 #[error("Unexpected error variant (driver implementation is at fault)")]
349 Unexpected,
350}
351
352#[allow(dead_code)] // may be unused on some platforms
353#[cold]
354fn hal_usage_error<T: fmt::Display>(txt: T) -> ! {
355 panic!("wgpu-hal invariant was violated (usage error): {txt}")
356}
357
358#[allow(dead_code)] // may be unused on some platforms
359#[cold]
360fn hal_internal_error<T: fmt::Display>(txt: T) -> ! {
361 panic!("wgpu-hal ran into a preventable internal error: {txt}")
362}
363
364#[derive(Clone, Debug, Eq, PartialEq, Error)]
365pub enum ShaderError {
366 #[error("Compilation failed: {0:?}")]
367 Compilation(String),
368 #[error(transparent)]
369 Device(#[from] DeviceError),
370}
371
372#[derive(Clone, Debug, Eq, PartialEq, Error)]
373pub enum PipelineError {
374 #[error("Linkage failed for stage {0:?}: {1}")]
375 Linkage(wgt::ShaderStages, String),
376 #[error("Entry point for stage {0:?} is invalid")]
377 EntryPoint(naga::ShaderStage),
378 #[error(transparent)]
379 Device(#[from] DeviceError),
380 #[error("Pipeline constant error for stage {0:?}: {1}")]
381 PipelineConstants(wgt::ShaderStages, String),
382}
383
384#[derive(Clone, Debug, Eq, PartialEq, Error)]
385pub enum PipelineCacheError {
386 #[error(transparent)]
387 Device(#[from] DeviceError),
388}
389
390#[derive(Clone, Debug, Eq, PartialEq, Error)]
391pub enum SurfaceError {
392 #[error("Surface is lost")]
393 Lost,
394 #[error("Surface is outdated, needs to be re-created")]
395 Outdated,
396 #[error(transparent)]
397 Device(#[from] DeviceError),
398 #[error("Other reason: {0}")]
399 Other(&'static str),
400}
401
402/// Error occurring while trying to create an instance, or create a surface from an instance;
403/// typically relating to the state of the underlying graphics API or hardware.
404#[derive(Clone, Debug, Error)]
405#[error("{message}")]
406pub struct InstanceError {
407 /// These errors are very platform specific, so do not attempt to encode them as an enum.
408 ///
409 /// This message should describe the problem in sufficient detail to be useful for a
410 /// user-to-developer “why won't this work on my machine” bug report, and otherwise follow
411 /// <https://rust-lang.github.io/api-guidelines/interoperability.html#error-types-are-meaningful-and-well-behaved-c-good-err>.
412 message: String,
413
414 /// Underlying error value, if any is available.
415 #[source]
416 source: Option<Arc<dyn std::error::Error + Send + Sync + 'static>>,
417}
418
419impl InstanceError {
420 #[allow(dead_code)] // may be unused on some platforms
421 pub(crate) fn new(message: String) -> Self {
422 Self {
423 message,
424 source: None,
425 }
426 }
427 #[allow(dead_code)] // may be unused on some platforms
428 pub(crate) fn with_source(
429 message: String,
430 source: impl std::error::Error + Send + Sync + 'static,
431 ) -> Self {
432 Self {
433 message,
434 source: Some(Arc::new(source)),
435 }
436 }
437}
438
439pub trait Api: Clone + fmt::Debug + Sized {
440 type Instance: DynInstance + Instance<A = Self>;
441 type Surface: DynSurface + Surface<A = Self>;
442 type Adapter: DynAdapter + Adapter<A = Self>;
443 type Device: DynDevice + Device<A = Self>;
444
445 type Queue: DynQueue + Queue<A = Self>;
446 type CommandEncoder: DynCommandEncoder + CommandEncoder<A = Self>;
447
448 /// This API's command buffer type.
449 ///
450 /// The only thing you can do with `CommandBuffer`s is build them
451 /// with a [`CommandEncoder`] and then pass them to
452 /// [`Queue::submit`] for execution, or destroy them by passing
453 /// them to [`CommandEncoder::reset_all`].
454 ///
455 /// [`CommandEncoder`]: Api::CommandEncoder
456 type CommandBuffer: DynCommandBuffer;
457
458 type Buffer: DynBuffer;
459 type Texture: DynTexture;
460 type SurfaceTexture: DynSurfaceTexture + Borrow<Self::Texture>;
461 type TextureView: DynTextureView;
462 type Sampler: DynSampler;
463 type QuerySet: DynQuerySet;
464
465 /// A value you can block on to wait for something to finish.
466 ///
467 /// A `Fence` holds a monotonically increasing [`FenceValue`]. You can call
468 /// [`Device::wait`] to block until a fence reaches or passes a value you
469 /// choose. [`Queue::submit`] can take a `Fence` and a [`FenceValue`] to
470 /// store in it when the submitted work is complete.
471 ///
472 /// Attempting to set a fence to a value less than its current value has no
473 /// effect.
474 ///
475 /// Waiting on a fence returns as soon as the fence reaches *or passes* the
476 /// requested value. This implies that, in order to reliably determine when
477 /// an operation has completed, operations must finish in order of
478 /// increasing fence values: if a higher-valued operation were to finish
479 /// before a lower-valued operation, then waiting for the fence to reach the
480 /// lower value could return before the lower-valued operation has actually
481 /// finished.
482 type Fence: DynFence;
483
484 type BindGroupLayout: DynBindGroupLayout;
485 type BindGroup: DynBindGroup;
486 type PipelineLayout: DynPipelineLayout;
487 type ShaderModule: DynShaderModule;
488 type RenderPipeline: DynRenderPipeline;
489 type ComputePipeline: DynComputePipeline;
490 type PipelineCache: DynPipelineCache;
491
492 type AccelerationStructure: DynAccelerationStructure + 'static;
493}
494
495pub trait Instance: Sized + WasmNotSendSync {
496 type A: Api;
497
498 unsafe fn init(desc: &InstanceDescriptor) -> Result<Self, InstanceError>;
499 unsafe fn create_surface(
500 &self,
501 display_handle: raw_window_handle::RawDisplayHandle,
502 window_handle: raw_window_handle::RawWindowHandle,
503 ) -> Result<<Self::A as Api>::Surface, InstanceError>;
504 /// `surface_hint` is only used by the GLES backend targeting WebGL2
505 unsafe fn enumerate_adapters(
506 &self,
507 surface_hint: Option<&<Self::A as Api>::Surface>,
508 ) -> Vec<ExposedAdapter<Self::A>>;
509}
510
511pub trait Surface: WasmNotSendSync {
512 type A: Api;
513
514 /// Configure `self` to use `device`.
515 ///
516 /// # Safety
517 ///
518 /// - All GPU work using `self` must have been completed.
519 /// - All [`AcquiredSurfaceTexture`]s must have been destroyed.
520 /// - All [`Api::TextureView`]s derived from the [`AcquiredSurfaceTexture`]s must have been destroyed.
521 /// - The surface `self` must not currently be configured to use any other [`Device`].
522 unsafe fn configure(
523 &self,
524 device: &<Self::A as Api>::Device,
525 config: &SurfaceConfiguration,
526 ) -> Result<(), SurfaceError>;
527
528 /// Unconfigure `self` on `device`.
529 ///
530 /// # Safety
531 ///
    /// - All GPU work that uses `self` must have been completed.
533 /// - All [`AcquiredSurfaceTexture`]s must have been destroyed.
534 /// - All [`Api::TextureView`]s derived from the [`AcquiredSurfaceTexture`]s must have been destroyed.
535 /// - The surface `self` must have been configured on `device`.
536 unsafe fn unconfigure(&self, device: &<Self::A as Api>::Device);
537
538 /// Return the next texture to be presented by `self`, for the caller to draw on.
539 ///
540 /// On success, return an [`AcquiredSurfaceTexture`] representing the
541 /// texture into which the caller should draw the image to be displayed on
542 /// `self`.
543 ///
544 /// If `timeout` elapses before `self` has a texture ready to be acquired,
545 /// return `Ok(None)`. If `timeout` is `None`, wait indefinitely, with no
546 /// timeout.
547 ///
548 /// # Using an [`AcquiredSurfaceTexture`]
549 ///
550 /// On success, this function returns an [`AcquiredSurfaceTexture`] whose
551 /// [`texture`] field is a [`SurfaceTexture`] from which the caller can
552 /// [`borrow`] a [`Texture`] to draw on. The [`AcquiredSurfaceTexture`] also
553 /// carries some metadata about that [`SurfaceTexture`].
554 ///
555 /// All calls to [`Queue::submit`] that draw on that [`Texture`] must also
556 /// include the [`SurfaceTexture`] in the `surface_textures` argument.
557 ///
558 /// When you are done drawing on the texture, you can display it on `self`
559 /// by passing the [`SurfaceTexture`] and `self` to [`Queue::present`].
560 ///
561 /// If you do not wish to display the texture, you must pass the
562 /// [`SurfaceTexture`] to [`self.discard_texture`], so that it can be reused
563 /// by future acquisitions.
564 ///
565 /// # Portability
566 ///
567 /// Some backends can't support a timeout when acquiring a texture. On these
568 /// backends, `timeout` is ignored.
569 ///
570 /// # Safety
571 ///
572 /// - The surface `self` must currently be configured on some [`Device`].
573 ///
574 /// - The `fence` argument must be the same [`Fence`] passed to all calls to
575 /// [`Queue::submit`] that used [`Texture`]s acquired from this surface.
576 ///
577 /// - You may only have one texture acquired from `self` at a time. When
578 /// `acquire_texture` returns `Ok(Some(ast))`, you must pass the returned
579 /// [`SurfaceTexture`] `ast.texture` to either [`Queue::present`] or
580 /// [`Surface::discard_texture`] before calling `acquire_texture` again.
581 ///
582 /// [`texture`]: AcquiredSurfaceTexture::texture
583 /// [`SurfaceTexture`]: Api::SurfaceTexture
584 /// [`borrow`]: std::borrow::Borrow::borrow
585 /// [`Texture`]: Api::Texture
586 /// [`Fence`]: Api::Fence
587 /// [`self.discard_texture`]: Surface::discard_texture
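    ///
    /// # Example
    ///
    /// An illustrative sketch of the acquire/present cycle; this is not
    /// compiled as a doctest, recording and submission are elided, and the
    /// `hal` alias stands for this crate:
    ///
    /// ```ignore
    /// use hal::{Queue as _, Surface as _};
    ///
    /// unsafe fn frame<A: hal::Api>(
    ///     surface: &A::Surface,
    ///     queue: &A::Queue,
    ///     fence: &A::Fence,
    /// ) -> Result<(), hal::SurfaceError> {
    ///     let timeout = Some(std::time::Duration::from_millis(100));
    ///     match surface.acquire_texture(timeout, fence)? {
    ///         Some(ast) => {
    ///             // Record and submit work drawing on the texture borrowed from
    ///             // `ast.texture`, listing `&ast.texture` in `Queue::submit`'s
    ///             // `surface_textures` argument, then present it. (To skip
    ///             // presenting, hand it to `Surface::discard_texture` instead.)
    ///             queue.present(surface, ast.texture)?;
    ///         }
    ///         None => {} // Timed out; try again next frame.
    ///     }
    ///     Ok(())
    /// }
    /// ```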
588 unsafe fn acquire_texture(
589 &self,
590 timeout: Option<std::time::Duration>,
591 fence: &<Self::A as Api>::Fence,
592 ) -> Result<Option<AcquiredSurfaceTexture<Self::A>>, SurfaceError>;
593
594 /// Relinquish an acquired texture without presenting it.
595 ///
596 /// After this call, the texture underlying [`SurfaceTexture`] may be
597 /// returned by subsequent calls to [`self.acquire_texture`].
598 ///
599 /// # Safety
600 ///
601 /// - The surface `self` must currently be configured on some [`Device`].
602 ///
603 /// - `texture` must be a [`SurfaceTexture`] returned by a call to
604 /// [`self.acquire_texture`] that has not yet been passed to
605 /// [`Queue::present`].
606 ///
607 /// [`SurfaceTexture`]: Api::SurfaceTexture
608 /// [`self.acquire_texture`]: Surface::acquire_texture
609 unsafe fn discard_texture(&self, texture: <Self::A as Api>::SurfaceTexture);
610}
611
612pub trait Adapter: WasmNotSendSync {
613 type A: Api;
614
615 unsafe fn open(
616 &self,
617 features: wgt::Features,
618 limits: &wgt::Limits,
619 memory_hints: &wgt::MemoryHints,
620 ) -> Result<OpenDevice<Self::A>, DeviceError>;
621
622 /// Return the set of supported capabilities for a texture format.
623 unsafe fn texture_format_capabilities(
624 &self,
625 format: wgt::TextureFormat,
626 ) -> TextureFormatCapabilities;
627
628 /// Returns the capabilities of working with a specified surface.
629 ///
630 /// `None` means presentation is not supported for it.
631 unsafe fn surface_capabilities(
632 &self,
633 surface: &<Self::A as Api>::Surface,
634 ) -> Option<SurfaceCapabilities>;
635
636 /// Creates a [`PresentationTimestamp`] using the adapter's WSI.
637 ///
638 /// [`PresentationTimestamp`]: wgt::PresentationTimestamp
639 unsafe fn get_presentation_timestamp(&self) -> wgt::PresentationTimestamp;
640}
641
642/// A connection to a GPU and a pool of resources to use with it.
643///
644/// A `wgpu-hal` `Device` represents an open connection to a specific graphics
645/// processor, controlled via the backend [`Device::A`]. A `Device` is mostly
646/// used for creating resources. Each `Device` has an associated [`Queue`] used
647/// for command submission.
648///
649/// On Vulkan a `Device` corresponds to a logical device ([`VkDevice`]). Other
650/// backends don't have an exact analog: for example, [`ID3D12Device`]s and
651/// [`MTLDevice`]s are owned by the backends' [`wgpu_hal::Adapter`]
652/// implementations, and shared by all [`wgpu_hal::Device`]s created from that
653/// `Adapter`.
654///
655/// A `Device`'s life cycle is generally:
656///
657/// 1) Obtain a `Device` and its associated [`Queue`] by calling
658/// [`Adapter::open`].
659///
660/// Alternatively, the backend-specific types that implement [`Adapter`] often
661/// have methods for creating a `wgpu-hal` `Device` from a platform-specific
662/// handle. For example, [`vulkan::Adapter::device_from_raw`] can create a
663/// [`vulkan::Device`] from an [`ash::Device`].
664///
665/// 1) Create resources to use on the device by calling methods like
666/// [`Device::create_texture`] or [`Device::create_shader_module`].
667///
668/// 1) Call [`Device::create_command_encoder`] to obtain a [`CommandEncoder`],
669/// which you can use to build [`CommandBuffer`]s holding commands to be
670/// executed on the GPU.
671///
672/// 1) Call [`Queue::submit`] on the `Device`'s associated [`Queue`] to submit
673/// [`CommandBuffer`]s for execution on the GPU. If needed, call
674/// [`Device::wait`] to wait for them to finish execution.
675///
676/// 1) Free resources with methods like [`Device::destroy_texture`] or
677/// [`Device::destroy_shader_module`].
678///
679/// 1) Drop the device.
680///
/// [`VkDevice`]: https://registry.khronos.org/vulkan/specs/1.3-extensions/html/vkspec.html#VkDevice
682/// [`ID3D12Device`]: https://learn.microsoft.com/en-us/windows/win32/api/d3d12/nn-d3d12-id3d12device
683/// [`MTLDevice`]: https://developer.apple.com/documentation/metal/mtldevice
684/// [`wgpu_hal::Adapter`]: Adapter
685/// [`wgpu_hal::Device`]: Device
686/// [`vulkan::Adapter::device_from_raw`]: vulkan/struct.Adapter.html#method.device_from_raw
687/// [`vulkan::Device`]: vulkan/struct.Device.html
688/// [`ash::Device`]: https://docs.rs/ash/latest/ash/struct.Device.html
689/// [`CommandBuffer`]: Api::CommandBuffer
690///
691/// # Safety
692///
693/// As with other `wgpu-hal` APIs, [validation] is the caller's
694/// responsibility. Here are the general requirements for all `Device`
695/// methods:
696///
697/// - Any resource passed to a `Device` method must have been created by that
698/// `Device`. For example, a [`Texture`] passed to [`Device::destroy_texture`] must
699/// have been created with the `Device` passed as `self`.
700///
701/// - Resources may not be destroyed if they are used by any submitted command
702/// buffers that have not yet finished execution.
703///
704/// [validation]: index.html#validation-is-the-calling-codes-responsibility-not-wgpu-hals
705/// [`Texture`]: Api::Texture
706pub trait Device: WasmNotSendSync {
707 type A: Api;
708
709 /// Creates a new buffer.
710 ///
711 /// The initial usage is `BufferUses::empty()`.
712 unsafe fn create_buffer(
713 &self,
714 desc: &BufferDescriptor,
715 ) -> Result<<Self::A as Api>::Buffer, DeviceError>;
716
717 /// Free `buffer` and any GPU resources it owns.
718 ///
719 /// Note that backends are allowed to allocate GPU memory for buffers from
720 /// allocation pools, and this call is permitted to simply return `buffer`'s
721 /// storage to that pool, without making it available to other applications.
722 ///
723 /// # Safety
724 ///
725 /// - The given `buffer` must not currently be mapped.
726 unsafe fn destroy_buffer(&self, buffer: <Self::A as Api>::Buffer);
727
728 /// A hook for when a wgpu-core buffer is created from a raw wgpu-hal buffer.
729 unsafe fn add_raw_buffer(&self, buffer: &<Self::A as Api>::Buffer);
730
731 /// Return a pointer to CPU memory mapping the contents of `buffer`.
732 ///
733 /// Buffer mappings are persistent: the buffer may remain mapped on the CPU
734 /// while the GPU reads or writes to it. (Note that `wgpu_core` does not use
735 /// this feature: when a `wgpu_core::Buffer` is unmapped, the underlying
736 /// `wgpu_hal` buffer is also unmapped.)
737 ///
738 /// If this function returns `Ok(mapping)`, then:
739 ///
740 /// - `mapping.ptr` is the CPU address of the start of the mapped memory.
741 ///
742 /// - If `mapping.is_coherent` is `true`, then CPU writes to the mapped
743 /// memory are immediately visible on the GPU, and vice versa.
744 ///
745 /// # Safety
746 ///
747 /// - The given `buffer` must have been created with the [`MAP_READ`] or
748 /// [`MAP_WRITE`] flags set in [`BufferDescriptor::usage`].
749 ///
750 /// - The given `range` must fall within the size of `buffer`.
751 ///
752 /// - The caller must avoid data races between the CPU and the GPU. A data
753 /// race is any pair of accesses to a particular byte, one of which is a
754 /// write, that are not ordered with respect to each other by some sort of
755 /// synchronization operation.
756 ///
757 /// - If this function returns `Ok(mapping)` and `mapping.is_coherent` is
758 /// `false`, then:
759 ///
760 /// - Every CPU write to a mapped byte followed by a GPU read of that byte
761 /// must have at least one call to [`Device::flush_mapped_ranges`]
762 /// covering that byte that occurs between those two accesses.
763 ///
764 /// - Every GPU write to a mapped byte followed by a CPU read of that byte
765 /// must have at least one call to [`Device::invalidate_mapped_ranges`]
766 /// covering that byte that occurs between those two accesses.
767 ///
768 /// Note that the data race rule above requires that all such access pairs
769 /// be ordered, so it is meaningful to talk about what must occur
770 /// "between" them.
771 ///
772 /// - Zero-sized mappings are not allowed.
773 ///
774 /// - The returned [`BufferMapping::ptr`] must not be used after a call to
775 /// [`Device::unmap_buffer`].
776 ///
777 /// [`MAP_READ`]: BufferUses::MAP_READ
778 /// [`MAP_WRITE`]: BufferUses::MAP_WRITE
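    ///
    /// # Example
    ///
    /// A sketch of copying `data` into a mappable buffer; it is not compiled
    /// as a doctest, `hal`/`wgt` stand for this crate and `wgpu-types`, and it
    /// assumes [`BufferMapping`]'s `ptr` field is a `NonNull<u8>`:
    ///
    /// ```ignore
    /// use hal::Device as _;
    ///
    /// unsafe fn write_buffer<A: hal::Api>(
    ///     device: &A::Device,
    ///     buffer: &A::Buffer,
    ///     data: &[u8], // must be non-empty: zero-sized mappings are not allowed
    /// ) -> Result<(), hal::DeviceError> {
    ///     let range = 0..data.len() as wgt::BufferAddress;
    ///     let mapping = device.map_buffer(buffer, range.clone())?;
    ///     std::ptr::copy_nonoverlapping(data.as_ptr(), mapping.ptr.as_ptr(), data.len());
    ///     if !mapping.is_coherent {
    ///         // Make the CPU writes visible to the GPU.
    ///         device.flush_mapped_ranges(buffer, std::iter::once(range));
    ///     }
    ///     device.unmap_buffer(buffer);
    ///     Ok(())
    /// }
    /// ```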
779 unsafe fn map_buffer(
780 &self,
781 buffer: &<Self::A as Api>::Buffer,
782 range: MemoryRange,
783 ) -> Result<BufferMapping, DeviceError>;
784
785 /// Remove the mapping established by the last call to [`Device::map_buffer`].
786 ///
787 /// # Safety
788 ///
789 /// - The given `buffer` must be currently mapped.
790 unsafe fn unmap_buffer(&self, buffer: &<Self::A as Api>::Buffer);
791
792 /// Indicate that CPU writes to mapped buffer memory should be made visible to the GPU.
793 ///
794 /// # Safety
795 ///
796 /// - The given `buffer` must be currently mapped.
797 ///
798 /// - All ranges produced by `ranges` must fall within `buffer`'s size.
799 unsafe fn flush_mapped_ranges<I>(&self, buffer: &<Self::A as Api>::Buffer, ranges: I)
800 where
801 I: Iterator<Item = MemoryRange>;
802
803 /// Indicate that GPU writes to mapped buffer memory should be made visible to the CPU.
804 ///
805 /// # Safety
806 ///
807 /// - The given `buffer` must be currently mapped.
808 ///
809 /// - All ranges produced by `ranges` must fall within `buffer`'s size.
810 unsafe fn invalidate_mapped_ranges<I>(&self, buffer: &<Self::A as Api>::Buffer, ranges: I)
811 where
812 I: Iterator<Item = MemoryRange>;
813
814 /// Creates a new texture.
815 ///
816 /// The initial usage for all subresources is `TextureUses::UNINITIALIZED`.
817 unsafe fn create_texture(
818 &self,
819 desc: &TextureDescriptor,
820 ) -> Result<<Self::A as Api>::Texture, DeviceError>;
821 unsafe fn destroy_texture(&self, texture: <Self::A as Api>::Texture);
822
823 /// A hook for when a wgpu-core texture is created from a raw wgpu-hal texture.
824 unsafe fn add_raw_texture(&self, texture: &<Self::A as Api>::Texture);
825
826 unsafe fn create_texture_view(
827 &self,
828 texture: &<Self::A as Api>::Texture,
829 desc: &TextureViewDescriptor,
830 ) -> Result<<Self::A as Api>::TextureView, DeviceError>;
831 unsafe fn destroy_texture_view(&self, view: <Self::A as Api>::TextureView);
832 unsafe fn create_sampler(
833 &self,
834 desc: &SamplerDescriptor,
835 ) -> Result<<Self::A as Api>::Sampler, DeviceError>;
836 unsafe fn destroy_sampler(&self, sampler: <Self::A as Api>::Sampler);
837
838 /// Create a fresh [`CommandEncoder`].
839 ///
840 /// The new `CommandEncoder` is in the "closed" state.
841 unsafe fn create_command_encoder(
842 &self,
843 desc: &CommandEncoderDescriptor<<Self::A as Api>::Queue>,
844 ) -> Result<<Self::A as Api>::CommandEncoder, DeviceError>;
845
846 /// Creates a bind group layout.
847 unsafe fn create_bind_group_layout(
848 &self,
849 desc: &BindGroupLayoutDescriptor,
850 ) -> Result<<Self::A as Api>::BindGroupLayout, DeviceError>;
851 unsafe fn destroy_bind_group_layout(&self, bg_layout: <Self::A as Api>::BindGroupLayout);
852 unsafe fn create_pipeline_layout(
853 &self,
854 desc: &PipelineLayoutDescriptor<<Self::A as Api>::BindGroupLayout>,
855 ) -> Result<<Self::A as Api>::PipelineLayout, DeviceError>;
856 unsafe fn destroy_pipeline_layout(&self, pipeline_layout: <Self::A as Api>::PipelineLayout);
857
858 #[allow(clippy::type_complexity)]
859 unsafe fn create_bind_group(
860 &self,
861 desc: &BindGroupDescriptor<
862 <Self::A as Api>::BindGroupLayout,
863 <Self::A as Api>::Buffer,
864 <Self::A as Api>::Sampler,
865 <Self::A as Api>::TextureView,
866 <Self::A as Api>::AccelerationStructure,
867 >,
868 ) -> Result<<Self::A as Api>::BindGroup, DeviceError>;
869 unsafe fn destroy_bind_group(&self, group: <Self::A as Api>::BindGroup);
870
871 unsafe fn create_shader_module(
872 &self,
873 desc: &ShaderModuleDescriptor,
874 shader: ShaderInput,
875 ) -> Result<<Self::A as Api>::ShaderModule, ShaderError>;
876 unsafe fn destroy_shader_module(&self, module: <Self::A as Api>::ShaderModule);
877
878 #[allow(clippy::type_complexity)]
879 unsafe fn create_render_pipeline(
880 &self,
881 desc: &RenderPipelineDescriptor<
882 <Self::A as Api>::PipelineLayout,
883 <Self::A as Api>::ShaderModule,
884 <Self::A as Api>::PipelineCache,
885 >,
886 ) -> Result<<Self::A as Api>::RenderPipeline, PipelineError>;
887 unsafe fn destroy_render_pipeline(&self, pipeline: <Self::A as Api>::RenderPipeline);
888
889 #[allow(clippy::type_complexity)]
890 unsafe fn create_compute_pipeline(
891 &self,
892 desc: &ComputePipelineDescriptor<
893 <Self::A as Api>::PipelineLayout,
894 <Self::A as Api>::ShaderModule,
895 <Self::A as Api>::PipelineCache,
896 >,
897 ) -> Result<<Self::A as Api>::ComputePipeline, PipelineError>;
898 unsafe fn destroy_compute_pipeline(&self, pipeline: <Self::A as Api>::ComputePipeline);
899
900 unsafe fn create_pipeline_cache(
901 &self,
902 desc: &PipelineCacheDescriptor<'_>,
903 ) -> Result<<Self::A as Api>::PipelineCache, PipelineCacheError>;
904 fn pipeline_cache_validation_key(&self) -> Option<[u8; 16]> {
905 None
906 }
907 unsafe fn destroy_pipeline_cache(&self, cache: <Self::A as Api>::PipelineCache);
908
909 unsafe fn create_query_set(
910 &self,
911 desc: &wgt::QuerySetDescriptor<Label>,
912 ) -> Result<<Self::A as Api>::QuerySet, DeviceError>;
913 unsafe fn destroy_query_set(&self, set: <Self::A as Api>::QuerySet);
914 unsafe fn create_fence(&self) -> Result<<Self::A as Api>::Fence, DeviceError>;
915 unsafe fn destroy_fence(&self, fence: <Self::A as Api>::Fence);
916 unsafe fn get_fence_value(
917 &self,
918 fence: &<Self::A as Api>::Fence,
919 ) -> Result<FenceValue, DeviceError>;
920
921 /// Wait for `fence` to reach `value`.
922 ///
923 /// Operations like [`Queue::submit`] can accept a [`Fence`] and a
924 /// [`FenceValue`] to store in it, so you can use this `wait` function
925 /// to wait for a given queue submission to finish execution.
926 ///
927 /// The `value` argument must be a value that some actual operation you have
928 /// already presented to the device is going to store in `fence`. You cannot
929 /// wait for values yet to be submitted. (This restriction accommodates
930 /// implementations like the `vulkan` backend's [`FencePool`] that must
931 /// allocate a distinct synchronization object for each fence value one is
932 /// able to wait for.)
933 ///
934 /// Calling `wait` with a lower [`FenceValue`] than `fence`'s current value
935 /// returns immediately.
936 ///
937 /// [`Fence`]: Api::Fence
938 /// [`FencePool`]: vulkan/enum.Fence.html#variant.FencePool
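    ///
    /// For illustration, a sketch (not compiled as a doctest; `device`,
    /// `fence`, and `value` are assumed, where `value` was previously passed
    /// to [`Queue::submit`] as the signal value for `fence`, and the returned
    /// `bool` is taken to mean "the fence reached `value` before the timeout"):
    ///
    /// ```ignore
    /// // Block for up to one second.
    /// let reached = unsafe { device.wait(&fence, value, 1000) }.expect("wait failed");
    /// if !reached {
    ///     // The timeout elapsed before `fence` reached `value`.
    /// }
    /// ```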
939 unsafe fn wait(
940 &self,
941 fence: &<Self::A as Api>::Fence,
942 value: FenceValue,
943 timeout_ms: u32,
944 ) -> Result<bool, DeviceError>;
945
946 unsafe fn start_capture(&self) -> bool;
947 unsafe fn stop_capture(&self);
948
949 #[allow(unused_variables)]
950 unsafe fn pipeline_cache_get_data(
951 &self,
952 cache: &<Self::A as Api>::PipelineCache,
953 ) -> Option<Vec<u8>> {
954 None
955 }
956
957 unsafe fn create_acceleration_structure(
958 &self,
959 desc: &AccelerationStructureDescriptor,
960 ) -> Result<<Self::A as Api>::AccelerationStructure, DeviceError>;
961 unsafe fn get_acceleration_structure_build_sizes(
962 &self,
963 desc: &GetAccelerationStructureBuildSizesDescriptor<<Self::A as Api>::Buffer>,
964 ) -> AccelerationStructureBuildSizes;
965 unsafe fn get_acceleration_structure_device_address(
966 &self,
967 acceleration_structure: &<Self::A as Api>::AccelerationStructure,
968 ) -> wgt::BufferAddress;
969 unsafe fn destroy_acceleration_structure(
970 &self,
971 acceleration_structure: <Self::A as Api>::AccelerationStructure,
972 );
973 fn tlas_instance_to_bytes(&self, instance: TlasInstance) -> Vec<u8>;
974
975 fn get_internal_counters(&self) -> wgt::HalCounters;
976
977 fn generate_allocator_report(&self) -> Option<wgt::AllocatorReport> {
978 None
979 }
980}
981
982pub trait Queue: WasmNotSendSync {
983 type A: Api;
984
985 /// Submit `command_buffers` for execution on GPU.
986 ///
987 /// Update `fence` to `value` when the operation is complete. See
988 /// [`Fence`] for details.
989 ///
990 /// A `wgpu_hal` queue is "single threaded": all command buffers are
991 /// executed in the order they're submitted, with each buffer able to see
992 /// previous buffers' results. Specifically:
993 ///
994 /// - If two calls to `submit` on a single `Queue` occur in a particular
995 /// order (that is, they happen on the same thread, or on two threads that
996 /// have synchronized to establish an ordering), then the first
997 /// submission's commands all complete execution before any of the second
998 /// submission's commands begin. All results produced by one submission
999 /// are visible to the next.
1000 ///
1001 /// - Within a submission, command buffers execute in the order in which they
1002 /// appear in `command_buffers`. All results produced by one buffer are
1003 /// visible to the next.
1004 ///
1005 /// If two calls to `submit` on a single `Queue` from different threads are
1006 /// not synchronized to occur in a particular order, they must pass distinct
1007 /// [`Fence`]s. As explained in the [`Fence`] documentation, waiting for
1008 /// operations to complete is only trustworthy when operations finish in
1009 /// order of increasing fence value, but submissions from different threads
1010 /// cannot determine how to order the fence values if the submissions
1011 /// themselves are unordered. If each thread uses a separate [`Fence`], this
1012 /// problem does not arise.
1013 ///
1014 /// # Safety
1015 ///
1016 /// - Each [`CommandBuffer`][cb] in `command_buffers` must have been created
1017 /// from a [`CommandEncoder`][ce] that was constructed from the
1018 /// [`Device`][d] associated with this [`Queue`].
1019 ///
1020 /// - Each [`CommandBuffer`][cb] must remain alive until the submitted
1021 /// commands have finished execution. Since command buffers must not
1022 /// outlive their encoders, this implies that the encoders must remain
1023 /// alive as well.
1024 ///
1025 /// - All resources used by a submitted [`CommandBuffer`][cb]
1026 /// ([`Texture`][t]s, [`BindGroup`][bg]s, [`RenderPipeline`][rp]s, and so
1027 /// on) must remain alive until the command buffer finishes execution.
1028 ///
1029 /// - Every [`SurfaceTexture`][st] that any command in `command_buffers`
1030 /// writes to must appear in the `surface_textures` argument.
1031 ///
1032 /// - No [`SurfaceTexture`][st] may appear in the `surface_textures`
1033 /// argument more than once.
1034 ///
1035 /// - Each [`SurfaceTexture`][st] in `surface_textures` must be configured
1036 /// for use with the [`Device`][d] associated with this [`Queue`],
1037 /// typically by calling [`Surface::configure`].
1038 ///
1039 /// - All calls to this function that include a given [`SurfaceTexture`][st]
1040 /// in `surface_textures` must use the same [`Fence`].
1041 ///
1042 /// - The [`Fence`] passed as `signal_fence.0` must remain alive until
1043 /// all submissions that will signal it have completed.
1044 ///
1045 /// [`Fence`]: Api::Fence
1046 /// [cb]: Api::CommandBuffer
1047 /// [ce]: Api::CommandEncoder
1048 /// [d]: Api::Device
1049 /// [t]: Api::Texture
1050 /// [bg]: Api::BindGroup
1051 /// [rp]: Api::RenderPipeline
1052 /// [st]: Api::SurfaceTexture
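    ///
    /// For illustration, a sketch (not compiled as a doctest; the `hal` alias
    /// stands for this crate, and choosing `signal_value` is left to the
    /// caller's fence-value bookkeeping):
    ///
    /// ```ignore
    /// use hal::{Device as _, Queue as _};
    ///
    /// unsafe fn submit_and_wait<A: hal::Api>(
    ///     queue: &A::Queue,
    ///     device: &A::Device,
    ///     fence: &mut A::Fence,
    ///     cmd_buf: &A::CommandBuffer,
    ///     signal_value: hal::FenceValue,
    /// ) -> Result<(), hal::DeviceError> {
    ///     queue.submit(&[cmd_buf], &[], (&mut *fence, signal_value))?;
    ///     // Block until the submission has finished executing.
    ///     device.wait(fence, signal_value, !0)?;
    ///     Ok(())
    /// }
    /// ```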
1053 unsafe fn submit(
1054 &self,
1055 command_buffers: &[&<Self::A as Api>::CommandBuffer],
1056 surface_textures: &[&<Self::A as Api>::SurfaceTexture],
1057 signal_fence: (&mut <Self::A as Api>::Fence, FenceValue),
1058 ) -> Result<(), DeviceError>;
1059 unsafe fn present(
1060 &self,
1061 surface: &<Self::A as Api>::Surface,
1062 texture: <Self::A as Api>::SurfaceTexture,
1063 ) -> Result<(), SurfaceError>;
1064 unsafe fn get_timestamp_period(&self) -> f32;
1065}
1066
1067/// Encoder and allocation pool for `CommandBuffer`s.
1068///
1069/// A `CommandEncoder` not only constructs `CommandBuffer`s but also
1070/// acts as the allocation pool that owns the buffers' underlying
1071/// storage. Thus, `CommandBuffer`s must not outlive the
1072/// `CommandEncoder` that created them.
1073///
1074/// The life cycle of a `CommandBuffer` is as follows:
1075///
1076/// - Call [`Device::create_command_encoder`] to create a new
1077/// `CommandEncoder`, in the "closed" state.
1078///
1079/// - Call `begin_encoding` on a closed `CommandEncoder` to begin
1080/// recording commands. This puts the `CommandEncoder` in the
1081/// "recording" state.
1082///
1083/// - Call methods like `copy_buffer_to_buffer`, `begin_render_pass`,
1084/// etc. on a "recording" `CommandEncoder` to add commands to the
1085/// list. (If an error occurs, you must call `discard_encoding`; see
1086/// below.)
1087///
1088/// - Call `end_encoding` on a recording `CommandEncoder` to close the
1089/// encoder and construct a fresh `CommandBuffer` consisting of the
1090/// list of commands recorded up to that point.
1091///
1092/// - Call `discard_encoding` on a recording `CommandEncoder` to drop
1093/// the commands recorded thus far and close the encoder. This is
1094/// the only safe thing to do on a `CommandEncoder` if an error has
1095/// occurred while recording commands.
1096///
1097/// - Call `reset_all` on a closed `CommandEncoder`, passing all the
1098/// live `CommandBuffers` built from it. All the `CommandBuffer`s
1099/// are destroyed, and their resources are freed.
1100///
1101/// # Safety
1102///
1103/// - The `CommandEncoder` must be in the states described above to
1104/// make the given calls.
1105///
1106/// - A `CommandBuffer` that has been submitted for execution on the
1107/// GPU must live until its execution is complete.
1108///
1109/// - A `CommandBuffer` must not outlive the `CommandEncoder` that
1110/// built it.
1111///
/// It is the user's responsibility to meet these requirements. This
1113/// allows `CommandEncoder` implementations to keep their state
1114/// tracking to a minimum.
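///
/// For illustration, a sketch of this life cycle (not compiled as a doctest;
/// `encoder`, `src`, `dst`, and `regions` are assumed, and submission is
/// elided):
///
/// ```ignore
/// unsafe {
///     encoder.begin_encoding(Some("upload"))?;     // "closed" -> "recording"
///     encoder.copy_buffer_to_buffer(&src, &dst, regions);
///     let cmd_buf = encoder.end_encoding()?;       // "recording" -> "closed"
///     // ... submit `cmd_buf` and wait for it to finish execution ...
///     encoder.reset_all(std::iter::once(cmd_buf)); // reclaim its storage
/// }
/// ```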
1115pub trait CommandEncoder: WasmNotSendSync + fmt::Debug {
1116 type A: Api;
1117
1118 /// Begin encoding a new command buffer.
1119 ///
1120 /// This puts this `CommandEncoder` in the "recording" state.
1121 ///
1122 /// # Safety
1123 ///
1124 /// This `CommandEncoder` must be in the "closed" state.
1125 unsafe fn begin_encoding(&mut self, label: Label) -> Result<(), DeviceError>;
1126
1127 /// Discard the command list under construction.
1128 ///
1129 /// If an error has occurred while recording commands, this
1130 /// is the only safe thing to do with the encoder.
1131 ///
1132 /// This puts this `CommandEncoder` in the "closed" state.
1133 ///
1134 /// # Safety
1135 ///
1136 /// This `CommandEncoder` must be in the "recording" state.
1137 ///
1138 /// Callers must not assume that implementations of this
1139 /// function are idempotent, and thus should not call it
1140 /// multiple times in a row.
1141 unsafe fn discard_encoding(&mut self);
1142
1143 /// Return a fresh [`CommandBuffer`] holding the recorded commands.
1144 ///
1145 /// The returned [`CommandBuffer`] holds all the commands recorded
1146 /// on this `CommandEncoder` since the last call to
1147 /// [`begin_encoding`].
1148 ///
1149 /// This puts this `CommandEncoder` in the "closed" state.
1150 ///
1151 /// # Safety
1152 ///
1153 /// This `CommandEncoder` must be in the "recording" state.
1154 ///
1155 /// The returned [`CommandBuffer`] must not outlive this
1156 /// `CommandEncoder`. Implementations are allowed to build
1157 /// `CommandBuffer`s that depend on storage owned by this
1158 /// `CommandEncoder`.
1159 ///
1160 /// [`CommandBuffer`]: Api::CommandBuffer
1161 /// [`begin_encoding`]: CommandEncoder::begin_encoding
1162 unsafe fn end_encoding(&mut self) -> Result<<Self::A as Api>::CommandBuffer, DeviceError>;
1163
1164 /// Reclaim all resources belonging to this `CommandEncoder`.
1165 ///
1166 /// # Safety
1167 ///
1168 /// This `CommandEncoder` must be in the "closed" state.
1169 ///
1170 /// The `command_buffers` iterator must produce all the live
1171 /// [`CommandBuffer`]s built using this `CommandEncoder` --- that
1172 /// is, every extant `CommandBuffer` returned from `end_encoding`.
1173 ///
1174 /// [`CommandBuffer`]: Api::CommandBuffer
1175 unsafe fn reset_all<I>(&mut self, command_buffers: I)
1176 where
1177 I: Iterator<Item = <Self::A as Api>::CommandBuffer>;
1178
1179 unsafe fn transition_buffers<'a, T>(&mut self, barriers: T)
1180 where
1181 T: Iterator<Item = BufferBarrier<'a, <Self::A as Api>::Buffer>>;
1182
1183 unsafe fn transition_textures<'a, T>(&mut self, barriers: T)
1184 where
1185 T: Iterator<Item = TextureBarrier<'a, <Self::A as Api>::Texture>>;
1186
1187 // copy operations
1188
1189 unsafe fn clear_buffer(&mut self, buffer: &<Self::A as Api>::Buffer, range: MemoryRange);
1190
1191 unsafe fn copy_buffer_to_buffer<T>(
1192 &mut self,
1193 src: &<Self::A as Api>::Buffer,
1194 dst: &<Self::A as Api>::Buffer,
1195 regions: T,
1196 ) where
1197 T: Iterator<Item = BufferCopy>;
1198
1199 /// Copy from an external image to an internal texture.
1200 /// Works with a single array layer.
1201 /// Note: `dst` current usage has to be `TextureUses::COPY_DST`.
1202 /// Note: the copy extent is in physical size (rounded to the block size)
1203 #[cfg(webgl)]
1204 unsafe fn copy_external_image_to_texture<T>(
1205 &mut self,
1206 src: &wgt::CopyExternalImageSourceInfo,
1207 dst: &<Self::A as Api>::Texture,
1208 dst_premultiplication: bool,
1209 regions: T,
1210 ) where
1211 T: Iterator<Item = TextureCopy>;
1212
1213 /// Copy from one texture to another.
1214 /// Works with a single array layer.
1215 /// Note: `dst` current usage has to be `TextureUses::COPY_DST`.
1216 /// Note: the copy extent is in physical size (rounded to the block size)
1217 unsafe fn copy_texture_to_texture<T>(
1218 &mut self,
1219 src: &<Self::A as Api>::Texture,
1220 src_usage: TextureUses,
1221 dst: &<Self::A as Api>::Texture,
1222 regions: T,
1223 ) where
1224 T: Iterator<Item = TextureCopy>;
1225
1226 /// Copy from buffer to texture.
1227 /// Works with a single array layer.
1228 /// Note: `dst` current usage has to be `TextureUses::COPY_DST`.
1229 /// Note: the copy extent is in physical size (rounded to the block size)
1230 unsafe fn copy_buffer_to_texture<T>(
1231 &mut self,
1232 src: &<Self::A as Api>::Buffer,
1233 dst: &<Self::A as Api>::Texture,
1234 regions: T,
1235 ) where
1236 T: Iterator<Item = BufferTextureCopy>;
1237
1238 /// Copy from texture to buffer.
1239 /// Works with a single array layer.
1240 /// Note: the copy extent is in physical size (rounded to the block size)
1241 unsafe fn copy_texture_to_buffer<T>(
1242 &mut self,
1243 src: &<Self::A as Api>::Texture,
1244 src_usage: TextureUses,
1245 dst: &<Self::A as Api>::Buffer,
1246 regions: T,
1247 ) where
1248 T: Iterator<Item = BufferTextureCopy>;
1249
1250 // pass common
1251
1252 /// Sets the bind group at `index` to `group`.
1253 ///
1254 /// If this is not the first call to `set_bind_group` within the current
1255 /// render or compute pass:
1256 ///
1257 /// - If `layout` contains `n` bind group layouts, then any previously set
1258 /// bind groups at indices `n` or higher are cleared.
1259 ///
1260 /// - If the first `m` bind group layouts of `layout` are equal to those of
1261 /// the previously passed layout, but no more, then any previously set
1262 /// bind groups at indices `m` or higher are cleared.
1263 ///
1264 /// It follows from the above that passing the same layout as before doesn't
1265 /// clear any bind groups.
1266 ///
1267 /// # Safety
1268 ///
1269 /// - This [`CommandEncoder`] must be within a render or compute pass.
1270 ///
1271 /// - `index` must be the valid index of some bind group layout in `layout`.
1272 /// Call this the "relevant bind group layout".
1273 ///
1274 /// - The layout of `group` must be equal to the relevant bind group layout.
1275 ///
1276 /// - The length of `dynamic_offsets` must match the number of buffer
1277 /// bindings [with dynamic offsets][hdo] in the relevant bind group
1278 /// layout.
1279 ///
1280 /// - If those buffer bindings are ordered by increasing [`binding` number]
1281 /// and paired with elements from `dynamic_offsets`, then each offset must
1282 /// be a valid offset for the binding's corresponding buffer in `group`.
1283 ///
1284 /// [hdo]: wgt::BindingType::Buffer::has_dynamic_offset
1285 /// [`binding` number]: wgt::BindGroupLayoutEntry::binding
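    ///
    /// For illustration, a sketch (not compiled as a doctest) of the clearing
    /// rule, where `layout_ab` is assumed to have bind group layouts `[A, B]`
    /// and `layout_ac` to have `[A, C]`:
    ///
    /// ```ignore
    /// unsafe {
    ///     encoder.set_bind_group(&layout_ab, 0, &group_a, &[]);
    ///     encoder.set_bind_group(&layout_ab, 1, &group_b, &[]);
    ///     // `layout_ac` matches `layout_ab` only in its first bind group layout,
    ///     // so this call clears everything from index 1 upward; `group_a` at
    ///     // index 0 remains bound.
    ///     encoder.set_bind_group(&layout_ac, 1, &group_c, &[]);
    /// }
    /// ```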
1286 unsafe fn set_bind_group(
1287 &mut self,
1288 layout: &<Self::A as Api>::PipelineLayout,
1289 index: u32,
1290 group: &<Self::A as Api>::BindGroup,
1291 dynamic_offsets: &[wgt::DynamicOffset],
1292 );
1293
1294 /// Sets a range in push constant data.
1295 ///
1296 /// IMPORTANT: while the data is passed as words, the offset is in bytes!
1297 ///
1298 /// # Safety
1299 ///
1300 /// - `offset_bytes` must be a multiple of 4.
1301 /// - The range of push constants written must be valid for the pipeline layout at draw time.
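    ///
    /// For example (a sketch, not compiled as a doctest; `encoder` and
    /// `layout` are assumed):
    ///
    /// ```ignore
    /// // Write two 32-bit words starting at *byte* offset 8 of the push
    /// // constant range visible to the vertex stage.
    /// let words: [u32; 2] = [1, 2];
    /// unsafe { encoder.set_push_constants(&layout, wgt::ShaderStages::VERTEX, 8, &words) };
    /// ```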
1302 unsafe fn set_push_constants(
1303 &mut self,
1304 layout: &<Self::A as Api>::PipelineLayout,
1305 stages: wgt::ShaderStages,
1306 offset_bytes: u32,
1307 data: &[u32],
1308 );
1309
1310 unsafe fn insert_debug_marker(&mut self, label: &str);
1311 unsafe fn begin_debug_marker(&mut self, group_label: &str);
1312 unsafe fn end_debug_marker(&mut self);
1313
1314 // queries
1315
    /// # Safety
1317 ///
1318 /// - If `set` is an occlusion query set, it must be the same one as used in the [`RenderPassDescriptor::occlusion_query_set`] parameter.
1319 unsafe fn begin_query(&mut self, set: &<Self::A as Api>::QuerySet, index: u32);
    /// # Safety
1321 ///
1322 /// - If `set` is an occlusion query set, it must be the same one as used in the [`RenderPassDescriptor::occlusion_query_set`] parameter.
1323 unsafe fn end_query(&mut self, set: &<Self::A as Api>::QuerySet, index: u32);
1324 unsafe fn write_timestamp(&mut self, set: &<Self::A as Api>::QuerySet, index: u32);
1325 unsafe fn reset_queries(&mut self, set: &<Self::A as Api>::QuerySet, range: Range<u32>);
1326 unsafe fn copy_query_results(
1327 &mut self,
1328 set: &<Self::A as Api>::QuerySet,
1329 range: Range<u32>,
1330 buffer: &<Self::A as Api>::Buffer,
1331 offset: wgt::BufferAddress,
1332 stride: wgt::BufferSize,
1333 );
1334
1335 // render passes
1336
1337 /// Begin a new render pass, clearing all active bindings.
1338 ///
1339 /// This clears any bindings established by the following calls:
1340 ///
1341 /// - [`set_bind_group`](CommandEncoder::set_bind_group)
1342 /// - [`set_push_constants`](CommandEncoder::set_push_constants)
1343 /// - [`begin_query`](CommandEncoder::begin_query)
1344 /// - [`set_render_pipeline`](CommandEncoder::set_render_pipeline)
1345 /// - [`set_index_buffer`](CommandEncoder::set_index_buffer)
1346 /// - [`set_vertex_buffer`](CommandEncoder::set_vertex_buffer)
1347 ///
1348 /// # Safety
1349 ///
1350 /// - All prior calls to [`begin_render_pass`] on this [`CommandEncoder`] must have been followed
1351 /// by a call to [`end_render_pass`].
1352 ///
1353 /// - All prior calls to [`begin_compute_pass`] on this [`CommandEncoder`] must have been followed
1354 /// by a call to [`end_compute_pass`].
1355 ///
1356 /// [`begin_render_pass`]: CommandEncoder::begin_render_pass
1357 /// [`begin_compute_pass`]: CommandEncoder::begin_compute_pass
1358 /// [`end_render_pass`]: CommandEncoder::end_render_pass
1359 /// [`end_compute_pass`]: CommandEncoder::end_compute_pass
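    ///
    /// An illustrative sketch (not compiled as a doctest) of a minimal pass;
    /// the descriptor, pipeline, and vertex-buffer binding are assumed to have
    /// been prepared already:
    ///
    /// ```ignore
    /// unsafe {
    ///     encoder.begin_render_pass(&desc); // `desc: RenderPassDescriptor<_, _>`
    ///     encoder.set_render_pipeline(&pipeline);
    ///     encoder.set_vertex_buffer(0, vertex_binding);
    ///     // first_vertex, vertex_count, first_instance, instance_count
    ///     encoder.draw(0, 3, 0, 1);
    ///     encoder.end_render_pass();
    /// }
    /// ```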
1360 unsafe fn begin_render_pass(
1361 &mut self,
1362 desc: &RenderPassDescriptor<<Self::A as Api>::QuerySet, <Self::A as Api>::TextureView>,
1363 );
1364
1365 /// End the current render pass.
1366 ///
1367 /// # Safety
1368 ///
1369 /// - There must have been a prior call to [`begin_render_pass`] on this [`CommandEncoder`]
1370 /// that has not been followed by a call to [`end_render_pass`].
1371 ///
1372 /// [`begin_render_pass`]: CommandEncoder::begin_render_pass
1373 /// [`end_render_pass`]: CommandEncoder::end_render_pass
1374 unsafe fn end_render_pass(&mut self);
1375
1376 unsafe fn set_render_pipeline(&mut self, pipeline: &<Self::A as Api>::RenderPipeline);
1377
1378 unsafe fn set_index_buffer<'a>(
1379 &mut self,
1380 binding: BufferBinding<'a, <Self::A as Api>::Buffer>,
1381 format: wgt::IndexFormat,
1382 );
1383 unsafe fn set_vertex_buffer<'a>(
1384 &mut self,
1385 index: u32,
1386 binding: BufferBinding<'a, <Self::A as Api>::Buffer>,
1387 );
1388 unsafe fn set_viewport(&mut self, rect: &Rect<f32>, depth_range: Range<f32>);
1389 unsafe fn set_scissor_rect(&mut self, rect: &Rect<u32>);
1390 unsafe fn set_stencil_reference(&mut self, value: u32);
1391 unsafe fn set_blend_constants(&mut self, color: &[f32; 4]);
1392
1393 unsafe fn draw(
1394 &mut self,
1395 first_vertex: u32,
1396 vertex_count: u32,
1397 first_instance: u32,
1398 instance_count: u32,
1399 );
1400 unsafe fn draw_indexed(
1401 &mut self,
1402 first_index: u32,
1403 index_count: u32,
1404 base_vertex: i32,
1405 first_instance: u32,
1406 instance_count: u32,
1407 );
1408 unsafe fn draw_indirect(
1409 &mut self,
1410 buffer: &<Self::A as Api>::Buffer,
1411 offset: wgt::BufferAddress,
1412 draw_count: u32,
1413 );
1414 unsafe fn draw_indexed_indirect(
1415 &mut self,
1416 buffer: &<Self::A as Api>::Buffer,
1417 offset: wgt::BufferAddress,
1418 draw_count: u32,
1419 );
1420 unsafe fn draw_indirect_count(
1421 &mut self,
1422 buffer: &<Self::A as Api>::Buffer,
1423 offset: wgt::BufferAddress,
1424 count_buffer: &<Self::A as Api>::Buffer,
1425 count_offset: wgt::BufferAddress,
1426 max_count: u32,
1427 );
1428 unsafe fn draw_indexed_indirect_count(
1429 &mut self,
1430 buffer: &<Self::A as Api>::Buffer,
1431 offset: wgt::BufferAddress,
1432 count_buffer: &<Self::A as Api>::Buffer,
1433 count_offset: wgt::BufferAddress,
1434 max_count: u32,
1435 );
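
    // An illustrative (non-normative) sketch of the expected call order for a render pass,
    // assuming `desc`, `pipeline`, and the buffer bindings were created elsewhere with
    // compatible layouts and usages (every call below is `unsafe`):
    //
    //     encoder.begin_render_pass(&desc);
    //     encoder.set_render_pipeline(&pipeline);
    //     encoder.set_index_buffer(index_binding, wgt::IndexFormat::Uint16);
    //     encoder.set_vertex_buffer(0, vertex_binding);
    //     encoder.draw_indexed(0, index_count, 0, 0, 1);
    //     encoder.end_render_pass();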
1436
1437 // compute passes
1438
1439 /// Begin a new compute pass, clearing all active bindings.
1440 ///
1441 /// This clears any bindings established by the following calls:
1442 ///
1443 /// - [`set_bind_group`](CommandEncoder::set_bind_group)
1444 /// - [`set_push_constants`](CommandEncoder::set_push_constants)
1445 /// - [`begin_query`](CommandEncoder::begin_query)
1446 /// - [`set_compute_pipeline`](CommandEncoder::set_compute_pipeline)
1447 ///
1448 /// # Safety
1449 ///
1450 /// - All prior calls to [`begin_render_pass`] on this [`CommandEncoder`] must have been followed
1451 /// by a call to [`end_render_pass`].
1452 ///
1453 /// - All prior calls to [`begin_compute_pass`] on this [`CommandEncoder`] must have been followed
1454 /// by a call to [`end_compute_pass`].
1455 ///
1456 /// [`begin_render_pass`]: CommandEncoder::begin_render_pass
1457 /// [`begin_compute_pass`]: CommandEncoder::begin_compute_pass
1458 /// [`end_render_pass`]: CommandEncoder::end_render_pass
1459 /// [`end_compute_pass`]: CommandEncoder::end_compute_pass
1460 unsafe fn begin_compute_pass(
1461 &mut self,
1462 desc: &ComputePassDescriptor<<Self::A as Api>::QuerySet>,
1463 );
1464
1465 /// End the current compute pass.
1466 ///
1467 /// # Safety
1468 ///
1469 /// - There must have been a prior call to [`begin_compute_pass`] on this [`CommandEncoder`]
1470 /// that has not been followed by a call to [`end_compute_pass`].
1471 ///
1472 /// [`begin_compute_pass`]: CommandEncoder::begin_compute_pass
1473 /// [`end_compute_pass`]: CommandEncoder::end_compute_pass
1474 unsafe fn end_compute_pass(&mut self);
1475
1476 unsafe fn set_compute_pipeline(&mut self, pipeline: &<Self::A as Api>::ComputePipeline);
1477
1478 unsafe fn dispatch(&mut self, count: [u32; 3]);
1479 unsafe fn dispatch_indirect(
1480 &mut self,
1481 buffer: &<Self::A as Api>::Buffer,
1482 offset: wgt::BufferAddress,
1483 );
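
    // An illustrative (non-normative) sketch of the expected call order for a compute pass
    // (every call below is `unsafe`; bind groups and push constants are set with the
    // state-setting methods declared earlier in this trait):
    //
    //     encoder.begin_compute_pass(&desc);
    //     encoder.set_compute_pipeline(&pipeline);
    //     /* ...set_bind_group / set_push_constants as needed... */
    //     encoder.dispatch([64, 1, 1]);
    //     encoder.end_compute_pass();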
1484
    /// To get the required sizes for the buffer allocations, use
    /// `get_acceleration_structure_build_sizes` per descriptor.
    /// All buffers must be synchronized externally.
    /// Each buffer region that is written to may only be passed once per function call,
    /// with the exception of updates in the same descriptor.
    /// Consequences of this limitation:
    /// - scratch buffers need to be unique
    /// - a TLAS can't be built in the same call as a BLAS it contains
1492 unsafe fn build_acceleration_structures<'a, T>(
1493 &mut self,
1494 descriptor_count: u32,
1495 descriptors: T,
1496 ) where
1497 Self::A: 'a,
1498 T: IntoIterator<
1499 Item = BuildAccelerationStructureDescriptor<
1500 'a,
1501 <Self::A as Api>::Buffer,
1502 <Self::A as Api>::AccelerationStructure,
1503 >,
1504 >;
1505
1506 unsafe fn place_acceleration_structure_barrier(
1507 &mut self,
1508 barrier: AccelerationStructureBarrier,
1509 );
1510}
1511
1512bitflags!(
1513 /// Pipeline layout creation flags.
1514 #[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
1515 pub struct PipelineLayoutFlags: u32 {
1516 /// D3D12: Add support for `first_vertex` and `first_instance` builtins
1517 /// via push constants for direct execution.
1518 const FIRST_VERTEX_INSTANCE = 1 << 0;
1519 /// D3D12: Add support for `num_workgroups` builtins via push constants
1520 /// for direct execution.
1521 const NUM_WORK_GROUPS = 1 << 1;
1522 /// D3D12: Add support for the builtins that the other flags enable for
1523 /// indirect execution.
1524 const INDIRECT_BUILTIN_UPDATE = 1 << 2;
1525 }
1526);
1527
1528bitflags!(
    /// Bind group layout creation flags.
1530 #[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
1531 pub struct BindGroupLayoutFlags: u32 {
1532 /// Allows for bind group binding arrays to be shorter than the array in the BGL.
1533 const PARTIALLY_BOUND = 1 << 0;
1534 }
1535);
1536
1537bitflags!(
1538 /// Texture format capability flags.
1539 #[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
1540 pub struct TextureFormatCapabilities: u32 {
1541 /// Format can be sampled.
1542 const SAMPLED = 1 << 0;
1543 /// Format can be sampled with a linear sampler.
1544 const SAMPLED_LINEAR = 1 << 1;
1545 /// Format can be sampled with a min/max reduction sampler.
1546 const SAMPLED_MINMAX = 1 << 2;
1547
1548 /// Format can be used as storage with read-only access.
1549 const STORAGE_READ_ONLY = 1 << 3;
1550 /// Format can be used as storage with write-only access.
1551 const STORAGE_WRITE_ONLY = 1 << 4;
1552 /// Format can be used as storage with both read and write access.
1553 const STORAGE_READ_WRITE = 1 << 5;
1554 /// Format can be used as storage with atomics.
1555 const STORAGE_ATOMIC = 1 << 6;
1556
1557 /// Format can be used as color and input attachment.
1558 const COLOR_ATTACHMENT = 1 << 7;
1559 /// Format can be used as color (with blending) and input attachment.
1560 const COLOR_ATTACHMENT_BLEND = 1 << 8;
1561 /// Format can be used as depth-stencil and input attachment.
1562 const DEPTH_STENCIL_ATTACHMENT = 1 << 9;
1563
1564 /// Format can be multisampled by x2.
1565 const MULTISAMPLE_X2 = 1 << 10;
1566 /// Format can be multisampled by x4.
1567 const MULTISAMPLE_X4 = 1 << 11;
1568 /// Format can be multisampled by x8.
1569 const MULTISAMPLE_X8 = 1 << 12;
1570 /// Format can be multisampled by x16.
1571 const MULTISAMPLE_X16 = 1 << 13;
1572
1573 /// Format can be used for render pass resolve targets.
1574 const MULTISAMPLE_RESOLVE = 1 << 14;
1575
1576 /// Format can be copied from.
1577 const COPY_SRC = 1 << 15;
1578 /// Format can be copied to.
1579 const COPY_DST = 1 << 16;
1580 }
1581);
1582
1583bitflags!(
    /// Texture format aspect flags.
1585 #[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
1586 pub struct FormatAspects: u8 {
1587 const COLOR = 1 << 0;
1588 const DEPTH = 1 << 1;
1589 const STENCIL = 1 << 2;
1590 const PLANE_0 = 1 << 3;
1591 const PLANE_1 = 1 << 4;
1592 const PLANE_2 = 1 << 5;
1593
1594 const DEPTH_STENCIL = Self::DEPTH.bits() | Self::STENCIL.bits();
1595 }
1596);
1597
1598impl FormatAspects {
1599 pub fn new(format: wgt::TextureFormat, aspect: wgt::TextureAspect) -> Self {
1600 let aspect_mask = match aspect {
1601 wgt::TextureAspect::All => Self::all(),
1602 wgt::TextureAspect::DepthOnly => Self::DEPTH,
1603 wgt::TextureAspect::StencilOnly => Self::STENCIL,
1604 wgt::TextureAspect::Plane0 => Self::PLANE_0,
1605 wgt::TextureAspect::Plane1 => Self::PLANE_1,
1606 wgt::TextureAspect::Plane2 => Self::PLANE_2,
1607 };
1608 Self::from(format) & aspect_mask
1609 }
1610
    /// Returns `true` if exactly one flag is set.
1612 pub fn is_one(&self) -> bool {
1613 self.bits().is_power_of_two()
1614 }
1615
1616 pub fn map(&self) -> wgt::TextureAspect {
1617 match *self {
1618 Self::COLOR => wgt::TextureAspect::All,
1619 Self::DEPTH => wgt::TextureAspect::DepthOnly,
1620 Self::STENCIL => wgt::TextureAspect::StencilOnly,
1621 Self::PLANE_0 => wgt::TextureAspect::Plane0,
1622 Self::PLANE_1 => wgt::TextureAspect::Plane1,
1623 Self::PLANE_2 => wgt::TextureAspect::Plane2,
1624 _ => unreachable!(),
1625 }
1626 }
1627}
1628
1629impl From<wgt::TextureFormat> for FormatAspects {
1630 fn from(format: wgt::TextureFormat) -> Self {
1631 match format {
1632 wgt::TextureFormat::Stencil8 => Self::STENCIL,
1633 wgt::TextureFormat::Depth16Unorm
1634 | wgt::TextureFormat::Depth32Float
1635 | wgt::TextureFormat::Depth24Plus => Self::DEPTH,
1636 wgt::TextureFormat::Depth32FloatStencil8 | wgt::TextureFormat::Depth24PlusStencil8 => {
1637 Self::DEPTH_STENCIL
1638 }
1639 wgt::TextureFormat::NV12 => Self::PLANE_0 | Self::PLANE_1,
1640 _ => Self::COLOR,
1641 }
1642 }
1643}
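
// A small, self-contained example of the aspect arithmetic above. The formats and aspects
// chosen here are arbitrary; the assertions only exercise code defined in this file.
#[test]
fn test_format_aspects_example() {
    // A combined depth-stencil format narrowed to its depth aspect.
    let depth_only = FormatAspects::new(
        wgt::TextureFormat::Depth24PlusStencil8,
        wgt::TextureAspect::DepthOnly,
    );
    assert_eq!(depth_only, FormatAspects::DEPTH);
    assert!(depth_only.is_one());
    assert_eq!(depth_only.map(), wgt::TextureAspect::DepthOnly);

    // Color formats map to the COLOR aspect, and a combined depth-stencil
    // selection is not a single aspect.
    assert_eq!(
        FormatAspects::from(wgt::TextureFormat::Rgba8Unorm),
        FormatAspects::COLOR
    );
    assert!(!FormatAspects::DEPTH_STENCIL.is_one());
}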
1644
1645bitflags!(
1646 #[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
1647 pub struct MemoryFlags: u32 {
1648 const TRANSIENT = 1 << 0;
1649 const PREFER_COHERENT = 1 << 1;
1650 }
1651);
1652
// TODO: it's not intuitive for the backends to consider `LOAD` to be optional.
1654
1655bitflags!(
1656 #[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
1657 pub struct AttachmentOps: u8 {
1658 const LOAD = 1 << 0;
1659 const STORE = 1 << 1;
1660 }
1661);
1662
1663bitflags::bitflags! {
1664 /// Similar to `wgt::BufferUsages` but for internal use.
1665 #[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
1666 pub struct BufferUses: u16 {
1667 /// The argument to a read-only mapping.
1668 const MAP_READ = 1 << 0;
1669 /// The argument to a write-only mapping.
1670 const MAP_WRITE = 1 << 1;
1671 /// The source of a hardware copy.
1672 const COPY_SRC = 1 << 2;
1673 /// The destination of a hardware copy.
1674 const COPY_DST = 1 << 3;
1675 /// The index buffer used for drawing.
1676 const INDEX = 1 << 4;
1677 /// A vertex buffer used for drawing.
1678 const VERTEX = 1 << 5;
1679 /// A uniform buffer bound in a bind group.
1680 const UNIFORM = 1 << 6;
1681 /// A read-only storage buffer used in a bind group.
1682 const STORAGE_READ_ONLY = 1 << 7;
        /// A read-write storage buffer used in a bind group.
1684 const STORAGE_READ_WRITE = 1 << 8;
        /// The indirect or count buffer in an indirect draw or dispatch.
1686 const INDIRECT = 1 << 9;
1687 /// A buffer used to store query results.
1688 const QUERY_RESOLVE = 1 << 10;
1689 const ACCELERATION_STRUCTURE_SCRATCH = 1 << 11;
1690 const BOTTOM_LEVEL_ACCELERATION_STRUCTURE_INPUT = 1 << 12;
1691 const TOP_LEVEL_ACCELERATION_STRUCTURE_INPUT = 1 << 13;
1692 /// The combination of states that a buffer may be in _at the same time_.
1693 const INCLUSIVE = Self::MAP_READ.bits() | Self::COPY_SRC.bits() |
1694 Self::INDEX.bits() | Self::VERTEX.bits() | Self::UNIFORM.bits() |
1695 Self::STORAGE_READ_ONLY.bits() | Self::INDIRECT.bits() | Self::BOTTOM_LEVEL_ACCELERATION_STRUCTURE_INPUT.bits() | Self::TOP_LEVEL_ACCELERATION_STRUCTURE_INPUT.bits();
1696 /// The combination of states that a buffer must exclusively be in.
1697 const EXCLUSIVE = Self::MAP_WRITE.bits() | Self::COPY_DST.bits() | Self::STORAGE_READ_WRITE.bits() | Self::ACCELERATION_STRUCTURE_SCRATCH.bits();
        /// The combination of all usages that are guaranteed to be ordered by the hardware.
        /// If a usage is ordered, then no barriers are needed for synchronization as long as
        /// the buffer state doesn't change between draw calls.
1701 const ORDERED = Self::INCLUSIVE.bits() | Self::MAP_WRITE.bits();
1702 }
1703}
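
// A sketch of how the INCLUSIVE/EXCLUSIVE/ORDERED groupings above are meant to be read;
// the particular usages tested are arbitrary examples.
#[test]
fn test_buffer_uses_groupings_example() {
    // Read-only usages may be combined freely...
    assert!(BufferUses::INCLUSIVE.contains(BufferUses::VERTEX | BufferUses::UNIFORM));
    // ...while writable usages are exclusive, and the two groups do not overlap.
    assert!(BufferUses::EXCLUSIVE.contains(BufferUses::STORAGE_READ_WRITE));
    assert!(!BufferUses::INCLUSIVE.intersects(BufferUses::EXCLUSIVE));
    // A buffer staying in an ORDERED state between uses needs no barrier;
    // STORAGE_READ_WRITE is not ordered, so a barrier is required between such uses.
    assert!(BufferUses::ORDERED.contains(BufferUses::MAP_WRITE));
    assert!(!BufferUses::ORDERED.contains(BufferUses::STORAGE_READ_WRITE));
}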
1704
1705bitflags::bitflags! {
1706 /// Similar to `wgt::TextureUsages` but for internal use.
1707 #[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
1708 pub struct TextureUses: u16 {
        /// The texture is in an unknown state.
        const UNINITIALIZED = 1 << 0;
        /// Ready to present the image to the surface.
1712 const PRESENT = 1 << 1;
1713 /// The source of a hardware copy.
1714 const COPY_SRC = 1 << 2;
1715 /// The destination of a hardware copy.
1716 const COPY_DST = 1 << 3;
1717 /// Read-only sampled or fetched resource.
1718 const RESOURCE = 1 << 4;
        /// The color target of a render pass.
1720 const COLOR_TARGET = 1 << 5;
1721 /// Read-only depth stencil usage.
1722 const DEPTH_STENCIL_READ = 1 << 6;
        /// Read-write depth stencil usage.
1724 const DEPTH_STENCIL_WRITE = 1 << 7;
        /// Read-only storage texture usage. Corresponds to a UAV in D3D, so it is exclusive despite being read-only.
1726 const STORAGE_READ_ONLY = 1 << 8;
1727 /// Write-only storage texture usage.
1728 const STORAGE_WRITE_ONLY = 1 << 9;
1729 /// Read-write storage texture usage.
1730 const STORAGE_READ_WRITE = 1 << 10;
        /// Storage texture usage with image atomics enabled.
1732 const STORAGE_ATOMIC = 1 << 11;
1733 /// The combination of states that a texture may be in _at the same time_.
1734 const INCLUSIVE = Self::COPY_SRC.bits() | Self::RESOURCE.bits() | Self::DEPTH_STENCIL_READ.bits();
1735 /// The combination of states that a texture must exclusively be in.
1736 const EXCLUSIVE = Self::COPY_DST.bits() | Self::COLOR_TARGET.bits() | Self::DEPTH_STENCIL_WRITE.bits() | Self::STORAGE_READ_ONLY.bits() | Self::STORAGE_WRITE_ONLY.bits() | Self::STORAGE_READ_WRITE.bits() | Self::STORAGE_ATOMIC.bits() | Self::PRESENT.bits();
        /// The combination of all usages that are guaranteed to be ordered by the hardware.
        /// If a usage is ordered, then no barriers are needed for synchronization as long as
        /// the texture state doesn't change between draw calls.
1740 const ORDERED = Self::INCLUSIVE.bits() | Self::COLOR_TARGET.bits() | Self::DEPTH_STENCIL_WRITE.bits() | Self::STORAGE_READ_ONLY.bits();
1741
        /// Flag used by the wgpu-core texture tracker to say a texture is in different states for every sub-resource.
1743 const COMPLEX = 1 << 12;
1744 /// Flag used by the wgpu-core texture tracker to say that the tracker does not know the state of the sub-resource.
1745 /// This is different from UNINITIALIZED as that says the tracker does know, but the texture has not been initialized.
1746 const UNKNOWN = 1 << 13;
1747 }
1748}
1749
1750#[derive(Clone, Debug)]
1751pub struct InstanceDescriptor<'a> {
1752 pub name: &'a str,
1753 pub flags: wgt::InstanceFlags,
1754 pub dx12_shader_compiler: wgt::Dx12Compiler,
1755 pub gles_minor_version: wgt::Gles3MinorVersion,
1756}
1757
1758#[derive(Clone, Debug)]
1759pub struct Alignments {
1760 /// The alignment of the start of the buffer used as a GPU copy source.
1761 pub buffer_copy_offset: wgt::BufferSize,
1762
1763 /// The alignment of the row pitch of the texture data stored in a buffer that is
1764 /// used in a GPU copy operation.
1765 pub buffer_copy_pitch: wgt::BufferSize,
1766
1767 /// The finest alignment of bound range checking for uniform buffers.
1768 ///
1769 /// When `wgpu_hal` restricts shader references to the [accessible
1770 /// region][ar] of a [`Uniform`] buffer, the size of the accessible region
1771 /// is the bind group binding's stated [size], rounded up to the next
1772 /// multiple of this value.
1773 ///
1774 /// We don't need an analogous field for storage buffer bindings, because
1775 /// all our backends promise to enforce the size at least to a four-byte
1776 /// alignment, and `wgpu_hal` requires bound range lengths to be a multiple
1777 /// of four anyway.
1778 ///
1779 /// [ar]: struct.BufferBinding.html#accessible-region
1780 /// [`Uniform`]: wgt::BufferBindingType::Uniform
1781 /// [size]: BufferBinding::size
1782 pub uniform_bounds_check_alignment: wgt::BufferSize,
1783
    /// The size of a raw TLAS instance, in bytes.
1785 pub raw_tlas_instance_size: usize,
1786
    /// The alignment that the scratch buffer used for building an acceleration structure must satisfy.
1788 pub ray_tracing_scratch_buffer_alignment: u32,
1789}
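
// A worked example of the uniform bounds-check rounding described above. Every numeric
// value here is hypothetical; real values come from the backend's `Adapter`.
#[test]
fn test_uniform_bounds_check_alignment_example() {
    let alignments = Alignments {
        buffer_copy_offset: wgt::BufferSize::new(4).unwrap(),
        buffer_copy_pitch: wgt::BufferSize::new(256).unwrap(),
        uniform_bounds_check_alignment: wgt::BufferSize::new(32).unwrap(),
        raw_tlas_instance_size: 64,
        ray_tracing_scratch_buffer_alignment: 256,
    };
    // A uniform binding of 17 bytes gets an accessible region of 32 bytes:
    // the stated size rounded up to the next multiple of the alignment.
    let align = alignments.uniform_bounds_check_alignment.get();
    let binding_size: wgt::BufferAddress = 17;
    let accessible = (binding_size + align - 1) / align * align;
    assert_eq!(accessible, 32);
}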
1790
1791#[derive(Clone, Debug)]
1792pub struct Capabilities {
1793 pub limits: wgt::Limits,
1794 pub alignments: Alignments,
1795 pub downlevel: wgt::DownlevelCapabilities,
1796}
1797
1798#[derive(Debug)]
1799pub struct ExposedAdapter<A: Api> {
1800 pub adapter: A::Adapter,
1801 pub info: wgt::AdapterInfo,
1802 pub features: wgt::Features,
1803 pub capabilities: Capabilities,
1804}
1805
/// Describes a `Surface`'s presentation capabilities.
/// Fetch this with [`Adapter::surface_capabilities`].
1808#[derive(Debug, Clone)]
1809pub struct SurfaceCapabilities {
    /// List of supported texture formats.
    ///
    /// Must contain at least one entry.
1813 pub formats: Vec<wgt::TextureFormat>,
1814
    /// Range for the number of queued frames.
    ///
    /// Depending on the backend, this either sets the swapchain frame count to `value + 1`,
    /// passes the value to `SetMaximumFrameLatency`, or uses a wait-for-present in the acquire
    /// method to limit rendering so that it behaves like a swapchain with `value + 1` frames.
    ///
    /// - `maximum_frame_latency.start` must be at least 1.
    /// - `maximum_frame_latency.end` must be larger than or equal to `maximum_frame_latency.start`.
1822 pub maximum_frame_latency: RangeInclusive<u32>,
1823
1824 /// Current extent of the surface, if known.
1825 pub current_extent: Option<wgt::Extent3d>,
1826
    /// Supported texture usage flags.
    ///
    /// Must include at least `TextureUses::COLOR_TARGET`.
1830 pub usage: TextureUses,
1831
    /// List of supported V-sync modes.
    ///
    /// Must contain at least one entry.
    pub present_modes: Vec<wgt::PresentMode>,

    /// List of supported alpha composition modes.
    ///
    /// Must contain at least one entry.
1840 pub composite_alpha_modes: Vec<wgt::CompositeAlphaMode>,
1841}
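
// A sketch of how a `SurfaceConfiguration` (defined later in this file) might be derived
// from these capabilities. All capability values below are made up for the example; real
// ones come from `Adapter::surface_capabilities`.
#[test]
fn test_surface_configuration_from_capabilities_example() {
    let caps = SurfaceCapabilities {
        formats: Vec::from([wgt::TextureFormat::Bgra8Unorm]),
        maximum_frame_latency: 1..=3,
        current_extent: None,
        usage: TextureUses::COLOR_TARGET | TextureUses::COPY_DST,
        present_modes: Vec::from([wgt::PresentMode::Fifo]),
        composite_alpha_modes: Vec::from([wgt::CompositeAlphaMode::Opaque]),
    };

    // Clamp the desired latency into the supported range and pick supported values.
    let desired_latency = 2u32;
    let config = SurfaceConfiguration {
        maximum_frame_latency: desired_latency.clamp(
            *caps.maximum_frame_latency.start(),
            *caps.maximum_frame_latency.end(),
        ),
        present_mode: caps.present_modes[0],
        composite_alpha_mode: caps.composite_alpha_modes[0],
        format: caps.formats[0],
        extent: wgt::Extent3d {
            width: 800,
            height: 600,
            depth_or_array_layers: 1,
        },
        usage: TextureUses::COLOR_TARGET,
        view_formats: Vec::new(),
    };
    assert!(caps.maximum_frame_latency.contains(&config.maximum_frame_latency));
    assert!(caps.usage.contains(config.usage));
}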
1842
1843#[derive(Debug)]
1844pub struct AcquiredSurfaceTexture<A: Api> {
1845 pub texture: A::SurfaceTexture,
1846 /// The presentation configuration no longer matches
1847 /// the surface properties exactly, but can still be used to present
1848 /// to the surface successfully.
1849 pub suboptimal: bool,
1850}
1851
1852#[derive(Debug)]
1853pub struct OpenDevice<A: Api> {
1854 pub device: A::Device,
1855 pub queue: A::Queue,
1856}
1857
1858#[derive(Clone, Debug)]
1859pub struct BufferMapping {
1860 pub ptr: NonNull<u8>,
1861 pub is_coherent: bool,
1862}
1863
1864#[derive(Clone, Debug)]
1865pub struct BufferDescriptor<'a> {
1866 pub label: Label<'a>,
1867 pub size: wgt::BufferAddress,
1868 pub usage: BufferUses,
1869 pub memory_flags: MemoryFlags,
1870}
1871
1872#[derive(Clone, Debug)]
1873pub struct TextureDescriptor<'a> {
1874 pub label: Label<'a>,
1875 pub size: wgt::Extent3d,
1876 pub mip_level_count: u32,
1877 pub sample_count: u32,
1878 pub dimension: wgt::TextureDimension,
1879 pub format: wgt::TextureFormat,
1880 pub usage: TextureUses,
1881 pub memory_flags: MemoryFlags,
1882 /// Allows views of this texture to have a different format
1883 /// than the texture does.
1884 pub view_formats: Vec<wgt::TextureFormat>,
1885}
1886
1887impl TextureDescriptor<'_> {
1888 pub fn copy_extent(&self) -> CopyExtent {
1889 CopyExtent::map_extent_to_copy_size(&self.size, self.dimension)
1890 }
1891
1892 pub fn is_cube_compatible(&self) -> bool {
1893 self.dimension == wgt::TextureDimension::D2
1894 && self.size.depth_or_array_layers % 6 == 0
1895 && self.sample_count == 1
1896 && self.size.width == self.size.height
1897 }
1898
1899 pub fn array_layer_count(&self) -> u32 {
1900 match self.dimension {
1901 wgt::TextureDimension::D1 | wgt::TextureDimension::D3 => 1,
1902 wgt::TextureDimension::D2 => self.size.depth_or_array_layers,
1903 }
1904 }
1905}
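
// A small example of the helper methods above, using an arbitrary cube-compatible 2D texture.
#[test]
fn test_texture_descriptor_helpers_example() {
    let desc = TextureDescriptor {
        label: Some("example cube texture"),
        size: wgt::Extent3d {
            width: 64,
            height: 64,
            depth_or_array_layers: 6,
        },
        mip_level_count: 1,
        sample_count: 1,
        dimension: wgt::TextureDimension::D2,
        format: wgt::TextureFormat::Rgba8Unorm,
        usage: TextureUses::RESOURCE | TextureUses::COPY_DST,
        memory_flags: MemoryFlags::empty(),
        view_formats: Vec::new(),
    };
    // Square, single-sampled, 2D, and a multiple of 6 layers: usable as a cube or cube array.
    assert!(desc.is_cube_compatible());
    // For 2D textures, `depth_or_array_layers` is the number of array layers.
    assert_eq!(desc.array_layer_count(), 6);
}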
1906
1907/// TextureView descriptor.
1908///
1909/// Valid usage:
/// - `format` has to be the same as `TextureDescriptor::format`
/// - `dimension` has to be compatible with `TextureDescriptor::dimension`
/// - `usage` has to be a subset of `TextureDescriptor::usage`
/// - `range` has to be a subset of the parent texture
1914#[derive(Clone, Debug)]
1915pub struct TextureViewDescriptor<'a> {
1916 pub label: Label<'a>,
1917 pub format: wgt::TextureFormat,
1918 pub dimension: wgt::TextureViewDimension,
1919 pub usage: TextureUses,
1920 pub range: wgt::ImageSubresourceRange,
1921}
1922
1923#[derive(Clone, Debug)]
1924pub struct SamplerDescriptor<'a> {
1925 pub label: Label<'a>,
1926 pub address_modes: [wgt::AddressMode; 3],
1927 pub mag_filter: wgt::FilterMode,
1928 pub min_filter: wgt::FilterMode,
1929 pub mipmap_filter: wgt::FilterMode,
1930 pub lod_clamp: Range<f32>,
1931 pub compare: Option<wgt::CompareFunction>,
    // Must be in the range [1, 16].
1933 //
1934 // Anisotropic filtering must be supported if this is not 1.
1935 pub anisotropy_clamp: u16,
1936 pub border_color: Option<wgt::SamplerBorderColor>,
1937}
1938
1939/// BindGroupLayout descriptor.
1940///
1941/// Valid usage:
1942/// - `entries` are sorted by ascending `wgt::BindGroupLayoutEntry::binding`
1943#[derive(Clone, Debug)]
1944pub struct BindGroupLayoutDescriptor<'a> {
1945 pub label: Label<'a>,
1946 pub flags: BindGroupLayoutFlags,
1947 pub entries: &'a [wgt::BindGroupLayoutEntry],
1948}
1949
1950#[derive(Clone, Debug)]
1951pub struct PipelineLayoutDescriptor<'a, B: DynBindGroupLayout + ?Sized> {
1952 pub label: Label<'a>,
1953 pub flags: PipelineLayoutFlags,
1954 pub bind_group_layouts: &'a [&'a B],
1955 pub push_constant_ranges: &'a [wgt::PushConstantRange],
1956}
1957
1958/// A region of a buffer made visible to shaders via a [`BindGroup`].
1959///
1960/// [`BindGroup`]: Api::BindGroup
1961///
1962/// ## Accessible region
1963///
/// `wgpu_hal` guarantees that shaders compiled with bounds checks enabled via
/// [`ShaderModuleDescriptor::runtime_checks`] cannot read or
/// write data via this binding outside the *accessible region* of [`buffer`]:
1967///
1968/// - The accessible region starts at [`offset`].
1969///
1970/// - For [`Storage`] bindings, the size of the accessible region is [`size`],
1971/// which must be a multiple of 4.
1972///
1973/// - For [`Uniform`] bindings, the size of the accessible region is [`size`]
1974/// rounded up to the next multiple of
1975/// [`Alignments::uniform_bounds_check_alignment`].
1976///
1977/// Note that this guarantee is stricter than WGSL's requirements for
1978/// [out-of-bounds accesses][woob], as WGSL allows them to return values from
1979/// elsewhere in the buffer. But this guarantee is necessary anyway, to permit
1980/// `wgpu-core` to avoid clearing uninitialized regions of buffers that will
1981/// never be read by the application before they are overwritten. This
1982/// optimization consults bind group buffer binding regions to determine which
1983/// parts of which buffers shaders might observe. This optimization is only
1984/// sound if shader access is bounds-checked.
1985///
1986/// [`buffer`]: BufferBinding::buffer
1987/// [`offset`]: BufferBinding::offset
1988/// [`size`]: BufferBinding::size
1989/// [`Storage`]: wgt::BufferBindingType::Storage
1990/// [`Uniform`]: wgt::BufferBindingType::Uniform
1991/// [woob]: https://gpuweb.github.io/gpuweb/wgsl/#out-of-bounds-access-sec
1992#[derive(Debug)]
1993pub struct BufferBinding<'a, B: DynBuffer + ?Sized> {
1994 /// The buffer being bound.
1995 pub buffer: &'a B,
1996
1997 /// The offset at which the bound region starts.
1998 ///
    /// This must be less than the size of the buffer. Some backends
2000 /// cannot tolerate zero-length regions; for example, see
2001 /// [VUID-VkDescriptorBufferInfo-offset-00340][340] and
2002 /// [VUID-VkDescriptorBufferInfo-range-00341][341], or the
2003 /// documentation for GLES's [glBindBufferRange][bbr].
2004 ///
2005 /// [340]: https://registry.khronos.org/vulkan/specs/1.3-extensions/html/vkspec.html#VUID-VkDescriptorBufferInfo-offset-00340
2006 /// [341]: https://registry.khronos.org/vulkan/specs/1.3-extensions/html/vkspec.html#VUID-VkDescriptorBufferInfo-range-00341
2007 /// [bbr]: https://registry.khronos.org/OpenGL-Refpages/es3.0/html/glBindBufferRange.xhtml
2008 pub offset: wgt::BufferAddress,
2009
2010 /// The size of the region bound, in bytes.
2011 ///
2012 /// If `None`, the region extends from `offset` to the end of the
2013 /// buffer. Given the restrictions on `offset`, this means that
2014 /// the size is always greater than zero.
2015 pub size: Option<wgt::BufferSize>,
2016}
2017
2018impl<'a, T: DynBuffer + ?Sized> Clone for BufferBinding<'a, T> {
2019 fn clone(&self) -> Self {
2020 BufferBinding {
2021 buffer: self.buffer,
2022 offset: self.offset,
2023 size: self.size,
2024 }
2025 }
2026}
2027
2028#[derive(Debug)]
2029pub struct TextureBinding<'a, T: DynTextureView + ?Sized> {
2030 pub view: &'a T,
2031 pub usage: TextureUses,
2032}
2033
2034impl<'a, T: DynTextureView + ?Sized> Clone for TextureBinding<'a, T> {
2035 fn clone(&self) -> Self {
2036 TextureBinding {
2037 view: self.view,
2038 usage: self.usage,
2039 }
2040 }
2041}
2042
2043#[derive(Clone, Debug)]
2044pub struct BindGroupEntry {
2045 pub binding: u32,
2046 pub resource_index: u32,
2047 pub count: u32,
2048}
2049
2050/// BindGroup descriptor.
2051///
2052/// Valid usage:
/// - `entries` has to be sorted by ascending `BindGroupEntry::binding`
/// - `entries` has to have the same set of `BindGroupEntry::binding` as `layout`
/// - each entry has to be compatible with the `layout`
/// - each entry's `BindGroupEntry::resource_index` is within range
2057/// of the corresponding resource array, selected by the relevant
2058/// `BindGroupLayoutEntry`.
2059#[derive(Clone, Debug)]
2060pub struct BindGroupDescriptor<
2061 'a,
2062 Bgl: DynBindGroupLayout + ?Sized,
2063 B: DynBuffer + ?Sized,
2064 S: DynSampler + ?Sized,
2065 T: DynTextureView + ?Sized,
2066 A: DynAccelerationStructure + ?Sized,
2067> {
2068 pub label: Label<'a>,
2069 pub layout: &'a Bgl,
2070 pub buffers: &'a [BufferBinding<'a, B>],
2071 pub samplers: &'a [&'a S],
2072 pub textures: &'a [TextureBinding<'a, T>],
2073 pub entries: &'a [BindGroupEntry],
2074 pub acceleration_structures: &'a [&'a A],
2075}
2076
2077#[derive(Clone, Debug)]
2078pub struct CommandEncoderDescriptor<'a, Q: DynQueue + ?Sized> {
2079 pub label: Label<'a>,
2080 pub queue: &'a Q,
2081}
2082
2083/// Naga shader module.
2084pub struct NagaShader {
2085 /// Shader module IR.
2086 pub module: Cow<'static, naga::Module>,
2087 /// Analysis information of the module.
2088 pub info: naga::valid::ModuleInfo,
    /// Source code for debugging.
2090 pub debug_source: Option<DebugSource>,
2091}
2092
2093// Custom implementation avoids the need to generate Debug impl code
2094// for the whole Naga module and info.
2095impl fmt::Debug for NagaShader {
2096 fn fmt(&self, formatter: &mut fmt::Formatter) -> fmt::Result {
2097 write!(formatter, "Naga shader")
2098 }
2099}
2100
2101/// Shader input.
2102#[allow(clippy::large_enum_variant)]
2103pub enum ShaderInput<'a> {
2104 Naga(NagaShader),
2105 SpirV(&'a [u32]),
2106}
2107
2108pub struct ShaderModuleDescriptor<'a> {
2109 pub label: Label<'a>,
2110
2111 /// # Safety
2112 ///
2113 /// See the documentation for each flag in [`ShaderRuntimeChecks`][src].
2114 ///
2115 /// [src]: wgt::ShaderRuntimeChecks
2116 pub runtime_checks: wgt::ShaderRuntimeChecks,
2117}
2118
2119#[derive(Debug, Clone)]
2120pub struct DebugSource {
2121 pub file_name: Cow<'static, str>,
2122 pub source_code: Cow<'static, str>,
2123}
2124
2125/// Describes a programmable pipeline stage.
2126#[derive(Debug)]
2127pub struct ProgrammableStage<'a, M: DynShaderModule + ?Sized> {
2128 /// The compiled shader module for this stage.
2129 pub module: &'a M,
2130 /// The name of the entry point in the compiled shader. There must be a function with this name
2131 /// in the shader.
2132 pub entry_point: &'a str,
2133 /// Pipeline constants
2134 pub constants: &'a naga::back::PipelineConstants,
2135 /// Whether workgroup scoped memory will be initialized with zero values for this stage.
2136 ///
    /// This is required by the WebGPU spec, but may have overhead which can be avoided
    /// for cross-platform applications.
2139 pub zero_initialize_workgroup_memory: bool,
2140}
2141
2142impl<M: DynShaderModule + ?Sized> Clone for ProgrammableStage<'_, M> {
2143 fn clone(&self) -> Self {
2144 Self {
2145 module: self.module,
2146 entry_point: self.entry_point,
2147 constants: self.constants,
2148 zero_initialize_workgroup_memory: self.zero_initialize_workgroup_memory,
2149 }
2150 }
2151}
2152
2153/// Describes a compute pipeline.
2154#[derive(Clone, Debug)]
2155pub struct ComputePipelineDescriptor<
2156 'a,
2157 Pl: DynPipelineLayout + ?Sized,
2158 M: DynShaderModule + ?Sized,
2159 Pc: DynPipelineCache + ?Sized,
2160> {
2161 pub label: Label<'a>,
2162 /// The layout of bind groups for this pipeline.
2163 pub layout: &'a Pl,
2164 /// The compiled compute stage and its entry point.
2165 pub stage: ProgrammableStage<'a, M>,
2166 /// The cache which will be used and filled when compiling this pipeline
2167 pub cache: Option<&'a Pc>,
2168}
2169
2170pub struct PipelineCacheDescriptor<'a> {
2171 pub label: Label<'a>,
2172 pub data: Option<&'a [u8]>,
2173}
2174
2175/// Describes how the vertex buffer is interpreted.
2176#[derive(Clone, Debug)]
2177pub struct VertexBufferLayout<'a> {
2178 /// The stride, in bytes, between elements of this buffer.
2179 pub array_stride: wgt::BufferAddress,
2180 /// How often this vertex buffer is "stepped" forward.
2181 pub step_mode: wgt::VertexStepMode,
2182 /// The list of attributes which comprise a single vertex.
2183 pub attributes: &'a [wgt::VertexAttribute],
2184}
2185
2186/// Describes a render (graphics) pipeline.
2187#[derive(Clone, Debug)]
2188pub struct RenderPipelineDescriptor<
2189 'a,
2190 Pl: DynPipelineLayout + ?Sized,
2191 M: DynShaderModule + ?Sized,
2192 Pc: DynPipelineCache + ?Sized,
2193> {
2194 pub label: Label<'a>,
2195 /// The layout of bind groups for this pipeline.
2196 pub layout: &'a Pl,
2197 /// The format of any vertex buffers used with this pipeline.
2198 pub vertex_buffers: &'a [VertexBufferLayout<'a>],
2199 /// The vertex stage for this pipeline.
2200 pub vertex_stage: ProgrammableStage<'a, M>,
2201 /// The properties of the pipeline at the primitive assembly and rasterization level.
2202 pub primitive: wgt::PrimitiveState,
2203 /// The effect of draw calls on the depth and stencil aspects of the output target, if any.
2204 pub depth_stencil: Option<wgt::DepthStencilState>,
2205 /// The multi-sampling properties of the pipeline.
2206 pub multisample: wgt::MultisampleState,
2207 /// The fragment stage for this pipeline.
2208 pub fragment_stage: Option<ProgrammableStage<'a, M>>,
2209 /// The effect of draw calls on the color aspect of the output target.
2210 pub color_targets: &'a [Option<wgt::ColorTargetState>],
2211 /// If the pipeline will be used with a multiview render pass, this indicates how many array
2212 /// layers the attachments will have.
2213 pub multiview: Option<NonZeroU32>,
2214 /// The cache which will be used and filled when compiling this pipeline
2215 pub cache: Option<&'a Pc>,
2216}
2217
2218#[derive(Debug, Clone)]
2219pub struct SurfaceConfiguration {
2220 /// Maximum number of queued frames. Must be in
2221 /// `SurfaceCapabilities::maximum_frame_latency` range.
2222 pub maximum_frame_latency: u32,
2223 /// Vertical synchronization mode.
2224 pub present_mode: wgt::PresentMode,
2225 /// Alpha composition mode.
2226 pub composite_alpha_mode: wgt::CompositeAlphaMode,
2227 /// Format of the surface textures.
2228 pub format: wgt::TextureFormat,
    /// Requested texture extent. Must be compatible with
    /// `SurfaceCapabilities::current_extent`.
2231 pub extent: wgt::Extent3d,
    /// Allowed usage of surface textures.
2233 pub usage: TextureUses,
2234 /// Allows views of swapchain texture to have a different format
2235 /// than the texture does.
2236 pub view_formats: Vec<wgt::TextureFormat>,
2237}
2238
2239#[derive(Debug, Clone)]
2240pub struct Rect<T> {
2241 pub x: T,
2242 pub y: T,
2243 pub w: T,
2244 pub h: T,
2245}
2246
2247#[derive(Debug, Clone, PartialEq)]
2248pub struct StateTransition<T> {
2249 pub from: T,
2250 pub to: T,
2251}
2252
2253#[derive(Debug, Clone)]
2254pub struct BufferBarrier<'a, B: DynBuffer + ?Sized> {
2255 pub buffer: &'a B,
2256 pub usage: StateTransition<BufferUses>,
2257}
2258
2259#[derive(Debug, Clone)]
2260pub struct TextureBarrier<'a, T: DynTexture + ?Sized> {
2261 pub texture: &'a T,
2262 pub range: wgt::ImageSubresourceRange,
2263 pub usage: StateTransition<TextureUses>,
2264}
2265
2266#[derive(Clone, Copy, Debug)]
2267pub struct BufferCopy {
2268 pub src_offset: wgt::BufferAddress,
2269 pub dst_offset: wgt::BufferAddress,
2270 pub size: wgt::BufferSize,
2271}
2272
2273#[derive(Clone, Debug)]
2274pub struct TextureCopyBase {
2275 pub mip_level: u32,
2276 pub array_layer: u32,
2277 /// Origin within a texture.
2278 /// Note: for 1D and 2D textures, Z must be 0.
2279 pub origin: wgt::Origin3d,
2280 pub aspect: FormatAspects,
2281}
2282
2283#[derive(Clone, Copy, Debug)]
2284pub struct CopyExtent {
2285 pub width: u32,
2286 pub height: u32,
2287 pub depth: u32,
2288}
2289
2290#[derive(Clone, Debug)]
2291pub struct TextureCopy {
2292 pub src_base: TextureCopyBase,
2293 pub dst_base: TextureCopyBase,
2294 pub size: CopyExtent,
2295}
2296
2297#[derive(Clone, Debug)]
2298pub struct BufferTextureCopy {
2299 pub buffer_layout: wgt::TexelCopyBufferLayout,
2300 pub texture_base: TextureCopyBase,
2301 pub size: CopyExtent,
2302}
2303
2304#[derive(Clone, Debug)]
2305pub struct Attachment<'a, T: DynTextureView + ?Sized> {
2306 pub view: &'a T,
2307 /// Contains either a single mutating usage as a target,
2308 /// or a valid combination of read-only usages.
2309 pub usage: TextureUses,
2310}
2311
2312#[derive(Clone, Debug)]
2313pub struct ColorAttachment<'a, T: DynTextureView + ?Sized> {
2314 pub target: Attachment<'a, T>,
2315 pub resolve_target: Option<Attachment<'a, T>>,
2316 pub ops: AttachmentOps,
2317 pub clear_value: wgt::Color,
2318}
2319
2320#[derive(Clone, Debug)]
2321pub struct DepthStencilAttachment<'a, T: DynTextureView + ?Sized> {
2322 pub target: Attachment<'a, T>,
2323 pub depth_ops: AttachmentOps,
2324 pub stencil_ops: AttachmentOps,
2325 pub clear_value: (f32, u32),
2326}
2327
2328#[derive(Clone, Debug)]
2329pub struct PassTimestampWrites<'a, Q: DynQuerySet + ?Sized> {
2330 pub query_set: &'a Q,
2331 pub beginning_of_pass_write_index: Option<u32>,
2332 pub end_of_pass_write_index: Option<u32>,
2333}
2334
2335#[derive(Clone, Debug)]
2336pub struct RenderPassDescriptor<'a, Q: DynQuerySet + ?Sized, T: DynTextureView + ?Sized> {
2337 pub label: Label<'a>,
2338 pub extent: wgt::Extent3d,
2339 pub sample_count: u32,
2340 pub color_attachments: &'a [Option<ColorAttachment<'a, T>>],
2341 pub depth_stencil_attachment: Option<DepthStencilAttachment<'a, T>>,
2342 pub multiview: Option<NonZeroU32>,
2343 pub timestamp_writes: Option<PassTimestampWrites<'a, Q>>,
2344 pub occlusion_query_set: Option<&'a Q>,
2345}
2346
2347#[derive(Clone, Debug)]
2348pub struct ComputePassDescriptor<'a, Q: DynQuerySet + ?Sized> {
2349 pub label: Label<'a>,
2350 pub timestamp_writes: Option<PassTimestampWrites<'a, Q>>,
2351}
2352
2353/// Stores the text of any validation errors that have occurred since
2354/// the last call to `get_and_reset`.
2355///
/// Each value is a validation error and a message associated with it,
/// or `None` if the error has no message from the API.
2358///
2359/// This is used for internal wgpu testing only and _must not_ be used
2360/// as a way to check for errors.
2361///
2362/// This works as a static because `cargo nextest` runs all of our
2363/// tests in separate processes, so each test gets its own canary.
2364///
2365/// This prevents the issue of one validation error terminating the
2366/// entire process.
2367pub static VALIDATION_CANARY: ValidationCanary = ValidationCanary {
2368 inner: Mutex::new(Vec::new()),
2369};
2370
2371/// Flag for internal testing.
2372pub struct ValidationCanary {
2373 inner: Mutex<Vec<String>>,
2374}
2375
2376impl ValidationCanary {
2377 #[allow(dead_code)] // in some configurations this function is dead
2378 fn add(&self, msg: String) {
2379 self.inner.lock().push(msg);
2380 }
2381
2382 /// Returns any API validation errors that have occurred in this process
2383 /// since the last call to this function.
2384 pub fn get_and_reset(&self) -> Vec<String> {
2385 self.inner.lock().drain(..).collect()
2386 }
2387}
2388
2389#[test]
2390fn test_default_limits() {
2391 let limits = wgt::Limits::default();
2392 assert!(limits.max_bind_groups <= MAX_BIND_GROUPS as u32);
2393}
2394
2395#[derive(Clone, Debug)]
2396pub struct AccelerationStructureDescriptor<'a> {
2397 pub label: Label<'a>,
2398 pub size: wgt::BufferAddress,
2399 pub format: AccelerationStructureFormat,
2400}
2401
2402#[derive(Debug, Clone, Copy, Eq, PartialEq)]
2403pub enum AccelerationStructureFormat {
2404 TopLevel,
2405 BottomLevel,
2406}
2407
2408#[derive(Debug, Clone, Copy, Eq, PartialEq)]
2409pub enum AccelerationStructureBuildMode {
2410 Build,
2411 Update,
2412}
2413
/// Information about the required sizes for a corresponding entries struct (+ flags).
2415#[derive(Copy, Clone, Debug, Default, Eq, PartialEq)]
2416pub struct AccelerationStructureBuildSizes {
2417 pub acceleration_structure_size: wgt::BufferAddress,
2418 pub update_scratch_size: wgt::BufferAddress,
2419 pub build_scratch_size: wgt::BufferAddress,
2420}
2421
/// Updates use `source_acceleration_structure` if present, otherwise the update is performed in place.
/// For updates, only the data is allowed to change (not the metadata or sizes).
2424#[derive(Clone, Debug)]
2425pub struct BuildAccelerationStructureDescriptor<
2426 'a,
2427 B: DynBuffer + ?Sized,
2428 A: DynAccelerationStructure + ?Sized,
2429> {
2430 pub entries: &'a AccelerationStructureEntries<'a, B>,
2431 pub mode: AccelerationStructureBuildMode,
2432 pub flags: AccelerationStructureBuildFlags,
2433 pub source_acceleration_structure: Option<&'a A>,
2434 pub destination_acceleration_structure: &'a A,
2435 pub scratch_buffer: &'a B,
2436 pub scratch_buffer_offset: wgt::BufferAddress,
2437}
2438
2439/// - All buffers, buffer addresses and offsets will be ignored.
2440/// - The build mode will be ignored.
/// - Reducing the number of instances, triangle groups or AABB groups (or the number of
///   triangles/AABBs in the corresponding groups) may result in reduced size requirements.
2443/// - Any other change may result in a bigger or smaller size requirement.
2444#[derive(Clone, Debug)]
2445pub struct GetAccelerationStructureBuildSizesDescriptor<'a, B: DynBuffer + ?Sized> {
2446 pub entries: &'a AccelerationStructureEntries<'a, B>,
2447 pub flags: AccelerationStructureBuildFlags,
2448}
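
// An illustrative (non-normative) sketch of the size-query/build flow, assuming `device` and
// `encoder` are the backend's `Device` and `CommandEncoder`, and that `entries`, `flags`,
// buffers and acceleration structures were created elsewhere (the exact method signatures
// live on the `Device` and `CommandEncoder` traits):
//
//     let sizes = device.get_acceleration_structure_build_sizes(
//         &GetAccelerationStructureBuildSizesDescriptor { entries: &entries, flags },
//     );
//     // Allocate `sizes.acceleration_structure_size` for the structure itself and
//     // `sizes.build_scratch_size` for the scratch buffer (respecting
//     // `Alignments::ray_tracing_scratch_buffer_alignment`), then record the build:
//     encoder.build_acceleration_structures(1, [build_descriptor]);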
2449
2450/// Entries for a single descriptor
2451/// * `Instances` - Multiple instances for a top level acceleration structure
2452/// * `Triangles` - Multiple triangle meshes for a bottom level acceleration structure
/// * `AABBs` - Multiple lists of axis-aligned bounding boxes for a bottom level acceleration structure
2454#[derive(Debug)]
2455pub enum AccelerationStructureEntries<'a, B: DynBuffer + ?Sized> {
2456 Instances(AccelerationStructureInstances<'a, B>),
2457 Triangles(Vec<AccelerationStructureTriangles<'a, B>>),
2458 AABBs(Vec<AccelerationStructureAABBs<'a, B>>),
2459}
2460
2461/// * `first_vertex` - offset in the vertex buffer (as number of vertices)
2462/// * `indices` - optional index buffer with attributes
2463/// * `transform` - optional transform
2464#[derive(Clone, Debug)]
2465pub struct AccelerationStructureTriangles<'a, B: DynBuffer + ?Sized> {
2466 pub vertex_buffer: Option<&'a B>,
2467 pub vertex_format: wgt::VertexFormat,
2468 pub first_vertex: u32,
2469 pub vertex_count: u32,
2470 pub vertex_stride: wgt::BufferAddress,
2471 pub indices: Option<AccelerationStructureTriangleIndices<'a, B>>,
2472 pub transform: Option<AccelerationStructureTriangleTransform<'a, B>>,
2473 pub flags: AccelerationStructureGeometryFlags,
2474}
2475
2476/// * `offset` - offset in bytes
2477#[derive(Clone, Debug)]
2478pub struct AccelerationStructureAABBs<'a, B: DynBuffer + ?Sized> {
2479 pub buffer: Option<&'a B>,
2480 pub offset: u32,
2481 pub count: u32,
2482 pub stride: wgt::BufferAddress,
2483 pub flags: AccelerationStructureGeometryFlags,
2484}
2485
2486/// * `offset` - offset in bytes
2487#[derive(Clone, Debug)]
2488pub struct AccelerationStructureInstances<'a, B: DynBuffer + ?Sized> {
2489 pub buffer: Option<&'a B>,
2490 pub offset: u32,
2491 pub count: u32,
2492}
2493
2494/// * `offset` - offset in bytes
2495#[derive(Clone, Debug)]
2496pub struct AccelerationStructureTriangleIndices<'a, B: DynBuffer + ?Sized> {
2497 pub format: wgt::IndexFormat,
2498 pub buffer: Option<&'a B>,
2499 pub offset: u32,
2500 pub count: u32,
2501}
2502
2503/// * `offset` - offset in bytes
2504#[derive(Clone, Debug)]
2505pub struct AccelerationStructureTriangleTransform<'a, B: DynBuffer + ?Sized> {
2506 pub buffer: &'a B,
2507 pub offset: u32,
2508}
2509
2510pub use wgt::AccelerationStructureFlags as AccelerationStructureBuildFlags;
2511pub use wgt::AccelerationStructureGeometryFlags;
2512
2513bitflags::bitflags! {
2514 #[derive(Clone, Copy, Debug, PartialEq, Eq, Hash)]
2515 pub struct AccelerationStructureUses: u8 {
        // A BLAS used as an input for a TLAS build
        const BUILD_INPUT = 1 << 0;
        // The target of an acceleration structure build
        const BUILD_OUTPUT = 1 << 1;
        // A TLAS used in a shader
2521 const SHADER_INPUT = 1 << 2;
2522 }
2523}
2524
2525#[derive(Debug, Clone)]
2526pub struct AccelerationStructureBarrier {
2527 pub usage: StateTransition<AccelerationStructureUses>,
2528}
2529
2530#[derive(Debug, Copy, Clone)]
2531pub struct TlasInstance {
2532 pub transform: [f32; 12],
2533 pub custom_index: u32,
2534 pub mask: u8,
2535 pub blas_address: u64,
2536}
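
// An illustrative instance value. The `transform` layout (a 3x4 row-major affine matrix) and
// the 24-bit `custom_index` / 8-bit `mask` limits follow the usual acceleration-structure
// conventions of the underlying APIs; `blas_address` would come from the backend and is left
// as a placeholder here.
#[test]
fn test_tlas_instance_example() {
    let instance = TlasInstance {
        // Identity rotation/scale with zero translation.
        transform: [
            1.0, 0.0, 0.0, 0.0, //
            0.0, 1.0, 0.0, 0.0, //
            0.0, 0.0, 1.0, 0.0, //
        ],
        custom_index: 42,
        mask: 0xFF,       // visible to all ray masks
        blas_address: 0,  // placeholder; a real value comes from the backend
    };
    // `custom_index` is commonly limited to 24 bits by the underlying APIs.
    assert!(instance.custom_index < (1 << 24));
}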