Struct quinn_proto::TransportConfig
pub struct TransportConfig { /* private fields */ }
Parameters governing the core QUIC state machine
Default values should be suitable for most internet applications. Application protocols which forbid remotely-initiated streams should set max_concurrent_bidi_streams and max_concurrent_uni_streams to zero.
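For instance, a minimal sketch of such a configuration, refusing all remotely-initiated streams:

use quinn_proto::{TransportConfig, VarInt};

let mut config = TransportConfig::default();
// The peer may not open any streams toward this endpoint.
config
    .max_concurrent_bidi_streams(VarInt::from_u32(0))
    .max_concurrent_uni_streams(VarInt::from_u32(0));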
In some cases, performance or resource requirements can be improved by tuning these values to suit a particular application and/or network connection. In particular, data window sizes can be tuned for a particular expected round trip time, link capacity, and memory availability. Tuning for higher bandwidths and latencies increases worst-case memory consumption, but does not impair performance at lower bandwidths and latencies. The default configuration is tuned for a 100Mbps link with a 100ms round trip time.
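As an illustration, window sizes can be derived from the bandwidth-delay product. The link parameters and the 4x connection-window multiplier below are assumptions for the sketch, not recommendations:

use std::time::Duration;
use quinn_proto::{TransportConfig, VarInt};

// Assumed link: 1 Gbps with a 50 ms round trip time.
let bandwidth_bytes_per_sec: u64 = 1_000_000_000 / 8;
let rtt = Duration::from_millis(50);

// Bandwidth-delay product: bytes that must be in flight to keep the link full.
let bdp = bandwidth_bytes_per_sec * rtt.as_millis() as u64 / 1_000;

let mut config = TransportConfig::default();
config
    .stream_receive_window(VarInt::from_u64(bdp).expect("within VarInt range"))
    .receive_window(VarInt::from_u64(bdp * 4).expect("within VarInt range"))
    .send_window(bdp * 4);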
Implementations
impl TransportConfig
pub fn max_concurrent_bidi_streams(&mut self, value: VarInt) -> &mut Self
Maximum number of incoming bidirectional streams that may be open concurrently
Must be nonzero for the peer to open any bidirectional streams.
Worst-case memory use is directly proportional to max_concurrent_bidi_streams * stream_receive_window, with an upper bound proportional to receive_window.
pub fn max_concurrent_uni_streams(&mut self, value: VarInt) -> &mut Self
Variant of max_concurrent_bidi_streams affecting unidirectional streams
pub fn max_idle_timeout(&mut self, value: Option<IdleTimeout>) -> &mut Self
Maximum duration of inactivity to accept before timing out the connection.
The true idle timeout is the minimum of this and the peer’s own max idle timeout. None represents an infinite timeout.
WARNING: If a peer or its network path malfunctions or acts maliciously, an infinite idle timeout can result in permanently hung futures!
use std::time::Duration;
use quinn_proto::{TransportConfig, VarInt};

let mut config = TransportConfig::default();
// Set the idle timeout as `VarInt`-encoded milliseconds
config.max_idle_timeout(Some(VarInt::from_u32(10_000).into()));
// Set the idle timeout as a `Duration`
config.max_idle_timeout(Some(Duration::from_secs(10).try_into().expect("within range")));
pub fn stream_receive_window(&mut self, value: VarInt) -> &mut Self
Maximum number of bytes the peer may transmit without acknowledgement on any one stream before becoming blocked.
This should be set to at least the expected connection latency multiplied by the maximum desired throughput. Setting this smaller than receive_window helps ensure that a single stream doesn’t monopolize receive buffers, which may otherwise occur if the application chooses not to read from a large stream for a time while still requiring data on other streams.
pub fn receive_window(&mut self, value: VarInt) -> &mut Self
Maximum number of bytes the peer may transmit across all streams of a connection before becoming blocked.
This should be set to at least the expected connection latency multiplied by the maximum desired throughput. Larger values can be useful to allow maximum throughput within a stream while another is blocked.
pub fn send_window(&mut self, value: u64) -> &mut Self
Maximum number of bytes to transmit to a peer without acknowledgment
Provides an upper bound on memory when communicating with peers that issue large amounts of flow control credit. Endpoints that wish to handle large numbers of connections robustly should take care to set this low enough to guarantee memory exhaustion does not occur if every connection uses the entire window.
pub fn max_tlps(&mut self, value: u32) -> &mut Self
Maximum number of tail loss probes before an RTO fires.
pub fn packet_threshold(&mut self, value: u32) -> &mut Self
Maximum reordering in packet number space before FACK-style loss detection considers a packet lost. Should not be less than 3, per RFC 5681.
pub fn time_threshold(&mut self, value: f32) -> &mut Self
Maximum reordering in time space before time-based loss detection considers a packet lost, as a factor of RTT
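As a sketch, both reordering thresholds can be loosened for paths known to reorder heavily; the values below are illustrative assumptions:

use quinn_proto::TransportConfig;

let mut config = TransportConfig::default();
config
    // Tolerate up to 5 packets of reordering before declaring loss.
    .packet_threshold(5)
    // Consider a packet lost after it has been unacknowledged for 1.5 * RTT.
    .time_threshold(1.5);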
pub fn initial_rtt(&mut self, value: Duration) -> &mut Self
The RTT used before an RTT sample is taken
pub fn initial_mtu(&mut self, value: u16) -> &mut Self
The initial value to be used as the maximum UDP payload size before running MTU discovery (see TransportConfig::mtu_discovery_config).
Must be at least 1200, which is the default, and known to be safe for typical internet applications. Larger values are more efficient, but increase the risk of packet loss due to exceeding the network path’s IP MTU. If the provided value is higher than what the network path actually supports, packet loss will eventually trigger black hole detection and bring it down to TransportConfig::min_mtu.
pub fn min_mtu(&mut self, value: u16) -> &mut Self
The maximum UDP payload size guaranteed to be supported by the network.
Must be at least 1200, which is the default, and lower than or equal to TransportConfig::initial_mtu.
Real-world MTUs can vary according to ISP, VPN, and properties of intermediate network links outside of either endpoint’s control. Extreme care should be used when raising this value outside of private networks where these factors are fully controlled. If the provided value is higher than what the network path actually supports, the result will be unpredictable and catastrophic packet loss, without a possibility of repair. Prefer TransportConfig::initial_mtu together with TransportConfig::mtu_discovery_config to set a maximum UDP payload size that robustly adapts to the network.
pub fn mtu_discovery_config(&mut self, value: Option<MtuDiscoveryConfig>) -> &mut Self
Specifies the MTU discovery config (see MtuDiscoveryConfig for details).
Defaults to None, which disables MTU discovery altogether.
Important
MTU discovery depends on platform support for disabling UDP packet fragmentation, which is not always available. If the platform allows fragmenting UDP packets, MTU discovery may end up “discovering” an MTU that is not really supported by the network, causing packet loss down the line.
The quinn crate provides the Endpoint::server and Endpoint::client constructors that automatically disable UDP packet fragmentation on Linux and Windows. When using these constructors, MTU discovery will reliably work, unless the code is compiled targeting an unsupported platform (e.g. iOS). In the latter case, it is advisable to keep MTU discovery disabled.
Users of quinn-proto and authors of custom AsyncUdpSocket implementations should take care to disable UDP packet fragmentation (this is strongly recommended by RFC 9000, regardless of MTU discovery). They can build on top of the quinn-udp crate, used by quinn itself, which provides Linux, Windows, macOS, and FreeBSD support for disabling packet fragmentation.
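For example, a minimal sketch opting in to MTU discovery with its default parameters, assuming the underlying socket already has fragmentation disabled:

use quinn_proto::{MtuDiscoveryConfig, TransportConfig};

let mut config = TransportConfig::default();
// Enable MTU discovery with the default probing behavior.
config.mtu_discovery_config(Some(MtuDiscoveryConfig::default()));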
pub fn persistent_congestion_threshold(&mut self, value: u32) -> &mut Self
Number of consecutive PTOs after which network is considered to be experiencing persistent congestion.
pub fn keep_alive_interval(&mut self, value: Option<Duration>) -> &mut Self
Period of inactivity before sending a keep-alive packet
Keep-alive packets prevent an inactive but otherwise healthy connection from timing out.
None to disable, which is the default. Only one side of any given connection needs keep-alive enabled for the connection to be preserved. Must be set lower than the idle_timeout of both peers to be effective.
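As an illustrative sketch (the specific durations are assumptions), a keep-alive interval paired with a longer idle timeout:

use std::time::Duration;
use quinn_proto::TransportConfig;

let mut config = TransportConfig::default();
// Probe every 5 seconds, comfortably under the 30-second idle timeout.
config
    .keep_alive_interval(Some(Duration::from_secs(5)))
    .max_idle_timeout(Some(Duration::from_secs(30).try_into().expect("within range")));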
pub fn crypto_buffer_size(&mut self, value: usize) -> &mut Self
Maximum quantity of out-of-order crypto layer data to buffer
pub fn allow_spin(&mut self, value: bool) -> &mut Self
Whether the implementation is permitted to set the spin bit on this connection
This allows passive observers to easily judge the round trip time of a connection, which can be useful for network administration but sacrifices a small amount of privacy.
pub fn datagram_receive_buffer_size(&mut self, value: Option<usize>) -> &mut Self
Maximum number of incoming application datagram bytes to buffer, or None to disable incoming datagrams
The peer is forbidden to send single datagrams larger than this size. If the aggregate size of all datagrams that have been received from the peer but not consumed by the application exceeds this value, old datagrams are dropped until it is no longer exceeded.
pub fn datagram_send_buffer_size(&mut self, value: usize) -> &mut Self
Maximum number of outgoing application datagram bytes to buffer
While datagrams are sent ASAP, it is possible for an application to generate data faster than the link, or even the underlying hardware, can transmit them. This limits the amount of memory that may be consumed in that case. When the send buffer is full and a new datagram is sent, older datagrams are dropped until sufficient space is available.
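For instance, a sketch sizing both datagram buffers; the byte counts are illustrative assumptions:

use quinn_proto::TransportConfig;

let mut config = TransportConfig::default();
config
    // Hold at most 64 KiB of unread incoming datagrams before dropping old ones.
    .datagram_receive_buffer_size(Some(64 * 1024))
    // Queue at most 1 MiB of outgoing datagrams before dropping old ones.
    .datagram_send_buffer_size(1024 * 1024);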
pub fn congestion_controller_factory(&mut self, factory: impl ControllerFactory + Send + Sync + 'static) -> &mut Self
How to construct new congestion::Controller instances
Typically the refcounted configuration of a congestion::Controller, e.g. a congestion::NewRenoConfig.
Example
use std::sync::Arc;
use quinn_proto::{congestion, TransportConfig};
let mut config = TransportConfig::default();
config.congestion_controller_factory(Arc::new(congestion::NewRenoConfig::default()));
pub fn enable_segmentation_offload(&mut self, enabled: bool) -> &mut Self
Whether to use “Generic Segmentation Offload” to accelerate transmits, when supported by the environment
Defaults to true.
GSO dramatically reduces CPU consumption when sending large numbers of packets with the same headers, such as when transmitting bulk data on a connection. However, it is not supported by all network interface drivers or packet inspection tools. quinn-udp will attempt to disable GSO automatically when unavailable, but this can lead to spurious packet loss at startup, temporarily degrading performance.
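If the deployment environment is known to mishandle GSO, it can be turned off up front; a minimal sketch:

use quinn_proto::TransportConfig;

let mut config = TransportConfig::default();
// Send each packet individually instead of batching transmits via GSO.
config.enable_segmentation_offload(false);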