pub trait LowerHex {
    // Required method
    fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), Error>;
}
x formatting.

The LowerHex trait should format its output as a number in hexadecimal, with a through f in lower case.
For primitive signed integers (i8 to i128, and isize), negative values are formatted as the two’s complement representation.

The alternate flag, #, adds a 0x in front of the output.
For more information on formatters, see the module-level documentation.
§Examples
Basic usage with i32:
let y = 42; // 42 is '2a' in hex
assert_eq!(format!("{y:x}"), "2a");
assert_eq!(format!("{y:#x}"), "0x2a");
assert_eq!(format!("{:x}", -16), "fffffff0");
Implementing LowerHex on a type:
use std::fmt;

struct Length(i32);

impl fmt::LowerHex for Length {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        let val = self.0;

        fmt::LowerHex::fmt(&val, f) // delegate to i32's implementation
    }
}

let l = Length(9);

assert_eq!(format!("l as hex is: {l:x}"), "l as hex is: 9");
assert_eq!(format!("l as hex is: {l:#010x}"), "l as hex is: 0x00000009");
Required Methods§
fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), Error>
Formats the value using the given formatter.
§Errors
This function should return Err if, and only if, the provided Formatter returns Err.

String formatting is considered an infallible operation; this function only returns a Result because writing to the underlying stream might fail and it must provide a way to propagate the fact that an error has occurred back up the stack.
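A minimal sketch of that contract (the Broken sink below is purely illustrative): converting the value to hex cannot fail, so the only Err an implementation should surface is one coming from the destination being written to.

use std::fmt::{self, Write};

// A sink that always refuses input, standing in for a failing stream.
struct Broken;

impl fmt::Write for Broken {
    fn write_str(&mut self, _: &str) -> fmt::Result {
        Err(fmt::Error)
    }
}

let mut out = Broken;
// Formatting 42 as hex is infallible in itself; the error exists only
// because the underlying writer failed.
assert!(write!(out, "{:x}", 42).is_err());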
Implementors§
impl LowerHex for i8
impl LowerHex for i16
impl LowerHex for i32
impl LowerHex for i64
impl LowerHex for i128
impl LowerHex for isize
impl LowerHex for u8
impl LowerHex for u16
impl LowerHex for u32
impl LowerHex for u64
impl LowerHex for u128
impl LowerHex for usize
impl LowerHex for Felt
Represents Felt in lowercase hexadecimal format.
impl LowerHex for Limb
impl LowerHex for BigInt
impl LowerHex for BigUint
impl LowerHex for FieldElement
impl<'a, T, O> LowerHex for Domain<'a, Const, T, O>
impl<A, O> LowerHex for BitArray<A, O> where O: BitOrder, A: BitViewSized
impl<O> LowerHex for I16<O> where O: ByteOrder
impl<O> LowerHex for I32<O> where O: ByteOrder
impl<O> LowerHex for I64<O> where O: ByteOrder
impl<O> LowerHex for I128<O> where O: ByteOrder
impl<O> LowerHex for U16<O> where O: ByteOrder
impl<O> LowerHex for U32<O> where O: ByteOrder
impl<O> LowerHex for U64<O> where O: ByteOrder
impl<O> LowerHex for U128<O> where O: ByteOrder
impl<T> LowerHex for &T
impl<T> LowerHex for &mut T
impl<T> LowerHex for cairo_vm::with_std::num::NonZero<T> where T: ZeroablePrimitive + LowerHex
impl<T> LowerHex for Saturating<T> where T: LowerHex
impl<T> LowerHex for cairo_vm::with_std::num::Wrapping<T> where T: LowerHex
impl<T> LowerHex for crypto_bigint::non_zero::NonZero<T>
impl<T> LowerHex for crypto_bigint::wrapping::Wrapping<T> where T: LowerHex
impl<T> LowerHex for GenericArray<u8, T>
impl<T> LowerHex for FmtBinary<T>
impl<T> LowerHex for FmtDisplay<T>
impl<T> LowerHex for FmtList<T>
impl<T> LowerHex for FmtLowerExp<T>
impl<T> LowerHex for FmtLowerHex<T> where T: LowerHex
impl<T> LowerHex for FmtOctal<T>
impl<T> LowerHex for FmtPointer<T>
impl<T> LowerHex for FmtUpperExp<T>
impl<T> LowerHex for FmtUpperHex<T>
impl<T, O> LowerHex for BitBox<T, O>
impl<T, O> LowerHex for BitSlice<T, O>
§Bit-Slice Rendering
This implementation prints the contents of a &BitSlice in one of binary, octal, or hexadecimal. It is important to note that this does not render the raw underlying memory! It renders the semantically-ordered contents of the bit-slice as numerals. This distinction matters if you use type parameters that differ from those presumed by your debugger (which is usually <u8, Msb0>).
The output separates the T elements as individual list items, and renders each element as a base-2, -8, or -16 numeric string. When walking an element, the bits traversed by the bit-slice are considered to be stored in most-significant-bit-first ordering. This means that index [0] is the high bit of the left-most digit, and index [n] is the low bit of the right-most digit, in a given printed word.
In order to render according to expectations of the Arabic numeral system, an element being transcribed is chunked into digits from the least-significant end of its rendered form. This is most noticeable in octal, which will always have a smaller ceiling on the left-most digit in a printed word, while the right-most digit in that word is able to use the full 0 ..= 7 numeral range.
§Examples
use bitvec::prelude::*;

let data = [
    0b000000_10u8,
    // digits print LTR
    0b10_001_101,
    // significance is computed RTL
    0b01_000000,
];
let bits = &data.view_bits::<Msb0>()[6 .. 18];

assert_eq!(format!("{:b}", bits), "[10, 10001101, 01]");
assert_eq!(format!("{:o}", bits), "[2, 215, 1]");
assert_eq!(format!("{:X}", bits), "[2, 8D, 1]");
The {:#} format modifier causes the standard 0b, 0o, or 0x prefix to be applied to each printed word. The other format specifiers are not interpreted by this implementation, and apply to the entire rendered text, not to individual words.
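To illustrate the note above that the choice of type parameters matters, here is a minimal sketch; the expected strings are inferred from the rendering rules described in this section rather than copied from the upstream documentation:

use bitvec::prelude::*;

let byte = 0b0000_0011u8;
// Lsb0 starts traversal at the least significant bit, Msb0 at the most
// significant, so the same two-bit prefix reads differently.
assert_eq!(format!("{:b}", &byte.view_bits::<Lsb0>()[.. 2]), "[11]");
assert_eq!(format!("{:b}", &byte.view_bits::<Msb0>()[.. 2]), "[00]");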