Pulley bytecode operations with their operands. The pseudo-code notation used in the listings below (low32, sext, zext, wrapping arithmetic, and so on) is illustrated with short plain-Rust sketches after the Structs and Enums lists.
Structs§
- BitcastFloatFromInt32 - low32(dst) = bitcast low32(src) as f32
- BitcastFloatFromInt64 - dst = bitcast src as f64
- BitcastIntFromFloat32 - low32(dst) = bitcast low32(src) as i32
- BitcastIntFromFloat64 - dst = bitcast src as i64
- BrIf - Conditionally transfer control to the given PC offset if low32(cond) contains a non-zero value.
- BrIfNot - Conditionally transfer control to the given PC offset if low32(cond) contains a zero value.
- BrIfXeq32 - Branch if a == b.
- BrIfXeq64 - Branch if a == b.
- BrIfXeq32I8 - Branch if a == b.
- BrIfXeq32I32 - Branch if a == b.
- BrIfXeq64I8 - Branch if a == b.
- BrIfXeq64I32 - Branch if a == b.
- BrIfXneq32 - Branch if a != b.
- BrIfXneq64 - Branch if a != b.
- BrIfXneq32I8 - Branch if a != b.
- BrIfXneq32I32 - Branch if a != b.
- BrIfXneq64I8 - Branch if a != b.
- BrIfXneq64I32 - Branch if a != b.
- BrIfXsgt32I8 - Branch if signed a > b.
- BrIfXsgt32I32 - Branch if signed a > b.
- BrIfXsgt64I8 - Branch if signed a > b.
- BrIfXsgt64I32 - Branch if signed a > b.
- BrIfXsgteq32I8 - Branch if signed a >= b.
- BrIfXsgteq32I32 - Branch if signed a >= b.
- BrIfXsgteq64I8 - Branch if signed a >= b.
- BrIfXsgteq64I32 - Branch if signed a >= b.
- BrIfXslt32 - Branch if signed a < b.
- BrIfXslt64 - Branch if signed a < b.
- BrIfXslt32I8 - Branch if signed a < b.
- BrIfXslt32I32 - Branch if signed a < b.
- BrIfXslt64I8 - Branch if signed a < b.
- BrIfXslt64I32 - Branch if signed a < b.
- BrIfXslteq32 - Branch if signed a <= b.
- BrIfXslteq64 - Branch if signed a <= b.
- BrIfXslteq32I8 - Branch if signed a <= b.
- BrIfXslteq32I32 - Branch if signed a <= b.
- BrIfXslteq64I8 - Branch if signed a <= b.
- BrIfXslteq64I32 - Branch if signed a <= b.
- BrIfXugt32U8 - Branch if unsigned a > b.
- BrIfXugt32U32 - Branch if unsigned a > b.
- BrIfXugt64U8 - Branch if unsigned a > b.
- BrIfXugt64U32 - Branch if unsigned a > b.
- BrIfXugteq32U8 - Branch if unsigned a >= b.
- BrIfXugteq32U32 - Branch if unsigned a >= b.
- BrIfXugteq64U8 - Branch if unsigned a >= b.
- BrIfXugteq64U32 - Branch if unsigned a >= b.
- BrIfXult32 - Branch if unsigned a < b.
- BrIfXult64 - Branch if unsigned a < b.
- BrIfXult32U8 - Branch if unsigned a < b.
- BrIfXult32U32 - Branch if unsigned a < b.
- BrIfXult64U8 - Branch if unsigned a < b.
- BrIfXult64U32 - Branch if unsigned a < b.
- BrIfXulteq32 - Branch if unsigned a <= b.
- BrIfXulteq64 - Branch if unsigned a <= b.
- BrIfXulteq32U8 - Branch if unsigned a <= b.
- BrIfXulteq32U32 - Branch if unsigned a <= b.
- BrIfXulteq64U8 - Branch if unsigned a <= b.
- BrIfXulteq64U32 - Branch if unsigned a <= b.
- BrTable32 - Branch to the label indicated by low32(idx).
- Bswap32 - dst = byteswap(low32(src))
- Bswap64 - dst = byteswap(src)
- Call - Transfer control to the PC at the given offset and set the lr register to the PC just after this instruction.
- Call1 - Like call, but also x0 = arg1
- Call2 - Like call, but also x0, x1 = arg1, arg2
- Call3 - Like call, but also x0, x1, x2 = arg1, arg2, arg3
- Call4 - Like call, but also x0, x1, x2, x3 = arg1, arg2, arg3, arg4
- CallIndirect - Transfer control to the PC in reg and set lr to the PC just after this instruction.
- CallIndirectHost - A special opcode to halt interpreter execution and yield control back to the host.
- F32FromF64 - low32(dst) = demote(src)
- F32FromX32S - low32(dst) = checked_f32_from_signed(low32(src))
- F32FromX32U - low32(dst) = checked_f32_from_unsigned(low32(src))
- F32FromX64S - low32(dst) = checked_f32_from_signed(src)
- F32FromX64U - low32(dst) = checked_f32_from_unsigned(src)
- F64FromF32 - dst = promote(low32(src))
- F64FromX32S - dst = checked_f64_from_signed(low32(src))
- F64FromX32U - dst = checked_f64_from_unsigned(low32(src))
- F64FromX64S - dst = checked_f64_from_signed(src)
- F64FromX64U - dst = checked_f64_from_unsigned(src)
- FConst32 - low32(dst) = bits
- FConst64 - dst = bits
- FCopySign32 - low32(dst) = copysign(low32(src1), low32(src2))
- FCopySign64 - dst = copysign(src1, src2)
- FExtractV32x4 - low32(dst) = src[lane]
- FExtractV64x2 - dst = src[lane]
- FSelect32 - low32(dst) = low32(cond) ? low32(if_nonzero) : low32(if_zero)
- FSelect64 - dst = low32(cond) ? if_nonzero : if_zero
- Fabs32 - low32(dst) = |low32(src)|
- Fabs64 - dst = |src|
- Fadd32 - low32(dst) = low32(src1) + low32(src2)
- Fadd64 - dst = src1 + src2
- Fceil32 - low32(dst) = ieee_ceil(low32(src))
- Fceil64 - dst = ieee_ceil(src)
- Fdiv32 - low32(dst) = low32(src1) / low32(src2)
- Fdiv64 - dst = src1 / src2
- Feq32 - low32(dst) = zext(src1 == src2)
- Feq64 - low32(dst) = zext(src1 == src2)
- Ffloor32 - low32(dst) = ieee_floor(low32(src))
- Ffloor64 - dst = ieee_floor(src)
- Fload32BeOffset32 - low32(dst) = zext(*(ptr + offset))
- Fload32LeOffset32 - low32(dst) = zext(*(ptr + offset))
- Fload64BeOffset32 - dst = *(ptr + offset)
- Fload64LeOffset32 - dst = *(ptr + offset)
- Flt32 - low32(dst) = zext(src1 < src2)
- Flt64 - low32(dst) = zext(src1 < src2)
- Flteq32 - low32(dst) = zext(src1 <= src2)
- Flteq64 - low32(dst) = zext(src1 <= src2)
- Fmaximum32 - low32(dst) = ieee_maximum(low32(src1), low32(src2))
- Fmaximum64 - dst = ieee_maximum(src1, src2)
- Fminimum32 - low32(dst) = ieee_minimum(low32(src1), low32(src2))
- Fminimum64 - dst = ieee_minimum(src1, src2)
- Fmov - Move between f registers.
- Fmul32 - low32(dst) = low32(src1) * low32(src2)
- Fmul64 - dst = src1 * src2
- Fnearest32 - low32(dst) = ieee_nearest(low32(src))
- Fnearest64 - dst = ieee_nearest(src)
- Fneg32 - low32(dst) = -low32(src)
- Fneg64 - dst = -src
- Fneq32 - low32(dst) = zext(src1 != src2)
- Fneq64 - low32(dst) = zext(src1 != src2)
- Fsqrt32 - low32(dst) = ieee_sqrt(low32(src))
- Fsqrt64 - dst = ieee_sqrt(src)
- Fstore32BeOffset32 - *(ptr + offset) = low32(src)
- Fstore32LeOffset32 - *(ptr + offset) = low32(src)
- Fstore64BeOffset32 - *(ptr + offset) = src
- Fstore64LeOffset32 - *(ptr + offset) = src
- Fsub32 - low32(dst) = low32(src1) - low32(src2)
- Fsub64 - dst = src1 - src2
- Ftrunc32 - low32(dst) = ieee_trunc(low32(src))
- Ftrunc64 - dst = ieee_trunc(src)
- Jump - Unconditionally transfer control to the PC at the given offset.
- MaterializeOpsVisitor - A visitor that materializes whole Ops as it decodes the bytecode stream.
- Nop - Do nothing.
- PopFrame - sp = fp; pop fp; pop lr
- PopFrameRestore - Inverse of push_frame_save. Restores regs from the top of the stack, then runs stack_free32 amt, then runs pop_frame.
- PushFrame - push lr; push fp; fp = sp
- PushFrameSave - Macro-instruction to enter a function, allocate some stack, and then save some registers.
- Ret - Transfer control to the address in the lr register.
- Sext8 - dst = sext(low8(src))
- Sext16 - dst = sext(low16(src))
- Sext32 - dst = sext(low32(src))
- StackAlloc32 - sp = sp.checked_sub(amt)
- StackFree32 - sp = sp + amt
- Trap - Raise a trap.
- VAddF32x4 - dst = src1 + src2
- VAddF64x2 - dst = src1 + src2
- VAddI8x16 - dst = src1 + src2
- VAddI8x16Sat - dst = saturating_add(src1, src2)
- VAddI16x8 - dst = src1 + src2
- VAddI16x8Sat - dst = saturating_add(src1, src2)
- VAddI32x4 - dst = src1 + src2
- VAddI64x2 - dst = src1 + src2
- VAddU8x16Sat - dst = saturating_add(src1, src2)
- VAddU16x8Sat - dst = saturating_add(src1, src2)
- VAddpairwiseI16x8S - dst = [src1[0] + src1[1], ..., src2[6] + src2[7]]
- VAddpairwiseI32x4S - dst = [src1[0] + src1[1], ..., src2[2] + src2[3]]
- VBand128 - dst = src1 & src2
- VBitselect128 - dst = (c & x) | (!c & y)
- VBnot128 - dst = !src1
- VBor128 - dst = src1 | src2
- VBxor128 - dst = src1 ^ src2
- VDivF64x2 - dst = src1 / src2
- VF32x4FromI32x4S - Int-to-float conversion (same as f32_from_x32_s)
- VF32x4FromI32x4U - Int-to-float conversion (same as f32_from_x32_u)
- VF64x2FromI64x2S - Int-to-float conversion (same as f64_from_x64_s)
- VF64x2FromI64x2U - Int-to-float conversion (same as f64_from_x64_u)
- VFdemote - Demotes the two f64x2 lanes to f32x2 and then extends with two more zero lanes.
- VFpromoteLow - Promotes the low two lanes of the f32x4 input to f64x2.
- VInsertF32 - dst = src1; dst[lane] = src2
- VInsertF64 - dst = src1; dst[lane] = src2
- VInsertX8 - dst = src1; dst[lane] = src2
- VInsertX16 - dst = src1; dst[lane] = src2
- VInsertX32 - dst = src1; dst[lane] = src2
- VInsertX64 - dst = src1; dst[lane] = src2
- VLoad8x8SOffset32 - Load the 64-bit source as i8x8 and sign-extend to i16x8.
- VLoad8x8UOffset32 - Load the 64-bit source as u8x8 and zero-extend to i16x8.
- VLoad16x4LeSOffset32 - Load the 64-bit source as i16x4 and sign-extend to i32x4.
- VLoad16x4LeUOffset32 - Load the 64-bit source as u16x4 and zero-extend to i32x4.
- VLoad32x2LeSOffset32 - Load the 64-bit source as i32x2 and sign-extend to i64x2.
- VLoad32x2LeUOffset32 - Load the 64-bit source as u32x2 and zero-extend to i64x2.
- VLoad128Offset32 - dst = *(ptr + offset)
- VMulF64x2 - dst = src1 * src2
- VMulI8x16 - dst = src1 * src2
- VMulI16x8 - dst = src1 * src2
- VMulI32x4 - dst = src1 * src2
- VMulI64x2 - dst = src1 * src2
- VPopcnt8x16 - dst = count_ones(src)
- VQmulrsI16x8 - dst = signed_saturate(src1 * src2 + (1 << (Q - 1)) >> Q)
- VShlI8x16 - dst = src1 << src2
- VShlI16x8 - dst = src1 << src2
- VShlI32x4 - dst = src1 << src2
- VShlI64x2 - dst = src1 << src2
- VShrI8x16S - dst = src1 >> src2 (signed)
- VShrI8x16U - dst = src1 >> src2 (unsigned)
- VShrI16x8S - dst = src1 >> src2 (signed)
- VShrI16x8U - dst = src1 >> src2 (unsigned)
- VShrI32x4S - dst = src1 >> src2 (signed)
- VShrI32x4U - dst = src1 >> src2 (unsigned)
- VShrI64x2S - dst = src1 >> src2 (signed)
- VShrI64x2U - dst = src1 >> src2 (unsigned)
- VShuffle - dst = shuffle(src1, src2, mask)
- VSplatF32 - dst = splat(low32(src))
- VSplatF64 - dst = splat(src)
- VSplatX8 - dst = splat(low8(src))
- VSplatX16 - dst = splat(low16(src))
- VSplatX32 - dst = splat(low32(src))
- VSplatX64 - dst = splat(src)
- VSubF64x2 - dst = src1 - src2
- VSubI8x16 - dst = src1 - src2
- VSubI8x16Sat - dst = saturating_sub(src1, src2)
- VSubI16x8 - dst = src1 - src2
- VSubI16x8Sat - dst = saturating_sub(src1, src2)
- VSubI32x4 - dst = src1 - src2
- VSubI64x2 - dst = src1 - src2
- VSubU8x16Sat - dst = saturating_sub(src1, src2)
- VSubU16x8Sat - dst = saturating_sub(src1, src2)
- VWidenHigh8x16S - Widens the high lanes of the input vector, as signed, to twice the width.
- VWidenHigh8x16U - Widens the high lanes of the input vector, as unsigned, to twice the width.
- VWidenHigh16x8S - Widens the high lanes of the input vector, as signed, to twice the width.
- VWidenHigh16x8U - Widens the high lanes of the input vector, as unsigned, to twice the width.
- VWidenHigh32x4S - Widens the high lanes of the input vector, as signed, to twice the width.
- VWidenHigh32x4U - Widens the high lanes of the input vector, as unsigned, to twice the width.
- VWidenLow8x16S - Widens the low lanes of the input vector, as signed, to twice the width.
- VWidenLow8x16U - Widens the low lanes of the input vector, as unsigned, to twice the width.
- VWidenLow16x8S - Widens the low lanes of the input vector, as signed, to twice the width.
- VWidenLow16x8U - Widens the low lanes of the input vector, as unsigned, to twice the width.
- VWidenLow32x4S - Widens the low lanes of the input vector, as signed, to twice the width.
- VWidenLow32x4U - Widens the low lanes of the input vector, as unsigned, to twice the width.
- Vabs8x16 - dst = |src|
- Vabs16x8 - dst = |src|
- Vabs32x4 - dst = |src|
- Vabs64x2 - dst = |src|
- Vabsf32x4 - dst = |src|
- Vabsf64x2 - dst = |src|
- Valltrue8x16 - Store whether all lanes are nonzero in dst.
- Valltrue16x8 - Store whether all lanes are nonzero in dst.
- Valltrue32x4 - Store whether all lanes are nonzero in dst.
- Valltrue64x2 - Store whether all lanes are nonzero in dst.
- Vanytrue8x16 - Store whether any lanes are nonzero in dst.
- Vanytrue16x8 - Store whether any lanes are nonzero in dst.
- Vanytrue32x4 - Store whether any lanes are nonzero in dst.
- Vanytrue64x2 - Store whether any lanes are nonzero in dst.
- Vavground8x16 - dst = (src1 + src2 + 1) // 2
- Vavground16x8 - dst = (src1 + src2 + 1) // 2
- Vbitmask8x16 - Collect high bits of each lane into the low 32 bits of the destination.
- Vbitmask16x8 - Collect high bits of each lane into the low 32 bits of the destination.
- Vbitmask32x4 - Collect high bits of each lane into the low 32 bits of the destination.
- Vbitmask64x2 - Collect high bits of each lane into the low 32 bits of the destination.
- Vceil32x4 - low128(dst) = ieee_ceil(low128(src))
- Vceil64x2 - low128(dst) = ieee_ceil(low128(src))
- Vconst128 - dst = imm
- Vdivf32x4 - low128(dst) = low128(src1) / low128(src2)
- Veq8x16 - dst = src == dst
- Veq16x8 - dst = src == dst
- Veq32x4 - dst = src == dst
- Veq64x2 - dst = src == dst
- Vfloor32x4 - low128(dst) = ieee_floor(low128(src))
- Vfloor64x2 - low128(dst) = ieee_floor(low128(src))
- Vmax8x16S - dst = max(src1, src2) (signed)
- Vmax8x16U - dst = max(src1, src2) (unsigned)
- Vmax16x8S - dst = max(src1, src2) (signed)
- Vmax16x8U - dst = max(src1, src2) (unsigned)
- Vmax32x4S - dst = max(src1, src2) (signed)
- Vmax32x4U - dst = max(src1, src2) (unsigned)
- Vmaximumf32x4 - dst = ieee_maximum(src1, src2)
- Vmaximumf64x2 - dst = ieee_maximum(src1, src2)
- Vmin8x16S - dst = min(src1, src2) (signed)
- Vmin8x16U - dst = min(src1, src2) (unsigned)
- Vmin16x8S - dst = min(src1, src2) (signed)
- Vmin16x8U - dst = min(src1, src2) (unsigned)
- Vmin32x4S - dst = min(src1, src2) (signed)
- Vmin32x4U - dst = min(src1, src2) (unsigned)
- Vminimumf32x4 - dst = ieee_minimum(src1, src2)
- Vminimumf64x2 - dst = ieee_minimum(src1, src2)
- Vmov - Move between v registers.
- Vnarrow16x8S - Narrows the two 16x8 vectors, assuming all input lanes are signed, to half the width. Narrowing is signed and saturating.
- Vnarrow16x8U - Narrows the two 16x8 vectors, assuming all input lanes are signed, to half the width. Narrowing is unsigned and saturating.
- Vnarrow32x4S - Narrows the two 32x4 vectors, assuming all input lanes are signed, to half the width. Narrowing is signed and saturating.
- Vnarrow32x4U - Narrows the two 32x4 vectors, assuming all input lanes are signed, to half the width. Narrowing is unsigned and saturating.
- Vnearest32x4 - low128(dst) = ieee_nearest(low128(src))
- Vnearest64x2 - low128(dst) = ieee_nearest(low128(src))
- Vneg8x16 - dst = -src
- Vneg16x8 - dst = -src
- Vneg32x4 - dst = -src
- Vneg64x2 - dst = -src
- VnegF64x2 - dst = -src
- Vneq8x16 - dst = src != dst
- Vneq16x8 - dst = src != dst
- Vneq32x4 - dst = src != dst
- Vneq64x2 - dst = src != dst
- Vslt8x16 - dst = src < dst (signed)
- Vslt16x8 - dst = src < dst (signed)
- Vslt32x4 - dst = src < dst (signed)
- Vslt64x2 - dst = src < dst (signed)
- Vslteq8x16 - dst = src <= dst (signed)
- Vslteq16x8 - dst = src <= dst (signed)
- Vslteq32x4 - dst = src <= dst (signed)
- Vslteq64x2 - dst = src <= dst (signed)
- Vsqrt32x4 - low32(dst) = ieee_sqrt(low32(src))
- Vsqrt64x2 - low32(dst) = ieee_sqrt(low32(src))
- Vstore128LeOffset32 - *(ptr + offset) = src
- Vswizzlei8x16 - dst = swizzle(src1, src2)
- Vtrunc32x4 - low128(dst) = ieee_trunc(low128(src))
- Vtrunc64x2 - low128(dst) = ieee_trunc(low128(src))
- Vult8x16 - dst = src < dst (unsigned)
- Vult16x8 - dst = src < dst (unsigned)
- Vult32x4 - dst = src < dst (unsigned)
- Vult64x2 - dst = src < dst (unsigned)
- Vulteq8x16 - dst = src <= dst (unsigned)
- Vulteq16x8 - dst = src <= dst (unsigned)
- Vulteq32x4 - dst = src <= dst (unsigned)
- Vulteq64x2 - dst = src <= dst (unsigned)
- X32FromF32S - low32(dst) = checked_signed_from_f32(low32(src))
- X32FromF32SSat - low32(dst) = saturating_signed_from_f32(low32(src))
- X32FromF32U - low32(dst) = checked_unsigned_from_f32(low32(src))
- X32FromF32USat - low32(dst) = saturating_unsigned_from_f32(low32(src))
- X32FromF64S - low32(dst) = checked_signed_from_f64(src)
- X32FromF64SSat - low32(dst) = saturating_signed_from_f64(src)
- X32FromF64U - low32(dst) = checked_unsigned_from_f64(src)
- X32FromF64USat - low32(dst) = saturating_unsigned_from_f64(src)
- X64FromF32S - dst = checked_signed_from_f32(low32(src))
- X64FromF32SSat - dst = saturating_signed_from_f32(low32(src))
- X64FromF32U - dst = checked_unsigned_from_f32(low32(src))
- X64FromF32USat - dst = saturating_unsigned_from_f32(low32(src))
- X64FromF64S - dst = checked_signed_from_f64(src)
- X64FromF64SSat - dst = saturating_signed_from_f64(src)
- X64FromF64U - dst = checked_unsigned_from_f64(src)
- X64FromF64USat - dst = saturating_unsigned_from_f64(src)
- XAbs32 - low32(dst) = |low32(src)|
- XAbs64 - dst = |src|
- XBand32 - low32(dst) = low32(src1) & low32(src2)
- XBand64 - dst = src1 & src2
- XBnot32 - low32(dst) = !low32(src1)
- XBnot64 - dst = !src1
- XBor32 - low32(dst) = low32(src1) | low32(src2)
- XBor64 - dst = src1 | src2
- XBxor32 - low32(dst) = low32(src1) ^ low32(src2)
- XBxor64 - dst = src1 ^ src2
- XDiv32S - low32(dst) = low32(src1) / low32(src2) (signed)
- XDiv32U - low32(dst) = low32(src1) / low32(src2) (unsigned)
- XDiv64S - dst = src1 / src2 (signed)
- XDiv64U - dst = src1 / src2 (unsigned)
- XExtractV8x16 - low32(dst) = zext(src[lane])
- XExtractV16x8 - low32(dst) = zext(src[lane])
- XExtractV32x4 - low32(dst) = src[lane]
- XExtractV64x2 - dst = src[lane]
- XJump - Unconditionally transfer control to the PC in the specified register.
- XLoad8S32Offset8 - low32(dst) = sext(*(ptr + offset))
- XLoad8S32Offset32 - low32(dst) = sext(*(ptr + offset))
- XLoad8S64Offset8 - dst = sext(*(ptr + offset))
- XLoad8S64Offset32 - dst = sext(*(ptr + offset))
- XLoad8U32Offset8 - low32(dst) = zext(*(ptr + offset))
- XLoad8U32Offset32 - low32(dst) = zext(*(ptr + offset))
- XLoad8U64Offset8 - dst = zext(*(ptr + offset))
- XLoad8U64Offset32 - dst = zext(*(ptr + offset))
- XLoad16BeS64Offset32 - dst = sext(*(ptr + offset))
- XLoad16BeU64Offset32 - dst = zext(*(ptr + offset))
- XLoad16LeS32Offset8 - low32(dst) = sext(*(ptr + offset))
- XLoad16LeS32Offset32 - low32(dst) = sext(*(ptr + offset))
- XLoad16LeS64Offset8 - dst = sext(*(ptr + offset))
- XLoad16LeS64Offset32 - dst = sext(*(ptr + offset))
- XLoad16LeU32Offset8 - low32(dst) = zext(*(ptr + offset))
- XLoad16LeU32Offset32 - low32(dst) = zext(*(ptr + offset))
- XLoad16LeU64Offset8 - dst = zext(*(ptr + offset))
- XLoad16LeU64Offset32 - dst = zext(*(ptr + offset))
- XLoad32BeS64Offset32 - dst = sext(*(ptr + offset))
- XLoad32BeU64Offset32 - dst = zext(*(ptr + offset))
- XLoad32LeOffset8 - low32(dst) = *(ptr + offset)
- XLoad32LeOffset32 - low32(dst) = *(ptr + offset)
- XLoad32LeS64Offset8 - dst = sext(*(ptr + offset))
- XLoad32LeS64Offset32 - dst = sext(*(ptr + offset))
- XLoad32LeU64Offset8 - dst = zext(*(ptr + offset))
- XLoad32LeU64Offset32 - dst = zext(*(ptr + offset))
- XLoad64BeOffset32 - dst = *(ptr + offset)
- XLoad64LeOffset8 - dst = *(ptr + offset)
- XLoad64LeOffset32 - dst = *(ptr + offset)
- XMul32 - low32(dst) = low32(src1) * low32(src2)
- XMul64 - dst = src1 * src2
- XMulHi64S - dst = high64(src1 * src2) (signed)
- XMulHi64U - dst = high64(src1 * src2) (unsigned)
- XPop32 - *dst = *sp; sp -= 4
- XPop64 - *dst = *sp; sp -= 8
- XPop32Many - for dst in dsts.rev() { xpop32 dst }
- XPop64Many - for dst in dsts.rev() { xpop64 dst }
- XPush32 - *sp = low32(src); sp = sp.checked_add(4)
- XPush64 - *sp = src; sp = sp.checked_add(8)
- XPush32Many - for src in srcs { xpush32 src }
- XPush64Many - for src in srcs { xpush64 src }
- XRem32S - low32(dst) = low32(src1) % low32(src2) (signed)
- XRem32U - low32(dst) = low32(src1) % low32(src2) (unsigned)
- XRem64S - dst = src1 % src2 (signed)
- XRem64U - dst = src1 % src2 (unsigned)
- XSelect32 - low32(dst) = low32(cond) ? low32(if_nonzero) : low32(if_zero)
- XSelect64 - dst = low32(cond) ? if_nonzero : if_zero
- XStore8Offset8 - *(ptr + offset) = low8(src)
- XStore8Offset32 - *(ptr + offset) = low8(src)
- XStore16BeOffset32 - *(ptr + offset) = low16(src)
- XStore16LeOffset8 - *(ptr + offset) = low16(src)
- XStore16LeOffset32 - *(ptr + offset) = low16(src)
- XStore32BeOffset32 - *(ptr + offset) = low32(src)
- XStore32LeOffset8 - *(ptr + offset) = low32(src)
- XStore32LeOffset32 - *(ptr + offset) = low32(src)
- XStore64BeOffset32 - *(ptr + offset) = low64(src)
- XStore64LeOffset8 - *(ptr + offset) = low64(src)
- XStore64LeOffset32 - *(ptr + offset) = low64(src)
- Xadd32 - 32-bit wrapping addition: low32(dst) = low32(src1) + low32(src2).
- Xadd64 - 64-bit wrapping addition: dst = src1 + src2.
- Xadd32U8 - Same as xadd32 but src2 is a zero-extended 8-bit immediate.
- Xadd32U32 - Same as xadd32 but src2 is a 32-bit immediate.
- Xadd32UoverflowTrap - 32-bit checked unsigned addition: low32(dst) = low32(src1) + low32(src2).
- Xadd64U8 - Same as xadd64 but src2 is a zero-extended 8-bit immediate.
- Xadd64U32 - Same as xadd64 but src2 is a zero-extended 32-bit immediate.
- Xadd64UoverflowTrap - 64-bit checked unsigned addition: dst = src1 + src2.
- Xband32S8 - Same as xband32 but src2 is a sign-extended 8-bit immediate.
- Xband32S32 - Same as xband32 but src2 is a sign-extended 32-bit immediate.
- Xband64S8 - Same as xband64 but src2 is a sign-extended 8-bit immediate.
- Xband64S32 - Same as xband64 but src2 is a sign-extended 32-bit immediate.
- Xbmask32 - low32(dst) = if low32(src) == 0 { 0 } else { -1 }
- Xbmask64 - dst = if src == 0 { 0 } else { -1 }
- Xbor32S8 - Same as xbor32 but src2 is a sign-extended 8-bit immediate.
- Xbor32S32 - Same as xbor32 but src2 is a sign-extended 32-bit immediate.
- Xbor64S8 - Same as xbor64 but src2 is a sign-extended 8-bit immediate.
- Xbor64S32 - Same as xbor64 but src2 is a sign-extended 32-bit immediate.
- Xbxor32S8 - Same as xbxor32 but src2 is a sign-extended 8-bit immediate.
- Xbxor32S32 - Same as xbxor32 but src2 is a sign-extended 32-bit immediate.
- Xbxor64S8 - Same as xbxor64 but src2 is a sign-extended 8-bit immediate.
- Xbxor64S32 - Same as xbxor64 but src2 is a sign-extended 32-bit immediate.
- Xclz32 - low32(dst) = leading_zeros(low32(src))
- Xclz64 - dst = leading_zeros(src)
- Xconst8 - Set dst = sign_extend(imm8).
- Xconst16 - Set dst = sign_extend(imm16).
- Xconst32 - Set dst = sign_extend(imm32).
- Xconst64 - Set dst = imm64.
- Xctz32 - low32(dst) = trailing_zeros(low32(src))
- Xctz64 - dst = trailing_zeros(src)
- Xeq32 - low32(dst) = low32(src1) == low32(src2)
- Xeq64 - low32(dst) = src1 == src2
- Xmax32S - low32(dst) = max(low32(src1), low32(src2)) (signed)
- Xmax32U - low32(dst) = max(low32(src1), low32(src2)) (unsigned)
- Xmax64S - dst = max(src1, src2) (signed)
- Xmax64U - dst = max(src1, src2) (unsigned)
- Xmin32S - low32(dst) = min(low32(src1), low32(src2)) (signed)
- Xmin32U - low32(dst) = min(low32(src1), low32(src2)) (unsigned)
- Xmin64S - dst = min(src1, src2) (signed)
- Xmin64U - dst = min(src1, src2) (unsigned)
- Xmov - Move between x registers.
- XmovFp - Gets the special “fp” register and moves it into dst.
- XmovLr - Gets the special “lr” register and moves it into dst.
- Xmul32S8 - Same as xmul32 but src2 is a sign-extended 8-bit immediate.
- Xmul32S32 - Same as xmul32 but src2 is a sign-extended 32-bit immediate.
- Xmul64S8 - Same as xmul64 but src2 is a sign-extended 8-bit immediate.
- Xmul64S32 - Same as xmul64 but src2 is a sign-extended 32-bit immediate.
- Xneg32 - low32(dst) = -low32(src)
- Xneg64 - dst = -src
- Xneq32 - low32(dst) = low32(src1) != low32(src2)
- Xneq64 - low32(dst) = src1 != src2
- Xpopcnt32 - low32(dst) = count_ones(low32(src))
- Xpopcnt64 - dst = count_ones(src)
- Xrotl32 - low32(dst) = rotate_left(low32(src1), low32(src2))
- Xrotl64 - dst = rotate_left(src1, src2)
- Xrotr32 - low32(dst) = rotate_right(low32(src1), low32(src2))
- Xrotr64 - dst = rotate_right(src1, src2)
- Xshl32 - low32(dst) = low32(src1) << low5(src2)
- Xshl64 - dst = src1 << low6(src2)
- Xshl32U6 - low32(dst) = low32(src1) << low5(src2)
- Xshl64U6 - dst = src1 << low6(src2)
- Xshr32S - low32(dst) = low32(src1) >> low5(src2)
- Xshr32SU6 - low32(dst) = low32(src1) >> low5(src2)
- Xshr32U - low32(dst) = low32(src1) >> low5(src2)
- Xshr32UU6 - low32(dst) = low32(src1) >> low5(src2)
- Xshr64S - dst = src1 >> low6(src2)
- Xshr64SU6 - dst = src1 >> low6(src2)
- Xshr64U - dst = src1 >> low6(src2)
- Xshr64UU6 - dst = src1 >> low6(src2)
- Xslt32 - low32(dst) = low32(src1) < low32(src2) (signed)
- Xslt64 - low32(dst) = src1 < src2 (signed)
- Xslteq32 - low32(dst) = low32(src1) <= low32(src2) (signed)
- Xslteq64 - low32(dst) = src1 <= src2 (signed)
- Xsub32 - 32-bit wrapping subtraction: low32(dst) = low32(src1) - low32(src2).
- Xsub64 - 64-bit wrapping subtraction: dst = src1 - src2.
- Xsub32U8 - Same as xsub32 but src2 is a zero-extended 8-bit immediate.
- Xsub32U32 - Same as xsub32 but src2 is a 32-bit immediate.
- Xsub64U8 - Same as xsub64 but src2 is a zero-extended 8-bit immediate.
- Xsub64U32 - Same as xsub64 but src2 is a zero-extended 32-bit immediate.
- Xult32 - low32(dst) = low32(src1) < low32(src2) (unsigned)
- Xult64 - low32(dst) = src1 < src2 (unsigned)
- Xulteq32 - low32(dst) = low32(src1) <= low32(src2) (unsigned)
- Xulteq64 - low32(dst) = src1 <= src2 (unsigned)
- Zext8 - dst = zext(low8(src))
- Zext16 - dst = zext(low16(src))
- Zext32 - dst = zext(low32(src))
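The pseudo-code above uses a small register-semantics notation: low32(x) is the low 32 bits of a 64-bit x register, sext/zext extend a narrow value back to the full register width, and the 32-bit integer arithmetic ops wrap. A minimal plain-Rust sketch of that notation follows; it is illustrative only, the helper names are invented for the example and are not part of this crate's API.

```rust
/// `low32(x)`: the low 32 bits of a 64-bit `x` register.
fn low32(x: u64) -> u32 {
    x as u32
}

/// `dst = sext(low8(src))`, as in `Sext8`: sign-extend the low 8 bits to 64 bits.
fn sext8(src: u64) -> u64 {
    (src as u8) as i8 as i64 as u64
}

/// `dst = zext(low16(src))`, as in `Zext16`: zero-extend the low 16 bits to 64 bits.
fn zext16(src: u64) -> u64 {
    (src as u16) as u64
}

/// `Xadd32`: 32-bit wrapping addition, `low32(dst) = low32(src1) + low32(src2)`.
/// What ends up in the upper 32 bits of `dst` is not modeled here.
fn xadd32(src1: u64, src2: u64) -> u64 {
    low32(src1).wrapping_add(low32(src2)) as u64
}

fn main() {
    assert_eq!(low32(0xDEAD_BEEF_0000_0001), 1);
    assert_eq!(sext8(0x80), 0xFFFF_FFFF_FFFF_FF80);
    assert_eq!(zext16(0xFFFF_1234), 0x1234);
    assert_eq!(xadd32(u32::MAX as u64, 1), 0); // 32-bit wrap-around
}
```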
Enums§
- ExtendedOp - An extended operation/instruction.
- Op - A complete, materialized operation/instruction.
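For orientation, the frame ops listed under Structs compose into the usual prologue/epilogue pattern: PushFrame is push lr; push fp; fp = sp on entry, and PopFrame is sp = fp; pop fp; pop lr on exit. A hedged sketch of that discipline is below; it is not the crate's implementation, the Machine type and its fields are invented for the illustration, and the stack is modeled as a growable Vec rather than real memory.

```rust
// Toy model of the `PushFrame` / `PopFrame` discipline described above.
struct Machine {
    stack: Vec<u64>, // each element stands for one pushed register-sized slot
    lr: u64,         // link register
    fp: usize,       // frame pointer, kept as an index into `stack`
}

impl Machine {
    /// `PushFrame`: push lr; push fp; fp = sp
    fn push_frame(&mut self) {
        self.stack.push(self.lr);
        self.stack.push(self.fp as u64);
        self.fp = self.stack.len(); // fp = sp
    }

    /// `PopFrame`: sp = fp; pop fp; pop lr
    fn pop_frame(&mut self) {
        self.stack.truncate(self.fp);                 // sp = fp (drop locals)
        self.fp = self.stack.pop().unwrap() as usize; // pop fp
        self.lr = self.stack.pop().unwrap();          // pop lr
    }
}

fn main() {
    let mut m = Machine { stack: Vec::new(), lr: 0x1234, fp: 0 };
    m.push_frame();
    m.stack.push(42); // locals allocated inside the frame
    m.pop_frame();    // locals discarded, fp and lr restored
    assert_eq!(m.lr, 0x1234);
    assert!(m.stack.is_empty());
}
```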