Module pulley_interpreter::op

Pulley bytecode operations with their operands.

Structs

BitcastFloatFromInt32
low32(dst) = bitcast low32(src) as f32
BitcastFloatFromInt64
dst = bitcast src as f64
BitcastIntFromFloat32
low32(dst) = bitcast low32(src) as i32
BitcastIntFromFloat64
dst = bitcast src as i64
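
These bitcasts reinterpret bits without changing them. A minimal Rust sketch using the standard library's bit-preserving conversions (the function names here are illustrative, not the interpreter's API):

    fn bitcast_float_from_int32(src: u32) -> f32 {
        // Reinterpret the low 32 bits of an x register as an f32.
        f32::from_bits(src)
    }

    fn bitcast_int_from_float64(src: f64) -> u64 {
        // Reinterpret all 64 bits of an f register as an integer.
        src.to_bits()
    }
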
BrIf
Conditionally transfer control to the given PC offset if low32(cond) contains a non-zero value.
BrIfNot
Conditionally transfer control to the given PC offset if low32(cond) contains a zero value.
BrIfXeq32
Branch if a == b.
BrIfXeq64
Branch if a == b.
BrIfXeq32I8
Branch if a == b.
BrIfXeq32I32
Branch if a == b.
BrIfXeq64I8
Branch if a == b.
BrIfXeq64I32
Branch if a == b.
BrIfXneq32
Branch if a != b.
BrIfXneq64
Branch if a != b.
BrIfXneq32I8
Branch if a != b.
BrIfXneq32I32
Branch if a != b.
BrIfXneq64I8
Branch if a != b.
BrIfXneq64I32
Branch if a != b.
BrIfXsgt32I8
Branch if signed a > b.
BrIfXsgt32I32
Branch if signed a > b.
BrIfXsgt64I8
Branch if signed a > b.
BrIfXsgt64I32
Branch if signed a > b.
BrIfXsgteq32I8
Branch if signed a >= b.
BrIfXsgteq32I32
Branch if signed a >= b.
BrIfXsgteq64I8
Branch if signed a >= b.
BrIfXsgteq64I32
Branch if signed a >= b.
BrIfXslt32
Branch if signed a < b.
BrIfXslt64
Branch if signed a < b.
BrIfXslt32I8
Branch if signed a < b.
BrIfXslt32I32
Branch if signed a < b.
BrIfXslt64I8
Branch if signed a < b.
BrIfXslt64I32
Branch if signed a < b.
BrIfXslteq32
Branch if signed a <= b.
BrIfXslteq64
Branch if signed a <= b.
BrIfXslteq32I8
Branch if signed a <= b.
BrIfXslteq32I32
Branch if signed a <= b.
BrIfXslteq64I8
Branch if signed a <= b.
BrIfXslteq64I32
Branch if signed a <= b.
BrIfXugt32U8
Branch if unsigned a > b.
BrIfXugt32U32
Branch if unsigned a > b.
BrIfXugt64U8
Branch if unsigned a > b.
BrIfXugt64U32
Branch if unsigned a > b.
BrIfXugteq32U8
Branch if unsigned a >= b.
BrIfXugteq32U32
Branch if unsigned a >= b.
BrIfXugteq64U8
Branch if unsigned a >= b.
BrIfXugteq64U32
Branch if unsigned a >= b.
BrIfXult32
Branch if unsigned a < b.
BrIfXult64
Branch if unsigned a < b.
BrIfXult32U8
Branch if unsigned a < b.
BrIfXult32U32
Branch if unsigned a < b.
BrIfXult64U8
Branch if unsigned a < b.
BrIfXult64U32
Branch if unsigned a < b.
BrIfXulteq32
Branch if unsigned a <= b.
BrIfXulteq64
Branch if unsigned a <= b.
BrIfXulteq32U8
Branch if unsigned a <= b.
BrIfXulteq32U32
Branch if unsigned a <= b.
BrIfXulteq64U8
Branch if unsigned a <= b.
BrIfXulteq64U32
Branch if unsigned a <= b.
BrTable32
Branch to the label indicated by low32(idx).
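
Every BrIf* op above shares one shape: evaluate the comparison, then either add a signed offset to the program counter or fall through. A hedged sketch (instruction widths and PC bookkeeping are simplified):

    fn br_if_xslt32(pc: u64, offset: i32, a: u32, b: u32) -> u64 {
        if (a as i32) < (b as i32) {
            // Taken: transfer control to pc + offset.
            pc.wrapping_add(offset as i64 as u64)
        } else {
            // Not taken: fall through to the next instruction
            // (its encoded width is elided here).
            pc
        }
    }
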
Bswap32
dst = byteswap(low32(src))
Bswap64
dst = byteswap(src)
Call
Transfer control to the PC at the given offset and set the lr register to the PC just after this instruction.
Call1
Like call, but also x0 = arg1
Call2
Like call, but also x0, x1 = arg1, arg2
Call3
Like call, but also x0, x1, x2 = arg1, arg2, arg3
Call4
Like call, but also x0, x1, x2, x3 = arg1, arg2, arg3, arg4
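
Call1 through Call4 fold argument moves into the call itself: the first N x registers are loaded before control transfers. An illustrative sketch, assuming a flat regs array indexed by x-register number:

    fn call3_setup(regs: &mut [u64; 32], args: [u64; 3]) {
        regs[0] = args[0]; // x0 = arg1
        regs[1] = args[1]; // x1 = arg2
        regs[2] = args[2]; // x2 = arg3
        // ...then proceed exactly as `call`: lr = return pc, jump to target.
    }
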
CallIndirect
Transfer control to the PC in reg and set lr to the PC just after this instruction.
CallIndirectHost
A special opcode to halt interpreter execution and yield control back to the host.
F32FromF64
low32(dst) = demote(src)
F32FromX32S
low32(dst) = checked_f32_from_signed(low32(src))
F32FromX32U
low32(dst) = checked_f32_from_unsigned(low32(src))
F32FromX64S
low32(dst) = checked_f32_from_signed(src)
F32FromX64U
low32(dst) = checked_f32_from_unsigned(src)
F64FromF32
dst = promote(low32(src))
F64FromX32S
dst = checked_f64_from_signed(low32(src))
F64FromX32U
dst = checked_f64_from_unsigned(low32(src))
F64FromX64S
dst = checked_f64_from_signed(src)
F64FromX64U
dst = checked_f64_from_unsigned(src)
FConst32
low32(dst) = bits
FConst64
dst = bits
FCopySign32
low32(dst) = copysign(low32(src1), low32(src2))
FCopySign64
dst = copysign(src1, src2)
FExtractV32x4
low32(dst) = src[lane]
FExtractV64x2
dst = src[lane]
FSelect32
low32(dst) = low32(cond) ? low32(if_nonzero) : low32(if_zero)
FSelect64
dst = low32(cond) ? if_nonzero : if_zero
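
The FSelect ops are conditional moves: cond is an x register, and only its low 32 bits are tested. Scalar sketch:

    fn fselect32(cond: u32, if_nonzero: f32, if_zero: f32) -> f32 {
        // low32(dst) = low32(cond) ? low32(if_nonzero) : low32(if_zero)
        if cond != 0 { if_nonzero } else { if_zero }
    }
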
Fabs32
low32(dst) = |low32(src)|
Fabs64
dst = |src|
Fadd32
low32(dst) = low32(src1) + low32(src2)
Fadd64
dst = src1 + src2
Fceil32
low32(dst) = ieee_ceil(low32(src))
Fceil64
dst = ieee_ceil(src)
Fdiv32
low32(dst) = low32(src1) / low32(src2)
Fdiv64
dst = src1 / src2
Feq32
low32(dst) = zext(low32(src1) == low32(src2))
Feq64
low32(dst) = zext(src1 == src2)
Ffloor32
low32(dst) = ieee_floor(low32(src))
Ffloor64
dst = ieee_floor(src)
Fload32BeOffset32
low32(dst) = zext(*(ptr + offset))
Fload32LeOffset32
low32(dst) = zext(*(ptr + offset))
Fload64BeOffset32
dst = *(ptr + offset)
Fload64LeOffset32
dst = *(ptr + offset)
Flt32
low32(dst) = zext(low32(src1) < low32(src2))
Flt64
low32(dst) = zext(src1 < src2)
Flteq32
low32(dst) = zext(low32(src1) <= low32(src2))
Flteq64
low32(dst) = zext(src1 <= src2)
Fmaximum32
low32(dst) = ieee_maximum(low32(src1), low32(src2))
Fmaximum64
dst = ieee_maximum(src1, src2)
Fminimum32
low32(dst) = ieee_minimum(low32(src1), low32(src2))
Fminimum64
dst = ieee_minimum(src1, src2)
Fmov
Move between f registers.
Fmul32
low32(dst) = low32(src1) * low32(src2)
Fmul64
dst = src1 * src2
Fnearest32
low32(dst) = ieee_nearest(low32(src))
Fnearest64
dst = ieee_nearest(src)
Fneg32
low32(dst) = -low32(src)
Fneg64
dst = -src
Fneq32
low32(dst) = zext(low32(src1) != low32(src2))
Fneq64
low32(dst) = zext(src1 != src2)
Fsqrt32
low32(dst) = ieee_sqrt(low32(src))
Fsqrt64
dst = ieee_sqrt(src)
Fstore32BeOffset32
*(ptr + offset) = low32(src)
Fstore32LeOffset32
*(ptr + offset) = low32(src)
Fstore64BeOffset32
*(ptr + offset) = src
Fstore64LeOffset32
*(ptr + offset) = src
Fsub32
low32(dst) = low32(src1) - low32(src2)
Fsub64
dst = src1 - src2
Ftrunc32
low32(dst) = ieee_trunc(low32(src))
Ftrunc64
dst = ieee_trunc(src)
Jump
Unconditionally transfer control to the PC at the given offset.
MaterializeOpsVisitor
A visitor that materializes whole Ops as it decodes the bytecode stream.
Nop
Do nothing.
PopFrame
sp = fp; pop fp; pop lr
PopFrameRestore
Inverse of push_frame_save. Restores regs from the top of the stack, then runs stack_free32 amt, then runs pop_frame.
PushFrame
push lr; push fp; fp = sp
PushFrameSave
Macro-instruction to enter a function, allocate some stack, and then save some registers.
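
push_frame and pop_frame are exact inverses. A minimal runnable model using a Vec as the stack (the real interpreter works on raw memory through sp):

    struct Machine { stack: Vec<u64>, fp: usize, lr: u64 }

    impl Machine {
        fn push_frame(&mut self) {
            self.stack.push(self.lr);        // push lr
            self.stack.push(self.fp as u64); // push fp
            self.fp = self.stack.len();      // fp = sp
        }

        fn pop_frame(&mut self) {
            self.stack.truncate(self.fp);                 // sp = fp
            self.fp = self.stack.pop().unwrap() as usize; // pop fp
            self.lr = self.stack.pop().unwrap();          // pop lr
        }
    }
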
Ret
Transfer control to the address in the lr register.
Sext8
dst = sext(low8(src))
Sext16
dst = sext(low16(src))
Sext32
dst = sext(low32(src))
StackAlloc32
sp = sp.checked_sub(amt)
StackFree32
sp = sp + amt
Trap
Raise a trap.
VAddF32x4
dst = src1 + src2
VAddF64x2
dst = src1 + src2
VAddI8x16
dst = src1 + src2
VAddI8x16Sat
dst = saturating_add(src1, src2)
VAddI16x8
dst = src1 + src2
VAddI16x8Sat
dst = saturating_add(src1, src2)
VAddI32x4
dst = src1 + src2
VAddI64x2
dst = src1 + src2
VAddU8x16Sat
dst = saturating_add(src1, src2)
VAddU16x8Sat
dst = saturating_add(src1, src2)
VAddpairwiseI16x8S
dst = [src1[0] + src1[1], ..., src2[6] + src2[7]]
VAddpairwiseI32x4S
dst = [src1[0] + src1[1], ..., src2[2] + src2[3]]
VBand128
dst = src1 & src2
VBitselect128
dst = (c & x) | (!c & y)
VBnot128
dst = !src1
VBor128
dst = src1 | src2
VBxor128
dst = src1 ^ src2
VDivF64x2
dst = src1 / src2
VF32x4FromI32x4S
Int-to-float conversion (same as f32_from_x32_s)
VF32x4FromI32x4U
Int-to-float conversion (same as f32_from_x32_u)
VF64x2FromI64x2S
Int-to-float conversion (same as f64_from_x64_s)
VF64x2FromI64x2U
Int-to-float conversion (same as f64_from_x64_u)
VFdemote
Demotes the two f64x2 lanes to f32x2 and then extends with two more zero lanes.
VFpromoteLow
Promotes the low two lanes of the f32x4 input to f64x2.
VInsertF32
dst = src1; dst[lane] = src2
VInsertF64
dst = src1; dst[lane] = src2
VInsertX8
dst = src1; dst[lane] = src2
VInsertX16
dst = src1; dst[lane] = src2
VInsertX32
dst = src1; dst[lane] = src2
VInsertX64
dst = src1; dst[lane] = src2
VLoad8x8SOffset32
Load the 64-bit source as i8x8 and sign-extend to i16x8.
VLoad8x8UOffset32
Load the 64-bit source as u8x8 and zero-extend to i16x8.
VLoad16x4LeSOffset32
Load the 64-bit source as i16x4 and sign-extend to i32x4.
VLoad16x4LeUOffset32
Load the 64-bit source as u16x4 and zero-extend to i32x4.
VLoad32x2LeSOffset32
Load the 64-bit source as i32x2 and sign-extend to i64x2.
VLoad32x2LeUOffset32
Load the 64-bit source as u32x2 and zero-extend to i64x2.
VLoad128Offset32
dst = *(ptr + offset)
VMulF64x2
dst = src1 * src2
VMulI8x16
dst = src1 * src2
VMulI16x8
dst = src1 * src2
VMulI32x4
dst = src1 * src2
VMulI64x2
dst = src1 * src2
VPopcnt8x16
dst = count_ones(src)
VQmulrsI16x8
dst = signed_saturate((src1 * src2 + (1 << (Q - 1))) >> Q)
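
Per lane this is the classic Q15 rounding, saturating multiply. A worked single-lane version with Q = 15:

    fn qmulrs_lane(a: i16, b: i16) -> i16 {
        let prod = i32::from(a) * i32::from(b);
        // Add 1 << (Q - 1) to round, shift down by Q, then saturate.
        let rounded = (prod + (1 << 14)) >> 15;
        rounded.clamp(i32::from(i16::MIN), i32::from(i16::MAX)) as i16
    }
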
VShlI8x16
dst = src1 << src2
VShlI16x8
dst = src1 << src2
VShlI32x4
dst = src1 << src2
VShlI64x2
dst = src1 << src2
VShrI8x16S
dst = src1 >> src2 (signed)
VShrI8x16U
dst = src1 >> src2 (unsigned)
VShrI16x8S
dst = src1 >> src2 (signed)
VShrI16x8U
dst = src1 >> src2 (unsigned)
VShrI32x4S
dst = src1 >> src2 (signed)
VShrI32x4U
dst = src1 >> src2 (unsigned)
VShrI64x2S
dst = src1 >> src2 (signed)
VShrI64x2U
dst = src1 >> src2 (unsigned)
VShuffle
dst = shuffle(src1, src2, mask)
VSplatF32
dst = splat(low32(src))
VSplatF64
dst = splat(src)
VSplatX8
dst = splat(low8(src))
VSplatX16
dst = splat(low16(src))
VSplatX32
dst = splat(low32(src))
VSplatX64
dst = splat(src)
VSubF64x2
dst = src1 - src2
VSubI8x16
dst = src1 - src2
VSubI8x16Sat
dst = saturating_sub(src1, src2)
VSubI16x8
dst = src1 - src2
VSubI16x8Sat
dst = saturating_sub(src1, src2)
VSubI32x4
dst = src1 - src2
VSubI64x2
dst = src1 - src2
VSubU8x16Sat
dst = saturating_sub(src1, src2)
VSubU16x8Sat
dst = saturating_sub(src1, src2)
VWidenHigh8x16S
Widens the high lanes of the input vector, as signed, to twice the width.
VWidenHigh8x16U
Widens the high lanes of the input vector, as unsigned, to twice the width.
VWidenHigh16x8S
Widens the high lanes of the input vector, as signed, to twice the width.
VWidenHigh16x8U
Widens the high lanes of the input vector, as unsigned, to twice the width.
VWidenHigh32x4S
Widens the high lanes of the input vector, as signed, to twice the width.
VWidenHigh32x4U
Widens the high lanes of the input vector, as unsigned, to twice the width.
VWidenLow8x16S
Widens the low lanes of the input vector, as signed, to twice the width.
VWidenLow8x16U
Widens the low lanes of the input vector, as unsigned, to twice the width.
VWidenLow16x8S
Widens the low lanes of the input vector, as signed, to twice the width.
VWidenLow16x8U
Widens the low lanes of the input vector, as unsigned, to twice the width.
VWidenLow32x4S
Widens the low lanes of the input vector, as signed, to twice the width.
VWidenLow32x4U
Widens the low lanes of the input vector, as unsigned, to twice the width.
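
Each widen op takes half the lanes (low or high) and extends every one to twice its width; the S/U suffix picks sign or zero extension. Array sketch of the signed 8x16 pair:

    fn vwiden_low8x16_s(src: [i8; 16]) -> [i16; 8] {
        core::array::from_fn(|i| i16::from(src[i])) // low 8 lanes
    }

    fn vwiden_high8x16_s(src: [i8; 16]) -> [i16; 8] {
        core::array::from_fn(|i| i16::from(src[i + 8])) // high 8 lanes
    }
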
Vabs8x16
dst = |src|
Vabs16x8
dst = |src|
Vabs32x4
dst = |src|
Vabs64x2
dst = |src|
Vabsf32x4
dst = |src|
Vabsf64x2
dst = |src|
Valltrue8x16
Store whether all lanes are nonzero in dst.
Valltrue16x8
Store whether all lanes are nonzero in dst.
Valltrue32x4
Store whether all lanes are nonzero in dst.
Valltrue64x2
Store whether all lanes are nonzero in dst.
Vanytrue8x16
Store whether any lanes are nonzero in dst.
Vanytrue16x8
Store whether any lanes are nonzero in dst.
Vanytrue32x4
Store whether any lanes are nonzero in dst.
Vanytrue64x2
Store whether any lanes are nonzero in dst.
Vavground8x16
dst = (src1 + src2 + 1) // 2
Vavground16x8
dst = (src1 + src2 + 1) // 2
Vbitmask8x16
Collect the high bit of each lane into the low 32 bits of the destination.
Vbitmask16x8
Collect the high bit of each lane into the low 32 bits of the destination.
Vbitmask32x4
Collect the high bit of each lane into the low 32 bits of the destination.
Vbitmask64x2
Collect the high bit of each lane into the low 32 bits of the destination.
Vceil32x4
low128(dst) = ieee_ceil(low128(src))
Vceil64x2
low128(dst) = ieee_ceil(low128(src))
Vconst128
dst = imm
Vdivf32x4
low128(dst) = low128(src1) / low128(src2)
Veq8x16
dst = src == dst
Veq16x8
dst = src == dst
Veq32x4
dst = src == dst
Veq64x2
dst = src == dst
Vfloor32x4
low128(dst) = ieee_floor(low128(src))
Vfloor64x2
low128(dst) = ieee_floor(low128(src))
Vmax8x16S
dst = max(src1, src2) (signed)
Vmax8x16U
dst = max(src1, src2) (unsigned)
Vmax16x8S
dst = max(src1, src2) (signed)
Vmax16x8U
dst = max(src1, src2) (unsigned)
Vmax32x4S
dst = max(src1, src2) (signed)
Vmax32x4U
dst = max(src1, src2) (unsigned)
Vmaximumf32x4
dst = ieee_maximum(src1, src2)
Vmaximumf64x2
dst = ieee_maximum(src1, src2)
Vmin8x16S
dst = min(src1, src2) (signed)
Vmin8x16U
dst = min(src1, src2) (unsigned)
Vmin16x8S
dst = min(src1, src2) (signed)
Vmin16x8U
dst = min(src1, src2) (unsigned)
Vmin32x4S
dst = min(src1, src2) (signed)
Vmin32x4U
dst = min(src1, src2) (unsigned)
Vminimumf32x4
dst = ieee_minimum(src1, src2)
Vminimumf64x2
dst = ieee_minimum(src1, src2)
Vmov
Move between v registers.
Vnarrow16x8S
Narrows the two 16x8 vectors, assuming all input lanes are signed, to half the width. Narrowing is signed and saturating.
Vnarrow16x8U
Narrows the two 16x8 vectors, assuming all input lanes are signed, to half the width. Narrowing is unsigned and saturating.
Vnarrow32x4S
Narrows the two 32x4 vectors, assuming all input lanes are signed, to half the width. Narrowing is signed and saturating.
Vnarrow32x4U
Narrows the two 32x4 vectors, assuming all input lanes are signed, to half the width. Narrowing is unsigned and saturating.
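
The narrow ops go the other way: two input vectors, every lane saturated down to half its width. Sketch of vnarrow16x8_s, assuming src1 fills the low half of the result:

    fn vnarrow16x8_s(src1: [i16; 8], src2: [i16; 8]) -> [i8; 16] {
        let narrow = |x: i16| x.clamp(i8::MIN as i16, i8::MAX as i16) as i8;
        core::array::from_fn(|i| {
            if i < 8 { narrow(src1[i]) } else { narrow(src2[i - 8]) }
        })
    }
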
Vnearest32x4
low128(dst) = ieee_nearest(low128(src))
Vnearest64x2
low128(dst) = ieee_nearest(low128(src))
Vneg8x16
dst = -src
Vneg16x8
dst = -src
Vneg32x4
dst = -src
Vneg64x2
dst = -src
VnegF64x2
dst = -src
Vneq8x16
dst = src != dst
Vneq16x8
dst = src != dst
Vneq32x4
dst = src != dst
Vneq64x2
dst = src != dst
Vslt8x16
dst = src < dst (signed)
Vslt16x8
dst = src < dst (signed)
Vslt32x4
dst = src < dst (signed)
Vslt64x2
dst = src < dst (signed)
Vslteq8x16
dst = src <= dst (signed)
Vslteq16x8
dst = src <= dst (signed)
Vslteq32x4
dst = src <= dst (signed)
Vslteq64x2
dst = src <= dst (signed)
Vsqrt32x4
low128(dst) = ieee_sqrt(low128(src))
Vsqrt64x2
low128(dst) = ieee_sqrt(low128(src))
Vstore128LeOffset32
*(ptr + offset) = src
Vswizzlei8x16
dst = swizzle(src1, src2)
Vtrunc32x4
low128(dst) = ieee_trunc(low128(src))
Vtrunc64x2
low128(dst) = ieee_trunc(low128(src))
Vult8x16
dst = src < dst (unsigned)
Vult16x8
dst = src < dst (unsigned)
Vult32x4
dst = src < dst (unsigned)
Vult64x2
dst = src < dst (unsigned)
Vulteq8x16
dst = src <= dst (unsigned)
Vulteq16x8
dst = src <= dst (unsigned)
Vulteq32x4
dst = src <= dst (unsigned)
Vulteq64x2
dst = src <= dst (unsigned)
X32FromF32S
low32(dst) = checked_signed_from_f32(low32(src))
X32FromF32SSat
low32(dst) = saturating_signed_from_f32(low32(src))
X32FromF32U
low32(dst) = checked_unsigned_from_f32(low32(src))
X32FromF32USat
low32(dst) = saturating_unsigned_from_f32(low32(src))
X32FromF64S
low32(dst) = checked_signed_from_f64(src)
X32FromF64SSat
low32(dst) = saturating_signed_from_f64(src)
X32FromF64U
low32(dst) = checked_unsigned_from_f64(src)
X32FromF64USat
low32(dst) = saturating_unsigned_from_f64(src)
X64FromF32S
dst = checked_signed_from_f32(low32(src))
X64FromF32SSat
dst = saturating_signed_from_f32(low32(src))
X64FromF32U
dst = checked_unsigned_from_f32(low32(src))
X64FromF32USat
dst = saturating_unsigned_from_f32(low32(src))
X64FromF64S
dst = checked_signed_from_f64(src)
X64FromF64SSat
dst = saturating_signed_from_f64(src)
X64FromF64U
dst = checked_unsigned_from_f64(src)
X64FromF64USat
dst = saturating_unsigned_from_f64(src)
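
The *Sat conversions clamp exactly like Rust's float-to-int `as` casts (which saturate as of Rust 1.45), whereas the checked_* forms presumably trap on NaN or out-of-range input instead of clamping. Sketch of the saturating path:

    fn x32_from_f32_s_sat(src: f32) -> i32 {
        // NaN becomes 0; out-of-range values clamp to i32::MIN / i32::MAX.
        src as i32
    }

    fn x64_from_f64_u_sat(src: f64) -> u64 {
        // NaN becomes 0; negatives clamp to 0, overflow clamps to u64::MAX.
        src as u64
    }
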
XAbs32
low32(dst) = |low32(src)|
XAbs64
dst = |src|
XBand32
low32(dst) = low32(src1) & low32(src2)
XBand64
dst = src1 & src2
XBnot32
low32(dst) = !low32(src1)
XBnot64
dst = !src1
XBor32
low32(dst) = low32(src1) | low32(src2)
XBor64
dst = src1 | src2
XBxor32
low32(dst) = low32(src1) ^ low32(src2)
XBxor64
dst = src1 ^ src2
XDiv32S
low32(dst) = low32(src1) / low32(src2) (signed)
XDiv32U
low32(dst) = low32(src1) / low32(src2) (unsigned)
XDiv64S
dst = src1 / src2 (signed)
XDiv64U
dst = src1 / src2 (unsigned)
XExtractV8x16
low32(dst) = zext(src[lane])
XExtractV16x8
low32(dst) = zext(src[lane])
XExtractV32x4
low32(dst) = src[lane]
XExtractV64x2
dst = src[lane]
XJump
Unconditionally transfer control to the PC in the specified register.
XLoad8S32Offset8
low32(dst) = sext(*(ptr + offset))
XLoad8S32Offset32
low32(dst) = sext(*(ptr + offset))
XLoad8S64Offset8
dst = sext(*(ptr + offset))
XLoad8S64Offset32
dst = sext(*(ptr + offset))
XLoad8U32Offset8
low32(dst) = zext(*(ptr + offset))
XLoad8U32Offset32
low32(dst) = zext(*(ptr + offset))
XLoad8U64Offset8
dst = zext(*(ptr + offset))
XLoad8U64Offset32
dst = zext(*(ptr + offset))
XLoad16BeS64Offset32
dst = sext(*(ptr + offset))
XLoad16BeU64Offset32
dst = zext(*(ptr + offset))
XLoad16LeS32Offset8
low32(dst) = sext(*(ptr + offset))
XLoad16LeS32Offset32
low32(dst) = sext(*(ptr + offset))
XLoad16LeS64Offset8
dst = sext(*(ptr + offset))
XLoad16LeS64Offset32
dst = sext(*(ptr + offset))
XLoad16LeU32Offset8
low32(dst) = zext(*(ptr + offset))
XLoad16LeU32Offset32
low32(dst) = zext(*(ptr + offset))
XLoad16LeU64Offset8
dst = zext(*(ptr + offset))
XLoad16LeU64Offset32
dst = zext(*(ptr + offset))
XLoad32BeS64Offset32
dst = sext(*(ptr + offset))
XLoad32BeU64Offset32
dst = zext(*(ptr + offset))
XLoad32LeOffset8
low32(dst) = *(ptr + offset)
XLoad32LeOffset32
low32(dst) = *(ptr + offset)
XLoad32LeS64Offset8
dst = sext(*(ptr + offset))
XLoad32LeS64Offset32
dst = sext(*(ptr + offset))
XLoad32LeU64Offset8
dst = zext(*(ptr + offset))
XLoad32LeU64Offset32
dst = zext(*(ptr + offset))
XLoad64BeOffset32
dst = *(ptr + offset)
XLoad64LeOffset8
dst = *(ptr + offset)
XLoad64LeOffset32
dst = *(ptr + offset)
XMul32
low32(dst) = low32(src1) * low32(src2)
XMul64
dst = src1 * src2
XMulHi64S
dst = high64(src1 * src2) (signed)
XMulHi64U
dst = high64(src1 * src2) (unsigned)
XPop32
sp -= 4; low32(dst) = *sp
XPop64
sp -= 8; dst = *sp
XPop32Many
for dst in dsts.rev() { xpop32 dst }
XPop64Many
for dst in dsts.rev() { xpop64 dst }
XPush32
*sp = low32(src); sp = sp.checked_add(4)
XPush64
*sp = src; sp = sp.checked_add(8)
XPush32Many
for src in srcs { xpush32 src }
XPush64Many
for src in srcs { xpush64 src }
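
In this pseudocode the stack grows upward: a push writes at sp and then increments it, so a pop must decrement before reading. Byte-level model, assuming little-endian storage:

    fn xpush32(mem: &mut [u8], sp: &mut usize, src: u32) {
        mem[*sp..*sp + 4].copy_from_slice(&src.to_le_bytes()); // *sp = low32(src)
        *sp += 4;
    }

    fn xpop32(mem: &[u8], sp: &mut usize) -> u32 {
        *sp -= 4;
        u32::from_le_bytes(mem[*sp..*sp + 4].try_into().unwrap())
    }
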
XRem32S
low32(dst) = low32(src1) % low32(src2) (signed)
XRem32U
low32(dst) = low32(src1) % low32(src2) (unsigned)
XRem64S
dst = src1 % src2 (signed)
XRem64U
dst = src1 % src2 (unsigned)
XSelect32
low32(dst) = low32(cond) ? low32(if_nonzero) : low32(if_zero)
XSelect64
dst = low32(cond) ? if_nonzero : if_zero
XStore8Offset8
*(ptr + offset) = low8(src)
XStore8Offset32
*(ptr + offset) = low8(src)
XStore16BeOffset32
*(ptr + offset) = low16(src)
XStore16LeOffset8
*(ptr + offset) = low16(src)
XStore16LeOffset32
*(ptr + offset) = low16(src)
XStore32BeOffset32
*(ptr + offset) = low32(src)
XStore32LeOffset8
*(ptr + offset) = low32(src)
XStore32LeOffset32
*(ptr + offset) = low32(src)
XStore64BeOffset32
*(ptr + offset) = low64(src)
XStore64LeOffset8
*(ptr + offset) = low64(src)
XStore64LeOffset32
*(ptr + offset) = low64(src)
Xadd32
32-bit wrapping addition: low32(dst) = low32(src1) + low32(src2).
Xadd64
64-bit wrapping addition: dst = src1 + src2.
Xadd32U8
Same as xadd32 but src2 is a zero-extended 8-bit immediate.
Xadd32U32
Same as xadd32 but src2 is a 32-bit immediate.
Xadd32UoverflowTrap
32-bit checked unsigned addition: low32(dst) = low32(src1) + low32(src2).
Xadd64U8
Same as xadd64 but src2 is a zero-extended 8-bit immediate.
Xadd64U32
Same as xadd64 but src2 is a zero-extended 32-bit immediate.
Xadd64UoverflowTrap
64-bit checked unsigned addition: dst = src1 + src2.
Xband32S8
Same as xband32 but src2 is a sign-extended 8-bit immediate.
Xband32S32
Same as xband32 but src2 is a sign-extended 32-bit immediate.
Xband64S8
Same as xband64 but src2 is a sign-extended 8-bit immediate.
Xband64S32
Same as xband64 but src2 is a sign-extended 32-bit immediate.
Xbmask32
low32(dst) = if low32(src) == 0 { 0 } else { -1 }
Xbmask64
dst = if src == 0 { 0 } else { -1 }
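
Xbmask turns any nonzero value into an all-ones mask, which composes with the bitwise ops for branchless selects. Direct transcription:

    fn xbmask32(src: u32) -> u32 {
        // 0 stays 0; anything nonzero becomes -1 (all bits set).
        if src == 0 { 0 } else { u32::MAX }
    }
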
Xbor32S8
Same as xbor32 but src2 is a sign-extended 8-bit immediate.
Xbor32S32
Same as xbor32 but src2 is a sign-extended 32-bit immediate.
Xbor64S8
Same as xbor64 but src2 is a sign-extended 8-bit immediate.
Xbor64S32
Same as xbor64 but src2 is a sign-extended 32-bit immediate.
Xbxor32S8
Same as xbxor32 but src2 is a sign-extended 8-bit immediate.
Xbxor32S32
Same as xbxor32 but src2 is a sign-extended 32-bit immediate.
Xbxor64S8
Same as xbxor64 but src2 is a sign-extended 8-bit immediate.
Xbxor64S32
Same as xbxor64 but src2 is a sign-extended 32-bit immediate.
Xclz32
low32(dst) = leading_zeros(low32(src))
Xclz64
dst = leading_zeros(src)
Xconst8
Set dst = sign_extend(imm8).
Xconst16
Set dst = sign_extend(imm16).
Xconst32
Set dst = sign_extend(imm32).
Xconst64
Set dst = imm64.
Xctz32
low32(dst) = trailing_zeros(low32(src))
Xctz64
dst = trailing_zeros(src)
Xeq32
low32(dst) = low32(src1) == low32(src2)
Xeq64
low32(dst) = src1 == src2
Xmax32S
low32(dst) = max(low32(src1), low32(src2)) (signed)
Xmax32U
low32(dst) = max(low32(src1), low32(src2)) (unsigned)
Xmax64S
dst = max(src1, src2) (signed)
Xmax64U
dst = max(src1, src2) (unsigned)
Xmin32S
low32(dst) = min(low32(src1), low32(src2)) (signed)
Xmin32U
low32(dst) = min(low32(src1), low32(src2)) (unsigned)
Xmin64S
dst = min(src1, src2) (signed)
Xmin64U
dst = min(src1, src2) (unsigned)
Xmov
Move between x registers.
XmovFp
Gets the special “fp” register and moves it into dst.
XmovLr
Gets the special “lr” register and moves it into dst.
Xmul32S8
Same as xmul32 but src2 is a sign-extended 8-bit immediate.
Xmul32S32
Same as xmul32 but src2 is a sign-extended 32-bit immediate.
Xmul64S8
Same as xmul64 but src2 is a sign-extended 8-bit immediate.
Xmul64S32
Same as xmul64 but src2 is a sign-extended 32-bit immediate.
Xneg32
low32(dst) = -low32(src)
Xneg64
dst = -src
Xneq32
low32(dst) = low32(src1) != low32(src2)
Xneq64
low32(dst) = src1 != src2
Xpopcnt32
low32(dst) = count_ones(low32(src))
Xpopcnt64
dst = count_ones(src)
Xrotl32
low32(dst) = rotate_left(low32(src1), low32(src2))
Xrotl64
dst = rotate_left(src1, src2)
Xrotr32
low32(dst) = rotate_right(low32(src1), low32(src2))
Xrotr64
dst = rotate_right(src1, src2)
Xshl32
low32(dst) = low32(src1) << low5(src2)
Xshl64
dst = src1 << low6(src2)
Xshl32U6
low32(dst) = low32(src1) << low5(src2)
Xshl64U6
dst = src1 << low6(src2)
Xshr32S
low32(dst) = low32(src1) >> low5(src2)
Xshr32SU6
low32(dst) = low32(src1) >> low5(src2)
Xshr32U
low32(dst) = low32(src1) >> low5(src2)
Xshr32UU6
low32(dst) = low32(src1) >> low5(src2)
Xshr64S
dst = src1 >> low6(src2)
Xshr64SU6
dst = src1 >> low6(src2)
Xshr64U
dst = src1 >> low6(src2)
Xshr64UU6
dst = src1 >> low6(src2)
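
The low5/low6 notation means the shift amount is masked to the operand's bit width, matching Rust's wrapping_shl/wrapping_shr:

    fn xshl32(src1: u32, src2: u32) -> u32 {
        src1.wrapping_shl(src2) // amount taken mod 32, i.e. low5(src2)
    }

    fn xshr64_s(src1: i64, src2: u32) -> i64 {
        src1.wrapping_shr(src2) // arithmetic shift, amount mod 64 (low6)
    }
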
Xslt32
low32(dst) = low32(src1) < low32(src2) (signed)
Xslt64
low32(dst) = src1 < src2 (signed)
Xslteq32
low32(dst) = low32(src1) <= low32(src2) (signed)
Xslteq64
low32(dst) = src1 <= src2 (signed)
Xsub32
32-bit wrapping subtraction: low32(dst) = low32(src1) - low32(src2).
Xsub64
64-bit wrapping subtraction: dst = src1 - src2.
Xsub32U8
Same as xsub32 but src2 is a zero-extended 8-bit immediate.
Xsub32U32
Same as xsub32 but src2 is a 32-bit immediate.
Xsub64U8
Same as xsub64 but src2 is a zero-extended 8-bit immediate.
Xsub64U32
Same as xsub64 but src2 is a zero-extended 32-bit immediate.
Xult32
low32(dst) = low32(src1) < low32(src2) (unsigned)
Xult64
low32(dst) = src1 < src2 (unsigned)
Xulteq32
low32(dst) = low32(src1) <= low32(src2) (unsigned)
Xulteq64
low32(dst) = src1 <= src2 (unsigned)
Zext8
dst = zext(low8(src))
Zext16
dst = zext(low16(src))
Zext32
dst = zext(low32(src))
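
These Zext* ops and the Sext* ops listed earlier differ only in what fills the vacated high bits; both reduce to cast chains over a 64-bit register value:

    fn zext8(src: u64) -> u64 {
        src as u8 as u64 // keep the low 8 bits, zero the rest
    }

    fn sext8(src: u64) -> u64 {
        src as u8 as i8 as i64 as u64 // keep the low 8 bits, sign-extend
    }
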

Enums

ExtendedOp
An extended operation/instruction.
Op
A complete, materialized operation/instruction.