
1.8.1. Extended ARM instruction set summary

The extended ARM instruction set summary is given in Table 1.6.

Table 1.6 Extended ARM instruction set summary
Arithmetic
  Add: ADD{cond}{S} <Rd>, <Rn>, <operand2>
  Add with carry: ADC{cond}{S} <Rd>, <Rn>, <operand2>
  Subtract: SUB{cond}{S} <Rd>, <Rn>, <operand2>
  Subtract with carry: SBC{cond}{S} <Rd>, <Rn>, <operand2>
  Reverse subtract: RSB{cond}{S} <Rd>, <Rn>, <operand2>
  Reverse subtract with carry: RSC{cond}{S} <Rd>, <Rn>, <operand2>
  Multiply: MUL{cond}{S} <Rd>, <Rm>, <Rs>
  Multiply-accumulate: MLA{cond}{S} <Rd>, <Rm>, <Rs>, <Rn>
  Multiply unsigned long: UMULL{cond}{S} <RdLo>, <RdHi>, <Rm>, <Rs>
  Multiply unsigned accumulate long: UMLAL{cond}{S} <RdLo>, <RdHi>, <Rm>, <Rs>
  Multiply signed long: SMULL{cond}{S} <RdLo>, <RdHi>, <Rm>, <Rs>
  Multiply signed accumulate long: SMLAL{cond}{S} <RdLo>, <RdHi>, <Rm>, <Rs>
  Saturating add: QADD{cond} <Rd>, <Rm>, <Rn>
  Saturating add with double: QDADD{cond} <Rd>, <Rm>, <Rn>
  Saturating subtract: QSUB{cond} <Rd>, <Rm>, <Rn>
  Saturating subtract with double: QDSUB{cond} <Rd>, <Rm>, <Rn>
  Multiply 16 x 16: SMULxy{cond} <Rd>, <Rm>, <Rs>
  Multiply-accumulate 16 x 16 + 32: SMLAxy{cond} <Rd>, <Rm>, <Rs>, <Rn>
  Multiply 32 x 16: SMULWy{cond} <Rd>, <Rm>, <Rs>
  Multiply-accumulate 32 x 16 + 32: SMLAWy{cond} <Rd>, <Rm>, <Rs>, <Rn>
  Multiply signed accumulate long 16 x 16 + 64: SMLALxy{cond} <RdLo>, <RdHi>, <Rm>, <Rs>
  Count leading zeros: CLZ{cond} <Rd>, <Rm>
Compare
  Compare: CMP{cond} <Rn>, <operand2>
  Compare negative: CMN{cond} <Rn>, <operand2>
Logical
  Move: MOV{cond}{S} <Rd>, <operand2>
  Move NOT: MVN{cond}{S} <Rd>, <operand2>
  Test: TST{cond} <Rn>, <operand2>
  Test equivalence: TEQ{cond} <Rn>, <operand2>
  AND: AND{cond}{S} <Rd>, <Rn>, <operand2>
  XOR: EOR{cond}{S} <Rd>, <Rn>, <operand2>
  OR: ORR{cond}{S} <Rd>, <Rn>, <operand2>
  Bit clear: BIC{cond}{S} <Rd>, <Rn>, <operand2>
Branch
  Branch: B{cond} <label>
  Branch with link: BL{cond} <label>
  Branch and exchange: BX{cond} <Rm>
  Branch, link, and exchange: BLX <label>
  Branch, link, and exchange: BLX{cond} <Rm>
  Branch and exchange to Java state: BXJ{cond} <Rm>
Status register handling
  Move SPSR to register: MRS{cond} <Rd>, SPSR
  Move CPSR to register: MRS{cond} <Rd>, CPSR
  Move register to SPSR: MSR{cond} SPSR_{field}, <Rm>
  Move register to CPSR: MSR{cond} CPSR_{field}, <Rm>
  Move immediate to SPSR flags: MSR{cond} SPSR_{field}, #<immed_8r>
  Move immediate to CPSR flags: MSR{cond} CPSR_{field}, #<immed_8r>
Load
  Word: LDR{cond} <Rd>, <a_mode2>
  Word with User mode privilege: LDR{cond}T <Rd>, <a_mode2P>
  PC as destination, branch and exchange: LDR{cond} R15, <a_mode2P>
  Byte: LDR{cond}B <Rd>, <a_mode2>
  Byte with User mode privilege: LDR{cond}BT <Rd>, <a_mode2P>
  Byte signed: LDR{cond}SB <Rd>, <a_mode3>
  Halfword: LDR{cond}H <Rd>, <a_mode3>
  Halfword signed: LDR{cond}SH <Rd>, <a_mode3>
  Doubleword: LDR{cond}D <Rd>, <a_mode3>
  Return from exception: RFE<a_mode4> <Rn>{!}
Load multiple
  Stack operations: LDM{cond}<a_mode4L> <Rn>{!}, <reglist>
  Increment before: LDM{cond}IB <Rn>{!}, <reglist>{^}
  Increment after: LDM{cond}IA <Rn>{!}, <reglist>{^}
  Decrement before: LDM{cond}DB <Rn>{!}, <reglist>{^}
  Decrement after: LDM{cond}DA <Rn>{!}, <reglist>{^}
  Stack operations and restore CPSR: LDM{cond}<a_mode4> <Rn>{!}, <reglist+pc>^
  User registers: LDM{cond}<a_mode4> <Rn>{!}, <reglist>^
Soft preload
  Memory system hint: PLD <a_mode2>
Store
  Word: STR{cond} <Rd>, <a_mode2>
  Word with User mode privilege: STR{cond}T <Rd>, <a_mode2P>
  Byte: STR{cond}B <Rd>, <a_mode2>
  Byte with User mode privilege: STR{cond}BT <Rd>, <a_mode2P>
  Halfword: STR{cond}H <Rd>, <a_mode3>
  Doubleword: STR{cond}D <Rd>, <a_mode3>
  Store return state: SRS<a_mode4> <mode>{!}
Store multiple
  Stack operations: STM{cond}<a_mode4S> <Rn>{!}, <reglist>
  Increment before: STM{cond}IB <Rn>{!}, <reglist>{^}
  Increment after: STM{cond}IA <Rn>{!}, <reglist>{^}
  Decrement before: STM{cond}DB <Rn>{!}, <reglist>{^}
  Decrement after: STM{cond}DA <Rn>{!}, <reglist>{^}
  User registers: STM{cond}<a_mode4S> <Rn>{!}, <reglist>^
Swap
  Word: SWP{cond} <Rd>, <Rm>, [<Rn>]
  Byte: SWP{cond}B <Rd>, <Rm>, [<Rn>]
Change state
  Change processor state: CPS<effect> <iflags>{, <mode>}
  Change processor mode: CPS <mode>
  Change endianness: SETEND <endian_specifier>
No operation
  No operation [1]: NOP{cond} {<hint>}
Byte-reverse
  Byte-reverse word: REV{cond} <Rd>, <Rm>
  Byte-reverse halfword: REV16{cond} <Rd>, <Rm>
  Byte-reverse signed halfword: REVSH{cond} <Rd>, <Rm>
Synchronization primitives
  Clear exclusive [1]: CLREX
  Load exclusive: LDREX{cond} <Rd>, [<Rn>]
  Load byte exclusive [1]: LDREXB{cond} <Rd>, [<Rn>]
  Load doubleword exclusive [1]: LDREXD{cond} <Rd>, [<Rn>]
  Load halfword exclusive [1]: LDREXH{cond} <Rd>, [<Rn>]
  Store exclusive: STREX{cond} <Rd>, <Rm>, [<Rn>]
  Store byte exclusive [1]: STREXB{cond} <Rd>, <Rm>, [<Rn>]
  Store doubleword exclusive [1]: STREXD{cond} <Rd>, <Rm>, [<Rn>]
  Store halfword exclusive [1]: STREXH{cond} <Rd>, <Rm>, [<Rn>]
Coprocessor
  Data operations: CDP{cond} <cp_num>, <op1>, <CRd>, <CRn>, <CRm>{, <op2>}
  Move to ARM register from coprocessor: MRC{cond} <cp_num>, <op1>, <Rd>, <CRn>, <CRm>{, <op2>}
  Move to coprocessor from ARM register: MCR{cond} <cp_num>, <op1>, <Rd>, <CRn>, <CRm>{, <op2>}
  Move double to ARM registers from coprocessor: MRRC{cond} <cp_num>, <op1>, <Rd>, <Rn>, <CRm>
  Move double to coprocessor from ARM registers: MCRR{cond} <cp_num>, <op1>, <Rd>, <Rn>, <CRm>
  Load: LDC{cond} <cp_num>, <CRd>, <a_mode5>
  Store: STC{cond} <cp_num>, <CRd>, <a_mode5>
Alternative coprocessor
  Data operations: CDP2 <cp_num>, <op1>, <CRd>, <CRn>, <CRm>{, <op2>}
  Move to ARM register from coprocessor [2]: MRC2 <cp_num>, <op1>, <Rd>, <CRn>, <CRm>{, <op2>}
  Move to coprocessor from ARM register [2]: MCR2 <cp_num>, <op1>, <Rd>, <CRn>, <CRm>{, <op2>}
  Move double to ARM registers from coprocessor [2]: MRRC2 <cp_num>, <op1>, <Rd>, <Rn>, <CRm>
  Move double to coprocessor from ARM registers [2]: MCRR2 <cp_num>, <op1>, <Rd>, <Rn>, <CRm>
  Load: LDC2 <cp_num>, <CRd>, <a_mode5>
  Store: STC2 <cp_num>, <CRd>, <a_mode5>
Software interrupt: SWI{cond} <immed_24>
Software breakpoint: BKPT <immed_16>
Parallel add/subtract
  Signed high 16 + 16, low 16 + 16, set GE flags: SADD16{cond} <Rd>, <Rn>, <Rm>
  Saturated high 16 + 16, low 16 + 16: QADD16{cond} <Rd>, <Rn>, <Rm>
  Signed high 16 + 16, low 16 + 16, halved: SHADD16{cond} <Rd>, <Rn>, <Rm>
  Unsigned high 16 + 16, low 16 + 16, set GE flags: UADD16{cond} <Rd>, <Rn>, <Rm>
  Saturated unsigned high 16 + 16, low 16 + 16: UQADD16{cond} <Rd>, <Rn>, <Rm>
  Unsigned high 16 + 16, low 16 + 16, halved: UHADD16{cond} <Rd>, <Rn>, <Rm>
  Signed high 16 + low 16, low 16 - high 16, set GE flags: SADDSUBX{cond} <Rd>, <Rn>, <Rm>
  Saturated high 16 + low 16, low 16 - high 16: QADDSUBX{cond} <Rd>, <Rn>, <Rm>
  Signed high 16 + low 16, low 16 - high 16, halved: SHADDSUBX{cond} <Rd>, <Rn>, <Rm>
  Unsigned high 16 + low 16, low 16 - high 16, set GE flags: UADDSUBX{cond} <Rd>, <Rn>, <Rm>
  Saturated unsigned high 16 + low 16, low 16 - high 16: UQADDSUBX{cond} <Rd>, <Rn>, <Rm>
  Unsigned high 16 + low 16, low 16 - high 16, halved: UHADDSUBX{cond} <Rd>, <Rn>, <Rm>
  Signed high 16 - low 16, low 16 + high 16, set GE flags: SSUBADDX{cond} <Rd>, <Rn>, <Rm>
  Saturated high 16 - low 16, low 16 + high 16: QSUBADDX{cond} <Rd>, <Rn>, <Rm>
  Signed high 16 - low 16, low 16 + high 16, halved: SHSUBADDX{cond} <Rd>, <Rn>, <Rm>
  Unsigned high 16 - low 16, low 16 + high 16, set GE flags: USUBADDX{cond} <Rd>, <Rn>, <Rm>
  Saturated unsigned high 16 - low 16, low 16 + high 16: UQSUBADDX{cond} <Rd>, <Rn>, <Rm>
  Unsigned high 16 - low 16, low 16 + high 16, halved: UHSUBADDX{cond} <Rd>, <Rn>, <Rm>
  Signed high 16 - 16, low 16 - 16, set GE flags: SSUB16{cond} <Rd>, <Rn>, <Rm>
  Saturated high 16 - 16, low 16 - 16: QSUB16{cond} <Rd>, <Rn>, <Rm>
  Signed high 16 - 16, low 16 - 16, halved: SHSUB16{cond} <Rd>, <Rn>, <Rm>
  Unsigned high 16 - 16, low 16 - 16, set GE flags: USUB16{cond} <Rd>, <Rn>, <Rm>
  Saturated unsigned high 16 - 16, low 16 - 16: UQSUB16{cond} <Rd>, <Rn>, <Rm>
  Unsigned high 16 - 16, low 16 - 16, halved: UHSUB16{cond} <Rd>, <Rn>, <Rm>
  Four signed 8 + 8, set GE flags: SADD8{cond} <Rd>, <Rn>, <Rm>
  Four saturated 8 + 8: QADD8{cond} <Rd>, <Rn>, <Rm>
  Four signed 8 + 8, halved: SHADD8{cond} <Rd>, <Rn>, <Rm>
  Four unsigned 8 + 8, set GE flags: UADD8{cond} <Rd>, <Rn>, <Rm>
  Four saturated unsigned 8 + 8: UQADD8{cond} <Rd>, <Rn>, <Rm>
  Four unsigned 8 + 8, halved: UHADD8{cond} <Rd>, <Rn>, <Rm>
  Four signed 8 - 8, set GE flags: SSUB8{cond} <Rd>, <Rn>, <Rm>
  Four saturated 8 - 8: QSUB8{cond} <Rd>, <Rn>, <Rm>
  Four signed 8 - 8, halved: SHSUB8{cond} <Rd>, <Rn>, <Rm>
  Four unsigned 8 - 8, set GE flags: USUB8{cond} <Rd>, <Rn>, <Rm>
  Four saturated unsigned 8 - 8: UQSUB8{cond} <Rd>, <Rn>, <Rm>
  Four unsigned 8 - 8, halved: UHSUB8{cond} <Rd>, <Rn>, <Rm>
  Sum of absolute differences: USAD8{cond} <Rd>, <Rm>, <Rs>
  Sum of absolute differences and accumulate: USADA8{cond} <Rd>, <Rm>, <Rs>, <Rn>
Sign/zero extend and add
  Two low 8/16, sign extend to 16, + 16: SADD8TO16{cond} <Rd>, <Rn>, <Rm>{, <rotation>}
  Low 8/32, sign extend to 32, + 32: SADD8TO32{cond} <Rd>, <Rn>, <Rm>{, <rotation>}
  Low 16/32, sign extend to 32, + 32: SADD16TO32{cond} <Rd>, <Rn>, <Rm>{, <rotation>}
  Two low 8/16, zero extend to 16, + 16: UADD8TO16{cond} <Rd>, <Rn>, <Rm>{, <rotation>}
  Low 8/32, zero extend to 32, + 32: UADD8TO32{cond} <Rd>, <Rn>, <Rm>{, <rotation>}
  Low 16/32, zero extend to 32, + 32: UADD16TO32{cond} <Rd>, <Rn>, <Rm>{, <rotation>}
  Two low 8, sign extend to 16, packed 32: SUNPK8TO16{cond} <Rd>, <Rm>{, <rotation>}
  Low 8, sign extend to 32: SUNPK8TO32{cond} <Rd>, <Rm>{, <rotation>}
  Low 16, sign extend to 32: SUNPK16TO32{cond} <Rd>, <Rm>{, <rotation>}
  Two low 8, zero extend to 16, packed 32: UUNPK8TO16{cond} <Rd>, <Rm>{, <rotation>}
  Low 8, zero extend to 32: UUNPK8TO32{cond} <Rd>, <Rm>{, <rotation>}
  Low 16, zero extend to 32: UUNPK16TO32{cond} <Rd>, <Rm>{, <rotation>}
Signed multiply and multiply-accumulate
  Signed (high 16 x 16) + (low 16 x 16) + 32, set Q flag: SMLAD{cond} <Rd>, <Rm>, <Rs>, <Rn>
  As SMLAD, but high x low, low x high, set Q flag: SMLADX{cond} <Rd>, <Rm>, <Rs>, <Rn>
  Signed (high 16 x 16) - (low 16 x 16) + 32: SMLSD{cond} <Rd>, <Rm>, <Rs>, <Rn>
  As SMLSD, but high x low, low x high: SMLSDX{cond} <Rd>, <Rm>, <Rs>, <Rn>
  Signed (high 16 x 16) + (low 16 x 16) + 64: SMLALD{cond} <RdLo>, <RdHi>, <Rm>, <Rs>
  As SMLALD, but high x low, low x high: SMLALDX{cond} <RdLo>, <RdHi>, <Rm>, <Rs>
  Signed (high 16 x 16) - (low 16 x 16) + 64: SMLSLD{cond} <RdLo>, <RdHi>, <Rm>, <Rs>
  As SMLSLD, but high x low, low x high: SMLSLDX{cond} <RdLo>, <RdHi>, <Rm>, <Rs>
  32 + truncated high 32 (32 x 32): SMMLA{cond} <Rd>, <Rm>, <Rs>, <Rn>
  32 + rounded high 32 (32 x 32): SMMLAR{cond} <Rd>, <Rm>, <Rs>, <Rn>
  32 - truncated high 32 (32 x 32): SMMLS{cond} <Rd>, <Rm>, <Rs>, <Rn>
  32 - rounded high 32 (32 x 32): SMMLSR{cond} <Rd>, <Rm>, <Rs>, <Rn>
  Signed (high 16 x 16) + (low 16 x 16), set Q flag: SMUAD{cond} <Rd>, <Rm>, <Rs>
  As SMUAD, but high x low, low x high, set Q flag: SMUADX{cond} <Rd>, <Rm>, <Rs>
  Signed (high 16 x 16) - (low 16 x 16): SMUSD{cond} <Rd>, <Rm>, <Rs>
  As SMUSD, but high x low, low x high: SMUSDX{cond} <Rd>, <Rm>, <Rs>
  Truncated high 32 (32 x 32): SMMUL{cond} <Rd>, <Rm>, <Rs>
  Rounded high 32 (32 x 32): SMMULR{cond} <Rd>, <Rm>, <Rs>
  Unsigned 32 x 32, + two 32, to 64: UMAAL{cond} <RdLo>, <RdHi>, <Rm>, <Rs>
Saturate, select, and pack
  Signed saturation at bit position n: SSAT{cond} <Rd>, #<immed_5>, <Rm>{, <shift>}
  Unsigned saturation at bit position n: USAT{cond} <Rd>, #<immed_5>, <Rm>{, <shift>}
  Two 16 signed saturation at bit position n: SSAT16{cond} <Rd>, #<immed_4>, <Rm>
  Two 16 unsigned saturation at bit position n: USAT16{cond} <Rd>, #<immed_4>, <Rm>
  Select bytes from Rn/Rm based on GE flags: SEL{cond} <Rd>, <Rn>, <Rm>
  Pack low 16/32, high 16/32: PKHBT{cond} <Rd>, <Rn>, <Rm>{, LSL #<immed_5>}
  Pack high 16/32, low 16/32: PKHTB{cond} <Rd>, <Rn>, <Rm>{, ASR #<immed_5>}

[1] The CLREX, LDREXB, STREXB, LDREXH, STREXH, LDREXD, STREXD and NOP operations are only available from the rev1 (r1p0) release of the ARM1136JF-S processor.

[2] The MCR2, MRC2, MCRR2 and MRRC2 instructions are not supported for operations to/from the CP15 coprocessor.
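Several of the arithmetic entries in the table above are saturating (the QADD/QSUB family): on overflow the result clamps to the signed 32-bit range instead of wrapping. A minimal Python sketch of that behaviour, under the assumption of 32-bit signed operands (the function name `qadd` is illustrative, not an ARM API):

```python
# Signed 32-bit saturating add, illustrating QADD semantics.
INT32_MIN, INT32_MAX = -2**31, 2**31 - 1

def qadd(a: int, b: int) -> int:
    """Return a + b clamped to the signed 32-bit range."""
    result = a + b
    if result > INT32_MAX:
        return INT32_MAX  # positive overflow saturates (Q flag would be set)
    if result < INT32_MIN:
        return INT32_MIN  # negative overflow saturates
    return result

assert qadd(INT32_MAX, 1) == INT32_MAX  # clamps instead of wrapping
assert qadd(-5, 3) == -2                # no saturation in range
```

The same clamping rule, applied after doubling one operand, gives the QDADD/QDSUB variants.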

Addressing mode 2 is summarized in Table 1.7.

Table 1.7 Addressing mode 2

Offset
  Immediate offset: [<Rn>, #+/-<immed_12>]
  Zero offset: [<Rn>]
  Register offset: [<Rn>, +/-<Rm>]
  Scaled register offset:
    [<Rn>, +/-<Rm>, LSL #<immed_5>]
    [<Rn>, +/-<Rm>, LSR #<immed_5>]
    [<Rn>, +/-<Rm>, ASR #<immed_5>]
    [<Rn>, +/-<Rm>, ROR #<immed_5>]
    [<Rn>, +/-<Rm>, RRX]
Pre-indexed offset
  Immediate offset: [<Rn>, #+/-<immed_12>]!
  Zero offset: [<Rn>]
  Register offset: [<Rn>, +/-<Rm>]!
  Scaled register offset:
    [<Rn>, +/-<Rm>, LSL #<immed_5>]!
    [<Rn>, +/-<Rm>, LSR #<immed_5>]!
    [<Rn>, +/-<Rm>, ASR #<immed_5>]!
    [<Rn>, +/-<Rm>, ROR #<immed_5>]!
    [<Rn>, +/-<Rm>, RRX]!
Post-indexed offset
  Immediate offset: [<Rn>], #+/-<immed_12>
  Zero offset: [<Rn>]
  Register offset: [<Rn>], +/-<Rm>
  Scaled register offset:
    [<Rn>], +/-<Rm>, LSL #<immed_5>
    [<Rn>], +/-<Rm>, LSR #<immed_5>
    [<Rn>], +/-<Rm>, ASR #<immed_5>
    [<Rn>], +/-<Rm>, ROR #<immed_5>
    [<Rn>], +/-<Rm>, RRX
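The three groups in Table 1.7 differ only in when the offset is applied and whether the base register is written back. A hedged Python sketch of the address arithmetic (function names are illustrative, not ARM terminology); each function returns the pair (access address, base register value afterwards):

```python
def offset_addr(rn: int, offset: int):
    """[<Rn>, #<offset>]: access at Rn + offset; Rn is unchanged."""
    return rn + offset, rn

def pre_indexed(rn: int, offset: int):
    """[<Rn>, #<offset>]!: access at Rn + offset; Rn updated to that address."""
    addr = rn + offset
    return addr, addr

def post_indexed(rn: int, offset: int):
    """[<Rn>], #<offset>: access at Rn; Rn then updated to Rn + offset."""
    return rn, rn + offset

base = 0x8000
assert offset_addr(base, 4) == (0x8004, 0x8000)
assert pre_indexed(base, 4) == (0x8004, 0x8004)
assert post_indexed(base, 4) == (0x8000, 0x8004)
```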

Addressing mode 2P, post-indexed only, is summarized in Table 1.8.

Table 1.8 Addressing mode 2P, post-indexed only

Post-indexed offset
  Immediate offset: [<Rn>], #+/-<immed_12>
  Zero offset: [<Rn>]
  Register offset: [<Rn>], +/-<Rm>
  Scaled register offset:
    [<Rn>], +/-<Rm>, LSL #<immed_5>
    [<Rn>], +/-<Rm>, LSR #<immed_5>
    [<Rn>], +/-<Rm>, ASR #<immed_5>
    [<Rn>], +/-<Rm>, ROR #<immed_5>
    [<Rn>], +/-<Rm>, RRX

Addressing mode 3 is summarized in Table 1.9.

Table 1.9 Addressing mode 3

Immediate offset
  Offset: [<Rn>, #+/-<immed_8>]
  Pre-indexed: [<Rn>, #+/-<immed_8>]!
  Post-indexed: [<Rn>], #+/-<immed_8>
Register offset
  Offset: [<Rn>, +/-<Rm>]
  Pre-indexed: [<Rn>, +/-<Rm>]!
  Post-indexed: [<Rn>], +/-<Rm>

Addressing mode 4 is summarized in Table 1.10.

Table 1.10 Addressing mode 4

Block load / stack pop (LDM, RFE)
  IA (increment after): FD (full descending)
  IB (increment before): ED (empty descending)
  DA (decrement after): FA (full ascending)
  DB (decrement before): EA (empty ascending)
Block store / stack push (STM, SRS)
  IA (increment after): EA (empty ascending)
  IB (increment before): FA (full ascending)
  DA (decrement after): ED (empty descending)
  DB (decrement before): FD (full descending)
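The IA/IB/DA/DB suffixes in Table 1.10 determine which word addresses a block transfer touches and where the base register ends up when writeback ({!}) is used. A Python sketch of that mapping, assuming 4-byte words (the function name is illustrative):

```python
def block_addresses(mode: str, rn: int, n: int):
    """Word addresses touched by an n-register LDM/STM (lowest register
    first) and the written-back base value for the {!} form."""
    step = 4  # ARM word size in bytes
    if mode == "IA":    # increment after: Rn, Rn+4, ...
        addrs = [rn + step * i for i in range(n)]
        writeback = rn + step * n
    elif mode == "IB":  # increment before: Rn+4, Rn+8, ...
        addrs = [rn + step * (i + 1) for i in range(n)]
        writeback = rn + step * n
    elif mode == "DA":  # decrement after: highest address is Rn
        addrs = [rn - step * (n - 1) + step * i for i in range(n)]
        writeback = rn - step * n
    elif mode == "DB":  # decrement before: lowest address is Rn-4n
        addrs = [rn - step * n + step * i for i in range(n)]
        writeback = rn - step * n
    else:
        raise ValueError(mode)
    return addrs, writeback

# STMDB sp!, {r0-r2} on a full-descending stack with sp = 0x1000:
assert block_addresses("DB", 0x1000, 3) == ([0x0FF4, 0x0FF8, 0x0FFC], 0x0FF4)
```

This is why STMDB pairs with LDMIA for a full-descending stack: the pop walks back up through exactly the addresses the push wrote.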

Addressing mode 5 is summarized in Table 1.11.

Table 1.11 Addressing mode 5

  Immediate offset: [<Rn>, #+/-<immed_8*4>]
  Immediate pre-indexed: [<Rn>, #+/-<immed_8*4>]!
  Immediate post-indexed: [<Rn>], #+/-<immed_8*4>
  Unindexed: [<Rn>], <option>

Operand2 is summarized in Table 1.12.

Table 1.12 Operand2

  Immediate value: #<immed_8r>
  Logical shift left by immediate: <Rm>, LSL #<immed_5>
  Logical shift right by immediate: <Rm>, LSR #<immed_5>
  Arithmetic shift right by immediate: <Rm>, ASR #<immed_5>
  Rotate right by immediate: <Rm>, ROR #<immed_5>
  Logical shift left by register: <Rm>, LSL <Rs>
  Logical shift right by register: <Rm>, LSR <Rs>
  Arithmetic shift right by register: <Rm>, ASR <Rs>
  Rotate right by register: <Rm>, ROR <Rs>
  Rotate right with extend: <Rm>, RRX
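The <immed_8r> form in Table 1.12 is not an arbitrary 32-bit constant: it must be expressible as an 8-bit value rotated right by an even amount. A small Python checker for that property (illustrative only, not an assembler API):

```python
def encode_immed_8r(value: int):
    """Return (imm8, rotate) such that value == imm8 ROR rotate, with
    imm8 < 256 and rotate even, or None if no such encoding exists."""
    value &= 0xFFFFFFFF
    for rotate in range(0, 32, 2):
        # Rotating LEFT by `rotate` undoes a right rotation by `rotate`.
        imm8 = ((value << rotate) | (value >> (32 - rotate))) & 0xFFFFFFFF
        if imm8 < 256:
            return imm8, rotate
    return None

assert encode_immed_8r(0xFF) == (0xFF, 0)    # fits directly
assert encode_immed_8r(0xFF0) == (0xFF, 28)  # 0xFF rotated right by 28
assert encode_immed_8r(0x102) is None        # would need an odd rotation
```

Constants that fail this test must be built with MVN, a pair of instructions, or a literal-pool load.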

Fields are summarized in Table 1.13.

Each suffix sets one bit in the MSR field_mask:

  c: control field mask bit (bit 0), MSR instruction bit 16
  x: extension field mask bit (bit 1), MSR instruction bit 17
  s: status field mask bit (bit 2), MSR instruction bit 18
  f: flags field mask bit (bit 3), MSR instruction bit 19

Condition codes are summarized in Table 1.14.

Table 1.14 Condition codes

  EQ: Equal
  NE: Not equal
  HS/CS: Unsigned higher or same
  LO/CC: Unsigned lower
  MI: Negative
  PL: Positive or zero
  VS: Overflow
  VC: No overflow
  HI: Unsigned higher
  LS: Unsigned lower or same
  GE: Greater or equal
  LT: Less than
  GT: Greater than
  LE: Less than or equal
  AL: Always
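Each condition suffix is a test of the NZCV flags in the CPSR. A Python sketch of the standard mapping, including the EQ, MI, VS, and AL codes that pair with those listed above (the function name is illustrative):

```python
def condition_passes(cond: str, n: bool, z: bool, c: bool, v: bool) -> bool:
    """Evaluate an ARM condition code against the NZCV flags."""
    table = {
        "EQ": z,            "NE": not z,
        "CS": c,            "CC": not c,
        "MI": n,            "PL": not n,
        "VS": v,            "VC": not v,
        "HI": c and not z,  "LS": not c or z,
        "GE": n == v,       "LT": n != v,
        "GT": not z and n == v,
        "LE": z or n != v,
        "AL": True,
    }
    table["HS"] = table["CS"]  # HS and LO are aliases for CS and CC
    table["LO"] = table["CC"]
    return table[cond]

assert condition_passes("EQ", n=False, z=True, c=False, v=False)
assert condition_passes("HI", n=False, z=False, c=True, v=False)
assert not condition_passes("GT", n=True, z=False, c=False, v=False)  # N != V
```

Note how the signed comparisons (GE/LT/GT/LE) compare N against V, while the unsigned ones (HI/LS/HS/LO) use C and Z.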