
Shared Pseudocode Functions

This page displays common pseudocode functions shared by many pages.

Pseudocodes

Library pseudocode for aarch32/debug/VCRMatch/AArch32.VCRMatch

// AArch32.VCRMatch()
// ==================

boolean AArch32.VCRMatch(bits(32) vaddress)

    if UsingAArch32() && ELUsingAArch32(EL1) && IsZero(vaddress<1:0>) && PSTATE.EL != EL2 then
        // Each bit position in this string corresponds to a bit in DBGVCR and an exception vector.
        match_word = Zeros(32);

        if vaddress<31:5> == ExcVectorBase()<31:5> then
            if HaveEL(EL3) && !IsSecure() then
                match_word<UInt(vaddress<4:2>) + 24> = '1';    // Non-secure vectors
            else
                match_word<UInt(vaddress<4:2>) + 0> = '1';     // Secure vectors (or no EL3)

        if HaveEL(EL3) && ELUsingAArch32(EL3) && IsSecure() && vaddress<31:5> == MVBAR<31:5> then
            match_word<UInt(vaddress<4:2>) + 8> = '1';         // Monitor vectors

        // Mask out bits not corresponding to vectors.
        if !HaveEL(EL3) then
            mask = '00000000':'00000000':'00000000':'11011110';    // DBGVCR[31:8] are RES0
        elsif !ELUsingAArch32(EL3) then
            mask = '11011110':'00000000':'00000000':'11011110';    // DBGVCR[15:8] are RES0
        else
            mask = '11011110':'00000000':'11011100':'11011110';

        match_word = match_word AND DBGVCR AND mask;
        match = !IsZero(match_word);

        // Check for UNPREDICTABLE case - match on Prefetch Abort and Data Abort vectors
        if !IsZero(match_word<28:27,12:11,4:3>) && DebugTarget() == PSTATE.EL then
            match = ConstrainUnpredictableBool(Unpredictable_VCMATCHDAPA);
    else
        match = FALSE;

    return match;
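
As an illustration of the match_word computation above, the following C sketch builds the same vector-catch bit vector and applies the DBGVCR mask. It is not part of the architecture pseudocode: the flattened register parameters and the helper name are hypothetical, and the surrounding UsingAArch32/PSTATE checks and the UNPREDICTABLE refinement are omitted.

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative model of the DBGVCR match core of AArch32.VCRMatch(). */
    static bool vcr_match(uint32_t vaddress, uint32_t exc_vector_base, uint32_t mvbar,
                          uint32_t dbgvcr, bool have_el3, bool el3_aarch32, bool secure)
    {
        uint32_t match_word = 0;

        if ((vaddress & ~0x1Fu) == (exc_vector_base & ~0x1Fu)) {
            unsigned vec = (vaddress >> 2) & 7;               /* vaddress<4:2> */
            match_word |= 1u << (vec + ((have_el3 && !secure) ? 24 : 0));
        }
        if (have_el3 && el3_aarch32 && secure &&
            (vaddress & ~0x1Fu) == (mvbar & ~0x1Fu))
            match_word |= 1u << (((vaddress >> 2) & 7) + 8);  /* Monitor vectors */

        uint32_t mask;
        if (!have_el3)          mask = 0x000000DEu;           /* DBGVCR[31:8] are RES0 */
        else if (!el3_aarch32)  mask = 0xDE0000DEu;           /* DBGVCR[15:8] are RES0 */
        else                    mask = 0xDE00DCDEu;

        return (match_word & dbgvcr & mask) != 0;
    }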

Library pseudocode for aarch32/debug/authentication/AArch32.SelfHostedSecurePrivilegedInvasiveDebugEnabled

// AArch32.SelfHostedSecurePrivilegedInvasiveDebugEnabled()
// ========================================================

boolean AArch32.SelfHostedSecurePrivilegedInvasiveDebugEnabled()
    // The definition of this function is IMPLEMENTATION DEFINED.
    // In the recommended interface, AArch32.SelfHostedSecurePrivilegedInvasiveDebugEnabled returns
    // the state of the (DBGEN AND SPIDEN) signal.
    if !HaveEL(EL3) && !IsSecure() then return FALSE;
    return DBGEN == HIGH && SPIDEN == HIGH;

Library pseudocode for aarch32/debug/breakpoint/AArch32.BreakpointMatch

// AArch32.BreakpointMatch()
// =========================
// Breakpoint matching in an AArch32 translation regime.

(boolean,boolean) AArch32.BreakpointMatch(integer n, bits(32) vaddress, integer size)
    assert ELUsingAArch32(S1TranslationRegime());
    assert n <= UInt(DBGDIDR.BRPs);

    enabled = DBGBCR[n].E == '1';
    ispriv = PSTATE.EL != EL0;
    linked = DBGBCR[n].BT == '0x01';
    isbreakpnt = TRUE;
    linked_to = FALSE;

    state_match = AArch32.StateMatch(DBGBCR[n].SSC, DBGBCR[n].HMC, DBGBCR[n].PMC,
                                     linked, DBGBCR[n].LBN, isbreakpnt, ispriv);
    (value_match, value_mismatch) = AArch32.BreakpointValueMatch(n, vaddress, linked_to);

    if size == 4 then                       // Check second halfword
        // If the breakpoint address and BAS of an Address breakpoint match the address of the
        // second halfword of an instruction, but not the address of the first halfword, it is
        // CONSTRAINED UNPREDICTABLE whether or not this breakpoint generates a Breakpoint debug
        // event.
        (match_i, mismatch_i) = AArch32.BreakpointValueMatch(n, vaddress + 2, linked_to);
        if !value_match && match_i then
            value_match = ConstrainUnpredictableBool(Unpredictable_BPMATCHHALF);
        if value_mismatch && !mismatch_i then
            value_mismatch = ConstrainUnpredictableBool(Unpredictable_BPMISMATCHHALF);

    if vaddress<1> == '1' && DBGBCR[n].BAS == '1111' then
        // The above notwithstanding, if DBGBCR[n].BAS == '1111', then it is CONSTRAINED
        // UNPREDICTABLE whether or not a Breakpoint debug event is generated for an instruction
        // at the address DBGBVR[n]+2.
        if value_match then
            value_match = ConstrainUnpredictableBool(Unpredictable_BPMATCHHALF);
        if !value_mismatch then
            value_mismatch = ConstrainUnpredictableBool(Unpredictable_BPMISMATCHHALF);

    match = value_match && state_match && enabled;
    mismatch = value_mismatch && state_match && enabled;

    return (match, mismatch);

Library pseudocode for aarch32/debug/breakpoint/AArch32.BreakpointValueMatch

// AArch32.BreakpointValueMatch()
// ==============================
// The first result is whether an Address Match or Context breakpoint is programmed on the
// instruction at "address". The second result is whether an Address Mismatch breakpoint is
// programmed on the instruction, that is, whether the instruction should be stepped.

(boolean,boolean) AArch32.BreakpointValueMatch(integer n, bits(32) vaddress, boolean linked_to)

    // "n" is the identity of the breakpoint unit to match against.
    // "vaddress" is the current instruction address, ignored if linked_to is TRUE and for Context
    //   matching breakpoints.
    // "linked_to" is TRUE if this is a call from StateMatch for linking.

    // If a non-existent breakpoint then it is CONSTRAINED UNPREDICTABLE whether this gives
    // no match or the breakpoint is mapped to another UNKNOWN implemented breakpoint.
    if n > UInt(DBGDIDR.BRPs) then
        (c, n) = ConstrainUnpredictableInteger(0, UInt(DBGDIDR.BRPs), Unpredictable_BPNOTIMPL);
        assert c IN {Constraint_DISABLED, Constraint_UNKNOWN};
        if c == Constraint_DISABLED then return (FALSE,FALSE);

    // If this breakpoint is not enabled, it cannot generate a match. (This could also happen on a
    // call from StateMatch for linking).
    if DBGBCR[n].E == '0' then return (FALSE,FALSE);

    context_aware = (n >= UInt(DBGDIDR.BRPs) - UInt(DBGDIDR.CTX_CMPs));

    // If BT is set to a reserved type, behaves either as disabled or as a not-reserved type.
    dbgtype = DBGBCR[n].BT;

    if ((dbgtype IN {'011x','11xx'} && !HaveVirtHostExt()) ||              // Context matching
          (dbgtype == '010x' && HaltOnBreakpointOrWatchpoint()) ||         // Address mismatch
          (dbgtype != '0x0x' && !context_aware) ||                         // Context matching
          (dbgtype == '1xxx' && !HaveEL(EL2))) then                        // EL2 extension
        (c, dbgtype) = ConstrainUnpredictableBits(Unpredictable_RESBPTYPE);
        assert c IN {Constraint_DISABLED, Constraint_UNKNOWN};
        if c == Constraint_DISABLED then return (FALSE,FALSE);
        // Otherwise the value returned by ConstrainUnpredictableBits must be a not-reserved value

    // Determine what to compare against.
    match_addr = (dbgtype == '0x0x');
    mismatch   = (dbgtype == '010x');
    match_vmid = (dbgtype == '10xx');
    match_cid1 = (dbgtype == 'xx1x');
    match_cid2 = (dbgtype == '11xx');
    linked     = (dbgtype == 'xxx1');

    // If this is a call from StateMatch, return FALSE if the breakpoint is not programmed for a
    // VMID and/or context ID match, or if not context-aware. The above assertions mean that the
    // code can just test for match_addr == TRUE to confirm all these things.
    if linked_to && (!linked || match_addr) then return (FALSE,FALSE);

    // If called from BreakpointMatch return FALSE for Linked context ID and/or VMID matches.
    if !linked_to && linked && !match_addr then return (FALSE,FALSE);

    // Do the comparison.
    if match_addr then
        byte = UInt(vaddress<1:0>);
        assert byte IN {0,2};                    // "vaddress" is halfword aligned
        byte_select_match = (DBGBCR[n].BAS<byte> == '1');
        BVR_match = vaddress<31:2> == DBGBVR[n]<31:2> && byte_select_match;
    elsif match_cid1 then
        BVR_match = (PSTATE.EL != EL2 && CONTEXTIDR == DBGBVR[n]<31:0>);
    if match_vmid then
        if ELUsingAArch32(EL2) then
            vmid = ZeroExtend(VTTBR.VMID, 16);
            bvr_vmid = ZeroExtend(DBGBXVR[n]<7:0>, 16);
        elsif !Have16bitVMID() || VTCR_EL2.VS == '0' then
            vmid = ZeroExtend(VTTBR_EL2.VMID<7:0>, 16);
            bvr_vmid = ZeroExtend(DBGBXVR[n]<7:0>, 16);
        else
            vmid = VTTBR_EL2.VMID;
            bvr_vmid = DBGBXVR[n]<15:0>;
        BXVR_match = (PSTATE.EL IN {EL0, EL1} && EL2Enabled() &&
                      vmid == bvr_vmid);
    elsif match_cid2 then
        BXVR_match = (!IsSecure() && HaveVirtHostExt() &&
                      !ELUsingAArch32(EL2) &&
                      DBGBXVR[n]<31:0> == CONTEXTIDR_EL2);

    bvr_match_valid = (match_addr || match_cid1);
    bxvr_match_valid = (match_vmid || match_cid2);

    match = (!bxvr_match_valid || BXVR_match) && (!bvr_match_valid || BVR_match);

    return (match && !mismatch, !match && mismatch);

Library pseudocode for aarch32/debug/breakpoint/AArch32.StateMatch

// AArch32.StateMatch()
// ====================
// Determine whether a breakpoint or watchpoint is enabled in the current mode and state.

boolean AArch32.StateMatch(bits(2) SSC, bit HMC, bits(2) PxC, boolean linked, bits(4) LBN,
                           boolean isbreakpnt, boolean ispriv)
    // "SSC", "HMC", "PxC" are the control fields from the DBGBCR[n] or DBGWCR[n] register.
    // "linked" is TRUE if this is a linked breakpoint/watchpoint type.
    // "LBN" is the linked breakpoint number from the DBGBCR[n] or DBGWCR[n] register.
    // "isbreakpnt" is TRUE for breakpoints, FALSE for watchpoints.
    // "ispriv" is valid for watchpoints, and selects between privileged and unprivileged accesses.

    // If parameters are set to a reserved type, behaves as either disabled or a defined type
    if ((HMC:SSC:PxC) IN {'011xx','100x0','101x0','11010','11101','1111x'} ||        // Reserved
          (HMC == '0' && PxC == '00' && !isbreakpnt) ||                              // Usr/Svc/Sys
          (SSC IN {'01','10'} && !HaveEL(EL3)) ||                                    // No EL3
          (HMC:SSC:PxC == '11000' && ELUsingAArch32(EL3)) ||                         // AArch64 only
          (HMC:SSC != '000' && HMC:SSC != '111' && !HaveEL(EL3) && !HaveEL(EL2)) ||  // No EL3/EL2
          (HMC:SSC:PxC == '11100' && !HaveEL(EL2))) then                             // No EL2
        (c, <HMC,SSC,PxC>) = ConstrainUnpredictableBits(Unpredictable_RESBPWPCTRL);
        assert c IN {Constraint_DISABLED, Constraint_UNKNOWN};
        if c == Constraint_DISABLED then return FALSE;
        // Otherwise the value returned by ConstrainUnpredictableBits must be a not-reserved value

    PL2_match = HaveEL(EL2) && HMC == '1';
    PL1_match = PxC<0> == '1';
    PL0_match = PxC<1> == '1';
    SSU_match = isbreakpnt && HMC == '0' && PxC == '00' && SSC != '11';

    el = PSTATE.EL;
    if !ispriv && !isbreakpnt then
        priv_match = PL0_match;
    elsif SSU_match then
        priv_match = PSTATE.M IN {M32_User,M32_Svc,M32_System};
    else
        case el of
            when EL3  priv_match = PL1_match;    // EL3 and EL1 are both PL1
            when EL2  priv_match = PL2_match;
            when EL1  priv_match = PL1_match;
            when EL0  priv_match = PL0_match;

    case SSC of
        when '00'  security_state_match = TRUE;           // Both
        when '01'  security_state_match = !IsSecure();    // Non-secure only
        when '10'  security_state_match = IsSecure();     // Secure only
        when '11'  security_state_match = TRUE;           // Both

    if linked then
        // "LBN" must be an enabled context-aware breakpoint unit. If it is not context-aware then
        // it is CONSTRAINED UNPREDICTABLE whether this gives no match, or LBN is mapped to some
        // UNKNOWN breakpoint that is context-aware.
        lbn = UInt(LBN);
        first_ctx_cmp = (UInt(DBGDIDR.BRPs) - UInt(DBGDIDR.CTX_CMPs));
        last_ctx_cmp = UInt(DBGDIDR.BRPs);
        if (lbn < first_ctx_cmp || lbn > last_ctx_cmp) then
            (c, lbn) = ConstrainUnpredictableInteger(first_ctx_cmp, last_ctx_cmp,
                                                     Unpredictable_BPNOTCTXCMP);
            assert c IN {Constraint_DISABLED, Constraint_NONE, Constraint_UNKNOWN};
            case c of
                when Constraint_DISABLED  return FALSE;      // Disabled
                when Constraint_NONE      linked = FALSE;    // No linking
                // Otherwise ConstrainUnpredictableInteger returned a context-aware breakpoint

    if linked then
        vaddress = bits(32) UNKNOWN;
        linked_to = TRUE;
        (linked_match,-) = AArch32.BreakpointValueMatch(lbn, vaddress, linked_to);

    return priv_match && security_state_match && (!linked || linked_match);
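
The privilege-level part of the match reduces to a small table over HMC and PxC. A minimal C sketch, assuming the register fields have already been extracted into the hypothetical parameters below (the SSU and unprivileged-watchpoint special cases are omitted):

    #include <stdbool.h>

    /* Illustrative: the PLx selection of AArch32.StateMatch(). */
    static bool priv_match(unsigned el, bool have_el2, bool hmc, unsigned pxc)
    {
        bool pl2 = have_el2 && hmc;        /* HMC == '1'    */
        bool pl1 = (pxc & 1u) != 0;        /* PxC<0> == '1' */
        bool pl0 = (pxc & 2u) != 0;        /* PxC<1> == '1' */
        switch (el) {
            case 3:  return pl1;           /* EL3 and EL1 are both PL1 */
            case 2:  return pl2;
            case 1:  return pl1;
            case 0:  return pl0;
            default: return false;
        }
    }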

Library pseudocode for aarch32/debug/enables/AArch32.GenerateDebugExceptions

// AArch32.GenerateDebugExceptions()
// =================================

boolean AArch32.GenerateDebugExceptions()
    return AArch32.GenerateDebugExceptionsFrom(PSTATE.EL, IsSecure());

Library pseudocode for aarch32/debug/enables/AArch32.GenerateDebugExceptionsFrom

// AArch32.GenerateDebugExceptionsFrom()
// =====================================

boolean AArch32.GenerateDebugExceptionsFrom(bits(2) from, boolean secure)

    if from == EL0 && !ELStateUsingAArch32(EL1, secure) then
        mask = bit UNKNOWN;                    // PSTATE.D mask, unused for EL0 case
        return AArch64.GenerateDebugExceptionsFrom(from, secure, mask);

    if DBGOSLSR.OSLK == '1' || DoubleLockStatus() || Halted() then
        return FALSE;

    if HaveEL(EL3) && secure then
        spd = if ELUsingAArch32(EL3) then SDCR.SPD else MDCR_EL3.SPD32;
        if spd<1> == '1' then
            enabled = spd<0> == '1';
        else
            // SPD == 0b01 is reserved, but behaves the same as 0b00.
            enabled = AArch32.SelfHostedSecurePrivilegedInvasiveDebugEnabled();
        if from == EL0 then enabled = enabled || SDER.SUIDEN == '1';
    else
        enabled = from != EL2;

    return enabled;

Library pseudocode for aarch32/debug/pmu/AArch32.CheckForPMUOverflow

// AArch32.CheckForPMUOverflow()
// =============================
// Signal Performance Monitors overflow IRQ and CTI overflow events

boolean AArch32.CheckForPMUOverflow()

    if !ELUsingAArch32(EL1) then return AArch64.CheckForPMUOverflow();

    pmuirq = PMCR.E == '1' && PMINTENSET<31> == '1' && PMOVSSET<31> == '1';
    for n = 0 to UInt(PMCR.N) - 1
        if HaveEL(EL2) then
            hpmn = if !ELUsingAArch32(EL2) then MDCR_EL2.HPMN else HDCR.HPMN;
            hpme = if !ELUsingAArch32(EL2) then MDCR_EL2.HPME else HDCR.HPME;
            E = (if n < UInt(hpmn) then PMCR.E else hpme);
        else
            E = PMCR.E;
        if E == '1' && PMINTENSET<n> == '1' && PMOVSSET<n> == '1' then pmuirq = TRUE;

    SetInterruptRequestLevel(InterruptID_PMUIRQ, if pmuirq then HIGH else LOW);

    CTI_SetEventLevel(CrossTriggerIn_PMUOverflow, if pmuirq then HIGH else LOW);

    // The request remains set until the condition is cleared. (For example, an interrupt handler
    // or cross-triggered event handler clears the overflow status flag by writing to PMOVSCLR_EL0.)

    return pmuirq;
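
The interrupt condition is an AND of the enable, interrupt-enable and overflow-status bits for each implemented counter, with HDCR.HPMN partitioning the counters between EL1 and EL2 ownership. A minimal C sketch with the register values flattened into hypothetical parameters:

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative model of the pmuirq computation in AArch32.CheckForPMUOverflow(). */
    static bool pmu_overflow_irq(uint32_t pmcr, uint32_t pmintenset, uint32_t pmovsset,
                                 bool have_el2, unsigned hpmn, bool hpme)
    {
        bool pmcr_e = (pmcr & 1u) != 0;                 /* PMCR.E */
        unsigned n = (pmcr >> 11) & 0x1Fu;              /* PMCR.N */
        /* Cycle counter uses bit 31 of the control registers. */
        bool irq = pmcr_e && ((pmintenset >> 31) & 1u) && ((pmovsset >> 31) & 1u);

        for (unsigned i = 0; i < n; i++) {
            bool e = have_el2 ? (i < hpmn ? pmcr_e : hpme) : pmcr_e;
            if (e && ((pmintenset >> i) & 1u) && ((pmovsset >> i) & 1u))
                irq = true;
        }
        return irq;
    }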

Library pseudocode for aarch32/debug/pmu/AArch32.CountEvents

// AArch32.CountEvents()
// =====================
// Return TRUE if counter "n" should count its event. For the cycle counter, n == 31.

boolean AArch32.CountEvents(integer n)
    assert n == 31 || n < UInt(PMCR.N);

    if !ELUsingAArch32(EL1) then return AArch64.CountEvents(n);

    // Event counting is disabled in Debug state
    debug = Halted();

    // In Non-secure state, some counters are reserved for EL2
    if HaveEL(EL2) then
        hpmn = if !ELUsingAArch32(EL2) then MDCR_EL2.HPMN else HDCR.HPMN;
        hpme = if !ELUsingAArch32(EL2) then MDCR_EL2.HPME else HDCR.HPME;
        E = if n < UInt(hpmn) || n == 31 then PMCR.E else hpme;
    else
        E = PMCR.E;
    enabled = E == '1' && PMCNTENSET<n> == '1';

    if !IsSecure() then
        // Event counting in Non-secure state is allowed unless all of:
        // * EL2 and the HPMD Extension are implemented
        // * Executing at EL2
        // * PMNx is not reserved for EL2
        // * HDCR.HPMD == 1
        if HaveHPMDExt() && PSTATE.EL == EL2 && (n < UInt(hpmn) || n == 31) then
            hpmd = if !ELUsingAArch32(EL2) then MDCR_EL2.HPMD else HDCR.HPMD;
            prohibited = (hpmd == '1');
        else
            prohibited = FALSE;
    else
        // Event counting in Secure state is prohibited unless any one of:
        // * EL3 is not implemented
        // * EL3 is using AArch64 and MDCR_EL3.SPME == 1
        // * EL3 is using AArch32 and SDCR.SPME == 1
        // * Executing at EL0, and SDER.SUNIDEN == 1.
        spme = (if ELUsingAArch32(EL3) then SDCR.SPME else MDCR_EL3.SPME);
        prohibited = HaveEL(EL3) && spme == '0' && (PSTATE.EL != EL0 || SDER.SUNIDEN == '0');

    // The IMPLEMENTATION DEFINED authentication interface might override software controls
    if prohibited && !HaveNoSecurePMUDisableOverride() then
        prohibited = !ExternalSecureNoninvasiveDebugEnabled();

    // For the cycle counter, PMCR.DP enables counting when otherwise prohibited
    if prohibited && n == 31 then prohibited = (PMCR.DP == '1');

    // Event counting can be filtered by the {P, U, NSK, NSU, NSH} bits
    filter = if n == 31 then PMCCFILTR else PMEVTYPER[n];

    P   = filter<31>;
    U   = filter<30>;
    NSK = if HaveEL(EL3) then filter<29> else '0';
    NSU = if HaveEL(EL3) then filter<28> else '0';
    NSH = if HaveEL(EL2) then filter<27> else '0';

    case PSTATE.EL of
        when EL0  filtered = if IsSecure() then U == '1' else U != NSU;
        when EL1  filtered = if IsSecure() then P == '1' else P != NSK;
        when EL2  filtered = (NSH == '0');
        when EL3  filtered = (P == '1');

    return !debug && enabled && !prohibited && !filtered;
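
The {P, U, NSK, NSU, NSH} filter at the end of CountEvents is a per-Exception-level predicate on the event-type register. A minimal C sketch, with hypothetical parameters standing in for the PMEVTYPER/PMCCFILTR fields:

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative: returns true if the event is filtered out (not counted).
       el is 0..3 for EL0..EL3; filter is PMCCFILTR or PMEVTYPER<n>. */
    static bool event_filtered(uint32_t filter, unsigned el, bool secure,
                               bool have_el2, bool have_el3)
    {
        bool p   = (filter >> 31) & 1u;
        bool u   = (filter >> 30) & 1u;
        bool nsk = have_el3 && ((filter >> 29) & 1u);
        bool nsu = have_el3 && ((filter >> 28) & 1u);
        bool nsh = have_el2 && ((filter >> 27) & 1u);

        switch (el) {
            case 0:  return secure ? u : (u != nsu);
            case 1:  return secure ? p : (p != nsk);
            case 2:  return !nsh;
            case 3:  return p;
            default: return true;
        }
    }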

Library pseudocode for aarch32/debug/takeexceptiondbg/AArch32.EnterHypModeInDebugState

// AArch32.EnterHypModeInDebugState()
// ==================================
// Take an exception in Debug state to Hyp mode.

AArch32.EnterHypModeInDebugState(ExceptionRecord exception)
    SynchronizeContext();
    assert HaveEL(EL2) && !IsSecure() && ELUsingAArch32(EL2);

    AArch32.ReportHypEntry(exception);
    AArch32.WriteMode(M32_Hyp);
    SPSR[] = bits(32) UNKNOWN;
    PSTATE.SSBS = bits(1) UNKNOWN;
    ELR_hyp = bits(32) UNKNOWN;
    // In Debug state, the PE always executes T32 instructions when in AArch32 state, and
    // PSTATE.{SS,A,I,F} are not observable so behave as UNKNOWN.
    PSTATE.T = '1';                             // PSTATE.J is RES0
    PSTATE.<SS,A,I,F> = bits(4) UNKNOWN;
    DLR = bits(32) UNKNOWN;
    DSPSR = bits(32) UNKNOWN;
    PSTATE.E = HSCTLR.EE;
    PSTATE.IL = '0';
    PSTATE.IT = '00000000';

    EDSCR.ERR = '1';
    UpdateEDSCRFields();

    EndOfInstruction();

Library pseudocode for aarch32/debug/takeexceptiondbg/AArch32.EnterModeInDebugState

// AArch32.EnterModeInDebugState()
// ===============================
// Take an exception in Debug state to a mode other than Monitor and Hyp mode.

AArch32.EnterModeInDebugState(bits(5) target_mode)
    SynchronizeContext();
    assert ELUsingAArch32(EL1) && PSTATE.EL != EL2;

    if PSTATE.M == M32_Monitor then SCR.NS = '0';
    AArch32.WriteMode(target_mode);
    SPSR[] = bits(32) UNKNOWN;
    PSTATE.SSBS = bits(1) UNKNOWN;
    R[14] = bits(32) UNKNOWN;
    // In Debug state, the PE always executes T32 instructions when in AArch32 state, and
    // PSTATE.{SS,A,I,F} are not observable so behave as UNKNOWN.
    PSTATE.T = '1';                             // PSTATE.J is RES0
    PSTATE.<SS,A,I,F> = bits(4) UNKNOWN;
    DLR = bits(32) UNKNOWN;
    DSPSR = bits(32) UNKNOWN;
    PSTATE.E = SCTLR.EE;
    PSTATE.IL = '0';
    PSTATE.IT = '00000000';
    if HavePANExt() && SCTLR.SPAN == '0' then
        PSTATE.PAN = '1';

    EDSCR.ERR = '1';
    UpdateEDSCRFields();                        // Update EDSCR processor state flags.

    EndOfInstruction();

Library pseudocode for aarch32/debug/takeexceptiondbg/AArch32.EnterMonitorModeInDebugState

// AArch32.EnterMonitorModeInDebugState()
// ======================================
// Take an exception in Debug state to Monitor mode.

AArch32.EnterMonitorModeInDebugState()
    SynchronizeContext();
    assert HaveEL(EL3) && ELUsingAArch32(EL3);

    from_secure = IsSecure();
    if PSTATE.M == M32_Monitor then SCR.NS = '0';
    AArch32.WriteMode(M32_Monitor);
    SPSR[] = bits(32) UNKNOWN;
    PSTATE.SSBS = bits(1) UNKNOWN;
    R[14] = bits(32) UNKNOWN;
    // In Debug state, the PE always executes T32 instructions when in AArch32 state, and
    // PSTATE.{SS,A,I,F} are not observable so behave as UNKNOWN.
    PSTATE.T = '1';                             // PSTATE.J is RES0
    PSTATE.<SS,A,I,F> = bits(4) UNKNOWN;
    DLR = bits(32) UNKNOWN;
    DSPSR = bits(32) UNKNOWN;
    PSTATE.E = SCTLR.EE;
    PSTATE.IL = '0';
    PSTATE.IT = '00000000';
    if HavePANExt() then
        if !from_secure then
            PSTATE.PAN = '0';
        elsif SCTLR.SPAN == '0' then
            PSTATE.PAN = '1';

    EDSCR.ERR = '1';
    UpdateEDSCRFields();                        // Update EDSCR processor state flags.

    EndOfInstruction();

Library pseudocode for aarch32/debug/watchpoint/AArch32.WatchpointByteMatch

// AArch32.WatchpointByteMatch()
// =============================

boolean AArch32.WatchpointByteMatch(integer n, bits(32) vaddress)

    bottom = if DBGWVR[n]<2> == '1' then 2 else 3;             // Word or doubleword
    byte_select_match = (DBGWCR[n].BAS<UInt(vaddress<bottom-1:0>)> != '0');
    mask = UInt(DBGWCR[n].MASK);

    // If DBGWCR[n].MASK is a non-zero value and DBGWCR[n].BAS is not set to '11111111', or
    // DBGWCR[n].BAS specifies a non-contiguous set of bytes, behavior is CONSTRAINED
    // UNPREDICTABLE.
    if mask > 0 && !IsOnes(DBGWCR[n].BAS) then
        byte_select_match = ConstrainUnpredictableBool(Unpredictable_WPMASKANDBAS);
    else
        LSB = (DBGWCR[n].BAS AND NOT(DBGWCR[n].BAS - 1));
        MSB = (DBGWCR[n].BAS + LSB);
        if !IsZero(MSB AND (MSB - 1)) then                     // Not contiguous
            byte_select_match = ConstrainUnpredictableBool(Unpredictable_WPBASCONTIGUOUS);
            bottom = 3;                                        // For the whole doubleword

    // If the address mask is set to a reserved value, the behavior is CONSTRAINED UNPREDICTABLE.
    if mask > 0 && mask <= 2 then
        (c, mask) = ConstrainUnpredictableInteger(3, 31, Unpredictable_RESWPMASK);
        assert c IN {Constraint_DISABLED, Constraint_NONE, Constraint_UNKNOWN};
        case c of
            when Constraint_DISABLED  return FALSE;            // Disabled
            when Constraint_NONE      mask = 0;                // No masking
            // Otherwise the value returned by ConstrainUnpredictableInteger is a not-reserved value

    if mask > bottom then
        WVR_match = (vaddress<31:mask> == DBGWVR[n]<31:mask>);
        // If masked bits of DBGWVR_EL1[n] are not zero, the behavior is CONSTRAINED UNPREDICTABLE.
        if WVR_match && !IsZero(DBGWVR[n]<mask-1:bottom>) then
            WVR_match = ConstrainUnpredictableBool(Unpredictable_WPMASKEDBITS);
    else
        WVR_match = vaddress<31:bottom> == DBGWVR[n]<31:bottom>;

    return WVR_match && byte_select_match;
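
The contiguity check on DBGWCR[n].BAS uses a standard bit trick: isolating the lowest set bit (BAS AND NOT(BAS-1)) and adding it to BAS yields a power of two exactly when the set bits form one contiguous run. A minimal C sketch (the helper name is illustrative):

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative: the BAS contiguity test of AArch32.WatchpointByteMatch(). */
    static bool bas_is_contiguous(uint8_t bas)
    {
        if (bas == 0) return true;               /* no bytes selected */
        uint8_t lsb = bas & (uint8_t)-bas;       /* lowest set bit: BAS AND NOT(BAS-1) */
        unsigned msb = (unsigned)bas + lsb;      /* one past the run if contiguous */
        return (msb & (msb - 1)) == 0;           /* power of two => contiguous */
    }

For example, BAS = '00111100' passes, while BAS = '01011000' fails, making the byte select CONSTRAINED UNPREDICTABLE.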

Library pseudocode for aarch32/debug/watchpoint/AArch32.WatchpointMatch

// AArch32.WatchpointMatch()
// =========================
// Watchpoint matching in an AArch32 translation regime.

boolean AArch32.WatchpointMatch(integer n, bits(32) vaddress, integer size, boolean ispriv,
                                boolean iswrite)
    assert ELUsingAArch32(S1TranslationRegime());
    assert n <= UInt(DBGDIDR.WRPs);

    // "ispriv" is FALSE for LDRT/STRT instructions executed at EL1 and all
    // load/stores at EL0, TRUE for all other load/stores. "iswrite" is TRUE for stores, FALSE for
    // loads.
    enabled = DBGWCR[n].E == '1';
    linked = DBGWCR[n].WT == '1';
    isbreakpnt = FALSE;

    state_match = AArch32.StateMatch(DBGWCR[n].SSC, DBGWCR[n].HMC, DBGWCR[n].PAC,
                                     linked, DBGWCR[n].LBN, isbreakpnt, ispriv);

    ls_match = (DBGWCR[n].LSC<(if iswrite then 1 else 0)> == '1');

    value_match = FALSE;
    for byte = 0 to size - 1
        value_match = value_match || AArch32.WatchpointByteMatch(n, vaddress + byte);

    return value_match && state_match && ls_match && enabled;

Library pseudocode for aarch32/exceptions/aborts/AArch32.Abort

// AArch32.Abort()
// ===============
// Abort and Debug exception handling in an AArch32 translation regime.

AArch32.Abort(bits(32) vaddress, FaultRecord fault)

    // Check if routed to AArch64 state
    route_to_aarch64 = PSTATE.EL == EL0 && !ELUsingAArch32(EL1);

    if !route_to_aarch64 && EL2Enabled() && !ELUsingAArch32(EL2) then
        route_to_aarch64 = (HCR_EL2.TGE == '1' || IsSecondStage(fault) ||
                            (HaveRASExt() && HCR2.TEA == '1' && IsExternalAbort(fault)) ||
                            (IsDebugException(fault) && MDCR_EL2.TDE == '1'));

    if !route_to_aarch64 && HaveEL(EL3) && !ELUsingAArch32(EL3) then
        route_to_aarch64 = SCR_EL3.EA == '1' && IsExternalAbort(fault);

    if route_to_aarch64 then
        AArch64.Abort(ZeroExtend(vaddress), fault);
    elsif fault.acctype == AccType_IFETCH then
        AArch32.TakePrefetchAbortException(vaddress, fault);
    else
        AArch32.TakeDataAbortException(vaddress, fault);

Library pseudocode for aarch32/exceptions/aborts/AArch32.AbortSyndrome

// AArch32.AbortSyndrome()
// =======================
// Creates an exception syndrome record for Abort exceptions taken to Hyp mode
// from an AArch32 translation regime.

ExceptionRecord AArch32.AbortSyndrome(Exception exceptype, FaultRecord fault, bits(32) vaddress)
    exception = ExceptionSyndrome(exceptype);

    d_side = exceptype == Exception_DataAbort;

    exception.syndrome = AArch32.FaultSyndrome(d_side, fault);
    exception.vaddress = ZeroExtend(vaddress);
    if IPAValid(fault) then
        exception.ipavalid = TRUE;
        exception.NS = fault.ipaddress.NS;
        exception.ipaddress = ZeroExtend(fault.ipaddress.address);
    else
        exception.ipavalid = FALSE;

    return exception;

Library pseudocode for aarch32/exceptions/aborts/AArch32.CheckPCAlignment

// AArch32.CheckPCAlignment()
// ==========================

AArch32.CheckPCAlignment()

    bits(32) pc = ThisInstrAddr();
    if (CurrentInstrSet() == InstrSet_A32 && pc<1> == '1') || pc<0> == '1' then
        if AArch32.GeneralExceptionsToAArch64() then AArch64.PCAlignmentFault();

        // Generate an Alignment fault Prefetch Abort exception
        vaddress = pc;
        acctype = AccType_IFETCH;
        iswrite = FALSE;
        secondstage = FALSE;
        AArch32.Abort(vaddress, AArch32.AlignmentFault(acctype, iswrite, secondstage));

Library pseudocode for aarch32/exceptions/aborts/AArch32.ReportDataAbort

// AArch32.ReportDataAbort()
// =========================
// Report syndrome information for aborts taken to modes other than Hyp mode.

AArch32.ReportDataAbort(boolean route_to_monitor, FaultRecord fault, bits(32) vaddress)

    // The encoding used in the IFSR or DFSR can be Long-descriptor format or Short-descriptor
    // format. Normally, the current translation table format determines the format. For an abort
    // from Non-secure state to Monitor mode, the IFSR or DFSR uses the Long-descriptor format if
    // any of the following applies:
    // * The Secure TTBCR.EAE is set to 1.
    // * The abort is synchronous and either:
    //   - It is taken from Hyp mode.
    //   - It is taken from EL1 or EL0, and the Non-secure TTBCR.EAE is set to 1.
    long_format = FALSE;
    if route_to_monitor && !IsSecure() then
        long_format = TTBCR_S.EAE == '1';
        if !IsSErrorInterrupt(fault) && !long_format then
            long_format = PSTATE.EL == EL2 || TTBCR.EAE == '1';
    else
        long_format = TTBCR.EAE == '1';
    d_side = TRUE;
    if long_format then
        syndrome = AArch32.FaultStatusLD(d_side, fault);
    else
        syndrome = AArch32.FaultStatusSD(d_side, fault);

    if fault.acctype == AccType_IC then
        if (!long_format &&
            boolean IMPLEMENTATION_DEFINED "Report I-cache maintenance fault in IFSR") then
            i_syndrome = syndrome;
            syndrome<10,3:0> = EncodeSDFSC(Fault_ICacheMaint, 1);
        else
            i_syndrome = bits(32) UNKNOWN;
        if route_to_monitor then
            IFSR_S = i_syndrome;
        else
            IFSR = i_syndrome;

    if route_to_monitor then
        DFSR_S = syndrome;
        DFAR_S = vaddress;
    else
        DFSR = syndrome;
        DFAR = vaddress;

    return;

Library pseudocode for aarch32/exceptions/aborts/AArch32.ReportPrefetchAbort

// AArch32.ReportPrefetchAbort()
// =============================
// Report syndrome information for aborts taken to modes other than Hyp mode.

AArch32.ReportPrefetchAbort(boolean route_to_monitor, FaultRecord fault, bits(32) vaddress)
    // The encoding used in the IFSR can be Long-descriptor format or Short-descriptor format.
    // Normally, the current translation table format determines the format. For an abort from
    // Non-secure state to Monitor mode, the IFSR uses the Long-descriptor format if any of the
    // following applies:
    // * The Secure TTBCR.EAE is set to 1.
    // * It is taken from Hyp mode.
    // * It is taken from EL1 or EL0, and the Non-secure TTBCR.EAE is set to 1.
    long_format = FALSE;
    if route_to_monitor && !IsSecure() then
        long_format = TTBCR_S.EAE == '1' || PSTATE.EL == EL2 || TTBCR.EAE == '1';
    else
        long_format = TTBCR.EAE == '1';

    d_side = FALSE;
    if long_format then
        fsr = AArch32.FaultStatusLD(d_side, fault);
    else
        fsr = AArch32.FaultStatusSD(d_side, fault);

    if route_to_monitor then
        IFSR_S = fsr;
        IFAR_S = vaddress;
    else
        IFSR = fsr;
        IFAR = vaddress;

    return;

Library pseudocode for aarch32/exceptions/aborts/AArch32.TakeDataAbortException

// AArch32.TakeDataAbortException()
// ================================

AArch32.TakeDataAbortException(bits(32) vaddress, FaultRecord fault)
    route_to_monitor = HaveEL(EL3) && SCR.EA == '1' && IsExternalAbort(fault);
    route_to_hyp = (HaveEL(EL2) && !IsSecure() && PSTATE.EL IN {EL0, EL1} &&
                    (HCR.TGE == '1' || IsSecondStage(fault) ||
                     (HaveRASExt() && HCR2.TEA == '1' && IsExternalAbort(fault)) ||
                     (IsDebugException(fault) && HDCR.TDE == '1')));

    bits(32) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x10;
    lr_offset = 8;

    if IsDebugException(fault) then DBGDSCRext.MOE = fault.debugmoe;
    if route_to_monitor then
        AArch32.ReportDataAbort(route_to_monitor, fault, vaddress);
        AArch32.EnterMonitorMode(preferred_exception_return, lr_offset, vect_offset);
    elsif PSTATE.EL == EL2 || route_to_hyp then
        exception = AArch32.AbortSyndrome(Exception_DataAbort, fault, vaddress);
        if PSTATE.EL == EL2 then
            AArch32.EnterHypMode(exception, preferred_exception_return, vect_offset);
        else
            AArch32.EnterHypMode(exception, preferred_exception_return, 0x14);
    else
        AArch32.ReportDataAbort(route_to_monitor, fault, vaddress);
        AArch32.EnterMode(M32_Abort, preferred_exception_return, lr_offset, vect_offset);

Library pseudocode for aarch32/exceptions/aborts/AArch32.TakePrefetchAbortException

// AArch32.TakePrefetchAbortException()
// ====================================

AArch32.TakePrefetchAbortException(bits(32) vaddress, FaultRecord fault)
    route_to_monitor = HaveEL(EL3) && SCR.EA == '1' && IsExternalAbort(fault);
    route_to_hyp = (HaveEL(EL2) && !IsSecure() && PSTATE.EL IN {EL0, EL1} &&
                    (HCR.TGE == '1' || IsSecondStage(fault) ||
                     (HaveRASExt() && HCR2.TEA == '1' && IsExternalAbort(fault)) ||
                     (IsDebugException(fault) && HDCR.TDE == '1')));

    bits(32) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x0C;
    lr_offset = 4;

    if IsDebugException(fault) then DBGDSCRext.MOE = fault.debugmoe;
    if route_to_monitor then
        AArch32.ReportPrefetchAbort(route_to_monitor, fault, vaddress);
        AArch32.EnterMonitorMode(preferred_exception_return, lr_offset, vect_offset);
    elsif PSTATE.EL == EL2 || route_to_hyp then
        if fault.statuscode == Fault_Alignment then             // PC Alignment fault
            exception = ExceptionSyndrome(Exception_PCAlignment);
            exception.vaddress = ThisInstrAddr();
        else
            exception = AArch32.AbortSyndrome(Exception_InstructionAbort, fault, vaddress);
        if PSTATE.EL == EL2 then
            AArch32.EnterHypMode(exception, preferred_exception_return, vect_offset);
        else
            AArch32.EnterHypMode(exception, preferred_exception_return, 0x14);
    else
        AArch32.ReportPrefetchAbort(route_to_monitor, fault, vaddress);
        AArch32.EnterMode(M32_Abort, preferred_exception_return, lr_offset, vect_offset);

Library pseudocode for aarch32/exceptions/aborts/BranchTargetException

// BranchTargetException
// =====================
// Raise branch target exception.

AArch64.BranchTargetException(bits(52) vaddress)

    route_to_el2 = PSTATE.EL == EL0 && EL2Enabled() && HCR_EL2.TGE == '1';
    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x0;

    exception = ExceptionSyndrome(Exception_BranchTarget);
    exception.syndrome<1:0>  = PSTATE.BTYPE;
    exception.syndrome<24:2> = Zeros();            // RES0

    if UInt(PSTATE.EL) > UInt(EL1) then
        AArch64.TakeException(PSTATE.EL, exception, preferred_exception_return, vect_offset);
    elsif route_to_el2 then
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);
    else
        AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch32/exceptions/asynch/AArch32.TakePhysicalFIQException

// AArch32.TakePhysicalFIQException()
// ==================================

AArch32.TakePhysicalFIQException()

    // Check if routed to AArch64 state
    route_to_aarch64 = PSTATE.EL == EL0 && !ELUsingAArch32(EL1);
    if !route_to_aarch64 && EL2Enabled() && !ELUsingAArch32(EL2) then
        route_to_aarch64 = HCR_EL2.TGE == '1' || (HCR_EL2.FMO == '1' && !IsInHost());
    if !route_to_aarch64 && HaveEL(EL3) && !ELUsingAArch32(EL3) then
        route_to_aarch64 = SCR_EL3.FIQ == '1';

    if route_to_aarch64 then AArch64.TakePhysicalFIQException();

    route_to_monitor = HaveEL(EL3) && SCR.FIQ == '1';
    route_to_hyp = (PSTATE.EL IN {EL0, EL1} && EL2Enabled() &&
                    (HCR.TGE == '1' || HCR.FMO == '1'));
    bits(32) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x1C;
    lr_offset = 4;

    if route_to_monitor then
        AArch32.EnterMonitorMode(preferred_exception_return, lr_offset, vect_offset);
    elsif PSTATE.EL == EL2 || route_to_hyp then
        exception = ExceptionSyndrome(Exception_FIQ);
        AArch32.EnterHypMode(exception, preferred_exception_return, vect_offset);
    else
        AArch32.EnterMode(M32_FIQ, preferred_exception_return, lr_offset, vect_offset);

Library pseudocode for aarch32/exceptions/asynch/AArch32.TakePhysicalIRQException

// AArch32.TakePhysicalIRQException()
// ==================================
// Take an enabled physical IRQ exception.

AArch32.TakePhysicalIRQException()

    // Check if routed to AArch64 state
    route_to_aarch64 = PSTATE.EL == EL0 && !ELUsingAArch32(EL1);
    if !route_to_aarch64 && EL2Enabled() && !ELUsingAArch32(EL2) then
        route_to_aarch64 = HCR_EL2.TGE == '1' || (HCR_EL2.IMO == '1' && !IsInHost());
    if !route_to_aarch64 && HaveEL(EL3) && !ELUsingAArch32(EL3) then
        route_to_aarch64 = SCR_EL3.IRQ == '1';

    if route_to_aarch64 then AArch64.TakePhysicalIRQException();

    route_to_monitor = HaveEL(EL3) && SCR.IRQ == '1';
    route_to_hyp = (PSTATE.EL IN {EL0, EL1} && EL2Enabled() &&
                    (HCR.TGE == '1' || HCR.IMO == '1'));
    bits(32) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x18;
    lr_offset = 4;

    if route_to_monitor then
        AArch32.EnterMonitorMode(preferred_exception_return, lr_offset, vect_offset);
    elsif PSTATE.EL == EL2 || route_to_hyp then
        exception = ExceptionSyndrome(Exception_IRQ);
        AArch32.EnterHypMode(exception, preferred_exception_return, vect_offset);
    else
        AArch32.EnterMode(M32_IRQ, preferred_exception_return, lr_offset, vect_offset);

Library pseudocode for aarch32/exceptions/asynch/AArch32.TakePhysicalSErrorException

// AArch32.TakePhysicalSErrorException()
// =====================================

AArch32.TakePhysicalSErrorException(boolean parity, bit extflag, bits(2) errortype,
                                    boolean impdef_syndrome, bits(24) full_syndrome)

    ClearPendingPhysicalSError();
    // Check if routed to AArch64 state
    route_to_aarch64 = PSTATE.EL == EL0 && !ELUsingAArch32(EL1);
    if !route_to_aarch64 && EL2Enabled() && !ELUsingAArch32(EL2) then
        route_to_aarch64 = (HCR_EL2.TGE == '1' || (!IsInHost() && HCR_EL2.AMO == '1'));
    if !route_to_aarch64 && HaveEL(EL3) && !ELUsingAArch32(EL3) then
        route_to_aarch64 = SCR_EL3.EA == '1';

    if route_to_aarch64 then
        AArch64.TakePhysicalSErrorException(impdef_syndrome, full_syndrome);

    route_to_monitor = HaveEL(EL3) && SCR.EA == '1';
    route_to_hyp = (PSTATE.EL IN {EL0, EL1} && EL2Enabled() &&
                    (HCR.TGE == '1' || HCR.AMO == '1'));
    bits(32) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x10;
    lr_offset = 8;

    fault = AArch32.AsynchExternalAbort(parity, errortype, extflag);
    vaddress = bits(32) UNKNOWN;
    if route_to_monitor then
        AArch32.ReportDataAbort(route_to_monitor, fault, vaddress);
        AArch32.EnterMonitorMode(preferred_exception_return, lr_offset, vect_offset);
    elsif PSTATE.EL == EL2 || route_to_hyp then
        exception = AArch32.AbortSyndrome(Exception_DataAbort, fault, vaddress);
        if PSTATE.EL == EL2 then
            AArch32.EnterHypMode(exception, preferred_exception_return, vect_offset);
        else
            AArch32.EnterHypMode(exception, preferred_exception_return, 0x14);
    else
        AArch32.ReportDataAbort(route_to_monitor, fault, vaddress);
        AArch32.EnterMode(M32_Abort, preferred_exception_return, lr_offset, vect_offset);

Library pseudocode for aarch32/exceptions/asynch/AArch32.TakeVirtualFIQException

// AArch32.TakeVirtualFIQException()
// =================================

AArch32.TakeVirtualFIQException()
    assert PSTATE.EL IN {EL0, EL1} && EL2Enabled();
    if ELUsingAArch32(EL2) then    // Virtual IRQ enabled if TGE==0 and FMO==1
        assert HCR.TGE == '0' && HCR.FMO == '1';
    else
        assert HCR_EL2.TGE == '0' && HCR_EL2.FMO == '1';
    // Check if routed to AArch64 state
    if PSTATE.EL == EL0 && !ELUsingAArch32(EL1) then AArch64.TakeVirtualFIQException();

    bits(32) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x1C;
    lr_offset = 4;

    AArch32.EnterMode(M32_FIQ, preferred_exception_return, lr_offset, vect_offset);

Library pseudocode for aarch32/exceptions/asynch/AArch32.TakeVirtualIRQException

// AArch32.TakeVirtualIRQException()
// =================================

AArch32.TakeVirtualIRQException()
    assert PSTATE.EL IN {EL0, EL1} && EL2Enabled();
    if ELUsingAArch32(EL2) then    // Virtual IRQs enabled if TGE==0 and IMO==1
        assert HCR.TGE == '0' && HCR.IMO == '1';
    else
        assert HCR_EL2.TGE == '0' && HCR_EL2.IMO == '1';
    // Check if routed to AArch64 state
    if PSTATE.EL == EL0 && !ELUsingAArch32(EL1) then AArch64.TakeVirtualIRQException();

    bits(32) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x18;
    lr_offset = 4;

    AArch32.EnterMode(M32_IRQ, preferred_exception_return, lr_offset, vect_offset);

Library pseudocode for aarch32/exceptions/asynch/AArch32.TakeVirtualSErrorException

// AArch32.TakeVirtualSErrorException()
// ====================================

AArch32.TakeVirtualSErrorException(bit extflag, bits(2) errortype, boolean impdef_syndrome,
                                   bits(24) full_syndrome)

    assert PSTATE.EL IN {EL0, EL1} && EL2Enabled();
    if ELUsingAArch32(EL2) then    // Virtual SError enabled if TGE==0 and AMO==1
        assert HCR.TGE == '0' && HCR.AMO == '1';
    else
        assert HCR_EL2.TGE == '0' && HCR_EL2.AMO == '1';
    // Check if routed to AArch64 state
    if PSTATE.EL == EL0 && !ELUsingAArch32(EL1) then
        AArch64.TakeVirtualSErrorException(impdef_syndrome, full_syndrome);

    route_to_monitor = FALSE;

    bits(32) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x10;
    lr_offset = 8;

    vaddress = bits(32) UNKNOWN;
    parity = FALSE;
    if HaveRASExt() then
        if ELUsingAArch32(EL2) then
            fault = AArch32.AsynchExternalAbort(FALSE, VDFSR.AET, VDFSR.ExT);
        else
            fault = AArch32.AsynchExternalAbort(FALSE, VSESR_EL2.AET, VSESR_EL2.ExT);
    else
        fault = AArch32.AsynchExternalAbort(parity, errortype, extflag);

    ClearPendingVirtualSError();
    AArch32.ReportDataAbort(route_to_monitor, fault, vaddress);
    AArch32.EnterMode(M32_Abort, preferred_exception_return, lr_offset, vect_offset);

Library pseudocode for aarch32/exceptions/debug/AArch32.SoftwareBreakpoint

// AArch32.SoftwareBreakpoint()
// ============================

AArch32.SoftwareBreakpoint(bits(16) immediate)

    if (EL2Enabled() && !ELUsingAArch32(EL2) &&
        (HCR_EL2.TGE == '1' || MDCR_EL2.TDE == '1')) || !ELUsingAArch32(EL1) then
        AArch64.SoftwareBreakpoint(immediate);

    vaddress = bits(32) UNKNOWN;
    acctype = AccType_IFETCH;          // Take as a Prefetch Abort
    iswrite = FALSE;
    entry = DebugException_BKPT;

    fault = AArch32.DebugFault(acctype, iswrite, entry);
    AArch32.Abort(vaddress, fault);

Library pseudocode for aarch32/exceptions/debug/DebugException

constant bits(4) DebugException_Breakpoint  = '0001';
constant bits(4) DebugException_BKPT        = '0011';
constant bits(4) DebugException_VectorCatch = '0101';
constant bits(4) DebugException_Watchpoint  = '1010';

Library pseudocode for aarch32/exceptions/exceptions/AArch32.CheckAdvSIMDOrFPRegisterTraps

// AArch32.CheckAdvSIMDOrFPRegisterTraps()
// =======================================
// Check if an instruction that accesses an Advanced SIMD and
// floating-point System register is trapped by an appropriate HCR.TIDx
// ID group trap control.

AArch32.CheckAdvSIMDOrFPRegisterTraps(bits(4) reg)

    if PSTATE.EL == EL1 && EL2Enabled() then
        tid0 = if ELUsingAArch32(EL2) then HCR.TID0 else HCR_EL2.TID0;
        tid3 = if ELUsingAArch32(EL2) then HCR.TID3 else HCR_EL2.TID3;

        if (tid0 == '1' && reg == '0000')                            // FPSID
          || (tid3 == '1' && reg IN {'0101', '0110', '0111'}) then   // MVFRx
            if ELUsingAArch32(EL2) then
                AArch32.SystemAccessTrap(M32_Hyp, 0x8);      // Exception_AdvSIMDFPAccessTrap
            else
                AArch64.AArch32SystemAccessTrap(EL2, 0x8);   // Exception_AdvSIMDFPAccessTrap

Library pseudocode for aarch32/exceptions/exceptions/AArch32.ExceptionClass

// AArch32.ExceptionClass()
// ========================
// Returns the Exception Class and Instruction Length fields to be reported in HSR

(integer,bit) AArch32.ExceptionClass(Exception exceptype)

    il = if ThisInstrLength() == 32 then '1' else '0';

    case exceptype of
        when Exception_Uncategorized        ec = 0x00; il = '1';
        when Exception_WFxTrap              ec = 0x01;
        when Exception_CP15RTTrap           ec = 0x03;
        when Exception_CP15RRTTrap          ec = 0x04;
        when Exception_CP14RTTrap           ec = 0x05;
        when Exception_CP14DTTrap           ec = 0x06;
        when Exception_AdvSIMDFPAccessTrap  ec = 0x07;
        when Exception_FPIDTrap             ec = 0x08;
        when Exception_PACTrap              ec = 0x09;
        when Exception_CP14RRTTrap          ec = 0x0C;
        when Exception_BranchTarget         ec = 0x0D;
        when Exception_IllegalState         ec = 0x0E; il = '1';
        when Exception_SupervisorCall       ec = 0x11;
        when Exception_HypervisorCall       ec = 0x12;
        when Exception_MonitorCall          ec = 0x13;
        when Exception_ERetTrap             ec = 0x1A;
        when Exception_InstructionAbort     ec = 0x20; il = '1';
        when Exception_PCAlignment          ec = 0x22; il = '1';
        when Exception_DataAbort            ec = 0x24;
        when Exception_NV2DataAbort         ec = 0x25;
        when Exception_FPTrappedException   ec = 0x28;
        otherwise                           Unreachable();

    if ec IN {0x20,0x24} && PSTATE.EL == EL2 then
        ec = ec + 1;

    return (ec,il);

Library pseudocode for aarch32/exceptions/exceptions/AArch32.GeneralExceptionsToAArch64

// AArch32.GeneralExceptionsToAArch64()
// ====================================
// Returns TRUE if exceptions normally routed to EL1 are being handled at an Exception
// level using AArch64, because either EL1 is using AArch64 or TGE is in force and EL2
// is using AArch64.

boolean AArch32.GeneralExceptionsToAArch64()
    return ((PSTATE.EL == EL0 && !ELUsingAArch32(EL1)) ||
            (EL2Enabled() && !ELUsingAArch32(EL2) && HCR_EL2.TGE == '1'));

Library pseudocode for aarch32/exceptions/exceptions/AArch32.ReportHypEntry

// AArch32.ReportHypEntry()
// ========================
// Report syndrome information to Hyp mode registers.

AArch32.ReportHypEntry(ExceptionRecord exception)

    Exception exceptype = exception.exceptype;

    (ec,il) = AArch32.ExceptionClass(exceptype);
    iss = exception.syndrome;

    // IL is not valid for Data Abort exceptions without valid instruction syndrome information
    if ec IN {0x24,0x25} && iss<24> == '0' then il = '1';

    HSR = ec<5:0>:il:iss;

    if exceptype IN {Exception_InstructionAbort, Exception_PCAlignment} then
        HIFAR = exception.vaddress<31:0>;
        HDFAR = bits(32) UNKNOWN;
    elsif exceptype == Exception_DataAbort then
        HIFAR = bits(32) UNKNOWN;
        HDFAR = exception.vaddress<31:0>;

    if exception.ipavalid then
        HPFAR<31:4> = exception.ipaddress<39:12>;
    else
        HPFAR<31:4> = bits(28) UNKNOWN;

    return;

Library pseudocode for aarch32/exceptions/exceptions/AArch32.ResetControlRegisters

// Resets System registers and memory-mapped control registers that have architecturally-defined
// reset values to those values.
AArch32.ResetControlRegisters(boolean cold_reset);

Library pseudocode for aarch32/exceptions/exceptions/AArch32.TakeReset

// AArch32.TakeReset()
// ===================
// Reset into AArch32 state

AArch32.TakeReset(boolean cold_reset)
    assert HighestELUsingAArch32();

    // Enter the highest implemented Exception level in AArch32 state
    if HaveEL(EL3) then
        AArch32.WriteMode(M32_Svc);
        SCR.NS = '0';                  // Secure state
    elsif HaveEL(EL2) then
        AArch32.WriteMode(M32_Hyp);
    else
        AArch32.WriteMode(M32_Svc);

    // Reset the CP14 and CP15 registers and other system components
    AArch32.ResetControlRegisters(cold_reset);
    FPEXC.EN = '0';

    // Reset all other PSTATE fields, including instruction set and endianness according to the
    // SCTLR values produced by the above call to ResetControlRegisters()
    PSTATE.<A,I,F> = '111';        // All asynchronous exceptions masked
    PSTATE.IT = '00000000';        // IT block state reset
    PSTATE.T = SCTLR.TE;           // Instruction set: TE=0: A32, TE=1: T32. PSTATE.J is RES0.
    PSTATE.E = SCTLR.EE;           // Endianness: EE=0: little-endian, EE=1: big-endian
    PSTATE.IL = '0';               // Clear Illegal Execution state bit

    // All registers, bits and fields not reset by the above pseudocode or by the BranchTo() call
    // below are UNKNOWN bitstrings after reset. In particular, the return information registers
    // R14 or ELR_hyp and SPSR have UNKNOWN values, so that it
    // is impossible to return from a reset in an architecturally defined way.
    AArch32.ResetGeneralRegisters();
    AArch32.ResetSIMDFPRegisters();
    AArch32.ResetSpecialRegisters();
    ResetExternalDebugRegisters(cold_reset);

    bits(32) rv;                   // IMPLEMENTATION DEFINED reset vector

    if HaveEL(EL3) then
        if MVBAR<0> == '1' then    // Reset vector in MVBAR
            rv = MVBAR<31:1>:'0';
        else
            rv = bits(32) IMPLEMENTATION_DEFINED "reset vector address";
    else
        rv = RVBAR<31:1>:'0';

    // The reset vector must be correctly aligned
    assert rv<0> == '0' && (PSTATE.T == '1' || rv<1> == '0');

    BranchTo(rv, BranchType_RESET);

Library pseudocode for aarch32/exceptions/exceptions/ExcVectorBase

// ExcVectorBase()
// ===============

bits(32) ExcVectorBase()
    if SCTLR.V == '1' then    // Hivecs selected, base = 0xFFFF0000
        return Ones(16):Zeros(16);
    else
        return VBAR<31:5>:Zeros(5);

Library pseudocode for aarch32/exceptions/ieeefp/AArch32.FPTrappedException

// AArch32.FPTrappedException()
// ============================

AArch32.FPTrappedException(bits(8) accumulated_exceptions)
    if AArch32.GeneralExceptionsToAArch64() then
        is_ase = FALSE;
        element = 0;
        AArch64.FPTrappedException(is_ase, element, accumulated_exceptions);
    FPEXC.DEX = '1';
    FPEXC.TFV = '1';
    FPEXC<7,4:0> = accumulated_exceptions<7,4:0>;    // IDF,IXF,UFF,OFF,DZF,IOF
    FPEXC<10:8> = '111';                             // VECITR is RES1

    AArch32.TakeUndefInstrException();

Library pseudocode for aarch32/exceptions/syscalls/AArch32.CallHypervisor

// AArch32.CallHypervisor()
// ========================
// Performs a HVC call

AArch32.CallHypervisor(bits(16) immediate)
    assert HaveEL(EL2);

    if !ELUsingAArch32(EL2) then
        AArch64.CallHypervisor(immediate);
    else
        AArch32.TakeHVCException(immediate);

Library pseudocode for aarch32/exceptions/exceptions/AArch32.TakeReset

// AArch32.TakeReset()
// ===================
// Reset into AArch32 state

AArch32.TakeReset(boolean cold_reset)
    assert HighestELUsingAArch32();

    // Enter the highest implemented Exception level in AArch32 state
    if HaveEL(EL3) then
        AArch32.WriteMode(M32_Svc);
        SCR.NS = '0';                 // Secure state
    elsif HaveEL(EL2) then
        AArch32.WriteMode(M32_Hyp);
    else
        AArch32.WriteMode(M32_Svc);

    // Reset the CP14 and CP15 registers and other system components
    AArch32.ResetControlRegisters(cold_reset);
    FPEXC.EN = '0';

    // Reset all other PSTATE fields, including instruction set and endianness according to the
    // SCTLR values produced by the above call to ResetControlRegisters()
    PSTATE.<A,I,F> = '111';           // All asynchronous exceptions masked
    PSTATE.IT = '00000000';           // IT block state reset
    PSTATE.T = SCTLR.TE;              // Instruction set: TE=0: A32, TE=1: T32. PSTATE.J is RES0.
    PSTATE.E = SCTLR.EE;              // Endianness: EE=0: little-endian, EE=1: big-endian
    PSTATE.IL = '0';                  // Clear Illegal Execution state bit

    // All registers, bits and fields not reset by the above pseudocode or by the BranchTo() call
    // below are UNKNOWN bitstrings after reset. In particular, the return information registers
    // R14 or ELR_hyp and SPSR have UNKNOWN values, so that it
    // is impossible to return from a reset in an architecturally defined way.
    AArch32.ResetGeneralRegisters();
    AArch32.ResetSIMDFPRegisters();
    AArch32.ResetSpecialRegisters();
    ResetExternalDebugRegisters(cold_reset);

    bits(32) rv;                      // IMPLEMENTATION DEFINED reset vector

    if HaveEL(EL3) then
        if MVBAR<0> == '1' then       // Reset vector in MVBAR
            rv = MVBAR<31:1>:'0';
        else
            rv = bits(32) IMPLEMENTATION_DEFINED "reset vector address";
    else
        rv = RVBAR<31:1>:'0';

    // The reset vector must be correctly aligned
    assert rv<0> == '0' && (PSTATE.T == '1' || rv<1> == '0');

    BranchTo(rv, BranchType_RESET);

Library pseudocode for aarch32/exceptions/exceptions/ExcVectorBase

// ExcVectorBase()
// ===============

bits(32) ExcVectorBase()
    if SCTLR.V == '1' then  // Hivecs selected, base = 0xFFFF0000
        return Ones(16):Zeros(16);
    else
        return VBAR<31:5>:Zeros(5);
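A minimal C sketch of the same selection, assuming sctlr_v and vbar are plain values standing in for SCTLR.V and VBAR (hypothetical inputs, not the architected register interface):

#include <stdint.h>

/* Illustrative sketch: exception vector base selection. */
static uint32_t exc_vector_base(int sctlr_v, uint32_t vbar)
{
    if (sctlr_v)                 /* Hivecs: base = 0xFFFF0000       */
        return 0xFFFF0000u;
    return vbar & ~0x1Fu;        /* VBAR<31:5>:Zeros(5)             */
}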

Library pseudocode for aarch32/exceptions/ieeefp/AArch32.FPTrappedException

// AArch32.FPTrappedException()
// ============================

AArch32.FPTrappedException(bits(8) accumulated_exceptions)
    if AArch32.GeneralExceptionsToAArch64() then
        is_ase = FALSE;
        element = 0;
        AArch64.FPTrappedException(is_ase, element, accumulated_exceptions);
    FPEXC.DEX = '1';
    FPEXC.TFV = '1';
    FPEXC<7,4:0> = accumulated_exceptions<7,4:0>;   // IDF,IXF,UFF,OFF,DZF,IOF
    FPEXC<10:8> = '111';                            // VECITR is RES1
    AArch32.TakeUndefInstrException();

Library pseudocode for aarch32/exceptions/syscalls/AArch32.CallHypervisor

// AArch32.CallHypervisor()
// ========================
// Performs a HVC call

AArch32.CallHypervisor(bits(16) immediate)
    assert HaveEL(EL2);

    if !ELUsingAArch32(EL2) then
        AArch64.CallHypervisor(immediate);
    else
        AArch32.TakeHVCException(immediate);

Library pseudocode for aarch32/exceptions/syscalls/AArch32.CallSupervisor

// AArch32.CallSupervisor()
// ========================
// Calls the Supervisor

AArch32.CallSupervisor(bits(16) immediate)

    if AArch32.CurrentCond() != '1110' then
        immediate = bits(16) UNKNOWN;

    if AArch32.GeneralExceptionsToAArch64() then
        AArch64.CallSupervisor(immediate);
    else
        AArch32.TakeSVCException(immediate);

Library pseudocode for aarch32/exceptions/syscalls/AArch32.TakeHVCException

// AArch32.TakeHVCException()
// ==========================

AArch32.TakeHVCException(bits(16) immediate)
    assert HaveEL(EL2) && ELUsingAArch32(EL2);

    AArch32.ITAdvance();
    SSAdvance();
    bits(32) preferred_exception_return = NextInstrAddr();
    vect_offset = 0x08;

    exception = ExceptionSyndrome(Exception_HypervisorCall);
    exception.syndrome<15:0> = immediate;

    if PSTATE.EL == EL2 then
        AArch32.EnterHypMode(exception, preferred_exception_return, vect_offset);
    else
        AArch32.EnterHypMode(exception, preferred_exception_return, 0x14);

Library pseudocode for aarch32/exceptions/syscalls/AArch32.TakeSMCException

// AArch32.TakeSMCException()
// ==========================

AArch32.TakeSMCException()
    assert HaveEL(EL3) && ELUsingAArch32(EL3);
    AArch32.ITAdvance();
    SSAdvance();
    bits(32) preferred_exception_return = NextInstrAddr();
    vect_offset = 0x08;
    lr_offset = 0;

    AArch32.EnterMonitorMode(preferred_exception_return, lr_offset, vect_offset);

Library pseudocode for aarch32/exceptions/syscalls/AArch32.TakeSVCException

// AArch32.TakeSVCException()
// ==========================

AArch32.TakeSVCException(bits(16) immediate)

    AArch32.ITAdvance();
    SSAdvance();
    route_to_hyp = EL2Enabled() && PSTATE.EL == EL0 && HCR.TGE == '1';

    bits(32) preferred_exception_return = NextInstrAddr();
    vect_offset = 0x08;
    lr_offset = 0;

    if PSTATE.EL == EL2 || route_to_hyp then
        exception = ExceptionSyndrome(Exception_SupervisorCall);
        exception.syndrome<15:0> = immediate;

        if PSTATE.EL == EL2 then
            AArch32.EnterHypMode(exception, preferred_exception_return, vect_offset);
        else
            AArch32.EnterHypMode(exception, preferred_exception_return, 0x14);
    else
        AArch32.EnterMode(M32_Svc, preferred_exception_return, lr_offset, vect_offset);

Library pseudocode for aarch32/exceptions/takeexception/AArch32.EnterHypMode

// AArch32.EnterHypMode()
// ======================
// Take an exception to Hyp mode.

AArch32.EnterHypMode(ExceptionRecord exception, bits(32) preferred_exception_return,
                     integer vect_offset)
    SynchronizeContext();
    assert HaveEL(EL2) && !IsSecure() && ELUsingAArch32(EL2);

    spsr = GetPSRFromPSTATE();
    if !(exception.type IN {Exception_IRQ, Exception_FIQ}) then
        AArch32.ReportHypEntry(exception);
    AArch32.WriteMode(M32_Hyp);
    SPSR[] = spsr;
    if HaveSSBSExt() then PSTATE.SSBS = HSCTLR.DSSBS;
    ELR_hyp = preferred_exception_return;
    PSTATE.T = HSCTLR.TE;                       // PSTATE.J is RES0
    PSTATE.SS = '0';
    if !HaveEL(EL3) || SCR_GEN[].EA == '0' then PSTATE.A = '1';
    if !HaveEL(EL3) || SCR_GEN[].IRQ == '0' then PSTATE.I = '1';
    if !HaveEL(EL3) || SCR_GEN[].FIQ == '0' then PSTATE.F = '1';
    PSTATE.E = HSCTLR.EE;
    PSTATE.IL = '0';
    PSTATE.IT = '00000000';
    BranchTo(HVBAR<31:5>:vect_offset<4:0>, BranchType_EXCEPTION);
    EndOfInstruction();

Library pseudocode for aarch32/exceptions/takeexception/AArch32.EnterMode

// AArch32.EnterMode()
// ===================
// Take an exception to a mode other than Monitor and Hyp mode.

AArch32.EnterMode(bits(5) target_mode, bits(32) preferred_exception_return,
                  integer lr_offset, integer vect_offset)
    SynchronizeContext();
    assert ELUsingAArch32(EL1) && PSTATE.EL != EL2;

    spsr = GetPSRFromPSTATE();
    if PSTATE.M == M32_Monitor then SCR.NS = '0';
    AArch32.WriteMode(target_mode);
    SPSR[] = spsr;
    if HaveSSBSExt() then PSTATE.SSBS = SCTLR.DSSBS;
    R[14] = preferred_exception_return + lr_offset;
    PSTATE.T = SCTLR.TE;                        // PSTATE.J is RES0
    PSTATE.SS = '0';
    if target_mode == M32_FIQ then
        PSTATE.<A,I,F> = '111';
    elsif target_mode IN {M32_Abort, M32_IRQ} then
        PSTATE.<A,I> = '11';
    else
        PSTATE.I = '1';
    PSTATE.E = SCTLR.EE;
    PSTATE.IL = '0';
    PSTATE.IT = '00000000';
    if HavePANExt() && SCTLR.SPAN == '0' then PSTATE.PAN = '1';
    BranchTo(ExcVectorBase()<31:5>:vect_offset<4:0>, BranchType_EXCEPTION);
    EndOfInstruction();

Library pseudocode for aarch32/exceptions/takeexception/AArch32.EnterMonitorMode

// AArch32.EnterMonitorMode()
// ==========================
// Take an exception to Monitor mode.

AArch32.EnterMonitorMode(bits(32) preferred_exception_return, integer lr_offset,
                         integer vect_offset)
    SynchronizeContext();
    assert HaveEL(EL3) && ELUsingAArch32(EL3);
    from_secure = IsSecure();
    spsr = GetPSRFromPSTATE();
    if PSTATE.M == M32_Monitor then SCR.NS = '0';
    AArch32.WriteMode(M32_Monitor);
    SPSR[] = spsr;
    if HaveSSBSExt() then PSTATE.SSBS = SCTLR.DSSBS;
    R[14] = preferred_exception_return + lr_offset;
    PSTATE.T = SCTLR.TE;                        // PSTATE.J is RES0
    PSTATE.SS = '0';
    PSTATE.<A,I,F> = '111';
    PSTATE.E = SCTLR.EE;
    PSTATE.IL = '0';
    PSTATE.IT = '00000000';
    if HavePANExt() then
        if !from_secure then
            PSTATE.PAN = '0';
        elsif SCTLR.SPAN == '0' then
            PSTATE.PAN = '1';
    BranchTo(MVBAR<31:5>:vect_offset<4:0>, BranchType_EXCEPTION);
    EndOfInstruction();

Library pseudocode for aarch32/exceptions/traps/AArch32.CheckAdvSIMDOrFPEnabled

// AArch32.CheckAdvSIMDOrFPEnabled()
// =================================
// Check against CPACR, FPEXC, HCPTR, NSACR, and CPTR_EL3.

AArch32.CheckAdvSIMDOrFPEnabled(boolean fpexc_check, boolean advsimd)
    if PSTATE.EL == EL0 && (!HaveEL(EL2) || (!ELUsingAArch32(EL2) && HCR_EL2.TGE == '0')) && !ELUsingAArch32(EL1) then
        // The PE behaves as if FPEXC.EN is 1
        AArch64.CheckFPAdvSIMDEnabled();
    elsif PSTATE.EL == EL0 && HaveEL(EL2) && !ELUsingAArch32(EL2) && HCR_EL2.TGE == '1' && !ELUsingAArch32(EL1) then
        if fpexc_check && HCR_EL2.RW == '0' then
            fpexc_en = bits(1) IMPLEMENTATION_DEFINED "FPEXC.EN value when TGE==1 and RW==0";
            if fpexc_en == '0' then UNDEFINED;
        AArch64.CheckFPAdvSIMDEnabled();
    else
        cpacr_asedis = CPACR.ASEDIS;
        cpacr_cp10 = CPACR.cp10;

        if HaveEL(EL3) && ELUsingAArch32(EL3) && !IsSecure() then
            // Check if access disabled in NSACR
            if NSACR.NSASEDIS == '1' then cpacr_asedis = '1';
            if NSACR.cp10 == '0' then cpacr_cp10 = '00';

        if PSTATE.EL != EL2 then
            // Check if Advanced SIMD disabled in CPACR
            if advsimd && cpacr_asedis == '1' then UNDEFINED;

            if cpacr_cp10 == '10' then
                (c, cpacr_cp10) = ConstrainUnpredictableBits(Unpredictable_RESCPACR);

            // Check if access disabled in CPACR
            case cpacr_cp10 of
                when '00' disabled = TRUE;
                when '01' disabled = PSTATE.EL == EL0;
                when '11' disabled = FALSE;
            if disabled then UNDEFINED;

        // If required, check FPEXC enabled bit.
        if fpexc_check && FPEXC.EN == '0' then UNDEFINED;

        AArch32.CheckFPAdvSIMDTrap(advsimd);    // Also check against HCPTR and CPTR_EL3

Library pseudocode for aarch32/exceptions/traps/AArch32.CheckFPAdvSIMDTrap

// AArch32.CheckFPAdvSIMDTrap()
// ============================
// Check against CPTR_EL2 and CPTR_EL3.

AArch32.CheckFPAdvSIMDTrap(boolean advsimd)
    if EL2Enabled() && !ELUsingAArch32(EL2) then
        AArch64.CheckFPAdvSIMDTrap();
    else
        if HaveEL(EL2) && !IsSecure() then
            hcptr_tase = HCPTR.TASE;
            hcptr_cp10 = HCPTR.TCP10;

            if HaveEL(EL3) && ELUsingAArch32(EL3) && !IsSecure() then
                // Check if access disabled in NSACR
                if NSACR.NSASEDIS == '1' then hcptr_tase = '1';
                if NSACR.cp10 == '0' then hcptr_cp10 = '1';

            // Check if access disabled in HCPTR
            if (advsimd && hcptr_tase == '1') || hcptr_cp10 == '1' then
                exception = ExceptionSyndrome(Exception_AdvSIMDFPAccessTrap);
                exception.syndrome<24:20> = ConditionSyndrome();
                if advsimd then
                    exception.syndrome<5> = '1';
                else
                    exception.syndrome<5> = '0';
                    exception.syndrome<3:0> = '1010';       // coproc field, always 0xA
                if PSTATE.EL == EL2 then
                    AArch32.TakeUndefInstrException(exception);
                else
                    AArch32.TakeHypTrapException(exception);

        if HaveEL(EL3) && !ELUsingAArch32(EL3) then
            // Check if access disabled in CPTR_EL3
            if CPTR_EL3.TFP == '1' then AArch64.AdvSIMDFPAccessTrap(EL3);
    return;

Library pseudocode for aarch32/exceptions/traps/AArch32.CheckForSMCUndefOrTrap

// AArch32.CheckForSMCUndefOrTrap()
// ================================
// Check for UNDEFINED or trap on SMC instruction

AArch32.CheckForSMCUndefOrTrap()
    if !HaveEL(EL3) || PSTATE.EL == EL0 then
        UNDEFINED;
    if EL2Enabled() && !ELUsingAArch32(EL2) then
        AArch64.CheckForSMCUndefOrTrap(Zeros(16));
    else
        route_to_hyp = HaveEL(EL2) && !IsSecure() && PSTATE.EL == EL1 && HCR.TSC == '1';
        if route_to_hyp then
            exception = ExceptionSyndrome(Exception_MonitorCall);
            AArch32.TakeHypTrapException(exception);

Library pseudocode for aarch32/exceptions/traps/AArch32.CheckForWFxTrap

// AArch32.CheckForWFxTrap()
// =========================
// Check for trap on WFE or WFI instruction

AArch32.CheckForWFxTrap(bits(2) target_el, boolean is_wfe)
    assert HaveEL(target_el);

    // Check for routing to AArch64
    if !ELUsingAArch32(target_el) then
        AArch64.CheckForWFxTrap(target_el, is_wfe);
        return;
    case target_el of
        when EL1 trap = (if is_wfe then SCTLR.nTWE else SCTLR.nTWI) == '0';
        when EL2 trap = (if is_wfe then HCR.TWE else HCR.TWI) == '1';
        when EL3 trap = (if is_wfe then SCR.TWE else SCR.TWI) == '1';
    if trap then
        if target_el == EL1 && EL2Enabled() && !ELUsingAArch32(EL2) && HCR_EL2.TGE == '1' then
            AArch64.WFxTrap(target_el, is_wfe);
        if target_el == EL3 then
            AArch32.TakeMonitorTrapException();
        elsif target_el == EL2 then
            exception = ExceptionSyndrome(Exception_WFxTrap);
            exception.syndrome<24:20> = ConditionSyndrome();
            exception.syndrome<0> = if is_wfe then '1' else '0';
            AArch32.TakeHypTrapException(exception);
        else
            AArch32.TakeUndefInstrException();

Library pseudocode for aarch32/exceptions/traps/AArch32.CheckITEnabled

// AArch32.CheckITEnabled()
// ========================
// Check whether the T32 IT instruction is disabled.

AArch32.CheckITEnabled(bits(4) mask)
    if PSTATE.EL == EL2 then
        it_disabled = HSCTLR.ITD;
    else
        it_disabled = (if ELUsingAArch32(EL1) then SCTLR.ITD else SCTLR[].ITD);
    if it_disabled == '1' then
        if mask != '1000' then UNDEFINED;

        // Otherwise whether the IT block is allowed depends on hw1 of the next instruction.
        next_instr = AArch32.MemSingle[NextInstrAddr(), 2, AccType_IFETCH, TRUE];

        if next_instr IN {'11xxxxxxxxxxxxxx', '1011xxxxxxxxxxxx', '10100xxxxxxxxxxx',
                          '01001xxxxxxxxxxx', '010001xxx1111xxx', '010001xx1xxxx111'} then
            // It is IMPLEMENTATION DEFINED whether the Undefined Instruction exception is
            // taken on the IT instruction or the next instruction. This is not reflected in
            // the pseudocode, which always takes the exception on the IT instruction. This
            // also does not take into account cases where the next instruction is UNPREDICTABLE.
            UNDEFINED;
    return;

Library pseudocode for aarch32/exceptions/traps/AArch32.CheckIllegalState

// AArch32.CheckIllegalState()
// ===========================
// Check PSTATE.IL bit and generate Illegal Execution state exception if set.

AArch32.CheckIllegalState()
    if AArch32.GeneralExceptionsToAArch64() then
        AArch64.CheckIllegalState();
    elsif PSTATE.IL == '1' then
        route_to_hyp = EL2Enabled() && PSTATE.EL == EL0 && HCR.TGE == '1';

        bits(32) preferred_exception_return = ThisInstrAddr();
        vect_offset = 0x04;

        if PSTATE.EL == EL2 || route_to_hyp then
            exception = ExceptionSyndrome(Exception_IllegalState);
            if PSTATE.EL == EL2 then
                AArch32.EnterHypMode(exception, preferred_exception_return, vect_offset);
            else
                AArch32.EnterHypMode(exception, preferred_exception_return, 0x14);
        else
            AArch32.TakeUndefInstrException();

Library pseudocode for aarch32/exceptions/traps/AArch32.CheckSETENDEnabled

// AArch32.CheckSETENDEnabled()
// ============================
// Check whether the AArch32 SETEND instruction is disabled.

AArch32.CheckSETENDEnabled()
    if PSTATE.EL == EL2 then
        setend_disabled = HSCTLR.SED;
    else
        setend_disabled = (if ELUsingAArch32(EL1) then SCTLR.SED else SCTLR[].SED);
    if setend_disabled == '1' then
        UNDEFINED;
    return;

Library pseudocode for aarch32/exceptions/traps/AArch32.SystemAccessTrap

// AArch32.SystemAccessTrap()
// ==========================
// Trapped system register access.

AArch32.SystemAccessTrap(bits(5) mode, integer ec)
    (valid, target_el) = ELFromM32(mode);
    assert valid && HaveEL(target_el) && target_el != EL0 && UInt(target_el) >= UInt(PSTATE.EL);

    if target_el == EL2 then
        exception = AArch32.SystemAccessTrapSyndrome(ThisInstr(), ec);
        AArch32.TakeHypTrapException(exception);
    else
        AArch32.TakeUndefInstrException();

Library pseudocode for aarch32/exceptions/traps/AArch32.SystemAccessTrapSyndrome

// AArch32.SystemAccessTrapSyndrome()
// ==================================
// Returns the syndrome information for traps on AArch32 MCR, MCRR, MRC, MRRC, and VMRS, VMSR instructions,
// other than traps that are due to HCPTR or CPACR.

ExceptionRecord AArch32.SystemAccessTrapSyndrome(bits(32) instr, integer ec)
    ExceptionRecord exception;

    case ec of
        when 0x0 exception = ExceptionSyndrome(Exception_Uncategorized);
        when 0x3 exception = ExceptionSyndrome(Exception_CP15RTTrap);
        when 0x4 exception = ExceptionSyndrome(Exception_CP15RRTTrap);
        when 0x5 exception = ExceptionSyndrome(Exception_CP14RTTrap);
        when 0x6 exception = ExceptionSyndrome(Exception_CP14DTTrap);
        when 0x7 exception = ExceptionSyndrome(Exception_AdvSIMDFPAccessTrap);
        when 0x8 exception = ExceptionSyndrome(Exception_FPIDTrap);
        when 0xC exception = ExceptionSyndrome(Exception_CP14RRTTrap);
        otherwise Unreachable();

    bits(20) iss = Zeros();

    if exception.type IN {Exception_FPIDTrap, Exception_CP14RTTrap, Exception_CP15RTTrap} then
        // Trapped MRC/MCR, VMRS on FPSID
        iss<19:17> = instr<7:5>;          // opc2
        iss<16:14> = instr<23:21>;        // opc1
        iss<13:10> = instr<19:16>;        // CRn
        iss<8:5>   = instr<15:12>;        // Rt
    elsif exception.type IN {Exception_CP14RRTTrap, Exception_AdvSIMDFPAccessTrap, Exception_CP15RRTTrap} then
        // Trapped MRRC/MCRR, VMRS/VMSR
        iss<19:16> = instr<7:4>;          // opc1
        iss<13:10> = instr<19:16>;        // Rt2
        iss<8:5>   = instr<15:12>;        // Rt
        iss<4:1>   = instr<3:0>;          // CRm
    elsif exception.type == Exception_CP14DTTrap then
        // Trapped LDC/STC
        iss<19:12> = instr<7:0>;          // imm8
        iss<4>     = instr<23>;           // U
        iss<2:1>   = instr<24,21>;        // P,W
        if instr<19:16> == '1111' then    // Rn==15, LDC(Literal addressing)/STC
            iss<8:5> = bits(4) UNKNOWN;
            iss<3>   = '1';
    elsif exception.type == Exception_Uncategorized then
        // Trapped for unknown reason
        iss<8:5> = instr<19:16>;          // Rn
        iss<3>   = '0';

    iss<0> = instr<20>;                   // Direction

    exception.syndrome<24:20> = ConditionSyndrome();
    exception.syndrome<19:0>  = iss;

    return exception;

Library pseudocode for aarch32/exceptions/traps/AArch32.TakeHypTrapException

// AArch32.TakeHypTrapException()
// ==============================
// Exceptions routed to Hyp mode as a Hyp Trap exception.

AArch32.TakeHypTrapException(integer ec)
    exception = AArch32.SystemAccessTrapSyndrome(ThisInstr(), ec);
    AArch32.TakeHypTrapException(exception);

// AArch32.TakeHypTrapException()
// ==============================
// Exceptions routed to Hyp mode as a Hyp Trap exception.

AArch32.TakeHypTrapException(ExceptionRecord exception)
    assert HaveEL(EL2) && !IsSecure() && ELUsingAArch32(EL2);

    bits(32) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x14;

    AArch32.EnterHypMode(exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch32/exceptions/traps/AArch32.TakeMonitorTrapException

// AArch32.TakeMonitorTrapException()
// ==================================
// Exceptions routed to Monitor mode as a Monitor Trap exception.

AArch32.TakeMonitorTrapException()
    assert HaveEL(EL3) && ELUsingAArch32(EL3);

    bits(32) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x04;
    lr_offset = if CurrentInstrSet() == InstrSet_A32 then 4 else 2;

    AArch32.EnterMonitorMode(preferred_exception_return, lr_offset, vect_offset);

Library pseudocode for aarch32/exceptions/traps/AArch32.TakeUndefInstrException

// AArch32.TakeUndefInstrException()
// =================================

AArch32.TakeUndefInstrException()
    exception = ExceptionSyndrome(Exception_Uncategorized);
    AArch32.TakeUndefInstrException(exception);

// AArch32.TakeUndefInstrException()
// =================================

AArch32.TakeUndefInstrException(ExceptionRecord exception)
    route_to_hyp = EL2Enabled() && PSTATE.EL == EL0 && HCR.TGE == '1';

    bits(32) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x04;
    lr_offset = if CurrentInstrSet() == InstrSet_A32 then 4 else 2;

    if PSTATE.EL == EL2 then
        AArch32.EnterHypMode(exception, preferred_exception_return, vect_offset);
    elsif route_to_hyp then
        AArch32.EnterHypMode(exception, preferred_exception_return, 0x14);
    else
        AArch32.EnterMode(M32_Undef, preferred_exception_return, lr_offset, vect_offset);

Library pseudocode for aarch32/exceptions/traps/AArch32.UndefinedFault

// AArch32.UndefinedFault()
// ========================

AArch32.UndefinedFault()
    if AArch32.GeneralExceptionsToAArch64() then AArch64.UndefinedFault();
    AArch32.TakeUndefInstrException();

Library pseudocode for aarch32/functions/aborts/AArch32.CreateFaultRecord

// AArch32.CreateFaultRecord()
// ===========================

FaultRecord AArch32.CreateFaultRecord(Fault type, bits(40) ipaddress, bits(4) domain, integer level,
                                      AccType acctype, boolean write, bit extflag, bits(4) debugmoe,
                                      bits(2) errortype, boolean secondstage, boolean s2fs1walk)

    FaultRecord fault;
    fault.type = type;
    if (type != Fault_None && PSTATE.EL != EL2 && TTBCR.EAE == '0' && !secondstage && !s2fs1walk &&
        AArch32.DomainValid(type, level)) then
        fault.domain = domain;
    else
        fault.domain = bits(4) UNKNOWN;
    fault.debugmoe = debugmoe;
    fault.errortype = errortype;
    fault.ipaddress.NS = bit UNKNOWN;
    fault.ipaddress.address = ZeroExtend(ipaddress);
    fault.level = level;
    fault.acctype = acctype;
    fault.write = write;
    fault.extflag = extflag;
    fault.secondstage = secondstage;
    fault.s2fs1walk = s2fs1walk;

    return fault;

Library pseudocode for aarch32/functions/aborts/AArch32.DomainValid

// AArch32.DomainValid()
// =====================
// Returns TRUE if the Domain is valid for a Short-descriptor translation scheme.

boolean AArch32.DomainValid(Fault type, integer level)
    assert type != Fault_None;

    case type of
        when Fault_Domain
            return TRUE;
        when Fault_Translation, Fault_AccessFlag, Fault_SyncExternalOnWalk, Fault_SyncParityOnWalk
            return level == 2;
        otherwise
            return FALSE;

Library pseudocode for aarch32/functions/aborts/AArch32.FaultStatusLD

// AArch32.FaultStatusLD()
// =======================
// Creates an exception fault status value for Abort and Watchpoint exceptions taken
// to Abort mode using AArch32 and Long-descriptor format.

bits(32) AArch32.FaultStatusLD(boolean d_side, FaultRecord fault)
    assert fault.type != Fault_None;

    bits(32) fsr = Zeros();
    if HaveRASExt() && IsAsyncAbort(fault) then fsr<15:14> = fault.errortype;
    if d_side then
        if fault.acctype IN {AccType_DC, AccType_IC, AccType_AT} then
            fsr<13> = '1'; fsr<11> = '1';
        else
            fsr<11> = if fault.write then '1' else '0';
    if IsExternalAbort(fault) then fsr<12> = fault.extflag;
    fsr<9> = '1';
    fsr<5:0> = EncodeLDFSC(fault.type, fault.level);

    return fsr;

Library pseudocode for aarch32/functions/aborts/AArch32.FaultStatusSD

// AArch32.FaultStatusSD()
// =======================
// Creates an exception fault status value for Abort and Watchpoint exceptions taken
// to Abort mode using AArch32 and Short-descriptor format.

bits(32) AArch32.FaultStatusSD(boolean d_side, FaultRecord fault)
    assert fault.type != Fault_None;

    bits(32) fsr = Zeros();
    if HaveRASExt() && IsAsyncAbort(fault) then fsr<15:14> = fault.errortype;
    if d_side then
        if fault.acctype IN {AccType_DC, AccType_IC, AccType_AT} then
            fsr<13> = '1'; fsr<11> = '1';
        else
            fsr<11> = if fault.write then '1' else '0';
    if IsExternalAbort(fault) then fsr<12> = fault.extflag;
    fsr<9> = '0';
    fsr<10,3:0> = EncodeSDFSC(fault.type, fault.level);
    if d_side then
        fsr<7:4> = fault.domain;               // Domain field (data fault only)

    return fsr;

Library pseudocode for aarch32/functions/aborts/AArch32.FaultSyndrome

// AArch32.FaultSyndrome()
// =======================
// Creates an exception syndrome value for Abort and Watchpoint exceptions taken to
// AArch32 Hyp mode.

bits(25) AArch32.FaultSyndrome(boolean d_side, FaultRecord fault)
    assert fault.type != Fault_None;

    bits(25) iss = Zeros();
    if HaveRASExt() && IsAsyncAbort(fault) then iss<11:10> = fault.errortype; // AET
    if d_side then
        if IsSecondStage(fault) && !fault.s2fs1walk then iss<24:14> = LSInstructionSyndrome();
        if fault.acctype IN {AccType_DC, AccType_DC_UNPRIV, AccType_IC, AccType_AT} then
            iss<8> = '1'; iss<6> = '1';
        else
            iss<6> = if fault.write then '1' else '0';
    if IsExternalAbort(fault) then iss<9> = fault.extflag;
    iss<7> = if fault.s2fs1walk then '1' else '0';
    iss<5:0> = EncodeLDFSC(fault.type, fault.level);

    return iss;

Library pseudocode for aarch32/functions/aborts/EncodeSDFSC

// EncodeSDFSC()
// =============
// Function that gives the Short-descriptor FSR code for different types of Fault

bits(5) EncodeSDFSC(Fault type, integer level)

    bits(5) result;
    case type of
        when Fault_AccessFlag
            assert level IN {1,2};
            result = if level == 1 then '00011' else '00110';
        when Fault_Alignment
            result = '00001';
        when Fault_Permission
            assert level IN {1,2};
            result = if level == 1 then '01101' else '01111';
        when Fault_Domain
            assert level IN {1,2};
            result = if level == 1 then '01001' else '01011';
        when Fault_Translation
            assert level IN {1,2};
            result = if level == 1 then '00101' else '00111';
        when Fault_SyncExternal
            result = '01000';
        when Fault_SyncExternalOnWalk
            assert level IN {1,2};
            result = if level == 1 then '01100' else '01110';
        when Fault_SyncParity
            result = '11001';
        when Fault_SyncParityOnWalk
            assert level IN {1,2};
            result = if level == 1 then '11100' else '11110';
        when Fault_AsyncParity
            result = '11000';
        when Fault_AsyncExternal
            result = '10110';
        when Fault_Debug
            result = '00010';
        when Fault_TLBConflict
            result = '10000';
        when Fault_Lockdown
            result = '10100';   // IMPLEMENTATION DEFINED
        when Fault_Exclusive
            result = '10101';   // IMPLEMENTATION DEFINED
        when Fault_ICacheMaint
            result = '00100';
        otherwise
            Unreachable();

    return result;

Library pseudocode for aarch32/functions/common/A32ExpandImm

// A32ExpandImm()
// ==============

bits(32) A32ExpandImm(bits(12) imm12)

    // PSTATE.C argument to following function call does not affect the imm32 result.
    (imm32, -) = A32ExpandImm_C(imm12, PSTATE.C);

    return imm32;

Library pseudocode for aarch32/functions/common/A32ExpandImm_C

// A32ExpandImm_C()
// ================

(bits(32), bit) A32ExpandImm_C(bits(12) imm12, bit carry_in)

    unrotated_value = ZeroExtend(imm12<7:0>, 32);
    (imm32, carry_out) = Shift_C(unrotated_value, SRType_ROR, 2*UInt(imm12<11:8>), carry_in);

    return (imm32, carry_out);
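In other words, an A32 modified immediate is imm12<7:0> rotated right by twice imm12<11:8>, with the carry out taken from bit 31 of any non-trivial rotation. A self-contained C sketch (illustrative, not part of the Arm source):

#include <stdint.h>

/* Illustrative sketch: A32 modified-immediate expansion with carry.
   carry_out mirrors Shift_C: unchanged for a zero rotation, otherwise
   the top bit of the rotated result. */
static uint32_t a32_expand_imm_c(uint32_t imm12, int carry_in, int *carry_out)
{
    uint32_t value  = imm12 & 0xFFu;
    unsigned rotate = 2u * ((imm12 >> 8) & 0xFu);       /* 0,2,...,30 */
    uint32_t imm32  = rotate ? (value >> rotate) | (value << (32 - rotate))
                             : value;
    *carry_out = rotate ? (int)(imm32 >> 31) : carry_in;
    return imm32;
}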

Library pseudocode for aarch32/functions/common/DecodeImmShift

// DecodeImmShift()
// ================

(SRType, integer) DecodeImmShift(bits(2) type, bits(5) imm5)

    case type of
        when '00'
            shift_t = SRType_LSL;  shift_n = UInt(imm5);
        when '01'
            shift_t = SRType_LSR;  shift_n = if imm5 == '00000' then 32 else UInt(imm5);
        when '10'
            shift_t = SRType_ASR;  shift_n = if imm5 == '00000' then 32 else UInt(imm5);
        when '11'
            if imm5 == '00000' then
                shift_t = SRType_RRX;  shift_n = 1;
            else
                shift_t = SRType_ROR;  shift_n = UInt(imm5);

    return (shift_t, shift_n);
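The imm5 == '00000' special cases are the point of this decode: they select LSR #32, ASR #32, or RRX rather than a zero-length shift. A C sketch under those same rules (the enum and function names are hypothetical):

#include <stdint.h>

typedef enum { LSL, LSR, ASR, ROR, RRX } srtype_t;

/* Illustrative sketch of the immediate shift decode, including the
   imm5 == 0 special cases. */
static srtype_t decode_imm_shift(unsigned type, unsigned imm5, unsigned *amount)
{
    switch (type & 3u) {
    case 0:  *amount = imm5;                 return LSL;
    case 1:  *amount = imm5 ? imm5 : 32;     return LSR;
    case 2:  *amount = imm5 ? imm5 : 32;     return ASR;
    default:
        if (imm5 == 0) { *amount = 1;        return RRX; }
        *amount = imm5;                      return ROR;
    }
}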

Library pseudocode for aarch32/functions/common/DecodeRegShift

// DecodeRegShift()
// ================

SRType DecodeRegShift(bits(2) type)
    case type of
        when '00' shift_t = SRType_LSL;
        when '01' shift_t = SRType_LSR;
        when '10' shift_t = SRType_ASR;
        when '11' shift_t = SRType_ROR;
    return shift_t;

Library pseudocode for aarch32/functions/common/RRX

// RRX()
// =====

bits(N) RRX(bits(N) x, bit carry_in)
    (result, -) = RRX_C(x, carry_in);
    return result;

Library pseudocode for aarch32/functions/common/RRX_C

// RRX_C()
// =======

(bits(N), bit) RRX_C(bits(N) x, bit carry_in)
    result = carry_in : x<N-1:1>;
    carry_out = x<0>;
    return (result, carry_out);
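RRX is a one-bit rotate through carry: the carry in becomes the new top bit and the old bit 0 becomes the carry out. A one-function C sketch for the 32-bit case (illustrative only):

#include <stdint.h>

/* Illustrative sketch: 32-bit rotate right with extend, with carry. */
static uint32_t rrx_c32(uint32_t x, int carry_in, int *carry_out)
{
    *carry_out = (int)(x & 1u);
    return ((uint32_t)(carry_in & 1) << 31) | (x >> 1);
}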

Library pseudocode for aarch32/functions/common/SRType

enumeration SRType {SRType_LSL, SRType_LSR, SRType_ASR, SRType_ROR, SRType_RRX};

Library pseudocode for aarch32/functions/common/Shift

// Shift()
// =======

bits(N) Shift(bits(N) value, SRType type, integer amount, bit carry_in)
    (result, -) = Shift_C(value, type, amount, carry_in);
    return result;

Library pseudocode for aarch32/functions/common/Shift_C

// Shift_C()
// =========

(bits(N), bit) Shift_C(bits(N) value, SRType type, integer amount, bit carry_in)
    assert !(type == SRType_RRX && amount != 1);

    if amount == 0 then
        (result, carry_out) = (value, carry_in);
    else
        case type of
            when SRType_LSL
                (result, carry_out) = LSL_C(value, amount);
            when SRType_LSR
                (result, carry_out) = LSR_C(value, amount);
            when SRType_ASR
                (result, carry_out) = ASR_C(value, amount);
            when SRType_ROR
                (result, carry_out) = ROR_C(value, amount);
            when SRType_RRX
                (result, carry_out) = RRX_C(value, carry_in);

    return (result, carry_out);
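A 32-bit C rendering of the same dispatch may help fix the carry semantics: the carry out is the last bit shifted out for LSL/LSR/ASR, the new bit 31 for ROR, and the old bit 0 for RRX. This is a sketch under the amounts DecodeImmShift can produce (0..31 for LSL/ROR, 1..32 for LSR/ASR, exactly 1 for RRX), and it assumes the compiler implements signed right shift arithmetically:

#include <stdint.h>

typedef enum { LSL, LSR, ASR, ROR, RRX } srtype_t;  /* as in the earlier sketch */

/* Illustrative 32-bit sketch of Shift_C. */
static uint32_t shift_c32(uint32_t v, srtype_t t, unsigned n, int cin, int *cout)
{
    if (n == 0) { *cout = cin; return v; }      /* shift by 0: carry unchanged */
    switch (t) {
    case LSL:                                   /* n in 1..31                  */
        *cout = (int)((v >> (32 - n)) & 1u);    /* last bit shifted out        */
        return v << n;
    case LSR:                                   /* n in 1..32                  */
        *cout = (int)((v >> (n - 1)) & 1u);
        return (n == 32) ? 0 : v >> n;
    case ASR:                                   /* n in 1..32                  */
        *cout = (int)((v >> (n - 1)) & 1u);
        return (uint32_t)((int32_t)v >> (n - 1) >> 1);  /* avoids shift by 32  */
    case ROR: {                                 /* n in 1..31                  */
        uint32_t res = (v >> n) | (v << (32 - n));
        *cout = (int)(res >> 31);               /* carry = new bit 31          */
        return res;
    }
    default:                                    /* RRX, n == 1                 */
        *cout = (int)(v & 1u);
        return ((uint32_t)(cin & 1) << 31) | (v >> 1);
    }
}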

Library pseudocode for aarch32/functions/common/T32ExpandImm

// T32ExpandImm()
// ==============

bits(32) T32ExpandImm(bits(12) imm12)

    // PSTATE.C argument to following function call does not affect the imm32 result.
    (imm32, -) = T32ExpandImm_C(imm12, PSTATE.C);

    return imm32;

Library pseudocode for aarch32/functions/common/T32ExpandImm_C

// T32ExpandImm_C()
// ================

(bits(32), bit) T32ExpandImm_C(bits(12) imm12, bit carry_in)

    if imm12<11:10> == '00' then
        case imm12<9:8> of
            when '00'
                imm32 = ZeroExtend(imm12<7:0>, 32);
            when '01'
                imm32 = '00000000' : imm12<7:0> : '00000000' : imm12<7:0>;
            when '10'
                imm32 = imm12<7:0> : '00000000' : imm12<7:0> : '00000000';
            when '11'
                imm32 = imm12<7:0> : imm12<7:0> : imm12<7:0> : imm12<7:0>;
        carry_out = carry_in;
    else
        unrotated_value = ZeroExtend('1':imm12<6:0>, 32);
        (imm32, carry_out) = ROR_C(unrotated_value, UInt(imm12<11:7>));

    return (imm32, carry_out);
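When imm12<11:10> is '00' the byte imm12<7:0> is replicated across the word; otherwise a byte forced into the range 128..255 is rotated right by imm12<11:7>, which is always at least 8 here, so the carry out is simply bit 31 of the result. A self-contained C sketch (illustrative, not part of the Arm source):

#include <stdint.h>

/* Illustrative sketch: T32 modified-immediate expansion with carry. */
static uint32_t t32_expand_imm_c(uint32_t imm12, int carry_in, int *carry_out)
{
    uint32_t b = imm12 & 0xFFu;
    if (((imm12 >> 10) & 3u) == 0) {            /* replication patterns */
        *carry_out = carry_in;
        switch ((imm12 >> 8) & 3u) {
        case 0:  return b;
        case 1:  return (b << 16) | b;
        case 2:  return (b << 24) | (b << 8);
        default: return (b << 24) | (b << 16) | (b << 8) | b;
        }
    }
    uint32_t unrotated = 0x80u | (imm12 & 0x7Fu);   /* '1':imm12<6:0>      */
    unsigned r = (imm12 >> 7) & 0x1Fu;              /* 8..31 in this path  */
    uint32_t imm32 = (unrotated >> r) | (unrotated << (32 - r));
    *carry_out = (int)(imm32 >> 31);
    return imm32;
}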

Library pseudocode for aarch32/functions/coproc/AArch32.CheckCP15InstrCoarseTraps

// AArch32.CheckCP15InstrCoarseTraps()
// ===================================
// Check for coarse-grained CP15 traps in HSTR and HCR.

boolean AArch32.CheckCP15InstrCoarseTraps(integer CRn, integer nreg, integer CRm)

    // Check for coarse-grained Hyp traps
    if EL2Enabled() && PSTATE.EL IN {EL0, EL1} then
        if PSTATE.EL == EL0 && !ELUsingAArch32(EL2) then
            return AArch64.CheckCP15InstrCoarseTraps(CRn, nreg, CRm);
        // Check for MCR, MRC, MCRR and MRRC disabled by HSTR<CRn/CRm>
        major = if nreg == 1 then CRn else CRm;
        if !(major IN {4,14}) && HSTR<major> == '1' then
            return TRUE;

        // Check for MRC and MCR disabled by HCR.TIDCP
        if (HCR.TIDCP == '1' && nreg == 1 &&
            ((CRn == 9  && CRm IN {0,1,2, 5,6,7,8 }) ||
             (CRn == 10 && CRm IN {0,1,   4,   8 }) ||
             (CRn == 11 && CRm IN {0,1,2,3,4,5,6,7,8,15}))) then
            return TRUE;

    return FALSE;
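A C sketch of just the HSTR/HCR.TIDCP predicate at the core of this function, with hstr and hcr_tidcp passed in as plain values standing in for the register state (hypothetical inputs, not the architected interface):

#include <stdbool.h>
#include <stdint.h>

/* Illustrative sketch: coarse-grained CP15 trap decision. */
static bool cp15_coarse_trap(unsigned CRn, unsigned nreg, unsigned CRm,
                             uint32_t hstr, bool hcr_tidcp)
{
    unsigned major = (nreg == 1) ? CRn : CRm;
    if (major != 4 && major != 14 && ((hstr >> major) & 1u))
        return true;                      /* disabled by HSTR<major> */
    if (hcr_tidcp && nreg == 1) {
        if (CRn == 9  && (CRm <= 2 || (CRm >= 5 && CRm <= 8)))  return true;
        if (CRn == 10 && (CRm <= 1 || CRm == 4 || CRm == 8))    return true;
        if (CRn == 11 && (CRm <= 8 || CRm == 15))               return true;
    }
    return false;
}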

Library pseudocode for aarch32/functions/exclusive/AArch32.ExclusiveMonitorsPass

// AArch32.ExclusiveMonitorsPass()
// ===============================
// Return TRUE if the Exclusives monitors for the current PE include all of the addresses
// associated with the virtual address region of size bytes starting at address.
// The immediately following memory write must be to the same addresses.

boolean AArch32.ExclusiveMonitorsPass(bits(32) address, integer size)

    // It is IMPLEMENTATION DEFINED whether the detection of memory aborts happens
    // before or after the check on the local Exclusives monitor. As a result a failure
    // of the local monitor can occur on some implementations even if the memory
    // access would give an memory abort.

    acctype = AccType_ATOMIC;
    iswrite = TRUE;
    aligned = (address == Align(address, size));

    if !aligned then
        secondstage = FALSE;
        AArch32.Abort(address, AArch32.AlignmentFault(acctype, iswrite, secondstage));

    passed = AArch32.IsExclusiveVA(address, ProcessorID(), size);
    if !passed then
        return FALSE;

    memaddrdesc = AArch32.TranslateAddress(address, acctype, iswrite, aligned, size);

    // Check for aborts or debug exceptions
    if IsFault(memaddrdesc) then
        AArch32.Abort(address, memaddrdesc.fault);

    passed = IsExclusiveLocal(memaddrdesc.paddress, ProcessorID(), size);

    if passed then
        ClearExclusiveLocal(ProcessorID());
        if memaddrdesc.memattrs.shareable then
            passed = IsExclusiveGlobal(memaddrdesc.paddress, ProcessorID(), size);

    return passed;

Library pseudocode for aarch32/functions/exclusive/AArch32.IsExclusiveVA

// An optional IMPLEMENTATION DEFINED test for an exclusive access to a virtual
// address region of size bytes starting at address.
//
// It is permitted (but not required) for this function to return FALSE and
// cause a store exclusive to fail if the virtual address region is not
// totally included within the region recorded by MarkExclusiveVA().
//
// It is always safe to return TRUE which will check the physical address only.
boolean AArch32.IsExclusiveVA(bits(32) address, integer processorid, integer size);

Library pseudocode for aarch32/functions/exclusive/AArch32.MarkExclusiveVA

// Optionally record an exclusive access to the virtual address region of size bytes
// starting at address for processorid.
AArch32.MarkExclusiveVA(bits(32) address, integer processorid, integer size);

Library pseudocode for aarch32/functions/exclusive/AArch32.SetExclusiveMonitors

// AArch32.SetExclusiveMonitors()
// ==============================
// Sets the Exclusives monitors for the current PE to record the addresses associated
// with the virtual address region of size bytes starting at address.

AArch32.SetExclusiveMonitors(bits(32) address, integer size)

    acctype = AccType_ATOMIC;
    iswrite = FALSE;
    aligned = (address == Align(address, size));

    memaddrdesc = AArch32.TranslateAddress(address, acctype, iswrite, aligned, size);

    // Check for aborts or debug exceptions
    if IsFault(memaddrdesc) then
        return;

    if memaddrdesc.memattrs.shareable then
        MarkExclusiveGlobal(memaddrdesc.paddress, ProcessorID(), size);

    MarkExclusiveLocal(memaddrdesc.paddress, ProcessorID(), size);

    AArch32.MarkExclusiveVA(address, ProcessorID(), size);

Library pseudocode for aarch32/functions/float/CheckAdvSIMDEnabled

// CheckAdvSIMDEnabled()
// =====================

CheckAdvSIMDEnabled()

    fpexc_check = TRUE;
    advsimd = TRUE;

    AArch32.CheckAdvSIMDOrFPEnabled(fpexc_check, advsimd);
    // Return from CheckAdvSIMDOrFPEnabled() occurs only if Advanced SIMD access is permitted

    // Make temporary copy of D registers
    // _Dclone[] is used as input data for instruction pseudocode
    for i = 0 to 31
        _Dclone[i] = D[i];

    return;

Library pseudocode for aarch32/functions/float/CheckAdvSIMDOrVFPEnabled

// CheckAdvSIMDOrVFPEnabled()
// ==========================

CheckAdvSIMDOrVFPEnabled(boolean include_fpexc_check, boolean advsimd)

    AArch32.CheckAdvSIMDOrFPEnabled(include_fpexc_check, advsimd);
    // Return from CheckAdvSIMDOrFPEnabled() occurs only if VFP access is permitted

    return;

Library pseudocode for aarch32/functions/float/CheckCryptoEnabled32

// CheckCryptoEnabled32()
// ======================

CheckCryptoEnabled32()

    CheckAdvSIMDEnabled();
    // Return from CheckAdvSIMDEnabled() occurs only if access is permitted

    return;

Library pseudocode for aarch32/functions/float/CheckVFPEnabled

// CheckVFPEnabled()
// =================

CheckVFPEnabled(boolean include_fpexc_check)

    advsimd = FALSE;
    AArch32.CheckAdvSIMDOrFPEnabled(include_fpexc_check, advsimd);
    // Return from CheckAdvSIMDOrFPEnabled() occurs only if VFP access is permitted

    return;

Library pseudocode for aarch32/functions/float/FPHalvedSub

// FPHalvedSub()
// =============

bits(N) FPHalvedSub(bits(N) op1, bits(N) op2, FPCRType fpcr)
    assert N IN {16,32,64};
    rounding = FPRoundingMode(fpcr);
    (type1,sign1,value1) = FPUnpack(op1, fpcr);
    (type2,sign2,value2) = FPUnpack(op2, fpcr);
    (done,result) = FPProcessNaNs(type1, type2, op1, op2, fpcr);
    if !done then
        inf1  = (type1 == FPType_Infinity);  inf2  = (type2 == FPType_Infinity);
        zero1 = (type1 == FPType_Zero);      zero2 = (type2 == FPType_Zero);
        if inf1 && inf2 && sign1 == sign2 then
            result = FPDefaultNaN();
            FPProcessException(FPExc_InvalidOp, fpcr);
        elsif (inf1 && sign1 == '0') || (inf2 && sign2 == '1') then
            result = FPInfinity('0');
        elsif (inf1 && sign1 == '1') || (inf2 && sign2 == '0') then
            result = FPInfinity('1');
        elsif zero1 && zero2 && sign1 != sign2 then
            result = FPZero(sign1);
        else
            result_value = (value1 - value2) / 2.0;
            if result_value == 0.0 then  // Sign of exact zero result depends on rounding mode
                result_sign = if rounding == FPRounding_NEGINF then '1' else '0';
                result = FPZero(result_sign);
            else
                result = FPRound(result_value, fpcr);
    return result;

Library pseudocode for aarch32/functions/float/FPRSqrtStep

// FPRSqrtStep()
// =============

bits(N) FPRSqrtStep(bits(N) op1, bits(N) op2)
    assert N IN {16,32};
    FPCRType fpcr = StandardFPSCRValue();
    (type1,sign1,value1) = FPUnpack(op1, fpcr);
    (type2,sign2,value2) = FPUnpack(op2, fpcr);
    (done,result) = FPProcessNaNs(type1, type2, op1, op2, fpcr);
    if !done then
        inf1  = (type1 == FPType_Infinity);  inf2  = (type2 == FPType_Infinity);
        zero1 = (type1 == FPType_Zero);      zero2 = (type2 == FPType_Zero);
        bits(N) product;
        if (inf1 && zero2) || (zero1 && inf2) then
            product = FPZero('0');
        else
            product = FPMul(op1, op2, fpcr);
        bits(N) three = FPThree('0');
        result = FPHalvedSub(three, product, fpcr);
    return result;

Library pseudocode for aarch32/functions/float/FPRecipStep

// FPRecipStep()
// =============

bits(N) FPRecipStep(bits(N) op1, bits(N) op2)
    assert N IN {16,32};
    FPCRType fpcr = StandardFPSCRValue();
    (type1,sign1,value1) = FPUnpack(op1, fpcr);
    (type2,sign2,value2) = FPUnpack(op2, fpcr);
    (done,result) = FPProcessNaNs(type1, type2, op1, op2, fpcr);
    if !done then
        inf1  = (type1 == FPType_Infinity);  inf2  = (type2 == FPType_Infinity);
        zero1 = (type1 == FPType_Zero);      zero2 = (type2 == FPType_Zero);
        bits(N) product;
        if (inf1 && zero2) || (zero1 && inf2) then
            product = FPZero('0');
        else
            product = FPMul(op1, op2, fpcr);
        bits(N) two = FPTwo('0');
        result = FPSub(two, product, fpcr);
    return result;

Library pseudocode for aarch32/functions/float/StandardFPSCRValue

// StandardFPSCRValue()
// ====================

FPCRType StandardFPSCRValue()
    return '00000' : FPSCR.AHP : '110000' : FPSCR.FZ16 : '0000000000000000000';

Library pseudocode for aarch32/functions/memory/AArch32.CheckAlignment

// AArch32.CheckAlignment()
// ========================

boolean AArch32.CheckAlignment(bits(32) address, integer alignment, AccType acctype,
                               boolean iswrite)

    if PSTATE.EL == EL0 && !ELUsingAArch32(S1TranslationRegime()) then
        A = SCTLR[].A;   // Use AArch64 register, when higher Exception level is using AArch64
    elsif PSTATE.EL == EL2 then
        A = HSCTLR.A;
    else
        A = SCTLR.A;
    aligned = (address == Align(address, alignment));
    atomic  = acctype IN { AccType_ATOMIC, AccType_ATOMICRW, AccType_ORDEREDATOMIC,
                           AccType_ORDEREDATOMICRW };
    ordered = acctype IN { AccType_ORDERED, AccType_ORDEREDRW, AccType_LIMITEDORDERED,
                           AccType_ORDEREDATOMIC, AccType_ORDEREDATOMICRW };
    vector  = acctype == AccType_VEC;  // AccType_VEC is used for SIMD element alignment checks only

    check = (atomic || ordered || vector || A == '1');

    if check && !aligned then
        secondstage = FALSE;
        AArch32.Abort(address, AArch32.AlignmentFault(acctype, iswrite, secondstage));

    return aligned;
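Because the access sizes passed to this check are powers of two, the pseudocode's aligned test reduces to a mask test in a conventional implementation. A minimal C sketch, assuming a power-of-two alignment argument (hypothetical helper names):

    #include <stdbool.h>
    #include <stdint.h>

    /* Align(address, alignment) clears the low-order bits for a
     * power-of-two alignment. */
    static inline uint32_t align_down(uint32_t address, uint32_t alignment)
    {
        return address & ~(alignment - 1u);
    }

    /* The pseudocode's "address == Align(address, alignment)" test. */
    static inline bool is_aligned(uint32_t address, uint32_t alignment)
    {
        return address == align_down(address, alignment);
    }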

Library pseudocode for aarch32/functions/memory/AArch32.MemSingle

// AArch32.MemSingle[] - non-assignment (read) form
// ================================================
// Perform an atomic, little-endian read of 'size' bytes.

bits(size*8) AArch32.MemSingle[bits(32) address, integer size, AccType acctype, boolean wasaligned]
    assert size IN {1, 2, 4, 8, 16};
    assert address == Align(address, size);

    AddressDescriptor memaddrdesc;
    bits(size*8) value;
    iswrite = FALSE;

    // MMU or MPU
    memaddrdesc = AArch32.TranslateAddress(address, acctype, iswrite, wasaligned, size);

    // Check for aborts or debug exceptions
    if IsFault(memaddrdesc) then
        AArch32.Abort(address, memaddrdesc.fault);

    // Memory array access
    accdesc = CreateAccessDescriptor(acctype);
    if HaveMTEExt() then
        if AccessIsTagChecked(ZeroExtend(address, 64), acctype) then
            bits(4) ptag = TransformTag(ZeroExtend(address, 64));
            if !CheckTag(memaddrdesc, ptag, iswrite) then
                TagCheckFail(ZeroExtend(address, 64), iswrite);
    value = _Mem[memaddrdesc, size, accdesc];

    return value;

// AArch32.MemSingle[] - assignment (write) form
// =============================================
// Perform an atomic, little-endian write of 'size' bytes.

AArch32.MemSingle[bits(32) address, integer size, AccType acctype, boolean wasaligned] = bits(size*8) value
    assert size IN {1, 2, 4, 8, 16};
    assert address == Align(address, size);

    AddressDescriptor memaddrdesc;
    iswrite = TRUE;

    // MMU or MPU
    memaddrdesc = AArch32.TranslateAddress(address, acctype, iswrite, wasaligned, size);

    // Check for aborts or debug exceptions
    if IsFault(memaddrdesc) then
        AArch32.Abort(address, memaddrdesc.fault);

    // Effect on exclusives
    if memaddrdesc.memattrs.shareable then
        ClearExclusiveByAddress(memaddrdesc.paddress, ProcessorID(), size);

    // Memory array access
    accdesc = CreateAccessDescriptor(acctype);
    if HaveMTEExt() then
        if AccessIsTagChecked(ZeroExtend(address, 64), acctype) then
            bits(4) ptag = TransformTag(ZeroExtend(address, 64));
            if !CheckTag(memaddrdesc, ptag, iswrite) then
                TagCheckFail(ZeroExtend(address, 64), iswrite);
    _Mem[memaddrdesc, size, accdesc] = value;

    return;

Library pseudocode for aarch32/functions/memory/AddressWithAllocationTag

// AddressWithAllocationTag()
// ==========================
// Generate a 64-bit value containing a Logical Address Tag from a 64-bit
// virtual address and an Allocation Tag.
// If the extension is disabled, treats the Allocation Tag as '0000'.

bits(64) AddressWithAllocationTag(bits(64) address, bits(4) allocation_tag)
    bits(64) result = address;
    bits(4) tag = allocation_tag - ('000':address<55>);
    result<59:56> = tag;
    return result;

Library pseudocode for aarch32/functions/memory/AllocationTagFromAddress

// AllocationTagFromAddress()
// ==========================
// Generate a Tag from a 64-bit value containing a Logical Address Tag.
// If access to Allocation Tags is disabled, this function returns '0000'.

bits(4) AllocationTagFromAddress(bits(64) tagged_address)
    bits(4) logical_tag = tagged_address<59:56>;
    bits(4) tag = logical_tag + ('000':tagged_address<55>);
    return tag;
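AddressWithAllocationTag() and AllocationTagFromAddress() are inverses: the subtraction of bit[55] on insertion is undone by the addition on extraction. A C sketch of the 4-bit modular arithmetic (hypothetical helper names), with a round-trip check:

    #include <assert.h>
    #include <stdint.h>

    /* Insert an Allocation Tag into bits[59:56] of a virtual address,
     * subtracting the bit[55] delta as AddressWithAllocationTag() does. */
    static uint64_t with_allocation_tag(uint64_t address, unsigned alloc_tag)
    {
        unsigned delta = (address >> 55) & 1u;
        unsigned tag   = (alloc_tag - delta) & 0xfu;        /* 4-bit wrap */
        return (address & ~(0xfull << 56)) | ((uint64_t)tag << 56);
    }

    /* Recover the Allocation Tag, adding the bit[55] delta back. */
    static unsigned tag_from_address(uint64_t tagged_address)
    {
        unsigned logical_tag = (tagged_address >> 56) & 0xfu;
        unsigned delta       = (tagged_address >> 55) & 1u;
        return (logical_tag + delta) & 0xfu;                /* 4-bit wrap */
    }

    int main(void)
    {
        uint64_t va = 0xffff000012345678ull;                /* bit[55] set */
        assert(tag_from_address(with_allocation_tag(va, 0x9)) == 0x9);
        return 0;
    }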

Library pseudocode for aarch32/functions/memory/CheckTag

// CheckTag()
// ==========
// Performs a Tag Check operation for a memory access and returns
// whether the check passed

boolean CheckTag(AddressDescriptor memaddrdesc, bits(4) ptag, boolean write)
    if memaddrdesc.memattrs.tagged then
        bits(64) paddress = ZeroExtend(memaddrdesc.paddress.address);
        return ptag == MemTag[paddress];
    else
        return TRUE;

Library pseudocode for aarch32/functions/memory/Hint_PreloadData

Hint_PreloadData(bits(32) address);

Library pseudocode for aarch32/functions/memory/Hint_PreloadDataForWrite

Hint_PreloadDataForWrite(bits(32) address);

Library pseudocode for aarch32/functions/memory/Hint_PreloadInstr

Hint_PreloadInstr(bits(32) address);

Library pseudocode for aarch32/functions/memory/MemA

// MemA[] - non-assignment form
// ============================

bits(8*size) MemA[bits(32) address, integer size]
    acctype = AccType_ATOMIC;
    return Mem_with_type[address, size, acctype];

// MemA[] - assignment form
// ========================

MemA[bits(32) address, integer size] = bits(8*size) value
    acctype = AccType_ATOMIC;
    Mem_with_type[address, size, acctype] = value;
    return;

Library pseudocode for aarch32/functions/memory/MemO

// MemO[] - non-assignment form
// ============================

bits(8*size) MemO[bits(32) address, integer size]
    acctype = AccType_ORDERED;
    return Mem_with_type[address, size, acctype];

// MemO[] - assignment form
// ========================

MemO[bits(32) address, integer size] = bits(8*size) value
    acctype = AccType_ORDERED;
    Mem_with_type[address, size, acctype] = value;
    return;

Library pseudocode for aarch32/functions/memory/MemTag

// MemTag[] - non-assignment (read) form
// =====================================
// Load an Allocation Tag from memory.

bits(4) MemTag[bits(64) address]
    AddressDescriptor memaddrdesc;
    bits(4) value;
    iswrite = FALSE;

    memaddrdesc = AArch64.TranslateAddress(address, AccType_NORMAL, iswrite, TRUE, TAG_GRANULE);

    // Check for aborts or debug exceptions
    if IsFault(memaddrdesc) then
        AArch64.Abort(address, memaddrdesc.fault);

    // Return the granule tag if tagging is enabled...
    if AllocationTagAccessIsEnabled() && memaddrdesc.memattrs.tagged then
        return _MemTag[memaddrdesc];
    else
        // ...otherwise read tag as zero.
        return '0000';

// MemTag[] - assignment (write) form
// ==================================
// Store an Allocation Tag to memory.

MemTag[bits(64) address] = bits(4) value
    AddressDescriptor memaddrdesc;
    iswrite = TRUE;

    // Stores of allocation tags must be aligned
    if address != Align(address, TAG_GRANULE) then
        boolean secondstage = FALSE;
        AArch64.Abort(address, AArch64.AlignmentFault(AccType_NORMAL, iswrite, secondstage));

    wasaligned = TRUE;
    memaddrdesc = AArch64.TranslateAddress(address, AccType_NORMAL, iswrite, wasaligned, TAG_GRANULE);

    // Check for aborts or debug exceptions
    if IsFault(memaddrdesc) then
        AArch64.Abort(address, memaddrdesc.fault);

    // Memory array access
    if AllocationTagAccessIsEnabled() && memaddrdesc.memattrs.tagged then
        _MemTag[memaddrdesc] = value;

Library pseudocode for aarch32/functions/memory/MemU

// MemU[] - non-assignment form
// ============================

bits(8*size) MemU[bits(32) address, integer size]
    acctype = AccType_NORMAL;
    return Mem_with_type[address, size, acctype];

// MemU[] - assignment form
// ========================

MemU[bits(32) address, integer size] = bits(8*size) value
    acctype = AccType_NORMAL;
    Mem_with_type[address, size, acctype] = value;
    return;

Library pseudocode for aarch32/functions/memory/MemU_unpriv

// MemU_unpriv[] - non-assignment form
// ===================================

bits(8*size) MemU_unpriv[bits(32) address, integer size]
    acctype = AccType_UNPRIV;
    return Mem_with_type[address, size, acctype];

// MemU_unpriv[] - assignment form
// ===============================

MemU_unpriv[bits(32) address, integer size] = bits(8*size) value
    acctype = AccType_UNPRIV;
    Mem_with_type[address, size, acctype] = value;
    return;

Library pseudocode for aarch32/functions/memory/Mem_with_type

// Mem_with_type[] - non-assignment (read) form
// ============================================
// Perform a read of 'size' bytes. The access byte order is reversed for a big-endian access.
// Instruction fetches would call AArch32.MemSingle directly.

bits(size*8) Mem_with_type[bits(32) address, integer size, AccType acctype]
    assert size IN {1, 2, 4, 8, 16};
    bits(size*8) value;
    boolean iswrite = FALSE;

    aligned = AArch32.CheckAlignment(address, size, acctype, iswrite);
    if !aligned then
        assert size > 1;
        value<7:0> = AArch32.MemSingle[address, 1, acctype, aligned];

        // For subsequent bytes it is CONSTRAINED UNPREDICTABLE whether an unaligned Device memory
        // access will generate an Alignment Fault, as to get this far means the first byte did
        // not, so we must be changing to a new translation page.
        c = ConstrainUnpredictable(Unpredictable_DEVPAGE2);
        assert c IN {Constraint_FAULT, Constraint_NONE};
        if c == Constraint_NONE then aligned = TRUE;

        for i = 1 to size-1
            value<8*i+7:8*i> = AArch32.MemSingle[address+i, 1, acctype, aligned];
    else
        value = AArch32.MemSingle[address, size, acctype, aligned];

    if BigEndian() then
        value = BigEndianReverse(value);
    return value;

// Mem_with_type[] - assignment (write) form
// =========================================
// Perform a write of 'size' bytes. The byte order is reversed for a big-endian access.

Mem_with_type[bits(32) address, integer size, AccType acctype] = bits(size*8) value
    boolean iswrite = TRUE;

    if BigEndian() then
        value = BigEndianReverse(value);

    aligned = AArch32.CheckAlignment(address, size, acctype, iswrite);

    if !aligned then
        assert size > 1;
        AArch32.MemSingle[address, 1, acctype, aligned] = value<7:0>;

        // For subsequent bytes it is CONSTRAINED UNPREDICTABLE whether an unaligned Device memory
        // access will generate an Alignment Fault, as to get this far means the first byte did
        // not, so we must be changing to a new translation page.
        c = ConstrainUnpredictable(Unpredictable_DEVPAGE2);
        assert c IN {Constraint_FAULT, Constraint_NONE};
        if c == Constraint_NONE then aligned = TRUE;

        for i = 1 to size-1
            AArch32.MemSingle[address+i, 1, acctype, aligned] = value<8*i+7:8*i>;
    else
        AArch32.MemSingle[address, size, acctype, aligned] = value;
    return;
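The only byte-order sensitivity in Mem_with_type[] is the BigEndianReverse() step; AArch32.MemSingle[] itself is always little-endian. A C sketch of that reversal over a byte buffer (hypothetical helper):

    #include <stdint.h>

    /* Reverse the bytes of a 'size'-byte value in place, as
     * BigEndianReverse() does for a big-endian data access. */
    static void byte_reverse(uint8_t *value, int size)
    {
        for (int i = 0, j = size - 1; i < j; i++, j--) {
            uint8_t tmp = value[i];
            value[i]    = value[j];
            value[j]    = tmp;
        }
    }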

Library pseudocode for aarch32/functions/memory/TransformTag

// TransformTag()
// ==============
// Apply tag transformation rules.

bits(4) TransformTag(bits(64) vaddr)
    bits(4) vtag = vaddr<59:56>;
    bits(4) tagdelta = ZeroExtend(vaddr<55>);
    bits(4) ptag = vtag + tagdelta;
    return ptag;

Library pseudocode for aarch32/functions/memory/AccessIsTagChecked

// AccessIsTagChecked()
// ====================
// TRUE if a given access is tag-checked, FALSE otherwise.

boolean AccessIsTagChecked(bits(64) vaddr, AccType acctype)
    if PSTATE.M<4> == '1' then return FALSE;

    if EffectiveTBI(vaddr, FALSE, PSTATE.EL) == '0' then
        return FALSE;

    if EffectiveTCMA(vaddr, PSTATE.EL) == '1' && (vaddr<59:55> == '00000' || vaddr<59:55> == '11111') then
        return FALSE;

    if !AllocationTagAccessIsEnabled() then
        return FALSE;

    if acctype IN {AccType_IFETCH, AccType_PTW} then
        return FALSE;

    if acctype == AccType_NV2REGISTER then
        return FALSE;

    if PSTATE.TCO == '1' then
        return FALSE;

    if IsNonTagCheckedInstruction() then
        return FALSE;

    return TRUE;

Library pseudocode for aarch32/functions/ras/AArch32.ESBOperation

// AArch32.ESBOperation()
// ======================
// Perform the AArch32 ESB operation for ESB executed in AArch32 state

AArch32.ESBOperation()

    // Check if routed to AArch64 state
    route_to_aarch64 = PSTATE.EL == EL0 && !ELUsingAArch32(EL1);
    if !route_to_aarch64 && EL2Enabled() && !ELUsingAArch32(EL2) then
        route_to_aarch64 = HCR_EL2.TGE == '1' || HCR_EL2.AMO == '1';
    if !route_to_aarch64 && HaveEL(EL3) && !ELUsingAArch32(EL3) then
        route_to_aarch64 = SCR_EL3.EA == '1';

    if route_to_aarch64 then
        AArch64.ESBOperation();
        return;

    route_to_monitor = HaveEL(EL3) && ELUsingAArch32(EL3) && SCR.EA == '1';
    route_to_hyp = EL2Enabled() && PSTATE.EL IN {EL0,EL1} && (HCR.TGE == '1' || HCR.AMO == '1');

    if route_to_monitor then
        target = M32_Monitor;
    elsif route_to_hyp || PSTATE.M == M32_Hyp then
        target = M32_Hyp;
    else
        target = M32_Abort;

    if IsSecure() then
        mask_active = TRUE;
    elsif target == M32_Monitor then
        mask_active = SCR.AW == '1' && (!HaveEL(EL2) || (HCR.TGE == '0' && HCR.AMO == '0'));
    else
        mask_active = target == M32_Abort || PSTATE.M == M32_Hyp;

    mask_set = PSTATE.A == '1';
    (-, el)  = ELFromM32(target);
    intdis   = Halted() || ExternalDebugInterruptsDisabled(el);
    masked   = intdis || (mask_active && mask_set);

    // Check for a masked Physical SError pending
    if IsPhysicalSErrorPending() && masked then
        syndrome32 = AArch32.PhysicalSErrorSyndrome();
        DISR = AArch32.ReportDeferredSError(syndrome32.AET, syndrome32.ExT);
        ClearPendingPhysicalSError();

    return;

Library pseudocode for aarch32/functions/ras/AArch32.PhysicalSErrorSyndrome

// Return the SError syndrome
AArch32.SErrorSyndrome AArch32.PhysicalSErrorSyndrome();

Library pseudocode for aarch32/functions/ras/AArch32.ReportDeferredSError

// AArch32.ReportDeferredSError()
// ==============================
// Return deferred SError syndrome

bits(32) AArch32.ReportDeferredSError(bits(2) AET, bit ExT)
    bits(32) target;
    target<31> = '1';                      // A
    syndrome = Zeros(16);
    if PSTATE.EL == EL2 then
        syndrome<11:10> = AET;             // AET
        syndrome<9>     = ExT;             // EA
        syndrome<5:0>   = '010001';        // DFSC
    else
        syndrome<15:14> = AET;             // AET
        syndrome<12>    = ExT;             // ExT
        syndrome<9>     = TTBCR.EAE;       // LPAE
        if TTBCR.EAE == '1' then           // Long-descriptor format
            syndrome<5:0> = '010001';      // STATUS
        else                               // Short-descriptor format
            syndrome<10,3:0> = '10110';    // FS
    if HaveAnyAArch64() then
        target<24:0> = ZeroExtend(syndrome); // Any RES0 fields must be set to zero
    else
        target<15:0> = syndrome;
    return target;
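For the common non-EL2, long-descriptor case the deferred syndrome is a fixed bit-packing. A C sketch of that one path (hypothetical helper; the EL2 and short-descriptor encodings differ as shown above):

    #include <stdint.h>

    /* bit[31] = A, bits[15:14] = AET, bit[12] = ExT,
     * bit[9] = LPAE (TTBCR.EAE == '1'), bits[5:0] = STATUS = 0b010001. */
    static uint32_t deferred_serror_lpae(unsigned aet, unsigned ext)
    {
        uint32_t syndrome = 0;
        syndrome |= (aet & 3u) << 14;   /* AET */
        syndrome |= (ext & 1u) << 12;   /* ExT */
        syndrome |= 1u << 9;            /* LPAE */
        syndrome |= 0x11u;              /* STATUS = '010001' */
        return (1u << 31) | syndrome;   /* A */
    }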

Library pseudocode for aarch32/functions/ras/AArch32.SErrorSyndrome

type AArch32.SErrorSyndrome is (
    bits(2) AET,
    bit ExT
)

Library pseudocode for aarch32/functions/ras/AArch32.vESBOperation

// AArch32.vESBOperation()
// =======================
// Perform the ESB operation for virtual SError interrupts executed in AArch32 state

AArch32.vESBOperation()
    assert EL2Enabled() && PSTATE.EL IN {EL0,EL1};

    // Check for EL2 using AArch64 state
    if !ELUsingAArch32(EL2) then
        AArch64.vESBOperation();
        return;

    // If physical SError interrupts are routed to Hyp mode, and TGE is not set, then a
    // virtual SError interrupt might be pending
    vSEI_enabled = HCR.TGE == '0' && HCR.AMO == '1';
    vSEI_pending = vSEI_enabled && HCR.VA == '1';
    vintdis      = Halted() || ExternalDebugInterruptsDisabled(EL1);
    vmasked      = vintdis || PSTATE.A == '1';

    // Check for a masked virtual SError pending
    if vSEI_pending && vmasked then
        VDISR = AArch32.ReportDeferredSError(VDFSR<15:14>, VDFSR<12>);
        HCR.VA = '0';                  // Clear pending virtual SError

    return;

Library pseudocode for aarch32/functions/registers/AArch32.ResetGeneralRegisters

// AArch32.ResetGeneralRegisters()
// ===============================

AArch32.ResetGeneralRegisters()

    for i = 0 to 7
        R[i] = bits(32) UNKNOWN;
    for i = 8 to 12
        Rmode[i, M32_User] = bits(32) UNKNOWN;
        Rmode[i, M32_FIQ] = bits(32) UNKNOWN;
    if HaveEL(EL2) then Rmode[13, M32_Hyp] = bits(32) UNKNOWN;    // No R14_hyp
    for i = 13 to 14
        Rmode[i, M32_User] = bits(32) UNKNOWN;
        Rmode[i, M32_FIQ] = bits(32) UNKNOWN;
        Rmode[i, M32_IRQ] = bits(32) UNKNOWN;
        Rmode[i, M32_Svc] = bits(32) UNKNOWN;
        Rmode[i, M32_Abort] = bits(32) UNKNOWN;
        Rmode[i, M32_Undef] = bits(32) UNKNOWN;
        if HaveEL(EL3) then Rmode[i, M32_Monitor] = bits(32) UNKNOWN;

    return;

Library pseudocode for aarch32/functions/registers/AArch32.ResetSIMDFPRegisters

// AArch32.ResetSIMDFPRegisters()
// ==============================

AArch32.ResetSIMDFPRegisters()

    for i = 0 to 15
        Q[i] = bits(128) UNKNOWN;

    return;

Library pseudocode for aarch32/functions/registers/AArch32.ResetSpecialRegisters

// AArch32.ResetSpecialRegisters()
// ===============================

AArch32.ResetSpecialRegisters()

    // AArch32 special registers
    SPSR_fiq = bits(32) UNKNOWN;
    SPSR_irq = bits(32) UNKNOWN;
    SPSR_svc = bits(32) UNKNOWN;
    SPSR_abt = bits(32) UNKNOWN;
    SPSR_und = bits(32) UNKNOWN;
    if HaveEL(EL2) then
        SPSR_hyp = bits(32) UNKNOWN;
        ELR_hyp = bits(32) UNKNOWN;
    if HaveEL(EL3) then
        SPSR_mon = bits(32) UNKNOWN;

    // External debug special registers
    DLR = bits(32) UNKNOWN;
    DSPSR = bits(32) UNKNOWN;

    return;

Library pseudocode for aarch32/functions/registers/AArch32.ResetSystemRegisters

AArch32.ResetSystemRegisters(boolean cold_reset);

Library pseudocode for aarch32/functions/registers/ALUExceptionReturn

// ALUExceptionReturn()
// ====================

ALUExceptionReturn(bits(32) address)
    if PSTATE.EL == EL2 then
        UNDEFINED;
    elsif PSTATE.M IN {M32_User,M32_System} then
        UNPREDICTABLE;                 // UNDEFINED or NOP
    else
        AArch32.ExceptionReturn(address, SPSR[]);

Library pseudocode for aarch32/functions/registers/ALUWritePC

// ALUWritePC()
// ============

ALUWritePC(bits(32) address)
    if CurrentInstrSet() == InstrSet_A32 then
        BXWritePC(address, BranchType_INDIR);
    else
        BranchWritePC(address, BranchType_INDIR);

Library pseudocode for aarch32/functions/registers/BXWritePC

// BXWritePC()
// ===========

BXWritePC(bits(32) address, BranchType branch_type)
    if address<0> == '1' then
        SelectInstrSet(InstrSet_T32);
        address<0> = '0';
    else
        SelectInstrSet(InstrSet_A32);
        // For branches to an unaligned PC counter in A32 state, the processor takes the branch
        // and does one of:
        // * Forces the address to be aligned
        // * Leaves the PC unaligned, meaning the target generates a PC Alignment fault.
        if address<1> == '1' && ConstrainUnpredictableBool(Unpredictable_A32FORCEALIGNPC) then
            address<1> = '0';
    BranchTo(address, branch_type);
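The interworking rule in BXWritePC() is that bit[0] of the target selects the instruction set and never reaches the PC. A C sketch of just that decode (hypothetical helper; the CONSTRAINED UNPREDICTABLE bit[1] handling for A32 is omitted):

    #include <stdbool.h>
    #include <stdint.h>

    /* Return the branch target with the interworking bit stripped, and
     * report whether the branch selects T32 state. */
    static uint32_t bx_target(uint32_t address, bool *to_t32)
    {
        *to_t32 = (address & 1u) != 0;
        return address & ~(uint32_t)1;
    }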

Library pseudocode for aarch32/functions/registers/BranchWritePC

// BranchWritePC()
// ===============

BranchWritePC(bits(32) address, BranchType branch_type)
    if CurrentInstrSet() == InstrSet_A32 then
        address<1:0> = '00';
    else
        address<0> = '0';
    BranchTo(address, branch_type);

Library pseudocode for aarch32/functions/registers/D

// D[] - non-assignment form
// =========================

bits(64) D[integer n]
    assert n >= 0 && n <= 31;
    base = (n MOD 2) * 64;
    return _V[n DIV 2]<base+63:base>;

// D[] - assignment form
// =====================

D[integer n] = bits(64) value
    assert n >= 0 && n <= 31;
    base = (n MOD 2) * 64;
    _V[n DIV 2]<base+63:base> = value;
    return;
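D[n] is simply a 64-bit view of one half of the 128-bit vector register _V[n DIV 2], just as S[n] later views 32 bits of _V[n DIV 4]. A C sketch of the aliasing arithmetic (hypothetical byte-array register model; little-endian host assumed):

    #include <stdint.h>
    #include <string.h>

    static uint8_t vreg[16][16];    /* sixteen 128-bit vector registers */

    /* Read D[n]: 64 bits at bit offset (n MOD 2) * 64 of _V[n DIV 2]. */
    static uint64_t read_d(int n)
    {
        uint64_t d;
        memcpy(&d, &vreg[n / 2][(n % 2) * 8], sizeof d);
        return d;
    }

    /* Write D[n] without disturbing the other half of the Q register. */
    static void write_d(int n, uint64_t value)
    {
        memcpy(&vreg[n / 2][(n % 2) * 8], &value, sizeof value);
    }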

Library pseudocode for aarch32/functions/registers/Din

// Din[] - non-assignment form
// ===========================

bits(64) Din[integer n]
    assert n >= 0 && n <= 31;
    return _Dclone[n];

Library pseudocode for aarch32/functions/registers/LR

// LR - assignment form
// ====================

LR = bits(32) value
    R[14] = value;
    return;

// LR - non-assignment form
// ========================

bits(32) LR
    return R[14];

Library pseudocode for aarch32/functions/registers/LoadWritePC

// LoadWritePC()
// =============

LoadWritePC(bits(32) address)
    BXWritePC(address, BranchType_INDIR);

Library pseudocode for aarch32/functions/registers/LookUpRIndex

// LookUpRIndex()
// ==============

integer LookUpRIndex(integer n, bits(5) mode)
    assert n >= 0 && n <= 14;

    case n of  // Select index by mode:     usr fiq irq svc abt und hyp
        when 8     result = RBankSelect(mode,  8, 24,  8,  8,  8,  8,  8);
        when 9     result = RBankSelect(mode,  9, 25,  9,  9,  9,  9,  9);
        when 10    result = RBankSelect(mode, 10, 26, 10, 10, 10, 10, 10);
        when 11    result = RBankSelect(mode, 11, 27, 11, 11, 11, 11, 11);
        when 12    result = RBankSelect(mode, 12, 28, 12, 12, 12, 12, 12);
        when 13    result = RBankSelect(mode, 13, 29, 17, 19, 21, 23, 15);
        when 14    result = RBankSelect(mode, 14, 30, 16, 18, 20, 22, 14);
        otherwise  result = n;

    return result;
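The whole banking scheme reduces to the constant table inside LookUpRIndex(). Rewriting it as a C array makes the layout easier to scan (hypothetical encoding; columns follow the pseudocode's usr/fiq/irq/svc/abt/und/hyp order):

    /* _R indices for R8..R14 in each mode; FIQ banks R8-R12 as well as
     * R13/R14, the other modes bank only R13 (SP) and R14 (LR). */
    static const int bank_index[7][7] = {
        /* usr fiq irq svc abt und hyp */
        {  8, 24,  8,  8,  8,  8,  8 },   /* R8  */
        {  9, 25,  9,  9,  9,  9,  9 },   /* R9  */
        { 10, 26, 10, 10, 10, 10, 10 },   /* R10 */
        { 11, 27, 11, 11, 11, 11, 11 },   /* R11 */
        { 12, 28, 12, 12, 12, 12, 12 },   /* R12 */
        { 13, 29, 17, 19, 21, 23, 15 },   /* R13 (SP) */
        { 14, 30, 16, 18, 20, 22, 14 },   /* R14 (LR) */
    };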

Library pseudocode for aarch32/functions/registers/Monitor_mode_registers

bits(32) SP_mon;
bits(32) LR_mon;

Library pseudocode for aarch32/functions/registers/PC

// PC - non-assignment form
// ========================

bits(32) PC
    return R[15];                  // This includes the offset from AArch32 state

Library pseudocode for aarch32/functions/registers/PCStoreValue

// PCStoreValue()
// ==============

bits(32) PCStoreValue()
    // This function returns the PC value. On architecture versions before Armv7, it
    // is permitted to instead return PC+4, provided it does so consistently. It is
    // used only to describe A32 instructions, so it returns the address of the current
    // instruction plus 8 (normally) or 12 (when the alternative is permitted).
    return PC;

Library pseudocode for aarch32/functions/registers/Q

// Q[] - non-assignment form
// =========================

bits(128) Q[integer n]
    assert n >= 0 && n <= 15;
    return _V[n];

// Q[] - assignment form
// =====================

Q[integer n] = bits(128) value
    assert n >= 0 && n <= 15;
    _V[n] = value;
    return;

Library pseudocode for aarch32/functions/registers/Qin

// Qin[] - non-assignment form
// ===========================

bits(128) Qin[integer n]
    assert n >= 0 && n <= 15;
    return Din[2*n+1]:Din[2*n];

Library pseudocode for aarch32/functions/registers/R

// R[] - assignment form
// =====================

R[integer n] = bits(32) value
    Rmode[n, PSTATE.M] = value;
    return;

// R[] - non-assignment form
// =========================

bits(32) R[integer n]
    if n == 15 then
        offset = (if CurrentInstrSet() == InstrSet_A32 then 8 else 4);
        return _PC<31:0> + offset;
    else
        return Rmode[n, PSTATE.M];
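The R[15] read path is the source of the familiar A32 "PC reads as current instruction + 8" behaviour. A one-line C sketch of the offset rule (hypothetical helper):

    #include <stdbool.h>
    #include <stdint.h>

    /* Reading R[15] yields the current instruction address plus 8 in A32
     * state, or plus 4 in T32 state. */
    static uint32_t read_r15(uint32_t instr_addr, bool is_a32)
    {
        return instr_addr + (is_a32 ? 8u : 4u);
    }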

Library pseudocode for aarch32/functions/registers/RBankSelect

// RBankSelect()
// =============

integer RBankSelect(bits(5) mode, integer usr, integer fiq, integer irq,
                    integer svc, integer abt, integer und, integer hyp)

    case mode of
        when M32_User    result = usr;   // User mode
        when M32_FIQ     result = fiq;   // FIQ mode
        when M32_IRQ     result = irq;   // IRQ mode
        when M32_Svc     result = svc;   // Supervisor mode
        when M32_Abort   result = abt;   // Abort mode
        when M32_Hyp     result = hyp;   // Hyp mode
        when M32_Undef   result = und;   // Undefined mode
        when M32_System  result = usr;   // System mode uses User mode registers
        otherwise        Unreachable();  // Monitor mode

    return result;

Library pseudocode for aarch32/functions/registers/Rmode

// Rmode[] - non-assignment form
// =============================

bits(32) Rmode[integer n, bits(5) mode]
    assert n >= 0 && n <= 14;

    // Check for attempted use of Monitor mode in Non-secure state.
    if !IsSecure() then assert mode != M32_Monitor;
    assert !BadMode(mode);

    if mode == M32_Monitor then
        if n == 13 then return SP_mon;
        elsif n == 14 then return LR_mon;
        else return _R[n]<31:0>;
    else
        return _R[LookUpRIndex(n, mode)]<31:0>;

// Rmode[] - assignment form
// =========================

Rmode[integer n, bits(5) mode] = bits(32) value
    assert n >= 0 && n <= 14;

    // Check for attempted use of Monitor mode in Non-secure state.
    if !IsSecure() then assert mode != M32_Monitor;
    assert !BadMode(mode);

    if mode == M32_Monitor then
        if n == 13 then SP_mon = value;
        elsif n == 14 then LR_mon = value;
        else _R[n]<31:0> = value;
    else
        // It is CONSTRAINED UNPREDICTABLE whether the upper 32 bits of the X
        // register are unchanged or set to zero. This is also tested for on
        // exception entry, as this applies to all AArch32 registers.
        if !HighestELUsingAArch32() && ConstrainUnpredictableBool(Unpredictable_ZEROUPPER) then
            _R[LookUpRIndex(n, mode)] = ZeroExtend(value);
        else
            _R[LookUpRIndex(n, mode)]<31:0> = value;

    return;

Library pseudocode for aarch32/functions/registers/S

// S[] - non-assignment form
// =========================

bits(32) S[integer n]
    assert n >= 0 && n <= 31;
    base = (n MOD 4) * 32;
    return _V[n DIV 4]<base+31:base>;

// S[] - assignment form
// =====================

S[integer n] = bits(32) value
    assert n >= 0 && n <= 31;
    base = (n MOD 4) * 32;
    _V[n DIV 4]<base+31:base> = value;
    return;

Library pseudocode for aarch32/functions/registers/SP

// SP - assignment form
// ====================

SP = bits(32) value
    R[13] = value;
    return;

// SP - non-assignment form
// ========================

bits(32) SP
    return R[13];

Library pseudocode for aarch32/functions/registers/_Dclone

array bits(64) _Dclone[0..31];

Library pseudocode for aarch32/functions/system/AArch32.ExceptionReturn

// AArch32.ExceptionReturn()
// =========================

AArch32.ExceptionReturn(bits(32) new_pc, bits(32) spsr)

    SynchronizeContext();

    // Attempts to change to an illegal mode or state will invoke the Illegal Execution state
    // mechanism
    SetPSTATEFromPSR(spsr);

    ClearExclusiveLocal(ProcessorID());
    SendEventLocal();

    if PSTATE.IL == '1' then
        // If the exception return is illegal, PC[1:0] are UNKNOWN
        new_pc<1:0> = bits(2) UNKNOWN;
    else
        // LR[1:0] or LR[0] are treated as being 0, depending on the target instruction set state
        if PSTATE.T == '1' then
            new_pc<0> = '0';               // T32
        else
            new_pc<1:0> = '00';            // A32

    BranchTo(new_pc, BranchType_ERET);

Library pseudocode for aarch32/functions/system/AArch32.ExecutingATS1xPInstr

// AArch32.ExecutingATS1xPInstr()
// ==============================
// Return TRUE if current instruction is AT S1CPR/WP

boolean AArch32.ExecutingATS1xPInstr()
    if !HavePrivATExt() then return FALSE;

    instr = ThisInstr();
    if instr<24+:4> == '1110' && instr<8+:4> == '1110' then
        op1 = instr<21+:3>;
        CRn = instr<16+:4>;
        CRm = instr<0+:4>;
        op2 = instr<5+:3>;
        return (op1 == '000' && CRn == '0111' && CRm == '1001' && op2 IN {'000','001'});
    else
        return FALSE;

Library pseudocode for aarch32/functions/system/AArch32.ExecutingCP10or11Instr

// AArch32.ExecutingCP10or11Instr()
// ================================

boolean AArch32.ExecutingCP10or11Instr()
    instr = ThisInstr();
    instr_set = CurrentInstrSet();
    assert instr_set IN {InstrSet_A32, InstrSet_T32};

    if instr_set == InstrSet_A32 then
        return ((instr<27:24> == '1110' || instr<27:25> == '110') && instr<11:8> == '101x');
    else // InstrSet_T32
        return (instr<31:28> == '111x' && (instr<27:24> == '1110' || instr<27:25> == '110') && instr<11:8> == '101x');
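In the quoted patterns a trailing 'x' is a don't-care digit, so instr<11:8> == '101x' constrains only bits 11:9 (coprocessor 10 or 11). A mask-and-compare sketch in C for the A32 case only (the helper name is illustrative):

    #include <stdbool.h>
    #include <stdint.h>

    /* A32 check: coprocessor-class encodings (bits 27:24 == '1110' or
     * bits 27:25 == '110') targeting coprocessor '101x', i.e. CP10 or CP11. */
    static bool a32_uses_cp10_or_cp11(uint32_t instr) {
        bool cp_class = ((instr >> 24) & 0xFu) == 0xEu     /* '1110' */
                     || ((instr >> 25) & 0x7u) == 0x6u;    /* '110'  */
        bool cp10_11  = ((instr >> 9) & 0x7u) == 0x5u;     /* '101x': bit 8 ignored */
        return cp_class && cp10_11;
    }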

Library pseudocode for aarch32/functions/system/AArch32.ExecutingLSMInstr

// AArch32.ExecutingLSMInstr()
// ===========================
// Returns TRUE if processor is executing a Load/Store Multiple instruction

boolean AArch32.ExecutingLSMInstr()
    instr = ThisInstr();
    instr_set = CurrentInstrSet();
    assert instr_set IN {InstrSet_A32, InstrSet_T32};

    if instr_set == InstrSet_A32 then
        return (instr<28+:4> != '1111' && instr<25+:3> == '100');
    else // InstrSet_T32
        if ThisInstrLength() == 16 then
            return (instr<12+:4> == '1100');
        else
            return (instr<25+:7> == '1110100' && instr<22> == '0');

Library pseudocode for aarch32/functions/system/AArch32.ITAdvance

// AArch32.ITAdvance()
// ===================

AArch32.ITAdvance()
    if PSTATE.IT<2:0> == '000' then
        PSTATE.IT = '00000000';
    else
        PSTATE.IT<4:0> = LSL(PSTATE.IT<4:0>, 1);
    return;
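AArch32.ITAdvance() retires one instruction from an IT block: when IT<2:0> is zero the block has finished and ITSTATE is cleared, otherwise the low five bits shift left by one. A minimal C sketch of the same update, assuming the 8-bit ITSTATE layout with the base condition in IT<7:5>:

    #include <stdint.h>

    /* Advance ITSTATE by one instruction (sketch). */
    uint8_t it_advance(uint8_t it) {
        if ((it & 0x07u) == 0)                      /* IT<2:0> == '000' */
            return 0x00;                            /* block finished   */
        return (uint8_t)((it & 0xE0u)               /* keep IT<7:5>     */
                       | ((it << 1) & 0x1Fu));      /* LSL(IT<4:0>, 1)  */
    }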

Library pseudocode for aarch32/functions/system/AArch32.SysRegRead

// Read from a 32-bit AArch32 System register and return the register's contents.
bits(32) AArch32.SysRegRead(integer cp_num, bits(32) instr);

Library pseudocode for aarch32/functions/system/AArch32.SysRegRead64

// Read from a 64-bit AArch32 System register and return the register's contents.
bits(64) AArch32.SysRegRead64(integer cp_num, bits(32) instr);

Library pseudocode for aarch32/functions/system/AArch32.SysRegReadCanWriteAPSR

// AArch32.SysRegReadCanWriteAPSR()
// ================================
// Determines whether the AArch32 System register read instruction can write to APSR flags.

boolean AArch32.SysRegReadCanWriteAPSR(integer cp_num, bits(32) instr)
    assert UsingAArch32();
    assert (cp_num IN {14,15});
    assert cp_num == UInt(instr<11:8>);

    opc1 = UInt(instr<23:21>);
    opc2 = UInt(instr<7:5>);
    CRn  = UInt(instr<19:16>);
    CRm  = UInt(instr<3:0>);

    if cp_num == 14 && opc1 == 0 && CRn == 0 && CRm == 1 && opc2 == 0 then    // DBGDSCRint
        return TRUE;

    return FALSE;

Library pseudocode for aarch32/functions/system/AArch32.SysRegWrite

// Write to a 32-bit AArch32 System register.
AArch32.SysRegWrite(integer cp_num, bits(32) instr, bits(32) val);

Library pseudocode for aarch32/functions/system/AArch32.SysRegWrite64

// Write to a 64-bit AArch32 System register.
AArch32.SysRegWrite64(integer cp_num, bits(32) instr, bits(64) val);

Library pseudocode for aarch32/functions/system/AArch32.WriteMode

// AArch32.WriteMode()
// ===================
// Function for dealing with writes to PSTATE.M from AArch32 state only.
// This ensures that PSTATE.EL and PSTATE.SP are always valid.

AArch32.WriteMode(bits(5) mode)
    (valid,el) = ELFromM32(mode);
    assert valid;
    PSTATE.M   = mode;
    PSTATE.EL  = el;
    PSTATE.nRW = '1';
    PSTATE.SP  = (if mode IN {M32_User,M32_System} then '0' else '1');
    return;

Library pseudocode for aarch32/functions/system/AArch32.WriteModeByInstr

// AArch32.WriteModeByInstr()
// ==========================
// Function for dealing with writes to PSTATE.M from an AArch32 instruction, and ensuring that
// illegal state changes are correctly flagged in PSTATE.IL.

AArch32.WriteModeByInstr(bits(5) mode)
    (valid,el) = ELFromM32(mode);

    // 'valid' is set to FALSE if 'mode' is invalid for this implementation or the current value
    // of SCR.NS/SCR_EL3.NS. Additionally, it is illegal for an instruction to write 'mode' to
    // PSTATE.EL if it would result in any of:
    // * A change to a mode that would cause entry to a higher Exception level.
    if UInt(el) > UInt(PSTATE.EL) then
        valid = FALSE;

    // * A change to or from Hyp mode.
    if (PSTATE.M == M32_Hyp || mode == M32_Hyp) && PSTATE.M != mode then
        valid = FALSE;

    // * When EL2 is implemented, the value of HCR.TGE is '1', a change to a Non-secure EL1 mode.
    if PSTATE.M == M32_Monitor && HaveEL(EL2) && el == EL1 && SCR.NS == '1' && HCR.TGE == '1' then
        valid = FALSE;

    if !valid then
        PSTATE.IL = '1';
    else
        AArch32.WriteMode(mode);

Library pseudocode for aarch32/functions/system/BadMode

// BadMode()
// =========

boolean BadMode(bits(5) mode)
    // Return TRUE if 'mode' encodes a mode that is not valid for this implementation
    case mode of
        when M32_Monitor
            valid = HaveAArch32EL(EL3);
        when M32_Hyp
            valid = HaveAArch32EL(EL2);
        when M32_FIQ, M32_IRQ, M32_Svc, M32_Abort, M32_Undef, M32_System
            // If EL3 is implemented and using AArch32, then these modes are EL3 modes in Secure
            // state, and EL1 modes in Non-secure state. If EL3 is not implemented or is using
            // AArch64, then these modes are EL1 modes.
            // Therefore it is sufficient to test this implementation supports EL1 using AArch32.
            valid = HaveAArch32EL(EL1);
        when M32_User
            valid = HaveAArch32EL(EL0);
        otherwise
            valid = FALSE;           // Passed an illegal mode value
    return !valid;

Library pseudocode for aarch32/functions/system/BankedRegisterAccessValid

// BankedRegisterAccessValid()
// ===========================
// Checks for MRS (Banked register) or MSR (Banked register) accesses to registers
// other than the SPSRs that are invalid. This includes ELR_hyp accesses.

BankedRegisterAccessValid(bits(5) SYSm, bits(5) mode)

    case SYSm of
        when '000xx', '00100'          // R8_usr to R12_usr
            if mode != M32_FIQ then UNPREDICTABLE;
        when '00101'                   // SP_usr
            if mode == M32_System then UNPREDICTABLE;
        when '00110'                   // LR_usr
            if mode IN {M32_Hyp,M32_System} then UNPREDICTABLE;
        when '010xx', '0110x', '01110' // R8_fiq to R12_fiq, SP_fiq, LR_fiq
            if mode == M32_FIQ then UNPREDICTABLE;
        when '1000x'                   // LR_irq, SP_irq
            if mode == M32_IRQ then UNPREDICTABLE;
        when '1001x'                   // LR_svc, SP_svc
            if mode == M32_Svc then UNPREDICTABLE;
        when '1010x'                   // LR_abt, SP_abt
            if mode == M32_Abort then UNPREDICTABLE;
        when '1011x'                   // LR_und, SP_und
            if mode == M32_Undef then UNPREDICTABLE;
        when '1110x'                   // LR_mon, SP_mon
            if !HaveEL(EL3) || !IsSecure() || mode == M32_Monitor then UNPREDICTABLE;
        when '11110'                   // ELR_hyp, only from Monitor or Hyp mode
            if !HaveEL(EL2) || !(mode IN {M32_Monitor,M32_Hyp}) then UNPREDICTABLE;
        when '11111'                   // SP_hyp, only from Monitor mode
            if !HaveEL(EL2) || mode != M32_Monitor then UNPREDICTABLE;
        otherwise
            UNPREDICTABLE;

    return;

Library pseudocode for aarch32/functions/system/CPSRWriteByInstr

// CPSRWriteByInstr()
// ==================
// Update PSTATE.<N,Z,C,V,Q,GE,E,A,I,F,M> from a CPSR value written by an MSR instruction.

CPSRWriteByInstr(bits(32) value, bits(4) bytemask)
    privileged = PSTATE.EL != EL0;              // PSTATE.<A,I,F,M> are not writable at EL0

    // Write PSTATE from 'value', ignoring bytes masked by 'bytemask'
    if bytemask<3> == '1' then
        PSTATE.<N,Z,C,V,Q> = value<31:27>;
        // Bits <26:24> are ignored

    if bytemask<2> == '1' then
        // Bit <23> is RES0
        if privileged then
            PSTATE.PAN = value<22>;
        // Bits <21:20> are RES0
        PSTATE.GE = value<19:16>;

    if bytemask<1> == '1' then
        // Bits <15:10> are RES0
        PSTATE.E = value<9>;                    // PSTATE.E is writable at EL0
        if privileged then
            PSTATE.A = value<8>;

    if bytemask<0> == '1' then
        if privileged then
            PSTATE.<I,F> = value<7:6>;
        // Bit <5> is RES0
        // AArch32.WriteModeByInstr() sets PSTATE.IL to 1 if this is an illegal mode change.
        AArch32.WriteModeByInstr(value<4:0>);

    return;

Library pseudocode for aarch32/functions/system/ConditionPassed

// ConditionPassed()
// =================

boolean ConditionPassed()
    return ConditionHolds(AArch32.CurrentCond());

Library pseudocode for aarch32/functions/system/CurrentCond

bits(4) AArch32.CurrentCond();

Library pseudocode for aarch32/functions/system/InITBlock

// InITBlock()
// ===========

boolean InITBlock()
    if CurrentInstrSet() == InstrSet_T32 then
        return PSTATE.IT<3:0> != '0000';
    else
        return FALSE;

Library pseudocode for aarch32/functions/system/LastInITBlock

// LastInITBlock()
// ===============

boolean LastInITBlock()
    return (PSTATE.IT<3:0> == '1000');

Library pseudocode for aarch32/functions/system/SPSRWriteByInstr

// SPSRWriteByInstr()
// ==================

SPSRWriteByInstr(bits(32) value, bits(4) bytemask)
    new_spsr = SPSR[];

    if bytemask<3> == '1' then
        new_spsr<31:24> = value<31:24>;  // N,Z,C,V,Q flags, IT[1:0],J bits

    if bytemask<2> == '1' then
        new_spsr<23:16> = value<23:16>;  // IL bit, GE[3:0] flags

    if bytemask<1> == '1' then
        new_spsr<15:8> = value<15:8>;    // IT[7:2] bits, E bit, A interrupt mask

    if bytemask<0> == '1' then
        new_spsr<7:0> = value<7:0>;      // I,F interrupt masks, T bit, Mode bits

    SPSR[] = new_spsr;                   // UNPREDICTABLE if User or System mode

    return;
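Each set bit of 'bytemask' enables one byte lane of the SPSR update. A C sketch of that byte-lane merge alone (illustrative; it omits the surrounding SPSR[] access and its UNPREDICTABLE cases):

    #include <stdint.h>

    /* Merge the byte lanes of 'value' selected by the 4-bit bytemask
     * into 'spsr' (bit i of bytemask covers bits <8i+7:8i>). */
    static uint32_t bytemask_merge(uint32_t spsr, uint32_t value, unsigned bytemask) {
        uint32_t mask = 0;
        for (unsigned i = 0; i < 4; i++)
            if (bytemask & (1u << i))
                mask |= 0xFFu << (8 * i);
        return (spsr & ~mask) | (value & mask);
    }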

Library pseudocode for aarch32/functions/system/SPSRaccessValid

// SPSRaccessValid()
// =================
// Checks for MRS (Banked register) or MSR (Banked register) accesses to the SPSRs
// that are UNPREDICTABLE

SPSRaccessValid(bits(5) SYSm, bits(5) mode)
    case SYSm of
        when '01110'                   // SPSR_fiq
            if mode == M32_FIQ then UNPREDICTABLE;
        when '10000'                   // SPSR_irq
            if mode == M32_IRQ then UNPREDICTABLE;
        when '10010'                   // SPSR_svc
            if mode == M32_Svc then UNPREDICTABLE;
        when '10100'                   // SPSR_abt
            if mode == M32_Abort then UNPREDICTABLE;
        when '10110'                   // SPSR_und
            if mode == M32_Undef then UNPREDICTABLE;
        when '11100'                   // SPSR_mon
            if !HaveEL(EL3) || mode == M32_Monitor || !IsSecure() then UNPREDICTABLE;
        when '11110'                   // SPSR_hyp
            if !HaveEL(EL2) || mode != M32_Monitor then UNPREDICTABLE;
        otherwise
            UNPREDICTABLE;

    return;

Library pseudocode for aarch32/functions/system/SelectInstrSet

// SelectInstrSet()
// ================

SelectInstrSet(InstrSet iset)
    assert CurrentInstrSet() IN {InstrSet_A32, InstrSet_T32};
    assert iset IN {InstrSet_A32, InstrSet_T32};

    PSTATE.T = if iset == InstrSet_A32 then '0' else '1';

    return;

Library pseudocode for aarch32/functions/v6simd/Sat

// Sat()
// =====

bits(N) Sat(integer i, integer N, boolean unsigned)
    result = if unsigned then UnsignedSat(i, N) else SignedSat(i, N);
    return result;

Library pseudocode for aarch32/functions/v6simd/SignedSat

// SignedSat()
// ===========

bits(N) SignedSat(integer i, integer N)
    (result, -) = SignedSatQ(i, N);
    return result;

Library pseudocode for aarch32/functions/v6simd/UnsignedSat

// UnsignedSat()
// =============

bits(N) UnsignedSat(integer i, integer N)
    (result, -) = UnsignedSatQ(i, N);
    return result;
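Sat(), SignedSat() and UnsignedSat() simply discard the saturation flag produced by the SatQ helpers. A C sketch of clamping consistent with those helpers for N up to 32 (the struct and names are assumptions of the sketch):

    #include <stdint.h>

    typedef struct { int64_t value; int saturated; } satq_t;

    /* Clamp i to the N-bit signed range [-2^(N-1), 2^(N-1)-1]. */
    satq_t signed_satq(int64_t i, int n) {
        int64_t max = ((int64_t)1 << (n - 1)) - 1;
        int64_t min = -((int64_t)1 << (n - 1));
        if (i > max) return (satq_t){ max, 1 };
        if (i < min) return (satq_t){ min, 1 };
        return (satq_t){ i, 0 };
    }

    /* Clamp i to the N-bit unsigned range [0, 2^N-1]. */
    satq_t unsigned_satq(int64_t i, int n) {
        int64_t max = ((int64_t)1 << n) - 1;
        if (i > max) return (satq_t){ max, 1 };
        if (i < 0)   return (satq_t){ 0, 1 };
        return (satq_t){ i, 0 };
    }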

Library pseudocode for aarch32/translation/attrs/AArch32.DefaultTEXDecode

// AArch32.DefaultTEXDecode()
// ==========================

MemoryAttributes AArch32.DefaultTEXDecode(bits(3) TEX, bit C, bit B, bit S, AccType acctype)

    MemoryAttributes memattrs;

    // Reserved values map to allocated values
    if (TEX == '001' && C:B == '01') || (TEX == '010' && C:B != '00') || TEX == '011' then
        bits(5) texcb;
        (-, texcb) = ConstrainUnpredictableBits(Unpredictable_RESTEXCB);
        TEX = texcb<4:2>;  C = texcb<1>;  B = texcb<0>;

    case TEX:C:B of
        when '00000'                                        // Device-nGnRnE
            memattrs.type = MemType_Device;
            memattrs.device = DeviceType_nGnRnE;
        when '00001', '01000'                               // Device-nGnRE
            memattrs.type = MemType_Device;
            memattrs.device = DeviceType_nGnRE;
        when '00010', '00011', '00100'   // Write-back or Write-through Read allocate, or Non-cacheable
            memattrs.type = MemType_Normal;
            memattrs.inner = ShortConvertAttrsHints(C:B, acctype, FALSE);
            memattrs.outer = ShortConvertAttrsHints(C:B, acctype, FALSE);
            memattrs.shareable = (S == '1');
        when '00110'
            memattrs = MemoryAttributes IMPLEMENTATION_DEFINED;
        when '00111'                                        // Write-back Read and Write allocate
            memattrs.type = MemType_Normal;
            memattrs.inner = ShortConvertAttrsHints('01', acctype, FALSE);
            memattrs.outer = ShortConvertAttrsHints('01', acctype, FALSE);
            memattrs.shareable = (S == '1');
        when '1xxxx'                     // Cacheable, TEX<1:0> = Outer attrs, {C,B} = Inner attrs
            memattrs.type = MemType_Normal;
            memattrs.inner = ShortConvertAttrsHints(C:B, acctype, FALSE);
            memattrs.outer = ShortConvertAttrsHints(TEX<1:0>, acctype, FALSE);
            memattrs.shareable = (S == '1');
        otherwise
            Unreachable();               // Reserved, handled above

    // transient bits are not supported in this format
    memattrs.inner.transient = FALSE;
    memattrs.outer.transient = FALSE;

    // distinction between inner and outer shareable is not supported in this format
    memattrs.outershareable = memattrs.shareable;
    memattrs.tagged = FALSE;

    return MemAttrDefaults(memattrs);

Library pseudocode for aarch32/translation/attrs/AArch32.InstructionDevice

// AArch32.InstructionDevice()
// ===========================
// Instruction fetches from memory marked as Device but not execute-never might generate a
// Permission Fault but are otherwise treated as if from Normal Non-cacheable memory.

AddressDescriptor AArch32.InstructionDevice(AddressDescriptor addrdesc, bits(32) vaddress,
                                            bits(40) ipaddress, integer level, bits(4) domain,
                                            AccType acctype, boolean iswrite, boolean secondstage,
                                            boolean s2fs1walk)

    c = ConstrainUnpredictable(Unpredictable_INSTRDEVICE);
    assert c IN {Constraint_NONE, Constraint_FAULT};

    if c == Constraint_FAULT then
        addrdesc.fault = AArch32.PermissionFault(ipaddress, domain, level, acctype, iswrite,
                                                 secondstage, s2fs1walk);
    else
        addrdesc.memattrs.type = MemType_Normal;
        addrdesc.memattrs.inner.attrs = MemAttr_NC;
        addrdesc.memattrs.inner.hints = MemHint_No;
        addrdesc.memattrs.outer = addrdesc.memattrs.inner;
        addrdesc.memattrs.tagged = FALSE;
        addrdesc.memattrs = MemAttrDefaults(addrdesc.memattrs);

    return addrdesc;

Library pseudocode for aarch32/translation/attrs/AArch32.RemappedTEXDecode

// AArch32.RemappedTEXDecode()
// ===========================

MemoryAttributes AArch32.RemappedTEXDecode(bits(3) TEX, bit C, bit B, bit S, AccType acctype)

    MemoryAttributes memattrs;

    region = UInt(TEX<0>:C:B);                 // TEX<2:1> are ignored in this mapping scheme
    if region == 6 then
        memattrs = MemoryAttributes IMPLEMENTATION_DEFINED;
    else
        base = 2 * region;
        attrfield = PRRR<base+1:base>;

        if attrfield == '11' then              // Reserved, maps to allocated value
            (-, attrfield) = ConstrainUnpredictableBits(Unpredictable_RESPRRR);

        case attrfield of
            when '00'                          // Device-nGnRnE
                memattrs.type = MemType_Device;
                memattrs.device = DeviceType_nGnRnE;
            when '01'                          // Device-nGnRE
                memattrs.type = MemType_Device;
                memattrs.device = DeviceType_nGnRE;
            when '10'
                memattrs.type = MemType_Normal;
                memattrs.inner = ShortConvertAttrsHints(NMRR<base+1:base>, acctype, FALSE);
                memattrs.outer = ShortConvertAttrsHints(NMRR<base+17:base+16>, acctype, FALSE);
                s_bit = if S == '0' then PRRR.NS0 else PRRR.NS1;
                memattrs.shareable = (s_bit == '1');
                memattrs.outershareable = (s_bit == '1' && PRRR<region+24> == '0');
            when '11'
                Unreachable();

    // transient bits are not supported in this format
    memattrs.inner.transient = FALSE;
    memattrs.outer.transient = FALSE;
    memattrs.tagged = FALSE;

    return MemAttrDefaults(memattrs);

Library pseudocode for aarch32/translation/attrs/AArch32.S1AttrDecode

// AArch32.S1AttrDecode()
// ======================
// Converts the Stage 1 attribute fields, using the MAIR, to orthogonal
// attributes and hints.

MemoryAttributes AArch32.S1AttrDecode(bits(2) SH, bits(3) attr, AccType acctype)

    MemoryAttributes memattrs;

    if PSTATE.EL == EL2 then
        mair = HMAIR1:HMAIR0;
    else
        mair = MAIR1:MAIR0;
    index = 8 * UInt(attr);
    attrfield = mair<index+7:index>;

    memattrs.tagged = FALSE;
    if ((attrfield<7:4> != '0000' && attrfield<7:4> != '1111' && attrfield<3:0> == '0000') ||
        (attrfield<7:4> == '0000' && attrfield<3:0> != 'xx00')) then
        // Reserved, maps to an allocated value
        (-, attrfield) = ConstrainUnpredictableBits(Unpredictable_RESMAIR);
    if !HaveMTEExt() && attrfield<7:4> == '1111' && attrfield<3:0> == '0000' then
        // Reserved, maps to an allocated value
        (-, attrfield) = ConstrainUnpredictableBits(Unpredictable_RESMAIR);

    if attrfield<7:4> == '0000' then            // Device
        memattrs.type = MemType_Device;
        case attrfield<3:0> of
            when '0000'  memattrs.device = DeviceType_nGnRnE;
            when '0100'  memattrs.device = DeviceType_nGnRE;
            when '1000'  memattrs.device = DeviceType_nGRE;
            when '1100'  memattrs.device = DeviceType_GRE;
            otherwise    Unreachable();         // Reserved, handled above

    elsif attrfield<3:0> != '0000' then         // Normal
        memattrs.type = MemType_Normal;
        memattrs.outer = LongConvertAttrsHints(attrfield<7:4>, acctype);
        memattrs.inner = LongConvertAttrsHints(attrfield<3:0>, acctype);
        memattrs.shareable = SH<1> == '1';
        memattrs.outershareable = SH == '10';
    elsif HaveMTEExt() && attrfield == '11110000' then // Tagged, Normal
        memattrs.tagged = TRUE;
        memattrs.type = MemType_Normal;
        memattrs.outer.attrs = MemAttr_WB;
        memattrs.inner.attrs = MemAttr_WB;
        memattrs.outer.hints = MemHint_RWA;
        memattrs.inner.hints = MemHint_RWA;
        memattrs.shareable = SH<1> == '1';
        memattrs.outershareable = SH == '10';
    else
        Unreachable();                          // Reserved, handled above

    return MemAttrDefaults(memattrs);
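The decode begins by selecting byte 'attr' of the concatenated MAIR1:MAIR0 value (HMAIR1:HMAIR0 at EL2); the byte's high nibble then separates Device ('0000') from Normal encodings. A C sketch of just this indexing step (helper names are illustrative):

    #include <stdint.h>

    /* attrfield = mair<8*attr+7 : 8*attr>, attr in 0..7. */
    static uint8_t mair_attrfield(uint64_t mair, unsigned attr) {
        return (uint8_t)(mair >> (8 * attr));
    }

    /* attrfield<7:4> == '0000' selects a Device memory type. */
    static int attrfield_is_device(uint8_t attrfield) {
        return (attrfield >> 4) == 0;
    }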

Library pseudocode for aarch32/translation/attrs/AArch32.TranslateAddressS1Off

// AArch32.TranslateAddressS1Off()
// ===============================
// Called for stage 1 translations when translation is disabled to supply a default translation.
// Note that there are additional constraints on instruction prefetching that are not described in
// this pseudocode.

TLBRecord AArch32.TranslateAddressS1Off(bits(32) vaddress, AccType acctype, boolean iswrite)
    assert ELUsingAArch32(S1TranslationRegime());

    TLBRecord result;

    default_cacheable = (HasS2Translation() && ((if ELUsingAArch32(EL2) then HCR.DC else HCR_EL2.DC) == '1'));

    if default_cacheable then
        // Use default cacheable settings
        result.addrdesc.memattrs.type = MemType_Normal;
        result.addrdesc.memattrs.inner.attrs = MemAttr_WB;      // Write-back
        result.addrdesc.memattrs.inner.hints = MemHint_RWA;
        result.addrdesc.memattrs.shareable = FALSE;
        result.addrdesc.memattrs.outershareable = FALSE;
        result.addrdesc.memattrs.tagged = HCR_EL2.DCT == '1';
    elsif acctype != AccType_IFETCH then
        // Treat data as Device
        result.addrdesc.memattrs.type = MemType_Device;
        result.addrdesc.memattrs.device = DeviceType_nGnRnE;
        result.addrdesc.memattrs.inner = MemAttrHints UNKNOWN;
        result.addrdesc.memattrs.tagged = FALSE;
    else
        // Instruction cacheability controlled by SCTLR/HSCTLR.I
        if PSTATE.EL == EL2 then
            cacheable = HSCTLR.I == '1';
        else
            cacheable = SCTLR.I == '1';
        result.addrdesc.memattrs.type = MemType_Normal;
        if cacheable then
            result.addrdesc.memattrs.inner.attrs = MemAttr_WT;
            result.addrdesc.memattrs.inner.hints = MemHint_RA;
        else
            result.addrdesc.memattrs.inner.attrs = MemAttr_NC;
            result.addrdesc.memattrs.inner.hints = MemHint_No;
        result.addrdesc.memattrs.shareable = TRUE;
        result.addrdesc.memattrs.outershareable = TRUE;
        result.addrdesc.memattrs.tagged = FALSE;

    result.addrdesc.memattrs.outer = result.addrdesc.memattrs.inner;
    result.addrdesc.memattrs = MemAttrDefaults(result.addrdesc.memattrs);

    result.perms.ap = bits(3) UNKNOWN;
    result.perms.xn = '0';
    result.perms.pxn = '0';

    result.nG = bit UNKNOWN;
    result.contiguous = boolean UNKNOWN;
    result.domain = bits(4) UNKNOWN;
    result.level = integer UNKNOWN;
    result.blocksize = integer UNKNOWN;
    result.addrdesc.paddress.address = ZeroExtend(vaddress);
    result.addrdesc.paddress.NS = if IsSecure() then '0' else '1';
    result.addrdesc.fault = AArch32.NoFault();
    return result;

Library pseudocode for aarch32/translation/checks/AArch32.AccessIsPrivileged

// AArch32.AccessIsPrivileged()
// ============================

boolean AArch32.AccessIsPrivileged(AccType acctype)

    el = AArch32.AccessUsesEL(acctype);

    if el == EL0 then
        ispriv = FALSE;
    elsif el != EL1 then
        ispriv = TRUE;
    else
        ispriv = (acctype != AccType_UNPRIV);

    return ispriv;

Library pseudocode for aarch32/translation/checks/AArch32.AccessUsesEL

// AArch32.AccessUsesEL()
// ======================
// Returns the Exception Level of the regime that will manage the translation for a given access type.

bits(2) AArch32.AccessUsesEL(AccType acctype)
    if acctype == AccType_UNPRIV then
        return EL0;
    else
        return PSTATE.EL;

Library pseudocode for aarch32/translation/checks/AArch32.CheckDomain

// AArch32.CheckDomain()
// =====================

(boolean, FaultRecord) AArch32.CheckDomain(bits(4) domain, bits(32) vaddress, integer level,
                                           AccType acctype, boolean iswrite)

    index = 2 * UInt(domain);
    attrfield = DACR<index+1:index>;

    if attrfield == '10' then          // Reserved, maps to an allocated value
        // Reserved value maps to an allocated value
        (-, attrfield) = ConstrainUnpredictableBits(Unpredictable_RESDACR);

    if attrfield == '00' then
        fault = AArch32.DomainFault(domain, level, acctype, iswrite);
    else
        fault = AArch32.NoFault();

    permissioncheck = (attrfield == '01');

    return (permissioncheck, fault);
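Each of the sixteen domains owns a 2-bit DACR field, domain n being DACR<2n+1:2n>: '00' generates a Domain fault, '01' (client) enables permission checking, and '11' (manager) skips it. The field extraction in C (illustrative helper):

    #include <stdint.h>

    /* Return the 2-bit access field for 'domain' (0..15). */
    static unsigned dacr_field(uint32_t dacr, unsigned domain) {
        return (dacr >> (2 * domain)) & 0x3u;
    }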

Library pseudocode for aarch32/translation/checks/AArch32.CheckPermission

// AArch32.CheckPermission()
// =========================
// Function used for permission checking from AArch32 stage 1 translations

FaultRecord AArch32.CheckPermission(Permissions perms, bits(32) vaddress, integer level,
                                    bits(4) domain, bit NS, AccType acctype, boolean iswrite)
    assert ELUsingAArch32(S1TranslationRegime());

    if PSTATE.EL != EL2 then
        wxn = SCTLR.WXN == '1';
        if TTBCR.EAE == '1' || SCTLR.AFE == '1' || perms.ap<0> == '1' then
            priv_r = TRUE;
            priv_w = perms.ap<2> == '0';
            user_r = perms.ap<1> == '1';
            user_w = perms.ap<2:1> == '01';
        else
            priv_r = perms.ap<2:1> != '00';
            priv_w = perms.ap<2:1> == '01';
            user_r = perms.ap<1> == '1';
            user_w = FALSE;
        uwxn = SCTLR.UWXN == '1';

        ispriv = AArch32.AccessIsPrivileged(acctype);

        pan = if HavePANExt() then PSTATE.PAN else '0';
        is_ldst   = !(acctype IN {AccType_DC, AccType_DC_UNPRIV, AccType_AT, AccType_IFETCH});
        is_ats1xp = (acctype == AccType_AT && AArch32.ExecutingATS1xPInstr());
        if pan == '1' && user_r && ispriv && (is_ldst || is_ats1xp) then
            priv_r = FALSE;
            priv_w = FALSE;

        user_xn = !user_r || perms.xn == '1' || (user_w && wxn);
        priv_xn = (!priv_r || perms.xn == '1' || perms.pxn == '1' ||
                   (priv_w && wxn) || (user_w && uwxn));

        if ispriv then
            (r, w, xn) = (priv_r, priv_w, priv_xn);
        else
            (r, w, xn) = (user_r, user_w, user_xn);
    else
        // Access from EL2
        wxn = HSCTLR.WXN == '1';
        r = TRUE;
        w = perms.ap<2> == '0';
        xn = perms.xn == '1' || (w && wxn);

    // Restriction on Secure instruction fetch
    if HaveEL(EL3) && IsSecure() && NS == '1' then
        secure_instr_fetch = if ELUsingAArch32(EL3) then SCR.SIF else SCR_EL3.SIF;
        if secure_instr_fetch == '1' then xn = TRUE;

    if acctype == AccType_IFETCH then
        fail = xn;
        failedread = TRUE;
    elsif acctype IN { AccType_ATOMICRW, AccType_ORDEREDRW, AccType_ORDEREDATOMICRW } then
        fail = !r || !w;
        failedread = !r;
    elsif acctype == AccType_DC then
        // DC maintenance instructions operating by VA, cannot fault from stage 1 translation.
        fail = FALSE;
    elsif iswrite then
        fail = !w;
        failedread = FALSE;
    else
        fail = !r;
        failedread = TRUE;

    if fail then
        secondstage = FALSE;
        s2fs1walk = FALSE;
        ipaddress = bits(40) UNKNOWN;
        return AArch32.PermissionFault(ipaddress, domain, level, acctype,
                                       !failedread, secondstage, s2fs1walk);
    else
        return AArch32.NoFault();

Library pseudocode for aarch32/translation/checks/AArch32.CheckS2Permission

// AArch32.CheckS2Permission()
// ===========================
// Function used for permission checking from AArch32 stage 2 translations

FaultRecord AArch32.CheckS2Permission(Permissions perms, bits(32) vaddress, bits(40) ipaddress,
                                      integer level, AccType acctype, boolean iswrite,
                                      boolean s2fs1walk)

    assert HaveEL(EL2) && !IsSecure() && ELUsingAArch32(EL2) && HasS2Translation();

    r = perms.ap<1> == '1';
    w = perms.ap<2> == '1';
    if HaveExtendedExecuteNeverExt() then
        case perms.xn:perms.xxn of
            when '00'  xn = !r;
            when '01'  xn = !r || PSTATE.EL == EL1;
            when '10'  xn = TRUE;
            when '11'  xn = !r || PSTATE.EL == EL0;
    else
        xn = !r || perms.xn == '1';
    // Stage 1 walk is checked as a read, regardless of the original type
    if acctype == AccType_IFETCH && !s2fs1walk then
        fail = xn;
        failedread = TRUE;
    elsif (acctype IN { AccType_ATOMICRW, AccType_ORDEREDRW, AccType_ORDEREDATOMICRW }) && !s2fs1walk then
        fail = !r || !w;
        failedread = !r;
    elsif acctype == AccType_DC && !s2fs1walk then
        // DC maintenance instructions operating by VA, do not generate Permission faults
        // from stage 2 translation, other than from stage 1 translation table walk.
        fail = FALSE;
    elsif iswrite && !s2fs1walk then
        fail = !w;
        failedread = FALSE;
    else
        fail = !r;
        failedread = !iswrite;

    if fail then
        domain = bits(4) UNKNOWN;
        secondstage = TRUE;
        return AArch32.PermissionFault(ipaddress, domain, level, acctype,
                                       !failedread, secondstage, s2fs1walk);
    else
        return AArch32.NoFault();
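With HaveExtendedExecuteNeverExt(), the stage 2 execute-never decision is a four-way decode of perms.xn:perms.xxn. The same table as a C sketch (names and types are assumptions of the sketch):

    #include <stdbool.h>

    /* el: Exception level of the access (0 or 1); r: stage 2 read permission. */
    static bool stage2_xn(unsigned xn, unsigned xxn, bool r, int el) {
        switch ((xn << 1) | xxn) {
            case 0:  return !r;                 /* '00' */
            case 1:  return !r || el == 1;      /* '01' */
            case 2:  return true;               /* '10' */
            default: return !r || el == 0;      /* '11' */
        }
    }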

Library pseudocode for aarch32/translation/debug/AArch32.CheckBreakpoint

// AArch32.CheckBreakpoint()
// =========================
// Called before executing the instruction of length "size" bytes at "vaddress" in an AArch32
// translation regime.
// The breakpoint can in fact be evaluated well ahead of execution, for example, at instruction
// fetch. This is the simple sequential execution of the program.

FaultRecord AArch32.CheckBreakpoint(bits(32) vaddress, integer size)
    assert ELUsingAArch32(S1TranslationRegime());
    assert size IN {2,4};

    match = FALSE;
    mismatch = FALSE;

    for i = 0 to UInt(DBGDIDR.BRPs)
        (match_i, mismatch_i) = AArch32.BreakpointMatch(i, vaddress, size);
        match = match || match_i;
        mismatch = mismatch || mismatch_i;

    if match && HaltOnBreakpointOrWatchpoint() then
        reason = DebugHalt_Breakpoint;
        Halt(reason);
    elsif (match || mismatch) && DBGDSCRext.MDBGen == '1' && AArch32.GenerateDebugExceptions() then
        acctype = AccType_IFETCH;
        iswrite = FALSE;
        debugmoe = DebugException_Breakpoint;
        return AArch32.DebugFault(acctype, iswrite, debugmoe);
    else
        return AArch32.NoFault();

Library pseudocode for aarch32/translation/debug/AArch32.CheckDebug

// AArch32.CheckDebug()
// ====================
// Called on each access to check for a debug exception or entry to Debug state.

FaultRecord AArch32.CheckDebug(bits(32) vaddress, AccType acctype, boolean iswrite, integer size)

    FaultRecord fault = AArch32.NoFault();

    d_side = (acctype != AccType_IFETCH);
    generate_exception = AArch32.GenerateDebugExceptions() && DBGDSCRext.MDBGen == '1';
    halt = HaltOnBreakpointOrWatchpoint();
    // Relative priority of Vector Catch and Breakpoint exceptions not defined in the architecture
    vector_catch_first = ConstrainUnpredictableBool(Unpredictable_BPVECTORCATCHPRI);

    if !d_side && vector_catch_first && generate_exception then
        fault = AArch32.CheckVectorCatch(vaddress, size);

    if fault.type == Fault_None && (generate_exception || halt) then
        if d_side then
            fault = AArch32.CheckWatchpoint(vaddress, acctype, iswrite, size);
        else
            fault = AArch32.CheckBreakpoint(vaddress, size);

    if fault.type == Fault_None && !d_side && !vector_catch_first && generate_exception then
        return AArch32.CheckVectorCatch(vaddress, size);

    return fault;

Library pseudocode for aarch32/translation/debug/AArch32.CheckVectorCatch

// AArch32.CheckVectorCatch()
// ==========================
// Called before executing the instruction of length "size" bytes at "vaddress" in an AArch32
// translation regime.
// Vector Catch can in fact be evaluated well ahead of execution, for example, at instruction
// fetch. This is the simple sequential execution of the program.

FaultRecord AArch32.CheckVectorCatch(bits(32) vaddress, integer size)
    assert ELUsingAArch32(S1TranslationRegime());

    match = AArch32.VCRMatch(vaddress);
    if size == 4 && !match && AArch32.VCRMatch(vaddress + 2) then
        match = ConstrainUnpredictableBool(Unpredictable_VCMATCHHALF);

    if match && DBGDSCRext.MDBGen == '1' && AArch32.GenerateDebugExceptions() then
        acctype = AccType_IFETCH;
        iswrite = FALSE;
        debugmoe = DebugException_VectorCatch;
        return AArch32.DebugFault(acctype, iswrite, debugmoe);
    else
        return AArch32.NoFault();

Library pseudocode for aarch32/translation/debug/AArch32.CheckWatchpoint

// AArch32.CheckWatchpoint()
// =========================
// Called before accessing the memory location of "size" bytes at "address".

FaultRecord AArch32.CheckWatchpoint(bits(32) vaddress, AccType acctype,
                                    boolean iswrite, integer size)
    assert ELUsingAArch32(S1TranslationRegime());

    match = FALSE;
    ispriv = AArch32.AccessIsPrivileged(acctype);

    for i = 0 to UInt(DBGDIDR.WRPs)
        match = match || AArch32.WatchpointMatch(i, vaddress, size, ispriv, iswrite);

    if match && HaltOnBreakpointOrWatchpoint() then
        reason = DebugHalt_Watchpoint;
        Halt(reason);
    elsif match && DBGDSCRext.MDBGen == '1' && AArch32.GenerateDebugExceptions() then
        debugmoe = DebugException_Watchpoint;
        return AArch32.DebugFault(acctype, iswrite, debugmoe);
    else
        return AArch32.NoFault();

Library pseudocode for aarch32/translation/faults/AArch32.AccessFlagFault

// AArch32.AccessFlagFault() // ========================= FaultRecord AArch32.AccessFlagFault(bits(40) ipaddress, bits(4) domain, integer level, AccType acctype, boolean iswrite, boolean secondstage, boolean s2fs1walk) extflag = bit UNKNOWN; debugmoe = bits(4) UNKNOWN; errortype = bits(2) UNKNOWN; return AArch32.CreateFaultRecord(Fault_AccessFlag, ipaddress, domain, level, acctype, iswrite, extflag, debugmoe, errortype, secondstage, s2fs1walk);

Library pseudocode for aarch32/translation/faults/AArch32.AddressSizeFault

// AArch32.AddressSizeFault() // ========================== FaultRecord AArch32.AddressSizeFault(bits(40) ipaddress, bits(4) domain, integer level, AccType acctype, boolean iswrite, boolean secondstage, boolean s2fs1walk) extflag = bit UNKNOWN; debugmoe = bits(4) UNKNOWN; errortype = bits(2) UNKNOWN; return AArch32.CreateFaultRecord(Fault_AddressSize, ipaddress, domain, level, acctype, iswrite, extflag, debugmoe, errortype, secondstage, s2fs1walk);

Library pseudocode for aarch32/translation/faults/AArch32.AlignmentFault

// AArch32.AlignmentFault() // ======================== FaultRecord AArch32.AlignmentFault(AccType acctype, boolean iswrite, boolean secondstage) ipaddress = bits(40) UNKNOWN; domain = bits(4) UNKNOWN; level = integer UNKNOWN; extflag = bit UNKNOWN; debugmoe = bits(4) UNKNOWN; errortype = bits(2) UNKNOWN; s2fs1walk = boolean UNKNOWN; return AArch32.CreateFaultRecord(Fault_Alignment, ipaddress, domain, level, acctype, iswrite, extflag, debugmoe, errortype, secondstage, s2fs1walk);

Library pseudocode for aarch32/translation/faults/AArch32.AsynchExternalAbort

// AArch32.AsynchExternalAbort() // ============================= // Wrapper function for asynchronous external aborts FaultRecord AArch32.AsynchExternalAbort(boolean parity, bits(2) errortype, bit extflag) type = if parity then Fault_AsyncParity else Fault_AsyncExternal; ipaddress = bits(40) UNKNOWN; domain = bits(4) UNKNOWN; level = integer UNKNOWN; acctype = AccType_NORMAL; iswrite = boolean UNKNOWN; debugmoe = bits(4) UNKNOWN; secondstage = FALSE; s2fs1walk = FALSE; return AArch32.CreateFaultRecord(type, ipaddress, domain, level, acctype, iswrite, extflag, debugmoe, errortype, secondstage, s2fs1walk);
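The fault constructors in this section all follow the same shape: fields that carry no information for the fault type are left UNKNOWN and everything is funnelled into AArch32.CreateFaultRecord with the distinguishing Fault_* code. A compact C sketch of that pattern; the record layout and names here are illustrative only, not the architectural FaultRecord:

#include <stdbool.h>
#include <stdint.h>

typedef enum { FAULT_NONE, FAULT_ALIGNMENT, FAULT_TRANSLATION, FAULT_PERMISSION,
               FAULT_ACCESSFLAG, FAULT_ADDRESSSIZE, FAULT_DOMAIN, FAULT_DEBUG } fault_type;

/* Illustrative record only; the architectural FaultRecord has more fields. */
typedef struct {
    fault_type type;
    uint64_t   ipaddress;   /* only meaningful for stage 2 related faults */
    int        level;       /* lookup level, -1 when not applicable       */
    bool       secondstage;
    bool       s2fs1walk;
} fault_record;

/* Common constructor: callers pass only what the fault type defines, and the
 * rest defaults to "don't care", mirroring the UNKNOWN assignments above.   */
static fault_record create_fault(fault_type type, uint64_t ipaddress, int level,
                                 bool secondstage, bool s2fs1walk)
{
    fault_record r = { type, ipaddress, level, secondstage, s2fs1walk };
    return r;
}

/* Rough analogue of AArch32.AlignmentFault: no address or level is relevant. */
static fault_record alignment_fault(bool secondstage)
{
    return create_fault(FAULT_ALIGNMENT, 0, -1, secondstage, false);
}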

Library pseudocode for aarch32/translation/faults/AArch32.DebugFault

// AArch32.DebugFault() // ==================== FaultRecord AArch32.DebugFault(AccType acctype, boolean iswrite, bits(4) debugmoe) ipaddress = bits(40) UNKNOWN; domain = bits(4) UNKNOWN; errortype = bits(2) UNKNOWN; level = integer UNKNOWN; extflag = bit UNKNOWN; secondstage = FALSE; s2fs1walk = FALSE; return AArch32.CreateFaultRecord(Fault_Debug, ipaddress, domain, level, acctype, iswrite, extflag, debugmoe, errortype, secondstage, s2fs1walk);

Library pseudocode for aarch32/translation/faults/AArch32.DomainFault

// AArch32.DomainFault() // ===================== FaultRecord AArch32.DomainFault(bits(4) domain, integer level, AccType acctype, boolean iswrite) ipaddress = bits(40) UNKNOWN; extflag = bit UNKNOWN; debugmoe = bits(4) UNKNOWN; errortype = bits(2) UNKNOWN; secondstage = FALSE; s2fs1walk = FALSE; return AArch32.CreateFaultRecord(Fault_Domain, ipaddress, domain, level, acctype, iswrite, extflag, debugmoe, errortype, secondstage, s2fs1walk);

Library pseudocode for aarch32/translation/faults/AArch32.NoFault

// AArch32.NoFault() // ================= FaultRecord AArch32.NoFault() ipaddress = bits(40) UNKNOWN; domain = bits(4) UNKNOWN; level = integer UNKNOWN; acctype = AccType_NORMAL; iswrite = boolean UNKNOWN; extflag = bit UNKNOWN; debugmoe = bits(4) UNKNOWN; errortype = bits(2) UNKNOWN; secondstage = FALSE; s2fs1walk = FALSE; return AArch32.CreateFaultRecord(Fault_None, ipaddress, domain, level, acctype, iswrite, extflag, debugmoe, errortype, secondstage, s2fs1walk);

Library pseudocode for aarch32/translation/faults/AArch32.PermissionFault

// AArch32.PermissionFault() // ========================= FaultRecord AArch32.PermissionFault(bits(40) ipaddress, bits(4) domain, integer level, AccType acctype, boolean iswrite, boolean secondstage, boolean s2fs1walk) extflag = bit UNKNOWN; debugmoe = bits(4) UNKNOWN; errortype = bits(2) UNKNOWN; return AArch32.CreateFaultRecord(Fault_Permission, ipaddress, domain, level, acctype, iswrite, extflag, debugmoe, errortype, secondstage, s2fs1walk);

Library pseudocode for aarch32/translation/faults/AArch32.TranslationFault

// AArch32.TranslationFault() // ========================== FaultRecord AArch32.TranslationFault(bits(40) ipaddress, bits(4) domain, integer level, AccType acctype, boolean iswrite, boolean secondstage, boolean s2fs1walk) extflag = bit UNKNOWN; debugmoe = bits(4) UNKNOWN; errortype = bits(2) UNKNOWN; return AArch32.CreateFaultRecord(Fault_Translation, ipaddress, domain, level, acctype, iswrite, extflag, debugmoe, errortype, secondstage, s2fs1walk);

Library pseudocode for aarch32/translation/translation/AArch32.FirstStageTranslate

// AArch32.FirstStageTranslate() // ============================= // Perform a stage 1 translation walk. The function used by Address Translation operations is // similar except it uses the translation regime specified for the instruction. AddressDescriptor AArch32.FirstStageTranslate(bits(32) vaddress, AccType acctype, boolean iswrite, boolean wasaligned, integer size) if PSTATE.EL == EL2 then s1_enabled = HSCTLR.M == '1'; elsif EL2Enabled() then tge = (if ELUsingAArch32(EL2) then HCR.TGE else HCR_EL2.TGE); dc = (if ELUsingAArch32(EL2) then HCR.DC else HCR_EL2.DC); s1_enabled = tge == '0' && dc == '0' && SCTLR.M == '1'; else s1_enabled = SCTLR.M == '1'; ipaddress = bits(40) UNKNOWN; secondstage = FALSE; s2fs1walk = FALSE; if s1_enabled then // First stage enabled use_long_descriptor_format = PSTATE.EL == EL2 || TTBCR.EAE == '1'; if use_long_descriptor_format then S1 = AArch32.TranslationTableWalkLD(ipaddress, vaddress, acctype, iswrite, secondstage, s2fs1walk, size); permissioncheck = TRUE; domaincheck = FALSE; else S1 = AArch32.TranslationTableWalkSD(vaddress, acctype, iswrite, size); permissioncheck = TRUE; domaincheck = TRUE; else S1 = AArch32.TranslateAddressS1Off(vaddress, acctype, iswrite); permissioncheck = FALSE; domaincheck = FALSE; if UsingAArch32() && HaveTrapLoadStoreMultipleDeviceExt() && AArch32.ExecutingLSMInstr() then if S1.addrdesc.memattrs.type == MemType_Device && S1.addrdesc.memattrs.device != DeviceType_GRE then nTLSMD = if S1TranslationRegime() == EL2 then HSCTLR.nTLSMD else SCTLR.nTLSMD; if nTLSMD == '0' then S1.addrdesc.fault = AArch32.AlignmentFault(acctype, iswrite, secondstage); // Check for unaligned data accesses to Device memory if ((!wasaligned && acctype != AccType_IFETCH) || (acctype == AccType_DCZVA)) && S1.addrdesc.memattrs.type == MemType_Device && !IsFault(S1.addrdesc) then S1.addrdesc.fault = AArch32.AlignmentFault(acctype, iswrite, secondstage); if !IsFault(S1.addrdesc) && domaincheck then (permissioncheck, abort) = AArch32.CheckDomain(S1.domain, vaddress, S1.level, acctype, iswrite); S1.addrdesc.fault = abort; if !IsFault(S1.addrdesc) && permissioncheck then S1.addrdesc.fault = AArch32.CheckPermission(S1.perms, vaddress, S1.level, S1.domain, S1.addrdesc.paddress.NS, acctype, iswrite); // Check for instruction fetches from Device memory not marked as execute-never. If there has // not been a Permission Fault then the memory is not marked execute-never. if (!IsFault(S1.addrdesc) && S1.addrdesc.memattrs.type == MemType_Device && acctype == AccType_IFETCH) then S1.addrdesc = AArch32.InstructionDevice(S1.addrdesc, vaddress, ipaddress, S1.level, S1.domain, acctype, iswrite, secondstage, s2fs1walk); return S1.addrdesc;
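The s1_enabled decision above depends on which Exception level the access uses and on whether EL2 is in the picture. A minimal C sketch of just that decision; the flags stand for the register fields the pseudocode reads and are illustrative names, not an architectural interface:

#include <stdbool.h>

/* Inputs mirror the system register bits consulted by AArch32.FirstStageTranslate. */
struct s1_ctl {
    bool at_el2;        /* PSTATE.EL == EL2      */
    bool el2_enabled;   /* EL2Enabled()          */
    bool hsctlr_m;      /* HSCTLR.M              */
    bool hcr_tge;       /* HCR.TGE / HCR_EL2.TGE */
    bool hcr_dc;        /* HCR.DC  / HCR_EL2.DC  */
    bool sctlr_m;       /* SCTLR.M               */
};

/* Stage 1 translation is enabled exactly when the pseudocode's s1_enabled is TRUE. */
bool stage1_enabled(const struct s1_ctl *c)
{
    if (c->at_el2)
        return c->hsctlr_m;
    if (c->el2_enabled)
        return !c->hcr_tge && !c->hcr_dc && c->sctlr_m;
    return c->sctlr_m;
}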

Library pseudocode for aarch32/translation/translation/AArch32.FullTranslate

// AArch32.FullTranslate() // ======================= // Perform both stage 1 and stage 2 translation walks for the current translation regime. The // function used by Address Translation operations is similar except it uses the translation // regime specified for the instruction. AddressDescriptor AArch32.FullTranslate(bits(32) vaddress, AccType acctype, boolean iswrite, boolean wasaligned, integer size) // First Stage Translation S1 = AArch32.FirstStageTranslate(vaddress, acctype, iswrite, wasaligned, size); if !IsFault(S1) && !(HaveNV2Ext() && acctype == AccType_NV2REGISTER) && HasS2Translation() then s2fs1walk = FALSE; result = AArch32.SecondStageTranslate(S1, vaddress, acctype, iswrite, wasaligned, s2fs1walk, size); else result = S1; return result;

Library pseudocode for aarch32/translation/translation/AArch32.SecondStageTranslate

// AArch32.SecondStageTranslate() // ============================== // Perform a stage 2 translation walk. The function used by Address Translation operations is // similar except it uses the translation regime specified for the instruction. AddressDescriptor AArch32.SecondStageTranslate(AddressDescriptor S1, bits(32) vaddress, AccType acctype, boolean iswrite, boolean wasaligned, boolean s2fs1walk, integer size) assert HasS2Translation(); assert IsZero(S1.paddress.address<47:40>); hwupdatewalk = FALSE; if !ELUsingAArch32(EL2) then return AArch64.SecondStageTranslate(S1, ZeroExtend(vaddress, 64), acctype, iswrite, wasaligned, s2fs1walk, size, hwupdatewalk); s2_enabled = HCR.VM == '1' || HCR.DC == '1'; secondstage = TRUE; if s2_enabled then // Second stage enabled ipaddress = S1.paddress.address<39:0>; S2 = AArch32.TranslationTableWalkLD(ipaddress, vaddress, acctype, iswrite, secondstage, s2fs1walk, size); // Check for unaligned data accesses to Device memory if ((!wasaligned && acctype != AccType_IFETCH) || (acctype == AccType_DCZVA)) && S2.addrdesc.memattrs.type == MemType_Device && !IsFault(S2.addrdesc) then S2.addrdesc.fault = AArch32.AlignmentFault(acctype, iswrite, secondstage); // Check for permissions on Stage2 translations if !IsFault(S2.addrdesc) then S2.addrdesc.fault = AArch32.CheckS2Permission(S2.perms, vaddress, ipaddress, S2.level, acctype, iswrite, s2fs1walk); // Check for instruction fetches from Device memory not marked as execute-never. As there // has not been a Permission Fault then the memory is not marked execute-never. if (!s2fs1walk && !IsFault(S2.addrdesc) && S2.addrdesc.memattrs.type == MemType_Device && acctype == AccType_IFETCH) then domain = bits(4) UNKNOWN; S2.addrdesc = AArch32.InstructionDevice(S2.addrdesc, vaddress, ipaddress, S2.level, domain, acctype, iswrite, secondstage, s2fs1walk); // Check for protected table walk if (s2fs1walk && !IsFault(S2.addrdesc) && HCR.PTW == '1' && S2.addrdesc.memattrs.type == MemType_Device) then domain = bits(4) UNKNOWN; S2.addrdesc.fault = AArch32.PermissionFault(ipaddress, domain, S2.level, acctype, iswrite, secondstage, s2fs1walk); result = CombineS1S2Desc(S1, S2.addrdesc); else result = S1; return result;

Library pseudocode for aarch32/translation/translation/AArch32.SecondStageWalk

// AArch32.SecondStageWalk() // ========================= // Perform a stage 2 translation on a stage 1 translation page table walk access. AddressDescriptor AArch32.SecondStageWalk(AddressDescriptor S1, bits(32) vaddress, AccType acctype, boolean iswrite, integer size) assert HasS2Translation(); s2fs1walk = TRUE; wasaligned = TRUE; return AArch32.SecondStageTranslate(S1, vaddress, acctype, iswrite, wasaligned, s2fs1walk, size);

Library pseudocode for aarch32/translation/translation/AArch32.TranslateAddress

// AArch32.TranslateAddress() // ========================== // Main entry point for translating an address AddressDescriptor AArch32.TranslateAddress(bits(32) vaddress, AccType acctype, boolean iswrite, boolean wasaligned, integer size) if !ELUsingAArch32(S1TranslationRegime()) then return AArch64.TranslateAddress(ZeroExtend(vaddress, 64), acctype, iswrite, wasaligned, size); result = AArch32.FullTranslate(vaddress, acctype, iswrite, wasaligned, size); if !(acctype IN {AccType_PTW, AccType_IC, AccType_AT}) && !IsFault(result) then result.fault = AArch32.CheckDebug(vaddress, acctype, iswrite, size); // Update virtual address for abort functions result.vaddress = ZeroExtend(vaddress); return result;
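The acctype filter above keeps the self-hosted debug checks out of accesses that are generated by the translation machinery itself rather than by the program. A one-function C sketch of that filter, using an illustrative access-type enum rather than the architectural AccType:

#include <stdbool.h>

typedef enum { ACC_NORMAL, ACC_IFETCH, ACC_PTW, ACC_IC, ACC_AT, ACC_DCZVA } acc_type;

/* AArch32.TranslateAddress only applies AArch32.CheckDebug to accesses that are
 * not page table walks, instruction cache maintenance, or address translation
 * (AT) operations. */
bool debug_checks_apply(acc_type acctype)
{
    return !(acctype == ACC_PTW || acctype == ACC_IC || acctype == ACC_AT);
}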

Library pseudocode for aarch32/translation/walk/AArch32.TranslationTableWalkLD

// AArch32.TranslationTableWalkLD() // ================================ // Returns a result of a translation table walk using the Long-descriptor format // // Implementations might cache information from memory in any number of non-coherent TLB // caching structures, and so avoid memory accesses that have been expressed in this // pseudocode. The use of such TLBs is not expressed in this pseudocode. TLBRecord AArch32.TranslationTableWalkLD(bits(40) ipaddress, bits(32) vaddress, AccType acctype, boolean iswrite, boolean secondstage, boolean s2fs1walk, integer size) if !secondstage then assert ELUsingAArch32(S1TranslationRegime()); else assert HaveEL(EL2) && !IsSecure() && ELUsingAArch32(EL2) && HasS2Translation(); TLBRecord result; AddressDescriptor descaddr; bits(64) baseregister; bits(40) inputaddr; // Input Address is 'vaddress' for stage 1, 'ipaddress' for stage 2 domain = bits(4) UNKNOWN; descaddr.memattrs.type = MemType_Normal; // Fixed parameters for the page table walk: // grainsize = Log2(Size of Table) - Size of Table is 4KB in AArch32 // stride = Log2(Address per Level) - Bits of address consumed at each level constant integer grainsize = 12; // Log2(4KB page size) constant integer stride = grainsize - 3; // Log2(page size / 8 bytes) // Derived parameters for the page table walk: // inputsize = Log2(Size of Input Address) - Input Address size in bits // level = Level to start walk from // This means that the number of levels after start level = 3-level if !secondstage then // First stage translation inputaddr = ZeroExtend(vaddress); el = AArch32.AccessUsesEL(acctype); if el == EL2 then inputsize = 32 - UInt(HTCR.T0SZ); basefound = inputsize == 32 || IsZero(inputaddr<31:inputsize>); disabled = FALSE; baseregister = HTTBR; descaddr.memattrs = WalkAttrDecode(HTCR.SH0, HTCR.ORGN0, HTCR.IRGN0, secondstage); reversedescriptors = HSCTLR.EE == '1'; lookupsecure = FALSE; singlepriv = TRUE; hierattrsdisabled = AArch32.HaveHPDExt() && HTCR.HPD == '1'; else basefound = FALSE; disabled = FALSE; t0size = UInt(TTBCR.T0SZ); if t0size == 0 || IsZero(inputaddr<31:(32-t0size)>) then inputsize = 32 - t0size; basefound = TRUE; baseregister = TTBR0; descaddr.memattrs = WalkAttrDecode(TTBCR.SH0, TTBCR.ORGN0, TTBCR.IRGN0, secondstage); hierattrsdisabled = AArch32.HaveHPDExt() && TTBCR.T2E == '1' && TTBCR2.HPD0 == '1'; t1size = UInt(TTBCR.T1SZ); if (t1size == 0 && !basefound) || (t1size > 0 && IsOnes(inputaddr<31:(32-t1size)>)) then inputsize = 32 - t1size; basefound = TRUE; baseregister = TTBR1; descaddr.memattrs = WalkAttrDecode(TTBCR.SH1, TTBCR.ORGN1, TTBCR.IRGN1, secondstage); hierattrsdisabled = AArch32.HaveHPDExt() && TTBCR.T2E == '1' && TTBCR2.HPD1 == '1'; reversedescriptors = SCTLR.EE == '1'; lookupsecure = IsSecure(); singlepriv = FALSE; // The starting level is the number of strides needed to consume the input address level = 4 - RoundUp(Real(inputsize - grainsize) / Real(stride)); else // Second stage translation inputaddr = ipaddress; inputsize = 32 - SInt(VTCR.T0SZ); // VTCR.S must match VTCR.T0SZ[3] if VTCR.S != VTCR.T0SZ<3> then (-, inputsize) = ConstrainUnpredictableInteger(32-7, 32+8,
Unpredictable_RESVTCRS); basefound = inputsize == 40 || IsZero(inputaddr<39:inputsize>); disabled = FALSE; descaddr.memattrs = WalkAttrDecode(VTCR.SH0, VTCR.ORGN0, VTCR.IRGN0, secondstage); reversedescriptors = HSCTLR.EE == '1'; singlepriv = TRUE; lookupsecure = FALSE; baseregister = VTTBR; startlevel = UInt(VTCR.SL0); level = 2 - startlevel; if level <= 0 then basefound = FALSE; // Number of entries in the starting level table = // (Size of Input Address)/((Address per level)^(Num levels remaining)*(Size of Table)) startsizecheck = inputsize - ((3 - level)*stride + grainsize); // Log2(Num of entries) // Check for starting level table with fewer than 2 entries or longer than 16 pages. // Lower bound check is: startsizecheck < Log2(2 entries) // That is, VTCR.SL0 == '00' and SInt(VTCR.T0SZ) > 1, Size of Input Address < 2^31 bytes // Upper bound check is: startsizecheck > Log2(pagesize/8*16) // That is, VTCR.SL0 == '01' and SInt(VTCR.T0SZ) < -2, Size of Input Address > 2^34 bytes if startsizecheck < 1 || startsizecheck > stride + 4 then basefound = FALSE; if !basefound || disabled then level = 1; // AArch64 reports this as a level 0 fault result.addrdesc.fault = AArch32.TranslationFault(ipaddress, domain, level, acctype, iswrite, secondstage, s2fs1walk); return result; if !IsZero(baseregister<47:40>) then level = 0; result.addrdesc.fault = AArch32.AddressSizeFault(ipaddress, domain, level, acctype, iswrite, secondstage, s2fs1walk); return result; // Bottom bound of the Base address is: // Log2(8 bytes per entry)+Log2(Number of entries in starting level table) // Number of entries in starting level table = // (Size of Input Address)/((Address per level)^(Num levels remaining)*(Size of Table)) baselowerbound = 3 + inputsize - ((3-level)*stride + grainsize); // Log2(Num of entries*8) baseaddress = baseregister<39:baselowerbound>:Zeros(baselowerbound); ns_table = if lookupsecure then '0' else '1'; ap_table = '00'; xn_table = '0'; pxn_table = '0'; addrselecttop = inputsize - 1; repeat addrselectbottom = (3-level)*stride + grainsize; bits(40) index = ZeroExtend(inputaddr<addrselecttop:addrselectbottom>:'000'); descaddr.paddress.address = ZeroExtend(baseaddress OR index); descaddr.paddress.NS = ns_table; // If there are two stages of translation, then the first stage table walk addresses // are themselves subject to translation if secondstage || !HasS2Translation() || (HaveNV2Ext() && acctype == AccType_NV2REGISTER) then descaddr2 = descaddr; else descaddr2 = AArch32.SecondStageWalk(descaddr, vaddress, acctype, iswrite, 8); // Check for a fault on the stage 2 walk if IsFault(descaddr2) then result.addrdesc.fault = descaddr2.fault; return result; // Update virtual address for abort functions descaddr2.vaddress = ZeroExtend(vaddress); accdesc = CreateAccessDescriptorPTW(acctype, secondstage, s2fs1walk, level); desc = _Mem[descaddr2, 8, accdesc]; if reversedescriptors then desc = BigEndianReverse(desc); if desc<0> == '0' || (desc<1:0> == '01' && level == 3) then // Fault (00), Reserved (10), or Block (01) at level 3. 
result.addrdesc.fault = AArch32.TranslationFault(ipaddress, domain, level, acctype, iswrite, secondstage, s2fs1walk); return result; // Valid Block, Page, or Table entry if desc<1:0> == '01' || level == 3 then // Block (01) or Page (11) blocktranslate = TRUE; else // Table (11) if !IsZero(desc<47:40>) then result.addrdesc.fault = AArch32.AddressSizeFault(ipaddress, domain, level, acctype, iswrite, secondstage, s2fs1walk); return result; baseaddress = desc<39:grainsize>:Zeros(grainsize); if !secondstage then // Unpack the upper and lower table attributes ns_table = ns_table OR desc<63>; if !secondstage && !hierattrsdisabled then ap_table<1> = ap_table<1> OR desc<62>; // read-only xn_table = xn_table OR desc<60>; // pxn_table and ap_table[0] apply only in EL1&0 translation regimes if !singlepriv then pxn_table = pxn_table OR desc<59>; ap_table<0> = ap_table<0> OR desc<61>; // privileged level = level + 1; addrselecttop = addrselectbottom - 1; blocktranslate = FALSE; until blocktranslate; // Check the output address is inside the supported range if !IsZero(desc<47:40>) then result.addrdesc.fault = AArch32.AddressSizeFault(ipaddress, domain, level, acctype, iswrite, secondstage, s2fs1walk); return result; // Unpack the descriptor into address and upper and lower block attributes outputaddress = desc<39:addrselectbottom>:inputaddr<addrselectbottom-1:0>; // Check the access flag if desc<10> == '0' then result.addrdesc.fault = AArch32.AccessFlagFault(ipaddress, domain, level, acctype, iswrite, secondstage, s2fs1walk); return result; xn = desc<54>; // Bit[54] of the block/page descriptor holds UXN pxn = desc<53>; // Bit[53] of the block/page descriptor holds PXN ap = desc<7:6>:'1'; // Bits[7:6] of the block/page descriptor hold AP[2:1] contiguousbit = desc<52>; nG = desc<11>; sh = desc<9:8>; memattr = desc<5:2>; // AttrIndx and NS bit in stage 1 result.domain = bits(4) UNKNOWN; // Domains not used result.level = level; result.blocksize = 2^((3-level)*stride + grainsize); // Stage 1 translation regimes also inherit attributes from the tables if !secondstage then result.perms.xn = xn OR xn_table; result.perms.ap<2> = ap<2> OR ap_table<1>; // Force read-only // PXN, nG and AP[1] apply only in EL1&0 stage 1 translation regimes if !singlepriv then result.perms.ap<1> = ap<1> AND NOT(ap_table<0>); // Force privileged only result.perms.pxn = pxn OR pxn_table; // Pages from Non-secure tables are marked non-global in Secure EL1&0 if IsSecure() then result.nG = nG OR ns_table; else result.nG = nG; else result.perms.ap<1> = '1'; result.perms.pxn = '0'; result.nG = '0'; result.GP = desc<50>; // Stage 1 block or pages might be guarded result.perms.ap<0> = '1'; result.addrdesc.memattrs = AArch32.S1AttrDecode(sh, memattr<2:0>, acctype); result.addrdesc.paddress.NS = memattr<3> OR ns_table; else result.perms.ap<2:1> = ap<2:1>; result.perms.ap<0> = '1'; result.perms.xn = xn; if HaveExtendedExecuteNeverExt() then result.perms.xxn = desc<53>; result.perms.pxn = '0'; result.nG = '0'; if s2fs1walk then result.addrdesc.memattrs = S2AttrDecode(sh, memattr, AccType_PTW); else result.addrdesc.memattrs = S2AttrDecode(sh, memattr, acctype); result.addrdesc.paddress.NS = '1'; result.addrdesc.paddress.address = ZeroExtend(outputaddress); result.addrdesc.fault = AArch32.NoFault(); result.contiguous = contiguousbit == '1'; if HaveCommonNotPrivateTransExt() then result.CnP = baseregister<0>; return result;
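The level and index arithmetic in the long-descriptor walk is fixed by the 4KB-granule constants the function declares (grainsize = 12, stride = 9). A compact C sketch of that arithmetic, under those constants only; it is an illustration of the RoundUp and bit-slicing steps above, not a substitute for the full walk:

#include <stdint.h>

/* Constants fixed by the pseudocode for AArch32 long-descriptor walks. */
enum { GRAINSIZE = 12, STRIDE = GRAINSIZE - 3 };   /* 4KB granule, 9 bits per level */

/* Starting level: number of strides needed to consume the input address,
 * i.e. level = 4 - RoundUp((inputsize - GRAINSIZE) / STRIDE). */
int start_level(int inputsize)
{
    int bits_above_grain = inputsize - GRAINSIZE;
    int levels_needed = (bits_above_grain + STRIDE - 1) / STRIDE;   /* RoundUp */
    return 4 - levels_needed;
}

/* Byte offset of the descriptor selected at "level" for input address "ia":
 * the pseudocode forms inputaddr<addrselecttop:addrselectbottom>:'000'. */
uint64_t descriptor_offset(uint64_t ia, int level, int addrselecttop)
{
    int addrselectbottom = (3 - level) * STRIDE + GRAINSIZE;
    uint64_t field = (ia >> addrselectbottom) &
                     ((1ULL << (addrselecttop - addrselectbottom + 1)) - 1);
    return field << 3;                                  /* 8 bytes per entry */
}

For example, a stage 1 walk with inputsize = 32 gives start_level(32) == 1, matching the comment that a 32-bit input address with a 4KB granule starts at level 1.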

Library pseudocode for aarch32/translation/walk/AArch32.TranslationTableWalkSD

// AArch32.TranslationTableWalkSD()
// ================================
// Returns a result of a translation table walk using the Short-descriptor format
//
// Implementations might cache information from memory in any number of non-coherent TLB
// caching structures, and so avoid memory accesses that have been expressed in this
// pseudocode. The use of such TLBs is not expressed in this pseudocode.

TLBRecord AArch32.TranslationTableWalkSD(bits(32) vaddress, AccType acctype, boolean iswrite, integer size)
    assert ELUsingAArch32(S1TranslationRegime());

    // This is only called when address translation is enabled
    TLBRecord result;
    AddressDescriptor l1descaddr;
    AddressDescriptor l2descaddr;
    bits(40) outputaddress;

    // Variables for Abort functions
    ipaddress = bits(40) UNKNOWN;
    secondstage = FALSE;
    s2fs1walk = FALSE;
    NS = bit UNKNOWN;

    // Default setting of the domain
    domain = bits(4) UNKNOWN;

    // Determine correct Translation Table Base Register to use.
    bits(64) ttbr;
    n = UInt(TTBCR.N);
    if n == 0 || IsZero(vaddress<31:(32-n)>) then
        ttbr = TTBR0;
        disabled = (TTBCR.PD0 == '1');
    else
        ttbr = TTBR1;
        disabled = (TTBCR.PD1 == '1');
        n = 0;                       // TTBR1 translation always works like N=0 TTBR0 translation

    // Check this Translation Table Base Register is not disabled.
    if disabled then
        level = 1;
        result.addrdesc.fault = AArch32.TranslationFault(ipaddress, domain, level, acctype, iswrite, secondstage, s2fs1walk);
        return result;

    // Obtain descriptor from initial lookup.
    l1descaddr.paddress.address = ZeroExtend(ttbr<31:14-n>:vaddress<31-n:20>:'00');
    l1descaddr.paddress.NS = if IsSecure() then '0' else '1';
    IRGN = ttbr<0>:ttbr<6>;          // TTBR.IRGN
    RGN = ttbr<4:3>;                 // TTBR.RGN
    SH = ttbr<1>:ttbr<5>;            // TTBR.S:TTBR.NOS
    l1descaddr.memattrs = WalkAttrDecode(SH, RGN, IRGN, secondstage);

    if !HaveEL(EL2) || (IsSecure() && !IsSecureEL2Enabled()) then
        // if only 1 stage of translation
        l1descaddr2 = l1descaddr;
    else
        l1descaddr2 = AArch32.SecondStageWalk(l1descaddr, vaddress, acctype, iswrite, 4);
        // Check for a fault on the stage 2 walk
        if IsFault(l1descaddr2) then
            result.addrdesc.fault = l1descaddr2.fault;
            return result;

    // Update virtual address for abort functions
    l1descaddr2.vaddress = ZeroExtend(vaddress);

    accdesc = CreateAccessDescriptorPTW(acctype, secondstage, s2fs1walk, level);
    l1desc = _Mem[l1descaddr2, 4, accdesc];

    if SCTLR.EE == '1' then l1desc = BigEndianReverse(l1desc);

    // Process descriptor from initial lookup.
    case l1desc<1:0> of
        when '00'                                               // Fault, Reserved
            level = 1;
            result.addrdesc.fault = AArch32.TranslationFault(ipaddress, domain, level, acctype, iswrite, secondstage, s2fs1walk);
            return result;

        when '01'                                               // Large page or Small page
            domain = l1desc<8:5>;
            level = 2;
            pxn = l1desc<2>;
            NS = l1desc<3>;

            // Obtain descriptor from level 2 lookup.
            l2descaddr.paddress.address = ZeroExtend(l1desc<31:10>:vaddress<19:12>:'00');
            l2descaddr.paddress.NS = if IsSecure() then '0' else '1';
            l2descaddr.memattrs = l1descaddr.memattrs;

            if !HaveEL(EL2) || (IsSecure() && !IsSecureEL2Enabled()) then
                // if only 1 stage of translation
                l2descaddr2 = l2descaddr;
            else
                l2descaddr2 = AArch32.SecondStageWalk(l2descaddr, vaddress, acctype, iswrite, 4);
                // Check for a fault on the stage 2 walk
                if IsFault(l2descaddr2) then
                    result.addrdesc.fault = l2descaddr2.fault;
                    return result;

            // Update virtual address for abort functions
            l2descaddr2.vaddress = ZeroExtend(vaddress);

            accdesc = CreateAccessDescriptorPTW(acctype, secondstage, s2fs1walk, level);
            l2desc = _Mem[l2descaddr2, 4, accdesc];

            if SCTLR.EE == '1' then l2desc = BigEndianReverse(l2desc);

            // Process descriptor from level 2 lookup.
            if l2desc<1:0> == '00' then
                result.addrdesc.fault = AArch32.TranslationFault(ipaddress, domain, level, acctype, iswrite, secondstage, s2fs1walk);
                return result;

            nG = l2desc<11>;
            S = l2desc<10>;
            ap = l2desc<9,5:4>;

            if SCTLR.AFE == '1' && l2desc<4> == '0' then
                // Armv8 VMSAv8-32 does not support hardware management of the Access flag.
                result.addrdesc.fault = AArch32.AccessFlagFault(ipaddress, domain, level, acctype, iswrite, secondstage, s2fs1walk);
                return result;

            if l2desc<1> == '0' then                            // Large page
                xn = l2desc<15>;
                tex = l2desc<14:12>;
                c = l2desc<3>;
                b = l2desc<2>;
                blocksize = 64;
                outputaddress = ZeroExtend(l2desc<31:16>:vaddress<15:0>);
            else                                                // Small page
                tex = l2desc<8:6>;
                c = l2desc<3>;
                b = l2desc<2>;
                xn = l2desc<0>;
                blocksize = 4;
                outputaddress = ZeroExtend(l2desc<31:12>:vaddress<11:0>);

        when '1x'                                               // Section or Supersection
            NS = l1desc<19>;
            nG = l1desc<17>;
            S = l1desc<16>;
            ap = l1desc<15,11:10>;
            tex = l1desc<14:12>;
            xn = l1desc<4>;
            c = l1desc<3>;
            b = l1desc<2>;
            pxn = l1desc<0>;
            level = 1;

            if SCTLR.AFE == '1' && l1desc<10> == '0' then
                // Armv8 VMSAv8-32 does not support hardware management of the Access flag.
                result.addrdesc.fault = AArch32.AccessFlagFault(ipaddress, domain, level, acctype, iswrite, secondstage, s2fs1walk);
                return result;

            if l1desc<18> == '0' then                           // Section
                domain = l1desc<8:5>;
                blocksize = 1024;
                outputaddress = ZeroExtend(l1desc<31:20>:vaddress<19:0>);
            else                                                // Supersection
                domain = '0000';
                blocksize = 16384;
                outputaddress = l1desc<8:5>:l1desc<23:20>:l1desc<31:24>:vaddress<23:0>;

    // Decode the TEX, C, B and S bits to produce the TLBRecord's memory attributes
    if SCTLR.TRE == '0' then
        if RemapRegsHaveResetValues() then
            result.addrdesc.memattrs = AArch32.DefaultTEXDecode(tex, c, b, S, acctype);
        else
            result.addrdesc.memattrs = MemoryAttributes IMPLEMENTATION_DEFINED;
    else
        result.addrdesc.memattrs = AArch32.RemappedTEXDecode(tex, c, b, S, acctype);

    // Set the rest of the TLBRecord, try to add it to the TLB, and return it.
    result.perms.ap = ap;
    result.perms.xn = xn;
    result.perms.pxn = pxn;
    result.nG = nG;
    result.domain = domain;
    result.level = level;
    result.blocksize = blocksize;
    result.addrdesc.paddress.address = ZeroExtend(outputaddress);
    result.addrdesc.paddress.NS = if IsSecure() then NS else '1';
    result.addrdesc.fault = AArch32.NoFault();

    return result;
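The level 1 lookup address is a pure bit-field concatenation, ttbr<31:14-n>:vaddress<31-n:20>:'00', where n is TTBCR.N. A small C sketch of that concatenation (illustrative only, not Arm's code; the function name is ours):

#include <stdint.h>
#include <stdio.h>

/* n is TTBCR.N (0..7). Returns the physical address of the 4-byte level 1 descriptor. */
static uint32_t l1_descriptor_address(uint32_t ttbr, uint32_t va, unsigned n)
{
    uint32_t base  = ttbr & ~((1u << (14 - n)) - 1);   /* ttbr<31:14-n>, low bits cleared */
    uint32_t index = (va << n) >> (20 + n);            /* va<31-n:20> */
    return base | (index << 2);                        /* ...:'00' - 4 bytes per entry */
}

int main(void)
{
    /* With N == 2, VAs whose top 2 bits are zero are translated via TTBR0. */
    printf("0x%08x\n", l1_descriptor_address(0x80004000u, 0x00345678u, 2));  /* 0x8000400c */
    return 0;
}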

Library pseudocode for aarch32/translation/walk/RemapRegsHaveResetValues

boolean RemapRegsHaveResetValues();

Library pseudocode for aarch64/debug/breakpoint/AArch64.BreakpointMatch

// AArch64.BreakpointMatch()
// =========================
// Breakpoint matching in an AArch64 translation regime.

boolean AArch64.BreakpointMatch(integer n, bits(64) vaddress, AccType acctype, integer size)
    assert !ELUsingAArch32(S1TranslationRegime());
    assert n <= UInt(ID_AA64DFR0_EL1.BRPs);

    enabled = DBGBCR_EL1[n].E == '1';
    ispriv = PSTATE.EL != EL0;
    linked = DBGBCR_EL1[n].BT == '0x01';
    isbreakpnt = TRUE;
    linked_to = FALSE;

    state_match = AArch64.StateMatch(DBGBCR_EL1[n].SSC, DBGBCR_EL1[n].HMC, DBGBCR_EL1[n].PMC,
                                     linked, DBGBCR_EL1[n].LBN, isbreakpnt, acctype, ispriv);
    value_match = AArch64.BreakpointValueMatch(n, vaddress, linked_to);

    if HaveAnyAArch32() && size == 4 then
        // Check second halfword
        // If the breakpoint address and BAS of an Address breakpoint match the address of the
        // second halfword of an instruction, but not the address of the first halfword, it is
        // CONSTRAINED UNPREDICTABLE whether or not this breakpoint generates a Breakpoint debug
        // event.
        match_i = AArch64.BreakpointValueMatch(n, vaddress + 2, linked_to);
        if !value_match && match_i then
            value_match = ConstrainUnpredictableBool(Unpredictable_BPMATCHHALF);

    if vaddress<1> == '1' && DBGBCR_EL1[n].BAS == '1111' then
        // The above notwithstanding, if DBGBCR_EL1[n].BAS == '1111', then it is CONSTRAINED
        // UNPREDICTABLE whether or not a Breakpoint debug event is generated for an instruction
        // at the address DBGBVR_EL1[n]+2.
        if value_match then
            value_match = ConstrainUnpredictableBool(Unpredictable_BPMATCHHALF);

    match = value_match && state_match && enabled;

    return match;

Library pseudocode for aarch64/debug/breakpoint/AArch64.BreakpointValueMatch

// AArch64.BreakpointValueMatch()
// ==============================

boolean AArch64.BreakpointValueMatch(integer n, bits(64) vaddress, boolean linked_to)
    // "n" is the identity of the breakpoint unit to match against.
    // "vaddress" is the current instruction address, ignored if linked_to is TRUE and for Context
    //   matching breakpoints.
    // "linked_to" is TRUE if this is a call from StateMatch for linking.

    // If a non-existent breakpoint then it is CONSTRAINED UNPREDICTABLE whether this gives
    // no match or the breakpoint is mapped to another UNKNOWN implemented breakpoint.
    if n > UInt(ID_AA64DFR0_EL1.BRPs) then
        (c, n) = ConstrainUnpredictableInteger(0, UInt(ID_AA64DFR0_EL1.BRPs), Unpredictable_BPNOTIMPL);
        assert c IN {Constraint_DISABLED, Constraint_UNKNOWN};
        if c == Constraint_DISABLED then return FALSE;

    // If this breakpoint is not enabled, it cannot generate a match. (This could also happen on a
    // call from StateMatch for linking).
    if DBGBCR_EL1[n].E == '0' then return FALSE;

    context_aware = (n >= UInt(ID_AA64DFR0_EL1.BRPs) - UInt(ID_AA64DFR0_EL1.CTX_CMPs));

    // If BT is set to a reserved type, behaves either as disabled or as a not-reserved type.
    type = DBGBCR_EL1[n].BT;

    if ((type IN {'011x','11xx'} && !HaveVirtHostExt()) ||     // Context matching
          type == '010x' ||                                    // Reserved
          (type != '0x0x' && !context_aware) ||                // Context matching
          (type == '1xxx' && !HaveEL(EL2))) then               // EL2 extension
        (c, type) = ConstrainUnpredictableBits(Unpredictable_RESBPTYPE);
        assert c IN {Constraint_DISABLED, Constraint_UNKNOWN};
        if c == Constraint_DISABLED then return FALSE;
        // Otherwise the value returned by ConstrainUnpredictableBits must be a not-reserved value

    // Determine what to compare against.
    match_addr = (type == '0x0x');
    match_vmid = (type == '10xx');
    match_cid  = (type == '001x');
    match_cid1 = (type IN { '101x', 'x11x'});
    match_cid2 = (type == '11xx');
    linked     = (type == 'xxx1');

    // If this is a call from StateMatch, return FALSE if the breakpoint is not programmed for a
    // VMID and/or context ID match, or if not context-aware. The above assertions mean that the
    // code can just test for match_addr == TRUE to confirm all these things.
    if linked_to && (!linked || match_addr) then return FALSE;

    // If called from BreakpointMatch return FALSE for Linked context ID and/or VMID matches.
    if !linked_to && linked && !match_addr then return FALSE;

    // Do the comparison.
    if match_addr then
        byte = UInt(vaddress<1:0>);
        if HaveAnyAArch32() then
            // T32 instructions can be executed at EL0 in an AArch64 translation regime.
            assert byte IN {0,2};                   // "vaddress" is halfword aligned
            byte_select_match = (DBGBCR_EL1[n].BAS<byte> == '1');
        else
            assert byte == 0;                       // "vaddress" is word aligned
            byte_select_match = TRUE;               // DBGBCR_EL1[n].BAS<byte> is RES1
        top = AddrTop(vaddress, TRUE, PSTATE.EL);
        BVR_match = vaddress<top:2> == DBGBVR_EL1[n]<top:2> && byte_select_match;
    elsif match_cid then
        if IsInHost() then
            BVR_match = (CONTEXTIDR_EL2 == DBGBVR_EL1[n]<31:0>);
        else
            BVR_match = (PSTATE.EL IN {EL0,EL1} && CONTEXTIDR_EL1 == DBGBVR_EL1[n]<31:0>);
    elsif match_cid1 then
        BVR_match = (PSTATE.EL IN {EL0,EL1} && !IsInHost() && CONTEXTIDR_EL1 == DBGBVR_EL1[n]<31:0>);
    if match_vmid then
        if !Have16bitVMID() || VTCR_EL2.VS == '0' then
            vmid = ZeroExtend(VTTBR_EL2.VMID<7:0>, 16);
            bvr_vmid = ZeroExtend(DBGBVR_EL1[n]<39:32>, 16);
        else
            vmid = VTTBR_EL2.VMID;
            bvr_vmid = DBGBVR_EL1[n]<47:32>;
        BXVR_match = (EL2Enabled() && PSTATE.EL IN {EL0,EL1} && !IsInHost() && vmid == bvr_vmid);
    elsif match_cid2 then
        BXVR_match = (!IsSecure() && HaveVirtHostExt() && DBGBVR_EL1[n]<63:32> == CONTEXTIDR_EL2);

    bvr_match_valid = (match_addr || match_cid || match_cid1);
    bxvr_match_valid = (match_vmid || match_cid2);

    match = (!bxvr_match_valid || BXVR_match) && (!bvr_match_valid || BVR_match);

    return match;
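For address matching, the comparison keeps VA bits <top:2> and then applies the BAS byte select within the word, so a halfword-aligned T32 address can still match on byte 2. A C sketch of just that comparison (illustrative only; the name and flat parameter list are ours):

#include <stdbool.h>
#include <stdint.h>

/* top is the most significant VA bit that takes part in the compare (from AddrTop());
 * bas is the DBGBCR.BAS byte-select field. */
static bool bp_address_match(uint64_t va, uint64_t bvr, unsigned top, uint8_t bas)
{
    unsigned byte = (unsigned)(va & 3);                 /* 0 or 2: halfword-aligned T32 */
    bool byte_select_match = ((bas >> byte) & 1) != 0;  /* DBGBCR.BAS<byte> */
    uint64_t mask = (top >= 63) ? ~0ull : ((1ull << (top + 1)) - 1);
    return (((va & mask) >> 2) == ((bvr & mask) >> 2)) && byte_select_match;
}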

Library pseudocode for aarch64/debug/breakpoint/AArch64.StateMatch

// AArch64.StateMatch()
// ====================
// Determine whether a breakpoint or watchpoint is enabled in the current mode and state.

boolean AArch64.StateMatch(bits(2) SSC, bit HMC, bits(2) PxC, boolean linked, bits(4) LBN,
                           boolean isbreakpnt, AccType acctype, boolean ispriv)
    // "SSC", "HMC", "PxC" are the control fields from the DBGBCR[n] or DBGWCR[n] register.
    // "linked" is TRUE if this is a linked breakpoint/watchpoint type.
    // "LBN" is the linked breakpoint number from the DBGBCR[n] or DBGWCR[n] register.
    // "isbreakpnt" is TRUE for breakpoints, FALSE for watchpoints.
    // "ispriv" is valid for watchpoints, and selects between privileged and unprivileged accesses.

    // If parameters are set to a reserved type, behaves as either disabled or a defined type
    if ((HMC:SSC:PxC) IN {'011xx','100x0','101x0','11010','11101','1111x'} ||          // Reserved
         (HMC == '0' && PxC == '00' && (!isbreakpnt || !HaveAArch32EL(EL1))) ||        // Usr/Svc/Sys
         (SSC IN {'01','10'} && !HaveEL(EL3)) ||                                       // No EL3
         (HMC:SSC != '000' && HMC:SSC != '111' && !HaveEL(EL3) && !HaveEL(EL2)) ||     // No EL3/EL2
         (HMC:SSC:PxC == '11100' && !HaveEL(EL2))) then                                // No EL2
        (c, <HMC,SSC,PxC>) = ConstrainUnpredictableBits(Unpredictable_RESBPWPCTRL);
        assert c IN {Constraint_DISABLED, Constraint_UNKNOWN};
        if c == Constraint_DISABLED then return FALSE;
        // Otherwise the value returned by ConstrainUnpredictableBits must be a not-reserved value

    EL3_match = HaveEL(EL3) && HMC == '1' && SSC<0> == '0';
    EL2_match = HaveEL(EL2) && HMC == '1';
    EL1_match = PxC<0> == '1';
    EL0_match = PxC<1> == '1';

    el = if HaveNV2Ext() && acctype == AccType_NV2REGISTER then EL2 else PSTATE.EL;
    if !ispriv && !isbreakpnt then
        priv_match = EL0_match;
    else
        case el of
            when EL3  priv_match = EL3_match;
            when EL2  priv_match = EL2_match;
            when EL1  priv_match = EL1_match;
            when EL0  priv_match = EL0_match;

    case SSC of
        when '00'  security_state_match = TRUE;           // Both
        when '01'  security_state_match = !IsSecure();    // Non-secure only
        when '10'  security_state_match = IsSecure();     // Secure only
        when '11'  security_state_match = TRUE;           // Both

    if linked then
        // "LBN" must be an enabled context-aware breakpoint unit. If it is not context-aware then
        // it is CONSTRAINED UNPREDICTABLE whether this gives no match, or LBN is mapped to some
        // UNKNOWN breakpoint that is context-aware.
        lbn = UInt(LBN);
        first_ctx_cmp = (UInt(ID_AA64DFR0_EL1.BRPs) - UInt(ID_AA64DFR0_EL1.CTX_CMPs));
        last_ctx_cmp = UInt(ID_AA64DFR0_EL1.BRPs);
        if (lbn < first_ctx_cmp || lbn > last_ctx_cmp) then
            (c, lbn) = ConstrainUnpredictableInteger(first_ctx_cmp, last_ctx_cmp, Unpredictable_BPNOTCTXCMP);
            assert c IN {Constraint_DISABLED, Constraint_NONE, Constraint_UNKNOWN};
            case c of
                when Constraint_DISABLED  return FALSE;        // Disabled
                when Constraint_NONE      linked = FALSE;      // No linking
                // Otherwise ConstrainUnpredictableInteger returned a context-aware breakpoint

    if linked then
        vaddress = bits(64) UNKNOWN;
        linked_to = TRUE;
        linked_match = AArch64.BreakpointValueMatch(lbn, vaddress, linked_to);

    return priv_match && security_state_match && (!linked || linked_match);
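The SSC decode in the case statement reduces to a small predicate on the current Security state. As a C sketch (illustrative only, not Arm's code):

#include <stdbool.h>

static bool security_state_match(unsigned ssc, bool is_secure)
{
    switch (ssc & 3) {
    case 1:  return !is_secure;   /* SSC == '01': Non-secure only */
    case 2:  return is_secure;    /* SSC == '10': Secure only */
    default: return true;         /* SSC == '00' or '11': both states */
    }
}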

Library pseudocode for aarch64/debug/enables/AArch64.GenerateDebugExceptions

// AArch64.GenerateDebugExceptions()
// =================================

boolean AArch64.GenerateDebugExceptions()
    return AArch64.GenerateDebugExceptionsFrom(PSTATE.EL, IsSecure(), PSTATE.D);

Library pseudocode for aarch64/debug/enables/AArch64.GenerateDebugExceptionsFrom

// AArch64.GenerateDebugExceptionsFrom()
// =====================================

boolean AArch64.GenerateDebugExceptionsFrom(bits(2) from, boolean secure, bit mask)
    if OSLSR_EL1.OSLK == '1' || DoubleLockStatus() || Halted() then
        return FALSE;

    route_to_el2 = HaveEL(EL2) && !secure && (HCR_EL2.TGE == '1' || MDCR_EL2.TDE == '1');
    target = (if route_to_el2 then EL2 else EL1);

    enabled = !HaveEL(EL3) || !secure || MDCR_EL3.SDD == '0';

    if from == target then
        enabled = enabled && MDSCR_EL1.KDE == '1' && mask == '0';
    else
        enabled = enabled && UInt(target) > UInt(from);

    return enabled;
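Ignoring the lock and halt checks, the core of the routine is: debug exceptions are always enabled from an Exception level lower than the target, and from the target level itself only when MDSCR_EL1.KDE is set and the PSTATE.D mask is clear. A C sketch of that final step (illustrative only; names are ours):

#include <stdbool.h>

/* from_el/target_el are numeric Exception levels (0..3). */
static bool gen_debug_exceptions_from(int from_el, int target_el, bool kde, bool pstate_d)
{
    if (from_el == target_el)
        return kde && !pstate_d;   /* self-hosted debug at the target EL itself */
    return target_el > from_el;    /* always enabled from a lower EL */
}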

Library pseudocode for aarch64/debug/pmu/AArch64.CheckForPMUOverflow

// AArch64.CheckForPMUOverflow()
// =============================
// Signal Performance Monitors overflow IRQ and CTI overflow events

boolean AArch64.CheckForPMUOverflow()
    pmuirq = PMCR_EL0.E == '1' && PMINTENSET_EL1<31> == '1' && PMOVSSET_EL0<31> == '1';
    for n = 0 to UInt(PMCR_EL0.N) - 1
        if HaveEL(EL2) then
            E = (if n < UInt(MDCR_EL2.HPMN) then PMCR_EL0.E else MDCR_EL2.HPME);
        else
            E = PMCR_EL0.E;
        if E == '1' && PMINTENSET_EL1<n> == '1' && PMOVSSET_EL0<n> == '1' then pmuirq = TRUE;

    SetInterruptRequestLevel(InterruptID_PMUIRQ, if pmuirq then HIGH else LOW);

    CTI_SetEventLevel(CrossTriggerIn_PMUOverflow, if pmuirq then HIGH else LOW);

    // The request remains set until the condition is cleared. (For example, an interrupt handler
    // or cross-triggered event handler clears the overflow status flag by writing to PMOVSCLR_EL0.)

    return pmuirq;
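Because each counter contributes "enabled AND interrupt-enabled AND overflow-set", the IRQ condition reduces to a bitwise AND across the three registers once the per-counter enables are collapsed into a mask. A C sketch (illustrative only; the enabled_mask parameter, which folds in PMCR_EL0.E, MDCR_EL2.HPME and the MDCR_EL2.HPMN partitioning, is ours):

#include <stdbool.h>
#include <stdint.h>

/* enabled_mask has one bit per counter; bit 31 is the cycle counter. */
static bool pmu_irq_pending(uint32_t pmintenset, uint32_t pmovsset, uint32_t enabled_mask)
{
    return (pmintenset & pmovsset & enabled_mask) != 0;
}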

Library pseudocode for aarch64/debug/pmu/AArch64.CountEvents

// AArch64.CountEvents()
// =====================
// Return TRUE if counter "n" should count its event. For the cycle counter, n == 31.

boolean AArch64.CountEvents(integer n)
    assert n == 31 || n < UInt(PMCR_EL0.N);

    // Event counting is disabled in Debug state
    debug = Halted();

    // In Non-secure state, some counters are reserved for EL2
    if HaveEL(EL2) then
        E = if n < UInt(MDCR_EL2.HPMN) || n == 31 then PMCR_EL0.E else MDCR_EL2.HPME;
    else
        E = PMCR_EL0.E;
    enabled = E == '1' && PMCNTENSET_EL0<n> == '1';

    if !IsSecure() then
        // Event counting in Non-secure state is allowed unless all of:
        // * EL2 and the HPMD Extension are implemented
        // * Executing at EL2
        // * PMNx is not reserved for EL2
        // * MDCR_EL2.HPMD == 1
        if HaveHPMDExt() && PSTATE.EL == EL2 && (n < UInt(MDCR_EL2.HPMN) || n == 31) then
            prohibited = (MDCR_EL2.HPMD == '1');
        else
            prohibited = FALSE;
    else
        // Event counting in Secure state is prohibited unless any one of:
        // * EL3 is not implemented
        // * EL3 is using AArch64 and MDCR_EL3.SPME == 1
        prohibited = HaveEL(EL3) && MDCR_EL3.SPME == '0';

    // The IMPLEMENTATION DEFINED authentication interface might override software controls
    if prohibited && !HaveNoSecurePMUDisableOverride() then
        prohibited = !ExternalSecureNoninvasiveDebugEnabled();

    // For the cycle counter, PMCR_EL0.DP enables counting when otherwise prohibited
    if prohibited && n == 31 then prohibited = (PMCR_EL0.DP == '1');

    // Event counting can be filtered by the {P, U, NSK, NSU, NSH, M} bits
    filter = if n == 31 then PMCCFILTR else PMEVTYPER[n];

    P = filter<31>;
    U = filter<30>;
    NSK = if HaveEL(EL3) then filter<29> else '0';
    NSU = if HaveEL(EL3) then filter<28> else '0';
    NSH = if HaveEL(EL2) then filter<27> else '0';
    M = if HaveEL(EL3) then filter<26> else '0';

    case PSTATE.EL of
        when EL0  filtered = if IsSecure() then U == '1' else U != NSU;
        when EL1  filtered = if IsSecure() then P == '1' else P != NSK;
        when EL2  filtered = (NSH == '0');
        when EL3  filtered = (M != P);

    return !debug && enabled && !prohibited && !filtered;
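The {P, U, NSK, NSU} filtering reads as: in Secure state a set bit suppresses counting, while in Non-secure state counting is suppressed when the two bits differ. A C sketch of the EL0/EL1 cases (illustrative only, not Arm's code):

#include <stdbool.h>

/* PMEVTYPER fields: P (bit 31) and NSK (bit 29) control EL1; U (bit 30) and NSU (bit 28) control EL0. */
static bool filtered_at_el1(bool secure, bool P, bool NSK)
{
    return secure ? P : (P != NSK);
}

static bool filtered_at_el0(bool secure, bool U, bool NSU)
{
    return secure ? U : (U != NSU);
}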

Library pseudocode for aarch64/debug/statisticalprofiling/CheckProfilingBufferAccess

// CheckProfilingBufferAccess()
// ============================

SysRegAccess CheckProfilingBufferAccess()
    if !HaveStatisticalProfiling() || PSTATE.EL == EL0 || UsingAArch32() then
        return SysRegAccess_UNDEFINED;

    if EL2Enabled() && PSTATE.EL == EL1 && MDCR_EL2.E2PB<0> != '1' then
        return SysRegAccess_TrapToEL2;

    if HaveEL(EL3) && PSTATE.EL != EL3 && MDCR_EL3.NSPB != SCR_EL3.NS:'1' then
        return SysRegAccess_TrapToEL3;

    return SysRegAccess_OK;

Library pseudocode for aarch64/debug/statisticalprofiling/CheckStatisticalProfilingAccess

// CheckStatisticalProfilingAccess()
// =================================

SysRegAccess CheckStatisticalProfilingAccess()
    if !HaveStatisticalProfiling() || PSTATE.EL == EL0 || UsingAArch32() then
        return SysRegAccess_UNDEFINED;

    if EL2Enabled() && PSTATE.EL == EL1 && MDCR_EL2.TPMS == '1' then
        return SysRegAccess_TrapToEL2;

    if HaveEL(EL3) && PSTATE.EL != EL3 && MDCR_EL3.NSPB != SCR_EL3.NS:'1' then
        return SysRegAccess_TrapToEL3;

    return SysRegAccess_OK;

Library pseudocode for aarch64/debug/statisticalprofiling/CollectContextIDR1

// CollectContextIDR1()
// ====================

boolean CollectContextIDR1()
    if !StatisticalProfilingEnabled() then return FALSE;
    if PSTATE.EL == EL2 then return FALSE;
    if EL2Enabled() && HCR_EL2.TGE == '1' then return FALSE;
    return PMSCR_EL1.CX == '1';

Library pseudocode for aarch64/debug/statisticalprofiling/CollectContextIDR2

// CollectContextIDR2()
// ====================

boolean CollectContextIDR2()
    if !StatisticalProfilingEnabled() then return FALSE;
    if !EL2Enabled() then return FALSE;
    return PMSCR_EL2.CX == '1';

Library pseudocode for aarch64/debug/statisticalprofiling/CollectPhysicalAddress

// CollectPhysicalAddress()
// ========================

boolean CollectPhysicalAddress()
    if !StatisticalProfilingEnabled() then return FALSE;
    (secure, el) = ProfilingBufferOwner();
    if !secure && HaveEL(EL2) then
        return PMSCR_EL2.PA == '1' && (el == EL2 || PMSCR_EL1.PA == '1');
    else
        return PMSCR_EL1.PA == '1';

Library pseudocode for aarch64/debug/statisticalprofiling/CollectRecord

// CollectRecord()
// ===============

boolean CollectRecord(bits(64) events, integer total_latency, OpType optype)
    assert StatisticalProfilingEnabled();

    if PMSFCR_EL1.FE == '1' then
        e = events<63:48,31:24,15:12,7,5,3,1>;
        m = PMSEVFR_EL1<63:48,31:24,15:12,7,5,3,1>;
        // Check for UNPREDICTABLE case
        if IsZero(PMSEVFR_EL1) && ConstrainUnpredictableBool(Unpredictable_ZEROPMSEVFR) then
            return FALSE;
        if !IsZero(NOT(e) AND m) then return FALSE;

    if PMSFCR_EL1.FT == '1' then
        // Check for UNPREDICTABLE case
        if IsZero(PMSFCR_EL1.<B,LD,ST>) && ConstrainUnpredictableBool(Unpredictable_NOOPTYPES) then
            return FALSE;
        case optype of
            when OpType_Branch      if PMSFCR_EL1.B == '0' then return FALSE;
            when OpType_Load        if PMSFCR_EL1.LD == '0' then return FALSE;
            when OpType_Store       if PMSFCR_EL1.ST == '0' then return FALSE;
            when OpType_LoadAtomic  if PMSFCR_EL1.<LD,ST> == '00' then return FALSE;
            otherwise return FALSE;

    if PMSFCR_EL1.FL == '1' then
        // Check for UNPREDICTABLE case
        if IsZero(PMSLATFR_EL1.MINLAT) && ConstrainUnpredictableBool(Unpredictable_ZEROMINLATENCY) then
            return FALSE;
        if total_latency < UInt(PMSLATFR_EL1.MINLAT) then return FALSE;

    return TRUE;
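The event filter keeps a record only when every event selected in PMSEVFR_EL1 is present in the record's event bits, which is the !IsZero(NOT(e) AND m) test above. A C sketch restricted to the architecturally defined bit positions (illustrative only; names are ours):

#include <stdbool.h>
#include <stdint.h>

/* checked_bits marks the PMSEVFR_EL1 bit positions that are architecturally defined
 * (<63:48,31:24,15:12,7,5,3,1> in the pseudocode). */
static bool passes_event_filter(uint64_t events, uint64_t pmsevfr, uint64_t checked_bits)
{
    /* Keep the record only if every requested event bit is present: NOT(e) AND m == 0. */
    return (~events & pmsevfr & checked_bits) == 0;
}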

Library pseudocode for aarch64/debug/statisticalprofiling/CollectTimeStamp

// CollectTimeStamp()
// ==================

TimeStamp CollectTimeStamp()
    if !StatisticalProfilingEnabled() then return TimeStamp_None;
    (secure, el) = ProfilingBufferOwner();
    if el == EL2 then
        if PMSCR_EL2.TS == '0' then return TimeStamp_None;
    else
        if PMSCR_EL1.TS == '0' then return TimeStamp_None;
    if EL2Enabled() then
        pct = PMSCR_EL2.PCT == '1' && (el == EL2 || PMSCR_EL1.PCT == '1');
    else
        pct = PMSCR_EL1.PCT == '1';
    return (if pct then TimeStamp_Physical else TimeStamp_Virtual);

Library pseudocode for aarch64/debug/statisticalprofiling/OpType

enumeration OpType {
    OpType_Load,        // Any memory-read operation other than atomics, compare-and-swap, and swap
    OpType_Store,       // Any memory-write operation, including atomics without return
    OpType_LoadAtomic,  // Atomics with return, compare-and-swap and swap
    OpType_Branch,      // Software write to the PC
    OpType_Other        // Any other class of operation
};

Library pseudocode for aarch64/debug/statisticalprofiling/ProfilingBufferEnabled

// ProfilingBufferEnabled()
// ========================

boolean ProfilingBufferEnabled()
    if !HaveStatisticalProfiling() then return FALSE;
    (secure, el) = ProfilingBufferOwner();
    non_secure_bit = if secure then '0' else '1';
    return (!ELUsingAArch32(el) && non_secure_bit == SCR_EL3.NS &&
            PMBLIMITR_EL1.E == '1' && PMBSR_EL1.S == '0');

Library pseudocode for aarch64/debug/statisticalprofiling/ProfilingBufferOwner

// ProfilingBufferOwner()
// ======================

(boolean, bits(2)) ProfilingBufferOwner()
    secure = if HaveEL(EL3) then (MDCR_EL3.NSPB<1> == '0') else IsSecure();
    el = if !secure && HaveEL(EL2) && MDCR_EL2.E2PB == '00' then EL2 else EL1;
    return (secure, el);

Library pseudocode for aarch64/debug/statisticalprofiling/ProfilingSynchronizationBarrier

// Barrier to ensure that all existing profiling data has been formatted, and profiling buffer
// addresses have been translated such that writes to the profiling buffer have been initiated.
// A following DSB completes when writes to the profiling buffer have completed.

ProfilingSynchronizationBarrier();

Library pseudocode for aarch64/debug/statisticalprofiling/StatisticalProfilingEnabled

// StatisticalProfilingEnabled()
// =============================

boolean StatisticalProfilingEnabled()
    if !HaveStatisticalProfiling() || UsingAArch32() || !ProfilingBufferEnabled() then
        return FALSE;

    in_host = EL2Enabled() && HCR_EL2.TGE == '1';
    (secure, el) = ProfilingBufferOwner();
    if UInt(el) < UInt(PSTATE.EL) || secure != IsSecure() || (in_host && el == EL1) then
        return FALSE;

    case PSTATE.EL of
        when EL3  Unreachable();
        when EL2  spe_bit = PMSCR_EL2.E2SPE;
        when EL1  spe_bit = PMSCR_EL1.E1SPE;
        when EL0  spe_bit = (if in_host then PMSCR_EL2.E0HSPE else PMSCR_EL1.E0SPE);

    return spe_bit == '1';

Library pseudocode for aarch64/debug/statisticalprofiling/SysRegAccess

enumeration SysRegAccess { SysRegAccess_OK,
                           SysRegAccess_UNDEFINED,
                           SysRegAccess_TrapToEL1,
                           SysRegAccess_TrapToEL2,
                           SysRegAccess_TrapToEL3 };

Library pseudocode for aarch64/debug/statisticalprofiling/TimeStamp

enumeration TimeStamp {
    TimeStamp_None,      // No timestamp
    TimeStamp_Virtual,   // Physical counter value minus CNTVOFF_EL2
    TimeStamp_Physical   // Physical counter value with no offset
};

Library pseudocode for aarch64/debug/takeexceptiondbg/AArch64.TakeExceptionInDebugState

// AArch64.TakeExceptionInDebugState()
// ===================================
// Take an exception in Debug state to an Exception Level using AArch64.

AArch64.TakeExceptionInDebugState(bits(2) target_el, ExceptionRecord exception)
    assert HaveEL(target_el) && !ELUsingAArch32(target_el) && UInt(target_el) >= UInt(PSTATE.EL);

    sync_errors = HaveIESB() && SCTLR[].IESB == '1';
    if HaveDoubleFaultExt() then
        sync_errors = sync_errors || (SCR_EL3.EA == '1' && SCR_EL3.NMEA == '1' && PSTATE.EL == EL3);
    // SCTLR[].IESB might be ignored in Debug state.
    if !ConstrainUnpredictableBool(Unpredictable_IESBinDebug) then
        sync_errors = FALSE;
    if sync_errors && InsertIESBBeforeException(target_el) then
        SynchronizeErrors();

    SynchronizeContext();

    // If coming from AArch32 state, the top parts of the X[] registers might be set to zero
    from_32 = UsingAArch32();
    if from_32 then AArch64.MaybeZeroRegisterUppers();
    MaybeZeroSVEUppers(target_el);

    AArch64.ReportException(exception, target_el);

    PSTATE.EL = target_el;
    PSTATE.nRW = '0';
    PSTATE.SP = '1';

    SPSR[] = bits(32) UNKNOWN;
    ELR[] = bits(64) UNKNOWN;

    PSTATE.SSBS = bits(1) UNKNOWN;
    // PSTATE.{SS,D,A,I,F} are not observable and ignored in Debug state, so behave as if UNKNOWN.
    PSTATE.<SS,D,A,I,F> = bits(5) UNKNOWN;
    DLR_EL0 = bits(64) UNKNOWN;
    DSPSR_EL0 = bits(32) UNKNOWN;
    if HaveBTIExt() then
        if exception.type == Exception_Breakpoint then
            DSPSR_EL0<11:10> = PSTATE.BTYPE;
        else
            DSPSR_EL0<11:10> = if ConstrainUnpredictableBool(Unpredictable_ZEROBTYPE) then '00' else PSTATE.BTYPE;
        PSTATE.BTYPE = '00';
    PSTATE.IL = '0';
    if from_32 then                             // Coming from AArch32
        PSTATE.IT = '00000000';
        PSTATE.T = '0';                         // PSTATE.J is RES0
    if HavePANExt() && (PSTATE.EL == EL1 || (PSTATE.EL == EL2 && ELIsInHost(EL0))) && SCTLR[].SPAN == '0' then
        PSTATE.PAN = '1';
    if HaveMTEExt() then PSTATE.TCO = '1';

    EDSCR.ERR = '1';
    UpdateEDSCRFields();                        // Update EDSCR processor state flags.

    if sync_errors then
        SynchronizeErrors();

    EndOfInstruction();

Library pseudocode for aarch64/debug/watchpoint/AArch64.WatchpointByteMatch

// AArch64.WatchpointByteMatch()
// =============================

boolean AArch64.WatchpointByteMatch(integer n, AccType acctype, bits(64) vaddress)
    el = if HaveNV2Ext() && acctype == AccType_NV2REGISTER then EL2 else PSTATE.EL;
    top = AddrTop(vaddress, FALSE, el);
    bottom = if DBGWVR_EL1[n]<2> == '1' then 2 else 3;              // Word or doubleword
    byte_select_match = (DBGWCR_EL1[n].BAS<UInt(vaddress<bottom-1:0>)> != '0');
    mask = UInt(DBGWCR_EL1[n].MASK);

    // If DBGWCR_EL1[n].MASK is a non-zero value and DBGWCR_EL1[n].BAS is not set to '11111111', or
    // DBGWCR_EL1[n].BAS specifies a non-contiguous set of bytes, behavior is CONSTRAINED
    // UNPREDICTABLE.
    if mask > 0 && !IsOnes(DBGWCR_EL1[n].BAS) then
        byte_select_match = ConstrainUnpredictableBool(Unpredictable_WPMASKANDBAS);
    else
        LSB = (DBGWCR_EL1[n].BAS AND NOT(DBGWCR_EL1[n].BAS - 1));
        MSB = (DBGWCR_EL1[n].BAS + LSB);
        if !IsZero(MSB AND (MSB - 1)) then                          // Not contiguous
            byte_select_match = ConstrainUnpredictableBool(Unpredictable_WPBASCONTIGUOUS);
            bottom = 3;                                             // For the whole doubleword

    // If the address mask is set to a reserved value, the behavior is CONSTRAINED UNPREDICTABLE.
    if mask > 0 && mask <= 2 then
        (c, mask) = ConstrainUnpredictableInteger(3, 31, Unpredictable_RESWPMASK);
        assert c IN {Constraint_DISABLED, Constraint_NONE, Constraint_UNKNOWN};
        case c of
            when Constraint_DISABLED  return FALSE;                 // Disabled
            when Constraint_NONE      mask = 0;                     // No masking
            // Otherwise the value returned by ConstrainUnpredictableInteger is a not-reserved value

    if mask > bottom then
        WVR_match = (vaddress<top:mask> == DBGWVR_EL1[n]<top:mask>);
        // If masked bits of DBGWVR_EL1[n] are not zero, the behavior is CONSTRAINED UNPREDICTABLE.
        if WVR_match && !IsZero(DBGWVR_EL1[n]<mask-1:bottom>) then
            WVR_match = ConstrainUnpredictableBool(Unpredictable_WPMASKEDBITS);
    else
        WVR_match = vaddress<top:bottom> == DBGWVR_EL1[n]<top:bottom>;

    return WVR_match && byte_select_match;
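The LSB/MSB manipulation above is a standard bit trick: for a contiguous run of set bytes, BAS + (BAS AND -BAS) has at most one bit set. A self-contained C sketch with a few checks (illustrative only, not Arm's code; assumes at least one byte is selected):

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

static bool bas_is_contiguous(uint8_t bas)
{
    uint8_t lsb = bas & (uint8_t)(-bas);   /* lowest selected byte */
    uint8_t msb = (uint8_t)(bas + lsb);    /* one past the highest, if contiguous */
    return (msb & (uint8_t)(msb - 1)) == 0;
}

int main(void)
{
    assert(bas_is_contiguous(0x0F));   /* bytes 0-3 */
    assert(bas_is_contiguous(0x30));   /* bytes 4-5 */
    assert(!bas_is_contiguous(0x0A));  /* bytes 1 and 3: gap */
    return 0;
}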

Library pseudocode for aarch64/debug/watchpoint/AArch64.WatchpointMatch

// AArch64.WatchpointMatch()
// =========================
// Watchpoint matching in an AArch64 translation regime.

boolean AArch64.WatchpointMatch(integer n, bits(64) vaddress, integer size, boolean ispriv,
                                AccType acctype, boolean iswrite)
    assert !ELUsingAArch32(S1TranslationRegime());
    assert n <= UInt(ID_AA64DFR0_EL1.WRPs);

    // "ispriv" is FALSE for LDTR/STTR instructions executed at EL1 and all
    // load/stores at EL0, TRUE for all other load/stores. "iswrite" is TRUE for stores, FALSE for
    // loads.
    enabled = DBGWCR_EL1[n].E == '1';
    linked = DBGWCR_EL1[n].WT == '1';
    isbreakpnt = FALSE;

    state_match = AArch64.StateMatch(DBGWCR_EL1[n].SSC, DBGWCR_EL1[n].HMC, DBGWCR_EL1[n].PAC,
                                     linked, DBGWCR_EL1[n].LBN, isbreakpnt, acctype, ispriv);

    ls_match = (DBGWCR_EL1[n].LSC<(if iswrite then 1 else 0)> == '1');

    value_match = FALSE;
    for byte = 0 to size - 1
        value_match = value_match || AArch64.WatchpointByteMatch(n, acctype, vaddress + byte);

    return value_match && state_match && ls_match && enabled;

Library pseudocode for aarch64/exceptions/aborts/AArch64.Abort

// AArch64.Abort()
// ===============
// Abort and Debug exception handling in an AArch64 translation regime.

AArch64.Abort(bits(64) vaddress, FaultRecord fault)

    if IsDebugException(fault) then
        if fault.acctype == AccType_IFETCH then
            if UsingAArch32() && fault.debugmoe == DebugException_VectorCatch then
                AArch64.VectorCatchException(fault);
            else
                AArch64.BreakpointException(fault);
        else
            AArch64.WatchpointException(vaddress, fault);
    elsif fault.acctype == AccType_IFETCH then
        AArch64.InstructionAbort(vaddress, fault);
    else
        AArch64.DataAbort(vaddress, fault);

Library pseudocode for aarch64/exceptions/aborts/AArch64.AbortSyndrome

// AArch64.AbortSyndrome()
// =======================
// Creates an exception syndrome record for Abort and Watchpoint exceptions
// from an AArch64 translation regime.

ExceptionRecord AArch64.AbortSyndrome(Exception type, FaultRecord fault, bits(64) vaddress)
    exception = ExceptionSyndrome(type);

    d_side = type IN {Exception_DataAbort, Exception_NV2DataAbort, Exception_Watchpoint};

    exception.syndrome = AArch64.FaultSyndrome(d_side, fault);
    exception.vaddress = ZeroExtend(vaddress);
    if IPAValid(fault) then
        exception.ipavalid = TRUE;
        exception.NS = fault.ipaddress.NS;
        exception.ipaddress = fault.ipaddress.address;
    else
        exception.ipavalid = FALSE;

    return exception;

Library pseudocode for aarch64/exceptions/aborts/AArch64.CheckPCAlignment

// AArch64.CheckPCAlignment()
// ==========================

AArch64.CheckPCAlignment()

    bits(64) pc = ThisInstrAddr();
    if pc<1:0> != '00' then
        AArch64.PCAlignmentFault();

Library pseudocode for aarch64/exceptions/aborts/AArch64.DataAbort

// AArch64.DataAbort()
// ===================

AArch64.DataAbort(bits(64) vaddress, FaultRecord fault)
    route_to_el3 = HaveEL(EL3) && SCR_EL3.EA == '1' && IsExternalAbort(fault);
    route_to_el2 = (EL2Enabled() && PSTATE.EL IN {EL0, EL1} &&
                    (HCR_EL2.TGE == '1' ||
                     (HaveRASExt() && HCR_EL2.TEA == '1' && IsExternalAbort(fault)) ||
                     (HaveNV2Ext() && fault.acctype == AccType_NV2REGISTER) ||
                     IsSecondStage(fault)));

    bits(64) preferred_exception_return = ThisInstrAddr();
    if (HaveDoubleFaultExt() && (PSTATE.EL == EL3 || route_to_el3) &&
        IsExternalAbort(fault) && SCR_EL3.EASE == '1') then
        vect_offset = 0x180;
    else
        vect_offset = 0x0;

    if HaveNV2Ext() && fault.acctype == AccType_NV2REGISTER then
        exception = AArch64.AbortSyndrome(Exception_NV2DataAbort, fault, vaddress);
    else
        exception = AArch64.AbortSyndrome(Exception_DataAbort, fault, vaddress);

    if PSTATE.EL == EL3 || route_to_el3 then
        AArch64.TakeException(EL3, exception, preferred_exception_return, vect_offset);
    elsif PSTATE.EL == EL2 || route_to_el2 then
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);
    else
        AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);
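Both abort handlers share the same routing shape: EL3 wins when the abort is external and SCR_EL3.EA is set, then EL2 for the TGE/TEA/stage 2 cases, and EL1 otherwise. A C sketch of that priority (illustrative only; names are ours):

#include <stdbool.h>
#include <stdio.h>

typedef enum { EL0, EL1, EL2, EL3 } EL;

/* Priority: the current EL sticks at EL3/EL2; otherwise explicit routing flags win. */
static EL abort_target(EL cur, bool route_to_el3, bool route_to_el2)
{
    if (cur == EL3 || route_to_el3) return EL3;
    if (cur == EL2 || route_to_el2) return EL2;
    return EL1;
}

int main(void)
{
    printf("%d\n", abort_target(EL0, false, true));   /* 2: routed to EL2 */
    return 0;
}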

Library pseudocode for aarch64/exceptions/aborts/AArch64.InstructionAbort

// AArch64.InstructionAbort()
// ==========================

AArch64.InstructionAbort(bits(64) vaddress, FaultRecord fault)
    // External aborts on instruction fetch must be taken synchronously
    if HaveDoubleFaultExt() then assert fault.type != Fault_AsyncExternal;

    route_to_el3 = HaveEL(EL3) && SCR_EL3.EA == '1' && IsExternalAbort(fault);
    route_to_el2 = (EL2Enabled() && PSTATE.EL IN {EL0, EL1} &&
                    (HCR_EL2.TGE == '1' || IsSecondStage(fault) ||
                     (HaveRASExt() && HCR_EL2.TEA == '1' && IsExternalAbort(fault))));

    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x0;

    exception = AArch64.AbortSyndrome(Exception_InstructionAbort, fault, vaddress);

    if PSTATE.EL == EL3 || route_to_el3 then
        AArch64.TakeException(EL3, exception, preferred_exception_return, vect_offset);
    elsif PSTATE.EL == EL2 || route_to_el2 then
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);
    else
        AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/aborts/AArch64.PCAlignmentFault

// AArch64.PCAlignmentFault()
// ==========================
// Called on unaligned program counter in AArch64 state.

AArch64.PCAlignmentFault()

    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x0;

    exception = ExceptionSyndrome(Exception_PCAlignment);
    exception.vaddress = ThisInstrAddr();

    if UInt(PSTATE.EL) > UInt(EL1) then
        AArch64.TakeException(PSTATE.EL, exception, preferred_exception_return, vect_offset);
    elsif EL2Enabled() && HCR_EL2.TGE == '1' then
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);
    else
        AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/aborts/AArch64.SPAlignmentFault

// AArch64.SPAlignmentFault()
// ==========================
// Called on an unaligned stack pointer in AArch64 state.

AArch64.SPAlignmentFault()

    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x0;

    exception = ExceptionSyndrome(Exception_SPAlignment);

    if UInt(PSTATE.EL) > UInt(EL1) then
        AArch64.TakeException(PSTATE.EL, exception, preferred_exception_return, vect_offset);
    elsif EL2Enabled() && HCR_EL2.TGE == '1' then
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);
    else
        AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/aborts/BranchTargetException

// BranchTargetException
// =====================
// Raise branch target exception.

AArch64.BranchTargetException(bits(52) vaddress)

    route_to_el2 = EL2Enabled() && PSTATE.EL == EL0 && HCR_EL2.TGE == '1';

    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x0;

    exception = ExceptionSyndrome(Exception_BranchTarget);
    exception.syndrome<1:0> = PSTATE.BTYPE;
    exception.syndrome<24:2> = Zeros();               // RES0

    if UInt(PSTATE.EL) > UInt(EL1) then
        AArch64.TakeException(PSTATE.EL, exception, preferred_exception_return, vect_offset);
    elsif route_to_el2 then
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);
    else
        AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/aborts/EffectiveTCF

// EffectiveTCF()
// ==============
// Returns the TCF field applied to Tag Check Fails in the given Exception Level

bits(2) EffectiveTCF(bits(2) el)
    if el == EL3 then
        tcf = SCTLR_EL3.TCF;
    elsif el == EL2 then
        tcf = SCTLR_EL2.TCF;
    elsif el == EL1 then
        tcf = SCTLR_EL1.TCF;
    elsif el == EL0 && HCR_EL2.<E2H,TGE> == '11' then
        tcf = SCTLR_EL2.TCF0;
    elsif el == EL0 && HCR_EL2.<E2H,TGE> != '11' then
        tcf = SCTLR_EL1.TCF0;

    return tcf;
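Only the EL0 case needs care: it uses the EL2 copy of TCF0 exactly when EL2 is the host, i.e. HCR_EL2.<E2H,TGE> == '11'. A C sketch (illustrative only, not Arm's code):

#include <stdint.h>

/* tcf[el] models SCTLR_ELx.TCF for el in 1..3; the two tcf0 values model
 * SCTLR_EL1.TCF0 and SCTLR_EL2.TCF0. */
static uint8_t effective_tcf(int el, const uint8_t tcf[4],
                             uint8_t tcf0_el1, uint8_t tcf0_el2,
                             int e2h, int tge)
{
    if (el != 0)
        return tcf[el];                            /* SCTLR_ELx.TCF */
    return (e2h && tge) ? tcf0_el2 : tcf0_el1;     /* EL0: host's copy only when E2H:TGE == '11' */
}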

Library pseudocode for aarch64/exceptions/aborts/RecordTagCheckFail

// RecordTagCheckFail()
// ====================
// Records a tag fail exception into the appropriate TFSR_ELx

RecordTagCheckFail(bits(2) el, bit ttbr)
    if el == EL3 then
        assert ttbr == '0';
        TFSR_EL3.TF0 = '1';
    elsif el == EL2 then
        if ttbr == '0' then
            TFSR_EL2.TF0 = '1';
        else
            TFSR_EL2.TF1 = '1';
    elsif el == EL1 then
        if ttbr == '0' then
            TFSR_EL1.TF0 = '1';
        else
            TFSR_EL1.TF1 = '1';
    elsif el == EL0 then
        if ttbr == '0' then
            TFSRE0_EL1.TF0 = '1';
        else
            TFSRE0_EL1.TF1 = '1';

Library pseudocode for aarch64/exceptions/aborts/TagCheckFail

// TagCheckFail()
// ==============
// Handle a tag check fail condition

TagCheckFail(bits(64) vaddress, boolean iswrite)
    bits(2) tcf = EffectiveTCF(PSTATE.EL);
    if tcf == '01' then
        TagCheckFault(vaddress, iswrite);
    elsif tcf == '10' then
        RecordTagCheckFail(PSTATE.EL, vaddress<55>);
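The defined TCF encodings give three behaviours: '00' ignores the failure, '01' takes a synchronous Data Abort, and '10' accumulates the failure asynchronously into a TFSR bit selected by VA bit 55. A C sketch with stub handlers (illustrative only; the stubs merely stand in for the pseudocode functions above):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static void tag_check_fault(uint64_t va, bool iswrite)      /* stub for TagCheckFault() */
{
    printf("sync tag check fault at 0x%llx (%s)\n",
           (unsigned long long)va, iswrite ? "write" : "read");
}

static void record_tag_check_fail(int el, int ttbr)         /* stub for RecordTagCheckFail() */
{
    printf("async: set TFSR at EL%d, TF%d\n", el, ttbr);
}

static void tag_check_fail(uint8_t tcf, int el, uint64_t va, bool iswrite)
{
    if (tcf == 1)                                           /* TCF == '01': synchronous */
        tag_check_fault(va, iswrite);
    else if (tcf == 2)                                      /* TCF == '10': asynchronous */
        record_tag_check_fail(el, (int)((va >> 55) & 1));
    /* TCF == '00' ignores the failure; '11' is reserved. */
}

int main(void)
{
    tag_check_fail(2, 1, 0x0080000000001000ull, true);      /* VA bit 55 set -> TTBR1 */
    return 0;
}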

Library pseudocode for aarch64/exceptions/aborts/TagCheckFault

// TagCheckFault()
// ===============
// Raise a tag check fail exception.

TagCheckFault(bits(64) va, boolean write)
    bits(2) target_el;
    bits(64) preferred_exception_return = ThisInstrAddr();
    integer vect_offset = 0x0;

    if PSTATE.EL == EL0 then
        target_el = if HCR_EL2.TGE == '0' then EL1 else EL2;
    else
        target_el = PSTATE.EL;

    exception = ExceptionSyndrome(Exception_DataAbort);
    exception.syndrome<5:0> = '010001';
    if write then
        exception.syndrome<6> = '1';
    exception.vaddress = va;

    AArch64.TakeException(target_el, exception, preferred_exception_return, vect_offset);
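The syndrome assembled above is plain bit packing: ISS<5:0> = '010001' (the synchronous tag check fault status code) plus the WnR bit. An illustrative sketch:

#include <stdbool.h>
#include <stdint.h>

/* Builds the ISS written by TagCheckFault(): DFSC in ISS<5:0>,
   write-not-read in ISS<6>. Other ISS fields are left zero here. */
static uint32_t tag_check_fault_iss(bool is_write)
{
    uint32_t iss = 0x11u;            /* syndrome<5:0> = '010001' */
    if (is_write)
        iss |= 1u << 6;              /* syndrome<6> = WnR */
    return iss;
}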

Library pseudocode for aarch64/exceptions/asynch/AArch64.TakePhysicalFIQException

// AArch64.TakePhysicalFIQException()
// ==================================

AArch64.TakePhysicalFIQException()
    route_to_el3 = HaveEL(EL3) && SCR_EL3.FIQ == '1';
    route_to_el2 = (EL2Enabled() && PSTATE.EL IN {EL0, EL1} &&
                    (HCR_EL2.TGE == '1' || HCR_EL2.FMO == '1'));
    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x100;

    exception = ExceptionSyndrome(Exception_FIQ);

    if route_to_el3 then
        AArch64.TakeException(EL3, exception, preferred_exception_return, vect_offset);
    elsif PSTATE.EL == EL2 || route_to_el2 then
        assert PSTATE.EL != EL3;
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);
    else
        assert PSTATE.EL IN {EL0, EL1};
        AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/asynch/AArch64.TakePhysicalIRQException

// AArch64.TakePhysicalIRQException()
// ==================================
// Take an enabled physical IRQ exception.

AArch64.TakePhysicalIRQException()
    route_to_el3 = HaveEL(EL3) && SCR_EL3.IRQ == '1';
    route_to_el2 = (EL2Enabled() && PSTATE.EL IN {EL0, EL1} &&
                    (HCR_EL2.TGE == '1' || HCR_EL2.IMO == '1'));
    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x80;

    exception = ExceptionSyndrome(Exception_IRQ);

    if route_to_el3 then
        AArch64.TakeException(EL3, exception, preferred_exception_return, vect_offset);
    elsif PSTATE.EL == EL2 || route_to_el2 then
        assert PSTATE.EL != EL3;
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);
    else
        assert PSTATE.EL IN {EL0, EL1};
        AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);
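The routing logic for the physical interrupt cases above has the same shape for IRQ and FIQ; a condensed C sketch of the IRQ case (parameters mirror the control bits the pseudocode reads, and EL3 callers only reach this point when SCR_EL3.IRQ routes the interrupt to EL3, matching the asserts):

#include <stdbool.h>

/* Returns the Exception level a physical IRQ is taken to. */
static unsigned irq_target_el(unsigned el, bool have_el3, bool scr_irq,
                              bool el2_enabled, bool hcr_tge, bool hcr_imo)
{
    bool route_to_el3 = have_el3 && scr_irq;
    bool route_to_el2 = el2_enabled && (el == 0 || el == 1) &&
                        (hcr_tge || hcr_imo);
    if (route_to_el3)            return 3;
    if (el == 2 || route_to_el2) return 2;
    return 1;
}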

Library pseudocode for aarch64/exceptions/asynch/AArch64.TakePhysicalSErrorException

// AArch64.TakePhysicalSErrorException()
// =====================================

AArch64.TakePhysicalSErrorException(boolean impdef_syndrome, bits(24) syndrome)
    route_to_el3 = HaveEL(EL3) && SCR_EL3.EA == '1';
    route_to_el2 = (EL2Enabled() && PSTATE.EL IN {EL0, EL1} &&
                    (HCR_EL2.TGE == '1' || (!IsInHost() && HCR_EL2.AMO == '1')));
    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x180;

    exception = ExceptionSyndrome(Exception_SError);
    exception.syndrome<24>   = if impdef_syndrome then '1' else '0';
    exception.syndrome<23:0> = syndrome;

    ClearPendingPhysicalSError();

    if PSTATE.EL == EL3 || route_to_el3 then
        AArch64.TakeException(EL3, exception, preferred_exception_return, vect_offset);
    elsif PSTATE.EL == EL2 || route_to_el2 then
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);
    else
        AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/asynch/AArch64.TakeVirtualFIQException

// AArch64.TakeVirtualFIQException()
// =================================

AArch64.TakeVirtualFIQException()
    assert EL2Enabled() && PSTATE.EL IN {EL0, EL1};
    assert HCR_EL2.TGE == '0' && HCR_EL2.FMO == '1';  // Virtual FIQ enabled if TGE==0 and FMO==1

    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x100;

    exception = ExceptionSyndrome(Exception_FIQ);

    AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/asynch/AArch64.TakeVirtualIRQException

// AArch64.TakeVirtualIRQException()
// =================================

AArch64.TakeVirtualIRQException()
    assert EL2Enabled() && PSTATE.EL IN {EL0, EL1};
    assert HCR_EL2.TGE == '0' && HCR_EL2.IMO == '1';  // Virtual IRQ enabled if TGE==0 and IMO==1

    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x80;

    exception = ExceptionSyndrome(Exception_IRQ);

    AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/asynch/AArch64.TakeVirtualSErrorException

// AArch64.TakeVirtualSErrorException()
// ====================================

AArch64.TakeVirtualSErrorException(boolean impdef_syndrome, bits(24) syndrome)
    assert EL2Enabled() && PSTATE.EL IN {EL0, EL1};
    assert HCR_EL2.TGE == '0' && HCR_EL2.AMO == '1';  // Virtual SError enabled if TGE==0 and AMO==1

    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x180;

    exception = ExceptionSyndrome(Exception_SError);
    if HaveRASExt() then
        exception.syndrome<24>   = VSESR_EL2.IDS;
        exception.syndrome<23:0> = VSESR_EL2.ISS;
    else
        exception.syndrome<24> = if impdef_syndrome then '1' else '0';
        if impdef_syndrome then exception.syndrome<23:0> = syndrome;

    ClearPendingVirtualSError();
    AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/debug/AArch64.BreakpointException

// AArch64.BreakpointException()
// =============================

AArch64.BreakpointException(FaultRecord fault)
    assert PSTATE.EL != EL3;

    route_to_el2 = (EL2Enabled() && PSTATE.EL IN {EL0, EL1} &&
                    (HCR_EL2.TGE == '1' || MDCR_EL2.TDE == '1'));

    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x0;
    vaddress = bits(64) UNKNOWN;
    exception = AArch64.AbortSyndrome(Exception_Breakpoint, fault, vaddress);

    if PSTATE.EL == EL2 || route_to_el2 then
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);
    else
        AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/debug/AArch64.SoftwareBreakpoint

// AArch64.SoftwareBreakpoint()
// ============================

AArch64.SoftwareBreakpoint(bits(16) immediate)
    route_to_el2 = (EL2Enabled() && PSTATE.EL IN {EL0, EL1} &&
                    (HCR_EL2.TGE == '1' || MDCR_EL2.TDE == '1'));

    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x0;

    exception = ExceptionSyndrome(Exception_SoftwareBreakpoint);
    exception.syndrome<15:0> = immediate;

    if UInt(PSTATE.EL) > UInt(EL1) then
        AArch64.TakeException(PSTATE.EL, exception, preferred_exception_return, vect_offset);
    elsif route_to_el2 then
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);
    else
        AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/debug/AArch64.SoftwareStepException

// AArch64.SoftwareStepException()
// ===============================

AArch64.SoftwareStepException()
    assert PSTATE.EL != EL3;

    route_to_el2 = (EL2Enabled() && PSTATE.EL IN {EL0, EL1} &&
                    (HCR_EL2.TGE == '1' || MDCR_EL2.TDE == '1'));

    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x0;

    exception = ExceptionSyndrome(Exception_SoftwareStep);
    if SoftwareStep_DidNotStep() then
        exception.syndrome<24> = '0';
    else
        exception.syndrome<24> = '1';
        exception.syndrome<6>  = if SoftwareStep_SteppedEX() then '1' else '0';

    if PSTATE.EL == EL2 || route_to_el2 then
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);
    else
        AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/debug/AArch64.VectorCatchException

// AArch64.VectorCatchException()
// ==============================
// Vector Catch taken from EL0 or EL1 to EL2. This can only be called when debug exceptions are
// being routed to EL2, as Vector Catch is a legacy debug event.

AArch64.VectorCatchException(FaultRecord fault)
    assert PSTATE.EL != EL2;
    assert EL2Enabled() && (HCR_EL2.TGE == '1' || MDCR_EL2.TDE == '1');

    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x0;

    vaddress = bits(64) UNKNOWN;
    exception = AArch64.AbortSyndrome(Exception_VectorCatch, fault, vaddress);

    AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/debug/AArch64.WatchpointException

// AArch64.WatchpointException()
// =============================

AArch64.WatchpointException(bits(64) vaddress, FaultRecord fault)
    assert PSTATE.EL != EL3;

    route_to_el2 = (EL2Enabled() && PSTATE.EL IN {EL0, EL1} &&
                    (HCR_EL2.TGE == '1' || MDCR_EL2.TDE == '1'));

    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x0;

    exception = AArch64.AbortSyndrome(Exception_Watchpoint, fault, vaddress);

    if PSTATE.EL == EL2 || route_to_el2 then
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);
    else
        AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/exceptions/AArch64.ExceptionClass

// AArch64.ExceptionClass()
// ========================
// Returns the Exception Class and Instruction Length fields to be reported in ESR

(integer,bit) AArch64.ExceptionClass(Exception type, bits(2) target_el)

    il = if ThisInstrLength() == 32 then '1' else '0';
    from_32 = UsingAArch32();
    assert from_32 || il == '1';                  // AArch64 instructions always 32-bit

    case type of
        when Exception_Uncategorized       ec = 0x00; il = '1';
        when Exception_WFxTrap             ec = 0x01;
        when Exception_CP15RTTrap          ec = 0x03; assert from_32;
        when Exception_CP15RRTTrap         ec = 0x04; assert from_32;
        when Exception_CP14RTTrap          ec = 0x05; assert from_32;
        when Exception_CP14DTTrap          ec = 0x06; assert from_32;
        when Exception_AdvSIMDFPAccessTrap ec = 0x07;
        when Exception_FPIDTrap            ec = 0x08;
        when Exception_PACTrap             ec = 0x09;
        when Exception_CP14RRTTrap         ec = 0x0C; assert from_32;
        when Exception_BranchTarget        ec = 0x0D;
        when Exception_IllegalState        ec = 0x0E; il = '1';
        when Exception_SupervisorCall      ec = 0x11;
        when Exception_HypervisorCall      ec = 0x12;
        when Exception_MonitorCall         ec = 0x13;
        when Exception_SystemRegisterTrap  ec = 0x18; assert !from_32;
        when Exception_SVEAccessTrap       ec = 0x19; assert !from_32;
        when Exception_ERetTrap            ec = 0x1A;
        when Exception_InstructionAbort    ec = 0x20; il = '1';
        when Exception_PCAlignment         ec = 0x22; il = '1';
        when Exception_DataAbort           ec = 0x24;
        when Exception_NV2DataAbort        ec = 0x25;
        when Exception_SPAlignment         ec = 0x26; il = '1'; assert !from_32;
        when Exception_FPTrappedException  ec = 0x28;
        when Exception_SError              ec = 0x2F; il = '1';
        when Exception_Breakpoint          ec = 0x30; il = '1';
        when Exception_SoftwareStep        ec = 0x32; il = '1';
        when Exception_Watchpoint          ec = 0x34; il = '1';
        when Exception_SoftwareBreakpoint  ec = 0x38;
        when Exception_VectorCatch         ec = 0x3A; il = '1'; assert from_32;
        otherwise                          Unreachable();

    if ec IN {0x20,0x24,0x30,0x32,0x34} && target_el == PSTATE.EL then
        ec = ec + 1;

    if ec IN {0x11,0x12,0x13,0x28,0x38} && !from_32 then
        ec = ec + 4;

    return (ec,il);
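The two EC adjustments at the end are worth calling out: aborts and hardware debug events that do not change Exception level report the odd EC value, and the exception classes shared between AArch32 and AArch64 (SVC/HVC/SMC, trapped FP, software breakpoint) add 4 when taken from AArch64 state. A sketch of just those fix-ups:

#include <stdbool.h>

/* Mirrors the final EC fix-ups in AArch64.ExceptionClass(). */
static unsigned adjust_ec(unsigned ec, bool taken_to_same_el, bool from_aarch32)
{
    switch (ec) {
    case 0x20: case 0x24:                 /* Instruction/Data Abort */
    case 0x30: case 0x32: case 0x34:      /* Breakpoint, Software Step, Watchpoint */
        if (taken_to_same_el) ec += 1;    /* e.g. 0x24 -> 0x25 for a same-EL Data Abort */
        break;
    case 0x11: case 0x12: case 0x13:      /* SVC, HVC, SMC */
    case 0x28: case 0x38:                 /* Trapped FP, software breakpoint */
        if (!from_aarch32) ec += 4;       /* e.g. 0x11 (AArch32 SVC) -> 0x15 (AArch64 SVC) */
        break;
    }
    return ec;
}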

Library pseudocode for aarch64/exceptions/exceptions/AArch64.ReportException

// AArch64.ReportException()
// =========================
// Report syndrome information for exception taken to AArch64 state.

AArch64.ReportException(ExceptionRecord exception, bits(2) target_el)

    Exception type = exception.type;
    (ec,il) = AArch64.ExceptionClass(type, target_el);
    iss = exception.syndrome;

    // IL is not valid for Data Abort exceptions without valid instruction syndrome information
    if ec IN {0x24,0x25} && iss<24> == '0' then il = '1';

    ESR[target_el] = ec<5:0>:il:iss;

    if type IN {Exception_InstructionAbort, Exception_PCAlignment, Exception_DataAbort,
                Exception_NV2DataAbort, Exception_Watchpoint} then
        FAR[target_el] = exception.vaddress;
    else
        FAR[target_el] = bits(64) UNKNOWN;

    if target_el == EL2 then
        if exception.ipavalid then
            HPFAR_EL2<43:4> = exception.ipaddress<51:12>;
            if HaveSecureEL2Ext() then
                if IsSecureEL2Enabled() then
                    HPFAR_EL2.NS = exception.NS;
                else
                    HPFAR_EL2.NS = '0';
        else
            HPFAR_EL2<43:4> = bits(40) UNKNOWN;

    return;
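The ESR_ELx write above is a plain concatenation, EC:IL:ISS; written out in C:

#include <stdint.h>

/* ESR_ELx layout used by AArch64.ReportException():
   EC in bits [31:26], IL in bit [25], ISS in bits [24:0]. */
static uint32_t pack_esr(unsigned ec, unsigned il, uint32_t iss)
{
    return ((uint32_t)(ec & 0x3Fu) << 26) |
           ((uint32_t)(il & 0x1u)  << 25) |
           (iss & 0x1FFFFFFu);
}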

Library pseudocode for aarch64/exceptions/exceptions/AArch64.ResetControlRegisters

// Resets System registers and memory-mapped control registers that have architecturally-defined
// reset values to those values.
AArch64.ResetControlRegisters(boolean cold_reset);

Library pseudocode for aarch64/exceptions/exceptions/AArch64.TakeReset

// AArch64.TakeReset()
// ===================
// Reset into AArch64 state

AArch64.TakeReset(boolean cold_reset)
    assert !HighestELUsingAArch32();

    // Enter the highest implemented Exception level in AArch64 state
    PSTATE.nRW = '0';
    if HaveEL(EL3) then
        PSTATE.EL = EL3;
    elsif HaveEL(EL2) then
        PSTATE.EL = EL2;
    else
        PSTATE.EL = EL1;

    // Reset the system registers and other system components
    AArch64.ResetControlRegisters(cold_reset);

    // Reset all other PSTATE fields
    PSTATE.SP = '1';              // Select stack pointer
    PSTATE.<D,A,I,F> = '1111';    // All asynchronous exceptions masked
    PSTATE.SS = '0';              // Clear software step bit
    PSTATE.DIT = '0';             // PSTATE.DIT is reset to 0 when resetting into AArch64
    PSTATE.IL = '0';              // Clear Illegal Execution state bit

    // All registers, bits and fields not reset by the above pseudocode or by the BranchTo() call
    // below are UNKNOWN bitstrings after reset. In particular, the return information registers
    // ELR_ELx and SPSR_ELx have UNKNOWN values, so that it
    // is impossible to return from a reset in an architecturally defined way.
    AArch64.ResetGeneralRegisters();
    AArch64.ResetSIMDFPRegisters();
    AArch64.ResetSpecialRegisters();
    ResetExternalDebugRegisters(cold_reset);

    bits(64) rv;                  // IMPLEMENTATION DEFINED reset vector

    if HaveEL(EL3) then
        rv = RVBAR_EL3;
    elsif HaveEL(EL2) then
        rv = RVBAR_EL2;
    else
        rv = RVBAR_EL1;

    // The reset vector must be correctly aligned
    assert IsZero(rv<63:PAMax()>) && IsZero(rv<1:0>);

    BranchTo(rv, BranchType_RESET);

Library pseudocode for aarch64/exceptions/ieeefp/AArch64.FPTrappedException

// AArch64.FPTrappedException()
// ============================

AArch64.FPTrappedException(boolean is_ase, integer element, bits(8) accumulated_exceptions)
    exception = ExceptionSyndrome(Exception_FPTrappedException);
    if is_ase then
        if boolean IMPLEMENTATION_DEFINED "vector instructions set TFV to 1" then
            exception.syndrome<23> = '1';                              // TFV
        else
            exception.syndrome<23> = '0';                              // TFV
    else
        exception.syndrome<23> = '1';                                  // TFV
    exception.syndrome<10:8> = bits(3) UNKNOWN;                        // VECITR
    if exception.syndrome<23> == '1' then
        exception.syndrome<7,4:0> = accumulated_exceptions<7,4:0>;     // IDF,IXF,UFF,OFF,DZF,IOF
    else
        exception.syndrome<7,4:0> = bits(6) UNKNOWN;

    route_to_el2 = EL2Enabled() && HCR_EL2.TGE == '1';

    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x0;

    if UInt(PSTATE.EL) > UInt(EL1) then
        AArch64.TakeException(PSTATE.EL, exception, preferred_exception_return, vect_offset);
    elsif route_to_el2 then
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);
    else
        AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/syscalls/AArch64.CallHypervisor

// AArch64.CallHypervisor()
// ========================
// Performs an HVC call

AArch64.CallHypervisor(bits(16) immediate)
    assert HaveEL(EL2);

    if UsingAArch32() then AArch32.ITAdvance();
    SSAdvance();
    bits(64) preferred_exception_return = NextInstrAddr();
    vect_offset = 0x0;

    exception = ExceptionSyndrome(Exception_HypervisorCall);
    exception.syndrome<15:0> = immediate;

    if PSTATE.EL == EL3 then
        AArch64.TakeException(EL3, exception, preferred_exception_return, vect_offset);
    else
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/syscalls/AArch64.CallSecureMonitor

// AArch64.CallSecureMonitor()
// ===========================

AArch64.CallSecureMonitor(bits(16) immediate)
    assert HaveEL(EL3) && !ELUsingAArch32(EL3);

    if UsingAArch32() then AArch32.ITAdvance();
    SSAdvance();
    bits(64) preferred_exception_return = NextInstrAddr();
    vect_offset = 0x0;

    exception = ExceptionSyndrome(Exception_MonitorCall);
    exception.syndrome<15:0> = immediate;

    AArch64.TakeException(EL3, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/syscalls/AArch64.CallSupervisor

// AArch64.CallSupervisor()
// ========================
// Calls the Supervisor

AArch64.CallSupervisor(bits(16) immediate)
    if UsingAArch32() then AArch32.ITAdvance();
    SSAdvance();
    route_to_el2 = EL2Enabled() && PSTATE.EL == EL0 && HCR_EL2.TGE == '1';

    bits(64) preferred_exception_return = NextInstrAddr();
    vect_offset = 0x0;

    exception = ExceptionSyndrome(Exception_SupervisorCall);
    exception.syndrome<15:0> = immediate;

    if UInt(PSTATE.EL) > UInt(EL1) then
        AArch64.TakeException(PSTATE.EL, exception, preferred_exception_return, vect_offset);
    elsif route_to_el2 then
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);
    else
        AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/takeexception/AArch64.TakeException

// AArch64.TakeException()
// =======================
// Take an exception to an Exception Level using AArch64.

AArch64.TakeException(bits(2) target_el, ExceptionRecord exception, bits(64) preferred_exception_return, integer vect_offset)
    assert HaveEL(target_el) && !ELUsingAArch32(target_el) && UInt(target_el) >= UInt(PSTATE.EL);

    sync_errors = HaveIESB() && SCTLR[].IESB == '1';
    if HaveDoubleFaultExt() then
        sync_errors = sync_errors || (SCR_EL3.EA == '1' && SCR_EL3.NMEA == '1' && PSTATE.EL == EL3);
    if sync_errors && InsertIESBBeforeException(target_el) then
        SynchronizeErrors();
        iesb_req = FALSE;
        sync_errors = FALSE;
        TakeUnmaskedPhysicalSErrorInterrupts(iesb_req);

    SynchronizeContext();

    // If coming from AArch32 state, the top parts of the X[] registers might be set to zero
    from_32 = UsingAArch32();
    if from_32 then AArch64.MaybeZeroRegisterUppers();
    MaybeZeroSVEUppers(target_el);

    if UInt(target_el) > UInt(PSTATE.EL) then
        boolean lower_32;
        if target_el == EL3 then
            if EL2Enabled() then
                lower_32 = ELUsingAArch32(EL2);
            else
                lower_32 = ELUsingAArch32(EL1);
        elsif IsInHost() && PSTATE.EL == EL0 && target_el == EL2 then
            lower_32 = ELUsingAArch32(EL0);
        else
            lower_32 = ELUsingAArch32(target_el - 1);
        vect_offset = vect_offset + (if lower_32 then 0x600 else 0x400);
    elsif PSTATE.SP == '1' then
        vect_offset = vect_offset + 0x200;

    spsr = GetPSRFromPSTATE();

    if PSTATE.EL == EL1 && target_el == EL1 && HaveNVExt() && EL2Enabled() && HCR_EL2.<NV,NV1> == '10' then
        spsr<3:2> = '10';

    if HaveUAOExt() then PSTATE.UAO = '0';
    if !(exception.type IN {Exception_IRQ, Exception_FIQ}) then
        AArch64.ReportException(exception, target_el);

    PSTATE.EL = target_el;
    PSTATE.nRW = '0';
    PSTATE.SP = '1';

    if HaveBTIExt() then
        if exception.type IN {Exception_SError, Exception_IRQ, Exception_FIQ,
                              Exception_SoftwareStep, Exception_PCAlignment,
                              Exception_InstructionAbort, Exception_SoftwareBreakpoint,
                              Exception_IllegalState, Exception_BranchTarget} then
            spsr_btype = PSTATE.BTYPE;
        else
            spsr_btype = if ConstrainUnpredictableBool(Unpredictable_ZEROBTYPE) then '00' else PSTATE.BTYPE;
        spsr<11:10> = spsr_btype;

    SPSR[] = spsr;
    ELR[] = preferred_exception_return;

    if HaveSSBSExt() then PSTATE.SSBS = SCTLR[].DSSBS;
    if HaveBTIExt() then PSTATE.BTYPE = '00';
    PSTATE.SS = '0';
    PSTATE.<D,A,I,F> = '1111';
    PSTATE.IL = '0';
    if from_32 then                             // Coming from AArch32
        PSTATE.IT = '00000000';
        PSTATE.T = '0';                         // PSTATE.J is RES0
    if HavePANExt() && (PSTATE.EL == EL1 || (PSTATE.EL == EL2 && ELIsInHost(EL0))) && SCTLR[].SPAN == '0' then
        PSTATE.PAN = '1';
    if HaveMTEExt() then PSTATE.TCO = '1';

    BranchTo(VBAR[]<63:11>:vect_offset<10:0>, BranchType_EXCEPTION);

    if sync_errors then
        SynchronizeErrors();
        iesb_req = TRUE;
        TakeUnmaskedPhysicalSErrorInterrupts(iesb_req);

    EndOfInstruction();
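The vector selection in the middle of TakeException() is compact but easy to misread; here it is as a standalone calculation (names invented for the sketch):

#include <stdbool.h>
#include <stdint.h>

/* Computes the vector table entry address. base_offset is 0x0 for most
   synchronous exceptions and 0x80/0x100/0x180 for IRQ/FIQ/SError. The table
   quadrant then depends on where the exception came from:
     same EL, SP_EL0: +0x000    lower EL, AArch64: +0x400
     same EL, SP_ELx: +0x200    lower EL, AArch32: +0x600 */
static uint64_t vector_entry(uint64_t vbar, unsigned base_offset,
                             bool to_higher_el, bool lower_uses_aarch32,
                             bool sp_elx)
{
    unsigned off = base_offset;
    if (to_higher_el)
        off += lower_uses_aarch32 ? 0x600 : 0x400;
    else if (sp_elx)
        off += 0x200;
    /* VBAR_ELx<63:11> concatenated with offset<10:0> */
    return (vbar & ~0x7FFull) | (off & 0x7FFu);
}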

Library pseudocode for aarch64/exceptions/traps/AArch64.AArch32SystemAccessTrap

// AArch64.AArch32SystemAccessTrap()
// =================================
// Trapped AArch32 system register access.

AArch64.AArch32SystemAccessTrap(bits(2) target_el, integer ec)
    assert HaveEL(target_el) && target_el != EL0 && UInt(target_el) >= UInt(PSTATE.EL);

    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x0;

    exception = AArch64.AArch32SystemAccessTrapSyndrome(ThisInstr(), ec);
    AArch64.TakeException(target_el, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/traps/AArch64.AArch32SystemAccessTrapSyndrome

// AArch64.AArch32SystemAccessTrapSyndrome()
// =========================================
// Returns the syndrome information for traps on AArch32 MCR, MCRR, MRC, MRRC, and VMRS, VMSR instructions,
// other than traps that are due to HCPTR or CPACR.

ExceptionRecord AArch64.AArch32SystemAccessTrapSyndrome(bits(32) instr, integer ec)
    ExceptionRecord exception;

    case ec of
        when 0x0  exception = ExceptionSyndrome(Exception_Uncategorized);
        when 0x3  exception = ExceptionSyndrome(Exception_CP15RTTrap);
        when 0x4  exception = ExceptionSyndrome(Exception_CP15RRTTrap);
        when 0x5  exception = ExceptionSyndrome(Exception_CP14RTTrap);
        when 0x6  exception = ExceptionSyndrome(Exception_CP14DTTrap);
        when 0x7  exception = ExceptionSyndrome(Exception_AdvSIMDFPAccessTrap);
        when 0x8  exception = ExceptionSyndrome(Exception_FPIDTrap);
        when 0xC  exception = ExceptionSyndrome(Exception_CP14RRTTrap);
        otherwise Unreachable();

    bits(20) iss = Zeros();

    if exception.type IN {Exception_FPIDTrap, Exception_CP14RTTrap, Exception_CP15RTTrap} then
        // Trapped MRC/MCR, VMRS on FPSID
        if exception.type != Exception_FPIDTrap then    // When trap is not for VMRS
            iss<19:17> = instr<7:5>;                    // opc2
            iss<16:14> = instr<23:21>;                  // opc1
            iss<13:10> = instr<19:16>;                  // CRn
            iss<4:1>   = instr<3:0>;                    // CRm
        else
            iss<19:17> = '000';
            iss<16:14> = '111';
            iss<13:10> = instr<19:16>;                  // reg
            iss<4:1>   = '0000';
        if instr<20> == '1' && instr<15:12> == '1111' then       // MRC, Rt==15
            iss<9:5> = '11111';
        elsif instr<20> == '0' && instr<15:12> == '1111' then    // MCR, Rt==15
            iss<9:5> = bits(5) UNKNOWN;
        else
            iss<9:5> = LookUpRIndex(UInt(instr<15:12>), PSTATE.M)<4:0>;
    elsif exception.type IN {Exception_CP14RRTTrap, Exception_AdvSIMDFPAccessTrap, Exception_CP15RRTTrap} then
        // Trapped MRRC/MCRR, VMRS/VMSR
        iss<19:16> = instr<7:4>;                        // opc1
        if instr<19:16> == '1111' then                  // Rt2==15
            iss<14:10> = bits(5) UNKNOWN;
        else
            iss<14:10> = LookUpRIndex(UInt(instr<19:16>), PSTATE.M)<4:0>;
        if instr<15:12> == '1111' then                  // Rt==15
            iss<9:5> = bits(5) UNKNOWN;
        else
            iss<9:5> = LookUpRIndex(UInt(instr<15:12>), PSTATE.M)<4:0>;
        iss<4:1> = instr<3:0>;                          // CRm
    elsif exception.type == Exception_CP14DTTrap then
        // Trapped LDC/STC
        iss<19:12> = instr<7:0>;                        // imm8
        iss<4>     = instr<23>;                         // U
        iss<2:1>   = instr<24,21>;                      // P,W
        if instr<19:16> == '1111' then                  // Rn==15, LDC(Literal addressing)/STC
            iss<9:5> = bits(5) UNKNOWN;
            iss<3>   = '1';
    elsif exception.type == Exception_Uncategorized then
        // Trapped for unknown reason
        iss<9:5> = LookUpRIndex(UInt(instr<19:16>), PSTATE.M)<4:0>;    // Rn
        iss<3>   = '0';

    iss<0> = instr<20>;                                 // Direction

    exception.syndrome<24:20> = ConditionSyndrome();
    exception.syndrome<19:0>  = iss;

    return exception;

Library pseudocode for aarch64/exceptions/traps/AArch64.AdvSIMDFPAccessTrap

// AArch64.AdvSIMDFPAccessTrap()
// =============================
// Trapped access to Advanced SIMD or FP registers due to CPACR[].

AArch64.AdvSIMDFPAccessTrap(bits(2) target_el)
    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x0;

    route_to_el2 = (target_el == EL1 && EL2Enabled() && HCR_EL2.TGE == '1');

    if route_to_el2 then
        exception = ExceptionSyndrome(Exception_Uncategorized);
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);
    else
        exception = ExceptionSyndrome(Exception_AdvSIMDFPAccessTrap);
        exception.syndrome<24:20> = ConditionSyndrome();
        AArch64.TakeException(target_el, exception, preferred_exception_return, vect_offset);

    return;

Library pseudocode for aarch64/exceptions/traps/AArch64.CheckCP15InstrCoarseTraps

// AArch64.CheckCP15InstrCoarseTraps()
// ===================================
// Check for coarse-grained AArch32 CP15 traps in HSTR_EL2 and HCR_EL2.

boolean AArch64.CheckCP15InstrCoarseTraps(integer CRn, integer nreg, integer CRm)

    // Check for coarse-grained Hyp traps
    if EL2Enabled() && PSTATE.EL IN {EL0, EL1} then
        // Check for MCR, MRC, MCRR and MRRC disabled by HSTR_EL2<CRn/CRm>
        major = if nreg == 1 then CRn else CRm;
        if !IsInHost() && !(major IN {4,14}) && HSTR_EL2<major> == '1' then
            return TRUE;

        // Check for MRC and MCR disabled by HCR_EL2.TIDCP
        if (HCR_EL2.TIDCP == '1' && nreg == 1 &&
            ((CRn == 9  && CRm IN {0,1,2, 5,6,7,8 }) ||
             (CRn == 10 && CRm IN {0,1,    4,   8 }) ||
             (CRn == 11 && CRm IN {0,1,2,3,4,5,6,7,8,15}))) then
            return TRUE;

    return FALSE;
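The HSTR_EL2 half of the check above indexes a trap bit by the "major" coprocessor register number; an illustrative sketch:

#include <stdbool.h>
#include <stdint.h>

/* CRn selects the trap bit for 32-bit accesses (MCR/MRC, nreg == 1) and CRm
   for 64-bit accesses (MCRR/MRRC, nreg == 2); majors 4 and 14 are never
   trapped by HSTR_EL2, and the check is skipped in the EL2 host regime. */
static bool hstr_traps(uint32_t hstr_el2, unsigned crn, unsigned crm,
                       unsigned nreg, bool in_host)
{
    unsigned major = (nreg == 1) ? crn : crm;
    if (in_host || major == 4 || major == 14)
        return false;
    return (hstr_el2 >> major) & 1u;
}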

Library pseudocode for aarch64/exceptions/traps/AArch64.CheckFPAdvSIMDEnabled

// AArch64.CheckFPAdvSIMDEnabled()
// ===============================
// Check against CPACR[]

AArch64.CheckFPAdvSIMDEnabled()
    if PSTATE.EL IN {EL0, EL1} && !IsInHost() then
        // Check if access disabled in CPACR_EL1
        case CPACR[].FPEN of
            when 'x0' disabled = TRUE;
            when '01' disabled = PSTATE.EL == EL0;
            when '11' disabled = FALSE;
        if disabled then AArch64.AdvSIMDFPAccessTrap(EL1);

    AArch64.CheckFPAdvSIMDTrap();               // Also check against CPTR_EL2 and CPTR_EL3
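The FPEN decode above ('x0' disables always, '01' disables EL0 only, '11' enables) is the same pattern CPTR_EL2.FPEN uses in the next function; as a small truth table in C:

#include <stdbool.h>

/* CPACR_EL1.FPEN: bit 0 clear means disabled regardless of EL ('x0');
   '01' traps EL0 accesses only; '11' traps nothing at this level. */
static bool fp_disabled_by_cpacr(unsigned fpen, unsigned el)
{
    if ((fpen & 1u) == 0) return true;       /* 'x0' */
    if (fpen == 1u)       return el == 0;    /* '01' */
    return false;                            /* '11' */
}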

Library pseudocode for aarch64/exceptions/traps/AArch64.CheckFPAdvSIMDTrap

// AArch64.CheckFPAdvSIMDTrap()
// ============================
// Check against CPTR_EL2 and CPTR_EL3.

AArch64.CheckFPAdvSIMDTrap()
    if PSTATE.EL IN {EL0, EL1, EL2} && EL2Enabled() then
        // Check if access disabled in CPTR_EL2
        if HaveVirtHostExt() && HCR_EL2.E2H == '1' then
            case CPTR_EL2.FPEN of
                when 'x0' disabled = !(PSTATE.EL == EL1 && HCR_EL2.TGE == '1');
                when '01' disabled = (PSTATE.EL == EL0 && HCR_EL2.TGE == '1');
                when '11' disabled = FALSE;
            if disabled then AArch64.AdvSIMDFPAccessTrap(EL2);
        else
            if CPTR_EL2.TFP == '1' then AArch64.AdvSIMDFPAccessTrap(EL2);

    if HaveEL(EL3) then
        // Check if access disabled in CPTR_EL3
        if CPTR_EL3.TFP == '1' then AArch64.AdvSIMDFPAccessTrap(EL3);

    return;

Library pseudocode for aarch64/exceptions/traps/AArch64.CheckForERetTrap

// AArch64.CheckForERetTrap()
// ==========================
// Check for trap on ERET, ERETAA, ERETAB instruction

AArch64.CheckForERetTrap(boolean eret_with_pac, boolean pac_uses_key_a)

    // Non-secure EL1 execution of ERET, ERETAA, ERETAB when HCR_EL2.NV bit is set, is trapped to EL2
    route_to_el2 = HaveNVExt() && EL2Enabled() && PSTATE.EL == EL1 && HCR_EL2.NV == '1';
    if route_to_el2 then
        ExceptionRecord exception;
        bits(64) preferred_exception_return = ThisInstrAddr();
        vect_offset = 0x0;
        exception = ExceptionSyndrome(Exception_ERetTrap);
        if !eret_with_pac then                       // ERET
            exception.syndrome<1> = '0';
            exception.syndrome<0> = '0';             // RES0
        else
            exception.syndrome<1> = '1';
            if pac_uses_key_a then                   // ERETAA
                exception.syndrome<0> = '0';
            else                                     // ERETAB
                exception.syndrome<0> = '1';
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/traps/AArch64.CheckForSMCUndefOrTrap

// AArch64.CheckForSMCUndefOrTrap()
// ================================
// Check for UNDEFINED or trap on SMC instruction

AArch64.CheckForSMCUndefOrTrap(bits(16) imm)

    route_to_el2 = EL2Enabled() && PSTATE.EL == EL1 && HCR_EL2.TSC == '1';

    if PSTATE.EL == EL0 then UNDEFINED;

    if !HaveEL(EL3) then
        if PSTATE.EL == EL1 && EL2Enabled() then
            if HaveNVExt() && HCR_EL2.NV == '1' && HCR_EL2.TSC == '1' then
                route_to_el2 = TRUE;
            else
                UNDEFINED;
        else
            UNDEFINED;
    else
        route_to_el2 = EL2Enabled() && PSTATE.EL == EL1 && HCR_EL2.TSC == '1';

    if route_to_el2 then
        bits(64) preferred_exception_return = ThisInstrAddr();
        vect_offset = 0x0;
        exception = ExceptionSyndrome(Exception_MonitorCall);
        exception.syndrome<15:0> = imm;
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/traps/AArch64.CheckForWFxTrap

// AArch64.CheckForWFxTrap()
// =========================
// Check for trap on WFE or WFI instruction

AArch64.CheckForWFxTrap(bits(2) target_el, boolean is_wfe)
    assert HaveEL(target_el);

    case target_el of
        when EL1 trap = (if is_wfe then SCTLR[].nTWE else SCTLR[].nTWI) == '0';
        when EL2 trap = (if is_wfe then HCR_EL2.TWE else HCR_EL2.TWI) == '1';
        when EL3 trap = (if is_wfe then SCR_EL3.TWE else SCR_EL3.TWI) == '1';

    if trap then
        AArch64.WFxTrap(target_el, is_wfe);

Library pseudocode for aarch64/exceptions/traps/AArch64.CheckIllegalState

// AArch64.CheckIllegalState()
// ===========================
// Check PSTATE.IL bit and generate Illegal Execution state exception if set.

AArch64.CheckIllegalState()
    if PSTATE.IL == '1' then
        route_to_el2 = EL2Enabled() && PSTATE.EL == EL0 && HCR_EL2.TGE == '1';

        bits(64) preferred_exception_return = ThisInstrAddr();
        vect_offset = 0x0;

        exception = ExceptionSyndrome(Exception_IllegalState);

        if UInt(PSTATE.EL) > UInt(EL1) then
            AArch64.TakeException(PSTATE.EL, exception, preferred_exception_return, vect_offset);
        elsif route_to_el2 then
            AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);
        else
            AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/traps/AArch64.MonitorModeTrap

// AArch64.MonitorModeTrap()
// =========================
// Trapped use of Monitor mode features in a Secure EL1 AArch32 mode

AArch64.MonitorModeTrap()
    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x0;

    exception = ExceptionSyndrome(Exception_Uncategorized);

    if IsSecureEL2Enabled() then
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);

    AArch64.TakeException(EL3, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/traps/AArch64.SystemAccessTrap

// AArch64.SystemAccessTrap()
// ==========================
// Trapped access to AArch64 system register or system instruction.

AArch64.SystemAccessTrap(bits(2) target_el, integer ec)
    assert HaveEL(target_el) && target_el != EL0 && UInt(target_el) >= UInt(PSTATE.EL);

    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x0;

    exception = AArch64.SystemAccessTrapSyndrome(ThisInstr(), ec);
    AArch64.TakeException(target_el, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/traps/AArch64.SystemAccessTrapSyndrome

// AArch64.SystemAccessTrapSyndrome()
// ==================================
// Returns the syndrome information for traps on AArch64 MSR/MRS instructions.

ExceptionRecord AArch64.SystemAccessTrapSyndrome(bits(32) instr, integer ec)
    ExceptionRecord exception;

    case ec of
        when 0x0    // Trapped access due to unknown reason.
            exception = ExceptionSyndrome(Exception_Uncategorized);
        when 0x7    // Trapped access to SVE, Advanced SIMD&FP system register.
            exception = ExceptionSyndrome(Exception_AdvSIMDFPAccessTrap);
            exception.syndrome<24:20> = ConditionSyndrome();
        when 0x18   // Trapped access to system register or system instruction.
            exception = ExceptionSyndrome(Exception_SystemRegisterTrap);
            instr = ThisInstr();
            exception.syndrome<21:20> = instr<20:19>;  // Op0
            exception.syndrome<19:17> = instr<7:5>;    // Op2
            exception.syndrome<16:14> = instr<18:16>;  // Op1
            exception.syndrome<13:10> = instr<15:12>;  // CRn
            exception.syndrome<9:5>   = instr<4:0>;    // Rt
            exception.syndrome<4:1>   = instr<11:8>;   // CRm
            exception.syndrome<0>     = instr<21>;     // Direction
        otherwise
            Unreachable();

    return exception;
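For EC 0x18 the syndrome is just a repacking of the MRS/MSR encoding fields into the ISS layout. A C sketch of the bit moves; the example encoding is a standard A64 MRS, but treat the exact values as illustrative:

#include <stdint.h>
#include <stdio.h>

/* Repack the system-register fields of an AArch64 MRS/MSR encoding into the
 * ESR_ELx ISS layout used for EC 0x18, following the bit moves in
 * AArch64.SystemAccessTrapSyndrome. */
static uint32_t sysreg_trap_iss(uint32_t instr)
{
    uint32_t iss = 0;
    iss |= ((instr >> 19) & 0x3)  << 20; /* Op0  <- instr<20:19> */
    iss |= ((instr >> 5)  & 0x7)  << 17; /* Op2  <- instr<7:5>   */
    iss |= ((instr >> 16) & 0x7)  << 14; /* Op1  <- instr<18:16> */
    iss |= ((instr >> 12) & 0xF)  << 10; /* CRn  <- instr<15:12> */
    iss |= ((instr >> 0)  & 0x1F) << 5;  /* Rt   <- instr<4:0>   */
    iss |= ((instr >> 8)  & 0xF)  << 1;  /* CRm  <- instr<11:8>  */
    iss |= ((instr >> 21) & 0x1)  << 0;  /* Direction (read = 1) */
    return iss;
}

int main(void)
{
    uint32_t mrs_x0_midr_el1 = 0xD5380000; /* MRS X0, MIDR_EL1 */
    printf("ISS = 0x%07X\n", sysreg_trap_iss(mrs_x0_midr_el1));
    return 0;
}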

Library pseudocode for aarch64/exceptions/traps/AArch64.UndefinedFault

// AArch64.UndefinedFault()
// ========================

AArch64.UndefinedFault()

    route_to_el2 = EL2Enabled() && PSTATE.EL == EL0 && HCR_EL2.TGE == '1';

    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x0;

    exception = ExceptionSyndrome(Exception_Uncategorized);

    if UInt(PSTATE.EL) > UInt(EL1) then
        AArch64.TakeException(PSTATE.EL, exception, preferred_exception_return, vect_offset);
    elsif route_to_el2 then
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);
    else
        AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);
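AArch64.CheckIllegalState and AArch64.UndefinedFault share the same routing rule: above EL1 the exception stays at the current level, EL0 routes to EL2 only when EL2 is enabled with HCR_EL2.TGE set, and everything else goes to EL1. A compact C rendering of that rule:

#include <stdbool.h>
#include <stdio.h>

/* Target Exception level for an exception taken from 'cur_el', mirroring the
 * routing in AArch64.UndefinedFault / AArch64.CheckIllegalState. */
static int route_exception(int cur_el, bool el2_enabled, bool hcr_tge)
{
    bool route_to_el2 = el2_enabled && cur_el == 0 && hcr_tge;
    if (cur_el > 1) return cur_el;     /* taken from EL2/EL3: stay there   */
    if (route_to_el2) return 2;        /* EL0 with TGE: route to EL2       */
    return 1;                          /* otherwise: EL1                   */
}

int main(void)
{
    printf("from EL0, TGE=1 -> EL%d\n", route_exception(0, true, true));  /* EL2 */
    printf("from EL0, TGE=0 -> EL%d\n", route_exception(0, true, false)); /* EL1 */
    printf("from EL2        -> EL%d\n", route_exception(2, true, false)); /* EL2 */
    return 0;
}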

Library pseudocode for aarch64/exceptions/traps/AArch64.WFxTrap

// AArch64.WFxTrap()
// =================

AArch64.WFxTrap(bits(2) target_el, boolean is_wfe)
    assert UInt(target_el) > UInt(PSTATE.EL);

    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x0;

    exception = ExceptionSyndrome(Exception_WFxTrap);
    exception.syndrome<24:20> = ConditionSyndrome();
    exception.syndrome<0> = if is_wfe then '1' else '0';

    if target_el == EL1 && EL2Enabled() && HCR_EL2.TGE == '1' then
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);
    else
        AArch64.TakeException(target_el, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/traps/CheckFPAdvSIMDEnabled64

// CheckFPAdvSIMDEnabled64()
// =========================
// AArch64 instruction wrapper

CheckFPAdvSIMDEnabled64()
    AArch64.CheckFPAdvSIMDEnabled();

Library pseudocode for aarch64/functions/aborts/AArch64.CreateFaultRecord

// AArch64.CreateFaultRecord()
// ===========================

FaultRecord AArch64.CreateFaultRecord(Fault type, bits(52) ipaddress, bit NS, integer level,
                                      AccType acctype, boolean write, bit extflag,
                                      bits(2) errortype, boolean secondstage, boolean s2fs1walk)

    FaultRecord fault;

    fault.type = type;
    fault.domain = bits(4) UNKNOWN;          // Not used from AArch64
    fault.debugmoe = bits(4) UNKNOWN;        // Not used from AArch64
    fault.errortype = errortype;
    fault.ipaddress.NS = NS;
    fault.ipaddress.address = ipaddress;
    fault.level = level;
    fault.acctype = acctype;
    fault.write = write;
    fault.extflag = extflag;
    fault.secondstage = secondstage;
    fault.s2fs1walk = s2fs1walk;

    return fault;

Library pseudocode for aarch64/functions/aborts/AArch64.FaultSyndrome

// AArch64.FaultSyndrome()
// =======================
// Creates an exception syndrome value for Abort and Watchpoint exceptions taken to
// an Exception Level using AArch64.

bits(25) AArch64.FaultSyndrome(boolean d_side, FaultRecord fault)
    assert fault.type != Fault_None;

    bits(25) iss = Zeros();
    if HaveRASExt() && IsExternalSyncAbort(fault) then iss<12:11> = fault.errortype; // SET
    if d_side then
        if IsSecondStage(fault) && !fault.s2fs1walk then iss<24:14> = LSInstructionSyndrome();
        if HaveNV2Ext() && fault.acctype == AccType_NV2REGISTER then
            iss<13> = '1'; // Value of '1' indicates fault is generated by use of VNCR_EL2
        if fault.acctype IN {AccType_DC, AccType_DC_UNPRIV, AccType_IC, AccType_AT} then
            iss<8> = '1';  iss<6> = '1';
        else
            iss<6> = if fault.write then '1' else '0';
    if IsExternalAbort(fault) then iss<9> = fault.extflag;
    iss<7> = if fault.s2fs1walk then '1' else '0';
    iss<5:0> = EncodeLDFSC(fault.type, fault.level);

    return iss;

Library pseudocode for aarch64/functions/exclusive/AArch64.ExclusiveMonitorsPass

// AArch64.ExclusiveMonitorsPass()
// ===============================
// Return TRUE if the Exclusives monitors for the current PE include all of the addresses
// associated with the virtual address region of size bytes starting at address.
// The immediately following memory write must be to the same addresses.

boolean AArch64.ExclusiveMonitorsPass(bits(64) address, integer size)

    // It is IMPLEMENTATION DEFINED whether the detection of memory aborts happens
    // before or after the check on the local Exclusives monitor. As a result a failure
    // of the local monitor can occur on some implementations even if the memory
    // access would give a memory abort.

    acctype = AccType_ATOMIC;
    iswrite = TRUE;
    aligned = (address == Align(address, size));

    if !aligned then
        secondstage = FALSE;
        AArch64.Abort(address, AArch64.AlignmentFault(acctype, iswrite, secondstage));

    passed = AArch64.IsExclusiveVA(address, ProcessorID(), size);
    if !passed then
        return FALSE;

    memaddrdesc = AArch64.TranslateAddress(address, acctype, iswrite, aligned, size);

    // Check for aborts or debug exceptions
    if IsFault(memaddrdesc) then
        AArch64.Abort(address, memaddrdesc.fault);

    passed = IsExclusiveLocal(memaddrdesc.paddress, ProcessorID(), size);
    if passed then
        ClearExclusiveLocal(ProcessorID());
        if memaddrdesc.memattrs.shareable then
            passed = IsExclusiveGlobal(memaddrdesc.paddress, ProcessorID(), size);

    return passed;
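A toy C model of the local-monitor half of this protocol: a load-exclusive marks a granule, and a store-exclusive passes only if the marked granule still covers the store; the monitor is cleared on a pass. The granule size and the single-PE scope are illustrative simplifications, not the architected behaviour:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define GRANULE 16u   /* illustrative Exclusives reservation granule */

static struct { bool valid; uint64_t base; } monitor;

/* Corresponds loosely to AArch64.SetExclusiveMonitors. */
static void load_excl(uint64_t addr)
{
    monitor.valid = true;
    monitor.base  = addr & ~(uint64_t)(GRANULE - 1);  /* mark the whole granule */
}

/* Corresponds loosely to AArch64.ExclusiveMonitorsPass. */
static bool store_excl(uint64_t addr, unsigned size)
{
    bool pass = monitor.valid &&
                (addr & ~(uint64_t)(GRANULE - 1)) == monitor.base &&
                (unsigned)(addr & (GRANULE - 1)) + size <= GRANULE;
    if (pass)
        monitor.valid = false;  /* ClearExclusiveLocal on a pass */
    return pass;
}

int main(void)
{
    load_excl(0x1000);
    printf("STXR same granule: %s\n", store_excl(0x1008, 8) ? "pass" : "fail");
    printf("STXR no monitor:   %s\n", store_excl(0x1008, 8) ? "pass" : "fail");
    return 0;
}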

Library pseudocode for aarch64/functions/exclusive/AArch64.IsExclusiveVA

// An optional IMPLEMENTATION DEFINED test for an exclusive access to a virtual
// address region of size bytes starting at address.
//
// It is permitted (but not required) for this function to return FALSE and
// cause a store exclusive to fail if the virtual address region is not
// totally included within the region recorded by MarkExclusiveVA().
//
// It is always safe to return TRUE which will check the physical address only.

boolean AArch64.IsExclusiveVA(bits(64) address, integer processorid, integer size);

Library pseudocode for aarch64/functions/exclusive/AArch64.MarkExclusiveVA

// Optionally record an exclusive access to the virtual address region of size bytes
// starting at address for processorid.

AArch64.MarkExclusiveVA(bits(64) address, integer processorid, integer size);

Library pseudocode for aarch64/functions/exclusive/AArch64.SetExclusiveMonitors

// AArch64.SetExclusiveMonitors()
// ==============================
// Sets the Exclusives monitors for the current PE to record the addresses associated
// with the virtual address region of size bytes starting at address.

AArch64.SetExclusiveMonitors(bits(64) address, integer size)

    acctype = AccType_ATOMIC;
    iswrite = FALSE;
    aligned = (address == Align(address, size));

    memaddrdesc = AArch64.TranslateAddress(address, acctype, iswrite, aligned, size);

    // Check for aborts or debug exceptions
    if IsFault(memaddrdesc) then
        return;

    if memaddrdesc.memattrs.shareable then
        MarkExclusiveGlobal(memaddrdesc.paddress, ProcessorID(), size);

    MarkExclusiveLocal(memaddrdesc.paddress, ProcessorID(), size);

    AArch64.MarkExclusiveVA(address, ProcessorID(), size);

Library pseudocode for aarch64/functions/fusedrstep/FPRSqrtStepFused

// FPRSqrtStepFused()
// ==================

bits(N) FPRSqrtStepFused(bits(N) op1, bits(N) op2)
    assert N IN {16, 32, 64};
    bits(N) result;
    op1 = FPNeg(op1);
    (type1,sign1,value1) = FPUnpack(op1, FPCR);
    (type2,sign2,value2) = FPUnpack(op2, FPCR);
    (done,result) = FPProcessNaNs(type1, type2, op1, op2, FPCR);
    if !done then
        inf1  = (type1 == FPType_Infinity);  inf2  = (type2 == FPType_Infinity);
        zero1 = (type1 == FPType_Zero);      zero2 = (type2 == FPType_Zero);
        if (inf1 && zero2) || (zero1 && inf2) then
            result = FPOnePointFive('0');
        elsif inf1 || inf2 then
            result = FPInfinity(sign1 EOR sign2);
        else
            // Fully fused multiply-add and halve
            result_value = (3.0 + (value1 * value2)) / 2.0;
            if result_value == 0.0 then
                // Sign of exact zero result depends on rounding mode
                sign = if FPRoundingMode(FPCR) == FPRounding_NEGINF then '1' else '0';
                result = FPZero(sign);
            else
                result = FPRound(result_value, FPCR);
    return result;
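Ignoring the special cases, the result is (3 + (-op1)*op2) / 2: the correction factor of a Newton-Raphson step for the reciprocal square root. A hedged C sketch of how an FRSQRTE-style estimate could be refined with this step; the starting estimate is arbitrary and the special-case handling above is omitted:

#include <stdio.h>

/* Scalar model of the arithmetic in FPRSqrtStepFused: op1 is negated on
 * entry, so the result is (3.0 + (-op1)*op2) / 2.0.  NaN and 0*inf cases
 * are not modelled. */
static double frsqrts(double op1, double op2)
{
    return (3.0 + (-op1) * op2) / 2.0;
}

int main(void)
{
    double x = 2.0;
    double y = 0.7;                     /* crude 1/sqrt(2) estimate */
    for (int i = 0; i < 3; i++)
        y *= frsqrts(x * y, y);         /* y' = y * (3 - x*y*y) / 2 */
    printf("1/sqrt(2) ~= %.12f\n", y);  /* converges to 0.707106781187 */
    return 0;
}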

Library pseudocode for aarch64/functions/fusedrstep/FPRecipStepFused

// FPRecipStepFused()
// ==================

bits(N) FPRecipStepFused(bits(N) op1, bits(N) op2)
    assert N IN {16, 32, 64};
    bits(N) result;
    op1 = FPNeg(op1);
    (type1,sign1,value1) = FPUnpack(op1, FPCR);
    (type2,sign2,value2) = FPUnpack(op2, FPCR);
    (done,result) = FPProcessNaNs(type1, type2, op1, op2, FPCR);
    if !done then
        inf1  = (type1 == FPType_Infinity);  inf2  = (type2 == FPType_Infinity);
        zero1 = (type1 == FPType_Zero);      zero2 = (type2 == FPType_Zero);
        if (inf1 && zero2) || (zero1 && inf2) then
            result = FPTwo('0');
        elsif inf1 || inf2 then
            result = FPInfinity(sign1 EOR sign2);
        else
            // Fully fused multiply-add
            result_value = 2.0 + (value1 * value2);
            if result_value == 0.0 then
                // Sign of exact zero result depends on rounding mode
                sign = if FPRoundingMode(FPCR) == FPRounding_NEGINF then '1' else '0';
                result = FPZero(sign);
            else
                result = FPRound(result_value, FPCR);
    return result;
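The reciprocal analogue computes 2 + (-op1)*op2, the factor in the Newton-Raphson iteration y' = y * (2 - x*y). A matching C sketch, again omitting the NaN and infinity handling:

#include <stdio.h>

/* Scalar model of the arithmetic in FPRecipStepFused. */
static double frecps(double op1, double op2)
{
    return 2.0 + (-op1) * op2;
}

int main(void)
{
    double x = 3.0;
    double y = 0.3;                    /* crude 1/3 estimate */
    for (int i = 0; i < 4; i++)
        y *= frecps(x, y);             /* y' = y * (2 - x*y) */
    printf("1/3 ~= %.12f\n", y);       /* converges to 0.333333333333 */
    return 0;
}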

Library pseudocode for aarch64/functions/memory/AArch64.CheckAlignment

// AArch64.CheckAlignment()
// ========================

boolean AArch64.CheckAlignment(bits(64) address, integer alignment, AccType acctype,
                               boolean iswrite)

    aligned = (address == Align(address, alignment));
    atomic  = acctype IN { AccType_ATOMIC, AccType_ATOMICRW, AccType_ORDEREDATOMIC, AccType_ORDEREDATOMICRW };
    ordered = acctype IN { AccType_ORDERED, AccType_ORDEREDRW, AccType_LIMITEDORDERED, AccType_ORDEREDATOMIC, AccType_ORDEREDATOMICRW };
    vector  = acctype == AccType_VEC;

    if SCTLR[].A == '1' then check = TRUE;
    elsif HaveUA16Ext() then
        check = (UInt(address<0+:4>) + alignment > 16) && ((ordered && SCTLR[].nAA == '0') || atomic);
    else check = atomic || ordered;

    if check && !aligned then
        secondstage = FALSE;
        AArch64.Abort(address, AArch64.AlignmentFault(acctype, iswrite, secondstage));

    return aligned;
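When the 16-byte unaligned-access relaxation is implemented (HaveUA16Ext above), the check for atomic and ordered accesses reduces to whether the access crosses a 16-byte boundary: the offset within 16 bytes plus the access size exceeding 16. A small C illustration of just that predicate:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Mirrors UInt(address<0+:4>) + alignment > 16 from AArch64.CheckAlignment. */
static bool crosses_16_byte_boundary(uint64_t address, unsigned size)
{
    return (unsigned)(address & 0xF) + size > 16;
}

int main(void)
{
    printf("8 bytes at ...9: %s\n",
           crosses_16_byte_boundary(0x1009, 8) ? "checked" : "allowed"); /* 9+8=17 */
    printf("8 bytes at ...8: %s\n",
           crosses_16_byte_boundary(0x1008, 8) ? "checked" : "allowed"); /* 8+8=16 */
    return 0;
}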

Library pseudocode for aarch64/functions/memory/AArch64.MemSingle

// AArch64.MemSingle[] - non-assignment (read) form
// ================================================
// Perform an atomic, little-endian read of 'size' bytes.

bits(size*8) AArch64.MemSingle[bits(64) address, integer size, AccType acctype, boolean wasaligned]
    assert size IN {1, 2, 4, 8, 16};
    assert address == Align(address, size);

    AddressDescriptor memaddrdesc;
    bits(size*8) value;
    iswrite = FALSE;

    // MMU or MPU
    memaddrdesc = AArch64.TranslateAddress(address, acctype, iswrite, wasaligned, size);

    // Check for aborts or debug exceptions
    if IsFault(memaddrdesc) then
        AArch64.Abort(address, memaddrdesc.fault);

    // Memory array access
    accdesc = CreateAccessDescriptor(acctype);
    if HaveMTEExt() then
        if AccessIsTagChecked(ZeroExtend(address, 64), acctype) then
            bits(4) ptag = TransformTag(ZeroExtend(address, 64));
            if !CheckTag(memaddrdesc, ptag, iswrite) then
                TagCheckFail(ZeroExtend(address, 64), iswrite);
    value = _Mem[memaddrdesc, size, accdesc];

    return value;

// AArch64.MemSingle[] - assignment (write) form
// =============================================
// Perform an atomic, little-endian write of 'size' bytes.

AArch64.MemSingle[bits(64) address, integer size, AccType acctype, boolean wasaligned] = bits(size*8) value
    assert size IN {1, 2, 4, 8, 16};
    assert address == Align(address, size);

    AddressDescriptor memaddrdesc;
    iswrite = TRUE;

    // MMU or MPU
    memaddrdesc = AArch64.TranslateAddress(address, acctype, iswrite, wasaligned, size);

    // Check for aborts or debug exceptions
    if IsFault(memaddrdesc) then
        AArch64.Abort(address, memaddrdesc.fault);

    // Effect on exclusives
    if memaddrdesc.memattrs.shareable then
        ClearExclusiveByAddress(memaddrdesc.paddress, ProcessorID(), size);

    // Memory array access
    accdesc = CreateAccessDescriptor(acctype);
    if HaveMTEExt() then
        if AccessIsTagChecked(ZeroExtend(address, 64), acctype) then
            bits(4) ptag = TransformTag(ZeroExtend(address, 64));
            if !CheckTag(memaddrdesc, ptag, iswrite) then
                TagCheckFail(ZeroExtend(address, 64), iswrite);
    _Mem[memaddrdesc, size, accdesc] = value;
    return;

Library pseudocode for aarch64/functions/memory/AddressWithAllocationTag

// AddressWithAllocationTag()
// ==========================
// Generate a 64-bit value containing a Logical Address Tag from a 64-bit
// virtual address and an Allocation Tag.
// If the extension is disabled, treats the Allocation Tag as '0000'.

bits(64) AddressWithAllocationTag(bits(64) address, bits(4) allocation_tag)
    bits(64) result = address;
    bits(4) tag = allocation_tag - ('000':address<55>);
    result<59:56> = tag;
    return result;

Library pseudocode for aarch64/functions/memory/AllocationTagFromAddress

// AllocationTagFromAddress()
// ==========================
// Generate a Tag from a 64-bit value containing a Logical Address Tag.
// If access to Allocation Tags is disabled, this function returns '0000'.

bits(4) AllocationTagFromAddress(bits(64) tagged_address)
    bits(4) logical_tag = tagged_address<59:56>;
    bits(4) tag = logical_tag + ('000':tagged_address<55>);
    return tag;
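These two operations are inverses under 4-bit modular arithmetic: inserting a tag subtracts address bit 55, and extracting adds it back, so tags round-trip identically in the upper (bit 55 set) and lower address ranges. A C sketch of the round trip; the function names are ours, not the pseudocode's:

#include <stdint.h>
#include <stdio.h>

/* Models AddressWithAllocationTag: place (tag - bit55) mod 16 in bits 59:56. */
static uint64_t address_with_tag(uint64_t addr, unsigned tag)
{
    unsigned bit55   = (unsigned)((addr >> 55) & 1);
    unsigned logical = (tag - bit55) & 0xF;              /* mod-16 arithmetic */
    return (addr & ~(0xFULL << 56)) | ((uint64_t)logical << 56);
}

/* Models AllocationTagFromAddress: recover (logical + bit55) mod 16. */
static unsigned tag_from_address(uint64_t tagged)
{
    unsigned bit55   = (unsigned)((tagged >> 55) & 1);
    unsigned logical = (unsigned)((tagged >> 56) & 0xF);
    return (logical + bit55) & 0xF;
}

int main(void)
{
    uint64_t hi = 0xFF80000000001000ULL;                 /* bit 55 set   */
    uint64_t lo = 0x0000000000001000ULL;                 /* bit 55 clear */
    printf("hi roundtrip: %u\n", tag_from_address(address_with_tag(hi, 0x3)));
    printf("lo roundtrip: %u\n", tag_from_address(address_with_tag(lo, 0x3)));
    return 0;
}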

Library pseudocode for aarch64/functions/memory/CheckSPAlignment

// CheckSPAlignment()
// ==================
// Check correct stack pointer alignment for AArch64 state.

CheckSPAlignment()
    bits(64) sp = SP[];

    if PSTATE.EL == EL0 then
        stack_align_check = (SCTLR[].SA0 != '0');
    else
        stack_align_check = (SCTLR[].SA != '0');

    if stack_align_check && sp != Align(sp, 16) then
        AArch64.SPAlignmentFault();

    return;

Library pseudocode for aarch64/functions/memory/CheckTag

// CheckTag()
// ==========
// Performs a Tag Check operation for a memory access and returns
// whether the check passed

boolean CheckTag(AddressDescriptor memaddrdesc, bits(4) ptag, boolean write)
    if memaddrdesc.memattrs.tagged then
        bits(64) paddress = ZeroExtend(memaddrdesc.paddress.address);
        return ptag == MemTag[paddress];
    else
        return TRUE;
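A toy C model of the check: one 4-bit Allocation Tag per 16-byte granule of a small simulated memory, compared against the pointer's tag. The memory size is illustrative; the granule matches TAG_GRANULE:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define MEM_BYTES 256u
static uint8_t mem_tags[MEM_BYTES / 16];   /* one tag per 16-byte granule */

static void set_granule_tag(uint64_t pa, uint8_t tag)
{
    mem_tags[(pa % MEM_BYTES) / 16] = tag & 0xF;
}

/* Mirrors CheckTag for "tagged" memory: pass iff tags are equal. */
static bool check_tag(uint64_t pa, uint8_t ptag)
{
    return (ptag & 0xF) == mem_tags[(pa % MEM_BYTES) / 16];
}

int main(void)
{
    set_granule_tag(0x40, 0xA);
    printf("matching tag:   %s\n", check_tag(0x48, 0xA) ? "pass" : "fail");
    printf("mismatched tag: %s\n", check_tag(0x48, 0x3) ? "pass" : "fail");
    return 0;
}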

Library pseudocode for aarch64/functions/memory/IsBlockDescriptorNTBitValid

// If the implementation supports changing the block size without a break-before-make
// approach, then for implementations that have level 1 or 2 support, the nT bit in
// the block descriptor is valid.

boolean IsBlockDescriptorNTBitValid();

Library pseudocode for aarch64/functions/memory/Mem

// Mem[] - non-assignment (read) form
// ==================================
// Perform a read of 'size' bytes. The access byte order is reversed for a big-endian access.
// Instruction fetches would call AArch64.MemSingle directly.

bits(size*8) Mem[bits(64) address, integer size, AccType acctype]
    assert size IN {1, 2, 4, 8, 16};
    bits(size*8) value;
    boolean iswrite = FALSE;

    aligned = AArch64.CheckAlignment(address, size, acctype, iswrite);
    if size != 16 || !(acctype IN {AccType_VEC, AccType_VECSTREAM}) then
        atomic = aligned;
    else
        // 128-bit SIMD&FP loads are treated as a pair of 64-bit single-copy atomic accesses
        // 64-bit aligned.
        atomic = address == Align(address, 8);

    if !atomic then
        assert size > 1;
        value<7:0> = AArch64.MemSingle[address, 1, acctype, aligned];

        // For subsequent bytes it is CONSTRAINED UNPREDICTABLE whether an unaligned Device memory
        // access will generate an Alignment Fault, as to get this far means the first byte did
        // not, so we must be changing to a new translation page.
        if !aligned then
            c = ConstrainUnpredictable(Unpredictable_DEVPAGE2);
            assert c IN {Constraint_FAULT, Constraint_NONE};
            if c == Constraint_NONE then aligned = TRUE;

        for i = 1 to size-1
            value<8*i+7:8*i> = AArch64.MemSingle[address+i, 1, acctype, aligned];
    elsif size == 16 && acctype IN {AccType_VEC, AccType_VECSTREAM} then
        value<63:0>   = AArch64.MemSingle[address,   8, acctype, aligned];
        value<127:64> = AArch64.MemSingle[address+8, 8, acctype, aligned];
    else
        value = AArch64.MemSingle[address, size, acctype, aligned];

    if (HaveNV2Ext() && acctype == AccType_NV2REGISTER && SCTLR_EL2.EE == '1') || BigEndian() then
        value = BigEndianReverse(value);
    return value;

// Mem[] - assignment (write) form
// ===============================
// Perform a write of 'size' bytes. The byte order is reversed for a big-endian access.

Mem[bits(64) address, integer size, AccType acctype] = bits(size*8) value
    boolean iswrite = TRUE;

    if (HaveNV2Ext() && acctype == AccType_NV2REGISTER && SCTLR_EL2.EE == '1') || BigEndian() then
        value = BigEndianReverse(value);

    aligned = AArch64.CheckAlignment(address, size, acctype, iswrite);
    if size != 16 || !(acctype IN {AccType_VEC, AccType_VECSTREAM}) then
        atomic = aligned;
    else
        // 128-bit SIMD&FP stores are treated as a pair of 64-bit single-copy atomic accesses
        // 64-bit aligned.
        atomic = address == Align(address, 8);

    if !atomic then
        assert size > 1;
        AArch64.MemSingle[address, 1, acctype, aligned] = value<7:0>;

        // For subsequent bytes it is CONSTRAINED UNPREDICTABLE whether an unaligned Device memory
        // access will generate an Alignment Fault, as to get this far means the first byte did
        // not, so we must be changing to a new translation page.
        if !aligned then
            c = ConstrainUnpredictable(Unpredictable_DEVPAGE2);
            assert c IN {Constraint_FAULT, Constraint_NONE};
            if c == Constraint_NONE then aligned = TRUE;

        for i = 1 to size-1
            AArch64.MemSingle[address+i, 1, acctype, aligned] = value<8*i+7:8*i>;
    elsif size == 16 && acctype IN {AccType_VEC, AccType_VECSTREAM} then
        AArch64.MemSingle[address,   8, acctype, aligned] = value<63:0>;
        AArch64.MemSingle[address+8, 8, acctype, aligned] = value<127:64>;
    else
        AArch64.MemSingle[address, size, acctype, aligned] = value;
    return;
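Mem[] assembles a non-atomic access from byte-sized single-copy reads in little-endian order, then applies BigEndianReverse when the access is big-endian. A C model of just the byte assembly and reversal:

#include <stdint.h>
#include <stdio.h>

/* Little-endian assembly, as in value<8*i+7:8*i> = byte i. */
static uint64_t assemble_le(const uint8_t *bytes, unsigned size)
{
    uint64_t value = 0;
    for (unsigned i = 0; i < size; i++)
        value |= (uint64_t)bytes[i] << (8 * i);
    return value;
}

/* Models BigEndianReverse for a 'size'-byte value. */
static uint64_t byte_reverse(uint64_t v, unsigned size)
{
    uint64_t r = 0;
    for (unsigned i = 0; i < size; i++)
        r = (r << 8) | ((v >> (8 * i)) & 0xFF);
    return r;
}

int main(void)
{
    uint8_t bytes[4] = { 0x11, 0x22, 0x33, 0x44 };
    uint64_t le = assemble_le(bytes, 4);
    printf("little-endian: 0x%08llX\n", (unsigned long long)le);                   /* 0x44332211 */
    printf("big-endian:    0x%08llX\n", (unsigned long long)byte_reverse(le, 4));  /* 0x11223344 */
    return 0;
}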

Library pseudocode for aarch64/functions/memory/MemTag

// MemTag[] - non-assignment (read) form
// =====================================
// Load an Allocation Tag from memory.

bits(4) MemTag[bits(64) address]
    AddressDescriptor memaddrdesc;
    bits(4) value;
    iswrite = FALSE;

    memaddrdesc = AArch64.TranslateAddress(address, AccType_NORMAL, iswrite, TRUE, TAG_GRANULE);

    // Check for aborts or debug exceptions
    if IsFault(memaddrdesc) then
        AArch64.Abort(address, memaddrdesc.fault);

    // Return the granule tag if tagging is enabled...
    if AllocationTagAccessIsEnabled() && memaddrdesc.memattrs.tagged then
        return _MemTag[memaddrdesc];
    else
        // ...otherwise read tag as zero.
        return '0000';

// MemTag[] - assignment (write) form
// ==================================
// Store an Allocation Tag to memory.

MemTag[bits(64) address] = bits(4) value
    AddressDescriptor memaddrdesc;
    iswrite = TRUE;

    // Stores of allocation tags must be aligned
    if address != Align(address, TAG_GRANULE) then
        boolean secondstage = FALSE;
        AArch64.Abort(address, AArch64.AlignmentFault(AccType_NORMAL, iswrite, secondstage));

    wasaligned = TRUE;
    memaddrdesc = AArch64.TranslateAddress(address, AccType_NORMAL, iswrite, wasaligned, TAG_GRANULE);

    // Check for aborts or debug exceptions
    if IsFault(memaddrdesc) then
        AArch64.Abort(address, memaddrdesc.fault);

    // Memory array access
    if AllocationTagAccessIsEnabled() && memaddrdesc.memattrs.tagged then
        _MemTag[memaddrdesc] = value;

Library pseudocode for aarch64/functions/memory/TransformTag

// TransformTag()
// ==============
// Apply tag transformation rules.

bits(4) TransformTag(bits(64) vaddr)
    bits(4) vtag = vaddr<59:56>;
    bits(4) tagdelta = ZeroExtend(vaddr<55>);
    bits(4) ptag = vtag + tagdelta;
    return ptag;

Library pseudocode for aarch64/functions/memory/AccessIsTagChecked

// AccessIsTagChecked()
// ====================
// TRUE if a given access is tag-checked, FALSE otherwise.

boolean AccessIsTagChecked(bits(64) vaddr, AccType acctype)
    if PSTATE.M<4> == '1' then return FALSE;

    if EffectiveTBI(vaddr, FALSE, PSTATE.EL) == '0' then return FALSE;

    if EffectiveTCMA(vaddr, PSTATE.EL) == '1' && (vaddr<59:55> == '00000' || vaddr<59:55> == '11111') then return FALSE;

    if !AllocationTagAccessIsEnabled() then return FALSE;

    if acctype IN {AccType_IFETCH, AccType_PTW} then return FALSE;

    if acctype == AccType_NV2REGISTER then return FALSE;

    if PSTATE.TCO == '1' then return FALSE;

    if IsNonTagCheckedInstruction() then return FALSE;

    return TRUE;

Library pseudocode for aarch64/functions/pac/addpac/AddPAC

// AddPAC()
// ========
// Calculates the pointer authentication code for a 64-bit quantity and then
// inserts that into the pointer authentication code field of that 64-bit quantity.

bits(64) AddPAC(bits(64) ptr, bits(64) modifier, bits(128) K, boolean data)
    bits(64) PAC;
    bits(64) result;
    bits(64) ext_ptr;
    bits(64) extfield;
    bit selbit;
    boolean tbi = CalculateTBI(ptr, data);
    integer top_bit = if tbi then 55 else 63;

    // If tagged pointers are in use for a regime with two TTBRs, use bit<55> of
    // the pointer to select between upper and lower ranges, and preserve this.
    // This handles the awkward case where there is apparently no correct choice between
    // the upper and lower address range - i.e. an addr of 1xxxxxxx0... with TBI0=0 and TBI1=1
    // and 0xxxxxxx1 with TBI1=0 and TBI0=1:
    if PtrHasUpperAndLowerAddRanges() then
        assert S1TranslationRegime() IN {EL1, EL2};
        if S1TranslationRegime() == EL1 then
            // EL1 translation regime registers
            if data then
                selbit = if TCR_EL1.TBI1 == '1' || TCR_EL1.TBI0 == '1' then ptr<55> else ptr<63>;
            else
                if ((TCR_EL1.TBI1 == '1' && TCR_EL1.TBID1 == '0') ||
                    (TCR_EL1.TBI0 == '1' && TCR_EL1.TBID0 == '0')) then
                    selbit = ptr<55>;
                else
                    selbit = ptr<63>;
        else
            // EL2 translation regime registers
            if data then
                selbit = if ((HaveEL(EL2) && TCR_EL2.TBI1 == '1') ||
                             (HaveEL(EL2) && TCR_EL2.TBI0 == '1')) then ptr<55> else ptr<63>;
            else
                selbit = if ((HaveEL(EL2) && TCR_EL2.TBI1 == '1' && TCR_EL1.TBID1 == '0') ||
                             (HaveEL(EL2) && TCR_EL2.TBI0 == '1' && TCR_EL1.TBID0 == '0')) then ptr<55> else ptr<63>;
    else selbit = if tbi then ptr<55> else ptr<63>;

    integer bottom_PAC_bit = CalculateBottomPACBit(selbit);

    // The pointer authentication code field takes all the available bits in between
    extfield = Replicate(selbit, 64);

    // Compute the pointer authentication code for a ptr with good extension bits
    if tbi then
        ext_ptr = ptr<63:56>:extfield<(56-bottom_PAC_bit)-1:0>:ptr<bottom_PAC_bit-1:0>;
    else
        ext_ptr = extfield<(64-bottom_PAC_bit)-1:0>:ptr<bottom_PAC_bit-1:0>;

    PAC = ComputePAC(ext_ptr, modifier, K<127:64>, K<63:0>);

    // Check if the ptr has good extension bits and corrupt the pointer authentication code if not
    if !IsZero(ptr<top_bit:bottom_PAC_bit>) && !IsOnes(ptr<top_bit:bottom_PAC_bit>) then
        if HaveEnhancedPAC() then
            PAC = Zeros();
        else
            PAC<top_bit-1> = NOT(PAC<top_bit-1>);

    // Preserve the determination between upper and lower address at bit<55> and insert PAC
    if tbi then
        result = ptr<63:56>:selbit:PAC<54:bottom_PAC_bit>:ptr<bottom_PAC_bit-1:0>;
    else
        result = PAC<63:56>:selbit:PAC<54:bottom_PAC_bit>:ptr<bottom_PAC_bit-1:0>;
    return result;
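The field placement follows from CalculateBottomPACBit: for a translation regime with TnSZ of 16 (a 48-bit VA space), bottom_PAC_bit is 64 - 16 = 48 and the PAC occupies bits 54:48 (plus bits 63:56 when TBI is off). A C sketch of the insertion arithmetic under those assumptions; it deliberately ignores the TTBR selection, TBI, and extension-bit checks above:

#include <stdint.h>
#include <stdio.h>

/* Insert PAC bits into bits 54:bottom of a pointer, where
 * bottom = 64 - TnSZ (as computed by CalculateBottomPACBit). */
static uint64_t insert_pac(uint64_t ptr, uint64_t pac, unsigned tnsz)
{
    unsigned bottom = 64 - tnsz;                               /* 48 for TnSZ=16 */
    uint64_t field  = ((1ULL << (55 - bottom)) - 1) << bottom; /* bits 54:bottom */
    return (ptr & ~field) | (pac & field);
}

int main(void)
{
    uint64_t ptr = 0x0000FFFF12345678ULL;   /* 48-bit VA, extension bits zero */
    uint64_t pac = 0xA5A5A5A5A5A5A5A5ULL;   /* placeholder PAC value          */
    printf("signed ptr: 0x%016llX\n",
           (unsigned long long)insert_pac(ptr, pac, 16));
    return 0;
}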

Library pseudocode for aarch64/functions/pac/addpacda/AddPACDA

// AddPACDA()
// ==========
// Returns a 64-bit value containing X, but replacing the pointer authentication code
// field bits with a pointer authentication code, where the pointer authentication
// code is derived using a cryptographic algorithm as a combination of X, Y and the
// APDAKey_EL1.

bits(64) AddPACDA(bits(64) X, bits(64) Y)
    boolean TrapEL2;
    boolean TrapEL3;
    bits(1) Enable;
    bits(128) APDAKey_EL1;

    APDAKey_EL1 = APDAKeyHi_EL1<63:0> : APDAKeyLo_EL1<63:0>;

    case PSTATE.EL of
        when EL0
            boolean IsEL1Regime = S1TranslationRegime() == EL1;
            Enable = if IsEL1Regime then SCTLR_EL1.EnDA else SCTLR_EL2.EnDA;
            TrapEL2 = (EL2Enabled() && HCR_EL2.API == '0' &&
                       (HCR_EL2.TGE == '0' || HCR_EL2.E2H == '0'));
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL1
            Enable = SCTLR_EL1.EnDA;
            TrapEL2 = EL2Enabled() && HCR_EL2.API == '0';
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL2
            Enable = SCTLR_EL2.EnDA;
            TrapEL2 = FALSE;
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL3
            Enable = SCTLR_EL3.EnDA;
            TrapEL2 = FALSE;
            TrapEL3 = FALSE;

    if Enable == '0' then return X;
    elsif TrapEL2 then TrapPACUse(EL2);
    elsif TrapEL3 then TrapPACUse(EL3);
    else return AddPAC(X, Y, APDAKey_EL1, TRUE);

Library pseudocode for aarch64/functions/pac/addpacdb/AddPACDB

// AddPACDB()
// ==========
// Returns a 64-bit value containing X, but replacing the pointer authentication code
// field bits with a pointer authentication code, where the pointer authentication
// code is derived using a cryptographic algorithm as a combination of X, Y and the
// APDBKey_EL1.

bits(64) AddPACDB(bits(64) X, bits(64) Y)
    boolean TrapEL2;
    boolean TrapEL3;
    bits(1) Enable;
    bits(128) APDBKey_EL1;

    APDBKey_EL1 = APDBKeyHi_EL1<63:0> : APDBKeyLo_EL1<63:0>;

    case PSTATE.EL of
        when EL0
            boolean IsEL1Regime = S1TranslationRegime() == EL1;
            Enable = if IsEL1Regime then SCTLR_EL1.EnDB else SCTLR_EL2.EnDB;
            TrapEL2 = (EL2Enabled() && HCR_EL2.API == '0' &&
                       (HCR_EL2.TGE == '0' || HCR_EL2.E2H == '0'));
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL1
            Enable = SCTLR_EL1.EnDB;
            TrapEL2 = EL2Enabled() && HCR_EL2.API == '0';
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL2
            Enable = SCTLR_EL2.EnDB;
            TrapEL2 = FALSE;
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL3
            Enable = SCTLR_EL3.EnDB;
            TrapEL2 = FALSE;
            TrapEL3 = FALSE;

    if Enable == '0' then return X;
    elsif TrapEL2 then TrapPACUse(EL2);
    elsif TrapEL3 then TrapPACUse(EL3);
    else return AddPAC(X, Y, APDBKey_EL1, TRUE);

Library pseudocode for aarch64/functions/pac/addpacga/AddPACGA

// AddPACGA()
// ==========
// Returns a 64-bit value where the lower 32 bits are 0, and the upper 32 bits contain
// a 32-bit pointer authentication code which is derived using a cryptographic
// algorithm as a combination of X, Y and the APGAKey_EL1.

bits(64) AddPACGA(bits(64) X, bits(64) Y)
    boolean TrapEL2;
    boolean TrapEL3;
    bits(128) APGAKey_EL1;

    APGAKey_EL1 = APGAKeyHi_EL1<63:0> : APGAKeyLo_EL1<63:0>;

    case PSTATE.EL of
        when EL0
            TrapEL2 = (EL2Enabled() && HCR_EL2.API == '0' &&
                       (HCR_EL2.TGE == '0' || HCR_EL2.E2H == '0'));
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL1
            TrapEL2 = EL2Enabled() && HCR_EL2.API == '0';
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL2
            TrapEL2 = FALSE;
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL3
            TrapEL2 = FALSE;
            TrapEL3 = FALSE;

    if TrapEL2 then TrapPACUse(EL2);
    elsif TrapEL3 then TrapPACUse(EL3);
    else return ComputePAC(X, Y, APGAKey_EL1<127:64>, APGAKey_EL1<63:0>)<63:32>:Zeros(32);

Library pseudocode for aarch64/functions/pac/addpacia/AddPACIA

// AddPACIA()
// ==========
// Returns a 64-bit value containing X, but replacing the pointer authentication code
// field bits with a pointer authentication code, where the pointer authentication
// code is derived using a cryptographic algorithm as a combination of X, Y, and the
// APIAKey_EL1.

bits(64) AddPACIA(bits(64) X, bits(64) Y)
    boolean TrapEL2;
    boolean TrapEL3;
    bits(1) Enable;
    bits(128) APIAKey_EL1;

    APIAKey_EL1 = APIAKeyHi_EL1<63:0>:APIAKeyLo_EL1<63:0>;

    case PSTATE.EL of
        when EL0
            boolean IsEL1Regime = S1TranslationRegime() == EL1;
            Enable = if IsEL1Regime then SCTLR_EL1.EnIA else SCTLR_EL2.EnIA;
            TrapEL2 = (EL2Enabled() && HCR_EL2.API == '0' &&
                       (HCR_EL2.TGE == '0' || HCR_EL2.E2H == '0'));
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL1
            Enable = SCTLR_EL1.EnIA;
            TrapEL2 = EL2Enabled() && HCR_EL2.API == '0';
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL2
            Enable = SCTLR_EL2.EnIA;
            TrapEL2 = FALSE;
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL3
            Enable = SCTLR_EL3.EnIA;
            TrapEL2 = FALSE;
            TrapEL3 = FALSE;

    if Enable == '0' then return X;
    elsif TrapEL2 then TrapPACUse(EL2);
    elsif TrapEL3 then TrapPACUse(EL3);
    else return AddPAC(X, Y, APIAKey_EL1, FALSE);

Library pseudocode for aarch64/functions/pac/addpacib/AddPACIB

// AddPACIB()
// ==========
// Returns a 64-bit value containing X, but replacing the pointer authentication code
// field bits with a pointer authentication code, where the pointer authentication
// code is derived using a cryptographic algorithm as a combination of X, Y and the
// APIBKey_EL1.

bits(64) AddPACIB(bits(64) X, bits(64) Y)
    boolean TrapEL2;
    boolean TrapEL3;
    bits(1) Enable;
    bits(128) APIBKey_EL1;

    APIBKey_EL1 = APIBKeyHi_EL1<63:0> : APIBKeyLo_EL1<63:0>;
    case PSTATE.EL of
        when EL0
            boolean IsEL1Regime = S1TranslationRegime() == EL1;
            Enable = if IsEL1Regime then SCTLR_EL1.EnIB else SCTLR_EL2.EnIB;
            TrapEL2 = (EL2Enabled() && HCR_EL2.API == '0' &&
                       (HCR_EL2.TGE == '0' || HCR_EL2.E2H == '0'));
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL1
            Enable = SCTLR_EL1.EnIB;
            TrapEL2 = EL2Enabled() && HCR_EL2.API == '0';
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL2
            Enable = SCTLR_EL2.EnIB;
            TrapEL2 = FALSE;
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL3
            Enable = SCTLR_EL3.EnIB;
            TrapEL2 = FALSE;
            TrapEL3 = FALSE;

    if Enable == '0' then
        return X;
    elsif TrapEL2 then
        TrapPACUse(EL2);
    elsif TrapEL3 then
        TrapPACUse(EL3);
    else
        return AddPAC(X, Y, APIBKey_EL1, FALSE);

Library pseudocode for aarch64/functions/pac/auth/Auth

// Auth()
// ======
// Restores the upper bits of the address to be all zeros or all ones (based on the
// value of bit[55]) and computes and checks the pointer authentication code. If the
// check passes, then the restored address is returned. If the check fails, the
// second-top and third-top bits of the extension bits in the pointer authentication code
// field are corrupted to ensure that accessing the address will give a translation fault.

bits(64) Auth(bits(64) ptr, bits(64) modifier, bits(128) K, boolean data, bit keynumber)
    bits(64) PAC;
    bits(64) result;
    bits(64) original_ptr;
    bits(2) error_code;
    bits(64) extfield;

    // Reconstruct the extension field used for adding the PAC to the pointer
    boolean tbi = CalculateTBI(ptr, data);
    integer bottom_PAC_bit = CalculateBottomPACBit(ptr<55>);
    extfield = Replicate(ptr<55>, 64);

    if tbi then
        original_ptr = ptr<63:56>:extfield<56-bottom_PAC_bit-1:0>:ptr<bottom_PAC_bit-1:0>;
    else
        original_ptr = extfield<64-bottom_PAC_bit-1:0>:ptr<bottom_PAC_bit-1:0>;

    PAC = ComputePAC(original_ptr, modifier, K<127:64>, K<63:0>);

    // Check pointer authentication code
    if tbi then
        if PAC<54:bottom_PAC_bit> == ptr<54:bottom_PAC_bit> then
            result = original_ptr;
        else
            error_code = keynumber:NOT(keynumber);
            result = original_ptr<63:55>:error_code:original_ptr<52:0>;
    else
        if ((PAC<54:bottom_PAC_bit> == ptr<54:bottom_PAC_bit>) &&
            (PAC<63:56> == ptr<63:56>)) then
            result = original_ptr;
        else
            error_code = keynumber:NOT(keynumber);
            result = original_ptr<63>:error_code:original_ptr<60:0>;

    return result;
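
Note: the restore-and-check logic above is mostly bit surgery. The C sketch below mirrors it under the assumption that bottom_PAC_bit and tbi are already known; compute_pac() is a hypothetical stand-in for ComputePAC(), and the function is an illustration, not the architectural definition.

    #include <stdbool.h>
    #include <stdint.h>

    uint64_t compute_pac(uint64_t data, uint64_t modifier);  /* hypothetical */

    uint64_t auth_sketch(uint64_t ptr, uint64_t mod, int bottom, bool tbi, int key) {
        uint64_t ext = (ptr >> 55) & 1 ? ~0ull : 0;          /* Replicate(ptr<55>) */
        uint64_t low = ptr & ((1ull << bottom) - 1);
        uint64_t orig;
        if (tbi)        /* keep the top byte, restore bits 55:bottom */
            orig = (ptr & 0xFF00000000000000ull)
                 | (ext & ((1ull << 56) - 1) & ~((1ull << bottom) - 1)) | low;
        else
            orig = (ext & ~((1ull << bottom) - 1)) | low;

        uint64_t pac  = compute_pac(orig, mod);
        uint64_t mask = tbi ? ((1ull << 55) - 1) & ~((1ull << bottom) - 1) /* 54:bottom */
                            : (~0ull << bottom) & ~(1ull << 55);  /* 63:56 and 54:bottom */
        if (((pac ^ ptr) & mask) == 0)
            return orig;                                     /* check passed */

        /* Corrupt two extension bits with the key-specific error code. */
        uint64_t err   = (uint64_t)(((key & 1) << 1) | (~key & 1));
        int      shift = tbi ? 53 : 61;                      /* bits 54:53 or 62:61 */
        return (orig & ~(3ull << shift)) | (err << shift);
    }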

Library pseudocode for aarch64/functions/pac/authda/AuthDA

// AuthDA()
// ========
// Returns a 64-bit value containing X, but replacing the pointer authentication code
// field bits with the extension of the address bits. The instruction checks a pointer
// authentication code in the pointer authentication code field bits of X, using the same
// algorithm and key as AddPACDA().

bits(64) AuthDA(bits(64) X, bits(64) Y)
    boolean TrapEL2;
    boolean TrapEL3;
    bits(1) Enable;
    bits(128) APDAKey_EL1;

    APDAKey_EL1 = APDAKeyHi_EL1<63:0> : APDAKeyLo_EL1<63:0>;
    case PSTATE.EL of
        when EL0
            boolean IsEL1Regime = S1TranslationRegime() == EL1;
            Enable = if IsEL1Regime then SCTLR_EL1.EnDA else SCTLR_EL2.EnDA;
            TrapEL2 = (EL2Enabled() && HCR_EL2.API == '0' &&
                       (HCR_EL2.TGE == '0' || HCR_EL2.E2H == '0'));
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL1
            Enable = SCTLR_EL1.EnDA;
            TrapEL2 = EL2Enabled() && HCR_EL2.API == '0';
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL2
            Enable = SCTLR_EL2.EnDA;
            TrapEL2 = FALSE;
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL3
            Enable = SCTLR_EL3.EnDA;
            TrapEL2 = FALSE;
            TrapEL3 = FALSE;

    if Enable == '0' then
        return X;
    elsif TrapEL2 then
        TrapPACUse(EL2);
    elsif TrapEL3 then
        TrapPACUse(EL3);
    else
        return Auth(X, Y, APDAKey_EL1, TRUE, '0');

Library pseudocode for aarch64/functions/pac/authdb/AuthDB

// AuthDB()
// ========
// Returns a 64-bit value containing X, but replacing the pointer authentication code
// field bits with the extension of the address bits. The instruction checks a
// pointer authentication code in the pointer authentication code field bits of X, using
// the same algorithm and key as AddPACDB().

bits(64) AuthDB(bits(64) X, bits(64) Y)
    boolean TrapEL2;
    boolean TrapEL3;
    bits(1) Enable;
    bits(128) APDBKey_EL1;

    APDBKey_EL1 = APDBKeyHi_EL1<63:0> : APDBKeyLo_EL1<63:0>;
    case PSTATE.EL of
        when EL0
            boolean IsEL1Regime = S1TranslationRegime() == EL1;
            Enable = if IsEL1Regime then SCTLR_EL1.EnDB else SCTLR_EL2.EnDB;
            TrapEL2 = (EL2Enabled() && HCR_EL2.API == '0' &&
                       (HCR_EL2.TGE == '0' || HCR_EL2.E2H == '0'));
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL1
            Enable = SCTLR_EL1.EnDB;
            TrapEL2 = EL2Enabled() && HCR_EL2.API == '0';
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL2
            Enable = SCTLR_EL2.EnDB;
            TrapEL2 = FALSE;
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL3
            Enable = SCTLR_EL3.EnDB;
            TrapEL2 = FALSE;
            TrapEL3 = FALSE;

    if Enable == '0' then
        return X;
    elsif TrapEL2 then
        TrapPACUse(EL2);
    elsif TrapEL3 then
        TrapPACUse(EL3);
    else
        return Auth(X, Y, APDBKey_EL1, TRUE, '1');

Library pseudocode for aarch64/functions/pac/authia/AuthIA

// AuthIA()
// ========
// Returns a 64-bit value containing X, but replacing the pointer authentication code
// field bits with the extension of the address bits. The instruction checks a pointer
// authentication code in the pointer authentication code field bits of X, using the same
// algorithm and key as AddPACIA().

bits(64) AuthIA(bits(64) X, bits(64) Y)
    boolean TrapEL2;
    boolean TrapEL3;
    bits(1) Enable;
    bits(128) APIAKey_EL1;

    APIAKey_EL1 = APIAKeyHi_EL1<63:0> : APIAKeyLo_EL1<63:0>;
    case PSTATE.EL of
        when EL0
            boolean IsEL1Regime = S1TranslationRegime() == EL1;
            Enable = if IsEL1Regime then SCTLR_EL1.EnIA else SCTLR_EL2.EnIA;
            TrapEL2 = (EL2Enabled() && HCR_EL2.API == '0' &&
                       (HCR_EL2.TGE == '0' || HCR_EL2.E2H == '0'));
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL1
            Enable = SCTLR_EL1.EnIA;
            TrapEL2 = EL2Enabled() && HCR_EL2.API == '0';
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL2
            Enable = SCTLR_EL2.EnIA;
            TrapEL2 = FALSE;
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL3
            Enable = SCTLR_EL3.EnIA;
            TrapEL2 = FALSE;
            TrapEL3 = FALSE;

    if Enable == '0' then
        return X;
    elsif TrapEL2 then
        TrapPACUse(EL2);
    elsif TrapEL3 then
        TrapPACUse(EL3);
    else
        return Auth(X, Y, APIAKey_EL1, FALSE, '0');

Library pseudocode for aarch64/functions/pac/authib/AuthIB

// AuthIB()
// ========
// Returns a 64-bit value containing X, but replacing the pointer authentication code
// field bits with the extension of the address bits. The instruction checks a pointer
// authentication code in the pointer authentication code field bits of X, using the same
// algorithm and key as AddPACIB().

bits(64) AuthIB(bits(64) X, bits(64) Y)
    boolean TrapEL2;
    boolean TrapEL3;
    bits(1) Enable;
    bits(128) APIBKey_EL1;

    APIBKey_EL1 = APIBKeyHi_EL1<63:0> : APIBKeyLo_EL1<63:0>;
    case PSTATE.EL of
        when EL0
            boolean IsEL1Regime = S1TranslationRegime() == EL1;
            Enable = if IsEL1Regime then SCTLR_EL1.EnIB else SCTLR_EL2.EnIB;
            TrapEL2 = (EL2Enabled() && HCR_EL2.API == '0' &&
                       (HCR_EL2.TGE == '0' || HCR_EL2.E2H == '0'));
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL1
            Enable = SCTLR_EL1.EnIB;
            TrapEL2 = EL2Enabled() && HCR_EL2.API == '0';
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL2
            Enable = SCTLR_EL2.EnIB;
            TrapEL2 = FALSE;
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL3
            Enable = SCTLR_EL3.EnIB;
            TrapEL2 = FALSE;
            TrapEL3 = FALSE;

    if Enable == '0' then
        return X;
    elsif TrapEL2 then
        TrapPACUse(EL2);
    elsif TrapEL3 then
        TrapPACUse(EL3);
    else
        return Auth(X, Y, APIBKey_EL1, FALSE, '1');

Library pseudocode for aarch64/functions/pac/calcbottompacbit/CalculateBottomPACBit

// CalculateBottomPACBit()
// =======================

integer CalculateBottomPACBit(bit top_bit)
    integer tsz_field;

    if PtrHasUpperAndLowerAddRanges() then
        assert S1TranslationRegime() IN {EL1, EL2};
        if S1TranslationRegime() == EL1 then
            // EL1 translation regime registers
            tsz_field = if top_bit == '1' then UInt(TCR_EL1.T1SZ) else UInt(TCR_EL1.T0SZ);
            using64k = if top_bit == '1' then TCR_EL1.TG1 == '11' else TCR_EL1.TG0 == '01';
        else
            // EL2 translation regime registers
            assert HaveEL(EL2);
            tsz_field = if top_bit == '1' then UInt(TCR_EL2.T1SZ) else UInt(TCR_EL2.T0SZ);
            using64k = if top_bit == '1' then TCR_EL2.TG1 == '11' else TCR_EL2.TG0 == '01';
    else
        tsz_field = if PSTATE.EL == EL2 then UInt(TCR_EL2.T0SZ) else UInt(TCR_EL3.T0SZ);
        using64k = if PSTATE.EL == EL2 then TCR_EL2.TG0 == '01' else TCR_EL3.TG0 == '01';

    max_limit_tsz_field = (if !HaveSmallPageTblExt() then 39 else if using64k then 47 else 48);
    if tsz_field > max_limit_tsz_field then
        // TCR_ELx.TySZ is out of range
        c = ConstrainUnpredictable(Unpredictable_RESTnSZ);
        assert c IN {Constraint_FORCE, Constraint_NONE};
        if c == Constraint_FORCE then tsz_field = max_limit_tsz_field;

    tszmin = if using64k && VAMax() == 52 then 12 else 16;
    if tsz_field < tszmin then
        c = ConstrainUnpredictable(Unpredictable_RESTnSZ);
        assert c IN {Constraint_FORCE, Constraint_NONE};
        if c == Constraint_FORCE then tsz_field = tszmin;

    return (64-tsz_field);
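
Note: the PAC field simply starts one bit above the configured virtual address size. A C sketch of the core calculation, with the clamping of out-of-range TnSZ values omitted; the tcr_* parameters are assumed inputs. For example, with T0SZ = 25 (a 39-bit lower range), the PAC occupies bits 54:39, plus bits 63:56 when top-byte-ignore is off.

    #include <stdbool.h>

    /* Bottom bit of the PAC field: 64 - TnSZ, selecting T1SZ or T0SZ by bit 55. */
    int bottom_pac_bit(bool top_bit, unsigned tcr_t0sz, unsigned tcr_t1sz) {
        unsigned tsz = top_bit ? tcr_t1sz : tcr_t0sz;
        /* The architectural version first clamps tsz to the legal range. */
        return 64 - (int)tsz;
    }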

Library pseudocode for aarch64/functions/pac/calculatetbi/CalculateTBI

// CalculateTBI()
// ==============

boolean CalculateTBI(bits(64) ptr, boolean data)
    boolean tbi = FALSE;

    if PtrHasUpperAndLowerAddRanges() then
        assert S1TranslationRegime() IN {EL1, EL2};
        if S1TranslationRegime() == EL1 then
            // EL1 translation regime registers
            if data then
                tbi = if ptr<55> == '1' then TCR_EL1.TBI1 == '1' else TCR_EL1.TBI0 == '1';
            else
                if ptr<55> == '1' then
                    tbi = TCR_EL1.TBI1 == '1' && TCR_EL1.TBID1 == '0';
                else
                    tbi = TCR_EL1.TBI0 == '1' && TCR_EL1.TBID0 == '0';
        else
            // EL2 translation regime registers
            if data then
                tbi = if ptr<55> == '1' then TCR_EL2.TBI1 == '1' else TCR_EL2.TBI0 == '1';
            else
                if ptr<55> == '1' then
                    tbi = TCR_EL2.TBI1 == '1' && TCR_EL2.TBID1 == '0';
                else
                    tbi = TCR_EL2.TBI0 == '1' && TCR_EL2.TBID0 == '0';
    elsif PSTATE.EL == EL2 then
        tbi = if data then TCR_EL2.TBI=='1' else TCR_EL2.TBI=='1' && TCR_EL2.TBID=='0';
    elsif PSTATE.EL == EL3 then
        tbi = if data then TCR_EL3.TBI=='1' else TCR_EL3.TBI=='1' && TCR_EL3.TBID=='0';

    return tbi;
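
Note: for instruction addresses, TBIDn can switch top-byte-ignore off even when TBIn is set; data accesses look only at TBIn. A one-line C sketch of that selection (tbi_n and tbid_n are assumed single-bit inputs):

    #include <stdbool.h>

    bool tbi_applies(bool tbi_n, bool tbid_n, bool is_data) {
        return is_data ? tbi_n : (tbi_n && !tbid_n);
    }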

Library pseudocode for aarch64/functions/pac/computepac/ComputePAC

array bits(64) RC[0..4];

bits(64) ComputePAC(bits(64) data, bits(64) modifier, bits(64) key0, bits(64) key1)
    bits(64) workingval;
    bits(64) runningmod;
    bits(64) roundkey;
    bits(64) modk0;
    constant bits(64) Alpha = 0xC0AC29B7C97C50DD<63:0>;

    RC[0] = 0x0000000000000000<63:0>;
    RC[1] = 0x13198A2E03707344<63:0>;
    RC[2] = 0xA4093822299F31D0<63:0>;
    RC[3] = 0x082EFA98EC4E6C89<63:0>;
    RC[4] = 0x452821E638D01377<63:0>;

    modk0 = key0<0>:key0<63:2>:(key0<63> EOR key0<1>);
    runningmod = modifier;
    workingval = data EOR key0;

    for i = 0 to 4
        roundkey = key1 EOR runningmod;
        workingval = workingval EOR roundkey;
        workingval = workingval EOR RC[i];
        if i > 0 then
            workingval = PACCellShuffle(workingval);
            workingval = PACMult(workingval);
        workingval = PACSub(workingval);
        runningmod = TweakShuffle(runningmod<63:0>);

    roundkey = modk0 EOR runningmod;
    workingval = workingval EOR roundkey;
    workingval = PACCellShuffle(workingval);
    workingval = PACMult(workingval);
    workingval = PACSub(workingval);
    workingval = PACCellShuffle(workingval);
    workingval = PACMult(workingval);
    workingval = key1 EOR workingval;
    workingval = PACCellInvShuffle(workingval);
    workingval = PACInvSub(workingval);
    workingval = PACMult(workingval);
    workingval = PACCellInvShuffle(workingval);
    workingval = workingval EOR key0;
    workingval = workingval EOR runningmod;

    for i = 0 to 4
        workingval = PACInvSub(workingval);
        if i < 4 then
            workingval = PACMult(workingval);
            workingval = PACCellInvShuffle(workingval);
        runningmod = TweakInvShuffle(runningmod<63:0>);
        roundkey = key1 EOR runningmod;
        workingval = workingval EOR RC[4-i];
        workingval = workingval EOR roundkey;

    workingval = workingval EOR Alpha;
    workingval = workingval EOR modk0;

    return workingval;
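
Note: RC[1]..RC[4] are the pi-derived round constants familiar from the PRINCE family, the same family the PACSub substitution below is taken from, and the whitened key modk0 is a one-bit rotate right of key0 whose new bottom bit is key0<63> EOR key0<1>. A C sketch of just that derivation (an illustration, not part of the specification):

    #include <stdint.h>

    /* modk0 = key0<0> : key0<63:2> : (key0<63> EOR key0<1>) */
    uint64_t derive_modk0(uint64_t key0) {
        uint64_t top      = (key0 & 1) << 63;                   /* key0<0>    -> bit 63    */
        uint64_t middle   = (key0 >> 2) << 1;                   /* key0<63:2> -> bits 62:1 */
        uint64_t feedback = ((key0 >> 63) ^ (key0 >> 1)) & 1;   /* key0<63> ^ key0<1>      */
        return top | middle | feedback;
    }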

Library pseudocode for aarch64/functions/pac/computepac/PACCellInvShuffle

// PACCellInvShuffle()
// ===================

bits(64) PACCellInvShuffle(bits(64) indata)
    bits(64) outdata;
    outdata<3:0>   = indata<15:12>;
    outdata<7:4>   = indata<27:24>;
    outdata<11:8>  = indata<51:48>;
    outdata<15:12> = indata<39:36>;
    outdata<19:16> = indata<59:56>;
    outdata<23:20> = indata<47:44>;
    outdata<27:24> = indata<7:4>;
    outdata<31:28> = indata<19:16>;
    outdata<35:32> = indata<35:32>;
    outdata<39:36> = indata<55:52>;
    outdata<43:40> = indata<31:28>;
    outdata<47:44> = indata<11:8>;
    outdata<51:48> = indata<23:20>;
    outdata<55:52> = indata<3:0>;
    outdata<59:56> = indata<43:40>;
    outdata<63:60> = indata<63:60>;
    return outdata;

Library pseudocode for aarch64/functions/pac/computepac/PACCellShuffle

// PACCellShuffle()
// ================

bits(64) PACCellShuffle(bits(64) indata)
    bits(64) outdata;
    outdata<3:0>   = indata<55:52>;
    outdata<7:4>   = indata<27:24>;
    outdata<11:8>  = indata<47:44>;
    outdata<15:12> = indata<3:0>;
    outdata<19:16> = indata<31:28>;
    outdata<23:20> = indata<51:48>;
    outdata<27:24> = indata<7:4>;
    outdata<31:28> = indata<43:40>;
    outdata<35:32> = indata<35:32>;
    outdata<39:36> = indata<15:12>;
    outdata<43:40> = indata<59:56>;
    outdata<47:44> = indata<23:20>;
    outdata<51:48> = indata<11:8>;
    outdata<55:52> = indata<39:36>;
    outdata<59:56> = indata<19:16>;
    outdata<63:60> = indata<63:60>;
    return outdata;

Library pseudocode for aarch64/functions/pac/computepac/PACInvSub

// PACInvSub()
// ===========

bits(64) PACInvSub(bits(64) Tinput)
    // This is a 4-bit substitution from the PRINCE-family cipher
    bits(64) Toutput;
    for i = 0 to 15
        case Tinput<4*i+3:4*i> of
            when '0000' Toutput<4*i+3:4*i> = '0101';
            when '0001' Toutput<4*i+3:4*i> = '1110';
            when '0010' Toutput<4*i+3:4*i> = '1101';
            when '0011' Toutput<4*i+3:4*i> = '1000';
            when '0100' Toutput<4*i+3:4*i> = '1010';
            when '0101' Toutput<4*i+3:4*i> = '1011';
            when '0110' Toutput<4*i+3:4*i> = '0001';
            when '0111' Toutput<4*i+3:4*i> = '1001';
            when '1000' Toutput<4*i+3:4*i> = '0010';
            when '1001' Toutput<4*i+3:4*i> = '0110';
            when '1010' Toutput<4*i+3:4*i> = '1111';
            when '1011' Toutput<4*i+3:4*i> = '0000';
            when '1100' Toutput<4*i+3:4*i> = '0100';
            when '1101' Toutput<4*i+3:4*i> = '1100';
            when '1110' Toutput<4*i+3:4*i> = '0111';
            when '1111' Toutput<4*i+3:4*i> = '0011';
    return Toutput;

Library pseudocode for aarch64/functions/pac/computepac/PACMult

// PACMult()
// =========

bits(64) PACMult(bits(64) Sinput)
    bits(4) t0;
    bits(4) t1;
    bits(4) t2;
    bits(4) t3;
    bits(64) Soutput;

    for i = 0 to 3
        t0<3:0> = RotCell(Sinput<4*(i+8)+3:4*(i+8)>, 1) EOR RotCell(Sinput<4*(i+4)+3:4*(i+4)>, 2);
        t0<3:0> = t0<3:0> EOR RotCell(Sinput<4*(i)+3:4*(i)>, 1);
        t1<3:0> = RotCell(Sinput<4*(i+12)+3:4*(i+12)>, 1) EOR RotCell(Sinput<4*(i+4)+3:4*(i+4)>, 1);
        t1<3:0> = t1<3:0> EOR RotCell(Sinput<4*(i)+3:4*(i)>, 2);
        t2<3:0> = RotCell(Sinput<4*(i+12)+3:4*(i+12)>, 2) EOR RotCell(Sinput<4*(i+8)+3:4*(i+8)>, 1);
        t2<3:0> = t2<3:0> EOR RotCell(Sinput<4*(i)+3:4*(i)>, 1);
        t3<3:0> = RotCell(Sinput<4*(i+12)+3:4*(i+12)>, 1) EOR RotCell(Sinput<4*(i+8)+3:4*(i+8)>, 2);
        t3<3:0> = t3<3:0> EOR RotCell(Sinput<4*(i+4)+3:4*(i+4)>, 1);
        Soutput<4*i+3:4*i> = t3<3:0>;
        Soutput<4*(i+4)+3:4*(i+4)> = t2<3:0>;
        Soutput<4*(i+8)+3:4*(i+8)> = t1<3:0>;
        Soutput<4*(i+12)+3:4*(i+12)> = t0<3:0>;
    return Soutput;
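
Note: PACMult is a MixColumns-style diffusion step over sixteen 4-bit cells; cells i, i+4, i+8 and i+12 form a column, and each output cell is the XOR of rotations of the other three cells in its column. A direct C transcription, assuming RotCell (defined below) is a 4-bit left rotation:

    #include <stdint.h>

    static uint8_t rot4(uint8_t c, int n) {          /* 4-bit rotate left */
        c &= 0xF;
        return ((c << n) | (c >> (4 - n))) & 0xF;
    }
    static uint8_t  get_cell(uint64_t s, int i) { return (s >> (4 * i)) & 0xF; }
    static uint64_t set_cell(uint64_t s, int i, uint8_t v) {
        return (s & ~(0xFull << (4 * i))) | ((uint64_t)(v & 0xF) << (4 * i));
    }

    uint64_t pac_mult(uint64_t in) {
        uint64_t out = 0;
        for (int i = 0; i < 4; i++) {
            uint8_t a = get_cell(in, i),     b = get_cell(in, i + 4);
            uint8_t c = get_cell(in, i + 8), d = get_cell(in, i + 12);
            out = set_cell(out, i,      rot4(d, 1) ^ rot4(c, 2) ^ rot4(b, 1));  /* t3 */
            out = set_cell(out, i + 4,  rot4(d, 2) ^ rot4(c, 1) ^ rot4(a, 1));  /* t2 */
            out = set_cell(out, i + 8,  rot4(d, 1) ^ rot4(b, 1) ^ rot4(a, 2));  /* t1 */
            out = set_cell(out, i + 12, rot4(c, 1) ^ rot4(b, 2) ^ rot4(a, 1));  /* t0 */
        }
        return out;
    }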

Library pseudocode for aarch64/functions/pac/computepac/PACSub

// PACSub()
// ========

bits(64) PACSub(bits(64) Tinput)
    // This is a 4-bit substitution from the PRINCE-family cipher
    bits(64) Toutput;
    for i = 0 to 15
        case Tinput<4*i+3:4*i> of
            when '0000' Toutput<4*i+3:4*i> = '1011';
            when '0001' Toutput<4*i+3:4*i> = '0110';
            when '0010' Toutput<4*i+3:4*i> = '1000';
            when '0011' Toutput<4*i+3:4*i> = '1111';
            when '0100' Toutput<4*i+3:4*i> = '1100';
            when '0101' Toutput<4*i+3:4*i> = '0000';
            when '0110' Toutput<4*i+3:4*i> = '1001';
            when '0111' Toutput<4*i+3:4*i> = '1110';
            when '1000' Toutput<4*i+3:4*i> = '0011';
            when '1001' Toutput<4*i+3:4*i> = '0111';
            when '1010' Toutput<4*i+3:4*i> = '0100';
            when '1011' Toutput<4*i+3:4*i> = '0101';
            when '1100' Toutput<4*i+3:4*i> = '1101';
            when '1101' Toutput<4*i+3:4*i> = '0010';
            when '1110' Toutput<4*i+3:4*i> = '0001';
            when '1111' Toutput<4*i+3:4*i> = '1010';
    return Toutput;

Library pseudocode for aarch64/functions/pac/computepac/RotCell

// RotCell()
// =========

bits(4) RotCell(bits(4) incell, integer amount)
    bits(8) tmp;
    bits(4) outcell;

    // assert amount>3 || amount<1;
    tmp<7:0> = incell<3:0>:incell<3:0>;
    outcell = tmp<7-amount:4-amount>;
    return outcell;

Library pseudocode for aarch64/functions/pac/computepac/TweakCellInvRot

// TweakCellInvRot()
// =================

bits(4) TweakCellInvRot(bits(4) incell)
    bits(4) outcell;
    outcell<3> = incell<2>;
    outcell<2> = incell<1>;
    outcell<1> = incell<0>;
    outcell<0> = incell<0> EOR incell<3>;
    return outcell;

Library pseudocode for aarch64/functions/pac/computepac/TweakCellRot

// TweakCellRot()
// ==============

bits(4) TweakCellRot(bits(4) incell)
    bits(4) outcell;
    outcell<3> = incell<0> EOR incell<1>;
    outcell<2> = incell<3>;
    outcell<1> = incell<2>;
    outcell<0> = incell<1>;
    return outcell;

Library pseudocode for aarch64/functions/pac/computepac/TweakInvShuffle

// TweakInvShuffle()
// =================

bits(64) TweakInvShuffle(bits(64) indata)
    bits(64) outdata;
    outdata<3:0>   = TweakCellInvRot(indata<51:48>);
    outdata<7:4>   = indata<55:52>;
    outdata<11:8>  = indata<23:20>;
    outdata<15:12> = indata<27:24>;
    outdata<19:16> = indata<3:0>;
    outdata<23:20> = indata<7:4>;
    outdata<27:24> = TweakCellInvRot(indata<11:8>);
    outdata<31:28> = indata<15:12>;
    outdata<35:32> = TweakCellInvRot(indata<31:28>);
    outdata<39:36> = TweakCellInvRot(indata<63:60>);
    outdata<43:40> = TweakCellInvRot(indata<59:56>);
    outdata<47:44> = TweakCellInvRot(indata<19:16>);
    outdata<51:48> = indata<35:32>;
    outdata<55:52> = indata<39:36>;
    outdata<59:56> = indata<43:40>;
    outdata<63:60> = TweakCellInvRot(indata<47:44>);
    return outdata;

Library pseudocode for aarch64/functions/pac/computepac/TweakShuffle

// TweakShuffle()
// ==============

bits(64) TweakShuffle(bits(64) indata)
    bits(64) outdata;
    outdata<3:0>   = indata<19:16>;
    outdata<7:4>   = indata<23:20>;
    outdata<11:8>  = TweakCellRot(indata<27:24>);
    outdata<15:12> = indata<31:28>;
    outdata<19:16> = TweakCellRot(indata<47:44>);
    outdata<23:20> = indata<11:8>;
    outdata<27:24> = indata<15:12>;
    outdata<31:28> = TweakCellRot(indata<35:32>);
    outdata<35:32> = indata<51:48>;
    outdata<39:36> = indata<55:52>;
    outdata<43:40> = indata<59:56>;
    outdata<47:44> = TweakCellRot(indata<63:60>);
    outdata<51:48> = TweakCellRot(indata<3:0>);
    outdata<55:52> = indata<7:4>;
    outdata<59:56> = TweakCellRot(indata<43:40>);
    outdata<63:60> = TweakCellRot(indata<39:36>);
    return outdata;

Library pseudocode for aarch64/functions/pac/pac/HavePACExt

// HavePACExt()
// ============
// Returns TRUE if support for the PAC extension is implemented, FALSE otherwise.

boolean HavePACExt()
    return HasArchVersion(ARMv8p3);

// HaveEnhancedPAC()
// =================
// Returns TRUE if support for EnhancedPAC is implemented, FALSE otherwise.

boolean HaveEnhancedPAC()
    return (HavePACExt() && boolean IMPLEMENTATION_DEFINED "Has enhanced PAC functionality");

Library pseudocode for aarch64/functions/pac/pac/PtrHasUpperAndLowerAddRanges

// PtrHasUpperAndLowerAddRanges()
// ==============================
// Returns TRUE if the pointer has upper and lower address ranges, FALSE otherwise.

boolean PtrHasUpperAndLowerAddRanges()
    return PSTATE.EL == EL1 || PSTATE.EL == EL0 || (PSTATE.EL == EL2 && HCR_EL2.E2H == '1');

Library pseudocode for aarch64/functions/pac/strip/Strip

// Strip()
// =======
// Strip() returns a 64-bit value containing A, but replacing the pointer authentication
// code field bits with the extension of the address bits. This can apply to either
// instructions or data; because the use of tagged pointers is distinct for each, they
// might be handled differently.

bits(64) Strip(bits(64) A, boolean data)
    boolean TrapEL2;
    boolean TrapEL3;
    bits(64) original_ptr;
    bits(64) extfield;

    boolean tbi = CalculateTBI(A, data);
    integer bottom_PAC_bit = CalculateBottomPACBit(A<55>);
    extfield = Replicate(A<55>, 64);

    if tbi then
        original_ptr = A<63:56>:extfield<56-bottom_PAC_bit-1:0>:A<bottom_PAC_bit-1:0>;
    else
        original_ptr = extfield<64-bottom_PAC_bit-1:0>:A<bottom_PAC_bit-1:0>;

    case PSTATE.EL of
        when EL0
            TrapEL2 = (EL2Enabled() && HCR_EL2.API == '0' &&
                       (HCR_EL2.TGE == '0' || HCR_EL2.E2H == '0'));
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL1
            TrapEL2 = EL2Enabled() && HCR_EL2.API == '0';
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL2
            TrapEL2 = FALSE;
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL3
            TrapEL2 = FALSE;
            TrapEL3 = FALSE;

    if TrapEL2 then
        TrapPACUse(EL2);
    elsif TrapEL3 then
        TrapPACUse(EL3);
    else
        return original_ptr;

Library pseudocode for aarch64/functions/pac/trappacuse/TrapPACUse

// TrapPACUse()
// ============
// Used for the trapping of the pointer authentication functions by higher exception
// levels.

TrapPACUse(bits(2) target_el)
    assert HaveEL(target_el) && target_el != EL0 && UInt(target_el) >= UInt(PSTATE.EL);

    bits(64) preferred_exception_return = ThisInstrAddr();
    ExceptionRecord exception;
    vect_offset = 0;
    exception = ExceptionSyndrome(Exception_PACTrap);
    AArch64.TakeException(target_el, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/functions/ras/AArch64.ESBOperation

// AArch64.ESBOperation()
// ======================
// Perform the AArch64 ESB operation, either for ESB executed in AArch64 state, or for
// ESB in AArch32 state when SError interrupts are routed to an Exception level using
// AArch64

AArch64.ESBOperation()

    route_to_el3 = (HaveEL(EL3) && SCR_EL3.EA == '1');
    route_to_el2 = (EL2Enabled() && (HCR_EL2.TGE == '1' || HCR_EL2.AMO == '1'));
    target = (if route_to_el3 then EL3 elsif route_to_el2 then EL2 else EL1);

    if target == EL1 then
        mask_active = (PSTATE.EL IN {EL0, EL1});
    elsif HaveVirtHostExt() && target == EL2 && HCR_EL2.<E2H,TGE> == '11' then
        mask_active = (PSTATE.EL IN {EL0, EL2});
    else
        mask_active = (PSTATE.EL == target);

    mask_set = (PSTATE.A == '1' &&
                (!HaveDoubleFaultExt() || SCR_EL3.EA == '0' || PSTATE.EL != EL3 ||
                 SCR_EL3.NMEA == '0'));
    intdis = (Halted() || ExternalDebugInterruptsDisabled(target));
    masked = (UInt(target) < UInt(PSTATE.EL)) || intdis || (mask_active && mask_set);

    // Check for a masked Physical SError pending
    if IsPhysicalSErrorPending() && masked then
        // This function might be called for an interworking case, and INTdis is masking
        // the SError interrupt.
        if ELUsingAArch32(S1TranslationRegime()) then
            syndrome32 = AArch32.PhysicalSErrorSyndrome();
            DISR = AArch32.ReportDeferredSError(syndrome32.AET, syndrome32.ExT);
        else
            implicit_esb = FALSE;
            syndrome64 = AArch64.PhysicalSErrorSyndrome(implicit_esb);
            DISR_EL1 = AArch64.ReportDeferredSError(syndrome64);
        ClearPendingPhysicalSError();                 // Set ISR_EL1.A to 0

    return;

Library pseudocode for aarch64/functions/ras/AArch64.PhysicalSErrorSyndrome

// Return the SError syndrome
bits(25) AArch64.PhysicalSErrorSyndrome(boolean implicit_esb);

Library pseudocode for aarch64/functions/ras/AArch64.ReportDeferredSError

// AArch64.ReportDeferredSError()
// ==============================
// Generate deferred SError syndrome

bits(64) AArch64.ReportDeferredSError(bits(25) syndrome)
    bits(64) target;
    target<31>   = '1';              // A
    target<24>   = syndrome<24>;     // IDS
    target<23:0> = syndrome<23:0>;   // ISS
    return target;
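
Note: the deferred-syndrome value has a fixed layout: bit 31 is the A flag, bit 24 is IDS, and bits 23:0 carry the ISS. A C sketch of the same packing:

    #include <stdint.h>

    uint64_t report_deferred_serror(uint32_t syndrome25) {
        uint64_t t = 0;
        t |= 1ull << 31;                                /* A: an SError was deferred */
        t |= (uint64_t)((syndrome25 >> 24) & 1) << 24;  /* IDS */
        t |= syndrome25 & 0xFFFFFFu;                    /* ISS */
        return t;
    }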

Library pseudocode for aarch64/functions/ras/AArch64.vESBOperation

// AArch64.vESBOperation()
// =======================
// Perform the AArch64 ESB operation for virtual SError interrupts, either for ESB
// executed in AArch64 state, or for ESB in AArch32 state with EL2 using AArch64 state

AArch64.vESBOperation()
    assert EL2Enabled() && PSTATE.EL IN {EL0, EL1};

    // If physical SError interrupts are routed to EL2, and TGE is not set, then a virtual
    // SError interrupt might be pending
    vSEI_enabled = HCR_EL2.TGE == '0' && HCR_EL2.AMO == '1';
    vSEI_pending = vSEI_enabled && HCR_EL2.VSE == '1';
    vintdis = Halted() || ExternalDebugInterruptsDisabled(EL1);
    vmasked = vintdis || PSTATE.A == '1';

    // Check for a masked virtual SError pending
    if vSEI_pending && vmasked then
        // This function might be called for the interworking case, and INTdis is masking
        // the virtual SError interrupt.
        if ELUsingAArch32(EL1) then
            VDISR = AArch32.ReportDeferredSError(VDFSR<15:14>, VDFSR<12>);
        else
            VDISR_EL2 = AArch64.ReportDeferredSError(VSESR_EL2<24:0>);
        HCR_EL2.VSE = '0';                 // Clear pending virtual SError

    return;

Library pseudocode for aarch64/functions/registers/AArch64.MaybeZeroRegisterUppers

// AArch64.MaybeZeroRegisterUppers()
// =================================
// On taking an exception to AArch64 from AArch32, it is CONSTRAINED UNPREDICTABLE whether the top
// 32 bits of registers visible at any lower Exception level using AArch32 are set to zero.

AArch64.MaybeZeroRegisterUppers()
    assert UsingAArch32();    // Always called from AArch32 state before entering AArch64 state

    if PSTATE.EL == EL0 && !ELUsingAArch32(EL1) then
        first = 0;  last = 14;  include_R15 = FALSE;
    elsif PSTATE.EL IN {EL0, EL1} && EL2Enabled() && !ELUsingAArch32(EL2) then
        first = 0;  last = 30;  include_R15 = FALSE;
    else
        first = 0;  last = 30;  include_R15 = TRUE;

    for n = first to last
        if (n != 15 || include_R15) && ConstrainUnpredictableBool(Unpredictable_ZEROUPPER) then
            _R[n]<63:32> = Zeros();

    return;

Library pseudocode for aarch64/functions/registers/AArch64.ResetGeneralRegisters

// AArch64.ResetGeneralRegisters()
// ===============================

AArch64.ResetGeneralRegisters()

    for i = 0 to 30
        X[i] = bits(64) UNKNOWN;

    return;

Library pseudocode for aarch64/functions/registers/AArch64.ResetSIMDFPRegisters

// AArch64.ResetSIMDFPRegisters()
// ==============================

AArch64.ResetSIMDFPRegisters()

    for i = 0 to 31
        V[i] = bits(128) UNKNOWN;

    return;

Library pseudocode for aarch64/functions/registers/AArch64.ResetSpecialRegisters

// AArch64.ResetSpecialRegisters()
// ===============================

AArch64.ResetSpecialRegisters()

    // AArch64 special registers
    SP_EL0 = bits(64) UNKNOWN;
    SP_EL1 = bits(64) UNKNOWN;
    SPSR_EL1 = bits(32) UNKNOWN;
    ELR_EL1 = bits(64) UNKNOWN;
    if HaveEL(EL2) then
        SP_EL2 = bits(64) UNKNOWN;
        SPSR_EL2 = bits(32) UNKNOWN;
        ELR_EL2 = bits(64) UNKNOWN;
    if HaveEL(EL3) then
        SP_EL3 = bits(64) UNKNOWN;
        SPSR_EL3 = bits(32) UNKNOWN;
        ELR_EL3 = bits(64) UNKNOWN;

    // AArch32 special registers that are not architecturally mapped to AArch64 registers
    if HaveAArch32EL(EL1) then
        SPSR_fiq = bits(32) UNKNOWN;
        SPSR_irq = bits(32) UNKNOWN;
        SPSR_abt = bits(32) UNKNOWN;
        SPSR_und = bits(32) UNKNOWN;

    // External debug special registers
    DLR_EL0 = bits(64) UNKNOWN;
    DSPSR_EL0 = bits(32) UNKNOWN;

    return;

Library pseudocode for aarch64/functions/registers/AArch64.ResetSystemRegisters

AArch64.ResetSystemRegisters(boolean cold_reset);

Library pseudocode for aarch64/functions/registers/PC

// PC - non-assignment form
// ========================
// Read program counter.

bits(64) PC[]
    return _PC;

Library pseudocode for aarch64/functions/registers/SP

// SP[] - assignment form
// ======================
// Write to stack pointer from either a 32-bit or a 64-bit value.

SP[] = bits(width) value
    assert width IN {32,64};
    if PSTATE.SP == '0' then
        SP_EL0 = ZeroExtend(value);
    else
        case PSTATE.EL of
            when EL0  SP_EL0 = ZeroExtend(value);
            when EL1  SP_EL1 = ZeroExtend(value);
            when EL2  SP_EL2 = ZeroExtend(value);
            when EL3  SP_EL3 = ZeroExtend(value);
    return;

// SP[] - non-assignment form
// ==========================
// Read stack pointer with implicit slice of 8, 16, 32 or 64 bits.

bits(width) SP[]
    assert width IN {8,16,32,64};
    if PSTATE.SP == '0' then
        return SP_EL0<width-1:0>;
    else
        case PSTATE.EL of
            when EL0  return SP_EL0<width-1:0>;
            when EL1  return SP_EL1<width-1:0>;
            when EL2  return SP_EL2<width-1:0>;
            when EL3  return SP_EL3<width-1:0>;

Library pseudocode for aarch64/functions/registers/V

// V[] - assignment form
// =====================

V[integer n] = bits(width) value
    assert n >= 0 && n <= 31;
    assert width IN {8,16,32,64,128};
    integer vlen = if IsSVEEnabled(PSTATE.EL) then VL else 128;
    if ConstrainUnpredictableBool(Unpredictable_SVEZEROUPPER) then
        _Z[n] = ZeroExtend(value);
    else
        _Z[n]<vlen-1:0> = ZeroExtend(value);

// V[] - non-assignment form
// =========================

bits(width) V[integer n]
    assert n >= 0 && n <= 31;
    assert width IN {8,16,32,64,128};
    return _Z[n]<width-1:0>;

Library pseudocode for aarch64/functions/registers/Vpart

// Vpart[] - non-assignment form
// =============================

bits(width) Vpart[integer n, integer part]
    assert n >= 0 && n <= 31;
    assert part IN {0, 1};
    if part == 0 then
        assert width IN {8,16,32,64};
        return V[n];
    else
        assert width == 64;
        return _V[n]<(width * 2)-1:width>;

// Vpart[] - assignment form
// =========================

Vpart[integer n, integer part] = bits(width) value
    assert n >= 0 && n <= 31;
    assert part IN {0, 1};
    if part == 0 then
        assert width IN {8,16,32,64};
        V[n] = value;
    else
        assert width == 64;
        bits(64) vreg = V[n];
        V[n] = value<63:0> : vreg;
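
Note: Vpart treats a 128-bit SIMD&FP register as two 64-bit halves; a part-1 write of a 64-bit value replaces only the top half. A C sketch of that behaviour over a two-word register model (vreg_t is an assumption of this sketch):

    #include <stdint.h>

    typedef struct { uint64_t lo, hi; } vreg_t;

    void vpart_write(vreg_t *v, int part, uint64_t value) {
        if (part == 0) {
            v->lo = value;   /* part 0 writes the bottom and zero-extends */
            v->hi = 0;
        } else {
            v->hi = value;   /* part 1 writes the top half only */
        }
    }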

Library pseudocode for aarch64/functions/registers/X

// X[] - assignment form
// =====================
// Write to general-purpose register from either a 32-bit or a 64-bit value.

X[integer n] = bits(width) value
    assert n >= 0 && n <= 31;
    assert width IN {32,64};
    if n != 31 then
        _R[n] = ZeroExtend(value);
    return;

// X[] - non-assignment form
// =========================
// Read from general-purpose register with implicit slice of 8, 16, 32 or 64 bits.

bits(width) X[integer n]
    assert n >= 0 && n <= 31;
    assert width IN {8,16,32,64};
    if n != 31 then
        return _R[n]<width-1:0>;
    else
        return Zeros(width);
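
Note: register index 31 acts as XZR here: writes are discarded and reads return zero. A C sketch of that convention:

    #include <stdint.h>

    static uint64_t xregs[31];                /* X0..X30 */

    void x_write(int n, uint64_t v) { if (n != 31) xregs[n] = v; }

    uint64_t x_read(int n, int width_bits) {  /* width_bits in {8,16,32,64} */
        uint64_t v = (n != 31) ? xregs[n] : 0;
        return (width_bits == 64) ? v : (v & ((1ull << width_bits) - 1));
    }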

Library pseudocode for aarch64/functions/sve/AArch32.IsFPEnabled

// AArch32.IsFPEnabled()
// =====================

boolean AArch32.IsFPEnabled(bits(2) el)
    if el == EL0 && !ELUsingAArch32(EL1) then
        return AArch64.IsFPEnabled(el);

    if HaveEL(EL3) && ELUsingAArch32(EL3) && !IsSecure() then
        // Check if access disabled in NSACR
        if NSACR.cp10 == '0' then return FALSE;

    if el IN {EL0, EL1} then
        // Check if access disabled in CPACR
        case CPACR.cp10 of
            when 'x0' disabled = TRUE;
            when '01' disabled = (el == EL0);
            when '11' disabled = FALSE;
        if disabled then return FALSE;

    if el IN {EL0, EL1, EL2} then
        if EL2Enabled() then
            if !ELUsingAArch32(EL2) then
                if CPTR_EL2.TFP == '1' then return FALSE;
            else
                if HCPTR.TCP10 == '1' then return FALSE;

    if HaveEL(EL3) && !ELUsingAArch32(EL3) then
        // Check if access disabled in CPTR_EL3
        if CPTR_EL3.TFP == '1' then return FALSE;

    return TRUE;

Library pseudocode for aarch64/functions/sve/AArch64.IsFPEnabled

// AArch64.IsFPEnabled()
// =====================

boolean AArch64.IsFPEnabled(bits(2) el)
    // Check if access disabled in CPACR_EL1
    if el IN {EL0, EL1} then
        // Check FP&SIMD at EL0/EL1
        case CPACR[].FPEN of
            when 'x0' disabled = TRUE;
            when '01' disabled = (el == EL0);
            when '11' disabled = FALSE;
        if disabled then return FALSE;

    // Check if access disabled in CPTR_EL2
    if el IN {EL0, EL1, EL2} && EL2Enabled() then
        if HaveVirtHostExt() && HCR_EL2.E2H == '1' then
            if CPTR_EL2.FPEN == 'x0' then return FALSE;
        else
            if CPTR_EL2.TFP == '1' then return FALSE;

    // Check if access disabled in CPTR_EL3
    if HaveEL(EL3) then
        if CPTR_EL3.TFP == '1' then return FALSE;

    return TRUE;

Library pseudocode for aarch64/functions/sve/CeilPow2

// CeilPow2()
// ==========
// For a positive integer X, return the smallest power of 2 >= X

integer CeilPow2(integer x)
    if x == 0 then return 0;
    if x == 1 then return 2;
    return FloorPow2(x - 1) * 2;
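
Note: as defined here, CeilPow2(1) is 2 rather than 1, and CeilPow2 is built on FloorPow2. A C sketch, computing FloorPow2 inline as the largest power of two not exceeding its argument:

    unsigned ceil_pow2(unsigned x) {
        if (x == 0) return 0;
        if (x == 1) return 2;                 /* per the definition above */
        unsigned f = 1;
        while (f <= (x - 1) / 2) f <<= 1;     /* f = FloorPow2(x - 1) */
        return f * 2;
    }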

Library pseudocode for aarch64/functions/sve/CheckSVEEnabled

// CheckSVEEnabled()
// =================

CheckSVEEnabled()
    // Check if access disabled in CPACR_EL1
    if PSTATE.EL IN {EL0, EL1} then
        // Check SVE at EL0/EL1
        case CPACR[].ZEN of
            when 'x0' disabled = TRUE;
            when '01' disabled = PSTATE.EL == EL0;
            when '11' disabled = FALSE;
        if disabled then SVEAccessTrap(EL1);

        // Check FP&SIMD at EL0/EL1
        case CPACR[].FPEN of
            when 'x0' disabled = TRUE;
            when '01' disabled = PSTATE.EL == EL0;
            when '11' disabled = FALSE;
        if disabled then AArch64.AdvSIMDFPAccessTrap(EL1);

    if PSTATE.EL IN {EL0, EL1, EL2} && EL2Enabled() then
        if HaveVirtHostExt() && HCR_EL2.E2H == '1' then
            if CPTR_EL2.ZEN == 'x0' then SVEAccessTrap(EL2);
            if CPTR_EL2.FPEN == 'x0' then AArch64.AdvSIMDFPAccessTrap(EL2);
        else
            if CPTR_EL2.TZ == '1' then SVEAccessTrap(EL2);
            if CPTR_EL2.TFP == '1' then AArch64.AdvSIMDFPAccessTrap(EL2);

    // Check if access disabled in CPTR_EL3
    if HaveEL(EL3) then
        if CPTR_EL3.EZ == '0' then SVEAccessTrap(EL3);
        if CPTR_EL3.TFP == '1' then AArch64.AdvSIMDFPAccessTrap(EL3);

Library pseudocode for aarch64/functions/sve/DecodePredCount

// DecodePredCount()
// =================

integer DecodePredCount(bits(5) pattern, integer esize)
    integer elements = VL DIV esize;
    integer numElem;
    case pattern of
        when '00000' numElem = FloorPow2(elements);
        when '00001' numElem = if elements >= 1 then 1 else 0;
        when '00010' numElem = if elements >= 2 then 2 else 0;
        when '00011' numElem = if elements >= 3 then 3 else 0;
        when '00100' numElem = if elements >= 4 then 4 else 0;
        when '00101' numElem = if elements >= 5 then 5 else 0;
        when '00110' numElem = if elements >= 6 then 6 else 0;
        when '00111' numElem = if elements >= 7 then 7 else 0;
        when '01000' numElem = if elements >= 8 then 8 else 0;
        when '01001' numElem = if elements >= 16 then 16 else 0;
        when '01010' numElem = if elements >= 32 then 32 else 0;
        when '01011' numElem = if elements >= 64 then 64 else 0;
        when '01100' numElem = if elements >= 128 then 128 else 0;
        when '01101' numElem = if elements >= 256 then 256 else 0;
        when '11101' numElem = elements - (elements MOD 4);
        when '11110' numElem = elements - (elements MOD 3);
        when '11111' numElem = elements;
        otherwise    numElem = 0;
    return numElem;
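The pattern names in the architecture (POW2, VL1..VL256, MUL4, MUL3, ALL) map directly onto the cases above. As an illustrative aside, a C sketch of the same decoding, reusing floor_pow2() from the sketch after CeilPow2() (the function and table names are ours):

    /* Decode an SVE predicate-count pattern; vl and esize are in bits. */
    static int decode_pred_count(unsigned pattern, int esize, int vl) {
        static const int vlcount[] = {0, 1, 2, 3, 4, 5, 6, 7, 8, 16, 32, 64, 128, 256};
        int elements = vl / esize;
        if (pattern == 0x00) return (int)floor_pow2((unsigned)elements);  /* POW2 */
        if (pattern >= 0x01 && pattern <= 0x0d)                           /* VL1..VL256 */
            return elements >= vlcount[pattern] ? vlcount[pattern] : 0;
        if (pattern == 0x1d) return elements - elements % 4;              /* MUL4 */
        if (pattern == 0x1e) return elements - elements % 3;              /* MUL3 */
        if (pattern == 0x1f) return elements;                             /* ALL  */
        return 0;                                                         /* reserved */
    }

For example, with VL = 384 and esize = 64 there are 6 elements, so MUL4 yields 4, MUL3 yields 6, and VL8 yields 0.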

Library pseudocode for aarch64/functions/sve/ElemFFR

// ElemFFR[] - non-assignment form
// ===============================

bit ElemFFR[integer e, integer esize]
    return ElemP[_FFR, e, esize];

// ElemFFR[] - assignment form
// ===========================

ElemFFR[integer e, integer esize] = bit value
    integer psize = esize DIV 8;
    integer n = e * psize;
    assert n >= 0 && (n + psize) <= PL;
    _FFR<n+psize-1:n> = ZeroExtend(value, psize);
    return;

Library pseudocode for aarch64/functions/sve/ElemP

// ElemP[] - non-assignment form
// =============================

bit ElemP[bits(N) pred, integer e, integer esize]
    integer n = e * (esize DIV 8);
    assert n >= 0 && n < N;
    return pred<n>;

// ElemP[] - assignment form
// =========================

ElemP[bits(N) &pred, integer e, integer esize] = bit value
    integer psize = esize DIV 8;
    integer n = e * psize;
    assert n >= 0 && (n + psize) <= N;
    pred<n+psize-1:n> = ZeroExtend(value, psize);
    return;
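A predicate register provides one bit per byte of the vector, so an element of esize bits owns esize/8 predicate bits, of which only the lowest is significant; the assignment form zero-extends the written bit across the field. As an illustrative aside, a C sketch over a little-endian byte array (the helper names are ours):

    #include <stdint.h>

    /* ElemP[] read: bit e*(esize/8) of the predicate bit array. */
    static int elem_p_get(const uint8_t *pred, int e, int esize) {
        int n = e * (esize / 8);
        return (pred[n / 8] >> (n % 8)) & 1;
    }

    /* ElemP[] write: zero the psize-bit field, then deposit the value bit. */
    static void elem_p_set(uint8_t *pred, int e, int esize, int value) {
        int psize = esize / 8;
        int n = e * psize;
        for (int i = 0; i < psize; i++)
            pred[(n + i) / 8] &= (uint8_t)~(1u << ((n + i) % 8));
        pred[n / 8] |= (uint8_t)((value & 1) << (n % 8));
    }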

Library pseudocode for aarch64/functions/sve/FFR

// FFR[] - non-assignment form
// ===========================

bits(width) FFR[]
    assert width == PL;
    return _FFR<width-1:0>;

// FFR[] - assignment form
// =======================

FFR[] = bits(width) value
    assert width == PL;
    if ConstrainUnpredictableBool(Unpredictable_SVEZEROUPPER) then
        _FFR = ZeroExtend(value);
    else
        _FFR<width-1:0> = value;

Library pseudocode for aarch64/functions/sve/FPCompareNE

// FPCompareNE()
// =============

boolean FPCompareNE(bits(N) op1, bits(N) op2, FPCRType fpcr)
    assert N IN {16,32,64};
    (type1,sign1,value1) = FPUnpack(op1, fpcr);
    (type2,sign2,value2) = FPUnpack(op2, fpcr);
    if type1==FPType_SNaN || type1==FPType_QNaN || type2==FPType_SNaN || type2==FPType_QNaN then
        result = TRUE;
        if type1==FPType_SNaN || type2==FPType_SNaN then
            FPProcessException(FPExc_InvalidOp, fpcr);
    else
        // All non-NaN cases can be evaluated on the values produced by FPUnpack()
        result = (value1 != value2);
    return result;

Library pseudocode for aarch64/functions/sve/FPCompareUN

// FPCompareUN()
// =============

boolean FPCompareUN(bits(N) op1, bits(N) op2, FPCRType fpcr)
    assert N IN {16,32,64};
    (type1,sign1,value1) = FPUnpack(op1, fpcr);
    (type2,sign2,value2) = FPUnpack(op2, fpcr);
    if type1==FPType_SNaN || type2==FPType_SNaN then
        FPProcessException(FPExc_InvalidOp, fpcr);
    return (type1==FPType_SNaN || type1==FPType_QNaN || type2==FPType_SNaN || type2==FPType_QNaN);
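FPCompareNE() and FPCompareUN() implement the IEEE 754 "not equal" and "unordered" predicates: any NaN operand makes the first return TRUE, and is the definition of the second. As an illustrative aside, the C99 equivalents, minus the signalling-NaN exception behaviour that C cannot express (names are ours):

    #include <math.h>
    #include <stdbool.h>

    /* NaN operands compare "not equal"; otherwise an ordinary value compare. */
    static bool fp_compare_ne(double a, double b) {
        return isunordered(a, b) || a != b;
    }

    /* TRUE iff either operand is a NaN. */
    static bool fp_compare_un(double a, double b) {
        return isunordered(a, b);
    }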

Library pseudocode for aarch64/functions/sve/FPConvertSVE

// FPConvertSVE()
// ==============

bits(M) FPConvertSVE(bits(N) op, FPCRType fpcr, FPRounding rounding)
    fpcr.AHP = '0';
    return FPConvert(op, fpcr, rounding);

// FPConvertSVE()
// ==============

bits(M) FPConvertSVE(bits(N) op, FPCRType fpcr)
    fpcr.AHP = '0';
    return FPConvert(op, fpcr, FPRoundingMode(fpcr));

Library pseudocode for aarch64/functions/sve/FPExpA

// FPExpA()
// ========

bits(N) FPExpA(bits(N) op)
    assert N IN {16,32,64};
    bits(N) result;
    bits(N) coeff;
    integer idx = if N == 16 then UInt(op<4:0>) else UInt(op<5:0>);

    coeff = FPExpCoefficient[idx];
    if N == 16 then
        result<15:0> = '0':op<9:5>:coeff<9:0>;
    elsif N == 32 then
        result<31:0> = '0':op<13:6>:coeff<22:0>;
    else // N == 64
        result<63:0> = '0':op<16:6>:coeff<51:0>;

    return result;

Library pseudocode for aarch64/functions/sve/FPExpCoefficient

// FPExpCoefficient()
// ==================

bits(N) FPExpCoefficient[integer index]
    assert N IN {16,32,64};
    integer result;

    if N == 16 then
        case index of
            when  0 result = 0x0000;  when  1 result = 0x0016;  when  2 result = 0x002d;  when  3 result = 0x0045;
            when  4 result = 0x005d;  when  5 result = 0x0075;  when  6 result = 0x008e;  when  7 result = 0x00a8;
            when  8 result = 0x00c2;  when  9 result = 0x00dc;  when 10 result = 0x00f8;  when 11 result = 0x0114;
            when 12 result = 0x0130;  when 13 result = 0x014d;  when 14 result = 0x016b;  when 15 result = 0x0189;
            when 16 result = 0x01a8;  when 17 result = 0x01c8;  when 18 result = 0x01e8;  when 19 result = 0x0209;
            when 20 result = 0x022b;  when 21 result = 0x024e;  when 22 result = 0x0271;  when 23 result = 0x0295;
            when 24 result = 0x02ba;  when 25 result = 0x02e0;  when 26 result = 0x0306;  when 27 result = 0x032e;
            when 28 result = 0x0356;  when 29 result = 0x037f;  when 30 result = 0x03a9;  when 31 result = 0x03d4;

    elsif N == 32 then
        case index of
            when  0 result = 0x000000;  when  1 result = 0x0164d2;  when  2 result = 0x02cd87;  when  3 result = 0x043a29;
            when  4 result = 0x05aac3;  when  5 result = 0x071f62;  when  6 result = 0x08980f;  when  7 result = 0x0a14d5;
            when  8 result = 0x0b95c2;  when  9 result = 0x0d1adf;  when 10 result = 0x0ea43a;  when 11 result = 0x1031dc;
            when 12 result = 0x11c3d3;  when 13 result = 0x135a2b;  when 14 result = 0x14f4f0;  when 15 result = 0x16942d;
            when 16 result = 0x1837f0;  when 17 result = 0x19e046;  when 18 result = 0x1b8d3a;  when 19 result = 0x1d3eda;
            when 20 result = 0x1ef532;  when 21 result = 0x20b051;  when 22 result = 0x227043;  when 23 result = 0x243516;
            when 24 result = 0x25fed7;  when 25 result = 0x27cd94;  when 26 result = 0x29a15b;  when 27 result = 0x2b7a3a;
            when 28 result = 0x2d583f;  when 29 result = 0x2f3b79;  when 30 result = 0x3123f6;  when 31 result = 0x3311c4;
            when 32 result = 0x3504f3;  when 33 result = 0x36fd92;  when 34 result = 0x38fbaf;  when 35 result = 0x3aff5b;
            when 36 result = 0x3d08a4;  when 37 result = 0x3f179a;  when 38 result = 0x412c4d;  when 39 result = 0x4346cd;
            when 40 result = 0x45672a;  when 41 result = 0x478d75;  when 42 result = 0x49b9be;  when 43 result = 0x4bec15;
            when 44 result = 0x4e248c;  when 45 result = 0x506334;  when 46 result = 0x52a81e;  when 47 result = 0x54f35b;
            when 48 result = 0x5744fd;  when 49 result = 0x599d16;  when 50 result = 0x5bfbb8;  when 51 result = 0x5e60f5;
            when 52 result = 0x60ccdf;  when 53 result = 0x633f89;  when 54 result = 0x65b907;  when 55 result = 0x68396a;
            when 56 result = 0x6ac0c7;  when 57 result = 0x6d4f30;  when 58 result = 0x6fe4ba;  when 59 result = 0x728177;
            when 60 result = 0x75257d;  when 61 result = 0x77d0df;  when 62 result = 0x7a83b3;  when 63 result = 0x7d3e0c;

    else // N == 64
        case index of
            when  0 result = 0x0000000000000;  when  1 result = 0x02C9A3E778061;
            when  2 result = 0x059B0D3158574;  when  3 result = 0x0874518759BC8;
            when  4 result = 0x0B5586CF9890F;  when  5 result = 0x0E3EC32D3D1A2;
            when  6 result = 0x11301D0125B51;  when  7 result = 0x1429AAEA92DE0;
            when  8 result = 0x172B83C7D517B;  when  9 result = 0x1A35BEB6FCB75;
            when 10 result = 0x1D4873168B9AA;  when 11 result = 0x2063B88628CD6;
            when 12 result = 0x2387A6E756238;  when 13 result = 0x26B4565E27CDD;
            when 14 result = 0x29E9DF51FDEE1;  when 15 result = 0x2D285A6E4030B;
            when 16 result = 0x306FE0A31B715;  when 17 result = 0x33C08B26416FF;
            when 18 result = 0x371A7373AA9CB;  when 19 result = 0x3A7DB34E59FF7;
            when 20 result = 0x3DEA64C123422;  when 21 result = 0x4160A21F72E2A;
            when 22 result = 0x44E086061892D;  when 23 result = 0x486A2B5C13CD0;
            when 24 result = 0x4BFDAD5362A27;  when 25 result = 0x4F9B2769D2CA7;
            when 26 result = 0x5342B569D4F82;  when 27 result = 0x56F4736B527DA;
            when 28 result = 0x5AB07DD485429;  when 29 result = 0x5E76F15AD2148;
            when 30 result = 0x6247EB03A5585;  when 31 result = 0x6623882552225;
            when 32 result = 0x6A09E667F3BCD;  when 33 result = 0x6DFB23C651A2F;
            when 34 result = 0x71F75E8EC5F74;  when 35 result = 0x75FEB564267C9;
            when 36 result = 0x7A11473EB0187;  when 37 result = 0x7E2F336CF4E62;
            when 38 result = 0x82589994CCE13;  when 39 result = 0x868D99B4492ED;
            when 40 result = 0x8ACE5422AA0DB;  when 41 result = 0x8F1AE99157736;
            when 42 result = 0x93737B0CDC5E5;  when 43 result = 0x97D829FDE4E50;
            when 44 result = 0x9C49182A3F090;  when 45 result = 0xA0C667B5DE565;
            when 46 result = 0xA5503B23E255D;  when 47 result = 0xA9E6B5579FDBF;
            when 48 result = 0xAE89F995AD3AD;  when 49 result = 0xB33A2B84F15FB;
            when 50 result = 0xB7F76F2FB5E47;  when 51 result = 0xBCC1E904BC1D2;
            when 52 result = 0xC199BDD85529C;  when 53 result = 0xC67F12E57D14B;
            when 54 result = 0xCB720DCEF9069;  when 55 result = 0xD072D4A07897C;
            when 56 result = 0xD5818DCFBA487;  when 57 result = 0xDA9E603DB3285;
            when 58 result = 0xDFC97337B9B5F;  when 59 result = 0xE502EE78B3FF6;
            when 60 result = 0xEA4AFA2A490DA;  when 61 result = 0xEFA1BEE615A27;
            when 62 result = 0xF50765B6E4540;  when 63 result = 0xFA7C1819E90D8;

    return result<N-1:0>;
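As an observation (not normative text): the N == 64 entries are the 52 fraction bits of 2^(i/64) for i = 0..63, the classic exp2 lookup table, and the narrower tables hold the same values at the narrower fraction widths. A C program that regenerates the 64-bit column, assuming a libm whose exp2() is correctly rounded:

    #include <math.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        for (int i = 0; i < 64; i++) {
            double v = exp2(i / 64.0);   /* in [1,2), so only the fraction varies */
            uint64_t bits;
            memcpy(&bits, &v, sizeof bits);
            printf("when %2d result = 0x%013llX;\n",
                   i, (unsigned long long)(bits & ((1ULL << 52) - 1)));
        }
        return 0;
    }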

Library pseudocode for aarch64/functions/sve/FPMinNormal

// FPMinNormal()
// =============

bits(N) FPMinNormal(bit sign)
    assert N IN {16,32,64};
    constant integer E = (if N == 16 then 5 elsif N == 32 then 8 else 11);
    constant integer F = N - (E + 1);
    exp  = Zeros(E-1):'1';
    frac = Zeros(F);
    return sign : exp : frac;

Library pseudocode for aarch64/functions/sve/FPOne

// FPOne()
// =======

bits(N) FPOne(bit sign)
    assert N IN {16,32,64};
    constant integer E = (if N == 16 then 5 elsif N == 32 then 8 else 11);
    constant integer F = N - (E + 1);
    exp  = '0':Ones(E-1);
    frac = Zeros(F);
    return sign : exp : frac;

Library pseudocode for aarch64/functions/sve/FPPointFive

// FPPointFive()
// =============

bits(N) FPPointFive(bit sign)
    assert N IN {16,32,64};
    constant integer E = (if N == 16 then 5 elsif N == 32 then 8 else 11);
    constant integer F = N - (E + 1);
    exp  = '0':Ones(E-2):'0';
    frac = Zeros(F);
    return sign : exp : frac;
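All three constructors above just assemble sign : exponent : fraction fields. For N == 32 (E = 8, F = 23) they produce the familiar single-precision encodings, as this illustrative C sketch shows (names are ours):

    #include <stdint.h>

    static uint32_t fp_one(int sign)         /* exp = '0':Ones(7)     -> 0x3F800000 */
    { return ((uint32_t)sign << 31) | (0x7Fu << 23); }

    static uint32_t fp_point_five(int sign)  /* exp = '0':Ones(6):'0' -> 0x3F000000 */
    { return ((uint32_t)sign << 31) | (0x7Eu << 23); }

    static uint32_t fp_min_normal(int sign)  /* exp = Zeros(7):'1'    -> 0x00800000 */
    { return ((uint32_t)sign << 31) | (0x01u << 23); }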

Library pseudocode for aarch64/functions/sve/FPProcess

// FPProcess()
// ===========

bits(N) FPProcess(bits(N) input)
    bits(N) result;
    assert N IN {16,32,64};
    (type,sign,value) = FPUnpack(input, FPCR);
    if type == FPType_SNaN || type == FPType_QNaN then
        result = FPProcessNaN(type, input, FPCR);
    elsif type == FPType_Infinity then
        result = FPInfinity(sign);
    elsif type == FPType_Zero then
        result = FPZero(sign);
    else
        result = FPRound(value, FPCR);
    return result;

Library pseudocode for aarch64/functions/sve/FPScale

// FPScale()
// =========

bits(N) FPScale(bits(N) op, integer scale, FPCRType fpcr)
    assert N IN {16,32,64};
    (type,sign,value) = FPUnpack(op, fpcr);
    if type == FPType_SNaN || type == FPType_QNaN then
        result = FPProcessNaN(type, op, fpcr);
    elsif type == FPType_Zero then
        result = FPZero(sign);
    elsif type == FPType_Infinity then
        result = FPInfinity(sign);
    else
        result = FPRound(value * (2.0^scale), fpcr);
    return result;
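For non-NaN inputs FPScale() is the IEEE 754 scaleB operation: multiply by 2^scale with a single rounding. C exposes exactly this as scalbn(), which also propagates NaN, zero and infinity unchanged, so an illustrative equivalent (SNaN quieting and exception signalling aside) is one line:

    #include <math.h>

    static double fp_scale(double op, int scale) {
        return scalbn(op, scale);   /* op * 2^scale, rounded once */
    }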

Library pseudocode for aarch64/functions/sve/FPTrigMAdd

// FPTrigMAdd()
// ============

bits(N) FPTrigMAdd(integer x, bits(N) op1, bits(N) op2, FPCRType fpcr)
    assert N IN {16,32,64};
    assert x >= 0;
    assert x < 8;
    bits(N) coeff;

    if op2<N-1> == '1' then
        x = x + 8;
        op2<N-1> = '0';

    coeff  = FPTrigMAddCoefficient[x];
    result = FPMulAdd(coeff, op1, op2, fpcr);

    return result;

Library pseudocode for aarch64/functions/sve/FPTrigMAddCoefficient

// FPTrigMAddCoefficient()
// =======================

bits(N) FPTrigMAddCoefficient[integer index]
    assert N IN {16,32,64};
    integer result;

    if N == 16 then
        case index of
            when  0 result = 0x3c00;  when  1 result = 0xb155;  when  2 result = 0x2030;  when  3 result = 0x0000;
            when  4 result = 0x0000;  when  5 result = 0x0000;  when  6 result = 0x0000;  when  7 result = 0x0000;
            when  8 result = 0x3c00;  when  9 result = 0xb800;  when 10 result = 0x293a;  when 11 result = 0x0000;
            when 12 result = 0x0000;  when 13 result = 0x0000;  when 14 result = 0x0000;  when 15 result = 0x0000;
    elsif N == 32 then
        case index of
            when  0 result = 0x3f800000;  when  1 result = 0xbe2aaaab;  when  2 result = 0x3c088886;  when  3 result = 0xb95008b9;
            when  4 result = 0x36369d6d;  when  5 result = 0x00000000;  when  6 result = 0x00000000;  when  7 result = 0x00000000;
            when  8 result = 0x3f800000;  when  9 result = 0xbf000000;  when 10 result = 0x3d2aaaa6;  when 11 result = 0xbab60705;
            when 12 result = 0x37cd37cc;  when 13 result = 0x00000000;  when 14 result = 0x00000000;  when 15 result = 0x00000000;
    else // N == 64
        case index of
            when  0 result = 0x3ff0000000000000;  when  1 result = 0xbfc5555555555543;
            when  2 result = 0x3f8111111110f30c;  when  3 result = 0xbf2a01a019b92fc6;
            when  4 result = 0x3ec71de351f3d22b;  when  5 result = 0xbe5ae5e2b60f7b91;
            when  6 result = 0x3de5d8408868552f;  when  7 result = 0x0000000000000000;
            when  8 result = 0x3ff0000000000000;  when  9 result = 0xbfe0000000000000;
            when 10 result = 0x3fa5555555555536;  when 11 result = 0xbf56c16c16c13a0b;
            when 12 result = 0x3efa01a019b1e8d8;  when 13 result = 0xbe927e4f7282f468;
            when 14 result = 0x3e21ee96d2641b13;  when 15 result = 0xbda8f76380fbb401;

    return result<N-1:0>;

Library pseudocode for aarch64/functions/sve/FPTrigSMul

// FPTrigSMul()
// ============

bits(N) FPTrigSMul(bits(N) op1, bits(N) op2, FPCRType fpcr)
    assert N IN {16,32,64};
    result = FPMul(op1, op1, fpcr);
    (type, sign, value) = FPUnpack(result, fpcr);
    if (type != FPType_QNaN) && (type != FPType_SNaN) then
        result<N-1> = op2<0>;
    return result;

Library pseudocode for aarch64/functions/sve/FPTrigSSel

// FPTrigSSel()
// ============

bits(N) FPTrigSSel(bits(N) op1, bits(N) op2)
    assert N IN {16,32,64};
    bits(N) result;

    if op2<0> == '1' then
        result = FPOne(op2<1>);
    else
        result = op1;
        result<N-1> = result<N-1> EOR op2<1>;
    return result;

Library pseudocode for aarch64/functions/sve/FirstActive

// FirstActive()
// =============

bit FirstActive(bits(N) mask, bits(N) x, integer esize)
    integer elements = N DIV (esize DIV 8);
    for e = 0 to elements-1
        if ElemP[mask, e, esize] == '1' then return ElemP[x, e, esize];
    return '0';

Library pseudocode for aarch64/functions/sve/FloorPow2

// FloorPow2()
// ===========
// For a positive integer X, return the largest power of 2 <= X

integer FloorPow2(integer x)
    assert x >= 0;
    integer n = 1;
    if x == 0 then return 0;
    while x >= 2^n do
        n = n + 1;
    return 2^(n - 1);

Library pseudocode for aarch64/functions/sve/HaveSVE

// HaveSVE()
// =========

boolean HaveSVE()
    return HasArchVersion(ARMv8p2) && boolean IMPLEMENTATION_DEFINED "Have SVE ISA";

Library pseudocode for aarch64/functions/sve/ImplementedSVEVectorLength

// ImplementedSVEVectorLength()
// ============================
// Reduce SVE vector length to a supported value (e.g. power of two)

integer ImplementedSVEVectorLength(integer nbits)
    return integer IMPLEMENTATION_DEFINED;

Library pseudocode for aarch64/functions/sve/IsEven

// IsEven()
// ========

boolean IsEven(integer val)
    return val MOD 2 == 0;

Library pseudocode for aarch64/functions/sve/IsFPEnabled

// IsFPEnabled()
// =============

boolean IsFPEnabled(bits(2) el)
    if ELUsingAArch32(el) then
        return AArch32.IsFPEnabled(el);
    else
        return AArch64.IsFPEnabled(el);

Library pseudocode for aarch64/functions/sve/IsSVEEnabled

// IsSVEEnabled()
// ==============

boolean IsSVEEnabled(bits(2) el)
    if ELUsingAArch32(el) then return FALSE;

    // Check if access disabled in CPACR_EL1
    if el IN {EL0, EL1} then
        // Check SVE at EL0/EL1
        case CPACR[].ZEN of
            when 'x0' disabled = TRUE;
            when '01' disabled = (el == EL0);
            when '11' disabled = FALSE;
        if disabled then return FALSE;

    // Check if access disabled in CPTR_EL2
    if el IN {EL0, EL1, EL2} && EL2Enabled() then
        if HaveVirtHostExt() && HCR_EL2.E2H == '1' then
            if CPTR_EL2.ZEN == 'x0' then return FALSE;
        else
            if CPTR_EL2.TZ == '1' then return FALSE;

    // Check if access disabled in CPTR_EL3
    if HaveEL(EL3) then
        if CPTR_EL3.EZ == '0' then return FALSE;

    return TRUE;

Library pseudocode for aarch64/functions/sve/LastActive

// LastActive()
// ============

bit LastActive(bits(N) mask, bits(N) x, integer esize)
    integer elements = N DIV (esize DIV 8);
    for e = elements-1 downto 0
        if ElemP[mask, e, esize] == '1' then return ElemP[x, e, esize];
    return '0';

Library pseudocode for aarch64/functions/sve/LastActiveElement

// LastActiveElement()
// ===================

integer LastActiveElement(bits(N) mask, integer esize)
    assert esize IN {8, 16, 32, 64};
    integer elements = VL DIV esize;
    for e = elements-1 downto 0
        if ElemP[mask, e, esize] == '1' then return e;
    return -1;

Library pseudocode for aarch64/functions/sve/MAX_PL

constant integer MAX_PL = 256;

Library pseudocode for aarch64/functions/sve/MAX_VL

constant integer MAX_VL = 2048;

Library pseudocode for aarch64/functions/sve/MaybeZeroSVEUppers

// MaybeZeroSVEUppers()
// ====================

MaybeZeroSVEUppers(bits(2) target_el)
    boolean lower_enabled;

    if UInt(target_el) <= UInt(PSTATE.EL) || !IsSVEEnabled(target_el) then
        return;

    if target_el == EL3 then
        if EL2Enabled() then
            lower_enabled = IsFPEnabled(EL2);
        else
            lower_enabled = IsFPEnabled(EL1);
    else
        lower_enabled = IsFPEnabled(target_el - 1);

    if lower_enabled then
        integer vl = if IsSVEEnabled(PSTATE.EL) then VL else 128;
        integer pl = vl DIV 8;
        for n = 0 to 31
            if ConstrainUnpredictableBool(Unpredictable_SVEZEROUPPER) then
                _Z[n] = ZeroExtend(_Z[n]<vl-1:0>);
        for n = 0 to 15
            if ConstrainUnpredictableBool(Unpredictable_SVEZEROUPPER) then
                _P[n] = ZeroExtend(_P[n]<pl-1:0>);
        if ConstrainUnpredictableBool(Unpredictable_SVEZEROUPPER) then
            _FFR = ZeroExtend(_FFR<pl-1:0>);

Library pseudocode for aarch64/functions/sve/MemNF

// MemNF[] - non-assignment form
// =============================

(bits(8*size), boolean) MemNF[bits(64) address, integer size, AccType acctype]
    assert size IN {1, 2, 4, 8, 16};
    bits(8*size) value;

    aligned = (address == Align(address, size));
    A = SCTLR[].A;

    if !aligned && (A == '1') then
        return (bits(8*size) UNKNOWN, TRUE);

    atomic = aligned || size == 1;

    if !atomic then
        (value<7:0>, bad) = MemSingleNF[address, 1, acctype, aligned];
        if bad then
            return (bits(8*size) UNKNOWN, TRUE);

        // For subsequent bytes it is CONSTRAINED UNPREDICTABLE whether an unaligned Device memory
        // access will generate an Alignment Fault, as to get this far means the first byte did
        // not, so we must be changing to a new translation page.
        if !aligned then
            c = ConstrainUnpredictable(Unpredictable_DEVPAGE2);
            assert c IN {Constraint_FAULT, Constraint_NONE};
            if c == Constraint_NONE then aligned = TRUE;

        for i = 1 to size-1
            (value<8*i+7:8*i>, bad) = MemSingleNF[address+i, 1, acctype, aligned];
            if bad then
                return (bits(8*size) UNKNOWN, TRUE);
    else
        (value, bad) = MemSingleNF[address, size, acctype, aligned];
        if bad then
            return (bits(8*size) UNKNOWN, TRUE);

    if BigEndian() then
        value = BigEndianReverse(value);
    return (value, FALSE);

Library pseudocode for aarch64/functions/sve/MemSingleNF

// MemSingleNF[] - non-assignment form
// ===================================

(bits(8*size), boolean) MemSingleNF[bits(64) address, integer size, AccType acctype, boolean wasaligned]
    bits(8*size) value;
    boolean iswrite = FALSE;
    AddressDescriptor memaddrdesc;

    // Implementation may suppress NF load for any reason
    if ConstrainUnpredictableBool(Unpredictable_NONFAULT) then
        return (bits(8*size) UNKNOWN, TRUE);

    // MMU or MPU
    memaddrdesc = AArch64.TranslateAddress(address, acctype, iswrite, wasaligned, size);

    // Non-fault load from Device memory must not be performed externally
    if memaddrdesc.memattrs.type == MemType_Device then
        return (bits(8*size) UNKNOWN, TRUE);

    // Check for aborts or debug exceptions
    if IsFault(memaddrdesc) then
        return (bits(8*size) UNKNOWN, TRUE);

    // Memory array access
    accdesc = CreateAccessDescriptor(acctype);
    if HaveMTEExt() then
        if AccessIsTagChecked(address, acctype) then
            bits(4) ptag = TransformTag(address);
            if !CheckTag(memaddrdesc, ptag, iswrite) then
                return (bits(8*size) UNKNOWN, TRUE);
    value = _Mem[memaddrdesc, size, accdesc];

    return (value, FALSE);

Library pseudocode for aarch64/functions/sve/NoneActive

// NoneActive()
// ============

bit NoneActive(bits(N) mask, bits(N) x, integer esize)
    integer elements = N DIV (esize DIV 8);
    for e = 0 to elements-1
        if ElemP[mask, e, esize] == '1' && ElemP[x, e, esize] == '1' then return '0';
    return '1';

Library pseudocode for aarch64/functions/sve/P

// P[] - non-assignment form
// =========================

bits(width) P[integer n]
    assert n >= 0 && n <= 31;
    assert width == PL;
    return _P[n]<width-1:0>;

// P[] - assignment form
// =====================

P[integer n] = bits(width) value
    assert n >= 0 && n <= 31;
    assert width == PL;
    if ConstrainUnpredictableBool(Unpredictable_SVEZEROUPPER) then
        _P[n] = ZeroExtend(value);
    else
        _P[n]<width-1:0> = value;

Library pseudocode for aarch64/functions/sve/PL

// PL - non-assignment form
// ========================

integer PL
    return VL DIV 8;

Library pseudocode for aarch64/functions/sve/PredTest

// PredTest()
// ==========

bits(4) PredTest(bits(N) mask, bits(N) result, integer esize)
    bit n = FirstActive(mask, result, esize);
    bit z = NoneActive(mask, result, esize);
    bit c = NOT LastActive(mask, result, esize);
    bit v = '0';
    return n:z:c:v;
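PredTest() packs the predicate condition flags as N = FirstActive, Z = NoneActive, C = NOT LastActive, V = 0. As an illustrative aside, a C sketch reusing elem_p_get() from the ElemP[] sketch above (nbits is the predicate width N; names are ours):

    /* Return NZCV in bits 3:0. */
    static unsigned pred_test(const uint8_t *mask, const uint8_t *result,
                              int esize, int nbits) {
        int elements = nbits / (esize / 8);
        int first = -1, last = -1, none = 1;
        for (int e = 0; e < elements; e++) {
            if (!elem_p_get(mask, e, esize)) continue;
            int r = elem_p_get(result, e, esize);
            if (first < 0) first = r;   /* result bit of first active element */
            last = r;                   /* result bit of last active element  */
            if (r) none = 0;
        }
        unsigned n = (first == 1), z = (unsigned)none, c = !(last == 1);
        return (n << 3) | (z << 2) | (c << 1);   /* V is always 0 */
    }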

Library pseudocode for aarch64/functions/sve/ReducePredicated

// ReducePredicated()
// ==================

bits(esize) ReducePredicated(ReduceOp op, bits(N) input, bits(M) mask, bits(esize) identity)
    assert(N == M * 8);
    integer p2bits = CeilPow2(N);
    bits(p2bits) operand;
    integer elements = p2bits DIV esize;

    for e = 0 to elements-1
        if e * esize < N && ElemP[mask, e, esize] == '1' then
            Elem[operand, e, esize] = Elem[input, e, esize];
        else
            Elem[operand, e, esize] = identity;

    return Reduce(op, operand, esize);
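Padding the operand to CeilPow2(N) bits and filling inactive or out-of-range elements with the identity lets the reduction run as a full balanced tree. An illustrative C sketch for a predicated 32-bit ADD reduction, reusing ceil_pow2() and elem_p_get() from the sketches above (supports up to 256 elements; names are ours):

    static uint32_t reduce_predicated_add(const uint32_t *in, const uint8_t *mask,
                                          int elements) {
        uint32_t buf[256] = {0};        /* identity for ADD; also pads the tail */
        int n = (int)ceil_pow2((unsigned)elements);
        for (int e = 0; e < elements; e++)
            buf[e] = elem_p_get(mask, e, 32) ? in[e] : 0;
        for (int stride = n / 2; stride >= 1; stride /= 2)   /* balanced tree */
            for (int e = 0; e < stride; e++)
                buf[e] += buf[e + stride];
        return buf[0];
    }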

Library pseudocode for aarch64/functions/sve/Reverse

// Reverse()
// =========
// Reverse subwords of M bits in an N-bit word

bits(N) Reverse(bits(N) word, integer M)
    bits(N) result;
    integer sw = N DIV M;
    assert N == sw * M;
    for s = 0 to sw-1
        Elem[result, sw - 1 - s, M] = Elem[word, s, M];
    return result;
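For M == 8 and N == 64 this is a plain byte reverse; M == 1 reverses individual bits, and larger M swaps wider subwords. An illustrative C sketch for a 64-bit word (name is ours):

    #include <stdint.h>

    static uint64_t reverse_subwords(uint64_t word, int m) {
        int sw = 64 / m;                 /* number of M-bit subwords */
        uint64_t mask = (m == 64) ? ~0ULL : ((1ULL << m) - 1);
        uint64_t result = 0;
        for (int s = 0; s < sw; s++)
            result |= ((word >> (s * m)) & mask) << ((sw - 1 - s) * m);
        return result;
    }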

Library pseudocode for aarch64/functions/sve/SVEAccessTrap

// SVEAccessTrap()
// ===============
// Trapped access to SVE registers due to CPACR_EL1, CPTR_EL2, or CPTR_EL3.

SVEAccessTrap(bits(2) target_el)
    assert UInt(target_el) >= UInt(PSTATE.EL) && target_el != EL0 && HaveEL(target_el);

    route_to_el2 = target_el == EL1 && EL2Enabled() && HCR_EL2.TGE == '1';
    exception = ExceptionSyndrome(Exception_SVEAccessTrap);
    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x0;

    if route_to_el2 then
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);
    else
        AArch64.TakeException(target_el, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/functions/sve/SVECmp

enumeration SVECmp { Cmp_EQ, Cmp_NE, Cmp_GE, Cmp_GT, Cmp_LT, Cmp_LE, Cmp_UN };

Library pseudocode for aarch64/functions/sve/SVEMoveMaskPreferred

// SVEMoveMaskPreferred()
// ======================
// Return FALSE if a bitmask immediate encoding would generate an immediate
// value that could also be represented by a single DUP instruction.
// Used as a condition for the preferred MOV<-DUPM alias.

boolean SVEMoveMaskPreferred(bits(13) imm13)
    bits(64) imm;
    (imm, -) = DecodeBitMasks(imm13<12>, imm13<5:0>, imm13<11:6>, TRUE);

    // Check for 8 bit immediates
    if !IsZero(imm<7:0>) then
        // Check for 'ffffffffffffffxy' or '00000000000000xy'
        if IsZero(imm<63:7>) || IsOnes(imm<63:7>) then
            return FALSE;

        // Check for 'ffffffxyffffffxy' or '000000xy000000xy'
        if imm<63:32> == imm<31:0> && (IsZero(imm<31:7>) || IsOnes(imm<31:7>)) then
            return FALSE;

        // Check for 'ffxyffxyffxyffxy' or '00xy00xy00xy00xy'
        if imm<63:32> == imm<31:0> && imm<31:16> == imm<15:0> && (IsZero(imm<15:7>) || IsOnes(imm<15:7>)) then
            return FALSE;

        // Check for 'xyxyxyxyxyxyxyxy'
        if imm<63:32> == imm<31:0> && imm<31:16> == imm<15:0> && (imm<15:8> == imm<7:0>) then
            return FALSE;

    // Check for 16 bit immediates
    else
        // Check for 'ffffffffffffxy00' or '000000000000xy00'
        if IsZero(imm<63:15>) || IsOnes(imm<63:15>) then
            return FALSE;

        // Check for 'ffffxy00ffffxy00' or '0000xy000000xy00'
        if imm<63:32> == imm<31:0> && (IsZero(imm<31:7>) || IsOnes(imm<31:7>)) then
            return FALSE;

        // Check for 'xy00xy00xy00xy00'
        if imm<63:32> == imm<31:0> && imm<31:16> == imm<15:0> then
            return FALSE;

    return TRUE;

Library pseudocode for aarch64/functions/sve/System

array bits(MAX_VL) _Z[0..31];
array bits(MAX_PL) _P[0..15];
bits(MAX_PL) _FFR;

Library pseudocode for aarch64/functions/sve/VL

// VL - non-assignment form
// ========================

integer VL
    integer vl;

    if PSTATE.EL == EL1 || (PSTATE.EL == EL0 && !IsInHost()) then
        vl = UInt(ZCR_EL1.LEN);

    if PSTATE.EL == EL2 || (PSTATE.EL == EL0 && IsInHost()) then
        vl = UInt(ZCR_EL2.LEN);
    elsif EL2Enabled() && PSTATE.EL IN {EL0, EL1} then
        vl = Min(vl, UInt(ZCR_EL2.LEN));

    if PSTATE.EL == EL3 then
        vl = UInt(ZCR_EL3.LEN);
    elsif HaveEL(EL3) then
        vl = Min(vl, UInt(ZCR_EL3.LEN));

    vl = (vl + 1) * 128;
    vl = ImplementedSVEVectorLength(vl);

    return vl;
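In effect, the current vector length is the minimum of the ZCR_ELx.LEN fields for every Exception level that constrains the current context, mapped through (LEN + 1) * 128 and then reduced to an implemented length. An illustrative C sketch for execution at EL0/EL1 (the names, and the identity stand-in for the IMPLEMENTATION DEFINED ImplementedSVEVectorLength(), are ours):

    static int impl_sve_vector_length(int nbits) { return nbits; }   /* stand-in */

    static int effective_vl(int zcr_el1_len, int zcr_el2_len, int zcr_el3_len,
                            int el2_constrains, int el3_constrains) {
        int len = zcr_el1_len;
        if (el2_constrains && zcr_el2_len < len) len = zcr_el2_len;
        if (el3_constrains && zcr_el3_len < len) len = zcr_el3_len;
        return impl_sve_vector_length((len + 1) * 128);
    }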

Library pseudocode for aarch64/functions/sve/Z

// Z[] - non-assignment form
// =========================

bits(width) Z[integer n]
    assert n >= 0 && n <= 31;
    assert width == VL;
    return _Z[n]<width-1:0>;

// Z[] - assignment form
// =====================

Z[integer n] = bits(width) value
    assert n >= 0 && n <= 31;
    assert width == VL;
    if ConstrainUnpredictableBool(Unpredictable_SVEZEROUPPER) then
        _Z[n] = ZeroExtend(value);
    else
        _Z[n]<width-1:0> = value;

Library pseudocode for aarch64/functions/sysregisters/CNTKCTL

// CNTKCTL[] - non-assignment form
// ===============================

CNTKCTLType CNTKCTL[]
    bits(32) r;
    if IsInHost() then
        r = CNTHCTL_EL2;
        return r;
    r = CNTKCTL_EL1;
    return r;

Library pseudocode for aarch64/functions/sysregisters/CNTKCTLType

type CNTKCTLType;

Library pseudocode for aarch64/functions/sysregisters/CPACR

// CPACR[] - non-assignment form
// =============================

CPACRType CPACR[]
    bits(32) r;
    if IsInHost() then
        r = CPTR_EL2;
        return r;
    r = CPACR_EL1;
    return r;

Library pseudocode for aarch64/functions/sysregisters/CPACRType

type CPACRType;

Library pseudocode for aarch64/functions/sysregisters/ELR

// ELR[] - non-assignment form
// ===========================

bits(64) ELR[bits(2) el]
    bits(64) r;
    case el of
        when EL1  r = ELR_EL1;
        when EL2  r = ELR_EL2;
        when EL3  r = ELR_EL3;
        otherwise Unreachable();
    return r;

// ELR[] - non-assignment form
// ===========================

bits(64) ELR[]
    assert PSTATE.EL != EL0;
    return ELR[PSTATE.EL];

// ELR[] - assignment form
// =======================

ELR[bits(2) el] = bits(64) value
    bits(64) r = value;
    case el of
        when EL1  ELR_EL1 = r;
        when EL2  ELR_EL2 = r;
        when EL3  ELR_EL3 = r;
        otherwise Unreachable();
    return;

// ELR[] - assignment form
// =======================

ELR[] = bits(64) value
    assert PSTATE.EL != EL0;
    ELR[PSTATE.EL] = value;
    return;

Library pseudocode for aarch64/functions/sysregisters/ESR

// ESR[] - non-assignment form
// ===========================

ESRType ESR[bits(2) regime]
    bits(32) r;
    case regime of
        when EL1  r = ESR_EL1;
        when EL2  r = ESR_EL2;
        when EL3  r = ESR_EL3;
        otherwise Unreachable();
    return r;

// ESR[] - non-assignment form
// ===========================

ESRType ESR[]
    return ESR[S1TranslationRegime()];

// ESR[] - assignment form
// =======================

ESR[bits(2) regime] = ESRType value
    bits(32) r = value;
    case regime of
        when EL1  ESR_EL1 = r;
        when EL2  ESR_EL2 = r;
        when EL3  ESR_EL3 = r;
        otherwise Unreachable();
    return;

// ESR[] - assignment form
// =======================

ESR[] = ESRType value
    ESR[S1TranslationRegime()] = value;

Library pseudocode for aarch64/functions/sysregisters/ESRType

type ESRType;

Library pseudocode for aarch64/functions/sysregisters/FAR

// FAR[] - non-assignment form
// ===========================

bits(64) FAR[bits(2) regime]
    bits(64) r;
    case regime of
        when EL1  r = FAR_EL1;
        when EL2  r = FAR_EL2;
        when EL3  r = FAR_EL3;
        otherwise Unreachable();
    return r;

// FAR[] - non-assignment form
// ===========================

bits(64) FAR[]
    return FAR[S1TranslationRegime()];

// FAR[] - assignment form
// =======================

FAR[bits(2) regime] = bits(64) value
    bits(64) r = value;
    case regime of
        when EL1  FAR_EL1 = r;
        when EL2  FAR_EL2 = r;
        when EL3  FAR_EL3 = r;
        otherwise Unreachable();
    return;

// FAR[] - assignment form
// =======================

FAR[] = bits(64) value
    FAR[S1TranslationRegime()] = value;
    return;

Library pseudocode for aarch64/functions/sysregisters/MAIR

// MAIR[] - non-assignment form
// ============================

MAIRType MAIR[bits(2) regime]
    bits(64) r;
    case regime of
        when EL1  r = MAIR_EL1;
        when EL2  r = MAIR_EL2;
        when EL3  r = MAIR_EL3;
        otherwise Unreachable();
    return r;

// MAIR[] - non-assignment form
// ============================

MAIRType MAIR[]
    return MAIR[S1TranslationRegime()];

Library pseudocode for aarch64/functions/sysregisters/MAIRType

type MAIRType;

Library pseudocode for aarch64/functions/sysregisters/SCTLR

// SCTLR[] - non-assignment form
// =============================

SCTLRType SCTLR[bits(2) regime]
    bits(64) r;
    case regime of
        when EL1  r = SCTLR_EL1;
        when EL2  r = SCTLR_EL2;
        when EL3  r = SCTLR_EL3;
        otherwise Unreachable();
    return r;

// SCTLR[] - non-assignment form
// =============================

SCTLRType SCTLR[]
    return SCTLR[S1TranslationRegime()];

Library pseudocode for aarch64/functions/sysregisters/SCTLRType

type SCTLRType;

Library pseudocode for aarch64/functions/sysregisters/VBAR

// VBAR[] - non-assignment form
// ============================

bits(64) VBAR[bits(2) regime]
    bits(64) r;
    case regime of
        when EL1  r = VBAR_EL1;
        when EL2  r = VBAR_EL2;
        when EL3  r = VBAR_EL3;
        otherwise Unreachable();
    return r;

// VBAR[] - non-assignment form
// ============================

bits(64) VBAR[]
    return VBAR[S1TranslationRegime()];

Library pseudocode for aarch64/functions/system/AArch64.ExecutingERETInstr

// AArch64.ExecutingERETInstr()
// ============================
// Returns TRUE if current instruction is ERET.

boolean AArch64.ExecutingERETInstr()
    instr = ThisInstr();
    return instr<31:12> == '11010110100111110000';

Library pseudocode for aarch64/functions/system/AArch64.NextRandomTagBit

// AArch64.NextRandomTagBit()
// ==========================
// Generate a random bit suitable for generating a random Allocation Tag.

bit AArch64.NextRandomTagBit()
    bits(16) lfsr = RGSR_EL1.SEED;
    bit top = lfsr<5> EOR lfsr<3> EOR lfsr<2> EOR lfsr<0>;
    RGSR_EL1.SEED = top:lfsr<15:1>;
    return top;
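Informally, the 16-bit LFSR step above corresponds to the following C sketch (ours, not ARM's); the local `seed` stands in for RGSR_EL1.SEED, and the taps are bits 5, 3, 2 and 0:

    #include <stdint.h>

    /* Illustrative C model of AArch64.NextRandomTagBit (not ARM's code).
       The feedback bit becomes the new bit 15; the rest shifts right. */
    static unsigned next_random_tag_bit(uint16_t *seed)
    {
        uint16_t lfsr = *seed;
        unsigned top = ((lfsr >> 5) ^ (lfsr >> 3) ^ (lfsr >> 2) ^ lfsr) & 1u;
        *seed = (uint16_t)((top << 15) | (lfsr >> 1));
        return top;
    }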

Library pseudocode for aarch64/functions/system/AArch64.RandomTag

// AArch64.RandomTag()
// ===================
// Generate a random Allocation Tag.

bits(4) AArch64.RandomTag()
    bits(4) tag;
    for i = 0 to 3
        tag<i> = AArch64.NextRandomTagBit();
    return tag;

Library pseudocode for aarch64/functions/system/AArch64.SysInstr

// Execute a system instruction with write (source operand).
AArch64.SysInstr(integer op0, integer op1, integer crn, integer crm, integer op2, bits(64) val);

Library pseudocode for aarch64/functions/system/AArch64.SysInstrWithResult

// Execute a system instruction with read (result operand).
// Returns the result of the instruction.
bits(64) AArch64.SysInstrWithResult(integer op0, integer op1, integer crn, integer crm, integer op2);

Library pseudocode for aarch64/functions/system/AArch64.SysRegRead

// Read from a system register and return the contents of the register.
bits(64) AArch64.SysRegRead(integer op0, integer op1, integer crn, integer crm, integer op2);

Library pseudocode for aarch64/functions/system/AArch64.SysRegWrite

// Write to a system register.
AArch64.SysRegWrite(integer op0, integer op1, integer crn, integer crm, integer op2, bits(64) val);

Library pseudocode for aarch64/functions/system/BTypeCompatible

boolean BTypeCompatible;

Library pseudocode for aarch64/functions/system/BTypeCompatible_BTI

// BTypeCompatible_BTI
// ===================
// This function determines whether a given hint encoding is compatible with the current value of
// PSTATE.BTYPE. A value of TRUE here indicates a valid Branch Target Identification instruction.

boolean BTypeCompatible_BTI(bits(2) hintcode)
    case hintcode of
        when '00'  return FALSE;
        when '01'  return PSTATE.BTYPE != '11';
        when '10'  return PSTATE.BTYPE != '10';
        when '11'  return TRUE;

Library pseudocode for aarch64/functions/system/BTypeCompatible_PACIXSP

// BTypeCompatible_PACIXSP()
// =========================
// Returns TRUE if a PACIASP or PACIBSP instruction is implicitly compatible with PSTATE.BTYPE,
// FALSE otherwise.

boolean BTypeCompatible_PACIXSP()
    if PSTATE.BTYPE IN {'01', '10'} then
        return TRUE;
    elsif PSTATE.BTYPE == '11' then
        index = if PSTATE.EL == EL0 then 35 else 36;
        return SCTLR[]<index> == '0';
    else
        return FALSE;

Library pseudocode for aarch64/functions/system/BTypeNext

bits(2) BTypeNext;

Library pseudocode for aarch64/functions/system/InGuardedPage

boolean InGuardedPage;

Library pseudocode for aarch64/instrs/branch/eret/AArch64.ExceptionReturn

// AArch64.ExceptionReturn()
// =========================

AArch64.ExceptionReturn(bits(64) new_pc, bits(32) spsr)
    SynchronizeContext();

    sync_errors = HaveIESB() && SCTLR[].IESB == '1';
    if HaveDoubleFaultExt() then
        sync_errors = sync_errors || (SCR_EL3.EA == '1' && SCR_EL3.NMEA == '1' && PSTATE.EL == EL3);
    if sync_errors then
        SynchronizeErrors();
        iesb_req = TRUE;
        TakeUnmaskedPhysicalSErrorInterrupts(iesb_req);

    // Attempts to change to an illegal state will invoke the Illegal Execution state mechanism
    SetPSTATEFromPSR(spsr);
    ClearExclusiveLocal(ProcessorID());
    SendEventLocal();

    if PSTATE.IL == '1' && spsr<4> == '1' && spsr<20> == '0' then
        // If the exception return is illegal, PC[63:32,1:0] are UNKNOWN
        new_pc<63:32> = bits(32) UNKNOWN;
        new_pc<1:0> = bits(2) UNKNOWN;
    elsif UsingAArch32() then            // Return to AArch32
        // ELR_ELx[1:0] or ELR_ELx[0] are treated as being 0, depending on the
        // target instruction set state
        if PSTATE.T == '1' then
            new_pc<0> = '0';             // T32
        else
            new_pc<1:0> = '00';          // A32
    else                                 // Return to AArch64
        // ELR_ELx[63:56] might include a tag
        new_pc = AArch64.BranchAddr(new_pc);

    if UsingAArch32() then
        // 32 most significant bits are ignored.
        BranchTo(new_pc<31:0>, BranchType_ERET);
    else
        BranchToAddr(new_pc, BranchType_ERET);

Library pseudocode for aarch64/instrs/countop/CountOp

enumeration CountOp {CountOp_CLZ, CountOp_CLS, CountOp_CNT};

Library pseudocode for aarch64/instrs/extendreg/DecodeRegExtend

// DecodeRegExtend()
// =================
// Decode a register extension option

ExtendType DecodeRegExtend(bits(3) op)
    case op of
        when '000' return ExtendType_UXTB;
        when '001' return ExtendType_UXTH;
        when '010' return ExtendType_UXTW;
        when '011' return ExtendType_UXTX;
        when '100' return ExtendType_SXTB;
        when '101' return ExtendType_SXTH;
        when '110' return ExtendType_SXTW;
        when '111' return ExtendType_SXTX;

Library pseudocode for aarch64/instrs/extendreg/ExtendReg

// ExtendReg()
// ===========
// Perform a register extension and shift

bits(N) ExtendReg(integer reg, ExtendType exttype, integer shift)
    assert shift >= 0 && shift <= 4;
    bits(N) val = X[reg];
    boolean unsigned;
    integer len;

    case exttype of
        when ExtendType_SXTB unsigned = FALSE; len = 8;
        when ExtendType_SXTH unsigned = FALSE; len = 16;
        when ExtendType_SXTW unsigned = FALSE; len = 32;
        when ExtendType_SXTX unsigned = FALSE; len = 64;
        when ExtendType_UXTB unsigned = TRUE;  len = 8;
        when ExtendType_UXTH unsigned = TRUE;  len = 16;
        when ExtendType_UXTW unsigned = TRUE;  len = 32;
        when ExtendType_UXTX unsigned = TRUE;  len = 64;

    // Note the extended width of the intermediate value and
    // that sign extension occurs from bit <len+shift-1>, not
    // from bit <len-1>. This is equivalent to the instruction
    //   [SU]BFIZ Rtmp, Rreg, #shift, #len
    // It may also be seen as a sign/zero extend followed by a shift:
    //   LSL(Extend(val<len-1:0>, N, unsigned), shift);
    len = Min(len, N - shift);
    return Extend(val<len-1:0> : Zeros(shift), N, unsigned);
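As a hedged illustration of the arithmetic (ours, not the architectural definition), the N == 64 case can be modelled in C as follows; note how truncating `len` first means the sign bit of the result comes from bit len+shift-1, exactly as the comment above explains:

    #include <stdint.h>
    #include <stdbool.h>

    /* Illustrative C model of ExtendReg for N == 64 (not ARM's code).
       len and is_unsigned are as selected by the case statement above;
       shift is 0..4. */
    static uint64_t extend_reg64(uint64_t val, unsigned len,
                                 bool is_unsigned, unsigned shift)
    {
        if (len > 64 - shift)
            len = 64 - shift;                 /* Min(len, N - shift) */
        uint64_t field = val & ((len == 64) ? ~0ULL : (1ULL << len) - 1);
        if (!is_unsigned && len < 64 && (field >> (len - 1)))
            field |= ~0ULL << len;            /* sign-extend from bit len-1 */
        return field << shift;                /* append the shift zeros */
    }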

Library pseudocode for aarch64/instrs/extendreg/ExtendType

enumeration ExtendType {ExtendType_SXTB, ExtendType_SXTH, ExtendType_SXTW, ExtendType_SXTX,
                        ExtendType_UXTB, ExtendType_UXTH, ExtendType_UXTW, ExtendType_UXTX};

Library pseudocode for aarch64/instrs/float/arithmetic/max-min/fpmaxminop/FPMaxMinOp

enumeration FPMaxMinOp {FPMaxMinOp_MAX, FPMaxMinOp_MIN, FPMaxMinOp_MAXNUM, FPMaxMinOp_MINNUM};

Library pseudocode for aarch64/instrs/float/arithmetic/unary/fpunaryop/FPUnaryOp

enumeration FPUnaryOp {FPUnaryOp_ABS, FPUnaryOp_MOV, FPUnaryOp_NEG, FPUnaryOp_SQRT};

Library pseudocode for aarch64/instrs/float/convert/fpconvop/FPConvOp

enumeration FPConvOp {FPConvOp_CVT_FtoI, FPConvOp_CVT_ItoF, FPConvOp_MOV_FtoI, FPConvOp_MOV_ItoF,
                      FPConvOp_CVT_FtoI_JS};

Library pseudocode for aarch64/instrs/integer/bitfield/bfxpreferred/BFXPreferred

// BFXPreferred()
// ==============
//
// Return TRUE if UBFX or SBFX is the preferred disassembly of a
// UBFM or SBFM bitfield instruction. Must exclude more specific
// aliases UBFIZ, SBFIZ, UXT[BH], SXT[BHW], LSL, LSR and ASR.

boolean BFXPreferred(bit sf, bit uns, bits(6) imms, bits(6) immr)
    integer S = UInt(imms);
    integer R = UInt(immr);

    // must not match UBFIZ/SBFIZ alias
    if UInt(imms) < UInt(immr) then
        return FALSE;

    // must not match LSR/ASR/LSL alias (imms == 31 or 63)
    if imms == sf:'11111' then
        return FALSE;

    // must not match UXTx/SXTx alias
    if immr == '000000' then
        // must not match 32-bit UXT[BH] or SXT[BH]
        if sf == '0' && imms IN {'000111', '001111'} then
            return FALSE;
        // must not match 64-bit SXT[BHW]
        if sf:uns == '10' && imms IN {'000111', '001111', '011111'} then
            return FALSE;

    // must be UBFX/SBFX alias
    return TRUE;

Library pseudocode for aarch64/instrs/integer/bitmasks/DecodeBitMasks

// DecodeBitMasks()
// ================
// Decode AArch64 bitfield and logical immediate masks which use a similar encoding structure

(bits(M), bits(M)) DecodeBitMasks(bit immN, bits(6) imms, bits(6) immr, boolean immediate)
    bits(64) tmask, wmask;
    bits(6) tmask_and, wmask_and;
    bits(6) tmask_or, wmask_or;
    bits(6) levels;

    // Compute log2 of element size
    // 2^len must be in range [2, M]
    len = HighestSetBit(immN:NOT(imms));
    if len < 1 then UNDEFINED;
    assert M >= (1 << len);

    // Determine S, R and S - R parameters
    levels = ZeroExtend(Ones(len), 6);

    // For logical immediates an all-ones value of S is reserved
    // since it would generate a useless all-ones result (many times)
    if immediate && (imms AND levels) == levels then
        UNDEFINED;

    S = UInt(imms AND levels);
    R = UInt(immr AND levels);
    diff = S - R;    // 6-bit subtract with borrow

    // From a software perspective, the remaining code is equivalent to:
    //   esize = 1 << len;
    //   d = UInt(diff<len-1:0>);
    //   welem = ZeroExtend(Ones(S + 1), esize);
    //   telem = ZeroExtend(Ones(d + 1), esize);
    //   wmask = Replicate(ROR(welem, R));
    //   tmask = Replicate(telem);
    //   return (wmask, tmask);

    // Compute "top mask"
    tmask_and = diff<5:0> OR NOT(levels);
    tmask_or  = diff<5:0> AND levels;

    tmask = Ones(64);
    tmask = ((tmask AND Replicate(Replicate(tmask_and<0>, 1) : Ones(1), 32))
              OR Replicate(Zeros(1) : Replicate(tmask_or<0>, 1), 32));
    // optimization of first step:
    //   tmask = Replicate(tmask_and<0> : '1', 32);
    tmask = ((tmask AND Replicate(Replicate(tmask_and<1>, 2) : Ones(2), 16))
              OR Replicate(Zeros(2) : Replicate(tmask_or<1>, 2), 16));
    tmask = ((tmask AND Replicate(Replicate(tmask_and<2>, 4) : Ones(4), 8))
              OR Replicate(Zeros(4) : Replicate(tmask_or<2>, 4), 8));
    tmask = ((tmask AND Replicate(Replicate(tmask_and<3>, 8) : Ones(8), 4))
              OR Replicate(Zeros(8) : Replicate(tmask_or<3>, 8), 4));
    tmask = ((tmask AND Replicate(Replicate(tmask_and<4>, 16) : Ones(16), 2))
              OR Replicate(Zeros(16) : Replicate(tmask_or<4>, 16), 2));
    tmask = ((tmask AND Replicate(Replicate(tmask_and<5>, 32) : Ones(32), 1))
              OR Replicate(Zeros(32) : Replicate(tmask_or<5>, 32), 1));

    // Compute "wraparound mask"
    wmask_and = immr OR NOT(levels);
    wmask_or  = immr AND levels;

    wmask = Zeros(64);
    wmask = ((wmask AND Replicate(Ones(1) : Replicate(wmask_and<0>, 1), 32))
              OR Replicate(Replicate(wmask_or<0>, 1) : Zeros(1), 32));
    // optimization of first step:
    //   wmask = Replicate(wmask_or<0> : '0', 32);
    wmask = ((wmask AND Replicate(Ones(2) : Replicate(wmask_and<1>, 2), 16))
              OR Replicate(Replicate(wmask_or<1>, 2) : Zeros(2), 16));
    wmask = ((wmask AND Replicate(Ones(4) : Replicate(wmask_and<2>, 4), 8))
              OR Replicate(Replicate(wmask_or<2>, 4) : Zeros(4), 8));
    wmask = ((wmask AND Replicate(Ones(8) : Replicate(wmask_and<3>, 8), 4))
              OR Replicate(Replicate(wmask_or<3>, 8) : Zeros(8), 4));
    wmask = ((wmask AND Replicate(Ones(16) : Replicate(wmask_and<4>, 16), 2))
              OR Replicate(Replicate(wmask_or<4>, 16) : Zeros(16), 2));
    wmask = ((wmask AND Replicate(Ones(32) : Replicate(wmask_and<5>, 32), 1))
              OR Replicate(Replicate(wmask_or<5>, 32) : Zeros(32), 1));

    if diff<6> != '0' then    // borrow from S - R
        wmask = wmask AND tmask;
    else
        wmask = wmask OR tmask;

    return (wmask<M-1:0>, tmask<M-1:0>);
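The commented "software perspective" equivalent above can be written out directly. The following C sketch (ours, not the specification; M fixed at 64, with a `false` return standing in for UNDEFINED, and results delivered through out-parameters) implements exactly that simpler algorithm:

    #include <stdint.h>
    #include <stdbool.h>

    /* Illustrative C model of DecodeBitMasks for M == 64 (not ARM's code). */
    static bool decode_bit_masks64(unsigned immN, unsigned imms, unsigned immr,
                                   bool immediate,
                                   uint64_t *wmask, uint64_t *tmask)
    {
        unsigned combined = (immN << 6) | (~imms & 0x3F);  /* immN:NOT(imms) */
        if (combined == 0)
            return false;
        int len = 31 - __builtin_clz(combined);            /* HighestSetBit */
        if (len < 1)
            return false;

        unsigned levels = (1u << len) - 1;
        if (immediate && (imms & levels) == levels)
            return false;                  /* reserved all-ones value of S */

        unsigned S = imms & levels;
        unsigned R = immr & levels;
        unsigned esize = 1u << len;
        unsigned d = (S - R) & levels;     /* UInt(diff<len-1:0>) */

        uint64_t welem = (S + 1 == 64) ? ~0ULL : (1ULL << (S + 1)) - 1;
        uint64_t telem = (d + 1 == 64) ? ~0ULL : (1ULL << (d + 1)) - 1;

        /* ROR(welem, R) within esize, then Replicate across 64 bits */
        uint64_t emask = (esize == 64) ? ~0ULL : (1ULL << esize) - 1;
        uint64_t wrot = (R == 0) ? welem
                        : (((welem >> R) | (welem << (esize - R))) & emask);
        uint64_t w = 0, t = 0;
        for (unsigned i = 0; i < 64; i += esize) {
            w |= wrot << i;
            t |= telem << i;
        }
        *wmask = w;
        *tmask = t;
        return true;
    }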

Library pseudocode for aarch64/instrs/integer/ins-ext/insert/movewide/movewideop/MoveWideOp

enumeration MoveWideOp {MoveWideOp_N, MoveWideOp_Z, MoveWideOp_K};

Library pseudocode for aarch64/instrs/integer/logical/movwpreferred/MoveWidePreferred

// MoveWidePreferred()
// ===================
//
// Return TRUE if a bitmask immediate encoding would generate an immediate
// value that could also be represented by a single MOVZ or MOVN instruction.
// Used as a condition for the preferred MOV<-ORR alias.

boolean MoveWidePreferred(bit sf, bit immN, bits(6) imms, bits(6) immr)
    integer S = UInt(imms);
    integer R = UInt(immr);
    integer width = if sf == '1' then 64 else 32;

    // element size must equal total immediate size
    if sf == '1' && immN:imms != '1xxxxxx' then
        return FALSE;
    if sf == '0' && immN:imms != '00xxxxx' then
        return FALSE;

    // for MOVZ must contain no more than 16 ones
    if S < 16 then
        // ones must not span halfword boundary when rotated
        return (-R MOD 16) <= (15 - S);

    // for MOVN must contain no more than 16 zeros
    if S >= width - 15 then
        // zeros must not span halfword boundary when rotated
        return (R MOD 16) <= (S - (width - 15));

    return FALSE;
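A direct C transcription of this predicate (illustrative only; the names below are ours) makes the halfword-boundary conditions easy to test in isolation:

    #include <stdbool.h>

    /* Illustrative C model of MoveWidePreferred (not ARM's code).
       sf and immN are single bits; imms and immr are 6-bit fields. */
    static bool move_wide_preferred(unsigned sf, unsigned immN,
                                    unsigned imms, unsigned immr)
    {
        int S = (int)imms, R = (int)immr;
        int width = sf ? 64 : 32;

        /* element size must equal total immediate size */
        if (sf && immN != 1)
            return false;
        if (!sf && !(immN == 0 && (imms & 0x20) == 0))
            return false;

        if (S < 16) {                 /* MOVZ: no more than 16 ones */
            int neg_r_mod16 = (16 - (R & 15)) & 15;   /* (-R) MOD 16 */
            return neg_r_mod16 <= 15 - S;
        }
        if (S >= width - 15)          /* MOVN: no more than 16 zeros */
            return (R & 15) <= S - (width - 15);
        return false;
    }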

Library pseudocode for aarch64/instrs/integer/shiftreg/DecodeShift

// DecodeShift()
// =============
// Decode shift encodings

ShiftType DecodeShift(bits(2) op)
    case op of
        when '00' return ShiftType_LSL;
        when '01' return ShiftType_LSR;
        when '10' return ShiftType_ASR;
        when '11' return ShiftType_ROR;

Library pseudocode for aarch64/instrs/integer/shiftreg/ShiftReg

// ShiftReg()
// ==========
// Perform shift of a register operand

bits(N) ShiftReg(integer reg, ShiftType shiftype, integer amount)
    bits(N) result = X[reg];
    case shiftype of
        when ShiftType_LSL result = LSL(result, amount);
        when ShiftType_LSR result = LSR(result, amount);
        when ShiftType_ASR result = ASR(result, amount);
        when ShiftType_ROR result = ROR(result, amount);
    return result;
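For the 64-bit case, the four shift flavours reduce to the following C sketch (ours, not ARM's). It assumes `amount` has already been reduced to the range 0..63, and that signed right shift is arithmetic on the target compiler, which is universal in practice:

    #include <stdint.h>

    /* Illustrative C model of ShiftReg for N == 64 (not ARM's code). */
    enum shift_type { SHIFT_LSL, SHIFT_LSR, SHIFT_ASR, SHIFT_ROR };

    static uint64_t shift_reg64(uint64_t val, enum shift_type t, unsigned amount)
    {
        switch (t) {
        case SHIFT_LSL: return val << amount;
        case SHIFT_LSR: return val >> amount;
        case SHIFT_ASR: return (uint64_t)((int64_t)val >> amount);
        case SHIFT_ROR: return amount == 0 ? val
                             : (val >> amount) | (val << (64 - amount));
        }
        return val;
    }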

Library pseudocode for aarch64/instrs/integer/shiftreg/ShiftType

enumeration ShiftType {ShiftType_LSL, ShiftType_LSR, ShiftType_ASR, ShiftType_ROR};

Library pseudocode for aarch64/instrs/logicalop/LogicalOp

enumeration LogicalOp {LogicalOp_AND, LogicalOp_EOR, LogicalOp_ORR};

Library pseudocode for aarch64/instrs/memory/memop/MemAtomicOp

enumeration MemAtomicOp {MemAtomicOp_ADD, MemAtomicOp_BIC, MemAtomicOp_EOR, MemAtomicOp_ORR,
                         MemAtomicOp_SMAX, MemAtomicOp_SMIN, MemAtomicOp_UMAX, MemAtomicOp_UMIN,
                         MemAtomicOp_SWP};

Library pseudocode for aarch64/instrs/memory/memop/MemOp

enumeration MemOp {MemOp_LOAD, MemOp_STORE, MemOp_PREFETCH};

Library pseudocode for aarch64/instrs/memory/prefetch/Prefetch

// Prefetch()
// ==========
// Decode and execute the prefetch hint on ADDRESS specified by PRFOP

Prefetch(bits(64) address, bits(5) prfop)
    PrefetchHint hint;
    integer target;
    boolean stream;

    case prfop<4:3> of
        when '00' hint = Prefetch_READ;   // PLD: prefetch for load
        when '01' hint = Prefetch_EXEC;   // PLI: preload instructions
        when '10' hint = Prefetch_WRITE;  // PST: prepare for store
        when '11' return;                 // unallocated hint
    target = UInt(prfop<2:1>);            // target cache level
    stream = (prfop<0> != '0');           // streaming (non-temporal)
    Hint_Prefetch(address, hint, target, stream);
    return;
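The field split performed above can be illustrated with a small C decoder (ours, not the specification). It only prints the conventional mnemonic parts of a PRFM operand, such as PLDL1KEEP or PSTL3STRM, rather than invoking any hint:

    #include <stdio.h>

    /* Illustrative C decode of the 5-bit prfop field (not ARM's code):
       prfop<4:3> selects the hint, prfop<2:1> the target cache level,
       prfop<0> the streaming (non-temporal) flag. */
    static void decode_prfop(unsigned prfop)
    {
        static const char *const hints[] = { "PLD", "PLI", "PST" };
        unsigned hint   = (prfop >> 3) & 3;
        unsigned target = (prfop >> 1) & 3;   /* 0..2 encode L1..L3 */
        unsigned stream = prfop & 1;

        if (hint == 3) {                      /* unallocated: no effect */
            printf("prfop %#x: unallocated\n", prfop);
            return;
        }
        printf("prfop %#x: %sL%u%s\n", prfop, hints[hint], target + 1,
               stream ? "STRM" : "KEEP");
    }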

Library pseudocode for aarch64/instrs/system/barriers/barrierop/MemBarrierOp

enumeration MemBarrierOp  {  MemBarrierOp_DSB      // Data Synchronization Barrier
                           , MemBarrierOp_DMB      // Data Memory Barrier
                           , MemBarrierOp_ISB      // Instruction Synchronization Barrier
                           , MemBarrierOp_SSBB     // Speculative Synchronization Barrier to VA
                           , MemBarrierOp_PSSBB    // Speculative Synchronization Barrier to PA
                           , MemBarrierOp_SB       // Speculation Barrier
                          };

Library pseudocode for aarch64/instrs/system/hints/syshintop/SystemHintOp

enumeration SystemHintOp {
    SystemHintOp_NOP,
    SystemHintOp_YIELD,
    SystemHintOp_WFE,
    SystemHintOp_WFI,
    SystemHintOp_SEV,
    SystemHintOp_SEVL,
    SystemHintOp_ESB,
    SystemHintOp_PSB,
    SystemHintOp_TSB,
    SystemHintOp_BTI,
    SystemHintOp_CSDB
};

Library pseudocode for aarch64/instrs/system/register/cpsr/pstatefield/PSTATEField

enumeration PSTATEField {
    PSTATEField_DAIFSet,
    PSTATEField_DAIFClr,
    PSTATEField_PAN,     // Armv8.1
    PSTATEField_UAO,     // Armv8.2
    PSTATEField_DIT,     // Armv8.4
    PSTATEField_SSBS,
    PSTATEField_TCO,     // Armv8.5
    PSTATEField_SP
};

Library pseudocode for aarch64/instrs/system/sysops/sysop/SysOp

// SysOp()
// =======

SystemOp SysOp(bits(3) op1, bits(4) CRn, bits(4) CRm, bits(3) op2)
    case op1:CRn:CRm:op2 of
        when '000 0111 1000 000' return Sys_AT;   // S1E1R
        when '100 0111 1000 000' return Sys_AT;   // S1E2R
        when '110 0111 1000 000' return Sys_AT;   // S1E3R
        when '000 0111 1000 001' return Sys_AT;   // S1E1W
        when '100 0111 1000 001' return Sys_AT;   // S1E2W
        when '110 0111 1000 001' return Sys_AT;   // S1E3W
        when '000 0111 1000 010' return Sys_AT;   // S1E0R
        when '000 0111 1000 011' return Sys_AT;   // S1E0W
        when '100 0111 1000 100' return Sys_AT;   // S12E1R
        when '100 0111 1000 101' return Sys_AT;   // S12E1W
        when '100 0111 1000 110' return Sys_AT;   // S12E0R
        when '100 0111 1000 111' return Sys_AT;   // S12E0W
        when '011 0111 0100 001' return Sys_DC;   // ZVA
        when '000 0111 0110 001' return Sys_DC;   // IVAC
        when '000 0111 0110 010' return Sys_DC;   // ISW
        when '011 0111 1010 001' return Sys_DC;   // CVAC
        when '000 0111 1010 010' return Sys_DC;   // CSW
        when '011 0111 1011 001' return Sys_DC;   // CVAU
        when '011 0111 1110 001' return Sys_DC;   // CIVAC
        when '000 0111 1110 010' return Sys_DC;   // CISW
        when '011 0111 1101 001' return Sys_DC;   // CVADP
        when '000 0111 0001 000' return Sys_IC;   // IALLUIS
        when '000 0111 0101 000' return Sys_IC;   // IALLU
        when '011 0111 0101 001' return Sys_IC;   // IVAU
        when '100 1000 0000 001' return Sys_TLBI; // IPAS2E1IS
        when '100 1000 0000 101' return Sys_TLBI; // IPAS2LE1IS
        when '000 1000 0011 000' return Sys_TLBI; // VMALLE1IS
        when '100 1000 0011 000' return Sys_TLBI; // ALLE2IS
        when '110 1000 0011 000' return Sys_TLBI; // ALLE3IS
        when '000 1000 0011 001' return Sys_TLBI; // VAE1IS
        when '100 1000 0011 001' return Sys_TLBI; // VAE2IS
        when '110 1000 0011 001' return Sys_TLBI; // VAE3IS
        when '000 1000 0011 010' return Sys_TLBI; // ASIDE1IS
        when '000 1000 0011 011' return Sys_TLBI; // VAAE1IS
        when '100 1000 0011 100' return Sys_TLBI; // ALLE1IS
        when '000 1000 0011 101' return Sys_TLBI; // VALE1IS
        when '100 1000 0011 101' return Sys_TLBI; // VALE2IS
        when '110 1000 0011 101' return Sys_TLBI; // VALE3IS
        when '100 1000 0011 110' return Sys_TLBI; // VMALLS12E1IS
        when '000 1000 0011 111' return Sys_TLBI; // VAALE1IS
        when '100 1000 0100 001' return Sys_TLBI; // IPAS2E1
        when '100 1000 0100 101' return Sys_TLBI; // IPAS2LE1
        when '000 1000 0111 000' return Sys_TLBI; // VMALLE1
        when '100 1000 0111 000' return Sys_TLBI; // ALLE2
        when '110 1000 0111 000' return Sys_TLBI; // ALLE3
        when '000 1000 0111 001' return Sys_TLBI; // VAE1
        when '100 1000 0111 001' return Sys_TLBI; // VAE2
        when '110 1000 0111 001' return Sys_TLBI; // VAE3
        when '000 1000 0111 010' return Sys_TLBI; // ASIDE1
        when '000 1000 0111 011' return Sys_TLBI; // VAAE1
        when '100 1000 0111 100' return Sys_TLBI; // ALLE1
        when '000 1000 0111 101' return Sys_TLBI; // VALE1
        when '100 1000 0111 101' return Sys_TLBI; // VALE2
        when '110 1000 0111 101' return Sys_TLBI; // VALE3
        when '100 1000 0111 110' return Sys_TLBI; // VMALLS12E1
        when '000 1000 0111 111' return Sys_TLBI; // VAALE1
    return Sys_SYS;

Library pseudocode for aarch64/instrs/system/sysops/sysop/SystemOp

enumeration SystemOp {Sys_AT, Sys_DC, Sys_IC, Sys_TLBI, Sys_SYS};

Library pseudocode for aarch64/instrs/vector/arithmetic/binary/uniform/logical/bsl-eor/vbitop/VBitOp

enumeration VBitOp {VBitOp_VBIF, VBitOp_VBIT, VBitOp_VBSL, VBitOp_VEOR};

Library pseudocode for aarch64/instrs/vector/arithmetic/unary/cmp/compareop/CompareOp

enumeration CompareOp {CompareOp_GT, CompareOp_GE, CompareOp_EQ, CompareOp_LE, CompareOp_LT};

Library pseudocode for aarch64/instrs/vector/logical/immediateop/ImmediateOp

enumeration ImmediateOp {ImmediateOp_MOVI, ImmediateOp_MVNI, ImmediateOp_ORR, ImmediateOp_BIC};

Library pseudocode for aarch64/instrs/vector/reduce/reduceop/Reduce

// Reduce()
// ========

bits(esize) Reduce(ReduceOp op, bits(N) input, integer esize)
    integer half;
    bits(esize) hi;
    bits(esize) lo;
    bits(esize) result;

    if N == esize then
        return input<esize-1:0>;

    half = N DIV 2;
    hi = Reduce(op, input<N-1:half>, esize);
    lo = Reduce(op, input<half-1:0>, esize);

    case op of
        when ReduceOp_FMINNUM
            result = FPMinNum(lo, hi, FPCR);
        when ReduceOp_FMAXNUM
            result = FPMaxNum(lo, hi, FPCR);
        when ReduceOp_FMIN
            result = FPMin(lo, hi, FPCR);
        when ReduceOp_FMAX
            result = FPMax(lo, hi, FPCR);
        when ReduceOp_FADD
            result = FPAdd(lo, hi, FPCR);
        when ReduceOp_ADD
            result = lo + hi;

    return result;
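The recursion halves the vector and combines the two halves, so elements are combined pairwise rather than strictly left-to-right; this ordering is observable for the floating-point operations. As an illustration only (ours, not ARM's code), the integer ADD case can be modelled in C:

    #include <stdint.h>
    #include <stddef.h>

    /* Illustrative C model of Reduce for ReduceOp_ADD with 32-bit elements
       (not ARM's code).  n is the element count and must be a power of two,
       mirroring N == esize as the recursion's base case. */
    static uint32_t reduce_add_u32(const uint32_t *elem, size_t n)
    {
        if (n == 1)
            return elem[0];
        /* hi covers input<N-1:half>, lo covers input<half-1:0> */
        uint32_t hi = reduce_add_u32(elem + n / 2, n / 2);
        uint32_t lo = reduce_add_u32(elem, n / 2);
        return lo + hi;
    }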

Library pseudocode for aarch64/instrs/vector/reduce/reduceop/ReduceOp

enumeration ReduceOp {ReduceOp_FMINNUM, ReduceOp_FMAXNUM, ReduceOp_FMIN, ReduceOp_FMAX,
                      ReduceOp_FADD, ReduceOp_ADD};

Library pseudocode for aarch64/translation/attrs/AArch64.InstructionDevice

// AArch64.InstructionDevice()
// ===========================
// Instruction fetches from memory marked as Device but not execute-never might generate a
// Permission Fault but are otherwise treated as if from Normal Non-cacheable memory.

AddressDescriptor AArch64.InstructionDevice(AddressDescriptor addrdesc, bits(64) vaddress,
                                            bits(52) ipaddress, integer level,
                                            AccType acctype, boolean iswrite, boolean secondstage,
                                            boolean s2fs1walk)
    c = ConstrainUnpredictable(Unpredictable_INSTRDEVICE);
    assert c IN {Constraint_NONE, Constraint_FAULT};

    if c == Constraint_FAULT then
        addrdesc.fault = AArch64.PermissionFault(ipaddress, boolean UNKNOWN, level, acctype,
                                                 iswrite, secondstage, s2fs1walk);
    else
        addrdesc.memattrs.memtype = MemType_Normal;
        addrdesc.memattrs.inner.attrs = MemAttr_NC;
        addrdesc.memattrs.inner.hints = MemHint_No;
        addrdesc.memattrs.outer = addrdesc.memattrs.inner;
        addrdesc.memattrs.tagged = FALSE;
        addrdesc.memattrs = MemAttrDefaults(addrdesc.memattrs);

    return addrdesc;

Library pseudocode for aarch64/translation/attrs/AArch64.S1AttrDecode

// AArch64.S1AttrDecode()
// ======================
// Converts the Stage 1 attribute fields, using the MAIR, to orthogonal
// attributes and hints.

MemoryAttributes AArch64.S1AttrDecode(bits(2) SH, bits(3) attr, AccType acctype)
    MemoryAttributes memattrs;

    mair = MAIR[];
    index = 8 * UInt(attr);
    attrfield = mair<index+7:index>;

    memattrs.tagged = FALSE;
    if ((attrfield<7:4> != '0000' && attrfield<7:4> != '1111' && attrfield<3:0> == '0000') ||
        (attrfield<7:4> == '0000' && attrfield<3:0> != 'xx00')) then
        // Reserved, maps to an allocated value
        (-, attrfield) = ConstrainUnpredictableBits(Unpredictable_RESMAIR);
    if !HaveMTEExt() && attrfield<7:4> == '1111' && attrfield<3:0> == '0000' then
        // Reserved, maps to an allocated value
        (-, attrfield) = ConstrainUnpredictableBits(Unpredictable_RESMAIR);

    if attrfield<7:4> == '0000' then            // Device
        memattrs.memtype = MemType_Device;
        case attrfield<3:0> of
            when '0000' memattrs.device = DeviceType_nGnRnE;
            when '0100' memattrs.device = DeviceType_nGnRE;
            when '1000' memattrs.device = DeviceType_nGRE;
            when '1100' memattrs.device = DeviceType_GRE;
            otherwise Unreachable();            // Reserved, handled above

    elsif attrfield<3:0> != '0000' then         // Normal
        memattrs.memtype = MemType_Normal;
        memattrs.outer = LongConvertAttrsHints(attrfield<7:4>, acctype);
        memattrs.inner = LongConvertAttrsHints(attrfield<3:0>, acctype);
        memattrs.shareable = SH<1> == '1';
        memattrs.outershareable = SH == '10';
    elsif HaveMTEExt() && attrfield == '11110000' then    // Tagged, Normal
        memattrs.tagged = TRUE;
        memattrs.memtype = MemType_Normal;
        memattrs.outer.attrs = MemAttr_WB;
        memattrs.inner.attrs = MemAttr_WB;
        memattrs.outer.hints = MemHint_RWA;
        memattrs.inner.hints = MemHint_RWA;
        memattrs.shareable = SH<1> == '1';
        memattrs.outershareable = SH == '10';
    else
        Unreachable();                          // Reserved, handled above

    return MemAttrDefaults(memattrs);

Library pseudocode for aarch64/translation/attrs/AArch64.TranslateAddressS1Off

// AArch64.TranslateAddressS1Off()
// ===============================
// Called for stage 1 translations when translation is disabled to supply a default translation.
// Note that there are additional constraints on instruction prefetching that are not described in
// this pseudocode.

TLBRecord AArch64.TranslateAddressS1Off(bits(64) vaddress, AccType acctype, boolean iswrite)
    assert !ELUsingAArch32(S1TranslationRegime());

    TLBRecord result;

    Top = AddrTop(vaddress, (acctype == AccType_IFETCH), PSTATE.EL);
    if !IsZero(vaddress<Top:PAMax()>) then
        level = 0;
        ipaddress = bits(52) UNKNOWN;
        secondstage = FALSE;
        s2fs1walk = FALSE;
        result.addrdesc.fault = AArch64.AddressSizeFault(ipaddress, boolean UNKNOWN, level, acctype,
                                                         iswrite, secondstage, s2fs1walk);
        return result;

    default_cacheable = (HasS2Translation() && HCR_EL2.DC == '1');

    if default_cacheable then
        // Use default cacheable settings
        result.addrdesc.memattrs.memtype = MemType_Normal;
        result.addrdesc.memattrs.inner.attrs = MemAttr_WB;    // Write-back
        result.addrdesc.memattrs.inner.hints = MemHint_RWA;
        result.addrdesc.memattrs.shareable = FALSE;
        result.addrdesc.memattrs.outershareable = FALSE;
        result.addrdesc.memattrs.tagged = HCR_EL2.DCT == '1';
    elsif acctype != AccType_IFETCH then
        // Treat data as Device
        result.addrdesc.memattrs.memtype = MemType_Device;
        result.addrdesc.memattrs.device = DeviceType_nGnRnE;
        result.addrdesc.memattrs.inner = MemAttrHints UNKNOWN;
        result.addrdesc.memattrs.tagged = FALSE;
    else
        // Instruction cacheability controlled by SCTLR_ELx.I
        cacheable = SCTLR[].I == '1';
        result.addrdesc.memattrs.memtype = MemType_Normal;
        if cacheable then
            result.addrdesc.memattrs.inner.attrs = MemAttr_WT;
            result.addrdesc.memattrs.inner.hints = MemHint_RA;
        else
            result.addrdesc.memattrs.inner.attrs = MemAttr_NC;
            result.addrdesc.memattrs.inner.hints = MemHint_No;
        result.addrdesc.memattrs.shareable = TRUE;
        result.addrdesc.memattrs.outershareable = TRUE;
        result.addrdesc.memattrs.tagged = FALSE;

    result.addrdesc.memattrs.outer = result.addrdesc.memattrs.inner;
    result.addrdesc.memattrs = MemAttrDefaults(result.addrdesc.memattrs);

    result.perms.ap = bits(3) UNKNOWN;
    result.perms.xn = '0';
    result.perms.pxn = '0';

    result.nG = bit UNKNOWN;
    result.contiguous = boolean UNKNOWN;
    result.domain = bits(4) UNKNOWN;
    result.level = integer UNKNOWN;
    result.blocksize = integer UNKNOWN;
    result.addrdesc.paddress.address = vaddress<51:0>;
    result.addrdesc.paddress.NS = if IsSecure() then '0' else '1';
    result.addrdesc.fault = AArch64.NoFault();
    return result;

Library pseudocode for aarch64/translation/checks/AArch64.AccessIsPrivileged

// AArch64.AccessIsPrivileged()
// ============================

boolean AArch64.AccessIsPrivileged(AccType acctype)
    el = AArch64.AccessUsesEL(acctype);

    if el == EL0 then
        ispriv = FALSE;
    elsif el == EL3 then
        ispriv = TRUE;
    elsif el == EL2 && (!IsInHost() || HCR_EL2.TGE == '0') then
        ispriv = TRUE;
    elsif HaveUAOExt() && PSTATE.UAO == '1' then
        ispriv = TRUE;
    else
        ispriv = (acctype != AccType_UNPRIV);

    return ispriv;

Library pseudocode for aarch64/translation/checks/AArch64.AccessUsesEL

// AArch64.AccessUsesEL()
// ======================
// Returns the Exception Level of the regime that will manage the translation for a given access type.

bits(2) AArch64.AccessUsesEL(AccType acctype)
    if acctype == AccType_UNPRIV then
        return EL0;
    elsif acctype == AccType_NV2REGISTER then
        return EL2;
    else
        return PSTATE.EL;

Library pseudocode for aarch64/translation/checks/AArch64.CheckPermission

// AArch64.CheckPermission()
// =========================
// Function used for permission checking from AArch64 stage 1 translations

FaultRecord AArch64.CheckPermission(Permissions perms, bits(64) vaddress, integer level,
                                    bit NS, AccType acctype, boolean iswrite)
    assert !ELUsingAArch32(S1TranslationRegime());

    wxn = SCTLR[].WXN == '1';

    if (PSTATE.EL == EL0 || IsInHost() ||
        (PSTATE.EL == EL1 && !HaveNV2Ext()) ||
        (PSTATE.EL == EL1 && HaveNV2Ext() && (acctype != AccType_NV2REGISTER || !ELIsInHost(EL2)))) then
        priv_r = TRUE;
        priv_w = perms.ap<2> == '0';
        user_r = perms.ap<1> == '1';
        user_w = perms.ap<2:1> == '01';

        ispriv = AArch64.AccessIsPrivileged(acctype);

        pan = if HavePANExt() then PSTATE.PAN else '0';
        if (EL2Enabled() && ((PSTATE.EL == EL1 && HaveNVExt() && HCR_EL2.<NV, NV1> == '11') ||
            (HaveNV2Ext() && acctype == AccType_NV2REGISTER && HCR_EL2.NV2 == '1'))) then
            pan = '0';
        is_ldst   = !(acctype IN {AccType_DC, AccType_DC_UNPRIV, AccType_AT, AccType_IFETCH});
        is_ats1xp = (acctype == AccType_AT && AArch64.ExecutingATS1xPInstr());
        if pan == '1' && user_r && ispriv && (is_ldst || is_ats1xp) then
            priv_r = FALSE;
            priv_w = FALSE;

        user_xn = perms.xn == '1' || (user_w && wxn);
        priv_xn = perms.pxn == '1' || (priv_w && wxn) || user_w;

        if ispriv then
            (r, w, xn) = (priv_r, priv_w, priv_xn);
        else
            (r, w, xn) = (user_r, user_w, user_xn);
    else
        // Access from EL2 or EL3
        r = TRUE;
        w = perms.ap<2> == '0';
        xn = perms.xn == '1' || (w && wxn);

    // Restriction on Secure instruction fetch
    if HaveEL(EL3) && IsSecure() && NS == '1' && SCR_EL3.SIF == '1' then
        xn = TRUE;

    if acctype == AccType_IFETCH then
        fail = xn;
        failedread = TRUE;
    elsif acctype IN { AccType_ATOMICRW, AccType_ORDEREDRW, AccType_ORDEREDATOMICRW } then
        fail = !r || !w;
        failedread = !r;
    elsif iswrite then
        fail = !w;
        failedread = FALSE;
    elsif acctype == AccType_DC && PSTATE.EL != EL0 then
        // DC maintenance instructions operating by VA cannot fault from stage 1 translation,
        // other than DC IVAC, which requires write permission, and operations executed at EL0,
        // which require read permission.
        fail = FALSE;
    else
        fail = !r;
        failedread = TRUE;

    if fail then
        secondstage = FALSE;
        s2fs1walk = FALSE;
        ipaddress = bits(52) UNKNOWN;
        return AArch64.PermissionFault(ipaddress, boolean UNKNOWN, level, acctype,
                                       !failedread, secondstage, s2fs1walk);
    else
        return AArch64.NoFault();

Library pseudocode for aarch64/translation/checks/AArch64.CheckS2Permission

// AArch64.CheckS2Permission()
// ===========================
// Function used for permission checking from AArch64 stage 2 translations

FaultRecord AArch64.CheckS2Permission(Permissions perms, bits(64) vaddress, bits(52) ipaddress,
                                      integer level, AccType acctype, boolean iswrite,
                                      boolean NS, boolean s2fs1walk, boolean hwupdatewalk)
    assert IsSecureEL2Enabled() || (HaveEL(EL2) && !IsSecure() && !ELUsingAArch32(EL2)) && HasS2Translation();

    r = perms.ap<1> == '1';
    w = perms.ap<2> == '1';
    if HaveExtendedExecuteNeverExt() then
        case perms.xn:perms.xxn of
            when '00' xn = FALSE;
            when '01' xn = PSTATE.EL == EL1;
            when '10' xn = TRUE;
            when '11' xn = PSTATE.EL == EL0;
    else
        xn = perms.xn == '1';
    // Stage 1 walk is checked as a read, regardless of the original type
    if acctype == AccType_IFETCH && !s2fs1walk then
        fail = xn;
        failedread = TRUE;
    elsif (acctype IN { AccType_ATOMICRW, AccType_ORDEREDRW, AccType_ORDEREDATOMICRW }) && !s2fs1walk then
        fail = !r || !w;
        failedread = !r;
    elsif iswrite && !s2fs1walk then
        fail = !w;
        failedread = FALSE;
    elsif acctype == AccType_DC && PSTATE.EL != EL0 && !s2fs1walk then
        // DC maintenance instructions operating by VA, with the exception of DC IVAC, do
        // not generate Permission faults from stage 2 translation, other than when
        // performing a stage 1 translation table walk.
        fail = FALSE;
    elsif hwupdatewalk then
        fail = !w;
        failedread = !iswrite;
    else
        fail = !r;
        failedread = !iswrite;

    if fail then
        domain = bits(4) UNKNOWN;
        secondstage = TRUE;
        return AArch64.PermissionFault(ipaddress, NS, level, acctype,
                                       !failedread, secondstage, s2fs1walk);
    else
        return AArch64.NoFault();

Library pseudocode for aarch64/translation/debug/AArch64.CheckBreakpoint

// AArch64.CheckBreakpoint()
// =========================
// Called before executing the instruction of length "size" bytes at "vaddress" in an AArch64
// translation regime.
// The breakpoint can in fact be evaluated well ahead of execution, for example, at instruction
// fetch. This is the simple sequential execution of the program.

FaultRecord AArch64.CheckBreakpoint(bits(64) vaddress, AccType acctype, integer size)
    assert !ELUsingAArch32(S1TranslationRegime());
    assert (UsingAArch32() && size IN {2,4}) || size == 4;

    match = FALSE;

    for i = 0 to UInt(ID_AA64DFR0_EL1.BRPs)
        match_i = AArch64.BreakpointMatch(i, vaddress, acctype, size);
        match = match || match_i;

    if match && HaltOnBreakpointOrWatchpoint() then
        reason = DebugHalt_Breakpoint;
        Halt(reason);
    elsif match && MDSCR_EL1.MDE == '1' && AArch64.GenerateDebugExceptions() then
        acctype = AccType_IFETCH;
        iswrite = FALSE;
        return AArch64.DebugFault(acctype, iswrite);
    else
        return AArch64.NoFault();
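The loop above simply ORs together the match result of every implemented breakpoint. A small Python sketch of that scan, with match_fn standing in for AArch64.BreakpointMatch (an assumption of this sketch):

def any_breakpoint_match(num_brps, vaddress, size, match_fn):
    match = False
    for i in range(num_brps + 1):      # the BRPs field encodes "number implemented minus 1"
        match = match or match_fn(i, vaddress, size)
    return match

# Example: only breakpoint 2 matches address 0x1000
hits = lambda i, va, sz: i == 2 and va == 0x1000
print(any_breakpoint_match(5, 0x1000, 4, hits))    # True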

Library pseudocode for aarch64/translation/debug/AArch64.CheckDebug

// AArch64.CheckDebug()
// ====================
// Called on each access to check for a debug exception or entry to Debug state.

FaultRecord AArch64.CheckDebug(bits(64) vaddress, AccType acctype, boolean iswrite, integer size)
    FaultRecord fault = AArch64.NoFault();

    d_side = (acctype != AccType_IFETCH);
    generate_exception = AArch64.GenerateDebugExceptions() && MDSCR_EL1.MDE == '1';
    halt = HaltOnBreakpointOrWatchpoint();

    if generate_exception || halt then
        if d_side then
            fault = AArch64.CheckWatchpoint(vaddress, acctype, iswrite, size);
        else
            fault = AArch64.CheckBreakpoint(vaddress, acctype, size);

    return fault;

Library pseudocode for aarch64/translation/debug/AArch64.CheckWatchpoint

// AArch64.CheckWatchpoint()
// =========================
// Called before accessing the memory location of "size" bytes at "address".

FaultRecord AArch64.CheckWatchpoint(bits(64) vaddress, AccType acctype, boolean iswrite, integer size)
    assert !ELUsingAArch32(S1TranslationRegime());

    match = FALSE;
    ispriv = AArch64.AccessIsPrivileged(acctype);

    for i = 0 to UInt(ID_AA64DFR0_EL1.WRPs)
        match = match || AArch64.WatchpointMatch(i, vaddress, size, ispriv, acctype, iswrite);

    if match && HaltOnBreakpointOrWatchpoint() then
        if acctype != AccType_NONFAULT && acctype != AccType_CNOTFIRST then
            reason = DebugHalt_Watchpoint;
            Halt(reason);
        else
            // Fault will be reported and cancelled
            return AArch64.DebugFault(acctype, iswrite);
    elsif match && MDSCR_EL1.MDE == '1' && AArch64.GenerateDebugExceptions() then
        return AArch64.DebugFault(acctype, iswrite);
    else
        return AArch64.NoFault();
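The watchpoint outcome follows a fixed priority: halting debug first, then a debug exception, with non-faulting access types downgraded to a reported-and-cancelled fault. A hedged Python sketch of that decision (the enum strings are illustrative only, not the pseudocode's AccType values):

def watchpoint_outcome(match, halting_allowed, mde, exceptions_enabled, acctype):
    if match and halting_allowed:
        if acctype not in ("NONFAULT", "CNOTFIRST"):
            return "halt"            # enter Debug state
        return "debug-fault"         # fault reported and cancelled
    if match and mde and exceptions_enabled:
        return "debug-fault"         # Watchpoint debug exception
    return "no-fault"

print(watchpoint_outcome(True, True, True, True, "NORMAL"))    # halt
print(watchpoint_outcome(True, False, True, True, "NORMAL"))   # debug-fault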

Library pseudocode for aarch64/translation/faults/AArch64.AccessFlagFault

// AArch64.AccessFlagFault()
// =========================

FaultRecord AArch64.AccessFlagFault(bits(52) ipaddress, boolean NS, integer level,
                                    AccType acctype, boolean iswrite, boolean secondstage,
                                    boolean s2fs1walk)
    extflag = bit UNKNOWN;
    errortype = bits(2) UNKNOWN;
    return AArch64.CreateFaultRecord(Fault_AccessFlag, ipaddress, NS, level, acctype, iswrite,
                                     extflag, errortype, secondstage, s2fs1walk);
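This and the following fault constructors all funnel into AArch64.CreateFaultRecord with the same field list. A rough Python stand-in for that record and one constructor, to make the shared shape visible; the field names and placeholder defaults here are assumptions of the sketch, not the architected record:

from dataclasses import dataclass

@dataclass
class FaultRecord:
    statuscode: str          # e.g. "AccessFlag", "Permission", "None"
    ipaddress: int
    ns: bool                 # Non-secure attribute of the IPA
    level: int
    acctype: str
    write: bool
    extflag: int
    errortype: int
    secondstage: bool
    s2fs1walk: bool

def access_flag_fault(ipaddress, ns, level, acctype, iswrite, secondstage, s2fs1walk):
    # extflag/errortype are UNKNOWN in the pseudocode; 0 is a placeholder here
    return FaultRecord("AccessFlag", ipaddress, ns, level, acctype, iswrite,
                       0, 0, secondstage, s2fs1walk)

print(access_flag_fault(0x80000, False, 3, "NORMAL", False, False, False).statuscode)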

Library pseudocode for aarch64/translation/faults/AArch64.AddressSizeFault

// AArch64.AddressSizeFault()
// ==========================

FaultRecord AArch64.AddressSizeFault(bits(52) ipaddress, boolean NS, integer level,
                                     AccType acctype, boolean iswrite, boolean secondstage,
                                     boolean s2fs1walk)
    extflag = bit UNKNOWN;
    errortype = bits(2) UNKNOWN;
    return AArch64.CreateFaultRecord(Fault_AddressSize, ipaddress, NS, level, acctype, iswrite,
                                     extflag, errortype, secondstage, s2fs1walk);

Library pseudocode for aarch64/translation/faults/AArch64.AlignmentFault

// AArch64.AlignmentFault()
// ========================

FaultRecord AArch64.AlignmentFault(AccType acctype, boolean iswrite, boolean secondstage)
    ipaddress = bits(52) UNKNOWN;
    level = integer UNKNOWN;
    extflag = bit UNKNOWN;
    errortype = bits(2) UNKNOWN;
    s2fs1walk = boolean UNKNOWN;
    return AArch64.CreateFaultRecord(Fault_Alignment, ipaddress, boolean UNKNOWN, level, acctype,
                                     iswrite, extflag, errortype, secondstage, s2fs1walk);

Library pseudocode for aarch64/translation/faults/AArch64.AsynchExternalAbort

// AArch64.AsynchExternalAbort()
// =============================
// Wrapper function for asynchronous external aborts

FaultRecord AArch64.AsynchExternalAbort(boolean parity, bits(2) errortype, bit extflag)
    faulttype = if parity then Fault_AsyncParity else Fault_AsyncExternal;
    ipaddress = bits(52) UNKNOWN;
    level = integer UNKNOWN;
    acctype = AccType_NORMAL;
    iswrite = boolean UNKNOWN;
    secondstage = FALSE;
    s2fs1walk = FALSE;
    return AArch64.CreateFaultRecord(faulttype, ipaddress, boolean UNKNOWN, level, acctype,
                                     iswrite, extflag, errortype, secondstage, s2fs1walk);

Library pseudocode for aarch64/translation/faults/AArch64.DebugFault

// AArch64.DebugFault()
// ====================

FaultRecord AArch64.DebugFault(AccType acctype, boolean iswrite)
    ipaddress = bits(52) UNKNOWN;
    errortype = bits(2) UNKNOWN;
    level = integer UNKNOWN;
    extflag = bit UNKNOWN;
    secondstage = FALSE;
    s2fs1walk = FALSE;
    return AArch64.CreateFaultRecord(Fault_Debug, ipaddress, boolean UNKNOWN, level, acctype,
                                     iswrite, extflag, errortype, secondstage, s2fs1walk);

Library pseudocode for aarch64/translation/faults/AArch64.NoFault

// AArch64.NoFault()
// =================

FaultRecord AArch64.NoFault()
    ipaddress = bits(52) UNKNOWN;
    level = integer UNKNOWN;
    acctype = AccType_NORMAL;
    iswrite = boolean UNKNOWN;
    extflag = bit UNKNOWN;
    errortype = bits(2) UNKNOWN;
    secondstage = FALSE;
    s2fs1walk = FALSE;
    return AArch64.CreateFaultRecord(Fault_None, ipaddress, boolean UNKNOWN, level, acctype,
                                     iswrite, extflag, errortype, secondstage, s2fs1walk);

Library pseudocode for aarch64/translation/faults/AArch64.PermissionFault

// AArch64.PermissionFault()
// =========================

FaultRecord AArch64.PermissionFault(bits(52) ipaddress, boolean NS, integer level,
                                    AccType acctype, boolean iswrite, boolean secondstage,
                                    boolean s2fs1walk)
    extflag = bit UNKNOWN;
    errortype = bits(2) UNKNOWN;
    return AArch64.CreateFaultRecord(Fault_Permission, ipaddress, NS, level, acctype, iswrite,
                                     extflag, errortype, secondstage, s2fs1walk);

Library pseudocode for aarch64/translation/faults/AArch64.TranslationFault

// AArch64.TranslationFault()
// ==========================

FaultRecord AArch64.TranslationFault(bits(52) ipaddress, boolean NS, integer level,
                                     AccType acctype, boolean iswrite, boolean secondstage,
                                     boolean s2fs1walk)
    extflag = bit UNKNOWN;
    errortype = bits(2) UNKNOWN;
    return AArch64.CreateFaultRecord(Fault_Translation, ipaddress, NS, level, acctype, iswrite,
                                     extflag, errortype, secondstage, s2fs1walk);

Library pseudocode for aarch64/translation/translation/AArch64.CheckAndUpdateDescriptor

// AArch64.CheckAndUpdateDescriptor()
// ==================================
// Check and update translation table descriptor if hardware update is configured

FaultRecord AArch64.CheckAndUpdateDescriptor(DescriptorUpdate result, FaultRecord fault,
                                             boolean secondstage, bits(64) vaddress, AccType acctype,
                                             boolean iswrite, boolean s2fs1walk, boolean hwupdatewalk)
    boolean hw_update_AF = FALSE;
    boolean hw_update_AP = FALSE;

    // Check if access flag can be updated
    // Address translation instructions are permitted to update AF but not required
    if result.AF then
        if fault.statuscode == Fault_None || ConstrainUnpredictable(Unpredictable_AFUPDATE) == Constraint_TRUE then
            hw_update_AF = TRUE;

    if result.AP && fault.statuscode == Fault_None then
        write_perm_req = (iswrite || acctype IN {AccType_ATOMICRW, AccType_ORDEREDRW, AccType_ORDEREDATOMICRW}) && !s2fs1walk;
        hw_update_AP = (write_perm_req && !(acctype IN {AccType_AT, AccType_DC, AccType_DC_UNPRIV})) || hwupdatewalk;

    if hw_update_AF || hw_update_AP then
        if secondstage || !HasS2Translation() then
            descaddr2 = result.descaddr;
        else
            hwupdatewalk = TRUE;
            descaddr2 = AArch64.SecondStageWalk(result.descaddr, vaddress, acctype, iswrite, 8, hwupdatewalk);
            if IsFault(descaddr2) then
                return descaddr2.fault;

        accdesc = CreateAccessDescriptor(AccType_ATOMICRW);
        desc = _Mem[descaddr2, 8, accdesc];
        el = AArch64.AccessUsesEL(acctype);
        case el of
            when EL3
                reversedescriptors = SCTLR_EL3.EE == '1';
            when EL2
                reversedescriptors = SCTLR_EL2.EE == '1';
            otherwise
                reversedescriptors = SCTLR_EL1.EE == '1';
        if reversedescriptors then
            desc = BigEndianReverse(desc);

        if hw_update_AF then
            desc<10> = '1';
        if hw_update_AP then
            desc<7> = (if secondstage then '1' else '0');

        _Mem[descaddr2,8,accdesc] = if reversedescriptors then BigEndianReverse(desc) else desc;

    return fault;
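The actual descriptor edit is two bit operations: set the access flag, and move AP[2]/S2AP[2] in whichever direction grants the write for that stage. A bit-level Python sketch under those assumptions (the memory read/write and the big-endian reversal are deliberately elided; names are invented for the sketch):

def update_descriptor(desc, hw_update_af, hw_update_ap, secondstage):
    if hw_update_af:
        desc |= (1 << 10)                    # Access flag becomes 1
    if hw_update_ap:
        if secondstage:
            desc |= (1 << 7)                 # S2AP[2] = 1 -> write allowed
        else:
            desc &= ~(1 << 7)                # AP[2] = 0  -> write allowed
    return desc

d = 0x0000_0000_0000_0001                    # valid descriptor, AF clear
print(hex(update_descriptor(d, True, True, secondstage=False)))   # 0x401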

Library pseudocode for aarch64/translation/translation/AArch64.FirstStageTranslate

// AArch64.FirstStageTranslate()
// =============================
// Perform a stage 1 translation walk. The function used by Address Translation operations is
// similar except it uses the translation regime specified for the instruction.

AddressDescriptor AArch64.FirstStageTranslate(bits(64) vaddress, AccType acctype, boolean iswrite,
                                              boolean wasaligned, integer size)
    if HaveNV2Ext() && acctype == AccType_NV2REGISTER then
        s1_enabled = SCTLR_EL2.M == '1';
    elsif HasS2Translation() then
        s1_enabled = HCR_EL2.TGE == '0' && HCR_EL2.DC == '0' && SCTLR_EL1.M == '1';
    else
        s1_enabled = SCTLR[].M == '1';

    ipaddress = bits(52) UNKNOWN;
    secondstage = FALSE;
    s2fs1walk = FALSE;

    if s1_enabled then                            // First stage enabled
        S1 = AArch64.TranslationTableWalk(ipaddress, TRUE, vaddress, acctype, iswrite,
                                          secondstage, s2fs1walk, size);
        permissioncheck = TRUE;
        if acctype == AccType_IFETCH then
            InGuardedPage = S1.GP == '1';         // Global state updated on instruction fetch that denotes
                                                  // if the fetched instruction is from a guarded page.
    else
        S1 = AArch64.TranslateAddressS1Off(vaddress, acctype, iswrite);
        permissioncheck = FALSE;

    if UsingAArch32() && HaveTrapLoadStoreMultipleDeviceExt() && AArch32.ExecutingLSMInstr() then
        if S1.addrdesc.memattrs.memtype == MemType_Device && S1.addrdesc.memattrs.device != DeviceType_GRE then
            nTLSMD = if S1TranslationRegime() == EL2 then SCTLR_EL2.nTLSMD else SCTLR_EL1.nTLSMD;
            if nTLSMD == '0' then
                S1.addrdesc.fault = AArch64.AlignmentFault(acctype, iswrite, secondstage);

    // Check for unaligned data accesses to Device memory
    if ((!wasaligned && acctype != AccType_IFETCH) || (acctype == AccType_DCZVA)) &&
        S1.addrdesc.memattrs.memtype == MemType_Device && !IsFault(S1.addrdesc) then
        S1.addrdesc.fault = AArch64.AlignmentFault(acctype, iswrite, secondstage);

    if !IsFault(S1.addrdesc) && permissioncheck then
        S1.addrdesc.fault = AArch64.CheckPermission(S1.perms, vaddress, S1.level,
                                                    S1.addrdesc.paddress.NS, acctype, iswrite);

    // Check for instruction fetches from Device memory not marked as execute-never. If there has
    // not been a Permission Fault then the memory is not marked execute-never.
    if (!IsFault(S1.addrdesc) && S1.addrdesc.memattrs.memtype == MemType_Device &&
        acctype == AccType_IFETCH) then
        S1.addrdesc = AArch64.InstructionDevice(S1.addrdesc, vaddress, ipaddress, S1.level,
                                                acctype, iswrite, secondstage, s2fs1walk);

    // Check and update translation table descriptor if required
    hwupdatewalk = FALSE;
    s2fs1walk = FALSE;
    S1.addrdesc.fault = AArch64.CheckAndUpdateDescriptor(S1.descupdate, S1.addrdesc.fault,
                                                         secondstage, vaddress, acctype, iswrite,
                                                         s2fs1walk, hwupdatewalk);

    return S1.addrdesc;
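One easily missed predicate above is the unaligned-access check against Device memory: DC ZVA is always treated as potentially unaligned and instruction fetches never take this fault. A minimal Python model of just that predicate (names invented for the sketch):

def device_alignment_fault(wasaligned, acctype, memtype):
    data_unaligned = (not wasaligned and acctype != "IFETCH") or acctype == "DCZVA"
    return data_unaligned and memtype == "Device"

print(device_alignment_fault(False, "NORMAL", "Device"))   # True  -> Alignment fault
print(device_alignment_fault(False, "IFETCH", "Device"))   # False -> no fault here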

Library pseudocode for aarch64/translation/translation/AArch64.FullTranslate

// AArch64.FullTranslate()
// =======================
// Perform both stage 1 and stage 2 translation walks for the current translation regime. The
// function used by Address Translation operations is similar except it uses the translation
// regime specified for the instruction.

AddressDescriptor AArch64.FullTranslate(bits(64) vaddress, AccType acctype, boolean iswrite,
                                        boolean wasaligned, integer size)
    // First Stage Translation
    S1 = AArch64.FirstStageTranslate(vaddress, acctype, iswrite, wasaligned, size);
    if !IsFault(S1) && !(HaveNV2Ext() && acctype == AccType_NV2REGISTER) && HasS2Translation() then
        s2fs1walk = FALSE;
        hwupdatewalk = FALSE;
        result = AArch64.SecondStageTranslate(S1, vaddress, acctype, iswrite, wasaligned,
                                              s2fs1walk, size, hwupdatewalk);
    else
        result = S1;

    return result;
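FullTranslate is plain function composition: stage 1 runs first, and only a fault-free stage 1 output is fed to stage 2 when stage 2 is in force. A toy Python sketch of that composition, with stage1 and stage2 as caller-supplied callables standing in for the functions above (an assumption of this sketch):

def full_translate(va, stage1, stage2, s2_active):
    paddr, fault = stage1(va)            # each stage returns (address, fault-or-None)
    if fault is None and s2_active:
        return stage2(paddr)             # feed the IPA through stage 2
    return (paddr, fault)

s1 = lambda va: (va & 0xFFFF_FFFF, None)
s2 = lambda ipa: (ipa | (1 << 40), None)
print(full_translate(0x1234, s1, s2, True))   # stage 2 applied to the stage 1 output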

Library pseudocode for aarch64/translation/translation/AArch64.SecondStageTranslate

// AArch64.SecondStageTranslate()
// ==============================
// Perform a stage 2 translation walk. The function used by Address Translation operations is
// similar except it uses the translation regime specified for the instruction.

AddressDescriptor AArch64.SecondStageTranslate(AddressDescriptor S1, bits(64) vaddress,
                                               AccType acctype, boolean iswrite, boolean wasaligned,
                                               boolean s2fs1walk, integer size, boolean hwupdatewalk)
    assert HasS2Translation();

    s2_enabled = HCR_EL2.VM == '1' || HCR_EL2.DC == '1';
    secondstage = TRUE;

    if s2_enabled then                            // Second stage enabled
        ipaddress = S1.paddress.address<51:0>;
        NS = S1.paddress.NS == '1';
        S2 = AArch64.TranslationTableWalk(ipaddress, NS, vaddress, acctype, iswrite,
                                          secondstage, s2fs1walk, size);

        // Check for unaligned data accesses to Device memory
        if ((!wasaligned && acctype != AccType_IFETCH) || (acctype == AccType_DCZVA)) &&
            S2.addrdesc.memattrs.memtype == MemType_Device && !IsFault(S2.addrdesc) then
            S2.addrdesc.fault = AArch64.AlignmentFault(acctype, iswrite, secondstage);

        // Check for permissions on Stage2 translations
        if !IsFault(S2.addrdesc) then
            S2.addrdesc.fault = AArch64.CheckS2Permission(S2.perms, vaddress, ipaddress, S2.level,
                                                          acctype, iswrite, NS, s2fs1walk, hwupdatewalk);

        // Check for instruction fetches from Device memory not marked as execute-never. As there
        // has not been a Permission Fault then the memory is not marked execute-never.
        if (!s2fs1walk && !IsFault(S2.addrdesc) && S2.addrdesc.memattrs.memtype == MemType_Device &&
            acctype == AccType_IFETCH) then
            S2.addrdesc = AArch64.InstructionDevice(S2.addrdesc, vaddress, ipaddress, S2.level,
                                                    acctype, iswrite, secondstage, s2fs1walk);

        // Check for protected table walk
        if (s2fs1walk && !IsFault(S2.addrdesc) && HCR_EL2.PTW == '1' &&
            S2.addrdesc.memattrs.memtype == MemType_Device) then
            S2.addrdesc.fault = AArch64.PermissionFault(ipaddress, NS, S2.level, acctype,
                                                        iswrite, secondstage, s2fs1walk);

        // Check and update translation table descriptor if required
        S2.addrdesc.fault = AArch64.CheckAndUpdateDescriptor(S2.descupdate, S2.addrdesc.fault,
                                                             secondstage, vaddress, acctype, iswrite,
                                                             s2fs1walk, hwupdatewalk);

        result = CombineS1S2Desc(S1, S2.addrdesc);
    else
        result = S1;

    return result;
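A notable special case above is the protected table walk: when HCR_EL2.PTW is set, a stage 1 table walk access (s2fs1walk) that maps to Device memory at stage 2 is turned into a stage 2 Permission fault rather than being performed. Sketch of that rule in Python (illustrative only; names are inventions of this sketch):

def protected_table_walk_fault(s2fs1walk, ptw, memtype):
    return s2fs1walk and ptw == 1 and memtype == "Device"

print(protected_table_walk_fault(True, 1, "Device"))    # True  -> Permission fault
print(protected_table_walk_fault(False, 1, "Device"))   # False -> ordinary access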

Library pseudocode for aarch64/translation/translation/AArch64.SecondStageWalk

// AArch64.SecondStageWalk()
// =========================
// Perform a stage 2 translation on a stage 1 translation page table walk access.

AddressDescriptor AArch64.SecondStageWalk(AddressDescriptor S1, bits(64) vaddress, AccType acctype,
                                          boolean iswrite, integer size, boolean hwupdatewalk)
    assert HasS2Translation();

    s2fs1walk = TRUE;
    wasaligned = TRUE;
    return AArch64.SecondStageTranslate(S1, vaddress, acctype, iswrite, wasaligned, s2fs1walk,
                                        size, hwupdatewalk);

Library pseudocode for aarch64/translation/translation/AArch64.TranslateAddress

// AArch64.TranslateAddress()
// ==========================
// Main entry point for translating an address

AddressDescriptor AArch64.TranslateAddress(bits(64) vaddress, AccType acctype, boolean iswrite,
                                           boolean wasaligned, integer size)
    result = AArch64.FullTranslate(vaddress, acctype, iswrite, wasaligned, size);

    if !(acctype IN {AccType_PTW, AccType_IC, AccType_AT}) && !IsFault(result) then
        result.fault = AArch64.CheckDebug(vaddress, acctype, iswrite, size);

    // Update virtual address for abort functions
    result.vaddress = ZeroExtend(vaddress);

    return result;

Library pseudocode for aarch64/translation/walk/AArch64.TranslationTableWalk

// AArch64.TranslationTableWalk()
// ==============================
// Returns a result of a translation table walk
//
// Implementations might cache information from memory in any number of non-coherent TLB
// caching structures, and so avoid memory accesses that have been expressed in this
// pseudocode. The use of such TLBs is not expressed in this pseudocode.

TLBRecord AArch64.TranslationTableWalk(bits(52) ipaddress, boolean s1_nonsecure, bits(64) vaddress,
                                       AccType acctype, boolean iswrite, boolean secondstage,
                                       boolean s2fs1walk, integer size)
    if !secondstage then
        assert !ELUsingAArch32(S1TranslationRegime());
    else
        assert IsSecureEL2Enabled() || (HaveEL(EL2) && !IsSecure() && !ELUsingAArch32(EL2)) && HasS2Translation();

    TLBRecord result;
    AddressDescriptor descaddr;
    bits(64) baseregister;
    bits(64) inputaddr;        // Input Address is 'vaddress' for stage 1, 'ipaddress' for stage 2
    bit nswalk;                // Stage 2 translation table walks are to Secure or to Non-secure PA space

    descaddr.memattrs.memtype = MemType_Normal;

    // Derived parameters for the page table walk:
    //  grainsize = Log2(Size of Table)         - Size of Table is 4KB, 16KB or 64KB in AArch64
    //  stride = Log2(Address per Level)        - Bits of address consumed at each level
    //  firstblocklevel = First level where a block entry is allowed
    //  ps = Physical Address size as encoded in TCR_EL1.IPS or TCR_ELx/VTCR_EL2.PS
    //  inputsize = Log2(Size of Input Address) - Input Address size in bits
    //  level = Level to start walk from
    // This means that the number of levels after start level = 3-level

    if !secondstage then
        // First stage translation
        inputaddr = ZeroExtend(vaddress);
        el = AArch64.AccessUsesEL(acctype);
        top = AddrTop(inputaddr, (acctype == AccType_IFETCH), el);
        if el == EL3 then
            largegrain = TCR_EL3.TG0 == '01';
            midgrain = TCR_EL3.TG0 == '10';
            inputsize = 64 - UInt(TCR_EL3.T0SZ);
            inputsize_max = if Have52BitVAExt() && largegrain then 52 else 48;
            inputsize_min = 64 - (if !HaveSmallPageTblExt() then 39 else if largegrain then 47 else 48);
            if inputsize < inputsize_min then
                c = ConstrainUnpredictable(Unpredictable_RESTnSZ);
                assert c IN {Constraint_FORCE, Constraint_FAULT};
                if c == Constraint_FORCE then inputsize = inputsize_min;
            ps = TCR_EL3.PS;
            basefound = inputsize >= inputsize_min && inputsize <= inputsize_max && IsZero(inputaddr<top:inputsize>);
            disabled = FALSE;
            baseregister = TTBR0_EL3;
            descaddr.memattrs = WalkAttrDecode(TCR_EL3.SH0, TCR_EL3.ORGN0, TCR_EL3.IRGN0, secondstage);
            reversedescriptors = SCTLR_EL3.EE == '1';
            lookupsecure = TRUE;
            singlepriv = TRUE;
            update_AF = HaveAccessFlagUpdateExt() && TCR_EL3.HA == '1';
            update_AP = HaveDirtyBitModifierExt() && update_AF && TCR_EL3.HD == '1';
            hierattrsdisabled = AArch64.HaveHPDExt() && TCR_EL3.HPD == '1';
        elsif ELIsInHost(el) then
            if inputaddr<top> == '0' then
                largegrain = TCR_EL2.TG0 == '01';
                midgrain = TCR_EL2.TG0 == '10';
                inputsize = 64 - UInt(TCR_EL2.T0SZ);
                inputsize_max = if Have52BitVAExt() && largegrain then 52 else 48;
                inputsize_min = 64 - (if !HaveSmallPageTblExt() then 39 else if largegrain then 47 else 48);
                if inputsize < inputsize_min then
                    c = ConstrainUnpredictable(Unpredictable_RESTnSZ);
                    assert c IN {Constraint_FORCE, Constraint_FAULT};
                    if c == Constraint_FORCE then inputsize = inputsize_min;
                basefound = inputsize >= inputsize_min && inputsize <= inputsize_max && IsZero(inputaddr<top:inputsize>);
                disabled = TCR_EL2.EPD0 == '1' || (PSTATE.EL == EL0 && HaveE0PDExt() && TCR_EL2.E0PD0 == '1');
                baseregister = TTBR0_EL2;
                descaddr.memattrs = WalkAttrDecode(TCR_EL2.SH0, TCR_EL2.ORGN0, TCR_EL2.IRGN0, secondstage);
                hierattrsdisabled = AArch64.HaveHPDExt() && TCR_EL2.HPD0 == '1';
            else
                inputsize = 64 - UInt(TCR_EL2.T1SZ);
                largegrain = TCR_EL2.TG1 == '11';       // TG1 and TG0 encodings differ
                midgrain = TCR_EL2.TG1 == '01';
                inputsize_max = if Have52BitVAExt() && largegrain then 52 else 48;
                inputsize_min = 64 - (if !HaveSmallPageTblExt() then 39 else if largegrain then 47 else 48);
                if inputsize < inputsize_min then
                    c = ConstrainUnpredictable(Unpredictable_RESTnSZ);
                    assert c IN {Constraint_FORCE, Constraint_FAULT};
                    if c == Constraint_FORCE then inputsize = inputsize_min;
                basefound = inputsize >= inputsize_min && inputsize <= inputsize_max && IsOnes(inputaddr<top:inputsize>);
                disabled = TCR_EL2.EPD1 == '1' || (PSTATE.EL == EL0 && HaveE0PDExt() && TCR_EL2.E0PD1 == '1');
                baseregister = TTBR1_EL2;
                descaddr.memattrs = WalkAttrDecode(TCR_EL2.SH1, TCR_EL2.ORGN1, TCR_EL2.IRGN1, secondstage);
                hierattrsdisabled = AArch64.HaveHPDExt() && TCR_EL2.HPD1 == '1';
            ps = TCR_EL2.IPS;
            reversedescriptors = SCTLR_EL2.EE == '1';
            lookupsecure = if IsSecureEL2Enabled() then IsSecure() else FALSE;
            singlepriv = FALSE;
            update_AF = HaveAccessFlagUpdateExt() && TCR_EL2.HA == '1';
            update_AP = HaveDirtyBitModifierExt() && update_AF && TCR_EL2.HD == '1';
        elsif el == EL2 then
            inputsize = 64 - UInt(TCR_EL2.T0SZ);
            largegrain = TCR_EL2.TG0 == '01';
            midgrain = TCR_EL2.TG0 == '10';
            inputsize_max = if Have52BitVAExt() && largegrain then 52 else 48;
            inputsize_min = 64 - (if !HaveSmallPageTblExt() then 39 else if largegrain then 47 else 48);
            if inputsize < inputsize_min then
                c = ConstrainUnpredictable(Unpredictable_RESTnSZ);
                assert c IN {Constraint_FORCE, Constraint_FAULT};
                if c == Constraint_FORCE then inputsize = inputsize_min;
            ps = TCR_EL2.PS;
            basefound = inputsize >= inputsize_min && inputsize <= inputsize_max && IsZero(inputaddr<top:inputsize>);
            disabled = FALSE;
            baseregister = TTBR0_EL2;
            descaddr.memattrs = WalkAttrDecode(TCR_EL2.SH0, TCR_EL2.ORGN0, TCR_EL2.IRGN0, secondstage);
            reversedescriptors = SCTLR_EL2.EE == '1';
            lookupsecure = if IsSecureEL2Enabled() then IsSecure() else FALSE;
            singlepriv = TRUE;
            update_AF = HaveAccessFlagUpdateExt() && TCR_EL2.HA == '1';
            update_AP = HaveDirtyBitModifierExt() && update_AF && TCR_EL2.HD == '1';
            hierattrsdisabled = AArch64.HaveHPDExt() && TCR_EL2.HPD == '1';
        else
            if inputaddr<top> == '0' then
                inputsize = 64 - UInt(TCR_EL1.T0SZ);
                largegrain = TCR_EL1.TG0 == '01';
                midgrain = TCR_EL1.TG0 == '10';
                inputsize_max = if Have52BitVAExt() && largegrain then 52 else 48;
                inputsize_min = 64 - (if !HaveSmallPageTblExt() then 39 else if largegrain then 47 else 48);
                if inputsize < inputsize_min then
                    c = ConstrainUnpredictable(Unpredictable_RESTnSZ);
                    assert c IN {Constraint_FORCE, Constraint_FAULT};
                    if c == Constraint_FORCE then inputsize = inputsize_min;
                basefound = inputsize >= inputsize_min && inputsize <= inputsize_max && IsZero(inputaddr<top:inputsize>);
                disabled = TCR_EL1.EPD0 == '1' || (PSTATE.EL == EL0 && HaveE0PDExt() && TCR_EL1.E0PD0 == '1');
                disabled = disabled || (el == EL0 && acctype == AccType_NONFAULT && TCR_EL1.NFD0 == '1');
                baseregister = TTBR0_EL1;
                descaddr.memattrs = WalkAttrDecode(TCR_EL1.SH0, TCR_EL1.ORGN0, TCR_EL1.IRGN0, secondstage);
                hierattrsdisabled = AArch64.HaveHPDExt() && TCR_EL1.HPD0 == '1';
            else
                inputsize = 64 - UInt(TCR_EL1.T1SZ);
                largegrain = TCR_EL1.TG1 == '11';       // TG1 and TG0 encodings differ
                midgrain = TCR_EL1.TG1 == '01';
                inputsize_max = if Have52BitVAExt() && largegrain then 52 else 48;
                inputsize_min = 64 - (if !HaveSmallPageTblExt() then 39 else if largegrain then 47 else 48);
                if inputsize < inputsize_min then
                    c = ConstrainUnpredictable(Unpredictable_RESTnSZ);
                    assert c IN {Constraint_FORCE, Constraint_FAULT};
                    if c == Constraint_FORCE then inputsize = inputsize_min;
                basefound = inputsize >= inputsize_min && inputsize <= inputsize_max && IsOnes(inputaddr<top:inputsize>);
                disabled = TCR_EL1.EPD1 == '1' || (PSTATE.EL == EL0 && HaveE0PDExt() && TCR_EL1.E0PD1 == '1');
                disabled = disabled || (el == EL0 && acctype == AccType_NONFAULT && TCR_EL1.NFD1 == '1');
                baseregister = TTBR1_EL1;
                descaddr.memattrs = WalkAttrDecode(TCR_EL1.SH1, TCR_EL1.ORGN1, TCR_EL1.IRGN1, secondstage);
                hierattrsdisabled = AArch64.HaveHPDExt() && TCR_EL1.HPD1 == '1';
            ps = TCR_EL1.IPS;
            reversedescriptors = SCTLR_EL1.EE == '1';
            lookupsecure = IsSecure();
            singlepriv = FALSE;
            update_AF = HaveAccessFlagUpdateExt() && TCR_EL1.HA == '1';
            update_AP = HaveDirtyBitModifierExt() && update_AF && TCR_EL1.HD == '1';

        if largegrain then
            grainsize = 16;                                        // Log2(64KB page size)
            firstblocklevel = (if Have52BitPAExt() then 1 else 2); // Largest block is 4TB (2^42 bytes) for 52 bit PA
                                                                   // and 512MB (2^29 bytes) otherwise
        elsif midgrain then
            grainsize = 14;                                        // Log2(16KB page size)
            firstblocklevel = 2;                                   // Largest block is 32MB (2^25 bytes)
        else                                                       // Small grain
            grainsize = 12;                                        // Log2(4KB page size)
            firstblocklevel = 1;                                   // Largest block is 1GB (2^30 bytes)
        stride = grainsize - 3;                                    // Log2(page size / 8 bytes)
        // The starting level is the number of strides needed to consume the input address
        level = 4 - RoundUp(Real(inputsize - grainsize) / Real(stride));
    else
        // Second stage translation
        inputaddr = ZeroExtend(ipaddress);
        if IsSecureBelowEL3() then
            // Second stage for Secure translation regime
            if s1_nonsecure then
                // Non-secure IPA space
                t0size = VTCR_EL2.T0SZ;
                tg0 = VTCR_EL2.TG0;
                nswalk = VTCR_EL2.NSW;
            else
                // Secure IPA space
                t0size = VSTCR_EL2.T0SZ;
                tg0 = VSTCR_EL2.TG0;
                nswalk = VSTCR_EL2.SW;
            // Stage 2 translation accesses the Non-secure PA space or the Secure PA space
            if nswalk == '1' then
                // When walk is Non-secure, access must be to the Non-secure PA space
                nsaccess = '1';
            elsif !s1_nonsecure then
                // When walk is Secure and in the Secure IPA space,
                // access is specified by VSTCR_EL2.SA
                nsaccess = VSTCR_EL2.SA;
            elsif VSTCR_EL2.SW == '1' || VSTCR_EL2.SA == '1' then
                // When walk is Secure and in the Non-secure IPA space,
                // access is Non-secure when VSTCR_EL2.SA specifies the Non-secure PA space
                nsaccess = '1';
            else
                // When walk is Secure and in the Non-secure IPA space,
                // if VSTCR_EL2.SA specifies the Secure PA space, access is specified by VTCR_EL2.NSA
                nsaccess = VTCR_EL2.NSA;
        else
            // Second stage for Non-secure translation regime
            t0size = VTCR_EL2.T0SZ;
            tg0 = VTCR_EL2.TG0;
            nswalk = '1';
            nsaccess = '1';

        inputsize = 64 - UInt(t0size);
        largegrain = tg0 == '01';
        midgrain = tg0 == '10';
        inputsize_max = if Have52BitPAExt() && PAMax() == 52 && largegrain then 52 else 48;
        inputsize_min = 64 - (if !HaveSmallPageTblExt() then 39 else if largegrain then 47 else 48);
        if inputsize < inputsize_min then
            c = ConstrainUnpredictable(Unpredictable_RESTnSZ);
            assert c IN {Constraint_FORCE, Constraint_FAULT};
            if c == Constraint_FORCE then inputsize = inputsize_min;
        ps = VTCR_EL2.PS;
        basefound = inputsize >= inputsize_min && inputsize <= inputsize_max && IsZero(inputaddr<63:inputsize>);
        disabled = FALSE;
        descaddr.memattrs = WalkAttrDecode(VTCR_EL2.SH0, VTCR_EL2.ORGN0, VTCR_EL2.IRGN0, secondstage);
        reversedescriptors = SCTLR_EL2.EE == '1';
        singlepriv = TRUE;
        update_AF = HaveAccessFlagUpdateExt() && VTCR_EL2.HA == '1';
        update_AP = HaveDirtyBitModifierExt() && update_AF && VTCR_EL2.HD == '1';

        if IsSecureEL2Enabled() then
            lookupsecure = !s1_nonsecure;
        else
            lookupsecure = FALSE;
        if lookupsecure then
            baseregister = VSTTBR_EL2;
            startlevel = UInt(VSTCR_EL2.SL0);
        else
            baseregister = VTTBR_EL2;
            startlevel = UInt(VTCR_EL2.SL0);
        if largegrain then
            grainsize = 16;                                        // Log2(64KB page size)
            level = 3 - startlevel;
            firstblocklevel = (if Have52BitPAExt() then 1 else 2); // Largest block is 4TB (2^42 bytes) for 52 bit PA
                                                                   // and 512MB (2^29 bytes) otherwise
        elsif midgrain then
            grainsize = 14;                                        // Log2(16KB page size)
            level = 3 - startlevel;
            firstblocklevel = 2;                                   // Largest block is 32MB (2^25 bytes)
        else                                                       // Small grain
            grainsize = 12;                                        // Log2(4KB page size)
            if HaveSmallPageTblExt() && startlevel == 3 then
                level = startlevel;     // Startlevel 3 (VTCR_EL2.SL0 or VSTCR_EL2.SL0 == 0b11) for 4KB granule
            else
                level = 2 - startlevel;
            firstblocklevel = 1;                                   // Largest block is 1GB (2^30 bytes)
        stride = grainsize - 3;                                    // Log2(page size / 8 bytes)

        // Limits on IPA controls based on implemented PA size. Level 0 is only
        // supported by small grain translations
        if largegrain then                 // 64KB pages
            // Level 1 only supported if implemented PA size is greater than 2^42 bytes
            if level == 0 || (level == 1 && PAMax() <= 42) then basefound = FALSE;
        elsif midgrain then                // 16KB pages
            // Level 1 only supported if implemented PA size is greater than 2^40 bytes
            if level == 0 || (level == 1 && PAMax() <= 40) then basefound = FALSE;
        else                               // Small grain, 4KB pages
            // Level 0 only supported if implemented PA size is greater than 2^42 bytes
            if level < 0 || (level == 0 && PAMax() <= 42) then basefound = FALSE;

        // If the inputsize exceeds the PAMax value, the behavior is CONSTRAINED UNPREDICTABLE
        inputsizecheck = inputsize;
        if inputsize > PAMax() && (!ELUsingAArch32(EL1) || inputsize > 40) then
            case ConstrainUnpredictable(Unpredictable_LARGEIPA) of
                when Constraint_FORCE
                    // Restrict the inputsize to the PAMax value
                    inputsize = PAMax();
                    inputsizecheck = PAMax();
                when Constraint_FORCENOSLCHECK
                    // As FORCE, except use the configured inputsize in the size checks below
                    inputsize = PAMax();
                when Constraint_FAULT
                    // Generate a translation fault
                    basefound = FALSE;
                otherwise
                    Unreachable();

        // Number of entries in the starting level table =
        //     (Size of Input Address)/((Address per level)^(Num levels remaining)*(Size of Table))
        startsizecheck = inputsizecheck - ((3 - level)*stride + grainsize); // Log2(Num of entries)

        // Check for starting level table with fewer than 2 entries or longer than 16 pages.
        // Lower bound check is:  startsizecheck < Log2(2 entries)
        // Upper bound check is:  startsizecheck > Log2(pagesize/8*16)
        if startsizecheck < 1 || startsizecheck > stride + 4 then basefound = FALSE;

    if !basefound || disabled then
        level = 0;                         // AArch32 reports this as a level 1 fault
        result.addrdesc.fault = AArch64.TranslationFault(ipaddress, s1_nonsecure, level, acctype,
                                                         iswrite, secondstage, s2fs1walk);
        return result;

    case ps of
        when '000'  outputsize = 32;
        when '001'  outputsize = 36;
        when '010'  outputsize = 40;
        when '011'  outputsize = 42;
        when '100'  outputsize = 44;
        when '101'  outputsize = 48;
        when '110'  outputsize = (if Have52BitPAExt() && largegrain then 52 else 48);
        otherwise   outputsize = integer IMPLEMENTATION_DEFINED "Reserved Intermediate Physical Address size value";

    if outputsize > PAMax() then outputsize = PAMax();

    if outputsize < 48 && !IsZero(baseregister<47:outputsize>) then
        level = 0;
        result.addrdesc.fault = AArch64.AddressSizeFault(ipaddress, s1_nonsecure, level, acctype,
                                                         iswrite, secondstage, s2fs1walk);
        return result;

    // Bottom bound of the Base address is:
    //     Log2(8 bytes per entry)+Log2(Number of entries in starting level table)
    // Number of entries in starting level table =
    //     (Size of Input Address)/((Address per level)^(Num levels remaining)*(Size of Table))
    baselowerbound = 3 + inputsize - ((3-level)*stride + grainsize);    // Log2(Num of entries*8)
    if outputsize == 52 then
        z = (if baselowerbound < 6 then 6 else baselowerbound);
        baseaddress = baseregister<5:2>:baseregister<47:z>:Zeros(z);
    else
        baseaddress = ZeroExtend(baseregister<47:baselowerbound>:Zeros(baselowerbound));

    ns_table = if lookupsecure then '0' else '1';
    ap_table = '00';
    xn_table = '0';
    pxn_table = '0';

    addrselecttop = inputsize - 1;

    apply_nvnv1_effect = HaveNVExt() && EL2Enabled() && HCR_EL2.<NV,NV1> == '11' &&
                         S1TranslationRegime() == EL1 && !secondstage;
    repeat
        addrselectbottom = (3-level)*stride + grainsize;

        bits(52) index = ZeroExtend(inputaddr<addrselecttop:addrselectbottom>:'000');
        descaddr.paddress.address = baseaddress OR index;
        descaddr.paddress.NS = if secondstage then nswalk else ns_table;

        // If there are two stages of translation, then the first stage table walk addresses
        // are themselves subject to translation
        if secondstage || !HasS2Translation() || (HaveNV2Ext() && acctype == AccType_NV2REGISTER) then
            descaddr2 = descaddr;
        else
            hwupdatewalk = FALSE;
            descaddr2 = AArch64.SecondStageWalk(descaddr, vaddress, acctype, iswrite, 8, hwupdatewalk);
            // Check for a fault on the stage 2 walk
            if IsFault(descaddr2) then
                result.addrdesc.fault = descaddr2.fault;
                return result;

        // Update virtual address for abort functions
        descaddr2.vaddress = ZeroExtend(vaddress);

        accdesc = CreateAccessDescriptorPTW(acctype, secondstage, s2fs1walk, level);
        desc = _Mem[descaddr2, 8, accdesc];

        if reversedescriptors then desc = BigEndianReverse(desc);

        if desc<0> == '0' || (desc<1:0> == '01' && (level == 3 ||
            (HaveBlockBBM() && IsBlockDescriptorNTBitValid() && desc<16> == '1'))) then
            // Fault (00), Reserved (10), Block (01) at level 3, or Block (01) with nT bit set.
            result.addrdesc.fault = AArch64.TranslationFault(ipaddress, s1_nonsecure, level, acctype,
                                                             iswrite, secondstage, s2fs1walk);
            return result;

        // Valid Block, Page, or Table entry
        if desc<1:0> == '01' || level == 3 then                 // Block (01) or Page (11)
            blocktranslate = TRUE;
        else                                                    // Table (11)
            if (outputsize < 52 && largegrain && !IsZero(desc<15:12>)) ||
               (outputsize < 48 && !IsZero(desc<47:outputsize>)) then
                result.addrdesc.fault = AArch64.AddressSizeFault(ipaddress, s1_nonsecure, level,
                                                                 acctype, iswrite, secondstage,
                                                                 s2fs1walk);
                return result;

            if outputsize == 52 then
                baseaddress = desc<15:12>:desc<47:grainsize>:Zeros(grainsize);
            else
                baseaddress = ZeroExtend(desc<47:grainsize>:Zeros(grainsize));
            if !secondstage then
                // Unpack the upper and lower table attributes
                ns_table = ns_table OR desc<63>;
            if !secondstage && !hierattrsdisabled then
                ap_table<1> = ap_table<1> OR desc<62>;          // read-only
                if apply_nvnv1_effect then
                    pxn_table = pxn_table OR desc<60>;
                else
                    xn_table = xn_table OR desc<60>;
                // pxn_table and ap_table[0] apply in EL1&0 or EL2&0 translation regimes
                if !singlepriv then
                    if !apply_nvnv1_effect then
                        pxn_table = pxn_table OR desc<59>;
                    ap_table<0> = ap_table<0> OR desc<61>;      // privileged

            level = level + 1;
            addrselecttop = addrselectbottom - 1;
            blocktranslate = FALSE;
    until blocktranslate;

    // Check block size is supported at this level
    if level < firstblocklevel then
        result.addrdesc.fault = AArch64.TranslationFault(ipaddress, s1_nonsecure, level, acctype,
                                                         iswrite, secondstage, s2fs1walk);
        return result;

    // Check for misprogramming of the contiguous bit
    if largegrain then
        contiguousbitcheck = level == 2 && inputsize < 34;
    elsif midgrain then
        contiguousbitcheck = level == 2 && inputsize < 30;
    else
        contiguousbitcheck = level == 1 && inputsize < 34;

    if contiguousbitcheck && desc<52> == '1' then
        if boolean IMPLEMENTATION_DEFINED "Translation fault on misprogrammed contiguous bit" then
            result.addrdesc.fault = AArch64.TranslationFault(ipaddress, s1_nonsecure, level, acctype,
                                                             iswrite, secondstage, s2fs1walk);
            return result;

    // Unpack the descriptor into address and upper and lower block attributes
    if largegrain then
        outputaddress = desc<15:12>:desc<47:addrselectbottom>:inputaddr<addrselectbottom-1:0>;
    else
        outputaddress = ZeroExtend(desc<47:addrselectbottom>:inputaddr<addrselectbottom-1:0>);

    // When 52-bit PA is supported, for 64 Kbyte translation granule,
    // block size might be larger than the supported output address size
    if outputsize < 52 && !IsZero(outputaddress<51:outputsize>) then
        result.addrdesc.fault = AArch64.AddressSizeFault(ipaddress, s1_nonsecure, level, acctype,
                                                         iswrite, secondstage, s2fs1walk);
        return result;

    // Check Access Flag
    if desc<10> == '0' then
        if !update_AF then
            result.addrdesc.fault = AArch64.AccessFlagFault(ipaddress, s1_nonsecure, level, acctype,
                                                            iswrite, secondstage, s2fs1walk);
            return result;
        else
            result.descupdate.AF = TRUE;

    if update_AP && desc<51> == '1' then
        // If hw update of access permission field is configured consider AP[2] as '0' / S2AP[2] as '1'
        if !secondstage && desc<7> == '1' then
            desc<7> = '0';
            result.descupdate.AP = TRUE;
        elsif secondstage && desc<7> == '0' then
            desc<7> = '1';
            result.descupdate.AP = TRUE;

    // Required descriptor if AF or AP[2]/S2AP[2] needs update
    result.descupdate.descaddr = descaddr;

    if apply_nvnv1_effect then
        pxn = desc<54>;        // Bit[54] of the block/page descriptor holds PXN instead of UXN
        xn = '0';              // XN is '0'
        ap = desc<7>:'01';     // Bit[6] of the block/page descriptor is treated as '0' regardless of value programmed
    else
        xn = desc<54>;         // Bit[54] of the block/page descriptor holds UXN
        pxn = desc<53>;        // Bit[53] of the block/page descriptor holds PXN
        ap = desc<7:6>:'1';    // Bits[7:6] of the block/page descriptor hold AP[2:1]
    contiguousbit = desc<52>;
    nG = desc<11>;
    sh = desc<9:8>;
    memattr = desc<5:2>;       // AttrIndx and NS bit in stage 1

    result.domain = bits(4) UNKNOWN;                 // Domains not used
    result.level = level;
    result.blocksize = 2^((3-level)*stride + grainsize);

    // Stage 1 translation regimes also inherit attributes from the tables
    if !secondstage then
        result.perms.xn = xn OR xn_table;
        result.perms.ap<2> = ap<2> OR ap_table<1>;   // Force read-only
        // PXN, nG and AP[1] apply in EL1&0 or EL2&0 stage 1 translation regimes
        if !singlepriv then
            result.perms.ap<1> = ap<1> AND NOT(ap_table<0>);    // Force privileged only
            result.perms.pxn = pxn OR pxn_table;
            // Pages from Non-secure tables are marked non-global in Secure EL1&0
            if IsSecure() then
                result.nG = nG OR ns_table;
            else
                result.nG = nG;
        else
            result.perms.ap<1> = '1';
            result.perms.pxn = '0';
            result.nG = '0';
        result.GP = desc<50>;                        // Stage 1 block or pages might be guarded
        result.perms.ap<0> = '1';
        result.addrdesc.memattrs = AArch64.S1AttrDecode(sh, memattr<2:0>, acctype);
        result.addrdesc.paddress.NS = memattr<3> OR ns_table;
    else
        result.perms.ap<2:1> = ap<2:1>;
        result.perms.ap<0> = '1';
        result.perms.xn = xn;
        if HaveExtendedExecuteNeverExt() then result.perms.xxn = desc<53>;
        result.perms.pxn = '0';
        result.nG = '0';
        if s2fs1walk then
            result.addrdesc.memattrs = S2AttrDecode(sh, memattr, AccType_PTW);
        else
            result.addrdesc.memattrs = S2AttrDecode(sh, memattr, acctype);
        result.addrdesc.paddress.NS = nsaccess;

    result.addrdesc.paddress.address = outputaddress;
    result.addrdesc.fault = AArch64.NoFault();
    result.contiguous = contiguousbit == '1';
    if HaveCommonNotPrivateTransExt() then result.CnP = baseregister<0>;

    return result;
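The walk geometry is pure arithmetic: the grain size fixes the stride, the start level is however many strides are needed to consume the input address, and each level's descriptor index is a stride-wide slice of the address. A small numeric Python model of those formulas (function names are inventions of this sketch):

import math

def walk_geometry(grainsize, inputsize):
    stride = grainsize - 3                       # Log2(page size / 8-byte entries)
    level = 4 - math.ceil((inputsize - grainsize) / stride)
    return stride, level

def descriptor_index(va, level, stride, grainsize):
    low = (3 - level) * stride + grainsize       # the pseudocode's addrselectbottom
    return (va >> low) & ((1 << stride) - 1)

stride, start = walk_geometry(12, 48)            # 4KB granule, 48-bit VA
print(stride, start)                             # 9 0, i.e. a four-level walk from level 0
print(descriptor_index(0x0000_7FFF_FFFF_F000, 3, stride, 12))   # level 3 index 511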

Library pseudocode for shared/debug/ClearStickyErrors/ClearStickyErrors

// ClearStickyErrors()
// ===================

ClearStickyErrors()
    EDSCR.TXU = '0';            // Clear TX underrun flag
    EDSCR.RXO = '0';            // Clear RX overrun flag

    if Halted() then            // in Debug state
        EDSCR.ITO = '0';        // Clear ITR overrun flag

    // If halted and the ITR is not empty then it is UNPREDICTABLE whether the EDSCR.ERR is cleared.
    // The UNPREDICTABLE behavior also affects the instructions in flight, but this is not described
    // in the pseudocode.
    if Halted() && EDSCR.ITE == '0' && ConstrainUnpredictableBool(Unpredictable_CLEARERRITEZERO) then
        return;
    EDSCR.ERR = '0';            // Clear cumulative error flag

    return;
// Lower bound check is: startsizecheck < Log2(2 entries) // Upper bound check is: startsizecheck > Log2(pagesize/8*16) if startsizecheck < 1 || startsizecheck > stride + 4 then basefound = FALSE; if !basefound || disabled then level = 0; // AArch32 reports this as a level 1 fault result.addrdesc.fault = AArch64.TranslationFault(ipaddress, s1_nonsecure, level, acctype, iswrite, secondstage, s2fs1walk); return result; case ps of when '000' outputsize = 32; when '001' outputsize = 36; when '010' outputsize = 40; when '011' outputsize = 42; when '100' outputsize = 44; when '101' outputsize = 48; when '110' outputsize = (if Have52BitPAExt() && largegrain then 52 else 48); otherwise outputsize = integer IMPLEMENTATION_DEFINED "Reserved Intermediate Physical Address size value"; if outputsize > PAMax() then outputsize = PAMax(); if outputsize < 48 && !IsZero(baseregister<47:outputsize>) then level = 0; result.addrdesc.fault = AArch64.AddressSizeFault(ipaddress,s1_nonsecure, level, acctype, iswrite, secondstage, s2fs1walk); return result; // Bottom bound of the Base address is: // Log2(8 bytes per entry)+Log2(Number of entries in starting level table) // Number of entries in starting level table = // (Size of Input Address)/((Address per level)^(Num levels remaining)*(Size of Table)) baselowerbound = 3 + inputsize - ((3-level)*stride + grainsize); // Log2(Num of entries*8) if outputsize == 52 then z = (if baselowerbound < 6 then 6 else baselowerbound); baseaddress = baseregister<5:2>:baseregister<47:z>:Zeros(z); else baseaddress = ZeroExtend(baseregister<47:baselowerbound>:Zeros(baselowerbound)); ns_table = if lookupsecure then '0' else '1'; ap_table = '00'; xn_table = '0'; pxn_table = '0'; addrselecttop = inputsize - 1; apply_nvnv1_effect = HaveNVExt() && EL2Enabled() && HCR_EL2.<NV,NV1> == '11' && S1TranslationRegime() == EL1 && !secondstage; repeat addrselectbottom = (3-level)*stride + grainsize; bits(52) index = ZeroExtend(inputaddr<addrselecttop:addrselectbottom>:'000'); descaddr.paddress.address = baseaddress OR index; descaddr.paddress.NS = ns_table; // If there are two stages of translation, then the first stage table walk addresses // are themselves subject to translation if secondstage || !HasS2Translation() || (HaveNV2Ext() && acctype == AccType_NV2REGISTER) then descaddr2 = descaddr; else hwupdatewalk = FALSE; descaddr2 = AArch64.SecondStageWalk(descaddr, vaddress, acctype, iswrite, 8, hwupdatewalk); // Check for a fault on the stage 2 walk if IsFault(descaddr2) then result.addrdesc.fault = descaddr2.fault; return result; // Update virtual address for abort functions descaddr2.vaddress = ZeroExtend(vaddress); accdesc = CreateAccessDescriptorPTW(acctype, secondstage, s2fs1walk, level); desc = _Mem[descaddr2, 8, accdesc]; if reversedescriptors then desc = BigEndianReverse(desc); if desc<0> == '0' || (desc<1:0> == '01' && (level == 3 || (HaveBlockBBM() && IsBlockDescriptorNTBitValid() && desc<16> == '1'))) then // Fault (00), Reserved (10), Block (01) at level 3, or Block(01) with nT bit set. 
result.addrdesc.fault = AArch64.TranslationFault(ipaddress, s1_nonsecure, level, acctype, iswrite, secondstage, s2fs1walk); return result; // Valid Block, Page, or Table entry if desc<1:0> == '01' || level == 3 then // Block (01) or Page (11) blocktranslate = TRUE; else // Table (11) if (outputsize < 52 && largegrain && !IsZero(desc<15:12>)) || (outputsize < 48 && !IsZero(desc<47:outputsize>)) then result.addrdesc.fault = AArch64.AddressSizeFault(ipaddress,s1_nonsecure, level, acctype, iswrite, secondstage, s2fs1walk); return result; if outputsize == 52 then baseaddress = desc<15:12>:desc<47:grainsize>:Zeros(grainsize); else baseaddress = ZeroExtend(desc<47:grainsize>:Zeros(grainsize)); if !secondstage then // Unpack the upper and lower table attributes ns_table = ns_table OR desc<63>; if !secondstage && !hierattrsdisabled then ap_table<1> = ap_table<1> OR desc<62>; // read-only if apply_nvnv1_effect then pxn_table = pxn_table OR desc<60>; else xn_table = xn_table OR desc<60>; // pxn_table and ap_table[0] apply in EL1&0 or EL2&0 translation regimes if !singlepriv then if !apply_nvnv1_effect then pxn_table = pxn_table OR desc<59>; ap_table<0> = ap_table<0> OR desc<61>; // privileged level = level + 1; addrselecttop = addrselectbottom - 1; blocktranslate = FALSE; until blocktranslate; // Check block size is supported at this level if level < firstblocklevel then result.addrdesc.fault = AArch64.TranslationFault(ipaddress, s1_nonsecure, level, acctype, iswrite, secondstage, s2fs1walk); return result; // Check for misprogramming of the contiguous bit if largegrain then contiguousbitcheck = level == 2 && inputsize < 34; elsif midgrain then contiguousbitcheck = level == 2 && inputsize < 30; else contiguousbitcheck = level == 1 && inputsize < 34; if contiguousbitcheck && desc<52> == '1' then if boolean IMPLEMENTATION_DEFINED "Translation fault on misprogrammed contiguous bit" then result.addrdesc.fault = AArch64.TranslationFault(ipaddress, s1_nonsecure, level, acctype, iswrite, secondstage, s2fs1walk); return result; // Check the output address is inside the supported range if (outputsize < 52 && largegrain && !IsZero(desc<15:12>)) || (outputsize < 48 && !IsZero(desc<47:outputsize>)) then result.addrdesc.fault = AArch64.AddressSizeFault(ipaddress,s1_nonsecure, level, acctype, iswrite, secondstage, s2fs1walk); return result; // Unpack the descriptor into address and upper and lower block attributes if outputsize == 52 then outputaddress = desc<15:12>:desc<47:addrselectbottom>:inputaddr<addrselectbottom-1:0>; else outputaddress = ZeroExtend(desc<47:addrselectbottom>:inputaddr<addrselectbottom-1:0>); // Check Access Flag if desc<10> == '0' then if !update_AF then result.addrdesc.fault = AArch64.AccessFlagFault(ipaddress,s1_nonsecure, level, acctype, iswrite, secondstage, s2fs1walk); return result; else result.descupdate.AF = TRUE; if update_AP && desc<51> == '1' then // If hw update of access permission field is configured consider AP[2] as '0' / S2AP[2] as '1' if !secondstage && desc<7> == '1' then desc<7> = '0'; result.descupdate.AP = TRUE; elsif secondstage && desc<7> == '0' then desc<7> = '1'; result.descupdate.AP = TRUE; // Required descriptor if AF or AP[2]/S2AP[2] needs update result.descupdate.descaddr = descaddr; if apply_nvnv1_effect then pxn = desc<54>; // Bit[54] of the block/page descriptor holds PXN instead of UXN xn = '0'; // XN is '0' ap = desc<7>:'01'; // Bit[6] of the block/page descriptor is treated as '0' regardless of value programmed else xn = desc<54>; // Bit[54] of the 
block/page descriptor holds UXN pxn = desc<53>; // Bit[53] of the block/page descriptor holds PXN ap = desc<7:6>:'1'; // Bits[7:6] of the block/page descriptor hold AP[2:1] contiguousbit = desc<52>; nG = desc<11>; sh = desc<9:8>; memattr = desc<5:2>; // AttrIndx and NS bit in stage 1 result.domain = bits(4) UNKNOWN; // Domains not used result.level = level; result.blocksize = 2^((3-level)*stride + grainsize); // Stage 1 translation regimes also inherit attributes from the tables if !secondstage then result.perms.xn = xn OR xn_table; result.perms.ap<2> = ap<2> OR ap_table<1>; // Force read-only // PXN, nG and AP[1] apply in EL1&0 or EL2&0 stage 1 translation regimes if !singlepriv then result.perms.ap<1> = ap<1> AND NOT(ap_table<0>); // Force privileged only result.perms.pxn = pxn OR pxn_table; // Pages from Non-secure tables are marked non-global in Secure EL1&0 if IsSecure() then result.nG = nG OR ns_table; else result.nG = nG; else result.perms.ap<1> = '1'; result.perms.pxn = '0'; result.nG = '0'; result.GP = desc<50>; // Stage 1 block or pages might be guarded result.perms.ap<0> = '1'; result.addrdesc.memattrs = AArch64.S1AttrDecode(sh, memattr<2:0>, acctype); result.addrdesc.paddress.NS = memattr<3> OR ns_table; else result.perms.ap<2:1> = ap<2:1>; result.perms.ap<0> = '1'; result.perms.xn = xn; if HaveExtendedExecuteNeverExt() then result.perms.xxn = desc<53>; result.perms.pxn = '0'; result.nG = '0'; if s2fs1walk then result.addrdesc.memattrs = S2AttrDecode(sh, memattr, AccType_PTW); else result.addrdesc.memattrs = S2AttrDecode(sh, memattr, acctype); result.addrdesc.paddress.NS = nsaccess; result.addrdesc.paddress.address = outputaddress; result.addrdesc.fault = AArch64.NoFault(); result.contiguous = contiguousbit == '1'; if HaveCommonNotPrivateTransExtUnpredictable_CLEARERRITEZEROIsSecureEL2Enabled) then return; EDSCR.ERR = '0'; // Clear cumulative error flag () then result.CnP = baseregister<0>; return; return result;
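
The start-level arithmetic above is easy to get wrong when checking TCR programming by hand. The following is a minimal Python sketch (not architecture pseudocode; the helper name is invented) of how grainsize, stride and the stage 1 starting level fall out of the configured input size:

    import math

    def walk_parameters(inputsize, grain_kb):
        """Model of the stage 1 start-level computation in AArch64.TranslationTableWalk."""
        grainsize = int(math.log2(grain_kb * 1024))  # Log2(page size): 12, 14 or 16
        stride = grainsize - 3                       # Log2(page size / 8 bytes)
        # level = 4 - RoundUp((inputsize - grainsize) / stride)
        level = 4 - math.ceil((inputsize - grainsize) / stride)
        # Bytes mapped by one entry at the starting level
        blocksize = 2 ** ((3 - level) * stride + grainsize)
        return grainsize, stride, level, blocksize

    assert walk_parameters(48, 4) == (12, 9, 0, 1 << 39)    # 48-bit VA, 4KB granule: start at level 0
    assert walk_parameters(39, 4) == (12, 9, 1, 1 << 30)    # 39-bit VA (T0SZ=25): start at level 1
    assert walk_parameters(42, 64) == (16, 13, 2, 1 << 29)  # 42-bit VA, 64KB granule: start at level 2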

Library pseudocode for shared/debug/ClearStickyErrors/ClearStickyErrors

// ClearStickyErrors()
// ===================

ClearStickyErrors()
    EDSCR.TXU = '0';            // Clear TX underrun flag
    EDSCR.RXO = '0';            // Clear RX overrun flag

    if Halted() then            // in Debug state
        EDSCR.ITO = '0';        // Clear ITR overrun flag

    // If halted and the ITR is not empty then it is UNPREDICTABLE whether the EDSCR.ERR is cleared.
    // The UNPREDICTABLE behavior also affects the instructions in flight, but this is not described
    // in the pseudocode.
    if Halted() && EDSCR.ITE == '0' && ConstrainUnpredictableBool(Unpredictable_CLEARERRITEZERO) then
        return;
    EDSCR.ERR = '0';            // Clear cumulative error flag

    return;

Library pseudocode for shared/debug/DebugTarget/DebugTarget

// DebugTarget()
// =============
// Returns the debug exception target Exception level

bits(2) DebugTarget()
    secure = IsSecure();
    return DebugTargetFrom(secure);

Library pseudocode for shared/debug/DebugTarget/DebugTargetFrom

// DebugTargetFrom()
// =================

bits(2) DebugTargetFrom(boolean secure)
    if HaveEL(EL2) && !secure then
        if ELUsingAArch32(EL2) then
            route_to_el2 = (HDCR.TDE == '1' || HCR.TGE == '1');
        else
            route_to_el2 = (MDCR_EL2.TDE == '1' || HCR_EL2.TGE == '1');
    else
        route_to_el2 = FALSE;

    if route_to_el2 then
        target = EL2;
    elsif HaveEL(EL3) && HighestELUsingAArch32() && secure then
        target = EL3;
    else
        target = EL1;

    return target;
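
As a quick illustration of the routing priority above, here is a minimal Python sketch (the helper name and parameters are invented for illustration; it is not part of the architecture):

    def debug_target_from(secure, have_el2, have_el3, highest_el_aarch32, tde, tge):
        """Model of DebugTargetFrom(): HDCR.TDE/MDCR_EL2.TDE or HCR.TGE/HCR_EL2.TGE
        route Non-secure debug exceptions to EL2; AArch32 Secure ones go to EL3 (Monitor)."""
        route_to_el2 = have_el2 and not secure and (tde or tge)
        if route_to_el2:
            return "EL2"
        if have_el3 and highest_el_aarch32 and secure:
            return "EL3"
        return "EL1"

    assert debug_target_from(secure=False, have_el2=True, have_el3=True,
                             highest_el_aarch32=False, tde=True, tge=False) == "EL2"
    assert debug_target_from(secure=True, have_el2=True, have_el3=True,
                             highest_el_aarch32=True, tde=False, tge=False) == "EL3"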

Library pseudocode for shared/debug/DoubleLockStatus/DoubleLockStatus

// DoubleLockStatus()
// ==================
// Returns the state of the OS Double Lock.
//    FALSE if OSDLR_EL1.DLK == 0 or DBGPRCR_EL1.CORENPDRQ == 1 or the PE is in Debug state.
//    TRUE if OSDLR_EL1.DLK == 1 and DBGPRCR_EL1.CORENPDRQ == 0 and the PE is in Non-debug state.

boolean DoubleLockStatus()
    if !HaveDoubleLock() then
        return FALSE;
    elsif ELUsingAArch32(EL1) then
        return DBGOSDLR.DLK == '1' && DBGPRCR.CORENPDRQ == '0' && !Halted();
    else
        return OSDLR_EL1.DLK == '1' && DBGPRCR_EL1.CORENPDRQ == '0' && !Halted();

Library pseudocode for shared/debug/authentication/AllowExternalDebugAccess

// AllowExternalDebugAccess()
// ==========================
// Returns TRUE if the External Debugger access is allowed.

boolean AllowExternalDebugAccess(boolean access_is_secure)
    // The access may also be subject to OS lock, power-down, etc.
    if HaveSecureExtDebugView() || ExternalDebugEnabled() then
        if HaveSecureExtDebugView() then
            allow_secure = access_is_secure;
        else
            allow_secure = ExternalSecureDebugEnabled();
        if allow_secure then
            return TRUE;
        elsif HaveEL(EL3) then
            return (if ELUsingAArch32(EL3) then SDCR.EDAD else MDCR_EL3.EDAD) == '0';
        else
            return !IsSecure();
    else
        return FALSE;

Library pseudocode for shared/debug/authentication/AllowExternalPMUAccess

// AllowExternalPMUAccess()
// ========================
// Returns TRUE if the External Debugger access is allowed.

boolean AllowExternalPMUAccess(boolean access_is_secure)
    // The access may also be subject to OS lock, power-down, etc.
    if HaveSecureExtDebugView() || ExternalNoninvasiveDebugEnabled() then
        if HaveSecureExtDebugView() then
            allow_secure = access_is_secure;
        else
            allow_secure = ExternalSecureNoninvasiveDebugEnabled();
        if allow_secure then
            return TRUE;
        elsif HaveEL(EL3) then
            return (if ELUsingAArch32(EL3) then SDCR.EPMAD else MDCR_EL3.EPMAD) == '0';
        else
            return !IsSecure();
    else
        return FALSE;

Library pseudocode for shared/debug/authentication/Debug_authentication

signal DBGEN;
signal NIDEN;
signal SPIDEN;
signal SPNIDEN;

Library pseudocode for shared/debug/authentication/ExternalDebugEnabled

// ExternalDebugEnabled()
// ======================

boolean ExternalDebugEnabled()
    // The definition of this function is IMPLEMENTATION DEFINED.
    // In the recommended interface, this function returns the state of the DBGEN signal.
    return DBGEN == HIGH;

Library pseudocode for shared/debug/authentication/ExternalHypInvasiveDebugEnabled

// ExternalHypInvasiveDebugEnabled()
// =================================

boolean ExternalHypInvasiveDebugEnabled()
    // The definition of this function is IMPLEMENTATION DEFINED.
    // In the recommended interface, ExternalHypInvasiveDebugEnabled returns the state of the
    // (DBGEN AND HIDEN) signal.
    return ExternalDebugEnabled() && HIDEN == HIGH;

Library pseudocode for shared/debug/authentication/ExternalHypNoninvasiveDebugEnabled

// ExternalHypNoninvasiveDebugEnabled()
// ====================================

boolean ExternalHypNoninvasiveDebugEnabled()
    // The definition of this function is IMPLEMENTATION DEFINED.
    // In the recommended interface, ExternalHypNoninvasiveDebugEnabled returns the state of the
    // (DBGEN OR NIDEN) AND (HIDEN OR HNIDEN) signal.
    return ExternalNoninvasiveDebugEnabled() && (HIDEN == HIGH || HNIDEN == HIGH);

Library pseudocode for shared/debug/authentication/ExternalNoninvasiveDebugAllowed

// ExternalNoninvasiveDebugAllowed()
// =================================

boolean ExternalNoninvasiveDebugAllowed()
    // Return TRUE if Trace and PC Sample-based Profiling are allowed
    return (ExternalNoninvasiveDebugEnabled() &&
            (!IsSecure() || ExternalSecureNoninvasiveDebugEnabled() ||
             (ELUsingAArch32(EL1) && PSTATE.EL == EL0 && SDER.SUNIDEN == '1')));

Library pseudocode for shared/debug/authentication/ExternalNoninvasiveDebugEnabled

// ExternalNoninvasiveDebugEnabled()
// =================================

boolean ExternalNoninvasiveDebugEnabled()
    // This function returns TRUE if the v8.4 Debug relaxations are implemented, otherwise this
    // function is IMPLEMENTATION DEFINED.
    // In the recommended interface, ExternalNoninvasiveDebugEnabled returns the state of the (DBGEN
    // OR NIDEN) signal.
    return !HaveNoninvasiveDebugAuth() || ExternalDebugEnabled() || NIDEN == HIGH;

Library pseudocode for shared/debug/authentication/ExternalSecureDebugEnabled

// ExternalSecureDebugEnabled()
// ============================

boolean ExternalSecureDebugEnabled()
    if !HaveEL(EL3) && !IsSecure() then return FALSE;
    // The definition of this function is IMPLEMENTATION DEFINED.
    // In the recommended interface, this function returns the state of the (DBGEN AND SPIDEN) signal.
    // CoreSight allows asserting SPIDEN without also asserting DBGEN, but this is not recommended.
    return ExternalDebugEnabled() && SPIDEN == HIGH;

Library pseudocode for shared/debug/authentication/ExternalSecureNoninvasiveDebugEnabled

// ExternalSecureNoninvasiveDebugEnabled()
// =======================================

boolean ExternalSecureNoninvasiveDebugEnabled()
    if !HaveEL(EL3) && !IsSecure() then return FALSE;
    // If the v8.4 Debug relaxations are implemented, this function returns the value of
    // ExternalSecureDebugEnabled(). Otherwise the definition of this function is
    // IMPLEMENTATION DEFINED.
    // In the recommended interface, this function returns the state of the (DBGEN OR NIDEN) AND
    // (SPIDEN OR SPNIDEN) signal.
    if HaveNoninvasiveDebugAuth() then
        return ExternalNoninvasiveDebugEnabled() && (SPIDEN == HIGH || SPNIDEN == HIGH);
    else
        return ExternalSecureDebugEnabled();

Library pseudocode for shared/debug/cti/CTI_SetEventLevel

// Set a Cross Trigger multi-cycle input event trigger to the specified level.
CTI_SetEventLevel(CrossTriggerIn id, signal level);

Library pseudocode for shared/debug/cti/CTI_SignalEvent

// Signal a discrete event on a Cross Trigger input event trigger.
CTI_SignalEvent(CrossTriggerIn id);

Library pseudocode for shared/debug/cti/CrossTrigger

enumeration CrossTriggerOut {CrossTriggerOut_DebugRequest, CrossTriggerOut_RestartRequest,
                             CrossTriggerOut_IRQ,          CrossTriggerOut_RSVD3,
                             CrossTriggerOut_TraceExtIn0,  CrossTriggerOut_TraceExtIn1,
                             CrossTriggerOut_TraceExtIn2,  CrossTriggerOut_TraceExtIn3};

enumeration CrossTriggerIn  {CrossTriggerIn_CrossHalt,     CrossTriggerIn_PMUOverflow,
                             CrossTriggerIn_RSVD2,         CrossTriggerIn_RSVD3,
                             CrossTriggerIn_TraceExtOut0,  CrossTriggerIn_TraceExtOut1,
                             CrossTriggerIn_TraceExtOut2,  CrossTriggerIn_TraceExtOut3};

Library pseudocode for shared/debug/dccanditr/CheckForDCCInterrupts

// CheckForDCCInterrupts()
// =======================

CheckForDCCInterrupts()
    commrx = (EDSCR.RXfull == '1');
    commtx = (EDSCR.TXfull == '0');

    // COMMRX and COMMTX support is optional and not recommended for new designs.
    // SetInterruptRequestLevel(InterruptID_COMMRX, if commrx then HIGH else LOW);
    // SetInterruptRequestLevel(InterruptID_COMMTX, if commtx then HIGH else LOW);

    // The value to be driven onto the common COMMIRQ signal.
    if ELUsingAArch32(EL1) then
        commirq = ((commrx && DBGDCCINT.RX == '1') ||
                   (commtx && DBGDCCINT.TX == '1'));
    else
        commirq = ((commrx && MDCCINT_EL1.RX == '1') ||
                   (commtx && MDCCINT_EL1.TX == '1'));
    SetInterruptRequestLevel(InterruptID_COMMIRQ, if commirq then HIGH else LOW);

    return;
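
The COMMIRQ condition reduces to a two-term OR. A minimal Python sketch (names invented; the RX/TX enables stand for DBGDCCINT.{RX,TX} or MDCCINT_EL1.{RX,TX}):

    def commirq(rxfull, txfull, rx_enable, tx_enable):
        """Model of CheckForDCCInterrupts(): interrupt when data is waiting to be read
        (COMMRX) or the transmit register is empty (COMMTX), and that source is enabled."""
        commrx = rxfull          # debugger has written data for software to read
        commtx = not txfull      # software can write new data for the debugger
        return (commrx and rx_enable) or (commtx and tx_enable)

    assert commirq(rxfull=True, txfull=True, rx_enable=True, tx_enable=False)
    assert not commirq(rxfull=False, txfull=False, rx_enable=True, tx_enable=False)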

Library pseudocode for shared/debug/dccanditr/DBGDTRRX_EL0

// DBGDTRRX_EL0[] (external write)
// ===============================
// Called on writes to debug register 0x08C.

DBGDTRRX_EL0[boolean memory_mapped] = bits(32) value
    if EDPRSR<6:5,0> != '001' then                      // Check DLK, OSLK and PU bits
        IMPLEMENTATION_DEFINED "signal slave-generated error";
        return;

    if EDSCR.ERR == '1' then return;                    // Error flag set: ignore write

    // The Software lock is OPTIONAL.
    if memory_mapped && EDLSR.SLK == '1' then return;   // Software lock locked: ignore write

    if EDSCR.RXfull == '1' || (Halted() && EDSCR.MA == '1' && EDSCR.ITE == '0') then
        EDSCR.RXO = '1';  EDSCR.ERR = '1';              // Overrun condition: ignore write
        return;

    EDSCR.RXfull = '1';
    DTRRX = value;

    if Halted() && EDSCR.MA == '1' then
        EDSCR.ITE = '0';                   // See comments in EDITR[] (external write)
        if !UsingAArch32() then
            ExecuteA64(0xD5330501<31:0>);  // A64 "MRS X1,DBGDTRRX_EL0"
            ExecuteA64(0xB8004401<31:0>);  // A64 "STR W1,[X0],#4"
            X[1] = bits(64) UNKNOWN;
        else
            ExecuteT32(0xEE10<15:0> /*hw1*/, 0x1E15<15:0> /*hw2*/);  // T32 "MRS R1,DBGDTRRXint"
            ExecuteT32(0xF840<15:0> /*hw1*/, 0x1B04<15:0> /*hw2*/);  // T32 "STR R1,[R0],#4"
            R[1] = bits(32) UNKNOWN;
        // If the store aborts, the Data Abort exception is taken and EDSCR.ERR is set to 1
        if EDSCR.ERR == '1' then
            EDSCR.RXfull = bit UNKNOWN;
            DBGDTRRX_EL0 = bits(32) UNKNOWN;
        else
            // "MRS X1,DBGDTRRX_EL0" calls DBGDTR_EL0[] (read) which clears RXfull.
            assert EDSCR.RXfull == '0';
        EDSCR.ITE = '1';                   // See comments in EDITR[] (external write)
    return;

// DBGDTRRX_EL0[] (external read)
// ==============================

bits(32) DBGDTRRX_EL0[boolean memory_mapped]
    return DTRRX;

Library pseudocode for shared/debug/dccanditr/DBGDTRTX_EL0

// DBGDTRTX_EL0[] (external read)
// ==============================
// Called on reads of debug register 0x080.

bits(32) DBGDTRTX_EL0[boolean memory_mapped]
    if EDPRSR<6:5,0> != '001' then                      // Check DLK, OSLK and PU bits
        IMPLEMENTATION_DEFINED "signal slave-generated error";
        return bits(32) UNKNOWN;

    underrun = EDSCR.TXfull == '0' || (Halted() && EDSCR.MA == '1' && EDSCR.ITE == '0');
    value = if underrun then bits(32) UNKNOWN else DTRTX;

    if EDSCR.ERR == '1' then return value;              // Error flag set: no side-effects

    // The Software lock is OPTIONAL.
    if memory_mapped && EDLSR.SLK == '1' then           // Software lock locked: no side-effects
        return value;

    if underrun then
        EDSCR.TXU = '1';  EDSCR.ERR = '1';              // Underrun condition: block side-effects
        return value;                                   // Return UNKNOWN

    EDSCR.TXfull = '0';
    if Halted() && EDSCR.MA == '1' then
        EDSCR.ITE = '0';                   // See comments in EDITR[] (external write)

        if !UsingAArch32() then
            ExecuteA64(0xB8404401<31:0>);  // A64 "LDR W1,[X0],#4"
        else
            ExecuteT32(0xF850<15:0> /*hw1*/, 0x1B04<15:0> /*hw2*/);  // T32 "LDR R1,[R0],#4"

        // If the load aborts, the Data Abort exception is taken and EDSCR.ERR is set to 1
        if EDSCR.ERR == '1' then
            EDSCR.TXfull = bit UNKNOWN;
            DBGDTRTX_EL0 = bits(32) UNKNOWN;
        else
            if !UsingAArch32() then
                ExecuteA64(0xD5130501<31:0>);  // A64 "MSR DBGDTRTX_EL0,X1"
            else
                ExecuteT32(0xEE00<15:0> /*hw1*/, 0x1E15<15:0> /*hw2*/);  // T32 "MSR DBGDTRTXint,R1"
            // "MSR DBGDTRTX_EL0,X1" calls DBGDTR_EL0[] (write) which sets TXfull.
            assert EDSCR.TXfull == '1';
        if !UsingAArch32() then
            X[1] = bits(64) UNKNOWN;
        else
            R[1] = bits(32) UNKNOWN;
        EDSCR.ITE = '1';                   // See comments in EDITR[] (external write)

    return value;

// DBGDTRTX_EL0[] (external write)
// ===============================

DBGDTRTX_EL0[boolean memory_mapped] = bits(32) value
    // The Software lock is OPTIONAL.
    if memory_mapped && EDLSR.SLK == '1' then return;   // Software lock locked: ignore write
    DTRTX = value;
    return;

Library pseudocode for shared/debug/dccanditr/DBGDTR_EL0

// DBGDTR_EL0[] (write)
// ====================
// System register writes to DBGDTR_EL0, DBGDTRTX_EL0 (AArch64) and DBGDTRTXint (AArch32)

DBGDTR_EL0[] = bits(N) value
    // For MSR DBGDTRTX_EL0,<Rt>  N=32, value=X[t]<31:0>, X[t]<63:32> is ignored
    // For MSR DBGDTR_EL0,<Xt>    N=64, value=X[t]<63:0>
    assert N IN {32,64};
    if EDSCR.TXfull == '1' then
        value = bits(N) UNKNOWN;
    // On a 64-bit write, implement a half-duplex channel
    if N == 64 then DTRRX = value<63:32>;
    DTRTX = value<31:0>;       // 32-bit or 64-bit write
    EDSCR.TXfull = '1';
    return;

// DBGDTR_EL0[] (read)
// ===================
// System register reads of DBGDTR_EL0, DBGDTRRX_EL0 (AArch64) and DBGDTRRXint (AArch32)

bits(N) DBGDTR_EL0[]
    // For MRS <Rt>,DBGDTRTX_EL0  N=32, X[t]=Zeros(32):result
    // For MRS <Xt>,DBGDTR_EL0    N=64, X[t]=result
    assert N IN {32,64};
    bits(N) result;
    if EDSCR.RXfull == '0' then
        result = bits(N) UNKNOWN;
    else
        // On a 64-bit read, implement a half-duplex channel
        // NOTE: the word order is reversed on reads with regards to writes
        if N == 64 then result<63:32> = DTRTX;
        result<31:0> = DTRRX;
    EDSCR.RXfull = '0';
    return result;
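
The half-duplex word-order note is a common source of confusion: a 64-bit write puts the high word in DTRRX, but a 64-bit read returns DTRTX as the high word. A toy Python model (class name invented; the TXfull/RXfull flow control is ignored) of just that swap:

    class DTRModel:
        """Toy model of the DBGDTR_EL0 half-duplex channel."""
        def __init__(self):
            self.dtrrx = 0
            self.dtrtx = 0

        def write64(self, value):
            self.dtrrx = (value >> 32) & 0xFFFFFFFF   # high word goes to DTRRX
            self.dtrtx = value & 0xFFFFFFFF           # low word goes to DTRTX

        def read64(self):
            # NOTE: the word order is reversed on reads with regards to writes
            return (self.dtrtx << 32) | self.dtrrx

    m = DTRModel()
    m.write64(0x1122334455667788)
    assert m.read64() == 0x5566778811223344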

Library pseudocode for shared/debug/dccanditr/DTR

bits(32) DTRRX;
bits(32) DTRTX;

Library pseudocode for shared/debug/dccanditr/EDITR

// EDITR[] (external write)
// ========================
// Called on writes to debug register 0x084.

EDITR[boolean memory_mapped] = bits(32) value
    if EDPRSR<6:5,0> != '001' then                      // Check DLK, OSLK and PU bits
        IMPLEMENTATION_DEFINED "signal slave-generated error";
        return;

    if EDSCR.ERR == '1' then return;                    // Error flag set: ignore write

    // The Software lock is OPTIONAL.
    if memory_mapped && EDLSR.SLK == '1' then return;   // Software lock locked: ignore write

    if !Halted() then return;                           // Non-debug state: ignore write

    if EDSCR.ITE == '0' || EDSCR.MA == '1' then
        EDSCR.ITO = '1';  EDSCR.ERR = '1';              // Overrun condition: block write
        return;

    // ITE indicates whether the processor is ready to accept another instruction; the processor
    // may support multiple outstanding instructions. Unlike the "InstrCompl" flag in [v7A] there
    // is no indication that the pipeline is empty (all instructions have completed). In this
    // pseudocode, the assumption is that only one instruction can be executed at a time,
    // meaning ITE acts like "InstrCompl".
    EDSCR.ITE = '0';

    if !UsingAArch32() then
        ExecuteA64(value);
    else
        ExecuteT32(value<15:0>/*hw1*/, value<31:16> /*hw2*/);

    EDSCR.ITE = '1';

    return;

Library pseudocode for shared/debug/halting/DCPSInstruction

// DCPSInstruction()
// =================
// Operation of the DCPS instruction in Debug state

DCPSInstruction(bits(2) target_el)

    SynchronizeContext();

    case target_el of
        when EL1
            if PSTATE.EL == EL2 || (PSTATE.EL == EL3 && !UsingAArch32()) then handle_el = PSTATE.EL;
            elsif EL2Enabled() && HCR_EL2.TGE == '1' then UndefinedFault();
            else handle_el = EL1;
        when EL2
            if !HaveEL(EL2) then UndefinedFault();
            elsif PSTATE.EL == EL3 && !UsingAArch32() then handle_el = EL3;
            elsif !IsSecureEL2Enabled() && IsSecure() then UndefinedFault();
            else handle_el = EL2;
        when EL3
            if EDSCR.SDD == '1' || !HaveEL(EL3) then UndefinedFault();
            handle_el = EL3;
        otherwise
            Unreachable();

    from_secure = IsSecure();
    if ELUsingAArch32(handle_el) then
        if PSTATE.M == M32_Monitor then SCR.NS = '0';
        assert UsingAArch32();                  // Cannot move from AArch64 to AArch32
        case handle_el of
            when EL1
                AArch32.WriteMode(M32_Svc);
                if HavePANExt() && SCTLR.SPAN == '0' then
                    PSTATE.PAN = '1';
            when EL2  AArch32.WriteMode(M32_Hyp);
            when EL3
                AArch32.WriteMode(M32_Monitor);
                if HavePANExt() then
                    if !from_secure then
                        PSTATE.PAN = '0';
                    elsif SCTLR.SPAN == '0' then
                        PSTATE.PAN = '1';
        if handle_el == EL2 then
            ELR_hyp = bits(32) UNKNOWN;  HSR = bits(32) UNKNOWN;
        else
            LR = bits(32) UNKNOWN;
        SPSR[] = bits(32) UNKNOWN;
        PSTATE.E = SCTLR[].EE;
        DLR = bits(32) UNKNOWN;  DSPSR = bits(32) UNKNOWN;
    else                                        // Targeting AArch64
        if UsingAArch32() then
            AArch64.MaybeZeroRegisterUppers();
        MaybeZeroSVEUppers(target_el);
        PSTATE.nRW = '0';  PSTATE.SP = '1';  PSTATE.EL = handle_el;
        if HavePANExt() && ((handle_el == EL1 && SCTLR_EL1.SPAN == '0') ||
                            (handle_el == EL2 && HCR_EL2.E2H == '1' &&
                             HCR_EL2.TGE == '1' && SCTLR_EL2.SPAN == '0')) then
            PSTATE.PAN = '1';
        ELR[] = bits(64) UNKNOWN;  SPSR[] = bits(32) UNKNOWN;  ESR[] = bits(32) UNKNOWN;
        DLR_EL0 = bits(64) UNKNOWN;  DSPSR_EL0 = bits(32) UNKNOWN;
        if HaveUAOExt() then PSTATE.UAO = '0';

    UpdateEDSCRFields();                        // Update EDSCR PE state flags
    sync_errors = HaveIESB() && SCTLR[].IESB == '1';
    if HaveDoubleFaultExt() && !UsingAArch32() then
        sync_errors = sync_errors || (SCR_EL3.EA == '1' && SCR_EL3.NMEA == '1' && PSTATE.EL == EL3);
    // SCTLR[].IESB might be ignored in Debug state.
    if !ConstrainUnpredictableBool(Unpredictable_IESBinDebug) then
        sync_errors = FALSE;
    if sync_errors then
        SynchronizeErrors();

    return;

Library pseudocode for shared/debug/halting/DRPSInstruction

// DRPSInstruction()
// =================
// Operation of the A64 DRPS and T32 ERET instructions in Debug state

DRPSInstruction()

    SynchronizeContext();

    sync_errors = HaveIESB() && SCTLR[].IESB == '1';
    if HaveDoubleFaultExt() && !UsingAArch32() then
        sync_errors = sync_errors || (SCR_EL3.EA == '1' && SCR_EL3.NMEA == '1' && PSTATE.EL == EL3);
    // SCTLR[].IESB might be ignored in Debug state.
    if !ConstrainUnpredictableBool(Unpredictable_IESBinDebug) then
        sync_errors = FALSE;
    if sync_errors then
        SynchronizeErrors();

    SetPSTATEFromPSR(SPSR[]);

    // PSTATE.{N,Z,C,V,Q,GE,SS,D,A,I,F} are not observable and ignored in Debug state, so
    // behave as if UNKNOWN.
    if UsingAArch32() then
        PSTATE.<N,Z,C,V,Q,GE,SS,A,I,F> = bits(13) UNKNOWN;
        // In AArch32, all instructions are T32 and unconditional.
        PSTATE.IT = '00000000';  PSTATE.T = '1';        // PSTATE.J is RES0
        DLR = bits(32) UNKNOWN;  DSPSR = bits(32) UNKNOWN;
    else
        PSTATE.<N,Z,C,V,SS,D,A,I,F> = bits(9) UNKNOWN;
        DLR_EL0 = bits(64) UNKNOWN;  DSPSR_EL0 = bits(32) UNKNOWN;

    UpdateEDSCRFields();                                // Update EDSCR PE state flags

    return;

Library pseudocode for shared/debug/halting/DebugHalt

constant bits(6) DebugHalt_Breakpoint      = '000111';
constant bits(6) DebugHalt_EDBGRQ          = '010011';
constant bits(6) DebugHalt_Step_Normal     = '011011';
constant bits(6) DebugHalt_Step_Exclusive  = '011111';
constant bits(6) DebugHalt_OSUnlockCatch   = '100011';
constant bits(6) DebugHalt_ResetCatch      = '100111';
constant bits(6) DebugHalt_Watchpoint      = '101011';
constant bits(6) DebugHalt_HaltInstruction = '101111';
constant bits(6) DebugHalt_SoftwareAccess  = '110011';
constant bits(6) DebugHalt_ExceptionCatch  = '110111';
constant bits(6) DebugHalt_Step_NoSyndrome = '111011';
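
A debugger reading EDSCR.STATUS could decode these encodings with a simple table; a Python sketch (dictionary contents taken from the constants above, function name invented):

    HALT_REASONS = {
        0b000111: "Breakpoint",       0b010011: "EDBGRQ",
        0b011011: "Step_Normal",      0b011111: "Step_Exclusive",
        0b100011: "OSUnlockCatch",    0b100111: "ResetCatch",
        0b101011: "Watchpoint",       0b101111: "HaltInstruction",
        0b110011: "SoftwareAccess",   0b110111: "ExceptionCatch",
        0b111011: "Step_NoSyndrome",
    }

    def decode_halt_reason(edscr_status):
        return HALT_REASONS.get(edscr_status & 0x3F, "not halted / reserved")

    assert decode_halt_reason(0b000111) == "Breakpoint"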

Library pseudocode for shared/debug/halting/DisableITRAndResumeInstructionPrefetch

DisableITRAndResumeInstructionPrefetch();

Library pseudocode for shared/debug/halting/ExecuteA64

// Execute an A64 instruction in Debug state.
ExecuteA64(bits(32) instr);

Library pseudocode for shared/debug/halting/ExecuteT32

// Execute a T32 instruction in Debug state.
ExecuteT32(bits(16) hw1, bits(16) hw2);

Library pseudocode for shared/debug/halting/ExitDebugState

// ExitDebugState()
// ================

ExitDebugState()
    assert Halted();
    SynchronizeContext();

    // Although EDSCR.STATUS signals that the PE is restarting, debuggers must use EDPRSR.SDR to
    // detect that the PE has restarted.
    EDSCR.STATUS = '000001';                   // Signal restarting
    EDESR<2:0> = '000';                        // Clear any pending Halting debug events

    bits(64) new_pc;
    bits(32) spsr;

    if UsingAArch32() then
        new_pc = ZeroExtend(DLR);
        spsr = DSPSR;
    else
        new_pc = DLR_EL0;
        spsr = DSPSR_EL0;
    // If this is an illegal return, SetPSTATEFromPSR() will set PSTATE.IL.
    SetPSTATEFromPSR(spsr);                    // Can update privileged bits, even at EL0

    if UsingAArch32() then
        if ConstrainUnpredictableBool(Unpredictable_RESTARTALIGNPC) then new_pc<0> = '0';
        BranchTo(new_pc<31:0>, BranchType_DBGEXIT);    // AArch32 branch
    else
        // If targeting AArch32 then possibly zero the 32 most significant bits of the target PC
        if spsr<4> == '1' && ConstrainUnpredictableBool(Unpredictable_RESTARTZEROUPPERPC) then
            new_pc<63:32> = Zeros();
        BranchTo(new_pc, BranchType_DBGEXIT);  // A type of branch that is never predicted

    (EDSCR.STATUS,EDPRSR.SDR) = ('000010','1');        // Atomically signal restarted
    UpdateEDSCRFields();                               // Stop signalling PE state
    DisableITRAndResumeInstructionPrefetch();

    return;

Library pseudocode for shared/debug/halting/Halt

// Halt()
// ======

Halt(bits(6) reason)

    CTI_SignalEvent(CrossTriggerIn_CrossHalt);  // Trigger other cores to halt

    bits(64) preferred_restart_address = ThisInstrAddr();
    spsr = GetPSRFromPSTATE();

    if UsingAArch32() then
        // If entering from AArch32 state, spsr<21> is the DIT bit which has to be moved for DSPSR
        spsr<24> = spsr<21>;
        spsr<21> = PSTATE.SS;                   // Always save the SS bit

    if UsingAArch32() then
        DLR = preferred_restart_address<31:0>;
        DSPSR = spsr;
    else
        DLR_EL0 = preferred_restart_address;
        DSPSR_EL0 = spsr;

    EDSCR.ITE = '1';
    EDSCR.ITO = '0';
    if IsSecure() then
        EDSCR.SDD = '0';                        // If entered in Secure state, allow debug
    elsif HaveEL(EL3) then
        EDSCR.SDD = if ExternalSecureDebugEnabled() then '0' else '1';
    else
        assert EDSCR.SDD == '1';                // Otherwise EDSCR.SDD is RES1
    EDSCR.MA = '0';

    // PSTATE.{SS,D,A,I,F} are not observable and ignored in Debug state, so behave as if
    // UNKNOWN. PSTATE.{N,Z,C,V,Q,GE} are also not observable, but since these are not changed on
    // exception entry, this function also leaves them unchanged. PSTATE.{E,M,nRW,EL,SP} are
    // unchanged. PSTATE.IL is set to 0.
    if UsingAArch32() then
        PSTATE.<SS,A,I,F> = bits(4) UNKNOWN;
        // In AArch32, all instructions are T32 and unconditional.
        PSTATE.IT = '00000000';  PSTATE.T = '1';        // PSTATE.J is RES0
    else
        PSTATE.<SS,D,A,I,F> = bits(5) UNKNOWN;
    PSTATE.IL = '0';

    StopInstructionPrefetchAndEnableITR();
    EDSCR.STATUS = reason;                      // Signal entered Debug state
    UpdateEDSCRFields();                        // Update EDSCR PE state flags.

    return;

Library pseudocode for shared/debug/halting/HaltingAllowed

// HaltingAllowed() // ================ // Returns TRUE if halting is currently allowed, FALSE if halting is prohibited. boolean HaltingAllowed() if Halted() || DoubleLockStatus() then return FALSE; elsif IsSecure() then return ExternalSecureInvasiveDebugEnabled(); else return ExternalInvasiveDebugEnabled();

Library pseudocode for shared/debug/halting/Restarting

// Restarting() // ============ boolean Restarting() return EDSCR.STATUS == '000001'; // Restarting

Library pseudocode for shared/debug/halting/StopInstructionPrefetchAndEnableITR

StopInstructionPrefetchAndEnableITR();

Library pseudocode for shared/debug/halting/UpdateEDSCRFields

// UpdateEDSCRFields() // =================== // Update EDSCR PE state fields UpdateEDSCRFields() if !Halted() then EDSCR.EL = '00'; EDSCR.NS = bit UNKNOWN; EDSCR.RW = '1111'; else EDSCR.EL = PSTATE.EL; EDSCR.NS = if IsSecure() then '0' else '1'; bits(4) RW; RW<1> = if ELUsingAArch32(EL1) then '0' else '1'; if PSTATE.EL != EL0 then RW<0> = RW<1>; else RW<0> = if UsingAArch32() then '0' else '1'; if !HaveEL(EL2) || (HaveEL(EL3) && SCR_GEN[].NS == '0' && !IsSecureEL2Enabled()) then RW<2> = RW<1>; else RW<2> = if ELUsingAArch32(EL2) then '0' else '1'; if !HaveEL(EL3) then RW<3> = RW<2>; else RW<3> = if ELUsingAArch32(EL3) then '0' else '1'; // The least-significant bits of EDSCR.RW are UNKNOWN if any higher EL is using AArch32. if RW<3> == '0' then RW<2:0> = bits(3) UNKNOWN; elsif RW<2> == '0' then RW<1:0> = bits(2) UNKNOWN; elsif RW<1> == '0' then RW<0> = bit UNKNOWN; EDSCR.RW = RW; return;

Library pseudocode for shared/debug/haltingevents/CheckExceptionCatch

// CheckExceptionCatch() // ===================== // Check whether an Exception Catch debug event is set on the current Exception level CheckExceptionCatch(boolean exception_entry) // Called after an exception entry or exit, that is, such that IsSecure() and PSTATE.EL are correct // for the exception target. base = if IsSecure() then 0 else 4; if HaltingAllowed() then if HaveExtendedECDebugEvents() then exception_exit = !exception_entry; ctrl = EDECCR<UInt(PSTATE.EL) + base + 8>:EDECCR<UInt(PSTATE.EL) + base>; case ctrl of when '00' halt = FALSE; when '01' halt = TRUE; when '10' halt = (exception_exit == TRUE); when '11' halt = (exception_entry == TRUE); else halt = (EDECCR<UInt(PSTATE.EL) + base> == '1'); if halt then Halt(DebugHalt_ExceptionCatch);

Library pseudocode for shared/debug/haltingevents/CheckHaltingStep

// CheckHaltingStep() // ================== // Check whether EDESR.SS has been set by Halting Step CheckHaltingStep() if HaltingAllowed() && EDESR.SS == '1' then // The STATUS code depends on how we arrived at the state where EDESR.SS == 1. if HaltingStep_DidNotStep() then Halt(DebugHalt_Step_NoSyndrome); elsif HaltingStep_SteppedEX() then Halt(DebugHalt_Step_Exclusive); else Halt(DebugHalt_Step_Normal);

Library pseudocode for shared/debug/haltingevents/CheckOSUnlockCatch

// CheckOSUnlockCatch() // ==================== // Called on unlocking the OS Lock to pend an OS Unlock Catch debug event CheckOSUnlockCatch() if EDECR.OSUCE == '1' && !Halted() then EDESR.OSUC = '1';

Library pseudocode for shared/debug/haltingevents/CheckPendingOSUnlockCatch

// CheckPendingOSUnlockCatch() // =========================== // Check whether EDESR.OSUC has been set by an OS Unlock Catch debug event CheckPendingOSUnlockCatch() if HaltingAllowed() && EDESR.OSUC == '1' then Halt(DebugHalt_OSUnlockCatch);

Library pseudocode for shared/debug/haltingevents/CheckPendingResetCatch

// CheckPendingResetCatch() // ======================== // Check whether EDESR.RC has been set by a Reset Catch debug event CheckPendingResetCatch() if HaltingAllowed() && EDESR.RC == '1' then Halt(DebugHalt_ResetCatch);

Library pseudocode for shared/debug/haltingevents/CheckResetCatch

// CheckResetCatch() // ================= // Called after reset CheckResetCatch() if EDECR.RCE == '1' then EDESR.RC = '1'; // If halting is allowed then halt immediately if HaltingAllowed() then Halt(DebugHalt_ResetCatch);

Library pseudocode for shared/debug/haltingevents/CheckSoftwareAccessToDebugRegisters

// CheckSoftwareAccessToDebugRegisters() // ===================================== // Check for access to Breakpoint and Watchpoint registers. CheckSoftwareAccessToDebugRegisters() os_lock = (if ELUsingAArch32(EL1) then DBGOSLSR.OSLK else OSLSR_EL1.OSLK); if HaltingAllowed() && EDSCR.TDA == '1' && os_lock == '0' then Halt(DebugHalt_SoftwareAccess);

Library pseudocode for shared/debug/haltingevents/ExternalDebugRequest

// ExternalDebugRequest() // ====================== ExternalDebugRequest() if HaltingAllowed() then Halt(DebugHalt_EDBGRQ); // Otherwise the CTI continues to assert the debug request until it is taken.

Library pseudocode for shared/debug/haltingevents/HaltingStep_DidNotStep

// Returns TRUE if the previously executed instruction was executed in the inactive state, that is, // if it was not itself stepped. boolean HaltingStep_DidNotStep();

Library pseudocode for shared/debug/haltingevents/HaltingStep_SteppedEX

// Returns TRUE if the previously executed instruction was a Load-Exclusive class instruction // executed in the active-not-pending state. boolean HaltingStep_SteppedEX();

Library pseudocode for shared/debug/haltingevents/RunHaltingStep

// RunHaltingStep() // ================ RunHaltingStep(boolean exception_generated, bits(2) exception_target, boolean syscall, boolean reset) // "exception_generated" is TRUE if the previous instruction generated a synchronous exception // or was cancelled by an asynchronous exception. // // if "exception_generated" is TRUE then "exception_target" is the target of the exception, and // "syscall" is TRUE if the exception is a synchronous exception where the preferred return // address is the instruction following that which generated the exception. // // "reset" is TRUE if exiting reset state into the highest EL. if reset then assert !Halted(); // Cannot come out of reset halted active = EDECR.SS == '1' && !Halted(); if active && reset then // Coming out of reset with EDECR.SS set EDESR.SS = '1'; elsif active && HaltingAllowed() then if exception_generated && exception_target == EL3 then advance = syscall || ExternalSecureInvasiveDebugEnabled(); else advance = TRUE; if advance then EDESR.SS = '1'; return;

Library pseudocode for shared/debug/interrupts/ExternalDebugInterruptsDisabled

// ExternalDebugInterruptsDisabled() // ================================= // Determine whether EDSCR disables interrupts routed to 'target' boolean ExternalDebugInterruptsDisabled(bits(2) target) case target of when EL3 int_dis = EDSCR.INTdis == '11' && ExternalSecureInvasiveDebugEnabled(); when EL2 int_dis = EDSCR.INTdis == '1x' && ExternalInvasiveDebugEnabled(); when EL1 if IsSecure() then int_dis = EDSCR.INTdis == '1x' && ExternalSecureInvasiveDebugEnabled(); else int_dis = EDSCR.INTdis != '00' && ExternalInvasiveDebugEnabled(); return int_dis;
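The quoted patterns above use the pseudocode's masked bit-pattern matching: '1x' tests only EDSCR.INTdis<1> and so accepts both '10' and '11', while '11' and '00' are exact comparisons. A minimal C illustration of the three comparisons (the helper names are ours, not part of the library):

    #include <stdbool.h>

    /* EDSCR.INTdis is a 2-bit field, passed here as an unsigned value. */
    static bool intdis_is_11(unsigned intdis)  { return (intdis & 0x3) == 0x3; } /* '11'      */
    static bool intdis_is_1x(unsigned intdis)  { return (intdis & 0x2) != 0;   } /* '10'/'11' */
    static bool intdis_not_00(unsigned intdis) { return (intdis & 0x3) != 0;   } /* != '00'   */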

Library pseudocode for shared/debug/interrupts/InterruptID

enumeration InterruptID {InterruptID_PMUIRQ, InterruptID_COMMIRQ, InterruptID_CTIIRQ, InterruptID_COMMRX, InterruptID_COMMTX};

Library pseudocode for shared/debug/interrupts/SetInterruptRequestLevel

// Set a level-sensitive interrupt to the specified level. SetInterruptRequestLevel(InterruptID id, signal level);

Library pseudocode for shared/debug/samplebasedprofiling/CreatePCSample

// CreatePCSample() // ================ CreatePCSample() // In a simple sequential execution of the program, CreatePCSample is executed each time the PE // executes an instruction that can be sampled. An implementation is not constrained such that // reads of EDPCSRlo return the current values of PC, etc. pc_sample.valid = ExternalNoninvasiveDebugAllowed() && !Halted(); pc_sample.pc = ThisInstrAddr(); pc_sample.el = PSTATE.EL; pc_sample.rw = if UsingAArch32() then '0' else '1'; pc_sample.ns = if IsSecure() then '0' else '1'; pc_sample.contextidr = if ELUsingAArch32(EL1) then CONTEXTIDR else CONTEXTIDR_EL1; pc_sample.has_el2 = EL2Enabled(); if EL2Enabled() then if ELUsingAArch32(EL2) then pc_sample.vmid = ZeroExtend(VTTBR.VMID, 16); elsif !Have16bitVMID() || VTCR_EL2.VS == '0' then pc_sample.vmid = ZeroExtend(VTTBR_EL2.VMID<7:0>, 16); else pc_sample.vmid = VTTBR_EL2.VMID; if HaveVirtHostExt() && !ELUsingAArch32(EL2) then pc_sample.contextidr_el2 = CONTEXTIDR_EL2; else pc_sample.contextidr_el2 = bits(32) UNKNOWN; pc_sample.el0h = PSTATE.EL == EL0 && IsInHost(); return;

Library pseudocode for shared/debug/samplebasedprofiling/EDPCSRlo

// EDPCSRlo[] (read) // ================= bits(32) EDPCSRlo[boolean memory_mapped] if EDPRSR<6:5,0> != '001' then // Check DLK, OSLK and PU bits IMPLEMENTATION_DEFINED "signal slave-generated error"; return bits(32) UNKNOWN; // The Software lock is OPTIONAL. update = !memory_mapped || EDLSR.SLK == '0'; // Software locked: no side-effects if pc_sample.valid then sample = pc_sample.pc<31:0>; if update then if HaveVirtHostExt() && EDSCR.SC2 == '1' then EDPCSRhi.PC = (if pc_sample.rw == '0' then Zeros(24) else pc_sample.pc<55:32>); EDPCSRhi.EL = pc_sample.el; EDPCSRhi.NS = pc_sample.ns; else EDPCSRhi = (if pc_sample.rw == '0' then Zeros(32) else pc_sample.pc<63:32>); EDCIDSR = pc_sample.contextidr; if HaveVirtHostExt() && EDSCR.SC2 == '1' then EDVIDSR = (if HaveEL(EL2) && pc_sample.ns == '1' then pc_sample.contextidr_el2 else bits(32) UNKNOWN); else if HaveEL(EL2) && pc_sample.ns == '1' && pc_sample.el IN {EL1,EL0} then EDVIDSR.VMID = pc_sample.vmid; else EDVIDSR.VMID = Zeros(); EDVIDSR.NS = pc_sample.ns; EDVIDSR.E2 = (if pc_sample.el == EL2 then '1' else '0'); EDVIDSR.E3 = (if pc_sample.el == EL3 then '1' else '0') AND pc_sample.rw; // The conditions for setting HV are not specified if PCSRhi is zero. // An example implementation may be "pc_sample.rw". EDVIDSR.HV = (if !IsZero(EDPCSRhi) then '1' else bit IMPLEMENTATION_DEFINED "0 or 1"); else sample = Ones(32); if update then EDPCSRhi = bits(32) UNKNOWN; EDCIDSR = bits(32) UNKNOWN; EDVIDSR = bits(32) UNKNOWN; return sample;

Library pseudocode for shared/debug/samplebasedprofiling/PCSample

type PCSample is ( boolean valid, bits(64) pc, bits(2) el, bit rw, bit ns, boolean has_el2, bits(32) contextidr, bits(32) contextidr_el2, boolean el0h, bits(16) vmid ) PCSample pc_sample;

Library pseudocode for shared/debug/samplebasedprofiling/PMPCSR

// PMPCSR[] (read) // =============== bits(32) PMPCSR[boolean memory_mapped] if EDPRSR<6:5,0> != '001' then // Check DLK, OSLK and PU bits IMPLEMENTATION_DEFINED "signal slave-generated error"; return bits(32) UNKNOWN; // The Software lock is OPTIONAL. update = !memory_mapped || PMLSR.SLK == '0'; // Software locked: no side-effects if pc_sample.valid then sample = pc_sample.pc<31:0>; if update then PMPCSR<55:32> = (if pc_sample.rw == '0' then Zeros(24) else pc_sample.pc<55:32>); PMPCSR.EL = pc_sample.el; PMPCSR.NS = pc_sample.ns; PMCID1SR = pc_sample.contextidr; PMCID2SR = if pc_sample.has_el2 then pc_sample.contextidr_el2 else bits(32) UNKNOWN; PMVIDSR.VMID = (if pc_sample.has_el2 && pc_sample.el IN {EL1,EL0} && !pc_sample.el0h then pc_sample.vmid else bits(16) UNKNOWN); else sample = Ones(32); if update then PMPCSR<55:32> = bits(24) UNKNOWN; PMPCSR.EL = bits(2) UNKNOWN; PMPCSR.NS = bit UNKNOWN; PMCID1SR = bits(32) UNKNOWN; PMCID2SR = bits(32) UNKNOWN; PMVIDSR.VMID = bits(16) UNKNOWN; return sample;

Library pseudocode for shared/debug/softwarestep/CheckSoftwareStep

// CheckSoftwareStep() // =================== // Take a Software Step exception if in the active-pending state CheckSoftwareStep() // Other self-hosted debug functions will call AArch32.GenerateDebugExceptions() if called from // AArch32 state. However, because Software Step is only active when the debug target Exception // level is using AArch64, CheckSoftwareStep only calls AArch64.GenerateDebugExceptions(). if !ELUsingAArch32(DebugTarget()) && AArch64.GenerateDebugExceptions() then if MDSCR_EL1.SS == '1' && PSTATE.SS == '0' then AArch64.SoftwareStepException();

Library pseudocode for shared/debug/softwarestep/DebugExceptionReturnSS

// DebugExceptionReturnSS() // ======================== // Returns value to write to PSTATE.SS on an exception return or Debug state exit. bit DebugExceptionReturnSS(bits(32) spsr) assert Halted() || Restarting() || PSTATE.EL != EL0; SS_bit = '0'; if MDSCR_EL1.SS == '1' then if Restarting() then enabled_at_source = FALSE; elsif UsingAArch32() then enabled_at_source = AArch32.GenerateDebugExceptions(); else enabled_at_source = AArch64.GenerateDebugExceptions(); if IllegalExceptionReturn(spsr) then dest = PSTATE.EL; else (valid, dest) = ELFromSPSR(spsr); assert valid; secure = IsSecureBelowEL3() || dest == EL3; if ELUsingAArch32(dest) then enabled_at_dest = AArch32.GenerateDebugExceptionsFrom(dest, secure); else mask = spsr<9>; enabled_at_dest = AArch64.GenerateDebugExceptionsFrom(dest, secure, mask); ELd = DebugTargetFrom(secure); if !ELUsingAArch32(ELd) && !enabled_at_source && enabled_at_dest then SS_bit = spsr<21>; return SS_bit;

Library pseudocode for shared/debug/softwarestep/SSAdvance

// SSAdvance() // =========== // Advance the Software Step state machine. SSAdvance() // A simpler implementation of this function just clears PSTATE.SS to zero regardless of the // current Software Step state machine. However, this check is made to illustrate that the // processor only needs to consider advancing the state machine from the active-not-pending // state. target = DebugTarget(); step_enabled = !ELUsingAArch32(target) && MDSCR_EL1.SS == '1'; active_not_pending = step_enabled && PSTATE.SS == '1'; if active_not_pending then PSTATE.SS = '0'; return;

Library pseudocode for shared/debug/softwarestep/SoftwareStep_DidNotStep

// Returns TRUE if the previously executed instruction was executed in the inactive state, that is, // if it was not itself stepped. boolean SoftwareStep_DidNotStep();

Library pseudocode for shared/debug/softwarestep/SoftwareStep_SteppedEX

// Returns TRUE if the previously executed instruction was a Load-Exclusive class instruction // executed in the active-not-pending state. boolean SoftwareStep_SteppedEX();

Library pseudocode for shared/exceptions/exceptions/ConditionSyndrome

// ConditionSyndrome() // =================== // Return CV and COND fields of instruction syndrome bits(5) ConditionSyndrome() bits(5) syndrome; if UsingAArch32() then cond = AArch32.CurrentCond(); if PSTATE.T == '0' then // A32 syndrome<4> = '1'; // A conditional A32 instruction that is known to pass its condition code check // can be presented either with COND set to 0xE, the value for unconditional, or // the COND value held in the instruction. if ConditionHolds(cond) && ConstrainUnpredictableBool(Unpredictable_ESRCONDPASS) then syndrome<3:0> = '1110'; else syndrome<3:0> = cond; else // T32 // When a T32 instruction is trapped, it is IMPLEMENTATION DEFINED whether: // * CV set to 0 and COND is set to an UNKNOWN value // * CV set to 1 and COND is set to the condition code for the condition that // applied to the instruction. if boolean IMPLEMENTATION_DEFINED "Condition valid for trapped T32" then syndrome<4> = '1'; syndrome<3:0> = cond; else syndrome<4> = '0'; syndrome<3:0> = bits(4) UNKNOWN; else syndrome<4> = '1'; syndrome<3:0> = '1110'; return syndrome;

Library pseudocode for shared/exceptions/exceptions/Exception

enumeration Exception {Exception_Uncategorized, // Uncategorized or unknown reason Exception_WFxTrap, // Trapped WFI or WFE instruction Exception_CP15RTTrap, // Trapped AArch32 MCR or MRC access to CP15 Exception_CP15RRTTrap, // Trapped AArch32 MCRR or MRRC access to CP15 Exception_CP14RTTrap, // Trapped AArch32 MCR or MRC access to CP14 Exception_CP14DTTrap, // Trapped AArch32 LDC or STC access to CP14 Exception_AdvSIMDFPAccessTrap, // HCPTR-trapped access to SIMD or FP Exception_FPIDTrap, // Trapped access to SIMD or FP ID register // Trapped BXJ instruction not supported in Armv8 Exception_PACTrap, // Trapped invalid PAC use Exception_CP14RRTTrap, // Trapped MRRC access to CP14 from AArch32 Exception_IllegalState, // Illegal Execution state Exception_SupervisorCall, // Supervisor Call Exception_HypervisorCall, // Hypervisor Call Exception_MonitorCall, // Monitor Call or Trapped SMC instruction Exception_SystemRegisterTrap, // Trapped MRS or MSR system register access Exception_ERetTrap, // Trapped invalid ERET use Exception_InstructionAbort, // Instruction Abort or Prefetch Abort Exception_PCAlignment, // PC alignment fault Exception_DataAbort, // Data Abort Exception_NV2DataAbort, // Data abort at EL1 reported as being from EL2 Exception_SPAlignment, // SP alignment fault Exception_FPTrappedException, // IEEE trapped FP exception Exception_SError, // SError interrupt Exception_Breakpoint, // (Hardware) Breakpoint Exception_SoftwareStep, // Software Step Exception_Watchpoint, // Watchpoint Exception_SoftwareBreakpoint, // Software Breakpoint Instruction Exception_VectorCatch, // AArch32 Vector Catch Exception_IRQ, // IRQ interrupt Exception_SVEAccessTrap, // HCPTR trapped access to SVE Exception_BranchTarget, // Branch Target Identification Exception_FIQ}; // FIQ interrupt

Library pseudocode for shared/exceptions/exceptions/ExceptionRecord

type ExceptionRecord is (Exception exceptype, // Exception class bits(25) syndrome, // Syndrome record bits(64) vaddress, // Virtual fault address boolean ipavalid, // Physical fault address for second stage faults is valid bits(1) NS, // Physical fault address for second stage faults is Non-secure or secure bits(52) ipaddress) // Physical fault address for second stage faults

Library pseudocode for shared/exceptions/exceptions/ExceptionSyndrome

// ExceptionSyndrome() // =================== // Return a blank exception syndrome record for an exception of the given type. ExceptionRecord ExceptionSyndrome(Exception exceptype) ExceptionRecord r; r.exceptype = exceptype; // Initialize all other fields r.syndrome = Zeros(); r.vaddress = Zeros(); r.ipavalid = FALSE; r.NS = '0'; r.ipaddress = Zeros(); return r;

Library pseudocode for shared/exceptions/traps/ReservedValue

// ReservedValue() // =============== ReservedValue() if UsingAArch32() && !AArch32.GeneralExceptionsToAArch64() then AArch32.TakeUndefInstrException(); else AArch64.UndefinedFault();

Library pseudocode for shared/exceptions/traps/UnallocatedEncoding

// UnallocatedEncoding() // ===================== UnallocatedEncoding() if UsingAArch32() && AArch32.ExecutingCP10or11Instr() then FPEXC.DEX = '0'; if UsingAArch32() && !AArch32.GeneralExceptionsToAArch64() then AArch32.TakeUndefInstrException(); else AArch64.UndefinedFault();

Library pseudocode for shared/functions/aborts/EncodeLDFSC

// EncodeLDFSC() // ============= // Function that gives the Long-descriptor FSC code for types of Fault bits(6) EncodeLDFSC(Fault statuscode, integer level) bits(6) result; case statuscode of when Fault_AddressSize result = '0000':level<1:0>; assert level IN {0,1,2,3}; when Fault_AccessFlag result = '0010':level<1:0>; assert level IN {1,2,3}; when Fault_Permission result = '0011':level<1:0>; assert level IN {1,2,3}; when Fault_Translation result = '0001':level<1:0>; assert level IN {0,1,2,3}; when Fault_SyncExternal result = '010000'; when Fault_SyncExternalOnWalk result = '0101':level<1:0>; assert level IN {0,1,2,3}; when Fault_SyncParity result = '011000'; when Fault_SyncParityOnWalk result = '0111':level<1:0>; assert level IN {0,1,2,3}; when Fault_AsyncParity result = '011001'; when Fault_AsyncExternal result = '010001'; when Fault_Alignment result = '100001'; when Fault_Debug result = '100010'; when Fault_TLBConflict result = '110000'; when Fault_HWUpdateAccessFlag result = '110001'; when Fault_Lockdown result = '110100'; // IMPLEMENTATION DEFINED when Fault_Exclusive result = '110101'; // IMPLEMENTATION DEFINED otherwise Unreachable(); return result;

Library pseudocode for shared/functions/aborts/IPAValid

// IPAValid() // ========== // Return TRUE if the IPA is reported for the abort boolean IPAValid(FaultRecord fault) assert fault.statuscode != Fault_None; if fault.s2fs1walk then return fault.statuscode IN {Fault_AccessFlag, Fault_Permission, Fault_Translation, Fault_AddressSize}; elsif fault.secondstage then return fault.statuscode IN {Fault_AccessFlag, Fault_Translation, Fault_AddressSize}; else return FALSE;

Library pseudocode for shared/functions/aborts/IsAsyncAbort

// IsAsyncAbort() // ============== // Returns TRUE if the abort currently being processed is an asynchronous abort, and FALSE // otherwise. boolean IsAsyncAbort(Fault statuscode) assert statuscode != Fault_None; return (statuscode IN {Fault_AsyncExternal, Fault_AsyncParity}); // IsAsyncAbort() // ============== boolean IsAsyncAbort(FaultRecord fault) return IsAsyncAbort(fault.statuscode);

Library pseudocode for shared/functions/aborts/IsDebugException

// IsDebugException() // ================== boolean IsDebugException(FaultRecord fault) assert fault.statuscode != Fault_None; return fault.statuscode == Fault_Debug;

Library pseudocode for shared/functions/aborts/IsExternalAbort

// IsExternalAbort() // ================= // Returns TRUE if the abort currently being processed is an external abort and FALSE otherwise. boolean IsExternalAbort(Fault statuscode) assert statuscode != Fault_None; return (statuscode IN {Fault_SyncExternal, Fault_SyncParity, Fault_SyncExternalOnWalk, Fault_SyncParityOnWalk, Fault_AsyncExternal, Fault_AsyncParity}); // IsExternalAbort() // ================= boolean IsExternalAbort(FaultRecord fault) return IsExternalAbort(fault.statuscode);

Library pseudocode for shared/functions/aborts/IsExternalSyncAbort

// IsExternalSyncAbort() // ===================== // Returns TRUE if the abort currently being processed is an external synchronous abort and FALSE otherwise. boolean IsExternalSyncAbort(Fault statuscode) assert statuscode != Fault_None; return (statuscode IN {Fault_SyncExternal, Fault_SyncParity, Fault_SyncExternalOnWalk, Fault_SyncParityOnWalk}); // IsExternalSyncAbort() // ===================== boolean IsExternalSyncAbort(FaultRecord fault) return IsExternalSyncAbort(fault.statuscode);

Library pseudocode for shared/functions/aborts/IsFault

// IsFault() // ========= // Return TRUE if a fault is associated with an address descriptor boolean IsFault(AddressDescriptor addrdesc) return addrdesc.fault.statuscode != Fault_None;

Library pseudocode for shared/functions/aborts/IsSErrorInterrupt

// IsSErrorInterrupt() // =================== // Returns TRUE if the abort currently being processed is an SError interrupt, and FALSE // otherwise. boolean IsSErrorInterrupt(Fault statuscode) assert statuscode != Fault_None; return (statuscode IN {Fault_AsyncExternal, Fault_AsyncParity}); // IsSErrorInterrupt() // =================== boolean IsSErrorInterrupt(FaultRecord fault) return IsSErrorInterrupt(fault.statuscode);

Library pseudocode for shared/functions/aborts/IsSecondStage

// IsSecondStage() // =============== boolean IsSecondStage(FaultRecord fault) assert fault.statuscode != Fault_None; return fault.secondstage;

Library pseudocode for shared/functions/aborts/LSInstructionSyndrome

bits(11) LSInstructionSyndrome();

Library pseudocode for shared/functions/common/ASR

// ASR() // ===== bits(N) ASR(bits(N) x, integer shift) assert shift >= 0; if shift == 0 then result = x; else (result, -) = ASR_C(x, shift); return result;

Library pseudocode for shared/functions/common/ASR_C

// ASR_C() // ======= (bits(N), bit) ASR_C(bits(N) x, integer shift) assert shift > 0; shift = if shift > N then N else shift; extended_x = SignExtend(x, shift+N); result = extended_x<shift+N-1:shift>; carry_out = extended_x<shift-1>; return (result, carry_out);
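As an informal illustration of ASR_C, a C sketch of the 32-bit case follows (asr_c32 is our name, not part of the library); sign-extending to 64 bits reproduces SignExtend(x, shift+N) for any shift up to N:

    #include <assert.h>
    #include <stdbool.h>
    #include <stdint.h>

    /* Arithmetic shift right of a 32-bit value, also returning the
       last bit shifted out (the carry-out), as in ASR_C. */
    static uint32_t asr_c32(uint32_t x, int shift, bool *carry_out)
    {
        assert(shift > 0);
        if (shift > 32) shift = 32;             /* shift saturates at N        */
        int64_t ext = (int64_t)(int32_t)x;      /* SignExtend(x, shift+N)      */
        *carry_out = (ext >> (shift - 1)) & 1;  /* extended_x<shift-1>         */
        return (uint32_t)(ext >> shift);        /* extended_x<shift+N-1:shift> */
    }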

Library pseudocode for shared/functions/common/Abs

// Abs() // ===== integer Abs(integer x) return if x >= 0 then x else -x; // Abs() // ===== real Abs(real x) return if x >= 0.0 then x else -x;

Library pseudocode for shared/functions/common/Align

// Align() // ======= integer Align(integer x, integer y) return y * (x DIV y); // Align() // ======= bits(N) Align(bits(N) x, integer y) return Align(UInt(x), y)<N-1:0>;
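Align() rounds x down to a multiple of y; for example, Align(0x1003, 0x1000) = 0x1000. An informal C one-liner for non-negative inputs, where the pseudocode's flooring DIV coincides with C integer division (align_down is our name):

    #include <stdint.h>

    /* Round x down to the nearest multiple of y (y > 0). */
    static uint64_t align_down(uint64_t x, uint64_t y) { return y * (x / y); }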

Library pseudocode for shared/functions/common/BitCount

// BitCount() // ========== integer BitCount(bits(N) x) integer result = 0; for i = 0 to N-1 if x<i> == '1' then result = result + 1; return result;
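An informal C equivalent of the population count above, mirroring the pseudocode loop rather than using compiler builtins:

    #include <stdint.h>

    static int bitcount32(uint32_t x)
    {
        int result = 0;
        for (int i = 0; i < 32; i++)
            if ((x >> i) & 1)
                result++;
        return result;
    }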

Library pseudocode for shared/functions/common/CountLeadingSignBits

// CountLeadingSignBits() // ====================== integer CountLeadingSignBits(bits(N) x) return CountLeadingZeroBits(x<N-1:1> EOR x<N-2:0>);
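The EOR of x<N-1:1> with x<N-2:0> flags every position where two adjacent bits of x differ, so the leading zeros of that (N-1)-bit value count the bits below the sign bit that still equal it; for x = 0x01000000 the result is 6. A 32-bit C sketch (names are ours):

    #include <stdint.h>

    static int count_leading_sign_bits32(uint32_t x)
    {
        uint32_t diff = (x >> 1) ^ (x & 0x7FFFFFFFu); /* adjacent-bit differences, 31 bits */
        int n = 0;
        for (int i = 30; i >= 0 && !((diff >> i) & 1); i--)
            n++;                                      /* CountLeadingZeroBits(diff) */
        return n;
    }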

Library pseudocode for shared/functions/common/CountLeadingZeroBits

// CountLeadingZeroBits() // ====================== integer CountLeadingZeroBits(bits(N) x) return N - (HighestSetBit(x) + 1);

Library pseudocode for shared/functions/common/Elem

// Elem[] - non-assignment form // ============================ bits(size) Elem[bits(N) vector, integer e, integer size] assert e >= 0 && (e+1)*size <= N; return vector<e*size+size-1 : e*size>; // Elem[] - non-assignment form // ============================ bits(size) Elem[bits(N) vector, integer e] return Elem[vector, e, size]; // Elem[] - assignment form // ======================== Elem[bits(N) &vector, integer e, integer size] = bits(size) value assert e >= 0 && (e+1)*size <= N; vector<(e+1)*size-1:e*size> = value; return; // Elem[] - assignment form // ======================== Elem[bits(N) &vector, integer e] = bits(size) value Elem[vector, e, size] = value; return;
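An informal C rendering of the non-assignment form of Elem[] for a vector held in a uint64_t (elem_get is our name; size is the element width in bits):

    #include <assert.h>
    #include <stdint.h>

    static uint64_t elem_get(uint64_t vector, int e, int size)
    {
        assert(e >= 0 && size > 0 && size <= 64 && (e + 1) * size <= 64);
        uint64_t mask = (size == 64) ? ~(uint64_t)0 : (((uint64_t)1 << size) - 1);
        return (vector >> (e * size)) & mask;   /* vector<e*size+size-1 : e*size> */
    }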

Library pseudocode for shared/functions/common/Extend

// Extend() // ======== bits(N) Extend(bits(M) x, integer N, boolean unsigned) return if unsigned then ZeroExtend(x, N) else SignExtend(x, N); // Extend() // ======== bits(N) Extend(bits(M) x, boolean unsigned) return Extend(x, N, unsigned);

Library pseudocode for shared/functions/common/HighestSetBit

// HighestSetBit() // =============== integer HighestSetBit(bits(N) x) for i = N-1 downto 0 if x<i> == '1' then return i; return -1;

Library pseudocode for shared/functions/common/Int

// Int() // ===== integer Int(bits(N) x, boolean unsigned) result = if unsigned then UInt(x) else SInt(x); return result;

Library pseudocode for shared/functions/common/IsOnes

// IsOnes() // ======== boolean IsOnes(bits(N) x) return x == Ones(N);

Library pseudocode for shared/functions/common/IsZero

// IsZero() // ======== boolean IsZero(bits(N) x) return x == Zeros(N);

Library pseudocode for shared/functions/common/IsZeroBit

// IsZeroBit() // =========== bit IsZeroBit(bits(N) x) return if IsZero(x) then '1' else '0';

Library pseudocode for shared/functions/common/LSL

// LSL() // ===== bits(N) LSL(bits(N) x, integer shift) assert shift >= 0; if shift == 0 then result = x; else (result, -) = LSL_C(x, shift); return result;

Library pseudocode for shared/functions/common/LSL_C

// LSL_C() // ======= (bits(N), bit) LSL_C(bits(N) x, integer shift) assert shift > 0; shift = if shift > N then N else shift; extended_x = x : Zeros(shift); result = extended_x<N-1:0>; carry_out = extended_x<N>; return (result, carry_out);
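A C sketch of the 32-bit case of LSL_C (informal; widening to 64 bits stands in for the concatenation x : Zeros(shift), and lsl_c32 is our name):

    #include <assert.h>
    #include <stdbool.h>
    #include <stdint.h>

    static uint32_t lsl_c32(uint32_t x, int shift, bool *carry_out)
    {
        assert(shift > 0);
        if (shift > 32) shift = 32;          /* shift saturates at N */
        uint64_t ext = (uint64_t)x << shift; /* x : Zeros(shift)     */
        *carry_out = (ext >> 32) & 1;        /* extended_x<N>        */
        return (uint32_t)ext;                /* extended_x<N-1:0>    */
    }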

Library pseudocode for shared/functions/common/LSR

// LSR() // ===== bits(N) LSR(bits(N) x, integer shift) assert shift >= 0; if shift == 0 then result = x; else (result, -) = LSR_C(x, shift); return result;

Library pseudocode for shared/functions/common/LSR_C

// LSR_C() // ======= (bits(N), bit) LSR_C(bits(N) x, integer shift) assert shift > 0; shift = if shift > N then N else shift; extended_x = ZeroExtend(x, shift+N); result = extended_x<shift+N-1:shift>; carry_out = extended_x<shift-1>; return (result, carry_out);

Library pseudocode for shared/functions/common/LowestSetBit

// LowestSetBit() // ============== integer LowestSetBit(bits(N) x) for i = 0 to N-1 if x<i> == '1' then return i; return N;

Library pseudocode for shared/functions/common/Max

// Max() // ===== integer Max(integer a, integer b) return if a >= b then a else b; // Max() // ===== real Max(real a, real b) return if a >= b then a else b;

Library pseudocode for shared/functions/common/Min

// Min() // ===== integer Min(integer a, integer b) return if a <= b then a else b; // Min() // ===== real Min(real a, real b) return if a <= b then a else b;

Library pseudocode for shared/functions/common/Ones

// Ones() // ====== bits(N) Ones(integer N) return Replicate('1',N); // Ones() // ====== bits(N) Ones() return Ones(N);

Library pseudocode for shared/functions/common/ROR

// ROR() // ===== bits(N) ROR(bits(N) x, integer shift) assert shift >= 0; if shift == 0 then result = x; else (result, -) = ROR_C(x, shift); return result;

Library pseudocode for shared/functions/common/ROR_C

// ROR_C() // ======= (bits(N), bit) ROR_C(bits(N) x, integer shift) assert shift != 0; m = shift MOD N; result = LSR(x,m) OR LSL(x,N-m); carry_out = result<N-1>; return (result, carry_out);
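A 32-bit C sketch of ROR_C (informal; ror_c32 is our name): the rotate amount is reduced MOD N, and the carry-out is a copy of the new top bit:

    #include <assert.h>
    #include <stdbool.h>
    #include <stdint.h>

    static uint32_t ror_c32(uint32_t x, int shift, bool *carry_out)
    {
        assert(shift != 0);
        int m = ((shift % 32) + 32) % 32;    /* shift MOD N, kept non-negative */
        uint32_t result = (m == 0) ? x : ((x >> m) | (x << (32 - m)));
        *carry_out = (result >> 31) & 1;     /* result<N-1> */
        return result;
    }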

Library pseudocode for shared/functions/common/Replicate

// Replicate() // =========== bits(N) Replicate(bits(M) x) assert N MOD M == 0; return Replicate(x, N DIV M); bits(M*N) Replicate(bits(M) x, integer N);

Library pseudocode for shared/functions/common/RoundDown

integer RoundDown(real x);

Library pseudocode for shared/functions/common/RoundTowardsZero

// RoundTowardsZero() // ================== integer RoundTowardsZero(real x) return if x == 0.0 then 0 else if x >= 0.0 then RoundDown(x) else RoundUp(x);

Library pseudocode for shared/functions/common/RoundUp

integer RoundUp(real x);

Library pseudocode for shared/functions/common/SInt

// SInt() // ====== integer SInt(bits(N) x) result = 0; for i = 0 to N-1 if x<i> == '1' then result = result + 2^i; if x<N-1> == '1' then result = result - 2^N; return result;
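SInt() is the two's-complement reading of a bit string: the unsigned sum of the set bits' weights, corrected by -2^N when the top bit is set. An informal 32-bit C sketch (sint32 is our name):

    #include <stdint.h>

    static int64_t sint32(uint32_t x)
    {
        int64_t result = x;              /* sum of 2^i for the set bits  */
        if (x >> 31)
            result -= (int64_t)1 << 32;  /* subtract 2^N if x<N-1> is set */
        return result;
    }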

Library pseudocode for shared/functions/common/SignExtend

// SignExtend() // ============ bits(N) SignExtend(bits(M) x, integer N) assert N >= M; return Replicate(x<M-1>, N-M) : x; // SignExtend() // ============ bits(N) SignExtend(bits(M) x) return SignExtend(x, N);

Library pseudocode for shared/functions/common/UInt

// UInt() // ====== integer UInt(bits(N) x) result = 0; for i = 0 to N-1 if x<i> == '1' then result = result + 2^i; return result;

Library pseudocode for shared/functions/common/ZeroExtend

// ZeroExtend() // ============ bits(N) ZeroExtend(bits(M) x, integer N) assert N >= M; return Zeros(N-M) : x; // ZeroExtend() // ============ bits(N) ZeroExtend(bits(M) x) return ZeroExtend(x, N);

Library pseudocode for shared/functions/common/Zeros

// Zeros() // ======= bits(N) Zeros(integer N) return Replicate('0',N); // Zeros() // ======= bits(N) Zeros() return Zeros(N);

Library pseudocode for shared/functions/crc/BitReverse

// BitReverse() // ============ bits(N) BitReverse(bits(N) data) bits(N) result; for i = 0 to N-1 result<N-i-1> = data<i>; return result;
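An informal C equivalent for N = 32, mirroring the pseudocode's bit-by-bit copy:

    #include <stdint.h>

    static uint32_t bitreverse32(uint32_t x)
    {
        uint32_t result = 0;
        for (int i = 0; i < 32; i++)
            if ((x >> i) & 1)
                result |= (uint32_t)1 << (31 - i);
        return result;
    }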

Library pseudocode for shared/functions/crc/HaveCRCExt

// HaveCRCExt() // ============ boolean HaveCRCExt() return HasArchVersion(ARMv8p1) || boolean IMPLEMENTATION_DEFINED "Have CRC extension";

Library pseudocode for shared/functions/crc/Poly32Mod2

// Poly32Mod2() // ============ // Poly32Mod2 on a bitstring does a polynomial Modulus over {0,1} operation bits(32) Poly32Mod2(bits(N) data, bits(32) poly) assert N > 32; for i = N-1 downto 32 if data<i> == '1' then data<i-1:0> = data<i-1:0> EOR (poly:Zeros(i-32)); return data<31:0>;
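This is the carry-less long division used by the CRC32* instructions: the data is reduced modulo the polynomial over GF(2), one dividend bit at a time from the top. An informal C sketch for N = 64 (poly32_mod2_64 is our name); as in the pseudocode, bit i itself is left unchanged by each step, is never re-examined, and only data<31:0> is returned:

    #include <stdint.h>

    static uint32_t poly32_mod2_64(uint64_t data, uint32_t poly)
    {
        for (int i = 63; i >= 32; i--)
            if ((data >> i) & 1)
                data ^= (uint64_t)poly << (i - 32); /* data<i-1:0> EOR (poly : Zeros(i-32)) */
        return (uint32_t)data;
    }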

Library pseudocode for shared/functions/crypto/AESInvMixColumns

bits(128) AESInvMixColumns(bits (128) op);

Library pseudocode for shared/functions/crypto/AESInvShiftRows

bits(128) AESInvShiftRows(bits(128) op);

Library pseudocode for shared/functions/crypto/AESInvSubBytes

bits(128) AESInvSubBytes(bits(128) op);

Library pseudocode for shared/functions/crypto/AESMixColumns

bits(128) AESMixColumns(bits (128) op);

Library pseudocode for shared/functions/crypto/AESShiftRows

bits(128) AESShiftRows(bits(128) op);

Library pseudocode for shared/functions/crypto/AESSubBytes

bits(128) AESSubBytes(bits(128) op);

Library pseudocode for shared/functions/crypto/HaveAESExt

// HaveAESExt()
// ============
// TRUE if AES cryptographic instructions support is implemented,
// FALSE otherwise.
boolean HaveAESExt()
    return boolean IMPLEMENTATION_DEFINED "Has AES Crypto instructions";

Library pseudocode for shared/functions/crypto/HaveBit128PMULLExt

// HaveBit128PMULLExt()
// ====================
// TRUE if 128 bit form of PMULL instructions support is implemented,
// FALSE otherwise.
boolean HaveBit128PMULLExt()
    return boolean IMPLEMENTATION_DEFINED "Has 128-bit form of PMULL instructions";

Library pseudocode for shared/functions/crypto/HaveSHA1Ext

// HaveSHA1Ext()
// =============
// TRUE if SHA1 cryptographic instructions support is implemented,
// FALSE otherwise.
boolean HaveSHA1Ext()
    return boolean IMPLEMENTATION_DEFINED "Has SHA1 Crypto instructions";

Library pseudocode for shared/functions/crypto/HaveSHA256Ext

// HaveSHA256Ext()
// ===============
// TRUE if SHA256 cryptographic instructions support is implemented,
// FALSE otherwise.
boolean HaveSHA256Ext()
    return boolean IMPLEMENTATION_DEFINED "Has SHA256 Crypto instructions";

Library pseudocode for shared/functions/crypto/HaveSHA3Ext

// HaveSHA3Ext()
// =============
// TRUE if SHA3 cryptographic instructions support is implemented,
// and when SHA1 and SHA2 basic cryptographic instructions support is implemented,
// FALSE otherwise.
boolean HaveSHA3Ext()
    if !HasArchVersion(ARMv8p2) || !(HaveSHA1Ext() && HaveSHA256Ext()) then return FALSE;
    return boolean IMPLEMENTATION_DEFINED "Has SHA3 Crypto instructions";

Library pseudocode for shared/functions/crypto/HaveSHA512Ext

// HaveSHA512Ext()
// ===============
// TRUE if SHA512 cryptographic instructions support is implemented,
// and when SHA1 and SHA2 basic cryptographic instructions support is implemented,
// FALSE otherwise.
boolean HaveSHA512Ext()
    if !HasArchVersion(ARMv8p2) || !(HaveSHA1Ext() && HaveSHA256Ext()) then return FALSE;
    return boolean IMPLEMENTATION_DEFINED "Has SHA512 Crypto instructions";

Library pseudocode for shared/functions/crypto/HaveSM3Ext

// HaveSM3Ext()
// ============
// TRUE if SM3 cryptographic instructions support is implemented,
// FALSE otherwise.
boolean HaveSM3Ext()
    if !HasArchVersion(ARMv8p2) then return FALSE;
    return boolean IMPLEMENTATION_DEFINED "Has SM3 Crypto instructions";

Library pseudocode for shared/functions/crypto/HaveSM4Ext

// HaveSM4Ext()
// ============
// TRUE if SM4 cryptographic instructions support is implemented,
// FALSE otherwise.
boolean HaveSM4Ext()
    if !HasArchVersion(ARMv8p2) then return FALSE;
    return boolean IMPLEMENTATION_DEFINED "Has SM4 Crypto instructions";

Library pseudocode for shared/functions/crypto/ROL

// ROL()
// =====
bits(N) ROL(bits(N) x, integer shift)
    assert shift >= 0 && shift <= N;
    if (shift == 0) then return x;
    return ROR(x, N-shift);
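
ROL() is defined by reduction to ROR() with the complementary shift amount. A Python sketch (illustrative only) of both rotations on an N-bit value:

    # Illustrative only: N-bit rotate right / rotate left.
    def ror(x: int, n: int, shift: int) -> int:
        m = shift % n
        return ((x >> m) | (x << (n - m))) & ((1 << n) - 1)

    def rol(x: int, n: int, shift: int) -> int:
        assert 0 <= shift <= n
        return x if shift == 0 else ror(x, n, n - shift)  # as in ROL()

    assert rol(0x80000001, 32, 1) == 0x00000003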

Library pseudocode for shared/functions/crypto/SHA256hash

// SHA256hash()
// ============
bits(128) SHA256hash(bits (128) X, bits(128) Y, bits(128) W, boolean part1)
    bits(32) chs, maj, t;

    for e = 0 to 3
        chs = SHAchoose(Y<31:0>, Y<63:32>, Y<95:64>);
        maj = SHAmajority(X<31:0>, X<63:32>, X<95:64>);
        t = Y<127:96> + SHAhashSIGMA1(Y<31:0>) + chs + Elem[W, e, 32];
        X<127:96> = t + X<127:96>;
        Y<127:96> = t + SHAhashSIGMA0(X<31:0>) + maj;
        <Y, X> = ROL(Y : X, 32);
    return (if part1 then X else Y);

Library pseudocode for shared/functions/crypto/SHAchoose

// SHAchoose()
// ===========
bits(32) SHAchoose(bits(32) x, bits(32) y, bits(32) z)
    return (((y EOR z) AND x) EOR z);

Library pseudocode for shared/functions/crypto/SHAhashSIGMA0

// SHAhashSIGMA0()
// ===============
bits(32) SHAhashSIGMA0(bits(32) x)
    return ROR(x, 2) EOR ROR(x, 13) EOR ROR(x, 22);

Library pseudocode for shared/functions/crypto/SHAhashSIGMA1

// SHAhashSIGMA1()
// ===============
bits(32) SHAhashSIGMA1(bits(32) x)
    return ROR(x, 6) EOR ROR(x, 11) EOR ROR(x, 25);

Library pseudocode for shared/functions/crypto/SHAmajority

// SHAmajority()
// =============
bits(32) SHAmajority(bits(32) x, bits(32) y, bits(32) z)
    return ((x AND y) OR ((x OR y) AND z));

Library pseudocode for shared/functions/crypto/SHAparity

// SHAparity()
// ===========
bits(32) SHAparity(bits(32) x, bits(32) y, bits(32) z)
    return (x EOR y EOR z);
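
SHAchoose(), SHAmajority() and SHAparity() are the classic SHA-1/SHA-2 bit-select functions: per bit, "if x then y else z", a two-of-three majority vote, and XOR parity respectively. A Python check (illustrative only) that the masked form used by SHAchoose() equals the textbook definition:

    # Illustrative only: SHA bit-select helpers on 32-bit integers.
    def sha_choose(x, y, z):   return ((y ^ z) & x) ^ z        # x ? y : z
    def sha_majority(x, y, z): return (x & y) | ((x | y) & z)  # majority
    def sha_parity(x, y, z):   return x ^ y ^ z                # parity

    x, y, z = 0xDEADBEEF, 0x01234567, 0x89ABCDEF
    assert sha_choose(x, y, z) == ((x & y) | (~x & z)) & 0xFFFFFFFF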

Library pseudocode for shared/functions/crypto/Sbox

// Sbox()
// ======
// Used in SM4E crypto instruction
bits(8) Sbox(bits(8) sboxin)
    bits(8) sboxout;
    bits(2048) sboxstring = 0xd690e9fecce13db716b614c228fb2c052b679a762abe04c3aa441326498606999c4250f491ef987a33540b43edcfac62e4b31ca9c908e89580df94fa758f3fa64707a7fcf37317ba83593c19e6854fa8686b81b27164da8bf8eb0f4b70569d351e240e5e6358d1a225227c3b01217887d40046579fd327524c3602e7a0c4c89eeabf8ad240c738b5a3f7f2cef96115a1e0ae5da49b341a55ad933230f58cb1e31df6e22e8266ca60c02923ab0d534e6fd5db3745defd8e2f03ff6a726d6c5b518d1baf92bbddbc7f11d95c411f105ad80ac13188a5cd7bbd2d74d012b8e5b4b08969974a0c96777e65b9f109c56ec68418f07dec3adc4d2079ee5f3ed7cb3948<2047:0>;
    sboxout = sboxstring<(255-UInt(sboxin))*8+7:(255-UInt(sboxin))*8>;
    return sboxout;
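
The 2048-bit constant packs the 256-entry SM4 S-box most-significant byte first, which is why entry i is read from bit offset (255-i)*8. Indexing a byte table directly is equivalent, as this Python sketch (illustrative only) shows for the first table row:

    # Illustrative only: the same lookup via a byte table. Only the
    # first 16 of the 256 bytes are spelled out here; the rest of the
    # constant above would follow in the same way.
    SBOX = bytes.fromhex("d690e9fecce13db716b614c228fb2c05")

    def sbox(byte_in: int) -> int:
        return SBOX[byte_in]   # == sboxstring<(255-i)*8+7:(255-i)*8>

    assert sbox(0) == 0xD6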

Library pseudocode for shared/functions/exclusive/ClearExclusiveByAddress

// Clear the global Exclusives monitors for all PEs EXCEPT processorid if they
// record any part of the physical address region of size bytes starting at paddress.
// It is IMPLEMENTATION DEFINED whether the global Exclusives monitor for processorid
// is also cleared if it records any part of the address region.
ClearExclusiveByAddress(FullAddress paddress, integer processorid, integer size);

Library pseudocode for shared/functions/exclusive/ClearExclusiveLocal

// Clear the local Exclusives monitor for the specified processorid.
ClearExclusiveLocal(integer processorid);

Library pseudocode for shared/functions/exclusive/ClearExclusiveMonitors

// ClearExclusiveMonitors()
// ========================
// Clear the local Exclusives monitor for the executing PE.
ClearExclusiveMonitors()
    ClearExclusiveLocal(ProcessorID());

Library pseudocode for shared/functions/exclusive/ExclusiveMonitorsStatus

// Returns '0' to indicate success if the last memory write by this PE was to
// the same physical address region endorsed by ExclusiveMonitorsPass().
// Returns '1' to indicate failure if address translation resulted in a different
// physical address.
bit ExclusiveMonitorsStatus();

Library pseudocode for shared/functions/exclusive/IsExclusiveGlobal

// Return TRUE if the global Exclusives monitor for processorid includes all of
// the physical address region of size bytes starting at paddress.
boolean IsExclusiveGlobal(FullAddress paddress, integer processorid, integer size);

Library pseudocode for shared/functions/exclusive/IsExclusiveLocal

// Return TRUE if the local Exclusives monitor for processorid includes all of
// the physical address region of size bytes starting at paddress.
boolean IsExclusiveLocal(FullAddress paddress, integer processorid, integer size);

Library pseudocode for shared/functions/exclusive/MarkExclusiveGlobal

// Record the physical address region of size bytes starting at paddress in
// the global Exclusives monitor for processorid.
MarkExclusiveGlobal(FullAddress paddress, integer processorid, integer size);

Library pseudocode for shared/functions/exclusive/MarkExclusiveLocal

// Record the physical address region of size bytes starting at paddress in
// the local Exclusives monitor for processorid.
MarkExclusiveLocal(FullAddress paddress, integer processorid, integer size);

Library pseudocode for shared/functions/exclusive/ProcessorID

// Return the ID of the currently executing PE.
integer ProcessorID();

Library pseudocode for shared/functions/extension/AArch32.HaveHPDExt

// AArch32.HaveHPDExt()
// ====================
boolean AArch32.HaveHPDExt()
    return HasArchVersion(ARMv8p2);

Library pseudocode for shared/functions/extension/AArch64.HaveHPDExt

// AArch64.HaveHPDExt()
// ====================
boolean AArch64.HaveHPDExt()
    return HasArchVersion(ARMv8p1);

Library pseudocode for shared/functions/extension/Have52BitPAExt

// Have52BitPAExt()
// ================
boolean Have52BitPAExt()
    return HasArchVersion(ARMv8p2);

Library pseudocode for shared/functions/extension/Have52BitVAExt

// Have52BitVAExt()
// ================
boolean Have52BitVAExt()
    return HasArchVersion(ARMv8p2);

Library pseudocode for shared/functions/extension/HaveAtomicExt

// HaveAtomicExt()
// ===============
boolean HaveAtomicExt()
    return HasArchVersion(ARMv8p1);

Library pseudocode for shared/functions/extension/HaveBTIExt

// HaveBTIExt()
// ============
// Returns TRUE if support for Branch Target Identification is implemented.
boolean HaveBTIExt()
    return HasArchVersion(ARMv8p5);

Library pseudocode for shared/functions/extension/HaveBlockBBM

// HaveBlockBBM()
// ==============
// Returns TRUE if support for changing block size without requiring break-before-make is implemented.
boolean HaveBlockBBM()
    return HasArchVersion(ARMv8p4);

Library pseudocode for shared/functions/extension/HaveCommonNotPrivateTransExt

// HaveCommonNotPrivateTransExt()
// ==============================
boolean HaveCommonNotPrivateTransExt()
    return HasArchVersion(ARMv8p2);

Library pseudocode for shared/functions/extension/HaveDITExt

// HaveDITExt()
// ============
boolean HaveDITExt()
    return HasArchVersion(ARMv8p4);

Library pseudocode for shared/functions/extension/HaveDOTPExt

// HaveDOTPExt()
// =============
// Returns TRUE if Dot Product feature support is implemented, and FALSE otherwise.
boolean HaveDOTPExt()
    return HasArchVersion(ARMv8p4) || (HasArchVersion(ARMv8p2) && boolean IMPLEMENTATION_DEFINED "Has Dot Product extension");
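
HaveDOTPExt() shows the common shape of these feature predicates: mandatory from one architecture version, optional and IMPLEMENTATION DEFINED from an earlier one. A Python sketch (illustrative only; the version constant and the IMPLEMENTATION DEFINED choice are hypothetical stand-ins for one particular implementation):

    # Illustrative only: feature gating as in the Have*Ext() predicates.
    IMPLEMENTED_VERSION = (8, 3)                     # hypothetical PE
    IMP_DEF = {"Has Dot Product extension": True}    # hypothetical choice

    def has_arch_version(major: int, minor: int) -> bool:
        return IMPLEMENTED_VERSION >= (major, minor)

    def have_dotp_ext() -> bool:
        return has_arch_version(8, 4) or (
            has_arch_version(8, 2) and IMP_DEF["Has Dot Product extension"])

    assert have_dotp_ext()   # optional here, but selected by this implementation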

Library pseudocode for shared/functions/extension/HaveDoubleFaultExt

// HaveDoubleFaultExt()
// ====================
boolean HaveDoubleFaultExt()
    return (HasArchVersion(ARMv8p4) && HaveEL(EL3) && !ELUsingAArch32(EL3) && HaveIESB());

Library pseudocode for shared/functions/extension/HaveDoubleLock

// HaveDoubleLock()
// ================
// Returns TRUE if support for the OS Double Lock is implemented.
boolean HaveDoubleLock()
    return !HasArchVersion(ARMv8p4) || boolean IMPLEMENTATION_DEFINED "OS Double Lock is implemented";

Library pseudocode for shared/functions/extension/HaveE0PDExt

// HaveE0PDExt()
// =============
// Returns TRUE if support for constant fault times for unprivileged accesses
// to the memory map is implemented.
boolean HaveE0PDExt()
    return HasArchVersion(ARMv8p5);

Library pseudocode for shared/functions/extension/HaveExtendedCacheSets

// HaveExtendedCacheSets()
// =======================
boolean HaveExtendedCacheSets()
    return HasArchVersion(ARMv8p3);

Library pseudocode for shared/functions/extension/HaveExtendedECDebugEvents

// HaveExtendedECDebugEvents()
// ===========================
boolean HaveExtendedECDebugEvents()
    return HasArchVersion(ARMv8p2);

Library pseudocode for shared/functions/extension/HaveExtendedExecuteNeverExt

// HaveExtendedExecuteNeverExt()
// =============================
boolean HaveExtendedExecuteNeverExt()
    return HasArchVersion(ARMv8p2);

Library pseudocode for shared/functions/extension/HaveFCADDExt

// HaveFCADDExt()
// ==============
boolean HaveFCADDExt()
    return HasArchVersion(ARMv8p3);

Library pseudocode for shared/functions/extension/HaveFJCVTZSExt

// HaveFJCVTZSExt()
// ================
boolean HaveFJCVTZSExt()
    return HasArchVersion(ARMv8p3);

Library pseudocode for shared/functions/extension/HaveFP16MulNoRoundingToFP32Ext

// HaveFP16MulNoRoundingToFP32Ext()
// ================================
// Returns TRUE if has FP16 multiply with no intermediate rounding accumulate to FP32 instructions,
// and FALSE otherwise
boolean HaveFP16MulNoRoundingToFP32Ext()
    if !HaveFP16Ext() then return FALSE;
    if HasArchVersion(ARMv8p4) then return TRUE;
    return (HasArchVersion(ARMv8p2) && boolean IMPLEMENTATION_DEFINED "Has accumulate FP16 product into FP32 extension");

Library pseudocode for shared/functions/extension/HaveFlagFormatExt

// HaveFlagFormatExt()
// ===================
// Returns TRUE if flag format conversion instructions are implemented.
boolean HaveFlagFormatExt()
    return HasArchVersion(ARMv8p5);

Library pseudocode for shared/functions/extension/HaveFlagManipulateExt

// HaveFlagManipulateExt()
// =======================
// Returns TRUE if flag manipulate instructions are implemented.
boolean HaveFlagManipulateExt()
    return HasArchVersion(ARMv8p4);

Library pseudocode for shared/functions/extension/HaveFrintExt

// HaveFrintExt()
// ==============
// Returns TRUE if FRINT instructions are implemented.
boolean HaveFrintExt()
    return HasArchVersion(ARMv8p5);

Library pseudocode for shared/functions/extension/HaveHPMDExt

// HaveHPMDExt()
// =============
boolean HaveHPMDExt()
    return HasArchVersion(ARMv8p1);

Library pseudocode for shared/functions/extension/HaveIDSExt

// HaveIDSExt()
// ============
// Returns TRUE if ID register handling feature is implemented.
boolean HaveIDSExt()
    return HasArchVersion(ARMv8p4);

Library pseudocode for shared/functions/extension/HaveIESB

// HaveIESB()
// ==========
boolean HaveIESB()
    return (HaveRASExt() && boolean IMPLEMENTATION_DEFINED "Has Implicit Error Synchronization Barrier");

Library pseudocode for shared/functions/extension/HaveMPAMExt

// HaveMPAMExt()
// =============
// Returns TRUE if MPAM is implemented, and FALSE otherwise.
boolean HaveMPAMExt()
    return (HasArchVersion(ARMv8p2) && boolean IMPLEMENTATION_DEFINED "Has MPAM extension");

Library pseudocode for shared/functions/extension/HaveMTEExt

// HaveMTEExt()
// ============
// Returns TRUE if MTE implemented, and FALSE otherwise.
boolean HaveMTEExt()
    if !HasArchVersion(ARMv8p5) then return FALSE;
    return boolean IMPLEMENTATION_DEFINED "Has MTE extension";

Library pseudocode for shared/functions/extension/HaveNV2Ext

// HaveNV2Ext()
// ============
// Returns TRUE if Enhanced Nested Virtualization is implemented.
boolean HaveNV2Ext()
    return (HasArchVersion(ARMv8p4) && HaveNVExt() && boolean IMPLEMENTATION_DEFINED "Has support for Enhanced Nested Virtualization");

Library pseudocode for shared/functions/extension/HaveNVExt

// HaveNVExt()
// ===========
// Returns TRUE if Nested Virtualization is implemented.
boolean HaveNVExt()
    return HasArchVersion(ARMv8p3) && boolean IMPLEMENTATION_DEFINED "Has Nested Virtualization";

Library pseudocode for shared/functions/extension/HaveNoSecurePMUDisableOverride

// HaveNoSecurePMUDisableOverride()
// ================================
boolean HaveNoSecurePMUDisableOverride()
    return HasArchVersion(ARMv8p2);

Library pseudocode for shared/functions/extension/HaveNoninvasiveDebugAuth

// HaveNoninvasiveDebugAuth()
// ==========================
// Returns TRUE if the Non-invasive debug controls are implemented.
boolean HaveNoninvasiveDebugAuth()
    return !HasArchVersion(ARMv8p4);

Library pseudocode for shared/functions/extension/HavePANExt

// HavePANExt()
// ============
boolean HavePANExt()
    return HasArchVersion(ARMv8p1);

Library pseudocode for shared/functions/extension/HavePageBasedHardwareAttributes

// HavePageBasedHardwareAttributes()
// =================================
boolean HavePageBasedHardwareAttributes()
    return HasArchVersion(ARMv8p2);

Library pseudocode for shared/functions/extension/HavePrivATExt

// HavePrivATExt()
// ===============
boolean HavePrivATExt()
    return HasArchVersion(ARMv8p2);

Library pseudocode for shared/functions/extension/HaveQRDMLAHExt

// HaveQRDMLAHExt()
// ================
boolean HaveQRDMLAHExt()
    return HasArchVersion(ARMv8p1);

boolean HaveAccessFlagUpdateExt()
    return HasArchVersion(ARMv8p1);

boolean HaveDirtyBitModifierExt()
    return HasArchVersion(ARMv8p1);

Library pseudocode for shared/functions/extension/HaveRASExt

// HaveRASExt()
// ============
boolean HaveRASExt()
    return (HasArchVersion(ARMv8p2) || boolean IMPLEMENTATION_DEFINED "Has RAS extension");

Library pseudocode for shared/functions/extension/HaveSBExt

// HaveSBExt()
// ===========
// Returns TRUE if support for SB is implemented, and FALSE otherwise.
boolean HaveSBExt()
    return HasArchVersion(ARMv8p5) || boolean IMPLEMENTATION_DEFINED "Has SB extension";

Library pseudocode for shared/functions/extension/HaveSSBSExt

// HaveSSBSExt()
// =============
// Returns TRUE if support for SSBS is implemented, and FALSE otherwise.
boolean HaveSSBSExt()
    return HasArchVersion(ARMv8p5) || boolean IMPLEMENTATION_DEFINED "Has SSBS extension";

Library pseudocode for shared/functions/extension/HaveSecureEL2Ext

// HaveSecureEL2Ext()
// ==================
// Returns TRUE if Secure EL2 is implemented.
boolean HaveSecureEL2Ext()
    return HasArchVersion(ARMv8p4);

Library pseudocode for shared/functions/extension/HaveSecureExtDebugView

// HaveSecureExtDebugView()
// ========================
// Returns TRUE if support for Secure and Non-secure views of debug peripherals is implemented.
boolean HaveSecureExtDebugView()
    return HasArchVersion(ARMv8p4);

Library pseudocode for shared/functions/extension/HaveSelfHostedTrace

// HaveSelfHostedTrace()
// =====================
boolean HaveSelfHostedTrace()
    return HasArchVersion(ARMv8p4);

Library pseudocode for shared/functions/extension/HaveSmallPageTblExt

// HaveSmallPageTblExt()
// =====================
// Returns TRUE if Small Page Table Support is implemented.
boolean HaveSmallPageTblExt()
    return HasArchVersion(ARMv8p4) && boolean IMPLEMENTATION_DEFINED "Has Small Page Table extension";

Library pseudocode for shared/functions/extension/HaveStage2MemAttrControl

// HaveStage2MemAttrControl()
// ==========================
// Returns TRUE if support for Stage2 control of memory types and cacheability attributes is implemented.
boolean HaveStage2MemAttrControl()
    return HasArchVersion(ARMv8p4);

Library pseudocode for shared/functions/extension/HaveStatisticalProfiling

// HaveStatisticalProfiling()
// ==========================
boolean HaveStatisticalProfiling()
    return HasArchVersion(ARMv8p2);

Library pseudocode for shared/functions/extension/HaveTraceExt

// HaveTraceExt()
// ==============
// Returns TRUE if Trace functionality as described by the Trace Architecture
// is implemented.
boolean HaveTraceExt()
    return boolean IMPLEMENTATION_DEFINED "Has Trace Architecture functionality";

Library pseudocode for shared/functions/extension/HaveTrapLoadStoreMultipleDeviceExt

// HaveTrapLoadStoreMultipleDeviceExt()
// ====================================
boolean HaveTrapLoadStoreMultipleDeviceExt()
    return HasArchVersion(ARMv8p2);

Library pseudocode for shared/functions/extension/HaveUA16Ext

// HaveUA16Ext()
// =============
// Returns TRUE if extended unaligned memory access support is implemented, and FALSE otherwise.
boolean HaveUA16Ext()
    return HasArchVersion(ARMv8p4);

Library pseudocode for shared/functions/extension/HaveUAOExt

// HaveUAOExt()
// ============
boolean HaveUAOExt()
    return HasArchVersion(ARMv8p2);

Library pseudocode for shared/functions/extension/HaveVirtHostExt

// HaveVirtHostExt()
// =================
boolean HaveVirtHostExt()
    return HasArchVersion(ARMv8p1);

Library pseudocode for shared/functions/extension/InsertIESBBeforeException

// If SCTLR_ELx.IESB is 1 when an exception is generated to ELx, any pending Unrecoverable
// SError interrupt must be taken before executing any instructions in the exception handler.
// However, this can be before the branch to the exception handler is made.
boolean InsertIESBBeforeException(bits(2) el);

Library pseudocode for shared/functions/float/fixedtofp/FixedToFP

// FixedToFP()
// ===========
// Convert M-bit fixed point OP with FBITS fractional bits to
// N-bit precision floating point, controlled by UNSIGNED and ROUNDING.
bits(N) FixedToFP(bits(M) op, integer fbits, boolean unsigned, FPCRType fpcr, FPRounding rounding)
    assert N IN {16,32,64};
    assert M IN {16,32,64};
    bits(N) result;
    assert fbits >= 0;
    assert rounding != FPRounding_ODD;

    // Correct signed-ness
    int_operand = Int(op, unsigned);

    // Scale by fractional bits and generate a real value
    real_operand = Real(int_operand) / 2.0^fbits;

    if real_operand == 0.0 then
        result = FPZero('0');
    else
        result = FPRound(real_operand, fpcr, rounding);

    return result;
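
The numeric core of FixedToFP() is exact: the operand is reinterpreted as a signed or unsigned integer and divided by 2^fbits, and only the final FPRound() step rounds. A Python sketch (illustrative only; it fixes the rounding mode to round-to-nearest via float()):

    # Illustrative only: M-bit fixed point with fbits fractional bits
    # converted to a Python float with a single rounding.
    from fractions import Fraction

    def fixed_to_fp(op: int, m: int, fbits: int, unsigned: bool) -> float:
        int_operand = op if unsigned or not (op >> (m - 1)) else op - (1 << m)
        real_operand = Fraction(int_operand, 1 << fbits)  # exact scaling
        return float(real_operand)                        # one rounding

    assert fixed_to_fp(0x8000, 16, 8, unsigned=False) == -128.0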

Library pseudocode for shared/functions/float/fpabs/FPAbs

// FPAbs()
// =======
bits(N) FPAbs(bits(N) op)
    assert N IN {16,32,64};
    return '0' : op<N-2:0>;

Library pseudocode for shared/functions/float/fpadd/FPAdd

// FPAdd()
// =======
bits(N) FPAdd(bits(N) op1, bits(N) op2, FPCRType fpcr)
    assert N IN {16,32,64};
    rounding = FPRoundingMode(fpcr);

    (type1,sign1,value1) = FPUnpack(op1, fpcr);
    (type2,sign2,value2) = FPUnpack(op2, fpcr);

    (done,result) = FPProcessNaNs(type1, type2, op1, op2, fpcr);
    if !done then
        inf1 = (type1 == FPType_Infinity);  inf2 = (type2 == FPType_Infinity);
        zero1 = (type1 == FPType_Zero);     zero2 = (type2 == FPType_Zero);
        if inf1 && inf2 && sign1 == NOT(sign2) then
            result = FPDefaultNaN();
            FPProcessException(FPExc_InvalidOp, fpcr);
        elsif (inf1 && sign1 == '0') || (inf2 && sign2 == '0') then
            result = FPInfinity('0');
        elsif (inf1 && sign1 == '1') || (inf2 && sign2 == '1') then
            result = FPInfinity('1');
        elsif zero1 && zero2 && sign1 == sign2 then
            result = FPZero(sign1);
        else
            result_value = value1 + value2;
            if result_value == 0.0 then  // Sign of exact zero result depends on rounding mode
                result_sign = if rounding == FPRounding_NEGINF then '1' else '0';
                result = FPZero(result_sign);
            else
                result = FPRound(result_value, fpcr, rounding);
    return result;

Library pseudocode for shared/functions/float/fpcompare/FPCompare

// FPCompare()
// ===========
bits(4) FPCompare(bits(N) op1, bits(N) op2, boolean signal_nans, FPCRType fpcr)
    assert N IN {16,32,64};
    (type1,sign1,value1) = FPUnpack(op1, fpcr);
    (type2,sign2,value2) = FPUnpack(op2, fpcr);
    if type1==FPType_SNaN || type1==FPType_QNaN || type2==FPType_SNaN || type2==FPType_QNaN then
        result = '0011';
        if type1==FPType_SNaN || type2==FPType_SNaN || signal_nans then
            FPProcessException(FPExc_InvalidOp, fpcr);
    else
        // All non-NaN cases can be evaluated on the values produced by FPUnpack()
        if value1 == value2 then
            result = '0110';
        elsif value1 < value2 then
            result = '1000';
        else  // value1 > value2
            result = '0010';
    return result;

Library pseudocode for shared/functions/float/fpcompareeq/FPCompareEQ

// FPCompareEQ()
// =============
boolean FPCompareEQ(bits(N) op1, bits(N) op2, FPCRType fpcr)
    assert N IN {16,32,64};
    (type1,sign1,value1) = FPUnpack(op1, fpcr);
    (type2,sign2,value2) = FPUnpack(op2, fpcr);
    if type1==FPType_SNaN || type1==FPType_QNaN || type2==FPType_SNaN || type2==FPType_QNaN then
        result = FALSE;
        if type1==FPType_SNaN || type2==FPType_SNaN then
            FPProcessException(FPExc_InvalidOp, fpcr);
    else
        // All non-NaN cases can be evaluated on the values produced by FPUnpack()
        result = (value1 == value2);
    return result;

Library pseudocode for shared/functions/float/fpcomparege/FPCompareGE

// FPCompareGE()
// =============
boolean FPCompareGE(bits(N) op1, bits(N) op2, FPCRType fpcr)
    assert N IN {16,32,64};
    (type1,sign1,value1) = FPUnpack(op1, fpcr);
    (type2,sign2,value2) = FPUnpack(op2, fpcr);
    if type1==FPType_SNaN || type1==FPType_QNaN || type2==FPType_SNaN || type2==FPType_QNaN then
        result = FALSE;
        FPProcessException(FPExc_InvalidOp, fpcr);
    else
        // All non-NaN cases can be evaluated on the values produced by FPUnpack()
        result = (value1 >= value2);
    return result;

Library pseudocode for shared/functions/float/fpcomparegt/FPCompareGT

// FPCompareGT()
// =============
boolean FPCompareGT(bits(N) op1, bits(N) op2, FPCRType fpcr)
    assert N IN {16,32,64};
    (type1,sign1,value1) = FPUnpack(op1, fpcr);
    (type2,sign2,value2) = FPUnpack(op2, fpcr);
    if type1==FPType_SNaN || type1==FPType_QNaN || type2==FPType_SNaN || type2==FPType_QNaN then
        result = FALSE;
        FPProcessException(FPExc_InvalidOp, fpcr);
    else
        // All non-NaN cases can be evaluated on the values produced by FPUnpack()
        result = (value1 > value2);
    return result;

Library pseudocode for shared/functions/float/fpconvert/FPConvert

// FPConvert()
// ===========
// Convert floating point OP with N-bit precision to M-bit precision,
// with rounding controlled by ROUNDING.
// This is used by the FP-to-FP conversion instructions and so for
// half-precision data ignores FZ16, but observes AHP.
bits(M) FPConvert(bits(N) op, FPCRType fpcr, FPRounding rounding)
    assert M IN {16,32,64};
    assert N IN {16,32,64};
    bits(M) result;

    // Unpack floating-point operand optionally with flush-to-zero.
    (fptype,sign,value) = FPUnpackCV(op, fpcr);

    alt_hp = (M == 16) && (fpcr.AHP == '1');

    if fptype == FPType_SNaN || fptype == FPType_QNaN then
        if alt_hp then
            result = FPZero(sign);
        elsif fpcr.DN == '1' then
            result = FPDefaultNaN();
        else
            result = FPConvertNaN(op);
        if fptype == FPType_SNaN || alt_hp then
            FPProcessException(FPExc_InvalidOp, fpcr);
    elsif fptype == FPType_Infinity then
        if alt_hp then
            result = sign:Ones(M-1);
            FPProcessException(FPExc_InvalidOp, fpcr);
        else
            result = FPInfinity(sign);
    elsif fptype == FPType_Zero then
        result = FPZero(sign);
    else
        result = FPRoundCV(value, fpcr, rounding);
    return result;

// FPConvert()
// ===========
bits(M) FPConvert(bits(N) op, FPCRType fpcr)
    return FPConvert(op, fpcr, FPRoundingMode(fpcr));

Library pseudocode for shared/functions/float/fpconvertnan/FPConvertNaN

// FPConvertNaN()
// ==============
// Converts a NaN of one floating-point type to another
bits(M) FPConvertNaN(bits(N) op)
    assert N IN {16,32,64};
    assert M IN {16,32,64};
    bits(M) result;
    bits(51) frac;
    sign = op<N-1>;

    // Unpack payload from input NaN
    case N of
        when 64 frac = op<50:0>;
        when 32 frac = op<21:0>:Zeros(29);
        when 16 frac = op<8:0>:Zeros(42);

    // Repack payload into output NaN, while
    // converting an SNaN to a QNaN.
    case M of
        when 64 result = sign:Ones(M-52):frac;
        when 32 result = sign:Ones(M-23):frac<50:29>;
        when 16 result = sign:Ones(M-10):frac<50:42>;

    return result;
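
For the double-to-single case, FPConvertNaN() keeps the sign, saturates the exponent, forces the quiet bit, and keeps the top 22 payload bits. A Python sketch (illustrative only) of exactly that N=64, M=32 case on raw bit patterns:

    # Illustrative only: FPConvertNaN for float64 -> float32 bits.
    def convert_nan_64_to_32(bits64: int) -> int:
        sign = bits64 >> 63
        frac = bits64 & ((1 << 51) - 1)   # op<50:0>, below the quiet bit
        return (sign << 31) | (0x1FF << 22) | (frac >> 29)  # sign:Ones(9):frac<50:29>

    # A signalling NaN with a tiny payload becomes the default quiet-NaN
    # pattern, because the low payload bits are discarded:
    assert convert_nan_64_to_32(0x7FF0000000000001) == 0x7FC00000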

Library pseudocode for shared/functions/float/fpcrtype/FPCRType

type FPCRType;

Library pseudocode for shared/functions/float/fpdecoderm/FPDecodeRM

// FPDecodeRM()
// ============
// Decode most common AArch32 floating-point rounding encoding.
FPRounding FPDecodeRM(bits(2) rm)
    case rm of
        when '00' return FPRounding_TIEAWAY; // A
        when '01' return FPRounding_TIEEVEN; // N
        when '10' return FPRounding_POSINF;  // P
        when '11' return FPRounding_NEGINF;  // M

Library pseudocode for shared/functions/float/fpdecoderounding/FPDecodeRounding

// FPDecodeRounding()
// ==================
// Decode floating-point rounding mode and common AArch64 encoding.
FPRounding FPDecodeRounding(bits(2) rmode)
    case rmode of
        when '00' return FPRounding_TIEEVEN; // N
        when '01' return FPRounding_POSINF;  // P
        when '10' return FPRounding_NEGINF;  // M
        when '11' return FPRounding_ZERO;    // Z

Library pseudocode for shared/functions/float/fpdefaultnan/FPDefaultNaN

// FPDefaultNaN()
// ==============
bits(N) FPDefaultNaN()
    assert N IN {16,32,64};
    constant integer E = (if N == 16 then 5 elsif N == 32 then 8 else 11);
    constant integer F = N - (E + 1);
    sign = '0';
    exp  = Ones(E);
    frac = '1':Zeros(F-1);
    return sign : exp : frac;
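
The resulting default NaN patterns are fixed per width. A Python sketch (illustrative only) reproducing them:

    # Illustrative only: sign 0, exponent all-ones, fraction '1':Zeros(F-1).
    def fp_default_nan(n: int) -> int:
        e = {16: 5, 32: 8, 64: 11}[n]
        f = n - (e + 1)
        return (((1 << e) - 1) << f) | (1 << (f - 1))

    assert fp_default_nan(16) == 0x7E00
    assert fp_default_nan(32) == 0x7FC00000
    assert fp_default_nan(64) == 0x7FF8000000000000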

Library pseudocode for shared/functions/float/fpdiv/FPDiv

// FPDiv()
// =======
bits(N) FPDiv(bits(N) op1, bits(N) op2, FPCRType fpcr)
    assert N IN {16,32,64};
    (type1,sign1,value1) = FPUnpack(op1, fpcr);
    (type2,sign2,value2) = FPUnpack(op2, fpcr);
    (done,result) = FPProcessNaNs(type1, type2, op1, op2, fpcr);
    if !done then
        inf1 = (type1 == FPType_Infinity);  inf2 = (type2 == FPType_Infinity);
        zero1 = (type1 == FPType_Zero);     zero2 = (type2 == FPType_Zero);
        if (inf1 && inf2) || (zero1 && zero2) then
            result = FPDefaultNaN();
            FPProcessException(FPExc_InvalidOp, fpcr);
        elsif inf1 || zero2 then
            result = FPInfinity(sign1 EOR sign2);
            if !inf1 then FPProcessException(FPExc_DivideByZero, fpcr);
        elsif zero1 || inf2 then
            result = FPZero(sign1 EOR sign2);
        else
            result = FPRound(value1/value2, fpcr);
    return result;

Library pseudocode for shared/functions/float/fpexc/FPExc

enumeration FPExc {FPExc_InvalidOp, FPExc_DivideByZero, FPExc_Overflow, FPExc_Underflow, FPExc_Inexact, FPExc_InputDenorm};

Library pseudocode for shared/functions/float/fpinfinity/FPInfinity

// FPInfinity()
// ============
bits(N) FPInfinity(bit sign)
    assert N IN {16,32,64};
    constant integer E = (if N == 16 then 5 elsif N == 32 then 8 else 11);
    constant integer F = N - (E + 1);
    exp  = Ones(E);
    frac = Zeros(F);
    return sign : exp : frac;

Library pseudocode for shared/functions/float/fpmax/FPMax

// FPMax()
// =======
bits(N) FPMax(bits(N) op1, bits(N) op2, FPCRType fpcr)
    assert N IN {16,32,64};
    (type1,sign1,value1) = FPUnpack(op1, fpcr);
    (type2,sign2,value2) = FPUnpack(op2, fpcr);
    (done,result) = FPProcessNaNs(type1, type2, op1, op2, fpcr);
    if !done then
        if value1 > value2 then
            (fptype,sign,value) = (type1,sign1,value1);
        else
            (fptype,sign,value) = (type2,sign2,value2);
        if fptype == FPType_Infinity then
            result = FPInfinity(sign);
        elsif fptype == FPType_Zero then
            sign = sign1 AND sign2;  // Use most positive sign
            result = FPZero(sign);
        else
            // The use of FPRound() covers the case where there is a trapped underflow exception
            // for a denormalized number even though the result is exact.
            result = FPRound(value, fpcr);
    return result;

Library pseudocode for shared/functions/float/fpmaxnormal/FPMaxNormal

// FPMaxNormal()
// =============
bits(N) FPMaxNormal(bit sign)
    assert N IN {16,32,64};
    constant integer E = (if N == 16 then 5 elsif N == 32 then 8 else 11);
    constant integer F = N - (E + 1);
    exp  = Ones(E-1):'0';
    frac = Ones(F);
    return sign : exp : frac;

Library pseudocode for shared/functions/float/fpmaxnum/FPMaxNum

// FPMaxNum()
// ==========
bits(N) FPMaxNum(bits(N) op1, bits(N) op2, FPCRType fpcr)
    assert N IN {16,32,64};
    (type1,-,-) = FPUnpack(op1, fpcr);
    (type2,-,-) = FPUnpack(op2, fpcr);

    // treat a single quiet-NaN as -Infinity
    if type1 == FPType_QNaN && type2 != FPType_QNaN then
        op1 = FPInfinity('1');
    elsif type1 != FPType_QNaN && type2 == FPType_QNaN then
        op2 = FPInfinity('1');

    return FPMax(op1, op2, fpcr);

Library pseudocode for shared/functions/float/fpmin/FPMin

// FPMin()
// =======
bits(N) FPMin(bits(N) op1, bits(N) op2, FPCRType fpcr)
    assert N IN {16,32,64};
    (type1,sign1,value1) = FPUnpack(op1, fpcr);
    (type2,sign2,value2) = FPUnpack(op2, fpcr);
    (done,result) = FPProcessNaNs(type1, type2, op1, op2, fpcr);
    if !done then
        if value1 < value2 then
            (fptype,sign,value) = (type1,sign1,value1);
        else
            (fptype,sign,value) = (type2,sign2,value2);
        if fptype == FPType_Infinity then
            result = FPInfinity(sign);
        elsif fptype == FPType_Zero then
            sign = sign1 OR sign2;  // Use most negative sign
            result = FPZero(sign);
        else
            // The use of FPRound() covers the case where there is a trapped underflow exception
            // for a denormalized number even though the result is exact.
            result = FPRound(value, fpcr);
    return result;

Library pseudocode for shared/functions/float/fpminnum/FPMinNum

// FPMinNum()
// ==========
bits(N) FPMinNum(bits(N) op1, bits(N) op2, FPCRType fpcr)
    assert N IN {16,32,64};
    (type1,-,-) = FPUnpack(op1, fpcr);
    (type2,-,-) = FPUnpack(op2, fpcr);

    // Treat a single quiet-NaN as +Infinity
    if type1 == FPType_QNaN && type2 != FPType_QNaN then
        op1 = FPInfinity('0');
    elsif type1 != FPType_QNaN && type2 == FPType_QNaN then
        op2 = FPInfinity('0');

    return FPMin(op1, op2, fpcr);

Library pseudocode for shared/functions/float/fpmul/FPMul

// FPMul()
// =======
bits(N) FPMul(bits(N) op1, bits(N) op2, FPCRType fpcr)
    assert N IN {16,32,64};
    (type1,sign1,value1) = FPUnpack(op1, fpcr);
    (type2,sign2,value2) = FPUnpack(op2, fpcr);
    (done,result) = FPProcessNaNs(type1, type2, op1, op2, fpcr);
    if !done then
        inf1 = (type1 == FPType_Infinity);  inf2 = (type2 == FPType_Infinity);
        zero1 = (type1 == FPType_Zero);     zero2 = (type2 == FPType_Zero);
        if (inf1 && zero2) || (zero1 && inf2) then
            result = FPDefaultNaN();
            FPProcessException(FPExc_InvalidOp, fpcr);
        elsif inf1 || inf2 then
            result = FPInfinity(sign1 EOR sign2);
        elsif zero1 || zero2 then
            result = FPZero(sign1 EOR sign2);
        else
            result = FPRound(value1*value2, fpcr);
    return result;

Library pseudocode for shared/functions/float/fpmuladd/FPMulAdd

// FPMulAdd()
// ==========
//
// Calculates addend + op1*op2 with a single rounding.
bits(N) FPMulAdd(bits(N) addend, bits(N) op1, bits(N) op2, FPCRType fpcr)
    assert N IN {16,32,64};
    rounding = FPRoundingMode(fpcr);
    (typeA,signA,valueA) = FPUnpack(addend, fpcr);
    (type1,sign1,value1) = FPUnpack(op1, fpcr);
    (type2,sign2,value2) = FPUnpack(op2, fpcr);
    inf1 = (type1 == FPType_Infinity);  zero1 = (type1 == FPType_Zero);
    inf2 = (type2 == FPType_Infinity);  zero2 = (type2 == FPType_Zero);
    (done,result) = FPProcessNaNs3(typeA, type1, type2, addend, op1, op2, fpcr);

    if typeA == FPType_QNaN && ((inf1 && zero2) || (zero1 && inf2)) then
        result = FPDefaultNaN();
        FPProcessException(FPExc_InvalidOp, fpcr);

    if !done then
        infA = (typeA == FPType_Infinity);  zeroA = (typeA == FPType_Zero);

        // Determine sign and type product will have if it does not cause an Invalid
        // Operation.
        signP = sign1 EOR sign2;
        infP  = inf1 || inf2;
        zeroP = zero1 || zero2;

        // Non SNaN-generated Invalid Operation cases are multiplies of zero by infinity and
        // additions of opposite-signed infinities.
        if (inf1 && zero2) || (zero1 && inf2) || (infA && infP && signA != signP) then
            result = FPDefaultNaN();
            FPProcessException(FPExc_InvalidOp, fpcr);

        // Other cases involving infinities produce an infinity of the same sign.
        elsif (infA && signA == '0') || (infP && signP == '0') then
            result = FPInfinity('0');
        elsif (infA && signA == '1') || (infP && signP == '1') then
            result = FPInfinity('1');

        // Cases where the result is exactly zero and its sign is not determined by the
        // rounding mode are additions of same-signed zeros.
        elsif zeroA && zeroP && signA == signP then
            result = FPZero(signA);

        // Otherwise calculate numerical result and round it.
        else
            result_value = valueA + (value1 * value2);
            if result_value == 0.0 then  // Sign of exact zero result depends on rounding mode
                result_sign = if rounding == FPRounding_NEGINF then '1' else '0';
                result = FPZero(result_sign);
            else
                result = FPRound(result_value, fpcr);

    return result;
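
The defining property is the single rounding of addend + op1*op2. A Python demonstration (illustrative only) of how a fused result can differ from a separately rounded multiply-then-add, using an exact rational intermediate for the fused case:

    # Illustrative only: single vs. double rounding in double precision.
    from fractions import Fraction

    x = float(2**27 + 1)                  # exactly representable
    y = float(2**27 + 1)
    z = -float(2**54 + 2**28)
    fused    = float(Fraction(z) + Fraction(x) * Fraction(y))  # one rounding
    separate = x * y + z                  # product rounded first, then the sum

    assert (fused, separate) == (1.0, 0.0)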

Library pseudocode for shared/functions/float/fpmuladdh/FPMulAddH

// FPMulAddH()
// ===========
bits(N) FPMulAddH(bits(N) addend, bits(N DIV 2) op1, bits(N DIV 2) op2, FPCRType fpcr)
    assert N IN {32,64};
    rounding = FPRoundingMode(fpcr);
    (typeA,signA,valueA) = FPUnpack(addend, fpcr);
    (type1,sign1,value1) = FPUnpack(op1, fpcr);
    (type2,sign2,value2) = FPUnpack(op2, fpcr);
    inf1 = (type1 == FPType_Infinity);  zero1 = (type1 == FPType_Zero);
    inf2 = (type2 == FPType_Infinity);  zero2 = (type2 == FPType_Zero);
    (done,result) = FPProcessNaNs3H(typeA, type1, type2, addend, op1, op2, fpcr);

    if typeA == FPType_QNaN && ((inf1 && zero2) || (zero1 && inf2)) then
        result = FPDefaultNaN();
        FPProcessException(FPExc_InvalidOp, fpcr);

    if !done then
        infA = (typeA == FPType_Infinity);  zeroA = (typeA == FPType_Zero);

        // Determine sign and type product will have if it does not cause an Invalid
        // Operation.
        signP = sign1 EOR sign2;
        infP  = inf1 || inf2;
        zeroP = zero1 || zero2;

        // Non SNaN-generated Invalid Operation cases are multiplies of zero by infinity and
        // additions of opposite-signed infinities.
        if (inf1 && zero2) || (zero1 && inf2) || (infA && infP && signA != signP) then
            result = FPDefaultNaN();
            FPProcessException(FPExc_InvalidOp, fpcr);

        // Other cases involving infinities produce an infinity of the same sign.
        elsif (infA && signA == '0') || (infP && signP == '0') then
            result = FPInfinity('0');
        elsif (infA && signA == '1') || (infP && signP == '1') then
            result = FPInfinity('1');

        // Cases where the result is exactly zero and its sign is not determined by the
        // rounding mode are additions of same-signed zeros.
        elsif zeroA && zeroP && signA == signP then
            result = FPZero(signA);

        // Otherwise calculate numerical result and round it.
        else
            result_value = valueA + (value1 * value2);
            if result_value == 0.0 then  // Sign of exact zero result depends on rounding mode
                result_sign = if rounding == FPRounding_NEGINF then '1' else '0';
                result = FPZero(result_sign);
            else
                result = FPRound(result_value, fpcr);

    return result;

Library pseudocode for shared/functions/float/fpmuladdh/FPProcessNaNs3H

// FPProcessNaNs3H()
// =================
(boolean, bits(N)) FPProcessNaNs3H(FPType type1, FPType type2, FPType type3,
                                   bits(N) op1, bits(N DIV 2) op2, bits(N DIV 2) op3,
                                   FPCRType fpcr)
    assert N IN {32,64};
    bits(N) result;
    if type1 == FPType_SNaN then
        done = TRUE;  result = FPProcessNaN(type1, op1, fpcr);
    elsif type2 == FPType_SNaN then
        done = TRUE;  result = FPConvertNaN(FPProcessNaN(type2, op2, fpcr));
    elsif type3 == FPType_SNaN then
        done = TRUE;  result = FPConvertNaN(FPProcessNaN(type3, op3, fpcr));
    elsif type1 == FPType_QNaN then
        done = TRUE;  result = FPProcessNaN(type1, op1, fpcr);
    elsif type2 == FPType_QNaN then
        done = TRUE;  result = FPConvertNaN(FPProcessNaN(type2, op2, fpcr));
    elsif type3 == FPType_QNaN then
        done = TRUE;  result = FPConvertNaN(FPProcessNaN(type3, op3, fpcr));
    else
        done = FALSE;  result = Zeros();  // 'Don't care' result
    return (done, result);

Library pseudocode for shared/functions/float/fpmulx/FPMulX

// FPMulX()
// ========
bits(N) FPMulX(bits(N) op1, bits(N) op2, FPCRType fpcr)
    assert N IN {16,32,64};
    bits(N) result;
    (type1,sign1,value1) = FPUnpack(op1, fpcr);
    (type2,sign2,value2) = FPUnpack(op2, fpcr);
    (done,result) = FPProcessNaNs(type1, type2, op1, op2, fpcr);
    if !done then
        inf1 = (type1 == FPType_Infinity);  inf2 = (type2 == FPType_Infinity);
        zero1 = (type1 == FPType_Zero);     zero2 = (type2 == FPType_Zero);
        if (inf1 && zero2) || (zero1 && inf2) then
            result = FPTwo(sign1 EOR sign2);
        elsif inf1 || inf2 then
            result = FPInfinity(sign1 EOR sign2);
        elsif zero1 || zero2 then
            result = FPZero(sign1 EOR sign2);
        else
            result = FPRound(value1*value2, fpcr);
    return result;

Library pseudocode for shared/functions/float/fpneg/FPNeg

// FPNeg()
// =======
bits(N) FPNeg(bits(N) op)
    assert N IN {16,32,64};
    return NOT(op<N-1>) : op<N-2:0>;

Library pseudocode for shared/functions/float/fponepointfive/FPOnePointFive

// FPOnePointFive()
// ================
bits(N) FPOnePointFive(bit sign)
    assert N IN {16,32,64};
    constant integer E = (if N == 16 then 5 elsif N == 32 then 8 else 11);
    constant integer F = N - (E + 1);
    exp  = '0':Ones(E-1);
    frac = '1':Zeros(F-1);
    return sign : exp : frac;

Library pseudocode for shared/functions/float/fpprocessexception/FPProcessException

// FPProcessException()
// ====================
//
// The 'fpcr' argument supplies FPCR control bits. Status information is
// updated directly in the FPSR where appropriate.
FPProcessException(FPExc exception, FPCRType fpcr)
    // Determine the cumulative exception bit number
    case exception of
        when FPExc_InvalidOp     cumul = 0;
        when FPExc_DivideByZero  cumul = 1;
        when FPExc_Overflow      cumul = 2;
        when FPExc_Underflow     cumul = 3;
        when FPExc_Inexact       cumul = 4;
        when FPExc_InputDenorm   cumul = 7;
    enable = cumul + 8;
    if fpcr<enable> == '1' then
        // Trapping of the exception enabled.
        // It is IMPLEMENTATION DEFINED whether the enable bit may be set at all, and
        // if so then how exceptions may be accumulated before calling FPTrapException()
        IMPLEMENTATION_DEFINED "floating-point trap handling";
    elsif UsingAArch32() then
        // Set the cumulative exception bit
        FPSCR<cumul> = '1';
    else
        // Set the cumulative exception bit
        FPSR<cumul> = '1';
    return;
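
Each exception therefore owns a cumulative FPSR status bit, with its trap-enable control 8 bit positions higher in the FPCR. A Python sketch (illustrative only) of that bookkeeping:

    # Illustrative only: cumulative-flag vs. trap behaviour.
    CUMUL_BIT = {"InvalidOp": 0, "DivideByZero": 1, "Overflow": 2,
                 "Underflow": 3, "Inexact": 4, "InputDenorm": 7}

    def process_exception(fpsr: int, fpcr: int, exc: str) -> int:
        cumul = CUMUL_BIT[exc]
        if fpcr & (1 << (cumul + 8)):      # trap enable bit
            raise FloatingPointError(exc)  # stand-in for the trap
        return fpsr | (1 << cumul)         # else set the cumulative bit

    assert process_exception(0, 0, "Inexact") == 1 << 4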

Library pseudocode for shared/functions/float/fpprocessnan/FPProcessNaN

// FPProcessNaN()
// ==============
bits(N) FPProcessNaN(FPType fptype, bits(N) op, FPCRType fpcr)
    assert N IN {16,32,64};
    assert fptype IN {FPType_QNaN, FPType_SNaN};

    case N of
        when 16 topfrac =  9;
        when 32 topfrac = 22;
        when 64 topfrac = 51;

    result = op;
    if fptype == FPType_SNaN then
        result<topfrac> = '1';
        FPProcessException(FPExc_InvalidOp, fpcr);
    if fpcr.DN == '1' then  // DefaultNaN requested
        result = FPDefaultNaN();
    return result;

Library pseudocode for shared/functions/float/fpprocessnans/FPProcessNaNs

// FPProcessNaNs()
// ===============
//
// The boolean part of the return value says whether a NaN has been found and
// processed. The bits(N) part is only relevant if it has and supplies the
// result of the operation.
//
// The 'fpcr' argument supplies FPCR control bits. Status information is
// updated directly in the FPSR where appropriate.
(boolean, bits(N)) FPProcessNaNs(FPType type1, FPType type2, bits(N) op1, bits(N) op2, FPCRType fpcr)
    assert N IN {16,32,64};
    if type1 == FPType_SNaN then
        done = TRUE;  result = FPProcessNaN(type1, op1, fpcr);
    elsif type2 == FPType_SNaN then
        done = TRUE;  result = FPProcessNaN(type2, op2, fpcr);
    elsif type1 == FPType_QNaN then
        done = TRUE;  result = FPProcessNaN(type1, op1, fpcr);
    elsif type2 == FPType_QNaN then
        done = TRUE;  result = FPProcessNaN(type2, op2, fpcr);
    else
        done = FALSE;  result = Zeros();  // 'Don't care' result
    return (done, result);

Library pseudocode for shared/functions/float/fpprocessnans3/FPProcessNaNs3

// FPProcessNaNs3()
// ================
//
// The boolean part of the return value says whether a NaN has been found and
// processed. The bits(N) part is only relevant if it has and supplies the
// result of the operation.
//
// The 'fpcr' argument supplies FPCR control bits. Status information is
// updated directly in the FPSR where appropriate.
(boolean, bits(N)) FPProcessNaNs3(FPType type1, FPType type2, FPType type3,
                                  bits(N) op1, bits(N) op2, bits(N) op3,
                                  FPCRType fpcr)
    assert N IN {16,32,64};
    if type1 == FPType_SNaN then
        done = TRUE;  result = FPProcessNaN(type1, op1, fpcr);
    elsif type2 == FPType_SNaN then
        done = TRUE;  result = FPProcessNaN(type2, op2, fpcr);
    elsif type3 == FPType_SNaN then
        done = TRUE;  result = FPProcessNaN(type3, op3, fpcr);
    elsif type1 == FPType_QNaN then
        done = TRUE;  result = FPProcessNaN(type1, op1, fpcr);
    elsif type2 == FPType_QNaN then
        done = TRUE;  result = FPProcessNaN(type2, op2, fpcr);
    elsif type3 == FPType_QNaN then
        done = TRUE;  result = FPProcessNaN(type3, op3, fpcr);
    else
        done = FALSE;  result = Zeros();  // 'Don't care' result
    return (done, result);

Library pseudocode for shared/functions/float/fprecipestimate/FPRecipEstimate

// FPRecipEstimate()
// =================
bits(N) FPRecipEstimate(bits(N) operand, FPCRType fpcr)
    assert N IN {16,32,64};
    (fptype,sign,value) = FPUnpack(operand, fpcr);
    if fptype == FPType_SNaN || fptype == FPType_QNaN then
        result = FPProcessNaN(fptype, operand, fpcr);
    elsif fptype == FPType_Infinity then
        result = FPZero(sign);
    elsif fptype == FPType_Zero then
        result = FPInfinity(sign);
        FPProcessException(FPExc_DivideByZero, fpcr);
    elsif (
           (N == 16 && Abs(value) < 2.0^-16)  ||
           (N == 32 && Abs(value) < 2.0^-128) ||
           (N == 64 && Abs(value) < 2.0^-1024)
          ) then
        case FPRoundingMode(fpcr) of
            when FPRounding_TIEEVEN
                overflow_to_inf = TRUE;
            when FPRounding_POSINF
                overflow_to_inf = (sign == '0');
            when FPRounding_NEGINF
                overflow_to_inf = (sign == '1');
            when FPRounding_ZERO
                overflow_to_inf = FALSE;
        result = if overflow_to_inf then FPInfinity(sign) else FPMaxNormal(sign);
        FPProcessException(FPExc_Overflow, fpcr);
        FPProcessException(FPExc_Inexact, fpcr);
    elsif ((fpcr.FZ == '1' && N != 16) || (fpcr.FZ16 == '1' && N == 16)) && (
           (N == 16 && Abs(value) >= 2.0^14)  ||
           (N == 32 && Abs(value) >= 2.0^126) ||
           (N == 64 && Abs(value) >= 2.0^1022)
          ) then
        // Result flushed to zero of correct sign
        result = FPZero(sign);
        if UsingAArch32() then
            FPSCR.UFC = '1';
        else
            FPSR.UFC = '1';
    else
        // Scale to a fixed point value in the range 0.5 <= x < 1.0 in steps of 1/512, and
        // calculate result exponent. Scaled value has copied sign bit,
        // exponent = 1022 = double-precision biased version of -1,
        // fraction = original fraction
        case N of
            when 16
                fraction = operand<9:0> : Zeros(42);
                exp = UInt(operand<14:10>);
            when 32
                fraction = operand<22:0> : Zeros(29);
                exp = UInt(operand<30:23>);
            when 64
                fraction = operand<51:0>;
                exp = UInt(operand<62:52>);

        if exp == 0 then
            if fraction<51> == '0' then
                exp = -1;
                fraction = fraction<49:0>:'00';
            else
                fraction = fraction<50:0>:'0';

        integer scaled = UInt('1':fraction<51:44>);

        case N of
            when 16 result_exp =   29 - exp; // In range 29-30 = -1 to 29+1 = 30
            when 32 result_exp =  253 - exp; // In range 253-254 = -1 to 253+1 = 254
            when 64 result_exp = 2045 - exp; // In range 2045-2046 = -1 to 2045+1 = 2046

        // scaled is in range 256..511 representing a fixed-point number in range [0.5..1.0)
        estimate = RecipEstimate(scaled);

        // estimate is in the range 256..511 representing a fixed point result in the range [1.0..2.0)
        // Convert to scaled floating point result with copied sign bit,
        // high-order bits from estimate, and exponent calculated above.
        fraction = estimate<7:0> : Zeros(44);

        if result_exp == 0 then
            fraction = '1' : fraction<51:1>;
        elsif result_exp == -1 then
            fraction = '01' : fraction<51:2>;
            result_exp = 0;

        case N of
            when 16 result = sign : result_exp<N-12:0> : fraction<51:42>;
            when 32 result = sign : result_exp<N-25:0> : fraction<51:29>;
            when 64 result = sign : result_exp<N-54:0> : fraction<51:0>;

    return result;

Library pseudocode for shared/functions/float/fprecipestimate/RecipEstimate

// Compute estimate of reciprocal of 9-bit fixed-point number // // a is in range 256 .. 511 representing a number in the range 0.5 <= x < 1.0. // result is in the range 256 .. 511 representing a number in the range 1.0 to 511/256. integer RecipEstimate(integer a) assert 256 <= a && a < 512; a = a*2+1; // round to nearest integer b = (2 ^ 19) DIV a; r = (b+1) DIV 2; // round to nearest assert 256 <= r && r < 512; return r;
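Since RecipEstimate is self-contained integer arithmetic, it transcribes almost directly into Python (an illustrative sketch, with DIV mapped to //):

def recip_estimate(a: int) -> int:
    # a in 256..511 encodes a fixed-point x with 0.5 <= x < 1.0
    assert 256 <= a < 512
    a = a * 2 + 1                 # round to nearest (midpoint of the input bucket)
    b = (2 ** 19) // a
    r = (b + 1) // 2              # round to nearest
    assert 256 <= r < 512
    return r

assert recip_estimate(511) == 256     # 1/0.999... estimates to about 1.0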

Library pseudocode for shared/functions/float/fprecpx/FPRecpX

// FPRecpX() // ========= bits(N) FPRecpX(bits(N) op, FPCRType fpcr) assert N IN {16,32,64}; case N of when 16 esize = 5; when 32 esize = 8; when 64 esize = 11; bits(N) result; bits(esize) exp; bits(esize) max_exp; bits(N-(esize+1)) frac = Zeros(); case N of when 16 exp = op<10+esize-1:10>; when 32 exp = op<23+esize-1:23>; when 64 exp = op<52+esize-1:52>; max_exp = Ones(esize) - 1; (fptype,sign,value) = FPUnpack(op, fpcr); if fptype == FPType_SNaN || fptype == FPType_QNaN then result = FPProcessNaN(fptype, op, fpcr); else if IsZero(exp) then // Zero and denormals result = sign:max_exp:frac; else // Infinities and normals result = sign:NOT(exp):frac; return result;

Library pseudocode for shared/functions/float/fpround/FPRound

// FPRound() // ========= // Used by data processing and int/fixed <-> FP conversion instructions. // For half-precision data it ignores AHP, and observes FZ16. bits(N) FPRound(real op, FPCRType fpcr, FPRounding rounding) fpcr.AHP = '0'; return FPRoundBase(op, fpcr, rounding); // Convert a real number OP into an N-bit floating-point value using the // supplied rounding mode RMODE. bits(N) FPRoundBase(real op, FPCRType fpcr, FPRounding rounding) assert N IN {16,32,64}; assert op != 0.0; assert rounding != FPRounding_TIEAWAY; bits(N) result; // Obtain format parameters - minimum exponent, numbers of exponent and fraction bits. if N == 16 then minimum_exp = -14; E = 5; F = 10; elsif N == 32 then minimum_exp = -126; E = 8; F = 23; else // N == 64 minimum_exp = -1022; E = 11; F = 52; // Split value into sign, unrounded mantissa and exponent. if op < 0.0 then sign = '1'; mantissa = -op; else sign = '0'; mantissa = op; exponent = 0; while mantissa < 1.0 do mantissa = mantissa * 2.0; exponent = exponent - 1; while mantissa >= 2.0 do mantissa = mantissa / 2.0; exponent = exponent + 1; // Deal with flush-to-zero. if ((fpcr.FZ == '1' && N != 16) || (fpcr.FZ16 == '1' && N == 16)) && exponent < minimum_exp then // Flush-to-zero never generates a trapped exception if UsingAArch32() then FPSCR.UFC = '1'; else FPSR.UFC = '1'; return FPZero(sign); // Start creating the exponent value for the result. Start by biasing the actual exponent // so that the minimum exponent becomes 1, lower values 0 (indicating possible underflow). biased_exp = Max(exponent - minimum_exp + 1, 0); if biased_exp == 0 then mantissa = mantissa / 2.0^(minimum_exp - exponent); // Get the unrounded mantissa as an integer, and the "units in last place" rounding error. int_mant = RoundDown(mantissa * 2.0^F); // < 2.0^F if biased_exp == 0, >= 2.0^F if not error = mantissa * 2.0^F - Real(int_mant); // Underflow occurs if exponent is too small before rounding, and result is inexact or // the Underflow exception is trapped. if biased_exp == 0 && (error != 0.0 || fpcr.UFE == '1') then FPProcessException(FPExc_Underflow, fpcr); // Round result according to rounding mode. case rounding of when FPRounding_TIEEVEN round_up = (error > 0.5 || (error == 0.5 && int_mant<0> == '1')); overflow_to_inf = TRUE; when FPRounding_POSINF round_up = (error != 0.0 && sign == '0'); overflow_to_inf = (sign == '0'); when FPRounding_NEGINF round_up = (error != 0.0 && sign == '1'); overflow_to_inf = (sign == '1'); when FPRounding_ZERO, FPRounding_ODD round_up = FALSE; overflow_to_inf = FALSE; if round_up then int_mant = int_mant + 1; if int_mant == 2^F then // Rounded up from denormalized to normalized biased_exp = 1; if int_mant == 2^(F+1) then // Rounded up to next exponent biased_exp = biased_exp + 1; int_mant = int_mant DIV 2; // Handle rounding to odd aka Von Neumann rounding if error != 0.0 && rounding == FPRounding_ODD then int_mant<0> = '1'; // Deal with overflow and generate result. 
if N != 16 || fpcr.AHP == '0' then // Single, double or IEEE half precision if biased_exp >= 2^E - 1 then result = if overflow_to_inf then FPInfinity(sign) else FPMaxNormal(sign); FPProcessException(FPExc_Overflow, fpcr); error = 1.0; // Ensure that an Inexact exception occurs else result = sign : biased_exp<N-F-2:0> : int_mant<F-1:0>; else // Alternative half precision if biased_exp >= 2^E then result = sign : Ones(N-1); FPProcessException(FPExc_InvalidOp, fpcr); error = 0.0; // Ensure that an Inexact exception does not occur else result = sign : biased_exp<N-F-2:0> : int_mant<F-1:0>; // Deal with Inexact exception. if error != 0.0 then FPProcessException(FPExc_Inexact, fpcr); return result; // FPRound() // ========= bits(N) FPRound(real op, FPCRType fpcr) return FPRound(op, fpcr, FPRoundingMode(fpcr));
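The int_mant/error pair above drives every rounding decision. A Python sketch of just the round-to-nearest-even step, under the assumption that the mantissa is already normalised into [1.0, 2.0) (names are illustrative):

import math

def round_mantissa_rne(mantissa: float, F: int) -> int:
    int_mant = math.floor(mantissa * 2.0 ** F)    # unrounded fixed-point mantissa
    error = mantissa * 2.0 ** F - int_mant        # error in units of the last place
    if error > 0.5 or (error == 0.5 and int_mant & 1):
        int_mant += 1
        if int_mant == 2 ** (F + 1):              # rounded up into the next exponent
            int_mant //= 2                        # (the caller would bump biased_exp)
    return int_mant

assert round_mantissa_rne(1.125, 2) == 4          # tie: rounds to the even candidate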

Library pseudocode for shared/functions/float/fpround/FPRoundCV

// FPRoundCV() // =========== // Used for FP <-> FP conversion instructions. // For half-precision data ignores FZ16 and observes AHP. bits(N) FPRoundCV(real op, FPCRType fpcr, FPRounding rounding) fpcr.FZ16 = '0'; return FPRoundBase(op, fpcr, rounding);

Library pseudocode for shared/functions/float/fprounding/FPRounding

enumeration FPRounding {FPRounding_TIEEVEN, FPRounding_POSINF, FPRounding_NEGINF, FPRounding_ZERO, FPRounding_TIEAWAY, FPRounding_ODD};

Library pseudocode for shared/functions/float/fproundingmode/FPRoundingMode

// FPRoundingMode() // ================ // Return the current floating-point rounding mode. FPRounding FPRoundingMode(FPCRType fpcr) return FPDecodeRounding(fpcr.RMode);

Library pseudocode for shared/functions/float/fproundint/FPRoundInt

// FPRoundInt() // ============ // Round OP to nearest integral floating point value using rounding mode ROUNDING. // If EXACT is TRUE, set FPSR.IXC if result is not numerically equal to OP. bits(N) FPRoundInt(bits(N) op, FPCRType fpcr, FPRounding rounding, boolean exact) assert rounding != FPRounding_ODD; assert N IN {16,32,64}; // Unpack using FPCR to determine if subnormals are flushed-to-zero (fptype,sign,value) = FPUnpack(op, fpcr); if fptype == FPType_SNaN || fptype == FPType_QNaN then result = FPProcessNaN(fptype, op, fpcr); elsif fptype == FPType_Infinity then result = FPInfinity(sign); elsif fptype == FPType_Zero then result = FPZero(sign); else // extract integer component int_result = RoundDown(value); error = value - Real(int_result); // Determine whether supplied rounding mode requires an increment case rounding of when FPRounding_TIEEVEN round_up = (error > 0.5 || (error == 0.5 && int_result<0> == '1')); when FPRounding_POSINF round_up = (error != 0.0); when FPRounding_NEGINF round_up = FALSE; when FPRounding_ZERO round_up = (error != 0.0 && int_result < 0); when FPRounding_TIEAWAY round_up = (error > 0.5 || (error == 0.5 && int_result >= 0)); if round_up then int_result = int_result + 1; // Convert integer value into an equivalent real value real_result = Real(int_result); // Re-encode as a floating-point value, result is always exact if real_result == 0.0 then result = FPZero(sign); else result = FPRound(real_result, fpcr, FPRounding_ZERO); // Generate inexact exceptions if error != 0.0 && exact then FPProcessException(FPExc_Inexact, fpcr); return result;
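The same five-way round_up table appears in FPRoundIntN and FPToFixed below. A Python sketch of the decisions (mode names abbreviated to strings; illustrative only):

import math

def round_to_int(value: float, rounding: str) -> int:
    int_result = math.floor(value)            # round towards minus infinity first
    error = value - int_result
    round_up = {
        'TIEEVEN': error > 0.5 or (error == 0.5 and bool(int_result & 1)),
        'POSINF':  error != 0.0,
        'NEGINF':  False,
        'ZERO':    error != 0.0 and int_result < 0,
        'TIEAWAY': error > 0.5 or (error == 0.5 and int_result >= 0),
    }[rounding]
    return int_result + 1 if round_up else int_result

assert round_to_int(-1.5, 'ZERO') == -1       # truncation towards zero
assert round_to_int(2.5, 'TIEEVEN') == 2      # tie resolved to the even value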

Library pseudocode for shared/functions/float/fproundintn/FPRoundIntN

// FPRoundIntN() // ============= bits(N) FPRoundIntN(bits(N) op, FPCRType fpcr, FPRounding rounding, integer intsize) assert rounding != FPRounding_ODD; assert N IN {32,64}; assert intsize IN {32, 64}; integer exp; constant integer E = (if N == 32 then 8 else 11); constant integer F = N - (E + 1); // Unpack using FPCR to determine if subnormals are flushed-to-zero (fptype,sign,value) = FPUnpack(op, fpcr); if fptype IN {FPType_SNaN, FPType_QNaN, FPType_Infinity} then if N == 32 then exp = 126 + intsize; result = '1':exp<(E-1):0>:Zeros(F); else exp = 1022+intsize; result = '1':exp<(E-1):0>:Zeros(F); FPProcessException(FPExc_InvalidOp, fpcr); elsif fptype == FPType_Zero then result = FPZero(sign); else // Extract integer component int_result = RoundDown(value); error = value - Real(int_result); // Determine whether supplied rounding mode requires an increment case rounding of when FPRounding_TIEEVEN round_up = error > 0.5 || (error == 0.5 && int_result<0> == '1'); when FPRounding_POSINF round_up = error != 0.0; when FPRounding_NEGINF round_up = FALSE; when FPRounding_ZERO round_up = error != 0.0 && int_result < 0; when FPRounding_TIEAWAY round_up = error > 0.5 || (error == 0.5 && int_result >= 0); if round_up then int_result = int_result + 1; if int_result > 2^(intsize-1)-1 || int_result < -1*2^(intsize-1) then if N == 32 then exp = 126 + intsize; result = '1':exp<(E-1):0>:Zeros(F); else exp = 1022 + intsize; result = '1':exp<(E-1):0>:Zeros(F); FPProcessException(FPExc_InvalidOp, fpcr); // this case shouldn't set Inexact error = 0.0; else // Convert integer value into an equivalent real value real_result = Real(int_result); // Re-encode as a floating-point value, result is always exact if real_result == 0.0 then result = FPZero(sign); else result = FPRound(real_result, fpcr, FPRounding_ZERO); // Generate inexact exceptions if error != 0.0 then FPProcessException(FPExc_Inexact, fpcr); return result;

Library pseudocode for shared/functions/float/fprsqrtestimate/FPRSqrtEstimate

// FPRSqrtEstimate() // ================= bits(N) FPRSqrtEstimate(bits(N) operand, FPCRType fpcr) assert N IN {16,32,64}; (fptype,sign,value) = FPUnpack(operand, fpcr); if fptype == FPType_SNaN || fptype == FPType_QNaN then result = FPProcessNaN(fptype, operand, fpcr); elsif fptype == FPType_Zero then result = FPInfinity(sign); FPProcessException(FPExc_DivideByZero, fpcr); elsif sign == '1' then result = FPDefaultNaN(); FPProcessException(FPExc_InvalidOp, fpcr); elsif fptype == FPType_Infinity then result = FPZero('0'); else // Scale to a fixed-point value in the range 0.25 <= x < 1.0 in steps of 1/512, with the // evenness or oddness of the exponent unchanged, and calculate result exponent. // Scaled value has copied sign bit, exponent = 1022 or 1021 = double-precision // biased version of -1 or -2, fraction = original fraction extended with zeros. case N of when 16 fraction = operand<9:0> : Zeros(42); exp = UInt(operand<14:10>); when 32 fraction = operand<22:0> : Zeros(29); exp = UInt(operand<30:23>); when 64 fraction = operand<51:0>; exp = UInt(operand<62:52>); if exp == 0 then while fraction<51> == '0' do fraction = fraction<50:0> : '0'; exp = exp - 1; fraction = fraction<50:0> : '0'; if exp<0> == '0' then scaled = UInt('1':fraction<51:44>); else scaled = UInt('01':fraction<51:45>); case N of when 16 result_exp = ( 44 - exp) DIV 2; when 32 result_exp = ( 380 - exp) DIV 2; when 64 result_exp = (3068 - exp) DIV 2; estimate = RecipSqrtEstimate(scaled); // estimate is in the range 256..511 representing a fixed point result in the range [1.0..2.0) // Convert to scaled floating point result with copied sign bit and high-order // fraction bits, and exponent calculated above. case N of when 16 result = '0' : result_exp<N-12:0> : estimate<7:0>:Zeros( 2); when 32 result = '0' : result_exp<N-25:0> : estimate<7:0>:Zeros(15); when 64 result = '0' : result_exp<N-54:0> : estimate<7:0>:Zeros(44); return result;

Library pseudocode for shared/functions/float/fprsqrtestimate/RecipSqrtEstimate

// Compute estimate of reciprocal square root of 9-bit fixed-point number // // a is in range 128 .. 511 representing a number in the range 0.25 <= x < 1.0. // result is in the range 256 .. 511 representing a number in the range 1.0 to 511/256. integer RecipSqrtEstimate(integer a) assert 128 <= a && a < 512; if a < 256 then // 0.25 .. 0.5 a = a*2+1; // a in units of 1/512 rounded to nearest else // 0.5 .. 1.0 a = (a >> 1) << 1; // discard bottom bit a = (a+1)*2; // a in units of 1/256 rounded to nearest integer b = 512; while a*(b+1)*(b+1) < 2^28 do b = b+1; // b = largest b such that b < 2^14 / sqrt(a) r = (b+1) DIV 2; // round to nearest assert 256 <= r && r < 512; return r;
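As with RecipEstimate, this transcribes directly into Python (illustrative sketch):

def recip_sqrt_estimate(a: int) -> int:
    assert 128 <= a < 512
    if a < 256:                    # 0.25 .. 0.5
        a = a * 2 + 1              # a in units of 1/512, rounded to nearest
    else:                          # 0.5 .. 1.0
        a = (a >> 1) << 1          # discard bottom bit
        a = (a + 1) * 2            # a in units of 1/256, rounded to nearest
    b = 512
    while a * (b + 1) * (b + 1) < 2 ** 28:
        b += 1                     # grow b while a*(b+1)^2 stays below 2^28
    r = (b + 1) // 2               # round to nearest
    assert 256 <= r < 512
    return r

assert recip_sqrt_estimate(288) == 341   # 1/sqrt(~0.564) is about 341/256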

Library pseudocode for shared/functions/float/fpsqrt/FPSqrt

// FPSqrt() // ======== bits(N) FPSqrt(bits(N) op, FPCRType fpcr) assert N IN {16,32,64}; (fptype,sign,value) = FPUnpack(op, fpcr); if fptype == FPType_SNaN || fptype == FPType_QNaN then result = FPProcessNaN(fptype, op, fpcr); elsif fptype == FPType_Zero then result = FPZero(sign); elsif fptype == FPType_Infinity && sign == '0' then result = FPInfinity(sign); elsif sign == '1' then result = FPDefaultNaN(); FPProcessException(FPExc_InvalidOp, fpcr); else result = FPRound(Sqrt(value), fpcr); return result;

Library pseudocode for shared/functions/float/fpsub/FPSub

// FPSub() // ======= bits(N) FPSub(bits(N) op1, bits(N) op2, FPCRType fpcr) assert N IN {16,32,64}; rounding = FPRoundingMode(fpcr); (type1,sign1,value1) = FPUnpack(op1, fpcr); (type2,sign2,value2) = FPUnpack(op2, fpcr); (done,result) = FPProcessNaNs(type1, type2, op1, op2, fpcr); if !done then inf1 = (type1 == FPType_Infinity); inf2 = (type2 == FPType_Infinity); zero1 = (type1 == FPType_Zero); zero2 = (type2 == FPType_Zero); if inf1 && inf2 && sign1 == sign2 then result = FPDefaultNaN(); FPProcessException(FPExc_InvalidOp, fpcr); elsif (inf1 && sign1 == '0') || (inf2 && sign2 == '1') then result = FPInfinity('0'); elsif (inf1 && sign1 == '1') || (inf2 && sign2 == '0') then result = FPInfinity('1'); elsif zero1 && zero2 && sign1 == NOT(sign2) then result = FPZero(sign1); else result_value = value1 - value2; if result_value == 0.0 then // Sign of exact zero result depends on rounding mode result_sign = if rounding == FPRounding_NEGINF then '1' else '0'; result = FPZero(result_sign); else result = FPRound(result_value, fpcr, rounding); return result;
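A condensed Python sketch of the special-case ordering in FPSub; Python floats stand in for the unpacked values, and the rounding argument matters only for the sign of an exact zero (illustrative, exception reporting omitted):

import math

def fp_sub_sketch(value1: float, value2: float, rounding: str = 'TIEEVEN') -> float:
    inf1, inf2 = math.isinf(value1), math.isinf(value2)
    if inf1 and inf2 and (value1 > 0) == (value2 > 0):
        return float('nan')                      # inf - inf: InvalidOp, default NaN
    result = value1 - value2
    if result == 0.0 and not (inf1 or inf2):
        return -0.0 if rounding == 'NEGINF' else 0.0   # sign of an exact zero
    return result

assert math.copysign(1.0, fp_sub_sketch(1.5, 1.5, 'NEGINF')) == -1.0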

Library pseudocode for shared/functions/float/fpthree/FPThree

// FPThree() // ========= bits(N) FPThree(bit sign) assert N IN {16,32,64}; constant integer E = (if N == 16 then 5 elsif N == 32 then 8 else 11); constant integer F = N - (E + 1); exp = '1':Zeros(E-1); frac = '1':Zeros(F-1); return sign : exp : frac;

Library pseudocode for shared/functions/float/fptofixed/FPToFixed

// FPToFixed() // =========== // Convert N-bit precision floating point OP to M-bit fixed point with // FBITS fractional bits, controlled by UNSIGNED and ROUNDING. bits(M) FPToFixed(bits(N) op, integer fbits, boolean unsigned, FPCRType fpcr, FPRounding rounding) assert N IN {16,32,64}; assert M IN {16,32,64}; assert fbits >= 0; assert rounding != FPRounding_ODD; // Unpack using fpcr to determine if subnormals are flushed-to-zero (fptype,sign,value) = FPUnpack(op, fpcr); // If NaN, set cumulative flag or take exception if fptype == FPType_SNaN || fptype == FPType_QNaN then FPProcessException(FPExc_InvalidOp, fpcr); // Scale by fractional bits and produce integer rounded towards minus-infinity value = value * 2.0^fbits; int_result = RoundDown(value); error = value - Real(int_result); // Determine whether supplied rounding mode requires an increment case rounding of when FPRounding_TIEEVEN round_up = (error > 0.5 || (error == 0.5 && int_result<0> == '1')); when FPRounding_POSINF round_up = (error != 0.0); when FPRounding_NEGINF round_up = FALSE; when FPRounding_ZERO round_up = (error != 0.0 && int_result < 0); when FPRounding_TIEAWAY round_up = (error > 0.5 || (error == 0.5 && int_result >= 0)); if round_up then int_result = int_result + 1; // Generate saturated result and exceptions (result, overflow) = SatQ(int_result, M, unsigned); if overflow then FPProcessException(FPExc_InvalidOp, fpcr); elsif error != 0.0 then FPProcessException(FPExc_Inexact, fpcr); return result;
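The last step clamps through SatQ. A Python sketch of that saturation (the tuple return mirrors the pseudocode; otherwise illustrative):

def sat_q(i: int, M: int, unsigned: bool):
    lo, hi = (0, 2 ** M - 1) if unsigned else (-2 ** (M - 1), 2 ** (M - 1) - 1)
    if i < lo:
        return lo, True            # saturated low; overflow raises InvalidOp
    if i > hi:
        return hi, True            # saturated high; overflow raises InvalidOp
    return i, False

# e.g. 300.7 converted to a signed 8-bit value with round-towards-zero
# rounds to 300, which then saturates:
assert sat_q(300, 8, unsigned=False) == (127, True)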

Library pseudocode for shared/functions/float/fptofixedjs/FPToFixedJS

// FPToFixedJS() // ============= // Converts a double precision floating point input value // to a signed integer, with rounding to zero. bits(N) FPToFixedJS(bits(M) op, FPCRType fpcr, boolean Is64) assert M == 64 && N == 32; // Unpack using fpcr to determine if subnormals are flushed-to-zero (fptype,sign,value) = FPUnpack(op, fpcr); Z = '1'; // If NaN, set cumulative flag or take exception if fptype == FPType_SNaN || fptype == FPType_QNaN then FPProcessException(FPExc_InvalidOp, fpcr); Z = '0'; int_result = RoundDown(value); error = value - Real(int_result); // Determine whether supplied rounding mode requires an increment round_it_up = (error != 0.0 && int_result < 0); if round_it_up then int_result = int_result + 1; if int_result < 0 then result = int_result - 2^32*RoundUp(Real(int_result)/Real(2^32)); else result = int_result - 2^32*RoundDown(Real(int_result)/Real(2^32)); // Generate exceptions if int_result < -(2^31) || int_result > (2^31)-1 then FPProcessException(FPExc_InvalidOp, fpcr); Z = '0'; elsif error != 0.0 then FPProcessException(FPExc_Inexact, fpcr); Z = '0'; if sign == '1' && value == 0.0 then Z = '0'; if fptype == FPType_Infinity then result = 0; if Is64 then PSTATE.<N,Z,C,V> = '0':Z:'00'; else FPSCR<31:28> = '0':Z:'00'; return result<N-1:0>;
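The distinctive step is the wrap to the low 32 bits before the range check. A Python sketch (illustrative; it returns the signed interpretation of those 32 bits rather than a bit-vector):

def to_int32_wrap(int_result: int) -> int:
    wrapped = int_result % 2 ** 32          # keep the low 32 bits
    return wrapped - 2 ** 32 if wrapped >= 2 ** 31 else wrapped

assert to_int32_wrap(2 ** 32 + 5) == 5      # out-of-range input wraps around
assert to_int32_wrap(2 ** 31) == -2 ** 31   # and is reinterpreted as signed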

Library pseudocode for shared/functions/float/fptwo/FPTwo

// FPTwo() // ======= bits(N) FPTwo(bit sign) assert N IN {16,32,64}; constant integer E = (if N == 16 then 5 elsif N == 32 then 8 else 11); constant integer F = N - (E + 1); exp = '1':Zeros(E-1); frac = Zeros(F); return sign : exp : frac;

Library pseudocode for shared/functions/float/fptype/FPType

enumeration FPType {FPType_Nonzero, FPType_Zero, FPType_Infinity, FPType_QNaN, FPType_SNaN};

Library pseudocode for shared/functions/float/fpunpack/FPUnpack

// FPUnpack() // ========== // // Used by data processing and int/fixed <-> FP conversion instructions. // For half-precision data it ignores AHP, and observes FZ16. (FPType, bit, real) FPUnpack(bits(N) fpval, FPCRType fpcr) fpcr.AHP = '0'; (fp_type, sign, value) = FPUnpackBase(fpval, fpcr); return (fp_type, sign, value);

Library pseudocode for shared/functions/float/fpunpack/FPUnpackBase

// FPUnpackBase() // ============== // // Unpack a floating-point number into its type, sign bit and the real number // that it represents. The real number result has the correct sign for numbers // and infinities, is very large in magnitude for infinities, and is 0.0 for // NaNs. (These values are chosen to simplify the description of comparisons // and conversions.) // // The 'fpcr' argument supplies FPCR control bits. Status information is // updated directly in the FPSR where appropriate. (FPType, bit, real) FPUnpackBase(bits(N) fpval, FPCRType fpcr) assert N IN {16,32,64}; if N == 16 then sign = fpval<15>; exp16 = fpval<14:10>; frac16 = fpval<9:0>; if IsZero(exp16) then // Produce zero if value is zero or flush-to-zero is selected if IsZero(frac16) || fpcr.FZ16 == '1' then fptype = FPType_Zero; value = 0.0; else fptype = FPType_Nonzero; value = 2.0^-14 * (Real(UInt(frac16)) * 2.0^-10); elsif IsOnes(exp16) && fpcr.AHP == '0' then // Infinity or NaN in IEEE format if IsZero(frac16) then fptype = FPType_Infinity; value = 2.0^1000000; else fptype = if frac16<9> == '1' then FPType_QNaN else FPType_SNaN; value = 0.0; else fptype = FPType_Nonzero; value = 2.0^(UInt(exp16)-15) * (1.0 + Real(UInt(frac16)) * 2.0^-10); elsif N == 32 then sign = fpval<31>; exp32 = fpval<30:23>; frac32 = fpval<22:0>; if IsZero(exp32) then // Produce zero if value is zero or flush-to-zero is selected. if IsZero(frac32) || fpcr.FZ == '1' then fptype = FPType_Zero; value = 0.0; if !IsZero(frac32) then // Denormalized input flushed to zero FPProcessException(FPExc_InputDenorm, fpcr); else fptype = FPType_Nonzero; value = 2.0^-126 * (Real(UInt(frac32)) * 2.0^-23); elsif IsOnes(exp32) then if IsZero(frac32) then fptype = FPType_Infinity; value = 2.0^1000000; else fptype = if frac32<22> == '1' then FPType_QNaN else FPType_SNaN; value = 0.0; else fptype = FPType_Nonzero; value = 2.0^(UInt(exp32)-127) * (1.0 + Real(UInt(frac32)) * 2.0^-23); else // N == 64 sign = fpval<63>; exp64 = fpval<62:52>; frac64 = fpval<51:0>; if IsZero(exp64) then // Produce zero if value is zero or flush-to-zero is selected. if IsZero(frac64) || fpcr.FZ == '1' then fptype = FPType_Zero; value = 0.0; if !IsZero(frac64) then // Denormalized input flushed to zero FPProcessException(FPExc_InputDenorm, fpcr); else fptype = FPType_Nonzero; value = 2.0^-1022 * (Real(UInt(frac64)) * 2.0^-52); elsif IsOnes(exp64) then if IsZero(frac64) then fptype = FPType_Infinity; value = 2.0^1000000; else fptype = if frac64<51> == '1' then FPType_QNaN else FPType_SNaN; value = 0.0; else fptype = FPType_Nonzero; value = 2.0^(UInt(exp64)-1023) * (1.0 + Real(UInt(frac64)) * 2.0^-52); if sign == '1' then value = -value; return (fptype, sign, value);
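For the N == 32 case the unpacking can be sketched in Python as follows (illustrative only; FPSR side effects are ignored and a large finite stand-in replaces 2.0^1000000, which would overflow a Python float):

def fp_unpack32(fpval: int, fz: bool = False):
    sign = (fpval >> 31) & 1
    exp = (fpval >> 23) & 0xFF
    frac = fpval & 0x7FFFFF
    if exp == 0:
        if frac == 0 or fz:                           # zero, or denormal flushed
            fptype, value = 'Zero', 0.0
        else:
            fptype, value = 'Nonzero', 2.0 ** -126 * (frac * 2.0 ** -23)
    elif exp == 0xFF:
        if frac == 0:
            fptype, value = 'Infinity', 2.0 ** 1000   # "very large" stand-in
        else:
            fptype, value = ('QNaN' if frac >> 22 else 'SNaN'), 0.0
    else:
        fptype, value = 'Nonzero', 2.0 ** (exp - 127) * (1.0 + frac * 2.0 ** -23)
    return fptype, sign, (-value if sign else value)

assert fp_unpack32(0x3F800000) == ('Nonzero', 0, 1.0)
assert fp_unpack32(0x7FC00000)[0] == 'QNaN'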

Library pseudocode for shared/functions/float/fpunpack/FPUnpackCV

// FPUnpackCV() // ============ // // Used for FP <-> FP conversion instructions. // For half-precision data ignores FZ16 and observes AHP. (FPType, bit, real) FPUnpackCV(bits(N) fpval, FPCRType fpcr) fpcr.FZ16 = '0'; (fp_type, sign, value) = FPUnpackBase(fpval, fpcr); return (fp_type, sign, value);

Library pseudocode for shared/functions/float/fpzero/FPZero

// FPZero() // ======== bits(N) FPZero(bit sign) assert N IN {16,32,64}; constant integer E = (if N == 16 then 5 elsif N == 32 then 8 else 11); constant integer F = N - (E + 1); exp = Zeros(E); frac = Zeros(F); return sign : exp : frac;

Library pseudocode for shared/functions/float/vfpexpandimm/VFPExpandImm

// VFPExpandImm() // ============== bits(N) VFPExpandImm(bits(8) imm8) assert N IN {16,32,64}; constant integer E = (if N == 16 then 5 elsif N == 32 then 8 else 11); constant integer F = N - E - 1; sign = imm8<7>; exp = NOT(imm8<6>):Replicate(imm8<6>,E-3):imm8<5:4>; frac = imm8<3:0>:Zeros(F-4); return sign : exp : frac;
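The immediate packs a sign, a 3-bit exponent field and a 4-bit fraction. A Python sketch that recovers the numeric value; collapsing the exponent bit pattern to an unbiased exponent in -3..4 is a derivation from the encoding above, not library code:

def vfp_expand_imm_value(imm8: int) -> float:
    sign = (imm8 >> 7) & 1
    b6 = (imm8 >> 6) & 1               # imm8<6>
    e2 = (imm8 >> 4) & 3               # imm8<5:4>
    frac = imm8 & 0xF                  # imm8<3:0>
    exp = (e2 + 1) if b6 == 0 else (e2 - 3)   # unbiased exponent
    return (-1.0) ** sign * (1.0 + frac / 16.0) * 2.0 ** exp

assert vfp_expand_imm_value(0x70) == 1.0
assert vfp_expand_imm_value(0x00) == 2.0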

Library pseudocode for shared/functions/integer/AddWithCarry

// AddWithCarry() // ============== // Integer addition with carry input, returning result and NZCV flags (bits(N), bits(4)) AddWithCarry(bits(N) x, bits(N) y, bit carry_in) integer unsigned_sum = UInt(x) + UInt(y) + UInt(carry_in); integer signed_sum = SInt(x) + SInt(y) + UInt(carry_in); bits(N) result = unsigned_sum<N-1:0>; // same value as signed_sum<N-1:0> bit n = result<N-1>; bit z = if IsZero(result) then '1' else '0'; bit c = if UInt(result) == unsigned_sum then '0' else '1'; bit v = if SInt(result) == signed_sum then '0' else '1'; return (result, n:z:c:v);
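A Python sketch of the same flag derivation for an n-bit addition (names invented for the example):

def add_with_carry(x: int, y: int, carry_in: int, n: int = 32):
    mask = (1 << n) - 1

    def sint(v):                       # two's-complement value of an n-bit pattern
        v &= mask
        return v - (1 << n) if v >> (n - 1) else v

    unsigned_sum = (x & mask) + (y & mask) + carry_in
    signed_sum = sint(x) + sint(y) + carry_in
    result = unsigned_sum & mask
    nf = result >> (n - 1)
    zf = int(result == 0)
    cf = int(result != unsigned_sum)          # carry out: unsigned overflow
    vf = int(sint(result) != signed_sum)      # signed overflow
    return result, (nf, zf, cf, vf)

assert add_with_carry(0xFFFFFFFF, 0, 1) == (0, (0, 1, 1, 0))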

Library pseudocode for shared/functions/memory/AArch64.BranchAddr

// AArch64.BranchAddr() // ==================== // Return the virtual address with tag bits removed for storing to the program counter. bits(64) AArch64.BranchAddr(bits(64) vaddress) assert !UsingAArch32(); msbit = AddrTop(vaddress, TRUE, PSTATE.EL); if msbit == 63 then return vaddress; elsif (PSTATE.EL IN {EL0, EL1} || IsInHost()) && vaddress<msbit> == '1' then return SignExtend(vaddress<msbit:0>); else return ZeroExtend(vaddress<msbit:0>);
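A Python sketch of the tag-stripping behaviour (illustrative; msbit plays the role of AddrTop's result, 55 when tagging applies, and low_el stands for the EL0/EL1-or-host condition):

MASK64 = (1 << 64) - 1

def branch_addr(vaddress: int, msbit: int = 55, low_el: bool = True) -> int:
    if msbit == 63:
        return vaddress
    low = vaddress & ((1 << (msbit + 1)) - 1)
    if low_el and (vaddress >> msbit) & 1:
        return (low | (MASK64 << (msbit + 1))) & MASK64   # SignExtend
    return low                                            # ZeroExtend

assert branch_addr(0x12FFFFFFFFFF0000) == 0xFFFFFFFFFFFF0000  # tag byte replaced
assert branch_addr(0x1200000000001234) == 0x0000000000001234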

Library pseudocode for shared/functions/memory/AccType

enumeration AccType {AccType_NORMAL, AccType_VEC, // Normal loads and stores AccType_STREAM, AccType_VECSTREAM, // Streaming loads and stores AccType_ATOMIC, AccType_ATOMICRW, // Atomic loads and stores AccType_ORDERED, AccType_ORDEREDRW, // Load-Acquire and Store-Release AccType_ORDEREDATOMIC, // Load-Acquire and Store-Release with atomic access AccType_ORDEREDATOMICRW, AccType_LIMITEDORDERED, // Load-LOAcquire and Store-LORelease AccType_UNPRIV, // Load and store unprivileged AccType_IFETCH, // Instruction fetch AccType_PTW, // Page table walk AccType_NONFAULT, // Non-faulting loads AccType_CNOTFIRST, // Contiguous FF load, not first element AccType_NV2REGISTER, // MRS/MSR instruction used at EL1 and which is converted // to a memory access that uses the EL2 translation regime // Other operations AccType_DC, // Data cache maintenance AccType_DC_UNPRIV, // Data cache maintenance instruction used at EL0 AccType_IC, // Instruction cache maintenance AccType_DCZVA, // DC ZVA instructions AccType_AT}; // Address translation

Library pseudocode for shared/functions/memory/AccessDescriptor

type AccessDescriptor is ( AccType acctype, MPAMinfo mpam, boolean page_table_walk, boolean secondstage, boolean s2fs1walk, integer level )

Library pseudocode for shared/functions/memory/AddrTop

// AddrTop() // ========= // Return the MSB number of a virtual address in the stage 1 translation regime for "el". // If EL1 is using AArch64 then addresses from EL0 using AArch32 are zero-extended to 64 bits. integer AddrTop(bits(64) address, boolean IsInstr, bits(2) el) assert HaveEL(el); regime = S1TranslationRegime(el); if ELUsingAArch32(regime) then // AArch32 translation regime. return 31; else // AArch64 translation regime. case regime of when EL1 tbi = (if address<55> == '1' then TCR_EL1.TBI1 else TCR_EL1.TBI0); if HavePACExt() then tbid = if address<55> == '1' then TCR_EL1.TBID1 else TCR_EL1.TBID0; when EL2 if HaveVirtHostExt() && ELIsInHost(el) then tbi = (if address<55> == '1' then TCR_EL2.TBI1 else TCR_EL2.TBI0); if HavePACExt() then tbid = if address<55> == '1' then TCR_EL2.TBID1 else TCR_EL2.TBID0; else tbi = TCR_EL2.TBI; if HavePACExt() then tbid = TCR_EL2.TBID; when EL3 tbi = TCR_EL3.TBI; if HavePACExt() then tbid = TCR_EL3.TBID; return (if tbi == '1' && (!HavePACExt() || tbid == '0' || !IsInstr ) then 55 else 63);

Library pseudocode for shared/functions/memory/AddressDescriptor

type AddressDescriptor is ( FaultRecord fault, // fault.statuscode indicates whether the address is valid MemoryAttributes memattrs, FullAddress paddress, bits(64) vaddress )

Library pseudocode for shared/functions/memory/AddressWithAllocationTag

// AddressWithAllocationTag() // ========================== // Generate a 64-bit value containing a Logical Address Tag from a 64-bit // virtual address and an Allocation Tag. // If the extension is disabled, treats the Allocation Tag as '0000'. bits(64) AddressWithAllocationTag(bits(64) address, bits(4) allocation_tag) bits(64) result = address; bits(4) tag = allocation_tag - ('000':address<55>); result<59:56> = tag; return result;

Library pseudocode for shared/functions/memory/Allocation

constant bits(2) MemHint_No = '00'; // No Read-Allocate, No Write-Allocate constant bits(2) MemHint_WA = '01'; // No Read-Allocate, Write-Allocate constant bits(2) MemHint_RA = '10'; // Read-Allocate, No Write-Allocate constant bits(2) MemHint_RWA = '11'; // Read-Allocate, Write-Allocate

Library pseudocode for shared/functions/memory/AllocationTagFromAddress

// AllocationTagFromAddress() // ========================== // Generate a Tag from a 64-bit value containing a Logical Address Tag. // If access to Allocation Tags is disabled, this function returns '0000'. bits(4) AllocationTagFromAddress(bits(64) tagged_address) bits(4) logical_tag = tagged_address<59:56>; bits(4) tag = logical_tag + ('000':tagged_address<55>); return tag;
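Together with AddressWithAllocationTag above, the two conversions are inverse 4-bit modular adjustments keyed on address bit 55, as a Python round trip shows (illustrative sketch):

MASK64 = (1 << 64) - 1

def address_with_allocation_tag(address: int, allocation_tag: int) -> int:
    bit55 = (address >> 55) & 1
    tag = (allocation_tag - bit55) & 0xF        # 4-bit wrap-around subtract
    return (address & ~(0xF << 56) & MASK64) | (tag << 56)

def allocation_tag_from_address(tagged_address: int) -> int:
    logical_tag = (tagged_address >> 56) & 0xF
    return (logical_tag + ((tagged_address >> 55) & 1)) & 0xF  # inverse add

va = 0xFFF0000040000010                          # bit 55 set (upper-region VA)
assert allocation_tag_from_address(address_with_allocation_tag(va, 0xA)) == 0xA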

Library pseudocode for shared/functions/memory/BigEndian

// BigEndian() // =========== boolean BigEndian() boolean bigend; if UsingAArch32() then bigend = (PSTATE.E != '0'); elsif PSTATE.EL == EL0 then bigend = (SCTLR[].E0E != '0'); else bigend = (SCTLR[].EE != '0'); return bigend;

Library pseudocode for shared/functions/memory/BigEndianReverse

// BigEndianReverse() // ================== bits(width) BigEndianReverse (bits(width) value) assert width IN {8, 16, 32, 64, 128}; integer half = width DIV 2; if width == 8 then return value; return BigEndianReverse(value<half-1:0>) : BigEndianReverse(value<width-1:half>);
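The recursion swaps the two halves and reverses each in turn, which ends up reversing byte order. A Python sketch (illustrative):

def big_endian_reverse(value: int, width: int) -> int:
    assert width in (8, 16, 32, 64, 128)
    if width == 8:
        return value
    half = width // 2
    lo = value & ((1 << half) - 1)
    hi = value >> half
    return (big_endian_reverse(lo, half) << half) | big_endian_reverse(hi, half)

assert big_endian_reverse(0x11223344, 32) == 0x44332211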

Library pseudocode for shared/functions/memory/Cacheability

constant bits(2) MemAttr_NC = '00'; // Non-cacheable constant bits(2) MemAttr_WT = '10'; // Write-through constant bits(2) MemAttr_WB = '11'; // Write-back

Library pseudocode for shared/functions/memory/CheckTag

// CheckTag() // ========== // Performs a Tag Check operation for a memory access and returns // whether the check passed boolean CheckTag(AddressDescriptor memaddrdesc, bits(4) ptag, boolean write) if memaddrdesc.memattrs.tagged then bits(64) paddress = ZeroExtend(memaddrdesc.paddress.address); return ptag == MemTag[paddress]; else return TRUE;

Library pseudocode for shared/functions/memory/CreateAccessDescriptor

// CreateAccessDescriptor() // ======================== AccessDescriptor CreateAccessDescriptor(AccType acctype) AccessDescriptor accdesc; accdesc.acctype = acctype; accdesc.mpam = GenMPAMcurEL(acctype IN {AccType_IFETCH, AccType_IC}); accdesc.page_table_walk = FALSE; return accdesc;

Library pseudocode for shared/functions/memory/CreateAccessDescriptorPTW

// CreateAccessDescriptorPTW() // =========================== AccessDescriptor CreateAccessDescriptorPTW(AccType acctype, boolean secondstage, boolean s2fs1walk, integer level) AccessDescriptor accdesc; accdesc.acctype = acctype; accdesc.mpam = GenMPAMcurEL(acctype IN {AccType_IFETCH, AccType_IC}); accdesc.page_table_walk = TRUE; accdesc.s2fs1walk = s2fs1walk; accdesc.secondstage = secondstage; accdesc.level = level; return accdesc;

Library pseudocode for shared/functions/memory/DataMemoryBarrier

DataMemoryBarrier(MBReqDomain domain, MBReqTypes types);

Library pseudocode for shared/functions/memory/DataSynchronizationBarrier

DataSynchronizationBarrier(MBReqDomain domain, MBReqTypes types);

Library pseudocode for shared/functions/memory/DescriptorUpdate

type DescriptorUpdate is ( boolean AF, // AF needs to be set boolean AP, // AP[2] / S2AP[2] will be modified AddressDescriptor descaddr // Descriptor to be updated )

Library pseudocode for shared/functions/memory/DeviceType

enumeration DeviceType {DeviceType_GRE, DeviceType_nGRE, DeviceType_nGnRE, DeviceType_nGnRnE};

Library pseudocode for shared/functions/memory/EffectiveTBI

// EffectiveTBI() // ============== // Returns the effective TBI in the AArch64 stage 1 translation regime for "el". bit EffectiveTBI(bits(64) address, boolean IsInstr, bits(2) el) assert HaveEL(el); regime = S1TranslationRegime(el); assert(!ELUsingAArch32(regime)); case regime of when EL1 tbi = if address<55> == '1' then TCR_EL1.TBI1 else TCR_EL1.TBI0; if HavePACExt() then tbid = if address<55> == '1' then TCR_EL1.TBID1 else TCR_EL1.TBID0; when EL2 if HaveVirtHostExt() && ELIsInHost(el) then tbi = if address<55> == '1' then TCR_EL2.TBI1 else TCR_EL2.TBI0; if HavePACExt() then tbid = if address<55> == '1' then TCR_EL2.TBID1 else TCR_EL2.TBID0; else tbi = TCR_EL2.TBI; if HavePACExt() then tbid = TCR_EL2.TBID; when EL3 tbi = TCR_EL3.TBI; if HavePACExt() then tbid = TCR_EL3.TBID; return (if tbi == '1' && (!HavePACExt() || tbid == '0' || !IsInstr) then '1' else '0');

Library pseudocode for shared/functions/memory/EffectiveTCMA

// EffectiveTCMA() // =============== // Returns the effective TCMA of a virtual address in the stage 1 translation regime for "el". bit EffectiveTCMA(bits(64) address, bits(2) el) assert HaveEL(el); regime = S1TranslationRegime(el); assert(!ELUsingAArch32(regime)); case regime of when EL1 tcma = if address<55> == '1' then TCR_EL1.TCMA1 else TCR_EL1.TCMA0; when EL2 if HaveVirtHostExt() && ELIsInHost(el) then tcma = if address<55> == '1' then TCR_EL2.TCMA1 else TCR_EL2.TCMA0; else tcma = TCR_EL2.TCMA; when EL3 tcma = TCR_EL3.TCMA; return tcma;

Library pseudocode for shared/functions/memory/Fault

enumeration Fault {Fault_None, Fault_AccessFlag, Fault_Alignment, Fault_Background, Fault_Domain, Fault_Permission, Fault_Translation, Fault_AddressSize, Fault_SyncExternal, Fault_SyncExternalOnWalk, Fault_SyncParity, Fault_SyncParityOnWalk, Fault_AsyncParity, Fault_AsyncExternal, Fault_Debug, Fault_TLBConflict, Fault_BranchTarget, Fault_HWUpdateAccessFlag, Fault_Lockdown, Fault_Exclusive, Fault_ICacheMaint};

Library pseudocode for shared/functions/memory/FaultRecord

type FaultRecord is (Fault statuscode, // Fault Status AccType acctype, // Type of access that faulted FullAddress ipaddress, // Intermediate physical address boolean s2fs1walk, // Is on a Stage 1 page table walk boolean write, // TRUE for a write, FALSE for a read integer level, // For translation, access flag and permission faults bit extflag, // IMPLEMENTATION DEFINED syndrome for external aborts boolean secondstage, // Is a Stage 2 abort bits(4) domain, // Domain number, AArch32 only bits(2) errortype, // [Armv8.2 RAS] AArch32 AET or AArch64 SET bits(4) debugmoe) // Debug method of entry, from AArch32 only type PARTIDtype = bits(16); type PMGtype = bits(8); type MPAMinfo is ( bit mpam_ns, PARTIDtype partid, PMGtype pmg )

Library pseudocode for shared/functions/memory/FullAddress

type FullAddress is ( bits(52) address, bit NS // '0' = Secure, '1' = Non-secure )

Library pseudocode for shared/functions/memory/Hint_Prefetch

// Signals the memory system that memory accesses of type HINT to or from the specified address are // likely in the near future. The memory system may take some action to speed up the memory // accesses when they do occur, such as pre-loading the specified address into one or more // caches as indicated by the innermost cache level target (0=L1, 1=L2, etc) and non-temporal hint // stream. Any or all prefetch hints may be treated as a NOP. A prefetch hint must not cause a // synchronous abort due to Alignment or Translation faults and the like. Its only effect on // software-visible state should be on caches and TLBs associated with address, which must be // accessible by reads, writes or execution, as defined in the translation regime of the current // Exception level. It is guaranteed not to access Device memory. // A Prefetch_EXEC hint must not result in an access that could not be performed by a speculative // instruction fetch, therefore if all associated MMUs are disabled, then it cannot access any // memory location that cannot be accessed by instruction fetches. Hint_Prefetch(bits(64) address, PrefetchHint hint, integer target, boolean stream);

Library pseudocode for shared/functions/memory/MBReqDomain

enumeration MBReqDomain {MBReqDomain_Nonshareable, MBReqDomain_InnerShareable, MBReqDomain_OuterShareable, MBReqDomain_FullSystem};

Library pseudocode for shared/functions/memory/MBReqTypes

enumeration MBReqTypes {MBReqTypes_Reads, MBReqTypes_Writes, MBReqTypes_All};

Library pseudocode for shared/functions/memory/MemAttrHints

type MemAttrHints is ( bits(2) attrs, // See MemAttr_*, Cacheability attributes bits(2) hints, // See MemHint_*, Allocation hints boolean transient )

Library pseudocode for shared/functions/memory/MemTag

// MemTag[] - non-assignment (read) form // ===================================== // Load an Allocation Tag from memory. bits(4) MemTag[bits(64) address] AddressDescriptor memaddrdesc; bits(4) value; iswrite = FALSE; memaddrdesc = AArch64.TranslateAddress(address, AccType_NORMAL, iswrite, TRUE, TAG_GRANULE); // Check for aborts or debug exceptions if IsFault(memaddrdesc) then AArch64.Abort(address, memaddrdesc.fault); // Return the granule tag if tagging is enabled... if AllocationTagAccessIsEnabled() && memaddrdesc.memattrs.tagged then return _MemTag[memaddrdesc]; else // ...otherwise read tag as zero. return '0000'; // MemTag[] - assignment (write) form // ================================== // Store an Allocation Tag to memory. MemTag[bits(64) address] = bits(4) value AddressDescriptor memaddrdesc; iswrite = TRUE; // Stores of allocation tags must be aligned if address != Align(address, TAG_GRANULE) then boolean secondstage = FALSE; AArch64.Abort(address, AArch64.AlignmentFault(AccType_NORMAL, iswrite, secondstage)); wasaligned = TRUE; memaddrdesc = AArch64.TranslateAddress(address, AccType_NORMAL, iswrite, wasaligned, TAG_GRANULE); // Check for aborts or debug exceptions if IsFault(memaddrdesc) then AArch64.Abort(address, memaddrdesc.fault); // Memory array access if AllocationTagAccessIsEnabled() && memaddrdesc.memattrs.tagged then _MemTag[memaddrdesc] = value;

Library pseudocode for shared/functions/memory/MemType

enumeration MemType {MemType_Normal, MemType_Device};

Library pseudocode for shared/functions/memory/MemoryAttributes

type MemoryAttributes is ( MemType memtype, DeviceType device, // For Device memory types MemAttrHints inner, // Inner hints and attributes MemAttrHints outer, // Outer hints and attributes boolean tagged, // Tagged access boolean shareable, boolean outershareable )

Library pseudocode for shared/functions/memory/Permissions

type Permissions is ( bits(3) ap, // Access permission bits bit xn, // Execute-never bit bit xxn, // [Armv8.2] Extended execute-never bit for stage 2 bit pxn // Privileged execute-never bit )

Library pseudocode for shared/functions/memory/PrefetchHint

enumeration PrefetchHint {Prefetch_READ, Prefetch_WRITE, Prefetch_EXEC};

Library pseudocode for shared/functions/memory/SpeculativeStoreBypassBarrierToPA

SpeculativeStoreBypassBarrierToPA();

Library pseudocode for shared/functions/memory/SpeculativeStoreBypassBarrierToVA

SpeculativeStoreBypassBarrierToVA();

Library pseudocode for shared/functions/memory/TLBRecord

type TLBRecord is ( Permissions perms, bit nG, // '0' = Global, '1' = not Global bits(4) domain, // AArch32 only bit GP, // Guarded Page boolean contiguous, // Contiguous bit from page table integer level, // AArch32 Short-descriptor format: Indicates Section/Page integer blocksize, // Describes size of memory translated in KBytes DescriptorUpdate descupdate, // [Armv8.1] Context for h/w update of table descriptor bit CnP, // [Armv8.2] TLB entry can be shared between different PEs AddressDescriptor addrdesc )

Library pseudocode for shared/functions/memory/TransformTag

// TransformTag() // ============== // Apply tag transformation rules. bits(4) TransformTag(bits(64) vaddr) bits(4) vtag = vaddr<59:56>; bits(4) tagdelta = ZeroExtend(vaddr<55>); bits(4) ptag = vtag + tagdelta; return ptag;

Library pseudocode for shared/functions/memory/_Mem

// These two _Mem[] accessors are the hardware operations which perform single-copy atomic, // aligned, little-endian memory accesses of size bytes from/to the underlying physical // memory array of bytes. // // The functions address the array using desc.paddress which supplies: // * A 52-bit physical address // * A single NS bit to select between Secure and Non-secure parts of the array. // // The accdesc descriptor describes the access type: normal, exclusive, ordered, streaming, // etc and other parameters required to access the physical memory or for setting syndrome // register in the event of an external abort. bits(8*size) _Mem[AddressDescriptor desc, integer size, AccessDescriptor accdesc]; _Mem[AddressDescriptor desc, integer size, AccessDescriptor accdesc] = bits(8*size) value;

Library pseudocode for shared/functions/memory/boolean

// boolean AccessIsTagChecked() // ============================ // TRUE if a given access is tag-checked, FALSE otherwise. boolean AccessIsTagChecked(bits(64) vaddr, AccType acctype) if PSTATE.M<4> == '1' then return FALSE; if EffectiveTBI(vaddr, FALSE, PSTATE.EL) == '0' then return FALSE; if EffectiveTCMA(vaddr, PSTATE.EL) == '1' && (vaddr<59:55> == '00000' || vaddr<59:55> == '11111') then return FALSE; if !AllocationTagAccessIsEnabled() then return FALSE; if acctype IN {AccType_IFETCH, AccType_PTW} then return FALSE; if acctype == AccType_NV2REGISTER then return FALSE; if PSTATE.TCO=='1' then return FALSE; if IsNonTagCheckedInstruction() then return FALSE; return TRUE;

Library pseudocode for shared/functions/mpam/DefaultMPAMinfo

// DefaultMPAMinfo // =============== // Returns default MPAM info. If secure is TRUE return default Secure // MPAMinfo, otherwise return default Non-secure MPAMinfo. MPAMinfo DefaultMPAMinfo(boolean secure) MPAMinfo DefaultInfo; DefaultInfo.mpam_ns = if secure then '0' else '1'; DefaultInfo.partid = DefaultPARTID; DefaultInfo.pmg = DefaultPMG; return DefaultInfo;

Library pseudocode for shared/functions/mpam/DefaultPARTID

constant PARTIDtype DefaultPARTID = 0<15:0>;

Library pseudocode for shared/functions/mpam/DefaultPMG

constant PMGtype DefaultPMG = 0<7:0>;

Library pseudocode for shared/functions/mpam/GenMPAMcurEL

// GenMPAMcurEL // ============ // Returns MPAMinfo for the current EL and security state. // InD is TRUE for instruction access and FALSE otherwise. // May be called if MPAM is not implemented (but in a version that supports // MPAM), MPAM is disabled, or in AArch32. In AArch32, convert the mode to // an EL if possible and use that to drive MPAM information generation. If the // mode cannot be converted, MPAM is not implemented, or MPAM is disabled, // return default MPAM information for the current security state. MPAMinfo GenMPAMcurEL(boolean InD) bits(2) mpamel; boolean validEL; boolean secure = IsSecure(); if HaveMPAMExt() && MPAMisEnabled() then if UsingAArch32() then (validEL, mpamel) = ELFromM32(PSTATE.M); else validEL = TRUE; mpamel = PSTATE.EL; if validEL then return genMPAM(UInt(mpamel), InD, secure); return DefaultMPAMinfo(secure);

Library pseudocode for shared/functions/mpam/MAP_vPARTID

// MAP_vPARTID // =========== // Performs conversion of virtual PARTID into physical PARTID // Contains all of the error checking and implementation // choices for the conversion. (PARTIDtype, boolean) MAP_vPARTID(PARTIDtype vpartid) // should not ever be called if EL2 is not implemented // or is implemented but not enabled in the current // security state. PARTIDtype ret; boolean err; integer virt = UInt( vpartid ); integer vmprmax = UInt( MPAMIDR_EL1.VPMR_MAX ); // vpartid_max is largest vpartid supported integer vpartid_max = 4 * vmprmax + 3; // One of many ways to reduce vpartid to value less than vpartid_max. if virt > vpartid_max then virt = virt MOD (vpartid_max+1); // Check for valid mapping entry. if MPAMVPMV_EL2<virt> == '1' then // vpartid has a valid mapping so access the map. ret = mapvpmw(virt); err = FALSE; // Is the default virtual PARTID valid? elsif MPAMVPMV_EL2<0> == '1' then // Yes, so use default mapping for vpartid == 0. ret = MPAMVPM0_EL2<0 +: 16>; err = FALSE; // Neither is valid so use default physical PARTID. else ret = DefaultPARTID; err = TRUE; // Check that the physical PARTID is in-range. // This physical PARTID came from a virtual mapping entry. integer partid_max = UInt( MPAMIDR_EL1.PARTID_MAX ); if UInt(ret) > partid_max then // Out of range, so return default physical PARTID ret = DefaultPARTID; err = TRUE; return (ret, err);

Library pseudocode for shared/functions/mpam/MPAMisEnabled

// MPAMisEnabled
// =============
// Returns TRUE if MPAM is enabled.

boolean MPAMisEnabled()
    el = HighestEL();
    case el of
        when EL3 return MPAM3_EL3.MPAMEN == '1';
        when EL2 return MPAM2_EL2.MPAMEN == '1';
        when EL1 return MPAM1_EL1.MPAMEN == '1';

Library pseudocode for shared/functions/mpam/MPAMisVirtual

// MPAMisVirtual
// =============
// Returns TRUE if MPAM is configured to be virtual at EL.

boolean MPAMisVirtual(integer el)
    return (MPAMIDR_EL1.HAS_HCR == '1' && EL2Enabled() &&
              ((el == 0 && MPAMHCR_EL2.EL0_VPMEN == '1') ||
               (el == 1 && MPAMHCR_EL2.EL1_VPMEN == '1')));

Library pseudocode for shared/functions/mpam/genMPAM

// genMPAM
// =======
// Returns MPAMinfo for exception level el.
// If InD is TRUE returns MPAM information using PARTID_I and PMG_I fields
// of MPAMel_ELx register and otherwise using PARTID_D and PMG_D fields.
// Produces a Secure PARTID if Secure is TRUE and a Non-secure PARTID otherwise.

MPAMinfo genMPAM(integer el, boolean InD, boolean secure)
    MPAMinfo returnInfo;
    PARTIDtype partidel;
    boolean perr;
    boolean gstplk = (el == 0 && EL2Enabled() &&
                      MPAMHCR_EL2.GSTAPP_PLK == '1' && HCR_EL2.TGE == '0');
    integer eff_el = if gstplk then 1 else el;
    (partidel, perr) = genPARTID(eff_el, InD);
    PMGtype groupel = genPMG(eff_el, InD, perr);
    returnInfo.mpam_ns = if secure then '0' else '1';
    returnInfo.partid = partidel;
    returnInfo.pmg = groupel;
    return returnInfo;
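Editor's note: for readers tracing the generation chain, here is a much-simplified C sketch of how genPARTID and genPMG combine into an MPAMinfo value. It deliberately omits the virtual-PARTID remapping path and models the selected register fields as a struct; every name here is an illustrative stand-in, not architectural.

#include <stdint.h>
#include <stdbool.h>

typedef struct {
    uint16_t partid;      /* selected MPAMn_ELx.PARTID_I or .PARTID_D */
    uint8_t  pmg;         /* selected MPAMn_ELx.PMG_I or .PMG_D       */
    uint16_t partid_max;  /* MPAMIDR_EL1.PARTID_MAX                   */
    uint8_t  pmg_max;     /* MPAMIDR_EL1.PMG_MAX                      */
} mpam_regs;

typedef struct { uint16_t partid; uint8_t pmg; bool ns; } mpam_info;

static mpam_info gen_mpam(const mpam_regs *r, bool secure)
{
    mpam_info out = { .ns = !secure };
    /* genPARTID: out-of-range PARTIDs collapse to the default with an error. */
    bool err = r->partid > r->partid_max;
    out.partid = err ? 0 /* DefaultPARTID */ : r->partid;
    /* genPMG: a PARTID error forces the default PMG (one of the CONSTRAINED
       UNPREDICTABLE choices); an out-of-range PMG does too. */
    out.pmg = (err || r->pmg > r->pmg_max) ? 0 /* DefaultPMG */ : r->pmg;
    return out;
}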

Library pseudocode for shared/functions/mpam/genMPAMel

// genMPAMel
// =========
// Returns MPAMinfo for specified EL in the current security state.
// InD is TRUE for instruction access and FALSE otherwise.

MPAMinfo genMPAMel(bits(2) el, boolean InD)
    boolean secure = IsSecure();
    if HaveMPAMExt() && MPAMisEnabled() then
        return genMPAM(UInt(el), InD, secure);
    return DefaultMPAMinfo(secure);

Library pseudocode for shared/functions/mpam/genPARTID

// genPARTID
// =========
// Returns physical PARTID and error boolean for exception level el.
// If InD is TRUE then PARTID is from MPAMel_ELx.PARTID_I and
// otherwise from MPAMel_ELx.PARTID_D.

(PARTIDtype, boolean) genPARTID(integer el, boolean InD)
    PARTIDtype partidel = getMPAM_PARTID(el, InD);
    integer partid_max = UInt(MPAMIDR_EL1.PARTID_MAX);
    if UInt(partidel) > partid_max then
        return (DefaultPARTID, TRUE);
    if MPAMisVirtual(el) then
        return MAP_vPARTID(partidel);
    else
        return (partidel, FALSE);

Library pseudocode for shared/functions/mpam/genPMG

// genPMG
// ======
// Returns PMG for exception level el and I- or D-side (InD).
// If PARTID generation (genPARTID) encountered an error, genPMG() should be
// called with partid_err as TRUE.

PMGtype genPMG(integer el, boolean InD, boolean partid_err)
    integer pmg_max = UInt(MPAMIDR_EL1.PMG_MAX);
    // It is CONSTRAINED UNPREDICTABLE whether partid_err forces PMG to
    // use the default or if it uses the PMG from getMPAM_PMG.
    if partid_err then
        return DefaultPMG;
    PMGtype groupel = getMPAM_PMG(el, InD);
    if UInt(groupel) <= pmg_max then
        return groupel;
    return DefaultPMG;

Library pseudocode for shared/functions/mpam/getMPAM_PARTID

// getMPAM_PARTID
// ==============
// Returns a PARTID from one of the MPAMn_ELx registers.
// MPAMn selects the MPAMn_ELx register used.
// If InD is TRUE, selects the PARTID_I field of that
// register. Otherwise, selects the PARTID_D field.

PARTIDtype getMPAM_PARTID(integer MPAMn, boolean InD)
    PARTIDtype partid;
    boolean el2avail = EL2Enabled();
    if InD then
        case MPAMn of
            when 3 partid = MPAM3_EL3.PARTID_I;
            when 2 partid = if el2avail then MPAM2_EL2.PARTID_I else Zeros();
            when 1 partid = MPAM1_EL1.PARTID_I;
            when 0 partid = MPAM0_EL1.PARTID_I;
            otherwise partid = PARTIDtype UNKNOWN;
    else
        case MPAMn of
            when 3 partid = MPAM3_EL3.PARTID_D;
            when 2 partid = if el2avail then MPAM2_EL2.PARTID_D else Zeros();
            when 1 partid = MPAM1_EL1.PARTID_D;
            when 0 partid = MPAM0_EL1.PARTID_D;
            otherwise partid = PARTIDtype UNKNOWN;
    return partid;

Library pseudocode for shared/functions/mpam/getMPAM_PMG

// getMPAM_PMG
// ===========
// Returns a PMG from one of the MPAMn_ELx registers.
// MPAMn selects the MPAMn_ELx register used.
// If InD is TRUE, selects the PMG_I field of that
// register. Otherwise, selects the PMG_D field.

PMGtype getMPAM_PMG(integer MPAMn, boolean InD)
    PMGtype pmg;
    boolean el2avail = EL2Enabled();
    if InD then
        case MPAMn of
            when 3 pmg = MPAM3_EL3.PMG_I;
            when 2 pmg = if el2avail then MPAM2_EL2.PMG_I else Zeros();
            when 1 pmg = MPAM1_EL1.PMG_I;
            when 0 pmg = MPAM0_EL1.PMG_I;
            otherwise pmg = PMGtype UNKNOWN;
    else
        case MPAMn of
            when 3 pmg = MPAM3_EL3.PMG_D;
            when 2 pmg = if el2avail then MPAM2_EL2.PMG_D else Zeros();
            when 1 pmg = MPAM1_EL1.PMG_D;
            when 0 pmg = MPAM0_EL1.PMG_D;
            otherwise pmg = PMGtype UNKNOWN;
    return pmg;

Library pseudocode for shared/functions/mpam/mapvpmw

// mapvpmw
// =======
// Map a virtual PARTID into a physical PARTID using
// the MPAMVPMn_EL2 registers.
// vpartid is now assumed in-range and valid (checked by caller)
// returns physical PARTID from mapping entry.

PARTIDtype mapvpmw(integer vpartid)
    bits(64) vpmw;
    integer wd = vpartid DIV 4;
    case wd of
        when 0 vpmw = MPAMVPM0_EL2;
        when 1 vpmw = MPAMVPM1_EL2;
        when 2 vpmw = MPAMVPM2_EL2;
        when 3 vpmw = MPAMVPM3_EL2;
        when 4 vpmw = MPAMVPM4_EL2;
        when 5 vpmw = MPAMVPM5_EL2;
        when 6 vpmw = MPAMVPM6_EL2;
        when 7 vpmw = MPAMVPM7_EL2;
        otherwise vpmw = Zeros(64);
    // vpme_lsb selects LSB of field within register
    integer vpme_lsb = (vpartid REM 4) * 16;
    return vpmw<vpme_lsb +: 16>;

Library pseudocode for shared/functions/registers/BranchTo

// BranchTo()
// ==========
// Set program counter to a new address, with a branch type
// In AArch64 state the address might include a tag in the top eight bits.

BranchTo(bits(N) target, BranchType branch_type)
    Hint_Branch(branch_type);
    if N == 32 then
        assert UsingAArch32();
        _PC = ZeroExtend(target);
    else
        assert N == 64 && !UsingAArch32();
        _PC = AArch64.BranchAddr(target<63:0>);
    return;

Library pseudocode for shared/functions/registers/BranchToAddr

// BranchToAddr()
// ==============
// Set program counter to a new address, with a branch type
// In AArch64 state the address does not include a tag in the top eight bits.

BranchToAddr(bits(N) target, BranchType branch_type)
    Hint_Branch(branch_type);
    if N == 32 then
        assert UsingAArch32();
        _PC = ZeroExtend(target);
    else
        assert N == 64 && !UsingAArch32();
        _PC = target<63:0>;
    return;

Library pseudocode for shared/functions/registers/BranchType

enumeration BranchType {
    BranchType_DIRCALL,   // Direct Branch with link
    BranchType_INDCALL,   // Indirect Branch with link
    BranchType_ERET,      // Exception return (indirect)
    BranchType_DBGEXIT,   // Exit from Debug state
    BranchType_RET,       // Indirect branch with function return hint
    BranchType_DIR,       // Direct branch
    BranchType_INDIR,     // Indirect branch
    BranchType_EXCEPTION, // Exception entry
    BranchType_RESET,     // Reset
    BranchType_UNKNOWN};  // Other

Library pseudocode for shared/functions/registers/Hint_Branch

// Report the hint passed to BranchTo() and BranchToAddr(), for consideration when processing
// the next instruction.
Hint_Branch(BranchType hint);

Library pseudocode for shared/functions/registers/NextInstrAddr

// Return address of the sequentially next instruction.
bits(N) NextInstrAddr();

Library pseudocode for shared/functions/registers/ResetExternalDebugRegisters

// Reset the External Debug registers in the Core power domain.
ResetExternalDebugRegisters(boolean cold_reset);

Library pseudocode for shared/functions/registers/ThisInstrAddr

// ThisInstrAddr()
// ===============
// Return address of the current instruction.

bits(N) ThisInstrAddr()
    assert N == 64 || (N == 32 && UsingAArch32());
    return _PC<N-1:0>;

Library pseudocode for shared/functions/registers/_PC

bits(64) _PC;

Library pseudocode for shared/functions/registers/_R

array bits(64) _R[0..30];

Library pseudocode for shared/functions/registers/_V

array bits(128) _V[0..31];

Library pseudocode for shared/functions/sysregisters/SPSR

// SPSR[] - non-assignment form
// ============================

bits(32) SPSR[]
    bits(32) result;
    if UsingAArch32() then
        case PSTATE.M of
            when M32_FIQ      result = SPSR_fiq;
            when M32_IRQ      result = SPSR_irq;
            when M32_Svc      result = SPSR_svc;
            when M32_Monitor  result = SPSR_mon;
            when M32_Abort    result = SPSR_abt;
            when M32_Hyp      result = SPSR_hyp;
            when M32_Undef    result = SPSR_und;
            otherwise         Unreachable();
    else
        case PSTATE.EL of
            when EL1          result = SPSR_EL1;
            when EL2          result = SPSR_EL2;
            when EL3          result = SPSR_EL3;
            otherwise         Unreachable();
    return result;

// SPSR[] - assignment form
// ========================

SPSR[] = bits(32) value
    if UsingAArch32() then
        case PSTATE.M of
            when M32_FIQ      SPSR_fiq = value;
            when M32_IRQ      SPSR_irq = value;
            when M32_Svc      SPSR_svc = value;
            when M32_Monitor  SPSR_mon = value;
            when M32_Abort    SPSR_abt = value;
            when M32_Hyp      SPSR_hyp = value;
            when M32_Undef    SPSR_und = value;
            otherwise         Unreachable();
    else
        case PSTATE.EL of
            when EL1          SPSR_EL1 = value;
            when EL2          SPSR_EL2 = value;
            when EL3          SPSR_EL3 = value;
            otherwise         Unreachable();
    return;

Library pseudocode for shared/functions/system/AllocationTagAccessIsEnabled

// AllocationTagAccessIsEnabled()
// ==============================
// Check whether access to Allocation Tags is enabled.

boolean AllocationTagAccessIsEnabled()
    if SCR_EL3.ATA == '0' && PSTATE.EL IN {EL0, EL1, EL2} then
        return FALSE;
    elsif HCR_EL2.ATA == '0' && HCR_EL2.<E2H,TGE> != '11' && PSTATE.EL IN {EL0, EL1} then
        return FALSE;
    elsif SCTLR_EL3.ATA == '0' && PSTATE.EL == EL3 then
        return FALSE;
    elsif SCTLR_EL2.ATA == '0' && PSTATE.EL == EL2 then
        return FALSE;
    elsif SCTLR_EL1.ATA == '0' && PSTATE.EL == EL1 then
        return FALSE;
    elsif SCTLR_EL2.ATA0 == '0' && HCR_EL2.<E2H,TGE> == '11' && PSTATE.EL == EL0 then
        return FALSE;
    elsif SCTLR_EL1.ATA0 == '0' && HCR_EL2.<E2H,TGE> != '11' && PSTATE.EL == EL0 then
        return FALSE;
    else
        return TRUE;

Library pseudocode for shared/functions/system/ArchVersion

enumeration ArchVersion {
    ARMv8p0,
    ARMv8p1,
    ARMv8p2,
    ARMv8p3,
    ARMv8p4,
    ARMv8p5
};

Library pseudocode for shared/functions/system/BranchTargetCheck

// BranchTargetCheck()
// ===================
// This function checks if the current instruction is a valid target for a branch
// taken into, or inside, a guarded page. It is executed on every cycle once the current
// instruction has been decoded and the values of InGuardedPage and BTypeCompatible have been
// determined for the current instruction.

BranchTargetCheck()
    assert HaveBTIExt() && !UsingAArch32();

    // The branch target check considers two state variables:
    // * InGuardedPage, which is evaluated during instruction fetch.
    // * BTypeCompatible, which is evaluated during instruction decode.
    if InGuardedPage && PSTATE.BTYPE != '00' && !BTypeCompatible && !Halted() then
        bits(64) pc = ThisInstrAddr();
        AArch64.BranchTargetException(pc<51:0>);

    boolean branch_instr = AArch64.ExecutingBROrBLROrRetInstr();
    boolean bti_instr = AArch64.ExecutingBTIInstr();

    // PSTATE.BTYPE defaults to 00 for instructions that do not explicitly set BTYPE.
    if !(branch_instr || bti_instr) then
        BTypeNext = '00';

Library pseudocode for shared/functions/system/ChooseNonExcludedTag

// ChooseNonExcludedTag()
// ======================
// Return a tag derived from the start and the offset values, excluding
// any tags in the given mask.

bits(4) ChooseNonExcludedTag(bits(4) tag, bits(4) offset, bits(16) exclude)
    if IsOnes(exclude) then
        return tag;
    if offset == '0000' then
        while exclude<UInt(tag)> == '1' do
            tag = tag + '0001';
    while offset != '0000' do
        offset = offset - '0001';
        tag = tag + '0001';
        while exclude<UInt(tag)> == '1' do
            tag = tag + '0001';
    return tag;
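Editor's note: a C rendering of the same selection loop may help; choose_non_excluded_tag is an illustrative name, and the 4-bit wrap-around of tag arithmetic is made explicit with masking. This is a sketch of the algorithm above under those assumptions, not ARM code.

#include <stdint.h>

static unsigned choose_non_excluded_tag(unsigned tag, unsigned offset,
                                        uint16_t exclude)
{
    if (exclude == 0xFFFF)          /* every tag excluded: keep the input */
        return tag & 0xF;
    if (offset == 0)
        while ((exclude >> (tag & 0xF)) & 1)
            tag = (tag + 1) & 0xF;  /* 4-bit wrap-around */
    while (offset != 0) {
        offset--;
        tag = (tag + 1) & 0xF;
        while ((exclude >> (tag & 0xF)) & 1)
            tag = (tag + 1) & 0xF;
    }
    return tag & 0xF;
}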

Library pseudocode for shared/functions/system/ClearEventRegister

// ClearEventRegister()
// ====================
// Clear the Event Register of this PE

ClearEventRegister()
    EventRegister = '0';
    return;

Library pseudocode for shared/functions/system/ClearPendingPhysicalSError

// Clear a pending physical SError interrupt
ClearPendingPhysicalSError();

Library pseudocode for shared/functions/system/ClearPendingVirtualSError

// Clear a pending virtual SError interrupt
ClearPendingVirtualSError();

Library pseudocode for shared/functions/system/ConditionHolds

// ConditionHolds()
// ================
// Return TRUE iff COND currently holds

boolean ConditionHolds(bits(4) cond)
    // Evaluate base condition.
    case cond<3:1> of
        when '000' result = (PSTATE.Z == '1');                          // EQ or NE
        when '001' result = (PSTATE.C == '1');                          // CS or CC
        when '010' result = (PSTATE.N == '1');                          // MI or PL
        when '011' result = (PSTATE.V == '1');                          // VS or VC
        when '100' result = (PSTATE.C == '1' && PSTATE.Z == '0');       // HI or LS
        when '101' result = (PSTATE.N == PSTATE.V);                     // GE or LT
        when '110' result = (PSTATE.N == PSTATE.V && PSTATE.Z == '0');  // GT or LE
        when '111' result = TRUE;                                       // AL

    // Condition flag values in the set '111x' indicate always true
    // Otherwise, invert condition if necessary.
    if cond<0> == '1' && cond != '1111' then
        result = !result;

    return result;
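Editor's note: the same flag logic in C, for reference. The helper name and the convention of passing the N, Z, C and V flags as booleans are assumptions of this sketch.

#include <stdbool.h>

static bool condition_holds(unsigned cond, bool n, bool z, bool c, bool v)
{
    bool result;
    switch ((cond >> 1) & 0x7) {            /* base condition from cond<3:1> */
    case 0:  result = z;             break; /* EQ or NE */
    case 1:  result = c;             break; /* CS or CC */
    case 2:  result = n;             break; /* MI or PL */
    case 3:  result = v;             break; /* VS or VC */
    case 4:  result = c && !z;       break; /* HI or LS */
    case 5:  result = n == v;        break; /* GE or LT */
    case 6:  result = n == v && !z;  break; /* GT or LE */
    default: result = true;          break; /* AL       */
    }
    /* cond<0> inverts the test, except for the always-true '111x' pair. */
    if ((cond & 1) && cond != 0xF)
        result = !result;
    return result;
}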

Library pseudocode for shared/functions/system/ConsumptionOfSpeculativeDataBarrier

ConsumptionOfSpeculativeDataBarrier();

Library pseudocode for shared/functions/system/CurrentInstrSet

// CurrentInstrSet()
// =================

InstrSet CurrentInstrSet()
    if UsingAArch32() then
        result = if PSTATE.T == '0' then InstrSet_A32 else InstrSet_T32;
        // PSTATE.J is RES0. Implementation of T32EE or Jazelle state not permitted.
    else
        result = InstrSet_A64;
    return result;

Library pseudocode for shared/functions/system/CurrentPL

// CurrentPL()
// ===========

PrivilegeLevel CurrentPL()
    return PLOfEL(PSTATE.EL);

Library pseudocode for shared/functions/system/EL0

constant bits(2) EL3 = '11';
constant bits(2) EL2 = '10';
constant bits(2) EL1 = '01';
constant bits(2) EL0 = '00';

Library pseudocode for shared/functions/system/EL2Enabled

// EL2Enabled()
// ============
// Returns TRUE if EL2 is present and executing, either in Non-secure state when Secure EL2
// is not implemented, or in Secure state when Secure EL2 is implemented; FALSE otherwise.

boolean EL2Enabled()
    is_secure = HaveEL(EL3) && SCR_EL3.NS == '0';
    return HaveEL(EL2) && (!is_secure || IsSecureEL2Enabled());

Library pseudocode for shared/functions/system/ELFromM32

// ELFromM32()
// ===========

(boolean,bits(2)) ELFromM32(bits(5) mode)
    // Convert an AArch32 mode encoding to an Exception level.
    // Returns (valid,EL):
    //   'valid' is TRUE if 'mode<4:0>' encodes a mode that is both valid for this implementation
    //           and the current value of SCR.NS/SCR_EL3.NS.
    //   'EL'    is the Exception level decoded from 'mode'.
    bits(2) el;
    boolean valid = !BadMode(mode);  // Check for modes that are not valid for this implementation

    case mode of
        when M32_Monitor
            el = EL3;
        when M32_Hyp
            el = EL2;
            valid = valid && (!HaveEL(EL3) || SCR_GEN[].NS == '1');
        when M32_FIQ, M32_IRQ, M32_Svc, M32_Abort, M32_Undef, M32_System
            // If EL3 is implemented and using AArch32, then these modes are EL3 modes in Secure
            // state, and EL1 modes in Non-secure state. If EL3 is not implemented or is using
            // AArch64, then these modes are EL1 modes.
            el = (if HaveEL(EL3) && HighestELUsingAArch32() && SCR.NS == '0' then EL3 else EL1);
        when M32_User
            el = EL0;
        otherwise
            valid = FALSE;           // Passed an illegal mode value

    if !valid then el = bits(2) UNKNOWN;
    return (valid, el);

Library pseudocode for shared/functions/system/ELFromSPSR

// ELFromSPSR()
// ============
// Convert an SPSR value encoding to an Exception level.
// Returns (valid,EL):
//   'valid' is TRUE if 'spsr<4:0>' encodes a valid mode for the current state.
//   'EL'    is the Exception level decoded from 'spsr'.

(boolean,bits(2)) ELFromSPSR(bits(32) spsr)
    if spsr<4> == '0' then                       // AArch64 state
        el = spsr<3:2>;
        if HighestELUsingAArch32() then          // No AArch64 support
            valid = FALSE;
        elsif !HaveEL(el) then                   // Exception level not implemented
            valid = FALSE;
        elsif spsr<1> == '1' then                // M[1] must be 0
            valid = FALSE;
        elsif el == EL0 && spsr<0> == '1' then   // for EL0, M[0] must be 0
            valid = FALSE;
        elsif el == EL2 && HaveEL(EL3) && !IsSecureEL2Enabled() && SCR_EL3.NS == '0' then
            valid = FALSE;   // Unless Secure EL2 is enabled, EL2 only valid in Non-secure state
        else
            valid = TRUE;
    elsif HaveAnyAArch32() then                  // AArch32 state
        (valid, el) = ELFromM32(spsr<4:0>);
    else
        valid = FALSE;

    if !valid then el = bits(2) UNKNOWN;
    return (valid,el);

Library pseudocode for shared/functions/system/ELIsInHost

// ELIsInHost()
// ============

boolean ELIsInHost(bits(2) el)
    return ((IsSecureEL2Enabled() || !IsSecureBelowEL3()) && HaveVirtHostExt() &&
            !ELUsingAArch32(EL2) && HCR_EL2.E2H == '1' &&
            (el == EL2 || (el == EL0 && HCR_EL2.TGE == '1')));

Library pseudocode for shared/functions/system/ELStateUsingAArch32

// ELStateUsingAArch32()
// =====================

boolean ELStateUsingAArch32(bits(2) el, boolean secure)
    // See ELStateUsingAArch32K() for description. Must only be called in circumstances where
    // result is valid (typically, that means 'el IN {EL1,EL2,EL3}').
    (known, aarch32) = ELStateUsingAArch32K(el, secure);
    assert known;
    return aarch32;

Library pseudocode for shared/functions/system/ELStateUsingAArch32K

// ELStateUsingAArch32K()
// ======================

(boolean,boolean) ELStateUsingAArch32K(bits(2) el, boolean secure)
    // Returns (known, aarch32):
    //   'known'   is FALSE for EL0 if the current Exception level is not EL0 and EL1 is
    //             using AArch64, since it cannot determine the state of EL0; TRUE otherwise.
    //   'aarch32' is TRUE if the specified Exception level is using AArch32; FALSE otherwise.
    boolean aarch32;
    known = TRUE;
    if !HaveAArch32EL(el) then
        aarch32 = FALSE;                     // Exception level is using AArch64
    elsif HighestELUsingAArch32() then
        aarch32 = TRUE;                      // All levels are using AArch32
    else
        aarch32_below_el3 = HaveEL(EL3) && SCR_EL3.RW == '0';
        aarch32_at_el1 = (aarch32_below_el3 ||
                          (HaveEL(EL2) &&
                           ((HaveSecureEL2Ext() && SCR_EL3.EEL2 == '1') || !secure) &&
                           HCR_EL2.RW == '0' &&
                           !(HCR_EL2.E2H == '1' && HCR_EL2.TGE == '1' && HaveVirtHostExt())));
        if el == EL0 && !aarch32_at_el1 then // Only know if EL0 using AArch32 from PSTATE
            if PSTATE.EL == EL0 then
                aarch32 = PSTATE.nRW == '1'; // EL0 controlled by PSTATE
            else
                known = FALSE;               // EL0 state is UNKNOWN
        else
            aarch32 = (aarch32_below_el3 && el != EL3) || (aarch32_at_el1 && el IN {EL1,EL0});

    if !known then aarch32 = boolean UNKNOWN;
    return (known, aarch32);

Library pseudocode for shared/functions/system/ELUsingAArch32

// ELUsingAArch32()
// ================

boolean ELUsingAArch32(bits(2) el)
    return ELStateUsingAArch32(el, IsSecureBelowEL3());

Library pseudocode for shared/functions/system/ELUsingAArch32K

// ELUsingAArch32K()
// =================

(boolean,boolean) ELUsingAArch32K(bits(2) el)
    return ELStateUsingAArch32K(el, IsSecureBelowEL3());

Library pseudocode for shared/functions/system/EndOfInstruction

// Terminate processing of the current instruction.
EndOfInstruction();

Library pseudocode for shared/functions/system/EnterLowPowerState

// PE enters a low-power state
EnterLowPowerState();

Library pseudocode for shared/functions/system/EventRegister

bits(1) EventRegister;

Library pseudocode for shared/functions/system/GetPSRFromPSTATE

// GetPSRFromPSTATE()
// ==================
// Return a PSR value which represents the current PSTATE

bits(32) GetPSRFromPSTATE()
    bits(32) spsr = Zeros();
    spsr<31:28> = PSTATE.<N,Z,C,V>;
    if HavePANExt() then spsr<22> = PSTATE.PAN;
    spsr<20> = PSTATE.IL;
    if PSTATE.nRW == '1' then                    // AArch32 state
        spsr<27> = PSTATE.Q;
        spsr<26:25> = PSTATE.IT<1:0>;
        if HaveSSBSExt() then spsr<23> = PSTATE.SSBS;
        if HaveDITExt() then spsr<21> = PSTATE.DIT;
        spsr<19:16> = PSTATE.GE;
        spsr<15:10> = PSTATE.IT<7:2>;
        spsr<9> = PSTATE.E;
        spsr<8:6> = PSTATE.<A,I,F>;              // No PSTATE.D in AArch32 state
        spsr<5> = PSTATE.T;
        assert PSTATE.M<4> == PSTATE.nRW;        // bit [4] is the discriminator
        spsr<4:0> = PSTATE.M;
    else                                         // AArch64 state
        if HaveMTEExt() then spsr<25> = PSTATE.TCO;
        if HaveDITExt() then spsr<24> = PSTATE.DIT;
        if HaveUAOExt() then spsr<23> = PSTATE.UAO;
        spsr<21> = PSTATE.SS;
        if HaveSSBSExt() then spsr<12> = PSTATE.SSBS;
        if HaveBTIExt() then spsr<11:10> = PSTATE.BTYPE;
        spsr<9:6> = PSTATE.<D,A,I,F>;
        spsr<4> = PSTATE.nRW;
        spsr<3:2> = PSTATE.EL;
        spsr<0> = PSTATE.SP;
    return spsr;

Library pseudocode for shared/functions/system/HasArchVersion

// HasArchVersion()
// ================
// Return TRUE if the implemented architecture includes the extensions defined in the specified
// architecture version.

boolean HasArchVersion(ArchVersion version)
    return version == ARMv8p0 || boolean IMPLEMENTATION_DEFINED;

Library pseudocode for shared/functions/system/HaveAArch32EL

// HaveAArch32EL()
// ===============

boolean HaveAArch32EL(bits(2) el)
    // Return TRUE if Exception level 'el' supports AArch32 in this implementation
    if !HaveEL(el) then
        return FALSE;                  // The Exception level is not implemented
    elsif !HaveAnyAArch32() then
        return FALSE;                  // No Exception level can use AArch32
    elsif HighestELUsingAArch32() then
        return TRUE;                   // All Exception levels are using AArch32
    elsif el == HighestEL() then
        return FALSE;                  // The highest Exception level is using AArch64
    elsif el == EL0 then
        return TRUE;                   // EL0 must support using AArch32 if any AArch32
    return boolean IMPLEMENTATION_DEFINED;

Library pseudocode for shared/functions/system/HaveAnyAArch32

// HaveAnyAArch32()
// ================
// Return TRUE if AArch32 state is supported at any Exception level

boolean HaveAnyAArch32()
    return boolean IMPLEMENTATION_DEFINED;

Library pseudocode for shared/functions/system/HaveAnyAArch64

// HaveAnyAArch64()
// ================
// Return TRUE if AArch64 state is supported at any Exception level

boolean HaveAnyAArch64()
    return !HighestELUsingAArch32();

Library pseudocode for shared/functions/system/HaveEL

// HaveEL()
// ========
// Return TRUE if Exception level 'el' is supported

boolean HaveEL(bits(2) el)
    if el IN {EL1,EL0} then
        return TRUE;                   // EL1 and EL0 must exist
    return boolean IMPLEMENTATION_DEFINED;

Library pseudocode for shared/functions/system/HaveFP16Ext

// HaveFP16Ext()
// =============
// Return TRUE if FP16 extension is supported

boolean HaveFP16Ext()
    return boolean IMPLEMENTATION_DEFINED;

Library pseudocode for shared/functions/system/HighestEL

// HighestEL()
// ===========
// Returns the highest implemented Exception level.

bits(2) HighestEL()
    if HaveEL(EL3) then
        return EL3;
    elsif HaveEL(EL2) then
        return EL2;
    else
        return EL1;

Library pseudocode for shared/functions/system/HighestELUsingAArch32

// HighestELUsingAArch32()
// =======================
// Return TRUE if configured to boot into AArch32 operation

boolean HighestELUsingAArch32()
    if !HaveAnyAArch32() then return FALSE;
    return boolean IMPLEMENTATION_DEFINED;   // e.g. CFG32SIGNAL == HIGH

Library pseudocode for shared/functions/system/Hint_Yield

Hint_Yield();

Library pseudocode for shared/functions/system/IllegalExceptionReturn

// IllegalExceptionReturn()
// ========================

boolean IllegalExceptionReturn(bits(32) spsr)
    // Check for illegal return:
    //   * To an unimplemented Exception level.
    //   * To EL2 in Secure state, when SecureEL2 is not enabled.
    //   * To EL0 using AArch64 state, with SPSR.M[0]==1.
    //   * To AArch64 state with SPSR.M[1]==1.
    //   * To AArch32 state with an illegal value of SPSR.M.
    (valid, target) = ELFromSPSR(spsr);
    if !valid then return TRUE;

    // Check for return to higher Exception level
    if UInt(target) > UInt(PSTATE.EL) then return TRUE;

    spsr_mode_is_aarch32 = (spsr<4> == '1');

    // Check for illegal return:
    //   * To EL1, EL2 or EL3 with register width specified in the SPSR different from the
    //     Execution state used in the Exception level being returned to, as determined by
    //     the SCR_EL3.RW or HCR_EL2.RW bits, or as configured from reset.
    //   * To EL0 using AArch64 state when EL1 is using AArch32 state as determined by the
    //     SCR_EL3.RW or HCR_EL2.RW bits or as configured from reset.
    //   * To AArch64 state from AArch32 state (should be caught by above)
    (known, target_el_is_aarch32) = ELUsingAArch32K(target);
    assert known || (target == EL0 && !ELUsingAArch32(EL1));
    if known && spsr_mode_is_aarch32 != target_el_is_aarch32 then return TRUE;

    // Check for illegal return from AArch32 to AArch64
    if UsingAArch32() && !spsr_mode_is_aarch32 then return TRUE;

    // Check for illegal return to EL1 when HCR.TGE is set and when either of
    //   * SecureEL2 is enabled.
    //   * SecureEL2 is not enabled and EL1 is in Non-secure state.
    if HaveEL(EL2) && target == EL1 && HCR_EL2.TGE == '1' then
        if (!IsSecureBelowEL3() || IsSecureEL2Enabled()) then return TRUE;

    return FALSE;

Library pseudocode for shared/functions/system/InstrSet

enumeration InstrSet {InstrSet_A64, InstrSet_A32, InstrSet_T32};

Library pseudocode for shared/functions/system/InstructionSynchronizationBarrier

InstructionSynchronizationBarrier();

Library pseudocode for shared/functions/system/InterruptPending

// InterruptPending()
// ==================
// Return TRUE if there are any pending physical or virtual interrupts, and FALSE otherwise

boolean InterruptPending()
    return IsPhysicalSErrorPending() || IsVirtualSErrorPending();

Library pseudocode for shared/functions/system/IsEventRegisterSet

// IsEventRegisterSet()
// ====================
// Return TRUE if the Event Register of this PE is set, and FALSE otherwise

boolean IsEventRegisterSet()
    return EventRegister == '1';

Library pseudocode for shared/functions/system/IsHighestEL

// IsHighestEL()
// =============
// Returns TRUE if given exception level is the highest exception level implemented

boolean IsHighestEL(bits(2) el)
    return HighestEL() == el;

Library pseudocode for shared/functions/system/IsInHost

// IsInHost()
// ==========

boolean IsInHost()
    return ELIsInHost(PSTATE.EL);

Library pseudocode for shared/functions/system/IsPhysicalSErrorPending

// Return TRUE if a physical SError interrupt is pending
boolean IsPhysicalSErrorPending();

Library pseudocode for shared/functions/system/IsSecure

// IsSecure()
// ==========

boolean IsSecure()
    // Return TRUE if current Exception level is in Secure state.
    if HaveEL(EL3) && !UsingAArch32() && PSTATE.EL == EL3 then
        return TRUE;
    elsif HaveEL(EL3) && UsingAArch32() && PSTATE.M == M32_Monitor then
        return TRUE;
    return IsSecureBelowEL3();

Library pseudocode for shared/functions/system/IsSecureBelowEL3

// IsSecureBelowEL3()
// ==================
// Return TRUE if an Exception level below EL3 is in Secure state
// or would be following an exception return to that level.
//
// Differs from IsSecure in that it ignores the current EL or Mode
// in considering security state.
// That is, if at AArch64 EL3 or in AArch32 Monitor mode, whether an
// exception return would pass to Secure or Non-secure state.

boolean IsSecureBelowEL3()
    if HaveEL(EL3) then
        return SCR_GEN[].NS == '0';
    elsif HaveEL(EL2) && (!HaveSecureEL2Ext() || HighestELUsingAArch32()) then
        // If Secure EL2 is not an architecture option then we must be Non-secure.
        return FALSE;
    else
        // TRUE if processor is Secure or FALSE if Non-secure.
        return boolean IMPLEMENTATION_DEFINED "Secure-only implementation";

Library pseudocode for shared/functions/system/IsSecureEL2Enabled

// IsSecureEL2Enabled()
// ====================
// Returns TRUE if Secure EL2 is enabled, FALSE otherwise

boolean IsSecureEL2Enabled()
    return (HaveEL(EL2) && HaveSecureEL2Ext() &&
            !ELUsingAArch32(EL2) && SCR_EL3.EEL2 == '1');

Library pseudocode for shared/functions/system/IsVirtualSErrorPending

// Return TRUE if a virtual SError interrupt is pending
boolean IsVirtualSErrorPending();

Library pseudocode for shared/functions/system/Mode_Bits

constant bits(5) M32_User    = '10000';
constant bits(5) M32_FIQ     = '10001';
constant bits(5) M32_IRQ     = '10010';
constant bits(5) M32_Svc     = '10011';
constant bits(5) M32_Monitor = '10110';
constant bits(5) M32_Abort   = '10111';
constant bits(5) M32_Hyp     = '11010';
constant bits(5) M32_Undef   = '11011';
constant bits(5) M32_System  = '11111';

Library pseudocode for shared/functions/system/PLOfEL

// PLOfEL()
// ========

PrivilegeLevel PLOfEL(bits(2) el)
    case el of
        when EL3 return if HighestELUsingAArch32() then PL1 else PL3;
        when EL2 return PL2;
        when EL1 return PL1;
        when EL0 return PL0;

Library pseudocode for shared/functions/system/PSTATE

ProcState PSTATE;

Library pseudocode for shared/functions/system/PrivilegeLevel

enumeration PrivilegeLevel {PL3, PL2, PL1, PL0};

Library pseudocode for shared/functions/system/ProcState

type ProcState is (
    bits (1) N,        // Negative condition flag
    bits (1) Z,        // Zero condition flag
    bits (1) C,        // Carry condition flag
    bits (1) V,        // oVerflow condition flag
    bits (1) D,        // Debug mask bit                     [AArch64 only]
    bits (1) A,        // SError interrupt mask bit
    bits (1) I,        // IRQ mask bit
    bits (1) F,        // FIQ mask bit
    bits (1) PAN,      // Privileged Access Never Bit        [v8.1]
    bits (1) UAO,      // User Access Override               [v8.2]
    bits (1) DIT,      // Data Independent Timing            [v8.4]
    bits (1) TCO,      // Tag Check Override                 [v8.5, AArch64 only]
    bits (2) BTYPE,    // Branch Type                        [v8.5]
    bits (1) SS,       // Software step bit
    bits (1) IL,       // Illegal Execution state bit
    bits (2) EL,       // Exception Level
    bits (1) nRW,      // not Register Width: 0=64, 1=32
    bits (1) SP,       // Stack pointer select: 0=SP0, 1=SPx [AArch64 only]
    bits (1) Q,        // Cumulative saturation flag         [AArch32 only]
    bits (4) GE,       // Greater than or Equal flags        [AArch32 only]
    bits (1) SSBS,     // Speculative Store Bypass Safe
    bits (8) IT,       // If-then bits, RES0 in CPSR         [AArch32 only]
    bits (1) J,        // J bit, RES0                        [AArch32 only, RES0 in SPSR and CPSR]
    bits (1) T,        // T32 bit, RES0 in CPSR              [AArch32 only]
    bits (1) E,        // Endianness bit                     [AArch32 only]
    bits (5) M         // Mode field                         [AArch32 only]
)

Library pseudocode for shared/functions/system/RandomTag

// RandomTag()
// ===========
// Generate a random Allocation Tag.

bits(4) RandomTag()
    bits(4) tag;
    for i = 0 to 3
        tag<i> = NextRandomTagBit();
    return tag;

Library pseudocode for shared/functions/system/RandomTagBit

// NextRandomTagBit()
// ==================
// Generate a random bit suitable for generating a random Allocation Tag.

bit NextRandomTagBit()
    bits(16) lfsr = RGSR_EL1.SEED;
    bit top = lfsr<5> EOR lfsr<3> EOR lfsr<2> EOR lfsr<0>;
    RGSR_EL1.SEED = top:lfsr<15:1>;
    return top;
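Editor's note: the tag generator is a 16-bit Fibonacci-style LFSR with taps at bits 5, 3, 2 and 0. A C sketch of one step, plus the four-bit assembly performed by RandomTag(), follows; `seed` stands in for RGSR_EL1.SEED and both function names are illustrative, not architectural.

#include <stdint.h>

/* One LFSR step: the new bit is the XOR of taps 5, 3, 2 and 0, and the
   seed shifts right with the new bit inserted at the top. */
static unsigned next_random_tag_bit(uint16_t *seed)
{
    uint16_t lfsr = *seed;
    unsigned top = ((lfsr >> 5) ^ (lfsr >> 3) ^ (lfsr >> 2) ^ lfsr) & 1;
    *seed = (uint16_t)((top << 15) | (lfsr >> 1));
    return top;
}

/* RandomTag() assembles four successive LFSR bits into one Allocation Tag. */
static unsigned random_tag(uint16_t *seed)
{
    unsigned tag = 0;
    for (int i = 0; i < 4; i++)
        tag |= next_random_tag_bit(seed) << i;
    return tag;
}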

Library pseudocode for shared/functions/system/RestoredITBits

// RestoredITBits()
// ================
// Get the value of PSTATE.IT to be restored on this exception return.

bits(8) RestoredITBits(bits(32) spsr)
    it = spsr<15:10,26:25>;

    // When PSTATE.IL is set, it is CONSTRAINED UNPREDICTABLE whether the IT bits are each set
    // to zero or copied from the SPSR.
    if PSTATE.IL == '1' then
        if ConstrainUnpredictableBool(Unpredictable_ILZEROIT) then return '00000000';
        else return it;

    // The IT bits are forced to zero when they are set to a reserved value.
    if !IsZero(it<7:4>) && IsZero(it<3:0>) then
        return '00000000';

    // The IT bits are forced to zero when returning to A32 state, or when returning to an EL
    // with the ITD bit set to 1, and the IT bits are describing a multi-instruction block.
    itd = if PSTATE.EL == EL2 then HSCTLR.ITD else SCTLR.ITD;
    if (spsr<5> == '0' && !IsZero(it)) || (itd == '1' && !IsZero(it<2:0>)) then
        return '00000000';
    else
        return it;

Library pseudocode for shared/functions/system/SCRType

type SCRType;

Library pseudocode for shared/functions/system/SCR_GEN

// SCR_GEN[] // ========= SCRType SCR_GEN[] // AArch32 secure & AArch64 EL3 registers are not architecturally mapped assert HaveEL(EL3); bits(32) r; if HighestELUsingAArch32() then r = SCR; else r = SCR_EL3; return r;

Library pseudocode for shared/functions/system/SendEvent

// Signal an event to all PEs in a multiprocessor system to set their Event Registers. // When a PE executes the SEV instruction, it causes this function to be executed SendEvent();

Library pseudocode for shared/functions/system/SendEventLocal

// SendEventLocal() // ================ // Set the local Event Register of this PE. // When a PE executes the SEVL instruction, it causes this function to be executed SendEventLocal() EventRegister = '1'; return;

Library pseudocode for shared/functions/system/SetPSTATEFromPSR

// SetPSTATEFromPSR() // ================== // Set PSTATE based on a PSR value SetPSTATEFromPSR(bits(32) spsr) PSTATE.SS = DebugExceptionReturnSS(spsr); if IllegalExceptionReturn(spsr) then PSTATE.IL = '1'; else // State that is reinstated only on a legal exception return PSTATE.IL = spsr<20>; if spsr<4> == '1' then // AArch32 state AArch32.WriteMode(spsr<4:0>); // Sets PSTATE.EL correctly else // AArch64 state PSTATE.nRW = '0'; PSTATE.EL = spsr<3:2>; PSTATE.SP = spsr<0>; // If PSTATE.IL is set and returning to AArch32 state, it is CONSTRAINED UNPREDICTABLE whether // the T bit is set to zero or copied from SPSR. if PSTATE.IL == '1' && PSTATE.nRW == '1' then if ConstrainUnpredictableBool(Unpredictable_ILZEROT) then spsr<5> = '0'; // State that is reinstated regardless of illegal exception return PSTATE.<N,Z,C,V> = spsr<31:28>; if HavePANExt() then PSTATE.PAN = spsr<22>; if PSTATE.nRW == '1' then // AArch32 state PSTATE.Q = spsr<27>; PSTATE.IT = RestoredITBits(spsr); ShouldAdvanceIT = FALSE; if HaveSSBSExt() then PSTATE.SSBS = spsr<23>; if HaveDITExt() then PSTATE.DIT = (if Restarting() then spsr<24> else spsr<21>); PSTATE.GE = spsr<19:16>; PSTATE.E = spsr<9>; PSTATE.<A,I,F> = spsr<8:6>; // No PSTATE.D in AArch32 state PSTATE.T = spsr<5>; // PSTATE.J is RES0 else // AArch64 state if HaveMTEExt() then PSTATE.TCO = spsr<25>; if HaveDITExt() then PSTATE.DIT = spsr<24>; if HaveUAOExt() then PSTATE.UAO = spsr<23>; if HaveSSBSExt() then PSTATE.SSBS = spsr<12>; if HaveBTIExt() then PSTATE.BTYPE = spsr<11:10>; PSTATE.<D,A,I,F> = spsr<9:6>; // No PSTATE.<Q,IT,GE,E,T> in AArch64 state return;

Library pseudocode for shared/functions/system/ShouldAdvanceIT

boolean ShouldAdvanceIT;

Library pseudocode for shared/functions/system/SpeculationBarrier

SpeculationBarrier();

Library pseudocode for shared/functions/system/SynchronizeContext

SynchronizeContext();

Library pseudocode for shared/functions/system/SynchronizeErrors

// Implements the error synchronization event. SynchronizeErrors();

Library pseudocode for shared/functions/system/TakeUnmaskedPhysicalSErrorInterrupts

// Take any pending unmasked physical SError interrupt TakeUnmaskedPhysicalSErrorInterrupts(boolean iesb_req);

Library pseudocode for shared/functions/system/TakeUnmaskedSErrorInterrupts

// Take any pending unmasked physical SError interrupt or unmasked virtual SError // interrupt. TakeUnmaskedSErrorInterrupts();

Library pseudocode for shared/functions/system/ThisInstr

bits(32) ThisInstr();

Library pseudocode for shared/functions/system/ThisInstrLength

integer ThisInstrLength();

Library pseudocode for shared/functions/system/Unreachable

Unreachable() assert FALSE;

Library pseudocode for shared/functions/system/UsingAArch32

// UsingAArch32() // ============== // Return TRUE if the current Exception level is using AArch32, FALSE if using AArch64. boolean UsingAArch32() boolean aarch32 = (PSTATE.nRW == '1'); if !HaveAnyAArch32() then assert !aarch32; if HighestELUsingAArch32() then assert aarch32; return aarch32;

Library pseudocode for shared/functions/system/WaitForEvent

// WaitForEvent() // ============== // PE suspends its operation and enters a low-power state // if the Event Register is clear when the WFE is executed WaitForEvent() if EventRegister == '0' then EnterLowPowerState(); return;

Library pseudocode for shared/functions/system/WaitForInterrupt

// WaitForInterrupt() // ================== // PE suspends its operation to enter a low-power state // until a WFI wake-up event occurs or the PE is reset WaitForInterrupt() EnterLowPowerState(); return;

Library pseudocode for shared/functions/unpredictable/ConstrainUnpredictable

// ConstrainUnpredictable() // ======================== // Return the appropriate Constraint result to control the caller's behavior. The return value // is IMPLEMENTATION DEFINED within a permitted list for each UNPREDICTABLE case. // (The permitted list is determined by an assert or case statement at the call site.) // NOTE: This version of the function uses an Unpredictable argument to define the call site. // This argument does not appear in the version used in the Armv8 Architecture Reference Manual. // The extra argument is used here to allow this example definition. This is an example only and // does not imply a fixed implementation of these behaviors. Indeed the intention is that it should // be defined by each implementation, according to its implementation choices. Constraint ConstrainUnpredictable(Unpredictable which) case which of when Unpredictable_WBOVERLAPLD return Constraint_WBSUPPRESS; // return loaded value when Unpredictable_WBOVERLAPST return Constraint_NONE; // store pre-writeback value when Unpredictable_LDPOVERLAP return Constraint_UNDEF; // instruction is UNDEFINED when Unpredictable_BASEOVERLAP return Constraint_NONE; // use original address when Unpredictable_DATAOVERLAP return Constraint_NONE; // store original value when Unpredictable_DEVPAGE2 return Constraint_FAULT; // take an alignment fault when Unpredictable_INSTRDEVICE return Constraint_NONE; // Do not take a fault when Unpredictable_RESCPACR return Constraint_UNKNOWN; // Map to UNKNOWN value when Unpredictable_RESMAIR return Constraint_UNKNOWN; // Map to UNKNOWN value when Unpredictable_RESTEXCB return Constraint_UNKNOWN; // Map to UNKNOWN value when Unpredictable_RESDACR return Constraint_UNKNOWN; // Map to UNKNOWN value when Unpredictable_RESPRRR return Constraint_UNKNOWN; // Map to UNKNOWN value when Unpredictable_RESVTCRS return Constraint_UNKNOWN; // Map to UNKNOWN value when Unpredictable_RESTnSZ return Constraint_FORCE; // Map to the limit value when Unpredictable_OORTnSZ return Constraint_FORCE; // Map to the limit value when Unpredictable_LARGEIPA return Constraint_FORCE; // Restrict the inputsize to the PAMax value when Unpredictable_ESRCONDPASS return Constraint_FALSE; // Report as "AL" when Unpredictable_ILZEROIT return Constraint_FALSE; // Do not zero PSTATE.IT when Unpredictable_ILZEROT return Constraint_FALSE; // Do not zero PSTATE.T when Unpredictable_BPVECTORCATCHPRI return Constraint_TRUE; // Debug Vector Catch: match on 2nd halfword when Unpredictable_VCMATCHHALF return Constraint_FALSE; // No match when Unpredictable_VCMATCHDAPA return Constraint_FALSE; // No match on Data Abort or Prefetch abort when Unpredictable_WPMASKANDBAS return Constraint_FALSE; // Watchpoint disabled when Unpredictable_WPBASCONTIGUOUS return Constraint_FALSE; // Watchpoint disabled when Unpredictable_RESWPMASK return Constraint_DISABLED; // Watchpoint disabled when Unpredictable_WPMASKEDBITS return Constraint_FALSE; // Watchpoint disabled when Unpredictable_RESBPWPCTRL return Constraint_DISABLED; // Breakpoint/watchpoint disabled when Unpredictable_BPNOTIMPL return Constraint_DISABLED; // Breakpoint disabled when Unpredictable_RESBPTYPE return Constraint_DISABLED; // Breakpoint disabled when Unpredictable_BPNOTCTXCMP return Constraint_DISABLED; // Breakpoint disabled when Unpredictable_BPMATCHHALF return Constraint_FALSE; // No match when Unpredictable_BPMISMATCHHALF return Constraint_FALSE; // No match when Unpredictable_RESTARTALIGNPC return Constraint_FALSE; // Do not force alignment when Unpredictable_RESTARTZEROUPPERPC return Constraint_TRUE; // Force zero extension when Unpredictable_ZEROUPPER return Constraint_TRUE; // zero top halves of X registers when Unpredictable_ERETZEROUPPERPC return Constraint_TRUE; // zero top half of PC when Unpredictable_A32FORCEALIGNPC return Constraint_FALSE; // Do not force alignment when Unpredictable_SMD return Constraint_UNDEF; // disabled SMC is Unallocated when Unpredictable_NONFAULT return Constraint_FALSE; // Speculation enabled when Unpredictable_SVEZEROUPPER return Constraint_TRUE; // zero top bits of Z registers when Unpredictable_SVELDNFDATA return Constraint_TRUE; // Load mem data in NF loads when Unpredictable_SVELDNFZERO return Constraint_TRUE; // Write zeros in NF loads when Unpredictable_AFUPDATE // AF update for alignment or permission fault return Constraint_TRUE; when Unpredictable_IESBinDebug // Use SCTLR[].IESB in Debug state return Constraint_TRUE; when Unpredictable_ZEROBTYPE return Constraint_TRUE; // Save BTYPE in SPSR_ELx/DPSR_EL0 as '00' when Unpredictable_CLEARERRITEZERO // Clearing sticky errors when instruction in flight return Constraint_FALSE;

Library pseudocode for shared/functions/unpredictable/ConstrainUnpredictableBits

// ConstrainUnpredictableBits() // ============================ // This is a variant of ConstrainUnpredictable for when the result can be Constraint_UNKNOWN. // If the result is Constraint_UNKNOWN then the function also returns UNKNOWN value, but that // value is always an allocated value; that is, one for which the behavior is not itself // CONSTRAINED. // NOTE: This version of the function uses an Unpredictable argument to define the call site. // This argument does not appear in the version used in the Armv8 Architecture Reference Manual. // See the NOTE on ConstrainUnpredictable() for more information. // This is an example placeholder only and does not imply a fixed implementation of the bits part // of the result, and may not be applicable in all cases. (Constraint,bits(width)) ConstrainUnpredictableBits(Unpredictable which) c = ConstrainUnpredictable(which); if c == Constraint_UNKNOWN then return (c, Zeros(width)); // See notes; this is an example implementation only else return (c, bits(width) UNKNOWN); // bits result not used

Library pseudocode for shared/functions/unpredictable/ConstrainUnpredictableBool

// ConstrainUnpredictableBool() // ============================ // This is a simple wrapper function for cases where the constrained result is either TRUE or FALSE. // NOTE: This version of the function uses an Unpredictable argument to define the call site. // This argument does not appear in the version used in the Armv8 Architecture Reference Manual. // See the NOTE on ConstrainUnpredictable() for more information. boolean ConstrainUnpredictableBool(Unpredictable which) c = ConstrainUnpredictable(which); assert c IN {Constraint_TRUE, Constraint_FALSE}; return (c == Constraint_TRUE);

Library pseudocode for shared/functions/unpredictable/ConstrainUnpredictableInteger

// ConstrainUnpredictableInteger() // =============================== // This is a variant of ConstrainUnpredictable for when the result can be Constraint_UNKNOWN. If // the result is Constraint_UNKNOWN then the function also returns an UNKNOWN value in the range // low to high, inclusive. // NOTE: This version of the function uses an Unpredictable argument to define the call site. // This argument does not appear in the version used in the Armv8 Architecture Reference Manual. // See the NOTE on ConstrainUnpredictable() for more information. // This is an example placeholder only and does not imply a fixed implementation of the integer part // of the result. (Constraint,integer) ConstrainUnpredictableInteger(integer low, integer high, Unpredictable which) c = ConstrainUnpredictable(which); if c == Constraint_UNKNOWN then return (c, low); // See notes; this is an example implementation only else return (c, integer UNKNOWN); // integer result not used

Library pseudocode for shared/functions/unpredictable/Constraint

enumeration Constraint {// General Constraint_NONE, // Instruction executes with // no change or side-effect to its described behavior Constraint_UNKNOWN, // Destination register has UNKNOWN value Constraint_UNDEF, // Instruction is UNDEFINED Constraint_UNDEFEL0, // Instruction is UNDEFINED at EL0 only Constraint_NOP, // Instruction executes as NOP Constraint_TRUE, Constraint_FALSE, Constraint_DISABLED, Constraint_UNCOND, // Instruction executes unconditionally Constraint_COND, // Instruction executes conditionally Constraint_ADDITIONAL_DECODE, // Instruction executes with additional decode // Load-store Constraint_WBSUPPRESS, Constraint_FAULT, // IPA too large Constraint_FORCE, Constraint_FORCENOSLCHECK};

Library pseudocode for shared/functions/unpredictable/Unpredictable

enumeration Unpredictable {// Writeback/transfer register overlap (load) Unpredictable_WBOVERLAPLD, // Writeback/transfer register overlap (store) Unpredictable_WBOVERLAPST, // Load Pair transfer register overlap Unpredictable_LDPOVERLAP, // Store-exclusive base/status register overlap Unpredictable_BASEOVERLAP, // Store-exclusive data/status register overlap Unpredictable_DATAOVERLAP, // Load-store alignment checks Unpredictable_DEVPAGE2, // Instruction fetch from Device memory Unpredictable_INSTRDEVICE, // Reserved CPACR value Unpredictable_RESCPACR, // Reserved MAIR value Unpredictable_RESMAIR, // Reserved TEX:C:B value Unpredictable_RESTEXCB, // Reserved PRRR value Unpredictable_RESPRRR, // Reserved DACR field Unpredictable_RESDACR, // Reserved VTCR.S value Unpredictable_RESVTCRS, // Reserved TCR.TnSZ value Unpredictable_RESTnSZ, // Out-of-range TCR.TnSZ value Unpredictable_OORTnSZ, // IPA size exceeds PA size Unpredictable_LARGEIPA, // Syndrome for a known-passing conditional A32 instruction Unpredictable_ESRCONDPASS, // Illegal State exception: zero PSTATE.IT Unpredictable_ILZEROIT, // Illegal State exception: zero PSTATE.T Unpredictable_ILZEROT, // Debug: prioritization of Vector Catch Unpredictable_BPVECTORCATCHPRI, // Debug Vector Catch: match on 2nd halfword Unpredictable_VCMATCHHALF, // Debug Vector Catch: match on Data Abort or Prefetch abort Unpredictable_VCMATCHDAPA, // Debug watchpoints: non-zero MASK and non-ones BAS Unpredictable_WPMASKANDBAS, // Debug watchpoints: non-contiguous BAS Unpredictable_WPBASCONTIGUOUS, // Debug watchpoints: reserved MASK Unpredictable_RESWPMASK, // Debug watchpoints: non-zero MASKed bits of address Unpredictable_WPMASKEDBITS, // Debug breakpoints and watchpoints: reserved control bits Unpredictable_RESBPWPCTRL, // Debug breakpoints: not implemented Unpredictable_BPNOTIMPL, // Debug breakpoints: reserved type Unpredictable_RESBPTYPE, // Debug breakpoints: not-context-aware breakpoint Unpredictable_BPNOTCTXCMP, // Debug breakpoints: match on 2nd halfword of instruction Unpredictable_BPMATCHHALF, // Debug breakpoints: mismatch on 2nd halfword of instruction Unpredictable_BPMISMATCHHALF, // Debug: restart to a misaligned AArch32 PC value Unpredictable_RESTARTALIGNPC, // Debug: restart to a not-zero-extended AArch32 PC value Unpredictable_RESTARTZEROUPPERPC, // Zero top 32 bits of X registers in AArch32 state Unpredictable_ZEROUPPER, // Zero top 32 bits of PC on illegal return to AArch32 state Unpredictable_ERETZEROUPPERPC, // Force address to be aligned when interworking branch to A32 state Unpredictable_A32FORCEALIGNPC, // SMC disabled Unpredictable_SMD, // FF speculation Unpredictable_NONFAULT, // Zero top bits of Z registers in EL change Unpredictable_SVEZEROUPPER, // Load mem data in NF loads Unpredictable_SVELDNFDATA, // Write zeros in NF loads Unpredictable_SVELDNFZERO, // Access Flag Update by HW Unpredictable_AFUPDATE, // Consider SCTLR[].IESB in Debug state Unpredictable_IESBinDebug, // No events selected in PMSEVFR_EL1 Unpredictable_ZEROPMSEVFR, // No operation type selected in PMSFCR_EL1 Unpredictable_NOOPTYPES, // Zero latency in PMSLATFR_EL1 Unpredictable_ZEROMINLATENCY, // Zero saved BType value in SPSR_ELx/DPSR_EL0 Unpredictable_ZEROBTYPE, // Timestamp constrained to virtual or physical Unpredictable_EL2TIMESTAMP, Unpredictable_EL1TIMESTAMP, // Clearing DCC/ITR sticky flags when instruction is in flight Unpredictable_CLEARERRITEZERO};

Library pseudocode for shared/functions/vector/AdvSIMDExpandImm

// AdvSIMDExpandImm() // ================== bits(64) AdvSIMDExpandImm(bit op, bits(4) cmode, bits(8) imm8) case cmode<3:1> of when '000' imm64 = Replicate(Zeros(24):imm8, 2); when '001' imm64 = Replicate(Zeros(16):imm8:Zeros(8), 2); when '010' imm64 = Replicate(Zeros(8):imm8:Zeros(16), 2); when '011' imm64 = Replicate(imm8:Zeros(24), 2); when '100' imm64 = Replicate(Zeros(8):imm8, 4); when '101' imm64 = Replicate(imm8:Zeros(8), 4); when '110' if cmode<0> == '0' then imm64 = Replicate(Zeros(16):imm8:Ones(8), 2); else imm64 = Replicate(Zeros(8):imm8:Ones(16), 2); when '111' if cmode<0> == '0' && op == '0' then imm64 = Replicate(imm8, 8); if cmode<0> == '0' && op == '1' then imm8a = Replicate(imm8<7>, 8); imm8b = Replicate(imm8<6>, 8); imm8c = Replicate(imm8<5>, 8); imm8d = Replicate(imm8<4>, 8); imm8e = Replicate(imm8<3>, 8); imm8f = Replicate(imm8<2>, 8); imm8g = Replicate(imm8<1>, 8); imm8h = Replicate(imm8<0>, 8); imm64 = imm8a:imm8b:imm8c:imm8d:imm8e:imm8f:imm8g:imm8h; if cmode<0> == '1' && op == '0' then imm32 = imm8<7>:NOT(imm8<6>):Replicate(imm8<6>,5):imm8<5:0>:Zeros(19); imm64 = Replicate(imm32, 2); if cmode<0> == '1' && op == '1' then if UsingAArch32() then ReservedEncoding(); imm64 = imm8<7>:NOT(imm8<6>):Replicate(imm8<6>,8):imm8<5:0>:Zeros(48); return imm64;
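
A partial C sketch of the expansion above, covering only the simple byte-placement cases (cmode<3:1> = '000' through '101'); the op-dependent '110'/'111' cases are omitted for brevity, and the function names are illustrative:

#include <stdint.h>

static uint64_t replicate32(uint32_t v) { return ((uint64_t)v << 32) | v; }
static uint64_t replicate16(uint16_t v) {
    uint64_t x = ((uint32_t)v << 16) | v;
    return (x << 32) | x;
}

static uint64_t advsimd_expand_imm_partial(unsigned cmode, uint8_t imm8) {
    switch (cmode >> 1) {                              /* cmode<3:1> */
    case 0: return replicate32(imm8);                  /* imm8 in byte 0 */
    case 1: return replicate32((uint32_t)imm8 << 8);
    case 2: return replicate32((uint32_t)imm8 << 16);
    case 3: return replicate32((uint32_t)imm8 << 24);
    case 4: return replicate16(imm8);                  /* halfword cases */
    case 5: return replicate16((uint16_t)((uint32_t)imm8 << 8));
    default: return 0;                                 /* '110'/'111' omitted */
    }
}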

Library pseudocode for shared/functions/vector/PolynomialMult

// PolynomialMult() // ================ bits(M+N) PolynomialMult(bits(M) op1, bits(N) op2) result = Zeros(M+N); extended_op2 = ZeroExtend(op2, M+N); for i=0 to M-1 if op1<i> == '1' then result = result EOR LSL(extended_op2, i); return result;
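
PolynomialMult is a carry-less multiply: each set bit of op1 XORs in a shifted copy of op2. A C sketch for the M = N = 32 case (the function name is illustrative):

#include <stdint.h>

static uint64_t polynomial_mult32(uint32_t op1, uint32_t op2) {
    uint64_t result = 0;
    uint64_t extended_op2 = op2;          /* ZeroExtend(op2, M+N) */
    for (int i = 0; i < 32; i++)
        if ((op1 >> i) & 1u)
            result ^= extended_op2 << i;  /* result EOR LSL(extended_op2, i) */
    return result;
}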

Library pseudocode for shared/functions/vector/SatQ

// SatQ() // ====== (bits(N), boolean) SatQ(integer i, integer N, boolean unsigned) (result, sat) = if unsigned then UnsignedSatQ(i, N) else SignedSatQ(i, N); return (result, sat);

Library pseudocode for shared/functions/vector/SignedSatQ

// SignedSatQ() // ============ (bits(N), boolean) SignedSatQ(integer i, integer N) if i > 2^(N-1) - 1 then result = 2^(N-1) - 1; saturated = TRUE; elsif i < -(2^(N-1)) then result = -(2^(N-1)); saturated = TRUE; else result = i; saturated = FALSE; return (result<N-1:0>, saturated);
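
A C sketch of signed saturation for N = 16, with the unsigned variant (UnsignedSatQ, below) shown alongside for comparison; both clamp to the representable range and report whether saturation occurred:

#include <stdbool.h>
#include <stdint.h>

static int16_t signed_sat_q16(int64_t i, bool *saturated) {
    if (i > INT16_MAX) { *saturated = true; return INT16_MAX; }
    if (i < INT16_MIN) { *saturated = true; return INT16_MIN; }
    *saturated = false;
    return (int16_t)i;
}

static uint16_t unsigned_sat_q16(int64_t i, bool *saturated) {
    if (i > UINT16_MAX) { *saturated = true; return UINT16_MAX; }
    if (i < 0)          { *saturated = true; return 0; }
    *saturated = false;
    return (uint16_t)i;
}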

Library pseudocode for shared/functions/vector/UnsignedRSqrtEstimate

// UnsignedRSqrtEstimate() // ======================= bits(N) UnsignedRSqrtEstimate(bits(N) operand) assert N IN {16,32}; if operand<N-1:N-2> == '00' then // Operands <= 0x3FFFFFFF produce 0xFFFFFFFF result = Ones(N); else // input is in the range 0x40000000 .. 0xffffffff representing [0.25 .. 1.0) // estimate is in the range 256 .. 511 representing [1.0 .. 2.0) case N of when 16 estimate = RecipSqrtEstimate(UInt(operand<15:7>)); when 32 estimate = RecipSqrtEstimate(UInt(operand<31:23>)); // result is in the range 0x80000000 .. 0xff800000 representing [1.0 .. 2.0) result = estimate<8:0> : Zeros(N-9); return result;

Library pseudocode for shared/functions/vector/UnsignedRecipEstimate

// UnsignedRecipEstimate() // ======================= bits(N) UnsignedRecipEstimate(bits(N) operand) assert N IN {16,32}; if operand<N-1> == '0' then // Operands <= 0x7FFFFFFF produce 0xFFFFFFFF result = Ones(N); else // input is in the range 0x80000000 .. 0xffffffff representing [0.5 .. 1.0) // estimate is in the range 256 to 511 representing [1.0 .. 2.0) case N of when 16 estimate = RecipEstimate(UInt(operand<15:7>)); when 32 estimate = RecipEstimate(UInt(operand<31:23>)); // result is in the range 0x80000000 .. 0xff800000 representing [1.0 .. 2.0) result = estimate<8:0> : Zeros(N-9); return result;

Library pseudocode for shared/functions/vector/UnsignedSatQ

// UnsignedSatQ() // ============== (bits(N), boolean) UnsignedSatQ(integer i, integer N) if i > 2^N - 1 then result = 2^N - 1; saturated = TRUE; elsif i < 0 then result = 0; saturated = TRUE; else result = i; saturated = FALSE; return (result<N-1:0>, saturated);

Library pseudocode for shared/trace/selfhosted/SelfHostedTraceEnabled

// SelfHostedTraceEnabled() // ======================== // Returns TRUE if Self-hosted Trace is enabled. boolean SelfHostedTraceEnabled() if !HaveTraceExt() || !HaveSelfHostedTrace() then return FALSE; if HaveEL(EL3) then secure_trace_enable = (if ELUsingAArch32(EL3) then SDCR.STE else MDCR_EL3.STE); niden = (secure_trace_enable == '0' || ExternalSecureNoninvasiveDebugEnabled()); else // If no EL3, IsSecure() returns the Effective value of (SCR_EL3.NS == '0') niden = (!IsSecure() || ExternalSecureNoninvasiveDebugEnabled()); return (EDSCR.TFO == '0' || !niden);

Library pseudocode for shared/trace/selfhosted/TraceAllowed

// TraceAllowed() // ============== // Returns TRUE if Self-hosted Trace is allowed in the current Security state and Exception Level boolean TraceAllowed() if !HaveTraceExt() then return FALSE; if SelfHostedTraceEnabled() then if IsSecure() && HaveEL(EL3) then secure_trace_enable = (if ELUsingAArch32(EL3) then SDCR.STE else MDCR_EL3.STE); if secure_trace_enable == '0' then return FALSE; TGE_bit = if EL2Enabled() then HCR_EL2.TGE else '0'; case PSTATE.EL of when EL3 TRE_bit = if HighestELUsingAArch32() then TRFCR.E1TRE else '0'; when EL2 TRE_bit = TRFCR_EL2.E2TRE; when EL1 TRE_bit = TRFCR_EL1.E1TRE; when EL0 TRE_bit = if TGE_bit == '1' then TRFCR_EL2.E0HTRE else TRFCR_EL1.E0TRE; return TRE_bit == '1'; else return (!IsSecure() || ExternalSecureNoninvasiveDebugEnabled());

Library pseudocode for shared/trace/selfhosted/TraceContextIDR2

// TraceContextIDR2() // ================== boolean TraceContextIDR2() if !TraceAllowed() || !HaveEL(EL2) then return FALSE; return (!SelfHostedTraceEnabled() || TRFCR_EL2.CX == '1');

Library pseudocode for shared/trace/selfhosted/TraceTimeStamp

// TraceTimeStamp() // ================ TimeStamp TraceTimeStamp() if SelfHostedTraceEnabled() then if HaveEL(EL2) then TS_el2 = TRFCR_EL2.TS; if TS_el2 == '10' then (-, TS_el2) = ConstrainUnpredictableBits(Unpredictable_EL2TIMESTAMP); // Reserved value case TS_el2 of when '00' /* falls through to check TRFCR_EL1.TS */ when '01' return TimeStamp_Virtual; when '11' return TimeStamp_Physical; otherwise Unreachable(); // ConstrainUnpredictableBits removes this case TS_el1 = TRFCR_EL1.TS; if TS_el1 == 'x0' then (-, TS_el1) = ConstrainUnpredictableBits(Unpredictable_EL1TIMESTAMP); // Reserved values case TS_el1 of when '01' return TimeStamp_Virtual; when '11' return TimeStamp_Physical; otherwise Unreachable(); // ConstrainUnpredictableBits removes this case else return TimeStamp_CoreSight;

Library pseudocode for shared/translation/attrs/CombineS1S2AttrHints

// CombineS1S2AttrHints() // ====================== // Combines cacheability attributes and allocation hints from stage 1 and stage 2 MemAttrHints CombineS1S2AttrHints(MemAttrHints s1desc, MemAttrHints s2desc) MemAttrHints result; if HaveStage2MemAttrControl() && HCR_EL2.FWB == '1' then if s2desc.attrs == MemAttr_WB then result.attrs = s1desc.attrs; elsif s2desc.attrs == MemAttr_WT then result.attrs = MemAttr_WB; // force Write-back else result.attrs = MemAttr_NC; else if s2desc.attrs == '01' || s1desc.attrs == '01' then result.attrs = bits(2) UNKNOWN; // Reserved elsif s2desc.attrs == MemAttr_NC || s1desc.attrs == MemAttr_NC then result.attrs = MemAttr_NC; // Non-cacheable elsif s2desc.attrs == MemAttr_WT || s1desc.attrs == MemAttr_WT then result.attrs = MemAttr_WT; // Write-through else result.attrs = MemAttr_WB; // Write-back result.hints = s1desc.hints; result.transient = s1desc.transient; return result;
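
Outside the HCR_EL2.FWB case, the rule above picks the weaker of the two cacheability attributes (Non-cacheable beats Write-through beats Write-back). A C sketch using the 2-bit encodings implied above, where '00' is MemAttr_NC, '10' MemAttr_WT, '11' MemAttr_WB and '01' is reserved (the FWB path is omitted):

enum { MEMATTR_NC = 0, MEMATTR_RESERVED = 1, MEMATTR_WT = 2, MEMATTR_WB = 3 };

static int combine_s1_s2_attrs(int s1, int s2) {
    if (s1 == MEMATTR_RESERVED || s2 == MEMATTR_RESERVED)
        return -1;                                 /* UNKNOWN: reserved encoding */
    if (s1 == MEMATTR_NC || s2 == MEMATTR_NC) return MEMATTR_NC;
    if (s1 == MEMATTR_WT || s2 == MEMATTR_WT) return MEMATTR_WT;
    return MEMATTR_WB;
}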

Library pseudocode for shared/translation/attrs/CombineS1S2Desc

// CombineS1S2Desc() // ================= // Combines the address descriptors from stage 1 and stage 2 AddressDescriptor CombineS1S2Desc(AddressDescriptor s1desc, AddressDescriptor s2desc) AddressDescriptor result; result.paddress = s2desc.paddress; if IsFault(s1desc) || IsFault(s2desc) then result = if IsFault(s1desc) then s1desc else s2desc; elsif s2desc.memattrs.type == MemType_Device || s1desc.memattrs.type == MemType_Device then result.memattrs.type = MemType_Device; if s1desc.memattrs.type == MemType_Normal then result.memattrs.device = s2desc.memattrs.device; elsif s2desc.memattrs.type == MemType_Normal then result.memattrs.device = s1desc.memattrs.device; else // Both Device result.memattrs.device = CombineS1S2Device(s1desc.memattrs.device, s2desc.memattrs.device); result.memattrs.tagged = FALSE; else // Both Normal result.memattrs.type = MemType_Normal; result.memattrs.device = DeviceType UNKNOWN; result.memattrs.inner = CombineS1S2AttrHints(s1desc.memattrs.inner, s2desc.memattrs.inner); result.memattrs.outer = CombineS1S2AttrHints(s1desc.memattrs.outer, s2desc.memattrs.outer); result.memattrs.shareable = (s1desc.memattrs.shareable || s2desc.memattrs.shareable); result.memattrs.outershareable = (s1desc.memattrs.outershareable || s2desc.memattrs.outershareable); result.memattrs.tagged = (s1desc.memattrs.tagged && result.memattrs.inner.attrs == MemAttr_WB && result.memattrs.inner.hints == MemHint_RWA && result.memattrs.outer.attrs == MemAttr_WB && result.memattrs.outer.hints == MemHint_RWA); result.memattrs = MemAttrDefaults(result.memattrs); return result;

Library pseudocode for shared/translation/attrs/CombineS1S2Device

// CombineS1S2Device() // =================== // Combines device types from stage 1 and stage 2 DeviceType CombineS1S2Device(DeviceType s1device, DeviceType s2device) if s2device == DeviceType_nGnRnE || s1device == DeviceType_nGnRnE then result = DeviceType_nGnRnE; elsif s2device == DeviceType_nGnRE || s1device == DeviceType_nGnRE then result = DeviceType_nGnRE; elsif s2device == DeviceType_nGRE || s1device == DeviceType_nGRE then result = DeviceType_nGRE; else result = DeviceType_GRE; return result;
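
The if-chain above is equivalent to taking the most restrictive of the two device types, with nGnRnE strongest and GRE weakest. A C sketch (the enum ordering is chosen so that a smaller value means more restrictive):

typedef enum { DEV_nGnRnE, DEV_nGnRE, DEV_nGRE, DEV_GRE } DeviceTypeC;

static DeviceTypeC combine_s1_s2_device(DeviceTypeC s1, DeviceTypeC s2) {
    return (s1 < s2) ? s1 : s2;   /* most restrictive of the two stages */
}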

Library pseudocode for shared/translation/attrs/LongConvertAttrsHints

// LongConvertAttrsHints() // ======================= // Convert the long attribute fields for Normal memory as used in the MAIR fields // to orthogonal attributes and hints MemAttrHints LongConvertAttrsHints(bits(4) attrfield, AccType acctype) assert !IsZero(attrfield); MemAttrHints result; if S1CacheDisabled(acctype) then // Force Non-cacheable result.attrs = MemAttr_NC; result.hints = MemHint_No; else if attrfield<3:2> == '00' then // Write-through transient result.attrs = MemAttr_WT; result.hints = attrfield<1:0>; result.transient = TRUE; elsif attrfield<3:0> == '0100' then // Non-cacheable (no allocate) result.attrs = MemAttr_NC; result.hints = MemHint_No; result.transient = FALSE; elsif attrfield<3:2> == '01' then // Write-back transient result.attrs = MemAttr_WB; result.hints = attrfield<1:0>; result.transient = TRUE; else // Write-through/Write-back non-transient result.attrs = attrfield<3:2>; result.hints = attrfield<1:0>; result.transient = FALSE; return result;

Library pseudocode for shared/translation/attrs/MemAttrDefaults

// MemAttrDefaults() // ================= // Supply default values for memory attributes, including overriding the shareability attributes // for Device and Non-cacheable memory types. MemoryAttributes MemAttrDefaults(MemoryAttributes memattrs) if memattrs.type == MemType_Device then memattrs.inner = MemAttrHints UNKNOWN; memattrs.outer = MemAttrHints UNKNOWN; memattrs.shareable = TRUE; memattrs.outershareable = TRUE; else memattrs.device = DeviceType UNKNOWN; if memattrs.inner.attrs == MemAttr_NC && memattrs.outer.attrs == MemAttr_NC then memattrs.shareable = TRUE; memattrs.outershareable = TRUE; return memattrs;

Library pseudocode for shared/translation/attrs/S1CacheDisabled

// S1CacheDisabled() // ================= boolean S1CacheDisabled(AccType acctype) if ELUsingAArch32(S1TranslationRegime()) then if PSTATE.EL == EL2 then enable = if acctype == AccType_IFETCH then HSCTLR.I else HSCTLR.C; else enable = if acctype == AccType_IFETCH then SCTLR.I else SCTLR.C; else enable = if acctype == AccType_IFETCH then SCTLR[].I else SCTLR[].C; return enable == '0';

Library pseudocode for shared/translation/attrs/S2AttrDecode

// S2AttrDecode() // ============== // Converts the Stage 2 attribute fields into orthogonal attributes and hints MemoryAttributes S2AttrDecode(bits(2) SH, bits(4) attr, AccType acctype) MemoryAttributes memattrs; apply_force_writeback = HaveStage2MemAttrControl() && HCR_EL2.FWB == '1'; // Device memory if (apply_force_writeback && attr<2> == '0') || attr<3:2> == '00' then memattrs.type = MemType_Device; case attr<1:0> of when '00' memattrs.device = DeviceType_nGnRnE; when '01' memattrs.device = DeviceType_nGnRE; when '10' memattrs.device = DeviceType_nGRE; when '11' memattrs.device = DeviceType_GRE; // Normal memory elsif attr<1:0> != '00' then memattrs.type = MemType_Normal; if apply_force_writeback then memattrs.outer = S2ConvertAttrsHints(attr<1:0>, acctype); else memattrs.outer = S2ConvertAttrsHints(attr<3:2>, acctype); memattrs.inner = S2ConvertAttrsHints(attr<1:0>, acctype); memattrs.shareable = SH<1> == '1'; memattrs.outershareable = SH == '10'; else memattrs = MemoryAttributes UNKNOWN; // Reserved return MemAttrDefaults(memattrs);
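
A C sketch of the memory-type selection above, ignoring the FWB override and the cache-disable forcing (both omitted for brevity): attr<3:2> == '00' selects Device, a non-zero attr<1:0> selects Normal, and the remaining encodings are reserved:

typedef enum { S2_DEVICE, S2_NORMAL, S2_RESERVED } S2MemTypeC;

static S2MemTypeC s2_memtype(unsigned attr /* 4-bit field */) {
    if (((attr >> 2) & 3u) == 0) return S2_DEVICE;   /* attr<3:2> == '00' */
    if ((attr & 3u) != 0)        return S2_NORMAL;   /* attr<1:0> != '00' */
    return S2_RESERVED;
}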

Library pseudocode for shared/translation/attrs/S2CacheDisabled

// S2CacheDisabled() // ================= boolean S2CacheDisabled(AccType acctype) if ELUsingAArch32(EL2) then disable = if acctype == AccType_IFETCH then HCR2.ID else HCR2.CD; else disable = if acctype == AccType_IFETCH then HCR_EL2.ID else HCR_EL2.CD; return disable == '1';

Library pseudocode for shared/translation/attrs/S2ConvertAttrsHints

// S2ConvertAttrsHints() // ===================== // Converts the attribute fields for Normal memory as used in stage 2 // descriptors to orthogonal attributes and hints MemAttrHints S2ConvertAttrsHints(bits(2) attr, AccType acctype) assert !IsZero(attr); MemAttrHints result; if S2CacheDisabled(acctype) then // Force Non-cacheable result.attrs = MemAttr_NC; result.hints = MemHint_No; else case attr of when '01' // Non-cacheable (no allocate) result.attrs = MemAttr_NC; result.hints = MemHint_No; when '10' // Write-through result.attrs = MemAttr_WT; result.hints = MemHint_RWA; when '11' // Write-back result.attrs = MemAttr_WB; result.hints = MemHint_RWA; result.transient = FALSE; return result;

Library pseudocode for shared/translation/attrs/ShortConvertAttrsHints

// ShortConvertAttrsHints()
// ========================
// Converts the short attribute fields for Normal memory as used in the TTBR and
// TEX fields to orthogonal attributes and hints

MemAttrHints ShortConvertAttrsHints(bits(2) RGN, AccType acctype, boolean secondstage)

    MemAttrHints result;

    if (!secondstage && S1CacheDisabled(acctype)) || (secondstage && S2CacheDisabled(acctype)) then
        // Force Non-cacheable
        result.attrs = MemAttr_NC;
        result.hints = MemHint_No;
    else
        case RGN of
            when '00'           // Non-cacheable (no allocate)
                result.attrs = MemAttr_NC;
                result.hints = MemHint_No;
            when '01'           // Write-back, Read and Write allocate
                result.attrs = MemAttr_WB;
                result.hints = MemHint_RWA;
            when '10'           // Write-through, Read allocate
                result.attrs = MemAttr_WT;
                result.hints = MemHint_RA;
            when '11'           // Write-back, Read allocate
                result.attrs = MemAttr_WB;
                result.hints = MemHint_RA;

    result.transient = FALSE;

    return result;
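Compared with the stage 2 table, the short-descriptor RGN encoding swaps the meaning of '01' and '10' and uses the fourth value for a weaker allocation hint. A sketch of the lookup, with illustrative names again:

# Hypothetical lookup for the short-descriptor RGN encoding.
RGN = {
    0b00: ("NC", "No"),    # Non-cacheable
    0b01: ("WB", "RWA"),   # Write-back, read and write allocate
    0b10: ("WT", "RA"),    # Write-through, read allocate
    0b11: ("WB", "RA"),    # Write-back, read allocate
}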

Library pseudocode for shared/translation/attrs/WalkAttrDecode

// WalkAttrDecode()
// ================

MemoryAttributes WalkAttrDecode(bits(2) SH, bits(2) ORGN, bits(2) IRGN, boolean secondstage)

    MemoryAttributes memattrs;

    AccType acctype = AccType_NORMAL;
    memattrs.type = MemType_Normal;
    memattrs.inner = ShortConvertAttrsHints(IRGN, acctype, secondstage);
    memattrs.outer = ShortConvertAttrsHints(ORGN, acctype, secondstage);
    memattrs.shareable = SH<1> == '1';
    memattrs.outershareable = SH == '10';
    memattrs.tagged = FALSE;

    return MemAttrDefaults(memattrs);
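The SH decode used here recurs throughout the translation code: bit 1 makes the region shareable, and only the exact value '10' marks it Outer Shareable. A minimal sketch:

# Hypothetical model of the SH field decode in WalkAttrDecode().
def decode_shareability(sh):
    shareable = ((sh >> 1) & 1) == 1
    outershareable = sh == 0b10
    return shareable, outershareable

assert decode_shareability(0b10) == (True, True)     # Outer Shareable
assert decode_shareability(0b11) == (True, False)    # Inner Shareable
assert decode_shareability(0b00) == (False, False)   # Non-shareable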

Library pseudocode for shared/translation/translation/HasS2Translation

// HasS2Translation()
// ==================
// Returns TRUE if stage 2 translation is present for the current translation regime

boolean HasS2Translation()
    return (EL2Enabled() && !IsInHost() && PSTATE.EL IN {EL0,EL1});
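In other words, stage 2 applies only to the EL1&0 regime under an enabled EL2; EL2, EL3 and the VHE host regime never see it. A sketch with hypothetical boolean inputs replacing the ASL helpers:

# Hypothetical model of HasS2Translation().
def has_s2_translation(el2_enabled, is_in_host, current_el):
    return el2_enabled and not is_in_host and current_el in ("EL0", "EL1")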

Library pseudocode for shared/translation/translation/Have16bitVMID

// Have16bitVMID()
// ===============
// Returns TRUE if EL2 and support for a 16-bit VMID are implemented.

boolean Have16bitVMID()
    return HaveEL(EL2) && boolean IMPLEMENTATION_DEFINED;
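When this returns TRUE, VTCR_EL2.VS selects the VMID width actually in use; a one-line sketch of that relationship (hedged, since the RES0 and trap details are omitted):

# Hypothetical model: VTCR_EL2.VS == '1' selects 16-bit VMIDs, but only
# when 16-bit VMID support is implemented.
def vmid_width(have_16bit_vmid, vtcr_vs):
    return 16 if have_16bit_vmid and vtcr_vs == 1 else 8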

Library pseudocode for shared/translation/translation/PAMax

// PAMax()
// =======
// Returns the IMPLEMENTATION DEFINED upper limit on the physical address
// size for this processor, as log2().

integer PAMax()
    return integer IMPLEMENTATION_DEFINED "Maximum Physical Address Size";
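On AArch64 this implementation-defined limit is surfaced to software through ID_AA64MMFR0_EL1.PARange; a sketch of the architected mapping, assuming the encodings of this spec vintage:

# ID_AA64MMFR0_EL1.PARange encoding -> log2(maximum physical address).
PA_RANGE = {
    0b0000: 32, 0b0001: 36, 0b0010: 40, 0b0011: 42,
    0b0100: 44, 0b0101: 48, 0b0110: 52,   # 52-bit needs ARMv8.2-LPA
}

def pa_max(parange):
    return PA_RANGE[parange]   # other encodings are reserved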

Library pseudocode for shared/translation/translation/S1TranslationRegime

// S1TranslationRegime()
// =====================
// Stage 1 translation regime for the given Exception level

bits(2) S1TranslationRegime(bits(2) el)
    if el != EL0 then
        return el;
    elsif HaveEL(EL3) && ELUsingAArch32(EL3) && SCR.NS == '0' then
        return EL3;
    elsif HaveVirtHostExt() && ELIsInHost(el) then
        return EL2;
    else
        return EL1;

// S1TranslationRegime()
// =====================
// Returns the Exception level controlling the current Stage 1 translation regime. For the most
// part this is unused in code because the system register accessors (SCTLR[], etc.) implicitly
// return the correct value.

bits(2) S1TranslationRegime()
    return S1TranslationRegime(PSTATE.EL);
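The selection logic is easier to audit as a flat conditional; a Python sketch with hypothetical boolean inputs standing in for the ASL helpers:

# Hypothetical model of S1TranslationRegime(el): only EL0 needs the
# fallthrough, every other EL controls its own stage 1 regime.
def s1_translation_regime(el, secure_aarch32_el3=False, el0_in_host=False):
    if el != "EL0":
        return el
    elif secure_aarch32_el3:   # HaveEL(EL3) && ELUsingAArch32(EL3) && SCR.NS == '0'
        return "EL3"
    elif el0_in_host:          # HaveVirtHostExt() && ELIsInHost(el)
        return "EL2"
    else:
        return "EL1"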

Library pseudocode for shared/translation/translation/VAMax

// VAMax()
// =======
// Returns the IMPLEMENTATION DEFINED upper limit on the virtual address
// size for this processor, as log2().

integer VAMax()
    return integer IMPLEMENTATION_DEFINED "Maximum Virtual Address Size";
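The virtual-side counterpart is reported through ID_AA64MMFR2_EL1.VARange on AArch64 implementations; a hedged one-liner, where the 52-bit case assumes ARMv8.2-LVA:

# VARange == '0001' indicates 52-bit VAs; otherwise the limit is 48.
def va_max(varange):
    return 52 if varange == 0b0001 else 48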


Internal version only: isa v00_88, pseudocode v85-xml-00bet9_rc1_1 ; Build timestamp: 2018-12-12T12:33

Copyright © 2010-2018 Arm Limited or its affiliates. All rights reserved. This document is Non-Confidential.
