
Shared Pseudocode Functions

This page displays common pseudocode functions shared by many pages.

Pseudocodes

Library pseudocode for aarch32/debug/VCRMatch/AArch32.VCRMatch

// AArch32.VCRMatch()
// ==================

boolean AArch32.VCRMatch(bits(32) vaddress)

    if UsingAArch32() && ELUsingAArch32(EL1) && IsZero(vaddress<1:0>) && PSTATE.EL != EL2 then
        // Each bit position in this string corresponds to a bit in DBGVCR and an exception vector.
        match_word = Zeros(32);

        if vaddress<31:5> == ExcVectorBase()<31:5> then
            if HaveEL(EL3) && !IsSecure() then
                match_word<UInt(vaddress<4:2>) + 24> = '1';    // Non-secure vectors
            else
                match_word<UInt(vaddress<4:2>) + 0> = '1';     // Secure vectors (or no EL3)

        if HaveEL(EL3) && ELUsingAArch32(EL3) && IsSecure() && vaddress<31:5> == MVBAR<31:5> then
            match_word<UInt(vaddress<4:2>) + 8> = '1';         // Monitor vectors

        // Mask out bits not corresponding to vectors.
        if !HaveEL(EL3) then
            mask = '00000000':'00000000':'00000000':'11011110';    // DBGVCR[31:8] are RES0
        elsif !ELUsingAArch32(EL3) then
            mask = '11011110':'00000000':'00000000':'11011110';    // DBGVCR[15:8] are RES0
        else
            mask = '11011110':'00000000':'11011100':'11011110';

        match_word = match_word AND DBGVCR AND mask;
        match = !IsZero(match_word);

        // Check for UNPREDICTABLE case - match on Prefetch Abort and Data Abort vectors
        if !IsZero(match_word<28:27,12:11,4:3>) && DebugTarget() == PSTATE.EL then
            match = ConstrainUnpredictableBool(Unpredictable_VCMATCHDAPA);
    else
        match = FALSE;

    return match;
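
As a rough illustration of the match-word scheme above (not ARM's code), the following C sketch builds the one-hot vector-catch word from a vector address and applies the DBGVCR mask. The helper name and parameters are hypothetical.

#include <stdbool.h>
#include <stdint.h>

/* One bit per exception vector: vaddress[4:2] selects the vector, and the
 * bank offset is 0 (Secure / no EL3), 8 (Monitor) or 24 (Non-secure). */
static bool vcr_match(uint32_t vaddress, uint32_t vector_base,
                      uint32_t dbgvcr, uint32_t mask, unsigned bank_offset)
{
    uint32_t match_word = 0;
    if ((vaddress & ~0x1Fu) == (vector_base & ~0x1Fu))  /* same 32-byte table */
        match_word |= 1u << (((vaddress >> 2) & 0x7u) + bank_offset);
    return (match_word & dbgvcr & mask) != 0;
}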

Library pseudocode for aarch32/debug/authentication/AArch32.SelfHostedSecurePrivilegedInvasiveDebugEnabled

// AArch32.SelfHostedSecurePrivilegedInvasiveDebugEnabled()
// ========================================================

boolean AArch32.SelfHostedSecurePrivilegedInvasiveDebugEnabled()
    // The definition of this function is IMPLEMENTATION DEFINED.
    // In the recommended interface, AArch32.SelfHostedSecurePrivilegedInvasiveDebugEnabled returns
    // the state of the (DBGEN AND SPIDEN) signal.
    if !HaveEL(EL3) && !IsSecure() then return FALSE;
    return DBGEN == HIGH && SPIDEN == HIGH;

Library pseudocode for aarch32/debug/breakpoint/AArch32.BreakpointMatch

// AArch32.BreakpointMatch()
// =========================
// Breakpoint matching in an AArch32 translation regime.

(boolean,boolean) AArch32.BreakpointMatch(integer n, bits(32) vaddress, integer size)
    assert ELUsingAArch32(S1TranslationRegime());
    assert n <= UInt(DBGDIDR.BRPs);

    enabled = DBGBCR[n].E == '1';
    ispriv = PSTATE.EL != EL0;
    linked = DBGBCR[n].BT == '0x01';
    isbreakpnt = TRUE;
    linked_to = FALSE;

    state_match = AArch32.StateMatch(DBGBCR[n].SSC, DBGBCR[n].HMC, DBGBCR[n].PMC,
                                     linked, DBGBCR[n].LBN, isbreakpnt, ispriv);
    (value_match, value_mismatch) = AArch32.BreakpointValueMatch(n, vaddress, linked_to);

    if size == 4 then                        // Check second halfword
        // If the breakpoint address and BAS of an Address breakpoint match the address of the
        // second halfword of an instruction, but not the address of the first halfword, it is
        // CONSTRAINED UNPREDICTABLE whether or not this breakpoint generates a Breakpoint debug
        // event.
        (match_i, mismatch_i) = AArch32.BreakpointValueMatch(n, vaddress + 2, linked_to);
        if !value_match && match_i then
            value_match = ConstrainUnpredictableBool(Unpredictable_BPMATCHHALF);
        if value_mismatch && !mismatch_i then
            value_mismatch = ConstrainUnpredictableBool(Unpredictable_BPMISMATCHHALF);

    if vaddress<1> == '1' && DBGBCR[n].BAS == '1111' then
        // The above notwithstanding, if DBGBCR[n].BAS == '1111', then it is CONSTRAINED
        // UNPREDICTABLE whether or not a Breakpoint debug event is generated for an instruction
        // at the address DBGBVR[n]+2.
        if value_match then
            value_match = ConstrainUnpredictableBool(Unpredictable_BPMATCHHALF);
        if !value_mismatch then
            value_mismatch = ConstrainUnpredictableBool(Unpredictable_BPMISMATCHHALF);

    match = value_match && state_match && enabled;
    mismatch = value_mismatch && state_match && enabled;

    return (match, mismatch);

Library pseudocode for aarch32/debug/breakpoint/AArch32.BreakpointValueMatch

// AArch32.BreakpointValueMatch()
// ==============================
// The first result is whether an Address Match or Context breakpoint is programmed on the
// instruction at "address". The second result is whether an Address Mismatch breakpoint is
// programmed on the instruction, that is, whether the instruction should be stepped.

(boolean,boolean) AArch32.BreakpointValueMatch(integer n, bits(32) vaddress, boolean linked_to)

    // "n" is the identity of the breakpoint unit to match against.
    // "vaddress" is the current instruction address, ignored if linked_to is TRUE and for Context
    //   matching breakpoints.
    // "linked_to" is TRUE if this is a call from StateMatch for linking.

    // If a non-existent breakpoint then it is CONSTRAINED UNPREDICTABLE whether this gives
    // no match or the breakpoint is mapped to another UNKNOWN implemented breakpoint.
    if n > UInt(DBGDIDR.BRPs) then
        (c, n) = ConstrainUnpredictableInteger(0, UInt(DBGDIDR.BRPs), Unpredictable_BPNOTIMPL);
        assert c IN {Constraint_DISABLED, Constraint_UNKNOWN};
        if c == Constraint_DISABLED then return (FALSE,FALSE);

    // If this breakpoint is not enabled, it cannot generate a match. (This could also happen on a
    // call from StateMatch for linking).
    if DBGBCR[n].E == '0' then return (FALSE,FALSE);

    context_aware = (n >= UInt(DBGDIDR.BRPs) - UInt(DBGDIDR.CTX_CMPs));

    // If BT is set to a reserved type, behaves either as disabled or as a not-reserved type.
    type = DBGBCR[n].BT;

    if ((type IN {'011x','11xx'} && !HaveVirtHostExt()) ||             // Context matching
          (type == '010x' && HaltOnBreakpointOrWatchpoint()) ||        // Address mismatch
          (type != '0x0x' && !context_aware) ||                        // Context matching
          (type == '1xxx' && !HaveEL(EL2))) then                       // EL2 extension
        (c, type) = ConstrainUnpredictableBits(Unpredictable_RESBPTYPE);
        assert c IN {Constraint_DISABLED, Constraint_UNKNOWN};
        if c == Constraint_DISABLED then return (FALSE,FALSE);
        // Otherwise the value returned by ConstrainUnpredictableBits must be a not-reserved value

    // Determine what to compare against.
    match_addr = (type == '0x0x');
    mismatch   = (type == '010x');
    match_vmid = (type == '10xx');
    match_cid1 = (type == 'xx1x');
    match_cid2 = (type == '11xx');
    linked     = (type == 'xxx1');

    // If this is a call from StateMatch, return FALSE if the breakpoint is not programmed for a
    // VMID and/or context ID match, or if not context-aware. The above assertions mean that the
    // code can just test for match_addr == TRUE to confirm all these things.
    if linked_to && (!linked || match_addr) then return (FALSE,FALSE);

    // If called from BreakpointMatch return FALSE for Linked context ID and/or VMID matches.
    if !linked_to && linked && !match_addr then return (FALSE,FALSE);

    // Do the comparison.
    if match_addr then
        byte = UInt(vaddress<1:0>);
        assert byte IN {0,2};                    // "vaddress" is halfword aligned
        byte_select_match = (DBGBCR[n].BAS<byte> == '1');
        BVR_match = vaddress<31:2> == DBGBVR[n]<31:2> && byte_select_match;
    elsif match_cid1 then
        BVR_match = (PSTATE.EL != EL2 && CONTEXTIDR == DBGBVR[n]<31:0>);
    if match_vmid then
        if ELUsingAArch32(EL2) then
            vmid = ZeroExtend(VTTBR.VMID, 16);
            bvr_vmid = ZeroExtend(DBGBXVR[n]<7:0>, 16);
        elsif !Have16bitVMID() || VTCR_EL2.VS == '0' then
            vmid = ZeroExtend(VTTBR_EL2.VMID<7:0>, 16);
            bvr_vmid = ZeroExtend(DBGBXVR[n]<7:0>, 16);
        else
            vmid = VTTBR_EL2.VMID;
            bvr_vmid = DBGBXVR[n]<15:0>;
        BXVR_match = (EL2Enabled() && PSTATE.EL IN {EL0,EL1} && vmid == bvr_vmid);
    elsif match_cid2 then
        BXVR_match = (!IsSecure() && HaveVirtHostExt() && !ELUsingAArch32(EL2) &&
                      DBGBXVR[n]<31:0> == CONTEXTIDR_EL2);

    bvr_match_valid = (match_addr || match_cid1);
    bxvr_match_valid = (match_vmid || match_cid2);

    match = (!bxvr_match_valid || BXVR_match) && (!bvr_match_valid || BVR_match);

    return (match && !mismatch, !match && mismatch);
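
A minimal C sketch of the address-match comparison in the pseudocode above, assuming a halfword-aligned vaddress; names are hypothetical, not ARM's.

#include <stdbool.h>
#include <stdint.h>

/* DBGBVR holds a word-aligned address; DBGBCR.BAS selects which bytes
 * within the word may match (bit index = vaddress[1:0]). */
static bool bvr_addr_match(uint32_t vaddress, uint32_t dbgbvr, uint8_t bas)
{
    unsigned byte = vaddress & 0x3u;                 /* 0 or 2 here */
    bool byte_select_match = (bas >> byte) & 1u;
    return (vaddress >> 2) == (dbgbvr >> 2) && byte_select_match;
}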

Library pseudocode for aarch32/debug/breakpoint/AArch32.StateMatch

// AArch32.StateMatch()
// ====================
// Determine whether a breakpoint or watchpoint is enabled in the current mode and state.

boolean AArch32.StateMatch(bits(2) SSC, bit HMC, bits(2) PxC, boolean linked, bits(4) LBN,
                           boolean isbreakpnt, boolean ispriv)
    // "SSC", "HMC", "PxC" are the control fields from the DBGBCR[n] or DBGWCR[n] register.
    // "linked" is TRUE if this is a linked breakpoint/watchpoint type.
    // "LBN" is the linked breakpoint number from the DBGBCR[n] or DBGWCR[n] register.
    // "isbreakpnt" is TRUE for breakpoints, FALSE for watchpoints.
    // "ispriv" is valid for watchpoints, and selects between privileged and unprivileged accesses.

    // If parameters are set to a reserved type, behaves as either disabled or a defined type
    if ((HMC:SSC:PxC) IN {'011xx','100x0','101x0','11010','11101','1111x'} ||        // Reserved
          (HMC == '0' && PxC == '00' && !isbreakpnt) ||                              // Usr/Svc/Sys
          (SSC IN {'01','10'} && !HaveEL(EL3)) ||                                    // No EL3
          (HMC:SSC:PxC == '11000' && ELUsingAArch32(EL3)) ||                         // AArch64 only
          (HMC:SSC != '000' && HMC:SSC != '111' && !HaveEL(EL3) && !HaveEL(EL2)) ||  // No EL3/EL2
          (HMC:SSC:PxC == '11100' && !HaveEL(EL2))) then                             // No EL2
        (c, <HMC,SSC,PxC>) = ConstrainUnpredictableBits(Unpredictable_RESBPWPCTRL);
        assert c IN {Constraint_DISABLED, Constraint_UNKNOWN};
        if c == Constraint_DISABLED then return FALSE;
        // Otherwise the value returned by ConstrainUnpredictableBits must be a not-reserved value

    PL2_match = HaveEL(EL2) && HMC == '1';
    PL1_match = PxC<0> == '1';
    PL0_match = PxC<1> == '1';
    SSU_match = isbreakpnt && HMC == '0' && PxC == '00' && SSC != '11';

    el = PSTATE.EL;
    if !ispriv && !isbreakpnt then
        priv_match = PL0_match;
    elsif SSU_match then
        priv_match = PSTATE.M IN {M32_User,M32_Svc,M32_System};
    else
        case el of
            when EL3  priv_match = PL1_match;        // EL3 and EL1 are both PL1
            when EL2  priv_match = PL2_match;
            when EL1  priv_match = PL1_match;
            when EL0  priv_match = PL0_match;

    case SSC of
        when '00'  security_state_match = TRUE;            // Both
        when '01'  security_state_match = !IsSecure();     // Non-secure only
        when '10'  security_state_match = IsSecure();      // Secure only
        when '11'  security_state_match = TRUE;            // Both

    if linked then
        // "LBN" must be an enabled context-aware breakpoint unit. If it is not context-aware then
        // it is CONSTRAINED UNPREDICTABLE whether this gives no match, or LBN is mapped to some
        // UNKNOWN breakpoint that is context-aware.
        lbn = UInt(LBN);
        first_ctx_cmp = (UInt(DBGDIDR.BRPs) - UInt(DBGDIDR.CTX_CMPs));
        last_ctx_cmp = UInt(DBGDIDR.BRPs);
        if (lbn < first_ctx_cmp || lbn > last_ctx_cmp) then
            (c, lbn) = ConstrainUnpredictableInteger(first_ctx_cmp, last_ctx_cmp,
                                                     Unpredictable_BPNOTCTXCMP);
            assert c IN {Constraint_DISABLED, Constraint_NONE, Constraint_UNKNOWN};
            case c of
                when Constraint_DISABLED  return FALSE;      // Disabled
                when Constraint_NONE      linked = FALSE;    // No linking
                // Otherwise ConstrainUnpredictableInteger returned a context-aware breakpoint

    if linked then
        vaddress = bits(32) UNKNOWN;
        linked_to = TRUE;
        (linked_match,-) = AArch32.BreakpointValueMatch(lbn, vaddress, linked_to);

    return priv_match && security_state_match && (!linked || linked_match);
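
The SSC security-state decode at the heart of StateMatch can be restated as a small C sketch (hypothetical helper, covering only the non-reserved encodings):

#include <stdbool.h>

/* SSC = '00'/'11': match in both states; '01': Non-secure only; '10': Secure only. */
static bool security_state_match(unsigned ssc, bool secure)
{
    switch (ssc & 3u) {
    case 1:  return !secure;
    case 2:  return secure;
    default: return true;
    }
}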

Library pseudocode for aarch32/debug/enables/AArch32.GenerateDebugExceptions

// AArch32.GenerateDebugExceptions()
// =================================

boolean AArch32.GenerateDebugExceptions()
    return AArch32.GenerateDebugExceptionsFrom(PSTATE.EL, IsSecure());

Library pseudocode for aarch32/debug/enables/AArch32.GenerateDebugExceptionsFrom

// AArch32.GenerateDebugExceptionsFrom()
// =====================================

boolean AArch32.GenerateDebugExceptionsFrom(bits(2) from, boolean secure)

    if from == EL0 && !ELStateUsingAArch32(EL1, secure) then
        mask = bit UNKNOWN;                          // PSTATE.D mask, unused for EL0 case
        return AArch64.GenerateDebugExceptionsFrom(from, secure, mask);

    if DBGOSLSR.OSLK == '1' || DoubleLockStatus() || Halted() then
        return FALSE;

    if HaveEL(EL3) && secure then
        spd = if ELUsingAArch32(EL3) then SDCR.SPD else MDCR_EL3.SPD32;
        if spd<1> == '1' then
            enabled = spd<0> == '1';
        else
            // SPD == 0b01 is reserved, but behaves the same as 0b00.
            enabled = AArch32.SelfHostedSecurePrivilegedInvasiveDebugEnabled();
        if from == EL0 then enabled = enabled || SDER.SUIDEN == '1';
    else
        enabled = from != EL2;

    return enabled;
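
A C sketch of the Secure privileged-debug enable decode above, combined with the recommended DBGEN/SPIDEN interface from AArch32.SelfHostedSecurePrivilegedInvasiveDebugEnabled(); helper name and parameters are hypothetical.

#include <stdbool.h>

/* SPD<1> == 1: SPD<0> explicitly enables/disables debug exceptions.
 * SPD == 0b00 (and reserved 0b01): fall back to the authentication signals. */
static bool secure_priv_debug_enabled(unsigned spd, bool dbgen, bool spiden)
{
    if (spd & 2u)
        return (spd & 1u) != 0;
    return dbgen && spiden;     /* recommended (DBGEN AND SPIDEN) interface */
}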

Library pseudocode for aarch32/debug/pmu/AArch32.CheckForPMUOverflow

// AArch32.CheckForPMUOverflow()
// =============================
// Signal Performance Monitors overflow IRQ and CTI overflow events

boolean AArch32.CheckForPMUOverflow()

    if !ELUsingAArch32(EL1) then return AArch64.CheckForPMUOverflow();

    pmuirq = PMCR.E == '1' && PMINTENSET<31> == '1' && PMOVSSET<31> == '1';
    for n = 0 to UInt(PMCR.N) - 1
        if HaveEL(EL2) then
            hpmn = if !ELUsingAArch32(EL2) then MDCR_EL2.HPMN else HDCR.HPMN;
            hpme = if !ELUsingAArch32(EL2) then MDCR_EL2.HPME else HDCR.HPME;
            E = (if n < UInt(hpmn) then PMCR.E else hpme);
        else
            E = PMCR.E;
        if E == '1' && PMINTENSET<n> == '1' && PMOVSSET<n> == '1' then pmuirq = TRUE;

    SetInterruptRequestLevel(InterruptID_PMUIRQ, if pmuirq then HIGH else LOW);

    CTI_SetEventLevel(CrossTriggerIn_PMUOverflow, if pmuirq then HIGH else LOW);

    // The request remains set until the condition is cleared. (For example, an interrupt handler
    // or cross-triggered event handler clears the overflow status flag by writing to PMOVSCLR_EL0.)

    return pmuirq;

Library pseudocode for aarch32/debug/pmu/AArch32.CountEvents

// AArch32.CountEvents()
// =====================
// Return TRUE if counter "n" should count its event. For the cycle counter, n == 31.

boolean AArch32.CountEvents(integer n)
    assert n == 31 || n < UInt(PMCR.N);

    if !ELUsingAArch32(EL1) then return AArch64.CountEvents(n);

    // Event counting is disabled in Debug state
    debug = Halted();

    // In Non-secure state, some counters are reserved for EL2
    if HaveEL(EL2) then
        hpmn = if !ELUsingAArch32(EL2) then MDCR_EL2.HPMN else HDCR.HPMN;
        hpme = if !ELUsingAArch32(EL2) then MDCR_EL2.HPME else HDCR.HPME;
        E = if n < UInt(hpmn) || n == 31 then PMCR.E else hpme;
    else
        E = PMCR.E;
    enabled = E == '1' && PMCNTENSET<n> == '1';

    if !IsSecure() then
        // Event counting in Non-secure state is allowed unless all of:
        // * EL2 and the HPMD Extension are implemented
        // * Executing at EL2
        // * PMNx is not reserved for EL2
        // * HDCR.HPMD == 1
        if HaveHPMDExt() && PSTATE.EL == EL2 && (n < UInt(hpmn) || n == 31) then
            hpmd = if !ELUsingAArch32(EL2) then MDCR_EL2.HPMD else HDCR.HPMD;
            prohibited = (hpmd == '1');
        else
            prohibited = FALSE;
    else
        // Event counting in Secure state is prohibited unless any one of:
        // * EL3 is not implemented
        // * EL3 is using AArch64 and MDCR_EL3.SPME == 1
        // * EL3 is using AArch32 and SDCR.SPME == 1
        // * Executing at EL0, and SDER.SUNIDEN == 1.
        spme = (if ELUsingAArch32(EL3) then SDCR.SPME else MDCR_EL3.SPME);
        prohibited = HaveEL(EL3) && spme == '0' && (PSTATE.EL != EL0 || SDER.SUNIDEN == '0');

    // The IMPLEMENTATION DEFINED authentication interface might override software controls
    if prohibited && !HaveNoSecurePMUDisableOverride() then
        prohibited = !ExternalSecureNoninvasiveDebugEnabled();

    // For the cycle counter, PMCR.DP enables counting when otherwise prohibited
    if prohibited && n == 31 then prohibited = (PMCR.DP == '1');

    // Event counting can be filtered by the {P, U, NSK, NSU, NSH} bits
    filter = if n == 31 then PMCCFILTR else PMEVTYPER[n];

    P = filter<31>;
    U = filter<30>;
    NSK = if HaveEL(EL3) then filter<29> else '0';
    NSU = if HaveEL(EL3) then filter<28> else '0';
    NSH = if HaveEL(EL2) then filter<27> else '0';

    case PSTATE.EL of
        when EL0  filtered = if IsSecure() then U == '1' else U != NSU;
        when EL1  filtered = if IsSecure() then P == '1' else P != NSK;
        when EL2  filtered = (NSH == '0');
        when EL3  filtered = (P == '1');

    return !debug && enabled && !prohibited && !filtered;
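
The final filtering step of AArch32.CountEvents() can be restated as a C sketch (hypothetical helper; el is the numeric Exception level, filter is PMCCFILTR or PMEVTYPER<n>):

#include <stdbool.h>
#include <stdint.h>

static bool event_filtered(uint32_t filter, unsigned el, bool secure,
                           bool have_el2, bool have_el3)
{
    bool P   = (filter >> 31) & 1u;
    bool U   = (filter >> 30) & 1u;
    bool NSK = have_el3 && ((filter >> 29) & 1u);   /* treated as 0 without EL3 */
    bool NSU = have_el3 && ((filter >> 28) & 1u);
    bool NSH = have_el2 && ((filter >> 27) & 1u);
    switch (el) {
    case 0:  return secure ? U : (U != NSU);
    case 1:  return secure ? P : (P != NSK);
    case 2:  return !NSH;
    case 3:  return P;
    default: return false;
    }
}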

Library pseudocode for aarch32/debug/takeexceptiondbg/AArch32.EnterHypModeInDebugState

// AArch32.EnterHypModeInDebugState()
// ==================================
// Take an exception in Debug state to Hyp mode.

AArch32.EnterHypModeInDebugState(ExceptionRecord exception)
    SynchronizeContext();
    assert HaveEL(EL2) && !IsSecure() && ELUsingAArch32(EL2);

    AArch32.ReportHypEntry(exception);
    AArch32.WriteMode(M32_Hyp);
    SPSR[] = bits(32) UNKNOWN;
    if HaveSSBSExt() then PSTATE.SSBS = bits(1) UNKNOWN;
    ELR_hyp = bits(32) UNKNOWN;
    // In Debug state, the PE always executes T32 instructions when in AArch32 state, and
    // PSTATE.{SS,A,I,F} are not observable so behave as UNKNOWN.
    PSTATE.T = '1';                             // PSTATE.J is RES0
    PSTATE.<SS,A,I,F> = bits(4) UNKNOWN;
    DLR = bits(32) UNKNOWN;
    DSPSR = bits(32) UNKNOWN;
    PSTATE.E = HSCTLR.EE;
    PSTATE.IL = '0';
    PSTATE.IT = '00000000';
    EDSCR.ERR = '1';
    UpdateEDSCRFields();

    EndOfInstruction();

Library pseudocode for aarch32/debug/takeexceptiondbg/AArch32.EnterModeInDebugState

// AArch32.EnterModeInDebugState()
// ===============================
// Take an exception in Debug state to a mode other than Monitor and Hyp mode.

AArch32.EnterModeInDebugState(bits(5) target_mode)
    SynchronizeContext();
    assert ELUsingAArch32(EL1) && PSTATE.EL != EL2;

    if PSTATE.M == M32_Monitor then SCR.NS = '0';
    AArch32.WriteMode(target_mode);
    SPSR[] = bits(32) UNKNOWN;
    if HaveSSBSExt() then PSTATE.SSBS = bits(1) UNKNOWN;
    R[14] = bits(32) UNKNOWN;
    // In Debug state, the PE always executes T32 instructions when in AArch32 state, and
    // PSTATE.{SS,A,I,F} are not observable so behave as UNKNOWN.
    PSTATE.T = '1';                             // PSTATE.J is RES0
    PSTATE.<SS,A,I,F> = bits(4) UNKNOWN;
    DLR = bits(32) UNKNOWN;
    DSPSR = bits(32) UNKNOWN;
    PSTATE.E = SCTLR.EE;
    PSTATE.IL = '0';
    PSTATE.IT = '00000000';
    if HavePANExt() && SCTLR.SPAN == '0' then
        PSTATE.PAN = '1';
    EDSCR.ERR = '1';
    UpdateEDSCRFields();                        // Update EDSCR processor state flags.

    EndOfInstruction();

Library pseudocode for aarch32/debug/takeexceptiondbg/AArch32.EnterMonitorModeInDebugState

// AArch32.EnterMonitorModeInDebugState()
// ======================================
// Take an exception in Debug state to Monitor mode.

AArch32.EnterMonitorModeInDebugState()
    SynchronizeContext();
    assert HaveEL(EL3) && ELUsingAArch32(EL3);
    from_secure = IsSecure();
    if PSTATE.M == M32_Monitor then SCR.NS = '0';
    AArch32.WriteMode(M32_Monitor);
    SPSR[] = bits(32) UNKNOWN;
    if HaveSSBSExt() then PSTATE.SSBS = bits(1) UNKNOWN;
    R[14] = bits(32) UNKNOWN;
    // In Debug state, the PE always executes T32 instructions when in AArch32 state, and
    // PSTATE.{SS,A,I,F} are not observable so behave as UNKNOWN.
    PSTATE.T = '1';                             // PSTATE.J is RES0
    PSTATE.<SS,A,I,F> = bits(4) UNKNOWN;
    DLR = bits(32) UNKNOWN;
    DSPSR = bits(32) UNKNOWN;
    PSTATE.E = SCTLR.EE;
    PSTATE.IL = '0';
    PSTATE.IT = '00000000';
    if HavePANExt() then
        if !from_secure then
            PSTATE.PAN = '0';
        elsif SCTLR.SPAN == '0' then
            PSTATE.PAN = '1';
    EDSCR.ERR = '1';
    UpdateEDSCRFields();                        // Update EDSCR processor state flags.

    EndOfInstruction();

Library pseudocode for aarch32/debug/watchpoint/AArch32.WatchpointByteMatch

// AArch32.WatchpointByteMatch()
// =============================

boolean AArch32.WatchpointByteMatch(integer n, bits(32) vaddress)

    bottom = if DBGWVR[n]<2> == '1' then 2 else 3;             // Word or doubleword
    byte_select_match = (DBGWCR[n].BAS<UInt(vaddress<bottom-1:0>)> != '0');
    mask = UInt(DBGWCR[n].MASK);

    // If DBGWCR[n].MASK is a non-zero value and DBGWCR[n].BAS is not set to '11111111', or
    // DBGWCR[n].BAS specifies a non-contiguous set of bytes, behavior is CONSTRAINED
    // UNPREDICTABLE.
    if mask > 0 && !IsOnes(DBGWCR[n].BAS) then
        byte_select_match = ConstrainUnpredictableBool(Unpredictable_WPMASKANDBAS);
    else
        LSB = (DBGWCR[n].BAS AND NOT(DBGWCR[n].BAS - 1));  MSB = (DBGWCR[n].BAS + LSB);
        if !IsZero(MSB AND (MSB - 1)) then                 // Not contiguous
            byte_select_match = ConstrainUnpredictableBool(Unpredictable_WPBASCONTIGUOUS);
            bottom = 3;                                    // For the whole doubleword

    // If the address mask is set to a reserved value, the behavior is CONSTRAINED UNPREDICTABLE.
    if mask > 0 && mask <= 2 then
        (c, mask) = ConstrainUnpredictableInteger(3, 31, Unpredictable_RESWPMASK);
        assert c IN {Constraint_DISABLED, Constraint_NONE, Constraint_UNKNOWN};
        case c of
            when Constraint_DISABLED  return FALSE;        // Disabled
            when Constraint_NONE      mask = 0;            // No masking
            // Otherwise the value returned by ConstrainUnpredictableInteger is a not-reserved value

    if mask > bottom then
        WVR_match = (vaddress<31:mask> == DBGWVR[n]<31:mask>);
        // If masked bits of DBGWVR[n] are not zero, the behavior is CONSTRAINED UNPREDICTABLE.
        if WVR_match && !IsZero(DBGWVR[n]<mask-1:bottom>) then
            WVR_match = ConstrainUnpredictableBool(Unpredictable_WPMASKEDBITS);
    else
        WVR_match = vaddress<31:bottom> == DBGWVR[n]<31:bottom>;

    return WVR_match && byte_select_match;
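
The contiguity test on DBGWCR.BAS above uses a standard bit trick: add the lowest set bit to the value, and the result must have at most one bit set. A C sketch (hypothetical helper):

#include <stdbool.h>
#include <stdint.h>

static bool bas_is_contiguous(uint8_t bas)
{
    uint8_t lsb = bas & (uint8_t)(-bas);        /* BAS AND NOT(BAS - 1) */
    uint8_t msb = (uint8_t)(bas + lsb);
    return (msb & (uint8_t)(msb - 1u)) == 0;    /* e.g. 0b00111100 -> contiguous */
}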

Library pseudocode for aarch32/debug/watchpoint/AArch32.WatchpointMatch

// AArch32.WatchpointMatch()
// =========================
// Watchpoint matching in an AArch32 translation regime.

boolean AArch32.WatchpointMatch(integer n, bits(32) vaddress, integer size, boolean ispriv,
                                boolean iswrite)
    assert ELUsingAArch32(S1TranslationRegime());
    assert n <= UInt(DBGDIDR.WRPs);

    // "ispriv" is FALSE for LDRT/STRT instructions executed at EL1 and all
    // load/stores at EL0, TRUE for all other load/stores. "iswrite" is TRUE for stores, FALSE for
    // loads.
    enabled = DBGWCR[n].E == '1';
    linked = DBGWCR[n].WT == '1';
    isbreakpnt = FALSE;

    state_match = AArch32.StateMatch(DBGWCR[n].SSC, DBGWCR[n].HMC, DBGWCR[n].PAC,
                                     linked, DBGWCR[n].LBN, isbreakpnt, ispriv);

    ls_match = (DBGWCR[n].LSC<(if iswrite then 1 else 0)> == '1');

    value_match = FALSE;
    for byte = 0 to size - 1
        value_match = value_match || AArch32.WatchpointByteMatch(n, vaddress + byte);

    return value_match && state_match && ls_match && enabled;

Library pseudocode for aarch32/exceptions/aborts/AArch32.Abort

// AArch32.Abort()
// ===============
// Abort and Debug exception handling in an AArch32 translation regime.

AArch32.Abort(bits(32) vaddress, FaultRecord fault)

    // Check if routed to AArch64 state
    route_to_aarch64 = PSTATE.EL == EL0 && !ELUsingAArch32(EL1);

    if !route_to_aarch64 && EL2Enabled() && !ELUsingAArch32(EL2) then
        route_to_aarch64 = (HCR_EL2.TGE == '1' || IsSecondStage(fault) ||
                            (HaveRASExt() && HCR_EL2.TEA == '1' && IsExternalAbort(fault)) ||
                            (IsDebugException(fault) && MDCR_EL2.TDE == '1'));

    if !route_to_aarch64 && HaveEL(EL3) && !ELUsingAArch32(EL3) then
        route_to_aarch64 = SCR_EL3.EA == '1' && IsExternalAbort(fault);

    if route_to_aarch64 then
        AArch64.Abort(ZeroExtend(vaddress), fault);
    elsif fault.acctype == AccType_IFETCH then
        AArch32.TakePrefetchAbortException(vaddress, fault);
    else
        AArch32.TakeDataAbortException(vaddress, fault);

Library pseudocode for aarch32/exceptions/aborts/AArch32.AbortSyndrome

// AArch32.AbortSyndrome()
// =======================
// Creates an exception syndrome record for Abort exceptions taken to Hyp mode
// from an AArch32 translation regime.

ExceptionRecord AArch32.AbortSyndrome(Exception type, FaultRecord fault, bits(32) vaddress)
    exception = ExceptionSyndrome(type);

    d_side = type == Exception_DataAbort;

    exception.syndrome = AArch32.FaultSyndrome(d_side, fault);
    exception.vaddress = ZeroExtend(vaddress);
    if IPAValid(fault) then
        exception.ipavalid = TRUE;
        exception.NS = fault.ipaddress.NS;
        exception.ipaddress = ZeroExtend(fault.ipaddress.address);
    else
        exception.ipavalid = FALSE;

    return exception;

Library pseudocode for aarch32/exceptions/aborts/AArch32.CheckPCAlignment

// AArch32.CheckPCAlignment()
// ==========================

AArch32.CheckPCAlignment()

    bits(32) pc = ThisInstrAddr();
    if (CurrentInstrSet() == InstrSet_A32 && pc<1> == '1') || pc<0> == '1' then
        if AArch32.GeneralExceptionsToAArch64() then AArch64.PCAlignmentFault();

        // Generate an Alignment fault Prefetch Abort exception
        vaddress = pc;
        acctype = AccType_IFETCH;
        iswrite = FALSE;
        secondstage = FALSE;
        AArch32.Abort(vaddress, AArch32.AlignmentFault(acctype, iswrite, secondstage));

Library pseudocode for aarch32/exceptions/aborts/AArch32.ReportDataAbort

// AArch32.ReportDataAbort()
// =========================
// Report syndrome information for aborts taken to modes other than Hyp mode.

AArch32.ReportDataAbort(boolean route_to_monitor, FaultRecord fault, bits(32) vaddress)

    // The encoding used in the IFSR or DFSR can be Long-descriptor format or Short-descriptor
    // format. Normally, the current translation table format determines the format. For an abort
    // from Non-secure state to Monitor mode, the IFSR or DFSR uses the Long-descriptor format if
    // any of the following applies:
    // * The Secure TTBCR.EAE is set to 1.
    // * The abort is synchronous and either:
    //   - It is taken from Hyp mode.
    //   - It is taken from EL1 or EL0, and the Non-secure TTBCR.EAE is set to 1.
    long_format = FALSE;
    if route_to_monitor && !IsSecure() then
        long_format = TTBCR_S.EAE == '1';
        if !IsSErrorInterrupt(fault) && !long_format then
            long_format = PSTATE.EL == EL2 || TTBCR.EAE == '1';
    else
        long_format = TTBCR.EAE == '1';
    d_side = TRUE;
    if long_format then
        syndrome = AArch32.FaultStatusLD(d_side, fault);
    else
        syndrome = AArch32.FaultStatusSD(d_side, fault);

    if fault.acctype == AccType_IC then
        if (!long_format &&
            boolean IMPLEMENTATION_DEFINED "Report I-cache maintenance fault in IFSR") then
            i_syndrome = syndrome;
            syndrome<10,3:0> = EncodeSDFSC(Fault_ICacheMaint, 1);
        else
            i_syndrome = bits(32) UNKNOWN;
        if route_to_monitor then
            IFSR_S = i_syndrome;
        else
            IFSR = i_syndrome;

    if route_to_monitor then
        DFSR_S = syndrome;
        DFAR_S = vaddress;
    else
        DFSR = syndrome;
        DFAR = vaddress;

    return;

Library pseudocode for aarch32/exceptions/aborts/AArch32.ReportPrefetchAbort

// AArch32.ReportPrefetchAbort()
// =============================
// Report syndrome information for aborts taken to modes other than Hyp mode.

AArch32.ReportPrefetchAbort(boolean route_to_monitor, FaultRecord fault, bits(32) vaddress)
    // The encoding used in the IFSR can be Long-descriptor format or Short-descriptor format.
    // Normally, the current translation table format determines the format. For an abort from
    // Non-secure state to Monitor mode, the IFSR uses the Long-descriptor format if any of the
    // following applies:
    // * The Secure TTBCR.EAE is set to 1.
    // * It is taken from Hyp mode.
    // * It is taken from EL1 or EL0, and the Non-secure TTBCR.EAE is set to 1.
    long_format = FALSE;
    if route_to_monitor && !IsSecure() then
        long_format = TTBCR_S.EAE == '1' || PSTATE.EL == EL2 || TTBCR.EAE == '1';
    else
        long_format = TTBCR.EAE == '1';

    d_side = FALSE;
    if long_format then
        fsr = AArch32.FaultStatusLD(d_side, fault);
    else
        fsr = AArch32.FaultStatusSD(d_side, fault);

    if route_to_monitor then
        IFSR_S = fsr;
        IFAR_S = vaddress;
    else
        IFSR = fsr;
        IFAR = vaddress;

    return;

Library pseudocode for aarch32/exceptions/aborts/AArch32.TakeDataAbortException

// AArch32.TakeDataAbortException()
// ================================

AArch32.TakeDataAbortException(bits(32) vaddress, FaultRecord fault)
    route_to_monitor = HaveEL(EL3) && SCR.EA == '1' && IsExternalAbort(fault);
    route_to_hyp = (HaveEL(EL2) && !IsSecure() && PSTATE.EL IN {EL0,EL1} &&
                    (HCR.TGE == '1' || IsSecondStage(fault) ||
                     (HaveRASExt() && HCR2.TEA == '1' && IsExternalAbort(fault)) ||
                     (IsDebugException(fault) && HDCR.TDE == '1')));

    bits(32) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x10;
    lr_offset = 8;

    if IsDebugException(fault) then DBGDSCRext.MOE = fault.debugmoe;
    if route_to_monitor then
        AArch32.ReportDataAbort(route_to_monitor, fault, vaddress);
        AArch32.EnterMonitorMode(preferred_exception_return, lr_offset, vect_offset);
    elsif PSTATE.EL == EL2 || route_to_hyp then
        exception = AArch32.AbortSyndrome(Exception_DataAbort, fault, vaddress);
        if PSTATE.EL == EL2 then
            AArch32.EnterHypMode(exception, preferred_exception_return, vect_offset);
        else
            AArch32.EnterHypMode(exception, preferred_exception_return, 0x14);
    else
        AArch32.ReportDataAbort(route_to_monitor, fault, vaddress);
        AArch32.EnterMode(M32_Abort, preferred_exception_return, lr_offset, vect_offset);

Library pseudocode for aarch32/exceptions/aborts/AArch32.TakePrefetchAbortException

// AArch32.TakePrefetchAbortException()
// ====================================

AArch32.TakePrefetchAbortException(bits(32) vaddress, FaultRecord fault)
    route_to_monitor = HaveEL(EL3) && SCR.EA == '1' && IsExternalAbort(fault);
    route_to_hyp = (HaveEL(EL2) && !IsSecure() && PSTATE.EL IN {EL0,EL1} &&
                    (HCR.TGE == '1' || IsSecondStage(fault) ||
                     (HaveRASExt() && HCR2.TEA == '1' && IsExternalAbort(fault)) ||
                     (IsDebugException(fault) && HDCR.TDE == '1')));

    bits(32) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x0C;
    lr_offset = 4;

    if IsDebugException(fault) then DBGDSCRext.MOE = fault.debugmoe;
    if route_to_monitor then
        AArch32.ReportPrefetchAbort(route_to_monitor, fault, vaddress);
        AArch32.EnterMonitorMode(preferred_exception_return, lr_offset, vect_offset);
    elsif PSTATE.EL == EL2 || route_to_hyp then
        if fault.type == Fault_Alignment then             // PC Alignment fault
            exception = ExceptionSyndrome(Exception_PCAlignment);
            exception.vaddress = ThisInstrAddr();
        else
            exception = AArch32.AbortSyndrome(Exception_InstructionAbort, fault, vaddress);
        if PSTATE.EL == EL2 then
            AArch32.EnterHypMode(exception, preferred_exception_return, vect_offset);
        else
            AArch32.EnterHypMode(exception, preferred_exception_return, 0x14);
    else
        AArch32.ReportPrefetchAbort(route_to_monitor, fault, vaddress);
        AArch32.EnterMode(M32_Abort, preferred_exception_return, lr_offset, vect_offset);

Library pseudocode for aarch32/exceptions/aborts/BranchTargetException

// BranchTargetException
// =====================
// Raise branch target exception.

AArch64.BranchTargetException(bits(52) vaddress)

    route_to_el2 = EL2Enabled() && PSTATE.EL == EL0 && HCR_EL2.TGE == '1';
    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x0;

    exception = ExceptionSyndrome(Exception_BranchTarget);
    exception.syndrome<1:0> = PSTATE.BTYPE;
    exception.syndrome<24:2> = Zeros();               // RES0

    if UInt(PSTATE.EL) > UInt(EL1) then
        AArch64.TakeException(PSTATE.EL, exception, preferred_exception_return, vect_offset);
    elsif route_to_el2 then
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);
    else
        AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch32/exceptions/aborts/EffectiveTCF

// EffectiveTCF()
// ==============
// Returns the TCF field applied to Tag Check Fails in the given Exception Level

bits(2) EffectiveTCF(bits(2) el)
    if el == EL3 then
        tcf = SCTLR_EL3.TCF;
    elsif el == EL2 then
        tcf = SCTLR_EL2.TCF;
    elsif el == EL1 then
        tcf = SCTLR_EL1.TCF;
    elsif el == EL0 && HCR_EL2.<E2H,TGE> == '11' then
        tcf = SCTLR_EL2.TCF0;
    elsif el == EL0 && HCR_EL2.<E2H,TGE> != '11' then
        tcf = SCTLR_EL1.TCF0;
    return tcf;

Library pseudocode for aarch32/exceptions/aborts/ReportTagCheckFail

// ReportTagCheckFail()
// ====================
// Records a tag check fail into the appropriate TFSR_ELx

ReportTagCheckFail(bits(2) el, bit ttbr)
    if el == EL3 then
        assert ttbr == '0';
        TFSR_EL3.TF0 = '1';
    elsif el == EL2 then
        if ttbr == '0' then
            TFSR_EL2.TF0 = '1';
        else
            TFSR_EL2.TF1 = '1';
    elsif el == EL1 then
        if ttbr == '0' then
            TFSR_EL1.TF0 = '1';
        else
            TFSR_EL1.TF1 = '1';
    elsif el == EL0 then
        if ttbr == '0' then
            TFSRE0_EL1.TF0 = '1';
        else
            TFSRE0_EL1.TF1 = '1';

Library pseudocode for aarch32/exceptions/aborts/TagCheckFail

// TagCheckFail()
// ==============
// Handle a tag check fail condition

TagCheckFail(bits(64) vaddress, boolean iswrite)
    bits(2) tcf = EffectiveTCF(PSTATE.EL);
    if tcf == '01' then
        TagCheckFault(vaddress, iswrite);
    elsif tcf == '10' then
        ReportTagCheckFail(PSTATE.EL, vaddress<55>);
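
In C terms, the dispatch in TagCheckFail() reduces to a three-way decision on the effective TCF field (sketch; names hypothetical):

enum tcf_action { TCF_IGNORE, TCF_FAULT, TCF_RECORD };

/* TCF == '01': synchronous Tag Check Fault; '10': asynchronous record via
 * ReportTagCheckFail (VA bit 55 selects TF0/TF1); '00' (and '11' in this
 * version of the pseudocode): no action. */
static enum tcf_action tag_check_fail_action(unsigned tcf)
{
    switch (tcf & 3u) {
    case 1:  return TCF_FAULT;
    case 2:  return TCF_RECORD;
    default: return TCF_IGNORE;
    }
}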

Library pseudocode for aarch32/exceptions/aborts/TagCheckFault

// TagCheckFault()
// ===============
// Raise a tag check fail exception.

TagCheckFault(bits(64) va, boolean write)
    bits(2) target_el;
    bits(64) preferred_exception_return = ThisInstrAddr();
    integer vect_offset = 0x0;

    if PSTATE.EL == EL0 then
        target_el = if HCR_EL2.TGE == '0' then EL1 else EL2;
    else
        target_el = PSTATE.EL;

    exception = ExceptionSyndrome(Exception_DataAbort);
    exception.syndrome<5:0> = '010001';
    if write then
        exception.syndrome<6> = '1';
    exception.vaddress = va;

    AArch64.TakeException(target_el, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch32/exceptions/asynch/AArch32.TakePhysicalFIQException

// AArch32.TakePhysicalFIQException()
// ==================================

AArch32.TakePhysicalFIQException()

    // Check if routed to AArch64 state
    route_to_aarch64 = PSTATE.EL == EL0 && !ELUsingAArch32(EL1);
    if !route_to_aarch64 && EL2Enabled() && !ELUsingAArch32(EL2) then
        route_to_aarch64 = HCR_EL2.TGE == '1' || (HCR_EL2.FMO == '1' && !IsInHost());
    if !route_to_aarch64 && HaveEL(EL3) && !ELUsingAArch32(EL3) then
        route_to_aarch64 = SCR_EL3.FIQ == '1';

    if route_to_aarch64 then AArch64.TakePhysicalFIQException();

    route_to_monitor = HaveEL(EL3) && SCR.FIQ == '1';
    route_to_hyp = (EL2Enabled() && PSTATE.EL IN {EL0,EL1} &&
                    (HCR.TGE == '1' || HCR.FMO == '1'));
    bits(32) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x1C;
    lr_offset = 4;
    if route_to_monitor then
        AArch32.EnterMonitorMode(preferred_exception_return, lr_offset, vect_offset);
    elsif PSTATE.EL == EL2 || route_to_hyp then
        exception = ExceptionSyndrome(Exception_FIQ);
        AArch32.EnterHypMode(exception, preferred_exception_return, vect_offset);
    else
        AArch32.EnterMode(M32_FIQ, preferred_exception_return, lr_offset, vect_offset);

Library pseudocode for aarch32/exceptions/asynch/AArch32.TakePhysicalIRQException

// AArch32.TakePhysicalIRQException()
// ==================================
// Take an enabled physical IRQ exception.

AArch32.TakePhysicalIRQException()

    // Check if routed to AArch64 state
    route_to_aarch64 = PSTATE.EL == EL0 && !ELUsingAArch32(EL1);
    if !route_to_aarch64 && EL2Enabled() && !ELUsingAArch32(EL2) then
        route_to_aarch64 = HCR_EL2.TGE == '1' || (HCR_EL2.IMO == '1' && !IsInHost());
    if !route_to_aarch64 && HaveEL(EL3) && !ELUsingAArch32(EL3) then
        route_to_aarch64 = SCR_EL3.IRQ == '1';

    if route_to_aarch64 then AArch64.TakePhysicalIRQException();

    route_to_monitor = HaveEL(EL3) && SCR.IRQ == '1';
    route_to_hyp = (EL2Enabled() && PSTATE.EL IN {EL0,EL1} &&
                    (HCR.TGE == '1' || HCR.IMO == '1'));
    bits(32) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x18;
    lr_offset = 4;
    if route_to_monitor then
        AArch32.EnterMonitorMode(preferred_exception_return, lr_offset, vect_offset);
    elsif PSTATE.EL == EL2 || route_to_hyp then
        exception = ExceptionSyndrome(Exception_IRQ);
        AArch32.EnterHypMode(exception, preferred_exception_return, vect_offset);
    else
        AArch32.EnterMode(M32_IRQ, preferred_exception_return, lr_offset, vect_offset);

Library pseudocode for aarch32/exceptions/asynch/AArch32.TakePhysicalSErrorException

// AArch32.TakePhysicalSErrorException()
// =====================================

AArch32.TakePhysicalSErrorException(boolean parity, bit extflag, bits(2) errortype,
                                    boolean impdef_syndrome, bits(24) full_syndrome)

    ClearPendingPhysicalSError();
    // Check if routed to AArch64 state
    route_to_aarch64 = PSTATE.EL == EL0 && !ELUsingAArch32(EL1);
    if !route_to_aarch64 && EL2Enabled() && !ELUsingAArch32(EL2) then
        route_to_aarch64 = (HCR_EL2.TGE == '1' || (!IsInHost() && HCR_EL2.AMO == '1'));
    if !route_to_aarch64 && HaveEL(EL3) && !ELUsingAArch32(EL3) then
        route_to_aarch64 = SCR_EL3.EA == '1';

    if route_to_aarch64 then
        AArch64.TakePhysicalSErrorException(impdef_syndrome, full_syndrome);

    route_to_monitor = HaveEL(EL3) && SCR.EA == '1';
    route_to_hyp = (EL2Enabled() && PSTATE.EL IN {EL0,EL1} &&
                    (HCR.TGE == '1' || HCR.AMO == '1'));
    bits(32) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x10;
    lr_offset = 8;

    fault = AArch32.AsynchExternalAbort(parity, errortype, extflag);
    vaddress = bits(32) UNKNOWN;
    if route_to_monitor then
        AArch32.ReportDataAbort(route_to_monitor, fault, vaddress);
        AArch32.EnterMonitorMode(preferred_exception_return, lr_offset, vect_offset);
    elsif PSTATE.EL == EL2 || route_to_hyp then
        exception = AArch32.AbortSyndrome(Exception_DataAbort, fault, vaddress);
        if PSTATE.EL == EL2 then
            AArch32.EnterHypMode(exception, preferred_exception_return, vect_offset);
        else
            AArch32.EnterHypMode(exception, preferred_exception_return, 0x14);
    else
        AArch32.ReportDataAbort(route_to_monitor, fault, vaddress);
        AArch32.EnterMode(M32_Abort, preferred_exception_return, lr_offset, vect_offset);

Library pseudocode for aarch32/exceptions/asynch/AArch32.TakeVirtualFIQException

// AArch32.TakeVirtualFIQException()
// =================================

AArch32.TakeVirtualFIQException()
    assert EL2Enabled() && PSTATE.EL IN {EL0,EL1};
    if ELUsingAArch32(EL2) then    // Virtual FIQ enabled if TGE==0 and FMO==1
        assert HCR.TGE == '0' && HCR.FMO == '1';
    else
        assert HCR_EL2.TGE == '0' && HCR_EL2.FMO == '1';
    // Check if routed to AArch64 state
    if PSTATE.EL == EL0 && !ELUsingAArch32(EL1) then AArch64.TakeVirtualFIQException();

    bits(32) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x1C;
    lr_offset = 4;

    AArch32.EnterMode(M32_FIQ, preferred_exception_return, lr_offset, vect_offset);

Library pseudocode for aarch32/exceptions/asynch/AArch32.TakeVirtualIRQException

// AArch32.TakeVirtualIRQException()
// =================================

AArch32.TakeVirtualIRQException()
    assert EL2Enabled() && PSTATE.EL IN {EL0,EL1};
    if ELUsingAArch32(EL2) then    // Virtual IRQs enabled if TGE==0 and IMO==1
        assert HCR.TGE == '0' && HCR.IMO == '1';
    else
        assert HCR_EL2.TGE == '0' && HCR_EL2.IMO == '1';
    // Check if routed to AArch64 state
    if PSTATE.EL == EL0 && !ELUsingAArch32(EL1) then AArch64.TakeVirtualIRQException();

    bits(32) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x18;
    lr_offset = 4;

    AArch32.EnterMode(M32_IRQ, preferred_exception_return, lr_offset, vect_offset);

Library pseudocode for aarch32/exceptions/asynch/AArch32.TakeVirtualSErrorException

// AArch32.TakeVirtualSErrorException()
// ====================================

AArch32.TakeVirtualSErrorException(bit extflag, bits(2) errortype, boolean impdef_syndrome,
                                   bits(24) full_syndrome)

    assert EL2Enabled() && PSTATE.EL IN {EL0,EL1};
    if ELUsingAArch32(EL2) then    // Virtual SError enabled if TGE==0 and AMO==1
        assert HCR.TGE == '0' && HCR.AMO == '1';
    else
        assert HCR_EL2.TGE == '0' && HCR_EL2.AMO == '1';
    // Check if routed to AArch64 state
    if PSTATE.EL == EL0 && !ELUsingAArch32(EL1) then
        AArch64.TakeVirtualSErrorException(impdef_syndrome, full_syndrome);

    route_to_monitor = FALSE;

    bits(32) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x10;
    lr_offset = 8;

    vaddress = bits(32) UNKNOWN;
    parity = FALSE;
    if HaveRASExt() then
        if ELUsingAArch32(EL2) then
            fault = AArch32.AsynchExternalAbort(FALSE, VDFSR.AET, VDFSR.ExT);
        else
            fault = AArch32.AsynchExternalAbort(FALSE, VSESR_EL2.AET, VSESR_EL2.ExT);
    else
        fault = AArch32.AsynchExternalAbort(parity, errortype, extflag);
    ClearPendingVirtualSError();

    AArch32.ReportDataAbort(route_to_monitor, fault, vaddress);
    AArch32.EnterMode(M32_Abort, preferred_exception_return, lr_offset, vect_offset);

Library pseudocode for aarch32/exceptions/debug/AArch32.SoftwareBreakpoint

// AArch32.SoftwareBreakpoint()
// ============================

AArch32.SoftwareBreakpoint(bits(16) immediate)

    if (EL2Enabled() && !ELUsingAArch32(EL2) &&
        (HCR_EL2.TGE == '1' || MDCR_EL2.TDE == '1')) || !ELUsingAArch32(EL1) then
        AArch64.SoftwareBreakpoint(immediate);
    vaddress = bits(32) UNKNOWN;
    acctype = AccType_IFETCH;            // Take as a Prefetch Abort
    iswrite = FALSE;
    entry = DebugException_BKPT;

    fault = AArch32.DebugFault(acctype, iswrite, entry);
    AArch32.Abort(vaddress, fault);

Library pseudocode for aarch32/exceptions/debug/DebugException

constant bits(4) DebugException_Breakpoint  = '0001';
constant bits(4) DebugException_BKPT        = '0011';
constant bits(4) DebugException_VectorCatch = '0101';
constant bits(4) DebugException_Watchpoint  = '1010';

Library pseudocode for aarch32/exceptions/exceptions/AArch32.CheckAdvSIMDOrFPRegisterTraps

// AArch32.CheckAdvSIMDOrFPRegisterTraps()
// =======================================
// Check if an instruction that accesses an Advanced SIMD and
// floating-point System register is trapped by an appropriate HCR.TIDx
// ID group trap control.

AArch32.CheckAdvSIMDOrFPRegisterTraps(bits(4) reg)
    if PSTATE.EL == EL1 && EL2Enabled() then
        tid0 = if ELUsingAArch32(EL2) then HCR.TID0 else HCR_EL2.TID0;
        tid3 = if ELUsingAArch32(EL2) then HCR.TID3 else HCR_EL2.TID3;

        if (tid0 == '1' && reg == '0000')                             // FPSID
          || (tid3 == '1' && reg IN {'0101', '0110', '0111'}) then    // MVFRx
            if ELUsingAArch32(EL2) then
                AArch32.AArch32SystemAccessTrap(EL2, ThisInstr());    // Exception_AdvSIMDFPAccessTrap
            else
                AArch64.AArch32SystemAccessTrap(EL2, ThisInstr());    // Exception_AdvSIMDFPAccessTrap

Library pseudocode for aarch32/exceptions/exceptions/AArch32.ExceptionClass

// AArch32.ExceptionClass()
// ========================
// Returns the Exception Class and Instruction Length fields to be reported in HSR

(integer,bit) AArch32.ExceptionClass(Exception type)

    il = if ThisInstrLength() == 32 then '1' else '0';
    case type of
        when Exception_Uncategorized        ec = 0x00;  il = '1';
        when Exception_WFxTrap              ec = 0x01;
        when Exception_CP15RTTrap           ec = 0x03;
        when Exception_CP15RRTTrap          ec = 0x04;
        when Exception_CP14RTTrap           ec = 0x05;
        when Exception_CP14DTTrap           ec = 0x06;
        when Exception_AdvSIMDFPAccessTrap  ec = 0x07;
        when Exception_FPIDTrap             ec = 0x08;
        when Exception_PACTrap              ec = 0x09;
        when Exception_CP14RRTTrap          ec = 0x0C;
        when Exception_BranchTarget         ec = 0x0D;
        when Exception_IllegalState         ec = 0x0E;  il = '1';
        when Exception_SupervisorCall       ec = 0x11;
        when Exception_HypervisorCall       ec = 0x12;
        when Exception_MonitorCall          ec = 0x13;
        when Exception_ERetTrap             ec = 0x1A;
        when Exception_InstructionAbort     ec = 0x20;  il = '1';
        when Exception_PCAlignment          ec = 0x22;  il = '1';
        when Exception_DataAbort            ec = 0x24;
        when Exception_NV2DataAbort         ec = 0x25;
        when Exception_FPTrappedException   ec = 0x28;
        otherwise                           Unreachable();

    if ec IN {0x20,0x24} && PSTATE.EL == EL2 then
        ec = ec + 1;

    return (ec,il);

Library pseudocode for aarch32/exceptions/exceptions/AArch32.GeneralExceptionsToAArch64

// AArch32.GeneralExceptionsToAArch64()
// ====================================
// Returns TRUE if exceptions normally routed to EL1 are being handled at an Exception
// level using AArch64, because either EL1 is using AArch64 or TGE is in force and EL2
// is using AArch64.

boolean AArch32.GeneralExceptionsToAArch64()
    return ((PSTATE.EL == EL0 && !ELUsingAArch32(EL1)) ||
            (EL2Enabled() && !ELUsingAArch32(EL2) && HCR_EL2.TGE == '1'));

Library pseudocode for aarch32/exceptions/exceptions/AArch32.ReportHypEntry

// AArch32.ReportHypEntry()
// ========================
// Report syndrome information to Hyp mode registers.

AArch32.ReportHypEntry(ExceptionRecord exception)

    Exception type = exception.type;

    (ec,il) = AArch32.ExceptionClass(type);
    iss = exception.syndrome;

    // IL is not valid for Data Abort exceptions without valid instruction syndrome information
    if ec IN {0x24,0x25} && iss<24> == '0' then il = '1';

    HSR = ec<5:0>:il:iss;

    if type IN {Exception_InstructionAbort, Exception_PCAlignment} then
        HIFAR = exception.vaddress<31:0>;
        HDFAR = bits(32) UNKNOWN;
    elsif type == Exception_DataAbort then
        HIFAR = bits(32) UNKNOWN;
        HDFAR = exception.vaddress<31:0>;

    if exception.ipavalid then
        HPFAR<31:4> = exception.ipaddress<39:12>;
    else
        HPFAR<31:4> = bits(28) UNKNOWN;

    return;

Library pseudocode for aarch32/exceptions/exceptions/AArch32.ResetControlRegisters

// Resets System registers and memory-mapped control registers that have architecturally-defined
// reset values to those values.
AArch32.ResetControlRegisters(boolean cold_reset);

Library pseudocode for aarch32/exceptions/exceptions/AArch32.TakeReset

// AArch32.TakeReset()
// ===================
// Reset into AArch32 state

AArch32.TakeReset(boolean cold_reset)
    assert HighestELUsingAArch32();

    // Enter the highest implemented Exception level in AArch32 state
    if HaveEL(EL3) then
        AArch32.WriteMode(M32_Svc);
        SCR.NS = '0';                     // Secure state
    elsif HaveEL(EL2) then
        AArch32.WriteMode(M32_Hyp);
    else
        AArch32.WriteMode(M32_Svc);

    // Reset the CP14 and CP15 registers and other system components
    AArch32.ResetControlRegisters(cold_reset);
    FPEXC.EN = '0';

    // Reset all other PSTATE fields, including instruction set and endianness according to the
    // SCTLR values produced by the above call to ResetControlRegisters()
    PSTATE.<A,I,F> = '111';        // All asynchronous exceptions masked
    PSTATE.IT = '00000000';        // IT block state reset
    PSTATE.T = SCTLR.TE;           // Instruction set: TE=0: A32, TE=1: T32. PSTATE.J is RES0.
    PSTATE.E = SCTLR.EE;           // Endianness: EE=0: little-endian, EE=1: big-endian
    PSTATE.IL = '0';               // Clear Illegal Execution state bit

    // All registers, bits and fields not reset by the above pseudocode or by the BranchTo() call
    // below are UNKNOWN bitstrings after reset. In particular, the return information registers
    // R14 or ELR_hyp and SPSR have UNKNOWN values, so that it is impossible to return from a
    // reset in an architecturally defined way.
    AArch32.ResetGeneralRegisters();
    AArch32.ResetSIMDFPRegisters();
    AArch32.ResetSpecialRegisters();
    ResetExternalDebugRegisters(cold_reset);

    bits(32) rv;                  // IMPLEMENTATION DEFINED reset vector

    if HaveEL(EL3) then
        if MVBAR<0> == '1' then   // Reset vector in MVBAR
            rv = MVBAR<31:1>:'0';
        else
            rv = bits(32) IMPLEMENTATION_DEFINED "reset vector address";
    else
        rv = RVBAR<31:1>:'0';

    // The reset vector must be correctly aligned
    assert rv<0> == '0' && (PSTATE.T == '1' || rv<1> == '0');

    BranchTo(rv, BranchType_RESET);
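
The reset-vector selection at the end of AArch32.TakeReset() amounts to the following C sketch (hypothetical helper; impdef_rv stands in for the IMPLEMENTATION DEFINED address):

#include <stdbool.h>
#include <stdint.h>

static uint32_t reset_vector(bool have_el3, uint32_t mvbar, uint32_t rvbar,
                             uint32_t impdef_rv)
{
    if (have_el3)
        return (mvbar & 1u) ? (mvbar & ~1u) : impdef_rv;  /* MVBAR<0> selects */
    return rvbar & ~1u;                                   /* bit 0 cleared */
}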

Library pseudocode for aarch32/exceptions/exceptions/ExcVectorBase

// ExcVectorBase()
// ===============

bits(32) ExcVectorBase()
    if SCTLR.V == '1' then    // Hivecs selected, base = 0xFFFF0000
        return Ones(16):Zeros(16);
    else
        return VBAR<31:5>:Zeros(5);
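
Equivalently, in C (illustrative sketch only):

#include <stdbool.h>
#include <stdint.h>

/* SCTLR.V == 1 selects the "hivecs" base 0xFFFF0000; otherwise the base is
 * VBAR with bits [4:0] zeroed. */
static uint32_t exc_vector_base(bool hivecs, uint32_t vbar)
{
    return hivecs ? 0xFFFF0000u : (vbar & ~0x1Fu);
}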

Library pseudocode for aarch32/exceptions/ieeefp/AArch32.FPTrappedException

// AArch32.FPTrappedException()
// ============================

AArch32.FPTrappedException(bits(8) accumulated_exceptions)
    if AArch32.GeneralExceptionsToAArch64() then
        is_ase = FALSE;
        element = 0;
        AArch64.FPTrappedException(is_ase, element, accumulated_exceptions);
    FPEXC.DEX = '1';
    FPEXC.TFV = '1';
    FPEXC<7,4:0> = accumulated_exceptions<7,4:0>;    // IDF,IXF,UFF,OFF,DZF,IOF
    FPEXC<10:8> = '111';                             // VECITR is RES1

    AArch32.TakeUndefInstrException();

Library pseudocode for aarch32/exceptions/syscalls/AArch32.CallHypervisor

// AArch32.CallHypervisor()
// ========================
// Performs a HVC call

AArch32.CallHypervisor(bits(16) immediate)
    assert HaveEL(EL2);

    if !ELUsingAArch32(EL2) then
        AArch64.CallHypervisor(immediate);
    else
        AArch32.TakeHVCException(immediate);

Library pseudocode for aarch32/exceptions/syscalls/AArch32.CallSupervisor

// AArch32.CallSupervisor()
// ========================
// Calls the Supervisor

AArch32.CallSupervisor(bits(16) immediate)

    if AArch32.CurrentCond() != '1110' then
        immediate = bits(16) UNKNOWN;
    if AArch32.GeneralExceptionsToAArch64() then
        AArch64.CallSupervisor(immediate);
    else
        AArch32.TakeSVCException(immediate);

Library pseudocode for aarch32/exceptions/syscalls/AArch32.TakeHVCException

// AArch32.TakeHVCException()
// ==========================

AArch32.TakeHVCException(bits(16) immediate)
    assert HaveEL(EL2) && ELUsingAArch32(EL2);

    AArch32.ITAdvance();
    SSAdvance();
    bits(32) preferred_exception_return = NextInstrAddr();
    vect_offset = 0x08;

    exception = ExceptionSyndrome(Exception_HypervisorCall);
    exception.syndrome<15:0> = immediate;

    if PSTATE.EL == EL2 then
        AArch32.EnterHypMode(exception, preferred_exception_return, vect_offset);
    else
        AArch32.EnterHypMode(exception, preferred_exception_return, 0x14);

Library pseudocode for aarch32/exceptions/syscalls/AArch32.TakeSMCException

// AArch32.TakeSMCException()
// ==========================

AArch32.TakeSMCException()
    assert HaveEL(EL3) && ELUsingAArch32(EL3);
    AArch32.ITAdvance();
    SSAdvance();
    bits(32) preferred_exception_return = NextInstrAddr();
    vect_offset = 0x08;
    lr_offset = 0;

    AArch32.EnterMonitorMode(preferred_exception_return, lr_offset, vect_offset);

Library pseudocode for aarch32/exceptions/syscalls/AArch32.TakeSVCException

// AArch32.TakeSVCException()
// ==========================

AArch32.TakeSVCException(bits(16) immediate)

    AArch32.ITAdvance();
    SSAdvance();
    route_to_hyp = EL2Enabled() && PSTATE.EL == EL0 && HCR.TGE == '1';

    bits(32) preferred_exception_return = NextInstrAddr();
    vect_offset = 0x08;
    lr_offset = 0;

    if PSTATE.EL == EL2 || route_to_hyp then
        exception = ExceptionSyndrome(Exception_SupervisorCall);
        exception.syndrome<15:0> = immediate;
        if PSTATE.EL == EL2 then
            AArch32.EnterHypMode(exception, preferred_exception_return, vect_offset);
        else
            AArch32.EnterHypMode(exception, preferred_exception_return, 0x14);
    else
        AArch32.EnterMode(M32_Svc, preferred_exception_return, lr_offset, vect_offset);

Library pseudocode for aarch32/exceptions/takeexception/AArch32.EnterHypMode

// AArch32.EnterHypMode()
// ======================
// Take an exception to Hyp mode.

AArch32.EnterHypMode(ExceptionRecord exception, bits(32) preferred_exception_return,
                     integer vect_offset)
    SynchronizeContext();
    assert HaveEL(EL2) && !IsSecure() && ELUsingAArch32(EL2);

    spsr = GetPSRFromPSTATE();
    if !(exception.type IN {Exception_IRQ, Exception_FIQ}) then
        AArch32.ReportHypEntry(exception);
    AArch32.WriteMode(M32_Hyp);
    SPSR[] = spsr;
    if HaveSSBSExt() then PSTATE.SSBS = HSCTLR.DSSBS;
    ELR_hyp = preferred_exception_return;
    PSTATE.T = HSCTLR.TE;                       // PSTATE.J is RES0
    PSTATE.SS = '0';
    if !HaveEL(EL3) || SCR_GEN[].EA == '0' then PSTATE.A = '1';
    if !HaveEL(EL3) || SCR_GEN[].IRQ == '0' then PSTATE.I = '1';
    if !HaveEL(EL3) || SCR_GEN[].FIQ == '0' then PSTATE.F = '1';
    PSTATE.E = HSCTLR.EE;
    PSTATE.IL = '0';
    PSTATE.IT = '00000000';
    BranchTo(HVBAR<31:5>:vect_offset<4:0>, BranchType_EXCEPTION);
    EndOfInstruction();

Library pseudocode for aarch32/exceptions/takeexception/AArch32.EnterMode

// AArch32.EnterMode()
// ===================
// Take an exception to a mode other than Monitor and Hyp mode.

AArch32.EnterMode(bits(5) target_mode, bits(32) preferred_exception_return, integer lr_offset,
                  integer vect_offset)
    SynchronizeContext();
    assert ELUsingAArch32(EL1) && PSTATE.EL != EL2;

    spsr = GetPSRFromPSTATE();
    if PSTATE.M == M32_Monitor then SCR.NS = '0';
    AArch32.WriteMode(target_mode);
    SPSR[] = spsr;
    if HaveSSBSExt() then PSTATE.SSBS = SCTLR.DSSBS;
    R[14] = preferred_exception_return + lr_offset;
    PSTATE.T = SCTLR.TE;                        // PSTATE.J is RES0
    PSTATE.SS = '0';
    if target_mode == M32_FIQ then
        PSTATE.<A,I,F> = '111';
    elsif target_mode IN {M32_Abort, M32_IRQ} then
        PSTATE.<A,I> = '11';
    else
        PSTATE.I = '1';
    PSTATE.E = SCTLR.EE;
    PSTATE.IL = '0';
    PSTATE.IT = '00000000';
    if HavePANExt() && SCTLR.SPAN == '0' then PSTATE.PAN = '1';
    BranchTo(ExcVectorBase()<31:5>:vect_offset<4:0>, BranchType_EXCEPTION);
    EndOfInstruction();

Library pseudocode for aarch32/exceptions/takeexception/AArch32.EnterMonitorMode

// AArch32.EnterMonitorMode()
// ==========================
// Take an exception to Monitor mode.

AArch32.EnterMonitorMode(bits(32) preferred_exception_return, integer lr_offset,
                         integer vect_offset)
    SynchronizeContext();
    assert HaveEL(EL3) && ELUsingAArch32(EL3);
    from_secure = IsSecure();
    spsr = GetPSRFromPSTATE();
    if PSTATE.M == M32_Monitor then SCR.NS = '0';
    AArch32.WriteMode(M32_Monitor);
    SPSR[] = spsr;
    if HaveSSBSExt() then PSTATE.SSBS = SCTLR.DSSBS;
    R[14] = preferred_exception_return + lr_offset;
    PSTATE.T = SCTLR.TE;                        // PSTATE.J is RES0
    PSTATE.SS = '0';
    PSTATE.<A,I,F> = '111';
    PSTATE.E = SCTLR.EE;
    PSTATE.IL = '0';
    PSTATE.IT = '00000000';
    if HavePANExt() then
        if !from_secure then
            PSTATE.PAN = '0';
        elsif SCTLR.SPAN == '0' then
            PSTATE.PAN = '1';
    BranchTo(MVBAR<31:5>:vect_offset<4:0>, BranchType_EXCEPTION);
    EndOfInstruction();

Library pseudocode for aarch32/exceptions/traps/AArch32.AArch32SystemAccessTrap

// AArch32.AArch32SystemAccessTrap()
// =================================
// Trapped AArch32 System register access other than due to CPTR_EL2 or CPACR_EL1.

AArch32.AArch32SystemAccessTrap(bits(2) target_el, bits(32) instr)
    assert HaveEL(target_el) && target_el != EL0 && UInt(target_el) >= UInt(PSTATE.EL);

    if !ELUsingAArch32(target_el) || AArch32.GeneralExceptionsToAArch64() then
        AArch64.AArch32SystemAccessTrap(target_el, instr);

    assert target_el IN {EL1, EL2};

    if target_el == EL2 then
        exception = AArch32.AArch32SystemAccessTrapSyndrome(instr);
        AArch32.TakeHypTrapException(exception);
    else
        AArch32.TakeUndefInstrException();

Library pseudocode for aarch32/exceptions/traps/AArch32.AArch32SystemAccessTrapSyndrome

// AArch32.AArch32SystemAccessTrapSyndrome()
// =========================================
// Return the syndrome information for traps on AArch32 MCR, MCRR, MRC, MRRC, and VMRS instructions,
// other than traps that are due to HCPTR or CPACR.

ExceptionRecord AArch32.AArch32SystemAccessTrapSyndrome(bits(32) instr)
    ExceptionRecord exception;

    cpnum = UInt(instr<11:8>);
    bits(20) iss = Zeros();

    if instr<27:24> == '1110' && instr<4> == '1' && instr<31:28> != '1111' then    // MRC/MCR
        case cpnum of
            when 10  exception = ExceptionSyndrome(Exception_FPIDTrap);
            when 14  exception = ExceptionSyndrome(Exception_CP14RTTrap);
            when 15  exception = ExceptionSyndrome(Exception_CP15RTTrap);
            otherwise Unreachable();
        iss<19:17> = instr<7:5>;           // opc2
        iss<16:14> = instr<23:21>;         // opc1
        iss<13:10> = instr<19:16>;         // CRn
        iss<8:5>   = instr<15:12>;         // Rt
        iss<4:1>   = instr<3:0>;           // CRm
    elsif instr<27:21> == '1100010' && instr<31:28> != '1111' then                 // MRRC/MCRR
        case cpnum of
            when 14  exception = ExceptionSyndrome(Exception_CP14RRTTrap);
            when 15  exception = ExceptionSyndrome(Exception_CP15RRTTrap);
            otherwise Unreachable();
        iss<19:16> = instr<7:4>;           // opc1
        iss<13:10> = instr<19:16>;         // Rt2
        iss<8:5>   = instr<15:12>;         // Rt
        iss<4:1>   = instr<3:0>;           // CRm
    elsif instr<27:25> == '110' && instr<31:28> != '1111' then                     // LDC/STC
        assert cpnum == 14;
        exception = ExceptionSyndrome(Exception_CP14DTTrap);
        iss<19:12> = instr<7:0>;           // imm8
        iss<4>     = instr<23>;            // U
        iss<2:1>   = instr<24,21>;         // P,W
        if instr<19:16> == '1111' then     // Rn==15, LDC(Literal addressing)/STC
            iss<8:5> = bits(4) UNKNOWN;
            iss<3>   = '1';
        else
            iss<8:5> = instr<19:16>;       // Rn
            iss<3>   = '0';
    else
        Unreachable();
    iss<0> = instr<20>;                    // Direction

    exception.syndrome<24:20> = ConditionSyndrome();
    exception.syndrome<19:0> = iss;

    return exception;
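
As an illustration of the MRC/MCR ISS packing above, this C sketch extracts the fields from a trapped 32-bit instruction into the 20-bit ISS layout (hypothetical helper; the other encodings follow the same pattern):

#include <stdint.h>

static uint32_t mcr_mrc_iss(uint32_t instr)
{
    uint32_t iss = 0;
    iss |= ((instr >>  5) & 0x7u) << 17;   /* opc2 -> iss<19:17> */
    iss |= ((instr >> 21) & 0x7u) << 14;   /* opc1 -> iss<16:14> */
    iss |= ((instr >> 16) & 0xFu) << 10;   /* CRn  -> iss<13:10> */
    iss |= ((instr >> 12) & 0xFu) <<  5;   /* Rt   -> iss<8:5>   */
    iss |= ((instr >>  0) & 0xFu) <<  1;   /* CRm  -> iss<4:1>   */
    iss |= (instr >> 20) & 0x1u;           /* Direction -> iss<0> */
    return iss;
}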

Library pseudocode for aarch32/exceptions/traps/AArch32.CheckAdvSIMDOrFPEnabled

// AArch32.CheckAdvSIMDOrFPEnabled()
// =================================
// Check against CPACR, FPEXC, HCPTR, NSACR, and CPTR_EL3.

AArch32.CheckAdvSIMDOrFPEnabled(boolean fpexc_check, boolean advsimd)
    if PSTATE.EL == EL0 && (!HaveEL(EL2) || (!ELUsingAArch32(EL2) && HCR_EL2.TGE == '0')) && !ELUsingAArch32(EL1) then
        // The PE behaves as if FPEXC.EN is 1
        AArch64.CheckFPAdvSIMDEnabled();
    elsif PSTATE.EL == EL0 && HaveEL(EL2) && !ELUsingAArch32(EL2) && HCR_EL2.TGE == '1' && !ELUsingAArch32(EL1) then
        if fpexc_check && HCR_EL2.RW == '0' then
            fpexc_en = bits(1) IMPLEMENTATION_DEFINED "FPEXC.EN value when TGE==1 and RW==0";
            if fpexc_en == '0' then UNDEFINED;
        AArch64.CheckFPAdvSIMDEnabled();
    else
        cpacr_asedis = CPACR.ASEDIS;
        cpacr_cp10 = CPACR.cp10;

        if HaveEL(EL3) && ELUsingAArch32(EL3) && !IsSecure() then
            // Check if access disabled in NSACR
            if NSACR.NSASEDIS == '1' then cpacr_asedis = '1';
            if NSACR.cp10 == '0' then cpacr_cp10 = '00';

        if PSTATE.EL != EL2 then
            // Check if Advanced SIMD disabled in CPACR
            if advsimd && cpacr_asedis == '1' then UNDEFINED;

            if cpacr_cp10 == '10' then
                (c, cpacr_cp10) = ConstrainUnpredictableBits(Unpredictable_RESCPACR);

            // Check if access disabled in CPACR
            case cpacr_cp10 of
                when '00' disabled = TRUE;
                when '01' disabled = PSTATE.EL == EL0;
                when '11' disabled = FALSE;
            if disabled then UNDEFINED;

        // If required, check FPEXC enabled bit.
        if fpexc_check && FPEXC.EN == '0' then UNDEFINED;

        AArch32.CheckFPAdvSIMDTrap(advsimd);    // Also check against HCPTR and CPTR_EL3

Library pseudocode for aarch32/exceptions/traps/AArch32.CheckFPAdvSIMDTrap

// AArch32.CheckFPAdvSIMDTrap()
// ============================
// Check against CPTR_EL2 and CPTR_EL3.

AArch32.CheckFPAdvSIMDTrap(boolean advsimd)
    if EL2Enabled() && !ELUsingAArch32(EL2) then
        AArch64.CheckFPAdvSIMDTrap();
    else
        if HaveEL(EL2) && !IsSecure() then
            hcptr_tase = HCPTR.TASE;
            hcptr_cp10 = HCPTR.TCP10;

            if HaveEL(EL3) && ELUsingAArch32(EL3) && !IsSecure() then
                // Check if access disabled in NSACR
                if NSACR.NSASEDIS == '1' then hcptr_tase = '1';
                if NSACR.cp10 == '0' then hcptr_cp10 = '1';

            // Check if access disabled in HCPTR
            if (advsimd && hcptr_tase == '1') || hcptr_cp10 == '1' then
                exception = ExceptionSyndrome(Exception_AdvSIMDFPAccessTrap);
                exception.syndrome<24:20> = ConditionSyndrome();
                if advsimd then
                    exception.syndrome<5> = '1';
                else
                    exception.syndrome<5> = '0';
                exception.syndrome<3:0> = '1010';         // coproc field, always 0xA
                if PSTATE.EL == EL2 then
                    AArch32.TakeUndefInstrException(exception);
                else
                    AArch32.TakeHypTrapException(exception);

        if HaveEL(EL3) && !ELUsingAArch32(EL3) then
            // Check if access disabled in CPTR_EL3
            if CPTR_EL3.TFP == '1' then
                AArch64.AdvSIMDFPAccessTrap(EL3);
    return;

Library pseudocode for aarch32/exceptions/traps/AArch32.CheckForSMCUndefOrTrap

// AArch32.CheckForSMCUndefOrTrap()
// ================================
// Check for UNDEFINED or trap on SMC instruction

AArch32.CheckForSMCUndefOrTrap()
    if !HaveEL(EL3) || PSTATE.EL == EL0 then UNDEFINED;
    if EL2Enabled() && !ELUsingAArch32(EL2) then
        AArch64.CheckForSMCUndefOrTrap(Zeros(16));
    else
        route_to_hyp = HaveEL(EL2) && !IsSecure() && PSTATE.EL == EL1 && HCR.TSC == '1';
        if route_to_hyp then
            exception = ExceptionSyndrome(Exception_MonitorCall);
            AArch32.TakeHypTrapException(exception);

Library pseudocode for aarch32/exceptions/traps/AArch32.CheckForWFxTrap

// AArch32.CheckForWFxTrap()
// =========================
// Check for trap on WFE or WFI instruction

AArch32.CheckForWFxTrap(bits(2) target_el, boolean is_wfe)
    assert HaveEL(target_el);

    // Check for routing to AArch64
    if !ELUsingAArch32(target_el) then
        AArch64.CheckForWFxTrap(target_el, is_wfe);
        return;
    case target_el of
        when EL1 trap = (if is_wfe then SCTLR.nTWE else SCTLR.nTWI) == '0';
        when EL2 trap = (if is_wfe then HCR.TWE else HCR.TWI) == '1';
        when EL3 trap = (if is_wfe then SCR.TWE else SCR.TWI) == '1';
    if trap then
        if target_el == EL1 && EL2Enabled() && !ELUsingAArch32(EL2) && HCR_EL2.TGE == '1' then
            AArch64.WFxTrap(target_el, is_wfe);
        if target_el == EL3 then
            AArch32.TakeMonitorTrapException();
        elsif target_el == EL2 then
            exception = ExceptionSyndrome(Exception_WFxTrap);
            exception.syndrome<24:20> = ConditionSyndrome();
            exception.syndrome<0> = if is_wfe then '1' else '0';
            AArch32.TakeHypTrapException(exception);
        else
            AArch32.TakeUndefInstrException();

Library pseudocode for aarch32/exceptions/traps/AArch32.CheckITEnabled

// AArch32.CheckITEnabled()
// ========================
// Check whether the T32 IT instruction is disabled.

AArch32.CheckITEnabled(bits(4) mask)
    if PSTATE.EL == EL2 then
        it_disabled = HSCTLR.ITD;
    else
        it_disabled = (if ELUsingAArch32(EL1) then SCTLR.ITD else SCTLR[].ITD);
    if it_disabled == '1' then
        if mask != '1000' then UNDEFINED;

        // Otherwise whether the IT block is allowed depends on hw1 of the next instruction.
        next_instr = AArch32.MemSingle[NextInstrAddr(), 2, AccType_IFETCH, TRUE];

        if next_instr IN {'11xxxxxxxxxxxxxx', '1011xxxxxxxxxxxx', '10100xxxxxxxxxxx', '01001xxxxxxxxxxx',
                          '010001xxx1111xxx', '010001xx1xxxx111'} then
            // It is IMPLEMENTATION DEFINED whether the Undefined Instruction exception is
            // taken on the IT instruction or the next instruction. This is not reflected in
            // the pseudocode, which always takes the exception on the IT instruction. This
            // also does not take into account cases where the next instruction is UNPREDICTABLE.
            UNDEFINED;
    return;

Library pseudocode for aarch32/exceptions/traps/AArch32.CheckIllegalState

// AArch32.CheckIllegalState()
// ===========================
// Check PSTATE.IL bit and generate Illegal Execution state exception if set.

AArch32.CheckIllegalState()
    if AArch32.GeneralExceptionsToAArch64() then
        AArch64.CheckIllegalState();
    elsif PSTATE.IL == '1' then
        route_to_hyp = EL2Enabled() && PSTATE.EL == EL0 && HCR.TGE == '1';

        bits(32) preferred_exception_return = ThisInstrAddr();
        vect_offset = 0x04;

        if PSTATE.EL == EL2 || route_to_hyp then
            exception = ExceptionSyndrome(Exception_IllegalState);
            if PSTATE.EL == EL2 then
                AArch32.EnterHypMode(exception, preferred_exception_return, vect_offset);
            else
                AArch32.EnterHypMode(exception, preferred_exception_return, 0x14);
        else
            AArch32.TakeUndefInstrException();

Library pseudocode for aarch32/exceptions/traps/AArch32.CheckSETENDEnabled

// AArch32.CheckSETENDEnabled()
// ============================
// Check whether the AArch32 SETEND instruction is disabled.

AArch32.CheckSETENDEnabled()
    if PSTATE.EL == EL2 then
        setend_disabled = HSCTLR.SED;
    else
        setend_disabled = (if ELUsingAArch32(EL1) then SCTLR.SED else SCTLR[].SED);
    if setend_disabled == '1' then
        UNDEFINED;
    return;

Library pseudocode for aarch32/exceptions/traps/AArch32.TakeHypTrapException

// AArch32.TakeHypTrapException()
// ==============================
// Exceptions routed to Hyp mode as a Hyp Trap exception.

AArch32.TakeHypTrapException(ExceptionRecord exception)
    assert HaveEL(EL2) && !IsSecure() && ELUsingAArch32(EL2);

    bits(32) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x14;

    AArch32.EnterHypMode(exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch32/exceptions/traps/AArch32.TakeMonitorTrapException

// AArch32.TakeMonitorTrapException()
// ==================================
// Exceptions routed to Monitor mode as a Monitor Trap exception.

AArch32.TakeMonitorTrapException()
    assert HaveEL(EL3) && ELUsingAArch32(EL3);

    bits(32) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x04;
    lr_offset = if CurrentInstrSet() == InstrSet_A32 then 4 else 2;

    AArch32.EnterMonitorMode(preferred_exception_return, lr_offset, vect_offset);

Library pseudocode for aarch32/exceptions/traps/AArch32.TakeUndefInstrException

// AArch32.TakeUndefInstrException()
// =================================

AArch32.TakeUndefInstrException()
    exception = ExceptionSyndrome(Exception_Uncategorized);
    AArch32.TakeUndefInstrException(exception);

// AArch32.TakeUndefInstrException()
// =================================

AArch32.TakeUndefInstrException(ExceptionRecord exception)
    route_to_hyp = EL2Enabled() && PSTATE.EL == EL0 && HCR.TGE == '1';

    bits(32) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x04;
    lr_offset = if CurrentInstrSet() == InstrSet_A32 then 4 else 2;

    if PSTATE.EL == EL2 then
        AArch32.EnterHypMode(exception, preferred_exception_return, vect_offset);
    elsif route_to_hyp then
        AArch32.EnterHypMode(exception, preferred_exception_return, 0x14);
    else
        AArch32.EnterMode(M32_Undef, preferred_exception_return, lr_offset, vect_offset);

Library pseudocode for aarch32/exceptions/traps/AArch32.UndefinedFault

// AArch32.UndefinedFault()
// ========================

AArch32.UndefinedFault()
    if AArch32.GeneralExceptionsToAArch64() then AArch64.UndefinedFault();
    AArch32.TakeUndefInstrException();

Library pseudocode for aarch32/functions/aborts/AArch32.CreateFaultRecord

// AArch32.CreateFaultRecord()
// ===========================

FaultRecord AArch32.CreateFaultRecord(Fault type, bits(40) ipaddress, bits(4) domain, integer level,
                                      AccType acctype, boolean write, bit extflag, bits(4) debugmoe,
                                      bits(2) errortype, boolean secondstage, boolean s2fs1walk)

    FaultRecord fault;
    fault.type = type;
    if (type != Fault_None && PSTATE.EL != EL2 && TTBCR.EAE == '0' && !secondstage && !s2fs1walk &&
        AArch32.DomainValid(type, level)) then
        fault.domain = domain;
    else
        fault.domain = bits(4) UNKNOWN;
    fault.debugmoe = debugmoe;
    fault.errortype = errortype;
    fault.ipaddress.NS = bit UNKNOWN;
    fault.ipaddress.address = ZeroExtend(ipaddress);
    fault.level = level;
    fault.acctype = acctype;
    fault.write = write;
    fault.extflag = extflag;
    fault.secondstage = secondstage;
    fault.s2fs1walk = s2fs1walk;

    return fault;

Library pseudocode for aarch32/functions/aborts/AArch32.DomainValid

// AArch32.DomainValid()
// =====================
// Returns TRUE if the Domain is valid for a Short-descriptor translation scheme.

boolean AArch32.DomainValid(Fault type, integer level)
    assert type != Fault_None;

    case type of
        when Fault_Domain
            return TRUE;
        when Fault_Translation, Fault_AccessFlag, Fault_SyncExternalOnWalk, Fault_SyncParityOnWalk
            return level == 2;
        otherwise
            return FALSE;

Library pseudocode for aarch32/functions/aborts/AArch32.FaultStatusLD

// AArch32.FaultStatusLD()
// =======================
// Creates an exception fault status value for Abort and Watchpoint exceptions taken
// to Abort mode using AArch32 and Long-descriptor format.

bits(32) AArch32.FaultStatusLD(boolean d_side, FaultRecord fault)
    assert fault.type != Fault_None;

    bits(32) fsr = Zeros();
    if HaveRASExt() && IsAsyncAbort(fault) then fsr<15:14> = fault.errortype;
    if d_side then
        if fault.acctype IN {AccType_DC, AccType_IC, AccType_AT} then
            fsr<13> = '1'; fsr<11> = '1';
        else
            fsr<11> = if fault.write then '1' else '0';
    if IsExternalAbort(fault) then fsr<12> = fault.extflag;
    fsr<9> = '1';
    fsr<5:0> = EncodeLDFSC(fault.type, fault.level);

    return fsr;

Library pseudocode for aarch32/functions/aborts/AArch32.FaultStatusSD

// AArch32.FaultStatusSD()
// =======================
// Creates an exception fault status value for Abort and Watchpoint exceptions taken
// to Abort mode using AArch32 and Short-descriptor format.

bits(32) AArch32.FaultStatusSD(boolean d_side, FaultRecord fault)
    assert fault.type != Fault_None;

    bits(32) fsr = Zeros();
    if HaveRASExt() && IsAsyncAbort(fault) then fsr<15:14> = fault.errortype;
    if d_side then
        if fault.acctype IN {AccType_DC, AccType_IC, AccType_AT} then
            fsr<13> = '1'; fsr<11> = '1';
        else
            fsr<11> = if fault.write then '1' else '0';
    if IsExternalAbort(fault) then fsr<12> = fault.extflag;
    fsr<9> = '0';
    fsr<10,3:0> = EncodeSDFSC(fault.type, fault.level);
    if d_side then
        fsr<7:4> = fault.domain;              // Domain field (data fault only)

    return fsr;
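Note that the Short-descriptor format splits the 5-bit fault status code across non-contiguous FSR bits: FS[4] lands in FSR<10> and FS[3:0] in FSR<3:0>. A minimal C sketch of that packing (illustrative only; pack_sd_fsr_status() is an invented name, and fs stands for the EncodeSDFSC() result):

    #include <stdint.h>

    /* Place a 5-bit Short-descriptor fault status code into FSR bits
       <10> and <3:0>, as "fsr<10,3:0> = EncodeSDFSC(...)" does above. */
    static uint32_t pack_sd_fsr_status(uint32_t fsr, uint32_t fs) {
        fsr &= ~((1u << 10) | 0xFu);       /* clear FSR<10> and FSR<3:0> */
        fsr |= ((fs >> 4) & 1u) << 10;     /* FS[4]   -> FSR<10>  */
        fsr |= fs & 0xFu;                  /* FS[3:0] -> FSR<3:0> */
        return fsr;
    }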

Library pseudocode for aarch32/functions/aborts/AArch32.FaultSyndrome

// AArch32.FaultSyndrome()
// =======================
// Creates an exception syndrome value for Abort and Watchpoint exceptions taken to
// AArch32 Hyp mode.

bits(25) AArch32.FaultSyndrome(boolean d_side, FaultRecord fault)
    assert fault.type != Fault_None;

    bits(25) iss = Zeros();
    if HaveRASExt() && IsAsyncAbort(fault) then iss<11:10> = fault.errortype; // AET
    if d_side then
        if IsSecondStage(fault) && !fault.s2fs1walk then iss<24:14> = LSInstructionSyndrome();
        if fault.acctype IN {AccType_DC, AccType_DC_UNPRIV, AccType_IC, AccType_AT} then
            iss<8> = '1'; iss<6> = '1';
        else
            iss<6> = if fault.write then '1' else '0';
    if IsExternalAbort(fault) then iss<9> = fault.extflag;
    iss<7> = if fault.s2fs1walk then '1' else '0';
    iss<5:0> = EncodeLDFSC(fault.type, fault.level);

    return iss;

Library pseudocode for aarch32/functions/aborts/EncodeSDFSC

// EncodeSDFSC()
// =============
// Function that gives the Short-descriptor FSR code for different types of Fault

bits(5) EncodeSDFSC(Fault type, integer level)

    bits(5) result;
    case type of
        when Fault_AccessFlag
            assert level IN {1,2};
            result = if level == 1 then '00011' else '00110';
        when Fault_Alignment
            result = '00001';
        when Fault_Permission
            assert level IN {1,2};
            result = if level == 1 then '01101' else '01111';
        when Fault_Domain
            assert level IN {1,2};
            result = if level == 1 then '01001' else '01011';
        when Fault_Translation
            assert level IN {1,2};
            result = if level == 1 then '00101' else '00111';
        when Fault_SyncExternal
            result = '01000';
        when Fault_SyncExternalOnWalk
            assert level IN {1,2};
            result = if level == 1 then '01100' else '01110';
        when Fault_SyncParity
            result = '11001';
        when Fault_SyncParityOnWalk
            assert level IN {1,2};
            result = if level == 1 then '11100' else '11110';
        when Fault_AsyncParity
            result = '11000';
        when Fault_AsyncExternal
            result = '10110';
        when Fault_Debug
            result = '00010';
        when Fault_TLBConflict
            result = '10000';
        when Fault_Lockdown
            result = '10100';   // IMPLEMENTATION DEFINED
        when Fault_Exclusive
            result = '10101';   // IMPLEMENTATION DEFINED
        when Fault_ICacheMaint
            result = '00100';
        otherwise
            Unreachable();

    return result;

Library pseudocode for aarch32/functions/common/A32ExpandImm

// A32ExpandImm()
// ==============

bits(32) A32ExpandImm(bits(12) imm12)
    // PSTATE.C argument to following function call does not affect the imm32 result.
    (imm32, -) = A32ExpandImm_C(imm12, PSTATE.C);
    return imm32;

Library pseudocode for aarch32/functions/common/A32ExpandImm_C

// A32ExpandImm_C()
// ================

(bits(32), bit) A32ExpandImm_C(bits(12) imm12, bit carry_in)

    unrotated_value = ZeroExtend(imm12<7:0>, 32);
    (imm32, carry_out) = Shift_C(unrotated_value, SRType_ROR, 2*UInt(imm12<11:8>), carry_in);

    return (imm32, carry_out);
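The expansion itself is an 8-bit constant rotated right by twice the 4-bit rotation field, with the shifter carry-out defined as in Shift_C(). A hedged C sketch (a32_expand_imm_c() is an invented name):

    #include <stdint.h>

    /* A32 modified-immediate expansion with carry, mirroring A32ExpandImm_C(). */
    static uint32_t a32_expand_imm_c(uint32_t imm12, int carry_in, int *carry_out) {
        uint32_t value  = imm12 & 0xFFu;               /* imm12<7:0>      */
        unsigned rotate = 2u * ((imm12 >> 8) & 0xFu);  /* 2 * imm12<11:8> */
        uint32_t imm32  = (rotate == 0) ? value
                        : (value >> rotate) | (value << (32u - rotate));
        /* A rotation of 0 leaves the carry unchanged; otherwise the
           carry-out is bit<31> of the rotated result. */
        *carry_out = (rotate == 0) ? carry_in : (int)(imm32 >> 31);
        return imm32;
    }

For example, imm12 = 0x1FF (constant 0xFF, rotation field 1) yields 0xC000003F with a carry-out of 1.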

Library pseudocode for aarch32/functions/common/DecodeImmShift

// DecodeImmShift()
// ================

(SRType, integer) DecodeImmShift(bits(2) type, bits(5) imm5)

    case type of
        when '00'
            shift_t = SRType_LSL;  shift_n = UInt(imm5);
        when '01'
            shift_t = SRType_LSR;  shift_n = if imm5 == '00000' then 32 else UInt(imm5);
        when '10'
            shift_t = SRType_ASR;  shift_n = if imm5 == '00000' then 32 else UInt(imm5);
        when '11'
            if imm5 == '00000' then
                shift_t = SRType_RRX;  shift_n = 1;
            else
                shift_t = SRType_ROR;  shift_n = UInt(imm5);

    return (shift_t, shift_n);
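The special cases are worth spelling out: an imm5 of zero encodes a shift of 32 for LSR and ASR, and selects RRX (with an implied amount of 1) in the ROR row. A C transliteration for illustration (the srtype_t enum and function name are invented):

    #include <stdint.h>

    typedef enum { SR_LSL, SR_LSR, SR_ASR, SR_ROR, SR_RRX } srtype_t;

    /* Decode the A32 immediate-shift fields, mirroring DecodeImmShift(). */
    static srtype_t decode_imm_shift(uint32_t type, uint32_t imm5, int *shift_n) {
        switch (type & 3u) {
        case 0:  *shift_n = (int)imm5;             return SR_LSL;
        case 1:  *shift_n = imm5 ? (int)imm5 : 32; return SR_LSR;
        case 2:  *shift_n = imm5 ? (int)imm5 : 32; return SR_ASR;
        default:
            if (imm5 == 0) { *shift_n = 1; return SR_RRX; }
            *shift_n = (int)imm5;                  return SR_ROR;
        }
    }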

Library pseudocode for aarch32/functions/common/DecodeRegShift

// DecodeRegShift()
// ================

SRType DecodeRegShift(bits(2) type)
    case type of
        when '00'  shift_t = SRType_LSL;
        when '01'  shift_t = SRType_LSR;
        when '10'  shift_t = SRType_ASR;
        when '11'  shift_t = SRType_ROR;
    return shift_t;

Library pseudocode for aarch32/functions/common/RRX

// RRX()
// =====

bits(N) RRX(bits(N) x, bit carry_in)
    (result, -) = RRX_C(x, carry_in);
    return result;

Library pseudocode for aarch32/functions/common/RRX_C

// RRX_C()
// =======

(bits(N), bit) RRX_C(bits(N) x, bit carry_in)
    result = carry_in : x<N-1:1>;
    carry_out = x<0>;
    return (result, carry_out);
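In other words, the carry flag shifts in at the top and the old bit<0> becomes the new carry. For the 32-bit case, a minimal C sketch (rrx_c() is an invented name):

    #include <stdint.h>

    /* Rotate right with extend, mirroring RRX_C() for N == 32. */
    static uint32_t rrx_c(uint32_t x, int carry_in, int *carry_out) {
        *carry_out = (int)(x & 1u);
        return ((uint32_t)carry_in << 31) | (x >> 1);
    }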

Library pseudocode for aarch32/functions/common/SRType

enumeration SRType {SRType_LSL, SRType_LSR, SRType_ASR, SRType_ROR, SRType_RRX};

Library pseudocode for aarch32/functions/common/Shift

// Shift()
// =======

bits(N) Shift(bits(N) value, SRType type, integer amount, bit carry_in)
    (result, -) = Shift_C(value, type, amount, carry_in);
    return result;

Library pseudocode for aarch32/functions/common/Shift_C

// Shift_C()
// =========

(bits(N), bit) Shift_C(bits(N) value, SRType type, integer amount, bit carry_in)
    assert !(type == SRType_RRX && amount != 1);

    if amount == 0 then
        (result, carry_out) = (value, carry_in);
    else
        case type of
            when SRType_LSL
                (result, carry_out) = LSL_C(value, amount);
            when SRType_LSR
                (result, carry_out) = LSR_C(value, amount);
            when SRType_ASR
                (result, carry_out) = ASR_C(value, amount);
            when SRType_ROR
                (result, carry_out) = ROR_C(value, amount);
            when SRType_RRX
                (result, carry_out) = RRX_C(value, carry_in);

    return (result, carry_out);
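A 32-bit C rendering of the same dispatch, reusing the srtype_t enum and rrx_c() helper from the sketches above. This is illustrative only and assumes 0 < amount < 32 for the LSL/LSR/ASR/ROR cases, so the shift-by-32 encodings produced by DecodeImmShift() are not covered:

    #include <stdint.h>

    /* Shift with carry-out for 32-bit values, mirroring Shift_C():
       amount 0 passes value and carry through unchanged; otherwise the
       carry-out is the last bit shifted out. */
    static uint32_t shift_c32(uint32_t value, srtype_t type, int amount,
                              int carry_in, int *carry_out) {
        if (amount == 0) { *carry_out = carry_in; return value; }
        switch (type) {
        case SR_LSL:
            *carry_out = (int)((value >> (32 - amount)) & 1u);
            return value << amount;
        case SR_LSR:
            *carry_out = (int)((value >> (amount - 1)) & 1u);
            return value >> amount;
        case SR_ASR:
            *carry_out = (int)((value >> (amount - 1)) & 1u);
            return (uint32_t)((int32_t)value >> amount);
        case SR_ROR:
            value = (value >> amount) | (value << (32 - amount));
            *carry_out = (int)(value >> 31);
            return value;
        default: /* SR_RRX, amount is always 1 */
            return rrx_c(value, carry_in, carry_out);
        }
    }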

Library pseudocode for aarch32/functions/common/T32ExpandImm

// T32ExpandImm()
// ==============

bits(32) T32ExpandImm(bits(12) imm12)
    // PSTATE.C argument to following function call does not affect the imm32 result.
    (imm32, -) = T32ExpandImm_C(imm12, PSTATE.C);
    return imm32;

Library pseudocode for aarch32/functions/common/T32ExpandImm_C

// T32ExpandImm_C()
// ================

(bits(32), bit) T32ExpandImm_C(bits(12) imm12, bit carry_in)

    if imm12<11:10> == '00' then
        case imm12<9:8> of
            when '00'
                imm32 = ZeroExtend(imm12<7:0>, 32);
            when '01'
                imm32 = '00000000' : imm12<7:0> : '00000000' : imm12<7:0>;
            when '10'
                imm32 = imm12<7:0> : '00000000' : imm12<7:0> : '00000000';
            when '11'
                imm32 = imm12<7:0> : imm12<7:0> : imm12<7:0> : imm12<7:0>;
        carry_out = carry_in;
    else
        unrotated_value = ZeroExtend('1':imm12<6:0>, 32);
        (imm32, carry_out) = ROR_C(unrotated_value, UInt(imm12<11:7>));

    return (imm32, carry_out);
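A C rendering of the same expansion for illustration (t32_expand_imm_c() is an invented name). When imm12<11:10> is nonzero the rotation amount imm12<11:7> is at least 8, so the rotate below is always by 8..31 bits:

    #include <stdint.h>

    /* T32 modified-immediate expansion with carry, mirroring T32ExpandImm_C(). */
    static uint32_t t32_expand_imm_c(uint32_t imm12, int carry_in, int *carry_out) {
        uint32_t b = imm12 & 0xFFu;
        if (((imm12 >> 10) & 3u) == 0) {
            *carry_out = carry_in;   /* replication forms leave the carry unchanged */
            switch ((imm12 >> 8) & 3u) {
            case 0:  return b;                                    /* 00000000:imm8  */
            case 1:  return (b << 16) | b;                        /* imm8 in halves */
            case 2:  return (b << 24) | (b << 8);                 /* imm8:00:imm8:00 */
            default: return (b << 24) | (b << 16) | (b << 8) | b; /* imm8 replicated */
            }
        }
        uint32_t value  = 0x80u | (imm12 & 0x7Fu);   /* '1' : imm12<6:0> */
        unsigned rotate = (imm12 >> 7) & 0x1Fu;      /* imm12<11:7>, 8..31 here */
        uint32_t imm32  = (value >> rotate) | (value << (32u - rotate));
        *carry_out = (int)(imm32 >> 31);             /* ROR_C carry-out */
        return imm32;
    }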

Library pseudocode for aarch32/functions/coproc/AArch32.CheckCP15InstrCoarseTraps

// AArch32.CheckCP15InstrCoarseTraps()
// ===================================
// Check for coarse-grained CP15 traps in HSTR and HCR.

boolean AArch32.CheckCP15InstrCoarseTraps(integer CRn, integer nreg, integer CRm)

    // Check for coarse-grained Hyp traps
    if EL2Enabled() && PSTATE.EL IN {EL0,EL1} then
        if PSTATE.EL == EL0 && !ELUsingAArch32(EL2) then
            return AArch64.CheckCP15InstrCoarseTraps(CRn, nreg, CRm);

        // Check for MCR, MRC, MCRR and MRRC disabled by HSTR<CRn/CRm>
        major = if nreg == 1 then CRn else CRm;
        if !(major IN {4,14}) && HSTR<major> == '1' then
            return TRUE;

        // Check for MRC and MCR disabled by HCR.TIDCP
        if (HCR.TIDCP == '1' && nreg == 1 &&
            ((CRn == 9  && CRm IN {0,1,2, 5,6,7,8 }) ||
             (CRn == 10 && CRm IN {0,1, 4, 8 })      ||
             (CRn == 11 && CRm IN {0,1,2,3,4,5,6,7,8,15}))) then
            return TRUE;

    return FALSE;
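The indexing convention is the part that is easy to get wrong: the HSTR trap bit is selected by CRn for single-register transfers (nreg == 1) and by CRm for 64-bit transfers (nreg == 2), and c4/c14 have no trap bits. A hedged C sketch of just that check (hstr stands in for the HSTR register value; the function name is invented):

    #include <stdbool.h>
    #include <stdint.h>

    /* Coarse-grained HSTR<major> test from AArch32.CheckCP15InstrCoarseTraps(). */
    static bool hstr_traps_access(uint32_t hstr, int crn, int crm, int nreg) {
        int major = (nreg == 1) ? crn : crm;
        if (major == 4 || major == 14) return false;   /* no HSTR trap bits */
        return ((hstr >> major) & 1u) != 0;
    }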

Library pseudocode for aarch32/functions/coproc/AArch32.CheckSystemAccess

// AArch32.CheckSystemAccess()
// ===========================
// Check System register access instruction for enables and disables

AArch32.CheckSystemAccess(integer cp_num, bits(32) instr)
    assert cp_num == UInt(instr<11:8>) && (cp_num IN {14,15});

    if PSTATE.EL == EL0 && !ELUsingAArch32(EL1) then
        AArch64.CheckAArch32SystemAccess(instr);
        return;

    // Decode the AArch32 System register access instruction
    if instr<31:28> != '1111' && instr<27:24> == '1110' && instr<4> == '1' then // MRC/MCR
        cprt = TRUE;  cpdt = FALSE;  nreg = 1;
        opc1 = UInt(instr<23:21>);
        opc2 = UInt(instr<7:5>);
        CRn  = UInt(instr<19:16>);
        CRm  = UInt(instr<3:0>);
    elsif instr<31:28> != '1111' && instr<27:21> == '1100010' then // MRRC/MCRR
        cprt = TRUE;  cpdt = FALSE;  nreg = 2;
        opc1 = UInt(instr<7:4>);
        CRm  = UInt(instr<3:0>);
    elsif instr<31:28> != '1111' && instr<27:25> == '110' && instr<22> == '0' then // LDC/STC
        cprt = FALSE;  cpdt = TRUE;  nreg = 0;
        opc1 = 0;
        CRn  = UInt(instr<15:12>);
    else
        allocated = FALSE;

    //
    // Coarse-grain decode into CP14 or CP15 encoding space. Each of the CPxxxInstrDecode functions
    // returns TRUE if the instruction is allocated at the current Exception level, FALSE otherwise.
    if cp_num == 14 then
        // LDC and STC only supported for c5 in CP14 encoding space
        if cpdt && CRn != 5 then
            allocated = FALSE;
        else
            // Coarse-grained decode of CP14 based on opc1 field
            case opc1 of
                when 0     allocated = CP14DebugInstrDecode(instr);
                when 1     allocated = CP14TraceInstrDecode(instr);
                when 7     allocated = CP14JazelleInstrDecode(instr);  // JIDR only
                otherwise  allocated = FALSE;       // All other values are unallocated
    elsif cp_num == 15 then
        // LDC and STC not supported in CP15 encoding space
        if !cprt then
            allocated = FALSE;
        else
            allocated = CP15InstrDecode(instr);

            // Coarse-grain traps to EL2 have a higher priority than exceptions generated because
            // the access instruction is UNDEFINED
            if AArch32.CheckCP15InstrCoarseTraps(CRn, nreg, CRm) then
                // For a coarse-grain trap, if it is IMPLEMENTATION DEFINED whether an access from
                // User mode is UNDEFINED when the trap is disabled, then it is
                // IMPLEMENTATION DEFINED whether the same access is UNDEFINED or generates a trap
                // when the trap is enabled.
                if PSTATE.EL == EL0 && EL2Enabled() && !allocated then
                    if boolean IMPLEMENTATION_DEFINED "UNDEF unallocated CP15 access at EL0" then
                        UNDEFINED;
                AArch32.AArch32SystemAccessTrap(EL2, instr);
    else
        allocated = FALSE;

    if !allocated then
        UNDEFINED;

    // If the instruction is not UNDEFINED, it might be disabled or trapped to a higher EL.
    AArch32.CheckSystemAccessTraps(instr);

    return;

Library pseudocode for aarch32/functions/coproc/AArch32.CheckSystemAccessEL1Traps

// AArch32.CheckSystemAccessEL1Traps()
// ===================================
// Check for configurable disables or traps to EL1 or EL2 of a System register
// access instruction.

AArch32.CheckSystemAccessEL1Traps(bits(32) instr)
    assert PSTATE.EL == EL0;
    if ((HaveEL(EL1) && IsSecure() && !ELUsingAArch32(EL1)) || IsInHost()) then
        AArch64.CheckAArch32SystemAccessEL1Traps(instr);
        return;
    trap = FALSE;
    // Decode the AArch32 System register access instruction
    (op, cp_num, opc1, CRn, CRm, opc2, write) = AArch32.DecodeSysRegAccess(instr);

    if cp_num == 14 then
        if ((op == SystemAccessType_RT && opc1 == 0 && CRn == 0 && CRm == 5 && opc2 == 0) || // DBGDTRRXint/DBGDTRTXint
            (op == SystemAccessType_DT && CRn == 5 && opc2 == 0)) then                       // DBGDTRRXint/DBGDTRTXint (STC/LDC)
            trap = !Halted() && DBGDSCRext.UDCCdis == '1';
        elsif opc1 == 0 then
            trap = DBGDSCRext.UDCCdis == '1';
        elsif opc1 == 1 then
            trap = CPACR.TRCDIS == '1';
            if HaveEL(EL3) && ELUsingAArch32(EL3) && NSACR.NSTRCDIS == '1' then
                trap = TRUE;
    elsif cp_num == 15 then
        if ((op == SystemAccessType_RT && opc1 == 0 && CRn == 9 && CRm == 12 && opc2 == 0) ||   // PMCR
            (op == SystemAccessType_RT && opc1 == 0 && CRn == 9 && CRm == 12 && opc2 == 1) ||   // PMCNTENSET
            (op == SystemAccessType_RT && opc1 == 0 && CRn == 9 && CRm == 12 && opc2 == 2) ||   // PMCNTENCLR
            (op == SystemAccessType_RT && opc1 == 0 && CRn == 9 && CRm == 12 && opc2 == 3) ||   // PMOVSR
            (op == SystemAccessType_RT && opc1 == 0 && CRn == 9 && CRm == 12 && opc2 == 6) ||   // PMCEID0
            (op == SystemAccessType_RT && opc1 == 0 && CRn == 9 && CRm == 12 && opc2 == 7) ||   // PMCEID1
            (op == SystemAccessType_RT && opc1 == 0 && CRn == 9 && CRm == 13 && opc2 == 1) ||   // PMXEVTYPER
            (op == SystemAccessType_RT && opc1 == 0 && CRn == 9 && CRm == 14 && opc2 == 3) ||   // PMOVSSET
            (op == SystemAccessType_RT && opc1 == 0 && CRn == 14 && CRm >= 12)) then            // PMEVTYPER<n>
            trap = PMUSERENR.EN == '0';
        elsif op == SystemAccessType_RT && opc1 == 0 && CRn == 9 && CRm == 14 && opc2 == 4 then // PMSWINC
            trap = PMUSERENR.EN == '0' && PMUSERENR.SW == '0';
        elsif ((op == SystemAccessType_RT && opc1 == 0 && CRn == 9 && CRm == 13 && opc2 == 0) ||     // PMCCNTR
               (op == SystemAccessType_RRT && opc1 == 0 && CRm == 9)) then                           // PMCCNTR (MRRC/MCRR)
            trap = PMUSERENR.EN == '0' && (write || PMUSERENR.CR == '0');
        elsif ((op == SystemAccessType_RT && opc1 == 0 && CRn == 9 && CRm == 13 && opc2 == 2) ||     // PMXEVCNTR
               (op == SystemAccessType_RT && opc1 == 0 && CRn == 14 && CRm >= 8 && CRm <= 11)) then  // PMEVCNTR<n>
            trap = PMUSERENR.EN == '0' && (write || PMUSERENR.ER == '0');
        elsif op == SystemAccessType_RT && opc1 == 0 && CRn == 9 && CRm == 12 && opc2 == 5 then      // PMSELR
            trap = PMUSERENR.EN == '0' && PMUSERENR.ER == '0';
        elsif op == SystemAccessType_RT && opc1 == 0 && CRn == 14 && CRm == 2 && opc2 IN {0,1,2} then // CNTP_TVAL CNTP_CTL CNTP_CVAL
            trap = CNTKCTL.PL0PTEN == '0';
        elsif op == SystemAccessType_RT && opc1 == 0 && CRn == 14 && CRm == 0 && opc2 == 0 then      // CNTFRQ
            trap = CNTKCTL.PL0PCTEN == '0' && CNTKCTL.PL0VCTEN == '0';
        elsif op == SystemAccessType_RRT && opc1 == 1 && CRm == 14 then                              // CNTVCT
            trap = CNTKCTL.PL0VCTEN == '0';
    if trap then
        AArch32.AArch32SystemAccessTrap(EL1, instr);

Library pseudocode for aarch32/functions/coproc/AArch32.CheckSystemAccessEL2Traps

// AArch32.CheckSystemAccessEL2Traps()
// ===================================
// Check for configurable traps to EL2 of a System register access instruction.

AArch32.CheckSystemAccessEL2Traps(bits(32) instr)
    assert EL2Enabled() && PSTATE.EL IN {EL0, EL1, EL2};
    if EL2Enabled() && !ELUsingAArch32(EL2) then
        AArch64.CheckAArch32SystemAccessEL2Traps(instr);
        return;
    trap = FALSE;
    // Decode the AArch32 System register access instruction
    (op, cp_num, opc1, CRn, CRm, opc2, write) = AArch32.DecodeSysRegAccess(instr);

    if cp_num == 14 && PSTATE.EL IN {EL0, EL1} then
        if ((op == SystemAccessType_RT && opc1 == 0 && CRn == 1 && CRm == 0 && opc2 == 0) ||  // DBGDRAR
            (op == SystemAccessType_RRT && opc1 == 0 && CRm == 1) ||                          // DBGDRAR (MRRC)
            (op == SystemAccessType_RT && opc1 == 0 && CRn == 2 && CRm == 0 && opc2 == 0) ||  // DBGDSAR
            (op == SystemAccessType_RRT && opc1 == 0 && CRm == 2)) then                       // DBGDSAR (MRRC)
            trap = HDCR.TDRA == '1' || HDCR.TDE == '1' || HCR.TGE == '1';
        elsif ((op == SystemAccessType_RT && opc1 == 0 && CRn == 1 && CRm == 0 && opc2 == 4) ||    // DBGOSLAR
               (op == SystemAccessType_RT && opc1 == 0 && CRn == 1 && CRm == 1 && opc2 == 4) ||    // DBGOSLSR
               (op == SystemAccessType_RT && opc1 == 0 && CRn == 1 && CRm == 3 && opc2 == 4) ||    // DBGOSDLR
               (op == SystemAccessType_RT && opc1 == 0 && CRn == 1 && CRm == 4 && opc2 == 4)) then // DBGPRCR
            trap = HDCR.TDOSA == '1' || HDCR.TDE == '1' || HCR.TGE == '1';
        elsif opc1 == 0 && (!Halted() || !(op == SystemAccessType_RT && CRn == 0 && CRm == 5 && opc2 == 0)) then
            trap = HDCR.TDA == '1' || HDCR.TDE == '1' || HCR.TGE == '1';
        elsif opc1 == 1 then
            trap = HCPTR.TTA == '1';
            if HaveEL(EL3) && ELUsingAArch32(EL3) && NSACR.NSTRCDIS == '1' then
                trap = TRUE;
        elsif op == SystemAccessType_RT && opc1 == 7 && CRn == 0 && CRm == 0 && opc2 == 0 then // JIDR
            trap = HCR.TID0 == '1';
    elsif cp_num == 14 && PSTATE.EL == EL2 then
        if opc1 == 1 then
            trap = HCPTR.TTA == '1';
    elsif cp_num == 15 && PSTATE.EL IN {EL0, EL1} then
        if ((op == SystemAccessType_RT && opc1 == 0 && CRn == 1 && CRm == 0 && opc2 == 0) ||     // SCTLR
            (op == SystemAccessType_RT && opc1 == 0 && CRn == 2 && CRm == 0 && opc2 == 0) ||     // TTBR0
            (op == SystemAccessType_RRT && opc1 == 0 && CRm == 2) ||                             // TTBR0 (MRRC/MCCR)
            (op == SystemAccessType_RT && opc1 == 0 && CRn == 2 && CRm == 0 && opc2 == 1) ||     // TTBR1
            (op == SystemAccessType_RRT && opc1 == 1 && CRm == 2) ||                             // TTBR1 (MRRC/MCCR)
            (op == SystemAccessType_RT && opc1 == 0 && CRn == 2 && CRm == 0 && opc2 == 2) ||     // TTBCR
            (op == SystemAccessType_RT && opc1 == 0 && CRn == 2 && CRm == 0 && opc2 == 3) ||     // TTBCR2
            (op == SystemAccessType_RT && opc1 == 0 && CRn == 3 && CRm == 0 && opc2 == 0) ||     // DACR
            (op == SystemAccessType_RT && opc1 == 0 && CRn == 5 && CRm == 0 && opc2 == 0) ||     // DFSR
            (op == SystemAccessType_RT && opc1 == 0 && CRn == 5 && CRm == 0 && opc2 == 1) ||     // IFSR
            (op == SystemAccessType_RT && opc1 == 0 && CRn == 6 && CRm == 0 && opc2 == 0) ||     // DFAR
            (op == SystemAccessType_RT && opc1 == 0 && CRn == 6 && CRm == 0 && opc2 == 2) ||     // IFAR
            (op == SystemAccessType_RT && opc1 == 0 && CRn == 5 && CRm == 1 && opc2 == 0) ||     // ADFSR
            (op == SystemAccessType_RT && opc1 == 0 && CRn == 5 && CRm == 1 && opc2 == 1) ||     // AIFSR
            (op == SystemAccessType_RT && opc1 == 0 && CRn == 10 && CRm == 2 && opc2 == 0) ||    // PRRR/MAIR0
            (op == SystemAccessType_RT && opc1 == 0 && CRn == 10 && CRm == 2 && opc2 == 1) ||    // NMRR/MAIR1
            (op == SystemAccessType_RT && opc1 == 0 && CRn == 10 && CRm == 3 && opc2 == 0) ||    // AMAIR0
            (op == SystemAccessType_RT && opc1 == 0 && CRn == 10 && CRm == 3 && opc2 == 1) ||    // AMAIR1
            (op == SystemAccessType_RT && opc1 == 0 && CRn == 13 && CRm == 0 && opc2 == 1)) then // CONTEXTIDR
            trap = if write then HCR.TVM == '1' else HCR.TRVM == '1';
        elsif op == SystemAccessType_RT && opc1 == 0 && CRn == 8 then // TLBI
            trap = write && HCR.TTLB == '1';
        elsif ((op == SystemAccessType_RT && opc1 == 0 && CRn == 7 && CRm == 6 && opc2 == 2) ||      // DCISW
               (op == SystemAccessType_RT && opc1 == 0 && CRn == 7 && CRm == 10 && opc2 == 2) ||     // DCCSW
               (op == SystemAccessType_RT && opc1 == 0 && CRn == 7 && CRm == 14 && opc2 == 2)) then  // DCCISW
            trap = write && HCR.TSW == '1';
        elsif ((op == SystemAccessType_RT && opc1 == 0 && CRn == 7 && CRm == 6 && opc2 == 1) ||      // DCIMVAC
               (op == SystemAccessType_RT && opc1 == 0 && CRn == 7 && CRm == 10 && opc2 == 1) ||     // DCCMVAC
               (op == SystemAccessType_RT && opc1 == 0 && CRn == 7 && CRm == 14 && opc2 == 1)) then  // DCCIMVAC
            trap = write && HCR.TPC == '1';
        elsif ((op == SystemAccessType_RT && opc1 == 0 && CRn == 7 && CRm == 5 && opc2 == 1) ||      // ICIMVAU
               (op == SystemAccessType_RT && opc1 == 0 && CRn == 7 && CRm == 5 && opc2 == 0) ||      // ICIALLU
               (op == SystemAccessType_RT && opc1 == 0 && CRn == 7 && CRm == 1 && opc2 == 0) ||      // ICIALLUIS
               (op == SystemAccessType_RT && opc1 == 0 && CRn == 7 && CRm == 11 && opc2 == 1)) then  // DCCMVAU
            trap = write && HCR.TPU == '1';
        elsif ((op == SystemAccessType_RT && opc1 == 0 && CRn == 1 && CRm == 0 && opc2 == 1) ||      // ACTLR
               (op == SystemAccessType_RT && opc1 == 0 && CRn == 1 && CRm == 0 && opc2 == 3)) then   // ACTLR2
            trap = HCR.TAC == '1';
        elsif ((op == SystemAccessType_RT && opc1 == 0 && CRn == 0 && CRm == 0 && opc2 == 2) ||      // TCMTR
               (op == SystemAccessType_RT && opc1 == 0 && CRn == 0 && CRm == 0 && opc2 == 3) ||      // TLBTR
               (op == SystemAccessType_RT && opc1 == 0 && CRn == 0 && CRm == 0 && opc2 == 6) ||      // REVIDR
               (op == SystemAccessType_RT && opc1 == 1 && CRn == 0 && CRm == 0 && opc2 == 7)) then   // AIDR
            trap = HCR.TID1 == '1';
        elsif ((op == SystemAccessType_RT && opc1 == 0 && CRn == 0 && CRm == 0 && opc2 == 1) ||      // CTR
               (op == SystemAccessType_RT && opc1 == 1 && CRn == 0 && CRm == 0 && opc2 == 0) ||      // CCSIDR
               (op == SystemAccessType_RT && opc1 == 1 && CRn == 0 && CRm == 0 && opc2 == 2) ||      // CCSIDR2
               (op == SystemAccessType_RT && opc1 == 1 && CRn == 0 && CRm == 0 && opc2 == 1) ||      // CLIDR
               (op == SystemAccessType_RT && opc1 == 2 && CRn == 0 && CRm == 0 && opc2 == 0)) then   // CSSELR
            trap = HCR.TID2 == '1';
        elsif ((op == SystemAccessType_RT && opc1 == 0 && CRn == 0 && CRm == 1) ||                   // ID_*
               (op == SystemAccessType_RT && opc1 == 0 && CRn == 0 && CRm == 2 && opc2 <= 7) ||      // ID_*
               (op == SystemAccessType_RT && opc1 == 0 && CRn == 0 && CRm >= 3 && opc2 <= 1) ||      // Reserved
               (op == SystemAccessType_RT && opc1 == 0 && CRn == 0 && CRm == 3 && opc2 == 2) ||      // Reserved
               (op == SystemAccessType_RT && opc1 == 0 && CRn == 0 && CRm == 5 && opc2 IN {4,5})) then // Reserved
            trap = HCR.TID3 == '1';
        elsif op == SystemAccessType_RT && opc1 == 0 && CRn == 1 && CRm == 0 && opc2 == 2 then  // CPACR
            trap = HCPTR.TCPAC == '1';
        elsif op == SystemAccessType_RT && opc1 == 0 && CRn == 9 && CRm == 12 && opc2 == 0 then // PMCR
            trap = HDCR.TPMCR == '1' || HDCR.TPM == '1';
        elsif ((op == SystemAccessType_RT && opc1 == 0 && CRn == 14 && CRm >= 8) ||             // PMEVCNTR<n>/PMEVTYPER<n>
               (op == SystemAccessType_RT && opc1 == 0 && CRn == 9 && CRm IN {12,13,14}) ||     // PM*
               (op == SystemAccessType_RRT && opc1 == 0 && CRm == 9)) then                      // PMCCNTR (MRRC/MCCR)
            trap = HDCR.TPM == '1';
        elsif op == SystemAccessType_RT && opc1 == 0 && CRn == 14 && CRm == 2 && opc2 IN {0,1,2} then // CNTP_TVAL CNTP_CTL CNTP_CVAL
            trap = CNTHCTL.PL1PCEN == '0';
        elsif op == SystemAccessType_RRT && opc1 == 0 && CRm == 14 then                         // CNTPCT
            trap = CNTHCTL.PL1PCTEN == '0';
        elsif ((op == SystemAccessType_RT && opc1 == 0 && CRn == 1 && CRm == 1 && opc2 == 0) ||    // SCR
               (op == SystemAccessType_RT && opc1 == 0 && CRn == 1 && CRm == 1 && opc2 == 2) ||    // NSACR
               (op == SystemAccessType_RT && opc1 == 0 && CRn == 12 && CRm == 0 && opc2 == 1) ||   // MVBAR
               (op == SystemAccessType_RT && opc1 == 0 && CRn == 1 && CRm == 3 && opc2 == 1) ||    // SDCR
               (op == SystemAccessType_RT && opc1 == 0 && CRn == 7 && CRm == 8 && opc2 >= 4)) then // ATS12NSOxx
            trap = IsSecureEL2Enabled() && PSTATE.EL == EL1 && IsSecure() && ELUsingAArch32(EL1);
    if trap then
        AArch32.AArch32SystemAccessTrap(EL2, instr);

Library pseudocode for aarch32/functions/coproc/AArch32.CheckSystemAccessTraps

// AArch32.CheckSystemAccessTraps()
// ================================
// Check for configurable disables or traps to a higher EL of a System register access.

AArch32.CheckSystemAccessTraps(bits(32) instr)
    if PSTATE.EL == EL0 then
        AArch32.CheckSystemAccessEL1Traps(instr);
    if EL2Enabled() && PSTATE.EL IN {EL0, EL1, EL2} && !IsInHost() then
        AArch32.CheckSystemAccessEL2Traps(instr);
    if HaveEL(EL3) && !ELUsingAArch32(EL3) && PSTATE.EL IN {EL0, EL1, EL2} then
        AArch64.CheckAArch32SystemAccessEL3Traps(instr);

Library pseudocode for aarch32/functions/coproc/AArch32.DecodeSysRegAccess

// AArch32.DecodeSysRegAccess()
// ============================
// Decode an AArch32 System register access instruction into its operands.

(SystemAccessType, integer, integer, integer, integer, integer, boolean) AArch32.DecodeSysRegAccess(bits(32) instr)
    cp_num = UInt(instr<11:8>);

    // Decode the AArch32 System register access instruction
    if instr<31:28> != '1111' && instr<27:24> == '1110' && instr<4> == '1' then // MRC/MCR
        op   = SystemAccessType_RT;
        opc1 = UInt(instr<23:21>);
        opc2 = UInt(instr<7:5>);
        CRn  = UInt(instr<19:16>);
        CRm  = UInt(instr<3:0>);
        write = instr<20> == '0';
    elsif instr<31:28> != '1111' && instr<27:21> == '1100010' then // MRRC/MCRR
        op   = SystemAccessType_RRT;
        opc1 = UInt(instr<7:4>);
        CRm  = UInt(instr<3:0>);
        write = instr<20> == '0';
    elsif instr<31:28> != '1111' && instr<27:25> == '110' then // LDC/STC
        op   = SystemAccessType_DT;
        CRn  = UInt(instr<15:12>);
        write = instr<20> == '0';

    return (op, cp_num, opc1, CRn, CRm, opc2, write);
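As a worked example, the well-known encoding 0xEE110F10 is MRC p15, 0, r0, c1, c0, 0 (an SCTLR read). A C sketch of the SystemAccessType_RT arm of the decode (the struct and function names are invented for illustration):

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct { int cp, opc1, crn, crm, opc2; bool write; } sysreg_rt_t;

    /* Decode an MRC/MCR word, following the first arm of
       AArch32.DecodeSysRegAccess(). Returns false for other encodings. */
    static bool decode_mrc_mcr(uint32_t instr, sysreg_rt_t *out) {
        if ((instr >> 28) == 0xFu) return false;                    /* cond == '1111' */
        if (((instr >> 24) & 0xFu) != 0xEu || !((instr >> 4) & 1u)) return false;
        out->cp    = (int)((instr >> 8)  & 0xFu);
        out->opc1  = (int)((instr >> 21) & 0x7u);
        out->opc2  = (int)((instr >> 5)  & 0x7u);
        out->crn   = (int)((instr >> 16) & 0xFu);
        out->crm   = (int)(instr & 0xFu);
        out->write = ((instr >> 20) & 1u) == 0;                     /* L == 0 is MCR */
        return true;
    }

Decoding 0xEE110F10 with this sketch gives cp == 15, opc1 == 0, CRn == 1, CRm == 0, opc2 == 0 and write == false.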

Library pseudocode for aarch32/functions/coproc/CP14DebugInstrDecode

// Decodes an accepted access to a debug System register in the CP14 encoding space.
// Returns TRUE if the instruction is allocated at the current Exception level, FALSE otherwise.
boolean CP14DebugInstrDecode(bits(32) instr);

Library pseudocode for aarch32/functions/coproc/CP14JazelleInstrDecode

// Decodes an accepted access to a Jazelle System register in the CP14 encoding space.
// Returns TRUE if the instruction is allocated at the current Exception level, FALSE otherwise.
boolean CP14JazelleInstrDecode(bits(32) instr);

Library pseudocode for aarch32/functions/coproc/CP14TraceInstrDecode

// Decodes an accepted access to a trace System register in the CP14 encoding space.
// Returns TRUE if the instruction is allocated at the current Exception level, FALSE otherwise.
boolean CP14TraceInstrDecode(bits(32) instr);

Library pseudocode for aarch32/functions/coproc/CP15InstrDecode

// Decodes an accepted access to a System register in the CP15 encoding space.
// Returns TRUE if the instruction is allocated at the current Exception level, FALSE otherwise.
boolean CP15InstrDecode(bits(32) instr);

Library pseudocode for aarch32/functions/exclusive/AArch32.ExclusiveMonitorsPass

// AArch32.ExclusiveMonitorsPass()
// ===============================
// Return TRUE if the Exclusives monitors for the current PE include all of the addresses
// associated with the virtual address region of size bytes starting at address.
// The immediately following memory write must be to the same addresses.

boolean AArch32.ExclusiveMonitorsPass(bits(32) address, integer size)

    // It is IMPLEMENTATION DEFINED whether the detection of memory aborts happens
    // before or after the check on the local Exclusives monitor. As a result a failure
    // of the local monitor can occur on some implementations even if the memory
    // access would give an memory abort.

    acctype = AccType_ATOMIC;
    iswrite = TRUE;
    aligned = (address == Align(address, size));

    if !aligned then
        secondstage = FALSE;
        AArch32.Abort(address, AArch32.AlignmentFault(acctype, iswrite, secondstage));

    passed = AArch32.IsExclusiveVA(address, ProcessorID(), size);
    if !passed then
        return FALSE;

    memaddrdesc = AArch32.TranslateAddress(address, acctype, iswrite, aligned, size);

    // Check for aborts or debug exceptions
    if IsFault(memaddrdesc) then
        AArch32.Abort(address, memaddrdesc.fault);

    passed = IsExclusiveLocal(memaddrdesc.paddress, ProcessorID(), size);

    if passed then
        ClearExclusiveLocal(ProcessorID());
        if memaddrdesc.memattrs.shareable then
            passed = IsExclusiveGlobal(memaddrdesc.paddress, ProcessorID(), size);

    return passed;

Library pseudocode for aarch32/functions/exclusive/AArch32.IsExclusiveVA

// An optional IMPLEMENTATION DEFINED test for an exclusive access to a virtual
// address region of size bytes starting at address.
//
// It is permitted (but not required) for this function to return FALSE and
// cause a store exclusive to fail if the virtual address region is not
// totally included within the region recorded by MarkExclusiveVA().
//
// It is always safe to return TRUE which will check the physical address only.

boolean AArch32.IsExclusiveVA(bits(32) address, integer processorid, integer size);

Library pseudocode for aarch32/functions/exclusive/AArch32.MarkExclusiveVA

// Optionally record an exclusive access to the virtual address region of size bytes
// starting at address for processorid.

AArch32.MarkExclusiveVA(bits(32) address, integer processorid, integer size);

Library pseudocode for aarch32/functions/exclusive/AArch32.SetExclusiveMonitors

// AArch32.SetExclusiveMonitors()
// ==============================
// Sets the Exclusives monitors for the current PE to record the addresses associated
// with the virtual address region of size bytes starting at address.

AArch32.SetExclusiveMonitors(bits(32) address, integer size)

    acctype = AccType_ATOMIC;
    iswrite = FALSE;
    aligned = (address == Align(address, size));
    memaddrdesc = AArch32.TranslateAddress(address, acctype, iswrite, aligned, size);

    // Check for aborts or debug exceptions
    if IsFault(memaddrdesc) then
        return;

    if memaddrdesc.memattrs.shareable then
        MarkExclusiveGlobal(memaddrdesc.paddress, ProcessorID(), size);

    MarkExclusiveLocal(memaddrdesc.paddress, ProcessorID(), size);

    AArch32.MarkExclusiveVA(address, ProcessorID(), size);

Library pseudocode for aarch32/functions/float/CheckAdvSIMDEnabled

// CheckAdvSIMDEnabled()
// =====================

CheckAdvSIMDEnabled()
    fpexc_check = TRUE;
    advsimd = TRUE;

    AArch32.CheckAdvSIMDOrFPEnabled(fpexc_check, advsimd);
    // Return from CheckAdvSIMDOrFPEnabled() occurs only if Advanced SIMD access is permitted

    // Make temporary copy of D registers
    // _Dclone[] is used as input data for instruction pseudocode
    for i = 0 to 31
        _Dclone[i] = D[i];

    return;

Library pseudocode for aarch32/functions/float/CheckAdvSIMDOrVFPEnabled

// CheckAdvSIMDOrVFPEnabled()
// ==========================

CheckAdvSIMDOrVFPEnabled(boolean include_fpexc_check, boolean advsimd)
    AArch32.CheckAdvSIMDOrFPEnabled(include_fpexc_check, advsimd);
    // Return from CheckAdvSIMDOrFPEnabled() occurs only if VFP access is permitted
    return;

Library pseudocode for aarch32/functions/float/CheckCryptoEnabled32

// CheckCryptoEnabled32()
// ======================

CheckCryptoEnabled32()
    CheckAdvSIMDEnabled();
    // Return from CheckAdvSIMDEnabled() occurs only if access is permitted
    return;

Library pseudocode for aarch32/functions/float/CheckVFPEnabled

// CheckVFPEnabled()
// =================

CheckVFPEnabled(boolean include_fpexc_check)
    advsimd = FALSE;
    AArch32.CheckAdvSIMDOrFPEnabled(include_fpexc_check, advsimd);
    // Return from CheckAdvSIMDOrFPEnabled() occurs only if VFP access is permitted
    return;

Library pseudocode for aarch32/functions/float/FPHalvedSub

// FPHalvedSub()
// =============

bits(N) FPHalvedSub(bits(N) op1, bits(N) op2, FPCRType fpcr)
    assert N IN {16,32,64};
    rounding = FPRoundingMode(fpcr);
    (type1,sign1,value1) = FPUnpack(op1, fpcr);
    (type2,sign2,value2) = FPUnpack(op2, fpcr);
    (done,result) = FPProcessNaNs(type1, type2, op1, op2, fpcr);
    if !done then
        inf1  = (type1 == FPType_Infinity);  inf2  = (type2 == FPType_Infinity);
        zero1 = (type1 == FPType_Zero);      zero2 = (type2 == FPType_Zero);
        if inf1 && inf2 && sign1 == sign2 then
            result = FPDefaultNaN();
            FPProcessException(FPExc_InvalidOp, fpcr);
        elsif (inf1 && sign1 == '0') || (inf2 && sign2 == '1') then
            result = FPInfinity('0');
        elsif (inf1 && sign1 == '1') || (inf2 && sign2 == '0') then
            result = FPInfinity('1');
        elsif zero1 && zero2 && sign1 != sign2 then
            result = FPZero(sign1);
        else
            result_value = (value1 - value2) / 2.0;
            if result_value == 0.0 then   // Sign of exact zero result depends on rounding mode
                result_sign = if rounding == FPRounding_NEGINF then '1' else '0';
                result = FPZero(result_sign);
            else
                result = FPRound(result_value, fpcr);
    return result;

Library pseudocode for aarch32/functions/float/FPRSqrtStep

// FPRSqrtStep()
// =============

bits(N) FPRSqrtStep(bits(N) op1, bits(N) op2)
    assert N IN {16,32};
    FPCRType fpcr = StandardFPSCRValue();
    (type1,sign1,value1) = FPUnpack(op1, fpcr);
    (type2,sign2,value2) = FPUnpack(op2, fpcr);
    (done,result) = FPProcessNaNs(type1, type2, op1, op2, fpcr);
    if !done then
        inf1 = (type1 == FPType_Infinity);  inf2 = (type2 == FPType_Infinity);
        zero1 = (type1 == FPType_Zero);  zero2 = (type2 == FPType_Zero);
        bits(N) product;
        if (inf1 && zero2) || (zero1 && inf2) then
            product = FPZero('0');
        else
            product = FPMul(op1, op2, fpcr);
        bits(N) three = FPThree('0');
        result = FPHalvedSub(three, product, fpcr);
    return result;
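
The value computed here, (3 - op1*op2)/2, is the VRSQRTS correction factor for one Newton-Raphson refinement of a reciprocal-square-root estimate. A minimal Python sketch of the finite-value arithmetic (illustrative only, not Arm pseudocode; it ignores the NaN/infinity/zero special cases handled above, and the function names are mine):

def fp_rsqrt_step(op1, op2):
    return (3.0 - op1 * op2) / 2.0    # FPHalvedSub(three, product)

# One Newton-Raphson refinement of an estimate x of 1/sqrt(d):
d, x = 2.0, 0.7
x = x * fp_rsqrt_step(d * x, x)       # x -> x * (3 - d*x*x)/2, approaching 0.7071...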

Library pseudocode for aarch32/functions/float/FPRecipStep

// FPRecipStep()
// =============

bits(N) FPRecipStep(bits(N) op1, bits(N) op2)
    assert N IN {16,32};
    FPCRType fpcr = StandardFPSCRValue();
    (type1,sign1,value1) = FPUnpack(op1, fpcr);
    (type2,sign2,value2) = FPUnpack(op2, fpcr);
    (done,result) = FPProcessNaNs(type1, type2, op1, op2, fpcr);
    if !done then
        inf1 = (type1 == FPType_Infinity);  inf2 = (type2 == FPType_Infinity);
        zero1 = (type1 == FPType_Zero);  zero2 = (type2 == FPType_Zero);
        bits(N) product;
        if (inf1 && zero2) || (zero1 && inf2) then
            product = FPZero('0');
        else
            product = FPMul(op1, op2, fpcr);
        bits(N) two = FPTwo('0');
        result = FPSub(two, product, fpcr);
    return result;
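
The companion VRECPS factor, (2 - op1*op2), drives Newton-Raphson reciprocal refinement. A sketch under the same simplifications as above:

def fp_recip_step(op1, op2):
    return 2.0 - op1 * op2            # FPSub(two, product)

# One Newton-Raphson refinement of an estimate x of 1/d:
d, x = 3.0, 0.3
x = x * fp_recip_step(d, x)           # x -> x * (2 - d*x), approaching 0.3333...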

Library pseudocode for aarch32/functions/float/StandardFPSCRValue

// StandardFPSCRValue()
// ====================

FPCRType StandardFPSCRValue()
    return '00000' : FPSCR.AHP : '110000' : FPSCR.FZ16 : '0000000000000000000';
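
The concatenation fixes DN and FZ to 1, forwards only AHP and FZ16 from the real FPSCR, and zeroes everything else. A Python sketch of the same layout (bit positions inferred from the concatenation widths: AHP at bit 26, DN at 25, FZ at 24, FZ16 at 19; illustrative only):

def standard_fpscr(ahp, fz16):
    # '00000' : AHP : '110000' : FZ16 : 19 zero bits
    return (ahp << 26) | (1 << 25) | (1 << 24) | (fz16 << 19)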

Library pseudocode for aarch32/functions/memory/AArch32.CheckAlignment

// AArch32.CheckAlignment()
// ========================

boolean AArch32.CheckAlignment(bits(32) address, integer alignment, AccType acctype,
                               boolean iswrite)

    if PSTATE.EL == EL0 && !ELUsingAArch32(S1TranslationRegime()) then
        A = SCTLR[].A;     // use AArch64 register, when higher Exception level is using AArch64
    elsif PSTATE.EL == EL2 then
        A = HSCTLR.A;
    else
        A = SCTLR.A;
    aligned = (address == Align(address, alignment));
    atomic  = acctype IN { AccType_ATOMIC, AccType_ATOMICRW, AccType_ORDEREDATOMIC, AccType_ORDEREDATOMICRW };
    ordered = acctype IN { AccType_ORDERED, AccType_ORDEREDRW, AccType_LIMITEDORDERED, AccType_ORDEREDATOMIC, AccType_ORDEREDATOMICRW };
    vector  = acctype == AccType_VEC;

    // AccType_VEC is used for SIMD element alignment checks only
    check = (atomic || ordered || vector || A == '1');

    if check && !aligned then
        secondstage = FALSE;
        AArch32.Abort(address, AArch32.AlignmentFault(acctype, iswrite, secondstage));

    return aligned;
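
The alignment predicate itself is simple modular arithmetic; the logic around it only decides whether a failure faults. A Python sketch of the predicate, assuming Align(x, y) rounds down to a multiple of y (function names are mine):

def align(x, size):
    return (x // size) * size

def is_aligned(address, alignment):
    return address == align(address, alignment)

assert is_aligned(0x1004, 4) and not is_aligned(0x1006, 4)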

Library pseudocode for aarch32/functions/memory/AArch32.MemSingle

// AArch32.MemSingle[] - non-assignment (read) form
// ================================================
// Perform an atomic, little-endian read of 'size' bytes.

bits(size*8) AArch32.MemSingle[bits(32) address, integer size, AccType acctype, boolean wasaligned]
    assert size IN {1, 2, 4, 8, 16};
    assert address == Align(address, size);

    AddressDescriptor memaddrdesc;
    bits(size*8) value;
    iswrite = FALSE;

    // MMU or MPU
    memaddrdesc = AArch32.TranslateAddress(address, acctype, iswrite, wasaligned, size);

    // Check for aborts or debug exceptions
    if IsFault(memaddrdesc) then
        AArch32.Abort(address, memaddrdesc.fault);

    // Memory array access
    accdesc = CreateAccessDescriptor(acctype);
    if HaveMTEExt() then
        if AccessIsTagChecked(ZeroExtend(address, 64), acctype) then
            bits(4) ptag = TransformTag(ZeroExtend(address, 64));
            if !CheckTag(memaddrdesc, ptag, iswrite) then
                TagCheckFail(ZeroExtend(address, 64), iswrite);
    value = _Mem[memaddrdesc, size, accdesc];

    return value;

// AArch32.MemSingle[] - assignment (write) form
// =============================================
// Perform an atomic, little-endian write of 'size' bytes.

AArch32.MemSingle[bits(32) address, integer size, AccType acctype, boolean wasaligned] = bits(size*8) value
    assert size IN {1, 2, 4, 8, 16};
    assert address == Align(address, size);

    AddressDescriptor memaddrdesc;
    iswrite = TRUE;

    // MMU or MPU
    memaddrdesc = AArch32.TranslateAddress(address, acctype, iswrite, wasaligned, size);

    // Check for aborts or debug exceptions
    if IsFault(memaddrdesc) then
        AArch32.Abort(address, memaddrdesc.fault);

    // Effect on exclusives
    if memaddrdesc.memattrs.shareable then
        ClearExclusiveByAddress(memaddrdesc.paddress, ProcessorID(), size);

    // Memory array access
    accdesc = CreateAccessDescriptor(acctype);
    if HaveMTEExt() then
        if AccessIsTagChecked(ZeroExtend(address, 64), acctype) then
            bits(4) ptag = TransformTag(ZeroExtend(address, 64));
            if !CheckTag(memaddrdesc, ptag, iswrite) then
                TagCheckFail(ZeroExtend(address, 64), iswrite);
    _Mem[memaddrdesc, size, accdesc] = value;
    return;

Library pseudocode for aarch32/functions/memory/AddressWithAllocationTag

// AddressWithAllocationTag()
// ==========================
// Generate a 64-bit value containing a Logical Address Tag from a 64-bit
// virtual address and an Allocation Tag.
// If the extension is disabled, treats the Allocation Tag as '0000'.

bits(64) AddressWithAllocationTag(bits(64) address, bits(4) allocation_tag)
    bits(64) result = address;
    bits(4) tag = allocation_tag - ('000':address<55>);
    result<59:56> = tag;
    return result;

Library pseudocode for aarch32/functions/memory/AllocationTagFromAddress

// AllocationTagFromAddress()
// ==========================
// Generate a Tag from a 64-bit value containing a Logical Address Tag.
// If access to Allocation Tags is disabled, this function returns '0000'.

bits(4) AllocationTagFromAddress(bits(64) tagged_address)
    bits(4) logical_tag = tagged_address<59:56>;
    bits(4) tag = logical_tag + ('000':tagged_address<55>);
    return tag;
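
AddressWithAllocationTag() and AllocationTagFromAddress() are inverses: the address bit<55> term is subtracted when a tag is placed into bits 59:56 and added back when it is extracted, with 4-bit wraparound. A Python sketch of the round trip (illustrative only; function names are mine):

def address_with_allocation_tag(address, allocation_tag):
    delta = (address >> 55) & 1
    tag = (allocation_tag - delta) & 0xF              # bits(4) arithmetic wraps mod 16
    return (address & ~(0xF << 56)) | (tag << 56)

def allocation_tag_from_address(tagged_address):
    logical_tag = (tagged_address >> 56) & 0xF
    delta = (tagged_address >> 55) & 1
    return (logical_tag + delta) & 0xF

addr = 1 << 55                                        # an address with bit 55 set
assert allocation_tag_from_address(address_with_allocation_tag(addr, 0b1010)) == 0b1010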

Library pseudocode for aarch32/functions/memory/CheckTag

// CheckTag()
// ==========
// Performs a Tag Check operation for a memory access and returns
// whether the check passed

boolean CheckTag(AddressDescriptor memaddrdesc, bits(4) ptag, boolean write)
    if memaddrdesc.memattrs.tagged then
        bits(64) paddress = ZeroExtend(memaddrdesc.paddress.address);
        return ptag == MemTag[paddress];
    else
        return TRUE;

Library pseudocode for aarch32/functions/memory/Hint_PreloadData

Hint_PreloadData(bits(32) address);

Library pseudocode for aarch32/functions/memory/Hint_PreloadDataForWrite

Hint_PreloadDataForWrite(bits(32) address);

Library pseudocode for aarch32/functions/memory/Hint_PreloadInstr

Hint_PreloadInstr(bits(32) address);

Library pseudocode for aarch32/functions/memory/MemA

// MemA[] - non-assignment form
// ============================

bits(8*size) MemA[bits(32) address, integer size]
    acctype = AccType_ATOMIC;
    return Mem_with_type[address, size, acctype];

// MemA[] - assignment form
// ========================

MemA[bits(32) address, integer size] = bits(8*size) value
    acctype = AccType_ATOMIC;
    Mem_with_type[address, size, acctype] = value;
    return;

Library pseudocode for aarch32/functions/memory/MemO

// MemO[] - non-assignment form
// ============================

bits(8*size) MemO[bits(32) address, integer size]
    acctype = AccType_ORDERED;
    return Mem_with_type[address, size, acctype];

// MemO[] - assignment form
// ========================

MemO[bits(32) address, integer size] = bits(8*size) value
    acctype = AccType_ORDERED;
    Mem_with_type[address, size, acctype] = value;
    return;

Library pseudocode for aarch32/functions/memory/MemTag

// MemTag[] - non-assignment (read) form
// =====================================
// Load an Allocation Tag from memory.

bits(4) MemTag[bits(64) address]
    AddressDescriptor memaddrdesc;
    bits(4) value;
    iswrite = FALSE;

    memaddrdesc = AArch64.TranslateAddress(address, AccType_NORMAL, iswrite, TRUE, TAG_GRANULE);

    // Check for aborts or debug exceptions
    if IsFault(memaddrdesc) then
        AArch64.Abort(address, memaddrdesc.fault);

    // Return the granule tag if tagging is enabled...
    if AllocationTagAccessIsEnabled() && memaddrdesc.memattrs.tagged then
        return _MemTag[memaddrdesc];
    else
        // ...otherwise read tag as zero.
        return '0000';

// MemTag[] - assignment (write) form
// ==================================
// Store an Allocation Tag to memory.

MemTag[bits(64) address] = bits(4) value
    AddressDescriptor memaddrdesc;
    iswrite = TRUE;

    // Stores of allocation tags must be aligned
    if address != Align(address, TAG_GRANULE) then
        boolean secondstage = FALSE;
        AArch64.Abort(address, AArch64.AlignmentFault(AccType_NORMAL, iswrite, secondstage));

    wasaligned = TRUE;
    memaddrdesc = AArch64.TranslateAddress(address, AccType_NORMAL, iswrite, wasaligned, TAG_GRANULE);

    // Check for aborts or debug exceptions
    if IsFault(memaddrdesc) then
        AArch64.Abort(address, memaddrdesc.fault);

    // Memory array access
    if AllocationTagAccessIsEnabled() && memaddrdesc.memattrs.tagged then
        _MemTag[memaddrdesc] = value;

Library pseudocode for aarch32/functions/memory/MemU

// MemU[] - non-assignment form
// ============================

bits(8*size) MemU[bits(32) address, integer size]
    acctype = AccType_NORMAL;
    return Mem_with_type[address, size, acctype];

// MemU[] - assignment form
// ========================

MemU[bits(32) address, integer size] = bits(8*size) value
    acctype = AccType_NORMAL;
    Mem_with_type[address, size, acctype] = value;
    return;

Library pseudocode for aarch32/functions/memory/MemU_unpriv

// MemU_unpriv[] - non-assignment form
// ===================================

bits(8*size) MemU_unpriv[bits(32) address, integer size]
    acctype = AccType_UNPRIV;
    return Mem_with_type[address, size, acctype];

// MemU_unpriv[] - assignment form
// ===============================

MemU_unpriv[bits(32) address, integer size] = bits(8*size) value
    acctype = AccType_UNPRIV;
    Mem_with_type[address, size, acctype] = value;
    return;

Library pseudocode for aarch32/functions/memory/Mem_with_type

// Mem_with_type[] - non-assignment (read) form
// ============================================
// Perform a read of 'size' bytes. The access byte order is reversed for a big-endian access.
// Instruction fetches would call AArch32.MemSingle directly.

bits(size*8) Mem_with_type[bits(32) address, integer size, AccType acctype]
    assert size IN {1, 2, 4, 8, 16};
    bits(size*8) value;
    boolean iswrite = FALSE;

    aligned = AArch32.CheckAlignment(address, size, acctype, iswrite);
    if !aligned then
        assert size > 1;
        value<7:0> = AArch32.MemSingle[address, 1, acctype, aligned];

        // For subsequent bytes it is CONSTRAINED UNPREDICTABLE whether an unaligned Device memory
        // access will generate an Alignment Fault, as to get this far means the first byte did
        // not, so we must be changing to a new translation page.
        c = ConstrainUnpredictable(Unpredictable_DEVPAGE2);
        assert c IN {Constraint_FAULT, Constraint_NONE};
        if c == Constraint_NONE then aligned = TRUE;

        for i = 1 to size-1
            value<8*i+7:8*i> = AArch32.MemSingle[address+i, 1, acctype, aligned];
    else
        value = AArch32.MemSingle[address, size, acctype, aligned];

    if BigEndian() then
        value = BigEndianReverse(value);
    return value;

// Mem_with_type[] - assignment (write) form
// =========================================
// Perform a write of 'size' bytes. The byte order is reversed for a big-endian access.

Mem_with_type[bits(32) address, integer size, AccType acctype] = bits(size*8) value
    boolean iswrite = TRUE;

    if BigEndian() then
        value = BigEndianReverse(value);

    aligned = AArch32.CheckAlignment(address, size, acctype, iswrite);

    if !aligned then
        assert size > 1;
        AArch32.MemSingle[address, 1, acctype, aligned] = value<7:0>;

        // For subsequent bytes it is CONSTRAINED UNPREDICTABLE whether an unaligned Device memory
        // access will generate an Alignment Fault, as to get this far means the first byte did
        // not, so we must be changing to a new translation page.
        c = ConstrainUnpredictable(Unpredictable_DEVPAGE2);
        assert c IN {Constraint_FAULT, Constraint_NONE};
        if c == Constraint_NONE then aligned = TRUE;

        for i = 1 to size-1
            AArch32.MemSingle[address+i, 1, acctype, aligned] = value<8*i+7:8*i>;
    else
        AArch32.MemSingle[address, size, acctype, aligned] = value;
    return;
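
Only the byte order changes for big-endian accesses; the underlying AArch32.MemSingle accesses are always little-endian. A one-line Python sketch of what BigEndianReverse does to an integer value of 'size' bytes (illustrative only):

def big_endian_reverse(value, size):
    return int.from_bytes(value.to_bytes(size, 'little'), 'big')

assert big_endian_reverse(0x11223344, 4) == 0x44332211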

Library pseudocode for aarch32/functions/memory/TransformTag

// TransformTag()
// ==============
// Apply tag transformation rules.

bits(4) TransformTag(bits(64) vaddr)
    bits(4) vtag = vaddr<59:56>;
    bits(4) tagdelta = ZeroExtend(vaddr<55>);
    bits(4) ptag = vtag + tagdelta;
    return ptag;

Library pseudocode for aarch32/functions/memory/boolean

// boolean AccessIsTagChecked()
// ============================
// TRUE if a given access is tag-checked, FALSE otherwise.

boolean AccessIsTagChecked(bits(64) vaddr, AccType acctype)
    if PSTATE.M<4> == '1' then
        return FALSE;

    if EffectiveTBI(vaddr, FALSE, PSTATE.EL) == '0' then
        return FALSE;

    if EffectiveTCMA(vaddr, PSTATE.EL) == '1' && (vaddr<59:55> == '00000' || vaddr<59:55> == '11111') then
        return FALSE;

    if !AllocationTagAccessIsEnabled() then
        return FALSE;

    if acctype IN {AccType_IFETCH, AccType_PTW} then
        return FALSE;

    if acctype == AccType_NV2REGISTER then
        return FALSE;

    if PSTATE.TCO == '1' then
        return FALSE;

    if IsNonTagCheckedInstruction() then
        return FALSE;

    return TRUE;

Library pseudocode for aarch32/functions/ras/AArch32.ESBOperation

// AArch32.ESBOperation()
// ======================
// Perform the AArch32 ESB operation for ESB executed in AArch32 state

AArch32.ESBOperation()

    // Check if routed to AArch64 state
    route_to_aarch64 = PSTATE.EL == EL0 && !ELUsingAArch32(EL1);
    if !route_to_aarch64 && EL2Enabled() && !ELUsingAArch32(EL2) then
        route_to_aarch64 = HCR_EL2.TGE == '1' || HCR_EL2.AMO == '1';
    if !route_to_aarch64 && HaveEL(EL3) && !ELUsingAArch32(EL3) then
        route_to_aarch64 = SCR_EL3.EA == '1';

    if route_to_aarch64 then
        AArch64.ESBOperation();
        return;

    route_to_monitor = HaveEL(EL3) && ELUsingAArch32(EL3) && SCR.EA == '1';
    route_to_hyp = EL2Enabled() && PSTATE.EL IN {EL0,EL1} && (HCR.TGE == '1' || HCR.AMO == '1');

    if route_to_monitor then
        target = M32_Monitor;
    elsif route_to_hyp || PSTATE.M == M32_Hyp then
        target = M32_Hyp;
    else
        target = M32_Abort;

    if IsSecure() then
        mask_active = TRUE;
    elsif target == M32_Monitor then
        mask_active = SCR.AW == '1' && (!HaveEL(EL2) || (HCR.TGE == '0' && HCR.AMO == '0'));
    else
        mask_active = target == M32_Abort || PSTATE.M == M32_Hyp;

    mask_set = PSTATE.A == '1';
    (-, el) = ELFromM32(target);
    intdis = Halted() || ExternalDebugInterruptsDisabled(el);
    masked = intdis || (mask_active && mask_set);

    // Check for a masked Physical SError pending
    if IsPhysicalSErrorPending() && masked then
        syndrome32 = AArch32.PhysicalSErrorSyndrome();
        DISR = AArch32.ReportDeferredSError(syndrome32.AET, syndrome32.ExT);
        ClearPendingPhysicalSError();

    return;

Library pseudocode for aarch32/functions/ras/AArch32.PhysicalSErrorSyndrome

// Return the SError syndrome
AArch32.SErrorSyndrome AArch32.PhysicalSErrorSyndrome();

Library pseudocode for aarch32/functions/ras/AArch32.ReportDeferredSError

// AArch32.ReportDeferredSError()
// ==============================
// Return deferred SError syndrome

bits(32) AArch32.ReportDeferredSError(bits(2) AET, bit ExT)
    bits(32) target;
    target<31> = '1';                       // A
    syndrome = Zeros(16);
    if PSTATE.EL == EL2 then
        syndrome<11:10> = AET;              // AET
        syndrome<9>     = ExT;              // EA
        syndrome<5:0>   = '010001';         // DFSC
    else
        syndrome<15:14> = AET;              // AET
        syndrome<12>    = ExT;              // ExT
        syndrome<9>     = TTBCR.EAE;        // LPAE
        if TTBCR.EAE == '1' then            // Long-descriptor format
            syndrome<5:0> = '010001';       // STATUS
        else                                // Short-descriptor format
            syndrome<10,3:0> = '10110';     // FS
    if HaveAnyAArch64() then
        target<24:0> = ZeroExtend(syndrome);    // Any RES0 fields must be set to zero
    else
        target<15:0> = syndrome;
    return target;
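
For the EL2 case the fields pack into a DFSR-like syndrome word. A Python sketch of that one leg only (illustrative, not Arm pseudocode; the non-EL2 leg and the HaveAnyAArch64() width handling are omitted):

def report_deferred_serror_el2(aet, ext):
    target = 1 << 31                                  # the 'A' bit
    syndrome = (aet << 10) | (ext << 9) | 0b010001    # AET, EA, DFSC
    return target | syndrome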

Library pseudocode for aarch32/functions/ras/AArch32.SErrorSyndrome

type AArch32.SErrorSyndrome is (
    bits(2) AET,
    bit ExT
)

Library pseudocode for aarch32/functions/ras/AArch32.vESBOperation

// AArch32.vESBOperation()
// =======================
// Perform the ESB operation for virtual SError interrupts executed in AArch32 state

AArch32.vESBOperation()
    assert EL2Enabled() && PSTATE.EL IN {EL0,EL1};

    // Check for EL2 using AArch64 state
    if !ELUsingAArch32(EL2) then
        AArch64.vESBOperation();
        return;

    // If physical SError interrupts are routed to Hyp mode, and TGE is not set, then a
    // virtual SError interrupt might be pending
    vSEI_enabled = HCR.TGE == '0' && HCR.AMO == '1';
    vSEI_pending = vSEI_enabled && HCR.VA == '1';
    vintdis      = Halted() || ExternalDebugInterruptsDisabled(EL1);
    vmasked      = vintdis || PSTATE.A == '1';

    // Check for a masked virtual SError pending
    if vSEI_pending && vmasked then
        VDISR = AArch32.ReportDeferredSError(VDFSR<15:14>, VDFSR<12>);
        HCR.VA = '0';                       // Clear pending virtual SError

    return;

Library pseudocode for aarch32/functions/registers/AArch32.ResetGeneralRegisters

// AArch32.ResetGeneralRegisters()
// ===============================

AArch32.ResetGeneralRegisters()

    for i = 0 to 7
        R[i] = bits(32) UNKNOWN;
    for i = 8 to 12
        Rmode[i, M32_User] = bits(32) UNKNOWN;
        Rmode[i, M32_FIQ] = bits(32) UNKNOWN;
    if HaveEL(EL2) then Rmode[13, M32_Hyp] = bits(32) UNKNOWN;    // No R14_hyp
    for i = 13 to 14
        Rmode[i, M32_User] = bits(32) UNKNOWN;
        Rmode[i, M32_FIQ] = bits(32) UNKNOWN;
        Rmode[i, M32_IRQ] = bits(32) UNKNOWN;
        Rmode[i, M32_Svc] = bits(32) UNKNOWN;
        Rmode[i, M32_Abort] = bits(32) UNKNOWN;
        Rmode[i, M32_Undef] = bits(32) UNKNOWN;
        if HaveEL(EL3) then Rmode[i, M32_Monitor] = bits(32) UNKNOWN;

    return;

Library pseudocode for aarch32/functions/registers/AArch32.ResetSIMDFPRegisters

// AArch32.ResetSIMDFPRegisters()
// ==============================

AArch32.ResetSIMDFPRegisters()

    for i = 0 to 15
        Q[i] = bits(128) UNKNOWN;

    return;

Library pseudocode for aarch32/functions/registers/AArch32.ResetSpecialRegisters

// AArch32.ResetSpecialRegisters()
// ===============================

AArch32.ResetSpecialRegisters()

    // AArch32 special registers
    SPSR_fiq = bits(32) UNKNOWN;
    SPSR_irq = bits(32) UNKNOWN;
    SPSR_svc = bits(32) UNKNOWN;
    SPSR_abt = bits(32) UNKNOWN;
    SPSR_und = bits(32) UNKNOWN;
    if HaveEL(EL2) then
        SPSR_hyp = bits(32) UNKNOWN;
        ELR_hyp = bits(32) UNKNOWN;
    if HaveEL(EL3) then
        SPSR_mon = bits(32) UNKNOWN;

    // External debug special registers
    DLR = bits(32) UNKNOWN;
    DSPSR = bits(32) UNKNOWN;

    return;

Library pseudocode for aarch32/functions/registers/AArch32.ResetSystemRegisters

AArch32.ResetSystemRegisters(boolean cold_reset);

Library pseudocode for aarch32/functions/registers/ALUExceptionReturn

// ALUExceptionReturn()
// ====================

ALUExceptionReturn(bits(32) address)
    if PSTATE.EL == EL2 then
        UNDEFINED;
    elsif PSTATE.M IN {M32_User,M32_System} then
        UNPREDICTABLE;          // UNDEFINED or NOP
    else
        AArch32.ExceptionReturn(address, SPSR[]);

Library pseudocode for aarch32/functions/registers/ALUWritePC

// ALUWritePC()
// ============

ALUWritePC(bits(32) address)
    if CurrentInstrSet() == InstrSet_A32 then
        BXWritePC(address, BranchType_INDIR);
    else
        BranchWritePC(address, BranchType_INDIR);

Library pseudocode for aarch32/functions/registers/BXWritePC

// BXWritePC()
// ===========

BXWritePC(bits(32) address, BranchType branch_type)
    if address<0> == '1' then
        SelectInstrSet(InstrSet_T32);
        address<0> = '0';
    else
        SelectInstrSet(InstrSet_A32);
        // For branches to an unaligned PC counter in A32 state, the processor takes the branch
        // and does one of:
        // * Forces the address to be aligned
        // * Leaves the PC unaligned, meaning the target generates a PC Alignment fault.
        if address<1> == '1' && ConstrainUnpredictableBool(Unpredictable_A32FORCEALIGNPC) then
            address<1> = '0';
    BranchTo(address, branch_type);
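
Bit 0 of the target address is the interworking bit: it selects the T32 instruction set and is then cleared rather than branched to. A Python sketch of that selection rule (illustrative only; the function name is mine):

def bx_target(address):
    if address & 1:
        return ('T32', address & ~1)      # bit 0 consumed, not part of the branch target
    return ('A32', address)

assert bx_target(0x8001) == ('T32', 0x8000)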

Library pseudocode for aarch32/functions/registers/BranchWritePC

// BranchWritePC()
// ===============

BranchWritePC(bits(32) address, BranchType branch_type)
    if CurrentInstrSet() == InstrSet_A32 then
        address<1:0> = '00';
    else
        address<0> = '0';
    BranchTo(address, branch_type);

Library pseudocode for aarch32/functions/registers/QinD

// Qin[] - non-assignment form // =========================== // D[] - non-assignment form // ========================= bits(128)bits(64) Qin[integer n] assert n >= 0 && n <= 15; returnD[integer n] assert n >= 0 && n <= 31; base = (n MOD 2) * 64; return _V[n DIV 2]<base+63:base>; // D[] - assignment form // ===================== Din[2*n+1]:Din[2*n];D[integer n] = bits(64) value assert n >= 0 && n <= 31; base = (n MOD 2) * 64; _V[n DIV 2]<base+63:base> = value; return;
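
D[n] is a 64-bit view of half of the 128-bit vector register _V[n DIV 2]; S[] below applies the same overlay with 32-bit lanes at (n MOD 4) * 32. A Python sketch of the lane arithmetic (illustrative only; function names are mine):

def d_read(V, n):
    base = (n % 2) * 64
    return (V[n // 2] >> base) & ((1 << 64) - 1)

def d_write(V, n, value):
    base = (n % 2) * 64
    mask = ((1 << 64) - 1) << base
    V[n // 2] = (V[n // 2] & ~mask) | ((value & ((1 << 64) - 1)) << base)

V = [0] * 16
d_write(V, 3, 0xDEADBEEF)                 # D3 is the upper half of _V[1]
assert d_read(V, 3) == 0xDEADBEEF and d_read(V, 2) == 0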

Library pseudocode for aarch32/functions/registers/RDin

// R[] - assignment form // =====================// Din[] - non-assignment form // =========================== bits(64) R[integer n] = bits(32) valueDin[integer n] assert n >= 0 && n <= 31; return _Dclone[n]; Rmode[n, PSTATE.M] = value; return; // R[] - non-assignment form // ========================= bits(32) R[integer n] if n == 15 then offset = (if CurrentInstrSet() == InstrSet_A32 then 8 else 4); return _PC<31:0> + offset; else return Rmode[n, PSTATE.M];

Library pseudocode for aarch32/functions/registers/RBankSelectLR

// RBankSelect() // ============= integer// LR - assignment form // ==================== RBankSelect(bits(5) mode, integer usr, integer fiq, integer irq, integer svc, integer abt, integer und, integer hyp) case mode of whenLR = bits(32) value M32_UserR result = usr; // User mode when[14] = value; return; // LR - non-assignment form // ======================== bits(32) LR return M32_FIQR result = fiq; // FIQ mode when M32_IRQ result = irq; // IRQ mode when M32_Svc result = svc; // Supervisor mode when M32_Abort result = abt; // Abort mode when M32_Hyp result = hyp; // Hyp mode when M32_Undef result = und; // Undefined mode when M32_System result = usr; // System mode uses User mode registers otherwise Unreachable(); // Monitor mode return result;[14];

Library pseudocode for aarch32/functions/registers/RmodeLoadWritePC

// Rmode[] - non-assignment form // ============================= bits(32)// LoadWritePC() // ============= Rmode[integer n, bits(5) mode] assert n >= 0 && n <= 14; // Check for attempted use of Monitor mode in Non-secure state. if !LoadWritePC(bits(32) address)IsSecureBXWritePC() then assert mode !=(address, M32_MonitorBranchType_INDIR; assert !BadMode(mode); if mode == M32_Monitor then if n == 13 then return SP_mon; elsif n == 14 then return LR_mon; else return _R[n]<31:0>; else return _R[LookUpRIndex(n, mode)]<31:0>; // Rmode[] - assignment form // ========================= Rmode[integer n, bits(5) mode] = bits(32) value assert n >= 0 && n <= 14; // Check for attempted use of Monitor mode in Non-secure state. if !IsSecure() then assert mode != M32_Monitor; assert !BadMode(mode); if mode == M32_Monitor then if n == 13 then SP_mon = value; elsif n == 14 then LR_mon = value; else _R[n]<31:0> = value; else // It is CONSTRAINED UNPREDICTABLE whether the upper 32 bits of the X // register are unchanged or set to zero. This is also tested for on // exception entry, as this applies to all AArch32 registers. if !HighestELUsingAArch32() && ConstrainUnpredictableBool(Unpredictable_ZEROUPPER) then _R[LookUpRIndex(n, mode)] = ZeroExtend(value); else _R[LookUpRIndex(n, mode)]<31:0> = value; return;);

Library pseudocode for aarch32/functions/registers/SLookUpRIndex

// S[] - non-assignment form // ========================= // LookUpRIndex() // ============== bits(32)integer S[integer n] assert n >= 0 && n <= 31; base = (n MOD 4) * 32; return _V[n DIV 4]<base+31:base>; LookUpRIndex(integer n, bits(5) mode) assert n >= 0 && n <= 14; // S[] - assignment form // ===================== case n of // Select index by mode: usr fiq irq svc abt und hyp when 8 result = (mode, 8, 24, 8, 8, 8, 8, 8); when 9 result = RBankSelect(mode, 9, 25, 9, 9, 9, 9, 9); when 10 result = RBankSelect(mode, 10, 26, 10, 10, 10, 10, 10); when 11 result = RBankSelect(mode, 11, 27, 11, 11, 11, 11, 11); when 12 result = RBankSelect(mode, 12, 28, 12, 12, 12, 12, 12); when 13 result = RBankSelect(mode, 13, 29, 17, 19, 21, 23, 15); when 14 result = RBankSelectS[integer n] = bits(32) value assert n >= 0 && n <= 31; base = (n MOD 4) * 32; _V[n DIV 4]<base+31:base> = value; return;(mode, 14, 30, 16, 18, 20, 22, 14); otherwise result = n; return result;

Library pseudocode for aarch32/functions/registers/SPMonitor_mode_registers

// SP - assignment form // ====================bits(32) SP_mon; bits(32) LR_mon; SP = bits(32) value R[13] = value; return; // SP - non-assignment form // ======================== bits(32) SP return R[13];

Library pseudocode for aarch32/functions/registers/_DclonePC

array bits(64) _Dclone[0..31];// PC - non-assignment form // ======================== bits(32) PC returnR[15]; // This includes the offset from AArch32 state

Library pseudocode for aarch32/functions/systemregisters/AArch32.ExceptionReturnPCStoreValue

// AArch32.ExceptionReturn() // =========================// PCStoreValue() // ============== bits(32) AArch32.ExceptionReturn(bits(32) new_pc, bits(32) spsr)PCStoreValue() // This function returns the PC value. On architecture versions before ARMv7, it // is permitted to instead return PC+4, provided it does so consistently. It is // used only to describe A32 instructions, so it returns the address of the current // instruction plus 8 (normally) or 12 (when the alternative is permitted). return PC; SynchronizeContext(); // Attempts to change to an illegal mode or state will invoke the Illegal Execution state // mechanism SetPSTATEFromPSR(spsr); ClearExclusiveLocal(ProcessorID()); SendEventLocal(); if PSTATE.IL == '1' then // If the exception return is illegal, PC[1:0] are UNKNOWN new_pc<1:0> = bits(2) UNKNOWN; else // LR[1:0] or LR[0] are treated as being 0, depending on the target instruction set state if PSTATE.T == '1' then new_pc<0> = '0'; // T32 else new_pc<1:0> = '00'; // A32 BranchTo(new_pc, BranchType_ERET);

Library pseudocode for aarch32/functions/systemregisters/AArch32.ExecutingATS1xPInstrQ

// AArch32.ExecutingATS1xPInstr() // ============================== // Return TRUE if current instruction is AT S1CPR/WP // Q[] - non-assignment form // ========================= booleanbits(128) AArch32.ExecutingATS1xPInstr() if !Q[integer n] assert n >= 0 && n <= 15; return _V[n]; // Q[] - assignment form // =====================HavePrivATExt() then return FALSE; instr = ThisInstr(); if instr<24+:4> == '1110' && instr<8+:4> == '1110' then op1 = instr<21+:3>; CRn = instr<16+:4>; CRm = instr<0+:4>; op2 = instr<5+:3>; return (op1 == '000' && CRn == '0111' && CRm == '1001' && op2 IN {'000','001'}); else return FALSE;Q[integer n] = bits(128) value assert n >= 0 && n <= 15; _V[n] = value; return;

Library pseudocode for aarch32/functions/systemregisters/AArch32.ExecutingCP10or11InstrQin

// AArch32.ExecutingCP10or11Instr() // ================================ // Qin[] - non-assignment form // =========================== booleanbits(128) AArch32.ExecutingCP10or11Instr() instr =Qin[integer n] assert n >= 0 && n <= 15; return ThisInstrDin(); instr_set =[2*n+1]: CurrentInstrSetDin(); assert instr_set IN {InstrSet_A32, InstrSet_T32}; if instr_set == InstrSet_A32 then return ((instr<27:24> == '1110' || instr<27:25> == '110') && instr<11:8> == '101x'); else // InstrSet_T32 return (instr<31:28> == '111x' && (instr<27:24> == '1110' || instr<27:25> == '110') && instr<11:8> == '101x');[2*n];

Library pseudocode for aarch32/functions/systemregisters/AArch32.ExecutingLSMInstrR

// AArch32.ExecutingLSMInstr() // =========================== // Returns TRUE if processor is executing a Load/Store Multiple instruction boolean// R[] - assignment form // ===================== AArch32.ExecutingLSMInstr() instr =R[integer n] = bits(32) value ThisInstrRmode(); instr_set =[n, PSTATE.M] = value; return; // R[] - non-assignment form // ========================= bits(32) R[integer n] if n == 15 then offset = (if CurrentInstrSet(); assert instr_set IN {() ==InstrSet_A32,then 8 else 4); return _PC<31:0> + offset; else return InstrSet_T32Rmode}; if instr_set == InstrSet_A32 then return (instr<28+:4> != '1111' && instr<25+:3> == '100'); else // InstrSet_T32 if ThisInstrLength() == 16 then return (instr<12+:4> == '1100'); else return (instr<25+:7> == '1110100' && instr<22> == '0');[n, PSTATE.M];
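
Reading R15 returns the current instruction's address plus the pipeline offset of the active instruction set. A one-line Python sketch of that rule (illustrative only):

def r15_read(pc, a32):
    # A32 reads back the instruction address plus 8; T32, plus 4
    return pc + (8 if a32 else 4)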

Library pseudocode for aarch32/functions/systemregisters/AArch32.ITAdvanceRBankSelect

// AArch32.ITAdvance() // ===================// RBankSelect() // ============= integer AArch32.ITAdvance() if PSTATE.IT<2:0> == '000' then PSTATE.IT = '00000000'; else PSTATE.IT<4:0> =RBankSelect(bits(5) mode, integer usr, integer fiq, integer irq, integer svc, integer abt, integer und, integer hyp) case mode of when result = usr; // User mode when M32_FIQ result = fiq; // FIQ mode when M32_IRQ result = irq; // IRQ mode when M32_Svc result = svc; // Supervisor mode when M32_Abort result = abt; // Abort mode when M32_Hyp result = hyp; // Hyp mode when M32_Undef result = und; // Undefined mode when M32_System result = usr; // System mode uses User mode registers otherwise UnreachableLSLM32_User(PSTATE.IT<4:0>, 1); return;(); // Monitor mode return result;
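
Together with LookUpRIndex() above, this maps an (n, mode) pair onto the flat _R array; only R13/R14 (and R8-R12 for FIQ) are banked. A Python sketch of the R13 (SP) row of that mapping, with indices copied from LookUpRIndex (illustrative only; System mode shares the User-mode register):

SP_INDEX = {'usr': 13, 'fiq': 29, 'irq': 17, 'svc': 19,
            'abt': 21, 'und': 23, 'hyp': 15, 'sys': 13}

def sp_index(mode):
    return SP_INDEX[mode]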

Library pseudocode for aarch32/functions/systemregisters/AArch32.SysRegReadRmode

// Read from a 32-bit AArch32 System register and return the register's contents. // Rmode[] - non-assignment form // ============================= bits(32) AArch32.SysRegRead(integer cp_num, bits(32) instr);Rmode[integer n, bits(5) mode] assert n >= 0 && n <= 14; // Check for attempted use of Monitor mode in Non-secure state. if !IsSecure() then assert mode != M32_Monitor; assert !BadMode(mode); if mode == M32_Monitor then if n == 13 then return SP_mon; elsif n == 14 then return LR_mon; else return _R[n]<31:0>; else return _R[LookUpRIndex(n, mode)]<31:0>; // Rmode[] - assignment form // ========================= Rmode[integer n, bits(5) mode] = bits(32) value assert n >= 0 && n <= 14; // Check for attempted use of Monitor mode in Non-secure state. if !IsSecure() then assert mode != M32_Monitor; assert !BadMode(mode); if mode == M32_Monitor then if n == 13 then SP_mon = value; elsif n == 14 then LR_mon = value; else _R[n]<31:0> = value; else // It is CONSTRAINED UNPREDICTABLE whether the upper 32 bits of the X // register are unchanged or set to zero. This is also tested for on // exception entry, as this applies to all AArch32 registers. if !HighestELUsingAArch32() && ConstrainUnpredictableBool(Unpredictable_ZEROUPPER) then _R[LookUpRIndex(n, mode)] = ZeroExtend(value); else _R[LookUpRIndex(n, mode)]<31:0> = value; return;

Library pseudocode for aarch32/functions/systemregisters/AArch32.SysRegRead64S

// Read from a 64-bit AArch32 System register and return the register's contents. bits(64)// S[] - non-assignment form // ========================= bits(32) AArch32.SysRegRead64(integer cp_num, bits(32) instr);S[integer n] assert n >= 0 && n <= 31; base = (n MOD 4) * 32; return _V[n DIV 4]<base+31:base>; // S[] - assignment form // =====================S[integer n] = bits(32) value assert n >= 0 && n <= 31; base = (n MOD 4) * 32; _V[n DIV 4]<base+31:base> = value; return;

Library pseudocode for aarch32/functions/systemregisters/AArch32.SysRegReadCanWriteAPSRSP

// AArch32.SysRegReadCanWriteAPSR() // ================================ // Determines whether the AArch32 System register read instruction can write to APSR flags. boolean// SP - assignment form // ==================== AArch32.SysRegReadCanWriteAPSR(integer cp_num, bits(32) instr) assertSP = bits(32) value UsingAArch32R(); assert (cp_num IN {14,15}); assert cp_num ==[13] = value; return; // SP - non-assignment form // ======================== bits(32) SP return UIntR(instr<11:8>); opc1 = UInt(instr<23:21>); opc2 = UInt(instr<7:5>); CRn = UInt(instr<19:16>); CRm = UInt(instr<3:0>); if cp_num == 14 && opc1 == 0 && CRn == 0 && CRm == 1 && opc2 == 0 then // DBGDSCRint return TRUE; return FALSE;[13];

Library pseudocode for aarch32/functions/registers/_Dclone

array bits(64) _Dclone[0..31];

Library pseudocode for aarch32/functions/system/AArch32.ExceptionReturn

// AArch32.ExceptionReturn()
// =========================

AArch32.ExceptionReturn(bits(32) new_pc, bits(32) spsr)
    SynchronizeContext();

    // Attempts to change to an illegal mode or state will invoke the Illegal Execution state
    // mechanism
    SetPSTATEFromPSR(spsr);

    ClearExclusiveLocal(ProcessorID());
    SendEventLocal();

    if PSTATE.IL == '1' then
        // If the exception return is illegal, PC[1:0] are UNKNOWN
        new_pc<1:0> = bits(2) UNKNOWN;
    else
        // LR[1:0] or LR[0] are treated as being 0, depending on the target instruction set state
        if PSTATE.T == '1' then
            new_pc<0> = '0';                 // T32
        else
            new_pc<1:0> = '00';              // A32

    BranchTo(new_pc, BranchType_ERET);

Library pseudocode for aarch32/functions/system/AArch32.ExecutingATS1xPInstr

// AArch32.ExecutingATS1xPInstr()
// ==============================
// Return TRUE if current instruction is AT S1CPR/WP

boolean AArch32.ExecutingATS1xPInstr()
    if !HavePrivATExt() then return FALSE;

    instr = ThisInstr();
    if instr<24+:4> == '1110' && instr<8+:4> == '1110' then
        op1 = instr<21+:3>;
        CRn = instr<16+:4>;
        CRm = instr<0+:4>;
        op2 = instr<5+:3>;
        return (op1 == '000' && CRn == '0111' && CRm == '1001' && op2 IN {'000','001'});
    else
        return FALSE;

Library pseudocode for aarch32/functions/system/AArch32.ExecutingCP10or11Instr

// AArch32.ExecutingCP10or11Instr()
// ================================

boolean AArch32.ExecutingCP10or11Instr()
    instr = ThisInstr();
    instr_set = CurrentInstrSet();
    assert instr_set IN {InstrSet_A32, InstrSet_T32};

    if instr_set == InstrSet_A32 then
        return ((instr<27:24> == '1110' || instr<27:25> == '110') && instr<11:8> == '101x');
    else // InstrSet_T32
        return (instr<31:28> == '111x' && (instr<27:24> == '1110' || instr<27:25> == '110') && instr<11:8> == '101x');

Library pseudocode for aarch32/functions/system/AArch32.ExecutingLSMInstr

// AArch32.ExecutingLSMInstr()
// ===========================
// Returns TRUE if processor is executing a Load/Store Multiple instruction

boolean AArch32.ExecutingLSMInstr()
    instr = ThisInstr();
    instr_set = CurrentInstrSet();
    assert instr_set IN {InstrSet_A32, InstrSet_T32};

    if instr_set == InstrSet_A32 then
        return (instr<28+:4> != '1111' && instr<25+:3> == '100');
    else // InstrSet_T32
        if ThisInstrLength() == 16 then
            return (instr<12+:4> == '1100');
        else
            return (instr<25+:7> == '1110100' && instr<22> == '0');

Library pseudocode for aarch32/functions/system/AArch32.ITAdvance

// AArch32.ITAdvance()
// ===================

AArch32.ITAdvance()
    if PSTATE.IT<2:0> == '000' then
        PSTATE.IT = '00000000';
    else
        PSTATE.IT<4:0> = LSL(PSTATE.IT<4:0>, 1);
    return;
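
The IT state is consumed one instruction at a time by shifting the low five bits left; once the remaining condition bits are all zero the IT block is over. A Python sketch, assuming an 8-bit IT value (illustrative only):

def it_advance(it):
    if it & 0b111 == 0:
        return 0                                      # last instruction: leave the IT block
    return (it & 0b11100000) | ((it << 1) & 0b11111)  # LSL of the low five bits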

Library pseudocode for aarch32/functions/system/AArch32.SysRegRead

// Read from a 32-bit AArch32 System register and return the register's contents.
bits(32) AArch32.SysRegRead(integer cp_num, bits(32) instr);

Library pseudocode for aarch32/functions/system/AArch32.SysRegRead64

// Read from a 64-bit AArch32 System register and return the register's contents.
bits(64) AArch32.SysRegRead64(integer cp_num, bits(32) instr);

Library pseudocode for aarch32/functions/system/AArch32.SysRegReadCanWriteAPSR

// AArch32.SysRegReadCanWriteAPSR()
// ================================
// Determines whether the AArch32 System register read instruction can write to APSR flags.

boolean AArch32.SysRegReadCanWriteAPSR(integer cp_num, bits(32) instr)
    assert UsingAArch32();
    assert (cp_num IN {14,15});
    assert cp_num == UInt(instr<11:8>);

    opc1 = UInt(instr<23:21>);
    opc2 = UInt(instr<7:5>);
    CRn  = UInt(instr<19:16>);
    CRm  = UInt(instr<3:0>);

    if cp_num == 14 && opc1 == 0 && CRn == 0 && CRm == 1 && opc2 == 0 then    // DBGDSCRint
        return TRUE;

    return FALSE;
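
The field positions follow the MRC encoding used by the slices above. A Python sketch of the same extraction plus the single DBGDSCRint match it tests for (illustrative only; function names are mine):

def mrc_fields(instr):
    cp   = (instr >> 8) & 0xF
    opc1 = (instr >> 21) & 0x7
    opc2 = (instr >> 5) & 0x7
    crn  = (instr >> 16) & 0xF
    crm  = instr & 0xF
    return cp, opc1, crn, crm, opc2

def can_write_apsr(instr):
    cp, opc1, crn, crm, opc2 = mrc_fields(instr)
    return cp == 14 and opc1 == 0 and crn == 0 and crm == 1 and opc2 == 0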

Library pseudocode for aarch32/functions/system/AArch32.SysRegWrite

// Write to a 32-bit AArch32 System register.
AArch32.SysRegWrite(integer cp_num, bits(32) instr, bits(32) val);

Library pseudocode for aarch32/functions/system/AArch32.SysRegWrite64

// Write to a 64-bit AArch32 System register.
AArch32.SysRegWrite64(integer cp_num, bits(32) instr, bits(64) val);

Library pseudocode for aarch32/functions/system/AArch32.WriteMode

// AArch32.WriteMode()
// ===================
// Function for dealing with writes to PSTATE.M from AArch32 state only.
// This ensures that PSTATE.EL and PSTATE.SP are always valid.

AArch32.WriteMode(bits(5) mode)
    (valid,el) = ELFromM32(mode);
    assert valid;
    PSTATE.M   = mode;
    PSTATE.EL  = el;
    PSTATE.nRW = '1';
    PSTATE.SP  = (if mode IN {M32_User,M32_System} then '0' else '1');
    return;

Library pseudocode for aarch32/functions/system/AArch32.WriteModeByInstr

// AArch32.WriteModeByInstr()
// ==========================
// Function for dealing with writes to PSTATE.M from an AArch32 instruction, and ensuring that
// illegal state changes are correctly flagged in PSTATE.IL.

AArch32.WriteModeByInstr(bits(5) mode)
    (valid,el) = ELFromM32(mode);

    // 'valid' is set to FALSE if 'mode' is invalid for this implementation or the current value
    // of SCR.NS/SCR_EL3.NS. Additionally, it is illegal for an instruction to write 'mode' to
    // PSTATE.EL if it would result in any of:
    // * A change to a mode that would cause entry to a higher Exception level.
    if UInt(el) > UInt(PSTATE.EL) then
        valid = FALSE;

    // * A change to or from Hyp mode.
    if (PSTATE.M == M32_Hyp || mode == M32_Hyp) && PSTATE.M != mode then
        valid = FALSE;

    // * When EL2 is implemented, the value of HCR.TGE is '1', a change to a Non-secure EL1 mode.
    if PSTATE.M == M32_Monitor && HaveEL(EL2) && el == EL1 && SCR.NS == '1' && HCR.TGE == '1' then
        valid = FALSE;

    if !valid then
        PSTATE.IL = '1';
    else
        AArch32.WriteMode(mode);

Library pseudocode for aarch32/functions/system/BadMode

// BadMode()
// =========

boolean BadMode(bits(5) mode)
    // Return TRUE if 'mode' encodes a mode that is not valid for this implementation
    case mode of
        when M32_Monitor
            valid = HaveAArch32EL(EL3);
        when M32_Hyp
            valid = HaveAArch32EL(EL2);
        when M32_FIQ, M32_IRQ, M32_Svc, M32_Abort, M32_Undef, M32_System
            // If EL3 is implemented and using AArch32, then these modes are EL3 modes in Secure
            // state, and EL1 modes in Non-secure state. If EL3 is not implemented or is using
            // AArch64, then these modes are EL1 modes.
            // Therefore it is sufficient to test this implementation supports EL1 using AArch32.
            valid = HaveAArch32EL(EL1);
        when M32_User
            valid = HaveAArch32EL(EL0);
        otherwise
            valid = FALSE;           // Passed an illegal mode value
    return !valid;
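For orientation, the M32_* constants are the standard 5-bit PSTATE.M encodings (M32_User = '10000' through M32_System = '11111'). A table-driven Python sketch of the same validity test, with HaveAArch32EL stubbed out as a caller-supplied predicate, might look like this:

# Map each architectural mode encoding to the EL whose AArch32 support is tested.
# have_aarch32_el is a stand-in for the pseudocode's HaveAArch32EL().
M32_TO_EL = {
    '10000': 0,  # M32_User    -> EL0
    '10001': 1,  # M32_FIQ
    '10010': 1,  # M32_IRQ
    '10011': 1,  # M32_Svc
    '10110': 3,  # M32_Monitor -> EL3
    '10111': 1,  # M32_Abort
    '11010': 2,  # M32_Hyp     -> EL2
    '11011': 1,  # M32_Undef
    '11111': 1,  # M32_System
}

def bad_mode(mode, have_aarch32_el):
    el = M32_TO_EL.get(mode)
    return el is None or not have_aarch32_el(el)

# Example: an implementation without AArch32 at EL2 rejects Hyp mode.
print(bad_mode('11010', lambda el: el != 2))   # True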

Library pseudocode for aarch32/functions/system/BankedRegisterAccessValid

// BankedRegisterAccessValid()
// ===========================
// Checks for MRS (Banked register) or MSR (Banked register) accesses to registers
// other than the SPSRs that are invalid. This includes ELR_hyp accesses.

BankedRegisterAccessValid(bits(5) SYSm, bits(5) mode)
    case SYSm of
        when '000xx', '00100'          // R8_usr to R12_usr
            if mode != M32_FIQ then UNPREDICTABLE;
        when '00101'                   // SP_usr
            if mode == M32_System then UNPREDICTABLE;
        when '00110'                   // LR_usr
            if mode IN {M32_Hyp,M32_System} then UNPREDICTABLE;
        when '010xx', '0110x', '01110' // R8_fiq to R12_fiq, SP_fiq, LR_fiq
            if mode == M32_FIQ then UNPREDICTABLE;
        when '1000x'                   // LR_irq, SP_irq
            if mode == M32_IRQ then UNPREDICTABLE;
        when '1001x'                   // LR_svc, SP_svc
            if mode == M32_Svc then UNPREDICTABLE;
        when '1010x'                   // LR_abt, SP_abt
            if mode == M32_Abort then UNPREDICTABLE;
        when '1011x'                   // LR_und, SP_und
            if mode == M32_Undef then UNPREDICTABLE;
        when '1110x'                   // LR_mon, SP_mon
            if !HaveEL(EL3) || !IsSecure() || mode == M32_Monitor then UNPREDICTABLE;
        when '11110'                   // ELR_hyp, only from Monitor or Hyp mode
            if !HaveEL(EL2) || !(mode IN {M32_Monitor,M32_Hyp}) then UNPREDICTABLE;
        when '11111'                   // SP_hyp, only from Monitor mode
            if !HaveEL(EL2) || mode != M32_Monitor then UNPREDICTABLE;
        otherwise
            UNPREDICTABLE;
    return;
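The quoted selectors such as '000xx' are bit-string patterns in which 'x' is a don't-care bit. A minimal Python matcher makes the convention explicit (a sketch of the notation only, not the specification's own matching machinery):

# 'x' characters in patterns like '000xx' match either bit value.
def matches(sysm, pattern):
    return len(sysm) == len(pattern) and all(
        p == 'x' or p == b for b, p in zip(sysm, pattern))

assert matches('00010', '000xx')        # R10_usr falls in the R8_usr-R12_usr group
assert not matches('00101', '000xx')    # SP_usr does not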

Library pseudocode for aarch32/functions/system/CPSRWriteByInstr

// CPSRWriteByInstr()
// ==================
// Update PSTATE.<N,Z,C,V,Q,GE,E,A,I,F,M> from a CPSR value written by an MSR instruction.

CPSRWriteByInstr(bits(32) value, bits(4) bytemask)
    privileged = PSTATE.EL != EL0;       // PSTATE.<A,I,F,M> are not writable at EL0

    // Write PSTATE from 'value', ignoring bytes masked by 'bytemask'
    if bytemask<3> == '1' then
        PSTATE.<N,Z,C,V,Q> = value<31:27>;
        // Bits <26:24> are ignored

    if bytemask<2> == '1' then
        // Bit <23> is RES0
        if privileged then
            PSTATE.PAN = value<22>;
        // Bits <21:20> are RES0
        PSTATE.GE = value<19:16>;

    if bytemask<1> == '1' then
        // Bits <15:10> are RES0
        PSTATE.E = value<9>;             // PSTATE.E is writable at EL0
        if privileged then
            PSTATE.A = value<8>;

    if bytemask<0> == '1' then
        if privileged then
            PSTATE.<I,F> = value<7:6>;
        // Bit <5> is RES0
        // AArch32.WriteModeByInstr() sets PSTATE.IL to 1 if this is an illegal mode change.
        AArch32.WriteModeByInstr(value<4:0>);
    return;
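To see what the bytemask does, the following Python sketch merges only the byte lanes of 'value' selected by the 4-bit mask; the privilege checks and the individual PSTATE fields are deliberately omitted, so this models the lane selection alone:

# Merge 'value' into 'cpsr' one byte lane at a time, as selected by bytemask<3:0>.
def masked_bytes(cpsr, value, bytemask):
    for i in range(4):                  # bytemask<i> covers bits 8*i+7 : 8*i
        if bytemask & (1 << i):
            lane = 0xFF << (8 * i)
            cpsr = (cpsr & ~lane) | (value & lane)
    return cpsr & 0xFFFFFFFF

# An MSR CPSR_f style write: only byte 3 (the flags byte) is replaced.
assert masked_bytes(0x000001D3, 0xF0000000, 0b1000) == 0xF00001D3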

Library pseudocode for aarch32/functions/system/ConditionPassed

// ConditionPassed()
// =================

boolean ConditionPassed()
    return ConditionHolds(AArch32.CurrentCond());

Library pseudocode for aarch32/functions/system/AArch32.CurrentCond

bits(4) AArch32.CurrentCond();

Library pseudocode for aarch32/functions/system/InITBlock

// InITBlock()
// ===========

boolean InITBlock()
    if CurrentInstrSet() == InstrSet_T32 then
        return PSTATE.IT<3:0> != '0000';
    else
        return FALSE;

Library pseudocode for aarch32/functions/system/LastInITBlock

// LastInITBlock()
// ===============

boolean LastInITBlock()
    return (PSTATE.IT<3:0> == '1000');

Library pseudocode for aarch32/functions/system/SPSRWriteByInstr

// SPSRWriteByInstr()
// ==================

SPSRWriteByInstr(bits(32) value, bits(4) bytemask)
    new_spsr = SPSR[];

    if bytemask<3> == '1' then
        new_spsr<31:24> = value<31:24>;  // N,Z,C,V,Q flags, IT[1:0],J bits

    if bytemask<2> == '1' then
        new_spsr<23:16> = value<23:16>;  // IL bit, GE[3:0] flags

    if bytemask<1> == '1' then
        new_spsr<15:8> = value<15:8>;    // IT[7:2] bits, E bit, A interrupt mask

    if bytemask<0> == '1' then
        new_spsr<7:0> = value<7:0>;      // I,F interrupt masks, T bit, Mode bits

    SPSR[] = new_spsr;                   // UNPREDICTABLE if User or System mode
    return;

Library pseudocode for aarch32/functions/system/SPSRaccessValid

// SPSRaccessValid()
// =================
// Checks for MRS (Banked register) or MSR (Banked register) accesses to the SPSRs
// that are UNPREDICTABLE

SPSRaccessValid(bits(5) SYSm, bits(5) mode)
    case SYSm of
        when '01110'                   // SPSR_fiq
            if mode == M32_FIQ then UNPREDICTABLE;
        when '10000'                   // SPSR_irq
            if mode == M32_IRQ then UNPREDICTABLE;
        when '10010'                   // SPSR_svc
            if mode == M32_Svc then UNPREDICTABLE;
        when '10100'                   // SPSR_abt
            if mode == M32_Abort then UNPREDICTABLE;
        when '10110'                   // SPSR_und
            if mode == M32_Undef then UNPREDICTABLE;
        when '11100'                   // SPSR_mon
            if !HaveEL(EL3) || mode == M32_Monitor || !IsSecure() then UNPREDICTABLE;
        when '11110'                   // SPSR_hyp
            if !HaveEL(EL2) || mode != M32_Monitor then UNPREDICTABLE;
        otherwise
            UNPREDICTABLE;
    return;

Library pseudocode for aarch32/functions/system/SelectInstrSet

// SelectInstrSet()
// ================

SelectInstrSet(InstrSet iset)
    assert CurrentInstrSet() IN {InstrSet_A32, InstrSet_T32};
    assert iset IN {InstrSet_A32, InstrSet_T32};

    PSTATE.T = if iset == InstrSet_A32 then '0' else '1';

    return;

Library pseudocode for aarch32/functions/v6simd/Sat

// Sat()
// =====

bits(N) Sat(integer i, integer N, boolean unsigned)
    result = if unsigned then UnsignedSat(i, N) else SignedSat(i, N);
    return result;

Library pseudocode for aarch32/functions/v6simd/SignedSat

// SignedSat()
// ===========

bits(N) SignedSat(integer i, integer N)
    (result, -) = SignedSatQ(i, N);
    return result;

Library pseudocode for aarch32/functions/v6simd/UnsignedSat

// UnsignedSat()
// =============

bits(N) UnsignedSat(integer i, integer N)
    (result, -) = UnsignedSatQ(i, N);
    return result;
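Sat, SignedSat and UnsignedSat above simply discard the saturation flag produced by SignedSatQ/UnsignedSatQ. A compact Python model of the whole family follows; the (value, flag) tuple shape mirrors the pseudocode's (result, -) destructuring:

# Clamp an integer into the signed or unsigned range of an N-bit value,
# returning (saturated_value, q) where q records whether clamping happened.
def signed_sat_q(i, n):
    lo, hi = -(1 << (n - 1)), (1 << (n - 1)) - 1
    return (min(max(i, lo), hi), not lo <= i <= hi)

def unsigned_sat_q(i, n):
    lo, hi = 0, (1 << n) - 1
    return (min(max(i, lo), hi), not lo <= i <= hi)

def sat(i, n, unsigned):
    result, _ = unsigned_sat_q(i, n) if unsigned else signed_sat_q(i, n)
    return result

assert sat(300, 8, unsigned=True) == 255     # clamps to 2^8 - 1
assert sat(-200, 8, unsigned=False) == -128  # clamps to -2^7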

Library pseudocode for aarch32/translation/attrs/AArch32.DefaultTEXDecode

// AArch32.DefaultTEXDecode()
// ==========================

MemoryAttributes AArch32.DefaultTEXDecode(bits(3) TEX, bit C, bit B, bit S, AccType acctype)
    MemoryAttributes memattrs;

    // Reserved values map to allocated values
    if (TEX == '001' && C:B == '01') || (TEX == '010' && C:B != '00') || TEX == '011' then
        bits(5) texcb;
        (-, texcb) = ConstrainUnpredictableBits(Unpredictable_RESTEXCB);
        TEX = texcb<4:2>;  C = texcb<1>;  B = texcb<0>;

    case TEX:C:B of
        when '00000'                    // Device-nGnRnE
            memattrs.type = MemType_Device;
            memattrs.device = DeviceType_nGnRnE;
        when '00001', '01000'           // Device-nGnRE
            memattrs.type = MemType_Device;
            memattrs.device = DeviceType_nGnRE;
        when '00010', '00011', '00100'  // Write-back or Write-through Read allocate, or Non-cacheable
            memattrs.type = MemType_Normal;
            memattrs.inner = ShortConvertAttrsHints(C:B, acctype, FALSE);
            memattrs.outer = ShortConvertAttrsHints(C:B, acctype, FALSE);
            memattrs.shareable = (S == '1');
        when '00110'
            memattrs = MemoryAttributes IMPLEMENTATION_DEFINED;
        when '00111'                    // Write-back Read and Write allocate
            memattrs.type = MemType_Normal;
            memattrs.inner = ShortConvertAttrsHints('01', acctype, FALSE);
            memattrs.outer = ShortConvertAttrsHints('01', acctype, FALSE);
            memattrs.shareable = (S == '1');
        when '1xxxx'                    // Cacheable, TEX<1:0> = Outer attrs, {C,B} = Inner attrs
            memattrs.type = MemType_Normal;
            memattrs.inner = ShortConvertAttrsHints(C:B, acctype, FALSE);
            memattrs.outer = ShortConvertAttrsHints(TEX<1:0>, acctype, FALSE);
            memattrs.shareable = (S == '1');
        otherwise
            Unreachable();              // Reserved, handled above

    // transient bits are not supported in this format
    memattrs.inner.transient = FALSE;
    memattrs.outer.transient = FALSE;

    // distinction between inner and outer shareable is not supported in this format
    memattrs.outershareable = memattrs.shareable;
    memattrs.tagged = FALSE;

    return MemAttrDefaults(memattrs);
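The case arms key off the 5-bit concatenation TEX:C:B. The classification step alone can be sketched in Python as follows; the labels are mine, and the attribute/hint conversion done by ShortConvertAttrsHints is omitted:

# Classify a short-descriptor TEX:C:B encoding the way the case statement above does.
def default_texcb_class(texcb):   # texcb = TEX<2:0> + C + B, e.g. '00110'
    if texcb == '00000':              return 'Device-nGnRnE'
    if texcb in ('00001', '01000'):   return 'Device-nGnRE'
    if texcb in ('00010', '00011', '00100'):
        return 'Normal: WB/WT read-allocate or Non-cacheable'
    if texcb == '00110':              return 'IMPLEMENTATION DEFINED'
    if texcb == '00111':              return 'Normal: WB read/write-allocate'
    if texcb[0] == '1':               return 'Normal: TEX<1:0>=outer, C:B=inner'
    return 'reserved (remapped via ConstrainUnpredictableBits)'

assert default_texcb_class('10110') == 'Normal: TEX<1:0>=outer, C:B=inner'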

Library pseudocode for aarch32/translation/attrs/AArch32.InstructionDevice

// AArch32.InstructionDevice()
// ===========================
// Instruction fetches from memory marked as Device but not execute-never might generate a
// Permission Fault but are otherwise treated as if from Normal Non-cacheable memory.

AddressDescriptor AArch32.InstructionDevice(AddressDescriptor addrdesc, bits(32) vaddress,
                                            bits(40) ipaddress, integer level, bits(4) domain,
                                            AccType acctype, boolean iswrite, boolean secondstage,
                                            boolean s2fs1walk)
    c = ConstrainUnpredictable(Unpredictable_INSTRDEVICE);
    assert c IN {Constraint_NONE, Constraint_FAULT};

    if c == Constraint_FAULT then
        addrdesc.fault = AArch32.PermissionFault(ipaddress, domain, level, acctype, iswrite,
                                                 secondstage, s2fs1walk);
    else
        addrdesc.memattrs.type = MemType_Normal;
        addrdesc.memattrs.inner.attrs = MemAttr_NC;
        addrdesc.memattrs.inner.hints = MemHint_No;
        addrdesc.memattrs.outer = addrdesc.memattrs.inner;
        addrdesc.memattrs.tagged = FALSE;
        addrdesc.memattrs = MemAttrDefaults(addrdesc.memattrs);

    return addrdesc;

Library pseudocode for aarch32/translation/attrs/AArch32.RemappedTEXDecode

// AArch32.RemappedTEXDecode()
// ===========================

MemoryAttributes AArch32.RemappedTEXDecode(bits(3) TEX, bit C, bit B, bit S, AccType acctype)
    MemoryAttributes memattrs;

    region = UInt(TEX<0>:C:B);          // TEX<2:1> are ignored in this mapping scheme
    if region == 6 then
        memattrs = MemoryAttributes IMPLEMENTATION_DEFINED;
    else
        base = 2 * region;
        attrfield = PRRR<base+1:base>;

        if attrfield == '11' then       // Reserved, maps to allocated value
            (-, attrfield) = ConstrainUnpredictableBits(Unpredictable_RESPRRR);

        case attrfield of
            when '00'                   // Device-nGnRnE
                memattrs.type = MemType_Device;
                memattrs.device = DeviceType_nGnRnE;
            when '01'                   // Device-nGnRE
                memattrs.type = MemType_Device;
                memattrs.device = DeviceType_nGnRE;
            when '10'
                memattrs.type = MemType_Normal;
                memattrs.inner = ShortConvertAttrsHints(NMRR<base+1:base>, acctype, FALSE);
                memattrs.outer = ShortConvertAttrsHints(NMRR<base+17:base+16>, acctype, FALSE);
                s_bit = if S == '0' then PRRR.NS0 else PRRR.NS1;
                memattrs.shareable = (s_bit == '1');
                memattrs.outershareable = (s_bit == '1' && PRRR<region+24> == '0');
            when '11'
                Unreachable();

        // transient bits are not supported in this format
        memattrs.inner.transient = FALSE;
        memattrs.outer.transient = FALSE;

    memattrs.tagged = FALSE;
    return MemAttrDefaults(memattrs);
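In this remapped scheme, each of the eight regions selected by TEX<0>:C:B owns a 2-bit field in PRRR and, for Normal memory, inner and outer fields in NMRR. A Python sketch of just the indexing arithmetic; the register values here are arbitrary placeholders, not architectural defaults:

# Extract the per-region attribute fields used by AArch32.RemappedTEXDecode.
def remap_fields(tex0, c, b, prrr, nmrr):
    region = (tex0 << 2) | (c << 1) | b          # UInt(TEX<0>:C:B), 0..7
    base = 2 * region
    attrfield   = (prrr >> base) & 0b11          # PRRR<base+1:base>
    inner_attrs = (nmrr >> base) & 0b11          # NMRR<base+1:base>
    outer_attrs = (nmrr >> (base + 16)) & 0b11   # NMRR<base+17:base+16>
    return region, attrfield, inner_attrs, outer_attrs

# Region 3 (TEX<0>:C:B = '011') reads PRRR<7:6>, NMRR<7:6> and NMRR<23:22>.
print(remap_fields(0, 1, 1, prrr=0x98AA, nmrr=0x00C000C0))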

Library pseudocode for aarch32/translation/attrs/AArch32.S1AttrDecode

// AArch32.S1AttrDecode()
// ======================
// Converts the Stage 1 attribute fields, using the MAIR, to orthogonal
// attributes and hints.

MemoryAttributes AArch32.S1AttrDecode(bits(2) SH, bits(3) attr, AccType acctype)
    MemoryAttributes memattrs;

    if PSTATE.EL == EL2 then
        mair = HMAIR1:HMAIR0;
    else
        mair = MAIR1:MAIR0;
    index = 8 * UInt(attr);
    attrfield = mair<index+7:index>;

    memattrs.tagged = FALSE;
    if ((attrfield<7:4> != '0000' && attrfield<7:4> != '1111' && attrfield<3:0> == '0000') ||
        (attrfield<7:4> == '0000' && attrfield<3:0> != 'xx00')) then
        // Reserved, maps to an allocated value
        (-, attrfield) = ConstrainUnpredictableBits(Unpredictable_RESMAIR);

    if !HaveMTEExt() && attrfield<7:4> == '1111' && attrfield<3:0> == '0000' then
        // Reserved, maps to an allocated value
        (-, attrfield) = ConstrainUnpredictableBits(Unpredictable_RESMAIR);

    if attrfield<7:4> == '0000' then    // Device
        memattrs.type = MemType_Device;
        case attrfield<3:0> of
            when '0000' memattrs.device = DeviceType_nGnRnE;
            when '0100' memattrs.device = DeviceType_nGnRE;
            when '1000' memattrs.device = DeviceType_nGRE;
            when '1100' memattrs.device = DeviceType_GRE;
            otherwise   Unreachable();  // Reserved, handled above

    elsif attrfield<3:0> != '0000' then // Normal
        memattrs.type = MemType_Normal;
        memattrs.outer = LongConvertAttrsHints(attrfield<7:4>, acctype);
        memattrs.inner = LongConvertAttrsHints(attrfield<3:0>, acctype);
        memattrs.shareable = SH<1> == '1';
        memattrs.outershareable = SH == '10';
    elsif HaveMTEExt() && attrfield == '11110000' then // Tagged, Normal
        memattrs.tagged = TRUE;
        memattrs.type = MemType_Normal;
        memattrs.outer.attrs = MemAttr_WB;
        memattrs.inner.attrs = MemAttr_WB;
        memattrs.outer.hints = MemHint_RWA;
        memattrs.inner.hints = MemHint_RWA;
        memattrs.shareable = SH<1> == '1';
        memattrs.outershareable = SH == '10';
    else
        Unreachable();                  // Reserved, handled above

    return MemAttrDefaults(memattrs);
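Each 3-bit AttrIndx selects one byte of the 64-bit concatenation (H)MAIR1:(H)MAIR0, which then splits into outer (bits 7:4) and inner (bits 3:0) halves. A Python sketch of the indexing; the 0xFF example is the usual Normal Write-Back read/write-allocate MAIR byte:

# Select the 8-bit attribute field for a given AttrIndx from MAIR1:MAIR0,
# then split it into the outer and inner halves that LongConvertAttrsHints decodes.
def mair_attrfield(mair1, mair0, attr):
    mair = (mair1 << 32) | mair0        # 64-bit concatenation MAIR1:MAIR0
    index = 8 * attr                    # attr is the 3-bit AttrIndx, 0..7
    attrfield = (mair >> index) & 0xFF
    return attrfield >> 4, attrfield & 0xF

# AttrIndx=4 reads byte 0 of MAIR1; 0xFF decodes as Write-Back RW-allocate.
assert mair_attrfield(0x000000FF, 0x00004404, 4) == (0xF, 0xF)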

Library pseudocode for aarch32/translation/attrs/AArch32.TranslateAddressS1Off

// AArch32.TranslateAddressS1Off()
// ===============================
// Called for stage 1 translations when translation is disabled to supply a default translation.
// Note that there are additional constraints on instruction prefetching that are not described in
// this pseudocode.

TLBRecord AArch32.TranslateAddressS1Off(bits(32) vaddress, AccType acctype, boolean iswrite)
    assert ELUsingAArch32(S1TranslationRegime());

    TLBRecord result;

    default_cacheable = (HasS2Translation() && ((if ELUsingAArch32(EL2) then HCR.DC else HCR_EL2.DC) == '1'));

    if default_cacheable then
        // Use default cacheable settings
        result.addrdesc.memattrs.type = MemType_Normal;
        result.addrdesc.memattrs.inner.attrs = MemAttr_WB; // Write-back
        result.addrdesc.memattrs.inner.hints = MemHint_RWA;
        result.addrdesc.memattrs.shareable = FALSE;
        result.addrdesc.memattrs.outershareable = FALSE;
        result.addrdesc.memattrs.tagged = HCR_EL2.DCT == '1';
    elsif acctype != AccType_IFETCH then
        // Treat data as Device
        result.addrdesc.memattrs.type = MemType_Device;
        result.addrdesc.memattrs.device = DeviceType_nGnRnE;
        result.addrdesc.memattrs.inner = MemAttrHints UNKNOWN;
        result.addrdesc.memattrs.tagged = FALSE;
    else
        // Instruction cacheability controlled by SCTLR/HSCTLR.I
        if PSTATE.EL == EL2 then
            cacheable = HSCTLR.I == '1';
        else
            cacheable = SCTLR.I == '1';
        result.addrdesc.memattrs.type = MemType_Normal;
        if cacheable then
            result.addrdesc.memattrs.inner.attrs = MemAttr_WT;
            result.addrdesc.memattrs.inner.hints = MemHint_RA;
        else
            result.addrdesc.memattrs.inner.attrs = MemAttr_NC;
            result.addrdesc.memattrs.inner.hints = MemHint_No;
        result.addrdesc.memattrs.shareable = TRUE;
        result.addrdesc.memattrs.outershareable = TRUE;
        result.addrdesc.memattrs.tagged = FALSE;

    result.addrdesc.memattrs.outer = result.addrdesc.memattrs.inner;
    result.addrdesc.memattrs = MemAttrDefaults(result.addrdesc.memattrs);

    result.perms.ap = bits(3) UNKNOWN;
    result.perms.xn = '0';
    result.perms.pxn = '0';

    result.nG = bit UNKNOWN;
    result.contiguous = boolean UNKNOWN;
    result.domain = bits(4) UNKNOWN;
    result.level = integer UNKNOWN;
    result.blocksize = integer UNKNOWN;
    result.addrdesc.paddress.address = ZeroExtend(vaddress);
    result.addrdesc.paddress.NS = if IsSecure() then '0' else '1';
    result.addrdesc.fault = AArch32.NoFault();
    return result;

Library pseudocode for aarch32/translation/checks/AArch32.AccessIsPrivileged

// AArch32.AccessIsPrivileged()
// ============================

boolean AArch32.AccessIsPrivileged(AccType acctype)
    el = AArch32.AccessUsesEL(acctype);

    if el == EL0 then
        ispriv = FALSE;
    elsif el != EL1 then
        ispriv = TRUE;
    else
        ispriv = (acctype != AccType_UNPRIV);

    return ispriv;

Library pseudocode for aarch32/translation/checks/AArch32.AccessUsesEL

// AArch32.AccessUsesEL()
// ======================
// Returns the Exception Level of the regime that will manage the translation for a given access type.

bits(2) AArch32.AccessUsesEL(AccType acctype)
    if acctype == AccType_UNPRIV then
        return EL0;
    else
        return PSTATE.EL;

Library pseudocode for aarch32/translation/checks/AArch32.CheckDomain

// AArch32.CheckDomain()
// =====================

(boolean, FaultRecord) AArch32.CheckDomain(bits(4) domain, bits(32) vaddress, integer level,
                                           AccType acctype, boolean iswrite)
    index = 2 * UInt(domain);
    attrfield = DACR<index+1:index>;

    if attrfield == '10' then           // Reserved, maps to an allocated value
        // Reserved value maps to an allocated value
        (-, attrfield) = ConstrainUnpredictableBits(Unpredictable_RESDACR);

    if attrfield == '00' then
        fault = AArch32.DomainFault(domain, level, acctype, iswrite);
    else
        fault = AArch32.NoFault();

    permissioncheck = (attrfield == '01');

    return (permissioncheck, fault);
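Domain checking reads a 2-bit DACR field per domain: '00' is No access (Domain fault), '01' is Client (access permissions are checked), '11' is Manager (permission checks are skipped). A Python sketch follows; note the handling of the reserved '10' value as Client is a simplifying assumption of mine, where the pseudocode consults ConstrainUnpredictableBits:

# Decode the DACR field for a domain, as AArch32.CheckDomain does.
# Returns (permissioncheck, domain_fault).
def check_domain(dacr, domain):
    attrfield = (dacr >> (2 * domain)) & 0b11
    if attrfield == 0b10:                 # reserved: constrained-unpredictable remap
        attrfield = 0b01                  # assumption: treat as Client in this sketch
    fault = (attrfield == 0b00)           # No access -> Domain fault
    permissioncheck = (attrfield == 0b01) # Client -> check AP bits; Manager skips them
    return permissioncheck, fault

assert check_domain(0x55555555, 7) == (True, False)   # all domains Client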

Library pseudocode for aarch32/translation/checks/AArch32.CheckPermission

// AArch32.CheckPermission()
// =========================
// Function used for permission checking from AArch32 stage 1 translations

FaultRecord AArch32.CheckPermission(Permissions perms, bits(32) vaddress, integer level,
                                    bits(4) domain, bit NS, AccType acctype, boolean iswrite)
    assert ELUsingAArch32(S1TranslationRegime());

    if PSTATE.EL != EL2 then
        wxn = SCTLR.WXN == '1';
        if TTBCR.EAE == '1' || SCTLR.AFE == '1' || perms.ap<0> == '1' then
            priv_r = TRUE;
            priv_w = perms.ap<2> == '0';
            user_r = perms.ap<1> == '1';
            user_w = perms.ap<2:1> == '01';
        else
            priv_r = perms.ap<2:1> != '00';
            priv_w = perms.ap<2:1> == '01';
            user_r = perms.ap<1> == '1';
            user_w = FALSE;
        uwxn = SCTLR.UWXN == '1';

        ispriv = AArch32.AccessIsPrivileged(acctype);

        pan = if HavePANExt() then PSTATE.PAN else '0';
        is_ldst   = !(acctype IN {AccType_DC, AccType_DC_UNPRIV, AccType_AT, AccType_IFETCH});
        is_ats1xp = (acctype == AccType_AT && AArch32.ExecutingATS1xPInstr());
        if pan == '1' && user_r && ispriv && (is_ldst || is_ats1xp) then
            priv_r = FALSE;
            priv_w = FALSE;

        user_xn = !user_r || perms.xn == '1' || (user_w && wxn);
        priv_xn = (!priv_r || perms.xn == '1' || perms.pxn == '1' ||
                   (priv_w && wxn) || (user_w && uwxn));

        if ispriv then
            (r, w, xn) = (priv_r, priv_w, priv_xn);
        else
            (r, w, xn) = (user_r, user_w, user_xn);
    else
        // Access from EL2
        wxn = HSCTLR.WXN == '1';
        r = TRUE;
        w = perms.ap<2> == '0';
        xn = perms.xn == '1' || (w && wxn);

    // Restriction on Secure instruction fetch
    if HaveEL(EL3) && IsSecure() && NS == '1' then
        secure_instr_fetch = if ELUsingAArch32(EL3) then SCR.SIF else SCR_EL3.SIF;
        if secure_instr_fetch == '1' then xn = TRUE;

    if acctype == AccType_IFETCH then
        fail = xn;
        failedread = TRUE;
    elsif acctype IN { AccType_ATOMICRW, AccType_ORDEREDRW, AccType_ORDEREDATOMICRW } then
        fail = !r || !w;
        failedread = !r;
    elsif acctype == AccType_DC then
        // DC maintenance instructions operating by VA, cannot fault from stage 1 translation.
        fail = FALSE;
    elsif iswrite then
        fail = !w;
        failedread = FALSE;
    else
        fail = !r;
        failedread = TRUE;

    if fail then
        secondstage = FALSE;
        s2fs1walk = FALSE;
        ipaddress = bits(40) UNKNOWN;
        return AArch32.PermissionFault(ipaddress, domain, level, acctype, !failedread,
                                       secondstage, s2fs1walk);
    else
        return AArch32.NoFault();
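The first branch of the EL1&0 decode above (taken when TTBCR.EAE, SCTLR.AFE or AP[0] is set) reduces to a small truth table over AP[2:1]: AP[2] forces read-only, AP[1] grants user access. A Python sketch of just that table:

# The simplified-permission branch of AArch32.CheckPermission.
def s1_rw(ap2, ap1):
    priv_r = True
    priv_w = ap2 == 0
    user_r = ap1 == 1
    user_w = ap2 == 0 and ap1 == 1       # ap<2:1> == '01'
    return (priv_r, priv_w), (user_r, user_w)

assert s1_rw(0, 0) == ((True, True),  (False, False))  # privileged RW only
assert s1_rw(1, 1) == ((True, False), (True,  False))  # read-only at both levels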

Library pseudocode for aarch32/translation/checks/AArch32.CheckS2Permission

// AArch32.CheckS2Permission()
// ===========================
// Function used for permission checking from AArch32 stage 2 translations

FaultRecord AArch32.CheckS2Permission(Permissions perms, bits(32) vaddress, bits(40) ipaddress,
                                      integer level, AccType acctype, boolean iswrite,
                                      boolean s2fs1walk)
    assert HaveEL(EL2) && !IsSecure() && ELUsingAArch32(EL2) && HasS2Translation();

    r = perms.ap<1> == '1';
    w = perms.ap<2> == '1';
    if HaveExtendedExecuteNeverExt() then
        case perms.xn:perms.xxn of
            when '00' xn = !r;
            when '01' xn = !r || PSTATE.EL == EL1;
            when '10' xn = TRUE;
            when '11' xn = !r || PSTATE.EL == EL0;
    else
        xn = !r || perms.xn == '1';

    // Stage 1 walk is checked as a read, regardless of the original type
    if acctype == AccType_IFETCH && !s2fs1walk then
        fail = xn;
        failedread = TRUE;
    elsif (acctype IN { AccType_ATOMICRW, AccType_ORDEREDRW, AccType_ORDEREDATOMICRW }) && !s2fs1walk then
        fail = !r || !w;
        failedread = !r;
    elsif acctype == AccType_DC && !s2fs1walk then
        // DC maintenance instructions operating by VA, do not generate Permission faults
        // from stage 2 translation, other than from stage 1 translation table walk.
        fail = FALSE;
    elsif iswrite && !s2fs1walk then
        fail = !w;
        failedread = FALSE;
    else
        fail = !r;
        failedread = !iswrite;

    if fail then
        domain = bits(4) UNKNOWN;
        secondstage = TRUE;
        return AArch32.PermissionFault(ipaddress, domain, level, acctype, !failedread,
                                       secondstage, s2fs1walk);
    else
        return AArch32.NoFault();

Library pseudocode for aarch32/translation/debug/AArch32.CheckBreakpoint

// AArch32.CheckBreakpoint()
// =========================
// Called before executing the instruction of length "size" bytes at "vaddress" in an AArch32
// translation regime.
// The breakpoint can in fact be evaluated well ahead of execution, for example, at instruction
// fetch. This is the simple sequential execution of the program.

FaultRecord AArch32.CheckBreakpoint(bits(32) vaddress, integer size)
    assert ELUsingAArch32(S1TranslationRegime());
    assert size IN {2,4};

    match = FALSE;
    mismatch = FALSE;

    for i = 0 to UInt(DBGDIDR.BRPs)
        (match_i, mismatch_i) = AArch32.BreakpointMatch(i, vaddress, size);
        match = match || match_i;
        mismatch = mismatch || mismatch_i;

    if match && HaltOnBreakpointOrWatchpoint() then
        reason = DebugHalt_Breakpoint;
        Halt(reason);
    elsif (match || mismatch) && DBGDSCRext.MDBGen == '1' && AArch32.GenerateDebugExceptions() then
        acctype = AccType_IFETCH;
        iswrite = FALSE;
        debugmoe = DebugException_Breakpoint;
        return AArch32.DebugFault(acctype, iswrite, debugmoe);
    else
        return AArch32.NoFault();

Library pseudocode for aarch32/translation/debug/AArch32.CheckDebug

// AArch32.CheckDebug()
// ====================
// Called on each access to check for a debug exception or entry to Debug state.

FaultRecord AArch32.CheckDebug(bits(32) vaddress, AccType acctype, boolean iswrite, integer size)
    FaultRecord fault = AArch32.NoFault();
    d_side = (acctype != AccType_IFETCH);
    generate_exception = AArch32.GenerateDebugExceptions() && DBGDSCRext.MDBGen == '1';
    halt = HaltOnBreakpointOrWatchpoint();
    // Relative priority of Vector Catch and Breakpoint exceptions not defined in the architecture
    vector_catch_first = ConstrainUnpredictableBool(Unpredictable_BPVECTORCATCHPRI);

    if !d_side && vector_catch_first && generate_exception then
        fault = AArch32.CheckVectorCatch(vaddress, size);

    if fault.type == Fault_None && (generate_exception || halt) then
        if d_side then
            fault = AArch32.CheckWatchpoint(vaddress, acctype, iswrite, size);
        else
            fault = AArch32.CheckBreakpoint(vaddress, size);

    if fault.type == Fault_None && !d_side && !vector_catch_first && generate_exception then
        return AArch32.CheckVectorCatch(vaddress, size);

    return fault;

Library pseudocode for aarch32/translation/debug/AArch32.CheckVectorCatch

// AArch32.CheckVectorCatch()
// ==========================
// Called before executing the instruction of length "size" bytes at "vaddress" in an AArch32
// translation regime.
// Vector Catch can in fact be evaluated well ahead of execution, for example, at instruction
// fetch. This is the simple sequential execution of the program.

FaultRecord AArch32.CheckVectorCatch(bits(32) vaddress, integer size)
    assert ELUsingAArch32(S1TranslationRegime());

    match = AArch32.VCRMatch(vaddress);
    if size == 4 && !match && AArch32.VCRMatch(vaddress + 2) then
        match = ConstrainUnpredictableBool(Unpredictable_VCMATCHHALF);

    if match && DBGDSCRext.MDBGen == '1' && AArch32.GenerateDebugExceptions() then
        acctype = AccType_IFETCH;
        iswrite = FALSE;
        debugmoe = DebugException_VectorCatch;
        return AArch32.DebugFault(acctype, iswrite, debugmoe);
    else
        return AArch32.NoFault();

Library pseudocode for aarch32/translation/debug/AArch32.CheckWatchpoint

// AArch32.CheckWatchpoint()
// =========================
// Called before accessing the memory location of "size" bytes at "address".

FaultRecord AArch32.CheckWatchpoint(bits(32) vaddress, AccType acctype,
                                    boolean iswrite, integer size)
    assert ELUsingAArch32(S1TranslationRegime());

    match = FALSE;
    ispriv = AArch32.AccessIsPrivileged(acctype);

    for i = 0 to UInt(DBGDIDR.WRPs)
        match = match || AArch32.WatchpointMatch(i, vaddress, size, ispriv, iswrite);

    if match && HaltOnBreakpointOrWatchpoint() then
        reason = DebugHalt_Watchpoint;
        Halt(reason);
    elsif match && DBGDSCRext.MDBGen == '1' && AArch32.GenerateDebugExceptions() then
        debugmoe = DebugException_Watchpoint;
        return AArch32.DebugFault(acctype, iswrite, debugmoe);
    else
        return AArch32.NoFault();

Library pseudocode for aarch32/translation/faults/AArch32.AccessFlagFault

// AArch32.AccessFlagFault()
// =========================

FaultRecord AArch32.AccessFlagFault(bits(40) ipaddress, bits(4) domain, integer level,
                                    AccType acctype, boolean iswrite, boolean secondstage,
                                    boolean s2fs1walk)
    extflag = bit UNKNOWN;
    debugmoe = bits(4) UNKNOWN;
    errortype = bits(2) UNKNOWN;
    return AArch32.CreateFaultRecord(Fault_AccessFlag, ipaddress, domain, level, acctype, iswrite,
                                     extflag, debugmoe, errortype, secondstage, s2fs1walk);

Library pseudocode for aarch32/translation/faults/AArch32.AddressSizeFault

// AArch32.AddressSizeFault()
// ==========================

FaultRecord AArch32.AddressSizeFault(bits(40) ipaddress, bits(4) domain, integer level,
                                     AccType acctype, boolean iswrite, boolean secondstage,
                                     boolean s2fs1walk)
    extflag = bit UNKNOWN;
    debugmoe = bits(4) UNKNOWN;
    errortype = bits(2) UNKNOWN;
    return AArch32.CreateFaultRecord(Fault_AddressSize, ipaddress, domain, level, acctype, iswrite,
                                     extflag, debugmoe, errortype, secondstage, s2fs1walk);

Library pseudocode for aarch32/translation/faults/AArch32.AlignmentFault

// AArch32.AlignmentFault()
// ========================

FaultRecord AArch32.AlignmentFault(AccType acctype, boolean iswrite, boolean secondstage)
    ipaddress = bits(40) UNKNOWN;
    domain = bits(4) UNKNOWN;
    level = integer UNKNOWN;
    extflag = bit UNKNOWN;
    debugmoe = bits(4) UNKNOWN;
    errortype = bits(2) UNKNOWN;
    s2fs1walk = boolean UNKNOWN;
    return AArch32.CreateFaultRecord(Fault_Alignment, ipaddress, domain, level, acctype, iswrite,
                                     extflag, debugmoe, errortype, secondstage, s2fs1walk);

Library pseudocode for aarch32/translation/faults/AArch32.AsynchExternalAbort

// AArch32.AsynchExternalAbort()
// =============================
// Wrapper function for asynchronous external aborts

FaultRecord AArch32.AsynchExternalAbort(boolean parity, bits(2) errortype, bit extflag)
    type = if parity then Fault_AsyncParity else Fault_AsyncExternal;
    ipaddress = bits(40) UNKNOWN;
    domain = bits(4) UNKNOWN;
    level = integer UNKNOWN;
    acctype = AccType_NORMAL;
    iswrite = boolean UNKNOWN;
    debugmoe = bits(4) UNKNOWN;
    secondstage = FALSE;
    s2fs1walk = FALSE;
    return AArch32.CreateFaultRecord(type, ipaddress, domain, level, acctype, iswrite, extflag,
                                     debugmoe, errortype, secondstage, s2fs1walk);

Library pseudocode for aarch32/translation/faults/AArch32.DebugFault

// AArch32.DebugFault()
// ====================

FaultRecord AArch32.DebugFault(AccType acctype, boolean iswrite, bits(4) debugmoe)
    ipaddress = bits(40) UNKNOWN;
    domain = bits(4) UNKNOWN;
    errortype = bits(2) UNKNOWN;
    level = integer UNKNOWN;
    extflag = bit UNKNOWN;
    secondstage = FALSE;
    s2fs1walk = FALSE;
    return AArch32.CreateFaultRecord(Fault_Debug, ipaddress, domain, level, acctype, iswrite,
                                     extflag, debugmoe, errortype, secondstage, s2fs1walk);

Library pseudocode for aarch32/translation/walkfaults/AArch32.TranslationTableWalkLDAArch32.DomainFault

// AArch32.DomainFault()
// =====================

FaultRecord AArch32.DomainFault(bits(4) domain, integer level, AccType acctype, boolean iswrite)

    ipaddress = bits(40) UNKNOWN;
    extflag = bit UNKNOWN;
    debugmoe = bits(4) UNKNOWN;
    errortype = bits(2) UNKNOWN;
    secondstage = FALSE;
    s2fs1walk = FALSE;
    return AArch32.CreateFaultRecord(Fault_Domain, ipaddress, domain, level, acctype, iswrite,
                                     extflag, debugmoe, errortype, secondstage, s2fs1walk);

Library pseudocode for aarch32/translation/faults/AArch32.NoFault

// AArch32.NoFault()
// =================

FaultRecord AArch32.NoFault()

    ipaddress = bits(40) UNKNOWN;
    domain = bits(4) UNKNOWN;
    level = integer UNKNOWN;
    acctype = AccType_NORMAL;
    iswrite = boolean UNKNOWN;
    extflag = bit UNKNOWN;
    debugmoe = bits(4) UNKNOWN;
    errortype = bits(2) UNKNOWN;
    secondstage = FALSE;
    s2fs1walk = FALSE;
    return AArch32.CreateFaultRecord(Fault_None, ipaddress, domain, level, acctype, iswrite,
                                     extflag, debugmoe, errortype, secondstage, s2fs1walk);

Library pseudocode for aarch32/translation/faults/AArch32.PermissionFault

// AArch32.PermissionFault()
// =========================

FaultRecord AArch32.PermissionFault(bits(40) ipaddress, bits(4) domain, integer level,
                                    AccType acctype, boolean iswrite, boolean secondstage,
                                    boolean s2fs1walk)

    extflag = bit UNKNOWN;
    debugmoe = bits(4) UNKNOWN;
    errortype = bits(2) UNKNOWN;
    return AArch32.CreateFaultRecord(Fault_Permission, ipaddress, domain, level, acctype, iswrite,
                                     extflag, debugmoe, errortype, secondstage, s2fs1walk);

Library pseudocode for aarch32/translation/faults/AArch32.TranslationFault

// AArch32.TranslationFault()
// ==========================

FaultRecord AArch32.TranslationFault(bits(40) ipaddress, bits(4) domain, integer level,
                                     AccType acctype, boolean iswrite, boolean secondstage,
                                     boolean s2fs1walk)

    extflag = bit UNKNOWN;
    debugmoe = bits(4) UNKNOWN;
    errortype = bits(2) UNKNOWN;
    return AArch32.CreateFaultRecord(Fault_Translation, ipaddress, domain, level, acctype, iswrite,
                                     extflag, debugmoe, errortype, secondstage, s2fs1walk);
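The fault constructors above differ only in the Fault_* type they record and in which arguments carry meaning; every field that does not apply is filled with an UNKNOWN placeholder before the common AArch32.CreateFaultRecord call. As an illustration of that pattern, here is a minimal Python sketch; the FaultRecord shape and helper names are illustrative, not the ASL definitions:

    from dataclasses import dataclass
    from enum import Enum, auto
    from typing import Optional

    class FaultType(Enum):
        NONE = auto()
        DOMAIN = auto()
        PERMISSION = auto()
        TRANSLATION = auto()

    @dataclass
    class FaultRecord:
        # Fields that are UNKNOWN in the pseudocode are modelled as None here.
        faulttype: FaultType
        ipaddress: Optional[int] = None
        domain: Optional[int] = None      # bits(4)
        level: Optional[int] = None
        write: bool = False
        secondstage: bool = False
        s2fs1walk: bool = False

    def domain_fault(domain: int, level: int, write: bool) -> FaultRecord:
        # Mirrors AArch32.DomainFault: only domain, level and write are meaningful.
        return FaultRecord(FaultType.DOMAIN, domain=domain, level=level, write=write)

    def no_fault() -> FaultRecord:
        # Mirrors AArch32.NoFault: a record that is_fault() treats as "no fault".
        return FaultRecord(FaultType.NONE)

    def is_fault(f: FaultRecord) -> bool:
        return f.faulttype != FaultType.NONE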

Library pseudocode for aarch32/translation/translation/AArch32.FirstStageTranslate

// AArch32.FirstStageTranslate()
// =============================
// Perform a stage 1 translation walk. The function used by Address Translation operations is
// similar except it uses the translation regime specified for the instruction.

AddressDescriptor AArch32.FirstStageTranslate(bits(32) vaddress, AccType acctype, boolean iswrite,
                                              boolean wasaligned, integer size)

    if PSTATE.EL == EL2 then
        s1_enabled = HSCTLR.M == '1';
    elsif EL2Enabled() then
        tge = (if ELUsingAArch32(EL2) then HCR.TGE else HCR_EL2.TGE);
        dc = (if ELUsingAArch32(EL2) then HCR.DC else HCR_EL2.DC);
        s1_enabled = tge == '0' && dc == '0' && SCTLR.M == '1';
    else
        s1_enabled = SCTLR.M == '1';

    ipaddress = bits(40) UNKNOWN;
    secondstage = FALSE;
    s2fs1walk = FALSE;

    if s1_enabled then                                 // First stage enabled
        use_long_descriptor_format = PSTATE.EL == EL2 || TTBCR.EAE == '1';
        if use_long_descriptor_format then
            S1 = AArch32.TranslationTableWalkLD(ipaddress, vaddress, acctype, iswrite,
                                                secondstage, s2fs1walk, size);
            permissioncheck = TRUE;  domaincheck = FALSE;
        else
            S1 = AArch32.TranslationTableWalkSD(vaddress, acctype, iswrite, size);
            permissioncheck = TRUE;  domaincheck = TRUE;
    else
        S1 = AArch32.TranslateAddressS1Off(vaddress, acctype, iswrite);
        permissioncheck = FALSE;  domaincheck = FALSE;

    if UsingAArch32() && HaveTrapLoadStoreMultipleDeviceExt() && AArch32.ExecutingLSMInstr() then
        if (S1.addrdesc.memattrs.type == MemType_Device &&
            S1.addrdesc.memattrs.device != DeviceType_GRE) then
            nTLSMD = if S1TranslationRegime() == EL2 then HSCTLR.nTLSMD else SCTLR.nTLSMD;
            if nTLSMD == '0' then
                S1.addrdesc.fault = AArch32.AlignmentFault(acctype, iswrite, secondstage);

    // Check for unaligned data accesses to Device memory
    if ((!wasaligned && acctype != AccType_IFETCH) || (acctype == AccType_DCZVA)) &&
        S1.addrdesc.memattrs.type == MemType_Device && !IsFault(S1.addrdesc) then
        S1.addrdesc.fault = AArch32.AlignmentFault(acctype, iswrite, secondstage);

    if !IsFault(S1.addrdesc) && domaincheck then
        (permissioncheck, abort) = AArch32.CheckDomain(S1.domain, vaddress, S1.level, acctype,
                                                       iswrite);
        S1.addrdesc.fault = abort;

    if !IsFault(S1.addrdesc) && permissioncheck then
        S1.addrdesc.fault = AArch32.CheckPermission(S1.perms, vaddress, S1.level, S1.domain,
                                                    S1.addrdesc.paddress.NS, acctype, iswrite);

    // Check for instruction fetches from Device memory not marked as execute-never. If there has
    // not been a Permission Fault then the memory is not marked execute-never.
    if (!IsFault(S1.addrdesc) && S1.addrdesc.memattrs.type == MemType_Device &&
        acctype == AccType_IFETCH) then
        S1.addrdesc = AArch32.InstructionDevice(S1.addrdesc, vaddress, ipaddress, S1.level,
                                                S1.domain, acctype, iswrite,
                                                secondstage, s2fs1walk);

    return S1.addrdesc;
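The opening of this function reduces to a small decision: is stage 1 enabled at all, and if so which descriptor format governs the walk. A hedged Python sketch of just that decision (register fields are passed in as plain integers; the function itself is illustrative and not part of the pseudocode library):

    def stage1_walk_choice(el_is_el2: bool, el2_enabled: bool,
                           sctlr_m: int, hsctlr_m: int,
                           hcr_tge: int, hcr_dc: int, ttbcr_eae: int) -> str:
        """Which stage 1 walk AArch32.FirstStageTranslate would start.

        Mirrors the enable logic: HSCTLR.M at EL2; otherwise SCTLR.M, but
        forced off when an enabled EL2 sets HCR.TGE or HCR.DC.
        """
        if el_is_el2:
            s1_enabled = hsctlr_m == 1
        elif el2_enabled:
            s1_enabled = hcr_tge == 0 and hcr_dc == 0 and sctlr_m == 1
        else:
            s1_enabled = sctlr_m == 1

        if not s1_enabled:
            return "S1-off (flat translation)"
        # Long-descriptor format at EL2, or when TTBCR.EAE is set.
        return "long-descriptor walk" if (el_is_el2 or ttbcr_eae == 1) else "short-descriptor walk"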

Library pseudocode for aarch32/translation/translation/AArch32.FullTranslate

// AArch32.FullTranslate()
// =======================
// Perform both stage 1 and stage 2 translation walks for the current translation regime. The
// function used by Address Translation operations is similar except it uses the translation
// regime specified for the instruction.

AddressDescriptor AArch32.FullTranslate(bits(32) vaddress, AccType acctype, boolean iswrite,
                                        boolean wasaligned, integer size)

    // First Stage Translation
    S1 = AArch32.FirstStageTranslate(vaddress, acctype, iswrite, wasaligned, size);
    if !IsFault(S1) && !(HaveNV2Ext() && acctype == AccType_NV2REGISTER) && HasS2Translation() then
        s2fs1walk = FALSE;
        result = AArch32.SecondStageTranslate(S1, vaddress, acctype, iswrite, wasaligned,
                                              s2fs1walk, size);
    else
        result = S1;

    return result;

Library pseudocode for aarch32/translation/translation/AArch32.SecondStageTranslate

// AArch32.SecondStageTranslate()
// ==============================
// Perform a stage 2 translation walk. The function used by Address Translation operations is
// similar except it uses the translation regime specified for the instruction.

AddressDescriptor AArch32.SecondStageTranslate(AddressDescriptor S1, bits(32) vaddress,
                                               AccType acctype, boolean iswrite,
                                               boolean wasaligned, boolean s2fs1walk, integer size)
    assert HasS2Translation();
    assert IsZero(S1.paddress.address<47:40>);

    hwupdatewalk = FALSE;
    if !ELUsingAArch32(EL2) then
        return AArch64.SecondStageTranslate(S1, ZeroExtend(vaddress, 64), acctype, iswrite,
                                            wasaligned, s2fs1walk, size, hwupdatewalk);

    s2_enabled = HCR.VM == '1' || HCR.DC == '1';
    secondstage = TRUE;

    if s2_enabled then                                 // Second stage enabled
        ipaddress = S1.paddress.address<39:0>;
        S2 = AArch32.TranslationTableWalkLD(ipaddress, vaddress, acctype, iswrite, secondstage,
                                            s2fs1walk, size);

        // Check for unaligned data accesses to Device memory
        if ((!wasaligned && acctype != AccType_IFETCH) || (acctype == AccType_DCZVA)) &&
            S2.addrdesc.memattrs.type == MemType_Device && !IsFault(S2.addrdesc) then
            S2.addrdesc.fault = AArch32.AlignmentFault(acctype, iswrite, secondstage);

        // Check for permissions on Stage2 translations
        if !IsFault(S2.addrdesc) then
            S2.addrdesc.fault = AArch32.CheckS2Permission(S2.perms, vaddress, ipaddress, S2.level,
                                                          acctype, iswrite, s2fs1walk);

        // Check for instruction fetches from Device memory not marked as execute-never. As there
        // has not been a Permission Fault then the memory is not marked execute-never.
        if (!s2fs1walk && !IsFault(S2.addrdesc) && S2.addrdesc.memattrs.type == MemType_Device &&
            acctype == AccType_IFETCH) then
            domain = bits(4) UNKNOWN;
            S2.addrdesc = AArch32.InstructionDevice(S2.addrdesc, vaddress, ipaddress, S2.level,
                                                    domain, acctype, iswrite, secondstage,
                                                    s2fs1walk);

        // Check for protected table walk
        if (s2fs1walk && !IsFault(S2.addrdesc) && HCR.PTW == '1' &&
            S2.addrdesc.memattrs.type == MemType_Device) then
            domain = bits(4) UNKNOWN;
            S2.addrdesc.fault = AArch32.PermissionFault(ipaddress, domain, S2.level, acctype,
                                                        iswrite, secondstage, s2fs1walk);

        result = CombineS1S2Desc(S1, S2.addrdesc);
    else
        result = S1;

    return result;
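One case worth calling out is the protected table walk: when HCR.PTW is set and a stage 1 walk access (s2fs1walk) maps to Device memory at stage 2, the walk is converted into a stage 2 Permission fault rather than performed. A one-function Python sketch of that predicate (illustrative, not spec pseudocode):

    def s2_protected_walk_fault(s2fs1walk: bool, already_faulted: bool,
                                hcr_ptw: int, s2_memtype_is_device: bool) -> bool:
        # True when AArch32.SecondStageTranslate would raise a stage 2
        # Permission fault for a protected table walk (HCR.PTW == 1).
        return (s2fs1walk and not already_faulted
                and hcr_ptw == 1 and s2_memtype_is_device)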

Library pseudocode for aarch32/translation/translation/AArch32.SecondStageWalk

// AArch32.SecondStageWalk()
// =========================
// Perform a stage 2 translation on a stage 1 translation page table walk access.

AddressDescriptor AArch32.SecondStageWalk(AddressDescriptor S1, bits(32) vaddress, AccType acctype,
                                          boolean iswrite, integer size)

    assert HasS2Translation();

    s2fs1walk = TRUE;
    wasaligned = TRUE;
    return AArch32.SecondStageTranslate(S1, vaddress, acctype, iswrite, wasaligned, s2fs1walk,
                                        size);

Library pseudocode for aarch32/translation/translation/AArch32.TranslateAddress

// AArch32.TranslateAddress()
// ==========================
// Main entry point for translating an address

AddressDescriptor AArch32.TranslateAddress(bits(32) vaddress, AccType acctype, boolean iswrite,
                                           boolean wasaligned, integer size)

    if !ELUsingAArch32(S1TranslationRegime()) then
        return AArch64.TranslateAddress(ZeroExtend(vaddress, 64), acctype, iswrite, wasaligned,
                                        size);
    result = AArch32.FullTranslate(vaddress, acctype, iswrite, wasaligned, size);

    if !(acctype IN {AccType_PTW, AccType_IC, AccType_AT}) && !IsFault(result) then
        result.fault = AArch32.CheckDebug(vaddress, acctype, iswrite, size);

    // Update virtual address for abort functions
    result.vaddress = ZeroExtend(vaddress);

    return result;

Library pseudocode for aarch32/translation/walk/AArch32.TranslationTableWalkLD

// AArch32.TranslationTableWalkLD()
// ================================
// Returns a result of a translation table walk using the Long-descriptor format
//
// Implementations might cache information from memory in any number of non-coherent TLB
// caching structures, and so avoid memory accesses that have been expressed in this
// pseudocode. The use of such TLBs is not expressed in this pseudocode.

TLBRecord AArch32.TranslationTableWalkLD(bits(40) ipaddress, bits(32) vaddress,
                                         AccType acctype, boolean iswrite, boolean secondstage,
                                         boolean s2fs1walk, integer size)
    if !secondstage then
        assert ELUsingAArch32(S1TranslationRegime());
    else
        assert HaveEL(EL2) && !IsSecure() && ELUsingAArch32(EL2) && HasS2Translation();

    TLBRecord         result;
    AddressDescriptor descaddr;
    bits(64)          baseregister;
    bits(40)          inputaddr;   // Input Address is 'vaddress' for stage 1, 'ipaddress' for stage 2
    domain = bits(4) UNKNOWN;

    descaddr.memattrs.type = MemType_Normal;

    // Fixed parameters for the page table walk:
    //  grainsize = Log2(Size of Table)         - Size of Table is 4KB in AArch32
    //  stride = Log2(Address per Level)        - Bits of address consumed at each level
    constant integer grainsize = 12;                    // Log2(4KB page size)
    constant integer stride = grainsize - 3;            // Log2(page size / 8 bytes)

    // Derived parameters for the page table walk:
    //  inputsize = Log2(Size of Input Address) - Input Address size in bits
    //  level = Level to start walk from
    // This means that the number of levels after start level = 3-level

    if !secondstage then
        // First stage translation
        inputaddr = ZeroExtend(vaddress);
        el = AArch32.AccessUsesEL(acctype);
        if el == EL2 then
            inputsize = 32 - UInt(HTCR.T0SZ);
            basefound = inputsize == 32 || IsZero(inputaddr<31:inputsize>);
            disabled = FALSE;
            baseregister = HTTBR;
            descaddr.memattrs = WalkAttrDecode(HTCR.SH0, HTCR.ORGN0, HTCR.IRGN0, secondstage);
            reversedescriptors = HSCTLR.EE == '1';
            lookupsecure = FALSE;
            singlepriv = TRUE;
            hierattrsdisabled = AArch32.HaveHPDExt() && HTCR.HPD == '1';
        else
            basefound = FALSE;
            disabled = FALSE;
            t0size = UInt(TTBCR.T0SZ);
            if t0size == 0 || IsZero(inputaddr<31:(32-t0size)>) then
                inputsize = 32 - t0size;
                basefound = TRUE;
                baseregister = TTBR0;
                descaddr.memattrs = WalkAttrDecode(TTBCR.SH0, TTBCR.ORGN0, TTBCR.IRGN0, secondstage);
                hierattrsdisabled = AArch32.HaveHPDExt() && TTBCR.T2E == '1' && TTBCR2.HPD0 == '1';
            t1size = UInt(TTBCR.T1SZ);
            if (t1size == 0 && !basefound) || (t1size > 0 && IsOnes(inputaddr<31:(32-t1size)>)) then
                inputsize = 32 - t1size;
                basefound = TRUE;
                baseregister = TTBR1;
                descaddr.memattrs = WalkAttrDecode(TTBCR.SH1, TTBCR.ORGN1, TTBCR.IRGN1, secondstage);
                hierattrsdisabled = AArch32.HaveHPDExt() && TTBCR.T2E == '1' && TTBCR2.HPD1 == '1';
            reversedescriptors = SCTLR.EE == '1';
            lookupsecure = IsSecure();
            singlepriv = FALSE;
        // The starting level is the number of strides needed to consume the input address
        level = 4 - RoundUp(Real(inputsize - grainsize) / Real(stride));
    else
        // Second stage translation
        inputaddr = ipaddress;
        inputsize = 32 - SInt(VTCR.T0SZ);
        // VTCR.S must match VTCR.T0SZ[3]
        if VTCR.S != VTCR.T0SZ<3> then
            (-, inputsize) = ConstrainUnpredictableInteger(32-7, 32+8, Unpredictable_RESVTCRS);
        basefound = inputsize == 40 || IsZero(inputaddr<39:inputsize>);
        disabled = FALSE;
        descaddr.memattrs = WalkAttrDecode(VTCR.IRGN0, VTCR.ORGN0, VTCR.SH0, secondstage);
        reversedescriptors = HSCTLR.EE == '1';
        singlepriv = TRUE;
        lookupsecure = FALSE;
        baseregister = VTTBR;
        startlevel = UInt(VTCR.SL0);
        level = 2 - startlevel;
        if level <= 0 then basefound = FALSE;

        // Number of entries in the starting level table =
        //     (Size of Input Address)/((Address per level)^(Num levels remaining)*(Size of Table))
        startsizecheck = inputsize - ((3 - level)*stride + grainsize); // Log2(Num of entries)

        // Check for starting level table with fewer than 2 entries or longer than 16 pages.
        // Lower bound check is:  startsizecheck < Log2(2 entries)
        //   That is, VTCR.SL0 == '00' and SInt(VTCR.T0SZ) > 1, Size of Input Address < 2^31 bytes
        // Upper bound check is:  startsizecheck > Log2(pagesize/8*16)
        //   That is, VTCR.SL0 == '01' and SInt(VTCR.T0SZ) < -2, Size of Input Address > 2^34 bytes
        if startsizecheck < 1 || startsizecheck > stride + 4 then basefound = FALSE;

    if !basefound || disabled then
        level = 1;            // AArch64 reports this as a level 0 fault
        result.addrdesc.fault = AArch32.TranslationFault(ipaddress, domain, level, acctype,
                                                         iswrite, secondstage, s2fs1walk);
        return result;

    if !IsZero(baseregister<47:40>) then
        level = 0;
        result.addrdesc.fault = AArch32.AddressSizeFault(ipaddress, domain, level, acctype,
                                                         iswrite, secondstage, s2fs1walk);
        return result;

    // Bottom bound of the Base address is:
    //     Log2(8 bytes per entry)+Log2(Number of entries in starting level table)
    // Number of entries in starting level table =
    //     (Size of Input Address)/((Address per level)^(Num levels remaining)*(Size of Table))
    baselowerbound = 3 + inputsize - ((3-level)*stride + grainsize);    // Log2(Num of entries*8)
    baseaddress = baseregister<39:baselowerbound>:Zeros(baselowerbound);

    ns_table = if lookupsecure then '0' else '1';
    ap_table = '00';
    xn_table = '0';
    pxn_table = '0';

    addrselecttop = inputsize - 1;

    repeat
        addrselectbottom = (3-level)*stride + grainsize;

        bits(40) index = ZeroExtend(inputaddr<addrselecttop:addrselectbottom>:'000');
        descaddr.paddress.address = ZeroExtend(baseaddress OR index);
        descaddr.paddress.NS = ns_table;

        // If there are two stages of translation, then the first stage table walk addresses
        // are themselves subject to translation
        if secondstage || !HasS2Translation() || (HaveNV2Ext() && acctype == AccType_NV2REGISTER) then
            descaddr2 = descaddr;
        else
            descaddr2 = AArch32.SecondStageWalk(descaddr, vaddress, acctype, iswrite, 8);
            // Check for a fault on the stage 2 walk
            if IsFault(descaddr2) then
                result.addrdesc.fault = descaddr2.fault;
                return result;

        // Update virtual address for abort functions
        descaddr2.vaddress = ZeroExtend(vaddress);

        accdesc = CreateAccessDescriptorPTW(acctype, secondstage, s2fs1walk, level);
        desc = _Mem[descaddr2, 8, accdesc];

        if reversedescriptors then desc = BigEndianReverse(desc);

        if desc<0> == '0' || (desc<1:0> == '01' && level == 3) then
            // Fault (00), Reserved (10), or Block (01) at level 3.
            result.addrdesc.fault = AArch32.TranslationFault(ipaddress, domain, level, acctype,
                                                             iswrite, secondstage, s2fs1walk);
            return result;

        // Valid Block, Page, or Table entry
        if desc<1:0> == '01' || level == 3 then                 // Block (01) or Page (11)
            blocktranslate = TRUE;
        else                                                    // Table (11)
            if !IsZero(desc<47:40>) then
                result.addrdesc.fault = AArch32.AddressSizeFault(ipaddress, domain, level,
                                                                 acctype, iswrite, secondstage,
                                                                 s2fs1walk);
                return result;

            baseaddress = desc<39:grainsize>:Zeros(grainsize);
            if !secondstage then
                // Unpack the upper and lower table attributes
                ns_table    = ns_table    OR desc<63>;
            if !secondstage && !hierattrsdisabled then
                ap_table<1> = ap_table<1> OR desc<62>;          // read-only

                xn_table    = xn_table    OR desc<60>;
                // pxn_table and ap_table[0] apply only in EL1&0 translation regimes
                if !singlepriv then
                    pxn_table   = pxn_table   OR desc<59>;
                    ap_table<0> = ap_table<0> OR desc<61>;      // privileged

            level = level + 1;
            addrselecttop = addrselectbottom - 1;
            blocktranslate = FALSE;
    until blocktranslate;

    // Check the output address is inside the supported range
    if !IsZero(desc<47:40>) then
        result.addrdesc.fault = AArch32.AddressSizeFault(ipaddress, domain, level, acctype,
                                                         iswrite, secondstage, s2fs1walk);
        return result;

    // Unpack the descriptor into address and upper and lower block attributes
    outputaddress = desc<39:addrselectbottom>:inputaddr<addrselectbottom-1:0>;

    // Check the access flag
    if desc<10> == '0' then
        result.addrdesc.fault = AArch32.AccessFlagFault(ipaddress, domain, level, acctype,
                                                        iswrite, secondstage, s2fs1walk);
        return result;
    xn = desc<54>;                    // Bit[54] of the block/page descriptor holds UXN
    pxn = desc<53>;                   // Bit[53] of the block/page descriptor holds PXN
    ap = desc<7:6>:'1';               // Bits[7:6] of the block/page descriptor hold AP[2:1]
    contiguousbit = desc<52>;
    nG = desc<11>;
    sh = desc<9:8>;
    memattr = desc<5:2>;              // AttrIndx and NS bit in stage 1

    result.domain = bits(4) UNKNOWN;  // Domains not used
    result.level = level;
    result.blocksize = 2^((3-level)*stride + grainsize);

    // Stage 1 translation regimes also inherit attributes from the tables
    if !secondstage then
        result.perms.xn    = xn OR xn_table;
        result.perms.ap<2> = ap<2> OR ap_table<1>;              // Force read-only
        // PXN, nG and AP[1] apply only in EL1&0 stage 1 translation regimes
        if !singlepriv then
            result.perms.ap<1> = ap<1> AND NOT(ap_table<0>);    // Force privileged only
            result.perms.pxn   = pxn OR pxn_table;
            // Pages from Non-secure tables are marked non-global in Secure EL1&0
            if IsSecure() then
                result.nG = nG OR ns_table;
            else
                result.nG = nG;
        else
            result.perms.ap<1> = '1';
            result.perms.pxn   = '0';
            result.nG          = '0';
        result.GP = desc<50>;         // Stage 1 block or pages might be guarded
        result.perms.ap<0> = '1';
        result.addrdesc.memattrs = AArch32.S1AttrDecode(sh, memattr<2:0>, acctype);
        result.addrdesc.paddress.NS = memattr<3> OR ns_table;
    else
        result.perms.ap<2:1> = ap<2:1>;
        result.perms.ap<0>   = '1';
        result.perms.xn      = xn;
        if HaveExtendedExecuteNeverExt() then result.perms.xxn = desc<53>;
        result.perms.pxn     = '0';
        result.nG            = '0';
        if s2fs1walk then
            result.addrdesc.memattrs = S2AttrDecode(sh, memattr, AccType_PTW);
        else
            result.addrdesc.memattrs = S2AttrDecode(sh, memattr, acctype);
        result.addrdesc.paddress.NS = '1';

    result.addrdesc.paddress.address = ZeroExtend(outputaddress);
    result.addrdesc.fault = AArch32.NoFault();
    result.contiguous = contiguousbit == '1';
    if HaveCommonNotPrivateTransExt() then result.CnP = baseregister<0>;

    return result;
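With grainsize = 12 and stride = 9, the input-address size and starting level follow directly from T0SZ: for T0SZ = 0, inputsize = 32 and level = 4 - RoundUp(20/9) = 1, so the walk starts at level 1 with a 32-byte-aligned table base. A small worked sketch in Python (an illustrative helper, not part of the spec):

    import math

    GRAINSIZE = 12          # Log2(4KB page size)
    STRIDE = GRAINSIZE - 3  # 9 address bits consumed per level

    def ld_walk_geometry(t0sz: int):
        """Starting level and table-base alignment for a stage 1
        long-descriptor walk, as derived in AArch32.TranslationTableWalkLD."""
        inputsize = 32 - t0sz
        # The starting level is the number of strides needed to consume the input address.
        level = 4 - math.ceil((inputsize - GRAINSIZE) / STRIDE)
        # Log2(number of entries * 8 bytes): low bits of the base register that must be zero.
        baselowerbound = 3 + inputsize - ((3 - level) * STRIDE + GRAINSIZE)
        return inputsize, level, baselowerbound

    # T0SZ = 0: 32-bit input address, walk starts at level 1, base aligned to 2^5 = 32 bytes.
    assert ld_walk_geometry(0) == (32, 1, 5)
    # T0SZ = 4: 28-bit input address, walk starts at level 2 with a 1KB-aligned table.
    assert ld_walk_geometry(4) == (28, 2, 10)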

Library pseudocode for aarch32/translation/walk/AArch32.TranslationTableWalkSD

// AArch32.TranslationTableWalkSD()
// ================================
// Returns a result of a translation table walk using the Short-descriptor format
//
// Implementations might cache information from memory in any number of non-coherent TLB
// caching structures, and so avoid memory accesses that have been expressed in this
// pseudocode. The use of such TLBs is not expressed in this pseudocode.

TLBRecord AArch32.TranslationTableWalkSD(bits(32) vaddress, AccType acctype, boolean iswrite,
                                         integer size)
    assert ELUsingAArch32(S1TranslationRegime());

    // This is only called when address translation is enabled
    TLBRecord         result;
    AddressDescriptor l1descaddr;
    AddressDescriptor l2descaddr;
    bits(40)          outputaddress;

    // Variables for Abort functions
    ipaddress = bits(40) UNKNOWN;
    secondstage = FALSE;
    s2fs1walk = FALSE;
    NS = bit UNKNOWN;

    // Default setting of the domain
    domain = bits(4) UNKNOWN;

    // Determine correct Translation Table Base Register to use.
    bits(64) ttbr;
    n = UInt(TTBCR.N);
    if n == 0 || IsZero(vaddress<31:(32-n)>) then
        ttbr = TTBR0;
        disabled = (TTBCR.PD0 == '1');
    else
        ttbr = TTBR1;
        disabled = (TTBCR.PD1 == '1');
        n = 0;  // TTBR1 translation always works like N=0 TTBR0 translation

    // Check this Translation Table Base Register is not disabled.
    if disabled then
        level = 1;
        result.addrdesc.fault = AArch32.TranslationFault(ipaddress, domain, level, acctype,
                                                         iswrite, secondstage, s2fs1walk);
        return result;

    // Obtain descriptor from initial lookup.
    l1descaddr.paddress.address = ZeroExtend(ttbr<31:14-n>:vaddress<31-n:20>:'00');
    l1descaddr.paddress.NS = if IsSecure() then '0' else '1';
    IRGN = ttbr<0>:ttbr<6>;             // TTBR.IRGN
    RGN = ttbr<4:3>;                    // TTBR.RGN
    SH = ttbr<1>:ttbr<5>;               // TTBR.S:TTBR.NOS
    l1descaddr.memattrs = WalkAttrDecode(SH, RGN, IRGN, secondstage);

    if !HaveEL(EL2) || (IsSecure() && !IsSecureEL2Enabled()) then
        // if only 1 stage of translation
        l1descaddr2 = l1descaddr;
    else
        l1descaddr2 = AArch32.SecondStageWalk(l1descaddr, vaddress, acctype, iswrite, 4);
        // Check for a fault on the stage 2 walk
        if IsFault(l1descaddr2) then
            result.addrdesc.fault = l1descaddr2.fault;
            return result;

    // Update virtual address for abort functions
    l1descaddr2.vaddress = ZeroExtend(vaddress);

    accdesc = CreateAccessDescriptorPTW(acctype, secondstage, s2fs1walk, level);
    l1desc = _Mem[l1descaddr2, 4, accdesc];

    if SCTLR.EE == '1' then l1desc = BigEndianReverse(l1desc);

    // Process descriptor from initial lookup.
    case l1desc<1:0> of
        when '00'                                           // Fault, Reserved
            level = 1;
            result.addrdesc.fault = AArch32.TranslationFault(ipaddress, domain, level, acctype,
                                                             iswrite, secondstage, s2fs1walk);
            return result;

        when '01'                                           // Large page or Small page
            domain = l1desc<8:5>;
            level = 2;
            pxn = l1desc<2>;
            NS = l1desc<3>;

            // Obtain descriptor from level 2 lookup.
            l2descaddr.paddress.address = ZeroExtend(l1desc<31:10>:vaddress<19:12>:'00');
            l2descaddr.paddress.NS = if IsSecure() then '0' else '1';
            l2descaddr.memattrs = l1descaddr.memattrs;

            if !HaveEL(EL2) || (IsSecure() && !IsSecureEL2Enabled()) then
                // if only 1 stage of translation
                l2descaddr2 = l2descaddr;
            else
                l2descaddr2 = AArch32.SecondStageWalk(l2descaddr, vaddress, acctype, iswrite, 4);
                // Check for a fault on the stage 2 walk
                if IsFault(l2descaddr2) then
                    result.addrdesc.fault = l2descaddr2.fault;
                    return result;

            // Update virtual address for abort functions
            l2descaddr2.vaddress = ZeroExtend(vaddress);

            accdesc = CreateAccessDescriptorPTW(acctype, secondstage, s2fs1walk, level);
            l2desc = _Mem[l2descaddr2, 4, accdesc];

            if SCTLR.EE == '1' then l2desc = BigEndianReverse(l2desc);

            // Process descriptor from level 2 lookup.
            if l2desc<1:0> == '00' then
                result.addrdesc.fault = AArch32.TranslationFault(ipaddress, domain, level,
                                                                 acctype, iswrite, secondstage,
                                                                 s2fs1walk);
                return result;

            nG = l2desc<11>;
            S = l2desc<10>;
            ap = l2desc<9,5:4>;

            if SCTLR.AFE == '1' && l2desc<4> == '0' then
                // ARMv8 VMSAv8-32 does not support hardware management of the Access flag.
                result.addrdesc.fault = AArch32.AccessFlagFault(ipaddress, domain, level,
                                                                acctype, iswrite, secondstage,
                                                                s2fs1walk);
                return result;

            if l2desc<1> == '0' then                        // Large page
                xn = l2desc<15>;
                tex = l2desc<14:12>;
                c = l2desc<3>;
                b = l2desc<2>;
                blocksize = 64;
                outputaddress = ZeroExtend(l2desc<31:16>:vaddress<15:0>);
            else                                            // Small page
                tex = l2desc<8:6>;
                c = l2desc<3>;
                b = l2desc<2>;
                xn = l2desc<0>;
                blocksize = 4;
                outputaddress = ZeroExtend(l2desc<31:12>:vaddress<11:0>);

        when '1x'                                           // Section or Supersection
            NS = l1desc<19>;
            nG = l1desc<17>;
            S = l1desc<16>;
            ap = l1desc<15,11:10>;
            tex = l1desc<14:12>;
            xn = l1desc<4>;
            c = l1desc<3>;
            b = l1desc<2>;
            pxn = l1desc<0>;
            level = 1;

            if SCTLR.AFE == '1' && l1desc<10> == '0' then
                // ARMv8 VMSAv8-32 does not support hardware management of the Access flag.
                result.addrdesc.fault = AArch32.AccessFlagFault(ipaddress, domain, level,
                                                                acctype, iswrite, secondstage,
                                                                s2fs1walk);
                return result;

            if l1desc<18> == '0' then                       // Section
                domain = l1desc<8:5>;
                blocksize = 1024;
                outputaddress = ZeroExtend(l1desc<31:20>:vaddress<19:0>);
            else                                            // Supersection
                domain = '0000';
                blocksize = 16384;
                outputaddress = l1desc<8:5>:l1desc<23:20>:l1desc<31:24>:vaddress<23:0>;

    // Decode the TEX, C, B and S bits to produce the TLBRecord's memory attributes
    if SCTLR.TRE == '0' then
        if RemapRegsHaveResetValues() then
            result.addrdesc.memattrs = AArch32.DefaultTEXDecode(tex, c, b, S, acctype);
        else
            result.addrdesc.memattrs = MemoryAttributes IMPLEMENTATION_DEFINED;
    else
        result.addrdesc.memattrs = AArch32.RemappedTEXDecode(tex, c, b, S, acctype);

    // Set the rest of the TLBRecord, try to add it to the TLB, and return it.
    result.perms.ap = ap;
    result.perms.xn = xn;
    result.perms.pxn = pxn;
    result.nG = nG;
    result.domain = domain;
    result.level = level;
    result.blocksize = blocksize;
    result.addrdesc.paddress.address = ZeroExtend(outputaddress);
    result.addrdesc.paddress.NS = if IsSecure() then NS else '1';
    result.addrdesc.fault = AArch32.NoFault();

    return result;
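The level 1 lookup address is formed by concatenating the high bits of the TTBR with VA<31-N:20>: with TTBCR.N = 0 the level 1 table is 16KB and indexed by VA<31:20>. A hedged Python sketch of that address calculation (an illustrative helper, not spec pseudocode):

    def sd_l1_descriptor_address(ttbr: int, vaddress: int, n: int) -> int:
        # Level 1 descriptor address: ttbr<31:14-n> : vaddress<31-n:20> : '00',
        # as formed in AArch32.TranslationTableWalkSD.
        base_shift = 14 - n                                # table base alignment in bits
        table_base = (ttbr & 0xFFFFFFFF) >> base_shift << base_shift
        index = (vaddress >> 20) & ((1 << (12 - n)) - 1)   # vaddress<31-n:20>
        return table_base | (index << 2)

    # N == 0: 16KB level 1 table; VA 0x40100000 indexes entry 0x401, so the
    # descriptor sits at offset 0x1004 from the table base.
    assert sd_l1_descriptor_address(0x80000000, 0x40100000, 0) == 0x80001004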

Library pseudocode for aarch32/translation/walk/RemapRegsHaveResetValues

boolean RemapRegsHaveResetValues();

Library pseudocode for aarch64/debug/breakpoint/AArch64.BreakpointMatch

// AArch64.BreakpointMatch()
// =========================
// Breakpoint matching in an AArch64 translation regime.

boolean AArch64.BreakpointMatch(integer n, bits(64) vaddress, AccType acctype, integer size)
    assert !ELUsingAArch32(S1TranslationRegime());
    assert n <= UInt(ID_AA64DFR0_EL1.BRPs);

    enabled = DBGBCR_EL1[n].E == '1';
    ispriv = PSTATE.EL != EL0;
    linked = DBGBCR_EL1[n].BT == '0x01';
    isbreakpnt = TRUE;
    linked_to = FALSE;

    state_match = AArch64.StateMatch(DBGBCR_EL1[n].SSC, DBGBCR_EL1[n].HMC, DBGBCR_EL1[n].PMC,
                                     linked, DBGBCR_EL1[n].LBN, isbreakpnt, acctype, ispriv);
    value_match = AArch64.BreakpointValueMatch(n, vaddress, linked_to);

    if HaveAnyAArch32() && size == 4 then                 // Check second halfword
        // If the breakpoint address and BAS of an Address breakpoint match the address of the
        // second halfword of an instruction, but not the address of the first halfword, it is
        // CONSTRAINED UNPREDICTABLE whether or not this breakpoint generates a Breakpoint debug
        // event.
        match_i = AArch64.BreakpointValueMatch(n, vaddress + 2, linked_to);
        if !value_match && match_i then
            value_match = ConstrainUnpredictableBool(Unpredictable_BPMATCHHALF);

    if vaddress<1> == '1' && DBGBCR_EL1[n].BAS == '1111' then
        // The above notwithstanding, if DBGBCR_EL1[n].BAS == '1111', then it is CONSTRAINED
        // UNPREDICTABLE whether or not a Breakpoint debug event is generated for an instruction
        // at the address DBGBVR_EL1[n]+2.
        if value_match then
            value_match = ConstrainUnpredictableBool(Unpredictable_BPMATCHHALF);

    match = value_match && state_match && enabled;

    return match;

Library pseudocode for aarch64/debug/breakpoint/AArch64.BreakpointValueMatch

// AArch64.BreakpointValueMatch()
// ==============================

boolean AArch64.BreakpointValueMatch(integer n, bits(64) vaddress, boolean linked_to)

    // "n" is the identity of the breakpoint unit to match against.
    // "vaddress" is the current instruction address, ignored if linked_to is TRUE and for Context
    //   matching breakpoints.
    // "linked_to" is TRUE if this is a call from StateMatch for linking.

    // If a non-existent breakpoint then it is CONSTRAINED UNPREDICTABLE whether this gives
    // no match or the breakpoint is mapped to another UNKNOWN implemented breakpoint.
    if n > UInt(ID_AA64DFR0_EL1.BRPs) then
        (c, n) = ConstrainUnpredictableInteger(0, UInt(ID_AA64DFR0_EL1.BRPs),
                                               Unpredictable_BPNOTIMPL);
        assert c IN {Constraint_DISABLED, Constraint_UNKNOWN};
        if c == Constraint_DISABLED then return FALSE;

    // If this breakpoint is not enabled, it cannot generate a match. (This could also happen on a
    // call from StateMatch for linking).
    if DBGBCR_EL1[n].E == '0' then return FALSE;

    context_aware = (n >= UInt(ID_AA64DFR0_EL1.BRPs) - UInt(ID_AA64DFR0_EL1.CTX_CMPs));

    // If BT is set to a reserved type, behaves either as disabled or as a not-reserved type.
    type = DBGBCR_EL1[n].BT;

    if ((type IN {'011x','11xx'} && !HaveVirtHostExt()) ||              // Context matching
          type == '010x' ||                                             // Reserved
          (type != '0x0x' && !context_aware) ||                         // Context matching
          (type == '1xxx' && !HaveEL(EL2))) then                        // EL2 extension
        (c, type) = ConstrainUnpredictableBits(Unpredictable_RESBPTYPE);
        assert c IN {Constraint_DISABLED, Constraint_UNKNOWN};
        if c == Constraint_DISABLED then return FALSE;
        // Otherwise the value returned by ConstrainUnpredictableBits must be a not-reserved value

    // Determine what to compare against.
    match_addr = (type == '0x0x');
    match_vmid = (type == '10xx');
    match_cid  = (type == '001x');
    match_cid1 = (type IN { '101x', 'x11x'});
    match_cid2 = (type == '11xx');
    linked     = (type == 'xxx1');

    // If this is a call from StateMatch, return FALSE if the breakpoint is not programmed for a
    // VMID and/or context ID match, or if not context-aware. The above assertions mean that the
    // code can just test for match_addr == TRUE to confirm all these things.
    if linked_to && (!linked || match_addr) then return FALSE;

    // If called from BreakpointMatch return FALSE for Linked context ID and/or VMID matches.
    if !linked_to && linked && !match_addr then return FALSE;

    // Do the comparison.
    if match_addr then
        byte = UInt(vaddress<1:0>);
        if HaveAnyAArch32() then
            // T32 instructions can be executed at EL0 in an AArch64 translation regime.
            assert byte IN {0,2};                 // "vaddress" is halfword aligned
            byte_select_match = (DBGBCR_EL1[n].BAS<byte> == '1');
        else
            assert byte == 0;                     // "vaddress" is word aligned
            byte_select_match = TRUE;             // DBGBCR_EL1[n].BAS<byte> is RES1
        top = AddrTop(vaddress, TRUE, PSTATE.EL);
        BVR_match = vaddress<top:2> == DBGBVR_EL1[n]<top:2> && byte_select_match;
    elsif match_cid then
        if IsInHost() then
            BVR_match = (CONTEXTIDR_EL2 == DBGBVR_EL1[n]<31:0>);
        else
            BVR_match = (PSTATE.EL IN {EL0,EL1} && CONTEXTIDR_EL1 == DBGBVR_EL1[n]<31:0>);
    elsif match_cid1 then
        BVR_match = (PSTATE.EL IN {EL0,EL1} && !IsInHost() &&
                     CONTEXTIDR_EL1 == DBGBVR_EL1[n]<31:0>);
    if match_vmid then
        if !Have16bitVMID() || VTCR_EL2.VS == '0' then
            vmid = ZeroExtend(VTTBR_EL2.VMID<7:0>, 16);
            bvr_vmid = ZeroExtend(DBGBVR_EL1[n]<39:32>, 16);
        else
            vmid = VTTBR_EL2.VMID;
            bvr_vmid = DBGBVR_EL1[n]<47:32>;
        BXVR_match = (EL2Enabled() && PSTATE.EL IN {EL0,EL1} && !IsInHost() && vmid == bvr_vmid);
    elsif match_cid2 then
        BXVR_match = (!IsSecure() && HaveVirtHostExt() &&
                      DBGBVR_EL1[n]<63:32> == CONTEXTIDR_EL2);

    bvr_match_valid = (match_addr || match_cid || match_cid1);
    bxvr_match_valid = (match_vmid || match_cid2);

    match = (!bxvr_match_valid || BXVR_match) && (!bvr_match_valid || BVR_match);

    return match;
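For address breakpoints, the low two address bits select a BAS bit, so a halfword-aligned T32 address matches only if its corresponding BAS bit is set. A short Python sketch of the byte-select test (illustrative, not spec pseudocode):

    def bas_byte_select_match(bas: int, vaddress: int, have_any_aarch32: bool) -> bool:
        # Byte-select test from AArch64.BreakpointValueMatch: BAS<vaddress<1:0>>.
        # 'bas' is the 4-bit DBGBCR_EL1[n].BAS field.
        byte = vaddress & 0x3
        if have_any_aarch32:
            assert byte in (0, 2)        # instruction addresses are halfword aligned
            return ((bas >> byte) & 1) == 1
        assert byte == 0                 # AArch64-only: word aligned, BAS bits are RES1
        return True

    # BAS = '1100' matches only the second halfword of an aligned word.
    assert not bas_byte_select_match(0b1100, 0x1000, True)
    assert bas_byte_select_match(0b1100, 0x1002, True)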

Library pseudocode for aarch64/debug/breakpoint/AArch64.StateMatch

// AArch64.StateMatch()
// ====================
// Determine whether a breakpoint or watchpoint is enabled in the current mode and state.

boolean AArch64.StateMatch(bits(2) SSC, bit HMC, bits(2) PxC, boolean linked, bits(4) LBN,
                           boolean isbreakpnt, AccType acctype, boolean ispriv)
    // "SSC", "HMC", "PxC" are the control fields from the DBGBCR[n] or DBGWCR[n] register.
    // "linked" is TRUE if this is a linked breakpoint/watchpoint type.
    // "LBN" is the linked breakpoint number from the DBGBCR[n] or DBGWCR[n] register.
    // "isbreakpnt" is TRUE for breakpoints, FALSE for watchpoints.
    // "ispriv" is valid for watchpoints, and selects between privileged and unprivileged accesses.

    // If parameters are set to a reserved type, behaves as either disabled or a defined type
    if ((HMC:SSC:PxC) IN {'011xx','100x0','101x0','11010','11101','1111x'} ||      // Reserved
         (HMC == '0' && PxC == '00' && (!isbreakpnt || !HaveAArch32EL(EL1))) ||    // Usr/Svc/Sys
         (SSC IN {'01','10'} && !HaveEL(EL3)) ||                                   // No EL3
         (HMC:SSC != '000' && HMC:SSC != '111' && !HaveEL(EL3) && !HaveEL(EL2)) || // No EL3/EL2
         (HMC:SSC:PxC == '11100' && !HaveEL(EL2))) then                            // No EL2
        (c, <HMC,SSC,PxC>) = ConstrainUnpredictableBits(Unpredictable_RESBPWPCTRL);
        assert c IN {Constraint_DISABLED, Constraint_UNKNOWN};
        if c == Constraint_DISABLED then return FALSE;
        // Otherwise the value returned by ConstrainUnpredictableBits must be a not-reserved value

    EL3_match = HaveEL(EL3) && HMC == '1' && SSC<0> == '0';
    EL2_match = HaveEL(EL2) && HMC == '1';
    EL1_match = PxC<0> == '1';
    EL0_match = PxC<1> == '1';

    el = if HaveNV2Ext() && acctype == AccType_NV2REGISTER then EL2 else PSTATE.EL;
    if !ispriv && !isbreakpnt then
        priv_match = EL0_match;
    else
        case el of
            when EL3  priv_match = EL3_match;
            when EL2  priv_match = EL2_match;
            when EL1  priv_match = EL1_match;
            when EL0  priv_match = EL0_match;

    case SSC of
        when '00'  security_state_match = TRUE;           // Both
        when '01'  security_state_match = !IsSecure();    // Non-secure only
        when '10'  security_state_match = IsSecure();     // Secure only
        when '11'  security_state_match = TRUE;           // Both

    if linked then
        // "LBN" must be an enabled context-aware breakpoint unit. If it is not context-aware then
        // it is CONSTRAINED UNPREDICTABLE whether this gives no match, or LBN is mapped to some
        // UNKNOWN breakpoint that is context-aware.
        lbn = UInt(LBN);
        first_ctx_cmp = (UInt(ID_AA64DFR0_EL1.BRPs) - UInt(ID_AA64DFR0_EL1.CTX_CMPs));
        last_ctx_cmp = UInt(ID_AA64DFR0_EL1.BRPs);
        if (lbn < first_ctx_cmp || lbn > last_ctx_cmp) then
            (c, lbn) = ConstrainUnpredictableInteger(first_ctx_cmp, last_ctx_cmp,
                                                     Unpredictable_BPNOTCTXCMP);
            assert c IN {Constraint_DISABLED, Constraint_NONE, Constraint_UNKNOWN};
            case c of
                when Constraint_DISABLED  return FALSE;      // Disabled
                when Constraint_NONE      linked = FALSE;    // No linking
                // Otherwise ConstrainUnpredictableInteger returned a context-aware breakpoint

    if linked then
        vaddress = bits(64) UNKNOWN;
        linked_to = TRUE;
        linked_match = AArch64.BreakpointValueMatch(lbn, vaddress, linked_to);

    return priv_match && security_state_match && (!linked || linked_match);
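The SSC decode is a four-entry truth table over the current security state. A one-line Python rendering of it (illustrative only):

    def security_state_match(ssc: int, is_secure: bool) -> bool:
        # SSC decoding from AArch64.StateMatch: 00/11 match both security
        # states, 01 matches Non-secure only, 10 matches Secure only.
        return {0b00: True,
                0b01: not is_secure,
                0b10: is_secure,
                0b11: True}[ssc]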

Library pseudocode for aarch64/debug/enables/AArch64.GenerateDebugExceptions

// AArch64.GenerateDebugExceptions()
// =================================

boolean AArch64.GenerateDebugExceptions()
    return AArch64.GenerateDebugExceptionsFrom(PSTATE.EL, IsSecure(), PSTATE.D);

Library pseudocode for aarch64/debug/enables/AArch64.GenerateDebugExceptionsFrom

// AArch64.GenerateDebugExceptionsFrom()
// =====================================

boolean AArch64.GenerateDebugExceptionsFrom(bits(2) from, boolean secure, bit mask)
    if OSLSR_EL1.OSLK == '1' || DoubleLockStatus() || Halted() then
        return FALSE;

    route_to_el2 = HaveEL(EL2) && !secure && (HCR_EL2.TGE == '1' || MDCR_EL2.TDE == '1');
    target = (if route_to_el2 then EL2 else EL1);

    enabled = !HaveEL(EL3) || !secure || MDCR_EL3.SDD == '0';

    if from == target then
        enabled = enabled && MDSCR_EL1.KDE == '1' && mask == '0';
    else
        enabled = enabled && UInt(target) > UInt(from);

    return enabled;
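The enable computation distinguishes debug events generated at the target Exception level itself (which additionally require MDSCR_EL1.KDE set and an unmasked PSTATE.D) from events generated at a lower level, which are enabled whenever the target is strictly higher. An illustrative C sketch, with register values passed in as plain parameters (a hypothetical harness, not the specification):

#include <stdbool.h>

/* Sketch of the enable decision in AArch64.GenerateDebugExceptionsFrom.
 * "sdd_blocks" stands for the Secure-state MDCR_EL3.SDD gate. */
static bool debug_exceptions_enabled(int from_el, int target_el,
                                     bool sdd_blocks, bool kde, bool masked)
{
    bool enabled = !sdd_blocks;
    if (from_el == target_el)
        enabled = enabled && kde && !masked; /* MDSCR_EL1.KDE == '1' && mask == '0' */
    else
        enabled = enabled && (target_el > from_el);
    return enabled;
}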

Library pseudocode for aarch64/debug/pmu/AArch64.CheckForPMUOverflow

// AArch64.CheckForPMUOverflow()
// =============================
// Signal Performance Monitors overflow IRQ and CTI overflow events

boolean AArch64.CheckForPMUOverflow()
    pmuirq = PMCR_EL0.E == '1' && PMINTENSET_EL1<31> == '1' && PMOVSSET_EL0<31> == '1';
    for n = 0 to UInt(PMCR_EL0.N) - 1
        if HaveEL(EL2) then
            E = (if n < UInt(MDCR_EL2.HPMN) then PMCR_EL0.E else MDCR_EL2.HPME);
        else
            E = PMCR_EL0.E;
        if E == '1' && PMINTENSET_EL1<n> == '1' && PMOVSSET_EL0<n> == '1' then pmuirq = TRUE;

    SetInterruptRequestLevel(InterruptID_PMUIRQ, if pmuirq then HIGH else LOW);

    CTI_SetEventLevel(CrossTriggerIn_PMUOverflow, if pmuirq then HIGH else LOW);

    // The request remains set until the condition is cleared. (For example, an interrupt handler
    // or cross-triggered event handler clears the overflow status flag by writing to PMOVSCLR_EL0.)

    return pmuirq;
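The overflow IRQ is an OR-reduction across the cycle counter (bit 31) and the implemented event counters. A simplified C sketch (hypothetical harness; the MDCR_EL2.HPMN/HPME split for counters reserved to EL2 is folded into a single global enable here):

#include <stdbool.h>
#include <stdint.h>

/* IRQ is pending if any enabled counter has both its interrupt enable and
 * its overflow flag set. */
static bool pmu_irq_pending(unsigned num_counters, bool global_enable,
                            uint32_t intenset, uint32_t ovsset)
{
    bool irq = global_enable && (intenset >> 31) && (ovsset >> 31); /* cycle counter */
    for (unsigned n = 0; n < num_counters; n++)
        if (global_enable && ((intenset >> n) & 1) && ((ovsset >> n) & 1))
            irq = true;
    return irq;
}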

Library pseudocode for aarch64/debug/pmu/AArch64.CountEvents

// AArch64.CountEvents()
// =====================
// Return TRUE if counter "n" should count its event. For the cycle counter, n == 31.

boolean AArch64.CountEvents(integer n)
    assert n == 31 || n < UInt(PMCR_EL0.N);

    // Event counting is disabled in Debug state
    debug = Halted();

    // In Non-secure state, some counters are reserved for EL2
    if HaveEL(EL2) then
        E = if n < UInt(MDCR_EL2.HPMN) || n == 31 then PMCR_EL0.E else MDCR_EL2.HPME;
    else
        E = PMCR_EL0.E;
    enabled = E == '1' && PMCNTENSET_EL0<n> == '1';

    if !IsSecure() then
        // Event counting in Non-secure state is allowed unless all of:
        // * EL2 and the HPMD Extension are implemented
        // * Executing at EL2
        // * PMNx is not reserved for EL2
        // * MDCR_EL2.HPMD == 1
        if HaveHPMDExt() && PSTATE.EL == EL2 && (n < UInt(MDCR_EL2.HPMN) || n == 31) then
            prohibited = (MDCR_EL2.HPMD == '1');
        else
            prohibited = FALSE;
    else
        // Event counting in Secure state is prohibited unless any one of:
        // * EL3 is not implemented
        // * EL3 is using AArch64 and MDCR_EL3.SPME == 1
        prohibited = HaveEL(EL3) && MDCR_EL3.SPME == '0';

    // The IMPLEMENTATION DEFINED authentication interface might override software controls
    if prohibited && !HaveNoSecurePMUDisableOverride() then
        prohibited = !ExternalSecureNoninvasiveDebugEnabled();

    // For the cycle counter, PMCR_EL0.DP enables counting when otherwise prohibited
    if prohibited && n == 31 then prohibited = (PMCR_EL0.DP == '1');

    // Event counting can be filtered by the {P, U, NSK, NSU, NSH, M} bits
    filter = if n == 31 then PMCCFILTR else PMEVTYPER[n];

    P   = filter<31>;
    U   = filter<30>;
    NSK = if HaveEL(EL3) then filter<29> else '0';
    NSU = if HaveEL(EL3) then filter<28> else '0';
    NSH = if HaveEL(EL2) then filter<27> else '0';
    M   = if HaveEL(EL3) then filter<26> else '0';

    case PSTATE.EL of
        when EL0  filtered = if IsSecure() then U == '1' else U != NSU;
        when EL1  filtered = if IsSecure() then P == '1' else P != NSK;
        when EL2  filtered = (NSH == '0');
        when EL3  filtered = (M != P);

    return !debug && enabled && !prohibited && !filtered;
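The final filter decode can be read as a small truth table over the {P, U, NSK, NSU, NSH, M} bits. Illustrative C sketch (hypothetical harness; EL2 and EL3 assumed implemented, so the bits are taken directly from the filter word):

#include <stdbool.h>
#include <stdint.h>

/* PMEVTYPER/PMCCFILTR filter decode from the end of AArch64.CountEvents.
 * Returns TRUE if the event is filtered out at the given Exception level. */
static bool event_filtered(uint32_t filter, int el, bool is_secure)
{
    bool P   = (filter >> 31) & 1;
    bool U   = (filter >> 30) & 1;
    bool NSK = (filter >> 29) & 1;
    bool NSU = (filter >> 28) & 1;
    bool NSH = (filter >> 27) & 1;
    bool M   = (filter >> 26) & 1;

    switch (el) {
    case 0:  return is_secure ? U : (U != NSU);
    case 1:  return is_secure ? P : (P != NSK);
    case 2:  return !NSH;
    default: return M != P;   /* EL3 */
    }
}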

Library pseudocode for aarch64/debug/statisticalprofiling/CheckProfilingBufferAccess

// CheckProfilingBufferAccess()
// ============================

SysRegAccess CheckProfilingBufferAccess()
    if !HaveStatisticalProfiling() || PSTATE.EL == EL0 || UsingAArch32() then
        return SysRegAccess_UNDEFINED;

    if EL2Enabled() && PSTATE.EL == EL1 && MDCR_EL2.E2PB<0> != '1' then
        return SysRegAccess_TrapToEL2;

    if HaveEL(EL3) && PSTATE.EL != EL3 && MDCR_EL3.NSPB != SCR_EL3.NS:'1' then
        return SysRegAccess_TrapToEL3;

    return SysRegAccess_OK;

Library pseudocode for aarch64/debug/statisticalprofiling/CheckStatisticalProfilingAccess

// CheckStatisticalProfilingAccess()
// =================================

SysRegAccess CheckStatisticalProfilingAccess()
    if !HaveStatisticalProfiling() || PSTATE.EL == EL0 || UsingAArch32() then
        return SysRegAccess_UNDEFINED;

    if EL2Enabled() && PSTATE.EL == EL1 && MDCR_EL2.TPMS == '1' then
        return SysRegAccess_TrapToEL2;

    if HaveEL(EL3) && PSTATE.EL != EL3 && MDCR_EL3.NSPB != SCR_EL3.NS:'1' then
        return SysRegAccess_TrapToEL3;

    return SysRegAccess_OK;

Library pseudocode for aarch64/debug/statisticalprofiling/CollectContextIDR1

// CollectContextIDR1()
// ====================

boolean CollectContextIDR1()
    if !StatisticalProfilingEnabled() then return FALSE;
    if PSTATE.EL == EL2 then return FALSE;
    if EL2Enabled() && HCR_EL2.TGE == '1' then return FALSE;
    return PMSCR_EL1.CX == '1';

Library pseudocode for aarch64/debug/statisticalprofiling/CollectContextIDR2

// CollectContextIDR2()
// ====================

boolean CollectContextIDR2()
    if !StatisticalProfilingEnabled() then return FALSE;
    if !EL2Enabled() then return FALSE;
    return PMSCR_EL2.CX == '1';

Library pseudocode for aarch64/debug/statisticalprofiling/CollectPhysicalAddress

// CollectPhysicalAddress()
// ========================

boolean CollectPhysicalAddress()
    if !StatisticalProfilingEnabled() then return FALSE;
    (secure, el) = ProfilingBufferOwner();
    if !secure && HaveEL(EL2) then
        return PMSCR_EL2.PA == '1' && (el == EL2 || PMSCR_EL1.PA == '1');
    else
        return PMSCR_EL1.PA == '1';

Library pseudocode for aarch64/debug/statisticalprofiling/CollectRecord

// CollectRecord()
// ===============

boolean CollectRecord(bits(64) events, integer total_latency, OpType optype)
    assert StatisticalProfilingEnabled();

    if PMSFCR_EL1.FE == '1' then
        e = events<63:48,31:24,15:12,7,5,3,1>;
        m = PMSEVFR_EL1<63:48,31:24,15:12,7,5,3,1>;
        // Check for UNPREDICTABLE case
        if IsZero(PMSEVFR_EL1) && ConstrainUnpredictableBool(Unpredictable_ZEROPMSEVFR) then
            return FALSE;
        if !IsZero(NOT(e) AND m) then return FALSE;

    if PMSFCR_EL1.FT == '1' then
        // Check for UNPREDICTABLE case
        if IsZero(PMSFCR_EL1.<B,LD,ST>) && ConstrainUnpredictableBool(Unpredictable_NOOPTYPES) then
            return FALSE;
        case optype of
            when OpType_Branch      if PMSFCR_EL1.B == '0' then return FALSE;
            when OpType_Load        if PMSFCR_EL1.LD == '0' then return FALSE;
            when OpType_Store       if PMSFCR_EL1.ST == '0' then return FALSE;
            when OpType_LoadAtomic  if PMSFCR_EL1.<LD,ST> == '00' then return FALSE;
            otherwise               return FALSE;

    if PMSFCR_EL1.FL == '1' then
        if IsZero(PMSLATFR_EL1.MINLAT) && ConstrainUnpredictableBool(Unpredictable_ZEROMINLATENCY) then
            // UNPREDICTABLE case
            return FALSE;
        if total_latency < UInt(PMSLATFR_EL1.MINLAT) then return FALSE;

    return TRUE;
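The event filter passes a record only when every event required by the bit-compressed mask m is also present in the sampled events e, which is what the IsZero(NOT(e) AND m) test expresses. In C terms (illustrative only, operating on the already-compressed words):

#include <stdbool.h>
#include <stdint.h>

/* A record survives the event filter iff all required events were observed. */
static bool event_filter_pass(uint64_t e, uint64_t m)
{
    return (~e & m) == 0;   /* IsZero(NOT(e) AND m) */
}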

Library pseudocode for aarch64/debug/statisticalprofiling/CollectTimeStamp

// CollectTimeStamp()
// ==================

TimeStamp CollectTimeStamp()
    if !StatisticalProfilingEnabled() then return TimeStamp_None;
    (secure, el) = ProfilingBufferOwner();
    if el == EL2 then
        if PMSCR_EL2.TS == '0' then return TimeStamp_None;
    else
        if PMSCR_EL1.TS == '0' then return TimeStamp_None;
    if EL2Enabled() then
        pct = PMSCR_EL2.PCT == '1' && (el == EL2 || PMSCR_EL1.PCT == '1');
    else
        pct = PMSCR_EL1.PCT == '1';
    return (if pct then TimeStamp_Physical else TimeStamp_Virtual);

Library pseudocode for aarch64/debug/statisticalprofiling/OpType

enumeration OpType {
    OpType_Load,       // Any memory-read operation other than atomics, compare-and-swap, and swap
    OpType_Store,      // Any memory-write operation, including atomics without return
    OpType_LoadAtomic, // Atomics with return, compare-and-swap and swap
    OpType_Branch,     // Software write to the PC
    OpType_Other       // Any other class of operation
};

Library pseudocode for aarch64/debug/statisticalprofiling/ProfilingBufferEnabled

// ProfilingBufferEnabled()
// ========================

boolean ProfilingBufferEnabled()
    if !HaveStatisticalProfiling() then return FALSE;
    (secure, el) = ProfilingBufferOwner();
    non_secure_bit = if secure then '0' else '1';
    return (!ELUsingAArch32(el) && non_secure_bit == SCR_EL3.NS &&
            PMBLIMITR_EL1.E == '1' && PMBSR_EL1.S == '0');

Library pseudocode for aarch64/debug/statisticalprofiling/ProfilingBufferOwner

// ProfilingBufferOwner()
// ======================

(boolean, bits(2)) ProfilingBufferOwner()
    secure = if HaveEL(EL3) then (MDCR_EL3.NSPB<1> == '0') else IsSecure();
    el = if !secure && HaveEL(EL2) && MDCR_EL2.E2PB == '00' then EL2 else EL1;
    return (secure, el);

Library pseudocode for aarch64/debug/statisticalprofiling/ProfilingSynchronizationBarrier

// Barrier to ensure that all existing profiling data has been formatted, and profiling buffer
// addresses have been translated such that writes to the profiling buffer have been initiated.
// A following DSB completes when writes to the profiling buffer have completed.
ProfilingSynchronizationBarrier();

Library pseudocode for aarch64/debug/statisticalprofiling/StatisticalProfilingEnabled

// StatisticalProfilingEnabled()
// =============================

boolean StatisticalProfilingEnabled()
    if !HaveStatisticalProfiling() || UsingAArch32() || !ProfilingBufferEnabled() then
        return FALSE;

    in_host = EL2Enabled() && HCR_EL2.TGE == '1';
    (secure, el) = ProfilingBufferOwner();
    if UInt(el) < UInt(PSTATE.EL) || secure != IsSecure() || (in_host && el == EL1) then
        return FALSE;

    case PSTATE.EL of
        when EL3  Unreachable();
        when EL2  spe_bit = PMSCR_EL2.E2SPE;
        when EL1  spe_bit = PMSCR_EL1.E1SPE;
        when EL0  spe_bit = (if in_host then PMSCR_EL2.E0HSPE else PMSCR_EL1.E0SPE);

    return spe_bit == '1';

Library pseudocode for aarch64/debug/statisticalprofiling/SysRegAccess

enumeration SysRegAccess { SysRegAccess_OK,
                           SysRegAccess_UNDEFINED,
                           SysRegAccess_TrapToEL1,
                           SysRegAccess_TrapToEL2,
                           SysRegAccess_TrapToEL3 };

Library pseudocode for aarch64/debug/statisticalprofiling/TimeStamp

enumeration TimeStamp { TimeStamp_None, TimeStamp_Virtual, TimeStamp_Physical };

Library pseudocode for aarch64/debug/takeexceptiondbg/AArch64.TakeExceptionInDebugState

// AArch64.TakeExceptionInDebugState()
// ===================================
// Take an exception in Debug state to an Exception Level using AArch64.

AArch64.TakeExceptionInDebugState(bits(2) target_el, ExceptionRecord exception)
    assert HaveEL(target_el) && !ELUsingAArch32(target_el) && UInt(target_el) >= UInt(PSTATE.EL);

    sync_errors = HaveIESB() && SCTLR[].IESB == '1';
    if HaveDoubleFaultExt() then
        sync_errors = sync_errors || (SCR_EL3.EA == '1' && SCR_EL3.NMEA == '1' && PSTATE.EL == EL3);
    // SCTLR[].IESB might be ignored in Debug state.
    if !ConstrainUnpredictableBool(Unpredictable_IESBinDebug) then
        sync_errors = FALSE;
    if sync_errors && InsertIESBBeforeException(target_el) then
        SynchronizeErrors();

    SynchronizeContext();

    // If coming from AArch32 state, the top parts of the X[] registers might be set to zero
    from_32 = UsingAArch32();
    if from_32 then AArch64.MaybeZeroRegisterUppers();
    MaybeZeroSVEUppers(target_el);

    AArch64.ReportException(exception, target_el);

    PSTATE.EL = target_el;
    PSTATE.nRW = '0';
    PSTATE.SP = '1';

    SPSR[] = bits(32) UNKNOWN;
    ELR[] = bits(64) UNKNOWN;

    if HaveSSBSExt() then PSTATE.SSBS = bits(1) UNKNOWN;

    // PSTATE.{SS,D,A,I,F} are not observable and ignored in Debug state, so behave as if UNKNOWN.
    PSTATE.<SS,D,A,I,F> = bits(5) UNKNOWN;
    DLR_EL0 = bits(64) UNKNOWN;
    DSPSR_EL0 = bits(32) UNKNOWN;
    if HaveBTIExt() then
        if exception.type == Exception_Breakpoint then
            DSPSR_EL0<11:10> = PSTATE.BTYPE;
        else
            DSPSR_EL0<11:10> = if ConstrainUnpredictableBool(Unpredictable_ZEROBTYPE) then '00' else PSTATE.BTYPE;
        PSTATE.BTYPE = '00';
    PSTATE.IL = '0';
    if from_32 then                        // Coming from AArch32
        PSTATE.IT = '00000000';
        PSTATE.T = '0';                    // PSTATE.J is RES0
    if HavePANExt() && (PSTATE.EL == EL1 || (PSTATE.EL == EL2 && ELIsInHost(EL0))) &&
        SCTLR[].SPAN == '0' then
        PSTATE.PAN = '1';
    if HaveMTEExt() then PSTATE.TCO = '1';

    EDSCR.ERR = '1';
    UpdateEDSCRFields();                   // Update EDSCR processor state flags.

    if sync_errors then
        SynchronizeErrors();

    EndOfInstruction();

Library pseudocode for aarch64/debug/watchpoint/AArch64.WatchpointByteMatch

// AArch64.WatchpointByteMatch()
// =============================

boolean AArch64.WatchpointByteMatch(integer n, AccType acctype, bits(64) vaddress)
    el = if HaveNV2Ext() && acctype == AccType_NV2REGISTER then EL2 else PSTATE.EL;
    top = AddrTop(vaddress, FALSE, el);
    bottom = if DBGWVR_EL1[n]<2> == '1' then 2 else 3;            // Word or doubleword
    byte_select_match = (DBGWCR_EL1[n].BAS<UInt(vaddress<bottom-1:0>)> != '0');
    mask = UInt(DBGWCR_EL1[n].MASK);

    // If DBGWCR_EL1[n].MASK is non-zero value and DBGWCR_EL1[n].BAS is not set to '11111111', or
    // DBGWCR_EL1[n].BAS specifies a non-contiguous set of bytes behavior is CONSTRAINED
    // UNPREDICTABLE.
    if mask > 0 && !IsOnes(DBGWCR_EL1[n].BAS) then
        byte_select_match = ConstrainUnpredictableBool(Unpredictable_WPMASKANDBAS);
    else
        LSB = (DBGWCR_EL1[n].BAS AND NOT(DBGWCR_EL1[n].BAS - 1));  MSB = (DBGWCR_EL1[n].BAS + LSB);
        if !IsZero(MSB AND (MSB - 1)) then                        // Not contiguous
            byte_select_match = ConstrainUnpredictableBool(Unpredictable_WPBASCONTIGUOUS);
            bottom = 3;                                           // For the whole doubleword

    // If the address mask is set to a reserved value, the behavior is CONSTRAINED UNPREDICTABLE.
    if mask > 0 && mask <= 2 then
        (c, mask) = ConstrainUnpredictableInteger(3, 31, Unpredictable_RESWPMASK);
        assert c IN {Constraint_DISABLED, Constraint_NONE, Constraint_UNKNOWN};
        case c of
            when Constraint_DISABLED  return FALSE;               // Disabled
            when Constraint_NONE      mask = 0;                   // No masking
            // Otherwise the value returned by ConstrainUnpredictableInteger is a not-reserved value

    if mask > bottom then
        WVR_match = (vaddress<top:mask> == DBGWVR_EL1[n]<top:mask>);
        // If masked bits of DBGWVR_EL1[n] are not zero, the behavior is CONSTRAINED UNPREDICTABLE.
        if WVR_match && !IsZero(DBGWVR_EL1[n]<mask-1:bottom>) then
            WVR_match = ConstrainUnpredictableBool(Unpredictable_WPMASKEDBITS);
    else
        WVR_match = vaddress<top:bottom> == DBGWVR_EL1[n]<top:bottom>;

    return WVR_match && byte_select_match;
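The LSB/MSB manipulation above is a standard bit trick for testing that DBGWCR_EL1[n].BAS selects a contiguous run of bytes. Illustrative C sketch (not part of the specification):

#include <stdbool.h>
#include <stdint.h>

/* LSB isolates the lowest set bit of the byte-address-select mask; adding it
 * turns a contiguous run of ones into a single power of two, so
 * MSB & (MSB - 1) is zero exactly when BAS selects a contiguous set of bytes. */
static bool bas_is_contiguous(uint8_t bas)
{
    uint8_t lsb = bas & (uint8_t)(-bas);   /* BAS AND NOT(BAS - 1) */
    uint8_t msb = (uint8_t)(bas + lsb);
    return (msb & (uint8_t)(msb - 1)) == 0;
}

For example, bas_is_contiguous(0x06) is true ('00000110' selects bytes 1-2), while bas_is_contiguous(0x05) is false ('00000101' selects non-adjacent bytes 0 and 2).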

Library pseudocode for aarch64/debug/watchpoint/AArch64.WatchpointMatch

// AArch64.WatchpointMatch()
// =========================
// Watchpoint matching in an AArch64 translation regime.

boolean AArch64.WatchpointMatch(integer n, bits(64) vaddress, integer size, boolean ispriv,
                                AccType acctype, boolean iswrite)
    assert !ELUsingAArch32(S1TranslationRegime());
    assert n <= UInt(ID_AA64DFR0_EL1.WRPs);

    // "ispriv" is FALSE for LDTR/STTR instructions executed at EL1 and all
    // load/stores at EL0, TRUE for all other load/stores. "iswrite" is TRUE for stores, FALSE for
    // loads.
    enabled = DBGWCR_EL1[n].E == '1';
    linked = DBGWCR_EL1[n].WT == '1';
    isbreakpnt = FALSE;

    state_match = AArch64.StateMatch(DBGWCR_EL1[n].SSC, DBGWCR_EL1[n].HMC, DBGWCR_EL1[n].PAC,
                                     linked, DBGWCR_EL1[n].LBN, isbreakpnt, acctype, ispriv);

    ls_match = (DBGWCR_EL1[n].LSC<(if iswrite then 1 else 0)> == '1');

    value_match = FALSE;
    for byte = 0 to size - 1
        value_match = value_match || AArch64.WatchpointByteMatch(n, acctype, vaddress + byte);

    return value_match && state_match && ls_match && enabled;

Library pseudocode for aarch64/exceptions/aborts/AArch64.Abort

// AArch64.Abort()
// ===============
// Abort and Debug exception handling in an AArch64 translation regime.

AArch64.Abort(bits(64) vaddress, FaultRecord fault)

    if IsDebugException(fault) then
        if fault.acctype == AccType_IFETCH then
            if UsingAArch32() && fault.debugmoe == DebugException_VectorCatch then
                AArch64.VectorCatchException(fault);
            else
                AArch64.BreakpointException(fault);
        else
            AArch64.WatchpointException(vaddress, fault);
    elsif fault.acctype == AccType_IFETCH then
        AArch64.InstructionAbort(vaddress, fault);
    else
        AArch64.DataAbort(vaddress, fault);

Library pseudocode for aarch64/exceptions/aborts/AArch64.AbortSyndrome

// AArch64.AbortSyndrome()
// =======================
// Creates an exception syndrome record for Abort and Watchpoint exceptions
// from an AArch64 translation regime.

ExceptionRecord AArch64.AbortSyndrome(Exception type, FaultRecord fault, bits(64) vaddress)
    exception = ExceptionSyndrome(type);

    d_side = type IN {Exception_DataAbort, Exception_NV2DataAbort, Exception_Watchpoint};

    exception.syndrome = AArch64.FaultSyndrome(d_side, fault);
    exception.vaddress = ZeroExtend(vaddress);
    if IPAValid(fault) then
        exception.ipavalid = TRUE;
        exception.NS = fault.ipaddress.NS;
        exception.ipaddress = fault.ipaddress.address;
    else
        exception.ipavalid = FALSE;

    return exception;

Library pseudocode for aarch64/exceptions/aborts/AArch64.CheckPCAlignment

// AArch64.CheckPCAlignment()
// ==========================

AArch64.CheckPCAlignment()

    bits(64) pc = ThisInstrAddr();
    if pc<1:0> != '00' then
        AArch64.PCAlignmentFault();

Library pseudocode for aarch64/exceptions/aborts/AArch64.DataAbort

// AArch64.DataAbort()
// ===================

AArch64.DataAbort(bits(64) vaddress, FaultRecord fault)
    route_to_el3 = HaveEL(EL3) && SCR_EL3.EA == '1' && IsExternalAbort(fault);
    route_to_el2 = (EL2Enabled() && PSTATE.EL IN {EL0,EL1} &&
                    (HCR_EL2.TGE == '1' ||
                     (HaveRASExt() && HCR_EL2.TEA == '1' && IsExternalAbort(fault)) ||
                     (HaveNV2Ext() && fault.acctype == AccType_NV2REGISTER) ||
                     IsSecondStage(fault)));

    bits(64) preferred_exception_return = ThisInstrAddr();
    if (HaveDoubleFaultExt() && (PSTATE.EL == EL3 || route_to_el3) &&
        IsExternalAbort(fault) && SCR_EL3.EASE == '1') then
        vect_offset = 0x180;
    else
        vect_offset = 0x0;

    if HaveNV2Ext() && fault.acctype == AccType_NV2REGISTER then
        exception = AArch64.AbortSyndrome(Exception_NV2DataAbort, fault, vaddress);
    else
        exception = AArch64.AbortSyndrome(Exception_DataAbort, fault, vaddress);

    if PSTATE.EL == EL3 || route_to_el3 then
        AArch64.TakeException(EL3, exception, preferred_exception_return, vect_offset);
    elsif PSTATE.EL == EL2 || route_to_el2 then
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);
    else
        AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/aborts/AArch64.InstructionAbort

// AArch64.InstructionAbort()
// ==========================

AArch64.InstructionAbort(bits(64) vaddress, FaultRecord fault)
    // External aborts on instruction fetch must be taken synchronously
    if HaveDoubleFaultExt() then assert fault.type != Fault_AsyncExternal;

    route_to_el3 = HaveEL(EL3) && SCR_EL3.EA == '1' && IsExternalAbort(fault);
    route_to_el2 = (EL2Enabled() && PSTATE.EL IN {EL0,EL1} &&
                    (HCR_EL2.TGE == '1' || IsSecondStage(fault) ||
                     (HaveRASExt() && HCR_EL2.TEA == '1' && IsExternalAbort(fault))));

    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x0;

    exception = AArch64.AbortSyndrome(Exception_InstructionAbort, fault, vaddress);

    if PSTATE.EL == EL3 || route_to_el3 then
        AArch64.TakeException(EL3, exception, preferred_exception_return, vect_offset);
    elsif PSTATE.EL == EL2 || route_to_el2 then
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);
    else
        AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/aborts/AArch64.PCAlignmentFault

// AArch64.PCAlignmentFault()
// ==========================
// Called on unaligned program counter in AArch64 state.

AArch64.PCAlignmentFault()

    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x0;

    exception = ExceptionSyndrome(Exception_PCAlignment);
    exception.vaddress = ThisInstrAddr();

    if UInt(PSTATE.EL) > UInt(EL1) then
        AArch64.TakeException(PSTATE.EL, exception, preferred_exception_return, vect_offset);
    elsif EL2Enabled() && HCR_EL2.TGE == '1' then
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);
    else
        AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/aborts/AArch64.SPAlignmentFault

// AArch64.SPAlignmentFault()
// ==========================
// Called on an unaligned stack pointer in AArch64 state.

AArch64.SPAlignmentFault()

    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x0;

    exception = ExceptionSyndrome(Exception_SPAlignment);

    if UInt(PSTATE.EL) > UInt(EL1) then
        AArch64.TakeException(PSTATE.EL, exception, preferred_exception_return, vect_offset);
    elsif EL2Enabled() && HCR_EL2.TGE == '1' then
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);
    else
        AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/aborts/BranchTargetException

// BranchTargetException
// =====================
// Raise branch target exception.

AArch64.BranchTargetException(bits(52) vaddress)

    route_to_el2 = EL2Enabled() && PSTATE.EL == EL0 && HCR_EL2.TGE == '1';
    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x0;

    exception = ExceptionSyndrome(Exception_BranchTarget);
    exception.syndrome<1:0>  = PSTATE.BTYPE;
    exception.syndrome<24:2> = Zeros();    // RES0

    if UInt(PSTATE.EL) > UInt(EL1) then
        AArch64.TakeException(PSTATE.EL, exception, preferred_exception_return, vect_offset);
    elsif route_to_el2 then
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);
    else
        AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/aborts/EffectiveTCF

// EffectiveTCF()
// ==============
// Returns the TCF field applied to Tag Check Fails in the given Exception Level

bits(2) EffectiveTCF(bits(2) el)
    if el == EL3 then
        tcf = SCTLR_EL3.TCF;
    elsif el == EL2 then
        tcf = SCTLR_EL2.TCF;
    elsif el == EL1 then
        tcf = SCTLR_EL1.TCF;
    elsif el == EL0 && HCR_EL2.<E2H,TGE> == '11' then
        tcf = SCTLR_EL2.TCF0;
    elsif el == EL0 && HCR_EL2.<E2H,TGE> != '11' then
        tcf = SCTLR_EL1.TCF0;
    return tcf;
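Which SCTLR field supplies the TCF value depends only on the Exception level and, for EL0, on whether HCR_EL2.{E2H,TGE} selects the host EL0 regime. Illustrative C sketch with the register fields passed in explicitly (a hypothetical harness, not the specification):

#include <stdbool.h>

/* Selects the effective TCF field as in EffectiveTCF. */
static unsigned effective_tcf(int el, bool e2h, bool tge,
                              unsigned tcf_el1, unsigned tcf_el2, unsigned tcf_el3,
                              unsigned tcf0_el1, unsigned tcf0_el2)
{
    switch (el) {
    case 3:  return tcf_el3;
    case 2:  return tcf_el2;
    case 1:  return tcf_el1;
    default: return (e2h && tge) ? tcf0_el2 : tcf0_el1;   /* EL0 */
    }
}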

Library pseudocode for aarch64/exceptions/aborts/ReportTagCheckFail

// ReportTagCheckFail()
// ====================
// Records a tag check fail exception into the appropriate TFSR_ELx

ReportTagCheckFail(bits(2) el, bit ttbr)
    if el == EL3 then
        assert ttbr == '0';
        TFSR_EL3.TF0 = '1';
    elsif el == EL2 then
        if ttbr == '0' then TFSR_EL2.TF0 = '1'; else TFSR_EL2.TF1 = '1';
    elsif el == EL1 then
        if ttbr == '0' then TFSR_EL1.TF0 = '1'; else TFSR_EL1.TF1 = '1';
    elsif el == EL0 then
        if ttbr == '0' then TFSRE0_EL1.TF0 = '1'; else TFSRE0_EL1.TF1 = '1';

Library pseudocode for aarch64/exceptions/aborts/TagCheckFail

// TagCheckFail()
// ==============
// Handle a tag check fail condition

TagCheckFail(bits(64) vaddress, boolean iswrite)
    bits(2) tcf = EffectiveTCF(PSTATE.EL);
    if tcf == '01' then
        TagCheckFault(vaddress, iswrite);
    elsif tcf == '10' then
        ReportTagCheckFail(PSTATE.EL, vaddress<55>);
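The TCF value therefore selects between a synchronous fault, an asynchronous accumulation into TFSR_ELx, or no action. A minimal C sketch of the dispatch, with hypothetical callbacks standing in for the pseudocode calls:

/* TCF dispatch as in TagCheckFail: '01' raises a synchronous tag check
 * fault, '10' records the failure asynchronously, '00' ignores it
 * ('11' is reserved). */
enum tcf_mode { TCF_IGNORE = 0, TCF_SYNC = 1, TCF_ASYNC = 2 };

static void tag_check_fail(enum tcf_mode tcf,
                           void (*fault)(void), void (*record)(void))
{
    if (tcf == TCF_SYNC)
        fault();    /* TagCheckFault(vaddress, iswrite) */
    else if (tcf == TCF_ASYNC)
        record();   /* ReportTagCheckFail(PSTATE.EL, vaddress<55>) */
}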

Library pseudocode for aarch64/exceptions/aborts/TagCheckFault

// TagCheckFault()
// ===============
// Raise a tag check fail exception.

TagCheckFault(bits(64) va, boolean write)
    bits(2) target_el;
    bits(64) preferred_exception_return = ThisInstrAddr();
    integer vect_offset = 0x0;

    if PSTATE.EL == EL0 then
        target_el = if HCR_EL2.TGE == '0' then EL1 else EL2;
    else
        target_el = PSTATE.EL;

    exception = ExceptionSyndrome(Exception_DataAbort);
    exception.syndrome<5:0> = '010001';
    if write then
        exception.syndrome<6> = '1';
    exception.vaddress = va;

    AArch64.TakeException(target_el, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/asynch/AArch64.TakePhysicalFIQException

// AArch64.TakePhysicalFIQException()
// ==================================

AArch64.TakePhysicalFIQException()

    route_to_el3 = HaveEL(EL3) && SCR_EL3.FIQ == '1';
    route_to_el2 = (EL2Enabled() && PSTATE.EL IN {EL0,EL1} &&
                    (HCR_EL2.TGE == '1' || HCR_EL2.FMO == '1'));
    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x100;
    exception = ExceptionSyndrome(Exception_FIQ);

    if route_to_el3 then
        AArch64.TakeException(EL3, exception, preferred_exception_return, vect_offset);
    elsif PSTATE.EL == EL2 || route_to_el2 then
        assert PSTATE.EL != EL3;
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);
    else
        assert PSTATE.EL IN {EL0,EL1};
        AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/asynch/AArch64.TakePhysicalIRQException

// AArch64.TakePhysicalIRQException()
// ==================================
// Take an enabled physical IRQ exception.

AArch64.TakePhysicalIRQException()

    route_to_el3 = HaveEL(EL3) && SCR_EL3.IRQ == '1';
    route_to_el2 = (EL2Enabled() && PSTATE.EL IN {EL0,EL1} &&
                    (HCR_EL2.TGE == '1' || HCR_EL2.IMO == '1'));
    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x80;

    exception = ExceptionSyndrome(Exception_IRQ);

    if route_to_el3 then
        AArch64.TakeException(EL3, exception, preferred_exception_return, vect_offset);
    elsif PSTATE.EL == EL2 || route_to_el2 then
        assert PSTATE.EL != EL3;
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);
    else
        assert PSTATE.EL IN {EL0,EL1};
        AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/asynch/AArch64.TakePhysicalSErrorException

// AArch64.TakePhysicalSErrorException()
// =====================================

AArch64.TakePhysicalSErrorException(boolean impdef_syndrome, bits(24) syndrome)

    route_to_el3 = HaveEL(EL3) && SCR_EL3.EA == '1';
    route_to_el2 = (EL2Enabled() && PSTATE.EL IN {EL0,EL1} &&
                    (HCR_EL2.TGE == '1' || (!IsInHost() && HCR_EL2.AMO == '1')));
    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x180;

    exception = ExceptionSyndrome(Exception_SError);
    exception.syndrome<24> = if impdef_syndrome then '1' else '0';
    exception.syndrome<23:0> = syndrome;

    ClearPendingPhysicalSError();

    if PSTATE.EL == EL3 || route_to_el3 then
        AArch64.TakeException(EL3, exception, preferred_exception_return, vect_offset);
    elsif PSTATE.EL == EL2 || route_to_el2 then
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);
    else
        AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/asynch/AArch64.TakeVirtualFIQException

// AArch64.TakeVirtualFIQException()
// =================================

AArch64.TakeVirtualFIQException()
    assert EL2Enabled() && PSTATE.EL IN {EL0,EL1};
    assert HCR_EL2.TGE == '0' && HCR_EL2.FMO == '1';  // Virtual FIQ enabled if TGE==0 and FMO==1

    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x100;

    exception = ExceptionSyndrome(Exception_FIQ);

    AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/asynch/AArch64.TakeVirtualIRQException

// AArch64.TakeVirtualIRQException()
// =================================

AArch64.TakeVirtualIRQException()
    assert EL2Enabled() && PSTATE.EL IN {EL0,EL1};
    assert HCR_EL2.TGE == '0' && HCR_EL2.IMO == '1';  // Virtual IRQ enabled if TGE==0 and IMO==1

    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x80;

    exception = ExceptionSyndrome(Exception_IRQ);

    AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/asynch/AArch64.TakeVirtualSErrorException

// AArch64.TakeVirtualSErrorException()
// ====================================

AArch64.TakeVirtualSErrorException(boolean impdef_syndrome, bits(24) syndrome)

    assert EL2Enabled() && PSTATE.EL IN {EL0,EL1};
    assert HCR_EL2.TGE == '0' && HCR_EL2.AMO == '1';  // Virtual SError enabled if TGE==0 and AMO==1

    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x180;

    exception = ExceptionSyndrome(Exception_SError);
    if HaveRASExt() then
        exception.syndrome<24> = VSESR_EL2.IDS;
        exception.syndrome<23:0> = VSESR_EL2.ISS;
    else
        exception.syndrome<24> = if impdef_syndrome then '1' else '0';
        if impdef_syndrome then exception.syndrome<23:0> = syndrome;

    ClearPendingVirtualSError();

    AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/debug/AArch64.BreakpointException

// AArch64.BreakpointException()
// =============================

AArch64.BreakpointException(FaultRecord fault)
    assert PSTATE.EL != EL3;

    route_to_el2 = (EL2Enabled() && PSTATE.EL IN {EL0,EL1} &&
                    (HCR_EL2.TGE == '1' || MDCR_EL2.TDE == '1'));

    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x0;
    vaddress = bits(64) UNKNOWN;

    exception = AArch64.AbortSyndrome(Exception_Breakpoint, fault, vaddress);

    if PSTATE.EL == EL2 || route_to_el2 then
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);
    else
        AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/debug/AArch64.SoftwareBreakpoint

// AArch64.SoftwareBreakpoint()
// ============================

AArch64.SoftwareBreakpoint(bits(16) immediate)

    route_to_el2 = (EL2Enabled() && PSTATE.EL IN {EL0, EL1} &&
                    (HCR_EL2.TGE == '1' || MDCR_EL2.TDE == '1'));

    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x0;

    exception = ExceptionSyndrome(Exception_SoftwareBreakpoint);
    exception.syndrome<15:0> = immediate;

    if UInt(PSTATE.EL) > UInt(EL1) then
        AArch64.TakeException(PSTATE.EL, exception, preferred_exception_return, vect_offset);
    elsif route_to_el2 then
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);
    else
        AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/debug/AArch64.SoftwareStepException

// AArch64.SoftwareStepException()
// ===============================

AArch64.SoftwareStepException()
    assert PSTATE.EL != EL3;

    route_to_el2 = (EL2Enabled() && PSTATE.EL IN {EL0, EL1} &&
                    (HCR_EL2.TGE == '1' || MDCR_EL2.TDE == '1'));

    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x0;

    exception = ExceptionSyndrome(Exception_SoftwareStep);
    if SoftwareStep_DidNotStep() then
        exception.syndrome<24> = '0';
    else
        exception.syndrome<24> = '1';
        exception.syndrome<6>  = if SoftwareStep_SteppedEX() then '1' else '0';

    if PSTATE.EL == EL2 || route_to_el2 then
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);
    else
        AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/debug/AArch64.VectorCatchException

// AArch64.VectorCatchException()
// ==============================
// Vector Catch taken from EL0 or EL1 to EL2. This can only be called when debug exceptions are
// being routed to EL2, as Vector Catch is a legacy debug event.

AArch64.VectorCatchException(FaultRecord fault)
    assert PSTATE.EL != EL2;
    assert EL2Enabled() && (HCR_EL2.TGE == '1' || MDCR_EL2.TDE == '1');

    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x0;

    vaddress = bits(64) UNKNOWN;
    exception = AArch64.AbortSyndrome(Exception_VectorCatch, fault, vaddress);

    AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/debug/AArch64.WatchpointException

// AArch64.WatchpointException()
// =============================

AArch64.WatchpointException(bits(64) vaddress, FaultRecord fault)
    assert PSTATE.EL != EL3;

    route_to_el2 = (EL2Enabled() && PSTATE.EL IN {EL0, EL1} &&
                    (HCR_EL2.TGE == '1' || MDCR_EL2.TDE == '1'));

    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x0;

    exception = AArch64.AbortSyndrome(Exception_Watchpoint, fault, vaddress);

    if PSTATE.EL == EL2 || route_to_el2 then
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);
    else
        AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/exceptions/AArch64.ExceptionClass

// AArch64.ExceptionClass()
// ========================
// Returns the Exception Class and Instruction Length fields to be reported in ESR

(integer,bit) AArch64.ExceptionClass(Exception type, bits(2) target_el)

    il = if ThisInstrLength() == 32 then '1' else '0';
    from_32 = UsingAArch32();
    assert from_32 || il == '1';                  // AArch64 instructions always 32-bit

    case type of
        when Exception_Uncategorized        ec = 0x00; il = '1';
        when Exception_WFxTrap              ec = 0x01;
        when Exception_CP15RTTrap           ec = 0x03; assert from_32;
        when Exception_CP15RRTTrap          ec = 0x04; assert from_32;
        when Exception_CP14RTTrap           ec = 0x05; assert from_32;
        when Exception_CP14DTTrap           ec = 0x06; assert from_32;
        when Exception_AdvSIMDFPAccessTrap  ec = 0x07;
        when Exception_FPIDTrap             ec = 0x08;
        when Exception_PACTrap              ec = 0x09;
        when Exception_CP14RRTTrap          ec = 0x0C; assert from_32;
        when Exception_BranchTarget         ec = 0x0D;
        when Exception_IllegalState         ec = 0x0E; il = '1';
        when Exception_SupervisorCall       ec = 0x11;
        when Exception_HypervisorCall       ec = 0x12;
        when Exception_MonitorCall          ec = 0x13;
        when Exception_SystemRegisterTrap   ec = 0x18; assert !from_32;
        when Exception_SVEAccessTrap        ec = 0x19; assert !from_32;
        when Exception_ERetTrap             ec = 0x1A;
        when Exception_InstructionAbort     ec = 0x20; il = '1';
        when Exception_PCAlignment          ec = 0x22; il = '1';
        when Exception_DataAbort            ec = 0x24;
        when Exception_NV2DataAbort         ec = 0x25;
        when Exception_SPAlignment          ec = 0x26; il = '1'; assert !from_32;
        when Exception_FPTrappedException   ec = 0x28;
        when Exception_SError               ec = 0x2F; il = '1';
        when Exception_Breakpoint           ec = 0x30; il = '1';
        when Exception_SoftwareStep         ec = 0x32; il = '1';
        when Exception_Watchpoint           ec = 0x34; il = '1';
        when Exception_SoftwareBreakpoint   ec = 0x38;
        when Exception_VectorCatch          ec = 0x3A; il = '1'; assert from_32;
        otherwise                           Unreachable();

    if ec IN {0x20,0x24,0x30,0x32,0x34} && target_el == PSTATE.EL then
        ec = ec + 1;

    if ec IN {0x11,0x12,0x13,0x28,0x38} && !from_32 then
        ec = ec + 4;

    return (ec,il);
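As an informal illustration (not part of the Arm pseudocode), the C sketch below mirrors the two EC adjustments at the end of AArch64.ExceptionClass(): aborts and debug exceptions taken without a change of Exception level report EC+1, and the call/trap classes report EC+4 when the exception came from AArch64 rather than AArch32. The constants and the helper name are invented for the example.

#include <stdio.h>
#include <stdbool.h>

enum { EC_IABORT = 0x20, EC_DABORT = 0x24, EC_BKPT = 0x30,
       EC_SSTEP = 0x32, EC_WATCH = 0x34, EC_SVC = 0x11,
       EC_HVC = 0x12, EC_SMC = 0x13, EC_FPTRAP = 0x28, EC_BRK = 0x38 };

static int adjust_ec(int ec, bool same_el, bool from_aarch32)
{
    /* Aborts and debug exceptions taken at the current EL report EC+1 */
    if ((ec == EC_IABORT || ec == EC_DABORT || ec == EC_BKPT ||
         ec == EC_SSTEP || ec == EC_WATCH) && same_el)
        ec += 1;
    /* Calls and traps originating from AArch64 report EC+4 */
    if ((ec == EC_SVC || ec == EC_HVC || ec == EC_SMC ||
         ec == EC_FPTRAP || ec == EC_BRK) && !from_aarch32)
        ec += 4;
    return ec;
}

int main(void)
{
    printf("0x%02x\n", adjust_ec(EC_DABORT, true, false));  /* 0x25: Data Abort, same EL */
    printf("0x%02x\n", adjust_ec(EC_BRK, false, false));    /* 0x3C: BRK from AArch64 */
    return 0;
}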

Library pseudocode for aarch64/exceptions/exceptions/AArch64.ReportException

// AArch64.ReportException()
// =========================
// Report syndrome information for exception taken to AArch64 state.

AArch64.ReportException(ExceptionRecord exception, bits(2) target_el)

    Exception type = exception.type;

    (ec,il) = AArch64.ExceptionClass(type, target_el);
    iss = exception.syndrome;

    // IL is not valid for Data Abort exceptions without valid instruction syndrome information
    if ec IN {0x24,0x25} && iss<24> == '0' then
        il = '1';

    ESR[target_el] = ec<5:0>:il:iss;

    if type IN {Exception_InstructionAbort, Exception_PCAlignment, Exception_DataAbort,
                Exception_NV2DataAbort, Exception_Watchpoint} then
        FAR[target_el] = exception.vaddress;
    else
        FAR[target_el] = bits(64) UNKNOWN;

    if target_el == EL2 then
        if exception.ipavalid then
            HPFAR_EL2<43:4> = exception.ipaddress<51:12>;
            if HaveSecureEL2Ext() then
                if IsSecureEL2Enabled() then
                    HPFAR_EL2.NS = exception.NS;
                else
                    HPFAR_EL2.NS = '0';
        else
            HPFAR_EL2<43:4> = bits(40) UNKNOWN;

    return;
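The assignment ESR[target_el] = ec<5:0>:il:iss above concatenates a 6-bit EC, a 1-bit IL and a 25-bit ISS into a 32-bit syndrome value. A minimal C sketch of that packing (illustrative only; the function name is invented):

#include <stdint.h>
#include <stdio.h>

static uint32_t pack_esr(uint32_t ec, uint32_t il, uint32_t iss)
{
    /* EC in bits[31:26], IL in bit[25], ISS in bits[24:0] */
    return ((ec & 0x3fu) << 26) | ((il & 1u) << 25) | (iss & 0x1ffffffu);
}

int main(void)
{
    /* BRK #0x123 from AArch64: EC = 0x38 + 4 = 0x3C, IL = 1, ISS<15:0> = imm16 */
    printf("ESR = 0x%08x\n", pack_esr(0x3c, 1, 0x123));     /* 0xf2000123 */
    return 0;
}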

Library pseudocode for aarch64/exceptions/exceptions/AArch64.ResetControlRegisters

// Resets System registers and memory-mapped control registers that have architecturally-defined
// reset values to those values.

AArch64.ResetControlRegisters(boolean cold_reset);

Library pseudocode for aarch64/exceptions/exceptions/AArch64.TakeReset

// AArch64.TakeReset()
// ===================
// Reset into AArch64 state

AArch64.TakeReset(boolean cold_reset)
    assert !HighestELUsingAArch32();

    // Enter the highest implemented Exception level in AArch64 state
    PSTATE.nRW = '0';
    if HaveEL(EL3) then
        PSTATE.EL = EL3;
    elsif HaveEL(EL2) then
        PSTATE.EL = EL2;
    else
        PSTATE.EL = EL1;

    // Reset the system registers and other system components
    AArch64.ResetControlRegisters(cold_reset);

    // Reset all other PSTATE fields
    PSTATE.SP = '1';              // Select stack pointer
    PSTATE.<D,A,I,F> = '1111';    // All asynchronous exceptions masked
    PSTATE.SS = '0';              // Clear software step bit
    PSTATE.DIT = '0';             // PSTATE.DIT is reset to 0 when resetting into AArch64
    PSTATE.IL = '0';              // Clear Illegal Execution state bit

    // All registers, bits and fields not reset by the above pseudocode or by the BranchTo() call
    // below are UNKNOWN bitstrings after reset. In particular, the return information registers
    // ELR_ELx and SPSR_ELx have UNKNOWN values, so that it
    // is impossible to return from a reset in an architecturally defined way.
    AArch64.ResetGeneralRegisters();
    AArch64.ResetSIMDFPRegisters();
    AArch64.ResetSpecialRegisters();
    ResetExternalDebugRegisters(cold_reset);

    bits(64) rv;                  // IMPLEMENTATION DEFINED reset vector

    if HaveEL(EL3) then
        rv = RVBAR_EL3;
    elsif HaveEL(EL2) then
        rv = RVBAR_EL2;
    else
        rv = RVBAR_EL1;

    // The reset vector must be correctly aligned
    assert IsZero(rv<63:PAMax()>) && IsZero(rv<1:0>);

    BranchTo(rv, BranchType_RESET);

Library pseudocode for aarch64/exceptions/ieeefp/AArch64.FPTrappedException

// AArch64.FPTrappedException()
// ============================

AArch64.FPTrappedException(boolean is_ase, integer element, bits(8) accumulated_exceptions)
    exception = ExceptionSyndrome(Exception_FPTrappedException);
    if is_ase then
        if boolean IMPLEMENTATION_DEFINED "vector instructions set TFV to 1" then
            exception.syndrome<23> = '1';                          // TFV
        else
            exception.syndrome<23> = '0';                          // TFV
    else
        exception.syndrome<23> = '1';                              // TFV
    exception.syndrome<10:8> = bits(3) UNKNOWN;                    // VECITR
    if exception.syndrome<23> == '1' then
        exception.syndrome<7,4:0> = accumulated_exceptions<7,4:0>; // IDF,IXF,UFF,OFF,DZF,IOF
    else
        exception.syndrome<7,4:0> = bits(6) UNKNOWN;

    route_to_el2 = EL2Enabled() && HCR_EL2.TGE == '1';

    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x0;

    if UInt(PSTATE.EL) > UInt(EL1) then
        AArch64.TakeException(PSTATE.EL, exception, preferred_exception_return, vect_offset);
    elsif route_to_el2 then
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);
    else
        AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/syscalls/AArch64.CallHypervisor

// AArch64.CallHypervisor()
// ========================
// Performs a HVC call

AArch64.CallHypervisor(bits(16) immediate)
    assert HaveEL(EL2);

    if UsingAArch32() then AArch32.ITAdvance();
    SSAdvance();
    bits(64) preferred_exception_return = NextInstrAddr();
    vect_offset = 0x0;

    exception = ExceptionSyndrome(Exception_HypervisorCall);
    exception.syndrome<15:0> = immediate;

    if PSTATE.EL == EL3 then
        AArch64.TakeException(EL3, exception, preferred_exception_return, vect_offset);
    else
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/syscalls/AArch64.CallSecureMonitor

// AArch64.CallSecureMonitor()
// ===========================

AArch64.CallSecureMonitor(bits(16) immediate)
    assert HaveEL(EL3) && !ELUsingAArch32(EL3);

    if UsingAArch32() then AArch32.ITAdvance();
    SSAdvance();
    bits(64) preferred_exception_return = NextInstrAddr();
    vect_offset = 0x0;

    exception = ExceptionSyndrome(Exception_MonitorCall);
    exception.syndrome<15:0> = immediate;

    AArch64.TakeException(EL3, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/syscalls/AArch64.CallSupervisor

// AArch64.CallSupervisor()
// ========================
// Calls the Supervisor

AArch64.CallSupervisor(bits(16) immediate)
    if UsingAArch32() then AArch32.ITAdvance();
    SSAdvance();
    route_to_el2 = EL2Enabled() && PSTATE.EL == EL0 && HCR_EL2.TGE == '1';

    bits(64) preferred_exception_return = NextInstrAddr();
    vect_offset = 0x0;

    exception = ExceptionSyndrome(Exception_SupervisorCall);
    exception.syndrome<15:0> = immediate;

    if UInt(PSTATE.EL) > UInt(EL1) then
        AArch64.TakeException(PSTATE.EL, exception, preferred_exception_return, vect_offset);
    elsif route_to_el2 then
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);
    else
        AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/takeexception/AArch64.TakeException

// AArch64.TakeException()
// =======================
// Take an exception to an Exception Level using AArch64.

AArch64.TakeException(bits(2) target_el, ExceptionRecord exception,
                      bits(64) preferred_exception_return, integer vect_offset)
    assert HaveEL(target_el) && !ELUsingAArch32(target_el) && UInt(target_el) >= UInt(PSTATE.EL);

    sync_errors = HaveIESB() && SCTLR[].IESB == '1';
    if HaveDoubleFaultExt() then
        sync_errors = sync_errors || (SCR_EL3.EA == '1' && SCR_EL3.NMEA == '1' && PSTATE.EL == EL3);
    if sync_errors && InsertIESBBeforeException(target_el) then
        SynchronizeErrors();
        iesb_req = FALSE;
        sync_errors = FALSE;
        TakeUnmaskedPhysicalSErrorInterrupts(iesb_req);

    SynchronizeContext();

    // If coming from AArch32 state, the top parts of the X[] registers might be set to zero
    from_32 = UsingAArch32();
    if from_32 then AArch64.MaybeZeroRegisterUppers();
    MaybeZeroSVEUppers(target_el);

    if UInt(target_el) > UInt(PSTATE.EL) then
        boolean lower_32;
        if target_el == EL3 then
            if EL2Enabled() then
                lower_32 = ELUsingAArch32(EL2);
            else
                lower_32 = ELUsingAArch32(EL1);
        elsif IsInHost() && PSTATE.EL == EL0 && target_el == EL2 then
            lower_32 = ELUsingAArch32(EL0);
        else
            lower_32 = ELUsingAArch32(target_el - 1);
        vect_offset = vect_offset + (if lower_32 then 0x600 else 0x400);
    elsif PSTATE.SP == '1' then
        vect_offset = vect_offset + 0x200;

    spsr = GetPSRFromPSTATE();

    if PSTATE.EL == EL1 && target_el == EL1 && HaveNVExt() && EL2Enabled() && HCR_EL2.<NV, NV1> == '10' then
        spsr<3:2> = '10';

    if HaveUAOExt() then PSTATE.UAO = '0';
    if !(exception.type IN {Exception_IRQ, Exception_FIQ}) then
        AArch64.ReportException(exception, target_el);

    PSTATE.EL = target_el;
    PSTATE.nRW = '0';
    PSTATE.SP = '1';

    if HaveBTIExt() then
        if ( exception.type IN {Exception_SError, Exception_IRQ, Exception_FIQ,
                                Exception_SoftwareStep, Exception_PCAlignment,
                                Exception_InstructionAbort, Exception_SoftwareBreakpoint,
                                Exception_IllegalState, Exception_BranchTarget} ) then
            spsr_btype = PSTATE.BTYPE;
        else
            spsr_btype = if ConstrainUnpredictableBool(Unpredictable_ZEROBTYPE) then '00' else PSTATE.BTYPE;
        spsr<11:10> = spsr_btype;

    SPSR[] = spsr;
    ELR[] = preferred_exception_return;

    if HaveSSBSExt() then PSTATE.SSBS = SCTLR[].DSSBS;
    if HaveBTIExt() then PSTATE.BTYPE = '00';
    PSTATE.SS = '0';
    PSTATE.<D,A,I,F> = '1111';
    PSTATE.IL = '0';
    if from_32 then                                 // Coming from AArch32
        PSTATE.IT = '00000000';
        PSTATE.T = '0';                             // PSTATE.J is RES0
    if HavePANExt() && (PSTATE.EL == EL1 || (PSTATE.EL == EL2 && ELIsInHost(EL0))) &&
        SCTLR[].SPAN == '0' then
        PSTATE.PAN = '1';
    if HaveMTEExt() then PSTATE.TCO = '1';

    BranchTo(VBAR[]<63:11>:vect_offset<10:0>, BranchType_EXCEPTION);

    if sync_errors then
        SynchronizeErrors();
        iesb_req = TRUE;
        TakeUnmaskedPhysicalSErrorInterrupts(iesb_req);

    EndOfInstruction();
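Informally, the vect_offset arithmetic above selects one of four vector groups within the table pointed to by VBAR_ELx. The hedged C sketch below illustrates that selection, assuming the conventional 0x0/0x80/0x100/0x180 synchronous/IRQ/FIQ/SError base slots (the base slot is what callers of AArch64.TakeException() pass in, e.g. 0x180 in AArch64.TakeVirtualSErrorException() above); the helper name is invented.

#include <stdint.h>
#include <stdio.h>

static uint64_t vector_entry(uint64_t vbar, unsigned base_offset,
                             int from_lower_el, int lower_is_aarch32,
                             int sp_elx_selected)
{
    unsigned off = base_offset;
    if (from_lower_el)
        off += lower_is_aarch32 ? 0x600 : 0x400;   /* lower EL, AArch32 / AArch64 */
    else if (sp_elx_selected)
        off += 0x200;                              /* current EL with SP_ELx */
    /* VBAR[]<63:11>:vect_offset<10:0> -- the low 11 bits of VBAR are ignored */
    return (vbar & ~0x7ffull) | (off & 0x7ffu);
}

int main(void)
{
    /* SError (0x180) from a lower AArch64 EL: entry at VBAR + 0x580 */
    printf("0x%llx\n",
           (unsigned long long)vector_entry(0x80000000ull, 0x180, 1, 0, 0));
    return 0;
}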

Library pseudocode for aarch64/exceptions/traps/AArch64.AArch32SystemAccessTrap

// AArch64.AArch32SystemAccessTrap()
// =================================
// Trapped AArch32 System register access other than due to CPTR_EL2 or CPACR_EL1.

AArch64.AArch32SystemAccessTrap(bits(2) target_el, bits(32) aarch32_instr)
    assert HaveEL(target_el) && target_el != EL0 && UInt(target_el) >= UInt(PSTATE.EL);

    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x0;

    exception = AArch64.AArch32SystemAccessTrapSyndrome(aarch32_instr);
    if target_el == EL1 && EL2Enabled() && HCR_EL2.TGE == '1' then
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);
    else
        AArch64.TakeException(target_el, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/traps/AArch64.AArch32SystemAccessTrapSyndrome

// AArch64.AArch32SystemAccessTrapSyndrome()
// =========================================
// Return the syndrome information for traps on AArch32 MCR, MCRR, MRC, MRRC, and VMRS instructions,
// other than traps that are due to HCPTR or CPACR.

ExceptionRecord AArch64.AArch32SystemAccessTrapSyndrome(bits(32) instr)
    ExceptionRecord exception;

    cpnum = UInt(instr<11:8>);

    bits(20) iss = Zeros();

    if instr<27:24> == '1110' && instr<4> == '1' && instr<31:28> != '1111' then    // MRC/MCR
        case cpnum of
            when 10 exception = ExceptionSyndrome(Exception_FPIDTrap);
            when 14 exception = ExceptionSyndrome(Exception_CP14RTTrap);
            when 15 exception = ExceptionSyndrome(Exception_CP15RTTrap);
            otherwise Unreachable();
        iss<19:17> = instr<7:5>;       // opc2
        iss<16:14> = instr<23:21>;     // opc1
        iss<13:10> = instr<19:16>;     // CRn
        if instr<20> == '1' && instr<15:12> == '1111' then      // MRC, Rt==15
            iss<9:5> = '11111';
        elsif instr<20> == '0' && instr<15:12> == '1111' then   // MCR, Rt==15
            iss<9:5> = bits(5) UNKNOWN;
        else
            iss<9:5> = LookUpRIndex(UInt(instr<15:12>), PSTATE.M)<4:0>;
        iss<4:1> = instr<3:0>;         // CRm
    elsif instr<27:21> == '1100010' && instr<31:28> != '1111' then                 // MRRC/MCRR
        case cpnum of
            when 14 exception = ExceptionSyndrome(Exception_CP14RRTTrap);
            when 15 exception = ExceptionSyndrome(Exception_CP15RRTTrap);
            otherwise Unreachable();
        iss<19:16> = instr<7:4>;       // opc1
        if instr<19:16> == '1111' then                          // Rt2==15
            iss<14:10> = bits(5) UNKNOWN;
        else
            iss<14:10> = LookUpRIndex(UInt(instr<19:16>), PSTATE.M)<4:0>;
        if instr<15:12> == '1111' then                          // Rt==15
            iss<9:5> = bits(5) UNKNOWN;
        else
            iss<9:5> = LookUpRIndex(UInt(instr<15:12>), PSTATE.M)<4:0>;
        iss<4:1> = instr<3:0>;         // CRm
    elsif instr<27:25> == '110' && instr<31:28> != '1111' then                     // LDC/STC
        assert cpnum == 14;
        exception = ExceptionSyndrome(Exception_CP14DTTrap);
        iss<19:12> = instr<7:0>;       // imm8
        iss<4>     = instr<23>;        // U
        iss<2:1>   = instr<24,21>;     // P,W
        if instr<19:16> == '1111' then                          // Rn==15, LDC(Literal addressing)/STC
            iss<9:5> = bits(5) UNKNOWN;
            iss<3> = '1';
        else
            iss<9:5> = LookUpRIndex(UInt(instr<19:16>), PSTATE.M)<4:0>;   // Rn
            iss<3> = '0';
    else
        Unreachable();

    iss<0> = instr<20>;                // Direction

    exception.syndrome<24:20> = ConditionSyndrome();
    exception.syndrome<19:0>  = iss;

    return exception;
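For the trapped MRC/MCR case above, the ISS layout is opc2 in iss<19:17>, opc1 in iss<16:14>, CRn in iss<13:10>, Rt in iss<9:5>, CRm in iss<4:1>, and the direction bit (1 = read) in iss<0>. A small C sketch of that packing (illustrative only; the helper name is invented):

#include <stdint.h>
#include <stdio.h>

static uint32_t mcr_mrc_iss(unsigned opc2, unsigned opc1, unsigned crn,
                            unsigned rt, unsigned crm, unsigned is_read)
{
    return ((opc2 & 7u) << 17) | ((opc1 & 7u) << 14) | ((crn & 0xfu) << 10) |
           ((rt & 0x1fu) << 5) | ((crm & 0xfu) << 1) | (is_read & 1u);
}

int main(void)
{
    /* e.g. a trapped read with opc1=0, CRn=9, CRm=13, opc2=0, Rt=r2 -- the
       PMCCNTR encoding that appears in the EL1 trap checks later in this section */
    printf("ISS = 0x%05x\n", mcr_mrc_iss(0, 0, 9, 2, 13, 1));
    return 0;
}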

Library pseudocode for aarch64/exceptions/traps/AArch64.AdvSIMDFPAccessTrap

// AArch64.AdvSIMDFPAccessTrap()
// =============================
// Trapped access to Advanced SIMD or FP registers due to CPACR[].

AArch64.AdvSIMDFPAccessTrap(bits(2) target_el)
    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x0;

    route_to_el2 = (target_el == EL1 && EL2Enabled() && HCR_EL2.TGE == '1');

    if route_to_el2 then
        exception = ExceptionSyndrome(Exception_Uncategorized);
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);
    else
        exception = ExceptionSyndrome(Exception_AdvSIMDFPAccessTrap);
        exception.syndrome<24:20> = ConditionSyndrome();
        AArch64.TakeException(target_el, exception, preferred_exception_return, vect_offset);

    return;

Library pseudocode for aarch64/exceptions/traps/AArch64.CheckAArch32SystemAccess

// AArch64.CheckAArch32SystemAccess()
// ==================================
// Check AArch32 System register access instruction for enables and disables

AArch64.CheckAArch32SystemAccess(bits(32) instr)
    cp_num = UInt(instr<11:8>);
    assert cp_num IN {14,15};

    // Decode the AArch32 System register access instruction
    if instr<31:28> != '1111' && instr<27:24> == '1110' && instr<4> == '1' then    // MRC/MCR
        cprt = TRUE;  cpdt = FALSE;  nreg = 1;
        opc1 = UInt(instr<23:21>);
        opc2 = UInt(instr<7:5>);
        CRn  = UInt(instr<19:16>);
        CRm  = UInt(instr<3:0>);
    elsif instr<31:28> != '1111' && instr<27:21> == '1100010' then                 // MRRC/MCRR
        cprt = TRUE;  cpdt = FALSE;  nreg = 2;
        opc1 = UInt(instr<7:4>);
        CRm  = UInt(instr<3:0>);
    elsif instr<31:28> != '1111' && instr<27:25> == '110' && instr<22> == '0' then // LDC/STC
        cprt = FALSE;  cpdt = TRUE;  nreg = 0;
        opc1 = 0;
        CRn  = UInt(instr<15:12>);
    else
        allocated = FALSE;

    //
    // Coarse-grain decode into CP14 or CP15 encoding space. Each of the CPxxxInstrDecode functions
    // returns TRUE if the instruction is allocated at the current Exception level, FALSE otherwise.
    if cp_num == 14 then
        // LDC and STC only supported for c5 in CP14 encoding space
        if cpdt && CRn != 5 then
            allocated = FALSE;
        else
            // Coarse-grained decode of CP14 based on opc1 field
            case opc1 of
                when 0 allocated = CP14DebugInstrDecode(instr);
                when 1 allocated = CP14TraceInstrDecode(instr);
                when 7 allocated = CP14JazelleInstrDecode(instr);   // JIDR only
                otherwise allocated = FALSE;                        // All other values are unallocated
    elsif cp_num == 15 then
        // LDC and STC not supported in CP15 encoding space
        if !cprt then
            allocated = FALSE;
        else
            allocated = CP15InstrDecode(instr);

            // Coarse-grain traps to EL2 have a higher priority than exceptions generated because
            // the access instruction is UNDEFINED
            if AArch64.CheckCP15InstrCoarseTraps(CRn, nreg, CRm) then
                // For a coarse-grain trap, if it is IMPLEMENTATION DEFINED whether an access from
                // User mode is UNDEFINED when the trap is disabled, then it is
                // IMPLEMENTATION DEFINED whether the same access is UNDEFINED or generates a trap
                // when the trap is enabled.
                if PSTATE.EL == EL0 && EL2Enabled() && !allocated then
                    if boolean IMPLEMENTATION_DEFINED "UNDEF unallocated CP15 access at EL0" then
                        UNDEFINED;
                AArch64.AArch32SystemAccessTrap(EL2, instr);
    else
        allocated = FALSE;

    if !allocated then
        UNDEFINED;

    // If the instruction is not UNDEFINED, it might be disabled or trapped to a higher EL.
    AArch64.CheckAArch32SystemAccessTraps(instr);

    return;

Library pseudocode for aarch64/exceptions/traps/AArch64.CheckAArch32SystemAccessEL1Traps

// AArch64.CheckAArch32SystemAccessEL1Traps()
// ==========================================
// Check for configurable disables or traps to EL1 or EL2 of an AArch32 System register
// access instruction.

AArch64.CheckAArch32SystemAccessEL1Traps(bits(32) instr)
    assert PSTATE.EL == EL0;
    trap = FALSE;

    // Decode the AArch32 System register access instruction
    (op, cp_num, opc1, CRn, CRm, opc2, write) = AArch32.DecodeSysRegAccess(instr);

    if cp_num == 14 then
        if ((op == SystemAccessType_RT && opc1 == 0 && CRn == 0 && CRm == 5 && opc2 == 0) ||    // DBGDTRRXint/DBGDTRTXint
              (op == SystemAccessType_DT && CRn == 5 && opc2 == 0)) then                        // DBGDTRRXint/DBGDTRTXint (STC/LDC)
            trap = !Halted() && MDSCR_EL1.TDCC == '1';
        elsif opc1 == 0 then
            trap = MDSCR_EL1.TDCC == '1';
        elsif opc1 == 1 then
            trap = CPACR[].TTA == '1';
    elsif cp_num == 15 then
        if ((op == SystemAccessType_RT && opc1 == 0 && CRn == 9 && CRm == 12 && opc2 == 0) ||   // PMCR
              (op == SystemAccessType_RT && opc1 == 0 && CRn == 9 && CRm == 12 && opc2 == 1) || // PMCNTENSET
              (op == SystemAccessType_RT && opc1 == 0 && CRn == 9 && CRm == 12 && opc2 == 2) || // PMCNTENCLR
              (op == SystemAccessType_RT && opc1 == 0 && CRn == 9 && CRm == 12 && opc2 == 3) || // PMOVSR
              (op == SystemAccessType_RT && opc1 == 0 && CRn == 9 && CRm == 12 && opc2 == 6) || // PMCEID0
              (op == SystemAccessType_RT && opc1 == 0 && CRn == 9 && CRm == 12 && opc2 == 7) || // PMCEID1
              (op == SystemAccessType_RT && opc1 == 0 && CRn == 9 && CRm == 13 && opc2 == 1) || // PMXEVTYPER
              (op == SystemAccessType_RT && opc1 == 0 && CRn == 9 && CRm == 14 && opc2 == 3) || // PMOVSSET
              (op == SystemAccessType_RT && opc1 == 0 && CRn == 14 && CRm >= 12)) then          // PMEVTYPER<n>
            trap = PMUSERENR_EL0.EN == '0';
        elsif op == SystemAccessType_RT && opc1 == 0 && CRn == 9 && CRm == 14 && opc2 == 4 then // PMSWINC
            trap = PMUSERENR_EL0.EN == '0' && PMUSERENR_EL0.SW == '0';
        elsif ((op == SystemAccessType_RT && opc1 == 0 && CRn == 9 && CRm == 13 && opc2 == 0) || // PMCCNTR
                 (op == SystemAccessType_RRT && opc1 == 0 && CRm == 9)) then                     // PMCCNTR (MRRC/MCRR)
            trap = PMUSERENR_EL0.EN == '0' && (write || PMUSERENR_EL0.CR == '0');
        elsif ((op == SystemAccessType_RT && opc1 == 0 && CRn == 9 && CRm == 13 && opc2 == 2) || // PMXEVCNTR
                 (op == SystemAccessType_RT && opc1 == 0 && CRn == 14 && CRm >= 8 && CRm <= 11)) then // PMEVCNTR<n>
            trap = PMUSERENR_EL0.EN == '0' && (write || PMUSERENR_EL0.ER == '0');
        elsif op == SystemAccessType_RT && opc1 == 0 && CRn == 9 && CRm == 12 && opc2 == 5 then // PMSELR
            trap = PMUSERENR_EL0.EN == '0' && PMUSERENR_EL0.ER == '0';
        elsif op == SystemAccessType_RT && opc1 == 0 && CRn == 14 && CRm == 2 && opc2 IN {0,1,2} then // CNTP_TVAL CNTP_CTL CNTP_CVAL
            trap = CNTKCTL[].EL0PTEN == '0';
        elsif op == SystemAccessType_RT && opc1 == 0 && CRn == 14 && CRm == 0 && opc2 == 0 then // CNTFRQ
            trap = CNTKCTL[].EL0PTEN == '0' && CNTKCTL[].EL0VCTEN == '0';
        elsif op == SystemAccessType_RRT && opc1 == 1 && CRm == 14 then                         // CNTVCT
            trap = CNTKCTL[].EL0VCTEN == '0';

    if trap then
        AArch64.AArch32SystemAccessTrap(EL1, instr);

Library pseudocode for aarch64/exceptions/traps/AArch64.CheckAArch32SystemAccessEL2Traps

// AArch64.CheckAArch32SystemAccessEL2Traps()
// ==========================================
// Check for configurable traps to EL2 of an AArch32 System register access instruction.

AArch64.CheckAArch32SystemAccessEL2Traps(bits(32) instr)
    assert EL2Enabled() && PSTATE.EL IN {EL0, EL1, EL2};
    trap = FALSE;

    // Decode the AArch32 System register access instruction
    (op, cp_num, opc1, CRn, CRm, opc2, write) = AArch32.DecodeSysRegAccess(instr);

    if cp_num == 14 && PSTATE.EL IN {EL0, EL1} then
        if ((op == SystemAccessType_RT && opc1 == 0 && CRn == 1 && CRm == 0 && opc2 == 0) ||    // DBGDRAR
              (op == SystemAccessType_RRT && opc1 == 0 && CRm == 1) ||                          // DBGDRAR (MRRC)
              (op == SystemAccessType_RT && opc1 == 0 && CRn == 2 && CRm == 0 && opc2 == 0) ||  // DBGDSAR
              (op == SystemAccessType_RRT && opc1 == 0 && CRm == 2)) then                       // DBGDSAR (MRRC)
            trap = MDCR_EL2.TDRA == '1' || MDCR_EL2.TDE == '1' || HCR_EL2.TGE == '1';
        elsif ((op == SystemAccessType_RT && opc1 == 0 && CRn == 1 && CRm == 0 && opc2 == 4) ||   // DBGOSLAR
                 (op == SystemAccessType_RT && opc1 == 0 && CRn == 1 && CRm == 1 && opc2 == 4) || // DBGOSLSR
                 (op == SystemAccessType_RT && opc1 == 0 && CRn == 1 && CRm == 3 && opc2 == 4) || // DBGOSDLR
                 (op == SystemAccessType_RT && opc1 == 0 && CRn == 1 && CRm == 4 && opc2 == 4)) then // DBGPRCR
            trap = MDCR_EL2.TDOSA == '1' || MDCR_EL2.TDE == '1' || HCR_EL2.TGE == '1';
        elsif opc1 == 0 && (!Halted() || !(op == SystemAccessType_RT && CRn == 0 && CRm == 5 && opc2 == 0)) then
            trap = MDCR_EL2.TDA == '1' || MDCR_EL2.TDE == '1' || HCR_EL2.TGE == '1';
        elsif opc1 == 1 then
            trap = CPTR_EL2.TTA == '1';
        elsif op == SystemAccessType_RT && opc1 == 7 && CRn == 0 && CRm == 0 && opc2 == 0 then  // JIDR
            trap = HCR_EL2.TID0 == '1';
    elsif cp_num == 14 && PSTATE.EL == EL2 then
        if opc1 == 1 then
            trap = CPTR_EL2.TTA == '1';
    elsif cp_num == 15 && PSTATE.EL IN {EL0, EL1} then
        if ((op == SystemAccessType_RT && opc1 == 0 && CRn == 1 && CRm == 0 && opc2 == 0) ||    // SCTLR
              (op == SystemAccessType_RT && opc1 == 0 && CRn == 2 && CRm == 0 && opc2 == 0) ||  // TTBR0
              (op == SystemAccessType_RRT && opc1 == 0 && CRm == 2) ||                          // TTBR0 (MRRC/MCCR)
              (op == SystemAccessType_RT && opc1 == 0 && CRn == 2 && CRm == 0 && opc2 == 1) ||  // TTBR1
              (op == SystemAccessType_RRT && opc1 == 1 && CRm == 2) ||                          // TTBR1 (MRRC/MCCR)
              (op == SystemAccessType_RT && opc1 == 0 && CRn == 2 && CRm == 0 && opc2 == 2) ||  // TTBCR
              (op == SystemAccessType_RT && opc1 == 0 && CRn == 2 && CRm == 0 && opc2 == 3) ||  // TTBCR2
              (op == SystemAccessType_RT && opc1 == 0 && CRn == 3 && CRm == 0 && opc2 == 0) ||  // DACR
              (op == SystemAccessType_RT && opc1 == 0 && CRn == 5 && CRm == 0 && opc2 == 0) ||  // DFSR
              (op == SystemAccessType_RT && opc1 == 0 && CRn == 5 && CRm == 0 && opc2 == 1) ||  // IFSR
              (op == SystemAccessType_RT && opc1 == 0 && CRn == 6 && CRm == 0 && opc2 == 0) ||  // DFAR
              (op == SystemAccessType_RT && opc1 == 0 && CRn == 6 && CRm == 0 && opc2 == 2) ||  // IFAR
              (op == SystemAccessType_RT && opc1 == 0 && CRn == 5 && CRm == 1 && opc2 == 0) ||  // ADFSR
              (op == SystemAccessType_RT && opc1 == 0 && CRn == 5 && CRm == 1 && opc2 == 1) ||  // AIFSR
              (op == SystemAccessType_RT && opc1 == 0 && CRn == 10 && CRm == 2 && opc2 == 0) || // PRRR/MAIR0
              (op == SystemAccessType_RT && opc1 == 0 && CRn == 10 && CRm == 2 && opc2 == 1) || // NMRR/MAIR1
              (op == SystemAccessType_RT && opc1 == 0 && CRn == 10 && CRm == 3 && opc2 == 0) || // AMAIR0
              (op == SystemAccessType_RT && opc1 == 0 && CRn == 10 && CRm == 3 && opc2 == 1) || // AMAIR1
              (op == SystemAccessType_RT && opc1 == 0 && CRn == 13 && CRm == 0 && opc2 == 1)) then // CONTEXTIDR
            trap = if write then HCR_EL2.TVM == '1' else HCR_EL2.TRVM == '1';
        elsif op == SystemAccessType_RT && opc1 == 0 && CRn == 8 then                           // TLBI
            trap = write && HCR_EL2.TTLB == '1';
        elsif ((op == SystemAccessType_RT && opc1 == 0 && CRn == 7 && CRm == 6 && opc2 == 2) ||   // DCISW
                 (op == SystemAccessType_RT && opc1 == 0 && CRn == 7 && CRm == 10 && opc2 == 2) || // DCCSW
                 (op == SystemAccessType_RT && opc1 == 0 && CRn == 7 && CRm == 14 && opc2 == 2)) then // DCCISW
            trap = write && HCR_EL2.TSW == '1';
        elsif ((op == SystemAccessType_RT && opc1 == 0 && CRn == 7 && CRm == 6 && opc2 == 1) ||   // DCIMVAC
                 (op == SystemAccessType_RT && opc1 == 0 && CRn == 7 && CRm == 10 && opc2 == 1) || // DCCMVAC
                 (op == SystemAccessType_RT && opc1 == 0 && CRn == 7 && CRm == 14 && opc2 == 1)) then // DCCIMVAC
            trap = write && HCR_EL2.TPCP == '1';
        elsif ((op == SystemAccessType_RT && opc1 == 0 && CRn == 7 && CRm == 5 && opc2 == 1) ||   // ICIMVAU
                 (op == SystemAccessType_RT && opc1 == 0 && CRn == 7 && CRm == 5 && opc2 == 0) || // ICIALLU
                 (op == SystemAccessType_RT && opc1 == 0 && CRn == 7 && CRm == 1 && opc2 == 0) || // ICIALLUIS
                 (op == SystemAccessType_RT && opc1 == 0 && CRn == 7 && CRm == 11 && opc2 == 1)) then // DCCMVAU
            trap = write && HCR_EL2.TPU == '1';
        elsif ((op == SystemAccessType_RT && opc1 == 0 && CRn == 1 && CRm == 0 && opc2 == 1) ||   // ACTLR
                 (op == SystemAccessType_RT && opc1 == 0 && CRn == 1 && CRm == 0 && opc2 == 3)) then // ACTLR2
            trap = HCR_EL2.TACR == '1';
        elsif ((op == SystemAccessType_RT && opc1 == 0 && CRn == 0 && CRm == 0 && opc2 == 2) ||   // TCMTR
                 (op == SystemAccessType_RT && opc1 == 0 && CRn == 0 && CRm == 0 && opc2 == 3) || // TLBTR
                 (op == SystemAccessType_RT && opc1 == 0 && CRn == 0 && CRm == 0 && opc2 == 6) || // REVIDR
                 (op == SystemAccessType_RT && opc1 == 1 && CRn == 0 && CRm == 0 && opc2 == 7)) then // AIDR
            trap = HCR_EL2.TID1 == '1';
        elsif ((op == SystemAccessType_RT && opc1 == 0 && CRn == 0 && CRm == 0 && opc2 == 1) ||   // CTR
                 (op == SystemAccessType_RT && opc1 == 1 && CRn == 0 && CRm == 0 && opc2 == 0) || // CCSIDR
                 (op == SystemAccessType_RT && opc1 == 1 && CRn == 0 && CRm == 0 && opc2 == 2) || // CCSIDR2
                 (op == SystemAccessType_RT && opc1 == 1 && CRn == 0 && CRm == 0 && opc2 == 1) || // CLIDR
                 (op == SystemAccessType_RT && opc1 == 2 && CRn == 0 && CRm == 0 && opc2 == 0)) then // CSSELR
            trap = HCR_EL2.TID2 == '1';
        elsif ((op == SystemAccessType_RT && opc1 == 0 && CRn == 0 && CRm == 1) ||                // ID_*
                 (op == SystemAccessType_RT && opc1 == 0 && CRn == 0 && CRm == 2 && opc2 <= 7) || // ID_*
                 (op == SystemAccessType_RT && opc1 == 0 && CRn == 0 && CRm >= 3 && opc2 <= 1) || // Reserved
                 (op == SystemAccessType_RT && opc1 == 0 && CRn == 0 && CRm == 3 && opc2 == 2) || // Reserved
                 (op == SystemAccessType_RT && opc1 == 0 && CRn == 0 && CRm == 5 && opc2 IN {4,5})) then // Reserved
            trap = HCR_EL2.TID3 == '1';
        elsif op == SystemAccessType_RT && opc1 == 0 && CRn == 1 && CRm == 0 && opc2 == 2 then    // CPACR
            trap = CPTR_EL2.TCPAC == '1';
        elsif op == SystemAccessType_RT && opc1 == 0 && CRn == 9 && CRm == 12 && opc2 == 0 then   // PMCR
            trap = MDCR_EL2.TPMCR == '1' || MDCR_EL2.TPM == '1';
        elsif ((op == SystemAccessType_RT && opc1 == 0 && CRn == 14 && CRm >= 8) ||               // PMEVCNTR<n>/PMEVTYPER<n>
                 (op == SystemAccessType_RT && opc1 == 0 && CRn == 9 && CRm IN {12,13,14}) ||     // PM*
                 (op == SystemAccessType_RRT && opc1 == 0 && CRm == 9)) then                      // PMCCNTR (MRRC/MCCR)
            trap = MDCR_EL2.TPM == '1';
        elsif op == SystemAccessType_RT && opc1 == 0 && CRn == 14 && CRm == 2 && opc2 IN {0,1,2} then // CNTP_TVAL CNTP_CTL CNTP_CVAL
            if !HaveVirtHostExt() || HCR_EL2.E2H == '0' then
                trap = CNTHCTL_EL2.EL1PCEN == '0';
            else
                trap = CNTHCTL_EL2.EL1PTEN == '0';
        elsif op == SystemAccessType_RRT && opc1 == 0 && CRm == 14 then                           // CNTPCT
            trap = CNTHCTL_EL2.EL1PCTEN == '0';
        elsif ((op == SystemAccessType_RT && opc1 == 0 && CRn == 1 && CRm == 1 && opc2 == 0) ||   // SCR
                 (op == SystemAccessType_RT && opc1 == 0 && CRn == 1 && CRm == 1 && opc2 == 2) || // NSACR
                 (op == SystemAccessType_RT && opc1 == 0 && CRn == 12 && CRm == 0 && opc2 == 1) || // MVBAR
                 (op == SystemAccessType_RT && opc1 == 0 && CRn == 1 && CRm == 3 && opc2 == 1) || // SDCR
                 (op == SystemAccessType_RT && opc1 == 0 && CRn == 7 && CRm == 8 && opc2 >= 4)) then // ATS12NSOxx
            trap = IsSecureEL2Enabled() && PSTATE.EL == EL1 && IsSecure() && ELUsingAArch32(EL1);

    if trap then
        AArch64.AArch32SystemAccessTrap(EL2, instr);

Library pseudocode for aarch64/exceptions/traps/AArch64.CheckAArch32SystemAccessEL3Traps

// AArch64.CheckAArch32SystemAccessEL3Traps()
// ==========================================
// Check for configurable traps to EL3 of an AArch32 System register access instruction.

AArch64.CheckAArch32SystemAccessEL3Traps(bits(32) instr)
    assert HaveEL(EL3) && PSTATE.EL != EL3;

    // Decode the AArch32 System register access instruction
    (op, cp_num, opc1, CRn, CRm, opc2, write) = AArch32.DecodeSysRegAccess(instr);
    trap = FALSE;

    if cp_num == 14 then
        if ((op == SystemAccessType_RT && opc1 == 0 && CRn == 1 && CRm == 0 && opc2 == 4 && !write) ||  // DBGOSLAR
              (op == SystemAccessType_RT && opc1 == 0 && CRn == 1 && CRm == 1 && opc2 == 4 && write) || // DBGOSLSR
              (op == SystemAccessType_RT && opc1 == 0 && CRn == 1 && CRm == 3 && opc2 == 4) ||          // DBGOSDLR
              (op == SystemAccessType_RT && opc1 == 0 && CRn == 1 && CRm == 4 && opc2 == 4)) then       // DBGPRCR
            trap = MDCR_EL3.TDOSA == '1';
        elsif opc1 == 0 && (!Halted() || !(op == SystemAccessType_RT && CRn == 0 && CRm == 5 && opc2 == 0)) then
            trap = MDCR_EL3.TDA == '1';
        elsif opc1 == 1 then
            trap = CPTR_EL3.TTA == '1';
    elsif cp_num == 15 then
        if ((op == SystemAccessType_RT && opc1 == 0 && CRn == 1 && CRm == 1 && opc2 == 0) ||    // SCR
              (op == SystemAccessType_RT && opc1 == 0 && CRn == 1 && CRm == 1 && opc2 == 2) ||  // NSACR
              (op == SystemAccessType_RT && opc1 == 0 && CRn == 12 && CRm == 0 && opc2 == 1) || // MVBAR
              (op == SystemAccessType_RT && opc1 == 0 && CRn == 1 && CRm == 3 && opc2 == 1) ||  // SDCR
              (op == SystemAccessType_RT && opc1 == 0 && CRn == 7 && CRm == 8 && opc2 >= 4)) then // ATS12NSOxx
            trap = PSTATE.EL == EL1 && IsSecure();
        elsif ((op == SystemAccessType_RT && opc1 == 0 && CRn == 1 && CRm == 0 && opc2 == 2) || // CPACR
                 (op == SystemAccessType_RT && opc1 == 4 && CRn == 1 && CRm == 1 && opc2 == 2)) then // HCPTR
            trap = CPTR_EL3.TCPAC == '1';
        elsif ((op == SystemAccessType_RT && opc1 == 0 && CRn == 14 && CRm >= 8) ||             // PMEVCNTR<n>/PMEVTYPER<n>
                 (op == SystemAccessType_RT && opc1 == 0 && CRn == 9 && CRm IN {12,13,14}) ||   // PM*
                 (op == SystemAccessType_RRT && opc1 == 0 && CRm == 9)) then                    // PMCCNTR (MRRC/MCCR)
            trap = MDCR_EL3.TPM == '1';

    if trap then
        AArch64.AArch32SystemAccessTrap(EL3, instr);

Library pseudocode for aarch64/exceptions/traps/AArch64.CheckAArch32SystemAccessTraps

// AArch64.CheckAArch32SystemAccessTraps()
// =======================================
// Check for configurable disables or traps to a higher EL of an AArch32 System register access.

AArch64.CheckAArch32SystemAccessTraps(bits(32) instr)

    if PSTATE.EL == EL0 then
        AArch64.CheckAArch32SystemAccessEL1Traps(instr);

    if EL2Enabled() && PSTATE.EL IN {EL0, EL1, EL2} && !IsInHost() then
        AArch64.CheckAArch32SystemAccessEL2Traps(instr);

    if HaveEL(EL3) && !ELUsingAArch32(EL3) then
        AArch64.CheckAArch32SystemAccessEL3Traps(instr);

Library pseudocode for aarch64/exceptions/traps/AArch64.CheckCP15InstrCoarseTraps

// AArch64.CheckCP15InstrCoarseTraps()
// ===================================
// Check for coarse-grained AArch32 CP15 traps in HSTR_EL2 and HCR_EL2.

boolean AArch64.CheckCP15InstrCoarseTraps(integer CRn, integer nreg, integer CRm)

    // Check for coarse-grained Hyp traps
    if EL2Enabled() && PSTATE.EL IN {EL0, EL1} then
        // Check for MCR, MRC, MCRR and MRRC disabled by HSTR_EL2<CRn/CRm>
        major = if nreg == 1 then CRn else CRm;
        if !IsInHost() && !(major IN {4,14}) && HSTR_EL2<major> == '1' then
            return TRUE;

        // Check for MRC and MCR disabled by HCR_EL2.TIDCP
        if (HCR_EL2.TIDCP == '1' && nreg == 1 &&
              ((CRn == 9  && CRm IN {0,1,2, 5,6,7,8 }) ||
               (CRn == 10 && CRm IN {0,1, 4, 8 })      ||
               (CRn == 11 && CRm IN {0,1,2,3,4,5,6,7,8,15}))) then
            return TRUE;

    return FALSE;
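A hedged C sketch of the HSTR_EL2<CRn/CRm> test above (illustrative only; the helper name is invented): the "major" number is CRn for 32-bit MCR/MRC accesses and CRm for 64-bit MCRR/MRRC accesses, and majors 4 and 14 have no corresponding HSTR_EL2 trap bit.

#include <stdbool.h>
#include <stdint.h>

static bool hstr_el2_traps(uint64_t hstr_el2, unsigned crn, unsigned crm,
                           unsigned nreg /* 1 for MCR/MRC, 2 for MCRR/MRRC */)
{
    unsigned major = (nreg == 1) ? crn : crm;
    if (major == 4 || major == 14)
        return false;                      /* no T4/T14 trap bits */
    return (hstr_el2 >> major) & 1u;
}

int main(void)
{
    /* With HSTR_EL2.T7 set, an MCR/MRC to CRn == 7 is trapped */
    return hstr_el2_traps(1u << 7, 7, 0, 1) ? 0 : 1;
}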

Library pseudocode for aarch64/exceptions/traps/AArch64.CheckFPAdvSIMDEnabled

// AArch64.CheckFPAdvSIMDEnabled()
// ===============================
// Check against CPACR[]

AArch64.CheckFPAdvSIMDEnabled()

    if PSTATE.EL IN {EL0, EL1} && !IsInHost() then
        // Check if access disabled in CPACR_EL1
        case CPACR[].FPEN of
            when 'x0' disabled = TRUE;
            when '01' disabled = PSTATE.EL == EL0;
            when '11' disabled = FALSE;
        if disabled then AArch64.AdvSIMDFPAccessTrap(EL1);

    AArch64.CheckFPAdvSIMDTrap();               // Also check against CPTR_EL2 and CPTR_EL3
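A small C sketch of the CPACR_EL1.FPEN decode above (illustrative only; the helper name is invented): 'x0' (i.e. '00' or '10') disables FP/SIMD at both EL0 and EL1, '01' disables it at EL0 only, and '11' enables it at both.

#include <stdbool.h>

static bool fp_disabled_by_cpacr(unsigned fpen /* 2-bit field */, bool at_el0)
{
    switch (fpen & 3u) {
    case 0: case 2: return true;       /* 'x0': disabled at EL0 and EL1 */
    case 1:         return at_el0;     /* '01': disabled at EL0 only */
    default:        return false;      /* '11': enabled */
    }
}

int main(void)
{
    return fp_disabled_by_cpacr(1, true) ? 0 : 1;   /* '01' at EL0: disabled */
}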

Library pseudocode for aarch64/exceptions/traps/AArch64.CheckFPAdvSIMDTrap

// AArch64.CheckFPAdvSIMDTrap()
// ============================
// Check against CPTR_EL2 and CPTR_EL3.

AArch64.CheckFPAdvSIMDTrap()

    if EL2Enabled() then
        // Check if access disabled in CPTR_EL2
        if HaveVirtHostExt() && HCR_EL2.E2H == '1' then
            case CPTR_EL2.FPEN of
                when 'x0' disabled = !(PSTATE.EL == EL1 && HCR_EL2.TGE == '1');
                when '01' disabled = (PSTATE.EL == EL0 && HCR_EL2.TGE == '1');
                when '11' disabled = FALSE;
            if disabled then AArch64.AdvSIMDFPAccessTrap(EL2);
        else
            if CPTR_EL2.TFP == '1' then AArch64.AdvSIMDFPAccessTrap(EL2);

    if HaveEL(EL3) then
        // Check if access disabled in CPTR_EL3
        if CPTR_EL3.TFP == '1' then AArch64.AdvSIMDFPAccessTrap(EL3);

    return;

Library pseudocode for aarch64/exceptions/traps/AArch64.CheckForERetTrap

// AArch64.CheckForERetTrap()
// ==========================
// Check for trap on ERET, ERETAA, ERETAB instruction

AArch64.CheckForERetTrap(boolean eret_with_pac, boolean pac_uses_key_a)

    // Non-secure EL1 execution of ERET, ERETAA, ERETAB when HCR_EL2.NV bit is set, is trapped to EL2
    route_to_el2 = HaveNVExt() && EL2Enabled() && PSTATE.EL == EL1 && HCR_EL2.NV == '1';
    if route_to_el2 then
        ExceptionRecord exception;
        bits(64) preferred_exception_return = ThisInstrAddr();
        vect_offset = 0x0;
        exception = ExceptionSyndrome(Exception_ERetTrap);
        if !eret_with_pac then                      // ERET
            exception.syndrome<1> = '0';
            exception.syndrome<0> = '0';            // RES0
        else
            exception.syndrome<1> = '1';
            if pac_uses_key_a then                  // ERETAA
                exception.syndrome<0> = '0';
            else                                    // ERETAB
                exception.syndrome<0> = '1';
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/traps/AArch64.CheckForSMCUndefOrTrap

// AArch64.CheckForSMCUndefOrTrap()
// ================================
// Check for UNDEFINED or trap on SMC instruction

AArch64.CheckForSMCUndefOrTrap(bits(16) imm)

    route_to_el2 = EL2Enabled() && PSTATE.EL == EL1 && HCR_EL2.TSC == '1';

    if PSTATE.EL == EL0 then UNDEFINED;

    if !HaveEL(EL3) then
        if EL2Enabled() && PSTATE.EL == EL1 then
            if HaveNVExt() && HCR_EL2.NV == '1' && HCR_EL2.TSC == '1' then
                route_to_el2 = TRUE;
            else
                UNDEFINED;
        else
            UNDEFINED;
    else
        route_to_el2 = EL2Enabled() && PSTATE.EL == EL1 && HCR_EL2.TSC == '1';

    if route_to_el2 then
        bits(64) preferred_exception_return = ThisInstrAddr();
        vect_offset = 0x0;
        exception = ExceptionSyndrome(Exception_MonitorCall);
        exception.syndrome<15:0> = imm;
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/traps/AArch64.CheckForWFxTrap

// AArch64.CheckForWFxTrap()
// =========================
// Check for trap on WFE or WFI instruction

AArch64.CheckForWFxTrap(bits(2) target_el, boolean is_wfe)
    assert HaveEL(target_el);

    case target_el of
        when EL1 trap = (if is_wfe then SCTLR[].nTWE else SCTLR[].nTWI) == '0';
        when EL2 trap = (if is_wfe then HCR_EL2.TWE else HCR_EL2.TWI) == '1';
        when EL3 trap = (if is_wfe then SCR_EL3.TWE else SCR_EL3.TWI) == '1';
    if trap then
        AArch64.WFxTrap(target_el, is_wfe);
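
Note the asymmetry above: the EL1 controls SCTLR[].nTWE/nTWI are active-low ("not trap WFE/WFI", so '0' means trap), while the EL2 and EL3 controls are active-high. A minimal C sketch of that decision, with the register bits passed in as illustrative booleans:

#include <stdbool.h>

enum target_el { TGT_EL1 = 1, TGT_EL2 = 2, TGT_EL3 = 3 };

/* Mirrors the case statement above: nTWE/nTWI trap when clear,
 * TWE/TWI trap when set. */
static bool wfx_traps(enum target_el target, bool is_wfe,
                      bool ntwe, bool ntwi,   /* SCTLR[].nTWE/nTWI  */
                      bool twe, bool twi)     /* HCR_EL2 or SCR_EL3 */
{
    if (target == TGT_EL1)
        return !(is_wfe ? ntwe : ntwi);
    return is_wfe ? twe : twi;
}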

Library pseudocode for aarch64/exceptions/traps/AArch64.CheckIllegalState

// AArch64.CheckIllegalState()
// ===========================
// Check PSTATE.IL bit and generate Illegal Execution state exception if set.

AArch64.CheckIllegalState()
    if PSTATE.IL == '1' then
        route_to_el2 = EL2Enabled() && PSTATE.EL == EL0 && HCR_EL2.TGE == '1';

        bits(64) preferred_exception_return = ThisInstrAddr();
        vect_offset = 0x0;

        exception = ExceptionSyndrome(Exception_IllegalState);

        if UInt(PSTATE.EL) > UInt(EL1) then
            AArch64.TakeException(PSTATE.EL, exception, preferred_exception_return, vect_offset);
        elsif route_to_el2 then
            AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);
        else
            AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/traps/AArch64.MonitorModeTrap

// AArch64.MonitorModeTrap()
// =========================
// Trapped use of Monitor mode features in a Secure EL1 AArch32 mode

AArch64.MonitorModeTrap()
    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x0;

    exception = ExceptionSyndrome(Exception_Uncategorized);

    if IsSecureEL2Enabled() then
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);
    AArch64.TakeException(EL3, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/traps/AArch64.SystemRegisterTrap

// AArch64.SystemRegisterTrap()
// ============================
// Trapped system register access other than due to CPTR_EL2 and CPACR_EL1

AArch64.SystemRegisterTrap(bits(2) target_el, bits(2) op0, bits(3) op2, bits(3) op1, bits(4) crn,
                           bits(5) rt, bits(4) crm, bit dir)
    assert UInt(target_el) >= UInt(PSTATE.EL);

    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x0;

    exception = ExceptionSyndrome(Exception_SystemRegisterTrap);
    exception.syndrome<21:20> = op0;
    exception.syndrome<19:17> = op2;
    exception.syndrome<16:14> = op1;
    exception.syndrome<13:10> = crn;
    exception.syndrome<9:5>   = rt;
    exception.syndrome<4:1>   = crm;
    exception.syndrome<0>     = dir;

    if target_el == EL1 && EL2Enabled() && HCR_EL2.TGE == '1' then
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);
    else
        AArch64.TakeException(target_el, exception, preferred_exception_return, vect_offset);
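
The syndrome assignments above pack the register-number fields of the trapped system register access into the low 22 bits of the ISS. A minimal C sketch of the same packing (an illustrative helper, not part of the specification):

#include <stdint.h>

/* ISS layout used above: op0<21:20>, op2<19:17>, op1<16:14>,
 * CRn<13:10>, Rt<9:5>, CRm<4:1>, direction<0> (1 = read). */
static uint32_t sysreg_trap_iss(uint32_t op0, uint32_t op1, uint32_t op2,
                                uint32_t crn, uint32_t crm, uint32_t rt,
                                uint32_t dir)
{
    return ((op0 & 0x3u)  << 20) |
           ((op2 & 0x7u)  << 17) |
           ((op1 & 0x7u)  << 14) |
           ((crn & 0xfu)  << 10) |
           ((rt  & 0x1fu) <<  5) |
           ((crm & 0xfu)  <<  1) |
            (dir & 0x1u);
}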

Library pseudocode for aarch64/exceptions/traps/AArch64.UndefinedFault

// AArch64.UndefinedFault()
// ========================

AArch64.UndefinedFault()

    route_to_el2 = EL2Enabled() && PSTATE.EL == EL0 && HCR_EL2.TGE == '1';
    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x0;

    exception = ExceptionSyndrome(Exception_Uncategorized);

    if UInt(PSTATE.EL) > UInt(EL1) then
        AArch64.TakeException(PSTATE.EL, exception, preferred_exception_return, vect_offset);
    elsif route_to_el2 then
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);
    else
        AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/traps/AArch64.WFxTrap

// AArch64.WFxTrap()
// =================

AArch64.WFxTrap(bits(2) target_el, boolean is_wfe)
    assert UInt(target_el) > UInt(PSTATE.EL);

    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x0;

    exception = ExceptionSyndrome(Exception_WFxTrap);
    exception.syndrome<24:20> = ConditionSyndrome();
    exception.syndrome<0> = if is_wfe then '1' else '0';

    if target_el == EL1 && EL2Enabled() && HCR_EL2.TGE == '1' then
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);
    else
        AArch64.TakeException(target_el, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/traps/CheckFPAdvSIMDEnabled64

// CheckFPAdvSIMDEnabled64()
// =========================
// AArch64 instruction wrapper

CheckFPAdvSIMDEnabled64()
    AArch64.CheckFPAdvSIMDEnabled();

Library pseudocode for aarch64/functions/aborts/AArch64.CreateFaultRecord

// AArch64.CreateFaultRecord()
// ===========================

FaultRecord AArch64.CreateFaultRecord(Fault type, bits(52) ipaddress, bit NS, integer level,
                                      AccType acctype, boolean write, bit extflag,
                                      bits(2) errortype, boolean secondstage, boolean s2fs1walk)

    FaultRecord fault;
    fault.type = type;
    fault.domain = bits(4) UNKNOWN;          // Not used from AArch64
    fault.debugmoe = bits(4) UNKNOWN;        // Not used from AArch64
    fault.errortype = errortype;
    fault.ipaddress.NS = NS;
    fault.ipaddress.address = ipaddress;
    fault.level = level;
    fault.acctype = acctype;
    fault.write = write;
    fault.extflag = extflag;
    fault.secondstage = secondstage;
    fault.s2fs1walk = s2fs1walk;

    return fault;

Library pseudocode for aarch64/functions/aborts/AArch64.FaultSyndrome

// AArch64.FaultSyndrome()
// =======================
// Creates an exception syndrome value for Abort and Watchpoint exceptions taken to
// an Exception Level using AArch64.

bits(25) AArch64.FaultSyndrome(boolean d_side, FaultRecord fault)
    assert fault.type != Fault_None;

    bits(25) iss = Zeros();
    if HaveRASExt() && IsExternalSyncAbort(fault) then iss<12:11> = fault.errortype; // SET
    if d_side then
        if IsSecondStage(fault) && !fault.s2fs1walk then iss<24:14> = LSInstructionSyndrome();
        if HaveNV2Ext() && fault.acctype == AccType_NV2REGISTER then
            iss<13> = '1';    // Value of '1' indicates fault is generated by use of VNCR_EL2
        if fault.acctype IN {AccType_DC, AccType_DC_UNPRIV, AccType_IC, AccType_AT} then
            iss<8> = '1';  iss<6> = '1';
        else
            iss<6> = if fault.write then '1' else '0';
    if IsExternalAbort(fault) then iss<9> = fault.extflag;
    iss<7> = if fault.s2fs1walk then '1' else '0';
    iss<5:0> = EncodeLDFSC(fault.type, fault.level);

    return iss;
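
For the common data-side case with no load/store instruction syndrome, the bits set above reduce to a handful of flags plus the fault status code. A minimal C sketch under those assumptions (fsc stands in for EncodeLDFSC(), which is spec-internal):

#include <stdbool.h>
#include <stdint.h>

/* Bits as assigned above: 9 = EA (extflag), 8 = CM (cache maintenance,
 * which also forces WnR), 7 = S1PTW (s2fs1walk), 6 = WnR (write),
 * 5:0 = fault status code. */
static uint32_t data_abort_iss(bool extabort, bool cache_maint,
                               bool s2fs1walk, bool is_write, uint32_t fsc)
{
    uint32_t iss = fsc & 0x3fu;
    if (cache_maint)   iss |= (1u << 8) | (1u << 6);
    else if (is_write) iss |= 1u << 6;
    if (extabort)      iss |= 1u << 9;
    if (s2fs1walk)     iss |= 1u << 7;
    return iss;
}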

Library pseudocode for aarch64/functions/exclusive/AArch64.ExclusiveMonitorsPass

// AArch64.ExclusiveMonitorsPass()
// ===============================
// Return TRUE if the Exclusives monitors for the current PE include all of the addresses
// associated with the virtual address region of size bytes starting at address.
// The immediately following memory write must be to the same addresses.

boolean AArch64.ExclusiveMonitorsPass(bits(64) address, integer size)

    // It is IMPLEMENTATION DEFINED whether the detection of memory aborts happens
    // before or after the check on the local Exclusives monitor. As a result a failure
    // of the local monitor can occur on some implementations even if the memory
    // access would give a memory abort.

    acctype = AccType_ATOMIC;
    iswrite = TRUE;
    aligned = (address == Align(address, size));

    if !aligned then
        secondstage = FALSE;
        AArch64.Abort(address, AArch64.AlignmentFault(acctype, iswrite, secondstage));

    passed = AArch64.IsExclusiveVA(address, ProcessorID(), size);
    if !passed then
        return FALSE;

    memaddrdesc = AArch64.TranslateAddress(address, acctype, iswrite, aligned, size);

    // Check for aborts or debug exceptions
    if IsFault(memaddrdesc) then
        AArch64.Abort(address, memaddrdesc.fault);

    passed = IsExclusiveLocal(memaddrdesc.paddress, ProcessorID(), size);
    if passed then
        ClearExclusiveLocal(ProcessorID());
    if memaddrdesc.memattrs.shareable then
        passed = IsExclusiveGlobal(memaddrdesc.paddress, ProcessorID(), size);

    return passed;
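
As a rough illustration of the local-monitor half of this check (a sketch only: granule rounding, the global monitor, translation, and aborts are all omitted, and the names are illustrative):

#include <stdbool.h>
#include <stdint.h>

struct local_monitor { bool valid; uint64_t addr; unsigned size; };

/* Load-exclusive side: record the region (MarkExclusiveLocal). */
static void mark_exclusive_local(struct local_monitor *m,
                                 uint64_t addr, unsigned size)
{
    m->valid = true; m->addr = addr; m->size = size;
}

/* Store-exclusive side: pass only if the monitor covers the same
 * region; a passing check consumes the monitor, as the
 * ClearExclusiveLocal() call above does. */
static bool local_monitor_pass(struct local_monitor *m,
                               uint64_t addr, unsigned size)
{
    bool passed = m->valid && m->addr == addr && m->size == size;
    if (passed) m->valid = false;
    return passed;
}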

Library pseudocode for aarch64/functions/exclusive/AArch64.IsExclusiveVA

// An optional IMPLEMENTATION DEFINED test for an exclusive access to a virtual
// address region of size bytes starting at address.
//
// It is permitted (but not required) for this function to return FALSE and
// cause a store exclusive to fail if the virtual address region is not
// totally included within the region recorded by MarkExclusiveVA().
//
// It is always safe to return TRUE which will check the physical address only.

boolean AArch64.IsExclusiveVA(bits(64) address, integer processorid, integer size);

Library pseudocode for aarch64/functions/exclusive/AArch64.MarkExclusiveVA

// Optionally record an exclusive access to the virtual address region of size bytes
// starting at address for processorid.

AArch64.MarkExclusiveVA(bits(64) address, integer processorid, integer size);

Library pseudocode for aarch64/functions/exclusive/AArch64.SetExclusiveMonitors

// AArch64.SetExclusiveMonitors()
// ==============================
// Sets the Exclusives monitors for the current PE to record the addresses associated
// with the virtual address region of size bytes starting at address.

AArch64.SetExclusiveMonitors(bits(64) address, integer size)

    acctype = AccType_ATOMIC;
    iswrite = FALSE;
    aligned = (address == Align(address, size));

    memaddrdesc = AArch64.TranslateAddress(address, acctype, iswrite, aligned, size);

    // Check for aborts or debug exceptions
    if IsFault(memaddrdesc) then
        return;

    if memaddrdesc.memattrs.shareable then
        MarkExclusiveGlobal(memaddrdesc.paddress, ProcessorID(), size);

    MarkExclusiveLocal(memaddrdesc.paddress, ProcessorID(), size);

    AArch64.MarkExclusiveVA(address, ProcessorID(), size);

Library pseudocode for aarch64/functions/fusedrstep/FPRSqrtStepFused

// FPRSqrtStepFused()
// ==================

bits(N) FPRSqrtStepFused(bits(N) op1, bits(N) op2)
    assert N IN {16, 32, 64};
    bits(N) result;
    op1 = FPNeg(op1);
    (type1,sign1,value1) = FPUnpack(op1, FPCR);
    (type2,sign2,value2) = FPUnpack(op2, FPCR);
    (done,result) = FPProcessNaNs(type1, type2, op1, op2, FPCR);
    if !done then
        inf1 = (type1 == FPType_Infinity);  inf2 = (type2 == FPType_Infinity);
        zero1 = (type1 == FPType_Zero);     zero2 = (type2 == FPType_Zero);
        if (inf1 && zero2) || (zero1 && inf2) then
            result = FPOnePointFive('0');
        elsif inf1 || inf2 then
            result = FPInfinity(sign1 EOR sign2);
        else
            // Fully fused multiply-add and halve
            result_value = (3.0 + (value1 * value2)) / 2.0;
            if result_value == 0.0 then
                // Sign of exact zero result depends on rounding mode
                sign = if FPRoundingMode(FPCR) == FPRounding_NEGINF then '1' else '0';
                result = FPZero(sign);
            else
                result = FPRound(result_value, FPCR);
    return result;
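
Because op1 is negated on entry, the function computes (3 - a*b) / 2, the factor used by the Newton-Raphson iteration x' = x * (3 - d*x*x) / 2 for 1/sqrt(d). A small C demonstration of that convergence (illustrative only; the special-case handling above is what the operation actually specifies):

#include <math.h>
#include <stdio.h>

int main(void)
{
    double d = 2.0, x = 0.7;    /* crude initial estimate of 1/sqrt(2) */
    for (int i = 0; i < 4; i++) {
        /* step = FPRSqrtStepFused(d*x, x) = (3 - d*x*x)/2 */
        double step = (3.0 - d * x * x) / 2.0;
        x *= step;
    }
    printf("%.15f vs %.15f\n", x, 1.0 / sqrt(2.0));
    return 0;
}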

Library pseudocode for aarch64/functions/fusedrstep/FPRecipStepFused

// FPRecipStepFused()
// ==================

bits(N) FPRecipStepFused(bits(N) op1, bits(N) op2)
    assert N IN {16, 32, 64};
    bits(N) result;
    op1 = FPNeg(op1);
    (type1,sign1,value1) = FPUnpack(op1, FPCR);
    (type2,sign2,value2) = FPUnpack(op2, FPCR);
    (done,result) = FPProcessNaNs(type1, type2, op1, op2, FPCR);
    if !done then
        inf1 = (type1 == FPType_Infinity);  inf2 = (type2 == FPType_Infinity);
        zero1 = (type1 == FPType_Zero);     zero2 = (type2 == FPType_Zero);
        if (inf1 && zero2) || (zero1 && inf2) then
            result = FPTwo('0');
        elsif inf1 || inf2 then
            result = FPInfinity(sign1 EOR sign2);
        else
            // Fully fused multiply-add
            result_value = 2.0 + (value1 * value2);
            if result_value == 0.0 then
                // Sign of exact zero result depends on rounding mode
                sign = if FPRoundingMode(FPCR) == FPRounding_NEGINF then '1' else '0';
                result = FPZero(sign);
            else
                result = FPRound(result_value, FPCR);
    return result;

Library pseudocode for aarch64/functions/memory/AArch64.CheckAlignment

// AArch64.CheckAlignment()
// ========================

boolean AArch64.CheckAlignment(bits(64) address, integer alignment, AccType acctype,
                               boolean iswrite)

    aligned = (address == Align(address, alignment));
    atomic  = acctype IN { AccType_ATOMIC, AccType_ATOMICRW, AccType_ORDEREDATOMIC, AccType_ORDEREDATOMICRW };
    ordered = acctype IN { AccType_ORDERED, AccType_ORDEREDRW, AccType_LIMITEDORDERED, AccType_ORDEREDATOMIC, AccType_ORDEREDATOMICRW };
    vector  = acctype == AccType_VEC;

    if SCTLR[].A == '1' then check = TRUE;
    elsif HaveUA16Ext() then
        check = (UInt(address<0+:4>) + alignment > 16) && ((ordered && SCTLR[].nAA == '0') || atomic);
    else check = atomic || ordered;

    if check && !aligned then
        secondstage = FALSE;
        AArch64.Abort(address, AArch64.AlignmentFault(acctype, iswrite, secondstage));

    return aligned;
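
When HaveUA16Ext() is implemented, an unaligned atomic or ordered access is only checked if it crosses a 16-byte boundary. The UInt(address<0+:4>) + alignment > 16 test above is ordinary offset arithmetic, sketched in C below (illustrative name):

#include <stdbool.h>
#include <stdint.h>

/* True when the access spills past a 16-byte granule, e.g. a 4-byte
 * access at offset 14 gives 14 + 4 = 18 > 16. */
static bool crosses_16_byte_granule(uint64_t address, unsigned size)
{
    return (unsigned)(address & 0xfu) + size > 16u;
}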

Library pseudocode for aarch64/functions/memory/AArch64.MemSingle

// AArch64.MemSingle[] - non-assignment (read) form
// ================================================
// Perform an atomic, little-endian read of 'size' bytes.

bits(size*8) AArch64.MemSingle[bits(64) address, integer size, AccType acctype, boolean wasaligned]
    assert size IN {1, 2, 4, 8, 16};
    assert address == Align(address, size);

    AddressDescriptor memaddrdesc;
    bits(size*8) value;
    iswrite = FALSE;

    // MMU or MPU
    memaddrdesc = AArch64.TranslateAddress(address, acctype, iswrite, wasaligned, size);

    // Check for aborts or debug exceptions
    if IsFault(memaddrdesc) then
        AArch64.Abort(address, memaddrdesc.fault);

    // Memory array access
    accdesc = CreateAccessDescriptor(acctype);
    if HaveMTEExt() then
        if AccessIsTagChecked(ZeroExtend(address, 64), acctype) then
            bits(4) ptag = TransformTag(ZeroExtend(address, 64));
            if !CheckTag(memaddrdesc, ptag, iswrite) then
                TagCheckFail(ZeroExtend(address, 64), iswrite);
    value = _Mem[memaddrdesc, size, accdesc];

    return value;

// AArch64.MemSingle[] - assignment (write) form
// =============================================
// Perform an atomic, little-endian write of 'size' bytes.

AArch64.MemSingle[bits(64) address, integer size, AccType acctype, boolean wasaligned] = bits(size*8) value
    assert size IN {1, 2, 4, 8, 16};
    assert address == Align(address, size);

    AddressDescriptor memaddrdesc;
    iswrite = TRUE;

    // MMU or MPU
    memaddrdesc = AArch64.TranslateAddress(address, acctype, iswrite, wasaligned, size);

    // Check for aborts or debug exceptions
    if IsFault(memaddrdesc) then
        AArch64.Abort(address, memaddrdesc.fault);

    // Effect on exclusives
    if memaddrdesc.memattrs.shareable then
        ClearExclusiveByAddress(memaddrdesc.paddress, ProcessorID(), size);

    // Memory array access
    accdesc = CreateAccessDescriptor(acctype);
    if HaveMTEExt() then
        if AccessIsTagChecked(ZeroExtend(address, 64), acctype) then
            bits(4) ptag = TransformTag(ZeroExtend(address, 64));
            if !CheckTag(memaddrdesc, ptag, iswrite) then
                TagCheckFail(ZeroExtend(address, 64), iswrite);
    _Mem[memaddrdesc, size, accdesc] = value;
    return;

Library pseudocode for aarch64/functions/memory/AddressWithAllocationTag

// AddressWithAllocationTag()
// ==========================
// Generate a 64-bit value containing a Logical Address Tag from a 64-bit
// virtual address and an Allocation Tag.
// If the extension is disabled, treats the Allocation Tag as '0000'.

bits(64) AddressWithAllocationTag(bits(64) address, bits(4) allocation_tag)
    bits(64) result = address;
    bits(4) tag = allocation_tag - ('000':address<55>);
    result<59:56> = tag;
    return result;

Library pseudocode for aarch64/functions/memory/AllocationTagFromAddress

// AllocationTagFromAddress()
// ==========================
// Generate a Tag from a 64-bit value containing a Logical Address Tag.
// If access to Allocation Tags is disabled, this function returns '0000'.

bits(4) AllocationTagFromAddress(bits(64) tagged_address)
    bits(4) logical_tag = tagged_address<59:56>;
    bits(4) tag = logical_tag + ('000':tagged_address<55>);
    return tag;
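
The two functions above are inverses: insertion subtracts bit 55 of the address from the Allocation Tag (modulo 16) to form the Logical Address Tag, and extraction adds it back. A small self-checking C sketch of that round trip (illustrative names):

#include <assert.h>
#include <stdint.h>

static uint64_t with_allocation_tag(uint64_t address, unsigned tag)
{
    unsigned bit55 = (unsigned)(address >> 55) & 1u;
    unsigned logical = (tag - bit55) & 0xfu;          /* tag - ('000':address<55>) */
    return (address & ~(0xfull << 56)) | ((uint64_t)logical << 56);
}

static unsigned allocation_tag(uint64_t tagged)
{
    unsigned bit55 = (unsigned)(tagged >> 55) & 1u;
    return (((unsigned)(tagged >> 56) & 0xfu) + bit55) & 0xfu;   /* logical + bit55 */
}

int main(void)
{
    for (unsigned tag = 0; tag < 16; tag++) {
        assert(allocation_tag(with_allocation_tag(0xffff000000000000ull, tag)) == tag);
        assert(allocation_tag(with_allocation_tag(0x0000700000000000ull, tag)) == tag);
    }
    return 0;
}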

Library pseudocode for aarch64/functions/memory/CheckSPAlignment

// CheckSPAlignment()
// ==================
// Check correct stack pointer alignment for AArch64 state.

CheckSPAlignment()
    bits(64) sp = SP[];

    if PSTATE.EL == EL0 then
        stack_align_check = (SCTLR[].SA0 != '0');
    else
        stack_align_check = (SCTLR[].SA != '0');

    if stack_align_check && sp != Align(sp, 16) then
        AArch64.SPAlignmentFault();

    return;

Library pseudocode for aarch64/functions/memory/CheckTag

// CheckTag()
// ==========
// Performs a Tag Check operation for a memory access and returns
// whether the check passed

boolean CheckTag(AddressDescriptor memaddrdesc, bits(4) ptag, boolean write)
    if memaddrdesc.memattrs.tagged then
        bits(64) paddress = ZeroExtend(memaddrdesc.paddress.address);
        return ptag == MemTag[paddress];
    else
        return TRUE;

Library pseudocode for aarch64/functions/memory/IsBlockDescriptorNTBitValid

// If the implementation supports changing the block size without a break-before-make
// approach, then for implementations that have level 1 or 2 support, the nT bit in
// the block descriptor is valid.

boolean IsBlockDescriptorNTBitValid();

Library pseudocode for aarch64/functions/memory/Mem

// Mem[] - non-assignment (read) form
// ==================================
// Perform a read of 'size' bytes. The access byte order is reversed for a big-endian access.
// Instruction fetches would call AArch64.MemSingle directly.

bits(size*8) Mem[bits(64) address, integer size, AccType acctype]
    assert size IN {1, 2, 4, 8, 16};
    bits(size*8) value;
    boolean iswrite = FALSE;

    aligned = AArch64.CheckAlignment(address, size, acctype, iswrite);
    if size != 16 || !(acctype IN {AccType_VEC, AccType_VECSTREAM}) then
        atomic = aligned;
    else
        // 128-bit SIMD&FP loads are treated as a pair of 64-bit single-copy atomic accesses
        // 64-bit aligned.
        atomic = address == Align(address, 8);

    if !atomic then
        assert size > 1;
        value<7:0> = AArch64.MemSingle[address, 1, acctype, aligned];

        // For subsequent bytes it is CONSTRAINED UNPREDICTABLE whether an unaligned Device memory
        // access will generate an Alignment Fault, as to get this far means the first byte did
        // not, so we must be changing to a new translation page.
        if !aligned then
            c = ConstrainUnpredictable(Unpredictable_DEVPAGE2);
            assert c IN {Constraint_FAULT, Constraint_NONE};
            if c == Constraint_NONE then aligned = TRUE;

        for i = 1 to size-1
            value<8*i+7:8*i> = AArch64.MemSingle[address+i, 1, acctype, aligned];
    elsif size == 16 && acctype IN {AccType_VEC, AccType_VECSTREAM} then
        value<63:0>   = AArch64.MemSingle[address,   8, acctype, aligned];
        value<127:64> = AArch64.MemSingle[address+8, 8, acctype, aligned];
    else
        value = AArch64.MemSingle[address, size, acctype, aligned];

    if (HaveNV2Ext() && acctype == AccType_NV2REGISTER && SCTLR_EL2.EE == '1') || BigEndian() then
        value = BigEndianReverse(value);
    return value;

// Mem[] - assignment (write) form
// ===============================
// Perform a write of 'size' bytes. The byte order is reversed for a big-endian access.

Mem[bits(64) address, integer size, AccType acctype] = bits(size*8) value
    boolean iswrite = TRUE;

    if (HaveNV2Ext() && acctype == AccType_NV2REGISTER && SCTLR_EL2.EE == '1') || BigEndian() then
        value = BigEndianReverse(value);

    aligned = AArch64.CheckAlignment(address, size, acctype, iswrite);
    if size != 16 || !(acctype IN {AccType_VEC, AccType_VECSTREAM}) then
        atomic = aligned;
    else
        // 128-bit SIMD&FP stores are treated as a pair of 64-bit single-copy atomic accesses
        // 64-bit aligned.
        atomic = address == Align(address, 8);

    if !atomic then
        assert size > 1;
        AArch64.MemSingle[address, 1, acctype, aligned] = value<7:0>;

        // For subsequent bytes it is CONSTRAINED UNPREDICTABLE whether an unaligned Device memory
        // access will generate an Alignment Fault, as to get this far means the first byte did
        // not, so we must be changing to a new translation page.
        if !aligned then
            c = ConstrainUnpredictable(Unpredictable_DEVPAGE2);
            assert c IN {Constraint_FAULT, Constraint_NONE};
            if c == Constraint_NONE then aligned = TRUE;

        for i = 1 to size-1
            AArch64.MemSingle[address+i, 1, acctype, aligned] = value<8*i+7:8*i>;
    elsif size == 16 && acctype IN {AccType_VEC, AccType_VECSTREAM} then
        AArch64.MemSingle[address,   8, acctype, aligned] = value<63:0>;
        AArch64.MemSingle[address+8, 8, acctype, aligned] = value<127:64>;
    else
        AArch64.MemSingle[address, size, acctype, aligned] = value;
    return;
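
A minimal C sketch of the endianness handling above for an 8-byte read: the value is assembled little-endian (as the byte-by-byte non-atomic path does via AArch64.MemSingle) and then byte-reversed when the access is big-endian, mirroring BigEndianReverse():

#include <stdint.h>

static uint64_t mem_read8(const uint8_t *bytes, int big_endian)
{
    uint64_t value = 0;
    for (int i = 0; i < 8; i++)          /* value<8*i+7:8*i> = bytes[i] */
        value |= (uint64_t)bytes[i] << (8 * i);
    if (big_endian) {                    /* BigEndianReverse() */
        uint64_t r = 0;
        for (int i = 0; i < 8; i++)
            r = (r << 8) | ((value >> (8 * i)) & 0xffu);
        value = r;
    }
    return value;
}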

Library pseudocode for aarch64/functions/memory/MemTag

// MemTag[] - non-assignment (read) form
// =====================================
// Load an Allocation Tag from memory.

bits(4) MemTag[bits(64) address]
    AddressDescriptor memaddrdesc;
    bits(4) value;
    iswrite = FALSE;

    memaddrdesc = AArch64.TranslateAddress(address, AccType_NORMAL, iswrite, TRUE, TAG_GRANULE);

    // Check for aborts or debug exceptions
    if IsFault(memaddrdesc) then
        AArch64.Abort(address, memaddrdesc.fault);

    // Return the granule tag if tagging is enabled...
    if AllocationTagAccessIsEnabled() then
        return _MemTag[memaddrdesc];
    else
        // ...otherwise read tag as zero.
        return '0000';

// MemTag[] - assignment (write) form
// ==================================
// Store an Allocation Tag to memory.

MemTag[bits(64) address] = bits(4) value
    AddressDescriptor memaddrdesc;
    iswrite = TRUE;

    // Stores of allocation tags must be aligned
    if address != Align(address, TAG_GRANULE) then
        boolean secondstage = FALSE;
        AArch64.Abort(address, AArch64.AlignmentFault(AccType_NORMAL, iswrite, secondstage));

    wasaligned = TRUE;
    memaddrdesc = AArch64.TranslateAddress(address, AccType_NORMAL, iswrite, wasaligned, TAG_GRANULE);

    // Check for aborts or debug exceptions
    if IsFault(memaddrdesc) then
        AArch64.Abort(address, memaddrdesc.fault);

    // Memory array access
    if AllocationTagAccessIsEnabled() then
        _MemTag[memaddrdesc] = value;

Library pseudocode for aarch64/functions/memory/TransformTag

// TransformTag()
// ==============
// Apply tag transformation rules.

bits(4) TransformTag(bits(64) vaddr)
    bits(4) vtag = vaddr<59:56>;
    bits(4) tagdelta = ZeroExtend(vaddr<55>);
    bits(4) ptag = vtag + tagdelta;
    return ptag;

Library pseudocode for aarch64/functions/memory/boolean

// boolean AccessIsTagChecked()
// ============================
// TRUE if a given access is tag-checked, FALSE otherwise.

boolean AccessIsTagChecked(bits(64) vaddr, AccType acctype)
    if PSTATE.M<4> == '1' then return FALSE;

    if EffectiveTBI(vaddr, FALSE, PSTATE.EL) == '0' then return FALSE;

    if EffectiveTCMA(vaddr, PSTATE.EL) == '1' && (vaddr<59:55> == '00000' || vaddr<59:55> == '11111') then return FALSE;

    if !AllocationTagAccessIsEnabled() then return FALSE;

    if acctype IN {AccType_IFETCH, AccType_PTW} then return FALSE;

    if acctype == AccType_NV2REGISTER then return FALSE;

    if PSTATE.TCO=='1' then return FALSE;

    if IsNonTagCheckedInstruction() then return FALSE;

    return TRUE;
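
The TCMA early-out above exempts the two "canonically tagged" address ranges: bits 59:55 all zeros (logical tag 0000 in the lower range) or all ones (logical tag 1111 in the upper range). Sketched in C (illustrative name):

#include <stdbool.h>
#include <stdint.h>

static bool tcma_exempt(uint64_t vaddr)
{
    unsigned bits_59_55 = (unsigned)(vaddr >> 55) & 0x1fu;
    return bits_59_55 == 0x00u || bits_59_55 == 0x1fu;
}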

Library pseudocode for aarch64/functions/pac/addpac/AddPAC

// AddPAC()
// ========
// Calculates the pointer authentication code for a 64-bit quantity and then
// inserts that into pointer authentication code field of that 64-bit quantity.

bits(64) AddPAC(bits(64) ptr, bits(64) modifier, bits(128) K, boolean data)
    bits(64) PAC;
    bits(64) result;
    bits(64) ext_ptr;
    bits(64) extfield;
    bit selbit;
    boolean tbi = CalculateTBI(ptr, data);
    integer top_bit = if tbi then 55 else 63;

    // If tagged pointers are in use for a regime with two TTBRs, use bit<55> of
    // the pointer to select between upper and lower ranges, and preserve this.
    // This handles the awkward case where there is apparently no correct choice between
    // the upper and lower address range - ie an addr of 1xxxxxxx0... with TBI0=0 and TBI1=1
    // and 0xxxxxxx1 with TBI1=0 and TBI0=1:
    if PtrHasUpperAndLowerAddRanges() then
        assert S1TranslationRegime() IN {EL1, EL2};
        if S1TranslationRegime() == EL1 then
            // EL1 translation regime registers
            if data then
                selbit = if TCR_EL1.TBI1 == '1' || TCR_EL1.TBI0 == '1' then ptr<55> else ptr<63>;
            else
                if ((TCR_EL1.TBI1 == '1' && TCR_EL1.TBID1 == '0') ||
                    (TCR_EL1.TBI0 == '1' && TCR_EL1.TBID0 == '0')) then
                    selbit = ptr<55>;
                else
                    selbit = ptr<63>;
        else
            // EL2 translation regime registers
            if data then
                selbit = if ((HaveEL(EL2) && TCR_EL2.TBI1 == '1') ||
                             (HaveEL(EL2) && TCR_EL2.TBI0 == '1')) then ptr<55> else ptr<63>;
            else
                selbit = if ((HaveEL(EL2) && TCR_EL2.TBI1 == '1' && TCR_EL1.TBID1 == '0') ||
                             (HaveEL(EL2) && TCR_EL2.TBI0 == '1' && TCR_EL1.TBID0 == '0')) then ptr<55> else ptr<63>;
    else selbit = if tbi then ptr<55> else ptr<63>;

    integer bottom_PAC_bit = CalculateBottomPACBit(selbit);

    // The pointer authentication code field takes all the available bits in between
    extfield = Replicate(selbit, 64);

    // Compute the pointer authentication code for a ptr with good extension bits
    if tbi then
        ext_ptr = ptr<63:56>:extfield<(56-bottom_PAC_bit)-1:0>:ptr<bottom_PAC_bit-1:0>;
    else
        ext_ptr = extfield<(64-bottom_PAC_bit)-1:0>:ptr<bottom_PAC_bit-1:0>;

    PAC = ComputePAC(ext_ptr, modifier, K<127:64>, K<63:0>);

    // Check if the ptr has good extension bits and corrupt the pointer authentication code if not
    if !IsZero(ptr<top_bit:bottom_PAC_bit>) && !IsOnes(ptr<top_bit:bottom_PAC_bit>) then
        PAC<top_bit-1> = NOT(PAC<top_bit-1>);

    // Preserve the determination between upper and lower address at bit<55> and insert PAC
    if tbi then
        result = ptr<63:56>:selbit:PAC<54:bottom_PAC_bit>:ptr<bottom_PAC_bit-1:0>;
    else
        result = PAC<63:56>:selbit:PAC<54:bottom_PAC_bit>:ptr<bottom_PAC_bit-1:0>;
    return result;
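
For the simple case (TBI disabled, a single address range, and TxSZ small enough that bit 55 sits above the PAC field), the final insertion step amounts to masking, as in this C sketch; insert_pac and its arguments are illustrative, and pac stands in for the ComputePAC() output:

#include <stdint.h>

/* result = PAC<63:56> : ptr<55> : PAC<54:bottom> : ptr<bottom-1:0>,
 * with bottom = 64 - TxSZ as computed by CalculateBottomPACBit(). */
static uint64_t insert_pac(uint64_t ptr, uint64_t pac, unsigned txsz)
{
    unsigned bottom = 64u - txsz;                     /* assumes 9 <= txsz <= 39 */
    uint64_t low  = ptr & ((1ull << bottom) - 1u);    /* address bits kept */
    uint64_t sel  = ptr & (1ull << 55);               /* range-select bit  */
    uint64_t pacs = pac & ~((1ull << bottom) - 1u) & ~(1ull << 55);
    return pacs | sel | low;
}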

Library pseudocode for aarch64/functions/pac/addpacda/AddPACDA

// AddPACDA()
// ==========
// Returns a 64-bit value containing X, but replacing the pointer authentication code
// field bits with a pointer authentication code, where the pointer authentication
// code is derived using a cryptographic algorithm as a combination of X, Y and the
// APDAKey_EL1.

bits(64) AddPACDA(bits(64) X, bits(64) Y)
    boolean TrapEL2;
    boolean TrapEL3;
    bits(1) Enable;
    bits(128) APDAKey_EL1;

    APDAKey_EL1 = APDAKeyHi_EL1<63:0> : APDAKeyLo_EL1<63:0>;
    case PSTATE.EL of
        when EL0
            boolean IsEL1Regime = S1TranslationRegime() == EL1;
            Enable = if IsEL1Regime then SCTLR_EL1.EnDA else SCTLR_EL2.EnDA;
            TrapEL2 = (EL2Enabled() && HCR_EL2.API == '0' &&
                       (HCR_EL2.TGE == '0' || HCR_EL2.E2H == '0'));
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL1
            Enable = SCTLR_EL1.EnDA;
            TrapEL2 = EL2Enabled() && HCR_EL2.API == '0';
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL2
            Enable = SCTLR_EL2.EnDA;
            TrapEL2 = FALSE;
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL3
            Enable = SCTLR_EL3.EnDA;
            TrapEL2 = FALSE;
            TrapEL3 = FALSE;

    if Enable == '0' then
        return X;
    elsif TrapEL2 then
        TrapPACUse(EL2);
    elsif TrapEL3 then
        TrapPACUse(EL3);
    else
        return AddPAC(X, Y, APDAKey_EL1, TRUE);

Library pseudocode for aarch64/functions/pac/addpacdb/AddPACDB

// AddPACDB()
// ==========
// Returns a 64-bit value containing X, but replacing the pointer authentication code
// field bits with a pointer authentication code, where the pointer authentication
// code is derived using a cryptographic algorithm as a combination of X, Y and the
// APDBKey_EL1.

bits(64) AddPACDB(bits(64) X, bits(64) Y)
    boolean TrapEL2;
    boolean TrapEL3;
    bits(1) Enable;
    bits(128) APDBKey_EL1;

    APDBKey_EL1 = APDBKeyHi_EL1<63:0> : APDBKeyLo_EL1<63:0>;
    case PSTATE.EL of
        when EL0
            boolean IsEL1Regime = S1TranslationRegime() == EL1;
            Enable = if IsEL1Regime then SCTLR_EL1.EnDB else SCTLR_EL2.EnDB;
            TrapEL2 = (EL2Enabled() && HCR_EL2.API == '0' &&
                       (HCR_EL2.TGE == '0' || HCR_EL2.E2H == '0'));
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL1
            Enable = SCTLR_EL1.EnDB;
            TrapEL2 = EL2Enabled() && HCR_EL2.API == '0';
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL2
            Enable = SCTLR_EL2.EnDB;
            TrapEL2 = FALSE;
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL3
            Enable = SCTLR_EL3.EnDB;
            TrapEL2 = FALSE;
            TrapEL3 = FALSE;

    if Enable == '0' then
        return X;
    elsif TrapEL2 then
        TrapPACUse(EL2);
    elsif TrapEL3 then
        TrapPACUse(EL3);
    else
        return AddPAC(X, Y, APDBKey_EL1, TRUE);

Library pseudocode for aarch64/functions/pac/addpacga/AddPACGA

// AddPACGA()
// ==========
// Returns a 64-bit value where the lower 32 bits are 0, and the upper 32 bits contain
// a 32-bit pointer authentication code which is derived using a cryptographic
// algorithm as a combination of X, Y and the APGAKey_EL1.

bits(64) AddPACGA(bits(64) X, bits(64) Y)
    boolean TrapEL2;
    boolean TrapEL3;
    bits(128) APGAKey_EL1;

    APGAKey_EL1 = APGAKeyHi_EL1<63:0> : APGAKeyLo_EL1<63:0>;
    case PSTATE.EL of
        when EL0
            TrapEL2 = (EL2Enabled() && HCR_EL2.API == '0' &&
                       (HCR_EL2.TGE == '0' || HCR_EL2.E2H == '0'));
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL1
            TrapEL2 = EL2Enabled() && HCR_EL2.API == '0';
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL2
            TrapEL2 = FALSE;
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL3
            TrapEL2 = FALSE;
            TrapEL3 = FALSE;

    if TrapEL2 then
        TrapPACUse(EL2);
    elsif TrapEL3 then
        TrapPACUse(EL3);
    else
        return ComputePAC(X, Y, APGAKey_EL1<127:64>, APGAKey_EL1<63:0>)<63:32>:Zeros(32);

Library pseudocode for aarch64/functions/pac/addpacia/AddPACIA

// AddPACIA()
// ==========
// Returns a 64-bit value containing X, but replacing the pointer authentication code
// field bits with a pointer authentication code, where the pointer authentication
// code is derived using a cryptographic algorithm as a combination of X, Y, and the
// APIAKey_EL1.

bits(64) AddPACIA(bits(64) X, bits(64) Y)
    boolean TrapEL2;
    boolean TrapEL3;
    bits(1) Enable;
    bits(128) APIAKey_EL1;

    APIAKey_EL1 = APIAKeyHi_EL1<63:0>:APIAKeyLo_EL1<63:0>;
    case PSTATE.EL of
        when EL0
            boolean IsEL1Regime = S1TranslationRegime() == EL1;
            Enable = if IsEL1Regime then SCTLR_EL1.EnIA else SCTLR_EL2.EnIA;
            TrapEL2 = (EL2Enabled() && HCR_EL2.API == '0' &&
                       (HCR_EL2.TGE == '0' || HCR_EL2.E2H == '0'));
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL1
            Enable = SCTLR_EL1.EnIA;
            TrapEL2 = EL2Enabled() && HCR_EL2.API == '0';
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL2
            Enable = SCTLR_EL2.EnIA;
            TrapEL2 = FALSE;
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL3
            Enable = SCTLR_EL3.EnIA;
            TrapEL2 = FALSE;
            TrapEL3 = FALSE;

    if Enable == '0' then
        return X;
    elsif TrapEL2 then
        TrapPACUse(EL2);
    elsif TrapEL3 then
        TrapPACUse(EL3);
    else
        return AddPAC(X, Y, APIAKey_EL1, FALSE);

Library pseudocode for aarch64/functions/pac/addpacib/AddPACIB

// AddPACIB()
// ==========
// Returns a 64-bit value containing X, but replacing the pointer authentication code
// field bits with a pointer authentication code, where the pointer authentication
// code is derived using a cryptographic algorithm as a combination of X, Y and the
// APIBKey_EL1.

bits(64) AddPACIB(bits(64) X, bits(64) Y)
    boolean TrapEL2;
    boolean TrapEL3;
    bits(1) Enable;
    bits(128) APIBKey_EL1;

    APIBKey_EL1 = APIBKeyHi_EL1<63:0> : APIBKeyLo_EL1<63:0>;
    case PSTATE.EL of
        when EL0
            boolean IsEL1Regime = S1TranslationRegime() == EL1;
            Enable = if IsEL1Regime then SCTLR_EL1.EnIB else SCTLR_EL2.EnIB;
            TrapEL2 = (EL2Enabled() && HCR_EL2.API == '0' &&
                       (HCR_EL2.TGE == '0' || HCR_EL2.E2H == '0'));
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL1
            Enable = SCTLR_EL1.EnIB;
            TrapEL2 = EL2Enabled() && HCR_EL2.API == '0';
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL2
            Enable = SCTLR_EL2.EnIB;
            TrapEL2 = FALSE;
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL3
            Enable = SCTLR_EL3.EnIB;
            TrapEL2 = FALSE;
            TrapEL3 = FALSE;

    if Enable == '0' then
        return X;
    elsif TrapEL2 then
        TrapPACUse(EL2);
    elsif TrapEL3 then
        TrapPACUse(EL3);
    else
        return AddPAC(X, Y, APIBKey_EL1, FALSE);

Library pseudocode for aarch64/functions/pac/auth/Auth

// Auth()
// ======
// Restores the upper bits of the address to be all zeros or all ones (based on the
// value of bit[55]) and computes and checks the pointer authentication code. If the
// check passes, then the restored address is returned. If the check fails, the
// second-top and third-top bits of the extension bits in the pointer authentication code
// field are corrupted to ensure that accessing the address will give a translation fault.

bits(64) Auth(bits(64) ptr, bits(64) modifier, bits(128) K, boolean data, bit keynumber)
    bits(64) PAC;
    bits(64) result;
    bits(64) original_ptr;
    bits(2) error_code;
    bits(64) extfield;

    // Reconstruct the extension field used for adding the PAC to the pointer
    boolean tbi = CalculateTBI(ptr, data);
    integer bottom_PAC_bit = CalculateBottomPACBit(ptr<55>);
    extfield = Replicate(ptr<55>, 64);

    if tbi then
        original_ptr = ptr<63:56>:extfield<56-bottom_PAC_bit-1:0>:ptr<bottom_PAC_bit-1:0>;
    else
        original_ptr = extfield<64-bottom_PAC_bit-1:0>:ptr<bottom_PAC_bit-1:0>;

    PAC = ComputePAC(original_ptr, modifier, K<127:64>, K<63:0>);
    // Check pointer authentication code
    if tbi then
        if PAC<54:bottom_PAC_bit> == ptr<54:bottom_PAC_bit> then
            result = original_ptr;
        else
            error_code = keynumber:NOT(keynumber);
            result = original_ptr<63:55>:error_code:original_ptr<52:0>;
    else
        if ((PAC<54:bottom_PAC_bit> == ptr<54:bottom_PAC_bit>) &&
            (PAC<63:56> == ptr<63:56>)) then
            result = original_ptr;
        else
            error_code = keynumber:NOT(keynumber);
            result = original_ptr<63>:error_code:original_ptr<60:0>;
    return result;
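Note (informative): a hedged Python sketch of the compare-and-corrupt behaviour above, covering only the no-TBI case and assuming bottom_PAC_bit == 48; auth_sketch and its parameters are illustrative names, not architecture functions. On a mismatch the error code keynumber:NOT(keynumber) lands in bits <62:61>, so the returned address is non-canonical and faults when used.

# Illustrative only: mirrors Auth()'s failure path (no TBI, bottom_PAC_bit == 48).
def auth_sketch(ptr, recomputed_pac, bottom_pac_bit=48, keynumber=0):
    low_mask = (1 << bottom_pac_bit) - 1
    selbit = (ptr >> 55) & 1
    extfield = ((1 << 64) - 1) if selbit else 0
    original = (extfield & ~low_mask) | (ptr & low_mask)   # restore extension bits
    pac_field = ((1 << 64) - 1) & ~((1 << 55) | low_mask)  # bits <63:56> and <54:48>
    if (recomputed_pac & pac_field) == (ptr & pac_field):
        return original                                    # authentication passed
    error_code = (keynumber << 1) | (keynumber ^ 1)        # keynumber:NOT(keynumber)
    return (original & ~(0b11 << 61)) | (error_code << 61)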

Library pseudocode for aarch64/functions/pac/authda/AuthDA

// AuthDA()
// ========
// Returns a 64-bit value containing X, but replacing the pointer authentication code
// field bits with the extension of the address bits. The instruction checks a pointer
// authentication code in the pointer authentication code field bits of X, using the same
// algorithm and key as AddPACDA().

bits(64) AuthDA(bits(64) X, bits(64) Y)
    boolean TrapEL2;
    boolean TrapEL3;
    bits(1) Enable;
    bits(128) APDAKey_EL1;

    APDAKey_EL1 = APDAKeyHi_EL1<63:0> : APDAKeyLo_EL1<63:0>;
    case PSTATE.EL of
        when EL0
            boolean IsEL1Regime = S1TranslationRegime() == EL1;
            Enable = if IsEL1Regime then SCTLR_EL1.EnDA else SCTLR_EL2.EnDA;
            TrapEL2 = (EL2Enabled() && HCR_EL2.API == '0' &&
                       (HCR_EL2.TGE == '0' || HCR_EL2.E2H == '0'));
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL1
            Enable = SCTLR_EL1.EnDA;
            TrapEL2 = EL2Enabled() && HCR_EL2.API == '0';
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL2
            Enable = SCTLR_EL2.EnDA;
            TrapEL2 = FALSE;
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL3
            Enable = SCTLR_EL3.EnDA;
            TrapEL2 = FALSE;
            TrapEL3 = FALSE;

    if Enable == '0' then
        return X;
    elsif TrapEL2 then
        TrapPACUse(EL2);
    elsif TrapEL3 then
        TrapPACUse(EL3);
    else
        return Auth(X, Y, APDAKey_EL1, TRUE, '0');

Library pseudocode for aarch64/functions/pac/authdb/AuthDB

// AuthDB()
// ========
// Returns a 64-bit value containing X, but replacing the pointer authentication code
// field bits with the extension of the address bits. The instruction checks a
// pointer authentication code in the pointer authentication code field bits of X, using
// the same algorithm and key as AddPACDB().

bits(64) AuthDB(bits(64) X, bits(64) Y)
    boolean TrapEL2;
    boolean TrapEL3;
    bits(1) Enable;
    bits(128) APDBKey_EL1;

    APDBKey_EL1 = APDBKeyHi_EL1<63:0> : APDBKeyLo_EL1<63:0>;
    case PSTATE.EL of
        when EL0
            boolean IsEL1Regime = S1TranslationRegime() == EL1;
            Enable = if IsEL1Regime then SCTLR_EL1.EnDB else SCTLR_EL2.EnDB;
            TrapEL2 = (EL2Enabled() && HCR_EL2.API == '0' &&
                       (HCR_EL2.TGE == '0' || HCR_EL2.E2H == '0'));
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL1
            Enable = SCTLR_EL1.EnDB;
            TrapEL2 = EL2Enabled() && HCR_EL2.API == '0';
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL2
            Enable = SCTLR_EL2.EnDB;
            TrapEL2 = FALSE;
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL3
            Enable = SCTLR_EL3.EnDB;
            TrapEL2 = FALSE;
            TrapEL3 = FALSE;

    if Enable == '0' then
        return X;
    elsif TrapEL2 then
        TrapPACUse(EL2);
    elsif TrapEL3 then
        TrapPACUse(EL3);
    else
        return Auth(X, Y, APDBKey_EL1, TRUE, '1');

Library pseudocode for aarch64/functions/pac/authia/AuthIA

// AuthIA()
// ========
// Returns a 64-bit value containing X, but replacing the pointer authentication code
// field bits with the extension of the address bits. The instruction checks a pointer
// authentication code in the pointer authentication code field bits of X, using the same
// algorithm and key as AddPACIA().

bits(64) AuthIA(bits(64) X, bits(64) Y)
    boolean TrapEL2;
    boolean TrapEL3;
    bits(1) Enable;
    bits(128) APIAKey_EL1;

    APIAKey_EL1 = APIAKeyHi_EL1<63:0> : APIAKeyLo_EL1<63:0>;
    case PSTATE.EL of
        when EL0
            boolean IsEL1Regime = S1TranslationRegime() == EL1;
            Enable = if IsEL1Regime then SCTLR_EL1.EnIA else SCTLR_EL2.EnIA;
            TrapEL2 = (EL2Enabled() && HCR_EL2.API == '0' &&
                       (HCR_EL2.TGE == '0' || HCR_EL2.E2H == '0'));
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL1
            Enable = SCTLR_EL1.EnIA;
            TrapEL2 = EL2Enabled() && HCR_EL2.API == '0';
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL2
            Enable = SCTLR_EL2.EnIA;
            TrapEL2 = FALSE;
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL3
            Enable = SCTLR_EL3.EnIA;
            TrapEL2 = FALSE;
            TrapEL3 = FALSE;

    if Enable == '0' then
        return X;
    elsif TrapEL2 then
        TrapPACUse(EL2);
    elsif TrapEL3 then
        TrapPACUse(EL3);
    else
        return Auth(X, Y, APIAKey_EL1, FALSE, '0');

Library pseudocode for aarch64/functions/pac/authib/AuthIB

// AuthIB()
// ========
// Returns a 64-bit value containing X, but replacing the pointer authentication code
// field bits with the extension of the address bits. The instruction checks a pointer
// authentication code in the pointer authentication code field bits of X, using the same
// algorithm and key as AddPACIB().

bits(64) AuthIB(bits(64) X, bits(64) Y)
    boolean TrapEL2;
    boolean TrapEL3;
    bits(1) Enable;
    bits(128) APIBKey_EL1;

    APIBKey_EL1 = APIBKeyHi_EL1<63:0> : APIBKeyLo_EL1<63:0>;
    case PSTATE.EL of
        when EL0
            boolean IsEL1Regime = S1TranslationRegime() == EL1;
            Enable = if IsEL1Regime then SCTLR_EL1.EnIB else SCTLR_EL2.EnIB;
            TrapEL2 = (EL2Enabled() && HCR_EL2.API == '0' &&
                       (HCR_EL2.TGE == '0' || HCR_EL2.E2H == '0'));
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL1
            Enable = SCTLR_EL1.EnIB;
            TrapEL2 = EL2Enabled() && HCR_EL2.API == '0';
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL2
            Enable = SCTLR_EL2.EnIB;
            TrapEL2 = FALSE;
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL3
            Enable = SCTLR_EL3.EnIB;
            TrapEL2 = FALSE;
            TrapEL3 = FALSE;

    if Enable == '0' then
        return X;
    elsif TrapEL2 then
        TrapPACUse(EL2);
    elsif TrapEL3 then
        TrapPACUse(EL3);
    else
        return Auth(X, Y, APIBKey_EL1, FALSE, '1');

Library pseudocode for aarch64/functions/pac/calcbottompacbit/CalculateBottomPACBit

// CalculateBottomPACBit()
// =======================

integer CalculateBottomPACBit(bit top_bit)
    integer tsz_field;

    if PtrHasUpperAndLowerAddRanges() then
        assert S1TranslationRegime() IN {EL1, EL2};
        if S1TranslationRegime() == EL1 then
            // EL1 translation regime registers
            tsz_field = if top_bit == '1' then UInt(TCR_EL1.T1SZ) else UInt(TCR_EL1.T0SZ);
            using64k = if top_bit == '1' then TCR_EL1.TG1 == '11' else TCR_EL1.TG0 == '01';
        else
            // EL2 translation regime registers
            assert HaveEL(EL2);
            tsz_field = if top_bit == '1' then UInt(TCR_EL2.T1SZ) else UInt(TCR_EL2.T0SZ);
            using64k = if top_bit == '1' then TCR_EL2.TG1 == '11' else TCR_EL2.TG0 == '01';
    else
        tsz_field = if PSTATE.EL == EL2 then UInt(TCR_EL2.T0SZ) else UInt(TCR_EL3.T0SZ);
        using64k = if PSTATE.EL == EL2 then TCR_EL2.TG0 == '01' else TCR_EL3.TG0 == '01';

    max_limit_tsz_field = (if !HaveSmallPageTblExt() then 39 else if using64k then 47 else 48);
    if tsz_field > max_limit_tsz_field then
        // TCR_ELx.TySZ is out of range
        c = ConstrainUnpredictable(Unpredictable_RESTnSZ);
        assert c IN {Constraint_FORCE, Constraint_NONE};
        if c == Constraint_FORCE then tsz_field = max_limit_tsz_field;

    tszmin = if using64k && VAMax() == 52 then 12 else 16;
    if tsz_field < tszmin then
        c = ConstrainUnpredictable(Unpredictable_RESTnSZ);
        assert c IN {Constraint_FORCE, Constraint_NONE};
        if c == Constraint_FORCE then tsz_field = tszmin;

    return (64-tsz_field);
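Note (informative): a worked example of the relationship above. With TCR_EL1.T0SZ = 25 the lower address range spans 2^39 bytes, so CalculateBottomPACBit() returns 64 - 25 = 39, and with TBI enabled the PAC occupies bits <54:39>. A minimal Python check of that arithmetic; pac_width is an illustrative helper, not an architecture function.

# Number of PAC bits for a given TnSZ; bit<55> is excluded because it
# carries the upper/lower range selection.
def pac_width(tnsz, tbi):
    bottom_pac_bit = 64 - tnsz           # as returned by CalculateBottomPACBit()
    width = 55 - bottom_pac_bit          # bits <54:bottom_pac_bit>
    if not tbi:
        width += 8                       # bits <63:56> also hold PAC
    return width

assert pac_width(25, tbi=True)  == 16    # bits <54:39>
assert pac_width(25, tbi=False) == 24    # bits <63:56> and <54:39>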

Library pseudocode for aarch64/functions/pac/calculatetbi/CalculateTBI

// CalculateTBI()
// ==============

boolean CalculateTBI(bits(64) ptr, boolean data)
    boolean tbi = FALSE;

    if PtrHasUpperAndLowerAddRanges() then
        assert S1TranslationRegime() IN {EL1, EL2};
        if S1TranslationRegime() == EL1 then
            // EL1 translation regime registers
            if data then
                tbi = if ptr<55> == '1' then TCR_EL1.TBI1 == '1' else TCR_EL1.TBI0 == '1';
            else
                if ptr<55> == '1' then
                    tbi = TCR_EL1.TBI1 == '1' && TCR_EL1.TBID1 == '0';
                else
                    tbi = TCR_EL1.TBI0 == '1' && TCR_EL1.TBID0 == '0';
        else
            // EL2 translation regime registers
            if data then
                tbi = if ptr<55> == '1' then TCR_EL2.TBI1 == '1' else TCR_EL2.TBI0 == '1';
            else
                if ptr<55> == '1' then
                    tbi = TCR_EL2.TBI1 == '1' && TCR_EL2.TBID1 == '0';
                else
                    tbi = TCR_EL2.TBI0 == '1' && TCR_EL2.TBID0 == '0';
    elsif PSTATE.EL == EL2 then
        tbi = if data then TCR_EL2.TBI=='1' else TCR_EL2.TBI=='1' && TCR_EL2.TBID=='0';
    elsif PSTATE.EL == EL3 then
        tbi = if data then TCR_EL3.TBI=='1' else TCR_EL3.TBI=='1' && TCR_EL3.TBID=='0';
    return tbi;

Library pseudocode for aarch64/functions/pac/computepac/ComputePAC

array bits(64) RC[0..4];

bits(64) ComputePAC(bits(64) data, bits(64) modifier, bits(64) key0, bits(64) key1)
    bits(64) workingval;
    bits(64) runningmod;
    bits(64) roundkey;
    bits(64) modk0;
    constant bits(64) Alpha = 0xC0AC29B7C97C50DD<63:0>;

    RC[0] = 0x0000000000000000<63:0>;
    RC[1] = 0x13198A2E03707344<63:0>;
    RC[2] = 0xA4093822299F31D0<63:0>;
    RC[3] = 0x082EFA98EC4E6C89<63:0>;
    RC[4] = 0x452821E638D01377<63:0>;

    modk0 = key0<0>:key0<63:2>:(key0<63> EOR key0<1>);
    runningmod = modifier;
    workingval = data EOR key0;

    for i = 0 to 4
        roundkey = key1 EOR runningmod;
        workingval = workingval EOR roundkey;
        workingval = workingval EOR RC[i];
        if i > 0 then
            workingval = PACCellShuffle(workingval);
            workingval = PACMult(workingval);
        workingval = PACSub(workingval);
        runningmod = TweakShuffle(runningmod<63:0>);
    roundkey = modk0 EOR runningmod;
    workingval = workingval EOR roundkey;
    workingval = PACCellShuffle(workingval);
    workingval = PACMult(workingval);
    workingval = PACSub(workingval);
    workingval = PACCellShuffle(workingval);
    workingval = PACMult(workingval);
    workingval = key1 EOR workingval;
    workingval = PACCellInvShuffle(workingval);
    workingval = PACInvSub(workingval);
    workingval = PACMult(workingval);
    workingval = PACCellInvShuffle(workingval);
    workingval = workingval EOR key0;
    workingval = workingval EOR runningmod;
    for i = 0 to 4
        workingval = PACInvSub(workingval);
        if i < 4 then
            workingval = PACMult(workingval);
            workingval = PACCellInvShuffle(workingval);
        runningmod = TweakInvShuffle(runningmod<63:0>);
        roundkey = key1 EOR runningmod;
        workingval = workingval EOR RC[4-i];
        workingval = workingval EOR roundkey;
        workingval = workingval EOR Alpha;
    workingval = workingval EOR modk0;

    return workingval;

Library pseudocode for aarch64/functions/pac/computepac/PACCellInvShuffle

// PACCellInvShuffle()
// ===================

bits(64) PACCellInvShuffle(bits(64) indata)
    bits(64) outdata;
    outdata<3:0> = indata<15:12>;
    outdata<7:4> = indata<27:24>;
    outdata<11:8> = indata<51:48>;
    outdata<15:12> = indata<39:36>;
    outdata<19:16> = indata<59:56>;
    outdata<23:20> = indata<47:44>;
    outdata<27:24> = indata<7:4>;
    outdata<31:28> = indata<19:16>;
    outdata<35:32> = indata<35:32>;
    outdata<39:36> = indata<55:52>;
    outdata<43:40> = indata<31:28>;
    outdata<47:44> = indata<11:8>;
    outdata<51:48> = indata<23:20>;
    outdata<55:52> = indata<3:0>;
    outdata<59:56> = indata<43:40>;
    outdata<63:60> = indata<63:60>;
    return outdata;

Library pseudocode for aarch64/functions/pac/computepac/PACCellShuffle

// PACCellShuffle()
// ================

bits(64) PACCellShuffle(bits(64) indata)
    bits(64) outdata;
    outdata<3:0> = indata<55:52>;
    outdata<7:4> = indata<27:24>;
    outdata<11:8> = indata<47:44>;
    outdata<15:12> = indata<3:0>;
    outdata<19:16> = indata<31:28>;
    outdata<23:20> = indata<51:48>;
    outdata<27:24> = indata<7:4>;
    outdata<31:28> = indata<43:40>;
    outdata<35:32> = indata<35:32>;
    outdata<39:36> = indata<15:12>;
    outdata<43:40> = indata<59:56>;
    outdata<47:44> = indata<23:20>;
    outdata<51:48> = indata<11:8>;
    outdata<55:52> = indata<39:36>;
    outdata<59:56> = indata<19:16>;
    outdata<63:60> = indata<63:60>;
    return outdata;
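Note (informative): the two cell-shuffle tables can be cross-checked mechanically. The Python sketch below (not architecture pseudocode) expresses each function as a permutation of sixteen 4-bit cells, where output cell i comes from input cell table[i], and verifies that PACCellInvShuffle() above inverts PACCellShuffle().

SHUFFLE = [13, 6, 11, 0, 7, 12, 1, 10, 8, 3, 14, 5, 2, 9, 4, 15]   # PACCellShuffle
INVSHUF = [3, 6, 12, 9, 14, 11, 1, 4, 8, 13, 7, 2, 5, 0, 10, 15]   # PACCellInvShuffle

def apply_cells(table, x):
    # Treat x as sixteen 4-bit cells; output cell i is input cell table[i].
    return sum(((x >> (4 * table[i])) & 0xF) << (4 * i) for i in range(16))

x = 0x0123456789ABCDEF
assert apply_cells(INVSHUF, apply_cells(SHUFFLE, x)) == x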

Library pseudocode for aarch64/functions/pac/computepac/PACInvSub

// PACInvSub()
// ===========

bits(64) PACInvSub(bits(64) Tinput)
    // This is a 4-bit substitution from the PRINCE-family cipher
    bits(64) Toutput;
    for i = 0 to 15
        case Tinput<4*i+3:4*i> of
            when '0000' Toutput<4*i+3:4*i> = '0101';
            when '0001' Toutput<4*i+3:4*i> = '1110';
            when '0010' Toutput<4*i+3:4*i> = '1101';
            when '0011' Toutput<4*i+3:4*i> = '1000';
            when '0100' Toutput<4*i+3:4*i> = '1010';
            when '0101' Toutput<4*i+3:4*i> = '1011';
            when '0110' Toutput<4*i+3:4*i> = '0001';
            when '0111' Toutput<4*i+3:4*i> = '1001';
            when '1000' Toutput<4*i+3:4*i> = '0010';
            when '1001' Toutput<4*i+3:4*i> = '0110';
            when '1010' Toutput<4*i+3:4*i> = '1111';
            when '1011' Toutput<4*i+3:4*i> = '0000';
            when '1100' Toutput<4*i+3:4*i> = '0100';
            when '1101' Toutput<4*i+3:4*i> = '1100';
            when '1110' Toutput<4*i+3:4*i> = '0111';
            when '1111' Toutput<4*i+3:4*i> = '0011';
    return Toutput;

Library pseudocode for aarch64/functions/pac/computepac/PACMult

// PACMult()
// =========

bits(64) PACMult(bits(64) Sinput)
    bits(4) t0;
    bits(4) t1;
    bits(4) t2;
    bits(4) t3;
    bits(64) Soutput;

    for i = 0 to 3
        t0<3:0> = RotCell(Sinput<4*(i+8)+3:4*(i+8)>, 1) EOR RotCell(Sinput<4*(i+4)+3:4*(i+4)>, 2);
        t0<3:0> = t0<3:0> EOR RotCell(Sinput<4*(i)+3:4*(i)>, 1);
        t1<3:0> = RotCell(Sinput<4*(i+12)+3:4*(i+12)>, 1) EOR RotCell(Sinput<4*(i+4)+3:4*(i+4)>, 1);
        t1<3:0> = t1<3:0> EOR RotCell(Sinput<4*(i)+3:4*(i)>, 2);
        t2<3:0> = RotCell(Sinput<4*(i+12)+3:4*(i+12)>, 2) EOR RotCell(Sinput<4*(i+8)+3:4*(i+8)>, 1);
        t2<3:0> = t2<3:0> EOR RotCell(Sinput<4*(i)+3:4*(i)>, 1);
        t3<3:0> = RotCell(Sinput<4*(i+12)+3:4*(i+12)>, 1) EOR RotCell(Sinput<4*(i+8)+3:4*(i+8)>, 2);
        t3<3:0> = t3<3:0> EOR RotCell(Sinput<4*(i+4)+3:4*(i+4)>, 1);
        Soutput<4*i+3:4*i> = t3<3:0>;
        Soutput<4*(i+4)+3:4*(i+4)> = t2<3:0>;
        Soutput<4*(i+8)+3:4*(i+8)> = t1<3:0>;
        Soutput<4*(i+12)+3:4*(i+12)> = t0<3:0>;
    return Soutput;

Library pseudocode for aarch64/functions/pac/computepac/PACSub

// PACSub()
// ========

bits(64) PACSub(bits(64) Tinput)
    // This is a 4-bit substitution from the PRINCE-family cipher
    bits(64) Toutput;
    for i = 0 to 15
        case Tinput<4*i+3:4*i> of
            when '0000' Toutput<4*i+3:4*i> = '1011';
            when '0001' Toutput<4*i+3:4*i> = '0110';
            when '0010' Toutput<4*i+3:4*i> = '1000';
            when '0011' Toutput<4*i+3:4*i> = '1111';
            when '0100' Toutput<4*i+3:4*i> = '1100';
            when '0101' Toutput<4*i+3:4*i> = '0000';
            when '0110' Toutput<4*i+3:4*i> = '1001';
            when '0111' Toutput<4*i+3:4*i> = '1110';
            when '1000' Toutput<4*i+3:4*i> = '0011';
            when '1001' Toutput<4*i+3:4*i> = '0111';
            when '1010' Toutput<4*i+3:4*i> = '0100';
            when '1011' Toutput<4*i+3:4*i> = '0101';
            when '1100' Toutput<4*i+3:4*i> = '1101';
            when '1101' Toutput<4*i+3:4*i> = '0010';
            when '1110' Toutput<4*i+3:4*i> = '0001';
            when '1111' Toutput<4*i+3:4*i> = '1010';
    return Toutput;
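Note (informative): PACSub() and PACInvSub() are mutually inverse 4-bit S-boxes, which can be confirmed directly; the Python below is an illustrative cross-check, not architecture pseudocode.

SBOX     = [0xB, 0x6, 0x8, 0xF, 0xC, 0x0, 0x9, 0xE,
            0x3, 0x7, 0x4, 0x5, 0xD, 0x2, 0x1, 0xA]   # PACSub
INV_SBOX = [0x5, 0xE, 0xD, 0x8, 0xA, 0xB, 0x1, 0x9,
            0x2, 0x6, 0xF, 0x0, 0x4, 0xC, 0x7, 0x3]   # PACInvSub

assert all(INV_SBOX[SBOX[c]] == c for c in range(16))
assert all(SBOX[INV_SBOX[c]] == c for c in range(16))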

Library pseudocode for aarch64/functions/pac/computepac/RotCell

// RotCell()
// =========

bits(4) RotCell(bits(4) incell, integer amount)
    bits(8) tmp;
    bits(4) outcell;

    // assert amount>3 || amount<1;
    tmp<7:0> = incell<3:0>:incell<3:0>;
    outcell = tmp<7-amount:4-amount>;
    return outcell;
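Note (informative): the double-width temporary in RotCell() implements a left rotation of a 4-bit cell. An equivalent Python sketch (illustrative, not architecture pseudocode):

def rot_cell(incell, amount):
    # Left-rotate a 4-bit cell by 'amount' (1..3)
    return ((incell << amount) | (incell >> (4 - amount))) & 0xF

assert rot_cell(0b0001, 1) == 0b0010
assert rot_cell(0b1001, 2) == 0b0110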

Library pseudocode for aarch64/functions/pac/computepac/TweakCellInvRot

// TweakCellInvRot()
// =================

bits(4) TweakCellInvRot(bits(4) incell)
    bits(4) outcell;
    outcell<3> = incell<2>;
    outcell<2> = incell<1>;
    outcell<1> = incell<0>;
    outcell<0> = incell<0> EOR incell<3>;
    return outcell;

Library pseudocode for aarch64/functions/pac/computepac/TweakCellRot

// TweakCellRot()
// ==============

bits(4) TweakCellRot(bits(4) incell)
    bits(4) outcell;
    outcell<3> = incell<0> EOR incell<1>;
    outcell<2> = incell<3>;
    outcell<1> = incell<2>;
    outcell<0> = incell<1>;
    return outcell;
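Note (informative): TweakCellRot() is a one-bit-feedback rotation of a 4-bit cell, and TweakCellInvRot() above undoes it; the Python sketch below (illustrative, not architecture pseudocode) verifies the round trip for all sixteen cell values.

def tweak_cell_rot(c):
    b3, b2, b1, b0 = (c >> 3) & 1, (c >> 2) & 1, (c >> 1) & 1, c & 1
    return ((b0 ^ b1) << 3) | (b3 << 2) | (b2 << 1) | b1

def tweak_cell_inv_rot(c):
    b3, b2, b1, b0 = (c >> 3) & 1, (c >> 2) & 1, (c >> 1) & 1, c & 1
    return (b2 << 3) | (b1 << 2) | (b0 << 1) | (b0 ^ b3)

assert all(tweak_cell_inv_rot(tweak_cell_rot(c)) == c for c in range(16))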

Library pseudocode for aarch64/functions/pac/computepac/TweakInvShuffle

// TweakInvShuffle()
// =================

bits(64) TweakInvShuffle(bits(64) indata)
    bits(64) outdata;
    outdata<3:0> = TweakCellInvRot(indata<51:48>);
    outdata<7:4> = indata<55:52>;
    outdata<11:8> = indata<23:20>;
    outdata<15:12> = indata<27:24>;
    outdata<19:16> = indata<3:0>;
    outdata<23:20> = indata<7:4>;
    outdata<27:24> = TweakCellInvRot(indata<11:8>);
    outdata<31:28> = indata<15:12>;
    outdata<35:32> = TweakCellInvRot(indata<31:28>);
    outdata<39:36> = TweakCellInvRot(indata<63:60>);
    outdata<43:40> = TweakCellInvRot(indata<59:56>);
    outdata<47:44> = TweakCellInvRot(indata<19:16>);
    outdata<51:48> = indata<35:32>;
    outdata<55:52> = indata<39:36>;
    outdata<59:56> = indata<43:40>;
    outdata<63:60> = TweakCellInvRot(indata<47:44>);
    return outdata;

Library pseudocode for aarch64/functions/pac/computepac/TweakShuffle

// TweakShuffle()
// ==============

bits(64) TweakShuffle(bits(64) indata)
    bits(64) outdata;
    outdata<3:0> = indata<19:16>;
    outdata<7:4> = indata<23:20>;
    outdata<11:8> = TweakCellRot(indata<27:24>);
    outdata<15:12> = indata<31:28>;
    outdata<19:16> = TweakCellRot(indata<47:44>);
    outdata<23:20> = indata<11:8>;
    outdata<27:24> = indata<15:12>;
    outdata<31:28> = TweakCellRot(indata<35:32>);
    outdata<35:32> = indata<51:48>;
    outdata<39:36> = indata<55:52>;
    outdata<43:40> = indata<59:56>;
    outdata<47:44> = TweakCellRot(indata<63:60>);
    outdata<51:48> = TweakCellRot(indata<3:0>);
    outdata<55:52> = indata<7:4>;
    outdata<59:56> = TweakCellRot(indata<43:40>);
    outdata<63:60> = TweakCellRot(indata<39:36>);
    return outdata;

Library pseudocode for aarch64/functions/pac/pac/HavePACExt

// HavePACExt()
// ============

boolean HavePACExt()
    return HasArchVersion(ARMv8p3);

Library pseudocode for aarch64/functions/pac/pac/PtrHasUpperAndLowerAddRanges

// PtrHasUpperAndLowerAddRanges()
// ==============================
// Returns TRUE if the pointer has upper and lower address ranges

boolean PtrHasUpperAndLowerAddRanges()
    return PSTATE.EL == EL1 || PSTATE.EL == EL0 || (PSTATE.EL == EL2 && HCR_EL2.E2H == '1');

Library pseudocode for aarch64/functions/pac/strip/Strip

// Strip()
// =======
// Strip() returns a 64-bit value containing A, but replacing the pointer authentication
// code field bits with the extension of the address bits. This can apply to either
// instructions or data, where, as the use of tagged pointers is distinct, it might be
// handled differently.

bits(64) Strip(bits(64) A, boolean data)
    boolean TrapEL2;
    boolean TrapEL3;
    bits(64) original_ptr;
    bits(64) extfield;
    boolean tbi = CalculateTBI(A, data);
    integer bottom_PAC_bit = CalculateBottomPACBit(A<55>);

    extfield = Replicate(A<55>, 64);
    if tbi then
        original_ptr = A<63:56>:extfield<56-bottom_PAC_bit-1:0>:A<bottom_PAC_bit-1:0>;
    else
        original_ptr = extfield<64-bottom_PAC_bit-1:0>:A<bottom_PAC_bit-1:0>;

    case PSTATE.EL of
        when EL0
            TrapEL2 = (EL2Enabled() && HCR_EL2.API == '0' &&
                       (HCR_EL2.TGE == '0' || HCR_EL2.E2H == '0'));
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL1
            TrapEL2 = EL2Enabled() && HCR_EL2.API == '0';
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL2
            TrapEL2 = FALSE;
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL3
            TrapEL2 = FALSE;
            TrapEL3 = FALSE;

    if TrapEL2 then
        TrapPACUse(EL2);
    elsif TrapEL3 then
        TrapPACUse(EL3);
    else
        return original_ptr;
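Note (informative): for the no-TBI case, stripping amounts to sign-extending from bit<55> down to bottom_PAC_bit. A minimal Python sketch, assuming bottom_PAC_bit == 48 (illustrative, not architecture pseudocode):

def strip_sketch(ptr, bottom_pac_bit=48):
    low_mask = (1 << bottom_pac_bit) - 1
    extfield = ((1 << 64) - 1) if (ptr >> 55) & 1 else 0   # Replicate(A<55>, 64)
    return (extfield & ~low_mask) | (ptr & low_mask)

assert strip_sketch(0x001F_0000_DEAD_BEEF) == 0x0000_0000_DEAD_BEEF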

Library pseudocode for aarch64/functions/pac/trappacuse/TrapPACUse

// TrapPACUse()
// ============
// Used for the trapping of the pointer authentication functions by higher exception
// levels.

TrapPACUse(bits(2) target_el)
    assert HaveEL(target_el) && target_el != EL0 && UInt(target_el) >= UInt(PSTATE.EL);

    bits(64) preferred_exception_return = ThisInstrAddr();
    ExceptionRecord exception;
    vect_offset = 0;
    exception = ExceptionSyndrome(Exception_PACTrap);
    AArch64.TakeException(target_el, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/functions/ras/AArch64.ESBOperation

// AArch64.ESBOperation()
// ======================
// Perform the AArch64 ESB operation, either for ESB executed in AArch64 state, or for
// ESB in AArch32 state when SError interrupts are routed to an Exception level using
// AArch64

AArch64.ESBOperation()
    route_to_el3 = (HaveEL(EL3) && SCR_EL3.EA == '1');
    route_to_el2 = (EL2Enabled() && (HCR_EL2.TGE == '1' || HCR_EL2.AMO == '1'));
    target = (if route_to_el3 then EL3 elsif route_to_el2 then EL2 else EL1);

    if target == EL1 then
        mask_active = (PSTATE.EL IN {EL0, EL1});
    elsif HaveVirtHostExt() && target == EL2 && HCR_EL2.<E2H,TGE> == '11' then
        mask_active = (PSTATE.EL IN {EL0, EL2});
    else
        mask_active = (PSTATE.EL == target);

    mask_set = (PSTATE.A == '1' && (!HaveDoubleFaultExt() || SCR_EL3.EA == '0' ||
                PSTATE.EL != EL3 || SCR_EL3.NMEA == '0'));
    intdis = (Halted() || ExternalDebugInterruptsDisabled(target));
    masked = (UInt(target) < UInt(PSTATE.EL)) || intdis || (mask_active && mask_set);

    // Check for a masked Physical SError pending
    if IsPhysicalSErrorPending() && masked then
        // This function might be called for an interworking case, and INTdis is masking
        // the SError interrupt.
        if ELUsingAArch32(S1TranslationRegime()) then
            syndrome32 = AArch32.PhysicalSErrorSyndrome();
            DISR = AArch32.ReportDeferredSError(syndrome32.AET, syndrome32.ExT);
        else
            implicit_esb = FALSE;
            syndrome64 = AArch64.PhysicalSErrorSyndrome(implicit_esb);
            DISR_EL1 = AArch64.ReportDeferredSError(syndrome64);
        ClearPendingPhysicalSError();    // Set ISR_EL1.A to 0
    return;

Library pseudocode for aarch64/functions/ras/AArch64.PhysicalSErrorSyndrome

// Return the SError syndrome
bits(25) AArch64.PhysicalSErrorSyndrome(boolean implicit_esb);

Library pseudocode for aarch64/functions/ras/AArch64.ReportDeferredSError

// AArch64.ReportDeferredSError()
// ==============================
// Generate deferred SError syndrome

bits(64) AArch64.ReportDeferredSError(bits(25) syndrome)
    bits(64) target;
    target<31> = '1';              // A
    target<24> = syndrome<24>;     // IDS
    target<23:0> = syndrome<23:0>; // ISS
    return target;
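Note (informative): the packing above is a direct bit placement, shown here as a Python sketch (illustrative, not architecture pseudocode):

def report_deferred_serror(syndrome25):
    assert 0 <= syndrome25 < (1 << 25)
    target = 1 << 31                      # target<31> = '1' (A)
    target |= syndrome25 & (1 << 24)      # target<24> = syndrome<24> (IDS)
    target |= syndrome25 & 0xFFFFFF       # target<23:0> = syndrome<23:0> (ISS)
    return target

assert report_deferred_serror(1 << 24) == (1 << 31) | (1 << 24)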

Library pseudocode for aarch64/functions/ras/AArch64.vESBOperation

// AArch64.vESBOperation()
// =======================
// Perform the AArch64 ESB operation for virtual SError interrupts, either for ESB
// executed in AArch64 state, or for ESB in AArch32 state with EL2 using AArch64 state

AArch64.vESBOperation()
    assert EL2Enabled() && PSTATE.EL IN {EL0, EL1};

    // If physical SError interrupts are routed to EL2, and TGE is not set, then a virtual
    // SError interrupt might be pending
    vSEI_enabled = HCR_EL2.TGE == '0' && HCR_EL2.AMO == '1';
    vSEI_pending = vSEI_enabled && HCR_EL2.VSE == '1';
    vintdis = Halted() || ExternalDebugInterruptsDisabled(EL1);
    vmasked = vintdis || PSTATE.A == '1';

    // Check for a masked virtual SError pending
    if vSEI_pending && vmasked then
        // This function might be called for the interworking case, and INTdis is masking
        // the virtual SError interrupt.
        if ELUsingAArch32(EL1) then
            VDISR = AArch32.ReportDeferredSError(VDFSR<15:14>, VDFSR<12>);
        else
            VDISR_EL2 = AArch64.ReportDeferredSError(VSESR_EL2<24:0>);
        HCR_EL2.VSE = '0';    // Clear pending virtual SError
    return;

Library pseudocode for aarch64/functions/registers/AArch64.MaybeZeroRegisterUppers

// AArch64.MaybeZeroRegisterUppers()
// =================================
// On taking an exception to AArch64 from AArch32, it is CONSTRAINED UNPREDICTABLE whether the top
// 32 bits of registers visible at any lower Exception level using AArch32 are set to zero.

AArch64.MaybeZeroRegisterUppers()
    assert UsingAArch32();    // Always called from AArch32 state before entering AArch64 state

    if PSTATE.EL == EL0 && !ELUsingAArch32(EL1) then
        first = 0;  last = 14;  include_R15 = FALSE;
    elsif PSTATE.EL IN {EL0, EL1} && EL2Enabled() && !ELUsingAArch32(EL2) then
        first = 0;  last = 30;  include_R15 = FALSE;
    else
        first = 0;  last = 30;  include_R15 = TRUE;

    for n = first to last
        if (n != 15 || include_R15) && ConstrainUnpredictableBool(Unpredictable_ZEROUPPER) then
            _R[n]<63:32> = Zeros();
    return;

Library pseudocode for aarch64/functions/registers/AArch64.ResetGeneralRegisters

// AArch64.ResetGeneralRegisters()
// ===============================

AArch64.ResetGeneralRegisters()
    for i = 0 to 30
        X[i] = bits(64) UNKNOWN;
    return;

Library pseudocode for aarch64/functions/registers/AArch64.ResetSIMDFPRegisters

// AArch64.ResetSIMDFPRegisters()
// ==============================

AArch64.ResetSIMDFPRegisters()
    for i = 0 to 31
        V[i] = bits(128) UNKNOWN;
    return;

Library pseudocode for aarch64/functions/registers/AArch64.ResetSpecialRegisters

// AArch64.ResetSpecialRegisters()
// ===============================

AArch64.ResetSpecialRegisters()
    // AArch64 special registers
    SP_EL0 = bits(64) UNKNOWN;
    SP_EL1 = bits(64) UNKNOWN;
    SPSR_EL1 = bits(32) UNKNOWN;
    ELR_EL1 = bits(64) UNKNOWN;
    if HaveEL(EL2) then
        SP_EL2 = bits(64) UNKNOWN;
        SPSR_EL2 = bits(32) UNKNOWN;
        ELR_EL2 = bits(64) UNKNOWN;
    if HaveEL(EL3) then
        SP_EL3 = bits(64) UNKNOWN;
        SPSR_EL3 = bits(32) UNKNOWN;
        ELR_EL3 = bits(64) UNKNOWN;

    // AArch32 special registers that are not architecturally mapped to AArch64 registers
    if HaveAArch32EL(EL1) then
        SPSR_fiq = bits(32) UNKNOWN;
        SPSR_irq = bits(32) UNKNOWN;
        SPSR_abt = bits(32) UNKNOWN;
        SPSR_und = bits(32) UNKNOWN;

    // External debug special registers
    DLR_EL0 = bits(64) UNKNOWN;
    DSPSR_EL0 = bits(32) UNKNOWN;
    return;

Library pseudocode for aarch64/functions/registers/AArch64.ResetSystemRegisters

AArch64.ResetSystemRegisters(boolean cold_reset);

Library pseudocode for aarch64/functions/registers/PC

// PC - non-assignment form
// ========================
// Read program counter.

bits(64) PC[]
    return _PC;

Library pseudocode for aarch64/functions/registers/SP

// SP[] - assignment form
// ======================
// Write to stack pointer from either a 32-bit or a 64-bit value.

SP[] = bits(width) value
    assert width IN {32,64};
    if PSTATE.SP == '0' then
        SP_EL0 = ZeroExtend(value);
    else
        case PSTATE.EL of
            when EL0  SP_EL0 = ZeroExtend(value);
            when EL1  SP_EL1 = ZeroExtend(value);
            when EL2  SP_EL2 = ZeroExtend(value);
            when EL3  SP_EL3 = ZeroExtend(value);
    return;

// SP[] - non-assignment form
// ==========================
// Read stack pointer with implicit slice of 8, 16, 32 or 64 bits.

bits(width) SP[]
    assert width IN {8,16,32,64};
    if PSTATE.SP == '0' then
        return SP_EL0<width-1:0>;
    else
        case PSTATE.EL of
            when EL0  return SP_EL0<width-1:0>;
            when EL1  return SP_EL1<width-1:0>;
            when EL2  return SP_EL2<width-1:0>;
            when EL3  return SP_EL3<width-1:0>;

Library pseudocode for aarch64/functions/registers/V

// V[] - assignment form
// =====================

V[integer n] = bits(width) value
    assert n >= 0 && n <= 31;
    assert width IN {8,16,32,64,128};
    integer vlen = if IsSVEEnabled(PSTATE.EL) then VL else 128;
    if ConstrainUnpredictableBool(Unpredictable_SVEZEROUPPER) then
        _Z[n] = ZeroExtend(value);
    else
        _Z[n]<vlen-1:0> = ZeroExtend(value);

// V[] - non-assignment form
// =========================

bits(width) V[integer n]
    assert n >= 0 && n <= 31;
    assert width IN {8,16,32,64,128};
    return _Z[n]<width-1:0>;

Library pseudocode for aarch64/functions/registers/Vpart

// Vpart[] - non-assignment form
// =============================

bits(width) Vpart[integer n, integer part]
    assert n >= 0 && n <= 31;
    assert part IN {0, 1};
    if part == 0 then
        assert width IN {8,16,32,64};
        return V[n];
    else
        assert width == 64;
        return _V[n]<(width * 2)-1:width>;

// Vpart[] - assignment form
// =========================

Vpart[integer n, integer part] = bits(width) value
    assert n >= 0 && n <= 31;
    assert part IN {0, 1};
    if part == 0 then
        assert width IN {8,16,32,64};
        V[n] = value;
    else
        assert width == 64;
        bits(64) vreg = V[n];
        V[n] = value<63:0> : vreg;

Library pseudocode for aarch64/functions/sveregisters/FPExpCoefficientX

// X[] - assignment form // ===================== // Write to general-purpose register from either a 32-bit or a 64-bit value. X[integer n] = bits(width) value assert n >= 0 && n <= 31; assert width IN {32,64}; if n != 31 then _R[n] = ZeroExtend(value); return; // X[] - non-assignment form // ========================= // Read from general-purpose register with implicit slice of 8, 16, 32 or 64 bits. bits(width) X[integer n] assert n >= 0 && n <= 31; assert width IN {8,16,32,64}; if n != 31 then return _R[n]<width-1:0>; else return Zeros(width);
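The two accessors above are the whole general-purpose register file model: register 31 is not backed by storage (it reads as zero and ignores writes), and a 32-bit write zero-extends into the 64-bit register. A minimal C sketch of the same behaviour; the _R array and the n == 31 convention come from the pseudocode, while the helper names and flat uint64_t representation are ours:

#include <stdint.h>
#include <assert.h>

static uint64_t _R[31];                      /* X0..X30; there is no storage for 31 */

/* Read register n with an implicit slice of `width` bits; 31 reads as zero. */
static uint64_t X_read(int n, int width) {
    assert(n >= 0 && n <= 31);
    assert(width == 8 || width == 16 || width == 32 || width == 64);
    if (n == 31) return 0;
    return width == 64 ? _R[n] : (_R[n] & ((UINT64_C(1) << width) - 1));
}

/* Write a 32- or 64-bit value; a 32-bit write zero-extends, writes to 31 vanish. */
static void X_write(int n, int width, uint64_t value) {
    assert(n >= 0 && n <= 31);
    assert(width == 32 || width == 64);
    if (n != 31) _R[n] = width == 64 ? value : (value & 0xFFFFFFFFu);
}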

Library pseudocode for aarch64/functions/sve/AArch32.IsFPEnabled

// AArch32.IsFPEnabled() // ===================== boolean AArch32.IsFPEnabled(bits(2) el) if el == EL0 && !ELUsingAArch32(EL1) then return AArch64.IsFPEnabled(el); if HaveEL(EL3) && ELUsingAArch32(EL3) && !IsSecure() then // Check if access disabled in NSACR if NSACR.cp10 == '0' then return FALSE; if el IN {EL0, EL1} then // Check if access disabled in CPACR case CPACR.cp10 of when 'x0' disabled = TRUE; when '01' disabled = (el == EL0); when '11' disabled = FALSE; if disabled then return FALSE; if el IN {EL0, EL1, EL2} then if EL2Enabled() then if !ELUsingAArch32(EL2) then if CPTR_EL2.TFP == '1' then return FALSE; else if HCPTR.TCP10 == '1' then return FALSE; if HaveEL(EL3) && !ELUsingAArch32(EL3) then // Check if access disabled in CPTR_EL3 if CPTR_EL3.TFP == '1' then return FALSE; return TRUE;

Library pseudocode for aarch64/functions/sve/AArch64.IsFPEnabled

// AArch64.IsFPEnabled() // ===================== boolean AArch64.IsFPEnabled(bits(2) el) // Check if access disabled in CPACR_EL1 if el IN {EL0, EL1} then // Check FP&SIMD at EL0/EL1 case CPACR[].FPEN of when 'x0' disabled = TRUE; when '01' disabled = (el == EL0); when '11' disabled = FALSE; if disabled then return FALSE; // Check if access disabled in CPTR_EL2 if el IN {EL0, EL1, EL2} && EL2Enabled() then if HaveVirtHostExt() && HCR_EL2.E2H == '1' then if CPTR_EL2.FPEN == 'x0' then return FALSE; else if CPTR_EL2.TFP == '1' then return FALSE; // Check if access disabled in CPTR_EL3 if HaveEL(EL3) then if CPTR_EL3.TFP == '1' then return FALSE; return TRUE;

Library pseudocode for aarch64/functions/sve/CeilPow2

// CeilPow2() // ========== // For a positive integer X, return the smallest power of 2 >= X integer CeilPow2(integer x) if x == 0 then return 0; if x == 1 then return 2; return FloorPow2(x - 1) * 2;
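The definition leans on FloorPow2() through the identity CeilPow2(x) = 2 * FloorPow2(x - 1) for x >= 2; note that the pseudocode defines CeilPow2(1) as 2, not 1. A small C sketch of both helpers under that reading (function names are ours):

/* Largest power of 2 <= x; 0 for x == 0 (mirrors FloorPow2 below). */
static unsigned floor_pow2(unsigned x) {
    if (x == 0) return 0;
    unsigned p = 1;
    while ((p << 1) != 0 && (p << 1) <= x) p <<= 1;
    return p;
}

/* Smallest power of 2 >= x, with the spec's special cases for 0 and 1. */
static unsigned ceil_pow2(unsigned x) {
    if (x == 0) return 0;
    if (x == 1) return 2;
    return floor_pow2(x - 1) * 2;   /* e.g. ceil_pow2(5) == floor_pow2(4) * 2 == 8 */
}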

Library pseudocode for aarch64/functions/sve/CheckSVEEnabled

// CheckSVEEnabled() // ================= CheckSVEEnabled() // Check if access disabled in CPACR_EL1 if PSTATE.EL IN {EL0, EL1} then // Check SVE at EL0/EL1 case CPACR[].ZEN of when 'x0' disabled = TRUE; when '01' disabled = PSTATE.EL == EL0; when '11' disabled = FALSE; if disabled then SVEAccessTrap(EL1); // Check FP&SIMD at EL0/EL1 case CPACR[].FPEN of when 'x0' disabled = TRUE; when '01' disabled = PSTATE.EL == EL0; when '11' disabled = FALSE; if disabled then AArch64.AdvSIMDFPAccessTrap(EL1); if PSTATE.EL IN {EL0, EL1, EL2} && EL2Enabled() then if HaveVirtHostExt() && HCR_EL2.E2H == '1' then if CPTR_EL2.ZEN == 'x0' then SVEAccessTrap(EL2); if CPTR_EL2.FPEN == 'x0' then AArch64.AdvSIMDFPAccessTrap(EL2); else if CPTR_EL2.TZ == '1' then SVEAccessTrap(EL2); if CPTR_EL2.TFP == '1' then AArch64.AdvSIMDFPAccessTrap(EL2); // Check if access disabled in CPTR_EL3 if HaveEL(EL3) then if CPTR_EL3.EZ == '0' then SVEAccessTrap(EL3); if CPTR_EL3.TFP == '1' then AArch64.AdvSIMDFPAccessTrap(EL3);

Library pseudocode for aarch64/functions/sve/DecodePredCount

// DecodePredCount() // ================= integer DecodePredCount(bits(5) pattern, integer esize) integer elements = VL DIV esize; integer numElem; case pattern of when '00000' numElem = FloorPow2(elements); when '00001' numElem = if elements >= 1 then 1 else 0; when '00010' numElem = if elements >= 2 then 2 else 0; when '00011' numElem = if elements >= 3 then 3 else 0; when '00100' numElem = if elements >= 4 then 4 else 0; when '00101' numElem = if elements >= 5 then 5 else 0; when '00110' numElem = if elements >= 6 then 6 else 0; when '00111' numElem = if elements >= 7 then 7 else 0; when '01000' numElem = if elements >= 8 then 8 else 0; when '01001' numElem = if elements >= 16 then 16 else 0; when '01010' numElem = if elements >= 32 then 32 else 0; when '01011' numElem = if elements >= 64 then 64 else 0; when '01100' numElem = if elements >= 128 then 128 else 0; when '01101' numElem = if elements >= 256 then 256 else 0; when '11101' numElem = elements - (elements MOD 4); when '11110' numElem = elements - (elements MOD 3); when '11111' numElem = elements; otherwise numElem = 0; return numElem;
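As a worked example, with VL = 256 and esize = 64 there are 4 elements: pattern '11111' (ALL) gives 4, '01000' (VL8) gives 0 because fewer than 8 elements exist, and '11101' (MUL4) gives 4 - (4 MOD 4) = 4. The C sketch below condenses the same table; POW2, VL1..VL256, MUL4, MUL3 and ALL are the architectural pattern names, the function name is ours:

/* Condensed C version of the DecodePredCount table. */
static int decode_pred_count(unsigned pattern, int elements) {
    static const int vl_map[] = {0, 1, 2, 3, 4, 5, 6, 7, 8, 16, 32, 64, 128, 256};
    if (pattern == 0x00) {                        /* POW2 */
        int p = 0;
        for (int n = 1; n <= elements; n <<= 1) p = n;
        return p;
    }
    if (pattern >= 0x01 && pattern <= 0x0D)       /* VL1..VL256 */
        return elements >= vl_map[pattern] ? vl_map[pattern] : 0;
    if (pattern == 0x1D) return elements - (elements % 4);  /* MUL4 */
    if (pattern == 0x1E) return elements - (elements % 3);  /* MUL3 */
    if (pattern == 0x1F) return elements;                   /* ALL  */
    return 0;                                               /* reserved patterns */
}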

Library pseudocode for aarch64/functions/sve/ElemFFR

// ElemFFR[] - non-assignment form // =============================== bit ElemFFR[integer e, integer esize] return ElemP[_FFR, e, esize]; // ElemFFR[] - assignment form // =========================== ElemFFR[integer e, integer esize] = bit value integer psize = esize DIV 8; integer n = e * psize; assert n >= 0 && (n + psize) <= PL; _FFR<n+psize-1:n> = ZeroExtend(value, psize); return;

Library pseudocode for aarch64/functions/sve/ElemP

// ElemP[] - non-assignment form // ============================= bit ElemP[bits(N) pred, integer e, integer esize] integer n = e * (esize DIV 8); assert n >= 0 && n < N; return pred<n>; // ElemP[] - assignment form // ========================= ElemP[bits(N) &pred, integer e, integer esize] = bit value integer psize = esize DIV 8; integer n = e * psize; assert n >= 0 && (n + psize) <= N; pred<n+psize-1:n> = ZeroExtend(value, psize); return;
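ElemP[] captures the SVE predicate layout: one predicate bit per byte of vector data, so the flag for element e of size esize sits at bit position e * (esize DIV 8). A C sketch of the read side, assuming the predicate is held as a plain little-endian byte array:

#include <stdint.h>

/* Predicate bit for element e of esize_bits-wide elements (read side of ElemP[]). */
static int elem_p_read(const uint8_t *pred, int e, int esize_bits) {
    int n = e * (esize_bits / 8);          /* bit index within the predicate */
    return (pred[n / 8] >> (n % 8)) & 1;
}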

Library pseudocode for aarch64/functions/sve/FFR

// FFR[] - non-assignment form // =========================== bits(width) FFR[] assert width == PL; return _FFR<width-1:0>; // FFR[] - assignment form // ======================= FFR[] = bits(width) value assert width == PL; if ConstrainUnpredictableBool(Unpredictable_SVEZEROUPPER) then _FFR = ZeroExtend(value); else _FFR<width-1:0> = value;

Library pseudocode for aarch64/functions/sve/FPCompareNE

// FPCompareNE() // ============= boolean FPCompareNE(bits(N) op1, bits(N) op2, FPCRType fpcr) assert N IN {16,32,64}; (type1,sign1,value1) = FPUnpack(op1, fpcr); (type2,sign2,value2) = FPUnpack(op2, fpcr); if type1==FPType_SNaN || type1==FPType_QNaN || type2==FPType_SNaN || type2==FPType_QNaN then result = TRUE; if type1==FPType_SNaN || type2==FPType_SNaN then FPProcessException(FPExc_InvalidOp, fpcr); else // All non-NaN cases can be evaluated on the values produced by FPUnpack() result = (value1 != value2); return result;

Library pseudocode for aarch64/functions/sve/FPCompareUN

// FPCompareUN() // ============= boolean FPCompareUN(bits(N) op1, bits(N) op2, FPCRType fpcr) assert N IN {16,32,64}; (type1,sign1,value1) = FPUnpack(op1, fpcr); (type2,sign2,value2) = FPUnpack(op2, fpcr); if type1==FPType_SNaN || type2==FPType_SNaN then FPProcessException(FPExc_InvalidOp, fpcr); return (type1==FPType_SNaN || type1==FPType_QNaN || type2==FPType_SNaN || type2==FPType_QNaN);

Library pseudocode for aarch64/functions/sve/FPConvertSVE

// FPConvertSVE() // ============== bits(M) FPConvertSVE(bits(N) op, FPCRType fpcr, FPRounding rounding) fpcr.AHP = '0'; return FPConvert(op, fpcr, rounding); // FPConvertSVE() // ============== bits(M) FPConvertSVE(bits(N) op, FPCRType fpcr) fpcr.AHP = '0'; return FPConvert(op, fpcr, FPRoundingMode(fpcr));

Library pseudocode for aarch64/functions/sve/FPExpA

// FPExpA() // ======== bits(N) FPExpA(bits(N) op, FPCRType fpcr) assert N IN {16,32,64}; bits(N) result; bits(N) coeff; integer idx = if N == 16 then UInt(op<4:0>) else UInt(op<5:0>); coeff = FPExpCoefficient[idx]; if N == 16 then result<15:0> = '0':op<9:5>:coeff<9:0>; elsif N == 32 then result<31:0> = '0':op<13:6>:coeff<22:0>; else // N == 64 result<63:0> = '0':op<16:6>:coeff<51:0>; return result;
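FPExpA() is the FEXPA table lookup: the low operand bits index the FPExpCoefficient table of 2^(i/64) fractions (next section), and the bits above them drop straight into the exponent field, so a later multiply can finish an exponential estimate. A C sketch of the N == 64 case; coeff64 stands in for the 64-entry double-precision table:

#include <stdint.h>

/* FEXPA, double-precision case: result = '0' : op<16:6> : coeff<51:0>. */
static uint64_t fexpa64(uint64_t op, const uint64_t coeff64[64]) {
    uint64_t idx  = op & 0x3F;             /* op<5:0> selects the fraction  */
    uint64_t expo = (op >> 6) & 0x7FF;     /* op<16:6> becomes the exponent */
    return (expo << 52) | coeff64[idx];
}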

Library pseudocode for aarch64/functions/sve/FPExpCoefficient

// FPExpCoefficient() // ================== bits(N) FPExpCoefficient[integer index] assert N IN {16,32,64}; integer result; if N == 16 then case index of when 0 result = 0x0000; when 1 result = 0x0016; when 2 result = 0x002d; when 3 result = 0x0045; when 4 result = 0x005d; when 5 result = 0x0075; when 6 result = 0x008e; when 7 result = 0x00a8; when 8 result = 0x00c2; when 9 result = 0x00dc; when 10 result = 0x00f8; when 11 result = 0x0114; when 12 result = 0x0130; when 13 result = 0x014d; when 14 result = 0x016b; when 15 result = 0x0189; when 16 result = 0x01a8; when 17 result = 0x01c8; when 18 result = 0x01e8; when 19 result = 0x0209; when 20 result = 0x022b; when 21 result = 0x024e; when 22 result = 0x0271; when 23 result = 0x0295; when 24 result = 0x02ba; when 25 result = 0x02e0; when 26 result = 0x0306; when 27 result = 0x032e; when 28 result = 0x0356; when 29 result = 0x037f; when 30 result = 0x03a9; when 31 result = 0x03d4; elsif N == 32 then case index of when 0 result = 0x000000; when 1 result = 0x0164d2; when 2 result = 0x02cd87; when 3 result = 0x043a29; when 4 result = 0x05aac3; when 5 result = 0x071f62; when 6 result = 0x08980f; when 7 result = 0x0a14d5; when 8 result = 0x0b95c2; when 9 result = 0x0d1adf; when 10 result = 0x0ea43a; when 11 result = 0x1031dc; when 12 result = 0x11c3d3; when 13 result = 0x135a2b; when 14 result = 0x14f4f0; when 15 result = 0x16942d; when 16 result = 0x1837f0; when 17 result = 0x19e046; when 18 result = 0x1b8d3a; when 19 result = 0x1d3eda; when 20 result = 0x1ef532; when 21 result = 0x20b051; when 22 result = 0x227043; when 23 result = 0x243516; when 24 result = 0x25fed7; when 25 result = 0x27cd94; when 26 result = 0x29a15b; when 27 result = 0x2b7a3a; when 28 result = 0x2d583f; when 29 result = 0x2f3b79; when 30 result = 0x3123f6; when 31 result = 0x3311c4; when 32 result = 0x3504f3; when 33 result = 0x36fd92; when 34 result = 0x38fbaf; when 35 result = 0x3aff5b; when 36 result = 0x3d08a4; when 37 result = 0x3f179a; when 38 result = 0x412c4d; when 39 result = 0x4346cd; when 40 result = 0x45672a; when 41 result = 0x478d75; when 42 result = 0x49b9be; when 43 result = 0x4bec15; when 44 result = 0x4e248c; when 45 result = 0x506334; when 46 result = 0x52a81e; when 47 result = 0x54f35b; when 48 result = 0x5744fd; when 49 result = 0x599d16; when 50 result = 0x5bfbb8; when 51 result = 0x5e60f5; when 52 result = 0x60ccdf; when 53 result = 0x633f89; when 54 result = 0x65b907; when 55 result = 0x68396a; when 56 result = 0x6ac0c7; when 57 result = 0x6d4f30; when 58 result = 0x6fe4ba; when 59 result = 0x728177; when 60 result = 0x75257d; when 61 result = 0x77d0df; when 62 result = 0x7a83b3; when 63 result = 0x7d3e0c; else // N == 64 case index of when 0 result = 0x0000000000000; when 1 result = 0x02C9A3E778061; when 2 result = 0x059B0D3158574; when 3 result = 0x0874518759BC8; when 4 result = 0x0B5586CF9890F; when 5 result = 0x0E3EC32D3D1A2; when 6 result = 0x11301D0125B51; when 7 result = 0x1429AAEA92DE0; when 8 result = 0x172B83C7D517B; when 9 result = 0x1A35BEB6FCB75; when 10 result = 0x1D4873168B9AA; when 11 result = 0x2063B88628CD6; when 12 result = 0x2387A6E756238; when 13 result = 0x26B4565E27CDD; when 14 result = 0x29E9DF51FDEE1; when 15 result = 0x2D285A6E4030B; when 16 result = 0x306FE0A31B715; when 17 result = 0x33C08B26416FF; when 18 result = 0x371A7373AA9CB; when 19 result = 0x3A7DB34E59FF7; when 20 result = 0x3DEA64C123422; when 21 result = 0x4160A21F72E2A; when 22 result = 0x44E086061892D; when 23 result = 0x486A2B5C13CD0; when 24 result = 0x4BFDAD5362A27; when 25 result = 0x4F9B2769D2CA7; when 26 result = 0x5342B569D4F82; when 27 result = 0x56F4736B527DA; when 28 result = 0x5AB07DD485429; when 29 result = 0x5E76F15AD2148; when 30 result = 0x6247EB03A5585; when 31 result = 0x6623882552225; when 32 result = 0x6A09E667F3BCD; when 33 result = 0x6DFB23C651A2F; when 34 result = 0x71F75E8EC5F74; when 35 result = 0x75FEB564267C9; when 36 result = 0x7A11473EB0187; when 37 result = 0x7E2F336CF4E62; when 38 result = 0x82589994CCE13; when 39 result = 0x868D99B4492ED; when 40 result = 0x8ACE5422AA0DB; when 41 result = 0x8F1AE99157736; when 42 result = 0x93737B0CDC5E5; when 43 result = 0x97D829FDE4E50; when 44 result = 0x9C49182A3F090; when 45 result = 0xA0C667B5DE565; when 46 result = 0xA5503B23E255D; when 47 result = 0xA9E6B5579FDBF; when 48 result = 0xAE89F995AD3AD; when 49 result = 0xB33A2B84F15FB; when 50 result = 0xB7F76F2FB5E47; when 51 result = 0xBCC1E904BC1D2; when 52 result = 0xC199BDD85529C; when 53 result = 0xC67F12E57D14B; when 54 result = 0xCB720DCEF9069; when 55 result = 0xD072D4A07897C; when 56 result = 0xD5818DCFBA487; when 57 result = 0xDA9E603DB3285; when 58 result = 0xDFC97337B9B5F; when 59 result = 0xE502EE78B3FF6; when 60 result = 0xEA4AFA2A490DA; when 61 result = 0xEFA1BEE615A27; when 62 result = 0xF50765B6E4540; when 63 result = 0xFA7C1819E90D8; return result<N-1:0>;

Library pseudocode for aarch64/functions/sve/FPMinNormal

// FPMinNormal() // ============= bits(N) FPMinNormal(bit sign) assert N IN {16,32,64}; constant integer E = (if N == 16 then 5 elsif N == 32 then 8 else 11); constant integer F = N - (E + 1); exp = Zeros(E-1):'1'; frac = Zeros(F); return sign : exp : frac;

Library pseudocode for aarch64/functions/sve/FPOne

// FPOne() // ======= bits(N) FPOne(bit sign) assert N IN {16,32,64}; constant integer E = (if N == 16 then 5 elsif N == 32 then 8 else 11); constant integer F = N - (E + 1); exp = '0':Ones(E-1); frac = Zeros(F); return sign : exp : frac;

Library pseudocode for aarch64/functions/sve/FPPointFive

// FPPointFive() // ============= bits(N) FPPointFive(bit sign) assert N IN {16,32,64}; constant integer E = (if N == 16 then 5 elsif N == 32 then 8 else 11); constant integer F = N - (E + 1); exp = '0':Ones(E-2):'0'; frac = Zeros(F); return sign : exp : frac;

Library pseudocode for aarch64/functions/sve/FPProcess

// FPProcess() // =========== bits(N) FPProcess(bits(N) input) bits(N) result; assert N IN {16,32,64}; (type,sign,value) = FPUnpack(input, FPCR); if type == FPType_SNaN || type == FPType_QNaN then result = FPProcessNaN(type, input, FPCR); elsif type == FPType_Infinity then result = FPInfinity(sign); elsif type == FPType_Zero then result = FPZero(sign); else result = FPRound(value, FPCR); return result;

Library pseudocode for aarch64/functions/sve/FPScale

// FPScale() // ========= bits(N) FPScale(bits(N) op, integer scale, FPCRType fpcr) assert N IN {16,32,64}; (type,sign,value) = FPUnpack(op, fpcr); if type == FPType_SNaN || type == FPType_QNaN then result = FPProcessNaN(type, op, fpcr); elsif type == FPType_Zero then result = FPZero(sign); elsif type == FPType_Infinity then result = FPInfinity(sign); else result = FPRound(value * (2.0^scale), fpcr); return result;

Library pseudocode for aarch64/functions/sve/FPTrigMAdd

// FPTrigMAdd() // ============ bits(N) FPTrigMAdd(integer x, bits(N) op1, bits(N) op2, FPCRType fpcr) assert N IN {16,32,64}; assert x >= 0; assert x < 8; bits(N) coeff; if op2<N-1> == '1' then x = x + 8; op2<N-1> = '0'; coeff = FPTrigMAddCoefficient[x]; result = FPMulAdd(coeff, op1, op2, fpcr); return result;

Library pseudocode for aarch64/functions/sve/FPTrigMAddCoefficient

// FPTrigMAddCoefficient() // ======================= bits(N) FPTrigMAddCoefficient[integer index] assert N IN {16,32,64}; integer result; if N == 16 then case index of when 0 result = 0x3c00; when 1 result = 0xb155; when 2 result = 0x2030; when 3 result = 0x0000; when 4 result = 0x0000; when 5 result = 0x0000; when 6 result = 0x0000; when 7 result = 0x0000; when 8 result = 0x3c00; when 9 result = 0xb800; when 10 result = 0x293a; when 11 result = 0x0000; when 12 result = 0x0000; when 13 result = 0x0000; when 14 result = 0x0000; when 15 result = 0x0000; elsif N == 32 then case index of when 0 result = 0x3f800000; when 1 result = 0xbe2aaaab; when 2 result = 0x3c088886; when 3 result = 0xb95008b9; when 4 result = 0x36369d6d; when 5 result = 0x00000000; when 6 result = 0x00000000; when 7 result = 0x00000000; when 8 result = 0x3f800000; when 9 result = 0xbf000000; when 10 result = 0x3d2aaaa6; when 11 result = 0xbab60705; when 12 result = 0x37cd37cc; when 13 result = 0x00000000; when 14 result = 0x00000000; when 15 result = 0x00000000; else // N == 64 case index of when 0 result = 0x3ff0000000000000; when 1 result = 0xbfc5555555555543; when 2 result = 0x3f8111111110f30c; when 3 result = 0xbf2a01a019b92fc6; when 4 result = 0x3ec71de351f3d22b; when 5 result = 0xbe5ae5e2b60f7b91; when 6 result = 0x3de5d8408868552f; when 7 result = 0x0000000000000000; when 8 result = 0x3ff0000000000000; when 9 result = 0xbfe0000000000000; when 10 result = 0x3fa5555555555536; when 11 result = 0xbf56c16c16c13a0b; when 12 result = 0x3efa01a019b1e8d8; when 13 result = 0xbe927e4f7282f468; when 14 result = 0x3e21ee96d2641b13; when 15 result = 0xbda8f76380fbb401; return result<N-1:0>;

Library pseudocode for aarch64/functions/sve/FPTrigSMul

// FPTrigSMul() // ============ bits(N) FPTrigSMul(bits(N) op1, bits(N) op2, FPCRType fpcr) assert N IN {16,32,64}; result = FPMul(op1, op1, fpcr); (type, sign, value) = FPUnpack(result, fpcr); if (type != FPType_QNaN) && (type != FPType_SNaN) then result<N-1> = op2<0>; return result;

Library pseudocode for aarch64/functions/sve/FPTrigSSel

// FPTrigSSel() // ============ bits(N) FPTrigSSel(bits(N) op1, bits(N) op2) assert N IN {16,32,64}; bits(N) result; if op2<0> == '1' then result = FPOne(op2<1>); else result = op1; result<N-1> = result<N-1> EOR op2<1>; return result;

Library pseudocode for aarch64/functions/sve/FirstActive

// FirstActive() // ============= bit FirstActive(bits(N) mask, bits(N) x, integer esize) integer elements = N DIV (esize DIV 8); for e = 0 to elements-1 if ElemP[mask, e, esize] == '1' then return ElemP[x, e, esize]; return '0';

Library pseudocode for aarch64/functions/sve/FloorPow2

// FloorPow2() // =========== // For a positive integer X, return the largest power of 2 <= X integer FloorPow2(integer x) assert x >= 0; integer n = 1; if x == 0 then return 0; while x >= 2^n do n = n + 1; return 2^(n - 1);

Library pseudocode for aarch64/functions/sve/HaveSVE

// HaveSVE() // ========= boolean HaveSVE() return HasArchVersion(ARMv8p2) && boolean IMPLEMENTATION_DEFINED "Have SVE ISA";

Library pseudocode for aarch64/functions/sve/ImplementedSVEVectorLength

// ImplementedSVEVectorLength() // ============================ // Reduce SVE vector length to a supported value (e.g. power of two) integer ImplementedSVEVectorLength(integer nbits) return integer IMPLEMENTATION_DEFINED;
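The reduction policy is IMPLEMENTATION_DEFINED; one legal choice, sketched below in C, is to clamp the requested length down to the largest supported power of two, bounded by the 128-bit minimum and MAX_VL = 2048:

/* One possible ImplementedSVEVectorLength: power-of-two lengths only. */
static int implemented_sve_vector_length(int nbits) {
    int vl = 128;                                   /* architectural minimum */
    while (vl * 2 <= nbits && vl * 2 <= 2048) vl *= 2;
    return vl;
}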

Library pseudocode for aarch64/functions/sve/IsEven

// IsEven() // ======== boolean IsEven(integer val) return val MOD 2 == 0;

Library pseudocode for aarch64/functions/sve/IsFPEnabled

// IsFPEnabled() // ============= boolean IsFPEnabled(bits(2) el) if ELUsingAArch32(el) then return AArch32.IsFPEnabled(el); else return AArch64.IsFPEnabled(el);

Library pseudocode for aarch64/functions/sve/IsSVEEnabled

// IsSVEEnabled() // ============== boolean IsSVEEnabled(bits(2) el) if ELUsingAArch32(el) then return FALSE; // Check if access disabled in CPACR_EL1 if el IN {EL0, EL1} then // Check SVE at EL0/EL1 case CPACR[].ZEN of when 'x0' disabled = TRUE; when '01' disabled = (el == EL0); when '11' disabled = FALSE; if disabled then return FALSE; // Check if access disabled in CPTR_EL2 if el IN {EL0, EL1, EL2} && EL2Enabled() then if HaveVirtHostExt() && HCR_EL2.E2H == '1' then if CPTR_EL2.ZEN == 'x0' then return FALSE; else if CPTR_EL2.TZ == '1' then return FALSE; // Check if access disabled in CPTR_EL3 if HaveEL(EL3) then if CPTR_EL3.EZ == '0' then return FALSE; return TRUE;

Library pseudocode for aarch64/functions/sve/LastActive

// LastActive() // ============ bit LastActive(bits(N) mask, bits(N) x, integer esize) integer elements = N DIV (esize DIV 8); for e = elements-1 downto 0 if ElemP[mask, e, esize] == '1' then return ElemP[x, e, esize]; return '0';

Library pseudocode for aarch64/functions/sve/LastActiveElement

// LastActiveElement() // =================== integer LastActiveElement(bits(N) mask, integer esize) assert esize IN {8, 16, 32, 64}; integer elements = VL DIV esize; for e = elements-1 downto 0 if ElemP[mask, e, esize] == '1' then return e; return -1;

Library pseudocode for aarch64/functions/sve/MAX_PL

constant integer MAX_PL = 256;

Library pseudocode for aarch64/functions/sve/MAX_VL

constant integer MAX_VL = 2048;

Library pseudocode for aarch64/functions/sve/MaybeZeroSVEUppers

// MaybeZeroSVEUppers() // ==================== MaybeZeroSVEUppers(bits(2) target_el) boolean lower_enabled; if UInt(target_el) <= UInt(PSTATE.EL) || !IsSVEEnabled(target_el) then return; if target_el == EL3 then if EL2Enabled() then lower_enabled = IsFPEnabled(EL2); else lower_enabled = IsFPEnabled(EL1); else lower_enabled = IsFPEnabled(target_el - 1); if lower_enabled then integer vl = if IsSVEEnabled(PSTATE.EL) then VL else 128; integer pl = vl DIV 8; for n = 0 to 31 if ConstrainUnpredictableBool(Unpredictable_SVEZEROUPPER) then _Z[n] = ZeroExtend(_Z[n]<vl-1:0>); for n = 0 to 15 if ConstrainUnpredictableBool(Unpredictable_SVEZEROUPPER) then _P[n] = ZeroExtend(_P[n]<pl-1:0>); if ConstrainUnpredictableBool(Unpredictable_SVEZEROUPPER) then _FFR = ZeroExtend(_FFR<pl-1:0>);

Library pseudocode for aarch64/functions/sve/MemNF

// MemNF[] - non-assignment form // ============================= (bits(8*size), boolean) MemNF[bits(64) address, integer size, AccType acctype] assert size IN {1, 2, 4, 8, 16}; bits(8*size) value; aligned = (address == Align(address, size)); A = SCTLR[].A; if !aligned && (A == '1') then return (bits(8*size) UNKNOWN, TRUE); atomic = aligned || size == 1; if !atomic then (value<7:0>, bad) = MemSingleNF[address, 1, acctype, aligned]; if bad then return (bits(8*size) UNKNOWN, TRUE); // For subsequent bytes it is CONSTRAINED UNPREDICTABLE whether an unaligned Device memory // access will generate an Alignment Fault, as to get this far means the first byte did // not, so we must be changing to a new translation page. if !aligned then c = ConstrainUnpredictable(Unpredictable_DEVPAGE2); assert c IN {Constraint_FAULT, Constraint_NONE}; if c == Constraint_NONE then aligned = TRUE; for i = 1 to size-1 (value<8*i+7:8*i>, bad) = MemSingleNF[address+i, 1, acctype, aligned]; if bad then return (bits(8*size) UNKNOWN, TRUE); else (value, bad) = MemSingleNF[address, size, acctype, aligned]; if bad then return (bits(8*size) UNKNOWN, TRUE); if BigEndian() then value = BigEndianReverse(value); return (value, FALSE);

Library pseudocode for aarch64/functions/sve/MemSingleNF

// MemSingleNF[] - non-assignment form // =================================== (bits(8*size), boolean) MemSingleNF[bits(64) address, integer size, AccType acctype, boolean wasaligned] bits(8*size) value; boolean iswrite = FALSE; AddressDescriptor memaddrdesc; // Implementation may suppress NF load for any reason if ConstrainUnpredictableBool(Unpredictable_NONFAULT) then return (bits(8*size) UNKNOWN, TRUE); // MMU or MPU memaddrdesc = AArch64.TranslateAddress(address, acctype, iswrite, wasaligned, size); // Non-fault load from Device memory must not be performed externally if memaddrdesc.memattrs.type == MemType_Device then return (bits(8*size) UNKNOWN, TRUE); // Check for aborts or debug exceptions if IsFault(memaddrdesc) then return (bits(8*size) UNKNOWN, TRUE); // Memory array access accdesc = CreateAccessDescriptor(acctype); if HaveMTEExt() then if AccessIsTagChecked(address, acctype) then bits(4) ptag = TransformTag(address); if !CheckTag(memaddrdesc, ptag, iswrite) then return (bits(8*size) UNKNOWN, TRUE); value = _Mem[memaddrdesc, size, accdesc]; return (value, FALSE);

Library pseudocode for aarch64/functions/sve/NoneActive

// NoneActive() // ============ bit NoneActive(bits(N) mask, bits(N) x, integer esize) integer elements = N DIV (esize DIV 8); for e = 0 to elements-1 if ElemP[mask, e, esize] == '1' && ElemP[x, e, esize] == '1' then return '0'; return '1';

Library pseudocode for aarch64/functions/sve/P

// P[] - non-assignment form // ========================= bits(width) P[integer n] assert n >= 0 && n <= 31; assert width == PL; return _P[n]<width-1:0>; // P[] - assignment form // ===================== P[integer n] = bits(width) value assert n >= 0 && n <= 31; assert width == PL; if ConstrainUnpredictableBool(Unpredictable_SVEZEROUPPER) then _P[n] = ZeroExtend(value); else _P[n]<width-1:0> = value;

Library pseudocode for aarch64/functions/sve/PL

// PL - non-assignment form // ======================== integer PL return VL DIV 8;

Library pseudocode for aarch64/functions/sve/PredTest

// PredTest() // ========== bits(4) PredTest(bits(N) mask, bits(N) result, integer esize) bit n = FirstActive(mask, result, esize); bit z = NoneActive(mask, result, esize); bit c = NOT LastActive(mask, result, esize); bit v = '0'; return n:z:c:v;
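PredTest() packs three predicate properties into NZCV: N is the result bit at the first active element, Z is set when no active element is set, C is the inverse of the last active result bit, and V is always 0; an all-active mask with an all-zero result therefore yields NZCV = 0110. A C sketch over a byte-array predicate (layout as in the ElemP[] sketch earlier; names are ours):

#include <stdint.h>

/* NZCV for a predicate-producing operation; nbits is the predicate length PL. */
static unsigned pred_test_nzcv(const uint8_t *mask, const uint8_t *res,
                               int nbits, int esize_bits) {
    int step = esize_bits / 8;             /* one predicate bit per vector byte */
    int first = 0, none = 1, last = 0, seen = 0;
    for (int n = 0; n < nbits; n += step) {
        int m = (mask[n / 8] >> (n % 8)) & 1;
        int r = (res[n / 8] >> (n % 8)) & 1;
        if (!m) continue;
        if (!seen) { first = r; seen = 1; }
        if (r) none = 0;
        last = r;
    }
    return (unsigned)((first << 3) | (none << 2) | ((last ^ 1) << 1)); /* N:Z:C:V */
}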

Library pseudocode for aarch64/functions/sve/ReducePredicated

// ReducePredicated() // ================== bits(esize) ReducePredicated(ReduceOp op, bits(N) input, bits(M) mask, bits(esize) identity) assert(N == M * 8); integer p2bits = CeilPow2(N); bits(p2bits) operand; integer elements = p2bits DIV esize; for e = 0 to elements-1 if e * esize < N && ElemP[mask, e, esize] == '1' then Elem[operand, e, esize] = Elem[input, e, esize]; else Elem[operand, e, esize] = identity; return Reduce(op, operand, esize);

Library pseudocode for aarch64/functions/sve/Reverse

// Reverse() // ========= // Reverse subwords of M bits in an N-bit word bits(N) Reverse(bits(N) word, integer M) bits(N) result; integer sw = N DIV M; assert N == sw * M; for s = 0 to sw-1 Elem[result, sw - 1 - s, M] = Elem[word, s, M]; return result;
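For example, Reverse(0xAABBCCDD, 8) over a 32-bit word reverses the byte order to 0xDDCCBBAA. A C sketch of the N == 32 case:

#include <stdint.h>

/* Reverse the m-bit subwords of a 32-bit word, e.g. m == 8 reverses bytes. */
static uint32_t reverse_subwords(uint32_t word, int m) {
    int sw = 32 / m;                                     /* number of subwords */
    uint32_t mask = (m == 32) ? 0xFFFFFFFFu : ((1u << m) - 1);
    uint32_t result = 0;
    for (int s = 0; s < sw; s++)
        result |= ((word >> (s * m)) & mask) << ((sw - 1 - s) * m);
    return result;
}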

Library pseudocode for aarch64/functions/sve/SVEAccessTrap

// SVEAccessTrap() // =============== // Trapped access to SVE registers due to CPACR_EL1, CPTR_EL2, or CPTR_EL3. SVEAccessTrap(bits(2) target_el) assert UInt(target_el) >= UInt(PSTATE.EL) && target_el != EL0 && HaveEL(target_el); route_to_el2 = target_el == EL1 && EL2Enabled() && HCR_EL2.TGE == '1'; exception = ExceptionSyndrome(Exception_SVEAccessTrap); bits(64) preferred_exception_return = ThisInstrAddr(); vect_offset = 0x0; if route_to_el2 then AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset); else AArch64.TakeException(target_el, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/functions/sve/SVECmp

enumeration SVECmp { Cmp_EQ, Cmp_NE, Cmp_GE, Cmp_GT, Cmp_LT, Cmp_LE, Cmp_UN };

Library pseudocode for aarch64/functions/sve/SVEMoveMaskPreferred

// SVEMoveMaskPreferred() // ====================== // Return FALSE if a bitmask immediate encoding would generate an immediate // value that could also be represented by a single DUP instruction. // Used as a condition for the preferred MOV<-DUPM alias. boolean SVEMoveMaskPreferred(bits(13) imm13) bits(64) imm; (imm, -) = DecodeBitMasks(imm13<12>, imm13<5:0>, imm13<11:6>, TRUE); // Check for 8 bit immediates if !IsZero(imm<7:0>) then // Check for 'ffffffffffffffxy' or '00000000000000xy' if IsZero(imm<63:7>) || IsOnes(imm<63:7>) then return FALSE; // Check for 'ffffffxyffffffxy' or '000000xy000000xy' if imm<63:32> == imm<31:0> && (IsZero(imm<31:7>) || IsOnes(imm<31:7>)) then return FALSE; // Check for 'ffxyffxyffxyffxy' or '00xy00xy00xy00xy' if imm<63:32> == imm<31:0> && imm<31:16> == imm<15:0> && (IsZero(imm<15:7>) || IsOnes(imm<15:7>)) then return FALSE; // Check for 'xyxyxyxyxyxyxyxy' if imm<63:32> == imm<31:0> && imm<31:16> == imm<15:0> && (imm<15:8> == imm<7:0>) then return FALSE; // Check for 16 bit immediates else // Check for 'ffffffffffffxy00' or '000000000000xy00' if IsZero(imm<63:15>) || IsOnes(imm<63:15>) then return FALSE; // Check for 'ffffxy00ffffxy00' or '0000xy000000xy00' if imm<63:32> == imm<31:0> && (IsZero(imm<31:7>) || IsOnes(imm<31:7>)) then return FALSE; // Check for 'xy00xy00xy00xy00' if imm<63:32> == imm<31:0> && imm<31:16> == imm<15:0> then return FALSE; return TRUE;

Library pseudocode for aarch64/functions/sve/System

array bits(MAX_VL) _Z[0..31]; array bits(MAX_PL) _P[0..15]; bits(MAX_PL) _FFR;

Library pseudocode for aarch64/functions/sve/VL

// VL - non-assignment form // ======================== integer VL integer vl; if PSTATE.EL == EL1 || (PSTATE.EL == EL0 && !IsInHost()) then vl = UInt(ZCR_EL1.LEN); if PSTATE.EL == EL2 || (PSTATE.EL == EL0 && IsInHost()) then vl = UInt(ZCR_EL2.LEN); elsif EL2Enabled() && PSTATE.EL IN {EL0,EL1} then vl = Min(vl, UInt(ZCR_EL2.LEN)); if PSTATE.EL == EL3 then vl = UInt(ZCR_EL3.LEN); elsif HaveEL(EL3) then vl = Min(vl, UInt(ZCR_EL3.LEN)); vl = (vl + 1) * 128; vl = ImplementedSVEVectorLength(vl); return vl;
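Reading the computation above: the ZCR_ELx.LEN fields of the enabled higher Exception levels are minimum-combined, scaled as (LEN + 1) * 128 bits, and finally reduced to an implemented length, so ZCR_EL1.LEN = 3 with no tighter EL2/EL3 limit requests a 512-bit VL. A C sketch of the EL0/EL1 case (LEN fields passed as plain integers; the final IMPLEMENTATION_DEFINED reduction is left out):

/* Effective SVE vector length in bits as seen at EL0/EL1. */
static int sve_vl_bits(int len_el1, int len_el2, int len_el3,
                       int el2_enabled, int have_el3) {
    int len = len_el1;
    if (el2_enabled && len_el2 < len) len = len_el2;    /* Min with ZCR_EL2.LEN */
    if (have_el3 && len_el3 < len) len = len_el3;       /* Min with ZCR_EL3.LEN */
    return (len + 1) * 128;     /* ImplementedSVEVectorLength would follow */
}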

Library pseudocode for aarch64/functions/sve/Z

// Z[] - non-assignment form // ========================= bits(width) Z[integer n] assert n >= 0 && n <= 31; assert width == VL; return _Z[n]<width-1:0>; // Z[] - assignment form // ===================== Z[integer n] = bits(width) value assert n >= 0 && n <= 31; assert width == VL; if ConstrainUnpredictableBool(Unpredictable_SVEZEROUPPER) then _Z[n] = ZeroExtend(value); else _Z[n]<width-1:0> = value;

Library pseudocode for aarch64/functions/sysregisters/CNTKCTL

// CNTKCTL[] - non-assignment form // =============================== CNTKCTLType CNTKCTL[] bits(32) r; if IsInHost() then r = CNTHCTL_EL2; return r; r = CNTKCTL_EL1; return r;

Library pseudocode for aarch64/functions/sysregisters/CNTKCTLType

type CNTKCTLType;

Library pseudocode for aarch64/functions/sysregisters/CPACR

// CPACR[] - non-assignment form // ============================= CPACRType CPACR[] bits(32) r; if IsInHost() then r = CPTR_EL2; return r; r = CPACR_EL1; return r;

Library pseudocode for aarch64/functions/sysregisters/CPACRType

type CPACRType;

Library pseudocode for aarch64/functions/sysregisters/ELR

// ELR[] - non-assignment form // =========================== bits(64) ELR[bits(2) el] bits(64) r; case el of when EL1 r = ELR_EL1; when EL2 r = ELR_EL2; when EL3 r = ELR_EL3; otherwise Unreachable(); return r; // ELR[] - non-assignment form // =========================== bits(64) ELR[] assert PSTATE.EL != EL0; return ELR[PSTATE.EL]; // ELR[] - assignment form // ======================= ELR[bits(2) el] = bits(64) value bits(64) r = value; case el of when EL1 ELR_EL1 = r; when EL2 ELR_EL2 = r; when EL3 ELR_EL3 = r; otherwise Unreachable(); return; // ELR[] - assignment form // ======================= ELR[] = bits(64) value assert PSTATE.EL != EL0; ELR[PSTATE.EL] = value; return;

Library pseudocode for aarch64/functions/sysregisters/ESR

// ESR[] - non-assignment form // =========================== ESRType ESR[bits(2) regime] bits(32) r; case regime of when EL1 r = ESR_EL1; when EL2 r = ESR_EL2; when EL3 r = ESR_EL3; otherwise Unreachable(); return r; // ESR[] - non-assignment form // =========================== ESRType ESR[] return ESR[S1TranslationRegime()]; // ESR[] - assignment form // ======================= ESR[bits(2) regime] = ESRType value bits(32) r = value; case regime of when EL1 ESR_EL1 = r; when EL2 ESR_EL2 = r; when EL3 ESR_EL3 = r; otherwise Unreachable(); return; // ESR[] - assignment form // ======================= ESR[] = ESRType value ESR[S1TranslationRegime()] = value;

Library pseudocode for aarch64/functions/sysregisters/ESRType

type ESRType;

Library pseudocode for aarch64/functions/sysregisters/FAR

// FAR[] - non-assignment form // =========================== bits(64) FAR[bits(2) regime] bits(64) r; case regime of when EL1 r = FAR_EL1; when EL2 r = FAR_EL2; when EL3 r = FAR_EL3; otherwise Unreachable(); return r; // FAR[] - non-assignment form // =========================== bits(64) FAR[] return FAR[S1TranslationRegime()]; // FAR[] - assignment form // ======================= FAR[bits(2) regime] = bits(64) value bits(64) r = value; case regime of when EL1 FAR_EL1 = r; when EL2 FAR_EL2 = r; when EL3 FAR_EL3 = r; otherwise Unreachable(); return; // FAR[] - assignment form // ======================= FAR[] = bits(64) value FAR[S1TranslationRegime()] = value; return;

Library pseudocode for aarch64/functions/sysregisters/MAIR

// MAIR[] - non-assignment form // ============================ MAIRType MAIR[bits(2) regime] bits(64) r; case regime of when EL1 r = MAIR_EL1; when EL2 r = MAIR_EL2; when EL3 r = MAIR_EL3; otherwise Unreachable(); return r; // MAIR[] - non-assignment form // ============================ MAIRType MAIR[] return MAIR[S1TranslationRegime()];

Library pseudocode for aarch64/functions/sysregisters/MAIRType

type MAIRType;

Library pseudocode for aarch64/functions/sysregisters/SCTLR

// SCTLR[] - non-assignment form // ============================= SCTLRType SCTLR[bits(2) regime] bits(64) r; case regime of when EL1 r = SCTLR_EL1; when EL2 r = SCTLR_EL2; when EL3 r = SCTLR_EL3; otherwise Unreachable(); return r; // SCTLR[] - non-assignment form // ============================= SCTLRType SCTLR[] return SCTLR[S1TranslationRegime()];

Library pseudocode for aarch64/functions/sysregisters/SCTLRType

type SCTLRType;

Library pseudocode for aarch64/functions/sysregisters/VBAR

// VBAR[] - non-assignment form
// ============================

bits(64) VBAR[bits(2) regime]
    bits(64) r;
    case regime of
        when EL1  r = VBAR_EL1;
        when EL2  r = VBAR_EL2;
        when EL3  r = VBAR_EL3;
        otherwise Unreachable();
    return r;

// VBAR[] - non-assignment form
// ============================

bits(64) VBAR[]
    return VBAR[S1TranslationRegime()];
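
The ESR, FAR, MAIR, SCTLR and VBAR accessors above all follow the same pattern: index the banked _EL1/_EL2/_EL3 copy for an explicit regime, or default to the current stage 1 translation regime. The following is a minimal C sketch of that pattern, with purely illustrative names (the array and current_s1_regime() stand in for the banked registers and S1TranslationRegime(); this is not part of the pseudocode library):

#include <stdint.h>

enum Regime { REGIME_EL1 = 1, REGIME_EL2 = 2, REGIME_EL3 = 3 };

static uint64_t vbar_el[4];            /* models VBAR_EL1..VBAR_EL3 */

/* Stand-in for S1TranslationRegime(); a real model would derive this
 * from the current Exception level and the HCR_EL2 controls. */
static enum Regime current_s1_regime(void) { return REGIME_EL1; }

uint64_t vbar_read(enum Regime regime)  /* VBAR[regime] */
{
    return vbar_el[regime];
}

uint64_t vbar_read_current(void)        /* VBAR[] */
{
    return vbar_read(current_s1_regime());
}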

Library pseudocode for aarch64/functions/system/AArch64.CheckAdvSIMDFPSystemRegisterTraps

// Checks if an AArch64 MSR, MRS or SYS instruction on a SIMD or floating-point
// register is trapped under the current configuration. Returns a boolean which
// is TRUE if trapping occurs, plus a binary value that specifies the Exception
// level trapped to.
(boolean, bits(2)) AArch64.CheckAdvSIMDFPSystemRegisterTraps(bits(2) op0, bits(3) op1, bits(4) crn, bits(4) crm, bits(3) op2, bit read);

Library pseudocode for aarch64/functions/system/AArch64.CheckSVESystemRegisterTraps

// Checks if an AArch64 MSR/MRS/SYS instruction on a Scalable Vector
// register is trapped under the current configuration
(boolean, bits(2)) AArch64.CheckSVESystemRegisterTraps(bits(2) op0, bits(3) op1, bits(4) crn, bits(4) crm, bits(3) op2, bit read);

Library pseudocode for aarch64/functions/system/AArch64.CheckSystemAccess

// AArch64.CheckSystemAccess()
// ===========================

AArch64.CheckSystemAccess(bits(2) op0, bits(3) op1, bits(4) crn, bits(4) crm, bits(3) op2, bits(5) rt, bit read)
    // Checks if an AArch64 MSR, MRS or SYS instruction is UNALLOCATED or trapped at the current
    // exception level, security state and configuration, based on the opcode's encoding.
    boolean unallocated = FALSE;
    boolean need_secure = FALSE;
    bits(2) min_EL;

    // Check for traps by HCR_EL2.TIDCP
    if PSTATE.EL IN {EL0, EL1} && EL2Enabled() && HCR_EL2.TIDCP == '1' && op0 == 'x1' && crn == '1x11' then
        // At EL0, it is IMPLEMENTATION_DEFINED whether attempts to execute system
        // register access instructions with reserved encodings are trapped to EL2 or UNDEFINED
        rcs_el0_trap = boolean IMPLEMENTATION_DEFINED "Reserved Control Space EL0 Trapped";
        if PSTATE.EL == EL1 || rcs_el0_trap then
            AArch64.SystemRegisterTrap(EL2, op0, op2, op1, crn, rt, crm, read);

    // Check for unallocated encodings
    case op1 of
        when '00x', '010'
            min_EL = EL1;
        when '011'
            min_EL = EL0;
        when '100'
            min_EL = EL2;
        when '101'
            if !HaveVirtHostExt() then UNDEFINED;
            min_EL = EL2;
        when '110'
            min_EL = EL3;
        when '111'
            min_EL = EL1;
            need_secure = TRUE;

    if UInt(PSTATE.EL) < UInt(min_EL) then
        // Check for traps on read/write access to registers named _EL2, _EL02, _EL12
        // from Non-secure EL1 when the HCR_EL2.NV bit is set
        nv_access = HaveNVExt() && min_EL == EL2 && PSTATE.EL == EL1 && EL2Enabled() && HCR_EL2.NV == '1';
        if !nv_access then UNDEFINED;
    elsif need_secure && !IsSecure() then
        UNDEFINED;
    elsif AArch64.CheckUnallocatedSystemAccess(op0, op1, crn, crm, op2, read) then
        UNDEFINED;

    // Check for traps on accesses to SIMD or floating-point registers
    (take_trap, target_el) = AArch64.CheckAdvSIMDFPSystemRegisterTraps(op0, op1, crn, crm, op2, read);
    if take_trap then
        AArch64.AdvSIMDFPAccessTrap(target_el);

    // Check for traps on accesses to Scalable Vector registers
    (take_trap, target_el) = AArch64.CheckSVESystemRegisterTraps(op0, op1, crn, crm, op2, read);
    if take_trap then
        SVEAccessTrap(target_el);

    // Check for traps on access to all other system registers
    (take_trap, target_el) = AArch64.CheckSystemRegisterTraps(op0, op1, crn, crm, op2, read);
    if take_trap then
        AArch64.SystemRegisterTrap(target_el, op0, op2, op1, crn, rt, crm, read);
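
As a cross-check of the op1 decode above, here is a small illustrative C helper (not from the Arm pseudocode; the names and the -1-for-UNDEFINED convention are this sketch's own):

#include <stdbool.h>

/* Map the 3-bit op1 field to the minimum Exception level (0..3 for
 * EL0..EL3) at which the encoding is allocated; -1 models UNDEFINED.
 * have_vhe stands in for HaveVirtHostExt(). */
int min_el_for_op1(unsigned op1, bool have_vhe, bool *need_secure)
{
    *need_secure = false;
    switch (op1 & 7) {
    case 0: case 1: case 2: return 1;              /* '00x', '010' -> EL1 */
    case 3:                 return 0;              /* '011'        -> EL0 */
    case 4:                 return 2;              /* '100'        -> EL2 */
    case 5:                 return have_vhe ? 2 : -1; /* '101'     -> EL2 */
    case 6:                 return 3;              /* '110'        -> EL3 */
    default: *need_secure = true; return 1;        /* '111' -> Secure EL1 */
    }
}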

Library pseudocode for aarch64/functions/system/AArch64.CheckSystemRegisterTraps

// Checks if an AArch64 MSR, MRS or SYS instruction on a system register is trapped
// under the current configuration. Returns a boolean which is TRUE if trapping
// occurs, plus a binary value that specifies the Exception level trapped to.
(boolean, bits(2)) AArch64.CheckSystemRegisterTraps(bits(2) op0, bits(3) op1, bits(4) crn, bits(4) crm, bits(3) op2, bit read);

Library pseudocode for aarch64/functions/system/AArch64.CheckUnallocatedSystemAccess

// Checks if an AArch64 MSR, MRS or SYS instruction is unallocated under the current
// configuration.
boolean AArch64.CheckUnallocatedSystemAccess(bits(2) op0, bits(3) op1, bits(4) crn, bits(4) crm, bits(3) op2, bit read);

Library pseudocode for aarch64/functions/system/AArch64.ExecutingATS1xPInstr

// AArch64.ExecutingATS1xPInstr()
// ==============================
// Return TRUE if current instruction is AT S1E1R/WP

boolean AArch64.ExecutingATS1xPInstr()
    if !HavePANExt() then return FALSE;

    instr = ThisInstr();
    if instr<22+:10> == '1101010100' then
        op1 = instr<16+:3>;
        CRn = instr<12+:4>;
        CRm = instr<8+:4>;
        op2 = instr<5+:3>;
        return op1 == '000' && CRn == '0111' && CRm == '1001' && op2 IN {'000','001'};
    else
        return FALSE;

Library pseudocode for aarch64/functions/system/AArch64.ExecutingBROrBLROrRetInstr

// AArch64.ExecutingBROrBLROrRetInstr()
// ====================================
// Returns TRUE if current instruction is a BR, BLR, RET, B[L]RA[B][Z], or RETA[B].

boolean AArch64.ExecutingBROrBLROrRetInstr()
    if !HaveBTIExt() then return FALSE;

    instr = ThisInstr();
    if instr<31:25> == '1101011' && instr<20:16> == '11111' then
        opc = instr<24:21>;
        return opc != '0101';
    else
        return FALSE;

Library pseudocode for aarch64/functions/system/AArch64.ExecutingBTIInstr

// AArch64.ExecutingBTIInstr()
// ===========================
// Returns TRUE if current instruction is a BTI.

boolean AArch64.ExecutingBTIInstr()
    if !HaveBTIExt() then return FALSE;

    instr = ThisInstr();
    if instr<31:22> == '1101010100' && instr<21:12> == '0000110010' && instr<4:0> == '11111' then
        CRm = instr<11:8>;
        op2 = instr<7:5>;
        return (CRm == '0100' && op2<0> == '0');
    else
        return FALSE;

Library pseudocode for aarch64/functions/system/AArch64.SysInstr

// Execute a system instruction with write (source operand).
AArch64.SysInstr(integer op0, integer op1, integer crn, integer crm, integer op2, bits(64) val);

Library pseudocode for aarch64/functions/system/AArch64.SysInstrWithResult

// Execute a system instruction with read (result operand).
// Returns the result of the instruction.
bits(64) AArch64.SysInstrWithResult(integer op0, integer op1, integer crn, integer crm, integer op2);

Library pseudocode for aarch64/functions/system/AArch64.SysRegRead

// Read from a system register and return the contents of the register.
bits(64) AArch64.SysRegRead(integer op0, integer op1, integer crn, integer crm, integer op2);

Library pseudocode for aarch64/functions/system/AArch64.SysRegWrite

// Write to a system register.
AArch64.SysRegWrite(integer op0, integer op1, integer crn, integer crm, integer op2, bits(64) val);

Library pseudocode for aarch64/functions/system/BTypeCompatible

boolean BTypeCompatible;

Library pseudocode for aarch64/functions/system/BTypeCompatible_BTI

// BTypeCompatible_BTI
// ===================
// This function determines whether a given hint encoding is compatible with the current value of
// PSTATE.BTYPE. A value of TRUE here indicates a valid Branch Target Identification instruction.

boolean BTypeCompatible_BTI(bits(2) hintcode)
    case hintcode of
        when '00'  return FALSE;
        when '01'  return PSTATE.BTYPE != '11';
        when '10'  return PSTATE.BTYPE != '10';
        when '11'  return TRUE;
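
For reference, the same decision table in C (illustrative only; the '01'/'10'/'11' hint codes correspond to the BTI c, BTI j and BTI jc variants):

#include <stdbool.h>

/* Illustrative C version of BTypeCompatible_BTI: hintcode and btype
 * are the 2-bit fields from the instruction and PSTATE.BTYPE. */
bool btype_compatible_bti(unsigned hintcode, unsigned btype)
{
    switch (hintcode & 3) {
    case 0:  return false;       /* '00': BTI with no targets */
    case 1:  return btype != 3;  /* '01': BTI c               */
    case 2:  return btype != 2;  /* '10': BTI j               */
    default: return true;        /* '11': BTI jc              */
    }
}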

Library pseudocode for aarch64/functions/system/BTypeCompatible_PACIXSP

// BTypeCompatible_PACIXSP()
// =========================
// Returns TRUE if a PACIASP or PACIBSP instruction is implicitly compatible with PSTATE.BTYPE,
// FALSE otherwise.

boolean BTypeCompatible_PACIXSP()
    if PSTATE.BTYPE IN {'01', '10'} then
        return TRUE;
    elsif PSTATE.BTYPE == '11' then
        index = if PSTATE.EL == EL0 then 35 else 36;
        return SCTLR[]<index> == '0';
    else
        return FALSE;

Library pseudocode for aarch64/functions/system/BTypeNext

bits(2) BTypeNext;

Library pseudocode for aarch64/functions/system/InGuardedPage

boolean InGuardedPage;

Library pseudocode for aarch64/instrs/branch/eret/AArch64.ExceptionReturn

// AArch64.ExceptionReturn()
// =========================

AArch64.ExceptionReturn(bits(64) new_pc, bits(32) spsr)
    SynchronizeContext();

    sync_errors = HaveIESB() && SCTLR[].IESB == '1';
    if HaveDoubleFaultExt() then
        sync_errors = sync_errors || (SCR_EL3.EA == '1' && SCR_EL3.NMEA == '1' && PSTATE.EL == EL3);
    if sync_errors then
        SynchronizeErrors();
        iesb_req = TRUE;
        TakeUnmaskedPhysicalSErrorInterrupts(iesb_req);

    // Attempts to change to an illegal state will invoke the Illegal Execution state mechanism
    SetPSTATEFromPSR(spsr);
    ClearExclusiveLocal(ProcessorID());
    SendEventLocal();

    if PSTATE.IL == '1' && spsr<4> == '1' && spsr<20> == '0' then
        // If the exception return is illegal, PC[63:32,1:0] are UNKNOWN
        new_pc<63:32> = bits(32) UNKNOWN;
        new_pc<1:0> = bits(2) UNKNOWN;
    elsif UsingAArch32() then
        // Return to AArch32
        // ELR_ELx[1:0] or ELR_ELx[0] are treated as being 0, depending on the
        // target instruction set state
        if PSTATE.T == '1' then
            new_pc<0> = '0';        // T32
        else
            new_pc<1:0> = '00';     // A32
    else
        // Return to AArch64
        // ELR_ELx[63:56] might include a tag
        new_pc = AArch64.BranchAddr(new_pc);

    if UsingAArch32() then
        // 32 most significant bits are ignored.
        BranchTo(new_pc<31:0>, BranchType_ERET);
    else
        BranchToAddr(new_pc, BranchType_ERET);

Library pseudocode for aarch64/instrs/countop/CountOp

enumeration CountOp {CountOp_CLZ, CountOp_CLS, CountOp_CNT};

Library pseudocode for aarch64/instrs/extendreg/DecodeRegExtend

// DecodeRegExtend()
// =================
// Decode a register extension option

ExtendType DecodeRegExtend(bits(3) op)
    case op of
        when '000' return ExtendType_UXTB;
        when '001' return ExtendType_UXTH;
        when '010' return ExtendType_UXTW;
        when '011' return ExtendType_UXTX;
        when '100' return ExtendType_SXTB;
        when '101' return ExtendType_SXTH;
        when '110' return ExtendType_SXTW;
        when '111' return ExtendType_SXTX;

Library pseudocode for aarch64/instrs/extendreg/ExtendReg

// ExtendReg()
// ===========
// Perform a register extension and shift

bits(N) ExtendReg(integer reg, ExtendType type, integer shift)
    assert shift >= 0 && shift <= 4;
    bits(N) val = X[reg];
    boolean unsigned;
    integer len;

    case type of
        when ExtendType_SXTB unsigned = FALSE; len = 8;
        when ExtendType_SXTH unsigned = FALSE; len = 16;
        when ExtendType_SXTW unsigned = FALSE; len = 32;
        when ExtendType_SXTX unsigned = FALSE; len = 64;
        when ExtendType_UXTB unsigned = TRUE;  len = 8;
        when ExtendType_UXTH unsigned = TRUE;  len = 16;
        when ExtendType_UXTW unsigned = TRUE;  len = 32;
        when ExtendType_UXTX unsigned = TRUE;  len = 64;

    // Note the extended width of the intermediate value and
    // that sign extension occurs from bit <len+shift-1>, not
    // from bit <len-1>. This is equivalent to the instruction
    //   [SU]BFIZ Rtmp, Rreg, #shift, #len
    // It may also be seen as a sign/zero extend followed by a shift:
    //   LSL(Extend(val<len-1:0>, N, unsigned), shift);
    len = Min(len, N - shift);
    return Extend(val<len-1:0> : Zeros(shift), N, unsigned);
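
The sign-extension subtlety noted in the comment above (extension from bit len+shift-1) can be seen in this illustrative C specialisation for N = 64 (not part of the pseudocode library):

#include <stdint.h>

/* Illustrative ExtendReg for N = 64: take the low `len` bits of val,
 * sign- or zero-extend, then shift left by 0..4. Equivalent to
 * Extend(val<len-1:0> : Zeros(shift), 64, unsigned). */
uint64_t extend_reg64(uint64_t val, int len, int is_signed, int shift)
{
    if (len > 64 - shift)
        len = 64 - shift;                  /* len = Min(len, N - shift) */
    uint64_t field = (len == 64) ? val : (val & ((1ULL << len) - 1));
    if (is_signed && len < 64 && ((field >> (len - 1)) & 1))
        field |= ~0ULL << len;             /* sign-extend from bit len-1 */
    return field << shift;                 /* sign now extends from bit len+shift-1 */
}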

Library pseudocode for aarch64/instrs/extendreg/ExtendType

enumeration ExtendType {ExtendType_SXTB, ExtendType_SXTH, ExtendType_SXTW, ExtendType_SXTX,
                        ExtendType_UXTB, ExtendType_UXTH, ExtendType_UXTW, ExtendType_UXTX};

Library pseudocode for aarch64/instrs/float/arithmetic/max-min/fpmaxminop/FPMaxMinOp

enumeration FPMaxMinOp {FPMaxMinOp_MAX, FPMaxMinOp_MIN, FPMaxMinOp_MAXNUM, FPMaxMinOp_MINNUM};

Library pseudocode for aarch64/instrs/float/arithmetic/unary/fpunaryop/FPUnaryOp

enumeration FPUnaryOp {FPUnaryOp_ABS, FPUnaryOp_MOV, FPUnaryOp_NEG, FPUnaryOp_SQRT};

Library pseudocode for aarch64/instrs/float/convert/fpconvop/FPConvOp

enumeration FPConvOp {FPConvOp_CVT_FtoI, FPConvOp_CVT_ItoF, FPConvOp_MOV_FtoI, FPConvOp_MOV_ItoF,
                      FPConvOp_CVT_FtoI_JS};

Library pseudocode for aarch64/instrs/integer/bitfield/bfxpreferred/BFXPreferred

// BFXPreferred()
// ==============
//
// Return TRUE if UBFX or SBFX is the preferred disassembly of a
// UBFM or SBFM bitfield instruction. Must exclude more specific
// aliases UBFIZ, SBFIZ, UXT[BH], SXT[BHW], LSL, LSR and ASR.

boolean BFXPreferred(bit sf, bit uns, bits(6) imms, bits(6) immr)
    integer S = UInt(imms);
    integer R = UInt(immr);

    // must not match UBFIZ/SBFIZ alias
    if UInt(imms) < UInt(immr) then
        return FALSE;

    // must not match LSR/ASR/LSL alias (imms == 31 or 63)
    if imms == sf:'11111' then
        return FALSE;

    // must not match UXTx/SXTx alias
    if immr == '000000' then
        // must not match 32-bit UXT[BH] or SXT[BH]
        if sf == '0' && imms IN {'000111', '001111'} then
            return FALSE;
        // must not match 64-bit SXT[BHW]
        if sf:uns == '10' && imms IN {'000111', '001111', '011111'} then
            return FALSE;

    // must be UBFX/SBFX alias
    return TRUE;
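
The same alias checks, restated as an illustrative C predicate (not part of the pseudocode library; field widths as in the pseudocode):

#include <stdbool.h>

/* Illustrative BFXPreferred: sf/uns are single bits, imms/immr are the
 * 6-bit fields. The (sf << 5) | 0x1F term reproduces sf:'11111'. */
bool bfx_preferred(unsigned sf, unsigned uns, unsigned imms, unsigned immr)
{
    if (imms < immr) return false;                  /* UBFIZ/SBFIZ alias */
    if (imms == ((sf << 5) | 0x1F)) return false;   /* LSL/LSR/ASR alias */
    if (immr == 0) {
        if (sf == 0 && (imms == 0x07 || imms == 0x0F))
            return false;                           /* 32-bit UXT[BH]/SXT[BH] */
        if (sf == 1 && uns == 0 &&
            (imms == 0x07 || imms == 0x0F || imms == 0x1F))
            return false;                           /* 64-bit SXT[BHW] */
    }
    return true;                                    /* UBFX/SBFX preferred */
}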

Library pseudocode for aarch64/instrs/integer/bitmasks/DecodeBitMasks

// DecodeBitMasks()
// ================
// Decode AArch64 bitfield and logical immediate masks which use a similar encoding structure

(bits(M), bits(M)) DecodeBitMasks(bit immN, bits(6) imms, bits(6) immr, boolean immediate)
    bits(64) tmask, wmask;
    bits(6) tmask_and, wmask_and;
    bits(6) tmask_or, wmask_or;
    bits(6) levels;

    // Compute log2 of element size
    // 2^len must be in range [2, M]
    len = HighestSetBit(immN:NOT(imms));
    if len < 1 then UNDEFINED;
    assert M >= (1 << len);

    // Determine S, R and S - R parameters
    levels = ZeroExtend(Ones(len), 6);

    // For logical immediates an all-ones value of S is reserved
    // since it would generate a useless all-ones result (many times)
    if immediate && (imms AND levels) == levels then
        UNDEFINED;

    S = UInt(imms AND levels);
    R = UInt(immr AND levels);
    diff = S - R;    // 6-bit subtract with borrow

    // From a software perspective, the remaining code is equivalent to:
    //   esize = 1 << len;
    //   d = UInt(diff<len-1:0>);
    //   welem = ZeroExtend(Ones(S + 1), esize);
    //   telem = ZeroExtend(Ones(d + 1), esize);
    //   wmask = Replicate(ROR(welem, R));
    //   tmask = Replicate(telem);
    //   return (wmask, tmask);

    // Compute "top mask"
    tmask_and = diff<5:0> OR NOT(levels);
    tmask_or = diff<5:0> AND levels;

    tmask = Ones(64);
    tmask = ((tmask AND Replicate(Replicate(tmask_and<0>, 1) : Ones(1), 32))
             OR Replicate(Zeros(1) : Replicate(tmask_or<0>, 1), 32));
    // optimization of first step:
    // tmask = Replicate(tmask_and<0> : '1', 32);
    tmask = ((tmask AND Replicate(Replicate(tmask_and<1>, 2) : Ones(2), 16))
             OR Replicate(Zeros(2) : Replicate(tmask_or<1>, 2), 16));
    tmask = ((tmask AND Replicate(Replicate(tmask_and<2>, 4) : Ones(4), 8))
             OR Replicate(Zeros(4) : Replicate(tmask_or<2>, 4), 8));
    tmask = ((tmask AND Replicate(Replicate(tmask_and<3>, 8) : Ones(8), 4))
             OR Replicate(Zeros(8) : Replicate(tmask_or<3>, 8), 4));
    tmask = ((tmask AND Replicate(Replicate(tmask_and<4>, 16) : Ones(16), 2))
             OR Replicate(Zeros(16) : Replicate(tmask_or<4>, 16), 2));
    tmask = ((tmask AND Replicate(Replicate(tmask_and<5>, 32) : Ones(32), 1))
             OR Replicate(Zeros(32) : Replicate(tmask_or<5>, 32), 1));

    // Compute "wraparound mask"
    wmask_and = immr OR NOT(levels);
    wmask_or = immr AND levels;

    wmask = Zeros(64);
    wmask = ((wmask AND Replicate(Ones(1) : Replicate(wmask_and<0>, 1), 32))
             OR Replicate(Replicate(wmask_or<0>, 1) : Zeros(1), 32));
    // optimization of first step:
    // wmask = Replicate(wmask_or<0> : '0', 32);
    wmask = ((wmask AND Replicate(Ones(2) : Replicate(wmask_and<1>, 2), 16))
             OR Replicate(Replicate(wmask_or<1>, 2) : Zeros(2), 16));
    wmask = ((wmask AND Replicate(Ones(4) : Replicate(wmask_and<2>, 4), 8))
             OR Replicate(Replicate(wmask_or<2>, 4) : Zeros(4), 8));
    wmask = ((wmask AND Replicate(Ones(8) : Replicate(wmask_and<3>, 8), 4))
             OR Replicate(Replicate(wmask_or<3>, 8) : Zeros(8), 4));
    wmask = ((wmask AND Replicate(Ones(16) : Replicate(wmask_and<4>, 16), 2))
             OR Replicate(Replicate(wmask_or<4>, 16) : Zeros(16), 2));
    wmask = ((wmask AND Replicate(Ones(32) : Replicate(wmask_and<5>, 32), 1))
             OR Replicate(Replicate(wmask_or<5>, 32) : Zeros(32), 1));

    if diff<6> != '0' then    // borrow from S - R
        wmask = wmask AND tmask;
    else
        wmask = wmask OR tmask;

    return (wmask<M-1:0>, tmask<M-1:0>);
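
For readers cross-checking the encoding outside of ASL, the following is a minimal C sketch of the "software perspective" form described in the comment above, specialised to M = 64. It is an illustration, not part of the pseudocode library:

#include <stdint.h>
#include <stdbool.h>

/* Illustrative DecodeBitMasks for M = 64. Returns false for the
 * encodings the pseudocode treats as UNDEFINED. */
bool decode_bit_masks64(unsigned immN, unsigned imms, unsigned immr,
                        bool immediate, uint64_t *wmask, uint64_t *tmask)
{
    /* len = HighestSetBit(immN:NOT(imms)) */
    unsigned combined = ((immN & 1) << 6) | (~imms & 0x3F);
    int len = -1;
    for (int i = 6; i >= 0; i--)
        if (combined & (1u << i)) { len = i; break; }
    if (len < 1) return false;                    /* UNDEFINED */

    unsigned levels = (1u << len) - 1;            /* ZeroExtend(Ones(len), 6) */
    if (immediate && (imms & levels) == levels)
        return false;                             /* reserved all-ones S */

    unsigned S = imms & levels;
    unsigned R = immr & levels;
    unsigned esize = 1u << len;                   /* element size, 2..64 */
    unsigned d = (S - R) & levels;                /* diff<len-1:0> */

    /* welem = Ones(S+1) rotated right by R within the element,
     * telem = Ones(d+1) */
    uint64_t emask = (esize == 64) ? ~0ULL : ((1ULL << esize) - 1);
    uint64_t welem = (S + 1 == 64) ? ~0ULL : ((1ULL << (S + 1)) - 1);
    uint64_t telem = (d + 1 == 64) ? ~0ULL : ((1ULL << (d + 1)) - 1);
    if (R != 0)
        welem = ((welem >> R) | (welem << (esize - R))) & emask;

    /* wmask = Replicate(ROR(welem, R)); tmask = Replicate(telem) */
    uint64_t w = 0, t = 0;
    for (unsigned i = 0; i < 64; i += esize) {
        w |= welem << i;
        t |= telem << i;
    }
    *wmask = w;
    *tmask = t;
    return true;
}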

Library pseudocode for aarch64/instrs/integer/ins-ext/insert/movewide/movewideop/MoveWideOp

enumeration MoveWideOp {MoveWideOp_N, MoveWideOp_Z, MoveWideOp_K};

Library pseudocode for aarch64/instrs/integer/logical/movwpreferred/MoveWidePreferred

// MoveWidePreferred()
// ===================
//
// Return TRUE if a bitmask immediate encoding would generate an immediate
// value that could also be represented by a single MOVZ or MOVN instruction.
// Used as a condition for the preferred MOV<-ORR alias.

boolean MoveWidePreferred(bit sf, bit immN, bits(6) imms, bits(6) immr)
    integer S = UInt(imms);
    integer R = UInt(immr);
    integer width = if sf == '1' then 64 else 32;

    // element size must equal total immediate size
    if sf == '1' && immN:imms != '1xxxxxx' then
        return FALSE;
    if sf == '0' && immN:imms != '00xxxxx' then
        return FALSE;

    // for MOVZ must contain no more than 16 ones
    if S < 16 then
        // ones must not span halfword boundary when rotated
        return (-R MOD 16) <= (15 - S);

    // for MOVN must contain no more than 16 zeros
    if S >= width - 15 then
        // zeros must not span halfword boundary when rotated
        return (R MOD 16) <= (S - (width - 15));

    return FALSE;
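
An illustrative C version of the two halfword-boundary checks (not part of the pseudocode library; the (16 - (R & 15)) & 15 term computes -R MOD 16 without signed bit operations):

#include <stdbool.h>

/* Illustrative MoveWidePreferred: would this bitmask immediate be
 * better written as a single MOVZ or MOVN? sf/immN are single bits,
 * imms/immr are the 6-bit fields. */
bool move_wide_preferred(unsigned sf, unsigned immN,
                         unsigned imms, unsigned immr)
{
    int S = (int)imms, R = (int)immr;
    int width = sf ? 64 : 32;

    /* element size must equal total immediate size */
    if (sf && immN != 1) return false;
    if (!sf && !(immN == 0 && (imms & 0x20) == 0)) return false;

    if (S < 16)                            /* MOVZ: at most 16 ones  */
        return ((16 - (R & 15)) & 15) <= (15 - S);
    if (S >= width - 15)                   /* MOVN: at most 16 zeros */
        return (R & 15) <= (S - (width - 15));
    return false;
}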

Library pseudocode for aarch64/instrs/integer/shiftreg/DecodeShift

// DecodeShift()
// =============
// Decode shift encodings

ShiftType DecodeShift(bits(2) op)
    case op of
        when '00' return ShiftType_LSL;
        when '01' return ShiftType_LSR;
        when '10' return ShiftType_ASR;
        when '11' return ShiftType_ROR;

Library pseudocode for aarch64/instrs/integer/shiftreg/ShiftReg

// ShiftReg()
// ==========
// Perform shift of a register operand

bits(N) ShiftReg(integer reg, ShiftType type, integer amount)
    bits(N) result = X[reg];
    case type of
        when ShiftType_LSL result = LSL(result, amount);
        when ShiftType_LSR result = LSR(result, amount);
        when ShiftType_ASR result = ASR(result, amount);
        when ShiftType_ROR result = ROR(result, amount);
    return result;
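
A direct C analogue for N = 64 (illustrative only; the amount is masked to 0..63 here so that every C shift stays well defined):

#include <stdint.h>

typedef enum { SHIFT_LSL, SHIFT_LSR, SHIFT_ASR, SHIFT_ROR } shift_t;

/* Illustrative ShiftReg for a 64-bit operand. */
uint64_t shift_reg64(uint64_t val, shift_t type, unsigned amount)
{
    amount &= 63;
    switch (type) {
    case SHIFT_LSL: return val << amount;
    case SHIFT_LSR: return val >> amount;
    case SHIFT_ASR: return (uint64_t)((int64_t)val >> amount);
    default:        /* SHIFT_ROR */
        return amount ? (val >> amount) | (val << (64 - amount)) : val;
    }
}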

Library pseudocode for aarch64/instrs/integer/shiftreg/ShiftType

enumeration ShiftType {ShiftType_LSL, ShiftType_LSR, ShiftType_ASR, ShiftType_ROR};

Library pseudocode for aarch64/instrs/logicalop/LogicalOp

enumeration LogicalOp {LogicalOp_AND, LogicalOp_EOR, LogicalOp_ORR};

Library pseudocode for aarch64/instrs/memory/memop/MemAtomicOp

enumeration MemAtomicOp {MemAtomicOp_ADD, MemAtomicOp_BIC, MemAtomicOp_EOR, MemAtomicOp_ORR,
                         MemAtomicOp_SMAX, MemAtomicOp_SMIN, MemAtomicOp_UMAX, MemAtomicOp_UMIN,
                         MemAtomicOp_SWP};

Library pseudocode for aarch64/instrs/memory/memop/MemOp

enumeration MemOp {MemOp_LOAD, MemOp_STORE, MemOp_PREFETCH};

Library pseudocode for aarch64/instrs/memory/prefetch/Prefetch

// Prefetch()
// ==========
// Decode and execute the prefetch hint on ADDRESS specified by PRFOP

Prefetch(bits(64) address, bits(5) prfop)
    PrefetchHint hint;
    integer target;
    boolean stream;

    case prfop<4:3> of
        when '00' hint = Prefetch_READ;   // PLD: prefetch for load
        when '01' hint = Prefetch_EXEC;   // PLI: preload instructions
        when '10' hint = Prefetch_WRITE;  // PST: prepare for store
        when '11' return;                 // unallocated hint
    target = UInt(prfop<2:1>);            // target cache level
    stream = (prfop<0> != '0');           // streaming (non-temporal)
    Hint_Prefetch(address, hint, target, stream);
    return;
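
The PRFM operand splits cleanly into three fields; an illustrative C decoder (not part of the pseudocode library) makes the layout explicit:

#include <stdbool.h>

typedef enum { HINT_READ, HINT_EXEC, HINT_WRITE } prefetch_hint_t;

/* Illustrative decode of the 5-bit prfop field: prfop<4:3> selects the
 * hint, prfop<2:1> the target cache level (0 = L1), prfop<0> streaming.
 * Returns false for the unallocated hint encoding. */
bool decode_prfop(unsigned prfop, prefetch_hint_t *hint,
                  int *target, bool *stream)
{
    switch ((prfop >> 3) & 3) {
    case 0:  *hint = HINT_READ;  break;   /* PLD: prefetch for load    */
    case 1:  *hint = HINT_EXEC;  break;   /* PLI: preload instructions */
    case 2:  *hint = HINT_WRITE; break;   /* PST: prepare for store    */
    default: return false;                /* unallocated hint          */
    }
    *target = (prfop >> 1) & 3;           /* target cache level        */
    *stream = (prfop & 1) != 0;           /* streaming (non-temporal)  */
    return true;
}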

Library pseudocode for aarch64/instrs/system/barriers/barrierop/MemBarrierOp

enumeration MemBarrierOp {
    MemBarrierOp_DSB,     // Data Synchronization Barrier
    MemBarrierOp_DMB,     // Data Memory Barrier
    MemBarrierOp_ISB,     // Instruction Synchronization Barrier
    MemBarrierOp_SSBB,    // Speculative Synchronization Barrier to VA
    MemBarrierOp_PSSBB,   // Speculative Synchronization Barrier to PA
    MemBarrierOp_SB       // Speculation Barrier
};

Library pseudocode for aarch64/instrs/system/hints/syshintop/SystemHintOp

enumeration SystemHintOp {SystemHintOp_NOP, SystemHintOp_YIELD, SystemHintOp_WFE, SystemHintOp_WFI,
                          SystemHintOp_SEV, SystemHintOp_SEVL, SystemHintOp_ESB, SystemHintOp_PSB,
                          SystemHintOp_TSB, SystemHintOp_BTI, SystemHintOp_CSDB};

Library pseudocode for aarch64/instrs/system/register/cpsr/pstatefield/PSTATEField

enumeration PSTATEField {
    PSTATEField_DAIFSet,
    PSTATEField_DAIFClr,
    PSTATEField_PAN,     // ARMv8.1
    PSTATEField_UAO,     // ARMv8.2
    PSTATEField_DIT,     // ARMv8.4
    PSTATEField_SP,
    PSTATEField_SSBS
};

Library pseudocode for aarch64/instrs/system/sysops/sysop/SysOp

// SysOp()
// =======

SystemOp SysOp(bits(3) op1, bits(4) CRn, bits(4) CRm, bits(3) op2)
    case op1:CRn:CRm:op2 of
        when '000 0111 1000 000' return Sys_AT;   // S1E1R
        when '100 0111 1000 000' return Sys_AT;   // S1E2R
        when '110 0111 1000 000' return Sys_AT;   // S1E3R
        when '000 0111 1000 001' return Sys_AT;   // S1E1W
        when '100 0111 1000 001' return Sys_AT;   // S1E2W
        when '110 0111 1000 001' return Sys_AT;   // S1E3W
        when '000 0111 1000 010' return Sys_AT;   // S1E0R
        when '000 0111 1000 011' return Sys_AT;   // S1E0W
        when '100 0111 1000 100' return Sys_AT;   // S12E1R
        when '100 0111 1000 101' return Sys_AT;   // S12E1W
        when '100 0111 1000 110' return Sys_AT;   // S12E0R
        when '100 0111 1000 111' return Sys_AT;   // S12E0W
        when '011 0111 0100 001' return Sys_DC;   // ZVA
        when '000 0111 0110 001' return Sys_DC;   // IVAC
        when '000 0111 0110 010' return Sys_DC;   // ISW
        when '011 0111 1010 001' return Sys_DC;   // CVAC
        when '000 0111 1010 010' return Sys_DC;   // CSW
        when '011 0111 1011 001' return Sys_DC;   // CVAU
        when '011 0111 1110 001' return Sys_DC;   // CIVAC
        when '000 0111 1110 010' return Sys_DC;   // CISW
        when '011 0111 1101 001' return Sys_DC;   // CVADP
        when '000 0111 0001 000' return Sys_IC;   // IALLUIS
        when '000 0111 0101 000' return Sys_IC;   // IALLU
        when '011 0111 0101 001' return Sys_IC;   // IVAU
        when '100 1000 0000 001' return Sys_TLBI; // IPAS2E1IS
        when '100 1000 0000 101' return Sys_TLBI; // IPAS2LE1IS
        when '000 1000 0011 000' return Sys_TLBI; // VMALLE1IS
        when '100 1000 0011 000' return Sys_TLBI; // ALLE2IS
        when '110 1000 0011 000' return Sys_TLBI; // ALLE3IS
        when '000 1000 0011 001' return Sys_TLBI; // VAE1IS
        when '100 1000 0011 001' return Sys_TLBI; // VAE2IS
        when '110 1000 0011 001' return Sys_TLBI; // VAE3IS
        when '000 1000 0011 010' return Sys_TLBI; // ASIDE1IS
        when '000 1000 0011 011' return Sys_TLBI; // VAAE1IS
        when '100 1000 0011 100' return Sys_TLBI; // ALLE1IS
        when '000 1000 0011 101' return Sys_TLBI; // VALE1IS
        when '100 1000 0011 101' return Sys_TLBI; // VALE2IS
        when '110 1000 0011 101' return Sys_TLBI; // VALE3IS
        when '100 1000 0011 110' return Sys_TLBI; // VMALLS12E1IS
        when '000 1000 0011 111' return Sys_TLBI; // VAALE1IS
        when '100 1000 0100 001' return Sys_TLBI; // IPAS2E1
        when '100 1000 0100 101' return Sys_TLBI; // IPAS2LE1
        when '000 1000 0111 000' return Sys_TLBI; // VMALLE1
        when '100 1000 0111 000' return Sys_TLBI; // ALLE2
        when '110 1000 0111 000' return Sys_TLBI; // ALLE3
        when '000 1000 0111 001' return Sys_TLBI; // VAE1
        when '100 1000 0111 001' return Sys_TLBI; // VAE2
        when '110 1000 0111 001' return Sys_TLBI; // VAE3
        when '000 1000 0111 010' return Sys_TLBI; // ASIDE1
        when '000 1000 0111 011' return Sys_TLBI; // VAAE1
        when '100 1000 0111 100' return Sys_TLBI; // ALLE1
        when '000 1000 0111 101' return Sys_TLBI; // VALE1
        when '100 1000 0111 101' return Sys_TLBI; // VALE2
        when '110 1000 0111 101' return Sys_TLBI; // VALE3
        when '100 1000 0111 110' return Sys_TLBI; // VMALLS12E1
        when '000 1000 0111 111' return Sys_TLBI; // VAALE1
    return Sys_SYS;

Library pseudocode for aarch64/instrs/system/sysops/sysop/SystemOp

enumeration SystemOp {Sys_AT, Sys_DC, Sys_IC, Sys_TLBI, Sys_SYS};

Library pseudocode for aarch64/instrs/vector/arithmetic/binary/uniform/logical/bsl-eor/vbitop/VBitOp

enumeration VBitOp {VBitOp_VBIF, VBitOp_VBIT, VBitOp_VBSL, VBitOp_VEOR};

Library pseudocode for aarch64/instrs/vector/arithmetic/unary/cmp/compareop/CompareOp

enumeration CompareOp {CompareOp_GT, CompareOp_GE, CompareOp_EQ, CompareOp_LE, CompareOp_LT};

Library pseudocode for aarch64/instrs/vector/logical/immediateop/ImmediateOp

enumeration ImmediateOp {ImmediateOp_MOVI, ImmediateOp_MVNI, ImmediateOp_ORR, ImmediateOp_BIC};

Library pseudocode for aarch64/instrs/vector/reduce/reduceop/Reduce

// Reduce()
// ========

bits(esize) Reduce(ReduceOp op, bits(N) input, integer esize)
    integer half;
    bits(esize) hi;
    bits(esize) lo;
    bits(esize) result;

    if N == esize then
        return input<esize-1:0>;

    half = N DIV 2;
    hi = Reduce(op, input<N-1:half>, esize);
    lo = Reduce(op, input<half-1:0>, esize);

    case op of
        when ReduceOp_FMINNUM
            result = FPMinNum(lo, hi, FPCR);
        when ReduceOp_FMAXNUM
            result = FPMaxNum(lo, hi, FPCR);
        when ReduceOp_FMIN
            result = FPMin(lo, hi, FPCR);
        when ReduceOp_FMAX
            result = FPMax(lo, hi, FPCR);
        when ReduceOp_FADD
            result = FPAdd(lo, hi, FPCR);
        when ReduceOp_ADD
            result = lo + hi;

    return result;
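
The recursive halving is easiest to see in the integer case; an illustrative C version of the ReduceOp_ADD path follows (n must be a power of two, matching the pseudocode's N/esize assumption; this is not part of the pseudocode library):

#include <stdint.h>
#include <stddef.h>

/* Illustrative Reduce for ReduceOp_ADD: split the vector into low and
 * high halves and combine recursively until one element remains. */
uint64_t reduce_add(const uint64_t *elem, size_t n)
{
    if (n == 1)
        return elem[0];
    /* lo = first half, hi = second half, result = lo + hi */
    return reduce_add(elem, n / 2) + reduce_add(elem + n / 2, n / 2);
}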

Library pseudocode for aarch64/instrs/vector/reduce/reduceop/ReduceOp

enumeration ReduceOp {ReduceOp_FMINNUM, ReduceOp_FMAXNUM, ReduceOp_FMIN, ReduceOp_FMAX,
                      ReduceOp_FADD, ReduceOp_ADD};

Library pseudocode for aarch64/translation/attrs/AArch64.InstructionDevice

// AArch64.InstructionDevice()
// ===========================
// Instruction fetches from memory marked as Device but not execute-never might generate a
// Permission Fault but are otherwise treated as if from Normal Non-cacheable memory.

AddressDescriptor AArch64.InstructionDevice(AddressDescriptor addrdesc, bits(64) vaddress,
                                            bits(52) ipaddress, integer level,
                                            AccType acctype, boolean iswrite, boolean secondstage,
                                            boolean s2fs1walk)
    c = ConstrainUnpredictable(Unpredictable_INSTRDEVICE);
    assert c IN {Constraint_NONE, Constraint_FAULT};

    if c == Constraint_FAULT then
        addrdesc.fault = AArch64.PermissionFault(ipaddress, bit UNKNOWN, level, acctype, iswrite,
                                                 secondstage, s2fs1walk);
    else
        addrdesc.memattrs.type = MemType_Normal;
        addrdesc.memattrs.inner.attrs = MemAttr_NC;
        addrdesc.memattrs.inner.hints = MemHint_No;
        addrdesc.memattrs.outer = addrdesc.memattrs.inner;
        addrdesc.memattrs.tagged = FALSE;
        addrdesc.memattrs = MemAttrDefaults(addrdesc.memattrs);

    return addrdesc;

Library pseudocode for aarch64/translation/attrs/AArch64.S1AttrDecode

// AArch64.S1AttrDecode() // ====================== // Converts the Stage 1 attribute fields, using the MAIR, to orthogonal // attributes and hints. MemoryAttributes AArch64.S1AttrDecode(bits(2) SH, bits(3) attr, AccType acctype) MemoryAttributes memattrs; mair = MAIR[]; index = 8 * UInt(attr); attrfield = mair<index+7:index>; memattrs.tagged = FALSE; if ((attrfield<7:4> != '0000' && attrfield<7:4> != '1111' && attrfield<3:0> == '0000') || (attrfield<7:4> == '0000' && attrfield<3:0> != 'xx00')) then // Reserved, maps to an allocated value (-, attrfield) = ConstrainUnpredictableBits(Unpredictable_RESMAIR); if !HaveMTEExt() && attrfield<7:4> == '1111' && attrfield<3:0> == '0000' then // Reserved, maps to an allocated value (-, attrfield) = ConstrainUnpredictableBits(Unpredictable_RESMAIR); if attrfield<7:4> == '0000' then // Device memattrs.type = MemType_Device; case attrfield<3:0> of when '0000' memattrs.device = DeviceType_nGnRnE; when '0100' memattrs.device = DeviceType_nGnRE; when '1000' memattrs.device = DeviceType_nGRE; when '1100' memattrs.device = DeviceType_GRE; otherwise Unreachable(); // Reserved, handled above elsif attrfield<3:0> != '0000' then // Normal memattrs.type = MemType_Normal; memattrs.outer = LongConvertAttrsHints(attrfield<7:4>, acctype); memattrs.inner = LongConvertAttrsHints(attrfield<3:0>, acctype); memattrs.shareable = SH<1> == '1'; memattrs.outershareable = SH == '10'; elsif HaveMTEExt() && attrfield == '11110000' then // Tagged, Normal memattrs.tagged = TRUE; memattrs.type = MemType_Normal; memattrs.outer.attrs = MemAttr_WB; memattrs.inner.attrs = MemAttr_WB; memattrs.outer.hints = MemHint_RWA; memattrs.inner.hints = MemHint_RWA; memattrs.shareable = SH<1> == '1'; memattrs.outershareable = SH == '10'; else Unreachable(); // Reserved, handled above return MemAttrDefaults(memattrs);
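The decode above keys off the 8-bit MAIR Attr field selected by AttrIndx: a high nibble of 0000 selects Device memory with the low nibble picking the Device subtype, while other values select Normal memory with the two nibbles giving outer and inner cacheability. A simplified Python sketch (illustrative; it omits the CONSTRAINED UNPREDICTABLE reserved encodings and the FEAT_MTE tagged case, and s1_attr_decode is a name invented here):

# Illustrative decode of one 8-bit MAIR Attr field, not ARM pseudocode.
DEVICE_TYPES = {0b0000: "nGnRnE", 0b0100: "nGnRE", 0b1000: "nGRE", 0b1100: "GRE"}

def s1_attr_decode(attrfield):
    hi, lo = attrfield >> 4, attrfield & 0xF
    if hi == 0b0000:
        return ("Device", DEVICE_TYPES.get(lo, "reserved"))
    return ("Normal", {"outer": hi, "inner": lo})   # nibbles encode cacheability/hints

print(s1_attr_decode(0b00000100))   # ('Device', 'nGnRE')
print(s1_attr_decode(0b11111111))   # ('Normal', {'outer': 15, 'inner': 15})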

Library pseudocode for aarch64/translation/attrs/AArch64.TranslateAddressS1Off

// AArch64.TranslateAddressS1Off() // =============================== // Called for stage 1 translations when translation is disabled to supply a default translation. // Note that there are additional constraints on instruction prefetching that are not described in // this pseudocode. TLBRecord AArch64.TranslateAddressS1Off(bits(64) vaddress, AccType acctype, boolean iswrite) assert !ELUsingAArch32(S1TranslationRegime()); TLBRecord result; Top = AddrTop(vaddress, (acctype == AccType_IFETCH), PSTATE.EL); if !IsZero(vaddress<Top:PAMax()>) then level = 0; ipaddress = bits(52) UNKNOWN; secondstage = FALSE; s2fs1walk = FALSE; result.addrdesc.fault = AArch64.AddressSizeFault(ipaddress, bit UNKNOWN, level, acctype, iswrite, secondstage, s2fs1walk); return result; default_cacheable = (HasS2Translation() && HCR_EL2.DC == '1'); if default_cacheable then // Use default cacheable settings result.addrdesc.memattrs.type = MemType_Normal; result.addrdesc.memattrs.inner.attrs = MemAttr_WB; // Write-back result.addrdesc.memattrs.inner.hints = MemHint_RWA; result.addrdesc.memattrs.shareable = FALSE; result.addrdesc.memattrs.outershareable = FALSE; result.addrdesc.memattrs.tagged = HCR_EL2.DCT == '1'; elsif acctype != AccType_IFETCH then // Treat data as Device result.addrdesc.memattrs.type = MemType_Device; result.addrdesc.memattrs.device = DeviceType_nGnRnE; result.addrdesc.memattrs.inner = MemAttrHints UNKNOWN; result.addrdesc.memattrs.tagged = FALSE; else // Instruction cacheability controlled by SCTLR_ELx.I cacheable = SCTLR[].I == '1'; result.addrdesc.memattrs.type = MemType_Normal; if cacheable then result.addrdesc.memattrs.inner.attrs = MemAttr_WT; result.addrdesc.memattrs.inner.hints = MemHint_RA; else result.addrdesc.memattrs.inner.attrs = MemAttr_NC; result.addrdesc.memattrs.inner.hints = MemHint_No; result.addrdesc.memattrs.shareable = TRUE; result.addrdesc.memattrs.outershareable = TRUE; result.addrdesc.memattrs.tagged = FALSE; result.addrdesc.memattrs.outer = result.addrdesc.memattrs.inner; result.addrdesc.memattrs = MemAttrDefaults(result.addrdesc.memattrs); result.perms.ap = bits(3) UNKNOWN; result.perms.xn = '0'; result.perms.pxn = '0'; result.nG = bit UNKNOWN; result.contiguous = boolean UNKNOWN; result.domain = bits(4) UNKNOWN; result.level = integer UNKNOWN; result.blocksize = integer UNKNOWN; result.addrdesc.paddress.address = vaddress<51:0>; result.addrdesc.paddress.NS = if IsSecure() then '0' else '1'; result.addrdesc.fault = AArch64.NoFault(); return result;
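With the stage 1 MMU off, the function above picks one of three defaults: HCR_EL2.DC forces Normal Write-Back, data accesses otherwise become Device-nGnRnE, and instruction fetches follow SCTLR_ELx.I. A condensed Python sketch of that decision (illustrative; s1_off_default is a name invented here):

# Illustrative: default memory type when stage 1 translation is disabled.
def s1_off_default(is_fetch, hcr_dc, sctlr_i):
    if hcr_dc:                      # HCR_EL2.DC == '1': default cacheable
        return "Normal WB"
    if not is_fetch:                # data accesses are treated as Device
        return "Device-nGnRnE"
    return "Normal WT" if sctlr_i else "Normal NC"   # fetches follow SCTLR_ELx.I

print(s1_off_default(is_fetch=False, hcr_dc=False, sctlr_i=True))   # Device-nGnRnE
print(s1_off_default(is_fetch=True,  hcr_dc=False, sctlr_i=True))   # Normal WT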

Library pseudocode for aarch64/translation/checks/AArch64.AccessIsPrivileged

// AArch64.AccessIsPrivileged() // ============================ boolean AArch64.AccessIsPrivileged(AccType acctype) el = AArch64.AccessUsesEL(acctype); if el == EL0 then ispriv = FALSE; elsif el == EL3 then ispriv = TRUE; elsif el == EL2 && (!IsInHost() || HCR_EL2.TGE == '0') then ispriv = TRUE; elsif HaveUAOExt() && PSTATE.UAO == '1' then ispriv = TRUE; else ispriv = (acctype != AccType_UNPRIV); return ispriv;

Library pseudocode for aarch64/translation/checks/AArch64.AccessUsesEL

// AArch64.AccessUsesEL() // ====================== // Returns the Exception Level of the regime that will manage the translation for a given access type. bits(2) AArch64.AccessUsesEL(AccType acctype) if acctype == AccType_UNPRIV then return EL0; elsif acctype == AccType_NV2REGISTER then return EL2; else return PSTATE.EL;
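Taken together, AArch64.AccessUsesEL and AArch64.AccessIsPrivileged above map an access type to a controlling Exception level and a privilege decision. A condensed Python sketch (illustrative; the HCR_EL2.TGE, host and PSTATE.UAO refinements are deliberately omitted, and both function names are invented here):

# Illustrative condensation of AccessUsesEL/AccessIsPrivileged, not ARM pseudocode.
def access_uses_el(acctype, pstate_el):
    if acctype == "UNPRIV":
        return 0                     # unprivileged accesses use the EL0 regime
    if acctype == "NV2REGISTER":
        return 2                     # FEAT_NV2 register accesses use the EL2 regime
    return pstate_el

def access_is_privileged(acctype, pstate_el):
    return access_uses_el(acctype, pstate_el) != 0

print(access_is_privileged("NORMAL", 1))   # True
print(access_is_privileged("UNPRIV", 1))   # False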

Library pseudocode for aarch64/translation/checks/AArch64.CheckPermission

// AArch64.CheckPermission() // ========================= // Function used for permission checking from AArch64 stage 1 translations FaultRecord AArch64.CheckPermission(Permissions perms, bits(64) vaddress, integer level, bit NS, AccType acctype, boolean iswrite) assert !ELUsingAArch32(S1TranslationRegime()); wxn = SCTLR[].WXN == '1'; if (PSTATE.EL == EL0 || IsInHost() || (PSTATE.EL == EL1 && !HaveNV2Ext()) || (PSTATE.EL == EL1 && HaveNV2Ext() && (acctype != AccType_NV2REGISTER || !ELIsInHost(EL2)))) then priv_r = TRUE; priv_w = perms.ap<2> == '0'; user_r = perms.ap<1> == '1'; user_w = perms.ap<2:1> == '01'; ispriv = AArch64.AccessIsPrivileged(acctype); pan = if HavePANExt() then PSTATE.PAN else '0'; if (EL2Enabled() && ((PSTATE.EL == EL1 && HaveNVExt() && HCR_EL2.<NV, NV1> == '11') || (HaveNV2Ext() && acctype == AccType_NV2REGISTER && HCR_EL2.NV2 == '1'))) then pan = '0'; is_ldst = !(acctype IN {AccType_DC, AccType_DC_UNPRIV, AccType_AT, AccType_IFETCH}); is_ats1xp = (acctype == AccType_AT && AArch64.ExecutingATS1xPInstr()); if pan == '1' && user_r && ispriv && (is_ldst || is_ats1xp) then priv_r = FALSE; priv_w = FALSE; user_xn = perms.xn == '1' || (user_w && wxn); priv_xn = perms.pxn == '1' || (priv_w && wxn) || user_w; if ispriv then (r, w, xn) = (priv_r, priv_w, priv_xn); else (r, w, xn) = (user_r, user_w, user_xn); else // Access from EL2 or EL3 r = TRUE; w = perms.ap<2> == '0'; xn = perms.xn == '1' || (w && wxn); // Restriction on Secure instruction fetch if HaveEL(EL3) && IsSecure() && NS == '1' && SCR_EL3.SIF == '1' then xn = TRUE; if acctype == AccType_IFETCH then fail = xn; failedread = TRUE; elsif acctype IN { AccType_ATOMICRW, AccType_ORDEREDRW, AccType_ORDEREDATOMICRW } then fail = !r || !w; failedread = !r; elsif iswrite then fail = !w; failedread = FALSE; elsif acctype == AccType_DC && PSTATE.EL != EL0 then // DC maintenance instructions operating by VA, cannot fault from stage 1 translation, // other than DC IVAC, which requires write permission, and operations executed at EL0, // which require read permission. fail = FALSE; else fail = !r; failedread = TRUE; if fail then secondstage = FALSE; s2fs1walk = FALSE; ipaddress = bits(52) UNKNOWN; return AArch64.PermissionFault(ipaddress, bit UNKNOWN, level, acctype, !failedread, secondstage, s2fs1walk); else return AArch64.NoFault();
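The stage 1 permission model above derives separate privileged and unprivileged (r, w, xn) triples from AP[2:1], XN, PXN and SCTLR_ELx.WXN; note that a user-writable page is never privileged-executable. A simplified Python sketch of that decode (illustrative; the PAN, NV and EL2/EL3 paths are omitted, and s1_perms is a name invented here):

# Illustrative stage 1 permission decode for the EL1&0 regime, not ARM pseudocode.
def s1_perms(ap2, ap1, xn, pxn, wxn):
    priv_w = ap2 == 0
    user_r = ap1 == 1
    user_w = (ap2, ap1) == (0, 1)
    user_xn = xn or (user_w and wxn)
    priv_xn = pxn or (priv_w and wxn) or user_w   # user-writable implies priv XN
    return {"priv": (True, priv_w, not priv_xn), "user": (user_r, user_w, not user_xn)}

# AP[2:1] = '01' (read/write at both levels): privileged execution is denied.
print(s1_perms(0, 1, xn=False, pxn=False, wxn=False))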

Library pseudocode for aarch64/translation/translation/AArch64.TranslateAddress

// AArch64.TranslateAddress() // ========================== // Main entry point for translating an address AddressDescriptor AArch64.TranslateAddress(bits(64) vaddress, AccType acctype, boolean iswrite, boolean wasaligned, integer size) result = AArch64.FullTranslate(vaddress, acctype, iswrite, wasaligned, size); if !(acctype IN {AccType_PTW, AccType_IC, AccType_AT}) && !IsFault(result) then result.fault = AArch64.CheckDebug(vaddress, acctype, iswrite, size); // Update virtual address for abort functions result.vaddress = ZeroExtend(vaddress); return result;

Library pseudocode for aarch64/translation/checks/AArch64.CheckS2Permission

// AArch64.CheckS2Permission() // =========================== // Function used for permission checking from AArch64 stage 2 translations FaultRecord AArch64.CheckS2Permission(Permissions perms, bits(64) vaddress, bits(52) ipaddress, integer level, AccType acctype, boolean iswrite, bit NS, boolean s2fs1walk, boolean hwupdatewalk) assert (IsSecureEL2Enabled() || (HaveEL(EL2) && !IsSecure() && !ELUsingAArch32(EL2))) && HasS2Translation(); r = perms.ap<1> == '1'; w = perms.ap<2> == '1'; if HaveExtendedExecuteNeverExt() then case perms.xn:perms.xxn of when '00' xn = FALSE; when '01' xn = PSTATE.EL == EL1; when '10' xn = TRUE; when '11' xn = PSTATE.EL == EL0; else xn = perms.xn == '1'; // Stage 1 walk is checked as a read, regardless of the original type if acctype == AccType_IFETCH && !s2fs1walk then fail = xn; failedread = TRUE; elsif (acctype IN { AccType_ATOMICRW, AccType_ORDEREDRW, AccType_ORDEREDATOMICRW }) && !s2fs1walk then fail = !r || !w; failedread = !r; elsif iswrite && !s2fs1walk then fail = !w; failedread = FALSE; elsif acctype == AccType_DC && PSTATE.EL != EL0 && !s2fs1walk then // DC maintenance instructions operating by VA, with the exception of DC IVAC, do // not generate Permission faults from stage 2 translation, other than when // performing a stage 1 translation table walk. fail = FALSE; elsif hwupdatewalk then fail = !w; failedread = !iswrite; else fail = !r; failedread = !iswrite; if fail then domain = bits(4) UNKNOWN; secondstage = TRUE; return AArch64.PermissionFault(ipaddress, NS, level, acctype, !failedread, secondstage, s2fs1walk); else return AArch64.NoFault();
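With FEAT_XNX the stage 2 execute-never control becomes the two-bit field XN:XXN decoded in the case statement above. The same table in Python (illustrative; s2_xn is a name invented here):

# Illustrative stage 2 XN:XXN decode (FEAT_XNX), not ARM pseudocode.
def s2_xn(xn, xxn, pstate_el, have_xnx):
    if not have_xnx:
        return xn == 1
    return {(0, 0): False,              # executable at EL1 and EL0
            (0, 1): pstate_el == 1,     # execute-never at EL1 only
            (1, 0): True,               # execute-never at both
            (1, 1): pstate_el == 0}[(xn, xxn)]   # execute-never at EL0 only

print(s2_xn(0, 1, pstate_el=1, have_xnx=True))   # True
print(s2_xn(0, 1, pstate_el=0, have_xnx=True))   # False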

Library pseudocode for aarch64/translation/walk/AArch64.TranslationTableWalk

// AArch64.TranslationTableWalk() // ============================== // Returns a result of a translation table walk // // Implementations might cache information from memory in any number of non-coherent TLB // caching structures, and so avoid memory accesses that have been expressed in this // pseudocode. The use of such TLBs is not expressed in this pseudocode. TLBRecord AArch64.TranslationTableWalk(bits(52) ipaddress, bit s1_nonsecure, bits(64) vaddress, AccType acctype, boolean iswrite, boolean secondstage, boolean s2fs1walk, integer size) if !secondstage then assert !ELUsingAArch32(S1TranslationRegime()); else assert (IsSecureEL2Enabled() || (HaveEL(EL2) && !IsSecure() && !ELUsingAArch32(EL2))) && HasS2Translation(); TLBRecord result; AddressDescriptor descaddr; bits(64) baseregister; bits(64) inputaddr; // Input Address is 'vaddress' for stage 1, 'ipaddress' for stage 2 descaddr.memattrs.type = MemType_Normal; // Derived parameters for the page table walk: // grainsize = Log2(Size of Table) - Size of Table is 4KB, 16KB or 64KB in AArch64 // stride = Log2(Address per Level) - Bits of address consumed at each level // firstblocklevel = First level where a block entry is allowed // ps = Physical Address size as encoded in TCR_EL1.IPS or TCR_ELx/VTCR_EL2.PS // inputsize = Log2(Size of Input Address) - Input Address size in bits // level = Level to start walk from // This means that the number of levels after start level = 3-level if !secondstage then // First stage translation inputaddr = ZeroExtend(vaddress); el = AArch64.AccessUsesEL(acctype); top = AddrTop(inputaddr, (acctype == AccType_IFETCH), el); if el == EL3 then largegrain = TCR_EL3.TG0 == '01'; midgrain = TCR_EL3.TG0 == '10'; inputsize = 64 - UInt(TCR_EL3.T0SZ); inputsize_max = if Have52BitVAExt() && largegrain then 52 else 48; inputsize_min = 64 - (if !HaveSmallPageTblExt() then 39 else if largegrain then 47 else 48); if inputsize < inputsize_min then c = ConstrainUnpredictable(Unpredictable_RESTnSZ); assert c IN {Constraint_FORCE, Constraint_FAULT}; if c == Constraint_FORCE then inputsize = inputsize_min; ps = TCR_EL3.PS; basefound = inputsize >= inputsize_min && inputsize <= inputsize_max && IsZero(inputaddr<top:inputsize>); disabled = FALSE; baseregister = TTBR0_EL3; descaddr.memattrs = WalkAttrDecode(TCR_EL3.SH0, TCR_EL3.ORGN0, TCR_EL3.IRGN0, secondstage); reversedescriptors = SCTLR_EL3.EE == '1'; lookupsecure = TRUE; singlepriv = TRUE; update_AF = HaveAccessFlagUpdateExt() && TCR_EL3.HA == '1'; update_AP = HaveDirtyBitModifierExt() && update_AF && TCR_EL3.HD == '1'; hierattrsdisabled = AArch64.HaveHPDExt() && TCR_EL3.HPD == '1'; elsif ELIsInHost(el) then if
inputaddr<top> == '0' then largegrain = TCR_EL2.TG0 == '01'; midgrain = TCR_EL2.TG0 == '10'; inputsize = 64 - UInt(TCR_EL2.T0SZ); inputsize_max = if Have52BitVAExt() && largegrain then 52 else 48; inputsize_min = 64 - (if !HaveSmallPageTblExt() then 39 else if largegrain then 47 else 48); if inputsize < inputsize_min then c = ConstrainUnpredictable(Unpredictable_RESTnSZ); assert c IN {Constraint_FORCE, Constraint_FAULT}; if c == Constraint_FORCE then inputsize = inputsize_min; basefound = inputsize >= inputsize_min && inputsize <= inputsize_max && IsZero(inputaddr<top:inputsize>); disabled = TCR_EL2.EPD0 == '1' || (PSTATE.EL == EL0 && HaveE0PDExt() && TCR_EL2.E0PD0 == '1'); baseregister = TTBR0_EL2; descaddr.memattrs = WalkAttrDecode(TCR_EL2.SH0, TCR_EL2.ORGN0, TCR_EL2.IRGN0, secondstage); hierattrsdisabled = AArch64.HaveHPDExt() && TCR_EL2.HPD0 == '1'; else inputsize = 64 - UInt(TCR_EL2.T1SZ); largegrain = TCR_EL2.TG1 == '11'; // TG1 and TG0 encodings differ midgrain = TCR_EL2.TG1 == '01'; inputsize_max = if Have52BitVAExt() && largegrain then 52 else 48; inputsize_min = 64 - (if !HaveSmallPageTblExt() then 39 else if largegrain then 47 else 48); if inputsize < inputsize_min then c = ConstrainUnpredictable(Unpredictable_RESTnSZ); assert c IN {Constraint_FORCE, Constraint_FAULT}; if c == Constraint_FORCE then inputsize = inputsize_min; basefound = inputsize >= inputsize_min && inputsize <= inputsize_max && IsOnes(inputaddr<top:inputsize>); disabled = TCR_EL2.EPD1 == '1' || (PSTATE.EL == EL0 && HaveE0PDExt() && TCR_EL2.E0PD1 == '1'); baseregister = TTBR1_EL2; descaddr.memattrs = WalkAttrDecode(TCR_EL2.SH1, TCR_EL2.ORGN1, TCR_EL2.IRGN1, secondstage); hierattrsdisabled = AArch64.HaveHPDExt() && TCR_EL2.HPD1 == '1'; ps = TCR_EL2.IPS; reversedescriptors = SCTLR_EL2.EE == '1'; lookupsecure = if IsSecureEL2Enabled() then IsSecure() else FALSE; singlepriv = FALSE; update_AF = HaveAccessFlagUpdateExt() && TCR_EL2.HA == '1'; update_AP = HaveDirtyBitModifierExt() && update_AF && TCR_EL2.HD == '1'; elsif el == EL2 then inputsize = 64 - UInt(TCR_EL2.T0SZ); largegrain = TCR_EL2.TG0 == '01'; midgrain = TCR_EL2.TG0 == '10'; inputsize_max = if Have52BitVAExt() && largegrain then 52 else 48; inputsize_min = 64 - (if !HaveSmallPageTblExt() then 39 else if largegrain then 47 else 48); if inputsize < inputsize_min then c = ConstrainUnpredictable(Unpredictable_RESTnSZ); assert c IN {Constraint_FORCE, Constraint_FAULT}; if c == Constraint_FORCE then inputsize = inputsize_min; ps = TCR_EL2.PS; basefound = inputsize >= inputsize_min && inputsize <= inputsize_max && IsZero(inputaddr<top:inputsize>); disabled = FALSE; baseregister = TTBR0_EL2; descaddr.memattrs = WalkAttrDecode(TCR_EL2.SH0, TCR_EL2.ORGN0, TCR_EL2.IRGN0, secondstage); reversedescriptors = SCTLR_EL2.EE == '1'; lookupsecure = if IsSecureEL2Enabled() then IsSecure() else FALSE; singlepriv = TRUE; update_AF = HaveAccessFlagUpdateExt() && TCR_EL2.HA == '1'; update_AP = HaveDirtyBitModifierExt() && update_AF && TCR_EL2.HD == '1'; hierattrsdisabled = AArch64.HaveHPDExt() && TCR_EL2.HPD == '1'; else if inputaddr<top> == '0' then inputsize = 64 - UInt(TCR_EL1.T0SZ); largegrain = TCR_EL1.TG0 == '01'; midgrain = TCR_EL1.TG0 == '10'; inputsize_max = if Have52BitVAExt() && largegrain then 52 else 48; inputsize_min = 64 - (if !HaveSmallPageTblExt() then 39 else if largegrain then 47 else 48); if inputsize < inputsize_min then c = ConstrainUnpredictable(Unpredictable_RESTnSZ); assert c IN {Constraint_FORCE, Constraint_FAULT}; if c == Constraint_FORCE then inputsize 
= inputsize_min; basefound = inputsize >= inputsize_min && inputsize <= inputsize_max && IsZero(inputaddr<top:inputsize>); disabled = TCR_EL1.EPD0 == '1' || (PSTATE.EL == EL0 && HaveE0PDExt() && TCR_EL1.E0PD0 == '1'); disabled = disabled || (el == EL0 && acctype == AccType_NONFAULT && TCR_EL1.NFD0 == '1'); baseregister = TTBR0_EL1; descaddr.memattrs = WalkAttrDecode(TCR_EL1.SH0, TCR_EL1.ORGN0, TCR_EL1.IRGN0, secondstage); hierattrsdisabled = AArch64.HaveHPDExt() && TCR_EL1.HPD0 == '1'; else inputsize = 64 - UInt(TCR_EL1.T1SZ); largegrain = TCR_EL1.TG1 == '11'; // TG1 and TG0 encodings differ midgrain = TCR_EL1.TG1 == '01'; inputsize_max = if Have52BitVAExt() && largegrain then 52 else 48; inputsize_min = 64 - (if !HaveSmallPageTblExt() then 39 else if largegrain then 47 else 48); if inputsize < inputsize_min then c = ConstrainUnpredictable(Unpredictable_RESTnSZ); assert c IN {Constraint_FORCE, Constraint_FAULT}; if c == Constraint_FORCE then inputsize = inputsize_min; basefound = inputsize >= inputsize_min && inputsize <= inputsize_max && IsOnes(inputaddr<top:inputsize>); disabled = TCR_EL1.EPD1 == '1' || (PSTATE.EL == EL0 && HaveE0PDExt() && TCR_EL1.E0PD1 == '1'); disabled = disabled || (el == EL0 && acctype == AccType_NONFAULT && TCR_EL1.NFD1 == '1'); baseregister = TTBR1_EL1; descaddr.memattrs = WalkAttrDecode(TCR_EL1.SH1, TCR_EL1.ORGN1, TCR_EL1.IRGN1, secondstage); hierattrsdisabled = AArch64.HaveHPDExt() && TCR_EL1.HPD1 == '1'; ps = TCR_EL1.IPS; reversedescriptors = SCTLR_EL1.EE == '1'; lookupsecure = IsSecure(); singlepriv = FALSE; update_AF = HaveAccessFlagUpdateExt() && TCR_EL1.HA == '1'; update_AP = HaveDirtyBitModifierExt() && update_AF && TCR_EL1.HD == '1'; if largegrain then grainsize = 16; // Log2(64KB page size) firstblocklevel = (if Have52BitPAExt() then 1 else 2); // Largest block is 4TB (2^42 bytes) for 52 bit PA // and 512MB (2^29 bytes) otherwise elsif midgrain then grainsize = 14; // Log2(16KB page size) firstblocklevel = 2; // Largest block is 32MB (2^25 bytes) else // Small grain grainsize = 12; // Log2(4KB page size) firstblocklevel = 1; // Largest block is 1GB (2^30 bytes) stride = grainsize - 3; // Log2(page size / 8 bytes) // The starting level is the number of strides needed to consume the input address level = 4 - RoundUp(Real(inputsize - grainsize) / Real(stride)); else // Second stage translation inputaddr = ZeroExtend(ipaddress); // Stage 2 translation table walk for the Secure EL2 translation regime if IsSecureEL2Enabled() && IsSecure() then // Stage 2 translation walk is in the Non-secure IPA space or the Secure IPA space t0size = if s1_nonsecure == '1' then VTCR_EL2.T0SZ else VSTCR_EL2.T0SZ; tg0 = if s1_nonsecure == '1' then VTCR_EL2.TG0 else VSTCR_EL2.TG0; // Stage 2 translation table walk is to the Non-secure PA space or to the Secure PA space nswalk = if s1_nonsecure == '1' then VTCR_EL2.NSW else VSTCR_EL2.SW; // Stage 2 translation accesses the Non-secure PA space or the Secure PA space if nswalk == '1' then nsaccess = '1'; // When walk is non-secure, access must be to the Non-secure PA space else if s1_nonsecure == '0' then nsaccess = VSTCR_EL2.SA; // When walk is secure and in the Secure IPA space, access is specified by VSTCR_EL2.SA else if VSTCR_EL2.SW == '1' || VSTCR_EL2.SA == '1' then nsaccess = '1'; // When walk is secure and in the Non-secure IPA space, access is non-secure when VSTCR_EL2.SA specifies the Non-secure PA space else nsaccess = VTCR_EL2.NSA; // When walk is secure and in the Non-secure IPA space, if VSTCR_EL2.SA specifies the 
Secure PA space, access is specified by VSTCR_EL2.NSA else t0size = VTCR_EL2.T0SZ; tg0 = VTCR_EL2.TG0; nsaccess = '1'; inputsize = 64 - UInt(t0size); largegrain = tg0 == '01'; midgrain = tg0 == '10'; inputsize_max = if Have52BitVAExt() && largegrain then 52 else 48; inputsize_min = 64 - (if !HaveSmallPageTblExt() then 39 else if largegrain then 47 else 48); if inputsize < inputsize_min then c = ConstrainUnpredictable(Unpredictable_RESTnSZ); assert c IN {Constraint_FORCE, Constraint_FAULT}; if c == Constraint_FORCE then inputsize = inputsize_min; ps = VTCR_EL2.PS; basefound = inputsize >= inputsize_min && inputsize <= inputsize_max && IsZero(inputaddr<63:inputsize>); disabled = FALSE; descaddr.memattrs = WalkAttrDecode(VTCR_EL2.SH0, VTCR_EL2.ORGN0, VTCR_EL2.IRGN0, secondstage); reversedescriptors = SCTLR_EL2.EE == '1'; singlepriv = TRUE; update_AF = HaveAccessFlagUpdateExt() && VTCR_EL2.HA == '1'; update_AP = HaveDirtyBitModifierExt() && update_AF && VTCR_EL2.HD == '1'; lookupsecure = if IsSecureEL2Enabled() then s1_nonsecure == '0' else FALSE; // Stage2 translation table walk is to secure PA space or to Non-secure PA space baseregister = if lookupsecure then VSTTBR_EL2 else VTTBR_EL2; startlevel = if lookupsecure then UInt(VSTCR_EL2.SL0) else UInt(VTCR_EL2.SL0); if largegrain then grainsize = 16; // Log2(64KB page size) level = 3 - startlevel; firstblocklevel = (if Have52BitPAExt() then 1 else 2); // Largest block is 4TB (2^42 bytes) for 52 bit PA // and 512MB (2^29 bytes) otherwise elsif midgrain then grainsize = 14; // Log2(16KB page size) level = 3 - startlevel; firstblocklevel = 2; // Largest block is 32MB (2^25 bytes) else // Small grain grainsize = 12; // Log2(4KB page size) if HaveSmallPageTblExt() && startlevel == 3 then level = startlevel; // Startlevel 3 (VTCR_EL2.SL0 or VSCTR_EL2.SL0 == 0b11) for 4KB granule else level = 2 - startlevel; firstblocklevel = 1; // Largest block is 1GB (2^30 bytes) stride = grainsize - 3; // Log2(page size / 8 bytes) // Limits on IPA controls based on implemented PA size. Level 0 is only // supported by small grain translations if largegrain then // 64KB pages // Level 1 only supported if implemented PA size is greater than 2^42 bytes if level == 0 || (level == 1 && PAMax() <= 42) then basefound = FALSE; elsif midgrain then // 16KB pages // Level 1 only supported if implemented PA size is greater than 2^40 bytes if level == 0 || (level == 1 && PAMax() <= 40) then basefound = FALSE; else // Small grain, 4KB pages // Level 0 only supported if implemented PA size is greater than 2^42 bytes if level < 0 || (level == 0 && PAMax() <= 42) then basefound = FALSE; // If the inputsize exceeds the PAMax value, the behavior is CONSTRAINED UNPREDICTABLE inputsizecheck = inputsize; if inputsize > PAMax() && (!ELUsingAArch32(EL1) || inputsize > 40) then case ConstrainUnpredictable(Unpredictable_LARGEIPA) of when Constraint_FORCE // Restrict the inputsize to the PAMax value inputsize = PAMax(); inputsizecheck = PAMax(); when Constraint_FORCENOSLCHECK // As FORCE, except use the configured inputsize in the size checks below inputsize = PAMax(); when Constraint_FAULT // Generate a translation fault basefound = FALSE; otherwise Unreachable(); // Number of entries in the starting level table = // (Size of Input Address)/((Address per level)^(Num levels remaining)*(Size of Table)) startsizecheck = inputsizecheck - ((3 - level)*stride + grainsize); // Log2(Num of entries) // Check for starting level table with fewer than 2 entries or longer than 16 pages. 
// Lower bound check is: startsizecheck < Log2(2 entries) // Upper bound check is: startsizecheck > Log2(pagesize/8*16) if startsizecheck < 1 || startsizecheck > stride + 4 then basefound = FALSE; if !basefound || disabled then level = 0; // AArch32 reports this as a level 1 fault result.addrdesc.fault = AArch64.TranslationFault(ipaddress, s1_nonsecure, level, acctype, iswrite, secondstage, s2fs1walk); return result; case ps of when '000' outputsize = 32; when '001' outputsize = 36; when '010' outputsize = 40; when '011' outputsize = 42; when '100' outputsize = 44; when '101' outputsize = 48; when '110' outputsize = (if Have52BitPAExt() && largegrain then 52 else 48); otherwise outputsize = integer IMPLEMENTATION_DEFINED "Reserved Intermediate Physical Address size value"; if outputsize > PAMax() then outputsize = PAMax(); if outputsize < 48 && !IsZero(baseregister<47:outputsize>) then level = 0; result.addrdesc.fault = AArch64.AddressSizeFault(ipaddress,s1_nonsecure, level, acctype, iswrite, secondstage, s2fs1walk); return result; // Bottom bound of the Base address is: // Log2(8 bytes per entry)+Log2(Number of entries in starting level table) // Number of entries in starting level table = // (Size of Input Address)/((Address per level)^(Num levels remaining)*(Size of Table)) baselowerbound = 3 + inputsize - ((3-level)*stride + grainsize); // Log2(Num of entries*8) if outputsize == 52 then z = (if baselowerbound < 6 then 6 else baselowerbound); baseaddress = baseregister<5:2>:baseregister<47:z>:Zeros(z); else baseaddress = ZeroExtend(baseregister<47:baselowerbound>:Zeros(baselowerbound)); ns_table = if lookupsecure then '0' else '1'; ap_table = '00'; xn_table = '0'; pxn_table = '0'; addrselecttop = inputsize - 1; apply_nvnv1_effect = HaveNVExt() && EL2Enabled() && HCR_EL2.<NV,NV1> == '11' && S1TranslationRegime() == EL1 && !secondstage; repeat addrselectbottom = (3-level)*stride + grainsize; bits(52) index = ZeroExtend(inputaddr<addrselecttop:addrselectbottom>:'000'); descaddr.paddress.address = baseaddress OR index; descaddr.paddress.NS = ns_table; // If there are two stages of translation, then the first stage table walk addresses // are themselves subject to translation if secondstage || !HasS2Translation() || (HaveNV2Ext() && acctype == AccType_NV2REGISTER) then descaddr2 = descaddr; else hwupdatewalk = FALSE; descaddr2 = AArch64.SecondStageWalk(descaddr, vaddress, acctype, iswrite, 8, hwupdatewalk); // Check for a fault on the stage 2 walk if IsFault(descaddr2) then result.addrdesc.fault = descaddr2.fault; return result; // Update virtual address for abort functions descaddr2.vaddress = ZeroExtend(vaddress); accdesc = CreateAccessDescriptorPTW(acctype, secondstage, s2fs1walk, level); desc = _Mem[descaddr2, 8, accdesc]; if reversedescriptors then desc = BigEndianReverse(desc); if desc<0> == '0' || (desc<1:0> == '01' && (level == 3 || (HaveBlockBBM() && IsBlockDescriptorNTBitValid() && desc<16> == '1'))) then // Fault (00), Reserved (10), Block (01) at level 3, or Block(01) with nT bit set. 
result.addrdesc.fault = AArch64.TranslationFault(ipaddress, s1_nonsecure, level, acctype, iswrite, secondstage, s2fs1walk); return result; // Valid Block, Page, or Table entry if desc<1:0> == '01' || level == 3 then // Block (01) or Page (11) blocktranslate = TRUE; else // Table (11) if (outputsize < 52 && largegrain && !IsZero(desc<15:12>)) || (outputsize < 48 && !IsZero(desc<47:outputsize>)) then result.addrdesc.fault = AArch64.AddressSizeFault(ipaddress,s1_nonsecure, level, acctype, iswrite, secondstage, s2fs1walk); return result; if outputsize == 52 then baseaddress = desc<15:12>:desc<47:grainsize>:Zeros(grainsize); else baseaddress = ZeroExtend(desc<47:grainsize>:Zeros(grainsize)); if !secondstage then // Unpack the upper and lower table attributes ns_table = ns_table OR desc<63>; if !secondstage && !hierattrsdisabled then ap_table<1> = ap_table<1> OR desc<62>; // read-only if apply_nvnv1_effect then pxn_table = pxn_table OR desc<60>; else xn_table = xn_table OR desc<60>; // pxn_table and ap_table[0] apply in EL1&0 or EL2&0 translation regimes if !singlepriv then if !apply_nvnv1_effect then pxn_table = pxn_table OR desc<59>; ap_table<0> = ap_table<0> OR desc<61>; // privileged level = level + 1; addrselecttop = addrselectbottom - 1; blocktranslate = FALSE; until blocktranslate; // Check block size is supported at this level if level < firstblocklevel then result.addrdesc.fault = AArch64.TranslationFault(ipaddress, s1_nonsecure, level, acctype, iswrite, secondstage, s2fs1walk); return result; // Check for misprogramming of the contiguous bit if largegrain then contiguousbitcheck = level == 2 && inputsize < 34; elsif midgrain then contiguousbitcheck = level == 2 && inputsize < 30; else contiguousbitcheck = level == 1 && inputsize < 34; if contiguousbitcheck && desc<52> == '1' then if boolean IMPLEMENTATION_DEFINED "Translation fault on misprogrammed contiguous bit" then result.addrdesc.fault = AArch64.TranslationFault(ipaddress, s1_nonsecure, level, acctype, iswrite, secondstage, s2fs1walk); return result; // Check the output address is inside the supported range if (outputsize < 52 && largegrain && !IsZero(desc<15:12>)) || (outputsize < 48 && !IsZero(desc<47:outputsize>)) then result.addrdesc.fault = AArch64.AddressSizeFault(ipaddress,s1_nonsecure, level, acctype, iswrite, secondstage, s2fs1walk); return result; // Unpack the descriptor into address and upper and lower block attributes if outputsize == 52 then outputaddress = desc<15:12>:desc<47:addrselectbottom>:inputaddr<addrselectbottom-1:0>; else outputaddress = ZeroExtend(desc<47:addrselectbottom>:inputaddr<addrselectbottom-1:0>); // Check Access Flag if desc<10> == '0' then if !update_AF then result.addrdesc.fault = AArch64.AccessFlagFault(ipaddress,s1_nonsecure, level, acctype, iswrite, secondstage, s2fs1walk); return result; else result.descupdate.AF = TRUE; if update_AP && desc<51> == '1' then // If hw update of access permission field is configured consider AP[2] as '0' / S2AP[2] as '1' if !secondstage && desc<7> == '1' then desc<7> = '0'; result.descupdate.AP = TRUE; elsif secondstage && desc<7> == '0' then desc<7> = '1'; result.descupdate.AP = TRUE; // Required descriptor if AF or AP[2]/S2AP[2] needs update result.descupdate.descaddr = descaddr; if apply_nvnv1_effect then pxn = desc<54>; // Bit[54] of the block/page descriptor holds PXN instead of UXN xn = '0'; // XN is '0' ap = desc<7>:'01'; // Bit[6] of the block/page descriptor is treated as '0' regardless of value programmed else xn = desc<54>; // Bit[54] of the 
block/page descriptor holds UXN pxn = desc<53>; // Bit[53] of the block/page descriptor holds PXN ap = desc<7:6>:'1'; // Bits[7:6] of the block/page descriptor hold AP[2:1] contiguousbit = desc<52>; nG = desc<11>; sh = desc<9:8>; memattr = desc<5:2>; // AttrIndx and NS bit in stage 1 result.domain = bits(4) UNKNOWN; // Domains not used result.level = level; result.blocksize = 2^((3-level)*stride + grainsize); // Stage 1 translation regimes also inherit attributes from the tables if !secondstage then result.perms.xn = xn OR xn_table; result.perms.ap<2> = ap<2> OR ap_table<1>; // Force read-only // PXN, nG and AP[1] apply in EL1&0 or EL2&0 stage 1 translation regimes if !singlepriv then result.perms.ap<1> = ap<1> AND NOT(ap_table<0>); // Force privileged only result.perms.pxn = pxn OR pxn_table; // Pages from Non-secure tables are marked non-global in Secure EL1&0 if IsSecure() then result.nG = nG OR ns_table; else result.nG = nG; else result.perms.ap<1> = '1'; result.perms.pxn = '0'; result.nG = '0'; result.GP = desc<50>; // Stage 1 block or pages might be guarded result.perms.ap<0> = '1'; result.addrdesc.memattrs = AArch64.S1AttrDecode(sh, memattr<2:0>, acctype); result.addrdesc.paddress.NS = memattr<3> OR ns_table; else result.perms.ap<2:1> = ap<2:1>; result.perms.ap<0> = '1'; result.perms.xn = xn; if HaveExtendedExecuteNeverExt() then result.perms.xxn = desc<53>; result.perms.pxn = '0'; result.nG = '0'; if s2fs1walk then result.addrdesc.memattrs = S2AttrDecode(sh, memattr, AccType_PTW); else result.addrdesc.memattrs = S2AttrDecode(sh, memattr, acctype); result.addrdesc.paddress.NS = nsaccess; result.addrdesc.paddress.address = outputaddress; result.addrdesc.fault = AArch64.NoFault(); result.contiguous = contiguousbit == '1'; if HaveCommonNotPrivateTransExt() then result.CnP = baseregister<0>; return result;
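The start-level arithmetic in the first-stage setup of AArch64.TranslationTableWalk above can be checked with a few lines of Python: the walk consumes stride = grainsize - 3 address bits per level, and the start level is chosen so that the remaining levels exactly cover inputsize - grainsize bits (illustrative; walk_params is a name invented here):

# Illustrative: derived walk parameters, mirroring the level computation above.
import math

def walk_params(inputsize, grain_kb):
    grainsize = int(math.log2(grain_kb * 1024))   # 12, 14 or 16
    stride = grainsize - 3                        # 8-byte descriptors per entry
    level = 4 - math.ceil((inputsize - grainsize) / stride)
    return grainsize, stride, level

print(walk_params(48, 4))    # (12, 9, 0): 4KB granule, 48-bit input starts at level 0
print(walk_params(39, 4))    # (12, 9, 1): 39-bit input starts at level 1
print(walk_params(42, 64))   # (16, 13, 2): 64KB granule, 42-bit input starts at level 2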

Library pseudocode for aarch64/translation/debug/AArch64.CheckBreakpoint

// AArch64.CheckBreakpoint() // ========================= // Called before executing the instruction of length "size" bytes at "vaddress" in an AArch64 // translation regime. // The breakpoint can in fact be evaluated well ahead of execution, for example, at instruction // fetch. This is the simple sequential execution of the program. FaultRecord AArch64.CheckBreakpoint(bits(64) vaddress, AccType acctype, integer size) assert !ELUsingAArch32(S1TranslationRegime()); assert (UsingAArch32() && size IN {2,4}) || size == 4; match = FALSE; for i = 0 to UInt(ID_AA64DFR0_EL1.BRPs) match_i = AArch64.BreakpointMatch(i, vaddress, acctype, size); match = match || match_i; if match && HaltOnBreakpointOrWatchpoint() then reason = DebugHalt_Breakpoint; Halt(reason); elsif match && MDSCR_EL1.MDE == '1' && AArch64.GenerateDebugExceptions() then acctype = AccType_IFETCH; iswrite = FALSE; return AArch64.DebugFault(acctype, iswrite); else return AArch64.NoFault();

Library pseudocode for shared/debug/ClearStickyErrors/ClearStickyErrors

// ClearStickyErrors() // =================== ClearStickyErrors() EDSCR.TXU = '0'; // Clear TX underrun flag EDSCR.RXO = '0'; // Clear RX overrun flag if Halted() then // in Debug state EDSCR.ITO = '0'; // Clear ITR overrun flag // If halted and the ITR is not empty then it is UNPREDICTABLE whether the EDSCR.ERR is cleared. // The UNPREDICTABLE behavior also affects the instructions in flight, but this is not described // in the pseudocode. if Halted() && EDSCR.ITE == '0' && ConstrainUnpredictableBool(Unpredictable_CLEARERRITEZERO) then return; EDSCR.ERR = '0'; // Clear cumulative error flag

Library pseudocode for aarch64/translation/debug/AArch64.CheckDebug

// AArch64.CheckDebug() // ==================== // Called on each access to check for a debug exception or entry to Debug state. FaultRecord AArch64.CheckDebug(bits(64) vaddress, AccType acctype, boolean iswrite, integer size) FaultRecord fault = AArch64.NoFault(); d_side = (acctype != AccType_IFETCH); generate_exception = AArch64.GenerateDebugExceptions() && MDSCR_EL1.MDE == '1'; halt = HaltOnBreakpointOrWatchpoint(); if generate_exception || halt then if d_side then fault = AArch64.CheckWatchpoint(vaddress, acctype, iswrite, size); else fault = AArch64.CheckBreakpoint(vaddress, acctype, size); return fault;

Library pseudocode for shared/debug/DebugTarget/DebugTarget

// DebugTarget() // ============= // Returns the debug exception target Exception level bits(2) DebugTarget() secure = IsSecure(); return DebugTargetFrom(secure);

Library pseudocode for aarch64/translation/debug/AArch64.CheckWatchpoint

// AArch64.CheckWatchpoint() // ========================= // Called before accessing the memory location of "size" bytes at "address". FaultRecord AArch64.CheckWatchpoint(bits(64) vaddress, AccType acctype, boolean iswrite, integer size) assert !ELUsingAArch32(S1TranslationRegime()); match = FALSE; ispriv = AArch64.AccessIsPrivileged(acctype); for i = 0 to UInt(ID_AA64DFR0_EL1.WRPs) match = match || AArch64.WatchpointMatch(i, vaddress, size, ispriv, acctype, iswrite); if match && HaltOnBreakpointOrWatchpoint() then if acctype != AccType_NONFAULT && acctype != AccType_CNOTFIRST then reason = DebugHalt_Watchpoint; Halt(reason); else // Fault will be reported and cancelled return AArch64.DebugFault(acctype, iswrite); elsif match && MDSCR_EL1.MDE == '1' && AArch64.GenerateDebugExceptions() then return AArch64.DebugFault(acctype, iswrite); else return AArch64.NoFault();

Library pseudocode for shared/debug/DebugTarget/DebugTargetFrom

// DebugTargetFrom() // ================= bits(2) DebugTargetFrom(boolean secure) if HaveEL(EL2) && !secure then if ELUsingAArch32(EL2) then route_to_el2 = (HDCR.TDE == '1' || HCR.TGE == '1'); else route_to_el2 = (MDCR_EL2.TDE == '1' || HCR_EL2.TGE == '1'); else route_to_el2 = FALSE; if route_to_el2 then target = EL2; elsif HaveEL(EL3) && HighestELUsingAArch32() && secure then target = EL3; else target = EL1; return target;

Library pseudocode for aarch64/translation/faults/AArch64.AccessFlagFault

// AArch64.AccessFlagFault() // ========================= FaultRecord AArch64.AccessFlagFault(bits(52) ipaddress, bit NS, integer level, AccType acctype, boolean iswrite, boolean secondstage, boolean s2fs1walk) extflag = bit UNKNOWN; errortype = bits(2) UNKNOWN; return AArch64.CreateFaultRecord(Fault_AccessFlag, ipaddress, NS, level, acctype, iswrite, extflag, errortype, secondstage, s2fs1walk);
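The routing decision in DebugTargetFrom above reduces to: EL2 when the Non-secure debug traps route there, EL3 for Secure state when the highest EL is AArch32, otherwise EL1. A Python sketch (illustrative; debug_target and its boolean parameters are names invented here):

# Illustrative condensation of DebugTargetFrom, not ARM pseudocode.
def debug_target(secure, have_el2, have_el3, tde, tge, highest_el_aarch32):
    route_to_el2 = have_el2 and not secure and (tde or tge)
    if route_to_el2:
        return 2
    if have_el3 and highest_el_aarch32 and secure:
        return 3
    return 1

print(debug_target(secure=False, have_el2=True, have_el3=True,
                   tde=True, tge=False, highest_el_aarch32=False))   # 2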

Library pseudocode for shared/debug/DoubleLockStatus/DoubleLockStatus

// DoubleLockStatus() // ================== // Returns the state of the OS Double Lock. // FALSE if OSDLR_EL1.DLK == 0 or DBGPRCR_EL1.CORENPDRQ == 1 or the PE is in Debug state. // TRUE if OSDLR_EL1.DLK == 1 and DBGPRCR_EL1.CORENPDRQ == 0 and the PE is in Non-debug state. boolean DoubleLockStatus() if !HaveDoubleLock() then return FALSE; elsif ELUsingAArch32(EL1) then return DBGOSDLR.DLK == '1' && DBGPRCR.CORENPDRQ == '0' && !Halted(); else return OSDLR_EL1.DLK == '1' && DBGPRCR_EL1.CORENPDRQ == '0' && !Halted();

Library pseudocode for aarch64/translation/faults/AArch64.AddressSizeFault

// AArch64.AddressSizeFault() // ========================== FaultRecord AArch64.AddressSizeFault(bits(52) ipaddress, bit NS, integer level, AccType acctype, boolean iswrite, boolean secondstage, boolean s2fs1walk) extflag = bit UNKNOWN; errortype = bits(2) UNKNOWN; return AArch64.CreateFaultRecord(Fault_AddressSize, ipaddress, NS, level, acctype, iswrite, extflag, errortype, secondstage, s2fs1walk);

Library pseudocode for shared/debug/authentication/AllowExternalDebugAccess

// AllowExternalDebugAccess() // ========================== // Returns TRUE if the External Debugger access is allowed. boolean AllowExternalDebugAccess(boolean access_is_secure) // The access may also be subject to OS lock, power-down, etc. if HaveSecureExtDebugView() || ExternalDebugEnabled() then if HaveSecureExtDebugView() then allow_secure = access_is_secure; else allow_secure = ExternalSecureDebugEnabled(); if allow_secure then return TRUE; elsif HaveEL(EL3) then return (if ELUsingAArch32(EL3) then SDCR.EDAD else MDCR_EL3.EDAD) == '0'; else return !IsSecure(); else return FALSE;

Library pseudocode for aarch64/translation/faults/AArch64.AlignmentFault

// AArch64.AlignmentFault() // ======================== FaultRecord AArch64.AlignmentFault(AccType acctype, boolean iswrite, boolean secondstage) ipaddress = bits(52) UNKNOWN; level = integer UNKNOWN; extflag = bit UNKNOWN; errortype = bits(2) UNKNOWN; s2fs1walk = boolean UNKNOWN; return AArch64.CreateFaultRecord(Fault_Alignment, ipaddress, bit UNKNOWN, level, acctype, iswrite, extflag, errortype, secondstage, s2fs1walk);

Library pseudocode for shared/debug/authentication/AllowExternalPMUAccess

// AllowExternalPMUAccess() // ======================== // Returns TRUE if the External Debugger access is allowed. boolean AllowExternalPMUAccess(boolean access_is_secure) // The access may also be subject to OS lock, power-down, etc. if HaveSecureExtDebugView() || ExternalNoninvasiveDebugEnabled() then if HaveSecureExtDebugView() then allow_secure = access_is_secure; else allow_secure = ExternalSecureNoninvasiveDebugEnabled(); if allow_secure then return TRUE; elsif HaveEL(EL3) then return (if ELUsingAArch32(EL3) then SDCR.EPMAD else MDCR_EL3.EPMAD) == '0'; else return !IsSecure(); else return FALSE;

Library pseudocode for aarch64/translation/faults/AArch64.AsynchExternalAbort

// AArch64.AsynchExternalAbort() // ============================= // Wrapper function for asynchronous external aborts FaultRecord AArch64.AsynchExternalAbort(boolean parity, bits(2) errortype, bit extflag) type = if parity then Fault_AsyncParity else Fault_AsyncExternal; ipaddress = bits(52) UNKNOWN; level = integer UNKNOWN; acctype = AccType_NORMAL; iswrite = boolean UNKNOWN; secondstage = FALSE; s2fs1walk = FALSE; return AArch64.CreateFaultRecord(type, ipaddress, bit UNKNOWN, level, acctype, iswrite, extflag, errortype, secondstage, s2fs1walk);

Library pseudocode for shared/debug/authentication/Debug_authentication

signal DBGEN; signal NIDEN; signal SPIDEN; signal SPNIDEN;

Library pseudocode for aarch64/translation/faults/AArch64.DebugFault

// AArch64.DebugFault() // ==================== FaultRecord AArch64.DebugFault(AccType acctype, boolean iswrite) ipaddress = bits(52) UNKNOWN; errortype = bits(2) UNKNOWN; level = integer UNKNOWN; extflag = bit UNKNOWN; secondstage = FALSE; s2fs1walk = FALSE; return AArch64.CreateFaultRecord(Fault_Debug, ipaddress, bit UNKNOWN, level, acctype, iswrite, extflag, errortype, secondstage, s2fs1walk);

Library pseudocode for shared/debug/authentication/ExternalDebugEnabled

// ExternalDebugEnabled() // ====================== boolean ExternalDebugEnabled() // The definition of this function is IMPLEMENTATION DEFINED. // In the recommended interface, this function returns the state of the DBGEN signal. return DBGEN == HIGH;

Library pseudocode for aarch64/translation/faults/AArch64.NoFault

// AArch64.NoFault() // ================= FaultRecord AArch64.NoFault() ipaddress = bits(52) UNKNOWN; level = integer UNKNOWN; acctype = AccType_NORMAL; iswrite = boolean UNKNOWN; extflag = bit UNKNOWN; errortype = bits(2) UNKNOWN; secondstage = FALSE; s2fs1walk = FALSE; return AArch64.CreateFaultRecord(Fault_None, ipaddress, bit UNKNOWN, level, acctype, iswrite, extflag, errortype, secondstage, s2fs1walk);
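AArch64.NoFault above and the AArch64.*Fault constructors elsewhere in this section all funnel into AArch64.CreateFaultRecord and differ only in the fault type and in which fields are meaningful; UNKNOWN fields are simply not populated. A Python sketch of that shared pattern (illustrative; the FaultRecord dataclass and constructor names here are invented stand-ins, not the ASL types):

# Illustrative stand-in for the ASL FaultRecord pattern, not ARM pseudocode.
from dataclasses import dataclass
from typing import Optional

@dataclass
class FaultRecord:
    type: str
    ipaddress: Optional[int] = None   # UNKNOWN fields default to None
    level: Optional[int] = None
    acctype: str = "NORMAL"
    write: Optional[bool] = None
    secondstage: bool = False
    s2fs1walk: bool = False

def no_fault():
    return FaultRecord("None")

def permission_fault(ipaddress, level, acctype, write, secondstage, s2fs1walk):
    return FaultRecord("Permission", ipaddress, level, acctype, write, secondstage, s2fs1walk)

print(permission_fault(0x8000, 3, "NORMAL", True, False, False).type)   # Permission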

Library pseudocode for shared/debug/authentication/ExternalHypInvasiveDebugEnabled

// ExternalHypInvasiveDebugEnabled() // ================================= boolean ExternalHypInvasiveDebugEnabled() // In the recommended interface, ExternalHypInvasiveDebugEnabled returns the state of the // (DBGEN AND HIDEN) signal. return ExternalDebugEnabled() && HIDEN == HIGH;

Library pseudocode for aarch64/translation/faults/AArch64.PermissionFault

// AArch64.PermissionFault() // ========================= FaultRecord AArch64.PermissionFault(bits(52) ipaddress, bit NS, integer level, AccType acctype, boolean iswrite, boolean secondstage, boolean s2fs1walk) extflag = bit UNKNOWN; errortype = bits(2) UNKNOWN; return AArch64.CreateFaultRecord(Fault_Permission, ipaddress, NS, level, acctype, iswrite, extflag, errortype, secondstage, s2fs1walk);

Library pseudocode for shared/debug/authentication/ExternalHypNoninvasiveDebugEnabled

// ExternalHypNoninvasiveDebugEnabled() // ==================================== boolean ExternalHypNoninvasiveDebugEnabled() // The definition of this function is IMPLEMENTATION DEFINED. // In the recommended interface, ExternalHypNoninvasiveDebugEnabled returns the state of the // (DBGEN OR NIDEN) AND (HIDEN OR HNIDEN) signal. return ExternalNoninvasiveDebugEnabled() && (HIDEN == HIGH || HNIDEN == HIGH);

Library pseudocode for aarch64/translation/faults/AArch64.TranslationFault

// AArch64.TranslationFault() // ========================== FaultRecord AArch64.TranslationFault(bits(52) ipaddress, bit NS, integer level, AccType acctype, boolean iswrite, boolean secondstage, boolean s2fs1walk) extflag = bit UNKNOWN; errortype = bits(2) UNKNOWN; return AArch64.CreateFaultRecord(Fault_Translation, ipaddress, NS, level, acctype, iswrite, extflag, errortype, secondstage, s2fs1walk);

Library pseudocode for shared/debug/authentication/ExternalNoninvasiveDebugAllowed

// ExternalNoninvasiveDebugAllowed() // ================================= boolean ExternalNoninvasiveDebugAllowed() // Return TRUE if Trace and PC Sample-based Profiling are allowed return (ExternalNoninvasiveDebugEnabled() && (!IsSecure() || ExternalSecureNoninvasiveDebugEnabled() || (ELUsingAArch32(EL1) && PSTATE.EL == EL0 && SDER.SUNIDEN == '1')));

Library pseudocode for aarch64/translation/translation/AArch64.CheckAndUpdateDescriptor

// AArch64.CheckAndUpdateDescriptor() // ================================== // Check and update translation table descriptor if hardware update is configured FaultRecord AArch64.CheckAndUpdateDescriptor(DescriptorUpdate result, FaultRecord fault, boolean secondstage, bits(64) vaddress, AccType acctype, boolean iswrite, boolean s2fs1walk, boolean hwupdatewalk) // Check if access flag can be updated // Address translation instructions are permitted to update AF but not required if result.AF then if fault.type == Fault_None then hw_update_AF = TRUE; elsif ConstrainUnpredictable(Unpredictable_AFUPDATE) == Constraint_TRUE then hw_update_AF = TRUE; else hw_update_AF = FALSE; if result.AP && fault.type == Fault_None then write_perm_req = (iswrite || acctype IN {AccType_ATOMICRW, AccType_ORDEREDRW, AccType_ORDEREDATOMICRW}) && !s2fs1walk; hw_update_AP = (write_perm_req && !(acctype IN {AccType_AT, AccType_DC, AccType_DC_UNPRIV})) || hwupdatewalk; else hw_update_AP = FALSE; if hw_update_AF || hw_update_AP then if secondstage || !HasS2Translation() then descaddr2 = result.descaddr; else hwupdatewalk = TRUE; descaddr2 = AArch64.SecondStageWalk(result.descaddr, vaddress, acctype, iswrite, 8, hwupdatewalk); if IsFault(descaddr2) then return descaddr2.fault; accdesc = CreateAccessDescriptor(AccType_ATOMICRW); desc = _Mem[descaddr2, 8, accdesc]; el = AArch64.AccessUsesEL(acctype); case el of when EL3 reversedescriptors = SCTLR_EL3.EE == '1'; when EL2 reversedescriptors = SCTLR_EL2.EE == '1'; otherwise reversedescriptors = SCTLR_EL1.EE == '1'; if reversedescriptors then desc = BigEndianReverse(desc); if hw_update_AF then desc<10> = '1'; if hw_update_AP then desc<7> = (if secondstage then '1' else '0'); _Mem[descaddr2, 8, accdesc] = if reversedescriptors then BigEndianReverse(desc) else desc; return fault;
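The descriptor update performed by AArch64.CheckAndUpdateDescriptor above is plain bit manipulation on the 64-bit descriptor: set the Access Flag (bit 10), and for dirty-state tracking rewrite bit 7, which is AP[2] at stage 1 and S2AP[1] at stage 2. A Python sketch (illustrative; hw_update is a name invented here):

# Illustrative hardware AF/AP update of a 64-bit descriptor, not ARM pseudocode.
def hw_update(desc, update_af, update_ap, secondstage):
    if update_af:
        desc |= 1 << 10              # set the Access Flag
    if update_ap:
        if secondstage:
            desc |= 1 << 7           # S2AP[1] = '1': make writeable
        else:
            desc &= ~(1 << 7)        # AP[2] = '0': clear read-only
    return desc

print(hex(hw_update(0x80, update_af=True, update_ap=True, secondstage=False)))   # 0x400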

Library pseudocode for shared/debug/authentication/ExternalNoninvasiveDebugEnabled

// ExternalNoninvasiveDebugEnabled() // ================================= boolean ExternalNoninvasiveDebugEnabled() // This function returns TRUE if the v8.4 Debug relaxations are implemented, otherwise this // function is IMPLEMENTATION DEFINED. // In the recommended interface, ExternalNoninvasiveDebugEnabled returns the state of the (DBGEN // OR NIDEN) signal. return !HaveNoninvasiveDebugAuth() || ExternalDebugEnabled() || NIDEN == HIGH;

Library pseudocode for aarch64/translation/translation/AArch64.FirstStageTranslate

// AArch64.FirstStageTranslate() // ============================= // Perform a stage 1 translation walk. The function used by Address Translation operations is // similar except it uses the translation regime specified for the instruction. AddressDescriptor AArch64.FirstStageTranslate(bits(64) vaddress, AccType acctype, boolean iswrite, boolean wasaligned, integer size) if HaveNV2Ext() && acctype == AccType_NV2REGISTER then s1_enabled = SCTLR_EL2.M == '1'; elsif HasS2Translation() then s1_enabled = HCR_EL2.TGE == '0' && HCR_EL2.DC == '0' && SCTLR_EL1.M == '1'; else s1_enabled = SCTLR[].M == '1'; ipaddress = bits(52) UNKNOWN; secondstage = FALSE; s2fs1walk = FALSE; if s1_enabled then // First stage enabled S1 = AArch64.TranslationTableWalk(ipaddress, '1', vaddress, acctype, iswrite, secondstage, s2fs1walk, size); permissioncheck = TRUE; if acctype == AccType_IFETCH then InGuardedPage = S1.GP == '1'; // Global state updated on instruction fetch that denotes // if the fetched instruction is from a guarded page. else S1 = AArch64.TranslateAddressS1Off(vaddress, acctype, iswrite); permissioncheck = FALSE; if UsingAArch32() && HaveTrapLoadStoreMultipleDeviceExt() && AArch32.ExecutingLSMInstr() then if S1.addrdesc.memattrs.type == MemType_Device && S1.addrdesc.memattrs.device != DeviceType_GRE then nTLSMD = if S1TranslationRegime() == EL2 then SCTLR_EL2.nTLSMD else SCTLR_EL1.nTLSMD; if nTLSMD == '0' then S1.addrdesc.fault = AArch64.AlignmentFault(acctype, iswrite, secondstage); // Check for unaligned data accesses to Device memory if ((!wasaligned && acctype != AccType_IFETCH) || (acctype == AccType_DCZVA)) && S1.addrdesc.memattrs.type == MemType_Device && !IsFault(S1.addrdesc) then S1.addrdesc.fault = AArch64.AlignmentFault(acctype, iswrite, secondstage); if !IsFault(S1.addrdesc) && permissioncheck then S1.addrdesc.fault = AArch64.CheckPermission(S1.perms, vaddress, S1.level, S1.addrdesc.paddress.NS, acctype, iswrite); // Check for instruction fetches from Device memory not marked as execute-never. If there has // not been a Permission Fault then the memory is not marked execute-never. if (!IsFault(S1.addrdesc) && S1.addrdesc.memattrs.type == MemType_Device && acctype == AccType_IFETCH) then S1.addrdesc = AArch64.InstructionDevice(S1.addrdesc, vaddress, ipaddress, S1.level, acctype, iswrite, secondstage, s2fs1walk); // Check and update translation table descriptor if required hwupdatewalk = FALSE; s2fs1walk = FALSE; S1.addrdesc.fault = AArch64.CheckAndUpdateDescriptor(S1.descupdate, S1.addrdesc.fault, secondstage, vaddress, acctype, iswrite, s2fs1walk, hwupdatewalk); return S1.addrdesc;

Library pseudocode for shared/debug/authentication/ExternalSecureDebugEnabled

// ExternalSecureDebugEnabled() // ============================ boolean ExternalSecureDebugEnabled() if !HaveEL(EL3) && !IsSecure() then return FALSE; // The definition of this function is IMPLEMENTATION DEFINED. // In the recommended interface, this function returns the state of the (DBGEN AND SPIDEN) signal. // CoreSight allows asserting SPIDEN without also asserting DBGEN, but this is not recommended. return ExternalDebugEnabled() && SPIDEN == HIGH;

Library pseudocode for aarch64/translation/translation/AArch64.FullTranslate

// AArch64.FullTranslate() // ======================= // Perform both stage 1 and stage 2 translation walks for the current translation regime. The // function used by Address Translation operations is similar except it uses the translation // regime specified for the instruction. AddressDescriptor AArch64.FullTranslate(bits(64) vaddress, AccType acctype, boolean iswrite, boolean wasaligned, integer size) // First Stage Translation S1 = AArch64.FirstStageTranslate(vaddress, acctype, iswrite, wasaligned, size); if !IsFault(S1) && !(HaveNV2Ext() && acctype == AccType_NV2REGISTER) && HasS2Translation() then s2fs1walk = FALSE; hwupdatewalk = FALSE; result = AArch64.SecondStageTranslate(S1, vaddress, acctype, iswrite, wasaligned, s2fs1walk, size, hwupdatewalk); else result = S1; return result;
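AArch64.FullTranslate above is a straightforward composition: stage 1 maps VA to IPA, and unless translation stops there (a fault, no stage 2, or the FEAT_NV2 register case) stage 2 maps IPA to PA. A toy Python sketch of the composition (illustrative; all names and the dictionary shape are invented here):

# Illustrative two-stage composition, not ARM pseudocode.
def full_translate(vaddress, stage1, stage2, has_s2):
    s1 = stage1(vaddress)                 # VA -> IPA (or fault)
    if s1.get("fault") or not has_s2:
        return s1
    return stage2(s1["ipa"])              # IPA -> PA (or fault)

stage1 = lambda va: {"ipa": va + 0x1000}            # toy offset mapping
stage2 = lambda ipa: {"pa": ipa | 0x8000_0000}      # toy second stage
print(hex(full_translate(0x4000, stage1, stage2, True)["pa"]))   # 0x80005000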

Library pseudocode for aarch64/translation/translation/AArch64.SecondStageTranslate

// AArch64.SecondStageTranslate() // ============================== // Perform a stage 2 translation walk. The function used by Address Translation operations is // similar except it uses the translation regime specified for the instruction. AddressDescriptor AArch64.SecondStageTranslate(AddressDescriptor S1, bits(64) vaddress, AccType acctype, boolean iswrite, boolean wasaligned, boolean s2fs1walk, integer size, boolean hwupdatewalk) assert HasS2Translation(); s2_enabled = HCR_EL2.VM == '1' || HCR_EL2.DC == '1'; secondstage = TRUE; if s2_enabled then // Second stage enabled ipaddress = S1.paddress.address<51:0>; NS = S1.paddress.NS; S2 = AArch64.TranslationTableWalk(ipaddress, NS, vaddress, acctype, iswrite, secondstage, s2fs1walk, size); // Check for unaligned data accesses to Device memory if ((!wasaligned && acctype != AccType_IFETCH) || (acctype == AccType_DCZVA)) && S2.addrdesc.memattrs.type == MemType_Device && !IsFault(S2.addrdesc) then S2.addrdesc.fault = AArch64.AlignmentFault(acctype, iswrite, secondstage); // Check for permissions on Stage2 translations if !IsFault(S2.addrdesc) then S2.addrdesc.fault = AArch64.CheckS2Permission(S2.perms, vaddress, ipaddress, S2.level, acctype, iswrite, NS, s2fs1walk, hwupdatewalk); // Check for instruction fetches from Device memory not marked as execute-never. As there // has not been a Permission Fault then the memory is not marked execute-never. if (!s2fs1walk && !IsFault(S2.addrdesc) && S2.addrdesc.memattrs.type == MemType_Device && acctype == AccType_IFETCH) then S2.addrdesc = AArch64.InstructionDevice(S2.addrdesc, vaddress, ipaddress, S2.level, acctype, iswrite, secondstage, s2fs1walk); // Check for protected table walk if (s2fs1walk && !IsFault(S2.addrdesc) && HCR_EL2.PTW == '1' && S2.addrdesc.memattrs.type == MemType_Device) then S2.addrdesc.fault = AArch64.PermissionFault(ipaddress, S1.paddress.NS, S2.level, acctype, iswrite, secondstage, s2fs1walk); // Check and update translation table descriptor if required S2.addrdesc.fault = AArch64.CheckAndUpdateDescriptor(S2.descupdate, S2.addrdesc.fault, secondstage, vaddress, acctype, iswrite, s2fs1walk, hwupdatewalk); result = CombineS1S2Desc(S1, S2.addrdesc); else result = S1; return result;

Library pseudocode for aarch64/translation/translation/AArch64.SecondStageWalk

// AArch64.SecondStageWalk() // ========================= // Perform a stage 2 translation on a stage 1 translation page table walk access. AddressDescriptor AArch64.SecondStageWalk(AddressDescriptor S1, bits(64) vaddress, AccType acctype, boolean iswrite, integer size, boolean hwupdatewalk) assert HasS2Translation(); s2fs1walk = TRUE; wasaligned = TRUE; return AArch64.SecondStageTranslate(S1, vaddress, acctype, iswrite, wasaligned, s2fs1walk, size, hwupdatewalk);

Library pseudocode for aarch64/translation/translation/AArch64.TranslateAddress

// AArch64.TranslateAddress() // ========================== // Main entry point for translating an address AddressDescriptor AArch64.TranslateAddress(bits(64) vaddress, AccType acctype, boolean iswrite, boolean wasaligned, integer size) result = AArch64.FullTranslate(vaddress, acctype, iswrite, wasaligned, size); if !(acctype IN {AccType_PTW, AccType_IC, AccType_AT}) && !IsFault(result) then result.fault = AArch64.CheckDebug(vaddress, acctype, iswrite, size); // Update virtual address for abort functions result.vaddress = ZeroExtend(vaddress); return result;
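The four functions above form a pipeline: AArch64.TranslateAddress calls AArch64.FullTranslate, which performs the stage 1 walk and, only when stage 1 did not fault and a second stage is active, the stage 2 walk. A minimal Python sketch of that composition (not ASL; the dict-based descriptor and the identity-mapped walks are illustrative placeholders, not the architected TLBRecord):

def first_stage_translate(vaddress):
    # Placeholder stage 1 walk: identity-map the address, no fault.
    return {"paddress": vaddress, "fault": None}

def second_stage_translate(s1, vaddress):
    # Placeholder stage 2 walk over the stage 1 output (IPA -> PA).
    return {"paddress": s1["paddress"], "fault": None}

def full_translate(vaddress, has_s2_translation):
    # Mirrors AArch64.FullTranslate: stage 2 only runs if stage 1 did not fault
    # and a second stage is active for the current regime.
    s1 = first_stage_translate(vaddress)
    if s1["fault"] is None and has_s2_translation:
        return second_stage_translate(s1, vaddress)
    return s1

assert full_translate(0x8000, has_s2_translation=True)["fault"] is None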

Library pseudocode for aarch64/translation/walk/AArch64.TranslationTableWalk

// AArch64.TranslationTableWalk() // ============================== // Returns a result of a translation table walk // // Implementations might cache information from memory in any number of non-coherent TLB // caching structures, and so avoid memory accesses that have been expressed in this // pseudocode. The use of such TLBs is not expressed in this pseudocode. TLBRecord AArch64.TranslationTableWalk(bits(52) ipaddress, bit s1_nonsecure, bits(64) vaddress, AccType acctype, boolean iswrite, boolean secondstage, boolean s2fs1walk, integer size) if !secondstage then assert !ELUsingAArch32(S1TranslationRegime()); else assert IsSecureEL2Enabled() || (HaveEL(EL2) && !IsSecure() && !ELUsingAArch32(EL2)) && HasS2Translation(); TLBRecord result; AddressDescriptor descaddr; bits(64) baseregister; bits(64) inputaddr; // Input Address is 'vaddress' for stage 1, 'ipaddress' for stage 2 descaddr.memattrs.type = MemType_Normal; // Derived parameters for the page table walk: // grainsize = Log2(Size of Table) - Size of Table is 4KB, 16KB or 64KB in AArch64 // stride = Log2(Address per Level) - Bits of address consumed at each level // firstblocklevel = First level where a block entry is allowed // ps = Physical Address size as encoded in TCR_EL1.IPS or TCR_ELx/VTCR_EL2.PS // inputsize = Log2(Size of Input Address) - Input Address size in bits // level = Level to start walk from // This means that the number of levels after start level = 3-level if !secondstage then // First stage translation inputaddr = ZeroExtend(vaddress); el = AArch64.AccessUsesEL(acctype); top = AddrTop(inputaddr, (acctype == AccType_IFETCH), el); if el == EL3 then largegrain = TCR_EL3.TG0 == '01'; midgrain = TCR_EL3.TG0 == '10'; inputsize = 64 - UInt(TCR_EL3.T0SZ); inputsize_max = if Have52BitVAExt() && largegrain then 52 else 48; inputsize_min = 64 - (if !HaveSmallPageTblExt() then 39 else if largegrain then 47 else 48); if inputsize < inputsize_min then c = ConstrainUnpredictable(Unpredictable_RESTnSZ); assert c IN {Constraint_FORCE, Constraint_FAULT}; if c == Constraint_FORCE then inputsize = inputsize_min; ps = TCR_EL3.PS; basefound = inputsize >= inputsize_min && inputsize <= inputsize_max && IsZero(inputaddr<top:inputsize>); disabled = FALSE; baseregister = TTBR0_EL3; descaddr.memattrs = WalkAttrDecode(TCR_EL3.SH0, TCR_EL3.ORGN0, TCR_EL3.IRGN0, secondstage); reversedescriptors = SCTLR_EL3.EE == '1'; lookupsecure = TRUE; singlepriv = TRUE; update_AF = HaveAccessFlagUpdateExt() && TCR_EL3.HA == '1'; update_AP = HaveDirtyBitModifierExt() && update_AF && TCR_EL3.HD == '1'; hierattrsdisabled = AArch64.HaveHPDExt() && TCR_EL3.HPD == '1'; elsif ELIsInHost(el) then if inputaddr<top> == '0' then largegrain = TCR_EL2.TG0 == '01'; midgrain = TCR_EL2.TG0 == '10'; inputsize = 64 - UInt(TCR_EL2.T0SZ); inputsize_max = if Have52BitVAExt() && largegrain then 52 else 48; inputsize_min = 64 - (if !HaveSmallPageTblExt() then 39 else if largegrain then 47 else 48); if inputsize < inputsize_min then c = ConstrainUnpredictable(Unpredictable_RESTnSZ); assert c IN {Constraint_FORCE, Constraint_FAULT}; if c == Constraint_FORCE then inputsize = inputsize_min; basefound = inputsize >= inputsize_min && inputsize <= inputsize_max && IsZero(inputaddr<top:inputsize>); disabled = TCR_EL2.EPD0 == '1' ||
(PSTATE.EL == EL0 && HaveE0PDExt() && TCR_EL2.E0PD0 == '1'); baseregister = TTBR0_EL2; descaddr.memattrs = WalkAttrDecode(TCR_EL2.SH0, TCR_EL2.ORGN0, TCR_EL2.IRGN0, secondstage); hierattrsdisabled = AArch64.HaveHPDExt() && TCR_EL2.HPD0 == '1'; else inputsize = 64 - UInt(TCR_EL2.T1SZ); largegrain = TCR_EL2.TG1 == '11'; // TG1 and TG0 encodings differ midgrain = TCR_EL2.TG1 == '01'; inputsize_max = if Have52BitVAExt() && largegrain then 52 else 48; inputsize_min = 64 - (if !HaveSmallPageTblExt() then 39 else if largegrain then 47 else 48); if inputsize < inputsize_min then c = ConstrainUnpredictable(Unpredictable_RESTnSZ); assert c IN {Constraint_FORCE, Constraint_FAULT}; if c == Constraint_FORCE then inputsize = inputsize_min; basefound = inputsize >= inputsize_min && inputsize <= inputsize_max && IsOnes(inputaddr<top:inputsize>); disabled = TCR_EL2.EPD1 == '1' || (PSTATE.EL == EL0 && HaveE0PDExt() && TCR_EL2.E0PD1 == '1'); baseregister = TTBR1_EL2; descaddr.memattrs = WalkAttrDecode(TCR_EL2.SH1, TCR_EL2.ORGN1, TCR_EL2.IRGN1, secondstage); hierattrsdisabled = AArch64.HaveHPDExt() && TCR_EL2.HPD1 == '1'; ps = TCR_EL2.IPS; reversedescriptors = SCTLR_EL2.EE == '1'; lookupsecure = if IsSecureEL2Enabled() then IsSecure() else FALSE; singlepriv = FALSE; update_AF = HaveAccessFlagUpdateExt() && TCR_EL2.HA == '1'; update_AP = HaveDirtyBitModifierExt() && update_AF && TCR_EL2.HD == '1'; elsif el == EL2 then inputsize = 64 - UInt(TCR_EL2.T0SZ); largegrain = TCR_EL2.TG0 == '01'; midgrain = TCR_EL2.TG0 == '10'; inputsize_max = if Have52BitVAExt() && largegrain then 52 else 48; inputsize_min = 64 - (if !HaveSmallPageTblExt() then 39 else if largegrain then 47 else 48); if inputsize < inputsize_min then c = ConstrainUnpredictable(Unpredictable_RESTnSZ); assert c IN {Constraint_FORCE, Constraint_FAULT}; if c == Constraint_FORCE then inputsize = inputsize_min; ps = TCR_EL2.PS; basefound = inputsize >= inputsize_min && inputsize <= inputsize_max && IsZero(inputaddr<top:inputsize>); disabled = FALSE; baseregister = TTBR0_EL2; descaddr.memattrs = WalkAttrDecode(TCR_EL2.SH0, TCR_EL2.ORGN0, TCR_EL2.IRGN0, secondstage); reversedescriptors = SCTLR_EL2.EE == '1'; lookupsecure = if IsSecureEL2Enabled() then IsSecure() else FALSE; singlepriv = TRUE; update_AF = HaveAccessFlagUpdateExt() && TCR_EL2.HA == '1'; update_AP = HaveDirtyBitModifierExt() && update_AF && TCR_EL2.HD == '1'; hierattrsdisabled = AArch64.HaveHPDExt() && TCR_EL2.HPD == '1'; else if inputaddr<top> == '0' then inputsize = 64 - UInt(TCR_EL1.T0SZ); largegrain = TCR_EL1.TG0 == '01'; midgrain = TCR_EL1.TG0 == '10'; inputsize_max = if Have52BitVAExt() && largegrain then 52 else 48; inputsize_min = 64 - (if !HaveSmallPageTblExt() then 39 else if largegrain then 47 else 48); if inputsize < inputsize_min then c = ConstrainUnpredictable(Unpredictable_RESTnSZ); assert c IN {Constraint_FORCE, Constraint_FAULT}; if c == Constraint_FORCE then inputsize = inputsize_min; basefound = inputsize >= inputsize_min && inputsize <= inputsize_max && IsZero(inputaddr<top:inputsize>); disabled = TCR_EL1.EPD0 == '1' || (PSTATE.EL == EL0 && HaveE0PDExt() && TCR_EL1.E0PD0 == '1'); disabled = disabled || (el == EL0 && acctype == AccType_NONFAULT && TCR_EL1.NFD0 == '1'); baseregister = TTBR0_EL1; descaddr.memattrs = WalkAttrDecode(TCR_EL1.SH0, TCR_EL1.ORGN0, TCR_EL1.IRGN0, secondstage); hierattrsdisabled = AArch64.HaveHPDExt() && TCR_EL1.HPD0 == '1'; else inputsize = 64 - UInt(TCR_EL1.T1SZ); largegrain = TCR_EL1.TG1 == '11'; // TG1 and TG0 encodings differ midgrain = TCR_EL1.TG1 
== '01'; inputsize_max = if Have52BitVAExt() && largegrain then 52 else 48; inputsize_min = 64 - (if !HaveSmallPageTblExt() then 39 else if largegrain then 47 else 48); if inputsize < inputsize_min then c = ConstrainUnpredictable(Unpredictable_RESTnSZ); assert c IN {Constraint_FORCE, Constraint_FAULT}; if c == Constraint_FORCE then inputsize = inputsize_min; basefound = inputsize >= inputsize_min && inputsize <= inputsize_max && IsOnes(inputaddr<top:inputsize>); disabled = TCR_EL1.EPD1 == '1' || (PSTATE.EL == EL0 && HaveE0PDExt() && TCR_EL1.E0PD1 == '1'); disabled = disabled || (el == EL0 && acctype == AccType_NONFAULT && TCR_EL1.NFD1 == '1'); baseregister = TTBR1_EL1; descaddr.memattrs = WalkAttrDecode(TCR_EL1.SH1, TCR_EL1.ORGN1, TCR_EL1.IRGN1, secondstage); hierattrsdisabled = AArch64.HaveHPDExt() && TCR_EL1.HPD1 == '1'; ps = TCR_EL1.IPS; reversedescriptors = SCTLR_EL1.EE == '1'; lookupsecure = IsSecure(); singlepriv = FALSE; update_AF = HaveAccessFlagUpdateExt() && TCR_EL1.HA == '1'; update_AP = HaveDirtyBitModifierExt() && update_AF && TCR_EL1.HD == '1'; if largegrain then grainsize = 16; // Log2(64KB page size) firstblocklevel = (if Have52BitPAExt() then 1 else 2); // Largest block is 4TB (2^42 bytes) for 52 bit PA // and 512MB (2^29 bytes) otherwise elsif midgrain then grainsize = 14; // Log2(16KB page size) firstblocklevel = 2; // Largest block is 32MB (2^25 bytes) else // Small grain grainsize = 12; // Log2(4KB page size) firstblocklevel = 1; // Largest block is 1GB (2^30 bytes) stride = grainsize - 3; // Log2(page size / 8 bytes) // The starting level is the number of strides needed to consume the input address level = 4 - RoundUp(Real(inputsize - grainsize) / Real(stride)); else // Second stage translation inputaddr = ZeroExtend(ipaddress); // Stage 2 translation table walk for the Secure EL2 translation regime if IsSecureEL2Enabled() && IsSecure() then // Stage 2 translation walk is in the Non-secure IPA space or the Secure IPA space t0size = if s1_nonsecure == '1' then VTCR_EL2.T0SZ else VSTCR_EL2.T0SZ; tg0 = if s1_nonsecure == '1' then VTCR_EL2.TG0 else VSTCR_EL2.TG0; // Stage 2 translation table walk is to the Non-secure PA space or to the Secure PA space nswalk = if s1_nonsecure == '1' then VTCR_EL2.NSW else VSTCR_EL2.SW; // Stage 2 translation accesses the Non-secure PA space or the Secure PA space if nswalk == '1' then nsaccess = '1'; // When walk is non-secure, access must be to the Non-secure PA space else if s1_nonsecure == '0' then nsaccess = VSTCR_EL2.SA; // When walk is secure and in the Secure IPA space, access is specified by VSTCR_EL2.SA else if VSTCR_EL2.SW == '1' || VSTCR_EL2.SA == '1' then nsaccess = '1'; // When walk is secure and in the Non-secure IPA space, access is non-secure when VSTCR_EL2.SA specifies the Non-secure PA space else nsaccess = VTCR_EL2.NSA; // When walk is secure and in the Non-secure IPA space, if VSTCR_EL2.SA specifies the Secure PA space, access is specified by VSTCR_EL2.NSA else t0size = VTCR_EL2.T0SZ; tg0 = VTCR_EL2.TG0; nsaccess = '1'; inputsize = 64 - UInt(t0size); largegrain = tg0 == '01'; midgrain = tg0 == '10'; inputsize_max = if Have52BitVAExt() && largegrain then 52 else 48; inputsize_min = 64 - (if !HaveSmallPageTblExt() then 39 else if largegrain then 47 else 48); if inputsize < inputsize_min then c = ConstrainUnpredictable(Unpredictable_RESTnSZ); assert c IN {Constraint_FORCE, Constraint_FAULT}; if c == Constraint_FORCE then inputsize = inputsize_min; ps = VTCR_EL2.PS; basefound = inputsize >= inputsize_min && inputsize <= 
inputsize_max && IsZero(inputaddr<63:inputsize>); disabled = FALSE; descaddr.memattrs = WalkAttrDecode(VTCR_EL2.IRGN0, VTCR_EL2.ORGN0, VTCR_EL2.SH0, secondstage); reversedescriptors = SCTLR_EL2.EE == '1'; singlepriv = TRUE; update_AF = HaveAccessFlagUpdateExt() && VTCR_EL2.HA == '1'; update_AP = HaveDirtyBitModifierExt() && update_AF && VTCR_EL2.HD == '1'; lookupsecure = if IsSecureEL2Enabled() then s1_nonsecure == '0' else FALSE; // Stage2 translation table walk is to secure PA space or to Non-secure PA space baseregister = if lookupsecure then VSTTBR_EL2 else VTTBR_EL2; startlevel = if lookupsecure then UInt(VSTCR_EL2.SL0) else UInt(VTCR_EL2.SL0); if largegrain then grainsize = 16; // Log2(64KB page size) level = 3 - startlevel; firstblocklevel = (if Have52BitPAExt() then 1 else 2); // Largest block is 4TB (2^42 bytes) for 52 bit PA // and 512MB (2^29 bytes) otherwise elsif midgrain then grainsize = 14; // Log2(16KB page size) level = 3 - startlevel; firstblocklevel = 2; // Largest block is 32MB (2^25 bytes) else // Small grain grainsize = 12; // Log2(4KB page size) if HaveSmallPageTblExt() && startlevel == 3 then level = startlevel; // Startlevel 3 (VTCR_EL2.SL0 or VSCTR_EL2.SL0 == 0b11) for 4KB granule else level = 2 - startlevel; firstblocklevel = 1; // Largest block is 1GB (2^30 bytes) stride = grainsize - 3; // Log2(page size / 8 bytes) // Limits on IPA controls based on implemented PA size. Level 0 is only // supported by small grain translations if largegrain then // 64KB pages // Level 1 only supported if implemented PA size is greater than 2^42 bytes if level == 0 || (level == 1 && PAMax() <= 42) then basefound = FALSE; elsif midgrain then // 16KB pages // Level 1 only supported if implemented PA size is greater than 2^40 bytes if level == 0 || (level == 1 && PAMax() <= 40) then basefound = FALSE; else // Small grain, 4KB pages // Level 0 only supported if implemented PA size is greater than 2^42 bytes if level < 0 || (level == 0 && PAMax() <= 42) then basefound = FALSE; // If the inputsize exceeds the PAMax value, the behavior is CONSTRAINED UNPREDICTABLE inputsizecheck = inputsize; if inputsize > PAMax() && (!ELUsingAArch32(EL1) || inputsize > 40) then case ConstrainUnpredictable(Unpredictable_LARGEIPA) of when Constraint_FORCE // Restrict the inputsize to the PAMax value inputsize = PAMax(); inputsizecheck = PAMax(); when Constraint_FORCENOSLCHECK // As FORCE, except use the configured inputsize in the size checks below inputsize = PAMax(); when Constraint_FAULT // Generate a translation fault basefound = FALSE; otherwise Unreachable(); // Number of entries in the starting level table = // (Size of Input Address)/((Address per level)^(Num levels remaining)*(Size of Table)) startsizecheck = inputsizecheck - ((3 - level)*stride + grainsize); // Log2(Num of entries) // Check for starting level table with fewer than 2 entries or longer than 16 pages. 
// Lower bound check is: startsizecheck < Log2(2 entries) // Upper bound check is: startsizecheck > Log2(pagesize/8*16) if startsizecheck < 1 || startsizecheck > stride + 4 then basefound = FALSE; if !basefound || disabled then level = 0; // AArch32 reports this as a level 1 fault result.addrdesc.fault = AArch64.TranslationFault(ipaddress, s1_nonsecure, level, acctype, iswrite, secondstage, s2fs1walk); return result; case ps of when '000' outputsize = 32; when '001' outputsize = 36; when '010' outputsize = 40; when '011' outputsize = 42; when '100' outputsize = 44; when '101' outputsize = 48; when '110' outputsize = (if Have52BitPAExt() && largegrain then 52 else 48); otherwise outputsize = integer IMPLEMENTATION_DEFINED "Reserved Intermediate Physical Address size value"; if outputsize > PAMax() then outputsize = PAMax(); if outputsize < 48 && !IsZero(baseregister<47:outputsize>) then level = 0; result.addrdesc.fault = AArch64.AddressSizeFault(ipaddress,s1_nonsecure, level, acctype, iswrite, secondstage, s2fs1walk); return result; // Bottom bound of the Base address is: // Log2(8 bytes per entry)+Log2(Number of entries in starting level table) // Number of entries in starting level table = // (Size of Input Address)/((Address per level)^(Num levels remaining)*(Size of Table)) baselowerbound = 3 + inputsize - ((3-level)*stride + grainsize); // Log2(Num of entries*8) if outputsize == 52 then z = (if baselowerbound < 6 then 6 else baselowerbound); baseaddress = baseregister<5:2>:baseregister<47:z>:Zeros(z); else baseaddress = ZeroExtend(baseregister<47:baselowerbound>:Zeros(baselowerbound)); ns_table = if lookupsecure then '0' else '1'; ap_table = '00'; xn_table = '0'; pxn_table = '0'; addrselecttop = inputsize - 1; apply_nvnv1_effect = HaveNVExt() && EL2Enabled() && HCR_EL2.<NV,NV1> == '11' && S1TranslationRegime() == EL1 && !secondstage; repeat addrselectbottom = (3-level)*stride + grainsize; bits(52) index = ZeroExtend(inputaddr<addrselecttop:addrselectbottom>:'000'); descaddr.paddress.address = baseaddress OR index; descaddr.paddress.NS = ns_table; // If there are two stages of translation, then the first stage table walk addresses // are themselves subject to translation if secondstage || !HasS2Translation() || (HaveNV2Ext() && acctype == AccType_NV2REGISTER) then descaddr2 = descaddr; else hwupdatewalk = FALSE; descaddr2 = AArch64.SecondStageWalk(descaddr, vaddress, acctype, iswrite, 8, hwupdatewalk); // Check for a fault on the stage 2 walk if IsFault(descaddr2) then result.addrdesc.fault = descaddr2.fault; return result; // Update virtual address for abort functions descaddr2.vaddress = ZeroExtend(vaddress); accdesc = CreateAccessDescriptorPTW(acctype, secondstage, s2fs1walk, level); desc = _Mem[descaddr2, 8, accdesc]; if reversedescriptors then desc = BigEndianReverse(desc); if desc<0> == '0' || (desc<1:0> == '01' && (level == 3 || (HaveBlockBBM() && IsBlockDescriptorNTBitValid() && desc<16> == '1'))) then // Fault (00), Reserved (10), Block (01) at level 3, or Block(01) with nT bit set. 
result.addrdesc.fault = AArch64.TranslationFault(ipaddress, s1_nonsecure, level, acctype, iswrite, secondstage, s2fs1walk); return result; // Valid Block, Page, or Table entry if desc<1:0> == '01' || level == 3 then // Block (01) or Page (11) blocktranslate = TRUE; else // Table (11) if (outputsize < 52 && largegrain && !IsZero(desc<15:12>)) || (outputsize < 48 && !IsZero(desc<47:outputsize>)) then result.addrdesc.fault = AArch64.AddressSizeFault(ipaddress,s1_nonsecure, level, acctype, iswrite, secondstage, s2fs1walk); return result; if outputsize == 52 then baseaddress = desc<15:12>:desc<47:grainsize>:Zeros(grainsize); else baseaddress = ZeroExtend(desc<47:grainsize>:Zeros(grainsize)); if !secondstage then // Unpack the upper and lower table attributes ns_table = ns_table OR desc<63>; if !secondstage && !hierattrsdisabled then ap_table<1> = ap_table<1> OR desc<62>; // read-only if apply_nvnv1_effect then pxn_table = pxn_table OR desc<60>; else xn_table = xn_table OR desc<60>; // pxn_table and ap_table[0] apply in EL1&0 or EL2&0 translation regimes if !singlepriv then if !apply_nvnv1_effect then pxn_table = pxn_table OR desc<59>; ap_table<0> = ap_table<0> OR desc<61>; // privileged level = level + 1; addrselecttop = addrselectbottom - 1; blocktranslate = FALSE; until blocktranslate; // Check block size is supported at this level if level < firstblocklevel then result.addrdesc.fault = AArch64.TranslationFault(ipaddress, s1_nonsecure, level, acctype, iswrite, secondstage, s2fs1walk); return result; // Check for misprogramming of the contiguous bit if largegrain then contiguousbitcheck = level == 2 && inputsize < 34; elsif midgrain then contiguousbitcheck = level == 2 && inputsize < 30; else contiguousbitcheck = level == 1 && inputsize < 34; if contiguousbitcheck && desc<52> == '1' then if boolean IMPLEMENTATION_DEFINED "Translation fault on misprogrammed contiguous bit" then result.addrdesc.fault = AArch64.TranslationFault(ipaddress, s1_nonsecure, level, acctype, iswrite, secondstage, s2fs1walk); return result; // Check the output address is inside the supported range if (outputsize < 52 && largegrain && !IsZero(desc<15:12>)) || (outputsize < 48 && !IsZero(desc<47:outputsize>)) then result.addrdesc.fault = AArch64.AddressSizeFault(ipaddress,s1_nonsecure, level, acctype, iswrite, secondstage, s2fs1walk); return result; // Unpack the descriptor into address and upper and lower block attributes if outputsize == 52 then outputaddress = desc<15:12>:desc<47:addrselectbottom>:inputaddr<addrselectbottom-1:0>; else outputaddress = ZeroExtend(desc<47:addrselectbottom>:inputaddr<addrselectbottom-1:0>); // Check Access Flag if desc<10> == '0' then if !update_AF then result.addrdesc.fault = AArch64.AccessFlagFault(ipaddress,s1_nonsecure, level, acctype, iswrite, secondstage, s2fs1walk); return result; else result.descupdate.AF = TRUE; if update_AP && desc<51> == '1' then // If hw update of access permission field is configured consider AP[2] as '0' / S2AP[2] as '1' if !secondstage && desc<7> == '1' then desc<7> = '0'; result.descupdate.AP = TRUE; elsif secondstage && desc<7> == '0' then desc<7> = '1'; result.descupdate.AP = TRUE; // Required descriptor if AF or AP[2]/S2AP[2] needs update result.descupdate.descaddr = descaddr; if apply_nvnv1_effect then pxn = desc<54>; // Bit[54] of the block/page descriptor holds PXN instead of UXN xn = '0'; // XN is '0' ap = desc<7>:'01'; // Bit[6] of the block/page descriptor is treated as '0' regardless of value programmed else xn = desc<54>; // Bit[54] of the 
block/page descriptor holds UXN pxn = desc<53>; // Bit[53] of the block/page descriptor holds PXN ap = desc<7:6>:'1'; // Bits[7:6] of the block/page descriptor hold AP[2:1] contiguousbit = desc<52>; nG = desc<11>; sh = desc<9:8>; memattr = desc<5:2>; // AttrIndx and NS bit in stage 1 result.domain = bits(4) UNKNOWN; // Domains not used result.level = level; result.blocksize = 2^((3-level)*stride + grainsize); // Stage 1 translation regimes also inherit attributes from the tables if !secondstage then result.perms.xn = xn OR xn_table; result.perms.ap<2> = ap<2> OR ap_table<1>; // Force read-only // PXN, nG and AP[1] apply in EL1&0 or EL2&0 stage 1 translation regimes if !singlepriv then result.perms.ap<1> = ap<1> AND NOT(ap_table<0>); // Force privileged only result.perms.pxn = pxn OR pxn_table; // Pages from Non-secure tables are marked non-global in Secure EL1&0 if IsSecure() then result.nG = nG OR ns_table; else result.nG = nG; else result.perms.ap<1> = '1'; result.perms.pxn = '0'; result.nG = '0'; result.GP = desc<50>; // Stage 1 block or pages might be guarded result.perms.ap<0> = '1'; result.addrdesc.memattrs = AArch64.S1AttrDecode(sh, memattr<2:0>, acctype); result.addrdesc.paddress.NS = memattr<3> OR ns_table; else result.perms.ap<2:1> = ap<2:1>; result.perms.ap<0> = '1'; result.perms.xn = xn; if HaveExtendedExecuteNeverExt() then result.perms.xxn = desc<53>; result.perms.pxn = '0'; result.nG = '0'; if s2fs1walk then result.addrdesc.memattrs = S2AttrDecode(sh, memattr, AccType_PTW); else result.addrdesc.memattrs = S2AttrDecode(sh, memattr, acctype); result.addrdesc.paddress.NS = nsaccess; result.addrdesc.paddress.address = outputaddress; result.addrdesc.fault = AArch64.NoFault(); result.contiguous = contiguousbit == '1'; if HaveCommonNotPrivateTransExt() then result.CnP = baseregister<0>; return result;
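The starting level computed in the first-stage prologue follows directly from the stated definitions: stride = grainsize - 3 and level = 4 - RoundUp((inputsize - grainsize) / stride). A small worked example in plain Python (the function name is ours; the grainsize values follow the comments above):

import math

def start_level(inputsize, grainsize):
    stride = grainsize - 3          # Log2(page size / 8 bytes)
    return 4 - math.ceil((inputsize - grainsize) / stride)

# 4KB granule (grainsize = 12, stride = 9):
assert start_level(48, 12) == 0     # 48-bit input address walks from level 0
assert start_level(39, 12) == 1     # 39-bit input address walks from level 1
# 64KB granule (grainsize = 16, stride = 13):
assert start_level(48, 16) == 1     # 48-bit input address walks from level 1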

Library pseudocode for shared/debug/ClearStickyErrors/ClearStickyErrors

// ClearStickyErrors() // =================== ClearStickyErrors() EDSCR.TXU = '0'; // Clear TX underrun flag EDSCR.RXO = '0'; // Clear RX overrun flag if Halted() then // in Debug state EDSCR.ITO = '0'; // Clear ITR overrun flag // If halted and the ITR is not empty then it is UNPREDICTABLE whether the EDSCR.ERR is cleared. // The UNPREDICTABLE behavior also affects the instructions in flight, but this is not described // in the pseudocode. if Halted() && EDSCR.ITE == '0' && ConstrainUnpredictableBool(Unpredictable_CLEARERRITEZERO) then return; EDSCR.ERR = '0'; // Clear cumulative error flag return;

Library pseudocode for shared/debug/DebugTarget/DebugTarget

// DebugTarget() // ============= // Returns the debug exception target Exception level bits(2) DebugTarget() secure = IsSecure(); return DebugTargetFrom(secure);

Library pseudocode for shared/debug/DebugTarget/DebugTargetFrom

// DebugTargetFrom() // ================= bits(2) DebugTargetFrom(boolean secure) if HaveEL(EL2) && !secure then if ELUsingAArch32(EL2) then route_to_el2 = (HDCR.TDE == '1' || HCR.TGE == '1'); else route_to_el2 = (MDCR_EL2.TDE == '1' || HCR_EL2.TGE == '1'); else route_to_el2 = FALSE; if route_to_el2 then target = EL2; elsif HaveEL(EL3) && HighestELUsingAArch32() && secure then target = EL3; else target = EL1; return target;
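A compact Python restatement of the DebugTargetFrom() routing decision, with the register fields flattened into hypothetical boolean parameters (tde stands for HDCR.TDE/MDCR_EL2.TDE, tge for HCR.TGE/HCR_EL2.TGE; the names are ours, not architected state):

def debug_target_from(secure, have_el2, have_el3, highest_el_aarch32, tde, tge):
    # Debug exceptions route to EL2 when EL2 is implemented, the state is
    # Non-secure, and either the TDE or the TGE control is set.
    route_to_el2 = have_el2 and not secure and (tde or tge)
    if route_to_el2:
        return "EL2"
    # With an AArch32 highest EL, Secure debug exceptions target EL3 (Monitor mode).
    if have_el3 and highest_el_aarch32 and secure:
        return "EL3"
    return "EL1"

assert debug_target_from(False, True, True, False, tde=True, tge=False) == "EL2"
assert debug_target_from(True, True, True, True, tde=False, tge=False) == "EL3"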

Library pseudocode for shared/debug/DoubleLockStatus/DoubleLockStatus

// DoubleLockStatus() // ================== // Returns the state of the OS Double Lock. // FALSE if OSDLR_EL1.DLK == 0 or DBGPRCR_EL1.CORENPDRQ == 1 or the PE is in Debug state. // TRUE if OSDLR_EL1.DLK == 1 and DBGPRCR_EL1.CORENPDRQ == 0 and the PE is in Non-debug state. boolean DoubleLockStatus() if !HaveDoubleLock() then return FALSE; elsif ELUsingAArch32(EL1) then return DBGOSDLR.DLK == '1' && DBGPRCR.CORENPDRQ == '0' && !Halted(); else return OSDLR_EL1.DLK == '1' && DBGPRCR_EL1.CORENPDRQ == '0' && !Halted();

Library pseudocode for shared/debug/authentication/AllowExternalDebugAccess

// AllowExternalDebugAccess() // ========================== // Returns TRUE if the External Debugger access is allowed. boolean AllowExternalDebugAccess(boolean access_is_secure) // The access may also be subject to OS lock, power-down, etc. if HaveSecureExtDebugView() || ExternalDebugEnabled() then if HaveSecureExtDebugView() then allow_secure = access_is_secure; else allow_secure = ExternalSecureDebugEnabled(); if allow_secure then return TRUE; elsif HaveEL(EL3) then return (if ELUsingAArch32(EL3) then SDCR.EDAD else MDCR_EL3.EDAD) == '0'; else return !IsSecure(); else return FALSE;
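A hypothetical Python rendering of the AllowExternalDebugAccess() decision tree, with the feature queries and authentication results passed in as booleans (the parameter names are ours, not architected state):

def allow_external_debug_access(access_is_secure, have_secure_ext_debug_view,
                                ext_debug_enabled, ext_secure_debug_enabled,
                                have_el3, edad_clear, is_secure):
    if not (have_secure_ext_debug_view or ext_debug_enabled):
        return False
    # With the Secure external debug view, the access itself carries the
    # Security attribute; otherwise SPIDEN-style authentication decides.
    allow_secure = (access_is_secure if have_secure_ext_debug_view
                    else ext_secure_debug_enabled)
    if allow_secure:
        return True
    if have_el3:
        return edad_clear        # SDCR.EDAD / MDCR_EL3.EDAD == '0'
    return not is_secure

assert allow_external_debug_access(False, False, True, False,
                                   have_el3=True, edad_clear=True, is_secure=False)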

Library pseudocode for shared/debug/authentication/AllowExternalPMUAccess

// AllowExternalPMUAccess() // ======================== // Returns TRUE if the External Debugger access is allowed. boolean AllowExternalPMUAccess(boolean access_is_secure) // The access may also be subject to OS lock, power-down, etc. if HaveSecureExtDebugView() || ExternalNoninvasiveDebugEnabled() then if HaveSecureExtDebugView() then allow_secure = access_is_secure; else allow_secure = ExternalSecureNoninvasiveDebugEnabled(); if allow_secure then return TRUE; elsif HaveEL(EL3) then return (if ELUsingAArch32(EL3) then SDCR.EPMAD else MDCR_EL3.EPMAD) == '0'; else return !IsSecure(); else return FALSE;

Library pseudocode for shared/debug/authentication/Debug_authentication

signal DBGEN; signal NIDEN; signal SPIDEN; signal SPNIDEN;

Library pseudocode for shared/debug/authentication/ExternalDebugEnabled

// ExternalDebugEnabled() // ====================== boolean ExternalDebugEnabled() // The definition of this function is IMPLEMENTATION DEFINED. // In the recommended interface, this function returns the state of the DBGEN signal. return DBGEN == HIGH;

Library pseudocode for shared/debug/authentication/ExternalHypInvasiveDebugEnabled

// ExternalHypInvasiveDebugEnabled() // ================================= boolean ExternalHypInvasiveDebugEnabled() // In the recommended interface, ExternalHypInvasiveDebugEnabled returns the state of the // (DBGEN AND HIDEN) signal. return ExternalDebugEnabled() && HIDEN == HIGH;

Library pseudocode for shared/debug/authentication/ExternalHypNoninvasiveDebugEnabled

// ExternalHypNoninvasiveDebugEnabled() // ==================================== boolean ExternalHypNoninvasiveDebugEnabled() // The definition of this function is IMPLEMENTATION DEFINED. // In the recommended interface, ExternalHypNoninvasiveDebugEnabled returns the state of the // (DBGEN OR NIDEN) AND (HIDEN OR HNIDEN) signal. return ExternalNoninvasiveDebugEnabled() && (HIDEN == HIGH || HNIDEN == HIGH);

Library pseudocode for shared/debug/authentication/ExternalNoninvasiveDebugAllowed

// ExternalNoninvasiveDebugAllowed() // ================================= boolean ExternalNoninvasiveDebugAllowed() // Return TRUE if Trace and PC Sample-based Profiling are allowed return (ExternalNoninvasiveDebugEnabled() && (!IsSecure() || ExternalSecureNoninvasiveDebugEnabled() || (ELUsingAArch32(EL1) && PSTATE.EL == EL0 && SDER.SUNIDEN == '1')));

Library pseudocode for shared/debug/authentication/ExternalNoninvasiveDebugEnabled

// ExternalNoninvasiveDebugEnabled() // ================================= boolean ExternalNoninvasiveDebugEnabled() // This function returns TRUE if the v8.4 Debug relaxations are implemented, otherwise this // function is IMPLEMENTATION DEFINED. // In the recommended interface, ExternalNoninvasiveDebugEnabled returns the state of the (DBGEN // OR NIDEN) signal. return !HaveNoninvasiveDebugAuth() || ExternalDebugEnabled() || NIDEN == HIGH;

Library pseudocode for shared/debug/authentication/ExternalSecureDebugEnabled

// ExternalSecureDebugEnabled() // ============================ boolean ExternalSecureDebugEnabled() if !HaveEL(EL3) && !IsSecure() then return FALSE; // The definition of this function is IMPLEMENTATION DEFINED. // In the recommended interface, this function returns the state of the (DBGEN AND SPIDEN) signal. // CoreSight allows asserting SPIDEN without also asserting DBGEN, but this is not recommended. return ExternalDebugEnabled() && SPIDEN == HIGH;

Library pseudocode for shared/debug/authentication/ExternalSecureNoninvasiveDebugEnabled

// ExternalSecureNoninvasiveDebugEnabled() // ======================================= boolean ExternalSecureNoninvasiveDebugEnabled() if !HaveEL(EL3) && !IsSecure() then return FALSE; // If the v8.4 Debug relaxations are implemented, this function returns the value of // ExternalSecureDebugEnabled(). Otherwise the definition of this function is // IMPLEMENTATION DEFINED. // In the recommended interface, this function returns the state of the (DBGEN OR NIDEN) AND // (SPIDEN OR SPNIDEN) signal. if HaveNoninvasiveDebugAuth() then return ExternalNoninvasiveDebugEnabled() && (SPIDEN == HIGH || SPNIDEN == HIGH); else return ExternalSecureDebugEnabled();
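A compact truth model, in plain Python, of how the v8.4 relaxation described above collapses the Secure noninvasive enable onto ExternalSecureDebugEnabled() when HaveNoninvasiveDebugAuth() is FALSE (function and parameter names are ours):

def ext_noninvasive(dbgen, niden, have_noninvasive_auth):
    # Mirrors ExternalNoninvasiveDebugEnabled() above.
    return (not have_noninvasive_auth) or dbgen or niden

def ext_secure_noninvasive(dbgen, niden, spiden, spniden,
                           have_noninvasive_auth, ext_secure_debug):
    if have_noninvasive_auth:
        # Legacy interface: (DBGEN OR NIDEN) AND (SPIDEN OR SPNIDEN)
        return ext_noninvasive(dbgen, niden, True) and (spiden or spniden)
    return ext_secure_debug      # collapses onto ExternalSecureDebugEnabled()

assert ext_secure_noninvasive(False, True, False, True, True, False)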

Library pseudocode for shared/debug/cti/CTI_SetEventLevel

// Set a Cross Trigger multi-cycle input event trigger to the specified level. CTI_SetEventLevel(CrossTriggerIn id, signal level);

Library pseudocode for shared/debug/cti/CTI_SignalEvent

// Signal a discrete event on a Cross Trigger input event trigger. CTI_SignalEvent(CrossTriggerIn id);

Library pseudocode for shared/debug/cti/CrossTrigger

enumeration CrossTriggerOut {CrossTriggerOut_DebugRequest, CrossTriggerOut_RestartRequest, CrossTriggerOut_IRQ, CrossTriggerOut_RSVD3, CrossTriggerOut_TraceExtIn0, CrossTriggerOut_TraceExtIn1, CrossTriggerOut_TraceExtIn2, CrossTriggerOut_TraceExtIn3}; enumeration CrossTriggerIn {CrossTriggerIn_CrossHalt, CrossTriggerIn_PMUOverflow, CrossTriggerIn_RSVD2, CrossTriggerIn_RSVD3, CrossTriggerIn_TraceExtOut0, CrossTriggerIn_TraceExtOut1, CrossTriggerIn_TraceExtOut2, CrossTriggerIn_TraceExtOut3};

Library pseudocode for shared/debug/dccanditr/CheckForDCCInterrupts

// CheckForDCCInterrupts() // ======================= CheckForDCCInterrupts() commrx = (EDSCR.RXfull == '1'); commtx = (EDSCR.TXfull == '0'); // COMMRX and COMMTX support is optional and not recommended for new designs. // SetInterruptRequestLevel(InterruptID_COMMRX, if commrx then HIGH else LOW); // SetInterruptRequestLevel(InterruptID_COMMTX, if commtx then HIGH else LOW); // The value to be driven onto the common COMMIRQ signal. if ELUsingAArch32(EL1) then commirq = ((commrx && DBGDCCINT.RX == '1') || (commtx && DBGDCCINT.TX == '1')); else commirq = ((commrx && MDCCINT_EL1.RX == '1') || (commtx && MDCCINT_EL1.TX == '1')); SetInterruptRequestLevel(InterruptID_COMMIRQ, if commirq then HIGH else LOW); return;
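In plain terms, COMMIRQ is the OR of "RX buffer full and RX interrupt enabled" and "TX buffer empty and TX interrupt enabled". A one-function Python restatement (names are ours, not architected state):

def commirq(rxfull, txfull, rx_int_en, tx_int_en):
    commrx = rxfull              # EDSCR.RXfull == '1'
    commtx = not txfull          # EDSCR.TXfull == '0'
    return (commrx and rx_int_en) or (commtx and tx_int_en)

assert commirq(rxfull=True, txfull=True, rx_int_en=True, tx_int_en=False)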

Library pseudocode for shared/debug/dccanditr/DBGDTRRX_EL0

// DBGDTRRX_EL0[] (external write) // =============================== // Called on writes to debug register 0x08C. DBGDTRRX_EL0[boolean memory_mapped] = bits(32) value if EDPRSR<6:5,0> != '001' then // Check DLK, OSLK and PU bits IMPLEMENTATION_DEFINED "signal slave-generated error"; return; if EDSCR.ERR == '1' then return; // Error flag set: ignore write // The Software lock is OPTIONAL. if memory_mapped && EDLSR.SLK == '1' then return; // Software lock locked: ignore write if EDSCR.RXfull == '1' || (Halted() && EDSCR.MA == '1' && EDSCR.ITE == '0') then EDSCR.RXO = '1'; EDSCR.ERR = '1'; // Overrun condition: ignore write return; EDSCR.RXfull = '1'; DTRRX = value; if Halted() && EDSCR.MA == '1' then EDSCR.ITE = '0'; // See comments in EDITR[] (external write) if !UsingAArch32() then ExecuteA64(0xD5330501<31:0>); // A64 "MRS X1,DBGDTRRX_EL0" ExecuteA64(0xB8004401<31:0>); // A64 "STR W1,[X0],#4" X[1] = bits(64) UNKNOWN; else ExecuteT32(0xEE10<15:0> /*hw1*/, 0x1E15<15:0> /*hw2*/); // T32 "MRS R1,DBGDTRRXint" ExecuteT32(0xF840<15:0> /*hw1*/, 0x1B04<15:0> /*hw2*/); // T32 "STR R1,[R0],#4" R[1] = bits(32) UNKNOWN; // If the store aborts, the Data Abort exception is taken and EDSCR.ERR is set to 1 if EDSCR.ERR == '1' then EDSCR.RXfull = bit UNKNOWN; DBGDTRRX_EL0 = bits(32) UNKNOWN; else // "MRS X1,DBGDTRRX_EL0" calls DBGDTR_EL0[] (read) which clears RXfull. assert EDSCR.RXfull == '0'; EDSCR.ITE = '1'; // See comments in EDITR[] (external write) return; // DBGDTRRX_EL0[] (external read) // ============================== bits(32) DBGDTRRX_EL0[boolean memory_mapped] return DTRRX;

Library pseudocode for shared/debug/dccanditr/DBGDTRTX_EL0

// DBGDTRTX_EL0[] (external read) // ============================== // Called on reads of debug register 0x080. bits(32) DBGDTRTX_EL0[boolean memory_mapped] if EDPRSR<6:5,0> != '001' then // Check DLK, OSLK and PU bits IMPLEMENTATION_DEFINED "signal slave-generated error"; return bits(32) UNKNOWN; underrun = EDSCR.TXfull == '0' || (Halted() && EDSCR.MA == '1' && EDSCR.ITE == '0'); value = if underrun then bits(32) UNKNOWN else DTRTX; if EDSCR.ERR == '1' then return value; // Error flag set: no side-effects // The Software lock is OPTIONAL. if memory_mapped && EDLSR.SLK == '1' then // Software lock locked: no side-effects return value; if underrun then EDSCR.TXU = '1'; EDSCR.ERR = '1'; // Underrun condition: block side-effects return value; // Return UNKNOWN EDSCR.TXfull = '0'; if Halted() && EDSCR.MA == '1' then EDSCR.ITE = '0'; // See comments in EDITR[] (external write) if !UsingAArch32() then ExecuteA64(0xB8404401<31:0>); // A64 "LDR W1,[X0],#4" else ExecuteT32(0xF850<15:0> /*hw1*/, 0x1B04<15:0> /*hw2*/); // T32 "LDR R1,[R0],#4" // If the load aborts, the Data Abort exception is taken and EDSCR.ERR is set to 1 if EDSCR.ERR == '1' then EDSCR.TXfull = bit UNKNOWN; DBGDTRTX_EL0 = bits(32) UNKNOWN; else if !UsingAArch32() then ExecuteA64(0xD5130501<31:0>); // A64 "MSR DBGDTRTX_EL0,X1" else ExecuteT32(0xEE00<15:0> /*hw1*/, 0x1E15<15:0> /*hw2*/); // T32 "MSR DBGDTRTXint,R1" // "MSR DBGDTRTX_EL0,X1" calls DBGDTR_EL0[] (write) which sets TXfull. assert EDSCR.TXfull == '1'; if !UsingAArch32() then X[1] = bits(64) UNKNOWN; else R[1] = bits(32) UNKNOWN; EDSCR.ITE = '1'; // See comments in EDITR[] (external write) return value; // DBGDTRTX_EL0[] (external write) // =============================== DBGDTRTX_EL0[boolean memory_mapped] = bits(32) value // The Software lock is OPTIONAL. if memory_mapped && EDLSR.SLK == '1' then return; // Software lock locked: ignore write DTRTX = value; return;

Library pseudocode for shared/debug/dccanditr/DBGDTR_EL0

// DBGDTR_EL0[] (write) // ==================== // System register writes to DBGDTR_EL0, DBGDTRTX_EL0 (AArch64) and DBGDTRTXint (AArch32) DBGDTR_EL0[] = bits(N) value // For MSR DBGDTRTX_EL0,<Rt> N=32, value=X[t]<31:0>, X[t]<63:32> is ignored // For MSR DBGDTR_EL0,<Xt> N=64, value=X[t]<63:0> assert N IN {32,64}; if EDSCR.TXfull == '1' then value = bits(N) UNKNOWN; // On a 64-bit write, implement a half-duplex channel if N == 64 then DTRRX = value<63:32>; DTRTX = value<31:0>; // 32-bit or 64-bit write EDSCR.TXfull = '1'; return; // DBGDTR_EL0[] (read) // =================== // System register reads of DBGDTR_EL0, DBGDTRRX_EL0 (AArch64) and DBGDTRRXint (AArch32) bits(N) DBGDTR_EL0[] // For MRS <Rt>,DBGDTRTX_EL0 N=32, X[t]=Zeros(32):result // For MRS <Xt>,DBGDTR_EL0 N=64, X[t]=result assert N IN {32,64}; bits(N) result; if EDSCR.RXfull == '0' then result = bits(N) UNKNOWN; else // On a 64-bit read, implement a half-duplex channel // NOTE: the word order is reversed on reads with regards to writes if N == 64 then result<63:32> = DTRTX; result<31:0> = DTRRX; EDSCR.RXfull = '0'; return result;
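The NOTE about word order is worth a concrete check: on a 64-bit write the high word lands in DTRRX and the low word in DTRTX, and a 64-bit read returns DTRTX:DTRRX. A small Python model of just that behaviour (class and method names are ours):

class DTRChannel:
    def __init__(self):
        self.dtrrx = 0
        self.dtrtx = 0

    def write64(self, value):
        self.dtrrx = (value >> 32) & 0xFFFFFFFF   # high word -> DTRRX
        self.dtrtx = value & 0xFFFFFFFF           # low word  -> DTRTX

    def read64(self):
        # Word order is reversed on reads with regards to writes.
        return (self.dtrtx << 32) | self.dtrrx

ch = DTRChannel()
ch.write64(0x1122334455667788)
assert ch.read64() == 0x5566778811223344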

Library pseudocode for shared/debug/dccanditr/DTR

bits(32) DTRRX; bits(32) DTRTX;

Library pseudocode for shared/debug/dccanditr/EDITR

// EDITR[] (external write) // ======================== // Called on writes to debug register 0x084. EDITR[boolean memory_mapped] = bits(32) value if EDPRSR<6:5,0> != '001' then // Check DLK, OSLK and PU bits IMPLEMENTATION_DEFINED "signal slave-generated error"; return; if EDSCR.ERR == '1' then return; // Error flag set: ignore write // The Software lock is OPTIONAL. if memory_mapped && EDLSR.SLK == '1' then return; // Software lock locked: ignore write if !Halted() then return; // Non-debug state: ignore write if EDSCR.ITE == '0' || EDSCR.MA == '1' then EDSCR.ITO = '1'; EDSCR.ERR = '1'; // Overrun condition: block write return; // ITE indicates whether the processor is ready to accept another instruction; the processor // may support multiple outstanding instructions. Unlike the "InstrCompl" flag in [v7A] there // is no indication that the pipeline is empty (all instructions have completed). In this // pseudocode, the assumption is that only one instruction can be executed at a time, // meaning ITE acts like "InstrCompl". EDSCR.ITE = '0'; if !UsingAArch32() then ExecuteA64(value); else ExecuteT32(value<15:0>/*hw1*/, value<31:16> /*hw2*/); EDSCR.ITE = '1'; return;
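A hypothetical Python sketch of just the EDITR acceptance gating above (the error, lock, and halt checks; instruction execution itself is elided, and the names are ours):

def editr_write_accepted(halted, ite, ma, err):
    if err or not halted:
        return False             # ignored: error flag set or Non-debug state
    if not ite or ma:
        return False             # overrun: EDSCR.ITO and EDSCR.ERR are set
    return True                  # instruction issued; ITE toggles 1 -> 0 -> 1

assert editr_write_accepted(halted=True, ite=True, ma=False, err=False)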

Library pseudocode for shared/debug/halting/DCPSInstruction

// DCPSInstruction()
// =================
// Operation of the DCPS instruction in Debug state

DCPSInstruction(bits(2) target_el)

    SynchronizeContext();

    case target_el of
        when EL1
            if PSTATE.EL == EL2 || (PSTATE.EL == EL3 && !UsingAArch32()) then handle_el = PSTATE.EL;
            elsif EL2Enabled() && HCR_EL2.TGE == '1' then UndefinedFault();
            else handle_el = EL1;
        when EL2
            if !HaveEL(EL2) then UndefinedFault();
            elsif PSTATE.EL == EL3 && !UsingAArch32() then handle_el = EL3;
            elsif !IsSecureEL2Enabled() && IsSecure() then UndefinedFault();
            else handle_el = EL2;
        when EL3
            if EDSCR.SDD == '1' || !HaveEL(EL3) then UndefinedFault();
            handle_el = EL3;
        otherwise
            Unreachable();

    from_secure = IsSecure();
    if ELUsingAArch32(handle_el) then
        if PSTATE.M == M32_Monitor then SCR.NS = '0';
        assert UsingAArch32();                  // Cannot move from AArch64 to AArch32
        case handle_el of
            when EL1
                AArch32.WriteMode(M32_Svc);
                if HavePANExt() && SCTLR.SPAN == '0' then
                    PSTATE.PAN = '1';
            when EL2  AArch32.WriteMode(M32_Hyp);
            when EL3
                AArch32.WriteMode(M32_Monitor);
                if HavePANExt() then
                    if !from_secure then
                        PSTATE.PAN = '0';
                    elsif SCTLR.SPAN == '0' then
                        PSTATE.PAN = '1';
        if handle_el == EL2 then
            ELR_hyp = bits(32) UNKNOWN;  HSR = bits(32) UNKNOWN;
        else
            LR = bits(32) UNKNOWN;
        SPSR[] = bits(32) UNKNOWN;
        PSTATE.E = SCTLR[].EE;
        DLR = bits(32) UNKNOWN;  DSPSR = bits(32) UNKNOWN;

    else                                        // Targeting AArch64
        if UsingAArch32() then
            AArch64.MaybeZeroRegisterUppers();
        MaybeZeroSVEUppers(target_el);
        PSTATE.nRW = '0';  PSTATE.SP = '1';  PSTATE.EL = handle_el;
        if HavePANExt() && ((handle_el == EL1 && SCTLR_EL1.SPAN == '0') ||
                            (handle_el == EL2 && HCR_EL2.E2H == '1' &&
                             HCR_EL2.TGE == '1' && SCTLR_EL2.SPAN == '0')) then
            PSTATE.PAN = '1';
        ELR[] = bits(64) UNKNOWN;  SPSR[] = bits(32) UNKNOWN;  ESR[] = bits(32) UNKNOWN;
        DLR_EL0 = bits(64) UNKNOWN;  DSPSR_EL0 = bits(32) UNKNOWN;
        if HaveUAOExt() then PSTATE.UAO = '0';

    UpdateEDSCRFields();                        // Update EDSCR PE state flags

    sync_errors = HaveIESB() && SCTLR[].IESB == '1';
    if HaveDoubleFaultExt() && !UsingAArch32() then
        sync_errors = sync_errors || (SCR_EL3.EA == '1' && SCR_EL3.NMEA == '1' && PSTATE.EL == EL3);
    // SCTLR[].IESB might be ignored in Debug state.
    if !ConstrainUnpredictableBool(Unpredictable_IESBinDebug) then
        sync_errors = FALSE;
    if sync_errors then
        SynchronizeErrors();

    return;

Library pseudocode for shared/debug/halting/DRPSInstruction

// DRPSInstruction()
// =================
// Operation of the A64 DRPS and T32 ERET instructions in Debug state

DRPSInstruction()

    SynchronizeContext();

    sync_errors = HaveIESB() && SCTLR[].IESB == '1';
    if HaveDoubleFaultExt() && !UsingAArch32() then
        sync_errors = sync_errors || (SCR_EL3.EA == '1' && SCR_EL3.NMEA == '1' && PSTATE.EL == EL3);
    // SCTLR[].IESB might be ignored in Debug state.
    if !ConstrainUnpredictableBool(Unpredictable_IESBinDebug) then
        sync_errors = FALSE;
    if sync_errors then
        SynchronizeErrors();

    SetPSTATEFromPSR(SPSR[]);

    // PSTATE.{N,Z,C,V,Q,GE,SS,D,A,I,F} are not observable and ignored in Debug state, so
    // behave as if UNKNOWN.
    if UsingAArch32() then
        PSTATE.<N,Z,C,V,Q,GE,SS,A,I,F> = bits(13) UNKNOWN;
        // In AArch32, all instructions are T32 and unconditional.
        PSTATE.IT = '00000000';  PSTATE.T = '1';        // PSTATE.J is RES0
        DLR = bits(32) UNKNOWN;  DSPSR = bits(32) UNKNOWN;
    else
        PSTATE.<N,Z,C,V,SS,D,A,I,F> = bits(9) UNKNOWN;
        DLR_EL0 = bits(64) UNKNOWN;  DSPSR_EL0 = bits(32) UNKNOWN;

    UpdateEDSCRFields();                        // Update EDSCR PE state flags

    return;

Library pseudocode for shared/debug/halting/DebugHalt

constant bits(6) DebugHalt_Breakpoint      = '000111';
constant bits(6) DebugHalt_EDBGRQ          = '010011';
constant bits(6) DebugHalt_Step_Normal     = '011011';
constant bits(6) DebugHalt_Step_Exclusive  = '011111';
constant bits(6) DebugHalt_OSUnlockCatch   = '100011';
constant bits(6) DebugHalt_ResetCatch      = '100111';
constant bits(6) DebugHalt_Watchpoint      = '101011';
constant bits(6) DebugHalt_HaltInstruction = '101111';
constant bits(6) DebugHalt_SoftwareAccess  = '110011';
constant bits(6) DebugHalt_ExceptionCatch  = '110111';
constant bits(6) DebugHalt_Step_NoSyndrome = '111011';
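The DebugHalt_* constants are the values that Halt() writes to EDSCR.STATUS to record why the PE entered Debug state. As a minimal sketch of how external debugger software might consume them, the C enum below mirrors the encodings; the enum, helper, and all names are ours, not part of any Arm-defined API.

#include <stdint.h>

enum debug_halt_reason {
    DEBUG_HALT_BREAKPOINT       = 0x07, /* 000111 */
    DEBUG_HALT_EDBGRQ           = 0x13, /* 010011 */
    DEBUG_HALT_STEP_NORMAL      = 0x1B, /* 011011 */
    DEBUG_HALT_STEP_EXCLUSIVE   = 0x1F, /* 011111 */
    DEBUG_HALT_OS_UNLOCK_CATCH  = 0x23, /* 100011 */
    DEBUG_HALT_RESET_CATCH      = 0x27, /* 100111 */
    DEBUG_HALT_WATCHPOINT       = 0x2B, /* 101011 */
    DEBUG_HALT_HALT_INSTRUCTION = 0x2F, /* 101111 */
    DEBUG_HALT_SOFTWARE_ACCESS  = 0x33, /* 110011 */
    DEBUG_HALT_EXCEPTION_CATCH  = 0x37, /* 110111 */
    DEBUG_HALT_STEP_NO_SYNDROME = 0x3B, /* 111011 */
};

/* Extract EDSCR.STATUS (bits [5:0]) from a raw EDSCR read. */
static inline unsigned edscr_status(uint32_t edscr) { return edscr & 0x3F; }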

Library pseudocode for shared/debug/halting/DisableITRAndResumeInstructionPrefetch

DisableITRAndResumeInstructionPrefetch();

Library pseudocode for shared/debug/halting/ExecuteA64

// Execute an A64 instruction in Debug state.
ExecuteA64(bits(32) instr);

Library pseudocode for shared/debug/halting/ExecuteT32

// Execute a T32 instruction in Debug state.
ExecuteT32(bits(16) hw1, bits(16) hw2);

Library pseudocode for shared/debug/halting/ExitDebugState

// ExitDebugState()
// ================

ExitDebugState()
    assert Halted();
    SynchronizeContext();

    // Although EDSCR.STATUS signals that the PE is restarting, debuggers must use EDPRSR.SDR to
    // detect that the PE has restarted.
    EDSCR.STATUS = '000001';                    // Signal restarting
    EDESR<2:0> = '000';                         // Clear any pending Halting debug events

    bits(64) new_pc;
    bits(32) spsr;

    if UsingAArch32() then
        new_pc = ZeroExtend(DLR);
        spsr = DSPSR;
    else
        new_pc = DLR_EL0;
        spsr = DSPSR_EL0;

    // If this is an illegal return, SetPSTATEFromPSR() will set PSTATE.IL.
    SetPSTATEFromPSR(spsr);                     // Can update privileged bits, even at EL0

    if UsingAArch32() then
        if ConstrainUnpredictableBool(Unpredictable_RESTARTALIGNPC) then new_pc<0> = '0';
        BranchTo(new_pc<31:0>, BranchType_DBGEXIT);     // AArch32 branch
    else
        // If targeting AArch32 then possibly zero the 32 most significant bits of the target PC
        if spsr<4> == '1' && ConstrainUnpredictableBool(Unpredictable_RESTARTZEROUPPERPC) then
            new_pc<63:32> = Zeros();
        BranchTo(new_pc, BranchType_DBGEXIT);   // A type of branch that is never predicted

    (EDSCR.STATUS,EDPRSR.SDR) = ('000010','1'); // Atomically signal restarted
    UpdateEDSCRFields();                        // Stop signalling PE state
    DisableITRAndResumeInstructionPrefetch();

    return;

Library pseudocode for shared/debug/halting/Halt

// Halt()
// ======

Halt(bits(6) reason)

    CTI_SignalEvent(CrossTriggerIn_CrossHalt);  // Trigger other cores to halt

    if UsingAArch32() then
        DLR = ThisInstrAddr();
        DSPSR = GetPSRFromPSTATE();
        DSPSR.SS = PSTATE.SS;                   // Always save PSTATE.SS
    else
        DLR_EL0 = ThisInstrAddr();
        DSPSR_EL0 = GetPSRFromPSTATE();
        DSPSR_EL0.SS = PSTATE.SS;               // Always save PSTATE.SS

    EDSCR.ITE = '1';
    EDSCR.ITO = '0';
    if IsSecure() then
        EDSCR.SDD = '0';                        // If entered in Secure state, allow debug
    elsif HaveEL(EL3) then
        EDSCR.SDD = if ExternalSecureDebugEnabled() then '0' else '1';
    else
        assert EDSCR.SDD == '1';                // Otherwise EDSCR.SDD is RES1
    EDSCR.MA = '0';

    // PSTATE.{SS,D,A,I,F} are not observable and ignored in Debug state, so behave as if
    // UNKNOWN. PSTATE.{N,Z,C,V,Q,GE} are also not observable, but since these are not changed on
    // exception entry, this function also leaves them unchanged. PSTATE.{E,M,nRW,EL,SP} are
    // unchanged. PSTATE.IL is set to 0.
    if UsingAArch32() then
        PSTATE.<SS,A,I,F> = bits(4) UNKNOWN;
        // In AArch32, all instructions are T32 and unconditional.
        PSTATE.IT = '00000000';  PSTATE.T = '1';        // PSTATE.J is RES0
    else
        PSTATE.<SS,D,A,I,F> = bits(5) UNKNOWN;
    PSTATE.IL = '0';

    StopInstructionPrefetchAndEnableITR();
    EDSCR.STATUS = reason;                      // Signal entered Debug state
    UpdateEDSCRFields();                        // Update EDSCR PE state flags.

    return;

Library pseudocode for shared/debug/halting/HaltOnBreakpointOrWatchpoint

// HaltOnBreakpointOrWatchpoint()
// ==============================
// Returns TRUE if the Breakpoint and Watchpoint debug events should be considered for Debug
// state entry, FALSE if they should be considered for a debug exception.

boolean HaltOnBreakpointOrWatchpoint()
    return HaltingAllowed() && EDSCR.HDE == '1' && OSLSR_EL1.OSLK == '0';

Library pseudocode for shared/debug/halting/Halted

// Halted()
// ========

boolean Halted()
    return !(EDSCR.STATUS IN {'000001', '000010'});     // Halted

Library pseudocode for shared/debug/halting/HaltingAllowed

// HaltingAllowed()
// ================
// Returns TRUE if halting is currently allowed, FALSE if halting is prohibited.

boolean HaltingAllowed()
    if Halted() || DoubleLockStatus() then
        return FALSE;
    elsif IsSecure() then
        return ExternalSecureDebugEnabled();
    else
        return ExternalDebugEnabled();

Library pseudocode for shared/debug/halting/Restarting

// Restarting()
// ============

boolean Restarting()
    return EDSCR.STATUS == '000001';                    // Restarting

Library pseudocode for shared/debug/halting/StopInstructionPrefetchAndEnableITR

StopInstructionPrefetchAndEnableITR();

Library pseudocode for shared/debug/halting/UpdateEDSCRFields

// UpdateEDSCRFields()
// ===================
// Update EDSCR PE state fields

UpdateEDSCRFields()

    if !Halted() then
        EDSCR.EL = '00';
        EDSCR.NS = bit UNKNOWN;
        EDSCR.RW = '1111';
    else
        EDSCR.EL = PSTATE.EL;
        EDSCR.NS = if IsSecure() then '0' else '1';

        bits(4) RW;
        RW<1> = if ELUsingAArch32(EL1) then '0' else '1';
        if PSTATE.EL != EL0 then
            RW<0> = RW<1>;
        else
            RW<0> = if UsingAArch32() then '0' else '1';
        if !HaveEL(EL2) || (HaveEL(EL3) && SCR_GEN[].NS == '0' && !IsSecureEL2Enabled()) then
            RW<2> = RW<1>;
        else
            RW<2> = if ELUsingAArch32(EL2) then '0' else '1';
        if !HaveEL(EL3) then
            RW<3> = RW<2>;
        else
            RW<3> = if ELUsingAArch32(EL3) then '0' else '1';

        // The least-significant bits of EDSCR.RW are UNKNOWN if any higher EL is using AArch32.
        if RW<3> == '0' then RW<2:0> = bits(3) UNKNOWN;
        elsif RW<2> == '0' then RW<1:0> = bits(2) UNKNOWN;
        elsif RW<1> == '0' then RW<0> = bit UNKNOWN;
        EDSCR.RW = RW;
    return;

Library pseudocode for shared/debug/haltingevents/CheckExceptionCatch

// CheckExceptionCatch()
// =====================
// Check whether an Exception Catch debug event is set on the current Exception level

CheckExceptionCatch(boolean exception_entry)
    // Called after an exception entry or exit, that is, such that IsSecure() and PSTATE.EL are
    // correct for the exception target.
    base = if IsSecure() then 0 else 4;
    if HaltingAllowed() then
        if HaveExtendedECDebugEvents() then
            exception_exit = !exception_entry;
            ctrl = EDECCR<UInt(PSTATE.EL) + base + 8>:EDECCR<UInt(PSTATE.EL) + base>;
            case ctrl of
                when '00'  halt = FALSE;
                when '01'  halt = TRUE;
                when '10'  halt = (exception_exit == TRUE);
                when '11'  halt = (exception_entry == TRUE);
        else
            halt = (EDECCR<UInt(PSTATE.EL) + base> == '1');
        if halt then Halt(DebugHalt_ExceptionCatch);

Library pseudocode for shared/debug/haltingevents/CheckHaltingStep

// CheckHaltingStep()
// ==================
// Check whether EDESR.SS has been set by Halting Step

CheckHaltingStep()
    if HaltingAllowed() && EDESR.SS == '1' then
        // The STATUS code depends on how we arrived at the state where EDESR.SS == 1.
        if HaltingStep_DidNotStep() then
            Halt(DebugHalt_Step_NoSyndrome);
        elsif HaltingStep_SteppedEX() then
            Halt(DebugHalt_Step_Exclusive);
        else
            Halt(DebugHalt_Step_Normal);

Library pseudocode for shared/debug/haltingevents/CheckOSUnlockCatch

// CheckOSUnlockCatch()
// ====================
// Called on unlocking the OS Lock to pend an OS Unlock Catch debug event

CheckOSUnlockCatch()
    if EDECR.OSUCE == '1' && !Halted() then EDESR.OSUC = '1';

Library pseudocode for shared/debug/haltingevents/CheckPendingOSUnlockCatch

// CheckPendingOSUnlockCatch()
// ===========================
// Check whether EDESR.OSUC has been set by an OS Unlock Catch debug event

CheckPendingOSUnlockCatch()
    if HaltingAllowed() && EDESR.OSUC == '1' then
        Halt(DebugHalt_OSUnlockCatch);

Library pseudocode for shared/debug/haltingevents/CheckPendingResetCatch

// CheckPendingResetCatch()
// ========================
// Check whether EDESR.RC has been set by a Reset Catch debug event

CheckPendingResetCatch()
    if HaltingAllowed() && EDESR.RC == '1' then
        Halt(DebugHalt_ResetCatch);

Library pseudocode for shared/debug/haltingevents/CheckResetCatch

// CheckResetCatch()
// =================
// Called after reset

CheckResetCatch()
    if EDECR.RCE == '1' then
        EDESR.RC = '1';
        // If halting is allowed then halt immediately
        if HaltingAllowed() then Halt(DebugHalt_ResetCatch);

Library pseudocode for shared/debug/haltingevents/CheckSoftwareAccessToDebugRegisters

// CheckSoftwareAccessToDebugRegisters()
// =====================================
// Check for access to Breakpoint and Watchpoint registers.

CheckSoftwareAccessToDebugRegisters()
    os_lock = (if ELUsingAArch32(EL1) then DBGOSLSR.OSLK else OSLSR_EL1.OSLK);
    if HaltingAllowed() && EDSCR.TDA == '1' && os_lock == '0' then
        Halt(DebugHalt_SoftwareAccess);

Library pseudocode for shared/debug/haltingevents/ExternalDebugRequest

// ExternalDebugRequest()
// ======================

ExternalDebugRequest()
    if HaltingAllowed() then
        Halt(DebugHalt_EDBGRQ);
    // Otherwise the CTI continues to assert the debug request until it is taken.

Library pseudocode for shared/debug/haltingevents/HaltingStep_DidNotStep

// Returns TRUE if the previously executed instruction was executed in the inactive state, that is,
// if it was not itself stepped.
boolean HaltingStep_DidNotStep();

Library pseudocode for shared/debug/haltingevents/HaltingStep_SteppedEX

// Returns TRUE if the previously executed instruction was a Load-Exclusive class instruction
// executed in the active-not-pending state.
boolean HaltingStep_SteppedEX();

Library pseudocode for shared/debug/haltingevents/RunHaltingStep

// RunHaltingStep()
// ================

RunHaltingStep(boolean exception_generated, bits(2) exception_target, boolean syscall,
               boolean reset)
    // "exception_generated" is TRUE if the previous instruction generated a synchronous exception
    // or was cancelled by an asynchronous exception.
    //
    // if "exception_generated" is TRUE then "exception_target" is the target of the exception, and
    // "syscall" is TRUE if the exception is a synchronous exception where the preferred return
    // address is the instruction following that which generated the exception.
    //
    // "reset" is TRUE if exiting reset state into the highest EL.

    if reset then assert !Halted();             // Cannot come out of reset halted
    active = EDECR.SS == '1' && !Halted();

    if active && reset then                     // Coming out of reset with EDECR.SS set
        EDESR.SS = '1';
    elsif active && HaltingAllowed() then
        if exception_generated && exception_target == EL3 then
            advance = syscall || ExternalSecureDebugEnabled();
        else
            advance = TRUE;
        if advance then EDESR.SS = '1';

    return;

Library pseudocode for shared/debug/interrupts/ExternalDebugInterruptsDisabled

// ExternalDebugInterruptsDisabled()
// =================================
// Determine whether EDSCR disables interrupts routed to 'target'

boolean ExternalDebugInterruptsDisabled(bits(2) target)
    case target of
        when EL3
            int_dis = EDSCR.INTdis == '11' && ExternalSecureDebugEnabled();
        when EL2
            int_dis = EDSCR.INTdis == '1x' && ExternalDebugEnabled();
        when EL1
            if IsSecure() then
                int_dis = EDSCR.INTdis == '1x' && ExternalSecureDebugEnabled();
            else
                int_dis = EDSCR.INTdis != '00' && ExternalDebugEnabled();
    return int_dis;

Library pseudocode for shared/debug/interrupts/InterruptID

enumeration InterruptID {InterruptID_PMUIRQ, InterruptID_COMMIRQ, InterruptID_CTIIRQ,
                         InterruptID_COMMRX, InterruptID_COMMTX};

Library pseudocode for shared/debug/interrupts/SetInterruptRequestLevel

// Set a level-sensitive interrupt to the specified level.
SetInterruptRequestLevel(InterruptID id, signal level);

Library pseudocode for shared/debug/samplebasedprofiling/CreatePCSample

// CreatePCSample()
// ================

CreatePCSample()
    // In a simple sequential execution of the program, CreatePCSample is executed each time the PE
    // executes an instruction that can be sampled. An implementation is not constrained such that
    // reads of EDPCSRlo return the current values of PC, etc.

    pc_sample.valid = ExternalNoninvasiveDebugAllowed() && !Halted();
    pc_sample.pc = ThisInstrAddr();
    pc_sample.el = PSTATE.EL;
    pc_sample.rw = if UsingAArch32() then '0' else '1';
    pc_sample.ns = if IsSecure() then '0' else '1';
    pc_sample.contextidr = if ELUsingAArch32(EL1) then CONTEXTIDR else CONTEXTIDR_EL1;
    pc_sample.has_el2 = EL2Enabled();

    if EL2Enabled() then
        if ELUsingAArch32(EL2) then
            pc_sample.vmid = ZeroExtend(VTTBR.VMID, 16);
        elsif !Have16bitVMID() || VTCR_EL2.VS == '0' then
            pc_sample.vmid = ZeroExtend(VTTBR_EL2.VMID<7:0>, 16);
        else
            pc_sample.vmid = VTTBR_EL2.VMID;
        if HaveVirtHostExt() && !ELUsingAArch32(EL2) then
            pc_sample.contextidr_el2 = CONTEXTIDR_EL2;
        else
            pc_sample.contextidr_el2 = bits(32) UNKNOWN;
        pc_sample.el0h = PSTATE.EL == EL0 && IsInHost();
    return;

Library pseudocode for shared/debug/samplebasedprofiling/EDPCSRlo

// EDPCSRlo[] (read)
// =================

bits(32) EDPCSRlo[boolean memory_mapped]

    if EDPRSR<6:5,0> != '001' then              // Check DLK, OSLK and PU bits
        IMPLEMENTATION_DEFINED "signal slave-generated error";
        return bits(32) UNKNOWN;

    // The Software lock is OPTIONAL.
    update = !memory_mapped || EDLSR.SLK == '0';        // Software locked: no side-effects

    if pc_sample.valid then
        sample = pc_sample.pc<31:0>;
        if update then
            if HaveVirtHostExt() && EDSCR.SC2 == '1' then
                EDPCSRhi.PC = (if pc_sample.rw == '0' then Zeros(24) else pc_sample.pc<55:32>);
                EDPCSRhi.EL = pc_sample.el;
                EDPCSRhi.NS = pc_sample.ns;
            else
                EDPCSRhi = (if pc_sample.rw == '0' then Zeros(32) else pc_sample.pc<63:32>);
            EDCIDSR = pc_sample.contextidr;
            if HaveVirtHostExt() && EDSCR.SC2 == '1' then
                EDVIDSR = (if HaveEL(EL2) && pc_sample.ns == '1' then pc_sample.contextidr_el2
                           else bits(32) UNKNOWN);
            else
                if HaveEL(EL2) && pc_sample.ns == '1' && pc_sample.el IN {EL1,EL0} then
                    EDVIDSR.VMID = pc_sample.vmid;
                else
                    EDVIDSR.VMID = Zeros();
                EDVIDSR.NS = pc_sample.ns;
                EDVIDSR.E2 = (if pc_sample.el == EL2 then '1' else '0');
                EDVIDSR.E3 = (if pc_sample.el == EL3 then '1' else '0') AND pc_sample.rw;
                // The conditions for setting HV are not specified if PCSRhi is zero.
                // An example implementation may be "pc_sample.rw".
                EDVIDSR.HV = (if !IsZero(EDPCSRhi) then '1'
                              else bit IMPLEMENTATION_DEFINED "0 or 1");
    else
        sample = Ones(32);
        if update then
            EDPCSRhi = bits(32) UNKNOWN;
            EDCIDSR = bits(32) UNKNOWN;
            EDVIDSR = bits(32) UNKNOWN;

    return sample;

Library pseudocode for shared/debug/samplebasedprofiling/PCSample

type PCSample is (
    boolean valid,
    bits(64) pc,
    bits(2) el,
    bit rw,
    bit ns,
    boolean has_el2,
    bits(32) contextidr,
    bits(32) contextidr_el2,
    boolean el0h,
    bits(16) vmid
)

PCSample pc_sample;

Library pseudocode for shared/debug/samplebasedprofiling/PMPCSR

// PMPCSR[] (read)
// ===============

bits(32) PMPCSR[boolean memory_mapped]

    if EDPRSR<6:5,0> != '001' then              // Check DLK, OSLK and PU bits
        IMPLEMENTATION_DEFINED "signal slave-generated error";
        return bits(32) UNKNOWN;

    // The Software lock is OPTIONAL.
    update = !memory_mapped || PMLSR.SLK == '0';        // Software locked: no side-effects

    if pc_sample.valid then
        sample = pc_sample.pc<31:0>;
        if update then
            PMPCSR<55:32> = (if pc_sample.rw == '0' then Zeros(24) else pc_sample.pc<55:32>);
            PMPCSR.EL = pc_sample.el;
            PMPCSR.NS = pc_sample.ns;
            PMCID1SR = pc_sample.contextidr;
            PMCID2SR = if pc_sample.has_el2 then pc_sample.contextidr_el2 else bits(32) UNKNOWN;
            PMVIDSR.VMID = (if pc_sample.has_el2 && pc_sample.el IN {EL1,EL0} && !pc_sample.el0h
                            then pc_sample.vmid else bits(16) UNKNOWN);
    else
        sample = Ones(32);
        if update then
            PMPCSR<55:32> = bits(24) UNKNOWN;
            PMPCSR.EL = bits(2) UNKNOWN;
            PMPCSR.NS = bit UNKNOWN;
            PMCID1SR = bits(32) UNKNOWN;
            PMCID2SR = bits(32) UNKNOWN;
            PMVIDSR.VMID = bits(16) UNKNOWN;

    return sample;

Library pseudocode for shared/debug/softwarestep/CheckSoftwareStep

// CheckSoftwareStep()
// ===================
// Take a Software Step exception if in the active-pending state

CheckSoftwareStep()

    // Other self-hosted debug functions will call AArch32.GenerateDebugExceptions() if called from
    // AArch32 state. However, because Software Step is only active when the debug target Exception
    // level is using AArch64, CheckSoftwareStep only calls AArch64.GenerateDebugExceptions().
    if !ELUsingAArch32(DebugTarget()) && AArch64.GenerateDebugExceptions() then
        if MDSCR_EL1.SS == '1' && PSTATE.SS == '0' then
            AArch64.SoftwareStepException();

Library pseudocode for shared/debug/softwarestep/DebugExceptionReturnSS

// DebugExceptionReturnSS()
// ========================
// Returns value to write to PSTATE.SS on an exception return or Debug state exit.

bit DebugExceptionReturnSS(bits(32) spsr)
    assert Halted() || Restarting() || PSTATE.EL != EL0;

    SS_bit = '0';

    if MDSCR_EL1.SS == '1' then
        if Restarting() then
            enabled_at_source = FALSE;
        elsif UsingAArch32() then
            enabled_at_source = AArch32.GenerateDebugExceptions();
        else
            enabled_at_source = AArch64.GenerateDebugExceptions();

        if IllegalExceptionReturn(spsr) then
            dest = PSTATE.EL;
        else
            (valid, dest) = ELFromSPSR(spsr);  assert valid;

        secure = IsSecureBelowEL3() || dest == EL3;
        if ELUsingAArch32(dest) then
            enabled_at_dest = AArch32.GenerateDebugExceptionsFrom(dest, secure);
        else
            mask = spsr<9>;
            enabled_at_dest = AArch64.GenerateDebugExceptionsFrom(dest, secure, mask);

        ELd = DebugTargetFrom(secure);
        if !ELUsingAArch32(ELd) && !enabled_at_source && enabled_at_dest then
            SS_bit = spsr<21>;
    return SS_bit;

Library pseudocode for shared/debug/softwarestep/SSAdvance

// SSAdvance()
// ===========
// Advance the Software Step state machine.

SSAdvance()

    // A simpler implementation of this function just clears PSTATE.SS to zero regardless of the
    // current Software Step state machine. However, this check is made to illustrate that the
    // processor only needs to consider advancing the state machine from the active-not-pending
    // state.
    target = DebugTarget();
    step_enabled = !ELUsingAArch32(target) && MDSCR_EL1.SS == '1';
    active_not_pending = step_enabled && PSTATE.SS == '1';

    if active_not_pending then PSTATE.SS = '0';

    return;

Library pseudocode for shared/debug/softwarestep/SoftwareStep_DidNotStep

// Returns TRUE if the previously executed instruction was executed in the inactive state, that is,
// if it was not itself stepped.
boolean SoftwareStep_DidNotStep();

Library pseudocode for shared/debug/softwarestep/SoftwareStep_SteppedEX

// Returns TRUE if the previously executed instruction was a Load-Exclusive class instruction
// executed in the active-not-pending state.
boolean SoftwareStep_SteppedEX();

Library pseudocode for shared/exceptions/exceptions/ConditionSyndrome

// ConditionSyndrome()
// ===================
// Return CV and COND fields of instruction syndrome

bits(5) ConditionSyndrome()

    bits(5) syndrome;

    if UsingAArch32() then
        cond = AArch32.CurrentCond();
        if PSTATE.T == '0' then             // A32
            syndrome<4> = '1';
            // A conditional A32 instruction that is known to pass its condition code check
            // can be presented either with COND set to 0xE, the value for unconditional, or
            // the COND value held in the instruction.
            if ConditionHolds(cond) && ConstrainUnpredictableBool(Unpredictable_ESRCONDPASS) then
                syndrome<3:0> = '1110';
            else
                syndrome<3:0> = cond;
        else                                // T32
            // When a T32 instruction is trapped, it is IMPLEMENTATION DEFINED whether:
            //  * CV set to 0 and COND is set to an UNKNOWN value
            //  * CV set to 1 and COND is set to the condition code for the condition that
            //    applied to the instruction.
            if boolean IMPLEMENTATION_DEFINED "Condition valid for trapped T32" then
                syndrome<4> = '1';
                syndrome<3:0> = cond;
            else
                syndrome<4> = '0';
                syndrome<3:0> = bits(4) UNKNOWN;
    else
        syndrome<4> = '1';
        syndrome<3:0> = '1110';

    return syndrome;

Library pseudocode for shared/exceptions/exceptions/Exception

enumeration Exception {Exception_Uncategorized,       // Uncategorized or unknown reason
                       Exception_WFxTrap,             // Trapped WFI or WFE instruction
                       Exception_CP15RTTrap,          // Trapped AArch32 MCR or MRC access to CP15
                       Exception_CP15RRTTrap,         // Trapped AArch32 MCRR or MRRC access to CP15
                       Exception_CP14RTTrap,          // Trapped AArch32 MCR or MRC access to CP14
                       Exception_CP14DTTrap,          // Trapped AArch32 LDC or STC access to CP14
                       Exception_AdvSIMDFPAccessTrap, // HCPTR-trapped access to SIMD or FP
                       Exception_FPIDTrap,            // Trapped access to SIMD or FP ID register
                       // Trapped BXJ instruction not supported in Armv8
                       Exception_PACTrap,             // Trapped invalid PAC use
                       Exception_CP14RRTTrap,         // Trapped MRRC access to CP14 from AArch32
                       Exception_IllegalState,        // Illegal Execution state
                       Exception_SupervisorCall,      // Supervisor Call
                       Exception_HypervisorCall,      // Hypervisor Call
                       Exception_MonitorCall,         // Monitor Call or Trapped SMC instruction
                       Exception_SystemRegisterTrap,  // Trapped MRS or MSR system register access
                       Exception_ERetTrap,            // Trapped invalid ERET use
                       Exception_InstructionAbort,    // Instruction Abort or Prefetch Abort
                       Exception_PCAlignment,         // PC alignment fault
                       Exception_DataAbort,           // Data Abort
                       Exception_NV2DataAbort,        // Data abort at EL1 reported as being from EL2
                       Exception_SPAlignment,         // SP alignment fault
                       Exception_FPTrappedException,  // IEEE trapped FP exception
                       Exception_SError,              // SError interrupt
                       Exception_Breakpoint,          // (Hardware) Breakpoint
                       Exception_SoftwareStep,        // Software Step
                       Exception_Watchpoint,          // Watchpoint
                       Exception_SoftwareBreakpoint,  // Software Breakpoint Instruction
                       Exception_VectorCatch,         // AArch32 Vector Catch
                       Exception_IRQ,                 // IRQ interrupt
                       Exception_SVEAccessTrap,       // HCPTR trapped access to SVE
                       Exception_BranchTarget,        // Branch Target Identification
                       Exception_FIQ};                // FIQ interrupt

Library pseudocode for shared/exceptions/exceptions/ExceptionRecord

type ExceptionRecord is (
    Exception type,            // Exception class
    bits(25)  syndrome,        // Syndrome record
    bits(64)  vaddress,        // Virtual fault address
    boolean   ipavalid,        // Physical fault address for second stage faults is valid
    bits(1)   NS,              // Physical fault address for second stage faults is Non-secure or secure
    bits(52)  ipaddress)       // Physical fault address for second stage faults
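For readers more comfortable in C, the record above maps naturally onto a struct. This is a sketch only; the type and field names are ours, and the 25-bit syndrome and 52-bit IPA are widened to the next C integer size.

#include <stdint.h>
#include <stdbool.h>

struct exception_record {
    int      type;          /* Exception class (one of the enumerators above) */
    uint32_t syndrome;      /* Syndrome record, low 25 bits used */
    uint64_t vaddress;      /* Virtual fault address */
    bool     ipavalid;      /* IPA for second stage faults is valid */
    uint8_t  ns;            /* IPA for second stage faults: Non-secure or Secure */
    uint64_t ipaddress;     /* Physical fault address for second stage faults, low 52 bits used */
};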

Library pseudocode for shared/exceptions/exceptions/ExceptionSyndrome

// ExceptionSyndrome()
// ===================
// Return a blank exception syndrome record for an exception of the given type.

ExceptionRecord ExceptionSyndrome(Exception type)

    ExceptionRecord r;

    r.type = type;

    // Initialize all other fields
    r.syndrome = Zeros();
    r.vaddress = Zeros();
    r.ipavalid = FALSE;
    r.NS = '0';
    r.ipaddress = Zeros();

    return r;

Library pseudocode for shared/exceptions/traps/ReservedValue

// ReservedValue()
// ===============

ReservedValue()
    if UsingAArch32() && !AArch32.GeneralExceptionsToAArch64() then
        AArch32.TakeUndefInstrException();
    else
        AArch64.UndefinedFault();

Library pseudocode for shared/exceptions/traps/SystemAccessType

enumeration SystemAccessType {SystemAccessType_RT, SystemAccessType_RRT, SystemAccessType_DT};

Library pseudocode for shared/exceptions/traps/UnallocatedEncoding

// UnallocatedEncoding()
// =====================

UnallocatedEncoding()
    if UsingAArch32() && AArch32.ExecutingCP10or11Instr() then
        FPEXC.DEX = '0';
    if UsingAArch32() && !AArch32.GeneralExceptionsToAArch64() then
        AArch32.TakeUndefInstrException();
    else
        AArch64.UndefinedFault();

Library pseudocode for shared/functions/aborts/EncodeLDFSC

// EncodeLDFSC()
// =============
// Function that gives the Long-descriptor FSC code for types of Fault

bits(6) EncodeLDFSC(Fault type, integer level)

    bits(6) result;
    case type of
        when Fault_AddressSize         result = '0000':level<1:0>; assert level IN {0,1,2,3};
        when Fault_AccessFlag          result = '0010':level<1:0>; assert level IN {1,2,3};
        when Fault_Permission          result = '0011':level<1:0>; assert level IN {1,2,3};
        when Fault_Translation         result = '0001':level<1:0>; assert level IN {0,1,2,3};
        when Fault_SyncExternal        result = '010000';
        when Fault_SyncExternalOnWalk  result = '0101':level<1:0>; assert level IN {0,1,2,3};
        when Fault_SyncParity          result = '011000';
        when Fault_SyncParityOnWalk    result = '0111':level<1:0>; assert level IN {0,1,2,3};
        when Fault_AsyncParity         result = '011001';
        when Fault_AsyncExternal       result = '010001';
        when Fault_Alignment           result = '100001';
        when Fault_Debug               result = '100010';
        when Fault_TLBConflict         result = '110000';
        when Fault_HWUpdateAccessFlag  result = '110001';
        when Fault_Lockdown            result = '110100';  // IMPLEMENTATION DEFINED
        when Fault_Exclusive           result = '110101';  // IMPLEMENTATION DEFINED
        otherwise                      Unreachable();

    return result;
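Going the other way, fault handlers typically decode an FSC read from a syndrome register. The C sketch below inverts the encoding above for the common cases; the function name and description strings are invented for this illustration, and only the bit patterns come from EncodeLDFSC().

#include <stdio.h>

static const char *fsc_class(unsigned fsc) {
    switch (fsc >> 2) {                 /* top four bits of FSC<5:0> */
    case 0x0: return "Address size fault";      /* 0000LL */
    case 0x1: return "Translation fault";       /* 0001LL */
    case 0x2: return "Access flag fault";       /* 0010LL */
    case 0x3: return "Permission fault";        /* 0011LL */
    case 0x5: return "Sync external, on walk";  /* 0101LL */
    case 0x7: return "Sync parity, on walk";    /* 0111LL */
    default:  break;
    }
    switch (fsc) {
    case 0x10: return "Sync external abort";        /* 010000 */
    case 0x11: return "Async external abort";       /* 010001 */
    case 0x18: return "Sync parity error";          /* 011000 */
    case 0x19: return "Async parity error";         /* 011001 */
    case 0x21: return "Alignment fault";            /* 100001 */
    case 0x22: return "Debug exception";            /* 100010 */
    case 0x30: return "TLB conflict abort";         /* 110000 */
    case 0x31: return "HW update of access flag";   /* 110001 */
    default:   return "other/IMPLEMENTATION DEFINED";
    }
}

int main(void) {
    unsigned fsc = 0x07;    /* 000111: translation fault, level 3 */
    printf("%s, level %u\n", fsc_class(fsc), fsc & 3);
    return 0;
}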

Library pseudocode for shared/functions/aborts/IPAValid

// IPAValid()
// ==========
// Return TRUE if the IPA is reported for the abort

boolean IPAValid(FaultRecord fault)
    assert fault.type != Fault_None;

    if fault.s2fs1walk then
        return fault.type IN {Fault_AccessFlag, Fault_Permission, Fault_Translation,
                              Fault_AddressSize};
    elsif fault.secondstage then
        return fault.type IN {Fault_AccessFlag, Fault_Translation, Fault_AddressSize};
    else
        return FALSE;

Library pseudocode for shared/functions/aborts/IsAsyncAbort

// IsAsyncAbort()
// ==============
// Returns TRUE if the abort currently being processed is an asynchronous abort, and FALSE
// otherwise.

boolean IsAsyncAbort(Fault type)
    assert type != Fault_None;

    return (type IN {Fault_AsyncExternal, Fault_AsyncParity});

// IsAsyncAbort()
// ==============

boolean IsAsyncAbort(FaultRecord fault)
    return IsAsyncAbort(fault.type);

Library pseudocode for shared/functions/aborts/IsDebugException

// IsDebugException()
// ==================

boolean IsDebugException(FaultRecord fault)
    assert fault.type != Fault_None;
    return fault.type == Fault_Debug;

Library pseudocode for shared/functions/aborts/IsExternalAbort

// IsExternalAbort()
// =================
// Returns TRUE if the abort currently being processed is an external abort and FALSE otherwise.

boolean IsExternalAbort(Fault type)
    assert type != Fault_None;

    return (type IN {Fault_SyncExternal, Fault_SyncParity, Fault_SyncExternalOnWalk,
                     Fault_SyncParityOnWalk, Fault_AsyncExternal, Fault_AsyncParity});

// IsExternalAbort()
// =================

boolean IsExternalAbort(FaultRecord fault)
    return IsExternalAbort(fault.type);

Library pseudocode for shared/functions/aborts/IsExternalSyncAbort

// IsExternalSyncAbort()
// =====================
// Returns TRUE if the abort currently being processed is an external synchronous abort and
// FALSE otherwise.

boolean IsExternalSyncAbort(Fault type)
    assert type != Fault_None;

    return (type IN {Fault_SyncExternal, Fault_SyncParity, Fault_SyncExternalOnWalk,
                     Fault_SyncParityOnWalk});

// IsExternalSyncAbort()
// =====================

boolean IsExternalSyncAbort(FaultRecord fault)
    return IsExternalSyncAbort(fault.type);

Library pseudocode for shared/functions/aborts/IsFault

// IsFault()
// =========
// Return TRUE if a fault is associated with an address descriptor

boolean IsFault(AddressDescriptor addrdesc)
    return addrdesc.fault.type != Fault_None;

Library pseudocode for shared/functions/aborts/IsSErrorInterrupt

// IsSErrorInterrupt()
// ===================
// Returns TRUE if the abort currently being processed is an SError interrupt, and FALSE
// otherwise.

boolean IsSErrorInterrupt(Fault type)
    assert type != Fault_None;

    return (type IN {Fault_AsyncExternal, Fault_AsyncParity});

// IsSErrorInterrupt()
// ===================

boolean IsSErrorInterrupt(FaultRecord fault)
    return IsSErrorInterrupt(fault.type);

Library pseudocode for shared/functions/aborts/IsSecondStage

// IsSecondStage()
// ===============

boolean IsSecondStage(FaultRecord fault)
    assert fault.type != Fault_None;

    return fault.secondstage;

Library pseudocode for shared/functions/aborts/LSInstructionSyndrome

bits(11) LSInstructionSyndrome();

Library pseudocode for shared/functions/common/ASR

// ASR()
// =====

bits(N) ASR(bits(N) x, integer shift)
    assert shift >= 0;
    if shift == 0 then
        result = x;
    else
        (result, -) = ASR_C(x, shift);
    return result;

Library pseudocode for shared/functions/common/ASR_C

// ASR_C()
// =======

(bits(N), bit) ASR_C(bits(N) x, integer shift)
    assert shift > 0;
    shift = if shift > N then N else shift;
    extended_x = SignExtend(x, shift+N);
    result = extended_x<shift+N-1:shift>;
    carry_out = extended_x<shift-1>;
    return (result, carry_out);
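A minimal C model of ASR_C for N=32, returning the shifted value plus the carry-out (the last bit shifted off the right). The function name is ours; like the pseudocode, shifts larger than 32 saturate to 32. It assumes right shifts of signed integers are arithmetic, which holds on mainstream compilers.

#include <stdint.h>
#include <assert.h>

static uint32_t asr_c32(uint32_t x, int shift, int *carry_out) {
    assert(shift > 0);
    if (shift > 32) shift = 32;
    /* Sign-extend to 64 bits so the bits shifted below the result are
     * still available for the carry (SignExtend(x, shift+N)). */
    int64_t extended = (int64_t)(int32_t)x;
    *carry_out = (extended >> (shift - 1)) & 1;     /* extended_x<shift-1> */
    return (uint32_t)(extended >> shift);           /* extended_x<shift+N-1:shift> */
}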

Library pseudocode for shared/functions/common/Abs

// Abs()
// =====

integer Abs(integer x)
    return if x >= 0 then x else -x;

// Abs()
// =====

real Abs(real x)
    return if x >= 0.0 then x else -x;

Library pseudocode for shared/functions/common/Align

// Align()
// =======

integer Align(integer x, integer y)
    return y * (x DIV y);

// Align()
// =======

bits(N) Align(bits(N) x, integer y)
    return Align(UInt(x), y)<N-1:0>;
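Align() rounds x down to a multiple of y. In the common case where y is a power of two, this reduces to a mask, as in the following C sketch (the function name is ours):

#include <stdint.h>
#include <assert.h>

static uint64_t align_down(uint64_t x, uint64_t y) {
    assert(y != 0 && (y & (y - 1)) == 0);   /* this shortcut needs y to be a power of two */
    return x & ~(y - 1);                    /* same result as y * (x / y) here */
}
/* Example: align_down(0x1234, 0x1000) == 0x1000. */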

Library pseudocode for shared/functions/common/BitCount

// BitCount()
// ==========

integer BitCount(bits(N) x)
    integer result = 0;
    for i = 0 to N-1
        if x<i> == '1' then
            result = result + 1;
    return result;
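BitCount() is a plain population count. A direct C transcription for a 64-bit value follows (name ours); most compilers also provide an intrinsic such as __builtin_popcountll.

#include <stdint.h>

static int bit_count64(uint64_t x) {
    int result = 0;
    for (int i = 0; i < 64; i++)
        result += (x >> i) & 1;     /* add 1 for every set bit x<i> */
    return result;
}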

Library pseudocode for shared/functions/common/CountLeadingSignBits

// CountLeadingSignBits()
// ======================

integer CountLeadingSignBits(bits(N) x)
    return CountLeadingZeroBits(x<N-1:1> EOR x<N-2:0>);

Library pseudocode for shared/functions/common/CountLeadingZeroBits

// CountLeadingZeroBits()
// ======================

integer CountLeadingZeroBits(bits(N) x)
    return N - (HighestSetBit(x) + 1);
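C transcriptions of the two counts above for N=32 (names ours). cls32 uses the same XOR trick as CountLeadingSignBits: x<31:1> EOR x<30:0> has a leading zero for every copy of the sign bit after the first, but it is a 31-bit string, hence the -1 after the 32-bit clz32.

#include <stdint.h>

static int clz32(uint32_t x) {
    for (int i = 31; i >= 0; i--)
        if ((x >> i) & 1) return 31 - i;
    return 32;      /* N - (HighestSetBit(x) + 1), with HighestSetBit(0) == -1 */
}

static int cls32(uint32_t x) {
    /* ((x >> 1) ^ x) & 0x7FFFFFFF is the 31-bit string x<31:1> EOR x<30:0>. */
    return clz32(((x >> 1) ^ x) & 0x7FFFFFFF) - 1;
}
/* Example: cls32(0xFFFF0000) == 15 (sixteen leading ones, minus the sign bit). */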

Library pseudocode for shared/functions/common/Elem

// Elem[] - non-assignment form
// ============================

bits(size) Elem[bits(N) vector, integer e, integer size]
    assert e >= 0 && (e+1)*size <= N;
    return vector<e*size+size-1 : e*size>;

// Elem[] - non-assignment form
// ============================

bits(size) Elem[bits(N) vector, integer e]
    return Elem[vector, e, size];

// Elem[] - assignment form
// ========================

Elem[bits(N) &vector, integer e, integer size] = bits(size) value
    assert e >= 0 && (e+1)*size <= N;
    vector<(e+1)*size-1:e*size> = value;
    return;

// Elem[] - assignment form
// ========================

Elem[bits(N) &vector, integer e] = bits(size) value
    Elem[vector, e, size] = value;
    return;
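Elem[] indexes a bitstring as an array of equal-sized elements. A C sketch for a 64-bit vector modelled as a uint64_t, with element sizes up to 64 bits (helper names ours):

#include <stdint.h>
#include <assert.h>

static uint64_t elem_get(uint64_t vector, int e, int size) {
    assert(e >= 0 && (e + 1) * size <= 64);
    uint64_t mask = (size == 64) ? ~0ULL : ((1ULL << size) - 1);
    return (vector >> (e * size)) & mask;       /* vector<e*size+size-1 : e*size> */
}

static uint64_t elem_set(uint64_t vector, int e, int size, uint64_t value) {
    assert(e >= 0 && (e + 1) * size <= 64);
    uint64_t mask = (size == 64) ? ~0ULL : ((1ULL << size) - 1);
    vector &= ~(mask << (e * size));            /* clear the element */
    vector |= (value & mask) << (e * size);     /* insert the new value */
    return vector;
}
/* Example: elem_get(0x1122334455667788ULL, 2, 16) == 0x3344. */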

Library pseudocode for shared/functions/common/Extend

// Extend()
// ========

bits(N) Extend(bits(M) x, integer N, boolean unsigned)
    return if unsigned then ZeroExtend(x, N) else SignExtend(x, N);

// Extend()
// ========

bits(N) Extend(bits(M) x, boolean unsigned)
    return Extend(x, N, unsigned);
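Extend() selects zero- or sign-extension. In C, extending an M-bit field held in a uint64_t to 64 bits might look like this sketch (names ours):

#include <stdint.h>
#include <stdbool.h>

static uint64_t extend64(uint64_t x, int m, bool is_unsigned) {
    uint64_t value = (m == 64) ? x : (x & ((1ULL << m) - 1));
    if (!is_unsigned && m < 64 && ((value >> (m - 1)) & 1))
        value |= ~0ULL << m;        /* replicate the sign bit upwards */
    return value;
}
/* Example: extend64(0x80, 8, false) == 0xFFFFFFFFFFFFFF80. */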

Library pseudocode for shared/functions/common/HighestSetBit

// HighestSetBit()
// ===============

integer HighestSetBit(bits(N) x)
    for i = N-1 downto 0
        if x<i> == '1' then return i;
    return -1;

Library pseudocode for shared/functions/common/Int

// Int()
// =====

integer Int(bits(N) x, boolean unsigned)
    result = if unsigned then UInt(x) else SInt(x);
    return result;

Library pseudocode for shared/functions/common/IsOnes

// IsOnes()
// ========

boolean IsOnes(bits(N) x)
    return x == Ones(N);

Library pseudocode for shared/functions/common/IsZero

// IsZero()
// ========

boolean IsZero(bits(N) x)
    return x == Zeros(N);

Library pseudocode for shared/functions/common/IsZeroBit

// IsZeroBit()
// ===========

bit IsZeroBit(bits(N) x)
    return if IsZero(x) then '1' else '0';

Library pseudocode for shared/functions/common/LSL

// LSL()
// =====

bits(N) LSL(bits(N) x, integer shift)
    assert shift >= 0;
    if shift == 0 then
        result = x;
    else
        (result, -) = LSL_C(x, shift);
    return result;

Library pseudocode for shared/functions/common/LSL_C

// LSL_C()
// =======

(bits(N), bit) LSL_C(bits(N) x, integer shift)
    assert shift > 0;
    shift = if shift > N then N else shift;
    extended_x = x : Zeros(shift);
    result = extended_x<N-1:0>;
    carry_out = extended_x<N>;
    return (result, carry_out);

Library pseudocode for shared/functions/common/LSR

// LSR()
// =====

bits(N) LSR(bits(N) x, integer shift)
    assert shift >= 0;
    if shift == 0 then
        result = x;
    else
        (result, -) = LSR_C(x, shift);
    return result;

Library pseudocode for shared/functions/common/LSR_C

// LSR_C()
// =======

(bits(N), bit) LSR_C(bits(N) x, integer shift)
    assert shift > 0;
    shift = if shift > N then N else shift;
    extended_x = ZeroExtend(x, shift+N);
    result = extended_x<shift+N-1:shift>;
    carry_out = extended_x<shift-1>;
    return (result, carry_out);

Library pseudocode for shared/functions/common/LowestSetBit

// LowestSetBit()
// ==============

integer LowestSetBit(bits(N) x)
    for i = 0 to N-1
        if x<i> == '1' then return i;
    return N;

Library pseudocode for shared/functions/common/Max

// Max()
// =====

integer Max(integer a, integer b)
    return if a >= b then a else b;

// Max()
// =====

real Max(real a, real b)
    return if a >= b then a else b;

Library pseudocode for shared/functions/common/Min

// Min()
// =====

integer Min(integer a, integer b)
    return if a <= b then a else b;

// Min()
// =====

real Min(real a, real b)
    return if a <= b then a else b;

Library pseudocode for shared/functions/common/Ones

// Ones()
// ======

bits(N) Ones(integer N)
    return Replicate('1',N);

// Ones()
// ======

bits(N) Ones()
    return Ones(N);

Library pseudocode for shared/functions/common/ROR

// ROR()
// =====

bits(N) ROR(bits(N) x, integer shift)
    assert shift >= 0;
    if shift == 0 then
        result = x;
    else
        (result, -) = ROR_C(x, shift);
    return result;

Library pseudocode for shared/functions/common/ROR_C

// ROR_C()
// =======

(bits(N), bit) ROR_C(bits(N) x, integer shift)
    assert shift != 0;
    m = shift MOD N;
    result = LSR(x,m) OR LSL(x,N-m);
    carry_out = result<N-1>;
    return (result, carry_out);
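A non-normative C sketch of ROR_C for N == 32 (the helper name is ours). The m == 0 guard corresponds to shift being a multiple of N: the pseudocode's LSL(x, N-m) degenerates to a full-width shift there, which C must avoid as undefined behavior:

#include <stdint.h>

// Rotate right by shift MOD 32; the carry-out is the new top bit of the
// result, matching result<N-1> in the pseudocode.
static uint32_t ror_c32(uint32_t x, int shift, int *carry_out) {
    int m = shift % 32;
    uint32_t result = (m == 0) ? x : ((x >> m) | (x << (32 - m)));
    *carry_out = (int)(result >> 31);
    return result;
}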

Library pseudocode for shared/functions/common/Replicate

// Replicate()
// ===========

bits(N) Replicate(bits(M) x)
    assert N MOD M == 0;
    return Replicate(x, N DIV M);

bits(M*N) Replicate(bits(M) x, integer N);

Library pseudocode for shared/functions/common/RoundDown

integer RoundDown(real x);

Library pseudocode for shared/functions/common/RoundTowardsZero

// RoundTowardsZero()
// ==================

integer RoundTowardsZero(real x)
    return if x == 0.0 then 0 else if x >= 0.0 then RoundDown(x) else RoundUp(x);
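In C terms this is exactly truncation toward zero: RoundDown (floor) for non-negative values and RoundUp (ceil) for negative ones. A one-line sketch (the name is ours):

#include <math.h>

// Truncate toward zero, composed from floor/ceil as in the pseudocode.
static long round_towards_zero(double x) {
    return (long)(x >= 0.0 ? floor(x) : ceil(x));   // equivalently trunc(x)
}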

Library pseudocode for shared/functions/common/RoundUp

integer RoundUp(real x);

Library pseudocode for shared/functions/common/SInt

// SInt()
// ======

integer SInt(bits(N) x)
    result = 0;
    for i = 0 to N-1
        if x<i> == '1' then result = result + 2^i;
    if x<N-1> == '1' then result = result - 2^N;
    return result;
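A non-normative C transcription of SInt for fields of up to 32 bits (the name is ours), keeping the pseudocode's sum-then-correct structure; for n == 32 it is equivalent to a cast through int32_t:

#include <stdint.h>

// Two's-complement value of an n-bit field (1 <= n <= 32): sum the weighted
// bits, then subtract 2^n if the sign bit was set.
static int64_t sint(uint32_t x, int n) {
    int64_t result = 0;
    for (int i = 0; i < n; i++)
        if ((x >> i) & 1) result += (int64_t)1 << i;
    if ((x >> (n - 1)) & 1) result -= (int64_t)1 << n;
    return result;
}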

Library pseudocode for shared/functions/common/SignExtend

// SignExtend()
// ============

bits(N) SignExtend(bits(M) x, integer N)
    assert N >= M;
    return Replicate(x<M-1>, N-M) : x;

// SignExtend()
// ============

bits(N) SignExtend(bits(M) x)
    return SignExtend(x, N);

Library pseudocode for shared/functions/common/UInt

// UInt()
// ======

integer UInt(bits(N) x)
    result = 0;
    for i = 0 to N-1
        if x<i> == '1' then result = result + 2^i;
    return result;

Library pseudocode for shared/functions/common/ZeroExtend

// ZeroExtend()
// ============

bits(N) ZeroExtend(bits(M) x, integer N)
    assert N >= M;
    return Zeros(N-M) : x;

// ZeroExtend()
// ============

bits(N) ZeroExtend(bits(M) x)
    return ZeroExtend(x, N);

Library pseudocode for shared/functions/common/Zeros

// Zeros()
// =======

bits(N) Zeros(integer N)
    return Replicate('0',N);

// Zeros()
// =======

bits(N) Zeros()
    return Zeros(N);

Library pseudocode for shared/functions/crc/BitReverse

// BitReverse()
// ============

bits(N) BitReverse(bits(N) data)
    bits(N) result;
    for i = 0 to N-1
        result<N-i-1> = data<i>;
    return result;
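The same loop in C, for N == 32 (the name is ours); the CRC instructions are defined on bit-reversed values, so bit i of the input becomes bit 31-i of the result:

#include <stdint.h>

// Mirror the bit order of a 32-bit value.
static uint32_t bit_reverse32(uint32_t data) {
    uint32_t result = 0;
    for (int i = 0; i < 32; i++)
        if ((data >> i) & 1) result |= 1u << (31 - i);
    return result;
}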

Library pseudocode for shared/functions/crc/HaveCRCExt

// HaveCRCExt()
// ============

boolean HaveCRCExt()
    return HasArchVersion(ARMv8p1) || boolean IMPLEMENTATION_DEFINED "Have CRC extension";

Library pseudocode for shared/functions/crc/Poly32Mod2

// Poly32Mod2()
// ============

// Poly32Mod2 on a bitstring does a polynomial Modulus over {0,1} operation

bits(32) Poly32Mod2(bits(N) data, bits(32) poly)
    assert N > 32;
    for i = N-1 downto 32
        if data<i> == '1' then
            data<i-1:0> = data<i-1:0> EOR (poly:Zeros(i-32));
    return data<31:0>;
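A non-normative C sketch for the N == 64 case used by the CRC32 instructions (the name is ours): classic long division over GF(2), XORing the polynomial in wherever a high bit is set and returning the 32-bit remainder. Bits at or above position 32 need not be cleared explicitly, because only the low 32 bits survive the final truncation:

#include <stdint.h>

// Reduce a 64-bit value modulo a 32-bit polynomial over GF(2).
static uint32_t poly32_mod2_64(uint64_t data, uint32_t poly) {
    for (int i = 63; i >= 32; i--)
        if ((data >> i) & 1)
            data ^= (uint64_t)poly << (i - 32);  // data<i-1:0> EOR (poly : Zeros(i-32))
    return (uint32_t)data;                       // remainder: data<31:0>
}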

Library pseudocode for shared/functions/crypto/AESInvMixColumns

bits(128) AESInvMixColumns(bits(128) op);

Library pseudocode for shared/functions/crypto/AESInvShiftRows

bits(128) AESInvShiftRows(bits(128) op);

Library pseudocode for shared/functions/crypto/AESInvSubBytes

bits(128) AESInvSubBytes(bits(128) op);

Library pseudocode for shared/functions/crypto/AESMixColumns

bits(128) AESMixColumns(bits(128) op);

Library pseudocode for shared/functions/crypto/AESShiftRows

bits(128) AESShiftRows(bits(128) op);

Library pseudocode for shared/functions/crypto/AESSubBytes

bits(128) AESSubBytes(bits(128) op);

Library pseudocode for shared/functions/crypto/HaveAESExt

// HaveAESExt()
// ============
// TRUE if AES cryptographic instructions support is implemented,
// FALSE otherwise.

boolean HaveAESExt()
    return boolean IMPLEMENTATION_DEFINED "Has AES Crypto instructions";

Library pseudocode for shared/functions/crypto/HaveBit128PMULLExt

// HaveBit128PMULLExt()
// ====================
// TRUE if 128 bit form of PMULL instructions support is implemented,
// FALSE otherwise.

boolean HaveBit128PMULLExt()
    return boolean IMPLEMENTATION_DEFINED "Has 128-bit form of PMULL instructions";

Library pseudocode for shared/functions/crypto/HaveSHA1Ext

// HaveSHA1Ext()
// =============
// TRUE if SHA1 cryptographic instructions support is implemented,
// FALSE otherwise.

boolean HaveSHA1Ext()
    return boolean IMPLEMENTATION_DEFINED "Has SHA1 Crypto instructions";

Library pseudocode for shared/functions/crypto/HaveSHA256Ext

// HaveSHA256Ext()
// ===============
// TRUE if SHA256 cryptographic instructions support is implemented,
// FALSE otherwise.

boolean HaveSHA256Ext()
    return boolean IMPLEMENTATION_DEFINED "Has SHA256 Crypto instructions";

Library pseudocode for shared/functions/crypto/HaveSHA3Ext

// HaveSHA3Ext()
// =============
// TRUE if SHA3 cryptographic instructions support is implemented,
// and when SHA1 and SHA2 basic cryptographic instructions support is implemented,
// FALSE otherwise.

boolean HaveSHA3Ext()
    if !HasArchVersion(ARMv8p2) || !(HaveSHA1Ext() && HaveSHA256Ext()) then
        return FALSE;
    return boolean IMPLEMENTATION_DEFINED "Has SHA3 Crypto instructions";

Library pseudocode for shared/functions/crypto/HaveSHA512Ext

// HaveSHA512Ext()
// ===============
// TRUE if SHA512 cryptographic instructions support is implemented,
// and when SHA1 and SHA2 basic cryptographic instructions support is implemented,
// FALSE otherwise.

boolean HaveSHA512Ext()
    if !HasArchVersion(ARMv8p2) || !(HaveSHA1Ext() && HaveSHA256Ext()) then
        return FALSE;
    return boolean IMPLEMENTATION_DEFINED "Has SHA512 Crypto instructions";

Library pseudocode for shared/functions/crypto/HaveSM3Ext

// HaveSM3Ext()
// ============
// TRUE if SM3 cryptographic instructions support is implemented,
// FALSE otherwise.

boolean HaveSM3Ext()
    if !HasArchVersion(ARMv8p2) then
        return FALSE;
    return boolean IMPLEMENTATION_DEFINED "Has SM3 Crypto instructions";

Library pseudocode for shared/functions/crypto/HaveSM4Ext

// HaveSM4Ext()
// ============
// TRUE if SM4 cryptographic instructions support is implemented,
// FALSE otherwise.

boolean HaveSM4Ext()
    if !HasArchVersion(ARMv8p2) then
        return FALSE;
    return boolean IMPLEMENTATION_DEFINED "Has SM4 Crypto instructions";

Library pseudocode for shared/functions/crypto/ROL

// ROL()
// =====

bits(N) ROL(bits(N) x, integer shift)
    assert shift >= 0 && shift <= N;
    if (shift == 0) then
        return x;
    return ROR(x, N-shift);

Library pseudocode for shared/functions/crypto/SHA256hash

// SHA256hash()
// ============

bits(128) SHA256hash(bits(128) X, bits(128) Y, bits(128) W, boolean part1)
    bits(32) chs, maj, t;

    for e = 0 to 3
        chs = SHAchoose(Y<31:0>, Y<63:32>, Y<95:64>);
        maj = SHAmajority(X<31:0>, X<63:32>, X<95:64>);
        t = Y<127:96> + SHAhashSIGMA1(Y<31:0>) + chs + Elem[W, e, 32];
        X<127:96> = t + X<127:96>;
        Y<127:96> = t + SHAhashSIGMA0(X<31:0>) + maj;
        <Y, X> = ROL(Y : X, 32);
    return (if part1 then X else Y);

Library pseudocode for shared/functions/crypto/SHAchoose

// SHAchoose()
// ===========

bits(32) SHAchoose(bits(32) x, bits(32) y, bits(32) z)
    return (((y EOR z) AND x) EOR z);

Library pseudocode for shared/functions/crypto/SHAhashSIGMA0

// SHAhashSIGMA0()
// ===============

bits(32) SHAhashSIGMA0(bits(32) x)
    return ROR(x, 2) EOR ROR(x, 13) EOR ROR(x, 22);

Library pseudocode for shared/functions/crypto/SHAhashSIGMA1

// SHAhashSIGMA1()
// ===============

bits(32) SHAhashSIGMA1(bits(32) x)
    return ROR(x, 6) EOR ROR(x, 11) EOR ROR(x, 25);

Library pseudocode for shared/functions/crypto/SHAmajority

// SHAmajority()
// =============

bits(32) SHAmajority(bits(32) x, bits(32) y, bits(32) z)
    return ((x AND y) OR ((x OR y) AND z));

Library pseudocode for shared/functions/crypto/SHAparity

// SHAparity()
// ===========

bits(32) SHAparity(bits(32) x, bits(32) y, bits(32) z)
    return (x EOR y EOR z);
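All three SHA selection functions transcribe to one-liners in C (names are ours): choose picks y or z per bit of x, majority takes the per-bit majority vote, and parity is the three-way XOR used by SHA-1:

#include <stdint.h>

static uint32_t sha_choose(uint32_t x, uint32_t y, uint32_t z)   { return ((y ^ z) & x) ^ z; }
static uint32_t sha_majority(uint32_t x, uint32_t y, uint32_t z) { return (x & y) | ((x | y) & z); }
static uint32_t sha_parity(uint32_t x, uint32_t y, uint32_t z)   { return x ^ y ^ z; }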

Library pseudocode for shared/functions/crypto/Sbox

// Sbox()
// ======
// Used in SM4E crypto instruction

bits(8) Sbox(bits(8) sboxin)
    bits(8) sboxout;
    bits(2048) sboxstring = 0xd690e9fecce13db716b614c228fb2c052b679a762abe04c3aa441326498606999c4250f491ef987a33540b43edcfac62e4b31ca9c908e89580df94fa758f3fa64707a7fcf37317ba83593c19e6854fa8686b81b27164da8bf8eb0f4b70569d351e240e5e6358d1a225227c3b01217887d40046579fd327524c3602e7a0c4c89eeabf8ad240c738b5a3f7f2cef96115a1e0ae5da49b341a55ad933230f58cb1e31df6e22e8266ca60c02923ab0d534e6fd5db3745defd8e2f03ff6a726d6c5b518d1baf92bbddbc7f11d95c411f105ad80ac13188a5cd7bbd2d74d012b8e5b4b08969974a0c96777e65b9f109c56ec68418f07dec3adc4d2079ee5f3ed7cb3948<2047:0>;

    sboxout = sboxstring<(255-UInt(sboxin))*8+7:(255-UInt(sboxin))*8>;
    return sboxout;
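Viewed in a conventional language, the 2048-bit sboxstring is simply a 256-entry byte table with entry 0 in the most significant byte, so the lookup reduces to an array index. A non-normative sketch, with the table supplied by the caller (names are ours):

#include <stdint.h>

// SM4 Sbox lookup, given the 256-byte table in the order the pseudocode's
// sboxstring constant lists it (entry 0 is the most significant byte).
static uint8_t sbox_lookup(const uint8_t table[256], uint8_t sboxin) {
    return table[sboxin];   // == sboxstring<(255-UInt(in))*8+7 : (255-UInt(in))*8>
}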

Library pseudocode for shared/functions/exclusive/ClearExclusiveByAddress

// Clear the global Exclusives monitors for all PEs EXCEPT processorid if they
// record any part of the physical address region of size bytes starting at paddress.
// It is IMPLEMENTATION DEFINED whether the global Exclusives monitor for processorid
// is also cleared if it records any part of the address region.

ClearExclusiveByAddress(FullAddress paddress, integer processorid, integer size);

Library pseudocode for shared/functions/exclusive/ClearExclusiveLocal

// Clear the local Exclusives monitor for the specified processorid.

ClearExclusiveLocal(integer processorid);

Library pseudocode for shared/functions/exclusive/ClearExclusiveMonitors

// ClearExclusiveMonitors()
// ========================
// Clear the local Exclusives monitor for the executing PE.

ClearExclusiveMonitors()
    ClearExclusiveLocal(ProcessorID());

Library pseudocode for shared/functions/exclusive/ExclusiveMonitorsStatus

// Returns '0' to indicate success if the last memory write by this PE was to
// the same physical address region endorsed by ExclusiveMonitorsPass().
// Returns '1' to indicate failure if address translation resulted in a different
// physical address.

bit ExclusiveMonitorsStatus();

Library pseudocode for shared/functions/exclusive/IsExclusiveGlobal

// Return TRUE if the global Exclusives monitor for processorid includes all of
// the physical address region of size bytes starting at paddress.

boolean IsExclusiveGlobal(FullAddress paddress, integer processorid, integer size);

Library pseudocode for shared/functions/exclusive/IsExclusiveLocal

// Return TRUE if the local Exclusives monitor for processorid includes all of
// the physical address region of size bytes starting at paddress.

boolean IsExclusiveLocal(FullAddress paddress, integer processorid, integer size);

Library pseudocode for shared/functions/exclusive/MarkExclusiveGlobal

// Record the physical address region of size bytes starting at paddress in
// the global Exclusives monitor for processorid.

MarkExclusiveGlobal(FullAddress paddress, integer processorid, integer size);

Library pseudocode for shared/functions/exclusive/MarkExclusiveLocal

// Record the physical address region of size bytes starting at paddress in
// the local Exclusives monitor for processorid.

MarkExclusiveLocal(FullAddress paddress, integer processorid, integer size);

Library pseudocode for shared/functions/exclusive/ProcessorID

// Return the ID of the currently executing PE.

integer ProcessorID();

Library pseudocode for shared/functions/extension/AArch32.HaveHPDExt

// AArch32.HaveHPDExt()
// ====================

boolean AArch32.HaveHPDExt()
    return HasArchVersion(ARMv8p2);

Library pseudocode for shared/functions/extension/AArch64.HaveHPDExt

// AArch64.HaveHPDExt()
// ====================

boolean AArch64.HaveHPDExt()
    return HasArchVersion(ARMv8p1);

Library pseudocode for shared/functions/extension/Have52BitPAExt

// Have52BitPAExt()
// ================

boolean Have52BitPAExt()
    return HasArchVersion(ARMv8p2);

Library pseudocode for shared/functions/extension/Have52BitVAExt

// Have52BitVAExt()
// ================

boolean Have52BitVAExt()
    return HasArchVersion(ARMv8p2);

Library pseudocode for shared/functions/extension/HaveAtomicExt

// HaveAtomicExt()
// ===============

boolean HaveAtomicExt()
    return HasArchVersion(ARMv8p1);

Library pseudocode for shared/functions/extension/HaveBTIExt

// HaveBTIExt()
// ============
// Returns TRUE if BTI implemented and FALSE otherwise

boolean HaveBTIExt()
    return HasArchVersion(ARMv8p5);

Library pseudocode for shared/functions/extension/HaveBlockBBM

// HaveBlockBBM()
// ==============
// Returns TRUE if support for changing block size without requiring break-before-make is implemented.

boolean HaveBlockBBM()
    return HasArchVersion(ARMv8p4);

Library pseudocode for shared/functions/extension/HaveCommonNotPrivateTransExt

// HaveCommonNotPrivateTransExt()
// ==============================

boolean HaveCommonNotPrivateTransExt()
    return HasArchVersion(ARMv8p2);

Library pseudocode for shared/functions/extension/HaveDITExt

// HaveDITExt()
// ============

boolean HaveDITExt()
    return HasArchVersion(ARMv8p4);

Library pseudocode for shared/functions/extension/HaveDOTPExt

// HaveDOTPExt()
// =============
// Returns TRUE if has Dot Product feature support, and FALSE otherwise.

boolean HaveDOTPExt()
    return HasArchVersion(ARMv8p4) || (HasArchVersion(ARMv8p2) && boolean IMPLEMENTATION_DEFINED "Has Dot Product extension");

Library pseudocode for shared/functions/extension/HaveDoubleFaultExt

// HaveDoubleFaultExt()
// ====================

boolean HaveDoubleFaultExt()
    return (HasArchVersion(ARMv8p4) && HaveEL(EL3) && !ELUsingAArch32(EL3) && HaveIESB());

Library pseudocode for shared/functions/extension/HaveDoubleLock

// HaveDoubleLock()
// ================
// Returns TRUE if support for the OS Double Lock is implemented

boolean HaveDoubleLock()
    return !HasArchVersion(ARMv8p4) || boolean IMPLEMENTATION_DEFINED "OS Double Lock is implemented";

Library pseudocode for shared/functions/extension/HaveE0PDExt

// HaveE0PDExt()
// =============
// Returns TRUE if support for constant fault times for unprivileged accesses
// to the memory map is implemented.

boolean HaveE0PDExt()
    return HasArchVersion(ARMv8p5);

Library pseudocode for shared/functions/extension/HaveExtendedCacheSets

// HaveExtendedCacheSets()
// =======================

boolean HaveExtendedCacheSets()
    return HasArchVersion(ARMv8p3);

Library pseudocode for shared/functions/extension/HaveExtendedECDebugEvents

// HaveExtendedECDebugEvents()
// ===========================

boolean HaveExtendedECDebugEvents()
    return HasArchVersion(ARMv8p2);

Library pseudocode for shared/functions/extension/HaveExtendedExecuteNeverExt

// HaveExtendedExecuteNeverExt()
// =============================

boolean HaveExtendedExecuteNeverExt()
    return HasArchVersion(ARMv8p2);

Library pseudocode for shared/functions/extension/HaveFCADDExt

// HaveFCADDExt()
// ==============

boolean HaveFCADDExt()
    return HasArchVersion(ARMv8p3);

Library pseudocode for shared/functions/extension/HaveFJCVTZSExt

// HaveFJCVTZSExt()
// ================

boolean HaveFJCVTZSExt()
    return HasArchVersion(ARMv8p3);

Library pseudocode for shared/functions/extension/HaveFP16MulNoRoundingToFP32Ext

// HaveFP16MulNoRoundingToFP32Ext()
// ================================
// Returns TRUE if has FP16 multiply with no intermediate rounding accumulate to FP32 instructions,
// and FALSE otherwise

boolean HaveFP16MulNoRoundingToFP32Ext()
    if !HaveFP16Ext() then return FALSE;
    if HasArchVersion(ARMv8p4) then return TRUE;
    return (HasArchVersion(ARMv8p2) && boolean IMPLEMENTATION_DEFINED "Has accumulate FP16 product into FP32 extension");

Library pseudocode for shared/functions/extension/HaveFlagFormatExt

// HaveFlagFormatExt()
// ===================
// Returns TRUE if flag format conversion instructions implemented
// and FALSE otherwise

boolean HaveFlagFormatExt()
    return HasArchVersion(ARMv8p5);

Library pseudocode for shared/functions/extension/HaveFlagManipulateExt

// HaveFlagManipulateExt()
// =======================
// Returns TRUE if has flag manipulate instructions, and FALSE otherwise

boolean HaveFlagManipulateExt()
    return HasArchVersion(ARMv8p4);

Library pseudocode for shared/functions/extension/HaveFrintExt

// HaveFrintExt()
// ==============
// Returns TRUE if FRINT instructions are implemented and FALSE otherwise

boolean HaveFrintExt()
    return HasArchVersion(ARMv8p5);

Library pseudocode for shared/functions/extension/HaveHPMDExt

// HaveHPMDExt()
// =============

boolean HaveHPMDExt()
    return HasArchVersion(ARMv8p1);

Library pseudocode for shared/functions/extension/HaveIESB

// HaveIESB()
// ==========

boolean HaveIESB()
    return (HaveRASExt() && boolean IMPLEMENTATION_DEFINED "Has Implicit Error Synchronization Barrier");

Library pseudocode for shared/functions/extension/HaveMPAMExt

// HaveMPAMExt()
// =============
// Returns TRUE if MPAM implemented and FALSE otherwise.

boolean HaveMPAMExt()
    return (HasArchVersion(ARMv8p2) && boolean IMPLEMENTATION_DEFINED "Has MPAM extension");

Library pseudocode for shared/functions/extension/HaveMTEExt

// HaveMTEExt()
// ============
// Returns TRUE if MTE implemented and FALSE otherwise.

boolean HaveMTEExt()
    if !HasArchVersion(ARMv8p5) then
        return FALSE;
    return boolean IMPLEMENTATION_DEFINED "Has MTE extension";

Library pseudocode for shared/functions/extension/HaveNV2Ext

// HaveNV2Ext()
// ============

boolean HaveNV2Ext()
    return HasArchVersion(ARMv8p4);

Library pseudocode for shared/functions/extension/HaveNVExt

// HaveNVExt()
// ===========

boolean HaveNVExt()
    return HasArchVersion(ARMv8p3);

Library pseudocode for shared/functions/extension/HaveNoSecurePMUDisableOverride

// HaveNoSecurePMUDisableOverride()
// ================================

boolean HaveNoSecurePMUDisableOverride()
    return HasArchVersion(ARMv8p2);

Library pseudocode for shared/functions/extension/HaveNoninvasiveDebugAuth

// HaveNoninvasiveDebugAuth()
// ==========================
// Returns FALSE if support for the removal of the non-invasive Debug controls is implemented.

boolean HaveNoninvasiveDebugAuth()
    return !HasArchVersion(ARMv8p4);

Library pseudocode for shared/functions/extension/HavePANExt

// HavePANExt()
// ============

boolean HavePANExt()
    return HasArchVersion(ARMv8p1);

Library pseudocode for shared/functions/extension/HavePageBasedHardwareAttributes

// HavePageBasedHardwareAttributes()
// =================================

boolean HavePageBasedHardwareAttributes()
    return HasArchVersion(ARMv8p2);

Library pseudocode for shared/functions/extension/HavePrivATExt

// HavePrivATExt()
// ===============

boolean HavePrivATExt()
    return HasArchVersion(ARMv8p2);

Library pseudocode for shared/functions/extension/HaveQRDMLAHExt

// HaveQRDMLAHExt()
// ================

boolean HaveQRDMLAHExt()
    return HasArchVersion(ARMv8p1);

boolean HaveAccessFlagUpdateExt()
    return HasArchVersion(ARMv8p1);

boolean HaveDirtyBitModifierExt()
    return HasArchVersion(ARMv8p1);

Library pseudocode for shared/functions/extension/HaveRASExt

// HaveRASExt()
// ============

boolean HaveRASExt()
    return (HasArchVersion(ARMv8p2) || boolean IMPLEMENTATION_DEFINED "Has RAS extension");

Library pseudocode for shared/functions/extension/HaveSBExt

// HaveSBExt()
// ===========
// Returns TRUE if has SB feature support, and FALSE otherwise.

boolean HaveSBExt()
    return HasArchVersion(ARMv8p5) || boolean IMPLEMENTATION_DEFINED "Has SB extension";

Library pseudocode for shared/functions/extension/HaveSSBSExt

// HaveSSBSExt()
// =============
// Returns TRUE if has SSBS feature support, and FALSE otherwise.

boolean HaveSSBSExt()
    return HasArchVersion(ARMv8p5) || boolean IMPLEMENTATION_DEFINED "Has SSBS extension";

Library pseudocode for shared/functions/extension/HaveSecureEL2Ext

// HaveSecureEL2Ext()
// ==================
// Returns TRUE if has Secure EL2, and FALSE otherwise

boolean HaveSecureEL2Ext()
    return HasArchVersion(ARMv8p4);

Library pseudocode for shared/functions/extension/HaveSecureExtDebugView

// HaveSecureExtDebugView()
// ========================
// Returns TRUE if support for Secure and Non-secure views of debug peripherals is implemented.

boolean HaveSecureExtDebugView()
    return HasArchVersion(ARMv8p4);

Library pseudocode for shared/functions/extension/HaveSelfHostedTrace

// HaveSelfHostedTrace()
// =====================

boolean HaveSelfHostedTrace()
    return HasArchVersion(ARMv8p4);

Library pseudocode for shared/functions/extension/HaveSmallPageTblExt

// HaveSmallPageTblExt()
// =====================
// Returns TRUE if has Small Page Table Support, and FALSE otherwise

boolean HaveSmallPageTblExt()
    return HasArchVersion(ARMv8p4) && boolean IMPLEMENTATION_DEFINED "Has Small Page Table extension";

Library pseudocode for shared/functions/extension/HaveStage2MemAttrControl

// HaveStage2MemAttrControl()
// ==========================
// Returns TRUE if support for Stage2 control of memory types and cacheability attributes is implemented.

boolean HaveStage2MemAttrControl()
    return HasArchVersion(ARMv8p4);

Library pseudocode for shared/functions/extension/HaveStatisticalProfiling

// HaveStatisticalProfiling()
// ==========================

boolean HaveStatisticalProfiling()
    return HasArchVersion(ARMv8p2);

Library pseudocode for shared/functions/extension/HaveTrapLoadStoreMultipleDeviceExt

// HaveTrapLoadStoreMultipleDeviceExt()
// ====================================

boolean HaveTrapLoadStoreMultipleDeviceExt()
    return HasArchVersion(ARMv8p2);

Library pseudocode for shared/functions/extension/HaveUA16Ext

// HaveUA16Ext()
// =============
// Returns TRUE if has extended unaligned memory access support, and FALSE otherwise

boolean HaveUA16Ext()
    return HasArchVersion(ARMv8p4);

Library pseudocode for shared/functions/extension/HaveUAOExt

// HaveUAOExt()
// ============

boolean HaveUAOExt()
    return HasArchVersion(ARMv8p2);

Library pseudocode for shared/functions/extension/HaveVirtHostExt

// HaveVirtHostExt()
// =================

boolean HaveVirtHostExt()
    return HasArchVersion(ARMv8p1);

Library pseudocode for shared/functions/extension/InsertIESBBeforeException

// If SCTLR_ELx.IESB is 1 when an exception is generated to ELx, any pending Unrecoverable
// SError interrupt must be taken before executing any instructions in the exception handler.
// However, this can be before the branch to the exception handler is made.

boolean InsertIESBBeforeException(bits(2) el);

Library pseudocode for shared/functions/float/fixedtofp/FixedToFP

// FixedToFP()
// ===========

// Convert M-bit fixed point OP with FBITS fractional bits to
// N-bit precision floating point, controlled by UNSIGNED and ROUNDING.

bits(N) FixedToFP(bits(M) op, integer fbits, boolean unsigned, FPCRType fpcr, FPRounding rounding)
    assert N IN {16,32,64};
    assert M IN {16,32,64};
    bits(N) result;
    assert fbits >= 0;
    assert rounding != FPRounding_ODD;

    // Correct signed-ness
    int_operand = Int(op, unsigned);

    // Scale by fractional bits and generate a real value
    real_operand = Real(int_operand) / 2.0^fbits;

    if real_operand == 0.0 then
        result = FPZero('0');
    else
        result = FPRound(real_operand, fpcr, rounding);

    return result;
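A non-normative C sketch of the core of FixedToFP, for a 32-bit operand converted to double (the helper name is ours; FPCR handling, flush-to-zero, and explicit rounding-mode selection are omitted). ldexp applies the 2^-fbits scaling in a single step, analogous to the single rounding performed by FPRound:

#include <math.h>
#include <stdint.h>

// Interpret op as a fixed-point number with fbits fractional bits and
// convert it to double: correct the signed-ness, then scale by 2^-fbits.
static double fixed_to_fp64(uint32_t op, int fbits, int is_unsigned) {
    double int_operand = is_unsigned ? (double)op : (double)(int32_t)op;
    return ldexp(int_operand, -fbits);   // int_operand / 2.0^fbits
}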

Library pseudocode for shared/functions/float/fpabs/FPAbs

// FPAbs()
// =======

bits(N) FPAbs(bits(N) op)
    assert N IN {16,32,64};
    return '0' : op<N-2:0>;

Library pseudocode for shared/functions/float/fpadd/FPAdd

// FPAdd()
// =======

bits(N) FPAdd(bits(N) op1, bits(N) op2, FPCRType fpcr)
    assert N IN {16,32,64};
    rounding = FPRoundingMode(fpcr);

    (type1,sign1,value1) = FPUnpack(op1, fpcr);
    (type2,sign2,value2) = FPUnpack(op2, fpcr);

    (done,result) = FPProcessNaNs(type1, type2, op1, op2, fpcr);
    if !done then
        inf1 = (type1 == FPType_Infinity); inf2 = (type2 == FPType_Infinity);
        zero1 = (type1 == FPType_Zero); zero2 = (type2 == FPType_Zero);
        if inf1 && inf2 && sign1 == NOT(sign2) then
            result = FPDefaultNaN();
            FPProcessException(FPExc_InvalidOp, fpcr);
        elsif (inf1 && sign1 == '0') || (inf2 && sign2 == '0') then
            result = FPInfinity('0');
        elsif (inf1 && sign1 == '1') || (inf2 && sign2 == '1') then
            result = FPInfinity('1');
        elsif zero1 && zero2 && sign1 == sign2 then
            result = FPZero(sign1);
        else
            result_value = value1 + value2;
            if result_value == 0.0 then  // Sign of exact zero result depends on rounding mode
                result_sign = if rounding == FPRounding_NEGINF then '1' else '0';
                result = FPZero(result_sign);
            else
                result = FPRound(result_value, fpcr, rounding);

    return result;

Library pseudocode for shared/functions/float/fpcompare/FPCompare

// FPCompare()
// ===========

bits(4) FPCompare(bits(N) op1, bits(N) op2, boolean signal_nans, FPCRType fpcr)
    assert N IN {16,32,64};
    (type1,sign1,value1) = FPUnpack(op1, fpcr);
    (type2,sign2,value2) = FPUnpack(op2, fpcr);
    if type1==FPType_SNaN || type1==FPType_QNaN || type2==FPType_SNaN || type2==FPType_QNaN then
        result = '0011';
        if type1==FPType_SNaN || type2==FPType_SNaN || signal_nans then
            FPProcessException(FPExc_InvalidOp, fpcr);
    else
        // All non-NaN cases can be evaluated on the values produced by FPUnpack()
        if value1 == value2 then
            result = '0110';
        elsif value1 < value2 then
            result = '1000';
        else  // value1 > value2
            result = '0010';
    return result;
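The result nibble is the NZCV flag set written by the compare instructions: unordered = '0011', equal = '0110', less-than = '1000', greater-than = '0010'. A non-normative C sketch for doubles (the name is ours; SNaN signalling and FPCR trap handling are omitted):

#include <math.h>

// Return the NZCV nibble produced by a floating-point compare.
static unsigned fp_compare(double a, double b) {
    if (isnan(a) || isnan(b)) return 0x3;  // '0011': unordered
    if (a == b)               return 0x6;  // '0110': equal
    if (a < b)                return 0x8;  // '1000': less than
    return 0x2;                            // '0010': greater than
}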

Library pseudocode for shared/functions/float/fpcompareeq/FPCompareEQ

// FPCompareEQ()
// =============

boolean FPCompareEQ(bits(N) op1, bits(N) op2, FPCRType fpcr)
    assert N IN {16,32,64};
    (type1,sign1,value1) = FPUnpack(op1, fpcr);
    (type2,sign2,value2) = FPUnpack(op2, fpcr);
    if type1==FPType_SNaN || type1==FPType_QNaN || type2==FPType_SNaN || type2==FPType_QNaN then
        result = FALSE;
        if type1==FPType_SNaN || type2==FPType_SNaN then
            FPProcessException(FPExc_InvalidOp, fpcr);
    else
        // All non-NaN cases can be evaluated on the values produced by FPUnpack()
        result = (value1 == value2);
    return result;

Library pseudocode for shared/functions/float/fpcomparege/FPCompareGE

// FPCompareGE()
// =============

boolean FPCompareGE(bits(N) op1, bits(N) op2, FPCRType fpcr)
    assert N IN {16,32,64};
    (type1,sign1,value1) = FPUnpack(op1, fpcr);
    (type2,sign2,value2) = FPUnpack(op2, fpcr);
    if type1==FPType_SNaN || type1==FPType_QNaN || type2==FPType_SNaN || type2==FPType_QNaN then
        result = FALSE;
        FPProcessException(FPExc_InvalidOp, fpcr);
    else
        // All non-NaN cases can be evaluated on the values produced by FPUnpack()
        result = (value1 >= value2);
    return result;

Library pseudocode for shared/functions/float/fpmuladdhfpcomparegt/FPProcessNaNs3HFPCompareGT

// FPProcessNaNs3H() // ================= // FPCompareGT() // ============= (boolean, bits(N)) FPProcessNaNs3H(booleanFPType type1,FPCompareGT(bits(N) op1, bits(N) op2, FPType type2, FPType type3, bits(N) op1, bits(N DIV 2) op2, bits(N DIV 2) op3, FPCRType fpcr) assert N IN {32,64}; bits(N) result; if type1 == assert N IN {16,32,64}; (type1,sign1,value1) = FPType_SNaNFPUnpack then done = TRUE; result =(op1, fpcr); (type2,sign2,value2) = FPProcessNaNFPUnpack(type1, op1, fpcr); elsif type2 ==(op2, fpcr); if type1== FPType_SNaN then done = TRUE; result =|| type1== FPConvertNaNFPType_QNaN(|| type2==FPProcessNaN(type2, op2, fpcr)); elsif type3 == FPType_SNaN then done = TRUE; result =|| type2== FPConvertNaN(FPProcessNaN(type3, op3, fpcr)); elsif type1 == FPType_QNaN then done = TRUE; result = result = FALSE; FPProcessNaNFPProcessException(type1, op1, fpcr); elsif type2 ==( FPType_QNaNFPExc_InvalidOp then done = TRUE; result = FPConvertNaN(FPProcessNaN(type2, op2, fpcr)); elsif type3 == FPType_QNaN then done = TRUE; result = FPConvertNaN(FPProcessNaN(type3, op3, fpcr)); else done = FALSE; result = Zeros(); // 'Don't care' result return (done, result);, fpcr); else // All non-NaN cases can be evaluated on the values produced by FPUnpack() result = (value1 > value2); return result;

Library pseudocode for shared/functions/float/fpmulxfpconvert/FPMulXFPConvert

// FPMulX() // ======== // FPConvert() // =========== bits(N)// Convert floating point OP with N-bit precision to M-bit precision, // with rounding controlled by ROUNDING. // This is used by the FP-to-FP conversion instructions and so for // half-precision data ignores FZ16, but observes AHP. bits(M) FPMulX(bits(N) op1, bits(N) op2,FPConvert(bits(N) op, FPCRType fpcr) assert N IN {16,32,64}; bits(N) result; (type1,sign1,value1) =fpcr, FPUnpackFPRounding(op1, fpcr); (type2,sign2,value2) =rounding) assert M IN {16,32,64}; assert N IN {16,32,64}; bits(M) result; // Unpack floating-point operand optionally with flush-to-zero. (type,sign,value) = FPUnpackFPUnpackCV(op2, fpcr); (done,result) =(op, fpcr); alt_hp = (M == 16) && (fpcr.AHP == '1'); if type == FPProcessNaNsFPType_SNaN(type1, type2, op1, op2, fpcr); if !done then inf1 = (type1 ==|| type == FPType_InfinityFPType_QNaN); inf2 = (type2 ==then if alt_hp then result = FPZero(sign); elsif fpcr.DN == '1' then result = FPDefaultNaN(); else result = FPConvertNaN(op); if type == FPType_SNaN || alt_hp then FPProcessException(FPExc_InvalidOp,fpcr); elsif type == FPType_Infinity); zero1 = (type1 ==then if alt_hp then result = sign: FPType_ZeroOnes); zero2 = (type2 ==(M-1); FPType_ZeroFPProcessException); if (inf1 && zero2) || (zero1 && inf2) then result =( FPTwoFPExc_InvalidOp(sign1 EOR sign2); elsif inf1 || inf2 then , fpcr); else result = FPInfinity(sign1 EOR sign2); elsif zero1 || zero2 then result =(sign); elsif type == FPType_Zero then result = FPZero(sign1 EOR sign2); else result =(sign); else result = (value, fpcr, rounding); return result; // FPConvert() // =========== bits(M) FPConvert(bits(N) op, FPCRType fpcr) return FPConvert(op, fpcr, FPRoundingModeFPRoundFPRoundCV(value1*value2, fpcr); return result;(fpcr));

Library pseudocode for shared/functions/float/fpnegfpconvertnan/FPNegFPConvertNaN

// FPNeg() // ======= // FPConvertNaN() // ============== bits(N)// Converts a NaN of one floating-point type to another bits(M) FPNeg(bits(N) op) FPConvertNaN(bits(N) op) assert N IN {16,32,64}; return NOT(op<N-1>) : op<N-2:0>; assert M IN {16,32,64}; bits(M) result; bits(51) frac; sign = op<N-1>; // Unpack payload from input NaN case N of when 64 frac = op<50:0>; when 32 frac = op<21:0>:Zeros(29); when 16 frac = op<8:0>:Zeros(42); // Repack payload into output NaN, while // converting an SNaN to a QNaN. case M of when 64 result = sign:Ones(M-52):frac; when 32 result = sign:Ones(M-23):frac<50:29>; when 16 result = sign:Ones(M-10):frac<50:42>; return result;

Library pseudocode for shared/functions/float/fponepointfivefpcrtype/FPOnePointFiveFPCRType

// FPOnePointFive() // ================ bits(N)type FPOnePointFive(bit sign) assert N IN {16,32,64}; constant integer E = (if N == 16 then 5 elsif N == 32 then 8 else 11); constant integer F = N - (E + 1); exp = '0':FPCRType;Ones(E-1); frac = '1':Zeros(F-1); return sign : exp : frac;

Library pseudocode for shared/functions/float/fpprocessexceptionfpdecoderm/FPProcessExceptionFPDecodeRM

// FPProcessException() // ==================== // // The 'fpcr' argument supplies FPCR control bits. Status information is // updated directly in the FPSR where appropriate.// FPDecodeRM() // ============ // Decode most common AArch32 floating-point rounding encoding. FPRounding FPProcessException(FPDecodeRM(bits(2) rm) case rm of when '00' returnFPExcFPRounding_TIEAWAY exception,; // A when '01' return FPCRTypeFPRounding_TIEEVEN fpcr) // Determine the cumulative exception bit number case exception of when; // N when '10' return FPExc_InvalidOpFPRounding_POSINF cumul = 0; when; // P when '11' return FPExc_DivideByZeroFPRounding_NEGINF cumul = 1; when FPExc_Overflow cumul = 2; when FPExc_Underflow cumul = 3; when FPExc_Inexact cumul = 4; when FPExc_InputDenorm cumul = 7; enable = cumul + 8; if fpcr<enable> == '1' then // Trapping of the exception enabled. // It is IMPLEMENTATION DEFINED whether the enable bit may be set at all, and // if so then how exceptions may be accumulated before calling FPTrapException() IMPLEMENTATION_DEFINED "floating-point trap handling"; elsif UsingAArch32() then // Set the cumulative exception bit FPSCR<cumul> = '1'; else // Set the cumulative exception bit FPSR<cumul> = '1'; return;; // M

Library pseudocode for shared/functions/float/fpprocessnanfpdecoderounding/FPProcessNaNFPDecodeRounding

// FPProcessNaN() // ============== // FPDecodeRounding() // ================== bits(N)// Decode floating-point rounding mode and common AArch64 encoding. FPRounding FPProcessNaN(FPDecodeRounding(bits(2) rmode) case rmode of when '00' returnFPTypeFPRounding_TIEEVEN type, bits(N) op,; // N when '01' return FPCRTypeFPRounding_POSINF fpcr) assert N IN {16,32,64}; assert type IN {; // P when '10' returnFPType_QNaNFPRounding_NEGINF,; // M when '11' return FPType_SNaNFPRounding_ZERO}; case N of when 16 topfrac = 9; when 32 topfrac = 22; when 64 topfrac = 51; result = op; if type == FPType_SNaN then result<topfrac> = '1'; FPProcessException(FPExc_InvalidOp, fpcr); if fpcr.DN == '1' then // DefaultNaN requested result = FPDefaultNaN(); return result;; // Z

Library pseudocode for shared/functions/float/fpprocessnansfpdefaultnan/FPProcessNaNsFPDefaultNaN

// FPProcessNaNs() // =============== // // The boolean part of the return value says whether a NaN has been found and // processed. The bits(N) part is only relevant if it has and supplies the // result of the operation. // // The 'fpcr' argument supplies FPCR control bits. Status information is // updated directly in the FPSR where appropriate. // FPDefaultNaN() // ============== (boolean, bits(N))bits(N) FPProcessNaNs(FPDefaultNaN() assert N IN {16,32,64}; constant integer E = (if N == 16 then 5 elsif N == 32 then 8 else 11); constant integer F = N - (E + 1); sign = '0'; exp =FPTypeOnes type1, FPType type2, bits(N) op1, bits(N) op2, FPCRType fpcr) assert N IN {16,32,64}; if type1 == FPType_SNaN then done = TRUE; result = FPProcessNaN(type1, op1, fpcr); elsif type2 == FPType_SNaN then done = TRUE; result = FPProcessNaN(type2, op2, fpcr); elsif type1 == FPType_QNaN then done = TRUE; result = FPProcessNaN(type1, op1, fpcr); elsif type2 == FPType_QNaN then done = TRUE; result = FPProcessNaN(type2, op2, fpcr); else done = FALSE; result =(E); frac = '1': Zeros(); // 'Don't care' result return (done, result);(F-1); return sign : exp : frac;

Library pseudocode for shared/functions/float/fpprocessnans3fpdiv/FPProcessNaNs3FPDiv

// FPProcessNaNs3() // ================ // // The boolean part of the return value says whether a NaN has been found and // processed. The bits(N) part is only relevant if it has and supplies the // result of the operation. // // The 'fpcr' argument supplies FPCR control bits. Status information is // updated directly in the FPSR where appropriate. // FPDiv() // ======= (boolean, bits(N))bits(N) FPProcessNaNs3(FPDiv(bits(N) op1, bits(N) op2,FPTypeFPCRType type1,fpcr) assert N IN {16,32,64}; (type1,sign1,value1) = FPTypeFPUnpack type2,(op1, fpcr); (type2,sign2,value2) = FPTypeFPUnpack type3, bits(N) op1, bits(N) op2, bits(N) op3,(op2, fpcr); (done,result) = FPCRTypeFPProcessNaNs fpcr) assert N IN {16,32,64}; if type1 ==(type1, type2, op1, op2, fpcr); if !done then inf1 = (type1 == FPType_SNaNFPType_Infinity then done = TRUE; result =); inf2 = (type2 == FPProcessNaNFPType_Infinity(type1, op1, fpcr); elsif type2 ==); zero1 = (type1 == FPType_SNaNFPType_Zero then done = TRUE; result =); zero2 = (type2 == FPProcessNaNFPType_Zero(type2, op2, fpcr); elsif type3 ==); if (inf1 && inf2) || (zero1 && zero2) then result = FPType_SNaNFPDefaultNaN then done = TRUE; result =(); FPProcessNaNFPProcessException(type3, op3, fpcr); elsif type1 ==( FPType_QNaNFPExc_InvalidOp then done = TRUE; result =, fpcr); elsif inf1 || zero2 then result = FPProcessNaNFPInfinity(type1, op1, fpcr); elsif type2 ==(sign1 EOR sign2); if !inf1 then FPType_QNaNFPProcessException then done = TRUE; result =( FPProcessNaNFPExc_DivideByZero(type2, op2, fpcr); elsif type3 ==, fpcr); elsif zero1 || inf2 then result = FPType_QNaNFPZero then done = TRUE; result =(sign1 EOR sign2); else result = FPProcessNaNFPRound(type3, op3, fpcr); else done = FALSE; result = Zeros(); // 'Don't care' result return (done, result);(value1/value2, fpcr); return result;

Library pseudocode for shared/functions/float/fprecipestimatefpexc/FPRecipEstimateFPExc

// FPRecipEstimate() // ================= bits(N)enumeration FPRecipEstimate(bits(N) operand,FPExc { FPCRType fpcr) assert N IN {16,32,64}; (type,sign,value) =FPExc_InvalidOp, FPUnpack(operand, fpcr); if type ==FPExc_DivideByZero, FPType_SNaN || type ==FPExc_Overflow, FPType_QNaN then result =FPExc_Underflow, FPProcessNaN(type, operand, fpcr); elsif type ==FPExc_Inexact, FPType_Infinity then result = FPZero(sign); elsif type == FPType_Zero then result = FPInfinity(sign); FPProcessException(FPExc_DivideByZero, fpcr); elsif ( (N == 16 && Abs(value) < 2.0^-16) || (N == 32 && Abs(value) < 2.0^-128) || (N == 64 && Abs(value) < 2.0^-1024) ) then case FPRoundingMode(fpcr) of when FPRounding_TIEEVEN overflow_to_inf = TRUE; when FPRounding_POSINF overflow_to_inf = (sign == '0'); when FPRounding_NEGINF overflow_to_inf = (sign == '1'); when FPRounding_ZERO overflow_to_inf = FALSE; result = if overflow_to_inf then FPInfinity(sign) else FPMaxNormal(sign); FPProcessException(FPExc_Overflow, fpcr); FPProcessException(FPExc_Inexact, fpcr); elsif ((fpcr.FZ == '1' && N != 16) || (fpcr.FZ16 == '1' && N == 16)) && ( (N == 16 && Abs(value) >= 2.0^14) || (N == 32 && Abs(value) >= 2.0^126) || (N == 64 && Abs(value) >= 2.0^1022) ) then // Result flushed to zero of correct sign result = FPZero(sign); if UsingAArch32() then FPSCR.UFC = '1'; else FPSR.UFC = '1'; else // Scale to a fixed point value in the range 0.5 <= x < 1.0 in steps of 1/512, and // calculate result exponent. Scaled value has copied sign bit, // exponent = 1022 = double-precision biased version of -1, // fraction = original fraction case N of when 16 fraction = operand<9:0> : Zeros(42); exp = UInt(operand<14:10>); when 32 fraction = operand<22:0> : Zeros(29); exp = UInt(operand<30:23>); when 64 fraction = operand<51:0>; exp = UInt(operand<62:52>); if exp == 0 then if fraction<51> == 0 then exp = -1; fraction = fraction<49:0>:'00'; else fraction = fraction<50:0>:'0'; integer scaled = UInt('1':fraction<51:44>); case N of when 16 result_exp = 29 - exp; // In range 29-30 = -1 to 29+1 = 30 when 32 result_exp = 253 - exp; // In range 253-254 = -1 to 253+1 = 254 when 64 result_exp = 2045 - exp; // In range 2045-2046 = -1 to 2045+1 = 2046 // scaled is in range 256..511 representing a fixed-point number in range [0.5..1.0) estimate = RecipEstimate(scaled); // estimate is in the range 256..511 representing a fixed point result in the range [1.0..2.0) // Convert to scaled floating point result with copied sign bit, // high-order bits from estimate, and exponent calculated above. fraction = estimate<7:0> : Zeros(44); if result_exp == 0 then fraction = '1' : fraction<51:1>; elsif result_exp == -1 then fraction = '01' : fraction<51:2>; result_exp = 0; case N of when 16 result = sign : result_exp<N-12:0> : fraction<51:42>; when 32 result = sign : result_exp<N-25:0> : fraction<51:29>; when 64 result = sign : result_exp<N-54:0> : fraction<51:0>; return result;FPExc_InputDenorm};

Library pseudocode for shared/functions/float/fprecipestimatefpinfinity/RecipEstimateFPInfinity

// Compute estimate of reciprocal of 9-bit fixed-point number // // a is in range 256 .. 511 representing a number in the range 0.5 <= x < 1.0. // result is in the range 256 .. 511 representing a number in the range in the range 1.0 to 511/256. // FPInfinity() // ============ integerbits(N) RecipEstimate(integer a) assert 256 <= a && a < 512; a = a*2+1; // round to nearest integer b = (2 ^ 19) DIV a; r = (b+1) DIV 2; // round to nearest assert 256 <= r && r < 512; return r;FPInfinity(bit sign) assert N IN {16,32,64}; constant integer E = (if N == 16 then 5 elsif N == 32 then 8 else 11); constant integer F = N - (E + 1); exp =Ones(E); frac = Zeros(F); return sign : exp : frac;

Library pseudocode for shared/functions/float/fprecpxfpmax/FPRecpXFPMax

// FPRecpX() // ========= // FPMax() // ======= bits(N) FPRecpX(bits(N) op,FPMax(bits(N) op1, bits(N) op2, FPCRType fpcr) assert N IN {16,32,64}; case N of when 16 esize = 5; when 32 esize = 8; when 64 esize = 11; bits(N) result; bits(esize) exp; bits(esize) max_exp; bits(N-(esize+1)) frac = (type1,sign1,value1) = ZerosFPUnpack(); case N of when 16 exp = op<10+esize-1:10>; when 32 exp = op<23+esize-1:23>; when 64 exp = op<52+esize-1:52>; max_exp =(op1, fpcr); (type2,sign2,value2) = Ones(esize) - 1; (type,sign,value) = FPUnpack(op, fpcr); if type ==(op2, fpcr); (done,result) = FPType_SNaNFPProcessNaNs || type ==(type1, type2, op1, op2, fpcr); if !done then if value1 > value2 then (type,sign,value) = (type1,sign1,value1); else (type,sign,value) = (type2,sign2,value2); if type == FPType_QNaNFPType_Infinity then result = FPProcessNaNFPInfinity(type, op, fpcr); else if(sign); elsif type == then sign = sign1 AND sign2; // Use most positive sign result = FPZero(sign); else // The use of FPRound() covers the case where there is a trapped underflow exception // for a denormalized number even though the result is exact. result = FPRoundIsZeroFPType_Zero(exp) then // Zero and denormals result = sign:max_exp:frac; else // Infinities and normals result = sign:NOT(exp):frac; (value, fpcr); return result;

Library pseudocode for shared/functions/float/fproundfpmaxnormal/FPRoundFPMaxNormal

// FPRound() // ========= // Used by data processing and int/fixed <-> FP conversion instructions. // For half-precision data it ignores AHP, and observes FZ16. // FPMaxNormal() // ============= bits(N) FPRound(real op,FPMaxNormal(bit sign) assert N IN {16,32,64}; constant integer E = (if N == 16 then 5 elsif N == 32 then 8 else 11); constant integer F = N - (E + 1); exp = FPCRTypeOnes fpcr,(E-1):'0'; frac = FPRounding rounding) fpcr.AHP = '0'; return FPRoundBase(op, fpcr, rounding); // Convert a real number OP into an N-bit floating-point value using the // supplied rounding mode RMODE. bits(N) FPRoundBase(real op, FPCRType fpcr, FPRounding rounding) assert N IN {16,32,64}; assert op != 0.0; assert rounding != FPRounding_TIEAWAY; bits(N) result; // Obtain format parameters - minimum exponent, numbers of exponent and fraction bits. if N == 16 then minimum_exp = -14; E = 5; F = 10; elsif N == 32 then minimum_exp = -126; E = 8; F = 23; else // N == 64 minimum_exp = -1022; E = 11; F = 52; // Split value into sign, unrounded mantissa and exponent. if op < 0.0 then sign = '1'; mantissa = -op; else sign = '0'; mantissa = op; exponent = 0; while mantissa < 1.0 do mantissa = mantissa * 2.0; exponent = exponent - 1; while mantissa >= 2.0 do mantissa = mantissa / 2.0; exponent = exponent + 1; // Deal with flush-to-zero. if ((fpcr.FZ == '1' && N != 16) || (fpcr.FZ16 == '1' && N == 16)) && exponent < minimum_exp then // Flush-to-zero never generates a trapped exception if UsingAArch32() then FPSCR.UFC = '1'; else FPSR.UFC = '1'; return FPZero(sign); // Start creating the exponent value for the result. Start by biasing the actual exponent // so that the minimum exponent becomes 1, lower values 0 (indicating possible underflow). biased_exp = Max(exponent - minimum_exp + 1, 0); if biased_exp == 0 then mantissa = mantissa / 2.0^(minimum_exp - exponent); // Get the unrounded mantissa as an integer, and the "units in last place" rounding error. int_mant = RoundDown(mantissa * 2.0^F); // < 2.0^F if biased_exp == 0, >= 2.0^F if not error = mantissa * 2.0^F - Real(int_mant); // Underflow occurs if exponent is too small before rounding, and result is inexact or // the Underflow exception is trapped. if biased_exp == 0 && (error != 0.0 || fpcr.UFE == '1') then FPProcessException(FPExc_Underflow, fpcr); // Round result according to rounding mode. case rounding of when FPRounding_TIEEVEN round_up = (error > 0.5 || (error == 0.5 && int_mant<0> == '1')); overflow_to_inf = TRUE; when FPRounding_POSINF round_up = (error != 0.0 && sign == '0'); overflow_to_inf = (sign == '0'); when FPRounding_NEGINF round_up = (error != 0.0 && sign == '1'); overflow_to_inf = (sign == '1'); when FPRounding_ZERO, FPRounding_ODD round_up = FALSE; overflow_to_inf = FALSE; if round_up then int_mant = int_mant + 1; if int_mant == 2^F then // Rounded up from denormalized to normalized biased_exp = 1; if int_mant == 2^(F+1) then // Rounded up to next exponent biased_exp = biased_exp + 1; int_mant = int_mant DIV 2; // Handle rounding to odd aka Von Neumann rounding if error != 0.0 && rounding == FPRounding_ODD then int_mant<0> = '1'; // Deal with overflow and generate result. 
if N != 16 || fpcr.AHP == '0' then // Single, double or IEEE half precision if biased_exp >= 2^E - 1 then result = if overflow_to_inf then FPInfinity(sign) else FPMaxNormal(sign); FPProcessException(FPExc_Overflow, fpcr); error = 1.0; // Ensure that an Inexact exception occurs else result = sign : biased_exp<N-F-2:0> : int_mant<F-1:0>; else // Alternative half precision if biased_exp >= 2^E then result = sign : Ones(N-1); FPProcessException(FPExc_InvalidOp, fpcr); error = 0.0; // Ensure that an Inexact exception does not occur else result = sign : biased_exp<N-F-2:0> : int_mant<F-1:0>; // Deal with Inexact exception. if error != 0.0 then FPProcessException(FPExc_Inexact, fpcr); return result; // FPRound() // ========= bits(N) FPRound(real op, FPCRType fpcr) return FPRound(op, fpcr, FPRoundingMode(fpcr));(F); return sign : exp : frac;

Library pseudocode for shared/functions/float/fproundfpmaxnum/FPRoundCVFPMaxNum

// FPRoundCV() // =========== // Used for FP <-> FP conversion instructions. // For half-precision data ignores FZ16 and observes AHP. // FPMaxNum() // ========== bits(N) FPRoundCV(real op,FPMaxNum(bits(N) op1, bits(N) op2, FPCRType fpcr,fpcr) assert N IN {16,32,64}; (type1,-,-) = FPRoundingFPUnpack rounding) fpcr.FZ16 = '0'; return(op1, fpcr); (type2,-,-) = (op2, fpcr); // treat a single quiet-NaN as -Infinity if type1 == FPType_QNaN && type2 != FPType_QNaN then op1 = FPInfinity('1'); elsif type1 != FPType_QNaN && type2 == FPType_QNaN then op2 = FPInfinity('1'); return FPMaxFPRoundBaseFPUnpack(op, fpcr, rounding);(op1, op2, fpcr);

Library pseudocode for shared/functions/float/fproundingfpmin/FPRoundingFPMin

enumeration// FPMin() // ======= bits(N) FPRounding {FPMin(bits(N) op1, bits(N) op2,FPRounding_TIEEVEN,fpcr) assert N IN {16,32,64}; (type1,sign1,value1) = FPRounding_POSINF,(op1, fpcr); (type2,sign2,value2) = FPRounding_NEGINF,(op2, fpcr); (done,result) = FPRounding_ZERO,(type1, type2, op1, op2, fpcr); if !done then if value1 < value2 then (type,sign,value) = (type1,sign1,value1); else (type,sign,value) = (type2,sign2,value2); if type == FPRounding_TIEAWAY,then result = (sign); elsif type == FPType_Zero then sign = sign1 OR sign2; // Use most negative sign result = FPZero(sign); else // The use of FPRound() covers the case where there is a trapped underflow exception // for a denormalized number even though the result is exact. result = FPRoundFPRounding_ODD};(value, fpcr); return result;

Library pseudocode for shared/functions/float/fproundingmodefpminnum/FPRoundingModeFPMinNum

// FPRoundingMode() // ================ // FPMinNum() // ========== // Return the current floating-point rounding mode. FPRoundingbits(N) FPRoundingMode(FPMinNum(bits(N) op1, bits(N) op2,FPCRType fpcr) return assert N IN {16,32,64}; (type1,-,-) = (op1, fpcr); (type2,-,-) = FPUnpack(op2, fpcr); // Treat a single quiet-NaN as +Infinity if type1 == FPType_QNaN && type2 != FPType_QNaN then op1 = FPInfinity('0'); elsif type1 != FPType_QNaN && type2 == FPType_QNaN then op2 = FPInfinity('0'); return FPMinFPDecodeRoundingFPUnpack(fpcr.RMode);(op1, op2, fpcr);

Library pseudocode for shared/functions/float/fproundintfpmul/FPRoundIntFPMul

// FPRoundInt() // ============ // Round OP to nearest integral floating point value using rounding mode ROUNDING. // If EXACT is TRUE, set FPSR.IXC if result is not numerically equal to OP. // FPMul() // ======= bits(N) FPRoundInt(bits(N) op,FPMul(bits(N) op1, bits(N) op2, FPCRType fpcr,fpcr) assert N IN {16,32,64}; (type1,sign1,value1) = FPRoundingFPUnpack rounding, boolean exact) assert rounding !=(op1, fpcr); (type2,sign2,value2) = FPRounding_ODD; assert N IN {16,32,64}; // Unpack using FPCR to determine if subnormals are flushed-to-zero (type,sign,value) = FPUnpack(op, fpcr); if type ==(op2, fpcr); (done,result) = FPType_SNaNFPProcessNaNs || type ==(type1, type2, op1, op2, fpcr); if !done then inf1 = (type1 == FPType_QNaNFPType_Infinity then result =); inf2 = (type2 == FPProcessNaN(type, op, fpcr); elsif type == FPType_Infinity then result =); zero1 = (type1 == FPInfinityFPType_Zero(sign); elsif type ==); zero2 = (type2 == FPType_Zero then result =); if (inf1 && zero2) || (zero1 && inf2) then result = FPZeroFPDefaultNaN(sign); else // extract integer component int_result =(); RoundDownFPProcessException(value); error = value - Real(int_result); // Determine whether supplied rounding mode requires an increment case rounding of when( FPRounding_TIEEVENFPExc_InvalidOp round_up = (error > 0.5 || (error == 0.5 && int_result<0> == '1')); when, fpcr); elsif inf1 || inf2 then result = FPRounding_POSINFFPInfinity round_up = (error != 0.0); when(sign1 EOR sign2); elsif zero1 || zero2 then result = FPRounding_NEGINF round_up = FALSE; when FPRounding_ZERO round_up = (error != 0.0 && int_result < 0); when FPRounding_TIEAWAY round_up = (error > 0.5 || (error == 0.5 && int_result >= 0)); if round_up then int_result = int_result + 1; // Convert integer value into an equivalent real value real_result = Real(int_result); // Re-encode as a floating-point value, result is always exact if real_result == 0.0 then result = FPZero(sign); (sign1 EOR sign2); else result = FPRound(real_result, fpcr, FPRounding_ZERO); // Generate inexact exceptions if error != 0.0 && exact then FPProcessException(FPExc_Inexact, fpcr); (value1*value2, fpcr); return result;

Library pseudocode for shared/functions/float/fproundintnfpmuladd/FPRoundIntNFPMulAdd

// FPRoundIntN() // ============= // FPMulAdd() // ========== // // Calculates addend + op1*op2 with a single rounding. bits(N) FPRoundIntN(bits(N) op,FPMulAdd(bits(N) addend, bits(N) op1, bits(N) op2, FPCRType fpcr,fpcr) assert N IN {16,32,64}; rounding = FPRoundingFPRoundingMode rounding, integer intsize) assert rounding !=(fpcr); (typeA,signA,valueA) = FPRounding_ODDFPUnpack; assert N IN {32,64}; assert intsize IN {32, 64}; integer exp; constant integer E = (if N == 32 then 8 else 11); constant integer F = N - (E + 1); // Unpack using FPCR to determine if subnormals are flushed-to-zero (type,sign,value) =(addend, fpcr); (type1,sign1,value1) = FPUnpack(op, fpcr); if type IN {(op1, fpcr); (type2,sign2,value2) =FPType_SNaNFPUnpack,(op2, fpcr); inf1 = (type1 == FPType_QNaN, FPType_Infinity} then if N == 32 then exp = 126 + intsize; result = '1':exp<(E-1):0>:); zero1 = (type1 ==ZerosFPType_Zero(F); else exp = 1022+intsize; result = '1':exp<(E-1):0>:); inf2 = (type2 ==ZerosFPType_Infinity(F);); zero2 = (type2 == FPProcessException(FPExc_InvalidOp, fpcr); elsif type == FPType_Zero then result =); (done,result) = FPZeroFPProcessNaNs3(sign); else // Extract integer component int_result =(typeA, type1, type2, addend, op1, op2, fpcr); if typeA == RoundDownFPType_QNaN(value); error = value - Real(int_result); // Determine whether supplied rounding mode requires an increment case rounding of when&& ((inf1 && zero2) || (zero1 && inf2)) then result = FPRounding_TIEEVENFPDefaultNaN round_up = error > 0.5 || (error == 0.5 && int_result<0> == '1'); when(); FPRounding_POSINFFPProcessException round_up = error != 0.0; when( FPRounding_NEGINFFPExc_InvalidOp round_up = FALSE; when, fpcr); if !done then infA = (typeA == FPRounding_ZEROFPType_Infinity round_up = error != 0.0 && int_result < 0; when); zeroA = (typeA == FPRounding_TIEAWAYFPType_Zero round_up = error > 0.5 || (error == 0.5 && int_result >= 0); ); if round_up then int_result = int_result + 1; // Determine sign and type product will have if it does not cause an Invalid // Operation. signP = sign1 EOR sign2; infP = inf1 || inf2; zeroP = zero1 || zero2; if int_result > 2^(intsize-1)-1 || int_result < -1*2^(intsize-1) then if N == 32 then exp = 126 + intsize; result = '1':exp<(E-1):0>: // Non SNaN-generated Invalid Operation cases are multiplies of zero by infinity and // additions of opposite-signed infinities. if (inf1 && zero2) || (zero1 && inf2) || (infA && infP && signA != signP) then result =ZerosFPDefaultNaN(F); else exp = 1022 + intsize; result = '1':exp<(E-1):0>:();Zeros(F); FPProcessException(FPExc_InvalidOp, fpcr); // this case shouldn't set Inexact error = 0.0; else // Convert integer value into an equivalent real value real_result = Real(int_result); // Re-encode as a floating-point value, result is always exact if real_result == 0.0 then result = // Other cases involving infinities produce an infinity of the same sign. elsif (infA && signA == '0') || (infP && signP == '0') then result = FPInfinity('0'); elsif (infA && signA == '1') || (infP && signP == '1') then result = FPInfinity('1'); // Cases where the result is exactly zero and its sign is not determined by the // rounding mode are additions of same-signed zeros. elsif zeroA && zeroP && signA == signP then result = FPZero(sign); else result =(signA); // Otherwise calculate numerical result and round it. 
else result_value = valueA + (value1 * value2); if result_value == 0.0 then // Sign of exact zero result depends on rounding mode result_sign = if rounding == FPRoundFPRounding_NEGINF(real_result, fpcr,then '1' else '0'; result = FPRounding_ZEROFPZero); // Generate inexact exceptions if error != 0.0 then(result_sign); else result = FPProcessExceptionFPRound(FPExc_Inexact, fpcr); (result_value, fpcr); return result;

Library pseudocode for shared/functions/float/fprsqrtestimatefpmuladdh/FPRSqrtEstimateFPMulAddH

// FPRSqrtEstimate() // ================= // FPMulAddH() // =========== bits(N)bits(N) FPMulAddH(bits(N) addend, bits(N DIV 2) op1, bits(N DIV 2) op2, FPRSqrtEstimate(bits(N) operand, FPCRType fpcr) assert N IN {16,32,64}; (type,sign,value) = assert N IN {32,64}; rounding = FPRoundingMode(fpcr); (typeA,signA,valueA) = FPUnpack(operand, fpcr); if type ==(addend, fpcr); (type1,sign1,value1) = FPType_SNaNFPUnpack || type ==(op1, fpcr); (type2,sign2,value2) = FPType_QNaNFPUnpack then result =(op2, fpcr); inf1 = (type1 == FPProcessNaNFPType_Infinity(type, operand, fpcr); elsif type ==); zero1 = (type1 == FPType_Zero then result =); inf2 = (type2 == FPInfinityFPType_Infinity(sign);); zero2 = (type2 == FPProcessExceptionFPType_Zero(); (done,result) = FPProcessNaNs3H(typeA, type1, type2, addend, op1, op2, fpcr); if typeA ==FPExc_DivideByZeroFPType_QNaN, fpcr); elsif sign == '1' then && ((inf1 && zero2) || (zero1 && inf2)) then result = FPDefaultNaN(); FPProcessException(FPExc_InvalidOp, fpcr); elsif type == if !done then infA = (typeA == FPType_Infinity then result =); zeroA = (typeA == FPZeroFPType_Zero('0'); else // Scale to a fixed-point value in the range 0.25 <= x < 1.0 in steps of 512, with the // evenness or oddness of the exponent unchanged, and calculate result exponent. // Scaled value has copied sign bit, exponent = 1022 or 1021 = double-precision // biased version of -1 or -2, fraction = original fraction extended with zeros. case N of when 16 fraction = operand<9:0> :); // Determine sign and type product will have if it does not cause an Invalid // Operation. signP = sign1 EOR sign2; infP = inf1 || inf2; zeroP = zero1 || zero2; // Non SNaN-generated Invalid Operation cases are multiplies of zero by infinity and // additions of opposite-signed infinities. if (inf1 && zero2) || (zero1 && inf2) || (infA && infP && signA != signP) then result = ZerosFPDefaultNaN(42); exp =(); UIntFPProcessException(operand<14:10>); when 32 fraction = operand<22:0> :( ZerosFPExc_InvalidOp(29); exp =, fpcr); // Other cases involving infinities produce an infinity of the same sign. elsif (infA && signA == '0') || (infP && signP == '0') then result = UIntFPInfinity(operand<30:23>); when 64 fraction = operand<51:0>; exp =('0'); elsif (infA && signA == '1') || (infP && signP == '1') then result = UIntFPInfinity(operand<62:52>); if exp == 0 then while fraction<51> == 0 do fraction = fraction<50:0> : '0'; exp = exp - 1; fraction = fraction<50:0> : '0'; if exp<0> == '0' then scaled =('1'); // Cases where the result is exactly zero and its sign is not determined by the // rounding mode are additions of same-signed zeros. elsif zeroA && zeroP && signA == signP then result = UIntFPZero('1':fraction<51:44>); (signA); // Otherwise calculate numerical result and round it. else scaled = result_value = valueA + (value1 * value2); if result_value == 0.0 then // Sign of exact zero result depends on rounding mode result_sign = if rounding == UIntFPRounding_NEGINF('01':fraction<51:45>); case N of when 16 result_exp = ( 44 - exp) DIV 2; when 32 result_exp = ( 380 - exp) DIV 2; when 64 result_exp = (3068 - exp) DIV 2; estimate =then '1' else '0'; result = RecipSqrtEstimateFPZero(scaled); // estimate is in the range 256..511 representing a fixed point result in the range [1.0..2.0) // Convert to scaled floating point result with copied sign bit and high-order // fraction bits, and exponent calculated above. 
case N of when 16 result = '0' : result_exp<N-12:0> : estimate<7:0>:(result_sign); else result =ZerosFPRound( 2); when 32 result = '0' : result_exp<N-25:0> : estimate<7:0>:Zeros(15); when 64 result = '0' : result_exp<N-54:0> : estimate<7:0>:Zeros(44); (result_value, fpcr); return result;

Library pseudocode for shared/functions/float/fprsqrtestimatefpmuladdh/RecipSqrtEstimateFPProcessNaNs3H

// Compute estimate of reciprocal square root of 9-bit fixed-point number // // a is in range 128 .. 511 representing a number in the range 0.25 <= x < 1.0. // result is in the range 256 .. 511 representing a number in the range in the range 1.0 to 511/256. // FPProcessNaNs3H() // ================= integer(boolean, bits(N)) FPProcessNaNs3H( type1, FPType type2, FPType type3, bits(N) op1, bits(N DIV 2) op2, bits(N DIV 2) op3, FPCRType fpcr) assert N IN {32,64}; bits(N) result; if type1 == FPType_SNaN then done = TRUE; result = FPProcessNaN(type1, op1, fpcr); elsif type2 == FPType_SNaN then done = TRUE; result = FPConvertNaN(FPProcessNaN(type2, op2, fpcr)); elsif type3 == FPType_SNaN then done = TRUE; result = FPConvertNaN(FPProcessNaN(type3, op3, fpcr)); elsif type1 == FPType_QNaN then done = TRUE; result = FPProcessNaN(type1, op1, fpcr); elsif type2 == FPType_QNaN then done = TRUE; result = FPConvertNaN(FPProcessNaN(type2, op2, fpcr)); elsif type3 == FPType_QNaN then done = TRUE; result = FPConvertNaN(FPProcessNaN(type3, op3, fpcr)); else done = FALSE; result = ZerosRecipSqrtEstimate(integer a) assert 128 <= a && a < 512; if a < 256 then // 0.25 .. 0.5 a = a*2+1; // a in units of 1/512 rounded to nearest else // 0.5 .. 1.0 a = (a >> 1) << 1; // discard bottom bit a = (a+1)*2; // a in units of 1/256 rounded to nearest integer b = 512; while a*(b+1)*(b+1) < 2^28 do b = b+1; // b = largest b such that b < 2^14 / sqrt(a) do r = (b+1) DIV 2; // round to nearest assert 256 <= r && r < 512; return r;(); // 'Don't care' result return (done, result);

Library pseudocode for shared/functions/float/fpsqrtfpmulx/FPSqrtFPMulX

// FPSqrt() // FPMulX() // ======== bits(N) FPSqrt(bits(N) op,FPMulX(bits(N) op1, bits(N) op2, FPCRType fpcr) assert N IN {16,32,64}; (type,sign,value) = bits(N) result; (type1,sign1,value1) = FPUnpack(op, fpcr); if type ==(op1, fpcr); (type2,sign2,value2) = FPType_SNaNFPUnpack || type ==(op2, fpcr); (done,result) = FPType_QNaNFPProcessNaNs then result =(type1, type2, op1, op2, fpcr); if !done then inf1 = (type1 == FPProcessNaNFPType_Infinity(type, op, fpcr); elsif type ==); inf2 = (type2 == FPType_Infinity); zero1 = (type1 == FPType_Zero then result =); zero2 = (type2 == FPZeroFPType_Zero(sign); elsif type ==); if (inf1 && zero2) || (zero1 && inf2) then result = FPType_InfinityFPTwo && sign == '0' then result =(sign1 EOR sign2); elsif inf1 || inf2 then result = FPInfinity(sign); elsif sign == '1' then result =(sign1 EOR sign2); elsif zero1 || zero2 then result = FPDefaultNaNFPZero(); FPProcessException(FPExc_InvalidOp, fpcr); else result =(sign1 EOR sign2); else result = FPRound(Sqrt(value), fpcr); (value1*value2, fpcr); return result;

Library pseudocode for shared/functions/float/fpsubfpneg/FPSubFPNeg

// FPSub() // FPNeg() // ======= bits(N) FPSub(bits(N) op1, bits(N) op2,FPNeg(bits(N) op) assert N IN {16,32,64}; return NOT(op<N-1>) : op<N-2:0>; FPCRType fpcr) assert N IN {16,32,64}; rounding = FPRoundingMode(fpcr); (type1,sign1,value1) = FPUnpack(op1, fpcr); (type2,sign2,value2) = FPUnpack(op2, fpcr); (done,result) = FPProcessNaNs(type1, type2, op1, op2, fpcr); if !done then inf1 = (type1 == FPType_Infinity); inf2 = (type2 == FPType_Infinity); zero1 = (type1 == FPType_Zero); zero2 = (type2 == FPType_Zero); if inf1 && inf2 && sign1 == sign2 then result = FPDefaultNaN(); FPProcessException(FPExc_InvalidOp, fpcr); elsif (inf1 && sign1 == '0') || (inf2 && sign2 == '1') then result = FPInfinity('0'); elsif (inf1 && sign1 == '1') || (inf2 && sign2 == '0') then result = FPInfinity('1'); elsif zero1 && zero2 && sign1 == NOT(sign2) then result = FPZero(sign1); else result_value = value1 - value2; if result_value == 0.0 then // Sign of exact zero result depends on rounding mode result_sign = if rounding == FPRounding_NEGINF then '1' else '0'; result = FPZero(result_sign); else result = FPRound(result_value, fpcr, rounding); return result;

Library pseudocode for shared/functions/float/fpthreefponepointfive/FPThreeFPOnePointFive

// FPThree() // ========= // FPOnePointFive() // ================ bits(N) FPThree(bit sign) FPOnePointFive(bit sign) assert N IN {16,32,64}; constant integer E = (if N == 16 then 5 elsif N == 32 then 8 else 11); constant integer F = N - (E + 1); exp = '1': exp = '0':ZerosOnes(E-1); frac = '1':Zeros(F-1); return sign : exp : frac;

Library pseudocode for shared/functions/float/fptofixedfpprocessexception/FPToFixedFPProcessException

// FPToFixed() // =========== // Convert N-bit precision floating point OP to M-bit fixed point with // FBITS fractional bits, controlled by UNSIGNED and ROUNDING. bits(M)// FPProcessException() // ==================== // // The 'fpcr' argument supplies FPCR control bits. Status information is // updated directly in the FPSR where appropriate. FPToFixed(bits(N) op, integer fbits, boolean unsigned,FPProcessException( FPExc exception, FPCRType fpcr,fpcr) // Determine the cumulative exception bit number case exception of when FPRounding rounding) assert N IN {16,32,64}; assert M IN {16,32,64}; assert fbits >= 0; assert rounding != FPRounding_ODD; // Unpack using fpcr to determine if subnormals are flushed-to-zero (type,sign,value) = FPUnpack(op, fpcr); // If NaN, set cumulative flag or take exception if type == FPType_SNaN || type == FPType_QNaN then FPProcessException(FPExc_InvalidOp, fpcr); // Scale by fractional bits and produce integer rounded towards minus-infinity value = value * 2.0^fbits; int_result =cumul = 0; when RoundDownFPExc_DivideByZero(value); error = value - Real(int_result); // Determine whether supplied rounding mode requires an increment case rounding of cumul = 1; when FPRounding_TIEEVENFPExc_Overflow round_up = (error > 0.5 || (error == 0.5 && int_result<0> == '1')); cumul = 2; when FPRounding_POSINFFPExc_Underflow round_up = (error != 0.0); cumul = 3; when FPRounding_NEGINFFPExc_Inexact round_up = FALSE; cumul = 4; when FPRounding_ZEROFPExc_InputDenorm round_up = (error != 0.0 && int_result < 0); whencumul = 7; enable = cumul + 8; if fpcr<enable> == '1' then // Trapping of the exception enabled. // It is IMPLEMENTATION DEFINED whether the enable bit may be set at all, and // if so then how exceptions may be accumulated before calling FPTrapException() IMPLEMENTATION_DEFINED "floating-point trap handling"; elsif FPRounding_TIEAWAYUsingAArch32 round_up = (error > 0.5 || (error == 0.5 && int_result >= 0)); if round_up then int_result = int_result + 1; // Generate saturated result and exceptions (result, overflow) = SatQ(int_result, M, unsigned); if overflow then FPProcessException(FPExc_InvalidOp, fpcr); elsif error != 0.0 then FPProcessException(FPExc_Inexact, fpcr); return result;() then // Set the cumulative exception bit FPSCR<cumul> = '1'; else // Set the cumulative exception bit FPSR<cumul> = '1'; return;

Library pseudocode for shared/functions/float/fptofixedjsfpprocessnan/FPToFixedJSFPProcessNaN

// FPToFixedJS() // ============= // Converts a double precision floating point input value // to a signed integer, with rounding to zero. // FPProcessNaN() // ============== bits(N) FPToFixedJS(bits(M) op,FPProcessNaN( FPType type, bits(N) op, FPCRType fpcr, boolean Is64) assert M == 64 && N == 32; // Unpack using fpcr to determine if subnormals are flushed-to-zero (type,sign,value) =fpcr) assert N IN {16,32,64}; assert type IN { FPUnpackFPType_QNaN(op, fpcr); Z = '1'; // If NaN, set cumulative flag or take exception if type ==, FPType_SNaN || type ==}; case N of when 16 topfrac = 9; when 32 topfrac = 22; when 64 topfrac = 51; result = op; if type == FPType_QNaNFPType_SNaN thenthen result<topfrac> = '1'; FPProcessException(FPExc_InvalidOp, fpcr); Z = '0'; int_result = if fpcr.DN == '1' then // DefaultNaN requested result = RoundDownFPDefaultNaN(value); error = value - Real(int_result); // Determine whether supplied rounding mode requires an increment round_it_up = (error != 0.0 && int_result < 0); if round_it_up then int_result = int_result + 1; if int_result < 0 then result = int_result - 2^32*RoundUp(Real(int_result)/Real(2^32)); else result = int_result - 2^32*RoundDown(Real(int_result)/Real(2^32)); // Generate exceptions if int_result < -(2^31) || int_result > (2^31)-1 then FPProcessException(FPExc_InvalidOp, fpcr); Z = '0'; elsif error != 0.0 then FPProcessException(FPExc_Inexact, fpcr); Z = '0'; if sign == '1'&& value == 0.0 then Z = '0'; if type == FPType_Infinity then result = 0; if Is64 then PSTATE.<N,Z,C,V> = '0':Z:'00'; else FPSCR<31:28> = '0':Z:'00'; return result<N-1:0>;(); return result;

Library pseudocode for shared/functions/float/fptwofpprocessnans/FPTwoFPProcessNaNs

// FPTwo() // ======= // FPProcessNaNs() // =============== // // The boolean part of the return value says whether a NaN has been found and // processed. The bits(N) part is only relevant if it has and supplies the // result of the operation. // // The 'fpcr' argument supplies FPCR control bits. Status information is // updated directly in the FPSR where appropriate. bits(N)(boolean, bits(N)) FPTwo(bit sign) assert N IN {16,32,64}; constant integer E = (if N == 16 then 5 elsif N == 32 then 8 else 11); constant integer F = N - (E + 1); exp = '1':FPProcessNaNs( type1, FPType type2, bits(N) op1, bits(N) op2, FPCRType fpcr) assert N IN {16,32,64}; if type1 == FPType_SNaN then done = TRUE; result = FPProcessNaN(type1, op1, fpcr); elsif type2 == FPType_SNaN then done = TRUE; result = FPProcessNaN(type2, op2, fpcr); elsif type1 == FPType_QNaN then done = TRUE; result = FPProcessNaN(type1, op1, fpcr); elsif type2 == FPType_QNaN then done = TRUE; result = FPProcessNaNZerosFPType(E-1); frac =(type2, op2, fpcr); else done = FALSE; result = Zeros(F); return sign : exp : frac;(); // 'Don't care' result return (done, result);

Library pseudocode for shared/functions/float/fptypefpprocessnans3/FPTypeFPProcessNaNs3

enumeration// FPProcessNaNs3() // ================ // // The boolean part of the return value says whether a NaN has been found and // processed. The bits(N) part is only relevant if it has and supplies the // result of the operation. // // The 'fpcr' argument supplies FPCR control bits. Status information is // updated directly in the FPSR where appropriate. (boolean, bits(N)) FPType {FPProcessNaNs3(FPType_Nonzero,type1, FPType_Zero,type2, FPType_Infinity,type3, bits(N) op1, bits(N) op2, bits(N) op3, FPType_QNaN,fpcr) assert N IN {16,32,64}; if type1 == then done = TRUE; result = FPProcessNaN(type1, op1, fpcr); elsif type2 == FPType_SNaN then done = TRUE; result = FPProcessNaN(type2, op2, fpcr); elsif type3 == FPType_SNaN then done = TRUE; result = FPProcessNaN(type3, op3, fpcr); elsif type1 == FPType_QNaN then done = TRUE; result = FPProcessNaN(type1, op1, fpcr); elsif type2 == FPType_QNaN then done = TRUE; result = FPProcessNaN(type2, op2, fpcr); elsif type3 == FPType_QNaN then done = TRUE; result = FPProcessNaN(type3, op3, fpcr); else done = FALSE; result = ZerosFPType_SNaN};(); // 'Don't care' result return (done, result);

Library pseudocode for shared/functions/float/fpunpackfprecipestimate/FPUnpackFPRecipEstimate

// FPUnpack() // ========== // // Used by data processing and int/fixed <-> FP conversion instructions. // For half-precision data it ignores AHP, and observes FZ16. // FPRecipEstimate() // ================= (FPType, bit, real)bits(N) FPUnpack(bits(N) fpval,FPRecipEstimate(bits(N) operand, FPCRType fpcr) fpcr.AHP = '0'; (fp_type, sign, value) = assert N IN {16,32,64}; (type,sign,value) = (operand, fpcr); if type == FPType_SNaN || type == FPType_QNaN then result = FPProcessNaN(type, operand, fpcr); elsif type == FPType_Infinity then result = FPZero(sign); elsif type == FPType_Zero then result = FPInfinity(sign); FPProcessException(FPExc_DivideByZero, fpcr); elsif ( (N == 16 && Abs(value) < 2.0^-16) || (N == 32 && Abs(value) < 2.0^-128) || (N == 64 && Abs(value) < 2.0^-1024) ) then case FPRoundingMode(fpcr) of when FPRounding_TIEEVEN overflow_to_inf = TRUE; when FPRounding_POSINF overflow_to_inf = (sign == '0'); when FPRounding_NEGINF overflow_to_inf = (sign == '1'); when FPRounding_ZERO overflow_to_inf = FALSE; result = if overflow_to_inf then FPInfinity(sign) else FPMaxNormal(sign); FPProcessException(FPExc_Overflow, fpcr); FPProcessException(FPExc_Inexact, fpcr); elsif ((fpcr.FZ == '1' && N != 16) || (fpcr.FZ16 == '1' && N == 16)) && ( (N == 16 && Abs(value) >= 2.0^14) || (N == 32 && Abs(value) >= 2.0^126) || (N == 64 && Abs(value) >= 2.0^1022) ) then // Result flushed to zero of correct sign result = FPZero(sign); if UsingAArch32() then FPSCR.UFC = '1'; else FPSR.UFC = '1'; else // Scale to a fixed point value in the range 0.5 <= x < 1.0 in steps of 1/512, and // calculate result exponent. Scaled value has copied sign bit, // exponent = 1022 = double-precision biased version of -1, // fraction = original fraction case N of when 16 fraction = operand<9:0> : Zeros(42); exp = UInt(operand<14:10>); when 32 fraction = operand<22:0> : Zeros(29); exp = UInt(operand<30:23>); when 64 fraction = operand<51:0>; exp = UInt(operand<62:52>); if exp == 0 then if fraction<51> == 0 then exp = -1; fraction = fraction<49:0>:'00'; else fraction = fraction<50:0>:'0'; integer scaled = UInt('1':fraction<51:44>); case N of when 16 result_exp = 29 - exp; // In range 29-30 = -1 to 29+1 = 30 when 32 result_exp = 253 - exp; // In range 253-254 = -1 to 253+1 = 254 when 64 result_exp = 2045 - exp; // In range 2045-2046 = -1 to 2045+1 = 2046 // scaled is in range 256..511 representing a fixed-point number in range [0.5..1.0) estimate = RecipEstimate(scaled); // estimate is in the range 256..511 representing a fixed point result in the range [1.0..2.0) // Convert to scaled floating point result with copied sign bit, // high-order bits from estimate, and exponent calculated above. fraction = estimate<7:0> : ZerosFPUnpackBaseFPUnpack(fpval, fpcr); return (fp_type, sign, value);(44); if result_exp == 0 then fraction = '1' : fraction<51:1>; elsif result_exp == -1 then fraction = '01' : fraction<51:2>; result_exp = 0; case N of when 16 result = sign : result_exp<N-12:0> : fraction<51:42>; when 32 result = sign : result_exp<N-25:0> : fraction<51:29>; when 64 result = sign : result_exp<N-54:0> : fraction<51:0>; return result;

Library pseudocode for shared/functions/float/fpunpackfprecipestimate/FPUnpackBaseRecipEstimate

// FPUnpackBase() // ============== // Compute estimate of reciprocal of 9-bit fixed-point number // // Unpack a floating-point number into its type, sign bit and the real number // that it represents. The real number result has the correct sign for numbers // and infinities, is very large in magnitude for infinities, and is 0.0 for // NaNs. (These values are chosen to simplify the description of comparisons // and conversions.) // // The 'fpcr' argument supplies FPCR control bits. Status information is // updated directly in the FPSR where appropriate. // a is in range 256 .. 511 representing a number in the range 0.5 <= x < 1.0. // result is in the range 256 .. 511 representing a number in the range in the range 1.0 to 511/256. (FPType, bit, real)integer FPUnpackBase(bits(N) fpval,RecipEstimate(integer a) assert 256 <= a && a < 512; a = a*2+1; // round to nearest integer b = (2 ^ 19) DIV a; r = (b+1) DIV 2; // round to nearest assert 256 <= r && r < 512; return r; FPCRType fpcr) assert N IN {16,32,64}; if N == 16 then sign = fpval<15>; exp16 = fpval<14:10>; frac16 = fpval<9:0>; if IsZero(exp16) then // Produce zero if value is zero or flush-to-zero is selected if IsZero(frac16) || fpcr.FZ16 == '1' then type = FPType_Zero; value = 0.0; else type = FPType_Nonzero; value = 2.0^-14 * (Real(UInt(frac16)) * 2.0^-10); elsif IsOnes(exp16) && fpcr.AHP == '0' then // Infinity or NaN in IEEE format if IsZero(frac16) then type = FPType_Infinity; value = 2.0^1000000; else type = if frac16<9> == '1' then FPType_QNaN else FPType_SNaN; value = 0.0; else type = FPType_Nonzero; value = 2.0^(UInt(exp16)-15) * (1.0 + Real(UInt(frac16)) * 2.0^-10); elsif N == 32 then sign = fpval<31>; exp32 = fpval<30:23>; frac32 = fpval<22:0>; if IsZero(exp32) then // Produce zero if value is zero or flush-to-zero is selected. if IsZero(frac32) || fpcr.FZ == '1' then type = FPType_Zero; value = 0.0; if !IsZero(frac32) then // Denormalized input flushed to zero FPProcessException(FPExc_InputDenorm, fpcr); else type = FPType_Nonzero; value = 2.0^-126 * (Real(UInt(frac32)) * 2.0^-23); elsif IsOnes(exp32) then if IsZero(frac32) then type = FPType_Infinity; value = 2.0^1000000; else type = if frac32<22> == '1' then FPType_QNaN else FPType_SNaN; value = 0.0; else type = FPType_Nonzero; value = 2.0^(UInt(exp32)-127) * (1.0 + Real(UInt(frac32)) * 2.0^-23); else // N == 64 sign = fpval<63>; exp64 = fpval<62:52>; frac64 = fpval<51:0>; if IsZero(exp64) then // Produce zero if value is zero or flush-to-zero is selected. if IsZero(frac64) || fpcr.FZ == '1' then type = FPType_Zero; value = 0.0; if !IsZero(frac64) then // Denormalized input flushed to zero FPProcessException(FPExc_InputDenorm, fpcr); else type = FPType_Nonzero; value = 2.0^-1022 * (Real(UInt(frac64)) * 2.0^-52); elsif IsOnes(exp64) then if IsZero(frac64) then type = FPType_Infinity; value = 2.0^1000000; else type = if frac64<51> == '1' then FPType_QNaN else FPType_SNaN; value = 0.0; else type = FPType_Nonzero; value = 2.0^(UInt(exp64)-1023) * (1.0 + Real(UInt(frac64)) * 2.0^-52); if sign == '1' then value = -value; return (type, sign, value);

Library pseudocode for shared/functions/float/fpunpackfprecpx/FPUnpackCVFPRecpX

// FPUnpackCV() // ============ // // Used for FP <-> FP conversion instructions. // For half-precision data ignores FZ16 and observes AHP. // FPRecpX() // ========= (FPType, bit, real)bits(N) FPUnpackCV(bits(N) fpval,FPRecpX(bits(N) op, FPCRType fpcr) fpcr.FZ16 = '0'; (fp_type, sign, value) = assert N IN {16,32,64}; case N of when 16 esize = 5; when 32 esize = 8; when 64 esize = 11; bits(N) result; bits(esize) exp; bits(esize) max_exp; bits(N-(esize+1)) frac = (); case N of when 16 exp = op<10+esize-1:10>; when 32 exp = op<23+esize-1:23>; when 64 exp = op<52+esize-1:52>; max_exp = Ones(esize) - 1; (type,sign,value) = FPUnpack(op, fpcr); if type == FPType_SNaN || type == FPType_QNaN then result = FPProcessNaN(type, op, fpcr); else if IsZeroFPUnpackBaseZeros(fpval, fpcr); return (fp_type, sign, value);(exp) then // Zero and denormals result = sign:max_exp:frac; else // Infinities and normals result = sign:NOT(exp):frac; return result;

Library pseudocode for shared/functions/float/fpzerofpround/FPZeroFPRound

// FPZero() // ======== // FPRound() // ========= // Used by data processing and int/fixed <-> FP conversion instructions. // For half-precision data it ignores AHP, and observes FZ16. bits(N) FPZero(bit sign) assert N IN {16,32,64}; constant integer E = (if N == 16 then 5 elsif N == 32 then 8 else 11); constant integer F = N - (E + 1); exp =FPRound(real op, ZerosFPCRType(E); frac =fpcr, rounding) fpcr.AHP = '0'; return FPRoundBase(op, fpcr, rounding); // Convert a real number OP into an N-bit floating-point value using the // supplied rounding mode RMODE. bits(N) FPRoundBase(real op, FPCRType fpcr, FPRounding rounding) assert N IN {16,32,64}; assert op != 0.0; assert rounding != FPRounding_TIEAWAY; bits(N) result; // Obtain format parameters - minimum exponent, numbers of exponent and fraction bits. if N == 16 then minimum_exp = -14; E = 5; F = 10; elsif N == 32 then minimum_exp = -126; E = 8; F = 23; else // N == 64 minimum_exp = -1022; E = 11; F = 52; // Split value into sign, unrounded mantissa and exponent. if op < 0.0 then sign = '1'; mantissa = -op; else sign = '0'; mantissa = op; exponent = 0; while mantissa < 1.0 do mantissa = mantissa * 2.0; exponent = exponent - 1; while mantissa >= 2.0 do mantissa = mantissa / 2.0; exponent = exponent + 1; // Deal with flush-to-zero. if ((fpcr.FZ == '1' && N != 16) || (fpcr.FZ16 == '1' && N == 16)) && exponent < minimum_exp then // Flush-to-zero never generates a trapped exception if UsingAArch32() then FPSCR.UFC = '1'; else FPSR.UFC = '1'; return FPZero(sign); // Start creating the exponent value for the result. Start by biasing the actual exponent // so that the minimum exponent becomes 1, lower values 0 (indicating possible underflow). biased_exp = Max(exponent - minimum_exp + 1, 0); if biased_exp == 0 then mantissa = mantissa / 2.0^(minimum_exp - exponent); // Get the unrounded mantissa as an integer, and the "units in last place" rounding error. int_mant = RoundDown(mantissa * 2.0^F); // < 2.0^F if biased_exp == 0, >= 2.0^F if not error = mantissa * 2.0^F - Real(int_mant); // Underflow occurs if exponent is too small before rounding, and result is inexact or // the Underflow exception is trapped. if biased_exp == 0 && (error != 0.0 || fpcr.UFE == '1') then FPProcessException(FPExc_Underflow, fpcr); // Round result according to rounding mode. case rounding of when FPRounding_TIEEVEN round_up = (error > 0.5 || (error == 0.5 && int_mant<0> == '1')); overflow_to_inf = TRUE; when FPRounding_POSINF round_up = (error != 0.0 && sign == '0'); overflow_to_inf = (sign == '0'); when FPRounding_NEGINF round_up = (error != 0.0 && sign == '1'); overflow_to_inf = (sign == '1'); when FPRounding_ZERO, FPRounding_ODD round_up = FALSE; overflow_to_inf = FALSE; if round_up then int_mant = int_mant + 1; if int_mant == 2^F then // Rounded up from denormalized to normalized biased_exp = 1; if int_mant == 2^(F+1) then // Rounded up to next exponent biased_exp = biased_exp + 1; int_mant = int_mant DIV 2; // Handle rounding to odd aka Von Neumann rounding if error != 0.0 && rounding == FPRounding_ODD then int_mant<0> = '1'; // Deal with overflow and generate result. 
if N != 16 || fpcr.AHP == '0' then // Single, double or IEEE half precision if biased_exp >= 2^E - 1 then result = if overflow_to_inf then FPInfinity(sign) else FPMaxNormal(sign); FPProcessException(FPExc_Overflow, fpcr); error = 1.0; // Ensure that an Inexact exception occurs else result = sign : biased_exp<N-F-2:0> : int_mant<F-1:0>; else // Alternative half precision if biased_exp >= 2^E then result = sign : Ones(N-1); FPProcessException(FPExc_InvalidOp, fpcr); error = 0.0; // Ensure that an Inexact exception does not occur else result = sign : biased_exp<N-F-2:0> : int_mant<F-1:0>; // Deal with Inexact exception. if error != 0.0 then FPProcessException(FPExc_Inexact, fpcr); return result; // FPRound() // ========= bits(N) FPRound(real op, FPCRType fpcr) return FPRound(op, fpcr, FPRoundingModeZerosFPRounding(F); return sign : exp : frac;(fpcr));

Library pseudocode for shared/functions/float/vfpexpandimmfpround/VFPExpandImmFPRoundCV

// VFPExpandImm() // ============== // FPRoundCV() // =========== // Used for FP <-> FP conversion instructions. // For half-precision data ignores FZ16 and observes AHP. bits(N) VFPExpandImm(bits(8) imm8) assert N IN {16,32,64}; constant integer E = (if N == 16 then 5 elsif N == 32 then 8 else 11); constant integer F = N - E - 1; sign = imm8<7>; exp = NOT(imm8<6>):FPRoundCV(real op,ReplicateFPCRType(imm8<6>,E-3):imm8<5:4>; frac = imm8<3:0>:fpcr, rounding) fpcr.FZ16 = '0'; return FPRoundBaseZerosFPRounding(F-4); return sign : exp : frac;(op, fpcr, rounding);

Library pseudocode for shared/functions/integerfloat/AddWithCarryfprounding/FPRounding

// AddWithCarry() // ============== // Integer addition with carry input, returning result and NZCV flags (bits(N), bits(4))enumeration AddWithCarry(bits(N) x, bits(N) y, bit carry_in) integer unsigned_sum =FPRounding { UInt(x) +FPRounding_TIEEVEN, UInt(y) +FPRounding_POSINF, UInt(carry_in); integer signed_sum =FPRounding_NEGINF, SInt(x) +FPRounding_ZERO, SInt(y) +FPRounding_TIEAWAY, UInt(carry_in); bits(N) result = unsigned_sum<N-1:0>; // same value as signed_sum<N-1:0> bit n = result<N-1>; bit z = if IsZero(result) then '1' else '0'; bit c = if UInt(result) == unsigned_sum then '0' else '1'; bit v = if SInt(result) == signed_sum then '0' else '1'; return (result, n:z:c:v);FPRounding_ODD};

Library pseudocode for shared/functions/memoryfloat/AArch64.BranchAddrfproundingmode/FPRoundingMode

// AArch64.BranchAddr() // ==================== // Return the virtual address with tag bits removed for storing to the program counter. // FPRoundingMode() // ================ bits(64)// Return the current floating-point rounding mode. FPRounding AArch64.BranchAddr(bits(64) vaddress) assert !FPRoundingMode(UsingAArch32FPCRType(); msbit =fpcr) return AddrTopFPDecodeRounding(vaddress, TRUE, PSTATE.EL); if msbit == 63 then return vaddress; elsif (PSTATE.EL IN {EL0, EL1} || IsInHost()) && vaddress<msbit> == '1' then return SignExtend(vaddress<msbit:0>); else return ZeroExtend(vaddress<msbit:0>);(fpcr.RMode);

Library pseudocode for shared/functions/memoryfloat/AccTypefproundint/FPRoundInt

enumeration// FPRoundInt() // ============ // Round OP to nearest integral floating point value using rounding mode ROUNDING. // If EXACT is TRUE, set FPSR.IXC if result is not numerically equal to OP. bits(N) AccType {FPRoundInt(bits(N) op,AccType_NORMAL,fpcr, AccType_VEC, // Normal loads and storesrounding, boolean exact) assert rounding != AccType_STREAM,; assert N IN {16,32,64}; // Unpack using FPCR to determine if subnormals are flushed-to-zero (type,sign,value) = AccType_VECSTREAM, // Streaming loads and stores(op, fpcr); if type == AccType_ATOMIC,|| type == AccType_ATOMICRW, // Atomic loads and storesthen result = AccType_ORDERED,(type, op, fpcr); elsif type == AccType_ORDEREDRW, // Load-Acquire and Store-Releasethen result = AccType_ORDEREDATOMIC, // Load-Acquire and Store-Release with atomic access(sign); elsif type == AccType_ORDEREDATOMICRW,then result = AccType_LIMITEDORDERED, // Load-LOAcquire and Store-LORelease(sign); else // extract integer component int_result = AccType_UNPRIV, // Load and store unprivileged(value); error = value - Real(int_result); // Determine whether supplied rounding mode requires an increment case rounding of when AccType_IFETCH, // Instruction fetchround_up = (error > 0.5 || (error == 0.5 && int_result<0> == '1')); when AccType_PTW, // Page table walkround_up = (error != 0.0); when AccType_NONFAULT, // Non-faulting loadsround_up = FALSE; when AccType_CNOTFIRST, // Contiguous FF load, not first elementround_up = (error != 0.0 && int_result < 0); when AccType_NV2REGISTER, // MRS/MSR instruction used at EL1 and which is converted // to a memory access that uses the EL2 translation regime // Other operationsround_up = (error > 0.5 || (error == 0.5 && int_result >= 0)); if round_up then int_result = int_result + 1; // Convert integer value into an equivalent real value real_result = Real(int_result); // Re-encode as a floating-point value, result is always exact if real_result == 0.0 then result = AccType_DC, // Data cache maintenance(sign); else result = AccType_DC_UNPRIV, // Data cache maintenance instruction used at EL0(real_result, fpcr, AccType_IC, // Instruction cache maintenance); // Generate inexact exceptions if error != 0.0 && exact then AccType_DCZVA, // DC ZVA instructions( AccType_AT}; // Address translation, fpcr); return result;

Library pseudocode for shared/functions/float/fproundintn/FPRoundIntN

// FPRoundIntN()
// =============

bits(N) FPRoundIntN(bits(N) op, FPCRType fpcr, FPRounding rounding, integer intsize)
    assert rounding != FPRounding_ODD;
    assert N IN {32,64};
    assert intsize IN {32, 64};
    integer exp;
    constant integer E = (if N == 32 then 8 else 11);
    constant integer F = N - (E + 1);

    // Unpack using FPCR to determine if subnormals are flushed-to-zero
    (type,sign,value) = FPUnpack(op, fpcr);

    if type IN {FPType_SNaN, FPType_QNaN, FPType_Infinity} then
        if N == 32 then
            exp = 126 + intsize;
            result = '1':exp<(E-1):0>:Zeros(F);
        else
            exp = 1022 + intsize;
            result = '1':exp<(E-1):0>:Zeros(F);
        FPProcessException(FPExc_InvalidOp, fpcr);
    elsif type == FPType_Zero then
        result = FPZero(sign);
    else
        // Extract integer component
        int_result = RoundDown(value);
        error = value - Real(int_result);

        // Determine whether supplied rounding mode requires an increment
        case rounding of
            when FPRounding_TIEEVEN
                round_up = error > 0.5 || (error == 0.5 && int_result<0> == '1');
            when FPRounding_POSINF
                round_up = error != 0.0;
            when FPRounding_NEGINF
                round_up = FALSE;
            when FPRounding_ZERO
                round_up = error != 0.0 && int_result < 0;
            when FPRounding_TIEAWAY
                round_up = error > 0.5 || (error == 0.5 && int_result >= 0);

        if round_up then int_result = int_result + 1;

        if int_result > 2^(intsize-1)-1 || int_result < -1*2^(intsize-1) then
            if N == 32 then
                exp = 126 + intsize;
                result = '1':exp<(E-1):0>:Zeros(F);
            else
                exp = 1022 + intsize;
                result = '1':exp<(E-1):0>:Zeros(F);
            FPProcessException(FPExc_InvalidOp, fpcr);
            // this case shouldn't set Inexact
            error = 0.0;
        else
            // Convert integer value into an equivalent real value
            real_result = Real(int_result);

            // Re-encode as a floating-point value, result is always exact
            if real_result == 0.0 then
                result = FPZero(sign);
            else
                result = FPRound(real_result, fpcr, FPRounding_ZERO);

        // Generate inexact exceptions
        if error != 0.0 then
            FPProcessException(FPExc_Inexact, fpcr);

    return result;

Library pseudocode for shared/functions/float/fprsqrtestimate/FPRSqrtEstimate

// FPRSqrtEstimate()
// =================

bits(N) FPRSqrtEstimate(bits(N) operand, FPCRType fpcr)
    assert N IN {16,32,64};
    (type,sign,value) = FPUnpack(operand, fpcr);
    if type == FPType_SNaN || type == FPType_QNaN then
        result = FPProcessNaN(type, operand, fpcr);
    elsif type == FPType_Zero then
        result = FPInfinity(sign);
        FPProcessException(FPExc_DivideByZero, fpcr);
    elsif sign == '1' then
        result = FPDefaultNaN();
        FPProcessException(FPExc_InvalidOp, fpcr);
    elsif type == FPType_Infinity then
        result = FPZero('0');
    else
        // Scale to a fixed-point value in the range 0.25 <= x < 1.0 in steps of 512, with the
        // evenness or oddness of the exponent unchanged, and calculate result exponent.
        // Scaled value has copied sign bit, exponent = 1022 or 1021 = double-precision
        // biased version of -1 or -2, fraction = original fraction extended with zeros.

        case N of
            when 16 fraction = operand<9:0>  : Zeros(42); exp = UInt(operand<14:10>);
            when 32 fraction = operand<22:0> : Zeros(29); exp = UInt(operand<30:23>);
            when 64 fraction = operand<51:0>;             exp = UInt(operand<62:52>);

        if exp == 0 then
            while fraction<51> == 0 do
                fraction = fraction<50:0> : '0';
                exp = exp - 1;
            fraction = fraction<50:0> : '0';

        if exp<0> == '0' then
            scaled = UInt('1':fraction<51:44>);
        else
            scaled = UInt('01':fraction<51:45>);

        case N of
            when 16 result_exp = (  44 - exp) DIV 2;
            when 32 result_exp = ( 380 - exp) DIV 2;
            when 64 result_exp = (3068 - exp) DIV 2;

        estimate = RecipSqrtEstimate(scaled);

        // estimate is in the range 256..511 representing a fixed point result in the range [1.0..2.0)
        // Convert to scaled floating point result with copied sign bit and high-order
        // fraction bits, and exponent calculated above.

        case N of
            when 16 result = '0' : result_exp<N-12:0> : estimate<7:0>:Zeros( 2);
            when 32 result = '0' : result_exp<N-25:0> : estimate<7:0>:Zeros(15);
            when 64 result = '0' : result_exp<N-54:0> : estimate<7:0>:Zeros(44);

    return result;

Library pseudocode for shared/functions/float/fprsqrtestimate/RecipSqrtEstimate

// Compute estimate of reciprocal square root of 9-bit fixed-point number
//
// a is in range 128 .. 511 representing a number in the range 0.25 <= x < 1.0.
// result is in the range 256 .. 511 representing a number in the range 1.0 to 511/256.

integer RecipSqrtEstimate(integer a)
    assert 128 <= a && a < 512;
    if a < 256 then          // 0.25 .. 0.5
        a = a*2+1;           // a in units of 1/512 rounded to nearest
    else                     // 0.5 .. 1.0
        a = (a >> 1) << 1;   // discard bottom bit
        a = (a+1)*2;         // a in units of 1/256 rounded to nearest
    integer b = 512;
    while a*(b+1)*(b+1) < 2^28 do
        b = b+1;
    // b = largest b such that b < 2^14 / sqrt(a)
    r = (b+1) DIV 2;         // round to nearest
    assert 256 <= r && r < 512;
    return r;
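For readers who want to experiment, the following is an unofficial, direct Python transcription of RecipSqrtEstimate; it is an illustration, not Arm's reference code:

def recip_sqrt_estimate(a: int) -> int:
    # 'a' is a 9-bit fixed-point value, 128..511, representing 0.25 <= x < 1.0.
    assert 128 <= a < 512
    if a < 256:                 # 0.25 .. 0.5
        a = a * 2 + 1           # a in units of 1/512, rounded to nearest
    else:                       # 0.5 .. 1.0
        a = (a >> 1) << 1       # discard bottom bit
        a = (a + 1) * 2         # a in units of 1/256, rounded to nearest
    b = 512
    while a * (b + 1) * (b + 1) < 2**28:
        b += 1                  # largest b such that b < 2^14 / sqrt(a)
    r = (b + 1) // 2            # round to nearest
    assert 256 <= r < 512
    return r

# For input 256 (x = 0.5) the estimate is 361, and 361/256 = 1.410 which is
# close to 1/sqrt(0.5) = 1.414, within the 8-bit precision of the table.
assert recip_sqrt_estimate(256) == 361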

Library pseudocode for shared/functions/float/fpsqrt/FPSqrt

// FPSqrt()
// ========

bits(N) FPSqrt(bits(N) op, FPCRType fpcr)
    assert N IN {16,32,64};
    (type,sign,value) = FPUnpack(op, fpcr);

    if type == FPType_SNaN || type == FPType_QNaN then
        result = FPProcessNaN(type, op, fpcr);
    elsif type == FPType_Zero then
        result = FPZero(sign);
    elsif type == FPType_Infinity && sign == '0' then
        result = FPInfinity(sign);
    elsif sign == '1' then
        result = FPDefaultNaN();
        FPProcessException(FPExc_InvalidOp, fpcr);
    else
        result = FPRound(Sqrt(value), fpcr);
    return result;

Library pseudocode for shared/functions/float/fpsub/FPSub

// FPSub()
// =======

bits(N) FPSub(bits(N) op1, bits(N) op2, FPCRType fpcr)
    assert N IN {16,32,64};
    rounding = FPRoundingMode(fpcr);
    (type1,sign1,value1) = FPUnpack(op1, fpcr);
    (type2,sign2,value2) = FPUnpack(op2, fpcr);
    (done,result) = FPProcessNaNs(type1, type2, op1, op2, fpcr);
    if !done then
        inf1  = (type1 == FPType_Infinity);  inf2  = (type2 == FPType_Infinity);
        zero1 = (type1 == FPType_Zero);      zero2 = (type2 == FPType_Zero);
        if inf1 && inf2 && sign1 == sign2 then
            result = FPDefaultNaN();
            FPProcessException(FPExc_InvalidOp, fpcr);
        elsif (inf1 && sign1 == '0') || (inf2 && sign2 == '1') then
            result = FPInfinity('0');
        elsif (inf1 && sign1 == '1') || (inf2 && sign2 == '0') then
            result = FPInfinity('1');
        elsif zero1 && zero2 && sign1 == NOT(sign2) then
            result = FPZero(sign1);
        else
            result_value = value1 - value2;
            if result_value == 0.0 then  // Sign of exact zero result depends on rounding mode
                result_sign = if rounding == FPRounding_NEGINF then '1' else '0';
                result = FPZero(result_sign);
            else
                result = FPRound(result_value, fpcr, rounding);
    return result;
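The special-case ladder in FPSub can be summarised in a few lines. This Python sketch is illustrative only; it returns descriptive strings rather than encoded floating-point results, and all names are invented for the example:

def fpsub_special(inf1, sign1, inf2, sign2, zero1, zero2):
    # Signs are 0 (positive) or 1 (negative), mirroring the pseudocode bits.
    if inf1 and inf2 and sign1 == sign2:
        return "DefaultNaN, InvalidOp"    # inf - inf of the same sign
    if (inf1 and sign1 == 0) or (inf2 and sign2 == 1):
        return "+Infinity"                # +inf - x, or x - (-inf)
    if (inf1 and sign1 == 1) or (inf2 and sign2 == 0):
        return "-Infinity"                # -inf - x, or x - (+inf)
    if zero1 and zero2 and sign1 != sign2:
        return f"Zero(sign={sign1})"      # (+0)-(-0) = +0, (-0)-(+0) = -0
    return "compute value1 - value2"

assert fpsub_special(True, 0, True, 0, False, False) == "DefaultNaN, InvalidOp"
assert fpsub_special(False, 0, True, 1, False, False) == "+Infinity"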

Library pseudocode for shared/functions/float/fpthree/FPThree

// FPThree()
// =========

bits(N) FPThree(bit sign)
    assert N IN {16,32,64};
    constant integer E = (if N == 16 then 5 elsif N == 32 then 8 else 11);
    constant integer F = N - (E + 1);
    exp  = '1':Zeros(E-1);
    frac = '1':Zeros(F-1);
    return sign : exp : frac;

Library pseudocode for shared/functions/float/fptofixed/FPToFixed

// FPToFixed()
// ===========
// Convert N-bit precision floating point OP to M-bit fixed point with
// FBITS fractional bits, controlled by UNSIGNED and ROUNDING.

bits(M) FPToFixed(bits(N) op, integer fbits, boolean unsigned, FPCRType fpcr, FPRounding rounding)
    assert N IN {16,32,64};
    assert M IN {16,32,64};
    assert fbits >= 0;
    assert rounding != FPRounding_ODD;

    // Unpack using fpcr to determine if subnormals are flushed-to-zero
    (type,sign,value) = FPUnpack(op, fpcr);

    // If NaN, set cumulative flag or take exception
    if type == FPType_SNaN || type == FPType_QNaN then
        FPProcessException(FPExc_InvalidOp, fpcr);

    // Scale by fractional bits and produce integer rounded towards minus-infinity
    value = value * 2.0^fbits;
    int_result = RoundDown(value);
    error = value - Real(int_result);

    // Determine whether supplied rounding mode requires an increment
    case rounding of
        when FPRounding_TIEEVEN
            round_up = (error > 0.5 || (error == 0.5 && int_result<0> == '1'));
        when FPRounding_POSINF
            round_up = (error != 0.0);
        when FPRounding_NEGINF
            round_up = FALSE;
        when FPRounding_ZERO
            round_up = (error != 0.0 && int_result < 0);
        when FPRounding_TIEAWAY
            round_up = (error > 0.5 || (error == 0.5 && int_result >= 0));

    if round_up then int_result = int_result + 1;

    // Generate saturated result and exceptions
    (result, overflow) = SatQ(int_result, M, unsigned);
    if overflow then
        FPProcessException(FPExc_InvalidOp, fpcr);
    elsif error != 0.0 then
        FPProcessException(FPExc_Inexact, fpcr);

    return result;

Library pseudocode for shared/functions/float/fptofixedjs/FPToFixedJS

// FPToFixedJS()
// =============
// Converts a double precision floating point input value
// to a signed integer, with rounding to zero.

bits(N) FPToFixedJS(bits(M) op, FPCRType fpcr, boolean Is64)

    assert M == 64 && N == 32;

    // Unpack using fpcr to determine if subnormals are flushed-to-zero
    (type,sign,value) = FPUnpack(op, fpcr);

    Z = '1';
    // If NaN, set cumulative flag or take exception
    if type == FPType_SNaN || type == FPType_QNaN then
        FPProcessException(FPExc_InvalidOp, fpcr);
        Z = '0';

    int_result = RoundDown(value);
    error = value - Real(int_result);

    // Determine whether supplied rounding mode requires an increment
    round_it_up = (error != 0.0 && int_result < 0);
    if round_it_up then int_result = int_result + 1;

    if int_result < 0 then
        result = int_result - 2^32*RoundUp(Real(int_result)/Real(2^32));
    else
        result = int_result - 2^32*RoundDown(Real(int_result)/Real(2^32));

    // Generate exceptions
    if int_result < -(2^31) || int_result > (2^31)-1 then
        FPProcessException(FPExc_InvalidOp, fpcr);
        Z = '0';
    elsif error != 0.0 then
        FPProcessException(FPExc_Inexact, fpcr);
        Z = '0';

    if sign == '1' && value == 0.0 then
        Z = '0';
    if type == FPType_Infinity then result = 0;

    if Is64 then
        PSTATE.<N,Z,C,V> = '0':Z:'00';
    else
        FPSCR<31:28> = '0':Z:'00';

    return result<N-1:0>;
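As an informal illustration of the value computation above (truncate toward zero, then wrap modulo 2^32 into a signed 32-bit result, the behaviour FJCVTZS provides for JavaScript-style conversions), a Python sketch with invented names, ignoring the flag and exception side effects:

import math

def to_int32_js(value: float) -> int:
    if math.isnan(value) or math.isinf(value):
        return 0                        # NaN and infinities convert to 0
    int_result = math.trunc(value)      # round toward zero
    wrapped = int_result % (1 << 32)    # keep the low 32 bits
    return wrapped - (1 << 32) if wrapped >= (1 << 31) else wrapped

assert to_int32_js(2.0**32 + 5.9) == 5   # wraps modulo 2^32, truncates to 5
assert to_int32_js(-1.5) == -1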

Library pseudocode for shared/functions/float/fptwo/FPTwo

// FPTwo()
// =======

bits(N) FPTwo(bit sign)
    assert N IN {16,32,64};
    constant integer E = (if N == 16 then 5 elsif N == 32 then 8 else 11);
    constant integer F = N - (E + 1);
    exp  = '1':Zeros(E-1);
    frac = Zeros(F);
    return sign : exp : frac;

Library pseudocode for shared/functions/float/fptype/FPType

enumeration FPType {FPType_Nonzero, FPType_Zero, FPType_Infinity,
                    FPType_QNaN, FPType_SNaN};

Library pseudocode for shared/functions/float/fpunpack/FPUnpack

// FPUnpack()
// ==========
//
// Used by data processing and int/fixed <-> FP conversion instructions.
// For half-precision data it ignores AHP, and observes FZ16.

(FPType, bit, real) FPUnpack(bits(N) fpval, FPCRType fpcr)
    fpcr.AHP = '0';
    (fp_type, sign, value) = FPUnpackBase(fpval, fpcr);
    return (fp_type, sign, value);

Library pseudocode for shared/functions/float/fpunpack/FPUnpackBase

// FPUnpackBase()
// ==============
//
// Unpack a floating-point number into its type, sign bit and the real number
// that it represents. The real number result has the correct sign for numbers
// and infinities, is very large in magnitude for infinities, and is 0.0 for
// NaNs. (These values are chosen to simplify the description of comparisons
// and conversions.)
//
// The 'fpcr' argument supplies FPCR control bits. Status information is
// updated directly in the FPSR where appropriate.

(FPType, bit, real) FPUnpackBase(bits(N) fpval, FPCRType fpcr)
    assert N IN {16,32,64};

    if N == 16 then
        sign   = fpval<15>;
        exp16  = fpval<14:10>;
        frac16 = fpval<9:0>;
        if IsZero(exp16) then
            // Produce zero if value is zero or flush-to-zero is selected
            if IsZero(frac16) || fpcr.FZ16 == '1' then
                type = FPType_Zero;  value = 0.0;
            else
                type = FPType_Nonzero;  value = 2.0^-14 * (Real(UInt(frac16)) * 2.0^-10);
        elsif IsOnes(exp16) && fpcr.AHP == '0' then  // Infinity or NaN in IEEE format
            if IsZero(frac16) then
                type = FPType_Infinity;  value = 2.0^1000000;
            else
                type = if frac16<9> == '1' then FPType_QNaN else FPType_SNaN;
                value = 0.0;
        else
            type = FPType_Nonzero;
            value = 2.0^(UInt(exp16)-15) * (1.0 + Real(UInt(frac16)) * 2.0^-10);

    elsif N == 32 then
        sign   = fpval<31>;
        exp32  = fpval<30:23>;
        frac32 = fpval<22:0>;
        if IsZero(exp32) then
            // Produce zero if value is zero or flush-to-zero is selected.
            if IsZero(frac32) || fpcr.FZ == '1' then
                type = FPType_Zero;  value = 0.0;
                if !IsZero(frac32) then  // Denormalized input flushed to zero
                    FPProcessException(FPExc_InputDenorm, fpcr);
            else
                type = FPType_Nonzero;  value = 2.0^-126 * (Real(UInt(frac32)) * 2.0^-23);
        elsif IsOnes(exp32) then
            if IsZero(frac32) then
                type = FPType_Infinity;  value = 2.0^1000000;
            else
                type = if frac32<22> == '1' then FPType_QNaN else FPType_SNaN;
                value = 0.0;
        else
            type = FPType_Nonzero;
            value = 2.0^(UInt(exp32)-127) * (1.0 + Real(UInt(frac32)) * 2.0^-23);

    else // N == 64
        sign   = fpval<63>;
        exp64  = fpval<62:52>;
        frac64 = fpval<51:0>;
        if IsZero(exp64) then
            // Produce zero if value is zero or flush-to-zero is selected.
            if IsZero(frac64) || fpcr.FZ == '1' then
                type = FPType_Zero;  value = 0.0;
                if !IsZero(frac64) then  // Denormalized input flushed to zero
                    FPProcessException(FPExc_InputDenorm, fpcr);
            else
                type = FPType_Nonzero;  value = 2.0^-1022 * (Real(UInt(frac64)) * 2.0^-52);
        elsif IsOnes(exp64) then
            if IsZero(frac64) then
                type = FPType_Infinity;  value = 2.0^1000000;
            else
                type = if frac64<51> == '1' then FPType_QNaN else FPType_SNaN;
                value = 0.0;
        else
            type = FPType_Nonzero;
            value = 2.0^(UInt(exp64)-1023) * (1.0 + Real(UInt(frac64)) * 2.0^-52);

    if sign == '1' then value = -value;

    return (type, sign, value);
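A minimal Python decoder for the half-precision (N == 16) arm of FPUnpackBase, offered as an unofficial illustration with FPCR.AHP and FPCR.FZ16 assumed clear; it follows the same conventions (infinities map to a huge stand-in magnitude, NaNs to 0.0), and the names are chosen for this sketch:

def unpack_half(fpval: int):
    sign = (fpval >> 15) & 1
    exp  = (fpval >> 10) & 0x1F
    frac = fpval & 0x3FF
    if exp == 0:
        if frac == 0:
            fptype, value = "Zero", 0.0
        else:                              # subnormal (FZ16 == 0, no flush)
            fptype, value = "Nonzero", 2.0**-14 * (frac * 2.0**-10)
    elif exp == 0x1F:                      # IEEE infinity or NaN (AHP == 0)
        if frac == 0:
            fptype, value = "Infinity", 2.0**1000   # "very large" stand-in
        else:
            fptype, value = ("QNaN" if frac & 0x200 else "SNaN"), 0.0
    else:
        fptype, value = "Nonzero", 2.0**(exp - 15) * (1.0 + frac * 2.0**-10)
    return fptype, sign, -value if sign else value

assert unpack_half(0x3C00) == ("Nonzero", 0, 1.0)    # half-precision 1.0
assert unpack_half(0xC000) == ("Nonzero", 1, -2.0)   # half-precision -2.0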

Library pseudocode for shared/functions/float/fpunpack/FPUnpackCV

// FPUnpackCV()
// ============
//
// Used for FP <-> FP conversion instructions.
// For half-precision data ignores FZ16 and observes AHP.

(FPType, bit, real) FPUnpackCV(bits(N) fpval, FPCRType fpcr)
    fpcr.FZ16 = '0';
    (fp_type, sign, value) = FPUnpackBase(fpval, fpcr);
    return (fp_type, sign, value);

Library pseudocode for shared/functions/float/fpzero/FPZero

// FPZero()
// ========

bits(N) FPZero(bit sign)
    assert N IN {16,32,64};
    constant integer E = (if N == 16 then 5 elsif N == 32 then 8 else 11);
    constant integer F = N - (E + 1);
    exp  = Zeros(E);
    frac = Zeros(F);
    return sign : exp : frac;

Library pseudocode for shared/functions/float/vfpexpandimm/VFPExpandImm

// VFPExpandImm()
// ==============

bits(N) VFPExpandImm(bits(8) imm8)
    assert N IN {16,32,64};
    constant integer E = (if N == 16 then 5 elsif N == 32 then 8 else 11);
    constant integer F = N - E - 1;
    sign = imm8<7>;
    exp  = NOT(imm8<6>):Replicate(imm8<6>,E-3):imm8<5:4>;
    frac = imm8<3:0>:Zeros(F-4);
    return sign : exp : frac;
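For the single-precision case (N == 32, so E = 8 and F = 23), the expansion can be modelled in Python as below. This is an illustration only; the function name is invented for this sketch:

import struct

def vfp_expand_imm32(imm8: int) -> float:
    sign = (imm8 >> 7) & 1
    b6   = (imm8 >> 6) & 1
    # exp = NOT(b6) : b6 replicated (E-3) = 5 times : imm8<5:4>
    exp  = ((b6 ^ 1) << 7) | ((b6 * 0b11111) << 2) | ((imm8 >> 4) & 0b11)
    # frac = imm8<3:0> followed by F-4 = 19 zeros
    frac = (imm8 & 0xF) << 19
    bits = (sign << 31) | (exp << 23) | frac
    return struct.unpack(">f", bits.to_bytes(4, "big"))[0]

assert vfp_expand_imm32(0x70) == 1.0   # imm8 = 0x70 encodes 1.0
assert vfp_expand_imm32(0x00) == 2.0   # all-zero imm8 encodes 2.0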

Library pseudocode for shared/functions/integer/AddWithCarry

// AddWithCarry()
// ==============
// Integer addition with carry input, returning result and NZCV flags

(bits(N), bits(4)) AddWithCarry(bits(N) x, bits(N) y, bit carry_in)
    integer unsigned_sum = UInt(x) + UInt(y) + UInt(carry_in);
    integer signed_sum   = SInt(x) + SInt(y) + UInt(carry_in);
    bits(N) result = unsigned_sum<N-1:0>; // same value as signed_sum<N-1:0>
    bit n = result<N-1>;
    bit z = if IsZero(result) then '1' else '0';
    bit c = if UInt(result) == unsigned_sum then '0' else '1';
    bit v = if SInt(result) == signed_sum   then '0' else '1';
    return (result, n:z:c:v);
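An informal Python model of the flag computation above, for experimenting with N-bit operands (names invented for this sketch; not Arm's reference code):

def add_with_carry(x: int, y: int, carry_in: int, n: int = 32):
    mask = (1 << n) - 1
    def to_signed(v):                     # reinterpret as two's complement
        v &= mask
        return v - (1 << n) if v & (1 << (n - 1)) else v
    unsigned_sum = (x & mask) + (y & mask) + carry_in
    signed_sum   = to_signed(x) + to_signed(y) + carry_in
    result = unsigned_sum & mask
    n_flag = (result >> (n - 1)) & 1                       # N: sign bit
    z_flag = 1 if result == 0 else 0                       # Z: zero result
    c_flag = 0 if result == unsigned_sum else 1            # C: unsigned overflow
    v_flag = 0 if to_signed(result) == signed_sum else 1   # V: signed overflow
    return result, (n_flag, z_flag, c_flag, v_flag)

# 0xFFFFFFFF + 1 wraps to 0: carry out (C=1) and zero (Z=1), no signed overflow.
assert add_with_carry(0xFFFFFFFF, 1, 0) == (0, (0, 1, 1, 0))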

Library pseudocode for shared/functions/memory/AArch64.BranchAddr

// AArch64.BranchAddr()
// ====================
// Return the virtual address with tag bits removed for storing to the program counter.

bits(64) AArch64.BranchAddr(bits(64) vaddress)
    assert !UsingAArch32();
    msbit = AddrTop(vaddress, TRUE, PSTATE.EL);
    if msbit == 63 then
        return vaddress;
    elsif (PSTATE.EL IN {EL0, EL1} || IsInHost()) && vaddress<msbit> == '1' then
        return SignExtend(vaddress<msbit:0>);
    else
        return ZeroExtend(vaddress<msbit:0>);
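A Python sketch of the sign/zero-extension arithmetic above, assuming a regime where AddrTop() returned 55 (top-byte-ignore enabled); illustration only, with names invented for the example:

def branch_addr(vaddress: int, msbit: int = 55, el0_or_el1: bool = True) -> int:
    mask64 = (1 << 64) - 1
    if msbit == 63:
        return vaddress                       # no tag bits to remove
    low_mask = (1 << (msbit + 1)) - 1
    low = vaddress & low_mask
    if el0_or_el1 and (vaddress >> msbit) & 1:
        return (low | ~low_mask) & mask64     # sign-extend from bit msbit
    return low                                # zero-extend

# A tagged "high" pointer: tag byte 0x12 is replaced by all-ones extension.
assert branch_addr(0x1280000000000000) == 0xFF80000000000000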

Library pseudocode for shared/functions/memory/AccType

enumeration AccType {
    AccType_NORMAL, AccType_VEC,        // Normal loads and stores
    AccType_STREAM, AccType_VECSTREAM,  // Streaming loads and stores
    AccType_ATOMIC, AccType_ATOMICRW,   // Atomic loads and stores
    AccType_ORDERED, AccType_ORDEREDRW, // Load-Acquire and Store-Release
    AccType_ORDEREDATOMIC,              // Load-Acquire and Store-Release with atomic access
    AccType_ORDEREDATOMICRW,
    AccType_LIMITEDORDERED,             // Load-LOAcquire and Store-LORelease
    AccType_UNPRIV,                     // Load and store unprivileged
    AccType_IFETCH,                     // Instruction fetch
    AccType_PTW,                        // Page table walk
    AccType_NONFAULT,                   // Non-faulting loads
    AccType_CNOTFIRST,                  // Contiguous FF load, not first element
    AccType_NV2REGISTER,                // MRS/MSR instruction used at EL1 and which is converted
                                        // to a memory access that uses the EL2 translation regime
    // Other operations
    AccType_DC,                         // Data cache maintenance
    AccType_DC_UNPRIV,                  // Data cache maintenance instruction used at EL0
    AccType_IC,                         // Instruction cache maintenance
    AccType_DCZVA,                      // DC ZVA instructions
    AccType_AT};                        // Address translation

Library pseudocode for shared/functions/memory/AccessDescriptor

type AccessDescriptor is (
    AccType  acctype,
    MPAMinfo mpam,
    boolean  page_table_walk,
    boolean  secondstage,
    boolean  s2fs1walk,
    integer  level
)

Library pseudocode for shared/functions/memory/AddrTop

// AddrTop()
// =========
// Return the MSB number of a virtual address in the stage 1 translation regime for "el".
// If EL1 is using AArch64 then addresses from EL0 using AArch32 are zero-extended to 64 bits.

integer AddrTop(bits(64) address, boolean IsInstr, bits(2) el)
    assert HaveEL(el);
    regime = S1TranslationRegime(el);
    if ELUsingAArch32(regime) then
        // AArch32 translation regime.
        return 31;
    else
        // AArch64 translation regime.
        case regime of
            when EL1
                tbi = (if address<55> == '1' then TCR_EL1.TBI1 else TCR_EL1.TBI0);
                if HavePACExt() then
                    tbid = if address<55> == '1' then TCR_EL1.TBID1 else TCR_EL1.TBID0;
            when EL2
                if HaveVirtHostExt() && ELIsInHost(el) then
                    tbi = (if address<55> == '1' then TCR_EL2.TBI1 else TCR_EL2.TBI0);
                    if HavePACExt() then
                        tbid = if address<55> == '1' then TCR_EL2.TBID1 else TCR_EL2.TBID0;
                else
                    tbi = TCR_EL2.TBI;
                    if HavePACExt() then tbid = TCR_EL2.TBID;
            when EL3
                tbi = TCR_EL3.TBI;
                if HavePACExt() then tbid = TCR_EL3.TBID;
        return (if tbi == '1' && (!HavePACExt() || tbid == '0' || !IsInstr) then 55 else 63);

Library pseudocode for shared/functions/memory/AddressDescriptor

type AddressDescriptor is (
    FaultRecord      fault,     // fault.type indicates whether the address is valid
    MemoryAttributes memattrs,
    FullAddress      paddress,
    bits(64)         vaddress
)

Library pseudocode for shared/functions/memory/AddressWithAllocationTag

// AddressWithAllocationTag()
// ==========================
// Generate a 64-bit value containing a Logical Address Tag from a 64-bit
// virtual address and an Allocation Tag.
// If the extension is disabled, treats the Allocation Tag as '0000'.

bits(64) AddressWithAllocationTag(bits(64) address, bits(4) allocation_tag)
    bits(64) result = address;
    bits(4) tag = allocation_tag - ('000':address<55>);
    result<59:56> = tag;
    return result;

Library pseudocode for shared/functions/memory/Allocation

constant bits(2) MemHint_No  = '00'; // No Read-Allocate, No Write-Allocate
constant bits(2) MemHint_WA  = '01'; // No Read-Allocate, Write-Allocate
constant bits(2) MemHint_RA  = '10'; // Read-Allocate, No Write-Allocate
constant bits(2) MemHint_RWA = '11'; // Read-Allocate, Write-Allocate

Library pseudocode for shared/functions/memory/AllocationTagFromAddress

// AllocationTagFromAddress()
// ==========================
// Generate a Tag from a 64-bit value containing a Logical Address Tag.
// If access to Allocation Tags is disabled, this function returns '0000'.

bits(4) AllocationTagFromAddress(bits(64) tagged_address)
    bits(4) logical_tag = tagged_address<59:56>;
    bits(4) tag = logical_tag + ('000':tagged_address<55>);
    return tag;
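The two tag helpers are inverses: AddressWithAllocationTag subtracts bit 55 of the address from the Allocation Tag before storing it in bits 59:56, and AllocationTagFromAddress adds it back. An unofficial Python round trip, with all names invented for this sketch:

def address_with_allocation_tag(address: int, allocation_tag: int) -> int:
    tag = (allocation_tag - ((address >> 55) & 1)) & 0xF   # 4-bit wraparound
    return (address & ~(0xF << 56)) | (tag << 56)

def allocation_tag_from_address(tagged_address: int) -> int:
    logical_tag = (tagged_address >> 56) & 0xF
    return (logical_tag + ((tagged_address >> 55) & 1)) & 0xF

addr = 0xFFFF_0000_0000_1000   # bit 55 set, as in a kernel-style address
assert allocation_tag_from_address(address_with_allocation_tag(addr, 0x5)) == 0x5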

Library pseudocode for shared/functions/memory/BigEndian

// BigEndian()
// ===========

boolean BigEndian()
    boolean bigend;
    if UsingAArch32() then
        bigend = (PSTATE.E != '0');
    elsif PSTATE.EL == EL0 then
        bigend = (SCTLR[].E0E != '0');
    else
        bigend = (SCTLR[].EE != '0');
    return bigend;

Library pseudocode for shared/functions/memory/BigEndianReverse

// BigEndianReverse()
// ==================

bits(width) BigEndianReverse (bits(width) value)
    assert width IN {8, 16, 32, 64, 128};
    integer half = width DIV 2;
    if width == 8 then return value;
    return BigEndianReverse(value<half-1:0>) : BigEndianReverse(value<width-1:half>);
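A direct Python analogue of the recursion above, swapping byte order by exchanging halves (equivalent to int.to_bytes/from_bytes with reversed endianness); illustration only:

def big_endian_reverse(value: int, width: int) -> int:
    assert width in (8, 16, 32, 64, 128)
    if width == 8:
        return value
    half = width // 2
    lo = value & ((1 << half) - 1)
    hi = value >> half
    # Reversed halves swap places, mirroring value<half-1:0> : value<width-1:half>.
    return (big_endian_reverse(lo, half) << half) | big_endian_reverse(hi, half)

assert big_endian_reverse(0x1122334455667788, 64) == 0x8877665544332211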

Library pseudocode for shared/functions/memory/Cacheability

constant bits(2) MemAttr_NC = '00'; // Non-cacheable
constant bits(2) MemAttr_WT = '10'; // Write-through
constant bits(2) MemAttr_WB = '11'; // Write-back

Library pseudocode for shared/functions/memory/CheckTag

// CheckTag()
// ==========
// Performs a Tag Check operation for a memory access and returns
// whether the check passed

boolean CheckTag(AddressDescriptor memaddrdesc, bits(4) ptag, boolean write)
    if memaddrdesc.memattrs.tagged then
        bits(64) paddress = ZeroExtend(memaddrdesc.paddress.address);
        return ptag == MemTag[paddress];
    else
        return TRUE;

Library pseudocode for shared/functions/memory/CreateAccessDescriptor

// CreateAccessDescriptor()
// ========================

AccessDescriptor CreateAccessDescriptor(AccType acctype)
    AccessDescriptor accdesc;
    accdesc.acctype = acctype;
    accdesc.mpam = GenMPAMcurEL(acctype IN {AccType_IFETCH, AccType_IC});
    accdesc.page_table_walk = FALSE;
    return accdesc;

Library pseudocode for shared/functions/memory/CreateAccessDescriptorPTW

// CreateAccessDescriptorPTW()
// ===========================

AccessDescriptor CreateAccessDescriptorPTW(AccType acctype, boolean secondstage,
                                           boolean s2fs1walk, integer level)
    AccessDescriptor accdesc;
    accdesc.acctype = acctype;
    accdesc.mpam = GenMPAMcurEL(acctype IN {AccType_IFETCH, AccType_IC});
    accdesc.page_table_walk = TRUE;
    accdesc.s2fs1walk = s2fs1walk;
    accdesc.secondstage = secondstage;
    accdesc.level = level;
    return accdesc;

Library pseudocode for shared/functions/memory/DataMemoryBarrier

DataMemoryBarrier(MBReqDomain domain, MBReqTypes types);

Library pseudocode for shared/functions/memory/DataSynchronizationBarrier

DataSynchronizationBarrier(MBReqDomain domain, MBReqTypes types);

Library pseudocode for shared/functions/memory/DescriptorUpdate

type DescriptorUpdate is (
    boolean           AF,       // AF needs to be set
    boolean           AP,       // AP[2] / S2AP[2] will be modified
    AddressDescriptor descaddr  // Descriptor to be updated
)

Library pseudocode for shared/functions/memory/DeviceType

enumeration DeviceType {DeviceType_GRE, DeviceType_nGRE, DeviceType_nGnRE, DeviceType_nGnRnE};

Library pseudocode for shared/functions/memory/EffectiveTBI

// EffectiveTBI()
// ==============
// Returns the effective TBI in the AArch64 stage 1 translation regime for "el".

bit EffectiveTBI(bits(64) address, boolean IsInstr, bits(2) el)
    assert HaveEL(el);
    regime = S1TranslationRegime(el);
    assert(!ELUsingAArch32(regime));

    case regime of
        when EL1
            tbi = if address<55> == '1' then TCR_EL1.TBI1 else TCR_EL1.TBI0;
            if HavePACExt() then
                tbid = if address<55> == '1' then TCR_EL1.TBID1 else TCR_EL1.TBID0;
        when EL2
            if HaveVirtHostExt() && ELIsInHost(el) then
                tbi = if address<55> == '1' then TCR_EL2.TBI1 else TCR_EL2.TBI0;
                if HavePACExt() then
                    tbid = if address<55> == '1' then TCR_EL2.TBID1 else TCR_EL2.TBID0;
            else
                tbi = TCR_EL2.TBI;
                if HavePACExt() then tbid = TCR_EL2.TBID;
        when EL3
            tbi = TCR_EL3.TBI;
            if HavePACExt() then tbid = TCR_EL3.TBID;

    return (if tbi == '1' && (!HavePACExt() || tbid == '0' || !IsInstr) then '1' else '0');

Library pseudocode for shared/functions/memory/EffectiveTCMA

// EffectiveTCMA()
// ===============
// Returns the effective TCMA of a virtual address in the stage 1 translation regime for "el".

bit EffectiveTCMA(bits(64) address, bits(2) el)
    assert HaveEL(el);
    regime = S1TranslationRegime(el);
    assert(!ELUsingAArch32(regime));

    case regime of
        when EL1
            tcma = if address<55> == '1' then TCR_EL1.TCMA1 else TCR_EL1.TCMA0;
        when EL2
            if HaveVirtHostExt() && ELIsInHost(el) then
                tcma = if address<55> == '1' then TCR_EL2.TCMA1 else TCR_EL2.TCMA0;
            else
                tcma = TCR_EL2.TCMA;
        when EL3
            tcma = TCR_EL3.TCMA;

    return tcma;

Library pseudocode for shared/functions/memory/Fault

enumeration Fault {Fault_None,
                   Fault_AccessFlag,
                   Fault_Alignment,
                   Fault_Background,
                   Fault_Domain,
                   Fault_Permission,
                   Fault_Translation,
                   Fault_AddressSize,
                   Fault_SyncExternal,
                   Fault_SyncExternalOnWalk,
                   Fault_SyncParity,
                   Fault_SyncParityOnWalk,
                   Fault_AsyncParity,
                   Fault_AsyncExternal,
                   Fault_Debug,
                   Fault_TLBConflict,
                   Fault_BranchTarget,
                   Fault_HWUpdateAccessFlag,
                   Fault_Lockdown,
                   Fault_Exclusive,
                   Fault_ICacheMaint};

Library pseudocode for shared/functions/memory/FaultRecord

type FaultRecord is (
    Fault       type,        // Fault Status
    AccType     acctype,     // Type of access that faulted
    FullAddress ipaddress,   // Intermediate physical address
    boolean     s2fs1walk,   // Is on a Stage 1 page table walk
    boolean     write,       // TRUE for a write, FALSE for a read
    integer     level,       // For translation, access flag and permission faults
    bit         extflag,     // IMPLEMENTATION DEFINED syndrome for external aborts
    boolean     secondstage, // Is a Stage 2 abort
    bits(4)     domain,      // Domain number, AArch32 only
    bits(2)     errortype,   // [ARMv8.2 RAS] AArch32 AET or AArch64 SET
    bits(4)     debugmoe     // Debug method of entry, from AArch32 only
)

type PARTIDtype = bits(16);
type PMGtype    = bits(8);

type MPAMinfo is (
    bit        mpam_ns,
    PARTIDtype partid,
    PMGtype    pmg
)

Library pseudocode for shared/functions/memory/FullAddress

type FullAddress is (
    bits(52) address,
    bit      NS       // '0' = Secure, '1' = Non-secure
)

Library pseudocode for shared/functions/memory/Hint_Prefetch

// Signals the memory system that memory accesses of type HINT to or from the specified address are
// likely in the near future. The memory system may take some action to speed up the memory
// accesses when they do occur, such as pre-loading the specified address into one or more
// caches, as indicated by the innermost cache level target (0=L1, 1=L2, etc) and non-temporal hint
// stream. Any or all prefetch hints may be treated as a NOP. A prefetch hint must not cause a
// synchronous abort due to Alignment or Translation faults and the like. Its only effect on
// software-visible state should be on caches and TLBs associated with address, which must be
// accessible by reads, writes or execution, as defined in the translation regime of the current
// Exception level. It is guaranteed not to access Device memory.
// A Prefetch_EXEC hint must not result in an access that could not be performed by a speculative
// instruction fetch, therefore if all associated MMUs are disabled, then it cannot access any
// memory location that cannot be accessed by instruction fetches.

Hint_Prefetch(bits(64) address, PrefetchHint hint, integer target, boolean stream);

Library pseudocode for shared/functions/memory/MBReqDomain

enumeration MBReqDomain {MBReqDomain_Nonshareable, MBReqDomain_InnerShareable,
                         MBReqDomain_OuterShareable, MBReqDomain_FullSystem};

Library pseudocode for shared/functions/memory/MBReqTypes

enumeration MBReqTypes {MBReqTypes_Reads, MBReqTypes_Writes, MBReqTypes_All};

Library pseudocode for shared/functions/memory/MemAttrHints

type MemAttrHints is (
    bits(2) attrs,  // See MemAttr_*, Cacheability attributes
    bits(2) hints,  // See MemHint_*, Allocation hints
    boolean transient
)

Library pseudocode for shared/functions/memory/MemTag

// MemTag[] - non-assignment (read) form
// =====================================
// Load an Allocation Tag from memory.

bits(4) MemTag[bits(64) address]
    AddressDescriptor memaddrdesc;
    bits(4) value;
    iswrite = FALSE;

    memaddrdesc = AArch64.TranslateAddress(address, AccType_NORMAL, iswrite, TRUE, TAG_GRANULE);

    // Check for aborts or debug exceptions
    if IsFault(memaddrdesc) then
        AArch64.Abort(address, memaddrdesc.fault);

    // Return the granule tag if tagging is enabled...
    if AllocationTagAccessIsEnabled() && memaddrdesc.memattrs.tagged then
        return _MemTag[memaddrdesc];
    else
        // ...otherwise read tag as zero.
        return '0000';

// MemTag[] - assignment (write) form
// ==================================
// Store an Allocation Tag to memory.

MemTag[bits(64) address] = bits(4) value
    AddressDescriptor memaddrdesc;
    iswrite = TRUE;

    // Stores of allocation tags must be aligned
    if address != Align(address, TAG_GRANULE) then
        boolean secondstage = FALSE;
        AArch64.Abort(address, AArch64.AlignmentFault(AccType_NORMAL, iswrite, secondstage));

    wasaligned = TRUE;
    memaddrdesc = AArch64.TranslateAddress(address, AccType_NORMAL, iswrite, wasaligned, TAG_GRANULE);

    // Check for aborts or debug exceptions
    if IsFault(memaddrdesc) then
        AArch64.Abort(address, memaddrdesc.fault);

    // Memory array access
    if AllocationTagAccessIsEnabled() && memaddrdesc.memattrs.tagged then
        _MemTag[memaddrdesc] = value;

Library pseudocode for shared/functions/memory/MemType

enumeration MemType {MemType_Normal, MemType_Device};

Library pseudocode for shared/functions/memory/MemoryAttributes

type MemoryAttributes is (
    MemType      type,
    DeviceType   device,        // For Device memory types
    MemAttrHints inner,         // Inner hints and attributes
    MemAttrHints outer,         // Outer hints and attributes
    boolean      tagged,        // Tagged access
    boolean      shareable,
    boolean      outershareable
)

Library pseudocode for shared/functions/memory/Permissions

type Permissions is (
    bits(3) ap,   // Access permission bits
    bit     xn,   // Execute-never bit
    bit     xxn,  // [ARMv8.2] Extended execute-never bit for stage 2
    bit     pxn   // Privileged execute-never bit
)

Library pseudocode for shared/functions/memory/PrefetchHint

enumeration PrefetchHint {Prefetch_READ, Prefetch_WRITE, Prefetch_EXEC};

Library pseudocode for shared/functions/memory/SpeculativeSynchronizationBarrierToPA

SpeculativeSynchronizationBarrierToPA();

Library pseudocode for shared/functions/memory/SpeculativeSynchronizationBarrierToVA

SpeculativeSynchronizationBarrierToVA();

Library pseudocode for shared/functions/memory/TLBRecord

type TLBRecord is (
    Permissions       perms,
    bit               nG,         // '0' = Global, '1' = not Global
    bits(4)           domain,     // AArch32 only
    bit               GP,         // Guarded Page
    boolean           contiguous, // Contiguous bit from page table
    integer           level,      // AArch32 Short-descriptor format: Indicates Section/Page
    integer           blocksize,  // Describes size of memory translated in KBytes
    DescriptorUpdate  descupdate, // [ARMv8.1] Context for h/w update of table descriptor
    bit               CnP,        // [ARMv8.2] TLB entry can be shared between different PEs
    AddressDescriptor addrdesc
)

Library pseudocode for shared/functions/memory/TransformTag

// TransformTag()
// ==============
// Apply tag transformation rules.

bits(4) TransformTag(bits(64) vaddr)
    bits(4) vtag = vaddr<59:56>;
    bits(4) tagdelta = ZeroExtend(vaddr<55>);
    bits(4) ptag = vtag + tagdelta;
    return ptag;

Library pseudocode for shared/functions/memory/_Mem

// These two _Mem[] accessors are the hardware operations which perform single-copy atomic,
// aligned, little-endian memory accesses of size bytes from/to the underlying physical
// memory array of bytes.
//
// The functions address the array using desc.paddress which supplies:
// * A 52-bit physical address
// * A single NS bit to select between Secure and Non-secure parts of the array.
//
// The accdesc descriptor describes the access type: normal, exclusive, ordered, streaming,
// etc. and other parameters required to access the physical memory or for setting syndrome
// register in the event of an external abort.

bits(8*size) _Mem[AddressDescriptor desc, integer size, AccessDescriptor accdesc];

_Mem[AddressDescriptor desc, integer size, AccessDescriptor accdesc] = bits(8*size) value;

Library pseudocode for shared/functions/memory/boolean

// boolean AccessIsTagChecked()
// ============================
// TRUE if a given access is tag-checked, FALSE otherwise.

boolean AccessIsTagChecked(bits(64) vaddr, AccType acctype)
    if PSTATE.M<4> == '1' then return FALSE;

    if EffectiveTBI(vaddr, FALSE, PSTATE.EL) == '0' then return FALSE;

    if EffectiveTCMA(vaddr, PSTATE.EL) == '1' && (vaddr<59:55> == '00000' || vaddr<59:55> == '11111') then return FALSE;

    if !AllocationTagAccessIsEnabled() then return FALSE;

    if acctype IN {AccType_IFETCH, AccType_PTW} then return FALSE;

    if acctype == AccType_NV2REGISTER then return FALSE;

    if PSTATE.TCO == '1' then return FALSE;

    if IsNonTagCheckedInstruction() then return FALSE;

    return TRUE;
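The TCMA test above exempts addresses whose bits 59:55 are all-zeros or all-ones, that is, the match-all logical tags at the bottom and top of the address space. A small Python illustration of just that field check (names invented for this sketch):

def tcma_unchecked(vaddr: int) -> bool:
    field = (vaddr >> 55) & 0x1F          # vaddr<59:55>
    return field == 0b00000 or field == 0b11111

assert tcma_unchecked(0x0000_0000_1000_0000)       # tag 0, bit 55 clear
assert not tcma_unchecked(0x0A00_0000_1000_0000)   # tag 0, nonzero field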

Library pseudocode for shared/functions/mpam/DefaultMPAMinfo

// DefaultMPAMinfo
// ===============
// Returns default MPAM info. If secure is TRUE return default Secure
// MPAMinfo, otherwise return default Non-secure MPAMinfo.

MPAMinfo DefaultMPAMinfo(boolean secure)
    MPAMinfo DefaultInfo;
    DefaultInfo.mpam_ns = if secure then '0' else '1';
    DefaultInfo.partid  = DefaultPARTID;
    DefaultInfo.pmg     = DefaultPMG;
    return DefaultInfo;

Library pseudocode for shared/functions/mpam/DefaultPARTID

constant PARTIDtype DefaultPARTID = 0<15:0>;

Library pseudocode for shared/functions/mpam/DefaultPMG

constant PMGtype DefaultPMG = 0<7:0>;

Library pseudocode for shared/functions/mpam/GenMPAMcurEL

// GenMPAMcurEL
// ============
// Returns MPAMinfo for the current EL and security state.
// InD is TRUE for instruction access and FALSE otherwise.
// May be called if MPAM is not implemented (but in a version that supports
// MPAM), if MPAM is disabled, or in AArch32. In AArch32, convert the mode to
// an EL if possible and use that to drive MPAM information generation. If the
// mode cannot be converted, MPAM is not implemented, or MPAM is disabled,
// return default MPAM information for the current security state.

MPAMinfo GenMPAMcurEL(boolean InD)
    bits(2) mpamel;
    boolean validEL;
    boolean secure = IsSecure();
    if HaveMPAMExt() && MPAMisEnabled() then
        if UsingAArch32() then
            (validEL, mpamel) = ELFromM32(PSTATE.M);
        else
            validEL = TRUE;
            mpamel  = PSTATE.EL;
        if validEL then
            return genMPAM(UInt(mpamel), InD, secure);
    return DefaultMPAMinfo(secure);

Library pseudocode for shared/functions/mpam/MAP_vPARTID

// MAP_vPARTID
// ===========
// Performs conversion of virtual PARTID into physical PARTID
// Contains all of the error checking and implementation
// choices for the conversion.

(PARTIDtype, boolean) MAP_vPARTID(PARTIDtype vpartid)
    // should not ever be called if EL2 is not implemented
    // or is implemented but not enabled in the current
    // security state.
    PARTIDtype ret;
    boolean err;
    integer virt    = UInt( vpartid );
    integer vmprmax = UInt( MPAMIDR_EL1.VPMR_MAX );

    // vpartid_max is largest vpartid supported
    integer vpartid_max = 4 * vmprmax + 3;

    // One of many ways to reduce vpartid to value less than vpartid_max.
    if virt > vpartid_max then
        virt = virt MOD (vpartid_max+1);

    // Check for valid mapping entry.
    if MPAMVPMV_EL2<virt> == '1' then
        // vpartid has a valid mapping so access the map.
        ret = mapvpmw(virt);
        err = FALSE;

    // Is the default virtual PARTID valid?
    elsif MPAMVPMV_EL2<0> == '1' then
        // Yes, so use default mapping for vpartid == 0.
        ret = MPAMVPM0_EL2<0 +: 16>;
        err = FALSE;

    // Neither is valid so use default physical PARTID.
    else
        ret = DefaultPARTID;
        err = TRUE;

    // Check that the physical PARTID is in-range.
    // This physical PARTID came from a virtual mapping entry.
    integer partid_max = UInt( MPAMIDR_EL1.PARTID_MAX );
    if UInt(ret) > partid_max then
        // Out of range, so return default physical PARTID
        ret = DefaultPARTID;
        err = TRUE;
    return (ret, err);
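The mapping step (mapvpmw above) selects one 16-bit physical PARTID from the MPAMVPMn_EL2 registers: the register index is vpartid DIV 4 and the field LSB is (vpartid REM 4) * 16. A hypothetical Python illustration of that selection, modelling the registers as a plain list (not the architected register file):

def map_vpartid_entry(vpmw_regs: list, vpartid: int) -> int:
    word = vpartid // 4            # which MPAMVPMn_EL2 register
    lsb = (vpartid % 4) * 16       # LSB of the 16-bit entry within it
    return (vpmw_regs[word] >> lsb) & 0xFFFF

regs = [0] * 8
regs[1] = 0x0000_0000_00AB_0000    # entry for vpartid 5 (word 1, field 1)
assert map_vpartid_entry(regs, 5) == 0x00AB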

Library pseudocode for shared/functions/mpam/MPAMisEnabled

// MPAMisEnabled
// =============
// Returns TRUE if MPAMisEnabled.

boolean MPAMisEnabled()
    el = HighestEL();
    case el of
        when EL3 return MPAM3_EL3.MPAMEN == '1';
        when EL2 return MPAM2_EL2.MPAMEN == '1';
        when EL1 return MPAM1_EL1.MPAMEN == '1';

Library pseudocode for shared/functions/mpam/MPAMisVirtual

// MPAMisVirtual
// =============
// Returns TRUE if MPAM is configured to be virtual at EL.

boolean MPAMisVirtual(integer el)
    return (MPAMIDR_EL1.HAS_HCR == '1' && EL2Enabled() &&
            ((el == 0 && MPAMHCR_EL2.EL0_VPMEN == '1') ||
             (el == 1 && MPAMHCR_EL2.EL1_VPMEN == '1')));

Library pseudocode for shared/functions/mpam/genMPAM

// genMPAM
// =======
// Returns MPAMinfo for exception level el.
// If InD is TRUE returns MPAM information using PARTID_I and PMG_I fields
// of MPAMel_ELx register and otherwise using PARTID_D and PMG_D fields.
// Produces a Secure PARTID if Secure is TRUE and a Non-secure PARTID otherwise.

MPAMinfo genMPAM(integer el, boolean InD, boolean secure)
    MPAMinfo returnInfo;
    PARTIDtype partidel;
    boolean perr;
    boolean gstplk = (el == 0 && EL2Enabled() &&
                      MPAMHCR_EL2.GSTAPP_PLK == '1' && HCR_EL2.TGE == '0');
    integer eff_el = if gstplk then 1 else el;
    (partidel, perr) = genPARTID(eff_el, InD);
    PMGtype groupel = genPMG(eff_el, InD, perr);
    returnInfo.mpam_ns = if secure then '0' else '1';
    returnInfo.partid  = partidel;
    returnInfo.pmg     = groupel;
    return returnInfo;

Library pseudocode for shared/functions/mpam/genMPAMel

// genMPAMel
// =========
// Returns MPAMinfo for specified EL in the current security state.
// InD is TRUE for instruction access and FALSE otherwise.

MPAMinfo genMPAMel(bits(2) el, boolean InD)
    boolean secure = IsSecure();
    if HaveMPAMExt() && MPAMisEnabled() then
        return genMPAM(UInt(el), InD, secure);
    return DefaultMPAMinfo(secure);

Library pseudocode for shared/functions/mpam/genPARTID

// genPARTID
// =========
// Returns physical PARTID and error boolean for exception level el.
// If InD is TRUE then PARTID is from MPAMel_ELx.PARTID_I and
// otherwise from MPAMel_ELx.PARTID_D.

(PARTIDtype, boolean) genPARTID(integer el, boolean InD)
    PARTIDtype partidel = getMPAM_PARTID(el, InD);
    integer partid_max = UInt(MPAMIDR_EL1.PARTID_MAX);
    if UInt(partidel) > partid_max then
        return (DefaultPARTID, TRUE);
    if MPAMisVirtual(el) then
        return MAP_vPARTID(partidel);
    else
        return (partidel, FALSE);

Library pseudocode for shared/functions/mpam/genPMG

// genPMG
// ======
// Returns PMG for exception level el and I- or D-side (InD).
// If PARTID generation (genPARTID) encountered an error, genPMG() should be
// called with partid_err as TRUE.

PMGtype genPMG(integer el, boolean InD, boolean partid_err)
    integer pmg_max = UInt(MPAMIDR_EL1.PMG_MAX);

    // It is CONSTRAINED UNPREDICTABLE whether partid_err forces PMG to
    // use the default or if it uses the PMG from getMPAM_PMG.
    if partid_err then
        return DefaultPMG;
    PMGtype groupel = getMPAM_PMG(el, InD);
    if UInt(groupel) <= pmg_max then
        return groupel;
    return DefaultPMG;

Library pseudocode for shared/functions/mpam/getMPAM_PARTID

// getMPAM_PARTID
// ==============
// Returns a PARTID from one of the MPAMn_ELx registers.
// MPAMn selects the MPAMn_ELx register used.
// If InD is TRUE, selects the PARTID_I field of that
// register. Otherwise, selects the PARTID_D field.

PARTIDtype getMPAM_PARTID(integer MPAMn, boolean InD)
    PARTIDtype partid;
    boolean el2avail = EL2Enabled();

    if InD then
        case MPAMn of
            when 3 partid = MPAM3_EL3.PARTID_I;
            when 2 partid = if el2avail then MPAM2_EL2.PARTID_I else Zeros();
            when 1 partid = MPAM1_EL1.PARTID_I;
            when 0 partid = MPAM0_EL1.PARTID_I;
            otherwise partid = PARTIDtype UNKNOWN;
    else
        case MPAMn of
            when 3 partid = MPAM3_EL3.PARTID_D;
            when 2 partid = if el2avail then MPAM2_EL2.PARTID_D else Zeros();
            when 1 partid = MPAM1_EL1.PARTID_D;
            when 0 partid = MPAM0_EL1.PARTID_D;
            otherwise partid = PARTIDtype UNKNOWN;
    return partid;

Library pseudocode for shared/functions/mpam/getMPAM_PMG

// getMPAM_PMG
// ===========
// Returns a PMG from one of the MPAMn_ELx registers.
// MPAMn selects the MPAMn_ELx register used.
// If InD is TRUE, selects the PMG_I field of that
// register. Otherwise, selects the PMG_D field.

PMGtype getMPAM_PMG(integer MPAMn, boolean InD)
    PMGtype pmg;
    boolean el2avail = EL2Enabled();

    if InD then
        case MPAMn of
            when 3 pmg = MPAM3_EL3.PMG_I;
            when 2 pmg = if el2avail then MPAM2_EL2.PMG_I else Zeros();
            when 1 pmg = MPAM1_EL1.PMG_I;
            when 0 pmg = MPAM0_EL1.PMG_I;
            otherwise pmg = PMGtype UNKNOWN;
    else
        case MPAMn of
            when 3 pmg = MPAM3_EL3.PMG_D;
            when 2 pmg = if el2avail then MPAM2_EL2.PMG_D else Zeros();
            when 1 pmg = MPAM1_EL1.PMG_D;
            when 0 pmg = MPAM0_EL1.PMG_D;
            otherwise pmg = PMGtype UNKNOWN;
    return pmg;

Library pseudocode for shared/functions/mpam/mapvpmw

// mapvpmw
// =======
// Map a virtual PARTID into a physical PARTID using
// the MPAMVPMn_EL2 registers.
// vpartid is now assumed in-range and valid (checked by caller)
// returns physical PARTID from mapping entry.

PARTIDtype mapvpmw(integer vpartid)
    bits(64) vpmw;
    integer wd = vpartid DIV 4;
    case wd of
        when 0 vpmw = MPAMVPM0_EL2;
        when 1 vpmw = MPAMVPM1_EL2;
        when 2 vpmw = MPAMVPM2_EL2;
        when 3 vpmw = MPAMVPM3_EL2;
        when 4 vpmw = MPAMVPM4_EL2;
        when 5 vpmw = MPAMVPM5_EL2;
        when 6 vpmw = MPAMVPM6_EL2;
        when 7 vpmw = MPAMVPM7_EL2;
        otherwise vpmw = Zeros(64);
    // vpme_lsb selects LSB of field within register
    integer vpme_lsb = (vpartid REM 4) * 16;
    return vpmw<vpme_lsb +: 16>;

Library pseudocode for shared/functions/registers/BranchTo

// BranchTo()
// ==========
// Set program counter to a new address, with a branch type
// In AArch64 state the address might include a tag in the top eight bits.

BranchTo(bits(N) target, BranchType branch_type)
    Hint_Branch(branch_type);
    if N == 32 then
        assert UsingAArch32();
        _PC = ZeroExtend(target);
    else
        assert N == 64 && !UsingAArch32();
        _PC = AArch64.BranchAddr(target<63:0>);
    return;

Library pseudocode for shared/functions/registers/BranchToAddr

// BranchToAddr()
// ==============
// Set program counter to a new address, with a branch type
// In AArch64 state the address does not include a tag in the top eight bits.

BranchToAddr(bits(N) target, BranchType branch_type)
    Hint_Branch(branch_type);
    if N == 32 then
        assert UsingAArch32();
        _PC = ZeroExtend(target);
    else
        assert N == 64 && !UsingAArch32();
        _PC = target<63:0>;
    return;

Library pseudocode for shared/functions/registers/BranchType

enumeration BranchType {
    BranchType_DIRCALL,   // Direct Branch with link
    BranchType_INDCALL,   // Indirect Branch with link
    BranchType_ERET,      // Exception return (indirect)
    BranchType_DBGEXIT,   // Exit from Debug state
    BranchType_RET,       // Indirect branch with function return hint
    BranchType_DIR,       // Direct branch
    BranchType_INDIR,     // Indirect branch
    BranchType_EXCEPTION, // Exception entry
    BranchType_RESET,     // Reset
    BranchType_UNKNOWN};  // Other

Library pseudocode for shared/functions/registers/Hint_Branch

// Report the hint passed to BranchTo() and BranchToAddr(), for consideration when processing
// the next instruction.

Hint_Branch(BranchType hint);

Library pseudocode for shared/functions/registers/NextInstrAddr

// Return address of the sequentially next instruction.

bits(N) NextInstrAddr();

Library pseudocode for shared/functions/registers/ResetExternalDebugRegisters

// Reset the External Debug registers in the Core power domain.

ResetExternalDebugRegisters(boolean cold_reset);

Library pseudocode for shared/functions/registers/ThisInstrAddr

// ThisInstrAddr()
// ===============
// Return address of the current instruction.

bits(N) ThisInstrAddr()
    assert N == 64 || (N == 32 && UsingAArch32());
    return _PC<N-1:0>;

Library pseudocode for shared/functions/registers/_PC

bits(64) _PC;

Library pseudocode for shared/functions/registers/_R

array bits(64) _R[0..30];

Library pseudocode for shared/functions/registers/_V

array bits(128) _V[0..31];

Library pseudocode for shared/functions/sysregisters/SPSR

// SPSR[] - non-assignment form
// ============================

bits(32) SPSR[]
    bits(32) result;
    if UsingAArch32() then
        case PSTATE.M of
            when M32_FIQ      result = SPSR_fiq;
            when M32_IRQ      result = SPSR_irq;
            when M32_Svc      result = SPSR_svc;
            when M32_Monitor  result = SPSR_mon;
            when M32_Abort    result = SPSR_abt;
            when M32_Hyp      result = SPSR_hyp;
            when M32_Undef    result = SPSR_und;
            otherwise         Unreachable();
    else
        case PSTATE.EL of
            when EL1          result = SPSR_EL1;
            when EL2          result = SPSR_EL2;
            when EL3          result = SPSR_EL3;
            otherwise         Unreachable();
    return result;

// SPSR[] - assignment form
// ========================

SPSR[] = bits(32) value
    if UsingAArch32() then
        case PSTATE.M of
            when M32_FIQ      SPSR_fiq = value;
            when M32_IRQ      SPSR_irq = value;
            when M32_Svc      SPSR_svc = value;
            when M32_Monitor  SPSR_mon = value;
            when M32_Abort    SPSR_abt = value;
            when M32_Hyp      SPSR_hyp = value;
            when M32_Undef    SPSR_und = value;
            otherwise         Unreachable();
    else
        case PSTATE.EL of
            when EL1          SPSR_EL1 = value;
            when EL2          SPSR_EL2 = value;
            when EL3          SPSR_EL3 = value;
            otherwise         Unreachable();
    return;

Library pseudocode for shared/functions/system/AllocationTagAccessIsEnabled

// AllocationTagAccessIsEnabled()
// ==============================
// Check whether access to Allocation Tags is enabled.

boolean AllocationTagAccessIsEnabled()
    if SCR_EL3.ATA == '0' && PSTATE.EL IN {EL0, EL1, EL2} then
        return FALSE;
    elsif HCR_EL2.ATA == '0' && HCR_EL2.<E2H,TGE> != '11' && PSTATE.EL IN {EL0, EL1} then
        return FALSE;
    elsif SCTLR_EL3.ATA == '0' && PSTATE.EL == EL3 then
        return FALSE;
    elsif SCTLR_EL2.ATA == '0' && PSTATE.EL == EL2 then
        return FALSE;
    elsif SCTLR_EL1.ATA == '0' && PSTATE.EL == EL1 then
        return FALSE;
    elsif SCTLR_EL2.ATA0 == '0' && HCR_EL2.<E2H,TGE> == '11' && PSTATE.EL == EL0 then
        return FALSE;
    elsif SCTLR_EL1.ATA0 == '0' && HCR_EL2.<E2H,TGE> != '11' && PSTATE.EL == EL0 then
        return FALSE;
    else
        return TRUE;
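A sketch of the same precedence in C, for illustration only: the control fields are passed in as booleans rather than read from system registers, and it assumes EL3 and EL2 are implemented (as the pseudocode above reads those registers directly); all names are ours:

    #include <stdbool.h>

    struct tag_ctrl {
        bool scr_ata, hcr_ata, hcr_e2h_tge;   /* hcr_e2h_tge: <E2H,TGE> == '11' */
        bool sctlr_el3_ata, sctlr_el2_ata, sctlr_el1_ata;
        bool sctlr_el2_ata0, sctlr_el1_ata0;  /* controls for EL0 accesses      */
    };

    static bool allocation_tag_access_enabled(int el, const struct tag_ctrl *c)
    {
        if (!c->scr_ata && el <= 2)                    return false;
        if (!c->hcr_ata && !c->hcr_e2h_tge && el <= 1) return false;
        switch (el) {
        case 3:  return c->sctlr_el3_ata;
        case 2:  return c->sctlr_el2_ata;
        case 1:  return c->sctlr_el1_ata;
        default: return c->hcr_e2h_tge ? c->sctlr_el2_ata0 : c->sctlr_el1_ata0;
        }
    }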

Library pseudocode for shared/functions/system/ArchVersion

enumeration ArchVersion {
    ARMv8p0,
    ARMv8p1,
    ARMv8p2,
    ARMv8p3,
    ARMv8p4,
    ARMv8p5
};

Library pseudocode for shared/functions/system/BranchTargetCheck

// BranchTargetCheck()
// ===================
// This function checks whether the current instruction is a valid target for a branch
// taken into, or inside, a guarded page. It is executed on every cycle once the current
// instruction has been decoded and the values of InGuardedPage and BTypeCompatible have been
// determined for the current instruction.

BranchTargetCheck()
    assert HaveBTIExt() && !UsingAArch32();

    // The branch target check considers two state variables:
    // * InGuardedPage, which is evaluated during instruction fetch.
    // * BTypeCompatible, which is evaluated during instruction decode.
    if InGuardedPage && PSTATE.BTYPE != '00' && !BTypeCompatible && !Halted() then
        bits(64) pc = ThisInstrAddr();
        AArch64.BranchTargetException(pc<51:0>);

    boolean branch_instr = AArch64.ExecutingBROrBLROrRetInstr();
    boolean bti_instr    = AArch64.ExecutingBTIInstr();

    // PSTATE.BTYPE defaults to 00 for instructions that don't explicitly set BTYPE.
    if !(branch_instr || bti_instr) then
        BTypeNext = '00';

Library pseudocode for shared/functions/system/ChooseNonExcludedTag

// ChooseNonExcludedTag()
// ======================
// Return a tag derived from the start and the offset values, excluding
// any tags in the given mask.

bits(4) ChooseNonExcludedTag(bits(4) tag, bits(4) offset, bits(16) exclude)
    if exclude == Ones(16) then
        return tag;

    while offset != '0000' do
        offset = offset - '0001';
        tag = tag + '0001';
        while exclude<UInt(tag)> == '1' do
            tag = tag + '0001';

    return tag;
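The same loop in C, as an illustration of the 4-bit wraparound arithmetic (tag and offset increments are modulo 16, and exclude has bit t set when tag t must be skipped); this is a sketch, not the architected definition:

    #include <stdint.h>

    static unsigned choose_non_excluded_tag(unsigned tag, unsigned offset,
                                            uint16_t exclude)
    {
        if (exclude == 0xFFFFu)
            return tag;                     /* every tag excluded           */
        while (offset != 0) {
            offset = (offset - 1) & 0xFu;
            tag = (tag + 1) & 0xFu;
            while ((exclude >> tag) & 1u)   /* skip excluded tags           */
                tag = (tag + 1) & 0xFu;
        }
        return tag;
    }

The inner loop terminates because at least one of the sixteen tags is not excluded once the all-ones mask has been ruled out.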

Library pseudocode for shared/functions/system/ClearEventRegister

// ClearEventRegister()
// ====================
// Clear the Event Register of this PE

ClearEventRegister()
    EventRegister = '0';
    return;

Library pseudocode for shared/functions/system/ClearPendingPhysicalSError

// Clear a pending physical SError interrupt

ClearPendingPhysicalSError();

Library pseudocode for shared/functions/system/ClearPendingVirtualSError

// Clear a pending virtual SError interrupt

ClearPendingVirtualSError();

Library pseudocode for shared/functions/system/ConditionHolds

// ConditionHolds()
// ================
// Return TRUE iff COND currently holds

boolean ConditionHolds(bits(4) cond)
    // Evaluate base condition.
    case cond<3:1> of
        when '000' result = (PSTATE.Z == '1');                          // EQ or NE
        when '001' result = (PSTATE.C == '1');                          // CS or CC
        when '010' result = (PSTATE.N == '1');                          // MI or PL
        when '011' result = (PSTATE.V == '1');                          // VS or VC
        when '100' result = (PSTATE.C == '1' && PSTATE.Z == '0');       // HI or LS
        when '101' result = (PSTATE.N == PSTATE.V);                     // GE or LT
        when '110' result = (PSTATE.N == PSTATE.V && PSTATE.Z == '0');  // GT or LE
        when '111' result = TRUE;                                       // AL

    // Condition flag values in the set '111x' indicate always true
    // Otherwise, invert condition if necessary.
    if cond<0> == '1' && cond != '1111' then
        result = !result;

    return result;
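For illustration, the same table in C, with the four flags passed in as booleans (the names are ours, not the architecture's):

    #include <stdbool.h>
    #include <stdint.h>

    static bool condition_holds(uint8_t cond, bool n, bool z, bool c, bool v)
    {
        bool result;
        switch ((cond >> 1) & 7) {
        case 0:  result = z;             break;  /* EQ or NE */
        case 1:  result = c;             break;  /* CS or CC */
        case 2:  result = n;             break;  /* MI or PL */
        case 3:  result = v;             break;  /* VS or VC */
        case 4:  result = c && !z;       break;  /* HI or LS */
        case 5:  result = n == v;        break;  /* GE or LT */
        case 6:  result = n == v && !z;  break;  /* GT or LE */
        default: result = true;          break;  /* AL       */
        }
        if ((cond & 1) && cond != 0xF)           /* invert the odd encodings */
            result = !result;
        return result;
    }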

Library pseudocode for shared/functions/system/ConsumptionOfSpeculativeDataBarrier

ConsumptionOfSpeculativeDataBarrier();

Library pseudocode for shared/functions/system/CurrentInstrSet

// CurrentInstrSet()
// =================

InstrSet CurrentInstrSet()
    if UsingAArch32() then
        result = if PSTATE.T == '0' then InstrSet_A32 else InstrSet_T32;
        // PSTATE.J is RES0. Implementation of T32EE or Jazelle state not permitted.
    else
        result = InstrSet_A64;
    return result;

Library pseudocode for shared/functions/system/CurrentPL

// CurrentPL()
// ===========

PrivilegeLevel CurrentPL()
    return PLOfEL(PSTATE.EL);

Library pseudocode for shared/functions/system/EL0

constant bits(2) EL3 = '11';
constant bits(2) EL2 = '10';
constant bits(2) EL1 = '01';
constant bits(2) EL0 = '00';

Library pseudocode for shared/functions/system/EL2Enabled

// EL2Enabled()
// ============
// Returns TRUE if EL2 is present and executing in either non-Secure state when Secure EL2
// is not implemented, or in Secure state when Secure EL2 is implemented, FALSE otherwise

boolean EL2Enabled()
    return IsSecureEL2Enabled() || (HaveEL(EL2) && !IsSecure());

Library pseudocode for shared/functions/system/ELFromM32

// ELFromM32()
// ===========

(boolean,bits(2)) ELFromM32(bits(5) mode)
    // Convert an AArch32 mode encoding to an Exception level.
    // Returns (valid,EL):
    //   'valid' is TRUE if 'mode<4:0>' encodes a mode that is both valid for this implementation
    //           and the current value of SCR.NS/SCR_EL3.NS.
    //   'EL'    is the Exception level decoded from 'mode'.
    bits(2) el;
    boolean valid = !BadMode(mode);  // Check for modes that are not valid for this implementation

    case mode of
        when M32_Monitor
            el = EL3;
        when M32_Hyp
            el = EL2;
            valid = valid && (!HaveEL(EL3) || SCR_GEN[].NS == '1');
        when M32_FIQ, M32_IRQ, M32_Svc, M32_Abort, M32_Undef, M32_System
            // If EL3 is implemented and using AArch32, then these modes are EL3 modes in Secure
            // state, and EL1 modes in Non-secure state. If EL3 is not implemented or is using
            // AArch64, then these modes are EL1 modes.
            el = (if HaveEL(EL3) && HighestELUsingAArch32() && SCR.NS == '0' then EL3 else EL1);
        when M32_User
            el = EL0;
        otherwise
            valid = FALSE;           // Passed an illegal mode value

    if !valid then el = bits(2) UNKNOWN;
    return (valid, el);
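An illustrative C rendering of just the mode-to-EL decode (it omits the BadMode() screen and the SCR_GEN[].NS validity check on Hyp mode; the constants mirror the Mode_Bits listing later in this section):

    #include <stdbool.h>

    enum { M32_USER = 0x10, M32_FIQ = 0x11, M32_IRQ = 0x12, M32_SVC = 0x13,
           M32_MON = 0x16, M32_ABT = 0x17, M32_HYP = 0x1A, M32_UND = 0x1B,
           M32_SYS = 0x1F };

    /* Returns the EL (0..3), or -1 for an illegal mode. 'el3_aarch32_secure'
     * is true when EL3 is implemented, using AArch32, and SCR.NS == 0. */
    static int el_from_m32(unsigned mode, bool el3_aarch32_secure)
    {
        switch (mode) {
        case M32_MON:  return 3;
        case M32_HYP:  return 2;
        case M32_FIQ: case M32_IRQ: case M32_SVC:
        case M32_ABT: case M32_UND: case M32_SYS:
            return el3_aarch32_secure ? 3 : 1;
        case M32_USER: return 0;
        default:       return -1;
        }
    }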

Library pseudocode for shared/functions/system/ELFromSPSR

// ELFromSPSR()
// ============
// Convert an SPSR value encoding to an Exception level.
// Returns (valid,EL):
//   'valid' is TRUE if 'spsr<4:0>' encodes a valid mode for the current state.
//   'EL'    is the Exception level decoded from 'spsr'.

(boolean,bits(2)) ELFromSPSR(bits(32) spsr)
    if spsr<4> == '0' then                       // AArch64 state
        el = spsr<3:2>;
        if HighestELUsingAArch32() then          // No AArch64 support
            valid = FALSE;
        elsif !HaveEL(el) then                   // Exception level not implemented
            valid = FALSE;
        elsif spsr<1> == '1' then                // M[1] must be 0
            valid = FALSE;
        elsif el == EL0 && spsr<0> == '1' then   // for EL0, M[0] must be 0
            valid = FALSE;
        elsif el == EL2 && HaveEL(EL3) && !IsSecureEL2Enabled() && SCR_EL3.NS == '0' then
            valid = FALSE;                       // Unless Secure EL2 is enabled, EL2 only valid in Non-secure state
        else
            valid = TRUE;
    elsif !HaveAnyAArch32() then                 // AArch32 not supported
        valid = FALSE;
    else                                         // AArch32 state
        (valid, el) = ELFromM32(spsr<4:0>);

    if !valid then el = bits(2) UNKNOWN;
    return (valid,el);

Library pseudocode for shared/functions/system/ELIsInHost

// ELIsInHost()
// ============

boolean ELIsInHost(bits(2) el)
    return ((IsSecureEL2Enabled() || !IsSecureBelowEL3()) && HaveVirtHostExt() &&
            !ELUsingAArch32(EL2) && HCR_EL2.E2H == '1' &&
            (el == EL2 || (el == EL0 && HCR_EL2.TGE == '1')));

Library pseudocode for shared/functions/system/ELStateUsingAArch32

// ELStateUsingAArch32()
// =====================

boolean ELStateUsingAArch32(bits(2) el, boolean secure)
    // See ELStateUsingAArch32K() for description. Must only be called in circumstances where
    // result is valid (typically, that means 'el IN {EL1,EL2,EL3}').
    (known, aarch32) = ELStateUsingAArch32K(el, secure);
    assert known;
    return aarch32;

Library pseudocode for shared/functions/system/ELStateUsingAArch32K

// ELStateUsingAArch32K()
// ======================

(boolean,boolean) ELStateUsingAArch32K(bits(2) el, boolean secure)
    // Returns (known, aarch32):
    //   'known'   is FALSE for EL0 if the current Exception level is not EL0 and EL1 is
    //             using AArch64, since it cannot determine the state of EL0; TRUE otherwise.
    //   'aarch32' is TRUE if the specified Exception level is using AArch32; FALSE otherwise.
    boolean aarch32;
    known = TRUE;
    if !HaveAArch32EL(el) then
        aarch32 = FALSE;                       // Exception level is using AArch64
    elsif HighestELUsingAArch32() then
        aarch32 = TRUE;                        // All levels are using AArch32
    else
        aarch32_below_el3 = HaveEL(EL3) && SCR_EL3.RW == '0';
        aarch32_at_el1 = (aarch32_below_el3 ||
                          (HaveEL(EL2) &&
                           ((HaveSecureEL2Ext() && SCR_EL3.EEL2 == '1') || !secure) &&
                           HCR_EL2.RW == '0' &&
                           !(HCR_EL2.E2H == '1' && HCR_EL2.TGE == '1' && HaveVirtHostExt())));
        if el == EL0 && !aarch32_at_el1 then   // Only know if EL0 using AArch32 from PSTATE
            if PSTATE.EL == EL0 then
                aarch32 = PSTATE.nRW == '1';   // EL0 controlled by PSTATE
            else
                known = FALSE;                 // EL0 state is UNKNOWN
        else
            aarch32 = (aarch32_below_el3 && el != EL3) || (aarch32_at_el1 && el IN {EL1,EL0});

    if !known then aarch32 = boolean UNKNOWN;
    return (known, aarch32);

Library pseudocode for shared/functions/system/ELUsingAArch32

// ELUsingAArch32()
// ================

boolean ELUsingAArch32(bits(2) el)
    return ELStateUsingAArch32(el, IsSecureBelowEL3());

Library pseudocode for shared/functions/system/ELUsingAArch32K

// ELUsingAArch32K()
// =================

(boolean,boolean) ELUsingAArch32K(bits(2) el)
    return ELStateUsingAArch32K(el, IsSecureBelowEL3());

Library pseudocode for shared/functions/system/EndOfInstruction

// Terminate processing of the current instruction.

EndOfInstruction();

Library pseudocode for shared/functions/system/EnterLowPowerState

// PE enters a low-power state

EnterLowPowerState();

Library pseudocode for shared/functions/system/EventRegister

bits(1) EventRegister;

Library pseudocode for shared/functions/system/GetPSRFromPSTATE

// GetPSRFromPSTATE()
// ==================
// Return a PSR value which represents the current PSTATE

bits(32) GetPSRFromPSTATE()
    bits(32) spsr = Zeros();
    spsr<31:28> = PSTATE.<N,Z,C,V>;
    if HaveDITExt() then spsr<24> = PSTATE.DIT;
    if HavePANExt() then spsr<22> = PSTATE.PAN;
    spsr<21> = PSTATE.SS;
    spsr<20> = PSTATE.IL;
    if PSTATE.nRW == '1' then                  // AArch32 state
        spsr<27>    = PSTATE.Q;
        spsr<26:25> = PSTATE.IT<1:0>;
        if HaveSSBSExt() then spsr<23> = PSTATE.SSBS;
        spsr<19:16> = PSTATE.GE;
        spsr<15:10> = PSTATE.IT<7:2>;
        spsr<9>     = PSTATE.E;
        spsr<8:6>   = PSTATE.<A,I,F>;          // No PSTATE.D in AArch32 state
        spsr<5>     = PSTATE.T;
        assert PSTATE.M<4> == PSTATE.nRW;      // bit [4] is the discriminator
        spsr<4:0>   = PSTATE.M;
    else                                       // AArch64 state
        if HaveUAOExt() then spsr<23> = PSTATE.UAO;
        if HaveSSBSExt() then spsr<12> = PSTATE.SSBS;
        if HaveMTEExt() then spsr<25> = PSTATE.TCO;
        if HaveBTIExt() then spsr<11:10> = PSTATE.BTYPE;
        spsr<9:6>   = PSTATE.<D,A,I,F>;
        spsr<4>     = PSTATE.nRW;
        spsr<3:2>   = PSTATE.EL;
        spsr<0>     = PSTATE.SP;
    return spsr;
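For illustration, a C sketch packing a few of the AArch64-state fields above into a PSR image; only a subset of the fields is shown and the parameter names are ours:

    #include <stdint.h>

    static uint32_t psr_from_flags_a64(unsigned nzcv, unsigned daif,
                                       unsigned el, unsigned sp)
    {
        uint32_t spsr = 0;
        spsr |= (uint32_t)(nzcv & 0xFu) << 28;  /* N,Z,C,V -> bits [31:28]  */
        spsr |= (uint32_t)(daif & 0xFu) << 6;   /* D,A,I,F -> bits [9:6]    */
        spsr |= (uint32_t)(el & 3u) << 2;       /* EL      -> bits [3:2]    */
        spsr |= (uint32_t)(sp & 1u);            /* SP      -> bit  [0]      */
        return spsr;                            /* nRW (bit 4) = 0: AArch64 */
    }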

Library pseudocode for shared/functions/system/HasArchVersion

// HasArchVersion()
// ================
// Return TRUE if the implemented architecture includes the extensions defined in the specified
// architecture version.

boolean HasArchVersion(ArchVersion version)
    return version == ARMv8p0 || boolean IMPLEMENTATION_DEFINED;

Library pseudocode for shared/functions/system/HaveAArch32EL

// HaveAArch32EL()
// ===============

boolean HaveAArch32EL(bits(2) el)
    // Return TRUE if Exception level 'el' supports AArch32 in this implementation
    if !HaveEL(el) then
        return FALSE;                      // The Exception level is not implemented
    elsif !HaveAnyAArch32() then
        return FALSE;                      // No Exception level can use AArch32
    elsif HighestELUsingAArch32() then
        return TRUE;                       // All Exception levels are using AArch32
    elsif el == HighestEL() then
        return FALSE;                      // The highest Exception level is using AArch64
    elsif el == EL0 then
        return TRUE;                       // EL0 must support using AArch32 if any AArch32
    return boolean IMPLEMENTATION_DEFINED;

Library pseudocode for shared/functions/system/HaveAnyAArch32

// HaveAnyAArch32()
// ================
// Return TRUE if AArch32 state is supported at any Exception level

boolean HaveAnyAArch32()
    return boolean IMPLEMENTATION_DEFINED;

Library pseudocode for shared/functions/system/HaveAnyAArch64

// HaveAnyAArch64()
// ================
// Return TRUE if AArch64 state is supported at any Exception level

boolean HaveAnyAArch64()
    return !HighestELUsingAArch32();

Library pseudocode for shared/functions/system/HaveEL

// HaveEL()
// ========
// Return TRUE if Exception level 'el' is supported

boolean HaveEL(bits(2) el)
    if el IN {EL1,EL0} then
        return TRUE;                       // EL1 and EL0 must exist
    return boolean IMPLEMENTATION_DEFINED;

Library pseudocode for shared/functions/system/HaveFP16Ext

// HaveFP16Ext()
// =============
// Return TRUE if FP16 extension is supported

boolean HaveFP16Ext()
    return boolean IMPLEMENTATION_DEFINED;

Library pseudocode for shared/functions/system/HighestEL

// HighestEL()
// ===========
// Returns the highest implemented Exception level.

bits(2) HighestEL()
    if HaveEL(EL3) then
        return EL3;
    elsif HaveEL(EL2) then
        return EL2;
    else
        return EL1;

Library pseudocode for shared/functions/system/HighestELUsingAArch32

// HighestELUsingAArch32()
// =======================
// Return TRUE if configured to boot into AArch32 operation

boolean HighestELUsingAArch32()
    if !HaveAnyAArch32() then return FALSE;
    return boolean IMPLEMENTATION_DEFINED;   // e.g. CFG32SIGNAL == HIGH

Library pseudocode for shared/functions/system/Hint_Yield

Hint_Yield();

Library pseudocode for shared/functions/system/IllegalExceptionReturn

// IllegalExceptionReturn()
// ========================

boolean IllegalExceptionReturn(bits(32) spsr)
    // Check for illegal return:
    //   * To an unimplemented Exception level.
    //   * To EL2 in Secure state, when SecureEL2 is not enabled.
    //   * To EL0 using AArch64 state, with SPSR.M[0]==1.
    //   * To AArch64 state with SPSR.M[1]==1.
    //   * To AArch32 state with an illegal value of SPSR.M.
    (valid, target) = ELFromSPSR(spsr);
    if !valid then return TRUE;

    // Check for return to higher Exception level
    if UInt(target) > UInt(PSTATE.EL) then return TRUE;

    spsr_mode_is_aarch32 = (spsr<4> == '1');

    // Check for illegal return:
    //   * To EL1, EL2 or EL3 with register width specified in the SPSR different from the
    //     Execution state used in the Exception level being returned to, as determined by
    //     the SCR_EL3.RW or HCR_EL2.RW bits, or as configured from reset.
    //   * To EL0 using AArch64 state when EL1 is using AArch32 state as determined by the
    //     SCR_EL3.RW or HCR_EL2.RW bits or as configured from reset.
    //   * To AArch64 state from AArch32 state (should be caught by above)
    (known, target_el_is_aarch32) = ELUsingAArch32K(target);
    assert known || (target == EL0 && !ELUsingAArch32(EL1));
    if known && spsr_mode_is_aarch32 != target_el_is_aarch32 then return TRUE;

    // Check for illegal return from AArch32 to AArch64
    if UsingAArch32() && !spsr_mode_is_aarch32 then return TRUE;

    // Check for illegal return to EL1 when HCR.TGE is set and when either of
    //   * SecureEL2 is enabled.
    //   * SecureEL2 is not enabled and EL1 is in Non-secure state.
    if HaveEL(EL2) && target == EL1 && HCR_EL2.TGE == '1' then
        if (!IsSecureBelowEL3() || IsSecureEL2Enabled()) then return TRUE;

    return FALSE;

Library pseudocode for shared/functions/system/InstrSet

enumeration InstrSet {InstrSet_A64, InstrSet_A32, InstrSet_T32};

Library pseudocode for shared/functions/system/InstructionSynchronizationBarrier

InstructionSynchronizationBarrier();

Library pseudocode for shared/functions/system/InterruptPending

// InterruptPending()
// ==================
// Return TRUE if there are any pending physical or virtual interrupts, and FALSE otherwise

boolean InterruptPending()
    return IsPhysicalSErrorPending() || IsVirtualSErrorPending();

Library pseudocode for shared/functions/system/IsEventRegisterSet

// IsEventRegisterSet()
// ====================
// Return TRUE if the Event Register of this PE is set, and FALSE otherwise

boolean IsEventRegisterSet()
    return EventRegister == '1';

Library pseudocode for shared/functions/system/IsHighestEL

// IsHighestEL()
// =============
// Returns TRUE if given exception level is the highest exception level implemented

boolean IsHighestEL(bits(2) el)
    return HighestEL() == el;

Library pseudocode for shared/functions/system/IsInHost

// IsInHost()
// ==========

boolean IsInHost()
    return ELIsInHost(PSTATE.EL);

Library pseudocode for shared/functions/system/IsPhysicalSErrorPending

// Return TRUE if a physical SError interrupt is pending

boolean IsPhysicalSErrorPending();

Library pseudocode for shared/functions/system/IsSecure

// IsSecure()
// ==========

boolean IsSecure()
    // Return TRUE if current Exception level is in Secure state.
    if HaveEL(EL3) && !UsingAArch32() && PSTATE.EL == EL3 then
        return TRUE;
    elsif HaveEL(EL3) && UsingAArch32() && PSTATE.M == M32_Monitor then
        return TRUE;
    return IsSecureBelowEL3();

Library pseudocode for shared/functions/system/IsSecureBelowEL3

// IsSecureBelowEL3()
// ==================
// Return TRUE if an Exception level below EL3 is in Secure state
// or would be following an exception return to that level.
//
// Differs from IsSecure in that it ignores the current EL or Mode
// in considering security state.
// That is, if at AArch64 EL3 or in AArch32 Monitor mode, whether an
// exception return would pass to Secure or Non-secure state.

boolean IsSecureBelowEL3()
    if HaveEL(EL3) then
        return SCR_GEN[].NS == '0';
    elsif HaveEL(EL2) && (!HaveSecureEL2Ext() || HighestELUsingAArch32()) then
        // If Secure EL2 is not an architecture option then we must be Non-secure.
        return FALSE;
    else
        // TRUE if processor is Secure or FALSE if Non-secure.
        return boolean IMPLEMENTATION_DEFINED "Secure-only implementation";

Library pseudocode for shared/functions/system/IsSecureEL2Enabled

// IsSecureEL2Enabled()
// ====================
// Returns TRUE if Secure EL2 is enabled, FALSE otherwise

boolean IsSecureEL2Enabled()
    return (HaveSecureEL2Ext() && HaveEL(EL2) && !ELUsingAArch32(EL2) &&
            ((HaveEL(EL3) && !ELUsingAArch32(EL3) && SCR_EL3.EEL2 == '1') ||
             (!HaveEL(EL3) && IsSecure())));

Library pseudocode for shared/functions/system/IsVirtualSErrorPending

// Return TRUE if a virtual SError interrupt is pending

boolean IsVirtualSErrorPending();

Library pseudocode for shared/functions/system/Mode_Bits

constant bits(5) M32_User    = '10000';
constant bits(5) M32_FIQ     = '10001';
constant bits(5) M32_IRQ     = '10010';
constant bits(5) M32_Svc     = '10011';
constant bits(5) M32_Monitor = '10110';
constant bits(5) M32_Abort   = '10111';
constant bits(5) M32_Hyp     = '11010';
constant bits(5) M32_Undef   = '11011';
constant bits(5) M32_System  = '11111';

Library pseudocode for shared/functions/system/PLOfEL

// PLOfEL()
// ========

PrivilegeLevel PLOfEL(bits(2) el)
    case el of
        when EL3 return if HighestELUsingAArch32() then PL1 else PL3;
        when EL2 return PL2;
        when EL1 return PL1;
        when EL0 return PL0;

Library pseudocode for shared/functions/system/PSTATE

ProcState PSTATE;

Library pseudocode for shared/functions/system/PrivilegeLevel

enumeration PrivilegeLevel {PL3, PL2, PL1, PL0};

Library pseudocode for shared/functions/system/ProcState

type ProcState is (
    bits (1) N,        // Negative condition flag
    bits (1) Z,        // Zero condition flag
    bits (1) C,        // Carry condition flag
    bits (1) V,        // oVerflow condition flag
    bits (1) D,        // Debug mask bit                     [AArch64 only]
    bits (1) A,        // SError interrupt mask bit
    bits (1) I,        // IRQ mask bit
    bits (1) F,        // FIQ mask bit
    bits (1) PAN,      // Privileged Access Never Bit        [v8.1]
    bits (1) UAO,      // User Access Override               [v8.2]
    bits (1) DIT,      // Data Independent Timing            [v8.4]
    bits (1) TCO,      // Tag Check Override                 [v8.5, AArch64 only]
    bits (2) BTYPE,    // Branch Type                        [v8.5]
    bits (1) SS,       // Software step bit
    bits (1) IL,       // Illegal Execution state bit
    bits (2) EL,       // Exception Level
    bits (1) nRW,      // not Register Width: 0=64, 1=32
    bits (1) SP,       // Stack pointer select: 0=SP0, 1=SPx [AArch64 only]
    bits (1) Q,        // Cumulative saturation flag         [AArch32 only]
    bits (4) GE,       // Greater than or Equal flags        [AArch32 only]
    bits (1) SSBS,     // Speculative Store Bypass Safe
    bits (8) IT,       // If-then bits, RES0 in CPSR         [AArch32 only]
    bits (1) J,        // J bit, RES0                        [AArch32 only, RES0 in SPSR and CPSR]
    bits (1) T,        // T32 bit, RES0 in CPSR              [AArch32 only]
    bits (1) E,        // Endianness bit                     [AArch32 only]
    bits (5) M         // Mode field                         [AArch32 only]
)

Library pseudocode for shared/functions/system/RandomTag

// RandomTag()
// ===========
// Generate a random Allocation Tag.

bits(4) RandomTag()
    bits(4) tag;
    for i = 0 to 3
        tag<i> = NextRandomTagBit();
    return tag;

Library pseudocode for shared/functions/system/RandomTagBit

// RandomTagBit()
// ==============
// Generate a random bit suitable for generating a random Allocation Tag.

bit NextRandomTagBit()
    bits(16) lfsr = RGSR_EL1.SEED;
    bit top = lfsr<5> EOR lfsr<3> EOR lfsr<2> EOR lfsr<0>;
    RGSR_EL1.SEED = top:lfsr<15:1>;
    return top;
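The generator above is a 16-bit Fibonacci LFSR with taps at bits 5, 3, 2 and 0, seeded from RGSR_EL1.SEED. A C model of one step, for illustration only (the helper name is ours):

    #include <stdint.h>

    static unsigned next_random_tag_bit(uint16_t *seed)
    {
        uint16_t s = *seed;
        unsigned top = ((s >> 5) ^ (s >> 3) ^ (s >> 2) ^ s) & 1u;
        *seed = (uint16_t)((top << 15) | (s >> 1));  /* shift right, insert top */
        return top;
    }

Note that an all-zero seed reproduces itself, so in that case the sequence degenerates to all-zero tags.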

Library pseudocode for shared/functions/system/RestoredITBits

// RestoredITBits() // ================ // Get the value of PSTATE.IT to be restored on this exception return. bits(8) RestoredITBits(bits(32) spsr) it = spsr<15:10,26:25>; // When PSTATE.IL is set, it is CONSTRAINED UNPREDICTABLE whether the IT bits are each set // to zero or copied from the SPSR. if PSTATE.IL == '1' then if ConstrainUnpredictableBool(Unpredictable_ILZEROIT) then return '00000000'; else return it; // The IT bits are forced to zero when they are set to a reserved value. if !IsZero(it<7:4>) && IsZero(it<3:0>) then return '00000000'; // The IT bits are forced to zero when returning to A32 state, or when returning to an EL // with the ITD bit set to 1, and the IT bits are describing a multi-instruction block. itd = if PSTATE.EL == EL2 then HSCTLR.ITD else SCTLR.ITD; if (spsr<5> == '0' && !IsZero(it)) || (itd == '1' && !IsZero(it<2:0>)) then return '00000000'; else return it;
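The IT value is reassembled from two SPSR fields: spsr<15:10> forms IT<7:2> and spsr<26:25> forms IT<1:0>. A C sketch of that reassembly and the two zeroing rules, assuming the CONSTRAINED UNPREDICTABLE PSTATE.IL case is handled elsewhere (function and parameter names are illustrative):

#include <stdint.h>
#include <stdbool.h>

static uint8_t restored_it_bits(uint32_t spsr, bool itd)
{
    /* it<7:2> = spsr<15:10>, it<1:0> = spsr<26:25> */
    uint8_t it = (uint8_t)((((spsr >> 10) & 0x3Fu) << 2) | ((spsr >> 25) & 0x3u));

    /* Reserved value: base condition bits set but no block-size bits. */
    if ((it & 0xF0u) != 0 && (it & 0x0Fu) == 0)
        return 0;

    /* Zeroed when returning to A32 (spsr<5> clear) with IT active, or when
       ITD is set and the IT bits describe a multi-instruction block. */
    if ((((spsr >> 5) & 1u) == 0 && it != 0) || (itd && (it & 0x07u) != 0))
        return 0;

    return it;
}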

Library pseudocode for shared/functions/system/SCRType

type SCRType;

Library pseudocode for shared/functions/system/SCR_GEN

// SCR_GEN[] // ========= SCRType SCR_GEN[] // AArch32 secure & AArch64 EL3 registers are not architecturally mapped assert HaveEL(EL3); bits(32) r; if HighestELUsingAArch32() then r = SCR; else r = SCR_EL3; return r;

Library pseudocode for shared/functions/system/SendEvent

// Signal an event to all PEs in a multiprocessor system to set their Event Registers. // When a PE executes the SEV instruction, it causes this function to be executed SendEvent();

Library pseudocode for shared/functions/system/SendEventLocal

// SendEventLocal() // ================ // Set the local Event Register of this PE. // When a PE executes the SEVL instruction, it causes this function to be executed SendEventLocal() EventRegister = '1'; return;

Library pseudocode for shared/functions/system/SetPSTATEFromPSR

// SetPSTATEFromPSR() // ================== // Set PSTATE based on a PSR value SetPSTATEFromPSR(bits(32) spsr) PSTATE.SS = DebugExceptionReturnSS(spsr); if IllegalExceptionReturn(spsr) then PSTATE.IL = '1'; else // State that is reinstated only on a legal exception return PSTATE.IL = spsr<20>; if spsr<4> == '1' then // AArch32 state AArch32.WriteMode(spsr<4:0>); // Sets PSTATE.EL correctly else // AArch64 state PSTATE.nRW = '0'; PSTATE.EL = spsr<3:2>; PSTATE.SP = spsr<0>; // If PSTATE.IL is set and returning to AArch32 state, it is CONSTRAINED UNPREDICTABLE whether // the T bit is set to zero or copied from SPSR. if PSTATE.IL == '1' && PSTATE.nRW == '1' then if ConstrainUnpredictableBool(Unpredictable_ILZEROT) then spsr<5> = '0'; // State that is reinstated regardless of illegal exception return PSTATE.<N,Z,C,V> = spsr<31:28>; if HaveDITExt() then PSTATE.DIT = spsr<24>; if PSTATE.nRW == '1' then // AArch32 state PSTATE.Q = spsr<27>; PSTATE.IT = RestoredITBits(spsr); ShouldAdvanceIT = FALSE; if HaveSSBSExt() then PSTATE.SSBS = spsr<23>; PSTATE.GE = spsr<19:16>; PSTATE.E = spsr<9>; PSTATE.<A,I,F> = spsr<8:6>; // No PSTATE.D in AArch32 state PSTATE.T = spsr<5>; // PSTATE.J is RES0 else // AArch64 state if HaveUAOExt() then PSTATE.UAO = spsr<23>; if HaveSSBSExt() then PSTATE.SSBS = spsr<12>; PSTATE.<D,A,I,F> = spsr<9:6>; if HaveBTIExt() then PSTATE.BTYPE = spsr<11:10>; if HavePANExt() then PSTATE.PAN = spsr<22>; if HaveMTEExt() then if PSTATE.nRW != '1' then PSTATE.TCO = spsr<25>; return;

Library pseudocode for shared/functions/system/ShouldAdvanceIT

boolean ShouldAdvanceIT;

Library pseudocode for shared/functions/system/SpeculationBarrier

SpeculationBarrier();

Library pseudocode for shared/functions/system/SynchronizeContext

SynchronizeContext();

Library pseudocode for shared/functions/system/SynchronizeErrors

// Implements the error synchronization event. SynchronizeErrors();

Library pseudocode for shared/functions/system/TakeUnmaskedPhysicalSErrorInterrupts

// Take any pending unmasked physical SError interrupt TakeUnmaskedPhysicalSErrorInterrupts(boolean iesb_req);

Library pseudocode for shared/functions/system/TakeUnmaskedSErrorInterrupts

// Take any pending unmasked physical SError interrupt or unmasked virtual SError // interrupt. TakeUnmaskedSErrorInterrupts();

Library pseudocode for shared/functions/system/ThisInstr

bits(32) ThisInstr();

Library pseudocode for shared/functions/system/ThisInstrLength

integer ThisInstrLength();

Library pseudocode for shared/functions/system/Unreachable

Unreachable() assert FALSE;

Library pseudocode for shared/functions/system/UsingAArch32

// UsingAArch32() // ============== // Return TRUE if the current Exception level is using AArch32, FALSE if using AArch64. boolean UsingAArch32() boolean aarch32 = (PSTATE.nRW == '1'); if !HaveAnyAArch32() then assert !aarch32; if HighestELUsingAArch32() then assert aarch32; return aarch32;

Library pseudocode for shared/functions/system/WaitForEvent

// WaitForEvent() // ============== // PE suspends its operation and enters a low-power state // if the Event Register is clear when the WFE is executed WaitForEvent() if EventRegister == '0' then EnterLowPowerState(); return;

Library pseudocode for shared/functions/system/WaitForInterrupt

// WaitForInterrupt() // ================== // PE suspends its operation to enter a low-power state // until a WFI wake-up event occurs or the PE is reset WaitForInterrupt() EnterLowPowerState(); return;

Library pseudocode for shared/functions/unpredictable/ConstrainUnpredictable

// ConstrainUnpredictable() // ======================== // Return the appropriate Constraint result to control the caller's behavior. The return value // is IMPLEMENTATION DEFINED within a permitted list for each UNPREDICTABLE case. // (The permitted list is determined by an assert or case statement at the call site.) // NOTE: This version of the function uses an Unpredictable argument to define the call site. // This argument does not appear in the version used in the ARMv8 Architecture Reference Manual. // The extra argument is used here to allow this example definition. This is an example only and // does not imply a fixed implementation of these behaviors. Indeed the intention is that it should // be defined by each implementation, according to its implementation choices. Constraint ConstrainUnpredictable(Unpredictable which) case which of when Unpredictable_WBOVERLAPLD return Constraint_WBSUPPRESS; // return loaded value when Unpredictable_WBOVERLAPST return Constraint_NONE; // store pre-writeback value when Unpredictable_LDPOVERLAP return Constraint_UNDEF; // instruction is UNDEFINED when Unpredictable_BASEOVERLAP return Constraint_NONE; // use original address when Unpredictable_DATAOVERLAP return Constraint_NONE; // store original value when Unpredictable_DEVPAGE2 return Constraint_FAULT; // take an alignment fault when Unpredictable_INSTRDEVICE return Constraint_NONE; // Do not take a fault when Unpredictable_RESCPACR return Constraint_UNKNOWN; // Map to UNKNOWN value when Unpredictable_RESMAIR return Constraint_UNKNOWN; // Map to UNKNOWN value when Unpredictable_RESTEXCB return Constraint_UNKNOWN; // Map to UNKNOWN value when Unpredictable_RESDACR return Constraint_UNKNOWN; // Map to UNKNOWN value when Unpredictable_RESPRRR return Constraint_UNKNOWN; // Map to UNKNOWN value when Unpredictable_RESVTCRS return Constraint_UNKNOWN; // Map to UNKNOWN value when Unpredictable_RESTnSZ return Constraint_FORCE; // Map to the limit value when Unpredictable_OORTnSZ return Constraint_FORCE; // Map to the limit value when Unpredictable_LARGEIPA return Constraint_FORCE; // Restrict the inputsize to the PAMax value when Unpredictable_ESRCONDPASS return Constraint_FALSE; // Report as "AL" when Unpredictable_ILZEROIT return Constraint_FALSE; // Do not zero PSTATE.IT when Unpredictable_ILZEROT return Constraint_FALSE; // Do not zero PSTATE.T when Unpredictable_BPVECTORCATCHPRI return Constraint_TRUE; // Debug Vector Catch: match on 2nd halfword when Unpredictable_VCMATCHHALF return Constraint_FALSE; // No match when Unpredictable_VCMATCHDAPA return Constraint_FALSE; // No match on Data Abort or Prefetch abort when Unpredictable_WPMASKANDBAS return Constraint_FALSE; // Watchpoint disabled when Unpredictable_WPBASCONTIGUOUS return Constraint_FALSE; // Watchpoint disabled when Unpredictable_RESWPMASK return Constraint_DISABLED; // Watchpoint disabled when Unpredictable_WPMASKEDBITS return Constraint_FALSE; // Watchpoint disabled when Unpredictable_RESBPWPCTRL return Constraint_DISABLED; // Breakpoint/watchpoint disabled when Unpredictable_BPNOTIMPL return Constraint_DISABLED; // Breakpoint disabled when Unpredictable_RESBPTYPE return Constraint_DISABLED; // Breakpoint disabled when Unpredictable_BPNOTCTXCMP return Constraint_DISABLED; // Breakpoint disabled when Unpredictable_BPMATCHHALF return Constraint_FALSE; // No match when Unpredictable_BPMISMATCHHALF return Constraint_FALSE; // No match when Unpredictable_RESTARTALIGNPC return Constraint_FALSE; // Do not force alignment when Unpredictable_RESTARTZEROUPPERPC return Constraint_TRUE; // Force zero extension when Unpredictable_ZEROUPPER return Constraint_TRUE; // zero top halves of X registers when Unpredictable_ERETZEROUPPERPC return Constraint_TRUE; // zero top half of PC when Unpredictable_A32FORCEALIGNPC return Constraint_FALSE; // Do not force alignment when Unpredictable_SMD return Constraint_UNDEF; // disabled SMC is Unallocated when Unpredictable_NONFAULT return Constraint_FALSE; // Speculation enabled when Unpredictable_SVEZEROUPPER return Constraint_TRUE; // zero top bits of Z registers when Unpredictable_SVELDNFDATA return Constraint_TRUE; // Load mem data in NF loads when Unpredictable_SVELDNFZERO return Constraint_TRUE; // Write zeros in NF loads when Unpredictable_AFUPDATE // AF update for alignment or permission fault return Constraint_TRUE; when Unpredictable_IESBinDebug // Use SCTLR[].IESB in Debug state return Constraint_TRUE; when Unpredictable_ZEROBTYPE return Constraint_TRUE; // Save BTYPE in SPSR_ELx/DPSR_EL0 as '00' when Unpredictable_CLEARERRITEZERO // Clearing sticky errors when instruction in flight return Constraint_FALSE;
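The NOTE above stresses that these results are one example choice, to be fixed by each implementation. One natural realization is a lookup table indexed by the Unpredictable case; a C sketch covering just the first three cases, with invented enum and table names (the table contents are this example's choices, not mandated behavior):

typedef enum { CONSTRAINT_NONE, CONSTRAINT_UNDEF,
               CONSTRAINT_WBSUPPRESS } constraint_t;

typedef enum { UNPRED_WBOVERLAPLD, UNPRED_WBOVERLAPST,
               UNPRED_LDPOVERLAP, UNPRED_COUNT } unpredictable_t;

static const constraint_t constrain_table[UNPRED_COUNT] = {
    [UNPRED_WBOVERLAPLD] = CONSTRAINT_WBSUPPRESS, /* return loaded value    */
    [UNPRED_WBOVERLAPST] = CONSTRAINT_NONE,       /* store pre-writeback    */
    [UNPRED_LDPOVERLAP]  = CONSTRAINT_UNDEF,      /* instruction UNDEFINED  */
};

static constraint_t constrain_unpredictable(unpredictable_t which)
{
    return constrain_table[which];
}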

Library pseudocode for shared/functions/unpredictable/ConstrainUnpredictableBits

// ConstrainUnpredictableBits() // ============================ // This is a variant of ConstrainUnpredictable for when the result can be Constraint_UNKNOWN. // If the result is Constraint_UNKNOWN then the function also returns UNKNOWN value, but that // value is always an allocated value; that is, one for which the behavior is not itself // CONSTRAINED. // NOTE: This version of the function uses an Unpredictable argument to define the call site. // This argument does not appear in the version used in the ARMv8 Architecture Reference Manual. // See the NOTE on ConstrainUnpredictable() for more information. // This is an example placeholder only and does not imply a fixed implementation of the bits part // of the result, and may not be applicable in all cases. (Constraint,bits(width)) ConstrainUnpredictableBits(Unpredictable which) c = ConstrainUnpredictable(which); if c == Constraint_UNKNOWN then return (c, Zeros(width)); // See notes; this is an example implementation only else return (c, bits(width) UNKNOWN); // bits result not used

Library pseudocode for shared/functions/unpredictable/ConstrainUnpredictableBool

// ConstrainUnpredictableBool() // ============================ // This is a simple wrapper function for cases where the constrained result is either TRUE or FALSE. // NOTE: This version of the function uses an Unpredictable argument to define the call site. // This argument does not appear in the version used in the ARMv8 Architecture Reference Manual. // See the NOTE on ConstrainUnpredictable() for more information. boolean ConstrainUnpredictableBool(Unpredictable which) c = ConstrainUnpredictable(which); assert c IN {Constraint_TRUE, Constraint_FALSE}; return (c == Constraint_TRUE);

Library pseudocode for shared/functions/unpredictable/ConstrainUnpredictableInteger

// ConstrainUnpredictableInteger() // =============================== // This is a variant of ConstrainUnpredictable for when the result can be Constraint_UNKNOWN. If // the result is Constraint_UNKNOWN then the function also returns an UNKNOWN value in the range // low to high, inclusive. // NOTE: This version of the function uses an Unpredictable argument to define the call site. // This argument does not appear in the version used in the ARMv8 Architecture Reference Manual. // See the NOTE on ConstrainUnpredictable() for more information. // This is an example placeholder only and does not imply a fixed implementation of the integer part // of the result. (Constraint,integer) ConstrainUnpredictableInteger(integer low, integer high, Unpredictable which) c = ConstrainUnpredictable(which); if c == Constraint_UNKNOWN then return (c, low); // See notes; this is an example implementation only else return (c, integer UNKNOWN); // integer result not used

Library pseudocode for shared/functions/unpredictable/Constraint

enumeration Constraint {// General Constraint_NONE, // Instruction executes with // no change or side-effect to its described behavior Constraint_UNKNOWN, // Destination register has UNKNOWN value Constraint_UNDEF, // Instruction is UNDEFINED Constraint_UNDEFEL0, // Instruction is UNDEFINED at EL0 only Constraint_NOP, // Instruction executes as NOP Constraint_TRUE, Constraint_FALSE, Constraint_DISABLED, Constraint_UNCOND, // Instruction executes unconditionally Constraint_COND, // Instruction executes conditionally Constraint_ADDITIONAL_DECODE, // Instruction executes with additional decode // Load-store Constraint_WBSUPPRESS, Constraint_FAULT, // IPA too large Constraint_FORCE, Constraint_FORCENOSLCHECK};

Library pseudocode for shared/functions/unpredictable/Unpredictable

enumeration Unpredictable {// Writeback/transfer register overlap (load) Unpredictable_WBOVERLAPLD, // Writeback/transfer register overlap (store) Unpredictable_WBOVERLAPST, // Load Pair transfer register overlap Unpredictable_LDPOVERLAP, // Store-exclusive base/status register overlap Unpredictable_BASEOVERLAP, // Store-exclusive data/status register overlap Unpredictable_DATAOVERLAP, // Load-store alignment checks Unpredictable_DEVPAGE2, // Instruction fetch from Device memory Unpredictable_INSTRDEVICE, // Reserved CPACR value Unpredictable_RESCPACR, // Reserved MAIR value Unpredictable_RESMAIR, // Reserved TEX:C:B value Unpredictable_RESTEXCB, // Reserved PRRR value Unpredictable_RESPRRR, // Reserved DACR field Unpredictable_RESDACR, // Reserved VTCR.S value Unpredictable_RESVTCRS, // Reserved TCR.TnSZ value Unpredictable_RESTnSZ, // Out-of-range TCR.TnSZ value Unpredictable_OORTnSZ, // IPA size exceeds PA size Unpredictable_LARGEIPA, // Syndrome for a known-passing conditional A32 instruction Unpredictable_ESRCONDPASS, // Illegal State exception: zero PSTATE.IT Unpredictable_ILZEROIT, // Illegal State exception: zero PSTATE.T Unpredictable_ILZEROT, // Debug: prioritization of Vector Catch Unpredictable_BPVECTORCATCHPRI, // Debug Vector Catch: match on 2nd halfword Unpredictable_VCMATCHHALF, // Debug Vector Catch: match on Data Abort or Prefetch abort Unpredictable_VCMATCHDAPA, // Debug watchpoints: non-zero MASK and non-ones BAS Unpredictable_WPMASKANDBAS, // Debug watchpoints: non-contiguous BAS Unpredictable_WPBASCONTIGUOUS, // Debug watchpoints: reserved MASK Unpredictable_RESWPMASK, // Debug watchpoints: non-zero MASKed bits of address Unpredictable_WPMASKEDBITS, // Debug breakpoints and watchpoints: reserved control bits Unpredictable_RESBPWPCTRL, // Debug breakpoints: not implemented Unpredictable_BPNOTIMPL, // Debug breakpoints: reserved type Unpredictable_RESBPTYPE, // Debug breakpoints: not-context-aware breakpoint Unpredictable_BPNOTCTXCMP, // Debug breakpoints: match on 2nd halfword of instruction Unpredictable_BPMATCHHALF, // Debug breakpoints: mismatch on 2nd halfword of instruction Unpredictable_BPMISMATCHHALF, // Debug: restart to a misaligned AArch32 PC value Unpredictable_RESTARTALIGNPC, // Debug: restart to a not-zero-extended AArch32 PC value Unpredictable_RESTARTZEROUPPERPC, // Zero top 32 bits of X registers in AArch32 state Unpredictable_ZEROUPPER, // Zero top 32 bits of PC on illegal return to AArch32 state Unpredictable_ERETZEROUPPERPC, // Force address to be aligned when interworking branch to A32 state Unpredictable_A32FORCEALIGNPC, // SMC disabled Unpredictable_SMD, // FF speculation Unpredictable_NONFAULT, // Zero top bits of Z registers in EL change Unpredictable_SVEZEROUPPER, // Load mem data in NF loads Unpredictable_SVELDNFDATA, // Write zeros in NF loads Unpredictable_SVELDNFZERO, // Access Flag Update by HW Unpredictable_AFUPDATE, // Consider SCTLR[].IESB in Debug state Unpredictable_IESBinDebug, // No events selected in PMSEVFR_EL1 Unpredictable_ZEROPMSEVFR, // No operation type selected in PMSFCR_EL1 Unpredictable_NOOPTYPES, // Zero latency in PMSLATFR_EL1 Unpredictable_ZEROMINLATENCY, // Zero saved BType value in SPSR_ELx/DPSR_EL0 Unpredictable_ZEROBTYPE, // Clearing DCC/ITR sticky flags when instruction is in flight Unpredictable_CLEARERRITEZERO};

Library pseudocode for shared/functions/vector/AdvSIMDExpandImm

// AdvSIMDExpandImm() // ================== bits(64) AdvSIMDExpandImm(bit op, bits(4) cmode, bits(8) imm8) case cmode<3:1> of when '000' imm64 = Replicate(Zeros(24):imm8, 2); when '001' imm64 = Replicate(Zeros(16):imm8:Zeros(8), 2); when '010' imm64 = Replicate(Zeros(8):imm8:Zeros(16), 2); when '011' imm64 = Replicate(imm8:Zeros(24), 2); when '100' imm64 = Replicate(Zeros(8):imm8, 4); when '101' imm64 = Replicate(imm8:Zeros(8), 4); when '110' if cmode<0> == '0' then imm64 = Replicate(Zeros(16):imm8:Ones(8), 2); else imm64 = Replicate(Zeros(8):imm8:Ones(16), 2); when '111' if cmode<0> == '0' && op == '0' then imm64 = Replicate(imm8, 8); if cmode<0> == '0' && op == '1' then imm8a = Replicate(imm8<7>, 8); imm8b = Replicate(imm8<6>, 8); imm8c = Replicate(imm8<5>, 8); imm8d = Replicate(imm8<4>, 8); imm8e = Replicate(imm8<3>, 8); imm8f = Replicate(imm8<2>, 8); imm8g = Replicate(imm8<1>, 8); imm8h = Replicate(imm8<0>, 8); imm64 = imm8a:imm8b:imm8c:imm8d:imm8e:imm8f:imm8g:imm8h; if cmode<0> == '1' && op == '0' then imm32 = imm8<7>:NOT(imm8<6>):Replicate(imm8<6>,5):imm8<5:0>:Zeros(19); imm64 = Replicate(imm32, 2); if cmode<0> == '1' && op == '1' then if UsingAArch32() then ReservedEncoding(); imm64 = imm8<7>:NOT(imm8<6>):Replicate(imm8<6>,8):imm8<5:0>:Zeros(48); return imm64;
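Most encodings replicate imm8, optionally shifted, across 8-, 16-, 32- or 64-bit lanes of the 64-bit result. A partial C sketch covering only the plain replication encodings (cmode<3:1> of '000' and '100', and the op=0 '1110' byte case); the function name is illustrative, and the remaining encodings follow the same pattern:

#include <stdint.h>

/* Replicate an elem_bits-wide element across all 64 bits. */
static uint64_t replicate(uint64_t elem, unsigned elem_bits)
{
    uint64_t r = 0;
    for (unsigned i = 0; i < 64; i += elem_bits)
        r |= elem << i;
    return r;
}

static uint64_t advsimd_expand_imm_partial(unsigned op, unsigned cmode, uint8_t imm8)
{
    switch (cmode >> 1) {
    case 0: return replicate((uint64_t)imm8, 32);    /* '000x': imm8 in each word     */
    case 4: return replicate((uint64_t)imm8, 16);    /* '100x': imm8 in each halfword */
    case 7: if ((cmode & 1u) == 0 && op == 0)
                return replicate((uint64_t)imm8, 8); /* '1110', op=0: every byte      */
            /* other '111x' encodings not sketched here; fall through */
    default: return 0;                               /* placeholder                   */
    }
}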

Library pseudocode for shared/functions/vector/PolynomialMult

// PolynomialMult() // ================ bits(M+N) PolynomialMult(bits(M) op1, bits(N) op2) result = Zeros(M+N); extended_op2 = ZeroExtend(op2, M+N); for i=0 to M-1 if op1<i> == '1' then result = result EOR LSL(extended_op2, i); return result;
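This is carry-less (GF(2)) multiplication: partial products are combined with XOR rather than addition. A C sketch for the 8x8-to-16-bit case (the function name is illustrative):

#include <stdint.h>

static uint16_t polynomial_mult_8x8(uint8_t op1, uint8_t op2)
{
    uint16_t result = 0;
    uint16_t extended_op2 = op2;          /* zero-extended to the result width */
    for (unsigned i = 0; i < 8; i++)
        if ((op1 >> i) & 1u)
            result ^= (uint16_t)(extended_op2 << i);  /* XOR in shifted partial product */
    return result;
}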

Library pseudocode for shared/functions/vector/SatQ

// SatQ() // ====== (bits(N), boolean) SatQ(integer i, integer N, boolean unsigned) (result, sat) = if unsigned then UnsignedSatQ(i, N) else SignedSatQ(i, N); return (result, sat);

Library pseudocode for shared/functions/vector/SignedSatQ

// SignedSatQ() // ============ (bits(N), boolean) SignedSatQ(integer i, integer N) if i > 2^(N-1) - 1 then result = 2^(N-1) - 1; saturated = TRUE; elsif i < -(2^(N-1)) then result = -(2^(N-1)); saturated = TRUE; else result = i; saturated = FALSE; return (result<N-1:0>, saturated);

Library pseudocode for shared/functions/vector/UnsignedRSqrtEstimate

// UnsignedRSqrtEstimate() // ======================= bits(N) UnsignedRSqrtEstimate(bits(N) operand) assert N IN {16,32}; if operand<N-1:N-2> == '00' then // Operands <= 0x3FFFFFFF produce 0xFFFFFFFF result = Ones(N); else // input is in the range 0x40000000 .. 0xffffffff representing [0.25 .. 1.0) // estimate is in the range 256 .. 511 representing [1.0 .. 2.0) case N of when 16 estimate = RecipSqrtEstimate(UInt(operand<15:7>)); when 32 estimate = RecipSqrtEstimate(UInt(operand<31:23>)); // result is in the range 0x80000000 .. 0xff800000 representing [1.0 .. 2.0) result = estimate<8:0> : Zeros(N-9); return result;

Library pseudocode for shared/functions/vector/UnsignedRecipEstimate

// UnsignedRecipEstimate() // ======================= bits(N) UnsignedRecipEstimate(bits(N) operand) assert N IN {16,32}; if operand<N-1> == '0' then // Operands <= 0x7FFFFFFF produce 0xFFFFFFFF result = Ones(N); else // input is in the range 0x80000000 .. 0xffffffff representing [0.5 .. 1.0) // estimate is in the range 256 to 511 representing [1.0 .. 2.0) case N of when 16 estimate = RecipEstimate(UInt(operand<15:7>)); when 32 estimate = RecipEstimate(UInt(operand<31:23>)); // result is in the range 0x80000000 .. 0xff800000 representing [1.0 .. 2.0) result = estimate<8:0> : Zeros(N-9); return result;
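Both estimate functions normalize the operand, index a lookup with 9 fraction bits, and place the 9-bit estimate in the top bits of the result. A C sketch of UnsignedRecipEstimate() for N = 32, assuming a hypothetical recip_estimate() helper that stands in for the architectural table lookup:

#include <stdint.h>

/* Assumed helper: maps a 9-bit index to a 9-bit estimate in 256..511. */
extern unsigned recip_estimate(unsigned index9);

static uint32_t unsigned_recip_estimate32(uint32_t operand)
{
    if ((operand >> 31) == 0)            /* operands <= 0x7FFFFFFF */
        return 0xFFFFFFFFu;
    unsigned estimate = recip_estimate((operand >> 23) & 0x1FFu);
    return (uint32_t)estimate << 23;     /* estimate<8:0> : Zeros(23) */
}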

Library pseudocode for shared/functions/vector/UnsignedSatQ

// UnsignedSatQ() // ============== (bits(N), boolean) UnsignedSatQ(integer i, integer N) if i > 2^N - 1 then result = 2^N - 1; saturated = TRUE; elsif i < 0 then result = 0; saturated = TRUE; else result = i; saturated = FALSE; return (result<N-1:0>, saturated);
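The SatQ() family above simply clamps an integer into the representable range and reports whether clamping occurred. A C sketch for N = 8 (function names are illustrative):

#include <stdint.h>
#include <stdbool.h>

static uint8_t unsigned_sat_q8(int32_t i, bool *saturated)
{
    if (i > 255) { *saturated = true; return 255; }
    if (i < 0)   { *saturated = true; return 0;   }
    *saturated = false;
    return (uint8_t)i;
}

static int8_t signed_sat_q8(int32_t i, bool *saturated)
{
    if (i > 127)  { *saturated = true; return 127;  }
    if (i < -128) { *saturated = true; return -128; }
    *saturated = false;
    return (int8_t)i;
}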

Library pseudocode for shared/translation/attrs/CombineS1S2AttrHints

// CombineS1S2AttrHints() // ====================== MemAttrHints CombineS1S2AttrHints(MemAttrHints s1desc, MemAttrHints s2desc) MemAttrHints result; if HaveStage2MemAttrControl() && HCR_EL2.FWB == '1' then if s2desc.attrs == MemAttr_WB then result.attrs = s1desc.attrs; elsif s2desc.attrs == MemAttr_WT then result.attrs = MemAttr_WB; else result.attrs = MemAttr_NC; else if s2desc.attrs == '01' || s1desc.attrs == '01' then result.attrs = bits(2) UNKNOWN; // Reserved elsif s2desc.attrs == MemAttr_NC || s1desc.attrs == MemAttr_NC then result.attrs = MemAttr_NC; // Non-cacheable elsif s2desc.attrs == MemAttr_WT || s1desc.attrs == MemAttr_WT then result.attrs = MemAttr_WT; // Write-through else result.attrs = MemAttr_WB; // Write-back result.hints = s1desc.hints; result.transient = s1desc.transient; return result;
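In the non-FWB path, the weaker cacheability of the two stages wins: Non-cacheable beats Write-through, which beats Write-back. A C sketch of that rule, using this example's own numeric encodings rather than the architectural ones:

enum mem_attr { MEMATTR_NC, MEMATTR_WT, MEMATTR_WB };  /* example encodings */

static enum mem_attr combine_s1_s2_attrs(enum mem_attr s1, enum mem_attr s2)
{
    /* The combined attribute is the weaker of the two: NC < WT < WB. */
    return (s1 < s2) ? s1 : s2;
}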

Library pseudocode for shared/translation/attrs/CombineS1S2Desc

// CombineS1S2Desc() // ================= // Combines the address descriptors from stage 1 and stage 2 AddressDescriptor CombineS1S2Desc(AddressDescriptor s1desc, AddressDescriptor s2desc) AddressDescriptor result; result.paddress = s2desc.paddress; if IsFault(s1desc) || IsFault(s2desc) then result = if IsFault(s1desc) then s1desc else s2desc; elsif s2desc.memattrs.type == MemType_Device || s1desc.memattrs.type == MemType_Device then result.memattrs.type = MemType_Device; if s1desc.memattrs.type == MemType_Normal then result.memattrs.device = s2desc.memattrs.device; elsif s2desc.memattrs.type == MemType_Normal then result.memattrs.device = s1desc.memattrs.device; else // Both Device result.memattrs.device = CombineS1S2Device(s1desc.memattrs.device, s2desc.memattrs.device); result.memattrs.tagged = FALSE; else // Both Normal result.memattrs.type = MemType_Normal; result.memattrs.device = DeviceType UNKNOWN; result.memattrs.inner = CombineS1S2AttrHints(s1desc.memattrs.inner, s2desc.memattrs.inner); result.memattrs.outer = CombineS1S2AttrHints(s1desc.memattrs.outer, s2desc.memattrs.outer); result.memattrs.shareable = (s1desc.memattrs.shareable || s2desc.memattrs.shareable); result.memattrs.outershareable = (s1desc.memattrs.outershareable || s2desc.memattrs.outershareable); result.memattrs.tagged = (s1desc.memattrs.tagged && result.memattrs.inner.attrs == MemAttr_WB && result.memattrs.inner.hints == MemHint_RWA && result.memattrs.outer.attrs == MemAttr_WB && result.memattrs.outer.hints == MemHint_RWA); result.memattrs = MemAttrDefaults(result.memattrs); return result;

Library pseudocode for shared/translation/attrs/CombineS1S2Device

// CombineS1S2Device() // =================== // Combines device types from stage 1 and stage 2 DeviceType CombineS1S2Device(DeviceType s1device, DeviceType s2device) if s2device == DeviceType_nGnRnE || s1device == DeviceType_nGnRnE then result = DeviceType_nGnRnE; elsif s2device == DeviceType_nGnRE || s1device == DeviceType_nGnRE then result = DeviceType_nGnRE; elsif s2device == DeviceType_nGRE || s1device == DeviceType_nGRE then result = DeviceType_nGRE; else result = DeviceType_GRE; return result;
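The cascade above always picks the more restrictive of the two Device types. If the enumerators are ordered from most to least restrictive, the combination reduces to a minimum; a C sketch (ordering and names are this example's own):

enum device_type { DEV_nGnRnE, DEV_nGnRE, DEV_nGRE, DEV_GRE };

static enum device_type combine_s1_s2_device(enum device_type s1, enum device_type s2)
{
    return (s1 < s2) ? s1 : s2;   /* min() under the restrictiveness order */
}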

Library pseudocode for shared/translation/attrs/LongConvertAttrsHints

// LongConvertAttrsHints() // ======================= // Convert the long attribute fields for Normal memory as used in the MAIR fields // to orthogonal attributes and hints MemAttrHints LongConvertAttrsHints(bits(4) attrfield, AccType acctype) assert !IsZero(attrfield); MemAttrHints result; if S1CacheDisabled(acctype) then // Force Non-cacheable result.attrs = MemAttr_NC; result.hints = MemHint_No; else if attrfield<3:2> == '00' then // Write-through transient result.attrs = MemAttr_WT; result.hints = attrfield<1:0>; result.transient = TRUE; elsif attrfield<3:0> == '0100' then // Non-cacheable (no allocate) result.attrs = MemAttr_NC; result.hints = MemHint_No; result.transient = FALSE; elsif attrfield<3:2> == '01' then // Write-back transient result.attrs = MemAttr_WB; result.hints = attrfield<1:0>; result.transient = TRUE; else // Write-through/Write-back non-transient result.attrs = attrfield<3:2>; result.hints = attrfield<1:0>; result.transient = FALSE; return result;

Library pseudocode for shared/translation/attrs/MemAttrDefaults

// MemAttrDefaults() // ================= // Supply default values for memory attributes, including overriding the shareability attributes // for Device and Non-cacheable memory types. MemoryAttributes MemAttrDefaults(MemoryAttributes memattrs) if memattrs.type == MemType_Device then memattrs.inner = MemAttrHints UNKNOWN; memattrs.outer = MemAttrHints UNKNOWN; memattrs.shareable = TRUE; memattrs.outershareable = TRUE; else memattrs.device = DeviceType UNKNOWN; if memattrs.inner.attrs == MemAttr_NC && memattrs.outer.attrs == MemAttr_NC then memattrs.shareable = TRUE; memattrs.outershareable = TRUE; return memattrs;

Library pseudocode for shared/translation/attrs/S1CacheDisabled

// S1CacheDisabled() // ================= boolean S1CacheDisabled(AccType acctype) if ELUsingAArch32(S1TranslationRegime()) then if PSTATE.EL == EL2 then enable = if acctype == AccType_IFETCH then HSCTLR.I else HSCTLR.C; else enable = if acctype == AccType_IFETCH then SCTLR.I else SCTLR.C; else enable = if acctype == AccType_IFETCH then SCTLR[].I else SCTLR[].C; return enable == '0';

Library pseudocode for shared/translation/attrs/S2AttrDecode

// S2AttrDecode() // ============== // Converts the Stage 2 attribute fields into orthogonal attributes and hints MemoryAttributes S2AttrDecode(bits(2) SH, bits(4) attr, AccType acctype) MemoryAttributes memattrs; apply_force_writeback = HaveStage2MemAttrControl() && HCR_EL2.FWB == '1'; // Device memory if (apply_force_writeback && attr<2> == '0') || attr<3:2> == '00' then memattrs.type = MemType_Device; case attr<1:0> of when '00' memattrs.device = DeviceType_nGnRnE; when '01' memattrs.device = DeviceType_nGnRE; when '10' memattrs.device = DeviceType_nGRE; when '11' memattrs.device = DeviceType_GRE; // Normal memory elsif attr<1:0> != '00' then memattrs.type = MemType_Normal; if apply_force_writeback then memattrs.outer = S2ConvertAttrsHints(attr<1:0>, acctype); else memattrs.outer = S2ConvertAttrsHints(attr<3:2>, acctype); memattrs.inner = S2ConvertAttrsHints(attr<1:0>, acctype); memattrs.shareable = SH<1> == '1'; memattrs.outershareable = SH == '10'; else memattrs = MemoryAttributes UNKNOWN; // Reserved return MemAttrDefaults(memattrs);

Library pseudocode for shared/translation/attrs/S2CacheDisabled

// S2CacheDisabled() // ================= boolean S2CacheDisabled(AccType acctype) if ELUsingAArch32(EL2) then disable = if acctype == AccType_IFETCH then HCR2.ID else HCR2.CD; else disable = if acctype == AccType_IFETCH then HCR_EL2.ID else HCR_EL2.CD; return disable == '1';

Library pseudocode for shared/translation/attrs/S2ConvertAttrsHints

// S2ConvertAttrsHints() // ===================== // Converts the attribute fields for Normal memory as used in stage 2 // descriptors to orthogonal attributes and hints MemAttrHints S2ConvertAttrsHints(bits(2) attr, AccType acctype) assert !IsZero(attr); MemAttrHints result; if S2CacheDisabled(acctype) then // Force Non-cacheable result.attrs = MemAttr_NC; result.hints = MemHint_No; else case attr of when '01' // Non-cacheable (no allocate) result.attrs = MemAttr_NC; result.hints = MemHint_No; when '10' // Write-through result.attrs = MemAttr_WT; result.hints = MemHint_RWA; when '11' // Write-back result.attrs = MemAttr_WB; result.hints = MemHint_RWA; result.transient = FALSE; return result;

Library pseudocode for shared/translation/attrs/ShortConvertAttrsHints

// ShortConvertAttrsHints() // ======================== // Converts the short attribute fields for Normal memory as used in the TTBR and // TEX fields to orthogonal attributes and hints MemAttrHints ShortConvertAttrsHints(bits(2) RGN, AccType acctype, boolean secondstage) MemAttrHints result; if (!secondstage && S1CacheDisabled(acctype)) || (secondstage && S2CacheDisabled(acctype)) then // Force Non-cacheable result.attrs = MemAttr_NC; result.hints = MemHint_No; else case RGN of when '00' // Non-cacheable (no allocate) result.attrs = MemAttr_NC; result.hints = MemHint_No; when '01' // Write-back, Read and Write allocate result.attrs = MemAttr_WB; result.hints = MemHint_RWA; when '10' // Write-through, Read allocate result.attrs = MemAttr_WT; result.hints = MemHint_RA; when '11' // Write-back, Read allocate result.attrs = MemAttr_WB; result.hints = MemHint_RA; result.transient = FALSE; return result;
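Ignoring the cache-disabled override, the RGN decode is a pure four-entry mapping, so it can be expressed as a lookup table; a C sketch with this example's own encodings:

enum { ATTR_NC, ATTR_WT, ATTR_WB };     /* example encodings */
enum { HINT_NO, HINT_RA, HINT_RWA };

struct attr_hints { int attrs; int hints; };

static const struct attr_hints rgn_decode[4] = {
    { ATTR_NC, HINT_NO  },   /* '00': Non-cacheable, no allocate          */
    { ATTR_WB, HINT_RWA },   /* '01': Write-back, read and write allocate */
    { ATTR_WT, HINT_RA  },   /* '10': Write-through, read allocate        */
    { ATTR_WB, HINT_RA  },   /* '11': Write-back, read allocate           */
};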

Library pseudocode for shared/translation/attrs/WalkAttrDecode

// WalkAttrDecode() // ================ MemoryAttributes WalkAttrDecode(bits(2) SH, bits(2) ORGN, bits(2) IRGN, boolean secondstage) MemoryAttributes memattrs; AccType acctype = AccType_NORMAL; memattrs.type = MemType_Normal; memattrs.inner = ShortConvertAttrsHints(IRGN, acctype, secondstage); memattrs.outer = ShortConvertAttrsHints(ORGN, acctype, secondstage); memattrs.shareable = SH<1> == '1'; memattrs.outershareable = SH == '10'; memattrs.tagged = FALSE; return MemAttrDefaults(memattrs);

Library pseudocode for shared/translation/translation/HasS2Translation

// HasS2Translation() // ================== // Returns TRUE if stage 2 translation is present for the current translation regime boolean HasS2Translation() return (EL2Enabled() && !IsInHost() && PSTATE.EL IN {EL0,EL1});

Library pseudocode for shared/translation/translation/Have16bitVMID

// Have16bitVMID() // =============== // Returns TRUE if EL2 and support for a 16-bit VMID are implemented. boolean Have16bitVMID() return HaveEL(EL2) && boolean IMPLEMENTATION_DEFINED;

Library pseudocode for shared/translation/translation/PAMax

// PAMax() // ======= // Returns the IMPLEMENTATION DEFINED upper limit on the physical address // size for this processor, as log2(). integer PAMax() return integer IMPLEMENTATION_DEFINED "Maximum Physical Address Size";

Library pseudocode for shared/translation/translation/S1TranslationRegime

// S1TranslationRegime() // ===================== // Stage 1 translation regime for the given Exception level bits(2) S1TranslationRegime(bits(2) el) if el != EL0 then return el; elsif HaveEL(EL3) && ELUsingAArch32(EL3) && SCR.NS == '0' then return EL3; elsif HaveVirtHostExt() && ELIsInHost(el) then return EL2; else return EL1; // S1TranslationRegime() // ===================== // Returns the Exception level controlling the current Stage 1 translation regime. For the most // part this is unused in code because the system register accessors (SCTLR[], etc.) implicitly // return the correct value. bits(2) S1TranslationRegime() return S1TranslationRegime(PSTATE.EL);

Library pseudocode for shared/translation/translation/VAMax

// VAMax() // ======= // Returns the IMPLEMENTATION DEFINED upper limit on the virtual address // size for this processor, as log2(). integer VAMax() return integer IMPLEMENTATION_DEFINED "Maximum Virtual Address Size";


Internal version only: isa v00_88, pseudocode v85-xml-00bet9_rc1_1 ; Build timestamp: 2018-12-12T12:33

Copyright © 2010-2018 Arm Limited or its affiliates. All rights reserved. This document is Non-Confidential.
