
Shared Pseudocode Functions

This page displays common pseudocode functions shared by many pages.

Pseudocodes

Library pseudocode for aarch32/at/AArch32.AT

// AArch32.AT() // ============ // Perform address translation as per AT instructions. AArch32.AT(bits(32) vaddress, TranslationStage stage_in, bits(2) el, ATAccess ataccess) TranslationStage stage = stage_in; SecurityState ss; Regime regime; boolean eae; // ATS1Hx instructions if el == EL2 then regime = Regime_EL2; eae = TRUE; ss = SS_NonSecure; // ATS1Cxx instructions elsif stage == TranslationStage_1 || (stage == TranslationStage_12 && !HaveEL(EL2)) then stage = TranslationStage_1; ss = SecurityStateAtEL(PSTATE.EL); regime = if ss == SS_Secure && ELUsingAArch32(EL3) then Regime_EL30 else Regime_EL10; eae = TTBCR.EAE == '1'; // ATS12NSOxx instructions else regime = Regime_EL10; eae = if HaveAArch32EL(EL3) then TTBCR_NS.EAE == '1' else TTBCR.EAE == '1'; ss = SS_NonSecure; AddressDescriptor addrdesc; SDFType sdftype; aligned = TRUE; ispriv = el != EL0; supersection = '0'; iswrite = ataccess IN {ATAccess_WritePAN, ATAccess_Write}; acctype = if ataccess IN {ATAccess_Read, ATAccess_Write} then AccType_AT else AccType_ATPAN; // Prepare fault fields in case a fault is detected fault = NoFault(); fault.acctype = acctype; fault.write = iswrite; if eae then (fault, addrdesc) = AArch32.S1TranslateLD(fault, regime, ss, vaddress, acctype, aligned, iswrite, ispriv); else (fault, addrdesc, sdftype) = AArch32.S1TranslateSD(fault, regime, ss, vaddress, acctype, aligned, iswrite, ispriv); supersection = if sdftype == SDFType_Supersection then '1' else '0'; // ATS12NSOxx instructions if stage == TranslationStage_12 && fault.statuscode == Fault_None then s2fs1walk = FALSE; (fault, addrdesc) = AArch32.S2Translate(fault, addrdesc, ss, s2fs1walk, acctype, aligned, iswrite, ispriv); if fault.statuscode != Fault_None then // Take exception when External abort occurs on translation table walk if (IsExternalAbort(fault) || (stage == TranslationStage_1 && el != EL2 && PSTATE.EL == EL1 && EL2Enabled() && fault.s2fs1walk)) then PAR = bits(64) UNKNOWN; AArch32.Abort(vaddress, fault); addrdesc.fault = fault; if (eae || (stage == TranslationStage_12 && (HCR.VM == '1' || HCR.DC == '1')) || (stage == TranslationStage_1 && el != EL2 && PSTATE.EL == EL2)) then AArch32.EncodePARLD(addrdesc, ss); else AArch32.EncodePARSD(addrdesc, supersection, ss); return;

Library pseudocode for aarch32/at/AArch32.EncodePARLD

// AArch32.EncodePARLD() // ===================== // Returns 64-bit format PAR on address translation instruction. AArch32.EncodePARLD(AddressDescriptor addrdesc, SecurityState ss) if !IsFault(addrdesc) then bit ns; if ss == SS_NonSecure then ns = bit UNKNOWN; elsif addrdesc.paddress.paspace == PAS_Secure then ns = '0'; else ns = '1'; PAR.F = '0'; PAR.SH = ReportedPARShareability(PAREncodeShareability(addrdesc.memattrs)); PAR.NS = ns; PAR<10> = bit IMPLEMENTATION_DEFINED "Non-Faulting PAR"; // IMPDEF PAR.LPAE = '1'; PAR.PA = addrdesc.paddress.address<39:12>; PAR.ATTR = ReportedPARAttrs(EncodePARAttrs(addrdesc.memattrs)); else PAR.F = '1'; PAR.FST = AArch32.PARFaultStatusLD(addrdesc.fault); PAR.S2WLK = if addrdesc.fault.s2fs1walk then '1' else '0'; PAR.FSTAGE = if addrdesc.fault.secondstage then '1' else '0'; PAR.LPAE = '1'; PAR<63:48> = bits(16) IMPLEMENTATION_DEFINED "Faulting PAR"; // IMPDEF return;

Library pseudocode for aarch32/at/AArch32.EncodePARSD

// AArch32.EncodePARSD() // ===================== // Returns 32-bit format PAR on address translation instruction. AArch32.EncodePARSD(AddressDescriptor addrdesc_in, bit supersection, SecurityState ss) AddressDescriptor addrdesc = addrdesc_in; if !IsFault(addrdesc) then if (addrdesc.memattrs.memtype == MemType_Device || (addrdesc.memattrs.inner.attrs == MemAttr_NC && addrdesc.memattrs.outer.attrs == MemAttr_NC)) then addrdesc.memattrs.shareability = Shareability_OSH; bit ns; if ss == SS_NonSecure then ns = bit UNKNOWN; elsif addrdesc.paddress.paspace == PAS_Secure then ns = '0'; else ns = '1'; bits(2) sh = if addrdesc.memattrs.shareability != Shareability_NSH then '01' else '00'; PAR.F = '0'; PAR.SS = supersection; PAR.Outer = AArch32.ReportedOuterAttrs(AArch32.PAROuterAttrs(addrdesc.memattrs)); PAR.Inner = AArch32.ReportedInnerAttrs(AArch32.PARInnerAttrs(addrdesc.memattrs)); PAR.SH = ReportedPARShareability(sh); PAR<8> = bit IMPLEMENTATION_DEFINED "Non-Faulting PAR"; // IMPDEF PAR.NS = ns; PAR.NOS = if addrdesc.memattrs.shareability == Shareability_OSH then '0' else '1'; PAR.LPAE = '0'; PAR.PA = addrdesc.paddress.address<39:12>; else PAR.F = '1'; PAR.FST = AArch32.PARFaultStatusSD(addrdesc.fault); PAR.LPAE = '0'; PAR<31:16> = bits(16) IMPLEMENTATION_DEFINED "Faulting PAR"; // IMPDEF return;

Library pseudocode for aarch32/at/AArch32.PARFaultStatusLD

// AArch32.PARFaultStatusLD() // ========================== // Fault status field decoding of 64-bit PAR bits(6) AArch32.PARFaultStatusLD(FaultRecord fault) bits(32) syndrome; if fault.statuscode == Fault_Domain then // Report Domain fault assert fault.level IN {1,2}; syndrome<1:0> = if fault.level == 1 then '01' else '10'; syndrome<5:2> = '1111'; else syndrome = AArch32.FaultStatusLD(TRUE, fault); return syndrome<5:0>;
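
As a worked illustration, a minimal C sketch of the Domain-fault encoding above (the helper name and standalone framing are hypothetical; the bit layout follows the pseudocode: syndrome<5:2> is '1111' and syndrome<1:0> encodes the lookup level):

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical helper mirroring the Domain-fault arm of
 * AArch32.PARFaultStatusLD(): syndrome<5:2> = '1111',
 * syndrome<1:0> = level (1 -> '01', 2 -> '10'). */
static uint8_t par_fst_ld_domain(int level)
{
    assert(level == 1 || level == 2);
    uint8_t fst = 0xFu << 2;             /* syndrome<5:2> = '1111' */
    fst |= (level == 1) ? 0x1u : 0x2u;   /* syndrome<1:0> */
    return fst & 0x3Fu;                  /* 6-bit PAR.FST field */
}

int main(void)
{
    printf("level 1 -> 0x%02X\n", par_fst_ld_domain(1));   /* 0x3D */
    printf("level 2 -> 0x%02X\n", par_fst_ld_domain(2));   /* 0x3E */
    return 0;
}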

Library pseudocode for aarch32/at/AArch32.PARFaultStatusSD

// AArch32.PARFaultStatusSD() // ========================== // Fault status field decoding of 32-bit PAR. bits(6) AArch32.PARFaultStatusSD(FaultRecord fault) bits(32) syndrome; syndrome = AArch32.FaultStatusSD(TRUE, fault); return syndrome<12,10,3:0>;
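
The return statement gathers the non-adjacent syndrome bits <12>, <10> and <3:0> into a 6-bit field. A minimal C sketch of that bit gather (the function name is illustrative):

#include <stdint.h>

/* Gather syndrome<12>:syndrome<10>:syndrome<3:0> into a 6-bit value,
 * as in the return statement of AArch32.PARFaultStatusSD(). */
static uint8_t par_fst_sd(uint32_t syndrome)
{
    return (uint8_t)((((syndrome >> 12) & 1u) << 5) |
                     (((syndrome >> 10) & 1u) << 4) |
                     (syndrome & 0xFu));
}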

Library pseudocode for aarch32/at/AArch32.PARInnerAttrs

// AArch32.PARInnerAttrs() // ======================= // Convert orthogonal attributes and hints to 32-bit PAR Inner field. bits(3) AArch32.PARInnerAttrs(MemoryAttributes memattrs) bits(3) result; if memattrs.memtype == MemType_Device then if memattrs.device == DeviceType_nGnRnE then result = '001'; // Non-cacheable elsif memattrs.device == DeviceType_nGnRE then result = '011'; // Non-cacheable else MemAttrHints inner = memattrs.inner; if inner.attrs == MemAttr_NC then result = '000'; // Non-cacheable elsif inner.attrs == MemAttr_WB && inner.hints<0> == '1' then result = '101'; // Write-Back, Write-Allocate elsif inner.attrs == MemAttr_WT then result = '110'; // Write-Through elsif inner.attrs == MemAttr_WB && inner.hints<0> == '0' then result = '111'; // Write-Back, no Write-Allocate return result;
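
For reference, the encoding table implied by the function above, transcribed as C constants (the enum and its member names are illustrative, not from the specification):

/* 32-bit PAR Inner field encodings produced by AArch32.PARInnerAttrs(). */
enum par_inner_enc {
    PAR_INNER_NC         = 0x0, /* 0b000 Normal, Non-cacheable         */
    PAR_INNER_DEV_nGnRnE = 0x1, /* 0b001 Device-nGnRnE (Non-cacheable) */
    PAR_INNER_DEV_nGnRE  = 0x3, /* 0b011 Device-nGnRE (Non-cacheable)  */
    PAR_INNER_WB_WA      = 0x5, /* 0b101 Write-Back, Write-Allocate    */
    PAR_INNER_WT         = 0x6, /* 0b110 Write-Through                 */
    PAR_INNER_WB_nWA     = 0x7  /* 0b111 Write-Back, no Write-Allocate */
};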

Library pseudocode for aarch32/at/AArch32.PAROuterAttrs

// AArch32.PAROuterAttrs() // ======================= // Convert orthogonal attributes and hints to 32-bit PAR Outer field. bits(2) AArch32.PAROuterAttrs(MemoryAttributes memattrs) bits(2) result; if memattrs.memtype == MemType_Device then result = bits(2) UNKNOWN; else MemAttrHints outer = memattrs.outer; if outer.attrs == MemAttr_NC then result = '00'; // Non-cacheable elsif outer.attrs == MemAttr_WB && outer.hints<0> == '1' then result = '01'; // Write-Back, Write-Allocate elsif outer.attrs == MemAttr_WT && outer.hints<0> == '0' then result = '10'; // Write-Through, no Write-Allocate elsif outer.attrs == MemAttr_WB && outer.hints<0> == '0' then result = '11'; // Write-Back, no Write-Allocate return result;

Library pseudocode for aarch32/dc/AArch32.DC

// AArch32.DC() // ============ // Perform Data Cache Operation. AArch32.DC(bits(32) regval, CacheOp cacheop, CacheOpScope opscope) AccType acctype = AccType_DC; CacheRecord cache; cache.acctype = acctype; cache.cacheop = cacheop; cache.opscope = opscope; cache.cachetype = CacheType_Data; cache.security = SecurityStateAtEL(PSTATE.EL); if opscope == CacheOpScope_SetWay then cache.shareability = Shareability_NSH; (cache.set, cache.way, cache.level) = DecodeSW(ZeroExtend(regval), CacheType_Data); if (cacheop == CacheOp_Invalidate && PSTATE.EL == EL1 && EL2Enabled() && ((!ELUsingAArch32(EL2) && HCR_EL2.SWIO == '1') || (ELUsingAArch32(EL2) && HCR.SWIO == '1') || (!ELUsingAArch32(EL2) && HCR_EL2.<DC,VM> != '00') || (ELUsingAArch32(EL2) && HCR.<DC,VM> != '00'))) then cache.cacheop = CacheOp_CleanInvalidate; CACHE_OP(cache); return; if EL2Enabled() then if PSTATE.EL IN {EL0, EL1} then cache.is_vmid_valid = TRUE; cache.vmid = VMID[]; else cache.is_vmid_valid = FALSE; else cache.is_vmid_valid = FALSE; if PSTATE.EL == EL0 then cache.is_asid_valid = TRUE; cache.asid = ASID[]; else cache.is_asid_valid = FALSE; need_translate = DCInstNeedsTranslation(opscope); iswrite = cacheop == CacheOp_Invalidate; vaddress = regval; size = 0; // by default no watchpoint address if iswrite then size = integer IMPLEMENTATION_DEFINED "Data Cache Invalidate Watchpoint Size"; assert size >= 4*(2^(UInt(CTR_EL0.DminLine))) && size <= 2048; assert UInt(size<32:0> AND (size-1)<32:0>) == 0; // size is power of 2 vaddress = Align(regval, size); cache.translated = need_translate; cache.vaddress = ZeroExtend(vaddress); if need_translate then wasaligned = TRUE; memaddrdesc = AArch32.TranslateAddress(vaddress, acctype, iswrite, wasaligned, size); if IsFault(memaddrdesc) then AArch32.Abort(regval, memaddrdesc.fault); memattrs = memaddrdesc.memattrs; cache.paddress = memaddrdesc.paddress; if opscope == CacheOpScope_PoC then cache.shareability = memattrs.shareability; else cache.shareability = Shareability_NSH; else cache.shareability = Shareability UNKNOWN; cache.paddress = FullAddress UNKNOWN; if (cacheop == CacheOp_Invalidate && PSTATE.EL == EL1 && EL2Enabled() && ((!ELUsingAArch32(EL2) && HCR_EL2.<DC,VM> != '00') || (ELUsingAArch32(EL2) && HCR.<DC,VM> != '00'))) then cache.cacheop = CacheOp_CleanInvalidate; CACHE_OP(cache); return;

Library pseudocode for aarch32/debug/VCRMatch/AArch32.VCRMatch

// AArch32.VCRMatch() // ================== boolean AArch32.VCRMatch(bits(32) vaddress) boolean match; if UsingAArch32() && ELUsingAArch32(EL1) && PSTATE.EL != EL2 then // Each bit position in this string corresponds to a bit in DBGVCR and an exception vector. match_word = Zeros(32); ss = CurrentSecurityState(); if vaddress<31:5> == ExcVectorBase()<31:5> then if HaveEL(EL3) && ss == SS_NonSecure then match_word<UInt(vaddress<4:2>) + 24> = '1'; // Non-secure vectors else match_word<UInt(vaddress<4:2>) + 0> = '1'; // Secure vectors (or no EL3) if (HaveEL(EL3) && ELUsingAArch32(EL3) && vaddress<31:5> == MVBAR<31:5> && ss == SS_Secure) then match_word<UInt(vaddress<4:2>) + 8> = '1'; // Monitor vectors // Mask out bits not corresponding to vectors. bits(32) mask; if !HaveEL(EL3) then mask = '00000000':'00000000':'00000000':'11011110'; // DBGVCR[31:8] are RES0 elsif !ELUsingAArch32(EL3) then mask = '11011110':'00000000':'00000000':'11011110'; // DBGVCR[15:8] are RES0 else mask = '11011110':'00000000':'11011100':'11011110'; match_word = match_word AND DBGVCR AND mask; match = !IsZero(match_word); // Check for UNPREDICTABLE case - match on Prefetch Abort and Data Abort vectors if !IsZero(match_word<28:27,12:11,4:3>) && DebugTarget() == PSTATE.EL then match = ConstrainUnpredictableBool(Unpredictable_VCMATCHDAPA); if !IsZero(vaddress<1:0>) && match then match = ConstrainUnpredictableBool(Unpredictable_VCMATCHHALF); else match = FALSE; return match;
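
The match_word indexing above maps a vector entry, vaddress<4:2>, into one of three banks of DBGVCR bits. A minimal C sketch of that indexing (names are illustrative; the bank offset is 0 for Secure vectors or no EL3, 8 for Monitor vectors, and 24 for Non-secure vectors):

#include <stdint.h>

/* Bit set in match_word by AArch32.VCRMatch() for a given vector
 * address and vector bank. */
static uint32_t vcr_match_bit(uint32_t vaddress, unsigned bank_offset)
{
    return 1u << (((vaddress >> 2) & 0x7u) + bank_offset);
}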

Library pseudocode for aarch32/debug/authentication/AArch32.SelfHostedSecurePrivilegedInvasiveDebugEnabled

// AArch32.SelfHostedSecurePrivilegedInvasiveDebugEnabled() // ======================================================== boolean AArch32.SelfHostedSecurePrivilegedInvasiveDebugEnabled() // The definition of this function is IMPLEMENTATION DEFINED. // In the recommended interface, AArch32.SelfHostedSecurePrivilegedInvasiveDebugEnabled returns // the state of the (DBGEN AND SPIDEN) signal. if !HaveEL(EL3) && NonSecureOnlyImplementation() then return FALSE; return DBGEN == HIGH && SPIDEN == HIGH;

Library pseudocode for aarch32/debug/breakpoint/AArch32.BreakpointMatch

// AArch32.BreakpointMatch() // ========================= // Breakpoint matching in an AArch32 translation regime. (boolean,boolean) AArch32.BreakpointMatch(integer n, bits(32) vaddress, integer size) assert ELUsingAArch32(S1TranslationRegime()); assert n < NumBreakpointsImplemented(); enabled = DBGBCR[n].E == '1'; ispriv = PSTATE.EL != EL0; linked = DBGBCR[n].BT IN {'0x01'}; isbreakpnt = TRUE; linked_to = FALSE; state_match = AArch32.StateMatch(DBGBCR[n].SSC, DBGBCR[n].HMC, DBGBCR[n].PMC, linked, DBGBCR[n].LBN, isbreakpnt, ispriv); (value_match, value_mismatch) = AArch32.BreakpointValueMatch(n, vaddress, linked_to); if size == 4 then // Check second halfword // If the breakpoint address and BAS of an Address breakpoint match the address of the // second halfword of an instruction, but not the address of the first halfword, it is // CONSTRAINED UNPREDICTABLE whether or not this breakpoint generates a Breakpoint debug // event. (match_i, mismatch_i) = AArch32.BreakpointValueMatch(n, vaddress + 2, linked_to); if !value_match && match_i then value_match = ConstrainUnpredictableBool(Unpredictable_BPMATCHHALF); if value_mismatch && !mismatch_i then value_mismatch = ConstrainUnpredictableBool(Unpredictable_BPMISMATCHHALF); if vaddress<1> == '1' && DBGBCR[n].BAS == '1111' then // The above notwithstanding, if DBGBCR[n].BAS == '1111', then it is CONSTRAINED // UNPREDICTABLE whether or not a Breakpoint debug event is generated for an instruction // at the address DBGBVR[n]+2. if value_match then value_match = ConstrainUnpredictableBool(Unpredictable_BPMATCHHALF); if !value_mismatch then value_mismatch = ConstrainUnpredictableBool(Unpredictable_BPMISMATCHHALF); match = value_match && state_match && enabled; mismatch = value_mismatch && state_match && enabled; return (match, mismatch);

Library pseudocode for aarch32/debug/breakpoint/AArch32.BreakpointValueMatch

// AArch32.BreakpointValueMatch() // ============================== // The first result is whether an Address Match or Context breakpoint is programmed on the // instruction at "address". The second result is whether an Address Mismatch breakpoint is // programmed on the instruction, that is, whether the instruction should be stepped. (boolean,boolean) AArch32.BreakpointValueMatch(integer n_in, bits(32) vaddress, boolean linked_to) // "n" is the identity of the breakpoint unit to match against. // "vaddress" is the current instruction address, ignored if linked_to is TRUE and for Context // matching breakpoints. // "linked_to" is TRUE if this is a call from StateMatch for linking. integer n = n_in; // If a non-existent breakpoint then it is CONSTRAINED UNPREDICTABLE whether this gives // no match or the breakpoint is mapped to another UNKNOWN implemented breakpoint. if n >= NumBreakpointsImplemented() then Constraint c; (c, n) = ConstrainUnpredictableInteger(0, NumBreakpointsImplemented() - 1, Unpredictable_BPNOTIMPL); assert c IN {Constraint_DISABLED, Constraint_UNKNOWN}; if c == Constraint_DISABLED then return (FALSE,FALSE); // If this breakpoint is not enabled, it cannot generate a match. (This could also happen on a // call from StateMatch for linking). if DBGBCR[n].E == '0' then return (FALSE,FALSE); context_aware = (n >= (NumBreakpointsImplemented() - NumContextAwareBreakpointsImplemented())); // If BT is set to a reserved type, behaves either as disabled or as a not-reserved type. dbgtype = DBGBCR[n].BT; Constraint c; if ((dbgtype IN {'011x','11xx'} && !HaveVirtHostExt() && !HaveV82Debug()) || // Context matching (dbgtype IN {'010x'} && HaltOnBreakpointOrWatchpoint()) || // Address mismatch (!(dbgtype IN {'0x0x'}) && !context_aware) || // Context matching (dbgtype IN {'1xxx'} && !HaveEL(EL2))) then // EL2 extension (c, dbgtype) = ConstrainUnpredictableBits(Unpredictable_RESBPTYPE); assert c IN {Constraint_DISABLED, Constraint_UNKNOWN}; if c == Constraint_DISABLED then return (FALSE,FALSE); // Otherwise the value returned by ConstrainUnpredictableBits must be a not-reserved value // Determine what to compare against. match_addr = (dbgtype IN {'0x0x'}); mismatch = (dbgtype IN {'010x'}); match_vmid = (dbgtype IN {'10xx'}); match_cid1 = (dbgtype IN {'xx1x'}); match_cid2 = (dbgtype IN {'11xx'}); linked = (dbgtype IN {'xxx1'}); // If this is a call from StateMatch, return FALSE if the breakpoint is not programmed for a // VMID and/or context ID match, or if not context-aware. The above assertions mean that the // code can just test for match_addr == TRUE to confirm all these things. if linked_to && (!linked || match_addr) then return (FALSE,FALSE); // If called from BreakpointMatch return FALSE for Linked context ID and/or VMID matches. if !linked_to && linked && !match_addr then return (FALSE,FALSE); // Do the comparison.
boolean BVR_match; if match_addr then boolean byte_select_match; byte = UInt(vaddress<1:0>); assert byte IN {0,2}; // "vaddress" is halfword aligned byte_select_match = (DBGBCR[n].BAS<byte> == '1'); integer top = 31; BVR_match = (vaddress<top:2> == DBGBVR[n]<top:2>) && byte_select_match; elsif match_cid1 then BVR_match = (PSTATE.EL != EL2 && CONTEXTIDR == DBGBVR[n]<31:0>); boolean BXVR_match; if match_vmid then bits(16) vmid; bits(16) bvr_vmid; if ELUsingAArch32(EL2) then vmid = ZeroExtend(VTTBR.VMID, 16); bvr_vmid = ZeroExtend(DBGBXVR[n]<7:0>, 16); elsif !Have16bitVMID() || VTCR_EL2.VS == '0' then vmid = ZeroExtend(VTTBR_EL2.VMID<7:0>, 16); bvr_vmid = ZeroExtend(DBGBXVR[n]<7:0>, 16); else vmid = VTTBR_EL2.VMID; bvr_vmid = DBGBXVR[n]<15:0>; BXVR_match = (PSTATE.EL IN {EL0, EL1} && EL2Enabled() && vmid == bvr_vmid); elsif match_cid2 then BXVR_match = (PSTATE.EL != EL3 && (HaveVirtHostExt() || HaveV82Debug()) && EL2Enabled() && !ELUsingAArch32(EL2) && DBGBXVR[n]<31:0> == CONTEXTIDR_EL2<31:0>); bvr_match_valid = (match_addr || match_cid1); bxvr_match_valid = (match_vmid || match_cid2); match = (!bxvr_match_valid || BXVR_match) && (!bvr_match_valid || BVR_match); return (match && !mismatch, !match && mismatch);

Library pseudocode for aarch32/debug/breakpoint/AArch32.StateMatch

// AArch32.StateMatch() // ==================== // Determine whether a breakpoint or watchpoint is enabled in the current mode and state. boolean AArch32.StateMatch(bits(2) SSC_in, bit HMC_in, bits(2) PxC_in, boolean linked_in, bits(4) LBN, boolean isbreakpnt, boolean ispriv) // "SSC_in","HMC_in","PxC_in" are the control fields from the DBGBCR[n] or DBGWCR[n] register. // "linked_in" is TRUE if this is a linked breakpoint/watchpoint type. // "LBN" is the linked breakpoint number from the DBGBCR[n] or DBGWCR[n] register. // "isbreakpnt" is TRUE for breakpoints, FALSE for watchpoints. // "ispriv" is valid for watchpoints, and selects between privileged and unprivileged accesses. bits(2) SSC = SSC_in; bit HMC = HMC_in; bits(2) PxC = PxC_in; boolean linked = linked_in; // If parameters are set to a reserved type, behaves as either disabled or a defined type Constraint c; (c, SSC, HMC, PxC) = CheckValidStateMatch(SSC, HMC, PxC, isbreakpnt); if c == Constraint_DISABLED then return FALSE; // Otherwise the HMC,SSC,PxC values are either valid or the values returned by // CheckValidStateMatch are valid. PL2_match = HaveEL(EL2) && ((HMC == '1' && (SSC:PxC != '1000')) || SSC == '11'); PL1_match = PxC<0> == '1'; PL0_match = PxC<1> == '1'; SSU_match = isbreakpnt && HMC == '0' && PxC == '00' && SSC != '11'; boolean priv_match; if !ispriv && !isbreakpnt then priv_match = PL0_match; elsif SSU_match then priv_match = PSTATE.M IN {M32_User,M32_Svc,M32_System}; else case PSTATE.EL of when EL3 priv_match = PL1_match; // EL3 and EL1 are both PL1 when EL2 priv_match = PL2_match; when EL1 priv_match = PL1_match; when EL0 priv_match = PL0_match; boolean security_state_match; ss = CurrentSecurityState(); case SSC of when '00' security_state_match = TRUE; // Both when '01' security_state_match = ss == SS_NonSecure; // Non-secure only when '10' security_state_match = ss == SS_Secure; // Secure only when '11' security_state_match = (HMC == '1' || ss == SS_Secure); // HMC=1 -> Both, 0 -> Secure only integer lbn; if linked then // "LBN" must be an enabled context-aware breakpoint unit. If it is not context-aware then // it is CONSTRAINED UNPREDICTABLE whether this gives no match, or LBN is mapped to some // UNKNOWN breakpoint that is context-aware. lbn = UInt(LBN); first_ctx_cmp = NumBreakpointsImplemented() - NumContextAwareBreakpointsImplemented(); last_ctx_cmp = NumBreakpointsImplemented() - 1; if (lbn < first_ctx_cmp || lbn > last_ctx_cmp) then (c, lbn) = ConstrainUnpredictableInteger(first_ctx_cmp, last_ctx_cmp, Unpredictable_BPNOTCTXCMP); assert c IN {Constraint_DISABLED, Constraint_NONE, Constraint_UNKNOWN}; case c of when Constraint_DISABLED return FALSE; // Disabled when Constraint_NONE linked = FALSE; // No linking // Otherwise ConstrainUnpredictableInteger returned a context-aware breakpoint boolean linked_match; if linked then vaddress = bits(32) UNKNOWN; linked_to = TRUE; (linked_match,-) = AArch32.BreakpointValueMatch(lbn, vaddress, linked_to); return priv_match && security_state_match && (!linked || linked_match);

Library pseudocode for aarch32/debug/enables/AArch32.GenerateDebugExceptions

// AArch32.GenerateDebugExceptions() // ================================= boolean AArch32.GenerateDebugExceptions() ss = CurrentSecurityState(); return AArch32.GenerateDebugExceptionsFrom(PSTATE.EL, ss);

Library pseudocode for aarch32/debug/enables/AArch32.GenerateDebugExceptionsFrom

// AArch32.GenerateDebugExceptionsFrom() // ===================================== boolean AArch32.GenerateDebugExceptionsFrom(bits(2) from_el, SecurityState from_state) if !ELUsingAArch32(DebugTargetFrom(from_state)) then mask = '0'; // No PSTATE.D in AArch32 state return AArch64.GenerateDebugExceptionsFrom(from_el, from_state, mask); if DBGOSLSR.OSLK == '1' || DoubleLockStatus() || Halted() then return FALSE; boolean enabled; if HaveEL(EL3) && from_state == SS_Secure then assert from_el != EL2; // Secure EL2 always uses AArch64 if IsSecureEL2Enabled() then // Implies that EL3 and EL2 both using AArch64 enabled = MDCR_EL3.SDD == '0'; else spd = if ELUsingAArch32(EL3) then SDCR.SPD else MDCR_EL3.SPD32; if spd<1> == '1' then enabled = spd<0> == '1'; else // SPD == 0b01 is reserved, but behaves the same as 0b00. enabled = AArch32.SelfHostedSecurePrivilegedInvasiveDebugEnabled(); if from_el == EL0 then enabled = enabled || SDER.SUIDEN == '1'; else enabled = from_el != EL2; return enabled;

Library pseudocode for aarch32/debug/pmu/AArch32.CheckForPMUOverflow

// AArch32.CheckForPMUOverflow() // ============================= // Signal Performance Monitors overflow IRQ and CTI overflow events AArch32.CheckForPMUOverflow() if !ELUsingAArch32(EL1) then AArch64.CheckForPMUOverflow(); return; bit hpme; if HaveEL(EL2) then hpme = if !ELUsingAArch32(EL2) then MDCR_EL2.HPME else HDCR.HPME; boolean pmuirq; bit E; pmuirq = PMCR.E == '1' && PMINTENSET.C == '1' && PMOVSSET.C == '1'; integer counters = GetNumEventCounters(); if counters != 0 then for idx = 0 to counters - 1 E = if AArch32.PMUCounterIsHyp(idx) then hpme else PMCR.E; if E == '1' && PMINTENSET<idx> == '1' && PMOVSSET<idx> == '1' then pmuirq = TRUE; SetInterruptRequestLevel(InterruptID_PMUIRQ, if pmuirq then HIGH else LOW); CTI_SetEventLevel(CrossTriggerIn_PMUOverflow, if pmuirq then HIGH else LOW); // The request remains set until the condition is cleared. (For example, an interrupt handler // or cross-triggered event handler clears the overflow status flag by writing to PMOVSCLR.)
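
The per-counter interrupt condition above reduces to an AND of three bits. A minimal C sketch, assuming the per-counter effective enable (PMCR.E or HDCR/MDCR_EL2.HPME, as selected by AArch32.PMUCounterIsHyp) has already been folded into enable_bits; all names are illustrative:

#include <stdbool.h>
#include <stdint.h>

/* PMUIRQ is pending if any counter has its effective enable bit,
 * PMINTENSET bit and PMOVSSET bit all set (the cycle counter is
 * handled the same way via its own C bits). */
static bool pmu_irq_pending(uint32_t enable_bits, uint32_t pmintenset,
                            uint32_t pmovsset, int ncounters)
{
    for (int i = 0; i < ncounters; i++)
        if (((enable_bits >> i) & 1u) && ((pmintenset >> i) & 1u) &&
            ((pmovsset >> i) & 1u))
            return true;
    return false;
}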

Library pseudocode for aarch32/debug/pmu/AArch32.ClearEventCounters

// AArch32.ClearEventCounters() // ============================ // Zero all the event counters. AArch32.ClearEventCounters() if HaveAArch64() then // Force the counter to be cleared as a 64-bit counter. AArch64.ClearEventCounters(); return; integer counters = AArch32.GetNumEventCountersAccessible(); if counters != 0 then for idx = 0 to counters - 1 PMEVCNTR[idx] = Zeros();

Library pseudocode for aarch32/debug/pmu/AArch32.CountPMUEvents

// AArch32.CountPMUEvents() // ======================== // Return TRUE if counter "idx" should count its event. // For the cycle counter, idx == CYCLE_COUNTER_ID. boolean AArch32.CountPMUEvents(integer idx) assert idx == CYCLE_COUNTER_ID || idx < GetNumEventCounters(); if !ELUsingAArch32(EL1) then return AArch64.CountPMUEvents(idx); boolean debug; boolean enabled; boolean prohibited; boolean filtered; boolean frozen; boolean resvd_for_el2; bit E; bit spme; bits(32) ovflws; // Event counting is disabled in Debug state debug = Halted(); // Software can reserve some counters for EL2 resvd_for_el2 = AArch32.PMUCounterIsHyp(idx); ss = CurrentSecurityState(); // Main enable controls if idx == CYCLE_COUNTER_ID then enabled = PMCR.E == '1' && PMCNTENSET.C == '1'; else if resvd_for_el2 then E = if ELUsingAArch32(EL2) then HDCR.HPME else MDCR_EL2.HPME; else E = PMCR.E; enabled = E == '1' && PMCNTENSET<idx> == '1'; // Event counting is allowed unless it is prohibited by any rule below prohibited = FALSE; // Event counting in Secure state is prohibited if all of: // * EL3 is implemented // * One of the following is true: // - EL3 is using AArch64, MDCR_EL3.SPME == 0, and either: // - FEAT_PMUv3p7 is not implemented // - MDCR_EL3.MPMX == 0 // - EL3 is using AArch32 and SDCR.SPME == 0 // * Not executing at EL0, or SDER.SUNIDEN == 0 if HaveEL(EL3) && ss == SS_Secure then spme = if ELUsingAArch32(EL3) then SDCR.SPME else MDCR_EL3.SPME; if !ELUsingAArch32(EL3) && HavePMUv3p7() then prohibited = spme == '0' && MDCR_EL3.MPMX == '0'; else prohibited = spme == '0'; if prohibited && PSTATE.EL == EL0 then prohibited = SDER.SUNIDEN == '0'; // Event counting at EL2 is prohibited if all of: // * The HPMD Extension is implemented // * PMNx is not reserved for EL2 // * HDCR.HPMD == 1 if !prohibited && PSTATE.EL == EL2 && HaveHPMDExt() && !resvd_for_el2 then prohibited = HDCR.HPMD == '1'; // The IMPLEMENTATION DEFINED authentication interface might override software if prohibited && !HaveNoSecurePMUDisableOverride() then prohibited = !ExternalSecureNoninvasiveDebugEnabled(); // Event counting might be frozen frozen = FALSE; // If FEAT_PMUv3p7 is implemented, event counting can be frozen if HavePMUv3p7() then bits(5) hpmn; if HaveEL(EL2) then hpmn = if !ELUsingAArch32(EL2) then MDCR_EL2.HPMN else HDCR.HPMN; ovflws = ZeroExtend(PMOVSSET<GetNumEventCounters()-1:0>); bit FZ; if resvd_for_el2 then FZ = if ELUsingAArch32(EL2) then HDCR.HPMFZO else MDCR_EL2.HPMFZO; ovflws<UInt(hpmn)-1:0> = Zeros(); else FZ = PMCR.FZO; if HaveEL(EL2) && UInt(hpmn) < GetNumEventCounters() then ovflws<GetNumEventCounters()-1:UInt(hpmn)> = Zeros(); frozen = (FZ == '1') && !IsZero(ovflws); // PMCR.DP disables the cycle counter when event counting is prohibited if (prohibited || frozen) && idx == CYCLE_COUNTER_ID then enabled = enabled && (PMCR.DP == '0'); // Otherwise whether event counting is prohibited does not affect the cycle counter prohibited = FALSE; frozen = FALSE; // If FEAT_PMUv3p5 is implemented, cycle counting can be prohibited. // This is not overridden by PMCR.DP.
if HavePMUv3p5() && idx == CYCLE_COUNTER_ID then if HaveEL(EL3) && ss == SS_Secure then sccd = if ELUsingAArch32(EL3) then SDCR.SCCD else MDCR_EL3.SCCD; if sccd == '1' then prohibited = TRUE; if PSTATE.EL == EL2 && HDCR.HCCD == '1' then prohibited = TRUE; // Event counting can be filtered by the {P, U, NSK, NSU, NSH} bits filter = if idx == CYCLE_COUNTER_ID then PMCCFILTR else PMEVTYPER[idx]; P = filter<31>; U = filter<30>; NSK = if HaveEL(EL3) then filter<29> else '0'; NSU = if HaveEL(EL3) then filter<28> else '0'; NSH = if HaveEL(EL2) then filter<27> else '0'; ss = CurrentSecurityState(); case PSTATE.EL of when EL0 filtered = if ss == SS_Secure then U == '1' else U != NSU; when EL1 filtered = if ss == SS_Secure then P == '1' else P != NSK; when EL2 filtered = NSH == '0'; when EL3 filtered = P == '1'; return !debug && enabled && !prohibited && !filtered && !frozen;

Library pseudocode for aarch32/debug/pmu/AArch32.GetNumEventCountersAccessible

// AArch32.GetNumEventCountersAccessible() // ======================================= // Return the number of event counters that can be accessed at the current Exception level. integer AArch32.GetNumEventCountersAccessible() integer n; integer total_counters = GetNumEventCounters(); // Software can reserve some counters for EL2 if PSTATE.EL IN {EL1, EL0} && EL2Enabled() then n = UInt(if !ELUsingAArch32(EL2) then MDCR_EL2.HPMN else HDCR.HPMN); if n > total_counters || (!HaveFeatHPMN0() && n == 0) then (-, n) = ConstrainUnpredictableInteger(0, total_counters, Unpredictable_PMUEVENTCOUNTER); else n = total_counters; return n;
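
A minimal C sketch of the HPMN partitioning rule above; names are illustrative, and the CONSTRAINED UNPREDICTABLE out-of-range case is simplified to one permitted outcome (treating all counters as accessible):

#include <stdbool.h>

/* Number of event counters accessible below EL2: HPMN when EL2 is
 * enabled and execution is at EL1/EL0, otherwise all of them. */
static int accessible_counters(int total, int hpmn, bool el2_enabled,
                               bool below_el2, bool have_hpmn0)
{
    if (!(el2_enabled && below_el2))
        return total;
    if (hpmn > total || (!have_hpmn0 && hpmn == 0))
        return total;   /* CONSTRAINED UNPREDICTABLE; one permitted choice */
    return hpmn;
}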

Library pseudocode for aarch32/debug/pmu/AArch32.IncrementEventCounter

// AArch32.IncrementEventCounter() // =============================== // Increment the specified event counter by the specified amount. AArch32.IncrementEventCounter(integer idx, integer increment) if HaveAArch64() then // Force the counter to be incremented as a 64-bit counter. AArch64.IncrementEventCounter(idx, increment); return; // In this model, event counters in an AArch32-only implementation are 32 bits and // the LP bits are RES0, even if FEAT_PMUv3p5 is implemented. integer old_value; integer new_value; integer ovflw; bit lp; old_value = UInt(PMEVCNTR[idx]); new_value = old_value + PMUCountValue(idx, increment); PMEVCNTR[idx] = new_value<31:0>; ovflw = 32; if old_value<64:ovflw> != new_value<64:ovflw> then PMOVSSET<idx> = '1'; PMOVSR<idx> = '1'; // Check for the CHAIN event from an even counter if idx<0> == '0' && idx + 1 < GetNumEventCounters() then PMUEvent(PMU_EVENT_CHAIN, 1, idx + 1);
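
The overflow test compares the old and new counts above bit 31. In C the same check falls out of 64-bit arithmetic; a minimal sketch with illustrative names:

#include <stdbool.h>
#include <stdint.h>

/* Increment a 32-bit event counter and report whether the unsigned
 * sum carried past bit 31, which is when PMOVSSET<idx> is set. */
static bool increment_event_counter(uint32_t *cntr, uint32_t increment)
{
    uint64_t old_value = *cntr;
    uint64_t new_value = old_value + increment;
    *cntr = (uint32_t)new_value;
    return (new_value >> 32) != 0;   /* old_value<63:32> is always zero */
}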

Library pseudocode for aarch32/debug/pmu/AArch32.PMUCounterIsHyp

// AArch32.PMUCounterIsHyp // ======================= // Returns TRUE if a counter is reserved for use by EL2, FALSE otherwise. boolean AArch32.PMUCounterIsHyp(integer n) boolean resvd_for_el2; // Software can reserve some event counters for EL2 if n != CYCLE_COUNTER_ID && HaveEL(EL2) then bits(5) hpmn = if !ELUsingAArch32(EL2) then MDCR_EL2.HPMN else HDCR.HPMN; resvd_for_el2 = n >= UInt(hpmn); if UInt(hpmn) > GetNumEventCounters() || (!HaveFeatHPMN0() && IsZero(hpmn)) then resvd_for_el2 = boolean UNKNOWN; else resvd_for_el2 = FALSE; return resvd_for_el2;

Library pseudocode for aarch32/debug/pmu/AArch32.PMUCycle

// AArch32.PMUCycle() // ================== // Called at the end of each cycle to increment event counters and // check for PMU overflow. In pseudocode, a cycle ends after the // execution of the operational pseudocode. AArch32.PMUCycle() if !HavePMUv3() then return; PMUEvent(PMU_EVENT_CPU_CYCLES); integer counters = GetNumEventCounters(); if counters != 0 then for idx = 0 to counters - 1 if AArch32.CountPMUEvents(idx) then accumulated = PMUEventAccumulator[idx]; AArch32.IncrementEventCounter(idx, accumulated); PMUEventAccumulator[idx] = 0; integer old_value; integer new_value; integer ovflw; if (AArch32.CountPMUEvents(CYCLE_COUNTER_ID) && (PMCR.LC == '1' || PMCR.D == '0' || HasElapsed64Cycles())) then old_value = UInt(PMCCNTR); new_value = old_value + 1; PMCCNTR = new_value<63:0>; ovflw = if PMCR.LC == '1' then 64 else 32; if old_value<64:ovflw> != new_value<64:ovflw> then PMOVSSET.C = '1'; PMOVSR.C = '1'; AArch32.CheckForPMUOverflow();
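
PMCR.LC selects whether the cycle counter overflow flag follows bit 32 or bit 64 of the 64-bit count. A minimal C sketch of just that check (names are illustrative):

#include <stdbool.h>
#include <stdint.h>

/* Advance PMCCNTR by one and report an overflow event: with PMCR.LC
 * set, the flag follows a wrap of the full 64-bit counter; otherwise
 * a carry out of the low 32 bits. */
static bool pmccntr_tick(uint64_t *pmccntr, bool lc)
{
    uint64_t old_value = (*pmccntr)++;
    uint64_t new_value = *pmccntr;
    if (lc)
        return new_value < old_value;               /* bit 64 changed */
    return (old_value >> 32) != (new_value >> 32);  /* bit 32 changed */
}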

Library pseudocode for aarch32/debug/pmu/AArch32.PMUSwIncrement

// AArch32.PMUSwIncrement() // ======================== // Generate PMU Events on a write to PMSWINC. AArch32.PMUSwIncrement(bits(32) sw_incr) integer counters = AArch32.GetNumEventCountersAccessible(); if counters != 0 then for idx = 0 to counters - 1 if sw_incr<idx> == '1' then PMUEvent(PMU_EVENT_SW_INCR, 1, idx);
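
A direct C rendering of the loop above; the callback stands in for PMUEvent(PMU_EVENT_SW_INCR, 1, idx) and is purely illustrative:

#include <stdint.h>

/* For each accessible counter whose bit is set in the PMSWINC write,
 * raise one SW_INCR event. */
static void pmu_sw_increment(uint32_t sw_incr, int accessible_counters,
                             void (*sw_incr_event)(int idx))
{
    for (int idx = 0; idx < accessible_counters; idx++)
        if ((sw_incr >> idx) & 1u)
            sw_incr_event(idx);
}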

Library pseudocode for aarch32/debug/takeexceptiondbg/AArch32.EnterHypModeInDebugState

// AArch32.EnterHypModeInDebugState() // ================================== // Take an exception in Debug state to Hyp mode. AArch32.EnterHypModeInDebugState(ExceptionRecord exception) SynchronizeContext(); assert HaveEL(EL2) && CurrentSecurityState() == SS_NonSecure && ELUsingAArch32(EL2); AArch32.ReportHypEntry(exception); AArch32.WriteMode(M32_Hyp); SPSR[] = bits(32) UNKNOWN; ELR_hyp = bits(32) UNKNOWN; // In Debug state, the PE always executes T32 instructions when in AArch32 state, and // PSTATE.{SS,A,I,F} are not observable so behave as UNKNOWN. PSTATE.T = '1'; // PSTATE.J is RES0 PSTATE.<SS,A,I,F> = bits(4) UNKNOWN; DLR = bits(32) UNKNOWN; DSPSR = bits(32) UNKNOWN; PSTATE.E = HSCTLR.EE; PSTATE.IL = '0'; PSTATE.IT = '00000000'; if HaveSSBSExt() then PSTATE.SSBS = bit UNKNOWN; EDSCR.ERR = '1'; UpdateEDSCRFields(); EndOfInstruction();

Library pseudocode for aarch32/debug/takeexceptiondbg/AArch32.EnterModeInDebugState

// AArch32.EnterModeInDebugState() // =============================== // Take an exception in Debug state to a mode other than Monitor and Hyp mode. AArch32.EnterModeInDebugState(bits(5) target_mode) SynchronizeContext(); assert ELUsingAArch32(EL1) && PSTATE.EL != EL2; if PSTATE.M == M32_Monitor then SCR.NS = '0'; AArch32.WriteMode(target_mode); SPSR[] = bits(32) UNKNOWN; R[14] = bits(32) UNKNOWN; // In Debug state, the PE always executes T32 instructions when in AArch32 state, and // PSTATE.{SS,A,I,F} are not observable so behave as UNKNOWN. PSTATE.T = '1'; // PSTATE.J is RES0 PSTATE.<SS,A,I,F> = bits(4) UNKNOWN; DLR = bits(32) UNKNOWN; DSPSR = bits(32) UNKNOWN; PSTATE.E = SCTLR.EE; PSTATE.IL = '0'; PSTATE.IT = '00000000'; if HavePANExt() && SCTLR.SPAN == '0' then PSTATE.PAN = '1'; if HaveSSBSExt() then PSTATE.SSBS = bit UNKNOWN; EDSCR.ERR = '1'; UpdateEDSCRFields(); // Update EDSCR processor state flags. EndOfInstruction();

Library pseudocode for aarch32/debug/takeexceptiondbg/AArch32.EnterMonitorModeInDebugState

// AArch32.EnterMonitorModeInDebugState() // ====================================== // Take an exception in Debug state to Monitor mode. AArch32.EnterMonitorModeInDebugState() SynchronizeContext(); assert HaveEL(EL3) && ELUsingAArch32(EL3); from_secure = CurrentSecurityState() == SS_Secure; if PSTATE.M == M32_Monitor then SCR.NS = '0'; AArch32.WriteMode(M32_Monitor); SPSR[] = bits(32) UNKNOWN; R[14] = bits(32) UNKNOWN; // In Debug state, the PE always executes T32 instructions when in AArch32 state, and // PSTATE.{SS,A,I,F} are not observable so behave as UNKNOWN. PSTATE.T = '1'; // PSTATE.J is RES0 PSTATE.<SS,A,I,F> = bits(4) UNKNOWN; PSTATE.E = SCTLR.EE; PSTATE.IL = '0'; PSTATE.IT = '00000000'; if HavePANExt() then if !from_secure then PSTATE.PAN = '0'; elsif SCTLR.SPAN == '0' then PSTATE.PAN = '1'; if HaveSSBSExt() then PSTATE.SSBS = bit UNKNOWN; DLR = bits(32) UNKNOWN; DSPSR = bits(32) UNKNOWN; EDSCR.ERR = '1'; UpdateEDSCRFields(); // Update EDSCR processor state flags. EndOfInstruction();

Library pseudocode for aarch32/debug/watchpoint/AArch32.WatchpointByteMatch

// AArch32.WatchpointByteMatch() // ============================= boolean AArch32.WatchpointByteMatch(integer n, bits(32) vaddress) integer top = 31; bottom = if DBGWVR[n]<2> == '1' then 2 else 3; // Word or doubleword byte_select_match = (DBGWCR[n].BAS<UInt(vaddress<bottom-1:0>)> != '0'); mask = UInt(DBGWCR[n].MASK); // If DBGWCR[n].MASK is non-zero value and DBGWCR[n].BAS is not set to '11111111', or // DBGWCR[n].BAS specifies a non-contiguous set of bytes behavior is CONSTRAINED // UNPREDICTABLE. if mask > 0 && !IsOnes(DBGWCR[n].BAS) then byte_select_match = ConstrainUnpredictableBool(Unpredictable_WPMASKANDBAS); else LSB = (DBGWCR[n].BAS AND NOT(DBGWCR[n].BAS - 1)); MSB = (DBGWCR[n].BAS + LSB); if !IsZero(MSB AND (MSB - 1)) then // Not contiguous byte_select_match = ConstrainUnpredictableBool(Unpredictable_WPBASCONTIGUOUS); bottom = 3; // For the whole doubleword // If the address mask is set to a reserved value, the behavior is CONSTRAINED UNPREDICTABLE. if mask > 0 && mask <= 2 then Constraint c; (c, mask) = ConstrainUnpredictableInteger(3, 31, Unpredictable_RESWPMASK); assert c IN {Constraint_DISABLED, Constraint_NONE, Constraint_UNKNOWN}; case c of when Constraint_DISABLED return FALSE; // Disabled when Constraint_NONE mask = 0; // No masking // Otherwise the value returned by ConstrainUnpredictableInteger is a not-reserved value boolean WVR_match; if mask > bottom then // If the DBGxVR<n>_EL1.RESS field bits are not a sign extension of the MSB // of DBGBVR<n>_EL1.VA, it is UNPREDICTABLE whether they appear to be // included in the match. if !IsOnes(DBGBVR_EL1[n]<63:top>) && !IsZero(DBGBVR_EL1[n]<63:top>) then if ConstrainUnpredictableBool(Unpredictable_DBGxVR_RESS) then top = 63; WVR_match = (vaddress<top:mask> == DBGWVR[n]<top:mask>); // If masked bits of DBGWVR_EL1[n] are not zero, the behavior is CONSTRAINED UNPREDICTABLE. if WVR_match && !IsZero(DBGWVR[n]<mask-1:bottom>) then WVR_match = ConstrainUnpredictableBool(Unpredictable_WPMASKEDBITS); else WVR_match = vaddress<top:bottom> == DBGWVR[n]<top:bottom>; return WVR_match && byte_select_match;
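
The LSB/MSB manipulation above is a standard contiguity test on the BAS byte mask: adding the isolated lowest set bit ripples a carry across a contiguous run, so the sum is a power of two exactly when the run has no holes. A minimal C sketch (assumes a non-zero BAS):

#include <stdbool.h>

/* BAS selects a contiguous run of bytes exactly when BAS + LSB has at
 * most one bit set, where LSB is the lowest set bit of BAS. */
static bool bas_is_contiguous(unsigned bas)
{
    unsigned lsb = bas & (0u - bas);   /* lowest set bit of BAS */
    unsigned msb = bas + lsb;          /* carry ripples across the run */
    return (msb & (msb - 1u)) == 0u;   /* power of two -> contiguous */
}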

Library pseudocode for aarch32/debug/watchpoint/AArch32.WatchpointMatch

// AArch32.WatchpointMatch() // ========================= // Watchpoint matching in an AArch32 translation regime. boolean AArch32.WatchpointMatch(integer n, bits(32) vaddress, integer size, boolean ispriv, AccType acctype, boolean iswrite) assert ELUsingAArch32(S1TranslationRegime()); assert n < NumWatchpointsImplemented(); // "ispriv" is: // * FALSE for all loads, stores, and atomic operations executed at EL0. // * FALSE if the access is unprivileged. // * TRUE for all other loads, stores, and atomic operations. enabled = DBGWCR[n].E == '1'; linked = DBGWCR[n].WT == '1'; isbreakpnt = FALSE; state_match = AArch32.StateMatch(DBGWCR[n].SSC, DBGWCR[n].HMC, DBGWCR[n].PAC, linked, DBGWCR[n].LBN, isbreakpnt, ispriv); ls_match = (DBGWCR[n].LSC<(if iswrite then 1 else 0)> == '1'); value_match = FALSE; for byte = 0 to size - 1 value_match = value_match || AArch32.WatchpointByteMatch(n, vaddress + byte); return value_match && state_match && ls_match && enabled;

Library pseudocode for aarch32/exceptions/aborts/AArch32.Abort

// AArch32.Abort() // =============== // Abort and Debug exception handling in an AArch32 translation regime. AArch32.Abort(bits(32) vaddress, FaultRecord fault) // Check if routed to AArch64 state route_to_aarch64 = PSTATE.EL == EL0 && !ELUsingAArch32(EL1); if !route_to_aarch64 && EL2Enabled() && !ELUsingAArch32(EL2) then route_to_aarch64 = (HCR_EL2.TGE == '1' || IsSecondStage(fault) || (HaveRASExt() && HCR_EL2.TEA == '1' && IsExternalAbort(fault)) || (IsDebugException(fault) && MDCR_EL2.TDE == '1')); if !route_to_aarch64 && HaveEL(EL3) && !ELUsingAArch32(EL3) then route_to_aarch64 = SCR_EL3.EA == '1' && IsExternalAbort(fault); if route_to_aarch64 then AArch64.Abort(ZeroExtend(vaddress), fault); elsif fault.acctype == AccType_IFETCH then AArch32.TakePrefetchAbortException(vaddress, fault); else AArch32.TakeDataAbortException(vaddress, fault);

Library pseudocode for aarch32/exceptions/aborts/AArch32.AbortSyndrome

// AArch32.AbortSyndrome() // ======================= // Creates an exception syndrome record for Abort exceptions taken to Hyp mode // from an AArch32 translation regime. ExceptionRecord AArch32.AbortSyndrome(Exception exceptype, FaultRecord fault, bits(32) vaddress) exception = ExceptionSyndrome(exceptype); d_side = exceptype == Exception_DataAbort; exception.syndrome = AArch32.FaultSyndrome(d_side, fault); exception.vaddress = ZeroExtend(vaddress); if IPAValid(fault) then exception.ipavalid = TRUE; exception.NS = if fault.ipaddress.paspace == PAS_NonSecure then '1' else '0'; exception.ipaddress = ZeroExtend(fault.ipaddress.address); else exception.ipavalid = FALSE; return exception;

Library pseudocode for aarch32/exceptions/aborts/AArch32.CheckPCAlignment

// AArch32.CheckPCAlignment() // ========================== AArch32.CheckPCAlignment() bits(32) pc = ThisInstrAddr(); if (CurrentInstrSet() == InstrSet_A32 && pc<1> == '1') || pc<0> == '1' then if AArch32.GeneralExceptionsToAArch64() then AArch64.PCAlignmentFault(); // Generate an Alignment fault Prefetch Abort exception vaddress = pc; acctype = AccType_IFETCH; iswrite = FALSE; secondstage = FALSE; AArch32.Abort(vaddress, AlignmentFault(acctype, iswrite, secondstage));

Library pseudocode for aarch32/exceptions/aborts/AArch32.ReportDataAbort

// AArch32.ReportDataAbort() // ========================= // Report syndrome information for aborts taken to modes other than Hyp mode. AArch32.ReportDataAbort(boolean route_to_monitor, FaultRecord fault, bits(32) vaddress) long_format = FALSE; if route_to_monitor && CurrentSecurityState() != SS_Secure then long_format = ((TTBCR_S.EAE == '1') || (IsExternalSyncAbort(fault) && ((PSTATE.EL == EL2 || TTBCR.EAE == '1') || (fault.secondstage && boolean IMPLEMENTATION_DEFINED "Stage 2 synchronous External abort reports using Long-descriptor format when TTBCR_S.EAE is 0b0")))); else long_format = TTBCR.EAE == '1'; d_side = TRUE; bits(32) syndrome; if long_format then syndrome = AArch32.FaultStatusLD(d_side, fault); else syndrome = AArch32.FaultStatusSD(d_side, fault); if fault.acctype == AccType_IC then bits(32) i_syndrome; if (!long_format && boolean IMPLEMENTATION_DEFINED "Report I-cache maintenance fault in IFSR") then i_syndrome = syndrome; syndrome<10,3:0> = EncodeSDFSC(Fault_ICacheMaint, 1); else i_syndrome = bits(32) UNKNOWN; if route_to_monitor then IFSR_S = i_syndrome; else IFSR = i_syndrome; if route_to_monitor then DFSR_S = syndrome; DFAR_S = vaddress; else DFSR = syndrome; DFAR = vaddress; return;

Library pseudocode for aarch32/exceptions/aborts/AArch32.ReportPrefetchAbort

// AArch32.ReportPrefetchAbort() // ============================= // Report syndrome information for aborts taken to modes other than Hyp mode. AArch32.ReportPrefetchAbort(boolean route_to_monitor, FaultRecord fault, bits(32) vaddress) // The encoding used in the IFSR can be Long-descriptor format or Short-descriptor format. // Normally, the current translation table format determines the format. For an abort from // Non-secure state to Monitor mode, the IFSR uses the Long-descriptor format if any of the // following applies: // * The Secure TTBCR.EAE is set to 1. // * It is taken from Hyp mode. // * It is taken from EL1 or EL0, and the Non-secure TTBCR.EAE is set to 1. long_format = FALSE; if route_to_monitor && CurrentSecurityState() != SS_Secure then long_format = TTBCR_S.EAE == '1' || PSTATE.EL == EL2 || TTBCR.EAE == '1'; else long_format = TTBCR.EAE == '1'; d_side = FALSE; bits(32) fsr; if long_format then fsr = AArch32.FaultStatusLD(d_side, fault); else fsr = AArch32.FaultStatusSD(d_side, fault); if route_to_monitor then IFSR_S = fsr; IFAR_S = vaddress; else IFSR = fsr; IFAR = vaddress; return;

Library pseudocode for aarch32/exceptions/aborts/AArch32.TakeDataAbortException

// AArch32.TakeDataAbortException() // ================================ AArch32.TakeDataAbortException(bits(32) vaddress, FaultRecord fault) route_to_monitor = HaveEL(EL3) && SCR.EA == '1' && IsExternalAbort(fault); route_to_hyp = (EL2Enabled() && PSTATE.EL IN {EL0, EL1} && (HCR.TGE == '1' || (HaveRASExt() && HCR2.TEA == '1' && IsExternalAbort(fault)) || (IsDebugException(fault) && HDCR.TDE == '1') || IsSecondStage(fault))); bits(32) preferred_exception_return = ThisInstrAddr(); vect_offset = 0x10; lr_offset = 8; if IsDebugException(fault) then DBGDSCRext.MOE = fault.debugmoe; if route_to_monitor then AArch32.ReportDataAbort(route_to_monitor, fault, vaddress); AArch32.EnterMonitorMode(preferred_exception_return, lr_offset, vect_offset); elsif PSTATE.EL == EL2 || route_to_hyp then exception = AArch32.AbortSyndrome(Exception_DataAbort, fault, vaddress); if PSTATE.EL == EL2 then AArch32.EnterHypMode(exception, preferred_exception_return, vect_offset); else AArch32.EnterHypMode(exception, preferred_exception_return, 0x14); else AArch32.ReportDataAbort(route_to_monitor, fault, vaddress); AArch32.EnterMode(M32_Abort, preferred_exception_return, lr_offset, vect_offset);

Library pseudocode for aarch32/exceptions/aborts/AArch32.TakePrefetchAbortException

// AArch32.TakePrefetchAbortException() // ==================================== AArch32.TakePrefetchAbortException(bits(32) vaddress, FaultRecord fault) route_to_monitor = HaveEL(EL3) && SCR.EA == '1' && IsExternalAbort(fault); route_to_hyp = (EL2Enabled() && PSTATE.EL IN {EL0, EL1} && (HCR.TGE == '1' || (HaveRASExt() && HCR2.TEA == '1' && IsExternalAbort(fault)) || (IsDebugException(fault) && HDCR.TDE == '1') || IsSecondStage(fault))); ExceptionRecord exception; bits(32) preferred_exception_return = ThisInstrAddr(); vect_offset = 0x0C; lr_offset = 4; if IsDebugException(fault) then DBGDSCRext.MOE = fault.debugmoe; if route_to_monitor then AArch32.ReportPrefetchAbort(route_to_monitor, fault, vaddress); AArch32.EnterMonitorMode(preferred_exception_return, lr_offset, vect_offset); elsif PSTATE.EL == EL2 || route_to_hyp then if fault.statuscode == Fault_Alignment then // PC Alignment fault exception = ExceptionSyndrome(Exception_PCAlignment); exception.vaddress = ThisInstrAddr(); else exception = AArch32.AbortSyndrome(Exception_InstructionAbort, fault, vaddress); if PSTATE.EL == EL2 then AArch32.EnterHypMode(exception, preferred_exception_return, vect_offset); else AArch32.EnterHypMode(exception, preferred_exception_return, 0x14); else AArch32.ReportPrefetchAbort(route_to_monitor, fault, vaddress); AArch32.EnterMode(M32_Abort, preferred_exception_return, lr_offset, vect_offset);

Library pseudocode for aarch32/exceptions/async/AArch32.TakePhysicalFIQException

// AArch32.TakePhysicalFIQException() // ================================== AArch32.TakePhysicalFIQException() // Check if routed to AArch64 state route_to_aarch64 = PSTATE.EL == EL0 && !ELUsingAArch32(EL1); if !route_to_aarch64 && EL2Enabled() && !ELUsingAArch32(EL2) then route_to_aarch64 = HCR_EL2.TGE == '1' || (HCR_EL2.FMO == '1' && !IsInHost()); if !route_to_aarch64 && HaveEL(EL3) && !ELUsingAArch32(EL3) then route_to_aarch64 = SCR_EL3.FIQ == '1'; if route_to_aarch64 then AArch64.TakePhysicalFIQException(); route_to_monitor = HaveEL(EL3) && SCR.FIQ == '1'; route_to_hyp = (PSTATE.EL IN {EL0, EL1} && EL2Enabled() && (HCR.TGE == '1' || HCR.FMO == '1')); bits(32) preferred_exception_return = ThisInstrAddr(); vect_offset = 0x1C; lr_offset = 4; if route_to_monitor then AArch32.EnterMonitorMode(preferred_exception_return, lr_offset, vect_offset); elsif PSTATE.EL == EL2 || route_to_hyp then exception = ExceptionSyndrome(Exception_FIQ); AArch32.EnterHypMode(exception, preferred_exception_return, vect_offset); else AArch32.EnterMode(M32_FIQ, preferred_exception_return, lr_offset, vect_offset);

Library pseudocode for aarch32/exceptions/async/AArch32.TakePhysicalIRQException

// AArch32.TakePhysicalIRQException() // ================================== // Take an enabled physical IRQ exception. AArch32.TakePhysicalIRQException() // Check if routed to AArch64 state route_to_aarch64 = PSTATE.EL == EL0 && !ELUsingAArch32(EL1); if !route_to_aarch64 && EL2Enabled() && !ELUsingAArch32(EL2) then route_to_aarch64 = HCR_EL2.TGE == '1' || (HCR_EL2.IMO == '1' && !IsInHost()); if !route_to_aarch64 && HaveEL(EL3) && !ELUsingAArch32(EL3) then route_to_aarch64 = SCR_EL3.IRQ == '1'; if route_to_aarch64 then AArch64.TakePhysicalIRQException(); route_to_monitor = HaveEL(EL3) && SCR.IRQ == '1'; route_to_hyp = (PSTATE.EL IN {EL0, EL1} && EL2Enabled() && (HCR.TGE == '1' || HCR.IMO == '1')); bits(32) preferred_exception_return = ThisInstrAddr(); vect_offset = 0x18; lr_offset = 4; if route_to_monitor then AArch32.EnterMonitorMode(preferred_exception_return, lr_offset, vect_offset); elsif PSTATE.EL == EL2 || route_to_hyp then exception = ExceptionSyndrome(Exception_IRQ); AArch32.EnterHypMode(exception, preferred_exception_return, vect_offset); else AArch32.EnterMode(M32_IRQ, preferred_exception_return, lr_offset, vect_offset);

Library pseudocode for aarch32/exceptions/async/AArch32.TakePhysicalSErrorException

// AArch32.TakePhysicalSErrorException() // ===================================== AArch32.TakePhysicalSErrorException(boolean parity, bit extflag, bits(2) pe_error_state, bits(25) full_syndrome) // Check if routed to AArch64 state route_to_aarch64 = PSTATE.EL == EL0 && !ELUsingAArch32(EL1); if !route_to_aarch64 && EL2Enabled() && !ELUsingAArch32(EL2) then route_to_aarch64 = (HCR_EL2.TGE == '1' || (!IsInHost() && HCR_EL2.AMO == '1')); if !route_to_aarch64 && HaveEL(EL3) && !ELUsingAArch32(EL3) then route_to_aarch64 = SCR_EL3.EA == '1'; if route_to_aarch64 then AArch64.TakePhysicalSErrorException(full_syndrome); route_to_monitor = HaveEL(EL3) && SCR.EA == '1'; route_to_hyp = (PSTATE.EL IN {EL0, EL1} && EL2Enabled() && (HCR.TGE == '1' || HCR.AMO == '1')); bits(32) preferred_exception_return = ThisInstrAddr(); vect_offset = 0x10; lr_offset = 8; bits(2) target_el; if route_to_monitor then target_el = EL3; elsif PSTATE.EL == EL2 || route_to_hyp then target_el = EL2; else target_el = EL1; if IsSErrorEdgeTriggered(target_el, full_syndrome) then ClearPendingPhysicalSError(); fault = AsyncExternalAbort(parity, pe_error_state, extflag); vaddress = bits(32) UNKNOWN; case target_el of when EL3 AArch32.ReportDataAbort(route_to_monitor, fault, vaddress); AArch32.EnterMonitorMode(preferred_exception_return, lr_offset, vect_offset); when EL2 exception = AArch32.AbortSyndrome(Exception_DataAbort, fault, vaddress); if PSTATE.EL == EL2 then AArch32.EnterHypMode(exception, preferred_exception_return, vect_offset); else AArch32.EnterHypMode(exception, preferred_exception_return, 0x14); when EL1 AArch32.ReportDataAbort(route_to_monitor, fault, vaddress); AArch32.EnterMode(M32_Abort, preferred_exception_return, lr_offset, vect_offset); otherwise Unreachable();

Library pseudocode for aarch32/exceptions/async/AArch32.TakeVirtualFIQException

// AArch32.TakeVirtualFIQException() // ================================= AArch32.TakeVirtualFIQException() assert PSTATE.EL IN {EL0, EL1} && EL2Enabled(); if ELUsingAArch32(EL2) then // Virtual IRQ enabled if TGE==0 and FMO==1 assert HCR.TGE == '0' && HCR.FMO == '1'; else assert HCR_EL2.TGE == '0' && HCR_EL2.FMO == '1'; // Check if routed to AArch64 state if PSTATE.EL == EL0 && !ELUsingAArch32(EL1) then AArch64.TakeVirtualFIQException(); bits(32) preferred_exception_return = ThisInstrAddr(); vect_offset = 0x1C; lr_offset = 4; AArch32.EnterMode(M32_FIQ, preferred_exception_return, lr_offset, vect_offset);

Library pseudocode for aarch32/exceptions/async/AArch32.TakeVirtualIRQException

// AArch32.TakeVirtualIRQException() // ================================= AArch32.TakeVirtualIRQException() assert PSTATE.EL IN {EL0, EL1} && EL2Enabled(); if ELUsingAArch32(EL2) then // Virtual IRQs enabled if TGE==0 and IMO==1 assert HCR.TGE == '0' && HCR.IMO == '1'; else assert HCR_EL2.TGE == '0' && HCR_EL2.IMO == '1'; // Check if routed to AArch64 state if PSTATE.EL == EL0 && !ELUsingAArch32(EL1) then AArch64.TakeVirtualIRQException(); bits(32) preferred_exception_return = ThisInstrAddr(); vect_offset = 0x18; lr_offset = 4; AArch32.EnterMode(M32_IRQ, preferred_exception_return, lr_offset, vect_offset);

Library pseudocode for aarch32/exceptions/async/AArch32.TakeVirtualSErrorException

// AArch32.TakeVirtualSErrorException() // ==================================== AArch32.TakeVirtualSErrorException(bit extflag, bits(2) pe_error_state, bits(25) full_syndrome) assert PSTATE.EL IN {EL0, EL1} && EL2Enabled(); if ELUsingAArch32(EL2) then // Virtual SError enabled if TGE==0 and AMO==1 assert HCR.TGE == '0' && HCR.AMO == '1'; else assert HCR_EL2.TGE == '0' && HCR_EL2.AMO == '1'; // Check if routed to AArch64 state if PSTATE.EL == EL0 && !ELUsingAArch32(EL1) then AArch64.TakeVirtualSErrorException(full_syndrome); route_to_monitor = FALSE; bits(32) preferred_exception_return = ThisInstrAddr(); vect_offset = 0x10; lr_offset = 8; vaddress = bits(32) UNKNOWN; parity = FALSE; FaultRecord fault; if HaveRASExt() then if ELUsingAArch32(EL2) then fault = AsyncExternalAbort(FALSE, VDFSR.AET, VDFSR.ExT); else fault = AsyncExternalAbort(FALSE, VSESR_EL2.AET, VSESR_EL2.ExT); else fault = AsyncExternalAbort(parity, pe_error_state, extflag); ClearPendingVirtualSError(); AArch32.ReportDataAbort(route_to_monitor, fault, vaddress); AArch32.EnterMode(M32_Abort, preferred_exception_return, lr_offset, vect_offset);

Library pseudocode for aarch32/exceptions/debug/AArch32.SoftwareBreakpoint

// AArch32.SoftwareBreakpoint() // ============================ AArch32.SoftwareBreakpoint(bits(16) immediate) if (EL2Enabled() && !ELUsingAArch32(EL2) && (HCR_EL2.TGE == '1' || MDCR_EL2.TDE == '1')) || !ELUsingAArch32(EL1) then AArch64.SoftwareBreakpoint(immediate); vaddress = bits(32) UNKNOWN; acctype = AccType_IFETCH; // Take as a Prefetch Abort iswrite = FALSE; entry = DebugException_BKPT; fault = AArch32.DebugFault(acctype, iswrite, entry); AArch32.Abort(vaddress, fault);

Library pseudocode for aarch32/exceptions/debug/DebugException

constant bits(4) DebugException_Breakpoint = '0001'; constant bits(4) DebugException_BKPT = '0011'; constant bits(4) DebugException_VectorCatch = '0101'; constant bits(4) DebugException_Watchpoint = '1010';
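
The same encodings transcribed as C constants, for cross-reference when decoding the fault.debugmoe value written to DBGDSCR.MOE:

/* DBGDSCR.MOE / fault.debugmoe encodings. */
enum debug_exception {
    DebugException_Breakpoint  = 0x1,  /* 0b0001 */
    DebugException_BKPT        = 0x3,  /* 0b0011 */
    DebugException_VectorCatch = 0x5,  /* 0b0101 */
    DebugException_Watchpoint  = 0xA   /* 0b1010 */
};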

Library pseudocode for aarch32/exceptions/exceptions/AArch32.CheckAdvSIMDOrFPRegisterTraps

// AArch32.CheckAdvSIMDOrFPRegisterTraps() // ======================================= // Check if an instruction that accesses an Advanced SIMD and // floating-point System register is trapped by an appropriate HCR.TIDx // ID group trap control. AArch32.CheckAdvSIMDOrFPRegisterTraps(bits(4) reg) if PSTATE.EL == EL1 && EL2Enabled() then tid0 = if ELUsingAArch32(EL2) then HCR.TID0 else HCR_EL2.TID0; tid3 = if ELUsingAArch32(EL2) then HCR.TID3 else HCR_EL2.TID3; if (tid0 == '1' && reg == '0000') // FPSID || (tid3 == '1' && reg IN {'0101', '0110', '0111'}) then // MVFRx if ELUsingAArch32(EL2) then AArch32.SystemAccessTrap(M32_Hyp, 0x8); // Exception_AdvSIMDFPAccessTrap else AArch64.AArch32SystemAccessTrap(EL2, 0x8); // Exception_AdvSIMDFPAccessTrap

Library pseudocode for aarch32/exceptions/exceptions/AArch32.ExceptionClass

// AArch32.ExceptionClass() // ======================== // Returns the Exception Class and Instruction Length fields to be reported in HSR (integer,bit) AArch32.ExceptionClass(Exception exceptype) il_is_valid = TRUE; integer ec; case exceptype of when Exception_Uncategorized ec = 0x00; il_is_valid = FALSE; when Exception_WFxTrap ec = 0x01; when Exception_CP15RTTrap ec = 0x03; when Exception_CP15RRTTrap ec = 0x04; when Exception_CP14RTTrap ec = 0x05; when Exception_CP14DTTrap ec = 0x06; when Exception_AdvSIMDFPAccessTrap ec = 0x07; when Exception_FPIDTrap ec = 0x08; when Exception_PACTrap ec = 0x09; when Exception_CP14RRTTrap ec = 0x0C; when Exception_BranchTarget ec = 0x0D; when Exception_IllegalState ec = 0x0E; il_is_valid = FALSE; when Exception_SupervisorCall ec = 0x11; when Exception_HypervisorCall ec = 0x12; when Exception_MonitorCall ec = 0x13; when Exception_InstructionAbort ec = 0x20; il_is_valid = FALSE; when Exception_PCAlignment ec = 0x22; il_is_valid = FALSE; when Exception_DataAbort ec = 0x24; when Exception_NV2DataAbort ec = 0x25; when Exception_FPTrappedException ec = 0x28; otherwise Unreachable(); if ec IN {0x20,0x24} && PSTATE.EL == EL2 then ec = ec + 1; bit il; if il_is_valid then il = if ThisInstrLength() == 32 then '1' else '0'; else il = '1'; return (ec,il);

Library pseudocode for aarch32/exceptions/exceptions/AArch32.GeneralExceptionsToAArch64

// AArch32.GeneralExceptionsToAArch64() // ==================================== // Returns TRUE if exceptions normally routed to EL1 are being handled at an Exception // level using AArch64, because either EL1 is using AArch64 or TGE is in force and EL2 // is using AArch64. boolean AArch32.GeneralExceptionsToAArch64() return ((PSTATE.EL == EL0 && !ELUsingAArch32(EL1)) || (EL2Enabled() && !ELUsingAArch32(EL2) && HCR_EL2.TGE == '1'));

Library pseudocode for aarch32/exceptions/exceptions/AArch32.ReportHypEntry

// AArch32.ReportHypEntry() // ======================== // Report syndrome information to Hyp mode registers. AArch32.ReportHypEntry(ExceptionRecord exception) Exception exceptype = exception.exceptype; (ec,il) = AArch32.ExceptionClass(exceptype); iss = exception.syndrome; // IL is not valid for Data Abort exceptions without valid instruction syndrome information if ec IN {0x24,0x25} && iss<24> == '0' then il = '1'; HSR = ec<5:0>:il:iss; if exceptype IN {Exception_InstructionAbort, Exception_PCAlignment} then HIFAR = exception.vaddress<31:0>; HDFAR = bits(32) UNKNOWN; elsif exceptype == Exception_DataAbort then HIFAR = bits(32) UNKNOWN; HDFAR = exception.vaddress<31:0>; if exception.ipavalid then HPFAR<31:4> = exception.ipaddress<39:12>; else HPFAR<31:4> = bits(28) UNKNOWN; return;
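
The HSR write above concatenates EC<5:0>:IL:ISS<24:0>. A minimal C sketch of that packing (the function name is illustrative):

#include <stdint.h>

/* Pack the Hyp Syndrome Register value: EC in bits [31:26], IL in
 * bit [25], ISS in bits [24:0]. */
static uint32_t pack_hsr(unsigned ec, unsigned il, uint32_t iss)
{
    return ((uint32_t)(ec & 0x3Fu) << 26) |
           ((uint32_t)(il & 0x1u) << 25) |
           (iss & 0x1FFFFFFu);
}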

Library pseudocode for aarch32/exceptions/exceptions/AArch32.ResetControlRegisters

// Resets System registers and memory-mapped control registers that have architecturally-defined // reset values to those values. AArch32.ResetControlRegisters(boolean cold_reset);

Library pseudocode for aarch32/exceptions/exceptions/AArch32.TakeReset

// AArch32.TakeReset() // =================== // Reset into AArch32 state AArch32.TakeReset(boolean cold_reset) assert !HaveAArch64(); // Enter the highest implemented Exception level in AArch32 state if HaveEL(EL3) then AArch32.WriteMode(M32_Svc); SCR.NS = '0'; // Secure state elsif HaveEL(EL2) then AArch32.WriteMode(M32_Hyp); else AArch32.WriteMode(M32_Svc); // Reset System registers in the coproc=0b111x encoding space and other system components AArch32.ResetControlRegisters(cold_reset); FPEXC.EN = '0'; // Reset all other PSTATE fields, including instruction set and endianness according to the // SCTLR values produced by the above call to ResetControlRegisters() PSTATE.<A,I,F> = '111'; // All asynchronous exceptions masked PSTATE.IT = '00000000'; // IT block state reset if HaveEL(EL2) && !HaveEL(EL3) then PSTATE.T = HSCTLR.TE; // Instruction set: TE=0: A32, TE=1: T32. PSTATE.J is RES0. PSTATE.E = HSCTLR.EE; // Endianness: EE=0: little-endian, EE=1: big-endian else PSTATE.T = SCTLR.TE; // Instruction set: TE=0: A32, TE=1: T32. PSTATE.J is RES0. PSTATE.E = SCTLR.EE; // Endianness: EE=0: little-endian, EE=1: big-endian PSTATE.IL = '0'; // Clear Illegal Execution state bit // All registers, bits and fields not reset by the above pseudocode or by the BranchTo() call // below are UNKNOWN bitstrings after reset. In particular, the return information registers // R14 or ELR_hyp and SPSR have UNKNOWN values, so that it // is impossible to return from a reset in an architecturally defined way. AArch32.ResetGeneralRegisters(); AArch32.ResetSIMDFPRegisters(); AArch32.ResetSpecialRegisters(); ResetExternalDebugRegisters(cold_reset); bits(32) rv; // IMPLEMENTATION DEFINED reset vector if HaveEL(EL3) then if MVBAR<0> == '1' then // Reset vector in MVBAR rv = MVBAR<31:1>:'0'; else rv = bits(32) IMPLEMENTATION_DEFINED "reset vector address"; else rv = RVBAR<31:1>:'0'; // The reset vector must be correctly aligned assert rv<0> == '0' && (PSTATE.T == '1' || rv<1> == '0'); boolean branch_conditional = FALSE; BranchTo(rv, BranchType_RESET, branch_conditional);

Library pseudocode for aarch32/exceptions/exceptions/ExcVectorBase

// ExcVectorBase() // =============== bits(32) ExcVectorBase() if SCTLR.V == '1' then // Hivecs selected, base = 0xFFFF0000 return Ones(16):Zeros(16); else return VBAR<31:5>:Zeros(5);

Library pseudocode for aarch32/exceptions/ieeefp/AArch32.FPTrappedException

// AArch32.FPTrappedException() // ============================ AArch32.FPTrappedException(bits(8) accumulated_exceptions) if AArch32.GeneralExceptionsToAArch64() then is_ase = FALSE; element = 0; AArch64.FPTrappedException(is_ase, accumulated_exceptions); FPEXC.DEX = '1'; FPEXC.TFV = '1'; FPEXC<7,4:0> = accumulated_exceptions<7,4:0>; // IDF,IXF,UFF,OFF,DZF,IOF FPEXC<10:8> = '111'; // VECITR is RES1 AArch32.TakeUndefInstrException();

Library pseudocode for aarch32/exceptions/syscalls/AArch32.CallHypervisor

// AArch32.CallHypervisor() // ======================== // Performs a HVC call AArch32.CallHypervisor(bits(16) immediate) assert HaveEL(EL2); if !ELUsingAArch32(EL2) then AArch64.CallHypervisor(immediate); else AArch32.TakeHVCException(immediate);

Library pseudocode for aarch32/exceptions/syscalls/AArch32.CallSupervisor

// AArch32.CallSupervisor() // ======================== // Calls the Supervisor AArch32.CallSupervisor(bits(16) immediate_in) bits(16) immediate = immediate_in; if AArch32.CurrentCond() != '1110' then immediate = bits(16) UNKNOWN; if AArch32.GeneralExceptionsToAArch64() then AArch64.CallSupervisor(immediate); else AArch32.TakeSVCException(immediate);

Library pseudocode for aarch32/exceptions/syscalls/AArch32.TakeHVCException

// AArch32.TakeHVCException() // ========================== AArch32.TakeHVCException(bits(16) immediate) assert HaveEL(EL2) && ELUsingAArch32(EL2); AArch32.ITAdvance(); SSAdvance(); bits(32) preferred_exception_return = NextInstrAddr(); vect_offset = 0x08; exception = ExceptionSyndrome(Exception_HypervisorCall); exception.syndrome<15:0> = immediate; if PSTATE.EL == EL2 then AArch32.EnterHypMode(exception, preferred_exception_return, vect_offset); else AArch32.EnterHypMode(exception, preferred_exception_return, 0x14);

Library pseudocode for aarch32/exceptions/syscalls/AArch32.TakeSMCException

// AArch32.TakeSMCException() // ========================== AArch32.TakeSMCException() assert HaveEL(EL3) && ELUsingAArch32(EL3); AArch32.ITAdvance(); SSAdvance(); bits(32) preferred_exception_return = NextInstrAddr(); vect_offset = 0x08; lr_offset = 0; AArch32.EnterMonitorMode(preferred_exception_return, lr_offset, vect_offset);

Library pseudocode for aarch32/exceptions/syscalls/AArch32.TakeSVCException

// AArch32.TakeSVCException() // ========================== AArch32.TakeSVCException(bits(16) immediate) AArch32.ITAdvance(); SSAdvance(); route_to_hyp = PSTATE.EL == EL0 && EL2Enabled() && HCR.TGE == '1'; bits(32) preferred_exception_return = NextInstrAddr(); vect_offset = 0x08; lr_offset = 0; if PSTATE.EL == EL2 || route_to_hyp then exception = ExceptionSyndrome(Exception_SupervisorCall); exception.syndrome<15:0> = immediate; if PSTATE.EL == EL2 then AArch32.EnterHypMode(exception, preferred_exception_return, vect_offset); else AArch32.EnterHypMode(exception, preferred_exception_return, 0x14); else AArch32.EnterMode(M32_Svc, preferred_exception_return, lr_offset, vect_offset);

Library pseudocode for aarch32/exceptions/takeexception/AArch32.EnterHypMode

// AArch32.EnterHypMode() // ====================== // Take an exception to Hyp mode. AArch32.EnterHypMode(ExceptionRecord exception, bits(32) preferred_exception_return, integer vect_offset) SynchronizeContext(); assert HaveEL(EL2) && CurrentSecurityState() == SS_NonSecure && ELUsingAArch32(EL2); bits(32) spsr = GetPSRFromPSTATE(AArch32_NonDebugState); if !(exception.exceptype IN {Exception_IRQ, Exception_FIQ}) then AArch32.ReportHypEntry(exception); AArch32.WriteMode(M32_Hyp); SPSR[] = spsr; ELR_hyp = preferred_exception_return; PSTATE.T = HSCTLR.TE; // PSTATE.J is RES0 PSTATE.SS = '0'; if !HaveEL(EL3) || SCR_GEN[].EA == '0' then PSTATE.A = '1'; if !HaveEL(EL3) || SCR_GEN[].IRQ == '0' then PSTATE.I = '1'; if !HaveEL(EL3) || SCR_GEN[].FIQ == '0' then PSTATE.F = '1'; PSTATE.E = HSCTLR.EE; PSTATE.IL = '0'; PSTATE.IT = '00000000'; if HaveSSBSExt() then PSTATE.SSBS = HSCTLR.DSSBS; boolean branch_conditional = FALSE; BranchTo(HVBAR<31:5>:vect_offset<4:0>, BranchType_EXCEPTION, branch_conditional); CheckExceptionCatch(TRUE); // Check for debug event on exception entry EndOfInstruction();

Library pseudocode for aarch32/exceptions/takeexception/AArch32.EnterMode

// AArch32.EnterMode() // =================== // Take an exception to a mode other than Monitor and Hyp mode. AArch32.EnterMode(bits(5) target_mode, bits(32) preferred_exception_return, integer lr_offset, integer vect_offset) SynchronizeContext(); assert ELUsingAArch32(EL1) && PSTATE.EL != EL2; bits(32) spsr = GetPSRFromPSTATE(AArch32_NonDebugState); if PSTATE.M == M32_Monitor then SCR.NS = '0'; AArch32.WriteMode(target_mode); SPSR[] = spsr; R[14] = preferred_exception_return + lr_offset; PSTATE.T = SCTLR.TE; // PSTATE.J is RES0 PSTATE.SS = '0'; if target_mode == M32_FIQ then PSTATE.<A,I,F> = '111'; elsif target_mode IN {M32_Abort, M32_IRQ} then PSTATE.<A,I> = '11'; else PSTATE.I = '1'; PSTATE.E = SCTLR.EE; PSTATE.IL = '0'; PSTATE.IT = '00000000'; if HavePANExt() && SCTLR.SPAN == '0' then PSTATE.PAN = '1'; if HaveSSBSExt() then PSTATE.SSBS = SCTLR.DSSBS; boolean branch_conditional = FALSE; BranchTo(ExcVectorBase()<31:5>:vect_offset<4:0>, BranchType_EXCEPTION, branch_conditional); CheckExceptionCatch(TRUE); // Check for debug event on exception entry EndOfInstruction();

Library pseudocode for aarch32/exceptions/takeexception/AArch32.EnterMonitorMode

// AArch32.EnterMonitorMode() // ========================== // Take an exception to Monitor mode. AArch32.EnterMonitorMode(bits(32) preferred_exception_return, integer lr_offset, integer vect_offset) SynchronizeContext(); assert HaveEL(EL3) && ELUsingAArch32(EL3); from_secure = CurrentSecurityState() == SS_Secure; bits(32) spsr = GetPSRFromPSTATE(AArch32_NonDebugState); if PSTATE.M == M32_Monitor then SCR.NS = '0'; AArch32.WriteMode(M32_Monitor); SPSR[] = spsr; R[14] = preferred_exception_return + lr_offset; PSTATE.T = SCTLR.TE; // PSTATE.J is RES0 PSTATE.SS = '0'; PSTATE.<A,I,F> = '111'; PSTATE.E = SCTLR.EE; PSTATE.IL = '0'; PSTATE.IT = '00000000'; if HavePANExt() then if !from_secure then PSTATE.PAN = '0'; elsif SCTLR.SPAN == '0' then PSTATE.PAN = '1'; if HaveSSBSExt() then PSTATE.SSBS = SCTLR.DSSBS; boolean branch_conditional = FALSE; BranchTo(MVBAR<31:5>:vect_offset<4:0>, BranchType_EXCEPTION, branch_conditional); CheckExceptionCatch(TRUE); // Check for debug event on exception entry EndOfInstruction();

Library pseudocode for aarch32/exceptions/traps/AArch32.CheckAdvSIMDOrFPEnabled

// AArch32.CheckAdvSIMDOrFPEnabled() // ================================= // Check against CPACR, FPEXC, HCPTR, NSACR, and CPTR_EL3. AArch32.CheckAdvSIMDOrFPEnabled(boolean fpexc_check, boolean advsimd) if PSTATE.EL == EL0 && (!EL2Enabled() || (!ELUsingAArch32(EL2) && HCR_EL2.TGE == '0')) && !ELUsingAArch32(EL1) then // The PE behaves as if FPEXC.EN is 1 AArch64.CheckFPAdvSIMDEnabled(); elsif PSTATE.EL == EL0 && EL2Enabled() && !ELUsingAArch32(EL2) && HCR_EL2.TGE == '1' && !ELUsingAArch32(EL1) then if fpexc_check && HCR_EL2.RW == '0' then fpexc_en = bits(1) IMPLEMENTATION_DEFINED "FPEXC.EN value when TGE==1 and RW==0"; if fpexc_en == '0' then UNDEFINED; AArch64.CheckFPEnabled(); else cpacr_asedis = CPACR.ASEDIS; cpacr_cp10 = CPACR.cp10; if HaveEL(EL3) && ELUsingAArch32(EL3) && CurrentSecurityState() == SS_NonSecure then // Check if access disabled in NSACR if NSACR.NSASEDIS == '1' then cpacr_asedis = '1'; if NSACR.cp10 == '0' then cpacr_cp10 = '00'; if PSTATE.EL != EL2 then // Check if Advanced SIMD disabled in CPACR if advsimd && cpacr_asedis == '1' then UNDEFINED; // Check if access disabled in CPACR boolean disabled; case cpacr_cp10 of when '00' disabled = TRUE; when '01' disabled = PSTATE.EL == EL0; when '10' disabled = ConstrainUnpredictableBool(Unpredictable_RESCPACR); when '11' disabled = FALSE; if disabled then UNDEFINED; // If required, check FPEXC enabled bit. if fpexc_check && FPEXC.EN == '0' then UNDEFINED; AArch32.CheckFPAdvSIMDTrap(advsimd); // Also check against HCPTR and CPTR_EL3

Library pseudocode for aarch32/exceptions/traps/AArch32.CheckFPAdvSIMDTrap

// AArch32.CheckFPAdvSIMDTrap() // ============================ // Check against CPTR_EL2 and CPTR_EL3. AArch32.CheckFPAdvSIMDTrap(boolean advsimd) if EL2Enabled() && !ELUsingAArch32(EL2) then AArch64.CheckFPAdvSIMDTrap(); else if (HaveEL(EL3) && !ELUsingAArch32(EL3) && CPTR_EL3.TFP == '1' && EL3SDDTrapPriority()) then UNDEFINED; ss = CurrentSecurityState(); if HaveEL(EL2) && ss != SS_Secure then hcptr_tase = HCPTR.TASE; hcptr_cp10 = HCPTR.TCP10; if HaveEL(EL3) && ELUsingAArch32(EL3) then // Check if access disabled in NSACR if NSACR.NSASEDIS == '1' then hcptr_tase = '1'; if NSACR.cp10 == '0' then hcptr_cp10 = '1'; // Check if access disabled in HCPTR if (advsimd && hcptr_tase == '1') || hcptr_cp10 == '1' then exception = ExceptionSyndrome(Exception_AdvSIMDFPAccessTrap); exception.syndrome<24:20> = ConditionSyndrome(); if advsimd then exception.syndrome<5> = '1'; else exception.syndrome<5> = '0'; exception.syndrome<3:0> = '1010'; // coproc field, always 0xA if PSTATE.EL == EL2 then AArch32.TakeUndefInstrException(exception); else AArch32.TakeHypTrapException(exception); if HaveEL(EL3) && !ELUsingAArch32(EL3) then // Check if access disabled in CPTR_EL3 if CPTR_EL3.TFP == '1' then AArch64.AdvSIMDFPAccessTrap(EL3); return;

Library pseudocode for aarch32/exceptions/traps/AArch32.CheckForSMCUndefOrTrap

// AArch32.CheckForSMCUndefOrTrap() // ================================ // Check for UNDEFINED or trap on SMC instruction AArch32.CheckForSMCUndefOrTrap() if !HaveEL(EL3) || PSTATE.EL == EL0 then UNDEFINED; if EL2Enabled() && !ELUsingAArch32(EL2) then AArch64.CheckForSMCUndefOrTrap(Zeros(16)); else route_to_hyp = EL2Enabled() && PSTATE.EL == EL1 && HCR.TSC == '1'; if route_to_hyp then exception = ExceptionSyndrome(Exception_MonitorCall); AArch32.TakeHypTrapException(exception);

Library pseudocode for aarch32/exceptions/traps/AArch32.CheckForSVCTrap

// AArch32.CheckForSVCTrap() // ========================= // Check for trap on SVC instruction AArch32.CheckForSVCTrap(bits(16) immediate) if HaveFGTExt() then route_to_el2 = FALSE; if PSTATE.EL == EL0 then route_to_el2 = (!ELUsingAArch32(EL1) && EL2Enabled() && HFGITR_EL2.SVC_EL0 == '1' && (HCR_EL2.<E2H, TGE> != '11' && (!HaveEL(EL3) || SCR_EL3.FGTEn == '1'))); if route_to_el2 then exception = ExceptionSyndrome(Exception_SupervisorCall); exception.syndrome<15:0> = immediate; exception.trappedsyscallinst = TRUE; bits(64) preferred_exception_return = ThisInstrAddr(); vect_offset = 0x0; AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch32/exceptions/traps/AArch32.CheckForWFxTrap

// AArch32.CheckForWFxTrap() // ========================= // Check for trap on WFE or WFI instruction AArch32.CheckForWFxTrap(bits(2) target_el, WFxType wfxtype) assert HaveEL(target_el); // Check for routing to AArch64 if !ELUsingAArch32(target_el) then AArch64.CheckForWFxTrap(target_el, wfxtype); return; boolean is_wfe = wfxtype == WFxType_WFE; boolean trap; case target_el of when EL1 trap = (if is_wfe then SCTLR.nTWE else SCTLR.nTWI) == '0'; when EL2 trap = (if is_wfe then HCR.TWE else HCR.TWI) == '1'; when EL3 trap = (if is_wfe then SCR.TWE else SCR.TWI) == '1'; if trap then if target_el == EL1 && EL2Enabled() && !ELUsingAArch32(EL2) && HCR_EL2.TGE == '1' then AArch64.WFxTrap(wfxtype, target_el); if target_el == EL3 then AArch32.TakeMonitorTrapException(); elsif target_el == EL2 then exception = ExceptionSyndrome(Exception_WFxTrap); exception.syndrome<24:20> = ConditionSyndrome(); case wfxtype of when WFxType_WFI exception.syndrome<0> = '0'; when WFxType_WFE exception.syndrome<0> = '1'; AArch32.TakeHypTrapException(exception); else AArch32.TakeUndefInstrException();

Library pseudocode for aarch32/exceptions/traps/AArch32.CheckITEnabled

// AArch32.CheckITEnabled() // ======================== // Check whether the T32 IT instruction is disabled. AArch32.CheckITEnabled(bits(4) mask) bit it_disabled; if PSTATE.EL == EL2 then it_disabled = HSCTLR.ITD; else it_disabled = (if ELUsingAArch32(EL1) then SCTLR.ITD else SCTLR[].ITD); if it_disabled == '1' then if mask != '1000' then UNDEFINED; // Otherwise whether the IT block is allowed depends on hw1 of the next instruction. next_instr = AArch32.MemSingle[NextInstrAddr(), 2, AccType_IFETCH, TRUE]; if next_instr IN {'11xxxxxxxxxxxxxx', '1011xxxxxxxxxxxx', '10100xxxxxxxxxxx', '01001xxxxxxxxxxx', '010001xxx1111xxx', '010001xx1xxxx111'} then // It is IMPLEMENTATION DEFINED whether the Undefined Instruction exception is // taken on the IT instruction or the next instruction. This is not reflected in // the pseudocode, which always takes the exception on the IT instruction. This // also does not take into account cases where the next instruction is UNPREDICTABLE. UNDEFINED; return;

Library pseudocode for aarch32/exceptions/traps/AArch32.CheckIllegalState

// AArch32.CheckIllegalState() // =========================== // Check PSTATE.IL bit and generate Illegal Execution state exception if set. AArch32.CheckIllegalState() if AArch32.GeneralExceptionsToAArch64() then AArch64.CheckIllegalState(); elsif PSTATE.IL == '1' then route_to_hyp = PSTATE.EL == EL0 && EL2Enabled() && HCR.TGE == '1'; bits(32) preferred_exception_return = ThisInstrAddr(); vect_offset = 0x04; if PSTATE.EL == EL2 || route_to_hyp then exception = ExceptionSyndrome(Exception_IllegalState); if PSTATE.EL == EL2 then AArch32.EnterHypMode(exception, preferred_exception_return, vect_offset); else AArch32.EnterHypMode(exception, preferred_exception_return, 0x14); else AArch32.TakeUndefInstrException();

Library pseudocode for aarch32/exceptions/traps/AArch32.CheckSETENDEnabled

// AArch32.CheckSETENDEnabled() // ============================ // Check whether the AArch32 SETEND instruction is disabled. AArch32.CheckSETENDEnabled() bit setend_disabled; if PSTATE.EL == EL2 then setend_disabled = HSCTLR.SED; else setend_disabled = (if ELUsingAArch32(EL1) then SCTLR.SED else SCTLR[].SED); if setend_disabled == '1' then UNDEFINED; return;

Library pseudocode for aarch32/exceptions/traps/AArch32.SystemAccessTrap

// AArch32.SystemAccessTrap() // ========================== // Trapped system register access. AArch32.SystemAccessTrap(bits(5) mode, integer ec) (valid, target_el) = ELFromM32(mode); assert valid && HaveEL(target_el) && target_el != EL0 && UInt(target_el) >= UInt(PSTATE.EL); if target_el == EL2 then exception = AArch32.SystemAccessTrapSyndrome(ThisInstr(), ec); AArch32.TakeHypTrapException(exception); else AArch32.TakeUndefInstrException();

Library pseudocode for aarch32/exceptions/traps/AArch32.SystemAccessTrapSyndrome

// AArch32.SystemAccessTrapSyndrome() // ================================== // Returns the syndrome information for traps on AArch32 MCR, MCRR, MRC, MRRC, and VMRS, VMSR instructions, // other than traps that are due to HCPTR or CPACR. ExceptionRecord AArch32.SystemAccessTrapSyndrome(bits(32) instr, integer ec) ExceptionRecord exception; case ec of when 0x0 exception = ExceptionSyndrome(Exception_Uncategorized); when 0x3 exception = ExceptionSyndrome(Exception_CP15RTTrap); when 0x4 exception = ExceptionSyndrome(Exception_CP15RRTTrap); when 0x5 exception = ExceptionSyndrome(Exception_CP14RTTrap); when 0x6 exception = ExceptionSyndrome(Exception_CP14DTTrap); when 0x7 exception = ExceptionSyndrome(Exception_AdvSIMDFPAccessTrap); when 0x8 exception = ExceptionSyndrome(Exception_FPIDTrap); when 0xC exception = ExceptionSyndrome(Exception_CP14RRTTrap); otherwise Unreachable(); bits(20) iss = Zeros(); if exception.exceptype == Exception_Uncategorized then return exception; elsif exception.exceptype IN {Exception_FPIDTrap, Exception_CP14RTTrap, Exception_CP15RTTrap} then // Trapped MRC/MCR, VMRS on FPSID iss<13:10> = instr<19:16>; // CRn, Reg in case of VMRS iss<8:5> = instr<15:12>; // Rt iss<9> = '0'; // RES0 if exception.exceptype != Exception_FPIDTrap then // When trap is not for VMRS iss<19:17> = instr<7:5>; // opc2 iss<16:14> = instr<23:21>; // opc1 iss<4:1> = instr<3:0>; //CRm else //VMRS Access iss<19:17> = '000'; //opc2 - Hardcoded for VMRS iss<16:14> = '111'; //opc1 - Hardcoded for VMRS iss<4:1> = '0000'; //CRm - Hardcoded for VMRS elsif exception.exceptype IN {Exception_CP14RRTTrap, Exception_AdvSIMDFPAccessTrap, Exception_CP15RRTTrap} then // Trapped MRRC/MCRR, VMRS/VMSR iss<19:16> = instr<7:4>; // opc1 iss<13:10> = instr<19:16>; // Rt2 iss<8:5> = instr<15:12>; // Rt iss<4:1> = instr<3:0>; // CRm elsif exception.exceptype == Exception_CP14DTTrap then // Trapped LDC/STC iss<19:12> = instr<7:0>; // imm8 iss<4> = instr<23>; // U iss<2:1> = instr<24,21>; // P,W if instr<19:16> == '1111' then // Rn==15, LDC(Literal addressing)/STC iss<8:5> = bits(4) UNKNOWN; iss<3> = '1'; iss<0> = instr<20>; // Direction exception.syndrome<24:20> = ConditionSyndrome(); exception.syndrome<19:0> = iss; return exception;

Library pseudocode for aarch32/exceptions/traps/AArch32.TakeHypTrapException

// AArch32.TakeHypTrapException() // ============================== // Exceptions routed to Hyp mode as a Hyp Trap exception. AArch32.TakeHypTrapException(integer ec) exception = AArch32.SystemAccessTrapSyndrome(ThisInstr(), ec); AArch32.TakeHypTrapException(exception); // AArch32.TakeHypTrapException() // ============================== // Exceptions routed to Hyp mode as a Hyp Trap exception. AArch32.TakeHypTrapException(ExceptionRecord exception) assert HaveEL(EL2) && CurrentSecurityState() == SS_NonSecure && ELUsingAArch32(EL2); bits(32) preferred_exception_return = ThisInstrAddr(); vect_offset = 0x14; AArch32.EnterHypMode(exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch32/exceptions/traps/AArch32.TakeMonitorTrapException

// AArch32.TakeMonitorTrapException() // ================================== // Exceptions routed to Monitor mode as a Monitor Trap exception. AArch32.TakeMonitorTrapException() assert HaveEL(EL3) && ELUsingAArch32(EL3); bits(32) preferred_exception_return = ThisInstrAddr(); vect_offset = 0x04; lr_offset = if CurrentInstrSet() == InstrSet_A32 then 4 else 2; AArch32.EnterMonitorMode(preferred_exception_return, lr_offset, vect_offset);

Library pseudocode for aarch32/exceptions/traps/AArch32.TakeUndefInstrException

// AArch32.TakeUndefInstrException() // ================================= AArch32.TakeUndefInstrException() exception = ExceptionSyndrome(Exception_Uncategorized); AArch32.TakeUndefInstrException(exception); // AArch32.TakeUndefInstrException() // ================================= AArch32.TakeUndefInstrException(ExceptionRecord exception) route_to_hyp = PSTATE.EL == EL0 && EL2Enabled() && HCR.TGE == '1'; bits(32) preferred_exception_return = ThisInstrAddr(); vect_offset = 0x04; lr_offset = if CurrentInstrSet() == InstrSet_A32 then 4 else 2; if PSTATE.EL == EL2 then AArch32.EnterHypMode(exception, preferred_exception_return, vect_offset); elsif route_to_hyp then AArch32.EnterHypMode(exception, preferred_exception_return, 0x14); else AArch32.EnterMode(M32_Undef, preferred_exception_return, lr_offset, vect_offset);

Library pseudocode for aarch32/exceptions/traps/AArch32.UndefinedFault

// AArch32.UndefinedFault() // ======================== AArch32.UndefinedFault() if AArch32.GeneralExceptionsToAArch64() then AArch64.UndefinedFault(); AArch32.TakeUndefInstrException();

Library pseudocode for aarch32/functions/aborts/AArch32.DomainValid

// AArch32.DomainValid() // ===================== // Returns TRUE if the Domain is valid for a Short-descriptor translation scheme. boolean AArch32.DomainValid(Fault statuscode, integer level) assert statuscode != Fault_None; case statuscode of when Fault_Domain return TRUE; when Fault_Translation, Fault_AccessFlag, Fault_SyncExternalOnWalk, Fault_SyncParityOnWalk return level == 2; otherwise return FALSE;

Library pseudocode for aarch32/functions/aborts/AArch32.FaultStatusLD

// AArch32.FaultStatusLD() // ======================= // Creates an exception fault status value for Abort and Watchpoint exceptions taken // to Abort mode using AArch32 and Long-descriptor format. bits(32) AArch32.FaultStatusLD(boolean d_side, FaultRecord fault) assert fault.statuscode != Fault_None; bits(32) fsr = Zeros(); if HaveRASExt() && IsAsyncAbort(fault) then fsr<15:14> = fault.errortype; if d_side then if fault.acctype IN {AccType_DC, AccType_IC, AccType_AT, AccType_ATPAN} then fsr<13> = '1'; fsr<11> = '1'; else fsr<11> = if fault.write then '1' else '0'; if IsExternalAbort(fault) then fsr<12> = fault.extflag; fsr<9> = '1'; fsr<5:0> = EncodeLDFSC(fault.statuscode, fault.level); return fsr;

Library pseudocode for aarch32/functions/aborts/AArch32.FaultStatusSD

// AArch32.FaultStatusSD() // ======================= // Creates an exception fault status value for Abort and Watchpoint exceptions taken // to Abort mode using AArch32 and Short-descriptor format. bits(32) AArch32.FaultStatusSD(boolean d_side, FaultRecord fault) assert fault.statuscode != Fault_None; bits(32) fsr = Zeros(); if HaveRASExt() && IsAsyncAbort(fault) then fsr<15:14> = fault.errortype; if d_side then if fault.acctype IN {AccType_DC, AccType_IC, AccType_AT, AccType_ATPAN} then fsr<13> = '1'; fsr<11> = '1'; else fsr<11> = if fault.write then '1' else '0'; if IsExternalAbort(fault) then fsr<12> = fault.extflag; fsr<9> = '0'; fsr<10,3:0> = EncodeSDFSC(fault.statuscode, fault.level); if d_side then fsr<7:4> = fault.domain; // Domain field (data fault only) return fsr;

Library pseudocode for aarch32/functions/aborts/AArch32.FaultSyndrome

// AArch32.FaultSyndrome() // ======================= // Creates an exception syndrome value for Abort and Watchpoint exceptions taken to // AArch32 Hyp mode. bits(25) AArch32.FaultSyndrome(boolean d_side, FaultRecord fault) assert fault.statuscode != Fault_None; bits(25) iss = Zeros(); if HaveRASExt() && IsAsyncAbort(fault) then iss<11:10> = fault.errortype; // AET if d_side then if (IsSecondStage(fault) && !fault.s2fs1walk && (!IsExternalSyncAbort(fault) || (!HaveRASExt() && fault.acctype == AccType_TTW && boolean IMPLEMENTATION_DEFINED "ISV on second stage translation table walk"))) then iss<24:14> = LSInstructionSyndrome(); if fault.acctype IN {AccType_DC, AccType_IC, AccType_AT, AccType_ATPAN} then iss<8> = '1'; iss<6> = '1'; else iss<6> = if fault.write then '1' else '0'; if IsExternalAbort(fault) then iss<9> = fault.extflag; iss<7> = if fault.s2fs1walk then '1' else '0'; iss<5:0> = EncodeLDFSC(fault.statuscode, fault.level); return iss;

Library pseudocode for aarch32/functions/aborts/EncodeSDFSC

// EncodeSDFSC() // ============= // Function that gives the Short-descriptor FSR code for different types of Fault bits(5) EncodeSDFSC(Fault statuscode, integer level) bits(5) result; case statuscode of when Fault_AccessFlag assert level IN {1,2}; result = if level == 1 then '00011' else '00110'; when Fault_Alignment result = '00001'; when Fault_Permission assert level IN {1,2}; result = if level == 1 then '01101' else '01111'; when Fault_Domain assert level IN {1,2}; result = if level == 1 then '01001' else '01011'; when Fault_Translation assert level IN {1,2}; result = if level == 1 then '00101' else '00111'; when Fault_SyncExternal result = '01000'; when Fault_SyncExternalOnWalk assert level IN {1,2}; result = if level == 1 then '01100' else '01110'; when Fault_SyncParity result = '11001'; when Fault_SyncParityOnWalk assert level IN {1,2}; result = if level == 1 then '11100' else '11110'; when Fault_AsyncParity result = '11000'; when Fault_AsyncExternal result = '10110'; when Fault_Debug result = '00010'; when Fault_TLBConflict result = '10000'; when Fault_Lockdown result = '10100'; // IMPLEMENTATION DEFINED when Fault_Exclusive result = '10101'; // IMPLEMENTATION DEFINED when Fault_ICacheMaint result = '00100'; otherwise Unreachable(); return result;

Library pseudocode for aarch32/functions/common/A32ExpandImm

// A32ExpandImm() // ============== bits(32) A32ExpandImm(bits(12) imm12) // PSTATE.C argument to following function call does not affect the imm32 result. (imm32, -) = A32ExpandImm_C(imm12, PSTATE.C); return imm32;

Library pseudocode for aarch32/functions/common/A32ExpandImm_C

// A32ExpandImm_C() // ================ (bits(32), bit) A32ExpandImm_C(bits(12) imm12, bit carry_in) unrotated_value = ZeroExtend(imm12<7:0>, 32); (imm32, carry_out) = Shift_C(unrotated_value, SRType_ROR, 2*UInt(imm12<11:8>), carry_in); return (imm32, carry_out);
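To make the rotation concrete: the low 8 bits of imm12 are rotated right within 32 bits by twice the value of imm12<11:8>, and the carry out is bit 31 of the result whenever the rotation is non-zero. A small executable Python model (an illustrative sketch, not the architecture's pseudocode):

def a32_expand_imm_c(imm12: int, carry_in: int) -> tuple:
    # Rotate the 8-bit payload right by 2*imm12<11:8> within 32 bits.
    value = imm12 & 0xFF
    amount = 2 * ((imm12 >> 8) & 0xF)
    if amount == 0:
        return value, carry_in
    result = ((value >> amount) | (value << (32 - amount))) & 0xFFFFFFFF
    return result, (result >> 31) & 1   # carry out = bit 31 after rotation

assert a32_expand_imm_c(0x0FF, 0) == (0xFF, 0)
assert a32_expand_imm_c(0x1FE, 0) == (0x8000003F, 1)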

Library pseudocode for aarch32/functions/common/DecodeImmShift

// DecodeImmShift() // ================ (SRType, integer) DecodeImmShift(bits(2) srtype, bits(5) imm5) SRType shift_t; integer shift_n; case srtype of when '00' shift_t = SRType_LSL; shift_n = UInt(imm5); when '01' shift_t = SRType_LSR; shift_n = if imm5 == '00000' then 32 else UInt(imm5); when '10' shift_t = SRType_ASR; shift_n = if imm5 == '00000' then 32 else UInt(imm5); when '11' if imm5 == '00000' then shift_t = SRType_RRX; shift_n = 1; else shift_t = SRType_ROR; shift_n = UInt(imm5); return (shift_t, shift_n);

Library pseudocode for aarch32/functions/common/DecodeRegShift

// DecodeRegShift() // ================ SRType DecodeRegShift(bits(2) srtype) SRType shift_t; case srtype of when '00' shift_t = SRType_LSL; when '01' shift_t = SRType_LSR; when '10' shift_t = SRType_ASR; when '11' shift_t = SRType_ROR; return shift_t;

Library pseudocode for aarch32/functions/common/RRX

// RRX() // ===== bits(N) RRX(bits(N) x, bit carry_in) (result, -) = RRX_C(x, carry_in); return result;

Library pseudocode for aarch32/functions/common/RRX_C

// RRX_C() // ======= (bits(N), bit) RRX_C(bits(N) x, bit carry_in) result = carry_in : x<N-1:1>; carry_out = x<0>; return (result, carry_out);

Library pseudocode for aarch32/functions/common/SRType

enumeration SRType {SRType_LSL, SRType_LSR, SRType_ASR, SRType_ROR, SRType_RRX};

Library pseudocode for aarch32/functions/common/Shift

// Shift() // ======= bits(N) Shift(bits(N) value, SRType srtype, integer amount, bit carry_in) (result, -) = Shift_C(value, srtype, amount, carry_in); return result;

Library pseudocode for aarch32/functions/common/Shift_C

// Shift_C() // ========= (bits(N), bit) Shift_C(bits(N) value, SRType srtype, integer amount, bit carry_in) assert !(srtype == SRType_RRX && amount != 1); bits(N) result; bit carry_out; if amount == 0 then (result, carry_out) = (value, carry_in); else case srtype of when SRType_LSL (result, carry_out) = LSL_C(value, amount); when SRType_LSR (result, carry_out) = LSR_C(value, amount); when SRType_ASR (result, carry_out) = ASR_C(value, amount); when SRType_ROR (result, carry_out) = ROR_C(value, amount); when SRType_RRX (result, carry_out) = RRX_C(value, carry_in); return (result, carry_out);

Library pseudocode for aarch32/functions/common/T32ExpandImm

// T32ExpandImm() // ============== bits(32) T32ExpandImm(bits(12) imm12) // PSTATE.C argument to following function call does not affect the imm32 result. (imm32, -) = T32ExpandImm_C(imm12, PSTATE.C); return imm32;

Library pseudocode for aarch32/functions/common/T32ExpandImm_C

// T32ExpandImm_C() // ================ (bits(32), bit) T32ExpandImm_C(bits(12) imm12, bit carry_in) bits(32) imm32; bit carry_out; if imm12<11:10> == '00' then case imm12<9:8> of when '00' imm32 = ZeroExtend(imm12<7:0>, 32); when '01' imm32 = '00000000' : imm12<7:0> : '00000000' : imm12<7:0>; when '10' imm32 = imm12<7:0> : '00000000' : imm12<7:0> : '00000000'; when '11' imm32 = imm12<7:0> : imm12<7:0> : imm12<7:0> : imm12<7:0>; carry_out = carry_in; else unrotated_value = ZeroExtend('1':imm12<6:0>, 32); (imm32, carry_out) = ROR_C(unrotated_value, UInt(imm12<11:7>)); return (imm32, carry_out);
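The T32 expansion differs from the A32 one: when imm12<11:10> is '00' the 8-bit payload is replicated across the word according to imm12<9:8>, otherwise '1':imm12<6:0> is rotated right by imm12<11:7>. A hedged Python sketch of the same computation (illustrative, not the architecture's pseudocode):

def t32_expand_imm_c(imm12: int, carry_in: int) -> tuple:
    b = imm12 & 0xFF
    if (imm12 >> 10) & 0x3 == 0:
        # Replication patterns selected by imm12<9:8>
        imm32 = [b,
                 (b << 16) | b,
                 (b << 24) | (b << 8),
                 (b << 24) | (b << 16) | (b << 8) | b][(imm12 >> 8) & 0x3]
        return imm32, carry_in
    unrotated = 0x80 | (imm12 & 0x7F)    # '1':imm12<6:0>
    amount = (imm12 >> 7) & 0x1F         # always >= 8 on this path
    imm32 = ((unrotated >> amount) | (unrotated << (32 - amount))) & 0xFFFFFFFF
    return imm32, (imm32 >> 31) & 1

assert t32_expand_imm_c(0x1AB, 0) == (0x00AB00AB, 0)   # pattern '01'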

Library pseudocode for aarch32/functions/common/VBitOps

enumeration VBitOps {VBitOps_VBIF, VBitOps_VBIT, VBitOps_VBSL};

Library pseudocode for aarch32/functions/common/VCGEType

enumeration VCGEType {VCGEType_signed, VCGEType_unsigned, VCGEType_fp};

Library pseudocode for aarch32/functions/common/VCGTtype

enumeration VCGTtype {VCGTtype_signed, VCGTtype_unsigned, VCGTtype_fp};

Library pseudocode for aarch32/functions/common/VFPNegMul

enumeration VFPNegMul {VFPNegMul_VNMLA, VFPNegMul_VNMLS, VFPNegMul_VNMUL};

Library pseudocode for aarch32/functions/coproc/AArch32.CheckCP15InstrCoarseTraps

// AArch32.CheckCP15InstrCoarseTraps() // =================================== // Check for coarse-grained traps to System registers in the // coproc=0b1111 encoding space by HSTR and HCR. AArch32.CheckCP15InstrCoarseTraps(integer CRn, integer nreg, integer CRm) if PSTATE.EL == EL0 && (!ELUsingAArch32(EL1) || (EL2Enabled() && !ELUsingAArch32(EL2))) then AArch64.CheckCP15InstrCoarseTraps(CRn, nreg, CRm); trapped_encoding = ((CRn == 9 && CRm IN {0,1,2, 5,6,7,8 }) || (CRn == 10 && CRm IN {0,1, 4, 8 }) || (CRn == 11 && CRm IN {0,1,2,3,4,5,6,7,8,15})); // Check for coarse-grained Hyp traps if PSTATE.EL IN {EL0, EL1} && EL2Enabled() then major = if nreg == 1 then CRn else CRm; // Check for MCR, MRC, MCRR, and MRRC disabled by HSTR<CRn/CRm> // and MRC and MCR disabled by HCR.TIDCP. if ((!(major IN {4,14}) && HSTR<major> == '1') || (HCR.TIDCP == '1' && nreg == 1 && trapped_encoding)) then if (PSTATE.EL == EL0 && boolean IMPLEMENTATION_DEFINED "UNDEF unallocated CP15 access at EL0") then UNDEFINED; if ELUsingAArch32(EL2) then AArch32.SystemAccessTrap(M32_Hyp, 0x3); else AArch64.AArch32SystemAccessTrap(EL2, 0x3);

Library pseudocode for aarch32/functions/exclusive/AArch32.ExclusiveMonitorsPass

// AArch32.ExclusiveMonitorsPass() // =============================== // Return TRUE if the Exclusives monitors for the current PE include all of the addresses // associated with the virtual address region of size bytes starting at address. // The immediately following memory write must be to the same addresses. boolean AArch32.ExclusiveMonitorsPass(bits(32) address, integer size) // It is IMPLEMENTATION DEFINED whether the detection of memory aborts happens // before or after the check on the local Exclusives monitor. As a result a failure // of the local monitor can occur on some implementations even if the memory // access would give a memory abort. acctype = AccType_ATOMIC; iswrite = TRUE; aligned = AArch32.CheckAlignment(address, size, acctype, iswrite); passed = AArch32.IsExclusiveVA(address, ProcessorID(), size); if !passed then return FALSE; memaddrdesc = AArch32.TranslateAddress(address, acctype, iswrite, aligned, size); // Check for aborts or debug exceptions if IsFault(memaddrdesc) then AArch32.Abort(address, memaddrdesc.fault); passed = IsExclusiveLocal(memaddrdesc.paddress, ProcessorID(), size); ClearExclusiveLocal(ProcessorID()); if passed then if memaddrdesc.memattrs.shareability != Shareability_NSH then passed = IsExclusiveGlobal(memaddrdesc.paddress, ProcessorID(), size); return passed;

Library pseudocode for aarch32/functions/exclusive/AArch32.IsExclusiveVA

// An optional IMPLEMENTATION DEFINED test for an exclusive access to a virtual // address region of size bytes starting at address. // // It is permitted (but not required) for this function to return FALSE and // cause a store exclusive to fail if the virtual address region is not // totally included within the region recorded by MarkExclusiveVA(). // // It is always safe to return TRUE which will check the physical address only. boolean AArch32.IsExclusiveVA(bits(32) address, integer processorid, integer size);

Library pseudocode for aarch32/functions/exclusive/AArch32.MarkExclusiveVA

// Optionally record an exclusive access to the virtual address region of size bytes // starting at address for processorid. AArch32.MarkExclusiveVA(bits(32) address, integer processorid, integer size);

Library pseudocode for aarch32/functions/exclusive/AArch32.SetExclusiveMonitors

// AArch32.SetExclusiveMonitors() // ============================== // Sets the Exclusives monitors for the current PE to record the addresses associated // with the virtual address region of size bytes starting at address. AArch32.SetExclusiveMonitors(bits(32) address, integer size) acctype = AccType_ATOMIC; iswrite = FALSE; aligned = AArch32.CheckAlignment(address, size, acctype, iswrite); memaddrdesc = AArch32.TranslateAddress(address, acctype, iswrite, aligned, size); // Check for aborts or debug exceptions if IsFault(memaddrdesc) then return; if memaddrdesc.memattrs.shareability != Shareability_NSH then MarkExclusiveGlobal(memaddrdesc.paddress, ProcessorID(), size); MarkExclusiveLocal(memaddrdesc.paddress, ProcessorID(), size); AArch32.MarkExclusiveVA(address, ProcessorID(), size);

Library pseudocode for aarch32/functions/float/CheckAdvSIMDEnabled

// CheckAdvSIMDEnabled() // ===================== CheckAdvSIMDEnabled() fpexc_check = TRUE; advsimd = TRUE; AArch32.CheckAdvSIMDOrFPEnabled(fpexc_check, advsimd); // Return from CheckAdvSIMDOrFPEnabled() occurs only if Advanced SIMD access is permitted // Make temporary copy of D registers // _Dclone[] is used as input data for instruction pseudocode for i = 0 to 31 _Dclone[i] = D[i]; return;

Library pseudocode for aarch32/functions/float/CheckAdvSIMDOrVFPEnabled

// CheckAdvSIMDOrVFPEnabled() // ========================== CheckAdvSIMDOrVFPEnabled(boolean include_fpexc_check, boolean advsimd) AArch32.CheckAdvSIMDOrFPEnabled(include_fpexc_check, advsimd); // Return from CheckAdvSIMDOrFPEnabled() occurs only if VFP access is permitted return;

Library pseudocode for aarch32/functions/float/CheckCryptoEnabled32

// CheckCryptoEnabled32() // ====================== CheckCryptoEnabled32() CheckAdvSIMDEnabled(); // Return from CheckAdvSIMDEnabled() occurs only if access is permitted return;

Library pseudocode for aarch32/functions/float/CheckVFPEnabled

// CheckVFPEnabled() // ================= CheckVFPEnabled(boolean include_fpexc_check) advsimd = FALSE; AArch32.CheckAdvSIMDOrFPEnabled(include_fpexc_check, advsimd); // Return from CheckAdvSIMDOrFPEnabled() occurs only if VFP access is permitted return;

Library pseudocode for aarch32/functions/float/FPHalvedSub

// FPHalvedSub() // ============= bits(N) FPHalvedSub(bits(N) op1, bits(N) op2, FPCRType fpcr) assert N IN {16,32,64}; rounding = FPRoundingMode(fpcr); (type1,sign1,value1) = FPUnpack(op1, fpcr); (type2,sign2,value2) = FPUnpack(op2, fpcr); (done,result) = FPProcessNaNs(type1, type2, op1, op2, fpcr); if !done then inf1 = (type1 == FPType_Infinity); inf2 = (type2 == FPType_Infinity); zero1 = (type1 == FPType_Zero); zero2 = (type2 == FPType_Zero); if inf1 && inf2 && sign1 == sign2 then result = FPDefaultNaN(fpcr); FPProcessException(FPExc_InvalidOp, fpcr); elsif (inf1 && sign1 == '0') || (inf2 && sign2 == '1') then result = FPInfinity('0'); elsif (inf1 && sign1 == '1') || (inf2 && sign2 == '0') then result = FPInfinity('1'); elsif zero1 && zero2 && sign1 != sign2 then result = FPZero(sign1); else result_value = (value1 - value2) / 2.0; if result_value == 0.0 then // Sign of exact zero result depends on rounding mode result_sign = if rounding == FPRounding_NEGINF then '1' else '0'; result = FPZero(result_sign); else result = FPRound(result_value, fpcr); return result;

Library pseudocode for aarch32/functions/float/FPRSqrtStep

// FPRSqrtStep() // ============= bits(N) FPRSqrtStep(bits(N) op1, bits(N) op2) assert N IN {16,32}; FPCRType fpcr = StandardFPSCRValue(); (type1,sign1,value1) = FPUnpack(op1, fpcr); (type2,sign2,value2) = FPUnpack(op2, fpcr); (done,result) = FPProcessNaNs(type1, type2, op1, op2, fpcr); if !done then inf1 = (type1 == FPType_Infinity); inf2 = (type2 == FPType_Infinity); zero1 = (type1 == FPType_Zero); zero2 = (type2 == FPType_Zero); bits(N) product; if (inf1 && zero2) || (zero1 && inf2) then product = FPZero('0'); else product = FPMul(op1, op2, fpcr); bits(N) three = FPThree('0'); result = FPHalvedSub(three, product, fpcr); return result;

Library pseudocode for aarch32/functions/float/FPRecipStep

// FPRecipStep() // ============= bits(N) FPRecipStep(bits(N) op1, bits(N) op2) assert N IN {16,32}; FPCRType fpcr = StandardFPSCRValue(); (type1,sign1,value1) = FPUnpack(op1, fpcr); (type2,sign2,value2) = FPUnpack(op2, fpcr); (done,result) = FPProcessNaNs(type1, type2, op1, op2, fpcr); if !done then inf1 = (type1 == FPType_Infinity); inf2 = (type2 == FPType_Infinity); zero1 = (type1 == FPType_Zero); zero2 = (type2 == FPType_Zero); bits(N) product; if (inf1 && zero2) || (zero1 && inf2) then product = FPZero('0'); else product = FPMul(op1, op2, fpcr); bits(N) two = FPTwo('0'); result = FPSub(two, product, fpcr); return result;

Library pseudocode for aarch32/functions/float/StandardFPSCRValue

// StandardFPSCRValue() // ==================== FPCRType StandardFPSCRValue() bits(32) upper = '00000000000000000000000000000000'; bits(32) lower = '00000' : FPSCR.AHP : '110000' : FPSCR.FZ16 : '0000000000000000000'; return upper : lower;
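Concretely, the bit-string concatenation above forces DN (bit 25) and FZ (bit 24) to 1 while copying only AHP (bit 26) and FZ16 (bit 19) from FPSCR; everything else is zero. An illustrative Python equivalent, assuming those bit positions:

def standard_fpscr_value(ahp: int, fz16: int) -> int:
    # DN and FZ forced to 1; AHP and FZ16 taken from FPSCR; all else zero.
    return (ahp << 26) | (1 << 25) | (1 << 24) | (fz16 << 19)

assert standard_fpscr_value(0, 0) == 0x03000000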

Library pseudocode for aarch32/functions/memory/AArch32.CheckAlignment

// AArch32.CheckAlignment() // ======================== boolean AArch32.CheckAlignment(bits(32) address, integer alignment, AccType acctype, boolean iswrite) bit A; if PSTATE.EL == EL0 && !ELUsingAArch32(S1TranslationRegime()) then A = SCTLR[].A; //use AArch64 register, when higher Exception level is using AArch64 elsif PSTATE.EL == EL2 then A = HSCTLR.A; else A = SCTLR.A; aligned = (address == Align(address, alignment)); atomic = acctype IN { AccType_ATOMIC, AccType_ATOMICRW, AccType_ORDEREDATOMIC, AccType_ORDEREDATOMICRW, AccType_ATOMICLS64, AccType_A32LSMD}; ordered = acctype IN { AccType_ORDERED, AccType_ORDEREDRW, AccType_LIMITEDORDERED, AccType_ORDEREDATOMIC, AccType_ORDEREDATOMICRW }; vector = acctype == AccType_VEC; // AccType_VEC is used for SIMD element alignment checks only check = (atomic || ordered || vector || A == '1'); if check && !aligned then secondstage = FALSE; AArch32.Abort(address, AlignmentFault(acctype, iswrite, secondstage)); return aligned;

Library pseudocode for aarch32/functions/memory/AArch32.MemSingle

// AArch32.MemSingle[] - non-assignment (read) form // ================================================ // Perform an atomic, little-endian read of 'size' bytes. bits(size*8) AArch32.MemSingle[bits(32) address, integer size, AccType acctype, boolean aligned] boolean ispair = FALSE; return AArch32.MemSingle[address, size, acctype, aligned, ispair]; // AArch32.MemSingle[] - non-assignment (read) form // ================================================ // Perform an atomic, little-endian read of 'size' bytes. bits(size*8) AArch32.MemSingle[bits(32) address, integer size, AccType acctype, boolean aligned, boolean ispair] assert size IN {1, 2, 4, 8, 16}; assert address == Align(address, size); AddressDescriptor memaddrdesc; bits(size*8) value; iswrite = FALSE; memaddrdesc = AArch32.TranslateAddress(address, acctype, iswrite, aligned, size); // Check for aborts or debug exceptions if IsFault(memaddrdesc) then AArch32.Abort(address, memaddrdesc.fault); // Memory array access accdesc = CreateAccessDescriptor(acctype); PhysMemRetStatus memstatus; (memstatus, value) = PhysMemRead(memaddrdesc, size, accdesc); if IsFault(memstatus) then HandleExternalReadAbort(memstatus, memaddrdesc, size, accdesc); return value; // AArch32.MemSingle[] - assignment (write) form // ============================================= AArch32.MemSingle[bits(32) address, integer size, AccType acctype, boolean aligned] = bits(size*8) value boolean ispair = FALSE; AArch32.MemSingle[address, size, acctype, aligned, ispair] = value; return; // AArch32.MemSingle[] - assignment (write) form // ============================================= // Perform an atomic, little-endian write of 'size' bytes. AArch32.MemSingle[bits(32) address, integer size, AccType acctype, boolean aligned, boolean ispair] = bits(size*8) value assert size IN {1, 2, 4, 8, 16}; assert address == Align(address, size); AddressDescriptor memaddrdesc; iswrite = TRUE; memaddrdesc = AArch32.TranslateAddress(address, acctype, iswrite, aligned, size); // Check for aborts or debug exceptions if IsFault(memaddrdesc) then AArch32.Abort(address, memaddrdesc.fault); // Effect on exclusives if memaddrdesc.memattrs.shareability != Shareability_NSH then ClearExclusiveByAddress(memaddrdesc.paddress, ProcessorID(), size); // Memory array access accdesc = CreateAccessDescriptor(acctype); PhysMemRetStatus memstatus; memstatus = PhysMemWrite(memaddrdesc, size, accdesc, value); if IsFault(memstatus) then HandleExternalWriteAbort(memstatus, memaddrdesc, size, accdesc); return;

Library pseudocode for aarch32/functions/memory/Hint_PreloadData

Hint_PreloadData(bits(32) address);

Library pseudocode for aarch32/functions/memory/Hint_PreloadDataForWrite

Hint_PreloadDataForWrite(bits(32) address);

Library pseudocode for aarch32/functions/memory/Hint_PreloadInstr

Hint_PreloadInstr(bits(32) address);

Library pseudocode for aarch32/functions/memory/MemA

// MemA[] - non-assignment form // ============================ bits(8*size) MemA[bits(32) address, integer size] acctype = AccType_ATOMIC; return Mem_with_type[address, size, acctype]; // MemA[] - assignment form // ======================== MemA[bits(32) address, integer size] = bits(8*size) value acctype = AccType_ATOMIC; Mem_with_type[address, size, acctype] = value; return;

Library pseudocode for aarch32/functions/memory/MemO

// MemO[] - non-assignment form // ============================ bits(8*size) MemO[bits(32) address, integer size] acctype = AccType_ORDERED; return Mem_with_type[address, size, acctype]; // MemO[] - assignment form // ======================== MemO[bits(32) address, integer size] = bits(8*size) value acctype = AccType_ORDERED; Mem_with_type[address, size, acctype] = value; return;

Library pseudocode for aarch32/functions/memory/MemS

// MemS[] - non-assignment form // ============================ // Memory accessor for streaming load multiple instructions bits(8*size) MemS[bits(32) address, integer size] acctype = AccType_A32LSMD; return Mem_with_type[address, size, acctype]; // MemS[] - assignment form // ======================== // Memory accessor for streaming store multiple instructions MemS[bits(32) address, integer size] = bits(8*size) value acctype = AccType_A32LSMD; Mem_with_type[address, size, acctype] = value; return;

Library pseudocode for aarch32/functions/memory/MemU

// MemU[] - non-assignment form // ============================ bits(8*size) MemU[bits(32) address, integer size] acctype = AccType_NORMAL; return Mem_with_type[address, size, acctype]; // MemU[] - assignment form // ======================== MemU[bits(32) address, integer size] = bits(8*size) value acctype = AccType_NORMAL; Mem_with_type[address, size, acctype] = value; return;

Library pseudocode for aarch32/functions/memory/MemU_unpriv

// MemU_unpriv[] - non-assignment form // =================================== bits(8*size) MemU_unpriv[bits(32) address, integer size] acctype = AccType_UNPRIV; return Mem_with_type[address, size, acctype]; // MemU_unpriv[] - assignment form // =============================== MemU_unpriv[bits(32) address, integer size] = bits(8*size) value acctype = AccType_UNPRIV; Mem_with_type[address, size, acctype] = value; return;

Library pseudocode for aarch32/functions/memory/Mem_with_type

// Mem_with_type[] - non-assignment (read) form // ============================================ // Perform a read of 'size' bytes. The access byte order is reversed for a big-endian access. // Instruction fetches would call AArch32.MemSingle directly. bits(size*8) Mem_with_type[bits(32) address, integer size, AccType acctype] boolean ispair = FALSE; return Mem_with_type[address, size, acctype, ispair]; bits(size*8) Mem_with_type[bits(32) address, integer size, AccType acctype, boolean ispair] assert size IN {1, 2, 4, 8, 16}; constant halfsize = size DIV 2; bits(size * 8) value; boolean iswrite = FALSE; boolean aligned; if ispair then // check alignment on size of element accessed, not overall access size aligned = AArch32.CheckAlignment(address, halfsize, acctype, iswrite); else aligned = AArch32.CheckAlignment(address, size, acctype, iswrite); if !aligned then assert size > 1; value<7:0> = AArch32.MemSingle[address, 1, acctype, aligned]; // For subsequent bytes it is CONSTRAINED UNPREDICTABLE whether an unaligned Device memory // access will generate an Alignment Fault, as to get this far means the first byte did // not, so we must be changing to a new translation page. c = ConstrainUnpredictable(Unpredictable_DEVPAGE2); assert c IN {Constraint_FAULT, Constraint_NONE}; if c == Constraint_NONE then aligned = TRUE; for i = 1 to size-1 value<8*i+7:8*i> = AArch32.MemSingle[address+i, 1, acctype, aligned]; else value = AArch32.MemSingle[address, size, acctype, aligned, ispair]; if BigEndian(acctype) then value = BigEndianReverse(value); return value; // Mem_with_type[] - assignment (write) form // ========================================= // Perform a write of 'size' bytes. The byte order is reversed for a big-endian access. Mem_with_type[bits(32) address, integer size, AccType acctype] = bits(size*8) value_in boolean ispair = FALSE; Mem_with_type[address, size, acctype, ispair] = value_in; Mem_with_type[bits(32) address, integer size, AccType acctype, boolean ispair] = bits(size*8) value_in boolean iswrite = TRUE; constant halfsize = size DIV 2; bits(size*8) value = value_in; boolean aligned; if BigEndian(acctype) then value = BigEndianReverse(value); if ispair then // check alignment on size of element accessed, not overall access size aligned = AArch32.CheckAlignment(address, halfsize, acctype, iswrite); else aligned = AArch32.CheckAlignment(address, size, acctype, iswrite); if !aligned then assert size > 1; AArch32.MemSingle[address, 1, acctype, aligned] = value<7:0>; // For subsequent bytes it is CONSTRAINED UNPREDICTABLE whether an unaligned Device memory // access will generate an Alignment Fault, as to get this far means the first byte did // not, so we must be changing to a new translation page. c = ConstrainUnpredictable(Unpredictable_DEVPAGE2); assert c IN {Constraint_FAULT, Constraint_NONE}; if c == Constraint_NONE then aligned = TRUE; for i = 1 to size-1 AArch32.MemSingle[address+i, 1, acctype, aligned] = value<8*i+7:8*i>; else AArch32.MemSingle[address, size, acctype, aligned, ispair] = value; return;
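The only data transformation Mem_with_type applies itself is the byte reversal for big-endian accesses. A minimal Python model of that BigEndianReverse step (an illustrative sketch; the architecture's own definition lives elsewhere in the shared pseudocode):

def big_endian_reverse(value: int, size: int) -> int:
    # Reinterpret the little-endian byte image of 'value' as big-endian.
    return int.from_bytes(value.to_bytes(size, 'little'), 'big')

assert big_endian_reverse(0x11223344, 4) == 0x44332211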

Library pseudocode for aarch32/functions/ras/AArch32.ESBOperation

// AArch32.ESBOperation() // ====================== // Perform the AArch32 ESB operation for ESB executed in AArch32 state AArch32.ESBOperation() // Check if routed to AArch64 state route_to_aarch64 = PSTATE.EL == EL0 && !ELUsingAArch32(EL1); if !route_to_aarch64 && EL2Enabled() && !ELUsingAArch32(EL2) then route_to_aarch64 = HCR_EL2.TGE == '1' || HCR_EL2.AMO == '1'; if !route_to_aarch64 && HaveEL(EL3) && !ELUsingAArch32(EL3) then route_to_aarch64 = SCR_EL3.EA == '1'; if route_to_aarch64 then AArch64.ESBOperation(); return; route_to_monitor = HaveEL(EL3) && ELUsingAArch32(EL3) && SCR.EA == '1'; route_to_hyp = PSTATE.EL IN {EL0, EL1} && EL2Enabled() && (HCR.TGE == '1' || HCR.AMO == '1'); bits(5) target; if route_to_monitor then target = M32_Monitor; elsif route_to_hyp || PSTATE.M == M32_Hyp then target = M32_Hyp; else target = M32_Abort; boolean mask_active; if CurrentSecurityState() == SS_Secure then mask_active = TRUE; elsif target == M32_Monitor then mask_active = SCR.AW == '1' && (!HaveEL(EL2) || (HCR.TGE == '0' && HCR.AMO == '0')); else mask_active = target == M32_Abort || PSTATE.M == M32_Hyp; mask_set = PSTATE.A == '1'; (-, el) = ELFromM32(target); intdis = Halted() || ExternalDebugInterruptsDisabled(el); masked = intdis || (mask_active && mask_set); // Check for a masked Physical SError pending that can be synchronized // by an Error synchronization event. if masked && IsSynchronizablePhysicalSErrorPending() then syndrome32 = AArch32.PhysicalSErrorSyndrome(); DISR = AArch32.ReportDeferredSError(syndrome32.AET, syndrome32.ExT); ClearPendingPhysicalSError(); return;

Library pseudocode for aarch32/functions/ras/AArch32.PhysicalSErrorSyndrome

// Return the SError syndrome AArch32.SErrorSyndrome AArch32.PhysicalSErrorSyndrome();

Library pseudocode for aarch32/functions/ras/AArch32.ReportDeferredSError

// AArch32.ReportDeferredSError() // ============================== // Return deferred SError syndrome bits(32) AArch32.ReportDeferredSError(bits(2) AET, bit ExT) bits(32) target; target<31> = '1'; // A syndrome = Zeros(16); if PSTATE.EL == EL2 then syndrome<11:10> = AET; // AET syndrome<9> = ExT; // EA syndrome<5:0> = '010001'; // DFSC else syndrome<15:14> = AET; // AET syndrome<12> = ExT; // ExT syndrome<9> = TTBCR.EAE; // LPAE if TTBCR.EAE == '1' then // Long-descriptor format syndrome<5:0> = '010001'; // STATUS else // Short-descriptor format syndrome<10,3:0> = '10110'; // FS if HaveAArch64() then target<24:0> = ZeroExtend(syndrome);// Any RES0 fields must be set to zero else target<15:0> = syndrome; return target;

Library pseudocode for aarch32/functions/ras/AArch32.SErrorSyndrome

type AArch32.SErrorSyndrome is ( bits(2) AET, bit ExT )

Library pseudocode for aarch32/functions/ras/AArch32.vESBOperation

// AArch32.vESBOperation() // ======================= // Perform the ESB operation for virtual SError interrupts executed in AArch32 state AArch32.vESBOperation() assert PSTATE.EL IN {EL0, EL1} && EL2Enabled(); // Check for EL2 using AArch64 state if !ELUsingAArch32(EL2) then AArch64.vESBOperation(); return; // If physical SError interrupts are routed to Hyp mode, and TGE is not set, then a // virtual SError interrupt might be pending vSEI_enabled = HCR.TGE == '0' && HCR.AMO == '1'; vSEI_pending = vSEI_enabled && HCR.VA == '1'; vintdis = Halted() || ExternalDebugInterruptsDisabled(EL1); vmasked = vintdis || PSTATE.A == '1'; // Check for a masked virtual SError pending if vSEI_pending && vmasked then VDISR = AArch32.ReportDeferredSError(VDFSR<15:14>, VDFSR<12>); HCR.VA = '0'; // Clear pending virtual SError return;

Library pseudocode for aarch32/functions/registers/AArch32.ResetGeneralRegisters

// AArch32.ResetGeneralRegisters() // =============================== AArch32.ResetGeneralRegisters() for i = 0 to 7 R[i] = bits(32) UNKNOWN; for i = 8 to 12 Rmode[i, M32_User] = bits(32) UNKNOWN; Rmode[i, M32_FIQ] = bits(32) UNKNOWN; if HaveEL(EL2) then Rmode[13, M32_Hyp] = bits(32) UNKNOWN; // No R14_hyp for i = 13 to 14 Rmode[i, M32_User] = bits(32) UNKNOWN; Rmode[i, M32_FIQ] = bits(32) UNKNOWN; Rmode[i, M32_IRQ] = bits(32) UNKNOWN; Rmode[i, M32_Svc] = bits(32) UNKNOWN; Rmode[i, M32_Abort] = bits(32) UNKNOWN; Rmode[i, M32_Undef] = bits(32) UNKNOWN; if HaveEL(EL3) then Rmode[i, M32_Monitor] = bits(32) UNKNOWN; return;

Library pseudocode for aarch32/functions/registers/AArch32.ResetSIMDFPRegisters

// AArch32.ResetSIMDFPRegisters() // ============================== AArch32.ResetSIMDFPRegisters() for i = 0 to 15 Q[i] = bits(128) UNKNOWN; return;

Library pseudocode for aarch32/functions/registers/AArch32.ResetSpecialRegisters

// AArch32.ResetSpecialRegisters() // =============================== AArch32.ResetSpecialRegisters() // AArch32 special registers SPSR_fiq<31:0> = bits(32) UNKNOWN; SPSR_irq<31:0> = bits(32) UNKNOWN; SPSR_svc<31:0> = bits(32) UNKNOWN; SPSR_abt<31:0> = bits(32) UNKNOWN; SPSR_und<31:0> = bits(32) UNKNOWN; if HaveEL(EL2) then SPSR_hyp = bits(32) UNKNOWN; ELR_hyp = bits(32) UNKNOWN; if HaveEL(EL3) then SPSR_mon = bits(32) UNKNOWN; // External debug special registers DLR = bits(32) UNKNOWN; DSPSR = bits(32) UNKNOWN; return;

Library pseudocode for aarch32/functions/registers/AArch32.ResetSystemRegisters

AArch32.ResetSystemRegisters(boolean cold_reset);

Library pseudocode for aarch32/functions/registers/ALUExceptionReturn

// ALUExceptionReturn() // ==================== ALUExceptionReturn(bits(32) address) if PSTATE.EL == EL2 then UNDEFINED; elsif PSTATE.M IN {M32_User,M32_System} then Constraint c = ConstrainUnpredictable(Unpredictable_ALUEXCEPTIONRETURN); assert c IN {Constraint_UNDEF, Constraint_NOP}; case c of when Constraint_UNDEF UNDEFINED; when Constraint_NOP EndOfInstruction(); else AArch32.ExceptionReturn(address, SPSR[]);

Library pseudocode for aarch32/functions/registers/ALUWritePC

// ALUWritePC() // ============ ALUWritePC(bits(32) address) if CurrentInstrSet() == InstrSet_A32 then BXWritePC(address, BranchType_INDIR); else BranchWritePC(address, BranchType_INDIR);

Library pseudocode for aarch32/functions/registers/BXWritePC

// BXWritePC() // =========== BXWritePC(bits(32) address_in, BranchType branch_type) bits(32) address = address_in; if address<0> == '1' then SelectInstrSet(InstrSet_T32); address<0> = '0'; else SelectInstrSet(InstrSet_A32); // For branches to an unaligned PC counter in A32 state, the processor takes the branch // and does one of: // * Forces the address to be aligned // * Leaves the PC unaligned, meaning the target generates a PC Alignment fault. if address<1> == '1' && ConstrainUnpredictableBool(Unpredictable_A32FORCEALIGNPC) then address<1> = '0'; boolean branch_conditional = !(AArch32.CurrentCond() IN {'111x'}); BranchTo(address, branch_type, branch_conditional);
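The interworking rule is compact enough to state directly: bit 0 of the target selects the instruction set and is then cleared. An illustrative sketch (the CONSTRAINED UNPREDICTABLE handling of an unaligned A32 target is left out):

def bx_write_pc(address: int) -> tuple:
    # Bit 0 selects T32 and is consumed; otherwise the target is A32.
    if address & 1:
        return address & ~1, 'T32'
    return address, 'A32'

assert bx_write_pc(0x8001) == (0x8000, 'T32')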

Library pseudocode for aarch32/functions/registers/BranchWritePC

// BranchWritePC() // =============== BranchWritePC(bits(32) address_in, BranchType branch_type) bits(32) address = address_in; if CurrentInstrSet() == InstrSet_A32 then address<1:0> = '00'; else address<0> = '0'; boolean branch_conditional = !(AArch32.CurrentCond() IN {'111x'}); BranchTo(address, branch_type, branch_conditional);

Library pseudocode for aarch32/functions/registers/CBWritePC

// CBWritePC() // =========== // Takes a branch from a CBNZ/CBZ instruction. CBWritePC(bits(32) address_in) bits(32) address = address_in; assert CurrentInstrSet() == InstrSet_T32; address<0> = '0'; boolean branch_conditional = TRUE; BranchTo(address, BranchType_DIR, branch_conditional);

Library pseudocode for aarch32/functions/registers/D

// D[] - non-assignment form // ========================= bits(64) D[integer n] assert n >= 0 && n <= 31; base = (n MOD 2) * 64; bits(128) vreg = V[n DIV 2]; return vreg<base+63:base>; // D[] - assignment form // ===================== D[integer n] = bits(64) value assert n >= 0 && n <= 31; base = (n MOD 2) * 64; bits(128) vreg = V[n DIV 2]; vreg<base+63:base> = value; V[n DIV 2] = vreg; return;

Library pseudocode for aarch32/functions/registers/Din

// Din[] - non-assignment form // =========================== bits(64) Din[integer n] assert n >= 0 && n <= 31; return _Dclone[n];

Library pseudocode for aarch32/functions/registers/LR

// LR - assignment form // ==================== LR = bits(32) value R[14] = value; return; // LR - non-assignment form // ======================== bits(32) LR return R[14];

Library pseudocode for aarch32/functions/registers/LoadWritePC

// LoadWritePC() // ============= LoadWritePC(bits(32) address) BXWritePC(address, BranchType_INDIR);

Library pseudocode for aarch32/functions/registers/LookUpRIndex

// LookUpRIndex() // ============== integer LookUpRIndex(integer n, bits(5) mode) assert n >= 0 && n <= 14; integer result; case n of // Select index by mode: usr fiq irq svc abt und hyp when 8 result = RBankSelect(mode, 8, 24, 8, 8, 8, 8, 8); when 9 result = RBankSelect(mode, 9, 25, 9, 9, 9, 9, 9); when 10 result = RBankSelect(mode, 10, 26, 10, 10, 10, 10, 10); when 11 result = RBankSelect(mode, 11, 27, 11, 11, 11, 11, 11); when 12 result = RBankSelect(mode, 12, 28, 12, 12, 12, 12, 12); when 13 result = RBankSelect(mode, 13, 29, 17, 19, 21, 23, 15); when 14 result = RBankSelect(mode, 14, 30, 16, 18, 20, 22, 14); otherwise result = n; return result;

Library pseudocode for aarch32/functions/registers/Monitor_mode_registers

bits(32) SP_mon; bits(32) LR_mon;

Library pseudocode for aarch32/functions/registers/PC

// PC - non-assignment form // ======================== bits(32) PC return R[15]; // This includes the offset from AArch32 state

Library pseudocode for aarch32/functions/registers/PCStoreValue

// PCStoreValue() // ============== bits(32) PCStoreValue() // This function returns the PC value. On architecture versions before Armv7, it // is permitted to instead return PC+4, provided it does so consistently. It is // used only to describe A32 instructions, so it returns the address of the current // instruction plus 8 (normally) or 12 (when the alternative is permitted). return PC;

Library pseudocode for aarch32/functions/registers/Q

// Q[] - non-assignment form // ========================= bits(128) Q[integer n] assert n >= 0 && n <= 15; return V[n]; // Q[] - assignment form // ===================== Q[integer n] = bits(128) value assert n >= 0 && n <= 15; V[n] = value; return;

Library pseudocode for aarch32/functions/registers/Qin

// Qin[] - non-assignment form // =========================== bits(128) Qin[integer n] assert n >= 0 && n <= 15; return Din[2*n+1]:Din[2*n];

Library pseudocode for aarch32/functions/registers/R

// R[] - assignment form // ===================== R[integer n] = bits(32) value Rmode[n, PSTATE.M] = value; return; // R[] - non-assignment form // ========================= bits(32) R[integer n] if n == 15 then offset = (if CurrentInstrSet() == InstrSet_A32 then 8 else 4); return _PC<31:0> + offset; else return Rmode[n, PSTATE.M];
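
A tiny illustrative model (my naming) of the PC read-offset rule in the non-assignment form above, where R[15] reads as the instruction address plus 8 in A32 state and plus 4 in T32 state:

def read_r15(instr_address, instr_set):
    offset = 8 if instr_set == "A32" else 4      # A32 reads PC+8, T32 reads PC+4
    return (instr_address + offset) & 0xFFFFFFFF

assert read_r15(0x1000, "A32") == 0x1008
assert read_r15(0x1000, "T32") == 0x1004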

Library pseudocode for aarch32/functions/registers/RBankSelect

// RBankSelect() // ============= integer RBankSelect(bits(5) mode, integer usr, integer fiq, integer irq, integer svc, integer abt, integer und, integer hyp) integer result; case mode of when M32_User result = usr; // User mode when M32_FIQ result = fiq; // FIQ mode when M32_IRQ result = irq; // IRQ mode when M32_Svc result = svc; // Supervisor mode when M32_Abort result = abt; // Abort mode when M32_Hyp result = hyp; // Hyp mode when M32_Undef result = und; // Undefined mode when M32_System result = usr; // System mode uses User mode registers otherwise Unreachable(); // Monitor mode return result;
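
The banking table in LookUpRIndex()/RBankSelect() can be restated as a plain lookup. The following Python sketch (data layout assumed from the two functions above) shows, for example, that R13 in FIQ mode selects physical index 29 (SP_fiq):

BANK = {  # n: (usr, fiq, irq, svc, abt, und, hyp), per LookUpRIndex() above
    8:  (8, 24, 8, 8, 8, 8, 8),
    9:  (9, 25, 9, 9, 9, 9, 9),
    10: (10, 26, 10, 10, 10, 10, 10),
    11: (11, 27, 11, 11, 11, 11, 11),
    12: (12, 28, 12, 12, 12, 12, 12),
    13: (13, 29, 17, 19, 21, 23, 15),
    14: (14, 30, 16, 18, 20, 22, 14),
}
MODE_COLUMN = {"usr": 0, "fiq": 1, "irq": 2, "svc": 3,
               "abt": 4, "und": 5, "hyp": 6, "sys": 0}  # System uses the User bank

def lookup_r_index(n, mode):
    if n in BANK:
        return BANK[n][MODE_COLUMN[mode]]
    return n  # R0-R7 are not banked across these modes

assert lookup_r_index(13, "fiq") == 29   # SP_fiq
assert lookup_r_index(14, "svc") == 18   # LR_svc
assert lookup_r_index(5, "irq") == 5     # unbanked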

Library pseudocode for aarch32/functions/registers/Rmode

// Rmode[] - non-assignment form // ============================= bits(32) Rmode[integer n, bits(5) mode] assert n >= 0 && n <= 14; // Check for attempted use of Monitor mode in Non-secure state. if CurrentSecurityState() != SS_Secure then assert mode != M32_Monitor; assert !BadMode(mode); if mode == M32_Monitor then if n == 13 then return SP_mon; elsif n == 14 then return LR_mon; else return _R[n]<31:0>; else return _R[LookUpRIndex(n, mode)]<31:0>; // Rmode[] - assignment form // ========================= Rmode[integer n, bits(5) mode] = bits(32) value assert n >= 0 && n <= 14; // Check for attempted use of Monitor mode in Non-secure state. if CurrentSecurityState() != SS_Secure then assert mode != M32_Monitor; assert !BadMode(mode); if mode == M32_Monitor then if n == 13 then SP_mon = value; elsif n == 14 then LR_mon = value; else _R[n]<31:0> = value; else // It is CONSTRAINED UNPREDICTABLE whether the upper 32 bits of the X // register are unchanged or set to zero. This is also tested for on // exception entry, as this applies to all AArch32 registers. if HaveAArch64() && ConstrainUnpredictableBool(Unpredictable_ZEROUPPER) then _R[LookUpRIndex(n, mode)] = ZeroExtend(value); else _R[LookUpRIndex(n, mode)]<31:0> = value; return;

Library pseudocode for aarch32/functions/registers/S

// S[] - non-assignment form // ========================= bits(32) S[integer n] assert n >= 0 && n <= 31; base = (n MOD 4) * 32; bits(128) vreg = V[n DIV 4]; return vreg<base+31:base>; // S[] - assignment form // ===================== S[integer n] = bits(32) value assert n >= 0 && n <= 31; base = (n MOD 4) * 32; bits(128) vreg = V[n DIV 4]; vreg<base+31:base> = value; V[n DIV 4] = vreg; return;
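
As an informal illustration of how the S, D and Q views alias the same 128-bit V registers, here is a minimal Python sketch (storage model assumed, not from the specification): S[n] is 32 bits of V[n DIV 4], D[n] is 64 bits of V[n DIV 2], and Q[n] is V[n] itself.

V = [0] * 16  # sixteen 128-bit registers modelled as Python ints

def read_s(n):             # 32-bit view into V[n // 4]
    base = (n % 4) * 32
    return (V[n // 4] >> base) & 0xFFFFFFFF

def write_d(n, value):     # 64-bit view into V[n // 2]
    base = (n % 2) * 64
    mask = 0xFFFFFFFFFFFFFFFF << base
    V[n // 2] = (V[n // 2] & ~mask) | ((value & 0xFFFFFFFFFFFFFFFF) << base)

write_d(1, 0x1122334455667788)        # D1 is the top half of Q0
assert read_s(2) == 0x55667788        # S2 is bits<95:64> of Q0
assert read_s(3) == 0x11223344        # S3 is bits<127:96> of Q0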

Library pseudocode for aarch32/functions/registers/SP

// SP - assignment form // ==================== SP = bits(32) value R[13] = value; return; // SP - non-assignment form // ======================== bits(32) SP return R[13];

Library pseudocode for aarch32/functions/registers/_Dclone

array bits(64) _Dclone[0..31];

Library pseudocode for aarch32/functions/system/AArch32.ExceptionReturn

// AArch32.ExceptionReturn() // ========================= AArch32.ExceptionReturn(bits(32) new_pc_in, bits(32) spsr) bits(32) new_pc = new_pc_in; SynchronizeContext(); // Attempts to change to an illegal mode or state will invoke the Illegal Execution state // mechanism SetPSTATEFromPSR(spsr); ClearExclusiveLocal(ProcessorID()); SendEventLocal(); if PSTATE.IL == '1' then // If the exception return is illegal, PC[1:0] are UNKNOWN new_pc<1:0> = bits(2) UNKNOWN; else // LR[1:0] or LR[0] are treated as being 0, depending on the target instruction set state if PSTATE.T == '1' then new_pc<0> = '0'; // T32 else new_pc<1:0> = '00'; // A32 boolean branch_conditional = !(AArch32.CurrentCond() IN {'111x'}); BranchTo(new_pc, BranchType_ERET, branch_conditional); CheckExceptionCatch(FALSE); // Check for debug event on exception return

Library pseudocode for aarch32/functions/system/AArch32.ExecutingCP10or11Instr

// AArch32.ExecutingCP10or11Instr() // ================================ boolean AArch32.ExecutingCP10or11Instr() instr = ThisInstr(); instr_set = CurrentInstrSet(); assert instr_set IN {InstrSet_A32, InstrSet_T32}; if instr_set == InstrSet_A32 then return ((instr<27:24> == '1110' || instr<27:25> == '110') && instr<11:8> IN {'101x'}); else // InstrSet_T32 return (instr<31:28> IN {'111x'} && (instr<27:24> == '1110' || instr<27:25> == '110') && instr<11:8> IN {'101x'});

Library pseudocode for aarch32/functions/system/AArch32.ITAdvance

// AArch32.ITAdvance() // =================== AArch32.ITAdvance() if PSTATE.IT<2:0> == '000' then PSTATE.IT = '00000000'; else PSTATE.IT<4:0> = LSL(PSTATE.IT<4:0>, 1); return;
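
A short illustrative model (not ARM's code) of the IT-state advance above: the low five bits of PSTATE.IT shift left one place per instruction, and the state clears once IT<2:0> is exhausted. Note how the final state before clearing has IT<3:0> == '1000', matching LastInITBlock() further below.

def it_advance(it):
    if (it & 0b111) == 0:            # IT<2:0> exhausted: leave the IT block
        return 0
    return (it & 0b11100000) | ((it << 1) & 0b00011111)

it = 0b10001100                      # an IT block in progress
states = [it]
while it != 0:
    it = it_advance(it)
    states.append(it)
# states == [0b10001100, 0b10011000, 0]; 0b10011000 has IT<3:0> == '1000'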

Library pseudocode for aarch32/functions/system/AArch32.SysRegRead

// Read from a 32-bit AArch32 System register and write the register's contents to R[t]. AArch32.SysRegRead(integer cp_num, bits(32) instr, integer t);

Library pseudocode for aarch32/functions/system/AArch32.SysRegRead64

// Read from a 64-bit AArch32 System register and write the register's contents to R[t] and R[t2]. AArch32.SysRegRead64(integer cp_num, bits(32) instr, integer t, integer t2);

Library pseudocode for aarch32/functions/system/AArch32.SysRegReadCanWriteAPSR

// AArch32.SysRegReadCanWriteAPSR() // ================================ // Determines whether the AArch32 System register read instruction can write to APSR flags. boolean AArch32.SysRegReadCanWriteAPSR(integer cp_num, bits(32) instr) assert UsingAArch32(); assert (cp_num IN {14,15}); assert cp_num == UInt(instr<11:8>); opc1 = UInt(instr<23:21>); opc2 = UInt(instr<7:5>); CRn = UInt(instr<19:16>); CRm = UInt(instr<3:0>); if cp_num == 14 && opc1 == 0 && CRn == 0 && CRm == 1 && opc2 == 0 then // DBGDSCRint return TRUE; return FALSE;
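
For illustration, a hypothetical Python helper that performs the same field extraction as above and recognises the DBGDSCRint case (bit positions follow the UInt(instr<...>) slices; the test encoding is built from the fields alone, with all other bits left zero):

def is_dbgdscrint(instr):
    cp_num = (instr >> 8) & 0xF
    opc1 = (instr >> 21) & 0x7
    opc2 = (instr >> 5) & 0x7
    crn = (instr >> 16) & 0xF
    crm = instr & 0xF
    return cp_num == 14 and opc1 == 0 and crn == 0 and crm == 1 and opc2 == 0

assert is_dbgdscrint((14 << 8) | 1)   # cp14, opc1=0, CRn=0, CRm=1, opc2=0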

Library pseudocode for aarch32/functions/system/AArch32.SysRegWrite

// Read the contents of R[t] and write to a 32-bit AArch32 System register. AArch32.SysRegWrite(integer cp_num, bits(32) instr, integer t);

Library pseudocode for aarch32/functions/system/AArch32.SysRegWrite64

// Read the contents of R[t] and R[t2] and write to a 64-bit AArch32 System register. AArch32.SysRegWrite64(integer cp_num, bits(32) instr, integer t, integer t2);

Library pseudocode for aarch32/functions/system/AArch32.SysRegWriteM

// Read a value from a virtual address and write it to an AArch32 System register. AArch32.SysRegWriteM(integer cp_num, bits(32) instr, bits(32) address);

Library pseudocode for aarch32/functions/system/AArch32.WriteMode

// AArch32.WriteMode() // =================== // Function for dealing with writes to PSTATE.M from AArch32 state only. // This ensures that PSTATE.EL and PSTATE.SP are always valid. AArch32.WriteMode(bits(5) mode) (valid,el) = ELFromM32(mode); assert valid; PSTATE.M = mode; PSTATE.EL = el; PSTATE.nRW = '1'; PSTATE.SP = (if mode IN {M32_User,M32_System} then '0' else '1'); return;

Library pseudocode for aarch32/functions/system/AArch32.WriteModeByInstr

// AArch32.WriteModeByInstr() // ========================== // Function for dealing with writes to PSTATE.M from an AArch32 instruction, and ensuring that // illegal state changes are correctly flagged in PSTATE.IL. AArch32.WriteModeByInstr(bits(5) mode) (valid,el) = ELFromM32(mode); // 'valid' is set to FALSE if 'mode' is invalid for this implementation or the current value // of SCR.NS/SCR_EL3.NS. Additionally, it is illegal for an instruction to write 'mode' to // PSTATE.EL if it would result in any of: // * A change to a mode that would cause entry to a higher Exception level. if UInt(el) > UInt(PSTATE.EL) then valid = FALSE; // * A change to or from Hyp mode. if (PSTATE.M == M32_Hyp || mode == M32_Hyp) && PSTATE.M != mode then valid = FALSE; // * When EL2 is implemented, the value of HCR.TGE is '1', a change to a Non-secure EL1 mode. if PSTATE.M == M32_Monitor && HaveEL(EL2) && el == EL1 && SCR.NS == '1' && HCR.TGE == '1' then valid = FALSE; if !valid then PSTATE.IL = '1'; else AArch32.WriteMode(mode);

Library pseudocode for aarch32/functions/system/BadMode

// BadMode() // ========= boolean BadMode(bits(5) mode) // Return TRUE if 'mode' encodes a mode that is not valid for this implementation boolean valid; case mode of when M32_Monitor valid = HaveAArch32EL(EL3); when M32_Hyp valid = HaveAArch32EL(EL2); when M32_FIQ, M32_IRQ, M32_Svc, M32_Abort, M32_Undef, M32_System // If EL3 is implemented and using AArch32, then these modes are EL3 modes in Secure // state, and EL1 modes in Non-secure state. If EL3 is not implemented or is using // AArch64, then these modes are EL1 modes. // Therefore it is sufficient to test this implementation supports EL1 using AArch32. valid = HaveAArch32EL(EL1); when M32_User valid = HaveAArch32EL(EL0); otherwise valid = FALSE; // Passed an illegal mode value return !valid;

Library pseudocode for aarch32/functions/system/BankedRegisterAccessValid

// BankedRegisterAccessValid() // =========================== // Checks for MRS (Banked register) or MSR (Banked register) accesses to registers // other than the SPSRs that are invalid. This includes ELR_hyp accesses. BankedRegisterAccessValid(bits(5) SYSm, bits(5) mode) case SYSm of when '000xx', '00100' // R8_usr to R12_usr if mode != M32_FIQ then UNPREDICTABLE; when '00101' // SP_usr if mode == M32_System then UNPREDICTABLE; when '00110' // LR_usr if mode IN {M32_Hyp,M32_System} then UNPREDICTABLE; when '010xx', '0110x', '01110' // R8_fiq to R12_fiq, SP_fiq, LR_fiq if mode == M32_FIQ then UNPREDICTABLE; when '1000x' // LR_irq, SP_irq if mode == M32_IRQ then UNPREDICTABLE; when '1001x' // LR_svc, SP_svc if mode == M32_Svc then UNPREDICTABLE; when '1010x' // LR_abt, SP_abt if mode == M32_Abort then UNPREDICTABLE; when '1011x' // LR_und, SP_und if mode == M32_Undef then UNPREDICTABLE; when '1110x' // LR_mon, SP_mon if (!HaveEL(EL3) || CurrentSecurityState() != SS_Secure || mode == M32_Monitor) then UNPREDICTABLE; when '11110' // ELR_hyp, only from Monitor or Hyp mode if !HaveEL(EL2) || !(mode IN {M32_Monitor,M32_Hyp}) then UNPREDICTABLE; when '11111' // SP_hyp, only from Monitor mode if !HaveEL(EL2) || mode != M32_Monitor then UNPREDICTABLE; otherwise UNPREDICTABLE; return;

Library pseudocode for aarch32/functions/system/CPSRWriteByInstr

// CPSRWriteByInstr() // ================== // Update PSTATE.<N,Z,C,V,Q,GE,E,A,I,F,M> from a CPSR value written by an MSR instruction. CPSRWriteByInstr(bits(32) value, bits(4) bytemask) privileged = PSTATE.EL != EL0; // PSTATE.<A,I,F,M> are not writable at EL0 // Write PSTATE from 'value', ignoring bytes masked by 'bytemask' if bytemask<3> == '1' then PSTATE.<N,Z,C,V,Q> = value<31:27>; // Bits <26:24> are ignored if bytemask<2> == '1' then if HaveSSBSExt() then PSTATE.SSBS = value<23>; if privileged then PSTATE.PAN = value<22>; if HaveDITExt() then PSTATE.DIT = value<21>; // Bit <20> is RES0 PSTATE.GE = value<19:16>; if bytemask<1> == '1' then // Bits <15:10> are RES0 PSTATE.E = value<9>; // PSTATE.E is writable at EL0 if privileged then PSTATE.A = value<8>; if bytemask<0> == '1' then if privileged then PSTATE.<I,F> = value<7:6>; // Bit <5> is RES0 // AArch32.WriteModeByInstr() sets PSTATE.IL to 1 if this is an illegal mode change. AArch32.WriteModeByInstr(value<4:0>); return;
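
The bytemask gating above can be summarised as follows. This Python sketch (illustrative only; it ignores the extension checks for SSBS and DIT and assumes, per the header comment, that PSTATE.<A,I,F,M> are not writable at EL0) returns which fields an MSR write would touch:

def cpsr_fields_written(bytemask, privileged):
    fields = []
    if bytemask & 0b1000:
        fields += ["N", "Z", "C", "V", "Q"]          # value<31:27>
    if bytemask & 0b0100:
        fields += ["SSBS", "DIT", "GE"]              # SSBS/DIT only if implemented
        if privileged: fields += ["PAN"]
    if bytemask & 0b0010:
        fields += ["E"]                              # E is writable at EL0
        if privileged: fields += ["A"]
    if bytemask & 0b0001:
        if privileged: fields += ["I", "F", "M"]
    return fields

assert "A" not in cpsr_fields_written(0b1111, privileged=False)
assert "E" in cpsr_fields_written(0b0010, privileged=False)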

Library pseudocode for aarch32/functions/system/ConditionPassed

// ConditionPassed() // ================= boolean ConditionPassed() return ConditionHolds(AArch32.CurrentCond());

Library pseudocode for aarch32/functions/system/CurrentCond

bits(4) AArch32.CurrentCond();

Library pseudocode for aarch32/functions/system/InITBlock

// InITBlock() // =========== boolean InITBlock() if CurrentInstrSet() == InstrSet_T32 then return PSTATE.IT<3:0> != '0000'; else return FALSE;

Library pseudocode for aarch32/functions/system/LastInITBlock

// LastInITBlock() // =============== boolean LastInITBlock() return (PSTATE.IT<3:0> == '1000');

Library pseudocode for aarch32/functions/system/SPSRWriteByInstr

// SPSRWriteByInstr() // ================== SPSRWriteByInstr(bits(32) value, bits(4) bytemask) bits(32) new_spsr = SPSR[]; if bytemask<3> == '1' then new_spsr<31:24> = value<31:24>; // N,Z,C,V,Q flags, IT[1:0],J bits if bytemask<2> == '1' then new_spsr<23:16> = value<23:16>; // IL bit, GE[3:0] flags if bytemask<1> == '1' then new_spsr<15:8> = value<15:8>; // IT[7:2] bits, E bit, A interrupt mask if bytemask<0> == '1' then new_spsr<7:0> = value<7:0>; // I,F interrupt masks, T bit, Mode bits SPSR[] = new_spsr; // UNPREDICTABLE if User or System mode return;

Library pseudocode for aarch32/functions/system/SPSRaccessValid

// SPSRaccessValid() // ================= // Checks for MRS (Banked register) or MSR (Banked register) accesses to the SPSRs // that are UNPREDICTABLE SPSRaccessValid(bits(5) SYSm, bits(5) mode) case SYSm of when '01110' // SPSR_fiq if mode == M32_FIQ then UNPREDICTABLE; when '10000' // SPSR_irq if mode == M32_IRQ then UNPREDICTABLE; when '10010' // SPSR_svc if mode == M32_Svc then UNPREDICTABLE; when '10100' // SPSR_abt if mode == M32_Abort then UNPREDICTABLE; when '10110' // SPSR_und if mode == M32_Undef then UNPREDICTABLE; when '11100' // SPSR_mon if (!HaveEL(EL3) || mode == M32_Monitor || CurrentSecurityState() != SS_Secure) then UNPREDICTABLE; when '11110' // SPSR_hyp if !HaveEL(EL2) || mode != M32_Monitor then UNPREDICTABLE; otherwise UNPREDICTABLE; return;

Library pseudocode for aarch32/functions/system/SelectInstrSet

// SelectInstrSet() // ================ SelectInstrSet(InstrSet iset) assert CurrentInstrSet() IN {InstrSet_A32, InstrSet_T32}; assert iset IN {InstrSet_A32, InstrSet_T32}; PSTATE.T = if iset == InstrSet_A32 then '0' else '1'; return;

Library pseudocode for aarch32/functions/v6simd/Sat

// Sat() // ===== bits(N) Sat(integer i, integer N, boolean unsigned) result = if unsigned then UnsignedSat(i, N) else SignedSat(i, N); return result;

Library pseudocode for aarch32/functions/v6simd/SignedSat

// SignedSat() // =========== bits(N) SignedSat(integer i, integer N) (result, -) = SignedSatQ(i, N); return result;

Library pseudocode for aarch32/functions/v6simd/UnsignedSat

// UnsignedSat() // ============= bits(N) UnsignedSat(integer i, integer N) (result, -) = UnsignedSatQ(i, N); return result;

Library pseudocode for aarch32/ic/AArch32.IC

// AArch32.IC() // ============ // Perform Instruction Cache Operation. AArch32.IC(CacheOpScope opscope) regval = bits(32) UNKNOWN; AArch32.IC(regval, opscope); // AArch32.IC() // ============ // Perform Instruction Cache Operation. AArch32.IC(bits(32) regval, CacheOpScope opscope) CacheRecord cache; AccType acctype = AccType_IC; cache.acctype = acctype; cache.cachetype = CacheType_Instruction; cache.cacheop = CacheOp_Invalidate; cache.opscope = opscope; cache.security = SecurityStateAtEL(PSTATE.EL); if opscope IN {CacheOpScope_ALLU, CacheOpScope_ALLUIS} then if opscope == CacheOpScope_ALLUIS || (opscope == CacheOpScope_ALLU && PSTATE.EL == EL1 && EL2Enabled() && HCR.FB == '1') then cache.shareability = Shareability_ISH; else cache.shareability = Shareability_NSH; cache.regval = ZeroExtend(regval); CACHE_OP(cache); else assert opscope == CacheOpScope_PoU; if EL2Enabled() then if PSTATE.EL IN {EL0, EL1} then cache.is_vmid_valid = TRUE; cache.vmid = VMID[]; else cache.is_vmid_valid = FALSE; else cache.is_vmid_valid = FALSE; if PSTATE.EL == EL0 then cache.is_asid_valid = TRUE; cache.asid = ASID[]; else cache.is_asid_valid = FALSE; need_translate = ICInstNeedsTranslation(opscope); cache.shareability = Shareability_NSH; cache.vaddress = ZeroExtend(regval); cache.translated = need_translate; if !need_translate then cache.paddress = FullAddress UNKNOWN; CACHE_OP(cache); return; wasaligned = TRUE; iswrite = FALSE; size = 0; memaddrdesc = AArch32.TranslateAddress(regval, acctype, iswrite, wasaligned, size); if IsFault(memaddrdesc) then AArch32.Abort(regval, memaddrdesc.fault); cache.paddress = memaddrdesc.paddress; CACHE_OP(cache); return;

Library pseudocode for aarch32/predictionrestrict/RestrictPrediction

// RestrictPrediction() // ==================== // Clear all predictions in the context. AArch32.RestrictPrediction(bits(32) val, RestrictType restriction) ExecutionCntxt c; target_el = val<25:24>; // If the instruction is executed at an EL lower than the specified // level, it is treated as a NOP. if UInt(target_el) > UInt(PSTATE.EL) then return; bit ns = val<26>; ss = TargetSecurityState(ns); c.security = ss; c.target_el = target_el; if EL2Enabled() then if PSTATE.EL IN {EL0, EL1} then c.is_vmid_valid = TRUE; c.all_vmid = FALSE; c.vmid = VMID[]; elsif target_el IN {EL0, EL1} then c.is_vmid_valid = TRUE; c.all_vmid = val<27> == '1'; c.vmid = ZeroExtend(val<23:16>, 16); // Only valid if val<27> == '0'; else c.is_vmid_valid = FALSE; else c.is_vmid_valid = FALSE; if PSTATE.EL == EL0 then c.is_asid_valid = TRUE; c.all_asid = FALSE; c.asid = ASID[]; elsif target_el == EL0 then c.is_asid_valid = TRUE; c.all_asid = val<8> == '1'; c.asid = ZeroExtend(val<7:0>, 16); // Only valid if val<8> == '0'; else c.is_asid_valid = FALSE; c.restriction = restriction; RESTRICT_PREDICTIONS(c);

Library pseudocode for aarch32/translation/attrs/AArch32.DefaultTEXDecode

// AArch32.DefaultTEXDecode() // ========================== // Apply short-descriptor format memory region attributes, without TEX remap MemoryAttributes AArch32.DefaultTEXDecode(bits(3) TEX_in, bit C_in, bit B_in, bit S) MemoryAttributes memattrs; bits(3) TEX = TEX_in; bit C = C_in; bit B = B_in; // Reserved values map to allocated values if (TEX == '001' && C:B == '01') || (TEX == '010' && C:B != '00') || TEX == '011' then bits(5) texcb; (-, texcb) = ConstrainUnpredictableBits(Unpredictable_RESTEXCB); TEX = texcb<4:2>; C = texcb<1>; B = texcb<0>; // Distinction between Inner Shareable and Outer Shareable is not supported in this format // A memory region is either Non-shareable or Outer Shareable case TEX:C:B of when '00000' // Device-nGnRnE memattrs.memtype = MemType_Device; memattrs.device = DeviceType_nGnRnE; memattrs.shareability = Shareability_OSH; when '00001', '01000' // Device-nGnRE memattrs.memtype = MemType_Device; memattrs.device = DeviceType_nGnRE; memattrs.shareability = Shareability_OSH; when '00010' // Write-through Read allocate memattrs.memtype = MemType_Normal; memattrs.inner.attrs = MemAttr_WT; memattrs.inner.hints = MemHint_RA; memattrs.outer.attrs = MemAttr_WT; memattrs.outer.hints = MemHint_RA; memattrs.shareability = if S == '1' then Shareability_OSH else Shareability_NSH; when '00011' // Write-back Read allocate memattrs.memtype = MemType_Normal; memattrs.inner.attrs = MemAttr_WB; memattrs.inner.hints = MemHint_RA; memattrs.outer.attrs = MemAttr_WB; memattrs.outer.hints = MemHint_RA; memattrs.shareability = if S == '1' then Shareability_OSH else Shareability_NSH; when '00100' // Non-cacheable memattrs.memtype = MemType_Normal; memattrs.inner.attrs = MemAttr_NC; memattrs.outer.attrs = MemAttr_NC; memattrs.shareability = Shareability_OSH; when '00110' memattrs = MemoryAttributes IMPLEMENTATION_DEFINED; when '00111' // Write-back Read and Write allocate memattrs.memtype = MemType_Normal; memattrs.inner.attrs = MemAttr_WB; memattrs.inner.hints = MemHint_RWA; memattrs.outer.attrs = MemAttr_WB; memattrs.outer.hints = MemHint_RWA; memattrs.shareability = if S == '1' then Shareability_OSH else Shareability_NSH; when '1xxxx' // Cacheable, TEX<1:0> = Outer attrs, {C,B} = Inner attrs memattrs.memtype = MemType_Normal; memattrs.inner = DecodeSDFAttr(C:B); memattrs.outer = DecodeSDFAttr(TEX<1:0>); if memattrs.inner.attrs == MemAttr_NC && memattrs.outer.attrs == MemAttr_NC then memattrs.shareability = Shareability_OSH; else memattrs.shareability = if S == '1' then Shareability_OSH else Shareability_NSH; otherwise // Reserved, handled above Unreachable(); // The Transient hint is not supported in this format memattrs.inner.transient = FALSE; memattrs.outer.transient = FALSE; memattrs.tagged = FALSE; if memattrs.inner.attrs == MemAttr_WB && memattrs.outer.attrs == MemAttr_WB then memattrs.xs = '0'; else memattrs.xs = '1'; return memattrs;
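
A condensed, illustrative lookup (my own encoding and labels) of the TEX:C:B cases handled above:

def default_tex_decode(tex, c, b):
    texcb = (tex << 2) | (c << 1) | b
    if texcb & 0b10000:   # '1xxxx': TEX<1:0> give outer attrs, C:B give inner attrs
        return "Normal, TEX<1:0> outer attrs, C:B inner attrs"
    table = {
        0b00000: "Device-nGnRnE",
        0b00001: "Device-nGnRE",
        0b01000: "Device-nGnRE",
        0b00010: "Normal WT, read-allocate",
        0b00011: "Normal WB, read-allocate",
        0b00100: "Normal Non-cacheable",
        0b00111: "Normal WB, read/write-allocate",
    }
    # Remaining encodings are reserved (remapped) or IMPLEMENTATION_DEFINED ('00110')
    return table.get(texcb, "reserved/IMPLEMENTATION_DEFINED")

assert default_tex_decode(0b000, 1, 1) == "Normal WB, read-allocate"
assert default_tex_decode(0b100, 0, 1).startswith("Normal, TEX")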

Library pseudocode for aarch32/translation/attrs/AArch32.RemappedTEXDecode

// AArch32.RemappedTEXDecode() // =========================== // Apply short-descriptor format memory region attributes, with TEX remap MemoryAttributes AArch32.RemappedTEXDecode(Regime regime, bits(3) TEX, bit C, bit B, bit S) MemoryAttributes memattrs; PRRR_Type prrr; NMRR_Type nmrr; region = UInt(TEX<0>:C:B); // TEX<2:1> are ignored in this mapping scheme if region == 6 then return MemoryAttributes IMPLEMENTATION_DEFINED; if regime == Regime_EL30 then prrr = PRRR_S; nmrr = NMRR_S; elsif HaveAArch32EL(EL3) then prrr = PRRR_NS; nmrr = NMRR_NS; else prrr = PRRR; nmrr = NMRR; base = 2 * region; attrfield = prrr<base+1:base>; if attrfield == '11' then // Reserved, maps to allocated value (-, attrfield) = ConstrainUnpredictableBits(Unpredictable_RESPRRR); case attrfield of when '00' // Device-nGnRnE memattrs.memtype = MemType_Device; memattrs.device = DeviceType_nGnRnE; memattrs.shareability = Shareability_OSH; when '01' // Device-nGnRE memattrs.memtype = MemType_Device; memattrs.device = DeviceType_nGnRE; memattrs.shareability = Shareability_OSH; when '10' NSn = if S == '0' then prrr.NS0 else prrr.NS1; NOSm = prrr<region+24> AND NSn; IRn = nmrr<base+1:base>; ORn = nmrr<base+17:base+16>; memattrs.memtype = MemType_Normal; memattrs.inner = DecodeSDFAttr(IRn); memattrs.outer = DecodeSDFAttr(ORn); if memattrs.inner.attrs == MemAttr_NC && memattrs.outer.attrs == MemAttr_NC then memattrs.shareability = Shareability_OSH; else bits(2) sh = NSn:NOSm; memattrs.shareability = DecodeShareability(sh); when '11' Unreachable(); // The Transient hint is not supported in this format memattrs.inner.transient = FALSE; memattrs.outer.transient = FALSE; memattrs.tagged = FALSE; if memattrs.inner.attrs == MemAttr_WB && memattrs.outer.attrs == MemAttr_WB then memattrs.xs = '0'; else memattrs.xs = '1'; return memattrs;
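
A worked example of the TEX-remap indexing above (sketch, my naming): the region number is TEX<0>:C:B, and each region selects a 2-bit attribute field at bit position 2*region in PRRR and NMRR.

def remap_indices(tex, c, b):
    region = ((tex & 1) << 2) | (c << 1) | b   # TEX<2:1> are ignored
    base = 2 * region
    return region, base, base + 1              # PRRR<base+1:base> holds the type

region, lo, hi = remap_indices(tex=0b001, c=1, b=0)
assert (region, lo, hi) == (6, 12, 13)          # region 6 is IMPLEMENTATION_DEFINED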

Library pseudocode for aarch32/translation/debug/AArch32.CheckBreakpoint

// AArch32.CheckBreakpoint() // ========================= // Called before executing the instruction of length "size" bytes at "vaddress" in an AArch32 // translation regime, when either debug exceptions are enabled, or halting debug is enabled // and halting is allowed. FaultRecord AArch32.CheckBreakpoint(bits(32) vaddress, integer size) assert ELUsingAArch32(S1TranslationRegime()); assert size IN {2,4}; match = FALSE; mismatch = FALSE; for i = 0 to NumBreakpointsImplemented() - 1 (match_i, mismatch_i) = AArch32.BreakpointMatch(i, vaddress, size); match = match || match_i; mismatch = mismatch || mismatch_i; if match && HaltOnBreakpointOrWatchpoint() then reason = DebugHalt_Breakpoint; Halt(reason); elsif (match || mismatch) then acctype = AccType_IFETCH; iswrite = FALSE; debugmoe = DebugException_Breakpoint; return AArch32.DebugFault(acctype, iswrite, debugmoe); else return NoFault();

Library pseudocode for aarch32/translation/debug/AArch32.CheckDebug

// AArch32.CheckDebug() // ==================== // Called on each access to check for a debug exception or entry to Debug state. FaultRecord AArch32.CheckDebug(bits(32) vaddress, AccType acctype, boolean iswrite, integer size) FaultRecord fault = NoFault(); boolean d_side = (IsDataAccess(acctype) || acctype == AccType_DC); boolean i_side = (acctype == AccType_IFETCH); generate_exception = AArch32.GenerateDebugExceptions() && DBGDSCRext.MDBGen == '1'; halt = HaltOnBreakpointOrWatchpoint(); // Relative priority of Vector Catch and Breakpoint exceptions not defined in the architecture vector_catch_first = ConstrainUnpredictableBool(Unpredictable_BPVECTORCATCHPRI); if i_side && vector_catch_first && generate_exception then fault = AArch32.CheckVectorCatch(vaddress, size); if fault.statuscode == Fault_None && (generate_exception || halt) then if d_side then fault = AArch32.CheckWatchpoint(vaddress, acctype, iswrite, size); elsif i_side then fault = AArch32.CheckBreakpoint(vaddress, size); if fault.statuscode == Fault_None && i_side && !vector_catch_first && generate_exception then return AArch32.CheckVectorCatch(vaddress, size); return fault;

Library pseudocode for aarch32/translation/debug/AArch32.CheckVectorCatch

// AArch32.CheckVectorCatch() // ========================== // Called before executing the instruction of length "size" bytes at "vaddress" in an AArch32 // translation regime, when debug exceptions are enabled. FaultRecord AArch32.CheckVectorCatch(bits(32) vaddress, integer size) assert ELUsingAArch32(S1TranslationRegime()); match = AArch32.VCRMatch(vaddress); if size == 4 && !match && AArch32.VCRMatch(vaddress + 2) then match = ConstrainUnpredictableBool(Unpredictable_VCMATCHHALF); if match then acctype = AccType_IFETCH; iswrite = FALSE; debugmoe = DebugException_VectorCatch; return AArch32.DebugFault(acctype, iswrite, debugmoe); else return NoFault();

Library pseudocode for aarch32/translation/debug/AArch32.CheckWatchpoint

// AArch32.CheckWatchpoint() // ========================= // Called before accessing the memory location of "size" bytes at "address", // when either debug exceptions are enabled for the access, or halting debug // is enabled and halting is allowed. FaultRecord AArch32.CheckWatchpoint(bits(32) vaddress, AccType acctype, boolean iswrite, integer size) assert ELUsingAArch32(S1TranslationRegime()); if acctype IN {AccType_TTW, AccType_IC, AccType_AT, AccType_ATPAN} then return NoFault(); if acctype == AccType_DC then if !iswrite then return NoFault(); elsif !(boolean IMPLEMENTATION_DEFINED "DCIMVAC generates watchpoint") then return NoFault(); match = FALSE; ispriv = AArch32.AccessUsesEL(acctype) != EL0; for i = 0 to NumWatchpointsImplemented() - 1 if AArch32.WatchpointMatch(i, vaddress, size, ispriv, acctype, iswrite) then match = TRUE; if match && HaltOnBreakpointOrWatchpoint() then reason = DebugHalt_Watchpoint; EDWAR = ZeroExtend(vaddress); Halt(reason); elsif match then debugmoe = DebugException_Watchpoint; return AArch32.DebugFault(acctype, iswrite, debugmoe); else return NoFault();

Library pseudocode for aarch32/translation/faults/AArch32.DebugFault

// AArch32.DebugFault() // ==================== // Return a fault record indicating a hardware watchpoint/breakpoint FaultRecord AArch32.DebugFault(AccType acctype, boolean iswrite, bits(4) debugmoe) FaultRecord fault; fault.statuscode = Fault_Debug; fault.acctype = acctype; fault.write = iswrite; fault.debugmoe = debugmoe; fault.secondstage = FALSE; fault.s2fs1walk = FALSE; return fault;

Library pseudocode for aarch32/translation/faults/AArch32.IPAIsOutOfRange

// AArch32.IPAIsOutOfRange() // ========================= // Check intermediate physical address bits not resolved by translation are ZERO boolean AArch32.IPAIsOutOfRange(S2TTWParams walkparams, bits(40) ipa) // Input Address size iasize = AArch32.S2IASize(walkparams.t0sz); return iasize < 40 && !IsZero(ipa<39:iasize>);

Library pseudocode for aarch32/translation/faults/AArch32.S1HasAlignmentFault

// AArch32.S1HasAlignmentFault() // ============================= // Returns whether stage 1 output fails alignment requirement on data accesses // to Device memory boolean AArch32.S1HasAlignmentFault(AccType acctype, boolean aligned, bit ntlsmd, MemoryAttributes memattrs) if acctype == AccType_IFETCH || memattrs.memtype != MemType_Device then return FALSE; if acctype == AccType_A32LSMD && ntlsmd == '0' && memattrs.device != DeviceType_GRE then return TRUE; return !aligned || acctype == AccType_DCZVA;

Library pseudocode for aarch32/translation/faults/AArch32.S1LDHasPermissionsFault

// AArch32.S1LDHasPermissionsFault() // ================================= // Returns whether an access using stage 1 long-descriptor translation // violates permissions of target memory boolean AArch32.S1LDHasPermissionsFault(Regime regime, SecurityState ss, S1TTWParams walkparams, Permissions perms, MemType memtype, PASpace paspace, boolean ispriv, AccType acctype, boolean iswrite) bit r; bit w; bit x; bit pr; bit pw; bit ur; bit uw; bit xn; if HasUnprivileged(regime) then // Apply leaf permissions case perms.ap<2:1> of when '00' (pr,pw,ur,uw) = ('1','1','0','0'); // R/W at PL1 only when '01' (pr,pw,ur,uw) = ('1','1','1','1'); // R/W at any PL when '10' (pr,pw,ur,uw) = ('1','0','0','0'); // RO at PL1 only when '11' (pr,pw,ur,uw) = ('1','0','1','0'); // RO at any PL // Apply hierarchical permissions case perms.ap_table of when '00' (pr,pw,ur,uw) = ( pr, pw, ur, uw); // No effect when '01' (pr,pw,ur,uw) = ( pr, pw,'0','0'); // Privileged access when '10' (pr,pw,ur,uw) = ( pr,'0', ur,'0'); // Read-only when '11' (pr,pw,ur,uw) = ( pr,'0','0','0'); // Read-only, privileged access xn = perms.xn OR perms.xn_table; pxn = perms.pxn OR perms.pxn_table; ux = ur AND NOT(xn OR (uw AND walkparams.wxn)); px = pr AND NOT(xn OR pxn OR (pw AND walkparams.wxn) OR (uw AND walkparams.uwxn)); pan_access = !(acctype IN {AccType_DC, AccType_IFETCH, AccType_AT}); if HavePANExt() && pan_access then pan = PSTATE.PAN AND (ur OR uw); pr = pr AND NOT(pan); pw = pw AND NOT(pan); (r,w,x) = if ispriv then (pr,pw,px) else (ur,uw,ux); // Prevent execution from Non-secure space by PE in Secure state if SIF is set if ss == SS_Secure && paspace == PAS_NonSecure then x = x AND NOT(walkparams.sif); else // Apply leaf permissions case perms.ap<2> of when '0' (r,w) = ('1','1'); // No effect when '1' (r,w) = ('1','0'); // Read-only // Apply hierarchical permissions case perms.ap_table<1> of when '0' (r,w) = ( r , w ); // No effect when '1' (r,w) = ( r ,'0'); // Read-only xn = perms.xn OR perms.xn_table; x = NOT(xn OR (w AND walkparams.wxn)); if acctype == AccType_IFETCH then constraint = ConstrainUnpredictable(Unpredictable_INSTRDEVICE); if constraint == Constraint_FAULT && memtype == MemType_Device then return TRUE; else return x == '0'; elsif acctype IN {AccType_IC, AccType_DC} then return FALSE; elsif iswrite then return w == '0'; else return r == '0';

Library pseudocode for aarch32/translation/faults/AArch32.S1SDHasPermissionsFault

// AArch32.S1SDHasPermissionsFault() // ================================= // Returns whether an access using stage 1 short-descriptor translation // violates permissions of target memory boolean AArch32.S1SDHasPermissionsFault(Regime regime, SecurityState ss, Permissions perms_in, MemType memtype, PASpace paspace, boolean ispriv, AccType acctype, boolean iswrite) Permissions perms = perms_in; bit pr; bit pw; bit ur; bit uw; SCTLR_Type sctlr; if regime == Regime_EL30 then sctlr = SCTLR_S; elsif HaveAArch32EL(EL3) then sctlr = SCTLR_NS; else sctlr = SCTLR; if sctlr.AFE == '0' then // Map Reserved encoding '100' if perms.ap == '100' then perms.ap = bits(3) IMPLEMENTATION_DEFINED "Reserved short descriptor AP encoding"; case perms.ap of when '000' (pr,pw,ur,uw) = ('0','0','0','0'); // No access when '001' (pr,pw,ur,uw) = ('1','1','0','0'); // R/W at PL1 only when '010' (pr,pw,ur,uw) = ('1','1','1','0'); // R/W at PL1, RO at PL0 when '011' (pr,pw,ur,uw) = ('1','1','1','1'); // R/W at any PL // '100' is reserved when '101' (pr,pw,ur,uw) = ('1','0','0','0'); // RO at PL1 only when '110' (pr,pw,ur,uw) = ('1','0','1','0'); // RO at any PL (deprecated) when '111' (pr,pw,ur,uw) = ('1','0','1','0'); // RO at any PL else // Simplified access permissions model case perms.ap<2:1> of when '00' (pr,pw,ur,uw) = ('1','1','0','0'); // R/W at PL1 only when '01' (pr,pw,ur,uw) = ('1','1','1','1'); // R/W at any PL when '10' (pr,pw,ur,uw) = ('1','0','0','0'); // RO at PL1 only when '11' (pr,pw,ur,uw) = ('1','0','1','0'); // RO at any PL ux = ur AND NOT(perms.xn OR (uw AND sctlr.WXN)); px = pr AND NOT(perms.xn OR perms.pxn OR (pw AND sctlr.WXN) OR (uw AND sctlr.UWXN)); pan_access = !(acctype IN {AccType_DC, AccType_IFETCH, AccType_AT}); if HavePANExt() && pan_access then pan = PSTATE.PAN AND (ur OR uw); pr = pr AND NOT(pan); pw = pw AND NOT(pan); (r,w,x) = if ispriv then (pr,pw,px) else (ur,uw,ux); // Prevent execution from Non-secure space by PE in Secure state if SIF is set if ss == SS_Secure && paspace == PAS_NonSecure then x = x AND NOT(if ELUsingAArch32(EL3) then SCR.SIF else SCR_EL3.SIF); if acctype == AccType_IFETCH then constraint = ConstrainUnpredictable(Unpredictable_INSTRDEVICE); if constraint == Constraint_FAULT && memtype == MemType_Device then return TRUE; else return x == '0'; elsif acctype IN {AccType_IC, AccType_DC} then return FALSE; elsif iswrite then return w == '0'; else return r == '0';

Library pseudocode for aarch32/translation/faults/AArch32.S2HasAlignmentFault

// AArch32.S2HasAlignmentFault() // ============================= // Returns whether stage 2 output fails alignment requirement on data accesses // to Device memory boolean AArch32.S2HasAlignmentFault(AccType acctype, boolean aligned, MemoryAttributes memattrs) if acctype == AccType_IFETCH || memattrs.memtype != MemType_Device then return FALSE; return !aligned || acctype == AccType_DCZVA;

Library pseudocode for aarch32/translation/faults/AArch32.S2HasPermissionsFault

// AArch32.S2HasPermissionsFault() // =============================== // Returns whether stage 2 access violates permissions of target memory boolean AArch32.S2HasPermissionsFault(boolean s2fs1walk, S2TTWParams walkparams, Permissions perms, MemType memtype, boolean ispriv, AccType acctype, boolean iswrite) bit px; bit ux; r = perms.s2ap<0>; w = perms.s2ap<1>; bit x; if HaveExtendedExecuteNeverExt() then case perms.s2xn:perms.s2xnx of when '00' (px, ux) = ( r , r ); when '01' (px, ux) = ('0', r ); when '10' (px, ux) = ('0','0'); when '11' (px, ux) = ( r ,'0'); x = if ispriv then px else ux; else x = r AND NOT(perms.s2xn); if s2fs1walk && walkparams.ptw == '1' && memtype == MemType_Device then return TRUE; elsif acctype == AccType_IFETCH then constraint = ConstrainUnpredictable(Unpredictable_INSTRDEVICE); if constraint == Constraint_FAULT && memtype == MemType_Device then return TRUE; else return x == '0'; elsif acctype IN {AccType_IC, AccType_DC} then return FALSE; elsif iswrite then return w == '0'; else return r == '0';

Library pseudocode for aarch32/translation/faults/AArch32.S2InconsistentSL

// AArch32.S2InconsistentSL() // ========================== // Detect inconsistent configuration of stage 2 T0SZ and SL fields boolean AArch32.S2InconsistentSL(S2TTWParams walkparams) startlevel = AArch32.S2StartLevel(walkparams.sl0); levels = FINAL_LEVEL - startlevel; granulebits = TGxGranuleBits(walkparams.tgx); stride = granulebits - 3; // Input address size must at least be large enough to be resolved from the start level sl_min_iasize = ( levels * stride // Bits resolved by table walk, except initial level + granulebits // Bits directly mapped to output address + 1); // At least 1 more bit to be decoded by initial level // Can accommodate 1 more stride in the level + concatenation of up to 2^4 tables sl_max_iasize = sl_min_iasize + (stride-1) + 4; // Configured Input Address size iasize = AArch32.S2IASize(walkparams.t0sz); return iasize < sl_min_iasize || iasize > sl_max_iasize;
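
Worked numbers for the bounds above, assuming a 4KB granule (granulebits = 12, stride = 9) and FINAL_LEVEL = 3:

def s2_sl_iasize_bounds(startlevel, granulebits=12):
    stride = granulebits - 3
    levels = 3 - startlevel                      # FINAL_LEVEL - startlevel
    sl_min = levels * stride + granulebits + 1   # at least 1 bit decoded at start level
    sl_max = sl_min + (stride - 1) + 4           # one more stride + up to 16 tables
    return sl_min, sl_max

assert s2_sl_iasize_bounds(startlevel=1) == (31, 43)
assert s2_sl_iasize_bounds(startlevel=2) == (22, 34)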

Library pseudocode for aarch32/translation/faults/AArch32.VAIsOutOfRange

// AArch32.VAIsOutOfRange() // ======================== // Check virtual address bits not resolved by translation are identical // and of accepted value boolean AArch32.VAIsOutOfRange(Regime regime, S1TTWParams walkparams, bits(32) va) if regime == Regime_EL2 then // Input Address size iasize = AArch32.S1IASize(walkparams.t0sz); return walkparams.t0sz != '000' && !IsZero(va<31:iasize>); elsif walkparams.t1sz != '000' && walkparams.t0sz != '000' then // Lower range Input Address size lo_iasize = AArch32.S1IASize(walkparams.t0sz); // Upper range Input Address size up_iasize = AArch32.S1IASize(walkparams.t1sz); return !IsZero(va<31:lo_iasize>) && !IsOnes(va<31:up_iasize>); else return FALSE;
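
A small sketch of the dual-range check above for the non-EL2 case (illustrative Python, my naming): T0SZ shrinks the lower range from the top and T1SZ shrinks the upper range from the bottom, leaving a faulting hole in between.

def va_is_out_of_range(va, t0sz, t1sz):
    if t0sz == 0 or t1sz == 0:       # a zero TnSZ leaves no hole to fault into
        return False
    lo_iasize = 32 - t0sz
    up_iasize = 32 - t1sz
    in_lower = (va >> lo_iasize) == 0                    # high bits all zero
    in_upper = (va >> up_iasize) == (1 << t1sz) - 1      # high bits all one
    return not (in_lower or in_upper)

assert not va_is_out_of_range(0x00001000, t0sz=2, t1sz=2)  # lower range
assert not va_is_out_of_range(0xFFFF0000, t0sz=2, t1sz=2)  # upper range
assert va_is_out_of_range(0x80000000, t0sz=2, t1sz=2)      # the hole in between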

Library pseudocode for aarch32/translation/tlbcontext/AArch32.GetS1TLBContext

// AArch32.GetS1TLBContext() // ========================= // Gather translation context for accesses with VA to match against TLB entries TLBContext AArch32.GetS1TLBContext(Regime regime, SecurityState ss, bits(32) va) TLBContext tlbcontext; case regime of when Regime_EL2 tlbcontext = AArch32.TLBContextEL2(va); when Regime_EL10 tlbcontext = AArch32.TLBContextEL10(ss, va); when Regime_EL30 tlbcontext = AArch32.TLBContextEL30(va); tlbcontext.includes_s1 = TRUE; // The following may be amended for EL1&0 Regime if caching of stage 2 is successful tlbcontext.includes_s2 = FALSE; return tlbcontext;

Library pseudocode for aarch32/translation/tlbcontext/AArch32.GetS2TLBContext

// AArch32.GetS2TLBContext() // ========================= // Gather translation context for accesses with IPA to match against TLB entries TLBContext AArch32.GetS2TLBContext(FullAddress ipa) assert ipa.paspace == PAS_NonSecure; TLBContext tlbcontext; tlbcontext.ss = SS_NonSecure; tlbcontext.regime = Regime_EL10; tlbcontext.ipaspace = ipa.paspace; tlbcontext.vmid = ZeroExtend(VTTBR.VMID); tlbcontext.tg = TGx_4KB; tlbcontext.includes_s1 = FALSE; tlbcontext.includes_s2 = TRUE; tlbcontext.ia = ZeroExtend(ipa.address); tlbcontext.cnp = if HaveCommonNotPrivateTransExt() then VTTBR.CnP else '0'; return tlbcontext;

Library pseudocode for aarch32/translation/tlbcontext/AArch32.TLBContextEL10

// AArch32.TLBContextEL10() // ======================== // Gather translation context for accesses under EL10 regime // (PL10 when EL3 is A64) to match against TLB entries TLBContext AArch32.TLBContextEL10(SecurityState ss, bits(32) va) TLBContext tlbcontext; TTBCR_Type ttbcr; TTBR0_Type ttbr0; TTBR1_Type ttbr1; CONTEXTIDR_Type contextidr; if HaveAArch32EL(EL3) then ttbcr = TTBCR_NS; ttbr0 = TTBR0_NS; ttbr1 = TTBR1_NS; contextidr = CONTEXTIDR_NS; else ttbcr = TTBCR; ttbr0 = TTBR0; ttbr1 = TTBR1; contextidr = CONTEXTIDR; tlbcontext.ss = ss; tlbcontext.regime = Regime_EL10; if AArch32.EL2Enabled(ss) then tlbcontext.vmid = ZeroExtend(VTTBR.VMID); if ttbcr.EAE == '1' then tlbcontext.asid = ZeroExtend(if ttbcr.A1 == '0' then ttbr0.ASID else ttbr1.ASID); else tlbcontext.asid = ZeroExtend(contextidr.ASID); tlbcontext.tg = TGx_4KB; tlbcontext.ia = ZeroExtend(va); if HaveCommonNotPrivateTransExt() && ttbcr.EAE == '1' then if AArch32.GetVARange(va, ttbcr.T0SZ, ttbcr.T1SZ) == VARange_LOWER then tlbcontext.cnp = ttbr0.CnP; else tlbcontext.cnp = ttbr1.CnP; else tlbcontext.cnp = '0'; return tlbcontext;

Library pseudocode for aarch32/translation/tlbcontext/AArch32.TLBContextEL2

// AArch32.TLBContextEL2() // ======================= // Gather translation context for accesses under EL2 regime to match against TLB entries TLBContext AArch32.TLBContextEL2(bits(32) va) TLBContext tlbcontext; tlbcontext.ss = SS_NonSecure; tlbcontext.regime = Regime_EL2; tlbcontext.ia = ZeroExtend(va); tlbcontext.tg = TGx_4KB; tlbcontext.cnp = if HaveCommonNotPrivateTransExt() then HTTBR.CnP else '0'; return tlbcontext;

Library pseudocode for aarch32/translation/tlbcontext/AArch32.TLBContextEL30

// AArch32.TLBContextEL30() // ======================== // Gather translation context for accesses under EL30 regime // (PL10 in Secure state and EL3 is A32) to match against TLB entries TLBContext AArch32.TLBContextEL30(bits(32) va) TLBContext tlbcontext; tlbcontext.ss = SS_Secure; tlbcontext.regime = Regime_EL30; if TTBCR_S.EAE == '1' then tlbcontext.asid = ZeroExtend(if TTBCR_S.A1 == '0' then TTBR0_S.ASID else TTBR1_S.ASID); else tlbcontext.asid = ZeroExtend(CONTEXTIDR_S.ASID); tlbcontext.tg = TGx_4KB; tlbcontext.ia = ZeroExtend(va); if HaveCommonNotPrivateTransExt() && TTBCR_S.EAE == '1' then if AArch32.GetVARange(va, TTBCR_S.T0SZ, TTBCR_S.T1SZ) == VARange_LOWER then tlbcontext.cnp = TTBR0_S.CnP; else tlbcontext.cnp = TTBR1_S.CnP; else tlbcontext.cnp = '0'; return tlbcontext;

Library pseudocode for aarch32/translation/translation/AArch32.AccessUsesEL

// AArch32.AccessUsesEL() // ====================== // Determine the privilege associated with the access bits(2) AArch32.AccessUsesEL(AccType acctype) if acctype == AccType_UNPRIV then return EL0; else return PSTATE.EL;

Library pseudocode for aarch32/translation/translation/AArch32.EL2Enabled

// AArch32.EL2Enabled() // ==================== // Returns whether EL2 is enabled for the given Security State boolean AArch32.EL2Enabled(SecurityState ss) if ss == SS_Secure then if !(HaveEL(EL2) && HaveSecureEL2Ext()) then return FALSE; elsif HaveEL(EL3) then return SCR_EL3.EEL2 == '1'; else return boolean IMPLEMENTATION_DEFINED "Secure-only implementation"; else return HaveEL(EL2);

Library pseudocode for aarch32/translation/translation/AArch32.FullTranslate

// AArch32.FullTranslate() // ======================= // Perform address translation as specified by VMSA-A32 AddressDescriptor AArch32.FullTranslate(bits(32) va, AccType acctype, boolean iswrite, boolean aligned) // Prepare fault fields in case a fault is detected fault = NoFault(); fault.acctype = acctype; fault.write = iswrite; regime = TranslationRegime(PSTATE.EL, acctype); ispriv = PSTATE.EL != EL0 && acctype != AccType_UNPRIV; ss = SecurityStateForRegime(regime); // First Stage Translation AddressDescriptor ipa; if regime == Regime_EL2 || TTBCR.EAE == '1' then (fault, ipa) = AArch32.S1TranslateLD(fault, regime, ss, va, acctype, aligned, iswrite, ispriv); else (fault, ipa, -) = AArch32.S1TranslateSD(fault, regime, ss, va, acctype, aligned, iswrite, ispriv); if fault.statuscode != Fault_None then return CreateFaultyAddressDescriptor(ZeroExtend(va), fault); if regime == Regime_EL10 && EL2Enabled() then ipa.vaddress = ZeroExtend(va); s2fs1walk = FALSE; AddressDescriptor pa; (fault, pa) = AArch32.S2Translate(fault, ipa, ss, s2fs1walk, acctype, aligned, iswrite, ispriv); if fault.statuscode != Fault_None then return CreateFaultyAddressDescriptor(ZeroExtend(va), fault); else return pa; else return ipa;

Library pseudocode for aarch32/translation/translation/AArch32.OutputDomain

// AArch32.OutputDomain() // ====================== // Determine the domain of the translated output address bits(2) AArch32.OutputDomain(Regime regime, bits(4) domain) bits(2) Dn; index = 2 * UInt(domain); if regime == Regime_EL30 then Dn = DACR_S<index+1:index>; elsif HaveAArch32EL(EL3) then Dn = DACR_NS<index+1:index>; else Dn = DACR<index+1:index>; if Dn == '10' then // Reserved value maps to an allocated value (-, Dn) = ConstrainUnpredictableBits(Unpredictable_RESDACR); return Dn;
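
A one-line worked example of the DACR indexing above (illustrative, my naming): each 4-bit domain number selects a 2-bit field at bit position 2*domain.

def dacr_domain(dacr, domain):
    return (dacr >> (2 * domain)) & 0b11   # Dn = DACR<2*domain+1 : 2*domain>

assert dacr_domain(0x00000001, domain=0) == 0b01    # Client
assert dacr_domain(0x30000000, domain=14) == 0b11   # Manager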

Library pseudocode for aarch32/translation/translation/AArch32.S1DisabledOutput

// AArch32.S1DisabledOutput() // ========================== // Flat map the VA to IPA/PA, depending on the regime, assigning default memory attributes (FaultRecord, AddressDescriptor) AArch32.S1DisabledOutput(FaultRecord fault_in, Regime regime, SecurityState ss, bits(32) va, AccType acctype, boolean aligned) FaultRecord fault = fault_in; // No memory page is guarded when stage 1 address translation is disabled SetInGuardedPage(FALSE); MemoryAttributes memattrs; bit default_cacheable; if regime == Regime_EL10 && AArch32.EL2Enabled(ss) then if ELStateUsingAArch32(EL2, ss == SS_Secure) then default_cacheable = HCR.DC; else default_cacheable = HCR_EL2.DC; else default_cacheable = '0'; if default_cacheable == '1' then // Use default cacheable settings memattrs.memtype = MemType_Normal; memattrs.inner.attrs = MemAttr_WB; memattrs.inner.hints = MemHint_RWA; memattrs.outer.attrs = MemAttr_WB; memattrs.outer.hints = MemHint_RWA; memattrs.shareability = Shareability_NSH; if !ELStateUsingAArch32(EL2, ss == SS_Secure) && HaveMTE2Ext() then memattrs.tagged = HCR_EL2.DCT == '1'; else memattrs.tagged = FALSE; elsif acctype == AccType_IFETCH then memattrs.memtype = MemType_Normal; memattrs.shareability = Shareability_OSH; memattrs.tagged = FALSE; if AArch32.S1ICacheEnabled(regime) then memattrs.inner.attrs = MemAttr_WT; memattrs.inner.hints = MemHint_RA; memattrs.outer.attrs = MemAttr_WT; memattrs.outer.hints = MemHint_RA; else memattrs.inner.attrs = MemAttr_NC; memattrs.outer.attrs = MemAttr_NC; else // Treat memory region as Device memattrs.memtype = MemType_Device; memattrs.device = DeviceType_nGnRnE; memattrs.shareability = Shareability_OSH; memattrs.tagged = FALSE; bit ntlsmd; if HaveTrapLoadStoreMultipleDeviceExt() then case regime of when Regime_EL30 ntlsmd = SCTLR_S.nTLSMD; when Regime_EL2 ntlsmd = HSCTLR.nTLSMD; when Regime_EL10 ntlsmd = if HaveAArch32EL(EL3) then SCTLR_NS.nTLSMD else SCTLR.nTLSMD; else ntlsmd = '1'; if AArch32.S1HasAlignmentFault(acctype, aligned, ntlsmd, memattrs) then fault.statuscode = Fault_Alignment; return (fault, AddressDescriptor UNKNOWN); FullAddress oa; oa.address = ZeroExtend(va); oa.paspace = if ss == SS_Secure then PAS_Secure else PAS_NonSecure; ipa = CreateAddressDescriptor(ZeroExtend(va), oa, memattrs); return (fault, ipa);

Library pseudocode for aarch32/translation/translation/AArch32.S1Enabled

// AArch32.S1Enabled() // =================== // Returns whether stage 1 translation is enabled for the active translation regime boolean AArch32.S1Enabled(Regime regime, SecurityState ss) if regime == Regime_EL2 then return HSCTLR.M == '1'; elsif regime == Regime_EL30 then return SCTLR_S.M == '1'; elsif !AArch32.EL2Enabled(ss) then return (if HaveAArch32EL(EL3) then SCTLR_NS.M else SCTLR.M) == '1'; elsif ELStateUsingAArch32(EL2, ss == SS_Secure) then return HCR.<TGE,DC> == '00' && (if HaveAArch32EL(EL3) then SCTLR_NS.M else SCTLR.M) == '1'; else return HCR_EL2.<TGE,DC> == '00' && SCTLR.M == '1';

Library pseudocode for aarch32/translation/translation/AArch32.S1TranslateLD

// AArch32.S1TranslateLD() // ======================= // Perform a stage 1 translation using long-descriptor format mapping VA to IPA/PA // depending on the regime (FaultRecord, AddressDescriptor) AArch32.S1TranslateLD(FaultRecord fault_in, Regime regime, SecurityState ss, bits(32) va, AccType acctype, boolean aligned, boolean iswrite, boolean ispriv) FaultRecord fault = fault_in; fault.secondstage = FALSE; fault.s2fs1walk = FALSE; if !AArch32.S1Enabled(regime, ss) then return AArch32.S1DisabledOutput(fault, regime, ss, va, acctype, aligned); walkparams = AArch32.GetS1TTWParams(regime, va); if AArch32.VAIsOutOfRange(regime, walkparams, va) then fault.level = 1; fault.statuscode = Fault_Translation; return (fault, AddressDescriptor UNKNOWN); TTWState walkstate; (fault, walkstate) = AArch32.S1WalkLD(fault, regime, ss, walkparams, va, ispriv); if fault.statuscode != Fault_None then return (fault, AddressDescriptor UNKNOWN); SetInGuardedPage(FALSE); // AArch32-VMSA does not guard any pages if AArch32.S1HasAlignmentFault(acctype, aligned, walkparams.ntlsmd, walkstate.memattrs) then fault.statuscode = Fault_Alignment; elsif IsAtomicRW(acctype) then if AArch32.S1LDHasPermissionsFault(regime, ss, walkparams, walkstate.permissions, walkstate.memattrs.memtype, walkstate.baseaddress.paspace, ispriv, acctype, FALSE) then // The permission fault was not caused by lack of write permissions fault.statuscode = Fault_Permission; fault.write = FALSE; elsif AArch32.S1LDHasPermissionsFault(regime, ss, walkparams, walkstate.permissions, walkstate.memattrs.memtype, walkstate.baseaddress.paspace, ispriv, acctype, TRUE) then // The permission fault was caused by lack of write permissions fault.statuscode = Fault_Permission; fault.write = TRUE; elsif AArch32.S1LDHasPermissionsFault(regime, ss, walkparams, walkstate.permissions, walkstate.memattrs.memtype, walkstate.baseaddress.paspace, ispriv, acctype, iswrite) then fault.statuscode = Fault_Permission; if fault.statuscode != Fault_None then return (fault, AddressDescriptor UNKNOWN); MemoryAttributes memattrs; if ((acctype == AccType_IFETCH && (walkstate.memattrs.memtype == MemType_Device || !AArch32.S1ICacheEnabled(regime))) || (acctype != AccType_IFETCH && walkstate.memattrs.memtype == MemType_Normal && !AArch32.S1DCacheEnabled(regime))) then // Treat memory attributes as Normal Non-Cacheable memattrs = NormalNCMemAttr(); memattrs.xs = walkstate.memattrs.xs; else memattrs = walkstate.memattrs; // Shareability value of stage 1 translation subject to stage 2 is IMPLEMENTATION DEFINED // to be either effective value or descriptor value if (regime == Regime_EL10 && AArch32.EL2Enabled(ss) && (if ELStateUsingAArch32(EL2, ss == SS_Secure) then HCR.VM else HCR_EL2.VM) == '1' && !(boolean IMPLEMENTATION_DEFINED "Apply effective shareability at stage 1")) then memattrs.shareability = walkstate.memattrs.shareability; else memattrs.shareability = EffectiveShareability(memattrs); // Output Address oa = StageOA(ZeroExtend(va), walkparams.tgx, walkstate); ipa = CreateAddressDescriptor(ZeroExtend(va), oa, memattrs); return (fault, ipa);

Library pseudocode for aarch32/translation/translation/AArch32.S1TranslateSD

// AArch32.S1TranslateSD() // ======================= // Perform a stage 1 translation using short-descriptor format mapping VA to IPA/PA // depending on the regime (FaultRecord, AddressDescriptor, SDFType) AArch32.S1TranslateSD(FaultRecord fault_in, Regime regime, SecurityState ss, bits(32) va, AccType acctype, boolean aligned, boolean iswrite, boolean ispriv) FaultRecord fault = fault_in; fault.secondstage = FALSE; fault.s2fs1walk = FALSE; if !AArch32.S1Enabled(regime, ss) then AddressDescriptor ipa; (fault, ipa) = AArch32.S1DisabledOutput(fault, regime, ss, va, acctype, aligned); return (fault, ipa, SDFType UNKNOWN); TTWState walkstate; (fault, walkstate) = AArch32.S1WalkSD(fault, regime, ss, va, ispriv); if fault.statuscode != Fault_None then return (fault, AddressDescriptor UNKNOWN, SDFType UNKNOWN); domain = AArch32.OutputDomain(regime, walkstate.domain); SetInGuardedPage(FALSE); // AArch32-VMSA does not guard any pages bit ntlsmd; if HaveTrapLoadStoreMultipleDeviceExt() then case regime of when Regime_EL30 ntlsmd = SCTLR_S.nTLSMD; when Regime_EL10 ntlsmd = if HaveAArch32EL(EL3) then SCTLR_NS.nTLSMD else SCTLR.nTLSMD; else ntlsmd = '1'; if AArch32.S1HasAlignmentFault(acctype, aligned, ntlsmd, walkstate.memattrs) then fault.statuscode = Fault_Alignment; elsif !(acctype IN {AccType_IC, AccType_DC}) && domain == Domain_NoAccess then fault.statuscode = Fault_Domain; elsif domain == Domain_Client then if IsAtomicRW(acctype) then if AArch32.S1SDHasPermissionsFault(regime, ss, walkstate.permissions, walkstate.memattrs.memtype, walkstate.baseaddress.paspace, ispriv, acctype, FALSE) then // The permission fault was not caused by lack of write permissions fault.statuscode = Fault_Permission; fault.write = FALSE; elsif AArch32.S1SDHasPermissionsFault(regime, ss, walkstate.permissions, walkstate.memattrs.memtype, walkstate.baseaddress.paspace, ispriv, acctype, TRUE) then // The permission fault was caused by lack of write permissions fault.statuscode = Fault_Permission; fault.write = TRUE; elsif AArch32.S1SDHasPermissionsFault(regime, ss, walkstate.permissions, walkstate.memattrs.memtype, walkstate.baseaddress.paspace, ispriv, acctype, iswrite) then fault.statuscode = Fault_Permission; if fault.statuscode != Fault_None then fault.domain = walkstate.domain; return (fault, AddressDescriptor UNKNOWN, walkstate.sdftype); MemoryAttributes memattrs; if ((acctype == AccType_IFETCH && (walkstate.memattrs.memtype == MemType_Device || !AArch32.S1ICacheEnabled(regime))) || (acctype != AccType_IFETCH && walkstate.memattrs.memtype == MemType_Normal && !AArch32.S1DCacheEnabled(regime))) then // Treat memory attributes as Normal Non-Cacheable memattrs = NormalNCMemAttr(); memattrs.xs = walkstate.memattrs.xs; else memattrs = walkstate.memattrs; // Shareability value of stage 1 translation subject to stage 2 is IMPLEMENTATION DEFINED // to be either effective value or descriptor value if (regime == Regime_EL10 && AArch32.EL2Enabled(ss) && (if ELStateUsingAArch32(EL2, ss == SS_Secure) then HCR.VM else HCR_EL2.VM) == '1' && !(boolean IMPLEMENTATION_DEFINED "Apply effective shareability at stage 1")) then memattrs.shareability = walkstate.memattrs.shareability; else memattrs.shareability = EffectiveShareability(memattrs); // Output Address oa = AArch32.SDStageOA(walkstate.baseaddress, va, walkstate.sdftype); ipa = CreateAddressDescriptor(ZeroExtend(va), oa, memattrs); return (fault, ipa, walkstate.sdftype);

Library pseudocode for aarch32/translation/translation/AArch32.S2Translate

// AArch32.S2Translate() // ===================== // Perform a stage 2 translation mapping an IPA to a PA (FaultRecord, AddressDescriptor) AArch32.S2Translate(FaultRecord fault_in, AddressDescriptor ipa, SecurityState ss, boolean s2fs1walk, AccType acctype, boolean aligned, boolean iswrite, boolean ispriv) FaultRecord fault = fault_in; assert IsZero(ipa.paddress.address<51:40>); if !ELStateUsingAArch32(EL2, ss == SS_Secure) then s1aarch64 = FALSE; return AArch64.S2Translate(fault, ipa, s1aarch64, ss, s2fs1walk, acctype, aligned, iswrite, ispriv); // Prepare fault fields in case a fault is detected fault.statuscode = Fault_None; fault.secondstage = TRUE; fault.s2fs1walk = s2fs1walk; fault.ipaddress = ipa.paddress; walkparams = AArch32.GetS2TTWParams(); if walkparams.vm == '0' then // Stage 2 is disabled return (fault, ipa); if AArch32.IPAIsOutOfRange(walkparams, ipa.paddress.address<39:0>) then fault.statuscode = Fault_Translation; fault.level = 1; return (fault, AddressDescriptor UNKNOWN); TTWState walkstate; (fault, walkstate) = AArch32.S2Walk(fault, walkparams, ipa); if fault.statuscode != Fault_None then return (fault, AddressDescriptor UNKNOWN); if AArch32.S2HasAlignmentFault(acctype, aligned, walkstate.memattrs) then fault.statuscode = Fault_Alignment; elsif IsAtomicRW(acctype) then assert !s2fs1walk; // AArch32 does not support HW update of TT if AArch32.S2HasPermissionsFault(s2fs1walk, walkparams, walkstate.permissions, walkstate.memattrs.memtype, ispriv, acctype, FALSE) then // The permission fault was not caused by lack of write permissions fault.statuscode = Fault_Permission; fault.write = FALSE; elsif AArch32.S2HasPermissionsFault(s2fs1walk, walkparams, walkstate.permissions, walkstate.memattrs.memtype, ispriv, acctype, TRUE) then // The permission fault was caused by lack of write permissions fault.statuscode = Fault_Permission; fault.write = TRUE; elsif AArch32.S2HasPermissionsFault(s2fs1walk, walkparams, walkstate.permissions, walkstate.memattrs.memtype, ispriv, acctype, iswrite) then fault.statuscode = Fault_Permission; MemoryAttributes s2_memattrs; if ((s2fs1walk && walkstate.memattrs.memtype == MemType_Device) || (acctype == AccType_IFETCH && (walkstate.memattrs.memtype == MemType_Device || HCR2.ID == '1')) || (acctype != AccType_IFETCH && walkstate.memattrs.memtype == MemType_Normal && HCR2.CD == '1')) then // Treat memory attributes as Normal Non-Cacheable s2_memattrs = NormalNCMemAttr(); s2_memattrs.xs = walkstate.memattrs.xs; else s2_memattrs = walkstate.memattrs; memattrs = S2CombineS1MemAttrs(ipa.memattrs, s2_memattrs); ipa_64 = ZeroExtend(ipa.paddress.address<39:0>, 64); // Output Address oa = StageOA(ipa_64, walkparams.tgx, walkstate); pa = CreateAddressDescriptor(ipa.vaddress, oa, memattrs); return (fault, pa);

Library pseudocode for aarch32/translation/translation/AArch32.SDStageOA

// AArch32.SDStageOA() // =================== // Given the final walk state of a short-descriptor translation walk, // map the untranslated input address bits to the base output address FullAddress AArch32.SDStageOA(FullAddress baseaddress, bits(32) va, SDFType sdftype) integer tsize; case sdftype of when SDFType_SmallPage tsize = 12; when SDFType_LargePage tsize = 16; when SDFType_Section tsize = 20; when SDFType_Supersection tsize = 24; // Output Address FullAddress oa; oa.address = baseaddress.address<51:tsize>:va<tsize-1:0>; oa.paspace = baseaddress.paspace; return oa;
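The concatenation baseaddress.address<51:tsize>:va<tsize-1:0> above is a mask-and-merge of the descriptor base with the untranslated low VA bits. A minimal C sketch of the same computation (illustrative only, not part of the pseudocode library; the function name is invented):

#include <stdint.h>

/* Combine a short-descriptor base address with the untranslated low VA bits.
   tsize: 12 = small page, 16 = large page, 20 = section, 24 = supersection. */
uint64_t sd_stage_oa(uint64_t base, uint32_t va, int tsize)
{
    uint64_t low_mask = ((uint64_t)1 << tsize) - 1;     /* va<tsize-1:0> */
    return (base & ~low_mask) | ((uint64_t)va & low_mask);
}

For example, a 1MB section (tsize 20) with base 0x80100000 maps VA 0x123ABCDE to output address 0x801ABCDE.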

Library pseudocode for aarch32/translation/translation/AArch32.TranslateAddress

// AArch32.TranslateAddress() // ========================== // Main entry point for translating an address AddressDescriptor AArch32.TranslateAddress(bits(32) va, AccType acctype, boolean iswrite, boolean aligned, integer size) regime = TranslationRegime(PSTATE.EL, acctype); if !RegimeUsingAArch32(regime) then return AArch64.TranslateAddress(ZeroExtend(va, 64), acctype, iswrite, aligned, size); result = AArch32.FullTranslate(va, acctype, iswrite, aligned); if !IsFault(result) then result.fault = AArch32.CheckDebug(va, acctype, iswrite, size); // Update virtual address for abort functions result.vaddress = ZeroExtend(va); return result;

Library pseudocode for aarch32/translation/walk/AArch32.DecodeDescriptorTypeLD

// AArch32.DecodeDescriptorTypeLD() // ================================ // Determine whether the long-descriptor is a page, block or table DescriptorType AArch32.DecodeDescriptorTypeLD(bits(64) descriptor, integer level) if descriptor<1:0> == '11' && level == FINAL_LEVEL then return DescriptorType_Page; elsif descriptor<1:0> == '11' then return DescriptorType_Table; elsif descriptor<1:0> == '01' && level != FINAL_LEVEL then return DescriptorType_Block; else return DescriptorType_Invalid;

Library pseudocode for aarch32/translation/walk/AArch32.DecodeDescriptorTypeSD

// AArch32.DecodeDescriptorTypeSD() // ================================ // Determine the type of the short-descriptor SDFType AArch32.DecodeDescriptorTypeSD(bits(32) descriptor, integer level) if level == 1 && descriptor<1:0> == '01' then return SDFType_Table; elsif level == 1 && descriptor<18,1> == '01' then return SDFType_Section; elsif level == 1 && descriptor<18,1> == '11' then return SDFType_Supersection; elsif level == 2 && descriptor<1:0> == '01' then return SDFType_LargePage; elsif level == 2 && descriptor<1:0> IN {'1x'} then return SDFType_SmallPage; else return SDFType_Invalid;
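Because the type is encoded entirely in a few fixed descriptor bits, the decode can be mirrored directly in C. A hedged sketch (illustrative only; the enum and function names are invented):

#include <stdint.h>

typedef enum { SD_INVALID, SD_TABLE, SD_SECTION, SD_SUPERSECTION,
               SD_LARGE_PAGE, SD_SMALL_PAGE } sdf_type;

/* Classify a 32-bit short-format descriptor, mirroring the bit tests above:
   bits <1:0> select the class, and bit 18 splits Section from Supersection. */
sdf_type decode_sd(uint32_t desc, int level)
{
    if (level == 1) {
        if ((desc & 3) == 1) return SD_TABLE;
        if (desc & 2)        return ((desc >> 18) & 1) ? SD_SUPERSECTION : SD_SECTION;
    } else if (level == 2) {
        if ((desc & 3) == 1) return SD_LARGE_PAGE;
        if (desc & 2)        return SD_SMALL_PAGE;   /* descriptor<1:0> IN {'1x'} */
    }
    return SD_INVALID;
}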

Library pseudocode for aarch32/translation/walk/AArch32.S1IASize

// AArch32.S1IASize() // ================== // Retrieve the number of bits containing the input address for stage 1 translation integer AArch32.S1IASize(bits(3) txsz) return 32 - UInt(txsz);

Library pseudocode for aarch32/translation/walk/AArch32.S1WalkLD

// AArch32.S1WalkLD() // ================== // Traverse stage 1 translation tables in long format to obtain the final descriptor (FaultRecord, TTWState) AArch32.S1WalkLD(FaultRecord fault_in, Regime regime, SecurityState ss, S1TTWParams walkparams, bits(32) va, boolean ispriv) FaultRecord fault = fault_in; bits(3) txsz; bits(64) ttbr; bit epd; if regime == Regime_EL2 then ttbr = HTTBR; txsz = walkparams.t0sz; else varange = AArch32.GetVARange(va, walkparams.t0sz, walkparams.t1sz); bits(64) ttbr0; bits(64) ttbr1; TTBCR_Type ttbcr; if regime == Regime_EL30 then ttbcr = TTBCR_S; ttbr0 = TTBR0_S; ttbr1 = TTBR1_S; elsif HaveAArch32EL(EL3) then ttbcr = TTBCR_NS; ttbr0 = TTBR0_NS; ttbr1 = TTBR1_NS; else ttbcr = TTBCR; ttbr0 = TTBR0; ttbr1 = TTBR1; assert ttbcr.EAE == '1'; if varange == VARange_LOWER then txsz = walkparams.t0sz; ttbr = ttbr0; epd = ttbcr.EPD0; else txsz = walkparams.t1sz; ttbr = ttbr1; epd = ttbcr.EPD1; if regime != Regime_EL2 && epd == '1' then fault.level = 1; fault.statuscode = Fault_Translation; return (fault, TTWState UNKNOWN); // Input Address size iasize = AArch32.S1IASize(txsz); granulebits = TGxGranuleBits(walkparams.tgx); stride = granulebits - 3; startlevel = FINAL_LEVEL - (((iasize-1) - granulebits) DIV stride); levels = FINAL_LEVEL - startlevel; if !IsZero(ttbr<47:40>) then fault.statuscode = Fault_AddressSize; fault.level = 0; return (fault, TTWState UNKNOWN); FullAddress baseaddress; baselsb = (iasize - (levels*stride + granulebits)) + 3; baseaddress.paspace = if ss == SS_Secure then PAS_Secure else PAS_NonSecure; baseaddress.address = ZeroExtend(ttbr<39:baselsb>:Zeros(baselsb)); TTWState walkstate; walkstate.baseaddress = baseaddress; walkstate.level = startlevel; walkstate.istable = TRUE; // In regimes that support global and non-global translations, translation // table entries from lookup levels other than the final level of lookup // are treated as being non-global walkstate.nG = if HasUnprivileged(regime) then '1' else '0'; walkstate.memattrs = WalkMemAttrs(walkparams.sh, walkparams.irgn, walkparams.orgn); walkstate.permissions.ap_table = '00'; walkstate.permissions.xn_table = '0'; walkstate.permissions.pxn_table = '0'; indexmsb = iasize - 1; bits(64) descriptor; AddressDescriptor walkaddress; walkaddress.vaddress = ZeroExtend(va); if !AArch32.S1DCacheEnabled(regime) then walkaddress.memattrs = NormalNCMemAttr(); walkaddress.memattrs.xs = walkstate.memattrs.xs; else walkaddress.memattrs = walkstate.memattrs; // Shareability value of stage 1 translation subject to stage 2 is IMPLEMENTATION DEFINED // to be either effective value or descriptor value if (regime == Regime_EL10 && AArch32.EL2Enabled(ss) && (if ELStateUsingAArch32(EL2, ss == SS_Secure) then HCR.VM else HCR_EL2.VM) == '1' && !(boolean IMPLEMENTATION_DEFINED "Apply effective shareability at stage 1")) then walkaddress.memattrs.shareability = walkstate.memattrs.shareability; else walkaddress.memattrs.shareability = EffectiveShareability(walkaddress.memattrs); integer indexlsb; DescriptorType desctype; repeat fault.level = walkstate.level; indexlsb = (FINAL_LEVEL - walkstate.level)*stride + granulebits; bits(40) index = ZeroExtend(va<indexmsb:indexlsb>:'000'); walkaddress.paddress.address = walkstate.baseaddress.address OR ZeroExtend(index); walkaddress.paddress.paspace = walkstate.baseaddress.paspace; // If there are two stages of translation, then the first stage table walk addresses // are themselves subject to translation if regime == Regime_EL10 && AArch32.EL2Enabled(ss) then s2fs1walk = TRUE; 
s2acctype = AccType_TTW; s2aligned = TRUE; s2write = FALSE; (s2fault, s2walkaddress) = AArch32.S2Translate(fault, walkaddress, ss, s2fs1walk, s2acctype, s2aligned, s2write, ispriv); // Check for a fault on the stage 2 walk if s2fault.statuscode != Fault_None then return (s2fault, TTWState UNKNOWN); (fault, descriptor) = FetchDescriptor(walkparams.ee, s2walkaddress, fault); else (fault, descriptor) = FetchDescriptor(walkparams.ee, walkaddress, fault); if fault.statuscode != Fault_None then return (fault, TTWState UNKNOWN); desctype = AArch32.DecodeDescriptorTypeLD(descriptor, walkstate.level); case desctype of when DescriptorType_Table if !IsZero(descriptor<47:40>) then fault.statuscode = Fault_AddressSize; return (fault, TTWState UNKNOWN); walkstate.baseaddress.address = ZeroExtend(descriptor<39:12>:Zeros(12)); if walkstate.baseaddress.paspace == PAS_Secure && descriptor<63> == '1' then walkstate.baseaddress.paspace = PAS_NonSecure; if walkparams.hpd == '0' then walkstate.permissions.xn_table = (walkstate.permissions.xn_table OR descriptor<60>); walkstate.permissions.ap_table = (walkstate.permissions.ap_table OR descriptor<62:61>); walkstate.permissions.pxn_table = (walkstate.permissions.pxn_table OR descriptor<59>); walkstate.level = walkstate.level + 1; indexmsb = indexlsb - 1; when DescriptorType_Invalid fault.statuscode = Fault_Translation; return (fault, TTWState UNKNOWN); when DescriptorType_Page, DescriptorType_Block walkstate.istable = FALSE; until desctype IN {DescriptorType_Page, DescriptorType_Block}; // Check the output address is inside the supported range if !IsZero(descriptor<47:40>) then fault.statuscode = Fault_AddressSize; return (fault, TTWState UNKNOWN); // Check the access flag if descriptor<10> == '0' then fault.statuscode = Fault_AccessFlag; return (fault, TTWState UNKNOWN); walkstate.permissions.xn = descriptor<54>; walkstate.permissions.pxn = descriptor<53>; walkstate.permissions.ap = descriptor<7:6>:'1'; walkstate.contiguous = descriptor<52>; if regime == Regime_EL2 then // All EL2 regime accesses are treated as Global walkstate.nG = '0'; elsif ss == SS_Secure && walkstate.baseaddress.paspace == PAS_NonSecure then // When a PE is using the Long-descriptor translation table format, // and is in Secure state, a translation must be treated as non-global, // regardless of the value of the nG bit, // if NSTable is set to 1 at any level of the translation table walk. walkstate.nG = '1'; else walkstate.nG = descriptor<11>; walkstate.baseaddress.address = ZeroExtend(descriptor<39:indexlsb>:Zeros(indexlsb)); if walkstate.baseaddress.paspace == PAS_Secure && descriptor<5> == '1' then walkstate.baseaddress.paspace = PAS_NonSecure; memattr = descriptor<4:2>; sh = descriptor<9:8>; attr = MAIRAttr(UInt(memattr), walkparams.mair); s1aarch64 = FALSE; walkstate.memattrs = S1DecodeMemAttrs(attr, sh, s1aarch64); return (fault, walkstate);
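The start-level arithmetic in AArch32.S1WalkLD() falls out of the granule geometry: with a 4KB granule each level resolves stride = 9 index bits above the 12 page-offset bits, so the walk starts at whatever level covers iasize bits. A small C sketch of that arithmetic (illustrative only, not part of the pseudocode library):

/* 4KB granule: FINAL_LEVEL = 3, granulebits = 12, stride = granulebits - 3 = 9. */
enum { FINAL_LEVEL = 3, GRANULEBITS = 12, STRIDE = 9 };

/* startlevel = FINAL_LEVEL - (((iasize-1) - granulebits) DIV stride) */
int start_level(int iasize)
{
    return FINAL_LEVEL - (((iasize - 1) - GRANULEBITS) / STRIDE);
}

For a full 32-bit input address this gives start_level(32) == 1, matching the three-level LPAE walk performed above.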

Library pseudocode for aarch32/translation/walk/AArch32.S1WalkSD

// AArch32.S1WalkSD() // ================== // Traverse stage 1 translation tables in short format to obtain the final descriptor (FaultRecord, TTWState) AArch32.S1WalkSD(FaultRecord fault_in, Regime regime, SecurityState ss, bits(32) va, boolean ispriv) FaultRecord fault = fault_in; SCTLR_Type sctlr; TTBCR_Type ttbcr; TTBR0_Type ttbr0; TTBR1_Type ttbr1; // Determine correct translation control registers to use. if regime == Regime_EL30 then sctlr = SCTLR_S; ttbcr = TTBCR_S; ttbr0 = TTBR0_S; ttbr1 = TTBR1_S; elsif HaveAArch32EL(EL3) then sctlr = SCTLR_NS; ttbcr = TTBCR_NS; ttbr0 = TTBR0_NS; ttbr1 = TTBR1_NS; else sctlr = SCTLR; ttbcr = TTBCR; ttbr0 = TTBR0; ttbr1 = TTBR1; assert ttbcr.EAE == '0'; ee = sctlr.EE; afe = sctlr.AFE; tre = sctlr.TRE; n = UInt(ttbcr.N); bits(32) ttb; bits(1) pd; bits(2) irgn; bits(2) rgn; bits(1) s; bits(1) nos; if n == 0 || IsZero(va<31:(32-n)>) then ttb = ttbr0.TTB0:Zeros(7); pd = ttbcr.PD0; irgn = ttbr0.IRGN; rgn = ttbr0.RGN; s = ttbr0.S; nos = ttbr0.NOS; else n = 0; // TTBR1 translation always treats N as 0 ttb = ttbr1.TTB1:Zeros(7); pd = ttbcr.PD1; irgn = ttbr1.IRGN; rgn = ttbr1.RGN; s = ttbr1.S; nos = ttbr1.NOS; // Check if Translation table walk disabled for translations with this Base register. if pd == '1' then fault.level = 1; fault.statuscode = Fault_Translation; return (fault, TTWState UNKNOWN); FullAddress baseaddress; baseaddress.paspace = if ss == SS_Secure then PAS_Secure else PAS_NonSecure; baseaddress.address = ZeroExtend(ttb<31:14-n>:Zeros(14-n)); TTWState walkstate; walkstate.baseaddress = baseaddress; // In regimes that support global and non-global translations, translation // table entries from lookup levels other than the final level of lookup // are treated as being non-global. Translations in Short-Descriptor Format // always support global & non-global translations. 
walkstate.nG = '1'; walkstate.memattrs = WalkMemAttrs(s:nos, irgn, rgn); walkstate.level = 1; walkstate.istable = TRUE; bits(4) domain; bits(32) descriptor; AddressDescriptor walkaddress; walkaddress.vaddress = ZeroExtend(va); if !AArch32.S1DCacheEnabled(regime) then walkaddress.memattrs = NormalNCMemAttr(); walkaddress.memattrs.xs = walkstate.memattrs.xs; else walkaddress.memattrs = walkstate.memattrs; // Shareability value of stage 1 translation subject to stage 2 is IMPLEMENTATION DEFINED // to be either effective value or descriptor value if (regime == Regime_EL10 && AArch32.EL2Enabled(ss) && (if ELStateUsingAArch32(EL2, ss == SS_Secure) then HCR.VM else HCR_EL2.VM) == '1' && !(boolean IMPLEMENTATION_DEFINED "Apply effective shareability at stage 1")) then walkaddress.memattrs.shareability = walkstate.memattrs.shareability; else walkaddress.memattrs.shareability = EffectiveShareability(walkaddress.memattrs); bit nG; bit ns; bit pxn; bits(3) ap; bits(3) tex; bit c; bit b; bit xn; repeat fault.level = walkstate.level; bits(32) index; if walkstate.level == 1 then index = ZeroExtend(va<31-n:20>:'00'); else index = ZeroExtend(va<19:12>:'00'); walkaddress.paddress.address = walkstate.baseaddress.address OR ZeroExtend(index); walkaddress.paddress.paspace = walkstate.baseaddress.paspace; if regime == Regime_EL10 && AArch32.EL2Enabled(ss) then s2fs1walk = TRUE; s2acctype = AccType_TTW; s2aligned = TRUE; s2write = FALSE; (s2fault, s2walkaddress) = AArch32.S2Translate(fault, walkaddress, ss, s2fs1walk, s2acctype, s2aligned, s2write, ispriv); if s2fault.statuscode != Fault_None then return (s2fault, TTWState UNKNOWN); (fault, descriptor) = FetchDescriptor(ee, s2walkaddress, fault); else (fault, descriptor) = FetchDescriptor(ee, walkaddress, fault); if fault.statuscode != Fault_None then return (fault, TTWState UNKNOWN); walkstate.sdftype = AArch32.DecodeDescriptorTypeSD(descriptor, walkstate.level); case walkstate.sdftype of when SDFType_Invalid fault.domain = domain; fault.statuscode = Fault_Translation; return (fault, TTWState UNKNOWN); when SDFType_Table domain = descriptor<8:5>; ns = descriptor<3>; pxn = descriptor<2>; walkstate.baseaddress.address = ZeroExtend(descriptor<31:10>:Zeros(10)); walkstate.level = 2; when SDFType_SmallPage nG = descriptor<11>; s = descriptor<10>; ap = descriptor<9,5:4>; tex = descriptor<8:6>; c = descriptor<3>; b = descriptor<2>; xn = descriptor<0>; walkstate.baseaddress.address = ZeroExtend(descriptor<31:12>:Zeros(12)); walkstate.istable = FALSE; when SDFType_LargePage xn = descriptor<15>; tex = descriptor<14:12>; nG = descriptor<11>; s = descriptor<10>; ap = descriptor<9,5:4>; c = descriptor<3>; b = descriptor<2>; walkstate.baseaddress.address = ZeroExtend(descriptor<31:16>:Zeros(16)); walkstate.istable = FALSE; when SDFType_Section ns = descriptor<19>; nG = descriptor<17>; s = descriptor<16>; ap = descriptor<15,11:10>; tex = descriptor<14:12>; domain = descriptor<8:5>; xn = descriptor<4>; c = descriptor<3>; b = descriptor<2>; pxn = descriptor<0>; walkstate.baseaddress.address = ZeroExtend(descriptor<31:20>:Zeros(20)); walkstate.istable = FALSE; when SDFType_Supersection ns = descriptor<19>; nG = descriptor<17>; s = descriptor<16>; ap = descriptor<15,11:10>; tex = descriptor<14:12>; xn = descriptor<4>; c = descriptor<3>; b = descriptor<2>; pxn = descriptor<0>; domain = '0000'; walkstate.baseaddress.address = ZeroExtend(descriptor<8:5,23:20,31:24>:Zeros(24)); walkstate.istable = FALSE; until walkstate.sdftype != SDFType_Table; if afe == '1' && ap<0> == '0' then 
fault.domain = domain; fault.statuscode = Fault_AccessFlag; return (fault, TTWState UNKNOWN); // Decode the TEX, C, B and S bits to produce target memory attributes if tre == '1' then walkstate.memattrs = AArch32.RemappedTEXDecode(regime, tex, c, b, s); elsif RemapRegsHaveResetValues() then walkstate.memattrs = AArch32.DefaultTEXDecode(tex, c, b, s); else walkstate.memattrs = MemoryAttributes IMPLEMENTATION_DEFINED; walkstate.permissions.ap = ap; walkstate.permissions.xn = xn; walkstate.permissions.pxn = pxn; walkstate.domain = domain; walkstate.nG = nG; if ss == SS_Secure && ns == '0' then walkstate.baseaddress.paspace = PAS_Secure; else walkstate.baseaddress.paspace = PAS_NonSecure; return (fault, walkstate);

Library pseudocode for aarch32/translation/walk/AArch32.S2IASize

// AArch32.S2IASize() // ================== // Retrieve the number of bits containing the input address for stage 2 translation integer AArch32.S2IASize(bits(4) t0sz) return 32 - SInt(t0sz);
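Unlike the stage 1 TxSZ fields, the stage 2 T0SZ is a signed four-bit field (hence SInt above), so negative values enlarge the input address space beyond 32 bits. A hedged C sketch of the sign extension (illustrative only):

#include <stdint.h>

/* 32 - SInt(t0sz) for a 4-bit field: t0sz = 0b1000 (-8) gives a 40-bit IPA. */
int s2_ia_size(unsigned t0sz)
{
    int s = (int)(t0sz & 0xF);
    if (s & 0x8) s -= 16;   /* sign-extend from bit 3 */
    return 32 - s;          /* 25..40 for t0sz in -8..7 */
}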

Library pseudocode for aarch32/translation/walk/AArch32.S2StartLevel

// AArch32.S2StartLevel() // ====================== // Determine the initial lookup level when performing a stage 2 translation // table walk integer AArch32.S2StartLevel(bits(2) sl0) return 2 - UInt(sl0);

Library pseudocode for aarch32/translation/walk/AArch32.S2Walk

// AArch32.S2Walk() // ================ // Traverse stage 2 translation tables in long format to obtain the final descriptor (FaultRecord, TTWState) AArch32.S2Walk(FaultRecord fault_in, S2TTWParams walkparams, AddressDescriptor ipa) FaultRecord fault = fault_in; if walkparams.sl0 IN {'1x'} || AArch32.S2InconsistentSL(walkparams) then fault.statuscode = Fault_Translation; fault.level = 1; return (fault, TTWState UNKNOWN); // Input Address size iasize = AArch32.S2IASize(walkparams.t0sz); startlevel = AArch32.S2StartLevel(walkparams.sl0); levels = FINAL_LEVEL - startlevel; granulebits = TGxGranuleBits(walkparams.tgx); stride = granulebits - 3; if !IsZero(VTTBR<47:40>) then fault.statuscode = Fault_AddressSize; fault.level = 0; return (fault, TTWState UNKNOWN); FullAddress baseaddress; baselsb = (iasize - (levels*stride + granulebits)) + 3; baseaddress.paspace = PAS_NonSecure; baseaddress.address = ZeroExtend(VTTBR<39:baselsb>:Zeros(baselsb)); TTWState walkstate; walkstate.baseaddress = baseaddress; walkstate.level = startlevel; walkstate.istable = TRUE; walkstate.memattrs = WalkMemAttrs(walkparams.sh, walkparams.irgn, walkparams.orgn); indexmsb = iasize - 1; bits(64) descriptor; AddressDescriptor walkaddress; walkaddress.vaddress = ipa.vaddress; if HCR2.CD == '1' then walkaddress.memattrs = NormalNCMemAttr(); walkaddress.memattrs.xs = walkstate.memattrs.xs; else walkaddress.memattrs = walkstate.memattrs; walkaddress.memattrs.shareability = EffectiveShareability(walkaddress.memattrs); integer indexlsb; DescriptorType desctype; repeat fault.level = walkstate.level; indexlsb = (FINAL_LEVEL - walkstate.level)*stride + granulebits; bits(40) index = ZeroExtend(ipa.paddress.address<indexmsb:indexlsb>:'000'); walkaddress.paddress.address = walkstate.baseaddress.address OR ZeroExtend(index); walkaddress.paddress.paspace = walkstate.baseaddress.paspace; (fault, descriptor) = FetchDescriptor(walkparams.ee, walkaddress, fault); if fault.statuscode != Fault_None then return (fault, TTWState UNKNOWN); desctype = AArch32.DecodeDescriptorTypeLD(descriptor, walkstate.level); case desctype of when DescriptorType_Table if !IsZero(descriptor<47:40>) then fault.statuscode = Fault_AddressSize; return (fault, TTWState UNKNOWN); walkstate.baseaddress.address = ZeroExtend(descriptor<39:12>:Zeros(12)); walkstate.level = walkstate.level + 1; indexmsb = indexlsb - 1; when DescriptorType_Invalid fault.statuscode = Fault_Translation; return (fault, TTWState UNKNOWN); when DescriptorType_Page, DescriptorType_Block walkstate.istable = FALSE; until desctype IN {DescriptorType_Page, DescriptorType_Block}; // Check the output address is inside the supported range if !IsZero(descriptor<47:40>) then fault.statuscode = Fault_AddressSize; return (fault, TTWState UNKNOWN); // Check the access flag if descriptor<10> == '0' then fault.statuscode = Fault_AccessFlag; return (fault, TTWState UNKNOWN); // Unpack the descriptor into address and upper and lower block attributes walkstate.baseaddress.address = ZeroExtend(descriptor<39:indexlsb>:Zeros(indexlsb)); walkstate.permissions.s2ap = descriptor<7:6>; walkstate.permissions.s2xn = descriptor<54>; if HaveExtendedExecuteNeverExt() then walkstate.permissions.s2xnx = descriptor<53>; else walkstate.permissions.s2xnx = '0'; memattr = descriptor<5:2>; sh = descriptor<9:8>; walkstate.memattrs = S2DecodeMemAttrs(memattr, sh); walkstate.contiguous = descriptor<52>; return (fault, walkstate);

Library pseudocode for aarch32/translation/walk/AArch32.TranslationSizeSD

// AArch32.TranslationSizeSD() // =========================== // Determine the size of the translation integer AArch32.TranslationSizeSD(SDFType sdftype) integer tsize; case sdftype of when SDFType_SmallPage tsize = 12; when SDFType_LargePage tsize = 16; when SDFType_Section tsize = 20; when SDFType_Supersection tsize = 24; return tsize;

Library pseudocode for aarch32/translation/walk/RemapRegsHaveResetValues

boolean RemapRegsHaveResetValues();

Library pseudocode for aarch32/translation/walkparams/AArch32.GetS1TTWParams

// AArch32.GetS1TTWParams() // ======================== // Returns stage 1 translation table walk parameters from respective controlling // system registers. S1TTWParams AArch32.GetS1TTWParams(Regime regime, bits(32) va) S1TTWParams walkparams; case regime of when Regime_EL2 walkparams = AArch32.S1TTWParamsEL2(); when Regime_EL10 walkparams = AArch32.S1TTWParamsEL10(va); when Regime_EL30 walkparams = AArch32.S1TTWParamsEL30(va); return walkparams;

Library pseudocode for aarch32/translation/walkparams/AArch32.GetS2TTWParams

// AArch32.GetS2TTWParams() // ======================== // Gather walk parameters for stage 2 translation S2TTWParams AArch32.GetS2TTWParams() S2TTWParams walkparams; walkparams.tgx = TGx_4KB; walkparams.s = VTCR.S; walkparams.t0sz = VTCR.T0SZ; walkparams.sl0 = VTCR.SL0; walkparams.irgn = VTCR.IRGN0; walkparams.orgn = VTCR.ORGN0; walkparams.sh = VTCR.SH0; walkparams.ee = HSCTLR.EE; walkparams.ptw = HCR.PTW; walkparams.vm = HCR.VM OR HCR.DC; // VTCR.S must match VTCR.T0SZ[3] if walkparams.s != walkparams.t0sz<3> then (-, walkparams.t0sz) = ConstrainUnpredictableBits(Unpredictable_RESVTCRS); return walkparams;

Library pseudocode for aarch32/translation/walkparams/AArch32.GetVARange

// AArch32.GetVARange() // ==================== // Select the translation base address for stage 1 long-descriptor walks VARange AArch32.GetVARange(bits(32) va, bits(3) t0sz, bits(3) t1sz) // Lower range Input Address size lo_iasize = AArch32.S1IASize(t0sz); // Upper range Input Address size up_iasize = AArch32.S1IASize(t1sz); if t1sz == '000' && t0sz == '000' then return VARange_LOWER; elsif t1sz == '000' then return if IsZero(va<31:lo_iasize>) then VARange_LOWER else VARange_UPPER; elsif t0sz == '000' then return if IsOnes(va<31:up_iasize>) then VARange_UPPER else VARange_LOWER; elsif IsZero(va<31:lo_iasize>) then return VARange_LOWER; elsif IsOnes(va<31:up_iasize>) then return VARange_UPPER; else // Will be reported as a Translation Fault return VARange UNKNOWN;
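In C the same lower/upper split can be written with shifts, which makes the fault gap between the two ranges explicit. A hedged sketch (illustrative only; returns 0 for the TTBR0 range, 1 for the TTBR1 range, and -1 for the gap that is reported as a Translation fault):

#include <stdint.h>

int get_va_range(uint32_t va, int t0sz, int t1sz)   /* t0sz, t1sz in 0..7 */
{
    int lo_iasize = 32 - t0sz;                      /* AArch32.S1IASize(t0sz) */
    int up_iasize = 32 - t1sz;                      /* AArch32.S1IASize(t1sz) */
    if (t0sz == 0 && t1sz == 0) return 0;           /* single lower range */
    if (t1sz == 0) return (va >> lo_iasize) == 0 ? 0 : 1;
    if (t0sz == 0) return (va >> up_iasize) == (0xFFFFFFFFu >> up_iasize) ? 1 : 0;
    if ((va >> lo_iasize) == 0) return 0;                            /* top bits all zero */
    if ((va >> up_iasize) == (0xFFFFFFFFu >> up_iasize)) return 1;   /* top bits all one  */
    return -1;                                      /* translation-fault gap */
}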

Library pseudocode for aarch32/translation/walkparams/AArch32.S1DCacheEnabled

// AArch32.S1DCacheEnabled() // ========================= // Determine cacheability of stage 1 data accesses boolean AArch32.S1DCacheEnabled(Regime regime) case regime of when Regime_EL30 return SCTLR_S.C == '1'; when Regime_EL2 return HSCTLR.C == '1'; when Regime_EL10 return (if HaveAArch32EL(EL3) then SCTLR_NS.C else SCTLR.C) == '1';

Library pseudocode for aarch32/translation/walkparams/AArch32.S1ICacheEnabled

// AArch32.S1ICacheEnabled() // ========================= // Determine cacheability of stage 1 instruction fetches boolean AArch32.S1ICacheEnabled(Regime regime) case regime of when Regime_EL30 return SCTLR_S.I == '1'; when Regime_EL2 return HSCTLR.I == '1'; when Regime_EL10 return (if HaveAArch32EL(EL3) then SCTLR_NS.I else SCTLR.I) == '1';

Library pseudocode for aarch32/translation/walkparams/AArch32.S1TTWParamsEL10

// AArch32.S1TTWParamsEL10() // ========================= // Gather stage 1 translation table walk parameters for EL1&0 regime // (with EL2 enabled or disabled). S1TTWParams AArch32.S1TTWParamsEL10(bits(32) va) bits(64) mair; bit sif; TTBCR_Type ttbcr; TTBCR2_Type ttbcr2; SCTLR_Type sctlr; if HaveAArch32EL(EL3) then ttbcr = TTBCR_NS; ttbcr2 = TTBCR2_NS; sctlr = SCTLR_NS; mair = MAIR1_NS:MAIR0_NS; sif = SCR.SIF; else ttbcr = TTBCR; ttbcr2 = TTBCR2; sctlr = SCTLR; mair = MAIR1:MAIR0; sif = SCR_EL3.SIF; assert ttbcr.EAE == '1'; S1TTWParams walkparams; walkparams.t0sz = ttbcr.T0SZ; walkparams.t1sz = ttbcr.T1SZ; walkparams.ee = sctlr.EE; walkparams.wxn = sctlr.WXN; walkparams.uwxn = sctlr.UWXN; walkparams.ntlsmd = if HaveTrapLoadStoreMultipleDeviceExt() then sctlr.nTLSMD else '1'; walkparams.mair = mair; walkparams.sif = sif; varange = AArch32.GetVARange(va, walkparams.t0sz, walkparams.t1sz); if varange == VARange_LOWER then walkparams.sh = ttbcr.SH0; walkparams.irgn = ttbcr.IRGN0; walkparams.orgn = ttbcr.ORGN0; walkparams.hpd = if AArch32.HaveHPDExt() then ttbcr.T2E AND ttbcr2.HPD0 else '0'; else walkparams.sh = ttbcr.SH1; walkparams.irgn = ttbcr.IRGN1; walkparams.orgn = ttbcr.ORGN1; walkparams.hpd = if AArch32.HaveHPDExt() then ttbcr.T2E AND ttbcr2.HPD1 else '0'; return walkparams;

Library pseudocode for aarch32/translation/walkparams/AArch32.S1TTWParamsEL2

// AArch32.S1TTWParamsEL2() // ======================== // Gather stage 1 translation table walk parameters for EL2 regime S1TTWParams AArch32.S1TTWParamsEL2() S1TTWParams walkparams; walkparams.tgx = TGx_4KB; walkparams.t0sz = HTCR.T0SZ; walkparams.irgn = HTCR.IRGN0; walkparams.orgn = HTCR.ORGN0; walkparams.sh = HTCR.SH0; walkparams.hpd = if AArch32.HaveHPDExt() then HTCR.HPD else '0'; walkparams.ee = HSCTLR.EE; walkparams.wxn = HSCTLR.WXN; if HaveTrapLoadStoreMultipleDeviceExt() then walkparams.ntlsmd = HSCTLR.nTLSMD; else walkparams.ntlsmd = '1'; walkparams.mair = HMAIR1:HMAIR0; return walkparams;

Library pseudocode for aarch32/translation/walkparams/AArch32.S1TTWParamsEL30

// AArch32.S1TTWParamsEL30() // ========================= // Gather stage 1 translation table walk parameters for EL3&0 regime S1TTWParams AArch32.S1TTWParamsEL30(bits(32) va) assert TTBCR_S.EAE == '1'; S1TTWParams walkparams; walkparams.t0sz = TTBCR_S.T0SZ; walkparams.t1sz = TTBCR_S.T1SZ; walkparams.ee = SCTLR_S.EE; walkparams.wxn = SCTLR_S.WXN; walkparams.uwxn = SCTLR_S.UWXN; walkparams.ntlsmd = if HaveTrapLoadStoreMultipleDeviceExt() then SCTLR_S.nTLSMD else '1'; walkparams.mair = MAIR1_S:MAIR0_S; walkparams.sif = SCR.SIF; varange = AArch32.GetVARange(va, walkparams.t0sz, walkparams.t1sz); if varange == VARange_LOWER then walkparams.sh = TTBCR_S.SH0; walkparams.irgn = TTBCR_S.IRGN0; walkparams.orgn = TTBCR_S.ORGN0; walkparams.hpd = if AArch32.HaveHPDExt() then TTBCR_S.T2E AND TTBCR2_S.HPD0 else '0'; else walkparams.sh = TTBCR_S.SH1; walkparams.irgn = TTBCR_S.IRGN1; walkparams.orgn = TTBCR_S.ORGN1; walkparams.hpd = if AArch32.HaveHPDExt() then TTBCR_S.T2E AND TTBCR2_S.HPD1 else '0'; return walkparams;

Library pseudocode for aarch64/debug/breakpoint/AArch64.BreakpointMatch

// AArch64.BreakpointMatch() // ========================= // Breakpoint matching in an AArch64 translation regime. boolean AArch64.BreakpointMatch(integer n, bits(64) vaddress, AccType acctype, integer size) assert !ELUsingAArch32(S1TranslationRegime()); assert n < NumBreakpointsImplemented(); enabled = DBGBCR_EL1[n].E == '1'; ispriv = PSTATE.EL != EL0; linked = DBGBCR_EL1[n].BT IN {'0x01'}; isbreakpnt = TRUE; linked_to = FALSE; state_match = AArch64.StateMatch(DBGBCR_EL1[n].SSC, DBGBCR_EL1[n].HMC, DBGBCR_EL1[n].PMC, linked, DBGBCR_EL1[n].LBN, isbreakpnt, acctype, ispriv); value_match = AArch64.BreakpointValueMatch(n, vaddress, linked_to); if HaveAArch32() && size == 4 then // Check second halfword // If the breakpoint address and BAS of an Address breakpoint match the address of the // second halfword of an instruction, but not the address of the first halfword, it is // CONSTRAINED UNPREDICTABLE whether or not this breakpoint generates a Breakpoint debug // event. match_i = AArch64.BreakpointValueMatch(n, vaddress + 2, linked_to); if !value_match && match_i then value_match = ConstrainUnpredictableBool(Unpredictable_BPMATCHHALF); if vaddress<1> == '1' && DBGBCR_EL1[n].BAS == '1111' then // The above notwithstanding, if DBGBCR_EL1[n].BAS == '1111', then it is CONSTRAINED // UNPREDICTABLE whether or not a Breakpoint debug event is generated for an instruction // at the address DBGBVR_EL1[n]+2. if value_match then value_match = ConstrainUnpredictableBool(Unpredictable_BPMATCHHALF); match = value_match && state_match && enabled; return match;

Library pseudocode for aarch64/debug/breakpoint/AArch64.BreakpointValueMatch

// AArch64.BreakpointValueMatch() // ============================== boolean AArch64.BreakpointValueMatch(integer n_in, bits(64) vaddress, boolean linked_to) // "n" is the identity of the breakpoint unit to match against. // "vaddress" is the current instruction address, ignored if linked_to is TRUE and for Context // matching breakpoints. // "linked_to" is TRUE if this is a call from StateMatch for linking. integer n = n_in; // If a non-existent breakpoint then it is CONSTRAINED UNPREDICTABLE whether this gives // no match or the breakpoint is mapped to another UNKNOWN implemented breakpoint. if n >= NumBreakpointsImplemented() then Constraint c; (c, n) = ConstrainUnpredictableInteger(0, NumBreakpointsImplemented() - 1, Unpredictable_BPNOTIMPL); assert c IN {Constraint_DISABLED, Constraint_UNKNOWN}; if c == Constraint_DISABLED then return FALSE; // If this breakpoint is not enabled, it cannot generate a match. (This could also happen on a // call from StateMatch for linking). if DBGBCR_EL1[n].E == '0' then return FALSE; context_aware = (n >= (NumBreakpointsImplemented() - NumContextAwareBreakpointsImplemented())); // If BT is set to a reserved type, behaves either as disabled or as a not-reserved type. dbgtype = DBGBCR_EL1[n].BT; Constraint c; if ((dbgtype IN {'011x','11xx'} && !HaveVirtHostExt() && !HaveV82Debug()) || // Context matching dbgtype IN {'010x'} || // Reserved (!(dbgtype IN {'0x0x'}) && !context_aware) || // Context matching (dbgtype IN {'1xxx'} && !HaveEL(EL2))) then // EL2 extension (c, dbgtype) = ConstrainUnpredictableBits(Unpredictable_RESBPTYPE); assert c IN {Constraint_DISABLED, Constraint_UNKNOWN}; if c == Constraint_DISABLED then return FALSE; // Otherwise the value returned by ConstrainUnpredictableBits must be a not-reserved value // Determine what to compare against. match_addr = (dbgtype IN {'0x0x'}); match_vmid = (dbgtype IN {'10xx'}); match_cid = (dbgtype IN {'001x'}); match_cid1 = (dbgtype IN { '101x', 'x11x'}); match_cid2 = (dbgtype IN {'11xx'}); linked = (dbgtype IN {'xxx1'}); // If this is a call from StateMatch, return FALSE if the breakpoint is not programmed for a // VMID and/or context ID match, or if not context-aware. The above assertions mean that the // code can just test for match_addr == TRUE to confirm all these things. if linked_to && (!linked || match_addr) then return FALSE; // If called from BreakpointMatch return FALSE for Linked context ID and/or VMID matches. if !linked_to && linked && !match_addr then return FALSE; // Do the comparison. boolean BVR_match; if match_addr then boolean byte_select_match; byte = UInt(vaddress<1:0>); if HaveAArch32() then // T32 instructions can be executed at EL0 in an AArch64 translation regime. assert byte IN {0,2}; // "vaddress" is halfword aligned byte_select_match = (DBGBCR_EL1[n].BAS<byte> == '1'); else assert byte == 0; // "vaddress" is word aligned byte_select_match = TRUE; // DBGBCR_EL1[n].BAS<byte> is RES1 // If the DBGxVR<n>_EL1.RESS field bits are not a sign extension of the MSB // of DBGBVR<n>_EL1.VA, it is UNPREDICTABLE whether they appear to be // included in the match. // If 'vaddress' is outside of the current virtual address space, then the access // generates a Translation fault.
integer top = AArch64.VAMax(); if !IsOnes(DBGBVR_EL1[n]<63:top>) && !IsZero(DBGBVR_EL1[n]<63:top>) then if ConstrainUnpredictableBool(Unpredictable_DBGxVR_RESS) then top = 63; BVR_match = (vaddress<top:2> == DBGBVR_EL1[n]<top:2>) && byte_select_match; elsif match_cid then if IsInHost() then BVR_match = (CONTEXTIDR_EL2<31:0> == DBGBVR_EL1[n]<31:0>); else BVR_match = (PSTATE.EL IN {EL0, EL1} && CONTEXTIDR_EL1<31:0> == DBGBVR_EL1[n]<31:0>); elsif match_cid1 then BVR_match = (PSTATE.EL IN {EL0, EL1} && !IsInHost() && CONTEXTIDR_EL1<31:0> == DBGBVR_EL1[n]<31:0>); boolean BXVR_match; if match_vmid then bits(16) vmid; bits(16) bvr_vmid; if !Have16bitVMID() || VTCR_EL2.VS == '0' then vmid = ZeroExtend(VTTBR_EL2.VMID<7:0>, 16); bvr_vmid = ZeroExtend(DBGBVR_EL1[n]<39:32>, 16); else vmid = VTTBR_EL2.VMID; bvr_vmid = DBGBVR_EL1[n]<47:32>; BXVR_match = (PSTATE.EL IN {EL0, EL1} && EL2Enabled() && !IsInHost() && vmid == bvr_vmid); elsif match_cid2 then BXVR_match = (PSTATE.EL != EL3 && (HaveVirtHostExt() || HaveV82Debug()) && EL2Enabled() && DBGBVR_EL1[n]<63:32> == CONTEXTIDR_EL2<31:0>); bvr_match_valid = (match_addr || match_cid || match_cid1); bxvr_match_valid = (match_vmid || match_cid2); match = (!bxvr_match_valid || BXVR_match) && (!bvr_match_valid || BVR_match); return match;

Library pseudocode for aarch64/debug/breakpoint/AArch64.StateMatch

// AArch64.StateMatch() // ==================== // Determine whether a breakpoint or watchpoint is enabled in the current mode and state. boolean AArch64.StateMatch(bits(2) SSC_in, bit HMC_in, bits(2) PxC_in, boolean linked_in, bits(4) LBN, boolean isbreakpnt, AccType acctype, boolean ispriv) // "SSC_in","HMC_in","PxC_in" are the control fields from the DBGBCR[n] or DBGWCR[n] register. // "linked_in" is TRUE if this is a linked breakpoint/watchpoint type. // "LBN" is the linked breakpoint number from the DBGBCR[n] or DBGWCR[n] register. // "isbreakpnt" is TRUE for breakpoints, FALSE for watchpoints. // "ispriv" is valid for watchpoints, and selects between privileged and unprivileged accesses. bits(2) SSC = SSC_in; bit HMC = HMC_in; bits(2) PxC = PxC_in; boolean linked = linked_in; // If parameters are set to a reserved type, behaves as either disabled or a defined type Constraint c; (c, SSC, HMC, PxC) = CheckValidStateMatch(SSC, HMC, PxC, isbreakpnt); if c == Constraint_DISABLED then return FALSE; // Otherwise the HMC,SSC,PxC values are either valid or the values returned by // CheckValidStateMatch are valid. EL3_match = HaveEL(EL3) && HMC == '1' && SSC<0> == '0'; EL2_match = HaveEL(EL2) && ((HMC == '1' && (SSC:PxC != '1000')) || SSC == '11'); EL1_match = PxC<0> == '1'; EL0_match = PxC<1> == '1'; boolean priv_match; if HaveNV2Ext() && acctype == AccType_NV2REGISTER && !isbreakpnt then priv_match = EL2_match; elsif !ispriv && !isbreakpnt then priv_match = EL0_match; else case PSTATE.EL of when EL3 priv_match = EL3_match; when EL2 priv_match = EL2_match; when EL1 priv_match = EL1_match; when EL0 priv_match = EL0_match; boolean security_state_match; ss = CurrentSecurityState(); case SSC of when '00' security_state_match = TRUE; // Both when '01' security_state_match = ss == SS_NonSecure; // Non-secure only when '10' security_state_match = ss == SS_Secure; // Secure only when '11' security_state_match = (HMC == '1' || ss == SS_Secure); // HMC=1 -> Both, 0 -> Secure only integer lbn; if linked then // "LBN" must be an enabled context-aware breakpoint unit. If it is not context-aware then // it is CONSTRAINED UNPREDICTABLE whether this gives no match, or LBN is mapped to some // UNKNOWN breakpoint that is context-aware. lbn = UInt(LBN); first_ctx_cmp = NumBreakpointsImplemented() - NumContextAwareBreakpointsImplemented(); last_ctx_cmp = NumBreakpointsImplemented() - 1; if (lbn < first_ctx_cmp || lbn > last_ctx_cmp) then (c, lbn) = ConstrainUnpredictableInteger(first_ctx_cmp, last_ctx_cmp, Unpredictable_BPNOTCTXCMP); assert c IN {Constraint_DISABLED, Constraint_NONE, Constraint_UNKNOWN}; case c of when Constraint_DISABLED return FALSE; // Disabled when Constraint_NONE linked = FALSE; // No linking // Otherwise ConstrainUnpredictableInteger returned a context-aware breakpoint boolean linked_match; if linked then vaddress = bits(64) UNKNOWN; linked_to = TRUE; linked_match = AArch64.BreakpointValueMatch(lbn, vaddress, linked_to); return priv_match && security_state_match && (!linked || linked_match);

Library pseudocode for aarch64/debug/enables/AArch64.GenerateDebugExceptions

// AArch64.GenerateDebugExceptions() // ================================= boolean AArch64.GenerateDebugExceptions() ss = CurrentSecurityState(); return AArch64.GenerateDebugExceptionsFrom(PSTATE.EL, ss, PSTATE.D);

Library pseudocode for aarch64/debug/enables/AArch64.GenerateDebugExceptionsFrom

// AArch64.GenerateDebugExceptionsFrom() // ===================================== boolean AArch64.GenerateDebugExceptionsFrom(bits(2) from_el, SecurityState from_state, bit mask) if OSLSR_EL1.OSLK == '1' || DoubleLockStatus() || Halted() then return FALSE; route_to_el2 = (HaveEL(EL2) && (from_state != SS_Secure || IsSecureEL2Enabled()) && (HCR_EL2.TGE == '1' || MDCR_EL2.TDE == '1')); target = (if route_to_el2 then EL2 else EL1); boolean enabled; if HaveEL(EL3) && from_state == SS_Secure then enabled = MDCR_EL3.SDD == '0'; if from_el == EL0 && ELUsingAArch32(EL1) then enabled = enabled || SDER32_EL3.SUIDEN == '1'; else enabled = TRUE; if from_el == target then enabled = enabled && MDSCR_EL1.KDE == '1' && mask == '0'; else enabled = enabled && UInt(target) > UInt(from_el); return enabled;

Library pseudocode for aarch64/debug/pmu/AArch64.CheckForPMUOverflow

// AArch64.CheckForPMUOverflow() // ============================= // Signal Performance Monitors overflow IRQ and CTI overflow events AArch64.CheckForPMUOverflow() boolean pmuirq; bit E; pmuirq = PMCR_EL0.E == '1' && PMINTENSET_EL1.C == '1' && PMOVSSET_EL0.C == '1'; integer counters = GetNumEventCounters(); if counters != 0 then for idx = 0 to counters - 1 E = if AArch64.PMUCounterIsHyp(idx) then MDCR_EL2.HPME else PMCR_EL0.E; if E == '1' && PMINTENSET_EL1<idx> == '1' && PMOVSSET_EL0<idx> == '1' then pmuirq = TRUE; SetInterruptRequestLevel(InterruptID_PMUIRQ, if pmuirq then HIGH else LOW); CTI_SetEventLevel(CrossTriggerIn_PMUOverflow, if pmuirq then HIGH else LOW); // The request remains set until the condition is cleared. (For example, an interrupt handler // or cross-triggered event handler clears the overflow status flag by writing to PMOVSCLR_EL0.)

Library pseudocode for aarch64/debug/pmu/AArch64.ClearEventCounters

// AArch64.ClearEventCounters() // ============================ // Zero all the event counters. AArch64.ClearEventCounters() integer counters = AArch64.GetNumEventCountersAccessible(); if counters != 0 then for idx = 0 to counters - 1 PMEVCNTR_EL0[idx] = Zeros();

Library pseudocode for aarch64/debug/pmu/AArch64.CountPMUEvents

// AArch64.CountPMUEvents() // ======================== // Return TRUE if counter "idx" should count its event. // For the cycle counter, idx == CYCLE_COUNTER_ID. boolean AArch64.CountPMUEvents(integer idx) assert idx == CYCLE_COUNTER_ID || idx < GetNumEventCounters(); boolean debug; boolean enabled; boolean prohibited; boolean filtered; boolean frozen; boolean resvd_for_el2; bit E; bit spme; bits(32) ovflws; // Event counting is disabled in Debug state debug = Halted(); // Software can reserve some counters for EL2 resvd_for_el2 = AArch64.PMUCounterIsHyp(idx); ss = CurrentSecurityState(); // Main enable controls if idx == CYCLE_COUNTER_ID then enabled = PMCR_EL0.E == '1' && PMCNTENSET_EL0.C == '1'; else E = if resvd_for_el2 then MDCR_EL2.HPME else PMCR_EL0.E; enabled = E == '1' && PMCNTENSET_EL0<idx> == '1'; // Event counting is allowed unless it is prohibited by any rule below prohibited = FALSE; // Event counting in Secure state is prohibited if all of: // * EL3 is implemented // * MDCR_EL3.SPME == 0, and either: // - FEAT_PMUv3p7 is not implemented // - MDCR_EL3.MPMX == 0 if HaveEL(EL3) && ss == SS_Secure then if HavePMUv3p7() then prohibited = MDCR_EL3.<SPME,MPMX> == '00'; else prohibited = MDCR_EL3.SPME == '0'; // Event counting at EL3 is prohibited if all of: // * FEAT_PMUv3p7 is implemented // * One of the following is true: // - MDCR_EL3.SPME == 0 // - PMNx is not reserved for EL2 // * MDCR_EL3.MPMX == 1 if !prohibited && PSTATE.EL == EL3 && HavePMUv3p7() then prohibited = MDCR_EL3.MPMX == '1' && (MDCR_EL3.SPME == '0' || !resvd_for_el2); // Event counting at EL2 is prohibited if all of: // * The HPMD Extension is implemented // * PMNx is not reserved for EL2 // * MDCR_EL2.HPMD == 1 if !prohibited && PSTATE.EL == EL2 && HaveHPMDExt() && !resvd_for_el2 then prohibited = MDCR_EL2.HPMD == '1'; // The IMPLEMENTATION DEFINED authentication interface might override software if prohibited && !HaveNoSecurePMUDisableOverride() then prohibited = !ExternalSecureNoninvasiveDebugEnabled(); // Event counting might be frozen frozen = FALSE; // If FEAT_PMUv3p7 is implemented, event counting can be frozen if HavePMUv3p7() then ovflws = ZeroExtend(PMOVSSET_EL0<GetNumEventCounters()-1:0>); bit FZ; if resvd_for_el2 then FZ = MDCR_EL2.HPMFZO; ovflws<UInt(MDCR_EL2.HPMN)-1:0> = Zeros(); else FZ = PMCR_EL0.FZO; if HaveEL(EL2) && UInt(MDCR_EL2.HPMN) < GetNumEventCounters() then ovflws<GetNumEventCounters()-1:UInt(MDCR_EL2.HPMN)> = Zeros(); frozen = (FZ == '1') && !IsZero(ovflws); // PMCR_EL0.DP disables the cycle counter when event counting is prohibited if (prohibited || frozen) && idx == CYCLE_COUNTER_ID then enabled = enabled && (PMCR_EL0.DP == '0'); // Otherwise whether event counting is prohibited does not affect the cycle counter prohibited = FALSE; frozen = FALSE; // If FEAT_PMUv3p5 is implemented, cycle counting can be prohibited. // This is not overridden by PMCR_EL0.DP. if HavePMUv3p5() && idx == CYCLE_COUNTER_ID then if HaveEL(EL3) && ss == SS_Secure && MDCR_EL3.SCCD == '1' then prohibited = TRUE; if PSTATE.EL == EL2 && MDCR_EL2.HCCD == '1' then prohibited = TRUE; // If FEAT_PMUv3p7 is implemented, cycle counting can be prohibited at EL3. // This is not overridden by PMCR_EL0.DP. if HavePMUv3p7() && idx == CYCLE_COUNTER_ID then if PSTATE.EL == EL3 && MDCR_EL3.MCCD == '1' then prohibited = TRUE; // Event counting can be filtered by the {P, U, NSK, NSU, NSH, M, SH} bits filter = if idx == CYCLE_COUNTER_ID then PMCCFILTR_EL0<31:0> else PMEVTYPER_EL0[idx]<31:0>; P = filter<31>; U = filter<30>; NSK = if HaveEL(EL3) then filter<29> else '0'; NSU = if HaveEL(EL3) then filter<28> else '0'; NSH = if HaveEL(EL2) then filter<27> else '0'; M = if HaveEL(EL3) then filter<26> else '0'; SH = if HaveEL(EL3) && HaveSecureEL2Ext() then filter<24> else '0'; case PSTATE.EL of when EL0 filtered = if ss == SS_Secure then U == '1' else U != NSU; when EL1 filtered = if ss == SS_Secure then P == '1' else P != NSK; when EL2 filtered = if ss == SS_Secure then NSH == SH else NSH == '0'; when EL3 filtered = M != P; return !debug && enabled && !prohibited && !filtered && !frozen;

Library pseudocode for aarch64/debug/pmu/AArch64.GetNumEventCountersAccessible

// AArch64.GetNumEventCountersAccessible() // ======================================= // Return the number of event counters that can be accessed at the current Exception level. integer AArch64.GetNumEventCountersAccessible() integer n; integer total_counters = GetNumEventCounters(); // Software can reserve some counters for EL2 if PSTATE.EL IN {EL1, EL0} && EL2Enabled() then n = UInt(MDCR_EL2.HPMN); if n > total_counters || (!HaveFeatHPMN0() && n == 0) then (-, n) = ConstrainUnpredictableInteger(0, total_counters, Unpredictable_PMUEVENTCOUNTER); else n = total_counters; return n;

Library pseudocode for aarch64/debug/pmu/AArch64.IncrementEventCounter

// AArch64.IncrementEventCounter() // =============================== // Increment the specified event counter by the specified amount. AArch64.IncrementEventCounter(integer idx, integer increment) integer old_value; integer new_value; integer ovflw; bit lp; old_value = UInt(PMEVCNTR_EL0[idx]); new_value = old_value + PMUCountValue(idx, increment); if HavePMUv3p5() then PMEVCNTR_EL0[idx] = new_value<63:0>; lp = if AArch64.PMUCounterIsHyp(idx) then MDCR_EL2.HLP else PMCR_EL0.LP; ovflw = if lp == '1' then 64 else 32; else PMEVCNTR_EL0[idx] = ZeroExtend(new_value<31:0>); ovflw = 32; if old_value<64:ovflw> != new_value<64:ovflw> then PMOVSSET_EL0<idx> = '1'; PMOVSCLR_EL0<idx> = '1'; // Check for the CHAIN event from an even counter if idx<0> == '0' && idx + 1 < GetNumEventCounters() && (!HavePMUv3p5() || lp == '0') then PMUEvent(PMU_EVENT_CHAIN, 1, idx + 1);
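The overflow test above compares the counter bits at and above the overflow boundary before and after the increment, rather than testing a carry flag. A hedged C equivalent (illustrative only; with a 64-bit boundary, the architected comparison of bit 64 reduces to detecting an unsigned wrap):

#include <stdbool.h>
#include <stdint.h>

bool counter_overflowed(uint64_t old_value, uint64_t new_value, int ovflw)
{
    if (ovflw >= 64)                      /* FEAT_PMUv3p5 long counter */
        return new_value < old_value;     /* wrap of the full 64 bits */
    return (old_value >> ovflw) != (new_value >> ovflw);   /* e.g. ovflw == 32 */
}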

Library pseudocode for aarch64/debug/pmu/AArch64.PMUCounterIsHyp

// AArch64.PMUCounterIsHyp() // ========================= // Returns TRUE if a counter is reserved for use by EL2, FALSE otherwise. boolean AArch64.PMUCounterIsHyp(integer n) boolean resvd_for_el2; // Software can reserve some event counters for EL2 if n != CYCLE_COUNTER_ID && HaveEL(EL2) then resvd_for_el2 = n >= UInt(MDCR_EL2.HPMN); if UInt(MDCR_EL2.HPMN) > GetNumEventCounters() || (!HaveFeatHPMN0() && IsZero(MDCR_EL2.HPMN)) then resvd_for_el2 = boolean UNKNOWN; else resvd_for_el2 = FALSE; return resvd_for_el2;

Library pseudocode for aarch64/debug/pmu/AArch64.PMUCycle

// AArch64.PMUCycle() // ================== // Called at the end of each cycle to increment event counters and // check for PMU overflow. In pseudocode, a cycle ends after the // execution of the operational pseudocode. AArch64.PMUCycle() if !HavePMUv3() then return; PMUEvent(PMU_EVENT_CPU_CYCLES); integer counters = GetNumEventCounters(); if counters != 0 then for idx = 0 to counters - 1 if AArch64.CountPMUEvents(idx) then accumulated = PMUEventAccumulator[idx]; AArch64.IncrementEventCounter(idx, accumulated); PMUEventAccumulator[idx] = 0; integer old_value; integer new_value; integer ovflw; if (AArch64.CountPMUEvents(CYCLE_COUNTER_ID) && (!HaveAArch32() || PMCR_EL0.LC == '1' || PMCR_EL0.D == '0' || HasElapsed64Cycles())) then old_value = UInt(PMCCNTR_EL0); new_value = old_value + 1; PMCCNTR_EL0 = new_value<63:0>; if HaveAArch32() then ovflw = if PMCR_EL0.LC == '1' then 64 else 32; else ovflw = 64; if old_value<64:ovflw> != new_value<64:ovflw> then PMOVSSET_EL0.C = '1'; PMOVSCLR_EL0.C = '1'; AArch64.CheckForPMUOverflow();

Library pseudocode for aarch64/debug/pmu/AArch64.PMUSwIncrement

// AArch64.PMUSwIncrement() // ======================== // Generate PMU Events on a write to PMSWINC_EL0. AArch64.PMUSwIncrement(bits(32) sw_incr) integer counters = AArch64.GetNumEventCountersAccessible(); if counters != 0 then for idx = 0 to counters - 1 if sw_incr<idx> == '1' then PMUEvent(PMU_EVENT_SW_INCR, 1, idx);

Library pseudocode for aarch64/debug/statisticalprofiling/CollectContextIDR1

// CollectContextIDR1() // ==================== boolean CollectContextIDR1() if !StatisticalProfilingEnabled() then return FALSE; if PSTATE.EL == EL2 then return FALSE; if EL2Enabled() && HCR_EL2.TGE == '1' then return FALSE; return PMSCR_EL1.CX == '1';

Library pseudocode for aarch64/debug/statisticalprofiling/CollectContextIDR2

// CollectContextIDR2() // ==================== boolean CollectContextIDR2() if !StatisticalProfilingEnabled() then return FALSE; if !EL2Enabled() then return FALSE; return PMSCR_EL2.CX == '1';

Library pseudocode for aarch64/debug/statisticalprofiling/CollectPhysicalAddress

// CollectPhysicalAddress() // ======================== boolean CollectPhysicalAddress() if !StatisticalProfilingEnabled() then return FALSE; (owning_ss, owning_el) = ProfilingBufferOwner(); if HaveEL(EL2) && (owning_ss != SS_Secure || IsSecureEL2Enabled()) then return PMSCR_EL2.PA == '1' && (owning_el == EL2 || PMSCR_EL1.PA == '1'); else return PMSCR_EL1.PA == '1';

Library pseudocode for aarch64/debug/statisticalprofiling/CollectTimeStamp

// CollectTimeStamp() // ================== TimeStamp CollectTimeStamp() if !StatisticalProfilingEnabled() then return TimeStamp_None; (-, owning_el) = ProfilingBufferOwner(); if owning_el == EL2 then if PMSCR_EL2.TS == '0' then return TimeStamp_None; else if PMSCR_EL1.TS == '0' then return TimeStamp_None; bits(2) PCT_el1; if !HaveECVExt() then PCT_el1 = '0':PMSCR_EL1.PCT<0>; // PCT<1> is RES0 else PCT_el1 = PMSCR_EL1.PCT; if PCT_el1 == '10' then // Reserved value (-, PCT_el1) = ConstrainUnpredictableBits(Unpredictable_PMSCR_PCT); if EL2Enabled() then bits(2) PCT_el2; if !HaveECVExt() then PCT_el2 = '0':PMSCR_EL2.PCT<0>; // PCT<1> is RES0 else PCT_el2 = PMSCR_EL2.PCT; if PCT_el2 == '10' then // Reserved value (-, PCT_el2) = ConstrainUnpredictableBits(Unpredictable_PMSCR_PCT); case PCT_el2 of when '00' return if IsInHost() then TimeStamp_Physical else TimeStamp_Virtual; when '01' if owning_el == EL2 then return TimeStamp_Physical; when '11' assert HaveECVExt(); // FEAT_ECV must be implemented if owning_el == EL1 && PCT_el1 == '00' then return if IsInHost() then TimeStamp_Physical else TimeStamp_Virtual; else return TimeStamp_OffsetPhysical; otherwise Unreachable(); case PCT_el1 of when '00' return if IsInHost() then TimeStamp_Physical else TimeStamp_Virtual; when '01' return TimeStamp_Physical; when '11' assert HaveECVExt(); // FEAT_ECV must be implemented return TimeStamp_OffsetPhysical; otherwise Unreachable();

Library pseudocode for aarch64/debug/statisticalprofiling/OpType

enumeration OpType { OpType_Load, // Any memory-read operation other than atomics, compare-and-swap, and swap OpType_Store, // Any memory-write operation, including atomics without return OpType_LoadAtomic, // Atomics with return, compare-and-swap and swap OpType_Branch, // Software write to the PC OpType_Other // Any other class of operation };

Library pseudocode for aarch64/debug/statisticalprofiling/ProfilingBufferEnabled

// ProfilingBufferEnabled() // ======================== boolean ProfilingBufferEnabled() if !HaveStatisticalProfiling() then return FALSE; (owning_ss, owning_el) = ProfilingBufferOwner(); state_match = ((owning_ss == SS_Secure && SCR_EL3.NS == '0') || (owning_ss == SS_NonSecure && SCR_EL3.NS == '1')); return (!ELUsingAArch32(owning_el) && state_match && PMBLIMITR_EL1.E == '1' && PMBSR_EL1.S == '0');

Library pseudocode for aarch64/debug/statisticalprofiling/ProfilingBufferOwner

// ProfilingBufferOwner() // ====================== (SecurityState, bits(2)) ProfilingBufferOwner() SecurityState owning_ss; if HaveEL(EL3) then owning_ss = if MDCR_EL3.NSPB<1> == '0' then SS_Secure else SS_NonSecure; else owning_ss = if SecureOnlyImplementation() then SS_Secure else SS_NonSecure; bits(2) owning_el; if HaveEL(EL2) && (owning_ss != SS_Secure || IsSecureEL2Enabled()) then owning_el = if MDCR_EL2.E2PB == '00' then EL2 else EL1; else owning_el = EL1; return (owning_ss, owning_el);

Library pseudocode for aarch64/debug/statisticalprofiling/ProfilingSynchronizationBarrier

// Barrier to ensure that all existing profiling data has been formatted, and profiling buffer // addresses have been translated such that writes to the profiling buffer have been initiated. // A following DSB completes when writes to the profiling buffer have completed. ProfilingSynchronizationBarrier();

Library pseudocode for aarch64/debug/statisticalprofiling/SPECollectRecord

// SPECollectRecord() // ================== // Returns TRUE if the sampled class of instructions or operations, as // determined by PMSFCR_EL1, are recorded and FALSE otherwise. boolean SPECollectRecord(bits(64) events, integer total_latency, OpType optype) assert StatisticalProfilingEnabled(); bits(64) mask = 0xAA<63:0>; // Bits [7,5,3,1] bits(64) e; bits(64) m; if HaveStatisticalProfilingv1p1() then mask<11> = '1'; // Alignment Flag if HaveStatisticalProfilingv1p2() then mask<6> = '1'; // Not taken flag mask<63:48> = bits(16) IMPLEMENTATION_DEFINED "SPE mask 63:48"; mask<31:24> = bits(8) IMPLEMENTATION_DEFINED "SPE mask 31:24"; mask<15:12> = bits(4) IMPLEMENTATION_DEFINED "SPE mask 15:12"; // Check for UNPREDICTABLE case if (HaveStatisticalProfilingv1p2() && PMSFCR_EL1.<FnE,FE> == '11' && !IsZero(PMSEVFR_EL1 AND PMSNEVFR_EL1 AND mask)) then if ConstrainUnpredictableBool(Unpredictable_BADPMSFCR) then return FALSE; else // Filtering by event if PMSFCR_EL1.FE == '1' && !IsZero(PMSEVFR_EL1) then e = events AND mask; m = PMSEVFR_EL1 AND mask; if !IsZero(NOT(e) AND m) then return FALSE; // Filtering by inverse event if (HaveStatisticalProfilingv1p2() && PMSFCR_EL1.FnE == '1' && !IsZero(PMSNEVFR_EL1)) then e = events AND mask; m = PMSNEVFR_EL1 AND mask; if !IsZero(e AND m) then return FALSE; // Filtering by type if PMSFCR_EL1.FT == '1' && !IsZero(PMSFCR_EL1.<B,LD,ST>) then case optype of when OpType_Branch if PMSFCR_EL1.B == '0' then return FALSE; when OpType_Load if PMSFCR_EL1.LD == '0' then return FALSE; when OpType_Store if PMSFCR_EL1.ST == '0' then return FALSE; when OpType_LoadAtomic if PMSFCR_EL1.<LD,ST> == '00' then return FALSE; otherwise return FALSE; // Filtering by latency if PMSFCR_EL1.FL == '1' && !IsZero(PMSLATFR_EL1.MINLAT) then if total_latency < UInt(PMSLATFR_EL1.MINLAT) then return FALSE; // Check for UNPREDICTABLE cases if ((PMSFCR_EL1.FE == '1' && IsZero(PMSEVFR_EL1 AND mask)) || (PMSFCR_EL1.FT == '1' && IsZero(PMSFCR_EL1.<B,LD,ST>)) || (PMSFCR_EL1.FL == '1' && IsZero(PMSLATFR_EL1.MINLAT))) then return ConstrainUnpredictableBool(Unpredictable_BADPMSFCR); if (HaveStatisticalProfilingv1p2() && ((PMSFCR_EL1.FnE == '1' && IsZero(PMSNEVFR_EL1 AND mask)) || (PMSFCR_EL1.<FnE,FE> == '11' && !IsZero(PMSEVFR_EL1 AND PMSNEVFR_EL1 AND mask)))) then return ConstrainUnpredictableBool(Unpredictable_BADPMSFCR); return TRUE;
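The event filter above is a per-bit implication: every event selected in PMSEVFR_EL1 (within the implemented mask) must be set in the sampled operation's event vector, while the inverse filter requires the selected events to be clear. A hedged C sketch of the non-inverted test (illustrative only; the function name is invented):

#include <stdbool.h>
#include <stdint.h>

bool passes_event_filter(uint64_t events, uint64_t pmsevfr, uint64_t mask)
{
    uint64_t e = events & mask;
    uint64_t m = pmsevfr & mask;
    return (~e & m) == 0;   /* !IsZero(NOT(e) AND m) rejects the record */
}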

Library pseudocode for aarch64/debug/statisticalprofiling/StatisticalProfilingEnabled

// StatisticalProfilingEnabled() // ============================= boolean StatisticalProfilingEnabled() if !HaveStatisticalProfiling() || UsingAArch32() || !ProfilingBufferEnabled() then return FALSE; tge_set = EL2Enabled() && HCR_EL2.TGE == '1'; (owning_ss, owning_el) = ProfilingBufferOwner(); if (UInt(owning_el) < UInt(PSTATE.EL) || (tge_set && owning_el == EL1) || owning_ss != CurrentSecurityState()) then return FALSE; bit spe_bit; case PSTATE.EL of when EL3 Unreachable(); when EL2 spe_bit = PMSCR_EL2.E2SPE; when EL1 spe_bit = PMSCR_EL1.E1SPE; when EL0 spe_bit = (if tge_set then PMSCR_EL2.E0HSPE else PMSCR_EL1.E0SPE); return spe_bit == '1';

Library pseudocode for aarch64/debug/statisticalprofiling/TimeStamp

enumeration TimeStamp { TimeStamp_None, // No timestamp TimeStamp_CoreSight, // CoreSight time (IMPLEMENTATION DEFINED) TimeStamp_Physical, // Physical counter value with no offset TimeStamp_OffsetPhysical, // Physical counter value minus CNTPOFF_EL2 TimeStamp_Virtual }; // Physical counter value minus CNTVOFF_EL2

Library pseudocode for aarch64/debug/takeexceptiondbg/AArch64.TakeExceptionInDebugState

// AArch64.TakeExceptionInDebugState()
// ===================================
// Take an exception in Debug state to an Exception level using AArch64.

AArch64.TakeExceptionInDebugState(bits(2) target_el, ExceptionRecord exception_in)
    assert HaveEL(target_el) && !ELUsingAArch32(target_el) && UInt(target_el) >= UInt(PSTATE.EL);

    ExceptionRecord exception = exception_in;
    boolean sync_errors;
    boolean iesb_req;
    if HaveIESB() then
        sync_errors = SCTLR[target_el].IESB == '1';
        if HaveDoubleFaultExt() then
            sync_errors = sync_errors || (SCR_EL3.<EA,NMEA> == '11' && target_el == EL3);
        // SCTLR[].IESB and/or SCR_EL3.NMEA (if applicable) might be ignored in Debug state.
        if !ConstrainUnpredictableBool(Unpredictable_IESBinDebug) then
            sync_errors = FALSE;
    else
        sync_errors = FALSE;

    SynchronizeContext();

    // If coming from AArch32 state, the top parts of the X[] registers might be set to zero
    from_32 = UsingAArch32();
    if from_32 then AArch64.MaybeZeroRegisterUppers();
    AArch64.ReportException(exception, target_el);

    PSTATE.EL = target_el;
    PSTATE.nRW = '0';
    PSTATE.SP = '1';

    SPSR[] = bits(64) UNKNOWN;
    ELR[] = bits(64) UNKNOWN;

    // PSTATE.{SS,D,A,I,F} are not observable and ignored in Debug state, so behave as if UNKNOWN.
    PSTATE.<SS,D,A,I,F> = bits(5) UNKNOWN;
    PSTATE.IL = '0';
    if from_32 then                             // Coming from AArch32
        PSTATE.IT = '00000000';
        PSTATE.T = '0';                         // PSTATE.J is RES0
    if (HavePANExt() && (PSTATE.EL == EL1 || (PSTATE.EL == EL2 && ELIsInHost(EL0))) &&
            SCTLR[].SPAN == '0') then
        PSTATE.PAN = '1';
    if HaveUAOExt() then PSTATE.UAO = '0';
    if HaveBTIExt() then PSTATE.BTYPE = '00';
    if HaveSSBSExt() then PSTATE.SSBS = bit UNKNOWN;
    if HaveMTEExt() then PSTATE.TCO = '1';

    DLR_EL0 = bits(64) UNKNOWN;
    DSPSR_EL0 = bits(64) UNKNOWN;

    EDSCR.ERR = '1';
    UpdateEDSCRFields();                        // Update EDSCR processor state flags.

    if sync_errors then
        SynchronizeErrors();

    EndOfInstruction();

Library pseudocode for aarch64/debug/watchpoint/AArch64.WatchpointByteMatch

// AArch64.WatchpointByteMatch()
// =============================

boolean AArch64.WatchpointByteMatch(integer n, AccType acctype, bits(64) vaddress)
    integer top = AArch64.VAMax();
    bottom = if DBGWVR_EL1[n]<2> == '1' then 2 else 3;             // Word or doubleword
    byte_select_match = (DBGWCR_EL1[n].BAS<UInt(vaddress<bottom-1:0>)> != '0');
    mask = UInt(DBGWCR_EL1[n].MASK);

    // If DBGWCR_EL1[n].MASK is non-zero value and DBGWCR_EL1[n].BAS is not set to '11111111', or
    // DBGWCR_EL1[n].BAS specifies a non-contiguous set of bytes behavior is CONSTRAINED
    // UNPREDICTABLE.
    if mask > 0 && !IsOnes(DBGWCR_EL1[n].BAS) then
        byte_select_match = ConstrainUnpredictableBool(Unpredictable_WPMASKANDBAS);
    else
        LSB = (DBGWCR_EL1[n].BAS AND NOT(DBGWCR_EL1[n].BAS - 1));  MSB = (DBGWCR_EL1[n].BAS + LSB);
        if !IsZero(MSB AND (MSB - 1)) then                     // Not contiguous
            byte_select_match = ConstrainUnpredictableBool(Unpredictable_WPBASCONTIGUOUS);
            bottom = 3;                                        // For the whole doubleword

    // If the address mask is set to a reserved value, the behavior is CONSTRAINED UNPREDICTABLE.
    if mask > 0 && mask <= 2 then
        Constraint c;
        (c, mask) = ConstrainUnpredictableInteger(3, 31, Unpredictable_RESWPMASK);
        assert c IN {Constraint_DISABLED, Constraint_NONE, Constraint_UNKNOWN};
        case c of
            when Constraint_DISABLED  return FALSE;            // Disabled
            when Constraint_NONE      mask = 0;                // No masking
            // Otherwise the value returned by ConstrainUnpredictableInteger is a not-reserved value

    boolean WVR_match;
    if mask > bottom then
        // If the DBGxVR<n>_EL1.RESS field bits are not a sign extension of the MSB
        // of DBGBVR<n>_EL1.VA, it is UNPREDICTABLE whether they appear to be
        // included in the match.
        if !IsOnes(DBGBVR_EL1[n]<63:top>) && !IsZero(DBGBVR_EL1[n]<63:top>) then
            if ConstrainUnpredictableBool(Unpredictable_DBGxVR_RESS) then
                top = 63;
        WVR_match = (vaddress<top:mask> == DBGWVR_EL1[n]<top:mask>);
        // If masked bits of DBGWVR_EL1[n] are not zero, the behavior is CONSTRAINED UNPREDICTABLE.
        if WVR_match && !IsZero(DBGWVR_EL1[n]<mask-1:bottom>) then
            WVR_match = ConstrainUnpredictableBool(Unpredictable_WPMASKEDBITS);
    else
        WVR_match = vaddress<top:bottom> == DBGWVR_EL1[n]<top:bottom>;

    return WVR_match && byte_select_match;
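The contiguity test on DBGWCR_EL1[n].BAS uses a standard bit trick: adding the lowest set bit of a contiguous run of ones carries out past the top of the run, leaving a value with at most one bit set. A minimal C sketch of just that test (the helper name is illustrative):

#include <stdbool.h>

/* Contiguity test on DBGWCR_EL1[n].BAS from AArch64.WatchpointByteMatch(). */
static bool bas_is_contiguous(unsigned bas)    /* bas holds the 8-bit BAS field */
{
    unsigned lsb = bas & -bas;                 /* lowest set bit */
    unsigned msb = bas + lsb;                  /* carries out of a contiguous run */
    return (msb & (msb - 1)) == 0;             /* power of two (or zero) */
}

For example, 0b00011000 gives lsb = 0b00001000 and msb = 0b00100000, which passes, while 0b00010100 gives msb = 0b00011000, which fails.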

Library pseudocode for aarch64/debug/watchpoint/AArch64.WatchpointMatch

// AArch64.WatchpointMatch()
// =========================
// Watchpoint matching in an AArch64 translation regime.

boolean AArch64.WatchpointMatch(integer n, bits(64) vaddress, integer size, boolean ispriv,
                                AccType acctype, boolean iswrite)
    assert !ELUsingAArch32(S1TranslationRegime());
    assert n < NumWatchpointsImplemented();

    // "ispriv" is:
    //   * FALSE for all loads, stores, and atomic operations executed at EL0.
    //   * FALSE if the access is unprivileged.
    //   * TRUE for all other loads, stores, and atomic operations.
    enabled = DBGWCR_EL1[n].E == '1';
    linked = DBGWCR_EL1[n].WT == '1';
    isbreakpnt = FALSE;

    state_match = AArch64.StateMatch(DBGWCR_EL1[n].SSC, DBGWCR_EL1[n].HMC, DBGWCR_EL1[n].PAC,
                                     linked, DBGWCR_EL1[n].LBN, isbreakpnt, acctype, ispriv);

    ls_match = FALSE;
    if acctype == AccType_ATOMICRW then
        ls_match = (DBGWCR_EL1[n].LSC != '00');
    else
        ls_match = (DBGWCR_EL1[n].LSC<(if iswrite then 1 else 0)> == '1');

    value_match = FALSE;
    for byte = 0 to size - 1
        value_match = value_match || AArch64.WatchpointByteMatch(n, acctype, vaddress + byte);

    return value_match && state_match && ls_match && enabled;

Library pseudocode for aarch64/exceptions/aborts/AArch64.Abort

// AArch64.Abort()
// ===============
// Abort and Debug exception handling in an AArch64 translation regime.

AArch64.Abort(bits(64) vaddress, FaultRecord fault)
    if IsDebugException(fault) then
        if fault.acctype == AccType_IFETCH then
            if UsingAArch32() && fault.debugmoe == DebugException_VectorCatch then
                AArch64.VectorCatchException(fault);
            else
                AArch64.BreakpointException(fault);
        else
            AArch64.WatchpointException(vaddress, fault);
    elsif fault.acctype == AccType_IFETCH then
        AArch64.InstructionAbort(vaddress, fault);
    else
        AArch64.DataAbort(vaddress, fault);

Library pseudocode for aarch64/exceptions/aborts/AArch64.AbortSyndrome

// AArch64.AbortSyndrome()
// =======================
// Creates an exception syndrome record for Abort and Watchpoint exceptions
// from an AArch64 translation regime.

ExceptionRecord AArch64.AbortSyndrome(Exception exceptype, FaultRecord fault, bits(64) vaddress)
    exception = ExceptionSyndrome(exceptype);

    d_side = exceptype IN {Exception_DataAbort, Exception_NV2DataAbort,
                           Exception_Watchpoint, Exception_NV2Watchpoint};

    (exception.syndrome, exception.syndrome2) = AArch64.FaultSyndrome(d_side, fault);
    exception.vaddress = ZeroExtend(vaddress);
    if IPAValid(fault) then
        exception.ipavalid = TRUE;
        exception.NS = if fault.ipaddress.paspace == PAS_NonSecure then '1' else '0';
        exception.ipaddress = fault.ipaddress.address;
    else
        exception.ipavalid = FALSE;

    return exception;

Library pseudocode for aarch64/exceptions/aborts/AArch64.CheckPCAlignment

// AArch64.CheckPCAlignment()
// ==========================

AArch64.CheckPCAlignment()
    bits(64) pc = ThisInstrAddr();
    if pc<1:0> != '00' then
        AArch64.PCAlignmentFault();

Library pseudocode for aarch64/exceptions/aborts/AArch64.DataAbort

// AArch64.DataAbort()
// ===================

AArch64.DataAbort(bits(64) vaddress, FaultRecord fault)
    route_to_el3 = HaveEL(EL3) && SCR_EL3.EA == '1' && IsExternalAbort(fault);
    route_to_el2 = (EL2Enabled() && PSTATE.EL IN {EL0, EL1} &&
                    (HCR_EL2.TGE == '1' ||
                     (HaveRASExt() && HCR_EL2.TEA == '1' && IsExternalAbort(fault)) ||
                     (HaveNV2Ext() && fault.acctype == AccType_NV2REGISTER) ||
                     IsSecondStage(fault)));

    bits(64) preferred_exception_return = ThisInstrAddr();
    integer vect_offset;
    if (HaveDoubleFaultExt() && (PSTATE.EL == EL3 || route_to_el3) &&
            IsExternalAbort(fault) && SCR_EL3.EASE == '1') then
        vect_offset = 0x180;
    else
        vect_offset = 0x0;

    ExceptionRecord exception;
    if HaveNV2Ext() && fault.acctype == AccType_NV2REGISTER then
        exception = AArch64.AbortSyndrome(Exception_NV2DataAbort, fault, vaddress);
    else
        exception = AArch64.AbortSyndrome(Exception_DataAbort, fault, vaddress);

    bits(2) target_el = EL1;
    if PSTATE.EL == EL3 || route_to_el3 then
        target_el = EL3;
    elsif PSTATE.EL == EL2 || route_to_el2 then
        target_el = EL2;

    AArch64.TakeException(target_el, exception, preferred_exception_return, vect_offset);
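The routing priority above is EL3 first (SCR_EL3.EA for External aborts), then EL2 (HCR_EL2.TGE, HCR_EL2.TEA for External aborts, or a stage 2 fault), then EL1. A simplified C sketch of that precedence, assuming the relevant configuration bits are passed in as flags; the HaveEL/EL2Enabled checks and the NV2 register-access case handled by the pseudocode are omitted for brevity:

#include <stdbool.h>

typedef enum { EL0, EL1, EL2, EL3 } el_t;

/* Simplified target-EL precedence from AArch64.DataAbort(). */
static el_t data_abort_target(el_t cur_el, bool external_abort, bool scr_ea,
                              bool hcr_tge, bool hcr_tea, bool second_stage)
{
    bool route_to_el3 = scr_ea && external_abort;
    bool route_to_el2 = (cur_el == EL0 || cur_el == EL1) &&
                        (hcr_tge || (hcr_tea && external_abort) || second_stage);

    if (cur_el == EL3 || route_to_el3) return EL3;
    if (cur_el == EL2 || route_to_el2) return EL2;
    return EL1;
}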

Library pseudocode for aarch64/exceptions/aborts/AArch64.EffectiveTCF

// AArch64.EffectiveTCF()
// ======================
// Returns the TCF field applied to tag check faults in the given Exception level.

bits(2) AArch64.EffectiveTCF(AccType acctype)
    bits(2) tcf, el;
    el = S1TranslationRegime();
    if el == EL3 then
        tcf = SCTLR_EL3.TCF;
    elsif el == EL2 then
        if AArch64.AccessUsesEL(acctype) == EL0 then
            tcf = SCTLR_EL2.TCF0;
        else
            tcf = SCTLR_EL2.TCF;
    elsif el == EL1 then
        if AArch64.AccessUsesEL(acctype) == EL0 then
            tcf = SCTLR_EL1.TCF0;
        else
            tcf = SCTLR_EL1.TCF;

    if tcf == '11' then    // reserved value
        if !HaveMTE3Ext() then
            (-,tcf) = ConstrainUnpredictableBits(Unpredictable_RESTCF);

    return tcf;

Library pseudocode for aarch64/exceptions/aborts/AArch64.InstructionAbort

// AArch64.InstructionAbort()
// ==========================

AArch64.InstructionAbort(bits(64) vaddress, FaultRecord fault)
    // External aborts on instruction fetch must be taken synchronously
    if HaveDoubleFaultExt() then assert fault.statuscode != Fault_AsyncExternal;

    route_to_el3 = HaveEL(EL3) && SCR_EL3.EA == '1' && IsExternalAbort(fault);
    route_to_el2 = (EL2Enabled() && PSTATE.EL IN {EL0, EL1} &&
                    (HCR_EL2.TGE == '1' ||
                     (HaveRASExt() && HCR_EL2.TEA == '1' && IsExternalAbort(fault)) ||
                     IsSecondStage(fault)));

    ExceptionRecord exception;
    bits(64) preferred_exception_return = ThisInstrAddr();
    integer vect_offset;
    if (HaveDoubleFaultExt() && (PSTATE.EL == EL3 || route_to_el3) &&
            IsExternalAbort(fault) && SCR_EL3.EASE == '1') then
        vect_offset = 0x180;
    else
        vect_offset = 0x0;

    exception = AArch64.AbortSyndrome(Exception_InstructionAbort, fault, vaddress);

    bits(2) target_el = EL1;
    if PSTATE.EL == EL3 || route_to_el3 then
        target_el = EL3;
    elsif PSTATE.EL == EL2 || route_to_el2 then
        target_el = EL2;

    AArch64.TakeException(target_el, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/aborts/AArch64.PCAlignmentFault

// AArch64.PCAlignmentFault()
// ==========================
// Called on unaligned program counter in AArch64 state.

AArch64.PCAlignmentFault()
    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x0;

    exception = ExceptionSyndrome(Exception_PCAlignment);
    exception.vaddress = ThisInstrAddr();

    bits(2) target_el = EL1;
    if UInt(PSTATE.EL) > UInt(EL1) then
        target_el = PSTATE.EL;
    elsif PSTATE.EL == EL0 && EL2Enabled() && HCR_EL2.TGE == '1' then
        target_el = EL2;
    AArch64.TakeException(target_el, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/aborts/AArch64.RaiseTagCheckFault

// AArch64.RaiseTagCheckFault()
// ============================
// Raise a tag check fault exception.

AArch64.RaiseTagCheckFault(bits(64) va, boolean write)
    bits(64) preferred_exception_return = ThisInstrAddr();
    integer vect_offset = 0x0;

    exception = ExceptionSyndrome(Exception_DataAbort);
    exception.syndrome<5:0> = '010001';
    if write then
        exception.syndrome<6> = '1';
    exception.vaddress = bits(4) UNKNOWN : va<59:0>;

    bits(2) target_el = EL1;
    if UInt(PSTATE.EL) > UInt(EL1) then
        target_el = PSTATE.EL;
    elsif PSTATE.EL == EL0 && EL2Enabled() && HCR_EL2.TGE == '1' then
        target_el = EL2;
    AArch64.TakeException(target_el, exception, preferred_exception_return, vect_offset);
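For reference, the ISS encoding written here is DFSC = 0b010001 (synchronous Tag Check Fault) in ISS[5:0], with WnR in ISS[6]. A small C sketch of that encoding (helper name illustrative; the other ISS fields set by the pseudocode are left zero):

#include <stdint.h>
#include <stdbool.h>

/* ISS bits assembled by AArch64.RaiseTagCheckFault(). */
static uint32_t tag_check_fault_iss(bool is_write)
{
    uint32_t iss = 0x11;        /* DFSC = 0b010001, synchronous Tag Check Fault */
    if (is_write)
        iss |= 1u << 6;         /* WnR */
    return iss;
}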

Library pseudocode for aarch64/exceptions/aborts/AArch64.ReportTagCheckFault

// AArch64.ReportTagCheckFault()
// =============================
// Records a tag check fault exception into the appropriate TFSR_ELx.

AArch64.ReportTagCheckFault(bits(2) el, bit ttbr)
    if el == EL3 then
        assert ttbr == '0';
        TFSR_EL3.TF0 = '1';
    elsif el == EL2 then
        if ttbr == '0' then TFSR_EL2.TF0 = '1'; else TFSR_EL2.TF1 = '1';
    elsif el == EL1 then
        if ttbr == '0' then TFSR_EL1.TF0 = '1'; else TFSR_EL1.TF1 = '1';
    elsif el == EL0 then
        if ttbr == '0' then TFSRE0_EL1.TF0 = '1'; else TFSRE0_EL1.TF1 = '1';

Library pseudocode for aarch64/exceptions/aborts/AArch64.SPAlignmentFault

// AArch64.SPAlignmentFault()
// ==========================
// Called on an unaligned stack pointer in AArch64 state.

AArch64.SPAlignmentFault()
    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x0;

    exception = ExceptionSyndrome(Exception_SPAlignment);

    bits(2) target_el = EL1;
    if UInt(PSTATE.EL) > UInt(EL1) then
        target_el = PSTATE.EL;
    elsif PSTATE.EL == EL0 && EL2Enabled() && HCR_EL2.TGE == '1' then
        target_el = EL2;
    AArch64.TakeException(target_el, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/aborts/AArch64.TagCheckFault

// AArch64.TagCheckFault()
// =======================
// Handle a tag check fault condition.

AArch64.TagCheckFault(bits(64) vaddress, AccType acctype, boolean iswrite)
    bits(2) tcf, el;
    el = AArch64.AccessUsesEL(acctype);
    tcf = AArch64.EffectiveTCF(acctype);
    case tcf of
        when '00'    // Tag Check Faults have no effect on the PE
            return;
        when '01'    // Tag Check Faults cause a synchronous exception
            AArch64.RaiseTagCheckFault(vaddress, iswrite);
        when '10'    // Tag Check Faults are asynchronously accumulated
            AArch64.ReportTagCheckFault(el, vaddress<55>);
        when '11'    // Tag Check Faults cause a synchronous exception on reads or on
                     // a read-write access, and are asynchronously accumulated on writes
            // Check for access performing both a read and a write.
            readwrite = acctype IN {AccType_ATOMICRW, AccType_ORDEREDATOMICRW, AccType_ORDEREDRW};
            if !iswrite || readwrite then
                AArch64.RaiseTagCheckFault(vaddress, iswrite);
            else
                AArch64.ReportTagCheckFault(PSTATE.EL, vaddress<55>);
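A C sketch of the four SCTLR_ELx.TCF dispositions follows; the raise/report helpers are illustrative stand-ins for AArch64.RaiseTagCheckFault() and AArch64.ReportTagCheckFault(), and the enum names are not architectural:

#include <stdbool.h>

typedef enum { TCF_NONE = 0, TCF_SYNC = 1, TCF_ASYNC = 2, TCF_SYNC_READS = 3 } tcf_t;

/* Illustrative stand-ins for the architectural raise/report behaviors. */
extern void raise_tag_check_fault(bool is_write);
extern void report_tag_check_fault(void);

static void handle_tag_check_fault(tcf_t tcf, bool is_write, bool is_read_write)
{
    switch (tcf) {
    case TCF_NONE:              /* '00': no effect on the PE */
        break;
    case TCF_SYNC:              /* '01': synchronous exception */
        raise_tag_check_fault(is_write);
        break;
    case TCF_ASYNC:             /* '10': accumulate into TFSR_ELx */
        report_tag_check_fault();
        break;
    case TCF_SYNC_READS:        /* '11': sync on reads and read-write, async on writes */
        if (!is_write || is_read_write)
            raise_tag_check_fault(is_write);
        else
            report_tag_check_fault();
        break;
    }
}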

Library pseudocode for aarch64/exceptions/aborts/BranchTargetException

// BranchTargetException()
// =======================
// Raise branch target exception.

AArch64.BranchTargetException(bits(52) vaddress)
    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x0;

    exception = ExceptionSyndrome(Exception_BranchTarget);
    exception.syndrome<1:0>  = PSTATE.BTYPE;
    exception.syndrome<24:2> = Zeros();        // RES0

    bits(2) target_el = EL1;
    if UInt(PSTATE.EL) > UInt(EL1) then
        target_el = PSTATE.EL;
    elsif PSTATE.EL == EL0 && EL2Enabled() && HCR_EL2.TGE == '1' then
        target_el = EL2;
    AArch64.TakeException(target_el, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/async/AArch64.TakePhysicalFIQException

// AArch64.TakePhysicalFIQException()
// ==================================

AArch64.TakePhysicalFIQException()
    route_to_el3 = HaveEL(EL3) && SCR_EL3.FIQ == '1';
    route_to_el2 = (PSTATE.EL IN {EL0, EL1} && EL2Enabled() &&
                    (HCR_EL2.TGE == '1' || HCR_EL2.FMO == '1'));
    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x100;
    exception = ExceptionSyndrome(Exception_FIQ);

    if route_to_el3 then
        AArch64.TakeException(EL3, exception, preferred_exception_return, vect_offset);
    elsif PSTATE.EL == EL2 || route_to_el2 then
        assert PSTATE.EL != EL3;
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);
    else
        assert PSTATE.EL IN {EL0, EL1};
        AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/async/AArch64.TakePhysicalIRQException

// AArch64.TakePhysicalIRQException()
// ==================================
// Take an enabled physical IRQ exception.

AArch64.TakePhysicalIRQException()
    route_to_el3 = HaveEL(EL3) && SCR_EL3.IRQ == '1';
    route_to_el2 = (PSTATE.EL IN {EL0, EL1} && EL2Enabled() &&
                    (HCR_EL2.TGE == '1' || HCR_EL2.IMO == '1'));
    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x80;

    exception = ExceptionSyndrome(Exception_IRQ);

    if route_to_el3 then
        AArch64.TakeException(EL3, exception, preferred_exception_return, vect_offset);
    elsif PSTATE.EL == EL2 || route_to_el2 then
        assert PSTATE.EL != EL3;
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);
    else
        assert PSTATE.EL IN {EL0, EL1};
        AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);
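AArch64.TakePhysicalIRQException() and AArch64.TakePhysicalFIQException() share the same routing shape, differing only in which SCR_EL3/HCR_EL2 bits are consulted (IRQ/IMO versus FIQ/FMO). A condensed C sketch of that shared pattern, with the configuration bits passed in as flags (names illustrative; HaveEL(EL3) and EL2Enabled() checks omitted for brevity):

#include <stdbool.h>

typedef enum { EL0, EL1, EL2, EL3 } el_t;

/* scr_route is SCR_EL3.IRQ or SCR_EL3.FIQ; hcr_xmo is HCR_EL2.IMO or FMO. */
static el_t physical_interrupt_target(el_t cur_el, bool scr_route,
                                      bool hcr_tge, bool hcr_xmo)
{
    if (scr_route)
        return EL3;
    if (cur_el == EL2 ||
        ((cur_el == EL0 || cur_el == EL1) && (hcr_tge || hcr_xmo)))
        return EL2;
    return EL1;
}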

Library pseudocode for aarch64/exceptions/async/AArch64.TakePhysicalSErrorException

// AArch64.TakePhysicalSErrorException()
// =====================================

AArch64.TakePhysicalSErrorException(bits(25) syndrome)
    route_to_el3 = HaveEL(EL3) && SCR_EL3.EA == '1';
    route_to_el2 = (PSTATE.EL IN {EL0, EL1} && EL2Enabled() &&
                    (HCR_EL2.TGE == '1' || (!IsInHost() && HCR_EL2.AMO == '1')));
    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x180;

    bits(2) target_el;
    if PSTATE.EL == EL3 || route_to_el3 then
        target_el = EL3;
    elsif PSTATE.EL == EL2 || route_to_el2 then
        target_el = EL2;
    else
        target_el = EL1;

    if IsSErrorEdgeTriggered(target_el, syndrome) then
        ClearPendingPhysicalSError();

    exception = ExceptionSyndrome(Exception_SError);
    exception.syndrome = syndrome;
    AArch64.TakeException(target_el, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/async/AArch64.TakeVirtualFIQException

// AArch64.TakeVirtualFIQException()
// =================================

AArch64.TakeVirtualFIQException()
    assert PSTATE.EL IN {EL0, EL1} && EL2Enabled();
    assert HCR_EL2.TGE == '0' && HCR_EL2.FMO == '1';   // Virtual FIQ enabled if TGE==0 and FMO==1

    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x100;

    exception = ExceptionSyndrome(Exception_FIQ);

    AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/async/AArch64.TakeVirtualIRQException

// AArch64.TakeVirtualIRQException()
// =================================

AArch64.TakeVirtualIRQException()
    assert PSTATE.EL IN {EL0, EL1} && EL2Enabled();
    assert HCR_EL2.TGE == '0' && HCR_EL2.IMO == '1';   // Virtual IRQ enabled if TGE==0 and IMO==1

    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x80;

    exception = ExceptionSyndrome(Exception_IRQ);

    AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/async/AArch64.TakeVirtualSErrorException

// AArch64.TakeVirtualSErrorException()
// ====================================

AArch64.TakeVirtualSErrorException(bits(25) syndrome)
    assert PSTATE.EL IN {EL0, EL1} && EL2Enabled();
    assert HCR_EL2.TGE == '0' && HCR_EL2.AMO == '1';   // Virtual SError enabled if TGE==0 and AMO==1

    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x180;

    exception = ExceptionSyndrome(Exception_SError);
    if HaveRASExt() then
        exception.syndrome<24>   = VSESR_EL2.IDS;
        exception.syndrome<23:0> = VSESR_EL2.ISS;
    else
        impdef_syndrome = syndrome<24> == '1';
        if impdef_syndrome then exception.syndrome = syndrome;

    ClearPendingVirtualSError();
    AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/debug/AArch64.BreakpointException

// AArch64.BreakpointException()
// =============================

AArch64.BreakpointException(FaultRecord fault)
    assert PSTATE.EL != EL3;

    route_to_el2 = (PSTATE.EL IN {EL0, EL1} && EL2Enabled() &&
                    (HCR_EL2.TGE == '1' || MDCR_EL2.TDE == '1'));

    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x0;
    vaddress = bits(64) UNKNOWN;
    exception = AArch64.AbortSyndrome(Exception_Breakpoint, fault, vaddress);

    if PSTATE.EL == EL2 || route_to_el2 then
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);
    else
        AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/debug/AArch64.SoftwareBreakpoint

// AArch64.SoftwareBreakpoint()
// ============================

AArch64.SoftwareBreakpoint(bits(16) immediate)
    route_to_el2 = (PSTATE.EL IN {EL0, EL1} && EL2Enabled() &&
                    (HCR_EL2.TGE == '1' || MDCR_EL2.TDE == '1'));

    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x0;

    exception = ExceptionSyndrome(Exception_SoftwareBreakpoint);
    exception.syndrome<15:0> = immediate;

    if UInt(PSTATE.EL) > UInt(EL1) then
        AArch64.TakeException(PSTATE.EL, exception, preferred_exception_return, vect_offset);
    elsif route_to_el2 then
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);
    else
        AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/debug/AArch64.SoftwareStepException

// AArch64.SoftwareStepException()
// ===============================

AArch64.SoftwareStepException()
    assert PSTATE.EL != EL3;

    route_to_el2 = (PSTATE.EL IN {EL0, EL1} && EL2Enabled() &&
                    (HCR_EL2.TGE == '1' || MDCR_EL2.TDE == '1'));

    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x0;

    exception = ExceptionSyndrome(Exception_SoftwareStep);
    if SoftwareStep_DidNotStep() then
        exception.syndrome<24> = '0';
    else
        exception.syndrome<24> = '1';
        exception.syndrome<6> = if SoftwareStep_SteppedEX() then '1' else '0';
    exception.syndrome<5:0> = '100010';                // IFSC = Debug Exception

    if PSTATE.EL == EL2 || route_to_el2 then
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);
    else
        AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/debug/AArch64.VectorCatchException

// AArch64.VectorCatchException()
// ==============================
// Vector Catch taken from EL0 or EL1 to EL2. This can only be called when debug exceptions are
// being routed to EL2, as Vector Catch is a legacy debug event.

AArch64.VectorCatchException(FaultRecord fault)
    assert PSTATE.EL != EL2;
    assert EL2Enabled() && (HCR_EL2.TGE == '1' || MDCR_EL2.TDE == '1');

    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x0;

    vaddress = bits(64) UNKNOWN;
    exception = AArch64.AbortSyndrome(Exception_VectorCatch, fault, vaddress);

    AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/debug/AArch64.WatchpointException

// AArch64.WatchpointException()
// =============================

AArch64.WatchpointException(bits(64) vaddress, FaultRecord fault)
    assert PSTATE.EL != EL3;

    route_to_el2 = (PSTATE.EL IN {EL0, EL1} && EL2Enabled() &&
                    (HCR_EL2.TGE == '1' || MDCR_EL2.TDE == '1'));

    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x0;

    ExceptionRecord exception;
    if HaveNV2Ext() && fault.acctype == AccType_NV2REGISTER then
        exception = AArch64.AbortSyndrome(Exception_NV2Watchpoint, fault, vaddress);
    else
        exception = AArch64.AbortSyndrome(Exception_Watchpoint, fault, vaddress);

    if PSTATE.EL == EL2 || route_to_el2 then
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);
    else
        AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/exceptions/AArch64.ExceptionClass

// AArch64.ExceptionClass()
// ========================
// Returns the Exception Class and Instruction Length fields to be reported in ESR

(integer,bit) AArch64.ExceptionClass(Exception exceptype, bits(2) target_el)
    il_is_valid = TRUE;
    from_32 = UsingAArch32();
    integer ec;
    case exceptype of
        when Exception_Uncategorized         ec = 0x00; il_is_valid = FALSE;
        when Exception_WFxTrap               ec = 0x01;
        when Exception_CP15RTTrap            ec = 0x03; assert from_32;
        when Exception_CP15RRTTrap           ec = 0x04; assert from_32;
        when Exception_CP14RTTrap            ec = 0x05; assert from_32;
        when Exception_CP14DTTrap            ec = 0x06; assert from_32;
        when Exception_AdvSIMDFPAccessTrap   ec = 0x07;
        when Exception_FPIDTrap              ec = 0x08;
        when Exception_PACTrap               ec = 0x09;
        when Exception_LDST64BTrap           ec = 0x0A;
        when Exception_CP14RRTTrap           ec = 0x0C; assert from_32;
        when Exception_BranchTarget          ec = 0x0D;
        when Exception_IllegalState          ec = 0x0E; il_is_valid = FALSE;
        when Exception_SupervisorCall        ec = 0x11;
        when Exception_HypervisorCall        ec = 0x12;
        when Exception_MonitorCall           ec = 0x13;
        when Exception_SystemRegisterTrap    ec = 0x18; assert !from_32;
        when Exception_ERetTrap              ec = 0x1A; assert !from_32;
        when Exception_PACFail               ec = 0x1C; assert !from_32;
        when Exception_InstructionAbort      ec = 0x20; il_is_valid = FALSE;
        when Exception_PCAlignment           ec = 0x22; il_is_valid = FALSE;
        when Exception_DataAbort             ec = 0x24;
        when Exception_NV2DataAbort          ec = 0x25;
        when Exception_SPAlignment           ec = 0x26; il_is_valid = FALSE; assert !from_32;
        when Exception_MemCpyMemSet          ec = 0x27;
        when Exception_FPTrappedException    ec = 0x28;
        when Exception_SError                ec = 0x2F; il_is_valid = FALSE;
        when Exception_Breakpoint            ec = 0x30; il_is_valid = FALSE;
        when Exception_SoftwareStep          ec = 0x32; il_is_valid = FALSE;
        when Exception_Watchpoint            ec = 0x34; il_is_valid = FALSE;
        when Exception_NV2Watchpoint         ec = 0x35; il_is_valid = FALSE;
        when Exception_SoftwareBreakpoint    ec = 0x38;
        when Exception_VectorCatch           ec = 0x3A; il_is_valid = FALSE; assert from_32;
        otherwise                            Unreachable();

    if ec IN {0x20,0x24,0x30,0x32,0x34} && target_el == PSTATE.EL then
        ec = ec + 1;

    if ec IN {0x11,0x12,0x13,0x28,0x38} && !from_32 then
        ec = ec + 4;

    bit il;
    if il_is_valid then
        il = if ThisInstrLength() == 32 then '1' else '0';
    else
        il = '1';
    assert from_32 || il == '1';            // AArch64 instructions always 32-bit

    return (ec,il);
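The two adjustments at the end are worth calling out: aborts and debug exceptions taken without a change in Exception level report the odd EC value (for example, 0x20 becomes 0x21 for a same-EL Instruction Abort), and the call/trap classes report the AArch64 variant four higher when the exception was not taken from AArch32 (0x11 becomes 0x15 for SVC). A C sketch of just those adjustments (function name illustrative):

#include <stdint.h>
#include <stdbool.h>

/* Final EC adjustments from AArch64.ExceptionClass(). */
static uint32_t adjust_ec(uint32_t ec, bool same_el, bool from_aarch32)
{
    switch (ec) {
    case 0x20: case 0x24: case 0x30: case 0x32: case 0x34:
        /* Instruction Abort, Data Abort, Breakpoint, Software Step, Watchpoint:
         * the odd EC reports "taken without a change in Exception level". */
        if (same_el) ec += 1;
        break;
    case 0x11: case 0x12: case 0x13: case 0x28: case 0x38:
        /* SVC, HVC, SMC, trapped FP exception, software breakpoint:
         * +4 selects the AArch64 variant of the class. */
        if (!from_aarch32) ec += 4;
        break;
    }
    return ec;
}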

Library pseudocode for aarch64/exceptions/exceptions/AArch64.ReportException

// AArch64.ReportException()
// =========================
// Report syndrome information for exception taken to AArch64 state.

AArch64.ReportException(ExceptionRecord exception, bits(2) target_el)
    Exception exceptype = exception.exceptype;

    (ec,il) = AArch64.ExceptionClass(exceptype, target_el);
    iss  = exception.syndrome;
    iss2 = exception.syndrome2;

    // IL is not valid for Data Abort exceptions without valid instruction syndrome information
    if ec IN {0x24,0x25} && iss<24> == '0' then
        il = '1';

    ESR[target_el] = (Zeros(27) :      // <63:37>
                      iss2 :           // <36:32>
                      ec<5:0> :        // <31:26>
                      il :             // <25>
                      iss);            // <24:0>

    if exceptype IN {Exception_InstructionAbort, Exception_PCAlignment,
                     Exception_DataAbort, Exception_NV2DataAbort,
                     Exception_NV2Watchpoint, Exception_Watchpoint} then
        FAR[target_el] = exception.vaddress;
    else
        FAR[target_el] = bits(64) UNKNOWN;

    if exception.ipavalid then
        HPFAR_EL2<43:4> = exception.ipaddress<51:12>;
        if IsSecureEL2Enabled() && CurrentSecurityState() == SS_Secure then
            HPFAR_EL2.NS = exception.NS;
        else
            HPFAR_EL2.NS = '0';
    elsif target_el == EL2 then
        HPFAR_EL2<43:4> = bits(40) UNKNOWN;

    return;
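The ESR_ELx assembly above places ISS2 at [36:32], EC at [31:26], IL at [25], and ISS at [24:0], with the upper bits zero. A C sketch of the same packing (helper name illustrative):

#include <stdint.h>

/* ESR_ELx layout written by AArch64.ReportException(). */
static uint64_t pack_esr(uint64_t iss2, uint64_t ec, uint64_t il, uint64_t iss)
{
    return ((iss2 & 0x1f) << 32) |      /* ISS2  <36:32> */
           ((ec & 0x3f) << 26) |        /* EC    <31:26> */
           ((il & 0x1) << 25) |         /* IL    <25>    */
           (iss & 0x1ffffff);           /* ISS   <24:0>  */
}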

Library pseudocode for aarch64/exceptions/exceptions/AArch64.ResetControlRegisters

// Resets System registers and memory-mapped control registers that have architecturally-defined
// reset values to those values.
AArch64.ResetControlRegisters(boolean cold_reset);

Library pseudocode for aarch64/exceptions/exceptions/AArch64.TakeReset

// AArch64.TakeReset()
// ===================
// Reset into AArch64 state

AArch64.TakeReset(boolean cold_reset)
    assert HaveAArch64();

    // Enter the highest implemented Exception level in AArch64 state
    PSTATE.nRW = '0';
    if HaveEL(EL3) then
        PSTATE.EL = EL3;
    elsif HaveEL(EL2) then
        PSTATE.EL = EL2;
    else
        PSTATE.EL = EL1;

    // Reset System registers and other system components
    AArch64.ResetControlRegisters(cold_reset);

    // Reset all other PSTATE fields
    PSTATE.SP = '1';              // Select stack pointer
    PSTATE.<D,A,I,F> = '1111';    // All asynchronous exceptions masked
    PSTATE.SS = '0';              // Clear software step bit
    PSTATE.DIT = '0';             // PSTATE.DIT is reset to 0 when resetting into AArch64
    PSTATE.IL = '0';              // Clear Illegal Execution state bit

    // All registers, bits and fields not reset by the above pseudocode or by the BranchTo() call
    // below are UNKNOWN bitstrings after reset. In particular, the return information registers
    // ELR_ELx and SPSR_ELx have UNKNOWN values, so that it
    // is impossible to return from a reset in an architecturally defined way.
    AArch64.ResetGeneralRegisters();
    AArch64.ResetSIMDFPRegisters();
    AArch64.ResetSpecialRegisters();
    ResetExternalDebugRegisters(cold_reset);

    bits(64) rv;                  // IMPLEMENTATION DEFINED reset vector
    if HaveEL(EL3) then
        rv = RVBAR_EL3;
    elsif HaveEL(EL2) then
        rv = RVBAR_EL2;
    else
        rv = RVBAR_EL1;

    // The reset vector must be correctly aligned
    assert IsZero(rv<63:AArch64.PAMax()>) && IsZero(rv<1:0>);

    boolean branch_conditional = FALSE;
    BranchTo(rv, BranchType_RESET, branch_conditional);
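The final assert encodes the two constraints on the reset vector: word alignment, and no bits set at or above the implemented physical address size. A C sketch of the same check (names illustrative; pa_max stands in for AArch64.PAMax(), and the >= 64 guard only avoids an out-of-range shift in C):

#include <stdint.h>
#include <stdbool.h>

/* Reset-vector validity check from AArch64.TakeReset(). */
static bool reset_vector_is_valid(uint64_t rvbar, unsigned pa_max)
{
    bool aligned  = (rvbar & 0x3) == 0;                     /* rv<1:0> == '00'   */
    bool in_range = pa_max >= 64 || (rvbar >> pa_max) == 0; /* rv<63:PAMax> == 0 */
    return aligned && in_range;
}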

Library pseudocode for aarch64/exceptions/ieeefp/AArch64.FPTrappedException

// AArch64.FPTrappedException()
// ============================

AArch64.FPTrappedException(boolean is_ase, bits(8) accumulated_exceptions)
    exception = ExceptionSyndrome(Exception_FPTrappedException);
    if is_ase then
        if boolean IMPLEMENTATION_DEFINED "vector instructions set TFV to 1" then
            exception.syndrome<23> = '1';                          // TFV
        else
            exception.syndrome<23> = '0';                          // TFV
    else
        exception.syndrome<23> = '1';                              // TFV
    exception.syndrome<10:8> = bits(3) UNKNOWN;                    // VECITR
    if exception.syndrome<23> == '1' then
        exception.syndrome<7,4:0> = accumulated_exceptions<7,4:0>; // IDF,IXF,UFF,OFF,DZF,IOF
    else
        exception.syndrome<7,4:0> = bits(6) UNKNOWN;

    route_to_el2 = EL2Enabled() && HCR_EL2.TGE == '1';

    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x0;

    if UInt(PSTATE.EL) > UInt(EL1) then
        AArch64.TakeException(PSTATE.EL, exception, preferred_exception_return, vect_offset);
    elsif route_to_el2 then
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);
    else
        AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/syscalls/AArch64.CallHypervisor

// AArch64.CallHypervisor()
// ========================
// Performs a HVC call

AArch64.CallHypervisor(bits(16) immediate)
    assert HaveEL(EL2);

    if UsingAArch32() then AArch32.ITAdvance();
    SSAdvance();
    bits(64) preferred_exception_return = NextInstrAddr();
    vect_offset = 0x0;

    exception = ExceptionSyndrome(Exception_HypervisorCall);
    exception.syndrome<15:0> = immediate;

    if PSTATE.EL == EL3 then
        AArch64.TakeException(EL3, exception, preferred_exception_return, vect_offset);
    else
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/syscalls/AArch64.CallSecureMonitor

// AArch64.CallSecureMonitor()
// ===========================

AArch64.CallSecureMonitor(bits(16) immediate)
    assert HaveEL(EL3) && !ELUsingAArch32(EL3);

    if UsingAArch32() then AArch32.ITAdvance();
    SSAdvance();
    bits(64) preferred_exception_return = NextInstrAddr();
    vect_offset = 0x0;

    exception = ExceptionSyndrome(Exception_MonitorCall);
    exception.syndrome<15:0> = immediate;

    AArch64.TakeException(EL3, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/syscalls/AArch64.CallSupervisor

// AArch64.CallSupervisor()
// ========================
// Calls the Supervisor

AArch64.CallSupervisor(bits(16) immediate_in)
    bits(16) immediate = immediate_in;
    if UsingAArch32() then AArch32.ITAdvance();
    SSAdvance();
    route_to_el2 = PSTATE.EL == EL0 && EL2Enabled() && HCR_EL2.TGE == '1';

    bits(64) preferred_exception_return = NextInstrAddr();
    vect_offset = 0x0;

    exception = ExceptionSyndrome(Exception_SupervisorCall);
    exception.syndrome<15:0> = immediate;

    if UInt(PSTATE.EL) > UInt(EL1) then
        AArch64.TakeException(PSTATE.EL, exception, preferred_exception_return, vect_offset);
    elsif route_to_el2 then
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);
    else
        AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);
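The EL-routing rule above is easy to misread in its nested form. The following C sketch (illustrative only, not part of the library pseudocode; it assumes ELs are plain integers and that the HCR_EL2.TGE state is passed in as a flag) restates it:

/* Sketch of the SVC routing in AArch64.CallSupervisor(): taken at the
   current EL when executing above EL1, routed to EL2 when HCR_EL2.TGE
   applies at EL0, otherwise taken to EL1. */
unsigned svc_target_el(unsigned current_el, int el2_enabled, int tge)
{
    if (current_el > 1) return current_el;
    if (current_el == 0 && el2_enabled && tge) return 2;
    return 1;
}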

Library pseudocode for aarch64/exceptions/takeexception/AArch64.TakeException

// AArch64.TakeException()
// =======================
// Take an exception to an Exception level using AArch64.

AArch64.TakeException(bits(2) target_el, ExceptionRecord exception_in,
                      bits(64) preferred_exception_return, integer vect_offset_in)
    assert HaveEL(target_el) && !ELUsingAArch32(target_el) && UInt(target_el) >= UInt(PSTATE.EL);
    ExceptionRecord exception = exception_in;
    boolean sync_errors;
    boolean iesb_req;
    if HaveIESB() then
        sync_errors = SCTLR[target_el].IESB == '1';
        if HaveDoubleFaultExt() then
            sync_errors = sync_errors || (SCR_EL3.<EA,NMEA> == '11' && target_el == EL3);
        if sync_errors && InsertIESBBeforeException(target_el) then
            SynchronizeErrors();
            iesb_req = FALSE;
            sync_errors = FALSE;
            TakeUnmaskedPhysicalSErrorInterrupts(iesb_req);
    else
        sync_errors = FALSE;
    SynchronizeContext();

    // If coming from AArch32 state, the top parts of the X[] registers might be set to zero
    from_32 = UsingAArch32();
    if from_32 then AArch64.MaybeZeroRegisterUppers();
    integer vect_offset = vect_offset_in;
    if UInt(target_el) > UInt(PSTATE.EL) then
        boolean lower_32;
        if target_el == EL3 then
            if EL2Enabled() then
                lower_32 = ELUsingAArch32(EL2);
            else
                lower_32 = ELUsingAArch32(EL1);
        elsif IsInHost() && PSTATE.EL == EL0 && target_el == EL2 then
            lower_32 = ELUsingAArch32(EL0);
        else
            lower_32 = ELUsingAArch32(target_el - 1);
        vect_offset = vect_offset + (if lower_32 then 0x600 else 0x400);
    elsif PSTATE.SP == '1' then
        vect_offset = vect_offset + 0x200;

    bits(64) spsr = GetPSRFromPSTATE(AArch64_NonDebugState);

    if PSTATE.EL == EL1 && target_el == EL1 && EL2Enabled() then
        if HaveNV2Ext() && (HCR_EL2.<NV,NV1,NV2> == '100' || HCR_EL2.<NV,NV1,NV2> == '111') then
            spsr<3:2> = '10';
        else
            if HaveNVExt() && HCR_EL2.<NV,NV1> == '10' then
                spsr<3:2> = '10';
    if HaveBTIExt() && !UsingAArch32() then
        boolean zero_btype;
        // SPSR[].BTYPE is only guaranteed valid for these exception types
        if exception.exceptype IN {Exception_SError, Exception_IRQ, Exception_FIQ,
                                   Exception_SoftwareStep, Exception_PCAlignment,
                                   Exception_InstructionAbort, Exception_Breakpoint,
                                   Exception_VectorCatch, Exception_SoftwareBreakpoint,
                                   Exception_IllegalState, Exception_BranchTarget} then
            zero_btype = FALSE;
        else
            zero_btype = ConstrainUnpredictableBool(Unpredictable_ZEROBTYPE);
        if zero_btype then
            spsr<11:10> = '00';

    if HaveNV2Ext() && exception.exceptype == Exception_NV2DataAbort && target_el == EL3 then
        // External aborts are configured to be taken to EL3
        exception.exceptype = Exception_DataAbort;

    if !(exception.exceptype IN {Exception_IRQ, Exception_FIQ}) then
        AArch64.ReportException(exception, target_el);

    PSTATE.EL = target_el;
    PSTATE.nRW = '0';
    PSTATE.SP = '1';

    SPSR[] = spsr;
    ELR[] = preferred_exception_return;

    PSTATE.SS = '0';
    if HaveFeatNMI() && !ELUsingAArch32(target_el) then
        PSTATE.ALLINT = NOT SCTLR[].SPINTMASK;
    PSTATE.<D,A,I,F> = '1111';
    PSTATE.IL = '0';
    if from_32 then                          // Coming from AArch32
        PSTATE.IT = '00000000';
        PSTATE.T = '0';                      // PSTATE.J is RES0
    if (HavePANExt() && (PSTATE.EL == EL1 || (PSTATE.EL == EL2 && ELIsInHost(EL0))) &&
        SCTLR[].SPAN == '0') then
        PSTATE.PAN = '1';
    if HaveUAOExt() then PSTATE.UAO = '0';
    if HaveBTIExt() then PSTATE.BTYPE = '00';
    if HaveSSBSExt() then PSTATE.SSBS = SCTLR[].DSSBS;
    if HaveMTEExt() then PSTATE.TCO = '1';

    boolean branch_conditional = FALSE;
    BranchTo(VBAR[]<63:11>:vect_offset<10:0>, BranchType_EXCEPTION, branch_conditional);

    CheckExceptionCatch(TRUE);               // Check for debug event on exception entry

    if sync_errors then
        SynchronizeErrors();
        iesb_req = TRUE;
        TakeUnmaskedPhysicalSErrorInterrupts(iesb_req);

    EndOfInstruction();

Library pseudocode for aarch64/exceptions/traps/AArch64.AArch32SystemAccessTrap

// AArch64.AArch32SystemAccessTrap()
// =================================
// Trapped AARCH32 system register access.

AArch64.AArch32SystemAccessTrap(bits(2) target_el, integer ec)
    assert HaveEL(target_el) && target_el != EL0 && UInt(target_el) >= UInt(PSTATE.EL);
    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x0;
    exception = AArch64.AArch32SystemAccessTrapSyndrome(ThisInstr(), ec);
    AArch64.TakeException(target_el, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/traps/AArch64.AArch32SystemAccessTrapSyndrome

// AArch64.AArch32SystemAccessTrapSyndrome()
// =========================================
// Returns the syndrome information for traps on AArch32 MCR, MCRR, MRC, MRRC, and VMRS, VMSR instructions,
// other than traps that are due to HCPTR or CPACR.

ExceptionRecord AArch64.AArch32SystemAccessTrapSyndrome(bits(32) instr, integer ec)
    ExceptionRecord exception;

    case ec of
        when 0x0  exception = ExceptionSyndrome(Exception_Uncategorized);
        when 0x3  exception = ExceptionSyndrome(Exception_CP15RTTrap);
        when 0x4  exception = ExceptionSyndrome(Exception_CP15RRTTrap);
        when 0x5  exception = ExceptionSyndrome(Exception_CP14RTTrap);
        when 0x6  exception = ExceptionSyndrome(Exception_CP14DTTrap);
        when 0x7  exception = ExceptionSyndrome(Exception_AdvSIMDFPAccessTrap);
        when 0x8  exception = ExceptionSyndrome(Exception_FPIDTrap);
        when 0xC  exception = ExceptionSyndrome(Exception_CP14RRTTrap);
        otherwise Unreachable();

    bits(20) iss = Zeros();

    if exception.exceptype == Exception_Uncategorized then
        return exception;
    elsif exception.exceptype IN {Exception_FPIDTrap, Exception_CP14RTTrap, Exception_CP15RTTrap} then
        // Trapped MRC/MCR, VMRS on FPSID
        if exception.exceptype != Exception_FPIDTrap then    // When trap is not for VMRS
            iss<19:17> = instr<7:5>;       // opc2
            iss<16:14> = instr<23:21>;     // opc1
            iss<13:10> = instr<19:16>;     // CRn
            iss<4:1>   = instr<3:0>;       // CRm
        else
            iss<19:17> = '000';
            iss<16:14> = '111';
            iss<13:10> = instr<19:16>;     // reg
            iss<4:1>   = '0000';
        if instr<20> == '1' && instr<15:12> == '1111' then       // MRC, Rt==15
            iss<9:5> = '11111';
        elsif instr<20> == '0' && instr<15:12> == '1111' then    // MCR, Rt==15
            iss<9:5> = bits(5) UNKNOWN;
        else
            iss<9:5> = LookUpRIndex(UInt(instr<15:12>), PSTATE.M)<4:0>;
    elsif exception.exceptype IN {Exception_CP14RRTTrap, Exception_AdvSIMDFPAccessTrap,
                                  Exception_CP15RRTTrap} then
        // Trapped MRRC/MCRR, VMRS/VMSR
        iss<19:16> = instr<7:4>;           // opc1
        if instr<19:16> == '1111' then     // Rt2==15
            iss<14:10> = bits(5) UNKNOWN;
        else
            iss<14:10> = LookUpRIndex(UInt(instr<19:16>), PSTATE.M)<4:0>;
        if instr<15:12> == '1111' then     // Rt==15
            iss<9:5> = bits(5) UNKNOWN;
        else
            iss<9:5> = LookUpRIndex(UInt(instr<15:12>), PSTATE.M)<4:0>;
        iss<4:1> = instr<3:0>;             // CRm
    elsif exception.exceptype == Exception_CP14DTTrap then
        // Trapped LDC/STC
        iss<19:12> = instr<7:0>;           // imm8
        iss<4>     = instr<23>;            // U
        iss<2:1>   = instr<24,21>;         // P,W
        if instr<19:16> == '1111' then     // Rn==15, LDC(Literal addressing)/STC
            iss<9:5> = bits(5) UNKNOWN;
            iss<3> = '1';
    iss<0> = instr<20>;                    // Direction

    exception.syndrome<24:20> = ConditionSyndrome();
    exception.syndrome<19:0>  = iss;

    return exception;
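The common MRC/MCR arm of this function is a straight repacking of instruction fields into ISS bits. A simplified C sketch (illustrative, not the library function: it assumes the general-register case and a direct Rt mapping, ignoring the LookUpRIndex banked-register lookup and the Rt==15 special cases):

#include <stdint.h>

/* Pack opc2/opc1/CRn/Rt/CRm/Direction from a trapped MCR/MRC encoding
   into ISS<19:0>, mirroring the field positions above. */
uint32_t cp15_rt_trap_iss(uint32_t instr)
{
    uint32_t iss = 0;
    iss |= ((instr >> 5)  & 0x7u) << 17;   /* opc2      -> ISS<19:17> */
    iss |= ((instr >> 21) & 0x7u) << 14;   /* opc1      -> ISS<16:14> */
    iss |= ((instr >> 16) & 0xFu) << 10;   /* CRn       -> ISS<13:10> */
    iss |= ((instr >> 12) & 0xFu) << 5;    /* Rt        -> ISS<9:5>   */
    iss |= (instr         & 0xFu) << 1;    /* CRm       -> ISS<4:1>   */
    iss |= (instr >> 20)  & 0x1u;          /* Direction -> ISS<0>     */
    return iss;
}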

Library pseudocode for aarch64/exceptions/traps/AArch64.AdvSIMDFPAccessTrap

// AArch64.AdvSIMDFPAccessTrap()
// =============================
// Trapped access to Advanced SIMD or FP registers due to CPACR[].

AArch64.AdvSIMDFPAccessTrap(bits(2) target_el)
    bits(64) preferred_exception_return = ThisInstrAddr();
    ExceptionRecord exception;
    vect_offset = 0x0;

    route_to_el2 = (target_el == EL1 && EL2Enabled() && HCR_EL2.TGE == '1');

    if route_to_el2 then
        exception = ExceptionSyndrome(Exception_Uncategorized);
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);
    else
        exception = ExceptionSyndrome(Exception_AdvSIMDFPAccessTrap);
        exception.syndrome<24:20> = ConditionSyndrome();
        AArch64.TakeException(target_el, exception, preferred_exception_return, vect_offset);

    return;

Library pseudocode for aarch64/exceptions/traps/AArch64.CheckCP15InstrCoarseTraps

// AArch64.CheckCP15InstrCoarseTraps()
// ===================================
// Check for coarse-grained AArch32 traps to System registers in the
// coproc=0b1111 encoding space by HSTR_EL2, HCR_EL2, and SCTLR_ELx.

AArch64.CheckCP15InstrCoarseTraps(integer CRn, integer nreg, integer CRm)
    trapped_encoding = ((CRn == 9  && CRm IN {0,1,2, 5,6,7,8 }) ||
                        (CRn == 10 && CRm IN {0,1, 4, 8 })      ||
                        (CRn == 11 && CRm IN {0,1,2,3,4,5,6,7,8,15}));

    // Check for MRC and MCR disabled by SCTLR_EL1.TIDCP.
    if (HaveFeatTIDCP1() && PSTATE.EL == EL0 && !IsInHost() &&
        !ELUsingAArch32(EL1) && SCTLR_EL1.TIDCP == '1' && trapped_encoding) then
        if EL2Enabled() && HCR_EL2.TGE == '1' then
            AArch64.AArch32SystemAccessTrap(EL2, 0x3);
        else
            AArch64.AArch32SystemAccessTrap(EL1, 0x3);

    // Check for coarse-grained Hyp traps
    if PSTATE.EL IN {EL0, EL1} && EL2Enabled() then
        // Check for MRC and MCR disabled by SCTLR_EL2.TIDCP.
        if (HaveFeatTIDCP1() && PSTATE.EL == EL0 && IsInHost() &&
            SCTLR_EL2.TIDCP == '1' && trapped_encoding) then
            AArch64.AArch32SystemAccessTrap(EL2, 0x3);

        major = if nreg == 1 then CRn else CRm;
        // Check for MCR, MRC, MCRR, and MRRC disabled by HSTR_EL2<CRn/CRm>
        // and MRC and MCR disabled by HCR_EL2.TIDCP.
        if ((!IsInHost() && !(major IN {4,14}) && HSTR_EL2<major> == '1') ||
            (HCR_EL2.TIDCP == '1' && nreg == 1 && trapped_encoding)) then
            if (PSTATE.EL == EL0 &&
                boolean IMPLEMENTATION_DEFINED "UNDEF unallocated CP15 access at EL0") then
                UNDEFINED;
            AArch64.AArch32SystemAccessTrap(EL2, 0x3);
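The trapped_encoding predicate is just set membership over CRn/CRm pairs. A self-contained C restatement (illustrative helper names):

#include <stdbool.h>

static bool in_set(int v, const int *set, int n)
{
    for (int i = 0; i < n; i++)
        if (set[i] == v) return true;
    return false;
}

/* The CRn/CRm encodings that SCTLR_ELx.TIDCP and HCR_EL2.TIDCP make
   trappable, as listed in AArch64.CheckCP15InstrCoarseTraps(). */
bool trapped_encoding(int crn, int crm)
{
    static const int crm9[]  = {0,1,2,5,6,7,8};
    static const int crm10[] = {0,1,4,8};
    static const int crm11[] = {0,1,2,3,4,5,6,7,8,15};
    return (crn == 9  && in_set(crm, crm9,  7)) ||
           (crn == 10 && in_set(crm, crm10, 4)) ||
           (crn == 11 && in_set(crm, crm11, 10));
}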

Library pseudocode for aarch64/exceptions/traps/AArch64.CheckFPAdvSIMDEnabled

// AArch64.CheckFPAdvSIMDEnabled()
// ===============================

AArch64.CheckFPAdvSIMDEnabled()
    AArch64.CheckFPEnabled();

Library pseudocode for aarch64/exceptions/traps/AArch64.CheckFPAdvSIMDTrap

// AArch64.CheckFPAdvSIMDTrap()
// ============================
// Check against CPTR_EL2 and CPTR_EL3.

AArch64.CheckFPAdvSIMDTrap()
    if PSTATE.EL IN {EL0, EL1, EL2} && EL2Enabled() then
        // Check if access disabled in CPTR_EL2
        if HaveVirtHostExt() && HCR_EL2.E2H == '1' then
            boolean disabled;
            case CPTR_EL2.FPEN of
                when 'x0' disabled = TRUE;
                when '01' disabled = PSTATE.EL == EL0 && HCR_EL2.TGE == '1';
                when '11' disabled = FALSE;
            if disabled then AArch64.AdvSIMDFPAccessTrap(EL2);
        else
            if CPTR_EL2.TFP == '1' then AArch64.AdvSIMDFPAccessTrap(EL2);

    if HaveEL(EL3) then
        // Check if access disabled in CPTR_EL3
        if CPTR_EL3.TFP == '1' then
            AArch64.AdvSIMDFPAccessTrap(EL3);
    return;

Library pseudocode for aarch64/exceptions/traps/AArch64.CheckFPEnabled

// AArch64.CheckFPEnabled()
// ========================
// Check against CPACR[]

AArch64.CheckFPEnabled()
    if PSTATE.EL IN {EL0, EL1} && !IsInHost() then
        // Check if access disabled in CPACR_EL1
        boolean disabled;
        case CPACR_EL1.FPEN of
            when 'x0' disabled = TRUE;
            when '01' disabled = PSTATE.EL == EL0;
            when '11' disabled = FALSE;
        if disabled then AArch64.AdvSIMDFPAccessTrap(EL1);

    AArch64.CheckFPAdvSIMDTrap();    // Also check against CPTR_EL2 and CPTR_EL3
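The 2-bit FPEN decode above ('x0' disables everywhere, '01' disables at EL0 only, '11' enables) can be restated as a small C table, assuming FPEN is passed as an integer (a sketch, not the library function):

#include <stdbool.h>

/* Mirror of the CPACR_EL1.FPEN case statement in AArch64.CheckFPEnabled(). */
bool fp_disabled_by_cpacr(unsigned fpen, unsigned el)
{
    switch (fpen & 0x3) {
    case 1:  return el == 0;   /* '01': trapped at EL0 only */
    case 3:  return false;     /* '11': enabled */
    default: return true;      /* 'x0': trapped at EL0 and EL1 */
    }
}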

Library pseudocode for aarch64/exceptions/traps/AArch64.CheckForERetTrap

// AArch64.CheckForERetTrap()
// ==========================
// Check for trap on ERET, ERETAA, ERETAB instruction

AArch64.CheckForERetTrap(boolean eret_with_pac, boolean pac_uses_key_a)
    route_to_el2 = FALSE;
    // Non-secure EL1 execution of ERET, ERETAA, ERETAB when either HCR_EL2.NV or HFGITR_EL2.ERET is set,
    // is trapped to EL2
    route_to_el2 = (PSTATE.EL == EL1 && EL2Enabled() &&
                    ((HaveNVExt() && HCR_EL2.NV == '1') ||
                     (HaveFGTExt() && HCR_EL2.<E2H, TGE> != '11' &&
                      (!HaveEL(EL3) || SCR_EL3.FGTEn == '1') && HFGITR_EL2.ERET == '1')));
    if route_to_el2 then
        ExceptionRecord exception;
        bits(64) preferred_exception_return = ThisInstrAddr();
        vect_offset = 0x0;
        exception = ExceptionSyndrome(Exception_ERetTrap);
        if !eret_with_pac then                  // ERET
            exception.syndrome<1> = '0';
            exception.syndrome<0> = '0';        // RES0
        else
            exception.syndrome<1> = '1';
            if pac_uses_key_a then              // ERETAA
                exception.syndrome<0> = '0';
            else                                // ERETAB
                exception.syndrome<0> = '1';
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/traps/AArch64.CheckForSMCUndefOrTrap

// AArch64.CheckForSMCUndefOrTrap()
// ================================
// Check for UNDEFINED or trap on SMC instruction

AArch64.CheckForSMCUndefOrTrap(bits(16) imm)
    if PSTATE.EL == EL0 then UNDEFINED;
    if (!(PSTATE.EL == EL1 && EL2Enabled() && HCR_EL2.TSC == '1') &&
        HaveEL(EL3) && SCR_EL3.SMD == '1') then
        UNDEFINED;
    route_to_el2 = FALSE;
    if !HaveEL(EL3) then
        if PSTATE.EL == EL1 && EL2Enabled() then
            if HaveNVExt() && HCR_EL2.NV == '1' && HCR_EL2.TSC == '1' then
                route_to_el2 = TRUE;
            else
                UNDEFINED;
        else
            UNDEFINED;
    else
        route_to_el2 = PSTATE.EL == EL1 && EL2Enabled() && HCR_EL2.TSC == '1';
    if route_to_el2 then
        bits(64) preferred_exception_return = ThisInstrAddr();
        vect_offset = 0x0;
        exception = ExceptionSyndrome(Exception_MonitorCall);
        exception.syndrome<15:0> = imm;
        exception.trappedsyscallinst = TRUE;
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/traps/AArch64.CheckForSVCTrap

// AArch64.CheckForSVCTrap()
// =========================
// Check for trap on SVC instruction

AArch64.CheckForSVCTrap(bits(16) immediate)
    if HaveFGTExt() then
        route_to_el2 = FALSE;
        if PSTATE.EL == EL0 then
            route_to_el2 = (!ELUsingAArch32(EL0) && !ELUsingAArch32(EL1) && EL2Enabled() &&
                            HFGITR_EL2.SVC_EL0 == '1' &&
                            (HCR_EL2.<E2H, TGE> != '11' &&
                             (!HaveEL(EL3) || SCR_EL3.FGTEn == '1')));
        elsif PSTATE.EL == EL1 then
            route_to_el2 = (!ELUsingAArch32(EL1) && EL2Enabled() && HFGITR_EL2.SVC_EL1 == '1' &&
                            (HCR_EL2.<E2H, TGE> != '11' &&
                             (!HaveEL(EL3) || SCR_EL3.FGTEn == '1')));

        if route_to_el2 then
            exception = ExceptionSyndrome(Exception_SupervisorCall);
            exception.syndrome<15:0> = immediate;
            exception.trappedsyscallinst = TRUE;
            bits(64) preferred_exception_return = ThisInstrAddr();
            vect_offset = 0x0;
            AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/traps/AArch64.CheckForWFxTrap

// AArch64.CheckForWFxTrap()
// =========================
// Check for trap on WFE or WFI instruction

AArch64.CheckForWFxTrap(bits(2) target_el, WFxType wfxtype)
    assert HaveEL(target_el);

    boolean is_wfe = wfxtype IN {WFxType_WFE, WFxType_WFET};
    boolean trap;
    case target_el of
        when EL1 trap = (if is_wfe then SCTLR[].nTWE else SCTLR[].nTWI) == '0';
        when EL2 trap = (if is_wfe then HCR_EL2.TWE else HCR_EL2.TWI) == '1';
        when EL3 trap = (if is_wfe then SCR_EL3.TWE else SCR_EL3.TWI) == '1';

    if trap then
        AArch64.WFxTrap(wfxtype, target_el);

Library pseudocode for aarch64/exceptions/traps/AArch64.CheckIllegalState

// AArch64.CheckIllegalState()
// ===========================
// Check PSTATE.IL bit and generate Illegal Execution state exception if set.

AArch64.CheckIllegalState()
    if PSTATE.IL == '1' then
        route_to_el2 = PSTATE.EL == EL0 && EL2Enabled() && HCR_EL2.TGE == '1';

        bits(64) preferred_exception_return = ThisInstrAddr();
        vect_offset = 0x0;

        exception = ExceptionSyndrome(Exception_IllegalState);

        if UInt(PSTATE.EL) > UInt(EL1) then
            AArch64.TakeException(PSTATE.EL, exception, preferred_exception_return, vect_offset);
        elsif route_to_el2 then
            AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);
        else
            AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/traps/AArch64.MonitorModeTrap

// AArch64.MonitorModeTrap()
// =========================
// Trapped use of Monitor mode features in a Secure EL1 AArch32 mode

AArch64.MonitorModeTrap()
    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x0;

    exception = ExceptionSyndrome(Exception_Uncategorized);

    if IsSecureEL2Enabled() then
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);
    AArch64.TakeException(EL3, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/traps/AArch64.SystemAccessTrap

// AArch64.SystemAccessTrap()
// ==========================
// Trapped access to AArch64 system register or system instruction.

AArch64.SystemAccessTrap(bits(2) target_el, integer ec)
    assert HaveEL(target_el) && target_el != EL0 && UInt(target_el) >= UInt(PSTATE.EL);
    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x0;
    exception = AArch64.SystemAccessTrapSyndrome(ThisInstr(), ec);
    AArch64.TakeException(target_el, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/traps/AArch64.SystemAccessTrapSyndrome

// AArch64.SystemAccessTrapSyndrome()
// ==================================
// Returns the syndrome information for traps on AArch64 MSR/MRS instructions.

ExceptionRecord AArch64.SystemAccessTrapSyndrome(bits(32) instr_in, integer ec)
    ExceptionRecord exception;
    bits(32) instr = instr_in;
    case ec of
        when 0x0    // Trapped access due to unknown reason.
            exception = ExceptionSyndrome(Exception_Uncategorized);
        when 0x7    // Trapped access to SVE, Advanced SIMD&FP system register.
            exception = ExceptionSyndrome(Exception_AdvSIMDFPAccessTrap);
            exception.syndrome<24:20> = ConditionSyndrome();
        when 0x18   // Trapped access to system register or system instruction.
            exception = ExceptionSyndrome(Exception_SystemRegisterTrap);
            instr = ThisInstr();
            exception.syndrome<21:20> = instr<20:19>;   // Op0
            exception.syndrome<19:17> = instr<7:5>;     // Op2
            exception.syndrome<16:14> = instr<18:16>;   // Op1
            exception.syndrome<13:10> = instr<15:12>;   // CRn
            exception.syndrome<9:5>   = instr<4:0>;     // Rt
            exception.syndrome<4:1>   = instr<11:8>;    // CRm
            exception.syndrome<0>     = instr<21>;      // Direction
        otherwise
            Unreachable();
    return exception;

Library pseudocode for aarch64/exceptions/traps/AArch64.UndefinedFault

// AArch64.UndefinedFault()
// ========================

AArch64.UndefinedFault()
    route_to_el2 = PSTATE.EL == EL0 && EL2Enabled() && HCR_EL2.TGE == '1';
    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x0;
    exception = ExceptionSyndrome(Exception_Uncategorized);

    if UInt(PSTATE.EL) > UInt(EL1) then
        AArch64.TakeException(PSTATE.EL, exception, preferred_exception_return, vect_offset);
    elsif route_to_el2 then
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);
    else
        AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/traps/AArch64.WFxTrap

// AArch64.WFxTrap()
// =================

AArch64.WFxTrap(WFxType wfxtype, bits(2) target_el)
    assert UInt(target_el) > UInt(PSTATE.EL);

    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x0;

    exception = ExceptionSyndrome(Exception_WFxTrap);
    exception.syndrome<24:20> = ConditionSyndrome();
    case wfxtype of
        when WFxType_WFI
            exception.syndrome<1:0> = '00';
        when WFxType_WFE
            exception.syndrome<1:0> = '01';
        when WFxType_WFIT
            exception.syndrome<1:0> = '10';
            if HaveFeatWFxT2() then
                exception.syndrome<2> = '1';                 // Register field is valid
                exception.syndrome<9:5> = ThisInstr()<4:0>;
            else
                exception.syndrome<2> = '0';                 // Register field is invalid
        when WFxType_WFET
            exception.syndrome<1:0> = '11';
            if HaveFeatWFxT2() then
                exception.syndrome<2> = '1';                 // Register field is valid
                exception.syndrome<9:5> = ThisInstr()<4:0>;
            else
                exception.syndrome<2> = '0';                 // Register field is invalid

    if target_el == EL1 && EL2Enabled() && HCR_EL2.TGE == '1' then
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);
    else
        AArch64.TakeException(target_el, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/traps/CheckFPAdvSIMDEnabled64

// CheckFPAdvSIMDEnabled64()
// =========================
// AArch64 instruction wrapper

CheckFPAdvSIMDEnabled64()
    AArch64.CheckFPAdvSIMDEnabled();

Library pseudocode for aarch64/exceptions/traps/CheckFPEnabled64

// CheckFPEnabled64()
// ==================
// AArch64 instruction wrapper

CheckFPEnabled64()
    AArch64.CheckFPEnabled();

Library pseudocode for aarch64/exceptions/traps/CheckLDST64BEnabled

// CheckLDST64BEnabled()
// =====================
// Checks for trap on ST64B and LD64B instructions

CheckLDST64BEnabled()
    boolean trap = FALSE;
    bits(25) iss = ZeroExtend('10');    // 0x2
    bits(2) target_el;
    if PSTATE.EL == EL0 then
        if !IsInHost() then
            trap = SCTLR_EL1.EnALS == '0';
            target_el = if EL2Enabled() && HCR_EL2.TGE == '1' then EL2 else EL1;
        else
            trap = SCTLR_EL2.EnALS == '0';
            target_el = EL2;
    else
        target_el = EL1;
    if (!trap && EL2Enabled() && HaveFeatHCX() &&
        ((PSTATE.EL == EL0 && !IsInHost()) || PSTATE.EL == EL1)) then
        trap = !IsHCRXEL2Enabled() || HCRX_EL2.EnALS == '0';
        target_el = EL2;
    if trap then
        LDST64BTrap(target_el, iss);

Library pseudocode for aarch64/exceptions/traps/CheckST64BV0Enabled

// CheckST64BV0Enabled()
// =====================
// Checks for trap on ST64BV0 instruction

CheckST64BV0Enabled()
    boolean trap = FALSE;
    bits(25) iss = ZeroExtend('1');    // 0x1
    bits(2) target_el;
    if PSTATE.EL == EL0 then
        if !IsInHost() then
            trap = SCTLR_EL1.EnAS0 == '0';
            target_el = if EL2Enabled() && HCR_EL2.TGE == '1' then EL2 else EL1;
        else
            trap = SCTLR_EL2.EnAS0 == '0';
            target_el = EL2;
    if (!trap && EL2Enabled() && HaveFeatHCX() &&
        ((PSTATE.EL == EL0 && !IsInHost()) || PSTATE.EL == EL1)) then
        trap = !IsHCRXEL2Enabled() || HCRX_EL2.EnAS0 == '0';
        target_el = EL2;
    if !trap && PSTATE.EL != EL3 then
        trap = HaveEL(EL3) && SCR_EL3.EnAS0 == '0';
        target_el = EL3;
    if trap then
        LDST64BTrap(target_el, iss);

Library pseudocode for aarch64/exceptions/traps/CheckST64BVEnabled

// CheckST64BVEnabled()
// ====================
// Checks for trap on ST64BV instruction

CheckST64BVEnabled()
    boolean trap = FALSE;
    bits(25) iss = Zeros();
    bits(2) target_el;
    if PSTATE.EL == EL0 then
        if !IsInHost() then
            trap = SCTLR_EL1.EnASR == '0';
            target_el = if EL2Enabled() && HCR_EL2.TGE == '1' then EL2 else EL1;
        else
            trap = SCTLR_EL2.EnASR == '0';
            target_el = EL2;
    if (!trap && EL2Enabled() && HaveFeatHCX() &&
        ((PSTATE.EL == EL0 && !IsInHost()) || PSTATE.EL == EL1)) then
        trap = !IsHCRXEL2Enabled() || HCRX_EL2.EnASR == '0';
        target_el = EL2;
    if trap then
        LDST64BTrap(target_el, iss);

Library pseudocode for aarch64/exceptions/traps/LDST64BTrap

// LDST64BTrap()
// =============
// Trapped access to LD64B, ST64B, ST64BV and ST64BV0 instructions

LDST64BTrap(bits(2) target_el, bits(25) iss)
    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x0;
    exception = ExceptionSyndrome(Exception_LDST64BTrap);
    exception.syndrome = iss;
    AArch64.TakeException(target_el, exception, preferred_exception_return, vect_offset);
    return;

Library pseudocode for aarch64/exceptions/traps/WFETrapDelay

// WFETrapDelay()
// ==============
// Returns TRUE, together with the amount of the delay, when a delay of the
// trap on WFE is enabled; FALSE otherwise.

(boolean, integer) WFETrapDelay(bits(2) target_el)
    boolean delay_enabled;
    integer delay;
    case target_el of
        when EL1
            if !IsInHost() then
                delay_enabled = SCTLR_EL1.TWEDEn == '1';
                delay = 1 << (UInt(SCTLR_EL1.TWEDEL) + 8);
            else
                delay_enabled = SCTLR_EL2.TWEDEn == '1';
                delay = 1 << (UInt(SCTLR_EL2.TWEDEL) + 8);
        when EL2
            assert EL2Enabled();
            delay_enabled = HCR_EL2.TWEDEn == '1';
            delay = 1 << (UInt(HCR_EL2.TWEDEL) + 8);
        when EL3
            delay_enabled = SCR_EL3.TWEDEn == '1';
            delay = 1 << (UInt(SCR_EL3.TWEDEL) + 8);

    return (delay_enabled, delay);
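The delay is always a power of two: TWEDEL is a 4-bit field, so 1 << (TWEDEL + 8) ranges from 2^8 to 2^23. A one-line C restatement (illustrative):

/* TWED delay computation from WFETrapDelay(): 2^(TWEDEL + 8). */
unsigned long twed_delay(unsigned twedel)
{
    return 1ul << ((twedel & 0xF) + 8);
}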

Library pseudocode for aarch64/exceptions/traps/WaitForEventUntilDelay

// Returns TRUE if WaitForEvent() returns before WFE trap delay expires,
// FALSE otherwise.
boolean WaitForEventUntilDelay(boolean delay_enabled, integer delay);

Library pseudocode for aarch64/functions/aborts/AArch64.FaultSyndrome

// AArch64.FaultSyndrome()
// =======================
// Creates an exception syndrome value for Abort and Watchpoint exceptions taken to
// an Exception level using AArch64.

(bits(25), bits(5)) AArch64.FaultSyndrome(boolean d_side, FaultRecord fault)
    assert fault.statuscode != Fault_None;

    bits(25) iss = Zeros();
    bits(5) iss2 = Zeros();
    if !HaveFeatLS64() && HaveRASExt() && IsAsyncAbort(fault) then
        iss<12:11> = fault.errortype;    // SET
    if d_side then
        if HaveFeatLS64() && fault.acctype == AccType_ATOMICLS64 then
            if (fault.statuscode IN {Fault_AccessFlag, Fault_Translation, Fault_Permission}) then
                (iss2, iss<24:14>, iss<12:11>) = LS64InstructionSyndrome();
        else
            if (IsSecondStage(fault) && !fault.s2fs1walk &&
                (!IsExternalSyncAbort(fault) ||
                 (!HaveRASExt() && fault.acctype == AccType_TTW &&
                  boolean IMPLEMENTATION_DEFINED "ISV on second stage translation table walk"))) then
                iss<24:14> = LSInstructionSyndrome();
        if HaveNV2Ext() && fault.acctype == AccType_NV2REGISTER then
            iss<13> = '1';    // Fault is generated by use of VNCR_EL2
        if fault.acctype IN {AccType_DC, AccType_IC, AccType_AT, AccType_ATPAN} then
            iss<8> = '1';
            iss<6> = '1';
        else
            iss<6> = if fault.write then '1' else '0';
    if IsExternalAbort(fault) then iss<9> = fault.extflag;
    iss<7> = if fault.s2fs1walk then '1' else '0';
    iss<5:0> = EncodeLDFSC(fault.statuscode, fault.level);

    return (iss, iss2);

Library pseudocode for aarch64/functions/aborts/LS64InstructionSyndrome

// Returns the syndrome information and LST for a Data Abort by a
// ST64B, ST64BV, ST64BV0, or LD64B instruction. The syndrome information
// includes the ISS2, extended syndrome field, and LST.
(bits(5), bits(11), bits(2)) LS64InstructionSyndrome();

Library pseudocode for aarch64/functions/cache/AArch64.DataMemZero

// AArch64.DataMemZero()
// =====================
// Write Zero to data memory

AArch64.DataMemZero(bits(64) regval, bits(64) vaddress, AddressDescriptor memaddrdesc_in, integer size)
    iswrite = TRUE;
    AddressDescriptor memaddrdesc = memaddrdesc_in;
    for i = 0 to size-1
        accdesc = CreateAccessDescriptor(AccType_DCZVA);
        if HaveMTEExt() then
            if AArch64.AccessIsTagChecked(vaddress, AccType_DCZVA) then
                bits(4) ptag = AArch64.PhysicalTag(vaddress);
                if !AArch64.CheckTag(memaddrdesc, accdesc, ptag, iswrite) then
                    if boolean IMPLEMENTATION_DEFINED "DC_ZVA tag fault reported with lowest faulting address" then
                        AArch64.TagCheckFault(vaddress, AccType_DCZVA, iswrite);
                    else
                        AArch64.TagCheckFault(regval, AccType_DCZVA, iswrite);
        memstatus = PhysMemWrite(memaddrdesc, 1, accdesc, Zeros());
        if IsFault(memstatus) then
            HandleExternalWriteAbort(memstatus, memaddrdesc, 1, accdesc);
        memaddrdesc.paddress.address = memaddrdesc.paddress.address + 1;
    return;

Library pseudocode for aarch64/functions/cache/AArch64.TagMemZero

// AArch64.TagMemZero()
// ====================
// Write Zero to tag memory

AArch64.TagMemZero(bits(64) vaddress_in, integer size)
    bits(64) vaddress = vaddress_in;
    integer count = size >> LOG2_TAG_GRANULE;
    bits(4) tag = AArch64.AllocationTagFromAddress(vaddress);
    for i = 0 to count-1
        AArch64.MemTag[vaddress, AccType_NORMAL] = tag;
        vaddress = vaddress + TAG_GRANULE;
    return;

Library pseudocode for aarch64/functions/exclusive/AArch64.ExclusiveMonitorsPass

// AArch64.ExclusiveMonitorsPass()
// ===============================
// Return TRUE if the Exclusives monitors for the current PE include all of the addresses
// associated with the virtual address region of size bytes starting at address.
// The immediately following memory write must be to the same addresses.

boolean AArch64.ExclusiveMonitorsPass(bits(64) address, integer size)
    // It is IMPLEMENTATION DEFINED whether the detection of memory aborts happens
    // before or after the check on the local Exclusives monitor. As a result a failure
    // of the local monitor can occur on some implementations even if the memory
    // access would give a memory abort.

    acctype = AccType_ATOMIC;
    iswrite = TRUE;
    aligned = AArch64.CheckAlignment(address, size, acctype, iswrite);

    passed = AArch64.IsExclusiveVA(address, ProcessorID(), size);
    if !passed then
        return FALSE;

    memaddrdesc = AArch64.TranslateAddress(address, acctype, iswrite, aligned, size);

    // Check for aborts or debug exceptions
    if IsFault(memaddrdesc) then
        AArch64.Abort(address, memaddrdesc.fault);

    passed = IsExclusiveLocal(memaddrdesc.paddress, ProcessorID(), size);
    ClearExclusiveLocal(ProcessorID());

    if passed then
        if memaddrdesc.memattrs.shareability != Shareability_NSH then
            passed = IsExclusiveGlobal(memaddrdesc.paddress, ProcessorID(), size);

    return passed;

Library pseudocode for aarch64/functions/exclusive/AArch64.IsExclusiveVA

// An optional IMPLEMENTATION DEFINED test for an exclusive access to a virtual
// address region of size bytes starting at address.
//
// It is permitted (but not required) for this function to return FALSE and
// cause a store exclusive to fail if the virtual address region is not
// totally included within the region recorded by MarkExclusiveVA().
//
// It is always safe to return TRUE which will check the physical address only.
boolean AArch64.IsExclusiveVA(bits(64) address, integer processorid, integer size);

Library pseudocode for aarch64/functions/exclusive/AArch64.MarkExclusiveVA

// Optionally record an exclusive access to the virtual address region of size bytes
// starting at address for processorid.
AArch64.MarkExclusiveVA(bits(64) address, integer processorid, integer size);

Library pseudocode for aarch64/functions/exclusive/AArch64.SetExclusiveMonitors

// AArch64.SetExclusiveMonitors()
// ==============================
// Sets the Exclusives monitors for the current PE to record the addresses associated
// with the virtual address region of size bytes starting at address.

AArch64.SetExclusiveMonitors(bits(64) address, integer size)
    acctype = AccType_ATOMIC;
    iswrite = FALSE;
    aligned = AArch64.CheckAlignment(address, size, acctype, iswrite);
    memaddrdesc = AArch64.TranslateAddress(address, acctype, iswrite, aligned, size);

    // Check for aborts or debug exceptions
    if IsFault(memaddrdesc) then
        return;

    if memaddrdesc.memattrs.shareability != Shareability_NSH then
        MarkExclusiveGlobal(memaddrdesc.paddress, ProcessorID(), size);

    MarkExclusiveLocal(memaddrdesc.paddress, ProcessorID(), size);

    AArch64.MarkExclusiveVA(address, ProcessorID(), size);

Library pseudocode for aarch64/functions/fusedrstep/FPRSqrtStepFused

// FPRSqrtStepFused()
// ==================

bits(N) FPRSqrtStepFused(bits(N) op1_in, bits(N) op2)
    assert N IN {16, 32, 64};
    bits(N) op1 = op1_in;
    bits(N) result;
    boolean done;
    FPCRType fpcr = FPCR[];
    op1 = FPNeg(op1);
    boolean altfp = HaveAltFP() && fpcr.AH == '1';
    boolean fpexc = !altfp;                  // Generate no floating-point exceptions
    if altfp then fpcr.<FIZ,FZ> = '11';      // Flush denormal input and output to zero
    if altfp then fpcr.RMode = '00';         // Use RNE rounding mode
    (type1,sign1,value1) = FPUnpack(op1, fpcr, fpexc);
    (type2,sign2,value2) = FPUnpack(op2, fpcr, fpexc);
    (done,result) = FPProcessNaNs(type1, type2, op1, op2, fpcr, FALSE, fpexc);
    FPRounding rounding = FPRoundingMode(fpcr);
    if !done then
        inf1 = (type1 == FPType_Infinity);
        inf2 = (type2 == FPType_Infinity);
        zero1 = (type1 == FPType_Zero);
        zero2 = (type2 == FPType_Zero);
        if (inf1 && zero2) || (zero1 && inf2) then
            result = FPOnePointFive('0');
        elsif inf1 || inf2 then
            result = FPInfinity(sign1 EOR sign2);
        else
            // Fully fused multiply-add and halve
            result_value = (3.0 + (value1 * value2)) / 2.0;
            if result_value == 0.0 then
                // Sign of exact zero result depends on rounding mode
                sign = if rounding == FPRounding_NEGINF then '1' else '0';
                result = FPZero(sign);
            else
                result = FPRound(result_value, fpcr, rounding, fpexc);
    return result;

Library pseudocode for aarch64/functions/fusedrstep/FPRecipStepFused

// FPRecipStepFused()
// ==================

bits(N) FPRecipStepFused(bits(N) op1_in, bits(N) op2)
    assert N IN {16, 32, 64};
    bits(N) op1 = op1_in;
    bits(N) result;
    boolean done;
    FPCRType fpcr = FPCR[];
    op1 = FPNeg(op1);
    boolean altfp = HaveAltFP() && fpcr.AH == '1';
    boolean fpexc = !altfp;                  // Generate no floating-point exceptions
    if altfp then fpcr.<FIZ,FZ> = '11';      // Flush denormal input and output to zero
    if altfp then fpcr.RMode = '00';         // Use RNE rounding mode
    (type1,sign1,value1) = FPUnpack(op1, fpcr, fpexc);
    (type2,sign2,value2) = FPUnpack(op2, fpcr, fpexc);
    (done,result) = FPProcessNaNs(type1, type2, op1, op2, fpcr, FALSE, fpexc);
    FPRounding rounding = FPRoundingMode(fpcr);
    if !done then
        inf1 = (type1 == FPType_Infinity);
        inf2 = (type2 == FPType_Infinity);
        zero1 = (type1 == FPType_Zero);
        zero2 = (type2 == FPType_Zero);
        if (inf1 && zero2) || (zero1 && inf2) then
            result = FPTwo('0');
        elsif inf1 || inf2 then
            result = FPInfinity(sign1 EOR sign2);
        else
            // Fully fused multiply-add
            result_value = 2.0 + (value1 * value2);
            if result_value == 0.0 then
                // Sign of exact zero result depends on rounding mode
                sign = if rounding == FPRounding_NEGINF then '1' else '0';
                result = FPZero(sign);
            else
                result = FPRound(result_value, fpcr, rounding, fpexc);
    return result;
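Because op1 is negated before the fused multiply-add, the numeric cores of the two step functions above are 2 - a*b (FRECPS) and (3 - a*b)/2 (FRSQRTS), the Newton-Raphson refinement steps for 1/x and 1/sqrt(x). A C sketch of just that arithmetic, ignoring the NaN, infinity, zero, and flush-to-zero handling (illustrative, requires linking with -lm):

#include <math.h>

/* Newton-Raphson step cores, using fma() for the single rounding that
   "fully fused multiply-add" implies. */
double frecps_core(double a, double b)  { return fma(-a, b, 2.0); }
double frsqrts_core(double a, double b) { return fma(-a, b, 3.0) / 2.0; }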

Library pseudocode for aarch64/functions/memory/AArch64.AccessIsTagChecked

// AArch64.AccessIsTagChecked()
// ============================
// TRUE if a given access is tag-checked, FALSE otherwise.

boolean AArch64.AccessIsTagChecked(bits(64) vaddr, AccType acctype)
    if PSTATE.M<4> == '1' then return FALSE;

    if EffectiveTBI(vaddr, FALSE, PSTATE.EL) == '0' then return FALSE;

    if EffectiveTCMA(vaddr, PSTATE.EL) == '1' && (vaddr<59:55> == '00000' || vaddr<59:55> == '11111') then
        return FALSE;

    if !AArch64.AllocationTagAccessIsEnabled(acctype) then return FALSE;

    if acctype IN {AccType_IFETCH, AccType_TTW, AccType_DC, AccType_IC} then return FALSE;

    if acctype == AccType_NV2REGISTER then return FALSE;

    if PSTATE.TCO=='1' then return FALSE;

    if !IsTagCheckedInstruction() then return FALSE;

    return TRUE;
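
The TCMA short-circuit above keys off bits [59:55] of the virtual address. As an illustration only (a C sketch, not part of the architecture's pseudocode), the excluded-address test is:

    #include <stdbool.h>
    #include <stdint.h>

    /* Sketch of the TCMA exclusion in AArch64.AccessIsTagChecked: when
     * tag-check-match-all applies, addresses whose bits [59:55] are all
     * zeros or all ones are never tag-checked. */
    static bool tcma_excluded(uint64_t vaddr)
    {
        unsigned bits_59_55 = (unsigned)((vaddr >> 55) & 0x1F);
        return bits_59_55 == 0x00 || bits_59_55 == 0x1F;
    }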

Library pseudocode for aarch64/functions/memory/AArch64.AddressWithAllocationTag

// AArch64.AddressWithAllocationTag()
// ==================================
// Generate a 64-bit value containing a Logical Address Tag from a 64-bit
// virtual address and an Allocation Tag.
// If the extension is disabled, treats the Allocation Tag as '0000'.

bits(64) AArch64.AddressWithAllocationTag(bits(64) address, AccType acctype, bits(4) allocation_tag)
    bits(64) result = address;
    bits(4) tag;
    if AArch64.AllocationTagAccessIsEnabled(acctype) then
        tag = allocation_tag;
    else
        tag = '0000';
    result<59:56> = tag;
    return result;

Library pseudocode for aarch64/functions/memory/AArch64.AllocationTagFromAddress

// AArch64.AllocationTagFromAddress()
// ==================================
// Generate an Allocation Tag from a 64-bit value containing a Logical Address Tag.

bits(4) AArch64.AllocationTagFromAddress(bits(64) tagged_address)
    return tagged_address<59:56>;
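
The two tag helpers above are plain bit-field operations on bits [59:56] of the address. A minimal C sketch of the same manipulation (illustrative only; the enable check is reduced to a boolean parameter standing in for AArch64.AllocationTagAccessIsEnabled()):

    #include <stdbool.h>
    #include <stdint.h>

    /* Place a 4-bit Allocation Tag into bits [59:56] of a 64-bit address,
     * as AArch64.AddressWithAllocationTag does. */
    static uint64_t address_with_tag(uint64_t address, uint8_t tag, bool enabled)
    {
        uint64_t t = enabled ? (uint64_t)(tag & 0xF) : 0;
        return (address & ~(0xFULL << 56)) | (t << 56);
    }

    /* Read the tag back, as AArch64.AllocationTagFromAddress does. */
    static uint8_t tag_from_address(uint64_t tagged_address)
    {
        return (uint8_t)((tagged_address >> 56) & 0xF);
    }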

Library pseudocode for aarch64/functions/memory/AArch64.CheckAlignment

// AArch64.CheckAlignment()
// ========================

boolean AArch64.CheckAlignment(bits(64) address, integer alignment, AccType acctype, boolean iswrite)
    aligned = (address == Align(address, alignment));
    atomic  = acctype IN { AccType_ATOMIC, AccType_ATOMICRW, AccType_ORDEREDATOMIC, AccType_ORDEREDATOMICRW, AccType_ATOMICLS64, AccType_A32LSMD};
    ordered = acctype IN { AccType_ORDERED, AccType_ORDEREDRW, AccType_LIMITEDORDERED, AccType_ORDEREDATOMIC, AccType_ORDEREDATOMICRW };
    vector  = acctype == AccType_VEC;
    boolean check;
    if SCTLR[].A == '1' then
        check = TRUE;
    elsif HaveLSE2Ext() then
        check = (UInt(address<3:0>) + alignment > 16) && ((ordered && SCTLR[].nAA == '0') || atomic);
    else
        check = atomic || ordered;

    if check && !aligned then
        secondstage = FALSE;
        AArch64.Abort(address, AlignmentFault(acctype, iswrite, secondstage));

    return aligned;
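
When FEAT_LSE2 is implemented, the check above only arms when the access crosses a 16-byte boundary. A C sketch of that decision, with the control bits reduced to booleans (illustrative only, not the architecture's code):

    #include <stdbool.h>
    #include <stdint.h>

    /* Sketch of the check predicate in AArch64.CheckAlignment.
     * sctlr_a, have_lse2 and naa model SCTLR_ELx.A, FEAT_LSE2 presence
     * and SCTLR_ELx.nAA respectively. */
    static bool alignment_check_required(uint64_t address, unsigned alignment,
                                         bool sctlr_a, bool have_lse2,
                                         bool atomic, bool ordered, bool naa)
    {
        if (sctlr_a)
            return true;
        if (have_lse2)
            return ((address & 0xF) + alignment > 16) && ((ordered && !naa) || atomic);
        return atomic || ordered;
    }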

Library pseudocode for aarch64/functions/memory/AArch64.CheckTag

// AArch64.CheckTag()
// ==================
// Performs a Tag Check operation for a memory access and returns
// whether the check passed

boolean AArch64.CheckTag(AddressDescriptor memaddrdesc, AccessDescriptor accdesc, bits(4) ptag, boolean write)
    if memaddrdesc.memattrs.tagged then
        (memstatus, readtag) = PhysMemTagRead(memaddrdesc, accdesc);
        if IsFault(memstatus) then
            HandleExternalReadAbort(memstatus, memaddrdesc, 1, accdesc);
        return ptag == readtag;
    else
        return TRUE;
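
The check itself is a 4-bit comparison between the pointer's logical tag and the stored allocation tag. A C sketch under the same structure (read_allocation_tag() is a hypothetical stand-in for PhysMemTagRead):

    #include <stdbool.h>
    #include <stdint.h>

    extern uint8_t read_allocation_tag(uint64_t paddr);  /* stand-in for PhysMemTagRead */

    /* Sketch of AArch64.CheckTag: untagged memory always passes; tagged
     * memory passes only if the granule's allocation tag matches ptag. */
    static bool check_tag(uint64_t paddr, uint8_t ptag, bool tagged_memory)
    {
        if (!tagged_memory)
            return true;
        return read_allocation_tag(paddr) == (ptag & 0xF);
    }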

Library pseudocode for aarch64/functions/memory/AArch64.MemSingle

// AArch64.MemSingle[] - non-assignment (read) form
// ================================================
// Perform an atomic, little-endian read of 'size' bytes.

bits(size*8) AArch64.MemSingle[bits(64) address, integer size, AccType acctype, boolean aligned]
    boolean ispair = FALSE;
    return AArch64.MemSingle[address, size, acctype, aligned, ispair];

// AArch64.MemSingle[] - non-assignment (read) form
// ================================================
// Perform an atomic, little-endian read of 'size' bytes.

bits(size*8) AArch64.MemSingle[bits(64) address, integer size, AccType acctype, boolean aligned, boolean ispair]
    assert size IN {1, 2, 4, 8, 16};
    constant halfsize = size DIV 2;
    if HaveLSE2Ext() then
        assert CheckAllInAlignedQuantity(address, size, 16);
    else
        assert address == Align(address, size);
    AddressDescriptor memaddrdesc;
    bits(size*8) value;
    iswrite = FALSE;

    memaddrdesc = AArch64.TranslateAddress(address, acctype, iswrite, aligned, size);
    // Check for aborts or debug exceptions
    if IsFault(memaddrdesc) then
        AArch64.Abort(address, memaddrdesc.fault);

    // Memory array access
    accdesc = CreateAccessDescriptor(acctype);
    if HaveMTE2Ext() then
        if AArch64.AccessIsTagChecked(ZeroExtend(address, 64), acctype) then
            bits(4) ptag = AArch64.PhysicalTag(ZeroExtend(address, 64));
            if !AArch64.CheckTag(memaddrdesc, accdesc, ptag, iswrite) then
                AArch64.TagCheckFault(ZeroExtend(address, 64), acctype, iswrite);
    (atomic, splitpair) = CheckSingleAccessAttributes(address, memaddrdesc.memattrs, size, acctype, iswrite, aligned, ispair);
    PhysMemRetStatus memstatus;
    if atomic then
        (memstatus, value) = PhysMemRead(memaddrdesc, size, accdesc);
        if IsFault(memstatus) then
            HandleExternalReadAbort(memstatus, memaddrdesc, size, accdesc);
    elsif splitpair then
        assert ispair;
        bits(halfsize * 8) lowhalf, highhalf;
        (memstatus, lowhalf) = PhysMemRead(memaddrdesc, halfsize, accdesc);
        if IsFault(memstatus) then
            HandleExternalReadAbort(memstatus, memaddrdesc, halfsize, accdesc);
        memaddrdesc.paddress.address = memaddrdesc.paddress.address + halfsize;
        (memstatus, highhalf) = PhysMemRead(memaddrdesc, halfsize, accdesc);
        if IsFault(memstatus) then
            HandleExternalReadAbort(memstatus, memaddrdesc, halfsize, accdesc);
        value = highhalf:lowhalf;
    else
        for i = 0 to size-1
            (memstatus, value<8*i+7:8*i>) = PhysMemRead(memaddrdesc, 1, accdesc);
            if IsFault(memstatus) then
                HandleExternalReadAbort(memstatus, memaddrdesc, 1, accdesc);
            memaddrdesc.paddress.address = memaddrdesc.paddress.address + 1;
    return value;

// AArch64.MemSingle[] - assignment (write) form
// =============================================

AArch64.MemSingle[bits(64) address, integer size, AccType acctype, boolean aligned] = bits(size*8) value
    boolean ispair = FALSE;
    AArch64.MemSingle[address, size, acctype, aligned, ispair] = value;
    return;

// AArch64.MemSingle[] - assignment (write) form
// =============================================
// Perform an atomic, little-endian write of 'size' bytes.

AArch64.MemSingle[bits(64) address, integer size, AccType acctype, boolean aligned, boolean ispair] = bits(size*8) value
    assert size IN {1, 2, 4, 8, 16};
    constant halfsize = size DIV 2;
    if HaveLSE2Ext() then
        assert CheckAllInAlignedQuantity(address, size, 16);
    else
        assert address == Align(address, size);
    AddressDescriptor memaddrdesc;
    iswrite = TRUE;

    memaddrdesc = AArch64.TranslateAddress(address, acctype, iswrite, aligned, size);
    // Check for aborts or debug exceptions
    if IsFault(memaddrdesc) then
        AArch64.Abort(address, memaddrdesc.fault);

    // Effect on exclusives
    if memaddrdesc.memattrs.shareability != Shareability_NSH then
        ClearExclusiveByAddress(memaddrdesc.paddress, ProcessorID(), size);

    // Memory array access
    accdesc = CreateAccessDescriptor(acctype);
    if HaveMTE2Ext() then
        if AArch64.AccessIsTagChecked(ZeroExtend(address, 64), acctype) then
            bits(4) ptag = AArch64.PhysicalTag(ZeroExtend(address, 64));
            if !AArch64.CheckTag(memaddrdesc, accdesc, ptag, iswrite) then
                AArch64.TagCheckFault(ZeroExtend(address, 64), acctype, iswrite);
    PhysMemRetStatus memstatus;
    (atomic, splitpair) = CheckSingleAccessAttributes(address, memaddrdesc.memattrs, size, acctype, iswrite, aligned, ispair);
    if atomic then
        memstatus = PhysMemWrite(memaddrdesc, size, accdesc, value);
        if IsFault(memstatus) then
            HandleExternalWriteAbort(memstatus, memaddrdesc, size, accdesc);
    elsif splitpair then
        assert ispair;
        bits(halfsize*8) lowhalf, highhalf;
        <highhalf, lowhalf> = value;
        memstatus = PhysMemWrite(memaddrdesc, halfsize, accdesc, lowhalf);
        if IsFault(memstatus) then
            HandleExternalWriteAbort(memstatus, memaddrdesc, halfsize, accdesc);
        memaddrdesc.paddress.address = memaddrdesc.paddress.address + halfsize;
        memstatus = PhysMemWrite(memaddrdesc, halfsize, accdesc, highhalf);
        if IsFault(memstatus) then
            HandleExternalWriteAbort(memstatus, memaddrdesc, halfsize, accdesc);
    else
        for i = 0 to size-1
            memstatus = PhysMemWrite(memaddrdesc, 1, accdesc, value<8*i+7:8*i>);
            if IsFault(memstatus) then
                HandleExternalWriteAbort(memstatus, memaddrdesc, 1, accdesc);
            memaddrdesc.paddress.address = memaddrdesc.paddress.address + 1;
    return;
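
The fallback path above assembles the value byte by byte in little-endian order, exactly as the value<8*i+7:8*i> loop does. A C sketch of that assembly, restricted to 8 bytes for simplicity (read_phys_byte() is a hypothetical stand-in for a one-byte PhysMemRead):

    #include <stddef.h>
    #include <stdint.h>

    extern uint8_t read_phys_byte(uint64_t paddr);  /* stand-in for PhysMemRead */

    /* Sketch of the non-atomic byte loop in AArch64.MemSingle[]: each byte i
     * lands in bits [8*i+7 : 8*i] of the result (little-endian). */
    static uint64_t mem_single_bytewise(uint64_t paddr, size_t size /* <= 8 */)
    {
        uint64_t value = 0;
        for (size_t i = 0; i < size; i++)
            value |= (uint64_t)read_phys_byte(paddr + i) << (8 * i);
        return value;
    }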

Library pseudocode for aarch64/functions/memory/AArch64.MemTag

// AArch64.MemTag[] - non-assignment (read) form
// =============================================
// Load an Allocation Tag from memory.

bits(4) AArch64.MemTag[bits(64) address, AccType acctype]
    AddressDescriptor memaddrdesc;
    bits(4) value;
    iswrite = FALSE;
    aligned = TRUE;

    memaddrdesc = AArch64.TranslateAddress(address, acctype, iswrite, aligned, TAG_GRANULE);
    accdesc = CreateAccessDescriptor(acctype);

    // Check for aborts or debug exceptions
    if IsFault(memaddrdesc) then
        AArch64.Abort(address, memaddrdesc.fault);

    // Return the granule tag if tagging is enabled...
    if AArch64.AllocationTagAccessIsEnabled(acctype) && memaddrdesc.memattrs.tagged then
        (memstatus, tag) = PhysMemTagRead(memaddrdesc, accdesc);
        if IsFault(memstatus) then
            HandleExternalReadAbort(memstatus, memaddrdesc, 1, accdesc);
        return tag;
    else
        // ...otherwise read tag as zero.
        return '0000';

// AArch64.MemTag[] - assignment (write) form
// ==========================================
// Store an Allocation Tag to memory.

AArch64.MemTag[bits(64) address, AccType acctype] = bits(4) value
    AddressDescriptor memaddrdesc;
    iswrite = TRUE;

    // Stores of allocation tags must be aligned
    if address != Align(address, TAG_GRANULE) then
        boolean secondstage = FALSE;
        AArch64.Abort(address, AlignmentFault(acctype, iswrite, secondstage));

    aligned = TRUE;
    memaddrdesc = AArch64.TranslateAddress(address, acctype, iswrite, aligned, TAG_GRANULE);

    // It is CONSTRAINED UNPREDICTABLE whether tags stored to memory locations marked as Device
    // generate an Alignment Fault or store the data to those locations.
    if memaddrdesc.memattrs.memtype == MemType_Device then
        c = ConstrainUnpredictable(Unpredictable_DEVICETAGSTORE);
        assert c IN {Constraint_NONE, Constraint_FAULT};
        if c == Constraint_FAULT then
            boolean secondstage = FALSE;
            AArch64.Abort(address, AlignmentFault(acctype, iswrite, secondstage));

    // Check for aborts or debug exceptions
    if IsFault(memaddrdesc) then
        AArch64.Abort(address, memaddrdesc.fault);

    accdesc = CreateAccessDescriptor(acctype);

    // Memory array access
    if AArch64.AllocationTagAccessIsEnabled(acctype) && memaddrdesc.memattrs.tagged then
        memstatus = PhysMemTagWrite(memaddrdesc, accdesc, value);
        if IsFault(memstatus) then
            HandleExternalWriteAbort(memstatus, memaddrdesc, 1, accdesc);

Library pseudocode for aarch64/functions/memory/AArch64.PhysicalTag

// AArch64.PhysicalTag()
// =====================
// Generate a Physical Tag from a Logical Tag in an address

bits(4) AArch64.PhysicalTag(bits(64) vaddr)
    return vaddr<59:56>;

Library pseudocode for aarch64/functions/memory/AArch64.TranslateAddressForAtomicAccess

// AArch64.TranslateAddressForAtomicAccess()
// =========================================
// Performs an alignment check for atomic memory operations.
// Also translates 64-bit Virtual Address into Physical Address.

AddressDescriptor AArch64.TranslateAddressForAtomicAccess(bits(64) address, integer sizeinbits)
    boolean iswrite = FALSE;
    size = sizeinbits DIV 8;
    assert size IN {1, 2, 4, 8, 16};
    aligned = AArch64.CheckAlignment(address, size, AccType_ATOMICRW, iswrite);

    // MMU or MPU lookup
    memaddrdesc = AArch64.TranslateAddress(address, AccType_ATOMICRW, iswrite, aligned, size);

    // Check for aborts or debug exceptions
    if IsFault(memaddrdesc) then
        AArch64.Abort(address, memaddrdesc.fault);

    // Effect on exclusives
    if memaddrdesc.memattrs.shareability != Shareability_NSH then
        ClearExclusiveByAddress(memaddrdesc.paddress, ProcessorID(), size);

    if HaveMTE2Ext() && AArch64.AccessIsTagChecked(address, AccType_ATOMICRW) then
        bits(4) ptag = AArch64.PhysicalTag(address);
        accdesc = CreateAccessDescriptor(AccType_ATOMICRW);
        if !AArch64.CheckTag(memaddrdesc, accdesc, ptag, iswrite) then
            AArch64.TagCheckFault(address, AccType_ATOMICRW, iswrite);

    return memaddrdesc;

Library pseudocode for aarch64/functions/memory/AddressSupportsLS64

// Returns TRUE if the 64-byte block following the given address supports the
// LD64B and ST64B instructions, and FALSE otherwise.
boolean AddressSupportsLS64(bits(64) address);

Library pseudocode for aarch64/functions/memory/CheckAllInAlignedQuantity

// CheckAllInAlignedQuantity()
// ===========================
// Returns TRUE if all accessed bytes are within one aligned quantity, FALSE otherwise.

boolean CheckAllInAlignedQuantity(bits(64) address, integer size, integer alignment)
    assert(size <= alignment);
    return Align(address+size-1, alignment) == Align(address, alignment);
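
The access stays within one aligned quantity exactly when its first and last byte round down to the same aligned base. For example, with a 16-byte quantity, a 4-byte access at 0x1003C passes (both 0x1003C and 0x1003F round down to 0x10030), while one at 0x1003E fails (0x10041 rounds down to 0x10040). A C sketch, assuming alignment is a power of two:

    #include <stdbool.h>
    #include <stdint.h>

    /* Sketch of CheckAllInAlignedQuantity. */
    static bool all_in_aligned_quantity(uint64_t address, uint64_t size, uint64_t alignment)
    {
        uint64_t mask = ~(alignment - 1);   /* alignment must be a power of two */
        return ((address + size - 1) & mask) == (address & mask);
    }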

Library pseudocode for aarch64/functions/memory/CheckSPAlignment

// CheckSPAlignment()
// ==================
// Check correct stack pointer alignment for AArch64 state.

CheckSPAlignment()
    bits(64) sp = SP[];
    boolean stack_align_check;
    if PSTATE.EL == EL0 then
        stack_align_check = (SCTLR[].SA0 != '0');
    else
        stack_align_check = (SCTLR[].SA != '0');

    if stack_align_check && sp != Align(sp, 16) then
        AArch64.SPAlignmentFault();

    return;
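
The alignment test itself reduces to the low four bits of the stack pointer. A one-line C sketch (illustrative only; 'check_enabled' stands in for the SCTLR_ELx.SA/SA0 selection above):

    #include <stdbool.h>
    #include <stdint.h>

    /* Sketch of the CheckSPAlignment test: SP must be 16-byte aligned
     * whenever checking is enabled. */
    static bool sp_alignment_fault(uint64_t sp, bool check_enabled)
    {
        return check_enabled && (sp & 0xF) != 0;
    }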

Library pseudocode for aarch64/functions/memory/CheckSingleAccessAttributes

// CheckSingleAccessAttributes()
// =============================
//
// When FEAT_LSE2 is implemented, a MemSingle[] access needs to be further assessed once the memory
// attributes are determined.
// If it was aligned to the access size or targets Normal Inner Write-Back, Outer Write-Back Cacheable
// memory, then it is single-copy atomic and there is no alignment fault.
// If not, for exclusives, atomics and non-atomic acquire/release instructions, it is CONSTRAINED
// UNPREDICTABLE whether they generate an alignment fault. If they do not generate an alignment
// fault, they are single-copy atomic.
// Otherwise, it is IMPLEMENTATION DEFINED whether they are single-copy atomic.
//
// The function returns (atomic, splitpair), where
// atomic indicates whether the access is single-copy atomic.
// splitpair indicates that a load/store pair is split into 2 single-copy atomic accesses.
// When atomic and splitpair are both FALSE, the access is not single-copy atomic and may be treated
// as byte accesses.

(boolean, boolean) CheckSingleAccessAttributes(bits(64) address, MemoryAttributes memattrs, integer size, AccType acctype, boolean iswrite, boolean aligned, boolean ispair)
    isnormalwb = (memattrs.memtype == MemType_Normal && memattrs.inner.attrs == MemAttr_WB && memattrs.outer.attrs == MemAttr_WB);
    atomic = TRUE;
    splitpair = FALSE;
    if isnormalwb then
        return (atomic, splitpair);

    accatomic = acctype IN { AccType_ATOMIC, AccType_ATOMICRW, AccType_ORDEREDATOMIC, AccType_ORDEREDATOMICRW, AccType_ATOMICLS64, AccType_A32LSMD};
    ordered   = acctype IN { AccType_ORDERED, AccType_ORDEREDRW, AccType_LIMITEDORDERED, AccType_ORDEREDATOMIC, AccType_ORDEREDATOMICRW };
    if !aligned && (accatomic || ordered) then
        atomic = ConstrainUnpredictableBool(Unpredictable_MISALIGNEDATOMIC);
        if !atomic then
            secondstage = FALSE;
            AArch64.Abort(address, AlignmentFault(acctype, iswrite, secondstage));
        else
            return (atomic, splitpair);

    if ispair && aligned then
        // load/store pair requests that are aligned to each register access are split into 2 single-copy atomic accesses
        atomic = FALSE;
        splitpair = TRUE;
        return (atomic, splitpair);

    if aligned then
        return (atomic, splitpair);

    atomic = boolean IMPLEMENTATION_DEFINED "Misaligned accesses within 16 byte aligned memory but not Normal Cacheable Writeback are Atomic";
    return (atomic, splitpair);

Library pseudocode for aarch64/functions/memory/IsTagCheckedInstruction

// Returns TRUE if the current instruction uses tag-checked memory access,
// FALSE otherwise.
boolean IsTagCheckedInstruction();

Library pseudocode for aarch64/functions/memory/Mem

// Mem[] - non-assignment (read) form
// ==================================
// Perform a read of 'size' bytes. The access byte order is reversed for a big-endian access.
// Instruction fetches would call AArch64.MemSingle directly.

bits(size*8) Mem[bits(64) address, integer size, AccType acctype]
    boolean ispair = FALSE;
    return Mem[address, size, acctype, ispair];

bits(size*8) Mem[bits(64) address, integer size, AccType acctype, boolean ispair]
    assert size IN {1, 2, 4, 8, 16};
    constant halfsize = size DIV 2;
    bits(size * 8) value;
    bits(halfsize * 8) lowhalf, highhalf;
    boolean iswrite = FALSE;
    boolean aligned;

    if ispair then
        // check alignment on size of element accessed, not overall access size
        aligned = AArch64.CheckAlignment(address, halfsize, acctype, iswrite);
    else
        aligned = AArch64.CheckAlignment(address, size, acctype, iswrite);
    boolean atomic;
    if size != 16 || !(acctype IN {AccType_VEC, AccType_VECSTREAM}) then
        if !HaveLSE2Ext() then
            atomic = aligned;
        else
            atomic = CheckAllInAlignedQuantity(address, size, 16);
    elsif acctype IN {AccType_VEC, AccType_VECSTREAM} then
        // 128-bit SIMD&FP loads are treated as a pair of 64-bit single-copy atomic accesses
        // 64-bit aligned.
        atomic = address == Align(address, 8);
    else
        // 16-byte integer access
        atomic = address == Align(address, 16);

    if !atomic && ispair && address == Align(address, halfsize) then
        single_is_pair = FALSE;
        single_is_aligned = TRUE;
        lowhalf  = AArch64.MemSingle[address, halfsize, acctype, single_is_aligned, single_is_pair];
        highhalf = AArch64.MemSingle[address + halfsize, halfsize, acctype, single_is_aligned, single_is_pair];
        value = highhalf:lowhalf;
    elsif atomic && ispair then
        value = AArch64.MemSingle[address, size, acctype, aligned, ispair];
    elsif !atomic then
        assert size > 1;
        value<7:0> = AArch64.MemSingle[address, 1, acctype, aligned];

        // For subsequent bytes it is CONSTRAINED UNPREDICTABLE whether an unaligned Device memory
        // access will generate an Alignment Fault, as to get this far means the first byte did
        // not, so we must be changing to a new translation page.
        if !aligned then
            c = ConstrainUnpredictable(Unpredictable_DEVPAGE2);
            assert c IN {Constraint_FAULT, Constraint_NONE};
            if c == Constraint_NONE then aligned = TRUE;

        for i = 1 to size-1
            value<8*i+7:8*i> = AArch64.MemSingle[address+i, 1, acctype, aligned];
    elsif size == 16 && acctype IN {AccType_VEC, AccType_VECSTREAM} then
        lowhalf  = AArch64.MemSingle[address, halfsize, acctype, aligned, ispair];
        highhalf = AArch64.MemSingle[address + halfsize, halfsize, acctype, aligned, ispair];
        value = highhalf:lowhalf;
    else
        value = AArch64.MemSingle[address, size, acctype, aligned, ispair];

    if BigEndian(acctype) then
        value = BigEndianReverse(value);
    return value;

// Mem[] - assignment (write) form
// ===============================
// Perform a write of 'size' bytes. The byte order is reversed for a big-endian access.

Mem[bits(64) address, integer size, AccType acctype] = bits(size*8) value_in
    boolean ispair = FALSE;
    Mem[address, size, acctype, ispair] = value_in;

Mem[bits(64) address, integer size, AccType acctype, boolean ispair] = bits(size*8) value_in
    boolean iswrite = TRUE;
    constant halfsize = size DIV 2;
    bits(size*8) value = value_in;
    bits(halfsize*8) lowhalf, highhalf;
    boolean atomic;
    boolean aligned;

    if BigEndian(acctype) then
        value = BigEndianReverse(value);
    if ispair then
        // check alignment on size of element accessed, not overall access size
        aligned = AArch64.CheckAlignment(address, halfsize, acctype, iswrite);
    else
        aligned = AArch64.CheckAlignment(address, size, acctype, iswrite);
    if ispair then
        atomic = CheckAllInAlignedQuantity(address, size, 16);
    elsif size != 16 || !(acctype IN {AccType_VEC, AccType_VECSTREAM}) then
        if !HaveLSE2Ext() then
            atomic = aligned;
        else
            atomic = CheckAllInAlignedQuantity(address, size, 16);
    elsif (acctype IN {AccType_VEC, AccType_VECSTREAM}) then
        // 128-bit SIMD&FP stores are treated as a pair of 64-bit single-copy atomic accesses
        // 64-bit aligned.
        atomic = address == Align(address, 8);
    else
        // 16-byte integer access
        atomic = address == Align(address, 16);

    if !atomic && ispair && address == Align(address, halfsize) then
        single_is_aligned = TRUE;
        <highhalf, lowhalf> = value;
        AArch64.MemSingle[address, halfsize, acctype, single_is_aligned, ispair] = lowhalf;
        AArch64.MemSingle[address + halfsize, halfsize, acctype, single_is_aligned, ispair] = highhalf;
    elsif atomic && ispair then
        AArch64.MemSingle[address, size, acctype, aligned, ispair] = value;
    elsif !atomic then
        assert size > 1;
        AArch64.MemSingle[address, 1, acctype, aligned] = value<7:0>;

        // For subsequent bytes it is CONSTRAINED UNPREDICTABLE whether an unaligned Device memory
        // access will generate an Alignment Fault, as to get this far means the first byte did
        // not, so we must be changing to a new translation page.
        if !aligned then
            c = ConstrainUnpredictable(Unpredictable_DEVPAGE2);
            assert c IN {Constraint_FAULT, Constraint_NONE};
            if c == Constraint_NONE then aligned = TRUE;

        for i = 1 to size-1
            AArch64.MemSingle[address+i, 1, acctype, aligned] = value<8*i+7:8*i>;
    elsif size == 16 && acctype IN {AccType_VEC, AccType_VECSTREAM} then
        <highhalf, lowhalf> = value;
        AArch64.MemSingle[address, halfsize, acctype, aligned, ispair] = lowhalf;
        AArch64.MemSingle[address + halfsize, halfsize, acctype, aligned, ispair] = highhalf;
    else
        AArch64.MemSingle[address, size, acctype, aligned, ispair] = value;
    return;
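
The BigEndianReverse step at the start of the write form and the end of the read form is a plain byte-order reversal. A C sketch for a value held little-endian in a uint64_t (illustrative only):

    #include <stddef.h>
    #include <stdint.h>

    /* Sketch of BigEndianReverse for an nbytes-wide value (nbytes <= 8). */
    static uint64_t big_endian_reverse(uint64_t value, size_t nbytes)
    {
        uint64_t result = 0;
        for (size_t i = 0; i < nbytes; i++) {
            result = (result << 8) | (value & 0xFF);
            value >>= 8;
        }
        return result;
    }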

Library pseudocode for aarch64/functions/memory/MemAtomic

// MemAtomic()
// ===========
// Performs load and store memory operations for a given virtual address.

bits(size) MemAtomic(bits(64) address, MemAtomicOp op, bits(size) value, AccType ldacctype, AccType stacctype)
    bits(size) newvalue;
    memaddrdesc = AArch64.TranslateAddressForAtomicAccess(address, size);
    ldaccdesc = CreateAccessDescriptor(ldacctype);
    staccdesc = CreateAccessDescriptor(stacctype);

    // All observers in the shareability domain observe the
    // following load and store atomically.
    (memstatus, oldvalue) = PhysMemRead(memaddrdesc, size DIV 8, ldaccdesc);
    if IsFault(memstatus) then
        HandleExternalReadAbort(memstatus, memaddrdesc, size DIV 8, ldaccdesc);
    if BigEndian(ldacctype) then
        oldvalue = BigEndianReverse(oldvalue);

    case op of
        when MemAtomicOp_ADD  newvalue = oldvalue + value;
        when MemAtomicOp_BIC  newvalue = oldvalue AND NOT(value);
        when MemAtomicOp_EOR  newvalue = oldvalue EOR value;
        when MemAtomicOp_ORR  newvalue = oldvalue OR value;
        when MemAtomicOp_SMAX newvalue = if SInt(oldvalue) > SInt(value) then oldvalue else value;
        when MemAtomicOp_SMIN newvalue = if SInt(oldvalue) > SInt(value) then value else oldvalue;
        when MemAtomicOp_UMAX newvalue = if UInt(oldvalue) > UInt(value) then oldvalue else value;
        when MemAtomicOp_UMIN newvalue = if UInt(oldvalue) > UInt(value) then value else oldvalue;
        when MemAtomicOp_SWP  newvalue = value;

    if BigEndian(stacctype) then
        newvalue = BigEndianReverse(newvalue);
    memstatus = PhysMemWrite(memaddrdesc, size DIV 8, staccdesc, newvalue);
    if IsFault(memstatus) then
        HandleExternalWriteAbort(memstatus, memaddrdesc, size DIV 8, staccdesc);

    // Load operations return the old (pre-operation) value
    return oldvalue;
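
The case arms above are ordinary read-modify-write combiners; only the surrounding load/store pair is architecturally atomic. A C sketch of the combiner for a 64-bit access (illustrative only; a representative subset of the ops):

    #include <stdint.h>

    typedef enum { OP_ADD, OP_BIC, OP_EOR, OP_ORR, OP_SMAX, OP_UMIN, OP_SWP } atomic_op;

    /* Sketch of the MemAtomic op dispatch: compute newvalue from oldvalue. */
    static uint64_t apply_atomic_op(atomic_op op, uint64_t oldval, uint64_t value)
    {
        switch (op) {
        case OP_ADD:  return oldval + value;
        case OP_BIC:  return oldval & ~value;
        case OP_EOR:  return oldval ^ value;
        case OP_ORR:  return oldval | value;
        case OP_SMAX: return (int64_t)oldval > (int64_t)value ? oldval : value;
        case OP_UMIN: return oldval > value ? value : oldval;
        case OP_SWP:  return value;
        }
        return oldval;  /* unreachable */
    }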

Library pseudocode for aarch64/functions/memory/MemAtomicCompareAndSwap

// MemAtomicCompareAndSwap()
// =========================
// Compares the value stored at the passed-in memory address against the passed-in expected
// value. If the comparison is successful, the value at the passed-in memory address is swapped
// with the passed-in new_value.

bits(size) MemAtomicCompareAndSwap(bits(64) address, bits(size) expectedvalue, bits(size) newvalue_in, AccType ldacctype, AccType stacctype)
    bits(size) newvalue = newvalue_in;
    memaddrdesc = AArch64.TranslateAddressForAtomicAccess(address, size);
    ldaccdesc = CreateAccessDescriptor(ldacctype);
    staccdesc = CreateAccessDescriptor(stacctype);

    // All observers in the shareability domain observe the
    // following load and store atomically.
    (memstatus, oldvalue) = PhysMemRead(memaddrdesc, size DIV 8, ldaccdesc);
    if IsFault(memstatus) then
        HandleExternalReadAbort(memstatus, memaddrdesc, size DIV 8, ldaccdesc);
    if BigEndian(ldacctype) then
        oldvalue = BigEndianReverse(oldvalue);

    if oldvalue == expectedvalue then
        if BigEndian(stacctype) then
            newvalue = BigEndianReverse(newvalue);
        memstatus = PhysMemWrite(memaddrdesc, size DIV 8, staccdesc, newvalue);
        if IsFault(memstatus) then
            HandleExternalWriteAbort(memstatus, memaddrdesc, size DIV 8, staccdesc);
    return oldvalue;
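
These semantics map closely onto C11's strong compare-exchange: the pre-operation value is always returned, and the store happens only on a match. A sketch for a 64-bit location (illustrative only):

    #include <stdatomic.h>
    #include <stdint.h>

    /* Sketch of MemAtomicCompareAndSwap semantics using C11 atomics. */
    static uint64_t cas64(_Atomic uint64_t *addr, uint64_t expected, uint64_t newval)
    {
        uint64_t old = expected;
        atomic_compare_exchange_strong(addr, &old, newval);
        return old;  /* pre-operation value, matching the pseudocode */
    }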

Library pseudocode for aarch64/functions/memory/MemLoad64B

// MemLoad64B()
// ============
// Performs an atomic 64-byte read from a given virtual address.

bits(512) MemLoad64B(bits(64) address, AccType acctype)
    bits(512) data;
    boolean iswrite = FALSE;
    constant integer size = 64;
    aligned = AArch64.CheckAlignment(address, size, acctype, iswrite);

    if !AddressSupportsLS64(address) then
        c = ConstrainUnpredictable(Unpredictable_LS64UNSUPPORTED);
        assert c IN {Constraint_LIMITED_ATOMICITY, Constraint_FAULT};
        if c == Constraint_FAULT then
            // Generate a stage 1 Data Abort reported using the DFSC code of 110101.
            boolean secondstage = FALSE;
            boolean s2fs1walk = FALSE;
            FaultRecord fault = AArch64.ExclusiveFault(acctype, iswrite, secondstage, s2fs1walk);
            AArch64.Abort(address, fault);
        else
            // Accesses are not single-copy atomic above the byte level
            for i = 0 to 63
                data<7+8*i : 8*i> = AArch64.MemSingle[address+8*i, 1, acctype, aligned];
            return data;

    AddressDescriptor memaddrdesc;
    memaddrdesc = AArch64.TranslateAddress(address, acctype, iswrite, aligned, size);

    // Check for aborts or debug exceptions
    if IsFault(memaddrdesc) then
        AArch64.Abort(address, memaddrdesc.fault);

    // Effect on exclusives
    if memaddrdesc.memattrs.shareability != Shareability_NSH then
        ClearExclusiveByAddress(memaddrdesc.paddress, ProcessorID(), size);

    // Memory array access
    accdesc = CreateAccessDescriptor(acctype);
    if HaveMTE2Ext() then
        if AArch64.AccessIsTagChecked(ZeroExtend(address, 64), acctype) then
            bits(4) ptag = AArch64.PhysicalTag(ZeroExtend(address, 64));
            if !AArch64.CheckTag(memaddrdesc, accdesc, ptag, iswrite) then
                AArch64.TagCheckFault(address, acctype, iswrite);

    PhysMemRetStatus memstatus;
    (memstatus, data) = PhysMemRead(memaddrdesc, size, accdesc);
    if IsFault(memstatus) then
        HandleExternalReadAbort(memstatus, memaddrdesc, size, accdesc);

    return data;

Library pseudocode for aarch64/functions/memory/MemStore64B

// MemStore64B()
// =============
// Performs an atomic 64-byte store to a given virtual address. Function does
// not return the status of the store.

MemStore64B(bits(64) address, bits(512) value, AccType acctype)
    boolean iswrite = TRUE;
    constant integer size = 64;
    aligned = AArch64.CheckAlignment(address, size, acctype, iswrite);

    if !AddressSupportsLS64(address) then
        c = ConstrainUnpredictable(Unpredictable_LS64UNSUPPORTED);
        assert c IN {Constraint_LIMITED_ATOMICITY, Constraint_FAULT};
        if c == Constraint_FAULT then
            // Generate a Data Abort reported using the DFSC code of 110101.
            boolean secondstage = FALSE;
            boolean s2fs1walk = FALSE;
            fault = AArch64.ExclusiveFault(acctype, iswrite, secondstage, s2fs1walk);
            AArch64.Abort(address, fault);
        else
            // Accesses are not single-copy atomic above the byte level.
            for i = 0 to 63
                AArch64.MemSingle[address+8*i, 1, acctype, aligned] = value<7+8*i : 8*i>;
    else
        // Return status is ignored by ST64B
        - = MemStore64BWithRet(address, value, acctype);
    return;

Library pseudocode for aarch64/functions/memory/MemStore64BWithRet

// MemStore64BWithRet()
// ====================
// Performs an atomic 64-byte store to a given virtual address returning
// the status value of the operation.

bits(64) MemStore64BWithRet(bits(64) address, bits(512) value, AccType acctype)
    AddressDescriptor memaddrdesc;
    boolean iswrite = TRUE;
    constant integer size = 64;
    aligned = AArch64.CheckAlignment(address, size, acctype, iswrite);
    memaddrdesc = AArch64.TranslateAddress(address, acctype, iswrite, aligned, size);

    // Check for aborts or debug exceptions
    if IsFault(memaddrdesc) then
        AArch64.Abort(address, memaddrdesc.fault);
        return ZeroExtend('1');

    // Effect on exclusives
    if memaddrdesc.memattrs.shareability != Shareability_NSH then
        ClearExclusiveByAddress(memaddrdesc.paddress, ProcessorID(), 64);

    // Memory array access
    accdesc = CreateAccessDescriptor(acctype);
    if HaveMTE2Ext() then
        if AArch64.AccessIsTagChecked(ZeroExtend(address, 64), acctype) then
            bits(4) ptag = AArch64.PhysicalTag(ZeroExtend(address, 64));
            if !AArch64.CheckTag(memaddrdesc, accdesc, ptag, iswrite) then
                AArch64.TagCheckFault(address, acctype, iswrite);
                return ZeroExtend('1');

    memstatus = PhysMemWrite(memaddrdesc, size, accdesc, value);
    if IsFault(memstatus) then
        HandleExternalWriteAbort(memstatus, memaddrdesc, size, accdesc);
    return memstatus.store64bstatus;

Library pseudocode for aarch64/functions/memory/MemStore64BWithRetStatus

// Generates the return status of memory write with ST64BV or ST64BV0
// instructions. The status indicates if the operation succeeded, failed,
// or was not supported at this memory location.
bits(64) MemStore64BWithRetStatus();

Library pseudocode for aarch64/functions/memory/NVMem

// NVMem[] - non-assignment form
// =============================
// This function is the load memory access for the transformed System register read access
// when Enhanced Nested Virtualisation is enabled with HCR_EL2.NV2 = 1.
// The address for the load memory access is calculated using
// the formula SignExtend(VNCR_EL2.BADDR : Offset<11:0>, 64) where,
// * VNCR_EL2.BADDR holds the base address of the memory location, and
// * Offset is the unique offset value defined architecturally for each System register that
//   supports transformation of register access to memory access.

bits(64) NVMem[integer offset]
    assert offset > 0;
    bits(64) address = SignExtend(VNCR_EL2.BADDR:offset<11:0>, 64);
    return Mem[address, 8, AccType_NV2REGISTER];

// NVMem[] - assignment form
// =========================
// This function is the store memory access for the transformed System register write access
// when Enhanced Nested Virtualisation is enabled with HCR_EL2.NV2 = 1.
// The address for the store memory access is calculated using
// the formula SignExtend(VNCR_EL2.BADDR : Offset<11:0>, 64) where,
// * VNCR_EL2.BADDR holds the base address of the memory location, and
// * Offset is the unique offset value defined architecturally for each System register that
//   supports transformation of register access to memory access.

NVMem[integer offset] = bits(64) value
    assert offset > 0;
    bits(64) address = SignExtend(VNCR_EL2.BADDR:offset<11:0>, 64);
    Mem[address, 8, AccType_NV2REGISTER] = value;
    return;
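
The address formula concatenates the BADDR field with the 12-bit register-specific offset and sign-extends the result. A C sketch, assuming baddr is the BADDR field right-aligned and baddr_bits is its architectural width (both parameters are assumptions of this sketch, not taken from the pseudocode):

    #include <stdint.h>

    /* Sketch of SignExtend(BADDR : offset<11:0>, 64). */
    static int64_t nvmem_address(uint64_t baddr, unsigned baddr_bits, uint16_t offset)
    {
        unsigned total = baddr_bits + 12;                 /* width before sign extension */
        uint64_t cat   = (baddr << 12) | (offset & 0xFFF); /* BADDR : offset<11:0> */
        uint64_t sign  = 1ULL << (total - 1);
        return (int64_t)((cat ^ sign) - sign);            /* sign-extend to 64 bits */
    }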

Library pseudocode for aarch64/functions/memory/PhysMemTagWrite

// This is the hardware operation which performs a single-copy atomic,
// Allocation Tag granule aligned, memory access to the tag in PA space.
//
// The function addresses the array using desc.paddress which supplies:
// * A 52-bit physical address
// * A single NS bit to select between Secure and Non-secure parts of the array.
//
// The accdesc descriptor describes the access type: normal, exclusive, ordered, streaming,
// etc and other parameters required to access the physical memory or for setting syndrome
// register in the event of an External abort.
PhysMemRetStatus PhysMemTagWrite(AddressDescriptor desc, AccessDescriptor accdesc, bits(4) value);

Library pseudocode for aarch64/functions/memory/SetTagCheckedInstruction

// Flag the current instruction as using/not using memory tag checking.
SetTagCheckedInstruction(boolean checked);

Library pseudocode for aarch64/functions/mops/CPYPostSizeChoice

// Returns the size of the copy that is performed by the CPYE* instructions for this
// implementation given the parameters of the destination, source and size of the copy.
// Postsize is encoded as -1*size for an option A implementation if cpysize is negative.
bits(64) CPYPostSizeChoice(bits(64) toaddress, bits(64) fromaddress, bits(64) cpysize);

Library pseudocode for aarch64/functions/mops/CPYPreSizeChoice

// Returns the size of the copy that is performed by the CPYP* instructions for this
// implementation given the parameters of the destination, source and size of the copy.
// Presize is encoded as -1*size for an option A implementation if cpysize is negative.
bits(64) CPYPreSizeChoice(bits(64) toaddress, bits(64) fromaddress, bits(64) cpysize);

Library pseudocode for aarch64/functions/mops/CPYSizeChoice

// Returns the size of the block that is copied in an iteration of the copy given the
// parameters of the destination, source and size of the copy.
integer CPYSizeChoice(bits(64) toaddress, bits(64) fromaddress, bits(64) cpysize);

Library pseudocode for aarch64/functions/mops/CheckMOPSEnabled

// CheckMOPSEnabled()
// ==================
// Check for EL0 and EL1 access to the CPY* and SET* instructions.

CheckMOPSEnabled()
    if (PSTATE.EL IN {EL0, EL1} && EL2Enabled() &&
          (HCR_EL2.TGE == '0' || HCR_EL2.E2H == '0') &&
          (!IsHCRXEL2Enabled() || HCRX_EL2.MSCEn == '0')) then
        UNDEFINED;

    if (PSTATE.EL == EL0 && SCTLR_EL1.MSCEn == '0' &&
          (!EL2Enabled() || HCR_EL2.TGE == '0' || HCR_EL2.E2H == '0')) then
        UNDEFINED;

    if PSTATE.EL == EL0 && IsInHost() && SCTLR_EL2.MSCEn == '0' then
        UNDEFINED;

Library pseudocode for aarch64/functions/mops/MOPSStage

enumeration MOPSStage {MOPSStage_Prologue, MOPSStage_Main, MOPSStage_Epilogue};

Library pseudocode for aarch64/functions/mops/MaxBlockSizeCopiedBytes

// MaxBlockSizeCopiedBytes()
// =========================
// Returns the maximum number of bytes that can be used in a single block of the copy.

integer MaxBlockSizeCopiedBytes()
    return integer IMPLEMENTATION_DEFINED "Maximum bytes used in a single block of a copy";

Library pseudocode for aarch64/functions/mops/MemCpyAccessTypes

// MemCpyAccessTypes()
// ===================
// Return the read and write access types for a CPY* instruction.

(AccType, AccType) MemCpyAccessTypes(bits(4) options)
    unpriv_at_el1 = PSTATE.EL == EL1 && !(EL2Enabled() && HaveNVExt() && HCR_EL2.<NV,NV1> == '11');
    unpriv_at_el2 = PSTATE.EL == EL2 && HaveVirtHostExt() && HCR_EL2.<E2H,TGE> == '11';

    runpriv_at_el1 = options<1> == '1' && unpriv_at_el1;
    runpriv_at_el2 = options<1> == '1' && unpriv_at_el2;
    wunpriv_at_el1 = options<0> == '1' && unpriv_at_el1;
    wunpriv_at_el2 = options<0> == '1' && unpriv_at_el2;
    user_access_override = HaveUAOExt() && PSTATE.UAO == '1';

    AccType racctype;
    if !user_access_override && (runpriv_at_el1 || runpriv_at_el2) then
        racctype = if options<3> == '0' then AccType_UNPRIV else AccType_UNPRIVSTREAM;
    else
        racctype = if options<3> == '0' then AccType_NORMAL else AccType_STREAM;

    AccType wacctype;
    if !user_access_override && (wunpriv_at_el1 || wunpriv_at_el2) then
        wacctype = if options<2> == '0' then AccType_UNPRIV else AccType_UNPRIVSTREAM;
    else
        wacctype = if options<2> == '0' then AccType_NORMAL else AccType_STREAM;

    return (racctype, wacctype);
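In summary, options<1> and options<0> request unprivileged reads and writes (honoured only where the conditions above allow, and overridden by PSTATE.UAO), while options<3> and options<2> select the streaming (non-temporal) variants. A hypothetical Python sketch of that decode, with unpriv_allowed standing in for the EL1/EL2 conditions computed above:

# Illustrative decode of the CPY* 'options' field; not the spec's code.
def memcpy_access_types(options, unpriv_allowed, uao):
    def pick(unpriv_bit, stream_bit):
        unpriv = bool(options & (1 << unpriv_bit)) and unpriv_allowed and not uao
        stream = bool(options & (1 << stream_bit))
        if unpriv:
            return "AccType_UNPRIVSTREAM" if stream else "AccType_UNPRIV"
        return "AccType_STREAM" if stream else "AccType_NORMAL"
    return pick(1, 3), pick(0, 2)   # (read, write) access types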

Library pseudocode for aarch64/functions/mops/MemCpyDirectionChoice

// Returns TRUE if, in the non-overlapping case, a memcpy of size cpysize bytes
// from the source address fromaddress to the destination address toaddress is done
// in the forward direction on this implementation.
boolean MemCpyDirectionChoice(bits(64) fromaddress, bits(64) toaddress, bits(64) cpysize);

Library pseudocode for aarch64/functions/mops/MemCpyOptionA

// MemCpyOptionA()
// ===============
// Returns TRUE if the implementation uses Option A for the
// CPY*/SET* instructions, and FALSE otherwise.

boolean MemCpyOptionA()
    return boolean IMPLEMENTATION_DEFINED "CPY*/SET* instructions use Option A";

Library pseudocode for aarch64/functions/mops/MemCpyParametersIllformedE

// Returns TRUE if the inputs are not well formed (in terms of their size and/or alignment)
// for a CPYE* instruction for this implementation given the parameters of the destination,
// source and size of the copy.
boolean MemCpyParametersIllformedE(bits(64) toaddress, bits(64) fromaddress, bits(64) cpysize);

Library pseudocode for aarch64/functions/mops/MemCpyParametersIllformedM

// Returns TRUE if the inputs are not well formed (in terms of their size and/or alignment)
// for a CPYM* instruction for this implementation given the parameters of the destination,
// source and size of the copy.
boolean MemCpyParametersIllformedM(bits(64) toaddress, bits(64) fromaddress, bits(64) cpysize);

Library pseudocode for aarch64/functions/mops/MemCpyZeroSizeCheck

// Returns TRUE if the implementation option is checked on a copy of size zero remaining.
boolean MemCpyZeroSizeCheck();

Library pseudocode for aarch64/functions/mops/MemSetAccessType

// MemSetAccessType()
// ==================
// Return the access type for a SET* instruction.

AccType MemSetAccessType(bits(2) options)
    unpriv_at_el1 = (options<0> == '1' && PSTATE.EL == EL1 &&
                     !(EL2Enabled() && HaveNVExt() && HCR_EL2.<NV,NV1> == '11'));
    unpriv_at_el2 = (options<0> == '1' && PSTATE.EL == EL2 &&
                     HaveVirtHostExt() && HCR_EL2.<E2H,TGE> == '11');
    user_access_override = HaveUAOExt() && PSTATE.UAO == '1';

    AccType acctype;
    if !user_access_override && (unpriv_at_el1 || unpriv_at_el2) then
        acctype = if options<1> == '0' then AccType_UNPRIV else AccType_UNPRIVSTREAM;
    else
        acctype = if options<1> == '0' then AccType_NORMAL else AccType_STREAM;

    return acctype;

Library pseudocode for aarch64/functions/mops/MemSetParametersIllformedE

// Returns TRUE if the inputs are not well formed (in terms of their size and/or
// alignment) for a SETE* or SETGE* instruction for this implementation given the
// parameters of the destination and size of the set.
boolean MemSetParametersIllformedE(bits(64) toaddress, bits(64) setsize, boolean IsSETGE);

Library pseudocode for aarch64/functions/mops/MemSetParametersIllformedM

// Returns TRUE if the inputs are not well formed (in terms of their size and/or
// alignment) for a SETM* or SETGM* instruction for this implementation given the
// parameters of the destination and size of the set.
boolean MemSetParametersIllformedM(bits(64) toaddress, bits(64) setsize, boolean IsSETGM);

Library pseudocode for aarch64/functions/mops/MemSetZeroSizeCheck

// Returns TRUE if the implementation option is checked on a set of size zero remaining.
boolean MemSetZeroSizeCheck();

Library pseudocode for aarch64/functions/mops/MismatchedCpySetTargetEL

// MismatchedCpySetTargetEL()
// ==========================
// Return the target exception level for an Exception_MemCpyMemSet.

bits(2) MismatchedCpySetTargetEL()
    bits(2) target_el;
    if UInt(PSTATE.EL) > UInt(EL1) then
        target_el = PSTATE.EL;
    elsif PSTATE.EL == EL0 && EL2Enabled() && HCR_EL2.TGE == '1' then
        target_el = EL2;
    elsif (PSTATE.EL == EL1 && EL2Enabled() && IsHCRXEL2Enabled() &&
           HCRX_EL2.MCE2 == '1') then
        target_el = EL2;
    else
        target_el = EL1;
    return target_el;
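A minimal Python sketch of this routing decision, illustrative only, which folds IsHCRXEL2Enabled() && HCRX_EL2.MCE2 == '1' into a single mce2 flag:

def mismatched_cpyset_target_el(el, el2_enabled, tge, mce2):
    if el > 1:                                 # executing at EL2 or EL3
        return el
    if el == 0 and el2_enabled and tge:        # HCR_EL2.TGE routes EL0 to EL2
        return 2
    if el == 1 and el2_enabled and mce2:       # HCRX_EL2.MCE2 routes EL1 to EL2
        return 2
    return 1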

Library pseudocode for aarch64/functions/mops/MismatchedMemCpyException

// MismatchedMemCpyException()
// ===========================
// Generates an exception for a CPY* instruction if the version
// is inconsistent with the state of the call.

MismatchedMemCpyException(boolean option_a, integer destreg, integer srcreg, integer sizereg,
                          boolean wrong_option, boolean from_epilogue, bits(4) options)
    bits(64) preferred_exception_return = ThisInstrAddr();
    integer vect_offset = 0x0;
    bits(2) target_el = MismatchedCpySetTargetEL();

    ExceptionRecord exception = ExceptionSyndrome(Exception_MemCpyMemSet);
    exception.syndrome<24> = '0';
    exception.syndrome<23> = '0';
    exception.syndrome<22:19> = options;
    exception.syndrome<18> = if from_epilogue then '1' else '0';
    exception.syndrome<17> = if wrong_option then '1' else '0';
    exception.syndrome<16> = if option_a then '1' else '0';
    // exception.syndrome<15> is RES0
    exception.syndrome<14:10> = destreg<4:0>;
    exception.syndrome<9:5> = srcreg<4:0>;
    exception.syndrome<4:0> = sizereg<4:0>;

    AArch64.TakeException(target_el, exception, preferred_exception_return, vect_offset);
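For reference, the CPY* syndrome bits above pack as in the following illustrative sketch (not the architecture's encoding function):

# Hypothetical packing of the Exception_MemCpyMemSet syndrome for CPY*,
# mirroring the bit assignments in the pseudocode above.
def memcpy_syndrome(options, from_epilogue, wrong_option, option_a,
                    destreg, srcreg, sizereg):
    syn = 0                                # syndrome<24:23> are 0 for CPY*
    syn |= (options & 0xF) << 19           # syndrome<22:19>
    syn |= int(from_epilogue) << 18        # syndrome<18>
    syn |= int(wrong_option) << 17         # syndrome<17>
    syn |= int(option_a) << 16             # syndrome<16>
    syn |= (destreg & 0x1F) << 10          # syndrome<14:10>
    syn |= (srcreg & 0x1F) << 5            # syndrome<9:5>
    syn |= sizereg & 0x1F                  # syndrome<4:0>
    return syn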

Library pseudocode for aarch64/functions/mops/MismatchedMemSetException

// MismatchedMemSetException()
// ===========================
// Generates an exception for a SET* instruction if the version
// is inconsistent with the state of the call.

MismatchedMemSetException(boolean option_a, integer destreg, integer datareg, integer sizereg,
                          boolean wrong_option, boolean from_epilogue, bits(2) options,
                          boolean is_SETG)
    bits(64) preferred_exception_return = ThisInstrAddr();
    integer vect_offset = 0x0;
    bits(2) target_el = MismatchedCpySetTargetEL();

    ExceptionRecord exception = ExceptionSyndrome(Exception_MemCpyMemSet);
    exception.syndrome<24> = '1';
    exception.syndrome<23> = if is_SETG then '1' else '0';
    // exception.syndrome<22:21> is RES0
    exception.syndrome<20:19> = options;
    exception.syndrome<18> = if from_epilogue then '1' else '0';
    exception.syndrome<17> = if wrong_option then '1' else '0';
    exception.syndrome<16> = if option_a then '1' else '0';
    // exception.syndrome<15> is RES0
    exception.syndrome<14:10> = destreg<4:0>;
    exception.syndrome<9:5> = datareg<4:0>;
    exception.syndrome<4:0> = sizereg<4:0>;

    AArch64.TakeException(target_el, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/functions/mops/SETPostSizeChoice

// Returns the size of the set that is performed by the SETE* or SETGE* instructions
// for this implementation, given the parameters of the destination and size of the set.
// Postsize is encoded as -1*size for an option A implementation if setsize is negative.
bits(64) SETPostSizeChoice(bits(64) toaddress, bits(64) setsize, boolean IsSETGE);

Library pseudocode for aarch64/functions/mops/SETPreSizeChoice

// Returns the size of the set that is performed by the SETP* or SETGP* instructions
// for this implementation, given the parameters of the destination and size of the set.
// Presize is encoded as -1*size for an option A implementation if setsize is negative.
bits(64) SETPreSizeChoice(bits(64) toaddress, bits(64) setsize, boolean IsSETGP);

Library pseudocode for aarch64/functions/mops/SETSizeChoice

// Returns the size of the block that is set in an iteration of the set given
// the parameters of the destination and size of the set. The size of the block
// is an integer multiple of AlignSize.
integer SETSizeChoice(bits(64) toaddress, bits(64) setsize, integer AlignSize);

Library pseudocode for aarch64/functions/pac/addpac/AddPAC

// AddPAC()
// ========
// Calculates the pointer authentication code for a 64-bit quantity and then
// inserts that into pointer authentication code field of that 64-bit quantity.

bits(64) AddPAC(bits(64) ptr, bits(64) modifier, bits(128) K, boolean data)
    bits(64) PAC;
    bits(64) result;
    bits(64) ext_ptr;
    bits(64) extfield;
    bit selbit;
    boolean tbi = EffectiveTBI(ptr, !data, PSTATE.EL) == '1';
    integer top_bit = if tbi then 55 else 63;

    // If tagged pointers are in use for a regime with two TTBRs, use bit<55> of
    // the pointer to select between upper and lower ranges, and preserve this.
    // This handles the awkward case where there is apparently no correct choice between
    // the upper and lower address range - ie an addr of 1xxxxxxx0... with TBI0=0 and TBI1=1
    // and 0xxxxxxx1 with TBI1=0 and TBI0=1:
    if PtrHasUpperAndLowerAddRanges() then
        assert S1TranslationRegime() IN {EL1, EL2};
        if S1TranslationRegime() == EL1 then
            // EL1 translation regime registers
            if data then
                if TCR_EL1.TBI1 == '1' || TCR_EL1.TBI0 == '1' then
                    selbit = ptr<55>;
                else
                    selbit = ptr<63>;
            else
                if ((TCR_EL1.TBI1 == '1' && TCR_EL1.TBID1 == '0') ||
                    (TCR_EL1.TBI0 == '1' && TCR_EL1.TBID0 == '0')) then
                    selbit = ptr<55>;
                else
                    selbit = ptr<63>;
        else
            // EL2 translation regime registers
            if data then
                if TCR_EL2.TBI1 == '1' || TCR_EL2.TBI0 == '1' then
                    selbit = ptr<55>;
                else
                    selbit = ptr<63>;
            else
                if ((TCR_EL2.TBI1 == '1' && TCR_EL2.TBID1 == '0') ||
                    (TCR_EL2.TBI0 == '1' && TCR_EL2.TBID0 == '0')) then
                    selbit = ptr<55>;
                else
                    selbit = ptr<63>;
    else
        selbit = if tbi then ptr<55> else ptr<63>;

    if HaveEnhancedPAC2() && ConstPACField() then selbit = ptr<55>;
    integer bottom_PAC_bit = CalculateBottomPACBit(selbit);

    // The pointer authentication code field takes all the available bits in between
    extfield = Replicate(selbit, 64);

    // Compute the pointer authentication code for a ptr with good extension bits
    if tbi then
        ext_ptr = ptr<63:56>:extfield<(56-bottom_PAC_bit)-1:0>:ptr<bottom_PAC_bit-1:0>;
    else
        ext_ptr = extfield<(64-bottom_PAC_bit)-1:0>:ptr<bottom_PAC_bit-1:0>;

    PAC = ComputePAC(ext_ptr, modifier, K<127:64>, K<63:0>);

    // Check if the ptr has good extension bits and corrupt the
    // pointer authentication code if not
    if !IsZero(ptr<top_bit:bottom_PAC_bit>) && !IsOnes(ptr<top_bit:bottom_PAC_bit>) then
        if HaveEnhancedPAC() then
            PAC = 0x0000000000000000<63:0>;
        elsif !HaveEnhancedPAC2() then
            PAC<top_bit-1> = NOT(PAC<top_bit-1>);

    // preserve the determination between upper and lower address at bit<55> and insert PAC
    if !HaveEnhancedPAC2() then
        if tbi then
            result = ptr<63:56>:selbit:PAC<54:bottom_PAC_bit>:ptr<bottom_PAC_bit-1:0>;
        else
            result = PAC<63:56>:selbit:PAC<54:bottom_PAC_bit>:ptr<bottom_PAC_bit-1:0>;
    else
        if tbi then
            result = ptr<63:56>:selbit:(ptr<54:bottom_PAC_bit> EOR PAC<54:bottom_PAC_bit>):ptr<bottom_PAC_bit-1:0>;
        else
            result = (ptr<63:56> EOR PAC<63:56>):selbit:(ptr<54:bottom_PAC_bit> EOR PAC<54:bottom_PAC_bit>):ptr<bottom_PAC_bit-1:0>;
    return result;
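As an illustration of the final insertion step only, the following hypothetical Python sketch shows the !HaveEnhancedPAC2(), no-TBI layout, where the PAC occupies bits <63:56> and <54:bottom_PAC_bit> and bit 55 keeps the range-select bit; pac stands in for the ComputePAC result and b for bottom_PAC_bit.

# Illustrative sketch: result = PAC<63:56> : selbit : PAC<54:b> : ptr<b-1:0>
def insert_pac(ptr, pac, b, selbit):
    top = (pac >> 56) & 0xFF                      # PAC<63:56>
    mid = (pac >> b) & ((1 << (55 - b)) - 1)      # PAC<54:b>
    return (top << 56) | (selbit << 55) | (mid << b) | (ptr & ((1 << b) - 1))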

Library pseudocode for aarch64/functions/pac/addpacda/AddPACDA

// AddPACDA()
// ==========
// Returns a 64-bit value containing X, but replacing the pointer authentication code
// field bits with a pointer authentication code, where the pointer authentication
// code is derived using a cryptographic algorithm as a combination of X, Y and the
// APDAKey_EL1.

bits(64) AddPACDA(bits(64) X, bits(64) Y)
    boolean TrapEL2;
    boolean TrapEL3;
    bits(1) Enable;
    bits(128) APDAKey_EL1;

    APDAKey_EL1 = APDAKeyHi_EL1<63:0> : APDAKeyLo_EL1<63:0>;

    case PSTATE.EL of
        when EL0
            boolean IsEL1Regime = S1TranslationRegime() == EL1;
            Enable = if IsEL1Regime then SCTLR_EL1.EnDA else SCTLR_EL2.EnDA;
            TrapEL2 = (EL2Enabled() && HCR_EL2.API == '0' &&
                       (HCR_EL2.TGE == '0' || HCR_EL2.E2H == '0'));
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL1
            Enable = SCTLR_EL1.EnDA;
            TrapEL2 = EL2Enabled() && HCR_EL2.API == '0';
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL2
            Enable = SCTLR_EL2.EnDA;
            TrapEL2 = FALSE;
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL3
            Enable = SCTLR_EL3.EnDA;
            TrapEL2 = FALSE;
            TrapEL3 = FALSE;

    if Enable == '0' then return X;
    elsif TrapEL3 && EL3SDDTrapPriority() then UNDEFINED;
    elsif TrapEL2 then TrapPACUse(EL2);
    elsif TrapEL3 then
        if Halted() && EDSCR.SDD == '1' then UNDEFINED; else TrapPACUse(EL3);
    else return AddPAC(X, Y, APDAKey_EL1, TRUE);

Library pseudocode for aarch64/functions/pac/addpacdb/AddPACDB

// AddPACDB()
// ==========
// Returns a 64-bit value containing X, but replacing the pointer authentication code
// field bits with a pointer authentication code, where the pointer authentication
// code is derived using a cryptographic algorithm as a combination of X, Y and the
// APDBKey_EL1.

bits(64) AddPACDB(bits(64) X, bits(64) Y)
    boolean TrapEL2;
    boolean TrapEL3;
    bits(1) Enable;
    bits(128) APDBKey_EL1;

    APDBKey_EL1 = APDBKeyHi_EL1<63:0> : APDBKeyLo_EL1<63:0>;

    case PSTATE.EL of
        when EL0
            boolean IsEL1Regime = S1TranslationRegime() == EL1;
            Enable = if IsEL1Regime then SCTLR_EL1.EnDB else SCTLR_EL2.EnDB;
            TrapEL2 = (EL2Enabled() && HCR_EL2.API == '0' &&
                       (HCR_EL2.TGE == '0' || HCR_EL2.E2H == '0'));
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL1
            Enable = SCTLR_EL1.EnDB;
            TrapEL2 = EL2Enabled() && HCR_EL2.API == '0';
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL2
            Enable = SCTLR_EL2.EnDB;
            TrapEL2 = FALSE;
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL3
            Enable = SCTLR_EL3.EnDB;
            TrapEL2 = FALSE;
            TrapEL3 = FALSE;

    if Enable == '0' then return X;
    elsif TrapEL3 && EL3SDDTrapPriority() then UNDEFINED;
    elsif TrapEL2 then TrapPACUse(EL2);
    elsif TrapEL3 then
        if Halted() && EDSCR.SDD == '1' then UNDEFINED; else TrapPACUse(EL3);
    else return AddPAC(X, Y, APDBKey_EL1, TRUE);

Library pseudocode for aarch64/functions/pac/addpacga/AddPACGA

// AddPACGA()
// ==========
// Returns a 64-bit value where the lower 32 bits are 0, and the upper 32 bits contain
// a 32-bit pointer authentication code which is derived using a cryptographic
// algorithm as a combination of X, Y and the APGAKey_EL1.

bits(64) AddPACGA(bits(64) X, bits(64) Y)
    boolean TrapEL2;
    boolean TrapEL3;
    bits(128) APGAKey_EL1;

    APGAKey_EL1 = APGAKeyHi_EL1<63:0> : APGAKeyLo_EL1<63:0>;

    case PSTATE.EL of
        when EL0
            TrapEL2 = (EL2Enabled() && HCR_EL2.API == '0' &&
                       (HCR_EL2.TGE == '0' || HCR_EL2.E2H == '0'));
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL1
            TrapEL2 = EL2Enabled() && HCR_EL2.API == '0';
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL2
            TrapEL2 = FALSE;
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL3
            TrapEL2 = FALSE;
            TrapEL3 = FALSE;

    if TrapEL3 && EL3SDDTrapPriority() then UNDEFINED;
    elsif TrapEL2 then TrapPACUse(EL2);
    elsif TrapEL3 then
        if Halted() && EDSCR.SDD == '1' then UNDEFINED; else TrapPACUse(EL3);
    else return ComputePAC(X, Y, APGAKey_EL1<127:64>, APGAKey_EL1<63:0>)<63:32>:Zeros(32);

Library pseudocode for aarch64/functions/pac/addpacia/AddPACIA

// AddPACIA()
// ==========
// Returns a 64-bit value containing X, but replacing the pointer authentication code
// field bits with a pointer authentication code, where the pointer authentication
// code is derived using a cryptographic algorithm as a combination of X, Y, and the
// APIAKey_EL1.

bits(64) AddPACIA(bits(64) X, bits(64) Y)
    boolean TrapEL2;
    boolean TrapEL3;
    bits(1) Enable;
    bits(128) APIAKey_EL1;

    APIAKey_EL1 = APIAKeyHi_EL1<63:0>:APIAKeyLo_EL1<63:0>;

    case PSTATE.EL of
        when EL0
            boolean IsEL1Regime = S1TranslationRegime() == EL1;
            Enable = if IsEL1Regime then SCTLR_EL1.EnIA else SCTLR_EL2.EnIA;
            TrapEL2 = (EL2Enabled() && HCR_EL2.API == '0' &&
                       (HCR_EL2.TGE == '0' || HCR_EL2.E2H == '0'));
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL1
            Enable = SCTLR_EL1.EnIA;
            TrapEL2 = EL2Enabled() && HCR_EL2.API == '0';
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL2
            Enable = SCTLR_EL2.EnIA;
            TrapEL2 = FALSE;
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL3
            Enable = SCTLR_EL3.EnIA;
            TrapEL2 = FALSE;
            TrapEL3 = FALSE;

    if Enable == '0' then return X;
    elsif TrapEL3 && EL3SDDTrapPriority() then UNDEFINED;
    elsif TrapEL2 then TrapPACUse(EL2);
    elsif TrapEL3 then
        if Halted() && EDSCR.SDD == '1' then UNDEFINED; else TrapPACUse(EL3);
    else return AddPAC(X, Y, APIAKey_EL1, FALSE);

Library pseudocode for aarch64/functions/pac/addpacib/AddPACIB

// AddPACIB()
// ==========
// Returns a 64-bit value containing X, but replacing the pointer authentication code
// field bits with a pointer authentication code, where the pointer authentication
// code is derived using a cryptographic algorithm as a combination of X, Y and the
// APIBKey_EL1.

bits(64) AddPACIB(bits(64) X, bits(64) Y)
    boolean TrapEL2;
    boolean TrapEL3;
    bits(1) Enable;
    bits(128) APIBKey_EL1;

    APIBKey_EL1 = APIBKeyHi_EL1<63:0> : APIBKeyLo_EL1<63:0>;

    case PSTATE.EL of
        when EL0
            boolean IsEL1Regime = S1TranslationRegime() == EL1;
            Enable = if IsEL1Regime then SCTLR_EL1.EnIB else SCTLR_EL2.EnIB;
            TrapEL2 = (EL2Enabled() && HCR_EL2.API == '0' &&
                       (HCR_EL2.TGE == '0' || HCR_EL2.E2H == '0'));
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL1
            Enable = SCTLR_EL1.EnIB;
            TrapEL2 = EL2Enabled() && HCR_EL2.API == '0';
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL2
            Enable = SCTLR_EL2.EnIB;
            TrapEL2 = FALSE;
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL3
            Enable = SCTLR_EL3.EnIB;
            TrapEL2 = FALSE;
            TrapEL3 = FALSE;

    if Enable == '0' then return X;
    elsif TrapEL3 && EL3SDDTrapPriority() then UNDEFINED;
    elsif TrapEL2 then TrapPACUse(EL2);
    elsif TrapEL3 then
        if Halted() && EDSCR.SDD == '1' then UNDEFINED; else TrapPACUse(EL3);
    else return AddPAC(X, Y, APIBKey_EL1, FALSE);

Library pseudocode for aarch64/functions/pac/auth/AArch64.PACFailException

// AArch64.PACFailException()
// ==========================
// Generates a PAC Fail Exception

AArch64.PACFailException(bits(2) syndrome)
    route_to_el2 = PSTATE.EL == EL0 && EL2Enabled() && HCR_EL2.TGE == '1';
    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x0;

    exception = ExceptionSyndrome(Exception_PACFail);
    exception.syndrome<1:0> = syndrome;
    exception.syndrome<24:2> = Zeros();    // RES0

    if UInt(PSTATE.EL) > UInt(EL0) then
        AArch64.TakeException(PSTATE.EL, exception, preferred_exception_return, vect_offset);
    elsif route_to_el2 then
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);
    else
        AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/functions/pac/auth/Auth

// Auth()
// ======
// Restores the upper bits of the address to be all zeros or all ones (based on the
// value of bit[55]) and computes and checks the pointer authentication code. If the
// check passes, then the restored address is returned. If the check fails, the
// second-top and third-top bits of the extension bits in the pointer authentication code
// field are corrupted to ensure that accessing the address will give a translation fault.

bits(64) Auth(bits(64) ptr, bits(64) modifier, bits(128) K, boolean data, bit key_number,
              boolean is_combined)
    bits(64) PAC;
    bits(64) result;
    bits(64) original_ptr;
    bits(2) error_code;
    bits(64) extfield;

    // Reconstruct the extension field used when adding the PAC to the pointer
    boolean tbi = EffectiveTBI(ptr, !data, PSTATE.EL) == '1';
    integer bottom_PAC_bit = CalculateBottomPACBit(ptr<55>);
    extfield = Replicate(ptr<55>, 64);

    if tbi then
        original_ptr = ptr<63:56>:extfield<(56-bottom_PAC_bit)-1:0>:ptr<bottom_PAC_bit-1:0>;
    else
        original_ptr = extfield<(64-bottom_PAC_bit)-1:0>:ptr<bottom_PAC_bit-1:0>;

    PAC = ComputePAC(original_ptr, modifier, K<127:64>, K<63:0>);

    // Check pointer authentication code
    if tbi then
        if !HaveEnhancedPAC2() then
            if PAC<54:bottom_PAC_bit> == ptr<54:bottom_PAC_bit> then
                result = original_ptr;
            else
                error_code = key_number:NOT(key_number);
                result = original_ptr<63:55>:error_code:original_ptr<52:0>;
        else
            result = ptr;
            result<54:bottom_PAC_bit> = result<54:bottom_PAC_bit> EOR PAC<54:bottom_PAC_bit>;
            if (HaveFPACCombined() || (HaveFPAC() && !is_combined)) then
                if result<54:bottom_PAC_bit> != Replicate(result<55>, (55-bottom_PAC_bit)) then
                    error_code = (if data then '1' else '0'):key_number;
                    AArch64.PACFailException(error_code);
    else
        if !HaveEnhancedPAC2() then
            if PAC<54:bottom_PAC_bit> == ptr<54:bottom_PAC_bit> && PAC<63:56> == ptr<63:56> then
                result = original_ptr;
            else
                error_code = key_number:NOT(key_number);
                result = original_ptr<63>:error_code:original_ptr<60:0>;
        else
            result = ptr;
            result<54:bottom_PAC_bit> = result<54:bottom_PAC_bit> EOR PAC<54:bottom_PAC_bit>;
            result<63:56> = result<63:56> EOR PAC<63:56>;
            if (HaveFPACCombined() || (HaveFPAC() && !is_combined)) then
                if result<63:bottom_PAC_bit> != Replicate(result<55>, (64-bottom_PAC_bit)) then
                    error_code = (if data then '1' else '0'):key_number;
                    AArch64.PACFailException(error_code);
    return result;
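Under HaveEnhancedPAC2(), AddPAC XORs the PAC into the pointer and Auth XORs the same value out again, so a correct key restores canonical extension bits. The following hypothetical Python sketch illustrates the TBI case, where only bits <54:bottom_PAC_bit> carry PAC; pac and b are stand-ins as in the AddPAC sketch above.

# Illustrative sketch of the FEAT_PAuth2 check-and-strip flow (TBI case).
def auth_pauth2_tbi(ptr, pac, b):
    mid_mask = ((1 << (55 - b)) - 1) << b      # bits <54:b>
    result = ptr ^ (pac & mid_mask)            # undo AddPAC's XOR
    bit55 = (result >> 55) & 1
    canonical = (result & mid_mask) == (mid_mask if bit55 else 0)
    return result, canonical                   # FPAC faults when not canonical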

Library pseudocode for aarch64/functions/pac/authda/AuthDA

// AuthDA()
// ========
// Returns a 64-bit value containing X, but replacing the pointer authentication code
// field bits with the extension of the address bits. The instruction checks a pointer
// authentication code in the pointer authentication code field bits of X, using the same
// algorithm and key as AddPACDA().

bits(64) AuthDA(bits(64) X, bits(64) Y, boolean is_combined)
    boolean TrapEL2;
    boolean TrapEL3;
    bits(1) Enable;
    bits(128) APDAKey_EL1;

    APDAKey_EL1 = APDAKeyHi_EL1<63:0> : APDAKeyLo_EL1<63:0>;

    case PSTATE.EL of
        when EL0
            boolean IsEL1Regime = S1TranslationRegime() == EL1;
            Enable = if IsEL1Regime then SCTLR_EL1.EnDA else SCTLR_EL2.EnDA;
            TrapEL2 = (EL2Enabled() && HCR_EL2.API == '0' &&
                       (HCR_EL2.TGE == '0' || HCR_EL2.E2H == '0'));
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL1
            Enable = SCTLR_EL1.EnDA;
            TrapEL2 = EL2Enabled() && HCR_EL2.API == '0';
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL2
            Enable = SCTLR_EL2.EnDA;
            TrapEL2 = FALSE;
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL3
            Enable = SCTLR_EL3.EnDA;
            TrapEL2 = FALSE;
            TrapEL3 = FALSE;

    if Enable == '0' then return X;
    elsif TrapEL3 && EL3SDDTrapPriority() then UNDEFINED;
    elsif TrapEL2 then TrapPACUse(EL2);
    elsif TrapEL3 then
        if Halted() && EDSCR.SDD == '1' then UNDEFINED; else TrapPACUse(EL3);
    else return Auth(X, Y, APDAKey_EL1, TRUE, '0', is_combined);

Library pseudocode for aarch64/functions/pac/authdb/AuthDB

// AuthDB()
// ========
// Returns a 64-bit value containing X, but replacing the pointer authentication code
// field bits with the extension of the address bits. The instruction checks a
// pointer authentication code in the pointer authentication code field bits of X, using
// the same algorithm and key as AddPACDB().

bits(64) AuthDB(bits(64) X, bits(64) Y, boolean is_combined)
    boolean TrapEL2;
    boolean TrapEL3;
    bits(1) Enable;
    bits(128) APDBKey_EL1;

    APDBKey_EL1 = APDBKeyHi_EL1<63:0> : APDBKeyLo_EL1<63:0>;

    case PSTATE.EL of
        when EL0
            boolean IsEL1Regime = S1TranslationRegime() == EL1;
            Enable = if IsEL1Regime then SCTLR_EL1.EnDB else SCTLR_EL2.EnDB;
            TrapEL2 = (EL2Enabled() && HCR_EL2.API == '0' &&
                       (HCR_EL2.TGE == '0' || HCR_EL2.E2H == '0'));
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL1
            Enable = SCTLR_EL1.EnDB;
            TrapEL2 = EL2Enabled() && HCR_EL2.API == '0';
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL2
            Enable = SCTLR_EL2.EnDB;
            TrapEL2 = FALSE;
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL3
            Enable = SCTLR_EL3.EnDB;
            TrapEL2 = FALSE;
            TrapEL3 = FALSE;

    if Enable == '0' then return X;
    elsif TrapEL3 && EL3SDDTrapPriority() then UNDEFINED;
    elsif TrapEL2 then TrapPACUse(EL2);
    elsif TrapEL3 then
        if Halted() && EDSCR.SDD == '1' then UNDEFINED; else TrapPACUse(EL3);
    else return Auth(X, Y, APDBKey_EL1, TRUE, '1', is_combined);

Library pseudocode for aarch64/functions/pac/authia/AuthIA

// AuthIA()
// ========
// Returns a 64-bit value containing X, but replacing the pointer authentication code
// field bits with the extension of the address bits. The instruction checks a pointer
// authentication code in the pointer authentication code field bits of X, using the same
// algorithm and key as AddPACIA().

bits(64) AuthIA(bits(64) X, bits(64) Y, boolean is_combined)
    boolean TrapEL2;
    boolean TrapEL3;
    bits(1) Enable;
    bits(128) APIAKey_EL1;

    APIAKey_EL1 = APIAKeyHi_EL1<63:0> : APIAKeyLo_EL1<63:0>;

    case PSTATE.EL of
        when EL0
            boolean IsEL1Regime = S1TranslationRegime() == EL1;
            Enable = if IsEL1Regime then SCTLR_EL1.EnIA else SCTLR_EL2.EnIA;
            TrapEL2 = (EL2Enabled() && HCR_EL2.API == '0' &&
                       (HCR_EL2.TGE == '0' || HCR_EL2.E2H == '0'));
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL1
            Enable = SCTLR_EL1.EnIA;
            TrapEL2 = EL2Enabled() && HCR_EL2.API == '0';
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL2
            Enable = SCTLR_EL2.EnIA;
            TrapEL2 = FALSE;
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL3
            Enable = SCTLR_EL3.EnIA;
            TrapEL2 = FALSE;
            TrapEL3 = FALSE;

    if Enable == '0' then return X;
    elsif TrapEL3 && EL3SDDTrapPriority() then UNDEFINED;
    elsif TrapEL2 then TrapPACUse(EL2);
    elsif TrapEL3 then
        if Halted() && EDSCR.SDD == '1' then UNDEFINED; else TrapPACUse(EL3);
    else return Auth(X, Y, APIAKey_EL1, FALSE, '0', is_combined);

Library pseudocode for aarch64/functions/pac/authib/AuthIB

// AuthIB()
// ========
// Returns a 64-bit value containing X, but replacing the pointer authentication code
// field bits with the extension of the address bits. The instruction checks a pointer
// authentication code in the pointer authentication code field bits of X, using the same
// algorithm and key as AddPACIB().

bits(64) AuthIB(bits(64) X, bits(64) Y, boolean is_combined)
    boolean TrapEL2;
    boolean TrapEL3;
    bits(1) Enable;
    bits(128) APIBKey_EL1;

    APIBKey_EL1 = APIBKeyHi_EL1<63:0> : APIBKeyLo_EL1<63:0>;

    case PSTATE.EL of
        when EL0
            boolean IsEL1Regime = S1TranslationRegime() == EL1;
            Enable = if IsEL1Regime then SCTLR_EL1.EnIB else SCTLR_EL2.EnIB;
            TrapEL2 = (EL2Enabled() && HCR_EL2.API == '0' &&
                       (HCR_EL2.TGE == '0' || HCR_EL2.E2H == '0'));
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL1
            Enable = SCTLR_EL1.EnIB;
            TrapEL2 = EL2Enabled() && HCR_EL2.API == '0';
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL2
            Enable = SCTLR_EL2.EnIB;
            TrapEL2 = FALSE;
            TrapEL3 = HaveEL(EL3) && SCR_EL3.API == '0';
        when EL3
            Enable = SCTLR_EL3.EnIB;
            TrapEL2 = FALSE;
            TrapEL3 = FALSE;

    if Enable == '0' then return X;
    elsif TrapEL3 && EL3SDDTrapPriority() then UNDEFINED;
    elsif TrapEL2 then TrapPACUse(EL2);
    elsif TrapEL3 then
        if Halted() && EDSCR.SDD == '1' then UNDEFINED; else TrapPACUse(EL3);
    else return Auth(X, Y, APIBKey_EL1, FALSE, '1', is_combined);

Library pseudocode for aarch64/functions/pac/calcbottompacbit/CalculateBottomPACBit

// CalculateBottomPACBit()
// =======================

integer CalculateBottomPACBit(bit top_bit)
    integer tsz_field;
    boolean using64k;
    Constraint c;

    if PtrHasUpperAndLowerAddRanges() then
        assert S1TranslationRegime() IN {EL1, EL2};
        if S1TranslationRegime() == EL1 then
            // EL1 translation regime registers
            tsz_field = if top_bit == '1' then UInt(TCR_EL1.T1SZ) else UInt(TCR_EL1.T0SZ);
            using64k = if top_bit == '1' then TCR_EL1.TG1 == '11' else TCR_EL1.TG0 == '01';
        else
            // EL2 translation regime registers
            assert HaveEL(EL2);
            tsz_field = if top_bit == '1' then UInt(TCR_EL2.T1SZ) else UInt(TCR_EL2.T0SZ);
            using64k = if top_bit == '1' then TCR_EL2.TG1 == '11' else TCR_EL2.TG0 == '01';
    else
        tsz_field = if PSTATE.EL == EL2 then UInt(TCR_EL2.T0SZ) else UInt(TCR_EL3.T0SZ);
        using64k = if PSTATE.EL == EL2 then TCR_EL2.TG0 == '01' else TCR_EL3.TG0 == '01';

    max_limit_tsz_field = (if !HaveSmallTranslationTableExt() then 39
                           else if using64k then 47 else 48);
    if tsz_field > max_limit_tsz_field then
        // TCR_ELx.TySZ is out of range
        c = ConstrainUnpredictable(Unpredictable_RESTnSZ);
        assert c IN {Constraint_FORCE, Constraint_NONE};
        if c == Constraint_FORCE then tsz_field = max_limit_tsz_field;

    tszmin = if using64k && AArch64.VAMax() == 52 then 12 else 16;
    if tsz_field < tszmin then
        c = ConstrainUnpredictable(Unpredictable_RESTnSZ);
        assert c IN {Constraint_FORCE, Constraint_NONE};
        if c == Constraint_FORCE then tsz_field = tszmin;

    return (64-tsz_field);
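After range checking, the returned value is simply 64 - TnSZ. An illustrative worked example in Python:

t0sz = 16                               # 48-bit VA region
bottom_pac_bit = 64 - t0sz              # = 48
pac_field_width = 55 - bottom_pac_bit   # ptr<54:48> -> 7 PAC bits
                                        # (plus ptr<63:56> when TBI is off)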

Library pseudocode for aarch64/functions/pac/computepac/ComputePAC

// ComputePAC()
// ============

bits(64) ComputePAC(bits(64) data, bits(64) modifier, bits(64) key0, bits(64) key1)
    bits(64) workingval;
    bits(64) runningmod;
    bits(64) roundkey;
    bits(64) modk0;
    constant bits(64) Alpha = 0xC0AC29B7C97C50DD<63:0>;
    integer iterations;

    if HavePACQARMA3() then
        iterations = 2;
        RC[0] = 0x0000000000000000<63:0>;
        RC[1] = 0x13198A2E03707344<63:0>;
        RC[2] = 0xA4093822299F31D0<63:0>;
    else
        iterations = 4;
        RC[0] = 0x0000000000000000<63:0>;
        RC[1] = 0x13198A2E03707344<63:0>;
        RC[2] = 0xA4093822299F31D0<63:0>;
        RC[3] = 0x082EFA98EC4E6C89<63:0>;
        RC[4] = 0x452821E638D01377<63:0>;

    modk0 = key0<0>:key0<63:2>:(key0<63> EOR key0<1>);
    runningmod = modifier;
    workingval = data EOR key0;

    for i = 0 to iterations
        roundkey = key1 EOR runningmod;
        workingval = workingval EOR roundkey;
        workingval = workingval EOR RC[i];
        if i > 0 then
            workingval = PACCellShuffle(workingval);
            workingval = PACMult(workingval);
        if HavePACQARMA3() then
            workingval = PACSub1(workingval);
        else
            workingval = PACSub(workingval);
        runningmod = TweakShuffle(runningmod<63:0>);

    roundkey = modk0 EOR runningmod;
    workingval = workingval EOR roundkey;
    workingval = PACCellShuffle(workingval);
    workingval = PACMult(workingval);
    if HavePACQARMA3() then
        workingval = PACSub1(workingval);
    else
        workingval = PACSub(workingval);
    workingval = PACCellShuffle(workingval);
    workingval = PACMult(workingval);
    workingval = key1 EOR workingval;
    workingval = PACCellInvShuffle(workingval);
    if HavePACQARMA3() then
        workingval = PACSub1(workingval);
    else
        workingval = PACInvSub(workingval);
    workingval = PACMult(workingval);
    workingval = PACCellInvShuffle(workingval);
    workingval = workingval EOR key0;
    workingval = workingval EOR runningmod;

    for i = 0 to iterations
        if HavePACQARMA3() then
            workingval = PACSub1(workingval);
        else
            workingval = PACInvSub(workingval);
        if i < iterations then
            workingval = PACMult(workingval);
            workingval = PACCellInvShuffle(workingval);
        runningmod = TweakInvShuffle(runningmod<63:0>);
        roundkey = key1 EOR runningmod;
        workingval = workingval EOR RC[iterations-i];
        workingval = workingval EOR roundkey;

    workingval = workingval EOR Alpha;
    workingval = workingval EOR modk0;

    return workingval;

Library pseudocode for aarch64/functions/pac/computepac/ComputePAC

// ComputePAC() // ============ bits(64) ComputePAC(bits(64) data, bits(64) modifier, bits(64) key0, bits(64) key1) bits(64) workingval; bits(64) runningmod; bits(64) roundkey; bits(64) modk0; constant bits(64) Alpha = 0xC0AC29B7C97C50DD<63:0>; integer iterations; if HavePACQARMA3() then iterations = 2; RC[0] = 0x0000000000000000<63:0>; RC[1] = 0x13198A2E03707344<63:0>; RC[2] = 0xA4093822299F31D0<63:0>; else iterations = 4; RC[0] = 0x0000000000000000<63:0>; RC[1] = 0x13198A2E03707344<63:0>; RC[2] = 0xA4093822299F31D0<63:0>; RC[3] = 0x082EFA98EC4E6C89<63:0>; RC[4] = 0x452821E638D01377<63:0>; modk0 = key0<0>:key0<63:2>:(key0<63> EOR key0<1>); runningmod = modifier; workingval = data EOR key0; for i = 0 to iterations roundkey = key1 EOR runningmod; workingval = workingval EOR roundkey; workingval = workingval EOR RC[i]; if i > 0 then workingval = PACCellShuffle(workingval); workingval = PACMult(workingval); if HavePACQARMA3() then workingval = PACSub1(workingval); else workingval = PACSub(workingval); runningmod = TweakShuffle(runningmod<63:0>); roundkey = modk0 EOR runningmod; workingval = workingval EOR roundkey; workingval = PACCellShuffle(workingval); workingval = PACMult(workingval); if HavePACQARMA3() then workingval = PACSub1(workingval); else workingval = PACSub(workingval); workingval = PACCellShuffle(workingval); workingval = PACMult(workingval); workingval = key1 EOR workingval; workingval = PACCellInvShuffle(workingval); if HavePACQARMA3() then workingval = PACSub1(workingval); else workingval = PACInvSub(workingval); workingval = PACMult(workingval); workingval = PACCellInvShuffle(workingval); workingval = workingval EOR key0; workingval = workingval EOR runningmod; for i = 0 to iterations if HavePACQARMA3() then workingval = PACSub1(workingval); else workingval = PACInvSub(workingval); if i < iterations then workingval = PACMult(workingval); workingval = PACCellInvShuffle(workingval); runningmod = TweakInvShuffle(runningmod<63:0>); roundkey = key1 EOR runningmod; workingval = workingval EOR RC[iterations-i]; workingval = workingval EOR roundkey; workingval = workingval EOR Alpha; workingval = workingval EOR modk0; return workingval;

Library pseudocode for aarch64/functions/pac/computepac/PACCellInvShuffle

// PACCellInvShuffle() // =================== bits(64) PACCellInvShuffle(bits(64) indata) bits(64) outdata; outdata<3:0> = indata<15:12>; outdata<7:4> = indata<27:24>; outdata<11:8> = indata<51:48>; outdata<15:12> = indata<39:36>; outdata<19:16> = indata<59:56>; outdata<23:20> = indata<47:44>; outdata<27:24> = indata<7:4>; outdata<31:28> = indata<19:16>; outdata<35:32> = indata<35:32>; outdata<39:36> = indata<55:52>; outdata<43:40> = indata<31:28>; outdata<47:44> = indata<11:8>; outdata<51:48> = indata<23:20>; outdata<55:52> = indata<3:0>; outdata<59:56> = indata<43:40>; outdata<63:60> = indata<63:60>; return outdata;

Library pseudocode for aarch64/functions/pac/computepac/PACCellShuffle

// PACCellShuffle() // ================ bits(64) PACCellShuffle(bits(64) indata) bits(64) outdata; outdata<3:0> = indata<55:52>; outdata<7:4> = indata<27:24>; outdata<11:8> = indata<47:44>; outdata<15:12> = indata<3:0>; outdata<19:16> = indata<31:28>; outdata<23:20> = indata<51:48>; outdata<27:24> = indata<7:4>; outdata<31:28> = indata<43:40>; outdata<35:32> = indata<35:32>; outdata<39:36> = indata<15:12>; outdata<43:40> = indata<59:56>; outdata<47:44> = indata<23:20>; outdata<51:48> = indata<11:8>; outdata<55:52> = indata<39:36>; outdata<59:56> = indata<19:16>; outdata<63:60> = indata<63:60>; return outdata;
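Note: PACCellShuffle() and PACCellInvShuffle() are mutually inverse nibble permutations. A minimal C cross-check (illustrative only; the tables are transcribed from the assignments above, with cell i meaning bits <4*i+3:4*i>):

#include <assert.h>
#include <stdint.h>

/* shuf[i] = source cell of output cell i, from PACCellShuffle */
static const int shuf[16] = {13, 6, 11, 0, 7, 12, 1, 10,
                              8, 3, 14, 5, 2, 9, 4, 15};
/* inv[i] = source cell of output cell i, from PACCellInvShuffle */
static const int inv[16]  = { 3, 6, 12, 9, 14, 11, 1, 4,
                              8, 13, 7, 2, 5, 0, 10, 15};

static uint64_t permute(uint64_t x, const int *p)
{
    uint64_t out = 0;
    for (int i = 0; i < 16; i++)
        out |= ((x >> (4 * p[i])) & 0xF) << (4 * i);
    return out;
}

int main(void)
{
    uint64_t x = 0x0123456789ABCDEFULL;
    assert(permute(permute(x, shuf), inv) == x);  /* InvShuffle undoes Shuffle */
    assert(permute(permute(x, inv), shuf) == x);  /* and vice versa */
    return 0;
}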

Library pseudocode for aarch64/functions/pac/computepac/PACInvSub

// PACInvSub() // =========== bits(64) PACInvSub(bits(64) Tinput) // This is a 4-bit substitution from the PRINCE-family cipher bits(64) Toutput; for i = 0 to 15 case Tinput<4*i+3:4*i> of when '0000' Toutput<4*i+3:4*i> = '0101'; when '0001' Toutput<4*i+3:4*i> = '1110'; when '0010' Toutput<4*i+3:4*i> = '1101'; when '0011' Toutput<4*i+3:4*i> = '1000'; when '0100' Toutput<4*i+3:4*i> = '1010'; when '0101' Toutput<4*i+3:4*i> = '1011'; when '0110' Toutput<4*i+3:4*i> = '0001'; when '0111' Toutput<4*i+3:4*i> = '1001'; when '1000' Toutput<4*i+3:4*i> = '0010'; when '1001' Toutput<4*i+3:4*i> = '0110'; when '1010' Toutput<4*i+3:4*i> = '1111'; when '1011' Toutput<4*i+3:4*i> = '0000'; when '1100' Toutput<4*i+3:4*i> = '0100'; when '1101' Toutput<4*i+3:4*i> = '1100'; when '1110' Toutput<4*i+3:4*i> = '0111'; when '1111' Toutput<4*i+3:4*i> = '0011'; return Toutput;

Library pseudocode for aarch64/functions/pac/computepac/PACMult

// PACMult() // ========= bits(64) PACMult(bits(64) Sinput) bits(4) t0; bits(4) t1; bits(4) t2; bits(4) t3; bits(64) Soutput; for i = 0 to 3 t0<3:0> = RotCell(Sinput<4*(i+8)+3:4*(i+8)>, 1) EOR RotCell(Sinput<4*(i+4)+3:4*(i+4)>, 2); t0<3:0> = t0<3:0> EOR RotCell(Sinput<4*(i)+3:4*(i)>, 1); t1<3:0> = RotCell(Sinput<4*(i+12)+3:4*(i+12)>, 1) EOR RotCell(Sinput<4*(i+4)+3:4*(i+4)>, 1); t1<3:0> = t1<3:0> EOR RotCell(Sinput<4*(i)+3:4*(i)>, 2); t2<3:0> = RotCell(Sinput<4*(i+12)+3:4*(i+12)>, 2) EOR RotCell(Sinput<4*(i+8)+3:4*(i+8)>, 1); t2<3:0> = t2<3:0> EOR RotCell(Sinput<4*(i)+3:4*(i)>, 1); t3<3:0> = RotCell(Sinput<4*(i+12)+3:4*(i+12)>, 1) EOR RotCell(Sinput<4*(i+8)+3:4*(i+8)>, 2); t3<3:0> = t3<3:0> EOR RotCell(Sinput<4*(i+4)+3:4*(i+4)>, 1); Soutput<4*i+3:4*i> = t3<3:0>; Soutput<4*(i+4)+3:4*(i+4)> = t2<3:0>; Soutput<4*(i+8)+3:4*(i+8)> = t1<3:0>; Soutput<4*(i+12)+3:4*(i+12)> = t0<3:0>; return Soutput;

Library pseudocode for aarch64/functions/pac/computepac/PACSub

// PACSub() // ======== bits(64) PACSub(bits(64) Tinput) // This is a 4-bit substitution from the PRINCE-family cipher bits(64) Toutput; for i = 0 to 15 case Tinput<4*i+3:4*i> of when '0000' Toutput<4*i+3:4*i> = '1011'; when '0001' Toutput<4*i+3:4*i> = '0110'; when '0010' Toutput<4*i+3:4*i> = '1000'; when '0011' Toutput<4*i+3:4*i> = '1111'; when '0100' Toutput<4*i+3:4*i> = '1100'; when '0101' Toutput<4*i+3:4*i> = '0000'; when '0110' Toutput<4*i+3:4*i> = '1001'; when '0111' Toutput<4*i+3:4*i> = '1110'; when '1000' Toutput<4*i+3:4*i> = '0011'; when '1001' Toutput<4*i+3:4*i> = '0111'; when '1010' Toutput<4*i+3:4*i> = '0100'; when '1011' Toutput<4*i+3:4*i> = '0101'; when '1100' Toutput<4*i+3:4*i> = '1101'; when '1101' Toutput<4*i+3:4*i> = '0010'; when '1110' Toutput<4*i+3:4*i> = '0001'; when '1111' Toutput<4*i+3:4*i> = '1010'; return Toutput;
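Note: PACSub() and PACInvSub() are mutually inverse 4-bit substitutions. A minimal C cross-check (illustrative only; the tables are transcribed from the case statements of the two functions):

#include <assert.h>

static const int pac_sub[16]     = {11, 6, 8, 15, 12, 0, 9, 14,
                                     3, 7, 4, 5, 13, 2, 1, 10};
static const int pac_inv_sub[16] = { 5, 14, 13, 8, 10, 11, 1, 9,
                                     2, 6, 15, 0, 4, 12, 7, 3};

int main(void)
{
    for (int v = 0; v < 16; v++) {
        assert(pac_inv_sub[pac_sub[v]] == v);  /* InvSub undoes Sub */
        assert(pac_sub[pac_inv_sub[v]] == v);  /* and vice versa */
    }
    return 0;
}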

Library pseudocode for aarch64/functions/pac/computepac/PacSub1

// PacSub1() // ========= bits(64) PACSub1(bits(64) Tinput) // This is a 4-bit substitution from Qarma sigma1 bits(64) Toutput; for i = 0 to 15 case Tinput<4*i+3:4*i> of when '0000' Toutput<4*i+3:4*i> = '1010'; when '0001' Toutput<4*i+3:4*i> = '1101'; when '0010' Toutput<4*i+3:4*i> = '1110'; when '0011' Toutput<4*i+3:4*i> = '0110'; when '0100' Toutput<4*i+3:4*i> = '1111'; when '0101' Toutput<4*i+3:4*i> = '0111'; when '0110' Toutput<4*i+3:4*i> = '0011'; when '0111' Toutput<4*i+3:4*i> = '0101'; when '1000' Toutput<4*i+3:4*i> = '1001'; when '1001' Toutput<4*i+3:4*i> = '1000'; when '1010' Toutput<4*i+3:4*i> = '0000'; when '1011' Toutput<4*i+3:4*i> = '1100'; when '1100' Toutput<4*i+3:4*i> = '1011'; when '1101' Toutput<4*i+3:4*i> = '0001'; when '1110' Toutput<4*i+3:4*i> = '0010'; when '1111' Toutput<4*i+3:4*i> = '0100'; return Toutput;

Library pseudocode for aarch64/functions/pac/computepac/RC

array bits(64) RC[0..4];

Library pseudocode for aarch64/functions/pac/computepac/RotCell

// RotCell() // ========= bits(4) RotCell(bits(4) incell, integer amount) bits(8) tmp; bits(4) outcell; // assert amount>3 || amount<1; tmp<7:0> = incell<3:0>:incell<3:0>; outcell = tmp<7-amount:4-amount>; return outcell;
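Note: the concatenate-and-slice idiom in RotCell() is a 4-bit rotate left. A C equivalent (illustrative only; valid for amount in 1..3, matching the commented-out assertion):

#include <assert.h>
#include <stdint.h>

/* RotCell: concatenating the cell with itself and taking bits
 * <7-amount:4-amount> selects exactly the left-rotated nibble. */
static uint8_t rot_cell(uint8_t incell, int amount)
{
    uint8_t tmp = (uint8_t)((incell << 4) | incell);   /* incell:incell */
    return (uint8_t)((tmp >> (4 - amount)) & 0xF);
}

int main(void)
{
    assert(rot_cell(0x1, 1) == 0x2);  /* 0001 rotl 1 -> 0010 */
    assert(rot_cell(0x9, 1) == 0x3);  /* 1001 rotl 1 -> 0011 */
    assert(rot_cell(0x8, 3) == 0x4);  /* 1000 rotl 3 -> 0100 */
    return 0;
}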

Library pseudocode for aarch64/functions/pac/computepac/TweakCellInvRot

// TweakCellInvRot() // ================= bits(4) TweakCellInvRot(bits(4) incell) bits(4) outcell; outcell<3> = incell<2>; outcell<2> = incell<1>; outcell<1> = incell<0>; outcell<0> = incell<0> EOR incell<3>; return outcell;

Library pseudocode for aarch64/functions/pac/computepac/TweakCellRot

// TweakCellRot() // ============== bits(4) TweakCellRot(bits(4) incell) bits(4) outcell; outcell<3> = incell<0> EOR incell<1>; outcell<2> = incell<3>; outcell<1> = incell<2>; outcell<0> = incell<1>; return outcell;
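Note: TweakCellRot() and TweakCellInvRot() invert each other, which a direct C transcription can confirm over all 16 nibble values (illustrative only; bit i of the nibble is (x >> i) & 1):

#include <assert.h>
#include <stdint.h>

static uint8_t tweak_cell_rot(uint8_t x)
{
    uint8_t b0 = x & 1, b1 = (x >> 1) & 1, b2 = (x >> 2) & 1, b3 = (x >> 3) & 1;
    /* out<3> = in<0> EOR in<1>; out<2> = in<3>; out<1> = in<2>; out<0> = in<1> */
    return (uint8_t)(((b0 ^ b1) << 3) | (b3 << 2) | (b2 << 1) | b1);
}

static uint8_t tweak_cell_inv_rot(uint8_t x)
{
    uint8_t b0 = x & 1, b1 = (x >> 1) & 1, b2 = (x >> 2) & 1, b3 = (x >> 3) & 1;
    /* out<3> = in<2>; out<2> = in<1>; out<1> = in<0>; out<0> = in<0> EOR in<3> */
    return (uint8_t)((b2 << 3) | (b1 << 2) | (b0 << 1) | (b0 ^ b3));
}

int main(void)
{
    for (uint8_t v = 0; v < 16; v++) {
        assert(tweak_cell_inv_rot(tweak_cell_rot(v)) == v);
        assert(tweak_cell_rot(tweak_cell_inv_rot(v)) == v);
    }
    return 0;
}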

Library pseudocode for aarch64/functions/pac/computepac/TweakInvShuffle

// TweakInvShuffle() // ================= bits(64) TweakInvShuffle(bits(64) indata) bits(64) outdata; outdata<3:0> = TweakCellInvRot(indata<51:48>); outdata<7:4> = indata<55:52>; outdata<11:8> = indata<23:20>; outdata<15:12> = indata<27:24>; outdata<19:16> = indata<3:0>; outdata<23:20> = indata<7:4>; outdata<27:24> = TweakCellInvRot(indata<11:8>); outdata<31:28> = indata<15:12>; outdata<35:32> = TweakCellInvRot(indata<31:28>); outdata<39:36> = TweakCellInvRot(indata<63:60>); outdata<43:40> = TweakCellInvRot(indata<59:56>); outdata<47:44> = TweakCellInvRot(indata<19:16>); outdata<51:48> = indata<35:32>; outdata<55:52> = indata<39:36>; outdata<59:56> = indata<43:40>; outdata<63:60> = TweakCellInvRot(indata<47:44>); return outdata;

Library pseudocode for aarch64/functions/pac/computepac/TweakShuffle

// TweakShuffle() // ============== bits(64) TweakShuffle(bits(64) indata) bits(64) outdata; outdata<3:0> = indata<19:16>; outdata<7:4> = indata<23:20>; outdata<11:8> = TweakCellRot(indata<27:24>); outdata<15:12> = indata<31:28>; outdata<19:16> = TweakCellRot(indata<47:44>); outdata<23:20> = indata<11:8>; outdata<27:24> = indata<15:12>; outdata<31:28> = TweakCellRot(indata<35:32>); outdata<35:32> = indata<51:48>; outdata<39:36> = indata<55:52>; outdata<43:40> = indata<59:56>; outdata<47:44> = TweakCellRot(indata<63:60>); outdata<51:48> = TweakCellRot(indata<3:0>); outdata<55:52> = indata<7:4>; outdata<59:56> = TweakCellRot(indata<43:40>); outdata<63:60> = TweakCellRot(indata<39:36>); return outdata;

Library pseudocode for aarch64/functions/pac/pac/ConstPACField

// ConstPACField() // =============== // Returns TRUE if bit<55> can be used to determine the size of the PAC field, FALSE otherwise. boolean ConstPACField() return boolean IMPLEMENTATION_DEFINED "Bit 55 determines the size of the PAC field";

Library pseudocode for aarch64/functions/pac/pac/HaveEnhancedPAC

// HaveEnhancedPAC() // ================= // Returns TRUE if support for EnhancedPAC is implemented, FALSE otherwise. boolean HaveEnhancedPAC() return (HavePACExt() && boolean IMPLEMENTATION_DEFINED "Has enhanced PAC functionality");

Library pseudocode for aarch64/functions/pac/pac/HaveEnhancedPAC2

// HaveEnhancedPAC2() // ================== // Returns TRUE if support for EnhancedPAC2 is implemented, FALSE otherwise. boolean HaveEnhancedPAC2() return HasArchVersion(ARMv8p6) || (HasArchVersion(ARMv8p3) && boolean IMPLEMENTATION_DEFINED "Has enhanced PAC 2 functionality");

Library pseudocode for aarch64/functions/pac/pac/HaveFPAC

// HaveFPAC() // ========== // Returns TRUE if support for FPAC is implemented, FALSE otherwise. boolean HaveFPAC() return HaveEnhancedPAC2() && boolean IMPLEMENTATION_DEFINED "Has FPAC functionality";

Library pseudocode for aarch64/functions/pac/pac/HaveFPACCombined

// HaveFPACCombined() // ================== // Returns TRUE if support for FPACCombined is implemented, FALSE otherwise. boolean HaveFPACCombined() return HaveFPAC() && boolean IMPLEMENTATION_DEFINED "Has FPAC Combined functionality";

Library pseudocode for aarch64/functions/pac/pac/HavePACExt

// HavePACExt() // ============ // Returns TRUE if support for the PAC extension is implemented, FALSE otherwise. boolean HavePACExt() return HasArchVersion(ARMv8p3);

Library pseudocode for aarch64/functions/pac/pac/HavePACIMP

// HavePACIMP() // ============ // Returns TRUE if support for PAC IMP is implemented, FALSE otherwise. boolean HavePACIMP() return HavePACExt() && boolean IMPLEMENTATION_DEFINED "Has PAC IMP functionality";

Library pseudocode for aarch64/functions/pac/pac/HavePACQARMA3

// HavePACQARMA3() // =============== // Returns TRUE if support for PAC QARMA3 is implemented, FALSE otherwise. boolean HavePACQARMA3() return HavePACExt() && boolean IMPLEMENTATION_DEFINED "Has PAC QARMA3 functionality";

Library pseudocode for aarch64/functions/pac/pac/HavePACQARMA5

// HavePACQARMA5() // =============== // Returns TRUE if support for PAC QARMA5 is implemented, FALSE otherwise. boolean HavePACQARMA5() return HavePACExt() && boolean IMPLEMENTATION_DEFINED "Has PAC QARMA5 functionality";

Library pseudocode for aarch64/functions/pac/pac/PtrHasUpperAndLowerAddRanges

// PtrHasUpperAndLowerAddRanges() // ============================== // Returns TRUE if the pointer has upper and lower address ranges, FALSE otherwise. boolean PtrHasUpperAndLowerAddRanges() regime = TranslationRegime(PSTATE.EL, AccType_NORMAL); return HasUnprivileged(regime);

Library pseudocode for aarch64/functions/pac/strip/Strip

// Strip() // ======= // Strip() returns a 64-bit value containing A, but replacing the pointer authentication // code field bits with the extension of the address bits. This can apply to either // instructions or data, where, as the use of tagged pointers is distinct, it might be // handled differently. bits(64) Strip(bits(64) A, boolean data) bits(64) original_ptr; bits(64) extfield; boolean tbi = EffectiveTBI(A, !data, PSTATE.EL) == '1'; integer bottom_PAC_bit = CalculateBottomPACBit(A<55>); extfield = Replicate(A<55>, 64); if tbi then original_ptr = A<63:56>:extfield<(56-bottom_PAC_bit)-1:0>:A<bottom_PAC_bit-1:0>; else original_ptr = extfield<(64-bottom_PAC_bit)-1:0>:A<bottom_PAC_bit-1:0>; return original_ptr;
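Note: Strip() reduces to sign-extending the pointer from below the PAC field while, with TBI, preserving the top tag byte. A C sketch (illustrative only: bottom_pac_bit and tbi stand in for CalculateBottomPACBit(A<55>) and EffectiveTBI(), both of which depend on system register state):

#include <assert.h>
#include <stdint.h>

static uint64_t strip_pac(uint64_t a, int bottom_pac_bit, int tbi)
{
    uint64_t ext = ((a >> 55) & 1) ? ~0ULL : 0;          /* Replicate(A<55>) */
    uint64_t low_mask = (1ULL << bottom_pac_bit) - 1;    /* A<bottom-1:0> kept */
    uint64_t keep = tbi ? (low_mask | (0xFFULL << 56))   /* also keep A<63:56> */
                        : low_mask;
    return (a & keep) | (ext & ~keep);
}

int main(void)
{
    /* 48-bit VA, TBI enabled: a user pointer (bit 55 clear) carrying a
     * hypothetical PAC value 0x5A in bits 54:48 strips back to the original */
    uint64_t ptr = 0x0000123456789ABCULL;
    uint64_t signed_ptr = ptr | (0x5AULL << 48);
    assert(strip_pac(signed_ptr, 48, 1) == ptr);
    return 0;
}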

Library pseudocode for aarch64/functions/pac/trappacuse/TrapPACUse

// TrapPACUse() // ============ // Used for the trapping of the pointer authentication functions by higher exception // levels. TrapPACUse(bits(2) target_el) assert HaveEL(target_el) && target_el != EL0 && UInt(target_el) >= UInt(PSTATE.EL); bits(64) preferred_exception_return = ThisInstrAddr(); ExceptionRecord exception; vect_offset = 0; exception = ExceptionSyndrome(Exception_PACTrap); AArch64.TakeException(target_el, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/functions/ras/AArch64.ESBOperation

// AArch64.ESBOperation() // ====================== // Perform the AArch64 ESB operation, either for ESB executed in AArch64 state, or for // ESB in AArch32 state when SError interrupts are routed to an Exception level using // AArch64 AArch64.ESBOperation() boolean mask_active; route_to_el3 = HaveEL(EL3) && SCR_EL3.EA == '1'; route_to_el2 = (EL2Enabled() && (HCR_EL2.TGE == '1' || HCR_EL2.AMO == '1')); target = if route_to_el3 then EL3 elsif route_to_el2 then EL2 else EL1; if target == EL1 then mask_active = PSTATE.EL IN {EL0, EL1}; elsif HaveVirtHostExt() && target == EL2 && HCR_EL2.<E2H,TGE> == '11' then mask_active = PSTATE.EL IN {EL0, EL2}; else mask_active = PSTATE.EL == target; mask_set = (PSTATE.A == '1' && (!HaveDoubleFaultExt() || SCR_EL3.EA == '0' || PSTATE.EL != EL3 || SCR_EL3.NMEA == '0')); intdis = Halted() || ExternalDebugInterruptsDisabled(target); masked = (UInt(target) < UInt(PSTATE.EL)) || intdis || (mask_active && mask_set); // Check for a masked Physical SError pending that can be synchronized // by an Error synchronization event. if masked && IsSynchronizablePhysicalSErrorPending() then // This function might be called for an interworking case, and INTdis is masking // the SError interrupt. if ELUsingAArch32(S1TranslationRegime()) then syndrome32 = AArch32.PhysicalSErrorSyndrome(); DISR = AArch32.ReportDeferredSError(syndrome32.AET, syndrome32.ExT); else implicit_esb = FALSE; syndrome64 = AArch64.PhysicalSErrorSyndrome(implicit_esb); DISR_EL1 = AArch64.ReportDeferredSError(syndrome64); ClearPendingPhysicalSError(); // Set ISR_EL1.A to 0 return;

Library pseudocode for aarch64/functions/ras/AArch64.PhysicalSErrorSyndrome

// Return the SError syndrome bits(25) AArch64.PhysicalSErrorSyndrome(boolean implicit_esb);

Library pseudocode for aarch64/functions/ras/AArch64.ReportDeferredSError

// AArch64.ReportDeferredSError() // ============================== // Generate deferred SError syndrome bits(64) AArch64.ReportDeferredSError(bits(25) syndrome) bits(64) target; target<31> = '1'; // A target<24> = syndrome<24>; // IDS target<23:0> = syndrome<23:0>; // ISS return target;

Library pseudocode for aarch64/functions/ras/AArch64.vESBOperation

// AArch64.vESBOperation() // ======================= // Perform the AArch64 ESB operation for virtual SError interrupts, either for ESB // executed in AArch64 state, or for ESB in AArch32 state with EL2 using AArch64 state AArch64.vESBOperation() assert PSTATE.EL IN {EL0, EL1} && EL2Enabled(); // If physical SError interrupts are routed to EL2, and TGE is not set, then a virtual // SError interrupt might be pending vSEI_enabled = HCR_EL2.TGE == '0' && HCR_EL2.AMO == '1'; vSEI_pending = vSEI_enabled && HCR_EL2.VSE == '1'; vintdis = Halted() || ExternalDebugInterruptsDisabled(EL1); vmasked = vintdis || PSTATE.A == '1'; // Check for a masked virtual SError pending if vSEI_pending && vmasked then // This function might be called for the interworking case, and INTdis is masking // the virtual SError interrupt. if ELUsingAArch32(EL1) then VDISR = AArch32.ReportDeferredSError(VDFSR<15:14>, VDFSR<12>); else VDISR_EL2 = AArch64.ReportDeferredSError(VSESR_EL2<24:0>); HCR_EL2.VSE = '0'; // Clear pending virtual SError return;

Library pseudocode for aarch64/functions/registers/AArch64.MaybeZeroRegisterUppers

// AArch64.MaybeZeroRegisterUppers() // ================================= // On taking an exception to AArch64 from AArch32, it is CONSTRAINED UNPREDICTABLE whether the top // 32 bits of registers visible at any lower Exception level using AArch32 are set to zero. AArch64.MaybeZeroRegisterUppers() assert UsingAArch32(); // Always called from AArch32 state before entering AArch64 state integer first; integer last; boolean include_R15; if PSTATE.EL == EL0 && !ELUsingAArch32(EL1) then first = 0; last = 14; include_R15 = FALSE; elsif PSTATE.EL IN {EL0, EL1} && EL2Enabled() && !ELUsingAArch32(EL2) then first = 0; last = 30; include_R15 = FALSE; else first = 0; last = 30; include_R15 = TRUE; for n = first to last if (n != 15 || include_R15) && ConstrainUnpredictableBool(Unpredictable_ZEROUPPER) then _R[n]<63:32> = Zeros(); return;

Library pseudocode for aarch64/functions/registers/AArch64.ResetGeneralRegisters

// AArch64.ResetGeneralRegisters() // =============================== AArch64.ResetGeneralRegisters() for i = 0 to 30 X[i] = bits(64) UNKNOWN; return;

Library pseudocode for aarch64/functions/registers/AArch64.ResetSIMDFPRegisters

// AArch64.ResetSIMDFPRegisters() // ============================== AArch64.ResetSIMDFPRegisters() for i = 0 to 31 V[i] = bits(128) UNKNOWN; return;

Library pseudocode for aarch64/functions/registers/AArch64.ResetSpecialRegisters

// AArch64.ResetSpecialRegisters() // =============================== AArch64.ResetSpecialRegisters() // AArch64 special registers SP_EL0 = bits(64) UNKNOWN; SP_EL1 = bits(64) UNKNOWN; SPSR_EL1 = bits(64) UNKNOWN; ELR_EL1 = bits(64) UNKNOWN; if HaveEL(EL2) then SP_EL2 = bits(64) UNKNOWN; SPSR_EL2 = bits(64) UNKNOWN; ELR_EL2 = bits(64) UNKNOWN; if HaveEL(EL3) then SP_EL3 = bits(64) UNKNOWN; SPSR_EL3 = bits(64) UNKNOWN; ELR_EL3 = bits(64) UNKNOWN; // AArch32 special registers that are not architecturally mapped to AArch64 registers if HaveAArch32EL(EL1) then SPSR_fiq<31:0> = bits(32) UNKNOWN; SPSR_irq<31:0> = bits(32) UNKNOWN; SPSR_abt<31:0> = bits(32) UNKNOWN; SPSR_und<31:0> = bits(32) UNKNOWN; // External debug special registers DLR_EL0 = bits(64) UNKNOWN; DSPSR_EL0 = bits(64) UNKNOWN; return;

Library pseudocode for aarch64/functions/registers/AArch64.ResetSystemRegisters

AArch64.ResetSystemRegisters(boolean cold_reset);

Library pseudocode for aarch64/functions/registers/PC

// PC - non-assignment form // ======================== // Read program counter. bits(64) PC[] return _PC;

Library pseudocode for aarch64/functions/registers/SP

// SP[] - assignment form // ====================== // Write to stack pointer from either a 32-bit or a 64-bit value. SP[] = bits(width) value assert width IN {32,64}; if PSTATE.SP == '0' then SP_EL0 = ZeroExtend(value); else case PSTATE.EL of when EL0 SP_EL0 = ZeroExtend(value); when EL1 SP_EL1 = ZeroExtend(value); when EL2 SP_EL2 = ZeroExtend(value); when EL3 SP_EL3 = ZeroExtend(value); return; // SP[] - non-assignment form // ========================== // Read stack pointer with implicit slice of 8, 16, 32 or 64 bits. bits(width) SP[] assert width IN {8,16,32,64}; if PSTATE.SP == '0' then return SP_EL0<width-1:0>; else case PSTATE.EL of when EL0 return SP_EL0<width-1:0>; when EL1 return SP_EL1<width-1:0>; when EL2 return SP_EL2<width-1:0>; when EL3 return SP_EL3<width-1:0>;

Library pseudocode for aarch64/functions/registers/V

// V[] - assignment form // ===================== // Write to SIMD&FP register with implicit extension from // 8, 16, 32, 64 or 128 bits. V[integer n] = bits(width) value assert n >= 0 && n <= 31; assert width IN {8,16,32,64,128}; _V[n] = ZeroExtend(value); return; // V[] - non-assignment form // ========================= // Read from SIMD&FP register with implicit slice of 8, 16 // 32, 64 or 128 bits. bits(width) V[integer n] assert n >= 0 && n <= 31; assert width IN {8,16,32,64,128}; return _V[n]<width-1:0>;

Library pseudocode for aarch64/functions/registers/Vpart

// Vpart[] - non-assignment form // ============================= // Reads a 128-bit SIMD&FP register in up to two parts: // part 0 returns the bottom 8, 16, 32 or 64 bits of a value held in the register; // part 1 returns the top half of the bottom 64 bits or the top half of the 128-bit // value held in the register. bits(width) Vpart[integer n, integer part] assert n >= 0 && n <= 31; assert part IN {0, 1}; if part == 0 then assert width IN {8,16,32,64}; return _V[n]<width-1:0>; else assert width IN {32,64}; return _V[n]<(width * 2)-1:width>; // Vpart[] - assignment form // ========================= // Writes a 128-bit SIMD&FP register in up to two parts: // part 0 zero extends a 8, 16, 32, or 64-bit value to fill the whole register; // part 1 inserts a 64-bit value into the top half of the register. Vpart[integer n, integer part] = bits(width) value assert n >= 0 && n <= 31; assert part IN {0, 1}; if part == 0 then assert width IN {8,16,32,64}; _V[n] = ZeroExtend(value); else assert width == 64; _V[n]<(width * 2)-1:width> = value<width-1:0>;
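Note: a C model of the Vpart[] split for the 64-bit case (illustrative only; the 128-bit SIMD&FP register is modelled as two 64-bit lanes):

#include <assert.h>
#include <stdint.h>

typedef struct { uint64_t lo, hi; } v128;

static void vpart_write(v128 *v, int part, uint64_t value)
{
    if (part == 0) { v->lo = value; v->hi = 0; }  /* ZeroExtend to 128 bits */
    else           { v->hi = value; }             /* insert into top half  */
}

int main(void)
{
    v128 v = { ~0ULL, ~0ULL };
    vpart_write(&v, 0, 0x1111);   /* part 0 clears the top half too */
    assert(v.lo == 0x1111 && v.hi == 0);
    vpart_write(&v, 1, 0x2222);   /* part 1 leaves the bottom half alone */
    assert(v.lo == 0x1111 && v.hi == 0x2222);
    return 0;
}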

Library pseudocode for aarch64/functions/registers/X

// X[] - assignment form // ===================== // Write to general-purpose register from either a 32-bit or a 64-bit value. X[integer n] = bits(width) value assert n >= 0 && n <= 31; assert width IN {32,64}; if n != 31 then _R[n] = ZeroExtend(value); return; // X[] - assignment form // ===================== // Write to general-purpose register from either a 32-bit or a 64-bit value, // where the size of the value is passed as an argument. X[integer n, integer width] = bits(width) value assert n >= 0 && n <= 31; assert width IN {32,64}; if n != 31 then _R[n] = ZeroExtend(value, 64); return; // X[] - non-assignment form // ========================= // Read from general-purpose register with implicit slice of 8, 16, 32 or 64 bits. bits(width) X[integer n] assert n >= 0 && n <= 31; assert width IN {8,16,32,64}; if n != 31 then return _R[n]<width-1:0>; else return Zeros(width); // X[] - non-assignment form // ========================= // Read from general-purpose register with an explicit slice of 8, 16, 32 or 64 bits. bits(width) X[integer n, integer width] assert n >= 0 && n <= 31; assert width IN {8,16,32,64}; if n != 31 then return _R[n]<width-1:0>; else return Zeros(width);
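Note: the n != 31 guards are what make register 31 behave as XZR in these accessors. A small C model of that behavior (illustrative only; _R is a local array standing in for the architectural register file):

#include <assert.h>
#include <stdint.h>

static uint64_t _R[31];  /* X0..X30; there is no storage for X31 */

static void x_write(int n, uint64_t value)
{
    if (n != 31)        /* writes to X31 (XZR) are discarded */
        _R[n] = value;
}

static uint64_t x_read(int n)
{
    return (n != 31) ? _R[n] : 0;  /* X31 always reads as zero */
}

int main(void)
{
    x_write(0, 0xDEADBEEFULL);
    x_write(31, 0xDEADBEEFULL);    /* silently dropped */
    assert(x_read(0) == 0xDEADBEEFULL);
    assert(x_read(31) == 0);
    return 0;
}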

Library pseudocode for aarch64/functions/sysregisters/CNTKCTL

// CNTKCTL[] - non-assignment form // =============================== CNTKCTLType CNTKCTL[] bits(64) r; if IsInHost() then r = CNTHCTL_EL2; return r; r = CNTKCTL_EL1; return r;

Library pseudocode for aarch64/functions/sysregisters/CNTKCTLType

type CNTKCTLType;

Library pseudocode for aarch64/functions/sysregisters/CPACR

// CPACR[] - non-assignment form // ============================= CPACRType CPACR[] bits(64) r; if IsInHost() then r = CPTR_EL2; return r; r = CPACR_EL1; return r;

Library pseudocode for aarch64/functions/sysregisters/CPACRType

type CPACRType;

Library pseudocode for aarch64/functions/sysregisters/ELR

// ELR[] - non-assignment form // =========================== bits(64) ELR[bits(2) el] bits(64) r; case el of when EL1 r = ELR_EL1; when EL2 r = ELR_EL2; when EL3 r = ELR_EL3; otherwise Unreachable(); return r; // ELR[] - non-assignment form // =========================== bits(64) ELR[] assert PSTATE.EL != EL0; return ELR[PSTATE.EL]; // ELR[] - assignment form // ======================= ELR[bits(2) el] = bits(64) value bits(64) r = value; case el of when EL1 ELR_EL1 = r; when EL2 ELR_EL2 = r; when EL3 ELR_EL3 = r; otherwise Unreachable(); return; // ELR[] - assignment form // ======================= ELR[] = bits(64) value assert PSTATE.EL != EL0; ELR[PSTATE.EL] = value; return;

Library pseudocode for aarch64/functions/sysregisters/ESR

// ESR[] - non-assignment form // =========================== ESRType ESR[bits(2) regime] bits(64) r; case regime of when EL1 r = ESR_EL1; when EL2 r = ESR_EL2; when EL3 r = ESR_EL3; otherwise Unreachable(); return r; // ESR[] - non-assignment form // =========================== ESRType ESR[] return ESR[S1TranslationRegime()]; // ESR[] - assignment form // ======================= ESR[bits(2) regime] = ESRType value bits(64) r = value; case regime of when EL1 ESR_EL1 = r; when EL2 ESR_EL2 = r; when EL3 ESR_EL3 = r; otherwise Unreachable(); return; // ESR[] - assignment form // ======================= ESR[] = ESRType value ESR[S1TranslationRegime()] = value;

Library pseudocode for aarch64/functions/sysregisters/ESRType

type ESRType;

Library pseudocode for aarch64/functions/sysregisters/FAR

// FAR[] - non-assignment form // =========================== bits(64) FAR[bits(2) regime] bits(64) r; case regime of when EL1 r = FAR_EL1; when EL2 r = FAR_EL2; when EL3 r = FAR_EL3; otherwise Unreachable(); return r; // FAR[] - non-assignment form // =========================== bits(64) FAR[] return FAR[S1TranslationRegime()]; // FAR[] - assignment form // ======================= FAR[bits(2) regime] = bits(64) value bits(64) r = value; case regime of when EL1 FAR_EL1 = r; when EL2 FAR_EL2 = r; when EL3 FAR_EL3 = r; otherwise Unreachable(); return; // FAR[] - assignment form // ======================= FAR[] = bits(64) value FAR[S1TranslationRegime()] = value; return;

Library pseudocode for aarch64/functions/sysregisters/MAIR

// MAIR[] - non-assignment form // ============================ MAIRType MAIR[bits(2) regime] bits(64) r; case regime of when EL1 r = MAIR_EL1; when EL2 r = MAIR_EL2; when EL3 r = MAIR_EL3; otherwise Unreachable(); return r; // MAIR[] - non-assignment form // ============================ MAIRType MAIR[] return MAIR[S1TranslationRegime()];

Library pseudocode for aarch64/functions/sysregisters/MAIRType

type MAIRType;

Library pseudocode for aarch64/functions/sysregisters/SCTLR

// SCTLR[] - non-assignment form // ============================= SCTLRType SCTLR[bits(2) regime] bits(64) r; case regime of when EL1 r = SCTLR_EL1; when EL2 r = SCTLR_EL2; when EL3 r = SCTLR_EL3; otherwise Unreachable(); return r; // SCTLR[] - non-assignment form // ============================= SCTLRType SCTLR[] return SCTLR[S1TranslationRegime()];

Library pseudocode for aarch64/functions/sysregisters/SCTLRType

type SCTLRType;

Library pseudocode for aarch64/functions/sysregisters/VBAR

// VBAR[] - non-assignment form // ============================ bits(64) VBAR[bits(2) regime] bits(64) r; case regime of when EL1 r = VBAR_EL1; when EL2 r = VBAR_EL2; when EL3 r = VBAR_EL3; otherwise Unreachable(); return r; // VBAR[] - non-assignment form // ============================ bits(64) VBAR[] return VBAR[S1TranslationRegime()];

Library pseudocode for aarch64/functions/system/AArch64.AllocationTagAccessIsEnabled

// AArch64.AllocationTagAccessIsEnabled() // ====================================== // Check whether access to Allocation Tags is enabled. boolean AArch64.AllocationTagAccessIsEnabled(AccType acctype) bits(2) el = AArch64.AccessUsesEL(acctype); if SCR_EL3.ATA == '0' && el IN {EL0, EL1, EL2} then return FALSE; elsif HCR_EL2.ATA == '0' && el IN {EL0, EL1} && EL2Enabled() && HCR_EL2.<E2H,TGE> != '11' then return FALSE; elsif SCTLR_EL3.ATA == '0' && el == EL3 then return FALSE; elsif SCTLR_EL2.ATA == '0' && el == EL2 then return FALSE; elsif SCTLR_EL1.ATA == '0' && el == EL1 then return FALSE; elsif SCTLR_EL2.ATA0 == '0' && el == EL0 && EL2Enabled() && HCR_EL2.<E2H,TGE> == '11' then return FALSE; elsif SCTLR_EL1.ATA0 == '0' && el == EL0 && !(EL2Enabled() && HCR_EL2.<E2H,TGE> == '11') then return FALSE; else return TRUE;

Library pseudocode for aarch64/functions/system/AArch64.ChooseNonExcludedTag

// AArch64.ChooseNonExcludedTag() // ============================== // Return a tag derived from the start and the offset values, excluding // any tags in the given mask. bits(4) AArch64.ChooseNonExcludedTag(bits(4) tag_in, bits(4) offset_in, bits(16) exclude) bits(4) tag = tag_in; bits(4) offset = offset_in; if IsOnes(exclude) then return '0000'; if offset == '0000' then while exclude<UInt(tag)> == '1' do tag = tag + '0001'; while offset != '0000' do offset = offset - '0001'; tag = tag + '0001'; while exclude<UInt(tag)> == '1' do tag = tag + '0001'; return tag;
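Note: a C transcription of the tag-selection loop (illustrative only; the 4-bit "+ '0001'" arithmetic wraps modulo 16, which the & 0xF below reproduces):

#include <assert.h>

/* Step 'tag' forward by 'offset' positions, skipping any tag whose bit is
 * set in the 16-bit exclude mask; all-excluded returns tag 0. */
static int choose_non_excluded_tag(int tag, int offset, unsigned exclude)
{
    if ((exclude & 0xFFFF) == 0xFFFF) return 0;
    if (offset == 0)
        while ((exclude >> tag) & 1) tag = (tag + 1) & 0xF;
    while (offset != 0) {
        offset--;
        tag = (tag + 1) & 0xF;
        while ((exclude >> tag) & 1) tag = (tag + 1) & 0xF;
    }
    return tag;
}

int main(void)
{
    /* Tags 0 and 1 excluded: starting at 0 with offset 0 lands on 2 */
    assert(choose_non_excluded_tag(0, 0, 0x0003) == 2);
    /* Offset 1 from tag 2 with tag 3 excluded lands on 4 */
    assert(choose_non_excluded_tag(2, 1, 0x0008) == 4);
    return 0;
}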

Library pseudocode for aarch64/functions/system/AArch64.ExecutingBROrBLROrRetInstr

// AArch64.ExecutingBROrBLROrRetInstr() // ==================================== // Returns TRUE if current instruction is a BR, BLR, RET, B[L]RA[B][Z], or RETA[B]. boolean AArch64.ExecutingBROrBLROrRetInstr() if !HaveBTIExt() then return FALSE; instr = ThisInstr(); if instr<31:25> == '1101011' && instr<20:16> == '11111' then opc = instr<24:21>; return opc != '0101'; else return FALSE;

Library pseudocode for aarch64/functions/system/AArch64.ExecutingBTIInstr

// AArch64.ExecutingBTIInstr() // =========================== // Returns TRUE if current instruction is a BTI. boolean AArch64.ExecutingBTIInstr() if !HaveBTIExt() then return FALSE; instr = ThisInstr(); if instr<31:22> == '1101010100' && instr<21:12> == '0000110010' && instr<4:0> == '11111' then CRm = instr<11:8>; op2 = instr<7:5>; return (CRm == '0100' && op2<0> == '0'); else return FALSE;

Library pseudocode for aarch64/functions/system/AArch64.ExecutingERETInstr

// AArch64.ExecutingERETInstr() // ============================ // Returns TRUE if current instruction is ERET. boolean AArch64.ExecutingERETInstr() instr = ThisInstr(); return instr<31:12> == '11010110100111110000';

Library pseudocode for aarch64/functions/system/AArch64.ImpDefSysInstr

// Execute an implementation-defined system instruction with write (source operand). AArch64.ImpDefSysInstr(integer el, bits(3) op1, bits(4) CRn, bits(4) CRm, bits(3) op2, integer t);

Library pseudocode for aarch64/functions/system/AArch64.ImpDefSysInstrWithResult

// Execute an implementation-defined system instruction with read (result operand). AArch64.ImpDefSysInstrWithResult(integer el, bits(3) op1, bits(4) CRn, bits(4) CRm, bits(3) op2);

Library pseudocode for aarch64/functions/system/AArch64.ImpDefSysRegRead

// Read from an implementation-defined system register and write the contents of the register to X[t]. AArch64.ImpDefSysRegRead(bits(2) op0, bits(3) op1, bits(4) CRn, bits(4) CRm, bits(3) op2, integer t);

Library pseudocode for aarch64/functions/system/AArch64.ImpDefSysRegWrite

// Write to an implementation-defined system register. AArch64.ImpDefSysRegWrite(bits(2) op0, bits(3) op1, bits(4) CRn, bits(4) CRm, bits(3) op2, integer t);

Library pseudocode for aarch64/functions/system/AArch64.NextRandomTagBit

// AArch64.NextRandomTagBit()
// ==========================
// Generate a random bit suitable for generating a random Allocation Tag.

bit AArch64.NextRandomTagBit()
    bits(16) lfsr = RGSR_EL1.SEED;
    bit top = lfsr<5> EOR lfsr<3> EOR lfsr<2> EOR lfsr<0>;
    RGSR_EL1.SEED = top:lfsr<15:1>;
    return top;

Library pseudocode for aarch64/functions/system/AArch64.RandomTag

// AArch64.RandomTag()
// ===================
// Generate a random Allocation Tag.

bits(4) AArch64.RandomTag()
    bits(4) tag;
    for i = 0 to 3
        tag<i> = AArch64.NextRandomTagBit();
    return tag;

Library pseudocode for aarch64/functions/system/AArch64.SysInstr

// Execute a system instruction with write (source operand).
AArch64.SysInstr(integer op0, integer op1, integer crn, integer crm, integer op2, integer t);

Library pseudocode for aarch64/functions/system/AArch64.SysInstrWithResult

// Execute a system instruction with read (result operand).
// Writes the result of the instruction to X[t].
AArch64.SysInstrWithResult(integer op0, integer op1, integer crn, integer crm, integer op2, integer t);

Library pseudocode for aarch64/functions/system/AArch64.SysRegRead

// Read from a system register and write the contents of the register to X[t].
AArch64.SysRegRead(integer op0, integer op1, integer crn, integer crm, integer op2, integer t);

Library pseudocode for aarch64/functions/system/AArch64.SysRegWrite

// Write to a system register.
AArch64.SysRegWrite(integer op0, integer op1, integer crn, integer crm, integer op2, integer t);

Library pseudocode for aarch64/functions/system/BTypeCompatible

boolean BTypeCompatible;

Library pseudocode for aarch64/functions/system/BTypeCompatible_BTI

// BTypeCompatible_BTI
// ===================
// This function determines whether a given hint encoding is compatible with the current value of
// PSTATE.BTYPE. A value of TRUE here indicates a valid Branch Target Identification instruction.

boolean BTypeCompatible_BTI(bits(2) hintcode)
    case hintcode of
        when '00' return FALSE;
        when '01' return PSTATE.BTYPE != '11';
        when '10' return PSTATE.BTYPE != '10';
        when '11' return TRUE;
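
For illustration, the same compatibility table as a C sketch (names are ours; hintcode is the BTI targets field, with '01' = BTI c, '10' = BTI j, '11' = BTI jc, and btype the current PSTATE.BTYPE value):

    #include <stdbool.h>
    #include <stdint.h>

    /* Sketch of BTypeCompatible_BTI as a switch over the 2-bit hintcode. */
    bool btype_compatible_bti(uint8_t hintcode, uint8_t btype)
    {
        switch (hintcode & 3) {
        case 0:  return false;          /* BTI with no targets */
        case 1:  return btype != 3;     /* BTI c  */
        case 2:  return btype != 2;     /* BTI j  */
        default: return true;           /* BTI jc */
        }
    }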

Library pseudocode for aarch64/functions/system/BTypeCompatible_PACIXSP

// BTypeCompatible_PACIXSP()
// =========================
// Returns TRUE if a PACIASP or PACIBSP instruction is implicitly compatible with PSTATE.BTYPE,
// FALSE otherwise.

boolean BTypeCompatible_PACIXSP()
    if PSTATE.BTYPE IN {'01', '10'} then
        return TRUE;
    elsif PSTATE.BTYPE == '11' then
        index = if PSTATE.EL == EL0 then 35 else 36;
        return SCTLR[]<index> == '0';
    else
        return FALSE;

Library pseudocode for aarch64/functions/system/BTypeNext

bits(2) BTypeNext;

Library pseudocode for aarch64/functions/system/ChooseRandomNonExcludedTag

// The ChooseRandomNonExcludedTag function is used when GCR_EL1.RRND == '1' to generate random
// Allocation Tags.
//
// The resulting Allocation Tag is selected from the set [0,15], excluding any Allocation Tag where
// exclude[tag_value] == 1. If 'exclude' is all Ones, the returned Allocation Tag is '0000'.
//
// This function is permitted to generate a non-deterministic selection from the set of non-excluded
// Allocation Tags. A reasonable implementation is described by the pseudocode used when
// GCR_EL1.RRND is 0, but with a non-deterministic implementation of NextRandomTagBit().
// Implementations may choose to behave the same as GCR_EL1.RRND=0.

bits(4) ChooseRandomNonExcludedTag(bits(16) exclude_in);

Library pseudocode for aarch64/functions/system/InGuardedPage

boolean InGuardedPage;

Library pseudocode for aarch64/functions/system/IsHCRXEL2Enabled

// IsHCRXEL2Enabled()
// ==================
// Returns TRUE if access to HCRX_EL2 register is enabled, and FALSE otherwise.
// Indirect read of HCRX_EL2 returns 0 when access is not enabled.

boolean IsHCRXEL2Enabled()
    assert(HaveFeatHCX());
    if HaveEL(EL3) && SCR_EL3.HXEn == '0' then
        return FALSE;
    return EL2Enabled();

Library pseudocode for aarch64/functions/system/SetBTypeCompatible

// SetBTypeCompatible()
// ====================
// Sets the value of the BTypeCompatible global variable used by BTI

SetBTypeCompatible(boolean x)
    BTypeCompatible = x;

Library pseudocode for aarch64/functions/system/SetBTypeNext

// SetBTypeNext()
// ==============
// Set the value of the BTypeNext global variable used by BTI

SetBTypeNext(bits(2) x)
    BTypeNext = x;

Library pseudocode for aarch64/functions/system/SetInGuardedPage

// SetInGuardedPage()
// ==================
// Global state updated to denote if memory access is from a guarded page.

SetInGuardedPage(boolean guardedpage)
    InGuardedPage = guardedpage;

Library pseudocode for aarch64/instrs/branch/eret/AArch64.ExceptionReturn

// AArch64.ExceptionReturn()
// =========================

AArch64.ExceptionReturn(bits(64) new_pc_in, bits(64) spsr)
    bits(64) new_pc = new_pc_in;
    if HaveIESB() then
        sync_errors = SCTLR[].IESB == '1';
        if HaveDoubleFaultExt() then
            sync_errors = sync_errors || (SCR_EL3.<EA,NMEA> == '11' && PSTATE.EL == EL3);
        if sync_errors then
            SynchronizeErrors();
            iesb_req = TRUE;
            TakeUnmaskedPhysicalSErrorInterrupts(iesb_req);

    SynchronizeContext();

    // Attempts to change to an illegal state will invoke the Illegal Execution state mechanism
    bits(2) source_el = PSTATE.EL;
    boolean illegal_psr_state = IllegalExceptionReturn(spsr);
    SetPSTATEFromPSR(spsr, illegal_psr_state);
    ClearExclusiveLocal(ProcessorID());
    SendEventLocal();

    if illegal_psr_state && spsr<4> == '1' then
        // If the exception return is illegal, PC[63:32,1:0] are UNKNOWN
        new_pc<63:32> = bits(32) UNKNOWN;
        new_pc<1:0> = bits(2) UNKNOWN;
    elsif UsingAArch32() then
        // Return to AArch32
        // ELR_ELx[1:0] or ELR_ELx[0] are treated as being 0, depending on the
        // target instruction set state
        if PSTATE.T == '1' then
            new_pc<0> = '0';       // T32
        else
            new_pc<1:0> = '00';    // A32
    else
        // Return to AArch64
        // ELR_ELx[63:56] might include a tag
        new_pc = AArch64.BranchAddr(new_pc);

    if UsingAArch32() then
        // 32 most significant bits are ignored.
        boolean branch_conditional = FALSE;
        BranchTo(new_pc<31:0>, BranchType_ERET, branch_conditional);
    else
        BranchToAddr(new_pc, BranchType_ERET);

    CheckExceptionCatch(FALSE);    // Check for debug event on exception return
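
The return-address adjustment in the AArch32 branch of the function above can be summarized as a small C sketch (names are ours; a legal return to AArch32 forces the low bits of the target PC to zero according to the target instruction set):

    #include <stdint.h>

    /* Sketch: T32 (PSTATE.T == 1) ignores bit 0, A32 ignores bits 1:0. */
    uint64_t sanitize_aarch32_return_pc(uint64_t pc, int pstate_t)
    {
        return pstate_t ? (pc & ~1ULL) : (pc & ~3ULL);
    }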

Library pseudocode for aarch64/instrs/countop/CountOp

enumeration CountOp {CountOp_CLZ, CountOp_CLS, CountOp_CNT};

Library pseudocode for aarch64/instrs/extendreg/DecodeRegExtend

// DecodeRegExtend()
// =================
// Decode a register extension option

ExtendType DecodeRegExtend(bits(3) op)
    case op of
        when '000' return ExtendType_UXTB;
        when '001' return ExtendType_UXTH;
        when '010' return ExtendType_UXTW;
        when '011' return ExtendType_UXTX;
        when '100' return ExtendType_SXTB;
        when '101' return ExtendType_SXTH;
        when '110' return ExtendType_SXTW;
        when '111' return ExtendType_SXTX;

Library pseudocode for aarch64/instrs/extendreg/ExtendReg

// ExtendReg()
// ===========
// Perform a register extension and shift

bits(N) ExtendReg(integer reg, ExtendType exttype, integer shift)
    assert shift >= 0 && shift <= 4;
    bits(N) val = X[reg];
    boolean unsigned;
    integer len;

    case exttype of
        when ExtendType_SXTB unsigned = FALSE; len = 8;
        when ExtendType_SXTH unsigned = FALSE; len = 16;
        when ExtendType_SXTW unsigned = FALSE; len = 32;
        when ExtendType_SXTX unsigned = FALSE; len = 64;
        when ExtendType_UXTB unsigned = TRUE;  len = 8;
        when ExtendType_UXTH unsigned = TRUE;  len = 16;
        when ExtendType_UXTW unsigned = TRUE;  len = 32;
        when ExtendType_UXTX unsigned = TRUE;  len = 64;

    // Note the extended width of the intermediate value and
    // that sign extension occurs from bit <len+shift-1>, not
    // from bit <len-1>. This is equivalent to the instruction
    //   [SU]BFIZ Rtmp, Rreg, #shift, #len
    // It may also be seen as a sign/zero extend followed by a shift:
    //   LSL(Extend(val<len-1:0>, N, unsigned), shift);
    len = Min(len, N - shift);
    return Extend(val<len-1:0> : Zeros(shift), N, unsigned);

Library pseudocode for aarch64/instrs/extendreg/ExtendType

enumeration ExtendType {ExtendType_SXTB, ExtendType_SXTH, ExtendType_SXTW, ExtendType_SXTX,
                        ExtendType_UXTB, ExtendType_UXTH, ExtendType_UXTW, ExtendType_UXTX};

Library pseudocode for aarch64/instrs/float/arithmetic/max-min/fpmaxminop/FPMaxMinOp

enumeration FPMaxMinOp {FPMaxMinOp_MAX, FPMaxMinOp_MIN, FPMaxMinOp_MAXNUM, FPMaxMinOp_MINNUM};

Library pseudocode for aarch64/instrs/float/arithmetic/unary/fpunaryop/FPUnaryOp

enumeration FPUnaryOp {FPUnaryOp_ABS, FPUnaryOp_MOV, FPUnaryOp_NEG, FPUnaryOp_SQRT};

Library pseudocode for aarch64/instrs/float/convert/fpconvop/FPConvOp

enumeration FPConvOp {FPConvOp_CVT_FtoI, FPConvOp_CVT_ItoF, FPConvOp_MOV_FtoI, FPConvOp_MOV_ItoF,
                      FPConvOp_CVT_FtoI_JS};

Library pseudocode for aarch64/instrs/integer/bitfield/bfxpreferred/BFXPreferred

// BFXPreferred()
// ==============
//
// Return TRUE if UBFX or SBFX is the preferred disassembly of a
// UBFM or SBFM bitfield instruction. Must exclude more specific
// aliases UBFIZ, SBFIZ, UXT[BH], SXT[BHW], LSL, LSR and ASR.

boolean BFXPreferred(bit sf, bit uns, bits(6) imms, bits(6) immr)
    integer S = UInt(imms);
    integer R = UInt(immr);

    // must not match UBFIZ/SBFIZ alias
    if UInt(imms) < UInt(immr) then
        return FALSE;

    // must not match LSR/ASR/LSL alias (imms == 31 or 63)
    if imms == sf:'11111' then
        return FALSE;

    // must not match UXTx/SXTx alias
    if immr == '000000' then
        // must not match 32-bit UXT[BH] or SXT[BH]
        if sf == '0' && imms IN {'000111', '001111'} then
            return FALSE;
        // must not match 64-bit SXT[BHW]
        if sf:uns == '10' && imms IN {'000111', '001111', '011111'} then
            return FALSE;

    // must be UBFX/SBFX alias
    return TRUE;

Library pseudocode for aarch64/instrs/integer/bitmasks/DecodeBitMasks

// DecodeBitMasks()
// ================
// Decode AArch64 bitfield and logical immediate masks which use a similar encoding structure

(bits(M), bits(M)) DecodeBitMasks(bit immN, bits(6) imms, bits(6) immr, boolean immediate)
    bits(64) tmask, wmask;
    bits(6) tmask_and, wmask_and;
    bits(6) tmask_or, wmask_or;
    bits(6) levels;

    // Compute log2 of element size
    // 2^len must be in range [2, M]
    len = HighestSetBit(immN:NOT(imms));
    if len < 1 then UNDEFINED;
    assert M >= (1 << len);

    // Determine S, R and S - R parameters
    levels = ZeroExtend(Ones(len), 6);

    // For logical immediates an all-ones value of S is reserved
    // since it would generate a useless all-ones result (many times)
    if immediate && (imms AND levels) == levels then
        UNDEFINED;

    S = UInt(imms AND levels);
    R = UInt(immr AND levels);
    diff = S - R;    // 6-bit subtract with borrow

    // From a software perspective, the remaining code is equivalent to:
    //   esize = 1 << len;
    //   d = UInt(diff<len-1:0>);
    //   welem = ZeroExtend(Ones(S + 1), esize);
    //   telem = ZeroExtend(Ones(d + 1), esize);
    //   wmask = Replicate(ROR(welem, R));
    //   tmask = Replicate(telem);
    //   return (wmask, tmask);

    // Compute "top mask"
    tmask_and = diff<5:0> OR NOT(levels);
    tmask_or  = diff<5:0> AND levels;

    tmask = Ones(64);
    tmask = ((tmask AND Replicate(Replicate(tmask_and<0>, 1) : Ones(1), 32))
             OR Replicate(Zeros(1) : Replicate(tmask_or<0>, 1), 32));
    // optimization of first step:
    // tmask = Replicate(tmask_and<0> : '1', 32);
    tmask = ((tmask AND Replicate(Replicate(tmask_and<1>, 2) : Ones(2), 16))
             OR Replicate(Zeros(2) : Replicate(tmask_or<1>, 2), 16));
    tmask = ((tmask AND Replicate(Replicate(tmask_and<2>, 4) : Ones(4), 8))
             OR Replicate(Zeros(4) : Replicate(tmask_or<2>, 4), 8));
    tmask = ((tmask AND Replicate(Replicate(tmask_and<3>, 8) : Ones(8), 4))
             OR Replicate(Zeros(8) : Replicate(tmask_or<3>, 8), 4));
    tmask = ((tmask AND Replicate(Replicate(tmask_and<4>, 16) : Ones(16), 2))
             OR Replicate(Zeros(16) : Replicate(tmask_or<4>, 16), 2));
    tmask = ((tmask AND Replicate(Replicate(tmask_and<5>, 32) : Ones(32), 1))
             OR Replicate(Zeros(32) : Replicate(tmask_or<5>, 32), 1));

    // Compute "wraparound mask"
    wmask_and = immr OR NOT(levels);
    wmask_or  = immr AND levels;

    wmask = Zeros(64);
    wmask = ((wmask AND Replicate(Ones(1) : Replicate(wmask_and<0>, 1), 32))
             OR Replicate(Replicate(wmask_or<0>, 1) : Zeros(1), 32));
    // optimization of first step:
    // wmask = Replicate(wmask_or<0> : '0', 32);
    wmask = ((wmask AND Replicate(Ones(2) : Replicate(wmask_and<1>, 2), 16))
             OR Replicate(Replicate(wmask_or<1>, 2) : Zeros(2), 16));
    wmask = ((wmask AND Replicate(Ones(4) : Replicate(wmask_and<2>, 4), 8))
             OR Replicate(Replicate(wmask_or<2>, 4) : Zeros(4), 8));
    wmask = ((wmask AND Replicate(Ones(8) : Replicate(wmask_and<3>, 8), 4))
             OR Replicate(Replicate(wmask_or<3>, 8) : Zeros(8), 4));
    wmask = ((wmask AND Replicate(Ones(16) : Replicate(wmask_and<4>, 16), 2))
             OR Replicate(Replicate(wmask_or<4>, 16) : Zeros(16), 2));
    wmask = ((wmask AND Replicate(Ones(32) : Replicate(wmask_and<5>, 32), 1))
             OR Replicate(Replicate(wmask_or<5>, 32) : Zeros(32), 1));

    if diff<6> != '0' then    // borrow from S - R
        wmask = wmask AND tmask;
    else
        wmask = wmask OR tmask;

    return (wmask<M-1:0>, tmask<M-1:0>);
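
The commented "software perspective" form lends itself to a direct C sketch. The following illustration computes only wmask (the value used for logical immediates) in the 64-bit case, returns 0 for reserved encodings instead of raising UNDEFINED, and assumes GCC/Clang __builtin_clz:

    #include <stdint.h>

    /* Rotate right within an element of 'esize' bits (r != 0). */
    static uint64_t ror_elem(uint64_t x, int r, int esize)
    {
        uint64_t mask = (esize == 64) ? ~0ULL : (1ULL << esize) - 1;
        x &= mask;
        return ((x >> r) | (x << (esize - r))) & mask;
    }

    /* Sketch of the wmask half of DecodeBitMasks for 64-bit operands. */
    uint64_t decode_wmask64(uint32_t immN, uint32_t imms, uint32_t immr)
    {
        uint32_t v = (immN << 6) | (~imms & 0x3F);
        if (v == 0)
            return 0;                            /* reserved encoding */
        int len = 31 - __builtin_clz(v);         /* log2 of element size */
        int esize = 1 << len;
        uint32_t levels = (uint32_t)(esize - 1) & 0x3F;
        uint32_t S = imms & levels;
        uint32_t R = immr & levels;
        if (S == levels)
            return 0;                            /* reserved for immediates */
        uint64_t welem = (1ULL << (S + 1)) - 1;  /* S+1 ones */
        uint64_t elem = R ? ror_elem(welem, (int)R, esize) : welem;
        uint64_t wmask = 0;
        for (int i = 0; i < 64; i += esize)      /* Replicate() across 64 bits */
            wmask |= elem << i;
        return wmask;
    }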

Library pseudocode for aarch64/instrs/integer/ins-ext/insert/movewide/movewideop/MoveWideOp

enumeration MoveWideOp {MoveWideOp_N, MoveWideOp_Z, MoveWideOp_K};

Library pseudocode for aarch64/instrs/integer/logical/movwpreferred/MoveWidePreferred

// MoveWidePreferred()
// ===================
//
// Return TRUE if a bitmask immediate encoding would generate an immediate
// value that could also be represented by a single MOVZ or MOVN instruction.
// Used as a condition for the preferred MOV<-ORR alias.

boolean MoveWidePreferred(bit sf, bit immN, bits(6) imms, bits(6) immr)
    integer S = UInt(imms);
    integer R = UInt(immr);
    integer width = if sf == '1' then 64 else 32;

    // element size must equal total immediate size
    if sf == '1' && !((immN:imms) IN {'1xxxxxx'}) then
        return FALSE;
    if sf == '0' && !((immN:imms) IN {'00xxxxx'}) then
        return FALSE;

    // for MOVZ must contain no more than 16 ones
    if S < 16 then
        // ones must not span halfword boundary when rotated
        return (-R MOD 16) <= (15 - S);

    // for MOVN must contain no more than 16 zeros
    if S >= width - 15 then
        // zeros must not span halfword boundary when rotated
        return (R MOD 16) <= (S - (width - 15));

    return FALSE;

Library pseudocode for aarch64/instrs/integer/shiftreg/DecodeShift

// DecodeShift()
// =============
// Decode shift encodings

ShiftType DecodeShift(bits(2) op)
    case op of
        when '00' return ShiftType_LSL;
        when '01' return ShiftType_LSR;
        when '10' return ShiftType_ASR;
        when '11' return ShiftType_ROR;

Library pseudocode for aarch64/instrs/integer/shiftreg/ShiftReg

// ShiftReg()
// ==========
// Perform shift of a register operand

bits(N) ShiftReg(integer reg, ShiftType shiftype, integer amount)
    bits(N) result = X[reg];
    case shiftype of
        when ShiftType_LSL result = LSL(result, amount);
        when ShiftType_LSR result = LSR(result, amount);
        when ShiftType_ASR result = ASR(result, amount);
        when ShiftType_ROR result = ROR(result, amount);
    return result;
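
For reference, the four shift cases for a 64-bit operand as a C sketch (illustrative; callers of ShiftReg pass an amount already reduced modulo the datasize, so amount is assumed to be in 0..63 here):

    #include <stdint.h>

    uint64_t shift_reg64(uint64_t val, int shiftype, unsigned amount)
    {
        switch (shiftype) {
        case 0:  return val << amount;                       /* LSL */
        case 1:  return val >> amount;                       /* LSR */
        case 2:  return (uint64_t)((int64_t)val >> amount);  /* ASR */
        default: return amount == 0 ? val                    /* ROR */
                     : (val >> amount) | (val << (64 - amount));
        }
    }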

Library pseudocode for aarch64/instrs/integer/shiftreg/ShiftType

enumeration ShiftType {ShiftType_LSL, ShiftType_LSR, ShiftType_ASR, ShiftType_ROR};

Library pseudocode for aarch64/instrs/logicalop/LogicalOp

enumeration LogicalOp {LogicalOp_AND, LogicalOp_EOR, LogicalOp_ORR};

Library pseudocode for aarch64/instrs/memory/memop/MemAtomicOp

enumeration MemAtomicOp {MemAtomicOp_ADD, MemAtomicOp_BIC, MemAtomicOp_EOR, MemAtomicOp_ORR,
                         MemAtomicOp_SMAX, MemAtomicOp_SMIN, MemAtomicOp_UMAX, MemAtomicOp_UMIN,
                         MemAtomicOp_SWP};

Library pseudocode for aarch64/instrs/memory/memop/MemOp

enumeration MemOp {MemOp_LOAD, MemOp_STORE, MemOp_PREFETCH};

Library pseudocode for aarch64/instrs/memory/prefetch/Prefetch

// Prefetch()
// ==========
// Decode and execute the prefetch hint on ADDRESS specified by PRFOP

Prefetch(bits(64) address, bits(5) prfop)
    PrefetchHint hint;
    integer target;
    boolean stream;

    case prfop<4:3> of
        when '00' hint = Prefetch_READ;   // PLD: prefetch for load
        when '01' hint = Prefetch_EXEC;   // PLI: preload instructions
        when '10' hint = Prefetch_WRITE;  // PST: prepare for store
        when '11' return;                 // unallocated hint
    target = UInt(prfop<2:1>);            // target cache level
    stream = (prfop<0> != '0');           // streaming (non-temporal)
    Hint_Prefetch(address, hint, target, stream);
    return;
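
The PRFOP field split can be illustrated with a small C decoder (a sketch with names of our choosing; type '11' is the unallocated encoding, and the target field is the cache level minus one):

    #include <stdint.h>
    #include <stdio.h>

    /* Sketch of the prfop split: <4:3> = type, <2:1> = level, <0> = stream. */
    void print_prfop(uint32_t prfop)
    {
        static const char *const type[] = { "PLD", "PLI", "PST" };
        unsigned t = (prfop >> 3) & 3;
        if (t == 3) { printf("unallocated\n"); return; }
        printf("%sL%u%s\n", type[t], ((prfop >> 1) & 3) + 1,
               (prfop & 1) ? "STRM" : "KEEP");
    }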

Library pseudocode for aarch64/instrs/system/barriers/barrierop/MemBarrierOp

enumeration MemBarrierOp {
    MemBarrierOp_DSB,      // Data Synchronization Barrier
    MemBarrierOp_DMB,      // Data Memory Barrier
    MemBarrierOp_ISB,      // Instruction Synchronization Barrier
    MemBarrierOp_SSBB,     // Speculative Synchronization Barrier to VA
    MemBarrierOp_PSSBB,    // Speculative Synchronization Barrier to PA
    MemBarrierOp_SB        // Speculation Barrier
};

Library pseudocode for aarch64/instrs/system/hints/syshintop/SystemHintOp

enumeration SystemHintOp {SystemHintOp_NOP, SystemHintOp_YIELD, SystemHintOp_WFE,
                          SystemHintOp_WFI, SystemHintOp_SEV, SystemHintOp_SEVL,
                          SystemHintOp_DGH, SystemHintOp_ESB, SystemHintOp_PSB,
                          SystemHintOp_TSB, SystemHintOp_BTI, SystemHintOp_WFET,
                          SystemHintOp_WFIT, SystemHintOp_CSDB};

Library pseudocode for aarch64/instrs/system/register/cpsr/pstatefield/PSTATEField

enumeration PSTATEField {PSTATEField_DAIFSet, PSTATEField_DAIFClr,
                         PSTATEField_PAN,     // Armv8.1
                         PSTATEField_UAO,     // Armv8.2
                         PSTATEField_DIT,     // Armv8.4
                         PSTATEField_SSBS,
                         PSTATEField_TCO,     // Armv8.5
                         PSTATEField_ALLINT,
                         PSTATEField_SP};

Library pseudocode for aarch64/instrs/system/sysops/at/AArch64.AT

// AArch64.AT()
// ============
// Perform address translation as per AT instructions.

AArch64.AT(bits(64) address, TranslationStage stage_in, bits(2) el_in, ATAccess ataccess)
    TranslationStage stage = stage_in;
    bits(2) el = el_in;

    // For stage 1 translation, when HCR_EL2.{E2H, TGE} is {1,1} and requested EL is EL1,
    // the EL2&0 translation regime is used.
    if HCR_EL2.<E2H, TGE> == '11' && el == EL1 && stage == TranslationStage_1 then
        el = EL2;

    if HaveEL(EL3) && stage == TranslationStage_12 && !EL2Enabled() then
        stage = TranslationStage_1;

    acctype = if ataccess IN {ATAccess_Read, ATAccess_Write} then AccType_AT else AccType_ATPAN;
    iswrite = ataccess IN {ATAccess_WritePAN, ATAccess_Write};
    aligned = TRUE;
    ispriv = el != EL0;

    fault = NoFault();
    fault.acctype = acctype;
    fault.write = iswrite;

    Regime regime;
    if stage == TranslationStage_12 then
        regime = Regime_EL10;
    else
        regime = TranslationRegime(el, acctype);

    AddressDescriptor addrdesc;
    ss = SecurityStateAtEL(el);
    if (el == EL0 && ELUsingAArch32(EL1)) || (el != EL0 && ELUsingAArch32(el)) then
        if regime == Regime_EL2 || TTBCR.EAE == '1' then
            (fault, addrdesc) = AArch32.S1TranslateLD(fault, regime, ss, address<31:0>,
                                                      acctype, aligned, iswrite, ispriv);
        else
            (fault, addrdesc, -) = AArch32.S1TranslateSD(fault, regime, ss, address<31:0>,
                                                         acctype, aligned, iswrite, ispriv);
    else
        (fault, addrdesc) = AArch64.S1Translate(fault, regime, ss, address, acctype,
                                                aligned, iswrite, ispriv);

    if stage == TranslationStage_12 && fault.statuscode == Fault_None then
        boolean s2fs1walk;
        boolean s1aarch64;
        if ELUsingAArch32(EL1) && regime == Regime_EL10 && EL2Enabled() then
            addrdesc.vaddress = ZeroExtend(address);
            s2fs1walk = FALSE;
            (fault, addrdesc) = AArch32.S2Translate(fault, addrdesc, ss, s2fs1walk,
                                                    acctype, aligned, iswrite, ispriv);
        elsif regime == Regime_EL10 && EL2Enabled() then
            s1aarch64 = TRUE;
            s2fs1walk = FALSE;
            (fault, addrdesc) = AArch64.S2Translate(fault, addrdesc, s1aarch64, ss, s2fs1walk,
                                                    acctype, aligned, iswrite, ispriv);

    is_ATS1Ex = stage != TranslationStage_12;
    if fault.statuscode != Fault_None then
        addrdesc = CreateFaultyAddressDescriptor(address, fault);
        // Take an exception on:
        // * A Synchronous external abort occurs on translation table walk
        // * A stage 2 fault occurs on a stage 1 walk
        if IsExternalAbort(fault) || (PSTATE.EL == EL1 && fault.s2fs1walk) then
            PAR_EL1 = bits(64) UNKNOWN;
            AArch64.Abort(address, addrdesc.fault);

    AArch64.EncodePAR(regime, addrdesc);
    return;

Library pseudocode for aarch64/instrs/system/sysops/at/AArch64.EncodePAR

// AArch64.EncodePAR()
// ===================
// Encode PAR register with result of translation.

AArch64.EncodePAR(Regime regime, AddressDescriptor addrdesc)
    PAR_EL1 = Zeros();
    paspace = addrdesc.paddress.paspace;

    if !IsFault(addrdesc) then
        PAR_EL1.F = '0';
        PAR_EL1<11> = '1';    // RES1
        if SecurityStateForRegime(regime) == SS_Secure then
            PAR_EL1.NS = if paspace == PAS_Secure then '0' else '1';
        else
            PAR_EL1.NS = bit UNKNOWN;
        PAR_EL1.SH = ReportedPARShareability(PAREncodeShareability(addrdesc.memattrs));
        PAR_EL1.PA = addrdesc.paddress.address<52-1:12>;
        PAR_EL1.ATTR = ReportedPARAttrs(EncodePARAttrs(addrdesc.memattrs));
        PAR_EL1<10> = bit IMPLEMENTATION_DEFINED "Non-Faulting PAR";
    else
        PAR_EL1.F = '1';
        PAR_EL1.FST = AArch64.PARFaultStatus(addrdesc.fault);
        PAR_EL1.PTW = if addrdesc.fault.s2fs1walk then '1' else '0';
        PAR_EL1.S = if addrdesc.fault.secondstage then '1' else '0';
        PAR_EL1<11> = '1';    // RES1
        PAR_EL1<63:48> = bits(16) IMPLEMENTATION_DEFINED "Faulting PAR";
    return;

Library pseudocode for aarch64/instrs/system/sysops/at/AArch64.PARFaultStatus

// AArch64.PARFaultStatus()
// ========================
// Fault status field decoding of 64-bit PAR.

bits(6) AArch64.PARFaultStatus(FaultRecord fault)
    bits(6) fst;
    if fault.statuscode == Fault_Domain then
        // Report Domain fault
        assert fault.level IN {1,2};
        fst<1:0> = if fault.level == 1 then '01' else '10';
        fst<5:2> = '1111';
    else
        fst = EncodeLDFSC(fault.statuscode, fault.level);
    return fst;

Library pseudocode for aarch64/instrs/system/sysops/dc/AArch64.DC

// AArch64.DC()
// ============
// Perform Data Cache Operation.

AArch64.DC(bits(64) regval, CacheType cachetype, CacheOp cacheop, CacheOpScope opscope_in)
    CacheOpScope opscope = opscope_in;
    AccType acctype = AccType_DC;
    CacheRecord cache;

    cache.acctype = acctype;
    cache.cachetype = cachetype;
    cache.cacheop = cacheop;
    cache.opscope = opscope;

    if opscope == CacheOpScope_SetWay then
        ss = SecurityStateAtEL(PSTATE.EL);
        cache.cpas = CPASAtSecurityState(ss);
        cache.shareability = Shareability_NSH;
        (cache.set, cache.way, cache.level) = DecodeSW(regval, cachetype);
        if (cacheop == CacheOp_Invalidate && PSTATE.EL == EL1 && EL2Enabled() &&
                (HCR_EL2.SWIO == '1' || HCR_EL2.<DC,VM> != '00')) then
            cache.cacheop = CacheOp_CleanInvalidate;
        CACHE_OP(cache);
        return;

    if EL2Enabled() && !IsInHost() then
        if PSTATE.EL IN {EL0, EL1} then
            cache.is_vmid_valid = TRUE;
            cache.vmid = VMID[];
        else
            cache.is_vmid_valid = FALSE;
    else
        cache.is_vmid_valid = FALSE;

    if PSTATE.EL == EL0 then
        cache.is_asid_valid = TRUE;
        cache.asid = ASID[];
    else
        cache.is_asid_valid = FALSE;

    if opscope == CacheOpScope_PoDP && boolean IMPLEMENTATION_DEFINED "Memory system does not support PoDP" then
        opscope = CacheOpScope_PoP;
    if opscope == CacheOpScope_PoP && boolean IMPLEMENTATION_DEFINED "Memory system does not support PoP" then
        opscope = CacheOpScope_PoC;

    need_translate = DCInstNeedsTranslation(opscope);
    iswrite = cacheop == CacheOp_Invalidate;
    vaddress = regval;
    size = 0;    // by default no watchpoint address
    if iswrite then
        size = integer IMPLEMENTATION_DEFINED "Data Cache Invalidate Watchpoint Size";
        assert size >= 4*(2^(UInt(CTR_EL0.DminLine))) && size <= 2048;
        assert UInt(size<32:0> AND (size-1)<32:0>) == 0;    // size is power of 2
        vaddress = Align(regval, size);

    cache.translated = need_translate;
    cache.vaddress = vaddress;

    if need_translate then
        wasaligned = TRUE;
        memaddrdesc = AArch64.TranslateAddress(vaddress, acctype, iswrite, wasaligned, size);
        if IsFault(memaddrdesc) then
            AArch64.Abort(regval, memaddrdesc.fault);
        memattrs = memaddrdesc.memattrs;
        cache.paddress = memaddrdesc.paddress;
        cache.cpas = CPASAtPAS(memaddrdesc.paddress.paspace);
        if opscope IN {CacheOpScope_PoC, CacheOpScope_PoP, CacheOpScope_PoDP} then
            cache.shareability = memattrs.shareability;
        else
            cache.shareability = Shareability_NSH;
    else
        cache.shareability = Shareability UNKNOWN;
        cache.paddress = FullAddress UNKNOWN;

    if cacheop == CacheOp_Invalidate && PSTATE.EL == EL1 && EL2Enabled() && HCR_EL2.<DC,VM> != '00' then
        cache.cacheop = CacheOp_CleanInvalidate;

    CACHE_OP(cache);
    return;

Library pseudocode for aarch64/instrs/system/sysops/dc/AArch64.MemZero

// AArch64.MemZero()
// =================

AArch64.MemZero(bits(64) regval, CacheType cachetype)
    AccType acctype = AccType_DCZVA;
    boolean iswrite = TRUE;
    boolean wasaligned = TRUE;
    integer size = 4*(2^(UInt(DCZID_EL0.BS)));
    bits(64) vaddress = Align(regval, size);

    memaddrdesc = AArch64.TranslateAddress(vaddress, acctype, iswrite, wasaligned, size);
    if IsFault(memaddrdesc) then
        if IsDebugException(memaddrdesc.fault) then
            AArch64.Abort(vaddress, memaddrdesc.fault);
        else
            AArch64.Abort(regval, memaddrdesc.fault);
    else
        if cachetype == CacheType_Data then
            AArch64.DataMemZero(regval, vaddress, memaddrdesc, size);
        elsif cachetype == CacheType_Tag then
            if HaveMTEExt() then
                AArch64.TagMemZero(vaddress, size);
        elsif cachetype == CacheType_Data_Tag then
            if HaveMTEExt() then
                AArch64.TagMemZero(vaddress, size);
            AArch64.DataMemZero(regval, vaddress, memaddrdesc, size);
    return;
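
The block-size computation above is worth spelling out: DCZID_EL0.BS is a log2 word count, so the byte size is 4 * 2^BS, and the input address is aligned down to that power-of-two size. A minimal C sketch (function name ours):

    #include <stdint.h>

    uint64_t zva_base_address(uint64_t regval, unsigned dczid_bs)
    {
        uint64_t size = 4ull << dczid_bs;    /* 4 * 2^BS bytes */
        return regval & ~(size - 1);         /* Align(regval, size) */
    }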

Library pseudocode for aarch64/instrs/system/sysops/ic/AArch64.IC

// AArch64.IC()
// ============
// Perform Instruction Cache Operation.

AArch64.IC(CacheOpScope opscope)
    regval = bits(64) UNKNOWN;
    AArch64.IC(regval, opscope);

// AArch64.IC()
// ============
// Perform Instruction Cache Operation.

AArch64.IC(bits(64) regval, CacheOpScope opscope)
    CacheRecord cache;
    AccType acctype = AccType_IC;

    cache.acctype = acctype;
    cache.cachetype = CacheType_Instruction;
    cache.cacheop = CacheOp_Invalidate;
    cache.opscope = opscope;

    if opscope IN {CacheOpScope_ALLU, CacheOpScope_ALLUIS} then
        ss = SecurityStateAtEL(PSTATE.EL);
        cache.cpas = CPASAtSecurityState(ss);
        if (opscope == CacheOpScope_ALLUIS ||
              (opscope == CacheOpScope_ALLU && PSTATE.EL == EL1 &&
               EL2Enabled() && HCR_EL2.FB == '1')) then
            cache.shareability = Shareability_ISH;
        else
            cache.shareability = Shareability_NSH;
        cache.regval = regval;
        CACHE_OP(cache);
    else
        assert opscope == CacheOpScope_PoU;
        if EL2Enabled() && !IsInHost() then
            if PSTATE.EL IN {EL0, EL1} then
                cache.is_vmid_valid = TRUE;
                cache.vmid = VMID[];
            else
                cache.is_vmid_valid = FALSE;
        else
            cache.is_vmid_valid = FALSE;

        if PSTATE.EL == EL0 then
            cache.is_asid_valid = TRUE;
            cache.asid = ASID[];
        else
            cache.is_asid_valid = FALSE;

        bits(64) vaddress = regval;
        need_translate = ICInstNeedsTranslation(opscope);
        cache.vaddress = regval;
        cache.shareability = Shareability_NSH;
        cache.translated = need_translate;

        if !need_translate then
            cache.paddress = FullAddress UNKNOWN;
            CACHE_OP(cache);
            return;

        iswrite = FALSE;
        wasaligned = TRUE;
        size = 0;
        memaddrdesc = AArch64.TranslateAddress(vaddress, acctype, iswrite, wasaligned, size);
        if IsFault(memaddrdesc) then
            AArch64.Abort(regval, memaddrdesc.fault);

        cache.cpas = CPASAtPAS(memaddrdesc.paddress.paspace);
        cache.paddress = memaddrdesc.paddress;
        CACHE_OP(cache);
    return;

Library pseudocode for aarch64/instrs/system/sysops/predictionrestrict/RestrictPrediction

// RestrictPrediction()
// ====================
// Clear all predictions in the context.

AArch64.RestrictPrediction(bits(64) val, RestrictType restriction)
    ExecutionCntxt c;
    target_el = val<25:24>;

    // If the instruction is executed at an EL lower than the specified
    // level, it is treated as a NOP.
    if UInt(target_el) > UInt(PSTATE.EL) then return;

    bit ns = val<26>;
    ss = TargetSecurityState(ns);
    c.security = ss;
    c.target_el = target_el;

    if EL2Enabled() && !IsInHost() then
        if PSTATE.EL IN {EL0, EL1} then
            c.is_vmid_valid = TRUE;
            c.all_vmid = FALSE;
            c.vmid = VMID[];
        elsif target_el IN {EL0, EL1} then
            c.is_vmid_valid = TRUE;
            c.all_vmid = val<48> == '1';
            c.vmid = val<47:32>;    // Only valid if val<48> == '0'
        else
            c.is_vmid_valid = FALSE;
    else
        c.is_vmid_valid = FALSE;

    if PSTATE.EL == EL0 then
        c.is_asid_valid = TRUE;
        c.all_asid = FALSE;
        c.asid = ASID[];
    elsif target_el == EL0 then
        c.is_asid_valid = TRUE;
        c.all_asid = val<16> == '1';
        c.asid = val<15:0>;    // Only valid if val<16> == '0'
    else
        c.is_asid_valid = FALSE;

    c.restriction = restriction;
    RESTRICT_PREDICTIONS(c);

Library pseudocode for aarch64/instrs/system/sysops/sysop/SysOp

// SysOp()
// =======

SystemOp SysOp(bits(3) op1, bits(4) CRn, bits(4) CRm, bits(3) op2)
    case op1:CRn:CRm:op2 of
        when '000 0111 1000 000' return Sys_AT;      // S1E1R
        when '100 0111 1000 000' return Sys_AT;      // S1E2R
        when '110 0111 1000 000' return Sys_AT;      // S1E3R
        when '000 0111 1000 001' return Sys_AT;      // S1E1W
        when '100 0111 1000 001' return Sys_AT;      // S1E2W
        when '110 0111 1000 001' return Sys_AT;      // S1E3W
        when '000 0111 1000 010' return Sys_AT;      // S1E0R
        when '000 0111 1000 011' return Sys_AT;      // S1E0W
        when '100 0111 1000 100' return Sys_AT;      // S12E1R
        when '100 0111 1000 101' return Sys_AT;      // S12E1W
        when '100 0111 1000 110' return Sys_AT;      // S12E0R
        when '100 0111 1000 111' return Sys_AT;      // S12E0W
        when '011 0111 0100 001' return Sys_DC;      // ZVA
        when '000 0111 0110 001' return Sys_DC;      // IVAC
        when '000 0111 0110 010' return Sys_DC;      // ISW
        when '011 0111 1010 001' return Sys_DC;      // CVAC
        when '000 0111 1010 010' return Sys_DC;      // CSW
        when '011 0111 1011 001' return Sys_DC;      // CVAU
        when '011 0111 1110 001' return Sys_DC;      // CIVAC
        when '000 0111 1110 010' return Sys_DC;      // CISW
        when '011 0111 1101 001' return Sys_DC;      // CVADP
        when '000 0111 0001 000' return Sys_IC;      // IALLUIS
        when '000 0111 0101 000' return Sys_IC;      // IALLU
        when '011 0111 0101 001' return Sys_IC;      // IVAU
        when '100 1000 0000 001' return Sys_TLBI;    // IPAS2E1IS
        when '100 1000 0000 101' return Sys_TLBI;    // IPAS2LE1IS
        when '000 1000 0011 000' return Sys_TLBI;    // VMALLE1IS
        when '100 1000 0011 000' return Sys_TLBI;    // ALLE2IS
        when '110 1000 0011 000' return Sys_TLBI;    // ALLE3IS
        when '000 1000 0011 001' return Sys_TLBI;    // VAE1IS
        when '100 1000 0011 001' return Sys_TLBI;    // VAE2IS
        when '110 1000 0011 001' return Sys_TLBI;    // VAE3IS
        when '000 1000 0011 010' return Sys_TLBI;    // ASIDE1IS
        when '000 1000 0011 011' return Sys_TLBI;    // VAAE1IS
        when '100 1000 0011 100' return Sys_TLBI;    // ALLE1IS
        when '000 1000 0011 101' return Sys_TLBI;    // VALE1IS
        when '100 1000 0011 101' return Sys_TLBI;    // VALE2IS
        when '110 1000 0011 101' return Sys_TLBI;    // VALE3IS
        when '100 1000 0011 110' return Sys_TLBI;    // VMALLS12E1IS
        when '000 1000 0011 111' return Sys_TLBI;    // VAALE1IS
        when '100 1000 0100 001' return Sys_TLBI;    // IPAS2E1
        when '100 1000 0100 101' return Sys_TLBI;    // IPAS2LE1
        when '000 1000 0111 000' return Sys_TLBI;    // VMALLE1
        when '100 1000 0111 000' return Sys_TLBI;    // ALLE2
        when '110 1000 0111 000' return Sys_TLBI;    // ALLE3
        when '000 1000 0111 001' return Sys_TLBI;    // VAE1
        when '100 1000 0111 001' return Sys_TLBI;    // VAE2
        when '110 1000 0111 001' return Sys_TLBI;    // VAE3
        when '000 1000 0111 010' return Sys_TLBI;    // ASIDE1
        when '000 1000 0111 011' return Sys_TLBI;    // VAAE1
        when '100 1000 0111 100' return Sys_TLBI;    // ALLE1
        when '000 1000 0111 101' return Sys_TLBI;    // VALE1
        when '100 1000 0111 101' return Sys_TLBI;    // VALE2
        when '110 1000 0111 101' return Sys_TLBI;    // VALE3
        when '100 1000 0111 110' return Sys_TLBI;    // VMALLS12E1
        when '000 1000 0111 111' return Sys_TLBI;    // VAALE1
    return Sys_SYS;

Library pseudocode for aarch64/instrs/system/sysops/tlbi/AArch32.DTLBI_ALL

// AArch32.DTLBI_ALL()
// ===================
// Invalidate all data TLB entries for the indicated translation regime with the
// indicated security state for all TLBs within the indicated shareability domain.
// Invalidation applies to all applicable stage 1 and stage 2 entries.
// The indicated attr defines the attributes of the memory operations that must be completed in
// order to deem this operation to be completed.

AArch32.DTLBI_ALL(SecurityState security, Regime regime, Shareability shareability, TLBIMemAttr attr)
    assert PSTATE.EL IN {EL3, EL2, EL1};
    TLBIRecord r;
    r.op           = TLBIOp_DALL;
    r.from_aarch64 = FALSE;
    r.security     = security;
    r.regime       = regime;
    r.level        = TLBILevel_Any;
    r.attr         = attr;
    TLBI(r);
    if shareability != Shareability_NSH then
        Broadcast(shareability, r);
    return;

Library pseudocode for aarch64/instrs/system/sysops/tlbi/AArch32.DTLBI_ASID

// AArch32.DTLBI_ASID()
// ====================
// Invalidate all data TLB stage 1 entries matching the indicated VMID (where regime supports)
// and ASID in the parameter Rt in the indicated translation regime with the
// indicated security state for all TLBs within the indicated shareability domain.
// Note: stage 1 and stage 2 combined entries are in the scope of this operation.
// The indicated attr defines the attributes of the memory operations that must be completed in
// order to deem this operation to be completed.
// When attr is TLBI_ExcludeXS, only operations with XS=0 within the scope of this TLB operation
// are required to complete.

AArch32.DTLBI_ASID(SecurityState security, Regime regime, bits(16) vmid,
                   Shareability shareability, TLBIMemAttr attr, bits(32) Rt)
    assert PSTATE.EL IN {EL3, EL2, EL1};
    TLBIRecord r;
    r.op           = TLBIOp_DASID;
    r.from_aarch64 = FALSE;
    r.security     = security;
    r.regime       = regime;
    r.vmid         = vmid;
    r.level        = TLBILevel_Any;
    r.attr         = attr;
    r.asid         = Zeros(8) : Rt<7:0>;
    TLBI(r);
    if shareability != Shareability_NSH then
        Broadcast(shareability, r);
    return;

Library pseudocode for aarch64/instrs/system/sysops/tlbi/AArch32.DTLBI_VA

// AArch32.DTLBI_VA()
// ==================
// Invalidate by VA all stage 1 data TLB entries in the indicated shareability domain
// matching the indicated VMID and ASID (where regime supports VMID, ASID) in the indicated regime
// with the indicated security state.
// ASID, VA and related parameters are derived from Rt.
// Note: stage 1 and stage 2 combined entries are in the scope of this operation.
// When the indicated level is
//     TLBILevel_Any  : this applies to TLB entries at all levels
//     TLBILevel_Last : this applies to TLB entries at last level only
// The indicated attr defines the attributes of the memory operations that must be completed in
// order to deem this operation to be completed.
// When attr is TLBI_ExcludeXS, only operations with XS=0 within the scope of this TLB operation
// are required to complete.

AArch32.DTLBI_VA(SecurityState security, Regime regime, bits(16) vmid,
                 Shareability shareability, TLBILevel level, TLBIMemAttr attr, bits(32) Rt)
    assert PSTATE.EL IN {EL3, EL2, EL1};
    TLBIRecord r;
    r.op           = TLBIOp_DVA;
    r.from_aarch64 = FALSE;
    r.security     = security;
    r.regime       = regime;
    r.vmid         = vmid;
    r.level        = level;
    r.attr         = attr;
    r.asid         = Zeros(8) : Rt<7:0>;
    r.address      = Zeros(32) : Rt<31:12> : Zeros(12);
    TLBI(r);
    if shareability != Shareability_NSH then
        Broadcast(shareability, r);
    return;
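Illustration (not part of the library pseudocode): for the MVA-style AArch32 operations above, Rt carries both the ASID and the virtual page address. A C sketch of the unpacking, with hypothetical names:

    #include <stdint.h>

    /* Unpack an Rt value as consumed by AArch32.DTLBI_VA()/ITLBI_VA()/TLBI_VA():
       ASID from Rt<7:0>, VA page base from Rt<31:12>. */
    static void unpack_tlbimva(uint32_t rt, uint16_t *asid, uint64_t *va)
    {
        *asid = (uint16_t)(rt & 0xFFu);        /* r.asid = Zeros(8) : Rt<7:0> */
        *va   = (uint64_t)(rt & 0xFFFFF000u);  /* r.address = Rt<31:12> : Zeros(12) */
    }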

Library pseudocode for aarch64/instrs/system/sysops/tlbi/AArch32.ITLBI_ALL

// AArch32.ITLBI_ALL()
// ===================
// Invalidate all instruction TLB entries for the indicated translation regime with the
// indicated security state for all TLBs within the indicated shareability domain.
// Invalidation applies to all applicable stage 1 and stage 2 entries.
// The indicated attr defines the attributes of the memory operations that must be completed in
// order to deem this operation to be completed.

AArch32.ITLBI_ALL(SecurityState security, Regime regime, Shareability shareability, TLBIMemAttr attr)
    assert PSTATE.EL IN {EL3, EL2, EL1};
    TLBIRecord r;
    r.op           = TLBIOp_IALL;
    r.from_aarch64 = FALSE;
    r.security     = security;
    r.regime       = regime;
    r.level        = TLBILevel_Any;
    r.attr         = attr;
    TLBI(r);
    if shareability != Shareability_NSH then
        Broadcast(shareability, r);
    return;

Library pseudocode for aarch64/instrs/system/sysops/tlbi/AArch32.ITLBI_ASID

// AArch32.ITLBI_ASID()
// ====================
// Invalidate all instruction TLB stage 1 entries matching the indicated VMID (where regime supports)
// and ASID in the parameter Rt in the indicated translation regime with the
// indicated security state for all TLBs within the indicated shareability domain.
// Note: stage 1 and stage 2 combined entries are in the scope of this operation.
// The indicated attr defines the attributes of the memory operations that must be completed in
// order to deem this operation to be completed.
// When attr is TLBI_ExcludeXS, only operations with XS=0 within the scope of this TLB operation
// are required to complete.

AArch32.ITLBI_ASID(SecurityState security, Regime regime, bits(16) vmid,
                   Shareability shareability, TLBIMemAttr attr, bits(32) Rt)
    assert PSTATE.EL IN {EL3, EL2, EL1};
    TLBIRecord r;
    r.op           = TLBIOp_IASID;
    r.from_aarch64 = FALSE;
    r.security     = security;
    r.regime       = regime;
    r.vmid         = vmid;
    r.level        = TLBILevel_Any;
    r.attr         = attr;
    r.asid         = Zeros(8) : Rt<7:0>;
    TLBI(r);
    if shareability != Shareability_NSH then
        Broadcast(shareability, r);
    return;

Library pseudocode for aarch64/instrs/system/sysops/tlbi/AArch32.ITLBI_VA

// AArch32.ITLBI_VA()
// ==================
// Invalidate by VA all stage 1 instruction TLB entries in the indicated shareability domain
// matching the indicated VMID and ASID (where regime supports VMID, ASID) in the indicated regime
// with the indicated security state.
// ASID, VA and related parameters are derived from Rt.
// Note: stage 1 and stage 2 combined entries are in the scope of this operation.
// When the indicated level is
//     TLBILevel_Any  : this applies to TLB entries at all levels
//     TLBILevel_Last : this applies to TLB entries at last level only
// The indicated attr defines the attributes of the memory operations that must be completed in
// order to deem this operation to be completed.
// When attr is TLBI_ExcludeXS, only operations with XS=0 within the scope of this TLB operation
// are required to complete.

AArch32.ITLBI_VA(SecurityState security, Regime regime, bits(16) vmid,
                 Shareability shareability, TLBILevel level, TLBIMemAttr attr, bits(32) Rt)
    assert PSTATE.EL IN {EL3, EL2, EL1};
    TLBIRecord r;
    r.op           = TLBIOp_IVA;
    r.from_aarch64 = FALSE;
    r.security     = security;
    r.regime       = regime;
    r.vmid         = vmid;
    r.level        = level;
    r.attr         = attr;
    r.asid         = Zeros(8) : Rt<7:0>;
    r.address      = Zeros(32) : Rt<31:12> : Zeros(12);
    TLBI(r);
    if shareability != Shareability_NSH then
        Broadcast(shareability, r);
    return;

Library pseudocode for aarch64/instrs/system/sysops/tlbi/AArch32.TLBI_ALL

// AArch32.TLBI_ALL()
// ==================
// Invalidate all entries for the indicated translation regime with the
// indicated security state for all TLBs within the indicated shareability domain.
// Invalidation applies to all applicable stage 1 and stage 2 entries.
// The indicated attr defines the attributes of the memory operations that must be completed in
// order to deem this operation to be completed.
// When attr is TLBI_ExcludeXS, only operations with XS=0 within the scope of this TLB operation
// are required to complete.

AArch32.TLBI_ALL(SecurityState security, Regime regime, Shareability shareability, TLBIMemAttr attr)
    assert PSTATE.EL IN {EL3, EL2};
    TLBIRecord r;
    r.op           = TLBIOp_ALL;
    r.from_aarch64 = FALSE;
    r.security     = security;
    r.regime       = regime;
    r.level        = TLBILevel_Any;
    r.attr         = attr;
    TLBI(r);
    if shareability != Shareability_NSH then
        Broadcast(shareability, r);
    return;

Library pseudocode for aarch64/instrs/system/sysops/tlbi/AArch32.TLBI_ASID

// AArch32.TLBI_ASID()
// ===================
// Invalidate all stage 1 entries matching the indicated VMID (where regime supports)
// and ASID in the parameter Rt in the indicated translation regime with the
// indicated security state for all TLBs within the indicated shareability domain.
// Note: stage 1 and stage 2 combined entries are in the scope of this operation.
// The indicated attr defines the attributes of the memory operations that must be completed in
// order to deem this operation to be completed.
// When attr is TLBI_ExcludeXS, only operations with XS=0 within the scope of this TLB operation
// are required to complete.

AArch32.TLBI_ASID(SecurityState security, Regime regime, bits(16) vmid,
                  Shareability shareability, TLBIMemAttr attr, bits(32) Rt)
    assert PSTATE.EL IN {EL3, EL2, EL1};
    TLBIRecord r;
    r.op           = TLBIOp_ASID;
    r.from_aarch64 = FALSE;
    r.security     = security;
    r.regime       = regime;
    r.vmid         = vmid;
    r.level        = TLBILevel_Any;
    r.attr         = attr;
    r.asid         = Zeros(8) : Rt<7:0>;
    TLBI(r);
    if shareability != Shareability_NSH then
        Broadcast(shareability, r);
    return;

Library pseudocode for aarch64/instrs/system/sysops/tlbi/AArch32.TLBI_IPAS2

// AArch32.TLBI_IPAS2()
// ====================
// Invalidate by IPA all stage 2 only TLB entries in the indicated shareability
// domain matching the indicated VMID in the indicated regime with the indicated security state.
// Note: stage 1 and stage 2 combined entries are not in the scope of this operation.
// IPA and related parameters are derived from Rt.
// When the indicated level is
//     TLBILevel_Any  : this applies to TLB entries at all levels
//     TLBILevel_Last : this applies to TLB entries at last level only
// The indicated attr defines the attributes of the memory operations that must be completed in
// order to deem this operation to be completed.
// When attr is TLBI_ExcludeXS, only operations with XS=0 within the scope of this TLB operation
// are required to complete.

AArch32.TLBI_IPAS2(SecurityState security, Regime regime, bits(16) vmid,
                   Shareability shareability, TLBILevel level, TLBIMemAttr attr, bits(32) Rt)
    assert PSTATE.EL IN {EL3, EL2};
    assert security == SS_NonSecure;
    TLBIRecord r;
    r.op           = TLBIOp_IPAS2;
    r.from_aarch64 = FALSE;
    r.security     = security;
    r.regime       = regime;
    r.vmid         = vmid;
    r.level        = level;
    r.attr         = attr;
    r.address      = Zeros(24) : Rt<27:0> : Zeros(12);
    r.ipaspace     = PAS_NonSecure;
    TLBI(r);
    if shareability != Shareability_NSH then
        Broadcast(shareability, r);
    return;

Library pseudocode for aarch64/instrs/system/sysops/tlbi/AArch32.TLBI_VA

// AArch32.TLBI_VA()
// =================
// Invalidate by VA all stage 1 TLB entries in the indicated shareability domain
// matching the indicated VMID and ASID (where regime supports VMID, ASID) in the indicated regime
// with the indicated security state.
// ASID, VA and related parameters are derived from Rt.
// Note: stage 1 and stage 2 combined entries are in the scope of this operation.
// When the indicated level is
//     TLBILevel_Any  : this applies to TLB entries at all levels
//     TLBILevel_Last : this applies to TLB entries at last level only
// The indicated attr defines the attributes of the memory operations that must be completed in
// order to deem this operation to be completed.
// When attr is TLBI_ExcludeXS, only operations with XS=0 within the scope of this TLB operation
// are required to complete.

AArch32.TLBI_VA(SecurityState security, Regime regime, bits(16) vmid,
                Shareability shareability, TLBILevel level, TLBIMemAttr attr, bits(32) Rt)
    assert PSTATE.EL IN {EL3, EL2, EL1};
    TLBIRecord r;
    r.op           = TLBIOp_VA;
    r.from_aarch64 = FALSE;
    r.security     = security;
    r.regime       = regime;
    r.vmid         = vmid;
    r.level        = level;
    r.attr         = attr;
    r.asid         = Zeros(8) : Rt<7:0>;
    r.address      = Zeros(32) : Rt<31:12> : Zeros(12);
    TLBI(r);
    if shareability != Shareability_NSH then
        Broadcast(shareability, r);
    return;

Library pseudocode for aarch64/instrs/system/sysops/tlbi/AArch32.TLBI_VAA

// AArch32.TLBI_VAA()
// ==================
// Invalidate by VA all stage 1 TLB entries in the indicated shareability domain
// matching the indicated VMID (where regime supports VMID) and all ASID in the indicated regime
// with the indicated security state.
// VA and related parameters are derived from Rt.
// Note: stage 1 and stage 2 combined entries are in the scope of this operation.
// When the indicated level is
//     TLBILevel_Any  : this applies to TLB entries at all levels
//     TLBILevel_Last : this applies to TLB entries at last level only
// The indicated attr defines the attributes of the memory operations that must be completed in
// order to deem this operation to be completed.
// When attr is TLBI_ExcludeXS, only operations with XS=0 within the scope of this TLB operation
// are required to complete.

AArch32.TLBI_VAA(SecurityState security, Regime regime, bits(16) vmid,
                 Shareability shareability, TLBILevel level, TLBIMemAttr attr, bits(32) Rt)
    assert PSTATE.EL IN {EL3, EL2, EL1};
    TLBIRecord r;
    r.op           = TLBIOp_VAA;
    r.from_aarch64 = FALSE;
    r.security     = security;
    r.regime       = regime;
    r.vmid         = vmid;
    r.level        = level;
    r.attr         = attr;
    r.address      = Zeros(32) : Rt<31:12> : Zeros(12);
    TLBI(r);
    if shareability != Shareability_NSH then
        Broadcast(shareability, r);
    return;

Library pseudocode for aarch64/instrs/system/sysops/tlbi/AArch32.TLBI_VMALL

// AArch32.TLBI_VMALL()
// ====================
// Invalidate all stage 1 entries for the indicated translation regime with the
// indicated security state for all TLBs within the indicated shareability
// domain that match the indicated VMID (where applicable).
// Note: stage 1 and stage 2 combined entries are in the scope of this operation.
// Note: stage 2 only entries are not in the scope of this operation.
// The indicated attr defines the attributes of the memory operations that must be completed in
// order to deem this operation to be completed.
// When attr is TLBI_ExcludeXS, only operations with XS=0 within the scope of this TLB operation
// are required to complete.

AArch32.TLBI_VMALL(SecurityState security, Regime regime, bits(16) vmid,
                   Shareability shareability, TLBIMemAttr attr)
    assert PSTATE.EL IN {EL3, EL2, EL1};
    TLBIRecord r;
    r.op           = TLBIOp_VMALL;
    r.from_aarch64 = FALSE;
    r.security     = security;
    r.regime       = regime;
    r.level        = TLBILevel_Any;
    r.vmid         = vmid;
    r.attr         = attr;
    TLBI(r);
    if shareability != Shareability_NSH then
        Broadcast(shareability, r);
    return;

Library pseudocode for aarch64/instrs/system/sysops/tlbi/AArch32.TLBI_VMALLS12

// AArch32.TLBI_VMALLS12()
// =======================
// Invalidate all stage 1 and stage 2 entries for the indicated translation
// regime with the indicated security state for all TLBs within the indicated
// shareability domain that match the indicated VMID.
// The indicated attr defines the attributes of the memory operations that must be completed in
// order to deem this operation to be completed.
// When attr is TLBI_ExcludeXS, only operations with XS=0 within the scope of this TLB operation
// are required to complete.

AArch32.TLBI_VMALLS12(SecurityState security, Regime regime, bits(16) vmid,
                      Shareability shareability, TLBIMemAttr attr)
    assert PSTATE.EL IN {EL3, EL2};
    TLBIRecord r;
    r.op           = TLBIOp_VMALLS12;
    r.from_aarch64 = FALSE;
    r.security     = security;
    r.regime       = regime;
    r.level        = TLBILevel_Any;
    r.vmid         = vmid;
    r.attr         = attr;
    TLBI(r);
    if shareability != Shareability_NSH then
        Broadcast(shareability, r);
    return;

Library pseudocode for aarch64/instrs/system/sysops/tlbi/AArch64.TLBI_ALL

// AArch64.TLBI_ALL()
// ==================
// Invalidate all entries for the indicated translation regime with the
// indicated security state for all TLBs within the indicated shareability domain.
// Invalidation applies to all applicable stage 1 and stage 2 entries.
// The indicated attr defines the attributes of the memory operations that must be completed in
// order to deem this operation to be completed.
// When attr is TLBI_ExcludeXS, only operations with XS=0 within the scope of this TLB operation
// are required to complete.

AArch64.TLBI_ALL(SecurityState security, Regime regime, Shareability shareability, TLBIMemAttr attr)
    assert PSTATE.EL IN {EL3, EL2};
    TLBIRecord r;
    r.op           = TLBIOp_ALL;
    r.from_aarch64 = TRUE;
    r.security     = security;
    r.regime       = regime;
    r.level        = TLBILevel_Any;
    r.attr         = attr;
    TLBI(r);
    if shareability != Shareability_NSH then
        Broadcast(shareability, r);
    return;

Library pseudocode for aarch64/instrs/system/sysops/tlbi/AArch64.TLBI_ASID

// AArch64.TLBI_ASID()
// ===================
// Invalidate all stage 1 entries matching the indicated VMID (where regime supports)
// and ASID in the parameter Xt in the indicated translation regime with the
// indicated security state for all TLBs within the indicated shareability domain.
// Note: stage 1 and stage 2 combined entries are in the scope of this operation.
// The indicated attr defines the attributes of the memory operations that must be completed in
// order to deem this operation to be completed.
// When attr is TLBI_ExcludeXS, only operations with XS=0 within the scope of this TLB operation
// are required to complete.

AArch64.TLBI_ASID(SecurityState security, Regime regime, bits(16) vmid,
                  Shareability shareability, TLBIMemAttr attr, bits(64) Xt)
    assert PSTATE.EL IN {EL3, EL2, EL1};
    TLBIRecord r;
    r.op           = TLBIOp_ASID;
    r.from_aarch64 = TRUE;
    r.security     = security;
    r.regime       = regime;
    r.vmid         = vmid;
    r.level        = TLBILevel_Any;
    r.attr         = attr;
    r.asid         = Xt<63:48>;
    TLBI(r);
    if shareability != Shareability_NSH then
        Broadcast(shareability, r);
    return;

Library pseudocode for aarch64/instrs/system/sysops/tlbi/AArch64.TLBI_IPAS2

// AArch64.TLBI_IPAS2()
// ====================
// Invalidate by IPA all stage 2 only TLB entries in the indicated shareability
// domain matching the indicated VMID in the indicated regime with the indicated security state.
// Note: stage 1 and stage 2 combined entries are not in the scope of this operation.
// IPA and related parameters are derived from Xt.
// When the indicated level is
//     TLBILevel_Any  : this applies to TLB entries at all levels
//     TLBILevel_Last : this applies to TLB entries at last level only
// The indicated attr defines the attributes of the memory operations that must be completed in
// order to deem this operation to be completed.
// When attr is TLBI_ExcludeXS, only operations with XS=0 within the scope of this TLB operation
// are required to complete.

AArch64.TLBI_IPAS2(SecurityState security, Regime regime, bits(16) vmid,
                   Shareability shareability, TLBILevel level, TLBIMemAttr attr, bits(64) Xt)
    assert PSTATE.EL IN {EL3, EL2};
    TLBIRecord r;
    r.op           = TLBIOp_IPAS2;
    r.from_aarch64 = TRUE;
    r.security     = security;
    r.regime       = regime;
    r.vmid         = vmid;
    r.level        = level;
    r.attr         = attr;
    r.address      = ZeroExtend(Xt<39:0> : Zeros(12));
    case security of
        when SS_NonSecure
            r.ipaspace = PAS_NonSecure;
        when SS_Secure
            r.ipaspace = if Xt<63> == '1' then PAS_NonSecure else PAS_Secure;
    TLBI(r);
    if shareability != Shareability_NSH then
        Broadcast(shareability, r);
    return;
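Illustration (not part of the library pseudocode): a C sketch of the Xt unpacking performed above, with hypothetical names. Bits 39:0 carry the IPA page number, and in Secure state bit 63 selects the Non-secure IPA space:

    #include <stdbool.h>
    #include <stdint.h>

    /* Unpack the Xt operand of TLBI IPAS2E1(IS) as consumed by AArch64.TLBI_IPAS2(). */
    static void unpack_ipas2(uint64_t xt, uint64_t *ipa, bool *ns_ipa_space)
    {
        *ipa          = (xt & 0xFFFFFFFFFFull) << 12;  /* ZeroExtend(Xt<39:0> : Zeros(12)) */
        *ns_ipa_space = ((xt >> 63) & 1) != 0;         /* Xt<63>, used when security is Secure */
    }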

Library pseudocode for aarch64/instrs/system/sysops/tlbi/AArch64.TLBI_RIPAS2

// AArch64.TLBI_RIPAS2()
// =====================
// Range invalidate by IPA all stage 2 only TLB entries in the indicated
// shareability domain matching the indicated VMID in the indicated regime with the indicated
// security state.
// Note: stage 1 and stage 2 combined entries are not in the scope of this operation.
// The range of IPA and related parameters are derived from Xt.
// When the indicated level is
//     TLBILevel_Any  : this applies to TLB entries at all levels
//     TLBILevel_Last : this applies to TLB entries at last level only
// The indicated attr defines the attributes of the memory operations that must be completed in
// order to deem this operation to be completed.
// When attr is TLBI_ExcludeXS, only operations with XS=0 within the scope of this TLB operation
// are required to complete.

AArch64.TLBI_RIPAS2(SecurityState security, Regime regime, bits(16) vmid,
                    Shareability shareability, TLBILevel level, TLBIMemAttr attr, bits(64) Xt)
    assert PSTATE.EL IN {EL3, EL2};
    TLBIRecord r;
    r.op           = TLBIOp_RIPAS2;
    r.from_aarch64 = TRUE;
    r.security     = security;
    r.regime       = regime;
    r.vmid         = vmid;
    r.level        = level;
    r.attr         = attr;
    bits(2) tg       = Xt<47:46>;
    integer scale    = UInt(Xt<45:44>);
    integer num      = UInt(Xt<43:39>);
    integer baseaddr = SInt(Xt<36:0>);
    boolean valid;
    (valid, r.tg, r.address, r.end_address) = TLBIRange(regime, Xt);
    if !valid then return;
    case security of
        when SS_NonSecure
            r.ipaspace = PAS_NonSecure;
        when SS_Secure
            r.ipaspace = if Xt<63> == '1' then PAS_NonSecure else PAS_Secure;
    TLBI(r);
    if shareability != Shareability_NSH then
        Broadcast(shareability, r);
    return;

Library pseudocode for aarch64/instrs/system/sysops/tlbi/AArch64.TLBI_RVA

// AArch64.TLBI_RVA()
// ==================
// Range invalidate by VA range all stage 1 TLB entries in the indicated
// shareability domain matching the indicated VMID and ASID (where regime
// supports VMID, ASID) in the indicated regime with the indicated security state.
// ASID and range related parameters are derived from Xt.
// Note: stage 1 and stage 2 combined entries are in the scope of this operation.
// When the indicated level is
//     TLBILevel_Any  : this applies to TLB entries at all levels
//     TLBILevel_Last : this applies to TLB entries at last level only
// The indicated attr defines the attributes of the memory operations that must be completed in
// order to deem this operation to be completed.
// When attr is TLBI_ExcludeXS, only operations with XS=0 within the scope of this TLB operation
// are required to complete.

AArch64.TLBI_RVA(SecurityState security, Regime regime, bits(16) vmid,
                 Shareability shareability, TLBILevel level, TLBIMemAttr attr, bits(64) Xt)
    assert PSTATE.EL IN {EL3, EL2, EL1};
    TLBIRecord r;
    r.op           = TLBIOp_RVA;
    r.from_aarch64 = TRUE;
    r.security     = security;
    r.regime       = regime;
    r.vmid         = vmid;
    r.level        = level;
    r.attr         = attr;
    r.asid         = Xt<63:48>;
    boolean valid;
    (valid, r.tg, r.address, r.end_address) = TLBIRange(regime, Xt);
    if !valid then return;
    TLBI(r);
    if shareability != Shareability_NSH then
        Broadcast(shareability, r);
    return;

Library pseudocode for aarch64/instrs/system/sysops/tlbi/AArch64.TLBI_RVAA

// AArch64.TLBI_RVAA()
// ===================
// Range invalidate by VA range all stage 1 TLB entries in the indicated
// shareability domain matching the indicated VMID (where regime supports VMID)
// and all ASID in the indicated regime with the indicated security state.
// VA range related parameters are derived from Xt.
// Note: stage 1 and stage 2 combined entries are in the scope of this operation.
// When the indicated level is
//     TLBILevel_Any  : this applies to TLB entries at all levels
//     TLBILevel_Last : this applies to TLB entries at last level only
// The indicated attr defines the attributes of the memory operations that must be completed in
// order to deem this operation to be completed.
// When attr is TLBI_ExcludeXS, only operations with XS=0 within the scope of this TLB operation
// are required to complete.

AArch64.TLBI_RVAA(SecurityState security, Regime regime, bits(16) vmid,
                  Shareability shareability, TLBILevel level, TLBIMemAttr attr, bits(64) Xt)
    assert PSTATE.EL IN {EL3, EL2, EL1};
    TLBIRecord r;
    r.op           = TLBIOp_RVAA;
    r.from_aarch64 = TRUE;
    r.security     = security;
    r.regime       = regime;
    r.vmid         = vmid;
    r.level        = level;
    r.attr         = attr;
    bits(2) tg       = Xt<47:46>;
    integer scale    = UInt(Xt<45:44>);
    integer num      = UInt(Xt<43:39>);
    integer baseaddr = SInt(Xt<36:0>);
    boolean valid;
    (valid, r.tg, r.address, r.end_address) = TLBIRange(regime, Xt);
    if !valid then return;
    TLBI(r);
    if shareability != Shareability_NSH then
        Broadcast(shareability, r);
    return;

Library pseudocode for aarch64/instrs/system/sysops/tlbi/AArch64.TLBI_VA

// AArch64.TLBI_VA()
// =================
// Invalidate by VA all stage 1 TLB entries in the indicated shareability domain
// matching the indicated VMID and ASID (where regime supports VMID, ASID) in the indicated regime
// with the indicated security state.
// ASID, VA and related parameters are derived from Xt.
// Note: stage 1 and stage 2 combined entries are in the scope of this operation.
// When the indicated level is
//     TLBILevel_Any  : this applies to TLB entries at all levels
//     TLBILevel_Last : this applies to TLB entries at last level only
// The indicated attr defines the attributes of the memory operations that must be completed in
// order to deem this operation to be completed.
// When attr is TLBI_ExcludeXS, only operations with XS=0 within the scope of this TLB operation
// are required to complete.

AArch64.TLBI_VA(SecurityState security, Regime regime, bits(16) vmid,
                Shareability shareability, TLBILevel level, TLBIMemAttr attr, bits(64) Xt)
    assert PSTATE.EL IN {EL3, EL2, EL1};
    TLBIRecord r;
    r.op           = TLBIOp_VA;
    r.from_aarch64 = TRUE;
    r.security     = security;
    r.regime       = regime;
    r.vmid         = vmid;
    r.level        = level;
    r.attr         = attr;
    r.asid         = Xt<63:48>;
    r.address      = ZeroExtend(Xt<43:0> : Zeros(12));
    TLBI(r);
    if shareability != Shareability_NSH then
        Broadcast(shareability, r);
    return;
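Illustration (not part of the library pseudocode): a C sketch of the Xt unpacking performed above, with hypothetical names. The ASID travels in Xt<63:48> and the VA page number in Xt<43:0>:

    #include <stdint.h>

    /* Unpack the Xt operand of TLBI VAE1(IS) as consumed by AArch64.TLBI_VA(). */
    static void unpack_vae(uint64_t xt, uint16_t *asid, uint64_t *va)
    {
        *asid = (uint16_t)(xt >> 48);           /* r.asid = Xt<63:48> */
        *va   = (xt & 0xFFFFFFFFFFFull) << 12;  /* ZeroExtend(Xt<43:0> : Zeros(12)) */
    }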

Library pseudocode for aarch64/instrs/system/sysops/tlbi/AArch64.TLBI_VAA

// AArch64.TLBI_VAA()
// ==================
// Invalidate by VA all stage 1 TLB entries in the indicated shareability domain
// matching the indicated VMID (where regime supports VMID) and all ASID in the indicated regime
// with the indicated security state.
// VA and related parameters are derived from Xt.
// Note: stage 1 and stage 2 combined entries are in the scope of this operation.
// When the indicated level is
//     TLBILevel_Any  : this applies to TLB entries at all levels
//     TLBILevel_Last : this applies to TLB entries at last level only
// The indicated attr defines the attributes of the memory operations that must be completed in
// order to deem this operation to be completed.
// When attr is TLBI_ExcludeXS, only operations with XS=0 within the scope of this TLB operation
// are required to complete.

AArch64.TLBI_VAA(SecurityState security, Regime regime, bits(16) vmid,
                 Shareability shareability, TLBILevel level, TLBIMemAttr attr, bits(64) Xt)
    assert PSTATE.EL IN {EL3, EL2, EL1};
    TLBIRecord r;
    r.op           = TLBIOp_VAA;
    r.from_aarch64 = TRUE;
    r.security     = security;
    r.regime       = regime;
    r.vmid         = vmid;
    r.level        = level;
    r.attr         = attr;
    r.address      = ZeroExtend(Xt<43:0> : Zeros(12));
    TLBI(r);
    if shareability != Shareability_NSH then
        Broadcast(shareability, r);
    return;

Library pseudocode for aarch64/instrs/system/sysops/tlbi/AArch64.TLBI_VMALL

// AArch64.TLBI_VMALL()
// ====================
// Invalidate all stage 1 entries for the indicated translation regime with the
// indicated security state for all TLBs within the indicated shareability
// domain that match the indicated VMID (where applicable).
// Note: stage 1 and stage 2 combined entries are in the scope of this operation.
// Note: stage 2 only entries are not in the scope of this operation.
// The indicated attr defines the attributes of the memory operations that must be completed in
// order to deem this operation to be completed.
// When attr is TLBI_ExcludeXS, only operations with XS=0 within the scope of this TLB operation
// are required to complete.

AArch64.TLBI_VMALL(SecurityState security, Regime regime, bits(16) vmid,
                   Shareability shareability, TLBIMemAttr attr)
    assert PSTATE.EL IN {EL3, EL2, EL1};
    TLBIRecord r;
    r.op           = TLBIOp_VMALL;
    r.from_aarch64 = TRUE;
    r.security     = security;
    r.regime       = regime;
    r.level        = TLBILevel_Any;
    r.vmid         = vmid;
    r.attr         = attr;
    TLBI(r);
    if shareability != Shareability_NSH then
        Broadcast(shareability, r);
    return;

Library pseudocode for aarch64/instrs/system/sysops/tlbi/AArch64.TLBI_VMALLS12

// AArch64.TLBI_VMALLS12()
// =======================
// Invalidate all stage 1 and stage 2 entries for the indicated translation
// regime with the indicated security state for all TLBs within the indicated
// shareability domain that match the indicated VMID.
// The indicated attr defines the attributes of the memory operations that must be completed in
// order to deem this operation to be completed.
// When attr is TLBI_ExcludeXS, only operations with XS=0 within the scope of this TLB operation
// are required to complete.

AArch64.TLBI_VMALLS12(SecurityState security, Regime regime, bits(16) vmid,
                      Shareability shareability, TLBIMemAttr attr)
    assert PSTATE.EL IN {EL3, EL2};
    TLBIRecord r;
    r.op           = TLBIOp_VMALLS12;
    r.from_aarch64 = TRUE;
    r.security     = security;
    r.regime       = regime;
    r.level        = TLBILevel_Any;
    r.vmid         = vmid;
    r.attr         = attr;
    TLBI(r);
    if shareability != Shareability_NSH then
        Broadcast(shareability, r);
    return;

Library pseudocode for aarch64/instrs/system/sysops/tlbi/ASID_NONE

constant bits(16) ASID_NONE = Zeros();

Library pseudocode for aarch64/instrs/system/sysops/tlbi/Broadcast

// Broadcast()
// ===========
// IMPLEMENTATION DEFINED function to broadcast TLBI operation within the indicated shareability
// domain.

Broadcast(Shareability shareability, TLBIRecord r)
    IMPLEMENTATION_DEFINED;

Library pseudocode for aarch64/instrs/system/sysops/tlbi/DecodeTLBITG

// DecodeTLBITG()
// ==============
// Decode translation granule size in TLBI range instructions

TGx DecodeTLBITG(bits(2) tg)
    case tg of
        when '01' return TGx_4KB;
        when '10' return TGx_16KB;
        when '11' return TGx_64KB;

Library pseudocode for aarch64/instrs/system/sysops/tlbi/HasLargeAddress

// HasLargeAddress()
// =================
// Returns TRUE if the regime is configured for 52 bit addresses, FALSE otherwise.

boolean HasLargeAddress(Regime regime)
    if !Have52BitIPAAndPASpaceExt() then
        return FALSE;
    case regime of
        when Regime_EL3  return TCR_EL3<32> == '1';
        when Regime_EL2  return TCR_EL2<32> == '1';
        when Regime_EL20 return TCR_EL2<59> == '1';
        when Regime_EL10 return TCR_EL1<59> == '1';
        otherwise Unreachable();

Library pseudocode for aarch64/instrs/system/sysops/tlbi/TLBI

// TLBI()
// ======
// Performs the TLB maintenance operation on the TLB to invalidate the matching
// translation table entries.

TLBI(TLBIRecord r)
    IMPLEMENTATION_DEFINED;

Library pseudocode for aarch64/instrs/system/sysops/tlbi/TLBILevel

enumeration TLBILevel {TLBILevel_Any, TLBILevel_Last};

Library pseudocode for aarch64/instrs/system/sysops/tlbi/TLBIMatch

// TLBIMatch()
// ===========
// Determine whether the TLB entry lies within the scope of invalidation

boolean TLBIMatch(TLBIRecord tlbi, TLBRecord entry)
    boolean match;
    case tlbi.op of
        when TLBIOp_DALL, TLBIOp_IALL
            match = (tlbi.security == entry.context.ss &&
                     tlbi.regime == entry.context.regime);
        when TLBIOp_DASID, TLBIOp_IASID
            match = (entry.context.includes_s1 &&
                     tlbi.security == entry.context.ss &&
                     tlbi.regime == entry.context.regime &&
                     (!UseVMID(entry.context) || tlbi.vmid == entry.context.vmid) &&
                     (UseASID(entry.context) && entry.context.nG == '1' &&
                      tlbi.asid == entry.context.asid));
        when TLBIOp_DVA, TLBIOp_IVA
            match = (entry.context.includes_s1 &&
                     tlbi.security == entry.context.ss &&
                     tlbi.regime == entry.context.regime &&
                     (!UseVMID(entry.context) || tlbi.vmid == entry.context.vmid) &&
                     (!UseASID(entry.context) || tlbi.asid == entry.context.asid ||
                      entry.context.nG == '0') &&
                     tlbi.address<55:entry.blocksize> == entry.context.ia<55:entry.blocksize> &&
                     (tlbi.level == TLBILevel_Any || !entry.walkstate.istable));
        when TLBIOp_ALL
            relax_regime = (tlbi.from_aarch64 &&
                            tlbi.regime IN {Regime_EL20, Regime_EL2} &&
                            entry.context.regime IN {Regime_EL20, Regime_EL2});
            match = (tlbi.security == entry.context.ss &&
                     (tlbi.regime == entry.context.regime || relax_regime));
        when TLBIOp_ASID
            match = (entry.context.includes_s1 &&
                     tlbi.security == entry.context.ss &&
                     tlbi.regime == entry.context.regime &&
                     (!UseVMID(entry.context) || tlbi.vmid == entry.context.vmid) &&
                     (UseASID(entry.context) && entry.context.nG == '1' &&
                      tlbi.asid == entry.context.asid));
        when TLBIOp_IPAS2
            match = (!entry.context.includes_s1 && entry.context.includes_s2 &&
                     tlbi.security == entry.context.ss &&
                     tlbi.regime == entry.context.regime &&
                     (!UseVMID(entry.context) || tlbi.vmid == entry.context.vmid) &&
                     tlbi.ipaspace == entry.context.ipaspace &&
                     tlbi.address<51:entry.blocksize> == entry.context.ia<51:entry.blocksize> &&
                     (tlbi.level == TLBILevel_Any || !entry.walkstate.istable));
        when TLBIOp_VAA
            match = (entry.context.includes_s1 &&
                     tlbi.security == entry.context.ss &&
                     tlbi.regime == entry.context.regime &&
                     (!UseVMID(entry.context) || tlbi.vmid == entry.context.vmid) &&
                     tlbi.address<55:entry.blocksize> == entry.context.ia<55:entry.blocksize> &&
                     (tlbi.level == TLBILevel_Any || !entry.walkstate.istable));
        when TLBIOp_VA
            match = (entry.context.includes_s1 &&
                     tlbi.security == entry.context.ss &&
                     tlbi.regime == entry.context.regime &&
                     (!UseVMID(entry.context) || tlbi.vmid == entry.context.vmid) &&
                     (!UseASID(entry.context) || tlbi.asid == entry.context.asid ||
                      entry.context.nG == '0') &&
                     tlbi.address<55:entry.blocksize> == entry.context.ia<55:entry.blocksize> &&
                     (tlbi.level == TLBILevel_Any || !entry.walkstate.istable));
        when TLBIOp_VMALL
            match = (entry.context.includes_s1 &&
                     tlbi.security == entry.context.ss &&
                     tlbi.regime == entry.context.regime &&
                     (!UseVMID(entry.context) || tlbi.vmid == entry.context.vmid));
        when TLBIOp_VMALLS12
            match = (tlbi.security == entry.context.ss &&
                     tlbi.regime == entry.context.regime &&
                     (!UseVMID(entry.context) || tlbi.vmid == entry.context.vmid));
        when TLBIOp_RIPAS2
            match = (!entry.context.includes_s1 && entry.context.includes_s2 &&
                     tlbi.security == entry.context.ss &&
                     tlbi.regime == entry.context.regime &&
                     (!UseVMID(entry.context) || tlbi.vmid == entry.context.vmid) &&
                     tlbi.ipaspace == entry.context.ipaspace &&
                     (tlbi.tg != '00' && DecodeTLBITG(tlbi.tg) == entry.context.tg) &&
                     UInt(tlbi.address) <= UInt(entry.context.ia) &&
                     UInt(tlbi.end_address) > UInt(entry.context.ia));
        when TLBIOp_RVAA
            match = (entry.context.includes_s1 &&
                     tlbi.security == entry.context.ss &&
                     tlbi.regime == entry.context.regime &&
                     (!UseVMID(entry.context) || tlbi.vmid == entry.context.vmid) &&
                     (tlbi.tg != '00' && DecodeTLBITG(tlbi.tg) == entry.context.tg) &&
                     UInt(tlbi.address) <= UInt(entry.context.ia) &&
                     UInt(tlbi.end_address) > UInt(entry.context.ia));
        when TLBIOp_RVA
            match = (entry.context.includes_s1 &&
                     tlbi.security == entry.context.ss &&
                     tlbi.regime == entry.context.regime &&
                     (!UseVMID(entry.context) || tlbi.vmid == entry.context.vmid) &&
                     (!UseASID(entry.context) || tlbi.asid == entry.context.asid ||
                      entry.context.nG == '0') &&
                     (tlbi.tg != '00' && DecodeTLBITG(tlbi.tg) == entry.context.tg) &&
                     UInt(tlbi.address) <= UInt(entry.context.ia) &&
                     UInt(tlbi.end_address) > UInt(entry.context.ia));

    if tlbi.attr == TLBI_ExcludeXS && entry.context.xs == '1' then
        match = FALSE;

    return match;
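Illustration (not part of the library pseudocode): the address comparison in the TLBIOp_VA arm reduces to comparing the tag above the entry's block size, i.e. the invalidation address and the entry's input address must fall in the same translation block. A simplified C sketch of just that check (hypothetical names; the VMID/ASID/regime qualifiers are omitted):

    #include <stdbool.h>
    #include <stdint.h>

    /* True if tlbi_addr and entry_ia agree on bits 55:blocksize, matching
       tlbi.address<55:entry.blocksize> == entry.context.ia<55:entry.blocksize>. */
    static bool va_matches(uint64_t tlbi_addr, uint64_t entry_ia, unsigned blocksize)
    {
        uint64_t mask = (1ull << 56) - 1;  /* keep bits 55:0 */
        return ((tlbi_addr & mask) >> blocksize) == ((entry_ia & mask) >> blocksize);
    }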

Library pseudocode for aarch64/instrs/system/sysops/tlbi/TLBIMemAttr

enumeration TLBIMemAttr {TLBI_AllAttr, TLBI_ExcludeXS};

Library pseudocode for aarch64/instrs/system/sysops/tlbi/TLBIOp

enumeration TLBIOp {
    TLBIOp_DALL,       // AArch32 Data TLBI operations - deprecated
    TLBIOp_DASID,
    TLBIOp_DVA,
    TLBIOp_IALL,       // AArch32 Instruction TLBI operations - deprecated
    TLBIOp_IASID,
    TLBIOp_IVA,
    TLBIOp_ALL,
    TLBIOp_ASID,
    TLBIOp_IPAS2,
    TLBIOp_VAA,
    TLBIOp_VA,
    TLBIOp_VMALL,
    TLBIOp_VMALLS12,
    TLBIOp_RIPAS2,
    TLBIOp_RVAA,
    TLBIOp_RVA,
};

Library pseudocode for aarch64/instrs/system/sysops/tlbi/TLBIRange

// TLBIRange()
// ===========
// Extract the input address range information from encoded Xt.

(boolean, bits(2), bits(64), bits(64)) TLBIRange(Regime regime, bits(64) Xt)
    boolean valid  = TRUE;
    bits(64) start = Zeros(64);
    bits(64) end   = Zeros(64);
    bits(2) tg     = Xt<47:46>;
    integer scale  = UInt(Xt<45:44>);
    integer num    = UInt(Xt<43:39>);
    integer tg_bits;

    if tg == '00' then
        return (FALSE, tg, start, end);

    case tg of
        when '01' // 4KB
            tg_bits = 12;
            if HasLargeAddress(regime) then
                start<52:16> = Xt<36:0>;
                start<63:53> = Replicate(Xt<36>, 11);
            else
                start<48:12> = Xt<36:0>;
                start<63:49> = Replicate(Xt<36>, 15);
        when '10' // 16KB
            tg_bits = 14;
            if HasLargeAddress(regime) then
                start<52:16> = Xt<36:0>;
                start<63:53> = Replicate(Xt<36>, 11);
            else
                start<50:14> = Xt<36:0>;
                start<63:51> = Replicate(Xt<36>, 13);
        when '11' // 64KB
            tg_bits = 16;
            start<52:16> = Xt<36:0>;
            start<63:53> = Replicate(Xt<36>, 11);
        otherwise
            Unreachable();

    integer range = (num+1) << (5*scale + 1 + tg_bits);
    end = start + range<63:0>;

    if end<52> != start<52> then // overflow, saturate it
        end = Replicate(start<52>, 64-52) : Ones(52);

    return (valid, tg, start, end);
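Worked example (not part of the library pseudocode): the invalidated span is (NUM+1) << (5*SCALE + 1 + tg_bits) bytes starting at the sign-extended BaseADDR. A C sketch for the 4KB-granule case without the 52-bit address form, with hypothetical names:

    #include <stdint.h>

    /* Compute the [start, end) range of a TLBI R* operation for a 4KB granule,
       following TLBIRange(): BaseADDR = SInt(Xt<36:0>) placed at bits 48:12. */
    static void tlbi_range_4k(uint64_t xt, uint64_t *start, uint64_t *end)
    {
        unsigned scale    = (xt >> 44) & 0x3;           /* Xt<45:44> */
        unsigned num      = (xt >> 39) & 0x1F;          /* Xt<43:39> */
        int64_t  baseaddr = (int64_t)(xt << 27) >> 27;  /* SInt(Xt<36:0>) */

        *start = (uint64_t)baseaddr << 12;
        *end   = *start + ((uint64_t)(num + 1) << (5*scale + 1 + 12));
    }

With SCALE = 0 and NUM = 0 this gives 1 << 13 = 8192 bytes, i.e. the minimum range of two 4KB pages. (The overflow saturation at bit 52 is omitted in this sketch.)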

Library pseudocode for aarch64/instrs/system/sysops/tlbi/TLBIRecord

type TLBIRecord is (
    TLBIOp        op,
    boolean       from_aarch64,  // originated as an AArch64 operation
    SecurityState security,
    Regime        regime,
    bits(16)      vmid,
    bits(16)      asid,
    TLBILevel     level,
    TLBIMemAttr   attr,
    PASpace       ipaspace,      // For operations that take IPA as input address
    bits(64)      address,       // input address, for range operations, start address
    bits(64)      end_address,   // for range operations, end address
    bits(2)       tg,            // for range operations, translation granule
)

Library pseudocode for aarch64/instrs/system/sysops/tlbi/VMID

// VMID[]
// ======
// Effective VMID.

bits(16) VMID[]
    if EL2Enabled() then
        if !ELUsingAArch32(EL2) then
            if Have16bitVMID() && VTCR_EL2.VS == '1' then
                return VTTBR_EL2.VMID;
            else
                return ZeroExtend(VTTBR_EL2.VMID<7:0>, 16);
        else
            return ZeroExtend(VTTBR.VMID, 16);
    elsif HaveEL(EL2) && HaveSecureEL2Ext() then
        return Zeros(16);
    else
        return VMID_NONE;

Library pseudocode for aarch64/instrs/system/sysops/tlbi/VMID_NONE

// VMID[]
// ======
// Effective VMID.

bits(16) VMID[]
    if EL2Enabled() then
        if !ELUsingAArch32(EL2) then
            if Have16bitVMID() && VTCR_EL2.VS == '1' then
                return VTTBR_EL2.VMID;
            else
                return ZeroExtend(VTTBR_EL2.VMID<7:0>, 16);
        else
            return ZeroExtend(VTTBR.VMID, 16);
    elsif HaveEL(EL2) && HaveSecureEL2Ext() then
        return Zeros(16);
    else
        return VMID_NONE;

Library pseudocode for aarch64/instrs/system/sysops/tlbi/VMID_NONE

constant bits(16) VMID_NONE = Zeros(16);

Library pseudocode for aarch64/instrs/vector/arithmetic/binary/uniform/logical/bsl-eor/vbitop/VBitOp

enumeration VBitOp {VBitOp_VBIF, VBitOp_VBIT, VBitOp_VBSL, VBitOp_VEOR};

Library pseudocode for aarch64/instrs/vector/arithmetic/unary/cmp/compareop/CompareOp

enumeration CompareOp {CompareOp_GT, CompareOp_GE, CompareOp_EQ, CompareOp_LE, CompareOp_LT};

Library pseudocode for aarch64/instrs/vector/logical/immediateop/ImmediateOp

enumeration ImmediateOp {ImmediateOp_MOVI, ImmediateOp_MVNI, ImmediateOp_ORR, ImmediateOp_BIC};

Library pseudocode for aarch64/instrs/vector/reduce/reduceop/Reduce

// Reduce()
// ========

bits(esize) Reduce(ReduceOp op, bits(N) input, integer esize)
    boolean altfp = HaveAltFP() && !UsingAArch32() && FPCR.AH == '1';
    return Reduce(op, input, esize, altfp);

// Reduce()
// ========
// Perform the operation 'op' on pairs of elements from the input vector,
// reducing the vector to a scalar result. The 'altfp' argument controls
// alternative floating-point behaviour.

bits(esize) Reduce(ReduceOp op, bits(N) input, integer esize, boolean altfp)
    integer half;
    bits(esize) hi;
    bits(esize) lo;
    bits(esize) result;

    if N == esize then
        return input<esize-1:0>;

    half = N DIV 2;
    hi = Reduce(op, input<N-1:half>, esize, altfp);
    lo = Reduce(op, input<half-1:0>, esize, altfp);

    case op of
        when ReduceOp_FMINNUM
            result = FPMinNum(lo, hi, FPCR[]);
        when ReduceOp_FMAXNUM
            result = FPMaxNum(lo, hi, FPCR[]);
        when ReduceOp_FMIN
            result = FPMin(lo, hi, FPCR[], altfp);
        when ReduceOp_FMAX
            result = FPMax(lo, hi, FPCR[], altfp);
        when ReduceOp_FADD
            result = FPAdd(lo, hi, FPCR[]);
        when ReduceOp_ADD
            result = lo + hi;

    return result;
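Reduce() halves the vector recursively, so the reduction is evaluated as a balanced tree rather than left-to-right. A Python sketch of that shape (editorial illustration only; op stands in for the FPMin/FPMax/FPAdd cases above):

    def reduce_pairwise(op, elems):
        # elems must have a power-of-two length, mirroring N == esize * 2^k
        if len(elems) == 1:
            return elems[0]
        half = len(elems) // 2
        hi = reduce_pairwise(op, elems[half:])
        lo = reduce_pairwise(op, elems[:half])
        return op(lo, hi)

    reduce_pairwise(min, [3.0, 1.0, 4.0, 1.5])   # -> 1.0

For non-associative floating-point addition the tree order can give a different result from a sequential loop, which is why the recursion is specified explicitly.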

Library pseudocode for aarch64/instrs/vector/reduce/reduceop/ReduceOp

enumeration ReduceOp {ReduceOp_FMINNUM, ReduceOp_FMAXNUM, ReduceOp_FMIN,
                      ReduceOp_FMAX, ReduceOp_FADD, ReduceOp_ADD};

Library pseudocode for aarch64/translation/debug/AArch64.CheckBreakpoint

// AArch64.CheckBreakpoint()
// =========================
// Called before executing the instruction of length "size" bytes at "vaddress" in an AArch64
// translation regime, when either debug exceptions are enabled, or halting debug is enabled
// and halting is allowed.

FaultRecord AArch64.CheckBreakpoint(bits(64) vaddress, AccType acctype_in, integer size)
    assert !ELUsingAArch32(S1TranslationRegime());
    assert (UsingAArch32() && size IN {2,4}) || size == 4;

    AccType acctype = acctype_in;
    match = FALSE;

    for i = 0 to NumBreakpointsImplemented() - 1
        match_i = AArch64.BreakpointMatch(i, vaddress, acctype, size);
        match = match || match_i;

    if match && HaltOnBreakpointOrWatchpoint() then
        reason = DebugHalt_Breakpoint;
        Halt(reason);
    elsif match then
        acctype = AccType_IFETCH;
        iswrite = FALSE;
        return AArch64.DebugFault(acctype, iswrite);
    else
        return NoFault();

Library pseudocode for aarch64/translation/debug/AArch64.CheckDebug

// AArch64.CheckDebug()
// ====================
// Called on each access to check for a debug exception or entry to Debug state.

FaultRecord AArch64.CheckDebug(bits(64) vaddress, AccType acctype, boolean iswrite, integer size)
    FaultRecord fault = NoFault();
    boolean generate_exception;

    d_side = (acctype != AccType_IFETCH);

    if HaveNV2Ext() && acctype == AccType_NV2REGISTER then
        mask = '0';
        generate_exception = AArch64.GenerateDebugExceptionsFrom(EL2, IsSecure(), mask) && MDSCR_EL1.MDE == '1';
    else
        generate_exception = AArch64.GenerateDebugExceptions() && MDSCR_EL1.MDE == '1';
    halt = HaltOnBreakpointOrWatchpoint();

    if generate_exception || halt then
        if d_side then
            fault = AArch64.CheckWatchpoint(vaddress, acctype, iswrite, size);
        else
            fault = AArch64.CheckBreakpoint(vaddress, acctype, size);

    return fault;

Library pseudocode for aarch64/translation/debug/AArch64.CheckWatchpoint

// AArch64.CheckWatchpoint()
// =========================
// Called before accessing the memory location of "size" bytes at "address",
// when either debug exceptions are enabled for the access, or halting debug
// is enabled and halting is allowed.

FaultRecord AArch64.CheckWatchpoint(bits(64) vaddress, AccType acctype, boolean iswrite_in, integer size)
    assert !ELUsingAArch32(S1TranslationRegime());

    boolean iswrite = iswrite_in;
    if acctype IN {AccType_TTW, AccType_IC, AccType_AT, AccType_ATPAN} then
        return NoFault();
    if acctype == AccType_DC then
        if !iswrite then
            return NoFault();

    match = FALSE;
    match_on_read = FALSE;
    ispriv = AArch64.AccessUsesEL(acctype) != EL0;

    for i = 0 to NumWatchpointsImplemented() - 1
        if AArch64.WatchpointMatch(i, vaddress, size, ispriv, acctype, iswrite) then
            match = TRUE;
            if DBGWCR_EL1[i].LSC<0> == '1' then
                match_on_read = TRUE;

    if match && acctype == AccType_ATOMICRW then
        iswrite = !match_on_read;

    if match && HaltOnBreakpointOrWatchpoint() then
        reason = DebugHalt_Watchpoint;
        EDWAR = vaddress;
        Halt(reason);
    elsif match then
        return AArch64.DebugFault(acctype, iswrite);
    else
        return NoFault();

Library pseudocode for aarch64/translation/vmsa_addrcalc/AArch64.BlockBase

// AArch64.BlockBase()
// ===================
// Extract the address embedded in a block descriptor pointing to the base of
// a memory block

bits(52) AArch64.BlockBase(bits(64) descriptor, bit ds, TGx tgx, integer level)
    bits(52) blockbase = Zeros();

    if tgx == TGx_4KB && level == 2 then
        blockbase<47:21> = descriptor<47:21>;
    elsif tgx == TGx_4KB && level == 1 then
        blockbase<47:30> = descriptor<47:30>;
    elsif tgx == TGx_4KB && level == 0 then
        blockbase<47:39> = descriptor<47:39>;
    elsif tgx == TGx_16KB && level == 2 then
        blockbase<47:25> = descriptor<47:25>;
    elsif tgx == TGx_16KB && level == 1 then
        blockbase<47:36> = descriptor<47:36>;
    elsif tgx == TGx_64KB && level == 2 then
        blockbase<47:29> = descriptor<47:29>;
    elsif tgx == TGx_64KB && level == 1 then
        blockbase<47:42> = descriptor<47:42>;
    else
        Unreachable();

    if Have52BitPAExt() && tgx == TGx_64KB then
        blockbase<51:48> = descriptor<15:12>;
    elsif ds == '1' then
        blockbase<51:48> = descriptor<9:8>:descriptor<49:48>;

    return blockbase;

Library pseudocode for aarch64/translation/vmsa_addrcalc/AArch64.IASize

// AArch64.IASize()
// ================
// Retrieve the number of bits containing the input address

integer AArch64.IASize(bits(6) txsz)
    return 64 - UInt(txsz);

Library pseudocode for aarch64/translation/vmsa_addrcalc/AArch64.NextTableBase

// AArch64.NextTableBase()
// =======================
// Extract the address embedded in a table descriptor pointing to the base of
// the next level table of descriptors

bits(52) AArch64.NextTableBase(bits(64) descriptor, bit ds, TGx tgx)
    bits(52) tablebase = Zeros();

    case tgx of
        when TGx_4KB  tablebase<47:12> = descriptor<47:12>;
        when TGx_16KB tablebase<47:14> = descriptor<47:14>;
        when TGx_64KB tablebase<47:16> = descriptor<47:16>;

    if Have52BitPAExt() && tgx == TGx_64KB then
        tablebase<51:48> = descriptor<15:12>;
    elsif ds == '1' then
        tablebase<51:48> = descriptor<9:8>:descriptor<49:48>;

    return tablebase;

Library pseudocode for aarch64/translation/vmsa_addrcalc/AArch64.PageBase

// AArch64.PageBase()
// ==================
// Extract the address embedded in a page descriptor pointing to the base of
// a memory page

bits(52) AArch64.PageBase(bits(64) descriptor, bit ds, TGx tgx)
    bits(52) pagebase = Zeros();

    case tgx of
        when TGx_4KB  pagebase<47:12> = descriptor<47:12>;
        when TGx_16KB pagebase<47:14> = descriptor<47:14>;
        when TGx_64KB pagebase<47:16> = descriptor<47:16>;

    if Have52BitPAExt() && tgx == TGx_64KB then
        pagebase<51:48> = descriptor<15:12>;
    elsif ds == '1' then
        pagebase<51:48> = descriptor<9:8>:descriptor<49:48>;

    return pagebase;

Library pseudocode for aarch64/translation/vmsa_addrcalc/AArch64.PhysicalAddressSize

// AArch64.PhysicalAddressSize()
// =============================
// Retrieve the number of bits bounding the physical address

integer AArch64.PhysicalAddressSize(bits(3) encoded_ps, TGx tgx)
    integer ps;
    integer max_ps;

    case encoded_ps of
        when '000' ps = 32;
        when '001' ps = 36;
        when '010' ps = 40;
        when '011' ps = 42;
        when '100' ps = 44;
        when '101' ps = 48;
        when '110' ps = 52;
        otherwise  ps = integer IMPLEMENTATION_DEFINED "Reserved Intermediate Physical Address size value";

    if tgx != TGx_64KB && !Have52BitIPAAndPASpaceExt() then
        max_ps = Min(48, AArch64.PAMax());
    else
        max_ps = AArch64.PAMax();

    return Min(ps, max_ps);
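The PS/IPS encoding is a simple lookup, clamped by the implemented PA size. A Python sketch under assumed inputs (editorial illustration; the reserved-value behaviour is IMPLEMENTATION DEFINED, so returning 52 below is only a placeholder):

    PS_BITS = {0b000: 32, 0b001: 36, 0b010: 40, 0b011: 42,
               0b100: 44, 0b101: 48, 0b110: 52}

    def physical_address_size(encoded_ps, pamax, tgx_is_64kb, have_52bit_ipa_pa):
        ps = PS_BITS.get(encoded_ps, 52)   # reserved encodings: IMPDEF in the spec
        max_ps = pamax if (tgx_is_64kb or have_52bit_ipa_pa) else min(48, pamax)
        return min(ps, max_ps)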

Library pseudocode for aarch64/translation/vmsa_addrcalc/AArch64.S1StartLevel

// AArch64.S1StartLevel()
// ======================
// Compute the initial lookup level when performing a stage 1 translation
// table walk

integer AArch64.S1StartLevel(S1TTWParams walkparams)
    // Input Address size
    iasize = AArch64.IASize(walkparams.txsz);
    granulebits = TGxGranuleBits(walkparams.tgx);
    stride = granulebits - 3;

    return FINAL_LEVEL - (((iasize-1) - granulebits) DIV stride);
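Equivalently, in Python (editorial illustration; FINAL_LEVEL is 3 for the granules shown here):

    FINAL_LEVEL = 3

    def s1_start_level(txsz, granulebits):
        iasize = 64 - txsz                  # AArch64.IASize()
        stride = granulebits - 3            # index bits resolved per level
        return FINAL_LEVEL - ((iasize - 1 - granulebits) // stride)

    s1_start_level(txsz=16, granulebits=12)   # 48-bit VA, 4KB granule -> level 0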

Library pseudocode for aarch64/translation/vmsa_addrcalc/AArch64.S2SLTTEntryAddress

// AArch64.S2SLTTEntryAddress()
// ============================
// Compute the first stage 2 translation table descriptor address within the
// table pointed to by the base at the start level

FullAddress AArch64.S2SLTTEntryAddress(S2TTWParams walkparams, bits(52) ipa, FullAddress tablebase)
    startlevel = AArch64.S2StartLevel(walkparams);
    iasize = AArch64.IASize(walkparams.txsz);
    granulebits = TGxGranuleBits(walkparams.tgx);
    stride = granulebits - 3;
    levels = FINAL_LEVEL - startlevel;

    bits(52) index;
    lsb = levels*stride + granulebits;
    msb = iasize - 1;
    index = ZeroExtend(ipa<msb:lsb>:Zeros(3));

    FullAddress descaddress;
    descaddress.address = tablebase.address OR index;
    descaddress.paspace = tablebase.paspace;

    return descaddress;

Library pseudocode for aarch64/translation/vmsa_addrcalc/AArch64.S2StartLevel

// AArch64.S2StartLevel()
// ======================
// Determine the initial lookup level when performing a stage 2 translation
// table walk

integer AArch64.S2StartLevel(S2TTWParams walkparams)
    case walkparams.tgx of
        when TGx_4KB
            case walkparams.sl2:walkparams.sl0 of
                when '000' return 2;
                when '001' return 1;
                when '010' return 0;
                when '011' return 3;
                when '100' return -1;
        when TGx_16KB
            case walkparams.sl0 of
                when '00' return 3;
                when '01' return 2;
                when '10' return 1;
                when '11' return 0;
        when TGx_64KB
            case walkparams.sl0 of
                when '00' return 3;
                when '01' return 2;
                when '10' return 1;

Library pseudocode for aarch64/translation/vmsa_addrcalc/AArch64.TTBaseAddress

// AArch64.TTBaseAddress()
// =======================
// Retrieve the PA/IPA pointing to the base of the initial translation table

bits(52) AArch64.TTBaseAddress(bits(64) ttb, bits(6) txsz, bits(3) ps, bit ds, TGx tgx, integer startlevel)
    bits(52) tablebase = Zeros();

    // Input Address size
    iasize = AArch64.IASize(txsz);
    granulebits = TGxGranuleBits(tgx);
    stride = granulebits - 3;
    levels = FINAL_LEVEL - startlevel;

    // Base address is aligned to size of the initial translation table in bytes
    tsize = (iasize - (levels*stride + granulebits)) + 3;

    if (Have52BitPAExt() && tgx == TGx_64KB && ps == '110') || (ds == '1') then
        tsize = Max(tsize, 6);
        tablebase<51:6> = ttb<5:2>:ttb<47:6>;
    else
        tablebase<47:1> = ttb<47:1>;

    tablebase = Align(tablebase, 1 << tsize);

    return tablebase;
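The base must be aligned to the size of the initial table, which depends on how many bits the start level resolves. A Python sketch of the common case (editorial illustration, below 52-bit PA, so the ds/52-bit bit shuffling and the 64-byte minimum above are omitted):

    def tt_base_align(ttb, iasize, startlevel, granulebits):
        stride = granulebits - 3
        levels = 3 - startlevel                               # FINAL_LEVEL == 3
        tsize = (iasize - (levels*stride + granulebits)) + 3  # log2(table bytes)
        return (ttb >> tsize) << tsize                        # align down

For a 48-bit IA with a 4KB granule and start level 0, tsize is 12, so the initial table is 4KB-aligned.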

Library pseudocode for aarch64/translation/vmsa_addrcalc/AArch64.TTEntryAddress

// AArch64.TTEntryAddress()
// ========================
// Compute translation table descriptor address within the table pointed to by
// the table base

FullAddress AArch64.TTEntryAddress(integer level, TGx tgx, bits(6) txsz, bits(64) ia, FullAddress tablebase)
    // Input Address size
    iasize = AArch64.IASize(txsz);
    granulebits = TGxGranuleBits(tgx);
    stride = granulebits - 3;
    levels = FINAL_LEVEL - level;

    bits(52) index;
    lsb = levels*stride + granulebits;
    msb = Min(iasize - 1, (lsb + stride) - 1);
    index = ZeroExtend(ia<msb:lsb>:Zeros(3));

    FullAddress descaddress;
    descaddress.address = tablebase.address OR index;
    descaddress.paspace = tablebase.paspace;

    return descaddress;
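Each step of the walk ORs a scaled slice of the input address into the table-size-aligned base. The same computation in Python (editorial illustration):

    def tt_entry_address(level, granulebits, txsz, ia, tablebase):
        stride = granulebits - 3
        iasize = 64 - txsz
        levels = 3 - level                        # FINAL_LEVEL == 3
        lsb = levels*stride + granulebits
        msb = min(iasize - 1, lsb + stride - 1)
        index = ((ia >> lsb) & ((1 << (msb - lsb + 1)) - 1)) << 3   # ia<msb:lsb>:'000'
        return tablebase | index

At level 3 with a 4KB granule this selects ia<20:12> and scales it by the 8-byte descriptor size.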

Library pseudocode for aarch64/translation/vmsa_faults/AArch64.AddrTop

// AArch64.AddrTop()
// =================
// Get the top bit position of the virtual address.
// Bits above are not accounted as part of the translation process.

integer AArch64.AddrTop(bit tbid, AccType acctype, bit tbi)
    if tbid == '1' && acctype == AccType_IFETCH then
        return 63;

    if tbi == '1' then
        return 55;
    else
        return 63;
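In other words, only TBI (and TBID for instruction fetches) decides whether VA<63:56> participates in translation or is treated as a tag byte. A one-line Python equivalent (editorial illustration):

    def addr_top(tbid, is_ifetch, tbi):
        if tbid and is_ifetch:
            return 63
        return 55 if tbi else 63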

Library pseudocode for aarch64/translation/vmsa_faults/AArch64.ContiguousBitFaults

// AArch64.ContiguousBitFaults()
// =============================
// If contiguous bit is set, returns whether the translation size exceeds the
// input address size and if the implementation generates a fault

boolean AArch64.ContiguousBitFaults(bits(6) txsz, TGx tgx, integer level)
    // Input Address size
    iasize = AArch64.IASize(txsz);
    // Translation size
    tsize = TranslationSize(tgx, level) + ContiguousSize(tgx, level);

    fault = boolean IMPLEMENTATION_DEFINED "Translation fault on misprogrammed contiguous bit";

    return tsize > iasize && fault;

Library pseudocode for aarch64/translation/vmsa_faults/AArch64.DebugFault

// AArch64.DebugFault()
// ====================
// Return a fault record indicating a hardware watchpoint/breakpoint

FaultRecord AArch64.DebugFault(AccType acctype, boolean iswrite)
    FaultRecord fault;

    fault.statuscode  = Fault_Debug;
    fault.acctype     = acctype;
    fault.write       = iswrite;
    fault.secondstage = FALSE;
    fault.s2fs1walk   = FALSE;

    return fault;

Library pseudocode for aarch64/translation/vmsa_faults/AArch64.ExclusiveFault

// AArch64.ExclusiveFault()
// ========================

FaultRecord AArch64.ExclusiveFault(AccType acctype, boolean iswrite, boolean secondstage, boolean s2fs1walk)
    FaultRecord fault;

    fault.statuscode  = Fault_Exclusive;
    fault.acctype     = acctype;
    fault.write       = iswrite;
    fault.secondstage = secondstage;
    fault.s2fs1walk   = s2fs1walk;

    return fault;

Library pseudocode for aarch64/translation/vmsa_faults/AArch64.IPAIsOutOfRange

// AArch64.IPAIsOutOfRange()
// =========================
// Check bits not resolved by translation are ZERO

boolean AArch64.IPAIsOutOfRange(bits(52) ipa, S2TTWParams walkparams)
    // Input Address size
    iasize = AArch64.IASize(walkparams.txsz);

    if iasize < 52 then
        return !IsZero(ipa<51:iasize>);
    else
        return FALSE;

Library pseudocode for aarch64/translation/vmsa_faults/AArch64.OAOutOfRange

// AArch64.OAOutOfRange()
// ======================
// Returns whether output address is expressed in the configured size number of bits

boolean AArch64.OAOutOfRange(TTWState walkstate, bits(3) ps, TGx tgx, bits(64) ia)
    // Output Address size
    oasize = AArch64.PhysicalAddressSize(ps, tgx);

    if oasize < 52 then
        if walkstate.istable then
            baseaddress = walkstate.baseaddress.address;
            return !IsZero(baseaddress<51:oasize>);
        else
            // Output address
            oa = StageOA(ia, tgx, walkstate);
            return !IsZero(oa.address<51:oasize>);
    else
        return FALSE;

Library pseudocode for aarch64/translation/vmsa_faults/AArch64.S1HasAlignmentFault

// AArch64.S1HasAlignmentFault()
// =============================
// Returns whether stage 1 output fails alignment requirement on data accesses
// to Device memory

boolean AArch64.S1HasAlignmentFault(AccType acctype, boolean aligned, bit ntlsmd, MemoryAttributes memattrs)
    if acctype == AccType_IFETCH || memattrs.memtype != MemType_Device then
        return FALSE;

    if acctype == AccType_A32LSMD && ntlsmd == '0' && memattrs.device != DeviceType_GRE then
        return TRUE;

    return !aligned || acctype == AccType_DCZVA;

Library pseudocode for aarch64/translation/vmsa_faults/AArch64.S1HasPermissionsFault

// AArch64.S1HasPermissionsFault()
// ===============================
// Returns whether stage 1 access violates permissions of target memory

boolean AArch64.S1HasPermissionsFault(Regime regime, SecurityState ss, TTWState walkstate,
                                      S1TTWParams walkparams, boolean ispriv,
                                      AccType acctype, boolean iswrite)
    bit r;
    bit w;
    bit x;
    permissions = walkstate.permissions;

    if HasUnprivileged(regime) then
        bit pr;
        bit pw;
        bit ur;
        bit uw;
        // Apply leaf permissions
        case permissions.ap<2:1> of
            when '00' (pr,pw,ur,uw) = ('1','1','0','0'); // Privileged access
            when '01' (pr,pw,ur,uw) = ('1','1','1','1'); // No effect
            when '10' (pr,pw,ur,uw) = ('1','0','0','0'); // Read-only, privileged access
            when '11' (pr,pw,ur,uw) = ('1','0','1','0'); // Read-only

        // Apply hierarchical permissions
        case permissions.ap_table of
            when '00' (pr,pw,ur,uw) = ( pr, pw, ur, uw); // No effect
            when '01' (pr,pw,ur,uw) = ( pr, pw,'0','0'); // Privileged access
            when '10' (pr,pw,ur,uw) = ( pr,'0', ur,'0'); // Read-only
            when '11' (pr,pw,ur,uw) = ( pr,'0','0','0'); // Read-only, privileged access

        // Locations writable by unprivileged cannot be executed by privileged
        px = NOT(permissions.pxn OR permissions.pxn_table OR uw);
        ux = NOT(permissions.uxn OR permissions.uxn_table);

        pan_access = !(acctype IN {AccType_DC, AccType_IFETCH, AccType_AT, AccType_NV2REGISTER});
        if HavePANExt() && pan_access && !(regime == Regime_EL10 && walkparams.nv1 == '1') then
            bit pan;
            if (boolean IMPLEMENTATION_DEFINED "SCR_EL3.SIF affects EPAN" &&
                    CurrentSecurityState() == SS_Secure &&
                    walkstate.baseaddress.paspace == PAS_NonSecure &&
                    walkparams.sif == '1') then
                ux = '0';
            pan = PSTATE.PAN AND (ur OR uw OR (walkparams.epan AND ux));
            pr = pr AND NOT(pan);
            pw = pw AND NOT(pan);

        (r,w,x) = if ispriv then (pr,pw,px) else (ur,uw,ux);
    else
        // Apply leaf permissions
        case permissions.ap<2> of
            when '0' (r,w) = ('1','1'); // No effect
            when '1' (r,w) = ('1','0'); // Read-only

        // Apply hierarchical permissions
        case permissions.ap_table<1> of
            when '0' (r,w) = ( r , w ); // No effect
            when '1' (r,w) = ( r ,'0'); // Read-only

        x = NOT(permissions.xn OR permissions.xn_table);

    // Prevent execution from writable locations if WXN is set
    x = x AND NOT(walkparams.wxn AND w);

    // Prevent execution from Non-secure space by PE in secure state if SIF is set
    if ss == SS_Secure && walkstate.baseaddress.paspace == PAS_NonSecure then
        x = x AND NOT(walkparams.sif);

    if acctype == AccType_IFETCH then
        if (ConstrainUnpredictable(Unpredictable_INSTRDEVICE) == Constraint_FAULT &&
                walkstate.memattrs.memtype == MemType_Device) then
            return TRUE;
        return x == '0';
    elsif acctype == AccType_DC then
        if iswrite then
            return w == '0';
        else
            // DC from a privileged context that does not write cannot generate a Permission fault
            return !ispriv && (r == '0' ||
                               (IsCMOWControlledInstruction() && walkparams.cmow == '1' && w == '0'));
    elsif acctype == AccType_IC then
        // IC instructions do not write
        assert !iswrite;
        impdef_ic_fault = boolean IMPLEMENTATION_DEFINED "Permission fault on EL0 IC_IVAU execution";
        // IC from a privileged context cannot generate a Permission fault
        return !ispriv && ((r == '0' && impdef_ic_fault) ||
                           (IsCMOWControlledInstruction() && walkparams.cmow == '1' && w == '0'));
    elsif iswrite then
        return w == '0';
    else
        return r == '0';

Library pseudocode for aarch64/translation/vmsa_faults/AArch64.S1InvalidTxSZ

// AArch64.S1InvalidTxSZ()
// =======================
// Detect erroneous configuration of stage 1 TxSZ field if the implementation
// does not constrain the value of TxSZ

boolean AArch64.S1InvalidTxSZ(S1TTWParams walkparams)
    mintxsz = AArch64.S1MinTxSZ(walkparams.ds, walkparams.tgx);
    maxtxsz = AArch64.MaxTxSZ(walkparams.tgx);

    return UInt(walkparams.txsz) < mintxsz || UInt(walkparams.txsz) > maxtxsz;

Library pseudocode for aarch64/translation/vmsa_faults/AArch64.S2HasAlignmentFault

// AArch64.S2HasAlignmentFault()
// =============================
// Returns whether stage 2 output fails alignment requirement on data accesses
// to Device memory

boolean AArch64.S2HasAlignmentFault(AccType acctype, boolean aligned, MemoryAttributes memattrs)
    if acctype == AccType_IFETCH || memattrs.memtype != MemType_Device then
        return FALSE;

    return !aligned || acctype == AccType_DCZVA;

Library pseudocode for aarch64/translation/vmsa_faults/AArch64.S2HasPermissionsFault

// AArch64.S2HasPermissionsFault()
// ===============================
// Returns whether stage 2 access violates permissions of target memory

boolean AArch64.S2HasPermissionsFault(boolean s2fs1walk, TTWState walkstate, SecurityState ss,
                                      S2TTWParams walkparams, boolean ispriv,
                                      AccType acctype, boolean iswrite)
    permissions = walkstate.permissions;
    memtype = walkstate.memattrs.memtype;

    r = permissions.s2ap<0>;
    w = permissions.s2ap<1>;
    bit px;
    bit ux;
    case (permissions.s2xn:permissions.s2xnx) of
        when '00' (px,ux) = ('1','1');
        when '01' (px,ux) = ('0','1');
        when '10' (px,ux) = ('0','0');
        when '11' (px,ux) = ('1','0');
    x = if ispriv then px else ux;

    if s2fs1walk && walkparams.ptw == '1' && memtype == MemType_Device then
        return TRUE;
    elsif acctype == AccType_IFETCH then
        constraint = ConstrainUnpredictable(Unpredictable_INSTRDEVICE);
        if constraint == Constraint_FAULT && memtype == MemType_Device then
            return TRUE;
        return x == '0';
    elsif acctype == AccType_DC then
        // AArch32 DC maintenance instructions operating by VA cannot fault.
        if iswrite then
            return !ELUsingAArch32(EL1) && w == '0';
        else
            return ((!ispriv && !ELUsingAArch32(EL1) && r == '0') ||
                    (IsCMOWControlledInstruction() && walkparams.cmow == '1' && w == '0'));
    elsif acctype == AccType_IC then
        // IC instructions do not write
        assert !iswrite;
        impdef_ic_fault = boolean IMPLEMENTATION_DEFINED "Permission fault on EL0 IC_IVAU execution";
        return ((!ispriv && !ELUsingAArch32(EL1) && r == '0' && impdef_ic_fault) ||
                (IsCMOWControlledInstruction() && walkparams.cmow == '1' && w == '0'));
    elsif iswrite then
        return w == '0';
    else
        return r == '0';

Library pseudocode for aarch64/translation/vmsa_faults/AArch64.S2InconsistentSL

// AArch64.S2InconsistentSL()
// ==========================
// Detect inconsistent configuration of stage 2 TxSZ and SL fields

boolean AArch64.S2InconsistentSL(S2TTWParams walkparams)
    startlevel  = AArch64.S2StartLevel(walkparams);
    levels      = FINAL_LEVEL - startlevel;
    granulebits = TGxGranuleBits(walkparams.tgx);
    stride      = granulebits - 3;

    // Input address size must at least be large enough to be resolved from the start level
    sl_min_iasize = ( levels * stride     // Bits resolved by table walk, except initial level
                      + granulebits       // Bits directly mapped to output address
                      + 1);               // At least 1 more bit to be decoded by initial level

    // Can accommodate 1 more stride in the level + concatenation of up to 2^4 tables
    sl_max_iasize = sl_min_iasize + (stride-1) + 4;

    // Configured Input Address size
    iasize = AArch64.IASize(walkparams.txsz);

    return iasize < sl_min_iasize || iasize > sl_max_iasize;
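The bounds can be reproduced numerically; a Python sketch (editorial illustration) and one worked case:

    def sl_iasize_bounds(startlevel, granulebits):
        stride = granulebits - 3
        levels = 3 - startlevel                    # FINAL_LEVEL == 3
        sl_min = levels*stride + granulebits + 1
        sl_max = sl_min + (stride - 1) + 4
        return sl_min, sl_max

    sl_iasize_bounds(startlevel=0, granulebits=12)   # -> (40, 52)

So a 4KB-granule stage 2 walk starting at level 0 is consistent only with a configured input address size of 40 to 52 bits.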

Library pseudocode for aarch64/translation/vmsa_faults/AArch64.S2InvalidSL

// AArch64.S2InvalidSL()
// =====================
// Detect invalid configuration of SL field

boolean AArch64.S2InvalidSL(S2TTWParams walkparams)
    case walkparams.tgx of
        when TGx_4KB
            case walkparams.sl2:walkparams.sl0 of
                when '1x1'  return TRUE;
                when '11x'  return TRUE;
                when '010'  return AArch64.PAMax() < 44;
                when '011'  return !HaveSmallTranslationTableExt();
                otherwise   return FALSE;
        when TGx_16KB
            case walkparams.ds:walkparams.sl0 of
                when '011'  return TRUE;
                when '010'  return AArch64.PAMax() < 42;
                otherwise   return FALSE;
        when TGx_64KB
            case walkparams.sl0 of
                when '11'   return TRUE;
                when '10'   return AArch64.PAMax() < 44;
                otherwise   return FALSE;

Library pseudocode for aarch64/translation/vmsa_faults/AArch64.S2InvalidTxSZ

// AArch64.S2InvalidTxSZ()
// =======================
// Detect erroneous configuration of stage 2 TxSZ field if the implementation
// does not constrain the value of TxSZ

boolean AArch64.S2InvalidTxSZ(S2TTWParams walkparams, boolean s1aarch64)
    mintxsz = AArch64.S2MinTxSZ(walkparams.ds, walkparams.tgx, s1aarch64);
    maxtxsz = AArch64.MaxTxSZ(walkparams.tgx);

    return UInt(walkparams.txsz) < mintxsz || UInt(walkparams.txsz) > maxtxsz;

Library pseudocode for aarch64/translation/vmsa_faults/AArch64.VAIsOutOfRange

// AArch64.VAIsOutOfRange()
// ========================
// Check bits not resolved by translation are identical and of accepted value

boolean AArch64.VAIsOutOfRange(bits(64) va, AccType acctype, Regime regime, S1TTWParams walkparams)
    addrtop = AArch64.AddrTop(walkparams.tbid, acctype, walkparams.tbi);
    // Input Address size
    iasize = AArch64.IASize(walkparams.txsz);

    if HasUnprivileged(regime) then
        if AArch64.GetVARange(va) == VARange_LOWER then
            return !IsZero(va<addrtop:iasize>);
        else
            return !IsOnes(va<addrtop:iasize>);
    else
        return !IsZero(va<addrtop:iasize>);
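A Python model of the check (editorial illustration; it assumes the upper/lower VA range is selected by VA bit 55, which is how AArch64.GetVARange decides):

    def va_out_of_range(va, addrtop, iasize, two_ranges):
        width = addrtop - iasize + 1
        bits = (va >> iasize) & ((1 << width) - 1)   # va<addrtop:iasize>
        if two_ranges and (va >> 55) & 1:            # upper range: must be all-ones
            return bits != (1 << width) - 1
        return bits != 0                             # lower range: must be zero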

Library pseudocode for aarch64/translation/vmsa_memattr/AArch64.IsS2ResultTagged

// AArch64.IsS2ResultTagged()
// ==========================
// Determine whether the combined output memory attributes of stage 1 and
// stage 2 indicate tagged memory

boolean AArch64.IsS2ResultTagged(MemoryAttributes s2_memattrs, boolean s1_tagged)
    return ( s1_tagged &&
             (s2_memattrs.memtype     == MemType_Normal) &&
             (s2_memattrs.inner.attrs == MemAttr_WB)     &&
             (s2_memattrs.inner.hints == MemHint_RWA)    &&
             (!s2_memattrs.inner.transient)              &&
             (s2_memattrs.outer.attrs == MemAttr_WB)     &&
             (s2_memattrs.outer.hints == MemHint_RWA)    &&
             (!s2_memattrs.outer.transient) );

Library pseudocode for aarch64/translation/vmsa_memattr/AArch64.S2ApplyFWBMemAttrs

// AArch64.S2ApplyFWBMemAttrs()
// ============================
// Apply stage 2 forced Write-Back on stage 1 memory attributes.

MemoryAttributes AArch64.S2ApplyFWBMemAttrs(MemoryAttributes s1_memattrs, bits(4) s2_attr, bits(2) s2_sh)
    MemoryAttributes memattrs;

    if s2_attr<2> == '0' then          // S2 Device, S1 any
        s2_device = DecodeDevice(s2_attr<1:0>);
        memattrs.memtype = MemType_Device;
        if s1_memattrs.memtype == MemType_Device then
            memattrs.device = S2CombineS1Device(s1_memattrs.device, s2_device);
        else
            memattrs.device = s2_device;
    elsif s2_attr<1:0> == '11' then    // S2 attr = S1 attr
        memattrs = s1_memattrs;
    elsif s2_attr<1:0> == '10' then    // Force writeback
        memattrs.memtype     = MemType_Normal;
        memattrs.inner.attrs = MemAttr_WB;
        memattrs.outer.attrs = MemAttr_WB;
        if (s1_memattrs.memtype == MemType_Normal &&
              s1_memattrs.inner.attrs != MemAttr_NC) then
            memattrs.inner.hints     = s1_memattrs.inner.hints;
            memattrs.inner.transient = s1_memattrs.inner.transient;
        else
            memattrs.inner.hints     = MemHint_RWA;
            memattrs.inner.transient = FALSE;
        if (s1_memattrs.memtype == MemType_Normal &&
              s1_memattrs.outer.attrs != MemAttr_NC) then
            memattrs.outer.hints     = s1_memattrs.outer.hints;
            memattrs.outer.transient = s1_memattrs.outer.transient;
        else
            memattrs.outer.hints     = MemHint_RWA;
            memattrs.outer.transient = FALSE;
    else                               // Non-cacheable unless S1 is device
        if s1_memattrs.memtype == MemType_Device then
            memattrs = s1_memattrs;
        else
            MemAttrHints cacheability_attr;
            cacheability_attr.attrs = MemAttr_NC;
            memattrs.memtype = MemType_Normal;
            memattrs.inner = cacheability_attr;
            memattrs.outer = cacheability_attr;

    s2_shareability = DecodeShareability(s2_sh);
    memattrs.shareability = S2CombineS1Shareability(s1_memattrs.shareability, s2_shareability);
    memattrs.tagged       = AArch64.IsS2ResultTagged(memattrs, s1_memattrs.tagged);
    memattrs.shareability = EffectiveShareability(memattrs);

    return memattrs;
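The FWB combining rules reduce to a small decision table on the stage 2 descriptor attribute bits. A Python summary of the memory-type outcome only (editorial illustration; the device-type and hint/transient merging from the pseudocode above is not repeated here):

    def fwb_memtype(s2_attr, s1_is_device):
        if (s2_attr >> 2) & 1 == 0:
            return "Device"                     # stage 2 forces Device
        lo = s2_attr & 0b11
        if lo == 0b11:
            return "stage 1 attributes"         # S2 attr = S1 attr
        if lo == 0b10:
            return "Normal Write-Back"          # forced write-back
        return "Device" if s1_is_device else "Normal Non-cacheable"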

Library pseudocode for aarch64/translation/vmsa_tlbcontext/AArch64.GetS1TLBContext

// AArch64.GetS1TLBContext()
// =========================
// Gather translation context for accesses with VA to match against TLB entries

TLBContext AArch64.GetS1TLBContext(Regime regime, SecurityState ss, bits(64) va, TGx tg)
    TLBContext tlbcontext;

    case regime of
        when Regime_EL3  tlbcontext = AArch64.TLBContextEL3(ss, va, tg);
        when Regime_EL2  tlbcontext = AArch64.TLBContextEL2(ss, va, tg);
        when Regime_EL20 tlbcontext = AArch64.TLBContextEL20(ss, va, tg);
        when Regime_EL10 tlbcontext = AArch64.TLBContextEL10(ss, va, tg);

    tlbcontext.includes_s1 = TRUE;
    // The following may be amended for EL1&0 Regime if caching of stage 2 is successful
    tlbcontext.includes_s2 = FALSE;

    return tlbcontext;

Library pseudocode for aarch64/translation/vmsa_tlbcontext/AArch64.GetS2TLBContext

// AArch64.GetS2TLBContext()
// =========================
// Gather translation context for accesses with IPA to match against TLB entries

TLBContext AArch64.GetS2TLBContext(SecurityState ss, FullAddress ipa, TGx tg)
    assert EL2Enabled();
    TLBContext tlbcontext;

    tlbcontext.ss = ss;
    tlbcontext.regime = Regime_EL10;
    tlbcontext.ipaspace = ipa.paspace;
    tlbcontext.vmid = VMID[];
    tlbcontext.tg = tg;
    tlbcontext.ia = ZeroExtend(ipa.address);
    if HaveCommonNotPrivateTransExt() then
        tlbcontext.cnp = if ipa.paspace == PAS_Secure then VSTTBR_EL2.CnP else VTTBR_EL2.CnP;
    else
        tlbcontext.cnp = '0';
    tlbcontext.includes_s1 = FALSE;
    tlbcontext.includes_s2 = TRUE;

    return tlbcontext;

Library pseudocode for aarch64/translation/vmsa_tlbcontext/AArch64.TLBContextEL10

// AArch64.TLBContextEL10()
// ========================
// Gather translation context for accesses under EL10 regime to match against TLB entries

TLBContext AArch64.TLBContextEL10(SecurityState ss, bits(64) va, TGx tg)
    TLBContext tlbcontext;

    tlbcontext.ss = ss;
    tlbcontext.regime = Regime_EL10;
    tlbcontext.vmid = VMID[];
    tlbcontext.asid = if TCR_EL1.A1 == '0' then TTBR0_EL1.ASID else TTBR1_EL1.ASID;
    tlbcontext.tg = tg;
    tlbcontext.ia = va;
    if HaveCommonNotPrivateTransExt() then
        if AArch64.GetVARange(va) == VARange_LOWER then
            tlbcontext.cnp = TTBR0_EL1.CnP;
        else
            tlbcontext.cnp = TTBR1_EL1.CnP;
    else
        tlbcontext.cnp = '0';

    return tlbcontext;

Library pseudocode for aarch64/translation/vmsa_tlbcontext/AArch64.TLBContextEL2

// AArch64.TLBContextEL2()
// =======================
// Gather translation context for accesses under EL2 regime to match against TLB entries

TLBContext AArch64.TLBContextEL2(SecurityState ss, bits(64) va, TGx tg)
    TLBContext tlbcontext;

    tlbcontext.ss = ss;
    tlbcontext.regime = Regime_EL2;
    tlbcontext.tg = tg;
    tlbcontext.ia = va;
    tlbcontext.cnp = if HaveCommonNotPrivateTransExt() then TTBR0_EL2.CnP else '0';

    return tlbcontext;

Library pseudocode for aarch64/translation/vmsa_tlbcontext/AArch64.TLBContextEL20

// AArch64.TLBContextEL20()
// ========================
// Gather translation context for accesses under EL20 regime to match against TLB entries

TLBContext AArch64.TLBContextEL20(SecurityState ss, bits(64) va, TGx tg)
    TLBContext tlbcontext;

    tlbcontext.ss = ss;
    tlbcontext.regime = Regime_EL20;
    tlbcontext.asid = if TCR_EL2.A1 == '0' then TTBR0_EL2.ASID else TTBR1_EL2.ASID;
    tlbcontext.tg = tg;
    tlbcontext.ia = va;
    if HaveCommonNotPrivateTransExt() then
        if AArch64.GetVARange(va) == VARange_LOWER then
            tlbcontext.cnp = TTBR0_EL2.CnP;
        else
            tlbcontext.cnp = TTBR1_EL2.CnP;
    else
        tlbcontext.cnp = '0';

    return tlbcontext;

Library pseudocode for aarch64/translation/vmsa_tlbcontext/AArch64.TLBContextEL3

// AArch64.TLBContextEL3()
// =======================
// Gather translation context for accesses under EL3 regime to match against TLB entries

TLBContext AArch64.TLBContextEL3(SecurityState ss, bits(64) va, TGx tg)
    TLBContext tlbcontext;
    tlbcontext.ss     = ss;
    tlbcontext.regime = Regime_EL3;
    tlbcontext.tg     = tg;
    tlbcontext.ia     = va;
    tlbcontext.cnp    = if HaveCommonNotPrivateTransExt() then TTBR0_EL3.CnP else '0';
    return tlbcontext;

Library pseudocode for aarch64/translation/vmsa_translation/AArch64.AccessUsesEL

// AArch64.AccessUsesEL()
// ======================
// Returns the Exception Level of the regime that will manage the translation for a given access type.

bits(2) AArch64.AccessUsesEL(AccType acctype)
    if acctype IN {AccType_UNPRIV, AccType_UNPRIVSTREAM} then
        return EL0;
    elsif acctype == AccType_NV2REGISTER then
        return EL2;
    else
        return PSTATE.EL;

Library pseudocode for aarch64/translation/vmsa_translation/AArch64.FaultAllowsSetAccessFlag

// AArch64.FaultAllowsSetAccessFlag()
// ==================================
// Determine whether the access flag could be set by HW given the fault status

boolean AArch64.FaultAllowsSetAccessFlag(FaultRecord fault)
    if fault.statuscode == Fault_None then
        return TRUE;
    elsif fault.statuscode IN {Fault_Alignment, Fault_Permission} then
        return ConstrainUnpredictable(Unpredictable_AFUPDATE) == Constraint_TRUE;
    else
        return FALSE;

Library pseudocode for aarch64/translation/vmsa_translation/AArch64.FullTranslate

// AArch64.FullTranslate()
// =======================
// Address translation as specified by VMSA
// Alignment check NOT due to memory type is expected to be done before translation

AddressDescriptor AArch64.FullTranslate(bits(64) va, AccType acctype, boolean iswrite, boolean aligned)
    fault = NoFault();
    fault.acctype = acctype;
    fault.write   = iswrite;
    ispriv = PSTATE.EL != EL0 && !(acctype IN {AccType_UNPRIV, AccType_UNPRIVSTREAM});
    regime = TranslationRegime(PSTATE.EL, acctype);
    ss     = SecurityStateAtEL(PSTATE.EL);
    AddressDescriptor ipa;
    (fault, ipa) = AArch64.S1Translate(fault, regime, ss, va, acctype, aligned, iswrite, ispriv);
    if fault.statuscode != Fault_None then
        return CreateFaultyAddressDescriptor(va, fault);
    if regime == Regime_EL10 && EL2Enabled() then
        s1aarch64 = TRUE;
        s2fs1walk = FALSE;
        AddressDescriptor pa;
        (fault, pa) = AArch64.S2Translate(fault, ipa, s1aarch64, ss, s2fs1walk, acctype,
                                          aligned, iswrite, ispriv);
        if fault.statuscode != Fault_None then
            return CreateFaultyAddressDescriptor(va, fault);
        else
            return pa;
    else
        return ipa;

Library pseudocode for aarch64/translation/vmsa_translation/AArch64.MemSwapTableDesc

// AArch64.MemSwapTableDesc()
// ==========================
// Perform HW update of table descriptor as an atomic operation

(FaultRecord, bits(64)) AArch64.MemSwapTableDesc(FaultRecord fault_in, bits(64) prev_desc,
                                                 bits(64) new_desc, bit ee,
                                                 AddressDescriptor descupdateaddress)
    descupdateaccess = CreateAccessDescriptor(AccType_ATOMICRW);
    FaultRecord fault = fault_in;
    boolean iswrite;

    // All observers in the shareability domain observe the
    // following memory read and write accesses atomically.
    (memstatus, mem_desc) = PhysMemRead(descupdateaddress, 8, descupdateaccess);
    if ee == '1' then
        mem_desc = BigEndianReverse(mem_desc);
    if IsFault(memstatus) then
        iswrite = FALSE;
        fault = HandleExternalTTWAbort(memstatus, iswrite, descupdateaddress,
                                       descupdateaccess, 8, fault);
        if IsFault(fault.statuscode) then
            fault.acctype = AccType_ATOMICRW;
            return (fault, bits(64) UNKNOWN);

    if mem_desc == prev_desc then
        ordered_new_desc = if ee == '1' then BigEndianReverse(new_desc) else new_desc;
        memstatus = PhysMemWrite(descupdateaddress, 8, descupdateaccess, ordered_new_desc);
        if IsFault(memstatus) then
            iswrite = TRUE;
            fault = HandleExternalTTWAbort(memstatus, iswrite, descupdateaddress,
                                           descupdateaccess, 8, fault);
            fault.acctype = memstatus.acctype;
            if IsFault(fault.statuscode) then
                fault.acctype = AccType_ATOMICRW;
                return (fault, bits(64) UNKNOWN);
        // Reflect what is now in memory (in little endian format)
        mem_desc = new_desc;
    return (fault, mem_desc);
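
Note: informally, this update behaves like a 64-bit compare-and-swap on the descriptor location, with a byte reversal when translation tables are big-endian (EE == 1). The C sketch below is a minimal model of that behaviour only, using GCC atomic builtins to stand in for the required single-copy atomicity; it is not the architecture's definition and omits the external-abort handling.

#include <stdbool.h>
#include <stdint.h>

static bool mem_swap_table_desc(uint64_t *desc_addr, uint64_t prev_desc,
                                uint64_t new_desc, bool big_endian)
{
    /* Memory holds the descriptor in table endianness, so the comparison
     * value and the value to store are reversed when EE == 1. */
    uint64_t expected = big_endian ? __builtin_bswap64(prev_desc) : prev_desc;
    uint64_t desired  = big_endian ? __builtin_bswap64(new_desc)  : new_desc;
    /* All observers see the read and the conditional write as one atomic
     * operation; returns true if the descriptor was updated. */
    return __atomic_compare_exchange_n(desc_addr, &expected, desired, false,
                                       __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
}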

Library pseudocode for aarch64/translation/vmsa_translation/AArch64.S1DisabledOutput

// AArch64.S1DisabledOutput()
// ==========================
// Map the VA to IPA/PA and assign default memory attributes

(FaultRecord, AddressDescriptor) AArch64.S1DisabledOutput(FaultRecord fault_in, Regime regime,
                                                          SecurityState ss, bits(64) va,
                                                          AccType acctype, boolean aligned)
    walkparams = AArch64.GetS1TTWParams(regime, va);
    FaultRecord fault = fault_in;

    // No memory page is guarded when stage 1 address translation is disabled
    SetInGuardedPage(FALSE);

    // Output Address
    FullAddress oa;
    oa.address = va<51:0>;
    case ss of
        when SS_Secure    oa.paspace = PAS_Secure;
        when SS_NonSecure oa.paspace = PAS_NonSecure;

    MemoryAttributes memattrs;
    if regime == Regime_EL10 && EL2Enabled() && walkparams.dc == '1' then
        MemAttrHints default_cacheability;
        default_cacheability.attrs     = MemAttr_WB;
        default_cacheability.hints     = MemHint_RWA;
        default_cacheability.transient = FALSE;
        memattrs.memtype      = MemType_Normal;
        memattrs.outer        = default_cacheability;
        memattrs.inner        = default_cacheability;
        memattrs.shareability = Shareability_NSH;
        memattrs.tagged       = walkparams.dct == '1';
        memattrs.xs           = '0';
    elsif acctype == AccType_IFETCH then
        MemAttrHints i_cache_attr;
        if AArch64.S1ICacheEnabled(regime) then
            i_cache_attr.attrs     = MemAttr_WT;
            i_cache_attr.hints     = MemHint_RA;
            i_cache_attr.transient = FALSE;
        else
            i_cache_attr.attrs = MemAttr_NC;
        memattrs.memtype      = MemType_Normal;
        memattrs.outer        = i_cache_attr;
        memattrs.inner        = i_cache_attr;
        memattrs.shareability = Shareability_OSH;
        memattrs.tagged       = FALSE;
        memattrs.xs           = '1';
    else
        memattrs.memtype      = MemType_Device;
        memattrs.device       = DeviceType_nGnRnE;
        memattrs.shareability = Shareability_OSH;
        memattrs.xs           = '1';

    fault.level = 0;
    addrtop = AArch64.AddrTop(walkparams.tbid, acctype, walkparams.tbi);
    if !IsZero(va<addrtop:AArch64.PAMax()>) then
        fault.statuscode = Fault_AddressSize;
    elsif AArch64.S1HasAlignmentFault(acctype, aligned, walkparams.ntlsmd, memattrs) then
        fault.statuscode = Fault_Alignment;

    if fault.statuscode != Fault_None then
        return (fault, AddressDescriptor UNKNOWN);
    else
        ipa = CreateAddressDescriptor(va, oa, memattrs);
        return (fault, ipa);

Library pseudocode for aarch64/translation/vmsa_translation/AArch64.S1Translate

// AArch64.S1Translate()
// =====================
// Translate VA to IPA/PA depending on the regime

(FaultRecord, AddressDescriptor) AArch64.S1Translate(FaultRecord fault_in, Regime regime,
                                                     SecurityState ss, bits(64) va,
                                                     AccType acctype, boolean aligned_in,
                                                     boolean iswrite_in, boolean ispriv)
    FaultRecord fault = fault_in;
    boolean aligned = aligned_in;
    boolean iswrite = iswrite_in;

    // Prepare fault fields in case a fault is detected
    fault.secondstage = FALSE;
    fault.s2fs1walk   = FALSE;

    if !AArch64.S1Enabled(regime) then
        return AArch64.S1DisabledOutput(fault, regime, ss, va, acctype, aligned);

    walkparams = AArch64.GetS1TTWParams(regime, va);
    if (AArch64.S1InvalidTxSZ(walkparams) ||
          (!ispriv && walkparams.e0pd == '1') ||
          AArch64.VAIsOutOfRange(va, acctype, regime, walkparams)) then
        fault.statuscode = Fault_Translation;
        fault.level = 0;
        return (fault, AddressDescriptor UNKNOWN);

    AddressDescriptor descaddress;
    TTWState walkstate;
    bits(64) descriptor;
    bits(64) new_desc;
    bits(64) mem_desc;
    repeat
        (fault, descaddress, walkstate, descriptor) = AArch64.S1Walk(fault, walkparams, va,
                                                                     regime, ss, acctype,
                                                                     iswrite, ispriv);
        if fault.statuscode != Fault_None then
            return (fault, AddressDescriptor UNKNOWN);

        if acctype == AccType_IFETCH then
            // Flag the fetched instruction is from a guarded page
            SetInGuardedPage(walkstate.guardedpage == '1');

        if AArch64.S1HasAlignmentFault(acctype, aligned, walkparams.ntlsmd,
                                       walkstate.memattrs) then
            fault.statuscode = Fault_Alignment;
        elsif IsAtomicRW(acctype) then
            if AArch64.S1HasPermissionsFault(regime, ss, walkstate, walkparams,
                                             ispriv, acctype, FALSE) then
                // The permission fault was not caused by lack of write permissions
                fault.statuscode = Fault_Permission;
                fault.write = FALSE;
            elsif AArch64.S1HasPermissionsFault(regime, ss, walkstate, walkparams,
                                                ispriv, acctype, TRUE) then
                // The permission fault _was_ caused by lack of write permissions
                fault.statuscode = Fault_Permission;
                fault.write = TRUE;
        elsif AArch64.S1HasPermissionsFault(regime, ss, walkstate, walkparams,
                                            ispriv, acctype, iswrite) then
            fault.statuscode = Fault_Permission;

        new_desc = descriptor;
        if walkparams.ha == '1' && AArch64.FaultAllowsSetAccessFlag(fault) then
            // Set descriptor AF bit
            new_desc<10> = '1';
        // If HW update of dirty bit is enabled, the walk state permissions
        // will already reflect a configuration permitting writes.
        // The update of the descriptor occurs only if the descriptor bits in
        // memory do not reflect that and the access instigates a write.
        if (fault.statuscode == Fault_None && walkparams.ha == '1' && walkparams.hd == '1' &&
              descriptor<51> == '1' &&    // Descriptor DBM bit
              (IsAtomicRW(acctype) || iswrite) &&
              !(acctype IN {AccType_AT, AccType_ATPAN, AccType_IC, AccType_DC})) then
            // Clear descriptor AP[2] bit permitting stage 1 writes
            new_desc<7> = '0';

        AddressDescriptor descupdateaddress;
        FaultRecord s2fault;
        // Either the access flag was clear or AP<2> is set
        if new_desc != descriptor then
            if regime == Regime_EL10 && EL2Enabled() then
                s1aarch64 = TRUE;
                s2fs1walk = TRUE;
                aligned   = TRUE;
                iswrite   = TRUE;
                (s2fault, descupdateaddress) = AArch64.S2Translate(fault, descaddress,
                                                                   s1aarch64, ss, s2fs1walk,
                                                                   AccType_ATOMICRW, aligned,
                                                                   iswrite, ispriv);
                if s2fault.statuscode != Fault_None then
                    return (s2fault, AddressDescriptor UNKNOWN);
            else
                descupdateaddress = descaddress;
            (fault, mem_desc) = AArch64.MemSwapTableDesc(fault, descriptor, new_desc,
                                                         walkparams.ee, descupdateaddress);
    until new_desc == descriptor || mem_desc == new_desc;

    if fault.statuscode != Fault_None then
        return (fault, AddressDescriptor UNKNOWN);

    // Output Address
    oa = StageOA(va, walkparams.tgx, walkstate);
    MemoryAttributes memattrs;
    if (acctype == AccType_IFETCH &&
          (walkstate.memattrs.memtype == MemType_Device || !AArch64.S1ICacheEnabled(regime))) then
        // Treat memory attributes as Normal Non-Cacheable
        memattrs = NormalNCMemAttr();
        memattrs.xs = walkstate.memattrs.xs;
    elsif (acctype != AccType_IFETCH && !AArch64.S1DCacheEnabled(regime) &&
             walkstate.memattrs.memtype == MemType_Normal) then
        // Treat memory attributes as Normal Non-Cacheable
        memattrs = NormalNCMemAttr();
        memattrs.xs = walkstate.memattrs.xs;
        // The effect of SCTLR_ELx.C when '0' is Constrained UNPREDICTABLE
        // on the Tagged attribute
        if HaveMTE2Ext() && walkstate.memattrs.tagged then
            memattrs.tagged = ConstrainUnpredictableBool(Unpredictable_S1CTAGGED);
    else
        memattrs = walkstate.memattrs;

    // Shareability value of stage 1 translation subject to stage 2 is IMPLEMENTATION DEFINED
    // to be either effective value or descriptor value
    if (regime == Regime_EL10 && EL2Enabled() && HCR_EL2.VM == '1' &&
          !(boolean IMPLEMENTATION_DEFINED "Apply effective shareability at stage 1")) then
        memattrs.shareability = walkstate.memattrs.shareability;
    else
        memattrs.shareability = EffectiveShareability(memattrs);

    if acctype == AccType_ATOMICLS64 && memattrs.memtype == MemType_Normal then
        if memattrs.inner.attrs != MemAttr_NC || memattrs.outer.attrs != MemAttr_NC then
            fault.statuscode = Fault_Exclusive;
            return (fault, AddressDescriptor UNKNOWN);

    ipa = CreateAddressDescriptor(va, oa, memattrs);
    return (fault, ipa);
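
Note: as an informal aid only (not architecture pseudocode), the hardware Access Flag and dirty-state update above can be summarised as computing a candidate descriptor from the fetched one. The C sketch below models just that bit manipulation, using the bit positions named in the pseudocode (AF is bit 10, AP[2] is bit 7, DBM is bit 51); the fault_free flag stands in for AArch64.FaultAllowsSetAccessFlag() and the access-type exclusions.

#include <stdbool.h>
#include <stdint.h>

static uint64_t s1_hw_update(uint64_t desc, bool ha, bool hd,
                             bool write_access, bool fault_free)
{
    uint64_t new_desc = desc;
    if (ha && fault_free)
        new_desc |= (1ULL << 10);       /* set Access Flag (AF)                 */
    if (fault_free && ha && hd &&
        (desc & (1ULL << 51)) &&        /* DBM set: HW dirty update permitted   */
        write_access)
        new_desc &= ~(1ULL << 7);       /* clear AP[2]: make writable (dirty)   */
    return new_desc;                    /* differs from desc => atomic swap     */
}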

Library pseudocode for aarch64/translation/vmsa_translation/AArch64.S2Translate

// AArch64.S2Translate()
// =====================
// Translate stage 1 IPA to PA and combine memory attributes

(FaultRecord, AddressDescriptor) AArch64.S2Translate(FaultRecord fault_in, AddressDescriptor ipa,
                                                     boolean s1aarch64, SecurityState ss,
                                                     boolean s2fs1walk, AccType acctype,
                                                     boolean aligned, boolean iswrite,
                                                     boolean ispriv)
    walkparams = AArch64.GetS2TTWParams(ss, ipa.paddress.paspace, s1aarch64);
    FaultRecord fault = fault_in;

    // Prepare fault fields in case a fault is detected
    fault.statuscode  = Fault_None;    // Ignore any faults from stage 1
    fault.secondstage = TRUE;
    fault.s2fs1walk   = s2fs1walk;
    fault.ipaddress   = ipa.paddress;

    if walkparams.vm != '1' then
        // Stage 2 translation is disabled
        return (fault, ipa);

    if (AArch64.S2InvalidTxSZ(walkparams, s1aarch64) ||
          AArch64.S2InvalidSL(walkparams) ||
          AArch64.S2InconsistentSL(walkparams) ||
          AArch64.IPAIsOutOfRange(ipa.paddress.address, walkparams)) then
        fault.statuscode = Fault_Translation;
        fault.level = 0;
        return (fault, AddressDescriptor UNKNOWN);

    AddressDescriptor descaddress;
    TTWState walkstate;
    bits(64) descriptor;
    bits(64) new_desc;
    bits(64) mem_desc;
    repeat
        (fault, descaddress, walkstate, descriptor) = AArch64.S2Walk(fault, ipa, walkparams,
                                                                     ss, acctype, iswrite,
                                                                     s1aarch64);
        if fault.statuscode != Fault_None then
            return (fault, AddressDescriptor UNKNOWN);

        if AArch64.S2HasAlignmentFault(acctype, aligned, walkstate.memattrs) then
            fault.statuscode = Fault_Alignment;
        elsif IsAtomicRW(acctype) then
            if AArch64.S2HasPermissionsFault(s2fs1walk, walkstate, ss, walkparams,
                                             ispriv, acctype, FALSE) then
                // The permission fault was not caused by lack of write permissions
                fault.statuscode = Fault_Permission;
                fault.write = FALSE;
            elsif AArch64.S2HasPermissionsFault(s2fs1walk, walkstate, ss, walkparams,
                                                ispriv, acctype, TRUE) then
                // The permission fault _was_ caused by lack of write permissions.
                // However, HW updates, which are atomic writes for stage 1
                // descriptors, permissions fault reflect the original access.
                fault.statuscode = Fault_Permission;
                if !fault.s2fs1walk then
                    fault.write = TRUE;
        elsif AArch64.S2HasPermissionsFault(s2fs1walk, walkstate, ss, walkparams,
                                            ispriv, acctype, iswrite) then
            fault.statuscode = Fault_Permission;

        new_desc = descriptor;
        if walkparams.ha == '1' && AArch64.FaultAllowsSetAccessFlag(fault) then
            // Set descriptor AF bit
            new_desc<10> = '1';
        // If HW update of dirty bit is enabled, the walk state permissions
        // will already reflect a configuration permitting writes.
        // The update of the descriptor occurs only if the descriptor bits in
        // memory do not reflect that and the access instigates a write.
        if (fault.statuscode == Fault_None && walkparams.ha == '1' && walkparams.hd == '1' &&
              descriptor<51> == '1' &&    // Descriptor DBM bit
              (IsAtomicRW(acctype) || iswrite) &&
              !(acctype IN {AccType_AT, AccType_ATPAN, AccType_IC, AccType_DC})) then
            // Set descriptor S2AP[1] bit permitting stage 2 writes
            new_desc<7> = '1';

        // Either the access flag was clear or S2AP<1> is clear
        if new_desc != descriptor then
            (fault, mem_desc) = AArch64.MemSwapTableDesc(fault, descriptor, new_desc,
                                                         walkparams.ee, descaddress);
    until new_desc == descriptor || mem_desc == new_desc;

    if fault.statuscode != Fault_None then
        return (fault, AddressDescriptor UNKNOWN);

    ipa_64 = ZeroExtend(ipa.paddress.address, 64);
    // Output Address
    oa = StageOA(ipa_64, walkparams.tgx, walkstate);
    MemoryAttributes s2_memattrs;
    if ((s2fs1walk && walkstate.memattrs.memtype == MemType_Device && walkparams.ptw == '0') ||
          (acctype == AccType_IFETCH &&
            (walkstate.memattrs.memtype == MemType_Device || HCR_EL2.ID == '1')) ||
          (acctype != AccType_IFETCH &&
            walkstate.memattrs.memtype == MemType_Normal && HCR_EL2.CD == '1')) then
        // Treat memory attributes as Normal Non-Cacheable
        s2_memattrs = NormalNCMemAttr();
        s2_memattrs.xs = walkstate.memattrs.xs;
    else
        s2_memattrs = walkstate.memattrs;

    if !s2fs1walk && acctype == AccType_ATOMICLS64 && s2_memattrs.memtype == MemType_Normal then
        if s2_memattrs.inner.attrs != MemAttr_NC || s2_memattrs.outer.attrs != MemAttr_NC then
            fault.statuscode = Fault_Exclusive;
            return (fault, AddressDescriptor UNKNOWN);

    MemoryAttributes memattrs;
    if walkparams.fwb == '0' then
        memattrs = S2CombineS1MemAttrs(ipa.memattrs, s2_memattrs);
    else
        memattrs = s2_memattrs;

    pa = CreateAddressDescriptor(ipa.vaddress, oa, memattrs);
    return (fault, pa);
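
Note: informally (not architecture pseudocode), the stage 2 hardware update differs from the stage 1 sketch shown earlier only in how writability is granted: stage 2 sets S2AP[1] (bit 7) rather than clearing AP[2]. A matching C model under the same assumptions:

#include <stdbool.h>
#include <stdint.h>

static uint64_t s2_hw_update(uint64_t desc, bool ha, bool hd,
                             bool write_access, bool fault_free)
{
    uint64_t new_desc = desc;
    if (ha && fault_free)
        new_desc |= (1ULL << 10);       /* set Access Flag (AF)               */
    if (fault_free && ha && hd &&
        (desc & (1ULL << 51)) &&        /* DBM set                            */
        write_access)
        new_desc |= (1ULL << 7);        /* set S2AP[1]: permit stage 2 writes */
    return new_desc;
}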

Library pseudocode for aarch64/translation/vmsa_translation/AArch64.TranslateAddress

// AArch64.TranslateAddress()
// ==========================
// Main entry point for translating an address

AddressDescriptor AArch64.TranslateAddress(bits(64) va, AccType acctype, boolean iswrite,
                                           boolean aligned, integer size)
    result = AArch64.FullTranslate(va, acctype, iswrite, aligned);
    if !IsFault(result) then
        result.fault = AArch64.CheckDebug(va, acctype, iswrite, size);

    // Update virtual address for abort functions
    result.vaddress = ZeroExtend(va);
    return result;

Library pseudocode for aarch64/translation/vmsa_ttentry/AArch64.BlockDescSupported

// AArch64.BlockDescSupported()
// ============================
// Determine whether a block descriptor is valid for the given granule size
// and level

boolean AArch64.BlockDescSupported(bit ds, TGx tgx, integer level)
    case tgx of
        when TGx_4KB  return level == 2 || level == 1 || (level == 0 && ds == '1');
        when TGx_16KB return level == 2 || (level == 1 && ds == '1');
        when TGx_64KB return level == 2 || (level == 1 && AArch64.PAMax() == 52);
    return FALSE;
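
Note: as an informal illustration only, the C sketch below mirrors this decode and annotates each case with the standard VMSAv8-64 block size it corresponds to; the pa52 flag stands in for the AArch64.PAMax() == 52 test.

#include <stdbool.h>

typedef enum { TG_4KB, TG_16KB, TG_64KB } Granule;

static bool block_supported(Granule tg, int level, bool ds, bool pa52)
{
    switch (tg) {
    case TG_4KB:  return level == 2 || level == 1 || (level == 0 && ds); /* 2MB, 1GB, 512GB */
    case TG_16KB: return level == 2 || (level == 1 && ds);               /* 32MB, 64GB      */
    case TG_64KB: return level == 2 || (level == 1 && pa52);             /* 512MB, 4TB      */
    }
    return false;
}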

Library pseudocode for aarch64/translation/vmsa_ttentry/AArch64.BlocknTFaults

// AArch64.BlocknTFaults()
// =======================
// Identify whether the nT bit in a block descriptor is effectively set
// causing a translation fault

boolean AArch64.BlocknTFaults(bits(64) descriptor)
    if !HaveBlockBBM() then
        return FALSE;

    bbm_level = AArch64.BlockBBMSupportLevel();
    nT_faults = boolean IMPLEMENTATION_DEFINED
                    "BBM level 1 or 2 support nT bit causes Translation Fault";

    return bbm_level IN {1, 2} && descriptor<16> == '1' && nT_faults;

Library pseudocode for aarch64/translation/vmsa_ttentry/AArch64.ContiguousBit

// AArch64.ContiguousBit()
// =======================
// Get the value of the contiguous bit

bit AArch64.ContiguousBit(TGx tgx, integer level, bits(64) descriptor)
    if tgx == TGx_64KB && level == 1 && !Have52BitVAExt() then
        return '0';    // RES0
    if tgx == TGx_16KB && level == 1 then
        return '0';    // RES0
    if tgx == TGx_4KB && level == 0 then
        return '0';    // RES0
    return descriptor<52>;

Library pseudocode for aarch64/translation/vmsa_ttentry/AArch64.DecodeDescriptorType

// AArch64.DecodeDescriptorType()
// ==============================
// Determine whether the descriptor is a page, block or table

DescriptorType AArch64.DecodeDescriptorType(bits(64) descriptor, bit ds, TGx tgx,
                                            integer level)
    if descriptor<1:0> == '11' && level == FINAL_LEVEL then
        return DescriptorType_Page;
    elsif descriptor<1:0> == '11' then
        return DescriptorType_Table;
    elsif descriptor<1:0> == '01' then
        if AArch64.BlockDescSupported(ds, tgx, level) then
            return DescriptorType_Block;
        else
            return DescriptorType_Invalid;
    else
        return DescriptorType_Invalid;
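
Note: informally, the whole decision rests on the low two descriptor bits. The C sketch below is an illustrative mirror of the decode (not the spec itself); block_ok stands in for the AArch64.BlockDescSupported() result.

typedef enum { DT_Invalid, DT_Table, DT_Block, DT_Page } DescType;

static DescType decode_desc(unsigned long long desc, int level, int final_level,
                            int block_ok)
{
    switch (desc & 3) {
    case 3:  return (level == final_level) ? DT_Page : DT_Table;
    case 1:  return block_ok ? DT_Block : DT_Invalid;
    default: return DT_Invalid;   /* '00' and '10' are invalid encodings */
    }
}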

Library pseudocode for aarch64/translation/vmsa_ttentry/AArch64.S1ApplyOutputPerms

// AArch64.S1ApplyOutputPerms()
// ============================
// Apply output permissions encoded in stage 1 page/block descriptors

Permissions AArch64.S1ApplyOutputPerms(Permissions permissions_in, bits(64) descriptor,
                                       Regime regime, S1TTWParams walkparams)
    Permissions permissions = permissions_in;
    if regime == Regime_EL10 && EL2Enabled() && walkparams.nv1 == '1' then
        permissions.ap<2:1> = descriptor<7>:'0';
        permissions.pxn     = descriptor<54>;
    elsif HasUnprivileged(regime) then
        permissions.ap<2:1> = descriptor<7:6>;
        permissions.uxn     = descriptor<54>;
        permissions.pxn     = descriptor<53>;
    else
        permissions.ap<2:1> = descriptor<7>:'1';
        permissions.xn      = descriptor<54>;

    // Descriptors marked with DBM set have the effective value of AP[2] cleared.
    // This implies no permission faults caused by lack of write permissions are
    // reported, and the Dirty bit can be set.
    if walkparams.ha == '1' && walkparams.hd == '1' && descriptor<51> == '1' then
        permissions.ap<2> = '0';
    return permissions;

Library pseudocode for aarch64/translation/vmsa_ttentry/AArch64.S1ApplyTablePerms

// AArch64.S1ApplyTablePerms()
// ===========================
// Apply hierarchical permissions encoded in stage 1 table descriptors

Permissions AArch64.S1ApplyTablePerms(Permissions permissions_in, bits(64) descriptor,
                                      Regime regime, S1TTWParams walkparams)
    Permissions permissions = permissions_in;
    bits(2) ap_table;
    bit pxn_table;
    bit uxn_table;
    bit xn_table;
    if regime == Regime_EL10 && EL2Enabled() && walkparams.nv1 == '1' then
        ap_table  = descriptor<62>:'0';
        pxn_table = descriptor<60>;
        permissions.ap_table  = permissions.ap_table  OR ap_table;
        permissions.pxn_table = permissions.pxn_table OR pxn_table;
    elsif HasUnprivileged(regime) then
        ap_table  = descriptor<62:61>;
        uxn_table = descriptor<60>;
        pxn_table = descriptor<59>;
        permissions.ap_table  = permissions.ap_table  OR ap_table;
        permissions.uxn_table = permissions.uxn_table OR uxn_table;
        permissions.pxn_table = permissions.pxn_table OR pxn_table;
    else
        ap_table = descriptor<62>:'0';
        xn_table = descriptor<60>;
        permissions.ap_table = permissions.ap_table OR ap_table;
        permissions.xn_table = permissions.xn_table OR xn_table;
    return permissions;
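
Note: informally, hierarchical permissions are "sticky": each table level ORs its bits into the accumulated state, so a restriction imposed at any level cannot be relaxed further down the walk. The C sketch below is an illustrative model of the two-privilege (HasUnprivileged) case only, with the bit positions taken from the pseudocode above.

#include <stdint.h>

typedef struct { uint8_t ap_table, uxn_table, pxn_table; } TablePerms;

static TablePerms apply_table_perms(TablePerms acc, uint64_t desc)
{
    acc.ap_table  |= (uint8_t)((desc >> 61) & 3);  /* APTable  = descriptor<62:61> */
    acc.uxn_table |= (uint8_t)((desc >> 60) & 1);  /* UXNTable = descriptor<60>    */
    acc.pxn_table |= (uint8_t)((desc >> 59) & 1);  /* PXNTable = descriptor<59>    */
    return acc;
}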

Library pseudocode for aarch64/translation/vmsa_ttentry/AArch64.S2ApplyOutputPerms

// AArch64.S1InitialTTWState() // =========================== // Set properties of first access to translation tables in stage 1 // AArch64.S2ApplyOutputPerms() // ============================ // Apply output permissions encoded in stage 2 page/block descriptors TTWStatePermissions AArch64.S1InitialTTWState(AArch64.S2ApplyOutputPerms(bits(64) descriptor,S1TTWParamsS2TTWParams walkparams, bits(64) va,walkparams) Regime regime, SecurityState ss) TTWState walkstate; FullAddress tablebase; Permissions permissions; startlevel = permissions.s2ap = descriptor<7:6>; permissions.s2xn = descriptor<54>; if AArch64.S1StartLevelHaveExtendedExecuteNeverExt(walkparams); ttbr = AArch64.S1TTBR(regime, va); case ss of when SS_Secure tablebase.paspace = PAS_Secure; when SS_NonSecure tablebase.paspace = PAS_NonSecure; tablebase.address = AArch64.TTBaseAddress(ttbr, walkparams.txsz, walkparams.ps, walkparams.ds, walkparams.tgx, startlevel); permissions.ap_table = Zeros(); if HasUnprivileged(regime) then permissions.uxn_table = Zeros(); permissions.pxn_table = Zeros(); else permissions.xn_table = Zeros(); walkstate.baseaddress = tablebase; walkstate.level = startlevel; walkstate.istable = TRUE; // In regimes that support global and non-global translations, translation // table entries from lookup levels other than the final level of lookup // are treated as being non-global walkstate.nG = if HasUnprivileged(regime) then '1' else '0'; walkstate.memattrs = WalkMemAttrs(walkparams.sh, walkparams.irgn, walkparams.orgn); walkstate.permissions = permissions; () then permissions.s2xnx = descriptor<53>; else permissions.s2xnx = '0'; return walkstate; // Descriptors marked with DBM set have the effective value of S2AP[1] set. // This implies no permission faults caused by lack of write permissions are // reported, and the Dirty bit can be set. if walkparams.ha == '1' && walkparams.hd == '1' && descriptor<51> == '1' then permissions.s2ap<1> = '1'; return permissions;

Library pseudocode for aarch64/translation/vmsa_walk/AArch64.S1InitialTTWState

// AArch64.S1NextWalkStateLast() // ============================= // Decode stage 1 page or block descriptor as output to this stage of translation // AArch64.S1InitialTTWState() // =========================== // Set properties of first access to translation tables in stage 1 TTWState AArch64.S1NextWalkStateLast(AArch64.S1InitialTTWState(TTWStateS1TTWParams currentstate,walkparams, bits(64) va, Regime regime, SecurityState ss,ss) S1TTWParams walkparams, bits(64) descriptor) TTWState nextstate;walkstate; FullAddress baseaddress; if currentstate.level ==tablebase; FINAL_LEVELPermissions then baseaddress.address =permissions; startlevel = AArch64.PageBaseAArch64.S1StartLevel(descriptor, walkparams.ds, walkparams.tgx); else baseaddress.address =(walkparams); ttbr = AArch64.BlockBaseAArch64.S1TTBR(descriptor, walkparams.ds, walkparams.tgx, currentstate.level); if currentstate.baseaddress.paspace ==(regime, va); case ss of when PAS_SecureSS_Secure then // Determine PA space of the block from NS bit baseaddress.paspace = if descriptor<5> == '0' thentablebase.paspace = PAS_Secure else; when PAS_NonSecureSS_NonSecure; else baseaddress.paspace =tablebase.paspace = PAS_NonSecure; nextstate.istable = FALSE; nextstate.level = currentstate.level; nextstate.baseaddress = baseaddress; attrindx = descriptor<4:2>; sh = if walkparams.ds == '1' then walkparams.sh else descriptor<9:8>; attr = tablebase.address = MAIRAttrAArch64.TTBaseAddress((ttbr, walkparams.txsz, walkparams.ps, walkparams.ds, walkparams.tgx, startlevel); permissions.ap_table =UIntZeros(attrindx), walkparams.mair); s1aarch64 = TRUE; nextstate.memattrs =(); if S1DecodeMemAttrsHasUnprivileged(attr, sh, s1aarch64); nextstate.permissions =(regime) then permissions.uxn_table = AArch64.S1ApplyOutputPermsZeros(currentstate.permissions, descriptor, regime, walkparams); nextstate.contiguous =(); permissions.pxn_table = AArch64.ContiguousBitZeros(walkparams.tgx, currentstate.level, descriptor); if !(); else permissions.xn_table =Zeros(); walkstate.baseaddress = tablebase; walkstate.level = startlevel; walkstate.istable = TRUE; // In regimes that support global and non-global translations, translation // table entries from lookup levels other than the final level of lookup // are treated as being non-global walkstate.nG = if HasUnprivileged(regime) then nextstate.nG = '0'; elsif ss ==(regime) then '1' else '0'; walkstate.memattrs = SS_SecureWalkMemAttrs && currentstate.baseaddress.paspace == PAS_NonSecure then // In Secure state, a translation must be treated as non-global, // regardless of the value of the nG bit, // if NSTable is set to 1 at any level of the translation table walk nextstate.nG = '1'; else nextstate.nG = descriptor<11>; (walkparams.sh, walkparams.irgn, walkparams.orgn); walkstate.permissions = permissions; nextstate.guardedpage = descriptor<50>; return nextstate; return walkstate;

Library pseudocode for aarch64/translation/vmsa_walk/AArch64.S1NextWalkStateLast

// AArch64.S1NextWalkStateLast()
// =============================
// Decode stage 1 page or block descriptor as output to this stage of translation

TTWState AArch64.S1NextWalkStateLast(TTWState currentstate, Regime regime, SecurityState ss,
                                     S1TTWParams walkparams, bits(64) descriptor)
    TTWState nextstate;
    FullAddress baseaddress;

    if currentstate.level == FINAL_LEVEL then
        baseaddress.address = AArch64.PageBase(descriptor, walkparams.ds, walkparams.tgx);
    else
        baseaddress.address = AArch64.BlockBase(descriptor, walkparams.ds, walkparams.tgx,
                                                currentstate.level);

    if currentstate.baseaddress.paspace == PAS_Secure then
        // Determine PA space of the block from NS bit
        baseaddress.paspace = if descriptor<5> == '0' then PAS_Secure else PAS_NonSecure;
    else
        baseaddress.paspace = PAS_NonSecure;

    nextstate.istable     = FALSE;
    nextstate.level       = currentstate.level;
    nextstate.baseaddress = baseaddress;

    attrindx = descriptor<4:2>;
    sh = if walkparams.ds == '1' then walkparams.sh else descriptor<9:8>;
    attr = MAIRAttr(UInt(attrindx), walkparams.mair);
    s1aarch64 = TRUE;
    nextstate.memattrs    = S1DecodeMemAttrs(attr, sh, s1aarch64);
    nextstate.permissions = AArch64.S1ApplyOutputPerms(currentstate.permissions, descriptor,
                                                       regime, walkparams);
    nextstate.contiguous  = AArch64.ContiguousBit(walkparams.tgx, currentstate.level,
                                                  descriptor);

    if !HasUnprivileged(regime) then
        nextstate.nG = '0';
    elsif ss == SS_Secure && currentstate.baseaddress.paspace == PAS_NonSecure then
        // In Secure state, a translation must be treated as non-global,
        // regardless of the value of the nG bit,
        // if NSTable is set to 1 at any level of the translation table walk
        nextstate.nG = '1';
    else
        nextstate.nG = descriptor<11>;

    nextstate.guardedpage = descriptor<50>;

    return nextstate;

Library pseudocode for aarch64/translation/vmsa_walk/AArch64.S1NextWalkStateTable

// AArch64.S1NextWalkStateTable()
// ==============================
// Decode stage 1 table descriptor to transition to the next level

TTWState AArch64.S1NextWalkStateTable(TTWState currentstate, Regime regime,
                                      S1TTWParams walkparams, bits(64) descriptor)
    TTWState nextstate;
    FullAddress tablebase;

    tablebase.address = AArch64.NextTableBase(descriptor, walkparams.ds, walkparams.tgx);
    if currentstate.baseaddress.paspace == PAS_Secure then
        // Determine PA space of the next table from NSTable bit
        tablebase.paspace = if descriptor<63> == '0' then PAS_Secure else PAS_NonSecure;
    else
        // Otherwise bit 63 is RES0 and there is no NSTable bit
        tablebase.paspace = currentstate.baseaddress.paspace;

    nextstate.istable     = TRUE;
    nextstate.nG          = currentstate.nG;
    nextstate.level       = currentstate.level + 1;
    nextstate.baseaddress = tablebase;
    nextstate.memattrs    = currentstate.memattrs;

    if walkparams.hpd == '0' then
        nextstate.permissions = AArch64.S1ApplyTablePerms(currentstate.permissions,
                                                          descriptor, regime, walkparams);
    else
        nextstate.permissions = currentstate.permissions;

    return nextstate;

Library pseudocode for aarch64/translation/vmsa_walk/AArch64.S1Walk

// AArch64.S1Walk()
// ================
// Traverse stage 1 translation tables obtaining the final descriptor
// as well as the address leading to that descriptor

(FaultRecord, AddressDescriptor, TTWState, bits(64)) AArch64.S1Walk(
        FaultRecord fault_in, S1TTWParams walkparams, bits(64) va, Regime regime,
        SecurityState ss, AccType acctype, boolean iswrite_in, boolean ispriv)
    FaultRecord fault = fault_in;
    boolean iswrite = iswrite_in;

    if HasUnprivileged(regime) && AArch64.S1EPD(regime, va) == '1' then
        fault.statuscode = Fault_Translation;
        fault.level = 0;
        return (fault, AddressDescriptor UNKNOWN, TTWState UNKNOWN, bits(64) UNKNOWN);

    walkstate = AArch64.S1InitialTTWState(walkparams, va, regime, ss);

    // Detect Address Size Fault by TTB
    if AArch64.OAOutOfRange(walkstate, walkparams.ps, walkparams.tgx, va) then
        fault.statuscode = Fault_AddressSize;
        fault.level = 0;
        return (fault, AddressDescriptor UNKNOWN, TTWState UNKNOWN, bits(64) UNKNOWN);

    bits(64) descriptor;
    AddressDescriptor walkaddress;
    walkaddress.vaddress = va;
    if !AArch64.S1DCacheEnabled(regime) then
        walkaddress.memattrs = NormalNCMemAttr();
        walkaddress.memattrs.xs = walkstate.memattrs.xs;
    else
        walkaddress.memattrs = walkstate.memattrs;

    // Shareability value of stage 1 translation subject to stage 2 is IMPLEMENTATION DEFINED
    // to be either effective value or descriptor value
    if (regime == Regime_EL10 && EL2Enabled() && HCR_EL2.VM == '1' &&
          !(boolean IMPLEMENTATION_DEFINED "Apply effective shareability at stage 1")) then
        walkaddress.memattrs.shareability = walkstate.memattrs.shareability;
    else
        walkaddress.memattrs.shareability = EffectiveShareability(walkaddress.memattrs);

    DescriptorType desctype;
    repeat
        fault.level = walkstate.level;
        FullAddress descaddress = AArch64.TTEntryAddress(walkstate.level, walkparams.tgx,
                                                         walkparams.txsz, va,
                                                         walkstate.baseaddress);
        walkaddress.paddress = descaddress;
        if regime == Regime_EL10 && EL2Enabled() then
            s1aarch64 = TRUE;
            s2fs1walk = TRUE;
            aligned   = TRUE;
            iswrite   = FALSE;
            (s2fault, s2walkaddress) = AArch64.S2Translate(fault, walkaddress, s1aarch64, ss,
                                                           s2fs1walk, AccType_TTW, aligned,
                                                           iswrite, ispriv);
            if s2fault.statuscode != Fault_None then
                return (s2fault, AddressDescriptor UNKNOWN, TTWState UNKNOWN, bits(64) UNKNOWN);
            (fault, descriptor) = FetchDescriptor(walkparams.ee, s2walkaddress, fault);
        else
            (fault, descriptor) = FetchDescriptor(walkparams.ee, walkaddress, fault);

        if fault.statuscode != Fault_None then
            return (fault, AddressDescriptor UNKNOWN, TTWState UNKNOWN, bits(64) UNKNOWN);

        desctype = AArch64.DecodeDescriptorType(descriptor, walkparams.ds, walkparams.tgx,
                                                walkstate.level);
        case desctype of
            when DescriptorType_Table
                walkstate = AArch64.S1NextWalkStateTable(walkstate, regime, walkparams,
                                                         descriptor);
                // Detect Address Size Fault by table descriptor
                if AArch64.OAOutOfRange(walkstate, walkparams.ps, walkparams.tgx, va) then
                    fault.statuscode = Fault_AddressSize;
                    return (fault, AddressDescriptor UNKNOWN, TTWState UNKNOWN,
                            bits(64) UNKNOWN);
            when DescriptorType_Page, DescriptorType_Block
                walkstate = AArch64.S1NextWalkStateLast(walkstate, regime, ss, walkparams,
                                                        descriptor);
            when DescriptorType_Invalid
                fault.statuscode = Fault_Translation;
                return (fault, AddressDescriptor UNKNOWN, TTWState UNKNOWN, bits(64) UNKNOWN);
            otherwise
                Unreachable();
    until desctype IN {DescriptorType_Page, DescriptorType_Block};

    if (walkstate.contiguous == '1' &&
          AArch64.ContiguousBitFaults(walkparams.txsz, walkparams.tgx, walkstate.level)) then
        fault.statuscode = Fault_Translation;
    elsif desctype == DescriptorType_Block && AArch64.BlocknTFaults(descriptor) then
        fault.statuscode = Fault_Translation;
    // Detect Address Size Fault by final output
    elsif AArch64.OAOutOfRange(walkstate, walkparams.ps, walkparams.tgx, va) then
        fault.statuscode = Fault_AddressSize;
    // Check descriptor AF bit
    elsif (descriptor<10> == '0' && walkparams.ha == '0' &&
             !(acctype IN {AccType_DC, AccType_IC} &&
               !boolean IMPLEMENTATION_DEFINED "Generate access flag fault on IC/DC operations")) then
        fault.statuscode = Fault_AccessFlag;

    return (fault, walkaddress, walkstate, descriptor);
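
Note: as an informal aid only, each iteration of the walk computes the address of the next descriptor from the current table base and a slice of the VA. The C sketch below models AArch64.TTEntryAddress() for the common case of a 4KB granule with 64-bit descriptors, where each level consumes 9 bits of VA; it is an illustration under those assumptions, not the general function.

#include <stdint.h>

static uint64_t tt_entry_address_4k(uint64_t table_base, uint64_t va, int level)
{
    int shift = 12 + 9 * (3 - level);          /* level 3 -> 12, level 0 -> 39 */
    uint64_t index = (va >> shift) & 0x1FF;    /* 9-bit index, 512 entries     */
    return table_base + index * 8;             /* 8-byte (64-bit) descriptors  */
}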

Library pseudocode for aarch64/translation/vmsa_walk/AArch64.S2InitialTTWState

// AArch64.S2InitialTTWState()
// ===========================
// Set properties of first access to translation tables in stage 2

TTWState AArch64.S2InitialTTWState(SecurityState ss, S2TTWParams walkparams)
    TTWState walkstate;
    FullAddress tablebase;

    ttbr = VTTBR_EL2;
    startlevel = AArch64.S2StartLevel(walkparams);
    tablebase.paspace = PAS_NonSecure;
    tablebase.address = AArch64.TTBaseAddress(ttbr, walkparams.txsz, walkparams.ps,
                                              walkparams.ds, walkparams.tgx, startlevel);

    walkstate.baseaddress = tablebase;
    walkstate.level       = startlevel;
    walkstate.istable     = TRUE;
    walkstate.memattrs    = WalkMemAttrs(walkparams.sh, walkparams.irgn, walkparams.orgn);

    return walkstate;

Library pseudocode for aarch64/translation/vmsa_walk/AArch64.S2NextWalkStateLast

// AArch64.S2NextWalkStateLast()
// =============================
// Decode stage 2 page or block descriptor as output to this stage of translation

TTWState AArch64.S2NextWalkStateLast(TTWState currentstate, SecurityState ss,
                                     S2TTWParams walkparams, AddressDescriptor ipa,
                                     bits(64) descriptor)
    TTWState nextstate;
    FullAddress baseaddress;

    if ss == SS_Secure then
        baseaddress.paspace = AArch64.SS2OutputPASpace(walkparams, ipa.paddress.paspace);
    else
        baseaddress.paspace = PAS_NonSecure;

    if currentstate.level == FINAL_LEVEL then
        baseaddress.address = AArch64.PageBase(descriptor, walkparams.ds, walkparams.tgx);
    else
        baseaddress.address = AArch64.BlockBase(descriptor, walkparams.ds, walkparams.tgx,
                                                currentstate.level);

    nextstate.istable = FALSE;
    nextstate.level = currentstate.level;
    nextstate.baseaddress = baseaddress;
    nextstate.permissions = AArch64.S2ApplyOutputPerms(descriptor, walkparams);

    s2_attr = descriptor<5:2>;
    s2_sh = if walkparams.ds == '1' then walkparams.sh else descriptor<9:8>;
    s2_fnxs = descriptor<11>;
    if walkparams.fwb == '1' then
        nextstate.memattrs = AArch64.S2ApplyFWBMemAttrs(ipa.memattrs, s2_attr, s2_sh);
        if s2_attr<1:0> == '10' then    // Force writeback
            nextstate.memattrs.xs = '0';
        else
            nextstate.memattrs.xs = if s2_fnxs == '1' then '0' else ipa.memattrs.xs;
    else
        nextstate.memattrs = S2DecodeMemAttrs(s2_attr, s2_sh);
        nextstate.memattrs.xs = if s2_fnxs == '1' then '0' else ipa.memattrs.xs;

    nextstate.contiguous = AArch64.ContiguousBit(walkparams.tgx, currentstate.level, descriptor);

    return nextstate;

Library pseudocode for aarch64/translation/vmsa_walk/AArch64.S2NextWalkStateTable

// AArch64.S2NextWalkStateTable()
// ==============================
// Decode stage 2 table descriptor to transition to the next level

TTWState AArch64.S2NextWalkStateTable(TTWState currentstate, S2TTWParams walkparams,
                                      bits(64) descriptor)
    TTWState nextstate;
    FullAddress tablebase;

    tablebase.address = AArch64.NextTableBase(descriptor, walkparams.ds, walkparams.tgx);
    tablebase.paspace = currentstate.baseaddress.paspace;

    nextstate.istable = TRUE;
    nextstate.level = currentstate.level + 1;
    nextstate.baseaddress = tablebase;
    nextstate.memattrs = currentstate.memattrs;

    return nextstate;

Library pseudocode for aarch64/translation/vmsa_walk/AArch64.S2Walk

// AArch64.S2Walk()
// ================
// Traverse stage 2 translation tables obtaining the final descriptor
// as well as the address leading to that descriptor

(FaultRecord, AddressDescriptor, TTWState, bits(64)) AArch64.S2Walk(FaultRecord fault_in,
        AddressDescriptor ipa, S2TTWParams walkparams, SecurityState ss, AccType acctype,
        boolean iswrite, boolean s1aarch64)
    FaultRecord fault = fault_in;
    ipa_64 = ZeroExtend(ipa.paddress.address, 64);

    TTWState walkstate;
    if ss == SS_Secure then
        walkstate = AArch64.SS2InitialTTWState(walkparams, ipa.paddress.paspace);
    else
        walkstate = AArch64.S2InitialTTWState(ss, walkparams);

    // Detect Address Size Fault by TTB
    if AArch64.OAOutOfRange(walkstate, walkparams.ps, walkparams.tgx, ipa_64) then
        fault.statuscode = Fault_AddressSize;
        fault.level = 0;
        return (fault, AddressDescriptor UNKNOWN, TTWState UNKNOWN, bits(64) UNKNOWN);

    bits(64) descriptor;
    AddressDescriptor walkaddress;
    walkaddress.vaddress = ipa.vaddress;

    if HCR_EL2.CD == '1' then
        walkaddress.memattrs = NormalNCMemAttr();
        walkaddress.memattrs.xs = walkstate.memattrs.xs;
    else
        walkaddress.memattrs = walkstate.memattrs;
    walkaddress.memattrs.shareability = EffectiveShareability(walkaddress.memattrs);

    DescriptorType desctype;
    repeat
        fault.level = walkstate.level;
        FullAddress descaddress;
        if walkstate.level == AArch64.S2StartLevel(walkparams) then
            // Initial lookup might index into concatenated tables
            descaddress = AArch64.S2SLTTEntryAddress(walkparams, ipa.paddress.address,
                                                     walkstate.baseaddress);
        else
            ipa_64 = ZeroExtend(ipa.paddress.address, 64);
            descaddress = AArch64.TTEntryAddress(walkstate.level, walkparams.tgx,
                                                 walkparams.txsz, ipa_64,
                                                 walkstate.baseaddress);
        walkaddress.paddress = descaddress;
        (fault, descriptor) = FetchDescriptor(walkparams.ee, walkaddress, fault);
        if fault.statuscode != Fault_None then
            return (fault, AddressDescriptor UNKNOWN, TTWState UNKNOWN, bits(64) UNKNOWN);

        desctype = AArch64.DecodeDescriptorType(descriptor, walkparams.ds, walkparams.tgx,
                                                walkstate.level);
        case desctype of
            when DescriptorType_Table
                walkstate = AArch64.S2NextWalkStateTable(walkstate, walkparams, descriptor);
                // Detect Address Size Fault by table descriptor
                if AArch64.OAOutOfRange(walkstate, walkparams.ps, walkparams.tgx, ipa_64) then
                    fault.statuscode = Fault_AddressSize;
                    return (fault, AddressDescriptor UNKNOWN, TTWState UNKNOWN, bits(64) UNKNOWN);
            when DescriptorType_Page, DescriptorType_Block
                walkstate = AArch64.S2NextWalkStateLast(walkstate, ss, walkparams, ipa, descriptor);
            when DescriptorType_Invalid
                fault.statuscode = Fault_Translation;
                return (fault, AddressDescriptor UNKNOWN, TTWState UNKNOWN, bits(64) UNKNOWN);
            otherwise
                Unreachable();
    until desctype IN {DescriptorType_Page, DescriptorType_Block};

    if (walkstate.contiguous == '1' &&
            AArch64.ContiguousBitFaults(walkparams.txsz, walkparams.tgx, walkstate.level)) then
        fault.statuscode = Fault_Translation;
    elsif desctype == DescriptorType_Block && AArch64.BlocknTFaults(descriptor) then
        fault.statuscode = Fault_Translation;
    // Detect Address Size Fault by final output
    elsif AArch64.OAOutOfRange(walkstate, walkparams.ps, walkparams.tgx, ipa_64) then
        fault.statuscode = Fault_AddressSize;
    // Check descriptor AF bit
    elsif (descriptor<10> == '0' && walkparams.ha == '0' &&
            !(acctype IN {AccType_DC, AccType_IC} &&
              !boolean IMPLEMENTATION_DEFINED "Generate access flag fault on IC/DC operations")) then
        fault.statuscode = Fault_AccessFlag;

    return (fault, walkaddress, walkstate, descriptor);
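Note: the start-level lookup goes through AArch64.S2SLTTEntryAddress rather than AArch64.TTEntryAddress because stage 2 permits the initial table to be a concatenation of several granule-sized tables. The C sketch below is illustrative commentary, not Arm pseudocode; it computes the concatenation count for the 4KB granule under example VTCR_EL2-style settings (the architecture caps this at 16 tables).

#include <stdio.h>

/* Illustrative only: concatenated stage 2 tables at the start level (4KB granule). */
int main(void)
{
    const int grainsize = 12, stride = 9;
    int txsz = 24;           /* example T0SZ: 40-bit IPA space */
    int startlevel = 1;      /* example start level selected by SL0 */
    int iasize = 64 - txsz;

    /* IPA bits a single table at 'startlevel' can resolve down to a page. */
    int single = grainsize + stride * (4 - startlevel);   /* 39 for level 1 */
    int tables = (iasize > single) ? 1 << (iasize - single) : 1;
    printf("%d-bit IPA, start level %d -> %d concatenated table(s)\n",
           iasize, startlevel, tables);
    return 0;
}

For a 40-bit IPA space starting at level 1 this yields two concatenated tables, which is exactly the case the "Initial lookup might index into concatenated tables" comment refers to.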

Library pseudocode for aarch64/translation/vmsa_walk/AArch64.SS2InitialTTWState

// AArch64.SS2InitialTTWState()
// ============================
// Set properties of first access to translation tables in Secure stage 2

TTWState AArch64.SS2InitialTTWState(S2TTWParams walkparams, PASpace ipaspace)
    TTWState walkstate;
    FullAddress tablebase;
    bits(64) ttbr;

    if ipaspace == PAS_Secure then
        ttbr = VSTTBR_EL2;
    else
        ttbr = VTTBR_EL2;

    if ipaspace == PAS_Secure then
        if walkparams.sw == '0' then
            tablebase.paspace = PAS_Secure;
        else
            tablebase.paspace = PAS_NonSecure;
    else
        if walkparams.nsw == '0' then
            tablebase.paspace = PAS_Secure;
        else
            tablebase.paspace = PAS_NonSecure;

    startlevel = AArch64.S2StartLevel(walkparams);
    tablebase.address = AArch64.TTBaseAddress(ttbr, walkparams.txsz, walkparams.ps,
                                              walkparams.ds, walkparams.tgx, startlevel);

    walkstate.baseaddress = tablebase;
    walkstate.level = startlevel;
    walkstate.istable = TRUE;
    walkstate.memattrs = WalkMemAttrs(walkparams.sh, walkparams.irgn, walkparams.orgn);

    return walkstate;

Library pseudocode for aarch64/translation/vmsa_walk/AArch64.SS2OutputPASpace

// AArch64.SS2OutputPASpace()
// ==========================
// Assign PA Space to output of Secure stage 2 translation

PASpace AArch64.SS2OutputPASpace(S2TTWParams walkparams, PASpace ipaspace)
    if ipaspace == PAS_Secure then
        if walkparams.<sw,sa> == '00' then
            return PAS_Secure;
        else
            return PAS_NonSecure;
    else
        if walkparams.<sw,sa,nsw,nsa> == '0000' then
            return PAS_Secure;
        else
            return PAS_NonSecure;

Library pseudocode for aarch64/translation/vmsa_walkparams/AArch64.BBMSupportLevel

// AArch64.BBMSupportLevel()
// =========================
// Returns the level of FEAT_BBM supported

integer AArch64.BlockBBMSupportLevel()
    if !HaveBlockBBM() then
        return integer UNKNOWN;
    else
        return integer IMPLEMENTATION_DEFINED "Block BBM support level";

Library pseudocode for aarch64/translation/vmsa_walkparams/AArch64.DecodeTG0

// AArch64.DecodeTG0()
// ===================
// Decode granule size configuration bits TG0

TGx AArch64.DecodeTG0(bits(2) tg0_in)
    bits(2) tg0 = tg0_in;
    if tg0 == '11' then
        tg0 = bits(2) IMPLEMENTATION_DEFINED "Reserved TG0 encoding granule size";

    case tg0 of
        when '00' return TGx_4KB;
        when '01' return TGx_64KB;
        when '10' return TGx_16KB;

Library pseudocode for aarch64/translation/vmsa_walkparams/AArch64.DecodeTG1

// AArch64.DecodeTG1()
// ===================
// Decode granule size configuration bits TG1

TGx AArch64.DecodeTG1(bits(2) tg1_in)
    bits(2) tg1 = tg1_in;
    if tg1 == '00' then
        tg1 = bits(2) IMPLEMENTATION_DEFINED "Reserved TG1 encoding granule size";

    case tg1 of
        when '10' return TGx_4KB;
        when '11' return TGx_64KB;
        when '01' return TGx_16KB;
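Note: TG0 and TG1 deliberately use different encodings for the same three granule sizes, an easy source of configuration bugs. The following C sketch is illustrative commentary, not Arm pseudocode; it tabulates both decodes side by side (reserved values, which the pseudocode resolves in an IMPLEMENTATION DEFINED way, are simply flagged here).

#include <stdio.h>

/* Illustrative only: TG0 vs TG1 granule encodings. */
static const char *decode_tg0(unsigned tg0)
{
    switch (tg0) {
    case 0:  return "4KB";
    case 1:  return "64KB";
    case 2:  return "16KB";
    default: return "reserved";
    }
}

static const char *decode_tg1(unsigned tg1)
{
    switch (tg1) {
    case 2:  return "4KB";
    case 3:  return "64KB";
    case 1:  return "16KB";
    default: return "reserved";
    }
}

int main(void)
{
    for (unsigned v = 0; v < 4; v++)
        printf("encoding %u: TG0=%-8s TG1=%s\n", v, decode_tg0(v), decode_tg1(v));
    return 0;
}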

Library pseudocode for aarch64/translation/vmsa_walkparams/AArch64.GetS1TTWParams

// AArch64.GetS1TTWParams()
// ========================
// Returns stage 1 translation table walk parameters from respective controlling
// system registers.

S1TTWParams AArch64.GetS1TTWParams(Regime regime, bits(64) va)
    S1TTWParams walkparams;
    varange = AArch64.GetVARange(va);

    case regime of
        when Regime_EL3  walkparams = AArch64.S1TTWParamsEL3();
        when Regime_EL2  walkparams = AArch64.S1TTWParamsEL2();
        when Regime_EL20 walkparams = AArch64.S1TTWParamsEL20(varange);
        when Regime_EL10 walkparams = AArch64.S1TTWParamsEL10(varange);

    maxtxsz = AArch64.MaxTxSZ(walkparams.tgx);
    mintxsz = AArch64.S1MinTxSZ(walkparams.ds, walkparams.tgx);

    if UInt(walkparams.txsz) > maxtxsz then
        if !(boolean IMPLEMENTATION_DEFINED "Fault on TxSZ value above maximum") then
            walkparams.txsz = maxtxsz<5:0>;
    elsif !Have52BitVAExt() && UInt(walkparams.txsz) < mintxsz then
        if !(boolean IMPLEMENTATION_DEFINED "Fault on TxSZ value below minimum") then
            walkparams.txsz = mintxsz<5:0>;

    return walkparams;

Library pseudocode for aarch64/translation/vmsa_walkparams/AArch64.GetS2TTWParams

// AArch64.GetS2TTWParams()
// ========================
// Gather walk parameters for stage 2 translation

S2TTWParams AArch64.GetS2TTWParams(SecurityState ss, PASpace ipaspace, boolean s1aarch64)
    S2TTWParams walkparams;

    if ss == SS_NonSecure then
        walkparams = AArch64.NSS2TTWParams(s1aarch64);
    elsif HaveSecureEL2Ext() && ss == SS_Secure then
        walkparams = AArch64.SS2TTWParams(ipaspace, s1aarch64);
    else
        Unreachable();

    maxtxsz = AArch64.MaxTxSZ(walkparams.tgx);
    mintxsz = AArch64.S2MinTxSZ(walkparams.ds, walkparams.tgx, s1aarch64);

    if UInt(walkparams.txsz) > maxtxsz then
        if !(boolean IMPLEMENTATION_DEFINED "Fault on TxSZ value above maximum") then
            walkparams.txsz = maxtxsz<5:0>;
    elsif !Have52BitPAExt() && UInt(walkparams.txsz) < mintxsz then
        if !(boolean IMPLEMENTATION_DEFINED "Fault on TxSZ value below minimum") then
            walkparams.txsz = mintxsz<5:0>;

    return walkparams;

Library pseudocode for aarch64/translation/vmsa_walkparams/AArch64.GetVARange

// AArch64.GetVARange()
// ====================
// Determines if the VA that is to be translated lies in the LOWER or UPPER address range.

VARange AArch64.GetVARange(bits(64) va)
    if va<55> == '0' then
        return VARange_LOWER;
    else
        return VARange_UPPER;
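Note: range selection uses VA bit 55 rather than bit 63, so it gives the same answer for any TxSZ and regardless of whether the top byte carries a tag under TBI. The C sketch below is illustrative commentary, not Arm pseudocode.

#include <stdint.h>
#include <stdio.h>

/* Illustrative only: VA[55] selects the translation range. */
static const char *va_range(uint64_t va)
{
    return ((va >> 55) & 1) ? "UPPER (TTBR1)" : "LOWER (TTBR0)";
}

int main(void)
{
    printf("%s\n", va_range(0x0000007fffffff00ull)); /* LOWER */
    printf("%s\n", va_range(0xffff800000000000ull)); /* UPPER */
    /* A tagged lower-range pointer: bits 63:56 hold the tag, bit 55 is still 0. */
    printf("%s\n", va_range(0xaa00007fffffff00ull)); /* LOWER despite tag 0xAA */
    return 0;
}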

Library pseudocode for aarch64/translation/vmsa_walkparams/AArch64.MaxTxSZ

// AArch64.MaxTxSZ()
// =================
// Retrieve the maximum value of TxSZ indicating minimum input address size for both
// stages of translation

integer AArch64.MaxTxSZ(TGx tgx)
    if HaveSmallTranslationTableExt() && !UsingAArch32() then
        case tgx of
            when TGx_4KB  return 48;
            when TGx_16KB return 48;
            when TGx_64KB return 47;

    return 39;

Library pseudocode for aarch64/translation/vmsa_walkparams/AArch64.NSS2TTWParams

// AArch64.NSS2TTWParams()
// =======================
// Gather walk parameters specific for Non-secure stage 2 translation

S2TTWParams AArch64.NSS2TTWParams(boolean s1aarch64)
    S2TTWParams walkparams;

    walkparams.vm = HCR_EL2.VM OR HCR_EL2.DC;
    walkparams.tgx = AArch64.DecodeTG0(VTCR_EL2.TG0);
    walkparams.txsz = VTCR_EL2.T0SZ;
    walkparams.sl0 = VTCR_EL2.SL0;
    walkparams.ps = VTCR_EL2.PS;
    walkparams.irgn = VTCR_EL2.IRGN0;
    walkparams.orgn = VTCR_EL2.ORGN0;
    walkparams.sh = VTCR_EL2.SH0;
    walkparams.ee = SCTLR_EL2.EE;
    walkparams.ptw = if HCR_EL2.TGE == '0' then HCR_EL2.PTW else '0';
    walkparams.fwb = if HaveStage2MemAttrControl() then HCR_EL2.FWB else '0';
    walkparams.ha = if HaveAccessFlagUpdateExt() then VTCR_EL2.HA else '0';
    walkparams.hd = if HaveDirtyBitModifierExt() then VTCR_EL2.HD else '0';

    if walkparams.tgx IN {TGx_4KB, TGx_16KB} && Have52BitIPAAndPASpaceExt() then
        walkparams.ds = VTCR_EL2.DS;
    else
        walkparams.ds = '0';

    if walkparams.tgx == TGx_4KB && Have52BitIPAAndPASpaceExt() then
        walkparams.sl2 = VTCR_EL2.SL2 AND VTCR_EL2.DS;
    else
        walkparams.sl2 = '0';

    walkparams.cmow = if HaveFeatCMOW() && IsHCRXEL2Enabled() then HCRX_EL2.CMOW else '0';

    return walkparams;

Library pseudocode for aarch64/translation/vmsa_walkparams/AArch64.PAMax

// AArch64.PAMax()
// ===============
// Returns the IMPLEMENTATION DEFINED maximum number of bits capable of representing
// a physical address for this processor

integer AArch64.PAMax()
    return integer IMPLEMENTATION_DEFINED "Maximum Physical Address Size";

Library pseudocode for aarch64/translation/vmsa_walkparams/AArch64.S1DCacheEnabled

// AArch64.S1DCacheEnabled()
// =========================
// Determine cacheability of stage 1 data accesses

boolean AArch64.S1DCacheEnabled(Regime regime)
    case regime of
        when Regime_EL3  return SCTLR_EL3.C == '1';
        when Regime_EL2  return SCTLR_EL2.C == '1';
        when Regime_EL20 return SCTLR_EL2.C == '1';
        when Regime_EL10 return SCTLR_EL1.C == '1';

Library pseudocode for aarch64/translation/vmsa_walkparams/AArch64.S1EPD

// AArch64.S1EPD()
// ===============
// Determine whether stage 1 translation table walk is allowed for the VA range

bit AArch64.S1EPD(Regime regime, bits(64) va)
    assert HasUnprivileged(regime);
    varange = AArch64.GetVARange(va);

    case regime of
        when Regime_EL20 return if varange == VARange_LOWER then TCR_EL2.EPD0 else TCR_EL2.EPD1;
        when Regime_EL10 return if varange == VARange_LOWER then TCR_EL1.EPD0 else TCR_EL1.EPD1;

Library pseudocode for aarch64/translation/vmsa_walkparams/AArch64.S1Enabled

// AArch64.S1Enabled()
// ===================
// Determine if stage 1 for the acting translation regime is enabled

boolean AArch64.S1Enabled(Regime regime)
    case regime of
        when Regime_EL3  return SCTLR_EL3.M == '1';
        when Regime_EL2  return SCTLR_EL2.M == '1';
        when Regime_EL20 return SCTLR_EL2.M == '1';
        when Regime_EL10 return (!EL2Enabled() || HCR_EL2.<DC,TGE> == '00') && SCTLR_EL1.M == '1';
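Note: for the EL1&0 regime, setting either HCR_EL2.DC or HCR_EL2.TGE makes stage 1 behave as disabled regardless of SCTLR_EL1.M. The C sketch below is illustrative commentary, not Arm pseudocode; it reproduces just that decision.

#include <stdbool.h>
#include <stdio.h>

/* Illustrative only: EL1&0 stage 1 enable, mirroring the Regime_EL10 case. */
static bool s1_enabled_el10(bool el2_enabled, bool dc, bool tge, bool sctlr_m)
{
    return (!el2_enabled || (!dc && !tge)) && sctlr_m;
}

int main(void)
{
    printf("%d\n", s1_enabled_el10(true, false, false, true)); /* 1: MMU on */
    printf("%d\n", s1_enabled_el10(true, true,  false, true)); /* 0: HCR_EL2.DC overrides */
    return 0;
}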

Library pseudocode for aarch64/translation/vmsa_walkparams/AArch64.S1ICacheEnabled

// AArch64.S1ICacheEnabled()
// =========================
// Determine cacheability of stage 1 instruction fetches

boolean AArch64.S1ICacheEnabled(Regime regime)
    case regime of
        when Regime_EL3  return SCTLR_EL3.I == '1';
        when Regime_EL2  return SCTLR_EL2.I == '1';
        when Regime_EL20 return SCTLR_EL2.I == '1';
        when Regime_EL10 return SCTLR_EL1.I == '1';

Library pseudocode for aarch64/translation/vmsa_walkparams/AArch64.S1MinTxSZ

// AArch64.S1MinTxSZ()
// ===================
// Retrieve the minimum value of TxSZ indicating maximum input address size for stage 1

integer AArch64.S1MinTxSZ(bit ds, TGx tgx)
    if (Have52BitVAExt() && tgx == TGx_64KB) || ds == '1' then
        return 12;

    return 16;
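Note: since the input address size in bits is 64 - TxSZ, the TxSZ bounds above and in AArch64.MaxTxSZ translate directly into region sizes. The C sketch below is illustrative commentary, not Arm pseudocode; the listed bounds are the architectural values quoted by these functions.

#include <stdio.h>

/* Illustrative only: TxSZ bounds mapped to input address range widths. */
int main(void)
{
    int bounds[] = { 12, 16, 39, 48 };
    for (int i = 0; i < 4; i++)
        printf("TxSZ=%2d -> %d-bit input address range\n",
               bounds[i], 64 - bounds[i]);
    /* 12 -> 52-bit (FEAT_LVA 64KB granule, or DS=1), 16 -> 48-bit,
       39 -> 25-bit (default maximum TxSZ), 48 -> 16-bit (FEAT_TTST). */
    return 0;
}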

Library pseudocode for aarch64/translation/vmsa_walkparams/AArch64.S1TTBR

// AArch64.S1TTBR()
// ================
// Identify stage 1 table base register for the acting translation regime

bits(64) AArch64.S1TTBR(Regime regime, bits(64) va)
    varange = AArch64.GetVARange(va);

    case regime of
        when Regime_EL3  return TTBR0_EL3;
        when Regime_EL2  return TTBR0_EL2;
        when Regime_EL20 return if varange == VARange_LOWER then TTBR0_EL2 else TTBR1_EL2;
        when Regime_EL10 return if varange == VARange_LOWER then TTBR0_EL1 else TTBR1_EL1;

Library pseudocode for aarch64/translation/vmsa_walkparams/AArch64.S1TTWParamsEL10

// AArch64.S1TTWParamsEL10()
// =========================
// Gather stage 1 translation table walk parameters for EL1&0 regime
// (with EL2 enabled or disabled)

S1TTWParams AArch64.S1TTWParamsEL10(VARange varange)
    S1TTWParams walkparams;

    if varange == VARange_LOWER then
        walkparams.tgx = AArch64.DecodeTG0(TCR_EL1.TG0);
        walkparams.txsz = TCR_EL1.T0SZ;
        walkparams.irgn = TCR_EL1.IRGN0;
        walkparams.orgn = TCR_EL1.ORGN0;
        walkparams.sh = TCR_EL1.SH0;
        walkparams.tbi = TCR_EL1.TBI0;
        walkparams.tbid = if HavePACExt() then TCR_EL1.TBID0 else '0';
        walkparams.e0pd = if HaveE0PDExt() then TCR_EL1.E0PD0 else '0';
        walkparams.hpd = if AArch64.HaveHPDExt() then TCR_EL1.HPD0 else '0';
    else
        walkparams.tgx = AArch64.DecodeTG1(TCR_EL1.TG1);
        walkparams.txsz = TCR_EL1.T1SZ;
        walkparams.irgn = TCR_EL1.IRGN1;
        walkparams.orgn = TCR_EL1.ORGN1;
        walkparams.sh = TCR_EL1.SH1;
        walkparams.tbi = TCR_EL1.TBI1;
        walkparams.tbid = if HavePACExt() then TCR_EL1.TBID1 else '0';
        walkparams.e0pd = if HaveE0PDExt() then TCR_EL1.E0PD1 else '0';
        walkparams.hpd = if AArch64.HaveHPDExt() then TCR_EL1.HPD1 else '0';

    walkparams.mair = MAIR_EL1;
    walkparams.wxn = SCTLR_EL1.WXN;
    walkparams.ps = TCR_EL1.IPS;
    walkparams.ee = SCTLR_EL1.EE;
    walkparams.sif = SCR_EL3.SIF;

    if EL2Enabled() then
        walkparams.dc = HCR_EL2.DC;
        walkparams.dct = if HaveMTE2Ext() then HCR_EL2.DCT else '0';

    if HaveTrapLoadStoreMultipleDeviceExt() then
        walkparams.ntlsmd = SCTLR_EL1.nTLSMD;
    else
        walkparams.ntlsmd = '1';

    if EL2Enabled() then
        if HCR_EL2.<NV,NV1> == '01' then
            case ConstrainUnpredictable(Unpredictable_NVNV1) of
                when Constraint_NVNV1_00 walkparams.nv1 = '0';
                when Constraint_NVNV1_01 walkparams.nv1 = '1';
                when Constraint_NVNV1_11 walkparams.nv1 = '1';
        else
            walkparams.nv1 = HCR_EL2.NV1;
    else
        walkparams.nv1 = '0';

    walkparams.epan = if HavePAN3Ext() then SCTLR_EL1.EPAN else '0';
    walkparams.cmow = if HaveFeatCMOW() then SCTLR_EL1.CMOW else '0';
    walkparams.ha = if HaveAccessFlagUpdateExt() then TCR_EL1.HA else '0';
    walkparams.hd = if HaveDirtyBitModifierExt() then TCR_EL1.HD else '0';

    if walkparams.tgx IN {TGx_4KB, TGx_16KB} && Have52BitIPAAndPASpaceExt() then
        walkparams.ds = TCR_EL1.DS;
    else
        walkparams.ds = '0';

    return walkparams;

Library pseudocode for aarch64/translation/vmsa_walkparams/AArch64.S1TTWParamsEL2

// AArch64.S1TTWParamsEL2()
// ========================
// Gather stage 1 translation table walk parameters for EL2 regime

S1TTWParams AArch64.S1TTWParamsEL2()
    S1TTWParams walkparams;

    walkparams.tgx = AArch64.DecodeTG0(TCR_EL2.TG0);
    walkparams.txsz = TCR_EL2.T0SZ;
    walkparams.ps = TCR_EL2.PS;
    walkparams.irgn = TCR_EL2.IRGN0;
    walkparams.orgn = TCR_EL2.ORGN0;
    walkparams.sh = TCR_EL2.SH0;
    walkparams.tbi = TCR_EL2.TBI;
    walkparams.mair = MAIR_EL2;
    walkparams.wxn = SCTLR_EL2.WXN;
    walkparams.ee = SCTLR_EL2.EE;
    walkparams.sif = SCR_EL3.SIF;
    walkparams.tbid = if HavePACExt() then TCR_EL2.TBID else '0';
    walkparams.hpd = if AArch64.HaveHPDExt() then TCR_EL2.HPD else '0';
    walkparams.ha = if HaveAccessFlagUpdateExt() then TCR_EL2.HA else '0';
    walkparams.hd = if HaveDirtyBitModifierExt() then TCR_EL2.HD else '0';

    if walkparams.tgx IN {TGx_4KB, TGx_16KB} && Have52BitIPAAndPASpaceExt() then
        walkparams.ds = TCR_EL2.DS;
    else
        walkparams.ds = '0';

    return walkparams;

Library pseudocode for aarch64/translation/vmsa_walkparams/AArch64.S1TTWParamsEL20

// AArch64.S1TTWParamsEL20()
// =========================
// Gather stage 1 translation table walk parameters for EL2&0 regime

S1TTWParams AArch64.S1TTWParamsEL20(VARange varange)
    S1TTWParams walkparams;

    if varange == VARange_LOWER then
        walkparams.tgx = AArch64.DecodeTG0(TCR_EL2.TG0);
        walkparams.txsz = TCR_EL2.T0SZ;
        walkparams.irgn = TCR_EL2.IRGN0;
        walkparams.orgn = TCR_EL2.ORGN0;
        walkparams.sh = TCR_EL2.SH0;
        walkparams.tbi = TCR_EL2.TBI0;
        walkparams.tbid = if HavePACExt() then TCR_EL2.TBID0 else '0';
        walkparams.e0pd = if HaveE0PDExt() then TCR_EL2.E0PD0 else '0';
        walkparams.hpd = if AArch64.HaveHPDExt() then TCR_EL2.HPD0 else '0';
    else
        walkparams.tgx = AArch64.DecodeTG1(TCR_EL2.TG1);
        walkparams.txsz = TCR_EL2.T1SZ;
        walkparams.irgn = TCR_EL2.IRGN1;
        walkparams.orgn = TCR_EL2.ORGN1;
        walkparams.sh = TCR_EL2.SH1;
        walkparams.tbi = TCR_EL2.TBI1;
        walkparams.tbid = if HavePACExt() then TCR_EL2.TBID1 else '0';
        walkparams.e0pd = if HaveE0PDExt() then TCR_EL2.E0PD1 else '0';
        walkparams.hpd = if AArch64.HaveHPDExt() then TCR_EL2.HPD1 else '0';

    walkparams.mair = MAIR_EL2;
    walkparams.wxn = SCTLR_EL2.WXN;
    walkparams.ps = TCR_EL2.IPS;
    walkparams.ee = SCTLR_EL2.EE;
    walkparams.sif = SCR_EL3.SIF;

    if HaveTrapLoadStoreMultipleDeviceExt() then
        walkparams.ntlsmd = SCTLR_EL2.nTLSMD;
    else
        walkparams.ntlsmd = '1';

    walkparams.epan = if HavePAN3Ext() then SCTLR_EL2.EPAN else '0';
    walkparams.cmow = if HaveFeatCMOW() then SCTLR_EL2.CMOW else '0';
    walkparams.ha = if HaveAccessFlagUpdateExt() then TCR_EL2.HA else '0';
    walkparams.hd = if HaveDirtyBitModifierExt() then TCR_EL2.HD else '0';

    if walkparams.tgx IN {TGx_4KB, TGx_16KB} && Have52BitIPAAndPASpaceExt() then
        walkparams.ds = TCR_EL2.DS;
    else
        walkparams.ds = '0';

    return walkparams;

Library pseudocode for aarch64/translation/vmsa_walkparams/AArch64.S1TTWParamsEL3

// AArch64.S1TTWParamsEL3()
// ========================
// Gather stage 1 translation table walk parameters for EL3 regime

S1TTWParams AArch64.S1TTWParamsEL3()
    S1TTWParams walkparams;

    walkparams.tgx = AArch64.DecodeTG0(TCR_EL3.TG0);
    walkparams.txsz = TCR_EL3.T0SZ;
    walkparams.ps = TCR_EL3.PS;
    walkparams.irgn = TCR_EL3.IRGN0;
    walkparams.orgn = TCR_EL3.ORGN0;
    walkparams.sh = TCR_EL3.SH0;
    walkparams.tbi = TCR_EL3.TBI;
    walkparams.mair = MAIR_EL3;
    walkparams.wxn = SCTLR_EL3.WXN;
    walkparams.ee = SCTLR_EL3.EE;
    walkparams.sif = SCR_EL3.SIF;
    walkparams.tbid = if HavePACExt() then TCR_EL3.TBID else '0';
    walkparams.hpd = if AArch64.HaveHPDExt() then TCR_EL3.HPD else '0';
    walkparams.ha = if HaveAccessFlagUpdateExt() then TCR_EL3.HA else '0';
    walkparams.hd = if HaveDirtyBitModifierExt() then TCR_EL3.HD else '0';

    if walkparams.tgx IN {TGx_4KB, TGx_16KB} && Have52BitIPAAndPASpaceExt() then
        walkparams.ds = TCR_EL3.DS;
    else
        walkparams.ds = '0';

    return walkparams;

Library pseudocode for aarch64/translation/vmsa_walkparams/AArch64.S2MinTxSZ

// AArch64.S2MinTxSZ()
// ===================
// Retrieve the minimum value of TxSZ indicating maximum input address size for stage 2

integer AArch64.S2MinTxSZ(bit ds, TGx tgx, boolean s1aarch64)
    ips = AArch64.PAMax();
    if Have52BitPAExt() && tgx != TGx_64KB && ds == '0' then
        ips = Min(48, AArch64.PAMax());

    min_txsz = 64 - ips;
    if !s1aarch64 then
        // EL1 is AArch32
        min_txsz = Min(min_txsz, 24);

    return min_txsz;
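Note: the final Min() widens the permitted IPA range to 40 bits when EL1 is AArch32, since AArch32 stage 1 can emit 40-bit IPAs regardless of this processor's PA size. The C sketch below is illustrative commentary, not Arm pseudocode; it assumes FEAT_LPA is not implemented so the 52-bit adjustment above does not apply.

#include <stdio.h>

/* Illustrative only: stage 2 minimum TxSZ, shaped like AArch64.S2MinTxSZ
   without the FEAT_LPA branch. */
static int s2_min_txsz(int pamax, int s1aarch64)
{
    int min_txsz = 64 - pamax;
    if (!s1aarch64)                           /* EL1 is AArch32 */
        min_txsz = min_txsz < 24 ? min_txsz : 24;  /* allow 40-bit IPAs */
    return min_txsz;
}

int main(void)
{
    printf("PAMax=40, AArch64 EL1: min TxSZ %d\n", s2_min_txsz(40, 1)); /* 24 */
    printf("PAMax=32, AArch32 EL1: min TxSZ %d\n", s2_min_txsz(32, 0)); /* 24, not 32 */
    return 0;
}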

Library pseudocode for aarch64/translation/vmsa_walkparams/AArch64.SS2TTWParams

// AArch64.SS2TTWParams()
// ======================
// Gather walk parameters specific for secure stage 2 translation

S2TTWParams AArch64.SS2TTWParams(PASpace ipaspace, boolean s1aarch64)
    S2TTWParams walkparams;

    if ipaspace == PAS_Secure then
        walkparams.tgx = AArch64.DecodeTG0(VSTCR_EL2.TG0);
        walkparams.txsz = VSTCR_EL2.T0SZ;
        walkparams.sl0 = VSTCR_EL2.SL0;
        if walkparams.tgx == TGx_4KB && Have52BitIPAAndPASpaceExt() then
            walkparams.sl2 = VSTCR_EL2.SL2 AND VTCR_EL2.DS;
        else
            walkparams.sl2 = '0';
    elsif ipaspace == PAS_NonSecure then
        walkparams.tgx = AArch64.DecodeTG0(VTCR_EL2.TG0);
        walkparams.txsz = VTCR_EL2.T0SZ;
        walkparams.sl0 = VTCR_EL2.SL0;
        if walkparams.tgx == TGx_4KB && Have52BitIPAAndPASpaceExt() then
            walkparams.sl2 = VTCR_EL2.SL2 AND VTCR_EL2.DS;
        else
            walkparams.sl2 = '0';
    else
        Unreachable();

    walkparams.sw = VSTCR_EL2.SW;
    walkparams.nsw = VTCR_EL2.NSW;
    walkparams.sa = VSTCR_EL2.SA;
    walkparams.nsa = VTCR_EL2.NSA;
    walkparams.vm = HCR_EL2.VM OR HCR_EL2.DC;
    walkparams.ps = VTCR_EL2.PS;
    walkparams.irgn = VTCR_EL2.IRGN0;
    walkparams.orgn = VTCR_EL2.ORGN0;
    walkparams.sh = VTCR_EL2.SH0;
    walkparams.ee = SCTLR_EL2.EE;
    walkparams.ptw = if HCR_EL2.TGE == '0' then HCR_EL2.PTW else '0';
    walkparams.fwb = if HaveStage2MemAttrControl() then HCR_EL2.FWB else '0';
    walkparams.ha = if HaveAccessFlagUpdateExt() then VTCR_EL2.HA else '0';
    walkparams.hd = if HaveDirtyBitModifierExt() then VTCR_EL2.HD else '0';

    if walkparams.tgx IN {TGx_4KB, TGx_16KB} && Have52BitIPAAndPASpaceExt() then
        walkparams.ds = VTCR_EL2.DS;
    else
        walkparams.ds = '0';

    walkparams.cmow = if HaveFeatCMOW() && IsHCRXEL2Enabled() then HCRX_EL2.CMOW else '0';

    return walkparams;

Library pseudocode for aarch64/translation/vmsa_walkparams/AArch64.VAMax

// AArch64.VAMax()
// ===============
// Returns the IMPLEMENTATION DEFINED maximum number of bits capable of representing
// the virtual address for this processor

integer AArch64.VAMax()
    return integer IMPLEMENTATION_DEFINED "Maximum Virtual Address Size";

Library pseudocode for shared/debug/ClearStickyErrors/ClearStickyErrors

// ClearStickyErrors()
// ===================

ClearStickyErrors()
    EDSCR.TXU = '0';          // Clear TX underrun flag
    EDSCR.RXO = '0';          // Clear RX overrun flag

    if Halted() then          // in Debug state
        EDSCR.ITO = '0';      // Clear ITR overrun flag

    // If halted and the ITR is not empty then it is UNPREDICTABLE whether the EDSCR.ERR is cleared.
    // The UNPREDICTABLE behavior also affects the instructions in flight, but this is not described
    // in the pseudocode.
    if Halted() && EDSCR.ITE == '0' && ConstrainUnpredictableBool(Unpredictable_CLEARERRITEZERO) then
        return;
    EDSCR.ERR = '0';          // Clear cumulative error flag

    return;

Library pseudocode for shared/debug/DebugTarget/DebugTarget

// DebugTarget()
// =============
// Returns the debug exception target Exception level

bits(2) DebugTarget()
    secure = IsSecure();
    return DebugTargetFrom(secure);

Library pseudocode for shared/debug/DebugTarget/DebugTargetFrom

// DebugTargetFrom()
// =================

bits(2) DebugTargetFrom(boolean secure)
    boolean route_to_el2;
    if HaveEL(EL2) && (!secure || (HaveSecureEL2Ext() && (!HaveEL(EL3) || SCR_EL3.EEL2 == '1'))) then
        if ELUsingAArch32(EL2) then
            route_to_el2 = (HDCR.TDE == '1' || HCR.TGE == '1');
        else
            route_to_el2 = (MDCR_EL2.TDE == '1' || HCR_EL2.TGE == '1');
    else
        route_to_el2 = FALSE;

    bits(2) target;
    if route_to_el2 then
        target = EL2;
    elsif HaveEL(EL3) && !HaveAArch64() && secure then
        target = EL3;
    else
        target = EL1;

    return target;
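Note: the routing decision reduces to "EL2 if it is both available to the current Security state and configured to own debug exceptions, otherwise EL1" (the EL3 case only arises on AArch32-only implementations). The C sketch below is illustrative commentary, not Arm pseudocode; it models an AArch64-only PE, so the EL3 branch is omitted.

#include <stdbool.h>
#include <stdio.h>

/* Illustrative only: debug exception routing shape for an AArch64-only PE. */
static int debug_target(bool have_el2, bool secure, bool eel2, bool tde, bool tge)
{
    bool el2_avail = have_el2 && (!secure || eel2);   /* Secure EL2 needs SCR_EL3.EEL2 */
    bool route_to_el2 = el2_avail && (tde || tge);    /* MDCR_EL2.TDE or HCR_EL2.TGE */
    return route_to_el2 ? 2 : 1;
}

int main(void)
{
    printf("EL%d\n", debug_target(true, false, false, true, false)); /* EL2 */
    printf("EL%d\n", debug_target(true, true,  false, true, false)); /* EL1: Secure, no EEL2 */
    return 0;
}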

Library pseudocode for shared/debug/DoubleLockStatus/DoubleLockStatus

// DoubleLockStatus()
// ==================
// Returns the state of the OS Double Lock.
// FALSE if OSDLR_EL1.DLK == 0 or DBGPRCR_EL1.CORENPDRQ == 1 or the PE is in Debug state.
// TRUE if OSDLR_EL1.DLK == 1 and DBGPRCR_EL1.CORENPDRQ == 0 and the PE is in Non-debug state.

boolean DoubleLockStatus()
    if !HaveDoubleLock() then
        return FALSE;
    elsif ELUsingAArch32(EL1) then
        return DBGOSDLR.DLK == '1' && DBGPRCR.CORENPDRQ == '0' && !Halted();
    else
        return OSDLR_EL1.DLK == '1' && DBGPRCR_EL1.CORENPDRQ == '0' && !Halted();

Library pseudocode for shared/debug/OSLockStatus/OSLockStatus

// OSLockStatus()
// ==============
// Returns the state of the OS Lock.

boolean OSLockStatus()
    return (if ELUsingAArch32(EL1) then DBGOSLSR.OSLK else OSLSR_EL1.OSLK) == '1';

Library pseudocode for shared/debug/SoftwareLockStatus/Component

enumeration Component {Component_PMU, Component_Debug, Component_CTI };

Library pseudocode for shared/debug/SoftwareLockStatus/GetAccessComponent

// Returns the accessed component.
Component GetAccessComponent();

Library pseudocode for shared/debug/SoftwareLockStatus/SoftwareLockStatus

// SoftwareLockStatus()
// ====================
// Returns the state of the Software Lock.

boolean SoftwareLockStatus()
    Component component = GetAccessComponent();
    if !HaveSoftwareLock(component) then
        return FALSE;

    case component of
        when Component_Debug return EDLSR.SLK == '1';
        when Component_PMU   return PMLSR.SLK == '1';
        when Component_CTI   return CTILSR.SLK == '1';
        otherwise Unreachable();

Library pseudocode for shared/debug/authentication/AccessState

// Returns the Security state of the access.
SecurityState AccessState();

Library pseudocode for shared/debug/authentication/AllowExternalDebugAccess

// AllowExternalDebugAccess()
// ==========================
// Returns TRUE if an external debug interface access to the External debug registers
// is allowed, FALSE otherwise.

boolean AllowExternalDebugAccess()
    // The access may also be subject to OS Lock, power-down, etc.
    return AllowExternalDebugAccess(AccessState());

// AllowExternalDebugAccess()
// ==========================
// Returns TRUE if an external debug interface access to the External debug registers
// is allowed for the given Security state, FALSE otherwise.

boolean AllowExternalDebugAccess(SecurityState access_state)
    // The access may also be subject to OS Lock, power-down, etc.
    if HaveSecureExtDebugView() then
        if access_state == SS_Secure then return TRUE;
    else
        if !ExternalInvasiveDebugEnabled() then return FALSE;

    if ExternalSecureInvasiveDebugEnabled() then return TRUE;

    if HaveEL(EL3) then
        EDAD_bit = if ELUsingAArch32(EL3) then SDCR.EDAD else MDCR_EL3.EDAD;
        return EDAD_bit == '0';
    else
        return NonSecureOnlyImplementation();

Library pseudocode for shared/debug/authentication/AllowExternalPMUAccess

// AllowExternalPMUAccess()
// ========================
// Returns TRUE if an external debug interface access to the PMU registers is
// allowed, FALSE otherwise.

boolean AllowExternalPMUAccess()
    // The access may also be subject to OS Lock, power-down, etc.
    return AllowExternalPMUAccess(AccessState());

// AllowExternalPMUAccess()
// ========================
// Returns TRUE if an external debug interface access to the PMU registers is
// allowed for the given Security state, FALSE otherwise.

boolean AllowExternalPMUAccess(SecurityState access_state)
    // The access may also be subject to OS Lock, power-down, etc.
    if HaveSecureExtDebugView() then
        if access_state == SS_Secure then return TRUE;
    else
        if !ExternalInvasiveDebugEnabled() then return FALSE;

    if ExternalSecureInvasiveDebugEnabled() then return TRUE;

    if HaveEL(EL3) then
        EPMAD_bit = if ELUsingAArch32(EL3) then SDCR.EPMAD else MDCR_EL3.EPMAD;
        return EPMAD_bit == '0';
    else
        return NonSecureOnlyImplementation();

Library pseudocode for shared/debug/authentication/Debug_authentication

signal DBGEN;
signal NIDEN;
signal SPIDEN;
signal SPNIDEN;

Library pseudocode for shared/debug/authentication/ExternalInvasiveDebugEnabled

// ExternalInvasiveDebugEnabled()
// ==============================
// The definition of this function is IMPLEMENTATION DEFINED.
// In the recommended interface, this function returns the state of the DBGEN signal.

boolean ExternalInvasiveDebugEnabled()
    return DBGEN == HIGH;

Library pseudocode for shared/debug/authentication/ExternalNoninvasiveDebugAllowed

// ExternalNoninvasiveDebugAllowed()
// =================================
// Returns TRUE if Trace and PC Sample-based Profiling are allowed

boolean ExternalNoninvasiveDebugAllowed()
    if !ExternalNoninvasiveDebugEnabled() then return FALSE;

    ss = SecurityStateAtEL(PSTATE.EL);
    if (ELUsingAArch32(EL1) && PSTATE.EL == EL0 && ss == SS_Secure &&
            SDER.SUNIDEN == '1') then
        return TRUE;

    case ss of
        when SS_NonSecure return TRUE;
        when SS_Secure    return ExternalSecureNoninvasiveDebugEnabled();

Library pseudocode for shared/debug/authentication/ExternalNoninvasiveDebugEnabled

// ExternalNoninvasiveDebugEnabled()
// =================================
// This function returns TRUE if FEAT_Debugv8p4 is implemented.
// Otherwise, this function is IMPLEMENTATION DEFINED, and, in the
// recommended interface, ExternalNoninvasiveDebugEnabled returns
// the state of the (DBGEN OR NIDEN) signal.

boolean ExternalNoninvasiveDebugEnabled()
    return !HaveNoninvasiveDebugAuth() || ExternalInvasiveDebugEnabled() || NIDEN == HIGH;

Library pseudocode for shared/debug/authentication/ExternalSecureInvasiveDebugEnabled

// ExternalSecureInvasiveDebugEnabled()
// ====================================
// The definition of this function is IMPLEMENTATION DEFINED.
// In the recommended interface, this function returns the state of the (DBGEN AND SPIDEN) signal.
// CoreSight allows asserting SPIDEN without also asserting DBGEN, but this is not recommended.

boolean ExternalSecureInvasiveDebugEnabled()
    if !HaveEL(EL3) && !SecureOnlyImplementation() then return FALSE;
    return ExternalInvasiveDebugEnabled() && SPIDEN == HIGH;

Library pseudocode for shared/debug/authentication/ExternalSecureNoninvasiveDebugEnabled

// ExternalSecureNoninvasiveDebugEnabled() // ======================================= // This function returns the value of ExternalSecureInvasiveDebugEnabled() when FEAT_Debugv8p4 // is implemented. Otherwise, the definition of this function is IMPLEMENTATION DEFINED. // In the recommended interface, this function returns the state of the (DBGEN OR NIDEN) AND // (SPIDEN OR SPNIDEN) signal. boolean ExternalSecureNoninvasiveDebugEnabled() if !HaveEL(EL3) && !SecureOnlyImplementation() then return FALSE; if HaveNoninvasiveDebugAuth() then return ExternalNoninvasiveDebugEnabled() && (SPIDEN == HIGH || SPNIDEN == HIGH); else return ExternalSecureInvasiveDebugEnabled();
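Taken together, the four functions above are a small gating chain over the DBGEN, NIDEN, SPIDEN and SPNIDEN authentication signals. The following minimal C sketch (not Arm pseudocode) models that chain under stated assumptions: EL3 is implemented, the implementation is not Secure-only, and FEAT_Debugv8p4 is not implemented, so HaveNoninvasiveDebugAuth() would return TRUE.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-ins for the IMPLEMENTATION DEFINED authentication signals. */
static bool DBGEN, NIDEN, SPIDEN, SPNIDEN;

static bool ExternalInvasiveDebugEnabled(void)       { return DBGEN; }
static bool ExternalNoninvasiveDebugEnabled(void)    { return DBGEN || NIDEN; }
static bool ExternalSecureInvasiveDebugEnabled(void) { return DBGEN && SPIDEN; }

static bool ExternalSecureNoninvasiveDebugEnabled(void)
{
    /* (DBGEN OR NIDEN) AND (SPIDEN OR SPNIDEN), per the recommended interface */
    return ExternalNoninvasiveDebugEnabled() && (SPIDEN || SPNIDEN);
}

int main(void)
{
    DBGEN = true; SPNIDEN = true;   /* NIDEN and SPIDEN left LOW */
    printf("invasive=%d noninvasive=%d sec-invasive=%d sec-noninvasive=%d\n",
           ExternalInvasiveDebugEnabled(), ExternalNoninvasiveDebugEnabled(),
           ExternalSecureInvasiveDebugEnabled(), ExternalSecureNoninvasiveDebugEnabled());
    return 0;
}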

Library pseudocode for shared/debug/authentication/IsAccessSecure

// Returns TRUE when an access is Secure boolean IsAccessSecure();

Library pseudocode for shared/debug/authentication/IsCorePowered

// Returns TRUE if the Core power domain is powered on, FALSE otherwise. boolean IsCorePowered();

Library pseudocode for shared/debug/breakpoint/CheckValidStateMatch

// CheckValidStateMatch() // ====================== // Checks for an invalid state match that will generate Constrained // Unpredictable behaviour, otherwise returns Constraint_NONE. (Constraint, bits(2), bit, bits(2)) CheckValidStateMatch(bits(2) SSC_in, bit HMC_in, bits(2) PxC_in, boolean isbreakpnt) boolean reserved = FALSE; bits(2) SSC = SSC_in; bit HMC = HMC_in; bits(2) PxC = PxC_in; // Values that are not allocated in any architecture version if (HMC:SSC:PxC) IN {'01110','100x0','10110','11x10'} then reserved = TRUE; // Match 'Usr/Sys/Svc' valid only for AArch32 breakpoints if (!isbreakpnt || !HaveAArch32EL(EL1)) && HMC:PxC == '000' && SSC != '11' then reserved = TRUE; // Both EL3 and EL2 are not implemented if !HaveEL(EL3) && !HaveEL(EL2) && (HMC != '0' || SSC != '00') then reserved = TRUE; // EL3 is not implemented if !HaveEL(EL3) && SSC IN {'01','10'} && HMC:SSC:PxC != '10100' then reserved = TRUE; // EL3 using AArch64 only if (!HaveEL(EL3) || !HaveAArch64()) && HMC:SSC:PxC == '11000' then reserved = TRUE; // EL2 is not implemented if !HaveEL(EL2) && HMC:SSC:PxC == '11100' then reserved = TRUE; // Secure EL2 is not implemented if !HaveSecureEL2Ext() && (HMC:SSC:PxC) IN {'01100','10100','x11x1'} then reserved = TRUE; if reserved then // If parameters are set to a reserved type, behaves as either disabled or a defined type Constraint c; (c, <HMC,SSC,PxC>) = ConstrainUnpredictableBits(Unpredictable_RESBPWPCTRL); assert c IN {Constraint_DISABLED, Constraint_UNKNOWN}; if c == Constraint_DISABLED then return (c, bits(2) UNKNOWN, bit UNKNOWN, bits(2) UNKNOWN); // Otherwise the value returned by ConstrainUnpredictableBits must be a not-reserved value return (Constraint_NONE, SSC, HMC, PxC);
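The first test above is a wildcard match of the concatenated HMC:SSC:PxC value against encodings that are unallocated in any architecture version. A hedged C sketch of just that match, expressing each pattern as a value/mask pair where the mask clears the 'x' positions:

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct pattern { unsigned bits, mask; };

/* '01110', '100x0', '10110', '11x10' with HMC as bit 4, SSC bits 3:2, PxC bits 1:0 */
static const struct pattern unallocated[] = {
    { 0x0E, 0x1F },   /* 01110 */
    { 0x10, 0x1D },   /* 100x0 */
    { 0x16, 0x1F },   /* 10110 */
    { 0x1A, 0x1B },   /* 11x10 */
};

static bool is_unallocated(unsigned hmc, unsigned ssc, unsigned pxc)
{
    unsigned code = (hmc << 4) | (ssc << 2) | pxc;   /* HMC:SSC:PxC */
    for (size_t i = 0; i < sizeof unallocated / sizeof unallocated[0]; i++)
        if ((code & unallocated[i].mask) == unallocated[i].bits)
            return true;
    return false;
}

int main(void)
{
    printf("%d\n", is_unallocated(1, 0, 0));   /* '10000' matches '100x0': 1 */
    printf("%d\n", is_unallocated(0, 1, 1));   /* '00101' matches nothing: 0 */
    return 0;
}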

Library pseudocode for shared/debug/breakpoint/NumBreakpointsImplemented

// NumBreakpointsImplemented() // =========================== // Returns the number of breakpoints implemented. This is indicated to software by // DBGDIDR.BRPs in AArch32 state, and ID_AA64DFR0_EL1.BRPs in AArch64 state. integer NumBreakpointsImplemented() return integer IMPLEMENTATION_DEFINED "Number of breakpoints";

Library pseudocode for shared/debug/breakpoint/NumContextAwareBreakpointsImplemented

// NumContextAwareBreakpointsImplemented() // ======================================= // Returns the number of context-aware breakpoints implemented. This is indicated to software by // DBGDIDR.CTX_CMPs in AArch32 state, and ID_AA64DFR0_EL1.CTX_CMPs in AArch64 state. integer NumContextAwareBreakpointsImplemented() return integer IMPLEMENTATION_DEFINED "Number of context-aware breakpoints";

Library pseudocode for shared/debug/breakpoint/NumWatchpointsImplemented

// NumWatchpointsImplemented() // =========================== // Returns the number of watchpoints implemented. This is indicated to software by // DBGDIDR.WRPs in AArch32 state, and ID_AA64DFR0_EL1.WRPs in AArch64 state. integer NumWatchpointsImplemented() return integer IMPLEMENTATION_DEFINED "Number of watchpoints";

Library pseudocode for shared/debug/cti/CTI_SetEventLevel

// Set a Cross Trigger multi-cycle input event trigger to the specified level. CTI_SetEventLevel(CrossTriggerIn id, signal level);

Library pseudocode for shared/debug/cti/CTI_SignalEvent

// Signal a discrete event on a Cross Trigger input event trigger. CTI_SignalEvent(CrossTriggerIn id);

Library pseudocode for shared/debug/cti/CrossTrigger

enumeration CrossTriggerOut {CrossTriggerOut_DebugRequest, CrossTriggerOut_RestartRequest, CrossTriggerOut_IRQ, CrossTriggerOut_RSVD3, CrossTriggerOut_TraceExtIn0, CrossTriggerOut_TraceExtIn1, CrossTriggerOut_TraceExtIn2, CrossTriggerOut_TraceExtIn3}; enumeration CrossTriggerIn {CrossTriggerIn_CrossHalt, CrossTriggerIn_PMUOverflow, CrossTriggerIn_RSVD2, CrossTriggerIn_RSVD3, CrossTriggerIn_TraceExtOut0, CrossTriggerIn_TraceExtOut1, CrossTriggerIn_TraceExtOut2, CrossTriggerIn_TraceExtOut3};

Library pseudocode for shared/debug/dccanditr/CheckForDCCInterrupts

// CheckForDCCInterrupts() // ======================= CheckForDCCInterrupts() commrx = (EDSCR.RXfull == '1'); commtx = (EDSCR.TXfull == '0'); // COMMRX and COMMTX support is optional and not recommended for new designs. // SetInterruptRequestLevel(InterruptID_COMMRX, if commrx then HIGH else LOW); // SetInterruptRequestLevel(InterruptID_COMMTX, if commtx then HIGH else LOW); // The value to be driven onto the common COMMIRQ signal. boolean commirq; if ELUsingAArch32(EL1) then commirq = ((commrx && DBGDCCINT.RX == '1') || (commtx && DBGDCCINT.TX == '1')); else commirq = ((commrx && MDCCINT_EL1.RX == '1') || (commtx && MDCCINT_EL1.TX == '1')); SetInterruptRequestLevel(InterruptID_COMMIRQ, if commirq then HIGH else LOW); return;
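To make the COMMIRQ derivation concrete, here is a small C model of the AArch64 (MDCCINT_EL1) case; the struct fields are simplified stand-ins for the corresponding register fields:

#include <stdbool.h>
#include <stdio.h>

struct edscr   { bool rxfull, txfull; };
struct mdccint { bool rx, tx; };

/* COMMIRQ is HIGH when an enabled direction has work pending: receive data
 * available (RXfull set) or room to transmit (TXfull clear). */
static bool commirq_level(struct edscr s, struct mdccint c)
{
    bool commrx = s.rxfull;
    bool commtx = !s.txfull;
    return (commrx && c.rx) || (commtx && c.tx);
}

int main(void)
{
    struct edscr s = { .rxfull = true, .txfull = true };
    struct mdccint c = { .rx = true, .tx = false };
    printf("COMMIRQ=%s\n", commirq_level(s, c) ? "HIGH" : "LOW");
    return 0;
}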

Library pseudocode for shared/debug/dccanditr/DBGDTRRX_EL0

// DBGDTRRX_EL0[] (external write) // =============================== // Called on writes to debug register 0x08C. DBGDTRRX_EL0[boolean memory_mapped] = bits(32) value if EDPRSR<6:5,0> != '001' then // Check DLK, OSLK and PU bits IMPLEMENTATION_DEFINED "generate error response"; return; if EDSCR.ERR == '1' then return; // Error flag set: ignore write // The Software lock is OPTIONAL. if memory_mapped && EDLSR.SLK == '1' then return; // Software lock locked: ignore write if EDSCR.RXfull == '1' || (Halted() && EDSCR.MA == '1' && EDSCR.ITE == '0') then EDSCR.RXO = '1'; EDSCR.ERR = '1'; // Overrun condition: ignore write return; EDSCR.RXfull = '1'; DTRRX = value; if Halted() && EDSCR.MA == '1' then EDSCR.ITE = '0'; // See comments in EDITR[] (external write) if !UsingAArch32() then ExecuteA64(0xD5330501<31:0>); // A64 "MRS X1,DBGDTRRX_EL0" ExecuteA64(0xB8004401<31:0>); // A64 "STR W1,[X0],#4" X[1] = bits(64) UNKNOWN; else ExecuteT32(0xEE10<15:0> /*hw1*/, 0x1E15<15:0> /*hw2*/); // T32 "MRS R1,DBGDTRRXint" ExecuteT32(0xF840<15:0> /*hw1*/, 0x1B04<15:0> /*hw2*/); // T32 "STR R1,[R0],#4" R[1] = bits(32) UNKNOWN; // If the store aborts, the Data Abort exception is taken and EDSCR.ERR is set to 1 if EDSCR.ERR == '1' then EDSCR.RXfull = bit UNKNOWN; DBGDTRRX_EL0 = bits(64) UNKNOWN; else // "MRS X1,DBGDTRRX_EL0" calls DBGDTR_EL0[] (read) which clears RXfull. assert EDSCR.RXfull == '0'; EDSCR.ITE = '1'; // See comments in EDITR[] (external write) return; // DBGDTRRX_EL0[] (external read) // ============================== bits(32) DBGDTRRX_EL0[boolean memory_mapped] return DTRRX;

Library pseudocode for shared/debug/dccanditr/DBGDTRTX_EL0

// DBGDTRTX_EL0[] (external read) // ============================== // Called on reads of debug register 0x080. bits(32) DBGDTRTX_EL0[boolean memory_mapped] if EDPRSR<6:5,0> != '001' then // Check DLK, OSLK and PU bits IMPLEMENTATION_DEFINED "generate error response"; return bits(32) UNKNOWN; underrun = EDSCR.TXfull == '0' || (Halted() && EDSCR.MA == '1' && EDSCR.ITE == '0'); value = if underrun then bits(32) UNKNOWN else DTRTX; if EDSCR.ERR == '1' then return value; // Error flag set: no side-effects // The Software lock is OPTIONAL. if memory_mapped && EDLSR.SLK == '1' then // Software lock locked: no side-effects return value; if underrun then EDSCR.TXU = '1'; EDSCR.ERR = '1'; // Underrun condition: block side-effects return value; // Return UNKNOWN EDSCR.TXfull = '0'; if Halted() && EDSCR.MA == '1' then EDSCR.ITE = '0'; // See comments in EDITR[] (external write) if !UsingAArch32() then ExecuteA64(0xB8404401<31:0>); // A64 "LDR W1,[X0],#4" else ExecuteT32(0xF850<15:0> /*hw1*/, 0x1B04<15:0> /*hw2*/); // T32 "LDR R1,[R0],#4" // If the load aborts, the Data Abort exception is taken and EDSCR.ERR is set to 1 if EDSCR.ERR == '1' then EDSCR.TXfull = bit UNKNOWN; DBGDTRTX_EL0 = bits(64) UNKNOWN; else if !UsingAArch32() then ExecuteA64(0xD5130501<31:0>); // A64 "MSR DBGDTRTX_EL0,X1" else ExecuteT32(0xEE00<15:0> /*hw1*/, 0x1E15<15:0> /*hw2*/); // T32 "MSR DBGDTRTXint,R1" // "MSR DBGDTRTX_EL0,X1" calls DBGDTR_EL0[] (write) which sets TXfull. assert EDSCR.TXfull == '1'; if !UsingAArch32() then X[1] = bits(64) UNKNOWN; else R[1] = bits(32) UNKNOWN; EDSCR.ITE = '1'; // See comments in EDITR[] (external write) return value; // DBGDTRTX_EL0[] (external write) // =============================== DBGDTRTX_EL0[boolean memory_mapped] = bits(32) value // The Software lock is OPTIONAL. if memory_mapped && EDLSR.SLK == '1' then return; // Software lock locked: ignore write DTRTX = value; return;

Library pseudocode for shared/debug/dccanditr/DBGDTR_EL0

// DBGDTR_EL0[] (write) // ==================== // System register writes to DBGDTR_EL0, DBGDTRTX_EL0 (AArch64) and DBGDTRTXint (AArch32) DBGDTR_EL0[] = bits(N) value_in bits(N) value = value_in; // For MSR DBGDTRTX_EL0,<Rt> N=32, value=X[t]<31:0>, X[t]<63:32> is ignored // For MSR DBGDTR_EL0,<Xt> N=64, value=X[t]<63:0> assert N IN {32,64}; if EDSCR.TXfull == '1' then value = bits(N) UNKNOWN; // On a 64-bit write, implement a half-duplex channel if N == 64 then DTRRX = value<63:32>; DTRTX = value<31:0>; // 32-bit or 64-bit write EDSCR.TXfull = '1'; return; // DBGDTR_EL0[] (read) // =================== // System register reads of DBGDTR_EL0, DBGDTRRX_EL0 (AArch64) and DBGDTRRXint (AArch32) bits(N) DBGDTR_EL0[] // For MRS <Rt>,DBGDTRTX_EL0 N=32, X[t]=Zeros(32):result // For MRS <Xt>,DBGDTR_EL0 N=64, X[t]=result assert N IN {32,64}; bits(N) result; if EDSCR.RXfull == '0' then result = bits(N) UNKNOWN; else // On a 64-bit read, implement a half-duplex channel // NOTE: the word order is reversed on reads with regards to writes if N == 64 then result<63:32> = DTRTX; result<31:0> = DTRRX; EDSCR.RXfull = '0'; return result;
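The half-duplex channel and the word-order reversal called out in the NOTE can be seen in this toy C model (flags, UNKNOWN values and the 32-bit forms are omitted):

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

static uint32_t DTRRX, DTRTX;

static void dbgdtr_el0_write64(uint64_t value)
{
    DTRRX = (uint32_t)(value >> 32);   /* value<63:32> */
    DTRTX = (uint32_t)value;           /* value<31:0>  */
}

static uint64_t dbgdtr_el0_read64(void)
{
    /* word order is reversed on reads with regard to writes */
    return ((uint64_t)DTRTX << 32) | DTRRX;
}

int main(void)
{
    dbgdtr_el0_write64(0x1122334455667788ull);
    assert(dbgdtr_el0_read64() == 0x5566778811223344ull);
    printf("0x%016llx\n", (unsigned long long)dbgdtr_el0_read64());
    return 0;
}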

Library pseudocode for shared/debug/dccanditr/DTR

bits(32) DTRRX; bits(32) DTRTX;

Library pseudocode for shared/debug/dccanditr/EDITR

// EDITR[] (external write) // ======================== // Called on writes to debug register 0x084. EDITR[boolean memory_mapped] = bits(32) value if EDPRSR<6:5,0> != '001' then // Check DLK, OSLK and PU bits IMPLEMENTATION_DEFINED "generate error response"; return; if EDSCR.ERR == '1' then return; // Error flag set: ignore write // The Software lock is OPTIONAL. if memory_mapped && EDLSR.SLK == '1' then return; // Software lock locked: ignore write if !Halted() then return; // Non-debug state: ignore write if EDSCR.ITE == '0' || EDSCR.MA == '1' then EDSCR.ITO = '1'; EDSCR.ERR = '1'; // Overrun condition: block write return; // ITE indicates whether the processor is ready to accept another instruction; the processor // may support multiple outstanding instructions. Unlike the "InstrCompl" flag in [v7A] there // is no indication that the pipeline is empty (all instructions have completed). In this // pseudocode, the assumption is that only one instruction can be executed at a time, // meaning ITE acts like "InstrCompl". EDSCR.ITE = '0'; if !UsingAArch32() then ExecuteA64(value); else ExecuteT32(value<15:0> /*hw1*/, value<31:16> /*hw2*/); EDSCR.ITE = '1'; return;

Library pseudocode for shared/debug/halting/DCPSInstruction

// DCPSInstruction() // ================= // Operation of the DCPS instruction in Debug state DCPSInstruction(bits(2) target_el) SynchronizeContext(); bits(2) handle_el; case target_el of when EL1 if PSTATE.EL == EL2 || (PSTATE.EL == EL3 && !UsingAArch32()) then handle_el = PSTATE.EL; elsif EL2Enabled() && HCR_EL2.TGE == '1' then UNDEFINED; else handle_el = EL1; when EL2 if !HaveEL(EL2) then UNDEFINED; elsif PSTATE.EL == EL3 && !UsingAArch32() then handle_el = EL3; elsif !IsSecureEL2Enabled() && CurrentSecurityState() == SS_Secure then UNDEFINED; else handle_el = EL2; when EL3 if EDSCR.SDD == '1' || !HaveEL(EL3) then UNDEFINED; else handle_el = EL3; otherwise Unreachable(); from_secure = CurrentSecurityState() == SS_Secure; if ELUsingAArch32(handle_el) then if PSTATE.M == M32_Monitor then SCR.NS = '0'; assert UsingAArch32(); // Cannot move from AArch64 to AArch32 case handle_el of when EL1 AArch32.WriteMode(M32_Svc); if HavePANExt() && SCTLR.SPAN == '0' then PSTATE.PAN = '1'; when EL2 AArch32.WriteMode(M32_Hyp); when EL3 AArch32.WriteMode(M32_Monitor); if HavePANExt() then if !from_secure then PSTATE.PAN = '0'; elsif SCTLR.SPAN == '0' then PSTATE.PAN = '1'; if handle_el == EL2 then ELR_hyp = bits(32) UNKNOWN; HSR = bits(32) UNKNOWN; else LR = bits(32) UNKNOWN; SPSR[] = bits(32) UNKNOWN; PSTATE.E = SCTLR[].EE; DLR = bits(32) UNKNOWN; DSPSR = bits(32) UNKNOWN; else // Targeting AArch64 if UsingAArch32() then AArch64.MaybeZeroRegisterUppers(); PSTATE.nRW = '0'; PSTATE.SP = '1'; PSTATE.EL = handle_el; if HavePANExt() && ((handle_el == EL1 && SCTLR_EL1.SPAN == '0') || (handle_el == EL2 && HCR_EL2.E2H == '1' && HCR_EL2.TGE == '1' && SCTLR_EL2.SPAN == '0')) then PSTATE.PAN = '1'; ELR[] = bits(64) UNKNOWN; SPSR[] = bits(64) UNKNOWN; ESR[] = bits(64) UNKNOWN; DLR_EL0 = bits(64) UNKNOWN; DSPSR_EL0 = bits(64) UNKNOWN; if HaveUAOExt() then PSTATE.UAO = '0'; if HaveMTEExt() then PSTATE.TCO = '1'; UpdateEDSCRFields(); // Update EDSCR PE state flags sync_errors = HaveIESB() && SCTLR[].IESB == '1'; if HaveDoubleFaultExt() && !UsingAArch32() then sync_errors = sync_errors || (SCR_EL3.EA == '1' && SCR_EL3.NMEA == '1' && PSTATE.EL == EL3); // SCTLR[].IESB might be ignored in Debug state. if !ConstrainUnpredictableBool(Unpredictable_IESBinDebug) then sync_errors = FALSE; if sync_errors then SynchronizeErrors();

Library pseudocode for shared/debug/halting/DRPSInstruction

// DRPSInstruction() // ================= // Operation of the A64 DRPS and T32 ERET instructions in Debug state DRPSInstruction() SynchronizeContext(); sync_errors = HaveIESB() && SCTLR[].IESB == '1'; if HaveDoubleFaultExt() && !UsingAArch32() then sync_errors = sync_errors || (SCR_EL3.EA == '1' && SCR_EL3.NMEA == '1' && PSTATE.EL == EL3); // SCTLR[].IESB might be ignored in Debug state. if !ConstrainUnpredictableBool(Unpredictable_IESBinDebug) then sync_errors = FALSE; if sync_errors then SynchronizeErrors(); DebugRestorePSR(); return;

Library pseudocode for shared/debug/halting/DebugHalt

constant bits(6) DebugHalt_Breakpoint = '000111'; constant bits(6) DebugHalt_EDBGRQ = '010011'; constant bits(6) DebugHalt_Step_Normal = '011011'; constant bits(6) DebugHalt_Step_Exclusive = '011111'; constant bits(6) DebugHalt_OSUnlockCatch = '100011'; constant bits(6) DebugHalt_ResetCatch = '100111'; constant bits(6) DebugHalt_Watchpoint = '101011'; constant bits(6) DebugHalt_HaltInstruction = '101111'; constant bits(6) DebugHalt_SoftwareAccess = '110011'; constant bits(6) DebugHalt_ExceptionCatch = '110111'; constant bits(6) DebugHalt_Step_NoSyndrome = '111011';

Library pseudocode for shared/debug/halting/DebugRestorePSR

// DebugRestorePSR() // ================= DebugRestorePSR() // PSTATE.{N,Z,C,V,Q,GE,SS,D,A,I,F} are not observable and ignored in Debug state, so // behave as if UNKNOWN. if UsingAArch32() then bits(32) spsr = SPSR[]; SetPSTATEFromPSR(spsr); PSTATE.<N,Z,C,V,Q,GE,SS,A,I,F> = bits(13) UNKNOWN; // In AArch32, all instructions are T32 and unconditional. PSTATE.IT = '00000000'; PSTATE.T = '1'; // PSTATE.J is RES0 DLR = bits(32) UNKNOWN; DSPSR = bits(32) UNKNOWN; else bits(64) spsr = SPSR[]; SetPSTATEFromPSR(spsr); PSTATE.<N,Z,C,V,SS,D,A,I,F> = bits(9) UNKNOWN; DLR_EL0 = bits(64) UNKNOWN; DSPSR_EL0 = bits(64) UNKNOWN; UpdateEDSCRFields(); // Update EDSCR PE state flags

Library pseudocode for shared/debug/halting/DisableITRAndResumeInstructionPrefetch

DisableITRAndResumeInstructionPrefetch();

Library pseudocode for shared/debug/halting/ExecuteA64

// Execute an A64 instruction in Debug state. ExecuteA64(bits(32) instr);

Library pseudocode for shared/debug/halting/ExecuteT32

// Execute a T32 instruction in Debug state. ExecuteT32(bits(16) hw1, bits(16) hw2);

Library pseudocode for shared/debug/halting/ExitDebugState

// ExitDebugState() // ================ ExitDebugState() assert Halted(); SynchronizeContext(); // Although EDSCR.STATUS signals that the PE is restarting, debuggers must use EDPRSR.SDR to // detect that the PE has restarted. EDSCR.STATUS = '000001'; // Signal restarting // Clear any pending Halting debug events if Havev8p8Debug() then EDESR<3:0> = '0000'; else EDESR<2:0> = '000'; bits(64) new_pc; bits(64) spsr; if UsingAArch32() then new_pc = ZeroExtend(DLR); spsr = ZeroExtend(DSPSR); else new_pc = DLR_EL0; spsr = DSPSR_EL0; // If this is an illegal return, SetPSTATEFromPSR() will set PSTATE.IL. if UsingAArch32() then SetPSTATEFromPSR(spsr<31:0>); // Can update privileged bits, even at EL0 else SetPSTATEFromPSR(spsr); // Can update privileged bits, even at EL0 boolean branch_conditional = FALSE; if UsingAArch32() then if ConstrainUnpredictableBool(Unpredictable_RESTARTALIGNPC) then new_pc<0> = '0'; // AArch32 branch BranchTo(new_pc<31:0>, BranchType_DBGEXIT, branch_conditional); else // If targeting AArch32 then possibly zero the 32 most significant bits of the target PC if spsr<4> == '1' && ConstrainUnpredictableBool(Unpredictable_RESTARTZEROUPPERPC) then new_pc<63:32> = Zeros(); // A type of branch that is never predicted BranchTo(new_pc, BranchType_DBGEXIT, branch_conditional); (EDSCR.STATUS,EDPRSR.SDR) = ('000010','1'); // Atomically signal restarted UpdateEDSCRFields(); // Stop signalling PE state DisableITRAndResumeInstructionPrefetch(); return;

Library pseudocode for shared/debug/halting/Halt

// Halt() // ====== Halt(bits(6) reason) boolean is_async = FALSE; Halt(reason, is_async); // Halt() // ====== Halt(bits(6) reason, boolean is_async) CTI_SignalEvent(CrossTriggerIn_CrossHalt); // Trigger other cores to halt bits(64) preferred_restart_address = ThisInstrAddr(); bits(32) spsr_32; bits(64) spsr_64; if UsingAArch32() then spsr_32 = GetPSRFromPSTATE(DebugState); else spsr_64 = GetPSRFromPSTATE(DebugState); if (HaveBTIExt() && !is_async && !(reason IN {DebugHalt_Step_Normal, DebugHalt_Step_Exclusive, DebugHalt_Step_NoSyndrome, DebugHalt_Breakpoint, DebugHalt_HaltInstruction}) && ConstrainUnpredictableBool(Unpredictable_ZEROBTYPE)) then if UsingAArch32() then spsr_32<11:10> = '00'; else spsr_64<11:10> = '00'; if UsingAArch32() then DLR = preferred_restart_address<31:0>; DSPSR = spsr_32; else DLR_EL0 = preferred_restart_address; DSPSR_EL0 = spsr_64; EDSCR.ITE = '1'; EDSCR.ITO = '0'; if CurrentSecurityState() == SS_Secure then EDSCR.SDD = '0'; // If entered in Secure state, allow debug elsif HaveEL(EL3) then EDSCR.SDD = if ExternalSecureInvasiveDebugEnabled() then '0' else '1'; else EDSCR.SDD = '1'; // Otherwise EDSCR.SDD is RES1 EDSCR.MA = '0'; // In Debug state: // * PSTATE.{SS,SSBS,D,A,I,F} are not observable and ignored so behave-as-if UNKNOWN. // * PSTATE.{N,Z,C,V,Q,GE,E,M,nRW,EL,SP,DIT} are also not observable, but since these // are not changed on exception entry, this function also leaves them unchanged. // * PSTATE.{IT,T} are ignored. // * PSTATE.IL is ignored and behave-as-if 0. // * PSTATE.BTYPE is ignored and behave-as-if 0. // * PSTATE.TCO is set 1. // * PSTATE.{UAO,PAN} are observable and not changed on entry into Debug state. if UsingAArch32() then PSTATE.<IT,SS,SSBS,A,I,F,T> = bits(14) UNKNOWN; else PSTATE.<SS,SSBS,D,A,I,F> = bits(6) UNKNOWN; PSTATE.TCO = '1'; PSTATE.BTYPE = '00'; PSTATE.IL = '0'; StopInstructionPrefetchAndEnableITR(); EDSCR.STATUS = reason; // Signal entered Debug state UpdateEDSCRFields(); // Update EDSCR PE state flags. return;
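One detail worth isolating is the EDSCR.SDD value written on Debug state entry. A hedged C sketch of just that decision, with the inputs reduced to booleans:

#include <stdbool.h>
#include <stdio.h>

/* Mirrors the Halt() decision: debug of Secure state is reported as disallowed
 * (SDD = 1) unless the PE halted in Secure state or Secure invasive debug is
 * enabled; without EL3, SDD is RES1. */
static int edscr_sdd_on_halt(bool halted_in_secure, bool have_el3,
                             bool secure_invasive_debug_enabled)
{
    if (halted_in_secure) return 0;
    if (have_el3) return secure_invasive_debug_enabled ? 0 : 1;
    return 1;
}

int main(void)
{
    printf("%d\n", edscr_sdd_on_halt(false, true, false));   /* 1 */
    printf("%d\n", edscr_sdd_on_halt(true,  true, false));   /* 0 */
    return 0;
}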

Library pseudocode for shared/debug/halting/HaltOnBreakpointOrWatchpoint

// HaltOnBreakpointOrWatchpoint() // ============================== // Returns TRUE if the Breakpoint and Watchpoint debug events should be considered for Debug // state entry, FALSE if they should be considered for a debug exception. boolean HaltOnBreakpointOrWatchpoint() return HaltingAllowed() && EDSCR.HDE == '1' && OSLSR_EL1.OSLK == '0';

Library pseudocode for shared/debug/halting/Halted

// Halted() // ======== boolean Halted() return !(EDSCR.STATUS IN {'000001', '000010'}); // Halted

Library pseudocode for shared/debug/halting/HaltingAllowed

// HaltingAllowed() // ================ // Returns TRUE if halting is currently allowed, FALSE if halting is prohibited. boolean HaltingAllowed() if Halted() || DoubleLockStatus() then return FALSE; ss = CurrentSecurityState(); case ss of when SS_NonSecure return ExternalInvasiveDebugEnabled(); when SS_Secure return ExternalSecureInvasiveDebugEnabled();

Library pseudocode for shared/debug/halting/Restarting

// Restarting() // ============ boolean Restarting() return EDSCR.STATUS == '000001'; // Restarting

Library pseudocode for shared/debug/halting/StopInstructionPrefetchAndEnableITR

StopInstructionPrefetchAndEnableITR();

Library pseudocode for shared/debug/halting/UpdateEDSCRFields

// UpdateEDSCRFields() // =================== // Update EDSCR PE state fields UpdateEDSCRFields() if !Halted() then EDSCR.EL = '00'; EDSCR.NS = bit UNKNOWN; EDSCR.RW = '1111'; else EDSCR.EL = PSTATE.EL; ss = CurrentSecurityState(); EDSCR.NS = if ss == SS_Secure then '0' else '1'; bits(4) RW; RW<1> = if ELUsingAArch32(EL1) then '0' else '1'; if PSTATE.EL != EL0 then RW<0> = RW<1>; else RW<0> = if UsingAArch32() then '0' else '1'; if !HaveEL(EL2) || (HaveEL(EL3) && SCR_GEN[].NS == '0' && !IsSecureEL2Enabled()) then RW<2> = RW<1>; else RW<2> = if ELUsingAArch32(EL2) then '0' else '1'; if !HaveEL(EL3) then RW<3> = RW<2>; else RW<3> = if ELUsingAArch32(EL3) then '0' else '1'; // The least-significant bits of EDSCR.RW are UNKNOWN if any higher EL is using AArch32. if RW<3> == '0' then RW<2:0> = bits(3) UNKNOWN; elsif RW<2> == '0' then RW<1:0> = bits(2) UNKNOWN; elsif RW<1> == '0' then RW<0> = bit UNKNOWN; EDSCR.RW = RW; return;
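The EDSCR.RW nibble is built from per-EL register-width bits. A simplified C sketch, assuming EL2 and EL3 are implemented and EL2 is accessible in the current Security state, and omitting the final UNKNOWN-masking step:

#include <stdbool.h>
#include <stdio.h>

/* el_using_aarch32[k] reports whether ELk is using AArch32; current_el is the
 * halted Exception level (element 0 is only consulted when halted at EL0). */
static unsigned edscr_rw(const bool el_using_aarch32[4], unsigned current_el)
{
    bool rw1 = !el_using_aarch32[1];
    bool rw0 = (current_el != 0) ? rw1 : !el_using_aarch32[0];
    bool rw2 = !el_using_aarch32[2];
    bool rw3 = !el_using_aarch32[3];
    return (unsigned)(rw0 << 0 | rw1 << 1 | rw2 << 2 | rw3 << 3);
}

int main(void)
{
    /* EL3/EL2 using AArch64, EL1/EL0 using AArch32, halted at EL1 */
    bool a32[4] = { true, true, false, false };
    printf("EDSCR.RW = 0x%X\n", edscr_rw(a32, 1));   /* 0xC */
    return 0;
}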

Library pseudocode for shared/debug/haltingevents/CheckExceptionCatch

// CheckExceptionCatch() // ===================== // Check whether an Exception Catch debug event is set on the current Exception level CheckExceptionCatch(boolean exception_entry) // Called after an exception entry or exit, that is, such that the Security state // and PSTATE.EL are correct for the exception target. When FEAT_Debugv8p2 // is not implemented, this function might also be called at any time. ss = SecurityStateAtEL(PSTATE.EL); base = if ss == SS_Secure then 0 else 4; if HaltingAllowed() then boolean halt; if HaveExtendedECDebugEvents() then exception_exit = !exception_entry; increment = 8; ctrl = EDECCR<UInt(PSTATE.EL) + base + increment>:EDECCR<UInt(PSTATE.EL) + base>; case ctrl of when '00' halt = FALSE; when '01' halt = TRUE; when '10' halt = (exception_exit == TRUE); when '11' halt = (exception_entry == TRUE); else halt = (EDECCR<UInt(PSTATE.EL) + base> == '1'); if halt then if Havev8p8Debug() && exception_entry then EDESR.EC = '1'; else Halt(DebugHalt_ExceptionCatch);
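With FEAT_Debugv8p2 extended Exception Catch controls, each Exception level has a two-bit control split across EDECCR at bit (EL + base) and bit (EL + base + 8). A C sketch of that lookup (illustrative only):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static bool exception_catch_halts(uint32_t edeccr, unsigned el, bool secure,
                                  bool exception_entry)
{
    unsigned base = secure ? 0 : 4;
    unsigned ctrl = ((edeccr >> (el + base + 8)) & 1u) << 1 |
                    ((edeccr >> (el + base)) & 1u);
    switch (ctrl) {
    case 0:  return false;              /* disabled            */
    case 1:  return true;               /* entry and exit      */
    case 2:  return !exception_entry;   /* exception exit only */
    default: return exception_entry;    /* exception entry only */
    }
}

int main(void)
{
    uint32_t edeccr = 1u << 5 | 1u << 13;   /* ctrl = '11' for Non-secure EL1 */
    printf("%d\n", exception_catch_halts(edeccr, 1, false, true));    /* 1 */
    printf("%d\n", exception_catch_halts(edeccr, 1, false, false));   /* 0 */
    return 0;
}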

Library pseudocode for shared/debug/haltingevents/CheckHaltingStep

// CheckHaltingStep() // ================== // Check whether EDESR.SS has been set by Halting Step CheckHaltingStep(boolean is_async) if HaltingAllowed() && EDESR.SS == '1' then // The STATUS code depends on how we arrived at the state where EDESR.SS == 1. if HaltingStep_DidNotStep() then Halt(DebugHalt_Step_NoSyndrome, is_async); elsif HaltingStep_SteppedEX() then Halt(DebugHalt_Step_Exclusive, is_async); else Halt(DebugHalt_Step_Normal, is_async);

Library pseudocode for shared/debug/haltingevents/CheckOSUnlockCatch

// CheckOSUnlockCatch() // ==================== // Called on unlocking the OS Lock to pend an OS Unlock Catch debug event CheckOSUnlockCatch() if ((HaveDoPD() && CTIDEVCTL.OSUCE == '1') || (!HaveDoPD() && EDECR.OSUCE == '1')) then if !Halted() then EDESR.OSUC = '1';

Library pseudocode for shared/debug/haltingevents/CheckPendingExceptionCatch

// CheckPendingExceptionCatch() // ============================ // Check whether EDESR.EC has been set by an Exception Catch debug event. CheckPendingExceptionCatch(boolean is_async) if Havev8p8Debug() && HaltingAllowed() && EDESR.EC == '1' then Halt(DebugHalt_ExceptionCatch, is_async);

Library pseudocode for shared/debug/haltingevents/CheckPendingOSUnlockCatch

// CheckPendingOSUnlockCatch() // =========================== // Check whether EDESR.OSUC has been set by an OS Unlock Catch debug event CheckPendingOSUnlockCatch() if HaltingAllowed() && EDESR.OSUC == '1' then boolean is_async = TRUE; Halt(DebugHalt_OSUnlockCatch, is_async);

Library pseudocode for shared/debug/haltingevents/CheckPendingResetCatch

// CheckPendingResetCatch() // ======================== // Check whether EDESR.RC has been set by a Reset Catch debug event CheckPendingResetCatch() if HaltingAllowed() && EDESR.RC == '1' then boolean is_async = TRUE; Halt(DebugHalt_ResetCatch, is_async);

Library pseudocode for shared/debug/haltingevents/CheckResetCatch

// CheckResetCatch() // ================= // Called after reset CheckResetCatch() if (HaveDoPD() && CTIDEVCTL.RCE == '1') || (!HaveDoPD() && EDECR.RCE == '1') then EDESR.RC = '1'; // If halting is allowed then halt immediately if HaltingAllowed() then Halt(DebugHalt_ResetCatch);

Library pseudocode for shared/debug/haltingevents/CheckSoftwareAccessToDebugRegisters

// CheckSoftwareAccessToDebugRegisters() // ===================================== // Check for access to Breakpoint and Watchpoint registers. CheckSoftwareAccessToDebugRegisters() os_lock = (if ELUsingAArch32(EL1) then DBGOSLSR.OSLK else OSLSR_EL1.OSLK); if HaltingAllowed() && EDSCR.TDA == '1' && os_lock == '0' then Halt(DebugHalt_SoftwareAccess);

Library pseudocode for shared/debug/haltingevents/ExternalDebugRequest

// ExternalDebugRequest() // ====================== ExternalDebugRequest() if HaltingAllowed() then boolean is_async = TRUE; Halt(DebugHalt_EDBGRQ, is_async); // Otherwise the CTI continues to assert the debug request until it is taken.

Library pseudocode for shared/debug/haltingevents/HaltingStep_DidNotStep

// Returns TRUE if the previously executed instruction was executed in the inactive state, that is, // if it was not itself stepped. boolean HaltingStep_DidNotStep();

Library pseudocode for shared/debug/haltingevents/HaltingStep_SteppedEX

// Returns TRUE if the previously executed instruction was a Load-Exclusive class instruction // executed in the active-not-pending state. boolean HaltingStep_SteppedEX();

Library pseudocode for shared/debug/haltingevents/RunHaltingStep

// RunHaltingStep() // ================ RunHaltingStep(boolean exception_generated, bits(2) exception_target, boolean syscall, boolean reset) // "exception_generated" is TRUE if the previous instruction generated a synchronous exception // or was cancelled by an asynchronous exception. // // if "exception_generated" is TRUE then "exception_target" is the target of the exception, and // "syscall" is TRUE if the exception is a synchronous exception where the preferred return // address is the instruction following that which generated the exception. // // "reset" is TRUE if exiting reset state into the highest EL. if reset then assert !Halted(); // Cannot come out of reset halted active = EDECR.SS == '1' && !Halted(); if active && reset then // Coming out of reset with EDECR.SS set EDESR.SS = '1'; elsif active && HaltingAllowed() then boolean advance; if exception_generated && exception_target == EL3 then advance = syscall || ExternalSecureInvasiveDebugEnabled(); else advance = TRUE; if advance then EDESR.SS = '1'; return;

Library pseudocode for shared/debug/interrupts/ExternalDebugInterruptsDisabled

// ExternalDebugInterruptsDisabled() // ================================= // Determine whether EDSCR disables interrupts routed to 'target'. boolean ExternalDebugInterruptsDisabled(bits(2) target) boolean int_dis; SecurityState ss = SecurityStateAtEL(target); if Havev8p4Debug() then if EDSCR.INTdis[0] == '1' then case ss of when SS_NonSecure int_dis = ExternalInvasiveDebugEnabled(); when SS_Secure int_dis = ExternalSecureInvasiveDebugEnabled(); else int_dis = FALSE; else case target of when EL3 int_dis = (EDSCR.INTdis == '11' && ExternalSecureInvasiveDebugEnabled()); when EL2 int_dis = (EDSCR.INTdis IN {'1x'} && ExternalInvasiveDebugEnabled()); when EL1 if ss == SS_Secure then int_dis = (EDSCR.INTdis IN {'1x'} && ExternalSecureInvasiveDebugEnabled()); else int_dis = (EDSCR.INTdis != '00' && ExternalInvasiveDebugEnabled()); return int_dis;
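For the pre-FEAT_Debugv8p4 case, the decode is a small table over the target EL and Security state. A hedged C sketch:

#include <stdbool.h>
#include <stdio.h>

/* intdis is the 2-bit EDSCR.INTdis field; target_el is 1, 2 or 3. */
static bool ext_debug_interrupts_disabled(unsigned intdis, unsigned target_el,
                                          bool secure, bool ext_invasive,
                                          bool ext_secure_invasive)
{
    switch (target_el) {
    case 3:  return intdis == 3 && ext_secure_invasive;    /* '11' */
    case 2:  return (intdis & 2) && ext_invasive;          /* '1x' */
    default: return secure ? ((intdis & 2) && ext_secure_invasive)
                           : (intdis != 0 && ext_invasive);
    }
}

int main(void)
{
    printf("%d\n", ext_debug_interrupts_disabled(2, 1, false, true, false));  /* 1 */
    printf("%d\n", ext_debug_interrupts_disabled(1, 2, false, true, false));  /* 0 */
    return 0;
}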

Library pseudocode for shared/debug/pmu/GetNumEventCounters

// GetNumEventCounters() // ===================== // Returns the number of event counters implemented. This is indicated to software at the // highest Exception level by PMCR.N in AArch32 state, and PMCR_EL0.N in AArch64 state. integer GetNumEventCounters() return integer IMPLEMENTATION_DEFINED "Number of event counters";

Library pseudocode for shared/debug/pmu/HasElapsed64Cycles

// Returns TRUE if 64 cycles have elapsed between the last count, and FALSE otherwise. boolean HasElapsed64Cycles();

Library pseudocode for shared/debug/pmu/PMUCountValue

// PMUCountValue() // =============== // Implements the PMU threshold function, if implemented. // Returns the value to increment event counter 'n' by, if the event it is // configured to count yields the value 'V' on this cycle. integer PMUCountValue(integer n, integer V) if !HavePMUv3TH() then return V; integer T = UInt(PMEVTYPER_EL0[n].TH); case PMEVTYPER_EL0[n].TC of when '000' return (if V != T then V else 0); // Disabled or not-equal when '001' return (if V != T then 1 else 0); // Not-equal, count when '010' return (if V == T then V else 0); // Equals when '011' return (if V == T then 1 else 0); // Equals, count when '100' return (if V >= T then V else 0); // Greater-than-or-equal when '101' return (if V >= T then 1 else 0); // Greater-than-or-equal, count when '110' return (if V < T then V else 0); // Less-than when '111' return (if V < T then 1 else 0); // Less-than, count
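The 3-bit TC control selects a comparison against the threshold and whether the counter accumulates the event value or a single count per cycle. A C sketch of the same table:

#include <stdio.h>

/* v is the event value this cycle, t the threshold, tc the 3-bit control.
 * Even TC values accumulate the event value, odd values count 1. */
static int pmu_count_value(int v, int t, unsigned tc)
{
    int cond;
    switch (tc >> 1) {
    case 0:  cond = (v != t); break;   /* disabled / not-equal  */
    case 1:  cond = (v == t); break;   /* equals                */
    case 2:  cond = (v >= t); break;   /* greater-than-or-equal */
    default: cond = (v <  t); break;   /* less-than             */
    }
    if (!cond) return 0;
    return (tc & 1) ? 1 : v;
}

int main(void)
{
    printf("%d\n", pmu_count_value(3, 2, 4));   /* >=, value: 3 */
    printf("%d\n", pmu_count_value(3, 2, 5));   /* >=, count: 1 */
    printf("%d\n", pmu_count_value(2, 2, 0));   /* TC==0 behaves as disabled: 0 */
    return 0;
}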

Library pseudocode for shared/debug/pmu/PMUCounterMask

constant integer CYCLE_COUNTER_ID = 31; // PMUCounterMask() // ================ // Return bitmask of accessible PMU counters. bits(32) PMUCounterMask() integer n; if UsingAArch32() then n = AArch32.GetNumEventCountersAccessible(); else n = AArch64.GetNumEventCountersAccessible(); return '1' : ZeroExtend(Ones(n), 31);
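A C sketch of the returned mask, with bit 31 for the cycle counter and the low n bits for the accessible event counters:

#include <stdint.h>
#include <stdio.h>

static uint32_t pmu_counter_mask(unsigned n)   /* n <= 31 accessible counters */
{
    uint32_t events = (n == 0) ? 0 : (uint32_t)((1ull << n) - 1);
    return (1u << 31) | events;   /* '1' : ZeroExtend(Ones(n), 31) */
}

int main(void)
{
    printf("0x%08X\n", pmu_counter_mask(6));   /* 0x8000003F */
    return 0;
}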

Library pseudocode for shared/debug/pmu/PMUEvent

constant bits(16) PMU_EVENT_SW_INCR = 0x0000<15:0>; constant bits(16) PMU_EVENT_INST_RETIRED = 0x0008<15:0>; constant bits(16) PMU_EVENT_EXC_TAKEN = 0x0009<15:0>; constant bits(16) PMU_EVENT_CPU_CYCLES = 0x0011<15:0>; constant bits(16) PMU_EVENT_INST_SPEC = 0x001B<15:0>; constant bits(16) PMU_EVENT_CHAIN = 0x001E<15:0>; // PMUEvent() // ========== // Generate a PMU event. By default, increment by 1. PMUEvent(bits(16) event) PMUEvent(event, 1); // PMUEvent() // ========== // Accumulate a PMU Event. PMUEvent(bits(16) event, integer increment) integer counters = GetNumEventCounters(); if counters != 0 then for idx = 0 to counters - 1 PMUEvent(event, increment, idx); // PMUEvent() // ========== // Accumulate a PMU Event for a specific event counter. PMUEvent(bits(16) event, integer increment, integer idx) if !HavePMUv3() then return; if UsingAArch32() then if PMEVTYPER[idx].evtCount == event then PMUEventAccumulator[idx] = PMUEventAccumulator[idx] + increment; else if PMEVTYPER_EL0[idx].evtCount == event then PMUEventAccumulator[idx] = PMUEventAccumulator[idx] + increment;

Library pseudocode for shared/debug/pmu/integer

array integer PMUEventAccumulator[0..30]; // Accumulates PMU events for a cycle

Library pseudocode for shared/debug/samplebasedprofiling/CreatePCSample

// CreatePCSample() // ================ CreatePCSample() // In a simple sequential execution of the program, CreatePCSample is executed each time the PE // executes an instruction that can be sampled. An implementation is not constrained such that // reads of EDPCSRlo return the current values of PC, etc. pc_sample.valid = ExternalNoninvasiveDebugAllowed() && !Halted(); pc_sample.pc = ThisInstrAddr(); pc_sample.el = PSTATE.EL; pc_sample.rw = if UsingAArch32() then '0' else '1'; pc_sample.ss = CurrentSecurityState(); pc_sample.contextidr = if ELUsingAArch32(EL1) then CONTEXTIDR else CONTEXTIDR_EL1<31:0>; pc_sample.has_el2 = PSTATE.EL != EL3 && EL2Enabled(); if pc_sample.has_el2 then if ELUsingAArch32(EL2) then pc_sample.vmid = ZeroExtend(VTTBR.VMID, 16); elsif !Have16bitVMID() || VTCR_EL2.VS == '0' then pc_sample.vmid = ZeroExtend(VTTBR_EL2.VMID<7:0>, 16); else pc_sample.vmid = VTTBR_EL2.VMID; if (HaveVirtHostExt() || HaveV82Debug()) && !ELUsingAArch32(EL2) then pc_sample.contextidr_el2 = CONTEXTIDR_EL2<31:0>; else pc_sample.contextidr_el2 = bits(32) UNKNOWN; pc_sample.el0h = PSTATE.EL == EL0 && IsInHost(); return;

Library pseudocode for shared/debug/samplebasedprofiling/EDPCSRlo

// EDPCSRlo[] (read) // ================= bits(32) EDPCSRlo[boolean memory_mapped] if EDPRSR<6:5,0> != '001' then // Check DLK, OSLK and PU bits IMPLEMENTATION_DEFINED "generate error response"; return bits(32) UNKNOWN; // The Software lock is OPTIONAL. update = !memory_mapped || EDLSR.SLK == '0'; // Software locked: no side-effects bits(32) sample; if pc_sample.valid then sample = pc_sample.pc<31:0>; if update then if HaveVirtHostExt() && EDSCR.SC2 == '1' then EDPCSRhi.PC = (if pc_sample.rw == '0' then Zeros(24) else pc_sample.pc<55:32>); EDPCSRhi.EL = pc_sample.el; EDPCSRhi.NS = (if pc_sample.ss == SS_Secure then '0' else '1'); else EDPCSRhi = (if pc_sample.rw == '0' then Zeros(32) else pc_sample.pc<63:32>); EDCIDSR = pc_sample.contextidr; if (HaveVirtHostExt() || HaveV82Debug()) && EDSCR.SC2 == '1' then EDVIDSR = (if pc_sample.has_el2 then pc_sample.contextidr_el2 else bits(32) UNKNOWN); else EDVIDSR.VMID = (if pc_sample.has_el2 && pc_sample.el IN {EL1,EL0} then pc_sample.vmid else Zeros()); EDVIDSR.NS = (if pc_sample.ss == SS_Secure then '0' else '1'); EDVIDSR.E2 = (if pc_sample.el == EL2 then '1' else '0'); EDVIDSR.E3 = (if pc_sample.el == EL3 then '1' else '0') AND pc_sample.rw; // The conditions for setting HV are not specified if PCSRhi is zero. // An example implementation may be "pc_sample.rw". EDVIDSR.HV = (if !IsZero(EDPCSRhi) then '1' else bit IMPLEMENTATION_DEFINED "0 or 1"); else sample = Ones(32); if update then EDPCSRhi = bits(32) UNKNOWN; EDCIDSR = bits(32) UNKNOWN; EDVIDSR = bits(32) UNKNOWN; return sample;

Library pseudocode for shared/debug/samplebasedprofiling/PCSample

type PCSample is ( boolean valid, bits(64) pc, bits(2) el, bit rw, SecurityState ss, boolean has_el2, bits(32) contextidr, bits(32) contextidr_el2, boolean el0h, bits(16) vmid ) PCSample pc_sample;

Library pseudocode for shared/debug/samplebasedprofiling/PMPCSR

// PMPCSR[] (read) // =============== bits(32) PMPCSR[boolean memory_mapped] if EDPRSR<6:5,0> != '001' then // Check DLK, OSLK and PU bits IMPLEMENTATION_DEFINED "generate error response"; return bits(32) UNKNOWN; // The Software lock is OPTIONAL. update = !memory_mapped || PMLSR.SLK == '0'; // Software locked: no side-effects bits(32) sample; if pc_sample.valid then sample = pc_sample.pc<31:0>; if update then PMPCSR<55:32> = (if pc_sample.rw == '0' then Zeros(24) else pc_sample.pc<55:32>); PMPCSR.EL = pc_sample.el; PMPCSR.NS = (if pc_sample.ss == SS_Secure then '0' else '1'); PMCID1SR = pc_sample.contextidr; PMCID2SR = if pc_sample.has_el2 then pc_sample.contextidr_el2 else bits(32) UNKNOWN; PMVIDSR.VMID = (if pc_sample.has_el2 && pc_sample.el IN {EL1,EL0} && !pc_sample.el0h then pc_sample.vmid else bits(16) UNKNOWN); else sample = Ones(32); if update then PMPCSR<55:32> = bits(24) UNKNOWN; PMPCSR.EL = bits(2) UNKNOWN; PMPCSR.NS = bit UNKNOWN; PMCID1SR = bits(32) UNKNOWN; PMCID2SR = bits(32) UNKNOWN; PMVIDSR.VMID = bits(16) UNKNOWN; return sample;

Library pseudocode for shared/debug/softwarestep/CheckSoftwareStep

// CheckSoftwareStep() // =================== // Take a Software Step exception if in the active-pending state CheckSoftwareStep() // Other self-hosted debug functions will call AArch32.GenerateDebugExceptions() if called from // AArch32 state. However, because Software Step is only active when the debug target Exception // level is using AArch64, CheckSoftwareStep only calls AArch64.GenerateDebugExceptions(). step_enabled = !ELUsingAArch32(DebugTarget()) && AArch64.GenerateDebugExceptions() && MDSCR_EL1.SS == '1'; if step_enabled && PSTATE.SS == '0' then AArch64.SoftwareStepException();

Library pseudocode for shared/debug/softwarestep/DebugExceptionReturnSS

// DebugExceptionReturnSS() // ======================== // Returns value to write to PSTATE.SS on an exception return or Debug state exit. bit DebugExceptionReturnSS(bits(N) spsr) if UsingAArch32() then assert N == 32; else assert N == 64; assert Halted() || Restarting() || PSTATE.EL != EL0; boolean enabled_at_source; if Restarting() then enabled_at_source = FALSE; elsif UsingAArch32() then enabled_at_source = AArch32.GenerateDebugExceptions(); else enabled_at_source = AArch64.GenerateDebugExceptions(); boolean valid; bits(2) dest_el; if IllegalExceptionReturn(spsr) then dest_el = PSTATE.EL; else (valid, dest_el) = ELFromSPSR(spsr); assert valid; dest_ss = SecurityStateAtEL(dest_el); bit mask; boolean enabled_at_dest; dest_using_32 = (if dest_el == EL0 then spsr<4> == '1' else ELUsingAArch32(dest_el)); if dest_using_32 then enabled_at_dest = AArch32.GenerateDebugExceptionsFrom(dest_el, dest_ss); else mask = spsr<9>; enabled_at_dest = AArch64.GenerateDebugExceptionsFrom(dest_el, dest_ss, mask); ELd = DebugTargetFrom(dest_ss); bit SS_bit; if !ELUsingAArch32(ELd) && MDSCR_EL1.SS == '1' && !enabled_at_source && enabled_at_dest then SS_bit = spsr<21>; else SS_bit = '0'; return SS_bit;

Library pseudocode for shared/debug/softwarestep/SSAdvance

// SSAdvance() // =========== // Advance the Software Step state machine. SSAdvance() // A simpler implementation of this function just clears PSTATE.SS to zero regardless of the // current Software Step state machine. However, this check is made to illustrate that the // processor only needs to consider advancing the state machine from the active-not-pending // state. target = DebugTarget(); step_enabled = !ELUsingAArch32(target) && MDSCR_EL1.SS == '1'; active_not_pending = step_enabled && PSTATE.SS == '1'; if active_not_pending then PSTATE.SS = '0'; return;

Library pseudocode for shared/debug/softwarestep/SoftwareStep_DidNotStep

// Returns TRUE if the previously executed instruction was executed in the // inactive state, that is, if it was not itself stepped. // Might return TRUE or FALSE if the previously executed instruction was an ISB // or ERET executed in the active-not-pending state, or if another exception // was taken before the Software Step exception. Returns FALSE otherwise, // indicating that the previously executed instruction was executed in the // active-not-pending state, that is, the instruction was stepped. boolean SoftwareStep_DidNotStep();

Library pseudocode for shared/debug/softwarestep/SoftwareStep_SteppedEX

// Returns a value that describes the previously executed instruction. The // result is valid only if SoftwareStep_DidNotStep() returns FALSE. // Might return TRUE or FALSE if the instruction was an AArch32 LDREX or LDAEX // that failed its condition code test. Otherwise returns TRUE if the // instruction was a Load-Exclusive class instruction, and FALSE if the // instruction was not a Load-Exclusive class instruction. boolean SoftwareStep_SteppedEX();

Library pseudocode for shared/exceptions/exceptions/ConditionSyndrome

// ConditionSyndrome() // =================== // Return CV and COND fields of instruction syndrome bits(5) ConditionSyndrome() bits(5) syndrome; if UsingAArch32() then cond = AArch32.CurrentCond(); if PSTATE.T == '0' then // A32 syndrome<4> = '1'; // A conditional A32 instruction that is known to pass its condition code check // can be presented either with COND set to 0xE, the value for unconditional, or // the COND value held in the instruction. if ConditionHolds(cond) && ConstrainUnpredictableBool(Unpredictable_ESRCONDPASS) then syndrome<3:0> = '1110'; else syndrome<3:0> = cond; else // T32 // When a T32 instruction is trapped, it is IMPLEMENTATION DEFINED whether: // * CV set to 0 and COND is set to an UNKNOWN value // * CV set to 1 and COND is set to the condition code for the condition that // applied to the instruction. if boolean IMPLEMENTATION_DEFINED "Condition valid for trapped T32" then syndrome<4> = '1'; syndrome<3:0> = cond; else syndrome<4> = '0'; syndrome<3:0> = bits(4) UNKNOWN; else syndrome<4> = '1'; syndrome<3:0> = '1110'; return syndrome;

Library pseudocode for shared/exceptions/exceptions/Exception

enumeration Exception {Exception_Uncategorized, // Uncategorized or unknown reason Exception_WFxTrap, // Trapped WFI or WFE instruction Exception_CP15RTTrap, // Trapped AArch32 MCR or MRC access, coproc=0b1111 Exception_CP15RRTTrap, // Trapped AArch32 MCRR or MRRC access, coproc=0b1111 Exception_CP14RTTrap, // Trapped AArch32 MCR or MRC access, coproc=0b1110 Exception_CP14DTTrap, // Trapped AArch32 LDC or STC access, coproc=0b1110 Exception_CP14RRTTrap, // Trapped AArch32 MRRC access, coproc=0b1110 Exception_AdvSIMDFPAccessTrap, // HCPTR-trapped access to SIMD or FP Exception_FPIDTrap, // Trapped access to SIMD or FP ID register Exception_LDST64BTrap, // Trapped access to ST64BV, ST64BV0, ST64B and LD64B Exception_PACTrap, // Trapped invalid PAC use Exception_IllegalState, // Illegal Execution state Exception_SupervisorCall, // Supervisor Call Exception_HypervisorCall, // Hypervisor Call Exception_MonitorCall, // Monitor Call or Trapped SMC instruction Exception_SystemRegisterTrap, // Trapped MRS or MSR system register access Exception_ERetTrap, // Trapped invalid ERET use Exception_InstructionAbort, // Instruction Abort or Prefetch Abort Exception_PCAlignment, // PC alignment fault Exception_DataAbort, // Data Abort Exception_NV2DataAbort, // Data abort at EL1 reported as being from EL2 Exception_PACFail, // PAC Authentication failure Exception_SPAlignment, // SP alignment fault Exception_FPTrappedException, // IEEE trapped FP exception Exception_SError, // SError interrupt Exception_Breakpoint, // (Hardware) Breakpoint Exception_SoftwareStep, // Software Step Exception_Watchpoint, // Watchpoint Exception_NV2Watchpoint, // Watchpoint at EL1 reported as being from EL2 Exception_SoftwareBreakpoint, // Software Breakpoint Instruction Exception_VectorCatch, // AArch32 Vector Catch Exception_IRQ, // IRQ interrupt Exception_BranchTarget, // Branch Target Identification Exception_MemCpyMemSet, // Exception from a CPY* or SET* instruction Exception_FIQ}; // FIQ interrupt

Library pseudocode for shared/exceptions/exceptions/ExceptionRecord

type ExceptionRecord is ( Exception exceptype, // Exception class bits(25) syndrome, // Syndrome record bits(5) syndrome2, // ST64BV(0) return value register specifier bits(64) vaddress, // Virtual fault address boolean ipavalid, // Validity of Intermediate Physical fault address bit NS, // Intermediate Physical fault address space bits(52) ipaddress, // Intermediate Physical fault address boolean trappedsyscallinst) // Trapped SVC or SMC instruction

Library pseudocode for shared/exceptions/exceptions/ExceptionSyndrome

// ExceptionSyndrome() // =================== // Return a blank exception syndrome record for an exception of the given type. ExceptionRecord ExceptionSyndrome(Exception exceptype) ExceptionRecord r; r.exceptype = exceptype; // Initialize all other fields r.syndrome = Zeros(); r.syndrome2 = Zeros(); r.vaddress = Zeros(); r.ipavalid = FALSE; r.NS = '0'; r.ipaddress = Zeros(); r.trappedsyscallinst = FALSE; return r;

Library pseudocode for shared/functions/aborts/EncodeLDFSC

// EncodeLDFSC() // ============= // Function that gives the Long-descriptor FSC code for types of Fault bits(6) EncodeLDFSC(Fault statuscode, integer level) bits(6) result; if level == -1 then assert Have52BitIPAAndPASpaceExt(); case statuscode of when Fault_AddressSize result = '101001'; when Fault_Translation result = '101011'; when Fault_SyncExternalOnWalk result = '010011'; when Fault_SyncParityOnWalk result = '011011'; assert !HaveRASExt(); otherwise Unreachable(); return result; case statuscode of when Fault_AddressSize result = '0000':level<1:0>; assert level IN {0,1,2,3}; when Fault_AccessFlag result = '0010':level<1:0>; assert level IN {0,1,2,3}; when Fault_Permission result = '0011':level<1:0>; assert level IN {0,1,2,3}; when Fault_Translation result = '0001':level<1:0>; assert level IN {0,1,2,3}; when Fault_SyncExternal result = '010000'; when Fault_SyncExternalOnWalk result = '0101':level<1:0>; assert level IN {0,1,2,3}; when Fault_SyncParity result = '011000'; when Fault_SyncParityOnWalk result = '0111':level<1:0>; assert level IN {0,1,2,3}; when Fault_AsyncParity result = '011001'; when Fault_AsyncExternal result = '010001'; when Fault_Alignment result = '100001'; when Fault_Debug result = '100010'; when Fault_TLBConflict result = '110000'; when Fault_HWUpdateAccessFlag result = '110001'; when Fault_Lockdown result = '110100'; // IMPLEMENTATION DEFINED when Fault_Exclusive result = '110101'; // IMPLEMENTATION DEFINED otherwise Unreachable(); return result;
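For a concrete view of the level-indexed encodings, the following C sketch (illustrative only; not part of the Arm pseudocode library, and all names are ours) reproduces the four level-indexed fault status codes for lookup levels 0 to 3:

#include <assert.h>
#include <stdint.h>

enum fault { FAULT_ADDRESS_SIZE, FAULT_TRANSLATION, FAULT_ACCESS_FLAG, FAULT_PERMISSION };

/* Long-descriptor FSC for the level-indexed faults; the level -1 and
 * non-level-indexed codes from EncodeLDFSC are omitted for brevity. */
static uint8_t encode_ldfsc(enum fault f, int level)
{
    assert(level >= 0 && level <= 3);
    switch (f) {
    case FAULT_ADDRESS_SIZE: return 0x00 | (uint8_t)level; /* 0b0000LL */
    case FAULT_TRANSLATION:  return 0x04 | (uint8_t)level; /* 0b0001LL */
    case FAULT_ACCESS_FLAG:  return 0x08 | (uint8_t)level; /* 0b0010LL */
    case FAULT_PERMISSION:   return 0x0C | (uint8_t)level; /* 0b0011LL */
    }
    return 0x3F; /* not reached for the enumerators above */
}

For example, encode_ldfsc(FAULT_TRANSLATION, 2) yields 0x06 (0b000110), the familiar level 2 translation fault code.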

Library pseudocode for shared/functions/aborts/IPAValid

// IPAValid() // ========== // Return TRUE if the IPA is reported for the abort boolean IPAValid(FaultRecord fault) assert fault.statuscode != Fault_None; if fault.s2fs1walk then return fault.statuscode IN { Fault_AccessFlag, Fault_Permission, Fault_Translation, Fault_AddressSize }; elsif fault.secondstage then return fault.statuscode IN { Fault_AccessFlag, Fault_Translation, Fault_AddressSize }; else return FALSE;

Library pseudocode for shared/functions/aborts/IsAsyncAbort

// IsAsyncAbort() // ============== // Returns TRUE if the abort currently being processed is an asynchronous abort, and FALSE // otherwise. boolean IsAsyncAbort(Fault statuscode) assert statuscode != Fault_None; return (statuscode IN {Fault_AsyncExternal, Fault_AsyncParity}); // IsAsyncAbort() // ============== boolean IsAsyncAbort(FaultRecord fault) return IsAsyncAbort(fault.statuscode);

Library pseudocode for shared/functions/aborts/IsDebugException

// IsDebugException() // ================== boolean IsDebugException(FaultRecord fault) assert fault.statuscode != Fault_None; return fault.statuscode == Fault_Debug;

Library pseudocode for shared/functions/aborts/IsExternalAbort

// IsExternalAbort() // ================= // Returns TRUE if the abort currently being processed is an External abort and FALSE otherwise. boolean IsExternalAbort(Fault statuscode) assert statuscode != Fault_None; return (statuscode IN { Fault_SyncExternal, Fault_SyncParity, Fault_SyncExternalOnWalk, Fault_SyncParityOnWalk, Fault_AsyncExternal, Fault_AsyncParity }); // IsExternalAbort() // ================= boolean IsExternalAbort(FaultRecord fault) return IsExternalAbort(fault.statuscode);

Library pseudocode for shared/functions/aborts/IsExternalSyncAbort

// IsExternalSyncAbort() // ===================== // Returns TRUE if the abort currently being processed is an external // synchronous abort and FALSE otherwise. boolean IsExternalSyncAbort(Fault statuscode) assert statuscode != Fault_None; return (statuscode IN { Fault_SyncExternal, Fault_SyncParity, Fault_SyncExternalOnWalk, Fault_SyncParityOnWalk }); // IsExternalSyncAbort() // ===================== boolean IsExternalSyncAbort(FaultRecord fault) return IsExternalSyncAbort(fault.statuscode);

Library pseudocode for shared/functions/aborts/IsFault

// IsFault() // ========= // Return TRUE if a fault is associated with an address descriptor boolean IsFault(AddressDescriptor addrdesc) return addrdesc.fault.statuscode != Fault_None; // IsFault() // ========= // Return TRUE if a fault is associated with a memory access. boolean IsFault(Fault fault) return fault != Fault_None; // IsFault() // ========= // Return TRUE if a fault is associated with status returned by memory. boolean IsFault(PhysMemRetStatus retstatus) return retstatus.statuscode != Fault_None;

Library pseudocode for shared/functions/aborts/IsSErrorInterrupt

// IsSErrorInterrupt() // =================== // Returns TRUE if the abort currently being processed is an SError interrupt, and FALSE // otherwise. boolean IsSErrorInterrupt(Fault statuscode) assert statuscode != Fault_None; return (statuscode IN {Fault_AsyncExternal, Fault_AsyncParity}); // IsSErrorInterrupt() // =================== boolean IsSErrorInterrupt(FaultRecord fault) return IsSErrorInterrupt(fault.statuscode);

Library pseudocode for shared/functions/aborts/IsSecondStage

// IsSecondStage() // =============== boolean IsSecondStage(FaultRecord fault) assert fault.statuscode != Fault_None; return fault.secondstage;

Library pseudocode for shared/functions/aborts/LSInstructionSyndrome

// Returns the extended syndrome information for a second stage fault. // <10> - Syndrome valid bit. The syndrome is valid only for certain types of access instruction. // <9:8> - Access size. // <7> - Sign extended (for loads). // <6:2> - Transfer register. // <1> - Transfer register is 64-bit. // <0> - Instruction has acquire/release semantics. bits(11) LSInstructionSyndrome();
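A small C sketch (illustrative only; struct and function names are ours, not from the Arm pseudocode) showing how the 11-bit layout documented above unpacks:

#include <stdint.h>

typedef struct {
    unsigned valid;        /* bit <10>  - syndrome valid            */
    unsigned access_size;  /* bits <9:8> - access size              */
    unsigned sign_extend;  /* bit <7>   - sign extended (loads)     */
    unsigned reg;          /* bits <6:2> - transfer register        */
    unsigned reg_is_64;    /* bit <1>   - transfer register is 64-bit */
    unsigned acq_rel;      /* bit <0>   - acquire/release semantics */
} ls_syndrome_t;

static ls_syndrome_t decode_ls_syndrome(uint16_t s)
{
    ls_syndrome_t r;
    r.valid       = (s >> 10) & 1;
    r.access_size = (s >> 8) & 3;
    r.sign_extend = (s >> 7) & 1;
    r.reg         = (s >> 2) & 0x1F;
    r.reg_is_64   = (s >> 1) & 1;
    r.acq_rel     = s & 1;
    return r;
}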

Library pseudocode for shared/functions/cache/CACHE_OP

// CACHE_OP() // ========== // Performs Cache maintenance operations as per CacheRecord. CACHE_OP(CacheRecord cache) IMPLEMENTATION_DEFINED;

Library pseudocode for shared/functions/cache/CPASAtPAS

// CPASAtPAS() // =========== // Get cache PA space for given PA space. CachePASpace CPASAtPAS(PASpace pas) case pas of when PAS_NonSecure return CPAS_NonSecure; when PAS_Secure return CPAS_Secure;

Library pseudocode for shared/functions/cache/CPASAtSecurityState

// CPASAtSecurityState() // ===================== // Get cache PA space for given security state. CachePASpace CPASAtSecurityState(SecurityState ss) case ss of when SS_NonSecure return CPAS_NonSecure; when SS_Secure return CPAS_SecureNonSecure;

Library pseudocode for shared/functions/cache/CacheOp

enumeration CacheOp { CacheOp_Clean, CacheOp_Invalidate, CacheOp_CleanInvalidate };

Library pseudocode for shared/functions/cache/CacheOpScope

enumeration CacheOpScope { CacheOpScope_SetWay, CacheOpScope_PoU, CacheOpScope_PoC, CacheOpScope_PoP, CacheOpScope_PoDP, CacheOpScope_ALLU, CacheOpScope_ALLUIS };

Library pseudocode for shared/functions/cache/CachePASpace

enumeration CachePASpace { CPAS_NonSecure, CPAS_SecureNonSecure, // match entries from Secure or Non-Secure PAS CPAS_Secure };

Library pseudocode for shared/functions/cache/CacheRecord

type CacheRecord is ( AccType acctype, // Access type CacheOp cacheop, // Cache operation CacheOpScope opscope, // Cache operation type CacheType cachetype, // Cache type bits(64) regval, FullAddress paddress, bits(64) vaddress, // For VA operations integer set, // For SW operations integer way, // For SW operations integer level, // For SW operations Shareability shareability, boolean translated, boolean is_vmid_valid, // is vmid valid for current context bits(16) vmid, boolean is_asid_valid, // is asid valid for current context bits(16) asid, SecurityState security, // For cache operations to full cache or by set/way // For operations by address, PA space in paddress CachePASpace cpas )

Library pseudocode for shared/functions/cache/CacheType

enumeration CacheType { CacheType_Data, CacheType_Tag, CacheType_Data_Tag, CacheType_Instruction };

Library pseudocode for shared/functions/cache/DCInstNeedsTranslation

// DCInstNeedsTranslation() // ======================== // Check whether Data Cache operation needs translation. boolean DCInstNeedsTranslation(CacheOpScope opscope) if CLIDR_EL1.LoC == '000' then return !boolean IMPLEMENTATION_DEFINED "No fault generated for DC operations if PoC is before any level of cache"; if CLIDR_EL1.LoUU == '000' && opscope == CacheOpScope_PoU then return !boolean IMPLEMENTATION_DEFINED "No fault generated for DC operations if PoU is before any level of cache"; return TRUE;

Library pseudocode for shared/functions/cache/DecodeSW

// DecodeSW() // ========== // Decode input value into set, way and level for SW instructions. (integer, integer, integer) DecodeSW(bits(64) regval, CacheType cachetype) level = UInt(regval<3:1>); (set, way, linesize) = GetCacheInfo(level, cachetype); return (set, way, level);

Library pseudocode for shared/functions/cache/GetCacheInfo

// Returns numsets, associativity & linesize. (integer, integer, integer) GetCacheInfo(integer level, CacheType cachetype);

Library pseudocode for shared/functions/cache/ICInstNeedsTranslation

// ICInstNeedsTranslation() // ======================== // Check whether Instruction Cache operation needs translation. boolean ICInstNeedsTranslation(CacheOpScope opscope) return boolean IMPLEMENTATION_DEFINED "Instruction Cache needs translation";

Library pseudocode for shared/functions/common/ASR

// ASR() // ===== bits(N) ASR(bits(N) x, integer shift) assert shift >= 0; bits(N) result; if shift == 0 then result = x; else (result, -) = ASR_C(x, shift); return result;

Library pseudocode for shared/functions/common/ASR_C

// ASR_C() // ======= (bits(N), bit) ASR_C(bits(N) x, integer shift) assert shift > 0 && shift < 256; extended_x = SignExtend(x, shift+N); result = extended_x<shift+N-1:shift>; carry_out = extended_x<shift-1>; return (result, carry_out);

Library pseudocode for shared/functions/common/Abs

// Abs() // ===== integer Abs(integer x) return if x >= 0 then x else -x; // Abs() // ===== real Abs(real x) return if x >= 0.0 then x else -x;

Library pseudocode for shared/functions/common/Align

// Align() // ======= integer Align(integer x, integer y) return y * (x DIV y); // Align() // ======= bits(N) Align(bits(N) x, integer y) return Align(UInt(x), y)<N-1:0>;

Library pseudocode for shared/functions/common/BitCount

// BitCount() // ========== integer BitCount(bits(N) x) integer result = 0; for i = 0 to N-1 if x<i> == '1' then result = result + 1; return result;

Library pseudocode for shared/functions/common/CountLeadingSignBits

// CountLeadingSignBits() // ====================== integer CountLeadingSignBits(bits(N) x) return CountLeadingZeroBits(x<N-1:1> EOR x<N-2:0>);
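The XOR trick above works because adjacent bits that agree XOR to 0, so the leading zeros of x<N-1:1> EOR x<N-2:0> count how many bits below the sign bit still equal it. A C sketch for N=32 (illustrative only; names are ours):

#include <stdint.h>

/* Leading zero count over the low 'width' bits of x. */
static int clz_width(uint32_t x, int width)
{
    for (int i = width - 1; i >= 0; i--)
        if (x & (1u << i)) return width - 1 - i;
    return width;
}

/* Leading zeros of the 31-bit value ((x >> 1) ^ x) count the redundant
 * copies of the sign bit, exactly as in the pseudocode above. */
static int count_leading_sign_bits32(uint32_t x)
{
    uint32_t d = ((x >> 1) ^ x) & 0x7FFFFFFFu;
    return clz_width(d, 31);
}

For example, count_leading_sign_bits32(0xFFFFFFFF) and count_leading_sign_bits32(0) both return 31, while count_leading_sign_bits32(1) returns 30.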

Library pseudocode for shared/functions/common/CountLeadingZeroBits

// CountLeadingZeroBits() // ====================== integer CountLeadingZeroBits(bits(N) x) return N - (HighestSetBit(x) + 1);

Library pseudocode for shared/functions/common/Elem

// Elem[] - non-assignment form // ============================ bits(size) Elem[bits(N) vector, integer e, integer size] assert e >= 0 && (e+1)*size <= N; return vector<e*size+size-1 : e*size>; // Elem[] - non-assignment form // ============================ bits(size) Elem[bits(N) vector, integer e] return Elem[vector, e, size]; // Elem[] - assignment form // ======================== Elem[bits(N) &vector, integer e, integer size] = bits(size) value assert e >= 0 && (e+1)*size <= N; vector<(e+1)*size-1:e*size> = value; return; // Elem[] - assignment form // ======================== Elem[bits(N) &vector, integer e] = bits(size) value Elem[vector, e, size] = value; return;
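As a rough C analogue of the Elem[] accessors (illustrative only; the pseudocode forms also support element sizes that are not byte multiples, which this sketch does not), for a vector held as a byte array on a little-endian host:

#include <stdint.h>
#include <string.h>

/* Read element e of 'size_bits' bits (a multiple of 8 here) from vec. */
static uint64_t elem_get(const uint8_t *vec, int e, int size_bits)
{
    uint64_t v = 0;
    memcpy(&v, vec + e * (size_bits / 8), size_bits / 8); /* little-endian host assumed */
    return v;
}

/* Write element e of 'size_bits' bits into vec. */
static void elem_set(uint8_t *vec, int e, int size_bits, uint64_t value)
{
    memcpy(vec + e * (size_bits / 8), &value, size_bits / 8);
}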

Library pseudocode for shared/functions/common/Extend

// Extend() // ======== bits(N) Extend(bits(M) x, integer N, boolean unsigned) return if unsigned then ZeroExtend(x, N) else SignExtend(x, N); // Extend() // ======== bits(N) Extend(bits(M) x, boolean unsigned) return Extend(x, N, unsigned);

Library pseudocode for shared/functions/common/HighestSetBit

// HighestSetBit() // =============== integer HighestSetBit(bits(N) x) for i = N-1 downto 0 if x<i> == '1' then return i; return -1;

Library pseudocode for shared/functions/common/Int

// Int() // ===== integer Int(bits(N) x, boolean unsigned) result = if unsigned then UInt(x) else SInt(x); return result;

Library pseudocode for shared/functions/common/IsOnes

// IsOnes() // ======== boolean IsOnes(bits(N) x) return x == Ones(N);

Library pseudocode for shared/functions/common/IsZero

// IsZero() // ======== boolean IsZero(bits(N) x) return x == Zeros(N);

Library pseudocode for shared/functions/common/IsZeroBit

// IsZeroBit() // =========== bit IsZeroBit(bits(N) x) return if IsZero(x) then '1' else '0';

Library pseudocode for shared/functions/common/LSL

// LSL() // ===== bits(N) LSL(bits(N) x, integer shift) assert shift >= 0; bits(N) result; if shift == 0 then result = x; else (result, -) = LSL_C(x, shift); return result;

Library pseudocode for shared/functions/common/LSL_C

// LSL_C() // ======= (bits(N), bit) LSL_C(bits(N) x, integer shift) assert shift > 0 && shift < 256; extended_x = x : Zeros(shift); result = extended_x<N-1:0>; carry_out = extended_x<N>; return (result, carry_out);

Library pseudocode for shared/functions/common/LSR

// LSR() // ===== bits(N) LSR(bits(N) x, integer shift) assert shift >= 0; bits(N) result; if shift == 0 then result = x; else (result, -) = LSR_C(x, shift); return result;

Library pseudocode for shared/functions/common/LSR_C

// LSR_C() // ======= (bits(N), bit) LSR_C(bits(N) x, integer shift) assert shift > 0 && shift < 256; extended_x = ZeroExtend(x, shift+N); result = extended_x<shift+N-1:shift>; carry_out = extended_x<shift-1>; return (result, carry_out);

Library pseudocode for shared/functions/common/LowestSetBit

// LowestSetBit() // ============== integer LowestSetBit(bits(N) x) for i = 0 to N-1 if x<i> == '1' then return i; return N;

Library pseudocode for shared/functions/common/Max

// Max() // ===== integer Max(integer a, integer b) return if a >= b then a else b; // Max() // ===== real Max(real a, real b) return if a >= b then a else b;

Library pseudocode for shared/functions/common/Min

// Min() // ===== integer Min(integer a, integer b) return if a <= b then a else b; // Min() // ===== real Min(real a, real b) return if a <= b then a else b;

Library pseudocode for shared/functions/common/Ones

// Ones() // ====== bits(N) Ones(integer N) return Replicate('1',N); // Ones() // ====== bits(N) Ones() return Ones(N);

Library pseudocode for shared/functions/common/ROR

// ROR() // ===== bits(N) ROR(bits(N) x, integer shift) assert shift >= 0; bits(N) result; if shift == 0 then result = x; else (result, -) = ROR_C(x, shift); return result;

Library pseudocode for shared/functions/common/ROR_C

// ROR_C() // ======= (bits(N), bit) ROR_C(bits(N) x, integer shift) assert shift != 0 && shift < 256; m = shift MOD N; result = LSR(x,m) OR LSL(x,N-m); carry_out = result<N-1>; return (result, carry_out);
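A C sketch of the same rotate-with-carry for N=32 (illustrative only; names are ours). The carry-out is the new most-significant bit, matching result<N-1> above:

#include <stdint.h>

static uint32_t ror32_c(uint32_t x, int shift, int *carry_out)
{
    int m = shift % 32;
    /* m == 0 is handled separately: x << 32 is undefined behaviour in C. */
    uint32_t result = (m == 0) ? x : ((x >> m) | (x << (32 - m)));
    *carry_out = (result >> 31) & 1;
    return result;
}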

Library pseudocode for shared/functions/common/Replicate

// Replicate() // =========== bits(N) Replicate(bits(M) x) assert N MOD M == 0; return Replicate(x, N DIV M); bits(M*N) Replicate(bits(M) x, integer N);

Library pseudocode for shared/functions/common/RoundDown

integer RoundDown(real x);

Library pseudocode for shared/functions/common/RoundTowardsZero

// RoundTowardsZero() // ================== integer RoundTowardsZero(real x) return if x == 0.0 then 0 else if x >= 0.0 then RoundDown(x) else RoundUp(x);

Library pseudocode for shared/functions/common/RoundUp

integer RoundUp(real x);

Library pseudocode for shared/functions/common/SInt

// SInt() // ====== integer SInt(bits(N) x) result = 0; for i = 0 to N-1 if x<i> == '1' then result = result + 2^i; if x<N-1> == '1' then result = result - 2^N; return result;
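SInt is the usual two's-complement reading: sum the set bits, then subtract 2^N if the top bit is set. For example, SInt('1110') with N=4 is 8+4+2 = 14, minus 2^4, giving -2. A C sketch for widths 1 to 63 (illustrative only; names are ours):

#include <stdint.h>

/* Two's-complement value of the low n bits of x, 1 <= n <= 63.
 * sint_n(0xE, 4) == -2. */
static long long sint_n(unsigned long long x, int n)
{
    unsigned long long v = x & ((1ULL << n) - 1);
    if (v & (1ULL << (n - 1)))      /* sign bit set: subtract 2^n */
        return (long long)v - (1LL << n);
    return (long long)v;
}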

Library pseudocode for shared/functions/common/SignExtend

// SignExtend() // ============ bits(N) SignExtend(bits(M) x, integer N) assert N >= M; return Replicate(x<M-1>, N-M) : x; // SignExtend() // ============ bits(N) SignExtend(bits(M) x) return SignExtend(x, N);

Library pseudocode for shared/functions/common/Split64to32

// Split64to32() // ============= (bits(32), bits(32)) Split64to32(bits(64) value) return (value<63:32>, value<31:0>);

Library pseudocode for shared/functions/common/UInt

// UInt() // ====== integer UInt(bits(N) x) result = 0; for i = 0 to N-1 if x<i> == '1' then result = result + 2^i; return result;

Library pseudocode for shared/functions/common/ZeroExtend

// ZeroExtend() // ============ bits(N) ZeroExtend(bits(M) x, integer N) assert N >= M; return Zeros(N-M) : x; // ZeroExtend() // ============ bits(N) ZeroExtend(bits(M) x) return ZeroExtend(x, N);

Library pseudocode for shared/functions/common/Zeros

// Zeros() // ======= bits(N) Zeros(integer N) return Replicate('0',N); // Zeros() // ======= bits(N) Zeros() return Zeros(N);

Library pseudocode for shared/functions/counters/AArch32.CheckTimerConditions

// AArch32.CheckTimerConditions() // ============================== // Checking timer conditions for all A32 timer registers AArch32.CheckTimerConditions() boolean status; bits(64) offset; offset = Zeros(64); assert !HaveAArch64(); if HaveEL(EL3) then if CNTP_CTL_S.ENABLE == '1' then status = IsTimerConditionMet(offset, CNTP_CVAL_S, CNTP_CTL_S.IMASK, InterruptID_CNTPS); CNTP_CTL_S.ISTATUS = if status then '1' else '0'; if CNTP_CTL_NS.ENABLE == '1' then status = IsTimerConditionMet(offset, CNTP_CVAL_NS, CNTP_CTL_NS.IMASK, InterruptID_CNTP); CNTP_CTL_NS.ISTATUS = if status then '1' else '0'; else if CNTP_CTL.ENABLE == '1' then status = IsTimerConditionMet(offset, CNTP_CVAL, CNTP_CTL.IMASK, InterruptID_CNTP); CNTP_CTL.ISTATUS = if status then '1' else '0'; if HaveEL(EL2) && CNTHP_CTL.ENABLE == '1' then status = IsTimerConditionMet(offset, CNTHP_CVAL, CNTHP_CTL.IMASK, InterruptID_CNTHP); CNTHP_CTL.ISTATUS = if status then '1' else '0'; if CNTV_CTL_EL0.ENABLE == '1' then status = IsTimerConditionMet(CNTVOFF_EL2, CNTV_CVAL_EL0, CNTV_CTL_EL0.IMASK, InterruptID_CNTV); CNTV_CTL_EL0.ISTATUS = if status then '1' else '0'; return;

Library pseudocode for shared/functions/counters/AArch64.CheckTimerConditions

// AArch64.CheckTimerConditions() // ============================== // Checking timer conditions for all A64 timer registers AArch64.CheckTimerConditions() boolean status; bits(64) offset; bit imask; boolean ecv = FALSE; if HaveECVExt() then ecv = CNTHCTL_EL2.ECV == '1' && SCR_EL3.ECVEn == '1' && EL2Enabled(); if ecv then offset = CNTPOFF_EL2; else offset = Zeros(64); if CNTP_CTL_EL0.ENABLE == '1' then imask = CNTP_CTL_EL0.IMASK; status = IsTimerConditionMet(offset, CNTP_CVAL_EL0, imask, InterruptID_CNTP); CNTP_CTL_EL0.ISTATUS = if status then '1' else '0'; if ((HaveEL(EL3) || (HaveEL(EL2) && !HaveSecureEL2Ext())) && CNTHP_CTL_EL2.ENABLE == '1') then status = IsTimerConditionMet(Zeros(64), CNTHP_CVAL_EL2, CNTHP_CTL_EL2.IMASK, InterruptID_CNTHP); CNTHP_CTL_EL2.ISTATUS = if status then '1' else '0'; if HaveEL(EL2) && HaveSecureEL2Ext() && CNTHPS_CTL_EL2.ENABLE == '1' then status = IsTimerConditionMet(Zeros(64), CNTHPS_CVAL_EL2, CNTHPS_CTL_EL2.IMASK, InterruptID_CNTHPS); CNTHPS_CTL_EL2.ISTATUS = if status then '1' else '0'; if CNTPS_CTL_EL1.ENABLE == '1' then status = IsTimerConditionMet(offset, CNTPS_CVAL_EL1, CNTPS_CTL_EL1.IMASK, InterruptID_CNTPS); CNTPS_CTL_EL1.ISTATUS = if status then '1' else '0'; if CNTV_CTL_EL0.ENABLE == '1' then imask = CNTV_CTL_EL0.IMASK; status = IsTimerConditionMet(CNTVOFF_EL2, CNTV_CVAL_EL0, imask, InterruptID_CNTV); CNTV_CTL_EL0.ISTATUS = if status then '1' else '0'; if ((HaveVirtHostExt() && (HaveEL(EL3) || !HaveSecureEL2Ext())) && CNTHV_CTL_EL2.ENABLE == '1') then status = IsTimerConditionMet(Zeros(64), CNTHV_CVAL_EL2, CNTHV_CTL_EL2.IMASK, InterruptID_CNTHV); CNTHV_CTL_EL2.ISTATUS = if status then '1' else '0'; if ((HaveSecureEL2Ext() && HaveVirtHostExt()) && CNTHVS_CTL_EL2.ENABLE == '1') then status = IsTimerConditionMet(Zeros(64), CNTHVS_CVAL_EL2, CNTHVS_CTL_EL2.IMASK, InterruptID_CNTHVS); CNTHVS_CTL_EL2.ISTATUS = if status then '1' else '0'; return;

Library pseudocode for shared/functions/counters/GenericCounterTick

// GenericCounterTick() // ==================== // Increments PhysicalCount value for every clock tick. GenericCounterTick() bits(64) prev_physical_count; if CNTCR.EN == '0' then if !HaveAArch64() then AArch32.CheckTimerConditions(); else AArch64.CheckTimerConditions(); return; prev_physical_count = PhysicalCountInt(); if HaveCNTSCExt() && CNTCR.SCEN == '1' then PhysicalCount = PhysicalCount + ZeroExtend(CNTSCR); else PhysicalCount<87:24> = PhysicalCount<87:24> + 1; if !HaveAArch64() then AArch32.CheckTimerConditions(); else AArch64.CheckTimerConditions(); TestEventCNTP(prev_physical_count, PhysicalCountInt()); TestEventCNTV(prev_physical_count, PhysicalCountInt()); return;
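Since PhysicalCountInt() reads PhysicalCount<87:24>, the 88-bit counter behaves as a 64.24 fixed-point value: an unscaled tick adds exactly one integer count (1 << 24), while Generic Counter Scaling adds the fractional per-tick increment CNTSCR instead. A C sketch of just that update (illustrative only; names are ours, and unsigned __int128 is a GCC/Clang extension):

typedef unsigned __int128 u128;

/* One tick of the 64.24 fixed-point system counter. */
static u128 counter_tick(u128 physical_count, int scaling_enabled,
                         unsigned long cntscr)
{
    if (scaling_enabled)
        return physical_count + cntscr;      /* fractional increment  */
    return physical_count + ((u128)1 << 24); /* exactly one count     */
}

/* The integer count visible to software, as in PhysicalCountInt(). */
static unsigned long long counter_int(u128 physical_count)
{
    return (unsigned long long)(physical_count >> 24);
}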

Library pseudocode for shared/functions/counters/IsTimerConditionMet

// IsTimerConditionMet() // ===================== boolean IsTimerConditionMet(bits(64) offset, bits(64) compare_value, bits(1) imask, InterruptID intid) boolean condition_met; signal level; condition_met = (UInt(PhysicalCountInt() - offset) - UInt(compare_value)) >= 0; level = if condition_met && imask == '0' then HIGH else LOW; SetInterruptRequestLevel(intid, level); return condition_met;

Library pseudocode for shared/functions/counters/PhysicalCount

bits(88) PhysicalCount;

Library pseudocode for shared/functions/counters/SetEventRegister

// SetEventRegister() // ================== // Sets the Event Register of this PE SetEventRegister() EventRegister = '1'; return;

Library pseudocode for shared/functions/counters/TestEventCNTP

// TestEventCNTP() // =============== // Generate Event stream from the physical counter TestEventCNTP(bits(64) prev_physical_count, bits(64) current_physical_count) bits(64) offset; bits(1) samplebit, previousbit; if CNTHCTL_EL2.EVNTEN == '1' then n = UInt(CNTHCTL_EL2.EVNTI); if HaveECVExt() && CNTHCTL_EL2.EVNTIS == '1' then n = n + 8; boolean ecv = FALSE; if HaveECVExt() then ecv = (EL2Enabled() && CNTHCTL_EL2.ECV == '1' && SCR_EL3.ECVEn == '1'); offset = if ecv then CNTPOFF_EL2 else Zeros(64); samplebit = (current_physical_count - offset)<n>; previousbit = (prev_physical_count - offset)<n>; if CNTHCTL_EL2.EVNTDIR == '0' then if previousbit == '0' && samplebit == '1' then SetEventRegister(); else if previousbit == '1' && samplebit == '0' then SetEventRegister(); return;
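The event-stream trigger above is just an edge detector on one selected counter bit: with EVNTDIR = '0' an event fires on a 0-to-1 transition of bit n, with EVNTDIR = '1' on a 1-to-0 transition. A C sketch (illustrative only; names are ours):

#include <stdint.h>

static int event_fires(uint64_t prev_count, uint64_t curr_count,
                       uint64_t offset, int n, int evntdir)
{
    int prevbit = (int)(((prev_count - offset) >> n) & 1);
    int currbit = (int)(((curr_count - offset) >> n) & 1);
    if (evntdir == 0)
        return prevbit == 0 && currbit == 1; /* rising edge  */
    return prevbit == 1 && currbit == 0;     /* falling edge */
}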

Library pseudocode for shared/functions/counters/TestEventCNTV

// TestEventCNTV() // =============== // Generate Event stream from the virtual counter TestEventCNTV(bits(64) prev_physical_count, bits(64) current_physical_count) bits(64) offset; bits(1) samplebit, previousbit; if (!(HaveVirtHostExt() && HCR_EL2.<E2H,TGE> == '11') && CNTKCTL_EL1.EVNTEN == '1') then n = UInt(CNTKCTL_EL1.EVNTI); if HaveECVExt() && CNTKCTL_EL1.EVNTIS == '1' then n = n + 8; if HaveEL(EL2) && (!EL2Enabled() || HCR_EL2.<E2H,TGE> != '11') then offset = CNTVOFF_EL2; else offset = Zeros(64); samplebit = (current_physical_count - offset)<n>; previousbit = (prev_physical_count - offset)<n>; if CNTKCTL_EL1.EVNTDIR == '0' then if previousbit == '0' && samplebit == '1' then SetEventRegister(); else if previousbit == '1' && samplebit == '0' then SetEventRegister(); return;

Library pseudocode for shared/functions/crc/BitReverse

// BitReverse() // ============ bits(N) BitReverse(bits(N) data) bits(N) result; for i = 0 to N-1 result<(N-i)-1> = data<i>; return result;

Library pseudocode for shared/functions/crc/HaveCRCExt

// HaveCRCExt() // ============ boolean HaveCRCExt() return HasArchVersion(ARMv8p1) || boolean IMPLEMENTATION_DEFINED "Have CRC extension";

Library pseudocode for shared/functions/crc/Poly32Mod2

// Poly32Mod2() // ============ // Poly32Mod2 on a bitstring does a polynomial Modulus over {0,1} operation bits(32) Poly32Mod2(bits(N) data_in, bits(32) poly) assert N > 32; bits(N) data = data_in; for i = N-1 downto 32 if data<i> == '1' then data<i-1:0> = data<i-1:0> EOR (poly:Zeros(i-32)); return data<31:0>;
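A C sketch of Poly32Mod2 for a 64-bit input (illustrative only; names are ours): each high set bit is cancelled by XORing in the polynomial aligned under it, leaving a 32-bit remainder. With poly = 0x04C11DB7 this is the reduction underlying the CRC32 instructions (CRC32C uses 0x1EDC6F41).

#include <stdint.h>

static uint32_t poly32_mod2_64(uint64_t data, uint32_t poly)
{
    for (int i = 63; i >= 32; i--)
        if ((data >> i) & 1)
            data ^= (uint64_t)poly << (i - 32); /* cancels into bits i-1..i-32 */
    return (uint32_t)data;                      /* remainder, data<31:0> */
}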

Library pseudocode for shared/functions/crypto/AESInvMixColumns

// AESInvMixColumns() // ================== // Transformation in the Inverse Cipher that is the inverse of AESMixColumns. bits(128) AESInvMixColumns(bits (128) op) bits(4*8) in0 = op< 96+:8> : op< 64+:8> : op< 32+:8> : op< 0+:8>; bits(4*8) in1 = op<104+:8> : op< 72+:8> : op< 40+:8> : op< 8+:8>; bits(4*8) in2 = op<112+:8> : op< 80+:8> : op< 48+:8> : op< 16+:8>; bits(4*8) in3 = op<120+:8> : op< 88+:8> : op< 56+:8> : op< 24+:8>; bits(4*8) out0; bits(4*8) out1; bits(4*8) out2; bits(4*8) out3; for c = 0 to 3 out0<c*8+:8> = FFmul0E(in0<c*8+:8>) EOR FFmul0B(in1<c*8+:8>) EOR FFmul0D(in2<c*8+:8>) EOR FFmul09(in3<c*8+:8>); out1<c*8+:8> = FFmul09(in0<c*8+:8>) EOR FFmul0E(in1<c*8+:8>) EOR FFmul0B(in2<c*8+:8>) EOR FFmul0D(in3<c*8+:8>); out2<c*8+:8> = FFmul0D(in0<c*8+:8>) EOR FFmul09(in1<c*8+:8>) EOR FFmul0E(in2<c*8+:8>) EOR FFmul0B(in3<c*8+:8>); out3<c*8+:8> = FFmul0B(in0<c*8+:8>) EOR FFmul0D(in1<c*8+:8>) EOR FFmul09(in2<c*8+:8>) EOR FFmul0E(in3<c*8+:8>); return ( out3<3*8+:8> : out2<3*8+:8> : out1<3*8+:8> : out0<3*8+:8> : out3<2*8+:8> : out2<2*8+:8> : out1<2*8+:8> : out0<2*8+:8> : out3<1*8+:8> : out2<1*8+:8> : out1<1*8+:8> : out0<1*8+:8> : out3<0*8+:8> : out2<0*8+:8> : out1<0*8+:8> : out0<0*8+:8> );

Library pseudocode for shared/functions/crypto/AESInvShiftRows

// AESInvShiftRows() // ================= // Transformation in the Inverse Cipher that is inverse of AESShiftRows. bits(128) AESInvShiftRows(bits(128) op) return ( op< 31: 24> : op< 55: 48> : op< 79: 72> : op<103: 96> : op<127:120> : op< 23: 16> : op< 47: 40> : op< 71: 64> : op< 95: 88> : op<119:112> : op< 15: 8> : op< 39: 32> : op< 63: 56> : op< 87: 80> : op<111:104> : op< 7: 0> );

Library pseudocode for shared/functions/crypto/AESInvSubBytes

// AESInvSubBytes() // ================ // Transformation in the Inverse Cipher that is the inverse of AESSubBytes. bits(128) AESInvSubBytes(bits(128) op) // Inverse S-box values bits(16*16*8) GF2_inv = ( /* F E D C B A 9 8 7 6 5 4 3 2 1 0 */ /*F*/ 0x7d0c2155631469e126d677ba7e042b17<127:0> : /*E*/ 0x619953833cbbebc8b0f52aae4d3be0a0<127:0> : /*D*/ 0xef9cc9939f7ae52d0d4ab519a97f5160<127:0> : /*C*/ 0x5fec8027591012b131c7078833a8dd1f<127:0> : /*B*/ 0xf45acd78fec0db9a2079d2c64b3e56fc<127:0> : /*A*/ 0x1bbe18aa0e62b76f89c5291d711af147<127:0> : /*9*/ 0x6edf751ce837f9e28535ade72274ac96<127:0> : /*8*/ 0x73e6b4f0cecff297eadc674f4111913a<127:0> : /*7*/ 0x6b8a130103bdafc1020f3fca8f1e2cd0<127:0> : /*6*/ 0x0645b3b80558e4f70ad3bc8c00abd890<127:0> : /*5*/ 0x849d8da75746155edab9edfd5048706c<127:0> : /*4*/ 0x92b6655dcc5ca4d41698688664f6f872<127:0> : /*3*/ 0x25d18b6d49a25b76b224d92866a12e08<127:0> : /*2*/ 0x4ec3fa420b954cee3d23c2a632947b54<127:0> : /*1*/ 0xcbe9dec444438e3487ff2f9b8239e37c<127:0> : /*0*/ 0xfbd7f3819ea340bf38a53630d56a0952<127:0> ); bits(128) out; for i = 0 to 15 out<i*8+:8> = GF2_inv<UInt(op<i*8+:8>)*8+:8>; return out;

Library pseudocode for shared/functions/crypto/AESMixColumns

// AESMixColumns() // =============== // Transformation in the Cipher that takes all of the columns of the // State and mixes their data (independently of one another) to // produce new columns. bits(128) AESMixColumns(bits (128) op) bits(4*8) in0 = op< 96+:8> : op< 64+:8> : op< 32+:8> : op< 0+:8>; bits(4*8) in1 = op<104+:8> : op< 72+:8> : op< 40+:8> : op< 8+:8>; bits(4*8) in2 = op<112+:8> : op< 80+:8> : op< 48+:8> : op< 16+:8>; bits(4*8) in3 = op<120+:8> : op< 88+:8> : op< 56+:8> : op< 24+:8>; bits(4*8) out0; bits(4*8) out1; bits(4*8) out2; bits(4*8) out3; for c = 0 to 3 out0<c*8+:8> = FFmul02(in0<c*8+:8>) EOR FFmul03(in1<c*8+:8>) EOR in2<c*8+:8> EOR in3<c*8+:8>; out1<c*8+:8> = in0<c*8+:8> EOR FFmul02(in1<c*8+:8>) EOR FFmul03(in2<c*8+:8>) EOR in3<c*8+:8>; out2<c*8+:8> = in0<c*8+:8> EOR in1<c*8+:8> EOR FFmul02(in2<c*8+:8>) EOR FFmul03(in3<c*8+:8>); out3<c*8+:8> = FFmul03(in0<c*8+:8>) EOR in1<c*8+:8> EOR in2<c*8+:8> EOR FFmul02(in3<c*8+:8>); return ( out3<3*8+:8> : out2<3*8+:8> : out1<3*8+:8> : out0<3*8+:8> : out3<2*8+:8> : out2<2*8+:8> : out1<2*8+:8> : out0<2*8+:8> : out3<1*8+:8> : out2<1*8+:8> : out1<1*8+:8> : out0<1*8+:8> : out3<0*8+:8> : out2<0*8+:8> : out1<0*8+:8> : out0<0*8+:8> );

Library pseudocode for shared/functions/crypto/AESShiftRows

// AESShiftRows() // ============== // Transformation in the Cipher that processes the State by cyclically // shifting the last three rows of the State by different offsets. bits(128) AESShiftRows(bits(128) op) return ( op< 95: 88> : op< 55: 48> : op< 15: 8> : op<103: 96> : op< 63: 56> : op< 23: 16> : op<111:104> : op< 71: 64> : op< 31: 24> : op<119:112> : op< 79: 72> : op< 39: 32> : op<127:120> : op< 87: 80> : op< 47: 40> : op< 7: 0> );

Library pseudocode for shared/functions/crypto/AESSubBytes

// AESSubBytes() // ============= // Transformation in the Cipher that processes the State using a nonlinear // byte substitution table (S-box) that operates on each of the State bytes // independently. bits(128) AESSubBytes(bits(128) op) // S-box values bits(16*16*8) GF2 = ( /* F E D C B A 9 8 7 6 5 4 3 2 1 0 */ /*F*/ 0x16bb54b00f2d99416842e6bf0d89a18c<127:0> : /*E*/ 0xdf2855cee9871e9b948ed9691198f8e1<127:0> : /*D*/ 0x9e1dc186b95735610ef6034866b53e70<127:0> : /*C*/ 0x8a8bbd4b1f74dde8c6b4a61c2e2578ba<127:0> : /*B*/ 0x08ae7a65eaf4566ca94ed58d6d37c8e7<127:0> : /*A*/ 0x79e4959162acd3c25c2406490a3a32e0<127:0> : /*9*/ 0xdb0b5ede14b8ee4688902a22dc4f8160<127:0> : /*8*/ 0x73195d643d7ea7c41744975fec130ccd<127:0> : /*7*/ 0xd2f3ff1021dab6bcf5389d928f40a351<127:0> : /*6*/ 0xa89f3c507f02f94585334d43fbaaefd0<127:0> : /*5*/ 0xcf584c4a39becb6a5bb1fc20ed00d153<127:0> : /*4*/ 0x842fe329b3d63b52a05a6e1b1a2c8309<127:0> : /*3*/ 0x75b227ebe28012079a059618c323c704<127:0> : /*2*/ 0x1531d871f1e5a534ccf73f362693fdb7<127:0> : /*1*/ 0xc072a49cafa2d4adf04759fa7dc982ca<127:0> : /*0*/ 0x76abd7fe2b670130c56f6bf27b777c63<127:0> ); bits(128) out; for i = 0 to 15 out<i*8+:8> = GF2<UInt(op<i*8+:8>)*8+:8>; return out;

Library pseudocode for shared/functions/crypto/FFmul02

// FFmul02() // ========= bits(8) FFmul02(bits(8) b) bits(256*8) FFmul_02 = ( /* F E D C B A 9 8 7 6 5 4 3 2 1 0 */ /*F*/ 0xE5E7E1E3EDEFE9EBF5F7F1F3FDFFF9FB<127:0> : /*E*/ 0xC5C7C1C3CDCFC9CBD5D7D1D3DDDFD9DB<127:0> : /*D*/ 0xA5A7A1A3ADAFA9ABB5B7B1B3BDBFB9BB<127:0> : /*C*/ 0x858781838D8F898B959791939D9F999B<127:0> : /*B*/ 0x656761636D6F696B757771737D7F797B<127:0> : /*A*/ 0x454741434D4F494B555751535D5F595B<127:0> : /*9*/ 0x252721232D2F292B353731333D3F393B<127:0> : /*8*/ 0x050701030D0F090B151711131D1F191B<127:0> : /*7*/ 0xFEFCFAF8F6F4F2F0EEECEAE8E6E4E2E0<127:0> : /*6*/ 0xDEDCDAD8D6D4D2D0CECCCAC8C6C4C2C0<127:0> : /*5*/ 0xBEBCBAB8B6B4B2B0AEACAAA8A6A4A2A0<127:0> : /*4*/ 0x9E9C9A98969492908E8C8A8886848280<127:0> : /*3*/ 0x7E7C7A78767472706E6C6A6866646260<127:0> : /*2*/ 0x5E5C5A58565452504E4C4A4846444240<127:0> : /*1*/ 0x3E3C3A38363432302E2C2A2826242220<127:0> : /*0*/ 0x1E1C1A18161412100E0C0A0806040200<127:0> ); return FFmul_02<UInt(b)*8+:8>;
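The FFmul_02 table above is multiplication by 2 in GF(2^8) with the AES reduction polynomial x^8 + x^4 + x^3 + x + 1, usually written as the "xtime" operation. A C sketch (illustrative only; names are ours):

#include <stdint.h>

/* Multiply by 2 in GF(2^8) modulo the AES polynomial (0x11B). */
static uint8_t xtime(uint8_t b)
{
    return (uint8_t)((b << 1) ^ ((b & 0x80) ? 0x1B : 0x00));
}

The other constant multipliers factor through xtime: FFmul03(b) equals xtime(b) ^ b, and for AESInvMixColumns, for example, FFmul0E(b) (0x0E = 8+4+2) equals xtime(xtime(xtime(b))) ^ xtime(xtime(b)) ^ xtime(b).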

Library pseudocode for shared/functions/crypto/FFmul03

// FFmul03() // ========= bits(8) FFmul03(bits(8) b) bits(256*8) FFmul_03 = ( /* F E D C B A 9 8 7 6 5 4 3 2 1 0 */ /*F*/ 0x1A191C1F16151013020104070E0D080B<127:0> : /*E*/ 0x2A292C2F26252023323134373E3D383B<127:0> : /*D*/ 0x7A797C7F76757073626164676E6D686B<127:0> : /*C*/ 0x4A494C4F46454043525154575E5D585B<127:0> : /*B*/ 0xDAD9DCDFD6D5D0D3C2C1C4C7CECDC8CB<127:0> : /*A*/ 0xEAE9ECEFE6E5E0E3F2F1F4F7FEFDF8FB<127:0> : /*9*/ 0xBAB9BCBFB6B5B0B3A2A1A4A7AEADA8AB<127:0> : /*8*/ 0x8A898C8F86858083929194979E9D989B<127:0> : /*7*/ 0x818287848D8E8B88999A9F9C95969390<127:0> : /*6*/ 0xB1B2B7B4BDBEBBB8A9AAAFACA5A6A3A0<127:0> : /*5*/ 0xE1E2E7E4EDEEEBE8F9FAFFFCF5F6F3F0<127:0> : /*4*/ 0xD1D2D7D4DDDEDBD8C9CACFCCC5C6C3C0<127:0> : /*3*/ 0x414247444D4E4B48595A5F5C55565350<127:0> : /*2*/ 0x717277747D7E7B78696A6F6C65666360<127:0> : /*1*/ 0x212227242D2E2B28393A3F3C35363330<127:0> : /*0*/ 0x111217141D1E1B18090A0F0C05060300<127:0> ); return FFmul_03<UInt(b)*8+:8>;

Library pseudocode for shared/functions/crypto/FFmul09

// FFmul09() // ========= bits(8) FFmul09(bits(8) b) bits(256*8) FFmul_09 = ( /* F E D C B A 9 8 7 6 5 4 3 2 1 0 */ /*F*/ 0x464F545D626B70790E071C152A233831<127:0> : /*E*/ 0xD6DFC4CDF2FBE0E99E978C85BAB3A8A1<127:0> : /*D*/ 0x7D746F6659504B42353C272E1118030A<127:0> : /*C*/ 0xEDE4FFF6C9C0DBD2A5ACB7BE8188939A<127:0> : /*B*/ 0x3039222B141D060F78716A635C554E47<127:0> : /*A*/ 0xA0A9B2BB848D969FE8E1FAF3CCC5DED7<127:0> : /*9*/ 0x0B0219102F263D34434A5158676E757C<127:0> : /*8*/ 0x9B928980BFB6ADA4D3DAC1C8F7FEE5EC<127:0> : /*7*/ 0xAAA3B8B18E879C95E2EBF0F9C6CFD4DD<127:0> : /*6*/ 0x3A3328211E170C05727B6069565F444D<127:0> : /*5*/ 0x9198838AB5BCA7AED9D0CBC2FDF4EFE6<127:0> : /*4*/ 0x0108131A252C373E49405B526D647F76<127:0> : /*3*/ 0xDCD5CEC7F8F1EAE3949D868FB0B9A2AB<127:0> : /*2*/ 0x4C455E5768617A73040D161F2029323B<127:0> : /*1*/ 0xE7EEF5FCC3CAD1D8AFA6BDB48B829990<127:0> : /*0*/ 0x777E656C535A41483F362D241B120900<127:0> ); return FFmul_09<UInt(b)*8+:8>;

Library pseudocode for shared/functions/crypto/FFmul0B

// FFmul0B() // ========= bits(8) FFmul0B(bits(8) b) bits(256*8) FFmul_0B = ( /* F E D C B A 9 8 7 6 5 4 3 2 1 0 */ /*F*/ 0xA3A8B5BE8F849992FBF0EDE6D7DCC1CA<127:0> : /*E*/ 0x1318050E3F3429224B405D56676C717A<127:0> : /*D*/ 0xD8D3CEC5F4FFE2E9808B969DACA7BAB1<127:0> : /*C*/ 0x68637E75444F5259303B262D1C170A01<127:0> : /*B*/ 0x555E434879726F640D061B10212A373C<127:0> : /*A*/ 0xE5EEF3F8C9C2DFD4BDB6ABA0919A878C<127:0> : /*9*/ 0x2E2538330209141F767D606B5A514C47<127:0> : /*8*/ 0x9E958883B2B9A4AFC6CDD0DBEAE1FCF7<127:0> : /*7*/ 0x545F424978736E650C071A11202B363D<127:0> : /*6*/ 0xE4EFF2F9C8C3DED5BCB7AAA1909B868D<127:0> : /*5*/ 0x2F2439320308151E777C616A5B504D46<127:0> : /*4*/ 0x9F948982B3B8A5AEC7CCD1DAEBE0FDF6<127:0> : /*3*/ 0xA2A9B4BF8E859893FAF1ECE7D6DDC0CB<127:0> : /*2*/ 0x1219040F3E3528234A415C57666D707B<127:0> : /*1*/ 0xD9D2CFC4F5FEE3E8818A979CADA6BBB0<127:0> : /*0*/ 0x69627F74454E5358313A272C1D160B00<127:0> ); return FFmul_0B<UInt(b)*8+:8>;

Library pseudocode for shared/functions/crypto/FFmul0D

// FFmul0D() // ========= bits(8) FFmul0D(bits(8) b) bits(256*8) FFmul_0D = ( /* F E D C B A 9 8 7 6 5 4 3 2 1 0 */ /*F*/ 0x979A8D80A3AEB9B4FFF2E5E8CBC6D1DC<127:0> : /*E*/ 0x474A5D50737E69642F2235381B16010C<127:0> : /*D*/ 0x2C21363B1815020F44495E53707D6A67<127:0> : /*C*/ 0xFCF1E6EBC8C5D2DF94998E83A0ADBAB7<127:0> : /*B*/ 0xFAF7E0EDCEC3D4D9929F8885A6ABBCB1<127:0> : /*A*/ 0x2A27303D1E130409424F5855767B6C61<127:0> : /*9*/ 0x414C5B5675786F622924333E1D10070A<127:0> : /*8*/ 0x919C8B86A5A8BFB2F9F4E3EECDC0D7DA<127:0> : /*7*/ 0x4D40575A7974636E25283F32111C0B06<127:0> : /*6*/ 0x9D90878AA9A4B3BEF5F8EFE2C1CCDBD6<127:0> : /*5*/ 0xF6FBECE1C2CFD8D59E938489AAA7B0BD<127:0> : /*4*/ 0x262B3C31121F08054E4354597A77606D<127:0> : /*3*/ 0x202D3A3714190E034845525F7C71666B<127:0> : /*2*/ 0xF0FDEAE7C4C9DED39895828FACA1B6BB<127:0> : /*1*/ 0x9B96818CAFA2B5B8F3FEE9E4C7CADDD0<127:0> : /*0*/ 0x4B46515C7F726568232E3934171A0D00<127:0> ); return FFmul_0D<UInt(b)*8+:8>;

Library pseudocode for shared/functions/crypto/FFmul0E

// FFmul0E() // ========= bits(8) FFmul0E(bits(8) b) bits(256*8) FFmul_0E = ( /* F E D C B A 9 8 7 6 5 4 3 2 1 0 */ /*F*/ 0x8D83919FB5BBA9A7FDF3E1EFC5CBD9D7<127:0> : /*E*/ 0x6D63717F555B49471D13010F252B3937<127:0> : /*D*/ 0x56584A446E60727C26283A341E10020C<127:0> : /*C*/ 0xB6B8AAA48E80929CC6C8DAD4FEF0E2EC<127:0> : /*B*/ 0x202E3C321816040A505E4C426866747A<127:0> : /*A*/ 0xC0CEDCD2F8F6E4EAB0BEACA28886949A<127:0> : /*9*/ 0xFBF5E7E9C3CDDFD18B859799B3BDAFA1<127:0> : /*8*/ 0x1B150709232D3F316B657779535D4F41<127:0> : /*7*/ 0xCCC2D0DEF4FAE8E6BCB2A0AE848A9896<127:0> : /*6*/ 0x2C22303E141A08065C52404E646A7876<127:0> : /*5*/ 0x17190B052F21333D67697B755F51434D<127:0> : /*4*/ 0xF7F9EBE5CFC1D3DD87899B95BFB1A3AD<127:0> : /*3*/ 0x616F7D735957454B111F0D032927353B<127:0> : /*2*/ 0x818F9D93B9B7A5ABF1FFEDE3C9C7D5DB<127:0> : /*1*/ 0xBAB4A6A8828C9E90CAC4D6D8F2FCEEE0<127:0> : /*0*/ 0x5A544648626C7E702A243638121C0E00<127:0> ); return FFmul_0E<UInt(b)*8+:8>;

Library pseudocode for shared/functions/crypto/HaveAESExt

// HaveAESExt() // ============ // TRUE if AES cryptographic instructions support is implemented, // FALSE otherwise. boolean HaveAESExt() return boolean IMPLEMENTATION_DEFINED "Has AES Crypto instructions";

Library pseudocode for shared/functions/crypto/HaveBit128PMULLExt

// HaveBit128PMULLExt() // ==================== // TRUE if 128 bit form of PMULL instructions support is implemented, // FALSE otherwise. boolean HaveBit128PMULLExt() return boolean IMPLEMENTATION_DEFINED "Has 128-bit form of PMULL instructions";

Library pseudocode for shared/functions/crypto/HaveSHA1Ext

// HaveSHA1Ext() // ============= // TRUE if SHA1 cryptographic instructions support is implemented, // FALSE otherwise. boolean HaveSHA1Ext() return boolean IMPLEMENTATION_DEFINED "Has SHA1 Crypto instructions";

Library pseudocode for shared/functions/crypto/HaveSHA256Ext

// HaveSHA256Ext() // =============== // TRUE if SHA256 cryptographic instructions support is implemented, // FALSE otherwise. boolean HaveSHA256Ext() return boolean IMPLEMENTATION_DEFINED "Has SHA256 Crypto instructions";

Library pseudocode for shared/functions/crypto/HaveSHA3Ext

// HaveSHA3Ext() // ============= // TRUE if SHA3 cryptographic instructions support is implemented, // and when SHA1 and SHA2 basic cryptographic instructions support is implemented, // FALSE otherwise. boolean HaveSHA3Ext() if !HasArchVersion(ARMv8p2) || !(HaveSHA1Ext() && HaveSHA256Ext()) then return FALSE; return boolean IMPLEMENTATION_DEFINED "Has SHA3 Crypto instructions";

Library pseudocode for shared/functions/crypto/HaveSHA512Ext

// HaveSHA512Ext() // =============== // TRUE if SHA512 cryptographic instructions support is implemented, // and when SHA1 and SHA2 basic cryptographic instructions support is implemented, // FALSE otherwise. boolean HaveSHA512Ext() if !HasArchVersion(ARMv8p2) || !(HaveSHA1Ext() && HaveSHA256Ext()) then return FALSE; return boolean IMPLEMENTATION_DEFINED "Has SHA512 Crypto instructions";

Library pseudocode for shared/functions/crypto/HaveSM3Ext

// HaveSM3Ext() // ============ // TRUE if SM3 cryptographic instructions support is implemented, // FALSE otherwise. boolean HaveSM3Ext() if !HasArchVersion(ARMv8p2) then return FALSE; return boolean IMPLEMENTATION_DEFINED "Has SM3 Crypto instructions";

Library pseudocode for shared/functions/crypto/HaveSM4Ext

// HaveSM4Ext() // ============ // TRUE if SM4 cryptographic instructions support is implemented, // FALSE otherwise. boolean HaveSM4Ext() if !HasArchVersion(ARMv8p2) then return FALSE; return boolean IMPLEMENTATION_DEFINED "Has SM4 Crypto instructions";

Library pseudocode for shared/functions/crypto/ROL

// ROL() // ===== bits(N) ROL(bits(N) x, integer shift) assert shift >= 0 && shift <= N; if (shift == 0) then return x; return ROR(x, N-shift);

Library pseudocode for shared/functions/crypto/SHA256hash

// SHA256hash() // ============ bits(128) SHA256hash(bits (128) x_in, bits(128) y_in, bits(128) w, boolean part1) bits(32) chs, maj, t; bits(128) x = x_in; bits(128) y = y_in; for e = 0 to 3 chs = SHAchoose(y<31:0>, y<63:32>, y<95:64>); maj = SHAmajority(x<31:0>, x<63:32>, x<95:64>); t = y<127:96> + SHAhashSIGMA1(y<31:0>) + chs + Elem[w, e, 32]; x<127:96> = t + x<127:96>; y<127:96> = t + SHAhashSIGMA0(x<31:0>) + maj; <y, x> = ROL(y : x, 32); return (if part1 then x else y);

Library pseudocode for shared/functions/crypto/SHAchoose

// SHAchoose() // =========== bits(32) SHAchoose(bits(32) x, bits(32) y, bits(32) z) return (((y EOR z) AND x) EOR z);

Library pseudocode for shared/functions/crypto/SHAhashSIGMA0

// SHAhashSIGMA0() // =============== bits(32) SHAhashSIGMA0(bits(32) x) return ROR(x, 2) EOR ROR(x, 13) EOR ROR(x, 22);

Library pseudocode for shared/functions/crypto/SHAhashSIGMA1

// SHAhashSIGMA1() // =============== bits(32) SHAhashSIGMA1(bits(32) x) return ROR(x, 6) EOR ROR(x, 11) EOR ROR(x, 25);

Library pseudocode for shared/functions/crypto/SHAmajority

// SHAmajority() // ============= bits(32) SHAmajority(bits(32) x, bits(32) y, bits(32) z) return ((x AND y) OR ((x OR y) AND z));

Library pseudocode for shared/functions/crypto/SHAparity

// SHAparity() // =========== bits(32) SHAparity(bits(32) x, bits(32) y, bits(32) z) return (x EOR y EOR z);
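For reference, these SHA helpers are the FIPS 180-4 Ch, Maj, Sigma functions and the SHA-1 parity function. A direct C rendering (illustrative only; names are ours):

#include <stdint.h>

static uint32_t ror32(uint32_t x, int n) { return (x >> n) | (x << (32 - n)); } /* 0 < n < 32 */

static uint32_t sha_choose(uint32_t x, uint32_t y, uint32_t z)   { return ((y ^ z) & x) ^ z; }
static uint32_t sha_majority(uint32_t x, uint32_t y, uint32_t z) { return (x & y) | ((x | y) & z); }
static uint32_t sha_parity(uint32_t x, uint32_t y, uint32_t z)   { return x ^ y ^ z; }
static uint32_t sha_sigma0(uint32_t x) { return ror32(x, 2) ^ ror32(x, 13) ^ ror32(x, 22); }
static uint32_t sha_sigma1(uint32_t x) { return ror32(x, 6) ^ ror32(x, 11) ^ ror32(x, 25); }

Note that ((y ^ z) & x) ^ z is the standard branch-free form of the "choose" function (x & y) | (~x & z).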

Library pseudocode for shared/functions/crypto/Sbox

// Sbox() // ====== // Used in SM4E crypto instruction bits(8) Sbox(bits(8) sboxin) bits(8) sboxout; bits(2048) sboxstring = 0xd690e9fecce13db716b614c228fb2c052b679a762abe04c3aa441326498606999c4250f491ef987a33540b43edcfac62e4b31ca9c908e89580df94fa758f3fa64707a7fcf37317ba83593c19e6854fa8686b81b27164da8bf8eb0f4b70569d351e240e5e6358d1a225227c3b01217887d40046579fd327524c3602e7a0c4c89eeabf8ad240c738b5a3f7f2cef96115a1e0ae5da49b341a55ad933230f58cb1e31df6e22e8266ca60c02923ab0d534e6fd5db3745defd8e2f03ff6a726d6c5b518d1baf92bbddbc7f11d95c411f105ad80ac13188a5cd7bbd2d74d012b8e5b4b08969974a0c96777e65b9f109c56ec68418f07dec3adc4d2079ee5f3ed7cb3948<2047:0>; sboxout = sboxstring<(255-UInt(sboxin))*8+7:(255-UInt(sboxin))*8>; return sboxout;
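The (255 - UInt(sboxin)) addressing above exists only because the 2048-bit string lists entry 0 in its most significant byte. Stored as a byte array in natural order, the lookup is a plain index; a C sketch (illustrative only; the table is truncated here, with the first four entries taken from the string above):

#include <stdint.h>

static const uint8_t sm4_sbox[256] = {
    0xd6, 0x90, 0xe9, 0xfe, /* ... remaining 252 entries of the SM4 S-box ... */
};

static uint8_t sm4_sbox_lookup(uint8_t in) { return sm4_sbox[in]; }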

Library pseudocode for shared/functions/exclusive/ClearExclusiveByAddress

// Clear the global Exclusives monitors for all PEs EXCEPT processorid if they // record any part of the physical address region of size bytes starting at paddress. // It is IMPLEMENTATION DEFINED whether the global Exclusives monitor for processorid // is also cleared if it records any part of the address region. ClearExclusiveByAddress(FullAddress paddress, integer processorid, integer size);

Library pseudocode for shared/functions/exclusive/ClearExclusiveLocal

// Clear the local Exclusives monitor for the specified processorid. ClearExclusiveLocal(integer processorid);

Library pseudocode for shared/functions/exclusive/ClearExclusiveMonitors

// ClearExclusiveMonitors() // ======================== // Clear the local Exclusives monitor for the executing PE. ClearExclusiveMonitors() ClearExclusiveLocal(ProcessorID());

Library pseudocode for shared/functions/exclusive/ExclusiveMonitorsStatus

// Returns '0' to indicate success if the last memory write by this PE was to // the same physical address region endorsed by ExclusiveMonitorsPass(). // Returns '1' to indicate failure if address translation resulted in a different // physical address. bit ExclusiveMonitorsStatus();

Library pseudocode for shared/functions/exclusive/IsExclusiveGlobal

// Return TRUE if the global Exclusives monitor for processorid includes all of // the physical address region of size bytes starting at paddress. boolean IsExclusiveGlobal(FullAddress paddress, integer processorid, integer size);

Library pseudocode for shared/functions/exclusive/IsExclusiveLocal

// Return TRUE if the local Exclusives monitor for processorid includes all of // the physical address region of size bytes starting at paddress. boolean IsExclusiveLocal(FullAddress paddress, integer processorid, integer size);

Library pseudocode for shared/functions/exclusive/MarkExclusiveGlobal

// Record the physical address region of size bytes starting at paddress in // the global Exclusives monitor for processorid. MarkExclusiveGlobal(FullAddress paddress, integer processorid, integer size);

Library pseudocode for shared/functions/exclusive/MarkExclusiveLocal

// Record the physical address region of size bytes starting at paddress in // the local Exclusives monitor for processorid. MarkExclusiveLocal(FullAddress paddress, integer processorid, integer size);

Library pseudocode for shared/functions/exclusive/ProcessorID

// Return the ID of the currently executing PE. integer ProcessorID();

Library pseudocode for shared/functions/extension/AArch32.HaveHPDExt

// AArch32.HaveHPDExt() // ==================== boolean AArch32.HaveHPDExt() return (HasArchVersion(ARMv8p2) && boolean IMPLEMENTATION_DEFINED "Has AArch32 hierarchical permission disables");

Library pseudocode for shared/functions/extension/AArch64.HaveHPDExt

// AArch64.HaveHPDExt() // ==================== boolean AArch64.HaveHPDExt() return HasArchVersion(ARMv8p1);

Library pseudocode for shared/functions/extension/Have16bitVMID

// Have16bitVMID() // =============== // Returns TRUE if EL2 and support for a 16-bit VMID are implemented. boolean Have16bitVMID() return (HasArchVersion(ARMv8p1) && HaveEL(EL2) && boolean IMPLEMENTATION_DEFINED "Has 16-bit VMID");

Library pseudocode for shared/functions/extension/Have52BitIPAAndPASpaceExt

// Have52BitIPAAndPASpaceExt() // =========================== // Returns TRUE if 52-bit IPA and PA extension support // is implemented, and FALSE otherwise. boolean Have52BitIPAAndPASpaceExt() return (HasArchVersion(ARMv8p7) && boolean IMPLEMENTATION_DEFINED "Has 52-bit IPA and PA support" && Have52BitVAExt() && Have52BitPAExt());

Library pseudocode for shared/functions/extension/Have52BitPAExt

// Have52BitPAExt() // ================ // Returns TRUE if Large Physical Address extension // support is implemented and FALSE otherwise. boolean Have52BitPAExt() return (HasArchVersion(ARMv8p2) && boolean IMPLEMENTATION_DEFINED "Has large 52-bit PA/IPA support");

Library pseudocode for shared/functions/extension/Have52BitVAExt

// Have52BitVAExt() // ================ // Returns TRUE if Large Virtual Address extension // support is implemented and FALSE otherwise. boolean Have52BitVAExt() return (HasArchVersion(ARMv8p2) && boolean IMPLEMENTATION_DEFINED "Has large 52-bit VA support");

Library pseudocode for shared/functions/extension/HaveAArch32BF16Ext

// HaveAArch32BF16Ext() // ==================== // Returns TRUE if AArch32 BFloat16 instruction support is implemented, and FALSE otherwise. boolean HaveAArch32BF16Ext() return (HasArchVersion(ARMv8p2) && boolean IMPLEMENTATION_DEFINED "Has AArch32 BFloat16 extension");

Library pseudocode for shared/functions/extension/HaveAArch32Int8MatMulExt

// HaveAArch32Int8MatMulExt() // ========================== // Returns TRUE if AArch32 8-bit integer matrix multiply instruction support // implemented, and FALSE otherwise. boolean HaveAArch32Int8MatMulExt() return (HasArchVersion(ARMv8p2) && boolean IMPLEMENTATION_DEFINED "Has AArch32 Int8 Mat Mul extension");

Library pseudocode for shared/functions/extension/HaveAltFP

// HaveAltFP() // =========== // Returns TRUE if alternative Floating-point extension support // is implemented, and FALSE otherwise. boolean HaveAltFP() return HasArchVersion(ARMv8p7);

Library pseudocode for shared/functions/extension/HaveAtomicExt

// HaveAtomicExt() // =============== boolean HaveAtomicExt() return HasArchVersion(ARMv8p1);

Library pseudocode for shared/functions/extension/HaveBF16Ext

// HaveBF16Ext() // ============= // Returns TRUE if AArch64 BFloat16 instruction support is implemented, and FALSE otherwise. boolean HaveBF16Ext() return (HasArchVersion(ARMv8p6) || (HasArchVersion(ARMv8p2) && boolean IMPLEMENTATION_DEFINED "Has AArch64 BFloat16 extension"));

Library pseudocode for shared/functions/extension/HaveBTIExt

// HaveBTIExt() // ============ // Returns TRUE if support for Branch Target Identification is implemented. boolean HaveBTIExt() return HasArchVersion(ARMv8p5);

Library pseudocode for shared/functions/extension/HaveBlockBBM

// HaveBlockBBM() // ============== // Returns TRUE if support for changing block size without requiring // break-before-make is implemented. boolean HaveBlockBBM() return HasArchVersion(ARMv8p4);

Library pseudocode for shared/functions/extension/HaveCNTSCExt

// HaveCNTSCExt() // ============== // Returns TRUE if the Generic Counter Scaling is implemented, and FALSE // otherwise. boolean HaveCNTSCExt() return (HasArchVersion(ARMv8p4) && boolean IMPLEMENTATION_DEFINED "Has Generic Counter Scaling support");

Library pseudocode for shared/functions/extension/HaveCNTSCExtHaveCommonNotPrivateTransExt

// HaveCNTSCExt() // ============== // Returns TRUE if the Generic Counter Scaling is implemented, and FALSE // otherwise. // HaveCommonNotPrivateTransExt() // ============================== boolean HaveCNTSCExt() return (HaveCommonNotPrivateTransExt() returnHasArchVersion(ARMv8p4ARMv8p2) && boolean IMPLEMENTATION_DEFINED "Has Generic Counter Scaling support"););

Library pseudocode for shared/functions/extension/HaveCommonNotPrivateTransExtHaveDGHExt

// HaveCommonNotPrivateTransExt() // ============================== // HaveDGHExt() // ============ // Returns TRUE if Data Gathering Hint instruction support is implemented, and // FALSE otherwise. boolean HaveCommonNotPrivateTransExt() returnHaveDGHExt() return boolean IMPLEMENTATION_DEFINED "Has AArch64 DGH extension"; HasArchVersion(ARMv8p2);

Library pseudocode for shared/functions/extension/HaveDGHExtHaveDITExt

// HaveDGHExt() // HaveDITExt() // ============ // Returns TRUE if Data Gathering Hint instruction support is implemented, and // FALSE otherwise. boolean HaveDGHExt() return boolean IMPLEMENTATION_DEFINED "Has AArch64 DGH extension";HaveDITExt() returnHasArchVersion(ARMv8p4);

Library pseudocode for shared/functions/extension/HaveDITExtHaveDOTPExt

// HaveDITExt() // ============ // HaveDOTPExt() // ============= // Returns TRUE if Dot Product feature support is implemented, and FALSE otherwise. boolean HaveDITExt() returnHaveDOTPExt() return ( HasArchVersion(ARMv8p4);) || (HasArchVersion(ARMv8p2) && boolean IMPLEMENTATION_DEFINED "Has Dot Product extension"));

Library pseudocode for shared/functions/extension/HaveDOTPExtHaveDoPD

// HaveDOTPExt() // ============= // Returns TRUE if Dot Product feature support is implemented, and FALSE otherwise. // HaveDoPD() // ========== // Returns TRUE if Debug Over Power Down extension // support is implemented and FALSE otherwise. boolean HaveDOTPExt() return (HaveDoPD() returnHasArchVersion(ARMv8p4) || (HasArchVersion(ARMv8p2) && boolean IMPLEMENTATION_DEFINED "Has Dot Product extension"));) && boolean IMPLEMENTATION_DEFINED "Has DoPD extension";

Library pseudocode for shared/functions/extension/HaveDoubleFaultExt

// HaveDoubleFaultExt() // ==================== boolean HaveDoubleFaultExt() return (HasArchVersion(ARMv8p4) && HaveEL(EL3) && !ELUsingAArch32(EL3) && HaveIESB());

Library pseudocode for shared/functions/extension/HaveDoubleLock

// HaveDoubleLock() // ================ // Returns TRUE if support for the OS Double Lock is implemented. boolean HaveDoubleLock() return (!HasArchVersion(ARMv8p4) || boolean IMPLEMENTATION_DEFINED "OS Double Lock is implemented");

Library pseudocode for shared/functions/extension/HaveE0PDExt

// HaveE0PDExt() // ============= // Returns TRUE if support for constant fault times for unprivileged accesses // to the memory map is implemented. boolean HaveE0PDExt() return HasArchVersion(ARMv8p5);

Library pseudocode for shared/functions/extension/HaveECVExt

// HaveECVExt() // ============ // Returns TRUE if Enhanced Counter Virtualization extension // support is implemented, and FALSE otherwise. boolean HaveECVExt() return HasArchVersion(ARMv8p6);

Library pseudocode for shared/functions/extension/HaveEMPAMExt

// HaveEMPAMExt() // ============== // Returns TRUE if Enhanced MPAM is implemented, and FALSE otherwise. boolean HaveEMPAMExt() return (HasArchVersion(ARMv8p6) && HaveMPAMExt() && boolean IMPLEMENTATION_DEFINED "Has enhanced MPAM extension");

Library pseudocode for shared/functions/extension/HaveExtendedCacheSets

// HaveExtendedCacheSets() // ======================= boolean HaveExtendedCacheSets() return HasArchVersion(ARMv8p3);

Library pseudocode for shared/functions/extension/HaveExtendedECDebugEvents

// HaveExtendedECDebugEvents() // =========================== boolean HaveExtendedECDebugEvents() return HasArchVersion(ARMv8p2);

Library pseudocode for shared/functions/extension/HaveExtendedExecuteNeverExt

// HaveExtendedExecuteNeverExt() // ============================= boolean HaveExtendedExecuteNeverExt() return HasArchVersion(ARMv8p2);

Library pseudocode for shared/functions/extension/HaveFCADDExt

// HaveFCADDExt() // ============== boolean HaveFCADDExt() return HasArchVersion(ARMv8p3);

Library pseudocode for shared/functions/extension/HaveFGTExt

// HaveFGTExt() // ============ // Returns TRUE if Fine Grained Trap is implemented, and FALSE otherwise. boolean HaveFGTExt() return HasArchVersion(ARMv8p6);

Library pseudocode for shared/functions/extension/HaveFJCVTZSExt

// HaveFJCVTZSExt() // ================ boolean HaveFJCVTZSExt() return HasArchVersion(ARMv8p3);

Library pseudocode for shared/functions/extension/HaveFP16MulNoRoundingToFP32Ext

// HaveFP16MulNoRoundingToFP32Ext() // ================================ // Returns TRUE if the FP16 multiply with no intermediate rounding accumulate // to FP32 instructions are implemented, and FALSE otherwise. boolean HaveFP16MulNoRoundingToFP32Ext() if !HaveFP16Ext() then return FALSE; if HasArchVersion(ARMv8p4) then return TRUE; return (HasArchVersion(ARMv8p2) && boolean IMPLEMENTATION_DEFINED "Has accumulate FP16 product into FP32 extension");

Library pseudocode for shared/functions/extension/HaveFeatCMOW

// HaveFeatCMOW() // ============== // Returns TRUE if the SCTLR_EL1.CMOW bit is implemented and the SCTLR_EL2.CMOW and // HCRX_EL2.CMOW bits are implemented if EL2 is implemented. boolean HaveFeatCMOW() return HasArchVersion(ARMv8p8);

Library pseudocode for shared/functions/extension/HaveFeatHBC

// HaveFeatHBC() // ============= // Returns TRUE if the BC instruction is implemented, and FALSE otherwise. boolean HaveFeatHBC() return HasArchVersion(ARMv8p8);

Library pseudocode for shared/functions/extension/HaveFeatHCX

// HaveFeatHCX() // ============= // Returns TRUE if HCRX_EL2 Trap Control register is implemented, // and FALSE otherwise. boolean HaveFeatHCX() return HasArchVersion(ARMv8p7);

Library pseudocode for shared/functions/extension/HaveFeatHPMN0

// HaveFeatHPMN0() // =============== // Returns TRUE if HDCR.HPMN or MDCR_EL2.HPMN is permitted to be 0 without // generating UNPREDICTABLE behavior, and FALSE otherwise. boolean HaveFeatHPMN0() return HasArchVersion(ARMv8p8) && HavePMUv3() && HaveFGTExt() && HaveEL(EL2);

Library pseudocode for shared/functions/extension/HaveFeatLS64

// HaveFeatLS64() // ============== // Returns TRUE if the LD64B, ST64B instructions are // supported, and FALSE otherwise. boolean HaveFeatLS64() return (HasArchVersion(ARMv8p7) && boolean IMPLEMENTATION_DEFINED "Has Load Store 64-Byte instruction support");

Library pseudocode for shared/functions/extension/HaveFeatLS64_ACCDATA

// HaveFeatLS64_ACCDATA() // ====================== // Returns TRUE if the ST64BV0 instruction is // supported, and FALSE otherwise. boolean HaveFeatLS64_ACCDATA() return (HasArchVersion(ARMv8p7) && HaveFeatLS64_V() && boolean IMPLEMENTATION_DEFINED "Has Store 64-Byte EL0 with return instruction support");

Library pseudocode for shared/functions/extension/HaveFeatLS64_V

// HaveFeatLS64_V() // ================ // Returns TRUE if the ST64BV instruction is // supported, and FALSE otherwise. boolean HaveFeatLS64_V() return (HasArchVersion(ARMv8p7) && HaveFeatLS64() && boolean IMPLEMENTATION_DEFINED "Has Store 64-Byte with return instruction support");

Library pseudocode for shared/functions/extension/HaveFeatMOPS

// HaveFeatMOPS() // ============== // Returns TRUE if the CPY* and SET* instructions are supported, and FALSE otherwise. boolean HaveFeatMOPS() return HasArchVersion(ARMv8p8);

Library pseudocode for shared/functions/extension/HaveFeatNMI

// HaveFeatNMI() // ============= // Returns TRUE if the Non-Maskable Interrupt extension is // implemented, and FALSE otherwise. boolean HaveFeatNMI() return HasArchVersion(ARMv8p8);

Library pseudocode for shared/functions/extension/HaveFeatRPRES

// HaveFeatRPRES() // =============== // Returns TRUE if reciprocal estimate implements 12-bit precision // when FPCR.AH=1, and FALSE otherwise. boolean HaveFeatRPRES() return (HasArchVersion(ARMv8p7) && (boolean IMPLEMENTATION_DEFINED "Has increased Reciprocal Estimate and Square Root Estimate precision support") && HaveAltFP());

Library pseudocode for shared/functions/extension/HaveFeatTIDCP1

// HaveFeatTIDCP1() // ================ // Returns TRUE if the SCTLR_EL1.TIDCP bit is implemented and the SCTLR_EL2.TIDCP bit // is implemented if EL2 is implemented. boolean HaveFeatTIDCP1() return HasArchVersion(ARMv8p8);

Library pseudocode for shared/functions/extension/HaveFeatWFxT

// HaveFeatWFxT() // ============== // Returns TRUE if WFET and WFIT instruction support is implemented, // and FALSE otherwise. boolean HaveFeatWFxT() return HasArchVersion(ARMv8p7);

Library pseudocode for shared/functions/extension/HaveFeatWFxT2

// HaveFeatWFxT2() // =============== // Returns TRUE if the register number is reported in the ESR_ELx // on exceptions to WFIT and WFET. boolean HaveFeatWFxT2() return HaveFeatWFxT() && boolean IMPLEMENTATION_DEFINED "Has feature WFxT2";

Library pseudocode for shared/functions/extension/HaveFeatXS

// HaveFeatXS() // ============ // Returns TRUE if XS attribute and the TLBI and DSB instructions with nXS qualifier // are supported, and FALSE otherwise. boolean HaveFeatXS() return HasArchVersion(ARMv8p7);

Library pseudocode for shared/functions/extension/HaveFlagFormatExt

// HaveFlagFormatExt() // =================== // Returns TRUE if flag format conversion instructions are implemented. boolean HaveFlagFormatExt() return HasArchVersion(ARMv8p5);

Library pseudocode for shared/functions/extension/HaveFlagManipulateExt

// HaveFlagManipulateExt() // ======================= // Returns TRUE if flag manipulate instructions are implemented. boolean HaveFlagManipulateExt() return HasArchVersion(ARMv8p4);

Library pseudocode for shared/functions/extension/HaveFrintExt

// HaveFrintExt() // ============== // Returns TRUE if FRINT instructions are implemented. boolean HaveFrintExt() return HasArchVersion(ARMv8p5);

Library pseudocode for shared/functions/extension/HaveHPMDExt

// HaveHPMDExt() // ============= boolean HaveHPMDExt() return HavePMUv3p1();

Library pseudocode for shared/functions/extension/HaveIDSExt

// HaveIDSExt() // ============ // Returns TRUE if ID register handling feature is implemented. boolean HaveIDSExt() return HasArchVersion(ARMv8p4);

Library pseudocode for shared/functions/extension/HaveIESB

// HaveIESB() // ========== boolean HaveIESB() return (HaveRASExt() && boolean IMPLEMENTATION_DEFINED "Has Implicit Error Synchronization Barrier");

Library pseudocode for shared/functions/extension/HaveInt8MatMulExt

// HaveInt8MatMulExt() // =================== // Returns TRUE if AArch64 8-bit integer matrix multiply instruction support is // implemented, and FALSE otherwise. boolean HaveInt8MatMulExt() return (HasArchVersion(ARMv8p6) || (HasArchVersion(ARMv8p2) && boolean IMPLEMENTATION_DEFINED "Has AArch64 Int8 Mat Mul extension"));

Library pseudocode for shared/functions/extension/HaveLSE2Ext

// HaveLSE2Ext() // ============= // Returns TRUE if LSE2 is implemented, and FALSE otherwise. boolean HaveLSE2Ext() return HasArchVersion(ARMv8p4);

Library pseudocode for shared/functions/extension/HaveMPAMExt

// HaveMPAMExt() // ============= // Returns TRUE if MPAM is implemented, and FALSE otherwise. boolean HaveMPAMExt() return (HasArchVersion(ARMv8p2) && boolean IMPLEMENTATION_DEFINED "Has MPAM extension");

Library pseudocode for shared/functions/extension/HaveMTE2Ext

// HaveMTE2Ext() // ============= // Returns TRUE if MTE support is beyond EL0, and FALSE otherwise. boolean HaveMTE2Ext() if !HasArchVersion(ARMv8p5) then return FALSE; return boolean IMPLEMENTATION_DEFINED "Has MTE2 extension";

Library pseudocode for shared/functions/extension/HaveMTE3Ext

// HaveMTE3Ext() // ============= // Returns TRUE if MTE Asymmetric Fault Handling support is // implemented, and FALSE otherwise. boolean HaveMTE3Ext() return ((HasArchVersion(ARMv8p7) && HaveMTE2Ext()) || (HasArchVersion(ARMv8p5) && boolean IMPLEMENTATION_DEFINED "Has MTE3 extension"));

Library pseudocode for shared/functions/extension/HaveMTEExt

// HaveMTEExt() // ============ // Returns TRUE if MTE implemented, and FALSE otherwise. boolean HaveMTEExt() if !HasArchVersion(ARMv8p5) then return FALSE; if HaveMTE2Ext() then return TRUE; return boolean IMPLEMENTATION_DEFINED "Has MTE extension";
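
Taken together, the three MTE predicates above form a strict ladder: HaveMTE3Ext() implies HaveMTE2Ext(), and HaveMTE2Ext() forces HaveMTEExt() to return TRUE. A minimal Python sketch of that layering (a hypothetical model, not spec text; the single MTE_LEVEL knob is an illustration of the constraint, not how an implementation records it):

MTE_LEVEL = 2  # 0 = none, 1 = FEAT_MTE, 2 = FEAT_MTE2, 3 = FEAT_MTE3

def have_mte_ext():
    return MTE_LEVEL >= 1

def have_mte2_ext():
    return MTE_LEVEL >= 2

def have_mte3_ext():
    return MTE_LEVEL >= 3

assert not have_mte3_ext() or have_mte2_ext()  # MTE3 implies MTE2
assert not have_mte2_ext() or have_mte_ext()   # MTE2 implies MTE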

Library pseudocode for shared/functions/extension/HaveNV2Ext

// HaveNV2Ext() // ============ // Returns TRUE if Enhanced Nested Virtualization is implemented. boolean HaveNV2Ext() return (HasArchVersion(ARMv8p4) && HaveNVExt() && boolean IMPLEMENTATION_DEFINED "Has support for Enhanced Nested Virtualization");

Library pseudocode for shared/functions/extension/HaveNVExt

// HaveNVExt() // =========== // Returns TRUE if Nested Virtualization is implemented. boolean HaveNVExt() return (HasArchVersion(ARMv8p3) && boolean IMPLEMENTATION_DEFINED "Has Nested Virtualization");

Library pseudocode for shared/functions/extension/HaveNoSecurePMUDisableOverride

// HaveNoSecurePMUDisableOverride() // ================================ boolean HaveNoSecurePMUDisableOverride() return HasArchVersion(ARMv8p2);

Library pseudocode for shared/functions/extension/HaveNoninvasiveDebugAuth

// HaveNoninvasiveDebugAuth() // ========================== // Returns TRUE if the Non-invasive debug controls are implemented. boolean HaveNoninvasiveDebugAuth() return !HasArchVersion(ARMv8p4);

Library pseudocode for shared/functions/extension/HavePAN3Ext

// HavePAN3Ext() // ============= // Returns TRUE if SCTLR_EL1.EPAN and SCTLR_EL2.EPAN support is implemented, // and FALSE otherwise. boolean HavePAN3Ext() return HasArchVersion(ARMv8p7) || (HasArchVersion(ARMv8p1) && boolean IMPLEMENTATION_DEFINED "Has PAN3 extension");

Library pseudocode for shared/functions/extension/HavePANExt

// HavePANExt() // ============ boolean HavePANExt() return HasArchVersion(ARMv8p1);

Library pseudocode for shared/functions/extension/HavePMUv3

// HavePMUv3() // =========== // Returns TRUE if the Performance Monitors extension is implemented, and FALSE otherwise. boolean HavePMUv3() return boolean IMPLEMENTATION_DEFINED "Has Performance Monitors extension";

Library pseudocode for shared/functions/extension/HavePMUv3TH

// HavePMUv3TH() // ============= // Returns TRUE if the PMUv3 threshold extension is implemented, and FALSE otherwise. boolean HavePMUv3TH() return (HasArchVersion(ARMv8p8) && HavePMUv3() && boolean IMPLEMENTATION_DEFINED "Has PMUv3 threshold extension");

Library pseudocode for shared/functions/extension/HavePMUv3p1

// HavePMUv3p1() // ============= // Returns TRUE if the PMUv3.1 extension is implemented, and FALSE otherwise. boolean HavePMUv3p1() return HasArchVersion(ARMv8p1) && HavePMUv3();

Library pseudocode for shared/functions/extension/HavePMUv3p4

// HavePMUv3p4() // ============= // Returns TRUE if the PMUv3.4 extension is implemented, and FALSE otherwise. boolean HavePMUv3p4() return HasArchVersion(ARMv8p4) && HavePMUv3();

Library pseudocode for shared/functions/extension/HavePMUv3p5

// HavePMUv3p5() // ============= // Returns TRUE if the PMUv3.5 extension is implemented, and FALSE otherwise. boolean HavePMUv3p5() return HasArchVersion(ARMv8p5) && HavePMUv3();

Library pseudocode for shared/functions/extension/HavePMUv3p7

// HavePMUv3p7() // ============= // Returns TRUE if the PMUv3.7 extension is implemented, and FALSE otherwise. boolean HavePMUv3p7() return HasArchVersion(ARMv8p7) && HavePMUv3();

Library pseudocode for shared/functions/extension/HavePageBasedHardwareAttributes

// HavePageBasedHardwareAttributes() // ================================= boolean HavePageBasedHardwareAttributes() return HasArchVersion(ARMv8p2);

Library pseudocode for shared/functions/extension/HavePrivATExt

// HavePrivATExt() // =============== boolean HavePrivATExt() return HasArchVersion(ARMv8p2);

Library pseudocode for shared/functions/extension/HaveQRDMLAHExt

// HaveQRDMLAHExt() // ================ boolean HaveQRDMLAHExt() return HasArchVersion(ARMv8p1); boolean HaveAccessFlagUpdateExt() return HasArchVersion(ARMv8p1); boolean HaveDirtyBitModifierExt() return HasArchVersion(ARMv8p1);

Library pseudocode for shared/functions/extension/HaveRASExt

// HaveRASExt() // ============ boolean HaveRASExt() return (HasArchVersion(ARMv8p2) || boolean IMPLEMENTATION_DEFINED "Has RAS extension");

Library pseudocode for shared/functions/extension/HaveQRDMLAHExtHaveRNG

// HaveQRDMLAHExt() // ================ // HaveRNG() // ========= // Returns TRUE if Random Number Generator extension // support is implemented and FALSE otherwise. boolean HaveQRDMLAHExt() HaveRNG() return HasArchVersion(ARMv8p1ARMv8p5);) && boolean IMPLEMENTATION_DEFINED "Has RNG extension";

Library pseudocode for shared/functions/extension/HaveRASExtHaveSBExt

// HaveRASExt() // ============ // HaveSBExt() // =========== // Returns TRUE if support for SB is implemented, and FALSE otherwise. boolean HaveRASExt() return (HaveSBExt() returnHasArchVersion(ARMv8p2ARMv8p5) || boolean IMPLEMENTATION_DEFINED "Has RAS extension");) || boolean IMPLEMENTATION_DEFINED "Has SB extension";

Library pseudocode for shared/functions/extension/HaveRNGHaveSSBSExt

// HaveRNG() // ========= // Returns TRUE if Random Number Generator extension // support is implemented and FALSE otherwise. // HaveSSBSExt() // ============= // Returns TRUE if support for SSBS is implemented, and FALSE otherwise. boolean HaveRNG() HaveSSBSExt() return HasArchVersion(ARMv8p5) && boolean IMPLEMENTATION_DEFINED "Has RNG extension";) || boolean IMPLEMENTATION_DEFINED "Has SSBS extension";

Library pseudocode for shared/functions/extension/HaveSBExtHaveSecureEL2Ext

// HaveSBExt() // =========== // Returns TRUE if support for SB is implemented, and FALSE otherwise. // HaveSecureEL2Ext() // ================== // Returns TRUE if Secure EL2 is implemented. boolean HaveSBExt() HaveSecureEL2Ext() return HasArchVersion(ARMv8p5ARMv8p4) || boolean IMPLEMENTATION_DEFINED "Has SB extension";);

Library pseudocode for shared/functions/extension/HaveSSBSExtHaveSecureExtDebugView

// HaveSSBSExt() // ============= // Returns TRUE if support for SSBS is implemented, and FALSE otherwise. // HaveSecureExtDebugView() // ======================== // Returns TRUE if support for Secure and Non-secure views of debug peripherals // is implemented. boolean HaveSSBSExt() HaveSecureExtDebugView() return HasArchVersion(ARMv8p5ARMv8p4) || boolean IMPLEMENTATION_DEFINED "Has SSBS extension";);

Library pseudocode for shared/functions/extension/HaveSecureEL2ExtHaveSelfHostedTrace

// HaveSecureEL2Ext() // ================== // Returns TRUE if Secure EL2 is implemented. // HaveSelfHostedTrace() // ===================== boolean HaveSecureEL2Ext() HaveSelfHostedTrace() return HasArchVersion(ARMv8p4);

Library pseudocode for shared/functions/extension/HaveSecureExtDebugViewHaveSmallTranslationTblExt

// HaveSecureExtDebugView() // ======================== // Returns TRUE if support for Secure and Non-secure views of debug peripherals // is implemented. // HaveSmallTranslationTblExt() // ============================ // Returns TRUE if Small Translation Table Support is implemented. boolean HaveSecureExtDebugView() returnHaveSmallTranslationTableExt() return ( HasArchVersion(ARMv8p4);) && boolean IMPLEMENTATION_DEFINED "Has Small Translation Table extension");

Library pseudocode for shared/functions/extension/HaveSelfHostedTraceHaveSoftwareLock

// HaveSelfHostedTrace() // ===================== // HaveSoftwareLock() // ================== // Returns TRUE if Software Lock is implemented. boolean HaveSelfHostedTrace() returnHaveSoftwareLock( HasArchVersionComponent(component) if() then return FALSE; if HaveDoPD() && component != Component_CTI then return FALSE; case component of when Component_Debug return boolean IMPLEMENTATION_DEFINED "Debug has Software Lock"; when Component_PMU return boolean IMPLEMENTATION_DEFINED "PMU has Software Lock"; when Component_CTI return boolean IMPLEMENTATION_DEFINED "CTI has Software Lock"; otherwise UnreachableARMv8p4Havev8p4Debug);();
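
The selection logic in HaveSoftwareLock() can be easier to follow outside the pseudocode: Debugv8.4 removes the Software Lock entirely, DoPD removes it for every component except the CTI, and otherwise each component has its own IMPLEMENTATION_DEFINED choice. A rough Python rendering, with hypothetical flag names and a dictionary standing in for the per-component IMPLEMENTATION_DEFINED knobs:

HAVE_V8P4_DEBUG = False
HAVE_DOPD = True
IMPDEF_SWLOCK = {"Debug": True, "PMU": True, "CTI": True}  # illustrative knobs

def have_software_lock(component):
    if HAVE_V8P4_DEBUG:
        return False                       # Debugv8.4 removes the Software Lock
    if HAVE_DOPD and component != "CTI":
        return False                       # DoPD removes it except for the CTI
    return IMPDEF_SWLOCK[component]        # KeyError for unknown components ~ Unreachable()

print(have_software_lock("PMU"))  # False: DoPD is implemented and PMU is not the CTI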

Library pseudocode for shared/functions/extension/HaveStage2MemAttrControl

// HaveStage2MemAttrControl() // ========================== // Returns TRUE if support for Stage2 control of memory types and cacheability // attributes is implemented. boolean HaveStage2MemAttrControl() return HasArchVersion(ARMv8p4);

Library pseudocode for shared/functions/extension/HaveStatisticalProfiling

// HaveStatisticalProfiling() // ========================== // Returns TRUE if Statistical Profiling Extension is implemented, // and FALSE otherwise. boolean HaveStatisticalProfiling() return HasArchVersion(ARMv8p2);

Library pseudocode for shared/functions/extension/HaveStatisticalProfilingv1p1

// HaveStatisticalProfilingv1p1() // ============================== // Returns TRUE if the SPEv1p1 extension is implemented, and FALSE otherwise. boolean HaveStatisticalProfilingv1p1() return (HasArchVersion(ARMv8p3) && boolean IMPLEMENTATION_DEFINED "Has SPEv1p1 extension");

Library pseudocode for shared/functions/extension/HaveStatisticalProfilingv1p2

// HaveStatisticalProfilingv1p2() // ============================== // Returns TRUE if the SPEv1p2 extension is implemented, and FALSE otherwise. boolean HaveStatisticalProfilingv1p2() return (HasArchVersion(ARMv8p7) && HaveStatisticalProfiling() && boolean IMPLEMENTATION_DEFINED "Has SPEv1p2 extension");

Library pseudocode for shared/functions/extension/HaveTWEDExt

// HaveTWEDExt() // ============= // Returns TRUE if Delayed Trapping of WFE instruction support is implemented, // and FALSE otherwise. boolean HaveTWEDExt() return boolean IMPLEMENTATION_DEFINED "Has TWED extension";

Library pseudocode for shared/functions/extension/HaveTraceExt

// HaveTraceExt() // ============== // Returns TRUE if Trace functionality as described by the Trace Architecture // is implemented. boolean HaveTraceExt() return boolean IMPLEMENTATION_DEFINED "Has Trace Architecture functionality";

Library pseudocode for shared/functions/extension/HaveTrapLoadStoreMultipleDeviceExt

// HaveTrapLoadStoreMultipleDeviceExt() // ==================================== boolean HaveTrapLoadStoreMultipleDeviceExt() return HasArchVersion(ARMv8p2);

Library pseudocode for shared/functions/extension/HaveUAOExt

// HaveUAOExt() // ============ boolean HaveUAOExt() return HasArchVersion(ARMv8p2);

Library pseudocode for shared/functions/extension/HaveV82Debug

// HaveV82Debug() // ============== boolean HaveV82Debug() return HasArchVersion(ARMv8p2);

Library pseudocode for shared/functions/extension/HaveVirtHostExt

// HaveVirtHostExt() // ================= boolean HaveVirtHostExt() return HasArchVersion(ARMv8p1);

Library pseudocode for shared/functions/extension/Havev8p4Debug

// Havev8p4Debug() // =============== // Returns TRUE if support for the Debugv8p4 feature is implemented and FALSE otherwise. boolean Havev8p4Debug() return HasArchVersion(ARMv8p4);

Library pseudocode for shared/functions/extension/InsertIESBBeforeException

// If SCTLR_ELx.IESB is 1 when an exception is generated to ELx, any pending Unrecoverable // SError interrupt must be taken before executing any instructions in the exception handler. // However, this can be before the branch to the exception handler is made. boolean InsertIESBBeforeException(bits(2) el);

Library pseudocode for shared/functions/externalaborts/HandleExternalAbort

// HandleExternalAbort() // ===================== // Takes a Synchronous/Asynchronous abort based on fault. HandleExternalAbort(PhysMemRetStatus memretstatus, boolean iswrite, AddressDescriptor memaddrdesc, integer size, AccessDescriptor accdesc) assert (memretstatus.statuscode IN {Fault_SyncExternal, Fault_AsyncExternal} || (!HaveRASExt() && memretstatus.statuscode IN {Fault_SyncParity, Fault_AsyncParity})); fault = NoFault(); fault.statuscode = memretstatus.statuscode; fault.write = iswrite; fault.extflag = memretstatus.extflag; fault.acctype = memretstatus.acctype; // It is implementation specific whether external aborts signaled // in-band synchronously are taken synchronously or asynchronously if (IsExternalSyncAbort(fault) && !IsExternalAbortTakenSynchronously(memretstatus, iswrite, memaddrdesc, size, accdesc)) then if fault.statuscode == Fault_SyncParity then fault.statuscode = Fault_AsyncParity; else fault.statuscode = Fault_AsyncExternal; if HaveRASExt() then fault.errortype = PEErrorState(memretstatus); else fault.errortype = bits(2) UNKNOWN; if IsExternalSyncAbort(fault) then if UsingAArch32() then AArch32.Abort(memaddrdesc.vaddress<31:0>, fault); else AArch64.Abort(memaddrdesc.vaddress, fault); else PendSErrorInterrupt(fault);
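
The core decision in HandleExternalAbort() is whether an in-band (synchronous) external abort is actually taken synchronously; if not, its status code is downgraded to the matching asynchronous fault before an SError interrupt is pended. A minimal Python sketch of just that reclassification step (the string names are illustrative, not spec identifiers):

def adjust_statuscode(statuscode, taken_synchronously):
    # Mirrors the first if-block of HandleExternalAbort(): a synchronous external
    # abort not taken synchronously becomes the matching asynchronous fault,
    # with parity faults staying parity faults.
    sync = statuscode in ("SyncExternal", "SyncParity")
    if sync and not taken_synchronously:
        return "AsyncParity" if statuscode == "SyncParity" else "AsyncExternal"
    return statuscode

assert adjust_statuscode("SyncParity", False) == "AsyncParity"
assert adjust_statuscode("SyncExternal", True) == "SyncExternal"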

Library pseudocode for shared/functions/externalaborts/HandleExternalReadAbort

// HandleExternalReadAbort() // ========================= // Wrapper function for HandleExternalAbort function in case of an External // Abort on memory read. HandleExternalReadAbort(PhysMemRetStatus memstatus, AddressDescriptor memaddrdesc, integer size, AccessDescriptor accdesc) iswrite = FALSE; HandleExternalAbort(memstatus, iswrite, memaddrdesc, size, accdesc);

Library pseudocode for shared/functions/externalaborts/HandleExternalTTWAbort

// HandleExternalTTWAbort() // ======================== // Take Asynchronous abort or update FaultRecord for Translation Table Walk // based on PhysMemRetStatus. FaultRecord HandleExternalTTWAbort(PhysMemRetStatus memretstatus, boolean iswrite, AddressDescriptor memaddrdesc, AccessDescriptor accdesc, integer size, FaultRecord input_fault) output_fault = input_fault; output_fault.extflag = memretstatus.extflag; output_fault.statuscode = memretstatus.statuscode; if (IsExternalSyncAbort(output_fault) && !IsExternalAbortTakenSynchronously(memretstatus, iswrite, memaddrdesc, size, accdesc)) then if output_fault.statuscode == Fault_SyncParity then output_fault.statuscode = Fault_AsyncParity; else output_fault.statuscode = Fault_AsyncExternal; // If a synchronous fault is on a translation table walk, then update // the fault type if IsExternalSyncAbort(output_fault) then if output_fault.statuscode == Fault_SyncParity then output_fault.statuscode = Fault_SyncParityOnWalk; else output_fault.statuscode = Fault_SyncExternalOnWalk; if HaveRASExt() then output_fault.errortype = PEErrorState(memretstatus); else output_fault.errortype = bits(2) UNKNOWN; if !IsExternalSyncAbort(output_fault) then PendSErrorInterrupt(output_fault); output_fault.statuscode = Fault_None; return output_fault;
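
HandleExternalTTWAbort() applies the same synchronous/asynchronous reclassification as HandleExternalAbort() above, and then additionally marks a still-synchronous fault as occurring on the translation table walk itself. Extending the previous Python sketch (names illustrative only):

def ttw_statuscode(statuscode, taken_synchronously):
    sync = statuscode in ("SyncExternal", "SyncParity")
    if sync and not taken_synchronously:
        return "AsyncParity" if statuscode == "SyncParity" else "AsyncExternal"
    if sync:  # still synchronous: record that it occurred on the walk itself
        return "SyncParityOnWalk" if statuscode == "SyncParity" else "SyncExternalOnWalk"
    return statuscode

assert ttw_statuscode("SyncExternal", True) == "SyncExternalOnWalk"
assert ttw_statuscode("SyncParity", False) == "AsyncParity"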

Library pseudocode for shared/functions/externalaborts/HandleExternalWriteAbort

// HandleExternalWriteAbort() // ========================== // Wrapper function for HandleExternalAbort function in case of an External // Abort on memory write. HandleExternalWriteAbort(PhysMemRetStatus memstatus, AddressDescriptor memaddrdesc, integer size, AccessDescriptor accdesc) iswrite = TRUE; HandleExternalAbort(memstatus, iswrite, memaddrdesc, size, accdesc);

Library pseudocode for shared/functions/externalaborts/IsExternalAbortTakenSynchronously

// Return an implementation specific value: // TRUE if the fault returned for the access can be taken synchronously, // FALSE otherwise. // // This might vary between accesses, for example depending on the error type // or memory type being accessed. // External aborts on data accesses and translation table walks on data accesses // can be either synchronous or asynchronous. // // When FEAT_DoubleFault is not implemented, External aborts on instruction // fetches and translation table walks on instruction fetches can be either // synchronous or asynchronous. // When FEAT_DoubleFault is implemented, all External abort exceptions on // instruction fetches and translation table walks on instruction fetches // must be synchronous. boolean IsExternalAbortTakenSynchronously(PhysMemRetStatus memstatus, boolean iswrite, AddressDescriptor desc, integer size, AccessDescriptor accdesc);

Library pseudocode for shared/functions/externalaborts/PEErrorState

constant bits(2) Sync_UC = '10'; // Synchronous Uncontainable constant bits(2) Sync_UER = '00'; // Synchronous Recoverable constant bits(2) Sync_UEO = '11'; // Synchronous Restartable constant bits(2) ASync_UC = '00'; // ASynchronous Uncontainable constant bits(2) ASync_UEU = '01'; // ASynchronous Unrecoverable constant bits(2) ASync_UER = '11'; // ASynchronous Recoverable constant bits(2) ASync_UEO = '10'; // ASynchronous Restartable bits(2) PEErrorState(PhysMemRetStatus memstatus);
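
Note that the synchronous and asynchronous reporting spaces above reuse the same 2-bit patterns with different meanings, so an encoding is only meaningful together with how the error is reported. A small Python lookup table (sketch only) makes the pairing explicit:

SYNC_ERRORSTATE = {"UC": 0b10, "UER": 0b00, "UEO": 0b11}
ASYNC_ERRORSTATE = {"UC": 0b00, "UEU": 0b01, "UER": 0b11, "UEO": 0b10}

# '11' means Restartable (UEO) when reported synchronously,
# but Recoverable (UER) when reported asynchronously:
assert SYNC_ERRORSTATE["UEO"] == ASYNC_ERRORSTATE["UER"] == 0b11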

Library pseudocode for shared/functions/externalaborts/HandleExternalWriteAbortPendSErrorInterrupt

// HandleExternalWriteAbort() // ========================== // Wrapper function for HandleExternalAbort function in case of an External // Abort on memory write.// Pend the SError. HandleExternalWriteAbort(PendSErrorInterrupt(PhysMemRetStatusFaultRecord memstatus, AddressDescriptor memaddrdesc, integer size, AccessDescriptor accdesc) iswrite = TRUE; HandleExternalAbort(memstatus, iswrite, memaddrdesc, size, accdesc);fault);

Library pseudocode for shared/functions/externalabortsfloat/IsExternalAbortTakenSynchronouslybfloat/BFAdd

// Return an implementation specific value: // TRUE if the fault returned for the access can be taken synchronously, // FALSE otherwise. // // This might vary between accesses, for example depending on the error type // or memory type being accessed. // External aborts on data accesses and translation table walks on data accesses // can be either synchronous or asynchronous. // // When FEAT_DoubleFault is not implemented, External aborts on instruction // fetches and translation table walks on instruction fetches can be either // synchronous or asynchronous. // When FEAT_DoubleFault is implemented, all External abort exceptions on // instruction fetches and translation table walks on instruction fetches // must be synchronous. boolean// BFAdd() // ======= // Single-precision add following BFloat16 computation behaviors. bits(32) IsExternalAbortTakenSynchronously(BFAdd(bits(32) op1, bits(32) op2) bits(32) result;PhysMemRetStatusFPCRType memstatus, boolean iswrite,fpcr = FPCR[]; (type1,sign1,value1) = AddressDescriptorBFUnpack desc, integer size,(op1); (type2,sign2,value2) = (op2); if type1 == FPType_QNaN || type2 == FPType_QNaN then result = FPDefaultNaN(fpcr); else inf1 = (type1 == FPType_Infinity); inf2 = (type2 == FPType_Infinity); zero1 = (type1 == FPType_Zero); zero2 = (type2 == FPType_Zero); if inf1 && inf2 && sign1 == NOT(sign2) then result = FPDefaultNaN(fpcr); elsif (inf1 && sign1 == '0') || (inf2 && sign2 == '0') then result = FPInfinity('0'); elsif (inf1 && sign1 == '1') || (inf2 && sign2 == '1') then result = FPInfinity('1'); elsif zero1 && zero2 && sign1 == sign2 then result = FPZero(sign1); else result_value = value1 + value2; if result_value == 0.0 then result = FPZero('0'); // Positive sign when Round to Odd else result = BFRoundAccessDescriptorBFUnpack accdesc);(result_value); return result;

Library pseudocode for shared/functions/externalabortsfloat/PEErrorStatebfloat/BFDotAdd

constant bits(2)// BFDotAdd() // ========== // BFloat16 2-way dot-product and add to single-precision // result = addend + op1_a*op2_a + op1_b*op2_b bits(32) Sync_UC = '10'; // Synchronous Uncontainable constant bits(2)BFDotAdd(bits(32) addend, bits(16) op1_a, bits(16) op1_b, bits(16) op2_a, bits(16) op2_b, Sync_UER = '00'; // Synchronous Recoverable constant bits(2)fpcr_in) Sync_UEO = '11'; // Synchronous Restartable constant bits(2)fpcr = fpcr_in; bits(32) prod; prod = ASync_UC = '00'; // ASynchronous Uncontainable constant bits(2)( ASync_UEU = '01'; // ASynchronous Unrecoverable constant bits(2)(op1_a, op2_a), ASync_UER = '11'; // ASynchronous Recoverable constant bits(2)(op1_b, op2_b)); result = ASync_UEO = '10'; // ASynchronous Restartable bits(2) PEErrorState(PhysMemRetStatus memstatus);(addend, prod); return result;

Library pseudocode for shared/functions/externalabortsfloat/PendSErrorInterruptbfloat/BFMatMulAdd

// Pend the SError.// BFMatMulAdd() // ============= // BFloat16 matrix multiply and add to single-precision matrix // result[2, 2] = addend[2, 2] + (op1[2, 4] * op2[4, 2]) bits(N) PendSErrorInterrupt(BFMatMulAdd(bits(N) addend, bits(N) op1, bits(N) op2) assert N == 128; bits(N) result; bits(32) sum; for i = 0 to 1 for j = 0 to 1 sum =[addend, 2*i + j, 32]; for k = 0 to 1 bits(16) elt1_a = Elem[op1, 4*i + 2*k + 0, 16]; bits(16) elt1_b = Elem[op1, 4*i + 2*k + 1, 16]; bits(16) elt2_a = Elem[op2, 4*j + 2*k + 0, 16]; bits(16) elt2_b = Elem[op2, 4*j + 2*k + 1, 16]; sum = BFDotAdd(sum, elt1_a, elt1_b, elt2_a, elt2_b, FPCR[]); ElemFaultRecordElem fault);[result, 2*i + j, 32] = sum; return result;
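
The loop nest in BFMatMulAdd() is an ordinary 2x2 += [2,4] x [4,2] matrix multiply in which each k step consumes a pair of BFloat16 elements through BFDotAdd(). A plain-Python rendering of the same indexing, with an ordinary float dot-product standing in for BFDotAdd() (so the BFloat16 rounding behaviors are not modelled):

def bf_matmul_add(addend, op1, op2, bf_dot_add):
    # addend: 2x2 accumulators; op1: 2 rows of 4 elements;
    # op2: 2 rows of 4 elements, row j holding flattened elements 4*j .. 4*j+3.
    result = [[0.0, 0.0], [0.0, 0.0]]
    for i in range(2):
        for j in range(2):
            s = addend[i][j]
            for k in range(2):
                s = bf_dot_add(s, op1[i][2*k], op1[i][2*k + 1],
                               op2[j][2*k], op2[j][2*k + 1])
            result[i][j] = s
    return result

dot = lambda acc, a1, b1, a2, b2: acc + a1*a2 + b1*b2  # stand-in for BFDotAdd()
print(bf_matmul_add([[0, 0], [0, 0]],
                    [[1, 2, 3, 4], [5, 6, 7, 8]],
                    [[1, 0, 0, 0], [0, 1, 0, 0]], dot))  # -> [[1, 2], [5, 6]]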

Library pseudocode for shared/functions/float/bfloat/BFMul

// BFMul() // ======= // BFloat16 widening multiply to single-precision following BFloat16 // computation behaviors. bits(32) BFMul(bits(16) op1, bits(16) op2) bits(32) result; FPCRType fpcr = FPCR[]; (type1,sign1,value1) = BFUnpack(op1); (type2,sign2,value2) = BFUnpack(op2); if type1 == FPType_QNaN || type2 == FPType_QNaN then result = FPDefaultNaN(fpcr); else inf1 = (type1 == FPType_Infinity); inf2 = (type2 == FPType_Infinity); zero1 = (type1 == FPType_Zero); zero2 = (type2 == FPType_Zero); if (inf1 && zero2) || (zero1 && inf2) then result = FPDefaultNaN(fpcr); elsif inf1 || inf2 then result = FPInfinity(sign1 EOR sign2); elsif zero1 || zero2 then result = FPZero(sign1 EOR sign2); else result = BFRound(value1*value2); return result;

Library pseudocode for shared/functions/float/bfloat/BFMulAdd

// BFMulAdd() // ========== // Used by BFMLALB and BFMLALT instructions. bits(N) BFMulAdd(bits(N) addend, bits(N) op1, bits(N) op2, FPCRType fpcr_in) FPCRType fpcr = fpcr_in; boolean altfp = HaveAltFP() && fpcr.AH == '1'; // When TRUE: boolean fpexc = !altfp; // Do not generate floating point exceptions if altfp then fpcr.<FIZ,FZ> = '11'; // Flush denormal input and output to zero if altfp then fpcr.RMode = '00'; // Use RNE rounding mode return FPMulAdd(addend, op1, op2, fpcr, fpexc);

Library pseudocode for shared/functions/float/bfloat/BFNeg

// BFNeg() // ======= bits(16) BFNeg(bits(16) op) return NOT(op<15>) : op<14:0>;

Library pseudocode for shared/functions/float/bfloat/BFRound

// BFRound() // ========= // Converts a real number OP into a single-precision value using the // Round to Odd rounding mode and following BFloat16 computation behaviors. bits(32) BFRound(real op) assert op != 0.0; bits(32) result; // Format parameters - minimum exponent, numbers of exponent and fraction bits. minimum_exp = -126; E = 8; F = 23; // Split value into sign, unrounded mantissa and exponent. bit sign; real mantissa; if op < 0.0 then sign = '1'; mantissa = -op; else sign = '0'; mantissa = op; exponent = 0; while mantissa < 1.0 do mantissa = mantissa * 2.0; exponent = exponent - 1; while mantissa >= 2.0 do mantissa = mantissa / 2.0; exponent = exponent + 1; // Fixed Flush-to-zero. if exponent < minimum_exp then return FPZero(sign); // Start creating the exponent value for the result. Start by biasing the actual exponent // so that the minimum exponent becomes 1, lower values 0 (indicating possible underflow). biased_exp = Max((exponent - minimum_exp) + 1, 0); if biased_exp == 0 then mantissa = mantissa / 2.0^(minimum_exp - exponent); // Get the unrounded mantissa as an integer, and the "units in last place" rounding error. int_mant = RoundDown(mantissa * 2.0^F); // < 2.0^F if biased_exp == 0, >= 2.0^F if not error = mantissa * 2.0^F - Real(int_mant); // Round to Odd if error != 0.0 then int_mant<0> = '1'; // Deal with overflow and generate result. if biased_exp >= 2^E - 1 then result = FPInfinity(sign); // Overflows generate appropriately-signed Infinity else result = sign : biased_exp<30-F:0> : int_mant<F-1:0>; return result;
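
BFRound()'s distinguishing feature is the Round to Odd step: truncate toward zero, then, if anything was discarded, force the mantissa's low bit to 1. A compact Python sketch of that rounding on IEEE binary32, using Python's binary64 floats as the "real" input (finite, in-range values only; the flush-to-zero and overflow-to-infinity cases of BFRound() are not modelled):

import struct

def f32_bits(x):
    return struct.unpack('<I', struct.pack('<f', x))[0]

def bits_f32(b):
    return struct.unpack('<f', struct.pack('<I', b))[0]

def round_to_odd_f32(x):
    y = bits_f32(f32_bits(x))        # round-to-nearest binary32 first
    if y == x:
        return y                     # exactly representable: no rounding needed
    b = f32_bits(y)
    if abs(y) > abs(x):
        b -= 1                       # nearest overshot: step one ulp toward zero
    return bits_f32(b | 1)           # inexact: pick the neighbour with odd mantissa LSB

print(round_to_odd_f32(1.0 + 2.0**-25))  # 1.0000001192092896, not 1.0

Round to Odd is used here because the widened single-precision results of BFloat16 arithmetic may later be rounded again; rounding to odd avoids the double-rounding bias that two successive round-to-nearest steps could introduce.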

Library pseudocode for shared/functions/float/bfloat/BFUnpack

// BFUnpack() // ========== // Unpacks a BFloat16 or single-precision value into its type, // sign bit and real number that it represents. // The real number result has the correct sign for numbers and infinities, // is very large in magnitude for infinities, and is 0.0 for NaNs. // (These values are chosen to simplify the description of // comparisons and conversions.) (FPType, bit, real) BFUnpack(bits(N) fpval) assert N IN {16,32}; bit sign; bits(8) exp; bits(23) frac; if N == 16 then sign = fpval<15>; exp = fpval<14:7>; frac = fpval<6:0> : Zeros(16); else // N == 32 sign = fpval<31>; exp = fpval<30:23>; frac = fpval<22:0>; FPType fptype; real value; if IsZero(exp) then fptype = FPType_Zero; value = 0.0; // Fixed Flush to Zero elsif IsOnes(exp) then if IsZero(frac) then fptype = FPType_Infinity; value = 2.0^1000000; else // no SNaN for BF16 arithmetic fptype = FPType_QNaN; value = 0.0; else fptype = FPType_Nonzero; value = 2.0^(UInt(exp)-127) * (1.0 + Real(UInt(frac)) * 2.0^-23); if sign == '1' then value = -value; return (fptype, sign, value);
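
For reference, the same decode in plain Python, taking the 16 BFloat16 bits directly (a sketch only: Python's float('inf') stands in for the spec's huge finite 2.0^1000000, and the fixed flush-to-zero of subnormal inputs is kept):

def bf16_unpack(h):
    sign = (h >> 15) & 1
    exp = (h >> 7) & 0xFF
    frac = h & 0x7F
    if exp == 0:
        return ('Zero', sign, 0.0)                 # fixed flush-to-zero
    if exp == 0xFF:
        if frac == 0:
            v = float('-inf') if sign else float('inf')
            return ('Infinity', sign, v)
        return ('QNaN', sign, 0.0)                 # no SNaN for BF16 arithmetic
    value = 2.0 ** (exp - 127) * (1.0 + frac / 128.0)
    return ('Nonzero', sign, -value if sign else value)

print(bf16_unpack(0x3F80))  # ('Nonzero', 0, 1.0)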

Library pseudocode for shared/functions/float/bfloat/FPConvertBF

// FPConvertBF() // ============= // Converts a single-precision OP to a BFloat16 value using the Round to Nearest Even // rounding mode when executed from AArch64 state and FPCR.AH == '1', // otherwise rounding is controlled by FPCR/FPSCR. bits(16) FPConvertBF(bits(32) op, FPCRType fpcr_in, FPRounding rounding_in) FPCRType fpcr = fpcr_in; FPRounding rounding = rounding_in; bits(32) result; // BF16 value in top 16 bits boolean altfp = HaveAltFP() && !UsingAArch32() && fpcr.AH == '1'; boolean fpexc = !altfp; // Generate no floating-point exceptions if altfp then fpcr.<FIZ,FZ> = '11'; // Flush denormal input and output to zero if altfp then rounding = FPRounding_TIEEVEN; // Use RNE rounding mode // Unpack floating-point operand, with always flush-to-zero if fpcr.AH == '1'. (fptype,sign,value) = FPUnpack(op, fpcr, fpexc); if fptype == FPType_SNaN || fptype == FPType_QNaN then if fpcr.DN == '1' then result = FPDefaultNaN(fpcr); else result = FPConvertNaN(op); if fptype == FPType_SNaN then if fpexc then FPProcessException(FPExc_InvalidOp, fpcr); elsif fptype == FPType_Infinity then result = FPInfinity(sign); elsif fptype == FPType_Zero then result = FPZero(sign); else result = FPRoundCVBF(value, fpcr, rounding, fpexc); // Returns correctly rounded BF16 value from top 16 bits return result<31:16>; // FPConvertBF() // ============= // Converts a single-precision operand to a BFloat16 value. bits(16) FPConvertBF(bits(32) op, FPCRType fpcr) return FPConvertBF(op, fpcr, FPRoundingMode(fpcr));
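
When FPCR.AH is 1, FPConvertBF() reduces to a round-to-nearest-even narrowing of the binary32 bit pattern into its top half, which has a well-known branch-light bit formulation. A Python sketch under that assumption (NaN handling is simplified to "quiet the NaN", so the FPDefaultNaN/FPConvertNaN choice and exception signalling in the pseudocode above are not modelled):

import struct

def f32_to_bf16_rne(x):
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    if (bits & 0x7F800000) == 0x7F800000 and (bits & 0x007FFFFF):
        return ((bits >> 16) | 0x0040) & 0xFFFF    # NaN: keep the quiet bit set
    round_bias = 0x7FFF + ((bits >> 16) & 1)       # ties-to-even via the kept LSB
    return ((bits + round_bias) >> 16) & 0xFFFF

print(hex(f32_to_bf16_rne(1.0)))  # 0x3f80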

Library pseudocode for shared/functions/float/bfloat/FPRoundCVBF

// FPRoundCVBF()
// =============
// Converts a real number OP into a BFloat16 value using the supplied
// rounding mode RMODE. The 'fpexc' argument controls the generation of
// floating-point exceptions.

bits(32) FPRoundCVBF(real op, FPCRType fpcr, FPRounding rounding, boolean fpexc)
    boolean isbfloat16 = TRUE;
    return FPRoundBase(op, fpcr, rounding, isbfloat16, fpexc);

Library pseudocode for shared/functions/float/fixedtofp/FixedToFP

// FixedToFP()
// ===========
// Convert M-bit fixed point OP with FBITS fractional bits to
// N-bit precision floating point, controlled by UNSIGNED and ROUNDING.

bits(N) FixedToFP(bits(M) op, integer fbits, boolean unsigned, FPCRType fpcr, FPRounding rounding)
    assert N IN {16,32,64};
    assert M IN {16,32,64};
    bits(N) result;
    assert fbits >= 0;
    assert rounding != FPRounding_ODD;

    // Correct signed-ness
    int_operand = Int(op, unsigned);

    // Scale by fractional bits and generate a real value
    real_operand = Real(int_operand) / 2.0^fbits;

    if real_operand == 0.0 then
        result = FPZero('0');
    else
        result = FPRound(real_operand, fpcr, rounding);

    return result;
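The conversion is just sign correction plus scaling. A small Python model (fixed_to_fp is an invented name; the host rounds to double where the pseudocode rounds per FPCR):

def fixed_to_fp(op: int, fbits: int, unsigned: bool) -> float:
    if not unsigned and op & 0x80000000:   # correct signed-ness (M == 32)
        op -= 1 << 32
    return op / 2.0**fbits                 # scale by fractional bits

print(fixed_to_fp(0xFFFF8000, fbits=16, unsigned=False))   # -0.5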

Library pseudocode for shared/functions/float/fpabs/FPAbs

// FPAbs()
// =======

bits(N) FPAbs(bits(N) op)
    assert N IN {16,32,64};
    if !UsingAArch32() && HaveAltFP() then
        FPCRType fpcr = FPCR[];
        if fpcr.AH == '1' then
            (fptype, -, -) = FPUnpack(op, fpcr, FALSE);
            if fptype IN {FPType_SNaN, FPType_QNaN} then
                return op;    // When fpcr.AH=1, sign of NaN has no consequence
    return '0' : op<N-2:0>;

Library pseudocode for shared/functions/float/fpadd/FPAdd

// FPAdd()
// =======

bits(N) FPAdd(bits(N) op1, bits(N) op2, FPCRType fpcr)
    boolean fpexc = TRUE;    // Generate floating-point exceptions
    return FPAdd(op1, op2, fpcr, fpexc);

// FPAdd()
// =======

bits(N) FPAdd(bits(N) op1, bits(N) op2, FPCRType fpcr, boolean fpexc)
    assert N IN {16,32,64};
    rounding = FPRoundingMode(fpcr);
    (type1,sign1,value1) = FPUnpack(op1, fpcr, fpexc);
    (type2,sign2,value2) = FPUnpack(op2, fpcr, fpexc);
    boolean altfmaxfmin = FALSE;  // Do not use altfp mode for FMIN, FMAX and variants
    (done,result) = FPProcessNaNs(type1, type2, op1, op2, fpcr, altfmaxfmin, fpexc);
    if !done then
        inf1  = (type1 == FPType_Infinity);
        inf2  = (type2 == FPType_Infinity);
        zero1 = (type1 == FPType_Zero);
        zero2 = (type2 == FPType_Zero);
        if inf1 && inf2 && sign1 == NOT(sign2) then
            result = FPDefaultNaN(fpcr);
            if fpexc then FPProcessException(FPExc_InvalidOp, fpcr);
        elsif (inf1 && sign1 == '0') || (inf2 && sign2 == '0') then
            result = FPInfinity('0');
        elsif (inf1 && sign1 == '1') || (inf2 && sign2 == '1') then
            result = FPInfinity('1');
        elsif zero1 && zero2 && sign1 == sign2 then
            result = FPZero(sign1);
        else
            result_value = value1 + value2;
            if result_value == 0.0 then  // Sign of exact zero result depends on rounding mode
                result_sign = if rounding == FPRounding_NEGINF then '1' else '0';
                result = FPZero(result_sign);
            else
                result = FPRound(result_value, fpcr, rounding, fpexc);
        if fpexc then FPProcessDenorms(type1, type2, N, fpcr);
    return result;
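The only Invalid Operation case FPAdd itself introduces is adding infinities of opposite sign; host floating point shows the same shape (the pseudocode additionally returns the default NaN and raises FPExc_InvalidOp):

inf = float("inf")
print(inf + (-inf))    # nan: opposite-signed infinities
print(inf + inf)       # inf: same-signed infinities are ordinary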

Library pseudocode for shared/functions/float/fpcompare/FPCompare

// FPCompare()
// ===========

bits(4) FPCompare(bits(N) op1, bits(N) op2, boolean signal_nans, FPCRType fpcr)
    assert N IN {16,32,64};
    (type1,sign1,value1) = FPUnpack(op1, fpcr);
    (type2,sign2,value2) = FPUnpack(op2, fpcr);
    bits(4) result;
    if type1 IN {FPType_SNaN, FPType_QNaN} || type2 IN {FPType_SNaN, FPType_QNaN} then
        result = '0011';
        if type1 == FPType_SNaN || type2 == FPType_SNaN || signal_nans then
            FPProcessException(FPExc_InvalidOp, fpcr);
    else
        // All non-NaN cases can be evaluated on the values produced by FPUnpack()
        if value1 == value2 then
            result = '0110';
        elsif value1 < value2 then
            result = '1000';
        else  // value1 > value2
            result = '0010';
        FPProcessDenorms(type1, type2, N, fpcr);
    return result;
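The four-bit result is the NZCV condition-flag pattern. A Python model of the mapping (fpcompare is an invented helper; it ignores the signalling/quiet distinction and exception side effects):

def fpcompare(v1: float, v2: float) -> str:
    if v1 != v1 or v2 != v2:      # any NaN operand -> unordered
        return "0011"
    if v1 == v2:
        return "0110"
    return "1000" if v1 < v2 else "0010"

print(fpcompare(1.0, float("nan")))    # '0011'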

Library pseudocode for shared/functions/float/fpcompareeq/FPCompareEQ

// FPCompareEQ()
// =============

boolean FPCompareEQ(bits(N) op1, bits(N) op2, FPCRType fpcr)
    assert N IN {16,32,64};
    (type1,sign1,value1) = FPUnpack(op1, fpcr);
    (type2,sign2,value2) = FPUnpack(op2, fpcr);
    boolean result;
    if type1 IN {FPType_SNaN, FPType_QNaN} || type2 IN {FPType_SNaN, FPType_QNaN} then
        result = FALSE;
        if type1 == FPType_SNaN || type2 == FPType_SNaN then
            FPProcessException(FPExc_InvalidOp, fpcr);
    else
        // All non-NaN cases can be evaluated on the values produced by FPUnpack()
        result = (value1 == value2);
        FPProcessDenorms(type1, type2, N, fpcr);
    return result;

Library pseudocode for shared/functions/float/fpcomparege/FPCompareGE

// FPCompareGE()
// =============

boolean FPCompareGE(bits(N) op1, bits(N) op2, FPCRType fpcr)
    assert N IN {16,32,64};
    (type1,sign1,value1) = FPUnpack(op1, fpcr);
    (type2,sign2,value2) = FPUnpack(op2, fpcr);
    boolean result;
    if type1 IN {FPType_SNaN, FPType_QNaN} || type2 IN {FPType_SNaN, FPType_QNaN} then
        result = FALSE;
        FPProcessException(FPExc_InvalidOp, fpcr);
    else
        // All non-NaN cases can be evaluated on the values produced by FPUnpack()
        result = (value1 >= value2);
        FPProcessDenorms(type1, type2, N, fpcr);
    return result;

Library pseudocode for shared/functions/float/fpcomparegt/FPCompareGT

// FPCompareGT()
// =============

boolean FPCompareGT(bits(N) op1, bits(N) op2, FPCRType fpcr)
    assert N IN {16,32,64};
    (type1,sign1,value1) = FPUnpack(op1, fpcr);
    (type2,sign2,value2) = FPUnpack(op2, fpcr);
    boolean result;
    if type1 IN {FPType_SNaN, FPType_QNaN} || type2 IN {FPType_SNaN, FPType_QNaN} then
        result = FALSE;
        FPProcessException(FPExc_InvalidOp, fpcr);
    else
        // All non-NaN cases can be evaluated on the values produced by FPUnpack()
        result = (value1 > value2);
        FPProcessDenorms(type1, type2, N, fpcr);
    return result;

Library pseudocode for shared/functions/float/fpconvert/FPConvert

// FPConvert()
// ===========
// Convert floating point OP with N-bit precision to M-bit precision,
// with rounding controlled by ROUNDING.
// This is used by the FP-to-FP conversion instructions and so for
// half-precision data ignores FZ16, but observes AHP.

bits(M) FPConvert(bits(N) op, FPCRType fpcr, FPRounding rounding)
    assert M IN {16,32,64};
    assert N IN {16,32,64};
    bits(M) result;

    // Unpack floating-point operand optionally with flush-to-zero.
    (fptype,sign,value) = FPUnpackCV(op, fpcr);

    alt_hp = (M == 16) && (fpcr.AHP == '1');

    if fptype == FPType_SNaN || fptype == FPType_QNaN then
        if alt_hp then
            result = FPZero(sign);
        elsif fpcr.DN == '1' then
            result = FPDefaultNaN(fpcr);
        else
            result = FPConvertNaN(op);
        if fptype == FPType_SNaN || alt_hp then
            FPProcessException(FPExc_InvalidOp, fpcr);
    elsif fptype == FPType_Infinity then
        if alt_hp then
            result = sign:Ones(M-1);
            FPProcessException(FPExc_InvalidOp, fpcr);
        else
            result = FPInfinity(sign);
    elsif fptype == FPType_Zero then
        result = FPZero(sign);
    else
        result = FPRoundCV(value, fpcr, rounding);
        FPProcessDenorm(fptype, N, fpcr);
    return result;

// FPConvert()
// ===========

bits(M) FPConvert(bits(N) op, FPCRType fpcr)
    return FPConvert(op, fpcr, FPRoundingMode(fpcr));

Library pseudocode for shared/functions/float/fpconvertnan/FPConvertNaN

// FPConvertNaN()
// ==============
// Converts a NaN of one floating-point type to another

bits(M) FPConvertNaN(bits(N) op)
    assert N IN {16,32,64};
    assert M IN {16,32,64};
    bits(M) result;
    bits(51) frac;
    sign = op<N-1>;

    // Unpack payload from input NaN
    case N of
        when 64 frac = op<50:0>;
        when 32 frac = op<21:0>:Zeros(29);
        when 16 frac = op<8:0>:Zeros(42);

    // Repack payload into output NaN, while
    // converting an SNaN to a QNaN.
    case M of
        when 64 result = sign:Ones(M-52):frac;
        when 32 result = sign:Ones(M-23):frac<50:29>;
        when 16 result = sign:Ones(M-10):frac<50:42>;

    return result;
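For example, widening a single-precision NaN to double precision keeps the sign and payload and sets the quiet bit. A Python sketch of that one case (fpconvertnan_32_to_64 is an invented name):

def fpconvertnan_32_to_64(op: int) -> int:
    sign = (op >> 31) & 1
    frac = (op & 0x3FFFFF) << 29                 # op<21:0> lands in frac<50:29>
    return (sign << 63) | (0xFFF << 51) | frac   # Ones(12) = exponent + quiet bit

print(hex(fpconvertnan_32_to_64(0x7F800001)))    # 0x7ff8000020000000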

Library pseudocode for shared/functions/float/fpcrtype/FPCRType

type FPCRType;

Library pseudocode for shared/functions/float/fpdecoderm/FPDecodeRM

// FPDecodeRM()
// ============
// Decode most common AArch32 floating-point rounding encoding.

FPRounding FPDecodeRM(bits(2) rm)
    FPRounding result;
    case rm of
        when '00' result = FPRounding_TIEAWAY; // A
        when '01' result = FPRounding_TIEEVEN; // N
        when '10' result = FPRounding_POSINF;  // P
        when '11' result = FPRounding_NEGINF;  // M
    return result;

Library pseudocode for shared/functions/float/fpdecoderounding/FPDecodeRounding

// FPDecodeRounding()
// ==================
// Decode floating-point rounding mode and common AArch64 encoding.

FPRounding FPDecodeRounding(bits(2) rmode)
    case rmode of
        when '00' return FPRounding_TIEEVEN; // N
        when '01' return FPRounding_POSINF;  // P
        when '10' return FPRounding_NEGINF;  // M
        when '11' return FPRounding_ZERO;    // Z
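Note that this AArch64 encoding disagrees with the AArch32 'rm' encoding above on every value; a quick Python cross-check (table contents transcribed from the two functions):

FPDECODE_ROUNDING = {0b00: "TIEEVEN", 0b01: "POSINF",
                     0b10: "NEGINF",  0b11: "ZERO"}    # AArch64 FPDecodeRounding
FPDECODE_RM       = {0b00: "TIEAWAY", 0b01: "TIEEVEN",
                     0b10: "POSINF",  0b11: "NEGINF"}  # AArch32 FPDecodeRM
assert all(FPDECODE_ROUNDING[k] != FPDECODE_RM[k] for k in range(4))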

Library pseudocode for shared/functions/float/fpdefaultnan/FPDefaultNaN

// FPDefaultNaN()
// ==============

bits(N) FPDefaultNaN()
    FPCRType fpcr = FPCR[];
    return FPDefaultNaN(fpcr);

bits(N) FPDefaultNaN(FPCRType fpcr)
    assert N IN {16,32,64};
    constant integer E = (if N == 16 then 5 elsif N == 32 then 8 else 11);
    constant integer F = N - (E + 1);
    bit sign = if HaveAltFP() && !UsingAArch32() then fpcr.AH else '0';
    bits(E) exp  = Ones(E);
    bits(F) frac = '1':Zeros(F-1);
    return sign : exp : frac;
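With FPCR.AH == '0' this yields the familiar quiet-NaN patterns; a Python check (fpdefaultnan is an invented helper mirroring the E/F arithmetic above):

def fpdefaultnan(n: int) -> int:
    e = {16: 5, 32: 8, 64: 11}[n]
    f = n - (e + 1)
    return (((1 << e) - 1) << f) | (1 << (f - 1))   # '0' : Ones(E) : '1':Zeros(F-1)

assert fpdefaultnan(16) == 0x7E00
assert fpdefaultnan(32) == 0x7FC00000
assert fpdefaultnan(64) == 0x7FF8000000000000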

Library pseudocode for shared/functions/float/fpdiv/FPDiv

// FPDiv()
// =======

bits(N) FPDiv(bits(N) op1, bits(N) op2, FPCRType fpcr)
    assert N IN {16,32,64};
    (type1,sign1,value1) = FPUnpack(op1, fpcr);
    (type2,sign2,value2) = FPUnpack(op2, fpcr);
    (done,result) = FPProcessNaNs(type1, type2, op1, op2, fpcr);
    if !done then
        inf1  = type1 == FPType_Infinity;
        inf2  = type2 == FPType_Infinity;
        zero1 = type1 == FPType_Zero;
        zero2 = type2 == FPType_Zero;
        if (inf1 && inf2) || (zero1 && zero2) then
            result = FPDefaultNaN(fpcr);
            FPProcessException(FPExc_InvalidOp, fpcr);
        elsif inf1 || zero2 then
            result = FPInfinity(sign1 EOR sign2);
            if !inf1 then FPProcessException(FPExc_DivideByZero, fpcr);
        elsif zero1 || inf2 then
            result = FPZero(sign1 EOR sign2);
        else
            result = FPRound(value1/value2, fpcr);
        if !zero2 then
            FPProcessDenorms(type1, type2, N, fpcr);
    return result;

Library pseudocode for shared/functions/float/fpexc/FPExc

enumeration FPExc {FPExc_InvalidOp, FPExc_DivideByZero, FPExc_Overflow,
                   FPExc_Underflow, FPExc_Inexact, FPExc_InputDenorm};

Library pseudocode for shared/functions/float/fpinfinity/FPInfinity

// FPInfinity()
// ============

bits(N) FPInfinity(bit sign)
    assert N IN {16,32,64};
    constant integer E = (if N == 16 then 5 elsif N == 32 then 8 else 11);
    constant integer F = N - (E + 1);
    bits(E) exp  = Ones(E);
    bits(F) frac = Zeros(F);
    return sign : exp : frac;

Library pseudocode for shared/functions/float/fpmax/FPMax

// FPMax()
// =======

bits(N) FPMax(bits(N) op1, bits(N) op2, FPCRType fpcr)
    boolean altfp = HaveAltFP() && !UsingAArch32() && fpcr.AH == '1';
    return FPMax(op1, op2, fpcr, altfp);

// FPMax()
// =======
// Compare two inputs and return the larger value after rounding. The
// 'fpcr' argument supplies the FPCR control bits and 'altfp' determines
// if the function should use alternative floating-point behaviour.

bits(N) FPMax(bits(N) op1, bits(N) op2, FPCRType fpcr_in, boolean altfp)
    assert N IN {16,32,64};
    FPCRType fpcr = fpcr_in;
    (type1,sign1,value1) = FPUnpack(op1, fpcr);
    (type2,sign2,value2) = FPUnpack(op2, fpcr);

    if (altfp && type1 == FPType_Zero && type2 == FPType_Zero &&
          ((sign1 == '0' && sign2 == '1') || (sign1 == '1' && sign2 == '0'))) then
        return FPZero(sign2);

    (done,result) = FPProcessNaNs(type1, type2, op1, op2, fpcr, altfp, TRUE);
    if !done then
        FPType fptype;
        bit sign;
        real value;
        if value1 > value2 then
            (fptype,sign,value) = (type1,sign1,value1);
        else
            (fptype,sign,value) = (type2,sign2,value2);
        if fptype == FPType_Infinity then
            result = FPInfinity(sign);
        elsif fptype == FPType_Zero then
            sign = sign1 AND sign2;    // Use most positive sign
            result = FPZero(sign);
        else
            // The use of FPRound() covers the case where there is a trapped underflow exception
            // for a denormalized number even though the result is exact.
            rounding = FPRoundingMode(fpcr);
            if altfp then    // Denormal output is not flushed to zero
                fpcr.FZ = '0';
                fpcr.FZ16 = '0';
            result = FPRound(value, fpcr, rounding, TRUE);
        FPProcessDenorms(type1, type2, N, fpcr);
    return result;

Library pseudocode for shared/functions/float/fpmaxnormal/FPMaxNormal

// FPMaxNormal()
// =============

bits(N) FPMaxNormal(bit sign)
    assert N IN {16,32,64};
    constant integer E = (if N == 16 then 5 elsif N == 32 then 8 else 11);
    constant integer F = N - (E + 1);
    exp  = Ones(E-1):'0';
    frac = Ones(F);
    return sign : exp : frac;

Library pseudocode for shared/functions/float/fpmaxnum/FPMaxNum

// FPMaxNum()
// ==========

bits(N) FPMaxNum(bits(N) op1_in, bits(N) op2_in, FPCRType fpcr)
    assert N IN {16,32,64};
    bits(N) op1 = op1_in;
    bits(N) op2 = op2_in;
    (type1,-,-) = FPUnpack(op1, fpcr);
    (type2,-,-) = FPUnpack(op2, fpcr);

    boolean type1_nan = type1 IN {FPType_QNaN, FPType_SNaN};
    boolean type2_nan = type2 IN {FPType_QNaN, FPType_SNaN};
    boolean altfp = HaveAltFP() && !UsingAArch32() && fpcr.AH == '1';

    if !(altfp && type1_nan && type2_nan) then
        // Treat a single quiet-NaN as -Infinity.
        if type1 == FPType_QNaN && type2 != FPType_QNaN then
            op1 = FPInfinity('1');
        elsif type1 != FPType_QNaN && type2 == FPType_QNaN then
            op2 = FPInfinity('1');

    altfmaxfmin = FALSE;    // Restrict use of FMAX/FMIN NaN propagation rules
    result = FPMax(op1, op2, fpcr, altfmaxfmin);

    return result;
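The -Infinity substitution is what makes a lone quiet NaN lose to any number. A Python model of that behaviour with altfp disabled (fpmaxnum is an invented name; a double-NaN input simply propagates a NaN):

def fpmaxnum(a: float, b: float) -> float:
    if a != a and b == b:
        a = float("-inf")      # treat a single quiet NaN as -Infinity
    elif b != b and a == a:
        b = float("-inf")
    return max(a, b)

print(fpmaxnum(float("nan"), 3.0))    # 3.0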

Library pseudocode for shared/functions/float/fpmerge/IsMerging

// IsMerging()
// ===========
// Returns TRUE if the output elements other than the lowest are taken from
// the destination register.

boolean IsMerging(FPCRType fpcr)
    boolean merge = HaveAltFP() && !UsingAArch32() && fpcr.NEP == '1';
    return merge;

Library pseudocode for shared/functions/float/fpmin/FPMin

// FPMin()
// =======

bits(N) FPMin(bits(N) op1, bits(N) op2, FPCRType fpcr)
    boolean altfp = HaveAltFP() && !UsingAArch32() && fpcr.AH == '1';
    return FPMin(op1, op2, fpcr, altfp);

// FPMin()
// =======
// Compare two operands and return the smaller operand after rounding. The
// 'fpcr' argument supplies the FPCR control bits and 'altfp' determines
// if the function should use alternative behaviour.

bits(N) FPMin(bits(N) op1, bits(N) op2, FPCRType fpcr_in, boolean altfp)
    assert N IN {16,32,64};
    FPCRType fpcr = fpcr_in;
    (type1,sign1,value1) = FPUnpack(op1, fpcr);
    (type2,sign2,value2) = FPUnpack(op2, fpcr);

    if (altfp && type1 == FPType_Zero && type2 == FPType_Zero &&
          ((sign1 == '0' && sign2 == '1') || (sign1 == '1' && sign2 == '0'))) then
        return FPZero(sign2);

    (done,result) = FPProcessNaNs(type1, type2, op1, op2, fpcr, altfp, TRUE);
    if !done then
        FPType fptype;
        bit sign;
        real value;
        FPRounding rounding;
        if value1 < value2 then
            (fptype,sign,value) = (type1,sign1,value1);
        else
            (fptype,sign,value) = (type2,sign2,value2);
        if fptype == FPType_Infinity then
            result = FPInfinity(sign);
        elsif fptype == FPType_Zero then
            sign = sign1 OR sign2;    // Use most negative sign
            result = FPZero(sign);
        else
            // The use of FPRound() covers the case where there is a trapped underflow exception
            // for a denormalized number even though the result is exact.
            rounding = FPRoundingMode(fpcr);
            if altfp then    // Denormal output is not flushed to zero
                fpcr.FZ = '0';
                fpcr.FZ16 = '0';
            result = FPRound(value, fpcr, rounding, TRUE);
        FPProcessDenorms(type1, type2, N, fpcr);
    return result;

Library pseudocode for shared/functions/float/fpminnum/FPMinNum

// FPMinNum()
// ==========

bits(N) FPMinNum(bits(N) op1_in, bits(N) op2_in, FPCRType fpcr)
    assert N IN {16,32,64};
    bits(N) op1 = op1_in;
    bits(N) op2 = op2_in;
    (type1,-,-) = FPUnpack(op1, fpcr);
    (type2,-,-) = FPUnpack(op2, fpcr);

    boolean type1_nan = type1 IN {FPType_QNaN, FPType_SNaN};
    boolean type2_nan = type2 IN {FPType_QNaN, FPType_SNaN};
    boolean altfp = HaveAltFP() && !UsingAArch32() && fpcr.AH == '1';

    if !(altfp && type1_nan && type2_nan) then
        // Treat a single quiet-NaN as +Infinity.
        if type1 == FPType_QNaN && type2 != FPType_QNaN then
            op1 = FPInfinity('0');
        elsif type1 != FPType_QNaN && type2 == FPType_QNaN then
            op2 = FPInfinity('0');

    altfmaxfmin = FALSE;    // Restrict use of FMAX/FMIN NaN propagation rules
    result = FPMin(op1, op2, fpcr, altfmaxfmin);

    return result;

Library pseudocode for shared/functions/float/fpmul/FPMul

// FPMul()
// =======

bits(N) FPMul(bits(N) op1, bits(N) op2, FPCRType fpcr)
    assert N IN {16,32,64};
    (type1,sign1,value1) = FPUnpack(op1, fpcr);
    (type2,sign2,value2) = FPUnpack(op2, fpcr);
    (done,result) = FPProcessNaNs(type1, type2, op1, op2, fpcr);
    if !done then
        inf1  = (type1 == FPType_Infinity);
        inf2  = (type2 == FPType_Infinity);
        zero1 = (type1 == FPType_Zero);
        zero2 = (type2 == FPType_Zero);
        if (inf1 && zero2) || (zero1 && inf2) then
            result = FPDefaultNaN(fpcr);
            FPProcessException(FPExc_InvalidOp, fpcr);
        elsif inf1 || inf2 then
            result = FPInfinity(sign1 EOR sign2);
        elsif zero1 || zero2 then
            result = FPZero(sign1 EOR sign2);
        else
            result = FPRound(value1*value2, fpcr);
        FPProcessDenorms(type1, type2, N, fpcr);
    return result;

Library pseudocode for shared/functions/float/fpmuladd/FPMulAdd

// FPMulAdd()
// ==========

bits(N) FPMulAdd(bits(N) addend, bits(N) op1, bits(N) op2, FPCRType fpcr)
    boolean fpexc = TRUE;    // Generate floating-point exceptions
    return FPMulAdd(addend, op1, op2, fpcr, fpexc);

// FPMulAdd()
// ==========
//
// Calculates addend + op1*op2 with a single rounding. The 'fpcr' argument
// supplies the FPCR control bits, and 'fpexc' controls the generation of
// floating-point exceptions.

bits(N) FPMulAdd(bits(N) addend, bits(N) op1, bits(N) op2, FPCRType fpcr, boolean fpexc)
    assert N IN {16,32,64};
    (typeA,signA,valueA) = FPUnpack(addend, fpcr, fpexc);
    (type1,sign1,value1) = FPUnpack(op1, fpcr, fpexc);
    (type2,sign2,value2) = FPUnpack(op2, fpcr, fpexc);
    rounding = FPRoundingMode(fpcr);
    inf1  = (type1 == FPType_Infinity);  zero1 = (type1 == FPType_Zero);
    inf2  = (type2 == FPType_Infinity);  zero2 = (type2 == FPType_Zero);
    (done,result) = FPProcessNaNs3(typeA, type1, type2, addend, op1, op2, fpcr, fpexc);

    if !(HaveAltFP() && !UsingAArch32() && fpcr.AH == '1') then
        if typeA == FPType_QNaN && ((inf1 && zero2) || (zero1 && inf2)) then
            result = FPDefaultNaN(fpcr);
            if fpexc then FPProcessException(FPExc_InvalidOp, fpcr);

    if !done then
        infA = (typeA == FPType_Infinity);  zeroA = (typeA == FPType_Zero);

        // Determine sign and type product will have if it does not cause an
        // Invalid Operation.
        signP = sign1 EOR sign2;
        infP  = inf1 || inf2;
        zeroP = zero1 || zero2;

        // Non SNaN-generated Invalid Operation cases are multiplies of zero
        // by infinity and additions of opposite-signed infinities.
        invalidop = (inf1 && zero2) || (zero1 && inf2) || (infA && infP && signA != signP);
        if invalidop then
            result = FPDefaultNaN(fpcr);
            if fpexc then FPProcessException(FPExc_InvalidOp, fpcr);

        // Other cases involving infinities produce an infinity of the same sign.
        elsif (infA && signA == '0') || (infP && signP == '0') then
            result = FPInfinity('0');
        elsif (infA && signA == '1') || (infP && signP == '1') then
            result = FPInfinity('1');

        // Cases where the result is exactly zero and its sign is not determined by the
        // rounding mode are additions of same-signed zeros.
        elsif zeroA && zeroP && signA == signP then
            result = FPZero(signA);

        // Otherwise calculate numerical result and round it.
        else
            result_value = valueA + (value1 * value2);
            if result_value == 0.0 then  // Sign of exact zero result depends on rounding mode
                result_sign = if rounding == FPRounding_NEGINF then '1' else '0';
                result = FPZero(result_sign);
            else
                result = FPRound(result_value, fpcr, rounding, fpexc);

        if !invalidop && fpexc then
            FPProcessDenorms3(typeA, type1, type2, N, fpcr);
    return result;
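The "single rounding" is observable: computing the product exactly before the add can give a nonzero result where rounding the product first gives zero. A Python demonstration using exact rationals as the fused path:

from fractions import Fraction

p = 1.0 + 2.0**-30
s = -(1.0 + 2.0**-29)
fused = float(Fraction(p) * Fraction(p) + Fraction(s))   # one rounding at the end
two_step = p * p + s                                     # product rounded first
print(fused, two_step)    # 2**-60 versus 0.0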

Library pseudocode for shared/functions/float/fpmuladdh/FPMulAddH

// FPMulAddH()
// ===========
// Calculates addend + op1*op2.

bits(N) FPMulAddH(bits(N) addend, bits(N DIV 2) op1, bits(N DIV 2) op2, FPCRType fpcr)
    boolean fpexc = TRUE;    // Generate floating-point exceptions
    return FPMulAddH(addend, op1, op2, fpcr, fpexc);

// FPMulAddH()
// ===========
// Calculates addend + op1*op2.

bits(N) FPMulAddH(bits(N) addend, bits(N DIV 2) op1, bits(N DIV 2) op2, FPCRType fpcr, boolean fpexc)
    assert N == 32;
    rounding = FPRoundingMode(fpcr);
    (typeA,signA,valueA) = FPUnpack(addend, fpcr, fpexc);
    (type1,sign1,value1) = FPUnpack(op1, fpcr, fpexc);
    (type2,sign2,value2) = FPUnpack(op2, fpcr, fpexc);
    inf1  = (type1 == FPType_Infinity);  zero1 = (type1 == FPType_Zero);
    inf2  = (type2 == FPType_Infinity);  zero2 = (type2 == FPType_Zero);
    (done,result) = FPProcessNaNs3H(typeA, type1, type2, addend, op1, op2, fpcr, fpexc);
    if !(HaveAltFP() && !UsingAArch32() && fpcr.AH == '1') then
        if typeA == FPType_QNaN && ((inf1 && zero2) || (zero1 && inf2)) then
            result = FPDefaultNaN(fpcr);
            if fpexc then FPProcessException(FPExc_InvalidOp, fpcr);
    if !done then
        infA = (typeA == FPType_Infinity);  zeroA = (typeA == FPType_Zero);
        // Determine sign and type product will have if it does not cause an
        // Invalid Operation.
        signP = sign1 EOR sign2;
        infP  = inf1 || inf2;
        zeroP = zero1 || zero2;
        // Non SNaN-generated Invalid Operation cases are multiplies of zero by infinity and
        // additions of opposite-signed infinities.
        invalidop = (inf1 && zero2) || (zero1 && inf2) || (infA && infP && signA != signP);
        if invalidop then
            result = FPDefaultNaN(fpcr);
            if fpexc then FPProcessException(FPExc_InvalidOp, fpcr);
        // Other cases involving infinities produce an infinity of the same sign.
        elsif (infA && signA == '0') || (infP && signP == '0') then
            result = FPInfinity('0');
        elsif (infA && signA == '1') || (infP && signP == '1') then
            result = FPInfinity('1');
        // Cases where the result is exactly zero and its sign is not determined by the
        // rounding mode are additions of same-signed zeros.
        elsif zeroA && zeroP && signA == signP then
            result = FPZero(signA);
        // Otherwise calculate numerical result and round it.
        else
            result_value = valueA + (value1 * value2);
            if result_value == 0.0 then  // Sign of exact zero result depends on rounding mode
                result_sign = if rounding == FPRounding_NEGINF then '1' else '0';
                result = FPZero(result_sign);
            else
                result = FPRound(result_value, fpcr, rounding, fpexc);
        if !invalidop && fpexc then
            FPProcessDenorm(typeA, N, fpcr);
    return result;

Library pseudocode for shared/functions/float/fpmuladdh/FPProcessNaNs3H

// FPProcessNaNs3H()
// =================

(boolean, bits(N)) FPProcessNaNs3H(FPType type1, FPType type2, FPType type3,
                                   bits(N) op1, bits(N DIV 2) op2, bits(N DIV 2) op3,
                                   FPCRType fpcr, boolean fpexc)
    assert N IN {32,64};
    bits(N) result;
    FPType type_nan;
    // When TRUE, use alternative NaN propagation rules.
    boolean altfp = HaveAltFP() && !UsingAArch32() && fpcr.AH == '1';
    boolean op1_nan = type1 IN {FPType_SNaN, FPType_QNaN};
    boolean op2_nan = type2 IN {FPType_SNaN, FPType_QNaN};
    boolean op3_nan = type3 IN {FPType_SNaN, FPType_QNaN};
    if altfp then
        if (type1 == FPType_SNaN || type2 == FPType_SNaN || type3 == FPType_SNaN) then
            type_nan = FPType_SNaN;
        else
            type_nan = FPType_QNaN;

    boolean done;
    if altfp && op1_nan && op2_nan && op3_nan then
        // <n> register NaN selected
        done = TRUE;  result = FPConvertNaN(FPProcessNaN(type_nan, op2, fpcr, fpexc));
    elsif altfp && op2_nan && (op1_nan || op3_nan) then
        // <n> register NaN selected
        done = TRUE;  result = FPConvertNaN(FPProcessNaN(type_nan, op2, fpcr, fpexc));
    elsif altfp && op3_nan && op1_nan then
        // <m> register NaN selected
        done = TRUE;  result = FPConvertNaN(FPProcessNaN(type_nan, op3, fpcr, fpexc));
    elsif type1 == FPType_SNaN then
        done = TRUE;  result = FPProcessNaN(type1, op1, fpcr, fpexc);
    elsif type2 == FPType_SNaN then
        done = TRUE;  result = FPConvertNaN(FPProcessNaN(type2, op2, fpcr, fpexc));
    elsif type3 == FPType_SNaN then
        done = TRUE;  result = FPConvertNaN(FPProcessNaN(type3, op3, fpcr, fpexc));
    elsif type1 == FPType_QNaN then
        done = TRUE;  result = FPProcessNaN(type1, op1, fpcr, fpexc);
    elsif type2 == FPType_QNaN then
        done = TRUE;  result = FPConvertNaN(FPProcessNaN(type2, op2, fpcr, fpexc));
    elsif type3 == FPType_QNaN then
        done = TRUE;  result = FPConvertNaN(FPProcessNaN(type3, op3, fpcr, fpexc));
    else
        done = FALSE;  result = Zeros();  // 'Don't care' result
    return (done, result);

Library pseudocode for shared/functions/float/fpmulx/FPMulX

// FPMulX()
// ========

bits(N) FPMulX(bits(N) op1, bits(N) op2, FPCRType fpcr)
    assert N IN {16,32,64};
    bits(N) result;
    boolean done;
    (type1,sign1,value1) = FPUnpack(op1, fpcr);
    (type2,sign2,value2) = FPUnpack(op2, fpcr);
    (done,result) = FPProcessNaNs(type1, type2, op1, op2, fpcr);
    if !done then
        inf1  = (type1 == FPType_Infinity);
        inf2  = (type2 == FPType_Infinity);
        zero1 = (type1 == FPType_Zero);
        zero2 = (type2 == FPType_Zero);
        if (inf1 && zero2) || (zero1 && inf2) then
            result = FPTwo(sign1 EOR sign2);
        elsif inf1 || inf2 then
            result = FPInfinity(sign1 EOR sign2);
        elsif zero1 || zero2 then
            result = FPZero(sign1 EOR sign2);
        else
            result = FPRound(value1*value2, fpcr);
        FPProcessDenorms(type1, type2, N, fpcr);
    return result;
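FPMulX differs from FPMul only in the zero-times-infinity case, which returns ±2.0 instead of the default NaN. A Python model (fpmulx is an invented helper; exception behaviour is ignored):

import math

def fpmulx(a: float, b: float) -> float:
    if (a == 0.0 and math.isinf(b)) or (math.isinf(a) and b == 0.0):
        negative = math.copysign(1.0, a) * math.copysign(1.0, b) < 0
        return -2.0 if negative else 2.0     # sign1 EOR sign2
    return a * b

print(fpmulx(0.0, float("-inf")))            # -2.0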

Library pseudocode for shared/functions/float/fpneg/FPNeg

// FPNeg()
// =======

bits(N) FPNeg(bits(N) op)
    assert N IN {16,32,64};
    if !UsingAArch32() && HaveAltFP() then
        FPCRType fpcr = FPCR[];
        if fpcr.AH == '1' then
            (fptype, -, -) = FPUnpack(op, fpcr, FALSE);
            if fptype IN {FPType_SNaN, FPType_QNaN} then
                return op;    // When fpcr.AH=1, sign of NaN has no consequence
    return NOT(op<N-1>) : op<N-2:0>;

Library pseudocode for shared/functions/float/fponepointfive/FPOnePointFive

// FPOnePointFive()
// ================

bits(N) FPOnePointFive(bit sign)
    assert N IN {16,32,64};
    constant integer E = (if N == 16 then 5 elsif N == 32 then 8 else 11);
    constant integer F = N - (E + 1);
    exp  = '0':Ones(E-1);
    frac = '1':Zeros(F-1);
    result = sign : exp : frac;
    return result;

Library pseudocode for shared/functions/float/fpprocessdenorms/FPProcessDenorm

// FPProcessDenorm()
// =================
// Handles denormal input in case of single-precision or double-precision
// when using alternative floating-point mode.

FPProcessDenorm(FPType fptype, integer N, FPCRType fpcr)
    boolean altfp = HaveAltFP() && !UsingAArch32() && fpcr.AH == '1';
    if altfp && N != 16 && fptype == FPType_Denormal then
        FPProcessException(FPExc_InputDenorm, fpcr);

Library pseudocode for shared/functions/float/fpprocessdenorms/FPProcessDenorms

// FPProcessDenorms()
// ==================
// Handles denormal input in case of single-precision or double-precision
// when using alternative floating-point mode.

FPProcessDenorms(FPType type1, FPType type2, integer N, FPCRType fpcr)
    boolean altfp = HaveAltFP() && !UsingAArch32() && fpcr.AH == '1';
    if altfp && N != 16 && (type1 == FPType_Denormal || type2 == FPType_Denormal) then
        FPProcessException(FPExc_InputDenorm, fpcr);

Library pseudocode for shared/functions/float/fpprocessdenorms/FPProcessDenorms3

// FPProcessDenorms3()
// ===================
// Handles denormal input in case of single-precision or double-precision
// when using alternative floating-point mode.

FPProcessDenorms3(FPType type1, FPType type2, FPType type3, integer N, FPCRType fpcr)
    boolean altfp = HaveAltFP() && !UsingAArch32() && fpcr.AH == '1';
    if altfp && N != 16 && (type1 == FPType_Denormal || type2 == FPType_Denormal ||
                            type3 == FPType_Denormal) then
        FPProcessException(FPExc_InputDenorm, fpcr);

Library pseudocode for shared/functions/float/fpprocessdenorms/FPProcessDenorms4

// FPProcessDenorms4()
// ===================
// Handles denormal input in case of single-precision or double-precision
// when using alternative floating-point mode.

FPProcessDenorms4(FPType type1, FPType type2, FPType type3, FPType type4, integer N, FPCRType fpcr)
    boolean altfp = HaveAltFP() && !UsingAArch32() && fpcr.AH == '1';
    if altfp && N != 16 && (type1 == FPType_Denormal || type2 == FPType_Denormal ||
                            type3 == FPType_Denormal || type4 == FPType_Denormal) then
        FPProcessException(FPExc_InputDenorm, fpcr);

Library pseudocode for shared/functions/float/fpprocessexception/FPProcessException

// FPProcessException()
// ====================
//
// The 'fpcr' argument supplies FPCR control bits. Status information is
// updated directly in the FPSR where appropriate.

FPProcessException(FPExc exception, FPCRType fpcr)
    integer cumul;
    // Determine the cumulative exception bit number
    case exception of
        when FPExc_InvalidOp     cumul = 0;
        when FPExc_DivideByZero  cumul = 1;
        when FPExc_Overflow      cumul = 2;
        when FPExc_Underflow     cumul = 3;
        when FPExc_Inexact       cumul = 4;
        when FPExc_InputDenorm   cumul = 7;
    enable = cumul + 8;
    if fpcr<enable> == '1' then
        // Trapping of the exception enabled.
        // It is IMPLEMENTATION DEFINED whether the enable bit may be set at all,
        // and if so then how exceptions and in what order that they may be
        // accumulated before calling FPTrappedException().
        bits(8) accumulated_exceptions = GetAccumulatedFPExceptions();
        accumulated_exceptions<cumul> = '1';
        if boolean IMPLEMENTATION_DEFINED "Process floating-point exception" then
            if UsingAArch32() then
                AArch32.FPTrappedException(accumulated_exceptions);
            else
                is_ase = IsASEInstruction();
                AArch64.FPTrappedException(is_ase, accumulated_exceptions);
        else
            // The exceptions generated by this instruction are accumulated by the PE and
            // FPTrappedException is called later during its execution, before the next
            // instruction is executed. This field is cleared at the start of each FP instruction.
            SetAccumulatedFPExceptions(accumulated_exceptions);
    elsif UsingAArch32() then
        // Set the cumulative exception bit
        FPSCR<cumul> = '1';
    else
        // Set the cumulative exception bit
        FPSR<cumul> = '1';
    return;
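The cumulative status bit and its trap-enable bit are a fixed distance apart. Transcribing the mapping in Python:

FP_EXC_BITS = {"InvalidOp": 0, "DivideByZero": 1, "Overflow": 2,
               "Underflow": 3, "Inexact": 4, "InputDenorm": 7}
for name, cumul in FP_EXC_BITS.items():
    print(f"FPExc_{name}: status bit {cumul}, trap enable bit {cumul + 8}")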

Library pseudocode for shared/functions/float/fpprocessnan/FPProcessNaN

// FPProcessNaN()
// ==============

bits(N) FPProcessNaN(FPType fptype, bits(N) op, FPCRType fpcr)
    boolean fpexc = TRUE;    // Generate floating-point exceptions
    return FPProcessNaN(fptype, op, fpcr, fpexc);

// FPProcessNaN()
// ==============
// Handle NaN input operands, returning the operand or default NaN value
// if fpcr.DN is selected. The 'fpcr' argument supplies the FPCR control bits.
// The 'fpexc' argument controls the generation of exceptions, regardless of
// whether 'fptype' is a signalling NaN or a quiet NaN.

bits(N) FPProcessNaN(FPType fptype, bits(N) op, FPCRType fpcr, boolean fpexc)
    assert N IN {16,32,64};
    assert fptype IN {FPType_QNaN, FPType_SNaN};
    integer topfrac;

    case N of
        when 16 topfrac =  9;
        when 32 topfrac = 22;
        when 64 topfrac = 51;

    result = op;
    if fptype == FPType_SNaN then
        result<topfrac> = '1';
        if fpexc then FPProcessException(FPExc_InvalidOp, fpcr);
    if fpcr.DN == '1' then  // DefaultNaN requested
        result = FPDefaultNaN(fpcr);
    return result;
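For single precision, quieting means setting fraction bit 22. A Python sketch of that path (fpprocessnan32 is an invented helper; it assumes the input is already known to be a NaN and ignores the exception side effect):

def fpprocessnan32(op: int, dn: bool) -> int:
    result = op | (1 << 22)               # set topfrac: SNaN becomes QNaN
    return 0x7FC00000 if dn else result   # FPCR.DN selects the default NaN

print(hex(fpprocessnan32(0x7F800001, dn=False)))    # 0x7fc00001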

Library pseudocode for shared/functions/float/fpprocessnans/FPProcessNaNs

// FPProcessNaNs()
// ===============

(boolean, bits(N)) FPProcessNaNs(FPType type1, FPType type2, bits(N) op1, bits(N) op2,
                                 FPCRType fpcr)
    boolean altfmaxfmin = FALSE;    // Do not use altfp mode for FMIN, FMAX and variants
    boolean fpexc = TRUE;           // Generate floating-point exceptions
    return FPProcessNaNs(type1, type2, op1, op2, fpcr, altfmaxfmin, fpexc);

// FPProcessNaNs()
// ===============
//
// The boolean part of the return value says whether a NaN has been found and
// processed. The bits(N) part is only relevant if it has and supplies the
// result of the operation.
//
// The 'fpcr' argument supplies FPCR control bits and 'altfmaxfmin' controls
// alternative floating-point behaviour for FMAX, FMIN and variants. 'fpexc'
// controls the generation of floating-point exceptions. Status information
// is updated directly in the FPSR where appropriate.

(boolean, bits(N)) FPProcessNaNs(FPType type1, FPType type2, bits(N) op1, bits(N) op2,
                                 FPCRType fpcr, boolean altfmaxfmin, boolean fpexc)
    assert N IN {16,32,64};
    bit sign2;
    boolean done;
    bits(N) result;
    boolean altfp = HaveAltFP() && !UsingAArch32() && fpcr.AH == '1';
    boolean op1_nan = type1 IN {FPType_SNaN, FPType_QNaN};
    boolean op2_nan = type2 IN {FPType_SNaN, FPType_QNaN};
    boolean any_snan = type1 == FPType_SNaN || type2 == FPType_SNaN;
    FPType type_nan = if any_snan then FPType_SNaN else FPType_QNaN;

    if altfmaxfmin && (op1_nan || op2_nan) then
        FPProcessException(FPExc_InvalidOp, fpcr);
        done = TRUE;
        sign2 = op2<N-1>;
        result = if type2 == FPType_Zero then FPZero(sign2) else op2;
    elsif altfp && op1_nan && op2_nan then    // <n> register NaN selected
        done = TRUE;
        result = FPProcessNaN(type_nan, op1, fpcr, fpexc);
    elsif type1 == FPType_SNaN then
        done = TRUE;
        result = FPProcessNaN(type1, op1, fpcr, fpexc);
    elsif type2 == FPType_SNaN then
        done = TRUE;
        result = FPProcessNaN(type2, op2, fpcr, fpexc);
    elsif type1 == FPType_QNaN then
        done = TRUE;
        result = FPProcessNaN(type1, op1, fpcr, fpexc);
    elsif type2 == FPType_QNaN then
        done = TRUE;
        result = FPProcessNaN(type2, op2, fpcr, fpexc);
    else
        done = FALSE;
        result = Zeros();    // 'Don't care' result
    return (done, result);
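In the default (fpcr.AH == 0, non-FMAX/FMIN) case the selection order reduces to a fixed priority: SNaN in op1, SNaN in op2, QNaN in op1, QNaN in op2. A sketch of that priority chain, with operand types as plain strings for illustration:

# Sketch of the FPProcessNaNs() selection priority (fpcr.AH == 0 case).
def select_nan(type1, type2):
    """Return 'op1', 'op2', or None, per the elsif chain above."""
    types = {"op1": type1, "op2": type2}
    for fptype in ("SNaN", "QNaN"):
        for operand in ("op1", "op2"):
            if types[operand] == fptype:
                return operand
    return None

assert select_nan("QNaN", "SNaN") == "op2"   # an SNaN outranks any QNaN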

Library pseudocode for shared/functions/float/fpprocessnans3/FPProcessNaNs3

// FPProcessNaNs3()
// ================

(boolean, bits(N)) FPProcessNaNs3(FPType type1, FPType type2, FPType type3,
                                  bits(N) op1, bits(N) op2, bits(N) op3,
                                  FPCRType fpcr)
    boolean fpexc = TRUE;    // Generate floating-point exceptions
    return FPProcessNaNs3(type1, type2, type3, op1, op2, op3, fpcr, fpexc);

// FPProcessNaNs3()
// ================
// The boolean part of the return value says whether a NaN has been found and
// processed. The bits(N) part is only relevant if it has and supplies the
// result of the operation.
//
// The 'fpcr' argument supplies FPCR control bits and 'fpexc' controls the
// generation of floating-point exceptions. Status information is updated
// directly in the FPSR where appropriate.

(boolean, bits(N)) FPProcessNaNs3(FPType type1, FPType type2, FPType type3,
                                  bits(N) op1, bits(N) op2, bits(N) op3,
                                  FPCRType fpcr, boolean fpexc)
    assert N IN {16,32,64};
    bits(N) result;
    boolean op1_nan = type1 IN {FPType_SNaN, FPType_QNaN};
    boolean op2_nan = type2 IN {FPType_SNaN, FPType_QNaN};
    boolean op3_nan = type3 IN {FPType_SNaN, FPType_QNaN};
    boolean altfp = HaveAltFP() && !UsingAArch32() && fpcr.AH == '1';
    FPType type_nan;
    if altfp then
        if type1 == FPType_SNaN || type2 == FPType_SNaN || type3 == FPType_SNaN then
            type_nan = FPType_SNaN;
        else
            type_nan = FPType_QNaN;

    boolean done;
    if altfp && op1_nan && op2_nan && op3_nan then    // <n> register NaN selected
        done = TRUE;
        result = FPProcessNaN(type_nan, op2, fpcr, fpexc);
    elsif altfp && op2_nan && (op1_nan || op3_nan) then    // <n> register NaN selected
        done = TRUE;
        result = FPProcessNaN(type_nan, op2, fpcr, fpexc);
    elsif altfp && op3_nan && op1_nan then    // <m> register NaN selected
        done = TRUE;
        result = FPProcessNaN(type_nan, op3, fpcr, fpexc);
    elsif type1 == FPType_SNaN then
        done = TRUE;
        result = FPProcessNaN(type1, op1, fpcr, fpexc);
    elsif type2 == FPType_SNaN then
        done = TRUE;
        result = FPProcessNaN(type2, op2, fpcr, fpexc);
    elsif type3 == FPType_SNaN then
        done = TRUE;
        result = FPProcessNaN(type3, op3, fpcr, fpexc);
    elsif type1 == FPType_QNaN then
        done = TRUE;
        result = FPProcessNaN(type1, op1, fpcr, fpexc);
    elsif type2 == FPType_QNaN then
        done = TRUE;
        result = FPProcessNaN(type2, op2, fpcr, fpexc);
    elsif type3 == FPType_QNaN then
        done = TRUE;
        result = FPProcessNaN(type3, op3, fpcr, fpexc);
    else
        done = FALSE;
        result = Zeros();    // 'Don't care' result
    return (done, result);

Library pseudocode for shared/functions/float/fprecipestimate/FPRecipEstimate

// FPRecipEstimate()
// =================

bits(N) FPRecipEstimate(bits(N) operand, FPCRType fpcr_in)
    assert N IN {16,32,64};
    FPCRType fpcr = fpcr_in;
    bits(N) result;
    boolean overflow_to_inf;
    // When using alternative floating-point behaviour, do not generate
    // floating-point exceptions, flush denormal input and output to zero,
    // and use RNE rounding mode.
    boolean altfp = HaveAltFP() && !UsingAArch32() && fpcr.AH == '1';
    boolean fpexc = !altfp;
    if altfp then fpcr.<FIZ,FZ> = '11';
    if altfp then fpcr.RMode = '00';

    (fptype,sign,value) = FPUnpack(operand, fpcr, fpexc);
    FPRounding rounding = FPRoundingMode(fpcr);
    if fptype == FPType_SNaN || fptype == FPType_QNaN then
        result = FPProcessNaN(fptype, operand, fpcr, fpexc);
    elsif fptype == FPType_Infinity then
        result = FPZero(sign);
    elsif fptype == FPType_Zero then
        result = FPInfinity(sign);
        if fpexc then FPProcessException(FPExc_DivideByZero, fpcr);
    elsif (
            (N == 16 && Abs(value) < 2.0^-16)  ||
            (N == 32 && Abs(value) < 2.0^-128) ||
            (N == 64 && Abs(value) < 2.0^-1024)
          ) then
        case rounding of
            when FPRounding_TIEEVEN overflow_to_inf = TRUE;
            when FPRounding_POSINF  overflow_to_inf = (sign == '0');
            when FPRounding_NEGINF  overflow_to_inf = (sign == '1');
            when FPRounding_ZERO    overflow_to_inf = FALSE;
        result = if overflow_to_inf then FPInfinity(sign) else FPMaxNormal(sign);
        if fpexc then
            FPProcessException(FPExc_Overflow, fpcr);
            FPProcessException(FPExc_Inexact, fpcr);
    elsif ((fpcr.FZ == '1' && N != 16) || (fpcr.FZ16 == '1' && N == 16)) &&
          (
            (N == 16 && Abs(value) >= 2.0^14)  ||
            (N == 32 && Abs(value) >= 2.0^126) ||
            (N == 64 && Abs(value) >= 2.0^1022)
          ) then
        // Result flushed to zero of correct sign
        result = FPZero(sign);
        // Flush-to-zero never generates a trapped exception.
        if UsingAArch32() then
            FPSCR.UFC = '1';
        else
            if fpexc then FPSR.UFC = '1';
    else
        // Scale to a fixed point value in the range 0.5 <= x < 1.0 in steps of 1/512, and
        // calculate result exponent. Scaled value has copied sign bit,
        // exponent = 1022 = double-precision biased version of -1,
        // fraction = original fraction
        bits(52) fraction;
        integer exp;
        case N of
            when 16
                fraction = operand<9:0> : Zeros(42);
                exp = UInt(operand<14:10>);
            when 32
                fraction = operand<22:0> : Zeros(29);
                exp = UInt(operand<30:23>);
            when 64
                fraction = operand<51:0>;
                exp = UInt(operand<62:52>);

        if exp == 0 then
            if fraction<51> == '0' then
                exp = -1;
                fraction = fraction<49:0>:'00';
            else
                fraction = fraction<50:0>:'0';

        integer scaled;
        boolean increasedprecision = N==32 && HaveFeatRPRES() && altfp;
        if !increasedprecision then
            scaled = UInt('1':fraction<51:44>);
        else
            scaled = UInt('1':fraction<51:41>);

        integer result_exp;
        case N of
            when 16 result_exp =   29 - exp; // In range 29-30 = -1 to 29+1 = 30
            when 32 result_exp =  253 - exp; // In range 253-254 = -1 to 253+1 = 254
            when 64 result_exp = 2045 - exp; // In range 2045-2046 = -1 to 2045+1 = 2046

        // Scaled is in the range 256 .. 511 or 2048 .. 4095, representing a
        // fixed-point number in range [0.5 .. 1.0].
        estimate = RecipEstimate(scaled, increasedprecision);
        // Estimate is in the range 256 .. 511 or 4096 .. 8191 representing a
        // fixed-point result in the range [1.0 .. 2.0].
        // Convert to scaled floating point result with copied sign bit,
        // high-order bits from estimate, and exponent calculated above.
        if !increasedprecision then
            fraction = estimate<7:0> : Zeros(44);
        else
            fraction = estimate<11:0> : Zeros(40);
        if result_exp == 0 then
            fraction = '1' : fraction<51:1>;
        elsif result_exp == -1 then
            fraction = '01' : fraction<51:2>;
            result_exp = 0;
        case N of
            when 16 result = sign : result_exp<N-12:0> : fraction<51:42>;
            when 32 result = sign : result_exp<N-25:0> : fraction<51:29>;
            when 64 result = sign : result_exp<N-54:0> : fraction<51:0>;
    return result;
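The 'scaled' value above is just the implicit leading 1 followed by the top fraction bits of the operand; for a normal binary32 input it is operand<22:15> with a leading 1. A small illustrative sketch (not architecture pseudocode):

# Sketch of the operand scaling in FPRecipEstimate() for a normal binary32 input.
def scaled_mantissa32(bits):
    frac = bits & 0x7FFFFF
    return (1 << 8) | (frac >> 15)   # UInt('1' : fraction<51:44>), in 256 .. 511

assert scaled_mantissa32(0x3FC00000) == 384   # 1.5 scales to 384/512 = 0.75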

Library pseudocode for shared/functions/float/fprecipestimate/RecipEstimate

// RecipEstimate()
// ===============
// Compute estimate of reciprocal of 9-bit fixed-point number.
//
// a is in range 256 .. 511 or 2048 .. 4095 representing a number in
// the range 0.5 <= x < 1.0.
// increasedprecision determines if the mantissa is 8-bit or 12-bit.
// result is in the range 256 .. 511 or 4096 .. 8191 representing a
// number in the range 1.0 to 511/256 or 1.00 to 8191/4096.

integer RecipEstimate(integer a_in, boolean increasedprecision)
    integer a = a_in;
    integer r;
    if !increasedprecision then
        assert 256 <= a && a < 512;
        a = a*2+1;                   // Round to nearest
        integer b = (2 ^ 19) DIV a;
        r = (b+1) DIV 2;             // Round to nearest
        assert 256 <= r && r < 512;
    else
        assert 2048 <= a && a < 4096;
        a = a*2+1;                   // Round to nearest
        real real_val = Real(2^25)/Real(a);
        r = RoundDown(real_val);
        real error = real_val - Real(r);
        boolean round_up = error > 0.5;    // Error cannot be exactly 0.5 so do not need tie case
        if round_up then r = r+1;
        assert 4096 <= r && r < 8192;
    return r;
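The 8-bit path is small enough to check directly; this Python port of the pseudocode above (arbitrary-precision integers stand in for the 'integer' type) reproduces the table:

# Direct port of the non-increased-precision path of RecipEstimate().
def recip_estimate(a):
    assert 256 <= a < 512
    a = a * 2 + 1                # round input to nearest
    b = (2 ** 19) // a
    r = (b + 1) // 2             # round result to nearest
    assert 256 <= r < 512
    return r

assert recip_estimate(256) == 511   # input 0.5 estimates 1/0.5 as 511/256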

Library pseudocode for shared/functions/float/fprecpx/FPRecpX

// FPRecpX()
// =========

bits(N) FPRecpX(bits(N) op, FPCRType fpcr_in)
    assert N IN {16,32,64};
    FPCRType fpcr = fpcr_in;
    integer esize;
    case N of
        when 16 esize = 5;
        when 32 esize = 8;
        when 64 esize = 11;
    bits(N) result;
    bits(esize) exp;
    bits(esize) max_exp;
    bits(N-(esize+1)) frac = Zeros();
    boolean altfp = HaveAltFP() && fpcr.AH == '1';
    boolean fpexc = !altfp;                    // Generate no floating-point exceptions
    if altfp then fpcr.<FIZ,FZ> = '11';        // Flush denormal input and output to zero
    (fptype,sign,value) = FPUnpack(op, fpcr, fpexc);

    case N of
        when 16 exp = op<10+esize-1:10>;
        when 32 exp = op<23+esize-1:23>;
        when 64 exp = op<52+esize-1:52>;
    max_exp = Ones(esize) - 1;

    if fptype == FPType_SNaN || fptype == FPType_QNaN then
        result = FPProcessNaN(fptype, op, fpcr, fpexc);
    else
        if IsZero(exp) then    // Zero and denormals
            result = sign:max_exp:frac;
        else                   // Infinities and normals
            result = sign:NOT(exp):frac;
    return result;
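For normal and infinite inputs, FPRecpX simply complements the exponent field and zeroes the fraction. A binary32 sketch of the bit manipulation, assuming the IEEE layout (NaN and flush-to-zero handling omitted):

# Sketch of the FPRecpX() bit manipulation for N == 32 (esize = 8).
def frecpx32(bits):
    sign = bits & 0x80000000
    exp  = (bits >> 23) & 0xFF
    if exp == 0:                        # zero or denormal input
        return sign | (0xFE << 23)      # max_exp = Ones(8) - 1
    return sign | ((exp ^ 0xFF) << 23)  # sign : NOT(exp) : Zeros(23)

assert frecpx32(0x40000000) == 0x3F800000   # exponent of 2.0 complements to that of 1.0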

Library pseudocode for shared/functions/float/fpround/FPRound

// FPRound()
// =========
// Used by data processing and int/fixed <-> FP conversion instructions.
// For half-precision data it ignores AHP, and observes FZ16.

bits(N) FPRound(real op, FPCRType fpcr_in, FPRounding rounding)
    FPCRType fpcr = fpcr_in;
    fpcr.AHP = '0';
    boolean fpexc = TRUE;        // Generate floating-point exceptions
    boolean isbfloat16 = FALSE;
    return FPRoundBase(op, fpcr, rounding, isbfloat16, fpexc);

// FPRound()
// =========
// Used by data processing and int/fixed <-> FP conversion instructions.
// For half-precision data it ignores AHP, and observes FZ16.
//
// The 'fpcr' argument supplies FPCR control bits and 'fpexc' controls the
// generation of floating-point exceptions. Status information is updated
// directly in the FPSR where appropriate.

bits(N) FPRound(real op, FPCRType fpcr_in, FPRounding rounding, boolean fpexc)
    FPCRType fpcr = fpcr_in;
    fpcr.AHP = '0';
    boolean isbfloat16 = FALSE;
    return FPRoundBase(op, fpcr, rounding, isbfloat16, fpexc);

// FPRound()
// =========

bits(N) FPRound(real op, FPCRType fpcr)
    return FPRound(op, fpcr, FPRoundingMode(fpcr));

Library pseudocode for shared/functions/float/fpround/FPRoundBase

// FPRoundBase()
// =============

bits(N) FPRoundBase(real op, FPCRType fpcr, FPRounding rounding, boolean isbfloat16)
    boolean fpexc = TRUE;    // Generate floating-point exceptions
    return FPRoundBase(op, fpcr, rounding, isbfloat16, fpexc);

// FPRoundBase()
// =============
// Convert a real number 'op' into an N-bit floating-point value using the
// supplied rounding mode 'rounding'.
//
// The 'fpcr' argument supplies FPCR control bits and 'fpexc' controls the
// generation of floating-point exceptions. Status information is updated
// directly in the FPSR where appropriate.

bits(N) FPRoundBase(real op, FPCRType fpcr, FPRounding rounding, boolean isbfloat16,
                    boolean fpexc)
    assert N IN {16,32,64};
    assert op != 0.0;
    assert rounding != FPRounding_TIEAWAY;
    bits(N) result;

    // Obtain format parameters - minimum exponent, numbers of exponent and fraction bits.
    integer minimum_exp;
    integer F;
    integer E;
    if N == 16 then
        minimum_exp = -14;   E = 5;  F = 10;
    elsif N == 32 && isbfloat16 then
        minimum_exp = -126;  E = 8;  F = 7;
    elsif N == 32 then
        minimum_exp = -126;  E = 8;  F = 23;
    else    // N == 64
        minimum_exp = -1022; E = 11; F = 52;

    // Split value into sign, unrounded mantissa and exponent.
    bit sign;
    real mantissa;
    if op < 0.0 then
        sign = '1';  mantissa = -op;
    else
        sign = '0';  mantissa = op;
    exponent = 0;
    while mantissa < 1.0 do
        mantissa = mantissa * 2.0;  exponent = exponent - 1;
    while mantissa >= 2.0 do
        mantissa = mantissa / 2.0;  exponent = exponent + 1;

    // When TRUE, detection of underflow occurs after rounding and the test for a
    // denormalized number for single and double precision values occurs after rounding.
    altfp = HaveAltFP() && !UsingAArch32() && fpcr.AH == '1';

    // Deal with flush-to-zero before rounding if FPCR.AH != '1'.
    if (!altfp && ((fpcr.FZ == '1' && N != 16) || (fpcr.FZ16 == '1' && N == 16)) &&
          exponent < minimum_exp) then
        // Flush-to-zero never generates a trapped exception.
        if UsingAArch32() then
            FPSCR.UFC = '1';
        else
            if fpexc then FPSR.UFC = '1';
        return FPZero(sign);

    biased_exp_unconstrained = (exponent - minimum_exp) + 1;
    int_mant_unconstrained = RoundDown(mantissa * 2.0^F);
    error_unconstrained = mantissa * 2.0^F - Real(int_mant_unconstrained);

    // Start creating the exponent value for the result. Start by biasing the actual exponent
    // so that the minimum exponent becomes 1, lower values 0 (indicating possible underflow).
    biased_exp = Max((exponent - minimum_exp) + 1, 0);
    if biased_exp == 0 then mantissa = mantissa / 2.0^(minimum_exp - exponent);

    // Get the unrounded mantissa as an integer, and the "units in last place" rounding error.
    int_mant = RoundDown(mantissa * 2.0^F);    // < 2.0^F if biased_exp == 0, >= 2.0^F if not
    error = mantissa * 2.0^F - Real(int_mant);

    // Underflow occurs if exponent is too small before rounding, and result is inexact or
    // the Underflow exception is trapped. This applies before rounding if FPCR.AH != '1'.
    if !altfp && biased_exp == 0 && (error != 0.0 || fpcr.UFE == '1') then
        if fpexc then FPProcessException(FPExc_Underflow, fpcr);

    // Round result according to rounding mode.
    boolean round_up_unconstrained;
    boolean round_up;
    boolean overflow_to_inf;
    if altfp then
        case rounding of
            when FPRounding_TIEEVEN
                round_up_unconstrained = (error_unconstrained > 0.5 ||
                    (error_unconstrained == 0.5 && int_mant_unconstrained<0> == '1'));
                round_up = (error > 0.5 || (error == 0.5 && int_mant<0> == '1'));
                overflow_to_inf = TRUE;
            when FPRounding_POSINF
                round_up_unconstrained = (error_unconstrained != 0.0 && sign == '0');
                round_up = (error != 0.0 && sign == '0');
                overflow_to_inf = (sign == '0');
            when FPRounding_NEGINF
                round_up_unconstrained = (error_unconstrained != 0.0 && sign == '1');
                round_up = (error != 0.0 && sign == '1');
                overflow_to_inf = (sign == '1');
            when FPRounding_ZERO, FPRounding_ODD
                round_up_unconstrained = FALSE;
                round_up = FALSE;
                overflow_to_inf = FALSE;

        if round_up_unconstrained then
            int_mant_unconstrained = int_mant_unconstrained + 1;
            if int_mant_unconstrained == 2^(F+1) then    // Rounded up to next exponent
                biased_exp_unconstrained = biased_exp_unconstrained + 1;
                int_mant_unconstrained = int_mant_unconstrained DIV 2;

        // Deal with flush-to-zero and underflow after rounding if FPCR.AH == '1'.
        if biased_exp_unconstrained < 1 && int_mant_unconstrained != 0 then
            // The result of unconstrained rounding is less than the minimum normalized number.
            if (fpcr.FZ == '1' && N != 16) || (fpcr.FZ16 == '1' && N == 16) then
                // Flush-to-zero
                if fpexc then
                    FPSR.UFC = '1';
                    FPProcessException(FPExc_Inexact, fpcr);
                return FPZero(sign);
            elsif error != 0.0 || fpcr.UFE == '1' then
                if fpexc then FPProcessException(FPExc_Underflow, fpcr);
    else    // altfp == FALSE
        case rounding of
            when FPRounding_TIEEVEN
                round_up = (error > 0.5 || (error == 0.5 && int_mant<0> == '1'));
                overflow_to_inf = TRUE;
            when FPRounding_POSINF
                round_up = (error != 0.0 && sign == '0');
                overflow_to_inf = (sign == '0');
            when FPRounding_NEGINF
                round_up = (error != 0.0 && sign == '1');
                overflow_to_inf = (sign == '1');
            when FPRounding_ZERO, FPRounding_ODD
                round_up = FALSE;
                overflow_to_inf = FALSE;

    if round_up then
        int_mant = int_mant + 1;
        if int_mant == 2^F then        // Rounded up from denormalized to normalized
            biased_exp = 1;
        if int_mant == 2^(F+1) then    // Rounded up to next exponent
            biased_exp = biased_exp + 1;
            int_mant = int_mant DIV 2;

    // Handle rounding to odd
    if error != 0.0 && rounding == FPRounding_ODD then int_mant<0> = '1';

    // Deal with overflow and generate result.
    if N != 16 || fpcr.AHP == '0' then    // Single, double or IEEE half precision
        if biased_exp >= 2^E - 1 then
            result = if overflow_to_inf then FPInfinity(sign) else FPMaxNormal(sign);
            if fpexc then FPProcessException(FPExc_Overflow, fpcr);
            error = 1.0;    // Ensure that an Inexact exception occurs
        else
            result = sign : biased_exp<E-1:0> : int_mant<F-1:0> : Zeros(N-(E+F+1));
    else    // Alternative half precision
        if biased_exp >= 2^E then
            result = sign : Ones(N-1);
            if fpexc then FPProcessException(FPExc_InvalidOp, fpcr);
            error = 0.0;    // Ensure that an Inexact exception does not occur
        else
            result = sign : biased_exp<E-1:0> : int_mant<F-1:0> : Zeros(N-(E+F+1));

    // Deal with Inexact exception.
    if error != 0.0 then
        if fpexc then FPProcessException(FPExc_Inexact, fpcr);
    return result;
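The round_up decision above compares the "units in last place" error against 0.5. A sketch of the non-alternative (fpcr.AH == 0) increment rule, using Python's Fraction for an exact error term:

# Sketch of the round_up decision in FPRoundBase() (fpcr.AH == 0 case).
from fractions import Fraction

def round_up(int_mant, error, rounding, sign):
    half = Fraction(1, 2)
    if rounding == "TIEEVEN":
        return error > half or (error == half and int_mant & 1 == 1)
    if rounding == "POSINF":
        return error != 0 and sign == "0"
    if rounding == "NEGINF":
        return error != 0 and sign == "1"
    return False                         # ZERO and ODD never increment here

assert round_up(0b1011, Fraction(1, 2), "TIEEVEN", "0")       # tie: odd mantissa rounds up
assert not round_up(0b1010, Fraction(1, 2), "TIEEVEN", "0")   # tie: even mantissa stays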

Library pseudocode for shared/functions/float/fpround/FPRoundCV

// FPRoundCV()
// ===========
// Used for FP <-> FP conversion instructions.
// For half-precision data ignores FZ16 and observes AHP.

bits(N) FPRoundCV(real op, FPCRType fpcr_in, FPRounding rounding)
    FPCRType fpcr = fpcr_in;
    fpcr.FZ16 = '0';
    boolean fpexc = TRUE;        // Generate floating-point exceptions
    boolean isbfloat16 = FALSE;
    return FPRoundBase(op, fpcr, rounding, isbfloat16, fpexc);

Library pseudocode for shared/functions/float/fprounding/FPRounding

enumeration FPRounding  {FPRounding_TIEEVEN, FPRounding_POSINF,
                         FPRounding_NEGINF,  FPRounding_ZERO,
                         FPRounding_TIEAWAY, FPRounding_ODD};

Library pseudocode for shared/functions/float/fproundingmode/FPRoundingMode

// FPRoundingMode()
// ================
// Return the current floating-point rounding mode.

FPRounding FPRoundingMode(FPCRType fpcr)
    return FPDecodeRounding(fpcr.RMode);

Library pseudocode for shared/functions/float/fproundint/FPRoundInt

// FPRoundInt()
// ============
// Round op to nearest integral floating point value using rounding mode in FPCR/FPSCR.
// If EXACT is TRUE, set FPSR.IXC if result is not numerically equal to op.

bits(N) FPRoundInt(bits(N) op, FPCRType fpcr, FPRounding rounding, boolean exact)
    assert rounding != FPRounding_ODD;
    assert N IN {16,32,64};

    // When alternative floating-point support is TRUE, do not generate
    // Input Denormal floating-point exceptions.
    altfp = HaveAltFP() && !UsingAArch32() && fpcr.AH == '1';
    fpexc = !altfp;

    // Unpack using FPCR to determine if subnormals are flushed-to-zero.
    (fptype,sign,value) = FPUnpack(op, fpcr, fpexc);

    bits(N) result;
    if fptype == FPType_SNaN || fptype == FPType_QNaN then
        result = FPProcessNaN(fptype, op, fpcr);
    elsif fptype == FPType_Infinity then
        result = FPInfinity(sign);
    elsif fptype == FPType_Zero then
        result = FPZero(sign);
    else
        // Extract integer component.
        int_result = RoundDown(value);
        error = value - Real(int_result);

        // Determine whether supplied rounding mode requires an increment.
        boolean round_up;
        case rounding of
            when FPRounding_TIEEVEN
                round_up = (error > 0.5 || (error == 0.5 && int_result<0> == '1'));
            when FPRounding_POSINF
                round_up = (error != 0.0);
            when FPRounding_NEGINF
                round_up = FALSE;
            when FPRounding_ZERO
                round_up = (error != 0.0 && int_result < 0);
            when FPRounding_TIEAWAY
                round_up = (error > 0.5 || (error == 0.5 && int_result >= 0));

        if round_up then int_result = int_result + 1;

        // Convert integer value into an equivalent real value.
        real_result = Real(int_result);

        // Re-encode as a floating-point value, result is always exact.
        if real_result == 0.0 then
            result = FPZero(sign);
        else
            result = FPRound(real_result, fpcr, FPRounding_ZERO);

        // Generate inexact exceptions.
        if error != 0.0 && exact then
            FPProcessException(FPExc_Inexact, fpcr);
    return result;
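The integral rounding above is a floor plus a conditional increment. A sketch covering all five rounding modes (illustration only; ties behave as shown because RoundDown() is a true floor):

# Sketch of the increment rule in FPRoundInt()/FPRoundIntN().
import math

def round_int(value, rounding):
    int_result = math.floor(value)           # RoundDown(value)
    error = value - int_result
    if rounding == "TIEEVEN":
        up = error > 0.5 or (error == 0.5 and int_result & 1 == 1)
    elif rounding == "POSINF":
        up = error != 0.0
    elif rounding == "NEGINF":
        up = False
    elif rounding == "ZERO":
        up = error != 0.0 and int_result < 0
    else:                                    # TIEAWAY
        up = error > 0.5 or (error == 0.5 and int_result >= 0)
    return int_result + 1 if up else int_result

assert round_int(2.5, "TIEEVEN") == 2 and round_int(2.5, "TIEAWAY") == 3
assert round_int(-2.5, "ZERO") == -2         # truncation toward zero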

Library pseudocode for shared/functions/float/fproundintn/FPRoundIntN

// FPRoundIntN()
// =============

bits(N) FPRoundIntN(bits(N) op, FPCRType fpcr, FPRounding rounding, integer intsize)
    assert rounding != FPRounding_ODD;
    assert N IN {32,64};
    assert intsize IN {32, 64};
    integer exp;
    bits(N) result;
    boolean round_up;
    constant integer E = (if N == 32 then 8 else 11);
    constant integer F = N - (E + 1);

    // When alternative floating-point support is TRUE, do not generate
    // Input Denormal floating-point exceptions.
    altfp = HaveAltFP() && !UsingAArch32() && fpcr.AH == '1';
    fpexc = !altfp;

    // Unpack using FPCR to determine if subnormals are flushed-to-zero.
    (fptype,sign,value) = FPUnpack(op, fpcr, fpexc);

    if fptype IN {FPType_SNaN, FPType_QNaN, FPType_Infinity} then
        if N == 32 then
            exp = 126 + intsize;
            result = '1':exp<(E-1):0>:Zeros(F);
        else
            exp = 1022 + intsize;
            result = '1':exp<(E-1):0>:Zeros(F);
        FPProcessException(FPExc_InvalidOp, fpcr);
    elsif fptype == FPType_Zero then
        result = FPZero(sign);
    else
        // Extract integer component.
        int_result = RoundDown(value);
        error = value - Real(int_result);

        // Determine whether supplied rounding mode requires an increment.
        case rounding of
            when FPRounding_TIEEVEN
                round_up = error > 0.5 || (error == 0.5 && int_result<0> == '1');
            when FPRounding_POSINF
                round_up = error != 0.0;
            when FPRounding_NEGINF
                round_up = FALSE;
            when FPRounding_ZERO
                round_up = error != 0.0 && int_result < 0;
            when FPRounding_TIEAWAY
                round_up = error > 0.5 || (error == 0.5 && int_result >= 0);

        if round_up then int_result = int_result + 1;

        overflow = int_result > 2^(intsize-1)-1 || int_result < -1*2^(intsize-1);

        if overflow then
            if N == 32 then
                exp = 126 + intsize;
                result = '1':exp<(E-1):0>:Zeros(F);
            else
                exp = 1022 + intsize;
                result = '1':exp<(E-1):0>:Zeros(F);
            FPProcessException(FPExc_InvalidOp, fpcr);
            // This case shouldn't set Inexact.
            error = 0.0;
        else
            // Convert integer value into an equivalent real value.
            real_result = Real(int_result);

            // Re-encode as a floating-point value, result is always exact.
            if real_result == 0.0 then
                result = FPZero(sign);
            else
                result = FPRound(real_result, fpcr, FPRounding_ZERO);

        // Generate inexact exceptions.
        if error != 0.0 then
            FPProcessException(FPExc_Inexact, fpcr);
    return result;

Library pseudocode for shared/functions/float/fprsqrtestimate/FPRSqrtEstimate

// FPRSqrtEstimate()
// =================

bits(N) FPRSqrtEstimate(bits(N) operand, FPCRType fpcr_in)
    assert N IN {16,32,64};
    FPCRType fpcr = fpcr_in;
    // When using alternative floating-point behaviour, do not generate
    // floating-point exceptions and flush denormal input to zero.
    boolean altfp = HaveAltFP() && !UsingAArch32() && fpcr.AH == '1';
    boolean fpexc = !altfp;
    if altfp then fpcr.<FIZ,FZ> = '11';

    (fptype,sign,value) = FPUnpack(operand, fpcr, fpexc);

    bits(N) result;
    if fptype == FPType_SNaN || fptype == FPType_QNaN then
        result = FPProcessNaN(fptype, operand, fpcr, fpexc);
    elsif fptype == FPType_Zero then
        result = FPInfinity(sign);
        if fpexc then FPProcessException(FPExc_DivideByZero, fpcr);
    elsif sign == '1' then
        result = FPDefaultNaN(fpcr);
        if fpexc then FPProcessException(FPExc_InvalidOp, fpcr);
    elsif fptype == FPType_Infinity then
        result = FPZero('0');
    else
        // Scale to a fixed-point value in the range 0.25 <= x < 1.0 in steps of 1/512, with the
        // evenness or oddness of the exponent unchanged, and calculate result exponent.
        // Scaled value has copied sign bit, exponent = 1022 or 1021 = double-precision
        // biased version of -1 or -2, fraction = original fraction extended with zeros.
        bits(52) fraction;
        integer exp;
        case N of
            when 16
                fraction = operand<9:0> : Zeros(42);
                exp = UInt(operand<14:10>);
            when 32
                fraction = operand<22:0> : Zeros(29);
                exp = UInt(operand<30:23>);
            when 64
                fraction = operand<51:0>;
                exp = UInt(operand<62:52>);

        if exp == 0 then
            while fraction<51> == '0' do
                fraction = fraction<50:0> : '0';
                exp = exp - 1;
            fraction = fraction<50:0> : '0';

        integer scaled;
        boolean increasedprecision = N==32 && HaveFeatRPRES() && altfp;
        if !increasedprecision then
            if exp<0> == '0' then
                scaled = UInt('1':fraction<51:44>);
            else
                scaled = UInt('01':fraction<51:45>);
        else
            if exp<0> == '0' then
                scaled = UInt('1':fraction<51:41>);
            else
                scaled = UInt('01':fraction<51:42>);

        integer result_exp;
        case N of
            when 16 result_exp = (  44 - exp) DIV 2;
            when 32 result_exp = ( 380 - exp) DIV 2;
            when 64 result_exp = (3068 - exp) DIV 2;

        estimate = RecipSqrtEstimate(scaled, increasedprecision);
        // Estimate is in the range 256 .. 511 or 4096 .. 8191 representing a
        // fixed-point result in the range [1.0 .. 2.0].
        // Convert to scaled floating point result with copied sign bit and high-order
        // fraction bits, and exponent calculated above.
        case N of
            when 16 result = '0' : result_exp<N-12:0> : estimate<7:0>:Zeros(2);
            when 32
                if !increasedprecision then
                    result = '0' : result_exp<N-25:0> : estimate<7:0>:Zeros(15);
                else
                    result = '0' : result_exp<N-25:0> : estimate<11:0>:Zeros(11);
            when 64 result = '0' : result_exp<N-54:0> : estimate<7:0>:Zeros(44);
    return result;

Library pseudocode for shared/functions/float/fprsqrtestimate/RecipSqrtEstimate

// RecipSqrtEstimate()
// ===================
// Compute estimate of reciprocal square root of 9-bit fixed-point number.
//
// a_in is in range 128 .. 511 or 1024 .. 4095, with increased precision,
// representing a number in the range 0.25 <= x < 1.0.
// increasedprecision determines if the mantissa is 8-bit or 12-bit.
// result is in the range 256 .. 511 or 4096 .. 8191, with increased precision,
// representing a number in the range 1.0 to 511/256 or 8191/4096.

integer RecipSqrtEstimate(integer a_in, boolean increasedprecision)
    integer a = a_in;
    integer r;
    if !increasedprecision then
        assert 128 <= a && a < 512;
        if a < 256 then                    // 0.25 .. 0.5
            a = a*2+1;                     // a in units of 1/512 rounded to nearest
        else                               // 0.5 .. 1.0
            a = (a >> 1) << 1;             // Discard bottom bit
            a = (a+1)*2;                   // a in units of 1/256 rounded to nearest
        integer b = 512;
        while a*(b+1)*(b+1) < 2^28 do
            b = b+1;                       // b = largest b such that b < 2^14 / sqrt(a)
        r = (b+1) DIV 2;                   // Round to nearest
        assert 256 <= r && r < 512;
    else
        assert 1024 <= a && a < 4096;
        real real_val;
        real error;
        integer int_val;
        if a < 2048 then                   // 0.25 .. 0.5
            a = a*2 + 1;                   // Take 10 bits of fraction and force a 1 at the bottom
            real_val = Real(a)/2.0;
        else                               // 0.5 .. 1.0
            a = (a >> 1) << 1;             // Discard bottom bit
            a = a+1;                       // Take 10 bits of fraction and force a 1 at the bottom
            real_val = Real(a);
        real_val = Sqrt(real_val);         // This number will lie in the range of 32 to 64
        // Round to nearest even for a DP float number
        real_val = real_val * Real(2^47);  // The integer is the size of the whole DP mantissa
        int_val = RoundDown(real_val);     // Calculate rounding value
        error = real_val - Real(int_val);
        round_up = error > 0.5;            // Error cannot be exactly 0.5 so do not need tie case
        if round_up then int_val = int_val+1;
        real_val = Real(2^65)/Real(int_val);    // Lies in the range 4096 <= real_val < 8192
        int_val = RoundDown(real_val);          // Round that (to nearest even) to give integer
        error = real_val - Real(int_val);
        round_up = (error > 0.5 || (error == 0.5 && int_val<0> == '1'));
        if round_up then int_val = int_val+1;
        r = int_val;
        assert 4096 <= r && r < 8192;
    return r;
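As with RecipEstimate(), the 8-bit path can be ported directly to check the table values:

# Direct port of the non-increased-precision path of RecipSqrtEstimate().
def recip_sqrt_estimate(a):
    assert 128 <= a < 512
    if a < 256:                  # input in [0.25, 0.5): units of 1/512
        a = a * 2 + 1
    else:                        # input in [0.5, 1.0): units of 1/256
        a = (a >> 1) << 1        # discard bottom bit
        a = (a + 1) * 2
    b = 512
    while a * (b + 1) * (b + 1) < 2 ** 28:
        b = b + 1                # largest b such that b < 2^14 / sqrt(a)
    r = (b + 1) // 2             # round to nearest
    assert 256 <= r < 512
    return r

assert recip_sqrt_estimate(256) == 361   # 361/256 ~= 1.41 ~= 1/sqrt(0.5)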

Library pseudocode for shared/functions/float/fpsqrt/FPSqrt

// FPSqrt()
// ========

bits(N) FPSqrt(bits(N) op, FPCRType fpcr)
    assert N IN {16,32,64};
    (fptype,sign,value) = FPUnpack(op, fpcr);

    bits(N) result;
    if fptype == FPType_SNaN || fptype == FPType_QNaN then
        result = FPProcessNaN(fptype, op, fpcr);
    elsif fptype == FPType_Zero then
        result = FPZero(sign);
    elsif fptype == FPType_Infinity && sign == '0' then
        result = FPInfinity(sign);
    elsif sign == '1' then
        result = FPDefaultNaN(fpcr);
        FPProcessException(FPExc_InvalidOp, fpcr);
    else
        result = FPRound(Sqrt(value), fpcr);
        FPProcessDenorm(fptype, N, fpcr);
    return result;

Library pseudocode for shared/functions/float/fpsub/FPSub

// FPSub()
// =======

bits(N) FPSub(bits(N) op1, bits(N) op2, FPCRType fpcr)
    assert N IN {16,32,64};
    rounding = FPRoundingMode(fpcr);
    (type1,sign1,value1) = FPUnpack(op1, fpcr);
    (type2,sign2,value2) = FPUnpack(op2, fpcr);
    (done,result) = FPProcessNaNs(type1, type2, op1, op2, fpcr);
    if !done then
        inf1  = (type1 == FPType_Infinity);
        inf2  = (type2 == FPType_Infinity);
        zero1 = (type1 == FPType_Zero);
        zero2 = (type2 == FPType_Zero);
        if inf1 && inf2 && sign1 == sign2 then
            result = FPDefaultNaN(fpcr);
            FPProcessException(FPExc_InvalidOp, fpcr);
        elsif (inf1 && sign1 == '0') || (inf2 && sign2 == '1') then
            result = FPInfinity('0');
        elsif (inf1 && sign1 == '1') || (inf2 && sign2 == '0') then
            result = FPInfinity('1');
        elsif zero1 && zero2 && sign1 == NOT(sign2) then
            result = FPZero(sign1);
        else
            result_value = value1 - value2;
            if result_value == 0.0 then    // Sign of exact zero result depends on rounding mode
                result_sign = if rounding == FPRounding_NEGINF then '1' else '0';
                result = FPZero(result_sign);
            else
                result = FPRound(result_value, fpcr, rounding);
        FPProcessDenorms(type1, type2, N, fpcr);
    return result;

Library pseudocode for shared/functions/float/fpthree/FPThree

// FPThree()
// =========

bits(N) FPThree(bit sign)
    assert N IN {16,32,64};
    constant integer E = (if N == 16 then 5 elsif N == 32 then 8 else 11);
    constant integer F = N - (E + 1);
    exp  = '1':Zeros(E-1);
    frac = '1':Zeros(F-1);
    result = sign : exp : frac;
    return result;
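
As a sanity check on the encoding FPThree() builds, here is an editorial C sketch of the N == 32 case: a biased exponent of 128 and a leading fraction bit give 1.5 * 2^1 = 3.0.

#include <stdint.h>
#include <string.h>
#include <stdio.h>

// Single-precision FPThree(): exp = '1':Zeros(7) (biased 128),
// frac = '1':Zeros(22) (significand 1.5), value = 1.5 * 2^1 = 3.0.
static float fp_three32(unsigned sign)
{
    uint32_t bits = ((uint32_t)(sign & 1) << 31) | (0x80u << 23) | (1u << 22);
    float f;
    memcpy(&f, &bits, sizeof f);   // bit-copy; avoids strict-aliasing issues
    return f;
}

int main(void)
{
    printf("%g %g\n", fp_three32(0), fp_three32(1));   // 3 -3
    return 0;
}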

Library pseudocode for shared/functions/float/fptofixed/FPToFixed

// FPToFixed()
// ===========
// Convert N-bit precision floating point OP to M-bit fixed point with
// FBITS fractional bits, controlled by UNSIGNED and ROUNDING.

bits(M) FPToFixed(bits(N) op, integer fbits, boolean unsigned, FPCRType fpcr, FPRounding rounding)
    assert N IN {16,32,64};
    assert M IN {16,32,64};
    assert fbits >= 0;
    assert rounding != FPRounding_ODD;

    // When alternative floating-point support is TRUE, do not generate
    // Input Denormal floating-point exceptions.
    altfp = HaveAltFP() && !UsingAArch32() && fpcr.AH == '1';
    fpexc = !altfp;

    // Unpack using fpcr to determine if subnormals are flushed-to-zero.
    (fptype,sign,value) = FPUnpack(op, fpcr, fpexc);

    // If NaN, set cumulative flag or take exception.
    if fptype == FPType_SNaN || fptype == FPType_QNaN then
        FPProcessException(FPExc_InvalidOp, fpcr);

    // Scale by fractional bits and produce integer rounded towards minus-infinity.
    value = value * 2.0^fbits;
    int_result = RoundDown(value);
    error = value - Real(int_result);

    // Determine whether supplied rounding mode requires an increment.
    boolean round_up;
    case rounding of
        when FPRounding_TIEEVEN
            round_up = (error > 0.5 || (error == 0.5 && int_result<0> == '1'));
        when FPRounding_POSINF
            round_up = (error != 0.0);
        when FPRounding_NEGINF
            round_up = FALSE;
        when FPRounding_ZERO
            round_up = (error != 0.0 && int_result < 0);
        when FPRounding_TIEAWAY
            round_up = (error > 0.5 || (error == 0.5 && int_result >= 0));

    if round_up then int_result = int_result + 1;

    // Generate saturated result and exceptions.
    (result, overflow) = SatQ(int_result, M, unsigned);
    if overflow then
        FPProcessException(FPExc_InvalidOp, fpcr);
    elsif error != 0.0 then
        FPProcessException(FPExc_Inexact, fpcr);

    return result;
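
The scale/round/saturate sequence in FPToFixed() is the usual float-to-fixed recipe. An editorial C sketch of the signed, M == 32 case (NaN handling and FPSR flag updates are reduced to comments; round() implements the FPRounding_TIEAWAY case):

#include <stdint.h>
#include <math.h>

// Convert to signed fixed point with fbits fractional bits and saturation,
// following the FPToFixed() steps: scale by 2^fbits, round, saturate (SatQ).
static int32_t to_fixed32(double value, int fbits)
{
    if (isnan(value)) return 0;                 // the pseudocode raises InvalidOp here
    double scaled  = value * ldexp(1.0, fbits); // value * 2.0^fbits
    double rounded = round(scaled);             // FPRounding_TIEAWAY
    if (rounded > (double)INT32_MAX) return INT32_MAX;  // overflow -> InvalidOp
    if (rounded < (double)INT32_MIN) return INT32_MIN;
    return (int32_t)rounded;                    // inexact -> FPExc_Inexact
}
// Example: to_fixed32(1.5, 16) == 98304, i.e. 1.5 in Q16.16.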

Library pseudocode for shared/functions/float/fptofixedjs/FPToFixedJS

// FPToFixedJS()
// =============
// Converts a double precision floating point input value
// to a signed integer, with rounding to zero.

(bits(N), bit) FPToFixedJS(bits(M) op, FPCRType fpcr, boolean Is64)
    assert M == 64 && N == 32;

    // If FALSE, never generate Input Denormal floating-point exceptions.
    fpexc_idenorm = !(HaveAltFP() && !UsingAArch32() && fpcr.AH == '1');

    // Unpack using fpcr to determine if subnormals are flushed-to-zero.
    (fptype,sign,value) = FPUnpack(op, fpcr, fpexc_idenorm);

    Z = '1';
    // If NaN, set cumulative flag or take exception.
    if fptype == FPType_SNaN || fptype == FPType_QNaN then
        FPProcessException(FPExc_InvalidOp, fpcr);
        Z = '0';

    int_result = RoundDown(value);
    error = value - Real(int_result);

    // Determine whether supplied rounding mode requires an increment.
    round_it_up = (error != 0.0 && int_result < 0);
    if round_it_up then int_result = int_result + 1;

    integer result;
    if int_result < 0 then
        result = int_result - 2^32*RoundUp(Real(int_result)/Real(2^32));
    else
        result = int_result - 2^32*RoundDown(Real(int_result)/Real(2^32));

    // Generate exceptions.
    if int_result < -(2^31) || int_result > (2^31)-1 then
        FPProcessException(FPExc_InvalidOp, fpcr);
        Z = '0';
    elsif error != 0.0 then
        FPProcessException(FPExc_Inexact, fpcr);
        Z = '0';
    elsif sign == '1' && value == 0.0 then
        Z = '0';
    elsif sign == '0' && value == 0.0 && !IsZero(op<51:0>) then
        Z = '0';
    if fptype == FPType_Infinity then result = 0;

    return (result<N-1:0>, Z);
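
FPToFixedJS() is the semantic core of the FJCVTZS instruction: truncate toward zero, then reduce modulo 2^32, which is exactly JavaScript's ToInt32(). An editorial C sketch (the Z flag result is omitted; the final cast assumes two's-complement wraparound, as on AArch64):

#include <stdint.h>
#include <math.h>

// JavaScript ToInt32(): truncate toward zero, wrap modulo 2^32.
static int32_t js_to_int32(double value)
{
    if (!isfinite(value)) return 0;            // NaN and infinities convert to 0
    double t = trunc(value);                   // round toward zero
    double m = fmod(t, 4294967296.0);          // reduce modulo 2^32 (fmod is exact)
    if (m < 0.0) m += 4294967296.0;            // into [0, 2^32)
    return (int32_t)(uint32_t)m;               // reinterpret as signed 32-bit
}
// Examples: js_to_int32(4294967297.0) == 1, js_to_int32(-1.5) == -1.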

Library pseudocode for shared/functions/float/fptwo/FPTwo

// FPTwo()
// =======

bits(N) FPTwo(bit sign)
    assert N IN {16,32,64};
    constant integer E = (if N == 16 then 5 elsif N == 32 then 8 else 11);
    constant integer F = N - (E + 1);
    exp  = '1':Zeros(E-1);
    frac = Zeros(F);
    result = sign : exp : frac;
    return result;

Library pseudocode for shared/functions/float/fptype/FPType

enumeration FPType {FPType_Zero, FPType_Denormal, FPType_Nonzero, FPType_Infinity,
                    FPType_QNaN, FPType_SNaN};

Library pseudocode for shared/functions/float/fpunpack/FPUnpack

// FPUnpack()
// ==========

(FPType, bit, real) FPUnpack(bits(N) fpval, FPCRType fpcr_in)
    FPCRType fpcr = fpcr_in;
    fpcr.AHP = '0';
    boolean fpexc = TRUE;    // Generate floating-point exceptions
    (fp_type, sign, value) = FPUnpackBase(fpval, fpcr, fpexc);
    return (fp_type, sign, value);

// FPUnpack()
// ==========
//
// Used by data processing and int/fixed <-> FP conversion instructions.
// For half-precision data it ignores AHP, and observes FZ16.

(FPType, bit, real) FPUnpack(bits(N) fpval, FPCRType fpcr_in, boolean fpexc)
    FPCRType fpcr = fpcr_in;
    fpcr.AHP = '0';
    (fp_type, sign, value) = FPUnpackBase(fpval, fpcr, fpexc);
    return (fp_type, sign, value);

Library pseudocode for shared/functions/float/fpunpack/FPUnpackBase

// FPUnpackBase()
// ==============

(FPType, bit, real) FPUnpackBase(bits(N) fpval, FPCRType fpcr)
    boolean fpexc = TRUE;    // Generate floating-point exceptions
    (fp_type, sign, value) = FPUnpackBase(fpval, fpcr, fpexc);
    return (fp_type, sign, value);

// FPUnpackBase()
// ==============
//
// Unpack a floating-point number into its type, sign bit and the real number
// that it represents. The real number result has the correct sign for numbers
// and infinities, is very large in magnitude for infinities, and is 0.0 for
// NaNs. (These values are chosen to simplify the description of comparisons
// and conversions.)
//
// The 'fpcr_in' argument supplies FPCR control bits and 'fpexc' controls the
// generation of floating-point exceptions. Status information is updated
// directly in the FPSR where appropriate.

(FPType, bit, real) FPUnpackBase(bits(N) fpval, FPCRType fpcr_in, boolean fpexc)
    assert N IN {16,32,64};
    FPCRType fpcr = fpcr_in;

    boolean altfp = HaveAltFP() && !UsingAArch32();
    boolean fiz   = altfp && fpcr.FIZ == '1';
    boolean fz    = fpcr.FZ == '1' && !(altfp && fpcr.AH == '1');

    real value;
    bit sign;
    FPType fptype;

    if N == 16 then
        sign   = fpval<15>;
        exp16  = fpval<14:10>;
        frac16 = fpval<9:0>;
        if IsZero(exp16) then
            if IsZero(frac16) || fpcr.FZ16 == '1' then
                fptype = FPType_Zero; value = 0.0;
            else
                fptype = FPType_Denormal; value = 2.0^-14 * (Real(UInt(frac16)) * 2.0^-10);
        elsif IsOnes(exp16) && fpcr.AHP == '0' then    // Infinity or NaN in IEEE format
            if IsZero(frac16) then
                fptype = FPType_Infinity; value = 2.0^1000000;
            else
                fptype = if frac16<9> == '1' then FPType_QNaN else FPType_SNaN;
                value = 0.0;
        else
            fptype = FPType_Nonzero;
            value = 2.0^(UInt(exp16)-15) * (1.0 + Real(UInt(frac16)) * 2.0^-10);

    elsif N == 32 then
        sign   = fpval<31>;
        exp32  = fpval<30:23>;
        frac32 = fpval<22:0>;
        if IsZero(exp32) then
            if IsZero(frac32) then
                // Produce zero if value is zero.
                fptype = FPType_Zero; value = 0.0;
            elsif fz || fiz then    // Flush-to-zero if FIZ==1 or AH,FZ==01
                fptype = FPType_Zero; value = 0.0;
                // Check whether to raise Input Denormal floating-point exception.
                // fpcr.FIZ==1 does not raise Input Denormal exception.
                if fz then
                    // Denormalized input flushed to zero
                    if fpexc then FPProcessException(FPExc_InputDenorm, fpcr);
            else
                fptype = FPType_Denormal; value = 2.0^-126 * (Real(UInt(frac32)) * 2.0^-23);
        elsif IsOnes(exp32) then
            if IsZero(frac32) then
                fptype = FPType_Infinity; value = 2.0^1000000;
            else
                fptype = if frac32<22> == '1' then FPType_QNaN else FPType_SNaN;
                value = 0.0;
        else
            fptype = FPType_Nonzero;
            value = 2.0^(UInt(exp32)-127) * (1.0 + Real(UInt(frac32)) * 2.0^-23);

    else // N == 64
        sign   = fpval<63>;
        exp64  = fpval<62:52>;
        frac64 = fpval<51:0>;
        if IsZero(exp64) then
            if IsZero(frac64) then
                // Produce zero if value is zero.
                fptype = FPType_Zero; value = 0.0;
            elsif fz || fiz then    // Flush-to-zero if FIZ==1 or AH,FZ==01
                fptype = FPType_Zero; value = 0.0;
                // Check whether to raise Input Denormal floating-point exception.
                // fpcr.FIZ==1 does not raise Input Denormal exception.
                if fz then
                    // Denormalized input flushed to zero
                    if fpexc then FPProcessException(FPExc_InputDenorm, fpcr);
            else
                fptype = FPType_Denormal; value = 2.0^-1022 * (Real(UInt(frac64)) * 2.0^-52);
        elsif IsOnes(exp64) then
            if IsZero(frac64) then
                fptype = FPType_Infinity; value = 2.0^1000000;
            else
                fptype = if frac64<51> == '1' then FPType_QNaN else FPType_SNaN;
                value = 0.0;
        else
            fptype = FPType_Nonzero;
            value = 2.0^(UInt(exp64)-1023) * (1.0 + Real(UInt(frac64)) * 2.0^-52);

    if sign == '1' then value = -value;

    return (fptype, sign, value);
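
The field tests FPUnpackBase() performs translate directly into bit operations. An editorial C sketch of the N == 32 classification only (flush-to-zero controls and the real-valued result are omitted):

#include <stdint.h>
#include <string.h>

enum fpclass32 { CLS_ZERO, CLS_DENORMAL, CLS_NONZERO, CLS_INF, CLS_QNAN, CLS_SNAN };

// Mirrors the N == 32 arm of FPUnpackBase(): exp == 0 selects zero/denormal,
// exp == 0xFF selects infinity/NaN, and fraction bit 22 picks QNaN over SNaN.
static enum fpclass32 classify32(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);
    uint32_t exp  = (bits >> 23) & 0xFFu;
    uint32_t frac = bits & 0x7FFFFFu;

    if (exp == 0)
        return frac == 0 ? CLS_ZERO : CLS_DENORMAL;
    if (exp == 0xFFu) {
        if (frac == 0) return CLS_INF;
        return (frac >> 22) & 1u ? CLS_QNAN : CLS_SNAN;
    }
    return CLS_NONZERO;
}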

Library pseudocode for shared/functions/float/fpunpack/FPUnpackCV

// FPUnpackCV()
// ============
//
// Used for FP <-> FP conversion instructions.
// For half-precision data ignores FZ16 and observes AHP.

(FPType, bit, real) FPUnpackCV(bits(N) fpval, FPCRType fpcr_in)
    FPCRType fpcr = fpcr_in;
    fpcr.FZ16 = '0';
    boolean fpexc = TRUE;    // Generate floating-point exceptions
    (fp_type, sign, value) = FPUnpackBase(fpval, fpcr, fpexc);
    return (fp_type, sign, value);

Library pseudocode for shared/functions/float/fpzero/FPZero

// FPZero()
// ========

bits(N) FPZero(bit sign)
    assert N IN {16,32,64};
    constant integer E = (if N == 16 then 5 elsif N == 32 then 8 else 11);
    constant integer F = N - (E + 1);
    exp  = Zeros(E);
    frac = Zeros(F);
    result = sign : exp : frac;
    return result;

Library pseudocode for shared/functions/float/vfpexpandimm/VFPExpandImm

// VFPExpandImm()
// ==============

bits(N) VFPExpandImm(bits(8) imm8)
    assert N IN {16,32,64};
    constant integer E = (if N == 16 then 5 elsif N == 32 then 8 else 11);
    constant integer F = (N - E) - 1;
    sign = imm8<7>;
    exp  = NOT(imm8<6>):Replicate(imm8<6>,E-3):imm8<5:4>;
    frac = imm8<3:0>:Zeros(F-4);
    result = sign : exp : frac;
    return result;
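
VFPExpandImm() is the FMOV/VMOV (immediate) decode. An editorial C version of the N == 32 case, with two worked encodings: imm8 == 0x70 expands to 1.0 and imm8 == 0x00 to 2.0.

#include <stdint.h>
#include <string.h>

// Single-precision VFPExpandImm():
//   exp  = NOT(imm8<6>) : Replicate(imm8<6>, 5) : imm8<5:4>
//   frac = imm8<3:0> : Zeros(19)
static float vfp_expand_imm32(uint8_t imm8)
{
    uint32_t sign = (imm8 >> 7) & 1u;
    uint32_t b6   = (imm8 >> 6) & 1u;
    uint32_t exp  = ((b6 ^ 1u) << 7) | ((b6 ? 0x1Fu : 0u) << 2) | ((imm8 >> 4) & 3u);
    uint32_t frac = (uint32_t)(imm8 & 0xFu) << 19;
    uint32_t bits = (sign << 31) | (exp << 23) | frac;
    float f;
    memcpy(&f, &bits, sizeof f);
    return f;
}
// vfp_expand_imm32(0x70) == 1.0f, vfp_expand_imm32(0x00) == 2.0f.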

Library pseudocode for shared/functions/integer/AddWithCarry

// AddWithCarry()
// ==============
// Integer addition with carry input, returning result and NZCV flags

(bits(N), bits(4)) AddWithCarry(bits(N) x, bits(N) y, bit carry_in)
    integer unsigned_sum = UInt(x) + UInt(y) + UInt(carry_in);
    integer signed_sum   = SInt(x) + SInt(y) + UInt(carry_in);
    bits(N) result = unsigned_sum<N-1:0>;    // same value as signed_sum<N-1:0>
    bit n = result<N-1>;
    bit z = if IsZero(result) then '1' else '0';
    bit c = if UInt(result) == unsigned_sum then '0' else '1';
    bit v = if SInt(result) == signed_sum then '0' else '1';
    return (result, n:z:c:v);
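
AddWithCarry() underlies ADDS/ADCS, and subtraction with flags is AddWithCarry(x, NOT(y), '1'). An editorial C model of the 32-bit case, with NZCV packed into the low four bits of *nzcv (the signed reinterpretation assumes two's complement, as on Arm):

#include <stdint.h>

// Returns x + y + carry_in and sets *nzcv = N:Z:C:V (bit 3 = N ... bit 0 = V),
// using the same unsigned/signed overflow comparisons as the pseudocode.
static uint32_t add_with_carry32(uint32_t x, uint32_t y, unsigned carry_in,
                                 unsigned *nzcv)
{
    uint64_t unsigned_sum = (uint64_t)x + (uint64_t)y + carry_in;
    int64_t  signed_sum   = (int64_t)(int32_t)x + (int64_t)(int32_t)y + carry_in;
    uint32_t result = (uint32_t)unsigned_sum;

    unsigned n = result >> 31;
    unsigned z = (result == 0);
    unsigned c = ((uint64_t)result != unsigned_sum);        // unsigned overflow
    unsigned v = ((int64_t)(int32_t)result != signed_sum);  // signed overflow
    *nzcv = (n << 3) | (z << 2) | (c << 1) | v;
    return result;
}
// SUBS x, y: add_with_carry32(x, ~y, 1, &flags).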

Library pseudocode for shared/functions/interrupts/InterruptID

enumeration InterruptID {InterruptID_PMUIRQ, InterruptID_COMMIRQ, InterruptID_CTIIRQ,
                         InterruptID_COMMRX, InterruptID_COMMTX, InterruptID_CNTP,
                         InterruptID_CNTHP, InterruptID_CNTHPS, InterruptID_CNTPS,
                         InterruptID_CNTV, InterruptID_CNTHV, InterruptID_CNTHVS};

Library pseudocode for shared/functions/interrupts/SetInterruptRequestLevel

// Set a level-sensitive interrupt to the specified level.
SetInterruptRequestLevel(InterruptID id, signal level);

Library pseudocode for shared/functions/memory/AArch64.BranchAddr

// AArch64.BranchAddr()
// ====================
// Return the virtual address with tag bits removed for storing to the program counter.

bits(64) AArch64.BranchAddr(bits(64) vaddress)
    assert !UsingAArch32();
    msbit = AddrTop(vaddress, TRUE, PSTATE.EL);
    if msbit == 63 then
        return vaddress;
    elsif (PSTATE.EL IN {EL0, EL1} || IsInHost()) && vaddress<msbit> == '1' then
        return SignExtend(vaddress<msbit:0>);
    else
        return ZeroExtend(vaddress<msbit:0>);
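
When AddrTop() returns 55 (top-byte ignore enabled), AArch64.BranchAddr() replaces the tag byte with copies of bit 55. An editorial C sketch of that EL0/EL1 arm:

#include <stdint.h>

// Canonicalise a 64-bit virtual address whose top byte is a tag (msbit == 55):
// bit 55 is propagated through bits 63:56, as SignExtend/ZeroExtend do in
// AArch64.BranchAddr() for the EL0/EL1 translation regime.
static uint64_t branch_addr_tbi(uint64_t vaddress)
{
    if (vaddress & (1ULL << 55))
        return vaddress | 0xFF00000000000000ULL;   // SignExtend(vaddress<55:0>)
    return vaddress & 0x00FFFFFFFFFFFFFFULL;       // ZeroExtend(vaddress<55:0>)
}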

Library pseudocode for shared/functions/memory/AccType

enumeration AccType {AccType_NORMAL,          // Normal loads and stores
                     AccType_STREAM,          // Streaming loads and stores
                     AccType_VEC,             // Vector loads and stores
                     AccType_VECSTREAM,       // Streaming vector loads and stores
                     AccType_UNPRIVSTREAM,    // Streaming unprivileged loads and stores
                     AccType_A32LSMD,         // Load and store multiple
                     AccType_ATOMIC,          // Atomic loads and stores
                     AccType_ATOMICRW,
                     AccType_ORDERED,         // Load-Acquire and Store-Release
                     AccType_ORDEREDRW,
                     AccType_ORDEREDATOMIC,   // Load-Acquire and Store-Release with atomic access
                     AccType_ORDEREDATOMICRW,
                     AccType_ATOMICLS64,      // Atomic 64-byte loads and stores
                     AccType_LIMITEDORDERED,  // Load-LOAcquire and Store-LORelease
                     AccType_UNPRIV,          // Load and store unprivileged
                     AccType_IFETCH,          // Instruction fetch
                     AccType_TTW,             // Translation table walk
                     AccType_NV2REGISTER,     // MRS/MSR instruction used at EL1 and which is
                                              // converted to a memory access that uses the
                                              // EL2 translation regime
                     // Other operations
                     AccType_DC,              // Data cache maintenance
                     AccType_IC,              // Instruction cache maintenance
                     AccType_DCZVA,           // DC ZVA instructions
                     AccType_ATPAN,           // Address translation with PAN permission checks
                     AccType_AT};             // Address translation

Library pseudocode for shared/functions/memory/AccessDescriptor

type AccessDescriptor is (
    MPAMinfo mpam,
    AccType acctype
)

Library pseudocode for shared/functions/memory/AddrTop

// AddrTop()
// =========
// Return the MSB number of a virtual address in the stage 1 translation regime for "el".
// If EL1 is using AArch64 then addresses from EL0 using AArch32 are zero-extended to 64 bits.

integer AddrTop(bits(64) address, boolean IsInstr, bits(2) el)
    assert HaveEL(el);
    regime = S1TranslationRegime(el);
    if ELUsingAArch32(regime) then
        // AArch32 translation regime.
        return 31;
    else
        if EffectiveTBI(address, IsInstr, el) == '1' then
            return 55;
        else
            return 63;

Library pseudocode for shared/functions/memory/Allocation

constant bits(2) MemHint_No  = '00';    // No Read-Allocate, No Write-Allocate
constant bits(2) MemHint_WA  = '01';    // No Read-Allocate, Write-Allocate
constant bits(2) MemHint_RA  = '10';    // Read-Allocate, No Write-Allocate
constant bits(2) MemHint_RWA = '11';    // Read-Allocate, Write-Allocate

Library pseudocode for shared/functions/memory/BigEndian

// BigEndian()
// ===========

boolean BigEndian(AccType acctype)
    boolean bigend;
    if HaveNV2Ext() && acctype == AccType_NV2REGISTER then
        return SCTLR_EL2.EE == '1';
    if UsingAArch32() then
        bigend = (PSTATE.E != '0');
    elsif PSTATE.EL == EL0 then
        bigend = (SCTLR[].E0E != '0');
    else
        bigend = (SCTLR[].EE != '0');
    return bigend;

Library pseudocode for shared/functions/memory/BigEndianReverse

// BigEndianReverse()
// ==================

bits(width) BigEndianReverse (bits(width) value)
    assert width IN {8, 16, 32, 64, 128};
    integer half = width DIV 2;
    if width == 8 then return value;
    return BigEndianReverse(value<half-1:0>) : BigEndianReverse(value<width-1:half>);
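
The recursive halving in BigEndianReverse() is simply a byte reversal. An editorial C version for widths 8 to 64 (for a whole register this collapses to the REV instruction or __builtin_bswap64):

#include <stdint.h>

// Reverse the bytes of the low 'width' bits of v (width in {8,16,32,64}),
// following the pseudocode: reverse(low half) becomes the high half.
static uint64_t big_endian_reverse(uint64_t v, unsigned width)
{
    if (width == 8) return v;
    unsigned half = width / 2;
    uint64_t mask = (1ULL << half) - 1;    // half is at most 32 here
    return (big_endian_reverse(v & mask, half) << half)
         |  big_endian_reverse(v >> half, half);
}
// big_endian_reverse(0x11223344, 32) == 0x44332211.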

Library pseudocode for shared/functions/memory/Cacheability

constant bits(2) MemAttr_NC = '00';    // Non-cacheable
constant bits(2) MemAttr_WT = '10';    // Write-through
constant bits(2) MemAttr_WB = '11';    // Write-back

Library pseudocode for shared/functions/memory/CreateAccessDescriptor

// CreateAccessDescriptor()
// ========================

AccessDescriptor CreateAccessDescriptor(AccType acctype)
    AccessDescriptor accdesc;
    accdesc.acctype = acctype;
    accdesc.mpam = GenMPAMcurEL(acctype);
    return accdesc;

Library pseudocode for shared/functions/memory/DataMemoryBarrier

DataMemoryBarrier(MBReqDomain domain, MBReqTypes types);

Library pseudocode for shared/functions/memory/DataSynchronizationBarrier

DataSynchronizationBarrier(MBReqDomain domain, MBReqTypes types, boolean nXS);

Library pseudocode for shared/functions/memory/DeviceType

enumeration DeviceType {DeviceType_GRE, DeviceType_nGRE, DeviceType_nGnRE, DeviceType_nGnRnE};

Library pseudocode for shared/functions/memory/EffectiveTBI

// EffectiveTBI()
// ==============
// Returns the effective TBI in the AArch64 stage 1 translation regime for "el".

bit EffectiveTBI(bits(64) address, boolean IsInstr, bits(2) el)
    bit tbi;
    bit tbid;
    assert HaveEL(el);
    regime = S1TranslationRegime(el);
    assert(!ELUsingAArch32(regime));

    case regime of
        when EL1
            tbi = if address<55> == '1' then TCR_EL1.TBI1 else TCR_EL1.TBI0;
            if HavePACExt() then
                tbid = if address<55> == '1' then TCR_EL1.TBID1 else TCR_EL1.TBID0;
        when EL2
            if HaveVirtHostExt() && ELIsInHost(el) then
                tbi = if address<55> == '1' then TCR_EL2.TBI1 else TCR_EL2.TBI0;
                if HavePACExt() then
                    tbid = if address<55> == '1' then TCR_EL2.TBID1 else TCR_EL2.TBID0;
            else
                tbi = TCR_EL2.TBI;
                if HavePACExt() then tbid = TCR_EL2.TBID;
        when EL3
            tbi = TCR_EL3.TBI;
            if HavePACExt() then tbid = TCR_EL3.TBID;

    return (if tbi == '1' && (!HavePACExt() || tbid == '0' || !IsInstr) then '1' else '0');

Library pseudocode for shared/functions/memory/EffectiveTCMA

// EffectiveTCMA()
// ===============
// Returns the effective TCMA of a virtual address in the stage 1 translation regime for "el".

bit EffectiveTCMA(bits(64) address, bits(2) el)
    bit tcma;
    assert HaveEL(el);
    regime = S1TranslationRegime(el);
    assert(!ELUsingAArch32(regime));

    case regime of
        when EL1
            tcma = if address<55> == '1' then TCR_EL1.TCMA1 else TCR_EL1.TCMA0;
        when EL2
            if HaveVirtHostExt() && ELIsInHost(el) then
                tcma = if address<55> == '1' then TCR_EL2.TCMA1 else TCR_EL2.TCMA0;
            else
                tcma = TCR_EL2.TCMA;
        when EL3
            tcma = TCR_EL3.TCMA;

    return tcma;

Library pseudocode for shared/functions/memory/Fault

enumeration Fault {Fault_None, Fault_AccessFlag, Fault_Alignment, Fault_Background,
                   Fault_Domain, Fault_Permission, Fault_Translation, Fault_AddressSize,
                   Fault_SyncExternal, Fault_SyncExternalOnWalk, Fault_SyncParity,
                   Fault_SyncParityOnWalk, Fault_AsyncParity, Fault_AsyncExternal,
                   Fault_Debug, Fault_TLBConflict, Fault_BranchTarget,
                   Fault_HWUpdateAccessFlag, Fault_Lockdown, Fault_Exclusive,
                   Fault_ICacheMaint};

Library pseudocode for shared/functions/memory/FaultRecord

type FaultRecord is (
    Fault statuscode,       // Fault Status
    AccType acctype,        // Type of access that faulted
    FullAddress ipaddress,  // Intermediate physical address
    boolean s2fs1walk,      // Is on a Stage 1 translation table walk
    boolean write,          // TRUE for a write, FALSE for a read
    integer level,          // For translation, access flag and permission faults
    bit extflag,            // IMPLEMENTATION DEFINED syndrome for External aborts
    boolean secondstage,    // Is a Stage 2 abort
    bits(4) domain,         // Domain number, AArch32 only
    bits(2) errortype,      // [Armv8.2 RAS] AArch32 AET or AArch64 SET
    bits(4) debugmoe        // Debug method of entry, from AArch32 only
)

Library pseudocode for shared/functions/memory/FullAddress

type FullAddress is (
    PASpace paspace,
    bits(52) address
)

Library pseudocode for shared/functions/memory/Hint_Prefetch

// Signals the memory system that memory accesses of type HINT to or from the specified address are
// likely in the near future. The memory system may take some action to speed up the memory
// accesses when they do occur, such as pre-loading the specified address into one or more
// caches as indicated by the innermost cache level target (0=L1, 1=L2, etc) and non-temporal hint
// stream. Any or all prefetch hints may be treated as a NOP. A prefetch hint must not cause a
// synchronous abort due to Alignment or Translation faults and the like. Its only effect on
// software-visible state should be on caches and TLBs associated with address, which must be
// accessible by reads, writes or execution, as defined in the translation regime of the current
// Exception level. It is guaranteed not to access Device memory.
// A Prefetch_EXEC hint must not result in an access that could not be performed by a speculative
// instruction fetch, therefore if all associated MMUs are disabled, then it cannot access any
// memory location that cannot be accessed by instruction fetches.
Hint_Prefetch(bits(64) address, PrefetchHint hint, integer target, boolean stream);
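
How the Hint_Prefetch() parameters surface to software is toolchain specific; as one editorial illustration, GCC and Clang expose a read/write flag and a temporal-locality hint rather than an explicit target cache level:

// __builtin_prefetch(addr, rw, locality): rw is 0 for read, 1 for write;
// locality 0..3 is a temporal-locality hint (3 = keep in all cache levels).
// Like Hint_Prefetch(), it may be treated as a NOP and never faults.
static inline void prefetch_read(const void *p)  { __builtin_prefetch(p, 0, 3); }
static inline void prefetch_write(const void *p) { __builtin_prefetch(p, 1, 3); }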

Library pseudocode for shared/functions/memory/MBReqDomain

enumeration MBReqDomain {MBReqDomain_Nonshareable, MBReqDomain_InnerShareable,
                         MBReqDomain_OuterShareable, MBReqDomain_FullSystem};

Library pseudocode for shared/functions/memory/MBReqTypes

enumeration MBReqTypes {MBReqTypes_Reads, MBReqTypes_Writes, MBReqTypes_All};

Library pseudocode for shared/functions/memory/MPAM

type PARTIDtype = bits(16);
type PMGtype = bits(8);

type PARTIDspaceType = bit;
constant PARTIDspaceType PIdSpace_Secure    = '0';
constant PARTIDspaceType PIdSpace_NonSecure = '1';

type MPAMinfo is (
    PARTIDspaceType mpam_ns,
    PARTIDtype partid,
    PMGtype pmg
)

Library pseudocode for shared/functions/memory/MemAttrHints

type MemAttrHints is (
    bits(2) attrs,    // See MemAttr_*, Cacheability attributes
    bits(2) hints,    // See MemHint_*, Allocation hints
    boolean transient
)

Library pseudocode for shared/functions/memory/MemType

enumeration MemType {MemType_Normal, MemType_Device};

Library pseudocode for shared/functions/memory/MemoryAttributes

type MemoryAttributes is (
    MemType memtype,
    DeviceType device,          // For Device memory types
    MemAttrHints inner,         // Inner hints and attributes
    MemAttrHints outer,         // Outer hints and attributes
    Shareability shareability,  // Shareability attribute
    boolean tagged,             // Tagged access
    bit xs                      // XS attribute
)

Library pseudocode for shared/functions/memory/PASpace

enumeration PASpace {PAS_NonSecure, PAS_Secure};

Library pseudocode for shared/functions/memory/Permissions

type Permissions is (
    bits(2) ap_table,    // Stage 1 hierarchical access permissions
    bit xn_table,        // Stage 1 hierarchical execute-never for single EL regimes
    bit pxn_table,       // Stage 1 hierarchical privileged execute-never
    bit uxn_table,       // Stage 1 hierarchical unprivileged execute-never
    bits(3) ap,          // Stage 1 access permissions
    bit xn,              // Stage 1 execute-never for single EL regimes
    bit uxn,             // Stage 1 unprivileged execute-never
    bit pxn,             // Stage 1 privileged execute-never
    bits(2) s2ap,        // Stage 2 access permissions
    bit s2xnx,           // Stage 2 extended execute-never
    bit s2xn             // Stage 2 execute-never
)

Library pseudocode for shared/functions/memory/PhysMemRead

// Returns the value read from memory, and a status.
// Returned value is UNKNOWN if an external abort occurred while reading the
// memory.
// Otherwise the PhysMemRetStatus statuscode is Fault_None.
(PhysMemRetStatus, bits(8*size)) PhysMemRead(AddressDescriptor desc, integer size,
                                             AccessDescriptor accdesc);

Library pseudocode for shared/functions/memory/PhysMemRetStatus

type PhysMemRetStatus is (
    Fault statuscode,         // Fault Status
    bit extflag,              // IMPLEMENTATION DEFINED
                              // syndrome for External aborts
    bits(2) errortype,        // optional error state
                              // returned on a physical
                              // memory access
    bits(64) store64bstatus,  // status of 64B store
    AccType acctype           // Type of access that faulted
)

Library pseudocode for shared/functions/memory/PhysMemWrite

// Writes the value to memory, and returns the status of the write.
// If there is an external abort on the write, the PhysMemRetStatus indicates this.
// Otherwise the statuscode of PhysMemRetStatus is Fault_None.
PhysMemRetStatus PhysMemWrite(AddressDescriptor desc, integer size,
                              AccessDescriptor accdesc, bits(8*size) value);

Library pseudocode for shared/functions/memory/PrefetchHint

enumeration PrefetchHint {Prefetch_READ, Prefetch_WRITE, Prefetch_EXEC};

Library pseudocode for shared/functions/memory/Shareability

enumeration Shareability {Shareability_NSH, Shareability_ISH, Shareability_OSH};

Library pseudocode for shared/functions/memory/SpeculativeStoreBypassBarrierToPA

SpeculativeStoreBypassBarrierToPA();

Library pseudocode for shared/functions/memory/SpeculativeStoreBypassBarrierToVA

SpeculativeStoreBypassBarrierToVA();

Library pseudocode for shared/functions/memory/Tag

constant integer LOG2_TAG_GRANULE = 4;
constant integer TAG_GRANULE = 1 << LOG2_TAG_GRANULE;
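
With LOG2_TAG_GRANULE = 4, one MTE allocation tag covers each naturally aligned 16-byte granule. The usual alignment arithmetic, as an editorial C sketch:

#include <stdint.h>

#define LOG2_TAG_GRANULE 4u
#define TAG_GRANULE (1u << LOG2_TAG_GRANULE)   // 16 bytes

// Start of the tag granule containing addr, and the number of granules
// needed to cover 'size' bytes starting from a granule-aligned base.
static uint64_t granule_base(uint64_t addr)  { return addr & ~(uint64_t)(TAG_GRANULE - 1u); }
static uint64_t granule_count(uint64_t size) { return (size + TAG_GRANULE - 1u) >> LOG2_TAG_GRANULE; }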

Library pseudocode for shared/functions/mpam/DefaultMPAMinfo

// DefaultMPAMinfo()
// =================
// Returns default MPAM info. The partidspace argument sets
// the PARTID space of the default MPAM information returned.

MPAMinfo DefaultMPAMinfo(PARTIDspaceType partidspace)
    MPAMinfo DefaultInfo;
    DefaultInfo.mpam_ns = partidspace;
    DefaultInfo.partid  = DefaultPARTID;
    DefaultInfo.pmg     = DefaultPMG;
    return DefaultInfo;

Library pseudocode for shared/functions/mpam/DefaultPARTID

constant PARTIDtype DefaultPARTID = 0<15:0>;

Library pseudocode for shared/functions/mpam/DefaultPMG

constant PMGtype DefaultPMG = 0<7:0>;

Library pseudocode for shared/functions/mpam/GenMPAMcurEL

// GenMPAMcurEL()
// ==============
// Returns MPAMinfo for the current EL and security state.
// May be called if MPAM is not implemented (but in a version that supports
// MPAM), MPAM is disabled, or in AArch32. In AArch32, convert the mode to
// an EL if possible and use that to drive MPAM information generation. If the
// mode cannot be converted, MPAM is not implemented, or MPAM is disabled,
// return default MPAM information for the current security state.

MPAMinfo GenMPAMcurEL(AccType acctype)
    bits(2) mpamEL;
    boolean validEL = FALSE;
    SecurityState security = if IsSecure() then SS_Secure else SS_NonSecure;
    boolean InD = FALSE;
    PARTIDspaceType pspace = PARTIDspaceFromSS(security);
    if pspace == PIdSpace_NonSecure && !MPAMisEnabled() then
        return DefaultMPAMinfo(pspace);
    if UsingAArch32() then
        (validEL, mpamEL) = ELFromM32(PSTATE.M);
    else
        mpamEL = if acctype == AccType_NV2REGISTER then EL2 else PSTATE.EL;
        validEL = TRUE;
    case acctype of
        when AccType_IFETCH, AccType_IC
            InD = TRUE;
        otherwise
            // Other access types are DATA accesses
            InD = FALSE;
    if !validEL then
        return DefaultMPAMinfo(pspace);
    if HaveEMPAMExt() && security == SS_Secure then
        if MPAM3_EL3.FORCE_NS == '1' then
            pspace = PIdSpace_NonSecure;
        if MPAM3_EL3.SDEFLT == '1' then
            return DefaultMPAMinfo(pspace);
    if !MPAMisEnabled() then
        return DefaultMPAMinfo(pspace);
    else
        return genMPAM(mpamEL, InD, pspace);

Library pseudocode for shared/functions/mpam/MAP_vPARTID

// MAP_vPARTID()
// =============
// Performs conversion of virtual PARTID into physical PARTID
// Contains all of the error checking and implementation
// choices for the conversion.

(PARTIDtype, boolean) MAP_vPARTID(PARTIDtype vpartid)
    // should not ever be called if EL2 is not implemented
    // or is implemented but not enabled in the current
    // security state.
    PARTIDtype ret;
    boolean err;
    integer virt = UInt(vpartid);
    integer vpmrmax = UInt(MPAMIDR_EL1.VPMR_MAX);

    // vpartid_max is largest vpartid supported
    integer vpartid_max = (vpmrmax << 2) + 3;

    // One of many ways to reduce vpartid to value less than vpartid_max.
    if UInt(vpartid) > vpartid_max then
        virt = virt MOD (vpartid_max+1);

    // Check for valid mapping entry.
    if MPAMVPMV_EL2<virt> == '1' then
        // vpartid has a valid mapping so access the map.
        ret = mapvpmw(virt);
        err = FALSE;

    // Is the default virtual PARTID valid?
    elsif MPAMVPMV_EL2<0> == '1' then
        // Yes, so use default mapping for vpartid == 0.
        ret = MPAMVPM0_EL2<0 +: 16>;
        err = FALSE;

    // Neither is valid so use default physical PARTID.
    else
        ret = DefaultPARTID;
        err = TRUE;

    // Check that the physical PARTID is in-range.
    // This physical PARTID came from a virtual mapping entry.
    integer partid_max = UInt(MPAMIDR_EL1.PARTID_MAX);
    if UInt(ret) > partid_max then
        // Out of range, so return default physical PARTID
        ret = DefaultPARTID;
        err = TRUE;
    return (ret, err);
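
The control flow of MAP_vPARTID() can be modelled in software with the mapping registers replaced by a table. An editorial C sketch; the structure fields (valid bitmap, map array, limits) are hypothetical stand-ins for MPAMVPMV_EL2, the MPAMVPMn_EL2 map entries, (VPMR_MAX << 2) + 3 and PARTID_MAX:

#include <stdint.h>
#include <stdbool.h>

typedef struct {
    uint32_t valid;        // models MPAMVPMV_EL2 (one valid bit per vPARTID)
    uint16_t map[32];      // models the MPAMVPMn_EL2 mapping entries
    unsigned vpartid_max;  // models (VPMR_MAX << 2) + 3
    unsigned partid_max;   // models MPAMIDR_EL1.PARTID_MAX
} vpartid_map;

#define DEFAULT_PARTID 0u  // models the DefaultPARTID constant

static uint16_t map_vpartid(const vpartid_map *m, uint16_t vpartid, bool *err)
{
    unsigned virt = vpartid;
    if (virt > m->vpartid_max)                 // reduce into the supported range
        virt %= m->vpartid_max + 1u;

    uint16_t ret;
    if (m->valid & (1u << virt)) {             // valid mapping for this vPARTID
        ret = m->map[virt]; *err = false;
    } else if (m->valid & 1u) {                // fall back to the vPARTID 0 entry
        ret = m->map[0];    *err = false;
    } else {
        ret = DEFAULT_PARTID; *err = true;     // no valid mapping at all
    }
    if (ret > m->partid_max) {                 // mapped PARTID out of range
        ret = DEFAULT_PARTID; *err = true;
    }
    return ret;
}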

Library pseudocode for shared/functions/mpam/MPAMisEnabled

// MPAMisEnabled()
// ===============
// Returns TRUE if MPAMisEnabled.

boolean MPAMisEnabled()
    el = HighestEL();
    case el of
        when EL3 return MPAM3_EL3.MPAMEN == '1';
        when EL2 return MPAM2_EL2.MPAMEN == '1';
        when EL1 return MPAM1_EL1.MPAMEN == '1';

Library pseudocode for shared/functions/mpam/MPAMisVirtual

// MPAMisVirtual()
// ===============
// Returns TRUE if MPAM is configured to be virtual at EL.

boolean MPAMisVirtual(bits(2) el)
    return (MPAMIDR_EL1.HAS_HCR == '1' && EL2Enabled() &&
            ((el == EL0 && MPAMHCR_EL2.EL0_VPMEN == '1' &&
              (HCR_EL2.E2H == '0' || HCR_EL2.TGE == '0')) ||
             (el == EL1 && MPAMHCR_EL2.EL1_VPMEN == '1')));

Library pseudocode for shared/functions/mpam/PARTIDspaceFromSS

// PARTIDspaceFromSS()
// ===================
// Returns the primary PARTID space from the Security State.

PARTIDspaceType PARTIDspaceFromSS(SecurityState security)
    case security of
        when SS_NonSecure
            return PIdSpace_NonSecure;
        when SS_Secure
            return PIdSpace_Secure;
        otherwise
            Unreachable();

Library pseudocode for shared/functions/mpam/genMPAM

// genMPAM()
// =========
// Returns MPAMinfo for exception level el.
// If InD is TRUE returns MPAM information using PARTID_I and PMG_I fields
// of MPAMel_ELx register and otherwise using PARTID_D and PMG_D fields.
// Produces a PARTID in PARTID space pspace.

MPAMinfo genMPAM(bits(2) el, boolean InD, PARTIDspaceType pspace)
    MPAMinfo returninfo;
    PARTIDtype partidel;
    boolean perr;
    // gstplk is TRUE when a guest OS application is locked by the EL2
    // hypervisor to use only the virtual machine's EL1 PARTIDs.
    boolean gstplk = (el == EL0 && EL2Enabled() &&
                      MPAMHCR_EL2.GSTAPP_PLK == '1' &&
                      HCR_EL2.TGE == '0');
    bits(2) eff_el = if gstplk then EL1 else el;
    (partidel, perr) = genPARTID(eff_el, InD);
    PMGtype groupel = genPMG(eff_el, InD, perr);
    returninfo.mpam_ns = pspace;
    returninfo.partid  = partidel;
    returninfo.pmg     = groupel;
    return returninfo;

Library pseudocode for shared/functions/mpam/genMPAMel

// genMPAMel()
// ===========
// Returns MPAMinfo for specified EL in the current security state.
// InD is TRUE for instruction access and FALSE otherwise.

MPAMinfo genMPAMel(bits(2) el, boolean InD)
    SecurityState security = SecurityStateAtEL(el);
    PARTIDspaceType space = PARTIDspaceFromSS(security);
    boolean use_default = !(HaveMPAMExt() && MPAMisEnabled());
    if HaveEMPAMExt() && security == SS_Secure then
        if MPAM3_EL3.FORCE_NS == '1' then
            space = PIdSpace_NonSecure;
        if MPAM3_EL3.SDEFLT == '1' then
            use_default = TRUE;
    if !use_default then
        return genMPAM(el, InD, space);
    else
        return DefaultMPAMinfo(space);

Library pseudocode for shared/functions/mpam/genPARTID

// genPARTID()
// ===========
// Returns physical PARTID and error boolean for exception level el.
// If InD is TRUE then PARTID is from MPAMel_ELx.PARTID_I and
// otherwise from MPAMel_ELx.PARTID_D.

(PARTIDtype, boolean) genPARTID(bits(2) el, boolean InD)
    PARTIDtype partidel = getMPAM_PARTID(el, InD);
    partid_max = MPAMIDR_EL1.PARTID_MAX;
    if UInt(partidel) > UInt(partid_max) then
        return (DefaultPARTID, TRUE);
    if MPAMisVirtual(el) then
        return MAP_vPARTID(partidel);
    else
        return (partidel, FALSE);

Library pseudocode for shared/functions/mpam/PARTIDspaceFromSS

// PARTIDspaceFromSS()
// ===================
// Returns the primary PARTID space from the Security State.

PARTIDspaceType PARTIDspaceFromSS(SecurityState security)
    case security of
        when SS_NonSecure
            return PIdSpace_NonSecure;
        when SS_Secure
            return PIdSpace_Secure;
        otherwise
            Unreachable();

Library pseudocode for shared/functions/mpam/genMPAM

// genMPAM()
// =========
// Returns MPAMinfo for exception level el.
// If InD is TRUE returns MPAM information using PARTID_I and PMG_I fields
// of MPAMel_ELx register and otherwise using PARTID_D and PMG_D fields.
// Produces a PARTID in PARTID space pspace.

MPAMinfo genMPAM(bits(2) el, boolean InD, PARTIDspaceType pspace)
    MPAMinfo returninfo;
    PARTIDtype partidel;
    boolean perr;
    // gstplk is guest OS application locked by the EL2 hypervisor to
    // only use EL1 the virtual machine's PARTIDs.
    boolean gstplk = (el == EL0 && EL2Enabled() &&
                      MPAMHCR_EL2.GSTAPP_PLK == '1' && HCR_EL2.TGE == '0');
    bits(2) eff_el = if gstplk then EL1 else el;
    (partidel, perr) = genPARTID(eff_el, InD);
    PMGtype groupel  = genPMG(eff_el, InD, perr);
    returninfo.mpam_sp = pspace;
    returninfo.partid  = partidel;
    returninfo.pmg     = groupel;
    return returninfo;

Library pseudocode for shared/functions/mpam/genMPAMel

// genMPAMel()
// ===========
// Returns MPAMinfo for specified EL in the current security state.
// InD is TRUE for instruction access and FALSE otherwise.

MPAMinfo genMPAMel(bits(2) el, boolean InD)
    SecurityState security = SecurityStateAtEL(el);
    PARTIDspaceType space = PARTIDspaceFromSS(security);
    boolean use_default = !(HaveMPAMExt() && MPAMisEnabled());
    if HaveEMPAMExt() && security == SS_Secure then
        if MPAM3_EL3.FORCE_NS == '1' then
            space = PIdSpace_NonSecure;
        if MPAM3_EL3.SDEFLT == '1' then
            use_default = TRUE;
    if !use_default then
        return genMPAM(el, InD, space);
    else
        return DefaultMPAMinfo(space);

Library pseudocode for shared/functions/mpam/genPARTID

// genPARTID()
// ===========
// Returns physical PARTID and error boolean for exception level el.
// If InD is TRUE then PARTID is from MPAMel_ELx.PARTID_I and
// otherwise from MPAMel_ELx.PARTID_D.

(PARTIDtype, boolean) genPARTID(bits(2) el, boolean InD)
    PARTIDtype partidel = getMPAM_PARTID(el, InD);
    PARTIDtype partid_max = MPAMIDR_EL1.PARTID_MAX;
    if UInt(partidel) > UInt(partid_max) then
        return (DefaultPARTID, TRUE);
    if MPAMisVirtual(el) then
        return MAP_vPARTID(partidel);
    else
        return (partidel, FALSE);

Library pseudocode for shared/functions/mpam/genPMG

// genPMG()
// ========
// Returns PMG for exception level el and I- or D-side (InD).
// If PARTID generation (genPARTID) encountered an error, genPMG() should be
// called with partid_err as TRUE.

PMGtype genPMG(bits(2) el, boolean InD, boolean partid_err)
    integer pmg_max = UInt(MPAMIDR_EL1.PMG_MAX);
    // It is CONSTRAINED UNPREDICTABLE whether partid_err forces PMG to
    // use the default or if it uses the PMG from getMPAM_PMG.
    if partid_err then
        return DefaultPMG;
    PMGtype groupel = getMPAM_PMG(el, InD);
    if UInt(groupel) <= pmg_max then
        return groupel;
    return DefaultPMG;

Library pseudocode for shared/functions/mpam/getMPAM_PARTID

// getMPAM_PARTID()
// ================
// Returns a PARTID from one of the MPAMn_ELx registers.
// MPAMn selects the MPAMn_ELx register used.
// If InD is TRUE, selects the PARTID_I field of that
// register. Otherwise, selects the PARTID_D field.

PARTIDtype getMPAM_PARTID(bits(2) MPAMn, boolean InD)
    PARTIDtype partid;
    boolean el2avail = EL2Enabled();

    if InD then
        case MPAMn of
            when '11' partid = MPAM3_EL3.PARTID_I;
            when '10' partid = if el2avail then MPAM2_EL2.PARTID_I else Zeros();
            when '01' partid = MPAM1_EL1.PARTID_I;
            when '00' partid = MPAM0_EL1.PARTID_I;
            otherwise partid = PARTIDtype UNKNOWN;
    else
        case MPAMn of
            when '11' partid = MPAM3_EL3.PARTID_D;
            when '10' partid = if el2avail then MPAM2_EL2.PARTID_D else Zeros();
            when '01' partid = MPAM1_EL1.PARTID_D;
            when '00' partid = MPAM0_EL1.PARTID_D;
            otherwise partid = PARTIDtype UNKNOWN;
    return partid;

Library pseudocode for shared/functions/mpam/getMPAM_PMG

// getMPAM_PMG()
// =============
// Returns a PMG from one of the MPAMn_ELx registers.
// MPAMn selects the MPAMn_ELx register used.
// If InD is TRUE, selects the PMG_I field of that
// register. Otherwise, selects the PMG_D field.

PMGtype getMPAM_PMG(bits(2) MPAMn, boolean InD)
    PMGtype pmg;
    boolean el2avail = EL2Enabled();

    if InD then
        case MPAMn of
            when '11' pmg = MPAM3_EL3.PMG_I;
            when '10' pmg = if el2avail then MPAM2_EL2.PMG_I else Zeros();
            when '01' pmg = MPAM1_EL1.PMG_I;
            when '00' pmg = MPAM0_EL1.PMG_I;
            otherwise pmg = PMGtype UNKNOWN;
    else
        case MPAMn of
            when '11' pmg = MPAM3_EL3.PMG_D;
            when '10' pmg = if el2avail then MPAM2_EL2.PMG_D else Zeros();
            when '01' pmg = MPAM1_EL1.PMG_D;
            when '00' pmg = MPAM0_EL1.PMG_D;
            otherwise pmg = PMGtype UNKNOWN;
    return pmg;

Library pseudocode for shared/functions/mpam/mapvpmw

// mapvpmw()
// =========
// Map a virtual PARTID into a physical PARTID using
// the MPAMVPMn_EL2 registers.
// vpartid is now assumed in-range and valid (checked by caller)
// returns physical PARTID from mapping entry.

PARTIDtype mapvpmw(integer vpartid)
    bits(64) vpmw;
    integer wd = vpartid DIV 4;
    case wd of
        when 0 vpmw = MPAMVPM0_EL2;
        when 1 vpmw = MPAMVPM1_EL2;
        when 2 vpmw = MPAMVPM2_EL2;
        when 3 vpmw = MPAMVPM3_EL2;
        when 4 vpmw = MPAMVPM4_EL2;
        when 5 vpmw = MPAMVPM5_EL2;
        when 6 vpmw = MPAMVPM6_EL2;
        when 7 vpmw = MPAMVPM7_EL2;
        otherwise vpmw = Zeros(64);
    // vpme_lsb selects LSB of field within register
    integer vpme_lsb = (vpartid MOD 4) * 16;
    return vpmw<vpme_lsb +: 16>;
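
The word and field arithmetic is the part worth double-checking: vpartid DIV 4 picks one of the eight mapping registers, and vpartid MOD 4 picks one of the four 16-bit entries inside it. A minimal C equivalent, with vpm[] as a hypothetical stand-in for MPAMVPM0_EL2..MPAMVPM7_EL2:

#include <stdint.h>

/* vpm[] is a hypothetical flat view of MPAMVPM0_EL2..MPAMVPM7_EL2. */
static uint16_t mapvpmw(const uint64_t vpm[8], unsigned vpartid)
{
    unsigned wd  = vpartid / 4;        /* which 64-bit mapping register */
    unsigned lsb = (vpartid % 4) * 16; /* LSB of the 16-bit entry       */
    return (uint16_t)(vpm[wd] >> lsb); /* extract vpm[wd]<lsb+15:lsb>   */
}

For example, vpartid 6 selects word 1 and bits <47:32>, that is, the third 16-bit entry of MPAMVPM1_EL2.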

Library pseudocode for shared/functions/predictionrestrict/ASID

// ASID[]
// ======
// Effective ASID.

bits(16) ASID[]
    if EL2Enabled() && !ELUsingAArch32(EL2) && HCR_EL2.<E2H, TGE> == '11' then
        if TCR_EL2.A1 == '1' then
            return TTBR1_EL2.ASID;
        else
            return TTBR0_EL2.ASID;
    elsif !ELUsingAArch32(EL1) then
        if TCR_EL1.A1 == '1' then
            return TTBR1_EL1.ASID;
        else
            return TTBR0_EL1.ASID;
    else
        if TTBCR.EAE == '0' then
            return ZeroExtend(CONTEXTIDR.ASID, 16);
        else
            if TTBCR.A1 == '1' then
                return ZeroExtend(TTBR1.ASID, 16);
            else
                return ZeroExtend(TTBR0.ASID, 16);
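
The accessor first selects a translation regime (EL2&0 when E2H and TGE are both set, otherwise EL1&0) and then selects between the two TTBRs of that regime using TCR_ELx.A1. A reduced C sketch of the AArch64 paths only (the AArch32 TTBCR cases are omitted), with AsidCtx a hypothetical snapshot of the controls involved:

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical snapshot of the controls ASID[] consults. */
typedef struct {
    bool el2_enabled, el2_aarch64, e2h, tge;  /* regime selection */
    bool a1_el2, a1_el1;                      /* TCR_ELx.A1       */
    uint16_t ttbr0_el2_asid, ttbr1_el2_asid;
    uint16_t ttbr0_el1_asid, ttbr1_el1_asid;
} AsidCtx;

/* AArch64-only sketch of the ASID[] accessor. */
static uint16_t effective_asid(const AsidCtx *c)
{
    if (c->el2_enabled && c->el2_aarch64 && c->e2h && c->tge)
        return c->a1_el2 ? c->ttbr1_el2_asid : c->ttbr0_el2_asid;
    return c->a1_el1 ? c->ttbr1_el1_asid : c->ttbr0_el1_asid;
}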

Library pseudocode for shared/functions/predictionrestrict/ExecutionCntxt

type ExecutionCntxt is (
    boolean is_vmid_valid,    // is vmid valid for current context
    boolean all_vmid,         // should the operation be applied for all vmids
    bits(16) vmid,            // if all_vmid = FALSE, vmid to which operation is applied
    boolean is_asid_valid,    // is asid valid for current context
    boolean all_asid,         // should the operation be applied for all asids
    bits(16) asid,            // if all_asid = FALSE, ASID to which operation is applied
    bits(2) target_el,        // target EL at which operation is performed
    SecurityState security,
    RestrictType restriction  // type of restriction operation
)

Library pseudocode for shared/functions/predictionrestrict/RESTRICT_PREDICTIONS

// RESTRICT_PREDICTIONS()
// ======================
// Clear all speculated values.

RESTRICT_PREDICTIONS(ExecutionCntxt c)
    IMPLEMENTATION_DEFINED;

Library pseudocode for shared/functions/predictionrestrict/RestrictType

enumeration RestrictType {
    RestrictType_DataValue,
    RestrictType_ControlFlow,
    RestrictType_CachePrefetch
};

Library pseudocode for shared/functions/predictionrestrict/TargetSecurityState

// TargetSecurityState()
// =====================
// Decode the target security state for the prediction context.

SecurityState TargetSecurityState(bit NS)
    curr_ss = SecurityStateAtEL(PSTATE.EL);
    if curr_ss == SS_NonSecure then
        return SS_NonSecure;
    elsif curr_ss == SS_Secure then
        case NS of
            when '0' return SS_Secure;
            when '1' return SS_NonSecure;

Library pseudocode for shared/functions/registers/BranchTo

// BranchTo()
// ==========
// Set program counter to a new address, with a branch type.
// Parameter branch_conditional indicates whether the executed branch has a conditional encoding.
// In AArch64 state the address might include a tag in the top eight bits.

BranchTo(bits(N) target, BranchType branch_type, boolean branch_conditional)
    Hint_Branch(branch_type);
    if N == 32 then
        assert UsingAArch32();
        _PC = ZeroExtend(target);
    else
        assert N == 64 && !UsingAArch32();
        bits(64) target_vaddress = AArch64.BranchAddr(target<63:0>);
        _PC = target_vaddress;
    return;

Library pseudocode for shared/functions/registers/BranchToAddr

// BranchToAddr()
// ==============
// Set program counter to a new address, with a branch type.
// In AArch64 state the address does not include a tag in the top eight bits.

BranchToAddr(bits(N) target, BranchType branch_type)
    Hint_Branch(branch_type);
    if N == 32 then
        assert UsingAArch32();
        _PC = ZeroExtend(target);
    else
        assert N == 64 && !UsingAArch32();
        _PC = target<63:0>;
    return;

Library pseudocode for shared/functions/registers/BranchType

enumeration BranchType {
    BranchType_DIRCALL,    // Direct Branch with link
    BranchType_INDCALL,    // Indirect Branch with link
    BranchType_ERET,       // Exception return (indirect)
    BranchType_DBGEXIT,    // Exit from Debug state
    BranchType_RET,        // Indirect branch with function return hint
    BranchType_DIR,        // Direct branch
    BranchType_INDIR,      // Indirect branch
    BranchType_EXCEPTION,  // Exception entry
    BranchType_RESET,      // Reset
    BranchType_UNKNOWN};   // Other

Library pseudocode for shared/functions/registers/Hint_Branch

// Report the hint passed to BranchTo() and BranchToAddr(), for consideration when processing
// the next instruction.

Hint_Branch(BranchType hint);

Library pseudocode for shared/functions/registers/NextInstrAddr

// Return address of the sequentially next instruction.

bits(N) NextInstrAddr();

Library pseudocode for shared/functions/registers/ResetExternalDebugRegisters

// Reset the External Debug registers in the Core power domain.

ResetExternalDebugRegisters(boolean cold_reset);

Library pseudocode for shared/functions/registers/ThisInstrAddr

// ThisInstrAddr()
// ===============
// Return address of the current instruction.

bits(N) ThisInstrAddr()
    assert N == 64 || (N == 32 && UsingAArch32());
    return _PC<N-1:0>;

Library pseudocode for shared/functions/registers/_PC

bits(64) _PC;

Library pseudocode for shared/functions/registers/_R

array bits(64) _R[0..30];

Library pseudocode for shared/functions/registers/_V

array bits(128) _V[0..31];

Library pseudocode for shared/functions/sysregisters/SPSR

// SPSR[] - non-assignment form
// ============================

bits(N) SPSR[]
    bits(N) result;
    if UsingAArch32() then
        assert N == 32;
        case PSTATE.M of
            when M32_FIQ      result = SPSR_fiq<N-1:0>;
            when M32_IRQ      result = SPSR_irq<N-1:0>;
            when M32_Svc      result = SPSR_svc<N-1:0>;
            when M32_Monitor  result = SPSR_mon<N-1:0>;
            when M32_Abort    result = SPSR_abt<N-1:0>;
            when M32_Hyp      result = SPSR_hyp<N-1:0>;
            when M32_Undef    result = SPSR_und<N-1:0>;
            otherwise         Unreachable();
    else
        assert N == 64;
        case PSTATE.EL of
            when EL1          result = SPSR_EL1<N-1:0>;
            when EL2          result = SPSR_EL2<N-1:0>;
            when EL3          result = SPSR_EL3<N-1:0>;
            otherwise         Unreachable();
    return result;

// SPSR[] - assignment form
// ========================

SPSR[] = bits(N) value
    if UsingAArch32() then
        assert N == 32;
        case PSTATE.M of
            when M32_FIQ      SPSR_fiq = ZeroExtend(value);
            when M32_IRQ      SPSR_irq = ZeroExtend(value);
            when M32_Svc      SPSR_svc = ZeroExtend(value);
            when M32_Monitor  SPSR_mon = ZeroExtend(value);
            when M32_Abort    SPSR_abt = ZeroExtend(value);
            when M32_Hyp      SPSR_hyp = ZeroExtend(value);
            when M32_Undef    SPSR_und = ZeroExtend(value);
            otherwise         Unreachable();
    else
        assert N == 64;
        case PSTATE.EL of
            when EL1          SPSR_EL1 = ZeroExtend(value);
            when EL2          SPSR_EL2 = ZeroExtend(value);
            when EL3          SPSR_EL3 = ZeroExtend(value);
            otherwise         Unreachable();
    return;
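
In both arms the accessor is a pure bank selection: AArch32 banks the SPSR by mode, AArch64 by Exception level. A C sketch of the AArch64 arm, using a hypothetical SpsrBank structure in place of SPSR_EL1..SPSR_EL3:

#include <stddef.h>
#include <stdint.h>

typedef enum { EL0, EL1, EL2, EL3 } ExceptionLevel;

/* Hypothetical bank of SPSR_EL1..SPSR_EL3. */
typedef struct { uint64_t spsr_el1, spsr_el2, spsr_el3; } SpsrBank;

/* Select the SPSR banked by the current Exception level
 * (EL0 has no SPSR, matching Unreachable() in the pseudocode). */
static uint64_t *spsr_for_el(SpsrBank *b, ExceptionLevel el)
{
    switch (el) {
    case EL1: return &b->spsr_el1;
    case EL2: return &b->spsr_el2;
    case EL3: return &b->spsr_el3;
    default:  return NULL;
    }
}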

Library pseudocode for shared/functions/system/ArchVersion

enumeration ArchVersion {
    ARMv8p0, ARMv8p1, ARMv8p2, ARMv8p3, ARMv8p4, ARMv8p5, ARMv8p6, ARMv8p7, ARMv8p8
};

Library pseudocode for shared/functions/system/BranchTargetCheck

// BranchTargetCheck()
// ===================
// This function checks whether the current instruction is a valid target for a branch
// taken into, or inside, a guarded page. It is executed on every cycle once the current
// instruction has been decoded and the values of InGuardedPage and BTypeCompatible have been
// determined for the current instruction.

BranchTargetCheck()
    assert HaveBTIExt() && !UsingAArch32();

    // The branch target check considers two state variables:
    // * InGuardedPage, which is evaluated during instruction fetch.
    // * BTypeCompatible, which is evaluated during instruction decode.
    if InGuardedPage && PSTATE.BTYPE != '00' && !BTypeCompatible && !Halted() then
        bits(64) pc = ThisInstrAddr();
        AArch64.BranchTargetException(pc<51:0>);

    boolean branch_instr = AArch64.ExecutingBROrBLROrRetInstr();
    boolean bti_instr    = AArch64.ExecutingBTIInstr();

    // PSTATE.BTYPE defaults to 00 for instructions that do not explicitly set BTYPE.
    if !(branch_instr || bti_instr) then
        BTypeNext = '00';

Library pseudocode for shared/functions/system/ClearEventRegister

// ClearEventRegister()
// ====================
// Clear the Event Register of this PE.

ClearEventRegister()
    EventRegister = '0';
    return;

Library pseudocode for shared/functions/system/ClearPendingPhysicalSError

// Clear a pending physical SError interrupt.

ClearPendingPhysicalSError();

Library pseudocode for shared/functions/system/ClearPendingVirtualSError

// Clear a pending virtual SError interrupt.

ClearPendingVirtualSError();

Library pseudocode for shared/functions/system/ConditionHolds

// ConditionHolds()
// ================
// Return TRUE iff COND currently holds

boolean ConditionHolds(bits(4) cond)
    // Evaluate base condition.
    boolean result;
    case cond<3:1> of
        when '000' result = (PSTATE.Z == '1');                          // EQ or NE
        when '001' result = (PSTATE.C == '1');                          // CS or CC
        when '010' result = (PSTATE.N == '1');                          // MI or PL
        when '011' result = (PSTATE.V == '1');                          // VS or VC
        when '100' result = (PSTATE.C == '1' && PSTATE.Z == '0');       // HI or LS
        when '101' result = (PSTATE.N == PSTATE.V);                     // GE or LT
        when '110' result = (PSTATE.N == PSTATE.V && PSTATE.Z == '0');  // GT or LE
        when '111' result = TRUE;                                       // AL

    // Condition flag values in the set '111x' indicate always true
    // Otherwise, invert condition if necessary.
    if cond<0> == '1' && cond != '1111' then
        result = !result;
    return result;
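
Since the encoding is compact (bits<3:1> select a base test, bit<0> optionally inverts it, and '1111' is also treated as always), a direct C transcription is short. This is a non-normative sketch; Nzcv is a hypothetical flag container:

#include <stdbool.h>
#include <stdint.h>

/* NZCV flags as individual booleans. */
typedef struct { bool n, z, c, v; } Nzcv;

/* Mirrors ConditionHolds(): cond<3:1> choose the base test,
 * cond<0> inverts it, except for cond '1111' (also "always"). */
static bool condition_holds(uint8_t cond, Nzcv f)
{
    bool result;
    switch ((cond >> 1) & 7) {
    case 0: result = f.z; break;                 /* EQ or NE */
    case 1: result = f.c; break;                 /* CS or CC */
    case 2: result = f.n; break;                 /* MI or PL */
    case 3: result = f.v; break;                 /* VS or VC */
    case 4: result = f.c && !f.z; break;         /* HI or LS */
    case 5: result = f.n == f.v; break;          /* GE or LT */
    case 6: result = f.n == f.v && !f.z; break;  /* GT or LE */
    default: result = true; break;               /* AL       */
    }
    if ((cond & 1) && cond != 0xF)
        result = !result;
    return result;
}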

Library pseudocode for shared/functions/system/ConsumptionOfSpeculativeDataBarrier

ConsumptionOfSpeculativeDataBarrier();

Library pseudocode for shared/functions/system/CurrentInstrSet

// CurrentInstrSet()
// =================

InstrSet CurrentInstrSet()
    InstrSet result;
    if UsingAArch32() then
        result = if PSTATE.T == '0' then InstrSet_A32 else InstrSet_T32;
        // PSTATE.J is RES0. Implementation of T32EE or Jazelle state not permitted.
    else
        result = InstrSet_A64;
    return result;

Library pseudocode for shared/functions/system/CurrentPL

// CurrentPL()
// ===========

PrivilegeLevel CurrentPL()
    return PLOfEL(PSTATE.EL);

Library pseudocode for shared/functions/system/CurrentSecurityState

// CurrentSecurityState()
// ======================
// Returns the effective security state at the exception level based off current settings.

SecurityState CurrentSecurityState()
    return SecurityStateAtEL(PSTATE.EL);

Library pseudocode for shared/functions/system/DSBAlias

enumeration DSBAlias {DSBAlias_SSBB, DSBAlias_PSSBB, DSBAlias_DSB};

Library pseudocode for shared/functions/system/EL0

constant bits(2) EL3 = '11';
constant bits(2) EL2 = '10';
constant bits(2) EL1 = '01';
constant bits(2) EL0 = '00';

Library pseudocode for shared/functions/system/EL2Enabled

// EL2Enabled()
// ============
// Returns TRUE if EL2 is present and executing
// - with the PE in Non-secure state when Non-secure EL2 is implemented, or
// - with the PE in Secure state when Secure EL2 is implemented and enabled, or
// - when EL3 is not implemented.

boolean EL2Enabled()
    return HaveEL(EL2) && (!HaveEL(EL3) || SCR_GEN[].NS == '1' || IsSecureEL2Enabled());

Library pseudocode for shared/functions/system/EL3SDDTrapPriority

// EL3SDDTrapPriority()
// ====================
// Returns TRUE if in Debug state, EDSCR.SDD is set, and an EL3 trap by an
// EL3 control register has the priority of the original trap exception.

boolean EL3SDDTrapPriority()
    return (Halted() && EDSCR.SDD == '1' &&
            boolean IMPLEMENTATION_DEFINED "EL3 trap priority when SDD == '1'");

Library pseudocode for shared/functions/system/ELFromM32

// ELFromM32()
// ===========

(boolean,bits(2)) ELFromM32(bits(5) mode)
    // Convert an AArch32 mode encoding to an Exception level.
    // Returns (valid,EL):
    //   'valid' is TRUE if 'mode<4:0>' encodes a mode that is both valid for this implementation
    //           and the current value of SCR.NS/SCR_EL3.NS.
    //   'EL'    is the Exception level decoded from 'mode'.
    bits(2) el;
    boolean valid = !BadMode(mode);  // Check for modes that are not valid for this implementation

    case mode of
        when M32_Monitor
            el = EL3;
        when M32_Hyp
            el = EL2;
            valid = valid && (!HaveEL(EL3) || SCR_GEN[].NS == '1');
        when M32_FIQ, M32_IRQ, M32_Svc, M32_Abort, M32_Undef, M32_System
            // If EL3 is implemented and using AArch32, then these modes are EL3 modes in Secure
            // state, and EL1 modes in Non-secure state. If EL3 is not implemented or is using
            // AArch64, then these modes are EL1 modes.
            el = (if HaveEL(EL3) && !HaveAArch64() && SCR.NS == '0' then EL3 else EL1);
        when M32_User
            el = EL0;
        otherwise
            valid = FALSE;           // Passed an illegal mode value

    if !valid then el = bits(2) UNKNOWN;
    return (valid, el);

Library pseudocode for shared/functions/system/ELFromSPSR

// ELFromSPSR()
// ============
// Convert an SPSR value encoding to an Exception level.
// Returns (valid,EL):
//   'valid' is TRUE if 'spsr<4:0>' encodes a valid mode for the current state.
//   'EL'    is the Exception level decoded from 'spsr'.

(boolean,bits(2)) ELFromSPSR(bits(N) spsr)
    bits(2) el;
    boolean valid;
    if spsr<4> == '0' then                  // AArch64 state
        el = spsr<3:2>;
        if !HaveAArch64() then
            valid = FALSE;                  // No AArch64 support
        elsif !HaveEL(el) then
            valid = FALSE;                  // Exception level not implemented
        elsif spsr<1> == '1' then
            valid = FALSE;                  // M[1] must be 0
        elsif el == EL0 && spsr<0> == '1' then
            valid = FALSE;                  // for EL0, M[0] must be 0
        elsif el == EL2 && HaveEL(EL3) && !IsSecureEL2Enabled() && SCR_EL3.NS == '0' then
            valid = FALSE;                  // Unless Secure EL2 is enabled, EL2 valid only in Non-secure state
        else
            valid = TRUE;
    elsif HaveAArch32() then                // AArch32 state
        (valid, el) = ELFromM32(spsr<4:0>);
    else
        valid = FALSE;

    if !valid then el = bits(2) UNKNOWN;
    return (valid,el);
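
A reduced C sketch of the AArch64 arm of this decode, assuming only a caller-supplied have_el() predicate in place of the HaveEL/HaveAArch64/Secure-EL2 checks (which are implementation- and configuration-dependent):

#include <stdbool.h>
#include <stdint.h>

/* Sketch of the AArch64 arm of ELFromSPSR(): spsr<4>==0 selects an
 * AArch64 mode, spsr<3:2> is the EL, spsr<1> must be 0, and spsr<0>
 * (the SP selection bit) must be 0 for EL0. */
static bool el_from_spsr_a64(uint32_t spsr, bool (*have_el)(unsigned),
                             unsigned *el_out)
{
    if (spsr & 0x10)            /* spsr<4>==1: AArch32 mode, not handled here */
        return false;
    unsigned el = (spsr >> 2) & 3;
    if (!have_el(el))           /* target Exception level not implemented */
        return false;
    if (spsr & 2)               /* M[1] must be 0 */
        return false;
    if (el == 0 && (spsr & 1))  /* for EL0, M[0] must be 0 */
        return false;
    *el_out = el;
    return true;
}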

Library pseudocode for shared/functions/system/ELIsInHost

// ELIsInHost()
// ============

boolean ELIsInHost(bits(2) el)
    if !HaveVirtHostExt() || ELUsingAArch32(EL2) then
        return FALSE;
    case el of
        when EL3
            return FALSE;
        when EL2
            return EL2Enabled() && HCR_EL2.E2H == '1';
        when EL1
            return FALSE;
        when EL0
            return EL2Enabled() && HCR_EL2.<E2H,TGE> == '11';
        otherwise
            Unreachable();

Library pseudocode for shared/functions/system/ELStateUsingAArch32

// ELStateUsingAArch32()
// =====================

boolean ELStateUsingAArch32(bits(2) el, boolean secure)
    // See ELStateUsingAArch32K() for description. Must only be called in circumstances where
    // result is valid (typically, that means 'el IN {EL1,EL2,EL3}').
    (known, aarch32) = ELStateUsingAArch32K(el, secure);
    assert known;
    return aarch32;

Library pseudocode for shared/functions/system/ELStateUsingAArch32K

// ELStateUsingAArch32K()
// ======================

(boolean,boolean) ELStateUsingAArch32K(bits(2) el, boolean secure)
    // Returns (known, aarch32):
    //   'known'   is FALSE for EL0 if the current Exception level is not EL0 and EL1 is
    //             using AArch64, since it cannot determine the state of EL0; TRUE otherwise.
    //   'aarch32' is TRUE if the specified Exception level is using AArch32; FALSE otherwise.
    if !HaveAArch32EL(el) then
        return (TRUE, FALSE);   // Exception level is using AArch64
    elsif secure && el == EL2 then
        return (TRUE, FALSE);   // Secure EL2 is using AArch64
    elsif !HaveAArch64() then
        return (TRUE, TRUE);    // Highest Exception level, and therefore all levels are using AArch32

    // Remainder of function deals with the interprocessing cases when highest Exception level is using AArch64
    boolean aarch32 = boolean UNKNOWN;
    boolean known = TRUE;

    aarch32_below_el3 = HaveEL(EL3) && SCR_EL3.RW == '0' &&
                        (!secure || !HaveSecureEL2Ext() || SCR_EL3.EEL2 == '0');
    aarch32_at_el1 = (aarch32_below_el3 ||
                      (HaveEL(EL2) && ((HaveSecureEL2Ext() && SCR_EL3.EEL2 == '1') || !secure) &&
                       HCR_EL2.RW == '0' && !(HCR_EL2.E2H == '1' && HCR_EL2.TGE == '1' &&
                                              HaveVirtHostExt())));
    if el == EL0 && !aarch32_at_el1 then      // Only know if EL0 using AArch32 from PSTATE
        if PSTATE.EL == EL0 then
            aarch32 = PSTATE.nRW == '1';      // EL0 controlled by PSTATE
        else
            known = FALSE;                    // EL0 state is UNKNOWN
    else
        aarch32 = (aarch32_below_el3 && el != EL3) || (aarch32_at_el1 && el IN {EL1,EL0});

    if !known then aarch32 = boolean UNKNOWN;
    return (known, aarch32);

Library pseudocode for shared/functions/system/ELUsingAArch32

// ELUsingAArch32()
// ================

boolean ELUsingAArch32(bits(2) el)
    return ELStateUsingAArch32(el, IsSecureBelowEL3());

Library pseudocode for shared/functions/system/ELUsingAArch32K

// ELUsingAArch32K()
// =================

(boolean,boolean) ELUsingAArch32K(bits(2) el)
    return ELStateUsingAArch32K(el, IsSecureBelowEL3());

Library pseudocode for shared/functions/system/EffectiveTGE

// EffectiveTGE()
// ==============
// Returns effective TGE value

bit EffectiveTGE()
    if EL2Enabled() then
        return if ELUsingAArch32(EL2) then HCR.TGE else HCR_EL2.TGE;
    else
        return '0';  // Effective value of TGE is zero

Library pseudocode for shared/functions/system/EndOfInstruction

// Terminate processing of the current instruction.

EndOfInstruction();

Library pseudocode for shared/functions/system/EnterLowPowerState

// PE enters a low-power state.

EnterLowPowerState();

Library pseudocode for shared/functions/system/EventRegister

bits(1) EventRegister;

Library pseudocode for shared/functions/system/ExceptionalOccurrenceTargetState

enumeration ExceptionalOccurrenceTargetState {
    AArch32_NonDebugState,
    AArch64_NonDebugState,
    DebugState
};

Library pseudocode for shared/functions/system/FIQPending

// Returns a tuple indicating if there is any pending physical FIQ
// and if the pending FIQ has superpriority.

(boolean, boolean) FIQPending();

Library pseudocode for shared/functions/system/GetAccumulatedFPExceptions

// Returns FP exceptions accumulated by the PE.

bits(8) GetAccumulatedFPExceptions();

Library pseudocode for shared/functions/system/GetPSRFromPSTATE

// GetPSRFromPSTATE()
// ==================
// Return a PSR value which represents the current PSTATE

bits(N) GetPSRFromPSTATE(ExceptionalOccurrenceTargetState targetELState)
    if UsingAArch32() && (targetELState IN {AArch32_NonDebugState, DebugState}) then
        assert N == 32;
    else
        assert N == 64;
    bits(N) spsr = Zeros();
    spsr<31:28> = PSTATE.<N,Z,C,V>;
    if HavePANExt() then spsr<22> = PSTATE.PAN;
    spsr<20>    = PSTATE.IL;
    if PSTATE.nRW == '1' then  // AArch32 state
        spsr<27>    = PSTATE.Q;
        spsr<26:25> = PSTATE.IT<1:0>;
        if HaveSSBSExt() then spsr<23> = PSTATE.SSBS;
        if HaveDITExt() then
            if targetELState == AArch32_NonDebugState then
                spsr<21> = PSTATE.DIT;
            else  // AArch64_NonDebugState or DebugState
                spsr<24> = PSTATE.DIT;
        if targetELState IN {AArch64_NonDebugState, DebugState} then
            spsr<21> = PSTATE.SS;
        spsr<19:16> = PSTATE.GE;
        spsr<15:10> = PSTATE.IT<7:2>;
        spsr<9>     = PSTATE.E;
        spsr<8:6>   = PSTATE.<A,I,F>;       // No PSTATE.D in AArch32 state
        spsr<5>     = PSTATE.T;
        assert PSTATE.M<4> == PSTATE.nRW;   // bit [4] is the discriminator
        spsr<4:0>   = PSTATE.M;
    else  // AArch64 state
        if HaveMTEExt() then spsr<25> = PSTATE.TCO;
        if HaveDITExt() then spsr<24> = PSTATE.DIT;
        if HaveUAOExt() then spsr<23> = PSTATE.UAO;
        spsr<21>    = PSTATE.SS;
        if HaveFeatNMI() then spsr<13> = PSTATE.ALLINT;
        if HaveSSBSExt() then spsr<12> = PSTATE.SSBS;
        if HaveBTIExt() then spsr<11:10> = PSTATE.BTYPE;
        spsr<9:6>   = PSTATE.<D,A,I,F>;
        spsr<4>     = PSTATE.nRW;
        spsr<3:2>   = PSTATE.EL;
        spsr<0>     = PSTATE.SP;
    return spsr;
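
The AArch64 arm is plain bit packing. The following C sketch packs only the always-present fields (NZCV, the DAIF masks, EL and SP); the extension-gated fields (PAN, DIT, TCO, BTYPE and so on) are omitted. Pstate is a hypothetical subset of PSTATE:

#include <stdint.h>

/* Hypothetical subset of PSTATE, enough for the AArch64 arm. */
typedef struct {
    unsigned n, z, c, v;   /* flags, one bit each */
    unsigned d, a, i, f;   /* interrupt masks     */
    unsigned el;           /* current EL, 2 bits  */
    unsigned sp;           /* SP selection, 1 bit */
} Pstate;

static uint64_t psr_from_pstate(const Pstate *p)
{
    uint64_t spsr = 0;
    spsr |= (uint64_t)p->n << 31 | (uint64_t)p->z << 30 |
            (uint64_t)p->c << 29 | (uint64_t)p->v << 28;
    spsr |= (uint64_t)p->d << 9 | (uint64_t)p->a << 8 |
            (uint64_t)p->i << 7 | (uint64_t)p->f << 6;
    spsr |= (uint64_t)(p->el & 3) << 2;  /* M[3:2] = EL */
    spsr |= p->sp & 1;                   /* M[0]   = SP */
    return spsr;                         /* M[4] (nRW) left 0: AArch64 */
}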

Library pseudocode for shared/functions/system/HasArchVersion

// HasArchVersion()
// ================
// Returns TRUE if the implemented architecture includes the extensions defined in the specified
// architecture version.

boolean HasArchVersion(ArchVersion version)
    return version == ARMv8p0 || boolean IMPLEMENTATION_DEFINED;

Library pseudocode for shared/functions/system/HaveAArch32

// HaveAArch32()
// =============
// Return TRUE if AArch32 state is supported at at least EL0.

boolean HaveAArch32()
    return boolean IMPLEMENTATION_DEFINED "AArch32 state is supported at at least EL0";

Library pseudocode for shared/functions/system/HaveAArch32EL

// HaveAArch32EL()
// ===============

boolean HaveAArch32EL(bits(2) el)
    // Return TRUE if Exception level 'el' supports AArch32 in this implementation
    if !HaveEL(el) then
        return FALSE;          // The Exception level is not implemented
    elsif !HaveAArch32() then
        return FALSE;          // No Exception level can use AArch32
    elsif !HaveAArch64() then
        return TRUE;           // All Exception levels are using AArch32
    elsif el == HighestEL() then
        return FALSE;          // The highest Exception level is using AArch64
    elsif el == EL0 then
        return TRUE;           // EL0 must support using AArch32 if any AArch32
    return boolean IMPLEMENTATION_DEFINED;
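
The rules form a strict ladder: each test settles the answer before the IMPLEMENTATION_DEFINED fallback is reached. A C rendering of the same ladder over a hypothetical A32Query of the implementation choices:

#include <stdbool.h>

/* Hypothetical inputs to the HaveAArch32EL() decision. */
typedef struct {
    bool have_el;        /* HaveEL(el)                              */
    bool have_aarch32;   /* any AArch32 support at all              */
    bool have_aarch64;   /* highest EL uses AArch64                 */
    bool is_highest_el;  /* el == HighestEL()                       */
    bool is_el0;
    bool impdef_choice;  /* the final IMPLEMENTATION_DEFINED answer */
} A32Query;

static bool have_aarch32_el(A32Query q)
{
    if (!q.have_el)      return false;  /* level not implemented      */
    if (!q.have_aarch32) return false;  /* no level can use AArch32   */
    if (!q.have_aarch64) return true;   /* every level uses AArch32   */
    if (q.is_highest_el) return false;  /* highest level uses AArch64 */
    if (q.is_el0)        return true;   /* EL0 must support AArch32   */
    return q.impdef_choice;
}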

Library pseudocode for shared/functions/system/HaveAArch64

// HaveAArch64()
// =============
// Return TRUE if the highest Exception level is using AArch64 state.

boolean HaveAArch64()
    return boolean IMPLEMENTATION_DEFINED "Highest EL using AArch64";

Library pseudocode for shared/functions/system/HaveEL

// HaveEL()
// ========
// Return TRUE if Exception level 'el' is supported

boolean HaveEL(bits(2) el)
    if el IN {EL1,EL0} then
        return TRUE;             // EL1 and EL0 must exist
    return boolean IMPLEMENTATION_DEFINED;

Library pseudocode for shared/functions/system/HaveELUsingSecurityState

// HaveELUsingSecurityState()
// ==========================
// Returns TRUE if Exception level 'el' with Security state 'secure' is supported,
// FALSE otherwise.

boolean HaveELUsingSecurityState(bits(2) el, boolean secure)
    case el of
        when EL3
            assert secure;
            return HaveEL(EL3);
        when EL2
            if secure then
                return HaveEL(EL2) && HaveSecureEL2Ext();
            else
                return HaveEL(EL2);
        otherwise
            return (HaveEL(EL3) ||
                    (secure == boolean IMPLEMENTATION_DEFINED "Secure-only implementation"));

Library pseudocode for shared/functions/system/HaveFP16Ext

// HaveFP16Ext()
// =============
// Return TRUE if FP16 extension is supported

boolean HaveFP16Ext()
    return boolean IMPLEMENTATION_DEFINED;

Library pseudocode for shared/functions/system/HighestEL

// HighestEL()
// ===========
// Returns the highest implemented Exception level.

bits(2) HighestEL()
    if HaveEL(EL3) then
        return EL3;
    elsif HaveEL(EL2) then
        return EL2;
    else
        return EL1;

Library pseudocode for shared/functions/system/Hint_DGH

// Provides a hint to close any gathering occurring within the micro-architecture.

Hint_DGH();

Library pseudocode for shared/functions/system/Hint_WFE

// Hint_WFE()
// ==========
// Provides a hint indicating that the PE can enter a low-power state
// and remain there until a wakeup event occurs or, for WFET, a local
// timeout event is generated when the virtual timer value equals or
// exceeds the supplied threshold value.

Hint_WFE(integer localtimeout, WFxType wfxtype)
    if IsEventRegisterSet() then
        ClearEventRegister();
    elsif HaveFeatWFxT() && LocalTimeoutEvent(localtimeout) then
        // No further operation if the local timeout has expired.
        EndOfInstruction();
    else
        bits(2) target_el;
        trap = FALSE;
        if PSTATE.EL == EL0 then
            // Check for traps described by the OS which may be EL1 or EL2.
            if HaveTWEDExt() then
                sctlr = SCTLR[];
                trap = sctlr.nTWE == '0';
                target_el = EL1;
            else
                AArch64.CheckForWFxTrap(EL1, wfxtype);
        if !trap && PSTATE.EL IN {EL0, EL1} && EL2Enabled() && !IsInHost() then
            // Check for traps described by the Hypervisor.
            if HaveTWEDExt() then
                trap = HCR_EL2.TWE == '1';
                target_el = EL2;
            else
                AArch64.CheckForWFxTrap(EL2, wfxtype);
        if !trap && HaveEL(EL3) && PSTATE.EL != EL3 then
            // Check for traps described by the Secure Monitor.
            if HaveTWEDExt() then
                trap = SCR_EL3.TWE == '1';
                target_el = EL3;
            else
                AArch64.CheckForWFxTrap(EL3, wfxtype);
        if trap && PSTATE.EL != EL3 then
            (delay_enabled, delay) = WFETrapDelay(target_el);  // (If trap delay is enabled, Delay amount)
            if !WaitForEventUntilDelay(delay_enabled, delay) then
                // Event did not arrive before delay expired
                AArch64.WFxTrap(wfxtype, target_el);           // Trap WFE
        else
            WaitForEvent(localtimeout);

Library pseudocode for shared/functions/system/Hint_WFI

// Hint_WFI()
// ==========
// Provides a hint indicating that the PE can enter a low-power state and
// remain there until a wakeup event occurs or, for WFIT, a local timeout
// event is generated when the virtual timer value equals or exceeds the
// supplied threshold value.

Hint_WFI(integer localtimeout, WFxType wfxtype)
    if InterruptPending() || (HaveFeatWFxT() && LocalTimeoutEvent(localtimeout)) then
        // No further operation if an interrupt is pending or the local timeout has expired.
        EndOfInstruction();
    else
        if PSTATE.EL == EL0 then
            // Check for traps described by the OS.
            AArch64.CheckForWFxTrap(EL1, wfxtype);
        if PSTATE.EL IN {EL0, EL1} && EL2Enabled() && !IsInHost() then
            // Check for traps described by the Hypervisor.
            AArch64.CheckForWFxTrap(EL2, wfxtype);
        if HaveEL(EL3) && PSTATE.EL != EL3 then
            // Check for traps described by the Secure Monitor.
            AArch64.CheckForWFxTrap(EL3, wfxtype);
        WaitForInterrupt(localtimeout);

Library pseudocode for shared/functions/system/Hint_Yield

// Provides a hint that the task performed by a thread is of low
// importance so that it could yield to improve overall performance.

Hint_Yield();

Library pseudocode for shared/functions/system/IRQPending

// Returns a tuple indicating if there is any pending physical IRQ
// and if the pending IRQ has superpriority.

(boolean, boolean) IRQPending();

Library pseudocode for shared/functions/system/IllegalExceptionReturn

// IllegalExceptionReturn()
// ========================

boolean IllegalExceptionReturn(bits(N) spsr)
    // Check for illegal return:
    //   * To an unimplemented Exception level.
    //   * To EL2 in Secure state, when SecureEL2 is not enabled.
    //   * To EL0 using AArch64 state, with SPSR.M[0]==1.
    //   * To AArch64 state with SPSR.M[1]==1.
    //   * To AArch32 state with an illegal value of SPSR.M.
    (valid, target) = ELFromSPSR(spsr);
    if !valid then return TRUE;

    // Check for return to higher Exception level
    if UInt(target) > UInt(PSTATE.EL) then return TRUE;

    spsr_mode_is_aarch32 = (spsr<4> == '1');

    // Check for illegal return:
    //   * To EL1, EL2 or EL3 with register width specified in the SPSR different from the
    //     Execution state used in the Exception level being returned to, as determined by
    //     the SCR_EL3.RW or HCR_EL2.RW bits, or as configured from reset.
    //   * To EL0 using AArch64 state when EL1 is using AArch32 state as determined by the
    //     SCR_EL3.RW or HCR_EL2.RW bits or as configured from reset.
    //   * To AArch64 state from AArch32 state (should be caught by above)
    (known, target_el_is_aarch32) = ELUsingAArch32K(target);
    assert known || (target == EL0 && !ELUsingAArch32(EL1));
    if known && spsr_mode_is_aarch32 != target_el_is_aarch32 then return TRUE;

    // Check for illegal return from AArch32 to AArch64
    if UsingAArch32() && !spsr_mode_is_aarch32 then return TRUE;

    // Check for illegal return to EL1 when HCR.TGE is set and when either of
    //   * SecureEL2 is enabled.
    //   * SecureEL2 is not enabled and EL1 is in Non-secure state.
    if HaveEL(EL2) && target == EL1 && HCR_EL2.TGE == '1' then
        if (!IsSecureBelowEL3() || IsSecureEL2Enabled()) then return TRUE;

    return FALSE;
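
Several of the checks reduce to simple comparisons once ELFromSPSR() has run. The C sketch below covers only those (undecodable SPSR, return to a higher Exception level, and the AArch32-to-AArch64 rule) and leaves out the register-width and TGE checks, which need configuration state not modelled here:

#include <stdbool.h>

static bool illegal_return_sketch(bool spsr_valid, unsigned target_el,
                                  unsigned current_el,
                                  bool current_aarch32, bool spsr_aarch32)
{
    if (!spsr_valid)
        return true;       /* SPSR.M does not decode              */
    if (target_el > current_el)
        return true;       /* return to a higher Exception level  */
    if (current_aarch32 && !spsr_aarch32)
        return true;       /* AArch32 cannot return to AArch64    */
    return false;          /* remaining checks not modelled       */
}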

Library pseudocode for shared/functions/system/InstrSet

enumeration InstrSet {InstrSet_A64, InstrSet_A32, InstrSet_T32};

Library pseudocode for shared/functions/system/InstructionSynchronizationBarrier

InstructionSynchronizationBarrier();

Library pseudocode for shared/functions/system/InterruptPending

// InterruptPending()
// ==================
// Returns TRUE if there are any pending physical or virtual
// interrupts, and FALSE otherwise.

boolean InterruptPending()
    boolean pending_virtual_interrupt = FALSE;
    (irq_pending, -) = IRQPending();
    (fiq_pending, -) = FIQPending();
    boolean pending_physical_interrupt = (irq_pending || fiq_pending ||
                                          IsPhysicalSErrorPending());

    if EL2Enabled() && PSTATE.EL IN {EL0, EL1} && HCR_EL2.TGE == '0' then
        boolean virq_pending = HCR_EL2.IMO == '1' && (VirtualIRQPending() || HCR_EL2.VI == '1');
        boolean vfiq_pending = HCR_EL2.FMO == '1' && (VirtualFIQPending() || HCR_EL2.VF == '1');
        boolean vsei_pending = HCR_EL2.AMO == '1' && (IsVirtualSErrorPending() || HCR_EL2.VSE == '1');
        pending_virtual_interrupt = vsei_pending || virq_pending || vfiq_pending;

    return pending_physical_interrupt || pending_virtual_interrupt;
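
The virtual-interrupt gating above (routing bit AND (pending OR software-injected bit)) can be modelled in a few lines of C. This is an illustrative sketch only, with HCR_EL2 bits and the pending queries reduced to plain booleans; the names are ours, not the architecture's:

#include <stdbool.h>
#include <stdio.h>

/* Single-bit model of the HCR_EL2 fields consumed by InterruptPending(). */
struct hcr { bool tge, imo, fmo, amo, vi, vf, vse; };

static bool interrupt_pending(bool irq, bool fiq, bool serr,
                              bool virq, bool vfiq, bool vserr,
                              bool el2_enabled, bool el_is_el0_or_el1,
                              struct hcr h) {
    bool phys = irq || fiq || serr;
    bool virt = false;
    if (el2_enabled && el_is_el0_or_el1 && !h.tge) {
        bool virq_pending = h.imo && (virq || h.vi);
        bool vfiq_pending = h.fmo && (vfiq || h.vf);
        bool vsei_pending = h.amo && (vserr || h.vse);
        virt = virq_pending || vfiq_pending || vsei_pending;
    }
    return phys || virt;
}

int main(void) {
    /* HCR_EL2.IMO=1 with HCR_EL2.VI=1 injects a virtual IRQ at EL0/EL1. */
    struct hcr h = { .imo = true, .vi = true };
    printf("%d\n", interrupt_pending(false, false, false,
                                     false, false, false,
                                     true, true, h));   /* prints 1 */
    return 0;
}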

Library pseudocode for shared/functions/system/IsASEInstruction

// Returns TRUE if the current instruction is an ASIMD or SVE vector instruction.

boolean IsASEInstruction();

Library pseudocode for shared/functions/system/IsCMOWControlledInstruction

// When using AArch64, returns TRUE if the current instruction is one of IC IVAU,
// DC CIVAC, DC CIGDVAC, or DC CIGVAC.
// When using AArch32, returns TRUE if the current instruction is ICIMVAU or DCCIMVAC.

boolean IsCMOWControlledInstruction();

Library pseudocode for shared/functions/system/IsCurrentSecurityState

// IsCurrentSecurityState()
// ========================
// Returns TRUE if the current Security state matches
// the given Security state, and FALSE otherwise.

boolean IsCurrentSecurityState(SecurityState ss)
    return CurrentSecurityState() == ss;

Library pseudocode for shared/functions/system/IsEventRegisterSet

// IsEventRegisterSet()
// ====================
// Return TRUE if the Event Register of this PE is set, and FALSE if it is clear.

boolean IsEventRegisterSet()
    return EventRegister == '1';

Library pseudocode for shared/functions/system/IsHighestEL

// IsHighestEL()
// =============
// Returns TRUE if given exception level is the highest exception level implemented

boolean IsHighestEL(bits(2) el)
    return HighestEL() == el;

Library pseudocode for shared/functions/system/IsInHost

// IsInHost()
// ==========

boolean IsInHost()
    return ELIsInHost(PSTATE.EL);

Library pseudocode for shared/functions/system/IsPhysicalSErrorPending

// Returns TRUE if a physical SError interrupt is pending.

boolean IsPhysicalSErrorPending();

Library pseudocode for shared/functions/system/IsSErrorEdgeTriggered

// IsSErrorEdgeTriggered()
// =======================
// Returns TRUE if the physical SError interrupt is edge-triggered
// and FALSE otherwise.

boolean IsSErrorEdgeTriggered(bits(2) target_el, bits(25) syndrome)
    if HaveRASExt() then
        if HaveDoubleFaultExt() then return TRUE;
        if ELUsingAArch32(target_el) then
            if syndrome<11:10> != '00' then    // AArch32 and not Uncontainable.
                return TRUE;
        else
            if syndrome<24> == '0' && syndrome<5:0> != '000000' then
                // AArch64 and neither IMPLEMENTATION DEFINED syndrome nor Uncategorized.
                return TRUE;
    return boolean IMPLEMENTATION_DEFINED "Edge-triggered SError";
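
The AArch64 arm of this test is a pair of fixed bit checks on the 25-bit syndrome. A minimal C sketch, assuming RAS without the Double Fault extension and modelling the final IMPLEMENTATION DEFINED choice as a constant (everything here is illustrative, not the specification):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Mirrors only: syndrome<24> == '0' && syndrome<5:0> != '000000'. */
static bool serror_edge_triggered_a64(uint32_t syndrome25) {
    bool bit24_zero   = ((syndrome25 >> 24) & 1) == 0;
    bool low6_nonzero = (syndrome25 & 0x3F) != 0;
    if (bit24_zero && low6_nonzero) return true;
    return false;   /* stand-in for the IMPLEMENTATION DEFINED fallback */
}

int main(void) {
    printf("%d\n", serror_edge_triggered_a64(0x0000011));  /* prints 1 */
    printf("%d\n", serror_edge_triggered_a64(0x1000000));  /* prints 0 */
    return 0;
}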

Library pseudocode for shared/functions/system/IsSecure

// IsSecure()
// ==========
// Returns TRUE if current Exception level is in Secure state.

boolean IsSecure()
    if HaveEL(EL3) && !UsingAArch32() && PSTATE.EL == EL3 then
        return TRUE;
    elsif HaveEL(EL3) && UsingAArch32() && PSTATE.M == M32_Monitor then
        return TRUE;
    return IsSecureBelowEL3();

Library pseudocode for shared/functions/system/IsSecureBelowEL3

// IsSecureBelowEL3()
// ==================
// Return TRUE if an Exception level below EL3 is in Secure state
// or would be following an exception return to that level.
//
// Differs from IsSecure in that it ignores the current EL or Mode
// in considering security state.
// That is, if at AArch64 EL3 or in AArch32 Monitor mode, whether an
// exception return would pass to Secure or Non-secure state.

boolean IsSecureBelowEL3()
    if HaveEL(EL3) then
        return SCR_GEN[].NS == '0';
    elsif HaveEL(EL2) && (!HaveSecureEL2Ext() || !HaveAArch64()) then
        // If Secure EL2 is not an architecture option then we must be Non-secure.
        return FALSE;
    else
        // TRUE if processor is Secure or FALSE if Non-secure.
        return boolean IMPLEMENTATION_DEFINED "Secure-only implementation";

Library pseudocode for shared/functions/system/IsSecureEL2Enabled

// IsSecureEL2Enabled()
// ====================
// Returns TRUE if Secure EL2 is enabled, FALSE otherwise.

boolean IsSecureEL2Enabled()
    if HaveEL(EL2) && HaveSecureEL2Ext() then
        if HaveEL(EL3) then
            if !ELUsingAArch32(EL3) && SCR_EL3.EEL2 == '1' then
                return TRUE;
            else
                return FALSE;
        else
            return SecureOnlyImplementation();
    else
        return FALSE;

Library pseudocode for shared/functions/system/IsSynchronizablePhysicalSErrorPending

// Returns TRUE if a synchronizable physical SError interrupt is pending.

boolean IsSynchronizablePhysicalSErrorPending();

Library pseudocode for shared/functions/system/IsVirtualSErrorPending

// Returns TRUE if a virtual SError interrupt is pending.

boolean IsVirtualSErrorPending();

Library pseudocode for shared/functions/system/LocalTimeoutEvent

// Returns TRUE if CNTVCT_EL0 equals or exceeds the localtimeout value.

boolean LocalTimeoutEvent(integer localtimeout);

Library pseudocode for shared/functions/system/Mode_Bits

constant bits(5) M32_User    = '10000';
constant bits(5) M32_FIQ     = '10001';
constant bits(5) M32_IRQ     = '10010';
constant bits(5) M32_Svc     = '10011';
constant bits(5) M32_Monitor = '10110';
constant bits(5) M32_Abort   = '10111';
constant bits(5) M32_Hyp     = '11010';
constant bits(5) M32_Undef   = '11011';
constant bits(5) M32_System  = '11111';

Library pseudocode for shared/functions/system/NonSecureOnlyImplementation

// NonSecureOnlyImplementation()
// =============================
// Returns TRUE if the security state is always Non-secure for this implementation.

boolean NonSecureOnlyImplementation()
    return boolean IMPLEMENTATION_DEFINED "Non-secure only implementation";

Library pseudocode for shared/functions/system/PLOfEL

// PLOfEL()
// ========

PrivilegeLevel PLOfEL(bits(2) el)
    case el of
        when EL3 return if !HaveAArch64() then PL1 else PL3;
        when EL2 return PL2;
        when EL1 return PL1;
        when EL0 return PL0;

Library pseudocode for shared/functions/system/PSTATE

ProcState PSTATE;

Library pseudocode for shared/functions/system/PhysicalCountInt

// PhysicalCountInt()
// ==================
// Returns the integral part of physical count value of the System counter.

bits(64) PhysicalCountInt()
    return PhysicalCount<87:24>;
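
The slice <87:24> reflects a fixed-point count with 24 fractional bits, so the integral part is simply a right shift by 24. A minimal C sketch, assuming a GCC/Clang 128-bit integer to hold the wide count (the representation is ours; the architecture only defines the bit slice):

#include <stdint.h>
#include <stdio.h>

/* PhysicalCount<87:24>: drop the 24 fractional bits, keep the next 64. */
static uint64_t physical_count_int(unsigned __int128 physical_count) {
    return (uint64_t)(physical_count >> 24);
}

int main(void) {
    /* Integral part 5, plus an arbitrary nonzero fraction. */
    unsigned __int128 count = ((unsigned __int128)5 << 24) | 0x123456;
    printf("%llu\n", (unsigned long long)physical_count_int(count)); /* prints 5 */
    return 0;
}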

Library pseudocode for shared/functions/system/PrivilegeLevel

enumeration PrivilegeLevel {PL3, PL2, PL1, PL0};

Library pseudocode for shared/functions/system/ProcState

type ProcState is (
    bits (1) N,        // Negative condition flag
    bits (1) Z,        // Zero condition flag
    bits (1) C,        // Carry condition flag
    bits (1) V,        // Overflow condition flag
    bits (1) D,        // Debug mask bit                     [AArch64 only]
    bits (1) A,        // SError interrupt mask bit
    bits (1) I,        // IRQ mask bit
    bits (1) F,        // FIQ mask bit
    bits (1) PAN,      // Privileged Access Never Bit        [v8.1]
    bits (1) UAO,      // User Access Override               [v8.2]
    bits (1) DIT,      // Data Independent Timing            [v8.4]
    bits (1) TCO,      // Tag Check Override                 [v8.5, AArch64 only]
    bits (2) BTYPE,    // Branch Type                        [v8.5]
    bits (1) ALLINT,   // Interrupt mask bit
    bits (1) SS,       // Software step bit
    bits (1) IL,       // Illegal Execution state bit
    bits (2) EL,       // Exception level
    bits (1) nRW,      // not Register Width: 0=64, 1=32
    bits (1) SP,       // Stack pointer select: 0=SP0, 1=SPx [AArch64 only]
    bits (1) Q,        // Cumulative saturation flag         [AArch32 only]
    bits (4) GE,       // Greater than or Equal flags        [AArch32 only]
    bits (1) SSBS,     // Speculative Store Bypass Safe
    bits (8) IT,       // If-then bits, RES0 in CPSR         [AArch32 only]
    bits (1) J,        // J bit, RES0                        [AArch32 only, RES0 in SPSR and CPSR]
    bits (1) T,        // T32 bit, RES0 in CPSR              [AArch32 only]
    bits (1) E,        // Endianness bit                     [AArch32 only]
    bits (5) M         // Mode field                         [AArch32 only]
)

Library pseudocode for shared/functions/system/RestoredITBits

// RestoredITBits()
// ================
// Get the value of PSTATE.IT to be restored on this exception return.

bits(8) RestoredITBits(bits(N) spsr)
    it = spsr<15:10,26:25>;

    // When PSTATE.IL is set, it is CONSTRAINED UNPREDICTABLE whether the IT bits are each set
    // to zero or copied from the SPSR.
    if PSTATE.IL == '1' then
        if ConstrainUnpredictableBool(Unpredictable_ILZEROIT) then return '00000000';
        else return it;

    // The IT bits are forced to zero when they are set to a reserved value.
    if !IsZero(it<7:4>) && IsZero(it<3:0>) then
        return '00000000';

    // The IT bits are forced to zero when returning to A32 state, or when returning to an EL
    // with the ITD bit set to 1, and the IT bits are describing a multi-instruction block.
    itd = if PSTATE.EL == EL2 then HSCTLR.ITD else SCTLR.ITD;
    if (spsr<5> == '0' && !IsZero(it)) || (itd == '1' && !IsZero(it<2:0>)) then
        return '00000000';
    else
        return it;
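
The bit gathering spsr<15:10,26:25> and the reserved-value rule are easy to get wrong, so here is a minimal C sketch of just those two steps (our own model; the PSTATE.IL and ITD cases are deliberately omitted):

#include <stdint.h>
#include <stdio.h>

/* it = spsr<15:10> : spsr<26:25>; a value with IT<7:4> nonzero and
 * IT<3:0> zero is a reserved encoding and restores as zero. */
static uint8_t restored_it_bits(uint32_t spsr) {
    uint8_t it = (uint8_t)((((spsr >> 10) & 0x3F) << 2) | ((spsr >> 25) & 0x3));
    if ((it & 0xF0) != 0 && (it & 0x0F) == 0) return 0;   /* reserved -> zero */
    return it;
}

int main(void) {
    uint32_t spsr = 0x20u << 10;                  /* IT = 10000000: reserved */
    printf("0x%02X\n", restored_it_bits(spsr));   /* prints 0x00 */
    return 0;
}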

Library pseudocode for shared/functions/system/SCRType

type SCRType;

Library pseudocode for shared/functions/system/SCR_GEN

// SCR_GEN[]
// =========

SCRType SCR_GEN[]
    // AArch32 secure & AArch64 EL3 registers are not architecturally mapped
    assert HaveEL(EL3);
    bits(64) r;
    if !HaveAArch64() then
        r = ZeroExtend(SCR);
    else
        r = SCR_EL3;
    return r;

Library pseudocode for shared/functions/system/SecureOnlyImplementation

// SecureOnlyImplementation()
// ==========================
// Returns TRUE if the security state is always Secure for this implementation.

boolean SecureOnlyImplementation()
    return boolean IMPLEMENTATION_DEFINED "Secure-only implementation";

Library pseudocode for shared/functions/system/SecurityState

enumeration SecurityState {SS_NonSecure, SS_Secure};

Library pseudocode for shared/functions/system/SecurityStateAtEL

// SecurityStateAtEL()
// ===================
// Returns the effective security state at the exception level based off current settings.

SecurityState SecurityStateAtEL(bits(2) EL)
    if !HaveEL(EL3) then
        if SecureOnlyImplementation() then
            return SS_Secure;
        else
            return SS_NonSecure;
    elsif EL == EL3 then
        return SS_Secure;
    else
        // For EL2 call only when EL2 is enabled in current security state
        assert(EL != EL2 || EL2Enabled());
        if !ELUsingAArch32(EL3) then
            return if SCR_EL3.NS == '1' then SS_NonSecure else SS_Secure;
        else
            return if SCR.NS == '1' then SS_NonSecure else SS_Secure;
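
For the EL3-present case the whole function reduces to "EL3 is always Secure; below EL3 the SCR NS bit selects the state". A minimal C sketch under that assumption (types and names are local to this example, not the specification's):

#include <stdbool.h>
#include <stdio.h>

typedef enum { SS_NONSECURE, SS_SECURE } security_state;

/* EL3-present arm of SecurityStateAtEL(): scr_ns models SCR_EL3.NS (or
 * SCR.NS when EL3 uses AArch32). */
static security_state security_state_at_el(int el, bool scr_ns) {
    if (el == 3) return SS_SECURE;
    return scr_ns ? SS_NONSECURE : SS_SECURE;
}

int main(void) {
    printf("%d\n", security_state_at_el(3, true));  /* prints 1: EL3 is Secure */
    printf("%d\n", security_state_at_el(1, true));  /* prints 0: NS=1 -> Non-secure */
    return 0;
}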

Library pseudocode for shared/functions/system/SendEvent

// Signal an event to all PEs in a multiprocessor system to set their Event Registers.
// When a PE executes the SEV instruction, it causes this function to be executed.

SendEvent();

Library pseudocode for shared/functions/system/SendEventLocal

// SendEventLocal()
// ================
// Set the local Event Register of this PE.
// When a PE executes the SEVL instruction, it causes this function to be executed.

SendEventLocal()
    EventRegister = '1';
    return;

Library pseudocode for shared/functions/system/SetAccumulatedFPExceptions

// Stores FP Exceptions accumulated by the PE.

SetAccumulatedFPExceptions(bits(8) accumulated_exceptions);

Library pseudocode for shared/functions/system/SetPSTATEFromPSR

// SetPSTATEFromPSR()
// ==================

SetPSTATEFromPSR(bits(N) spsr)
    boolean illegal_psr_state = IllegalExceptionReturn(spsr);
    SetPSTATEFromPSR(spsr, illegal_psr_state);

// SetPSTATEFromPSR()
// ==================
// Set PSTATE based on a PSR value

SetPSTATEFromPSR(bits(N) spsr_in, boolean illegal_psr_state)
    bits(N) spsr = spsr_in;
    boolean from_aarch64 = !UsingAArch32();
    assert N == (if from_aarch64 then 64 else 32);
    PSTATE.SS = DebugExceptionReturnSS(spsr);
    ShouldAdvanceSS = FALSE;
    if illegal_psr_state then
        PSTATE.IL = '1';
        if HaveSSBSExt() then PSTATE.SSBS = bit UNKNOWN;
        if HaveBTIExt() then PSTATE.BTYPE = bits(2) UNKNOWN;
        if HaveUAOExt() then PSTATE.UAO = bit UNKNOWN;
        if HaveDITExt() then PSTATE.DIT = bit UNKNOWN;
        if HaveMTEExt() then PSTATE.TCO = bit UNKNOWN;
    else
        // State that is reinstated only on a legal exception return
        PSTATE.IL = spsr<20>;
        if spsr<4> == '1' then                    // AArch32 state
            AArch32.WriteMode(spsr<4:0>);         // Sets PSTATE.EL correctly
            if HaveSSBSExt() then PSTATE.SSBS = spsr<23>;
        else                                      // AArch64 state
            PSTATE.nRW = '0';
            PSTATE.EL = spsr<3:2>;
            PSTATE.SP = spsr<0>;
            if HaveBTIExt() then PSTATE.BTYPE = spsr<11:10>;
            if HaveSSBSExt() then PSTATE.SSBS = spsr<12>;
            if HaveUAOExt() then PSTATE.UAO = spsr<23>;
            if HaveDITExt() then PSTATE.DIT = spsr<24>;
            if HaveMTEExt() then PSTATE.TCO = spsr<25>;

    // If PSTATE.IL is set, it is CONSTRAINED UNPREDICTABLE whether the T bit is set to zero or
    // copied from SPSR.
    if PSTATE.IL == '1' && PSTATE.nRW == '1' then
        if ConstrainUnpredictableBool(Unpredictable_ILZEROT) then spsr<5> = '0';

    // State that is reinstated regardless of illegal exception return
    PSTATE.<N,Z,C,V> = spsr<31:28>;
    if HavePANExt() then PSTATE.PAN = spsr<22>;
    if PSTATE.nRW == '1' then                     // AArch32 state
        PSTATE.Q = spsr<27>;
        PSTATE.IT = RestoredITBits(spsr);
        ShouldAdvanceIT = FALSE;
        if HaveDITExt() then
            PSTATE.DIT = (if (Restarting() || from_aarch64) then spsr<24> else spsr<21>);
        PSTATE.GE = spsr<19:16>;
        PSTATE.E = spsr<9>;
        PSTATE.<A,I,F> = spsr<8:6>;               // No PSTATE.D in AArch32 state
        PSTATE.T = spsr<5>;                       // PSTATE.J is RES0
    else                                          // AArch64 state
        if HaveFeatNMI() then PSTATE.ALLINT = spsr<13>;
        PSTATE.<D,A,I,F> = spsr<9:6>;             // No PSTATE.<Q,IT,GE,E,T> in AArch64 state

    return;
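
To make the AArch64 SPSR field offsets used above concrete, here is a small C decoder for the subset of fields the function consumes (a sketch only; the struct and value are ours, the offsets follow the pseudocode):

#include <stdint.h>
#include <stdio.h>

struct pstate_fields { unsigned el, sp, nzcv, daif; };

/* PSTATE.EL = spsr<3:2>, PSTATE.SP = spsr<0>,
 * PSTATE.<N,Z,C,V> = spsr<31:28>, PSTATE.<D,A,I,F> = spsr<9:6>. */
static struct pstate_fields decode_spsr_a64(uint64_t spsr) {
    struct pstate_fields p;
    p.el   = (unsigned)((spsr >> 2) & 0x3);
    p.sp   = (unsigned)(spsr & 0x1);
    p.nzcv = (unsigned)((spsr >> 28) & 0xF);
    p.daif = (unsigned)((spsr >> 6) & 0xF);
    return p;
}

int main(void) {
    /* Example SPSR value: EL1 with SP_EL1 selected, all DAIF bits set. */
    struct pstate_fields p = decode_spsr_a64(0x600003C5ull);
    printf("EL=%u SP=%u NZCV=0x%X DAIF=0x%X\n", p.el, p.sp, p.nzcv, p.daif);
    /* prints: EL=1 SP=1 NZCV=0x6 DAIF=0xF */
    return 0;
}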

Library pseudocode for shared/functions/system/ShouldAdvanceIT

boolean ShouldAdvanceIT;

Library pseudocode for shared/functions/system/ShouldAdvanceSS

boolean ShouldAdvanceSS;

Library pseudocode for shared/functions/system/SpeculationBarrier

SpeculationBarrier();

Library pseudocode for shared/functions/system/SynchronizeContext

SynchronizeContext();

Library pseudocode for shared/functions/system/SynchronizeErrors

// Implements the error synchronization event.

SynchronizeErrors();

Library pseudocode for shared/functions/system/TakeUnmaskedPhysicalSErrorInterrupts

// Take any pending unmasked physical SError interrupt.

TakeUnmaskedPhysicalSErrorInterrupts(boolean iesb_req);

Library pseudocode for shared/functions/system/TakeUnmaskedSErrorInterrupts

// Take any pending unmasked physical SError interrupt or unmasked virtual SError
// interrupt.

TakeUnmaskedSErrorInterrupts();

Library pseudocode for shared/functions/system/ThisInstr

bits(32) ThisInstr();

Library pseudocode for shared/functions/system/ThisInstrLength

integer ThisInstrLength();

Library pseudocode for shared/functions/system/Unreachable

Unreachable()
    assert FALSE;

Library pseudocode for shared/functions/system/UsingAArch32

// UsingAArch32()
// ==============
// Return TRUE if the current Exception level is using AArch32, FALSE if using AArch64.

boolean UsingAArch32()
    boolean aarch32 = (PSTATE.nRW == '1');
    if !HaveAArch32() then assert !aarch32;
    if !HaveAArch64() then assert aarch32;
    return aarch32;

Library pseudocode for shared/functions/system/VirtualFIQPending

// Returns TRUE if there is any pending virtual FIQ.

boolean VirtualFIQPending();

Library pseudocode for shared/functions/system/VirtualIRQPending

// Returns TRUE if there is any pending virtual IRQ.

boolean VirtualIRQPending();

Library pseudocode for shared/functions/system/WFxType

enumeration WFxType {WFxType_WFE, WFxType_WFI, WFxType_WFET, WFxType_WFIT};

Library pseudocode for shared/functions/system/WaitForEvent

// WaitForEvent()
// ==============
// PE optionally suspends execution until one of the following occurs:
// - A WFE wakeup event.
// - A reset.
// - The implementation chooses to resume execution.
// - A Wait for Event with Timeout (WFET) is executing, and a local timeout event occurs.
// It is IMPLEMENTATION DEFINED whether restarting execution after the period of
// suspension causes the Event Register to be cleared.

WaitForEvent(integer localtimeout)
    if !(IsEventRegisterSet() || (HaveFeatWFxT() && LocalTimeoutEvent(localtimeout))) then
        EnterLowPowerState();
    return;
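
The only decision WaitForEvent makes is whether to suspend at all: it enters the low-power state only if the Event Register is clear and, for WFET, no local timeout event has occurred. A minimal C sketch of that gate, with all state reduced to booleans (illustrative only):

#include <stdbool.h>
#include <stdio.h>

static bool event_register = false;   /* model of this PE's Event Register */

/* Mirrors: !(IsEventRegisterSet() || (HaveFeatWFxT() && LocalTimeoutEvent(t))) */
static bool wfe_would_suspend(bool have_wfxt, bool local_timeout) {
    return !(event_register || (have_wfxt && local_timeout));
}

int main(void) {
    event_register = true;                            /* e.g. a prior SEVL */
    printf("%d\n", wfe_would_suspend(true, false));   /* prints 0: no suspend */
    event_register = false;
    printf("%d\n", wfe_would_suspend(true, false));   /* prints 1: suspends */
    return 0;
}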

Library pseudocode for shared/functions/system/WaitForInterrupt

// WaitForInterrupt()
// ==================
// PE optionally suspends execution until one of the following occurs:
// - A WFI wakeup event.
// - A reset.
// - The implementation chooses to resume execution.
// - A Wait for Interrupt with Timeout (WFIT) is executing, and a local timeout event occurs.

WaitForInterrupt(integer localtimeout)
    if !(HaveFeatWFxT() && LocalTimeoutEvent(localtimeout)) then
        EnterLowPowerState();
    return;

Library pseudocode for shared/functions/unpredictable/ConstrainUnpredictable

// ConstrainUnpredictable()
// ========================
// Return the appropriate Constraint result to control the caller's behavior. The return value
// is IMPLEMENTATION DEFINED within a permitted list for each UNPREDICTABLE case.
// (The permitted list is determined by an assert or case statement at the call site.)
// NOTE: This version of the function uses an Unpredictable argument to define the call site.
// This argument does not appear in the version used in the Armv8 Architecture Reference Manual.
// The extra argument is used here to allow this example definition. This is an example only and
// does not imply a fixed implementation of these behaviors. Indeed the intention is that it should
// be defined by each implementation, according to its implementation choices.

Constraint ConstrainUnpredictable(Unpredictable which)
    case which of
        when Unpredictable_VMSR
            return Constraint_UNDEF;
        when Unpredictable_WBOVERLAPLD
            return Constraint_WBSUPPRESS;        // return loaded value
        when Unpredictable_WBOVERLAPST
            return Constraint_NONE;              // store pre-writeback value
        when Unpredictable_LDPOVERLAP
            return Constraint_UNDEF;             // instruction is UNDEFINED
        when Unpredictable_BASEOVERLAP
            return Constraint_UNKNOWN;           // use UNKNOWN address
        when Unpredictable_DATAOVERLAP
            return Constraint_UNKNOWN;           // store UNKNOWN value
        when Unpredictable_DEVPAGE2
            return Constraint_FAULT;             // take an alignment fault
        when Unpredictable_DEVICETAGSTORE
            return Constraint_NONE;              // Do not take a fault
        when Unpredictable_INSTRDEVICE
            return Constraint_NONE;              // Do not take a fault
        when Unpredictable_RESCPACR
            return Constraint_TRUE;              // Map to UNKNOWN value
        when Unpredictable_RESMAIR
            return Constraint_UNKNOWN;           // Map to UNKNOWN value
        when Unpredictable_S1CTAGGED
            return Constraint_FALSE;             // SCTLR_ELx.C == '0' marks address as untagged
        when Unpredictable_S2RESMEMATTR
            return Constraint_NC;                // Map to Noncacheable value
        when Unpredictable_RESTEXCB
            return Constraint_UNKNOWN;           // Map to UNKNOWN value
        when Unpredictable_RESDACR
            return Constraint_UNKNOWN;           // Map to UNKNOWN value
        when Unpredictable_RESPRRR
            return Constraint_UNKNOWN;           // Map to UNKNOWN value
        when Unpredictable_RESVTCRS
            return Constraint_UNKNOWN;           // Map to UNKNOWN value
        when Unpredictable_RESTnSZ
            return Constraint_FORCE;             // Map to the limit value
        when Unpredictable_OORTnSZ
            return Constraint_FORCE;             // Map to the limit value
        when Unpredictable_LARGEIPA
            return Constraint_FORCE;             // Restrict the IA size to the PAMax value
        when Unpredictable_ESRCONDPASS
            return Constraint_FALSE;             // Report as "AL"
        when Unpredictable_ILZEROIT
            return Constraint_FALSE;             // Do not zero PSTATE.IT
        when Unpredictable_ILZEROT
            return Constraint_FALSE;             // Do not zero PSTATE.T
        when Unpredictable_BPVECTORCATCHPRI
            return Constraint_TRUE;              // Debug Vector Catch: match on 2nd halfword
        when Unpredictable_VCMATCHHALF
            return Constraint_FALSE;             // No match
        when Unpredictable_VCMATCHDAPA
            return Constraint_FALSE;             // No match on Data Abort or Prefetch abort
        when Unpredictable_WPMASKANDBAS
            return Constraint_FALSE;             // Watchpoint disabled
        when Unpredictable_WPBASCONTIGUOUS
            return Constraint_FALSE;             // Watchpoint disabled
        when Unpredictable_RESWPMASK
            return Constraint_DISABLED;          // Watchpoint disabled
        when Unpredictable_WPMASKEDBITS
            return Constraint_FALSE;             // Watchpoint disabled
        when Unpredictable_RESBPWPCTRL
            return Constraint_DISABLED;          // Breakpoint/watchpoint disabled
        when Unpredictable_BPNOTIMPL
            return Constraint_DISABLED;          // Breakpoint disabled
        when Unpredictable_RESBPTYPE
            return Constraint_DISABLED;          // Breakpoint disabled
        when Unpredictable_BPNOTCTXCMP
            return Constraint_DISABLED;          // Breakpoint disabled
        when Unpredictable_BPMATCHHALF
            return Constraint_FALSE;             // No match
        when Unpredictable_BPMISMATCHHALF
            return Constraint_FALSE;             // No match
        when Unpredictable_RESTARTALIGNPC
            return Constraint_FALSE;             // Do not force alignment
        when Unpredictable_RESTARTZEROUPPERPC
            return Constraint_TRUE;              // Force zero extension
        when Unpredictable_ZEROUPPER
            return Constraint_TRUE;              // zero top halves of X registers
        when Unpredictable_ERETZEROUPPERPC
            return Constraint_TRUE;              // zero top half of PC
        when Unpredictable_A32FORCEALIGNPC
            return Constraint_FALSE;             // Do not force alignment
        when Unpredictable_SMD
            return Constraint_UNDEF;             // disabled SMC is Unallocated
        when Unpredictable_NVNV1
            return Constraint_NVNV1_00;          // Map unpredictable configuration of HCR_EL2<NV,NV1>
                                                 // to NV = 0 and NV1 = 0
        when Unpredictable_Shareability
            return Constraint_OSH;               // Map reserved encoding of shareability to Outer Shareable
        when Unpredictable_AFUPDATE              // AF update for alignment or Permission fault
            return Constraint_TRUE;
        when Unpredictable_IESBinDebug           // Use SCTLR[].IESB in Debug state
            return Constraint_TRUE;
        when Unpredictable_BADPMSFCR             // Bad settings for PMSFCR_EL1/PMSEVFR_EL1/PMSLATFR_EL1
            return Constraint_TRUE;
        when Unpredictable_ZEROBTYPE
            return Constraint_TRUE;              // Save BTYPE in SPSR_ELx/DPSR_EL0 as '00'
        when Unpredictable_CLEARERRITEZERO       // Clearing sticky errors when instruction in flight
            return Constraint_FALSE;
        when Unpredictable_ALUEXCEPTIONRETURN
            return Constraint_UNDEF;
        when Unpredictable_DBGxVR_RESS
            return Constraint_FALSE;
        when Unpredictable_PMSCR_PCT
            return Constraint_PMSCR_PCT_VIRT;
        when Unpredictable_WFxTDEBUG
            return Constraint_FALSE;             // WFxT in Debug state does not execute as a NOP
        when Unpredictable_LS64UNSUPPORTED
            return Constraint_LIMITED_ATOMICITY; // Accesses are not single-copy atomic above the byte level
        // Misaligned exclusives, atomics, acquire/release to region that is not Normal Cacheable WB are atomic
        when Unpredictable_MISALIGNEDATOMIC
            return Constraint_FALSE;
        when Unpredictable_IGNORETRAPINDEBUG
            return Constraint_TRUE;              // Trap to register access in debug state is ignored
        when Unpredictable_PMUEVENTCOUNTER
            return Constraint_UNDEF;             // Accesses to the register are UNDEFINED

Library pseudocode for shared/functions/unpredictable/ConstrainUnpredictableBits

// ConstrainUnpredictableBits()
// ============================
// This is a variant of ConstrainUnpredictable for when the result can be Constraint_UNKNOWN.
// If the result is Constraint_UNKNOWN then the function also returns UNKNOWN value, but that
// value is always an allocated value; that is, one for which the behavior is not itself
// CONSTRAINED.
// NOTE: This version of the function uses an Unpredictable argument to define the call site.
// This argument does not appear in the version used in the Armv8 Architecture Reference Manual.
// See the NOTE on ConstrainUnpredictable() for more information.
// This is an example placeholder only and does not imply a fixed implementation of the bits part
// of the result, and may not be applicable in all cases.

(Constraint, bits(width)) ConstrainUnpredictableBits(Unpredictable which)
    c = ConstrainUnpredictable(which);
    if c == Constraint_UNKNOWN then
        return (c, Zeros(width));               // See notes; this is an example implementation only
    elsif c == Constraint_PMSCR_PCT_VIRT then
        return (c, Zeros(width));
    else
        return (c, bits(width) UNKNOWN);        // bits result not used

Library pseudocode for shared/functions/unpredictable/ConstrainUnpredictableBool

// ConstrainUnpredictableBool()
// ============================
// This is a simple wrapper function for cases where the constrained result is either TRUE or FALSE.
// NOTE: This version of the function uses an Unpredictable argument to define the call site.
// This argument does not appear in the version used in the Armv8 Architecture Reference Manual.
// See the NOTE on ConstrainUnpredictable() for more information.

boolean ConstrainUnpredictableBool(Unpredictable which)
    c = ConstrainUnpredictable(which);
    assert c IN {Constraint_TRUE, Constraint_FALSE};
    return (c == Constraint_TRUE);
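
The wrapper pattern here (a per-case constraint table, with the call site asserting which constraints it permits) is straightforward to model in C. A minimal sketch with a small, made-up subset of the enumerators:

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

/* Illustrative subset only; the real enumerations are much larger. */
typedef enum { C_TRUE, C_FALSE, C_UNDEF } constraint;
typedef enum { U_ILZEROIT, U_ILZEROT, U_SMD } unpredictable;

static constraint constrain_unpredictable(unpredictable which) {
    switch (which) {
        case U_ILZEROIT: return C_FALSE;   /* do not zero PSTATE.IT */
        case U_ILZEROT:  return C_FALSE;   /* do not zero PSTATE.T  */
        default:         return C_UNDEF;
    }
}

static bool constrain_unpredictable_bool(unpredictable which) {
    constraint c = constrain_unpredictable(which);
    assert(c == C_TRUE || c == C_FALSE);   /* caller permits only TRUE/FALSE */
    return c == C_TRUE;
}

int main(void) {
    printf("%d\n", constrain_unpredictable_bool(U_ILZEROIT));  /* prints 0 */
    return 0;
}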

Library pseudocode for shared/functions/unpredictable/ConstrainUnpredictableInteger

// ConstrainUnpredictableInteger()
// ===============================
// This is a variant of ConstrainUnpredictable for when the result can be Constraint_UNKNOWN. If
// the result is Constraint_UNKNOWN then the function also returns an UNKNOWN value in the range
// low to high, inclusive.
// NOTE: This version of the function uses an Unpredictable argument to define the call site.
// This argument does not appear in the version used in the Armv8 Architecture Reference Manual.
// See the NOTE on ConstrainUnpredictable() for more information.
// This is an example placeholder only and does not imply a fixed implementation of the integer part
// of the result.

(Constraint, integer) ConstrainUnpredictableInteger(integer low, integer high, Unpredictable which)
    c = ConstrainUnpredictable(which);
    if c == Constraint_UNKNOWN then
        return (c, low);                        // See notes; this is an example implementation only
    else
        return (c, integer UNKNOWN);            // integer result not used

Library pseudocode for shared/functions/unpredictable/Constraint

enumeration Constraint {
    // General
    Constraint_NONE,              // Instruction executes with
                                  // no change or side-effect to its described behavior
    Constraint_UNKNOWN,           // Destination register has UNKNOWN value
    Constraint_UNDEF,             // Instruction is UNDEFINED
    Constraint_UNDEFEL0,          // Instruction is UNDEFINED at EL0 only
    Constraint_NOP,               // Instruction executes as NOP
    Constraint_TRUE,
    Constraint_FALSE,
    Constraint_DISABLED,
    Constraint_UNCOND,            // Instruction executes unconditionally
    Constraint_COND,              // Instruction executes conditionally
    Constraint_ADDITIONAL_DECODE, // Instruction executes with additional decode
    // Load-store
    Constraint_WBSUPPRESS,
    Constraint_FAULT,
    Constraint_LIMITED_ATOMICITY, // Accesses are not single-copy atomic above the byte level
    Constraint_NVNV1_00,
    Constraint_NVNV1_01,
    Constraint_NVNV1_11,
    Constraint_OSH,               // Constrain to Outer Shareable
    Constraint_ISH,               // Constrain to Inner Shareable
    Constraint_NSH,               // Constrain to Nonshareable
    Constraint_NC,                // Constrain to Noncacheable
    Constraint_WT,                // Constrain to Writethrough
    Constraint_WB,                // Constrain to Writeback
    // IPA too large
    Constraint_FORCE,
    Constraint_FORCENOSLCHECK,
    // PMSCR_PCT reserved values select Virtual timestamp
    Constraint_PMSCR_PCT_VIRT};

Library pseudocode for shared/functions/vector/UnsignedRSqrtEstimate

// UnsignedRSqrtEstimate()
// =======================

bits(N) UnsignedRSqrtEstimate(bits(N) operand)
    assert N == 32;
    bits(N) result;
    if operand<N-1:N-2> == '00' then    // Operands <= 0x3FFFFFFF produce 0xFFFFFFFF
        result = Ones(N);
    else
        // input is in the range 0x40000000 .. 0xffffffff representing [0.25 .. 1.0)
        // estimate is in the range 256 .. 511 representing [1.0 .. 2.0)
        increasedprecision = FALSE;
        estimate = RecipSqrtEstimate(UInt(operand<31:23>), increasedprecision);
        // result is in the range 0x80000000 .. 0xff800000 representing [1.0 .. 2.0)
        result = estimate<8:0> : Zeros(N-9);
    return result;
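
A non-normative illustration of the final packing step: the 9-bit estimate, already in [256, 511], is placed in bits <31:23> of the result. The C sketch below assumes the estimate has been computed elsewhere (RecipSqrtEstimate itself is not reproduced here); the variable names are ours, not the specification's.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t estimate = 256;                     /* e.g. an estimate representing 1.0 */
    uint32_t result = (estimate & 0x1FF) << 23;  /* estimate<8:0> : Zeros(23) */
    printf("0x%08X\n", (unsigned)result);        /* prints 0x80000000 */
    return 0;
}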

Library pseudocode for shared/functions/vector/UnsignedRecipEstimate

// UnsignedRecipEstimate()
// =======================

bits(N) UnsignedRecipEstimate(bits(N) operand)
    assert N == 32;
    bits(N) result;
    if operand<N-1> == '0' then    // Operands <= 0x7FFFFFFF produce 0xFFFFFFFF
        result = Ones(N);
    else
        // input is in the range 0x80000000 .. 0xffffffff representing [0.5 .. 1.0)
        // estimate is in the range 256 to 511 representing [1.0 .. 2.0)
        increasedprecision = FALSE;
        estimate = RecipEstimate(UInt(operand<31:23>), increasedprecision);
        // result is in the range 0x80000000 .. 0xff800000 representing [1.0 .. 2.0)
        result = estimate<8:0> : Zeros(N-9);
    return result;

Library pseudocode for shared/functions/vector/UnsignedSatQ

// UnsignedSatQ()
// ==============

(bits(N), boolean) UnsignedSatQ(integer i, integer N)
    integer result;
    boolean saturated;
    if i > 2^N - 1 then
        result = 2^N - 1;  saturated = TRUE;
    elsif i < 0 then
        result = 0;  saturated = TRUE;
    else
        result = i;  saturated = FALSE;
    return (result<N-1:0>, saturated);
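
The saturation arithmetic above is easy to mirror in ordinary C. The sketch below is a minimal, non-normative rendering for N <= 32 (the helper name and the int64_t working type are our assumptions, chosen so the clamp bounds fit without overflow).

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Clamp a signed wide value into [0, 2^n - 1], reporting saturation. */
static uint32_t unsigned_sat_q(int64_t i, unsigned n, bool *saturated)
{
    const int64_t max = ((int64_t)1 << n) - 1;
    if (i > max) { *saturated = true;  return (uint32_t)max; }
    if (i < 0)   { *saturated = true;  return 0; }
    *saturated = false;
    return (uint32_t)i;
}

int main(void)
{
    bool sat;
    printf("%u (sat=%d)\n", unsigned_sat_q(300, 8, &sat), sat);  /* 255, sat=1 */
    printf("%u (sat=%d)\n", unsigned_sat_q(-5, 8, &sat), sat);   /* 0,   sat=1 */
    printf("%u (sat=%d)\n", unsigned_sat_q(42, 8, &sat), sat);   /* 42,  sat=0 */
    return 0;
}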

Library pseudocode for shared/trace/Common/GetTimestamp

// GetTimestamp()
// ==============
// Returns the Timestamp depending on the type

bits(64) GetTimestamp(TimeStamp timeStampType)
    case timeStampType of
        when TimeStamp_Physical
            return PhysicalCountInt();
        when TimeStamp_Virtual
            return PhysicalCountInt() - CNTVOFF_EL2;
        when TimeStamp_OffsetPhysical
            return PhysicalCountInt() - CNTPOFF_EL2;
        when TimeStamp_None
            return Zeros(64);
        when TimeStamp_CoreSight
            return bits(64) IMPLEMENTATION_DEFINED "CoreSight timestamp";
        otherwise
            Unreachable();

Library pseudocode for shared/trace/selfhosted/EffectiveE0HTRE

// EffectiveE0HTRE()
// =================
// Returns effective E0HTRE value

bit EffectiveE0HTRE()
    return if ELUsingAArch32(EL2) then HTRFCR.E0HTRE else TRFCR_EL2.E0HTRE;

Library pseudocode for shared/trace/selfhosted/EffectiveE0TRE

// EffectiveE0TRE()
// ================
// Returns effective E0TRE value

bit EffectiveE0TRE()
    return if ELUsingAArch32(EL1) then TRFCR.E0TRE else TRFCR_EL1.E0TRE;

Library pseudocode for shared/trace/selfhosted/EffectiveE1TRE

// EffectiveE1TRE()
// ================
// Returns effective E1TRE value

bit EffectiveE1TRE()
    return if UsingAArch32() then TRFCR.E1TRE else TRFCR_EL1.E1TRE;

Library pseudocode for shared/trace/selfhosted/EffectiveE2TRE

// EffectiveE2TRE()
// ================
// Returns effective E2TRE value

bit EffectiveE2TRE()
    return if UsingAArch32() then HTRFCR.E2TRE else TRFCR_EL2.E2TRE;

Library pseudocode for shared/trace/selfhosted/SelfHostedTraceEnabled

// SelfHostedTraceEnabled()
// ========================
// Returns TRUE if Self-hosted Trace is enabled.

boolean SelfHostedTraceEnabled()
    if !(HaveTraceExt() && HaveSelfHostedTrace()) then return FALSE;
    if EDSCR.TFO == '0' then
        return TRUE;
    if HaveEL(EL3) then
        secure_trace_enable = if ELUsingAArch32(EL3) then SDCR.STE else MDCR_EL3.STE;
        if secure_trace_enable == '1' && !ExternalSecureNoninvasiveDebugEnabled() then
            return TRUE;
    else
        if SecureOnlyImplementation() && !ExternalSecureNoninvasiveDebugEnabled() then
            return TRUE;
    return FALSE;

Library pseudocode for shared/trace/selfhosted/TraceAllowed

// TraceAllowed()
// ==============
// Returns TRUE if Self-hosted Trace is allowed in the given Exception level.

boolean TraceAllowed(bits(2) el)
    if !HaveTraceExt() then return FALSE;
    ss = SecurityStateAtEL(el);
    if SelfHostedTraceEnabled() then
        boolean trace_allowed;
        // Detect scenarios where tracing in this Security state is never allowed.
        case ss of
            when SS_NonSecure trace_allowed = TRUE;
            when SS_Secure
                bit trace_bit;
                if HaveEL(EL3) then
                    trace_bit = if ELUsingAArch32(EL3) then SDCR.STE else MDCR_EL3.STE;
                else
                    trace_bit = '1';
                trace_allowed = trace_bit == '1';
        bit TRE_bit;
        case el of
            when EL3 TRE_bit = if !HaveAArch64() then TRFCR.E1TRE else '0';
            when EL2 TRE_bit = EffectiveE2TRE();
            when EL1 TRE_bit = EffectiveE1TRE();
            when EL0
                if EffectiveTGE() == '1' then
                    TRE_bit = EffectiveE0HTRE();
                else
                    TRE_bit = EffectiveE0TRE();
        return trace_allowed && TRE_bit == '1';
    else
        case ss of
            when SS_NonSecure return ExternalNoninvasiveDebugEnabled();
            when SS_Secure    return ExternalSecureNoninvasiveDebugEnabled();

Library pseudocode for shared/trace/selfhosted/TraceContextIDR2

// TraceContextIDR2()
// ==================

boolean TraceContextIDR2()
    if !TraceAllowed(PSTATE.EL) || !HaveEL(EL2) then return FALSE;
    return (!SelfHostedTraceEnabled() || TRFCR_EL2.CX == '1');

Library pseudocode for shared/trace/selfhosted/TraceSynchronizationBarrier

// Memory barrier instruction that preserves the relative order of memory accesses to System
// registers due to trace operations and other memory accesses to the same registers

TraceSynchronizationBarrier();

Library pseudocode for shared/trace/selfhosted/TraceTimeStamp

// TraceTimeStamp()
// ================

TimeStamp TraceTimeStamp()
    if SelfHostedTraceEnabled() then
        if HaveEL(EL2) then
            TS_el2 = TRFCR_EL2.TS;
            if !HaveECVExt() && TS_el2 == '10' then
                // Reserved value
                (-, TS_el2) = ConstrainUnpredictableBits(Unpredictable_EL2TIMESTAMP);
            case TS_el2 of
                when '00'
                    // Falls out to check TRFCR_EL1.TS
                when '01'
                    return TimeStamp_Virtual;
                when '10'
                    assert HaveECVExt();    // Otherwise ConstrainUnpredictableBits removes this case
                    return TimeStamp_OffsetPhysical;
                when '11'
                    return TimeStamp_Physical;
        TS_el1 = TRFCR_EL1.TS;
        if TS_el1 == '00' || (!HaveECVExt() && TS_el1 == '10') then
            // Reserved value
            (-, TS_el1) = ConstrainUnpredictableBits(Unpredictable_EL1TIMESTAMP);
        case TS_el1 of
            when '01'
                return TimeStamp_Virtual;
            when '10'
                assert HaveECVExt();
                return TimeStamp_OffsetPhysical;
            when '11'
                return TimeStamp_Physical;
            otherwise
                Unreachable();    // ConstrainUnpredictableBits removes this case
    else
        return TimeStamp_CoreSight;

Library pseudocode for shared/translation/at/ATAccess

enumeration ATAccess { ATAccess_Read, ATAccess_Write, ATAccess_ReadPAN, ATAccess_WritePAN };

Library pseudocode for shared/translation/at/EncodePARAttrs

// EncodePARAttrs()
// ================
// Convert orthogonal attributes and hints to 64-bit PAR ATTR field.

bits(8) EncodePARAttrs(MemoryAttributes memattrs)
    bits(8) result;
    if HaveMTEExt() && memattrs.tagged then
        result<7:0> = '11110000';
        return result;
    if memattrs.memtype == MemType_Device then
        result<7:4> = '0000';
        if memattrs.device == DeviceType_nGnRnE then
            result<3:0> = '0000';
        elsif memattrs.device == DeviceType_nGnRE then
            result<3:0> = '0100';
        elsif memattrs.device == DeviceType_nGRE then
            result<3:0> = '1000';
        else    // DeviceType_GRE
            result<3:0> = '1100';
    else
        if memattrs.outer.attrs == MemAttr_WT then
            result<7:6> = if memattrs.outer.transient then '00' else '10';
            result<5:4> = memattrs.outer.hints;
        elsif memattrs.outer.attrs == MemAttr_WB then
            result<7:6> = if memattrs.outer.transient then '01' else '11';
            result<5:4> = memattrs.outer.hints;
        else    // MemAttr_NC
            result<7:4> = '0100';
        if memattrs.inner.attrs == MemAttr_WT then
            result<3:2> = if memattrs.inner.transient then '00' else '10';
            result<1:0> = memattrs.inner.hints;
        elsif memattrs.inner.attrs == MemAttr_WB then
            result<3:2> = if memattrs.inner.transient then '01' else '11';
            result<1:0> = memattrs.inner.hints;
        else    // MemAttr_NC
            result<3:0> = '0100';
    return result;
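
A non-normative C sketch of the Normal-memory half of this encoding: each cacheability level packs into a 4-bit MAIR-style field (cacheability code in the top two bits, allocation hints in the bottom two), and the outer and inner nibbles are then concatenated. The enum and helper names are ours; hints use the same 2-bit encoding as the pseudocode's MemHint values ('11' = read/write allocate).

#include <stdio.h>

enum attr { NC, WT, WB };

static unsigned encode_level(enum attr a, unsigned hints, int transient)
{
    switch (a) {
    case WT: return ((transient ? 0u : 2u) << 2) | hints;  /* '00hh' / '10hh' */
    case WB: return ((transient ? 1u : 3u) << 2) | hints;  /* '01hh' / '11hh' */
    default: return 0x4;                                   /* Non-cacheable   */
    }
}

int main(void)
{
    /* Outer WB RWA non-transient : inner WB RWA non-transient -> 0xFF */
    unsigned attr = (encode_level(WB, 3, 0) << 4) | encode_level(WB, 3, 0);
    printf("PAR.ATTR = 0x%02X\n", attr);
    return 0;
}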

Library pseudocode for shared/translation/at/PAREncodeShareability

// PAREncodeShareability()
// =======================
// Derive 64-bit PAR SH field.

bits(2) PAREncodeShareability(MemoryAttributes memattrs)
    if (memattrs.memtype == MemType_Device ||
        (memattrs.inner.attrs == MemAttr_NC && memattrs.outer.attrs == MemAttr_NC)) then
        // Force Outer-Shareable on Device and Normal Non-Cacheable memory
        return '10';
    case memattrs.shareability of
        when Shareability_NSH return '00';
        when Shareability_ISH return '11';
        when Shareability_OSH return '10';
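
The same mapping in a minimal, non-normative C sketch, showing the Device / fully Non-cacheable override ahead of the ordinary NSH/ISH/OSH encoding (all names below are ours, not the specification's):

#include <stdio.h>

enum memtype { DEVICE, NORMAL };
enum shareability { NSH, ISH, OSH };

static unsigned par_encode_sh(enum memtype t, int fully_nc, enum shareability sh)
{
    if (t == DEVICE || fully_nc)
        return 0x2;               /* forced Outer Shareable ('10') */
    switch (sh) {
    case NSH: return 0x0;
    case ISH: return 0x3;
    default:  return 0x2;         /* OSH */
    }
}

int main(void)
{
    printf("SH = %u\n", par_encode_sh(NORMAL, 0, ISH));   /* prints 3 */
    return 0;
}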

Library pseudocode for shared/translation/at/TranslationStage

enumeration TranslationStage { TranslationStage_1, TranslationStage_12 };

Library pseudocode for shared/translation/attrs/DecodeDevice

// DecodeDevice()
// ==============
// Decode output Device type

DeviceType DecodeDevice(bits(2) device)
    case device of
        when '00' return DeviceType_nGnRnE;
        when '01' return DeviceType_nGnRE;
        when '10' return DeviceType_nGRE;
        when '11' return DeviceType_GRE;

Library pseudocode for shared/translation/attrs/DecodeLDFAttr

// DecodeLDFAttr()
// ===============
// Decode memory attributes using LDF (Long Descriptor Format) mapping

MemAttrHints DecodeLDFAttr(bits(4) attr)
    MemAttrHints ldfattr;
    if attr IN {'x0xx'} then ldfattr.attrs = MemAttr_WT;       // Write-through
    elsif attr == '0100' then ldfattr.attrs = MemAttr_NC;      // Non-cacheable
    elsif attr IN {'x1xx'} then ldfattr.attrs = MemAttr_WB;    // Write-back
    else Unreachable();

    // Allocation hints are applicable only to cacheable memory.
    if ldfattr.attrs != MemAttr_NC then
        case attr<1:0> of
            when '00' ldfattr.hints = MemHint_No;     // No allocation hints
            when '01' ldfattr.hints = MemHint_WA;     // Write-allocate
            when '10' ldfattr.hints = MemHint_RA;     // Read-allocate
            when '11' ldfattr.hints = MemHint_RWA;    // Read/Write allocate

    // The Transient hint applies only to cacheable memory with some allocation hints.
    if ldfattr.attrs != MemAttr_NC && ldfattr.hints != MemHint_No then
        ldfattr.transient = attr<3> == '0';
    return ldfattr;

Library pseudocode for shared/translation/attrs/DecodeSDFAttr

// DecodeSDFAttr()
// ===============
// Decode memory attributes using SDF (Short Descriptor Format) mapping

MemAttrHints DecodeSDFAttr(bits(2) rgn)
    MemAttrHints sdfattr;
    case rgn of
        when '00'    // Non-cacheable (no allocate)
            sdfattr.attrs = MemAttr_NC;
        when '01'    // Write-back, Read and Write allocate
            sdfattr.attrs = MemAttr_WB;
            sdfattr.hints = MemHint_RWA;
        when '10'    // Write-through, Read allocate
            sdfattr.attrs = MemAttr_WT;
            sdfattr.hints = MemHint_RA;
        when '11'    // Write-back, Read allocate
            sdfattr.attrs = MemAttr_WB;
            sdfattr.hints = MemHint_RA;
    sdfattr.transient = FALSE;
    return sdfattr;

Library pseudocode for shared/translation/attrs/DecodeShareability

// DecodeShareability()
// ====================
// Decode shareability of target memory region

Shareability DecodeShareability(bits(2) sh)
    case sh of
        when '10' return Shareability_OSH;
        when '11' return Shareability_ISH;
        when '00' return Shareability_NSH;
        otherwise
            case ConstrainUnpredictable(Unpredictable_Shareability) of
                when Constraint_OSH return Shareability_OSH;
                when Constraint_ISH return Shareability_ISH;
                when Constraint_NSH return Shareability_NSH;

Library pseudocode for shared/translation/attrs/EffectiveShareability

// EffectiveShareability()
// =======================
// Force Outer Shareability on Device and Normal iNCoNC memory

Shareability EffectiveShareability(MemoryAttributes memattrs)
    if (memattrs.memtype == MemType_Device ||
        (memattrs.inner.attrs == MemAttr_NC && memattrs.outer.attrs == MemAttr_NC)) then
        return Shareability_OSH;
    else
        return memattrs.shareability;

Library pseudocode for shared/translation/attrs/MAIRAttr

// MAIRAttr()
// ==========
// Retrieve the memory attribute encoding indexed in the given MAIR

bits(8) MAIRAttr(integer index, MAIRType mair)
    bit_index = 8 * index;
    return mair<bit_index+7:bit_index>;
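
Since each AttrN field occupies byte N of the 64-bit MAIR value, selection reduces to a shift and a mask. A minimal, non-normative C sketch (names are ours):

#include <stdint.h>
#include <stdio.h>

static uint8_t mair_attr(unsigned index, uint64_t mair)
{
    return (uint8_t)(mair >> (8 * index));
}

int main(void)
{
    uint64_t mair = 0x00000000004404FFull;   /* Attr0=0xFF, Attr1=0x04, Attr2=0x44 */
    printf("Attr1 = 0x%02X\n", mair_attr(1, mair));   /* prints 0x04 */
    return 0;
}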

Library pseudocode for shared/translation/attrs/NormalNCMemAttr

// NormalNCMemAttr()
// =================
// Normal Non-cacheable memory attributes

MemoryAttributes NormalNCMemAttr()
    MemAttrHints non_cacheable;
    non_cacheable.attrs = MemAttr_NC;

    MemoryAttributes nc_memattrs;
    nc_memattrs.memtype      = MemType_Normal;
    nc_memattrs.outer        = non_cacheable;
    nc_memattrs.inner        = non_cacheable;
    nc_memattrs.shareability = Shareability_OSH;
    nc_memattrs.tagged       = FALSE;
    return nc_memattrs;

Library pseudocode for shared/translation/attrs/S1ConstrainUnpredictableRESMAIR

// S1ConstrainUnpredictableRESMAIR()
// =================================
// Determine whether a reserved value occupies MAIR_ELx.AttrN

boolean S1ConstrainUnpredictableRESMAIR(bits(8) attr, boolean s1aarch64)
    case attr of
        when '0000xx01' return !(s1aarch64 && HaveFeatXS());
        when '0000xxxx' return attr<1:0> != '00';
        when '01000000' return !(s1aarch64 && HaveFeatXS());
        when '10100000' return !(s1aarch64 && HaveFeatXS());
        when '11110000' return !(s1aarch64 && HaveMTE2Ext());
        when 'xxxx0000' return TRUE;
        otherwise       return FALSE;

Library pseudocode for shared/translation/attrs/S1DecodeMemAttrs

// S1DecodeMemAttrs()
// ==================
// Decode MAIR-format memory attributes assigned in stage 1

MemoryAttributes S1DecodeMemAttrs(bits(8) attr_in, bits(2) sh, boolean s1aarch64)
    bits(8) attr = attr_in;
    if S1ConstrainUnpredictableRESMAIR(attr, s1aarch64) then
        (-, attr) = ConstrainUnpredictableBits(Unpredictable_RESMAIR);

    MemoryAttributes memattrs;
    case attr of
        when '0000xxxx'    // Device memory
            memattrs.memtype = MemType_Device;
            memattrs.device  = DecodeDevice(attr<3:2>);
            memattrs.tagged  = FALSE;
            memattrs.xs      = if s1aarch64 then NOT attr<0> else '1';
        when '01000000'
            assert s1aarch64 && HaveFeatXS();
            memattrs.memtype     = MemType_Normal;
            memattrs.tagged      = FALSE;
            memattrs.outer.attrs = MemAttr_NC;
            memattrs.inner.attrs = MemAttr_NC;
            memattrs.xs          = '0';
        when '10100000'
            assert s1aarch64 && HaveFeatXS();
            memattrs.memtype         = MemType_Normal;
            memattrs.tagged          = FALSE;
            memattrs.outer.attrs     = MemAttr_WT;
            memattrs.outer.hints     = MemHint_RA;
            memattrs.outer.transient = FALSE;
            memattrs.inner.attrs     = MemAttr_WT;
            memattrs.inner.hints     = MemHint_RA;
            memattrs.inner.transient = FALSE;
            memattrs.xs              = '0';
        when '11110000'    // Tagged memory
            assert s1aarch64 && HaveMTE2Ext();
            memattrs.memtype         = MemType_Normal;
            memattrs.tagged          = TRUE;
            memattrs.outer.attrs     = MemAttr_WB;
            memattrs.outer.hints     = MemHint_RWA;
            memattrs.outer.transient = FALSE;
            memattrs.inner.attrs     = MemAttr_WB;
            memattrs.inner.hints     = MemHint_RWA;
            memattrs.inner.transient = FALSE;
            memattrs.xs              = '0';
        otherwise
            memattrs.memtype = MemType_Normal;
            memattrs.outer   = DecodeLDFAttr(attr<7:4>);
            memattrs.inner   = DecodeLDFAttr(attr<3:0>);
            memattrs.tagged  = FALSE;
            if (memattrs.inner.attrs == MemAttr_WB &&
                memattrs.outer.attrs == MemAttr_WB) then
                memattrs.xs = '0';
            else
                memattrs.xs = '1';

    memattrs.shareability = DecodeShareability(sh);
    return memattrs;

Library pseudocode for shared/translation/attrs/S2CombineS1AttrHints

// S2CombineS1AttrHints()
// ======================
// Determine resultant Normal memory cacheability and allocation hints from
// combining stage 1 Normal memory attributes and stage 2 cacheability attributes.

MemAttrHints S2CombineS1AttrHints(MemAttrHints s1_attrhints, MemAttrHints s2_attrhints)
    MemAttrHints attrhints;
    if s1_attrhints.attrs == MemAttr_NC || s2_attrhints.attrs == MemAttr_NC then
        attrhints.attrs = MemAttr_NC;
    elsif s1_attrhints.attrs == MemAttr_WT || s2_attrhints.attrs == MemAttr_WT then
        attrhints.attrs = MemAttr_WT;
    else
        attrhints.attrs = MemAttr_WB;

    // Stage 2 does not assign any allocation hints
    // Instead, they are inherited from stage 1
    if attrhints.attrs != MemAttr_NC then
        attrhints.hints     = s1_attrhints.hints;
        attrhints.transient = s1_attrhints.transient;
    return attrhints;
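
The cacheability combination above is a "weakest wins" rule: with the ordering NC < WT < WB, the result is the minimum of the two stages. A minimal, non-normative C sketch (enum ordering and names are ours):

#include <stdio.h>

enum attr { NC = 0, WT = 1, WB = 2 };

static enum attr combine_attrs(enum attr s1, enum attr s2)
{
    return s1 < s2 ? s1 : s2;
}

int main(void)
{
    printf("%d\n", combine_attrs(WB, WT));   /* 1: WB with WT gives WT   */
    printf("%d\n", combine_attrs(WB, NC));   /* 0: anything with NC -> NC */
    return 0;
}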

Library pseudocode for shared/translation/attrs/S2CombineS1Device

// S2CombineS1Device()
// ===================
// Determine resultant Device type from combining output memory attributes
// in stage 1 and Device attributes in stage 2

DeviceType S2CombineS1Device(DeviceType s1_device, DeviceType s2_device)
    if s1_device == DeviceType_nGnRnE || s2_device == DeviceType_nGnRnE then
        return DeviceType_nGnRnE;
    elsif s1_device == DeviceType_nGnRE || s2_device == DeviceType_nGnRE then
        return DeviceType_nGnRE;
    elsif s1_device == DeviceType_nGRE || s2_device == DeviceType_nGRE then
        return DeviceType_nGRE;
    else
        return DeviceType_GRE;

Library pseudocode for shared/translation/attrs/S2CombineS1MemAttrs

// S2CombineS1MemAttrs()
// =====================
// Combine stage 2 with stage 1 memory attributes

MemoryAttributes S2CombineS1MemAttrs(MemoryAttributes s1_memattrs, MemoryAttributes s2_memattrs)
    MemoryAttributes memattrs;
    if s1_memattrs.memtype == MemType_Device && s2_memattrs.memtype == MemType_Device then
        memattrs.memtype = MemType_Device;
        memattrs.device  = S2CombineS1Device(s1_memattrs.device, s2_memattrs.device);
    elsif s1_memattrs.memtype == MemType_Device then    // S2 Normal, S1 Device
        memattrs = s1_memattrs;
    elsif s2_memattrs.memtype == MemType_Device then    // S2 Device, S1 Normal
        memattrs = s2_memattrs;
    else                                                // S2 Normal, S1 Normal
        memattrs.memtype = MemType_Normal;
        memattrs.inner   = S2CombineS1AttrHints(s1_memattrs.inner, s2_memattrs.inner);
        memattrs.outer   = S2CombineS1AttrHints(s1_memattrs.outer, s2_memattrs.outer);

    if ELUsingAArch32(EL2) || !HaveMTE2Ext() then
        memattrs.tagged = FALSE;
    else
        memattrs.tagged = AArch64.IsS2ResultTagged(memattrs, s1_memattrs.tagged);

    memattrs.shareability = S2CombineS1Shareability(s1_memattrs.shareability,
                                                    s2_memattrs.shareability);
    memattrs.xs = s2_memattrs.xs;
    memattrs.shareability = EffectiveShareability(memattrs);
    return memattrs;

Library pseudocode for shared/translation/attrs/S2CombineS1Shareability

// S2CombineS1Shareability()
// =========================
// Combine stage 2 shareability with stage 1

Shareability S2CombineS1Shareability(Shareability s1_shareability, Shareability s2_shareability)
    if (s1_shareability == Shareability_OSH || s2_shareability == Shareability_OSH) then
        return Shareability_OSH;
    elsif (s1_shareability == Shareability_ISH || s2_shareability == Shareability_ISH) then
        return Shareability_ISH;
    else
        return Shareability_NSH;
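
Dual to the cacheability rule, shareability combination picks the more restrictive value: with NSH < ISH < OSH, the result is the maximum of the two stages. A minimal, non-normative C sketch (names are ours):

#include <stdio.h>

enum shareability { NSH = 0, ISH = 1, OSH = 2 };

static enum shareability combine_sh(enum shareability s1, enum shareability s2)
{
    return s1 > s2 ? s1 : s2;
}

int main(void)
{
    printf("%d\n", combine_sh(NSH, ISH));   /* 1: ISH wins over NSH */
    return 0;
}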

Library pseudocode for shared/translation/attrs/S2DecodeCacheability

// S2DecodeCacheability()
// ======================
// Determine the stage 2 cacheability for Normal memory

MemAttrHints S2DecodeCacheability(bits(2) attr)
    MemAttrHints s2attr;
    case attr of
        when '01' s2attr.attrs = MemAttr_NC;    // Non-cacheable
        when '10' s2attr.attrs = MemAttr_WT;    // Write-through
        when '11' s2attr.attrs = MemAttr_WB;    // Write-back
        otherwise                               // Constrained unpredictable
            case ConstrainUnpredictable(Unpredictable_S2RESMEMATTR) of
                when Constraint_NC s2attr.attrs = MemAttr_NC;
                when Constraint_WT s2attr.attrs = MemAttr_WT;
                when Constraint_WB s2attr.attrs = MemAttr_WB;

    // Stage 2 does not assign hints or the transient property
    // They are inherited from stage 1 if the result of the combination allows it
    s2attr.hints     = bits(2) UNKNOWN;
    s2attr.transient = boolean UNKNOWN;
    return s2attr;

Library pseudocode for shared/translation/attrs/S2DecodeMemAttrs

// S2DecodeMemAttrs()
// ==================
// Decode stage 2 memory attributes

MemoryAttributes S2DecodeMemAttrs(bits(4) attr, bits(2) sh)
    MemoryAttributes memattrs;
    case attr of
        when '00xx'    // Device memory
            memattrs.memtype = MemType_Device;
            memattrs.device  = DecodeDevice(attr<1:0>);
        otherwise      // Normal memory
            memattrs.memtype = MemType_Normal;
            memattrs.outer   = S2DecodeCacheability(attr<3:2>);
            memattrs.inner   = S2DecodeCacheability(attr<1:0>);

    memattrs.shareability = DecodeShareability(sh);
    return memattrs;

Library pseudocode for shared/translation/attrs/WalkMemAttrs

// WalkMemAttrs()
// ==============
// Retrieve memory attributes of translation table walk

MemoryAttributes WalkMemAttrs(bits(2) sh, bits(2) irgn, bits(2) orgn)
    MemoryAttributes walkmemattrs;
    walkmemattrs.memtype      = MemType_Normal;
    walkmemattrs.shareability = DecodeShareability(sh);
    walkmemattrs.inner        = DecodeSDFAttr(irgn);
    walkmemattrs.outer        = DecodeSDFAttr(orgn);
    walkmemattrs.tagged       = FALSE;
    if (walkmemattrs.inner.attrs == MemAttr_WB &&
        walkmemattrs.outer.attrs == MemAttr_WB) then
        walkmemattrs.xs = '0';
    else
        walkmemattrs.xs = '1';
    return walkmemattrs;

Library pseudocode for shared/translation/faults/AlignmentFault

// AlignmentFault()
// ================

FaultRecord AlignmentFault(AccType acctype, boolean iswrite, boolean secondstage)
    FaultRecord fault;
    fault.statuscode  = Fault_Alignment;
    fault.acctype     = acctype;
    fault.write       = iswrite;
    fault.secondstage = secondstage;
    return fault;

Library pseudocode for shared/translation/faults/AsyncExternalAbort

// AsyncExternalAbort()
// ====================
// Return a fault record indicating an asynchronous external abort

FaultRecord AsyncExternalAbort(boolean parity, bits(2) errortype, bit extflag)
    FaultRecord fault;
    fault.statuscode  = if parity then Fault_AsyncParity else Fault_AsyncExternal;
    fault.extflag     = extflag;
    fault.errortype   = errortype;
    fault.acctype     = AccType_NORMAL;
    fault.secondstage = FALSE;
    fault.s2fs1walk   = FALSE;
    return fault;

Library pseudocode for shared/translation/faults/NoFault

// NoFault()
// =========
// Return a clear fault record indicating no faults have occurred

FaultRecord NoFault()
    FaultRecord fault;
    fault.statuscode  = Fault_None;
    fault.acctype     = AccType_NORMAL;
    fault.secondstage = FALSE;
    fault.s2fs1walk   = FALSE;
    return fault;

Library pseudocode for shared/translation/translation/S1TranslationRegime

// S1TranslationRegime()
// =====================
// Stage 1 translation regime for the given Exception level

bits(2) S1TranslationRegime(bits(2) el)
    if el != EL0 then
        return el;
    elsif HaveEL(EL3) && ELUsingAArch32(EL3) && SCR.NS == '0' then
        return EL3;
    elsif HaveVirtHostExt() && ELIsInHost(el) then
        return EL2;
    else
        return EL1;

// S1TranslationRegime()
// =====================
// Returns the Exception level controlling the current Stage 1 translation regime. For the most
// part this is unused in code because the system register accessors (SCTLR[], etc.) implicitly
// return the correct value.

bits(2) S1TranslationRegime()
    return S1TranslationRegime(PSTATE.EL);

Library pseudocode for shared/translation/vmsa/AddressDescriptor

type AddressDescriptor is (
    FaultRecord      fault,      // fault.statuscode indicates whether the address is valid
    MemoryAttributes memattrs,
    FullAddress      paddress,
    bits(64)         vaddress
)

constant integer FINAL_LEVEL = 3;

Library pseudocode for shared/translation/vmsa/ContiguousSize

// ContiguousSize()
// ================
// Return the number of entries log 2 marking a contiguous output range

integer ContiguousSize(TGx tgx, integer level)
    case tgx of
        when TGx_4KB
            assert level IN {1, 2, 3};
            return 4;
        when TGx_16KB
            assert level IN {2, 3};
            return if level == 2 then 5 else 7;
        when TGx_64KB
            assert level IN {2, 3};
            return 5;
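
A non-normative worked example: since the return value is log2 of the number of contiguous entries, one contiguous range covers 2^(TranslationSize + ContiguousSize) bytes. Assuming TranslationSize for a 4KB granule at level 3 is 12 bits (one 4KB page), ContiguousSize of 4 gives 16 pages, i.e. 64KB:

#include <stdio.h>

int main(void)
{
    unsigned tsize = 12, csize = 4;   /* assumed: TGx_4KB, level 3 */
    printf("contiguous range = %llu bytes\n",
           (unsigned long long)1 << (tsize + csize));   /* prints 65536 */
    return 0;
}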

Library pseudocode for shared/translation/vmsa/CreateAddressDescriptor

// CreateAddressDescriptor()
// =========================
// Set internal members for address descriptor type to valid values

AddressDescriptor CreateAddressDescriptor(bits(64) va, FullAddress pa, MemoryAttributes memattrs)
    AddressDescriptor addrdesc;
    addrdesc.paddress = pa;
    addrdesc.vaddress = va;
    addrdesc.memattrs = memattrs;
    addrdesc.fault    = NoFault();
    return addrdesc;

Library pseudocode for shared/translation/vmsa/CreateFaultyAddressDescriptor

// CreateFaultyAddressDescriptor()
// ===============================
// Set internal members for address descriptor type with values indicating error

AddressDescriptor CreateFaultyAddressDescriptor(bits(64) va, FaultRecord fault)
    AddressDescriptor addrdesc;
    addrdesc.vaddress = va;
    addrdesc.fault    = fault;
    return addrdesc;

Library pseudocode for shared/translation/vmsa/DescriptorType

enumeration DescriptorType {
    DescriptorType_Table,
    DescriptorType_Block,
    DescriptorType_Page,
    DescriptorType_Invalid
};

Library pseudocode for shared/translation/vmsa/Domains

constant bits(2) Domain_NoAccess = '00';
constant bits(2) Domain_Client   = '01';
constant bits(2) Domain_Manager  = '11';

Library pseudocode for shared/translation/vmsa/FetchDescriptor

// FetchDescriptor()
// =================
// Fetch a translation table descriptor

(FaultRecord, bits(N)) FetchDescriptor(bit ee, AddressDescriptor walkaddress, FaultRecord fault_in)
    // 32-bit descriptors for AArch32 Short-descriptor format
    // 64-bit descriptors for AArch64 or AArch32 Long-descriptor format
    assert N == 32 || N == 64;
    bits(N) descriptor;
    FaultRecord fault = fault_in;

    AccessDescriptor walkacc;
    walkacc.acctype = AccType_TTW;
    // MPAM PARTID for translation table walk is determined by the access invoking the translation
    walkacc.mpam = GenMPAMcurEL(fault.acctype);

    PhysMemRetStatus memstatus;
    (memstatus, descriptor) = PhysMemRead(walkaddress, N DIV 8, walkacc);
    if IsFault(memstatus) then
        fault = HandleExternalTTWAbort(memstatus, fault.write, walkaddress, walkacc, N DIV 8, fault);
        if IsFault(fault.statuscode) then
            return (fault, bits(N) UNKNOWN);

    if ee == '1' then
        descriptor = BigEndianReverse(descriptor);
    return (fault, descriptor);
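
A non-normative C sketch of the ee == '1' step: descriptors fetched from big-endian translation tables are byte-reversed before being interpreted. The helper name is ours; the loop is a straightforward 64-bit byte reversal.

#include <stdint.h>
#include <stdio.h>

static uint64_t big_endian_reverse64(uint64_t x)
{
    uint64_t r = 0;
    for (int i = 0; i < 8; i++)
        r = (r << 8) | ((x >> (8 * i)) & 0xFF);
    return r;
}

int main(void)
{
    printf("0x%016llX\n",
           (unsigned long long)big_endian_reverse64(0x0102030405060708ull));
    /* prints 0x0807060504030201 */
    return 0;
}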

Library pseudocode for shared/translation/vmsa/HasUnprivileged

// HasUnprivileged()
// =================
// Returns whether a translation regime serves EL0 as well as a higher EL

boolean HasUnprivileged(Regime regime)
    return (regime IN {Regime_EL20, Regime_EL30, Regime_EL10});

Library pseudocode for shared/translation/vmsa/IsAtomicRW

// IsAtomicRW()
// ============
// Is the access an atomic operation?

boolean IsAtomicRW(AccType acctype)
    return acctype IN {AccType_ATOMICRW, AccType_ORDEREDRW, AccType_ORDEREDATOMICRW};

Library pseudocode for shared/translation/vmsa/Regime

enumeration Regime {
    Regime_EL3,     // EL3
    Regime_EL30,    // EL3&0 (PL1&0 when EL3 is AArch32)
    Regime_EL2,     // EL2
    Regime_EL20,    // EL2&0
    Regime_EL10     // EL1&0
};

Library pseudocode for shared/translation/vmsa/RegimeUsingAArch32

// RegimeUsingAArch32()
// ====================
// Determine if the EL controlling the regime executes in AArch32 state

boolean RegimeUsingAArch32(Regime regime)
    case regime of
        when Regime_EL10 return ELUsingAArch32(EL1);
        when Regime_EL30 return TRUE;
        when Regime_EL20 return FALSE;
        when Regime_EL2  return ELUsingAArch32(EL2);
        when Regime_EL3  return FALSE;

Library pseudocode for shared/translation/vmsa/S1TTWParams

type S1TTWParams is (
    // A64-VMSA exclusive parameters
    bit      ha,        // TCR_ELx.HA
    bit      hd,        // TCR_ELx.HD
    bit      tbi,       // TCR_ELx.TBI{x}
    bit      tbid,      // TCR_ELx.TBID{x}
    bit      e0pd,      // TCR_EL1.E0PDx or TCR_EL2.E0PDx when HCR_EL2.E2H == '1'
    bit      ds,        // TCR_ELx.DS
    bits(3)  ps,        // TCR_ELx.{I}PS
    bits(6)  txsz,      // TCR_ELx.TxSZ
    bit      epan,      // SCTLR_EL1.EPAN or SCTLR_EL2.EPAN when HCR_EL2.E2H == '1'
    bit      dct,       // HCR_EL2.DCT
    bit      nv1,       // HCR_EL2.NV1
    bit      cmow,      // SCTLR_EL1.CMOW or SCTLR_EL2.CMOW when HCR_EL2.E2H == '1'
    // A32-VMSA exclusive parameters
    bits(3)  t0sz,      // TTBCR.T0SZ
    bits(3)  t1sz,      // TTBCR.T1SZ
    bit      uwxn,      // SCTLR.UWXN
    // Parameters common to both A64-VMSA & A32-VMSA (A64/A32)
    TGx      tgx,       // TCR_ELx.TGx / Always TGx_4KB
    bits(2)  irgn,      // TCR_ELx.IRGNx / TTBCR.IRGNx or HTCR.IRGN0
    bits(2)  orgn,      // TCR_ELx.ORGNx / TTBCR.ORGNx or HTCR.ORGN0
    bits(2)  sh,        // TCR_ELx.SHx / TTBCR.SHx or HTCR.SH0
    bit      hpd,       // TCR_ELx.HPD{x} / TTBCR2.HPDx or HTCR.HPD
    bit      ee,        // SCTLR_ELx.EE / SCTLR.EE or HSCTLR.EE
    bit      wxn,       // SCTLR_ELx.WXN / SCTLR.WXN or HSCTLR.WXN
    bit      ntlsmd,    // SCTLR_ELx.nTLSMD / SCTLR.nTLSMD or HSCTLR.nTLSMD
    bit      dc,        // HCR_EL2.DC / HCR.DC
    bit      sif,       // SCR_EL3.SIF / SCR.SIF
    MAIRType mair       // MAIR_ELx / MAIR1:MAIR0 or HMAIR1:HMAIR0
)

Library pseudocode for shared/translation/vmsa/S2TTWParams

type S2TTWParams is (
    // A64-VMSA exclusive parameters
    bit     ha,      // VTCR_EL2.HA
    bit     hd,      // VTCR_EL2.HD
    bit     sl2,     // V{S}TCR_EL2.SL2
    bit     ds,      // VTCR_EL2.DS
    bit     sw,      // VSTCR_EL2.SW
    bit     nsw,     // VTCR_EL2.NSW
    bit     sa,      // VSTCR_EL2.SA
    bit     nsa,     // VTCR_EL2.NSA
    bits(3) ps,      // VTCR_EL2.PS
    bits(6) txsz,    // V{S}TCR_EL2.T0SZ
    bit     fwb,     // HCR_EL2.FWB
    bit     cmow,    // HCRX_EL2.CMOW
    // A32-VMSA exclusive parameters
    bit     s,       // VTCR.S
    bits(4) t0sz,    // VTCR.T0SZ
    // Parameters common to both A64-VMSA & A32-VMSA if implemented (A64/A32)
    TGx     tgx,     // V{S}TCR_EL2.TG0 / Always TGx_4KB
    bits(2) sl0,     // V{S}TCR_EL2.SL0 / VTCR.SL0
    bits(2) irgn,    // VTCR_EL2.IRGN0 / VTCR.IRGN0
    bits(2) orgn,    // VTCR_EL2.ORGN0 / VTCR.ORGN0
    bits(2) sh,      // VTCR_EL2.SH0 / VTCR.SH0
    bit     ee,      // SCTLR_EL2.EE / HSCTLR.EE
    bit     ptw,     // HCR_EL2.PTW / HCR.PTW
    bit     vm       // HCR_EL2.VM / HCR.VM
)

Library pseudocode for shared/translation/vmsa/SDFType

enumeration SDFType {
    SDFType_Table,
    SDFType_Invalid,
    SDFType_Supersection,
    SDFType_Section,
    SDFType_LargePage,
    SDFType_SmallPage
};

Library pseudocode for shared/translation/vmsa/SecurityStateForRegime

// SecurityStateForRegime()
// ========================
// Return the Security State of the given translation regime

SecurityState SecurityStateForRegime(Regime regime)
    case regime of
        when Regime_EL3  return SecurityStateAtEL(EL3);
        when Regime_EL30 return SS_Secure;    // A32 EL3 is always Secure
        when Regime_EL2  return SecurityStateAtEL(EL2);
        when Regime_EL20 return SecurityStateAtEL(EL2);
        when Regime_EL10 return SecurityStateAtEL(EL1);

Library pseudocode for shared/translation/vmsa/StageOA

// StageOA()
// =========
// Given the final walk state (a page or block descriptor), map the untranslated
// input address bits to the output address

FullAddress StageOA(bits(64) ia, TGx tgx, TTWState walkstate)
    // Output Address
    FullAddress oa;
    integer csize;

    tsize = TranslationSize(tgx, walkstate.level);
    if walkstate.contiguous == '1' then
        csize = ContiguousSize(tgx, walkstate.level);
    else
        csize = 0;

    ia_msb = tsize + csize;
    oa.paspace = walkstate.baseaddress.paspace;
    oa.address = walkstate.baseaddress.address<51:ia_msb>:ia<ia_msb-1:0>;
    return oa;
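
A non-normative C sketch of the final splice: the output address takes its low ia_msb bits from the input address and the remaining bits from the descriptor's base address. The example values assume ia_msb = 21 (a 2MB block, no contiguity); all names are ours.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    unsigned ia_msb = 21;
    uint64_t base = 0x0000000080200000ull;    /* block base from the walk      */
    uint64_t ia   = 0x0000000000123456ull;    /* untranslated input address    */
    uint64_t mask = ((uint64_t)1 << ia_msb) - 1;
    uint64_t oa   = (base & ~mask) | (ia & mask);
    printf("OA = 0x%016llX\n", (unsigned long long)oa);   /* 0x0000000080323456 */
    return 0;
}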

Library pseudocode for shared/translation/vmsa/TGx

enumeration TGx { TGx_4KB, TGx_16KB, TGx_64KB };

Library pseudocode for shared/translation/vmsa/TGxGranuleBits

// TGxGranuleBits()
// ================
// Retrieve the address size, in bits, of a granule

integer TGxGranuleBits(TGx tgx)
    case tgx of
        when TGx_4KB  return 12;
        when TGx_16KB return 14;
        when TGx_64KB return 16;

Library pseudocode for shared/translation/vmsa/S2TTWParams

type S2TTWParams is (
    // A64-VMSA exclusive parameters
    bit ha,             // VTCR_EL2.HA
    bit hd,             // VTCR_EL2.HD
    bit sl2,            // V{S}TCR_EL2.SL2
    bit ds,             // VTCR_EL2.DS
    bit sw,             // VSTCR_EL2.SW
    bit nsw,            // VTCR_EL2.NSW
    bit sa,             // VSTCR_EL2.SA
    bit nsa,            // VTCR_EL2.NSA
    bits(3) ps,         // VTCR_EL2.PS
    bits(6) txsz,       // V{S}TCR_EL2.T0SZ
    bit fwb,            // HCR_EL2.FWB
    bit cmow,           // HCRX_EL2.CMOW
    // A32-VMSA exclusive parameters
    bit s,              // VTCR.S
    bits(4) t0sz,       // VTCR.T0SZ
    // Parameters common to both A64-VMSA & A32-VMSA if implemented (A64/A32)
    TGx tgx,            // V{S}TCR_EL2.TG0 / Always TGx_4KB
    bits(2) sl0,        // V{S}TCR_EL2.SL0 / VTCR.SL0
    bits(2) irgn,       // VTCR_EL2.IRGN0 / VTCR.IRGN0
    bits(2) orgn,       // VTCR_EL2.ORGN0 / VTCR.ORGN0
    bits(2) sh,         // VTCR_EL2.SH0 / VTCR.SH0
    bit ee,             // SCTLR_EL2.EE / HSCTLR.EE
    bit ptw,            // HCR_EL2.PTW / HCR.PTW
    bit vm              // HCR_EL2.VM / HCR.VM
)

Library pseudocode for shared/translation/vmsa/SDFType

enumeration SDFType {
    SDFType_Table,
    SDFType_Invalid,
    SDFType_Supersection,
    SDFType_Section,
    SDFType_LargePage,
    SDFType_SmallPage
};

Library pseudocode for shared/translation/vmsa/SecurityStateForRegime

// SecurityStateForRegime()
// ========================
// Return the Security State of the given translation regime

SecurityState SecurityStateForRegime(Regime regime)
    case regime of
        when Regime_EL3  return SecurityStateAtEL(EL3);
        when Regime_EL30 return SS_Secure; // A32 EL3 is always Secure
        when Regime_EL2  return SecurityStateAtEL(EL2);
        when Regime_EL20 return SecurityStateAtEL(EL2);
        when Regime_EL10 return SecurityStateAtEL(EL1);

Library pseudocode for shared/translation/vmsa/StageOA

// StageOA()
// =========
// Given the final walk state (a page or block descriptor), map the untranslated
// input address bits to the output address

FullAddress StageOA(bits(64) ia, TGx tgx, TTWState walkstate)
    // Output Address
    FullAddress oa;
    integer csize;

    tsize = TranslationSize(tgx, walkstate.level);
    if walkstate.contiguous == '1' then
        csize = ContiguousSize(tgx, walkstate.level);
    else
        csize = 0;

    ia_msb = tsize + csize;
    oa.paspace = walkstate.baseaddress.paspace;
    oa.address = walkstate.baseaddress.address<51:ia_msb>:ia<ia_msb-1:0>;

    return oa;
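
As a worked illustration of the address splice above: for a 4KB-granule page at level 3 (tsize = 12, csize = 0), the output address keeps the walk's base-address bits down to bit 12 and takes bits [11:0] unchanged from the input address. A minimal C model of that splice, assuming addresses held in uint64_t (a sketch, not part of the specification):

#include <stdint.h>

/* Hypothetical model of the StageOA() bit splice: combine the descriptor's
 * output base address with the untranslated low bits of the input address.
 * ia_msb = tsize + csize, exactly as in the pseudocode above. */
static uint64_t stage_oa(uint64_t baseaddress, uint64_t ia, unsigned ia_msb)
{
    uint64_t low_mask = (UINT64_C(1) << ia_msb) - 1; /* bits [ia_msb-1:0] */
    return (baseaddress & ~low_mask) | (ia & low_mask);
}

/* Example: 4KB granule, level-3 page => ia_msb = 12, so a base address of
 * 0x80000000 and an input address of 0x1234 combine to 0x80000234. */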

Library pseudocode for shared/translation/vmsa/TGx

enumeration TGx {
    TGx_4KB,
    TGx_16KB,
    TGx_64KB
};

Library pseudocode for shared/translation/vmsa/TGxGranuleBits

// TGxGranuleBits()
// ================
// Retrieve the address size, in bits, of a granule

integer TGxGranuleBits(TGx tgx)
    case tgx of
        when TGx_4KB  return 12;
        when TGx_16KB return 14;
        when TGx_64KB return 16;
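
The returned widths are simply log2 of the granule size in bytes: a 4KB granule spans 2^12 bytes, a 16KB granule 2^14, and a 64KB granule 2^16. A compile-time sanity check of that correspondence in C11 (a sketch, not part of the specification):

#include <assert.h>   /* C11: static_assert */

static_assert(1 << 12 == 4  * 1024, "TGx_4KB  granule spans 2^12 bytes");
static_assert(1 << 14 == 16 * 1024, "TGx_16KB granule spans 2^14 bytes");
static_assert(1 << 16 == 64 * 1024, "TGx_64KB granule spans 2^16 bytes");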

Library pseudocode for shared/translation/vmsa/TLBContext

type TLBContext is (
    SecurityState ss,
    Regime regime,
    bits(16) vmid,
    bits(16) asid,
    bit nG,
    PASpace ipaspace,       // Used in stage 2 lookups & invalidations only
    boolean includes_s1,
    boolean includes_s2,
    bits(64) ia,            // Input Address
    TGx tg,
    bit cnp,
    bit xs                  // XS attribute (FEAT_XS)
)

Library pseudocode for shared/translation/vmsa/TLBRecord

type TLBRecord is (
    TLBContext context,
    TTWState walkstate,
    integer blocksize,      // Number of bits directly mapped from IA to OA
    integer contigsize,     // Number of entries log 2 marking a contiguous output range
    bits(64) s1descriptor,  // Stage 1 leaf descriptor in memory (valid if the TLB caches stage 1)
    bits(64) s2descriptor   // Stage 2 leaf descriptor in memory (valid if the TLB caches stage 2)
)

Library pseudocode for shared/translation/vmsa/TTWState

type TTWState is (
    boolean istable,
    integer level,
    FullAddress baseaddress,
    bit contiguous,
    bit nG,
    bit guardedpage,
    SDFType sdftype,        // AArch32 Short-descriptor format walk only
    bits(4) domain,         // AArch32 Short-descriptor format walk only
    MemoryAttributes memattrs,
    Permissions permissions
)

Library pseudocode for shared/translation/vmsa/TranslationRegime

// TranslationRegime()
// ===================
// Select the translation regime given the target EL and PE state

Regime TranslationRegime(bits(2) el_in, AccType acctype)
    bits(2) el;
    case acctype of
        when AccType_NV2REGISTER
            assert EL2Enabled() && el_in == EL1;
            el = EL2;
        otherwise
            el = el_in;
    if el == EL3 then
        return if ELUsingAArch32(EL3) then Regime_EL30 else Regime_EL3;
    elsif el == EL2 then
        return if ELIsInHost(EL2) then Regime_EL20 else Regime_EL2;
    elsif el == EL1 then
        return Regime_EL10;
    elsif el == EL0 then
        if CurrentSecurityState() == SS_Secure && ELUsingAArch32(EL3) then
            return Regime_EL30;
        elsif ELIsInHost(EL0) then
            return Regime_EL20;
        else
            return Regime_EL10;
    else
        Unreachable();

Library pseudocode for shared/translation/vmsa/TranslationSize

// TranslationSize()
// =================
// Compute the number of bits directly mapped from the input address
// to the output address

integer TranslationSize(TGx tgx, integer level)
    granulebits = TGxGranuleBits(tgx);
    blockbits   = (FINAL_LEVEL - level) * (granulebits - 3);
    return granulebits + blockbits;
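
To make the arithmetic concrete: granulebits comes from TGxGranuleBits() above, each deeper level resolves granulebits - 3 further bits (descriptors are 8 bytes, so one granule-sized table holds 2^(granulebits-3) of them), and FINAL_LEVEL is the deepest lookup level, 3 in this pseudocode. A small C model of the formula under those assumptions (a sketch, not part of the specification):

#include <assert.h>

/* Model of TranslationSize(): bits mapped directly from IA to OA.
 * Assumes FINAL_LEVEL == 3, as in the shared pseudocode. */
static int translation_size(int granulebits, int level)
{
    const int final_level = 3;
    int blockbits = (final_level - level) * (granulebits - 3);
    return granulebits + blockbits;
}

int main(void)
{
    /* 4KB granule (granulebits = 12): */
    assert(translation_size(12, 3) == 12); /* 4KB page   */
    assert(translation_size(12, 2) == 21); /* 2MB block  */
    assert(translation_size(12, 1) == 30); /* 1GB block  */
    /* 64KB granule (granulebits = 16): */
    assert(translation_size(16, 3) == 16); /* 64KB page  */
    assert(translation_size(16, 2) == 29); /* 512MB block */
    return 0;
}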

Library pseudocode for shared/translation/vmsa/UseASID

// UseASID()
// =========
// Determine whether the translation context for the access requires ASID or is a global entry

boolean UseASID(TLBContext access)
    return HasUnprivileged(access.regime);

Library pseudocode for shared/translation/vmsa/UseVMID

// UseVMID()
// =========
// Determine whether the translation context for the access requires VMID to match a TLB entry

boolean UseVMID(TLBContext access)
    return access.regime == Regime_EL10 && EL2Enabled();
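
Taken together, UseASID() and UseVMID() say which tags of a TLBContext must be compared on lookup: ASID only for regimes with an EL0 component, VMID only for EL1&0 translations under an enabled EL2. A simplified, self-contained C sketch of that match logic (address, translation stage and the remaining context fields are deliberately omitted; el2_enabled is a hypothetical stub):

#include <stdbool.h>
#include <stdint.h>

typedef enum { REGIME_EL3, REGIME_EL30, REGIME_EL2, REGIME_EL20, REGIME_EL10 } Regime;

/* Simplified view of the TLBContext fields involved in matching (sketch only). */
typedef struct {
    Regime   regime;
    uint16_t vmid;
    uint16_t asid;
    bool     nG;     /* non-global translation: tagged by ASID */
} TlbCtx;

/* Hypothetical stub for the EL2Enabled() query. */
static bool el2_enabled(void) { return true; }

/* Mirrors UseASID(): ASID applies only to regimes serving EL0 (HasUnprivileged). */
static bool use_asid(Regime r)
{
    return r == REGIME_EL20 || r == REGIME_EL30 || r == REGIME_EL10;
}

/* Mirrors UseVMID(): VMID applies only to EL1&0 translations under an enabled EL2. */
static bool use_vmid(Regime r)
{
    return r == REGIME_EL10 && el2_enabled();
}

/* An entry matches when every tag that applies agrees; a global (nG == false)
 * entry matches regardless of ASID. */
static bool context_matches(const TlbCtx *entry, const TlbCtx *access)
{
    if (entry->regime != access->regime) return false;
    if (use_vmid(access->regime) && entry->vmid != access->vmid) return false;
    if (use_asid(access->regime) && entry->nG && entry->asid != access->asid) return false;
    return true;
}

Matching global entries regardless of ASID is what lets privileged mappings survive ASID switches, which is the behaviour the nG bit exists to control.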

Library pseudocode for shared/translation/vmsa/VARange

enumeration VARange {
    VARange_LOWER,
    VARange_UPPER
};


Internal version only: isa v01_27, pseudocode v2022-03_rel ; Build timestamp: 2022-03-29T10:46

Copyright © 2010-2022 Arm Limited or its affiliates. All rights reserved. This document is Non-Confidential.
