Gather load first-faulting unsigned halfwords to vector (vector index).
Gather load with first-faulting behavior of unsigned halfwords to active elements of a vector register from memory addresses generated by a 64-bit scalar base plus vector index. The index values are optionally first sign or zero-extended from 32 to 64 bits and then optionally multiplied by 2. Inactive elements will not read Device memory or signal faults, and are set to zero in the destination vector.
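By way of illustration only, the following C sketch models the addressing and extension behaviour for the 32-bit element, scaled-offset form: each active element loads a halfword from the base plus the extended index multiplied by 2 and zero-extends it, while inactive elements are zeroed and perform no access. First-faulting behaviour is deferred to the Operation section; the function name and the flat view of memory are assumptions of the sketch, not part of the architecture.

#include <stdint.h>
#include <stdbool.h>
#include <string.h>

/* Non-architectural sketch: gather of unsigned halfwords into 32-bit elements,
 * scaled-offset form.  Each active element loads 16 bits from
 * base + extend(index) * 2 and zero-extends them; inactive elements are zeroed
 * and perform no memory access.  First-faulting behaviour is omitted here. */
static void gather_uh_sketch(uint32_t *zt, const uint32_t *zm, const bool *pg,
                             int elements, uint64_t base, bool sign_extend_index)
{
    for (int e = 0; e < elements; e++) {
        if (!pg[e]) {                     /* inactive: zeroed, no memory access */
            zt[e] = 0;
            continue;
        }
        int64_t off = sign_extend_index ? (int64_t)(int32_t)zm[e]  /* SXTW */
                                        : (int64_t)zm[e];          /* UXTW */
        uint64_t addr = base + ((uint64_t)off << 1);               /* scale by 2 */
        uint16_t data;
        memcpy(&data, (const void *)(uintptr_t)addr, sizeof data); /* 16-bit load */
        zt[e] = data;                     /* zero-extend to the element size */
    }
}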
It has encodings from 6 classes: 32-bit scaled offset, 32-bit unpacked scaled offset, 32-bit unpacked unscaled offset, 32-bit unscaled offset, 64-bit scaled offset and 64-bit unscaled offset.
32-bit scaled offset

31-25 | 24 | 23 | 22 | 21 | 20-16 | 15-13 | 12-10 | 9-5 | 4-0 |
1000010 | 0 | 1 | xs | 1 | Zm | 011 | Pg | Rn | Zt |
if !HaveSVE() then UNDEFINED; integer t = UInt(Zt); integer n = UInt(Rn); integer m = UInt(Zm); integer g = UInt(Pg); integer esize = 32; integer msize = 16; integer offs_size = 32; boolean unsigned = TRUE; boolean offs_unsigned = xs == '0'; integer scale = 1;
32-bit unpacked scaled offset

31-25 | 24 | 23 | 22 | 21 | 20-16 | 15-13 | 12-10 | 9-5 | 4-0 |
1100010 | 0 | 1 | xs | 1 | Zm | 011 | Pg | Rn | Zt |
if !HaveSVE() then UNDEFINED; integer t = UInt(Zt); integer n = UInt(Rn); integer m = UInt(Zm); integer g = UInt(Pg); integer esize = 64; integer msize = 16; integer offs_size = 32; boolean unsigned = TRUE; boolean offs_unsigned = xs == '0'; integer scale = 1;
32-bit unpacked unscaled offset

31-25 | 24 | 23 | 22 | 21 | 20-16 | 15-13 | 12-10 | 9-5 | 4-0 |
1100010 | 0 | 1 | xs | 0 | Zm | 011 | Pg | Rn | Zt |
if !HaveSVE() then UNDEFINED; integer t = UInt(Zt); integer n = UInt(Rn); integer m = UInt(Zm); integer g = UInt(Pg); integer esize = 64; integer msize = 16; integer offs_size = 32; boolean unsigned = TRUE; boolean offs_unsigned = xs == '0'; integer scale = 0;
32-bit unscaled offset

31-25 | 24 | 23 | 22 | 21 | 20-16 | 15-13 | 12-10 | 9-5 | 4-0 |
1000010 | 0 | 1 | xs | 0 | Zm | 011 | Pg | Rn | Zt |
if !HaveSVE() then UNDEFINED; integer t = UInt(Zt); integer n = UInt(Rn); integer m = UInt(Zm); integer g = UInt(Pg); integer esize = 32; integer msize = 16; integer offs_size = 32; boolean unsigned = TRUE; boolean offs_unsigned = xs == '0'; integer scale = 0;
64-bit scaled offset

31-25 | 24 | 23 | 22 | 21 | 20-16 | 15-13 | 12-10 | 9-5 | 4-0 |
1100010 | 0 | 1 | 1 | 1 | Zm | 111 | Pg | Rn | Zt |
if !HaveSVE() then UNDEFINED; integer t = UInt(Zt); integer n = UInt(Rn); integer m = UInt(Zm); integer g = UInt(Pg); integer esize = 64; integer msize = 16; integer offs_size = 64; boolean unsigned = TRUE; boolean offs_unsigned = TRUE; integer scale = 1;
64-bit unscaled offset

31-25 | 24 | 23 | 22 | 21 | 20-16 | 15-13 | 12-10 | 9-5 | 4-0 |
1100010 | 0 | 1 | 1 | 0 | Zm | 111 | Pg | Rn | Zt |
if !HaveSVE() then UNDEFINED; integer t = UInt(Zt); integer n = UInt(Rn); integer m = UInt(Zm); integer g = UInt(Pg); integer esize = 64; integer msize = 16; integer offs_size = 64; boolean unsigned = TRUE; boolean offs_unsigned = TRUE; integer scale = 0;
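Across all of the encoding classes above, the Zt, Rn, Pg and Zm fields occupy the same bit positions, with xs at bit 22 where it is present. The following C sketch shows one way to extract those fields from a 32-bit instruction word; the struct and function names are illustrative and this is not a full decoder.

#include <stdint.h>

/* Illustrative field extraction for the encodings above (not a full decoder):
 * Zt = bits [4:0], Rn = [9:5], Pg = [12:10], Zm = [20:16]; xs, where present,
 * is bit [22]. */
struct gather_fields { unsigned zt, rn, pg, zm, xs; };

static struct gather_fields extract_fields(uint32_t insn)
{
    struct gather_fields f;
    f.zt = insn & 0x1f;          /* bits [4:0]   */
    f.rn = (insn >> 5) & 0x1f;   /* bits [9:5]   */
    f.pg = (insn >> 10) & 0x7;   /* bits [12:10] */
    f.zm = (insn >> 16) & 0x1f;  /* bits [20:16] */
    f.xs = (insn >> 22) & 0x1;   /* bit [22]     */
    return f;
}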
<Zt>
Is the name of the scalable vector register to be transferred, encoded in the "Zt" field.

<Pg>
Is the name of the governing scalable predicate register P0-P7, encoded in the "Pg" field.

<Xn|SP>
Is the 64-bit name of the general-purpose base register or stack pointer, encoded in the "Rn" field.

<Zm>
Is the name of the offset scalable vector register, encoded in the "Zm" field.
<mod>
Is the index extend and shift specifier, encoded in the "xs" field. It can have the following values:
UXTW when xs = 0
SXTW when xs = 1
CheckSVEEnabled();
integer elements = VL DIV esize;
bits(64) base;
bits(64) addr;
bits(VL) offset;
bits(PL) mask = P[g];
bits(VL) result;
bits(VL) orig = Z[t];
bits(msize) data;
constant integer mbytes = msize DIV 8;
boolean first = TRUE;
boolean fault = FALSE;
boolean faulted = FALSE;
boolean unknown = FALSE;

if n == 31 then
    CheckSPAlignment();
    base = SP[];
else
    base = X[n];

offset = Z[m];
for e = 0 to elements-1
    if ElemP[mask, e, esize] == '1' then
        integer off = Int(Elem[offset, e, esize]<offs_size-1:0>, offs_unsigned);
        addr = base + (off << scale);
        if first then
            // Mem[] will not return if a fault is detected for the first active element
            data = Mem[addr, mbytes, AccType_NORMAL];
            first = FALSE;
        else
            // MemNF[] will return fault=TRUE if access is not performed for any reason
            (data, fault) = MemNF[addr, mbytes, AccType_NONFAULT];
    else
        (data, fault) = (Zeros(msize), FALSE);

    // FFR elements set to FALSE following a suppressed access/fault
    faulted = faulted || fault;
    if faulted then
        ElemFFR[e, esize] = '0';

    // Value becomes CONSTRAINED UNPREDICTABLE after an FFR element is FALSE
    unknown = unknown || ElemFFR[e, esize] == '0';
    if unknown then
        if !fault && ConstrainUnpredictableBool(Unpredictable_SVELDNFDATA) then
            Elem[result, e, esize] = Extend(data, esize, unsigned);
        elsif ConstrainUnpredictableBool(Unpredictable_SVELDNFZERO) then
            Elem[result, e, esize] = Zeros();
        else  // merge
            Elem[result, e, esize] = Elem[orig, e, esize];
    else
        Elem[result, e, esize] = Extend(data, esize, unsigned);

Z[t] = result;
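The following C sketch is a simplified, non-architectural model of the first-fault handling above, for the 32-bit element, scaled-offset form. The helper try_load16() is a hypothetical stand-in for MemNF[] and here always succeeds; the CONSTRAINED UNPREDICTABLE merge/zero choices for elements after a cleared FFR bit are collapsed to zeroing.

#include <stdint.h>
#include <stdbool.h>
#include <string.h>

/* Hypothetical stand-in for MemNF[]: probes 16 bits at addr without faulting.
 * A real model would consult translation and permissions; this stub simply
 * performs the load and reports success. */
static bool try_load16(uint64_t addr, uint16_t *data)
{
    memcpy(data, (const void *)(uintptr_t)addr, sizeof *data);
    return true;
}

/* Simplified model of the first-fault behaviour: the first active element is
 * loaded normally; later active elements use the non-faulting probe.  From the
 * first failed probe onward the FFR bits are cleared and the corresponding
 * destination elements are zeroed (the architectural result there is
 * CONSTRAINED UNPREDICTABLE). */
static void ldff1h_sketch(uint32_t *zt, const uint32_t *zm, const bool *pg,
                          bool *ffr, int elements, uint64_t base, bool sxtw)
{
    bool first = true, faulted = false;
    for (int e = 0; e < elements; e++) {
        uint16_t data = 0;
        if (pg[e] && !faulted) {
            int64_t off = sxtw ? (int64_t)(int32_t)zm[e] : (int64_t)zm[e];
            uint64_t addr = base + ((uint64_t)off << 1);
            if (first) {
                /* first active element: a fault here would be taken normally */
                memcpy(&data, (const void *)(uintptr_t)addr, sizeof data);
                first = false;
            } else if (!try_load16(addr, &data)) {
                faulted = true;           /* access suppressed: no data */
                data = 0;
            }
        }
        if (faulted)
            ffr[e] = false;               /* clear FFR from the failure onward */
        zt[e] = data;                     /* zero-extended halfword, or zero */
    }
}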