
## CMLA (indexed)

Complex integer multiply-add with rotate (indexed).

Multiply the duplicated real components for rotations 0 and 180, or imaginary components for rotations 90 and 270, of the integral numbers in each 128-bit segment of the first source vector by the specified complex number in the corresponding second source vector segment, rotated by 0, 90, 180 or 270 degrees in the direction from the positive real axis towards the positive imaginary axis, when considered in polar representation.

Then add the products to the corresponding components of the complex numbers in the addend vector. Destructively place the results in the corresponding elements of the addend vector. This instruction is unpredicated.

These transformations permit the creation of a variety of multiply-add and multiply-subtract operations on complex numbers by combining two of these instructions with the same vector operands but with rotations that are 90 degrees apart.

Each complex number is represented in a vector register as an even/odd pair of elements with the real part in the even-numbered element and the imaginary part in the odd-numbered element.
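As a concrete illustration of the pairing and chaining described above, here is a hypothetical scalar model in Python (the function and its argument names are illustrative only, not part of the architecture specification) of one CMLA step on a single even/odd pair, showing how two instructions with rotations #0 and #90 accumulate a full complex product:

```python
# Hypothetical scalar model (illustrative, not normative) of one CMLA step
# on a single even/odd (real, imaginary) pair.
def cmla_pair(acc, a, b, rot):
    sel_a = rot & 1                        # rot<0>: 0 selects a's real part, 1 its imaginary part
    sel_b = 1 - sel_a
    sub_r = (rot & 1) != ((rot >> 1) & 1)  # subtract product from real accumulator?
    sub_i = ((rot >> 1) & 1) == 1          # subtract product from imaginary accumulator?
    a_sel = a[sel_a]
    prod_r = a_sel * b[sel_a]
    prod_i = a_sel * b[sel_b]
    r = acc[0] - prod_r if sub_r else acc[0] + prod_r
    i = acc[1] - prod_i if sub_i else acc[1] + prod_i
    return (r, i)

# Chaining rotations #0 and #90 on the same operands accumulates a full
# complex product: with a = 3+4j and b = 1+2j, a*b = -5+10j.
acc = (0, 0)
a, b = (3, 4), (1, 2)
acc = cmla_pair(acc, a, b, 0b00)   # #0:  acc += (a_r*b_r, a_r*b_i)
acc = cmla_pair(acc, a, b, 0b01)   # #90: acc += (-a_i*b_i, a_i*b_r)
# acc == (-5, 10)
```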

It has encodings from 2 classes: 16-bit and 32-bit.

### 16-bit

| 31-21       | 20-19 | 18-16 | 15-12 | 11-10 | 9-5 | 4-0 |
|-------------|-------|-------|-------|-------|-----|-----|
| 01000100101 | i2    | Zm    | 0110  | rot   | Zn  | Zda |

#### 16-bit

CMLA <Zda>.H, <Zn>.H, <Zm>.H[<imm>], <const>

```
if !HaveSVE2() then UNDEFINED;
integer esize = 16;
integer index = UInt(i2);
integer n = UInt(Zn);
integer m = UInt(Zm);
integer da = UInt(Zda);
integer sel_a = UInt(rot<0>);
integer sel_b = UInt(NOT(rot<0>));
boolean sub_r = (rot<0> != rot<1>);
boolean sub_i = (rot<1> == '1');
```

### 32-bit

| 31-21       | 20 | 19-16 | 15-12 | 11-10 | 9-5 | 4-0 |
|-------------|----|-------|-------|-------|-----|-----|
| 01000100111 | i1 | Zm    | 0110  | rot   | Zn  | Zda |

#### 32-bit

CMLA <Zda>.S, <Zn>.S, <Zm>.S[<imm>], <const>

```
if !HaveSVE2() then UNDEFINED;
integer esize = 32;
integer index = UInt(i1);
integer n = UInt(Zn);
integer m = UInt(Zm);
integer da = UInt(Zda);
integer sel_a = UInt(rot<0>);
integer sel_b = UInt(NOT(rot<0>));
boolean sub_r = (rot<0> != rot<1>);
boolean sub_i = (rot<1> == '1');
```

### Assembler Symbols

<Zda> Is the name of the third source and destination scalable vector register, encoded in the "Zda" field.
<Zn> Is the name of the first source scalable vector register, encoded in the "Zn" field.
<Zm> For the 16-bit variant: is the name of the second source scalable vector register Z0-Z7, encoded in the "Zm" field. For the 32-bit variant: is the name of the second source scalable vector register Z0-Z15, encoded in the "Zm" field.
<imm> For the 16-bit variant: is the element index, in the range 0 to 3, encoded in the "i2" field. For the 32-bit variant: is the element index, in the range 0 to 1, encoded in the "i1" field.
<const> Is the const specifier, encoded in the "rot" field:

| rot | <const> |
|-----|---------|
| 00  | #0      |
| 01  | #90     |
| 10  | #180    |
| 11  | #270    |
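The decode fields derived from rot can be tabulated by evaluating the decode expressions for all four encodings; a small Python sketch (illustrative only, mirroring the sel_a/sel_b/sub_r/sub_i assignments in the decode pseudocode above):

```python
# Evaluate the decode expressions for each rot value.
rows = []
for rot in range(4):
    rot0, rot1 = rot & 1, (rot >> 1) & 1
    sel_a = rot0            # 0 = real component of Zn selected, 1 = imaginary
    sub_r = rot0 != rot1    # subtract the product from the real part?
    sub_i = rot1 == 1       # subtract the product from the imaginary part?
    rows.append((rot, sel_a, sub_r, sub_i))

# rot=00 (#0):   multiply real part,      add to both accumulators
# rot=01 (#90):  multiply imaginary part, subtract real / add imaginary
# rot=10 (#180): multiply real part,      subtract both
# rot=11 (#270): multiply imaginary part, add real / subtract imaginary
```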

### Operation

```
CheckSVEEnabled();
integer pairs = VL DIV (2 * esize);
integer pairspersegment = 128 DIV (2 * esize);
bits(VL) operand1 = Z[n];
bits(VL) operand2 = Z[m];
bits(VL) operand3 = Z[da];
bits(VL) result;

for p = 0 to pairs-1
    integer segmentbase = p - p MOD pairspersegment;
    integer s = segmentbase + index;
    integer elt1_a = SInt(Elem[operand1, 2 * p + sel_a, esize]);
    integer elt2_a = SInt(Elem[operand2, 2 * s + sel_a, esize]);
    integer elt2_b = SInt(Elem[operand2, 2 * s + sel_b, esize]);
    bits(esize) elt3_r = Elem[operand3, 2 * p + 0, esize];
    bits(esize) elt3_i = Elem[operand3, 2 * p + 1, esize];
    integer product_r = elt1_a * elt2_a;
    integer product_i = elt1_a * elt2_b;
    if sub_r then
        Elem[result, 2 * p + 0, esize] = elt3_r - product_r;
    else
        Elem[result, 2 * p + 0, esize] = elt3_r + product_r;
    if sub_i then
        Elem[result, 2 * p + 1, esize] = elt3_i - product_i;
    else
        Elem[result, 2 * p + 1, esize] = elt3_i + product_i;

Z[da] = result;
```
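The segmented indexing in the loop above (the same Zm pair is reused within each 128-bit segment) can be modeled in plain Python. This is a sketch, not the normative pseudocode: vectors are lists of signed integers, and the modular truncation of each esize-bit result is omitted for clarity.

```python
# Hypothetical Python model of the CMLA (indexed) operation. Arguments
# mirror the pseudocode: zda is the addend/destination, zn and zm the
# source vectors, index the Zm pair index within each 128-bit segment.
def cmla_indexed(zda, zn, zm, index, rot, esize, vl):
    pairs = vl // (2 * esize)
    pairs_per_segment = 128 // (2 * esize)
    sel_a = rot & 1
    sel_b = 1 - sel_a
    sub_r = (rot & 1) != ((rot >> 1) & 1)
    sub_i = ((rot >> 1) & 1) == 1
    result = list(zda)
    for p in range(pairs):
        segment_base = p - p % pairs_per_segment
        s = segment_base + index           # same Zm pair for the whole segment
        elt1_a = zn[2 * p + sel_a]
        elt2_a = zm[2 * s + sel_a]
        elt2_b = zm[2 * s + sel_b]
        product_r = elt1_a * elt2_a
        product_i = elt1_a * elt2_b
        result[2 * p] = zda[2 * p] + (-product_r if sub_r else product_r)
        result[2 * p + 1] = zda[2 * p + 1] + (-product_i if sub_i else product_i)
    return result

# 32-bit elements, VL = 128: two complex pairs per vector, one segment,
# so both pairs of zn multiply the single indexed pair of zm.
out = cmla_indexed([0, 0, 0, 0], [1, 2, 3, 4], [5, 6, 7, 8],
                   index=0, rot=0b00, esize=32, vl=128)
# out == [5, 6, 15, 18]
```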