The Cortex-A53 processor uses the MOESI protocol to maintain data coherency between multiple cores.
MOESI describes the state that a shareable line in an L1 Data cache can be in:
- Modified/UniqueDirty (UD). The line is in only this cache and is dirty.
- Owned/SharedDirty (SD). The line is possibly in more than one cache and is dirty.
- Exclusive/UniqueClean (UC). The line is in only this cache and is clean.
- Shared/SharedClean (SC). The line is possibly in more than one cache and is clean.
- Invalid/Invalid (I). The line is not in this cache.
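The coherency rules implied by these five states can be sketched as a next-state function. The event names, the `shared` hint, and the transition set below are a simplified illustration of a generic MOESI protocol, not a model of the Cortex-A53's actual coherency logic:

```c
#include <assert.h>

/* Simplified MOESI states, using the abbreviations from the list above. */
typedef enum { UD, SD, UC, SC, I } moesi_t;

typedef enum {
    LOCAL_READ,   /* this core loads from the line                  */
    LOCAL_WRITE,  /* this core stores to the line                   */
    SNOOP_READ,   /* another core reads the line from this cache    */
    SNOOP_WRITE   /* another core gains write ownership of the line */
} event_t;

/* Hypothetical next-state function; 'shared' says whether another
 * cache already holds the line when a local read miss is filled.  */
static moesi_t next_state(moesi_t s, event_t e, int shared)
{
    switch (e) {
    case LOCAL_READ:
        return (s == I) ? (shared ? SC : UC) : s;
    case LOCAL_WRITE:
        return UD;                 /* writing makes the line unique and dirty */
    case SNOOP_READ:
        if (s == UD) return SD;    /* dirty data now shared: Owned            */
        if (s == UC) return SC;    /* clean data now shared                   */
        return s;
    case SNOOP_WRITE:
        return I;                  /* another writer invalidates our copy     */
    }
    return s;
}
```

For example, a line held UniqueClean becomes SharedClean when another core snoop-reads it, and any local store moves the line to UniqueDirty after the other copies are invalidated.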
The DCU stores the MOESI state of the cache line in the tag and dirty RAMs.
The names UniqueDirty, SharedDirty, UniqueClean, SharedClean, Invalid are equivalent to the AMBA names for the cache states.
Data coherency is enabled only when the CPUECTLR.SMPEN bit is set. You must set the SMPEN bit before enabling the data cache. If you do not, then the cache is not coherent with other cores and data corruption could occur.
- Read allocate mode
The L1 Data cache supports only a Write-Back policy. It normally allocates a cache line on either a read miss or a write miss, although you can alter this by changing the inner cache allocation hints in the page tables.
However, there are some situations where allocating on writes is not wanted, such as executing the C standard library memset() function to clear a large block of memory to a known value. Writing large blocks of data like this can pollute the cache with unnecessary data. It can also waste power and performance if a linefill must be performed only to discard the linefill data, because the entire line is subsequently written by the memset.
To prevent this, the BIU includes logic to detect when a full cache line has been written by the processor before the linefill has completed. If this situation is detected on a threshold number of consecutive linefills, it switches into read allocate mode.
When in read allocate mode, loads behave as normal and can still cause linefills, and writes still look up in the cache but, if they miss, they write out to L2 rather than starting a linefill.
More than the specified number of linefills might be observed on the ACE or CHI master interface before the BIU detects that the threshold number of full cache lines has been written and switches to read allocate mode.
The BIU continues in read allocate mode until it detects either a cacheable write burst to L2 that is not a full cache line, or a load to the same line that is currently being written to L2.
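The detection heuristic described above amounts to a small state machine: count consecutive full-cache-line write bursts, enter read allocate mode at a threshold, and leave it on a partial-line write or a load to a line being streamed out. A host-runnable sketch, where the threshold value and the structure are illustrative rather than the BIU's actual logic:

```c
#include <stdbool.h>

#define CACHE_LINE_BYTES 64   /* Cortex-A53 cache line size */

/* Hypothetical model of the BIU's read-allocate detection. */
typedef struct {
    int  threshold;        /* consecutive full-line writes needed    */
    int  full_line_count;  /* current run of full-line write bursts  */
    bool read_alloc_mode;  /* when true, write misses do not allocate */
} biu_model_t;

/* Observe one cacheable write burst of 'bytes' going out to L2. */
static void observe_write(biu_model_t *b, int bytes)
{
    if (bytes == CACHE_LINE_BYTES) {
        if (!b->read_alloc_mode && ++b->full_line_count >= b->threshold)
            b->read_alloc_mode = true;   /* stop allocating on write miss */
    } else {
        /* A cacheable write that is not a full line ends the mode. */
        b->full_line_count = 0;
        b->read_alloc_mode = false;
    }
}

/* A load to the same line currently being written to L2 also exits. */
static void observe_load_to_streaming_line(biu_model_t *b)
{
    b->full_line_count = 0;
    b->read_alloc_mode = false;
}
```

With a threshold of three, the model enters read allocate mode after the third consecutive 64-byte write burst and drops out as soon as a partial-line write or a conflicting load is seen.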
A secondary read allocate mode applies when the L2 cache is integrated. After a threshold number of consecutive cache-line-sized writes to L2 are detected, L2 read allocate mode is entered.
When in L2 read allocate mode, loads behave as normal and can still cause linefills, and writes still look up in the cache but, if they miss, they write out to L3 rather than starting a linefill. L2 read allocate mode continues until there is a cacheable write burst that is not a full cache line, or a load to the same line that is currently being written to L3.
In AArch64, CPUACTLR_EL1.L1RADIS configures the L1 read allocate mode threshold, and CPUACTLR_EL1.RADIS configures the L2 read allocate mode threshold. See CPU Auxiliary Control Register, EL1.
In AArch32, CPUACTLR.L1RADIS configures the L1 read allocate mode threshold, and CPUACTLR.RADIS configures the L2 read allocate mode threshold. See CPU Auxiliary Control Register.
- Data cache invalidate on reset
The Armv8-A architecture does not support an operation to invalidate the entire data cache. If this function is required in software, it must be constructed by iterating over the cache geometry and executing a series of individual invalidate by set/way instructions.
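Iterating by set/way requires building the operand for the DC ISW/CISW instructions: the Armv8-A encoding places (cache level − 1) in bits [3:1], the set index above the line-offset bits, and the way index in the top bits of the word. A host-runnable sketch of constructing that operand from an assumed geometry (in real code the line size, set count, and way count at each level are read from CCSIDR_EL1, with the levels enumerated from CLIDR_EL1):

```c
#include <stdint.h>

/* Smallest r such that (1 << r) >= x. */
static unsigned log2_ceil(unsigned x)
{
    unsigned r = 0;
    while ((1u << r) < x)
        r++;
    return r;
}

/* Build a DC ISW/CISW operand using the Armv8-A set/way encoding:
 * level-minus-1 in bits [3:1], set above the line-offset bits,
 * way left-justified in the top bits.                            */
static uint64_t sw_operand(unsigned level_minus_1, unsigned way,
                           unsigned set, unsigned line_bytes,
                           unsigned num_ways)
{
    unsigned L = log2_ceil(line_bytes);  /* log2 of the line length  */
    unsigned A = log2_ceil(num_ways);    /* bits needed for the ways */
    return ((uint64_t)way << (32 - A)) |
           ((uint64_t)set << L) |
           ((uint64_t)level_minus_1 << 1);
}
```

Privileged software would loop over every level, set, and way, issuing `DC ISW` (or `DC CISW`) with each operand in a general-purpose register, with the appropriate barriers around the loop.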
The Cortex-A53 processor automatically invalidates caches on reset unless suppressed with the DBGL1RSTDISABLE or L2RSTDISABLE pins. It is therefore not necessary for software to invalidate the caches on startup.