Devices such as a DMA controller or a GPU in a compute system see the physical address space, so when they are programmed the PA must be used to specify, for example, the source and destination addresses for a DMA transfer, or the frame buffer location for a GPU. This is normally handled by kernel-level driver code, which calls into the kernel to obtain the VA to PA mappings.
When second stage translation is added, the kernel no longer sees ‘real’ physical addresses; it sees IPAs instead. This means that if the kernel passes an address to the GPU or DMA, the device will treat that IPA as a PA and could access the wrong memory.
One solution to this is for the Hypervisor to intercept all communication between the OS and the device, translating the passed addresses from IPA to PA. This approach is potentially expensive, because it means taking an exception (to enter the Hypervisor) on every write to one of the memory-mapped registers of the device.
The alternative approach is to make the device see the same IPA space as the kernel, which is where the System MMU (SMMU) comes in.
The SMMU is effectively an external copy of the MMU inside the processor. It can be placed in your system between a device (like a DMA or GPU) and the interconnect. Any transaction passing through the SMMU can then be translated, meaning that the DMA or GPU sees a translated address space.
The SMMU architecture uses the same translation table formats as ARMv7-A and ARMv8-A. So an SMMU is typically pointed at the same set of tables in memory as the processor is using. This means that the DMA or GPU can have the same view of memory as the guest OS, removing the need for the costly trapping by the Hypervisor in software. The SMMU architecture allows for designs that do stage 1 translation (VA to IPA), stage 2 (IPA to PA) or both (VA to IPA to PA). Not all implementations support all these options.
Performing only second stage translation in the SMMU gives the DMA or GPU the same view of memory as the guest OS, so that the same driver code can be used in a virtualized or non-virtualized system.