Using the SoC-400 to create a debug subsystem

The SoC-400 differs from other pieces of IP shown in the example SoC because it is not a single piece of IP. The SoC-400 instead consists of several components. You can use these components to build an infrastructure to debug, monitor, and optimize an SoC design. These components include buses, control and access components, and trace links. Most of these components are configurable.

System designers use SoC-400 components to create a subsystem that is designed to meet the requirements of a specific SoC. The complexity of the subsystem depends on the SoC that it is part of. In this SoC, the subsystem collects trace output from the four cores of both Cortex-A53 clusters.

Note: In this guide, the terms SoC and SoC-400 refer to different things. SoC refers to the example dual Cortex-A53 System on Chip, which is the subject of this guide. The SoC-400 is a piece of Arm IP that contains multiple components. The example SoC in this guide contains an SoC-400 subsystem, which is shown as a single entity in the System diagram.

The following diagram shows an SoC-400 subsystem that is suitable for our example SoC:

Now let’s explore the components in the SoC-400 subsystem that we have described, including their configuration options and interfaces:

Serial Wire JTAG Debug Port

A Serial Wire JTAG Debug Port (SWJ-DP) allows you to connect either a Serial Wire Debug (SWD) or JTAG probe to the SoC-400 subsystem, and therefore to the SoC itself. This component is accessible from outside the SoC. In other words, the SWJ-DP is the main entry point into the SoC for debugging.

In addition to the external connection, the SWJ-DP contains a single DAPBUS master port interface. This interface connects to the Debug Access Port Bus interconnect (DAPBUSIC) in an SoC-400 subsystem.
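
To make the role of the SWJ-DP concrete, the following sketch shows how an external debugger might drive it using the Arm Debug Interface (ADIv5) register model: power up the debug domains, select an Access Port behind the DAPBUSIC, and read that AP's identification register. The probe transfer functions are hypothetical placeholders for a real probe driver, and the register layout assumed here is the standard ADIv5 one.

```c
#include <stdint.h>

/* Hypothetical probe driver hooks: each performs one SWD/JTAG transfer
 * through the SWJ-DP. Their implementation depends on the debug probe. */
extern void     dp_write(uint8_t reg, uint32_t value);  /* DP register write */
extern uint32_t dp_read(uint8_t reg);                   /* DP register read  */
extern uint32_t ap_read(uint8_t reg);                   /* AP register read  */

/* Assumed ADIv5 DP register addresses. */
#define DP_CTRL_STAT  0x4u
#define DP_SELECT     0x8u
#define DP_RDBUFF     0xCu

/* Read the IDR (offset 0xFC) of the AP at index 'apsel' behind the DAPBUSIC. */
uint32_t read_ap_idr(uint8_t apsel)
{
    /* Power up the debug and system domains before any AP access. */
    dp_write(DP_CTRL_STAT, (1u << 30) | (1u << 28));   /* CSYSPWRUPREQ | CDBGPWRUPREQ */

    /* SELECT: APSEL in [31:24], APBANKSEL in [7:4]. IDR lives in bank 0xF. */
    dp_write(DP_SELECT, ((uint32_t)apsel << 24) | (0xFu << 4));

    (void)ap_read(0xC);            /* issue the read of bank 0xF, offset 0xC (IDR) */
    return dp_read(DP_RDBUFF);     /* AP reads are posted; collect the result      */
}
```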

Debug Access Port Bus Interconnect

A Debug Access Port Bus Interconnect (DAPBUSIC) connects the SWJ-DP to multiple Access Ports (APs). This provides a route from outside the SoC to the components within it.

The DAPBUSIC has the following interfaces:

Interface  Description
DAPBUS slave Each DAPBUSIC has one slave DAPBUS interface. The SWJ-DP connects to this interface.
DAPBUS master Each DAPBUSIC has up to 32 master DAPBUS interfaces. These interfaces connect to individual APs that provide AHB, APB, AXI, and JTAG interfaces.

APB Access Port

The APB Access Port provides a single APB master interface, and allows conversion from a DAPBUS interface to an APB interface.

Many SoC-400 components have APB programming interfaces. The APB Access Port and the APBIC allow you to control SoC-400 components from outside the SoC-400 subsystem.
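
As a rough illustration, the APB-AP behaves like a standard MEM-AP: the debugger writes the address of a component register on the debug APB into the Transfer Address Register (TAR) and then moves data through the Data Read/Write register (DRW). The ap_write and ap_read helpers below are hypothetical, and the register offsets are the ADIv5 MEM-AP ones.

```c
#include <stdint.h>

/* Hypothetical helpers that access a register of the currently selected AP
 * (selection through the DP SELECT register is not shown here). */
extern void     ap_write(uint8_t reg, uint32_t value);
extern uint32_t ap_read(uint8_t reg);

/* Assumed ADIv5 MEM-AP register offsets. */
#define MEM_AP_CSW  0x00u   /* Control/Status Word */
#define MEM_AP_TAR  0x04u   /* Transfer Address    */
#define MEM_AP_DRW  0x0Cu   /* Data Read/Write     */

/* Write one 32-bit word to a CoreSight component register that sits
 * behind the APB-AP, for example a funnel or TPIU control register. */
void apb_ap_write32(uint32_t debug_apb_addr, uint32_t value)
{
    ap_write(MEM_AP_CSW, 0x2u);            /* 32-bit transfers, no auto-increment */
    ap_write(MEM_AP_TAR, debug_apb_addr);  /* address on the debug APB            */
    ap_write(MEM_AP_DRW, value);           /* this access issues the write        */
}
```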

AXI Access Port

The AXI Access Port provides a single AXI master interface and converts from a DAPBUS interface to an AXI interface. The AXI-AP is typically used to provide an access path to the SoC memory, and you can see this connection in the System diagram. If the CPUs hang, the AXI-AP provides an alternative path to the system memory.
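
Because the AXI-AP follows the same MEM-AP register model, reading system memory through it looks almost identical to the APB-AP sketch above; only the addresses now refer to the SoC memory map rather than the debug APB. This is a hedged sketch using the same hypothetical helpers.

```c
#include <stdint.h>

/* Same hypothetical AP access helpers as before; the AXI-AP uses the
 * same MEM-AP register model, but its TAR addresses the AXI (system) space. */
extern void     ap_write(uint8_t reg, uint32_t value);
extern uint32_t ap_read(uint8_t reg);

#define MEM_AP_CSW  0x00u
#define MEM_AP_TAR  0x04u
#define MEM_AP_DRW  0x0Cu

/* Read one 32-bit word of system memory through the AXI-AP, for example
 * to inspect a data structure after the CPUs have hung. */
uint32_t axi_ap_read32(uint32_t system_addr)
{
    ap_write(MEM_AP_CSW, 0x2u);        /* 32-bit transfers              */
    ap_write(MEM_AP_TAR, system_addr); /* address in the SoC memory map */
    return ap_read(MEM_AP_DRW);        /* completes the AXI read        */
}
```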

APB Interconnect

The APB Interconnect (APBIC) allows multiple masters to connect to the APB interfaces on components within the SoC-400 subsystem. It also accepts connections from CoreSight components elsewhere in the SoC.

The APBIC has the following interfaces:

Interface Description
APB slave The APBIC can have from one to four slave APB interfaces. Only one of these interfaces can connect to an APB-AP component. Other slave interfaces might have connections from IP that is outside the SoC-400 subsystem but part of the SoC.
For example, in the previous diagram, the Cortex-A53 clusters can control the SoC-400 components through the connection that the NIC-400 has with the APBIC. The APBIC also connects as a master to the APB slave interfaces on the Cortex-A53 clusters. This design facilitates communication between the SoC-400 subsystem and the rest of the SoC (a self-hosted access sketch follows this table).
APB master The APBIC can have from 1 to 64 master APB interfaces. These interfaces connect to and control other SoC-400 components within the subsystem. They can also connect to IP that is external to the subsystem.
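
Through this path, software running on the Cortex-A53 clusters sees the SoC-400 components as ordinary memory-mapped peripherals. One detail such self-hosted code always deals with is the CoreSight lock: a component must be unlocked through its Lock Access Register before its control registers can be changed. A minimal sketch follows; the component base address is a made-up value for illustration, not part of the example SoC's real memory map.

```c
#include <stdint.h>

/* Hypothetical address at which one SoC-400 component's APB programming
 * interface appears to the Cortex-A53 clusters (via the NIC-400 and APBIC). */
#define COMPONENT_BASE  0x20040000u

#define CS_LAR          0xFB0u        /* CoreSight Lock Access Register */
#define CS_UNLOCK_KEY   0xC5ACCE55u   /* architected unlock key         */

/* Unlock the component so that self-hosted software can reprogram it. */
void coresight_unlock(void)
{
    *(volatile uint32_t *)(uintptr_t)(COMPONENT_BASE + CS_LAR) = CS_UNLOCK_KEY;
}
```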

Cross Trigger Matrix

The Cross Trigger Matrix (CTM) combines the trigger requests that are generated by CTIs and broadcasts them to all CTIs as channel triggers. The two Cortex-A53 clusters in our example each have an internal Cross Trigger Interconnect (CTI) and CTM. Trigger requests that arise inside a Cortex-A53 cluster are forwarded to the CTM of the SoC-400 subsystem.

Cross Trigger Interconnect

The Cross Trigger Interconnect (CTI) provides an interface that enables events to be broadcast across the system.
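
As a rough sketch of what CTI programming involves, the following routine maps a trigger input onto a CTM channel and drives a trigger output from the same channel, so that an event in one part of the SoC can, for example, stop trace capture elsewhere. The register offsets follow the standard CoreSight CTI programmers' model, but the base address and the trigger numbering are hypothetical and depend on how the SoC integrates the CTI.

```c
#include <stdint.h>

#define CTI_BASE      0x20020000u   /* hypothetical base address of this CTI   */
#define CS_LAR        0xFB0u        /* CoreSight Lock Access Register          */
#define CS_UNLOCK_KEY 0xC5ACCE55u

#define CTICONTROL    0x000u               /* bit 0 enables the CTI            */
#define CTIINEN(n)    (0x020u + 4u * (n))  /* trigger input n -> channel mask  */
#define CTIOUTEN(n)   (0x0A0u + 4u * (n))  /* channel mask -> trigger output n */
#define CTIGATE       0x140u               /* channels allowed to cross the CTM */

static inline void cti_write(uint32_t off, uint32_t val)
{
    *(volatile uint32_t *)(uintptr_t)(CTI_BASE + off) = val;
}

/* Route trigger input 'in' onto CTM channel 'ch', and drive trigger
 * output 'out' from the same channel. Trigger numbering is SoC-specific. */
void cti_route(unsigned in, unsigned out, unsigned ch)
{
    cti_write(CS_LAR, CS_UNLOCK_KEY);    /* unlock the programming interface */
    cti_write(CTIINEN(in),   1u << ch);  /* input event raises channel 'ch'  */
    cti_write(CTIOUTEN(out), 1u << ch);  /* channel 'ch' fires this output   */
    cti_write(CTIGATE, 1u << ch);        /* let channel 'ch' pass to the CTM */
    cti_write(CTICONTROL, 1u);           /* globally enable the CTI          */
}
```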

Trace funnel

A trace funnel can combine up to eight trace sources into one. To combine more than eight trace sources, chain several trace funnels together. This strategy effectively increases the number of trace inputs that are available.

Note: To avoid combinatorial timing loops, a register slice (forward, reverse, or full) must be instantiated between cascaded funnels.

In this system, the trace output from all cores of both Cortex-A53 clusters is combined into one trace funnel.
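
The funnel's slave ports are enabled through its APB programming interface, which is described in the table that follows. A hedged sketch is shown below, assuming the standard CoreSight funnel control register layout, a hypothetical base address, and that the cluster trace sources arrive on slave ports 0 to 3.

```c
#include <stdint.h>

#define FUNNEL_BASE    0x20040000u   /* hypothetical address via the APBIC */
#define CS_LAR         0xFB0u
#define CS_UNLOCK_KEY  0xC5ACCE55u

/* Funnel control register: bits [7:0] enable slave ports, [11:8] hold time. */
#define FUNNEL_CTRL    0x000u

static inline void funnel_write(uint32_t off, uint32_t val)
{
    *(volatile uint32_t *)(uintptr_t)(FUNNEL_BASE + off) = val;
}

/* Enable the funnel slave ports that the two Cortex-A53 clusters drive.
 * Which ATB source maps to which port is fixed by the SoC integration;
 * ports 0-3 are an assumption for this example. */
void funnel_enable_cluster_trace(void)
{
    funnel_write(CS_LAR, CS_UNLOCK_KEY);       /* unlock the component   */
    funnel_write(FUNNEL_CTRL, (0x3u << 8)      /* minimum hold time      */
                            | 0x0Fu);          /* enable slave ports 0-3 */
}
```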

A trace funnel has the following interfaces:

Interface Description
ATB slave You can configure the funnel to have two to eight slave ATB interfaces that receive trace data.
ATB master A master ATB interface that outputs the combined trace data.
APB slave programming You can configure the funnel to have an APB programming interface. If you do, an APB slave interface is added. You can use this interface to enable and disable slave ATB interfaces at runtime.

Trace replicator

The trace replicator enables an incoming trace stream to be passed to two trace sinks. It has a slave ATB interface that receives the trace stream and two master ATB interfaces that each output a copy of that stream.

A trace replicator has the following interfaces:

Interface  Description 
ATB slave  An ATB slave interface receives the combined trace output. 
ATB master   Two ATB master interfaces each output a copy of the trace stream. 
APB slave programming The trace replicator can be configured to have an APB programming interface. If you do, an APB slave interface is added. You can use this interface to do the following (a register-level sketch follows this list):

  • Filter the trace passed to each master ATB interface according to the trace ID.
  • Filter out high-bandwidth trace from trace sinks that only support a lower bandwidth, such as a TPIU. This filtering allows a high-bandwidth trace sink, for example an ETB, to be enabled at the same time as a lower-bandwidth trace sink. The filtering prevents the lower-bandwidth sink from slowing down the output of both trace sinks; without it, all trace sinks slow down to match the slowest one.
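
The sketch below shows what that ID-based filtering might look like at the register level. It assumes the standard CoreSight replicator ID filter registers, a hypothetical base address, and that master port 0 feeds the ETB while master port 1 feeds the TPIU.

```c
#include <stdint.h>

#define REPLICATOR_BASE  0x20050000u   /* hypothetical address via the APBIC */
#define CS_LAR           0xFB0u
#define CS_UNLOCK_KEY    0xC5ACCE55u
#define IDFILTER0        0x000u        /* ID filter for master ATB port 0    */
#define IDFILTER1        0x004u        /* ID filter for master ATB port 1    */

static inline void rep_write(uint32_t off, uint32_t val)
{
    *(volatile uint32_t *)(uintptr_t)(REPLICATOR_BASE + off) = val;
}

/* Assume master port 0 feeds the ETB and master port 1 feeds the TPIU.
 * Each bit in an IDFILTER register discards a block of 16 trace IDs on
 * that port, so setting bits on port 1 keeps high-bandwidth sources off
 * the slower TPIU while the ETB still captures everything. */
void replicator_filter_tpiu(uint8_t id_blocks_to_drop)
{
    rep_write(CS_LAR, CS_UNLOCK_KEY);
    rep_write(IDFILTER0, 0x00u);               /* port 0 (ETB): pass all IDs     */
    rep_write(IDFILTER1, id_blocks_to_drop);   /* port 1 (TPIU): drop ID groups  */
}
```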

Embedded Trace Buffer

The Embedded Trace Buffer (ETB) provides on-chip storage of trace data in RAM. When designing the SoC-400 subsystem, you can configure the size of the RAM for ETB and then implement the required RAM.

The ETB contains a formatter that combines the source data and IDs into a single data stream before storing the data in RAM. The formatter operates in an identical manner to the formatter in the TPIU.

The ETB has the following interfaces:

Interfaces  Description 
ATB slave  Receives the combined trace output to store. 
APB slave  Allows programming of the ETB through its registers. In addition, you can read the captured trace through this interface (see the sketch after this table). 
Cross trigger  Allows you to send and receive cross trigger events to and from a CTI. 
MBIST Enables the testing of the storage RAM. 
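
The following sketch shows a typical use of the APB slave interface: enable trace capture into the ETB RAM, and later drain the captured trace word by word. The register offsets are taken from the standard CoreSight ETB programmers' model; the base address is hypothetical.

```c
#include <stdint.h>

#define ETB_BASE      0x20010000u   /* hypothetical address on the debug APB */
#define CS_LAR        0xFB0u
#define CS_UNLOCK_KEY 0xC5ACCE55u

#define ETB_RDP   0x004u   /* RAM depth, in 32-bit words          */
#define ETB_RRD   0x010u   /* RAM read data                       */
#define ETB_RRP   0x014u   /* RAM read pointer                    */
#define ETB_CTL   0x020u   /* bit 0: trace capture enable         */
#define ETB_FFCR  0x304u   /* formatter and flush control         */

static inline void etb_w(uint32_t off, uint32_t v)
{
    *(volatile uint32_t *)(uintptr_t)(ETB_BASE + off) = v;
}
static inline uint32_t etb_r(uint32_t off)
{
    return *(volatile uint32_t *)(uintptr_t)(ETB_BASE + off);
}

/* Start capturing formatted trace into the ETB RAM. */
void etb_start(void)
{
    etb_w(CS_LAR, CS_UNLOCK_KEY);
    etb_w(ETB_FFCR, 0x1u);        /* enable the formatter   */
    etb_w(ETB_CTL,  0x1u);        /* enable trace capture   */
}

/* After capture has been disabled, drain the RAM contents word by word. */
void etb_read(uint32_t *buf)
{
    uint32_t depth = etb_r(ETB_RDP);      /* RAM size in words                */
    etb_w(ETB_RRP, 0u);                   /* read from the start of the RAM   */
    for (uint32_t i = 0; i < depth; i++)
        buf[i] = etb_r(ETB_RRD);          /* pointer auto-increments per read */
}
```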

Trace Port Interface Unit

The Trace Port Interface Unit (TPIU) formats the trace data that it receives and outputs the formatted data through the pins of the trace port, where a Trace Port Analyzer (TPA) can capture it. In other words, the trace port provides a route for trace out of the SoC that the TPIU is part of. The TPIU can also output test patterns over the trace port so that a TPA can tune its capture logic to the trace port. This feature maximizes the speed at which trace can be captured.

The TPIU inserts source ID information into the trace stream. The formatter operates in an identical manner to the formatter in the ETB.

The TPIU has the following interfaces:

Interfaces  Description 
Trace out  Connects to the external trace port pins to facilitate trace exporting. 
ATB slave  Receives the combined trace output to export out of the SoC. 
APB slave  Allows programming of the TPIU through its registers. 
Cross trigger  Allows you to send and receive cross trigger events to and from a CTI. 
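
To tie the TPIU description together, here is a hedged bring-up sketch: select the trace port width, briefly drive a test pattern so the TPA can tune its capture logic, and then enable continuous formatting so that trace source IDs are embedded in the exported stream. The base address is hypothetical, and the exact register bit assignments should be checked against the TPIU documentation for the implemented configuration.

```c
#include <stdint.h>

#define TPIU_BASE     0x20030000u   /* hypothetical address on the debug APB */
#define CS_LAR        0xFB0u
#define CS_UNLOCK_KEY 0xC5ACCE55u

#define TPIU_CSPSR    0x004u   /* Current Parallel Port Size (one-hot)  */
#define TPIU_CTPM     0x204u   /* Current Test Pattern Mode             */
#define TPIU_FFCR     0x304u   /* Formatter and Flush Control           */

static inline void tpiu_write(uint32_t off, uint32_t v)
{
    *(volatile uint32_t *)(uintptr_t)(TPIU_BASE + off) = v;
}

/* Bring up the trace port before exporting real trace. */
void tpiu_setup(void)
{
    tpiu_write(CS_LAR, CS_UNLOCK_KEY);
    tpiu_write(TPIU_CSPSR, 1u << 15);        /* one-hot: select a 16-pin trace port          */
    tpiu_write(TPIU_CTPM, (1u << 17) | 1u);  /* continuous walking-ones pattern (assumed bits)
                                                so the TPA can calibrate its capture logic   */
    /* ... TPA tunes itself against the pattern here ... */
    tpiu_write(TPIU_CTPM, 0x0u);             /* stop the test pattern                        */
    tpiu_write(TPIU_FFCR, 0x3u);             /* enable continuous formatting with source IDs */
}
```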