
A Configuration

Arm Forge shares a common configuration file between Arm DDT and Arm MAP. This makes it easy for users to switch between tools without reconfiguring their environment each time.

A.1 Configuration files

Arm Forge uses two configuration files: the system-wide system.config and the user-specific user.config. The system-wide configuration file specifies properties such as the MPI implementation. The user-specific configuration file describes the user's preferences, such as font size. The locations of these files are controlled by the following environment variables:

Environment Variable       Default
ALLINEA_USER_CONFIG        ${ALLINEA_CONFIG_DIR}/user.config
ALLINEA_SYSTEM_CONFIG      ${ALLINEA_CONFIG_DIR}/system.config
ALLINEA_CONFIG_DIR         ${HOME}/.allinea
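
For example, to keep your configuration somewhere other than the default location, you could export the directory variable before starting Arm Forge; the path shown here is purely illustrative:

   # Example only: keep the Arm Forge configuration in a custom directory.
   export ALLINEA_CONFIG_DIR=$HOME/.config/allinea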

A.1.1 Sitewide configuration

If you are the system administrator, or you have write access to the installation directory, you can provide a configuration file that other users are automatically given a copy of the first time they start Arm Forge. Users then no longer need to configure site-specific aspects such as queue templates and job submission themselves.

First configure Arm Forge normally and run a test program to make sure all the settings are correct. When you are satisfied with your configuration, run the following command:


   forge --clean-config

This will remove any user-specific settings from your system configuration file and will create a system.config file that can provide the default settings for all users on your system. Instructions on how to do this are printed when --clean-config completes. Note that only the system.config file is generated. Arm Forge also uses a user-specific user.config which is not affected.

If you want to use DDT to attach to running jobs you also need to create a file called nodes in the installation directory with a list of compute nodes you want to attach to. See section 5.9 Attaching to running programs for details.
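
The nodes file is simply a list of hostnames, one per line; the names below are placeholders:

   node001
   node002
   node003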

A.1.2 Startup scripts

When Arm Forge is started it searches for a sitewide startup script called allinearc in the root of the installation directory. If this file exists it is sourced before starting the tool. When using the remote client this startup script is sourced before any sitewide remote-init remote daemon startup script.

Similarly, you can also provide a user-specific startup script in ~/.allinea/allinearc.
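
A startup script is an ordinary shell fragment that is sourced before the tool starts, so it can export environment variables or load modules. For example, a sitewide allinearc might look similar to this sketch; the module name and path are purely illustrative:

   # Sourced by Arm Forge before the tool starts.
   module load mpi/openmpi               # illustrative: make the site MPI available
   export PATH=/opt/site/tools/bin:$PATH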

Note

If the ALLINEA_CONFIG_DIR environment variable is set then the software will look in $ALLINEA_CONFIG_DIR/allinearc instead. When using the remote client the user-specific startup script will be sourced before the user-specific ~/.allinea/remote-init remote daemon startup script.

A.1.3 Importing legacy configuration

If you have used a version of Arm DDT prior to 4.0 your existing configuration will be imported automatically. If the DDTCONFIG environment variable is set, or you use the --config command-line argument, the existing configuration will be imported. However, the legacy configuration file will not be modified, and subsequent configuration changes will be saved as described in the previous sections.

A.1.4 Converting legacy sitewide configuration files

If you have existing sitewide configuration files from a version of Arm DDT prior to 4.0 you will need to convert them to the new 4.0 format. This can easily be done using the following command line:


   forge --config=oldconfig.ddt --system-config=newconfig.ddt --clean-config

Note

newconfig.ddt must not exist beforehand.

A.1.5 Using shared home directories on multiple systems

If your site uses the same home directory for multiple systems you may want to use a different configuration directory for each system.

You can do this by specifying the ALLINEA_CONFIG_DIR environment variable before starting Arm Forge. If you use the module system you may choose to set ALLINEA_CONFIG_DIR according to which system the module was loaded on.

For example, say you have two systems: harvester with login nodes harvester-login1 and harvester-login2 and sandworm with login nodes sandworm-login1 and sandworm-login2. You may add something similar to the following code to your module file:


   case $(hostname) in
     harvester-login*)
       export ALLINEA_CONFIG_DIR=$HOME/.allinea/harvester
       ;;
     sandworm-login*)
       export ALLINEA_CONFIG_DIR=$HOME/.allinea/sandworm
       ;;
   esac

A.1.6 Using a shared installation on multiple systems

If you have multiple systems sharing a common Arm Forge installation, you may wish to have a different default configuration for each system. You can use the ALLINEA_DEFAULT_SYSTEM_CONFIG environment variable to specify a different file for each system. For example, you may add something similar to the following to your module file:


   case $(hostname) in
     harvester-login*)
       export ALLINEA_DEFAULT_SYSTEM_CONFIG=/sw/arm/forge/harvester.config
       ;;
     sandworm-login*)
       export ALLINEA_DEFAULT_SYSTEM_CONFIG=/sw/arm/forge/sandworm.config
       ;;
   esac

A.2 Integration with queuing systems

Figure 121: Queuing Systems

Arm Forge can be configured to interact with most job submission systems. This is useful if you wish to debug interactively but need to submit a job to the queue in order to do so.

MAP is usually run as a wrapper around mpirun or mpiexec, via the map --profile argument. Arm recommends using this to generate .map files rather than configuring MAP to submit jobs to the queue, but both usage patterns are fully supported.
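
For example, a non-interactive profiling run wrapped around an existing mpirun command might look similar to the following; the process count, program name and arguments are placeholders:

   map --profile mpirun -n 16 ./myprogram arg1 arg2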

In the Options window (Preferences on Mac OS X ), you should choose Submit job through queue. This displays extra options and switches the GUI into queue submission mode.

The basic stages in configuring to work with a queue are:

  1. Making a template script.
  2. Setting the commands used to submit, cancel, and list queue jobs.

Your system administrator may wish to provide a configuration file containing the correct settings, thereby removing the need for individual users to configure their own settings and scripts.

In this mode Arm Forge can use a template script to interact with your queuing system. The templates subdirectory contains example scripts that you can modify to meet your needs. In particular, {installation-directory}/templates/sample.qtf demonstrates the process of creating a template file in some detail.

A.3 Template tutorial

Ordinarily, your queue script will probably end in a line that starts mpirun with your target executable. In most cases you can simply replace that line with AUTO_LAUNCH_TAG. For example, if your script currently has the line:


   mpirun -np 16 program_name myarg1 myarg2

Then create a copy of it and replace that line with:


   AUTO_LAUNCH_TAG

Select this file as the Submission template file on the Job Submission Settings page of the Options window. Notice that the script no longer explicitly specifies the number of processes, and so on. Instead, you specify the number of processes, program name and arguments in the Run window.

Fill in Submit command with the command you usually use to submit your job (for example, qsub or sbatch), Cancel command with the command you usually use to cancel a job (for example, qdel or scancel), and Display command with the command you usually use to display the current queue status (for example, qstat or squeue).

You can usually use (\d+) as the Regexp for job id. This simply scans for a number in the output from your Submit command.

Once you have a simple template working you can go on to make more things configurable from the GUI. For example, to be able to specify the number of nodes from the GUI you would replace an explicit number of nodes with the NUM_NODES_TAG. In this case replace:


  #SBATCH --nodes=100

With:


  #SBATCH --nodes=NUM_NODES_TAG

See appendix I.1 Queue template tags for a full list of tags.

A.3.1 The template script

The template script is based on the file you would normally use to submit your job. This is typically a shell script that specifies the resources needed, such as the number of processes and output files, and then executes mpirun, vmirun, poe or similar with your application.

The most important difference is that job-specific variables, such as number of processes, number of nodes and program arguments, are replaced by capitalized keyword tags, such as NUM_PROCS_TAG.

When Arm Forge prepares your job, it replaces each of these keywords with its value and then submits the new file to your queue.

To refer to tags in comments without Arm Forge detecting them as a required field, the comment line must begin with ##.
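
For illustration only, a minimal Slurm-style template might look similar to the following sketch. It uses only tags already mentioned in this appendix, and the scheduler directives are placeholders; consult the shipped sample.qtf and appendix I.1 Queue template tags before adapting it:

   #!/bin/bash
   ## Comment lines beginning with ## may mention tags such as NUM_PROCS_TAG
   ## without Arm Forge treating them as required fields.
   #SBATCH --job-name=forge-session
   #SBATCH --nodes=NUM_NODES_TAG

   AUTO_LAUNCH_TAG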

A.3.2 Configuring queue commands

Once you have selected a queue template file, enter submit, display and cancel commands.

When you start a session Arm Forge will generate a submission file and append its file name to the submit command you give.

For example, if you normally submit a job by typing job_submit -u myusername -f myfile then you should enter job_submit -u myusername -f as the submit command.

To cancel a job, Arm Forge will use a regular expression you provide to get a value for JOB_ID_TAG. This tag is found by using regular expression matching on the output from your submit command. See appendix I.6 Job ID regular expression for details.
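
As an illustration, on a Slurm system the queue commands might be configured similarly to the following; exact commands and the job ID pattern vary between sites, so treat these values only as a starting point:

   Submit command:     sbatch
   Cancel command:     scancel JOB_ID_TAG
   Display command:    squeue
   Regexp for job id:  (\d+)

Here JOB_ID_TAG stands for the job ID captured from the submit command's output by the regular expression.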

A.3.3 Configuring how job size is chosen

Arm Forge offers a number of flexible ways to specify the size of a job. You may choose whether Number of Processes and Number of Nodes options appear in the Run window or whether these should be implicitly calculated. Similarly you may choose to display Processes per node in the Run window or set it to a Fixed value.

Note

If you choose to display Processes per node in the Run window and PROCS_PER_NODE_TAG is specified in the queue template file, then the tag is always replaced by the Processes per node value from the Run dialog, even if the option is unchecked there.

A.3.4 Quick restart

DDT allows you to reuse an existing queued job to quickly restart a run without resubmitting it to the queue, provided that your MPI implementation supports doing this. Simply check the Quick Restart check box on the Job Submission Options page.

In order to use quick restart, your queue template file must use AUTO_LAUNCH_TAG to execute your job.

For more information on AUTO_LAUNCH_TAG, see I.4.1 Using AUTO_LAUNCH_TAG.

A.4 Connecting to remote programs (remote-exec)

When Arm Forge needs to access another machine for remote launch or as part of starting some MPIs, it will attempt to use the secure shell, ssh, by default.

However, this may not always be appropriate: ssh may be disabled, or it may be running on a port other than the standard port 22. In this case, you can create a file called remote-exec in your ~/.allinea directory and DDT will use this instead.

Arm Forge looks for the script at ~/.allinea/remote-exec, and executes it as follows:


   remote-exec HOSTNAME APPNAME [ARG1] [ARG2] ...

The script should start APPNAME on HOSTNAME with the arguments ARG1 ARG2 without further input (no password prompts). Standard output from APPNAME should appear on the standard output of remote-exec. An example is shown here:

SSH based remote-exec

A remote-exec script using ssh running on a non-standard port might look as follows:


   #!/bin/sh
   exec ssh -p {port-number} "$@"

In order for this to work without prompting for a password, you should generate a public and private SSH key, and ensure that the public key has been added to the ~/.ssh/authorized_keys file on machines you wish to use. See the ssh-keygen manual page for more information.
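
For example, a typical setup looks similar to the following; the username and hostname are placeholders:

   ssh-keygen -t ed25519
   ssh-copy-id user@remote-host

If ssh-copy-id is not available, append the contents of the generated public key file to ~/.ssh/authorized_keys on the remote machine manually.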

Testing

Once you have set up your remote-exec script, it is recommended that you test it from the command line. For example:


   ~/.allinea/remote-exec TESTHOST uname -n

This should return the output of uname -n on TESTHOST, without prompting for a password.

If you are having trouble setting up remote-exec, contact Arm support for assistance.

Windows

The previously described functionality is also provided by the Windows remote client. However, there are two differences:

  1. The script is named remote-exec.cmd rather than remote-exec.
  2. The default implementation uses the plink.exe executable supplied with Arm Forge.

A.5 Optional configuration

Arm Forge provides an Options window (Preferences on Mac OS X), which allows you to quickly edit the settings outlined below.

A.5.1 System

MPI Implementation: Allows you to tell Arm Forge which MPI implementation you are using.

Note

If you are not using Arm Forge to work with MPI programs select none.

Override default mpirun path: Allows you to override the path to the mpirun (or equivalent) command.

Select Debugger: Tells Arm Forge which underlying debugger it should use. This should almost always be left as Automatic.

On Linux systems Arm Forge ships with four versions of the GNU GDB debugger: GDB 7.6.2, GDB 7.10.1, GDB 7.12.1 and GDB 8.1. GDB 7.12.1 is the recommended debugger for MAP and GDB 8.1 is the recommended debugger for DDT. These recommended defaults are selected automatically when Automatic (recommended) is selected from the System Settings page on the Options window.

Create Root and Workers groups automatically: If this option is checked DDT will automatically create a Root group for rank 0 and a Workers group for ranks 1-n when you start a new MPI session.

Use Shared Symbol Cache: The shared symbol cache is a file that contains all the symbols in your program in a format that can be used directly by the debugger. Rather than loading and converting the symbols itself, every debugger shares the same cache file. This significantly reduces the amount of memory used on each node by the debuggers. For large programs there may be a delay starting a job while the cache file is created as it may be quite large. The cache files are stored in $HOME/.allinea/symbols. Arm recommends you only turn this option on if you are running out of memory on compute nodes when debugging programs with DDT.

Heterogeneous system support: DDT has support for running heterogeneous MPMD MPI applications where some nodes use one architecture and other nodes use another architecture. This requires a little preparation of your Arm Forge installation. You must have a separate installation of DDT for each architecture. The architecture of the machine running the Arm Forge GUI is called the host architecture. You must create symbolic links from the host architecture installation of Arm Forge to the other installations for the other architectures. For example with a 64-bit x86_64 host architecture (running the GUI) and some compute nodes running the 32-bit i686 architecture:


   ln -s /path/to/arm-forge-i686/bin/ddt-debugger \
         /path/to/arm-forge-x86_64/bin/ddt-debugger.i686

Enable CUDA software pre-emption: Allows debugging of CUDA kernels on a workstation with a single GPU.

Default groups file: Entering a file here allows you to customize the groups displayed by DDT when starting an MPI job. If you do not specify a file DDT will create the default Root and Workers groups if the previous option is checked.

Note

A groups file can be created by right clicking the process groups panel and selecting Save groups… while running your program.

Attach hosts file: When attaching, DDT will fetch a list of processes for each of the hosts listed in this file. See section 5.9 Attaching to running programs for more details.

A.5.2 Job submission

This section allows you to configure Arm Forge to use a custom mpirun command, or submit your jobs to a queuing system. For more information on this, see section A.2 Integration with queuing systems.

A.5.3 Code viewer settings

This allows you to configure the appearance of the Arm Forge code viewer, which is used to display your source code while debugging.

Tab size: Sets the width of a tab character in the source code display. A width of 8 means that a tab character will have the same width as 8 space characters.

Font name: The name of the font used to display your source code. It is recommended that you use a fixed width font.

Font size: The size of the font used to display your source code.

External Editor: This is the program Arm Forge will execute if you right click in the code viewer and choose Open file in external editor. This command should launch a graphical editor. If no editor is specified, Arm Forge will attempt to launch the default editor as configured in your desktop environment.

Colour Scheme: Color palette to use for the code viewer's background, text and syntax highlighting. Defined in Kate syntax definition format in the resource/styles directory of the Arm Forge install.

Visualize Whitespace: Enables or disables the display of symbols representing whitespace. This is useful for distinguishing between space and tab characters.

Warn about potential programming errors: This setting enables or disables the use of static analysis tools that are included with the Arm Forge installation. These tools support F77, C and C++, and analyze the source code of viewed files to discover common errors, but they can cause heavy CPU usage on the system running the Arm Forge user interface. Uncheck this option to disable them.

A.5.4 Appearance

This section allows you to configure the graphical style of Arm Forge, as well as fonts and tab settings for the code viewer.

Look and Feel: This determines the general graphical style of Arm Forge, including the appearance of buttons and context menus.

Override System Font Settings: This setting can be used to change the font and size of all components in Arm Forge (except the code viewer).
