HPL


Introduction:

HPL is a portable implementation of the Linpack Benchmark program.

HPL is a software package that solves a random dense linear system in
double precision arithmetic on distributed-memory computers.
It can thus be regarded as a portable as well as freely available
implementation of the High Performance Computing Linpack Benchmark.

The algorithm used by HPL can be summarized by:

  • Two-dimensional block-cyclic data distribution (illustrated in the
    sketch after this list);
  • Right-looking variant of the LU factorization with row partial
    pivoting featuring multiple look-ahead depths;
  • Recursive panel factorization with pivot search and column broadcast
    combined;
  • Various virtual panel broadcast topologies;
  • Bandwidth-reducing swap-broadcast algorithm;
  • Backward substitution with look-ahead of depth 1.
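
As a concrete illustration of the first item, the sketch below (not HPL
source code; the variable names and toy values are chosen for this example
only) shows how a block-cyclic layout with block size NB assigns the blocks
of an N x N matrix to the rows and columns of a P x Q process grid:

#include <stdio.h>

/* Illustrative sketch (not HPL source code): under a two-dimensional
 * block-cyclic distribution with block size NB on a P x Q process grid,
 * the block containing global entry (i, j) is owned by process row
 * (i / NB) mod P and process column (j / NB) mod Q.                    */
int main(void)
{
    int N = 16, NB = 4, P = 2, Q = 2;   /* toy values for illustration */

    for (int i = 0; i < N; i += NB) {
        for (int j = 0; j < N; j += NB) {
            int prow = (i / NB) % P;    /* owning process row    */
            int pcol = (j / NB) % Q;    /* owning process column */
            printf("block (%d,%d) -> process (%d,%d)\n",
                   i / NB, j / NB, prow, pcol);
        }
    }
    return 0;
}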

HPL provides a testing and timing program to quantify the
accuracy of the obtained solution as well as the time it took to compute
it. The best performance achievable by this software on your system
depends on a large variety of factors. Nonetheless, under some restrictive
assumptions about the interconnection network, the algorithm described here
and its implementation are scalable in the sense that their parallel
efficiency remains constant when the per-processor memory usage is held
constant.
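
For the timing side, HPL reports a Gflop/s figure derived from the problem
size and the measured wall time. As a rough sketch (this is not the HPL
source; it uses the commonly quoted Linpack operation count of
2/3*N^3 + 2*N^2 floating-point operations, and the values of N and seconds
below are placeholders), the rate can be estimated as follows:

#include <stdio.h>

/* Rough sketch (not HPL source code): estimate the Gflop/s rate for a
 * run that solved an N x N system in `seconds` of wall-clock time,
 * using the nominal operation count 2/3*N^3 + 2*N^2.                  */
int main(void)
{
    double N = 40000.0;      /* problem size, placeholder value        */
    double seconds = 600.0;  /* measured wall time, placeholder value  */

    double flops  = (2.0 / 3.0) * N * N * N + 2.0 * N * N;
    double gflops = flops / seconds / 1.0e9;

    printf("estimated rate: %.1f Gflop/s\n", gflops);
    return 0;
}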

Web Site:

The HPL home page at NETLIB:

http://www.netlib.org/benchmark/hpl/

Usage:

On any ARC cluster, check the installation details
by typing "module spider hpl".

HPL requires that the appropriate modules be loaded before it can be used.
On NewRiver, for example, the commands are:

module purge
module load gcc/5.2.0
module load openmpi/1.8.5
module load hpl/2.1

Examples:

The following batch file runs HPL with 8 MPI processes. Note that
HPL expects to read an input file named HPL.dat from the working directory,
which specifies the problem size and various algorithmic options.

#! /bin/bash
#
#PBS -l walltime=00:05:00
#PBS -l nodes=1:ppn=8
#PBS -W group_list=newriver
#PBS -q open_q
#PBS -j oe
#
cd $PBS_O_WORKDIR
#
module purge
module load gcc/5.2.0
module load openmpi/1.8.5
module load hpl/2.1
#
mpirun -np 8 xhpl
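
The script above assumes that HPL.dat is present in the working directory.
For orientation only, the opening lines of such a file might look like the
excerpt below; the problem size of 10000, the block size of 192, and the
2 x 4 process grid (matching the 8 MPI processes) are illustrative values,
not tuned recommendations, and a complete HPL.dat contains further lines
selecting the panel factorization, broadcast topology, look-ahead depth,
and swapping options listed in the Introduction:

HPLinpack benchmark input file
Innovative Computing Laboratory, University of Tennessee
HPL.out      output file name (if any)
6            device out (6=stdout,7=stderr,file)
1            # of problems sizes (N)
10000        Ns
1            # of NBs
192          NBs
0            PMAP process mapping (0=Row-,1=Column-major)
1            # of process grids (P x Q)
2            Ps
4            Qs
16.0         threshold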

A complete set of files to carry out a similar process is available in
hpl_example.tar.
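
For example (the batch script name used below is a guess for illustration;
list the archive contents to see the actual file names):

tar tf hpl_example.tar        # list the contents of the archive
tar xf hpl_example.tar        # extract the files
qsub hpl.qsub                 # submit the batch script (name assumed)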