
                            HP 9000 Systems
                              HP MPI V1.2
                              Release Note


                        HP Part No. B5880-90002
                               Edition 1
          (c) Hewlett-Packard Company, 1997. All rights reserved.



HP MPI V1.2

The information contained in this document supports both the SPP-UX and
HP-UX versions of HP MPI V1.2.

What is in this release
-----------------------

HP MPI V1.2 is a high-performance implementation of the
Message-Passing Interface (MPI) Standard, release V1.2 (11/96).  HP
MPI provides developers of technical applications with an application
programming interface (API) and software library to support parallel,
message-passing applications that are efficient, portable, and
flexible.

HP MPI V1.2 is supported on the following systems:

* Workstation systems, such as HP 9000/712 and HP 9000/715 entry-level
  workstations, and B- and C-Class systems running HP-UX 10.20 or
  higher.

* Mid-size symmetric multi-processors (SMPs), such as J-, D-, and
  K-Class servers running HP-UX 10.20 or higher.

* Scalable enterprise servers, such as S- and X-Class servers, running
  SPP-UX 5.1 or higher.

Benefits
--------

HP MPI V1.2 offers developers several benefits.  MPI is the industry
standard for distributed technical applications and is widely
supported on technical computing platforms.  Applications developed
using HP MPI port easily to other platforms, including those with
different architectures.  Because HP MPI takes advantage of
shared-memory Hewlett-Packard server architectures, developers get the
most performance out of the system without compromising portability
and flexibility.

In addition, HP MPI offers the quality and support of traditional
Hewlett-Packard products.  Technical application providers using HP
MPI protect their software investment by using this highly-efficient,
portable tool for parallel application development.  HP MPI will
evolve to comply with future versions of the MPI standard while
preserving compatibility for existing applications.

HP MPI V1.2 provides additional features and improved performance,
stability, and reliability over the previous version, HP MPI V1.1.
HP MPI V1.2 is
supported on both servers and workstations.  Developers of portable
technical parallel applications can use the same programming model for
low-cost clusters of workstations and high-end scalable servers.

Features
--------

HP MPI V1.2 supports development of portable message-passing
applications on all HP workstations and servers.

Applications that use HP MPI take advantage of the parallel
architecture of Exemplar servers.  HP MPI is supported on both HP-UX
and SPP-UX as two compatible, interoperable software packages.  The
following set of features applies to both the SPP-UX and HP-UX
versions of the product, except where noted.

HP MPI V1.2 provides several new features over HP MPI V1.1:

* HP MPI complies with the latest version (V1.2, 11/96) of the MPI
  standard.  Programs developed with HP MPI 1.2 are fully portable to
  other platforms that support a standard-compliant MPI
  implementation.

* HP MPI V1.2 is supported on workstations.  MPI applications can be
  executed on a single server or on a cluster of workstations and/or
  servers connected via a networking link.

* A new utility, mpitrstat, supports lightweight profiling of MPI
  applications.  It produces an execution time and message size
  profile of HP MPI application runs with tracing enabled.  In trace
  mode, each participating process generates a separate trace file.
  mpitrget combines trace files into a single file for post-mortem
  analysis by mpitrstat and other utilities such as xmpi(1).
  Displayed execution times and message sizes are in seconds and
  thousands of bytes (Kbytes), respectively.  The message size data
  does not reflect the actual bytes transferred within the HP MPI
  implementation.  By default, the data represents the MPI user's
  perspective of message size.

* xmpi, a monitoring, tracing, and visualization tool for MPI
  analysis, includes the following two new functions:

  ** For MPI applications started from within xmpi, the dump function
     consolidates raw trace files and dumps a formatted trace file.

  ** The express function works like dump, but also displays the trace
     in the timeline dialog.  Because express can be used even while
     the application is running, you can choose either to terminate
     the application or leave it running when you invoke this
     function.

* These collective operations have been optimized to provide better
  latency on SMPs and scalable servers: MPI_Alltoall, MPI_Barrier,
  MPI_Bcast, and MPI_Reduce.

* On Exemplar technical servers, MPI applications running on a single
  subcomplex can be checkpointed and then restarted using the
  checkpoint/restart facilities available under SPP-UX.

* MPI accelerates messaging on scalable servers (such as S- and
  X-Class servers) that feature a hardware copy engine, or data mover.
  Use of the data mover is transparent to the application and the
  user.

* Some applications rely heavily on asynchronous communication and
  occasionally have several hundred requests pending.  HP MPI V1.2
  includes an optimization to avoid traversing non-started receive
  requests until data arrival is detected.

* HP MPI V1.2 streamlines manipulation of the MPI request object by
  removing two levels of indirection, reducing the number of calls to
  the MPI progression engine, and returning to the user code earlier.
  This enhancement increases the internal quality of the product and
  permits complete compliance with the MPI 1.2 specification.

* Error messages have changed format.  In addition, several error
  conditions that were not detected previously by HP MPI 1.1 now
  result in error messages from HP MPI V1.2.

* MPI_Cancel is now implemented.

* The extent returned by MPI_Type_extent() now accounts for byte
  alignment, as defined in Section 3.12 of the MPI 1 standard.

* A new environment variable, MPI_WORKDIR, changes the default
  execution directory.

* The compiler option +ppu can be used safely when linking with MPI.

* Ability to checkpoint/restart MPI applications under SPP-UX:

  An MPI application can be checkpointed and restarted if:

     ** The MPI_CHECKPOINT environment variable is set, and
     ** The application is restricted to a single subcomplex, and
     ** The application is not started with mpirun.

  For example, to start an application that can be checkpointed and
  restarted:
 
      % setenv MPI_CHECKPOINT
      % foo -np 16

  When the MPI_CHECKPOINT environment variable is set, the following
  limitations apply:

  ** The MPI job is not given a job ID. It cannot be monitored or
     terminated using the mpijob and mpiclean utilities, respectively. A
     job ID is not printed when the 'j' flag is set in the MPI_FLAGS
     environment variable or when the -j option is given to mpirun. If
     the application crashes or hangs, cleanup must be done manually
     using UNIX utilities (kill, ipcrm).

  ** MPI_Abort(), which is also used by the MPI_ERRORS_ARE_FATAL default
     error handler, does not kill the peer processes in the
     communicator. Only the calling process terminates.

  ** Direct process-to-process bcopy is disabled. This results in a
     bandwidth reduction for large message transfers.

Product documentation
---------------------

* HP MPI User's Guide, first edition (B6011-90001)

* MPI: The Complete Reference (B6011-90003)

Product packaging
-----------------

HP MPI is an optional software product installed in /opt/mpi.  You
must purchase a license and install HP MPI before you can build
applications that use the HP MPI libraries.

Product installation
--------------------

After loading the HP-UX or SPP-UX operating system, you can install HP
MPI.  To install your software, run the SD-UX swinstall command.  It
invokes a user interface that guides you through the installation
process and gives you information about product size, version numbers,
and dependencies.

For more information about installation procedures and related issues,
refer to "Managing HP-UX Software with SD-UX" and other README,
installation, and upgrade documentation included or described in the
HP-UX or SPP-UX operating system package.

HP MPI V1.2 for HP-UX-based servers requires at least 5 MB of space in
/opt to install.  HP MPI V1.2 for SPP-UX-based servers requires at
least 6 MB of space in /opt to install.

Known problems and workarounds
------------------------------

Calling MPI from Fortran 90 and C++ programs

HP MPI is based on V1.2 of the MPI standard, which defines bindings
for Fortran 77 and C but not Fortran 90 or C++.  Several features of
Fortran 90 can interact with MPI semantics to produce unexpected
results.  Consult the HP MPI User's Guide for details.

C++ applications can use the existing C bindings for MPI without
problems.

System resources

HP-UX imposes a limit on the number of file descriptors that
application processes can have open at one time.  When running an HP
MPI application across multiple hosts, each local process opens a
socket to all remote processes.  HP-UX counts these sockets as open
file descriptors, so an HP MPI application with a large number of
off-host processes can reach the maximum.  Ask your system
administrator to increase the file descriptor limit if your
applications frequently exceed the maximum.

HP MPI uses processes and shared-memory buffers.  Applications that
use a large number of processes or a large amount of memory may
exhaust system resources.

Backward Compatibility

In order for HP MPI V1.2 to be fully compliant with the standard, data
structures defined in the mpi.h and mpif.h header files have changed.
You must recompile all files that include MPI header files before you
link your MPI application with HP MPI 1.2.

If you link objects compiled with the HP MPI V1.1 header files with HP
MPI V1.2, your application will fail.

XMPI 

The xmpi utility included with HP MPI V1.2 does not support Build&Run.

If an application is launched under XMPI for execution on multiple
hosts, and one or more of its processes dies abnormally, XMPI is not
able to terminate the application cleanly.

Interaction with f77 command line options

Using certain f77 command line options can produce unpredictable
results.  Currently, you cannot use +autodbl in conjunction with MPI.

If you compiled your application with the +E2 compile option, set
MPI_FLAGS to +E2 before you execute the application.

Compatibility information and installation requirements
-------------------------------------------------------

* HP MPI V1.2 requires HP-UX V10.20 or SPP-UX V5.1.

* Patches PHNE_9106 for the 700 series and PHNE_9107 for the 800
  series are recommended to prevent HP MPI applications running in a
  multihost environment from failing with spurious ENOLINK errors
  from the low-level network protocols.

* Applications built with the HP-UX version of HP MPI cannot execute
  on SPP-UX systems, and applications built with SPP-UX MPI cannot
  execute on HP-UX.  An application may be run, however, on clusters
  of HP-UX and SPP-UX systems if binaries appropriate for each
  operating system are built and specified in the appfile for mpirun.
  Refer to the MPI(1) and mpirun(1) man pages for more details about
  running applications on clusters.

* Optional products, such as the Fortran 77, Fortran 90, C and C++
  compilers, are needed to build MPI applications written in those
  languages.  Consult the mpif77(1), mpif90(1), mpicc(1), and mpiCC(1)
  man pages for information on how to change the default compiler when
  using these utilities.

* HP MPI V1.2 can be installed at any time with no reboot required.
  It is not necessary for the system to be in single-user mode.

Patches and fixes in this version
---------------------------------

60940: mpiclean returns incorrect status code.

60808: If you set MPI_TMPDIR to a bad directory, the application
       hangs.

60563: MPI does not clean up shared-memory segments when fork() fails.

60632: mpif77 has wrong order of library inclusions.

59979: Calling an MPI routine (except MPI_Initialized() and
       MPI_Init()) before MPI is initialized must be detected as an
       error.

59907: mpirun -sp /foo does not work. mpirun -e PATH=/foo works.

59906: MPI_Init() does not search the user's PATH for the executable.

59632: mpiclean sometimes does not kill Fortran applications.

59586: MPI_Type_extent does not account for alignment restrictions.

59469: When used with PARAMETER statements, named constants cause
       errors.

59313: MPI_ATTR_GET with MPI_WTIME_IS_GLOBAL returns error code 15.

59286: MPI_Cancel called with an inactive request returns an error.

59285: MPI_Start and MPI_Startall called twice with same buffer hangs.

59282: MPI_Op_free does not fail when MPI_OP_NULL is used as a
       parameter.

59275: Datatype constructors with negative block length do not fail.

59253: Send and recv functions do not return error when tag > UB.

59251: MPI_Barrier called with an inter-communicator does not return
       the correct error code.

59170: mpirun hangs if a directory or an empty file is given as an
       appfile.

59079: MPI_Type_lb and MPI_Type_ub do not work with positive and
       negative displs.

59077: Merged derived datatypes do not work with MPI_UB and MPI_LB.

Getting assistance
------------------

If you have questions about HP MPI V1.2, contact the Hewlett-Packard
Convex Technical Assistance Center (TAC) at the following phone
numbers:

* Within the continental U.S., call 1(800)952-0379

* From Canada, call 1(800)345-2384

* All other locations, contact the nearest Hewlett-Packard office.

You can also call the HP Response Center, which will forward your call
to the Convex TAC.


