CM Fortran (CM-5) applications using parallel I/O, all at NCSA
- Simulation of collisionless stellar systems. This is apparently a
well-known code in astronomy, called the SCF code. Two users from UCSC
and one from Illinois ran the same code, but SCF is pretty generic
and can be configured to do very different things. It writes huge
log files to the SDA from time to time. Write traffic is fairly
sequential/consecutive; read traffic is pretty thin.
- CFD code to simulate unsteady flow past a normal flat plate.
The application is called "fltplt". At the beginning it reads a file
fairly sequentially, but during the run it re-reads the same file
from time to time in varying patterns. When asked about this, the
user said that input from the file is being mapped to several
different differential equation solvers. So maybe if the user chose
to have different files for different stages, the read pattern from
this application would be more regular. The write traffic throughout
the run is very thin, except at the end, when it writes out a number
of 256x256x32 arrays. Each array is written as a whole and the writes
are highly consecutive (a sketch of this end-of-run pattern follows).
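A minimal sketch of that final write phase, not taken from fltplt
itself: the array count, file name, and use of C stdio instead of CMF
and the SDA are all assumptions. It just shows why the traffic looks
like a few large, purely forward writes.

  /* Sketch only, not the fltplt code: write a few whole 256x256x32
   * arrays back to back, so the file offset only moves forward. */
  #include <stdio.h>
  #include <stdlib.h>

  #define NX 256
  #define NY 256
  #define NZ 32
  #define NARRAYS 4     /* hypothetical; the note only says "a number of" */

  int main(void)
  {
      size_t n = (size_t)NX * NY * NZ;
      float *field = calloc(n, sizeof *field);        /* one output array */
      FILE  *out   = fopen("fltplt_final.dat", "wb"); /* made-up file name */
      if (!field || !out) return 1;

      for (int a = 0; a < NARRAYS; a++) {
          /* ... solver fills 'field' for output array a ... */
          fwrite(field, sizeof *field, n, out);  /* whole array, consecutive */
      }
      fclose(out);
      free(field);
      return 0;
  }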
- Molecular hydrodynamics code (mhd). Uses 6th order finite
differences for derivatives and a 3rd order "leapfrog time advance".
In particular, this one uses a *lot* of physical I/O. At chosen time
steps it writes out the data accumulated since the last write, which
took place at some previously chosen time step. The amount of data
written out depends on the density of grid points around the current
time step; as the density of grid points increases closer to the
solution, the amount written out grows with each write (sketched
below).
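A sketch of that output pattern, with invented intervals and sizes
(and C stdio standing in for the real code's I/O): results are
buffered between chosen output steps and dumped in one large
sequential write, and the per-step record count grows as the grid
densifies, so each dump is bigger than the last.

  /* Sketch only: accumulate per-step data between chosen output steps,
   * dump everything since the last dump, and let the grid point count
   * grow over time so successive dumps get larger. */
  #include <stdio.h>
  #include <stdlib.h>

  int main(void)
  {
      FILE  *dump = fopen("mhd_dump.dat", "wb"); /* made-up file name */
      int    nsteps = 600, dump_every = 100;     /* made-up intervals */
      size_t npoints = 1024;                     /* values recorded per step */
      double *buf = NULL;
      size_t  used = 0, cap = 0;
      if (!dump) return 1;

      for (int step = 1; step <= nsteps; step++) {
          /* ... leapfrog time advance; record this step's values ... */
          if (used + npoints > cap) {
              cap = 2 * (used + npoints);
              buf = realloc(buf, cap * sizeof *buf);
              if (!buf) return 1;
          }
          for (size_t i = 0; i < npoints; i++)
              buf[used + i] = 0.0;               /* stand-in for real data */
          used += npoints;

          if (step % dump_every == 0) {
              fwrite(buf, sizeof *buf, used, dump); /* one big sequential write */
              used = 0;                             /* start a new accumulation */
          }
          if (step % 200 == 0)
              npoints *= 2;          /* grid densifies near the solution */
      }
      fclose(dump);
      free(buf);
      return 0;
  }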
- Electromagnetic scattering code. Uses an FDTD (finite-difference
time-domain) forward solver and conjugate gradient optimization. 2-D
FDTD object fields are written to the SDA using "serial axis
buffering" (??)... The object fields (a collection of arrays) are
defined for all space points, and disk writes involve writing a time
sequence of the space field to the disk. In the second phase the data
is read back in "reverse time order". I do not know why they do this,
but it sure will generate a heck of a lot of backward-jumping reads
(see the sketch below).
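A sketch of just that access pattern (record sizes and the file name
are made up, and C stdio stands in for CMF and the SDA): snapshots go
out forward in time as sequential writes, then the read phase walks
the same file from the last record back to the first, seeking
backward before every read.

  /* Sketch only: phase 1 writes a time sequence of 2-D field snapshots
   * front to back; phase 2 reads them newest-first, so every read is
   * preceded by a backward seek past the record just consumed. */
  #include <stdio.h>
  #include <stdlib.h>

  #define NX 256
  #define NY 256
  #define NSTEPS 64

  int main(void)
  {
      size_t rec = (size_t)NX * NY;               /* one 2-D snapshot, in floats */
      float *snap = calloc(rec, sizeof *snap);
      FILE  *f = fopen("fdtd_fields.dat", "w+b"); /* made-up file name */
      if (!snap || !f) return 1;

      /* Phase 1: forward in time, strictly sequential writes. */
      for (int t = 0; t < NSTEPS; t++) {
          /* ... FDTD update fills 'snap' for time step t ... */
          fwrite(snap, sizeof *snap, rec, f);
      }

      /* Phase 2: reverse time order -- backward-jumping reads. */
      for (int t = NSTEPS - 1; t >= 0; t--) {
          fseek(f, (long)t * (long)(rec * sizeof *snap), SEEK_SET);
          if (fread(snap, sizeof *snap, rec, f) != rec) return 1;
          /* ... optimization phase consumes the snapshot for step t ... */
      }
      fclose(f);
      free(snap);
      return 0;
  }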
- Thermodynamics code. Computes "shear" in gaseous substances due
to temperature and pressure gradients. Reads some huge initialization
files (I do not know what writes them to the SDA); apparently they
contain pressure/temperature info at 3D grid points. Not much write
traffic. It calculates whether, given the specified initial
conditions, the system can reach "critical shear" (I assume something
bad happens then)...
- CMHOG (Connection Machine Higher Order Godunov) is a third-order
accurate fluid dynamics code. It has been used for cosmological
simulation of the evolution of large slices of the universe as well as
simulations of astrophysical jets. The code primarily uses
three-dimensional data structures divided via a block-style domain
decomposition. It is written mostly in CMF, and its I/O consists of:
- 3d 'dumps', which output all or most of the data space to the
SDA via CMF utility calls. Although these happen relatively rarely,
they constitute a lot of data (tens of GBs in a full run) written in
a short period. 32 bit floats.
- 2d 'slices'. These occur more frequently, but each output is a
smaller amount of data. Again, the utility library is used (32 bit
floats).
- 'telemetry data'. These occur often, but each occupies only a
small amount of data; most are just single floats or integers. (A
rough sketch of all three output classes follows.)
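A rough sketch of those three output classes, with invented sizes,
intervals, and file names, and C stdio standing in for the CMF
utility library and the SDA: rare but huge 3-D dumps, more frequent
and smaller 2-D slices, and very frequent, tiny telemetry records,
all as 32-bit floats.

  /* Sketch only: three output streams with very different sizes and
   * frequencies, as described for CMHOG above. */
  #include <stdio.h>
  #include <stdlib.h>

  #define NX 64
  #define NY 64
  #define NZ 64

  int main(void)
  {
      size_t vol = (size_t)NX * NY * NZ, slice = (size_t)NX * NY;
      float *u = calloc(vol, sizeof *u);                 /* one 3-D field */
      FILE *dumps  = fopen("cmhog_dump.dat", "wb");      /* made-up names */
      FILE *slices = fopen("cmhog_slice.dat", "wb");
      FILE *telem  = fopen("cmhog_telemetry.dat", "wb");
      if (!u || !dumps || !slices || !telem) return 1;

      for (int step = 1; step <= 1000; step++) {
          /* ... hydrodynamics update of u ... */
          float t_now = 0.01f * step, max_dens = 1.0f;  /* telemetry scalars */

          fwrite(&t_now, sizeof t_now, 1, telem);   /* every step: a few bytes */
          fwrite(&max_dens, sizeof max_dens, 1, telem);

          if (step % 10 == 0)                       /* often: one 2-D plane */
              fwrite(u, sizeof *u, slice, slices);

          if (step % 250 == 0)                      /* rarely: whole 3-D volume */
              fwrite(u, sizeof *u, vol, dumps);
      }
      fclose(dumps); fclose(slices); fclose(telem);
      free(u);
      return 0;
  }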
Contributed by AP (ap@cs.duke.edu)
August 1994.