Known Bugs / Current Limitations

No VN/DUAL mode in MPI

Blue Gene/P supports three job modes:

  • SMP (one application process per node)
  • DUAL (two application processes per node)
  • VN (four application processes per node)

In Cobalt, the job mode can be specified using cqsub -m or qsub --mode.

ZeptoOS will launch the appropriate number of application processes per node as determined by the mode; however, MPI jobs currently only work in the SMP mode. We plan to fix this problem in the near future.

No Universal Performance Counter (UPC)

UPC is not available in this release, so PAPI won't work, since it depends on UPC. We are currently working on making UPC available in our Linux environment.

MPI-IO support

Due to the limitations of FUSE (the compute-node infrastructure we use for I/O forwarding of POSIX calls), pathnames passed to MPI-IO routines need to be prefixed with bglockless: or bgl: (the latter will not work with PVFS; the former should work with all filesystems).
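
As an illustration, here is a minimal MPI-IO sketch with the prefix in place (the output pathname is a placeholder, and the error handling is our own, not taken from any ZeptoOS example):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_File fh;

    MPI_Init(&argc, &argv);

    /* The "bglockless:" prefix should work with all filesystems;
       "bgl:" would also work, except on PVFS.  The path itself is
       just a placeholder. */
    if (MPI_File_open(MPI_COMM_WORLD, "bglockless:/path/to/output/file",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY,
                      MPI_INFO_NULL, &fh) != MPI_SUCCESS) {
        fprintf(stderr, "MPI_File_open failed\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}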

In general, file I/O performance with ZeptoOS is not very good, again due to the limitations of FUSE. Within the DOE FastOS I/O forwarding project, we are working on a new, high-performance I/O forwarding infrastructure for parallel applications; as this work matures, we will integrate it into ZeptoOS.

Some MPI jobs hang when they are killed

We have been seeing this a lot with cnip, the IP-over-torus program. This program runs "forever", so it eventually needs to be killed. When that happens, it frequently hangs one or more compute nodes, preventing the partition from shutting down cleanly.

However, the service node will force a shutdown after a timeout of five minutes, so in practice this is not a significant problem. Also, we have not seen this problem with ordinary MPI applications (unlike most MPI applications, cnip is multithreaded and communicates a lot with the kernel).

Features Coming Soon

Multiple MPI jobs one after another

Since ZeptoOS supports submitting a shell script as a compute node "application", it is possible to run multiple "real" applications from within one job:

#!/bin/sh
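# Run the real application ten times in a row.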

for i in 1 2 3 4 5 6 7 8 9 10; do
    /path/to/real/application
done

This works for sequential applications, but not for those linked with MPI; an MPI application can only be run once per job. However, we have experimental code that lifts this limitation, and we plan to include it in the next release.

