MPICH, DCMF and SPI
The ZeptoOS team has enabled IBM CNK's communication software stack to work with the Zepto compute node Linux environment for high-performance computing (HPC) applications, specifically MPI applications. The performance of MPI applications on the Zepto compute node Linux environment is comparable to that on CNK.
As in the IBM CNK environment, the Deep Computing Messaging Framework (DCMF) and the System Programming Interface (SPI) are available. You can also write DCMF or SPI code directly if necessary. DCMF is a communication library that provides non-blocking operations; please refer to the DCMF wiki (http://dcmf.anl-external.org/wiki/index.php/Main_Page) for details. SPI is the lowest-level user-space API for the torus DMA, the collective network, BGP-specific lock mechanisms, and other compute-node-specific implementations. There is no public documentation available right now, but almost all header files and source code are available. Internally, MPICH depends on DCMF, which in turn depends on SPI.
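As a rough illustration (not part of the original page), the sketch below queries the rank and partition size through the DCMF messager interface. The DCMF_Messager_* calls are assumed to follow the DCMF 1.x API declared in dcmf.h; check the header shipped with your installation for the exact prototypes.

  /* Minimal DCMF sketch, assuming the standard DCMF 1.x messager API. */
  #include <stdio.h>
  #include <dcmf.h>

  int main(void)
  {
      DCMF_Messager_initialize();                  /* bring up the messaging layer  */

      size_t rank = DCMF_Messager_rank();          /* this node's rank              */
      size_t size = DCMF_Messager_size();          /* number of ranks in the job    */
      printf("rank %lu of %lu\n", (unsigned long)rank, (unsigned long)size);

      DCMF_Messager_finalize();                    /* shut the messaging layer down */
      return 0;
  }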
To run your HPC application in the Zepto environment, you first need to recompile your code with our compiler wrapper scripts, which are installed in your Zepto installation path. We provide the same set of wrapper scripts that IBM provides; they are listed below, and a minimal example program that can be built with them follows the list. Once you have successfully compiled your code, you need to submit it with the Zepto kernel profile (see the Kernel Profile section). Note: only SMP mode is currently supported.
zmpicc          zmpicxx         zmpif77         zmpif90
zmpixlc         zmpixlcxx       zmpixlf2003     zmpixlf77
zmpixlf90       zmpixlf95       zmpixlc_r       zmpixlcxx_r
zmpixlf2003_r   zmpixlf77_r     zmpixlf90_r     zmpixlf95_r
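For illustration (this example is not from the original page), a minimal MPI program such as the following, saved for instance as mpi-test.c, can be built with one of the wrapper scripts, e.g. zmpicc -O3 -o mpi-test-linux mpi-test.c:

  #include <stdio.h>
  #include <mpi.h>

  int main(int argc, char **argv)
  {
      int rank, size;

      MPI_Init(&argc, &argv);                      /* start the MPI runtime      */
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);        /* rank of this process       */
      MPI_Comm_size(MPI_COMM_WORLD, &size);        /* total number of processes  */
      printf("Hello from rank %d of %d\n", rank, size);
      MPI_Finalize();                              /* shut the MPI runtime down  */
      return 0;
  }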
In case you can't use those compiler wrapper scripts, please make sure that your makefile or build environment points to the Zepto header files and libraries correctly. An example would be:
/bgsys/drivers/ppcfloor/gnu-linux/bin/powerpc-bgp-linux-gcc \
    -o mpi-test-linux -Wall -O3 -I__INST_PREFIX__/include/ mpi-test.c \
    -L__INST_PREFIX__/lib/ -lmpich.zcl -ldcmfcoll.zcl -ldcmf.zcl -lSPI.zcl -lzcl \
    -lzoid_cn -lrt -lpthread -lm

__INST_PREFIX__/bin/zelftool -e mpi-test-linux
NOTE:
- Replace __INST_PREFIX__ with your actual Zepto install path.
- Don't forget to run the zelftool utility, which makes your executable a Zepto Compute Binary so that the Zepto kernel loads all application segments into the big memory area.
The file layout in the Zepto install path would be:
|-- bin
|   |-- zelftool
|-- include
|   |-- dcmf.h
|   |-- dcmf_collectives.h
|   |-- dcmf_coremath.h
|   |-- dcmf_globalcollectives.h
|   |-- dcmf_multisend.h
|   |-- dcmf_optimath.h
|   |-- mpe_thread.h
|   |-- mpi.h
|   |-- mpi.mod
|   |-- mpi_base.mod
|   |-- mpi_constants.mod
|   |-- mpi_sizeofs.mod
|   |-- mpicxx.h
|   |-- mpif.h
|   |-- mpio.h
|   |-- mpiof.h
|   `-- mpix.h
`-- lib
    |-- libSPI.zcl.a
    |-- libcxxmpich.zcl.a
    |-- libdcmf.zcl.a
    |-- libdcmfcoll.zcl.a
    |-- libfmpich.zcl.a
    |-- libfmpich_.zcl.a
    |-- libmpich.zcl.a
    |-- libmpich.zclf90.a
    |-- libzcl.a
    `-- libzoid_cn.a