MPY(1) | General Commands Manual | MPY(1)
NAME
mpy - Message Passing Yorick

SYNOPSIS
mpirun -np mp_size mpy [ -j pfile1.i [ -j pfile2.i [ ... ]]] [ -i file1.i [ -i file2.i [ ... ]]]

DESCRIPTION
Yorick is an interpreted language like Basic or Lisp, but far faster. See yorick(1) to learn more about it.

Explanations
The mpy package interfaces yorick to the MPI parallel programming library. MPI stands for Message Passing Interface; the idea is to connect multiple instances of yorick that communicate among themselves via messages. Mpy can either perform simple, highly parallel tasks as pure interpreted programs, or it can start and steer arbitrarily complex compiled packages which are free to use the compiled MPI API. The interpreted API is not intended to be an MPI wrapper; instead it is stripped to the bare minimum.

This is version 2 of mpy (released in 2010); it is incompatible with version 1 of mpy (released in the mid 1990s), because version 1 had numerous design flaws which made it very difficult to write programs free of race conditions, and impossible to scale to millions of processors. However, you can run most version 1 mpy programs under version 2 by doing mp_include,"mpy1.i" before you mp_include any file defining an mpy1 parallel task (that is, before any file containing a call to mp_task).

Usage notes
The MPI environment is not really specified by the standard; existing environments are very crude, and strongly favor non-interactive batch jobs. The number of processes is fixed before MPI begins; each process has a rank, a number from 0 to one less than the number of processes. You use the rank as an address to send messages, and the process receiving the message can probe to see which ranks have sent messages to it, and of course receive those messages.

A major problem in writing a message passing program is handling events or messages arriving in an unplanned order. MPI guarantees only that a sequence of messages sent by rank A to rank B will arrive in the order sent. There is no guarantee about the order of arrival of those messages relative to messages sent to B from a third rank C. In particular, suppose A sends a message to B, then A sends a message to C (or even exchanges several messages with C) which results in C sending a message to B. The message from C may arrive at B before the message from A. An MPI program which does not allow for this possibility has a bug called a "race condition". Race conditions may be extremely subtle, especially when the number of processes is large.

The basic mpy interpreted interface consists of two variables:
mp_size = number of processes
mp_rank = rank of this process
and four functions:
mp_send, to, msg; // send msg to rank "to"
msg = mp_recv(from); // receive msg from rank "from"
ranks = mp_probe(block); // query senders of pending messages
mp_exec, string; // parse and execute string on every rank

You call mp_exec on rank 0 to start a parallel task. When the main program thus created finishes, all ranks other than rank 0 return to an idle loop, waiting for the next mp_exec. Rank 0 picks up the next input line from stdin (that is, waits for input at its prompt in an interactive session), or terminates all processes if no more input is available in a batch session.

The mpy package modifies how yorick handles the #include parser directive, and the include and require functions. Namely, if a parallel task is running (that is, a function started by mp_exec), these all become collective operations. That is, rank 0 reads the entire file contents, and sends the contents to the other processes as an MPI message (like mp_exec of the file contents). Every process other than rank 0 is only running during parallel tasks; outside a parallel task, when only rank 0 is running (and all other ranks are waiting for the next mp_exec), the #include directive and the include and require functions return to their usual serial operation, affecting only rank 0.

When mpy starts, it is in parallel mode, so that all the files yorick includes when it starts (the files in Y_SITE/i0) are included as collective operations. Without this feature, every yorick process would attempt to open and read the startup include files, overloading the file system before mpy ever gets started. Passing the contents of these files as MPI messages is the only way to ensure there is enough bandwidth for every process to read the contents of a single file.

The last file included at startup is either the file specified in the -batch option, or the custom.i file. To avoid problems with code in custom.i which may not be safe for parallel execution, mpy does not look for custom.i, but for custommp.i instead. The instructions in the -batch file or in custommp.i are executed in serial mode on rank 0 only.
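As an illustrative sketch (not taken from this manual; the task name and the messages are invented), a minimal parallel task built from the four basic functions might look like this:

/* hypothetical example: every nonzero rank reports its rank to rank 0 */
func check_in
{
  if (mp_rank) {
    /* nonzero ranks send their rank number to rank 0 */
    mp_send, 0, mp_rank;
  } else {
    /* rank 0 receives one message from each of the other ranks */
    for (n=1 ; n<mp_size ; ++n) {
      r = mp_recv(n);   /* blocks until the message from rank n arrives */
      write, format="rank %ld checked in\n", r;
    }
  }
}
mp_exec, "check_in";    /* typed on rank 0; starts the task on every rank */

Note that rank 0 here chooses to receive from each rank in turn; because mp_recv(from) selects a particular sender, this loop is immune to the message-ordering race described above.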
Similarly, mpy overrides the usual process_argv function, so that -i and other command line options are processed only on rank 0 in serial mode. The intent in all these cases is to make the -batch or custommp.i or -i include files execute only on rank 0, as if you had typed them there interactively. You are free to call mp_exec from any of these files to start parallel tasks, but the file itself is serial.

An additional command line option is added to the usual set:
mpy -j somefile.i
which includes somefile.i in parallel mode on all ranks (see the -j option below). Two further interpreted functions support the common broadcast and reduction patterns:
mp_handout, msg;          // broadcast msg from rank 0 to all ranks
total = mp_handin(value); // collect value from all ranks (total is their sum)
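A sketch under the same caveats (the task name and variables are invented; this assumes mp_handout broadcasts from rank 0 and mp_handin sums its argument over all ranks) shows how the pair covers broadcast-then-reduce without explicit mp_send/mp_recv loops:

/* hypothetical example: broadcast a parameter, then sum partial results */
func partial_sums
{
  if (!mp_rank) npts = 100;       /* only rank 0 knows the parameter */
  mp_handout, npts;               /* now every rank has npts */
  part = double(npts * mp_rank);  /* stand-in for a real partial computation */
  total = mp_handin(part);        /* sum of part over all ranks */
  if (!mp_rank) write, format="total = %g\n", total;
}
mp_exec, "partial_sums";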
Options
- -j file.i
- includes the Yorick source file file.i as mpy starts, in parallel mode on all ranks. This is equivalent to the mp_include function after mpy has started.
- -i file.i
- includes the Yorick source file file.i as mpy starts, in serial mode. This is equivalent to the #include directive after mpy has started.
- -batch file.i
- includes the Yorick source file file.i as mpy starts, in serial mode. Your customization file custommp.i, if any, is not read, and mpy is placed in batch mode. Use the help command on the batch function (help, batch) to find out more about batch mode. In batch mode, all errors are fatal; normally, mpy will halt execution and wait for more input after an error.
AUTHOR
David H. Munro, Lawrence Livermore National Laboratory

FILES
Mpy uses the same files as yorick, except that custom.i is replaced by custommp.i (located in /etc/yorick/mpy/ on Debian based systems) and the Y_SITE/i-start/ directory is ignored.

SEE ALSO
yorick(1)

4th Berkeley Distribution | 2010 MARCH 21 | MPY(1)