Opened 7 years ago

Last modified 4 years ago

#1074 new bug

singleton init and dynamic processes

Reported by: Lisandro Dalcin <dalcinl@…> Owned by:
Priority: major Milestone: future
Component: mpich Keywords: dynamic process, singleton init
Cc:

Description (last modified by balaji)

$ cat usize.c
#include <mpi.h>
int main(int argc, char *argv[])
{
  int *usize, flag;
  MPI_Init(&argc, &argv);
  /* the attribute value is returned as a pointer to an int */
  MPI_Comm_get_attr(MPI_COMM_WORLD, MPI_UNIVERSE_SIZE, &usize, &flag);
  MPI_Finalize();
  return 0;
}

$ mpicc usize.c

$ mpiexec ./a.out 

$ ./a.out 
[mpiexec@trantor] match_arg (./utils/args/args.c:122): unrecognized argument pmi_args
[mpiexec@trantor] HYDU_parse_array (./utils/args/args.c:140): argument matching returned error
[mpiexec@trantor] HYD_uii_mpx_get_parameters (./ui/mpich/utils.c:1016): error parsing input array

Usage: ./mpiexec [global opts] [exec1 local opts] : [exec2 local opts] : ...

<... more mpiexec help output ...>

Change History (13)

comment:1 Changed 7 years ago by balaji

  • Owner set to balaji
  • Status changed from new to assigned

comment:2 Changed 7 years ago by balaji

  • Milestone changed from mpich2-1.3 to mpich2-1.4

The problem here is that the PMI implementation assumes that mpiexec accepts a specific set of command-line parameters. This is incorrect: the MPI standard only specifies a few options that mpiexec must accept. AFAICT, the only (semi-)portable way of handling this is for the process to execvp "mpiexec -n 1 ./itself" if it detects that it's in a singleton init. However, the PMI library does not have access to argc and argv, and thus cannot handle this case. I'd prefer to fix this the right way in PMI-2.
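In code, the proposed relaunch might look something like the sketch below. This is only an illustration: the environment-variable check and the function name are hypothetical, and, as noted above, the PMI library itself has no access to argc/argv, so this logic would have to live somewhere that does.

/* Minimal sketch of the proposed "re-exec under mpiexec" approach.
 * The environment check and the function name are illustrative only;
 * this would run early, before the process does anything externally
 * visible. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static void singleton_reexec(int argc, char *argv[])
{
  /* Hypothetical test for "was I started by a process manager?" */
  if (getenv("PMI_PORT") != NULL || getenv("PMI_FD") != NULL)
    return;                    /* a PM is present; nothing to do */

  /* Rebuild the command line as: mpiexec -n 1 <original argv...> */
  char **newargv = malloc((argc + 4) * sizeof(char *));
  if (newargv == NULL)
    return;
  newargv[0] = "mpiexec";
  newargv[1] = "-n";
  newargv[2] = "1";
  for (int i = 0; i < argc; i++)
    newargv[3 + i] = argv[i];
  newargv[3 + argc] = NULL;

  execvp("mpiexec", newargv);  /* replaces the process on success */
  perror("execvp");            /* reached only if the exec fails */
  exit(EXIT_FAILURE);
}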

comment:3 Changed 7 years ago by gropp

The approach chosen by PMI v1 was necessitated by the fact that PMI v1 does not require that there be an existing server process (and PMI v2 must not change this). In the singleton init case there may not be anything for PMI to contact, so the PMI library must be prepared to start whatever outside service it requires by running a program (it should also be permitted to contact a server if one is available).

The options are to standardize on a new executable name, or to use mpiexec and standardize on the arguments needed to start (or contact, if a persistent service is already running) that service. PMI v1 chose the latter approach. As mpiexec must already know how to interact with PMI, it seemed reasonable to reuse mpiexec as the way to start the required service. If PMI v2 wants to go with the first option, it will need to define a new executable and the command-line arguments used to connect with it; that seems like an unnecessary change from the PMI v1 approach. Note that there is code in cmnargs.c to handle all of the necessary features.

The "right" fix to PMI v2 is to standardize on how a PMI can determine whether it must invoke a startup program (which should be called mpiexec) or whether it can attempt to directly contact some service. For example, an environment variable PMI_SINGLETON_INIT_ADDR that had the form host:port could be used, along with a standard wire protocol for interacting with the singleton server.

comment:4 Changed 7 years ago by balaji

Bill: No, the approach I proposed does not assume that an external service is running (at least no more than what PMI-1 already does). The current PMI-1 does an execvp of "mpiexec --some-random-parameters". Instead, I'm proposing that it use the more standard interface of mpiexec and execvp "mpiexec -n 1 ./itself".

comment:5 Changed 7 years ago by gropp

You mean to have the program restart itself as a child of mpiexec? That's an interesting thought, but the application may already have executed code (such as opening a file) before the singleton init. Note that the singleton init code isn't invoked at MPI_Init time, but only when a PMI service is actually needed. Further, in many cases the PMI services are never needed by singleton MPI codes; most MPI programs run with one process won't ever start the singleton init logic. So I don't see how you can re-exec the application from the point where the singleton init occurs and preserve correct behavior. And even if you required the singleton init logic in MPI_Init, it would still change the behavior of the program if the user did something with an external effect, such as opening a file, before MPI_Init. While not recommended, doing so is not erroneous. That's why PMI-1 asks mpiexec to start or connect to whatever services are required at the point where the PMI service is needed.
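As a concrete (hypothetical) illustration of that hazard:

/* Hypothetical illustration: the fopen() below has an external effect
 * before MPI_Init, so re-exec'ing the process under mpiexec at
 * singleton-init time would repeat it (truncating the file twice). */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
  FILE *log = fopen("run.log", "w");  /* external effect pre-MPI_Init */
  if (log != NULL)
    fprintf(log, "starting\n");
  MPI_Init(&argc, &argv);
  /* ... a Spawn or Connect call here might trigger singleton init ... */
  MPI_Finalize();
  if (log != NULL)
    fclose(log);
  return 0;
}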

Also, the current PMI-1 does not execvp "mpiexec --some-random-parameters"; the parameters are well defined (albeit documented only in the simple PM utility code). I don't see the problem with this: as I mentioned, the common code for processing mpiexec arguments already handles them, and while they aren't standard from the point of view of what the MPI standard defines, mpiexec is fairly tightly connected to PMI in practice.

The singleton init stuff is very tricky and there are many subtleties. Rusty and I have been through several rounds of this; keeping singleton init from adding complex behavior to simple codes while retaining correctness has not been easy. Making it easier to build mpiexec programs that correctly handle this is one of the reasons I wrote the PM utility routines.

Finally, *requiring* that the singleton init run any program has its own drawbacks, which is why an alternate route to contact a running service makes sense in PMI v2.

comment:6 Changed 7 years ago by balaji

Bill: as you are no doubt aware, a correct MPI program is not allowed to do anything that affects the external environment before MPI_Init, because the part before MPI_Init may be executed by fewer, the same, or more threads/processes than the number of processes launched by the user (citation: lines 44--48 on page 23 of the MPI-2.2 standard).

So I see this as a place where we can take advantage of that flexibility.

Yes, it is possible to mandate that a process manager supporting PMI accept a specific set of parameters, but I think it is still cleaner to relaunch the process with mpiexec.

comment:7 Changed 7 years ago by Lisandro Dalcin <dalcinl@…>

Many high-level languages providing MPI wrappers call MPI_Init() well after process startup and bootstrap. Currently, when using the MPD PM, I'm able to launch Python scripts in singleton mode and even Spawn() child Python processes executing other scripts. I would really like to see this functionality working in the new Hydra PM.

comment:8 Changed 7 years ago by gropp

I don't see that text; I see:

MPI programs require that library routines that are part of the basic language environment
(such as write in Fortran and printf and malloc in ISO C) and are executed after
MPI_INIT and before MPI_FINALIZE operate independently and that their completion is
independent of the action of other processes in an MPI program.

(I know that there is a statement that says we don't specify what happens before MPI_Init or after MPI_Finalize; however, the principle of least surprise should be applied here.)

I still don't see the reason to use a different approach from the one that works and is already implemented in C. If it's the argument to mpiexec that is the objection (though we already require non-standard behavior, including the forwarding of environment variables), a separate program could be used. And in any event, requiring a program to be run, whether it is mpiexec or something else, has its own problems.

comment:9 Changed 7 years ago by balaji

  • Milestone changed from mpich2-1.4 to mpich2-1.5

comment:10 Changed 6 years ago by balaji

  • Milestone changed from mpich2-1.5 to mpich2-1.6

comment:11 Changed 5 years ago by balaji

  • Milestone changed from mpich2-1.6 to mpich-3.0

Milestone mpich2-1.6 deleted

comment:12 Changed 5 years ago by balaji

  • Milestone changed from mpich-3.0 to future

comment:13 Changed 4 years ago by balaji

  • Description modified (diff)
  • Owner balaji deleted
  • Status changed from assigned to new