w_direct
usage:
w_direct [-h] [-r RCFILE] [--quiet | --verbose | --debug] [--version]
[--max-queue-length MAX_QUEUE_LENGTH]
[--serial | --parallel | --work-manager WORK_MANAGER] [--n-workers N_WORKERS]
[--zmq-mode MODE] [--zmq-comm-mode COMM_MODE] [--zmq-write-host-info INFO_FILE]
[--zmq-read-host-info INFO_FILE] [--zmq-upstream-rr-endpoint ENDPOINT]
[--zmq-upstream-ann-endpoint ENDPOINT] [--zmq-downstream-rr-endpoint ENDPOINT]
[--zmq-downstream-ann-endpoint ENDPOINT] [--zmq-master-heartbeat MASTER_HEARTBEAT]
[--zmq-worker-heartbeat WORKER_HEARTBEAT] [--zmq-timeout-factor FACTOR]
[--zmq-startup-timeout STARTUP_TIMEOUT] [--zmq-shutdown-timeout SHUTDOWN_TIMEOUT]
{help,init,average,kinetics,probs,all} ...
optional arguments:
-h, --help show this help message and exit
general options:
-r RCFILE, --rcfile RCFILE
use RCFILE as the WEST run-time configuration file (default: west.cfg)
--quiet emit only essential information
--verbose emit extra information
--debug enable extra checks and emit copious information
--version show program's version number and exit
parallelization options:
--max-queue-length MAX_QUEUE_LENGTH
Maximum number of tasks that can be queued. Useful to limit RAM use for tasks that
have very large requests/responses. Default: no limit.
direct kinetics analysis schemes:
{help,init,average,kinetics,probs,all}
help print help for this command or individual subcommands
init calculate state-to-state kinetics by tracing trajectories
average Averages and returns fluxes, rates, and color/state populations.
kinetics Generates rate and flux values from a WESTPA simulation via tracing.
probs Calculates color and state probabilities via tracing.
all Runs the full suite, including the tracing of events.
parallelization options:
--serial run in serial mode
--parallel run in parallel mode (using processes)
--work-manager WORK_MANAGER
use the given work manager for parallel task distribution. Available work managers
are ('serial', 'threads', 'processes', 'zmq'); default is 'serial'
--n-workers N_WORKERS
Use up to N_WORKERS on this host, for work managers which support this option. Use
0 for a dedicated server. (Ignored by work managers which do not support this
option.)
options for ZeroMQ (“zmq”) work manager (master or node):
--zmq-mode MODE Operate as a master (server) or a node (workers/client). "server" is a deprecated
synonym for "master" and "client" is a deprecated synonym for "node".
--zmq-comm-mode COMM_MODE
Use the given communication mode -- TCP or IPC (Unix-domain) sockets -- for
communication within a node. IPC (the default) may be more efficient but is not
available on (exceptionally rare) systems without node-local storage (e.g. /tmp);
on such systems, TCP may be used instead.
--zmq-write-host-info INFO_FILE
Store hostname and port information needed to connect to this instance in
INFO_FILE. This allows the master and nodes assisting in coordinating the
communication of other nodes to choose ports randomly. Downstream nodes read this
file with --zmq-read-host-info and know how to connect.
--zmq-read-host-info INFO_FILE
Read hostname and port information needed to connect to the master (or other
coordinating node) from INFO_FILE. This allows the master and nodes assisting in
coordinating the communication of other nodes to choose ports randomly, writing
that information with --zmq-write-host-info for this instance to read.
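The write/read handshake above can be sketched as follows. The actual file format WESTPA writes is internal to its ZeroMQ work manager; the JSON schema, field names, and port numbers here are purely illustrative assumptions.

```python
# Conceptual sketch of the --zmq-write-host-info / --zmq-read-host-info
# handshake. The real file format is internal to WESTPA's ZeroMQ work
# manager; JSON and these field names are assumptions for illustration.
import json
import os
import socket
import tempfile

def write_host_info(path, rr_port, ann_port):
    # The master (or coordinating node) records where it is listening,
    # after choosing its ports randomly (hypothetical schema).
    info = {
        "hostname": socket.gethostname(),
        "rr_port": rr_port,    # request/response (task/result) endpoint
        "ann_port": ann_port,  # announcement (heartbeat/shutdown) endpoint
    }
    with open(path, "w") as f:
        json.dump(info, f)

def read_host_info(path):
    # A downstream node recovers the connection details.
    with open(path) as f:
        return json.load(f)

fd, path = tempfile.mkstemp(suffix=".json")
os.close(fd)
write_host_info(path, rr_port=23811, ann_port=23812)
info = read_host_info(path)
os.remove(path)
```

The point of the indirection is that only the file path must be agreed upon ahead of time (e.g. on a shared filesystem); the ports themselves can be chosen at startup.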
--zmq-upstream-rr-endpoint ENDPOINT
ZeroMQ endpoint to which to send request/response (task and result) traffic toward
the master.
--zmq-upstream-ann-endpoint ENDPOINT
ZeroMQ endpoint on which to receive announcement (heartbeat and shutdown
notification) traffic from the master.
--zmq-downstream-rr-endpoint ENDPOINT
ZeroMQ endpoint on which to listen for request/response (task and result) traffic
from subsidiary workers.
--zmq-downstream-ann-endpoint ENDPOINT
ZeroMQ endpoint on which to send announcement (heartbeat and shutdown
notification) traffic toward workers.
--zmq-master-heartbeat MASTER_HEARTBEAT
Every MASTER_HEARTBEAT seconds, the master announces its presence to workers.
--zmq-worker-heartbeat WORKER_HEARTBEAT
Every WORKER_HEARTBEAT seconds, workers announce their presence to the master.
--zmq-timeout-factor FACTOR
Scaling factor for heartbeat timeouts. If the master doesn't hear from a worker in
WORKER_HEARTBEAT*FACTOR, the worker is assumed to have crashed. If a worker
doesn't hear from the master in MASTER_HEARTBEAT*FACTOR seconds, the master is
assumed to have crashed. Both cases result in shutdown.
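The heartbeat-timeout rule described above can be sketched as a small watchdog. This is not WESTPA's implementation, only the arithmetic the options control: a peer is presumed dead after heartbeat_interval * factor seconds of silence.

```python
import time

# Minimal sketch of the heartbeat-timeout rule; WESTPA's actual
# bookkeeping lives in its ZeroMQ work manager and differs in detail.
class PeerWatch:
    """Track the last heartbeat from a peer and flag it as crashed once
    heartbeat_interval * factor seconds pass with no message."""

    def __init__(self, heartbeat_interval: float, factor: float):
        self.timeout = heartbeat_interval * factor
        self.last_seen = time.monotonic()

    def beat(self):
        # Called whenever a heartbeat (or any message) arrives.
        self.last_seen = time.monotonic()

    def presumed_crashed(self, now=None) -> bool:
        now = time.monotonic() if now is None else now
        return (now - self.last_seen) > self.timeout

# e.g. 30 s worker heartbeats with --zmq-timeout-factor 3:
# more than 90 s of silence triggers shutdown
watch = PeerWatch(heartbeat_interval=30.0, factor=3.0)
```

The same rule applies symmetrically: the master watches workers with WORKER_HEARTBEAT * FACTOR, and each worker watches the master with MASTER_HEARTBEAT * FACTOR.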
--zmq-startup-timeout STARTUP_TIMEOUT
Amount of time (in seconds) to wait for communication between the master and at
least one worker. This may need to be changed on very large, heavily-loaded
computer systems that start all processes simultaneously.
--zmq-shutdown-timeout SHUTDOWN_TIMEOUT
Amount of time (in seconds) to wait for workers to shut down.
westpa.cli.tools.w_direct module
- westpa.cli.tools.w_direct.weight_dtype
alias of float64
- class westpa.cli.tools.w_direct.WESTMasterCommand
Bases: WESTTool
Base class for command-line tools that employ subcommands
- subparsers_title = None
- subcommands = None
- include_help_command = True
- add_args(parser)
Add arguments specific to this tool to the given argparse parser.
- process_args(args)
Take argparse-processed arguments associated with this tool and deal with them appropriately (setting instance variables, etc)
- go()
Perform the analysis associated with this tool.
- class westpa.cli.tools.w_direct.WESTParallelTool(wm_env=None)
Bases: WESTTool
Base class for command-line tools parallelized with wwmgr. This automatically adds and processes wwmgr command-line arguments and creates a work manager at self.work_manager.
- make_parser_and_process(prog=None, usage=None, description=None, epilog=None, args=None)
A convenience function to create a parser, call add_all_args(), and then call process_all_args(). The argument namespace is returned.
- add_args(parser)
Add arguments specific to this tool to the given argparse parser.
- process_args(args)
Take argparse-processed arguments associated with this tool and deal with them appropriately (setting instance variables, etc)
- go()
Perform the analysis associated with this tool.
- main()
A convenience function to make a parser, parse and process arguments, then run self.go() in the master process.
- westpa.cli.tools.w_direct.sequence_macro_flux_to_rate(dataset, pops, istate, jstate, pairwise=True, stride=None)
Convert a sequence of macrostate fluxes and corresponding list of trajectory ensemble populations to a sequence of rate matrices.
If the optional pairwise is true (the default), then rates are normalized according to the relative probability of the initial state among the pair of states (initial, final); this is probably what you want, as these rates will then depend only on the definitions of the states involved (and never the remaining states). Otherwise (pairwise is false), the rates are normalized according to the probability of the initial state among all states.
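The two normalizations can be illustrated for a single (istate, jstate) flux value. This sketch mirrors only the per-element arithmetic described above; the real sequence_macro_flux_to_rate operates on whole sequences of flux matrices and population vectors.

```python
# Sketch of the two flux-to-rate normalizations described above, for one
# (istate, jstate) pair. Not the library routine, which handles full
# sequences of flux matrices.
def flux_to_rate(flux_ij, pops, istate, jstate, pairwise=True):
    """Normalize a macrostate flux into a rate.

    pairwise=True  : divide by istate's probability relative to the
                     (istate, jstate) pair only.
    pairwise=False : divide by istate's probability among all states.
    """
    if pairwise:
        p_i = pops[istate] / (pops[istate] + pops[jstate])
    else:
        p_i = pops[istate]
    return flux_ij / p_i

pops = [0.6, 0.2, 0.2]  # state populations, summing to 1
rate_pairwise = flux_to_rate(0.03, pops, istate=0, jstate=1, pairwise=True)
rate_global = flux_to_rate(0.03, pops, istate=0, jstate=1, pairwise=False)
```

With these numbers, the pairwise rate divides by 0.6/(0.6+0.2) = 0.75, while the non-pairwise rate divides by 0.6; the pairwise result is unchanged if the population of state 2 is redistributed, which is the property the docstring recommends.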
- class westpa.cli.tools.w_direct.WESTKineticsBase(parent)
Bases: WESTSubcommand
Common argument processing for w_direct/w_reweight subcommands. Mostly limited to handling input and output from w_assign.
- add_args(parser)
Add arguments specific to this component to the given argparse parser.
- process_args(args)
Take argparse-processed arguments associated with this component and deal with them appropriately (setting instance variables, etc)
- class westpa.cli.tools.w_direct.AverageCommands(parent)
Bases: WESTKineticsBase
- default_output_file = 'direct.h5'
- add_args(parser)
Add arguments specific to this component to the given argparse parser.
- process_args(args)
Take argparse-processed arguments associated with this component and deal with them appropriately (setting instance variables, etc)
- stamp_mcbs_info(dataset)
- open_files()
- open_assignments()
- print_averages(dataset, header, dim=1)
- run_calculation(pi, nstates, start_iter, stop_iter, step_iter, dataset, eval_block, name, dim, do_averages=False, **extra)
- westpa.cli.tools.w_direct.mcbs_ci_correl(estimator_datasets, estimator, alpha, n_sets=None, args=None, autocorrel_alpha=None, autocorrel_n_sets=None, subsample=None, do_correl=True, mcbs_enable=None, estimator_kwargs={})
Perform a Monte Carlo bootstrap estimate for the (1-alpha) confidence interval on the given dataset with the given estimator. This routine is appropriate for time-correlated data, using the method described in Huber & Kim, "Weighted-ensemble Brownian dynamics simulations for protein association reactions" (1996), doi:10.1016/S0006-3495(96)79552-8 to determine a statistically significant correlation time and then reducing the dataset by a factor of that correlation time before running a "classic" Monte Carlo bootstrap.
Returns (estimate, ci_lb, ci_ub, correl_time), where estimate is the application of the given estimator to the input dataset; ci_lb and ci_ub are the lower and upper limits, respectively, of the (1-alpha) confidence interval on estimate; and correl_time is the correlation time of the dataset, significant to (1-autocorrel_alpha).
estimator is called as estimator(dataset, *args, **kwargs). Common estimators include:
- np.mean -- calculate the confidence interval on the mean of dataset
- np.median -- calculate a confidence interval on the median of dataset
- np.std -- calculate a confidence interval on the standard deviation of dataset
n_sets is the number of synthetic data sets to generate using the given estimator; if it is not given, it is chosen using get_bssize(). autocorrel_alpha (which defaults to alpha) can be used to adjust the significance level of the autocorrelation calculation. Note that too high a significance level (too low an alpha) for evaluating the significance of autocorrelation values can result in a failure to detect correlation if the autocorrelation function is noisy.
The given subsample function is used, if provided, to subsample the dataset prior to running the full Monte Carlo bootstrap. If none is provided, a random entry from each correlated block is used as the value for that block. Other reasonable choices include np.mean, np.median, (lambda x: x[0]) or (lambda x: x[-1]). In particular, using subsample=np.mean will converge to the block-averaged mean and standard error, while accounting for any non-normality in the distribution of the mean.
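The "classic" bootstrap step at the core of mcbs_ci_correl can be sketched with the standard library alone. This sketch deliberately omits the correlation-time detection and block reduction that the real routine performs on time-correlated data, so it is only valid for uncorrelated samples.

```python
import random
import statistics

# Bare-bones percentile-bootstrap confidence interval on an estimator.
# This is only the final "classic" bootstrap step of mcbs_ci_correl;
# the autocorrelation analysis and block subsampling are omitted.
def mcbs_ci(dataset, estimator, alpha=0.05, n_sets=1000, rng=None):
    rng = rng or random.Random()
    estimate = estimator(dataset)
    # Generate n_sets synthetic datasets by resampling with replacement,
    # then take the alpha/2 and 1-alpha/2 percentiles of the estimates.
    synth = sorted(
        estimator(rng.choices(dataset, k=len(dataset)))
        for _ in range(n_sets)
    )
    lb = synth[int((alpha / 2) * n_sets)]
    ub = synth[int((1 - alpha / 2) * n_sets) - 1]
    return estimate, lb, ub

data_rng = random.Random(42)
data = [data_rng.gauss(5.0, 1.0) for _ in range(200)]
estimate, ci_lb, ci_ub = mcbs_ci(data, statistics.mean, alpha=0.05,
                                 rng=random.Random(0))
```

Applying this directly to time-correlated data would understate the interval width, which is exactly why mcbs_ci_correl first reduces the dataset by its correlation time.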
- westpa.cli.tools.w_direct.accumulate_state_populations_from_labeled(labeled_bin_pops, state_map, state_pops, check_state_map=True)
- class westpa.cli.tools.w_direct.DKinetics(parent)
Bases: WESTKineticsBase, WKinetics
- subcommand = 'init'
- default_kinetics_file = 'direct.h5'
- default_output_file = 'direct.h5'
- help_text = 'calculate state-to-state kinetics by tracing trajectories'
- description = 'Calculate state-to-state rates and transition event durations by tracing\ntrajectories.\n\nA bin assignment file (usually "assign.h5") including trajectory labeling\nis required (see "w_assign --help" for information on generating this file).\n\nThis subcommand for w_direct is used as input for all other w_direct\nsubcommands, which will convert the flux data in the output file into\naverage rates/fluxes/populations with confidence intervals.\n\n-----------------------------------------------------------------------------\nOutput format\n-----------------------------------------------------------------------------\n\nThe output file (-o/--output, by default "direct.h5") contains the\nfollowing datasets:\n\n ``/conditional_fluxes`` [iteration][state][state]\n *(Floating-point)* Macrostate-to-macrostate fluxes. These are **not**\n normalized by the population of the initial macrostate.\n\n ``/conditional_arrivals`` [iteration][stateA][stateB]\n *(Integer)* Number of trajectories arriving at state *stateB* in a given\n iteration, given that they departed from *stateA*.\n\n ``/total_fluxes`` [iteration][state]\n *(Floating-point)* Total flux into a given macrostate.\n\n ``/arrivals`` [iteration][state]\n *(Integer)* Number of trajectories arriving at a given state in a given\n iteration, regardless of where they originated.\n\n ``/duration_count`` [iteration]\n *(Integer)* The number of event durations recorded in each iteration.\n\n ``/durations`` [iteration][event duration]\n *(Structured -- see below)* Event durations for transition events ending\nduring a given iteration. These are stored as follows:\n\n istate\n *(Integer)* Initial state of transition event.\n fstate\n *(Integer)* Final state of transition event.\n duration\n *(Floating-point)* Duration of transition, in units of tau.\n weight\n *(Floating-point)* Weight of trajectory at end of transition, **not**\n normalized by initial state population.\n\nBecause state-to-state fluxes stored in this file are not normalized by\ninitial macrostate population, they cannot be used as rates without further\nprocessing. The ``w_direct kinetics`` command is used to perform this normalization\nwhile taking statistical fluctuation and correlation into account. See\n``w_direct kinetics --help`` for more information. Target fluxes (total flux\ninto a given state) require no such normalization.\n\n-----------------------------------------------------------------------------\nCommand-line options\n-----------------------------------------------------------------------------\n'
- open_files()
- go()
- class westpa.cli.tools.w_direct.DKinAvg(parent)
Bases: AverageCommands
- subcommand = 'kinetics'
- help_text = 'Generates rate and flux values from a WESTPA simulation via tracing.'
- default_kinetics_file = 'direct.h5'
- description = 'Calculate average rates/fluxes and associated errors from weighted ensemble\ndata. Bin assignments (usually "assign.h5") and kinetics data (usually\n"direct.h5") data files must have been previously generated (see\n"w_assign --help" and "w_direct init --help" for information on\ngenerating these files).\n\nThe evolution of all datasets may be calculated, with or without confidence\nintervals.\n\n-----------------------------------------------------------------------------\nOutput format\n-----------------------------------------------------------------------------\n\nThe output file (-o/--output, usually "direct.h5") contains the following\ndataset:\n\n /avg_rates [state,state]\n (Structured -- see below) State-to-state rates based on entire window of\n iterations selected.\n\n /avg_total_fluxes [state]\n (Structured -- see below) Total fluxes into each state based on entire\n window of iterations selected.\n\n /avg_conditional_fluxes [state,state]\n (Structured -- see below) State-to-state fluxes based on entire window of\n iterations selected.\n\nIf --evolution-mode is specified, then the following additional datasets are\navailable:\n\n /rate_evolution [window][state][state]\n (Structured -- see below). State-to-state rates based on windows of\n iterations of varying width. If --evolution-mode=cumulative, then\n these windows all begin at the iteration specified with\n --start-iter and grow in length by --step-iter for each successive\n element. If --evolution-mode=blocked, then these windows are all of\n width --step-iter (excluding the last, which may be shorter), the first\n of which begins at iteration --start-iter.\n\n /target_flux_evolution [window,state]\n (Structured -- see below). Total flux into a given macro state based on\n windows of iterations of varying width, as in /rate_evolution.\n\n /conditional_flux_evolution [window,state,state]\n (Structured -- see below). State-to-state fluxes based on windows of\n varying width, as in /rate_evolution.\n\nThe structure of these datasets is as follows:\n\n iter_start\n (Integer) Iteration at which the averaging window begins (inclusive).\n\n iter_stop\n (Integer) Iteration at which the averaging window ends (exclusive).\n\n expected\n (Floating-point) Expected (mean) value of the observable as evaluated within\n this window, in units of inverse tau.\n\n ci_lbound\n (Floating-point) Lower bound of the confidence interval of the observable\n within this window, in units of inverse tau.\n\n ci_ubound\n (Floating-point) Upper bound of the confidence interval of the observable\n within this window, in units of inverse tau.\n\n stderr\n (Floating-point) The standard error of the mean of the observable\n within this window, in units of inverse tau.\n\n corr_len\n (Integer) Correlation length of the observable within this window, in units\n of tau.\n\nEach of these datasets is also stamped with a number of attributes:\n\n mcbs_alpha\n (Floating-point) Alpha value of confidence intervals. (For example,\n *alpha=0.05* corresponds to a 95% confidence interval.)\n\n mcbs_nsets\n (Integer) Number of bootstrap data sets used in generating confidence\n intervals.\n\n mcbs_acalpha\n (Floating-point) Alpha value for determining correlation lengths.\n\n\n-----------------------------------------------------------------------------\nCommand-line options\n-----------------------------------------------------------------------------\n'
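The cumulative and blocked windowing schemes described in the output-format text can be sketched as (iter_start, iter_stop) pairs, stop exclusive, matching the iter_start/iter_stop fields of the evolution datasets. This is an illustration of the scheme as documented, not w_direct's code; edge handling of a final partial cumulative window may differ.

```python
# Sketch of the two --evolution-mode windowing schemes described above.
# Windows are (iter_start, iter_stop) pairs with iter_stop exclusive.
def evolution_windows(start_iter, stop_iter, step_iter, mode="cumulative"):
    windows = []
    if mode == "cumulative":
        # Every window begins at start_iter and grows by step_iter.
        stop = start_iter + step_iter
        while stop <= stop_iter:
            windows.append((start_iter, stop))
            stop += step_iter
    elif mode == "blocked":
        # Fixed-width windows of step_iter; the last may be shorter.
        for begin in range(start_iter, stop_iter, step_iter):
            windows.append((begin, min(begin + step_iter, stop_iter)))
    return windows

cumulative = evolution_windows(1, 11, 5, mode="cumulative")
blocked = evolution_windows(1, 11, 5, mode="blocked")
```

For iterations 1 through 10 with --step-iter 5, cumulative mode yields windows (1, 6) and (1, 11), while blocked mode yields (1, 6) and (6, 11).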
- go()
- class westpa.cli.tools.w_direct.DStateProbs(parent)
Bases: AverageCommands
- subcommand = 'probs'
- help_text = 'Calculates color and state probabilities via tracing.'
- default_kinetics_file = 'direct.h5'
- description = 'Calculate average populations and associated errors in state populations from\nweighted ensemble data. Bin assignments, including macrostate definitions,\nare required. (See "w_assign --help" for more information).\n\n-----------------------------------------------------------------------------\nOutput format\n-----------------------------------------------------------------------------\n\nThe output file (-o/--output, usually "direct.h5") contains the following\ndataset:\n\n /avg_state_probs [state]\n (Structured -- see below) Population of each state across entire\n range specified.\n\n /avg_color_probs [state]\n (Structured -- see below) Population of each ensemble across entire\n range specified.\n\nIf --evolution-mode is specified, then the following additional datasets are\navailable:\n\n /state_pop_evolution [window][state]\n (Structured -- see below). State populations based on windows of\n iterations of varying width. If --evolution-mode=cumulative, then\n these windows all begin at the iteration specified with\n --start-iter and grow in length by --step-iter for each successive\n element. If --evolution-mode=blocked, then these windows are all of\n width --step-iter (excluding the last, which may be shorter), the first\n of which begins at iteration --start-iter.\n\n /color_prob_evolution [window][state]\n (Structured -- see below). Ensemble populations based on windows of\n iterations of varying width. If --evolution-mode=cumulative, then\n these windows all begin at the iteration specified with\n --start-iter and grow in length by --step-iter for each successive\n element. If --evolution-mode=blocked, then these windows are all of\n width --step-iter (excluding the last, which may be shorter), the first\n of which begins at iteration --start-iter.\n\nThe structure of these datasets is as follows:\n\n iter_start\n (Integer) Iteration at which the averaging window begins (inclusive).\n\n iter_stop\n (Integer) Iteration at which the averaging window ends (exclusive).\n\n expected\n (Floating-point) Expected (mean) value of the observable as evaluated within\n this window, in units of inverse tau.\n\n ci_lbound\n (Floating-point) Lower bound of the confidence interval of the observable\n within this window, in units of inverse tau.\n\n ci_ubound\n (Floating-point) Upper bound of the confidence interval of the observable\n within this window, in units of inverse tau.\n\n stderr\n (Floating-point) The standard error of the mean of the observable\n within this window, in units of inverse tau.\n\n corr_len\n (Integer) Correlation length of the observable within this window, in units\n of tau.\n\nEach of these datasets is also stamped with a number of attributes:\n\n mcbs_alpha\n (Floating-point) Alpha value of confidence intervals. (For example,\n *alpha=0.05* corresponds to a 95% confidence interval.)\n\n mcbs_nsets\n (Integer) Number of bootstrap data sets used in generating confidence\n intervals.\n\n mcbs_acalpha\n (Floating-point) Alpha value for determining correlation lengths.\n\n\n-----------------------------------------------------------------------------\nCommand-line options\n-----------------------------------------------------------------------------\n'
- calculate_state_populations(pops)
- w_stateprobs()
- go()
- class westpa.cli.tools.w_direct.DAll(parent)
Bases: DStateProbs, DKinAvg, DKinetics
- subcommand = 'all'
- help_text = 'Runs the full suite, including the tracing of events.'
- default_kinetics_file = 'direct.h5'
- description = 'A convenience function to run init/kinetics/probs. Bin assignments,\nincluding macrostate definitions, are required. (See\n"w_assign --help" for more information).\n\nFor more information on the individual subcommands this subs in for, run\nw_direct {init/kinetics/probs} --help.\n\n-----------------------------------------------------------------------------\nCommand-line options\n-----------------------------------------------------------------------------\n'
- go()
- class westpa.cli.tools.w_direct.DAverage(parent)
Bases: DStateProbs, DKinAvg
- subcommand = 'average'
- help_text = 'Averages and returns fluxes, rates, and color/state populations.'
- default_kinetics_file = 'direct.h5'
- description = 'A convenience function to run kinetics/probs. Bin assignments,\nincluding macrostate definitions, are required. (See\n"w_assign --help" for more information).\n\nFor more information on the individual subcommands this subs in for, run\nw_direct {kinetics/probs} --help.\n\n-----------------------------------------------------------------------------\nCommand-line options\n-----------------------------------------------------------------------------\n'
- go()
- class westpa.cli.tools.w_direct.WDirect
Bases: WESTMasterCommand, WESTParallelTool
- prog = 'w_direct'
- subcommands = [<class 'westpa.cli.tools.w_direct.DKinetics'>, <class 'westpa.cli.tools.w_direct.DAverage'>, <class 'westpa.cli.tools.w_direct.DKinAvg'>, <class 'westpa.cli.tools.w_direct.DStateProbs'>, <class 'westpa.cli.tools.w_direct.DAll'>]
- subparsers_title = 'direct kinetics analysis schemes'
- westpa.cli.tools.w_direct.entry_point()