Integrate MonteCarloGenerate capability from EG CML and associated TrickOps enhancements (#1415)

* Provide MonteCarloGenerate capability

Intermediate commit, this squash represents all of Isaac Reaves' work
during his Fall 2022 Pathways internship tour

[skip ci]

* TrickOps: Add phase, [min-max] range, and overhaul YAML verification

* Add new "phase:" mechanism to TrickOps Runs and Builds to support
  project-specific constraints on build and run ordering
  - phase defaults to zero if not specified and must be between -1000
    and 1000 if given.
  - jobs can now optionally be requested by their phase or phase range
  - See trickops/README.md for details
* Add [min-max] notation capability to run: entries and compare: entries
  - [min-max] ranges provide definition of a set of runs using a common
    numbering scheme in the YAML file, greatly reducing YAML file size
    for monte-carlo and other zero-padded run numbering use cases
  - See trickops/README.md for details
* YAML parsing changes
  - Overhaul the logic which verifies YAML files for the expected
    TrickOps format. This is now done in TrickWorkflowYamlVerifier and
    provides much more robust error checking than the previous approach
  - .yaml_requirements.yml now provides the required types, ranges, and
    default values as applicable to expected entries in YAML files
  - valgrind: is now a sub-option to run: entries, not its own section.
    Users should now list their runs normally and define their flags in
    that run's valgrind: subsection
  - parallel_safety is now a per-sim parameter and not global. Users
    should move their global config to the sim layer
  - self.config_errors is now a list of errors. Users should now
    check for an empty list instead of True/False
* Robustify the get_koviz_report_jobs unit test to work whether koviz
  exists on PATH or not
* Adjust trickops.py to use the new phase and range features
   - Make it more configurable on the command-line via argparse
   - Move SIM_mc_generation tests into test_sims.yml

[skip ci]

* Code review and cleanup from PR #1389

Documentation:

* Adjust documentation to fit suggested symlinked approach. Also
  cleaned up duplicate images and old documentation.
* Moved the verification section out of markdown and into a PDF since it
  heavily leverages formatting not available in markdown.
* Clarify a couple points on the Darwin Trick install guide
* Update wiki to clarify that data recording strings is not supported

MCG Code:

* Replace MonteCarloVariableRandomNormal::is_near_equal with new
  Trick::dbl_is_near from trick team

MCG Testing:

* Reduce the set of SIM_mc_generation comparisons. After discussion with
  the trick team, we are choosing to remove all comparisons to
  verif_data/ which contain random-generated numbers since
  these tests cannot pass across all supported trick platforms.
* Fix the wrong rule on excluding -Werror for Darwin builds
  of SIM_mc_generation
* Remove data recording of strings in SIM_mc_generation

Trickops:

* Replace build_command with build_args per discussion w/ Trick team
  Since we only support arguments to trick-CP, replace the build_command
  yaml entry with build_args
* Disable var server connection by default in SingleRun if TrickWorkflow.quiet
  is True
* Guard against multiple Job starts
* Remove SimulationJob inheritance layer since old monte-carlo wasn't
  and never will be supported by TrickOps
* Ignore IOError raised from variable_server that looks like "The remote
  endpoint has closed the connection". This appears to occur when
  SingleRun jobs attempt to connect to the var server for a sim that
  terminates very early

[skip ci]

* Adjust phasing of old/new MCG initialize functions

* Clarify failure message in generate_dispersions if new/old MC are both
  used.
* Adjust the phasing order of the MCG initialize method to be before
  legacy MC initialization. Without this, a monte-carlo dry run completes with
  success before the check in generate_dispersions() can run
* Add -Wno-stringop-truncation to S_override.mk for SIM_mc_generation
  since gcc 8+ warns about SWIG generated content in top.cpp

* Introduce MonteCarloGenerationHelper python class

This new class provides an easy-to-use interface for MCG sim-module
users:

1. Run generation
2. Getting an sbatch array job suitable for SLURM
3. Getting a list of SingleRun() instances for generated runs, to be
   executed locally if desired

---------

Co-authored-by: Dan Jordan <daniel.d.jordan@nasa.gov>
ddj116 2023-03-06 09:25:50 -06:00 committed by GitHub
parent 9b3e6aac51
commit 9099792947
133 changed files with 7461 additions and 651 deletions

@@ -42,6 +42,7 @@ SIM_SERV_DIRS = \
 ${TRICK_HOME}/trick_source/sim_services/MemoryManager \
 ${TRICK_HOME}/trick_source/sim_services/Message \
 ${TRICK_HOME}/trick_source/sim_services/MonteCarlo \
+${TRICK_HOME}/trick_source/sim_services/MonteCarloGeneration \
 ${TRICK_HOME}/trick_source/sim_services/RealtimeInjector \
 ${TRICK_HOME}/trick_source/sim_services/RealtimeSync \
 ${TRICK_HOME}/trick_source/sim_services/ScheduledJobQueue \


@@ -36,6 +36,7 @@ The user guide contains information pertinent to Trick users. These pages will h
 01. [Realtime Sleep Timer](simulation_capabilities/Realtime-Timer)
 01. [Realtime Injector](simulation_capabilities/Realtime-Injector)
 01. [Monte Carlo](simulation_capabilities/UserGuide-Monte-Carlo)
+02. [Monte Carlo Generation](miscellaneous_trick_tools/MonteCarloGeneration)
 01. [Master Slave](simulation_capabilities/Master-Slave)
 01. [Data Record](simulation_capabilities/Data-Record)
 01. [Checkpoints](simulation_capabilities/Checkpoints)


@@ -215,7 +215,7 @@
 xcode-select --install
 brew install python java xquartz swig@3 maven udunits openmotif
 ```
-IMPORTANT: Make sure to follow the instructions for adding java to your path provided by brew. If you missed them, you can see them again by using `brew info java`.
+IMPORTANT: Make sure to follow the instructions for adding java and swig to your `PATH` provided by brew. If you missed them, you can see them again by using `brew info java` and `brew info swig@3`. Remember, you may need to restart your terminal for these `PATH` changes to take effect.
 5. Download and un-compress the latest pre-built clang+llvm from llvm-project github. Go to https://github.com/llvm/llvm-project/releases
 and download the latest version llvm that matches your Xcode version from the release assets. For example, if your Xcode version is 14 then you will want the latest 14.x.x release of llvm. 13.0.1 is the latest as of the writing of this guide, the link I used is below:
@@ -243,7 +243,7 @@ e.g.
 OPTIONAL: Trick uses google test (gtest) version 1.8 for unit testing. To install gtest:
 ```
-brew install wget
+brew install cmake wget
 wget https://github.com/google/googletest/archive/release-1.8.0.tar.gz
 tar xzvf release-1.8.0.tar.gz
 cd googletest-release-1.8.0/googletest


@@ -0,0 +1,705 @@
# MonteCarloGeneration Model
# Revision History
| Version | Date | Author | Purpose |
| :--- |:---| :--- | :--- |
| 1 | April 2020 | Gary Turner | Initial Version |
| 2 | March 2021 | Gary Turner | Added Verification |
| 3 | October 2022 | Isaac Reaves | Converted to Markdown |
# 1 Introduction
The MonteCarlo Model is used to disperse the values assigned to variables at the start of a simulation. Dispersing the initial
conditions and configurations for the simulation allows for robust testing and statistical analysis of the probability of
undesirable outcomes, and measuring the confidence levels associated with achieving desirable outcomes.
Conventionally, most of the time we think about dispersing variables, we think about applying some sort of statistical
distribution to the value. Most often, that is a normal or uniform distribution, but there may be situations in which other
distributions are desired. In particular, this model provides an extensible framework allowing for any type of distribution to
be applied to any variable.
For extensive analysis of safety-critical scenarios, where it is necessary to demonstrate high probability of success with high
confidence, traditional MonteCarlo analyses often require many thousands of runs. For long duration simulations, it may
not be feasible to run the number of simulations necessary to reach the high confidence of high success probability that is
necessary to meet requirements. Typically, failure cases occur out near the edges of state-space, but most of the runs will be
“right down the middle”; using conventional MonteCarlo techniques, most of these runs are completely unnecessary. With
a Sequential-MonteCarlo configuration, a small number of runs can be executed, allowing for identification of problem
areas, and a focusing of the distribution on those areas of state-space, thereby reducing the overall number of runs while
adding complexity to the setup. While this model does not (at this time) provide a Sequential-MonteCarlo capability, the
organization of the model has been designed to support external tools seeking to sequentially modify the distributions being
applied to the dispersed variables, and generate new dispersion sets.
# 2 Requirements
1. The model shall provide common statistical distribution capabilities, including:
1. Uniform distribution between specified values
1. as a floating-point value
1. as an integer value
1. Normal distribution, specified by mean and standard deviation
1. Truncated Normal Distribution, including
1. symmetric and asymmetric truncations
1. it shall be possible to specify truncations by:
1. some number of standard deviations from the mean,
1. a numerical difference from the mean, and
1. an upper and lower limit
1. The model shall provide an extensible framework suitable for supporting other statistical distributions
1. The model shall provide the ability to assign a common value to all runs:
1. This value could be a fixed, user-defined value
1. This value could be a random assignment, generated once and then applied to all runs
1. The model shall provide the capability to read values from a pre-generated file instead of generating its own values
1. The model shall provide the ability to randomly select from a discrete data set, including:
1. enumerations,
1. character-strings,
1. boolean values, and
1. numerical values
1. The model shall provide the capability to compute follow-on variables, the values of which are a function of one or more dispersed variables with values generated using any of the methods in requirements 1-5.
1. The model shall provide a record of the generated distributions, allowing for repeated execution of the same scenario using exactly the same conditions.
1. The model shall provide summary data of the dispersions which have been applied, including:
1. number of dispersions
1. types of dispersions
1. correlations between variables
# 3 Model Specification
## 3.1 Code Structure
The model can be broken down into its constituent classes; there are two principal components to the model: the variables,
and the management of the variables.
### 3.1.1 Variable Management (MonteCarloMaster)
MonteCarloMaster is the manager of the MonteCarlo variables. This class controls how many sets of dispersed variables
are to be generated; for each set, it has the responsibility for
* instructing each variable to generate its own dispersed value
* collecting those values and writing them to an external file
### 3.1.2 Dispersed Variables (MonteCarloVariable)
MonteCarloVariable is an abstract class that forms the basis for all dispersed variables. The following classes inherit from
MonteCarloVariable:
* MonteCarloVariableFile will extract a value for its variable from a specified text file. Typically, a data file will
comprise some number of rows and some number of columns of data. Each column of data represents the possible
values for one variable. Each row of data represents a correlated set of data to be applied to several variables; each
data-set generation will be taken from one line of data. Typically, each subsequent data-set will be generated from the
next line of data; however, this is not required.
* In some situations, it is desirable to have the line of data used for any given data set be somewhat randomly
chosen. This has the disadvantageous effect of having some data sets being used more than others, but it supports
better cross-population when multiple data files are being used.
* For example, if file1 contained 2 data sets and file2 contained 4 data sets, then a sequential sweep through
these files would set up a repeating pattern with line 1 of file2 always being paired with line 1 of file1. For
example, in 8 runs, we would get this pattern of line numbers from each run:
* (1,1), (2,2), (1,3), (2,4), (1,1), (2,2), (1,3), (2,4)
* If the first file was allowed to skip a line, the pattern can produce a more comprehensive combination of
data:
* (1,1), (1,2), (2,3), (1,4), (2,1), (2,2), (2,3), (1,4)
* MonteCarloVariableFixed provides fixed-values to a variable for all generated data-sets. The values can be
represented as a double, int, or STL-string.
* MonteCarloVariableRandom is the base class for all variables being assigned a random value. The values can be
represented as a double, int, or STL-string. There are several subclasses:
* MonteCarloVariableRandomNormal provides a variable with a value dispersed according to a normal
distribution specified by its mean and standard deviation.
* MonteCarloVariableRandomUniformInt provides a variable with a value dispersed according to a uniform
distribution specified by its upper and lower bounds. This class represents a discrete distribution, providing an
integer value.
* MonteCarloVariableRandomUniform provides a variable with a value dispersed according to a uniform
distribution specified by its upper and lower bounds. This class represents a continuous distribution.
* MonteCarloVariableRandomStringSet represents a discrete variable, drawn from a set of STL-strings. The
class inherits from MonteCarloVariableRandomUniform; this distribution generates a continuous value in [0,1)
and scales and casts that to an integer index in {0, …, size-1} where size is the number of available strings
from which to choose.
Note an astute reader may question why the discrete MonteCarloVariableRandomStringSet inherits from
the continuous MonteCarloVariableRandomUniform rather than from the discrete
MonteCarloVariableRandomUniformInt. The rationale is based on the population of the vector of
selectable strings in this class. It is desirable to have this vector be available for population outside the
construction of the class, so at construction time the size of this vector is not known. However, the
construction of the MonteCarloVariableRandomUniformInt requires specifying the lower and upper
bounds, which would be 0 and size-1 respectively. Because size is not known at construction, this cannot
be specified. Conversely, constructing a MonteCarloVariableRandomUniform with bounds at [0,1) still
allows for scaling to the eventual size of the strings vector.
* MonteCarloVariableSemiFixed utilizes a two-step process. First, a seed-variable has its value generated, then that
value is copied to this variable. The seed-variable could be a “throw-away” variable, used only to seed this value, or it
could be an instance of another dispersed variable. Once the value has been copied to this instance, it is retained in this
instance for all data sets. The seed-variable will continue to generate a new value for each data set, but they will not be
seen by this variable after that first set.
The seed-variable can be any type of MonteCarloVariable, but note that not all types of MonteCarloVariable actually
make sense to use in this context. Most of the usable types are specialized types of MonteCarloVariableRandom.
However, restricting the seed-variable in such a way would limit the extensibility of the model. All
MonteCarloVariableRandom types use the C++ \<random\> library for data generation. Limiting the
MonteCarloVariableSemiFixed type to be seeded only by something using the \<random\> library violates the concept of
free-extensibility. Consequently, the assigned value may be extracted from any MonteCarloVariable type. The only
constraint is that the command generated by the seed-variable includes an “=” symbol; everything to the right of that
symbol will be assigned to this variable.
* MonteCarloPythonLineExec provides a line of executable Python code that can be used to compute the value of this
variable. So rather than generating an assignment statement, e.g.
```
var_x = 5
```
when the MonteCarloMaster processes an instance of this class, it will use a character string to generate an
instruction statement, e.g.
```
var_x = math.sin(2 * math.pi * object.circle_fraction)
```
(in this case, the character string would be “math.sin(2 * math.pi * object.circle_fraction)” and
object.circle_fraction could be a previously-dispersed variable).
A particularly useful application of this capability is in generating systematic data sweeps across a domain, as
opposed to random distributions within a domain. These are commonly implemented as a for-loop, but we can use
the MonteCarloPythonLineExec to generate them internally. The first data assignment made in each file is to a
run-number, which can be used as an index. The example shown below will generate a sweep across the domain
[20,45) in steps of 2.5.
```
object.sweep_variable = (monte_carlo.master.monte_run_number % 10) * 2.5 + 20
```
* MonteCarloPythonFileExec is used when simple assignments and one-line instructions are insufficient, such as
when one generated value feeds into an analytical algorithm to generate multiple other values. With this class,
the execution of the Python file generated by MonteCarloMaster will hit a call to execute a file as specified by this
class. This is an oddity among the bank of MonteCarloVariable implementations. In all other implementations,
the identifying variable_name is used to identify the variable whose value is to be assigned (or computed). With
the MonteCarloPythonFileExec implementation, the variable_name is hijacked to provide the name of the file to
be executed.
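The sequential file-pairing behavior described for MonteCarloVariableFile above can be illustrated with a short sketch. Note that `sequential_pairs` is a hypothetical helper written only for this illustration; it is not part of the model's API.

```python
def sequential_pairs(len_file1, len_file2, n_runs):
    """Return 1-based line-number pairs drawn sequentially from two files.

    Hypothetical helper, not part of the model: it only shows why a strictly
    sequential sweep locks files of different lengths into a repeating pattern.
    """
    return [(i % len_file1 + 1, i % len_file2 + 1) for i in range(n_runs)]

# With 2 lines in file1 and 4 lines in file2, line 1 of file2 is always
# paired with line 1 of file1:
print(sequential_pairs(2, 4, 8))
# [(1, 1), (2, 2), (1, 3), (2, 4), (1, 1), (2, 2), (1, 3), (2, 4)]
```

Allowing one file to occasionally skip a line breaks this lockstep, which is exactly the cross-population benefit described above.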
## 3.2 Mathematical Formulation
No mathematical formulation. The random number generators use the C++ \<random\> library.
# 4 User's Guide
## 4.1 What to expect
The role played by this model can easily be misunderstood, so let's start there.
**This model generates Python files containing assignments to variables.**
That's it! It does not manage MonteCarlo runs. It does not execute any simulations. When it runs, it creates the requested
number of Python files and exits.
This design is deliberate; we want the model to generate the instruction sets that will allow execution of a set of dispersed
configurations. At that point, the simulation should cease, returning control to the user to distribute the execution of those
configurations according to whatever distribution mechanism they desire. This could be:
* something really simple, like a wild-card, \<executive\> `MONTE_RUN_test/RUN*/monte_input.py`
* a batch-script,
* a set of batch-scripts launching subsets onto different machines,
* a load-management service, like SLURM
* any other mechanism tailored to the users currently available computing resources
The intention is that the model runs very early in the simulation sequence. If the model is inactive (as when running a regular, non-MonteCarlo run), it will take no action. But when this model is activated, the user should expect the simulation to terminate before it starts on any propagation.
**When a simulation executes with this model active, the only result of the simulation will be the generation of files containing the assignments to the dispersed variables. The simulation should be expected to terminate at t=0.**
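The "really simple" wildcard option above can be sketched as a shell loop. This is an illustrative fragment only: `MONTE_RUN_test` is a placeholder directory name, and the executable glob follows Trick's usual `S_main_*` naming convention.

```shell
# Illustrative sketch: execute every generated monte_input.py locally,
# one after another. MONTE_RUN_test and the S_main_* executable name
# are placeholders following Trick's usual conventions.
for run in MONTE_RUN_test/RUN*/monte_input.py ; do
    ./S_main_*.exe "$run"
done
```

In practice, the same list of input files is what a batch script or SLURM array job would distribute across machines.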
### 4.1.1 Trick Users
The model is currently configured for users of the Trick simulation engine. The functionality of the model is almost exclusively independent of the chosen simulation engine, with the exceptions being the shutdown sequence, and the application of units information in the variables.
Found at the end of the `MonteCarloMaster::execute()` method, the following code:
```c++
exec_terminate_with_return(0, __FILE__, __LINE__,message.c_str());
```
is a Trick instruction set to end the simulation.
Found at the end of `MonteCarloVariable::insert_units()`, the following code:
```c++
// TODO: Pick a unit-conversion mechanism
// Right now, the only one available is Trick:
trick_units( pos_equ+1);
```
provides the call to
```c++
void MonteCarloVariable::trick_units(
    size_t insertion_pt)
{
  command.insert(insertion_pt, " trick.attach_units(\"" + units + "\",");
  command.append(")");
}
```
which appends Trick instructions to interpret the generated value as being represented in the specified units.
The rest of the User's Guide will use examples of configurations for Trick-simulation input files.
### 4.1.2 Non-Trick Users
To configure the model for simulation engines other than Trick, the Trick-specific content identified above should be replaced with equivalent content that will result in:
* the shutdown of the simulation, and
* the conversion of units from the type specified in the distribution specification to the type native to the variable to which the generated value is to be assigned.
While the rest of the User's Guide will use examples of configurations for Trick-simulation input files, understand that these are mostly just C++ or Python code setting the values in this model to make it work as desired. Similar assignments would be required for any other simulation engine.
## 4.2 MonteCarlo Manager (MonteCarloMaster)
### 4.2.1 Instantiation
The instantiation of MonteCarloMaster would typically be done directly in the S_module. The construction of this instance takes a single argument, an STL-string describing its own location within the simulation data-structure.
The MonteCarloMaster class has a single public-interface method call, `MonteCarloMaster::execute()`. This method has 2 gate-keeping flags that must be set (the reason for there being 2 will be explained later):
* `active`
* `generate_dispersions`
If either of these flags is false (for reference, `active` is constructed as false and `generate_dispersions` is constructed as true) then this method returns with no action. If both are true, then the model will generate the dispersions, write those dispersions to the external files, and shut down the simulation.
An example S-module:
```c++
class MonteCarloSimObject : public Trick::SimObject
{
 public:
  MonteCarloMaster master;                  // <--- master is instantiated
  MonteCarloSimObject(std::string location)
    :
    master(location)                        // <--- master is constructed with this STL-string
  {
    P_MONTECARLO ("initialization") master.execute(); // <--- the only function call
  }
};
MonteCarloSimObject monte_carlo("monte_carlo.master"); // <--- location of master is passed as an argument
```
### 4.2.2 Configuration
The configuration of the MonteCarloMaster is something to be handled as a user-input to the simulation without requiring re-compilation; as such, it is typically handled in a Python input file. There are two sections for configuration:
* modifications to the regular input file, and
* new file-input or other external monte-carlo initiation mechanism
#### 4.2.2.1 Modifications to the regular input file
A regular input file sets up a particular scenario for a nominal run. To add monte-carlo capabilities to this input file, the
following code should be inserted somewhere in the file:
```python
if monte_carlo.master.active:
    # insert non-mc-variable MC configurations like logging
    if monte_carlo.master.generate_dispersions:
        exec(open("Modified_data/monte_variables.py").read())
```
Let's break this down, because it explains the reason for having 2 flags:
| `generate_dispersions` | `active` | Result |
| :--- |:---| :--- |
| false | false | Regular (non-monte-carlo) run |
| false | true | Run scenario with monte-carlo configuration and pre-generated dispersions |
| true | false | Regular (non-monte-carlo) run |
| true | true | Generate dispersions for this scenario, but do not run the scenario |
1. If the master is inactive, this content is passed over and the input file runs just as it would without this content
2. Having the master `active` flag set to true instructs the simulation that the execution is intended to be part of a monte-carlo analysis. Now there are 2 types of executions that fit this intent:
* The generation of the dispersion files
* The execution of this run with the application of previously-generated dispersions
Any code to be executed for case (a) must go inside the `generate_dispersions` gate. Any code to be executed for
case (b) goes inside the `active` gate, but outside the `generate_dispersions` gate.
You may wonder why this distinction is made. In many cases, it is desirable to make the execution for monte-carlo
analysis subtly different to that for regular analysis. One commonly used distinction is logging of data; the logging
requirement may differ between a regular run and one as part of a monte-carlo analysis (typically, monte-carlo runs
execute with reduced logging). By providing a configuration area for a monte-carlo run, we can support these
distinctions.
Note any code to be executed for only non-monte-carlo runs can be put in an else: block. For example, this code
will set up one set of logging for a monte-carlo run, and another for a non-monte-carlo run of the same scenario:
```python
if monte_carlo.master.active:
    exec(open("Log_data/log_for_monte_carlo.py").read())
    if monte_carlo.master.generate_dispersions:
        exec(open("Modified_data/monte_variables.py").read())
else:
    exec(open("Log_data/log_for_regular.py").read())
```
3. If the `generate_dispersions` flag is also set to true, the `MonteCarloMaster::execute()` method will execute,
generating the dispersion files and shutting down the simulation.
#### 4.2.2.2 Initiating MonteCarlo
Somewhere outside this file, the `active` and `generate_dispersions` flags must be set. This can be performed either in a separate input file or via a command-line argument. Unless the command-line argument capability is already supported, by far the easiest mechanism is to create a new input file that subsequently reads the existing input file:
```python
monte_carlo.master.activate("RUN_1")
exec(open("RUN_1/input.py").read())
```
The activate method takes a single string argument, representing the name of the run. This must be exactly the same name as the directory containing the original input file, “RUN_1” in the example above. This argument is used in 2 places (\<argument\> in these descriptions refers to the content of the argument string):
* In the creation of a `MONTE_<argument>` directory. This directory will contain some number of sub-directories identified as, for example, RUN_01, RUN_02, RUN_03, etc. each of which will contain one of the generated dispersion files.
* In the instructions written into the generated dispersion files to execute the content of the input file found in `<argument>`.
#### 4.2.2.3 Additional Configurations
There are additional configurations instructing the MonteCarloMaster on the generation of the new dispersion files. Depending on the use-case, these could either be embedded within the `if monte_carlo.master.generate_dispersions:` block of the original input file, or in the secondary input file (or command-line arguments if configured to do so).
* Number of runs is controlled with a single statement, e.g.
```monte_carlo.master.set_num_runs(10)```
* Generation of meta-data. The meta-data provides a summary of which variables are being dispersed, the type of dispersion applied to each, the random seeds being used, and correlation between different variables. This is written out to a file called MonteCarlo_Meta_data_output in the MONTE_* directory.
```monte_carlo.master.generate_meta_data = True```
* Changing the name of the automatically-generated monte-directory. By default, this takes the form “MONTE_\<run_name\>” as assigned in the MonteCarloMaster::activate(...) method. The monte_dir variable is public and can be reset after activation and before the `MonteCarloMaster::execute()` method runs. This is particularly useful if it is desired to compare two distribution sets for the same run.
```monte_carlo.master.monte_dir = "MONTE_RUN_1_vers2"```
* Changing the input file name. It is expected that most applications of this model will run with a typical organization of a Trick simulation. Consequently, the original input file is probably named input.py, and this is the default setting for the input_file_name variable. However, to support other cases, this variable is public and can be changed at any time between construction and the execution of the `MonteCarloMaster::execute()` method.
```monte_carlo.master.input_file_name = "modified_input.py"```
* Padding the filenames of the generated files. By default, the generated RUN directories in the generated MONTE_* directory will have their numerical component padded according to the number of runs. When:
* between 1 - 10 runs are generated, the directories will be named RUN_0, RUN_1, …
* between 11-100 runs are generated, the directories will be named RUN_00, RUN_01, …
* between 101-1000 runs are generated, the directories will be named RUN_000, RUN_001, …
* etc.
Specification of a minimum padding width is supported. For example, it might be desired to create 3 runs with names RUN_00000, RUN_00001, and RUN_00002, in which case the minimum-padding should be specified as 5 characters:
```monte_carlo.master.minimum_padding = 5```
* Changing the run-name. For convenience, the run-name is provided as an argument in the MonteCarloMaster::activate(...) method. The run_name variable is public, and can be reset after activation and before the `MonteCarloMaster::execute()` method runs. Because this setting determines which run is to be launched from the dispersion files, resetting run_name has limited application, effectively limited to correcting an error that could typically be more easily corrected directly.
```monte_carlo.master.run_name = "RUN_2"```
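Taken together, a secondary input file that both initiates monte-carlo generation (section 4.2.2.2) and applies several of these additional configurations might look like the following sketch. The run name "RUN_1" and the numeric values are placeholder choices for illustration.

```python
# Sketch of a secondary input file combining the configurations above.
# "RUN_1" and the numeric values are placeholders.
monte_carlo.master.activate("RUN_1")
monte_carlo.master.set_num_runs(10)           # generate 10 dispersion files
monte_carlo.master.generate_meta_data = True  # write the meta-data summary
monte_carlo.master.minimum_padding = 5        # RUN_00000, RUN_00001, ...
exec(open("RUN_1/input.py").read())
```

Because these assignments run before `MonteCarloMaster::execute()` fires during initialization, the generated MONTE_RUN_1 directory reflects all of them.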
## 4.3 MonteCarlo Variables (MonteCarloVariable)
The instantiation of the MonteCarloVariable instances is typically handled as a user-input to the simulation without requiring re-compilation. As such, these are usually implemented in Python input files. This is not a requirement, and these instances can be compiled as part of the simulation build. Both cases are presented.
### 4.3.1 Instantiation and Registration
For each variable to be dispersed, an instance of a MonteCarloVariable must be created, and that instance registered with the MonteCarloMaster instance:
1. Identify the type of dispersion desired
2. Select the appropriate type of MonteCarloVariable to provide that dispersion.
3. Create the new instance using its constructor.
4. Register it with the MonteCarloMaster using the `MonteCarloMaster::add_variable( MonteCarloVariable&)` method.
#### 4.3.1.1 Python input file implementation for Trick:
When the individual instances are registered with the master, it only records the address of those instances. A user may create completely new variable names for each dispersion, or use a generic name as illustrated in the example below. Because these are typically created within a Python function, it is important to add the thisown=False instruction on each creation to prevent its destruction when the function returns.
```python
mc_var = trick.MonteCarloVariableRandomUniform( "object.x_uniform", 0, 10, 20)
mc_var.thisown = False
monte_carlo.master.add_variable(mc_var)
mc_var = trick.MonteCarloVariableRandomNormal( "object.x_normal", 0, 0, 5)
mc_var.thisown = False
monte_carlo.master.add_variable(mc_var)
```
#### 4.3.1.2 C++ implementation in its own class:
In this case, the instances do have to be uniquely named.
Note that the registering of the variables could be done in the class constructor rather than in an additional method (process_variables), thereby eliminating the need to store the reference to MonteCarloMaster. In this case, the `generate_dispersions` flag is completely redundant because the variables are already registered by the time the input file is executed. Realize, however, that doing so does carry the overhead of registering those variables with the MonteCarloMaster every time the simulation starts up. This can be a viable solution when there are only a few MonteCarloVariable instances, but is generally not recommended; using an independent method (process_variables) allows the registering of the variables to be executed only when generating new dispersions.
```c++
class MonteCarloVarSet {
 private:
  MonteCarloMaster & master;

 public:
  MonteCarloVariableRandomUniform x_uniform;
  MonteCarloVariableRandomNormal  x_normal;
  ...

  MonteCarloVarSet( MonteCarloMaster & master_)
    : master(master_),
      x_uniform("object.x_uniform", 0, 10, 20),
      x_normal ("object.x_normal", 0, 0, 5),
      ...
  { };

  void process_variables() {
    master.add_variable(x_uniform);
    master.add_variable(x_normal);
    ...
  }
};
```
#### 4.3.1.3 C++ implementation within a Trick S-module:
Instantiating the variables into the same S-module as the master is also a viable design pattern. However, this can lead to a very long S-module so is typically only recommended when there are few variables. As with the C++ implementation in a class, the variables can be registered with the master in the constructor rather than in an additional method, with the same caveats presented earlier.
```c++
class MonteCarloSimObject : public Trick::SimObject
{
 public:
  MonteCarloMaster master;
  MonteCarloVariableRandomUniform x_uniform;
  MonteCarloVariableRandomNormal  x_normal;
  ...

  MonteCarloSimObject(std::string location)
    : master(location),
      x_uniform("object.x_uniform", 0, 10, 20),
      x_normal ("object.x_normal", 0, 0, 5),
      ...
  {
    P_MONTECARLO ("initialization") master.execute();
  }

  void process_variables() {
    master.add_variable(x_uniform);
    master.add_variable(x_normal);
    ...
  }
};
MonteCarloSimObject monte_carlo("monte_carlo.master");
```
### 4.3.2 input-file Access
If using the Python input file implementation, the execution of the file containing the variable definitions must be contained inside the `generate_dispersions` gate in the input file:
```python
if monte_carlo.master.active:
    if monte_carlo.master.generate_dispersions:
        exec(open("Modified_data/monte_variables.py").read())
```
(where monte_variables.py is the file containing the mc_var = … content described earlier)

If using a (compiled) C++ implementation with a method to process the registration, that method call must be contained inside the `generate_dispersions` gate in the input file:
```python
if monte_carlo.master.active:
    if monte_carlo.master.generate_dispersions:
        monte_carlo_variables.process_variables()
```
If using a (compiled) C++ implementation with the registration conducted at construction, the `generate_dispersions` flag is not used in the input file:
```python
if monte_carlo.master.active:
    # add only those lines such as logging configuration
```
### 4.3.3 Configuration
For all variable-types, the variable_name is provided as the first argument to the constructor. This variable name must include the full address from the top level of the simulation. After this argument, each variable type differs in its construction arguments and subsequent configuration options.
#### 4.3.3.1 MonteCarloVariable
MonteCarloVariable is an abstract class; its instantiable implementations are presented below. There is one configuration option of general applicability to these implementations: the setting of units. In a typical simulation, a variable has an inherent unit-type; these are often SI units, but may be based on another system. Those native units may differ from the units in which the distribution is described. In that case, assigning the generated numerical value to the variable without heed to the units mismatch would result in significant error.
```set_units(std::string units)```
This method specifies that the numerical value being generated is to be interpreted in the specified units.
Notes
* If it is known that the variable's native units and the dispersion units match (including the case of a dimensionless value), this method is not needed.
* This method is not applicable to all types of MonteCarloVariable; use with MonteCarloVariableRandomBool and MonteCarloPython* is considered undefined behavior.
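As a sketch, suppose `object.position` is natively stored in meters but its dispersion is described in feet; the units can be declared on the variable in the input file (variable names and values here are illustrative):

```python
# Disperse object.position with values expressed in feet; the generated
# assignment will be interpreted in "ft" rather than the variable's native units.
mc_var = trick.MonteCarloVariableRandomNormal( "object.position", 0, 100, 25)
mc_var.set_units("ft")
mc_var.thisown = False
monte_carlo.master.add_variable(mc_var)
```

This fragment belongs inside the `generate_dispersions` gate of the input file, as shown in section 4.3.2.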
#### 4.3.3.2 MonteCarloVariableFile
The construction arguments are:
1. variable name
2. filename containing the data
3. column number containing data for this variable
4. (optional) first column number. This defaults to 1, but some users may want to zero-index their column numbers, in which case it can be set to 0.
Additional configuration for this model includes the specification of the maximum number of lines to skip between runs: `max_skip`. This public variable has a default value of 0, meaning that the next run will be drawn from the next line of data, but this can be adjusted.
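A hedged input-file sketch, assuming a whitespace-delimited data file at an illustrative path, drawing this variable's value from the third column:

```python
# Draw object.x_file from column 3 of a data file (path is illustrative).
mc_var = trick.MonteCarloVariableFile( "object.x_file",
                                       "Modified_data/dispersions.txt", 3)
mc_var.thisown = False
monte_carlo.master.add_variable(mc_var)
```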
#### 4.3.3.3 MonteCarloVariableFixed
The construction arguments are:
1. variable name
2. value to be assigned
There is no additional configuration beyond the constructor.
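An input-file sketch (name and value illustrative):

```python
# Assign the same fixed value to object.x_fixed in every generated run.
mc_var = trick.MonteCarloVariableFixed( "object.x_fixed", 42.0)
mc_var.thisown = False
monte_carlo.master.add_variable(mc_var)
```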
#### 4.3.3.4 MonteCarloVariableRandomBool
The construction arguments are:
1. variable name
2. seed for random generator
There is no additional configuration beyond the constructor.
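An input-file sketch (name and seed illustrative):

```python
# Randomly assign True or False to object.flag each run.
mc_var = trick.MonteCarloVariableRandomBool( "object.flag", 12345)
mc_var.thisown = False
monte_carlo.master.add_variable(mc_var)
```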
#### 4.3.3.5 MonteCarloVariableRandomNormal
The construction arguments are:
1. variable name
2. seed for random generator, defaults to 0
3. mean of distribution, defaults to 0
4. standard-deviation of distribution, defaults to 1.
The normal distribution may be truncated, and there are several configuration settings associated with truncation. Note that for all of these truncation options, if the lower truncation bound is set larger than the upper truncation bound, the generation of the dispersed value will fail and the simulation will terminate without generating files. If the upper and lower bounds are set equal, the result is a forced assignment to that value.
`TruncationType`
This is an enumerated type, supporting the specification of the truncation limits in one of three ways:
* `StandardDeviation`: The distribution will be truncated at the specified number(s) of standard deviations away from the mean.
* `Relative`: The distribution will be truncated at the specified value(s) relative to the mean value.
* `Absolute`: The distribution will be truncated at the specified value(s).
`max_num_tries`
The truncation is performed by repeatedly generating a number from the unbounded distribution until one is found that lies within the truncation limits. This max_num_tries value determines how many attempts may be made before the algorithm concedes. It defaults to 10,000. If a value has not been found within the specified number of tries, an error message is sent and the value is calculated according to the following rules:
* For a distribution truncated at only one end, the truncation limit is used
* For a distribution truncated at both ends, the midpoint value between the two truncation limits is used.
`truncate( double limit, TruncationType)`
This method provides a symmetric truncation, with the numerical value provided by limit being interpreted as a number of standard-deviations either side of the mean, a relative numerical value from the mean, or an absolute value.
The value limit should be positive. If a negative value is provided, it will be negated to a positive value.
The use of TruncationType Absolute with this method requires a brief clarification, because it may result in an asymmetric distribution. In this case, the distribution will be truncated to lie between (-limit, limit), which will be asymmetric for all cases in which the mean is non-zero.
`truncate( double min, double max, TruncationType)`
This method provides a more general truncation, with the numerical value provided by min and max being interpreted as a number of standard-deviations away from the mean, a relative numerical value from the mean, or an absolute value.
Unlike the previous method, the numerical arguments (min and max) may be positive or negative, and care must be taken especially when specifying min with TruncationType StandardDeviation or Relative. Realize that a positive value of min will result in a lower bound with value above that of the mean; min does not mean “distance to the left of the mean”, it means the smallest acceptable value relative to the mean.
`truncate_low( double limit, TruncationType)`
This method provides a one-sided truncation. All generated values will be above the limit specification.
`truncate_high( double limit, TruncationType)`
This method provides a one-sided truncation. All generated values will be below the limit specification.
`untruncate()`
This method removes previously configured truncation limits.
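Pulling these pieces together, a hedged input-file sketch of a truncated normal dispersion (names, seed, and the exact Python spelling of the TruncationType enumeration are illustrative):

```python
# Normal dispersion on object.x_normal: seed 77, mean 10, std-dev 2,
# truncated to lie within 2 standard deviations either side of the mean.
mc_var = trick.MonteCarloVariableRandomNormal( "object.x_normal", 77, 10, 2)
mc_var.truncate(2, trick.MonteCarloVariableRandomNormal.StandardDeviation)
mc_var.thisown = False
monte_carlo.master.add_variable(mc_var)
```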
#### 4.3.3.6 MonteCarloVariableRandomStringSet
The construction arguments are:
1. variable name
2. seed for random generator
This type of MonteCarloVariable contains a STL-vector of STL-strings containing the possible values that can be assigned by this generator. This vector is NOT populated at construction time and must be configured.
`add_string(std::string new_string)`
This method adds the specified string (`new_string`) to the vector of available strings.
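An input-file sketch (names and seed illustrative). Note the embedded quotes: since the chosen string is pasted verbatim into the generated assignment, quoting it keeps the generated line valid Python when the target variable holds a string:

```python
# Pick one of three strings at random for object.config_name each run.
mc_var = trick.MonteCarloVariableRandomStringSet( "object.config_name", 44)
mc_var.add_string("'low'")
mc_var.add_string("'nominal'")
mc_var.add_string("'high'")
mc_var.thisown = False
monte_carlo.master.add_variable(mc_var)
```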
#### 4.3.3.7 MonteCarloVariableRandomUniform
The construction arguments are:
1. variable name
2. seed for random generator, defaults to 0
3. lower-bound of distribution, defaults to 0
4. upper-bound of distribution, defaults to 1
There is no additional configuration beyond the constructor.
#### 4.3.3.8 MonteCarloVariableRandomUniformInt
The construction arguments are:
1. variable name
2. seed for random generator, defaults to 0
3. lower-bound of distribution, defaults to 0
4. upper-bound of distribution, defaults to 1
There is no additional configuration beyond the constructor.
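An input-file sketch of a uniform integer draw on [1, 6], i.e. a die roll (name and seed illustrative):

```python
# Uniformly-distributed integer between 1 and 6, inclusive.
mc_var = trick.MonteCarloVariableRandomUniformInt( "object.die_roll", 3, 1, 6)
mc_var.thisown = False
monte_carlo.master.add_variable(mc_var)
```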
#### 4.3.3.9 MonteCarloVariableSemiFixed
The construction arguments are:
1. variable name
2. reference to the MonteCarloVariable whose generated value is to be used as the fixed value.
There is no additional configuration beyond the constructor.
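An input-file sketch (names illustrative): a uniform draw is generated once, and that same value is then reused for every run. Only the SemiFixed variable is registered with the master in this sketch; the source variable merely supplies the value.

```python
# Generate one uniform draw, then fix that value across all runs.
mc_seed = trick.MonteCarloVariableRandomUniform( "object.x_semi_fixed", 1, 10, 20)
mc_seed.thisown = False
mc_var = trick.MonteCarloVariableSemiFixed( "object.x_semi_fixed", mc_seed)
mc_var.thisown = False
monte_carlo.master.add_variable(mc_var)
```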
#### 4.3.3.10 MonteCarloPythonLineExec
The construction arguments are:
1. variable name
2. an STL-string providing the Python instruction for the computing of the value to be assigned to the specified variable.
There is no additional configuration beyond the constructor.
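An input-file sketch that emits a computed assignment into each generated file; using `monte_run_number` (assigned at the top of each generated file, as described in section 4.4) gives a simple data sweep (names and expression illustrative):

```python
# Each generated input file will contain a line assigning object.x_sweep
# from the instruction string, evaluated at run time.
mc_var = trick.MonteCarloPythonLineExec(
    "object.x_sweep",
    "monte_carlo.master.monte_run_number * 0.5")
mc_var.thisown = False
monte_carlo.master.add_variable(mc_var)
```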
#### 4.3.3.11 MonteCarloPythonFileExec
The construction argument is:
1. name of the file to be executed from the generated input file.
There is no additional configuration beyond the constructor.
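An input-file sketch (the file path is illustrative):

```python
# Each generated input file will exec() the named file, which may perform
# arbitrary project-specific dispersion logic.
mc_var = trick.MonteCarloPythonFileExec( "Modified_data/custom_dispersion.py")
mc_var.thisown = False
monte_carlo.master.add_variable(mc_var)
```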
## 4.4 Information on the Generated Files
This section is for informational purposes only to describe the contents of the automatically-generated dispersion files. Users do not need to take action on any content in here.
The generated files can be broken down into 3 parts:
* Configuration for the input file. These two lines set the flags such that when this file is executed, the content of the original input file will configure the run for a monte-carlo analysis but without re-generating the dispersion files.
```python
monte_carlo.master.active = True
monte_carlo.master.generate_dispersions = False
```
* Execution of the original input file. This line opens the original input file so that when this file is executed, the original input file is also executed automatically.
```python
exec(open('RUN_1/input.py').read())
```
* Assignment to simulation variables. This section always starts with the assignment to the run-number, which is also found in the name of the run, so RUN_0 gets a 0, RUN_1 gets a 1, etc. This value can be used, for example, to generate data sweeps as described in section MonteCarloPythonLineExec above.
```python
monte_carlo.master.monte_run_number = 0
object.test_variable1 = 5
object.test_variable2 = 1.23456789
...
```
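As a sketch of the data-sweep idea, this plain-python helper shows the values a generated line like `object.angle = monte_carlo.master.monte_run_number * 0.5` would assign across runs (names and step size illustrative):

```python
# Map each run number to the value a linear data sweep would assign it.
def swept_value(monte_run_number, step=0.5):
    return monte_run_number * step

print([swept_value(n) for n in range(4)])  # [0.0, 0.5, 1.0, 1.5]
```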
## 4.5 Extension
The model is designed to be extensible and while we have tried to cover the most commonly used applications, complete anticipation of all use-case needs is impossible. The most likely candidate for extension is in the area of additional distributions. In this case:
* A new distribution should be defined in its own class
* That class shall inherit from MonteCarloVariable or, if it involves a random generation using a distribution found in the C++ `<random>` library, from MonteCarloVariableRandom.
* Populate the command variable inherited from MonteCarloVariable. This is the STL string representing the content that the MonteCarloMaster will place into the generated dispersion files.
* Call the `insert_units()` method inherited from MonteCarloVariable.
* Set the `command_generated` flag to true if the command has been successfully generated.
## 4.6 Running generated runs within an HPC framework
Modern HPC (High Performance Computing) labs typically have one or more tools for managing the execution of jobs across multiple computers. There are several linux-based scheduling tools, but this section focuses on running the generated runs using a SLURM (Simple Linux Utility for Resource Management) array job. Consider this script using a simulation built with gcc 4.8 and a user-configured run named `RUN_example` which has already executed once with the Monte-Carlo Generation model enabled to generate 100 runs on disk:
```bash
#!/bin/bash
#SBATCH --array=0-99
# This is an example sbatch script demonstrating running an array job in SLURM.
# SLURM is an HPC (High-Performance-Computing) scheduling tool installed in
# many modern super-compute clusters that manages execution of a massive
# number of user-jobs. When a script like this is associated with an array
# job, this script is executed once per enumerated value in the array. After
# the Monte Carlo Generation Model executes, the resulting RUNs can be queued
# for SLURM execution using a script like this. Alternatively, sbatch --wrap
# can be used. See the SLURM documentation for more in-depth information.
#
# Slurm: https://slurm.schedmd.com/documentation.html
# $SLURM_ARRAY_TASK_ID is automatically provided by slurm, and will be an
# integer between 0-99 per the "SBATCH --array" flag specified at the top of
# this script
echo "SLURM has provided us with array job integer: $SLURM_ARRAY_TASK_ID"
# Convert this integer to a zero-padded string matching the RUN naming
# convention associated with this example
RUN_NUM=`printf %02d $SLURM_ARRAY_TASK_ID`
# Execute the single trick simulation run associated with RUN_NUM
echo "Running RUN_$RUN_NUM ..."
./S_main_Linux_4.8_x86_64.exe MONTE_RUN_example/RUN_${RUN_NUM}/monte_input.py
```
The above script can be executed within a SLURM environment by running `sbatch <path/to/script.sh>`. This single command will create 100 independent array jobs in SLURM, allowing the scheduler to execute them as resources permit. Be extra careful with the zero-padding logic in the script above. The monte-carlo generation model will create zero-padded `RUN` names suitable for the number of runs requested to be generated by the user. The `%02d` part of the script above specifies 2-digit zero-padding which is suitable for 100 runs. Be sure to match this logic with the zero-padding as appropriate for your use-case.
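A hedged sketch of deriving the pad width from the run count rather than hard-coding `%02d` (assumes runs are numbered 0 through NUM_RUNS-1):

```bash
# Derive the zero-pad width from the number of generated runs so the script
# matches the model's padding rule without a hard-coded format string.
NUM_RUNS=100
LAST=$((NUM_RUNS - 1))
PAD=${#LAST}                      # digit count of the largest run number
RUN_NUM=$(printf "%0${PAD}d" 7)
echo "RUN_${RUN_NUM}"             # RUN_07
```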
For more information on SLURM, refer to the project documentation: https://slurm.schedmd.com/documentation.html
# 5 Verification
The verification of the model is provided by tests defined in `test/SIM_mc_generation`. This sim was originally developed by JSC/EG NASA in the 2020 timeframe. The verification section of the original documentation is omitted from this markdown file because it heavily leverages formatting that markdown cannot support. It can be viewed [here](MCG_verification_2020.pdf).

Updated content of `trickops/README.md` (excerpts):

@ -12,10 +12,11 @@
* [Other Useful Examples](#other-useful-examples)
* [The TrickOps Design](#regarding-the-design-why-do-i-have-to-write-my-own-script)
* [Tips & Best Practices](#tips--best-practices)
* [MonteCarloGenerationHelper](#montecarlogenerationhelper---trickops-helper-class-for-montecarlogeneratesm-users)
# TrickOps
TrickOps is shorthand for "Trick Operations". TrickOps is a `python3` framework that provides an easy-to-use interface for common testing and workflow actions that Trick simulation developers and users often run repeatedly. Good software developer workflows typically have a script or set of scripts that the developer can run to answer the question "have I broken anything?". The purpose of TrickOps is to provide the logic central to managing these tests while allowing each project to define how and what they wish to test. Don't reinvent the wheel, use TrickOps!
TrickOps is *not* a GUI, it's a set of python modules that you can `import` that let you build a testing framework for your Trick-based project with just a few lines of python code.
@ -54,40 +55,42 @@ Simple and readable, this config file is parsed by `PyYAML` and adheres to all n
```yaml
globals:
  env:                 <-- optional literal string executed before all tests, ex: ". env.sh"

SIM_abc:               <-- required unique name for sim of interest, must start with SIM
  path:                <-- required SIM path relative to project top level
  description:         <-- optional description for this sim
  labels:              <-- optional list of labels for this sim, can be used to get sims
    - model_x              by label within the framework, or for any other project-defined
    - verification         purpose
  build_args:          <-- optional literal args passed to trick-CP during sim build
  binary:              <-- optional name of sim binary, defaults to S_main_{cpu}.exe
  size:                <-- optional estimated size of successful build output file in bytes
  phase:               <-- optional phase to be used for ordering builds if needed
  parallel_safety:     <-- <loose|strict> strict won't allow multiple input files per RUN dir.
                           Defaults to "loose" if not specified
  runs:                <-- optional dict of runs to be executed for this sim, where the
    RUN_1/input.py --foo:  dict keys are the literal arguments passed to the sim binary
    RUN_2/input.py:        and the dict values are other run-specific optional dictionaries
    RUN_[10-20]/input.py:  described in indented sections below. Zero-padded integer ranges
                           can specify a set of runs with continuous numbering using
                           [<starting integer>-<ending integer>] notation
      returns: <int>   <---- optional exit code of this run upon completion (0-255). Defaults
                             to 0
      compare:         <---- optional list of <path> vs. <path> comparison strings to be
        - a vs. b            compared after this run is complete. Zero-padded integer ranges
        - d vs. e            are supported as long as they match the pattern in the parent run.
        - ...                All non-list values are ignored and assumed to be used to define
                             an alternate comparison method in a class extending this one
      analyze:         <-- optional arbitrary string to execute as job in bash shell from
                           project top level, for project-specific post-run analysis
      phase:           <-- optional phase to be used for ordering runs if needed
      valgrind:        <-- optional string of flags passed to valgrind for this run.
                           If missing or empty, this run will not use valgrind
non_sim_extension_example:
  will: be ignored by TrickWorkflow parsing for derived classes to implement as they wish
```
Almost everything in this file is optional, but there must be at least one top-level key that starts with `SIM` and it must contain a valid `path: <path/to/SIM...>` with respect to the top level directory of your project. Here, `SIM_abc` represents "any sim" and the name is up to the user, but it *must* begin with `SIM` since `TrickWorkflow` purposefully ignores any top-level key not beginning with `SIM` and any key found under the `SIM` key not matching any named parameter above. This design allows for extensibility of the YAML file for non-sim tests specific to a project.
There is *no limit* to the number of `SIM`s, `runs:`, `compare:` lists, `valgrind` flags, etc. This file is intended to contain every sim, every sim's run, and every run's comparison and so on that your project cares about. Remember, this file represents the *pool* of tests, not necessarily what *must* be tested every time your scripts which use it run.
@ -101,19 +104,21 @@ cd trick/share/trick/trickops/
```
When running, you should see output that looks like this:
![ExampleWorkflow In Action](images/trickops_example.png)
When running this example script, you'll notice that tests occur in two phases. First, sims build in parallel up to three at a time. Then when all builds complete, sims run in parallel up to three at a time. Progress bars show how far along each build and sim run is at any given time. The terminal window will accept scroll wheel and arrow input to view current builds/runs that are longer than the terminal height. Before the script finishes, it reports a summary of what was done, providing a list of which sims and runs were successful and which were not.
Looking inside the script, the code at top of the script creates a yaml file containing a large portion of the sims and runs that ship with trick and writes it to `/tmp/config.yml`. This config file is then used as input to the `TrickWorkflow` framework. At the bottom of the script is where the magic happens, this is where the TrickOps modules are used:
```python
from TrickWorkflow import *
class ExampleWorkflow(TrickWorkflow):
    def __init__( self, quiet, trick_top_level='/tmp/trick'):
        # Real projects already have trick somewhere, but for this example, just clone & build it
        if not os.path.exists(trick_top_level):
            os.system('cd %s && git clone https://github.com/nasa/trick' % (os.path.dirname(trick_top_level)))
        if not os.path.exists(os.path.join(trick_top_level, 'lib64/libtrick.a')):
            os.system('cd %s && ./configure && make' % (trick_top_level))
        # Base Class initialize, this creates internal management structures
        TrickWorkflow.__init__(self, project_top_level=trick_top_level, log_dir='/tmp/',
            trick_dir=trick_top_level, config_file="/tmp/config.yml", cpus=3, quiet=quiet)
```
@ -135,9 +140,11 @@ Let's look at a few key parts of the example script. Here, we create a new class
```python
from TrickWorkflow import *
class ExampleWorkflow(TrickWorkflow):
    def __init__( self, quiet, trick_top_level='/tmp/trick'):
        # Real projects already have trick somewhere, but for this example, just clone & build it
        if not os.path.exists(trick_top_level):
            os.system('cd %s && git clone https://github.com/nasa/trick' % (os.path.dirname(trick_top_level)))
        if not os.path.exists(os.path.join(trick_top_level, 'lib64/libtrick.a')):
            os.system('cd %s && ./configure && make' % (trick_top_level))
```
Our new class `ExampleWorkflow.py` can be initialized however we wish as long as it provides the necessary arguments to its base-class initializer. In this example, `__init__` takes two parameters: `trick_top_level` which defaults to `/tmp/trick`, and `quiet` which will be `False` unless `quiet` is found in the command-line args to this script. The magic happens on the very next line where we call the base-class `TrickWorkflow` initializer which accepts four required parameters:
@ -149,15 +156,15 @@ The required parameters are described as follows:
* `project_top_level` is the absolute path to the highest-level directory of your project. The "top level" is up to the user to define, but usually this is the top level of your repository and at minimum must be a directory from which all sims, runs, and other files used in your testing are recursively reachable. * `project_top_level` is the absolute path to the highest-level directory of your project. The "top level" is up to the user to define, but usually this is the top level of your repository and at minimum must be a directory from which all sims, runs, and other files used in your testing are recursively reachable.
* `log_dir` is a path to a user-chosen directory where all logging for all tests will go. This path will be created for you if it doesn't already exist. * `log_dir` is a path to a user-chosen directory where all logging for all tests will go. This path will be created for you if it doesn't already exist.
* `trick_dir` is an absolute path to the top level directory for the instance of trick used for your project. For projects that use trick as a `git` `submodule`, this is usually `<project_top_level>/trick` * `trick_dir` is an absolute path to the top level directory for the instance of trick used for your project. For projects that use trick as a `git` `submodule`, this is usually `<project_top_level>/trick`
* `config_file` is the path to a YAML config file describing the sims, runs, etc. for your project. It's recommended this file be tracked in your SCM tool but that is not required. More information on the syntax expected in this file in the **The YAML File** section below. * `config_file` is the path to a YAML config file describing the sims, runs, etc. for your project. It's recommended this file be tracked in your SCM tool but that is not required. More information on the syntax expected in this file in the **The YAML File** section above.
The optional parameters are described as follows:
* `cpus` tells the framework how many CPUs to use on sim builds. This translates directly to `MAKEFLAGS` and is separate from the maximum number of simultaneous sim builds.
* `quiet` tells the framework to suppress progress bars and other verbose output. It's a good idea to use `quiet=True` if your scripts are going to be run in a continuous integration (CI) testing framework such as GitHub Actions, GitLab CI, or Jenkins, because it suppresses all `curses` logic during job execution which itself expects `stdin` to exist.
When `TrickWorkflow` initializes, it reads the `config_file` and verifies the information given matches the expected convention. If a non-fatal error is encountered, a message detailing the error is printed to `stdout` and the internal timestamped log file under `log_dir`. A fatal error will `raise RuntimeError`. Classes which inherit from `TrickWorkflow` may also access `self.parsing_errors` and `self.config_errors`, which are lists of errors encountered while parsing and while processing the YAML file, respectively.
Moving on to the next few important lines of code in our `ExampleWorkflow.py` script. The `def run(self):` line declares a function whose return code is passed back to the calling shell via `sys.exit()`. This is where we use the functions given to us by inheriting from `TrickWorkflow`:
```python
return (builds_status or runs_status or self.config_errors)
```
The `ExampleWorkflow.py` script uses sims/runs provided by trick to exercise *some* of the functionality provided by TrickOps. This script does not have any comparisons, post-run analyses, or valgrind runs defined in the YAML file, so there is no execution of those tests in this example.
## `compare:` - File vs. File Comparisons
```yaml
SIM_ball:
    path: sims/SIM_ball
    runs:
        RUN_foo/input.py:
        RUN_test/input.py:
            compare:
                - path/to/SIM_ball/RUN_test/log_a.csv vs. regression/SIM_ball/log_a.csv
                - path/to/SIM_ball/RUN_test/log_b.trk vs. regression/SIM_ball/log_b.trk
```
In this example, `SIM_ball`'s run `RUN_foo/input.py` doesn't have any comparisons, but `RUN_test/input.py` contains two comparisons, each of which compares data generated by the execution of `RUN_test/input.py` to a stored-off version of the file under the `regression/` directory relative to the top level of the project. The comparisons themselves can be executed in your python script via the `compare()` function in multiple ways. For example:
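The elided code isn't reproduced here, but conceptually each `compare:` entry's `vs.` pair reduces to a strict file-equality check. A minimal standalone sketch of that underlying check (`compare_files` is an illustrative name, not the TrickOps API):

```python
import filecmp
import os

def compare_files(test_data, baseline):
    """Return 0 if both files exist and match byte-for-byte, else 1.

    Mirrors the pass/fail convention TrickOps jobs use: zero on
    success, non-zero on failure.
    """
    if not (os.path.isfile(test_data) and os.path.isfile(baseline)):
        return 1  # a missing file counts as a failed comparison
    return 0 if filecmp.cmp(test_data, baseline, shallow=False) else 1
```

Each `vs.` pair maps to one such check; a run's comparisons succeed only when every pair matches.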
If an error is encountered (e.g. `koviz` or a given directory cannot be found), `None` is returned in the first index of the tuple, and the error information is returned in the second index of the tuple for `get_koviz_report_job()`. The `get_koviz_report_jobs()` function just wraps the singular call and returns a tuple of `( list_of_jobs, list_of_any_failures )`. Note that `koviz` accepts entire directories as input, not specific paths to files. Keep this in mind when you organize how regression data is stored and how logged data is generated by your runs.
## `analyze:` - Post-Run Analysis
The optional `analyze:` section of a `run:` is intended to be a catch-all for "post-run analysis". The string given will be transformed into a `Job()` instance that can be retrieved and executed via `execute_jobs()` just like any other test. All analyze jobs are assumed to return 0 on success, non-zero on failure. One example use case for this would be creating a `jupyter` notebook that contains an analysis of a particular run.
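As a sketch of the shape this takes in the YAML file, following the same layout as the `compare:` example above (the script path here is hypothetical):

```yaml
SIM_ball:
    path: sims/SIM_ball
    runs:
        RUN_test/input.py:
            # Any shell command; assumed to exit 0 on success
            analyze: ./scripts/analyze_run_test.py
```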
## Defining sets of runs using [integer-integer] range notation
The `yaml` file for your project can grow quite large if your sims have a lot of runs. This is especially the case for users of monte-carlo, which may generate hundreds or thousands of runs that you may want to execute as part of your TrickOps script. In order to support these use cases without requiring the user to specify all of these runs individually, TrickOps supports a zero-padded `[integer-integer]` range notation in the `run:` and `compare:` fields. Consider this example `yaml` file:
```yaml
SIM_many_runs:
    path: sims/SIM_many_runs
    runs:
        RUN_[000-100]/monte_input.py:
            returns: 0
            compare:
                - sims/SIM_many_runs/RUN_[000-100]/log_common.csv vs. baseline/sims/SIM_many_runs/log_common.csv
                - sims/SIM_many_runs/RUN_[000-100]/log_verif.csv vs. baseline/sims/SIM_many_runs/RUN_[000-100]/log_verif.csv
```
In this example, `SIM_many_runs` has 101 runs. Instead of specifying each individual run (`RUN_000/`, `RUN_001/`, etc.) in the `yaml` file, the `[000-100]` notation is used to specify a set of runs. All sub-fields of the run apply to that same set. For example, the default value of `0` is used for `returns:`, which also applies to all 101 runs. The `compare:` subsection supports the same range notation, as long as the same range is used in the `run:` named field. Each of the 101 runs shown above has two comparisons. The first `compare:` line defines a common file to be compared against all 101 runs. The second `compare:` line defines run-specific comparisons using the same `[integer-integer]` sequence. Note that when using these range notations, zero-padding must be consistent, the values must be non-negative (the range is inclusive), and the square-bracket notation must use the format `[minimum-maximum]`.
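Conceptually, the expansion is plain text substitution over the bracketed field. A standalone sketch of that logic under the constraints just described (this is illustrative, not TrickOps' internal implementation):

```python
import re

def expand_range_notation(entry):
    """Expand one zero-padded [min-max] range in a run or compare entry.

    Returns a list with the entry repeated once per integer in the
    inclusive range, preserving zero padding. Entries without a range
    are returned unchanged, as a one-element list.
    """
    m = re.search(r'\[(\d+)-(\d+)\]', entry)
    if not m:
        return [entry]
    lo, hi = m.group(1), m.group(2)
    if len(lo) != len(hi) or int(lo) > int(hi):
        raise ValueError('inconsistent zero-padding or min > max: ' + m.group(0))
    width = len(lo)
    # Replace every occurrence of the bracketed field with the same number,
    # so both sides of a "vs." comparison expand in lockstep
    return [entry.replace(m.group(0), str(i).zfill(width))
            for i in range(int(lo), int(hi) + 1)]
```

This is why a `compare:` entry that repeats the same range on both sides of `vs.` expands pairwise: every occurrence of the bracketed field in one entry receives the same run number.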
## `phase:` - An optional mechanism to order builds, runs, and analyses
The `yaml` file supports an optional parameter `phase: <integer>` at the sim and run level which allows the user to easily order sim builds, runs, and/or analyses to suit their specific project constraints. If not specified, all sims, runs, and analyses have a `phase` value of `0` by default. Consider this example `yaml` file with three sims:
```yaml
SIM_car:
    path: sims/SIM_car
SIM_monte:
    path: sims/SIM_monte
    runs:
        RUN_nominal/input.py --monte-carlo:       # Generates the runs below
            phase: -1
        MONTE_RUN_nominal/RUN_000/monte_input.py: # Generated run
        MONTE_RUN_nominal/RUN_001/monte_input.py: # Generated run
        MONTE_RUN_nominal/RUN_002/monte_input.py: # Generated run
        MONTE_RUN_nominal/RUN_003/monte_input.py: # Generated run
        MONTE_RUN_nominal/RUN_004/monte_input.py: # Generated run
# A sim with constraints that make the build finicky, and we can't change the code
SIM_external:
    path: sims/SIM_external
    phase: -1
    runs:
        RUN_test/input.py:
            returns: 0
```
Here we have three sims: `SIM_car`, `SIM_monte`, and `SIM_external`. `SIM_car` and `SIM_monte` have the default `phase` of `0` and `SIM_external` has been assigned `phase: -1` explicitly. If using non-zero phases, jobs can be optionally filtered by them when calling helper functions like `self.get_jobs(kind, phase)`. Some examples:
```python
build_jobs = self.get_jobs(kind='build') # Get all build jobs regardless of phase
build_jobs = self.get_jobs(kind='build', phase=0) # Get all build jobs with (default) phase 0
build_jobs = self.get_jobs(kind='build', phase=-1) # Get all build jobs with phase -1
build_jobs = self.get_jobs(kind='build', phase=[0, 1, 3]) # Get all build jobs with phase 0, 1, or 3
build_jobs = self.get_jobs(kind='build', phase=range(-10,11)) # Get all build jobs with phases between -10 and 10
```
This can be done for runs and analyses in the same manner:
```python
run_jobs = self.get_jobs(kind='run') # Get all run jobs regardless of phase
run_jobs = self.get_jobs(kind='run', phase=0) # Get all run jobs with (default) phase 0
# Get all run jobs with all phases less than zero
run_jobs = self.get_jobs(kind='run', phase=range(TrickWorkflow.allowed_phase_range['min'],0))
# Get all analysis jobs with all phases zero or greater
an_jobs = self.get_jobs(kind='analysis', phase=range(0, TrickWorkflow.allowed_phase_range['max']+1))
```
Note that since analysis jobs are directly tied to a single named run, they inherit the `phase` value of their run as specified in the `yaml` file. In other words, do not add a `phase:` section indented under any `analyze:` section in your `yaml` file.
It's worth emphasizing that the specification of a non-zero `phase` in the `yaml` file, by itself, does not affect the order in which actions are taken. **It is on the user of TrickOps to use this information to order jobs appropriately**. Here's an example in code of what that might look like for the use case described by the `yaml` file in this section:
```python
first_build_jobs = self.get_jobs(kind='build', phase=-1) # Get all build jobs with phase -1 (SIM_external)
second_build_jobs = self.get_jobs(kind='build', phase=0) # Get all build jobs with phase 0 (SIM_car & SIM_monte)
first_run_jobs = self.get_jobs(kind='run', phase=-1) # Get all run jobs with phase -1 (RUN_nominal/input.py --monte-carlo)
second_run_jobs = self.get_jobs(kind='run', phase=0) # Get all run jobs with phase 0 (All generated runs & RUN_test/input.py)
# SIM_external must build before SIM_car and SIM_monte, for project-specific reasons
builds_status1 = self.execute_jobs(first_build_jobs, max_concurrent=3, header='Executing 1st phase sim builds.')
# SIM_car and SIM_monte can build at the same time with no issue
builds_status2 = self.execute_jobs(second_build_jobs, max_concurrent=3, header='Executing 2nd phase sim builds.')
# SIM_monte's 'RUN_nominal/input.py --monte-carlo' generates runs
runs_status1 = self.execute_jobs(first_run_jobs, max_concurrent=3, header='Executing 1st phase sim runs.')
# SIM_monte's 'MONTE_RUN_nominal/RUN*/monte_input.py' are the generated runs, they must execute after the generation is complete
runs_status2 = self.execute_jobs(second_run_jobs, max_concurrent=3, header='Executing 2nd phase sim runs.')
```
Astute observers may have noticed that `SIM_external`'s `RUN_test/input.py` technically has no order dependencies and could execute in either the first or second run job set without issue.
A couple of important points on the motivation for this capability:
* Run phasing was primarily developed to support testing monte-carlo and checkpoint sim scenarios, where output from a set of scenarios (like generated runs or dumped checkpoints) becomes the input to another set of sim scenarios.
* Sim phasing exists primarily to support testing scenarios where sims are poorly architected or immutable, making them unable to be built independently.
## Where does the output of my tests go?
All output goes to a single directory `log_dir`, which is a required input to the `TrickWorkflow.__init__()` function. Sim builds, runs, comparisons, koviz reports etc. are all put in a single directory with unique names. This is purposeful for two reasons:
This is purposeful -- handling every project-specific constraint is impossible.
* If `TrickWorkflow` encounters non-fatal errors while validating the content of the given YAML config file, it will collect them in the internal member `self.config_errors`, a list which is empty when no such errors were found. If you want your script to return non-zero on any non-fatal error, add this to the value returned through your final script's `sys.exit()`.
* Treat the YAML file like your project owns it. You can store project-specific information and retrieve that information in your scripts by accessing the `self.config` dictionary. Anything not recognized by the internal validation of the YAML file is ignored, but that information is still provided to the user. For example, if you wanted to store a list of POCs in your YAML file so that your script could print a helpful message on error, simply add a new entry `project_pocs: email1, email2...` and then access that information via `self.config['project_pocs']` in your script.
## `MonteCarloGenerationHelper` - TrickOps Helper Class for `MonteCarloGenerate.sm` users
TrickOps provides the `MonteCarloGenerationHelper` python module as an interface between a sim using the `MonteCarloGenerate.sm` (MCG) sim module and a typical Trick-based workflow. This module allows MCG users to easily generate monte-carlo runs and execute them locally or alternatively through an HPC job scheduler like SLURM. Below is an example usage of the module. This example assumes:
1. The script using it inherits from or otherwise leverages `TrickWorkflow`, giving it access to `self.execute_jobs()`
2. `SIM_A` is already built and configured with the `MonteCarloGenerate.sm` sim module
3. `RUN_mc/input.py` is configured to generate runs when executed, specifically that `monte_carlo.mc_master.generate_dispersions == monte_carlo.mc_master.active == True` in the input file.
```python
# Instantiate an MCG helper instance, providing the sim and input file for generation
mgh = MonteCarloGenerationHelper(sim_path="path/to/SIM_A", input_path="RUN_mc/input.py")
# Get the generation SingleRun() instance
gj = mgh.get_generation_job()
# Execute the generation Job to generate RUNS
ret = self.execute_jobs([gj])
if ret == 0:  # Successful generation
    # Get a SLURM sbatch array job for all generated runs found in monte_dir.
    # SLURM is an HPC (High-Performance Computing) scheduling tool installed on
    # many modern super-compute clusters that manages execution of a massive
    # number of jobs. See the official documentation for more information:
    # https://slurm.schedmd.com/documentation.html
    sbj = mgh.get_sbatch_job(monte_dir="path/to/MONTE_RUN_mc")
    # Execute the sbatch job, which queues all runs in SLURM for execution.
    # Use hpc_passthrough_args='--wait' to block until all runs complete
    ret = self.execute_jobs([sbj])

    # Instead of using SLURM, generated runs can be executed locally through
    # TrickOps calls on the host where this script runs. First get a list of
    # run jobs
    run_jobs = mgh.get_generated_run_jobs(monte_dir="path/to/MONTE_RUN_mc")
    # Then execute all generated SingleRun instances, up to 10 at once
    ret = self.execute_jobs(run_jobs, max_concurrent=10)
```
Note that the number of runs to-be-generated is configured somewhere in the `input.py` code and this module cannot robustly know that information for any particular use-case. This is why `monte_dir` is a required input to several functions - this directory is processed by the module to understand how many runs were generated.
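That discovery step can be pictured with a small standalone sketch, assuming generated runs follow the zero-padded `RUN_<number>` naming shown earlier (`count_generated_runs` is a hypothetical name, not part of the module):

```python
import os
import re

def count_generated_runs(monte_dir):
    """Count RUN_<number> subdirectories in a generated MONTE_* directory."""
    if not os.path.isdir(monte_dir):
        return 0
    pattern = re.compile(r'^RUN_\d+$')
    # Only count directories whose names look like generated runs
    return sum(1 for name in os.listdir(monte_dir)
               if pattern.match(name) and os.path.isdir(os.path.join(monte_dir, name)))
```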
## More Information
A lot of time was spent adding `python` docstrings to the modules in the `trickops/` directory and the tests under `trickops/tests/`. This README does not cover all functionality, so please see the in-code documentation and unit tests for more detailed information on the framework capabilities.
[Continue to Software Requirements](../software_requirements_specification/SRS)

For example:
```python
drg.add_variable("ball.obj.state.output.position[0]")
drg.add_variable("ball.obj.state.output.position[1]")
```
In this example `position` is an array of floating point numbers. **DO NOT ATTEMPT TO DATA RECORD C OR C++ STRINGS. THIS HAS BEEN OBSERVED TO CREATE MEMORY ISSUES AND TRICK DOES NOT CURRENTLY PROVIDE ERROR CHECKING FOR THIS UNSUPPORTED USE CASE**
An optional alias may also be specified in the method as <tt>drg.add_variable("<string_of_variable_name>" [, "<alias>"])</tt>.
If an alias is present as a second argument, the alias name will be used in the data recording file instead of the actual variable name.
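For instance, recording the same position element under a shorter column name (the alias `pos_x` is made up for illustration; this is an input-file fragment that assumes `drg` was created as in the example above):

```python
drg.add_variable("ball.obj.state.output.position[0]", "pos_x")
```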

#include "trick/JSONVariableServer.hh"
#include "trick/JSONVariableServerThread.hh"
#include "trick/Master.hh"
#include "trick/mc_master.hh"
#include "trick/mc_python_code.hh"
#include "trick/mc_variable_file.hh"
#include "trick/mc_variable_fixed.hh"
#include "trick/mc_variable.hh"
#include "trick/mc_variable_random_bool.hh"
#include "trick/mc_variable_random.hh"
#include "trick/mc_variable_random_normal.hh"
#include "trick/mc_variable_random_string.hh"
#include "trick/mc_variable_random_uniform.hh"
#include "trick/mc_variable_semi_fixed.hh"
#include "trick/Slave.hh"
#include "trick/MSSocket.hh"
#include "trick/MSSharedMem.hh"
#include "trick/RealtimeSync.hh"
#include "trick/ITimer.hh"
#include "trick/VariableServer.hh"
#include "trick/regula_falsi.h"
#include "trick/Integrator.hh"
#include "trick/IntegAlgorithms.hh"
#include "trick/IntegLoopScheduler.hh"
#include "trick/IntegLoopManager.hh"
#include "trick/IntegLoopSimObject.hh"
#include "trick/ABM_Integrator.hh"
#include "trick/Euler_Cromer_Integrator.hh"
#include "trick/Euler_Integrator.hh"
#include "trick/RKF45_Integrator.hh"
#include "trick/RKF78_Integrator.hh"
#include "trick/RKG4_Integrator.hh"
#include "trick/SimTime.hh"
/* from the er7_utils directory */

include/trick/mc_master.hh
/*******************************TRICK HEADER******************************
PURPOSE: (Provides the front-end interface to the monte-carlo model)
LIBRARY DEPENDENCY:
((../src/mc_master.cc))
PROGRAMMERS:
(((Gary Turner) (OSR) (October 2019) (Antares) (Initial)))
(((Isaac Reaves) (NASA) (November 2022) (Integration into Trick Core)))
**********************************************************************/
#ifndef CML_MONTE_CARLO_MASTER_HH
#define CML_MONTE_CARLO_MASTER_HH
#include <string>
#include <list>
#include <utility> // std::pair
#include "mc_variable.hh"
#include "mc_variable_file.hh"
#include "mc_python_code.hh"
#include "mc_variable_random.hh"
#include "mc_variable_fixed.hh"
#include "mc_variable_semi_fixed.hh"
/*****************************************************************************
MonteCarloMaster
Purpose:()
*****************************************************************************/
class MonteCarloMaster
{
public:
bool active; /* (--)
The main active flag determining whether an input file should be
processed for a monte-carlo run. This flag is used to manage the
configuration of the scenario, including things like which variables
to log.*/
bool generate_dispersions; /* (--)
This flag controls whether the variables should be loaded and
dispersed. It has no effect if the active flag is off.
False: Configure the run for MC; this
configuration typically uses a previously-generated
monte_input.py file; it does not read in MC variables and
does not generate new monte_input files.
True: Use this execution of the sim to read in the MC variables and
generate the monte-input files. After doing so, the
execution will terminate.
The sim can then be re-run using one of the new monte_input files.
Default: true. This is set to false in the monte_input.py files to
allow the base input file to be processed from within the
monte-input file without regenerating the monte-input files.*/
std::string run_name; /* (--)
The name of the scenario, used in generating the "MONTE_<run_name>"
directory, which contains all of the runs.*/
std::string monte_dir; /* (--)
The name of the MONTE<RUN...> directory relative to the SIM directory
where runs will be generated*/
std::string input_file_name; /* (--)
The name of the original file used as the main input file.
Default: input.py.*/
bool generate_meta_data; /* (--)
Flag indicating whether to generate the meta-data output file.*/
bool generate_summary; /* (--)
Flag indicating whether to generate the dispersion summary file. */
int minimum_padding; /* (--)
The minimum width of the run-number field, e.g. RUN_1 vs RUN_00001;
The run-number field will be sized to the minimum of this value or the
width necessary to accommodate the highest number.
Defaults to 0. */
size_t monte_run_number; /* (--)
A unique identifying number for each run.*/
protected:
bool input_files_prepared; /* (--)
Internal flag indicating that the input files have been generated and
waiting for execution. Effectively blocks further modifications to
variables after this flag has been set.*/
std::string location; /* (--)
The location in the main sim by which this instance may be addressed
in an input file. */
std::list<MonteCarloVariable *> variables; /* (--)
An STL list of pointers to instances of all the base-class
MonteCarloVariable instances. Note that this has to be a list of pointers
rather than instances because the actual instances are polymorphic;
making a copy would restrict them to be actual MonteCarloVariable
instances and we need the polymorphic capabilities for generating the
monte_input command.*/
unsigned int num_runs; /* (--)
The number of runs to execute for this scenario.*/
private:
std::list< std::pair< std::string,
MonteCarloVariableFile *> > file_list; /* (--)
A list of filenames being read as part of the MonteVarFile variables
being managed by this class. */
public:
MonteCarloMaster(std::string location);
virtual ~MonteCarloMaster(){};
void activate( std::string run_name);
bool prepare_input_files();
void add_variable( MonteCarloVariable & variable);
MonteCarloVariable * find_variable( std::string var_name);
void remove_variable( std::string var_name);
void set_num_runs( unsigned int num_runs);
void execute();
void collate_meta_data();
private:
static bool seed_sort( std::pair< unsigned int, std::string> left,
std::pair< unsigned int, std::string> right)
{
return left.first < right.first;
}
// and undefined:
MonteCarloMaster (const MonteCarloMaster&);
MonteCarloMaster& operator = (const MonteCarloMaster&);
};
#endif
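Pulling the public interface above together, a hypothetical Trick input-file fragment might look as follows. The `monte_carlo.mc_master` instance name follows the README example earlier in this diff, `trick.MonteCarloPythonLineExec` wrapping and the variable names are assumptions for illustration, and this sketch is not a verified configuration:

```python
# Hypothetical input-file fragment (not runnable standalone):
# Name the scenario; runs are generated under MONTE_RUN_mc
monte_carlo.mc_master.activate("RUN_mc")
monte_carlo.mc_master.set_num_runs(10)
# Assign a value to a sim variable in every generated monte_input.py
var = trick.MonteCarloPythonLineExec("ball.obj.mass", "10.0")
monte_carlo.mc_master.add_variable(var)
```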

include/trick/mc_python_code.hh
/*******************************TRICK HEADER******************************
PURPOSE: ( Implementation of a simple python executable code instruction)
LIMITATION: (This implementation is intended to provide a fixed operation
that generates one variable as a function of one or more
variables -- such as random variables -- that were previously
generated by the MonteCarlo process.
E.g. y = x + 2
It does not provide randomization of the operation itself; in
this example the value y will always be generated as x+2, using
the value x which may be variable. However, it will never be
generated as x+3.
To implement randomization of the operation itself, use the
MonteCarloVariableRandomStringSet class instead.)
PROGRAMMERS:
(((Gary Turner) (OSR) (October 2019) (Antares) (Initial)))
(((Isaac Reaves) (NASA) (November 2022) (Integration into Trick Core)))
**********************************************************************/
#ifndef CML_MONTE_CARLO_PYTHON_CODE_HH
#define CML_MONTE_CARLO_PYTHON_CODE_HH
#include "mc_variable.hh"
class MonteCarloPythonLineExec : public MonteCarloVariable
{
public:
std::string instruction_set; /* (--)
The right-hand-side of an equation that gets inserted into the
monte-input file and looks like:
<variable_name> = <instruction-set>*/
protected:
bool instruction_is_command; /* (--)
Indicates whether to implement a command that looks like:
- variable=instruction vs
variable representing the standalone command, in which case
variable_name and instruction_set are identical. */
public:
// 2 constructors:
MonteCarloPythonLineExec(const std::string & var_name,
const std::string & instruction)
:
MonteCarloVariable( var_name),
instruction_set(instruction),
instruction_is_command(false)
{
include_in_summary = false;
type = MonteCarloVariable::Calculated;
}
// other constructor
MonteCarloPythonLineExec( const std::string & instruction)
:
MonteCarloVariable( instruction),
instruction_set(instruction),
instruction_is_command(true)
{
include_in_summary = false;
type = MonteCarloVariable::Execute;
}
virtual ~MonteCarloPythonLineExec(){};
void generate_assignment()
{
if (instruction_is_command) {
command = "\n" + instruction_set;
}
else {
command = "\n" + variable_name + " = " + instruction_set;
}
}
private: // and undefined:
MonteCarloPythonLineExec( const MonteCarloPythonLineExec & );
MonteCarloPythonLineExec& operator = (const MonteCarloPythonLineExec&);
};
/*****************************************************************************
MonteCarloPythonFileExec
Purpose:(Provides a filename for execution to support more extensive
calculations than are possible with the simple one-liner commands
provided by MonteCarloPythonLineExec)
Assumptions: The file identified by filename is expected to be a Python file
Limitations: The file is not tested prior to execution
Other notes:
This class inherits from MonteCarloVariable to simplify the inclusion of
its "command" into the monte_input files. However, it does not populate
a specific variable; its command string is an executive statement, unlike
other MonteCarloVariable instances, which have a command string that
looks like "variable = ..."
This class's "variable_name" is instead re-purposed as a filename
*****************************************************************************/
class MonteCarloPythonFileExec : public MonteCarloVariable
{
public:
MonteCarloPythonFileExec(const std::string & filename)
:
MonteCarloVariable( filename)
{
include_in_summary = false;
type = MonteCarloVariable::Execute;
}
virtual ~MonteCarloPythonFileExec(){};
void generate_assignment()
{
command =
"\nexec(open('" + variable_name + "').read())";
}
private: // and undefined:
MonteCarloPythonFileExec( const MonteCarloPythonFileExec & );
MonteCarloPythonFileExec& operator = (const MonteCarloPythonFileExec&);
};
#endif

include/trick/mc_variable.hh
/*******************************TRICK HEADER******************************
PURPOSE: ( Base class for the MonteCarloVariable type)
LIBRARY DEPENDENCY:
((../src/mc_variable.cc))
PROGRAMMERS:
(((Gary Turner) (OSR) (October 2019) (Antares) (Initial)))
(((Isaac Reaves) (NASA) (November 2022) (Integration into Trick Core)))
**********************************************************************/
#ifndef CML_MONTE_CARLO_VARIABLE_HH
#define CML_MONTE_CARLO_VARIABLE_HH
#include <string>
class MonteCarloVariable
{
public:
enum MonteCarloVariableType
{
Undefined = 0,
Calculated,
Constant,
Execute,
Prescribed,
Random
};
std::string units; /* (--)
optional setting in the case where the specified values are in units
different from the native units of the variable.
These are the units associated with the specified-value.*/
bool include_in_summary; /* (--)
Flag telling MonteCarloMaster whether to include this variable in the
dispersion summary file. The default depends on the type of variable but
is generally true. */
protected:
std::string variable_name; /* (--)
The name of the sim-variable being assigned by this instance. */
std::string assignment; /* (--)
The value assigned to the variable. Used in MonteCarloMaster to generate
the dispersion summary file. */
std::string command; /* (--)
the command that gets pushed to the monte_input input file.*/
MonteCarloVariableType type; /* (--)
Broad categorization of types of MonteCarloVariable. This is set in the
constructor of the specific classes derived from MonteCarloVariable and
provides information to the MonteCarloMaster about what general type of
variable it is dealing with.*/
public:
MonteCarloVariable( const std::string & var_name);
virtual ~MonteCarloVariable() {};
virtual void generate_assignment() = 0;
virtual void shutdown() {}; // deliberately empty
// These getters are intended to be used by the MonteCarloMaster class in
// preparing the input files and summary data files. They may also be used
// in the user interface, but -- especially get_assignment and get_command --
// have limited use there.
const std::string & get_command() const {return command;}
const std::string & get_variable_name() const {return variable_name;}
const std::string & get_assignment() const {return assignment;}
MonteCarloVariableType get_type() const {return type;}
virtual unsigned int get_seed() const {return 0;}
protected:
void insert_units();
void trick_units(size_t);
void assign_double(double value);
void assign_int(int value);
void generate_command();
private: // and undefined:
MonteCarloVariable( const MonteCarloVariable &);
MonteCarloVariable& operator = (const MonteCarloVariable&);
};
#endif
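The getters above serve MonteCarloMaster while it assembles the monte_input files. As a hedged sketch (the exact formatting lives in mc_variable.cc, not shown here), generate_command() is assumed to join the variable name and its assignment into the line that gets pushed to the input file:

```python
# Assumed behavior of MonteCarloVariable::generate_command(): join the
# sim-variable name and its assignment string into a "variable = value"
# line for the monte_input file. Illustrative only.
def generate_command(variable_name, assignment):
    return "\n" + variable_name + " = " + assignment
```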


@ -0,0 +1,87 @@
/*******************************TRICK HEADER******************************
PURPOSE: ( Implementation of a file-lookup assignment)
LIBRARY DEPENDENCY:
(../src/mc_variable_file.cc)
PROGRAMMERS:
(((Gary Turner) (OSR) (October 2019) (Antares) (Initial)))
(((Isaac Reaves) (NASA) (November 2022) (Integration into Trick Core)))
**********************************************************************/
#ifndef CML_MONTE_CARLO_VARIABLE_FILE_HH
#define CML_MONTE_CARLO_VARIABLE_FILE_HH
#include <fstream> // ifstream
#include <random> // default_random_engine, uniform_int_distribution
#include <list>
#include "trick/mc_variable.hh"
/*****************************************************************************
MonteCarloVariableFile
Purpose:(
Grabs a value from a data file.
The data value is located within the file at some column number and
row number.
The column number is specified for the variable.
The row -- or line -- number changes from run to run.
The file is expected to provide consistent presentation of data from
row to row.)
*****************************************************************************/
class MonteCarloVariableFile : public MonteCarloVariable
{
public:
size_t max_skip; /* (--)
The maximum number of lines to skip in a file. This defaults to 0,
indicating that the file will be read sequentially.*/
bool is_dependent; /* (--)
A flag indicating that this instance is dependent upon another instance
for reading the file "filename". Only one instance will read each
filename. Default: false. */
protected:
std::mt19937 rand_gen; /* (--)
A random number generator used in the case that the lines are not read
sequentially. This will generate a value from 0 to max_skip indicating
how many lines should be skipped.*/
std::string filename; /* (--)
The name of the file containing the data for this variable. Assigned
at construction.*/
size_t column_number;/* (--)
The column number indicating from where in the data file the data for
this variable should be extracted.*/
size_t first_column_number; /* (--)
Usually used to distinguish between whether the first column should be
identified with a 0 or 1, but extensible to other integers as well.
Default: 1. */
std::list< MonteCarloVariableFile *> dependents; /* (--)
A list of MonteCarloVariableFile instances that use the same file as
this one. This list is only populated if this instance was the first
registered to use this file.*/
std::ifstream file; /* (--)
Input file stream being the file containing the data.*/
public:
MonteCarloVariableFile( const std::string & var_name,
const std::string & filename,
size_t column_number_,
size_t first_column_number = 1);
virtual ~MonteCarloVariableFile(){};
void initialize_file();
void generate_assignment();
void register_dependent( MonteCarloVariableFile *);
virtual void shutdown() {file.close();}
bool has_dependents() {return (dependents.size() > 1);}
size_t get_column_number() {return column_number;}
size_t get_first_column_number() {return first_column_number;}
const std::string & get_filename() {return filename;}
const std::list< MonteCarloVariableFile *> & get_dependents() {return dependents;}
protected:
void process_line();
static bool sort_by_col_num(MonteCarloVariableFile *, MonteCarloVariableFile *);
private: // and undefined:
MonteCarloVariableFile(const MonteCarloVariableFile&);
MonteCarloVariableFile& operator = ( const MonteCarloVariableFile&);
};
#endif
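The file-lookup behavior described in the comments (a fixed column per variable, a row that advances each run, and an optional random skip of up to max_skip lines) can be sketched as follows; this is illustrative, not the Trick implementation:

```python
import random

# Sketch of MonteCarloVariableFile's lookup: pick one value out of a
# whitespace-delimited data file by column, after skipping a random number
# of lines in [0, max_skip]. max_skip=0 means the file is read sequentially.
def next_file_value(lines, column_number, first_column_number=1,
                    max_skip=0, rng=None):
    rng = rng or random.Random(0)
    skip = rng.randint(0, max_skip)          # how many lines to jump past
    fields = lines[skip].split()             # consistent row-to-row layout
    return fields[column_number - first_column_number]
```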


@ -0,0 +1,70 @@
/*******************************TRICK HEADER******************************
PURPOSE: ( Implementation of a class template to support assignment of
a fixed value to a variable based on its type.)
LIMITATION: (Because these types are typically instantiated
from the Python input processor via SWIG, use of
templates is problematic. Consequently, it would
require a whole different setup to handle integers
differently than floats. Instead, integers and floats
will both be treated using the "double" data type.
Values that cannot be assigned to a double (like bool or strings)
cannot be represented by an instance of this class.)
PROGRAMMERS:
(((Gary Turner) (OSR) (October 2019) (Antares) (Initial)))
(((Isaac Reaves) (NASA) (November 2022) (Integration into Trick Core)))
**********************************************************************/
#ifndef CML_MONTE_CARLO_VARIABLE_FIXED_HH
#define CML_MONTE_CARLO_VARIABLE_FIXED_HH
#include "mc_variable.hh"
#include <sstream> // ostringstream
class MonteCarloVariableFixed : public MonteCarloVariable
{
protected:
int var_type; /* (--) 0: double
                      1: integer
                      2: string */
public:
MonteCarloVariableFixed(const std::string & var_name,
double assignment_)
:
MonteCarloVariable( var_name),
var_type(0)
{
type = MonteCarloVariable::Constant;
assign_double(assignment_);
}
MonteCarloVariableFixed(const std::string & var_name,
int assignment_)
:
MonteCarloVariable( var_name),
var_type(1)
{
type = MonteCarloVariable::Constant;
assign_int(assignment_);
}
MonteCarloVariableFixed(const std::string & var_name,
const std::string & assignment_)
:
MonteCarloVariable( var_name),
var_type(2)
{
include_in_summary = false;
assignment = assignment_;
type = MonteCarloVariable::Constant;
generate_command();
}
virtual ~MonteCarloVariableFixed(){};
void generate_assignment(){}; // to make this class instantiable
private: // and undefined:
MonteCarloVariableFixed( const MonteCarloVariableFixed & );
MonteCarloVariableFixed& operator = (const MonteCarloVariableFixed&);
};
#endif


@ -0,0 +1,48 @@
/*******************************TRICK HEADER******************************
PURPOSE: ( Implementation of a class to support assignment of
a random value to a variable based on its type.)
PROGRAMMERS:
(((Gary Turner) (OSR) (October 2019) (Antares) (Initial)))
(((Isaac Reaves) (NASA) (November 2022) (Integration into Trick Core)))
**********************************************************************/
#ifndef CML_MONTE_CARLO_VARIABLE_RANDOM_HH
#define CML_MONTE_CARLO_VARIABLE_RANDOM_HH
#include "mc_variable.hh"
#include <random>
#include <string>
/*****************************************************************************
MonteCarloVariableRandom
Purpose:(An intermediate interface class that supports generation of random
variables of multiple types. Currently, double, integer, bool, and
string assignments are supported; others may be added later.)
NOTE - deliberately not using templates here because of the difficulty of having
SWIG use templates; Monte-Carlo variables are typically created and
populated via the input processor, so the SWIG interface is critical and the
use of templates is a non-starter.
*****************************************************************************/
class MonteCarloVariableRandom : public MonteCarloVariable
{
protected:
std::mt19937 random_generator; /* (--)
the basic random-generator used by all different types of random number
generators in the <random> library. */
unsigned int seed_m; /* (--)
the value used to seed the generator.*/
public:
MonteCarloVariableRandom(const std::string & var_name, unsigned int seed = 0)
:
MonteCarloVariable( var_name),
random_generator(seed),
seed_m(seed)
{
type = MonteCarloVariable::Random;
}
virtual ~MonteCarloVariableRandom(){};
unsigned int get_seed() const {return seed_m;} // override but SWIG cannot process the
// override keyword
private: // and undefined:
MonteCarloVariableRandom( const MonteCarloVariableRandom & );
MonteCarloVariableRandom& operator = (const MonteCarloVariableRandom&);
};
#endif


@ -0,0 +1,42 @@
/*******************************TRICK HEADER******************************
PURPOSE: ( Uses the RandomString generator to generate either a "True" or
"False" string for assignment through the SWIG interface.
Note that SWIG uses the Python uppercase True/False rather than
C++ lowercase true/false identifiers.)
PROGRAMMERS:
(((Gary Turner) (OSR) (October 2019) (Antares) (Initial)))
(((Isaac Reaves) (NASA) (November 2022) (Integration into Trick Core)))
**********************************************************************/
#ifndef CML_MONTE_CARLO_VARIABLE_RANDOM_BOOL_HH
#define CML_MONTE_CARLO_VARIABLE_RANDOM_BOOL_HH
#include "mc_variable_random_string.hh"
/*****************************************************************************
MonteCarloVariableRandomBool
Purpose:(Generates either a True or False string for assignment)
*****************************************************************************/
class MonteCarloVariableRandomBool : public MonteCarloVariableRandomStringSet
{
public:
MonteCarloVariableRandomBool( const std::string & var_name,
unsigned int seed)
:
MonteCarloVariableRandomStringSet( var_name, seed)
{
add_string("False");
add_string("True");
include_in_summary = true; // String variables are excluded by default
// because they may contain commas, which would
// cause trouble in the comma-delimited summary
// file. However, this is a special case in which
// the possible strings are "True" and "False".
}
virtual ~MonteCarloVariableRandomBool(){};
private: // and undefined:
MonteCarloVariableRandomBool(const MonteCarloVariableRandomBool&);
MonteCarloVariableRandomBool& operator = (
const MonteCarloVariableRandomBool&);
};
#endif
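The class above reduces to a seeded, uniform pick between the two Python-style boolean strings. A small stand-in sketch (names here are illustrative, not the Trick API):

```python
import random

# Sketch of MonteCarloVariableRandomBool: a seeded uniform choice between
# the two strings registered via add_string("False") and add_string("True").
def random_bool_string(seed):
    return random.Random(seed).choice(["False", "True"])
```

A given seed always reproduces the same draw, which is what makes the generated run set repeatable.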


@ -0,0 +1,57 @@
/*******************************TRICK HEADER******************************
PURPOSE: ( Implementation of a class to support generation and assignment
of a random value distributed normally.)
LIBRARY DEPENDENCY:
(../src/mc_variable_random_normal.cc)
PROGRAMMERS:
(((Gary Turner) (OSR) (October 2019) (Antares) (Initial)))
(((Isaac Reaves) (NASA) (November 2022) (Integration into Trick Core)))
**********************************************************************/
#ifndef CML_MONTE_CARLO_VARIABLE_RANDOM_NORMAL_HH
#define CML_MONTE_CARLO_VARIABLE_RANDOM_NORMAL_HH
#include <random>
#include "mc_variable_random.hh"
class MonteCarloVariableRandomNormal : public MonteCarloVariableRandom
{
public:
enum TruncationType
{
StandardDeviation,
Relative,
Absolute
};
size_t max_num_tries;
protected:
#ifndef SWIG
std::normal_distribution<double> distribution;
#endif
double min_value;
double max_value;
bool truncated_low;
bool truncated_high;
public:
MonteCarloVariableRandomNormal( const std::string & var_name,
unsigned int seed = 0,
double mean = 0,
double stdev = 1);
virtual ~MonteCarloVariableRandomNormal(){};
virtual void generate_assignment();
void truncate(double limit, TruncationType type = StandardDeviation);
void truncate(double min, double max, TruncationType type = StandardDeviation);
void truncate_low(double limit, TruncationType type = StandardDeviation);
void truncate_high(double limit, TruncationType type = StandardDeviation);
void untruncate();
private: // and undefined:
MonteCarloVariableRandomNormal(const MonteCarloVariableRandomNormal&);
MonteCarloVariableRandomNormal& operator = (
const MonteCarloVariableRandomNormal&);
};
#endif
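The truncate()/max_num_tries interface suggests a redraw-until-in-bounds strategy. A hedged sketch of that idea (the Trick implementation may differ in how it maps TruncationType to bounds):

```python
import random

# Sketch of a truncated normal draw: sample N(mean, stdev) repeatedly until
# the value lands in [min_value, max_value], giving up after max_num_tries.
def truncated_normal(mean, stdev, min_value, max_value,
                     seed=0, max_num_tries=10000):
    rng = random.Random(seed)
    for _ in range(max_num_tries):
        value = rng.gauss(mean, stdev)
        if min_value <= value <= max_value:
            return value
    raise RuntimeError("exceeded max_num_tries without an in-bounds draw")
```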


@ -0,0 +1,52 @@
/*******************************TRICK HEADER******************************
PURPOSE: ( Implementation of a class to randomly pick one of a set of
character strings. These strings could represent actual string
variables, or enumerated types, or commands, or any other concept
that can be expressed as an assignment.)
ASSUMPTIONS: (The content of the selected string will be assigned as written.
Consequently, if the value is being assigned to an actual string
or char variable, the contents of the string should be enclosed
in quotes.
E.g. the string might look like: "'actual_string'" so that the
assignment would look like
variable = 'actual_string'
This is to support the use of a string to represent a non-string
variable such as an enumeration or command:
E.g. the string might look like "x * 3 + 2" to achieve:
variable = x * 3 + 2
)
LIBRARY DEPENDENCY:
(../src/mc_variable_random_string.cc)
PROGRAMMERS:
(((Gary Turner) (OSR) (October 2019) (Antares) (Initial)))
(((Isaac Reaves) (NASA) (November 2022) (Integration into Trick Core)))
**********************************************************************/
#ifndef CML_MONTE_CARLO_VARIABLE_RANDOM_STRING_HH
#define CML_MONTE_CARLO_VARIABLE_RANDOM_STRING_HH
#include "mc_variable_random_uniform.hh"
#include <random>
#include <string>
#include <vector>
class MonteCarloVariableRandomStringSet : public MonteCarloVariableRandomUniform
{
public:
std::vector< std::string> values;
MonteCarloVariableRandomStringSet( const std::string & var_name,
unsigned int seed);
virtual ~MonteCarloVariableRandomStringSet(){};
virtual void generate_assignment();
void add_string(std::string);
private: // and undefined:
MonteCarloVariableRandomStringSet(const MonteCarloVariableRandomStringSet&);
MonteCarloVariableRandomStringSet& operator = (
const MonteCarloVariableRandomStringSet&);
};
#endif



@ -0,0 +1,67 @@
/*******************************TRICK HEADER******************************
PURPOSE: ( Implementation of a class to support generation and assignment
of a random value distributed uniformly.
Provides float and integer distributions)
LIBRARY DEPENDENCY:
(../src/mc_variable_random_uniform.cc)
PROGRAMMERS:
(((Gary Turner) (OSR) (October 2019) (Antares) (Initial)))
(((Isaac Reaves) (NASA) (November 2022) (Integration into Trick Core)))
**********************************************************************/
#ifndef CML_MONTE_CARLO_VARIABLE_RANDOM_UNIFORM_HH
#define CML_MONTE_CARLO_VARIABLE_RANDOM_UNIFORM_HH
#include "mc_variable_random.hh"
#include <random>
/*****************************************************************************
MonteCarloVariableRandomUniform
Purpose:()
*****************************************************************************/
class MonteCarloVariableRandomUniform : public MonteCarloVariableRandom
{
protected:
#ifndef SWIG
std::uniform_real_distribution<double> distribution;
#endif
public:
MonteCarloVariableRandomUniform( const std::string & var_name,
unsigned int seed = 0,
double lower_bound = 0.0,
double upper_bound = 1.0);
virtual ~MonteCarloVariableRandomUniform(){};
virtual void generate_assignment();
private: // and undefined:
MonteCarloVariableRandomUniform( const MonteCarloVariableRandomUniform & );
MonteCarloVariableRandomUniform& operator = (
const MonteCarloVariableRandomUniform&);
};
/*****************************************************************************
MonteCarloVariableRandomUniformInt
Purpose:()
*****************************************************************************/
class MonteCarloVariableRandomUniformInt : public MonteCarloVariableRandom
{
protected:
#ifndef SWIG
std::uniform_int_distribution<int> distribution;
#endif
public:
MonteCarloVariableRandomUniformInt( const std::string & var_name,
unsigned int seed = 0,
double lower_bound = 0,
double upper_bound = 1);
virtual ~MonteCarloVariableRandomUniformInt(){};
virtual void generate_assignment();
private: // and undefined:
MonteCarloVariableRandomUniformInt(const MonteCarloVariableRandomUniformInt&);
MonteCarloVariableRandomUniformInt& operator = (
const MonteCarloVariableRandomUniformInt&);
};
#endif
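The two classes above wrap std::uniform_real_distribution and std::uniform_int_distribution. Python stand-ins for what each draw looks like (illustrative names, not the Trick API):

```python
import random

# Sketch of the real-valued draw in [lower_bound, upper_bound).
def uniform_real(seed, lower_bound=0.0, upper_bound=1.0):
    return random.Random(seed).uniform(lower_bound, upper_bound)

# Sketch of the integer draw; inclusive on both ends, matching
# std::uniform_int_distribution.
def uniform_int(seed, lower_bound=0, upper_bound=1):
    return random.Random(seed).randint(lower_bound, upper_bound)
```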


@ -0,0 +1,81 @@
/*******************************TRICK HEADER******************************
PURPOSE: (
Implementation of a semi-fixed monte-carlo variable.
The value of a MonteCarloVariableFixed instance is assigned at
construction time and held at that value for all runs.
The value of a MonteCarloVariableSemiFixed instance is assigned
from another MonteCarloVariable generated value for the first
input file generated, and held at that value for all runs. So the
assignment to a Semi-fixed can change each time the input files are
generated, but it is the same for all input files in any given
generation.)
PROGRAMMERS:
(((Gary Turner) (OSR) (October 2019) (Antares) (Initial)))
(((Isaac Reaves) (NASA) (November 2022) (Integration into Trick Core)))
**********************************************************************/
#ifndef CML_MONTE_CARLO_VARIABLE_SEMI_FIXED_HH
#define CML_MONTE_CARLO_VARIABLE_SEMI_FIXED_HH
#include "mc_variable_random.hh"
#include "trick/message_proto.h"
// TODO Turner 2019/11
// The reference to a MonteCarloVariable might be difficult to
// obtain because these are typically created on-the-fly and
// the handle to the created instance is lost. Might be
// nice to provide the seed-variable-name instead of the reference
// to the seed-variable itself, and have the MonteCarloMaster find
// the MonteCarloVariable associated with that name.
// But that's non-trivial and not necessarily desirable, so it is
// left unimplemented.
class MonteCarloVariableSemiFixed : public MonteCarloVariable
{
protected:
const MonteCarloVariable & seed_variable; /* (--)
A reference to another MonteCarloVariable; the value of *this*
variable is taken from the value of this seed-variable during
preparation of the first monte-input file. */
bool command_generated; /* (--)
flag indicating the fixed command has been generated.*/
public:
MonteCarloVariableSemiFixed(const std::string & var_name,
const MonteCarloVariable & seed_)
:
MonteCarloVariable( var_name),
seed_variable(seed_),
command_generated(false)
{
type = MonteCarloVariable::Constant;
}
virtual ~MonteCarloVariableSemiFixed(){};
void generate_assignment() {
if (!command_generated) {
// parse the command from seed_variable to get the assignment.
std::string seed_command = seed_variable.get_command();
size_t pos_equ = seed_command.find("=");
if (pos_equ == std::string::npos) {
std::string message =
std::string("File: ") + __FILE__ +
", Line: " + std::to_string(__LINE__) + std::string(", Invalid "
"sequencing\nFor variable ") + variable_name.c_str() +
std::string(" the necessary pre-dispersion to obtain the\n "
"random value for assignment has not been completed.\nCannot "
"generate the assignment for this variable.\n");
message_publish(MSG_ERROR, message.c_str());
return;
}
assignment = seed_command.substr(pos_equ+1);
generate_command();
insert_units();
command_generated = true;
}
}
private: // and undefined:
MonteCarloVariableSemiFixed( const MonteCarloVariableSemiFixed & );
MonteCarloVariableSemiFixed& operator = (const MonteCarloVariableSemiFixed&);
};
#endif
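The extraction step in generate_assignment() above (reusing everything after the '=' of the seed variable's generated command) can be sketched on its own:

```python
# Sketch of the semi-fixed extraction: take the seed variable's generated
# command ("name = value") and reuse everything after '=' as this
# variable's fixed assignment. Illustrative stand-in, not the Trick source.
def extract_assignment(seed_command):
    pos_equ = seed_command.find("=")
    if pos_equ == -1:
        return None  # seed not yet dispersed; the real class publishes an error
    return seed_command[pos_equ + 1:]
```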


@ -0,0 +1,73 @@
/*******************************TRICK HEADER******************************
PURPOSE: Provides a one-stop shop for all MonteCarlo operations that Trick
cannot support, including:
- assignment of values to non-floats
- assignment of variables to other dispersed variables
- computation of variables as a function of one or more dispersed
variables.
PROGRAMMERS:
(((Gary Turner) (OSR) (October 2018) (Antares) (initial creation for CML)))
(((Isaac Reaves) (NASA) (November 2022) (Integration into Trick Core)))
***********************************************************************/
#ifndef SIM_OBJECT_MONTE_CARLO_Generation
#define SIM_OBJECT_MONTE_CARLO_Generation
##include "trick/mc_master.hh"
##include "trick/mc_python_code.hh"
##include "trick/mc_variable_random_uniform.hh"
##include "trick/mc_variable_random_normal.hh"
##include "trick/mc_variable_random_string.hh"
##include "trick/mc_variable_random_bool.hh"
##include "trick/mc_variable_file.hh"
##include "trick/mc_variable_fixed.hh"
##include "trick/mc_variable_semi_fixed.hh"
##include "trick/message_type.h"
##include "trick/message_proto.h"
##include "trick/exec_proto.h"
#include "sim_objects/default_trick_sys.sm"
##include "trick/MonteCarlo.hh"
class MonteCarloGeneratorSimObject : public Trick::SimObject
{
protected:
Trick::MonteCarlo * mc;
public:
MonteCarloMaster mc_master;
MonteCarloGeneratorSimObject(std::string location, Trick::MonteCarlo * mc_)
:
mc(mc_),
mc_master(location)
{
// pre-initialization assignments:
P0 ("initialization") generate_dispersions();
}
void generate_dispersions()
{
if (!mc_master.active || !mc_master.generate_dispersions) return;
const std::vector<Trick::MonteVar*> variables = mc->get_variables();
if (!variables.empty()) {
for (auto it :variables) {
std::string message = std::string("File: ") + __FILE__ + ", Line: "
+ std::to_string(__LINE__) + ", Monte Carlo Variable added using wrong "
"method\n" + "The variable " + it->name + " was added "
"improperly. Details:\n" + it->describe_variable();
message_publish(MSG_ERROR, message.c_str());
}
std::string message = std::string("File: ") + __FILE__ + ", Line: " +
std::to_string(__LINE__) + ", MonteCarloGeneratorSimObject is active but "
"variable(s) have been added using the older MonteCarloSimObject method. "
"Only one method can be used per sim. You must replace all trick_mc "
"based variables with monte_carlo sim object equivalents. See the trick "
"documentation for support.\n";
message_publish(MSG_ERROR, message.c_str());
exec_terminate_with_return(1, __FILE__, __LINE__, message.c_str());
}
mc_master.execute();
}
private:
// copy constructor, operator= blocked:
MonteCarloGeneratorSimObject (const MonteCarloGeneratorSimObject&);
MonteCarloGeneratorSimObject & operator =
( const MonteCarloGeneratorSimObject&);
};
MonteCarloGeneratorSimObject monte_carlo("monte_carlo.mc_master", &trick_mc.mc);
#endif


@ -175,7 +175,7 @@ class MonteCarloSimObject : public Trick::SimObject {
 exec_register_scheduler(&mc) ;
 {TRK} P0 ("default_data") mc.process_sim_args() ;
-{TRK} P0 ("initialization") mc.execute_monte() ;
+{TRK} P1 ("initialization") mc.execute_monte() ;
 {TRK} ("shutdown") mc.shutdown() ;
 }
 }


@ -0,0 +1,94 @@
# This file represents the required types, ranges, and values
# as applicable expected for keys found while parsing TrickWorkflow
# Yaml files. This is used by TrickWorkflowYamlVerifier to verify
# user-supplied content. The information is used in the following
# way:
# content:
# required: 0|1 <-- if 1, content must not be empty (None)
# default: <-- if required:0 and content not given, use this value
# overridable: 0|1 <-- if 0, check all given keys below
# type: <-- isinstance(content, type)
# contains: <-- contains in content
# min: <-- content > min
# max: <-- content < max
# This file should not be modified except in the case of TrickOps
# development.
# Default values of content if not specified in structures below
# Keys with empty values end up as None. Structures below override
# these values.
content:
required: 0
default:
overridable: 0
type:
contains:
min:
max:
# Expectations for everything under a top-level global key:
globals:
env:
default: ''
type: str
# Expectations for everything under a top-level SIM key:
sim:
name: # Comes from SIM key itself
required: 1
type: str
path:
required: 1
type: str
description:
type: str
labels:
type: list
build_args:
type: str
binary:
default: S_main_{cpu}.exe
type: str
size:
default: 2200000
type: int
min: 1
phase:
default: 0
type: int
min: -1000
max: 1000
parallel_safety:
default: loose
type: str
contains:
- strict
- loose
runs:
type: dict # Subsection validation articulated below
# Expectations for every key under a sim's run: subsection
run:
input: # Comes from run key itself
required: 1
type: str
returns:
type: int
default: 0
min: 0
max: 255
compare:
overridable: 1
type: list
analyze:
overridable: 1
type: str
phase:
default: 0
type: int
min: -1000
max: 1000
valgrind:
type: str


@ -124,9 +124,11 @@ f.close()
 from TrickWorkflow import *
 class ExampleWorkflow(TrickWorkflow):
     def __init__( self, quiet, trick_top_level='/tmp/trick'):
-        # Real projects already have trick somewhere, but for this test, just clone it
+        # Real projects already have trick somewhere, but for this example, just clone & build it
         if not os.path.exists(trick_top_level):
             os.system('cd %s && git clone https://github.com/nasa/trick' % (os.path.dirname(trick_top_level)))
+        if not os.path.exists(os.path.join(trick_top_level, 'lib64/libtrick.a')):
+            os.system('cd %s && ./configure && make' % (trick_top_level))
         # Base Class initialize, this creates internal management structures
         TrickWorkflow.__init__(self, project_top_level=trick_top_level, log_dir='/tmp/',
                                trick_dir=trick_top_level, config_file="/tmp/config.yml", cpus=3, quiet=quiet)


@ -0,0 +1,305 @@
"""
Module to be used in conjunction with the MonteCarloGenerate (MCG) sim module
provided by Trick and optionally TrickOps. This module allows MCG users to
easily generate monte-carlo runs and execute them locally or through an HPC job
scheduler like SLURM. Below is an example usage of the module assuming:
1. The using script inherits from TrickWorkflow, giving access to execute_jobs()
2. SIM_A/RUN_mc/input.py is configured with MonteCarloGenerate.sm sim module
to generate runs when executed
# Instantiate an MCG helper instance, providing the sim and input file for generation
mgh = MonteCarloGenerationHelper(sim_path="path/to/SIM_A", input_path="RUN_mc/input.py")
# Get the generation SingleRun() instance
gj = mgh.get_generation_job()
# Execute the generation Job to generate RUNS
ret = self.execute_jobs([gj])
if ret == 0: # Successful generation
# Get a SLURM sbatch array job for all generated runs
sbj = mgh.get_sbatch_job(monte_dir="path/to/MONTE_RUN_mc")
# Execute the sbatch job, which queues all runs in SLURM for execution
# Use hpc_passthrough_args ='--wait' to block until all runs complete
ret = self.execute_jobs([sbj])
# Instead of using SLURM, generated runs can be executed locally through
# TrickOps calls. First get a list of run jobs
run_jobs = mgh.get_generated_run_jobs(monte_dir="path/to/MONTE_RUN_mc")
# Execute all generated SingleRun instances, up to 10 at once
ret = self.execute_jobs(run_jobs, max_concurrent=10)
"""
import sys, os
import send_hs
import argparse, glob
import subprocess, errno
import pprint, logging
import WorkflowCommon, TrickWorkflow
# This global is the result of hours of frustration and debugging. See comment at the top of
# TrickWorkflow.py for details
this_trick = os.path.normpath(os.path.join(os.path.dirname(os.path.realpath(__file__)), '../../..'))
class MonteCarloGenerationHelper():
"""
Helper class for generating runs via Trick's built-in MonteCarloGenerate sim module.
"""
def __init__(self, sim_path, input_path, S_main_name='S_main*.exe', env=None, sim_args_gen=None):
"""
Initialize this instance.
>>> mgh = MonteCarloGenerationHelper(sim_path=os.path.join(this_trick, "test/SIM_mc_generation"), input_path="RUN_nominal/input_a.py")
Parameters
----------
sim_path : str
Path to the directory containing the built simulation
input_path : str
Path to input file used for monte-carlo generation, relative to sim_path
S_main_name : str
Literal name of the S_main binary. Defaults to the (lazy) glob S_main*.exe
env : str
Optional literal string to execute before everything, typically for sourcing
a project environment file.
sim_args_gen : str
Optional literal string to pass to the simulation after the input_path during
generation.
"""
self.sim_path = os.path.abspath(sim_path)
self.input_path = input_path
self.S_main_name = S_main_name
self.sim_args_gen = '' if not sim_args_gen else sim_args_gen
self.env = '' if not env else env
self.generated_input_files = [] # Will be list of successfully generated runs
# Sanity check inputs
if not isinstance(self.sim_args_gen, str):
msg = (f"ERROR: Given sim_args_gen ({sim_args_gen}) has invalid type {type(self.sim_args_gen)}."
f" It must be a string.")
raise TypeError(msg)
if not isinstance(self.env, str):
msg = (f"ERROR: Given env ({env}) has invalid type {type(self.env)}."
f" It must be a string.")
raise TypeError(msg)
if not os.path.isfile(os.path.join(self.sim_path, self.input_path)):
msg = (f"ERROR: input_path {os.path.join(self.sim_path, self.input_path)} "
f"doesn't exist.")
raise RuntimeError(msg)
# Construct job used to generate runs
cmd = "%s cd %s && " % (self.env, self.sim_path)
cmd += "./%s " % (self.S_main_name)
cmd += "%s " % self.input_path
if self.sim_args_gen:
cmd += "%s" % self.sim_args_gen
self.generation_job = TrickWorkflow.SingleRun(
name=f'Monte Carlo Generation for: {self.sim_path}/{self.input_path}',
command=cmd, log_file=os.path.join(self.sim_path, os.path.dirname(self.input_path),
'MCG_generation_output.txt'),
expected_exit_status=0, use_var_server=False)
def get_generation_job(self):
"""
Return the SingleRun (Job) instance corresponding to an MCG generation execution
as configured inside this instance. This SingleRun can be executed via
WorkflowCommon.execute_jobs() to generate runs.
>>> mgh = MonteCarloGenerationHelper(sim_path=os.path.join(this_trick, "test/SIM_mc_generation"), input_path="RUN_nominal/input_a.py")
>>> job = mgh.get_generation_job()
"""
return self.generation_job
def get_generated_input_files(self, monte_dir):
"""
Return a sorted list of absolute paths to the generated input files.
Parameters
----------
monte_dir : str
Path to monte_dir in which input_files were generated. This is a required input
since this class has no way to know where the user configured the sim to locate
the generated runs.
Returns
----------
list of strings (absolute paths)
Sorted list of absolute paths to input files found under monte_dir
Raises
------
RuntimeError
If an error in finding input files occurs.
"""
if not os.path.isdir(monte_dir):
msg = (f"Given monte_dir {monte_dir} doesn't exist! Cannot get run list.")
raise RuntimeError(msg)
just_input_file = ("monte_"+os.path.basename(self.input_path))
monte_dir_path_full = os.path.abspath(monte_dir)
run_list = [x for x in os.listdir(monte_dir_path_full) if \
(x.startswith("RUN_") and os.path.isdir(os.path.join(monte_dir_path_full,x)))]
if len(run_list) == 0:
msg = (f"Error: {monte_dir} doesn't have any runs!")
raise RuntimeError(msg)
self.generated_input_files = [os.path.join(monte_dir, x, just_input_file) for x in run_list
if os.path.isfile(os.path.join( monte_dir_path_full, x, just_input_file ))]
if len(self.generated_input_files) == 0:
msg = (f"Error: {monte_dir}'s RUN directories don't have any input files of expected "
f"name: {just_input_file}. Make sure input_path is correct and that MCG is configured "
"appropriately.")
raise RuntimeError(msg)
self.generated_input_files.sort()
# error checking regarding missing files or not enough runs
if len(run_list) != len(self.generated_input_files):
msg = ("WARNING in get_generated_input_files(): There's a mismatch between the number "
f"of MONTE*/RUN* directories and the number of input files ({just_input_file}) "
"in those directories. Returning only the found subset.")
print(msg)
return(list(self.generated_input_files))
def get_zero_padding(self, monte_dir=None):
"""
Returns an integer representing the highest zero-padding contained in
monte_dir, or in self.generated_input_files list if monte_dir is None
Parameters
----------
monte_dir : str
Path to monte_dir in which input_files were generated
Returns
----------
int
Integer representing zero-padding length. Ready for use in printf-style
"%0<int>d" use-cases.
Raises
------
RuntimeError
If zero-padding information cannot be determined
"""
gif = []
if monte_dir:
gif = self.get_generated_input_files(monte_dir)
else:
gif = self.generated_input_files
if not gif:
msg = ("Cannot find zero-padding information because self.generated_input_files "
"is empty! Have you run get_generated_input_files()?")
raise RuntimeError(msg)
# This exception may be unreachable, but for extra safety...
try:
padding = len(os.path.dirname(gif[-1]).split('_')[-1])
except Exception as e:
msg = ("Encountered unhandled exception attempting to determine zero-padding "
f"information. Error:\n{e}")
raise RuntimeError(msg)
return padding
def get_sbatch_job(self, monte_dir, sim_args=None, hpc_passthrough_args=None):
"""
Return a Job() whose command is an sbatch array job for running generated runs
found in monte_dir, with the given hpc_passthrough_args if specified. This single
Job() when executed will submit the entire set of monte-carlo runs in monte_dir
to a SLURM job scheduling system as an array job. This function simply creates
the Job(), it does not execute it.
Parameters
----------
monte_dir : str
Path to monte_dir in which input_files were generated. This is a required input
since this class has no way to know where the user configured the sim to locate
the generated runs.
hpc_passthrough_args : str
Literal string of args to be passed through to the HPC scheduling system SLURM
sim_args : str
Literal string of args to be passed through to the simulation binary
Returns
----------
Job()
Job instance with sbatch command configured for array submission.
"""
generated_runs = self.get_generated_input_files(monte_dir)
zero_padding = self.get_zero_padding()
just_input_file = os.path.basename(generated_runs[0])
sbatch_cmd = self.env + " sbatch "
# Build the default --array option, this is overridden later if the user gave it
# Homogeneous array assumption, last zero-padded num in last run in array
array = ("--array 0-%s " % os.path.dirname(generated_runs[-1]).split('_')[-1])
# A couple sbatch options are special: if the user has specified --array, use
# theirs; if not, generate the --array field for the mc_num_runs given. If --wait
# is given, we won't be able to post-process, so store that information off. The
# easiest way to get this passthrough information is to use argparse to parse the
# hpc_passthrough_args list, leaving what's left in the rest list to be passed through
sbatch_parser = argparse.ArgumentParser()
sbatch_parser.add_argument('-a', '--array')
sbatch_parser.add_argument('-W', '--wait', action="store_true")
if hpc_passthrough_args:
subargs, rest = sbatch_parser.parse_known_args(hpc_passthrough_args.split())
if subargs.wait: # Since --wait not in rest, re-add it to passthrough args
rest.append('--wait')
if subargs.array: # If custom array given by user
# Custom array definition
array = "--array %s " % subargs.array
sbatch_cmd += ' '.join(rest) + ' '
sbatch_cmd += array
sbatch_cmd += "--wrap 'RUN_NUM=`printf %0" + str(zero_padding) + 'd $SLURM_ARRAY_TASK_ID`; '
sbatch_cmd += "%s cd %s && " % (self.env, self.sim_path)
sbatch_cmd += "./%s %s/RUN_${RUN_NUM}/%s" % (self.S_main_name, monte_dir, just_input_file)
if sim_args:
sbatch_cmd += " " + sim_args + " "
sbatch_cmd += (" 2> %s/RUN_${RUN_NUM}/stderr 1> %s/RUN_${RUN_NUM}/stdout '" %
(monte_dir, monte_dir))
job = WorkflowCommon.Job(name=(f'Running sbatch array job for '
f'{os.path.basename(self.sim_path)}'
f' {os.path.basename(monte_dir)}'),
command=sbatch_cmd, log_file=f'{os.path.join(monte_dir)}/sbatch_out.txt')
return (job)
#TODO Implement a Portable Batch System (PBS) job getter, and support for other HPC
# schedulers go here
def get_generated_run_jobs(self, monte_dir, sim_args=None):
"""
Return a list of SingleRun() instances, configured for each of the RUNs in
monte_dir. Each run's output goes to the generated RUN location in a file
called stdouterr, containing both stdout and stderr
Parameters
----------
monte_dir : str
Path to monte_dir in which input_files were generated. This must either be a
path relative to the sim dir or an absolute path.
sim_args : str
Literal string of args to be passed through to the sim binary
"""
jobs = []
generated_runs = self.get_generated_input_files(monte_dir)
for run in generated_runs:
cmd = "%s cd %s && " % (self.env, self.sim_path)
cmd += "./%s %s" % (self.S_main_name, run)
if sim_args:
cmd += " " + sim_args + " "
jobs.append( TrickWorkflow.SingleRun(
name=f'Executing generated run {os.path.basename(os.path.dirname(run))}',
command=cmd, log_file=os.path.join(os.path.dirname(run),
'stdouterr'), expected_exit_status=0)
)
return jobs
def is_sim_built(self):
"""
Return True if self.S_main_name exists in self.sim_path, False otherwise
"""
return bool(glob.glob(os.path.join(self.sim_path, self.S_main_name)))
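The run-discovery and zero-padding logic above can be exercised standalone. The sketch below fakes an MCG-style output tree (the names `RUN_0000/monte_input_a.py` are illustrative, not real generated output) and replicates the filtering done by `get_generated_input_files()` and the padding inference done by `get_zero_padding()`:

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as monte_dir:
    # Fake a generated run set: RUN_0000 .. RUN_0011, each holding the input file
    for i in range(12):
        run = os.path.join(monte_dir, 'RUN_%04d' % i)
        os.makedirs(run)
        open(os.path.join(run, 'monte_input_a.py'), 'w').close()

    # Same filtering the helper performs: RUN_* dirs that contain the input file
    run_list = sorted(d for d in os.listdir(monte_dir)
                      if d.startswith('RUN_')
                      and os.path.isdir(os.path.join(monte_dir, d)))
    input_files = [os.path.join(monte_dir, d, 'monte_input_a.py') for d in run_list
                   if os.path.isfile(os.path.join(monte_dir, d, 'monte_input_a.py'))]

    # Zero-padding is inferred from the numeric suffix of the last run directory
    padding = len(os.path.dirname(input_files[-1]).split('_')[-1])

print(len(input_files), padding)  # 12 4
```

The inferred padding is what feeds the `printf %0<N>d $SLURM_ARRAY_TASK_ID` expression in the sbatch wrap command, so array task IDs map back onto the zero-padded RUN directories.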

File diff suppressed because it is too large


@ -0,0 +1,376 @@
import os, sys, copy
import yaml # Provided by PyYAML
#from WorkflowCommon import tprint
from pydoc import locate
from exceptions import RequiredParamException, IncorrectYAMLException
this_dir = os.path.abspath(os.path.join(os.path.dirname(os.path.realpath(__file__))))
# This global is the result of hours of frustration and debugging. See comment at the top of
# TrickWorkflow.py for details
this_trick = os.path.normpath(os.path.join(os.path.dirname(os.path.realpath(__file__)), '../../..'))
class TrickWorkflowYamlVerifier():
"""
This class accepts a project YAML config file which contains a list of all
possible sims, runs, analyses, comparisons, etc and verifies that the content
meets the constraints that the TrickWorkflow expects. Specifically, self.config
is sanitized to ensure that it is a dictionary matching the format below
globals:
env: <-- optional literal string executed before all tests, e.g. env setup
SIM_abc: <-- required unique name for sim of interest, must start with SIM
path: <-- required SIM path relative to project top level
description: <-- optional description for this sim
labels: <-- optional list of labels for this sim, can be used to get sims
- model_x by label within the framework, or for any other project-defined
- verification purpose
build_args: <-- optional literal args passed to trick-CP during sim build
binary: <-- optional name of sim binary, defaults to S_main_{cpu}.exe
size: <-- optional estimated size of successful build output file in bytes
phase: <-- optional phase to be used for ordering builds if needed
parallel_safety: <-- <loose|strict> strict won't allow multiple input files per RUN dir
runs: <-- optional dict of runs to be executed for this sim, where the
RUN_1/input.py --foo: dict keys are the literal arguments passed to the sim binary
RUN_2/input.py: and the dict values are other run-specific optional dictionaries
RUN_[10-20]/input.py: described in indented sections below. Zero-padded integer ranges
can specify a set of runs with continuous numbering using
[<starting integer>-<ending integer>] notation
returns: <int> <---- optional exit code of this run upon completion (0-255). Defaults
to 0
compare: <---- optional list of <path> vs. <path> comparison strings to be
- a vs. b compared after this run is complete. This is extensible in that
- d vs. e all non-list values are ignored and assumed to be used to define
- ... an alternate comparison method in a class extending this one
analyze: <-- optional arbitrary string to execute as job in bash shell from
project top level, for project-specific post-run analysis
phase: <-- optional phase to be used for ordering runs if needed
valgrind: <-- optional string of flags passed to valgrind for this run.
If missing or empty, this run will not use valgrind
non_sim_extension_example:
will: be ignored by TrickWorkflow parsing for derived classes to implement as they wish
Any key matching the above naming schema is checked for type and value validity
Any key not matching the above naming schema is ignored purposefully to provide
users of this framework the ability to use the YAML file for project-specific use-cases
"""
def __init__(self, config_file, quiet=False):
"""
Initialize this instance.
>>> twyv = TrickWorkflowYamlVerifier(config_file=os.path.join(this_trick,"share/trick/trickops/tests/trick_sims.yml"))
>>> twyv.parsing_errors
[]
>>> len(twyv.config.keys())
57
"""
self.config_file=config_file
self.parsing_errors = [] # List of errors encountered during parsing
self.config = self._read_config(self.config_file) # Contains resulting Dict parsed by yaml.load
self.original_config= copy.deepcopy(self.config) # Keep a copy of original dict as parsed
# Contains requirements for the format of key/value pairs inside self.config_file
self.yaml_requirements = self._read_config(os.path.join(this_dir,".yaml_requirements.yml"))
defaults = self.yaml_requirements.pop('content') # Only used to populate default values, can remove
self._populate_yaml_requirements(self.yaml_requirements, defaults)
def _populate_yaml_requirements(self, thedict, defaults):
"""
Recursively traverse thedict, and overlay thedict values on top
of defaults dict values. This ensures all non-dict values exist
as defined in the content: section of .yaml_requirements.yml without
having to specify them individually in that file
"""
for k,v in thedict.items():
if isinstance(v, dict):
subdicts = False
for j,u in v.items():
if isinstance(u, dict):
subdicts = True
if subdicts: # If there are subdicts, recurse
self._populate_yaml_requirements(v, defaults)
else: # No subdicts, overlay values over defaults
thedict[k] = TrickWorkflowYamlVerifier.DictVerifier().merge(
defaults,thedict[k])
def _read_config(self, config_file):
"""
Read the yaml file into a dict and return it
Parameters
----------
config_file : str
path to YAML config file to be read
Returns
-------
dict
dictionary representation of YAML content as parsed by yaml.safe_load()
Raises
------
RuntimeError
If config_file cannot be read
"""
try:
with open(config_file) as file:
y = yaml.safe_load(file)
return y
except Exception as e:
msg = ("Unable to parse config file: %s\nERROR: %s\nSee"
" the TrickOps documentation for YAML file format expectations"
% (config_file,e))
raise RuntimeError(msg)
def get_parsing_errors(self):
"""
Return the list of errors encountered while parsing self.original_config
Returns
-------
list
List of error messages encountered during parsing
"""
return self.parsing_errors
def verify(self):
"""
Verify the content of self.config. This is done by recursively checking content
in self.config against self.yaml_requirements using subclasses defined below.
Parsing errors are added as strings to the self.parsing_errors list
"""
if not self.config: # If entire config is empty
msg =("ERROR: Config file %s appears to be empty. Make sure file exists and is"
" valid YAML syntax." % (self.config_file))
self.parsing_errors.append(msg)
raise RuntimeError(msg)
if not isinstance(self.config, dict): # If parser did not produce a dict
msg =("ERROR: Config file %s doesn't match expected dictionary format. "
" See the TrickOps documentation for YAML file format expectations." %
(self.config_file))
self.parsing_errors.append(msg)
raise RuntimeError(msg)
if 'globals' not in self.config:
self.config['globals'] = {'env': ''}
gv = self.GlobalVerifier(
self.config['globals'], self.yaml_requirements['globals'])
gv.verify()
self.config['globals'] = gv.get_verified_globals_dict()
self.parsing_errors += gv.get_parsing_errors()
for sim in list(self.config.keys()):
if not str(sim).startswith('SIM'): # Ignore everything not starting with SIM
continue
else:
sv = self.SimVerifier(sim_name=sim, sim_dict=self.config[sim],
req_dict=self.yaml_requirements)
try:
# Replace the entire dict for this sim with its verified equivalent
sv.verify()
self.config[sim] = sv.get_verified_sim_dict()
# If not all required params given, remove this sim entry
except RequiredParamException as e:
del self.config[sim]
finally:
self.parsing_errors += sv.get_parsing_errors()
# Return a copy of sanitized config
return(dict(self.config))
# These inner classes provide helper methods for merging default expected content
# in dictionaries produced by YAML files with the actual content produced by
# YAML files. The merged content is then verified for correctness according to
# expected types, ranges, and other constraints.
class DictVerifier():
def __init__(self):
self.errors = [] # collects all errors found in the YAML file parsing
def get_parsing_errors(self):
return(self.errors)
def merge(self, default_dict, authoritative_dict):
"""
Merges the content of authoritative_dict into default_dict, using values
from authoritative_dict over values in default_dict
"""
try:
merged ={**default_dict, **authoritative_dict}
except Exception as e:
msg=( "Unable to merge given dictionary with default dictionary."
"\n Given: %s"
"\n Default: %s"
"\nThis is usually caused by incorrect format in the YAML file. "
"See the TrickOps documentation for YAML file format expectations."
% (authoritative_dict, default_dict))
raise IncorrectYAMLException(msg)
return (merged)
def defaults_only(self, thedict):
"""
Given a dict of yaml_requirements format, produce a dict containing
only {key: <key's default value>}
"""
newdict = {}
for k,v in thedict.items():
newdict[k] = thedict[k]['default']
return newdict
# Utility methods for ensuring given content matches expected constraints.
# These return True if check passes, False otherwise. Errors are logged to
# self.errors
def expect_type(self, param, expected_type, where):
if param is None and expected_type is None:
return True
if not isinstance(param, expected_type):
self.errors.append(where +
"value \"%s\" expected to be type:%s, but got type:%s instead. Ignoring." %
( param, expected_type, type(param)))
return False
return True
def expect_less_than_or_eq(self, param, maximum, where):
if (not param <= maximum):
self.errors.append(where +
"value \"%s\" expected to be <= %s. Ignoring." %
( param, maximum))
return False
return True
def expect_more_than_or_eq(self, param, minimum, where):
if (not param >= minimum):
self.errors.append(where +
"value \"%s\" expected to be >= %s. Ignoring." %
( param, minimum))
return False
return True
def expect_contains(self, param, contained_in, where):
if (param not in contained_in):
self.errors.append(where +
"value \"%s\" expected to be in subset %s. Ignoring." %
( param, contained_in))
return False
return True
def verify(self, user_dict, req_dict, identifier):
"""
Verifies that the content of user_dict matches the requirements of req_dict.
The content of req_dict comes from the .yaml_requirements file
The merge with defaults during __init__ guarantees all keys exist.
"""
for k,v in req_dict.items():
before_num_errors = len(self.errors)
if v['required']: # Must be given by user
if not self.expect_type(user_dict[k], locate(v['type']),
where=("In %s, required param \"%s\" " % (identifier, k))):
raise RequiredParamException("%s must exist and be of type %s in %s"
% (k, locate(v['type']), identifier))
elif v['default'] is None and user_dict[k] is None: # Can be given but have None type
# TODO: we may be able to set user_dict[k] = locate(v['type']) here
# which would make runs: and labels: lists, but need to see if that
# has downstream effects. If this works we can remove manual checking
# for None in the derived verifier classes
pass
# Could be another type used by derived class, so don't check anything
elif v['overridable'] == 1:
pass
else: # Check optional params (majority)
# Checking possible values only makes sense if we got the right type
if self.expect_type(user_dict[k], locate(v['type']),
where=("In %s, param \"%s\" " % (identifier, k))):
if v['min'] is not None:
self.expect_more_than_or_eq(int(user_dict[k]), int(v['min']),
where=("In %s, param \"%s\" " % (identifier, k)))
if v['max'] is not None:
self.expect_less_than_or_eq(int(user_dict[k]), int(v['max']),
where=("In %s, param \"%s\" " % (identifier, k)))
if v['contains'] is not None:
self.expect_contains(user_dict[k], v['contains'],
where=("In %s, param \"%s\" " % (identifier, k)))
# If we encountered any errors, set param to default value
if len(self.errors) > before_num_errors:
user_dict[k] = v['default']
class GlobalVerifier(DictVerifier):
def __init__(self, globals_dict, globals_req_dict):
super().__init__()
self.globals_req_dict = globals_req_dict
self.defaults = self.defaults_only(globals_req_dict)
self.globals_dict = self.merge(self.defaults, globals_dict)
def verify(self):
"""
Verifies the content of self.globals_dict. The merge with defaults
during __init__ guarantees all keys exist.
"""
TrickWorkflowYamlVerifier.DictVerifier.verify(self, user_dict=self.globals_dict,
req_dict=self.globals_req_dict, identifier='globals')
def get_verified_globals_dict(self):
return (dict(self.globals_dict)) # Return a copy
class SimVerifier(DictVerifier):
def __init__(self, sim_name, sim_dict, req_dict):
super().__init__()
sim_dict = {} if not isinstance(sim_dict, dict) else sim_dict
self.req_dict = req_dict # TODO every derived class needs this, move it to the base class
self.defaults = self.defaults_only(self.req_dict['sim'])
self.defaults['name'] = sim_name
# TODO what if SIM: dict is empty?
self.sim_dict = self.merge(self.defaults, sim_dict)
def verify(self):
TrickWorkflowYamlVerifier.DictVerifier.verify(self, user_dict=self.sim_dict,
req_dict=self.req_dict['sim'], identifier=self.sim_dict['name'])
if self.sim_dict['labels'] is None:
self.sim_dict['labels'] = [] # If not given, make it an empty list
# Handle the edge case where labels is a list, but a list of not just strings
if any([ not isinstance(label, str) for label in self.sim_dict['labels']]):
self.errors.append("In %s, labels \"%s\" expected to be list of strings. Ignoring." %
( self.sim_dict['name'], self.sim_dict['labels']))
self.sim_dict['labels'] = []
if self.sim_dict['runs'] is None:
self.sim_dict['runs'] = {} # If not given, make it an empty dict
for run in list(self.sim_dict['runs']):
rv = (TrickWorkflowYamlVerifier.RunVerifier( run,
self.sim_dict['runs'][run], self.req_dict))
try:
rv.verify()
self.sim_dict['runs'][run] = rv.get_verified_run_dict()
# If not all required params given, remove this run entry. I believe this
# to be unreachable code, but added anyhow for safety -Jordan 12/2022
except RequiredParamException as e:
del self.sim_dict['runs'][run]
finally:
# Pass errors up to sim level
self.errors += rv.get_parsing_errors()
def get_verified_sim_dict(self):
return (dict(self.sim_dict)) # Return a copy
class RunVerifier(DictVerifier):
def __init__(self, run_name, run_dict, req_dict):
super().__init__()
self.req_dict = req_dict
self.defaults = self.defaults_only(self.req_dict['run'])
self.defaults['input'] = run_name
# If run_dict is None, use defaults, otherwise merge content into defaults
self.run_dict = (self.merge(self.defaults, run_dict) if run_dict else dict(self.defaults))
def verify(self):
"""
Verifies the content of self.run_dict. The merge with defaults
during __init__ guarantees all keys exist.
"""
# Verify the run
TrickWorkflowYamlVerifier.DictVerifier.verify(self, user_dict=self.run_dict,
req_dict=self.req_dict['run'], identifier=self.run_dict['input'])
# Verify the analyze: section of the run, setting to None if invalid
if (self.run_dict['analyze'] is not None and
not self.expect_type(self.run_dict['analyze'], str,
where=("In %s, analyze section " % (self.run_dict['input'])))):
self.run_dict['analyze'] = None
if self.run_dict['compare'] is None:
self.run_dict['compare'] = [] # If not given, make it an empty list
# Verify the compare list of the run, removing any that aren't correct
# The check for list allows all other non-list types in the yaml file,
# allowing groups to define their own comparison methodology
if isinstance(self.run_dict['compare'], list):
for compare in list(self.run_dict['compare']):
if (not self.expect_type(compare, str,
where=("In %s, compare section " % (self.run_dict['input']))) or
not self.expect_contains(" vs. ", compare,
where=("In %s, compare section " % (self.run_dict['input'])))
):
self.run_dict['compare'].remove(compare)
def get_verified_run_dict(self):
return (dict(self.run_dict)) # Return a copy
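The defaults-overlay at the heart of `DictVerifier.merge()` is plain dict unpacking: later entries win, so user-supplied values override defaults pulled from `.yaml_requirements.yml`. A minimal sketch with made-up run parameters (the keys and values here are illustrative, not the actual requirements file content):

```python
# Defaults as .yaml_requirements.yml might supply them (illustrative values)
defaults = {'phase': 0, 'returns': 0, 'valgrind': '', 'analyze': None}
# What a user actually wrote under a run: entry
user_run = {'phase': 2, 'analyze': 'echo post-process'}

# Dict unpacking: later entries take precedence, so user values override
# the defaults while unspecified keys fall back to their default values
merged = {**defaults, **user_run}
print(merged)  # {'phase': 2, 'returns': 0, 'valgrind': '', 'analyze': 'echo post-process'}
```

Because the merge guarantees every expected key exists, the `verify()` methods can index `user_dict[k]` directly without per-key existence checks.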


@ -250,13 +250,15 @@ class Job(object):
         """
         Start this job.
         """
-        logging.debug('Executing command: ' + self._command)
-        self._start_time = time.time()
-        self._log_file = open(self.log_file, 'w')
-        self._process = subprocess.Popen(
-            self._command, stdout=self._log_file, stderr=self._log_file,
-            stdin=open(os.devnull, 'r'), shell=True, preexec_fn=os.setsid,
-            close_fds=True)
+        # Guard against multiple starts
+        if self.get_status() != self.Status.RUNNING:
+            logging.debug('Executing command: ' + self._command)
+            self._start_time = time.time()
+            self._log_file = open(self.log_file, 'w')
+            self._process = subprocess.Popen(
+                self._command, stdout=self._log_file, stderr=self._log_file,
+                stdin=open(os.devnull, 'r'), shell=True, preexec_fn=os.setsid,
+                close_fds=True)

@ -592,8 +594,8 @@ class WorkflowCommon:
         """
         if not os.environ.get('TERM') and not self.quiet:
             tprint(
-                'The TERM environment variable must be set when the command\n'
-                'line option --quiet is not used. This is usually set by one\n'
+                'The TERM environment variable must be set when\n'
+                'TrickWorkflow.quiet is False. This is usually set by one\n'
                 "of the shell's configuration files (.profile, .cshrc, etc).\n"
                 'However, if this was executed via a non-interactive,\n'
                 "non-login shell (for instance: ssh <machine> '<command>'), it\n"


@ -0,0 +1,7 @@
# define Python user-defined exceptions
class RequiredParamException(Exception):
"Raised when a parameter from a YAML file must exist but doesn't"
pass
class IncorrectYAMLException(Exception):
"Raised when a YAML file does not meet expected format"
pass


@ -1,2 +1,2 @@
-PyYAML
+PyYAML # Needed by TrickWorkflowYamlVerifier.py
 psutil


@ -0,0 +1,67 @@
import re, os
import pdb
class send_hs(object):
"""
Reads a file containing the send_hs output and returns a send_hs
object containing the values from that output
"""
def __init__(self, hs_file):
self.hs_file = hs_file
self.actual_init_time = None
self.actual_elapsed_time = None
self.start_time = None
self.stop_time = None
self.elapsed_time = None
self.actual_cpu_time_used = None
self.sim_cpu_time = None
self.init_cpu_time = None
self.parse()
def parse(self):
with open(self.hs_file, 'r') as f:
lines = f.readlines()
for line in lines:
self.actual_init_time = self.attempt_hs_match('ACTUAL INIT TIME',self.actual_init_time, line)
self.actual_elapsed_time = self.attempt_hs_match('ACTUAL ELAPSED TIME',self.actual_elapsed_time, line)
self.start_time = self.attempt_hs_match('SIMULATION START TIME',self.start_time, line)
self.stop_time = self.attempt_hs_match('SIMULATION STOP TIME',self.stop_time, line)
self.elapsed_time = self.attempt_hs_match('SIMULATION ELAPSED TIME',self.elapsed_time, line)
self.actual_cpu_time_used = self.attempt_hs_match('ACTUAL CPU TIME USED',self.actual_cpu_time_used, line)
self.sim_cpu_time = self.attempt_hs_match('SIMULATION / CPU TIME',self.sim_cpu_time, line)
self.init_cpu_time = self.attempt_hs_match('INITIALIZATION CPU TIME',self.init_cpu_time, line)
# TODO add capture of blade and DIAGNOSTIC: Reached termination time as success criteria
def attempt_hs_match(self, name, var, text):
"""
name: pattern to match (e.g. SIMULATION START TIME)
var: variable to assign value if match found
text: text to search for pattern
returns: var if not found, found value if found
"""
m = re.match(name + r': +([-]?[0-9]*\.?[0-9]+)', text.strip())
if m:
return(float(m.group(1)))
return(var)
def get(self,name):
"""
Get a value by the name that appears in the send_hs message
"""
if 'ACTUAL INIT TIME' in name:
return self.actual_init_time
if 'ACTUAL ELAPSED TIME' in name:
return self.actual_elapsed_time
if 'SIMULATION START TIME' in name:
return self.start_time
if 'SIMULATION STOP TIME' in name:
return self.stop_time
if 'SIMULATION ELAPSED TIME' in name:
return self.elapsed_time
if 'ACTUAL CPU TIME USED' in name:
return self.actual_cpu_time_used
if 'SIMULATION / CPU TIME' in name:
return self.sim_cpu_time
if 'INITIALIZATION CPU TIME' in name:
return self.init_cpu_time
else:
return None
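The `attempt_hs_match()` pattern can be sanity-checked against a fabricated send_hs excerpt. The numbers below are invented for illustration, not real sim output:

```python
import re

def attempt_hs_match(name, var, text):
    # Same pattern as send_hs.attempt_hs_match(): name, colon, spaces, a float
    m = re.match(name + r': +([-]?[0-9]*\.?[0-9]+)', text.strip())
    return float(m.group(1)) if m else var

fake_hs = [
    'REALTIME SHUTDOWN STATS:',
    '     ACTUAL INIT TIME:        0.75',
    '     SIMULATION / CPU TIME:       12.5',
]
actual_init = sim_cpu = None
for line in fake_hs:
    # Each variable keeps its prior value unless its pattern matches this line
    actual_init = attempt_hs_match('ACTUAL INIT TIME', actual_init, line)
    sim_cpu = attempt_hs_match('SIMULATION / CPU TIME', sim_cpu, line)
print(actual_init, sim_cpu)  # 0.75 12.5
```

Passing the current value back in as `var` is what lets `parse()` run every matcher over every line without clobbering values already found.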


@ -0,0 +1,4 @@
# A YAML file which doesn't specify a single valid SIM
SIM_ball_L1:
path: 30


@ -7,25 +7,24 @@ globals:
 extension_example:
     should: be ignored by this framework

-# This sim exists, but has duplicate run entries which is an error
+# This sim exists, but has duplicate run entries which means the last
+# one is the only one respected
 SIM_ball_L1:
     path: trick_sims/Ball/SIM_ball_L1
-    size: 6000
+    size: 0
+    phase: 1001 # Out of bounds
     runs:
         RUN_test/input.py:
         RUN_test/input.py:
-    valgrind:
-        runs:
-            RUN_test/input.py:
+            valgrind:
+                RUN_test/input.py: # Wrong type, dict
+                - RUN_test/input.py: # Should be list of str not list of dict

-# This sim exists, but its valgrind section is empty and its runs have problems
+# This sim exists, but runs have problems
 SIM_alloc_test:
     path: test/SIM_alloc_test
-    valgrind:
     runs:
         RUN_buddy/input.py: # Doesn't exist
         RUN_test/input.py: # Does exist
+            phase: -1001 # Out of bounds
             extension: foobar # Should be retained in self.config but not used
             compare:
                 - RUN_test/log.trk: # Should be list of str not list of dict

@ -42,8 +41,6 @@ SIM_events:
     runs:
         - RUN_test/input.py: # List of dicts, should be ignored
         - RUN_test/unit_test.py: # List of dicts, should be ignored
-    valgrind: # Empty so should be ignored
-        runs:

 # Sim exists, but runs have bad return values
 SIM_threads:


@ -33,7 +33,8 @@ def run_tests(args):
     ut_results = runner.run(overall_suite)

     # Run all doc tests by eating our own dogfood
-    doctest_files = ['TrickWorkflow.py', 'WorkflowCommon.py']
+    doctest_files = ['TrickWorkflow.py', 'WorkflowCommon.py', 'TrickWorkflowYamlVerifier.py',
+                     'MonteCarloGenerationHelper.py']
     wc = WorkflowCommon(this_dir, quiet=True)
     jobs = []
     log_prepend = '_doctest_log.txt'


@ -6,20 +6,26 @@
 import os, sys, pdb
 import unittest

 import ut_WorkflowCommon
+import ut_TrickWorkflowYamlVerifier
 import ut_TrickWorkflow
+import ut_MonteCarloGenerationHelper

 # Define load_tests function for dynamic loading using Nose2
 def load_tests(*args):
     passed_args = locals()
     suite = unittest.TestSuite()
+    suite.addTests(ut_TrickWorkflowYamlVerifier.suite())
     suite.addTests(ut_TrickWorkflow.suite())
     suite.addTests(ut_WorkflowCommon.suite())
+    suite.addTests(ut_MonteCarloGenerationHelper.suite())
     return suite

 # Local module level execution only
 if __name__ == '__main__':
     suites = unittest.TestSuite()
+    suites.addTests(ut_TrickWorkflowYamlVerifier.suite())
     suites.addTests(ut_TrickWorkflow.suite())
     suites.addTests(ut_WorkflowCommon.suite())
+    suites.addTests(ut_MonteCarloGenerationHelper.suite())
     unittest.TextTestRunner(verbosity=2).run(suites)


@ -5,19 +5,18 @@
SIM_ball_L1: SIM_ball_L1:
path: trick_sims/Ball/SIM_ball_L1 path: trick_sims/Ball/SIM_ball_L1
size: 6000 size: 6000
phase: 0
runs: runs:
RUN_test/input.py: RUN_test/input.py:
analyze: echo hi analyze: echo hi
compare: compare:
- share/trick/trickops/tests/testdata/log_a.csv vs. share/trick/trickops/tests/baselinedata/log_a.csv - share/trick/trickops/tests/testdata/log_a.csv vs. share/trick/trickops/tests/baselinedata/log_a.csv
valgrind:
flags: -v
runs:
- RUN_test/input.py
SIM_alloc_test: SIM_alloc_test:
path: test/SIM_alloc_test path: test/SIM_alloc_test
runs: runs:
RUN_test/input.py: RUN_test/input.py:
analyze: echo hi there
valgrind: -v
SIM_default_member_initializer: SIM_default_member_initializer:
   path: test/SIM_default_member_initializer
 SIM_demo_inputfile:
@@ -34,7 +33,9 @@ SIM_demo_sdefine:
     - unit_test
   runs:
     RUN_test/input.py:
+      phase: 0
     RUN_test/unit_test.py:
+      phase: 1
 SIM_dynamic_sim_object:
   path: test/SIM_dynamic_sim_object
   runs:
@@ -83,7 +84,7 @@ SIM_segments:
     RUN_test/input.py:
 SIM_stls:
   binary: 'T_main_{cpu}_test.exe'
-  build_command: "trick-CP -t"
+  build_args: "-t"
   path: test/SIM_stls
   labels:
     - unit_test
@@ -157,21 +158,35 @@ SIM_threads:
     - unit_test
   runs:
     RUN_test/sched.py:
+      phase: 1
+      analyze: echo phase 1 analysis
     RUN_test/amf.py:
+      phase: 2
+      analyze: echo phase 2 analysis
     RUN_test/async.py:
+      phase: 3
+      analyze: echo phase 3 analysis
     RUN_test/unit_test.py:
+      phase: 4
+      analyze: echo phase 4 analysis
 SIM_threads_simple:
   path: test/SIM_threads_simple
   runs:
     RUN_test/input.py:
+      phase: 1
+      analyze: echo phase 1 analysis
     RUN_test/sched.py:
+      phase: 2
+      analyze: echo phase 2 analysis
     RUN_test/async.py:
+      phase: 3
 SIM_trickcomm:
   path: test/SIM_trickcomm
   runs:
     RUN_test/input.py:
 SIM_ball_L2:
   path: trick_sims/Ball/SIM_ball_L2
+  runs:
 SIM_ball_L3:
   path: trick_sims/Ball/SIM_ball_L3
 SIM_amoeba:
@@ -187,6 +202,7 @@ SIM_cannon_jet:
 SIM_cannon_numeric:
   path: trick_sims/Cannon/SIM_cannon_numeric
 SIM_monte:
+  phase: -1
   path: trick_sims/Cannon/SIM_monte
 SIM_ode_ball:
   path: trick_sims/ODE/SIM_ode_ball
@@ -213,10 +229,12 @@ SIM_sat2d:
 SIM_satellite:
   path: trick_sims/SIM_satellite
 SIM_sun:
+  phase: 72
   path: trick_sims/SIM_sun
 SIM_wheelbot:
   path: trick_sims/SIM_wheelbot
 SIM_ball_L1_er7_utils:
+  phase: -88
   path: trick_source/er7_utils/sims/SIM_ball_L1
 SIM_grav:
   path: trick_source/er7_utils/sims/SIM_grav
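For reference, the per-sim `phase:`, per-run `valgrind:` sub-option, and `[min-max]` range notation introduced by this change combine as sketched below. This is an illustrative fragment, not from the diff; the sim name, paths, and flags are made up, and the exact `[min-max]` expansion rules are in trickops/README.md:

```yaml
SIM_example:                      # hypothetical sim entry
  path: sims/SIM_example
  phase: -1                       # build before all phase-0 sims
  parallel_safety: loose          # now per-sim, no longer a global setting
  runs:
    RUN_[000-099]/monte_input.py: # [min-max] range expands to RUN_000 ... RUN_099
      phase: 1
    RUN_test/input.py:
      valgrind: --track-origins=yes  # valgrind is now a sub-option of the run
```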

View File

@@ -0,0 +1,60 @@
# This is a file containing trick sims from this repository to be used
# for unit testing the trickops module. Specifically this file contains
# values for parameters with the wrong type
SIM_type_errors1:
path: path/to/SIM
build_args: 1 # Should be string not int
parallel_safety: 2 # Should be string not int
description: 10 # Should be a string not int
binary: 20 # Should be a string not int
size: 'string' # Should be an int not string
phase: 'string' # Should be an int not string
labels: 'hi there' # Should be a list not string
runs:
RUN_test1/input.py:
returns: 256 # Should be between 0-255
analyze: 35 # Should be a string not int
compare:
- 40 # Should be a string not int
RUN_test2/input.py:
returns: -1 # Should be between 0-255
valgrind: 79 # Should be a string not int
SIM_type_errors2:
path: path/to/SIM
build_args:
- this shouldnt
- be a list
parallel_safety:
- this shouldnt
- be a list
description:
- this shouldnt
- be a list
binary:
- this shouldnt
- be a list
size:
- this shouldnt
- be a list
phase:
- this shouldnt
- be a list
SIM_range_errors:
path: path/to/SIM
parallel_safety: 'bad_value'
size: -10
phase: -1000
# Bad path must be tested separately, since it halts processing
# of other parameters immediately
SIM_bad_path:
path: 30 # Should be a str not int

View File

@@ -0,0 +1,204 @@
import os, sys, glob
import unittest, shutil
import pdb
from testconfig import this_trick, tests_dir
import TrickWorkflow
from MonteCarloGenerationHelper import *
def suite():
"""Create test suite from test cases here and return"""
suites = []
suites.append(unittest.TestLoader().loadTestsFromTestCase(MCGNominalTestCase))
suites.append(unittest.TestLoader().loadTestsFromTestCase(MCGAllArgsTestCase))
suites.append(unittest.TestLoader().loadTestsFromTestCase(MCGInvalidGenerationTestCase))
suites.append(unittest.TestLoader().loadTestsFromTestCase(MCGInvalidInputsTestCase))
return (suites)
class MCGNominalTestCase(unittest.TestCase):
def setUp(self):
# Nominal no-error when parsing the trick-sims config file scenario
self.instance = MonteCarloGenerationHelper(sim_path=os.path.join(this_trick, 'test/SIM_mc_generation'),
input_path='RUN_nominal/input_a.py')
# Create expected generated output directories and files this test needs. Note this
# matches what SIM_mc_generation produces when run and should work whether output
# from that sim's tests are lingering in the workspace or not
self.monte_dir = os.path.join(this_trick, 'test/SIM_mc_generation', 'MONTE_RUN_nominal')
os.makedirs(os.path.join(self.monte_dir, 'RUN_000'), exist_ok=True)
os.makedirs(os.path.join(self.monte_dir, 'RUN_001'), exist_ok=True)
for dir in ['RUN_000', 'RUN_001']:
if not os.path.isfile(os.path.join(self.monte_dir, dir, 'monte_input_a.py')):
with open(os.path.join(self.monte_dir, dir, 'monte_input_a.py'), 'w') as fp:
pass
def tearDown(self):
if self.instance:
del self.instance
self.instance = None
def test_init(self):
self.assertEqual(self.instance.S_main_name, "S_main*.exe")
self.assertEqual(self.instance.sim_path, os.path.join(this_trick,"test/SIM_mc_generation"))
self.assertEqual(self.instance.input_path, "RUN_nominal/input_a.py")
self.assertEqual(self.instance.sim_args_gen, '')
self.assertEqual(self.instance.env, '')
self.assertEqual(self.instance.generated_input_files, [])
self.assertTrue(('trick/test/SIM_mc_generation && ./S_main*.exe RUN_nominal/input_a.py')
in self.instance.generation_job._command)
self.assertTrue(os.path.join(self.instance.sim_path, os.path.dirname(self.instance.input_path),
'MCG_generation_output.txt') in self.instance.generation_job.log_file)
def test_get_generation_job(self):
j = self.instance.get_generation_job()
self.assertTrue(isinstance(j, TrickWorkflow.SingleRun))
def test_get_generated_input_files(self):
gif = self.instance.get_generated_input_files(os.path.join(self.instance.sim_path, 'MONTE_RUN_nominal'))
self.assertTrue('trick/test/SIM_mc_generation/MONTE_RUN_nominal/RUN_000/monte_input_a.py' in gif[0])
self.assertTrue('trick/test/SIM_mc_generation/MONTE_RUN_nominal/RUN_001/monte_input_a.py' in gif[1])
# Given the relative path from sim
os.chdir(self.instance.sim_path)
gif = self.instance.get_generated_input_files('MONTE_RUN_nominal')
# Given invalid path (relative to cwd())
with self.assertRaises(RuntimeError):
gif = self.instance.get_generated_input_files('fake/path/MONTE_RUN_nominal')
def test_get_zero_padding(self):
zp = self.instance.get_zero_padding(monte_dir=self.monte_dir)
self.assertEqual(zp, 3)
# Without any args it should use internal self.generated_input_files
zp = self.instance.get_zero_padding()
self.assertEqual(zp, 3)
def test_get_sbatch_job(self):
sbj = self.instance.get_sbatch_job(self.monte_dir)
self.assertTrue('--array 0-001 ' in sbj._command)
self.assertTrue(self.instance.S_main_name in sbj._command)
self.assertTrue(self.monte_dir in sbj._command)
self.assertTrue('RUN_${RUN_NUM}' in sbj._command)
def test_get_sbatch_job_with_passthrough_args(self):
sbj = self.instance.get_sbatch_job(self.monte_dir, hpc_passthrough_args="--array 50-80 --wait")
self.assertTrue('--array 50-80 ' in sbj._command)
self.assertTrue('--wait ' in sbj._command)
sbj = self.instance.get_sbatch_job(self.monte_dir, hpc_passthrough_args="--array 1-100:10")
self.assertTrue('--array 1-100:10 ' in sbj._command)
self.assertTrue('--wait ' not in sbj._command)
sbj = self.instance.get_sbatch_job(self.monte_dir, hpc_passthrough_args="--begin 16:00")
self.assertTrue('--begin 16:00 ' in sbj._command)
self.assertTrue('--array 0-001 ' in sbj._command)
sbj = self.instance.get_sbatch_job(self.monte_dir, sim_args="--logging=verbose")
self.assertTrue('--logging=verbose' in sbj._command)
def test_get_generated_run_jobs(self):
jobs = self.instance.get_generated_run_jobs(self.monte_dir)
self.assertEqual(len(jobs), 2)
self.assertTrue('MONTE_RUN_nominal/RUN_000/monte_input_a.py' in jobs[0]._command)
self.assertTrue('MONTE_RUN_nominal/RUN_001/monte_input_a.py' in jobs[1]._command)
jobs = self.instance.get_generated_run_jobs(self.monte_dir, sim_args='--logging=off')
self.assertTrue('--logging=off' in jobs[0]._command)
self.assertTrue('--logging=off' in jobs[1]._command)
class MCGAllArgsTestCase(unittest.TestCase):
def setUp(self):
# Nominal no-error when parsing the trick-sims config file scenario
self.instance = MonteCarloGenerationHelper(sim_path=os.path.join(this_trick, 'test/SIM_mc_generation'),
input_path='RUN_nominal/input_a.py', env='source bashrc;', sim_args_gen='--monte-carlo --runs=32')
def tearDown(self):
if self.instance:
del self.instance
self.instance = None
def test_init(self):
self.assertTrue( self.instance.generation_job._command.startswith('source bashrc; '))
self.assertTrue( self.instance.generation_job._command.endswith('--monte-carlo --runs=32'))
class MCGInvalidGenerationTestCase(unittest.TestCase):
def setUp(self):
# Nominal no-error when parsing the trick-sims config file scenario
self.instance = MonteCarloGenerationHelper(sim_path=os.path.join(this_trick, 'test/SIM_mc_generation'),
input_path='RUN_nominal/input_a.py')
# Create incorrect generated output directories and files this test needs.
# This simulates a generation error and tests that the MonteCarloGenerationHelper class can
# detect it
monte_dir = os.path.join(this_trick, 'test/SIM_mc_generation', 'MONTE_RUN_missing_input_files')
os.makedirs(os.path.join(monte_dir, 'RUN_000'), exist_ok=True)
os.makedirs(os.path.join(monte_dir, 'RUN_001'), exist_ok=True)
monte_dir = os.path.join(this_trick, 'test/SIM_mc_generation', 'MONTE_RUN_mismatch_input_files')
os.makedirs(os.path.join(monte_dir, 'RUN_000'), exist_ok=True)
os.makedirs(os.path.join(monte_dir, 'RUN_001'), exist_ok=True)
with open(os.path.join(monte_dir, 'RUN_000', 'monte_input_a.py'), 'w') as fp:
pass
monte_dir = os.path.join(this_trick, 'test/SIM_mc_generation', 'MONTE_RUN_incorrect_input_files')
os.makedirs(os.path.join(monte_dir, 'RUN_000'), exist_ok=True)
os.makedirs(os.path.join(monte_dir, 'RUN_001'), exist_ok=True)
for dir in ['RUN_000', 'RUN_001']:
if not os.path.isfile(os.path.join(monte_dir, dir, 'input_a.py')):
with open(os.path.join(monte_dir, dir, 'input_a.py'), 'w') as fp:
pass
monte_dir = os.path.join(this_trick, 'test/SIM_mc_generation', 'MONTE_RUN_completely_bonkers')
os.makedirs(os.path.join(monte_dir, 'RUN_makes_no'), exist_ok=True)
os.makedirs(os.path.join(monte_dir, 'RUN_sense'), exist_ok=True)
for dir in ['RUN_makes_no', 'RUN_sense']:
if not os.path.isfile(os.path.join(monte_dir, dir, 'input_a.py')):
with open(os.path.join(monte_dir, dir, 'input_a.py'), 'w') as fp:
pass
def tearDown(self):
if self.instance:
del self.instance
self.instance = None
# Remove the fake directory tree created in this test
shutil.rmtree(os.path.join(this_trick, 'test/SIM_mc_generation', 'MONTE_RUN_missing_input_files'))
shutil.rmtree(os.path.join(this_trick, 'test/SIM_mc_generation', 'MONTE_RUN_mismatch_input_files'))
shutil.rmtree(os.path.join(this_trick, 'test/SIM_mc_generation', 'MONTE_RUN_incorrect_input_files'))
shutil.rmtree(os.path.join(this_trick, 'test/SIM_mc_generation', 'MONTE_RUN_completely_bonkers'))
def test_get_generated_input_files(self):
# Just a warning
gif = self.instance.get_generated_input_files(
os.path.join(self.instance.sim_path, 'MONTE_RUN_mismatch_input_files'))
with self.assertRaises(RuntimeError):
gif = self.instance.get_generated_input_files(
os.path.join(self.instance.sim_path, 'MONTE_RUN_missing_input_files'))
with self.assertRaises(RuntimeError):
gif = self.instance.get_generated_input_files(
os.path.join(self.instance.sim_path, 'MONTE_RUN_incorrect_input_files'))
with self.assertRaises(RuntimeError):
gif = self.instance.get_generated_input_files(
os.path.join(self.instance.sim_path, 'MONTE_RUN_completely_bonkers'))
def test_get_zero_padding(self):
# Without monte_dir and get_generated_input files never having been run, this will fail
with self.assertRaises(RuntimeError):
zp = self.instance.get_zero_padding()
# Nothing in monte_dir, will fail
with self.assertRaises(RuntimeError):
zp = self.instance.get_zero_padding(monte_dir=os.path.join(this_trick, 'test/SIM_mc_generation', 'MONTE_RUN_missing_input_files'))
class MCGInvalidInputsTestCase(unittest.TestCase):
def setUp(self):
# Invalid sim_path
with self.assertRaises(RuntimeError):
self.instance = MonteCarloGenerationHelper(sim_path=os.path.join(this_trick, 'test/SIM_notexist'),
input_path='RUN_nominal/input_a.py')
# Invalid input_path
with self.assertRaises(RuntimeError):
self.instance = MonteCarloGenerationHelper(sim_path=os.path.join(this_trick, 'test/SIM_mc_generation'),
input_path='RUN_nominal/input_x.py',)
# Invalid sim_args_gen
with self.assertRaises(TypeError):
self.instance = MonteCarloGenerationHelper(sim_path=os.path.join(this_trick, 'test/SIM_mc_generation'),
input_path='RUN_nominal/input_a.py', sim_args_gen=3)
# Invalid env
with self.assertRaises(TypeError):
self.instance = MonteCarloGenerationHelper(sim_path=os.path.join(this_trick, 'test/SIM_mc_generation'),
input_path='RUN_nominal/input_a.py', env=3)
def test_init(self):
pass
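The zero-padding inference exercised by `test_get_zero_padding` above can be sketched standalone. This is an illustrative reimplementation based only on the assertions in these tests (e.g. `RUN_000`/`RUN_001` yields a padding of 3); the function name and behavior are assumptions, not the project's actual code:

```python
import os
import re

def get_zero_padding(run_dirs):
    """Infer the zero-padding width from generated RUN_<num> directory names.

    E.g. ['RUN_000', 'RUN_001'] -> 3, matching MCGNominalTestCase above.
    Raises RuntimeError when no RUN_<num> directories are given, mirroring
    the error case in MCGInvalidGenerationTestCase.
    """
    widths = set()
    for d in run_dirs:
        m = re.match(r'RUN_(\d+)$', os.path.basename(d))
        if m:
            widths.add(len(m.group(1)))
    if not widths:
        raise RuntimeError("no RUN_<num> directories to infer padding from")
    return max(widths)

print(get_zero_padding(['RUN_000', 'RUN_001']))  # 3
```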

View File

@ -5,11 +5,36 @@ from testconfig import this_trick, tests_dir
from TrickWorkflow import * from TrickWorkflow import *
def suite(): def suite():
"""Create test suite from TrickWorkflowTestCase unit test class and return""" """Create test suite from test cases here and return"""
return unittest.TestLoader().loadTestsFromTestCase(TrickWorkflowTestCase) suites = []
suites.append(unittest.TestLoader().loadTestsFromTestCase(TrickWorkflowTestCase))
suites.append(unittest.TestLoader().loadTestsFromTestCase(TrickWorkflowSingleRunTestCase))
return (suites)
class TrickWorkflowSingleRunTestCase(unittest.TestCase):
def setUp(self):
self.instance= SingleRun(name='testname', command='echo hi',
log_file='/tmp/WorkflowCommonTestCase_hi.txt')
def tearDown(self):
del self.instance
self.instance = None
def test_SingleRun_Nominal(self):
self.assertEqual(self.instance.name, 'testname')
self.assertEqual(self.instance._command, 'echo hi')
self.assertEqual(self.instance.log_file, '/tmp/WorkflowCommonTestCase_hi.txt')
self.assertEqual(self.instance._log_file, None)
self.assertEqual(self.instance._expected_exit_status, 0)
self.assertEqual(self.instance.get_use_var_server(), True)
# Test the setter for var server interaction
self.instance.set_use_var_server(False)
self.assertEqual(self.instance.get_use_var_server(), False)
class TrickWorkflowTestCase(unittest.TestCase): class TrickWorkflowTestCase(unittest.TestCase):
# TODO: not all jobs even use this setUp, probably should split the ones
# out that need to load a different file or no file at all (static tests)
def setUp(self): def setUp(self):
# Nominal no-error when parsing the trick-sims config file scenario # Nominal no-error when parsing the trick-sims config file scenario
self.instance = TrickWorkflow(project_top_level=this_trick, log_dir='/tmp/', self.instance = TrickWorkflow(project_top_level=this_trick, log_dir='/tmp/',
@ -22,9 +47,9 @@ class TrickWorkflowTestCase(unittest.TestCase):
del self.instance del self.instance
self.instance = None self.instance = None
def setUpWithEmptyConfig(self): def setUpWithEmptyConfig(self, file="empty.yml"):
self.instance = TrickWorkflow(project_top_level=this_trick, log_dir='/tmp/', self.instance = TrickWorkflow(project_top_level=this_trick, log_dir='/tmp/',
trick_dir=this_trick, config_file=os.path.join(tests_dir,"empty.yml"), trick_dir=this_trick, config_file=os.path.join(tests_dir,file),
quiet=True) quiet=True)
def setUpWithErrorConfig(self, file): def setUpWithErrorConfig(self, file):
@ -33,21 +58,43 @@ class TrickWorkflowTestCase(unittest.TestCase):
trick_dir=this_trick, config_file=os.path.join(tests_dir,file), trick_dir=this_trick, config_file=os.path.join(tests_dir,file),
quiet=True) quiet=True)
def test_static_members(self):
self.assertEqual(TrickWorkflow.all_possible_phases,
range(TrickWorkflow.allowed_phase_range['min'], TrickWorkflow.allowed_phase_range['max']+1) )
self.assertEqual(TrickWorkflow.listify_phase(3), [3]) # Int becomes list of int
self.assertEqual(TrickWorkflow.listify_phase([-3, 3, 9]), [-3, 3, 9]) # Valid list passes through
self.assertEqual(TrickWorkflow.listify_phase(), list(TrickWorkflow.all_possible_phases))
self.assertEqual(TrickWorkflow.listify_phase(None), list(TrickWorkflow.all_possible_phases))
self.assertEqual(TrickWorkflow.listify_phase(range(0,5)), [0, 1, 2, 3, 4])
with self.assertRaises(RuntimeError):
p = TrickWorkflow.listify_phase('hey') # Bad type
with self.assertRaises(RuntimeError):
p = TrickWorkflow.listify_phase([-3, 3, 'hi']) # Bad type
with self.assertRaises(RuntimeError):
p = TrickWorkflow.listify_phase([-30000, 3, '30000']) # Out of range
with self.assertRaises(RuntimeError):
p = TrickWorkflow.listify_phase([-30000, 3, 'hello']) # Out of range and bad type
def test_init_nominal(self): def test_init_nominal(self):
self.assertEqual(self.instance.cpus, 3) self.assertEqual(self.instance.cpus, 3)
self.assertEqual(self.instance.parallel_safety, 'loose') self.assertEqual(self.instance.config_errors, [])
self.assertEqual(self.instance.config_errors, False)
self.instance.report() self.instance.report()
build_jobs = self.instance.get_jobs('build') build_jobs = self.instance.get_jobs('build')
self.assertEqual(len(build_jobs), 56) self.assertEqual(len(build_jobs), 56)
self.assertEqual(len(self.instance.sims), 56) self.assertEqual(len(self.instance.sims), 56)
run_jobs = self.instance.get_jobs('run') run_jobs = self.instance.get_jobs('run')
self.assertEqual(len(run_jobs), 37 ) self.assertEqual(len(run_jobs), 36)
val_run_jobs = self.instance.get_jobs('valgrind')
self.assertEqual(len(val_run_jobs), 1)
def test_init_empty_so_raises(self): def test_init_empty_so_raises(self):
with self.assertRaises(RuntimeError): with self.assertRaises(RuntimeError):
self.setUpWithEmptyConfig() self.setUpWithEmptyConfig()
def test_init_empty_after_parsing_so_raises(self):
with self.assertRaises(RuntimeError):
self.setUpWithEmptyConfig("empty1.yml")
def test_init_bad_yaml_so_raises(self): def test_init_bad_yaml_so_raises(self):
with self.assertRaises(RuntimeError): with self.assertRaises(RuntimeError):
self.setUpWithErrorConfig("errors_fatal1.yml") self.setUpWithErrorConfig("errors_fatal1.yml")
@ -56,17 +103,18 @@ class TrickWorkflowTestCase(unittest.TestCase):
def test_init_errors_but_no_raise(self): def test_init_errors_but_no_raise(self):
self.setUpWithErrorConfig("errors_nonfatal.yml") self.setUpWithErrorConfig("errors_nonfatal.yml")
self.assertTrue(self.instance.config_errors) self.assertEqual(len(self.instance.parsing_errors), 11)
self.assertEqual(self.instance.parallel_safety , 'loose') self.assertEqual(len(self.instance.config_errors), 2)
self.assertEqual(len(self.instance.sims), 4) self.assertEqual(len(self.instance.sims), 4)
self.assertEqual(len(self.instance.get_sim('SIM_ball_L1').get_runs()), 1) self.assertEqual(len(self.instance.get_sim('SIM_ball_L1').get_runs()), 1)
#import pprint; pprint.pprint(self.instance.parsing_errors)
self.assertEqual(len(self.instance.get_sim('SIM_ball_L1').get_valgrind_runs()), 0) self.assertEqual(len(self.instance.get_sim('SIM_ball_L1').get_valgrind_runs()), 0)
self.assertEqual(len(self.instance.get_sim('SIM_alloc_test').get_runs()), 1) self.assertEqual(self.instance.get_sim('SIM_ball_L1').get_phase(), 0)
self.assertEqual(len(self.instance.get_sim('SIM_alloc_test').get_runs()), 2)
self.assertEqual(len(self.instance.get_sim('SIM_alloc_test').get_valgrind_runs()), 0) self.assertEqual(len(self.instance.get_sim('SIM_alloc_test').get_valgrind_runs()), 0)
self.assertEqual(self.instance.config['SIM_alloc_test']['runs']['RUN_test/input.py']['extension'], 'foobar') self.assertEqual(self.instance.config['SIM_alloc_test']['runs']['RUN_test/input.py']['extension'], 'foobar')
self.assertEqual(len(self.instance.get_sim('SIM_events').get_runs()), 0) self.assertEqual(len(self.instance.get_sim('SIM_events').get_runs()), 0)
self.assertTrue('fine1' in self.instance.get_sim('SIM_events').labels) self.assertEqual(self.instance.get_sim('SIM_events').labels, [])
self.assertTrue('fine2' in self.instance.get_sim('SIM_events').labels)
self.assertEqual(self.instance.get_sim('SIM_threads').get_run('RUN_test/sched.py').returns, 0) self.assertEqual(self.instance.get_sim('SIM_threads').get_run('RUN_test/sched.py').returns, 0)
self.assertEqual(self.instance.get_sim('SIM_threads').get_run('RUN_test/amf.py').returns, 0) self.assertEqual(self.instance.get_sim('SIM_threads').get_run('RUN_test/amf.py').returns, 0)
self.assertEqual(self.instance.get_sim('SIM_threads').get_run('RUN_test/async.py').returns, 0) self.assertEqual(self.instance.get_sim('SIM_threads').get_run('RUN_test/async.py').returns, 0)
@ -75,8 +123,7 @@ class TrickWorkflowTestCase(unittest.TestCase):
self.assertTrue(self.instance.get_sim('SIM_L1_ball') is None) self.assertTrue(self.instance.get_sim('SIM_L1_ball') is None)
self.assertTrue(self.instance.get_sim('SIM_foobar') is None) self.assertTrue(self.instance.get_sim('SIM_foobar') is None)
self.assertTrue(self.instance.get_sim('SIM_parachute') is None) self.assertTrue(self.instance.get_sim('SIM_parachute') is None)
self.assertEqual(self.instance.config['extension_example'], {'should': 'be ignored by this framework'})
self.assertTrue(self.instance.config['extension_example'])
self.instance.report() self.instance.report()
def test_get_sim_nominal(self): def test_get_sim_nominal(self):
@ -146,15 +193,24 @@ class TrickWorkflowTestCase(unittest.TestCase):
self.assertTrue(ucd[0][0] is not None) self.assertTrue(ucd[0][0] is not None)
self.assertTrue(ucd[0][1] is not None) self.assertTrue(ucd[0][1] is not None)
def test_get_koviz_report_jobs_nominal(self): def test_get_koviz_report_jobs(self):
# Since instrumenting "is koviz on your PATH?" is difficult, this test is
# designed to check both error (not on PATH) and non-error (on PATH) cases
krj = self.instance.get_koviz_report_jobs() krj = self.instance.get_koviz_report_jobs()
self.assertTrue(isinstance(krj[0][0], Job)) if krj[1] == []: # No errors implies we have koviz on PATH
self.assertTrue(not krj[1]) self.assertTrue(isinstance(krj[0], list))
self.assertTrue(isinstance(krj[0][0], Job))
self.assertTrue(not krj[1])
else: # Otherwise koviz not on PATH
self.assertTrue('koviz is not found' in krj[1][0])
self.assertTrue(isinstance(krj[0], list))
self.assertEqual(len(krj[0]), 0)
def test_get_koviz_report_job_missing_dir(self): def test_get_koviz_report_job_missing_dir(self):
krj = self.instance.get_koviz_report_job('share/trick/trickops/tests/testdata_noexist', krj = self.instance.get_koviz_report_job('share/trick/trickops/tests/testdata_noexist',
'share/trick/trickops/tests/baselinedata') 'share/trick/trickops/tests/baselinedata')
self.assertTrue(krj[0] is None) self.assertTrue(krj[0] is None)
# Loose check but works even if koviz not on PATH
self.assertTrue('ERROR' in krj[1]) self.assertTrue('ERROR' in krj[1])
def test_status_summary_nominal(self): def test_status_summary_nominal(self):
@ -165,18 +221,49 @@ class TrickWorkflowTestCase(unittest.TestCase):
def test_get_and_pop_run(self): def test_get_and_pop_run(self):
sim = self.instance.get_sim('SIM_ball_L1') sim = self.instance.get_sim('SIM_ball_L1')
run = sim.get_run('RUN_test/input.py') run = sim.get_run('RUN_test/input.py')
self.assertEqual(run.input, 'RUN_test/input.py') self.assertEqual(run.input_file, 'RUN_test/input.py')
run = sim.pop_run('RUN_test/input.py') run = sim.pop_run('RUN_test/input.py')
self.assertEqual(run.input, 'RUN_test/input.py') self.assertEqual(run.input_file, 'RUN_test/input.py')
self.assertEqual(len(sim.get_runs()), 0) self.assertEqual(len(sim.get_runs()), 0)
def test_check_run_jobs(self): def test_get_run_jobs_kind(self):
sim = self.instance.get_sim('SIM_ball_L1') sim = self.instance.get_sim('SIM_ball_L1')
normal_run_jobs = sim.get_run_jobs() normal_run_jobs = sim.get_run_jobs()
self.assertTrue('valgrind' not in normal_run_jobs[0]._command) self.assertTrue('valgrind' not in normal_run_jobs[0]._command)
sim = self.instance.get_sim('SIM_alloc_test')
valgrind_run_jobs = sim.get_run_jobs(kind='valgrind') valgrind_run_jobs = sim.get_run_jobs(kind='valgrind')
self.assertTrue('valgrind' in valgrind_run_jobs[0]._command) self.assertTrue('valgrind' in valgrind_run_jobs[0]._command)
def test_get_run_jobs_phase(self):
sim = self.instance.get_sim('SIM_demo_sdefine')
run_jobs = sim.get_run_jobs() # Ask for everything
self.assertEqual(len(run_jobs), 2)
run_jobs_phase0_int = sim.get_run_jobs(phase=0) # Ask for only phase 0 as int
self.assertEqual(len(run_jobs_phase0_int), 1)
run_jobs_phase0_list = sim.get_run_jobs(phase=[0]) # Ask for only phase 0 as list
self.assertEqual(run_jobs_phase0_int, run_jobs_phase0_list) # Should be the same
self.assertEqual(len(run_jobs_phase0_list), 1)
run_jobs_phase1_int = sim.get_run_jobs(phase=1) # Ask for only phase 1 as int
self.assertEqual(len(run_jobs_phase1_int), 1)
run_jobs_phase1_list = sim.get_run_jobs(phase=[1]) # Ask for only phase 1 as list
self.assertEqual(run_jobs_phase1_int, run_jobs_phase1_list) # Should be the same
self.assertEqual(len(run_jobs_phase1_list), 1)
run_jobs_both_phases_list = sim.get_run_jobs(phase=[0,1]) # Ask for phase 0-1 as list
self.assertEqual(len(run_jobs_both_phases_list), 2)
self.assertEqual(run_jobs_both_phases_list, run_jobs) # These should be equivalent
run_jobs_one_phase_not_exist = sim.get_run_jobs(phase=[0,1,2]) # Ask for phase 0-2 as list, 2 doesn't exist
self.assertEqual(len(run_jobs_one_phase_not_exist), 2) # We only get what exists
# Ask for everything explicitly, this is far slower than phase=None but should produce equivalent list
run_jobs_all_possible_explicit = sim.get_run_jobs(phase=TrickWorkflow.all_possible_phases)
self.assertEqual(run_jobs_all_possible_explicit , run_jobs) # These should be equivalent
# Error cases
with self.assertRaises(RuntimeError):
run_jobs_invalid_input = sim.get_run_jobs(phase=['abc',1]) # strings aren't permitted
with self.assertRaises(RuntimeError):
run_jobs_invalid_input = sim.get_run_jobs(phase=-10000) # out of range as int
with self.assertRaises(RuntimeError):
run_jobs_invalid_input = sim.get_run_jobs(phase=[-10000]) # out of range as list
def test_compare(self): def test_compare(self):
sim = self.instance.get_sim('SIM_ball_L1') sim = self.instance.get_sim('SIM_ball_L1')
# Sim level comparison (test_data.csv vs. baseline_data.csv) will fail # Sim level comparison (test_data.csv vs. baseline_data.csv) will fail
@ -189,29 +276,93 @@ class TrickWorkflowTestCase(unittest.TestCase):
self.assertEqual(run.comparisons[0]._translate_status(), '\x1b[31mFAIL\x1b[0m') self.assertEqual(run.comparisons[0]._translate_status(), '\x1b[31mFAIL\x1b[0m')
def test_get_jobs_nominal(self): def test_get_jobs_nominal(self):
# Test all the permissive permutations # Test all the kinds permutations
builds = self.instance.get_jobs('build') builds = self.instance.get_jobs('build')
self.assertEqual(len(builds), 56) self.assertEqual(len(builds), 56)
builds = self.instance.get_jobs('builds') builds = self.instance.get_jobs('builds')
self.assertEqual(len(builds), 56) self.assertEqual(len(builds), 56)
runs = self.instance.get_jobs('run') runs = self.instance.get_jobs('run')
self.assertEqual(len(runs), 37) self.assertEqual(len(runs), 36)
runs = self.instance.get_jobs('runs') runs = self.instance.get_jobs('runs')
self.assertEqual(len(runs), 37) self.assertEqual(len(runs), 36)
vg = self.instance.get_jobs('valgrind') vg = self.instance.get_jobs('valgrind')
self.assertEqual(len(vg), 1) self.assertEqual(len(vg), 1)
vg = self.instance.get_jobs('valgrinds') vg = self.instance.get_jobs('valgrinds')
self.assertEqual(len(vg), 1) self.assertEqual(len(vg), 1)
a = self.instance.get_jobs('analysis') a = self.instance.get_jobs('analysis')
self.assertEqual(len(a), 1) self.assertEqual(len(a), 8)
a = self.instance.get_jobs('analyses') a = self.instance.get_jobs('analyses')
self.assertEqual(len(a), 1) self.assertEqual(len(a), 8)
a = self.instance.get_jobs('analyze') a = self.instance.get_jobs('analyze')
self.assertEqual(len(a), 1) self.assertEqual(len(a), 8)
def test_get_jobs_builds_with_phases(self):
# All builds all phases
builds = self.instance.get_jobs('build', phase=None)
self.assertEqual(len(builds), 56)
builds = self.instance.get_jobs('build', phase=TrickWorkflow.all_possible_phases)
self.assertEqual(len(builds), 56)
# Builds only specific phases
builds = self.instance.get_jobs('build', phase=970)
self.assertEqual(len(builds), 0)
builds = self.instance.get_jobs('build', phase=0)
self.assertEqual(len(builds), 53)
builds = self.instance.get_jobs('build', phase=[0])
self.assertEqual(len(builds), 53)
builds = self.instance.get_jobs('build', phase=-1)
self.assertEqual(len(builds), 1)
builds = self.instance.get_jobs('build', phase=72)
self.assertEqual(len(builds), 1)
builds = self.instance.get_jobs('build', phase=-88)
self.assertEqual(len(builds), 1)
builds = self.instance.get_jobs('build', phase=[-88, 72])
self.assertEqual(len(builds), 2)
def test_get_jobs_runs_with_phases(self):
# All runs all phases
vruns = self.instance.get_jobs('valgrind', phase=None)
self.assertEqual(len(vruns), 1)
vruns = self.instance.get_jobs('valgrind', phase=7)
self.assertEqual(len(vruns), 0)
runs = self.instance.get_jobs('run', phase=None)
self.assertEqual(len(runs), 36)
runs = self.instance.get_jobs('run', phase=TrickWorkflow.all_possible_phases)
self.assertEqual(len(runs), 36)
# Runs specific phases
runs = self.instance.get_jobs('run', phase=[8, 19])
self.assertEqual(len(runs), 0)
runs = self.instance.get_jobs('run', phase=1)
self.assertEqual(len(runs), 3)
runs = self.instance.get_jobs('run', phase=2)
self.assertEqual(len(runs), 2)
runs = self.instance.get_jobs('run', phase=3)
self.assertEqual(len(runs), 2)
runs = self.instance.get_jobs('run', phase=4)
self.assertEqual(len(runs), 1)
def test_get_jobs_analysis_with_phases(self):
# All analysis all phases
an = self.instance.get_jobs('analysis', phase=None)
self.assertEqual(len(an), 8)
an = self.instance.get_jobs('analysis', phase=TrickWorkflow.all_possible_phases)
self.assertEqual(len(an), 8)
# Analysis specific phases
an = self.instance.get_jobs('analysis', phase=[8, 19])
self.assertEqual(len(an), 0)
an = self.instance.get_jobs('analysis', phase=1)
self.assertEqual(len(an), 2)
an = self.instance.get_jobs('analysis', phase=2)
self.assertEqual(len(an), 2)
an = self.instance.get_jobs('analysis', phase=3)
self.assertEqual(len(an), 1)
def test_get_jobs_raises(self): def test_get_jobs_raises(self):
with self.assertRaises(TypeError): with self.assertRaises(TypeError):
jobs = self.instance.get_jobs(kind='bucees') jobs = self.instance.get_jobs(kind='bucees')
with self.assertRaises(RuntimeError):
jobs = self.instance.get_jobs(kind='build', phase='abx')
with self.assertRaises(RuntimeError):
jobs = self.instance.get_jobs(kind='run', phase=[-10000, 60000])
def test_get_comparisons_nominal(self): def test_get_comparisons_nominal(self):
c = self.instance.get_comparisons() c = self.instance.get_comparisons()
@ -219,14 +370,14 @@ class TrickWorkflowTestCase(unittest.TestCase):
self.assertEqual(c[0]._translate_status(), '\x1b[33mNOT RUN\x1b[0m') self.assertEqual(c[0]._translate_status(), '\x1b[33mNOT RUN\x1b[0m')
def test_add_comparison(self): def test_add_comparison(self):
sim = self.instance.get_sim('SIM_alloc_test') sim = self.instance.get_sim('SIM_demo_inputfile')
run = sim.get_run('RUN_test/input.py') run = sim.get_run('RUN_test/input.py')
run.add_comparison('share/trick/trickops/tests/baselinedata/log_a.csv', run.add_comparison('share/trick/trickops/tests/testdata/log_a.csv',
'share/trick/trickops/tests/testdata/log_a.csv') 'share/trick/trickops/tests/baselinedata/log_a.csv')
self.assertTrue(len(run.comparisons) == 1) self.assertTrue(len(run.comparisons) == 1)
def test_add_analysis_nominal(self): def test_add_analysis_nominal(self):
sim = self.instance.get_sim('SIM_alloc_test') sim = self.instance.get_sim('SIM_demo_inputfile')
run = sim.get_run('RUN_test/input.py') run = sim.get_run('RUN_test/input.py')
run.add_analysis('echo analysis goes here') run.add_analysis('echo analysis goes here')
self.assertTrue( 'echo analysis goes here' in run.analysis._command) self.assertTrue( 'echo analysis goes here' in run.analysis._command)
@ -238,12 +389,13 @@ class TrickWorkflowTestCase(unittest.TestCase):
self.assertTrue( 'echo overwriting analysis' in run.analysis._command)
def test_run_init(self):
r = TrickWorkflow.Run(sim_dir='test/SIM_alloc_test', input_file='RUN_test/input.py --someflag',
binary='S_main_Linux_x86_64.exe')
self.assertEqual(r.sim_dir, 'test/SIM_alloc_test')
self.assertEqual(r.prerun_cmd, '')
self.assertTrue(r.input_file == 'RUN_test/input.py --someflag')
self.assertEqual(r.returns, 0)
self.assertEqual(r.phase, 0)
self.assertTrue(r.valgrind_flags is None)
self.assertEqual(r.log_dir, '/tmp/')
self.assertEqual(r.just_input,'RUN_test/input.py')
@@ -263,7 +415,7 @@ class TrickWorkflowTestCase(unittest.TestCase):
self.assertTrue(c.error is None)
def test_run_compare_pass(self):
r = TrickWorkflow.Run(sim_dir='test/SIM_alloc_test', input_file='RUN_test/input.py --someflag',
binary='S_main_Linux_x86_64.exe')
# Use same data to get a pass
test_data = 'share/trick/trickops/tests/baselinedata/log_a.csv'
@@ -272,7 +424,7 @@ class TrickWorkflowTestCase(unittest.TestCase):
self.assertEqual(r.compare(), 0)
def test_run_compare_fail(self):
r = TrickWorkflow.Run(sim_dir='test/SIM_alloc_test', input_file='RUN_test/input.py --someflag',
binary='S_main_Linux_x86_64.exe')
# Use different data to get a fail
test_data = 'share/trick/trickops/tests/testdata/log_a.csv'
@@ -295,9 +447,8 @@ class TrickWorkflowTestCase(unittest.TestCase):
os.makedirs(just_RUN_root, exist_ok=True)
Path(os.path.join(SIM_root,run)).touch()
yml_content=textwrap.dedent("""
SIM_fake:
    parallel_safety: """ + parallel_safety + """
    path: """ + SIM_root_rel + """
    runs:
""")
@@ -356,3 +507,165 @@ class TrickWorkflowTestCase(unittest.TestCase):
self.assertEqual(len(self.instance.get_sims()), 1)
self.assertEqual(len(self.instance.get_sim('SIM_fake').get_runs()), 1)
self.teardown_deep_directory_structure()
def test_sim_init_default_args(self):
s = TrickWorkflow.Sim(name='mySim', sim_dir='sims/SIM_fake')
self.assertEqual(s.name, 'mySim')
self.assertEqual(s.sim_dir, 'sims/SIM_fake')
self.assertEqual(s.description, None)
self.assertEqual(s.build_cmd, 'trick-CP')
self.assertEqual(s.cpus, 3)
self.assertEqual(s.size, 2200000)
self.assertEqual(s.labels, [])
self.assertEqual(s.phase, 0)
self.assertEqual(s.log_dir, '/tmp')
self.assertEqual(s.build_job, None)
self.assertEqual(s.runs, [])
self.assertEqual(s.valgrind_runs, [])
self.assertTrue(isinstance(s.printer, ColorStr))
job = s.get_build_job()
self.assertEqual(s.build_job, job ) # First get stores it locally
self.assertTrue('cd sims/SIM_fake && export MAKEFLAGS=-j3 && trick-CP' in job._command )
runs = s.get_run_jobs()
self.assertEqual(runs, []) # No runs have been added
def test_sim_init_all_args(self):
s = TrickWorkflow.Sim(name='yourSim', sim_dir='sims/SIM_foo', description='desc',
labels=['label1', 'label2'], prebuild_cmd='source env/env.sh; ',
build_cmd='trick-CP --flag', cpus=2, size=10000, phase=2, log_dir='~/logs')
self.assertEqual(s.name, 'yourSim')
self.assertEqual(s.sim_dir, 'sims/SIM_foo')
self.assertEqual(s.description, 'desc')
self.assertEqual(s.build_cmd, 'trick-CP --flag')
self.assertEqual(s.cpus, 2)
self.assertEqual(s.size, 10000)
self.assertEqual(s.labels, ['label1', 'label2'])
self.assertEqual(s.phase, 2)
self.assertEqual(s.log_dir, '~/logs')
job = s.get_build_job()
self.assertTrue('cd sims/SIM_foo && export MAKEFLAGS=-j2 && trick-CP --flag' in job._command )
self.assertTrue('source env/env.sh;' in job._command )
def test_phase_getters_setters(self):
s = TrickWorkflow.Sim(name='mySim', sim_dir='sims/SIM_fake')
s.set_phase(99) # OK
self.assertEqual(s.get_phase(), 99)
s.set_phase(999) # OK
self.assertEqual(s.get_phase(), 999)
s.set_phase(TrickWorkflow.allowed_phase_range['max'])
self.assertEqual(s.get_phase(), TrickWorkflow.allowed_phase_range['max']) # OK boundary
with self.assertRaises(RuntimeError):
s.set_phase(TrickWorkflow.allowed_phase_range['max']+1) # Over boundary
with self.assertRaises(RuntimeError):
s.set_phase(TrickWorkflow.allowed_phase_range['min']-1) # Under boundary
r = TrickWorkflow.Run(sim_dir='test/SIM_alloc_test', input_file='RUN_test/input.py',
binary='S_main_Linux_x86_64.exe')
r.set_phase(99) # OK
self.assertEqual(r.get_phase(), 99)
r.set_phase(999) # OK
self.assertEqual(r.get_phase(), 999)
with self.assertRaises(RuntimeError):
r.set_phase(TrickWorkflow.allowed_phase_range['max']+1) # Over boundary
with self.assertRaises(RuntimeError):
r.set_phase(TrickWorkflow.allowed_phase_range['min']-1) # Under boundary
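The boundary assertions above pin down the phase contract from the PR description: an integer within TrickWorkflow.allowed_phase_range (-1000 to 1000), with a RuntimeError for anything outside it. A minimal standalone sketch of that validation, assuming only what the tests show (the real setters live on Sim and Run and may differ in detail):

```python
# Sketch of the phase validation implied by the boundary tests.
# Assumption: allowed_phase_range mirrors TrickWorkflow.allowed_phase_range.
allowed_phase_range = {'min': -1000, 'max': 1000}

def set_phase_checked(value):
    """Return value if it is a legal phase, else raise RuntimeError."""
    if not isinstance(value, int) or not (
            allowed_phase_range['min'] <= value <= allowed_phase_range['max']):
        raise RuntimeError(
            "phase must be an integer in [%d, %d], got: %s" %
            (allowed_phase_range['min'], allowed_phase_range['max'], value))
    return value
```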
def test_run__find_range_string(self):
r = TrickWorkflow.Run(sim_dir='test/SIM_alloc_test', input_file='RUN_test/input.py',
binary='S_main_Linux_x86_64.exe')
self.assertEqual(TrickWorkflow._find_range_string("[01-09]"), "[01-09]")
self.assertEqual(TrickWorkflow._find_range_string("SET_foo/RUN_[0-9]/input.py"), "[0-9]")
self.assertEqual(TrickWorkflow._find_range_string("SET_foo/RUN_[01-09]/input.py"), "[01-09]")
self.assertEqual(TrickWorkflow._find_range_string("SET_foo/RUN_[001-999]/input.py"), "[001-999]")
self.assertEqual(TrickWorkflow._find_range_string("SET_foo/RUN_[0000-9999]/input.py"), "[0000-9999]")
self.assertEqual(TrickWorkflow._find_range_string("SET_[01-09]/RUN_hi/input.py"), "[01-09]" )
self.assertEqual(TrickWorkflow._find_range_string("[01-09]/RUN_hello/input.py"), "[01-09]")
self.assertEqual(TrickWorkflow._find_range_string("SET_foo/RUN_bar/input.py"), None)
self.assertEqual(TrickWorkflow._find_range_string("SET_foo/RUN_[001-009/input.py"), None)
self.assertEqual(TrickWorkflow._find_range_string("SET_foo/RUN_(01-09)/input.py"), None)
with self.assertRaises(RuntimeError):
TrickWorkflow._find_range_string("SET_[00-03]/RUN_[01-09]/input.py")
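The detection behavior these assertions describe can be sketched with a single regex: return the one [min-max] token if present, None if absent, and raise when two tokens appear. This is a hypothetical stand-in for TrickWorkflow._find_range_string, not the actual implementation:

```python
import re

# [min-max]: an open bracket, digits, a dash, digits, a close bracket.
range_pattern = re.compile(r'\[\d+-\d+\]')

def find_range_string(text):
    """Return the single [min-max] token in text, or None if absent.

    Raises RuntimeError if more than one token is present, matching the
    behavior exercised by the test above.
    """
    matches = range_pattern.findall(text)
    if len(matches) > 1:
        raise RuntimeError("Only one [min-max] range is allowed: %s" % text)
    return matches[0] if matches else None
```

Note that a malformed token such as "RUN_[001-009/input.py" simply fails to match, so it reports None rather than an error, as the test expects.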
def test_run__get_range_list(self):
r = TrickWorkflow.Run(sim_dir='test/SIM_alloc_test', input_file='RUN_test/input.py',
binary='S_main_Linux_x86_64.exe')
myRange = r._get_range_list("[01-09]")
self.assertEqual(myRange, ["01", "02", "03", "04", "05", "06", "07", "08", "09"])
myRange = r._get_range_list("[001-009]")
self.assertEqual(myRange, ["001", "002", "003", "004", "005", "006", "007", "008", "009"])
with self.assertRaises(RuntimeError):
myRange = r._get_range_list("[009-001]") # Min not less than max
with self.assertRaises(RuntimeError):
myRange = r._get_range_list("[09-001]") # Inconsistent leading zeros
with self.assertRaises(RuntimeError):
myRange = r._get_range_list("[abc-009]") # Min Can't be converted to int
with self.assertRaises(RuntimeError):
myRange = r._get_range_list("[01-zy]") # Max Can't be converted to int
with self.assertRaises(RuntimeError):
myRange = r._get_range_list("01-04") # Wrong syntax
with self.assertRaises(RuntimeError):
myRange = r._get_range_list("[01-04") # Wrong syntax
with self.assertRaises(RuntimeError):
myRange = r._get_range_list("[01/04") # Wrong syntax
myRange = r._get_range_list("[[01-04]]") # Extra brackets are ignored
self.assertEqual(myRange, ["01", "02", "03", "04"])
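The expansion rules checked above (zero-padded output, min strictly less than max, consistent padding, integers only, extra brackets tolerated) can be sketched as follows; this is a hypothetical stand-in for Run._get_range_list, under the assumption that its checks are roughly those the tests enumerate:

```python
import re

def get_range_list(range_string):
    """Expand '[min-max]' into a zero-padded list of number strings."""
    # re.search tolerates extra surrounding brackets, e.g. "[[01-04]]";
    # non-numeric bounds or missing brackets simply fail to match.
    m = re.search(r'\[(\d+)-(\d+)\]', range_string)
    if m is None:
        raise RuntimeError("Bad [min-max] syntax: %s" % range_string)
    lo, hi = m.group(1), m.group(2)
    if len(lo) != len(hi):
        raise RuntimeError("Inconsistent zero padding: %s" % range_string)
    if int(lo) >= int(hi):
        raise RuntimeError("min must be less than max: %s" % range_string)
    width = len(lo)
    return [str(i).zfill(width) for i in range(int(lo), int(hi) + 1)]
```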
def test_run__multiply(self):
r = TrickWorkflow.Run(sim_dir='test/SIM_alloc_test', input_file='RUN_[00-05]/input.py',
binary='S_main_Linux_x86_64.exe')
runs = r.multiply()
self.assertEqual(len(runs), 6) # Expect 6 copies
r = TrickWorkflow.Run(sim_dir='test/SIM_alloc_test', input_file='RUN_[0000-9999]/input.py',
binary='S_main_Linux_x86_64.exe')
runs = r.multiply()
self.assertEqual(len(runs), 10000) # Expect 10000 copies
r = TrickWorkflow.Run(sim_dir='test/SIM_alloc_test', input_file='RUN_[05-01]/input.py',
binary='S_main_Linux_x86_64.exe')
with self.assertRaises(RuntimeError):
runs = r.multiply() # Reverse order in syntax in input_file
r = TrickWorkflow.Run(sim_dir='test/SIM_alloc_test', input_file='SET_[01-02]/RUN_[05-01]/input.py',
binary='S_main_Linux_x86_64.exe')
with self.assertRaises(RuntimeError):
runs = r.multiply() # Invalid syntax, more than one pattern found
def test_run__multiply_with_double_pattern_comparisons(self):
r = TrickWorkflow.Run(sim_dir='test/SIM_alloc_test', input_file='RUN_[00-05]/input.py',
binary='S_main_Linux_x86_64.exe')
# Add a fake comparison with correct [min-max] notation
r.add_comparison('testdata/RUN_[00-05]/log_a.csv', 'baselinedata/RUN_[00-05]/log_a.csv')
runs = r.multiply()
# Ensure the run's comparisons patterns are replaced with the expected value
self.assertEqual(len(runs), 6) # Expect 6 copies
self.assertEqual(runs[0].comparisons[0].test_data, 'testdata/RUN_00/log_a.csv')
self.assertEqual(runs[0].comparisons[0].baseline_data, 'baselinedata/RUN_00/log_a.csv')
self.assertEqual(runs[5].comparisons[0].test_data, 'testdata/RUN_05/log_a.csv')
self.assertEqual(runs[5].comparisons[0].baseline_data, 'baselinedata/RUN_05/log_a.csv')
def test_run__multiply_with_single_pattern_comparisons(self):
r = TrickWorkflow.Run(sim_dir='test/SIM_alloc_test', input_file='RUN_[00-05]/input.py',
binary='S_main_Linux_x86_64.exe')
# Compare many to one
r.add_comparison('testdata/RUN_[00-05]/log_common.csv', 'baselinedata/RUN_common/log_common.csv')
r.add_comparison('testdata/RUN_[00-05]/log_a.csv', 'baselinedata/RUN_[00-05]/log_a.csv')
runs = r.multiply()
# Ensure the run's comparisons patterns are replaced with the expected value
self.assertEqual(len(runs), 6) # Expect 6 copies
self.assertEqual(runs[0].comparisons[0].test_data, 'testdata/RUN_00/log_common.csv')
self.assertEqual(runs[0].comparisons[0].baseline_data, 'baselinedata/RUN_common/log_common.csv')
self.assertEqual(runs[0].comparisons[1].test_data, 'testdata/RUN_00/log_a.csv')
self.assertEqual(runs[0].comparisons[1].baseline_data, 'baselinedata/RUN_00/log_a.csv')
self.assertEqual(runs[5].comparisons[0].test_data, 'testdata/RUN_05/log_common.csv')
self.assertEqual(runs[5].comparisons[0].baseline_data, 'baselinedata/RUN_common/log_common.csv')
self.assertEqual(runs[5].comparisons[1].test_data, 'testdata/RUN_05/log_a.csv')
self.assertEqual(runs[5].comparisons[1].baseline_data, 'baselinedata/RUN_05/log_a.csv')
def test_run__multiply_with_mismatched_patterns(self):
r = TrickWorkflow.Run(sim_dir='test/SIM_alloc_test', input_file='RUN_[00-05]/input.py',
binary='S_main_Linux_x86_64.exe')
# Compare many to one
r.add_comparison('testdata/RUN_[00-06]/log_common.csv', 'baselinedata/RUN_common/log_common.csv')
with self.assertRaises(RuntimeError):
runs = r.multiply()
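Taken together, the multiply tests describe an expansion: one Run whose input_file carries a [min-max] pattern becomes N Runs, any comparison path carrying the same pattern is rewritten per copy, paths without a pattern are shared as-is, and a comparison with a different pattern is an error. A standalone sketch of that logic (a hypothetical stand-in for the TrickOps implementation, using plain tuples instead of Run/Comparison objects):

```python
import re

RANGE = re.compile(r'\[(\d+)-(\d+)\]')

def multiply(input_file, comparisons):
    """Expand a [min-max] run into copies; comparisons is a list of
    (test_data, baseline_data) path pairs. Returns a list of
    (expanded_input_file, expanded_comparisons) tuples."""
    matches = RANGE.findall(input_file)
    if len(matches) != 1:
        raise RuntimeError("Exactly one [min-max] pattern expected: %s" % input_file)
    lo, hi = matches[0]
    if len(lo) != len(hi) or int(lo) >= int(hi):
        raise RuntimeError("Bad [min-max] range: %s" % input_file)
    token = "[%s-%s]" % (lo, hi)
    runs = []
    for i in range(int(lo), int(hi) + 1):
        num = str(i).zfill(len(lo))
        new_comparisons = []
        for test_data, baseline_data in comparisons:
            # A comparison may share the run's pattern or have none at all
            # (many-to-one), but a different pattern is a mismatch.
            for path in (test_data, baseline_data):
                other = RANGE.search(path)
                if other and other.group(0) != token:
                    raise RuntimeError("Mismatched pattern in comparison: %s" % path)
            new_comparisons.append((test_data.replace(token, num),
                                    baseline_data.replace(token, num)))
        runs.append((input_file.replace(token, num), new_comparisons))
    return runs
```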

@@ -0,0 +1,73 @@
import os, sys
import unittest
import pdb
from testconfig import this_trick, tests_dir
from TrickWorkflowYamlVerifier import *
def suite():
"""Create test suite from TrickWorkflowYamlVerifierTestCase unit test class and return"""
return unittest.TestLoader().loadTestsFromTestCase(TrickWorkflowYamlVerifierTestCase)
class TrickWorkflowYamlVerifierTestCase(unittest.TestCase):
def setUp(self):
pass
def tearDown(self):
pass
def _verify(self, config_file):
"""
Given a config_file, create a TrickWorkflowYamlVerifier, assert expectations,
and return the instance for further examination
"""
twyv = TrickWorkflowYamlVerifier(config_file=config_file)
twyv.verify()
sim_keys = [sim for sim in twyv.config.keys() if sim.startswith('SIM')]
self.assertTrue('globals' in twyv.config)
self.assertTrue('env' in twyv.config['globals'])
for sk in sim_keys:
self.assertTrue( isinstance( twyv.config[sk]['name' ], str))
self.assertTrue( isinstance( twyv.config[sk]['binary' ], str))
self.assertTrue( isinstance( twyv.config[sk]['build_args'], str) or
twyv.config[sk]['build_args'] is None )
self.assertTrue( isinstance( twyv.config[sk]['parallel_safety'], str))
self.assertTrue( twyv.config[sk]['description' ] is None)
self.assertTrue( isinstance( twyv.config[sk]['runs'], dict))
for run in twyv.config[sk]['runs']:
self.assertTrue( isinstance( twyv.config[sk]['runs'][run]['input'], str))
self.assertTrue( isinstance( twyv.config[sk]['runs'][run]['returns'], int))
self.assertTrue( isinstance( twyv.config[sk]['runs'][run]['valgrind'], str) or
(twyv.config[sk]['runs'][run]['valgrind'] is None) )
self.assertTrue( isinstance( twyv.config[sk]['runs'][run]['analyze'], str) or
(twyv.config[sk]['runs'][run]['analyze'] is None) )
self.assertTrue( isinstance( twyv.config[sk]['runs'][run]['phase'], int))
self.assertTrue( isinstance( twyv.config[sk]['runs'][run]['compare'], list))
self.assertTrue( isinstance( twyv.config[sk]['phase' ], int))
self.assertTrue( isinstance( twyv.config[sk]['path' ], str))
return twyv
def test_type_errors_in_config(self):
twyv = self._verify(config_file=os.path.join(tests_dir,"type_errors.yml"))
self.assertEqual(len(twyv.get_parsing_errors()), 21)
def test_no_SIM_dict_keys_in_config(self):
twyv = TrickWorkflowYamlVerifier(config_file=os.path.join(tests_dir,"errors_fatal2.yml"))
with self.assertRaises(RuntimeError):
twyv.verify()
self.assertEqual(len(twyv.get_parsing_errors()), 1)
def test_empty_config(self):
twyv = TrickWorkflowYamlVerifier(config_file=os.path.join(tests_dir,"empty.yml"))
with self.assertRaises(RuntimeError):
twyv.verify()
self.assertEqual(len(twyv.get_parsing_errors()), 1)
def test_nominal_config(self):
twyv = self._verify(config_file=os.path.join(tests_dir,"trick_sims.yml"))
sim_keys = [sim for sim in twyv.config.keys() if sim.startswith('SIM')]
non_sim_keys = [entry for entry in twyv.config.keys() if not entry.startswith('SIM')]
self.assertEqual(len(non_sim_keys), 2) # Expect 2 non-SIM.*: dict keys
self.assertEqual(len(sim_keys), 56) # Expect 56 SIM.*: dict keys
self.assertEqual(len(twyv.get_parsing_errors()), 0)

test/.gitignore
@@ -6,7 +6,7 @@ send_hs
varserver_log
log_*
chkpnt_*
MONTE_*
.S_library*
.icg_no_found
CP_out
@@ -25,4 +25,5 @@ trick.zip
jitlib
build
S_sie.json
*.ckpnt
MonteCarlo_Meta_data_output

@@ -0,0 +1,18 @@
monte_carlo.mc_master.activate("FAIL_IO_error")
print('*********************************************************************')
print('this message is expected:')
print(' DIAGNOSTIC: Fatal Error I/O error')
print(' Unable to open file Modified_data/nonexistent_file.txt for reading.')
print(' Required for variable test.x_file_lookup[0].')
print('*********************************************************************')
# try to open a nonexistent file
# code coverage for 'mc_variable_file.cc', lines 71-76.
mc_var = trick.MonteCarloVariableFile( "test.x_file_lookup[0]",
"Modified_data/nonexistent_file.txt",
3)
monte_carlo.mc_master.add_variable(mc_var)
trick.stop(1)

@@ -0,0 +1,19 @@
monte_carlo.mc_master.activate("FAIL_config_error")
print('************************************************************************************')
print('this message is expected:')
print(' DIAGNOSTIC: Fatal Error Configuration Error')
print(' In configuring the file for variable test.x_file_lookup[0], it was identified that')
print(' it was specified to draw data from column 4, but that the first')
print(' column was identified as having index 7.')
print('************************************************************************************')
# generate the error
mc_var = trick.MonteCarloVariableFile( "test.x_file_lookup[0]",
"Modified_data/datafile.txt",
4,
7)
monte_carlo.mc_master.add_variable(mc_var)
trick.stop(1)

@@ -0,0 +1,17 @@
monte_carlo.mc_master.activate("FAIL_duplicate_variable")
print('************************************************************')
print('this message is expected:')
print(' DIAGNOSTIC: Fatal Error Duplicated variable.')
print(' Attempted to add two settings for variable test.x_uniform.')
print(' Terminating to allow resolution of which setting to use.')
print('************************************************************')
mc_var = trick.MonteCarloVariableRandomUniform( "test.x_uniform", 0, 10, 20)
monte_carlo.mc_master.add_variable(mc_var)
# Add the variable twice to trigger the "Duplicated variable" fail in
# MonteCarloMaster::add_variable
monte_carlo.mc_master.add_variable(mc_var)
trick.stop(1)

@@ -0,0 +1,18 @@
monte_carlo.mc_master.activate("FAIL_illegal_config")
monte_carlo.mc_master.set_num_runs(1)
print('**********************************************************************************\n' +
'this message is expected:\n' +
' DIAGNOSTIC: Fatal Error Illegal configuration\n' +
'For variable test.x_normal the specified minimum allowable value (6) >= the specified maximum allowable value (3).\n' +
'One or both of the limits must be changed to generate a random value.\n' +
'**********************************************************************************')
mc_var = trick.MonteCarloVariableRandomNormal( "test.x_normal", 2, 10, 2)
# flip low and high values to generate the desired error
mc_var.truncate_low(6, trick.MonteCarloVariableRandomNormal.Absolute)
mc_var.truncate_high(3, trick.MonteCarloVariableRandomNormal.Absolute)
monte_carlo.mc_master.add_variable(mc_var)
trick.stop(1)

@@ -0,0 +1,24 @@
monte_carlo.mc_master.activate("FAIL_invalid_config")
print('******************************************************************************************')
print('this message is expected:')
print(' DIAGNOSTIC: Fatal Error Invalid configuration')
print(' Error in attempting to make test.x_file_lookup[1] be dependent on test.x_file_lookup[0].')
print(' test.x_file_lookup[1] cannot be marked as dependent when it has dependencies of its own.')
print(' The dependency hierarchy can only be one level deep.')
print('******************************************************************************************')
mc_var = trick.MonteCarloVariableFile( "test.x_file_lookup[0]",
"Modified_data/datafile.txt",
3)
monte_carlo.mc_master.add_variable(mc_var)
mc_var2 = trick.MonteCarloVariableFile( "test.x_file_lookup[1]",
"Modified_data/datafile.txt",
3)
# the next command is the source of the error!
mc_var2.register_dependent(mc_var)
monte_carlo.mc_master.add_variable(mc_var2)
trick.stop(1)

@@ -0,0 +1,17 @@
monte_carlo.mc_master.activate("FAIL_invalid_data_file")
print('*****************************************************************************')
print('this message is expected:')
print(' DIAGNOSTIC: Fatal Error Invalid data file')
print(' Data file Modified_data/empty_file.txt contains no recognized lines of data')
print(' Required for variable test.x_file_lookup[0].')
print('*****************************************************************************')
# try to open an empty file
mc_var = trick.MonteCarloVariableFile( "test.x_file_lookup[0]",
"Modified_data/empty_file.txt",
3)
monte_carlo.mc_master.add_variable(mc_var)
trick.stop(1)

@@ -0,0 +1,20 @@
monte_carlo.mc_master.activate("FAIL_malformed_data_file")
monte_carlo.mc_master.set_num_runs(1)
print('**************************************************************************************************')
print('this message is expected:')
print(' DIAGNOSTIC: Fatal Error Malformed data file')
print(' Data file for variable test.x_file_lookup[0] includes this line:')
print(' 0 1 2 3 4')
print(' Which has only 5 values.')
print(' Variable test.x_file_lookup[0] uses the value from position 9, which does not exist in this line')
print('**************************************************************************************************')
# generate the error
mc_var = trick.MonteCarloVariableFile( "test.x_file_lookup[0]",
"Modified_data/datafile.txt",
9)
monte_carlo.mc_master.add_variable(mc_var)
trick.stop(1)

@@ -0,0 +1,27 @@
import os, shutil
# remove write permission to the 'MONTE_IO_FAIL' directory
os.chmod("MONTE_IO_FAIL", 0o555)
monte_carlo.mc_master.activate("IO_FAIL")
monte_carlo.mc_master.generate_meta_data = True
monte_carlo.mc_master.set_num_runs(1)
print('*********************************************************************************')
print('these messages are expected:')
print(' Error I/O error')
print(' Unable to open the variable summary files for writing.')
print(' Dispersion summary will not be generated.')
print('')
print(' DIAGNOSTIC: Fatal Error I/O error')
print(' Unable to open file MONTE_FAIL_IO_error2/RUN_0/monte_input.py for writing.')
print('*********************************************************************************')
# this simulation attempts to create good data but with the target
# directory write-protected, it can't generate the required input files.
mc_var = trick.MonteCarloVariableFile( "test.x_file_lookup[0]",
"Modified_data/datafile.txt",
3)
monte_carlo.mc_master.add_variable(mc_var)
trick.stop(1)

@@ -0,0 +1,32 @@
import os
# remove write permission to the 'MONTE_IO_RUN_ERROR1' directory
os.chmod("MONTE_IO_RUN_ERROR1", 0o555)
monte_carlo.mc_master.activate("IO_RUN_ERROR1")
monte_carlo.mc_master.generate_meta_data = True
print('***********************************************************************************')
print('these messages are expected:')
print(' Error I/O error')
print(' Unable to open the variable summary files for writing.')
print(' Dispersion summary will not be generated.')
print('')
print(' Warning I/O error')
print(' Unable to open file MONTE_ERROR_IO_error/MonteCarlo_Meta_data_output for writing.')
print(' Aborting generation of meta-data.')
print('***********************************************************************************')
# this simulation attempts to create good data but with the target
# directory write-protected, it can't generate the (optional) summary files.
# NOTE - we avoid the terminal failure of not being able to generate the input
# files by having num_runs = 0, so none are attempted.
mc_var = trick.MonteCarloVariableFile( "test.x_file_lookup[0]",
"Modified_data/datafile.txt",
3)
monte_carlo.mc_master.add_variable(mc_var)
trick.add_read(0,"""
os.chmod('MONTE_RUN_ERROR_IO_error', 0o755)
""")
trick.stop(1)

@@ -0,0 +1,25 @@
import os
# remove write permission to the 'RUN_0' directory
os.chmod("MONTE_IO_RUN_ERROR2/RUN_0", 0o500)
monte_carlo.mc_master.activate("IO_RUN_ERROR2")
monte_carlo.mc_master.generate_meta_data = True
monte_carlo.mc_master.set_num_runs(1)
print('***********************************************************************************')
print('this message is expected:\n'+
' Error Output failure\n'+
' Failed to record summary data for run 0.')
print('***********************************************************************************')
# this simulation attempts to create good data but with the RUN_0
# directory write-protected, it can't record the (optional) per-run
# summary data, producing the expected 'Output failure' error.
mc_var = trick.MonteCarloVariableFile( "test.x_file_lookup[0]",
"Modified_data/datafile.txt",
3)
monte_carlo.mc_master.add_variable(mc_var)
trick.stop(1)

@@ -0,0 +1,20 @@
dr_group = trick.sim_services.DRAscii("test_data")
dr_group.set_cycle(1)
dr_group.freq = trick.sim_services.DR_Always
trick.add_data_record_group(dr_group, trick.DR_Buffer)
dr_group.add_variable( "test.x_uniform")
dr_group.add_variable( "test.x_normal")
for ii in range(5):
dr_group.add_variable( "test.x_normal_trunc[%d]" %ii)
dr_group.add_variable( "test.x_normal_length")
dr_group.add_variable( "test.x_integer")
dr_group.add_variable( "test.x_line_command")
for ii in range(3):
dr_group.add_variable( "test.x_file_command[%d]" %ii)
dr_group.add_variable( "test.x_boolean")
dr_group.add_variable( "test.x_file_lookup")
dr_group.add_variable( "test.x_fixed_value_double")
dr_group.add_variable( "test.x_fixed_value_int")
dr_group.add_variable( "test.x_semi_fixed_value")
dr_group.add_variable( "test.x_sdefine_routine_called")

@@ -0,0 +1,6 @@
dr_group = trick.sim_services.DRAscii("test_data")
dr_group.set_cycle(1)
dr_group.freq = trick.sim_services.DR_Always
trick.add_data_record_group(dr_group, trick.DR_Buffer)
dr_group.add_variable( "test.x_sdefine_routine_called")

@@ -0,0 +1 @@
*.txt

@@ -0,0 +1,30 @@
#!/usr/bin/env python3
import os, sys
sys.path.append(os.path.abspath("../../../share/trick/trickops/"))
from TrickWorkflow import *
class ExampleWorkflow(TrickWorkflow):
def __init__( self, quiet, trick_top_level=os.path.abspath("../../../")):
# Create the log directory if it does not already exist
logPath = os.path.abspath("./MCGTrickOpsLog")
if not os.path.isdir(logPath):
os.mkdir(logPath)
# Base Class initialize, this creates internal management structures
TrickWorkflow.__init__(self, project_top_level=trick_top_level,
log_dir=os.path.join(trick_top_level,'test/SIM_mc_generation/MCGTrickOps/MCGTrickOpsLog'),
trick_dir=trick_top_level,
config_file="test/SIM_mc_generation/MCGTrickOps/MCGenerationTest.yml",
cpus=3, quiet=quiet)
def run( self):
build_jobs = self.get_jobs(kind='build')
run_jobs = self.get_jobs(kind='run')
builds_status = self.execute_jobs(build_jobs, max_concurrent=3, header='Executing all sim builds.')
runs_status = self.execute_jobs(run_jobs, max_concurrent=1, header='Executing all sim runs.')
c = self.compare()
self.report() # Print Verbose report
self.status_summary() # Print a Succinct summary
return (builds_status or runs_status or self.config_errors or c)
if __name__ == "__main__":
ExampleWorkflow(quiet=True).run()

@@ -0,0 +1,256 @@
SIM_mc_generation:
path: test/SIM_mc_generation
runs:
RUN_nominal/input_a.py:
MONTE_RUN_nominal/RUN_000/monte_input_a.py:
MONTE_RUN_nominal/RUN_001/monte_input_a.py:
RUN_random_normal_truncate_abs/input.py:
MONTE_RUN_random_normal_truncate_abs/RUN_0/monte_input.py:
MONTE_RUN_random_normal_truncate_abs/RUN_1/monte_input.py:
MONTE_RUN_random_normal_truncate_abs/RUN_2/monte_input.py:
MONTE_RUN_random_normal_truncate_abs/RUN_3/monte_input.py:
MONTE_RUN_random_normal_truncate_abs/RUN_4/monte_input.py:
MONTE_RUN_random_normal_truncate_abs/RUN_5/monte_input.py:
MONTE_RUN_random_normal_truncate_abs/RUN_6/monte_input.py:
MONTE_RUN_random_normal_truncate_abs/RUN_7/monte_input.py:
MONTE_RUN_random_normal_truncate_abs/RUN_8/monte_input.py:
MONTE_RUN_random_normal_truncate_abs/RUN_9/monte_input.py:
RUN_random_normal_truncate_rel/input.py:
MONTE_RUN_random_normal_truncate_rel/RUN_0/monte_input.py:
MONTE_RUN_random_normal_truncate_rel/RUN_1/monte_input.py:
MONTE_RUN_random_normal_truncate_rel/RUN_2/monte_input.py:
MONTE_RUN_random_normal_truncate_rel/RUN_3/monte_input.py:
MONTE_RUN_random_normal_truncate_rel/RUN_4/monte_input.py:
MONTE_RUN_random_normal_truncate_rel/RUN_5/monte_input.py:
MONTE_RUN_random_normal_truncate_rel/RUN_6/monte_input.py:
MONTE_RUN_random_normal_truncate_rel/RUN_7/monte_input.py:
MONTE_RUN_random_normal_truncate_rel/RUN_8/monte_input.py:
MONTE_RUN_random_normal_truncate_rel/RUN_9/monte_input.py:
RUN_random_normal_truncate_sd/input.py:
MONTE_RUN_random_normal_truncate_sd/RUN_0/monte_input.py:
MONTE_RUN_random_normal_truncate_sd/RUN_1/monte_input.py:
MONTE_RUN_random_normal_truncate_sd/RUN_2/monte_input.py:
MONTE_RUN_random_normal_truncate_sd/RUN_3/monte_input.py:
MONTE_RUN_random_normal_truncate_sd/RUN_4/monte_input.py:
MONTE_RUN_random_normal_truncate_sd/RUN_5/monte_input.py:
MONTE_RUN_random_normal_truncate_sd/RUN_6/monte_input.py:
MONTE_RUN_random_normal_truncate_sd/RUN_7/monte_input.py:
MONTE_RUN_random_normal_truncate_sd/RUN_8/monte_input.py:
MONTE_RUN_random_normal_truncate_sd/RUN_9/monte_input.py:
RUN_random_normal__untruncate/input.py:
MONTE_RUN_random_normal__untruncate/RUN_0/monte_input.py:
MONTE_RUN_random_normal__untruncate/RUN_1/monte_input.py:
MONTE_RUN_random_normal__untruncate/RUN_2/monte_input.py:
MONTE_RUN_random_normal__untruncate/RUN_3/monte_input.py:
MONTE_RUN_random_normal__untruncate/RUN_4/monte_input.py:
MONTE_RUN_random_normal__untruncate/RUN_5/monte_input.py:
MONTE_RUN_random_normal__untruncate/RUN_6/monte_input.py:
MONTE_RUN_random_normal__untruncate/RUN_7/monte_input.py:
MONTE_RUN_random_normal__untruncate/RUN_8/monte_input.py:
MONTE_RUN_random_normal__untruncate/RUN_9/monte_input.py:
RUN_random_normal_untruncated/input.py:
MONTE_RUN_random_normal_untruncated/RUN_0/monte_input.py:
MONTE_RUN_random_normal_untruncated/RUN_1/monte_input.py:
MONTE_RUN_random_normal_untruncated/RUN_2/monte_input.py:
MONTE_RUN_random_normal_untruncated/RUN_3/monte_input.py:
MONTE_RUN_random_normal_untruncated/RUN_4/monte_input.py:
MONTE_RUN_random_normal_untruncated/RUN_5/monte_input.py:
MONTE_RUN_random_normal_untruncated/RUN_6/monte_input.py:
MONTE_RUN_random_normal_untruncated/RUN_7/monte_input.py:
MONTE_RUN_random_normal_untruncated/RUN_8/monte_input.py:
MONTE_RUN_random_normal_untruncated/RUN_9/monte_input.py:
RUN_random_uniform/input.py:
MONTE_RUN_random_uniform/RUN_0/monte_input.py:
MONTE_RUN_random_uniform/RUN_1/monte_input.py:
MONTE_RUN_random_uniform/RUN_2/monte_input.py:
MONTE_RUN_random_uniform/RUN_3/monte_input.py:
MONTE_RUN_random_uniform/RUN_4/monte_input.py:
MONTE_RUN_random_uniform/RUN_5/monte_input.py:
MONTE_RUN_random_uniform/RUN_6/monte_input.py:
MONTE_RUN_random_uniform/RUN_7/monte_input.py:
MONTE_RUN_random_uniform/RUN_8/monte_input.py:
MONTE_RUN_random_uniform/RUN_9/monte_input.py:
RUN_ERROR_file_inconsistent_skip/input.py:
MONTE_RUN_ERROR_file_inconsistent_skip/RUN_0/monte_input.py:
RUN_ERROR_invalid_call/input.py:
MONTE_RUN_ERROR_invalid_call/RUN_0/monte_input.py:
RUN_ERROR_invalid_name/input.py:
MONTE_RUN_ERROR_invalid_name/RUN_0/monte_input.py:
RUN_ERROR_invalid_sequence/input.py:
MONTE_RUN_ERROR_invalid_sequence/RUN_0/monte_input.py:
RUN_ERROR_invalid_sequencing/input.py:
MONTE_RUN_ERROR_invalid_sequencing/RUN_0/monte_input.py:
RUN_ERROR_out_of_domain_error/input.py:
MONTE_RUN_ERROR_out_of_domain_error/RUN_0/monte_input.py:
RUN_ERROR_random_value_truncation/input.py:
MONTE_RUN_ERROR_random_value_truncation/RUN_0/monte_input.py:
MONTE_RUN_ERROR_random_value_truncation/RUN_1/monte_input.py:
RUN_generate_meta_data_early/input.py:
RUN_file_sequential/input.py:
MONTE_RUN_file_sequential/RUN_0/monte_input.py:
MONTE_RUN_file_sequential/RUN_1/monte_input.py:
MONTE_RUN_file_sequential/RUN_2/monte_input.py:
MONTE_RUN_file_sequential/RUN_3/monte_input.py:
MONTE_RUN_file_sequential/RUN_4/monte_input.py:
MONTE_RUN_file_sequential/RUN_5/monte_input.py:
MONTE_RUN_file_sequential/RUN_6/monte_input.py:
MONTE_RUN_file_sequential/RUN_7/monte_input.py:
MONTE_RUN_file_sequential/RUN_8/monte_input.py:
MONTE_RUN_file_sequential/RUN_9/monte_input.py:
RUN_file_skip/input.py:
MONTE_RUN_file_skip/RUN_0/monte_input.py:
MONTE_RUN_file_skip/RUN_1/monte_input.py:
MONTE_RUN_file_skip/RUN_2/monte_input.py:
MONTE_RUN_file_skip/RUN_3/monte_input.py:
MONTE_RUN_file_skip/RUN_4/monte_input.py:
MONTE_RUN_file_skip/RUN_5/monte_input.py:
MONTE_RUN_file_skip/RUN_6/monte_input.py:
MONTE_RUN_file_skip/RUN_7/monte_input.py:
MONTE_RUN_file_skip/RUN_8/monte_input.py:
MONTE_RUN_file_skip/RUN_9/monte_input.py:
RUN_file_skip2/input.py:
MONTE_RUN_file_skip2/RUN_0/monte_input.py:
MONTE_RUN_file_skip2/RUN_1/monte_input.py:
MONTE_RUN_file_skip2/RUN_2/monte_input.py:
MONTE_RUN_file_skip2/RUN_3/monte_input.py:
MONTE_RUN_file_skip2/RUN_4/monte_input.py:
RUN_remove_variable/input.py:
RUN_WARN_config_error/input.py:
MONTE_RUN_WARN_config_error/RUN_0/monte_input.py:
RUN_WARN_invalid_name/input.py:
MONTE_RUN_WARN_invalid_name/RUN_0/monte_input.py:
RUN_WARN_overconstrained_config/input.py:
MONTE_RUN_WARN_overconstrained_config/RUN_0/monte_input.py:
FAIL_config_error/input.py:
returns: 1
FAIL_duplicate_variable/input.py:
returns: 1
FAIL_illegal_config/input.py:
returns: 1
FAIL_invalid_config/input.py:
returns: 1
FAIL_invalid_data_file/input.py:
returns: 1
FAIL_IO_error/input.py:
returns: 1
FAIL_malformed_data_file/input.py:
returns: 1
compare:
- test/SIM_mc_generation/verif_data/MonteCarlo_Meta_data_output vs. test/SIM_mc_generation/MonteCarlo_Meta_data_output
- test/SIM_mc_generation/verif_data/MONTE_RUN_nominal/RUN_000/monte_input_a.py vs. test/SIM_mc_generation/MONTE_RUN_nominal/RUN_000/monte_input_a.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_nominal/RUN_000/log_test_data.csv vs. test/SIM_mc_generation/MONTE_RUN_nominal/RUN_000/log_test_data.csv
- test/SIM_mc_generation/verif_data/MONTE_RUN_nominal/RUN_001/monte_input_a.py vs. test/SIM_mc_generation/MONTE_RUN_nominal/RUN_001/monte_input_a.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_nominal/RUN_001/log_test_data.csv vs. test/SIM_mc_generation/MONTE_RUN_nominal/RUN_001/log_test_data.csv
- test/SIM_mc_generation/verif_data/MONTE_RUN_nominal/monte_values_all_runs vs. test/SIM_mc_generation/MONTE_RUN_nominal/monte_values_all_runs
- test/SIM_mc_generation/verif_data/MONTE_RUN_nominal/monte_variables vs. test/SIM_mc_generation/MONTE_RUN_nominal/monte_variables
- test/SIM_mc_generation/verif_data/MONTE_RUN_nominal/MonteCarlo_Meta_data_output vs. test/SIM_mc_generation/MONTE_RUN_nominal/MonteCarlo_Meta_data_output
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_truncate_abs/monte_values_all_runs vs. test/SIM_mc_generation/MONTE_RUN_random_normal_truncate_abs/monte_values_all_runs
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_truncate_abs/RUN_0/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_truncate_abs/RUN_0/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_truncate_abs/RUN_1/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_truncate_abs/RUN_1/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_truncate_abs/RUN_2/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_truncate_abs/RUN_2/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_truncate_abs/RUN_3/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_truncate_abs/RUN_3/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_truncate_abs/RUN_4/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_truncate_abs/RUN_4/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_truncate_abs/RUN_5/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_truncate_abs/RUN_5/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_truncate_abs/RUN_6/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_truncate_abs/RUN_6/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_truncate_abs/RUN_7/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_truncate_abs/RUN_7/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_truncate_abs/RUN_8/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_truncate_abs/RUN_8/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_truncate_abs/RUN_9/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_truncate_abs/RUN_9/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_truncate_rel/monte_values_all_runs vs. test/SIM_mc_generation/MONTE_RUN_random_normal_truncate_rel/monte_values_all_runs
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_truncate_rel/RUN_0/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_truncate_rel/RUN_0/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_truncate_rel/RUN_1/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_truncate_rel/RUN_1/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_truncate_rel/RUN_2/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_truncate_rel/RUN_2/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_truncate_rel/RUN_3/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_truncate_rel/RUN_3/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_truncate_rel/RUN_4/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_truncate_rel/RUN_4/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_truncate_rel/RUN_5/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_truncate_rel/RUN_5/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_truncate_rel/RUN_6/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_truncate_rel/RUN_6/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_truncate_rel/RUN_7/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_truncate_rel/RUN_7/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_truncate_rel/RUN_8/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_truncate_rel/RUN_8/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_truncate_rel/RUN_9/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_truncate_rel/RUN_9/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_truncate_sd/monte_values_all_runs vs. test/SIM_mc_generation/MONTE_RUN_random_normal_truncate_sd/monte_values_all_runs
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_truncate_sd/RUN_0/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_truncate_sd/RUN_0/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_truncate_sd/RUN_1/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_truncate_sd/RUN_1/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_truncate_sd/RUN_2/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_truncate_sd/RUN_2/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_truncate_sd/RUN_3/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_truncate_sd/RUN_3/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_truncate_sd/RUN_4/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_truncate_sd/RUN_4/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_truncate_sd/RUN_5/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_truncate_sd/RUN_5/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_truncate_sd/RUN_6/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_truncate_sd/RUN_6/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_truncate_sd/RUN_7/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_truncate_sd/RUN_7/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_truncate_sd/RUN_8/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_truncate_sd/RUN_8/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_truncate_sd/RUN_9/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_truncate_sd/RUN_9/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal__untruncate/monte_values_all_runs vs. test/SIM_mc_generation/MONTE_RUN_random_normal__untruncate/monte_values_all_runs
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal__untruncate/monte_variables vs. test/SIM_mc_generation/MONTE_RUN_random_normal__untruncate/monte_variables
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal__untruncate/RUN_0/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal__untruncate/RUN_0/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal__untruncate/RUN_1/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal__untruncate/RUN_1/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal__untruncate/RUN_2/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal__untruncate/RUN_2/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal__untruncate/RUN_3/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal__untruncate/RUN_3/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal__untruncate/RUN_4/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal__untruncate/RUN_4/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal__untruncate/RUN_5/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal__untruncate/RUN_5/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal__untruncate/RUN_6/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal__untruncate/RUN_6/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal__untruncate/RUN_7/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal__untruncate/RUN_7/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal__untruncate/RUN_8/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal__untruncate/RUN_8/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal__untruncate/RUN_9/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal__untruncate/RUN_9/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_untruncated/monte_values_all_runs vs. test/SIM_mc_generation/MONTE_RUN_random_normal_untruncated/monte_values_all_runs
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_untruncated/RUN_0/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_untruncated/RUN_0/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_untruncated/RUN_1/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_untruncated/RUN_1/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_untruncated/RUN_2/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_untruncated/RUN_2/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_untruncated/RUN_3/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_untruncated/RUN_3/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_untruncated/RUN_4/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_untruncated/RUN_4/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_untruncated/RUN_5/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_untruncated/RUN_5/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_untruncated/RUN_6/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_untruncated/RUN_6/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_untruncated/RUN_7/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_untruncated/RUN_7/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_untruncated/RUN_8/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_untruncated/RUN_8/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_untruncated/RUN_9/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_untruncated/RUN_9/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_uniform/monte_values_all_runs vs. test/SIM_mc_generation/MONTE_RUN_random_uniform/monte_values_all_runs
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_uniform/RUN_0/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_uniform/RUN_0/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_uniform/RUN_1/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_uniform/RUN_1/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_uniform/RUN_2/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_uniform/RUN_2/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_uniform/RUN_3/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_uniform/RUN_3/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_uniform/RUN_4/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_uniform/RUN_4/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_uniform/RUN_5/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_uniform/RUN_5/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_uniform/RUN_6/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_uniform/RUN_6/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_uniform/RUN_7/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_uniform/RUN_7/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_uniform/RUN_8/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_uniform/RUN_8/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_uniform/RUN_9/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_uniform/RUN_9/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_ERROR_file_inconsistent_skip/RUN_0/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_ERROR_file_inconsistent_skip/RUN_0/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_ERROR_invalid_call/RUN_0/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_ERROR_invalid_call/RUN_0/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_ERROR_invalid_name/RUN_0/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_ERROR_invalid_name/RUN_0/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_ERROR_invalid_sequence/RUN_0/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_ERROR_invalid_sequence/RUN_0/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_ERROR_invalid_sequencing/RUN_0/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_ERROR_invalid_sequencing/RUN_0/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_ERROR_out_of_domain_error/RUN_0/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_ERROR_out_of_domain_error/RUN_0/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_ERROR_random_value_truncation/RUN_0/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_ERROR_random_value_truncation/RUN_0/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_ERROR_random_value_truncation/RUN_1/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_ERROR_random_value_truncation/RUN_1/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_generate_meta_data_early/monte_values_all_runs vs. test/SIM_mc_generation/MONTE_RUN_generate_meta_data_early/monte_values_all_runs
- test/SIM_mc_generation/verif_data/MONTE_RUN_generate_meta_data_early/RUN_0/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_generate_meta_data_early/RUN_0/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_file_sequential/monte_values_all_runs vs. test/SIM_mc_generation/MONTE_RUN_file_sequential/monte_values_all_runs
- test/SIM_mc_generation/verif_data/MONTE_RUN_file_sequential/RUN_0/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_file_sequential/RUN_0/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_file_sequential/RUN_1/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_file_sequential/RUN_1/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_file_sequential/RUN_2/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_file_sequential/RUN_2/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_file_sequential/RUN_3/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_file_sequential/RUN_3/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_file_sequential/RUN_4/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_file_sequential/RUN_4/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_file_sequential/RUN_5/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_file_sequential/RUN_5/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_file_sequential/RUN_6/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_file_sequential/RUN_6/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_file_sequential/RUN_7/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_file_sequential/RUN_7/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_file_sequential/RUN_8/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_file_sequential/RUN_8/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_file_sequential/RUN_9/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_file_sequential/RUN_9/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_file_skip/monte_values_all_runs vs. test/SIM_mc_generation/MONTE_RUN_file_skip/monte_values_all_runs
- test/SIM_mc_generation/verif_data/MONTE_RUN_file_skip/RUN_0/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_file_skip/RUN_0/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_file_skip/RUN_1/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_file_skip/RUN_1/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_file_skip/RUN_2/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_file_skip/RUN_2/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_file_skip/RUN_3/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_file_skip/RUN_3/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_file_skip/RUN_4/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_file_skip/RUN_4/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_file_skip/RUN_5/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_file_skip/RUN_5/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_file_skip/RUN_6/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_file_skip/RUN_6/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_file_skip/RUN_7/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_file_skip/RUN_7/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_file_skip/RUN_8/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_file_skip/RUN_8/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_file_skip/RUN_9/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_file_skip/RUN_9/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_file_skip2/monte_values_all_runs vs. test/SIM_mc_generation/MONTE_RUN_file_skip2/monte_values_all_runs
- test/SIM_mc_generation/verif_data/MONTE_RUN_file_skip2/RUN_0/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_file_skip2/RUN_0/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_file_skip2/RUN_1/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_file_skip2/RUN_1/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_file_skip2/RUN_2/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_file_skip2/RUN_2/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_file_skip2/RUN_3/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_file_skip2/RUN_3/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_file_skip2/RUN_4/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_file_skip2/RUN_4/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_remove_variable/RUN_both_variables/monte_variables vs. test/SIM_mc_generation/MONTE_RUN_remove_variable/RUN_both_variables/monte_variables
- test/SIM_mc_generation/verif_data/MONTE_RUN_remove_variable/RUN_one_variable/monte_variables vs. test/SIM_mc_generation/MONTE_RUN_remove_variable/RUN_one_variable/monte_variables
- test/SIM_mc_generation/verif_data/MONTE_RUN_WARN_config_error/RUN_0/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_WARN_config_error/RUN_0/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_WARN_invalid_name/RUN_0/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_WARN_invalid_name/RUN_0/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_WARN_overconstrained_config/RUN_0/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_WARN_overconstrained_config/RUN_0/monte_input.py

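The blocks of ten zero-padded `RUN_0` through `RUN_9` entries above are exactly the repetition the new `[min-max]` range notation described in this PR is intended to collapse. A hypothetical equivalent of one such block, assuming the bracket notation applies directly to `run:` entry paths (the exact syntax shown here is an illustration of the described feature, not copied from the file):

```yaml
# Hypothetical sketch: one [min-max] range entry standing in for the ten
# individual MONTE_RUN_file_skip/RUN_0 .. RUN_9 entries listed above.
# Padding in the min bound would define the zero-padding of the expansion.
MONTE_RUN_file_skip/RUN_[0-9]/monte_input.py:
```

The same notation is described as applying to `compare:` entries, which would shrink the per-run verification listings in this file considerably.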

@@ -0,0 +1,156 @@
SIM_mc_generation:
path: test/SIM_mc_generation
runs:
RUN_nominal/input_a.py:
RUN_random_normal_truncate_abs/input.py:
RUN_random_normal_truncate_rel/input.py:
RUN_random_normal_truncate_sd/input.py:
RUN_random_normal__untruncate/input.py:
RUN_random_normal_untruncated/input.py:
RUN_random_uniform/input.py:
RUN_ERROR_file_inconsistent_skip/input.py:
RUN_ERROR_invalid_call/input.py:
RUN_ERROR_invalid_name/input.py:
RUN_ERROR_invalid_sequence/input.py:
RUN_ERROR_invalid_sequencing/input.py:
RUN_ERROR_out_of_domain_error/input.py:
RUN_ERROR_random_value_truncation/input.py:
RUN_generate_meta_data_early/input.py:
RUN_file_sequential/input.py:
RUN_file_skip/input.py:
RUN_file_skip2/input.py:
RUN_remove_variable/input.py:
RUN_WARN_config_error/input.py:
RUN_WARN_invalid_name/input.py:
RUN_WARN_overconstrained_config/input.py:
FAIL_config_error/input.py:
returns: 1
FAIL_duplicate_variable/input.py:
returns: 1
FAIL_illegal_config/input.py:
returns: 1
FAIL_invalid_config/input.py:
returns: 1
FAIL_invalid_data_file/input.py:
returns: 1
FAIL_IO_error/input.py:
returns: 1
FAIL_malformed_data_file/input.py:
returns: 1
compare:
- test/SIM_mc_generation/verif_data/MonteCarlo_Meta_data_output vs. test/SIM_mc_generation/MonteCarlo_Meta_data_output
- test/SIM_mc_generation/verif_data/MONTE_RUN_nominal/RUN_000/monte_input_a.py vs. test/SIM_mc_generation/MONTE_RUN_nominal/RUN_000/monte_input_a.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_nominal/RUN_001/monte_input_a.py vs. test/SIM_mc_generation/MONTE_RUN_nominal/RUN_001/monte_input_a.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_nominal/monte_values_all_runs vs. test/SIM_mc_generation/MONTE_RUN_nominal/monte_values_all_runs
- test/SIM_mc_generation/verif_data/MONTE_RUN_nominal/monte_variables vs. test/SIM_mc_generation/MONTE_RUN_nominal/monte_variables
- test/SIM_mc_generation/verif_data/MONTE_RUN_nominal/MonteCarlo_Meta_data_output vs. test/SIM_mc_generation/MONTE_RUN_nominal/MonteCarlo_Meta_data_output
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_truncate_abs/monte_values_all_runs vs. test/SIM_mc_generation/MONTE_RUN_random_normal_truncate_abs/monte_values_all_runs
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_truncate_abs/RUN_0/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_truncate_abs/RUN_0/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_truncate_abs/RUN_1/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_truncate_abs/RUN_1/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_truncate_abs/RUN_2/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_truncate_abs/RUN_2/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_truncate_abs/RUN_3/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_truncate_abs/RUN_3/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_truncate_abs/RUN_4/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_truncate_abs/RUN_4/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_truncate_abs/RUN_5/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_truncate_abs/RUN_5/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_truncate_abs/RUN_6/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_truncate_abs/RUN_6/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_truncate_abs/RUN_7/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_truncate_abs/RUN_7/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_truncate_abs/RUN_8/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_truncate_abs/RUN_8/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_truncate_abs/RUN_9/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_truncate_abs/RUN_9/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_truncate_rel/monte_values_all_runs vs. test/SIM_mc_generation/MONTE_RUN_random_normal_truncate_rel/monte_values_all_runs
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_truncate_rel/RUN_0/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_truncate_rel/RUN_0/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_truncate_rel/RUN_1/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_truncate_rel/RUN_1/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_truncate_rel/RUN_2/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_truncate_rel/RUN_2/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_truncate_rel/RUN_3/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_truncate_rel/RUN_3/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_truncate_rel/RUN_4/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_truncate_rel/RUN_4/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_truncate_rel/RUN_5/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_truncate_rel/RUN_5/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_truncate_rel/RUN_6/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_truncate_rel/RUN_6/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_truncate_rel/RUN_7/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_truncate_rel/RUN_7/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_truncate_rel/RUN_8/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_truncate_rel/RUN_8/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_truncate_rel/RUN_9/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_truncate_rel/RUN_9/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_truncate_sd/monte_values_all_runs vs. test/SIM_mc_generation/MONTE_RUN_random_normal_truncate_sd/monte_values_all_runs
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_truncate_sd/RUN_0/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_truncate_sd/RUN_0/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_truncate_sd/RUN_1/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_truncate_sd/RUN_1/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_truncate_sd/RUN_2/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_truncate_sd/RUN_2/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_truncate_sd/RUN_3/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_truncate_sd/RUN_3/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_truncate_sd/RUN_4/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_truncate_sd/RUN_4/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_truncate_sd/RUN_5/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_truncate_sd/RUN_5/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_truncate_sd/RUN_6/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_truncate_sd/RUN_6/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_truncate_sd/RUN_7/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_truncate_sd/RUN_7/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_truncate_sd/RUN_8/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_truncate_sd/RUN_8/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_truncate_sd/RUN_9/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_truncate_sd/RUN_9/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal__untruncate/monte_values_all_runs vs. test/SIM_mc_generation/MONTE_RUN_random_normal__untruncate/monte_values_all_runs
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal__untruncate/monte_variables vs. test/SIM_mc_generation/MONTE_RUN_random_normal__untruncate/monte_variables
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal__untruncate/RUN_0/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal__untruncate/RUN_0/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal__untruncate/RUN_1/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal__untruncate/RUN_1/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal__untruncate/RUN_2/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal__untruncate/RUN_2/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal__untruncate/RUN_3/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal__untruncate/RUN_3/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal__untruncate/RUN_4/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal__untruncate/RUN_4/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal__untruncate/RUN_5/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal__untruncate/RUN_5/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal__untruncate/RUN_6/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal__untruncate/RUN_6/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal__untruncate/RUN_7/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal__untruncate/RUN_7/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal__untruncate/RUN_8/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal__untruncate/RUN_8/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal__untruncate/RUN_9/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal__untruncate/RUN_9/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_untruncated/monte_values_all_runs vs. test/SIM_mc_generation/MONTE_RUN_random_normal_untruncated/monte_values_all_runs
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_untruncated/RUN_0/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_untruncated/RUN_0/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_untruncated/RUN_1/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_untruncated/RUN_1/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_untruncated/RUN_2/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_untruncated/RUN_2/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_untruncated/RUN_3/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_untruncated/RUN_3/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_untruncated/RUN_4/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_untruncated/RUN_4/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_untruncated/RUN_5/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_untruncated/RUN_5/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_untruncated/RUN_6/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_untruncated/RUN_6/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_untruncated/RUN_7/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_untruncated/RUN_7/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_untruncated/RUN_8/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_untruncated/RUN_8/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_normal_untruncated/RUN_9/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_normal_untruncated/RUN_9/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_uniform/monte_values_all_runs vs. test/SIM_mc_generation/MONTE_RUN_random_uniform/monte_values_all_runs
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_uniform/RUN_0/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_uniform/RUN_0/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_uniform/RUN_1/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_uniform/RUN_1/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_uniform/RUN_2/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_uniform/RUN_2/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_uniform/RUN_3/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_uniform/RUN_3/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_uniform/RUN_4/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_uniform/RUN_4/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_uniform/RUN_5/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_uniform/RUN_5/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_uniform/RUN_6/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_uniform/RUN_6/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_uniform/RUN_7/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_uniform/RUN_7/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_uniform/RUN_8/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_uniform/RUN_8/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_random_uniform/RUN_9/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_random_uniform/RUN_9/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_ERROR_file_inconsistent_skip/RUN_0/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_ERROR_file_inconsistent_skip/RUN_0/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_ERROR_invalid_call/RUN_0/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_ERROR_invalid_call/RUN_0/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_ERROR_invalid_name/RUN_0/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_ERROR_invalid_name/RUN_0/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_ERROR_invalid_sequence/RUN_0/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_ERROR_invalid_sequence/RUN_0/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_ERROR_invalid_sequencing/RUN_0/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_ERROR_invalid_sequencing/RUN_0/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_ERROR_out_of_domain_error/RUN_0/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_ERROR_out_of_domain_error/RUN_0/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_ERROR_random_value_truncation/RUN_0/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_ERROR_random_value_truncation/RUN_0/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_ERROR_random_value_truncation/RUN_1/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_ERROR_random_value_truncation/RUN_1/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_generate_meta_data_early/monte_values_all_runs vs. test/SIM_mc_generation/MONTE_RUN_generate_meta_data_early/monte_values_all_runs
- test/SIM_mc_generation/verif_data/MONTE_RUN_generate_meta_data_early/RUN_0/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_generate_meta_data_early/RUN_0/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_file_sequential/monte_values_all_runs vs. test/SIM_mc_generation/MONTE_RUN_file_sequential/monte_values_all_runs
- test/SIM_mc_generation/verif_data/MONTE_RUN_file_sequential/RUN_0/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_file_sequential/RUN_0/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_file_sequential/RUN_1/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_file_sequential/RUN_1/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_file_sequential/RUN_2/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_file_sequential/RUN_2/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_file_sequential/RUN_3/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_file_sequential/RUN_3/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_file_sequential/RUN_4/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_file_sequential/RUN_4/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_file_sequential/RUN_5/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_file_sequential/RUN_5/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_file_sequential/RUN_6/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_file_sequential/RUN_6/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_file_sequential/RUN_7/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_file_sequential/RUN_7/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_file_sequential/RUN_8/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_file_sequential/RUN_8/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_file_sequential/RUN_9/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_file_sequential/RUN_9/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_file_skip/monte_values_all_runs vs. test/SIM_mc_generation/MONTE_RUN_file_skip/monte_values_all_runs
- test/SIM_mc_generation/verif_data/MONTE_RUN_file_skip/RUN_0/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_file_skip/RUN_0/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_file_skip/RUN_1/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_file_skip/RUN_1/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_file_skip/RUN_2/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_file_skip/RUN_2/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_file_skip/RUN_3/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_file_skip/RUN_3/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_file_skip/RUN_4/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_file_skip/RUN_4/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_file_skip/RUN_5/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_file_skip/RUN_5/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_file_skip/RUN_6/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_file_skip/RUN_6/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_file_skip/RUN_7/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_file_skip/RUN_7/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_file_skip/RUN_8/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_file_skip/RUN_8/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_file_skip/RUN_9/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_file_skip/RUN_9/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_file_skip2/monte_values_all_runs vs. test/SIM_mc_generation/MONTE_RUN_file_skip2/monte_values_all_runs
- test/SIM_mc_generation/verif_data/MONTE_RUN_file_skip2/RUN_0/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_file_skip2/RUN_0/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_file_skip2/RUN_1/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_file_skip2/RUN_1/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_file_skip2/RUN_2/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_file_skip2/RUN_2/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_file_skip2/RUN_3/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_file_skip2/RUN_3/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_file_skip2/RUN_4/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_file_skip2/RUN_4/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_remove_variable/RUN_both_variables/monte_variables vs. test/SIM_mc_generation/MONTE_RUN_remove_variable/RUN_both_variables/monte_variables
- test/SIM_mc_generation/verif_data/MONTE_RUN_remove_variable/RUN_one_variable/monte_variables vs. test/SIM_mc_generation/MONTE_RUN_remove_variable/RUN_one_variable/monte_variables
- test/SIM_mc_generation/verif_data/MONTE_RUN_WARN_config_error/RUN_0/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_WARN_config_error/RUN_0/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_WARN_invalid_name/RUN_0/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_WARN_invalid_name/RUN_0/monte_input.py
- test/SIM_mc_generation/verif_data/MONTE_RUN_WARN_overconstrained_config/RUN_0/monte_input.py vs. test/SIM_mc_generation/MONTE_RUN_WARN_overconstrained_config/RUN_0/monte_input.py

View File

@ -0,0 +1,2 @@
This file holds MONTE_FAIL_IO_error2 in the git repo so its permissions can be
modified to support the FAIL_IO_error2 test.

View File

@ -0,0 +1,2 @@
This file holds MONTE_ERROR_IO_error in the git repo so its permissions can be
modified to support the ERROR_IO_error test.

View File

@ -0,0 +1,21 @@
*************************** SUMMARY **************************
1 total assignments
- 0 constant values
- 0 calculated variables
- 1 prescribed (file-based) variables
- 0 random variables
- 0 files for execution
- 0 variables of undefined type
********************* LIST OF VARIABLES, TYPES****************
test.x_file_lookup[0], Prescribed
**************************************************************
***** LIST OF DATA FILES AND THE VARIABLES THEY POPULATE *****
******
Modified_data/datafile.txt
3 test.x_file_lookup[0]
**************************************************************

View File

@ -0,0 +1,7 @@
monte_carlo.mc_master.active = True
monte_carlo.mc_master.generate_dispersions = False
exec(open('IO_RUN_ERROR2/input.py').read())
monte_carlo.mc_master.monte_run_number = 0
test.x_file_lookup[0] = 2
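The generated monte_input.py above follows a fixed pattern: enable the master, disable dispersion generation, re-run the original input file, set the run number, then assign the dispersed values. A hedged Python sketch of how such a file could be assembled (illustrative only; the actual generation is performed by MonteCarloMaster in C++, and `make_monte_input` is a hypothetical helper name):

```python
def make_monte_input(input_py, run_number, assignments):
    """Assemble the text of a monte_input.py file (illustrative sketch).

    assignments is a list of (variable_name, value) pairs, one per
    dispersed variable for this run.
    """
    lines = [
        "monte_carlo.mc_master.active = True",
        "monte_carlo.mc_master.generate_dispersions = False",
        "exec(open('%s').read())" % input_py,
        "monte_carlo.mc_master.monte_run_number = %d" % run_number,
    ]
    # One assignment per dispersed variable:
    lines += ["%s = %r" % (name, value) for name, value in assignments]
    return "\n".join(lines) + "\n"

text = make_monte_input("IO_RUN_ERROR2/input.py", 0,
                        [("test.x_file_lookup[0]", 2)])
```

With these arguments, `text` reproduces the five-line file shown above.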

View File

@ -0,0 +1,2 @@
run_number
test.x_file_lookup[0],

View File

@ -0,0 +1,7 @@
0 1 2 3 4
# comment
10 11 12 13 14
20 21 22 23 24
30 31 32 33 34

View File

@ -0,0 +1,7 @@
0 1 2 3 4
# comment
10 11 12 13 14
20 21 22 23 24
30 31 32 33 34
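The datafile.txt above is whitespace-delimited, with `#` comment lines that are not counted as data rows. A minimal sketch of parsing such a file for a (row, column) lookup, assuming that layout (this is an illustration, not Trick's C++ MonteCarloVariableFile implementation):

```python
def read_data_rows(text):
    """Return non-comment, non-blank rows as lists of string tokens."""
    rows = []
    for line in text.splitlines():
        stripped = line.strip()
        if not stripped or stripped.startswith('#'):
            continue  # skip blank lines and '#' comments
        rows.append(stripped.split())
    return rows

def column_value(rows, row_index, column_index):
    """Value at (row, column), analogous to a file-based variable lookup."""
    return rows[row_index][column_index]

datafile = """0 1 2 3 4
# comment
10 11 12 13 14
20 21 22 23 24
30 31 32 33 34
"""
rows = read_data_rows(datafile)
```

Here `column_value(rows, 0, 3)` yields `'3'`; the comment line does not shift the row indices.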

View File

@ -0,0 +1,104 @@
mc_var = trick.MonteCarloVariableRandomUniform( "test.x_uniform", 0, 10, 20)
mc_var.thisown = False
monte_carlo.mc_master.add_variable(mc_var)
mc_var = trick.MonteCarloVariableRandomNormal( "test.x_normal", 2, 10, 2)
mc_var.thisown = False
monte_carlo.mc_master.add_variable(mc_var)
mc_var = trick.MonteCarloVariableRandomNormal( "test.x_normal_trunc[0]", 2, 10, 2)
mc_var.truncate(0.5, trick.MonteCarloVariableRandomNormal.StandardDeviation)
mc_var.thisown = False
monte_carlo.mc_master.add_variable(mc_var)
mc_var = trick.MonteCarloVariableRandomNormal( "test.x_normal_trunc[1]", 2, 10, 2)
mc_var.truncate(-0.5, 0.7, trick.MonteCarloVariableRandomNormal.Relative)
mc_var.thisown = False
monte_carlo.mc_master.add_variable(mc_var)
mc_var = trick.MonteCarloVariableRandomNormal( "test.x_normal_trunc[2]", 2, 10, 2)
mc_var.truncate(9.9,11, trick.MonteCarloVariableRandomNormal.Absolute)
mc_var.thisown = False
monte_carlo.mc_master.add_variable(mc_var)
mc_var = trick.MonteCarloVariableRandomNormal( "test.x_normal_trunc[3]", 2, 10, 2)
mc_var.truncate_low(9.9, trick.MonteCarloVariableRandomNormal.Absolute)
mc_var.thisown = False
monte_carlo.mc_master.add_variable(mc_var)
mc_var = trick.MonteCarloVariableRandomNormal( "test.x_normal_trunc[4]", 2, 10, 2)
mc_var.truncate_high(4, trick.MonteCarloVariableRandomNormal.Absolute)
mc_var.thisown = False
monte_carlo.mc_master.add_variable(mc_var)
mc_var = trick.MonteCarloVariableRandomNormal( "test.x_normal_length", 2, 10, 2)
mc_var.units = "ft"
mc_var.thisown = False
monte_carlo.mc_master.add_variable(mc_var)
mc_var = trick.MonteCarloVariableRandomUniformInt( "test.x_integer", 1, 0, 2)
mc_var.thisown = False
monte_carlo.mc_master.add_variable(mc_var)
mc_var = trick.MonteCarloVariableRandomStringSet( "test.x_string", 3)
mc_var.add_string("\"ABC\"")
mc_var.add_string("\"DEF\"")
mc_var.add_string("'GHIJKL'")
mc_var.thisown = False
monte_carlo.mc_master.add_variable(mc_var)
mc_var = trick.MonteCarloPythonLineExec( "test.x_line_command",
"test.x_integer * test.x_uniform")
mc_var.thisown = False
monte_carlo.mc_master.add_variable(mc_var)
mc_var = trick.MonteCarloPythonLineExec(
"test.standalone_function( test.x_normal)")
mc_var.thisown = False
monte_carlo.mc_master.add_variable(mc_var)
mc_var = trick.MonteCarloPythonFileExec( "Modified_data/sample.py")
mc_var.thisown = False
monte_carlo.mc_master.add_variable(mc_var)
mc_var = trick.MonteCarloVariableRandomBool( "test.x_boolean", 4)
mc_var.thisown = False
monte_carlo.mc_master.add_variable(mc_var)
# call this one mc_var1 because I'm going to use it as the seed for the
# MonteCarloVariableSemiFixed later.
mc_var1 = trick.MonteCarloVariableFile( "test.x_file_lookup[0]",
"Modified_data/datafile.txt",
3)
mc_var1.thisown = False
monte_carlo.mc_master.add_variable(mc_var1)
mc_var = trick.MonteCarloVariableFile( "test.x_file_lookup[1]",
"Modified_data/datafile.txt",
2)
mc_var.thisown = False
monte_carlo.mc_master.add_variable(mc_var)
mc_var = trick.MonteCarloVariableFile( "test.x_file_lookup[2]",
"Modified_data/datafile.txt",
1)
mc_var.thisown = False
monte_carlo.mc_master.add_variable(mc_var)
mc_var = trick.MonteCarloVariableFixed( "test.x_fixed_value_int", 7)
mc_var.thisown = False
monte_carlo.mc_master.add_variable(mc_var)
mc_var = trick.MonteCarloVariableFixed( "test.x_fixed_value_double", 7.0)
mc_var.thisown = False
monte_carlo.mc_master.add_variable(mc_var)
mc_var = trick.MonteCarloVariableFixed( "test.x_fixed_value_string", "\"7\"")
mc_var.thisown = False
monte_carlo.mc_master.add_variable(mc_var)
mc_var = trick.MonteCarloVariableSemiFixed( "test.x_semi_fixed_value", mc_var1 )
mc_var.thisown = False
monte_carlo.mc_master.add_variable(mc_var)
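The two-limit `truncate()` calls above take a units argument. A hedged sketch of the assumed mapping from each unit (StandardDeviation, Relative, Absolute) to absolute bounds on a normal distribution; this illustrates the semantics only and is not the Trick implementation (single-argument symmetric truncation is not covered):

```python
def absolute_bounds(mean, sigma, low, high, units):
    """Map a two-limit truncation spec to absolute bounds (assumed semantics)."""
    if units == 'StandardDeviation':
        # limits are multiples of sigma about the mean
        return mean + low * sigma, mean + high * sigma
    if units == 'Relative':
        # limits are offsets from the mean
        return mean + low, mean + high
    if units == 'Absolute':
        # limits are used as-is
        return low, high
    raise ValueError("unknown truncation units: %s" % units)

# e.g. truncate(-0.5, 0.7, Relative) on N(mean=10, sigma=2)
# gives bounds at mean - 0.5 and mean + 0.7, i.e. 9.5 and 10.7
lo, hi = absolute_bounds(10, 2, -0.5, 0.7, 'Relative')
```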

View File

@ -0,0 +1,3 @@
test.x_file_command[0] = 1
test.x_file_command[1] = monte_carlo.mc_master.monte_run_number
test.x_file_command[2] = test.x_file_command[0] + test.x_file_command[1]

View File

@ -0,0 +1,15 @@
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15

View File

@ -0,0 +1,15 @@
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30

View File

@ -0,0 +1,15 @@
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45

View File

@ -0,0 +1,44 @@
Verification simulation of monte_carlo.
The following tests rely on setting the directories to be non-writable. Their
purpose is to detect situations in which the monte-carlo model cannot generate
certain files. These tests by their very nature are difficult to run within an
automated scripted testing system.
IO_FAIL
IO_RUN_ERROR1
IO_RUN_ERROR2
The following cases are expecting either a warning or an error but
the simulation does not terminate. Instead, ZERO is returned.
The purpose of these cases is to emit the error or warning, not generate
viable datasets. Do not evaluate any of these cases for good dispersions.
There is no telling what state the data is in after the warning / error
message is emitted.
RUN_ERROR_file_inconsistent_skip
RUN_ERROR_invalid_call
RUN_ERROR_invalid_name
RUN_ERROR_invalid_sequence
RUN_ERROR_invalid_sequencing
RUN_ERROR_IO_error
RUN_ERROR_IO_error2
RUN_ERROR_out_of_domain_error
RUN_ERROR_random_value_truncation
RUN_WARN_config_error
RUN_WARN_invalid_name
RUN_WARN_overconstrained_config
The following cases emit a fatal error and the simulation halts in its tracks:
FAIL_config_error
FAIL_duplicate_variable
FAIL_illegal_config
FAIL_invalid_config
FAIL_invalid_data_file
FAIL_IO_error
FAIL_IO_error2
FAIL_malformed_data_file

View File

@ -0,0 +1,46 @@
monte_carlo.mc_master.activate("RUN_ERROR_file_inconsistent_skip")
monte_carlo.mc_master.set_num_runs(1)
print('*********************************************************************')
print('these messages are expected:')
print(' Error Invalid configuration')
print(' It is not permissible for two variables looking at the same file to')
print(' operate under different line-selection criteria.')
print(' test.x_file_lookup[1]')
print(' will be switched to the behavior of')
print(' test.x_file_lookup_rl[0],')
print(' which has a setting for the maximum number of lines to skip of 3')
print('')
print(' Error Invalid configuration')
print(' It is not permissible for two variables looking at the same file to')
print(' operate under different line-selection criteria.')
print(' test.x_file_lookup[2]')
print(' will be switched to the behavior of')
print(' test.x_file_lookup_rl[0],')
print(' which has a setting for the maximum number of lines to skip of 3')
print('*********************************************************************')
mc_var = trick.MonteCarloVariableFile( "test.x_file_lookup[0]",
"Modified_data/datafile.txt",
3)
mc_var.thisown = False
mc_var.max_skip = 3
monte_carlo.mc_master.add_variable(mc_var)
mc_var = trick.MonteCarloVariableFile( "test.x_file_lookup[1]",
"Modified_data/datafile.txt",
2)
mc_var.thisown = False
mc_var.max_skip = 2
monte_carlo.mc_master.add_variable(mc_var)
mc_var = trick.MonteCarloVariableFile( "test.x_file_lookup[2]",
"Modified_data/datafile.txt",
1)
mc_var.thisown = False
mc_var.max_skip = 1
monte_carlo.mc_master.add_variable(mc_var)
trick.stop(1)

View File

@ -0,0 +1,21 @@
monte_carlo.mc_master.activate("RUN_ERROR_invalid_call")
monte_carlo.mc_master.set_num_runs(1)
print('*********************************************************************')
print('this message is expected:')
print(' Error Invalid call')
print(' Attempted to register a dependent identified with NULL pointer with')
print(' the MonteCarloVariableFile for variable test.x_file_lookup[0].')
print(' This is not a valid action.')
print(' Registration failed, exiting without action.')
print('*********************************************************************')
mc_var = trick.MonteCarloVariableFile( "test.x_file_lookup[0]",
"Modified_data/datafile.txt",
3)
# the next command is the source of the error!
mc_var.register_dependent(None)
monte_carlo.mc_master.add_variable(mc_var)
trick.stop(1)

View File

@ -0,0 +1,15 @@
monte_carlo.mc_master.activate("RUN_ERROR_invalid_name")
monte_carlo.mc_master.set_num_runs(1)
print('*********************************************************************')
print('this message is expected:\n'+
' Error Invalid name\n' +
' Could not find MonteCarlo variable with name test.x_uniform.\n'+
' Returning a NULL pointer.')
print('*********************************************************************')
# empty monte carlo master without any variables.
# let's ask it to find us a variable.
monte_carlo.mc_master.find_variable("test.x_uniform")
trick.stop(1)

View File

@ -0,0 +1,37 @@
monte_carlo.mc_master.activate("RUN_ERROR_invalid_sequence")
monte_carlo.mc_master.set_num_runs(1)
print('*****************************************************************************************************************************')
print('three (3) types of messages are expected:')
print(' Error Invalid sequence')
print(' Attempted to set the number of runs to 10, but the input files have')
print(' already been generated.')
print('')
print(' Error Invalid sequence')
print(' Attempted to add a new variable test.x_normal to run RUN_invalid_sequence, but the input files have already been generated.')
print(' Cannot modify input files to accommodate this new variable.')
print(' Addition of variable rejected.')
print('')
print(' Error Invalid sequence')
print(' Attempted to generate a set of input files, but this action has')
print(' already been completed. Keeping the original set of input files.')
print(' Ignoring the later instruction.')
print('*****************************************************************************************************************************')
mc_var = trick.MonteCarloVariableRandomUniform( "test.x_uniform", 0, 10, 20)
mc_var.thisown = False
monte_carlo.mc_master.add_variable(mc_var)
# Trigger the "Invalid sequence" errors in MonteCarloMaster
monte_carlo.mc_master.prepare_input_files()
# Change run-number after prepping inputs
monte_carlo.mc_master.set_num_runs(10)
# Add a new variable after prepping inputs
mc_var = trick.MonteCarloVariableRandomNormal( "test.x_normal", 2, 10, 2)
mc_var.thisown = False
monte_carlo.mc_master.add_variable(mc_var)
trick.stop(1)

View File

@ -0,0 +1,18 @@
monte_carlo.mc_master.activate("RUN_ERROR_invalid_sequencing")
monte_carlo.mc_master.set_num_runs(1)
print('***********************************************************************************')
print('this message is expected:')
print(' Error Invalid sequencing')
print(' For variable test.x_semi_fixed_value, the necessary pre-dispersion to obtain the')
print(' random value for assignment has not been completed.')
print(' Cannot generate the assignment for this variable.')
print('***********************************************************************************')
mc_var = trick.MonteCarloVariableSemiFixed("test.x_semi_fixed_value",
test.mc_var_file)
mc_var.thisown = False
monte_carlo.mc_master.add_variable(mc_var)
trick.stop(1)

View File

@ -0,0 +1,18 @@
monte_carlo.mc_master.activate("RUN_ERROR_out_of_domain_error")
monte_carlo.mc_master.set_num_runs(1)
print('***********************************************************************')
print('these messages are expected:')
print(' Error Out-of-domain error')
print(' Negative double-sided truncation specified for variable test.x_normal')
print(' truncate() must receive either two limits or one positive limit!')
print(' Using absolute value of limit.')
print('***********************************************************************')
mc_var = trick.MonteCarloVariableRandomNormal( "test.x_normal" )
# the next command is the source of the error!
mc_var.truncate(-1.0)
monte_carlo.mc_master.add_variable(mc_var)
trick.stop(1)

View File

@ -0,0 +1,57 @@
monte_carlo.mc_master.activate("RUN_ERROR_random_value_truncation")
monte_carlo.mc_master.set_num_runs(2)
print('******************************************************************************************')
print('multiple error messages are expected:')
print(' Error Random value truncation failure')
print(' Could not generate a value for test.x_normal_trunc[#] within the specified domain within')
print(' the specified maximum number of tries (1).')
print(' Assuming a value equal to:')
print(' - midpoint value for a distribution truncated at both ends')
print(' - truncation value for a distribution truncated at only one end.')
print('')
print('NOTE: three tests are included here to test different code paths after error')
print(' message is emitted')
print('******************************************************************************************')
# give some crazy initial values to the class
mc_var = trick.MonteCarloVariableRandomNormal( "test.x_normal_trunc[0]", 9956453, 10, 3.5)
mc_var.thisown = False
# give very small truncate_low & truncate_high distribution values
mc_var.truncate_low(0.1)
mc_var.truncate_high(0.2)
# lower max_num_tries to an unreasonable number to generate the desired error
mc_var.max_num_tries = 1
monte_carlo.mc_master.add_variable(mc_var)
# note: this test also covers mc_variable_random_normal.cc lines 84-86
# in the same code section after the message
# give some crazy initial values to the class
mc_var = trick.MonteCarloVariableRandomNormal( "test.x_normal_trunc[1]", 99565644453, 10, 3.5)
mc_var.thisown = False
# give very small truncate_low distribution value
mc_var.truncate_low(0.1)
# lower max_num_tries to an unreasonable number to generate the desired error
mc_var.max_num_tries = 1
monte_carlo.mc_master.add_variable(mc_var)
# note: this test also covers mc_variable_random_normal.cc lines 78-80
# in the same code section after the message
# give some crazy initial values to the class
mc_var = trick.MonteCarloVariableRandomNormal( "test.x_normal_trunc[2]", 99565644453, 10, 3.5)
mc_var.thisown = False
# give negative truncate_high distribution value
mc_var.truncate_high(-50.2)
# lower max_num_tries to an unreasonable number to generate the desired error
mc_var.max_num_tries = 1
monte_carlo.mc_master.add_variable(mc_var)
# note: this test also covers mc_variable_random_normal.cc lines 81-83
# in the same code section after the message
trick.stop(1)

View File

@ -0,0 +1,20 @@
monte_carlo.mc_master.activate("RUN_WARN_config_error")
monte_carlo.mc_master.set_num_runs(1)
print('***************************************************************************************')
print('this message is expected:')
print(' Warning Configuration error')
print(' Zero truncation specified for variable test.x_normal which will produce a fixed point')
print('***************************************************************************************')
mc_var = trick.MonteCarloVariableRandomNormal( "test.x_normal", 24858569, 10, 2)
# this call generates the warning!
mc_var.truncate(0.0)
# fix up the values before generating the assignment to avoid spurious errors
mc_var.truncate(7.0, 13.0, trick.MonteCarloVariableRandomNormal.Absolute)
monte_carlo.mc_master.add_variable(mc_var)
trick.stop(1)

View File

@ -0,0 +1,22 @@
monte_carlo.mc_master.activate("RUN_WARN_invalid_name")
monte_carlo.mc_master.set_num_runs(1)
print('*********************************************************************************')
print('this message is expected:')
print(' Warning Invalid name')
print(' Attempt to remove MonteCarlo variable with name monte_carlo.not_found_variable FAILED.')
print(' Did not find a variable with that name.')
print('*********************************************************************************')
# give the simulation something to do
# add a variable into memory so the list is not empty when we go to
# remove a non-existent variable from memory.
mc_var = trick.MonteCarloVariableRandomUniform( "test.x_uniform", 0, 10, 20)
monte_carlo.mc_master.add_variable(mc_var)
# Trigger the "Invalid Name" warning in MonteCarloMaster
# by calling remove_variable() to remove a non-existent variable name
monte_carlo.mc_master.remove_variable("monte_carlo.not_found_variable")
trick.stop(1)

View File

@ -0,0 +1,19 @@
monte_carlo.mc_master.activate("RUN_WARN_overconstrained_config")
monte_carlo.mc_master.set_num_runs(1)
print('*******************************************************')
print('these messages are expected:')
print(' Warning Overconstrained configuration')
print(' For variable The distribution collapses to a point.')
print(' the specified minimum allowable value and')
print(' the specified maximum allowable value are equal (14).')
print('*******************************************************')
mc_var = trick.MonteCarloVariableRandomNormal( "test.x_normal", 2, 10, 2)
# the next two commands are needed to produce the warning!
mc_var.truncate_low(2.0)
mc_var.truncate_high(2.0)
monte_carlo.mc_master.add_variable(mc_var)
trick.stop(1)

View File

@ -0,0 +1,48 @@
monte_carlo.mc_master.activate("RUN_file_sequential")
monte_carlo.mc_master.set_num_runs(10)
mc_var0 = trick.MonteCarloVariableFile( "test.x_file_lookup[0]",
"Modified_data/datafile.txt",
3)
monte_carlo.mc_master.add_variable(mc_var0)
mc_var1 = trick.MonteCarloVariableFile( "test.x_file_lookup[1]",
"Modified_data/datafile.txt",
2)
monte_carlo.mc_master.add_variable(mc_var1)
mc_var2 = trick.MonteCarloVariableFile( "test.x_file_lookup[2]",
"Modified_data/datafile.txt",
1)
monte_carlo.mc_master.add_variable(mc_var2)
print("\nmc_var0.has_dependents() returns: " + str(mc_var0.has_dependents()))
print("mc_var0.get_column_number() returns: " + str(mc_var0.get_column_number()))
print("mc_var0.get_first_column_number() returns: " + str(mc_var0.get_first_column_number()))
print("mc_var0.get_filename() returns: '" + mc_var0.get_filename() + "'")
print("\nmc_var1.has_dependents() returns: " + str(mc_var1.has_dependents()))
print("mc_var1.get_column_number() returns: " + str(mc_var1.get_column_number()))
print("mc_var1.get_first_column_number() returns: " + str(mc_var1.get_first_column_number()))
print("mc_var1.get_filename() returns: '" + mc_var1.get_filename() + "'")
print("\nmc_var2.has_dependents() returns: " + str(mc_var2.has_dependents()))
print("mc_var2.get_column_number() returns: " + str(mc_var2.get_column_number()))
print("mc_var2.get_first_column_number() returns: " + str(mc_var2.get_first_column_number()))
print("mc_var2.get_filename() returns: '" + mc_var2.get_filename() + "'")
# call the parent class' "virtual int get_seed() const" method; it should return ZERO.
# code coverage for: mc_variable.hh, line 70
print("\ncode coverage for parent's get_seed() virtual method... should return ZERO.")
print("mc_var2.get_seed() returns: " + str(mc_var2.get_seed()))
# Check the validity of looking up a variable by name.
print("\ntesting 'find_variable' and 'get_variable_name' for test.x_file_lookup[0]: "+
"returns: " +
monte_carlo.mc_master.find_variable("test.x_file_lookup[0]").get_variable_name())
print("testing 'find_variable' and 'get_variable_name' for test.x_file_lookup[1]: "+
"returns: " +
monte_carlo.mc_master.find_variable("test.x_file_lookup[1]").get_variable_name() + "\n")
trick.stop(1)

View File

@ -0,0 +1,26 @@
monte_carlo.mc_master.activate("RUN_file_skip")
monte_carlo.mc_master.set_num_runs(10)
mc_var = trick.MonteCarloVariableFile( "test.x_file_lookup[0]",
"Modified_data/datafile.txt",
3)
mc_var.thisown = False
mc_var.max_skip = 3
monte_carlo.mc_master.add_variable(mc_var)
mc_var = trick.MonteCarloVariableFile( "test.x_file_lookup[1]",
"Modified_data/datafile.txt",
2)
mc_var.thisown = False
mc_var.max_skip = 3
monte_carlo.mc_master.add_variable(mc_var)
mc_var = trick.MonteCarloVariableFile( "test.x_file_lookup[2]",
"Modified_data/datafile.txt",
1)
mc_var.thisown = False
mc_var.max_skip = 3
monte_carlo.mc_master.add_variable(mc_var)
trick.stop(1)

View File

@ -0,0 +1,32 @@
monte_carlo.mc_master.activate("RUN_file_skip2")
# For regression testing, use 5 runs
# For verification, setting this value to 250 results in 2 duplications.
monte_carlo.mc_master.set_num_runs(5)
#monte_carlo.mc_master.set_num_runs(250)
mc_var = trick.MonteCarloVariableFile( "test.x_file_lookup[0]",
"Modified_data/single_col_1.txt",
0,
0)
mc_var.max_skip = 1
mc_var.thisown = False
monte_carlo.mc_master.add_variable(mc_var)
mc_var = trick.MonteCarloVariableFile( "test.x_file_lookup[1]",
"Modified_data/single_col_2.txt",
0,
0)
mc_var.max_skip = 2
mc_var.thisown = False
monte_carlo.mc_master.add_variable(mc_var)
mc_var = trick.MonteCarloVariableFile( "test.x_file_lookup[2]",
"Modified_data/single_col_3.txt",
0,
0)
mc_var.max_skip = 3
mc_var.thisown = False
monte_carlo.mc_master.add_variable(mc_var)
trick.stop(1)

View File

@ -0,0 +1,11 @@
monte_carlo.mc_master.activate("RUN_generate_meta_data_early")
monte_carlo.mc_master.set_num_runs(1)
monte_carlo.mc_master.generate_meta_data = True
monte_carlo.mc_master.input_file_name = "input.py"
exec(open("Modified_data/monte_variables.py").read())
# By running this early, the MonteCarlo_Meta_data_output file
# should end up in the sim directory instead of the MONTE_RUN..
# directory
monte_carlo.mc_master.collate_meta_data()

View File

@ -0,0 +1,48 @@
# The Monte Carlo tool uses a double execution of the S-main:
# - pass #1 uses the scenario input.py file to process the variables identified
# for dispersion. A specified number, N, of values {v_1, ..., v_N} is
# generated for each variable v, with the values constrained by the specified
# distribution of v; N is specified in the input file.
# A set of N files, {RUN_1/monte_input.py, ... , RUN_N/monte_input.py} is
# created, with each file containing one of the set of values for each
# variable. Once these files are generated, the simulation is complete for
# pass #1 and it terminates.
# - pass #2 uses one of the generated files (monte_input.py) as the input file
# for a regular execution of the simulation. There will typically be many
# executions of the sim, one for each of the generated monte_input.py files.
# This input file provides one example of how to test this two-pass process,
# although it is admittedly a bit convoluted and hard to read. TODO: Once
# TrickOps is capable of operating with this monte-carlo implementation, that
# framework can manage both the generation and local execution of generated
# monte_input.py files, removing the need for this type of "sim that launches a
# sim" test methodology -Jordan 10/2022
# For the purpose of expedient testing, we generate and run only 2 files.
# This is sufficient to demonstrate "multiple" without unnecessarily
# burning CPU time.
import os
exename = "S_main_" + os.getenv("TRICK_HOST_CPU") + ".exe"
# Pass #1 Generate the set of scenarios with unique dispersions
print("Processing Pass #1 for run RUN_nominal")
input_file = "RUN_nominal/input_a.py"
ret = os.system("./" + exename + " " + input_file)
if ret != 0:
trick.exec_terminate_with_return(1, "double_pass.py", 34, "Error running " + input_file)
# Pass #2 Run the scenarios. Logged data will go into each scenario's folder
print("")
print("")
print("Processing Pass #2 for run RUN_nominal")
for ii in range(2):
    input_file = "MONTE_RUN_nominal/RUN_00%d/monte_input_a.py" % ii
    print("**************** %s" % input_file)
    ret = os.system("./" + exename + " " + input_file)
    if ret != 0:
        trick.exec_terminate_with_return(1, "double_pass.py", 43, "Error running " + input_file)
# To be compatible with our current unit-sim framework, this file has to be a
# simulation input file. Therefore it needs a stop time so it doesn't run forever.
trick.stop(0.0)
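The "sim that launches a sim" pattern above can be sketched outside Trick with `subprocess` in place of `os.system`; this is an illustrative stand-in, not part of the test itself, and the executable path in the comment is hypothetical:

```python
import subprocess

def launch(exe, input_file):
    # Minimal stand-in for the os.system calls above: run one pass of the
    # sim with the given input file and report its exit status.
    result = subprocess.run([exe, input_file])
    return result.returncode

# Pass 1 would generate MONTE_RUN_*/RUN_NNN/monte_input files; pass 2 would
# then launch each generated file, e.g.:
#   for run in generated_runs:
#       if launch("./S_main_<cpu>.exe", run) != 0:
#           raise RuntimeError("Error running " + run)
```

Using `subprocess.run` rather than `os.system` avoids shell interpolation of the input path and makes the return code available directly.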

@@ -0,0 +1,37 @@
# Instruct sim to generate MC files for RUN_nominal.
# This could be done in a top-level MC-launch script
monte_carlo.mc_master.activate("RUN_nominal")
monte_carlo.mc_master.set_num_runs(2)
monte_carlo.mc_master.generate_meta_data = True
monte_carlo.mc_master.input_file_name = "input_a.py"
monte_carlo.mc_master.minimum_padding = 3
# Standard if-tests for a regular multi-purpose input file, allowing for an
# MC implementation of a general scenario.
# NOTE: in this case, the first test is redundant because this input file is
#       ALWAYS going to have mc-master be active. But this is likely to get
#       copied and used as a template.
# Quick breakdown:
# - if running with MC:
# (this test allows a general input file to have MC-specific content)
#
# - setup logging and any other MC-specific configurations
#
# - if generating dispersions, generate them.
# (This test separates out the execution of pass#1 (which generates the
# dispersions) from that of pass#2 (which executes with those
# dispersions). Without this test blocking the generation on pass#2, the
# dispersions would get regenerated for every actual run, which is
# completely unnecessary.)
if monte_carlo.mc_master.active:
    # Logging
    exec(open("Log_data/log_nominal.py").read())
    if monte_carlo.mc_master.generate_dispersions:
        exec(open("Modified_data/monte_variables.py").read())
trick.stop(1)

@@ -0,0 +1,22 @@
monte_carlo.mc_master.activate("RUN_random_normal__untruncate")
monte_carlo.mc_master.set_num_runs(10)
# generate a reference set of numbers (never truncated)
mc_var = trick.MonteCarloVariableRandomNormal( "test.x_normal_trunc[0]", 2, 10, 2)
mc_var.thisown = False
monte_carlo.mc_master.add_variable(mc_var)
mc_var = trick.MonteCarloVariableRandomNormal( "test.x_normal_trunc[1]", 2, 10, 2)
# signal an absolute truncation
mc_var.truncate(8, 12, trick.MonteCarloVariableRandomNormal.Absolute)
# changed my mind. no longer wish to truncate.
# this method turns off 'truncated_low' and 'truncated_high' indicators, leaving
# the original variable values alone!
# NOTE: the two values in this sim should match!
#
# code coverage for untruncate() method, mc_variable_random_normal.cc, lines 204-205
mc_var.untruncate()
mc_var.thisown = False
monte_carlo.mc_master.add_variable(mc_var)
trick.stop(1)
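Conceptually, absolute truncation keeps redrawing until a sample lands inside the bounds, and untruncate() simply drops that bounds check. A hedged Python sketch of the idea (an illustration only, not Trick's mc_variable_random_normal implementation):

```python
import random

def truncated_normal(rng, mean, std_dev, low, high):
    # Redraw until the sample falls inside the absolute bounds [low, high].
    while True:
        x = rng.gauss(mean, std_dev)
        if low <= x <= high:
            return x

rng = random.Random(2)  # seeded, like the variables above
samples = [truncated_normal(rng, 10, 2, 8, 12) for _ in range(100)]
# untruncate() corresponds to sampling without the bounds check:
unbounded = rng.gauss(10, 2)
```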

@@ -0,0 +1,34 @@
monte_carlo.mc_master.activate("RUN_random_normal_truncate_abs")
# Use 10 runs for regression comparison; use more (10,000) for confirming
# statistical distribution.
monte_carlo.mc_master.set_num_runs(10)
# should keep values between -10 and 10, exclusive
# this one calls 'truncate_low(-10)' and 'truncate_high(10)' to establish the
# truncation bounds, so we need to pivot around ZERO. Otherwise, if the mean
# were not ZERO, e.g. 25, the truncation bounds would be -25 and 25, which
# does not match the desired values of mean +/- 10.
mc_var = trick.MonteCarloVariableRandomNormal( "test.x_normal_trunc[0]", 11122, 0, 5)
mc_var.truncate(10, trick.MonteCarloVariableRandomNormal.Absolute)
mc_var.thisown = False
monte_carlo.mc_master.add_variable(mc_var)
# should keep values 72.5 thru 85.0, exclusive
mc_var = trick.MonteCarloVariableRandomNormal( "test.x_normal_trunc[1]", 77546, 75, 5)
mc_var.truncate(72.5, 85, trick.MonteCarloVariableRandomNormal.Absolute)
mc_var.thisown = False
monte_carlo.mc_master.add_variable(mc_var)
# should keep values greater than 90.0
mc_var = trick.MonteCarloVariableRandomNormal( "test.x_normal_trunc[2]", 60540, 100, 5)
mc_var.truncate_low(90, trick.MonteCarloVariableRandomNormal.Absolute)
mc_var.thisown = False
monte_carlo.mc_master.add_variable(mc_var)
# should keep values less than 135.0
mc_var = trick.MonteCarloVariableRandomNormal( "test.x_normal_trunc[3]", 77077, 125, 5)
mc_var.truncate_high(135, trick.MonteCarloVariableRandomNormal.Absolute)
mc_var.thisown = False
monte_carlo.mc_master.add_variable(mc_var)
trick.stop(1)

@@ -0,0 +1,33 @@
monte_carlo.mc_master.activate("RUN_random_normal_truncate_rel")
# Use 10 runs for regression comparison; use more (10,000) for confirming
# statistical distribution.
monte_carlo.mc_master.set_num_runs(10)
# should keep values between -10 and 10, exclusive
# this one computes mean+10, resulting in calls to
# 'truncate_low(-10)' and 'truncate_high(10)' to establish
# truncation bounds.
mc_var = trick.MonteCarloVariableRandomNormal( "test.x_normal_trunc[0]", 11122, 0, 5)
mc_var.truncate(10, trick.MonteCarloVariableRandomNormal.Relative)
mc_var.thisown = False
monte_carlo.mc_master.add_variable(mc_var)
# should keep values 72.5 thru 85.0, exclusive
mc_var = trick.MonteCarloVariableRandomNormal( "test.x_normal_trunc[1]", 77546, 75, 5)
mc_var.truncate(-2.5, 10, trick.MonteCarloVariableRandomNormal.Relative)
mc_var.thisown = False
monte_carlo.mc_master.add_variable(mc_var)
# should keep values greater than 90.0
mc_var = trick.MonteCarloVariableRandomNormal( "test.x_normal_trunc[2]", 60540, 100, 5)
mc_var.truncate_low(-10, trick.MonteCarloVariableRandomNormal.Relative)
mc_var.thisown = False
monte_carlo.mc_master.add_variable(mc_var)
# should keep values less than 135.0
mc_var = trick.MonteCarloVariableRandomNormal( "test.x_normal_trunc[3]", 77077, 125, 5)
mc_var.truncate_high(10, trick.MonteCarloVariableRandomNormal.Relative)
mc_var.thisown = False
monte_carlo.mc_master.add_variable(mc_var)
trick.stop(1)

@@ -0,0 +1,33 @@
monte_carlo.mc_master.activate("RUN_random_normal_truncate_sd")
# Use 10 runs for regression comparison; use more (10,000) for confirming
# statistical distribution.
monte_carlo.mc_master.set_num_runs(10)
# should keep values between -10 and 10, exclusive
# this one computes (2*std_dev)+mean, resulting in calls to
# 'truncate_low(-10)' and 'truncate_high(10)' to establish
# truncation bounds.
mc_var = trick.MonteCarloVariableRandomNormal( "test.x_normal_trunc[0]", 11122, 0, 5)
mc_var.truncate(2, trick.MonteCarloVariableRandomNormal.StandardDeviation)
mc_var.thisown = False
monte_carlo.mc_master.add_variable(mc_var)
# should keep values 72.5 thru 85.0, exclusive
mc_var = trick.MonteCarloVariableRandomNormal( "test.x_normal_trunc[1]", 77546, 75, 5)
mc_var.truncate(-0.5, 2, trick.MonteCarloVariableRandomNormal.StandardDeviation)
mc_var.thisown = False
monte_carlo.mc_master.add_variable(mc_var)
# should keep values greater than 90.0
mc_var = trick.MonteCarloVariableRandomNormal( "test.x_normal_trunc[2]", 60540, 100, 5)
mc_var.truncate_low(-2, trick.MonteCarloVariableRandomNormal.StandardDeviation)
mc_var.thisown = False
monte_carlo.mc_master.add_variable(mc_var)
# should keep values less than 135.0
mc_var = trick.MonteCarloVariableRandomNormal( "test.x_normal_trunc[3]", 77077, 125, 5)
mc_var.truncate_high(2, trick.MonteCarloVariableRandomNormal.StandardDeviation)
mc_var.thisown = False
monte_carlo.mc_master.add_variable(mc_var)
trick.stop(1)
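The three RUN_random_normal_truncate_* inputs differ only in how a single truncate() value resolves to low/high bounds. A sketch of that mapping, following the header comments in these inputs (the function name is illustrative, not Trick API):

```python
def resolve_bounds(mean, std_dev, value, mode):
    # Illustrative mapping of TruncationType to (low, high) bounds.
    if mode == "Absolute":
        # truncate(10, Absolute) -> truncate_low(-10), truncate_high(10)
        return (-value, value)
    if mode == "Relative":
        # truncate(10, Relative) -> bounds at mean -/+ 10
        return (mean - value, mean + value)
    if mode == "StandardDeviation":
        # truncate(2, StandardDeviation) -> bounds at mean -/+ 2*std_dev
        return (mean - value * std_dev, mean + value * std_dev)
    raise ValueError("unknown truncation type: " + mode)
```

With mean 0 and std_dev 5, all three of truncate(10, Absolute), truncate(10, Relative), and truncate(2, StandardDeviation) resolve to the same (-10, 10) bounds, which is why the three test files keep the same expected ranges in their comments.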

@@ -0,0 +1,26 @@
monte_carlo.mc_master.activate("RUN_random_normal_untruncated")
# Use 10 runs for regression comparison; use more (10,000) for confirming
# statistical distribution.
monte_carlo.mc_master.set_num_runs(10)
# normal distribution from approximately -17.5 to 18.1
mc_var = trick.MonteCarloVariableRandomNormal( "test.x_normal_trunc[0]", 11122, 0, 5)
mc_var.thisown = False
monte_carlo.mc_master.add_variable(mc_var)
# normal distribution from approximately 53.1 to 94.5
mc_var = trick.MonteCarloVariableRandomNormal( "test.x_normal_trunc[1]", 77546, 75, 5)
mc_var.thisown = False
monte_carlo.mc_master.add_variable(mc_var)
# normal distribution from approximately 81.4 to 119.75
mc_var = trick.MonteCarloVariableRandomNormal( "test.x_normal_trunc[2]", 60540, 100, 5)
mc_var.thisown = False
monte_carlo.mc_master.add_variable(mc_var)
# normal distribution from approximately 106.3 to 144.7
mc_var = trick.MonteCarloVariableRandomNormal( "test.x_normal_trunc[3]", 77077, 125, 5)
mc_var.thisown = False
monte_carlo.mc_master.add_variable(mc_var)
trick.stop(1)

@@ -0,0 +1,16 @@
monte_carlo.mc_master.activate("RUN_random_uniform")
# Use 10 runs for regression comparison; use more (10,000) for confirming
# statistical distribution.
monte_carlo.mc_master.set_num_runs(10)
# generate random uniformly distributed floating point values from 100.0 to 100,000.0
mc_var = trick.MonteCarloVariableRandomUniform( "test.x_uniform", 77545, 100.0, 100000.0)
mc_var.thisown = False
monte_carlo.mc_master.add_variable(mc_var)
# generate random uniformly distributed integer values from 100 to 100,000
mc_var = trick.MonteCarloVariableRandomUniformInt( "test.x_integer", 77001, 100, 100000)
mc_var.thisown = False
monte_carlo.mc_master.add_variable(mc_var)
trick.stop(1)

@@ -0,0 +1,33 @@
# The purpose of this test is to:
# - Execute the first input file to add two variables to MonteCarloMaster.
# - Execute the second input file to add the same variables to
#   MonteCarloMaster, but remove one of them before the monte carlo RUN
#   files are generated.
# - Compare the variable lists from the generated runs. If the lists are
#   identical, the variable was not removed and the test terminates with a
#   non-zero return.
import os
exename = "S_main_" + os.getenv("TRICK_HOST_CPU") + ".exe"
print("Processing 1st input file for run RUN_remove_variable")
input_file = "RUN_remove_variable/input_a.py"
ret = os.system("./" + exename + " " + input_file)
if ret != 0:
    trick.exec_terminate_with_return(1, "input.py", 16, "Error running " + input_file)
print("Processing 2nd input file for run RUN_remove_variable")
input_file = "RUN_remove_variable/input_b.py"
ret = os.system("./" + exename + " " + input_file)
if ret != 0:
    trick.exec_terminate_with_return(1, "input.py", 22, "Error running " + input_file)
print('Checking if the variable was successfully removed')
ret = os.system("diff -q MONTE_RUN_remove_variable/RUN_0/monte_variables MONTE_RUN_remove_variable/RUN_1/monte_variables > /dev/null")
if ret != 0:
    trick.exec_terminate_with_return(0, "input.py", 27, "variable successfully removed!")
else:
    trick.exec_terminate_with_return(1, "input.py", 29, "variable 'test.x_fixed_value_int' was not removed!")
# To be compatible with our current unit-sim framework, this file has to be a
# simulation input file. Therefore it needs a stop time so it doesn't run forever.
trick.stop(0.0)
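The shell `diff -q` check above can also be expressed in Python; a sketch using `filecmp`, with throwaway files standing in for the generated monte_variables lists:

```python
import filecmp
import os
import tempfile

def variable_lists_match(path_a, path_b):
    # shallow=False forces a byte-for-byte content comparison,
    # mirroring `diff -q`
    return filecmp.cmp(path_a, path_b, shallow=False)

# Demonstration with stand-in files (contents are illustrative)
tmp = tempfile.mkdtemp()
run0 = os.path.join(tmp, "run0_monte_variables")
run1 = os.path.join(tmp, "run1_monte_variables")
with open(run0, "w") as f:
    f.write("test.x_uniform\ntest.x_fixed_value_int\n")
with open(run1, "w") as f:
    f.write("test.x_uniform\n")  # the removed variable is absent
variable_removed = not variable_lists_match(run0, run1)
```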

@@ -0,0 +1,7 @@
monte_carlo.mc_master.activate("RUN_remove_variable/RUN_both_variables")
monte_carlo.mc_master.set_num_runs(1)
monte_carlo.mc_master.input_file_name = "input_a.py"
exec(open("RUN_remove_variable/variable_list.py").read())
trick.stop(1)

@@ -0,0 +1,11 @@
monte_carlo.mc_master.activate("RUN_remove_variable/RUN_one_variable")
monte_carlo.mc_master.set_num_runs(1)
monte_carlo.mc_master.input_file_name = "input_b.py"
exec(open("RUN_remove_variable/variable_list.py").read())
# execute remove_variable() (success path)
# code coverage for mc_master.cc, remove_variable(), lines 288-289
monte_carlo.mc_master.remove_variable("test.x_fixed_value_int")
trick.stop(1)

@@ -0,0 +1,8 @@
mc_var = trick.MonteCarloVariableRandomUniform( "test.x_uniform", 0, 10, 20)
mc_var.thisown = False
monte_carlo.mc_master.add_variable(mc_var)
mc_var = trick.MonteCarloVariableFixed( "test.x_fixed_value_int", 7)
mc_var.thisown = False
monte_carlo.mc_master.add_variable(mc_var)

@@ -0,0 +1,58 @@
/*****************************************************************************
PURPOSE: SIM object used to exercise the features and failure cases of the
MonteCarloGeneration model.
PROGRAMMERS:
(((Isaac Reaves) (NASA) (November 2022) (Integration into Trick Core)))
*****************************************************************************/
#include "sim_objects/default_trick_sys.sm"
#include "sim_objects/MonteCarloGenerate.sm"
/*****************************************************************************
MC_Test sim object
****************************************************************************/
class MC_TestSimObject : public Trick::SimObject
{
 public:
  double x_uniform;
  double x_normal;
  double x_normal_trunc[5];
  double x_normal_length; // (m) Dispersed in ft.
  double x_line_command;
  double x_file_command[3];
  double x_file_lookup[3];
  double x_fixed_value_double;
  double x_semi_fixed_value;
  int x_fixed_value_int;
  int x_integer;
  bool x_boolean;
  std::string x_string;
  std::string x_fixed_value_string;
  int x_sdefine_routine_called;
  MonteCarloVariableFile mc_var_file;

  MC_TestSimObject()
    :
    mc_var_file("test.x_file_lookup[0]", "Modified_data/datafile.txt", 3)
  {
    ("initialization") monte_carlo.generate_dispersions();
    (1.0,"environment") output_strings();
  };

  // standalone_function is used to test using an instance of
  // MonteCarloPythonLineExec to execute a standalone function.
  void standalone_function( double value)
  {
    std::cout << "\nStandalone_function received a value of " << value << "\n";
    x_sdefine_routine_called = 1;
  }

 private:
  void output_strings() {
    std::cout << "\nstrings : " << x_string << " : " << x_fixed_value_string << "\n";
  }
  MC_TestSimObject( const MC_TestSimObject&);
  MC_TestSimObject & operator= ( const MC_TestSimObject&);
};
MC_TestSimObject test;

@@ -0,0 +1,8 @@
TRICK_CFLAGS += -g -Wall -Wextra
TRICK_CXXFLAGS += -g -std=c++11 -Wall -Wextra
# We can't yet make warnings errors on macOS, because macOS
# deprecates and warns about sprintf, but SWIG still generates
# code containing sprintf.
ifneq ($(TRICK_HOST_TYPE), Darwin)
TRICK_CXXFLAGS += -Werror -Wno-stringop-truncation
endif

@@ -0,0 +1,7 @@
monte_carlo.mc_master.active = True
monte_carlo.mc_master.generate_dispersions = False
exec(open('RUN_ERROR_invalid_call/input.py').read())
monte_carlo.mc_master.monte_run_number = 0
test.x_file_lookup[0] = 2
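A generated monte_input file always has the same shape as the one above: enable the master, disable dispersion generation, replay the original input file, then set the run number and apply the per-run values. A hypothetical generator sketch (the helper name and file layout are illustrative; `zfill` plays the role of `mc_master.minimum_padding`):

```python
import os
import tempfile

def write_monte_input(monte_dir, run_number, source_input, assignments,
                      padding=3):
    # Hypothetical generator mirroring the generated file shown above.
    run_dir = os.path.join(monte_dir, "RUN_" + str(run_number).zfill(padding))
    os.makedirs(run_dir, exist_ok=True)
    path = os.path.join(run_dir, "monte_input.py")
    with open(path, "w") as f:
        f.write("monte_carlo.mc_master.active = True\n")
        f.write("monte_carlo.mc_master.generate_dispersions = False\n")
        f.write("exec(open('%s').read())\n" % source_input)
        f.write("monte_carlo.mc_master.monte_run_number = %d\n" % run_number)
        for name, value in assignments:
            f.write("%s = %r\n" % (name, value))
    return path

demo = write_monte_input(tempfile.mkdtemp(), 0, "RUN_nominal/input_a.py",
                         [("test.x_file_lookup[0]", 2)])
```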
