In the experiment mode you get access to the experiment's:
- Controls — use them to manage the experiment and its settings.
- Outputs — the results of the experiment, one item per experiment run.
- Settings — the experiment's configurable parameters.
The dashboard in this mode contains statistics gathered during the experiment run.
To switch to the experiment mode
- Click the required experiment type.
You will see its controls, settings, and result items. The dashboard below the map will now contain statistics gathered during the experiment run.
To switch to the supply chain mode
- Click the MRP Inventory Policy tile.
The dashboard below the map will now contain the input tables.
Experiment status indicators are shown next to the experiment icon:
|Experiment settings are correct.|
|Experiment is running.|
|Experiment is paused.|
|The experiment settings contain errors. The list of errors opens over the map area. If the list is closed, click the icon on the supply chain mode tile to reopen it.|
The set of controls differs depending on the experiment type:
|Run||Starts the experiment (replaced by Pause while the experiment is running).|
|Pause||Pauses the experiment (replaced by Run if the experiment has not been launched or is paused).|
|Run in virtual time mode without animation||(Active only if the experiment is not running) starts the experiment and runs it without displaying the GIS map.|
|Stop||(Active only if the experiment is launched or paused) stops the experiment.|
|Speed slider||(Active only if the experiment is launched) sets the experiment execution speed as a scale ratio between model time and real time.
The value defines the number of model time units (days) corresponding to one second of real time. The x1 value sets the execution speed to one model day per second, the x2 value to two model days per second, and so on.
The max value sets the execution speed to the maximum. Use it when you need to simulate a model over an extended period and the model does not require a fixed correspondence between model time and real-time units.
The x0 value sets the execution speed to zero, which pauses the experiment. You can resume it by setting the slider to a non-zero value.|
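The slider is a simple linear scale. As a rough illustration (this is plain arithmetic, not part of the product's API; the helper function name is hypothetical), the amount of model time that passes for a given real-time span can be sketched as:

```python
def model_days_elapsed(speed: float, real_seconds: float) -> float:
    """Model days that pass during a span of real time.

    speed: the slider multiplier (x0 pauses, x1 = one model day
    per second, x2 = two model days per second, and so on).
    Hypothetical helper for illustration only.
    """
    return speed * real_seconds

# At x2, 30 real seconds advance the model by 60 model days.
assert model_days_elapsed(2, 30) == 60

# At x0 no model time passes, which is why x0 pauses the run.
assert model_days_elapsed(0, 30) == 0
```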
|Show settings||Shows or hides the selected experiment's settings.|
|Show outputs||Collapses/Expands the list of received outputs.|
If the Outputs section is not open, click Show outputs in the experiment controls.
Here you can find the list of experiment outputs per experiment run.
Each output item comprises:
|Output name||The output's name (Statistics/Result by default). You can define a custom name if required.|
|Show settings||Shows experiment settings that were used for this output.|
|Delete||Deletes this output.|
|Result options||(Enabled by default; available only in Network Optimization results) shows the Result Options table with the result's best solution(s).|
To observe results
- Click the required result to observe its details in the statistics dashboard and on the map (if it can be displayed on the map).
If the experiment offers more than one solution, the selected result item will have the Result options control.
To convert an output to a scenario
- Right-click the required output to open a pop-up menu.
- Select the Convert to a new scenario menu item. The Convert result dialog box will open.
- Define the desired name for the new scenario and choose the type of scenario the output should be converted to.
- Click OK to convert the output.
To rename an output item
- Right-click the output that you wish to rename and select Rename from the context menu to open the editing box.
- Type the desired name in the Result name field.
- Click Save to save the changes and close the dialog box.
To delete an output
- Click Delete on the required output item. A dialog box will pop up prompting you to confirm the action.
- Click OK to delete the output.
If the Settings window is not open, click Show settings in the experiment controls.
The header of the settings of simulation-based experiments (except Safety stock estimation) additionally contains the Statistics Configuration control, which opens the list of statistics that can be collected in the dashboard during the experiment run.