# Evaluating Results

This guide explains how to use the evaluation pipeline located in `ctrl_optim/results/evaluation/` to analyze and visualize the results of your optimization runs.
## Overview

The evaluation module provides two primary ways to analyze results:

- **Graphical User Interface (GUI)** for interactive, single-run analysis or batch processing of multiple results folders (Windows).
- **Command-Line Interface** that uses a JSON configuration file for scriptable or batch processing (all platforms).

The primary output for both methods includes simulation videos, kinematics plots, and detailed analysis reports.
## Quick Start

### Using run_eval.py

The evaluation framework uses a unified approach, with `run_eval.py` as the main entry point:

```bash
# Navigate to the ctrl_optim directory
cd ctrl_optim

# Run the evaluation pipeline
python run_eval.py
```
This launches the GUI (on Windows) or provides command-line options on other platforms.
## Method 1: Using the GUI (Windows)

**Note:** The GUI does not work on macOS due to tkinter compatibility issues. Mac users should use Method 2 (Command-Line) instead.

### Launching the GUI

```bash
cd ctrl_optim
python run_eval.py
```

This launches the "Controller Optimization Evaluation Pipeline" window.
### GUI Options

The GUI is divided into sections that allow you to configure how the results are evaluated.

#### 1. Select Results Folder(s)

This is the primary input for the tool.

- **Add Folder(s)**: Opens a dialog to select one or more results folders (e.g., `results/optim_results/baseline_date_time`). You can select multiple folders for batch processing.
- **Clear**: Removes all selected folders from the list.

When a folder is selected, the tool automatically finds the associated configuration file (`.bat` or `.sh`) and the `_Best.txt` or `_BestLast.txt` parameter files.
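Because this discovery step is just a filename convention, you can verify a folder by hand. The sketch below mirrors the same matching logic; the function name and the use of `glob` are illustrative, not the pipeline's internal code:

```python
from pathlib import Path

def find_run_files(results_dir):
    """Illustrative sketch of the results-folder filename convention."""
    root = Path(results_dir)
    return {
        # Launch script saved alongside the run (.bat on Windows, .sh elsewhere)
        "config": sorted(root.glob("*.bat")) + sorted(root.glob("*.sh")),
        "best": sorted(root.glob("*_Best.txt")),          # best parameters overall
        "bestlast": sorted(root.glob("*_BestLast.txt")),  # best of final population
    }

print(find_run_files("results/optim_results/baseline_date_time"))
```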
#### 2. Environment Configuration

This section displays the loaded environment configuration. You may also override the environment settings that were used during the original optimization. **Note:** Overriding environment settings is not recommended, since the optimized results are environment-dependent, but this feature can be used to swap models or to test other environments with your optimized parameters.
- **Model**: The musculoskeletal model to use (e.g., `tutorial`, `baseline`, `dephy`)
- **Mode**: 2D or 3D simulation mode
- **Slope**: Terrain slope in degrees
- **Max Torque**: Maximum exoskeleton torque
- **Init Pose**: Initial walking pose
- **Boolean Options**: Delayed controller, exoskeleton on/off, fixed exo profile, 4-parameter spline
#### 3. Parameter Types to Evaluate

Choose which parameter files to evaluate:

- **Best**: Evaluate the best parameters found during optimization (`_Best.txt`)
- **BestLast**: Evaluate the best parameters from the final population (`_BestLast.txt`)
#### 4. Evaluation Mode

This defines the level of detail in the output:

- **Short**: 5-second simulation with video and kinematics plot
- **Long**: 10-second simulation with video and kinematics plot
#### 5. Output Directory

Specify where the evaluation results will be saved. Results are automatically organized into timestamped folders.
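The folder names follow the `MMDD_HHMM` pattern shown under Output Structure below. If you need to predict or reproduce a folder name in your own scripts, the naming scheme amounts to the following sketch (the pipeline's internal code may differ):

```python
from datetime import datetime
from pathlib import Path

# Reproduce the MMDD_HHMM naming used for evaluation outputs (see "Output Structure")
stamp = datetime.now().strftime("%m%d_%H%M")          # e.g. "0701_1200"
out_dir = Path("results/evaluation_outputs") / stamp  # results/evaluation_outputs/0701_1200
print(out_dir)
```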
## Method 2: Command-Line with JSON (All Platforms)

For automated workflows or macOS users, you can run the evaluation script from the command line using a JSON configuration file.

### Running from the Command Line

```bash
cd ctrl_optim
python run_eval.py --config path/to/your_config.json
```
### JSON Configuration File

Create a `.json` file to specify the evaluation parameters. See `ctrl_optim/results/evaluation/eval_config/example_config.json` for a template.

Example `config.json`:

```json
{
  "results_dir": "results/optim_results/baseline_0701_1200",
  "evaluation_mode": "short",
  "output_dir": "results/evaluation_outputs",
  "include_best": true,
  "include_bestlast": true
}
```
Example batch processing with multiple directories:

```json
{
  "results_dirs": [
    "results/optim_results/baseline_0701_1200",
    "results/optim_results/exo_4param_0702_1400",
    "results/optim_results/exo_npoint_0703_1600"
  ],
  "evaluation_mode": "short",
  "output_dir": "results/evaluation_outputs",
  "include_best": true,
  "include_bestlast": false
}
```
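With many runs, you can also generate a batch config programmatically instead of listing paths by hand. A sketch, assuming your run folders live under `results/optim_results`; the output filename `batch_config.json` is arbitrary:

```python
import json
from pathlib import Path

# Collect every run folder under results/optim_results; adjust the filter to your naming
runs = sorted(str(p) for p in Path("results/optim_results").iterdir() if p.is_dir())

config = {
    "results_dirs": runs,
    "evaluation_mode": "short",
    "output_dir": "results/evaluation_outputs",
    "include_best": True,
    "include_bestlast": False,
}

Path("batch_config.json").write_text(json.dumps(config, indent=2))
```

Then run it with `python run_eval.py --config batch_config.json`.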
### Configuration Parameters

- `results_dir` or `results_dirs`: Path to one or more results directories
- `evaluation_mode`: Analysis mode (`"short"` or `"long"`)
- `output_dir`: Directory where evaluation outputs will be saved
- `include_best`: Whether to evaluate `_Best.txt` files (default: `true`)
- `include_bestlast`: Whether to evaluate `_BestLast.txt` files (default: `true`)
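If you read these config files from your own tooling, you can mirror the documented defaults. A minimal loader sketch (not the pipeline's own parser; `load_eval_config` is a hypothetical helper):

```python
import json

def load_eval_config(path):
    """Read an evaluation config, applying the documented defaults."""
    with open(path) as f:
        cfg = json.load(f)
    # Accept either the single-run key or the batch key
    dirs = cfg.get("results_dirs") or [cfg["results_dir"]]
    return {
        "results_dirs": dirs,
        "evaluation_mode": cfg["evaluation_mode"],              # "short" or "long"
        "output_dir": cfg["output_dir"],
        "include_best": cfg.get("include_best", True),          # default: true
        "include_bestlast": cfg.get("include_bestlast", True),  # default: true
    }
```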
## Output Structure

Each evaluation run creates a timestamped output directory:

```text
ctrl_optim/results/evaluation_outputs/MMDD_HHMM/
├── parameter_name_001.mp4               # Simulation video
├── parameter_name_001_kinematics.png    # Kinematics plot
├── parameter_name_001_stats.txt         # Kinematic statistics
├── parameter_name_001_exo.mp4           # Exoskeleton video (if applicable)
├── parameter_name_001_exo_cost.png      # Exoskeleton controller + cost plot (if applicable)
└── config_name_timestamp.bat            # Configuration file copy
```
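Since zero-padded `MMDD_HHMM` names sort lexicographically in chronological order (within a calendar year), locating the newest outputs is straightforward. A hypothetical helper:

```python
from pathlib import Path

out_root = Path("ctrl_optim/results/evaluation_outputs")
# Zero-padded MMDD_HHMM names sort lexicographically == chronologically within a year
latest = max((p for p in out_root.iterdir() if p.is_dir()), key=lambda p: p.name)
print(latest)
for video in sorted(latest.glob("*.mp4")):  # list the simulation videos it contains
    print(" ", video.name)
```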
## Platform Compatibility

### Windows

- GUI interface available
- Command-line interface available
- All features supported

### macOS/Linux

- GUI interface NOT supported (tkinter compatibility issues)
- Command-line interface available
- All evaluation features supported via JSON configuration
## Troubleshooting

### Common Issues

- **"Module not found" errors**: Make sure you're running from the correct directory:

  ```bash
  cd ctrl_optim
  python run_eval.py
  ```

- **GUI not working**: Use the command-line interface instead:

  ```bash
  python run_eval.py --config eval_config/example_config.json
  ```

- **No parameter files found**: Verify that the results directory contains `_Best.txt` or `_BestLast.txt` files.

- **Video generation fails**: Ensure the required dependencies are installed:

  ```bash
  pip install imageio sk-video
  ```
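To confirm the install succeeded, note that the `sk-video` package imports as `skvideo`; a quick check:

```python
# Quick import check for the video dependencies (sk-video imports as "skvideo")
for name in ("imageio", "skvideo"):
    try:
        __import__(name)
        print(f"{name}: OK")
    except ImportError as err:
        print(f"{name}: missing ({err})")
```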
### Creating Custom Configurations

You can create custom evaluation configurations by:

1. Copying the example config file:

   ```bash
   cp ctrl_optim/results/evaluation/eval_config/example_config.json my_config.json
   ```

2. Modifying the parameters as needed

3. Running with your custom config:

   ```bash
   python run_eval.py --config my_config.json
   ```
## Quick Visualization

For simple video generation without detailed analysis, you can also use the `run_ctrl.py` simulation script. See the Running Reflex Control guide for more details.