# Experiment Tracking
mlcli includes built-in experiment tracking to help you manage and compare your ML experiments.
## Overview
Every time you run a training command, mlcli automatically tracks:

- **Hyperparameters** - all model configuration and training parameters
- **Metrics** - accuracy, precision, recall, F1 score, AUC, and more
- **Artifacts** - trained models, plots, and evaluation results
- **Metadata** - timestamps, durations, and environment info
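The classification metrics in the list above are all derived from the same confusion-matrix counts. As a pure-Python illustration of how they relate (these helpers are for explanation only, not part of mlcli's API):

```python
# Illustrative only: how the auto-logged classification metrics relate.
# classification_metrics is a hypothetical helper, not an mlcli function.

def classification_metrics(y_true, y_pred):
    """Compute accuracy, precision, recall, and F1 for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}

metrics = classification_metrics([1, 0, 1, 1, 0, 1], [1, 0, 0, 1, 1, 1])
```

mlcli logs these values for you automatically; the sketch just shows what each number means.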
## Quick Start
Experiment tracking is enabled by default. Just run your training command:
```bash
mlcli train data.csv --model random_forest --output models/
```

View your experiments:
```bash
mlcli experiments list
```

## Tracking Directory
By default, experiments are saved to ./mlcli_experiments/. You can change this:
```bash
mlcli train data.csv --model xgboost --experiments-dir ./my_experiments
```

Or set it in your configuration:
```yaml
# config.yaml
experiment_tracking:
  enabled: true
  directory: ./experiments
  auto_log: true
```

## Viewing Experiments
### List All Experiments
```bash
mlcli experiments list
```

Output:
```text
┏━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┳━━━━━━━━━━━┳━━━━━━━━━━┓
┃ Run ID          ┃ Model         ┃ Accuracy  ┃ Date     ┃
┡━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━╇━━━━━━━━━━━╇━━━━━━━━━━┩
│ run_abc123      │ random_forest │ 94.5%     │ Dec 3    │
│ run_def456      │ xgboost       │ 95.2%     │ Dec 3    │
│ run_ghi789      │ lightgbm      │ 94.8%     │ Dec 2    │
└─────────────────┴───────────────┴───────────┴──────────┘
```
### View Experiment Details
```bash
mlcli experiments show run_abc123
```

### Compare Experiments
```bash
mlcli experiments compare run_abc123 run_def456
```

## Experiment Structure
Each experiment creates a directory with:
```text
mlcli_experiments/
└── run_abc123/
    ├── config.yaml              # Hyperparameters and settings
    ├── metrics.json             # Training and evaluation metrics
    ├── model.pkl                # Trained model artifact
    ├── plots/
    │   ├── confusion_matrix.png
    │   ├── roc_curve.png
    │   └── feature_importance.png
    └── logs/
        └── training.log
```
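Because `metrics.json` is plain JSON, a run's metrics can be read back with nothing but the standard library. The sketch below builds a mock run directory matching the layout above (the metric names and values are made up for the demo):

```python
import json
import tempfile
from pathlib import Path

# Build a mock run directory mirroring the layout above
# (the metric names and values here are hypothetical).
run_dir = Path(tempfile.mkdtemp()) / "mlcli_experiments" / "run_abc123"
run_dir.mkdir(parents=True)
(run_dir / "metrics.json").write_text(json.dumps({"accuracy": 0.945, "f1": 0.93}))

# Reading the metrics back is a plain JSON load
metrics = json.loads((run_dir / "metrics.json").read_text())
print(metrics["accuracy"])  # 0.945
```

The same approach works for `config.yaml` with a YAML parser, or for `model.pkl` with your model library's loader.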
## Programmatic Access
Access experiments from Python:
```python
from mlcli.tracking import ExperimentTracker

tracker = ExperimentTracker("./mlcli_experiments")

# List all runs
runs = tracker.list_runs()

# Get a specific run
run = tracker.get_run("run_abc123")
print(run.metrics)
print(run.hyperparameters)

# Load the trained model from a run
model = tracker.load_model("run_abc123")
```

## MLflow Integration
mlcli can export experiments to MLflow for advanced tracking:
```bash
mlcli experiments export --format mlflow --uri http://localhost:5000
```

Or configure MLflow as your tracking backend:
```yaml
# config.yaml
experiment_tracking:
  backend: mlflow
  mlflow:
    tracking_uri: http://localhost:5000
    experiment_name: my-ml-project
```

## Best Practices
- **Use meaningful names** - tag your experiments with descriptive names
- **Track everything** - let mlcli auto-log all parameters and metrics
- **Compare regularly** - use the `compare` command to understand what works
- **Clean up** - remove old experiments to save disk space
- **Export important runs** - back up successful experiments
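The clean-up advice is easy to automate. A minimal sketch that removes run directories older than a cutoff, assuming the `mlcli_experiments/` layout shown earlier; `prune_old_runs` and the 30-day threshold are illustrative choices, and directory modification time stands in for the run date:

```python
import shutil
import time
from pathlib import Path

def prune_old_runs(experiments_dir, max_age_days=30):
    """Delete run directories last modified before the cutoff.

    Hypothetical helper, not part of mlcli; mtime approximates run age.
    """
    cutoff = time.time() - max_age_days * 86400
    removed = []
    for run_dir in Path(experiments_dir).iterdir():
        if run_dir.is_dir() and run_dir.stat().st_mtime < cutoff:
            shutil.rmtree(run_dir)
            removed.append(run_dir.name)
    return removed
```

Consider exporting important runs (see MLflow Integration above) before pruning, since deletion is irreversible.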
## Next Steps
- Learn about Hyperparameter Tuning to optimize your models
- Explore Model Explainability to understand predictions
- Check the Runs Dashboard to visualize your experiments