The `mne_ml` library (`MLLIB` namespace) provides machine learning infrastructure for MNE-CPP: ONNX Runtime inference for pre-trained neural networks, built-in linear models, feature scaling, tensor data structures, and a processing pipeline. The primary use case is CMNE (Contextual Minimum-Norm Estimate) inference, where an LSTM network trained in Python is deployed via ONNX Runtime in C++.

Link against it in CMake:

```cmake
target_link_libraries(my_app PRIVATE mne_ml)
```
Dependencies: `mne_utils`, `mne_math`, `Qt6::Core`, `Eigen3`. Optional: ONNX Runtime (enabled via `USE_ONNXRUNTIME=ON`).
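If ONNX Runtime support is wanted, the option above is passed at configure time. A typical invocation might look like this (the build directory layout is illustrative; only the `USE_ONNXRUNTIME` option name comes from the text above):

```shell
# Configure MNE-CPP with ONNX Runtime inference enabled
cmake -S . -B build -DUSE_ONNXRUNTIME=ON
cmake --build build
```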
## Class Inventory

| Class / Struct | Header | Description |
|---|---|---|
| `MlModel` | `ml/ml_model.h` | Abstract base class for all ML models |
| `MlOnnxModel` | `ml/ml_onnx_model.h` | ONNX Runtime-backed model for neural network inference |
| `MlLinearModel` | `ml/ml_linear_model.h` | Built-in ridge regression / logistic regression model |
| `MlPipeline` | `ml/ml_pipeline.h` | Scaler → model processing pipeline |
| `MlScaler` | `ml/ml_scaler.h` | Feature scaler (StandardScaler or MinMaxScaler) |
| `MlTensor` | `ml/ml_tensor.h` | N-dimensional row-major float32 tensor with zero-copy semantics |
| `MLTrainer` | `ml/ml_trainer.h` | Python training script launcher (not available in WASM builds) |
## Enumerations (`ml/ml_types.h`)

| Enum | Values | Description |
|---|---|---|
| `MlBackend` | `OnnxRuntime`, `BuiltIn` | ML back-end engine |
| `MlDataType` | `Float32`, `Float64`, `Int64` | Tensor data types |
| `MlTaskType` | `Classification`, `Regression`, `FeatureExtraction` | ML task categories |
## MlModel (Abstract Base)
All ML models implement this interface:
```cpp
#include <ml/ml_model.h>

class MlModel
{
public:
    typedef QSharedPointer<MlModel> SPtr;

    virtual MlTensor predict(const MlTensor& input) const = 0;
    virtual bool save(const QString& path) const = 0;
    virtual bool load(const QString& path) = 0;
    virtual QString modelType() const = 0;
    virtual MlTaskType taskType() const = 0;
};
```
| Method | Description |
|---|---|
| `predict(input)` | Run inference on the input tensor |
| `save(path)` | Save the model to a file |
| `load(path)` | Load the model from a file |
| `modelType()` | Return the model type string (e.g., `"onnx"`, `"linear"`) |
| `taskType()` | Return the task type (`Classification`, `Regression`, `FeatureExtraction`) |
## MlOnnxModel

ONNX Runtime-backed model for deploying pre-trained neural networks (e.g., the CMNE LSTM):
```cpp
#include <ml/ml_onnx_model.h>

MLLIB::MlOnnxModel model;
if (model.load("cmne_lstm.onnx")) {
    qDebug() << "Model loaded, type:" << model.modelType();

    // Input: batch = 1, 20 time steps, 204 channels
    std::vector<float> data(1 * 20 * 204, 0.0f);
    MLLIB::MlTensor input(std::move(data), {1, 20, 204});

    MLLIB::MlTensor output = model.predict(input);
    qDebug() << "Output shape:" << output.shape();
}
```
| Method | Description |
|---|---|
| `predict(input)` | Run ONNX Runtime inference on the input tensor |
| `load(path)` | Load an `.onnx` model file and create an inference session |
| `save(path)` | Copy the model file to a new path |
| `isLoaded()` | Check whether a session is loaded and ready |
| `modelType()` | Returns `"onnx"` |
| `taskType()` | Returns the configured task type |
When built without `USE_ONNXRUNTIME`, all methods throw `std::runtime_error`. The ONNX Runtime shared library is automatically copied alongside `mne_ml` during the build for runtime discovery.
## MlLinearModel
Built-in ridge regression and logistic regression:
```cpp
#include <ml/ml_linear_model.h>

MLLIB::MlLinearModel regressor(MLLIB::MlTaskType::Regression, 1.0);
MLLIB::MlTensor prediction = regressor.predict(features);

// Inspect the fitted parameters
const Eigen::MatrixXf& W = regressor.weights();
const Eigen::VectorXf& b = regressor.bias();
```
| Method | Description |
|---|---|
| `predict(input)` | Compute `X * W + b` (regression) or apply sigmoid/softmax (classification) |
| `load(path)` / `save(path)` | Serialize weights, bias, and configuration |
| `weights()` | Access the weight matrix (n_features × n_outputs) |
| `bias()` | Access the bias vector (n_outputs) |
| `modelType()` | Returns `"linear"` |
| `taskType()` | Returns the configured task type |
Constructor parameters:
| Parameter | Type | Default | Description |
|---|---|---|---|
| `type` | `MlTaskType` | `Regression` | Task type |
| `regularization` | `double` | `1.0` | L2 regularization strength (λ) |
## MlPipeline
Simple scaler → model processing pipeline:
```cpp
#include <ml/ml_pipeline.h>

MLLIB::MlPipeline pipeline;
MLLIB::MlScaler scaler(MLLIB::MlScaler::StandardScaler);
pipeline.setScaler(scaler);

auto model = QSharedPointer<MLLIB::MlOnnxModel>::create();
model->load("cmne_lstm.onnx");
pipeline.setModel(model);

pipeline.fitScaler(trainingFeatures);
MLLIB::MlTensor result = pipeline.predict(testFeatures);
```
| Method | Description |
|---|---|
| `setScaler(scaler)` | Set the feature scaler |
| `setModel(model)` | Set the model (as `MlModel::SPtr`) |
| `fitScaler(X)` | Fit the scaler on the given feature matrix |
| `predict(X)` | Scale (if a scaler is set) and run model prediction |
## MlScaler
Feature scaling with two strategies:
```cpp
#include <ml/ml_scaler.h>

MLLIB::MlScaler scaler(MLLIB::MlScaler::StandardScaler);
MLLIB::MlScaler minmax(MLLIB::MlScaler::MinMaxScaler);

// Fit on training data, then reuse the learned statistics on new data
MLLIB::MlTensor scaled = scaler.fitTransform(data);
MLLIB::MlTensor newScaled = scaler.transform(newData);
MLLIB::MlTensor original = scaler.inverseTransform(scaled);
```
| Method | Description |
|---|---|
| `fit(data)` | Compute statistics from the data |
| `transform(data)` | Apply the learned transform |
| `fitTransform(data)` | Convenience: fit, then transform |
| `inverseTransform(data)` | Undo the scaling |
| Scaler Type | Formula |
|---|---|
| `StandardScaler` | (x − μ) / σ |
| `MinMaxScaler` | (x − x_min) / (x_max − x_min) |
## MlTensor
N-dimensional tensor with contiguous row-major (C-order) float32 storage. Data is held in a reference-counted buffer, making copy, reshape, and slice O(1). Storage layout is row-major to match ONNX Runtime, PyTorch, and NumPy conventions.
```cpp
#include <ml/ml_tensor.h>

// Move-construct from a std::vector<float> buffer (no copy)
std::vector<float> buf(1 * 20 * 204, 0.0f);
MLLIB::MlTensor t1(std::move(buf), {1, 20, 204});

// Construct from an Eigen column-major matrix (copied to row-major)
Eigen::MatrixXf mat(100, 50);
MLLIB::MlTensor t2(mat);

// Copy from a raw pointer and shape
float* ptr = ...;
MLLIB::MlTensor t3(ptr, {10, 20});

std::vector<int64_t> shape = t1.shape();
auto eigenMap = t2.matrix();
```
### Type Aliases

| Alias | Type |
|---|---|
| `RowMajorMatrixXf` | `Eigen::Matrix<float, Dynamic, Dynamic, RowMajor>` |
| `RowMajorMatrixMap` | `Eigen::Map<RowMajorMatrixXf>` |
| `ConstRowMajorMatrixMap` | `Eigen::Map<const RowMajorMatrixXf>` |
### Constructors

| Constructor | Description |
|---|---|
| `MlTensor()` | Empty 0-element tensor |
| `MlTensor(data&&, shape)` | From a moved `std::vector<float>` buffer and shape |
| `MlTensor(ptr, shape)` | Copy from a raw pointer and shape |
| `MlTensor(MatrixXf)` | From an Eigen column-major matrix (copied to row-major) |
## MLTrainer

ML-specific convenience wrapper over `UTILSLIB::PythonRunner` for launching Python training scripts. Not available in WASM builds.
```cpp
#include <ml/ml_trainer.h>

MLLIB::MLTrainer trainer;
if (trainer.checkPackages({"torch", "mne", "numpy"})) {
    auto result = trainer.run("scripts/ml/training/train_cmne.py",
                              {"--epochs", "100", "--lr", "0.001"});
    if (result.exitCode == 0) {
        qDebug() << "Training complete";
    }
}
```
| Method | Description |
|---|---|
| `run(scriptPath, args)` | Run a Python training script |
| `checkPackages(packages)` | Verify that required Python packages are importable |
| `runner()` | Access the underlying `PythonRunner` for callback/config changes |
## CMNE Workflow Example
A complete CMNE (Contextual Minimum-Norm Estimate) workflow:
```cpp
#include <ml/ml_onnx_model.h>
#include <ml/ml_pipeline.h>
#include <ml/ml_scaler.h>
#include <ml/ml_tensor.h>

// Assemble the pipeline: StandardScaler -> ONNX LSTM model
MLLIB::MlPipeline pipeline;
MLLIB::MlScaler scaler(MLLIB::MlScaler::StandardScaler);
pipeline.setScaler(scaler);

auto model = QSharedPointer<MLLIB::MlOnnxModel>::create();
model->load("models/cmne_lstm_audvis.onnx");
pipeline.setModel(model);

// Sensor data: 20 time steps x 204 channels
Eigen::MatrixXf sensorData(20, 204);
MLLIB::MlTensor input(sensorData);

// Fit the scaler, then scale and run inference in one call
pipeline.fitScaler(input);
MLLIB::MlTensor sourceEstimate = pipeline.predict(input);
```
## MNE-Python Cross-Reference
| MNE-CPP (MLLIB) | Python Equivalent |
|---|---|
| `MlOnnxModel` | `onnxruntime.InferenceSession` |
| `MlLinearModel` | `sklearn.linear_model.Ridge` / `LogisticRegression` |
| `MlScaler` | `sklearn.preprocessing.StandardScaler` / `MinMaxScaler` |
| `MlPipeline` | `sklearn.pipeline.Pipeline` |
| `MlTensor` | `numpy.ndarray` / `torch.Tensor` |
| `MLTrainer` | Direct Python execution (no equivalent needed) |
## See Also