Inverse Methods
This section describes the mathematical details of all inverse source estimation methods available in MNE-CPP: minimum-norm estimates (MNE, dSPM, sLORETA, eLORETA), contextual minimum-norm estimates (CMNE), sparse methods (MxNE, Gamma-MAP), beamformers (LCMV, DICS), RAP MUSIC, and dipole fitting.
Minimum-Norm Estimates
In the Bayesian sense, the ensuing current distribution is the maximum a posteriori (MAP) estimate under the following assumptions:
- The viable locations of the currents are constrained to the cortex. Optionally, the current orientations can be fixed to be normal to the cortical mantle.
- The amplitudes of the currents have a Gaussian prior distribution with a known source covariance matrix.
- The measured data contain additive noise with a Gaussian distribution with a known covariance matrix. The noise is not correlated over time.
The Linear Inverse Operator
The measured data in the source estimation procedure consist of MEG and EEG data, recorded on a total of $N$ channels. The task is to estimate a total of $Q$ strengths of sources located on the cortical mantle. If the number of source locations is $P$, then $Q = P$ for fixed-orientation sources and $Q = 3P$ if the source orientations are unconstrained.
The regularized linear inverse operator following from the regularized maximal likelihood of the above probabilistic model is given by the matrix:

$$M = R G^\top \left( G R G^\top + C \right)^{-1},$$

where $G$ is the gain matrix relating the source strengths to the measured MEG/EEG data, $C$ is the data noise-covariance matrix, and $R$ is the source covariance matrix. The dimensions of these matrices are $N \times Q$, $N \times N$, and $Q \times Q$, respectively.

The expected value of the current amplitudes at time $t$ is then given by $\hat{j}(t) = M x(t)$, where $x(t)$ is a vector containing the measured MEG and EEG data values at time $t$.
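As a minimal illustration of these two formulas (a NumPy sketch; the function and variable names are hypothetical, not MNE-CPP API):

```python
import numpy as np

def linear_inverse_operator(G, C, R):
    """M = R G^T (G R G^T + C)^{-1} for gain G (N x Q),
    noise covariance C (N x N), source covariance R (Q x Q)."""
    # Solve (G R G^T + C) M^T = G R instead of forming the inverse explicitly.
    return np.linalg.solve(G @ R @ G.T + C, G @ R).T   # (Q, N)

# Expected current amplitudes at time t: j_hat = M @ x_t
```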
Regularization
The a priori variance of the currents is, in practice, unknown. We can express this by writing $R = R / \lambda^2$, which yields the inverse operator:

$$M = R G^\top \left( G R G^\top + \lambda^2 C \right)^{-1},$$

where the unknown current amplitude is now interpreted in terms of the regularization parameter $\lambda^2$. Larger $\lambda^2$ values correspond to spatially smoother and weaker current amplitudes, whereas smaller $\lambda^2$ values lead to the opposite.

We can arrive at the regularized linear inverse operator also by minimizing a cost function $S$ with respect to the estimated current $\hat{j}$ (given the measurement vector $x$ at any given time $t$):

$$S = \tilde{e}^\top \tilde{e} + \lambda^2\, j^\top R^{-1} j, \qquad \tilde{e} = C^{-1/2} \left( x - G j \right),$$

where the first term consists of the difference between the whitened measured data and those predicted by the model, while the second term is a weighted norm of the current estimate. With increasing $\lambda^2$, the source term receives more weight and a larger discrepancy between the measured and predicted data is tolerable.
Whitening and Scaling
The MNE software employs data whitening so that a "whitened" inverse operator assumes the form:

$$\tilde{M} = M C^{1/2} = R \tilde{G}^\top \left( \tilde{G} R \tilde{G}^\top + \lambda^2 I \right)^{-1},$$

where

$$\tilde{G} = C^{-1/2} G$$

is the spatially whitened gain matrix.

The expected current values are:

$$\hat{j}(t) = \tilde{M} \tilde{x}(t),$$

where $\tilde{x}(t) = C^{-1/2} x(t)$ is the whitened measurement vector at time $t$.

The spatial whitening operator $C^{-1/2}$ is obtained with the help of the eigenvalue decomposition $C = U_C \Lambda_C^2 U_C^\top$ as $C^{-1/2} = \Lambda_C^{-1} U_C^\top$.
In the MNE software the noise-covariance matrix is stored as the one applying to raw data. To reflect the decrease of noise due to averaging, this matrix, $C_0$, is scaled by the number of averages, $L$, i.e., $C = C_0 / L$.
When EEG data are included, the gain matrix $G$ needs to be average referenced when computing the linear inverse operator $M$. This is incorporated during creation of the spatial whitening operator $C^{-1/2}$, which includes any projectors on the data. An EEG average reference (using a projector) is mandatory for source modeling.
A convenient choice for the source-covariance matrix $R$ is such that $\operatorname{tr}(\tilde{G} R \tilde{G}^\top) / \operatorname{tr}(I) = 1$. With this choice we can approximate $\lambda^2 \approx 1 / \text{SNR}^2$, where SNR is the (amplitude) signal-to-noise ratio of the whitened data.

The definition of the signal-to-noise ratio / $\lambda^2$ relationship given above works nicely for the whitened forward solution. In the un-whitened case, scaling with the trace ratio does not make sense, since the summed diagonal elements have, in general, different units of measure: MEG data are expressed in T or T/m, whereas EEG data are expressed in volts.
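The trace convention and the $\lambda^2 \approx 1/\text{SNR}^2$ rule are easy to make concrete (a sketch in whitened space; the SNR value is an arbitrary example, not a recommended default):

```python
import numpy as np

def scale_source_covariance(G_tilde, R):
    """Rescale R so that trace(G~ R G~^T) / trace(I) = 1."""
    N = G_tilde.shape[0]
    return R / (np.trace(G_tilde @ R @ G_tilde.T) / N)

snr = 3.0                 # assumed amplitude SNR of the whitened data
lam2 = 1.0 / snr ** 2     # regularization parameter lambda^2
```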
Regularization of the Noise-Covariance Matrix
Since a finite amount of data is usually available to compute an estimate of the noise-covariance matrix $C$, the smallest eigenvalues of its estimate are usually inaccurate and smaller than the true eigenvalues. Depending on the seriousness of this problem, the following quantities can be affected:
- The model data predicted by the current estimate
- Estimates of signal-to-noise ratios, which lead to estimates of the required regularization
- The estimated current values
- The noise-normalized estimates
Fortunately, the latter two are the least likely to be affected, owing to the regularization of the estimates. However, in some cases, especially the EEG part of the noise-covariance matrix estimate can be deficient, i.e., it may possess very small eigenvalues, and regularization of the noise-covariance matrix is therefore advisable.
Historically, the MNE software accomplishes the regularization by replacing a noise-covariance matrix estimate $C$ with:

$$C' = C + \sum_k \varepsilon_k \bar{\sigma}_k^2 I^{(k)},$$

where the index $k$ goes across the different channel groups (MEG planar gradiometers, MEG axial gradiometers and magnetometers, and EEG), $\varepsilon_k$ are the corresponding regularization factors, $\bar{\sigma}_k^2$ are the average variances across the channel groups, and $I^{(k)}$ are diagonal matrices containing ones at the positions corresponding to the channels contained in each channel group.
Computation of the Solution
The most straightforward approach to calculate the MNE is to employ the expression of the original or whitened inverse operator directly. However, for computational convenience we prefer to take another route, which employs the singular-value decomposition (SVD) of the matrix:

$$A = \tilde{G} R^{1/2} = U \Lambda V^\top,$$

where the superscript $1/2$ indicates a square root of $R$ (e.g., a Cholesky factor).

Combining the SVD with the inverse equation, it is easy to show that:

$$\tilde{M} = R^{1/2} V \Gamma U^\top,$$

where the elements of the diagonal matrix $\Gamma$ are:

$$\gamma_k = \frac{\lambda_k}{\lambda_k^2 + \lambda^2}$$

If we define $w(t) = U^\top \tilde{x}(t)$, then the expected current is:

$$\hat{j}(t) = R^{1/2} V \Gamma\, w(t) = \sum_k \gamma_k\, w_k(t)\, \bar{v}_k,$$

where $\bar{v}_k = R^{1/2} v_k$, with $v_k$ being the $k$-th column of $V$. The current estimate is thus a weighted sum of the "weighted" eigenleads $\bar{v}_k$.
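The whole SVD route fits in a few lines for a diagonal source covariance (a sketch; `R_diag` holds the diagonal of $R$, and all names are hypothetical):

```python
import numpy as np

def mne_svd_estimate(G_tilde, R_diag, lam2, x_tilde):
    """Minimum-norm estimate via A = G~ R^{1/2} = U Lambda V^T."""
    R_half = np.sqrt(R_diag)                 # diagonal R -> element-wise sqrt
    A = G_tilde * R_half                     # scale column q by R_half[q]
    U, lams, Vt = np.linalg.svd(A, full_matrices=False)
    gamma = lams / (lams ** 2 + lam2)        # regularized reciprocal singular values
    w = U.T @ x_tilde                        # w(t) = U^T x~(t)
    return R_half * (Vt.T @ (gamma * w))     # j_hat = R^{1/2} V Gamma w
```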
Noise Normalization
Noise normalization serves three purposes:

- It converts the expected current value into a dimensionless statistical test variable. Thus the resulting time- and location-dependent values are often referred to as dynamic statistical parameter maps (dSPM).
- It reduces the location bias of the estimates. In particular, the tendency of the MNE to prefer superficial currents is eliminated.
- The width of the point-spread function becomes less dependent on the source location on the cortical mantle.

In practice, noise normalization is implemented as a division by the square root of the estimated noise variance of each source. Using our "weighted eigenleads" in matrix form, $\bar{V} = R^{1/2} V$, the whitened inverse operator can be written compactly as $\tilde{M} = \bar{V}\, \Gamma\, U^\top$.
dSPM
Noise-normalized linear estimates introduced by Dale et al. (2000) require division of the expected current amplitude by its noise standard deviation. The variance computation uses:

$$\tilde{M} \tilde{M}^\top = \bar{V}\, \Gamma^2\, \bar{V}^\top$$

The variances for each source $q$ are thus:

$$\sigma_q^2 = \left( \bar{V}\, \Gamma^2\, \bar{V}^\top \right)_{qq}$$

Under the standard conditions, the $t$-statistic values associated with fixed-orientation sources are proportional to $\sqrt{L}$, while the $F$-statistic employed with free-orientation sources is proportional to $L$, where $L$ is the number of averages.

The MNE software usually computes the square roots of the $F$-statistic to be displayed on the inflated cortical surfaces. These are also proportional to $\sqrt{L}$.
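A compact sketch of the dSPM division, assuming the whitened inverse operator $\tilde{M}$ from above (in whitened space the noise covariance is the identity, so the source noise variances are the squared row norms of $\tilde{M}$):

```python
import numpy as np

def dspm(M_tilde, j_hat):
    """Divide each source estimate by its estimated noise standard deviation."""
    noise_var = np.sum(M_tilde ** 2, axis=1)   # diag(M~ M~^T)
    return j_hat / np.sqrt(noise_var)
```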
sLORETA
sLORETA (Pascual-Marqui, 2002) estimates the current variances as the diagonal entries of the resolution matrix, which is the product of the inverse and forward operators:

$$\tilde{M} \tilde{G} = R^{1/2} V \Gamma U^\top U \Lambda V^\top R^{-1/2} = R^{1/2} V \Gamma \Lambda V^\top R^{-1/2}$$

Because $R$ is diagonal and we only care about the diagonal entries, the variance estimates are:

$$\sigma_q^2 = \left( V \Gamma \Lambda V^\top \right)_{qq}$$
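The sLORETA variant differs from the dSPM sketch only in the variance estimate (same assumptions as above):

```python
import numpy as np

def sloreta(M_tilde, G_tilde, j_hat):
    """Normalize by the diagonal of the resolution matrix M~ G~."""
    res_diag = np.einsum('qn,nq->q', M_tilde, G_tilde)   # diag(M~ G~)
    return j_hat / np.sqrt(res_diag)
```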
eLORETA
While dSPM and sLORETA solve for noise normalization weights that are applied to standard minimum-norm estimates, eLORETA (Pascual-Marqui, 2011) instead solves for a source covariance matrix $R$ that achieves zero localization bias. For fixed-orientation solutions the resulting matrix $W$ will be a diagonal matrix, and for free-orientation solutions it will be a block-diagonal matrix with $3 \times 3$ blocks.

The following system of equations is used to find the weights, $W_i$, for every source location $i \in \{1, \ldots, P\}$:

$$W_i = \left[ G_i^\top \left( G W^{-1} G^\top + \lambda^2 C \right)^{-1} G_i \right]^{1/2}$$

An iterative algorithm finds the values for the weights that satisfy these equations:

- Initialize identity weights, $W^{(0)} = I$.
- Compute $N = \left( G W^{-1} G^\top + \lambda^2 C \right)^{-1}$.
- Holding $N$ fixed, compute new weights $W_i = \left[ G_i^\top N\, G_i \right]^{1/2}$.
- Using the new weights, go to step (2) until convergence.

Using the whitened substitution $\tilde{G} = C^{-1/2} G$, the computations can be performed entirely in the whitened space, avoiding the need to compute $C^{-1}$ directly:

$$W_i = \left[ \tilde{G}_i^\top \left( \tilde{G} W^{-1} \tilde{G}^\top + \lambda^2 I \right)^{-1} \tilde{G}_i \right]^{1/2}$$
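A fixed-orientation sketch of this fixed-point iteration in the whitened space (illustrative only; the iteration count and tolerance are arbitrary choices, not MNE-CPP defaults):

```python
import numpy as np

def eloreta_weights(G_tilde, lam2, n_iter=20, tol=1e-6):
    """Solve w_i = sqrt(g_i^T (G W^{-1} G^T + lam2 I)^{-1} g_i) iteratively."""
    N, P = G_tilde.shape
    w = np.ones(P)                                      # step 1: identity weights
    for _ in range(n_iter):
        K = np.linalg.inv(G_tilde @ (G_tilde / w).T + lam2 * np.eye(N))
        w_new = np.sqrt(np.einsum('np,nm,mp->p', G_tilde, K, G_tilde))
        if np.max(np.abs(w_new - w) / w) < tol:         # step 4: convergence
            return w_new
        w = w_new
    return w
```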
Predicted Data
Under noiseless conditions the SNR is infinite, and thus $\lambda^2 = 0$ and the minimum-norm estimate explains the measured data perfectly. Under realistic conditions, however, $\lambda^2 > 0$ and there is a misfit between measured data and those predicted by the MNE. Comparison of the predicted data and measured data can give valuable insight on the correctness of the regularization applied.

In the SVD approach:

$$\hat{x}(t) = G \hat{j}(t) = C^{1/2} U \Pi\, w(t),$$

where the diagonal matrix $\Pi$ has elements $\pi_k = \lambda_k \gamma_k = \lambda_k^2 / (\lambda_k^2 + \lambda^2)$. The predicted data is thus expressed as the weighted sum of the "recolored eigenfields" in $C^{1/2} U$.
Cortical Patch Statistics
If source space distance information was used during source space creation, the source space file will contain Cortical Patch Statistics (CPS) for each vertex of the cortical surface. The CPS provide information about the source space point closest to each vertex as well as the distance from the vertex to this source space point.
Once these data are available, the following cortical patch statistics can be computed for each source location $d$:

- The average over the normals of the vertices in a patch, $\bar{n}_d$,
- The areas of the patches, $A_d$,
- The average deviation of the vertex normals in a patch from their average, $\sigma_d$, given in degrees
Orientation Constraints
The principal sources of MEG and EEG signals are generally believed to be postsynaptic currents in the cortical pyramidal neurons. Since the net primary current associated with these microscopic events is oriented normal to the cortical mantle, it is reasonable to use the cortical normal orientation as a constraint in source estimation.
In addition to allowing completely free source orientations, the MNE software implements three orientation constraints based on the surface normal data:
- Fixed orientation: Source orientation is rigidly fixed to the surface normal direction. If cortical patch statistics are available, the average normal over each patch, $\bar{n}_d$, is used. Otherwise, the vertex normal at the source space location is employed.
- Fixed Loose Orientation Constraint (fLOC): A source coordinate system based on the local surface orientation at the source location is employed. The first two source components lie in the plane normal to the surface normal, and the third component is aligned with it. The variance of the tangential components is reduced by a configurable factor.
- Variable Loose Orientation Constraint (vLOC): Similar to fLOC except that the loose factor is multiplied by $\sigma_d$, the average angular deviation of the normals within the patch.
Depth Weighting
The minimum-norm estimates have a bias towards superficial currents. This tendency can be alleviated by adjusting the source covariance matrix $R$ to favor deeper source locations. In the depth weighting scheme, the elements of $R$ corresponding to the $p$-th source location are scaled by a factor:

$$f_p = \left( g_{1p}^\top g_{1p} + g_{2p}^\top g_{2p} + g_{3p}^\top g_{3p} \right)^{-\gamma},$$

where $g_{1p}$, $g_{2p}$, and $g_{3p}$ are the three columns of $G$ corresponding to source location $p$ and $\gamma$ is the order of the depth weighting.
Effective Number of Averages
It is often the case that the epoch to be analyzed is a linear combination over conditions rather than one of the original averages computed. The noise-covariance matrix computed is originally one corresponding to raw data. Therefore, it has to be scaled correctly to correspond to the actual or effective number of epochs in the condition to be analyzed. In general:

$$C = \frac{C_0}{L_{\text{eff}}},$$

where $L_{\text{eff}}$ is the effective number of averages. To calculate $L_{\text{eff}}$ for an arbitrary linear combination of conditions

$$y(t) = \sum_i w_i x_i(t),$$

with $L_i$ epochs in condition $i$:

$$\frac{1}{L_{\text{eff}}} = \sum_i \frac{w_i^2}{L_i}$$

For a weighted average, where $w_i = L_i / \sum_j L_j$:

$$L_{\text{eff}} = \sum_i L_i$$

For a difference of two categories ($w_1 = 1$, $w_2 = -1$):

$$\frac{1}{L_{\text{eff}}} = \frac{1}{L_1} + \frac{1}{L_2}$$

Generalizing, for any combination of sums and differences where $w_i = \pm 1$:

$$\frac{1}{L_{\text{eff}}} = \sum_i \frac{1}{L_i}$$
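A two-line helper makes the bookkeeping concrete (hypothetical function name; the example is a difference of two conditions with 100 epochs each):

```python
def effective_num_averages(weights, n_epochs):
    """1 / L_eff = sum_i w_i^2 / L_i for a linear combination of averages."""
    return 1.0 / sum(w ** 2 / L for w, L in zip(weights, n_epochs))

print(effective_num_averages([1, -1], [100, 100]))  # -> 50.0
```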
References
- Hämäläinen, M.S. & Ilmoniemi, R.J. (1994). Interpreting magnetic fields of the brain: minimum norm estimates. Med. Biol. Eng. Comput., 32, 35–42.
- Dale, A.M. et al. (2000). Dynamic Statistical Parametric Mapping: combining fMRI and MEG for high-resolution imaging of cortical activity. Neuron, 26(1), 55–67. DOI: 10.1016/S0896-6273(00)81138-1
- Pascual-Marqui, R.D. (2002). Standardized low-resolution brain electromagnetic tomography (sLORETA): technical details. Methods Find. Exp. Clin. Pharmacol., 24D, 5–12.
- Pascual-Marqui, R.D. (2007). Discrete, 3D distributed, linear imaging methods of electric neuronal activity. Part 1: exact, zero error localization. arXiv:0710.3341.
- Hämäläinen, M.S. et al. (1993). Magnetoencephalography — theory, instrumentation, and applications to noninvasive studies of the working human brain. Rev. Mod. Phys., 65(2), 413–497.
Contextual Minimum-Norm Estimates (CMNE)
Standard minimum-norm methods such as MNE, dSPM, sLORETA, and eLORETA compute source estimates independently at each time point: the estimate $\hat{j}(t)$ depends only on the measurement $x(t)$ and not on the temporal context in which $x(t)$ occurs. While this makes the methods robust and computationally straightforward, it ignores the rich temporal structure of neural signals. Brain activity unfolds over time with characteristic dynamics — somatosensory responses follow reproducible sequences across cortical areas, auditory processing propagates from primary to associative cortices, and so forth.
Contextual Minimum-Norm Estimates (CMNE) address this limitation by augmenting dSPM with a spatiotemporal LSTM network that learns to exploit temporal context (Dinh et al., 2021). Rather than replacing the physics-based inverse model, CMNE uses data-driven temporal learning as a post-processing correction that improves spatial fidelity while preserving the well-understood regularization properties of dSPM.
Dinh, C.; Samuelsson, J.G.; Hunold, A.; Hämäläinen, M.S.; Khan, S.: Contextual MEG and EEG Source Estimates Using Spatiotemporal LSTM Networks. Frontiers in Neuroscience 15:552666 (2021). DOI: 10.3389/fnins.2021.552666
Overview
The CMNE pipeline proceeds in four stages:
- dSPM kernel computation — a standard noise-normalized inverse kernel is built from the forward model, noise covariance, and regularization parameter (as described above).
- dSPM source projection — the kernel is applied to the evoked/epoch data to obtain a noise-normalized source time course at every cortical location.
- Z-score rectification — the source amplitudes are rectified (absolute value) and z-scored across time at each vertex, producing a unit-variance representation that is suitable as LSTM input.
- LSTM temporal correction — a trained LSTM network predicts the expected source pattern from the preceding time steps and applies it as a multiplicative correction to the current dSPM estimate, forming a recursive Markov chain.
Mathematical Formulation
Stage 1–3: dSPM and Preprocessing
Let $x(t)$ denote the measurement at time $t$ across $N$ channels. The whitened data and whitened gain matrix are:

$$\tilde{x}(t) = C^{-1/2} x(t), \qquad \tilde{G} = C^{-1/2} G$$

The MNE kernel and dSPM noise normalization yield:

$$\tilde{M} = R \tilde{G}^\top \left( \tilde{G} R \tilde{G}^\top + \lambda^2 I \right)^{-1}, \qquad M_{\text{dSPM}} = \Sigma^{-1/2}\, \tilde{M},$$

where $\Sigma$ is the diagonal matrix of source noise variances, $\Sigma_{qq} = (\tilde{M} \tilde{M}^\top)_{qq}$, as described above.

The dSPM source estimate is $\hat{j}(t) = M_{\text{dSPM}}\, \tilde{x}(t)$. This is then rectified and z-scored:

$$z_q(t) = \frac{\left| \hat{j}_q(t) \right| - \mu_q}{\sigma_q}$$

where $\mu_q$ and $\sigma_q$ are the mean and standard deviation of $\left| \hat{j}_q(t) \right|$ over time.
Stage 4: LSTM Temporal Correction
The temporal correction operates as a recursive Markov chain. Let $\hat{s}(t)$ denote the CMNE-corrected source estimate at time $t$ and let $T$ be the look-back window length. For the first $T$ time steps, no correction is possible:

$$\hat{s}(t) = z(t), \qquad t \le T$$

For $t > T$, the preceding $T$ corrected estimates form the LSTM input sequence:

$$S_t = \left[ \hat{s}(t-T), \ldots, \hat{s}(t-1) \right]$$

The LSTM network $f_{\text{LSTM}}$ (trained offline) maps this sequence to a prediction vector:

$$\hat{p}(t) = f_{\text{LSTM}}\!\left( S_t \right)$$

This prediction is normalized to form a diagonal weighting matrix:

$$B(t) = \operatorname{diag}\!\left( \frac{\hat{p}(t)}{\max_q \hat{p}_q(t)} \right)$$

The CMNE estimate is then the element-wise product of the LSTM-derived weights and the dSPM estimate:

$$\hat{s}(t) = B(t)\, z(t)$$

Crucially, $\hat{s}(t)$ replaces $z(t)$ in the sliding window for subsequent predictions. This recursive structure allows the network to build an evolving "context" of the source dynamics, progressively refining its predictions as more data becomes available.
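The recursion is easiest to see spelled out (a sketch; `lstm_predict` stands in for the trained network mapping a `(T, Q)` window to a `(Q,)` prediction, and the max-normalization follows the weighting described above):

```python
import numpy as np

def cmne_correct(z, lstm_predict, T):
    """Apply the recursive CMNE correction to z-scored dSPM estimates.

    z : (n_times, Q) array; returns the corrected estimates s."""
    s = z.copy()                             # first T steps stay uncorrected
    for t in range(T, z.shape[0]):
        p = lstm_predict(s[t - T:t])         # predict from *corrected* context
        b = p / np.max(np.abs(p))            # normalized diagonal weights
        s[t] = b * z[t]                      # element-wise correction
    return s
```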
LSTM Architecture
The default architecture follows the paper's cross-validated design:
| Parameter | Default | Description |
|---|---|---|
| Look-back | 80 | Number of past time steps fed to the LSTM |
| Hidden units | 1280 | LSTM hidden state dimension |
| Layers | 1 | Number of stacked LSTM layers |
| Output | $Q$ (= n_sources) | Dense layer mapping hidden state to source space |
| Loss | MSE | Mean squared error between prediction and ground truth |
| Optimizer | Adam | Default learning rate |
- Input shape: $(\text{batch},\, T,\, Q)$ — batch of look-back windows across all sources
- Output shape: $(\text{batch},\, Q)$ — predicted source pattern for the next time step
The model is trained offline (in Python using PyTorch) and exported to ONNX format for C++ inference via ONNX Runtime.
Training
The LSTM is trained on epoched data — individual trials from the same experimental paradigm. For each epoch:
- The dSPM source estimate is computed using the same forward model, noise covariance, and regularization as will be used at inference time.
- The source amplitudes are z-score rectified.
- Sliding windows of length $T$ are extracted, each producing one training pair: the input window $\left[ z(t-T), \ldots, z(t-1) \right]$ and the target $z(t)$.
When ground-truth source activity is available (e.g., from simulations), the target is the true source pattern. When ground truth is unavailable, the dSPM estimate itself serves as a pseudo ground truth — the LSTM then learns to predict the next dSPM pattern from the preceding ones, effectively performing temporal denoising (referred to as "test mode" in the implementation).
Training on the MNE sample dataset (289 epochs, 7498 sources, look-back 40) produces approximately 110,000 training samples. The first run computes dSPM for all epochs, which takes several minutes on CPU. Subsequent runs load cached source estimates from disk automatically.
Fallback: Moving-Average Approximation
When no trained ONNX model is available, MNE-CPP provides a moving-average fallback that replaces the LSTM prediction with a simple temporal average of the look-back window:

$$\hat{p}(t) = \frac{1}{T} \sum_{\tau=1}^{T} \hat{s}(t - \tau)$$
This produces a smoothed version of the CMNE correction without requiring any trained model. While it does not achieve the spatial improvement of the LSTM approach, it serves as a baseline and allows the pipeline to run end-to-end for testing.
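In the sketch above, the fallback simply swaps the network for a window mean:

```python
import numpy as np

def moving_average_predict(window):
    """Fallback 'prediction': mean over the (T, Q) look-back window."""
    return np.mean(window, axis=0)

# Drop-in replacement for `lstm_predict` in the cmne_correct sketch.
```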
Performance
The paper demonstrates the following improvements over standard dSPM on simulated somatosensory data (Dinh et al., 2021, Table 1):
| Metric | dSPM | CMNE |
|---|---|---|
| Peak localization error (PE) | higher | reduced |
| Spatial dispersion (SD) | broader | reduced |
| Source-space SNR | lower | improved |
The LSTM's contextual predictions effectively "focus" the dSPM estimate by suppressing source locations inconsistent with the learned temporal dynamics, leading to sharper and more focal source images while maintaining the noise normalization properties of dSPM.
CLI Tool
The mne_compute_cmne tool provides three modes of operation:
```bash
# Compute CMNE source estimates
mne_compute_cmne --mode compute \
    --fwd sample_audvis-meg-eeg-oct-6-fwd.fif \
    --cov sample_audvis-cov.fif \
    --evoked sample_audvis-ave.fif \
    --onnx cmne_lstm.onnx \
    --out sample_audvis

# Train the LSTM model
mne_compute_cmne --mode train \
    --fwd sample_audvis-meg-eeg-oct-6-fwd.fif \
    --cov sample_audvis-cov.fif \
    --epochs sample_audvis-epo.fif \
    --onnx-out cmne_lstm.onnx

# Fine-tune an existing model
mne_compute_cmne --mode finetune \
    --fwd sample_audvis-meg-eeg-oct-6-fwd.fif \
    --cov sample_audvis-cov.fif \
    --epochs sample_audvis-epo.fif \
    --finetune cmne_lstm.onnx \
    --onnx-out cmne_lstm_v2.onnx
```
Convenience scripts run_compute.sh and run_train.sh are provided for quick experiments using MNE sample data.
Sparse Methods
Sparse inverse methods estimate source activity under the assumption that only a small number of cortical locations are active at any given time. Unlike distributed methods (MNE, dSPM, sLORETA) that produce estimates at all cortical locations, sparse methods enforce sparsity through appropriate penalty terms, yielding solutions with most source amplitudes exactly zero. This makes them particularly suitable for localizing focal brain activity.
Mixed-Norm Estimates (MxNE)
The Mixed-Norm Estimate (MxNE; Gramfort et al., 2012) promotes spatial sparsity while allowing temporal smoothness within each active source. It minimizes a cost function with an $\ell_{2,1}$-norm (group lasso) penalty:

$$\hat{J} = \arg\min_{J} \frac{1}{2} \left\| X - G J \right\|_F^2 + \alpha \left\| J \right\|_{2,1},$$

where $X$ ($N \times T$) is the measurement matrix, $G$ ($N \times Q$) is the gain matrix, $J$ ($Q \times T$) is the source matrix, and $J_{q\cdot}$ denotes the $q$-th row of $J$ (the time course of source $q$). The parameter $\alpha$ controls the trade-off between data fit and sparsity.

The $\ell_{2,1}$-norm is the sum of $\ell_2$-norms of the rows,

$$\left\| J \right\|_{2,1} = \sum_q \left\| J_{q\cdot} \right\|_2;$$

it penalizes the number of active sources (rows with non-zero $\ell_2$-norm) rather than individual time-point amplitudes. This encourages entire source time courses to be driven to zero, producing a solution where only a few sources are active across all time points.
IRLS Algorithm
MNE-CPP solves the MxNE problem using Iteratively Reweighted Least Squares (IRLS):
- Initialize all source weights $w_q = 1$.
- Construct the diagonal weight matrix $W = \operatorname{diag}(w_1, \ldots, w_Q)$.
- Solve the weighted least-squares problem: $J = W G^\top \left( G W G^\top + \alpha I \right)^{-1} X$.
- Update the weights: $w_q = \left\| J_{q\cdot} \right\|_2$, where $J_{q\cdot}$ is the current estimate of the $q$-th source time course.
- Prune sources with $w_q$ below a small threshold from the active set.
- Repeat from step 2 until convergence (maximum weight change below the tolerance) or a maximum number of iterations is reached.
The active-set strategy progressively removes inactive sources, reducing the problem size at each iteration and improving computational efficiency.
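A NumPy sketch of the IRLS loop with active-set pruning (illustrative only; the pruning threshold `eps` and other defaults are assumptions, and the MNE-CPP implementation differs in details):

```python
import numpy as np

def mxne_irls(G, X, alpha, n_iter=50, tol=1e-6, eps=1e-12):
    """Row-sparse J minimizing 0.5 ||X - G J||_F^2 + alpha ||J||_21 (sketch)."""
    N, Q = G.shape
    J = np.zeros((Q, X.shape[1]))
    active = np.arange(Q)                    # sources still in the model
    w = np.ones(Q)                           # per-source weights w_q
    for _ in range(n_iter):
        Ga, wa = G[:, active], w[active]
        # J_a = W G^T (G W G^T + alpha I)^{-1} X on the active set
        K = np.linalg.solve(Ga @ (Ga * wa).T + alpha * np.eye(N), X)
        Ja = wa[:, None] * (Ga.T @ K)
        w_new = np.linalg.norm(Ja, axis=1)   # w_q = ||J_q.||_2
        converged = np.max(np.abs(w_new - wa)) < tol
        keep = w_new > eps                   # prune inactive sources
        J[:] = 0.0
        J[active[keep]] = Ja[keep]
        w[active] = w_new
        active = active[keep]
        if converged or active.size == 0:
            break
    return J
```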
Parameters
| Parameter | Default | Description |
|---|---|---|
| alpha | (user-specified) | Regularization parameter controlling sparsity |
| nIterations | 50 | Maximum IRLS iterations |
| tolerance | | Convergence threshold on weight change |
References
- Gramfort, A.; Kowalski, M.; Hämäläinen, M.S. (2012). Mixed-norm estimates for the M/EEG inverse problem using accelerated gradient methods. Physics in Medicine and Biology, 57(7), 1937–1961. DOI: 10.1088/0031-9155/57/7/1937
- Gramfort, A.; Strohmeier, D.; Haueisen, J.; Hämäläinen, M.S.; Kowalski, M. (2013). Time-frequency mixed-norm estimates: sparse M/EEG imaging with non-stationary source activations. NeuroImage, 70, 410–422. DOI: 10.1016/j.neuroimage.2012.12.051
Gamma-MAP
Gamma-MAP (Sparse Bayesian Learning; Wipf & Nagarajan, 2009) takes a Bayesian approach to sparse source estimation. Instead of directly penalizing source amplitudes, it places a parameterized prior on each source variance and uses the Expectation-Maximization (EM) algorithm to estimate these hyperparameters from the data. Sources whose estimated variance falls below a threshold are pruned, yielding a sparse solution.
Generative Model
The data model is:

$$x(t) = G\, j(t) + n(t), \qquad n(t) \sim \mathcal{N}(0, C),$$

where $C$ is the noise covariance matrix. Each source $q$ is assigned an independent Gaussian prior with unknown variance $\gamma_q$:

$$j_q(t) \sim \mathcal{N}(0, \gamma_q), \qquad \Gamma = \operatorname{diag}(\gamma_1, \ldots, \gamma_Q)$$

The model evidence (data covariance) is:

$$\Sigma_x = C + G\, \Gamma\, G^\top$$
EM Update Rules
The EM algorithm alternates between computing the posterior source estimates and updating the hyperparameters:
E-step — compute the posterior mean:

$$\hat{j}(t) = \Gamma G^\top \Sigma_x^{-1} x(t)$$

M-step — update source variances:

$$\gamma_q \leftarrow \frac{1}{T} \sum_{t} \hat{j}_q(t)^2 + \left( \Gamma - \Gamma G^\top \Sigma_x^{-1} G\, \Gamma \right)_{qq}$$

Sources with $\gamma_q$ below a threshold are pruned from the active set between iterations, reducing the dimensionality of $G$ and improving computational efficiency.
Convergence
The algorithm converges when the maximum relative change in $\gamma$ falls below a tolerance:

$$\max_q \frac{\left| \gamma_q^{(k+1)} - \gamma_q^{(k)} \right|}{\gamma_q^{(k)}} < \text{tol}$$
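An illustrative EM loop following these updates (the posterior-variance form of the M-step is written out explicitly; parameter values are assumptions, not MNE-CPP defaults):

```python
import numpy as np

def gamma_map(G, X, C, n_iter=100, tol=1e-6, gamma_thresh=1e-8):
    """Estimate source variances gamma (sketch of the EM scheme above)."""
    N, Q = G.shape
    T = X.shape[1]
    gamma = np.ones(Q)
    for _ in range(n_iter):
        Sigma_x = C + (G * gamma) @ G.T                 # C + G Gamma G^T
        J = gamma[:, None] * (G.T @ np.linalg.solve(Sigma_x, X))     # E-step
        # diag(Gamma - Gamma G^T Sigma_x^{-1} G Gamma)
        q = np.einsum('nq,nq->q', G, np.linalg.solve(Sigma_x, G))
        gamma_new = np.mean(J ** 2, axis=1) + gamma - gamma ** 2 * q  # M-step
        rel = np.max(np.abs(gamma_new - gamma) / np.maximum(gamma, 1e-32))
        gamma = np.where(gamma_new > gamma_thresh, gamma_new, 0.0)    # prune
        if rel < tol:
            break
    return gamma
```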
Parameters
| Parameter | Default | Description |
|---|---|---|
| nIterations | 100 | Maximum EM iterations |
| tolerance | | Convergence threshold on relative change in $\gamma$ |
| gammaThreshold | | Sources with $\gamma_q$ below this value are pruned |
References
- Wipf, D. & Nagarajan, S.S. (2009). A unified Bayesian framework for MEG/EEG source imaging. NeuroImage, 44(3), 947–966. DOI: 10.1016/j.neuroimage.2008.02.059
- Calvetti, D.; Hakula, H.; Pursiainen, S.; Somersalo, E. (2009). Conditionally Gaussian hypermodels for cerebral source localization. SIAM J. Imaging Sciences, 2(3), 879–909.
Beamformer Methods
Beamformers are a family of adaptive spatial filters that estimate the source activity at a given brain location while suppressing contributions from other locations and noise. Unlike minimum-norm methods, beamformers do not require an explicit regularized inverse operator — instead, they construct a spatial filter from the data covariance and the forward model at each source point.
LCMV Beamformer
The Linearly Constrained Minimum Variance (LCMV) beamformer (Van Veen et al., 1997) operates in the time domain. It finds a spatial filter that passes the signal from a target source location with unit gain while minimizing the total output power (i.e., suppressing all other sources and noise).
Spatial Filter
The optimization problem is:

$$\min_{W_q}\ \operatorname{tr}\!\left( W_q^\top C\, W_q \right) \quad \text{subject to} \quad W_q^\top L_q = I,$$

where $L_q$ is the lead-field matrix (gain matrix) at the source location ($N \times d$, with $d = 1$ for fixed orientation or $d = 3$ for free orientation) and $C$ is the data covariance matrix.

The closed-form solution is the unit-gain filter:

$$W_q = C^{-1} L_q \left( L_q^\top C^{-1} L_q \right)^{-1}$$
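For a single source location the filter is a few lines of linear algebra (a sketch; the trace-scaled diagonal loading anticipates the regularization described in the next subsection):

```python
import numpy as np

def lcmv_filter(L, C, reg=0.05):
    """Unit-gain LCMV filter W = C^{-1} L (L^T C^{-1} L)^{-1}.

    L : (N, d) lead field (d = 1 fixed, d = 3 free orientation)."""
    N = C.shape[0]
    C_reg = C + reg * np.trace(C) / N * np.eye(N)   # diagonal loading
    Ci_L = np.linalg.solve(C_reg, L)                # C^{-1} L
    return Ci_L @ np.linalg.inv(L.T @ Ci_L)         # (N, d) filter weights

# Source time course: s(t) = W.T @ x(t); source power: W.T @ C @ W
```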
Regularization
Since the data covariance matrix may be rank-deficient (e.g., after SSP or MaxFilter), Tikhonov regularization is applied (Gross & Ioannides, 1999):

$$C_\lambda = C + \lambda\, \frac{\operatorname{tr}(C)}{N}\, I,$$

where $\lambda$ is the regularization parameter (typically 0.05). In practice this is implemented via the eigendecomposition of $C$:

$$C = U \Sigma U^\top \quad \Rightarrow \quad C_\lambda^{-1} = U \left( \Sigma + \lambda\, \tfrac{\operatorname{tr}(C)}{N}\, I \right)^{-1} U^\top$$
Weight Normalization
The raw unit-gain filter can produce estimates with non-uniform sensitivity across the source space. Several normalization schemes address this:
Unit-noise-gain (Sekihara & Nagarajan, 2008), shown here for a fixed-orientation lead field $l_q$:

$$w_q^{\text{UNG}} = \frac{C^{-1} l_q}{\sqrt{l_q^\top C^{-2}\, l_q}}$$

Unit-noise-gain-invariant (rotation-invariant form for the free-orientation case):

$$W_q^{\text{UNGI}} = C^{-1} L_q \left( L_q^\top C^{-2} L_q \right)^{-1/2}$$

Neural Activity Index (NAI):

$$\text{NAI}_q = \frac{w_q^\top C\, w_q}{\sigma^2\, w_q^\top w_q},$$

where $\sigma^2$ is the estimated noise level.
Optimal Orientation
When the source orientation is free, the orientation $\eta$ that maximizes the beamformer output power can be found by solving the generalized eigenvalue problem (Sekihara & Nagarajan, 2008, eq. 4.47):

$$\left( L_q^\top C^{-1} L_q \right) \eta = \mu \left( L_q^\top C^{-2} L_q \right) \eta$$

The eigenvector with the largest eigenvalue $\mu$ gives the optimal source orientation.
Source Estimates
The estimated source time course at location $q$ is:

$$\hat{s}_q(t) = w_q^\top \tilde{x}(t),$$

and the source power is:

$$\hat{P}_q = w_q^\top C\, w_q,$$

where $\tilde{x}(t)$ is the whitened, projected data vector.
References
- Van Veen, B.D. et al. (1997). Localization of brain electrical activity via linearly constrained minimum variance spatial filtering. IEEE Trans. Biomed. Eng., 44(9), 867–880.
- Sekihara, K. & Nagarajan, S.S. (2008). Adaptive Spatial Filters for Electromagnetic Brain Imaging. Springer.
- Gross, J. & Ioannides, A.A. (1999). Linear transformations of data space in MEG. Phys. Med. Biol., 44, 2081–2097.
DICS Beamformer
The Dynamic Imaging of Coherent Sources (DICS) beamformer (Gross et al., 2001) is the frequency-domain analogue of LCMV. It replaces the time-domain data covariance with the cross-spectral density (CSD) matrix at a frequency $f$ of interest:

$$S(f) = \left\langle X(f)\, X(f)^{\mathsf{H}} \right\rangle,$$

where $X(f)$ is the Fourier transform of the data and $\cdot^{\mathsf{H}}$ denotes the conjugate transpose.
The CSD matrix can be estimated using Fourier, multitaper, or Morlet wavelet methods.
Real Filter Option
The CSD matrix is generally complex-valued. When computing spatial filter weights for source power estimation, it is common to use only the real part (Hipp et al., 2011):

$$S_r(f) = \operatorname{Re}\left\{ S(f) \right\}$$

This ensures real-valued spatial filter weights and avoids phase-related artifacts in the power estimates.
Source Power
The source power at location $q$ and frequency $f$ is:

$$\hat{P}_q(f) = W_q^{\mathsf{H}}\, S(f)\, W_q$$
All regularization and weight-normalization options from LCMV apply identically.
References
- Gross, J. et al. (2001). Dynamic imaging of coherent sources: studying neural interactions in the human brain. PNAS, 98(2), 694–699.
- van Vliet, M. et al. (2018). Analysis of functional connectivity and oscillatory power using DICS. J. Neurosci. Methods, 309, 199–212.
RAP MUSIC
Recursively Applied and Projected MUltiple SIgnal Classification (RAP-MUSIC; Mosher & Leahy, 1999) is a scanning method that localizes multiple correlated or uncorrelated dipolar sources by iteratively identifying them and projecting them out of the data.
Signal Subspace Estimation
Given the measured data matrix $X$ ($N$ channels $\times$ $T$ time samples), the signal subspace is estimated from the eigendecomposition of the data covariance:

$$C_x = \frac{1}{T}\, X X^\top = U \Lambda U^\top$$

The signal subspace $\Phi_s$ is formed from the $n$ eigenvectors corresponding to the $n$ largest eigenvalues, where $n$ is the number of dipoles to search for.
Subspace Correlation Scan
For each candidate source location $q$ with lead-field $L_q$ ($N \times 3$):

- Compute the SVD of the lead-field: $L_q = U_L \Sigma_L V_L^\top$
- Compute the correlation matrix: $C = U_L^\top \Phi_s$
- Compute the SVD of the correlation: $C = U_C \Sigma_C V_C^\top$
- The subcorrelation metric is the largest singular value: $\operatorname{subcorr}(L_q, \Phi_s) = \sigma_1(C)$
- The optimal orientation is: $\eta_q = V_L \Sigma_L^{-1} u_{C,1}$, normalized to unit length, where $u_{C,1}$ is the first column of $U_C$

The source location with maximum subcorrelation is selected as the $k$-th identified source.
Recursive Projection
After identifying source $k$ with lead-field column $l_k = L_k \eta_k$, the source is projected out of both the lead-field and the signal subspace. Let $A_k = \left[ l_1, \ldots, l_k \right]$ be the matrix of all identified source fields so far, and let $U_A$ come from its SVD $A_k = U_A \Sigma_A V_A^\top$. The projector is:

$$P_k^\perp = I - U_A U_A^\top$$

The projected quantities for the next iteration are:

$$L_q \leftarrow P_k^\perp L_q, \qquad \Phi_s \leftarrow P_k^\perp \Phi_s$$

The algorithm terminates when the subcorrelation falls below a threshold (typically 0.5) or $n$ sources have been found.
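The per-location scan reduces to two small SVDs (a sketch following the subcorrelation definition above; array names are hypothetical):

```python
import numpy as np

def subcorr(L, phi_s):
    """Max subspace correlation and optimal orientation for lead field L.

    L : (N, 3) candidate lead field, phi_s : (N, r) orthonormal subspace."""
    Ul, Sl, Vlt = np.linalg.svd(L, full_matrices=False)
    Uc, Sc, _ = np.linalg.svd(Ul.T @ phi_s, full_matrices=False)
    eta = Vlt.T @ (Uc[:, 0] / Sl)            # eta = V Sigma^{-1} u_1
    return Sc[0], eta / np.linalg.norm(eta)
```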
TRAP-MUSIC Variant
In Truncated RAP-MUSIC (Mäkelä et al., 2018), the signal subspace is additionally truncated at each iteration, keeping only the first $n - k$ columns after projecting out the $k$ sources found so far. This improves robustness when sources are highly correlated.
Dipole-Pair and N-Dipole Scanning
Mosher & Leahy (1999) already proposed that the subspace correlation scan can be performed over $k$-tuples of grid points rather than single dipoles. For a pair of candidate locations $(q_1, q_2)$, one forms the combined lead-field $L_{(q_1, q_2)} = \left[ L_{q_1}\; L_{q_2} \right]$ and computes the subcorrelation metric for this joint model. This extends naturally to $k$-tuples, but the number of combinations grows as $\mathcal{O}(P^k)$ for $P$ grid points, making exhaustive search prohibitive for dense source spaces.
Powell-Accelerated Pair Scanning
To make dipole-pair scanning computationally tractable, MNE-CPP employs a Powell coordinate-descent strategy (Dinh et al., 2012): instead of evaluating all pairs exhaustively, the algorithm alternates between fixing one dipole index and scanning the other along all grid points. This reduces the search from $\mathcal{O}(P^2)$ pair evaluations to a small number of $\mathcal{O}(P)$ sweeps that converge rapidly, though not necessarily to the global maximum. The pair search is parallelized with OpenMP.
The procedure is:
- Start with an initial pair of grid indices
- Fix dipole 1 at its current index, scan all grid points for the best partner dipole 2
- Fix dipole 2 at its newly found index, scan all grid points for the best partner dipole 1
- Repeat until the maximum pair index converges (same pair found in consecutive iterations)
- Project out the identified dipole pair and repeat for the next pair
This Powell search was further combined with lead-field clustering (RTC-MUSIC; Dinh et al., 2017) to enable real-time scanning by partitioning the source space into regions with representative lead fields, dramatically reducing the number of subcorrelation evaluations.
References
- Mosher, J.C. & Leahy, R.M. (1999). Source localization using recursively applied and projected (RAP) MUSIC. IEEE Trans. Signal Process., 47(2), 332–340.
- Dinh, C.; Bollmann, S.; Eichardt, R.; Baumgarten, D.; Haueisen, J. (2012). A GPU-accelerated Performance Optimized RAP-MUSIC Algorithm for Real-Time Source Localization. Biomedizinische Technik, 57. DOI: 10.1515/bmt-2012-4260
- Dinh, C.; Esch, L.; Rühle, J.; Bollmann, S.; Güllmar, D.; Baumgarten, D.; Hämäläinen, M.S.; Haueisen, J. (2017). Real-Time Clustered Multiple Signal Classification (RTC-MUSIC). Brain Topography, 30(5). DOI: 10.1007/s10548-017-0586-7
- Mäkelä, N. et al. (2018). Truncated RAP-MUSIC (TRAP-MUSIC) for MEG and EEG source localization. NeuroImage, 167, 73–83.
Dipole Fitting
Sequential equivalent current dipole (ECD) fitting localizes focal brain activity by fitting one or more current dipoles to the measured field pattern at each time point. Unlike distributed methods (MNE, beamformers) that estimate activity at many locations simultaneously, dipole fitting finds the single position, orientation, and amplitude that best explain the data.
Forward Model
For a current dipole at position $r_0$ with moment $q$, the predicted field at sensor $i$ is:

$$b_i = g_i(r_0)^\top q,$$

or in matrix form $b = G(r_0)\, q$, where $G(r_0)$ ($N \times 3$) is the forward matrix computed at the candidate position using the BEM or sphere model.
Cost Function
The fitting proceeds in whitened data space. Let $\tilde{b}$ be the whitened, projected measurement vector and let $\tilde{G}(r_0)$ be the corresponding whitened forward matrix. The SVD of the whitened forward at the candidate position is:

$$\tilde{G}(r_0) = U \Sigma V^\top$$

The cost function to minimize over the dipole position is:

$$S(r_0) = \tilde{b}^\top \tilde{b} - \sum_{k=1}^{p} \left( u_k^\top \tilde{b} \right)^2,$$

where $p$ is the effective number of components (typically 3 for a free dipole, reduced to 2 if the smallest singular value is less than 20% of the largest).
Goodness of Fit
The relative quality of a dipole fit is measured by the goodness-of-fit (GOF):

$$\text{GOF} = \frac{\sum_{k=1}^{p} \left( u_k^\top \tilde{b} \right)^2}{\tilde{b}^\top \tilde{b}} = 1 - \frac{S(r_0)}{\tilde{b}^\top \tilde{b}}$$

A GOF of 100% means the dipole model explains the data perfectly; lower values indicate contributions from other sources, distributed activity, or noise.
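Both quantities fall out of a single SVD per candidate position (a sketch; the 20% singular-value cutoff follows the text above):

```python
import numpy as np

def dipole_cost_and_gof(G_w, b_w, cutoff=0.2):
    """Whitened residual cost and goodness of fit for one candidate position.

    G_w : (N, 3) whitened forward matrix, b_w : (N,) whitened measurement."""
    U, S, _ = np.linalg.svd(G_w, full_matrices=False)
    p = 3 if S[2] >= cutoff * S[0] else 2    # effective number of components
    explained = np.sum((U[:, :p].T @ b_w) ** 2)
    total = b_w @ b_w
    return total - explained, explained / total   # (cost, GOF)
```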
Dipole Moment Estimation
Once the optimal position is found, the dipole moment is computed from the SVD of the whitened forward:

$$\hat{q} = \operatorname{diag}(s)\, V \Sigma^{-1} U^\top \tilde{b},$$

where $s$ are the column normalization scale factors and $\Sigma$ contains the singular values $\sigma_k$.
Optimization Algorithm
Initial Grid Search
To avoid local minima, the optimization begins with a grid search over precomputed guess points — a set of candidate dipole positions distributed within the conductor model (typically on concentric spheres at 1–2 cm spacing inside the inner skull). The forward fields for all guess points are precomputed, and the best-fitting initial positions are identified by evaluating the projection of the data onto each guess point's forward field.
Non-Linear Optimization
Starting from the best guess point(s), the position is refined by non-linear optimization:
- MNE-CPP uses a Nelder-Mead simplex algorithm (ported from MNE-C): the initial simplex is a regular tetrahedron of ~1 cm edge length centered on the guess point, with convergence tolerance of 0.2 mm.
- MNE-Python uses COBYLA (Constrained Optimization BY Linear Approximation) with the constraint that the dipole must remain at least `min_dist` (default 5 mm) inside the inner skull surface.
Two-Pass Strategy (MNE-CPP)
The MNE-CPP implementation optionally uses a two-pass approach: the first pass uses a sphere model for speed, and the second pass refines the position using the full BEM model starting from the sphere-model result.
Confidence Regions
Confidence limits on the fitted dipole position can be estimated from the Hessian of the cost function at the solution. The Jacobian matrix $J$ contains the partial derivatives of the predicted field with respect to all six dipole parameters ($r_x, r_y, r_z, q_x, q_y, q_z$). The parameter covariance matrix is:

$$C_p = \left( J^\top J \right)^{-1}$$

(the data are whitened, so the noise variance is unity). The 95% confidence volume is:

$$V_{95} = \frac{4\pi}{3} \left( \chi^2_{3,\,0.95} \right)^{3/2} \sqrt{e_1 e_2 e_3},$$

where $e_1, e_2, e_3$ are the eigenvalues of the position submatrix of $C_p$ and $\chi^2_{3,\,0.95}$ is the critical value from the $\chi^2$ distribution with 3 degrees of freedom at the 95% confidence level.
References
- Sarvas, J. (1987). Basic mathematical and electromagnetic concepts of the biomagnetic inverse problem. Phys. Med. Biol., 32(1), 11–22.
- Hämäläinen, M. et al. (1993). Magnetoencephalography — theory, instrumentation, and applications to noninvasive studies of the working human brain. Rev. Mod. Phys., 65(2), 413–497.
Summary of Inverse Methods
The following table summarizes all inverse methods available in MNE-CPP:
| Method | Type | Domain | Sources | Key Strength |
|---|---|---|---|---|
| MNE | Distributed | Time | All cortical | Mathematically well-defined, full current distribution |
| dSPM | Distributed | Time | All cortical | Noise normalization removes depth bias |
| sLORETA | Distributed | Time | All cortical | Zero localization error for point sources |
| eLORETA | Distributed | Time | All cortical | Exact zero localization bias, iterative weights |
| CMNE | Distributed + LSTM | Time | All cortical | Spatiotemporal context improves spatial fidelity |
| MxNE | Sparse (L21) | Time | Few active | Group-lasso sparsity with temporal smoothness |
| Gamma-MAP | Sparse (Bayesian) | Time | Few active | Automatic relevance determination, data-driven pruning |
| LCMV | Beamformer | Time | Scanning | Adaptive filter, good for focal sources |
| DICS | Beamformer | Frequency | Scanning | Frequency-specific source localization |
| RAP MUSIC | Subspace scanning | Time | Multiple focal | Localizes multiple correlated sources |
| Dipole Fit | Parametric | Time | 1 dipole/timepoint | Precise localization for truly focal activity |
All methods share common preprocessing: data whitening with the noise covariance matrix $C$, application of SSP projectors, and (for MEG) software gradient compensation. The choice of method depends on the expected source configuration and the scientific question.