
tessl/pypi-mne

MNE-Python provides comprehensive tools for analyzing MEG, EEG, and other neuroimaging data with advanced source estimation and connectivity analysis.


Sample Datasets

Built-in access to standard neuroimaging datasets for testing, tutorials, and benchmarking. MNE-Python provides easy access to over 20 different datasets covering various experimental paradigms and recording modalities.

Capabilities

Core Datasets

Standard datasets used in tutorials and examples throughout the MNE documentation.

def data_path(path: Optional[str] = None, force_update: bool = False, update_path: bool = True,
             download: bool = True, accept: bool = False, verbose: Optional[Union[bool, str, int]] = None) -> str:
    """
    Generic dataset path function (pattern used by all datasets).
    
    Parameters:
    - path: Custom download path
    - force_update: Force redownload of data
    - update_path: Update MNE config with path
    - download: Download if missing
    - accept: Accept license terms
    - verbose: Verbosity level
    
    Returns:
    Path to dataset directory
    """

# Sample Dataset - Auditory/Visual Paradigm
sample.data_path: Callable[..., str]  # Download sample dataset
sample.get_version: Callable[[], str]  # Get dataset version

# Somatosensory Dataset
somato.data_path: Callable[..., str]  # Somatosensory MEG data
somato.get_version: Callable[[], str]

# Multimodal Dataset
multimodal.data_path: Callable[..., str]  # Multimodal face dataset
multimodal.get_version: Callable[[], str]

# SPM Face Dataset
spm_face.data_path: Callable[..., str]  # SPM face processing dataset
spm_face.get_version: Callable[[], str]

Motor Imagery and BCI Datasets

Datasets for brain-computer interface research and motor imagery classification.

# EEG Motor Movement/Imagery Dataset
eegbci.data_path: Callable[..., str]
eegbci.get_version: Callable[[], str]

def load_data(subject: int, runs: Union[int, List[int]], path: Optional[str] = None,
             force_update: bool = False, update_path: bool = True,
             base_url: str = 'https://physionet.org/files/eegmmidb/',
             verbose: Optional[Union[bool, str, int]] = None) -> List[str]:
    """
    Load EEGBCI dataset files.
    
    Parameters:
    - subject: Subject number (1-109)
    - runs: Run number(s) to load
    - path: Download path
    - force_update: Force redownload
    - update_path: Update MNE config
    - base_url: Base download URL
    - verbose: Verbosity level
    
    Returns:
    List of paths to downloaded files
    """

# SSVEP Dataset
ssvep.data_path: Callable[..., str]  # Steady-state visual evoked potentials
ssvep.get_version: Callable[[], str]

def load_data(path: Optional[str] = None, force_update: bool = False,
             update_path: bool = True, verbose: Optional[Union[bool, str, int]] = None) -> Dict:
    """
    Load SSVEP dataset.
    
    Returns:
    Dictionary with loaded epochs and metadata
    """

Sleep and Physiology Datasets

Datasets for sleep research and physiological signal analysis.

# Sleep Physiology Dataset
sleep_physionet.data_path: Callable[..., str]
sleep_physionet.get_version: Callable[[], str]

def age_group_averages(path: Optional[str] = None, verbose: Optional[Union[bool, str, int]] = None) -> List[str]:
    """
    Load age group average data.
    
    Parameters:
    - path: Dataset path
    - verbose: Verbosity level
    
    Returns:
    List of paths to age group files
    """

def temazepam_effects(path: Optional[str] = None, verbose: Optional[Union[bool, str, int]] = None) -> List[str]:
    """
    Load temazepam effects data.
    
    Returns:
    List of paths to temazepam study files
    """

Specialized Neuroimaging Datasets

Datasets for specific analysis methods and experimental paradigms.

# High-Frequency SEF Dataset
hf_sef.data_path: Callable[..., str]  # High-frequency somatosensory evoked fields
hf_sef.get_version: Callable[[], str]

# Epilepsy ECoG Dataset
epilepsy_ecog.data_path: Callable[..., str]  # Intracranial EEG epilepsy data
epilepsy_ecog.get_version: Callable[[], str]

# fNIRS Motor Task Dataset
fnirs_motor.data_path: Callable[..., str]  # Functional near-infrared spectroscopy
fnirs_motor.get_version: Callable[[], str]

# OPM Dataset
opm.data_path: Callable[..., str]  # Optically pumped magnetometer data
opm.get_version: Callable[[], str]

# Visual Categorization Dataset
visual_92_categories.data_path: Callable[..., str]  # Visual object categorization
visual_92_categories.get_version: Callable[[], str]

def load_data(path: Optional[str] = None, verbose: Optional[Union[bool, str, int]] = None) -> Tuple[ArrayLike, ArrayLike]:
    """
    Load visual categorization data.
    
    Returns:
    Tuple of (data_array, labels)
    """

# Kiloword Dataset
kiloword.data_path: Callable[..., str]  # Lexical decision task
kiloword.get_version: Callable[[], str]

def load_data(path: Optional[str] = None, verbose: Optional[Union[bool, str, int]] = None) -> Dict:
    """
    Load kiloword dataset.
    
    Returns:
    Dictionary with epochs and metadata
    """

Connectivity and Network Datasets

Datasets for studying brain connectivity and network analysis.

# FieldTrip CMC Dataset
fieldtrip_cmc.data_path: Callable[..., str]  # Cortico-muscular coherence
fieldtrip_cmc.get_version: Callable[[], str]

# mTRF Dataset
mtrf.data_path: Callable[..., str]  # Multivariate temporal response functions
mtrf.get_version: Callable[[], str]

def load_speech_envelope(path: Optional[str] = None, verbose: Optional[Union[bool, str, int]] = None) -> Tuple[ArrayLike, float]:
    """
    Load speech envelope stimulus.
    
    Returns:
    Tuple of (envelope_data, sampling_rate)
    """

Phantom and Calibration Datasets

Datasets with known ground truth for method validation and calibration.

# 4D BTi Phantom Dataset
phantom_4dbti.data_path: Callable[..., str]  # 4D Neuroimaging phantom
phantom_4dbti.get_version: Callable[[], str]

# KIT Phantom Dataset
phantom_kit.data_path: Callable[..., str]  # KIT/Yokogawa phantom data
phantom_kit.get_version: Callable[[], str]

# Kernel Phantom Dataset
phantom_kernel.data_path: Callable[..., str]  # Kernel flow phantom
phantom_kernel.get_version: Callable[[], str]

def load_data(subject: str = 'phantom', session: str = '20220927_114934',
             path: Optional[str] = None, verbose: Optional[Union[bool, str, int]] = None) -> Raw:
    """
    Load phantom data directly as Raw object.
    
    Parameters:
    - subject: Subject identifier
    - session: Session identifier
    - path: Dataset path
    - verbose: Verbosity level
    
    Returns:
    Raw object with phantom data
    """

Standard Brain Templates and Atlases

Access to standard brain templates and parcellations.

def fetch_fsaverage(subjects_dir: Optional[str] = None, verbose: Optional[Union[bool, str, int]] = None) -> str:
    """
    Fetch FreeSurfer average brain template.
    
    Parameters:
    - subjects_dir: FreeSurfer subjects directory
    - verbose: Verbosity level
    
    Returns:
    Path to fsaverage directory
    """

def fetch_infant_template(age: str, subjects_dir: Optional[str] = None,
                         verbose: Optional[Union[bool, str, int]] = None) -> str:
    """
    Fetch infant brain template.
    
    Parameters:
    - age: Age group ('6mo', '12mo', etc.)
    - subjects_dir: FreeSurfer subjects directory
    - verbose: Verbosity level
    
    Returns:
    Path to infant template
    """

def fetch_hcp_mmp_parcellation(subjects_dir: Optional[str] = None, verbose: Optional[Union[bool, str, int]] = None) -> List[str]:
    """
    Fetch HCP multi-modal parcellation.
    
    Parameters:
    - subjects_dir: FreeSurfer subjects directory
    - verbose: Verbosity level
    
    Returns:
    List of paths to parcellation files
    """

def fetch_aparc_sub_parcellation(subjects_dir: Optional[str] = None, verbose: Optional[Union[bool, str, int]] = None) -> List[str]:
    """
    Fetch aparc sub-parcellation.
    
    Parameters:
    - subjects_dir: FreeSurfer subjects directory
    - verbose: Verbosity level
    
    Returns:
    List of paths to sub-parcellation files
    """

Dataset Utilities

Utility functions for dataset management and discovery.

def has_dataset(name: str, path: Optional[str] = None) -> bool:
    """
    Check if dataset is available locally.
    
    Parameters:
    - name: Dataset name
    - path: Custom path to check
    
    Returns:
    True if dataset is available
    """

def get_version(name: str) -> str:
    """
    Get version of specific dataset.
    
    Parameters:
    - name: Dataset name
    
    Returns:
    Version string
    """

def _download_all_example_data(path: Optional[str] = None, verbose: Optional[Union[bool, str, int]] = None) -> None:
    """
    Download all example datasets (for CI/testing).
    
    Parameters:
    - path: Download path
    - verbose: Verbosity level
    """

Usage Examples

Loading Sample Dataset

import mne
from pathlib import Path

# Download sample dataset (if not already present)
sample_data_folder = Path(mne.datasets.sample.data_path())
print(f"Sample data location: {sample_data_folder}")

# Build paths to individual sample data files
sample_data_raw_file = sample_data_folder / 'MEG' / 'sample' / 'sample_audvis_filt-0-40_raw.fif'
sample_data_cov_file = sample_data_folder / 'MEG' / 'sample' / 'sample_audvis-cov.fif'
sample_data_trans_file = sample_data_folder / 'MEG' / 'sample' / 'sample_audvis_raw-trans.fif'

# Load the actual data
raw = mne.io.read_raw_fif(sample_data_raw_file, preload=True)
cov = mne.read_cov(sample_data_cov_file)

print(f"Raw data: {raw}")
print(f"Covariance: {cov}")

Motor Imagery Classification Data

import mne
from mne.datasets import eegbci

# Load EEGBCI motor imagery data
eegbci_path = eegbci.data_path()
print(f"EEGBCI data location: {eegbci_path}")

# Load specific subject and runs
subject = 1
runs = [6, 10, 14]  # Motor imagery runs
raw_fnames = eegbci.load_data(subject, runs)

# Load and concatenate runs
raws = [mne.io.read_raw_edf(f, preload=True) for f in raw_fnames]
raw = mne.concatenate_raws(raws)

# Set channel names to standard 10-20 system
mne.datasets.eegbci.standardize(raw)

# Set montage
montage = mne.channels.make_standard_montage('standard_1005')
raw.set_montage(montage)

print(f"Motor imagery data: {raw}")

Using Phantom Data for Validation

import mne
from pathlib import Path
from mne.datasets import phantom_kit

# Load phantom dataset
phantom_path = Path(phantom_kit.data_path())
print(f"Phantom data location: {phantom_path}")

# Phantom data has known dipole locations - useful for validation
phantom_raw_file = phantom_path / 'phantom_100hz_20_sec_raw.fif'
phantom_raw = mne.io.read_raw_fif(phantom_raw_file, preload=True)

# Load dipole information
phantom_dipoles_file = phantom_path / 'phantom_dipoles.txt'
# dipoles = load_phantom_dipoles(phantom_dipoles_file)  # Custom function

print(f"Phantom raw data: {phantom_raw}")

Sleep Dataset Analysis

import mne
from mne.datasets import sleep_physionet

# Load sleep dataset
sleep_path = sleep_physionet.data_path()
print(f"Sleep data location: {sleep_path}")

# Fetch paths to the age-group study files
subjects = sleep_physionet.age_group_averages()
print(f"Available recordings: {len(subjects)}")

# Example loading one subject's data
# sleep_raw = mne.io.read_raw_edf(subjects[0], preload=True)
# print(f"Sleep recording: {sleep_raw}")

Visual Categorization Dataset

import mne
import numpy as np
from mne.datasets import visual_92_categories

# Load visual categorization data
visual_path = visual_92_categories.data_path()
print(f"Visual data location: {visual_path}")

# Load preprocessed data
data, labels = visual_92_categories.load_data()
print(f"Data shape: {data.shape}")
print(f"Labels shape: {labels.shape}")
print(f"Unique categories: {len(np.unique(labels))}")

FreeSurfer Template

import mne
from pathlib import Path

# Fetch FreeSurfer average brain (returns the fsaverage directory itself)
fs_dir = Path(mne.datasets.fetch_fsaverage(verbose=True))
subjects_dir = fs_dir.parent  # parcellation fetchers expect the parent subjects directory
print(f"fsaverage template: {fs_dir}")

# Fetch HCP multi-modal parcellation
hcp_parcellation = mne.datasets.fetch_hcp_mmp_parcellation(subjects_dir=subjects_dir)
print(f"HCP parcellation files: {len(hcp_parcellation)}")

# Check if dataset is available
has_sample = mne.datasets.has_dataset('sample')
print(f"Sample dataset available: {has_sample}")

Checking Dataset Availability

import mne

# List of available datasets
datasets = [
    'sample', 'somato', 'spm_face', 'eegbci', 'hf_sef', 
    'multimodal', 'opm', 'phantom_4dbti', 'visual_92_categories'
]

for dataset in datasets:
    available = mne.datasets.has_dataset(dataset)
    if available and hasattr(mne.datasets, dataset):
        version = getattr(mne.datasets, dataset).get_version()
        print(f"{dataset}: ✓ (v{version})")
    else:
        print(f"{dataset}: {'✓' if available else '✗'}")

Dataset Categories

By Recording Modality

  • MEG: sample, somato, multimodal, hf_sef, opm
  • EEG: eegbci, spm_face, visual_92_categories, kiloword
  • ECoG: epilepsy_ecog
  • fNIRS: fnirs_motor
  • Sleep: sleep_physionet

By Experimental Paradigm

  • Sensory: sample (auditory/visual), somato (somatosensory), hf_sef (tactile)
  • Motor: eegbci (motor imagery), somato (motor responses)
  • Cognitive: spm_face (face processing), visual_92_categories (object recognition)
  • Language: kiloword (lexical decision)
  • Clinical: epilepsy_ecog (seizure data), sleep_physionet (sleep disorders)

By Use Case

  • Tutorials: sample, somato, spm_face
  • Method validation: phantom_4dbti, phantom_kit, phantom_kernel
  • BCI research: eegbci, ssvep
  • Connectivity: fieldtrip_cmc, mtrf
  • Templates: fsaverage, infant templates, HCP parcellation
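For scripting, the groupings in this section can be captured as a plain lookup table; the dict below simply restates the modality lists above, with a small reverse-lookup helper:

```python
# Dataset groupings by recording modality (mirrors the lists above)
BY_MODALITY = {
    "MEG": ["sample", "somato", "multimodal", "hf_sef", "opm"],
    "EEG": ["eegbci", "spm_face", "visual_92_categories", "kiloword"],
    "ECoG": ["epilepsy_ecog"],
    "fNIRS": ["fnirs_motor"],
    "Sleep": ["sleep_physionet"],
}


def modality_of(dataset):
    """Reverse lookup: the modality group a dataset belongs to, or None."""
    for modality, names in BY_MODALITY.items():
        if dataset in names:
            return modality
    return None


print(modality_of("eegbci"))  # EEG
```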

Types

from typing import Union, Optional, List, Dict, Tuple, Callable, Any
import numpy as np
from mne.io import Raw  # returned by phantom_kernel.load_data

ArrayLike = Union[np.ndarray, List, Tuple]

Install with Tessl CLI

npx tessl i tessl/pypi-mne
