Optical Physiology#

Imaging#

Base Imaging#

Author: Ben Dichter.

class BaseImagingExtractorInterface(verbose: bool = False, photon_series_type: Literal['OnePhotonSeries', 'TwoPhotonSeries'] = 'TwoPhotonSeries', **source_data)[source]#

Bases: BaseExtractorInterface

Parent class for all ImagingExtractorInterfaces.

keywords: tuple[str] = ('ophys', 'optical electrophysiology', 'fluorescence', 'microscopy', 'two photon', 'one photon', 'voltage imaging', 'calcium imaging')#
get_metadata_schema() dict[source]#

Retrieve the metadata schema for the optical physiology (Ophys) data.

Returns:

The metadata schema dictionary containing definitions for Device, ImagingPlane, and either OnePhotonSeries or TwoPhotonSeries based on the photon_series_type.

Return type:

dict

get_metadata() DeepDict[source]#

Retrieve the metadata for the imaging data.

Returns:

Dictionary containing metadata including device information, imaging plane details, and photon series configuration.

Return type:

DeepDict

get_original_timestamps() ndarray[source]#

Retrieve the original unaltered timestamps for the data in this interface.

This function should retrieve the data on-demand by re-initializing the IO.

Returns:

timestamps – The timestamps for the data stream.

Return type:

numpy.ndarray

get_timestamps() ndarray[source]#

Retrieve the timestamps for the data in this interface.

Returns:

timestamps – The timestamps for the data stream.

Return type:

numpy.ndarray

set_aligned_timestamps(aligned_timestamps: ndarray)[source]#

Replace all timestamps for this interface with those aligned to the common session start time.

Must be in units seconds relative to the common ‘session_start_time’.

Parameters:

aligned_timestamps (numpy.ndarray) – The synchronized timestamps for data in this interface.

add_to_nwbfile(nwbfile: NWBFile, metadata: dict | None = None, photon_series_type: Literal['TwoPhotonSeries', 'OnePhotonSeries'] = 'TwoPhotonSeries', photon_series_index: int = 0, parent_container: Literal['acquisition', 'processing/ophys'] = 'acquisition', stub_test: bool = False, stub_frames: int | None = None, always_write_timestamps: bool = False, iterator_type: str | None = 'v2', iterator_options: dict | None = None, stub_samples: int = 100)[source]#

Add imaging data to the NWB file.

Parameters:
  • nwbfile (NWBFile) – The NWB file where the imaging data will be added.

  • metadata (dict, optional) – Metadata for the NWBFile, by default None.

  • photon_series_type ({“TwoPhotonSeries”, “OnePhotonSeries”}, optional) – The type of photon series to be added, by default “TwoPhotonSeries”.

  • photon_series_index (int, optional) – The index of the photon series in the provided imaging data, by default 0.

  • parent_container ({“acquisition”, “processing/ophys”}, optional) – Specifies the parent container to which the photon series should be added, either as part of “acquisition” or under the “processing/ophys” module, by default “acquisition”.

  • stub_test (bool, optional) – If True, only writes a small subset of frames for testing purposes, by default False.

  • stub_frames (int, optional) –

    Deprecated since February 2026: use stub_samples instead.

  • always_write_timestamps (bool, optional) – Whether to always write timestamps, by default False.

  • iterator_type ({“v2”, None}, default: “v2”) – The type of iterator for chunked data writing. ‘v2’: Uses iterative write with control over chunking and progress bars. None: Loads all data into memory before writing (not recommended for large datasets). Note: ‘v1’ is deprecated and will be removed on or after March 2026.

  • iterator_options (dict, optional) – Options for controlling the iterative write process (buffer size, progress bars). See the pynwb tutorial on iterative write for more information on chunked data writing.

    Note: To configure chunk size and compression, use the backend configuration system via get_default_backend_configuration() and configure_backend() after calling this method. See the backend configuration documentation for details.

  • stub_samples (int, default: 100) – The number of samples (frames) to use for testing. When provided, takes precedence over stub_frames.
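
The note above about configuring chunking and compression refers to NeuroConv's backend configuration workflow. The following sketch illustrates the pattern with a concrete subclass (TiffImagingInterface is used here only for concreteness; the file path, sampling frequency, and session start time are hypothetical):

from datetime import datetime

from neuroconv.datainterfaces import TiffImagingInterface
from neuroconv.tools.nwb_helpers import (
    configure_backend,
    get_default_backend_configuration,
    make_or_load_nwbfile,
)

# Hypothetical source file and acquisition rate.
interface = TiffImagingInterface(file_path="imaging.tif", sampling_frequency=30.0)
metadata = interface.get_metadata()
metadata["NWBFile"]["session_start_time"] = datetime(2024, 1, 1, 12, 0, 0)

with make_or_load_nwbfile(nwbfile_path="imaging.nwb", metadata=metadata, overwrite=True) as nwbfile:
    # Write the photon series under acquisition with the default "v2" iterator.
    interface.add_to_nwbfile(nwbfile=nwbfile, metadata=metadata, photon_series_type="TwoPhotonSeries")

    # Chunking and compression are configured afterwards via the backend configuration system.
    backend_configuration = get_default_backend_configuration(nwbfile=nwbfile, backend="hdf5")
    configure_backend(nwbfile=nwbfile, backend_configuration=backend_configuration)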

Bruker Tiff Imaging#

class BrukerTiffMultiPlaneConverter(folder_path: Annotated[pathlib._local.Path, PathType(path_type='dir')], plane_separation_type: Literal['disjoint', 'contiguous'] = None, verbose: bool = False)[source]#

Bases: NWBConverter

Converter class for Bruker imaging data with multiple channels and multiple planes.

Initializes the data interfaces for Bruker volumetric imaging data stream.

Parameters:
  • folder_path (DirectoryPath) – The path to the folder that contains the Bruker TIF image files (.ome.tif) and configuration files (.xml, .env).

  • plane_separation_type ({‘contiguous’, ‘disjoint’}) – Defines how to write volumetric imaging data. Use ‘contiguous’ to create the volumetric two photon series, and ‘disjoint’ to create separate imaging plane and two photon series for each plane.

  • verbose (bool, default: False) – Controls verbosity.

display_name: str | None = 'Bruker TIFF Imaging (multiple channels, multiple planes)'#
keywords: tuple[str] = ('ophys', 'optical electrophysiology', 'fluorescence', 'microscopy', 'two photon', 'one photon', 'voltage imaging', 'calcium imaging')#
associated_suffixes: tuple[str] = ('.ome', '.tif', '.xml', '.env')#
info: str | None = 'Interface for handling all channels and all planes of Bruker imaging data.'#
classmethod get_source_schema()[source]#

Compile input schemas from each of the data interface classes.

Returns:

The compiled source schema from all data interface classes.

Return type:

dict

get_conversion_options_schema() dict[source]#

Get the schema for the conversion options.

Returns:

The schema dictionary containing conversion options for the Bruker TIFF interface.

Return type:

dict

add_to_nwbfile(nwbfile: NWBFile, metadata, stub_test: bool = False, stub_frames: int | None = None, stub_samples: int = 100)[source]#

Add data from multiple data interfaces to the specified NWBFile.

Parameters:
  • nwbfile (NWBFile) – The NWBFile object to which the data will be added.

  • metadata (dict) – Metadata dictionary containing information to describe the data being added to the NWB file.

  • stub_test (bool, optional) – If True, only a subset of the data (up to stub_samples) will be added for testing purposes. Default is False.

  • stub_frames (int, optional) –

    Deprecated since February 2026: use stub_samples instead.

  • stub_samples (int, default: 100) – The number of samples (frames) to use for testing. When provided, takes precedence over stub_frames.

run_conversion(nwbfile_path: Annotated[Path, PathType(path_type=file)] | None = None, nwbfile: NWBFile | None = None, metadata: dict | None = None, overwrite: bool = False, stub_test: bool = False, stub_frames: int | None = None, stub_samples: int = 100) None[source]#

Run the conversion process for the instantiated data interfaces and add data to the NWB file.

Parameters:
  • nwbfile_path (FilePath, optional) – Path where the NWB file will be written. If None, the file will be handled in-memory.

  • nwbfile (NWBFile, optional) – An in-memory NWBFile object. If None, a new NWBFile object will be created.

  • metadata (dict, optional) – Metadata dictionary for describing the NWB file. If None, it will be auto-generated using the get_metadata() method.

  • overwrite (bool, optional) – If True, overwrites the existing NWB file at nwbfile_path. If False, appends to the file (default is False).

  • stub_test (bool, optional) – If True, only a subset of the data (up to stub_samples) will be added for testing purposes, by default False.

  • stub_frames (int, optional) –

    Deprecated since February 2026: use stub_samples instead.

  • stub_samples (int, default: 100) – The number of samples (frames) to use for testing. When provided, takes precedence over stub_frames.
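
A minimal usage sketch for this converter, assuming it is importable from neuroconv.converters; the folder path is hypothetical and must contain the .ome.tif, .xml, and .env files described above:

from neuroconv.converters import BrukerTiffMultiPlaneConverter

converter = BrukerTiffMultiPlaneConverter(
    folder_path="bruker_session/",        # hypothetical Bruker acquisition folder
    plane_separation_type="contiguous",   # write a single volumetric TwoPhotonSeries
)
metadata = converter.get_metadata()       # session start time is read from the Bruker XML when available
converter.run_conversion(nwbfile_path="bruker_volumetric.nwb", metadata=metadata, overwrite=True)

Use plane_separation_type="disjoint" instead to write a separate imaging plane and two photon series per plane.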

class BrukerTiffSinglePlaneConverter(folder_path: Annotated[pathlib._local.Path, PathType(path_type='dir')], verbose: bool = False)[source]#

Bases: NWBConverter

Primary data interface class for converting Bruker imaging data with multiple channels and a single plane.

Initializes the data interfaces for Bruker imaging data stream.

Parameters:
  • folder_path (DirectoryPath) – The path to the folder that contains the Bruker TIF image files (.ome.tif) and configuration files (.xml, .env).

  • verbose (bool, default: False) – Controls verbosity.

display_name: str | None = 'Bruker TIFF Imaging (multiple channels, single plane)'#
keywords: tuple[str] = ('ophys', 'optical electrophysiology', 'fluorescence', 'microscopy', 'two photon', 'one photon', 'voltage imaging', 'calcium imaging')#
associated_suffixes: tuple[str] = ('.ome', '.tif', '.xml', '.env')#
info: str | None = 'Interface for handling multiple channels of a single plane of Bruker imaging data.'#
classmethod get_source_schema()[source]#

Compile input schemas from each of the data interface classes.

Returns:

The compiled source schema from all data interface classes.

Return type:

dict

get_conversion_options_schema() dict[source]#

Get the schema for the conversion options.

Returns:

The schema dictionary containing conversion options for the Bruker TIFF interface.

Return type:

dict

add_to_nwbfile(nwbfile: NWBFile, metadata, stub_test: bool = False, stub_frames: int | None = None, stub_samples: int = 100)[source]#

Add data from all instantiated data interfaces to the provided NWBFile.

Parameters:
  • nwbfile (NWBFile) – The NWBFile object to which the data will be added.

  • metadata (dict) – Metadata dictionary containing information about the data to be added.

  • stub_test (bool, optional) – If True, only a subset of the data (defined by stub_samples) will be added for testing purposes, by default False.

  • stub_frames (int, optional) –

    Deprecated since February 2026: use stub_samples instead.

  • stub_samples (int, default: 100) – The number of samples (frames) to use for testing. When provided, takes precedence over stub_frames.

run_conversion(nwbfile_path: Annotated[Path, PathType(path_type=file)] | None = None, nwbfile: NWBFile | None = None, metadata: dict | None = None, overwrite: bool = False, stub_test: bool = False, stub_frames: int | None = None, stub_samples: int = 100) None[source]#

Run the NWB conversion process for all instantiated data interfaces.

Parameters:
  • nwbfile_path (FilePath, optional) – The file path where the NWB file will be written. If None, the file is handled in-memory.

  • nwbfile (NWBFile, optional) – An existing in-memory NWBFile object. If None, a new NWBFile object will be created.

  • metadata (dict, optional) – Metadata dictionary used to create or validate the NWBFile. If None, metadata is automatically generated.

  • overwrite (bool, optional) – If True, the NWBFile at nwbfile_path is overwritten if it exists. If False (default), data is appended.

  • stub_test (bool, optional) – If True, only a subset of the data (up to stub_samples) is used for testing purposes. By default False.

  • stub_frames (int, optional) –

    Deprecated since February 2026: use stub_samples instead.

  • stub_samples (int, default: 100) – The number of samples (frames) to use for testing. When provided, takes precedence over stub_frames.

Femtonics Imaging#

Femtonics imaging interface for NeuroConv.

class FemtonicsImagingInterface(file_path: Annotated[pathlib._local.Path, PathType(path_type='file')], session_name: str | None = None, munit_name: str | None = None, channel_name: str | None = None, verbose: bool = False)[source]#

Bases: BaseImagingExtractorInterface

Data interface for Femtonics imaging data (.mesc files).

This interface handles Femtonics two-photon microscopy data stored in MESc (Measurement Session Container) format, which is an HDF5-based file format containing imaging data, experiment metadata, scan parameters, and hardware configuration.

Initialize the FemtonicsImagingInterface.

Parameters:
  • file_path (FilePath) – Path to the .mesc file.

  • session_name (str, optional) – Name of the MSession to use (e.g., “MSession_0”, “MSession_1”). If None and there is only one session, that session will be selected automatically; otherwise, the desired session must be specified. In Femtonics MESc files, an MSession (“Measurement Session”) represents a single experimental session, which may contain one or more MUnits (imaging acquisitions or experiments). MSessions are typically named “MSession_0”, “MSession_1”, etc.

  • munit_name (str, optional) – Name of the MUnit within the specified session (e.g., “MUnit_0”, “MUnit_1”). If None and there is only one MUnit in the session, that unit will be selected automatically; otherwise, the desired unit must be specified.

    In Femtonics MESc files, an MUnit (“Measurement Unit”) represents a single imaging acquisition or experiment, including all associated imaging data and metadata. A single MSession can contain multiple MUnits, each corresponding to a separate imaging run/experiment performed during the session. MUnits are named as “MUnit_0”, “MUnit_1”, etc. within each session.

  • channel_name (str, optional) – Name of the channel to extract (e.g., ‘UG’, ‘UR’). If multiple channels are available and no channel is specified, an error will be raised. If only one channel is available, it will be used automatically.

  • verbose (bool, optional) – Whether to print verbose output. Default is False.

display_name: str | None = 'Femtonics Imaging'#
associated_suffixes: tuple[str] = ('.mesc',)#
info: str | None = 'Interface for Femtonics two-photon imaging data in MESc format.'#
classmethod get_extractor_class()[source]#

Get the extractor class for this interface.

This classmethod must be implemented by each concrete interface to specify which extractor class to use.

Returns:

The extractor class or function to use for initialization.

Return type:

type or callable

get_metadata() DeepDict[source]#

Extract metadata specific to Femtonics imaging data.

Returns:

Dictionary containing extracted metadata including device information, optical channels, imaging plane details, and acquisition parameters.

Return type:

DeepDict

classmethod get_available_sessions(file_path: Annotated[Path, PathType(path_type=file)]) list[str][source]#

Get list of available session keys in the file.

Parameters:

file_path (str or Path) – Path to the .mesc file.

Returns:

List of available session keys.

Return type:

list of str

classmethod get_available_munits(file_path: Annotated[Path, PathType(path_type=file)], session_name: str = None) list[str][source]#

Get list of available unit keys in the specified session.

Parameters:
  • file_path (str or Path) – Path to the .mesc file.

  • session_name (str, optional) – Name of the MSession to use (e.g., “MSession_0”). If None and only one session exists, uses that session automatically. If multiple sessions exist, raises an error.

Returns:

List of available unit keys.

Return type:

list of str

classmethod get_available_channels(file_path: Annotated[Path, PathType(path_type=file)], session_name: str = None, munit_name: str = None) list[str][source]#

Get available channels in the specified session/unit combination.

Parameters:
  • file_path (str or Path) – Path to the .mesc file.

  • session_name (str, optional) – Name of the MSession to use (e.g., “MSession_0”). If None and only one session exists, uses that session automatically. If multiple sessions exist, raises an error.

  • munit_name (str, optional) – Name of the MUnit within the session (e.g., “MUnit_0”). If None and only one unit exists in the session, uses that unit automatically. If multiple units exist, raises an error.

Returns:

List of available channel names.

Return type:

list of str
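
Because .mesc files can contain several sessions, units, and channels, the class methods above can be used to inspect a file before constructing the interface. A minimal sketch (the file path is hypothetical, and the import assumes the interface is exposed under neuroconv.datainterfaces):

from neuroconv.datainterfaces import FemtonicsImagingInterface

file_path = "experiment.mesc"  # hypothetical MESc file
sessions = FemtonicsImagingInterface.get_available_sessions(file_path)
munits = FemtonicsImagingInterface.get_available_munits(file_path, session_name=sessions[0])
channels = FemtonicsImagingInterface.get_available_channels(
    file_path, session_name=sessions[0], munit_name=munits[0]
)

interface = FemtonicsImagingInterface(
    file_path=file_path,
    session_name=sessions[0],
    munit_name=munits[0],
    channel_name=channels[0],  # must be given explicitly when more than one channel is present
)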

HDF5 Imaging#

class Hdf5ImagingInterface(file_path: Annotated[pathlib._local.Path, PathType(path_type='file')], mov_field: str = 'mov', sampling_frequency: float | None = None, start_time: float | None = None, metadata: dict | None = None, channel_names: list | numpy.ndarray | None = None, verbose: bool = False, photon_series_type: Literal['OnePhotonSeries', 'TwoPhotonSeries'] = 'TwoPhotonSeries')[source]#

Bases: BaseImagingExtractorInterface

Interface for HDF5 imaging data.

Parameters:
  • file_path (FilePath) – Path to .h5 or .hdf5 file.

  • mov_field (str, default: ‘mov’)

  • sampling_frequency (float, optional)

  • start_time (float, optional)

  • metadata (dict, optional)

  • channel_names (list of str, optional)

  • verbose (bool, default: False)

display_name: str | None = 'HDF5 Imaging'#
associated_suffixes: tuple[str] = ('.h5', '.hdf5')#
info: str | None = 'Interface for HDF5 imaging data.'#
classmethod get_extractor_class()[source]#

Get the extractor class for this interface.

This classmethod must be implemented by each concrete interface to specify which extractor class to use.

Returns:

The extractor class or function to use for initialization.

Return type:

type or callable
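
A minimal construction sketch, assuming the interface is exposed under neuroconv.datainterfaces; the file path, dataset name, and rate are hypothetical:

from neuroconv.datainterfaces import Hdf5ImagingInterface

interface = Hdf5ImagingInterface(
    file_path="movie.h5",                   # hypothetical HDF5 file
    mov_field="mov",                        # name of the dataset holding the frames
    sampling_frequency=20.0,                # supply explicitly if not stored in the file
    photon_series_type="OnePhotonSeries",   # or "TwoPhotonSeries" as appropriate
)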

Inscopix Imaging#

is_file_multiplane(file_path)[source]#

Hacky check for ‘multiplane’ keyword in the file. Reads line by line to avoid memory issues with large files. If found, raises NotImplementedError. This is NOT a proper ISX API method—just a string search.

class InscopixImagingInterface(file_path: Annotated[pathlib._local.Path, PathType(path_type='file')], verbose: bool = False, **kwargs)[source]#

Bases: BaseImagingExtractorInterface

Data Interface for Inscopix Imaging Extractor.

Parameters:
  • file_path (FilePath) – Path to the .isxd Inscopix file.

  • verbose (bool, optional) – If True, outputs additional information during processing.

  • **kwargs (dict, optional) – Additional keyword arguments passed to the parent class.

Raises:

NotImplementedError – If the file contains multiplane configuration that is not yet supported.

display_name: str | None = 'Inscopix Imaging'#
associated_suffixes: tuple[str] = ('.isxd',)#
info: str | None = 'Interface for handling Inscopix imaging data.'#
classmethod get_extractor_class()[source]#

Get the extractor class for this interface.

This classmethod must be implemented by each concrete interface to specify which extractor class to use.

Returns:

The extractor class or function to use for initialization.

Return type:

type or callable

get_metadata() DeepDict[source]#

Retrieve the metadata for the Inscopix imaging data.

Returns:

Dictionary containing metadata including device information, imaging plane details, photon series configuration, and Inscopix-specific acquisition parameters.

Return type:

DeepDict

add_to_nwbfile(nwbfile, metadata: dict | None = None, **kwargs)[source]#

Add the Inscopix data to the NWB file.

Parameters:
  • nwbfile (NWBFile) – NWB file to add the data to.

  • metadata (dict, optional) – Metadata dictionary. If None, will be generated dynamically with OnePhotonSeries.

  • **kwargs – Additional keyword arguments passed to the parent add_to_nwbfile method.

Notes

TODO: Add logic for determining whether the microscope is nVista 2P and change photon_series_type to TwoPhotonSeries accordingly.

MicroManager Tiff Imaging#

class MicroManagerTiffImagingInterface(folder_path: Annotated[pathlib._local.Path, PathType(path_type='dir')], verbose: bool = False)[source]#

Bases: BaseImagingExtractorInterface

Data Interface for MicroManagerTiffImagingExtractor.

Parameters:
  • folder_path (DirectoryPath) – The folder path that contains the OME-TIF image files (.ome.tif files) and the ‘DisplaySettings’ JSON file.

  • verbose (bool, default: False)

display_name: str | None = 'Micro-Manager TIFF Imaging'#
associated_suffixes: tuple[str] = ('.ome', '.tif', '.json')#
info: str | None = 'Interface for Micro-Manager TIFF imaging data.'#
classmethod get_source_schema() dict[source]#

Get the source schema for the Micro-Manager TIFF imaging interface.

Returns:

The schema dictionary containing input parameters and descriptions for initializing the Micro-Manager TIFF interface.

Return type:

dict

classmethod get_extractor_class()[source]#

Get the extractor class for this interface.

This classmethod must be implemented by each concrete interface to specify which extractor class to use.

Returns:

The extractor class or function to use for initialization.

Return type:

type or callable

get_metadata() DeepDict[source]#

Get metadata for the Micro-Manager TIFF imaging data.

Returns:

Dictionary containing metadata including session start time, imaging plane details, and two-photon series configuration.

Return type:

DeepDict
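
A minimal construction sketch, assuming the interface is exposed under neuroconv.datainterfaces; the folder path is hypothetical and should contain the .ome.tif files and the DisplaySettings JSON file:

from neuroconv.datainterfaces import MicroManagerTiffImagingInterface

interface = MicroManagerTiffImagingInterface(folder_path="micromanager_acquisition/")
metadata = interface.get_metadata()  # includes the session start time parsed from the Micro-Manager metadata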

Miniscope Imaging#

class MiniscopeConverter(folder_path: Annotated[pathlib._local.Path, PathType(path_type='dir')], verbose: bool = False)[source]#

Bases: NWBConverter

Primary conversion class for handling Miniscope multi-recording data streams.

This converter is designed for data where multiple recordings are organized in timestamp subfolders, each containing Miniscope and behavioral camera subfolders.

Initializes the data interfaces for the Miniscope recording and behavioral data stream.

The main Miniscope folder is expected to contain both data streams organized as follows:

C6-J588_Disc5/ (main folder)
├── 15_03_28/ (subfolder corresponding to the recording time)
│   ├── Miniscope/ (subfolder containing the microscope video stream)
│   │   ├── 0.avi (microscope video)
│   │   ├── metaData.json (metadata for the microscope device)
│   │   └── timeStamps.csv (timing of this video stream)
│   ├── BehavCam_2/ (subfolder containing the behavioral video stream)
│   │   ├── 0.avi (behavioral video)
│   │   ├── metaData.json (metadata for the behavioral camera)
│   │   └── timeStamps.csv (timing of this video stream)
│   └── metaData.json (metadata for the recording, such as the start time)
├── 15_06_28/
│   ├── Miniscope/
│   ├── BehavCam_2/
│   └── metaData.json
└── 15_12_28/
Parameters:
  • folder_path (DirectoryPath) – The path to the main Miniscope folder.

  • verbose (bool, default: False) – Controls verbosity.

display_name: str | None = 'Miniscope Imaging and Video'#
keywords: tuple[str] = ('ophys', 'optical electrophysiology', 'fluorescence', 'microscopy', 'two photon', 'one photon', 'voltage imaging', 'calcium imaging', 'video')#
associated_suffixes: tuple[str] = ('.avi', '.csv', '.json', '.avi')#
info: str | None = 'Converter for handling both imaging and video recordings from Miniscope.'#
classmethod get_source_schema()[source]#

Compile input schemas from each of the data interface classes.

Returns:

The compiled source schema from all data interface classes.

Return type:

dict

get_conversion_options_schema() dict[source]#

Get the schema for the conversion options.

Returns:

The schema dictionary containing conversion options for the Miniscope interface.

Return type:

dict

add_to_nwbfile(nwbfile: NWBFile, metadata, stub_test: bool = False, stub_frames: int = 100)[source]#

Add Miniscope imaging and behavioral camera data to the specified NWBFile.

Parameters:
  • nwbfile (NWBFile) – The NWBFile object to which the imaging and behavioral data will be added.

  • metadata (dict) – Metadata dictionary containing information about the imaging and behavioral recordings.

  • stub_test (bool, optional) – If True, only a subset of the data (defined by stub_frames) will be added for testing purposes, by default False.

  • stub_frames (int, optional) – The number of frames to include in the subset if stub_test is True, by default 100.

run_conversion(nwbfile_path: str | None = None, nwbfile: NWBFile | None = None, metadata: dict | None = None, overwrite: bool = False, stub_test: bool = False, stub_frames: int = 100) None[source]#

Run the NWB conversion process for the instantiated data interfaces.

Parameters:
  • nwbfile_path (str, optional) – Path where the NWBFile will be written. If None, the file is handled in-memory.

  • nwbfile (NWBFile, optional) – An in-memory NWBFile object to be written to the file. If None, a new NWBFile is created.

  • metadata (dict, optional) – Metadata dictionary with information to create the NWBFile. If None, metadata is auto-generated.

  • overwrite (bool, optional) – If True, overwrites the existing NWBFile at nwbfile_path. If False (default), data is appended.

  • stub_test (bool, optional) – If True, only a subset of the data (up to stub_frames) is written for testing purposes, by default False.

  • stub_frames (int, optional) – The number of frames to include in the subset if stub_test is True, by default 100.
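
A minimal usage sketch for the converter, assuming it is importable from neuroconv.converters; the folder path is hypothetical and should follow the layout shown above:

from neuroconv.converters import MiniscopeConverter

converter = MiniscopeConverter(folder_path="C6-J588_Disc5/")  # main Miniscope folder
metadata = converter.get_metadata()
# Write only the first 100 frames of each stream for a quick round-trip test.
converter.run_conversion(nwbfile_path="miniscope_stub.nwb", metadata=metadata, stub_test=True, overwrite=True)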

Sbx Imaging#

class SbxImagingInterface(file_path: Annotated[pathlib._local.Path, PathType(path_type='file')], sampling_frequency: float | None = None, verbose: bool = False, photon_series_type: Literal['OnePhotonSeries', 'TwoPhotonSeries'] = 'TwoPhotonSeries')[source]#

Bases: BaseImagingExtractorInterface

Data Interface for SbxImagingExtractor.

Parameters:
  • file_path (FilePath) – Path to .sbx file.

  • sampling_frequency (float, optional)

  • verbose (bool, default: False)

display_name: str | None = 'Scanbox Imaging'#
associated_suffixes: tuple[str] = ('.sbx',)#
info: str | None = 'Interface for Scanbox imaging data.'#
classmethod get_extractor_class()[source]#

Get the extractor class for this interface.

This classmethod must be implemented by each concrete interface to specify which extractor class to use.

Returns:

The extractor class or function to use for initialization.

Return type:

type or callable

get_metadata() DeepDict[source]#

Get metadata for the Scanbox imaging data.

Returns:

Dictionary containing metadata including device information and imaging details specific to the Scanbox system.

Return type:

DeepDict

ScanImage Imaging#

class ScanImageImagingInterface(file_path: Annotated[pathlib._local.Path, PathType(path_type='file')] | None = None, channel_name: str | None = None, slice_sample: int | None = None, plane_index: int | None = None, file_paths: list[Annotated[pathlib._local.Path, PathType(path_type='file')]] | None = None, interleave_slice_samples: bool | None = None, plane_name: str | None = None, fallback_sampling_frequency: float | None = None, verbose: bool = False)[source]#

Bases: BaseImagingExtractorInterface

Interface for reading TIFF files produced via ScanImage software.

This interface is designed to handle the structure of ScanImage TIFF files, which can contain multi-channel and both planar and volumetric data. It supports both single-file and multi-file datasets generated by ScanImage in various acquisition modes (grab, focus, loop).

ScanImage is a software package for controlling laser scanning microscopes, particularly for two-photon and multi-photon imaging. The interface extracts imaging data and metadata from ScanImage TIFF files and converts them to NWB format.

Key features:

  • Handles multi-channel data with channel selection

  • Supports volumetric (multi-plane) imaging data

  • Automatically detects and loads multi-file datasets based on ScanImage naming conventions

  • Extracts and provides access to ScanImage metadata

  • Efficiently retrieves frames using lazy loading

  • Handles flyback frames in volumetric data

Parameters:
  • file_path (FilePath, optional) – Path to the ScanImage TIFF file. If this is part of a multi-file series, this should be the first file. Either file_path or file_paths must be provided.

  • channel_name (str, optional) – Name of the channel to extract (e.g., “Channel 1”, “Channel 2”).

    • If None and only one channel is available, that channel will be used.

    • If None and multiple channels are available, an error will be raised.

    • Use get_available_channels(file_path) to see available channels before creating the interface.

  • slice_sample (int, optional) – Controls how multiple frames per slice are handled in volumetric data: ScanImage can acquire several frames for a single plane, and this parameter selects a specific frame from each slice. If None and frames_per_slice > 1, an error is raised unless interleave_slice_samples is set to True, which interleaves all slice samples as separate volumes/samples (note that this scrambles the acquisition order of the frames). This parameter has no effect when frames_per_slice = 1.

  • plane_index (int, optional) – Must be between 0 and num_planes-1. Used to extract a specific plane from volumetric data. When provided:

    • The resulting extractor will be planar

    • Each sample will contain only data for the specified plane

    • This parameter has no effect on planar (non-volumetric) data.

  • file_paths (list[Path | str], optional) – List of file paths to use. This is an escape hatch that overrides the automatic file detection. This is useful when:

    • Automatic detection doesn’t work correctly

    • You need to specify a custom subset of files

    • You need to control the exact order of files

    The file paths must be provided in the temporal order of the frames in the dataset.

  • interleave_slice_samples (bool, optional) – Controls whether to interleave all slice samples as separate time points when frames_per_slice > 1:

    • If True: Interleaves all slice samples as separate time points, increasing the effective number of samples by frames_per_slice. This treats each slice_sample as a distinct sample.

    • If False: Requires a specific slice_sample to be provided when frames_per_slice > 1.

    • This parameter has no effect when frames_per_slice = 1 or when slice_sample is provided.

    • Default is True for backward compatibility (will change to False after November 2025).

  • plane_name (str, optional) – Deprecated. Use plane_index instead. Will be removed in or after November 2025.

  • fallback_sampling_frequency (float, optional) – Deprecated. Will be removed in or after November 2025.

  • verbose (bool, default: False) – If True, will print detailed information about the interface initialization process.

display_name: str | None = 'ScanImage Imaging'#
associated_suffixes: tuple[str] = ('.tif', '.tiff')#
info: str | None = 'Interface for ScanImage TIFF files.'#
classmethod get_extractor_class()[source]#

Get the extractor class for this interface.

This classmethod must be implemented by each concrete interface to specify which extractor class to use.

Returns:

The extractor class or function to use for initialization.

Return type:

type or callable

get_metadata() DeepDict[source]#

Get metadata for the ScanImage imaging data.

Returns:

The metadata dictionary containing imaging metadata from the ScanImage files. This includes:

  • Session start time extracted from the ScanImage file

  • Device information for the microscope

  • Optical channel configuration

  • Imaging plane details including grid spacing and origin coordinates if available

  • Photon series metadata with scan line rate and other acquisition parameters

Return type:

DeepDict

static get_scanimage_version(file_path: Path | str) int[source]#

Extract the ScanImage version from a BigTIFF file without validation.

This method reads the binary header of the TIFF file to determine the ScanImage version that produced it. It supports ScanImage versions 3, 4, and 5.

Parameters:

file_path (Path | str) – Path to the ScanImage TIFF file

Returns:

ScanImage version number (3, 4, or 5)

Return type:

int

static get_available_channels(file_path: Path | str) list[str][source]#

Get the channel names available in a ScanImage TIFF file.

This static method extracts the channel names from a ScanImage TIFF file without needing to create an interface instance. This is useful for determining which channels are available before creating an interface.

Parameters:

file_path (Path | str) – Path to the ScanImage TIFF file.

Returns:

List of channel names available in the file (e.g., [“Channel 1”, “Channel 2”]).

Return type:

list[str]

static get_available_planes(file_path: Path | str) list[str][source]#

Get the available plane names from a ScanImage TIFF file.

This static method determines the number of planes (Z-slices) in a volumetric ScanImage dataset without needing to create an interface instance. This is useful for determining which planes are available before creating an interface.

Parameters:

file_path (Path | str) – Path to the ScanImage TIFF file.

Returns:

List of plane names available in the file. For volumetric data, this will be a list of strings representing plane indices (e.g., [“0”, “1”, “2”]).

Return type:

list[str]
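
The static helpers above make it possible to inspect a ScanImage dataset before constructing the interface. A minimal sketch for extracting one channel and one plane from a volumetric acquisition (the file path and channel name are hypothetical):

from neuroconv.datainterfaces import ScanImageImagingInterface

file_path = "scan_00001.tif"  # hypothetical first file of a ScanImage series

print(ScanImageImagingInterface.get_available_channels(file_path))  # e.g. ["Channel 1", "Channel 2"]
print(ScanImageImagingInterface.get_available_planes(file_path))    # e.g. ["0", "1", "2"]

interface = ScanImageImagingInterface(
    file_path=file_path,
    channel_name="Channel 1",  # required when more than one channel is present
    plane_index=0,             # extract a single plane, yielding a planar series
)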

class ScanImageLegacyImagingInterface(file_path: Annotated[pathlib._local.Path, PathType(path_type='file')], fallback_sampling_frequency: float | None = None, verbose: bool = False)[source]#

Bases: BaseImagingExtractorInterface

Interface for reading TIFF files produced via ScanImage v3.8.

DataInterface for reading Tiff files that are generated by ScanImage v3.8. This interface extracts the metadata from the exif of the tiff file.

Parameters:
  • file_path (FilePath) – Path to tiff file.

  • fallback_sampling_frequency (float, optional) – The sampling frequency can usually be extracted from the scanimage metadata in exif:ImageDescription:state.acq.frameRate. If not, use this.

display_name: str | None = 'ScanImage Imaging'#
associated_suffixes: tuple[str] = ('.tif',)#
info: str | None = 'Interface for ScanImage v3.8 TIFF files.'#
classmethod get_source_schema() dict[source]#

Infer the JSON schema for the source_data from the method signature (annotation typing).

Returns:

The JSON schema for the source_data.

Return type:

dict

classmethod get_extractor_class()[source]#

Get the extractor class for this interface.

This classmethod must be implemented by each concrete interface to specify which extractor class to use.

Returns:

The extractor class or function to use for initialization.

Return type:

type or callable

get_metadata() DeepDict[source]#

Get metadata for the ScanImage imaging data.

Returns:

Dictionary containing metadata including session start time and device information specific to the ScanImage system.

Return type:

DeepDict

get_scanimage_major_version(scanimage_metadata: dict) str[source]#

Determine the version of ScanImage that produced the TIFF file.

Parameters:

scanimage_metadata (dict) – Dictionary of metadata extracted from a TIFF file produced via ScanImage.

Returns:

version – The version of ScanImage that produced the TIFF file.

Return type:

str

Raises:

ValueError – If the ScanImage version could not be determined from metadata.

Tiff Imaging#

class TiffImagingInterface(file_path: Annotated[pathlib._local.Path, PathType(path_type='file')], sampling_frequency: float, verbose: bool = False, photon_series_type: Literal['OnePhotonSeries', 'TwoPhotonSeries'] = 'TwoPhotonSeries')[source]#

Bases: BaseImagingExtractorInterface

Interface for multi-page TIFF files.

Initialize reading of TIFF file.

Parameters:
  • file_path (FilePath)

  • sampling_frequency (float)

  • verbose (bool, default: False)

  • photon_series_type ({‘OnePhotonSeries’, ‘TwoPhotonSeries’}, default: “TwoPhotonSeries”)

display_name: str | None = 'TIFF Imaging'#
associated_suffixes: tuple[str] = ('.tif', '.tiff')#
info: str | None = 'Interface for multi-page TIFF files.'#
classmethod get_source_schema() dict[source]#

Get the source schema for the TIFF imaging interface.

Returns:

The JSON schema for the TIFF imaging interface source data, containing file path and other configuration parameters.

Return type:

dict

classmethod get_extractor_class()[source]#

Get the extractor class for this interface.

This classmethod must be implemented by each concrete interface to specify which extractor class to use.

Returns:

The extractor class or function to use for initialization.

Return type:

type or callable

Segmentation#

Base Segmentation#

class BaseSegmentationExtractorInterface(verbose: bool = False, **source_data)[source]#

Bases: BaseExtractorInterface

Parent class for all SegmentationExtractorInterfaces.

keywords: tuple[str] = ('segmentation', 'roi', 'cells')#
get_metadata_schema() dict[source]#

Generate the metadata schema for Ophys data, updating required fields and properties.

This method builds upon the base schema and customizes it for Ophys-specific metadata, including required components such as devices, fluorescence data, imaging planes, and two-photon series. It also applies temporary schema adjustments to handle certain use cases until a centralized metadata schema definition is available.

Returns:

A dictionary representing the updated Ophys metadata schema.

Return type:

dict

Notes

  • Ensures that Device and ImageSegmentation are marked as required.

  • Updates various properties, including ensuring arrays for ImagingPlane and TwoPhotonSeries.

  • Adjusts the schema for Fluorescence, including required fields and pattern properties.

  • Adds schema definitions for DfOverF, segmentation images, and summary images.

  • Applies temporary fixes, such as setting additional properties for ImageSegmentation to True.

get_metadata() DeepDict[source]#

Child DataInterface classes should override this to match their metadata.

Returns:

The metadata dictionary containing basic NWBFile metadata.

Return type:

DeepDict

get_original_timestamps() ndarray[source]#

Retrieve the original unaltered timestamps for the data in this interface.

This function should retrieve the data on-demand by re-initializing the IO.

Returns:

timestamps – The timestamps for the data stream.

Return type:

numpy.ndarray

get_timestamps() ndarray[source]#

Retrieve the timestamps for the data in this interface.

Returns:

timestamps – The timestamps for the data stream.

Return type:

numpy.ndarray

set_aligned_timestamps(aligned_timestamps: ndarray)[source]#

Replace all timestamps for this interface with those aligned to the common session start time.

Must be in units seconds relative to the common ‘session_start_time’.

Parameters:

aligned_timestamps (numpy.ndarray) – The synchronized timestamps for data in this interface.

add_to_nwbfile(nwbfile: NWBFile, metadata: dict | None = None, stub_test: bool = False, stub_frames: int | None = None, include_background_segmentation: bool = False, include_roi_centroids: bool = True, include_roi_acceptance: bool = True, mask_type: Literal['image', 'pixel', 'voxel'] = 'image', plane_segmentation_name: str | None = None, iterator_options: dict | None = None, stub_samples: int = 100)[source]#

Add segmentation data to the NWB file.

Parameters:
  • nwbfile (NWBFile) – The NWBFile to add the plane segmentation to.

  • metadata (dict, optional) – The metadata for the interface

  • stub_test (bool, default: False)

  • stub_frames (int, optional) –

    Deprecated since February 2026: use stub_samples instead.

  • include_background_segmentation (bool, default: False) – Whether to include the background plane segmentation and fluorescence traces in the NWB file. If False, neuropil traces are included in the main plane segmentation rather than the background plane segmentation.

  • include_roi_centroids (bool, default: True) – Whether to include the ROI centroids on the PlaneSegmentation table. If there are a very large number of ROIs (such as in whole-brain recordings), you may wish to disable this for faster write speeds.

  • include_roi_acceptance (bool, default: True) – Whether to include if the detected ROI was ‘accepted’ or ‘rejected’. If there are a very large number of ROIs (such as in whole-brain recordings), you may wish to disable this for faster write speeds.

  • mask_type (str, default: ‘image’) – There are three types of ROI masks in NWB, ‘image’, ‘pixel’, and ‘voxel’.

    • ‘image’ masks have the same shape as the reference images the segmentation was applied to, and weight each pixel by its contribution to the ROI (typically boolean, with 0 meaning ‘not in the ROI’).

    • ‘pixel’ masks are instead indexed by ROI, with the data at each index being the shape of the image by the number of pixels in each ROI.

    • ‘voxel’ masks are instead indexed by ROI, with the data at each index being the shape of the volume by the number of voxels in each ROI.

    Specify your choice among these as mask_type=’image’, ‘pixel’, or ‘voxel’.

  • plane_segmentation_name (str, optional) – The name of the plane segmentation to be added.

  • iterator_options (dict, optional) – Options for controlling the iterative write process (buffer size, progress bars) when writing image masks and traces.

    Note: To configure chunk size and compression, use the backend configuration system via get_default_backend_configuration() and configure_backend() after calling this method. See the backend configuration documentation for details.

  • stub_samples (int, default: 100) – The number of samples (frames) to use for testing. When provided, takes precedence over stub_frames.
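
The mask_type and include_roi_* options above control how much per-ROI information is written. A minimal sketch using a concrete subclass (CaimanSegmentationInterface is used here only for concreteness; the file path and session start time are hypothetical):

from datetime import datetime

from neuroconv.datainterfaces import CaimanSegmentationInterface
from neuroconv.tools.nwb_helpers import make_or_load_nwbfile

interface = CaimanSegmentationInterface(file_path="caiman_analysis.hdf5")  # hypothetical CaImAn output
metadata = interface.get_metadata()
metadata["NWBFile"]["session_start_time"] = datetime(2024, 1, 1, 12, 0, 0)

with make_or_load_nwbfile(nwbfile_path="segmentation.nwb", metadata=metadata, overwrite=True) as nwbfile:
    interface.add_to_nwbfile(
        nwbfile=nwbfile,
        metadata=metadata,
        mask_type="pixel",            # store sparse pixel masks instead of full-frame image masks
        include_roi_centroids=False,  # skip centroids for faster writes with many ROIs
    )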

Caiman Segmentation#

class CaimanSegmentationInterface(file_path: Annotated[pathlib._local.Path, PathType(path_type='file')], verbose: bool = False)[source]#

Bases: BaseSegmentationExtractorInterface

Data interface for CaimanSegmentationExtractor.

Parameters:
  • file_path (FilePath) – Path to .hdf5 file.

  • verbose (bool, default: False) – Whether to print progress.

display_name: str | None = 'CaImAn Segmentation'#
associated_suffixes: tuple[str] = ('.hdf5',)#
info: str | None = 'Interface for CaImAn segmentation data.'#
classmethod get_source_schema() dict[source]#

Get the source schema for the CaImAn segmentation interface.

Returns:

The schema dictionary containing input parameters and descriptions for initializing the CaImAn segmentation interface.

Return type:

dict

classmethod get_extractor_class()[source]#

Get the extractor class for this interface.

This classmethod must be implemented by each concrete interface to specify which extractor class to use.

Returns:

The extractor class or function to use for initialization.

Return type:

type or callable

Cnmfe Segmentation#

class CnmfeSegmentationInterface(file_path: Annotated[pathlib._local.Path, PathType(path_type='file')], verbose: bool = False)[source]#

Bases: BaseSegmentationExtractorInterface

Data interface for constrained non-negative matrix factorization (CNMFE) segmentation extractor.

display_name: str | None = 'CNMFE Segmentation'#
associated_suffixes: tuple[str] = ('.mat',)#
info: str | None = 'Interface for constrained non-negative matrix factorization (CNMFE) segmentation.'#
classmethod get_extractor_class()[source]#

Get the extractor class for this interface.

This classmethod must be implemented by each concrete interface to specify which extractor class to use.

Returns:

The extractor class or function to use for initialization.

Return type:

type or callable

Extract Segmentation#

class ExtractSegmentationInterface(file_path: Annotated[pathlib._local.Path, PathType(path_type='file')], sampling_frequency: float, output_struct_name: str | None = None, verbose: bool = False)[source]#

Bases: BaseSegmentationExtractorInterface

Data interface for ExtractSegmentationExtractor.

Parameters:
  • file_path (FilePath)

  • sampling_frequency (float)

  • output_struct_name (str, optional)

  • verbose (bool, default: False)

display_name: str | None = 'EXTRACT Segmentation'#
associated_suffixes: tuple[str] = ('.mat',)#
info: str | None = 'Interface for EXTRACT segmentation.'#
classmethod get_extractor_class()[source]#

Get the extractor class for this interface.

This classmethod must be implemented by each concrete interface to specify which extractor class to use.

Returns:

The extractor class or function to use for initialization.

Return type:

type or callable

Inscopix Segmentation#

class InscopixSegmentationInterface(file_path: Annotated[pathlib._local.Path, PathType(path_type='file')], verbose: bool = False)[source]#

Bases: BaseSegmentationExtractorInterface

Conversion interface for Inscopix segmentation data.

Parameters:
  • file_path (FilePath) – Path to the .isxd Inscopix file.

  • verbose (bool, optional) – If True, outputs additional information during processing.

display_name: str | None = 'Inscopix Segmentation'#
associated_suffixes: tuple[str] = ('.isxd',)#
info: str | None = 'Interface for handling Inscopix segmentation.'#
classmethod get_extractor_class()[source]#

Get the extractor class for this interface.

This classmethod must be implemented by each concrete interface to specify which extractor class to use.

Returns:

The extractor class or function to use for initialization.

Return type:

type or callable

get_metadata() DeepDict[source]#

Retrieve the metadata for the Inscopix segmentation data.

Returns:

Dictionary containing metadata including device information, imaging plane details, photon series configuration, and Inscopix-specific acquisition parameters.

Return type:

DeepDict

Notes

TODO: Determine the excitation and emission wavelengths. For each Inscopix microscope they are fixed (e.g., NVista has an emission of 535 and an excitation of 475). We currently do not know how to map the names returned by get_acquisition_info[‘MicroscopeType’] to the actual microscope models, as we do not have example data for each type. See the related issue: https://github.com/inscopix/pyisx/issues/62

Sima Segmentation#

class SimaSegmentationInterface(file_path: Annotated[pathlib._local.Path, PathType(path_type='file')], sima_segmentation_label: str = 'auto_ROIs')[source]#

Bases: BaseSegmentationExtractorInterface

Data interface for SimaSegmentationExtractor.

Parameters:
  • file_path (FilePath)

  • sima_segmentation_label (str, default: “auto_ROIs”)

display_name: str | None = 'SIMA Segmentation'#
associated_suffixes: tuple[str] = ('.sima',)#
info: str | None = 'Interface for SIMA segmentation.'#
classmethod get_extractor_class()[source]#

Get the extractor class for this interface.

This classmethod must be implemented by each concrete interface to specify which extractor class to use.

Returns:

The extractor class or function to use for initialization.

Return type:

type or callable

Suite2p Segmentation#

class Suite2pSegmentationInterface(folder_path: Annotated[pathlib._local.Path, PathType(path_type='dir')], channel_name: str | None = None, plane_name: str | None = None, plane_segmentation_name: str | None = None, verbose: bool = False)[source]#

Bases: BaseSegmentationExtractorInterface

Interface for Suite2p segmentation data.

Parameters:
  • folder_path (DirectoryPath) – Path to the folder containing Suite2p segmentation data. Should contain ‘plane#’ sub-folders.

  • channel_name (str, optional) – The name of the channel to load. To determine what channels are available, use Suite2pSegmentationInterface.get_available_channels(folder_path).

  • plane_name (str, optional) – The name of the plane to load. This interface only loads one plane at a time. If this value is omitted, the first plane found will be loaded. To determine what planes are available, use Suite2pSegmentationInterface.get_available_planes(folder_path).

  • plane_segmentation_name (str, optional) – The name of the plane segmentation to be added.

display_name: str | None = 'Suite2p Segmentation'#
associated_suffixes: tuple[str] = ('.npy',)#
info: str | None = 'Interface for Suite2p segmentation.'#
classmethod get_source_schema() dict[source]#

Get the source schema for the Suite2p segmentation interface.

Returns:

The schema dictionary containing input parameters and descriptions for initializing the Suite2p segmentation interface.

Return type:

dict

classmethod get_available_planes(folder_path: Annotated[Path, PathType(path_type=dir)]) dict[source]#

Get the available planes in the Suite2p segmentation folder.

Parameters:

folder_path (DirectoryPath) – Path to the folder containing Suite2p segmentation data.

Returns:

Dictionary containing information about available planes in the dataset.

Return type:

dict

classmethod get_available_channels(folder_path: Annotated[Path, PathType(path_type=dir)]) dict[source]#

Get the available channels in the Suite2p segmentation folder.

Parameters:

folder_path (DirectoryPath) – Path to the folder containing Suite2p segmentation data.

Returns:

Dictionary containing information about available channels in the dataset.

Return type:

dict

classmethod get_extractor_class()[source]#

Get the extractor class for this interface.

This classmethod must be implemented by each concrete interface to specify which extractor class to use.

Returns:

The extractor class or function to use for initialization.

Return type:

type or callable

get_metadata() DeepDict[source]#

Get metadata for the Suite2p segmentation data.

Returns:

Dictionary containing metadata including plane segmentation details, fluorescence data, and segmentation images.

Return type:

DeepDict

add_to_nwbfile(nwbfile: NWBFile, metadata: dict | None = None, stub_test: bool = False, stub_frames: int = 100, include_roi_centroids: bool = True, include_roi_acceptance: bool = True, mask_type: str | None = 'image', plane_segmentation_name: str | None = None, iterator_options: dict | None = None)[source]#

Add segmentation data to the specified NWBFile.

Parameters:
  • nwbfile (NWBFile) – The NWBFile object to which the segmentation data will be added.

  • metadata (dict, optional) – Metadata containing information about the segmentation. If None, default metadata is used.

  • stub_test (bool, optional) – If True, only a subset of the data (defined by stub_frames) will be added for testing purposes, by default False.

  • stub_frames (int, optional) – The number of frames to include in the subset if stub_test is True, by default 100.

  • include_roi_centroids (bool, optional) – Whether to include the centroids of regions of interest (ROIs) in the data, by default True.

  • include_roi_acceptance (bool, optional) – Whether to include acceptance status of ROIs, by default True.

  • mask_type (str, default: ‘image’) – There are three types of ROI masks in NWB, ‘image’, ‘pixel’, and ‘voxel’.

    • ‘image’ masks have the same shape as the reference images the segmentation was applied to, and weight each pixel by its contribution to the ROI (typically boolean, with 0 meaning ‘not in the ROI’).

    • ‘pixel’ masks are instead indexed by ROI, with the data at each index being the shape of the image by the number of pixels in each ROI.

    • ‘voxel’ masks are instead indexed by ROI, with the data at each index being the shape of the volume by the number of voxels in each ROI.

    Specify your choice among these as mask_type=’image’, ‘pixel’, ‘voxel’, or None.

  • plane_segmentation_name (str, optional) – The name of the plane segmentation object, by default None.

  • iterator_options (dict, optional) – Additional options for iterating over the data, by default None.
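
Because Suite2p output folders contain one sub-folder per plane and the interface loads a single plane at a time, the class methods above can be used to enumerate what is available before constructing one interface per plane. A minimal sketch (the folder path and plane names are hypothetical):

from neuroconv.datainterfaces import Suite2pSegmentationInterface

folder_path = "suite2p/"  # hypothetical Suite2p output containing plane0/, plane1/, ...
print(Suite2pSegmentationInterface.get_available_planes(folder_path=folder_path))
print(Suite2pSegmentationInterface.get_available_channels(folder_path=folder_path))

# One interface per plane; distinct plane_segmentation_name values keep the planes separate in the NWB file.
interface = Suite2pSegmentationInterface(
    folder_path=folder_path,
    plane_name="plane1",
    plane_segmentation_name="PlaneSegmentationPlane1",
)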

Fiber Photometry#

TDT Fiber Photometry#

class TDTFiberPhotometryInterface(folder_path: Annotated[pathlib._local.Path, PathType(path_type='dir')], verbose: bool = False)[source]#

Bases: BaseTemporalAlignmentInterface

Data Interface for converting fiber photometry data from a TDT output folder.

The output folder from TDT consists of a variety of TDT-specific file types (e.g. Tbk, Tdx, tev, tin, tsq). This data is read by the tdt.read_block function, and then parsed into the ndx-fiber-photometry format.

Initialize the TDTFiberPhotometryInterface.

Parameters:
  • folder_path (FilePath) – The path to the folder containing the TDT data.

  • verbose (bool, optional) – Whether to print status messages, default = False.

keywords: tuple[str] = ('fiber photometry',)#
display_name: str | None = 'TDTFiberPhotometry'#
info: str | None = 'Data Interface for converting fiber photometry data from TDT files.'#
associated_suffixes: tuple[str] = ('Tbk', 'Tdx', 'tev', 'tin', 'tsq')#
get_metadata() DeepDict[source]#

Get metadata for the TDTFiberPhotometryInterface.

Returns:

The metadata dictionary for this interface.

Return type:

DeepDict

get_metadata_schema() dict[source]#

Get the metadata schema for the TDTFiberPhotometryInterface.

Returns:

The metadata schema for this interface.

Return type:

dict

load(t1: float = 0.0, t2: float = 0.0, evtype: list[str] = ['all'])[source]#

Load the TDT data from the folder path.

Parameters:
  • t1 (float, optional) – Retrieve data starting at t1 (in seconds), default = 0 for start of recording.

  • t2 (float, optional) – Retrieve data ending at t2 (in seconds), default = 0 for end of recording.

  • evtype (list[str], optional) – List of strings, specifies what type of data stores to retrieve from the tank. Can contain ‘all’ (default), ‘epocs’, ‘snips’, ‘streams’, or ‘scalars’. Ex. [‘epocs’, ‘snips’]

Returns:

TDT data object

Return type:

tdt.StructType

get_original_timestamps(t1: float = 0.0, t2: float = 0.0) dict[str, ndarray][source]#

Get the original timestamps for the data.

Parameters:
  • t1 (float, optional) – Retrieve data starting at t1 (in seconds), default = 0 for start of recording.

  • t2 (float, optional) – Retrieve data ending at t2 (in seconds), default = 0 for end of recording.

Returns:

Dictionary of stream names to timestamps.

Return type:

dict[str, np.ndarray]

get_timestamps(t1: float = 0.0, t2: float = 0.0) dict[str, ndarray][source]#

Get the timestamps for the data.

Parameters:
  • t1 (float, optional) – Retrieve data starting at t1 (in seconds), default = 0 for start of recording.

  • t2 (float, optional) – Retrieve data ending at t2 (in seconds), default = 0 for end of recording.

Returns:

Dictionary of stream names to timestamps.

Return type:

dict[str, np.ndarray]

set_aligned_timestamps(stream_name_to_aligned_timestamps: dict[str, ndarray]) None[source]#

Set the aligned timestamps for the data.

Parameters:

stream_name_to_aligned_timestamps (dict[str, np.ndarray]) – Dictionary of stream names to aligned timestamps.

set_aligned_starting_time(aligned_starting_time: float, t1: float = 0.0, t2: float = 0.0) None[source]#

Set the aligned starting time and adjust the timestamps appropriately.

Parameters:
  • aligned_starting_time (float) – The aligned starting time.

  • t1 (float, optional) – Retrieve data starting at t1 (in seconds), default = 0 for start of recording.

  • t2 (float, optional) – Retrieve data ending at t2 (in seconds), default = 0 for end of recording.

get_original_starting_time_and_rate(t1: float = 0.0, t2: float = 0.0) dict[str, tuple[float, float]][source]#

Get the original starting time and rate for the data.

Parameters:
  • t1 (float, optional) – Retrieve data starting at t1 (in seconds), default = 0 for start of recording.

  • t2 (float, optional) – Retrieve data ending at t2 (in seconds), default = 0 for end of recording.

Returns:

Dictionary of stream names to starting time and rate.

Return type:

dict[str, tuple[float, float]]

get_starting_time_and_rate(t1: float = 0.0, t2: float = 0.0) tuple[float, float][source]#

Get the starting time and rate for the data.

Parameters:
  • t1 (float, optional) – Retrieve data starting at t1 (in seconds), default = 0 for start of recording.

  • t2 (float, optional) – Retrieve data ending at t2 (in seconds), default = 0 for end of recording.

Returns:

Dictionary of stream names to starting time and rate.

Return type:

dict[str, tuple[float, float]]

set_aligned_starting_time_and_rate(stream_name_to_aligned_starting_time_and_rate: dict[str, tuple[float, float]]) None[source]#

Set the aligned starting time and rate for the data.

Parameters:

stream_name_to_aligned_starting_time_and_rate (dict[str, tuple[float, float]]) – Dictionary of stream names to aligned starting time and rate.

get_events() dict[str, dict[str, ndarray]][source]#

Get a dictionary of events from the TDT files (e.g. camera TTL pulses).

The events dictionary maps from the names of each epoc in the TDT data to an event dictionary. Each event dictionary maps from “onset”, “offset”, and “data” to the corresponding arrays.

Returns:

Dictionary of events.

Return type:

dict[str, dict[str, np.ndarray]]

add_to_nwbfile(nwbfile: NWBFile, metadata: dict, *, stub_test: bool = False, t1: float = 0.0, t2: float = 0.0, timing_source: Literal['original', 'aligned_timestamps', 'aligned_starting_time_and_rate'] = 'original')[source]#

Add the data to an NWBFile.

Parameters:
  • nwbfile (pynwb.NWBFile) – The in-memory object to add the data to.

  • metadata (dict) – Metadata dictionary with information used to create the NWBFile.

  • stub_test (bool, optional) – If True, only add a subset of the data (1s) to the NWBFile for testing purposes, default = False.

  • t1 (float, optional) – Retrieve data starting at t1 (in seconds), default = 0 for start of recording.

  • t2 (float, optional) – Retrieve data ending at t2 (in seconds), default = 0 for end of recording.

  • timing_source (Literal[“original”, “aligned_timestamps”, “aligned_starting_time_and_rate”], optional) – Source of timing information for the data, default = “original”.

Raises:

AssertionError – If the timing_source is not one of “original”, “aligned_timestamps”, or “aligned_starting_time_and_rate”.
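
A minimal usage sketch, assuming the interface is exposed under neuroconv.datainterfaces; the folder path and alignment offset are hypothetical, and the ndx-fiber-photometry metadata must be completed before the data can be added to an NWB file:

from neuroconv.datainterfaces import TDTFiberPhotometryInterface

interface = TDTFiberPhotometryInterface(folder_path="tdt_tank/")  # hypothetical TDT block folder
metadata = interface.get_metadata()
# In practice, the fiber photometry metadata (fibers, excitation sources, photodetectors, indicators, ...)
# has to be filled into `metadata` before calling add_to_nwbfile or run_conversion.

events = interface.get_events()                           # e.g. camera TTL pulses keyed by epoc name
stream_timestamps = interface.get_original_timestamps()   # per-stream timestamps before alignment

# Shift all streams onto the common session clock before writing.
interface.set_aligned_starting_time(aligned_starting_time=1.23)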