Testing#

Data Interface Mixins#

class DataInterfaceTestMixin(/, *args, **kwargs)[source]#

Bases: object

Generic class for testing DataInterfaces.

Several of these tests must be run in a specific order. For that reason, test_conversion_as_lone_interface calls the check functions in the appropriate order after the interface has been created. Normally you might expect the interface to simply be created in the setUp method, but this class allows you to specify multiple interface_kwargs.

Class Attributes#

data_interface_cls : DataInterface

class, not instance

interface_kwargs : dict or list

When a dictionary, its entries are passed as arguments to the constructor of the interface. When a list, each element is such a dictionary of constructor arguments, and each dictionary is tested in turn.

save_directory : Path, optional

Directory where test files should be saved.

data_interface_cls: type[BaseDataInterface]#
interface_kwargs: dict#
save_directory: Path = <temporary directory>#
conversion_options: dict | None = None#
maxDiff = None#
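
Putting these attributes together, a test class might look like the following minimal sketch (assuming pytest collection; the import paths and the "ElectricalSeries" acquisition name follow the mock interfaces documented below and are not guaranteed by this page):

from pynwb import NWBHDF5IO

from neuroconv.tools.testing.data_interface_mixins import DataInterfaceTestMixin
from neuroconv.tools.testing.mock_interfaces import MockRecordingInterface


class TestMyMockInterface(DataInterfaceTestMixin):
    # The interface class itself (not an instance) and its constructor arguments.
    data_interface_cls = MockRecordingInterface
    interface_kwargs = dict(num_channels=4, durations=(0.5,))

    def check_read_nwb(self, nwbfile_path: str):
        # check_read_nwb is abstract, so the concrete test class supplies the read-back checks.
        with NWBHDF5IO(path=nwbfile_path, mode="r") as io:
            nwbfile = io.read()
            assert "ElectricalSeries" in nwbfile.acquisition
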
setup_interface(request)[source]#

Add this as a fixture when you want a freshly created interface in the test.

setup_default_conversion_options(request)[source]#
test_source_schema_valid()[source]#
test_conversion_options_schema_valid(setup_interface)[source]#
test_metadata_schema_valid(setup_interface)[source]#
test_metadata(setup_interface)[source]#
check_extracted_metadata(metadata: dict)[source]#

Override this method to make assertions about specific extracted metadata values.

test_no_metadata_mutation(setup_interface)[source]#

Ensure the metadata object is not altered by the add_to_nwbfile method.

test_run_conversion_with_backend(setup_interface, tmp_path, backend)[source]#
test_run_conversion_with_backend_configuration(setup_interface, tmp_path, backend)[source]#
test_configure_backend_for_equivalent_nwbfiles(setup_interface, tmp_path, backend)[source]#
test_all_conversion_checks(setup_interface, tmp_path)[source]#
abstractmethod check_read_nwb(nwbfile_path: str)[source]#

Read the produced NWB file and compare it to the interface.

run_custom_checks()[source]#

Override this in child classes to inject additional custom checks.
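
As a hedged illustration, these two extension hooks might be overridden like so (the class, metadata key, and assertions are hypothetical):

from neuroconv.tools.testing.data_interface_mixins import DataInterfaceTestMixin
from neuroconv.tools.testing.mock_interfaces import MockInterface


class TestWithCustomChecks(DataInterfaceTestMixin):
    data_interface_cls = MockInterface
    interface_kwargs = dict()

    def check_read_nwb(self, nwbfile_path: str):
        pass  # MockInterface has no side effects, so there is nothing to read back.

    def check_extracted_metadata(self, metadata: dict):
        # Hypothetical assertion about a value the interface is expected to extract.
        assert "session_start_time" in metadata["NWBFile"]

    def run_custom_checks(self):
        # Hypothetical extra check injected into the standard test flow
        # (assuming the setup fixture stores the interface on self.interface).
        assert isinstance(self.interface, MockInterface)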

check_run_conversion_in_nwbconverter_with_backend(nwbfile_path: str, backend: Literal['hdf5', 'zarr'] = 'hdf5')[source]#
check_run_conversion_in_nwbconverter_with_backend_configuration(nwbfile_path: str, backend: Literal['hdf5', 'zarr'] = 'hdf5')[source]#
class TemporalAlignmentMixin(/, *args, **kwargs)[source]#

Bases: object

Generic class for testing temporal alignment methods.

data_interface_cls: type[BaseDataInterface]#
interface_kwargs: dict#
save_directory: Path = <temporary directory>#
conversion_options: dict | None = None#
maxDiff = None#
setup_interface(request)[source]#
setup_default_conversion_options(request)[source]#
setUpFreshInterface()[source]#

Protocol for creating a fresh instance of the interface.

check_interface_get_original_timestamps()[source]#

Ensure each interface can call .get_original_timestamps() without raising an error.

Also check that it always returns a non-empty array.

check_interface_get_timestamps()[source]#

Ensure each interface can call .get_timestamps() without raising an error.

Also check that it always returns a non-empty array.

check_interface_set_aligned_timestamps()[source]#

Ensure that internal mechanisms for the timestamps getter/setter work as expected.

check_shift_timestamps_by_start_time()[source]#

Ensure that internal mechanisms for shifting timestamps by a starting time work as expected.

check_interface_original_timestamps_inmutability()[source]#

Check that aligning the timestamps for the interface does not change the value of .get_original_timestamps().

check_nwbfile_temporal_alignment()[source]#

Check the temporally aligned timing information makes it into the NWB file.

test_interface_alignment(setup_interface)[source]#
class ImagingExtractorInterfaceTestMixin(/, *args, **kwargs)[source]#

Bases: DataInterfaceTestMixin, TemporalAlignmentMixin

data_interface_cls: type[BaseImagingExtractorInterface]#
optical_series_name: str = 'TwoPhotonSeries'#
check_read_nwb(nwbfile_path: str)[source]#

Read the produced NWB file and compare it to the interface.

check_nwbfile_temporal_alignment()[source]#

Check the temporally aligned timing information makes it into the NWB file.

class SegmentationExtractorInterfaceTestMixin(/, *args, **kwargs)[source]#

Bases: DataInterfaceTestMixin, TemporalAlignmentMixin

data_interface_cls: BaseSegmentationExtractorInterface#
check_read(nwbfile_path: str)[source]#
class RecordingExtractorInterfaceTestMixin(/, *args, **kwargs)[source]#

Bases: DataInterfaceTestMixin, TemporalAlignmentMixin

Generic class for testing any recording interface.

data_interface_cls: type[BaseRecordingExtractorInterface]#
is_lfp_interface: bool = False#
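
For example (a minimal sketch assuming pytest collection and the MockRecordingInterface documented in the Mock Interfaces section below), a complete test class for a recording interface only needs the class attributes, since this mixin already implements check_read_nwb:

from neuroconv.tools.testing.data_interface_mixins import (
    RecordingExtractorInterfaceTestMixin,
)
from neuroconv.tools.testing.mock_interfaces import MockRecordingInterface


class TestMockRecordingInterface(RecordingExtractorInterfaceTestMixin):
    data_interface_cls = MockRecordingInterface
    interface_kwargs = dict(num_channels=4, durations=(0.25,))
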
check_read_nwb(nwbfile_path: str)[source]#

Read the produced NWB file and compare it to the interface.

check_interface_set_aligned_timestamps()[source]#

Ensure that internal mechanisms for the timestamps getter/setter work as expected.

check_interface_set_aligned_segment_timestamps()[source]#
check_shift_timestamps_by_start_time()[source]#

Ensure that internal mechanisms for shifting timestamps by a starting time work as expected.

check_shift_segment_timestamps_by_starting_times()[source]#
check_interface_original_timestamps_inmutability()[source]#

Check that aligning the timestamps for the interface does not change the value of .get_original_timestamps().

test_interface_alignment(setup_interface)[source]#
class SortingExtractorInterfaceTestMixin(/, *args, **kwargs)[source]#

Bases: DataInterfaceTestMixin, TemporalAlignmentMixin

data_interface_cls: type[BaseSortingExtractorInterface]#
associated_recording_cls: type[BaseRecordingExtractorInterface] | None = None#
associated_recording_kwargs: dict | None = None#
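
A hedged sketch of how these attributes might be combined with the mock interfaces documented below:

from neuroconv.tools.testing.data_interface_mixins import (
    SortingExtractorInterfaceTestMixin,
)
from neuroconv.tools.testing.mock_interfaces import (
    MockRecordingInterface,
    MockSortingInterface,
)


class TestMockSortingInterface(SortingExtractorInterfaceTestMixin):
    data_interface_cls = MockSortingInterface
    interface_kwargs = dict(num_units=3, durations=(0.25,))
    # Optional: attach a recording interface so the sorted units can be checked against it.
    associated_recording_cls = MockRecordingInterface
    associated_recording_kwargs = dict(num_channels=4, durations=(0.25,))
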
setUpFreshInterface()[source]#

Protocol for creating a fresh instance of the interface.

check_read_nwb(nwbfile_path: str)[source]#

Read the produced NWB file and compare it to the interface.

check_interface_set_aligned_segment_timestamps()[source]#
check_shift_segment_timestamps_by_starting_times()[source]#
test_interface_alignment(setup_interface)[source]#
class AudioInterfaceTestMixin(/, *args, **kwargs)[source]#

Bases: DataInterfaceTestMixin, TemporalAlignmentMixin

A mixin for testing Audio interfaces.

check_read_nwb(nwbfile_path: str)[source]#

Read the produced NWB file and compare it to the interface.

test_interface_alignment()[source]#
class VideoInterfaceMixin(/, *args, **kwargs)[source]#

Bases: DataInterfaceTestMixin, TemporalAlignmentMixin

A mixin for testing Video interfaces.

check_read_nwb(nwbfile_path: str)[source]#

Read the produced NWB file and compare it to the interface.

check_interface_set_aligned_timestamps()[source]#

Ensure that internal mechanisms for the timestamps getter/setter work as expected.

check_shift_timestamps_by_start_time()[source]#

Ensure that internal mechanisms for shifting timestamps by a starting time work as expected.

check_set_aligned_segment_starting_times()[source]#
check_interface_original_timestamps_inmutability()[source]#

Check that aligning the timestamps for the interface does not change the value of .get_original_timestamps().

class MedPCInterfaceMixin(/, *args, **kwargs)[source]#

Bases: DataInterfaceTestMixin, TemporalAlignmentMixin

A mixin for testing MedPC interfaces.

test_metadata()[source]#
test_conversion_options_schema_valid()[source]#
test_metadata_schema_valid()[source]#
test_run_conversion_with_backend()[source]#
test_run_conversion_with_backend_configuration()[source]#
test_no_metadata_mutation()[source]#

Ensure the metadata object is not altered by the add_to_nwbfile method.

test_configure_backend_for_equivalent_nwbfiles()[source]#
check_metadata_schema_valid()[source]#
check_conversion_options_schema_valid()[source]#
check_metadata()[source]#
check_no_metadata_mutation(metadata: dict)[source]#

Ensure the metadata object was not altered by the add_to_nwbfile method.

check_run_conversion_with_backend(nwbfile_path: str, metadata: dict, backend: Literal['hdf5', 'zarr'] = 'hdf5')[source]#
check_configure_backend_for_equivalent_nwbfiles(metadata: dict, backend: Literal['hdf5', 'zarr'] = 'hdf5')[source]#
check_run_conversion_with_backend_configuration(nwbfile_path: str, metadata: dict, backend: Literal['hdf5', 'zarr'] = 'hdf5')[source]#
check_run_conversion_in_nwbconverter_with_backend(nwbfile_path: str, metadata: dict, backend: Literal['hdf5', 'zarr'] = 'hdf5')[source]#
check_run_conversion_in_nwbconverter_with_backend_configuration(nwbfile_path: str, metadata: dict, backend: Literal['hdf5', 'zarr'] = 'hdf5')[source]#
test_all_conversion_checks(metadata: dict)[source]#
check_interface_get_original_timestamps(medpc_name_to_info_dict: dict)[source]#

Ensure each interface can call .get_original_timestamps() without raising an error.

Also check that it always returns a non-empty array.

check_interface_get_timestamps()[source]#

Ensure each interface can call .get_timestamps() without raising an error.

Also check that it always returns a non-empty array.

check_interface_set_aligned_timestamps(medpc_name_to_info_dict: dict)[source]#

Ensure that internal mechanisms for the timestamps getter/setter work as expected.

check_shift_timestamps_by_start_time(medpc_name_to_info_dict: dict)[source]#

Ensure that internal mechanisms for shifting timestamps by a starting time work as expected.

check_interface_original_timestamps_inmutability(medpc_name_to_info_dict: dict)[source]#

Check that aligning the timestamps for the interface does not change the value of .get_original_timestamps().

test_interface_alignment(medpc_name_to_info_dict: dict)[source]#
class MiniscopeImagingInterfaceMixin(/, *args, **kwargs)[source]#

Bases: DataInterfaceTestMixin, TemporalAlignmentMixin

A mixin for testing Miniscope Imaging interfaces.

check_read_nwb(nwbfile_path: str)[source]#

Read the produced NWB file and compare it to the interface.

class TDTFiberPhotometryInterfaceMixin(/, *args, **kwargs)[source]#

Bases: DataInterfaceTestMixin, TemporalAlignmentMixin

Mixin for testing TDT Fiber Photometry interfaces.

test_metadata()[source]#
test_metadata_schema_valid()[source]#
test_conversion_options_schema_valid()[source]#
test_run_conversion_with_backend()[source]#
test_run_conversion_with_backend_configuration()[source]#
test_no_metadata_mutation()[source]#

Ensure the metadata object is not altered by the add_to_nwbfile method.

test_configure_backend_for_equivalent_nwbfiles()[source]#
check_metadata()[source]#
check_metadata_schema_valid()[source]#
check_conversion_options_schema_valid()[source]#
check_no_metadata_mutation(metadata: dict)[source]#

Ensure the metadata object was not altered by the add_to_nwbfile method.

check_run_conversion_with_backend(nwbfile_path: str, metadata: dict, backend: Literal['hdf5', 'zarr'] = 'hdf5')[source]#
check_configure_backend_for_equivalent_nwbfiles(metadata: dict, backend: Literal['hdf5', 'zarr'] = 'hdf5')[source]#
check_run_conversion_with_backend_configuration(nwbfile_path: str, metadata: dict, backend: Literal['hdf5', 'zarr'] = 'hdf5')[source]#
check_run_conversion_in_nwbconverter_with_backend(nwbfile_path: str, metadata: dict, backend: Literal['hdf5', 'zarr'] = 'hdf5')[source]#
check_run_conversion_in_nwbconverter_with_backend_configuration(nwbfile_path: str, metadata: dict, backend: Literal['hdf5', 'zarr'] = 'hdf5')[source]#
test_all_conversion_checks(metadata: dict)[source]#
check_interface_get_original_timestamps()[source]#

Ensure each interface can call .get_original_timestamps() without raising an error.

Also check that it always returns a non-empty array.

check_interface_get_timestamps()[source]#

Ensure each interface can call .get_timestamps() without raising an error.

Also check that it always returns a non-empty array.

check_interface_set_aligned_timestamps()[source]#

Ensure that internal mechanisms for the timestamps getter/setter work as expected.

check_shift_timestamps_by_start_time()[source]#

Ensure that internal mechanisms for shifting timestamps by a starting time work as expected.

check_interface_original_timestamps_inmutability()[source]#

Check that aligning the timestamps for the interface does not change the value of .get_original_timestamps().

test_interface_alignment()[source]#
class PoseEstimationInterfaceTestMixin(/, *args, **kwargs)[source]#

Bases: DataInterfaceTestMixin, TemporalAlignmentMixin

Generic class for testing any pose estimation interface.

check_read_nwb(nwbfile_path: str)[source]#

Check that pose estimation data can be read back from NWB file.

Mock Interfaces#

class MockInterface(verbose: bool = False, **source_data)[source]#

Bases: BaseDataInterface

A mock interface for testing basic command passing without side effects.

get_metadata() DeepDict[source]#

Child DataInterface classes should override this to match their metadata.

Returns:

The metadata dictionary containing basic NWBFile metadata.

Return type:

DeepDict

add_to_nwbfile(nwbfile: NWBFile, metadata: dict | None, **conversion_options)[source]#

Define a protocol for mapping the data from this interface to NWB neurodata objects.

These neurodata objects should also be added to the in-memory pynwb.NWBFile object in this step.

Parameters:
  • nwbfile (pynwb.NWBFile) – The in-memory object to add the data to.

  • metadata (dict) – Metadata dictionary with information used to create the NWBFile.

  • **conversion_options – Additional keyword arguments to pass to the .add_to_nwbfile method.

class MockTimeSeriesInterface(*, num_channels: int = 4, sampling_frequency: float = 30000.0, duration: float = 1.0, seed: int = 0, verbose: bool = False, metadata_key: str = 'TimeSeries')[source]#

Bases: BaseDataInterface

A mock TimeSeries interface for testing purposes.

This interface uses pynwb’s mock_TimeSeries to create synthetic time series data with only pynwb as a dependency.

Initialize a mock TimeSeries interface.

Parameters:
  • num_channels (int, optional) – Number of channels to generate, by default 4.

  • sampling_frequency (float, optional) – Sampling frequency in Hz, by default 30,000.0 Hz.

  • duration (float, optional) – Duration of the data in seconds, by default 1.0.

  • seed (int, optional) – Seed for the random number generator, by default 0.

  • verbose (bool, optional) – Control verbosity, by default False.

  • metadata_key (str, optional) – Key for the TimeSeries metadata in the metadata dictionary, by default “TimeSeries”.

get_metadata() DeepDict[source]#

Get metadata for the TimeSeries interface.

Returns:

The metadata dictionary containing NWBFile and TimeSeries metadata.

Return type:

dict

add_to_nwbfile(nwbfile: NWBFile, metadata: dict | None = None)[source]#

Add mock TimeSeries data to an NWB file.

Parameters:
  • nwbfile (NWBFile) – The NWB file to which the TimeSeries data will be added.

  • metadata (dict, optional) – Metadata dictionary. If None, uses default metadata.
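
A minimal usage sketch (assuming the import path neuroconv.tools.testing.mock_interfaces and the standard BaseDataInterface.run_conversion API):

from datetime import datetime

from neuroconv.tools.testing.mock_interfaces import MockTimeSeriesInterface

interface = MockTimeSeriesInterface(num_channels=2, duration=0.5)
metadata = interface.get_metadata()
# Make sure a session start time is present before writing the NWB file.
metadata["NWBFile"]["session_start_time"] = datetime(2024, 1, 1)

interface.run_conversion(nwbfile_path="mock_time_series.nwb", metadata=metadata, overwrite=True)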

class MockBehaviorEventInterface(event_times: list | numpy.ndarray | None = None)[source]#

Bases: BaseTemporalAlignmentInterface

A mock behavior event interface for testing purposes.

Initialize the interface with event times for behavior.

Parameters:

event_times (list of floats, optional) – The event times to set as timestamps for this interface. The default is the array [1.2, 2.3, 3.4] to simulate a time series similar to the MockSpikeGLXNIDQInterface.

classmethod get_source_schema() dict[source]#

Infer the JSON schema for the source_data from the method signature (annotation typing).

Returns:

The JSON schema for the source_data.

Return type:

dict

get_original_timestamps() ndarray[source]#

Get the original event times before any alignment or transformation.

Returns:

The original event times as a NumPy array.

Return type:

np.ndarray

get_timestamps() ndarray[source]#

Get the current (possibly aligned) event times.

Returns:

The current event times as a NumPy array, possibly modified after alignment.

Return type:

np.ndarray

set_aligned_timestamps(aligned_timestamps: ndarray)[source]#

Set the event times after alignment.

Parameters:

aligned_timestamps (np.ndarray) – The aligned event timestamps to update the internal event times.

add_to_nwbfile(nwbfile: NWBFile, metadata: dict)[source]#

Add the event times to an NWBFile as a DynamicTable.

Parameters:
  • nwbfile (NWBFile) – The NWB file to which the event times will be added.

  • metadata (dict) – Metadata to describe the event times in the NWB file.

Notes

This method creates a DynamicTable to store event times and adds it to the NWBFile’s acquisition.
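
A short sketch of the temporal-alignment behaviour this mock is meant to exercise (import path assumed to be neuroconv.tools.testing.mock_interfaces):

import numpy as np

from neuroconv.tools.testing.mock_interfaces import MockBehaviorEventInterface

interface = MockBehaviorEventInterface()  # defaults to event times [1.2, 2.3, 3.4]
original = interface.get_original_timestamps().copy()

interface.set_aligned_timestamps(aligned_timestamps=original + 1.5)

assert np.array_equal(interface.get_timestamps(), original + 1.5)
# The original timestamps are expected to be unaffected by alignment.
assert np.array_equal(interface.get_original_timestamps(), original)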

class MockSpikeGLXNIDQInterface(signal_duration: float = 7.0, ttl_times: list[list[float]] | None = None, ttl_duration: float = 1.0)[source]#

Bases: SpikeGLXNIDQInterface

A mock SpikeGLX interface for testing purposes.

Define a mock SpikeGLXNIDQInterface by overriding the recording extractor to be a mock TTL signal.

Parameters:
  • signal_duration (float, default: 7.0) – The number of seconds to simulate.

  • ttl_times (list of lists of floats, optional) – The times within the signal_duration at which to trigger the TTL pulse for each channel. The outer list is over channels, and each inner list is the set of TTL times for that channel. The default generates 8 channels with a periodic on/off cycle (starting in the ‘off’ state); each ‘on’ pulse lasts ttl_duration seconds, with a 0.1 second offset per channel.

  • ttl_duration (float, default: 1.0) – How long each TTL pulse stays in the ‘on’ state when triggered, in seconds.

ExtractorName = 'NumpyRecording'#
classmethod get_source_schema() dict[source]#

Infer the JSON schema for the source_data from the method signature (annotation typing).

Returns:

The JSON schema for the source_data.

Return type:

dict
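
For instance, a hedged sketch (import path assumed to be neuroconv.tools.testing.mock_interfaces) of a mock NIDQ interface with two custom TTL channels:

from neuroconv.tools.testing.mock_interfaces import MockSpikeGLXNIDQInterface

interface = MockSpikeGLXNIDQInterface(
    signal_duration=5.0,
    ttl_times=[[1.0, 3.0], [2.0, 4.0]],  # one list of pulse onset times per channel
    ttl_duration=0.5,
)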

class MockRecordingInterface(num_channels: int = 4, sampling_frequency: float = 30000.0, durations: tuple[float, ...] = (1.0,), seed: int = 0, verbose: bool = False, es_key: str = 'ElectricalSeries', set_probe: bool = False)[source]#

Bases: BaseRecordingExtractorInterface

An interface with a spikeinterface recording object for testing purposes.

Parameters:
  • num_channels (int, optional) – Number of channels to generate, by default 4.

  • sampling_frequency (float, optional) – Sampling frequency of the generated data in Hz, by default 30,000.0 Hz.

  • durations (tuple of float, optional) – Durations of the segments in seconds, by default (1.0,).

  • seed (int, optional) – Seed for the random number generator, by default 0.

  • verbose (bool, optional) – Control verbosity, by default False.

  • es_key (str, optional) – Key for the ElectricalSeries metadata, by default “ElectricalSeries”.

  • set_probe (bool, optional) – Whether to set a probe on the generated recording, by default False.

classmethod get_extractor_class()[source]#

Get the extractor class for this interface.

This classmethod must be implemented by each concrete interface to specify which extractor class to use.

Returns:

The extractor class or function to use for initialization.

Return type:

type or callable

get_metadata() DeepDict[source]#

Get metadata for the recording interface.

Returns:

The metadata dictionary containing NWBFile metadata with session start time.

Return type:

dict
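
A minimal conversion sketch (import path assumed; per the get_metadata entry above, the mock supplies a session start time):

from neuroconv.tools.testing.mock_interfaces import MockRecordingInterface

interface = MockRecordingInterface(num_channels=4, durations=(0.5,))
metadata = interface.get_metadata()  # already contains NWBFile metadata with session start time
interface.run_conversion(nwbfile_path="mock_recording.nwb", metadata=metadata, overwrite=True)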

class MockSortingInterface(num_units: int = 4, sampling_frequency: float = 30000.0, durations: tuple[float, ...] = (1.0,), seed: int = 0, verbose: bool = False)[source]#

Bases: BaseSortingExtractorInterface

A mock sorting extractor interface for generating synthetic sorting data.

Parameters:
  • num_units (int, optional) – Number of units to generate, by default 4.

  • sampling_frequency (float, optional) – Sampling frequency of the generated data in Hz, by default 30,000.0 Hz.

  • durations (tuple of float, optional) – Durations of the segments in seconds, by default (1.0,).

  • seed (int, optional) – Seed for the random number generator, by default 0.

  • verbose (bool, optional) – Control whether to display verbose messages during writing, by default False.

classmethod get_extractor_class()[source]#

Get the extractor class for this interface.

This classmethod must be implemented by each concrete interface to specify which extractor class to use.

Returns:

The extractor class or function to use for initialization.

Return type:

type or callable

get_metadata() DeepDict[source]#

Child DataInterface classes should override this to match their metadata.

Returns:

The metadata dictionary containing basic NWBFile metadata.

Return type:

DeepDict

class MockImagingInterface(num_samples: int = 30, num_rows: int = 10, num_columns: int = 10, sampling_frequency: float = 30, dtype: str = 'uint16', verbose: bool = False, seed: int = 0, photon_series_type: Literal['OnePhotonSeries', 'TwoPhotonSeries'] = 'TwoPhotonSeries')[source]#

Bases: BaseImagingExtractorInterface

A mock imaging interface for testing purposes.

Parameters:
  • num_samples (int, optional) – The number of samples (frames) in the mock imaging data, by default 30.

  • num_rows (int, optional) – The number of rows (height) in each frame of the mock imaging data, by default 10.

  • num_columns (int, optional) – The number of columns (width) in each frame of the mock imaging data, by default 10.

  • sampling_frequency (float, optional) – The sampling frequency of the mock imaging data in Hz, by default 30.

  • dtype (str, optional) – The data type of the generated imaging data (e.g., ‘uint16’), by default ‘uint16’.

  • seed (int, optional) – Random seed for reproducibility, by default 0.

  • photon_series_type (Literal[“OnePhotonSeries”, “TwoPhotonSeries”], optional) – The type of photon series for the mock imaging data, either “OnePhotonSeries” or “TwoPhotonSeries”, by default “TwoPhotonSeries”.

  • verbose (bool, default False) – controls verbosity

classmethod get_extractor_class()[source]#

Get the extractor class for this interface.

This classmethod must be implemented by each concrete interface to specify which extractor class to use.

Returns:

The extractor class or function to use for initialization.

Return type:

type or callable

get_metadata() DeepDict[source]#

Retrieve the metadata for the imaging data.

Returns:

Dictionary containing metadata including device information, imaging plane details, and photon series configuration.

Return type:

DeepDict
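
A hedged sketch pairing this mock with the ImagingExtractorInterfaceTestMixin documented above (attribute values here are illustrative):

from neuroconv.tools.testing.data_interface_mixins import (
    ImagingExtractorInterfaceTestMixin,
)
from neuroconv.tools.testing.mock_interfaces import MockImagingInterface


class TestMockImagingInterface(ImagingExtractorInterfaceTestMixin):
    data_interface_cls = MockImagingInterface
    interface_kwargs = dict(num_samples=15)
    # optical_series_name defaults to "TwoPhotonSeries", matching the mock's default photon_series_type.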

class MockSegmentationInterface(num_rois: int = 10, num_samples: int = 30, num_rows: int = 25, num_columns: int = 25, sampling_frequency: float = 30.0, has_summary_images: bool = True, has_raw_signal: bool = True, has_dff_signal: bool = True, has_deconvolved_signal: bool = True, has_neuropil_signal: bool = True, seed: int = 0, verbose: bool = False)[source]#

Bases: BaseSegmentationExtractorInterface

A mock segmentation interface for testing purposes.

Parameters:
  • num_rois (int, optional) – number of regions of interest, by default 10.

  • num_samples (int, optional) – number of samples (frames), by default 30.

  • num_rows (int, optional) – number of rows in the hypothetical video from which the data was extracted, by default 25.

  • num_columns (int, optional) – number of columns in the hypothetical video from which the data was extracted, by default 25.

  • sampling_frequency (float, optional) – sampling frequency of the hypothetical video from which the data was extracted, by default 30.0.

  • has_summary_images (bool, optional) – whether the dummy segmentation extractor has summary images or not (mean and correlation).

  • has_raw_signal (bool, optional) – whether a raw fluorescence signal is desired in the object, by default True.

  • has_dff_signal (bool, optional) – whether a relative (df/f) fluorescence signal is desired in the object, by default True.

  • has_deconvolved_signal (bool, optional) – whether a deconvolved signal is desired in the object, by default True.

  • has_neuropil_signal (bool, optional) – whether a neuropil signal is desired in the object, by default True.

  • seed (int, optional) – Seed for the random number generator, by default 0.

  • verbose (bool, optional) – controls verbosity, by default False.

classmethod get_extractor_class()[source]#

Get the extractor class for this interface.

This classmethod must be implemented by each concrete interface to specify which extractor class to use.

Returns:

The extractor class or function to use for initialization.

Return type:

type or callable

get_metadata() DeepDict[source]#

Child DataInterface classes should override this to match their metadata.

Returns:

The metadata dictionary containing basic NWBFile metadata.

Return type:

DeepDict

class MockPoseEstimationInterface(num_samples: int = 1000, num_nodes: int = 3, pose_estimation_metadata_key: str = 'MockPoseEstimation', seed: int = 0, verbose: bool = False)[source]#

Bases: BaseTemporalAlignmentInterface

A mock pose estimation interface for testing purposes.

Initialize a mock pose estimation interface.

Parameters:
  • num_samples (int, optional) – Number of samples to generate, by default 1000.

  • num_nodes (int, optional) – Number of nodes/body parts to track, by default 3.

  • pose_estimation_metadata_key (str, optional) – Key for pose estimation metadata container, by default “MockPoseEstimation”.

  • seed (int, optional) – Random seed for reproducible data generation, by default 0.

  • verbose (bool, optional) – Control verbosity, by default False.

display_name: str | None = 'Mock Pose Estimation'#
keywords: tuple[str] = ('behavior', 'pose estimation', 'mock')#
associated_suffixes: tuple[str] = []#
info: str | None = 'Mock interface for pose estimation data testing.'#
classmethod get_source_schema() dict[source]#

Infer the JSON schema for the source_data from the method signature (annotation typing).

Returns:

The JSON schema for the source_data.

Return type:

dict

get_original_timestamps() ndarray[source]#

Get the original timestamps before any alignment.

get_timestamps() ndarray[source]#

Get the current (possibly aligned) timestamps.

set_aligned_timestamps(aligned_timestamps: ndarray)[source]#

Set aligned timestamps.

get_metadata() DeepDict[source]#

Get metadata for the mock pose estimation interface.

add_to_nwbfile(nwbfile: NWBFile, metadata: dict | None = None, **conversion_options)[source]#

Add mock pose estimation data to NWBFile using ndx-pose.
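
A brief usage sketch (import path assumed to be neuroconv.tools.testing.mock_interfaces) exercising the documented timestamp methods:

from neuroconv.tools.testing.mock_interfaces import MockPoseEstimationInterface

interface = MockPoseEstimationInterface(num_samples=100, num_nodes=3)
timestamps = interface.get_timestamps()  # expected to contain num_samples entries
interface.set_aligned_timestamps(aligned_timestamps=timestamps + 0.5)  # shift by half a second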

Mock TTL Signals#

generate_mock_ttl_signal(signal_duration: float = 7.0, ttl_times: list | ndarray | None = None, ttl_duration: float = 1.0, sampling_frequency_hz: float = 25000.0, dtype: DTypeLike | None = 'int16', baseline_mean: int | float | None = None, signal_mean: int | float | None = None, channel_noise: int | float | None = None, random_seed: int | None = 0) ndarray[source]#

Generate a synthetic signal of TTL pulses similar to those seen in .nidq.bin files using SpikeGLX.

Parameters:
  • signal_duration (float, default: 7.0) – The number of seconds to simulate.

  • ttl_times (array of floats, optional) – The times within the signal_duration to trigger the TTL pulse. In conjunction with the ttl_duration, these must produce disjoint ‘on’ intervals. The default generates a periodic 1 second on, 1 second off pattern.

  • ttl_duration (float, default: 1.0) – How long the TTL pulse stays in the ‘on’ state when triggered, in seconds. In conjunction with the ttl_times, these must produce disjoint ‘on’ intervals.

  • sampling_frequency_hz (float, default: 25,000.0) – The sampling frequency of the signal in Hz. The default is 25000 Hz; similar to that of typical .nidq.bin files.

  • dtype (numpy data type or one of its accepted string input, default: “int16”) – The data type of the trace. Must match the data type of baseline_mean, signal_mean, and channel_noise, if any of those are specified. Recommended to be int16 for maximum efficiency, but can also be any size float to represent voltage scalings.

  • baseline_mean (integer or float, depending on specified ‘dtype’, optional) – The average value for the baseline; usually around 0 Volts. The default is approximately 0.005645752 Volts, estimated from a real example of a TTL pulse in a .nidq.bin file.

  • signal_mean (integer or float, optional) – Type depends on specified ‘dtype’. The average value for the signal; usually around 5 Volts. The default is approximately 4.980773925 Volts, estimated from a real example of a TTL pulse in a .nidq.bin file.

  • channel_noise (integer or float, optional) – Type depends on specified ‘dtype’. The standard deviation of white noise in the channel. The default is approximately 0.002288818 Volts, estimated from a real example of a TTL pulse in a .nidq.bin file.

  • random_seed (int or None, default: 0) – The seed to set for the numpy random number generator. Set to None to choose the seed randomly. The default is kept at 0 for generating reproducible outputs.

Returns:

trace – The synthetic trace representing a channel with TTL pulses.

Return type:

numpy.ndarray
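
A minimal usage sketch (import path assumed to be neuroconv.tools.testing), using only the parameters documented above:

from neuroconv.tools.testing import generate_mock_ttl_signal

trace = generate_mock_ttl_signal(
    signal_duration=2.0,
    ttl_times=[0.25, 1.25],
    ttl_duration=0.5,
    sampling_frequency_hz=25_000.0,
)
# trace is an int16 numpy array (the default dtype) covering 2 seconds at 25 kHz.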

regenerate_test_cases(folder_path: DirectoryPath, regenerate_reference_images: bool = False)[source]#

Regenerate the test cases of the file included in the main testing suite, which is frozen between breaking changes.

Parameters:
  • folder_path (PathType) – Folder to save the resulting NWB file in. For use in the testing suite, this must be the ‘/test_testing/test_mock_ttl/’ subfolder adjacent to the ‘test_mock_ttl.py’ file.

  • regenerate_reference_images (bool) – If true, uses the kaleido package with plotly (you may need to install both) to regenerate the images used as references in the documentation.

Mock Files#

generate_path_expander_demo_ibl(folder_path: str | None = None) None[source]#

Partially replicate the file structure of IBL data with dummy files for experimentation with LocalPathExpander. Specifically, it recreates the directory tree for the video files of the Steinmetz Lab’s data.

Parameters:

folder_path (str, optional) – Path to folder where the files are to be generated. If None, the current working directory will be used.
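
A minimal usage sketch (import path assumed to be neuroconv.tools.testing):

import tempfile

from neuroconv.tools.testing import generate_path_expander_demo_ibl

demo_folder = tempfile.mkdtemp()
generate_path_expander_demo_ibl(folder_path=demo_folder)
# The dummy directory tree under demo_folder can now be explored with LocalPathExpander.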