pynwb package

Module contents

This package contains functions, classes, and objects for reading and writing data in the NWB format.

pynwb.get_type_map(extensions=None)[source]
Get the TypeMap for the given extensions. If no extensions are provided, return the TypeMap for the core namespace.

Parameters:

extensions (str or TypeMap or list) – a path to a namespace, a TypeMap, or a list consisting of paths to namespaces and TypeMaps

Returns:

TypeMap loaded for the given extension or NWB core namespace

Return type:

TypeMap
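
A minimal sketch of typical usage (the extension path below is hypothetical):

from pynwb import get_type_map

type_map = get_type_map()  # TypeMap for the core NWB namespace
# With a hypothetical extension namespace file:
ext_type_map = get_type_map(extensions='ndx-my-extension.namespace.yaml')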

pynwb.get_manager(extensions=None)[source]
Get a BuildManager to use for I/O using the given extensions. If no extensions are provided, return a BuildManager that uses the core namespace.

Parameters:

extensions (str or TypeMap or list) – a path to a namespace, a TypeMap, or a list consisting of paths to namespaces and TypeMaps

Returns:

a BuildManager for reading and writing NWB files using the given extensions or the core namespace

Return type:

BuildManager
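
A minimal sketch of typical usage (the file and extension paths are hypothetical):

from pynwb import NWBHDF5IO, get_manager

manager = get_manager(extensions='ndx-my-extension.namespace.yaml')
# The same BuildManager can be shared across multiple I/O objects
io = NWBHDF5IO('data.nwb', mode='r', manager=manager)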

pynwb.load_namespaces(namespace_path)[source]

Load namespaces from file

Parameters:

namespace_path (str) – the path to the YAML with the namespace definition

Returns:

the namespaces loaded from the given file

Return type:

tuple
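
A minimal sketch of typical usage (the namespace file path is hypothetical):

from pynwb import load_namespaces

# Make the extension's types available to pynwb
load_namespaces('ndx-my-extension.namespace.yaml')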

pynwb.available_namespaces()[source]

Returns all namespaces registered in the namespace catalog
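
A minimal sketch of typical usage:

from pynwb import available_namespaces

# Lists the core namespace plus any loaded extension namespaces
print(available_namespaces())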

pynwb.register_class(neurodata_type, namespace, container_cls=None)[source]
Register an NWBContainer class to use for reading and writing a neurodata_type from a specification

If container_cls is not specified, returns a decorator for registering an NWBContainer subclass as the class for neurodata_type in namespace.

Parameters:
  • neurodata_type (str) – the neurodata_type to register the class for

  • namespace (str) – the name of the namespace

  • container_cls (type) – the class to map to the specified neurodata_type
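
A sketch of the decorator form, assuming a hypothetical extension namespace 'ndx-my-extension' defining a 'MyContainer' type has already been loaded:

from pynwb import register_class
from pynwb.core import NWBContainer

@register_class('MyContainer', 'ndx-my-extension')
class MyContainer(NWBContainer):
    pass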

pynwb.get_nwbfile_version(h5py_file)[source]

Get the NWB version of the file if it is an NWB file.

Returns:

Tuple consisting of: 1) the original version string as stored in the file and 2) a tuple with the parsed components of the version string, consisting of integers and strings, e.g., (2, 5, 1, beta). (None, None) will be returned if the file is not a valid NWB file or the nwb_version is missing, e.g., in the case when no data has been written to the file yet.

Parameters:

h5py_file (File) – An NWB file
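
A minimal sketch of typical usage (the file path is hypothetical):

import h5py

from pynwb import get_nwbfile_version

with h5py.File('data.nwb', 'r') as f:
    version_str, version_tuple = get_nwbfile_version(f)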

pynwb.register_map(container_cls, mapper_cls=None)[source]
Register an ObjectMapper to use for a Container class type

If mapper_cls is not specified, returns a decorator for registering an ObjectMapper class as the mapper for container_cls. If mapper_cls is specified, register the class as the mapper for container_cls

Parameters:
  • container_cls (type) – the Container class for which the given ObjectMapper class gets used

  • mapper_cls (type) – the ObjectMapper class to use to map
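
A sketch of the decorator form; MyContainer is the hypothetical container class from the register_class example above:

from pynwb import register_map
from hdmf.build import ObjectMapper

@register_map(MyContainer)
class MyContainerMap(ObjectMapper):
    pass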

pynwb.get_class(neurodata_type, namespace)[source]
Parse the YAML file for a given neurodata_type that is a subclass of NWBContainer and automatically generate its Python API. This will work for most containers, but it is known to not work for descendants of MultiContainerInterface and DynamicTable, so these must be defined manually (for now). get_class infers the API mapping directly from the specification. If you want to define a custom mapping, do not use this function; define the class manually instead.

Examples:

Generating and registering an extension is as simple as:

MyClass = get_class('MyClass', 'ndx-my-extension')

get_class defines only the __init__ for the class. In cases where you want to provide additional methods for querying, plotting, etc. you can still use get_class and attach methods to the class after-the-fact, e.g.:

def get_sum(self):
    # 'feat1' and 'feat2' are attributes defined by the extension type
    return self.feat1 + self.feat2

MyClass.get_sum = get_sum

Parameters:
  • neurodata_type (str) – the neurodata_type to get the NWBContainer class for

  • namespace (str) – the namespace the neurodata_type is defined in

class pynwb.NWBHDF5IO(path=None, mode='r', load_namespaces=True, manager=None, extensions=None, file=None, comm=None, driver=None, herd_path=None)[source]

Bases: HDF5IO

Parameters:
  • path (str or Path) – the path to the HDF5 file

  • mode (str) – the mode to open the HDF5 file with, one of (“w”, “r”, “r+”, “a”, “w-”, “x”)

  • load_namespaces (bool) – whether or not to load cached namespaces from the given path; not applicable in write mode, or when manager is not None, or when extensions is not None

  • manager (BuildManager) – the BuildManager to use for I/O

  • extensions (str or TypeMap or list) – a path to a namespace, a TypeMap, or a list consisting of paths to namespaces and TypeMaps

  • file (File or S3File) – a pre-existing h5py.File object

  • comm (Intracomm) – the MPI communicator to use for parallel I/O

  • driver (str) – driver for h5py to use when opening HDF5 file

  • herd_path (str) – The path to the HERD
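
Example usage (a minimal sketch; the file path is hypothetical):

from pynwb import NWBHDF5IO

io = NWBHDF5IO('data.nwb', mode='r')  # cached namespaces are loaded by default
nwbfile = io.read()
io.close()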

static can_read(path: str)[source]

Determine whether a given path is readable by this class
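
A minimal sketch (the path is hypothetical):

from pynwb import NWBHDF5IO

if NWBHDF5IO.can_read('data.nwb'):
    print('data.nwb looks like a readable NWB file')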

property nwb_version

Get the version of the NWB file opened via this NWBHDF5IO object.

Returns:

Tuple consisting of: 1) the original version string as stored in the file and 2) a tuple with the parsed components of the version string, consisting of integers and strings, e.g., (2, 5, 1, beta). (None, None) will be returned if the nwb_version is missing, e.g., in the case when no data has been written to the file yet.
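
A minimal sketch (the file path is hypothetical):

from pynwb import NWBHDF5IO

with NWBHDF5IO('data.nwb', mode='r') as io:
    version_str, version_tuple = io.nwb_version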

read(skip_version_check=False)[source]

Read the NWB file from the IO source.

Raises:

TypeError – if the NWB file version is missing or not supported

Returns:

NWBFile container

Parameters:

skip_version_check (bool) – skip checking of NWB version
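
A minimal sketch (the file path is hypothetical):

from pynwb import NWBHDF5IO

with NWBHDF5IO('data.nwb', mode='r') as io:
    # skip_version_check=True suppresses the NWB version compatibility check
    nwbfile = io.read(skip_version_check=True)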

export(src_io, nwbfile=None, write_args=None)[source]

Export an NWB file to a new NWB file using the HDF5 backend.

If nwbfile is provided, then the build manager of src_io is used to build the container, and the resulting builder will be exported to the new backend. So if nwbfile is provided, src_io must have a non-None manager property. If nwbfile is None, then the contents of src_io will be read and exported to the new backend.

Arguments can be passed in for the write_builder method using write_args. Some arguments may not be supported during export. {'link_data': False} can be used to copy any datasets linked to from the original file instead of creating a new link to those datasets in the exported file.

The exported file will not contain any links to the original file. All links, internal and external, will be preserved in the exported file. All references will also be preserved in the exported file.

The exported file will use the latest schema version supported by the version of PyNWB used. For example, if the input file uses the NWB schema version 2.1 and the latest schema version supported by PyNWB is 2.3, then the exported file will use the 2.3 NWB schema.

Example usage:

with NWBHDF5IO(self.read_path, mode='r') as read_io:
    nwbfile = read_io.read()
    # ...  # modify nwbfile
    nwbfile.set_modified()  # this may be necessary if the modifications are changes to attributes

    with NWBHDF5IO(self.export_path, mode='w') as export_io:
        export_io.export(src_io=read_io, nwbfile=nwbfile)

See Exporting NWB files and Adding/Removing Containers from an NWB File for more information and examples.

Parameters:
  • src_io (HDMFIO) – the HDMFIO object (such as NWBHDF5IO) that was used to read the data to export

  • nwbfile (NWBFile) – the NWBFile object to export. If None, then the entire contents of src_io will be exported

  • write_args (dict) – arguments to pass to write_builder
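
A sketch of passing write_args during export (the paths are hypothetical):

from pynwb import NWBHDF5IO

with NWBHDF5IO('source.nwb', mode='r') as read_io:
    with NWBHDF5IO('exported.nwb', mode='w') as export_io:
        # Copy linked datasets into the exported file instead of creating new links
        export_io.export(src_io=read_io, write_args={'link_data': False})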