analyze¶
Read / write access to the basic Mayo Analyze format
The Analyze header format¶
This is a binary header format that inherits from WrapStruct.
Apart from the attributes and methods of WrapStruct:
Class attributes are:
.default_x_flip
with methods:
.get/set_data_shape
.get/set_data_dtype
.get/set_zooms
.get/set_data_offset
.get_base_affine()
.get_best_affine()
.data_to_fileobj
.data_from_fileobj
and class methods:
.from_header(hdr)
More sophisticated headers can add more methods and attributes.
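As a quick orientation (a sketch added for this page, not part of the nibabel docstrings), the methods listed above are used like this:
>>> import numpy as np
>>> from nibabel.analyze import AnalyzeHeader
>>> hdr = AnalyzeHeader()
>>> hdr.set_data_shape((5, 6, 7))
>>> hdr.set_zooms((2.0, 2.0, 2.5))
>>> hdr.set_data_dtype(np.int16)
>>> hdr.get_data_shape()
(5, 6, 7)
>>> hdr.get_best_affine().shape
(4, 4)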
Notes¶
This basic Analyze header cannot encode full affines (only diagonal affines), and cannot do integer scaling.
The inability to store affines means that we have to guess what orientation the image has. Most Analyze images are stored on disk in (fastest-changing to slowest-changing) R->L, P->A and I->S order. That is, the first voxel is the rightmost, most posterior and most inferior voxel location in the image, and the next voxel is one voxel towards the left of the image.
Most people refer to this disk storage format as ‘radiological’, on the basis that, if you load up the data as an array img_arr where the first axis is the fastest changing, then take a slice in the I->S axis - img_arr[:,:,10] - then the right part of the brain will be on the left of your displayed slice. Radiologists like looking at images where the left of the brain is on the right side of the image.
Conversely, if the image has the voxels stored with the left voxels first - L->R, P->A, I->S, then this would be ‘neurological’ format. Neurologists like looking at images where the left side of the brain is on the left of the image.
When we are guessing at an affine for Analyze, this translates to the problem of whether the affine should consider proceeding within the data down an X line as being from left to right, or right to left.
By default we assume that the image is stored in R->L format. We encode this choice in the default_x_flip flag, which can be True or False. True means assume radiological.
If the image is 3D, and the X, Y and Z zooms are x, y, and z, then:
if default_x_flip is True:
    affine = np.diag((-x, y, z, 1))
else:
    affine = np.diag((x, y, z, 1))
In our implementation, there is no way of saving this assumed flip into the header. One way of doing this, that we have not used, is to allow negative zooms, in particular, negative X zooms. We did not do this because the image can be loaded with and without a default flip, so the saved zoom will not constrain the affine.
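For illustration (a sketch, not part of the original docs), the default radiological assumption shows up as a negated X zoom on the diagonal of the guessed affine:
>>> import numpy as np
>>> from nibabel.analyze import AnalyzeHeader
>>> hdr = AnalyzeHeader()
>>> hdr.set_data_shape((4, 5, 6))
>>> hdr.set_zooms((2.0, 3.0, 4.0))
>>> hdr.default_x_flip  # radiological (R->L) assumed by default
True
>>> aff = hdr.get_base_affine()
>>> [float(v) for v in np.diag(aff)]
[-2.0, 3.0, 4.0, 1.0]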
AnalyzeHeader: Class for basic Analyze header
AnalyzeImage: Class for basic Analyze format image
AnalyzeHeader¶
- class nibabel.analyze.AnalyzeHeader(binaryblock=None, endianness=None, check=True)¶
Bases: LabeledWrapStruct, SpatialHeader
Class for basic Analyze header
Implements zoom-only setting of affine transform, and no image scaling
Initialize header from binary data block
- Parameters:
- binaryblock : {None, string}, optional
binary block to set into header. By default, None, in which case we insert the default empty header block
- endianness : {None, '<', '>', other endian code} string, optional
endianness of the binaryblock. If None, guess endianness from the data.
- check : bool, optional
Whether to check content of header in initialization. Default is True.
Examples
>>> hdr1 = AnalyzeHeader()  # an empty header
>>> hdr1.endianness == native_code
True
>>> hdr1.get_data_shape()
(0,)
>>> hdr1.set_data_shape((1,2,3))  # now with some content
>>> hdr1.get_data_shape()
(1, 2, 3)
We can set the binary block directly via this initialization. Here we get it from the header we have just made
>>> binblock2 = hdr1.binaryblock
>>> hdr2 = AnalyzeHeader(binblock2)
>>> hdr2.get_data_shape()
(1, 2, 3)
Empty headers are native endian by default
>>> hdr2.endianness == native_code
True
You can pass valid opposite endian headers with the endianness parameter. Even empty headers can have endianness:
>>> hdr3 = AnalyzeHeader(endianness=swapped_code)
>>> hdr3.endianness == swapped_code
True
If you do not pass an endianness, and you pass some data, we will try to guess from the passed data.
>>> binblock3 = hdr3.binaryblock
>>> hdr4 = AnalyzeHeader(binblock3)
>>> hdr4.endianness == swapped_code
True
- __init__(binaryblock=None, endianness=None, check=True)¶
Initialize header from binary data block
- Parameters:
- binaryblock : {None, string}, optional
binary block to set into header. By default, None, in which case we insert the default empty header block
- endianness : {None, '<', '>', other endian code} string, optional
endianness of the binaryblock. If None, guess endianness from the data.
- check : bool, optional
Whether to check content of header in initialization. Default is True.
Examples
>>> hdr1 = AnalyzeHeader()  # an empty header
>>> hdr1.endianness == native_code
True
>>> hdr1.get_data_shape()
(0,)
>>> hdr1.set_data_shape((1,2,3))  # now with some content
>>> hdr1.get_data_shape()
(1, 2, 3)
We can set the binary block directly via this initialization. Here we get it from the header we have just made
>>> binblock2 = hdr1.binaryblock
>>> hdr2 = AnalyzeHeader(binblock2)
>>> hdr2.get_data_shape()
(1, 2, 3)
Empty headers are native endian by default
>>> hdr2.endianness == native_code
True
You can pass valid opposite endian headers with the endianness parameter. Even empty headers can have endianness:
>>> hdr3 = AnalyzeHeader(endianness=swapped_code)
>>> hdr3.endianness == swapped_code
True
If you do not pass an endianness, and you pass some data, we will try to guess from the passed data.
>>> binblock3 = hdr3.binaryblock
>>> hdr4 = AnalyzeHeader(binblock3)
>>> hdr4.endianness == swapped_code
True
- as_analyze_map()¶
Return header as mapping for conversion to Analyze types
Collect data from custom header type to fill in fields for Analyze and derived header types (such as Nifti1 and Nifti2).
When Analyze types convert another header type to their own type, they call this method to check if there are other Analyze / Nifti fields that the source header would like to set.
- Returns:
- analyze_map : mapping
Object that can be used as a mapping thus:

for key in analyze_map:
    value = analyze_map[key]

where key is the name of a field that can be set in an Analyze header type, such as Nifti1, and value is a value for the field. For example, analyze_map might be something like dict(regular='y', slice_duration=0.3) where regular is a field present in both Analyze and Nifti1, and slice_duration is a field restricted to Nifti1 and Nifti2. If a particular Analyze header type does not recognize the field name, it will throw away the value without error. See Analyze.from_header().
Notes
You can also return a Nifti header with the relevant fields set.
Your header still needs methods get_data_dtype, get_data_shape and get_zooms for the conversion, and these get called after using the analyze map, so the methods will override values set in the map.
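For illustration only (MyHeader is a hypothetical class, not part of nibabel), a header type providing such a map could look like:
>>> from nibabel.analyze import AnalyzeHeader
>>> class MyHeader(AnalyzeHeader):
...     def as_analyze_map(self):
...         # field names with Analyze / NIfTI meaning; names a target
...         # header type does not know are silently discarded
...         return dict(regular=b'y', descrip=b'my custom header')
>>> sorted(MyHeader().as_analyze_map())
['descrip', 'regular']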
- data_from_fileobj(fileobj)¶
Read scaled data array from fileobj
Use this routine to get the scaled image data from an image file fileobj, given a header self. “Scaled” means, with any header scaling factors applied to the raw data in the file. Use raw_data_from_fileobj to get the raw data.
- Parameters:
- fileobj : file-like
Must be open, and implement read and seek methods
- Returns:
- arr : ndarray
scaled data array
Notes
We use the header to get any scale or intercept values to apply to the data. Raw Analyze files don’t have scale factors or intercepts, but this routine also works with formats based on Analyze, that do have scaling, such as SPM analyze formats and NIfTI.
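A sketch (not from the original docstring) of pairing this method with data_to_fileobj through an in-memory buffer:
>>> import numpy as np
>>> from io import BytesIO
>>> from nibabel.analyze import AnalyzeHeader
>>> hdr = AnalyzeHeader()
>>> hdr.set_data_shape((2, 3, 4))
>>> hdr.set_data_dtype(np.int16)
>>> data = np.arange(24, dtype=np.int16).reshape(2, 3, 4)
>>> bio = BytesIO()
>>> hdr.data_to_fileobj(data, bio)
>>> np.array_equal(hdr.data_from_fileobj(bio), data)
True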
- data_to_fileobj(data, fileobj, rescale=True)¶
Write data to fileobj, maybe rescaling data, modifying self
In writing the data, we match the header to the written data, by setting the header scaling factors, iff rescale is True. Thus we modify self in the process of writing the data.
- Parameters:
- data : array-like
data to write; should match header-defined shape
- fileobj : file-like object
Object with file interface, implementing write and seek
- rescale : {True, False}, optional
Whether to try and rescale data to match the output dtype specified by the header. If True, scaling is needed, and the header cannot scale, then raise HeaderTypeError.
Examples
>>> from nibabel.analyze import AnalyzeHeader
>>> hdr = AnalyzeHeader()
>>> hdr.set_data_shape((1, 2, 3))
>>> hdr.set_data_dtype(np.float64)
>>> from io import BytesIO
>>> str_io = BytesIO()
>>> data = np.arange(6).reshape(1,2,3)
>>> hdr.data_to_fileobj(data, str_io)
>>> data.astype(np.float64).tobytes('F') == str_io.getvalue()
True
- classmethod default_structarr(endianness=None)¶
Return header data for empty header with given endianness
- classmethod from_header(header=None, check=True)¶
Class method to create header from another header
- Parameters:
- header : Header instance or mapping
a header of this class, or another class of header for conversion to this type
- check : {True, False}
whether to check header for integrity
- Returns:
- hdr : header instance
fresh header instance of our own class
- get_base_affine()¶
Get affine from basic (shared) header fields
Note that we get the translations from the center of the image.
Examples
>>> hdr = AnalyzeHeader()
>>> hdr.set_data_shape((3, 5, 7))
>>> hdr.set_zooms((3, 2, 1))
>>> hdr.default_x_flip
True
>>> hdr.get_base_affine() # from center of image
array([[-3.,  0.,  0.,  3.],
       [ 0.,  2.,  0., -4.],
       [ 0.,  0.,  1., -3.],
       [ 0.,  0.,  0.,  1.]])
- get_best_affine()¶
Get affine from basic (shared) header fields
Note that we get the translations from the center of the image.
Examples
>>> hdr = AnalyzeHeader()
>>> hdr.set_data_shape((3, 5, 7))
>>> hdr.set_zooms((3, 2, 1))
>>> hdr.default_x_flip
True
>>> hdr.get_base_affine() # from center of image
array([[-3.,  0.,  0.,  3.],
       [ 0.,  2.,  0., -4.],
       [ 0.,  0.,  1., -3.],
       [ 0.,  0.,  0.,  1.]])
- get_data_dtype()¶
Get numpy dtype for data
For examples see set_data_dtype
- get_data_offset()¶
Return offset into data file to read data
Examples
>>> hdr = AnalyzeHeader()
>>> hdr.get_data_offset()
0
>>> hdr['vox_offset'] = 12
>>> hdr.get_data_offset()
12
- get_data_shape()¶
Get shape of data
Examples
>>> hdr = AnalyzeHeader()
>>> hdr.get_data_shape()
(0,)
>>> hdr.set_data_shape((1,2,3))
>>> hdr.get_data_shape()
(1, 2, 3)
Expanding number of dimensions gets default zooms
>>> hdr.get_zooms()
(1.0, 1.0, 1.0)
- get_slope_inter()¶
Get scalefactor and intercept
These are not implemented for basic Analyze
- get_zooms()¶
Get zooms from header
- Returns:
- z : tuple
tuple of header zoom values
Examples
>>> hdr = AnalyzeHeader()
>>> hdr.get_zooms()
(1.0,)
>>> hdr.set_data_shape((1,2))
>>> hdr.get_zooms()
(1.0, 1.0)
>>> hdr.set_zooms((3, 4))
>>> hdr.get_zooms()
(3.0, 4.0)
- classmethod guessed_endian(hdr)¶
Guess intended endianness from mapping-like hdr
- Parameters:
- hdr : mapping-like
hdr for which to guess endianness
- Returns:
- endianness : {'<', '>'}
Guessed endianness of header
Examples
Zeros header, no information, guess native
>>> hdr = AnalyzeHeader()
>>> hdr_data = np.zeros((), dtype=header_dtype)
>>> AnalyzeHeader.guessed_endian(hdr_data) == native_code
True
A valid native header is guessed native
>>> hdr_data = hdr.structarr.copy()
>>> AnalyzeHeader.guessed_endian(hdr_data) == native_code
True
And, when swapped, is guessed as swapped
>>> sw_hdr_data = hdr_data.byteswap(swapped_code)
>>> AnalyzeHeader.guessed_endian(sw_hdr_data) == swapped_code
True
The algorithm is as follows:
First, look at the first value in the dim field; this should be between 0 and 7. If it is between 1 and 7, then this must be a native endian header.
>>> hdr_data = np.zeros((), dtype=header_dtype) # blank binary data
>>> hdr_data['dim'][0] = 1
>>> AnalyzeHeader.guessed_endian(hdr_data) == native_code
True
>>> hdr_data['dim'][0] = 6
>>> AnalyzeHeader.guessed_endian(hdr_data) == native_code
True
>>> hdr_data['dim'][0] = -1
>>> AnalyzeHeader.guessed_endian(hdr_data) == swapped_code
True
If the first dim value is zero, we need a tie breaker. In that case we check the sizeof_hdr field. This should be 348. If it looks like the byteswapped value of 348, assume swapped. Otherwise assume native.
>>> hdr_data = np.zeros((), dtype=header_dtype) # blank binary data
>>> AnalyzeHeader.guessed_endian(hdr_data) == native_code
True
>>> hdr_data['sizeof_hdr'] = 1543569408
>>> AnalyzeHeader.guessed_endian(hdr_data) == swapped_code
True
>>> hdr_data['sizeof_hdr'] = -1
>>> AnalyzeHeader.guessed_endian(hdr_data) == native_code
True
This is overridden by the dim[0] value though:
>>> hdr_data['sizeof_hdr'] = 1543569408
>>> hdr_data['dim'][0] = 1
>>> AnalyzeHeader.guessed_endian(hdr_data) == native_code
True
- has_data_intercept = False¶
- has_data_slope = False¶
- classmethod may_contain_header(binaryblock)¶
- raw_data_from_fileobj(fileobj)¶
Read unscaled data array from fileobj
- Parameters:
- fileobj : file-like
Must be open, and implement read and seek methods
- Returns:
- arr : ndarray
unscaled data array
- set_data_dtype(datatype)¶
Set numpy dtype for data from code or dtype or type
Examples
>>> hdr = AnalyzeHeader()
>>> hdr.set_data_dtype(np.uint8)
>>> hdr.get_data_dtype()
dtype('uint8')
>>> hdr.set_data_dtype(np.dtype(np.uint8))
>>> hdr.get_data_dtype()
dtype('uint8')
>>> hdr.set_data_dtype('implausible')
Traceback (most recent call last):
   ...
HeaderDataError: data dtype "implausible" not recognized
>>> hdr.set_data_dtype('none')
Traceback (most recent call last):
   ...
HeaderDataError: data dtype "none" known but not supported
>>> hdr.set_data_dtype(np.void)
Traceback (most recent call last):
   ...
HeaderDataError: data dtype "<type 'numpy.void'>" known but not supported
- set_data_offset(offset)¶
Set offset into data file to read data
- set_data_shape(shape)¶
Set shape of data
If ndims == len(shape) then we set zooms for dimensions higher than ndims to 1.0
- Parameters:
- shape : sequence
sequence of integers specifying data array shape
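A sketch (not from the original docstring) of the zoom behavior when the number of dimensions grows:
>>> from nibabel.analyze import AnalyzeHeader
>>> hdr = AnalyzeHeader()
>>> hdr.set_data_shape((2, 3))
>>> hdr.set_zooms((4.0, 5.0))
>>> hdr.set_data_shape((2, 3, 6))  # new trailing dimension gets zoom 1.0
>>> hdr.get_zooms()
(4.0, 5.0, 1.0)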
- set_slope_inter(slope, inter=None)¶
Set slope and / or intercept into header
Set slope and intercept for image data, such that, if the image data is arr, then the scaled image data will be (arr * slope) + inter
In this case, for Analyze images, we can’t store the slope or the intercept, so this method only checks that slope is None or NaN or 1.0, and that inter is None or NaN or 0.
- Parameters:
- slope : None or float
If float, value must be NaN or 1.0 or we raise a HeaderTypeError
- inter : None or float, optional
If float, value must be 0.0 or we raise a HeaderTypeError
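For illustration (not from the original docstring), only the no-op values are accepted, and get_slope_inter still reports nothing stored:
>>> from nibabel.analyze import AnalyzeHeader
>>> hdr = AnalyzeHeader()
>>> hdr.set_slope_inter(None)      # allowed
>>> hdr.set_slope_inter(1.0, 0.0)  # allowed: nothing to store
>>> hdr.get_slope_inter()
(None, None)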
- set_zooms(zooms)¶
Set zooms into header fields
See docstring for get_zooms for examples
- sizeof_hdr = 348¶
- template_dtype = dtype([('sizeof_hdr', '<i4'), ('data_type', 'S10'), ('db_name', 'S18'), ('extents', '<i4'), ('session_error', '<i2'), ('regular', 'S1'), ('hkey_un0', 'S1'), ('dim', '<i2', (8,)), ('vox_units', 'S4'), ('cal_units', 'S8'), ('unused1', '<i2'), ('datatype', '<i2'), ('bitpix', '<i2'), ('dim_un0', '<i2'), ('pixdim', '<f4', (8,)), ('vox_offset', '<f4'), ('funused1', '<f4'), ('funused2', '<f4'), ('funused3', '<f4'), ('cal_max', '<f4'), ('cal_min', '<f4'), ('compressed', '<i4'), ('verified', '<i4'), ('glmax', '<i4'), ('glmin', '<i4'), ('descrip', 'S80'), ('aux_file', 'S24'), ('orient', 'S1'), ('originator', 'S10'), ('generated', 'S10'), ('scannum', 'S10'), ('patient_id', 'S10'), ('exp_date', 'S10'), ('exp_time', 'S10'), ('hist_un0', 'S3'), ('views', '<i4'), ('vols_added', '<i4'), ('start_field', '<i4'), ('field_skip', '<i4'), ('omax', '<i4'), ('omin', '<i4'), ('smax', '<i4'), ('smin', '<i4')])¶
AnalyzeImage¶
- class nibabel.analyze.AnalyzeImage(dataobj, affine, header=None, extra=None, file_map=None, dtype=None)¶
Bases: SpatialImage
Class for basic Analyze format image
Initialize image
The image is a combination of (array-like, affine matrix, header), with optional metadata in extra, and filename / file-like objects contained in the file_map mapping.
- Parameters:
- dataobj : object
Object containing image data. It should be some object that returns an array from np.asanyarray. It should have a shape attribute or property
- affine : None or (4,4) array-like
homogeneous affine giving relationship between voxel coordinates and world coordinates. Affine can also be None. In this case, obj.affine also returns None, and the affine as written to disk will depend on the file format.
- header : None or mapping or header instance, optional
metadata for this image format
- extra : None or mapping, optional
metadata to associate with image that cannot be stored in the metadata of this image type
- file_map : mapping, optional
mapping giving file information for this image format
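A minimal creation sketch (not part of the original docs); when no header is passed, the data dtype is normally taken from the array:
>>> import numpy as np
>>> from nibabel.analyze import AnalyzeImage
>>> data = np.zeros((2, 3, 4), dtype=np.int16)
>>> img = AnalyzeImage(data, np.eye(4))
>>> img.shape
(2, 3, 4)
>>> img.get_data_dtype()
dtype('int16')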
- __init__(dataobj, affine, header=None, extra=None, file_map=None, dtype=None)¶
Initialize image
The image is a combination of (array-like, affine matrix, header), with optional metadata in extra, and filename / file-like objects contained in the file_map mapping.
- Parameters:
- dataobj : object
Object containing image data. It should be some object that returns an array from np.asanyarray. It should have a shape attribute or property
- affine : None or (4,4) array-like
homogeneous affine giving relationship between voxel coordinates and world coordinates. Affine can also be None. In this case, obj.affine also returns None, and the affine as written to disk will depend on the file format.
- header : None or mapping or header instance, optional
metadata for this image format
- extra : None or mapping, optional
metadata to associate with image that cannot be stored in the metadata of this image type
- file_map : mapping, optional
mapping giving file information for this image format
- ImageArrayProxy¶
alias of ArrayProxy
- classmethod from_file_map(file_map, *, mmap=True, keep_file_open=None)¶
Class method to create image from mapping in file_map
- Parameters:
- file_map : dict
Mapping with (key, value) pairs of (file_type, FileHolder instance) giving file-likes for each file needed for this image type.
- mmap : {True, False, 'c', 'r'}, optional, keyword only
mmap controls the use of numpy memory mapping for reading image array data. If False, do not try numpy memmap for data array. If one of {'c', 'r'}, try numpy memmap with mode=mmap. A mmap value of True gives the same behavior as mmap='c'. If image data file cannot be memory-mapped, ignore mmap value and read array from file.
- keep_file_open : {None, True, False}, optional, keyword only
keep_file_open controls whether a new file handle is created every time the image is accessed, or a single file handle is created and used for the lifetime of this ArrayProxy. If True, a single file handle is created and used. If False, a new file handle is created every time the image is accessed. If file_map refers to an open file handle, this setting has no effect. The default value (None) will result in the value of nibabel.arrayproxy.KEEP_FILE_OPEN_DEFAULT being used.
- Returns:
- img : AnalyzeImage instance
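A sketch (not part of the original docs, and assuming the usual FileHolder behavior) of an in-memory round trip with make_file_map, to_file_map and from_file_map, using BytesIO in place of the .hdr / .img files:
>>> import numpy as np
>>> from io import BytesIO
>>> from nibabel.analyze import AnalyzeImage
>>> file_map = AnalyzeImage.make_file_map()
>>> file_map['header'].fileobj = BytesIO()
>>> file_map['image'].fileobj = BytesIO()
>>> img = AnalyzeImage(np.zeros((2, 3, 4), dtype=np.int16), np.eye(4))
>>> img.to_file_map(file_map)
>>> img2 = AnalyzeImage.from_file_map(file_map)
>>> img2.shape
(2, 3, 4)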
- get_data_dtype()¶
- header_class¶
alias of AnalyzeHeader
- set_data_dtype(dtype)¶
- to_file_map(file_map=None, dtype=None)¶
Write image to file_map or contained self.file_map
- Parameters:
- file_map : None or mapping, optional
files mapping. If None (default) use object's file_map attribute instead
- dtype : dtype-like, optional
The on-disk data type to coerce the data array.