utils
Module: utils
Miscellaneous utilities for time series analysis.
Functions
nitime.utils.adaptive_weights(yk, eigvals, sides='onesided', max_iter=150)
Perform an iterative procedure to find the optimal weights for K direct spectral estimators of DPSS tapered signals.
Parameters: yk : ndarray (K, N)
The K DFTs of the tapered sequences
eigvals : ndarray, length-K
The eigenvalues of the DPSS tapers
sides : str
Whether to compute weights on a one-sided or two-sided spectrum
max_iter : int
Maximum number of iterations for weight computation
Returns: weights, nu :
The weights (array like sdfs), and the “equivalent degrees of freedom” (array length-L)
Notes
The weights to use for making the multitaper estimate, such that \(S_{mt} = \sum_{k} |w_k|^2 S_k^{mt} / \sum_{k} |w_k|^2\).
If there are fewer than 3 tapers, the adaptive weights are not found. In that case the square roots of the eigenvalues are returned as weights, and the degrees of freedom are 2*K.
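A rough usage sketch follows; it assumes only the signatures documented on this page (dpss_windows appears further down), and the parameter values and the use of the full two-sided DFTs as input are illustrative choices rather than a prescribed workflow.
>>> import numpy as np
>>> import nitime.utils as utils
>>> N, NW, K = 512, 4, 7                      # arbitrary illustrative values
>>> x = np.random.randn(N)
>>> tapers, eigvals = utils.dpss_windows(N, NW, K)
>>> yk = np.fft.fft(tapers * x, axis=-1)      # the K DFTs of the tapered signal
>>> weights, nu = utils.adaptive_weights(yk, eigvals, sides='onesided')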
nitime.utils.akaike_information_criterion(ecov, p, m, Ntotal, corrected=False)
A measure of the goodness of fit of an auto-regressive model based on the model order and the error covariance.
Parameters: ecov : float array
The error covariance of the system
p : int
the number of channels
m : int
the model order
Ntotal : int
the number of total time-points (across channels)
corrected : boolean (optional)
Whether to correct for small sample size
Returns: AIC : float
The value of the AIC
Notes
This is an implementation of equation (50) in Ding et al. (2006):
M Ding and Y Chen and S Bressler (2006) Granger Causality: Basic Theory and Application to Neuroscience. http://arxiv.org/abs/q-bio/0608035v1
Correction for small sample size is taken from: http://en.wikipedia.org/wiki/Akaike_information_criterion.
nitime.utils.antisymm_rand_arr(size, sample_func=np.random.random_sample)
Make an anti-symmetric random 2-d array of shape (size,size).
Parameters: size : int
Size of the output array.
sample_func : function, optional.
Must be a function which, when called with a 2-tuple of ints, returns a 2-d array of that shape. By default, np.random.random is used, but any other sampling function can be used as long as it matches this API.
Examples
>>> np.random.seed(0)  # for doctesting
>>> np.set_printoptions(precision=4)  # for doctesting
>>> antisymm_rand_arr(4)
array([[ 0.    ,  0.7152,  0.6028,  0.5449],
       [-0.7152,  0.    ,  0.4376,  0.8918],
       [-0.6028, -0.4376,  0.    ,  0.5289],
       [-0.5449, -0.8918, -0.5289,  0.    ]])
nitime.utils.ar_generator(N=512, sigma=1.0, coefs=None, drop_transients=0, v=None)
Generate a signal u(n) = a1*u(n-1) + a2*u(n-2) + ... + v(n), where v(n) is a stationary stochastic process with zero mean and variance sigma. (Note: despite the name, sigma here denotes the variance of the noise, not its standard deviation.)
Parameters: N : float
The number of points in the AR process generated. Default: 512
sigma : float
The variance of the noise in the AR process. Default: 1
coefs : list or array of floats
The AR model coefficients. Default: [2.7607, -3.8106, 2.6535, -0.9238], which is a sequence shown to be well-estimated by an order 8 AR system.
drop_transients : float
How many samples to drop from the beginning of the sequence (the transient phases of the process), so that the process can be considered stationary.
v : float array
Optionally, input a specific sequence of noise samples (this over-rides the sigma parameter). Default: None
Returns: u : ndarray
the AR sequence
v : ndarray
the unit-variance innovations sequence
coefs : ndarray
feedback coefficients from k=1,len(coefs)
The form of the feedback coefficients is a little different from the normal linear constant-coefficient difference equation. The transfer function implemented by this method is:
H(z) = sigma**0.5 / (1 - sum_k coefs(k) z**(-k)), 1 <= k <= P
Examples
>>> import nitime.algorithms as alg
>>> ar_seq, nz, alpha = ar_generator()
>>> fgrid, hz = alg.freq_response(1.0, a=np.r_[1, -alpha])
>>> sdf_ar = (hz * hz.conj()).real
nitime.utils.autocorr(x, **kwargs)
Returns the autocorrelation of signal x at all lags.
Parameters: x : ndarray
axis : time axis
all_lags : {True/False}
whether to return all nonzero lags, or to clip the length of r_xy to be the length of x and y. If False, then the zero lag correlation is at index 0. Otherwise, it is found at (len(x) + len(y) - 1)/2
Notes
Adheres to the definition
\[R_{xx}[k] = E\{X[n+k]X^{*}[n]\}\]
where X is a discrete, stationary (ergodic) random process
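As a brief illustrative sketch (built only from functions documented on this page; whether the result is normalized to 1 at zero lag is not asserted here):
>>> import nitime.utils as utils
>>> ar_seq, nz, alpha = utils.ar_generator(N=512, drop_transients=128)
>>> rxx = utils.autocorr(ar_seq)
>>> r0 = rxx[0]   # zero-lag value sits at index 0 when all_lags is left at its default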
nitime.utils.autocov(x, **kwargs)
Returns the autocovariance of signal x at all lags.
Parameters: x : ndarray
axis : time axis
all_lags : {True/False}
whether to return all nonzero lags, or to clip the length of r_xy to be the length of x and y. If False, then the zero lag correlation is at index 0. Otherwise, it is found at (len(x) + len(y) - 1)/2
Returns: cxx : ndarray
The autocovariance function
Notes
Adheres to the definition
\[C_{xx}[k] = E\{(X[n+k]-E\{X\})(X[n]-E\{X\})^{*}\}\]
where X is a discrete, stationary (ergodic) random process
nitime.utils.autocov_vector(x, nlags=None)
This method computes the following function
\[R_{xx}(k) = E\{x(t)x^{*}(t-k)\} = E\{x(t+k)x^{*}(t)\}, \quad k \in \{0, 1, \ldots, nlags-1\}\]
(* := conjugate transpose)
Note: this is related to the other commonly used definition for vector autocovariance
\[R_{xx}^{(2)}(k) = E\{x(t-k)x^{*}(t)\} = R_{xx}(-k) = R_{xx}^{*}(k)\]
Parameters: x : ndarray (nc, N)
nlags : int, optional
compute lags for k in {0, …, nlags-1}
Returns: rxx : ndarray (nc, nc, nlags)
nitime.utils.bayesian_information_criterion(ecov, p, m, Ntotal)
The Bayesian Information Criterion, also known as the Schwarz criterion, is a measure of the goodness of fit of a statistical model, based on the number of model parameters and the likelihood of the model.
Parameters: ecov : float array
The error covariance of the system
p : int
the system size (how many variables).
m : int
the model order.
Ntotal : int
the number of total time-points (across channels)
Returns: BIC : float
The value of the BIC
Notes
This is an implementation of equation (51) in Ding et al. (2006):
\[BIC(m) = 2 \log(|\Sigma|) + \frac{2 p^2 m \log(N_{total})}{N_{total}},\]
where \(\Sigma\) is the noise covariance matrix. In auto-regressive model estimation, this matrix will contain in \(\Sigma_{i,j}\) the residual variance in estimating time-series \(i\) from \(j\); \(p\) is the dimensionality of the data, \(m\) is the number of parameters in the model and \(N_{total}\) is the number of time-points.
M Ding and Y Chen and S Bressler (2006) Granger Causality: Basic Theory and Application to Neuroscience. http://arxiv.org/abs/q-bio/0608035v1
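For orientation, the displayed equation can be transcribed directly. The helper below is a sketch written from equation (51) as shown (reading \(|\Sigma|\) as the determinant of the error covariance); it is not a copy of the library's internal implementation, and the example covariance is made up.
>>> import numpy as np
>>> def bic_from_eq51(ecov, p, m, Ntotal):
...     return 2 * np.log(np.linalg.det(ecov)) + (2 * p**2 * m * np.log(Ntotal)) / Ntotal
>>> ecov = np.array([[1.0, 0.1], [0.1, 1.2]])
>>> bic_value = bic_from_eq51(ecov, p=2, m=3, Ntotal=1024)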
nitime.utils.circle_to_hz(omega, Fsamp)
For a frequency grid spaced on the unit circle of the complex plane, return the corresponding frequency grid in Hz.
nitime.utils.circularize(x, bottom=0, top=6.283185307179586, deg=False)
Maps the input into the continuous interval (bottom, top), where bottom defaults to 0 and top defaults to 2*pi
Parameters: x : ndarray
The input array
bottom : float, optional (defaults to 0)
Set this if you want the bottom of the interval into which you take the modulus to be something other than 0.
top : float, optional (defaults to 2*pi)
Set this if you want the top of the interval into which you take the modulus to be something other than 2*pi.
Returns: The input array, mapped into the interval (bottom, top)
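A minimal sketch (the phase values are arbitrary; only the signature above is assumed):
>>> import numpy as np
>>> import nitime.utils as utils
>>> phases = np.array([-0.5, 1.0, 7.0, 13.0])
>>> wrapped = utils.circularize(phases)   # folded into the default (0, 2*pi) interval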
nitime.utils.crosscorr(x, y, **kwargs)
Returns the crosscorrelation sequence between two ndarrays. This is performed by calling fftconvolve on x, y[::-1].
Parameters: x : ndarray
y : ndarray
axis : time axis
all_lags : {True/False}
whether to return all nonzero lags, or to clip the length of r_xy to be the length of x and y. If False, then the zero lag correlation is at index 0. Otherwise, it is found at (len(x) + len(y) - 1)/2
Returns: rxy : ndarray
The crosscorrelation function
Notes
cross correlation is defined as
\[R_{xy}[k] = E\{X[n+k]Y^{*}[n]\}\]
where X and Y are discrete, stationary (ergodic) random processes
nitime.utils.crosscov(x, y, axis=-1, all_lags=False, debias=True, normalize=True)
Returns the crosscovariance sequence between two ndarrays. This is performed by calling fftconvolve on x, y[::-1].
Parameters: x : ndarray
y : ndarray
axis : time axis
all_lags : {True/False}
whether to return all nonzero lags, or to clip the length of s_xy to be the length of x and y. If False, then the zero lag covariance is at index 0. Otherwise, it is found at (len(x) + len(y) - 1)/2
debias : {True/False}
Always removes an estimate of the mean along the axis, unless told not to (eg X and Y are known zero-mean)
Returns: cxy : ndarray
The crosscovariance function
Notes
cross covariance of processes x and y is defined as
\[C_{xy}[k] = E\{(X(n+k)-E\{X\})(Y(n)-E\{Y\})^{*}\}\]
where X and Y are discrete, stationary (or ergodic) random processes
Also note that this routine is the workhorse for all auto/cross/cov/corr functions.
nitime.utils.crosscov_vector(x, y, nlags=None)
This method computes the following function
\[R_{xy}(k) = E\{x(t)y^{*}(t-k)\} = E\{x(t+k)y^{*}(t)\}, \quad k \in \{0, 1, \ldots, nlags-1\}\]
(* := conjugate transpose)
Note: This is related to the other commonly used definition for vector crosscovariance
\[R_{xy}^{(2)}(k) = E\{x(t-k)y^{*}(t)\} = R_{xy}(-k) = R_{yx}^{*}(k)\]
Parameters: x, y : ndarray (nc, N)
nlags : int, optional
compute lags for k in {0, …, nlags-1}
Returns: rxy : ndarray (nc, nc, nlags)
nitime.utils.dB(x, power=True)
Convert the values in x to decibels. If the values in x are in 'power'-like units, set the power flag accordingly:
dB(x) = 10*log10(x) (if power==True)
dB(x) = 10*log10(|x|^2) = 20*log10(|x|) (if power==False)
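A minimal sketch of the two conventions listed above (the expected values in the comments follow directly from those formulas):
>>> import numpy as np
>>> import nitime.utils as utils
>>> psd = np.array([1.0, 10.0, 100.0])      # power-like values
>>> amp = np.sqrt(psd)                      # amplitude-like values
>>> db_power = utils.dB(psd)                # 10*log10(x)   -> [0, 10, 20]
>>> db_amp = utils.dB(amp, power=False)     # 20*log10(|x|) -> [0, 10, 20]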
nitime.utils.detect_lines(s, tapers, p=None, **taper_kws)
Detect the presence of line spectra in s using the F-test described in "Spectrum estimation and harmonic analysis" (Thomson 1982). Strategies for detecting harmonics in low SNR include increasing the number of FFT points (the NFFT keyword argument) and/or increasing the stability of the spectral estimate by using more tapers (higher NW parameter).
Parameters: s : ndarray
The sequence(s) to test. If s.ndim > 1, then test the sequences in the last axis in parallel.
tapers : ndarray or container
Either the precomputed DPSS tapers, or the pair of parameters (NW, K) needed to compute K tapers of length n_pts.
p : float
The confidence threshold: under the null hypothesis of a locally white spectrum, there is a threshold such that there is a (1-p)% chance of a line amplitude being larger than that threshold. Only lines with amplitude greater than this threshold are detected. The default is 1/NFFT, to control for false positives.
taper_kws
Options for the tapered_spectra method, if no DPSS are provided.
Returns: (freq, beta) : sequence
The frequencies (normalized in [0, .5]) and coefficients of the complex exponentials detected in the spectrum. A pair is returned for each sequence tested.
One can reconstruct the line components as such:
sn = 2*(beta[:, None]*np.exp(1j*2*np.pi*np.arange(N)*freq[:, None])).real
sn = sn.sum(axis=0)
nitime.utils.diag_indices(n, ndim=2)
Return the indices to access the main diagonal of an array.
This returns a tuple of indices that can be used to access the main diagonal of an array with ndim (>=2) dimensions and shape (n,n,…,n). For ndim=2 this is the usual diagonal, for ndim>2 this is the set of indices to access A[i,i,…,i] for i=[0..n-1].
Parameters: n : int
The size, along each dimension, of the arrays for which the returned indices can be used.
ndim : int, optional
The number of dimensions
Examples
Create a set of indices to access the diagonal of a (4,4) array:
>>> di = diag_indices(4)
>>> a = np.array([[1,2,3,4],[5,6,7,8],[9,10,11,12],[13,14,15,16]])
>>> a
array([[ 1,  2,  3,  4],
       [ 5,  6,  7,  8],
       [ 9, 10, 11, 12],
       [13, 14, 15, 16]])
>>> a[di] = 100
>>> a
array([[100,   2,   3,   4],
       [  5, 100,   7,   8],
       [  9,  10, 100,  12],
       [ 13,  14,  15, 100]])
Now, we create indices to manipulate a 3-d array:
>>> d3 = diag_indices(2,3)
And use it to set the diagonal of a zeros array to 1:
>>> a = np.zeros((2,2,2),int)
>>> a[d3] = 1
>>> a
array([[[1, 0],
        [0, 0]],
<BLANKLINE>
       [[0, 0],
        [0, 1]]])
nitime.utils.diag_indices_from(arr)
Return the indices to access the main diagonal of an n-dimensional array.
See diag_indices() for full details.
Parameters: arr : array, at least 2-d
nitime.utils.dpss_windows(N, NW, Kmax, interp_from=None, interp_kind='linear')
Returns the Discrete Prolate Spheroidal Sequences of orders [0, Kmax-1] for a given frequency-spacing multiple NW and sequence length N.
Parameters: N : int
sequence length
NW : float, unitless
standardized half bandwidth corresponding to 2NW = BW/f0 = BW*N*dt but with dt taken as 1
Kmax : int
number of DPSS windows to return is Kmax (orders 0 through Kmax-1)
interp_from : int (optional)
The dpss can be calculated using interpolation from a set of dpss with the same NW and Kmax, but shorter N. This is the length of this shorter set of dpss windows.
interp_kind : str (optional)
This input variable is passed to scipy.interpolate.interp1d and specifies the kind of interpolation as a string ('linear', 'nearest', 'zero', 'slinear', 'quadratic', 'cubic') or as an integer specifying the order of the spline interpolator to use.
Returns: v, e : tuple,
v is an array of DPSS windows shaped (Kmax, N) e are the eigenvalues
Notes
Tridiagonal form of DPSS calculation from:
Slepian, D. Prolate spheroidal wave functions, Fourier analysis, and uncertainty V: The discrete case. Bell System Technical Journal, Volume 57 (1978), 1371-1430.
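A short usage sketch (N, NW and the choice of Kmax = 2*NW - 1 are illustrative; the output shape follows the Returns section above):
>>> import nitime.utils as utils
>>> N, NW = 1024, 4
>>> Kmax = 2 * NW - 1        # a common choice for the number of well-concentrated tapers
>>> tapers, eigvals = utils.dpss_windows(N, NW, Kmax)
>>> tapers.shape
(7, 1024)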
nitime.utils.expected_jk_variance(K)
Compute the expected value of the jackknife variance estimate over K windows. This expected value formula is based on the asymptotic expansion of the trigamma function derived in [Thompson_1994].
Parameters: K : int
Number of tapers used in the multitaper method
Returns: evar : float
Expected value of the jackknife variance estimator
nitime.utils.fftconvolve(in1, in2, mode='full', axis=None)
Convolve two N-dimensional arrays using FFT. See convolve.
This is a fix of scipy.signal.fftconvolve, adding an axis argument.
nitime.utils.fill_diagonal(a, val)
Fill the main diagonal of the given array of any dimensionality.
For an array with ndim > 2, the diagonal is the list of locations with indices a[i,i,...,i], all identical.
This function modifies the input array in-place; it does not return a value.
This functionality can be obtained via diag_indices(), but internally this version uses a much faster implementation that never constructs the indices and uses simple slicing.
Parameters: a : array, at least 2-dimensional.
Array whose diagonal is to be filled, it gets modified in-place.
val : scalar
Value to be written on the diagonal, its type must be compatible with that of the array a.
Examples
>>> a = np.zeros((3,3),int)
>>> fill_diagonal(a,5)
>>> a
array([[5, 0, 0],
       [0, 5, 0],
       [0, 0, 5]])
The same function can operate on a 4-d array:
>>> a = np.zeros((3,3,3,3),int)
>>> fill_diagonal(a,4)
We only show a few blocks for clarity:
>>> a[0,0]
array([[4, 0, 0],
       [0, 0, 0],
       [0, 0, 0]])
>>> a[1,1]
array([[0, 0, 0],
       [0, 4, 0],
       [0, 0, 0]])
>>> a[2,2]
array([[0, 0, 0],
       [0, 0, 0],
       [0, 0, 4]])
nitime.utils.fir_design_matrix(events, len_hrf)
Create a FIR event matrix from a time-series of events.
Parameters: events : 1-d int array
Integers denoting different kinds of events, occurring at the time corresponding to the bin represented by each slot in the array. In time-bins in which no event occurred, a 0 should be entered. If negative event values are entered, they will be used as "negative" events, as in events that should be contrasted with the positive events (typically -1 and 1 can be used for a simple contrast of two conditions).
len_hrf : int
The expected length of the HRF (in the same time-units as the events are represented, presumably TR). This is the size of the block dedicated in the fir_matrix to each type of event.
Returns: fir_matrix : matrix
The design matrix for FIR estimation
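A minimal sketch (the event timing, event codes and HRF length are arbitrary; the exact layout of the returned matrix is not asserted here):
>>> import numpy as np
>>> import nitime.utils as utils
>>> events = np.zeros(100, dtype=int)     # 100 time bins (e.g. TRs)
>>> events[[10, 40, 70]] = 1              # condition 1
>>> events[[25, 55, 85]] = 2              # condition 2
>>> fir = utils.fir_design_matrix(events, len_hrf=15)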
nitime.utils.generate_mar(a, cov, N)
Generates a multivariate autoregressive dataset given the formula:
\[X(t) + \sum_{i=1}^{P} a(i) X(t-i) = E(t)\]
Where E(t) is a vector of samples from possibly covarying noise processes.
Parameters: a : ndarray (n_order, n_c, n_c)
An order n_order set of coefficient matrices, each shaped (n_c, n_c) for n_channel data
cov : ndarray (n_c, n_c)
The innovations process covariance
N : int
how many samples to generate
Returns: mar, nz :
The MAR process and the noise process, each shaped (n_c, N)
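A brief sketch of a bivariate order-2 process (the coefficient values are arbitrary choices intended to be stable; the output shapes follow the Returns section):
>>> import numpy as np
>>> import nitime.utils as utils
>>> a = np.array([[[ 0.4,  0.0],
...                [ 0.2,  0.3]],
...               [[-0.2,  0.0],
...                [ 0.0, -0.1]]])        # a(1) and a(2)
>>> cov = 0.5 * np.eye(2)                 # innovations covariance
>>> mar, nz = utils.generate_mar(a, cov, 1024)
>>> mar.shape
(2, 1024)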
nitime.utils.get_bounds(f, lb=0, ub=None)
Find the indices of the lower and upper bounds within an array f
Parameters: f : array
lb, ub : float
Returns: lb_idx, ub_idx : int
The indices into f which correspond to values bounded between lb and ub in that array
nitime.utils.get_freqs(Fs, n)
Returns the center frequencies of the frequency decomposition of a time series of length n, sampled at Fs Hz
nitime.utils.hanning_window_spectrum(N, Fs)
Calculate the analytical spectrum of a Hanning window
Parameters: N : int
The size of the window
Fs : float
The sampling rate
Returns: float array
The frequency bands, given N and Fs
complex array
The power in the spectrum of the Hanning window in these frequency bands
Notes
This is equation 28b in Harris (1978):
\[W(\theta) = 0.5 D(\theta) + 0.25 \left( D\left(\theta - \frac{2\pi}{N}\right) + D\left(\theta + \frac{2\pi}{N}\right) \right),\]
where
\[D(\theta) = \exp\left(j\frac{\theta}{2}\right) \frac{\sin\left(\frac{N\theta}{2}\right)}{\sin\left(\frac{\theta}{2}\right)}\]
F.J. Harris (1978). On the use of windows for harmonic analysis with the discrete Fourier transform. Proceedings of the IEEE, 66:51-83.
nitime.utils.intersect_coords(coords1, coords2)
For two sets of coordinates, find the coordinates that are common to both, where the dimensionality is coords1.shape[0]
nitime.utils.jackknifed_coh_variance(tx, ty, eigvals, adaptive=True)
Returns the variance of the coherency between x and y, estimated through jack-knifing the tapered samples in {tx, ty}.
Parameters: tx : ndarray, (K, L)
The K complex spectra of tapered timeseries x
ty : ndarray, (K, L)
The K complex spectra of tapered timeseries y
eigvals : ndarray (K,)
The eigenvalues associated with the K DPSS tapers
Returns: jk_var : ndarray
The variance computed in the transformed domain (see normalize_coherence)
nitime.utils.jackknifed_sdf_variance(yk, eigvals, sides='onesided', adaptive=True)
Returns the variance of the log-sdf estimated through jack-knifing a group of independent sdf estimates.
Parameters: yk : ndarray (K, L)
The K DFTs of the tapered sequences
eigvals : ndarray (K,)
The eigenvalues corresponding to the K DPSS tapers
sides : str, optional
Compute the jackknife pseudovalues over as one-sided or two-sided spectra
adaptive : bool, optional
Compute the adaptive weighting for each jackknife pseudovalue
Returns: var : The estimate for log-sdf variance
Notes
The jackknifed mean estimate is distributed about the true mean as a Student's t-distribution with (K-1) degrees of freedom, and standard error equal to sqrt(var). However, Thomson and Chave [1] point out that this variance better describes the sample mean.
[1] Thomson D J, Chave A D (1991) Advances in Spectrum Analysis and Array Processing (Prentice-Hall, Englewood Cliffs, NJ), 1, pp 58-113.
nitime.utils.mask_indices(n, mask_func, k=0)
Return the indices to access (n,n) arrays, given a masking function.
Assume mask_func() is a function that, for a square array a of size (n,n) with a possible offset argument k, when called as mask_func(a,k) returns a new array with zeros in certain locations (functions like triu() or tril() do precisely this). Then this function returns the indices where the non-zero values would be located.
Parameters: n : int
The returned indices will be valid to access arrays of shape (n,n).
mask_func : callable
A function whose api is similar to that of numpy.tri{u,l}. That is, mask_func(x,k) returns a boolean array, shaped like x. k is an optional argument to the function.
k : scalar
An optional argument which is passed through to mask_func(). Functions like tri{u,l} take a second argument that is interpreted as an offset.
Returns: indices : an n-tuple of index arrays.
The indices corresponding to the locations where mask_func(ones((n,n)),k) is True.
Examples
These are the indices that would allow you to access the upper triangular part of any 3x3 array:
>>> iu = mask_indices(3,np.triu)
For example, if a is a 3x3 array:
>>> a = np.arange(9).reshape(3,3)
>>> a
array([[0, 1, 2],
       [3, 4, 5],
       [6, 7, 8]])
Then:
>>> a[iu]
array([0, 1, 2, 4, 5, 8])
An offset can be passed also to the masking function. This gets us the indices starting on the first diagonal right of the main one:
>>> iu1 = mask_indices(3,np.triu,1)
with which we now extract only three elements:
>>> a[iu1]
array([1, 2, 5])
nitime.utils.minmax_norm(arr, mode='direct', folding_edges=None)
Minmax_norm an array to the [0,1] range.
By default, this simply rescales the input array to [0,1]. But it has a special 'folding' mode that allows for the normalization of an array with negative and positive values by mapping the negative values to their flipped sign.
Parameters: arr : 1d array
mode : string, one of [‘direct’,’folding’]
folding_edges : (float,float)
Only needed for folding mode, ignored in ‘direct’ mode.
Examples
>>> np.set_printoptions(precision=4)  # for doctesting
>>> a = np.linspace(0.3,0.8,4)
>>> minmax_norm(a)
array([ 0.    ,  0.3333,  0.6667,  1.    ])
>>> b = np.concatenate([np.linspace(-0.7,-0.3,3),
...                     np.linspace(0.3,0.8,3)])
>>> b
array([-0.7 , -0.5 , -0.3 ,  0.3 ,  0.55,  0.8 ])
>>> minmax_norm(b,'folding',[-0.3,0.3])
array([ 0.8,  0.4,  0. ,  0. ,  0.5,  1. ])
nitime.utils.multi_intersect(input)
A function for finding the intersection of several different arrays
Parameters: input : tuple of arrays
Returns: array
The intersection of the inputs
Notes
Simply runs intersect1d iteratively on the inputs
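A minimal sketch (the arrays are arbitrary; the result follows from running intersect1d over the inputs):
>>> import numpy as np
>>> import nitime.utils as utils
>>> a = np.array([1, 2, 3, 4, 5])
>>> b = np.array([3, 4, 5, 6])
>>> c = np.array([0, 4, 5, 9])
>>> utils.multi_intersect((a, b, c))
array([4, 5])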
nitime.utils.normal_coherence_to_unit(y, dof, out=None)
The inverse transform of the normalization applied by normalize_coherence
nitime.utils.normalize_coherence(x, dof, copy=True)
The generally accepted choice to transform coherence measures into a more normal distribution
Parameters: x : ndarray, real
square-root of magnitude-square coherence measures
dof : int
number of degrees of freedom in the multitaper model
copy : bool
Copy or return inplace modified x.
Returns: y : ndarray, real
The transformed array.
nitime.utils.percent_change(ts, ax=-1)
Returns the % signal change of each point of the time series along a given axis of the array time_series
Parameters: ts : ndarray
an array of time series
ax : int, optional (default to -1)
the axis of time_series along which to compute means and stdevs
Returns: ndarray :
the renormalized time series array (in units of %)
Examples
>>> ts = np.arange(4*5).reshape(4,5)
>>> ax = 0
>>> percent_change(ts,ax)
array([[-100.    ,  -88.2353,  -78.9474,  -71.4286,  -65.2174],
       [ -33.3333,  -29.4118,  -26.3158,  -23.8095,  -21.7391],
       [  33.3333,   29.4118,   26.3158,   23.8095,   21.7391],
       [ 100.    ,   88.2353,   78.9474,   71.4286,   65.2174]])
>>> ax = 1
>>> percent_change(ts,ax)
array([[-100.    ,  -50.    ,    0.    ,   50.    ,  100.    ],
       [ -28.5714,  -14.2857,    0.    ,   14.2857,   28.5714],
       [ -16.6667,   -8.3333,    0.    ,    8.3333,   16.6667],
       [ -11.7647,   -5.8824,    0.    ,    5.8824,   11.7647]])
nitime.utils.remove_bias(x, axis)
Subtracts an estimate of the mean from signal x along the given axis
nitime.utils.rescale_arr(arr, amin, amax)
Rescale an array to a new range.
Return a new array whose range of values is (amin,amax).
Parameters: arr : array-like
amin : float
new minimum value
amax : float
new maximum value
Examples
>>> a = np.arange(5)
>>> rescale_arr(a,3,6)
array([ 3.  ,  3.75,  4.5 ,  5.25,  6.  ])
nitime.utils.square_window_spectrum(N, Fs)
Calculate the analytical spectrum of a square window
Parameters: N : int
the size of the window
Fs : float
The sampling rate
Returns: float array
The frequency bands, given N and Fs
complex array
The power in the spectrum of the square window in these frequency bands
Notes
This is equation 21c in Harris (1978):
\[W(\theta) = \exp\left(-j \frac{N-1}{2} \theta\right) \frac{\sin\left(\frac{N\theta}{2}\right)}{\sin\left(\frac{\theta}{2}\right)}\]
F.J. Harris (1978). On the use of windows for harmonic analysis with the discrete Fourier transform. Proceedings of the IEEE, 66:51-83.
nitime.utils.structured_rand_arr(size, sample_func=np.random.random_sample, ltfac=None, utfac=None, fill_diag=None)
Make a structured random 2-d array of shape (size,size).
If no optional arguments are given, a symmetric array is returned.
Parameters: size : int
Determines the shape of the output array: (size,size).
sample_func : function, optional.
Must be a function which, when called with a 2-tuple of ints, returns a 2-d array of that shape. By default, np.random.random is used, but any other sampling function can be used as long as it matches this API.
utfac : float, optional
Multiplicative factor for the upper triangular part of the matrix.
ltfac : float, optional
Multiplicative factor for the lower triangular part of the matrix.
fill_diag : float, optional
If given, use this value to fill in the diagonal. Otherwise the diagonal will contain random elements.
Examples
>>> np.random.seed(0)  # for doctesting
>>> np.set_printoptions(precision=4)  # for doctesting
>>> structured_rand_arr(4)
array([[ 0.5488,  0.7152,  0.6028,  0.5449],
       [ 0.7152,  0.6459,  0.4376,  0.8918],
       [ 0.6028,  0.4376,  0.7917,  0.5289],
       [ 0.5449,  0.8918,  0.5289,  0.0871]])
>>> structured_rand_arr(4,ltfac=-10,utfac=10,fill_diag=0.5)
array([[ 0.5   ,  8.3262,  7.7816,  8.7001],
       [-8.3262,  0.5   ,  4.6148,  7.8053],
       [-7.7816, -4.6148,  0.5   ,  9.4467],
       [-8.7001, -7.8053, -9.4467,  0.5   ]])
nitime.utils.symm_rand_arr(size, sample_func=np.random.random_sample, fill_diag=None)
Make a symmetric random 2-d array of shape (size,size).
Parameters: size : int
Size of the output array.
sample_func : function, optional.
Must be a function which, when called with a 2-tuple of ints, returns a 2-d array of that shape. By default, np.random.random is used, but any other sampling function can be used as long as it matches this API.
fill_diag : float, optional
If given, use this value to fill in the diagonal.
Examples
>>> np.random.seed(0)  # for doctesting
>>> np.set_printoptions(precision=4)  # for doctesting
>>> symm_rand_arr(4)
array([[ 0.5488,  0.7152,  0.6028,  0.5449],
       [ 0.7152,  0.6459,  0.4376,  0.8918],
       [ 0.6028,  0.4376,  0.7917,  0.5289],
       [ 0.5449,  0.8918,  0.5289,  0.0871]])
>>> symm_rand_arr(4,fill_diag=4)
array([[ 4.    ,  0.8326,  0.7782,  0.87  ],
       [ 0.8326,  4.    ,  0.4615,  0.7805],
       [ 0.7782,  0.4615,  4.    ,  0.9447],
       [ 0.87  ,  0.7805,  0.9447,  4.    ]])
nitime.utils.tapered_spectra(s, tapers, NFFT=None, low_bias=True)
Compute the tapered spectra of the rows of s.
Parameters: s : ndarray, (n_arr, n_pts)
An array whose rows are timeseries.
tapers : ndarray or container
Either the precomputed DPSS tapers, or the pair of parameters (NW, K) needed to compute K tapers of length n_pts.
NFFT : int
Number of FFT bins to compute
low_bias : bool
If the DPSS are computed here, automatically select tapers corresponding to > 90% energy concentration.
Returns: t_spectra : ndarray, shaped (n_arr, K, NFFT)
The FFT of the tapered sequences in s. First dimension is squeezed out if n_arr is 1.
eigvals : ndarray
The eigenvalues are also returned if DPSS are calculated here.
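A short sketch, passing the (NW, K) pair so that the DPSS (and hence the eigenvalues) are computed inside the call; the data and parameter values are arbitrary, and only the first and last output dimensions are asserted because low_bias may discard some tapers:
>>> import numpy as np
>>> import nitime.utils as utils
>>> s = np.random.randn(3, 512)
>>> t_spectra, eigvals = utils.tapered_spectra(s, (4, 7), NFFT=1024)
>>> t_spectra.shape[0], t_spectra.shape[-1]
(3, 1024)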
nitime.utils.threshold_arr(cmat, threshold=0.0, threshold2=None)
Threshold values from the input array.
Parameters: cmat : array
threshold : float, optional.
First threshold.
threshold2 : float, optional.
Second threshold.
Returns: indices, values : a tuple of ndim+1 arrays
The indices of the values selected by the threshold(s), one array per dimension of cmat, followed by the selected values themselves.
Examples
>>> np.set_printoptions(precision=4)  # For doctesting
>>> a = np.linspace(0,0.2,5)
>>> a
array([ 0.  ,  0.05,  0.1 ,  0.15,  0.2 ])
>>> threshold_arr(a,0.1)
(array([3, 4]), array([ 0.15,  0.2 ]))
With two thresholds:
>>> threshold_arr(a,0.1,0.2)
(array([0, 1]), array([ 0.  ,  0.05]))
nitime.utils.thresholded_arr(arr, threshold=0.0, threshold2=None, fill_val=nan)
Threshold values from the input matrix and return a new matrix.
Parameters: arr : array
threshold : float
First threshold.
threshold2 : float, optional.
Second threshold.
Returns: An array shaped like the input, with the values outside the threshold replaced with fill_val.
nitime.utils.tridi_inverse_iteration(d, e, w, x0=None, rtol=1e-08)
Perform an inverse iteration to find the eigenvector corresponding to the given eigenvalue in a symmetric tridiagonal system.
Parameters: d : ndarray
main diagonal of the tridiagonal system
e : ndarray
offdiagonal stored in e[:-1]
w : float
eigenvalue of the eigenvector
x0 : ndarray
initial point to start the iteration
rtol : float
tolerance for the norm of the difference of iterates
Returns: e : ndarray
The converged eigenvector
nitime.utils.tril_indices(n, k=0)
Return the indices for the lower-triangle of an (n,n) array.
Parameters: n : int
Sets the size of the arrays for which the returned indices will be valid.
k : int, optional
Diagonal offset (see tril() for details).
Examples
Compute two different sets of indices to access 4x4 arrays, one for the lower triangular part starting at the main diagonal, and one starting two diagonals further right:
>>> il1 = tril_indices(4)
>>> il2 = tril_indices(4,2)
Here is how they can be used with a sample array:
>>> a = np.array([[1,2,3,4],[5,6,7,8],[9,10,11,12],[13,14,15,16]])
>>> a
array([[ 1,  2,  3,  4],
       [ 5,  6,  7,  8],
       [ 9, 10, 11, 12],
       [13, 14, 15, 16]])
Both for indexing:
>>> a[il1]
array([ 1,  5,  6,  9, 10, 11, 13, 14, 15, 16])
And for assigning values:
>>> a[il1] = -1
>>> a
array([[-1,  2,  3,  4],
       [-1, -1,  7,  8],
       [-1, -1, -1, 12],
       [-1, -1, -1, -1]])
These cover almost the whole array (two diagonals right of the main one):
>>> a[il2] = -10
>>> a
array([[-10, -10, -10,   4],
       [-10, -10, -10, -10],
       [-10, -10, -10, -10],
       [-10, -10, -10, -10]])
nitime.utils.tril_indices_from(arr, k=0)
Return the indices for the lower-triangle of an (n,n) array.
See tril_indices() for full details.
Parameters: arr : ndarray
The indices will be valid for square arrays whose dimensions are the same as arr.
k : int, optional
Diagonal offset (see tril() for details).
nitime.utils.triu_indices(n, k=0)
Return the indices for the upper-triangle of an (n,n) array.
Parameters: n : int
Sets the size of the arrays for which the returned indices will be valid.
k : int, optional
Diagonal offset (see triu() for details).
Examples
Compute two different sets of indices to access 4x4 arrays, one for the upper triangular part starting at the main diagonal, and one starting two diagonals further right:
>>> iu1 = triu_indices(4)
>>> iu2 = triu_indices(4,2)
Here is how they can be used with a sample array:
>>> a = np.array([[1,2,3,4],[5,6,7,8],[9,10,11,12],[13,14,15,16]])
>>> a
array([[ 1,  2,  3,  4],
       [ 5,  6,  7,  8],
       [ 9, 10, 11, 12],
       [13, 14, 15, 16]])
Both for indexing:
>>> a[iu1]
array([ 1,  2,  3,  4,  6,  7,  8, 11, 12, 16])
And for assigning values:
>>> a[iu1] = -1
>>> a
array([[-1, -1, -1, -1],
       [ 5, -1, -1, -1],
       [ 9, 10, -1, -1],
       [13, 14, 15, -1]])
These cover almost the whole array (two diagonals right of the main one):
>>> a[iu2] = -10
>>> a
array([[ -1,  -1, -10, -10],
       [  5,  -1,  -1, -10],
       [  9,  10,  -1,  -1],
       [ 13,  14,  15,  -1]])
nitime.utils.triu_indices_from(arr, k=0)
Return the indices for the upper-triangle of an (n,n) array.
See triu_indices() for full details.
Parameters: arr : ndarray
The indices will be valid for square arrays whose dimensions are the same as arr.
k : int, optional
Diagonal offset (see triu() for details).
nitime.utils.unwrap_phases(a)
Changes consecutive jumps larger than pi to their 2*pi complement.
nitime.utils.zero_pad(time_series, NFFT)
Pad a time-series with zeros on either side, depending on its length
Parameters: time_series : n-d array
Time-series data with time as the last dimension
NFFT : int
The length to pad the data up to.
nitime.utils.zscore(time_series, axis=-1)
Returns the z-score of each point of the time series along a given axis of the array time_series.
Parameters: time_series : ndarray
an array of time series
axis : int, optional
the axis of time_series along which to compute means and stdevs
Returns: zt : ndarray
the renormalized time series array
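A minimal sketch (two synthetic rows; after z-scoring, each row has approximately zero mean along the time axis):
>>> import numpy as np
>>> import nitime.utils as utils
>>> ts = np.vstack([np.arange(10.0), 5 + 3 * np.arange(10.0)])
>>> zt = utils.zscore(ts)
>>> np.allclose(zt.mean(axis=-1), 0)
True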