hcat.lib

cell

The “cell” object is the base object for a detected cell. It contains all pertinent, cell-specific information, such as classification, location, volume, and fluorescence intensity.

class hcat.lib.cell.Cell(image=None, mask=None, loc=None, id=None, scores=None, boxes=None, cell_type=None, channel_name=('dapi', 'gfp', 'myo7a', 'actin'))

Dataclass of a single detected cell object.

Parameters:
  • image (Optional[Tensor]) – [B, C, X, Y, Z] image crop of a single cell

  • mask (Optional[Tensor]) – [B, C, X, Y, Z] segmentation mask of the same cell as image with identical size

  • loc (Optional[Tensor]) – [C, X, Y, Z] center location of cell

  • id (Optional[int]) – unique cell identification number

  • scores (Optional[Tensor]) – cell detection likelihood

  • boxes (Optional[Tensor]) – [x0, y0, x1, y1] cell detection boxes

  • cell_type (Optional[str]) – cell classification ID: ‘OHC’ or ‘IHC’

  • channel_name (Optional[Tuple[str]]) – ordered channel dye names of the image

calculate_frequency(curvature, distance)

Calculates the cell’s best frequency from its place along the cochlear curvature and assigns values to the properties percent_loc and frequency.

Values of the Greenwood function taken from: Moore, B. C. J. (1974). Relation between the critical bandwidth and the frequency-difference limen. The Journal of the Acoustical Society of America, 55(2), 359.

https://en.wikipedia.org/wiki/Greenwood_function

Reference Java implementation of the mouse Greenwood function:

public double fMouse(double d) { // d is fraction of total distance
    // f(Hz) = (10^((1 - d) * 0.92) - 0.680) * 9.8
    return (Math.pow(10, (1 - d) * 0.92) - 0.680) * 9.8;
}

// Alternative form with d converted to percent:
// d = d * 100; f(kHz) = (10^(((100 - d) / 100) * 2) - 0.4) * 200 / 1000
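A minimal Python transcription of the two formulas above; the function names are illustrative and not part of hcat's API:

>>> def f_mouse_hz(d):
...     # f(Hz) = (10^((1 - d) * 0.92) - 0.680) * 9.8, with d the fraction of total distance
...     return (10 ** ((1 - d) * 0.92) - 0.680) * 9.8
>>> def f_mouse_khz(d):
...     d = d * 100  # convert fraction to percent
...     # f(kHz) = (10^(((100 - d) / 100) * 2) - 0.4) * 200 / 1000
...     return (10 ** (((100 - d) / 100) * 2) - 0.4) * 200 / 1000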

Example:

>>> from hcat.lib.cell import Cell
>>> from hcat.lib.functional import PredictCurvature
>>> import torch
>>>
>>> cells = torch.load('array_of_cells.trch')
>>> curvature, distance, apex = PredictCurvature()(masks)  # masks: whole-cochlea segmentation mask
>>> for c in cells:
...     c.calculate_frequency(curvature, distance)
>>> print(f'Best Frequency: {cells[0].frequency} kHz')  # Best Frequency: 1.512 kHz
Parameters:
  • curvature (Tensor) – 2D curvature array from hcat.lib.functional.PredictCurvature

  • distance (Tensor) – distance tensor from hcat.lib.functional.PredictCurvature

Returns:

None

Return type:

None

cochlea

functional

class hcat.lib.functional.PredictCurvature(voxel_dim=(0.28888, 0.28888, 0.8664000000000001), equal_spaced_distance=0.01, erode=3, scale_factor=10, method=None)

Initialize the cochlear path prediction algorithm.

This module attempts to fit a set of equally spaced points to an image containing a whole cochlea in one contiguous piece. It attempts the calculation via the Myo7a signal, but may also use box detections.

Parameters:
  • voxel_dim (Optional[Tuple[float, float, float]]) – Tuple of pixel spacings in um

  • equal_spaced_distance (Optional[float]) – How far apart individual points of the resulting line need to be. Safely set at 0.1.

  • erode (Optional[int]) – How many binary erosions to perform on the MyoVIIA signal. Larger values can reduce false positives.

  • scale_factor (Optional[int]) – Downscale factor for curve estimation. Smaller values incur a performance hit.

  • method (Optional[str]) – Cochlea estimation approach: [‘mask’, ‘maxproject’, None]

fit(base, method, diagnostic=False)

Predicts cochlear curvature from a predicted segmentation mask.

Uses beta spline fits as well as a Gaussian process regression to estimate a contiguous curve in spherical space.

Parameters:
  • base (Tensor) – [C, X, Y, Z] bool tensor

  • method (str) – cochlea estimation approach: [‘mask’, ‘maxproject’]

Returns:

Tuple[Tensor, Tensor, Tensor]:
  • equal_spaced_points: [2, N] Tensor of pixel locations

  • percentage: [N] Tensor of cochlear percentage

  • apex: Tensor location of the apex (guess)

Return type:

Tuple[Tensor, Tensor, Tensor]
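A hedged usage sketch of fit; base stands in for a user-supplied boolean segmentation mask of the whole cochlea:

>>> predictor = PredictCurvature()
>>> equal_spaced_points, percentage, apex = predictor.fit(base, method='mask')
>>> equal_spaced_points.shape  # [2, N] pixel locations along the cochlear curve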

fitEPL(path, pix2um=3.4616)

Fits a spline through a user-defined list of points.

Parameters:
  • path (str) – Path to a user-defined set of points from BASE to APEX, as selected in FIJI.

  • pix2um (float) – pixel-to-micrometer conversion factor

Returns:

Tuple[Tensor, Tensor, Tensor]:
  • equal_spaced_points: [2, N] Tensor of pixel locations

  • percentage: [N] Tensor of cochlear percentage

  • apex: Tensor location of the apex (guess)

class hcat.lib.functional.merge_regions(destination, data, threshold=0.25)

DEPRECATED! This is legacy code and will be removed.

Assumes [C, X, Y, Z] shape for all tensors.

Intelligently merges data into destination.

Parameters:
  • destination (Tensor) –

  • data (Tensor) –

  • threshold (float) –

Return type:

Tuple[Tensor, Tensor]

utils

hcat.lib.utils.calculate_indexes(pad_size, eval_image_size, image_shape, padded_image_shape)

This calculates indexes for the complete evaluation of an arbitrarily large image by UNet. Each index is offset by eval_image_size, but has a width of eval_image_size + pad_size * 2. UNet needs padding on each side of the evaluation to ensure only full convolutions are used in generating the final mask. If the algorithm cannot evenly create indexes for padded_image_shape, an additional index of equal size is added at the end.

Parameters:
  • pad_size (int) – int corresponding to the amount of padding on each side of the padded image

  • eval_image_size (int) – int corresponding to the shape of the image to be used for the final mask

  • image_shape (int) – int Shape of image before padding is applied

  • padded_image_shape (int) – int Shape of image after padding is applied

Returns:

List of lists corresponding to the indexes

Return type:

List[List[int]]
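A usage sketch with illustrative values; padded_image and the [start, stop] reading of each index pair are assumptions based on the docstring above:

>>> from hcat.lib.utils import calculate_indexes
>>> indexes = calculate_indexes(pad_size=30, eval_image_size=256,
...                             image_shape=1000, padded_image_shape=1060)
>>> for x0, x1 in indexes:  # each crop spans eval_image_size + 2 * pad_size px
...     crop = padded_image[..., x0:x1]  # evaluate UNet on each overlapping crop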

hcat.lib.utils.cochlea_to_xml(cochlea, filename)

Create an XML file of each cell detection from a cochlea object, for use in the labelImg software.

Parameters:
  • cochlea – Cochlea object from hcat.lib.cochlea.Cochlea

  • filename (str) – full file path by which to save the xml file

Returns:

None

Return type:

None

hcat.lib.utils.correct_pixel_size_image(image, current_pixel_size=None, cell_diameter=None, antialias=True, verbose=False)

Upscales or downscales a torch.Tensor image to the optimal size for HCAT detection. Scales to 288.88 nm/px based on cell_diameter or current_pixel_size. If neither the current pixel size nor the cell diameter is passed, this function returns the original image.

Shapes:
  • image: \((C_{in}, X_{in}, Y_{in})\)

Parameters:
  • image (Tensor) – Image to be resized

  • cell_diameter (Optional[int]) – Diameter of cells in unscaled image

  • current_pixel_size (Optional[float]) – Pixel size of unscaled image

  • antialias (Optional[bool]) – If true, performs antialiasing upon scaling

  • verbose (Optional[bool]) – Prints operation to standard out

Returns:

Scaled image

Return type:

Tensor
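A hedged usage sketch; the pixel size value and its units (assumed µm/px, matching the 0.28888 voxel_dim used elsewhere in this module) are illustrative:

>>> from hcat.lib.utils import correct_pixel_size_image
>>> scaled = correct_pixel_size_image(image, current_pixel_size=0.5, verbose=True)
>>> unchanged = correct_pixel_size_image(image)  # no size info: original image returned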

hcat.lib.utils.get_device(verbose=False)

Returns the optimal hardware accelerator for HCAT detection analysis. Currently supported devices are: cuda (Nvidia GPU), mps (MacBook M1 GPU), or cpu. When multiple GPUs are available, always defaults to device 0.

Parameters:

verbose (Optional[bool]) – prints operation to standard out

Returns:

string representation of device

Return type:

str
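A short usage sketch; the exact device strings are whatever torch reports (e.g. 'cuda', 'mps', or 'cpu'):

>>> from hcat.lib.utils import get_device
>>> device = get_device(verbose=True)
>>> model = model.to(device)  # move a torch model or tensor to the selected accelerator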

hcat.lib.utils.get_dtype_offset(dtype='uint16', image_max=None)

Returns the scaling factor \(f\) such that \(\frac{image}{f} \in [0, 1]\).

Supports: uint16, uint8, uint12, float64

Parameters:
  • dtype (str) – String representation of the data type.

  • image_max (Optional[Number]) – Fallback value returned if the dtype is not supported.

Returns:

Integer scale factor

Return type:

int
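A hedged sketch of both paths through get_dtype_offset; image stands in for a loaded array, and exact return values are not asserted here:

>>> from hcat.lib.utils import get_dtype_offset
>>> scale = get_dtype_offset('uint16')  # scale factor f such that image / f lies in [0, 1]
>>> scale = get_dtype_offset('int32', image_max=image.max())  # unsupported dtype: falls back to image_max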

hcat.lib.utils.graceful_exit(message)

Decorator which returns a message upon failure

hcat.lib.utils.image_to_float(image, scale, verbose=False)

Normalizes an input image of any dtype to torch.float32 whose values lie between 0 and 1.

Parameters:
  • image (Union[ndarray, Tensor]) – Input image array

  • scale (int) – Scale value by which to normalize image

  • verbose (Optional[bool]) – Prints operation to standard out

Returns:

Normalized image

Return type:

Tensor

hcat.lib.utils.load(file, header_name='TileScan 1 Merged', verbose=False, dtype='uint16', ndim=3)

Loads an image file (Leica *.lif or *.tif) and returns an np.array

Parameters:
  • file (str) – str path to the file

  • header_name (Optional[str]) – header name of the lif file. Does nothing if the image file is a tif

  • verbose (bool) – prints Operation to standard out

  • dtype (str) – data type of leica lif file

  • ndim (int) – number of image dimensions (flagged in the source as a possible bug)

Returns:

np.array image matrix from the file; aborts and returns None if the image is too large.

Return type:

Union[None, array]
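The loader composes with the scaling helpers above; a hedged sketch (the filename is illustrative):

>>> from hcat.lib.utils import load, get_dtype_offset, image_to_float
>>> image = load('cochlea.lif', header_name='TileScan 1 Merged', verbose=True)
>>> if image is not None:  # load returns None if the image is too large
...     scale = get_dtype_offset('uint16')  # lif data is loaded as uint16 by default
...     image = image_to_float(image, scale)  # torch.float32 with values in [0, 1]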

hcat.lib.utils.make_rgb(image)

Converts an N-channel image to RGB by adding zero-filled channels, if necessary, or by indexing only the first three channels.

Example:
>>> original_image = torch.rand(1, 100, 100)  # 1 channel image
>>> rgb_image = make_rgb(original_image)
>>> rgb_image.shape
torch.Size([3, 100, 100])
>>> original_image = torch.rand(5, 100, 100)  # 5 channel image
>>> rgb_image = make_rgb(original_image)
>>> torch.allclose(original_image[0:3, ...], rgb_image)
True
Shapes:
  • image: \((C, X_in, Y_in)\)

  • returns: \((3, X_in, Y_in)\)

Parameters:

image (Tensor) – Input image

Returns:

3 Channel Image

Return type:

Tensor

hcat.lib.utils.normalize_image(image, *args, verbose=False)

Normalizes each channel in an image such that the majority of the image lies between 0 and 1. Calculates the maximum value of the image following a Gaussian blur with a 7x7 kernel, preventing random salt-and-pepper noise from drastically affecting the maximum.

Shapes:
  • image: \((C_in, ...)\)

  • returns: \((C_in, ...)\)

Parameters:
  • image (Tensor) – Input image Tensor

  • verbose (Optional[bool]) – Prints operation to standard out

Returns:

Normalized image

Return type:

Tensor
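A one-line usage sketch of the blur-robust normalization described above:

>>> from hcat.lib.utils import normalize_image
>>> image = normalize_image(image)  # per-channel; 7x7 blur guards the max against salt-and-pepper noise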

hcat.lib.utils.pad_image_with_reflections(image, pad_size=(30, 30, 6))

Pads an image according to the UNet spec; expects shape [B, C, X, Y, Z]. Adds pad_size to each side of each dim. For example, if pad size is 10, then 10 px will be added on the top and on the bottom.

Parameters:
  • image (Tensor) –

  • pad_size (Tuple[int]) –

Returns:

Reflection-padded image
Return type:

Tensor
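A shape-check sketch; the enlarged spatial dims follow from padding both sides, as described above:

>>> import torch
>>> from hcat.lib.utils import pad_image_with_reflections
>>> x = torch.rand(1, 4, 100, 100, 12)  # [B, C, X, Y, Z]
>>> padded = pad_image_with_reflections(x, pad_size=(30, 30, 6))
>>> padded.shape  # expected torch.Size([1, 4, 160, 160, 24]): +2 * pad per spatial dim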

hcat.lib.utils.prep_dict(data_dict, device)

Basically a collate function for a DataLoader.

Parameters:

device (str) –

hcat.lib.utils.rescale_box_sizes(boxes, current_pixel_size=None, cell_diameter=None)

Rescales bounding boxes predicted at the scaled size back to the original size, such that they may be correctly displayed over the original image. If neither the current pixel size nor the cell diameter is passed, this function returns the boxes unscaled. Meant to be used with hcat.lib.utils.correct_pixel_size_image.

Parameters:
  • boxes (Tensor) – bounding boxes predicted by Faster RCNN

  • current_pixel_size (Optional[float]) – Pixel size of unscaled image

  • cell_diameter (Optional[int]) – Diameter of cells in unscaled image

Returns:

Rescaled bounding boxes
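A hedged round-trip sketch pairing this with correct_pixel_size_image; detector stands in for a hypothetical Faster RCNN prediction step, and the pixel size value is illustrative:

>>> from hcat.lib.utils import correct_pixel_size_image, rescale_box_sizes
>>> scaled = correct_pixel_size_image(image, current_pixel_size=0.5)
>>> boxes = detector(scaled)  # hypothetical detection producing [x0, y0, x1, y1] boxes
>>> boxes = rescale_box_sizes(boxes, current_pixel_size=0.5)  # boxes now align with the original image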

hcat.lib.utils.save_image_as_png(image, filename, verbose=False)

Saves a float image Tensor with shape \((3, X_in, Y_in)\) to an 8-bit PNG image. Pixel values not between 0 and 1 will be clipped.

Parameters:
  • image (Tensor) – input image tensor

  • filename (str) – filename by which to save the image

  • verbose (Optional[bool]) – Prints operation to standard out

Returns:

None

Return type:

None

hcat.lib.utils.warn(message, color)

Utility function to send a warning of a certain color to standard out

Parameters:
  • message (str) –

  • color (str) –

Return type:

None