Quantitative Big Imaging

Dynamic Experiments

ETHZ: 227-0966-00L


Course Outline

Literature / Useful References

Books

Papers / Sites

Previously on QBI …

Quantitative “Big” Imaging

The course has covered imaging and introduced a few quantitative metrics, but the "big" aspect has not really entered yet.

What does big mean?

So what is "big" imaging?

Objectives

  1. What sort of dynamic experiments do we have?
  2. How can we design good dynamic experiments?
  3. How can we track objects between frames?
  4. How can we track shape?
  5. How can we track distribution?
  6. How can we track topology?
  7. How can we track voxels?
  8. How can we assess deformation and strain?
  9. How can we assess more general cases?

Outline

Experiments

Object Tracking

Distribution

Topology

Voxel-based Methods

General Problems

Motivation

We can say what it looks like, but many pieces of quantitative information are difficult to extract:

  - how fast is it going?
  - how many particles are present?
  - are their sizes constant?
  - are some moving faster?
  - are they rearranging?

Scientific Goals

Rheology

Understanding the flow of liquids and mixtures is important for many processes:

  - blood movement in arteries, veins, and capillaries
  - oil movement through porous rock
  - air through dough when cooking bread
  - magma and gas in a volcano

Deformation

Deformation is similarly important, since it plays a significant role in many of these same processes.

Experiments

The first step of any of these analyses is proper experimental design.

There are always trade-offs to be made between getting the best possible high-resolution nanoscale dynamics and capturing the system level behavior.

If we measure too fast

Too slow

Too high resolution

Too low resolution

Simulation

In many cases, experimental data is inherited and little can be done about the design, but when the opportunity still exists, simulations provide a powerful tool for tuning and balancing a large number of parameters.

Simulations also provide the ability to pair post-processing to the experiments and determine the limits of tracking.

What do we start with?

Going back to our original cell image

  1. We have been able to get rid of the noise in the image and find all the cells (lecture 2-4)
  2. We have analyzed the shape of the cells using the shape tensor (lecture 5)
  3. We even separated cells joined together using Watershed (lecture 6)
  4. We have created even more metrics characterizing the distribution (lecture 7)

We have at least a few samples (or different regions), a large number of metrics, and an almost as large number of parameters to tune.

How do we do something meaningful with it?

Basic Simulation

We start with an initial image.
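As a minimal sketch, such a starting image could be generated by placing Gaussian blobs at random positions; the function name `make_starting_image` and all parameters (image shape, particle count, blob radius) are illustrative assumptions, not the course's actual simulation code.

```python
import numpy as np

def make_starting_image(shape=(100, 100), n_particles=10, radius=3.0, seed=0):
    """Place Gaussian blobs at random positions (illustrative parameters)."""
    rng = np.random.default_rng(seed)
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    # keep particle centers away from the image border
    centers = rng.uniform(low=radius, high=np.array(shape) - radius,
                          size=(n_particles, 2))
    img = np.zeros(shape)
    for cy, cx in centers:
        img += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * radius ** 2))
    return img, centers

img, centers = make_starting_image()
```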

Analysis

Describing Motion

\[ \vec{v}(\vec{x})=\langle 0,0.1 \rangle \]

\[ \vec{v}(\vec{x})=0.3\frac{\vec{x}}{||\vec{x}||}\times \langle 0,0,1 \rangle \]
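A sketch of how these two velocity fields might be evaluated on a set of particle positions; the 2D rotation below stands in for the cross product with \(\langle 0,0,1 \rangle\), and the function names are illustrative.

```python
import numpy as np

def constant_flow(pts):
    """v(x) = <0, 0.1>: uniform drift along one axis."""
    return np.tile([0.0, 0.1], (len(pts), 1))

def rotational_flow(pts, center=np.zeros(2)):
    """v(x) = 0.3 (x/||x||) x <0,0,1>: circular motion about a center."""
    rel = pts - center
    norm = np.linalg.norm(rel, axis=1, keepdims=True)
    norm[norm == 0] = 1.0                 # avoid division by zero at the center
    unit = rel / norm
    # (x, y, 0) x (0, 0, 1) = (y, -x, 0), i.e. a 90-degree rotation in 2D
    return 0.3 * np.stack([unit[:, 1], -unit[:, 0]], axis=1)

pts = np.array([[10.0, 10.0], [30.0, 40.0]])
pts_next = pts + rotational_flow(pts)     # advance positions by one time step
```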

Many Frames

\[ \vec{v}(\vec{x})=\langle 0,0.1 \rangle \]

\[ \vec{v}(\vec{x})=0.3\frac{\vec{x}}{||\vec{x}||}\times \langle 0,0,1 \rangle \]

Random Appearance / Disappearance

Under perfect imaging and experimental conditions objects should not appear and disappear, but they often do due to:

  - noise
  - limited fields of view / depth of field
  - discrete segmentation approaches
  - motion artifacts (blurred objects often have lower intensity values than still objects)

It is common for objects to appear and vanish regularly in an experiment.

Jitter / Motion Noise

Even perfect spherical objects do not move in a straight line. The jitter can be seen as a stochastic variable with a random magnitude (\(a\)) and angle (\(b\)), which is then sampled at every point in the field:

\[ \vec{v}(\vec{x})=\vec{v}_L(\vec{x})+||a||\measuredangle b \]
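A sketch of how this jitter term might be sampled and added to the deterministic field \(\vec{v}_L\); the choice of distributions for \(a\) and \(b\) here (exponential magnitude, uniform angle) is an assumption for illustration.

```python
import numpy as np

def add_jitter(v_L, magnitude=0.05, rng=None):
    """Add a random vector of magnitude a at angle b to each velocity."""
    rng = rng if rng is not None else np.random.default_rng()
    n = len(v_L)
    a = rng.exponential(magnitude, size=n)   # random magnitude (assumed dist.)
    b = rng.uniform(0, 2 * np.pi, size=n)    # random angle, uniform on [0, 2pi)
    return v_L + np.stack([a * np.cos(b), a * np.sin(b)], axis=1)
```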

Jitter (Continued)

Over many frames this can change the path significantly

The simulation can be represented more clearly by using a single line to represent the path of each spheroid.

Limits of Tracking

We see that visually tracking samples can be difficult, and there are a number of parameters which affect our ability to track objects clearly:

  - flow rate
  - flow type
  - density
  - appearance and disappearance rate
  - jitter
  - particle uniqueness

We thus try to quantify the limits of these parameters for different tracking methods in order to design experiments better.

Acquisition-based Parameters

Experimental Parameters

Tracking: Nearest Neighbor

While there exist a number of different methods and complicated approaches for tracking, for experimental design it is best to start with the simplest, most easily understood method. The limits of this method can be found and components added as needed until it is possible to realize the experiment. If a dataset can only be analyzed with a multiple-hypothesis-testing neural network model, then it might not be so reliable.

We then return to nearest neighbor, which means we track a point (\(\vec{P}_0\)) from an image (\(I_0\)) at \(t_0\) to a point (\(\vec{P}_1\)) in an image (\(I_1\)) at \(t_1\) by

\[ \vec{P}_1=\textrm{argmin}(||\vec{P}_0-\vec{y}|| \;\forall \vec{y}\in I_1) \]
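A minimal sketch of this matching rule using a k-d tree for the argmin search; `scipy.spatial.cKDTree` is a standard tool for this, but the function name and test points below are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def nearest_neighbor_track(pts0, pts1):
    """For each P0 in frame 0, find argmin ||P0 - y|| over y in frame 1."""
    dist, idx = cKDTree(pts1).query(pts0)
    return idx, dist

pts0 = np.array([[0.0, 0.0], [5.0, 5.0]])
pts1 = np.array([[0.2, 0.1], [5.1, 4.8]])
matches, distances = nearest_neighbor_track(pts0, pts1)
```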

Scoring Tracking

In the following examples we will use simple metrics for scoring fits where the objects are matched and the number of misses is counted.

There are a number of more sensitive scoring metrics which can be used, for example finding the best sub-match for a given particle, since the number of matches and the number of particles do not always correspond. See the papers listed at the beginning for more information.

Basic Simulations

Input flow from simulation

\[ \vec{v}(\vec{x})=\langle 0,0,0.05 \rangle+||0.01||\measuredangle b \]

Nearest Neighbor Tracking

More Complicated Flows

Input flow from simulation

\[ \vec{v}(\vec{x})=\langle 0,0,0.01 \rangle+||0.05||\measuredangle b \]

Nearest Neighbor Tracking

Registration

Before any meaningful tracking tasks can be performed, the first step is to register the measurements so they are all on the same coordinate system.

Often the registration can be done along with the tracking by separating the movement into actual sample movement and other movement (camera, setup, etc.), provided the motion of either the sample or the other components can be well modeled.
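As a sketch under the assumption of a purely translational (rigid) drift, the shift between frames can be estimated with phase correlation, for example via scikit-image, and then undone:

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def register_frames(reference, moving):
    """Estimate and undo the rigid shift between two frames."""
    # shift required to register 'moving' onto 'reference'
    offset, error, _ = phase_cross_correlation(reference, moving)
    aligned = nd_shift(moving, offset)
    return aligned, offset
```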

Quantifying Tracking Rate

We can then quantify the success rate of each algorithm on the data set using the very simple match and mismatch metrics

Counting Misses

| Time | NN    | ONN   | ANN   |
|------|-------|-------|-------|
| 1.9  | 20%   | 25%   | 20%   |
| 3.7  | 20%   | 22.5% | 20%   |
| 5.5  | 17.5% | 25%   | 20%   |
| 7.3  | 22.5% | 30%   | 20%   |
| 9.1  | 15%   | 17.5% | 20%   |
| 10.9 | 10%   | 15%   | 10%   |
| 12.7 | 12.5% | 15%   | 12.5% |
| 14.5 | 15%   | 15%   | 15%   |
| 16.3 | 17.5% | 20%   | 17.5% |
| 18.1 | 15%   | 22.5% | 17.5% |

Jitter Sensitivity

Density and Jitter

Designing Experiments

How does this help us to design experiments?

How much is enough?

Extending Nearest Neighbor

Bijective Requirement

Maximum Displacement

\[ \vec{P}_1=\begin{cases} \textrm{argmin}(||\vec{P}_0-\vec{y} || \;\forall \vec{y}\in I_1), & ||\vec{P}_0-\vec{y} ||<\textrm{MAXD} \\ \emptyset, & \textrm{otherwise} \end{cases}\]
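A sketch of this cutoff rule; the `distance_upper_bound` argument of `scipy.spatial.cKDTree.query` implements the MAXD rejection directly, with unmatched points flagged here as -1 (an illustrative convention).

```python
import numpy as np
from scipy.spatial import cKDTree

def nn_track_maxd(pts0, pts1, max_disp):
    """Nearest neighbor matching, rejecting matches farther than max_disp."""
    dist, idx = cKDTree(pts1).query(pts0, distance_upper_bound=max_disp)
    # rejected queries come back with infinite distance and an invalid index
    idx = np.where(np.isinf(dist), -1, idx)
    return idx, dist
```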

Extending Nearest Neighbor (Continued)

Prior / Expected Movement

\[ \vec{P}_1=\textrm{argmin}(||\vec{P}_0+\vec{v}_{offset}-\vec{y} || \forall \vec{y}\in I_1) \]

Adaptive Movement

The offset \(\vec{v}_{offset}\) can then be calculated in an iterative fashion, where it is taken as the average of all the \(\vec{P}_1-\vec{P}_0\) vectors from the previous matching step: \[ \vec{P}_1=\textrm{argmin}(||\vec{P}_0+\vec{v}_{offset}-\vec{y} || \;\forall \vec{y}\in I_1) \]
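A minimal sketch of this iterative scheme, assuming a single global offset shared by all particles; the iteration count and initialization are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def adaptive_nn_track(pts0, pts1, n_iter=5):
    """Alternate between matching with an offset and re-estimating the offset."""
    offset = np.zeros(pts0.shape[1])
    tree = cKDTree(pts1)
    for _ in range(n_iter):
        _, idx = tree.query(pts0 + offset)         # match with current offset
        offset = (pts1[idx] - pts0).mean(axis=0)   # average of P1 - P0 vectors
    return idx, offset
```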

Beyond Nearest Neighbor

While nearest neighbor provides a useful starting tool it is not sufficient for truly complicated flows and datasets.

Better Approaches

  1. Multiple Hypothesis Testing: Nearest neighbor just compares the points between two frames, but there is much more information available in most time-resolved datasets. This approach allows multiple possible paths to be explored at the same time, with the best path chosen only after all frames have been examined.

Shortcomings

  1. Merging and Splitting Particles: The simplicity of the nearest neighbor model does not really allow for particles to merge and split (relaxing the bijective requirement permits such behavior, but the method is still not suited for such tracking). For such systems a more specific, physically-based model is required to encapsulate this behavior.

Voxel-based Approaches

For voxel-based approaches the most common analyses are digital image correlation (DIC), or for 3D images digital volume correlation (DVC), where the correlation is calculated between two images or volumes.

Standard Image Correlation

Given images \(I_0(\vec{x})\) and \(I_1(\vec{x})\) at times \(t_0\) and \(t_1\) respectively, the correlation between these two images can be calculated as

\[ C_{I_0,I_1}(\vec{r})=\langle I_0(\vec{x}) I_1(\vec{x}+\vec{r}) \rangle \]
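A sketch of evaluating this correlation over all integer offsets \(\vec{r}\) with an FFT; subtracting the mean first (an assumption, not stated in the formula above) keeps the peak from simply landing on the brightest region.

```python
import numpy as np
from scipy.signal import correlate

def image_correlation(i0, i1):
    """C(r) = <I0(x) I1(x + r)> for all integer offsets r, via FFT."""
    c = correlate(i1 - i1.mean(), i0 - i0.mean(), mode='full', method='fft')
    peak = np.unravel_index(np.argmax(c), c.shape)
    r = np.array(peak) - (np.array(i0.shape) - 1)   # peak index -> offset r
    return r, c
```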

Random Image Positions

With highly structured / periodic samples, identifying the best correlation is difficult since there are multiple maxima.

Extending Correlation

The correlation function can be extended by adding rotation and scaling terms to the offset, making the tool more flexible but also more computationally expensive for large search spaces.

\[ C_{I_0,I_1}(\vec{r},s,\theta)= \left\langle I_0(\vec{x})\, I_1\left( \begin{bmatrix} s\cos\theta & -s\sin\theta\\ s\sin\theta & s\cos\theta \end{bmatrix} \vec{x}+\vec{r}\right) \right\rangle \]

More Complicated Changes

Subdividing the data

We can approach the problem by subdividing the data into smaller blocks and then applying the digital volume correlation independently to each block, as sketched below. This provides information on changes in different regions, but each block contains fewer statistics than a larger box.
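A sketch of this block-wise strategy, assuming a rigid shift per block and using phase correlation for each block's displacement; the block size is illustrative.

```python
import numpy as np
from skimage.registration import phase_cross_correlation

def blockwise_dic(i0, i1, block=32):
    """Estimate one displacement vector per block of the image pair."""
    ny, nx = i0.shape[0] // block, i0.shape[1] // block
    field = np.zeros((ny, nx, 2))
    for by in range(ny):
        for bx in range(nx):
            sl = (slice(by * block, (by + 1) * block),
                  slice(bx * block, (bx + 1) * block))
            shift, _, _ = phase_cross_correlation(i0[sl], i1[sl])
            field[by, bx] = shift          # displacement for this block
    return field
```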

Choosing a window size

Overlap

Too large of a window

Overlap

Too small of a window

Overlap

Shearing

Compression

Introducing Physics

DIC or DVC by themselves include no sanity check for realistic offsets in the correlation itself. The method can, however, be integrated with physical models to find more optimal solutions.

\[ C_{\textrm{cost}} = \underbrace{C_{I_0,I_1}(\vec{r})}_{\textrm{Correlation Term}} + \underbrace{\lambda ||\vec{r}||}_{\textrm{deformation term}} \]
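One plausible reading of this cost, sketched below: select the offset that maximizes the correlation minus a penalty proportional to \(||\vec{r}||\), so large displacements must be justified by a correspondingly better match. The weighting of \(\lambda\) is an illustrative assumption.

```python
import numpy as np
from scipy.signal import correlate

def regularized_offset(i0, i1, lam=0.01):
    """Pick the offset maximizing C(r) - lam * ||r|| (one possible reading)."""
    c = correlate(i1 - i1.mean(), i0 - i0.mean(), mode='full', method='fft')
    center = np.array(i0.shape) - 1              # index of zero offset
    coords = np.indices(c.shape)
    r_norm = np.sqrt(((coords - center.reshape(-1, 1, 1)) ** 2).sum(axis=0))
    # penalty scaled by c.max() so lam is unitless (illustrative choice)
    cost = c - lam * c.max() * r_norm
    peak = np.unravel_index(np.argmax(cost), cost.shape)
    return np.array(peak) - center
```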

Distribution Metrics

As we covered before, distribution metrics like the distribution tensor can be used for tracking changes inside a sample. Of these the most relevant is the texture tensor from cellular materials and liquid foam. The texture tensor is the same as the distribution tensor except that the edges (or faces) represent physically connected / touching objects rather than touching Voronoi faces (or, conversely, Delaunay triangles).

These metrics can also be used to track the behavior of a system without tracking single points: most deformations of a system also deform the distribution tensor, so the deformation can be extracted by comparing the distribution tensor at different time steps.

Quantifying Deformation: Strain

We can take any of these approaches and quantify the deformation using a tool called the strain tensor. Strain is defined in mechanics for the simple 1D case as the change in length divided by the original length. \[ e = \frac{\Delta L}{L} \] While this defines the 1D case well, it is difficult to apply such a metric to voxel, shape, and tensor data.

Strain Tensor

There are a number of different ways to calculate strain and the strain tensor, but the most applicable for general image-based applications is the infinitesimal strain tensor, because its square element matches well to square pixels and cubic voxels.

A given strain can then be applied and we can quantify the effects by examining the change in the small element.

Types of Strain

We categorize the types of strain into two main categories:

\[ \underbrace{\mathbf{E}}_{\textrm{Total Strain}} = \underbrace{\varepsilon_M \mathbf{I_3}}_{\textrm{Volumetric}} + \underbrace{\mathbf{E}^\prime}_{\textrm{Deviatoric}} \]

Volumetric / Dilational

The isotropic change in size or scale of the object.

Deviatoric

The change in the proportions of the object (similar to anisotropy), independent of the final scale.
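A sketch of this decomposition for a measured strain tensor; \(\varepsilon_M\) is the mean normal strain (one third of the trace in 3D), and the example tensor below is made up for illustration.

```python
import numpy as np

def decompose_strain(E):
    """Split E into volumetric (eps_M * I) and deviatoric (E') parts."""
    eps_m = np.trace(E) / E.shape[0]       # mean normal strain
    volumetric = eps_m * np.eye(E.shape[0])
    deviatoric = E - volumetric            # E' = E - eps_M * I
    return volumetric, deviatoric

E = np.array([[0.020, 0.010, 0.000],       # made-up strain tensor
              [0.010, -0.010, 0.000],
              [0.000, 0.000, 0.005]])
vol, dev = decompose_strain(E)
```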

Two Point Correlation - Volcanic Rock

Data provided by Mattia Pistone and Julie Fife. The air phase changes from small, very anisotropic bubbles to one large connected pore network, so the same tools cannot be used to quantify both systems. Furthermore, there are motion artifacts which are difficult to correct.

We can utilize the two-point correlation function of the material to characterize the shape generically at each time step and then compare across time steps.
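A sketch of one common way to compute such a two-point correlation map for a binary phase image: correlate the phase with itself and normalize by the number of overlapping pixel pairs at each offset. The normalization convention here is an assumption.

```python
import numpy as np
from scipy.signal import fftconvolve

def two_point_correlation(phase):
    """S2(r): probability that two points offset by r are both in the phase."""
    phase = phase.astype(float)
    # correlation = convolution with the reversed image
    auto = fftconvolve(phase, phase[::-1, ::-1], mode='full')
    counts = fftconvolve(np.ones_like(phase), np.ones_like(phase), mode='full')
    return auto / counts
```

The resulting maps from different time steps can then be compared directly, without tracking any individual objects.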