Kevin Mader
30 April 2015
ETHZ: 227-0966-00L
Comparison of Tracking Methods in Biology
Multiple Hypothesis Testing
The course has so far covered imaging and a few quantitative metrics, but “big” has not really entered the picture yet.
What does big mean?
So what is “big” imaging?
We can describe what it looks like, but many pieces of quantitative information are difficult to extract.
Understanding the flow of liquids and mixtures is important for many processes
Deformation is similarly important since it plays a significant role in the following scenarios
The first step of any of these analyses is proper experimental design. There are always trade-offs to be made between capturing the best possible high-resolution nanoscale dynamics and capturing the system-level behavior.
In many cases, experimental data is inherited and little can be done about the design, but when there is still the opportunity, simulations provide a powerful tool for tuning and balancing a large number of parameters.
Simulations also provide the ability to pair post-processing to the experiments and determine the limits of tracking.
Going back to our original cell image
We have at least a few samples (or different regions), a large number of metrics, and an almost as large number of parameters to tune.
We start from an initial image.
\vec{v}(\vec{x})=\langle 0,0.1 \rangle
\vec{v}(\vec{x})=0.3\frac{\vec{x}}{||\vec{x}||}\times \langle 0,0,1 \rangle
\vec{v}(\vec{x})=\langle 0,0.1 \rangle
\vec{v}(\vec{x})=0.3\frac{\vec{x}}{||\vec{x}||}\times \langle 0,0,1 \rangle
Under perfect imaging and experimental conditions objects should not appear and disappear, but due to limitations of the acquisition and segmentation it is common for objects to appear and vanish regularly in an experiment.
Even perfectly spherical objects do not move in a straight line. The jitter can be modeled as a stochastic variable with a random magnitude ( a ) and angle ( b ), which is then sampled at every point in the field:
\vec{v}(\vec{x})=\vec{v}_L(\vec{x})+||a||\measuredangle b
Over many frames this can change the path significantly
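A minimal sketch in Python of how such a simulation might look (the field-of-view size, object count, flow vector, and jitter magnitude below are illustrative assumptions, not values from the lecture): each spheroid is advected by a constant flow and perturbed at every frame by a jitter of fixed magnitude and uniformly random angle.

```python
import numpy as np

rng = np.random.default_rng(0)

n_objects, n_frames = 20, 50
flow = np.array([0.0, 0.1])      # deterministic flow v_L(x) = <0, 0.1>
jitter_mag = 0.05                # ||a||, magnitude of the random jitter

# random starting positions in an assumed 10 x 10 field of view
pos = rng.uniform(0, 10, size=(n_objects, 2))
paths = [pos.copy()]

for _ in range(n_frames):
    angle = rng.uniform(0, 2 * np.pi, size=n_objects)        # random angle b
    jitter = jitter_mag * np.stack([np.cos(angle), np.sin(angle)], axis=1)
    pos = pos + flow + jitter    # v(x) = v_L(x) + ||a|| at angle b
    paths.append(pos.copy())

paths = np.stack(paths)          # (n_frames + 1, n_objects, 2) array of tracks
```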
The simulation can be represented more clearly by using a single line to represent the path of each spheroid.
We see that visually tracking samples can be difficult and that a number of parameters affect our ability to clearly follow the tracks.
We thus try to quantify the limits of these parameters for different tracking methods in order to design experiments better.
While there exist a number of different methods and complicated approaches for tracking, for experimental design it is best to start with the simplest, most easily understood method. The limits of this method can be found and components added as needed until it is possible to realize the experiment.
We then return to nearest neighbor, which means we track a point ( \vec{P}_0 ) from an image ( I_0 ) at ( t_0 ) to a point ( \vec{P}_1 ) in image ( I_1 ) at ( t_1 ) by
\vec{P}_1=\textrm{argmin}(||\vec{P}_0-\vec{y}|| \forall \vec{y}\in I_1)
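A minimal sketch of this rule in Python (the function name and the toy frames are illustrative): every point in ( I_0 ) is matched to the closest point in ( I_1 ) by a brute-force distance computation.

```python
import numpy as np

def nearest_neighbor_match(points_0, points_1):
    """For each point in the first frame, return the index of the
    closest point in the second frame (plain nearest-neighbor tracking)."""
    p0 = np.asarray(points_0, float)   # shape (n0, d)
    p1 = np.asarray(points_1, float)   # shape (n1, d)
    dist = np.linalg.norm(p0[:, None, :] - p1[None, :, :], axis=-1)
    return dist.argmin(axis=1)

# toy example: two frames of 2D positions
frame_0 = [[0.0, 0.0], [5.0, 5.0]]
frame_1 = [[0.1, 0.2], [5.2, 4.9]]
print(nearest_neighbor_match(frame_0, frame_1))   # -> [0 1]
```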
In the following examples we will use simple metrics for scoring fits where the objects are matched and the number of misses is counted.
There are a number of more sensitive scoring metrics which can be used, for example by finding the best sub-match for a given particle, since the number of matches and particles does not always correspond. See the papers referenced at the beginning for more information.
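A sketch of the simple scoring used here (assuming the ground-truth assignment is known, as it is for simulated data): the match rate is the fraction of particles whose predicted partner agrees with the truth, and everything else is counted as a miss.

```python
import numpy as np

def match_score(predicted, truth):
    """Fraction of correctly matched particles and the number of misses,
    given a ground-truth assignment (known here from the simulation)."""
    predicted, truth = np.asarray(predicted), np.asarray(truth)
    matches = int(np.sum(predicted == truth))
    return matches / len(truth), len(truth) - matches

# toy example: 4 particles, two of them mismatched
score, n_miss = match_score(predicted=[0, 2, 1, 3], truth=[0, 1, 2, 3])
print(score, n_miss)   # -> 0.5 2
```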
Input flow from simulation
\vec{v}(\vec{x})=\langle 0,0,0.05 \rangle+||0.01||\measuredangle b
Nearest Neighbor Tracking
Input flow from simulation
\vec{v}(\vec{x})=\langle 0,0,0.01 \rangle+||0.05||\measuredangle b
Nearest Neighbor Tracking
Before any meaningful tracking tasks can be performed, the first step is to register the measurements so they are all on the same coordinate system.
Often the registration can be done along with the tracking by separating the movement into actual sample movement and other motion (camera, setup, etc.), provided the motion of either the sample or the other components can be well modeled.
We can then quantify the success rate of each algorithm on the data set using the very simple match and mismatch metrics
Time | NN | ONN | ANN |
---|---|---|---|
1.9 | 20% | 27.5% | 27.5% |
3.7 | 22.5% | 25% | 27.5% |
5.5 | 27.5% | 20% | 17.5% |
7.3 | 17.5% | 25% | 15% |
9.1 | 20% | 32.5% | 22.5% |
10.9 | 15% | 15% | 10% |
12.7 | 30% | 27.5% | 27.5% |
14.5 | 22.5% | 25% | 22.5% |
16.3 | 27.5% | 35% | 22.5% |
18.1 | 22.5% | 22.5% | 20% |
The basic rule can be improved by rejecting matches beyond a maximum displacement ( \textrm{MAXD} ):

\vec{P}_1=\begin{cases} \textrm{argmin}(||\vec{P}_0-\vec{y} || \; \forall \vec{y}\in I_1), & \textrm{if } \min ||\vec{P}_0-\vec{y} ||<\textrm{MAXD} \\ \emptyset, & \textrm{otherwise} \end{cases}
A global offset ( \vec{v}_{offset} ), for example from camera or setup motion, can also be included in the matching:

\vec{P}_1=\textrm{argmin}(||\vec{P}_0+\vec{v}_{offset}-\vec{y} || \; \forall \vec{y}\in I_1)

The offset can then be calculated in an iterative fashion, where \vec{v}_{offset} is taken as the average of all the \vec{P}_1-\vec{P}_0 vectors from the previous pass.
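A sketch in Python combining the two extensions above (the iteration count and the use of the mean of accepted displacements are assumptions of this sketch): matches beyond MAXD are rejected, and the global offset is re-estimated from the accepted matches on each pass.

```python
import numpy as np

def nn_with_offset(points_0, points_1, max_d=np.inf, n_iter=3):
    """Nearest-neighbor matching with a maximum-displacement cutoff and an
    iteratively estimated global offset (e.g. camera or setup drift)."""
    p0 = np.asarray(points_0, float)
    p1 = np.asarray(points_1, float)
    offset = np.zeros(p0.shape[1])
    match = np.full(len(p0), -1)
    for _ in range(n_iter):
        dist = np.linalg.norm((p0 + offset)[:, None, :] - p1[None, :, :], axis=-1)
        best = dist.argmin(axis=1)
        ok = dist[np.arange(len(p0)), best] < max_d   # reject implausible jumps
        match = np.where(ok, best, -1)                # -1 marks "no match"
        if ok.any():
            # re-estimate the offset as the mean of the accepted P_1 - P_0 vectors
            offset = (p1[best[ok]] - p0[ok]).mean(axis=0)
    return match, offset
```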
While nearest neighbor provides a useful starting tool it is not sufficient for truly complicated flows and datasets.
For voxel-based approaches the most common analyses are digital image correlation (or, for 3D images, digital volume correlation), where the correlation is calculated between two images or volumes.
Given images I_0(\vec{x}) and I_1(\vec{x}) at times t_0 and t_1 respectively, the correlation between these two images can be calculated as
C_{I_0,I_1}(\vec{r})=\langle I_0(\vec{x}) I_1(\vec{x}+\vec{r}) \rangle
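Since the correlation has to be evaluated over many offsets, it is usually computed with the FFT. A sketch (assuming periodic boundaries and mean-subtracted images; the function name is illustrative) of estimating a displacement from the correlation peak:

```python
import numpy as np

def correlation_offset(img_0, img_1):
    """Estimate the shift between two images from the peak of their
    circular cross-correlation C(r), computed via the FFT."""
    f0 = np.fft.fft2(img_0 - img_0.mean())
    f1 = np.fft.fft2(img_1 - img_1.mean())
    corr = np.fft.ifft2(np.conj(f0) * f1).real        # C(r) for every offset r
    peak = np.array(np.unravel_index(corr.argmax(), corr.shape))
    shape = np.array(corr.shape)
    peak[peak > shape // 2] -= shape[peak > shape // 2]   # wrap to signed shifts
    return peak
```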
With highly structured / periodic samples, identifying the best correlation is difficult since there are multiple maxima.
The correlation function can be extended by adding rotation and scaling terms to the offset making the tool more flexible but also more computationally expensive for large search spaces.
C_{I_0,I_1}(\vec{r},s,\theta)= \langle I_0(\vec{x}) I_1( \begin{bmatrix} s\cos\theta & -s\sin\theta\\ s\sin\theta & s\cos\theta \end{bmatrix} \vec{x}+\vec{r}) \rangle
We can approach the problem by subdividing the data into smaller blocks and then applying the digital volume correlation independently to each block.
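A sketch of this block-wise approach in Python, reusing the correlation_offset sketch above (the block size is an arbitrary assumption): each tile gets its own displacement vector, yielding a coarse deformation field.

```python
import numpy as np

def blockwise_offsets(img_0, img_1, block=32):
    """Split both images into (block x block) tiles and estimate one
    displacement vector per tile from its local correlation peak."""
    ny, nx = np.array(img_0.shape) // block
    field = np.zeros((ny, nx, 2))
    for j in range(ny):
        for i in range(nx):
            sl = (slice(j * block, (j + 1) * block),
                  slice(i * block, (i + 1) * block))
            field[j, i] = correlation_offset(img_0[sl], img_1[sl])
    return field    # coarse displacement field, one vector per block
```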
DIC or DVC by themselves include no sanity check for realistic offsets in the correlation itself. The method can, however, be integrated with physical models to find a more optimal solution.
C_{\textrm{cost}} = \underbrace{C_{I_0,I_1}(\vec{r})}_{\textrm{Correlation Term}} + \underbrace{\lambda ||\vec{r}||}_{\textrm{Deformation Term}}
As we covered before, distribution metrics like the distribution tensor can be used for tracking changes inside a sample. Of these the most relevant is the texture tensor from cellular materials and liquid foams. The texture tensor is the same as the distribution tensor except that the edges (or faces) represent physically connected / touching objects rather than neighboring Voronoi faces (or, equivalently, Delaunay triangles).
These metrics can also be used to track the behavior of a system without tracking single points, since most deformations of a system also deform the distribution tensor and can thus be extracted by comparing the distribution tensor at different time steps.
We can take any of these approaches and quantify the deformation using a tool called the strain tensor. Strain is defined in mechanics for the simple 1D case as the change in length relative to the original length, e = \frac{\Delta L}{L}. While this defines the 1D case well, it is difficult to apply such a metric directly to voxel, shape, and tensor data.
There are a number of different ways to calculate strain and the strain tensor, but the most applicable for general image based applications is called the infinitesimal strain tensor, because the element matches well to square pixels and cubic voxels.
A given strain can then be applied and we can quantify the effects by examining the change in the small element.
We categorize the types of strain into two main categories:
\underbrace{\mathbf{E}}_{\textrm{Total Strain}} = \underbrace{\varepsilon_M \mathbf{I_3}}_{\textrm{Volumetric}} + \underbrace{\mathbf{E}^\prime}_{\textrm{Deviatoric}}
Volumetric: the isotropic change in size or scale of the object.
Deviatoric: the change in the proportions of the object (similar to anisotropy) independent of the final scale.
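A sketch of this decomposition for a generic strain tensor (the example values are made up for illustration): the volumetric part is the mean normal strain times the identity, and the deviatoric part is whatever remains.

```python
import numpy as np

def decompose_strain(E):
    """Split a strain tensor into volumetric (isotropic) and deviatoric
    parts: E = eps_M * I + E'."""
    E = np.asarray(E, float)
    n = E.shape[0]
    eps_m = np.trace(E) / n                 # mean (volumetric) strain
    volumetric = eps_m * np.eye(n)
    return volumetric, E - volumetric       # (volumetric, deviatoric)

# illustrative example: 10% stretch in x, 2% compression in y
vol, dev = decompose_strain([[0.10, 0.0], [0.0, -0.02]])
```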
Data provided by Mattia Pistone and Julie Fife. The air phase changes from small, very anisotropic bubbles to one large connected pore network. The same tools cannot be used to quantify both of these systems. Furthermore there are motion artifacts which are difficult to correct.
We can utilize the two point correlation function of the material to characterize the shape generically for each time step and then compare.
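A sketch of the two-point correlation for a binary phase image (assuming periodic boundaries and a 0/1 segmentation): the value at offset r is the probability that two points separated by r both lie in the phase of interest.

```python
import numpy as np

def two_point_correlation(phase):
    """Two-point (auto)correlation of a binary phase image, computed
    with the FFT under periodic boundary conditions."""
    phase = np.asarray(phase, float)
    f = np.fft.fftn(phase)
    corr = np.fft.ifftn(f * np.conj(f)).real / phase.size
    return np.fft.fftshift(corr)   # zero offset moved to the array center
```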