Kevin Mader
02 April 2015
ETHZ: 227-0966-00L
How can we extract topology of a structure?
How can we measure sizes in complicated objects?
How do we measure sizes relevant for diffusion or other local processes?
How do we identify separate objects when they are connected?
How do we investigate surfaces in more detail and their shape?
How can we compare the shape of complex objects as they grow?
We want to simplify our data, but an ellipse model is too simple for many shapes.
So while bounding-box and ellipse-based models are useful for many objects and cells, they do a very poor job with the sample below.
A map (or image) of distances: each point in the map holds the distance from that point to a given feature of interest (the surface of an object, an ROI, the center of an object, etc.).
If we start with an image as a collection of points divided into two categories (foreground and background), the distance map can be defined as
\textrm{dist}(\vec{x}) = \min_{\vec{y} \in \textrm{Background}} ||\vec{x}-\vec{y}||
We will use the Euclidean distance ||\vec{x}-\vec{y}|| for this class, but there are other metrics which make sense for other types of data, such as the Manhattan/city-block distance or weighted metrics.
Using this rule, a distance map can be made for the Euclidean metric.
Similarly, the Manhattan or city-block metric can be used, where the distance is defined as \sum_{i} |x_i-y_i|.
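As a minimal sketch of the two metrics (using Python/scipy here as an assumption; the lecture itself may use other tools such as FIJI):

```python
# Euclidean vs. city-block distance maps of a small binary image
import numpy as np
from scipy import ndimage

img = np.zeros((9, 9), dtype=bool)
img[2:7, 2:7] = True  # a small square of foreground pixels

# dist(x) = min over background y of ||x - y|| (Euclidean metric)
dist_euclid = ndimage.distance_transform_edt(img)

# the same map with the Manhattan / city-block metric sum_i |x_i - y_i|
dist_cityblock = ndimage.distance_transform_cdt(img, metric='taxicab')
```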
The distance map is one of the critical points where the resolution of the imaging system is important.
Ideally, the voxel size is small (and isotropic) compared with the structures of interest, so that distances measured in voxels convert directly into physical distances; if that is not possible, the distance values must be interpreted with the resolution limit in mind.
We can create two distance maps (see the sketch below):

- Foreground \rightarrow Background
- Background \rightarrow Foreground
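In code, these are simply the distance transforms of the image and of its complement (again a scipy-based sketch, an assumption rather than the lecture's exact tool):

```python
import numpy as np
from scipy import ndimage

img = np.zeros((9, 9), dtype=bool)
img[2:7, 2:7] = True

# Foreground -> Background: distance from each foreground pixel to the nearest background pixel
dist_fg = ndimage.distance_transform_edt(img)
# Background -> Foreground: distance from each background pixel to the nearest foreground pixel
dist_bg = ndimage.distance_transform_edt(~img)
```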
One of the most useful properties of the distance map is that it is relatively insensitive to small changes in connectivity.
For some structures like cellular materials and trabecular bone, we want a more detailed analysis than just thickness. We want to know
The first step is to take the distance transform of the structure, I_d(x,y) = \textrm{dist}(I(x,y)). We can see in this image that there are already local maxima which form a sort of backbone, closely matching what we are interested in.
We use the Laplacian filter as an approximation of the second-derivative operator; it responds strongly along the ridges (local maxima) of the distance map.
\nabla I_{d}(x,y) = -\left(\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}\right)I_d \approx \underbrace{\begin{bmatrix} -1 & -1 & -1 \\ -1 & 8 & -1 \\ -1 & -1 & -1 \end{bmatrix}}_{\textrm{Laplacian Kernel}} \otimes I_d(x,y)
We can locate the local maxima of the structure by setting a minimum surface distance I_d(x,y)>MIN_{DIST} and combining it with a minimum slope value \nabla I_d(x,y) > MIN_{SLOPE}
Harking back to our earlier lectures, this can be seen as a threshold on a feature vector representation of the entire dataset.
\textrm{cImg}(x,y) = \langle \underbrace{I_d(x,y)}_1, \underbrace{\nabla I_d(x,y)}_2 \rangle
\textrm{skelImage}(x,y) = \begin{cases} 1, & \textrm{cImg}_1(x,y)\geq MIN_{DIST} \textrm{ \& } \textrm{cImg}_2(x,y)\geq MIN_{SLOPE} \\ 0, & \textrm{otherwise} \end{cases}
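A minimal sketch of this distance-map plus Laplacian thresholding, again assuming Python/scipy (the function name and the MIN_DIST/MIN_SLOPE defaults are illustrative, not the lecture's exact values):

```python
import numpy as np
from scipy import ndimage

def ridge_skeleton(binary_img, min_dist=2.0, min_slope=1.0):
    # I_d(x,y): Euclidean distance of every foreground pixel to the background
    dist_map = ndimage.distance_transform_edt(binary_img)
    # the (negative-)Laplacian kernel from the slide: positive response at local maxima
    kernel = np.array([[-1, -1, -1],
                       [-1,  8, -1],
                       [-1, -1, -1]], dtype=float)
    lap = ndimage.convolve(dist_map, kernel)
    # feature-vector threshold: far enough from the surface AND on a ridge
    return (dist_map >= min_dist) & (lap >= min_slope)
```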
The resulting structure is a bit overgrown and is thinned down to obtain the final skeleton.
With the skeleton, which is ideally one voxel thick, we can characterize the junctions in the system by looking at the neighborhood of each voxel.
As with nearly every operation, the neighborhood we define is important: with a box neighborhood versus a cross neighborhood (8 vs 4 adjacent pixels in 2D), diagonal connections are counted differently.
Once we have classified the skeleton into terminals, segments, and junctions (see the sketch below), we can analyze the various components and assess metrics like connectivity, branching, and many others.
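A hedged sketch of such a classification by neighbor counting (1 skeleton neighbor → terminal, 2 → segment, 3 or more → junction), using the 8-connected box neighborhood; the function is illustrative rather than the lecture's exact implementation:

```python
import numpy as np
from scipy import ndimage

def classify_skeleton(skel):
    skel = skel.astype(bool)
    box = np.ones((3, 3), dtype=int)          # 8-connected (box) neighborhood, center included
    # number of neighboring skeleton pixels (subtract the pixel itself)
    n_neighbors = ndimage.convolve(skel.astype(int), box, mode='constant') - skel
    return {
        'terminal': skel & (n_neighbors == 1),
        'segment':  skel & (n_neighbors == 2),
        'junction': skel & (n_neighbors >= 3),
    }
```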
The easiest way is to further process the image; most of the other metrics can then simply be counted:
Label | Count |
---|---|
Junction | 828 |
Segment | 283 |
Terminal | 579 |
Label | Count |
---|---|
Junction | 1431 |
Segment | 734 |
Terminal | 398 |
One of the more interesting metrics in materials science is tortuosity, defined as the ratio between the arc length L of a segment and the straight-line distance C between its starting and ending points: \tau = \frac{L}{C}
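An illustrative sketch of the calculation, assuming a segment is already available as an ordered list of voxel coordinates:

```python
import numpy as np

def tortuosity(path_coords):
    path = np.asarray(path_coords, dtype=float)
    steps = np.diff(path, axis=0)
    arc_length = np.sqrt((steps ** 2).sum(axis=1)).sum()   # L: length along the segment
    chord = np.linalg.norm(path[-1] - path[0])             # C: straight start-to-end distance
    return arc_length / chord

# a gently bending path gives tau slightly above 1; a straight line gives exactly 1
print(tortuosity([(0, 0), (1, 1), (2, 1), (3, 2), (4, 2)]))  # ~1.08
```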
A high degree of tortuosity indicates that the network is convoluted, which is important when estimating or predicting flow rates.
Thickness is a metric for assessing the size and structure of objects in a very generic manner. For every point \vec{x} in the image you find the largest sphere which:

- fits entirely inside the foreground of the structure, and
- contains the point \vec{x}
Taken from [1]
Calculating the thickness map by fitting a sphere at every point is very time consuming ( O(n^3) ).
\textrm{thSkelImage}(x,y) = \begin{cases} \textrm{cImg}_1(x,y), & \textrm{cImg}_1(x,y)\geq MIN_{DIST} \textrm{ \& } \textrm{cImg}_2(x,y)\geq MIN_{SLOPE} \\ 0, & \textrm{otherwise} \end{cases}
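A minimal sketch of this short-cut, evaluating the distance map only on the skeleton points rather than fitting a sphere at every voxel (scipy-based and illustrative, not the lecture's exact code):

```python
import numpy as np
from scipy import ndimage

def thickness_on_skeleton(binary_img, skeleton):
    dist_map = ndimage.distance_transform_edt(binary_img)
    th_skel = np.where(skeleton, dist_map, 0)   # thSkelImage: distance values kept only on the backbone
    radii = dist_map[skeleton]                  # local maximal-sphere radii along the skeleton
    # summary comparable to the Full Map / Skeleton Map tables below
    return th_skel, {'min': radii.min(), 'median': np.median(radii),
                     'mean': radii.mean(), 'max': radii.max()}
```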
user | system | elapsed |
---|---|---|
5.567 | 0.263 | 6.265 |
user | system | elapsed |
---|---|---|
15.975 | 0.719 | 18.164 |
Statistic | Full Map | Skeleton Map |
---|---|---|
Min. | 0.750 | 1.750 |
1st Qu. | 2.500 | 2.500 |
Median | 2.658 | 2.658 |
Mean | 2.574 | 2.630 |
3rd Qu. | 2.704 | 2.704 |
Max. | 2.915 | 2.915 |
user | system | elapsed |
---|---|---|
2.185 | 0.028 | 2.259 |
Statistic | Full Map | Skeleton Map | Tiny Skeleton Map |
---|---|---|---|
Min. | 0.750 | 1.750 | 2.236 |
1st Qu. | 2.500 | 2.500 | 2.500 |
Median | 2.658 | 2.658 | 2.658 |
Mean | 2.574 | 2.630 | 2.627 |
3rd Qu. | 2.704 | 2.704 | 2.704 |
Max. | 2.915 | 2.915 | 2.795 |
While the examples and demonstrations so far have been shown in 2D, exactly the same technique can be applied to 3D data as well, for example to this liquid foam structure.
The thickness of the background (air) voxels can be calculated in the same manner.
With care, this can be used as a proxy for the bubble size distribution in systems where all the bubbles are connected and it is difficult to identify single ones.
Watershed is a method for segmenting touching objects that cannot be separated by component labeling alone.
We start by distributing seed points all across the image.
Each point looks at its neighbors in the distance map: if any neighbor has a higher distance value (downhill on the inverted map), the point moves to that position.
The points collect in basins at the local maxima of the distance map; regrowing each basin back to its original size gives the watershed regions of the image.
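A sketch of this idea with scikit-image's marker-based watershed (a common implementation of the approach, used here as an assumption; parameters such as min_distance are illustrative):

```python
import numpy as np
from scipy import ndimage
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def split_touching_objects(binary_img):
    dist_map = ndimage.distance_transform_edt(binary_img)
    # seeds ("basins") at the local maxima of the distance map
    peaks = peak_local_max(dist_map, labels=binary_img.astype(int), min_distance=5)
    markers = np.zeros(dist_map.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    # flood the inverted distance map; each basin regrows out to the object boundary
    return watershed(-dist_map, markers, mask=binary_img)
```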
Curvature is a metric related to the surface or interface between phases or objects. It is the inverse of the radius of the circle that locally best fits the surface:
\kappa = \frac{1}{R}
In order to meaningfully talk about curvatures of surfaces, we first need to define a consistent frame of reference for examining the surface of an object. We thus define the surface normal as a vector oriented orthogonally to the surface and pointing away from the interior of the object: \vec{N}
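One common way to estimate \vec{N} on a voxelized object (an assumed approach, not necessarily the one used in the lecture) is from the gradient of a smoothed version of the binary image:

```python
import numpy as np
from scipy import ndimage

def surface_normals(binary_vol, sigma=1.5):
    # smooth the binary object so that its gradient is well defined
    smooth = ndimage.gaussian_filter(binary_vol.astype(float), sigma)
    grads = np.gradient(smooth)                 # one gradient component per axis
    # the gradient points toward the object interior, so negate it to point outward
    normals = -np.stack(grads, axis=-1)
    norm = np.linalg.norm(normals, axis=-1, keepdims=True)
    return normals / np.clip(norm, 1e-8, None)  # unit-length N at every voxel
```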
With the notion of surface normal defined ( \vec{N} ), we can define many curvatures at point \vec{x} on the surface.
The two principal curvatures are the largest and smallest curvature over all directions \theta in the surface: \kappa_1 = \textrm{max}(\kappa(\theta)), \quad \kappa_2 = \textrm{min}(\kappa(\theta))
The mean curvature is the mean of the two principal curvatures: H = \frac{1}{2}(\kappa_1+\kappa_2)
The Gaussian curvature is the product of the two principal curvatures: K = \kappa_1\kappa_2
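As a quick check of these definitions on simple shapes: for a sphere of radius R, \kappa_1 = \kappa_2 = \frac{1}{R}, so H = \frac{1}{R} and K = \frac{1}{R^2} > 0; for a cylinder of radius R, \kappa_1 = \frac{1}{R} and \kappa_2 = 0, so H = \frac{1}{2R} and K = 0; at a saddle point, \kappa_1 > 0 > \kappa_2, so K < 0.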
Consider a complex structure for which neither an ellipsoidal nor a watershed model is meaningful; the images themselves show the type of substructures and shapes present in the sample.
Characteristic shape can be calculated by measuring the principal curvatures and normalizing them by the structure size. A distribution of these normalized curvatures then provides shape information about a structure independent of its size.
For example, a structure transitioning from a collection of perfectly spherical particles to an annealed solid will go from having many round, spherical faces with positive Gaussian curvature to many saddles and more complicated structures with zero or negative curvature.
It provides another metric for characterizing complex shapes
There are hundreds of other techniques which can be applied to these complicated structures, but they go beyond the scope of this course. Many of them are model-based, which means they work well, but only for particular types of samples or images. Of the more general techniques, several which are easily testable inside FIJI are: