Affine (MOPS)
Apply slice-by-slice (2D) affine-based image registration to a multi-dimensional stack.
Description
Apply slice-by-slice (2D) affine-based image registration to a multi-dimensional stack. Images can be aligned relative to the first frame in the stack, the previous frame, or a separate image in the workspace. The registration transform can also be calculated from a different stack to the one it will be applied to. Registration can be performed along either the time or Z axis. The non-registered axis (e.g. the time axis when registering in Z) can be "linked" (all frames given the same registration) or "independent" (each stack registered separately).
This module uses the Feature Extraction plugin and associated MPICBG tools to detect MOPS ("Multi-Scale Oriented Patches") features from the input images and calculate and apply the necessary 2D affine transforms.
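The actual feature detection and model fitting are performed by the mpicbg tools mentioned above; the sketch below is only a conceptual illustration of the same pipeline in Python, using scikit-image's ORB keypoints as a stand-in for MOPS patches (the function name, defaults and inlier check are illustrative assumptions, not the module's code):

```python
# Conceptual sketch of slice-by-slice feature-based affine registration:
# detect keypoints, match with a ratio test, fit an affine model with RANSAC, warp.
import numpy as np
from skimage.feature import ORB, match_descriptors
from skimage.measure import ransac
from skimage.transform import AffineTransform, warp

def register_slice(moving, reference, max_ratio=0.8, max_error=10.0, min_inliers=7):
    """Estimate and apply a 2D affine transform aligning 'moving' to 'reference'."""
    orb = ORB(n_keypoints=500)

    orb.detect_and_extract(reference)
    kp_ref, desc_ref = orb.keypoints, orb.descriptors

    orb.detect_and_extract(moving)
    kp_mov, desc_mov = orb.keypoints, orb.descriptors

    # Ratio-test matching between descriptor sets
    matches = match_descriptors(desc_mov, desc_ref, cross_check=True, max_ratio=max_ratio)

    # Keypoints are (row, col); AffineTransform expects (x, y)
    src = kp_mov[matches[:, 0]][:, ::-1]
    dst = kp_ref[matches[:, 1]][:, ::-1]

    # RANSAC rejects matches inconsistent with a single affine transform
    model, inliers = ransac((src, dst), AffineTransform, min_samples=3,
                            residual_threshold=max_error, max_trials=1000)
    if model is None or inliers is None or inliers.sum() < min_inliers:
        return moving  # too few consistent matches; leave slice unregistered

    # 'model' maps moving -> reference coordinates, so warp with its inverse
    return warp(moving, model.inverse, preserve_range=True)
```

In this sketch, `max_ratio` plays the role of the "Closest/next closest ratio" parameter, `residual_threshold` corresponds to "Maximal alignment error (px)" and the minimum-inlier check loosely corresponds to "Inlier ratio".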
References:
- Brown, Matthew & Szeliski, Richard "Multi-image feature matching using multi-scale oriented patches". US Patent 7,382,897 (June 3, 2008). Assignee: Microsoft Corporation.
Parameters
Parameter | Description |
---|---|
Input image | Image from workspace to apply registration to. |
Apply to input image | When selected, the post-operation image will overwrite the input image in the workspace. Otherwise, the image will be saved to the workspace with the name specified by the "Output image" parameter. |
Output image | If "Apply to input image" is not selected, the post-operation image will be saved to the workspace with this name. |
Registration axis | Controls which stack axis the registration will be applied in. For example, when "Time" is selected, all images along the time axis will be aligned. Choices are: Time, Z. |
Other axis mode | For stacks with a non-registration axis of length greater than 1 (e.g. the "Z" axis when registering in time), the behaviour of this other axis is controlled by this parameter. Choices are: Independent (each stack along the other axis is registered separately), Linked (all frames along the other axis are given the same registration). |
Fill mode | Controls what intensity any border pixels will have. "Borders" in this case correspond to strips/wedges at the image edge corresponding to regions outside the initial image (e.g. the right-side of an output image when the input was translated to the left). Choices are: Black, White. |
Show detected points | When enabled, the points used for calculation of the registration will be added as an overlay to the input image and displayed. |
Enable multithreading | When selected, certain parts of the registration process will be run on multiple threads of the CPU. This can provide a speed improvement when working on a computer with a multi-core CPU. |
Reference mode | Controls what reference image each image will be compared to. Choices are: First frame (all images aligned to the first frame in the stack), Previous N frames (each image aligned to a projection of the preceding frames, see "Number of previous frames"), Specific image (all images aligned to the image specified by "Reference image"). |
Number of previous frames | Number of previous frames (or slices) to use as the reference image when "Reference mode" is set to "Previous N frames". If there are insufficient previous frames (e.g. towards the beginning of the stack), the maximum available number of frames will be used. Irrespective of the number of frames used, the images will be projected into a single reference image using the statistic specified by "Previous frames statistic" (see the sketch after this parameter table). |
Previous frames statistic | Statistic to use when combining multiple previous frames as a reference ("Reference mode" set to "Previous N frames"). |
Reference image | If "Reference mode" is set to "Specific image" mode, all input images will be registered relative to this image. This image must only have a single channel, slice and timepoint. |
Calculation source | Controls whether the input image will be used to calculate the registration transform or whether it will be determined from a separate image. Choices are: Internal (transform calculated from the input image itself), External (transform calculated from the image specified by "External source"). |
External source | If "Calculation source" is set to "External", registration transforms will be calculated using this image from the workspace. This image will be unaffected by the process. |
Calculation channel | If calculating the registration transform from a multi-channel image stack, the transform will be determined from this channel only. Irrespective of the channel used for calculation, the transform will be applied equally to all channels of the input image. |
Transformation mode | Controls the type of transformation (registration model) being applied. |
Test flip (mirror image) | When selected, alignment will be tested for both the "normal" and "flipped" (mirrored) states of the image. The state yielding the lower alignment cost will be retained. |
Independent rotation | When selected, the image will be rotated multiple times, with registration optimised at each orientation. The orientation with the best score will be retained. This is useful for algorithms which perform poorly with rotated features (e.g. block matching). The increment between rotations is controlled by "Orientation increment (degs)". |
Orientation increment (degs) | If "Independent rotation" is enabled, this is the angular increment between rotations. The increment is specified in degree units. |
Show transformation(s) | When selected, the affine transform will be displayed in the results table. Fixed affine transform values such as these can be applied using the "Affine (fixed transform)" module. |
Clear between images | If "Show transformation(s)" is enabled, this parameter controls whether the results table is cleared before each new transform is displayed. If this option isn't selected, each new transform will be appended to the bottom of the results table. |
Initial Gaussian blur (px) | "Accurate localization of keypoints requires initial smoothing of the image. If your images are blurred already, you might lower the initial blur σ0 slightly to get more but eventually less stable keypoints. Increasing σ0 increases the computational cost for Gaussian blur, setting it to σ0=3.2px is equivalent to keep σ0=1.6px and use half maximum image size. Tip: Keep the default value σ0=1.6px as suggested by Lowe (2004).". Description taken from https://imagej.net/Feature_Extraction |
Steps per scale | "Keypoint candidates are extracted at all scales between maximum image size and minimum image size. This Scale Space is represented in octaves each covering a fixed number of discrete scale steps from σ0 to 2σ0. More steps result in more but eventually less stable keypoint candidates. Tip: Keep 3 as suggested by Lowe (2004) and do not use more than 10.". Description taken from https://imagej.net/Feature_Extraction |
Minimum image size (px) | "The Scale Space stops if the size of the octave would be smaller than minimum image size. Tip: Increase the minimum size to discard large features (i.e. those extracted from looking at an image from far, such as the overall shape).". Description taken from https://imagej.net/Feature_Extraction |
Maximum image size (px) | "The Scale Space starts with the first octave equal or smaller than the maximum image size. Tip: By reducing the size, fine scaled features will be discarded. Increasing the size beyond that of the actual images has no effect.". Description taken from https://imagej.net/Feature_Extraction |
Feature descriptor size | "The MOPS-descriptor is simply a nxn intensity patch with normalized intensities. Brown (2005) suggests n=8. We found larger descriptors with n>16 perform better for Transmission Electron Micrographs from serial sections.". Description taken from https://imagej.net/Feature_Extraction |
Feature descriptor orientation bins | "For SIFT-descriptors, this is the number of orientation bins b per 4×4px block as described above. Tip: Keep the default value b=8 as suggested by Lowe (2004).". Description taken from https://imagej.net/Feature_Extraction |
Closest/next closest ratio | "Correspondence candidates from local descriptor matching are accepted only if the Euclidean distance to the nearest neighbour is significantly smaller than that to the next nearest neighbour. Lowe (2004) suggests a ratio of r=0.8 which requires some increase when matching things that appear significantly distorted.". Description taken from https://imagej.net/Feature_Extraction |
Maximal alignment error (px) | "Matching local descriptors gives many false positives, but true positives are consistent with respect to a common transformation while false positives are not. This consistent set and the underlying transformation are identified using RANSAC. This value is the maximal allowed transfer error of a match to be counted as a good one. Tip: Set this to about 10% of the image size.". Description taken from https://imagej.net/Feature_Extraction |
Inlier ratio | "The ratio of the number of true matches to the number of all matches including both true and false used by RANSAC. 0.05 means that minimally 5% of all matches are expected to be good while 0.9 requires that 90% of the matches were good. Only transformations with this minimal ratio of true consent matches are accepted. Tip: Do not go below 0.05 (and only if 5% is more than about 7 matches) except with a very small maximal alignment error to avoid wrong solutions.". Description taken from https://imagej.net/Feature_Extraction |
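As noted for the "Number of previous frames" and "Previous frames statistic" parameters, when "Reference mode" is set to "Previous N frames" each image is compared against a projection of the preceding frames. The following is a minimal sketch of that idea (the function name and the set of statistics offered here are illustrative assumptions, not the module's actual options):

```python
# Minimal sketch (not the plugin's code) of how a "Previous N frames" reference
# could be formed: take up to N preceding frames and collapse them into a single
# reference image using the chosen projection statistic.
import numpy as np

def previous_frames_reference(stack, index, n_frames, statistic="mean"):
    """stack: array of shape (T, Y, X); returns the reference image for frame 'index'."""
    first = max(0, index - n_frames)     # use fewer frames near the start of the stack
    previous = stack[first:index]        # frames preceding the current one
    if previous.shape[0] == 0:
        return stack[0]                  # no previous frames; fall back to the first frame
    projections = {"mean": np.mean, "median": np.median,
                   "min": np.min, "max": np.max}
    return projections[statistic](previous, axis=0)
```

For example, `previous_frames_reference(stack, index=10, n_frames=3, statistic="median")` would return the median projection of frames 7, 8 and 9, which would then act as the reference when registering frame 10.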