Rigid image registration between mono- or multi-modal data sets can be performed with different methods. All methods have one thing in common: the transformed images are not deformed during the optimization process. However, the objective function (i.e. the similarity measure) used in the optimization and the degree of user interaction can differ, resulting in different advantages and drawbacks for different applications.
In VIRTUOS the following four rigid registration modules are implemented: Stereotactic Registration, Interactively Defined Landmarks Registration, Interactive Registration by Fusion Techniques, and registration based on the Maximization of Mutual Information.
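To illustrate the common structure behind the automatic modules, the following Python sketch shows a generic rigid registration loop: six rigid parameters are turned into a transformation, the moving image is resampled onto the fixed grid, and a similarity measure is optimized. This is an illustrative sketch only, not VIRTUOS code; the correlation-based similarity, the parameter layout, and the SciPy-based resampling are assumptions made for the example.

    # Minimal sketch of an automatic rigid registration loop (illustrative only,
    # not the VIRTUOS implementation). Assumes two 3D numpy arrays of equal shape.
    import numpy as np
    from scipy import ndimage, optimize

    def rotation_matrix(rx, ry, rz):
        """Rotation matrix from Euler angles (radians) around the x-, y-, z-axes."""
        cx, sx = np.cos(rx), np.sin(rx)
        cy, sy = np.cos(ry), np.sin(ry)
        cz, sz = np.cos(rz), np.sin(rz)
        Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
        Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
        Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
        return Rz @ Ry @ Rx

    def resample(moving, params):
        """Resample the moving image with a rigid transform (tx, ty, tz, rx, ry, rz)."""
        tx, ty, tz, rx, ry, rz = params
        R = rotation_matrix(rx, ry, rz)
        center = (np.array(moving.shape) - 1) / 2.0
        # Map output (fixed-grid) coordinates into the moving image,
        # rotating about the image centre and shifting by the translation.
        offset = center - R @ center - np.array([tx, ty, tz])
        return ndimage.affine_transform(moving, R, offset=offset, order=1)

    def negative_similarity(params, fixed, moving):
        """Objective function: here a simple negative correlation coefficient;
        mutual information (see below) would be used for multi-modal data."""
        warped = resample(moving, params)
        return -np.corrcoef(fixed.ravel(), warped.ravel())[0, 1]

    def register_rigid(fixed, moving):
        """Optimize the six rigid parameters, starting from the identity transform."""
        x0 = np.zeros(6)
        res = optimize.minimize(negative_similarity, x0,
                                args=(fixed, moving), method="Powell")
        return res.x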

Stereotactic Registration (STX)

The stereotaxy module allows the calculation of stereotactic transformations if a stereotactic localization system was attached to the patient during image acquisition. The stereotactic system describes the link between the patient coordinate system (defined by the image) and the external coordinate system of the couch, fixation system, linear accelerator (LINAC), etc. This allows the spatial correlation of the patient's anatomy in images acquired on different devices, e.g. in the MRI scanner, in the CT scanner, and on the treatment couch. For example, it makes it possible to use multi-modal, complementary image information for treatment planning, since the tumor delineation, aided by the higher tissue contrast of the MRI scan, can be transferred directly to the planning CT.

To visualize the external coordinates in the image, a localizer system with a known geometry is attached to the patient (or to the fixation system: head mask, vacuum pillow, torso). This localizer system is visible as stereotactic markers in the image scan. These markers can be extracted and the corresponding transformation can be calculated. Since the geometries of the different localizer systems (head, body, etc.) are known, the specialized algorithm can detect them easily in the image scans. Outliers due to imaging artifacts can be checked and excluded manually by the user, since a measure of the violation of the rigidity of the transformation is presented.
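One possible form of such a rigidity-violation measure is sketched below; this is an assumption about how it could be computed, not the VIRTUOS implementation. Given the known localizer geometry and the marker positions detected in the scan, the residual distance of each marker after applying the estimated rigid transformation shows how strongly it violates the rigid-body assumption, and markers above a tolerance can be offered to the user for exclusion.

    # Sketch of a per-marker rigidity check (illustrative, not the VIRTUOS code).
    # known_markers: Nx3 marker positions of the localizer in external (stereotactic)
    # coordinates; detected_markers: Nx3 positions extracted from the image scan;
    # transform: 4x4 rigid transformation estimated from these correspondences.
    import numpy as np

    def marker_residuals(known_markers, detected_markers, transform):
        """Residual distance of every marker after applying the rigid transform."""
        pts = np.hstack([detected_markers, np.ones((len(detected_markers), 1))])
        mapped = (transform @ pts.T).T[:, :3]
        return np.linalg.norm(mapped - known_markers, axis=1)

    def flag_outliers(residuals, tolerance_mm=1.0):
        """Markers whose residual exceeds the tolerance are candidates for
        manual exclusion (e.g. markers distorted by imaging artifacts)."""
        return np.where(residuals > tolerance_mm)[0]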

Stereotactic registration was developed to enable accurate patient positioning prior to and during treatment. Since this registration method aligns external markers, the patient fixation needs to be reproducible. With the emergence of image guidance in radiation therapy, stereotactic registration is progressively being replaced by registration methods that focus directly on anatomical features in the image scan.

Interactively Defined Landmarks (LM)

This rigid image registration method can be used to align arbitrary image features, since the user can select image portions which can easily be identified in the different images.
To use this correlation method, corresponding landmarks must be selected within the pair of cubes to be matched. The algorithm is based on routines developed by H. Treuer, University of Cologne. For a landmark correlation at least three landmarks are necessary, and they must not be collinear. With more than three landmarks, different rigid-body transformations can be optimized, depending on the objective function. The user can choose between three implemented strategies: mean transformation, distance-weighted mean transformation, and the least-squares distance approach.
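For the least-squares distance strategy, a closed-form solution based on the singular value decomposition (the Kabsch method) is commonly used. The sketch below shows this kind of fit; the original routines by H. Treuer are not reproduced here, so this is an illustrative assumption rather than the actual VIRTUOS code.

    # Least-squares rigid fit to corresponding landmarks (Kabsch/SVD method);
    # an illustrative sketch, not the original routines used in VIRTUOS.
    # src, dst: Nx3 arrays of corresponding landmark coordinates, N >= 3,
    # and the landmarks must not be collinear.
    import numpy as np

    def fit_rigid_least_squares(src, dst):
        """Return rotation R (3x3) and translation t (3,) minimizing
        sum ||R @ src_i + t - dst_i||^2 over all landmark pairs."""
        src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
        # Cross-covariance of the centred landmark clouds.
        H = (src - src_mean).T @ (dst - dst_mean)
        U, _, Vt = np.linalg.svd(H)
        # Guard against a reflection (determinant -1).
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = dst_mean - R @ src_mean
        return R, t

The returned rotation and translation map the landmark positions of one cube onto the other and can then be applied to the whole image cube.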

The advantage of this interactive method is that the user can select the areas to be aligned even if the image quality is inferior or artifacts are present. The drawback is that, in the presence of deformations, the user needs some experience to select landmarks whose correlation will result in an overall adequate rigid image alignment. This approach can also be used in combination with a non-rigid transformation approach to cope with occurring deformations (→ deformable image registration methods).

Interactive Registration by Fusion Techniques (MAN)

The interactive matching is based on the visible result of the “Image Fusion Mode”. The aim of this approach is a manual fusion of two images. It was frequently used with images of poor quality (e.g. PET), or as a pre-processing step for automatic registration algorithms (e.g. Mutual Information, see the following section).

The currently selected work cube is overlaid onto the previously selected image information in the “Image Views”. For best results we propose an intensity-mix mode with red- and green-colored images. Each rigid transformation parameter (translation in the x-, y-, and z-directions, rotation around the x-, y-, and z-axes, and scaling) can be adjusted manually by editing the parameters or by clicking arrow buttons with different step sizes. The visual rendering of the overlaid images guides the user during this manual optimization process. This method is still frequently used if automatic image registration methods fail due to inferior image quality (e.g. low signal-to-noise ratio or imaging artifacts).
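The red/green intensity mix can be illustrated with a short sketch (an assumption about one possible rendering, not the VIRTUOS viewer): the reference slice is written into the red channel and the corresponding slice of the transformed work cube into the green channel, so well-aligned structures appear yellow while misaligned edges stay red or green.

    # Sketch of a red/green intensity-mix overlay for two corresponding 2D slices
    # (illustrative only; not the VIRTUOS "Image Fusion Mode" implementation).
    import numpy as np

    def red_green_overlay(fixed_slice, moving_slice):
        """Return an RGB image: fixed slice in red, moving slice in green."""
        def normalize(img):
            img = img.astype(float)
            lo, hi = img.min(), img.max()
            return (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img)

        rgb = np.zeros(fixed_slice.shape + (3,))
        rgb[..., 0] = normalize(fixed_slice)   # red channel: reference image
        rgb[..., 1] = normalize(moving_slice)  # green channel: overlaid work cube
        return rgb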

Maximization of Mutual Information (MI)

Mutual information is a similarity measure. When it is applied as the objective function of the registration process, the image alignment can be fully automated without requiring user interaction. An additional advantage of using mutual information in the registration process is its ability to judge the quality of multi-modal image alignments.

This correlation method is based on the approach of Maes et al. [1]. Mutual information is an entropy-based similarity measure that penalizes disorder in the phase space (the joint intensity histogram). Accordingly, two images are most similar when the voxel pairs at the same spatial positions occupy the minimal number of states in the joint histogram of both images (corresponding to the highest achievable order).
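The measure itself can be written down compactly. The following sketch estimates the mutual information of two overlapping images from their joint intensity histogram; it is illustrative only (the bin count is an arbitrary choice) and not the VIRTUOS implementation.

    # Mutual information of two images estimated from their joint histogram
    # (illustrative sketch of the measure; not the VIRTUOS implementation).
    import numpy as np

    def mutual_information(fixed, moving, bins=64):
        """I(A;B) = H(A) + H(B) - H(A,B), computed from a joint histogram."""
        joint_hist, _, _ = np.histogram2d(fixed.ravel(), moving.ravel(), bins=bins)
        p_joint = joint_hist / joint_hist.sum()   # joint probabilities
        p_fixed = p_joint.sum(axis=1)             # marginal of the fixed image
        p_moving = p_joint.sum(axis=0)            # marginal of the moving image

        def entropy(p):
            p = p[p > 0]
            return -np.sum(p * np.log(p))

        return entropy(p_fixed) + entropy(p_moving) - entropy(p_joint.ravel())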


References

[1] Maes F, Collignon A, Vandermeulen D, Marchal G and Suetens P (1997) "Multimodality Image Registration by Maximization of Mutual Information", IEEE Trans. Med. Imaging 16(2):187–198.
