N - 2 Marks QA and Part B Answers - MIP - Unit 4
17. What is IPAR? Mention its advantage over conventional PAR.
An iterative PAR (IPAR) method is used for registering MR and PET brain images. The
advantage of the IPAR over the conventional principal axes registration method is that the IPAR
method can be used with partial volumes. This procedure assumes that the field of view (FOV) of
a functional image such as PET is less than the full brain volume, while the other volume (MR
image) covers the entire brain.
18. State the uses of image landmarks and feature based registration.
Rigid and non-rigid transformations have been used in landmark (point)-based and feature-based
medical image registration. Once the corresponding landmarks or features are identified in the
source and target image spaces, a customized transformation can be computed for registering the
source image into the target image space.
19. Classify the algorithms used for point-based registration.
There are two simple algorithms used for point-based registration of source and target images; a
brief sketch of the first is given below.
Similarity Transformation for Point-Based Registration
Weighted Features-Based Registration
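A minimal sketch of the first approach, the similarity transformation computed from corresponding landmark pairs, is given below; the function name, the SVD-based (Procrustes) solution, and the example coordinates are illustrative assumptions, not taken from the source.

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares rotation R, isotropic scale s, and translation t mapping
    source landmarks onto target landmarks (Procrustes / absolute orientation via SVD)."""
    src = np.asarray(src, dtype=float)   # (N, 3) source landmarks
    dst = np.asarray(dst, dtype=float)   # (N, 3) corresponding target landmarks

    # Centre both point sets on their centroids
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)

    # Rotation from the SVD of the cross-covariance matrix
    U, S, Vt = np.linalg.svd(src_c.T @ dst_c)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T

    # Isotropic scale and translation
    s = np.trace(np.diag(S) @ D) / np.sum(src_c ** 2)
    t = dst.mean(axis=0) - s * R @ src.mean(axis=0)
    return R, s, t

# Hypothetical landmark pairs picked in the source and target image spaces
src_pts = np.array([[10.0, 12.0, 5.0], [40.0, 15.0, 8.0],
                    [25.0, 40.0, 12.0], [30.0, 22.0, 30.0]])
dst_pts = np.array([[12.5, 11.0, 6.0], [42.0, 16.5, 7.5],
                    [26.0, 41.0, 13.0], [31.5, 23.0, 31.0]])
R, s, t = similarity_transform(src_pts, dst_pts)
mapped = (s * (R @ src_pts.T)).T + t   # source landmarks mapped into target space
print("RMS landmark error:", np.sqrt(np.mean(np.sum((mapped - dst_pts) ** 2, axis=1))))
```

The weighted features-based variant follows the same principle but assigns weights to the individual landmarks or features in the error measure.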
22. Write the projection operator matrix which is used for rendering.
If one assumes that the origin of all rays that project the 3D scene to the 2D imaging plane lies at
infinity (i.e., setting f, the distance of the observer from the projection plane, to infinity), the
projection operator reduces to the orthogonal projection shown below.
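The matrix itself is not reproduced in the source text; consistent with the description in Q6 below (everything is imaged onto the x-y plane, with the eye point at infinity along the z-axis), it is presumably

$$P_{\mathrm{orth}} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \qquad P_{\mathrm{orth}} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} x \\ y \\ 0 \end{pmatrix},$$

so that every voxel (x, y, z) is mapped to (x, y, 0) on the rendering plane.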
28. State the volume rendering techniques with its transfer function.
The MIP (Maximum Intensity Projection) and the DRR (Digitally Reconstructed Radiograph) are
simple volume rendering techniques; in volume rendering, a transfer function determines the final
intensity of the rendered pixel. The two transfer functions are given below.
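The formulas themselves are not reproduced in the source text; following the standard definitions of these techniques, they are presumably the summation and the maximum of the voxel intensities along each ray:

$$I_{\mathrm{DRR}}\left(x_{\mathrm{RenderPlane}}\right) = \sum_{x \in \{x\}} \rho(x), \qquad I_{\mathrm{MIP}}\left(x_{\mathrm{RenderPlane}}\right) = \max_{x \in \{x\}} \rho(x)$$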
Where ρ is the intensity of voxels located at positions {x}, which is the set of all voxels within the
path of the ray.
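A third, related transfer function is depth shading, where the gray value is given by the distance of the first voxel above a rendering threshold from the rendering plane; its omitted formula is presumably of the form

$$I_{\mathrm{depth}}\left(x_{\mathrm{RenderPlane}}\right) = \max_{x \in \{x\},\ \rho(x) > \rho_{\mathrm{threshold}}} \left\lVert x - x_{\mathrm{RenderPlane}} \right\rVert$$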
Again, {x} is the set of voxels lying in the beam's path, and x_RenderPlane is the end point of the ray.
The voxel with the greatest distance to the image plane defines a surface.
32. What is Lambertian Shading? State the Lambert’s law for optical properties of surface.
This simple model is Lambertian shading; it is based on Lambert's law, which states that the
intensity of light reflected from a diffuse surface is proportional to the cosine of the viewing
angle.
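The following passage discusses intensity-based merit functions for registration. The first of these, referenced below as Equation 7.1, is the mean sum of squared differences (MSSD); its formula is not reproduced here, but it is presumably of the form

$$\mathrm{MSSD} = \frac{1}{N} \sum_{x,y,z} \left( \rho_{\mathrm{Base}}(x,y,z) - \rho_{\mathrm{Match}}(x,y,z) \right)^{2}$$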
where N is the total number of pixels. The inner mechanics of this measure are evident; if
two gray values ρ differ, the squared difference will be non-zero, and the larger the
difference for all image elements, the larger the sum of all squared differences will be. It is
evident that this measure will work best for completely identical images with the same
histogram content, which only differ by a spatial transform. A more versatile merit function
can be defined by using Equation 6.16; in this case, the correlation coefficient of paired gray
values ρBase (x, y, z) and ρMatch(x, y, z) is computed. Pearson’s cross-correlation assumes a
linear relationship between the paired variables; it does not assume these to be identical.
This is a clear improvement over the MSSD given in Equation 7.1. Still, it is confined by definition to
intramodal registration problems. And, as a further real-life complication, one can usually
not assume that the gray values ρ in two intramodal images follow a linear relationship.
Another possibility to compare intramodal images is to take a look at the difference image
I_diff of two images, and to assess the disorder in the result. Such a measure is pattern
intensity, given here only for the 2D case:
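The formula itself is not reproduced in the source; a commonly used form of the pattern intensity measure is

$$P_{r,\sigma}\left(I_{\mathrm{diff}}\right) = \sum_{x,y} \; \sum_{(v,w):\,(x-v)^{2}+(y-w)^{2} \le r^{2}} \frac{\sigma^{2}}{\sigma^{2} + \left( I_{\mathrm{diff}}(x,y) - I_{\mathrm{diff}}(v,w) \right)^{2}}$$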
d is the diameter of the local surrounding of each pixel; if this surrounding is rather
homogeneous, a good match is assumed. This measure, which has gained some popularity
in 2D/3D registration, assesses the local difference in gray values of the difference
image I_diff within a radius r, thus giving a robust estimate of how chaotic the difference
image looks. It is very robust, but it also requires an excellent match of image histogram
content; furthermore, it features a narrow convergence range and can therefore only be used
if one is already fairly close to the final registration result. The internal scaling factor σ is
mainly used to control the gradient provided by the pattern intensity measure M_PI.
Probabilistic models can be constructed by counting the occurrence of a particular binary sub
volume that is extracted from the registered volumes corresponding to various images.
Let M (x, y, z) be a function describing the model providing the spatial probability
distribution of the sub volume, then
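The omitted expression is presumably the voxel-wise average of the n registered binary subvolumes:

$$M(x,y,z) = \frac{1}{n} \sum_{i=1}^{n} S_{i}(x,y,z)$$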
Where n is the total number of data sets in the model and Si(x, y, z) is the ith sub volume.
The normalized eigenvector matrix of a binary volume will rotate the standard x-, y-, and z-
axes parallel to the principal axes of the volume. If the centroid of the volume is first translated to
the origin, then after rotation by the eigenvector matrix the x-, y-, and z-axes will be coincident
with the principal axes of the volume.
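As an illustration of the principal axes computation described above, the following Python sketch estimates the centroid and the eigenvector matrix of a binary volume; the function name and the toy volume are illustrative assumptions, and a determinant check would additionally be needed to turn the eigenvector matrix into a proper rotation.

```python
import numpy as np

def principal_axes(binary_volume):
    """Centroid and principal axes of a binary volume, from the eigenvectors
    of the covariance matrix of the object voxel coordinates."""
    coords = np.argwhere(binary_volume > 0).astype(float)   # (N, 3) voxel indices
    centroid = coords.mean(axis=0)
    centered = coords - centroid                             # translate centroid to origin
    cov = centered.T @ centered / len(coords)                # 3x3 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)                   # columns are eigenvectors
    order = np.argsort(eigvals)[::-1]                        # sort by decreasing variance
    return centroid, eigvecs[:, order]

# Toy example: an elongated box whose major axis lies along the third (z) index
vol = np.zeros((32, 32, 32), dtype=np.uint8)
vol[14:18, 14:18, 4:28] = 1
centroid, axes = principal_axes(vol)
print("centroid:", centroid)
print("major principal axis:", axes[:, 0])
```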
The PET data are registered with the MR data by applying the required translations and
rotations. The rotation angles are determined as described above. To express all points
(x_PET, y_PET, z_PET) of the PET data in the coordinate space of the MR data, the following
steps must be performed (a composite-transform sketch is given after the list):
1. Translate the centroid of the binary PET data to the origin.
2. Rotate the principal axes of the binary PET data to coincide with the x-, y-, and z- axes
3. Rotate the x-, y-, and z- axes to coincide with the MR principal axes.
4. Translate the origin to the centroid of the binary MR data.
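One way to write steps 1-4 as a single transform in homogeneous coordinates (a sketch under the assumption that R_PET and R_MR denote the normalized eigenvector matrices of the binary PET and MR volumes, and c_PET, c_MR their centroids) is

$$x_{\mathrm{MR}} = T_{c_{\mathrm{MR}}}\, R_{\mathrm{MR}}\, R_{\mathrm{PET}}^{T}\, T_{-c_{\mathrm{PET}}}\, x_{\mathrm{PET}},$$

where T_{-c_PET} translates the PET centroid to the origin (step 1), R_PET^T rotates the PET principal axes onto the x-, y-, and z-axes (step 2), R_MR rotates these axes onto the MR principal axes (step 3), and T_{c_MR} translates the origin to the MR centroid (step 4).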
C. Remove all voxels of the binary MR brain that lie outside the transformed FOV(i) box.
This is the new binary MR brain volume.
The final transformation parameters for registration of MR and PET data are obtained from the
last iteration.
8. Interpolate the gray-level PET data to match the resolution of MR data, to prepare the PET
data for registration with MR data.
9. Transform the gray-level PET data into the space of the MR slices using the last set of MR
and PET centroids and principal axes. Extract the slices from the transformed gray-level PET
data that match the gray-level MR image.
The IPAR algorithm allows registration of two 3-D image data sets in which one set does not
cover the entire volume but is a subvolume contained in the other data set. Figures 12.7a-c
show the MR (middle rows) and PET (bottom rows) brain images.
6. Derive the orthogonal projections which are used for image visualization in medicine and
explain. (13)
The basic idea of rendering is to generate a 2D image from 3D data; this is what every camera and every x-
ray machine actually does. It is therefore straightforward to mimic the behavior of these
devices mathematically, and we have already learned in Section 7.6 how we can derive the most
important property of an imaging device – the distance of the viewpoint (or x-ray focus) from the
3D scene to be imaged. However, one can simplify the projection operator from Equation 7.8 if
one assumes that the origin of all rays that project the 3D scene to the 2D imaging plane lies at
infinity, that is, by setting f, the distance of the observer from the projection plane, to infinity:
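The omitted operator is presumably the same orthogonal projection matrix given in Q22 above,

$$P_{\mathrm{orth}} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix}.$$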
This projection images everything onto the x-y plane; the eye point is located at infinity in the
direction of the z-axis. This is an orthogonal projection: all rays hitting the object are parallel. The
term infinite only refers to the fact that the viewer is located at a large distance from the object. It
therefore depends on the size of the object viewed. Orthogonal projection, which is
computationally more efficient than perspective rendering, is the common geometry for
rendering. We do, however, want to be able to look at our object from different viewpoints. In
real life, we have two possibilities to look at something from a different point of view. We can
change our position, or we can change the position of the object. When rendering an object, we
can do the same. We can either apply a volume transform V to every voxel x of our object and then
apply the projection, giving P V x, or we can apply another volume transform V to the projection
operator, giving V P x.
(ii) Raycasting
The projection operator is very helpful, but not very intuitive. Let’s go back to the idea of
simulating a camera, or an x-ray machine. Such a device records or emits rays of electromagnetic
radiation (for instance light). The most straightforward approach is to simulate an x-ray. X-rays
emerge from the anode of the x-ray tube and pass through matter. Depending on the density
and radio-opacity of the object being imaged, the x-ray is attenuated. The remaining intensity
produces a signal on the detector. Raycasting can simulate this behavior if we sum up all gray
values ρ in the path of the ray passing through a volume. This is, mathematically speaking, a line
integral. If we want to simulate a camera, the situation is similar, but a different physical process
takes place. Rather than penetrating the object, a ray of electromagnetic radiation is reflected.
Therefore we have to simulate a ray that terminates when hitting a surface, and which is
weakened and reflected; the amount of light hitting the image detector of a camera after reflection
is defined by the object's surface properties. These properties are defined by lighting models,
shading and textures. From these basic considerations, we can derive several properties of these
rendering algorithms:
• Volume rendering: A ray that passes through an object and changes its initial intensity or color during
this passage defines a volume rendering algorithm. Such an algorithm does not know about
surfaces but draws all of its information from the gray values in the volume. It is an intensity-
based algorithm.
• Surface rendering: If we simulate a ray that terminates when hitting a surface, we are dealing
with a surface rendering algorithm. The local gradient in the surrounding of the point where the
ray hits the surface determines the shading of the corresponding pixel in the image plane. The
drawback lies in the fact that a surface rendering algorithm requires segmentation.
Raycasting is an image-driven technique – each pixel in the imaging plane is assigned a value, since
by definition it is the endpoint of a ray. This is a huge advantage of raycasting, since round-off
artifacts such as the ones we encountered in Example 6.6.1 cannot occur. Every pixel in the imaging
plane has its own dedicated ray. If the voxels are large in comparison to the resolution of the
image, discretization artifacts may nevertheless occur. In such a case, one can interpolate between
voxels along the path of the ray by reducing the increment of the ray. Performance does, of
course, suffer from this.
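As an illustration of these ideas, the following Python sketch casts one orthogonal ray per pixel along the z-axis of a volume and evaluates the transfer functions discussed above; the function name, the toy volume, and the depth-shading convention are illustrative assumptions, not taken from the source.

```python
import numpy as np

def raycast_orthogonal(volume, mode="mip", threshold=0.0):
    """Minimal orthogonal raycaster: one ray per pixel of the rendering plane,
    cast along the z-axis of the volume (unit step size, no interpolation)."""
    vol = np.asarray(volume, dtype=float)        # shape (nx, ny, nz)
    if mode == "drr":
        # Summation transfer function: line integral of intensities along the ray
        return vol.sum(axis=2)
    if mode == "mip":
        # Maximum intensity projection: brightest voxel along each ray
        return vol.max(axis=2)
    if mode == "depth":
        # Depth shading: distance of the first voxel above the threshold
        # from the rendering plane (assumed here to lie at z = nz)
        hit = vol > threshold
        first = np.argmax(hit, axis=2)           # index of first voxel above threshold
        depth = vol.shape[2] - first
        depth[~hit.any(axis=2)] = 0              # rays that hit nothing stay black
        return depth
    raise ValueError("mode must be 'mip', 'drr' or 'depth'")

# Toy volume: a bright sphere embedded in a dim background
x, y, z = np.mgrid[0:64, 0:64, 0:64]
vol = 0.1 * np.ones((64, 64, 64))
vol[(x - 32) ** 2 + (y - 32) ** 2 + (z - 32) ** 2 < 15 ** 2] = 1.0

mip_image = raycast_orthogonal(vol, mode="mip")
drr_image = raycast_orthogonal(vol, mode="drr")
print(mip_image.shape, mip_image.max(), drr_image.max())
```

Reducing the increment of the ray, as described above, would correspond to resampling the volume along z before applying the same transfer functions.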
Again, {x} is the set of voxels lying in the beam's path, and x_RenderPlane is the end
point of the ray. The voxel with the greatest distance to the image plane defines a surface; it is
also the first voxel which lies above a rendering threshold encountered by the ray. Its distance
gives the gray value of the pixel in the rendering plane the ray aims at. What is remarkable
about Equation 8.4, when comparing it to the volume rendering transfer functions, is the fact
that ρ, the intensity of the voxel, does not play a role here; depth shading is, in fact, a surface
rendering technique.
9. Explain surface-based rendering in detail with necessary expressions and diagrams. (13)
Refer to the answer of Q. No. 8 (ii).