next up previous
Next: OVERVIEW OF NAVIGATION Up: Interactive Navigation Inside 3D Previous: Interactive Navigation Inside 3D

 

Introduction

Many applications require the interactive visualization of anatomical structures in three-dimensional (3D) radiological images [1,2,3]. In many cases, single static two-dimensional (2D) rendered views of the structures are sufficient. But several researchers have noted that an animated sequence of views can reveal more information in some circumstances, even if the quality of the individual views is degraded [4,5,6]. In particular, navigation through a 3D medical image could provide useful information on distributed branching structures, such as the bronchial passages, coronary arteries, human fetus, intestinal tract, and cerebral vasculature [7,8,9,10,11,12].

To perform this navigation, one would first scan the subject to produce a high-resolution 3D image of the relevant anatomy (done off-line!). Then, using a computer-based system, the user could load in the 3D image and navigate inside of it, observing various structures along the way. The 3D image acts as a ``virtual environment'' representing the anatomy. The computer provides the means to navigate through this environment. During navigation, the computer generates volume-rendered views at various viewing sites along the navigation path. Figure 1 -- and the work of this paper -- demonstrates that such navigation can in fact be done.

With such a system, a physician could conceivably examine the anatomy with impunity and view any structures contained in the environment. The interaction resembles that of an endoscopic examination. But it does not suffer from the complications that can arise during an in vivo examination (how can you ``hurt'' an image?). Such a procedure can be effective, though, only if the navigation can be accomplished at comfortable interactive speeds. This demands real-time volume rendering.

We present an inexpensive volume rendering method that can generate a sequence of views along an interactively defined path inside a 3D image. The method works in real time and forms part of a system for dynamic navigation through 3D radiological images. We provide pictorial and numerical results for 3D pulmonary analysis.

Figure 1. Example 3D Voyage through the 3D bronchial passages. Section 4 gives details. (a) (Top figure) View 1 through View 6 -- sample 3D-rendered views along a path depicted in Figure 1b. (b) (Bottom figure) Supplementary orthographic projection images for voyage depicted in Figure 1a. Green line indicates navigation path through 3D image. Red dot indicates last viewing site. (See color plate at end of proceedings.)

Much research has been carried out on fast rendering methods. One approach has been to develop algorithms for special-purpose hardware (e.g., [13,14,15,16,17]), but such hardware can be prohibitively expensive for routine use. Other rendering methods exist for reducing the computation time [18,19,20,21,22,23,24,25,26], but these methods either lack the flexibility to quickly compute arbitrary views along a path inside a 3D image, sacrifice too much accuracy, or consider an inapplicable rendering scenario. For example, the method of [18] can only compute views that observe the 3D image data from the outside; it cannot compute views as one travels inside the data. As another example, [26] permits fast rendering, but it considers a stationary viewpoint for observing dynamically changing flow data. Researchers have employed interactive viewing of 3D image data during surgery [27,28], but these efforts have not involved complex rendering or interactive navigation inside the data along an arbitrary path. Other researchers studying volume rendering have suggested methods in which the first view is computed accurately and subsequent views in a sequence are computed by assuming temporal coherence [26,5,8,29,30,31]. If the viewpoint varies slowly from one view to the next, then consecutive views tend to have substantial similarity. This observation is the temporal-coherence concept.
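As a toy numerical illustration of temporal coherence (not any of the cited methods), the sketch below advances a viewing site one voxel per frame through a smooth synthetic volume and measures the frame-to-frame change. The "view" here is just an orthographic projection; all names and sizes are invented for illustration.

```python
import numpy as np

# Smooth synthetic 3D image; real anatomy is likewise spatially coherent.
i, j, k = np.indices((64, 64, 64))
vol = 2.0 + np.sin(j / 8.0) * np.cos(i / 8.0)

def render(x0):
    """Stand-in 'view' from viewing site x0: an orthographic projection
    (sum along z) of a 32-voxel-wide window of the volume."""
    return vol[:, x0:x0 + 32, :].sum(axis=2)

# Advance the viewing site one voxel per frame and compare frames.
prev = render(0)
changes = []
for x0 in range(1, 8):
    cur = render(x0)
    changes.append(np.abs(cur - prev).mean() / prev.mean())
    prev = cur

# Consecutive frames differ by only a few percent: the redundancy
# that temporal-coherence schemes reuse instead of recomputing.
print(max(changes))
```

For spatially coherent data, each one-voxel step perturbs every pixel only slightly, so most of the previous frame's computation remains valid for the next frame.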

Our efforts use temporal coherence to define a new ray-casting scheme based on the approximate discrete Radon transform (ADRT) [22,23,24]. ADRT exploits the considerable redundancy that arises when rays are cast during view computation: adjacent rays intersect many of the same data points in the 3D image. ADRT captures this redundancy by having adjacent rays share ray segments. As a result, ADRT can compute projections at many different orientations simultaneously, in far less time than computing them individually. Unfortunately, the past efforts of Brady et al. cannot accommodate the constant interactive view-site changes required for navigation. The method in this paper permits interactive navigation through a large 3D image. Each view is generated using a subset of data in the vicinity of the current viewing site. The user can interactively rotate and translate his/her position within the image (virtual anatomy), producing an animated sequence of views.
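To illustrate the segment-sharing idea (this is a toy 2D sketch, not Brady et al.'s implementation), the code below sums an image along digital lines using a two-scale recursion: a line spanning n columns with total rise h is the concatenation of two half-lines, each rising about h/2 rows. Memoizing the half-line sums lets lines at nearby slopes reuse shared segments, so far fewer single-pixel reads occur than if each ray were cast independently. All names and sizes are illustrative.

```python
import numpy as np
from functools import lru_cache

N = 64  # image width in columns (a power of two)
rng = np.random.default_rng(0)
img = rng.random((2 * N, N))  # extra rows so rising lines stay in bounds

leaf_reads = [0]  # count of single-pixel reads actually performed

@lru_cache(maxsize=None)
def line_sum(x0, y0, n, h):
    """Sum of img along the digital line starting at (x0, y0), spanning
    n columns and rising h rows in total (0 <= h < n). The line splits
    into two half-lines; memoization makes lines at nearby slopes share
    half-line sums -- the redundancy that segment sharing exploits."""
    if n == 1:
        leaf_reads[0] += 1
        return img[y0, x0]
    half = n // 2
    return (line_sum(x0, y0, half, h // 2)
            + line_sum(x0 + half, y0 + (h + 1) // 2, half, h // 2))

# Sum along digital lines at all N rises h = 0..N-1 from one corner.
sums = [line_sum(0, 0, N, h) for h in range(N)]

# Casting the N rays independently would read N*N = 4096 pixels;
# shared half-segments cut the single-pixel reads well below that.
print(leaf_reads[0], "<", N * N)
```

The h = 0 line reduces to a plain row sum, which gives a quick sanity check; the memoization cache plays the role of the shared ray segments described above.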

Section 2 presents an overview of the navigation scenario and the top-level ideas of our efforts. Section 3 gives a detailed discussion of the methods. Section 4 presents results. Finally, Section 5 offers concluding remarks.






 

Krishnan Ramaswamy
The Multidimensional Image Processing Lab