Real-Time Multi-Dimensional Visualization
Scientific visualization describes the display of multi-dimensional data on mostly standard two-dimensional computer screens. The term 'scientific' in this case means that the data we want to represent visually is strongly tied to physical coordinates. For example, a three-dimensional CT scan of the human body records tissue density on an underlying grid of measurement points. Consequently, we can exploit the extremely strong visual cue of spatial position to convey information to the viewer.
Rendering three-dimensional medical data on a computer screen is a well-researched topic; for example, Direct Volume Rendering (DVR) of patient scans is widely used. Sometimes, however, there are more dimensions in the data that we want to show simultaneously. These additional dimensions may include development over time (e.g. tumor growth or the behaviour of a simulated radiofrequency ablation (RFA)), making the data four-dimensional, but also simultaneous measurements, such as the link between tissue temperature and cell-death probability during a simulated RFA. In such cases, the input dimensionality is virtually unlimited, and we aim to find visual metaphors that project as many input dimensions as possible onto a two-dimensional screen simultaneously, assisting the user in analyzing the data efficiently and effectively.
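To make this idea more concrete, the sketch below shows how two simultaneous measurements could be mapped onto the visual attributes of a single sample via a two-dimensional transfer function, so that both extra data dimensions become visible in one rendered image. It is a minimal, illustrative example only; the value ranges, the colour mapping and the function name are assumptions and do not reflect the actual implementation.

    #include <algorithm>

    // Hypothetical RGBA colour with channels in [0,1].
    struct RGBA { float r, g, b, a; };

    // Minimal sketch of a two-dimensional transfer function: two simultaneous
    // scalar measurements (here: tissue temperature in deg C and cell-death
    // probability in [0,1]) are mapped onto a single colour and opacity.
    // The concrete mapping below is purely illustrative.
    RGBA transferFunction2D(float temperature, float deathProbability)
    {
        // Normalise temperature to [0,1] over an assumed ablation-relevant range.
        float t = std::clamp((temperature - 37.0f) / (100.0f - 37.0f), 0.0f, 1.0f);
        float p = std::clamp(deathProbability, 0.0f, 1.0f);

        RGBA c;
        c.r = t;               // hotter tissue -> more red
        c.g = 0.2f;            // constant green component for contrast
        c.b = 1.0f - t;        // cooler tissue -> more blue
        c.a = 0.1f + 0.9f * p; // likely-dead tissue is rendered more opaque
        return c;
    }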
Achieving these goals also requires high computational performance. In most use cases, the operator not only wants to quickly get an overview of the data, but also wants to delve into details. The former requires high interactivity and low latency, while the latter additionally calls for high data resolution and consequently increases computational time and memory consumption. Therefore, we make extensive use of the power of modern Graphics Processing Units (GPUs). Nowadays, GPUs are closer to being co-processors than mere display adapters, providing a large amount of raw parallel computing power.
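As a simplified illustration of the kind of work offloaded to the GPU, the following CPU-side sketch reproduces the front-to-back compositing loop that a DVR ray-casting kernel evaluates for every pixel, including early ray termination as one common performance optimization. The trivial grey-scale classification and all names are assumptions made for this sketch, not the actual engine code.

    #include <algorithm>
    #include <vector>

    struct RGBA { float r, g, b, a; };

    // Minimal sketch of front-to-back compositing along one ray through a
    // scalar volume: each sample is classified by a (here: trivial) transfer
    // function and accumulated until the ray is effectively opaque.
    RGBA compositeRay(const std::vector<float>& samplesAlongRay)
    {
        RGBA dst{0.0f, 0.0f, 0.0f, 0.0f};
        for (float s : samplesAlongRay) {
            float v = std::clamp(s, 0.0f, 1.0f);
            RGBA src{v, v, v, v * 0.05f};     // toy grey-scale classification
            float w = (1.0f - dst.a) * src.a; // remaining transparency
            dst.r += w * src.r;
            dst.g += w * src.g;
            dst.b += w * src.b;
            dst.a += w;
            if (dst.a > 0.99f) break;         // early ray termination
        }
        return dst;
    }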
This rapid development in hardware allows us to provide sophisticated visualization schemes to the end-user. Since transferring images at high frequency over the internet can severely impair interactivity during exploration of the data, we use a locally executed application, similar in nature to a web browser, called VisApp. The VisApp combines the best of both worlds: we implemented a standard web browser, locked onto the GoSmart webpage, and enhanced it by introducing a locally executed 3D visualization engine. The 3D view integrates seamlessly into the web environment and provides the user with visualizations of patient images, segmentation results and additional volumetric simulation data. Furthermore, we employ techniques for emphasizing the data in focus. For example, if the user investigates a simulation result, we automatically reduce the emphasis on the surrounding anatomic structures. Since such structures could block the view of the simulation and might even make it hard to find, we adapt the representation to emphasize the most important data.
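A minimal sketch of such an emphasis rule is given below, assuming the focus region can be approximated by a sphere around the simulation result: the opacity of surrounding anatomic structures is attenuated smoothly with decreasing distance to the focus, so that context never fully occludes the data under investigation. The centre, radius and falloff used here are illustrative assumptions.

    #include <algorithm>
    #include <cmath>

    // Simple 3D point; purely illustrative.
    struct Vec3 { float x, y, z; };

    // Opacity factor applied to samples of surrounding anatomic structures:
    // 0 inside the focus region (context fully suppressed), ramping smoothly
    // back to 1 within one additional radius.
    float contextOpacityFactor(const Vec3& sample, const Vec3& focusCentre,
                               float focusRadius)
    {
        float dx = sample.x - focusCentre.x;
        float dy = sample.y - focusCentre.y;
        float dz = sample.z - focusCentre.z;
        float d  = std::sqrt(dx * dx + dy * dy + dz * dz);

        float t = std::clamp((d - focusRadius) / focusRadius, 0.0f, 1.0f);
        return t * t * (3.0f - 2.0f * t); // smoothstep falloff
    }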
Besides providing automatic techniques for optimizing the visual representation of such comprehensive datasets, we additionally support flexible and exhaustive parameterization for the end-user. We attempt to reduce the complexity of finding a suitable representation, even in difficult cases, by introducing techniques that allow the end-user to easily and effectively modify the representation with only a few user interface elements. Furthermore, the user can easily store a suitable configuration and load it as a preset for upcoming patients.
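The sketch below illustrates one straightforward way such presets could be stored and loaded, using a plain key=value text file; the file format and the parameter names are assumptions for illustration and not the actual preset mechanism.

    #include <fstream>
    #include <map>
    #include <string>

    // A preset is treated here as a flat set of named visualization parameters.
    using Preset = std::map<std::string, std::string>;

    // Write the current parameters to a plain key=value text file.
    void savePreset(const Preset& p, const std::string& path)
    {
        std::ofstream out(path);
        for (const auto& kv : p)
            out << kv.first << "=" << kv.second << "\n";
    }

    // Read a previously stored preset back into memory.
    Preset loadPreset(const std::string& path)
    {
        Preset p;
        std::ifstream in(path);
        std::string line;
        while (std::getline(in, line)) {
            auto pos = line.find('=');
            if (pos != std::string::npos)
                p[line.substr(0, pos)] = line.substr(pos + 1);
        }
        return p;
    }

For example, savePreset({{"transferFunction", "hot-metal"}, {"contextOpacity", "0.3"}}, "liver_rfa.preset") would write a configuration that can later be restored with loadPreset and applied to the next patient.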
The combination of all these algorithms allows the user to efficiently and effectively investigate patient data, segmentation results and simulated MICT, concurrently in both 2D and 3D.
