Ten years of visualisation or VR for science: What is or was your favourite project?
Dr Thomas Odaker: That is hard to say; over the years my personal favourites have changed several times. There was the virtual reconstruction of the Kaisersaal in Bamberg, the largest and most detailed visualisation of cosmic turbulence, the 3D video interviews with Holocaust survivors, and the development of the MOOSAIK app featuring AR – a number of recent projects that demanded our creativity, knowledge and experience. I still love explaining Prof. Hans-Peter Bunge’s Earth model, which was created around 2016 and led to a lasting partnership with the Department of Geophysics at Ludwig-Maximilians-Universität München (LMU). It shows the temperature distribution in the Earth's mantle and the drift of the continental plates. This work has everything we love: the technical challenge of importing large data sets into graphics programmes and converting them into images, and developing workflows and interfaces for this. The results are great images that are fun to look at and that improve the understanding of processes. Above all, however, the model is being used sustainably: on the one hand by museums, and on the other by Bunge's team, which is constantly developing it further. We have achieved a lot here, which makes me very happy and also a little proud.
Which research disciplines need the V2C technology most often?
Odaker: Art history, archaeology, the natural sciences such as astrophysics and geophysics, engineering and environmental sciences, human and veterinary medicine – the fields of application are very broad. Historical and artistic fields of study are heavily represented because researchers there traditionally deal with spaces, because image data and other information are usually already available, and because the benefits of using visualisation methods and virtual reconstructions quickly become apparent. They can walk through an ancient Babylonian villa, record artefacts or illustrate construction phases. Technology could also be illustrated in virtual space: with the Technical University of Munich (TUM), we modelled a gas-insulated surge arrester in VR in 2014 and showed how it works inside. Surprisingly, however, engineers come to us less often. They probably still lack practical experience of what is feasible with VR.
Have the areas in which the V2C is used changed?
Odaker: Essentially no, but the technology, the software and thus the work processes have. In 2016, head-mounted displays (HMDs) conquered the mass market, which led to a VR hype in business and science and also to partly unrealistic expectations. Above all, however, the software environment is much more practical, diverse and flexible today than it was ten years ago. Researchers can now display a lot with graphics programmes on their notebooks, for which they still had to develop their own tools at the LRZ back then. Consequently, the university departments are doing more and more themselves; we have reacted to this development and offer remote access to graphics programmes.
A supercomputer is replaced about every six years - is that similar for visualisation technology?
Odaker: In the case of supercomputers, economic efficiency is one of the reasons why they have to be replaced so quickly. The technology and devices for visualisation and VR have become cheaper and easier to use, and in recent years there has not been a major leap in visualisation technology. The five-sided CAVE and the Powerwall of the LRZ were set up at the start of the V2C with the best, most powerful projectors and computers of the time and are consequently still considered very good systems. Now LED technology is becoming more and more prevalent – whether it makes sense to replace the projectors of the CAVE with it in a few years and thereby simplify the system remains to be seen. We are watching this development with interest.
The V2C not only provides technology, it also advises researchers: where do they need support?
Odaker: Scientists are experts in their respective fields, but not necessarily in the use of graphics programmes or game engines, i.e. software that can be used to build virtual worlds for games or for academic research. The V2C therefore shows scientists what is feasible and where the limits of this technology lie. We also explain what data is needed for visualisation or for building VR, which programmes can be used to process it and what problems arise when dealing with large data sets. Of course, we advise university departments and institutes on the acquisition of technology or on setting up workflows, for example if they want to digitise specimens or other display objects and make them accessible on an online platform, as the LMU's Faculty of Veterinary Medicine does. However, the main task of the V2C is to optimise applications for virtual reality and to visualise large data sets. For this, the data must be prepared and often exchanged between different programmes, and interface problems have to be solved through programming. VR needs a lot of computing power, so optimising the models is also important to improve efficiency.
The V2C conducts its own research or participates in projects - what areas does it focus on?
Odaker: The emergence of HMDs has shifted the focus. We used to work intensively on the processing of large amounts of data, the display of 3D images and the system issues associated with this. But the workflows have become simpler, so we are now more concerned with human-machine interaction, i.e. how users interact with objects in virtual space, as well as with the effect of interfaces. In MOOSAIK, we tested together with environmental organisations how AR works in environmental communication and in describing the fauna and flora of a moss landscape, and how such apps are developed. In the European centre of excellence CompBioMed, we are contributing our experience in visualising large data sets on supercomputers, and also regarding interactive media, to the construction of a digital twin. After the blood flow in brain regions, in 2021 the blood flow in the arm was visualised in VR. With these visualisations, blood flow in other regions of the human organism can be simulated and visualised more easily.
Do researchers still come to the LRZ to view visualisations or immerse themselves in VR?
Odaker: HMDs have become inexpensive, are now purchased by university departments themselves and make immersive images available there. With our five-sided CAVE and the Powerwall, we cover the upper end of the spectrum of visualisation possibilities. Such systems are still very expensive and extremely complex to set up and operate, but they offer the best way to immerse oneself in research data, to move around in it and to absorb a model with all one’s senses. It is always a pleasure to see how researchers move around in the CAVE and gain new perspectives for their research. In some research and visualisation projects, we work with game engines to display data, creating applications that can be ported between HMD and CAVE with relatively little effort.
Soon the V2C will offer not only graphics software via remote access, but also graphics cards and other tools.
Odaker: In many collaborations we have realised that chairs and institutes often lack powerful graphics cards and the corresponding tools to post-process visualisations or VR applications or to process very large data sets. RemoteVIS will therefore offer online access to various graphics tools via the LRZ Cloud. Large amounts of data can thus be processed and rendered remotely, and the result can in turn be stored on local systems, provided there is sufficient storage space. RemoteVIS is still in the test phase, and it will certainly be a service for a very specialised target group.
The Covid-19 pandemic has accelerated digitisation in universities and research: have new tasks arisen for the V2C through virtual lectures?
Odaker: No, on the contrary, usage figures have plummeted because HMDs are rather problematic from a hygiene point of view. They can be wiped down after each use, but the lenses may only be wiped with a dry cloth. Video conferencing systems were used for lectures and seminars; there are no established software solutions yet to bring larger groups together in VR. However, as a consequence of the Covid-19 measures, we set up virtual rooms in Mozilla Hubs/The Virtual World to present projects. This worked wonderfully: small groups could meet there virtually and, represented by avatars, talk to each other.
Metaverse and other three-dimensional virtual worlds are currently in vogue: are they becoming a future topic for science and research?
Odaker: This development is certainly interesting, but at the moment I am rather sceptical about the announcements by Meta/Facebook and other companies. Is a three-dimensional internet even necessary and sensible? Many questions remain open: the use of user data, data protection and privacy, for example; a uniform system for all is missing, as is software for designing spaces. However, we have already collaborated with some research projects that build VR for analysis. In 2013, the V2C worked with researchers from the University of Tokyo and Tohoku University in Sendai to record and digitise sculptures from the Roman classical period in 3D. Software for location-independent cooperation between researchers was used and tested for their evaluation. We are also supporting the Bavarian State Library in digitising sculptures and other exhibits for Bavarikon, the virtual art treasure of Bavaria. The State Library is also working on virtual reading rooms where people can meet to work together and handle digitised material. The Gauss Centre for Supercomputing (GCS), the association of German high-performance computing centres, is also working on the topic of VR and analysis. This is exciting for science and also for the V2C, but these are still mainly special cases; software suitable for mass use is lacking.
Which developments will drive the V2C in future?
Odaker: It’s rather the technology, LED for example, that could simplify the construction of the next CAVE. The technology has already proved itself in the LED wall we installed in 2018. New, easier-to-use software and construction kits for online worlds will also change and improve our work and scientific visualisation.
And what would be a future dream project for the V2C?
Odaker: A project that challenges us, such as Professor Bunge's Earth model, and that we therefore enjoy working on: one where we can solve problems in visualising data in partnership with researchers and institutes, where we can develop and simplify workflows through our own developments, as we did for CompBioMed. Where we set standards, as we did for the corpus of Baroque ceiling painting, and these representations are used and developed sustainably. Where we can deal with complex, difficult data sets and manage them. In short, where we are properly challenged.