
Speakers


Professor Dieter W. Fellner
Director - Fraunhofer Institute for Computer Graphics Research (IGD)
Professor - Computer Science at TU Darmstadt
Germany



Abstract
Museums and collections in the industrialized countries alone host a vast number of cultural heritage artefacts; estimates consistently speak of hundreds of millions. The collections of the Smithsonian Institution, for example, hold approximately 137 million artefacts, with a large number of objects added each year. And according to consistent estimates, 90% of all existing artefacts in archives and museums are still awaiting 'their discovery'. In analogy to the 2D digitization of classical print material over the past decades, 3D digitization will, in the beginning, mainly focus on the 3D 'back digitization' of collections, together with the digital markup, classification, and archiving of existing artefacts as well as new collection items. All this comes with the obvious benefits of broad, worldwide, and parallel availability (from the interested individual to the expert member of a collaborative research team), the use of 'digital replicas' in hybrid exhibitions (i.e. exhibitions merging real with virtual artefacts), and the option of high-fidelity replicas (based on 3D printing). Given the substantial number of artefacts in worldwide collections, 3D digitization of cultural heritage bears an enormous market potential. Up to now, however, 3D digitization has been prohibitively time consuming and thus too expensive. Studies by the Victoria and Albert Museum in London, carried out together with the world's leading labs in this field, show that the average time to digitize a 3D artefact ranges between half a day and two days, with most of the time spent manually positioning the digitization device. The talk will analyse the challenges that have to be met in order to automate and industrialize the complete workflow for the 3D digitization of cultural heritage artefacts, and will address both the research results to be delivered and the opportunities for a new technology and service market aiming at a complete, fast, effective, and economical approach to digitization, classification, annotation, and archiving. High-end visualization together with high-speed multimedia networks will then warrant truly worldwide and distributed access to all of our cultural heritage.

Speaker's Brief Bio
Dieter W. Fellner is professor of computer science at TU Darmstadt, Germany, and Director of the Fraunhofer Institute for Computer Graphics Research (IGD) at the same location. He is also professor at TU Graz, Austria, where he established the Institute of Computer Graphics and Knowledge Visualization. His research activities over recent years have covered efficient rendering and visualization algorithms, generative and reconstructive modeling, virtual and augmented reality, graphical aspects of internet-based multimedia information systems, cultural heritage, and digital libraries.

Associate Professor Dr. Mark Sagar
Director - Laboratory for Animate Technologies
Auckland Bioengineering Institute, The University of Auckland
New Zealand

Speech Title: Autonomous Animation of Virtual Human Faces 
(Chair: Prof Nadia Magnenat Thalmann)

Abstract
The talk will describe a neurobehavioural modelling and visual computing framework for integrating realistic interactive computer graphics with neural systems modelling, allowing real-time autonomous facial animation and interactive visualization of the underlying neural network models. The system has been designed to integrate and interconnect a wide range of cognitive science and computational neuroscience models to construct embodied interactive psychobiological models of behaviour. An example application of the framework combines models of the facial motor system, physiologically based emotional systems, and basic neural systems involved in early interactive behaviour and learning, and embodies them in a virtual infant rendered with realistic computer graphics. The model reacts in real time, as a dynamic system, to visual and auditory input and to its own evolving internal processes. The live state of the model, which generates the resulting facial behaviour, can be visualized through graphs and schematics or by exploring the activity mapped onto the underlying neuroanatomy.
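To make the "dynamic system" idea concrete, here is a minimal, purely illustrative Python sketch of an internal state model driving facial expression weights in a real-time loop. The leaky-integrator dynamics, the two-variable state, and all names (W, blendshape_weights, and so on) are hypothetical simplifications for exposition, not the speaker's actual framework.

import numpy as np

# Illustrative sketch only: a toy leaky-integrator "affective" state driving
# facial blendshape weights in a real-time loop, in the spirit of embodied
# neural models that generate facial behaviour. All names and dynamics are
# hypothetical, not the speaker's actual framework.

DT = 1.0 / 60.0          # simulation step (60 Hz frame rate)
TAU = 0.5                # time constant of the internal state (seconds)

# Two toy internal variables: arousal and valence.
state = np.zeros(2)

# Hypothetical linear mapping from internal state to blendshape weights
# (rows: smile, brow_raise; columns: arousal, valence).
W = np.array([[0.2, 0.8],
              [0.9, 0.1]])

def step(state, stimulus):
    """One Euler step of the leaky-integrator dynamics dx/dt = (-x + input) / tau."""
    return state + DT * (-state + stimulus) / TAU

def blendshape_weights(state):
    """Map the internal state to facial blendshape weights clamped to [0, 1]."""
    return np.clip(W @ state, 0.0, 1.0)

# Real-time loop (here: a fixed number of frames with a synthetic stimulus).
for frame in range(180):
    t = frame * DT
    stimulus = np.array([np.sin(2 * np.pi * 0.3 * t) ** 2,  # fluctuating arousal
                         1.0 if t > 1.0 else 0.0])          # positive valence after 1 s
    state = step(state, stimulus)
    weights = blendshape_weights(state)
    # A renderer would apply `weights` to the face rig each frame; the same
    # `state` vector could be plotted live to visualize the model's dynamics.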

Speaker's Brief Bio
Academy Award winner Dr. Mark Sagar is the director of the Laboratory for Animate Technologies at the Auckland Bioengineering Institute. Mark is interested in bringing digital characters to life using artificial nervous systems, to empower the next generation of human-computer interaction. His laboratory is pioneering neurobehavioural animation, which combines biologically based models of faces and neural systems to create live, naturally intelligent, and highly expressive interactive systems. Mark previously worked as the Special Projects Supervisor at Weta Digital and Sony Pictures Imageworks and developed technology for the characters in blockbusters such as Avatar, King Kong, and Spider-Man 2. He co-directed research and development for Pacific Title/Mirage and Life F/X Technologies, which led the groundbreaking development of realistic digital humans for film and e-commerce applications driven by artificial intelligence. His pioneering work in computer-generated faces was recognized with two consecutive Scientific and Engineering Oscars, in 2010 and 2011. Dr. Sagar holds a Ph.D. in Bioengineering.

Associate Professor Yiyu Cai
School of Mechanical & Aerospace Engineering, College of Engineering 
Nanyang Technological University
Singapore

Speech Title: Virtual Reality and Cardiovascular Interventions 
(Chair: Prof Nadia Magnenat Thalmann)

Abstract
Cardiovascular diseases are the leading cause of death worldwide. Cardiovascular interventions are minimally invasive surgical procedures that are used more and more routinely in the diagnosis and treatment of these diseases. Due to the complex nature of the cardiovascular system, interventional procedures rely heavily on medical imaging and the experience of interventionalists. This talk will present our image-based Virtual Reality research for cardiovascular and intra-cardiac interventions. More specifically, tagged-MRI-based cardiac motion extraction and progressive heart reconstruction for electro-mechanical mapping will be discussed. The talk will also report on the training of junior cardiac interventionalists using an interactive Virtual Reality system developed with patient-specific medical images and a tactile user interface. It will end with a discussion of future intra-cardiac and cardiovascular interventions aided by Virtual Reality technology.

Speaker's Brief Bio
Dr Yiyu Cai has nearly 20 years of experience developing image-based 3D Virtual Reality technology to improve cardiovascular and intra-cardiac interventions, in collaboration with clinicians and the medical device industry. He is currently Associate Professor at Nanyang Technological University (NTU), Singapore, where he teaches Computer Graphics & Virtual Reality, directs the Strategic Research Program of VR and Soft Computing, and heads the Computer-Aided Engineering Labs. As principal investigator, Dr Cai has led a variety of research projects funded by Singapore's Ministry of Education, the Ministry of Information, Communications and the Arts, the Singapore Millennium Foundation, the A*STAR Science and Engineering Research Council, and the private sector. He co-founded the ACM SIGGRAPH conference series on Virtual Reality Continuum and Its Applications in Industry (VRCAI) in 2004. He holds a BSc in Mathematics, an MSc in Computer Graphics & Computer-Aided Geometric Design, and a PhD in Mechanical Engineering.

Associate Professor Jianmin Zheng
School of Computer Engineering, College of Engineering 
Nanyang Technological University
Singapore

Speech Title: Real-time Deformation of Complex Objects 
(Chair: Prof Daniel Thalmann)

Abstract
Creating and controlling deformations of geometric shapes is a fundamental problem in computer graphics and computer animation. Techniques for real-time deformation are widely used in applications such as character animation, computer games, VR simulation, and product design. Extensive research has been conducted on this topic, and many relatively mature deformation techniques have been developed. However, as graphics technology advances, geometric models are becoming increasingly complex, and there is demand for next-generation deformation techniques that can deform objects with complex geometry in real time, provide high-level and intuitive deformation control, and support flexible topology. In this talk, I will review state-of-the-art technologies for 3D deformation, including geometry-based, physics-based, and example-based approaches. Then I will present our recent work on freeform deformation and example-based elastic deformation. Some thoughts on next-generation deformation techniques and their underlying technical challenges will be discussed as well.
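As a point of reference for the geometry-based family mentioned above, the following is a minimal Python sketch of classical freeform deformation (FFD) in the style of Sederberg and Parry, where a trivariate Bernstein lattice of control points warps every point embedded in its volume. The lattice resolution and function names are illustrative choices, not taken from the talk.

import numpy as np
from math import comb

# Minimal sketch of classical freeform deformation: a 3D lattice of control
# points warps every point embedded in its bounding volume via a trivariate
# Bernstein tensor product.

def bernstein(n, i, t):
    """Bernstein basis polynomial B_{i,n}(t), evaluated elementwise on t."""
    return comb(n, i) * (t ** i) * ((1 - t) ** (n - i))

def ffd(points, lattice, bbox_min, bbox_max):
    """Deform points of shape (m, 3) with a control lattice of shape (l+1, n+1, p+1, 3)."""
    l, n, p = (d - 1 for d in lattice.shape[:3])
    # Map each point to local (s, t, u) coordinates in [0, 1]^3.
    stu = (points - bbox_min) / (bbox_max - bbox_min)
    out = np.zeros_like(points)
    for i in range(l + 1):
        bi = bernstein(l, i, stu[:, 0])
        for j in range(n + 1):
            bj = bernstein(n, j, stu[:, 1])
            for k in range(p + 1):
                bk = bernstein(p, k, stu[:, 2])
                out += (bi * bj * bk)[:, None] * lattice[i, j, k]
    return out

# Usage: an undisplaced lattice reproduces the input exactly; moving one
# control point bends the embedded shape smoothly.
bbox_min, bbox_max = np.zeros(3), np.ones(3)
grid = np.stack(np.meshgrid(*(np.linspace(0, 1, 3),) * 3, indexing="ij"), axis=-1)
pts = np.random.rand(100, 3)                     # toy point cloud inside the box
lattice = grid.copy()
lattice[1, 1, 2] += np.array([0.0, 0.0, 0.4])    # pull one control point upward
deformed = ffd(pts, lattice, bbox_min, bbox_max)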

Speaker's Brief Bio
Jianmin ZHENG is an associate professor in the School of Computer Engineering at Nanyang Technological University (NTU). His research interests lie in geometric modeling, computer graphics, computer animation, virtual reality, medical visualization, and interactive digital media. He has published over 150 papers in international journals and conferences such as ACM TOG, IEEE TVCG, SIGGRAPH, Eurographics, and CVPR. He received his BS and PhD from Zhejiang University. Prior to joining NTU in 2003, he was a postdoctoral researcher and research staff member at Brigham Young University, and a professor of mathematics at Zhejiang University. He was conference co-chair of Geometric Modeling and Processing 2014 and has served on the program committees of many international conferences, including ACM SIGGRAPH Asia 2008 and 2013.
