Speakers

Professor Nadia Magnenat Thalmann
Director, Institute for Media Innovation, NTU, Singapore

Title: The Humanoid Social Robot Nadine: A Social Companion?

Abstract
Humans have always dreamed of having robots do their dull tasks and help them perform basic actions. For some time now, however, researchers and companies have been developing robots as our social companions. Today, social companions are mostly intended for people in need, such as the elderly, children, and people with autism. They can stay with people, help them with conversation and reading, detect their actions and emotions, and react in an appropriate way. Social robots have many sensors and actuators, such as cameras and microphones, and can consequently see, listen, talk, and monitor situations. The main ongoing research is to analyse the meaning of these real-life social signals: who speaks, with what intonation and intention; who gazes at whom; who gestures, which gestures, and with what meaning; and what the message and the intention are. The most difficult part today is to define a comprehension model of social cues and a social decision-making process that allow robots like Nadine to answer appropriately, with a variety of adequate sentences and attitudes corresponding to real social cues and behaviours. So far, little research has been devoted to the social decision-making process and to the social robot's own intentions and initiatives. In this talk, we will summarize the avenues of this interdisciplinary research and what remains to be done over the longer term.

Brief Bio
Nadia Magnenat Thalmann is currently Professor and Director of the Institute for Media Innovation, Nanyang Technological University, Singapore. She is also the Founder and Director of MIRALab, an interdisciplinary lab in human-computer animation at the University of Geneva, Switzerland. Her overall research area is the modelling and simulation of Virtual Humans. She is also working on social robots, mixed realities, and medical simulation. Throughout her career, she has received many artistic and scientific awards, among them the 2012 Humboldt Research Award and two honorary doctorates (Doctor Honoris Causa, from the University of Hanover in Germany and the University of Ottawa in Canada). She is Editor-in-Chief of the journal The Visual Computer (Springer-Verlag) and a Member of the Swiss Academy of Engineering Sciences.

Associate Professor Jianfei Cai
BeingThere Centre and School of Computer Engineering, NTU, Singapore

Title: Joint Depth Recovery and RGB-D Face Tracking

Abstract
Existing depth recovery methods for commodity RGB-D sensors primarily rely on low-level information to repair the measured depth estimates. However, as the distance of the scene from the camera increases, the recovered depth estimates become increasingly unreliable. The human face is often a primary subject in captured RGB-D data in applications such as video conferencing. In this talk, I will present our work on incorporating face priors extracted from a general sparse 3D face model into the depth recovery process. If time allows, I will also show some of our recent progress on using depth recovery to help RGB-D face tracking.
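
To make the idea concrete, here is a minimal, hypothetical sketch of prior-guided depth recovery posed as a least-squares energy, with a confidence-weighted data term, a term pulling the solution toward a depth prior rendered from a fitted face model, and a smoothness term. The formulation, weights, and gradient-descent solver are illustrative assumptions, not the method from the talk.

```python
# Illustrative sketch only: prior-guided depth refinement as energy minimization.
# d_prior would come from a sparse 3D face model fitted to the RGB image and
# rendered into the depth camera's view; the real formulation may differ.
import numpy as np

def refine_depth(d_meas, conf, d_prior, lam=0.5, mu=0.1, step=0.2, iters=200):
    """Minimize sum conf*(d - d_meas)^2 + lam*(d - d_prior)^2 + mu*|grad d|^2.
    All inputs are HxW arrays; conf in [0, 1] downweights unreliable pixels."""
    d = d_meas.astype(np.float64).copy()
    for _ in range(iters):
        # Gradients of the data term and the face-prior term.
        grad = 2 * conf * (d - d_meas) + 2 * lam * (d - d_prior)
        # Smoothness gradient via the discrete Laplacian (wrap-around borders,
        # acceptable for a sketch).
        lap = (np.roll(d, 1, 0) + np.roll(d, -1, 0) +
               np.roll(d, 1, 1) + np.roll(d, -1, 1) - 4 * d)
        d -= step * (grad - 2 * mu * lap)
    return d
```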

Brief Bio
Jianfei Cai received his PhD degree from the University of Missouri-Columbia. He is currently an Associate Professor and has served as Head of the Visual & Interactive Computing Division and Head of the Computer Communication Division at the School of Computer Engineering, Nanyang Technological University, Singapore. His major research interests include visual computing and multimedia networking. He has published more than 150 technical papers in international conferences and journals. He is a co-recipient of paper awards at ACCV, IEEE ICIP, and MMSP. He has been actively participating in the program committees of various conferences, and served as the leading Technical Program Chair for the IEEE International Conference on Multimedia & Expo (ICME) 2012 and the leading General Chair for the Pacific-Rim Conference on Multimedia (PCM) 2012. He is currently an Associate Editor for IEEE Transactions on Image Processing (T-IP), and has served as an Associate Editor for IEEE Transactions on Circuits and Systems for Video Technology (T-CSVT) and as a Guest Editor for IEEE Transactions on Multimedia (T-MM). He is also a senior member of the IEEE.

Associate Professor Chi Wing Fu
BeingThere Centre and School of Computer Engineering, NTU, Singapore

Title: 3D Telepresence: From Seeing There Towards Being There

Abstract
Imagine that users in two or more distant rooms can communicate and collaborate with one another under the illusion that they are co-located. This is one of the visions we aim to achieve in the BeingThere Centre: a next-generation 3D telepresence application. To support the creation of such an immersive experience, one key component is real-time 3D acquisition and synchronized 3D rendering of the participants, replicating their underlying geometry, texture, and motion. In this talk, we will briefly present some of our previous and ongoing work towards this goal, including high-quality Kinect data filtering, real-time GPU-based foreground extraction, robust face tracking and composition from stereo depth sensors, and background deception in 3D telepresence.
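
As a flavour of one of the listed components, the sketch below shows a deliberately naive depth-keyed foreground extraction; it is an illustrative assumption, not the Centre's actual GPU pipeline. Pixels that come closer than a pre-captured background depth map by more than a tolerance are marked as foreground.

```python
# Illustrative sketch only: naive depth-based foreground extraction.
import numpy as np

def foreground_mask(depth, background, tol_mm=60.0):
    """depth, background: HxW depth maps in millimetres (0 marks invalid pixels).
    Returns a boolean HxW mask of pixels nearer than the background by > tol_mm."""
    valid = (depth > 0) & (background > 0)
    return valid & ((background - depth) > tol_mm)
```

A real-time system would additionally clean the mask (e.g. morphological filtering on the GPU) and fuse it with colour cues, but the depth comparison above is the core of the idea.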

Brief Bio
Chi-Wing Fu, Philip, is currently an Associate Professor in the School of Computer Engineering, Nanyang Technological University, Singapore. He received his B.Sc. and M.Phil. degrees in Computer Science and Engineering from the Chinese University of Hong Kong, and his PhD degree in Computer Science from Indiana University Bloomington, USA. He joined the School of Computer Engineering of Nanyang Technological University in 2008. He received the IEEE Transactions on Multimedia Prize Paper Award for his co-authored paper published in the journal. His research interests include computer graphics, user interaction, and visualization. He has served as a program committee member of key international research conferences such as SIGGRAPH Asia, IEEE Visualization, and ACM CHI. He is currently serving as an associate editor of Computer Graphics Forum (CGF). He is one of the key researchers in the BeingThere Centre, NTU.

Associate Professor Ying He
BeingThere Centre and School of Computer Engineering, NTU, Singapore

Title: Real-time Computation of Discrete Geodesics

Abstract
Measuring geodesic distance on polyhedral surfaces plays an important role in computer graphics and digital geometry processing. Existing methods are based on either computational geometry or PDEs. The former can compute exact geodesic distances but is computationally expensive; the latter is fairly efficient but produces only a first-order approximation, whose quality depends heavily on the mesh triangulation. In this talk, I will introduce graph-theoretic algorithms for computing highly accurate geodesic distances in real time. I will also demonstrate various applications of discrete geodesics in digital geometry processing.
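
For orientation, the sketch below shows the classic graph baseline that such methods refine: Dijkstra's algorithm over the mesh's edge graph. Restricting paths to mesh edges gives only a coarse approximation of true surface geodesics; the algorithms in the talk construct richer graphs to reach much higher accuracy, but the underlying machinery is similar. All names here are illustrative.

```python
# Illustrative baseline: approximate geodesic distances by shortest paths
# restricted to mesh edges (Dijkstra on the edge graph).
import heapq

def edge_graph_geodesics(vertices, edges, source):
    """vertices: list of (x, y, z); edges: list of (i, j) vertex-index pairs.
    Returns a list of approximate geodesic distances from `source`."""
    adj = {i: [] for i in range(len(vertices))}
    for i, j in edges:
        # Edge weight = Euclidean length of the edge.
        w = sum((a - b) ** 2 for a, b in zip(vertices[i], vertices[j])) ** 0.5
        adj[i].append((j, w))
        adj[j].append((i, w))
    dist = [float("inf")] * len(vertices)
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale queue entry
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist
```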

Brief Bio
Ying He is currently an Associate Professor in the School of Computer Engineering, Nanyang Technological University, Singapore. He received the BS and MS degrees in electrical engineering from Tsinghua University, China, and the PhD degree in computer science from Stony Brook University, USA. His research interests fall into the general areas of visual computing, and he is particularly interested in problems that require geometric analysis and computation. His recent work has focused on discrete geodesics and their broad applications.

Junhui Hou (1,2) and Associate Professor Lap-Pui Chau (2)
1: Institute for Media Innovation, NTU, Singapore
2: BeingThere Centre and School of Electrical & Electronic Engineering, NTU, Singapore

Title: Efficient Compression of 3D Motion Data

Abstract
Recent advances in 3D capture technology, such as structured light, multiview photometric stereo, Microsoft Kinect, and motion capture systems, allow users to easily obtain 3D motion data. Such data, including mocap sequences and dynamic meshes, are widely used in video games, movie production, social media, virtual reality, and many other applications. Moreover, the rapid growth of 3D TV and 3D films further increases interest in 3D motion data. With this growing usage, reducing the data size for efficient storage and smooth transmission is highly desirable. In this talk, we will present our recent work on the compression of 3D time-varying meshes and human motion capture data.
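
As background on why such data compresses well, here is a standard low-rank baseline, given as an illustrative assumption rather than the method in the talk: mocap frames are stacked into a matrix and approximated by a truncated SVD, exploiting the strong correlation of poses over time.

```python
# Illustrative sketch only: low-rank (truncated SVD) compression of mocap data.
import numpy as np

def compress(motion, rank=10):
    """motion: (num_frames, 3 * num_joints) array of joint positions.
    Returns (coeffs, basis, mean); storing these instead of `motion` saves
    space whenever rank << min(num_frames, 3 * num_joints)."""
    mean = motion.mean(axis=0)
    u, s, vt = np.linalg.svd(motion - mean, full_matrices=False)
    k = min(rank, len(s))
    return u[:, :k] * s[:k], vt[:k], mean

def decompress(coeffs, basis, mean):
    """Reconstruct an approximation of the original motion matrix."""
    return coeffs @ basis + mean
```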

Brief Bio
Junhui Hou received the B.Eng. degree in Information Engineering (Talented Students Program) from South China University of Technology, Guangzhou, China, in 2009, and the M.Eng. degree in Signal and Information Processing from Northwestern Polytechnical University, Xi'an, China, in 2012. He is currently pursuing the Ph.D. degree at the School of Electrical & Electronic Engineering, Nanyang Technological University, Singapore. His current research interests include video compression, image processing, and computer graphics processing.

Lap-Pui Chau received the B.Eng. degree with first class honours in Electronic Engineering from Oxford Brookes University, England, in 1992, and the Ph.D. degree in Electronic Engineering from Hong Kong Polytechnic University, Hong Kong, in 1997. His research interests include fast signal and image processing algorithms and robust video transmission. He is General Chair for IEEE DSP 2015 and ICICS 2015, and was Technical Program Co-Chair for VCIP 2013 and ISPACS 2010. He was Chair of the Technical Committee on Circuits & Systems for Communications (TC-CASC) of the IEEE Circuits and Systems Society from 2010 to 2012. He served as an associate editor for IEEE Transactions on Multimedia and IEEE Signal Processing Letters, and is currently serving as an associate editor for IEEE Transactions on Circuits and Systems for Video Technology, IEEE Transactions on Broadcasting, The Visual Computer (Springer), and the IEEE Circuits and Systems Society Newsletter. He was an IEEE Distinguished Lecturer for 2009-2014 and a steering committee member of IEEE Transactions on Mobile Computing from 2011 to 2013.

Nanyang Assistant Professor Junsong Yuan
BeingThere Centre and School of Electrical & Electronic Engineering, NTU, Singapore

Title: Capturing and Understanding Hand Gestures for 3D Tele-presence

Abstract
3D tele-presence is more than capturing, transferring, and displaying the 3D scene. Understanding the intentions and interactions of the users is also important for a compelling 3D tele-presence experience. In this talk, I will present our recent work on hand parsing, pose tracking, and gesture recognition, using both depth and RGB cameras. We show that real-time sensing and understanding of hand gestures can be a valuable complement to 3D telepresence, e.g., for the manipulation of virtual objects and teleoperation.
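
To illustrate the recognition step at its simplest, here is a hypothetical nearest-neighbour classifier over normalized depth patches of a cropped hand; the systems in the talk use far richer features, hand parsing, and tracking.

```python
# Illustrative sketch only: nearest-neighbour gesture classification on
# cropped depth patches of the hand.
import numpy as np

def normalize(patch):
    """Zero-mean, unit-variance normalization for crude depth invariance."""
    patch = patch.astype(np.float64)
    return (patch - patch.mean()) / (patch.std() + 1e-9)

def classify(hand_patch, templates):
    """templates: list of (label, patch) pairs, same shape as hand_patch.
    Returns the label of the closest template in squared-error distance."""
    q = normalize(hand_patch)
    return min(templates, key=lambda t: np.sum((normalize(t[1]) - q) ** 2))[0]
```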

Brief Bio
Junsong Yuan is currently a Nanyang Assistant Professor and Program Director of Video Analytics at the School of EEE, Nanyang Technological University (NTU), Singapore. He received his Ph.D. from Northwestern University, USA. His research interests include computer vision, video analytics, action and gesture analysis, and large-scale visual search and mining. He has authored and co-authored 3 books and 130 conference and journal papers. He received the Nanyang Assistant Professorship and the Tan Chin Tuan Exchange Fellowship from Nanyang Technological University, the Outstanding EECS Ph.D. Thesis Award from Northwestern University, and the Doctoral Spotlight Award from the IEEE Conference on Computer Vision and Pattern Recognition (CVPR'09). He is Program Co-Chair of IEEE Visual Communications and Image Processing (VCIP'15), Organizing Co-Chair of the Asian Conference on Computer Vision (ACCV'14), and Area Chair of the IEEE Winter Conference on Computer Vision (WACV'14), the IEEE Conference on Multimedia & Expo (ICME'14 and '15), and the Asian Conference on Computer Vision (ACCV'14). He serves as Guest Editor for the International Journal of Computer Vision (IJCV), and Associate Editor for The Visual Computer (TVC) and the Journal of Multimedia. He has also co-chaired six workshops at SIGGRAPH Asia'14, CVPR'12, '13, and '15, and ICCV'13, and has given tutorials at ACCV'14, ICIP'13, FG'13, ICME'12, SIGGRAPH VRCAI'12, and PCM'12.

Professor Daniel Thalmann
IMI-BTC and School of Computer Engineering, NTU, Singapore

Title: Playing Volleyball in Immersive Environment

Abstract
We will explain how a volleyball game was created for immersive environments. In this game, a real player can play not only against virtual players but also in collaboration with virtual teammates, which is an innovation compared to most games. The current implementation supports four different hardware environments: the IMI Immersive Room, the Oculus Rift, an Alioscopy 3D display, and a 3D TV. In this presentation, we will first present the technical aspects of this immersive game: the ball trajectory with prediction of impact and interaction, and the generation of the virtual players' movements. We will then discuss in detail a user study comparing the four hardware environments and their impact on the degree of Presence.
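
As an illustration of the impact-prediction component, the sketch below predicts where a ball lands under gravity alone; the game's actual trajectory model is not specified in the abstract and may include spin and air drag, so treat the formulation and names as assumptions.

```python
# Illustrative sketch only: ballistic prediction of a ball's impact point.
G = 9.81  # gravitational acceleration, m/s^2

def impact_point(p0, v0, floor_y=0.0):
    """p0, v0: (x, y, z) position (m) and velocity (m/s), with y pointing up.
    Returns (x, z, t) where the ball reaches floor_y, or None if it never does."""
    x0, y0, z0 = p0
    vx, vy, vz = v0
    # Solve y0 + vy*t - 0.5*G*t^2 = floor_y for the positive root in t.
    disc = vy * vy + 2.0 * G * (y0 - floor_y)
    if disc < 0:
        return None  # the ball never descends to floor_y
    t = (vy + disc ** 0.5) / G
    return (x0 + vx * t, z0 + vz * t, t)
```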

Brief Bio
Professor Daniel Thalmann is with the Institute for Media Innovation at Nanyang Technological University in Singapore. He is a pioneer in research on Virtual Humans. His current research interests include real-time Virtual Humans in Virtual Reality, crowd simulation, and 3D interaction. He founded the Virtual Reality Lab (VRlab) at EPFL, Switzerland. He is Co-Editor-in-Chief of the Journal of Computer Animation and Virtual Worlds and a member of the editorial boards of six other journals. He has been a member of numerous program committees, and Program Chair or Co-Chair of several conferences including IEEE VR, ACM VRST, ACM VRCAI, CGI, and CASA. He has published more than 500 papers in graphics, animation, and Virtual Reality, and is co-editor of 30 books and co-author of several books including 'Crowd Simulation' (second edition, 2012) and 'Stepping Into Virtual Reality' (2007), both published by Springer. He received his PhD in Computer Science in 1977 from the University of Geneva and an Honorary Doctorate (Honoris Causa) from University Paul Sabatier in Toulouse, France, in 2003. He also received the Eurographics Distinguished Career Award in 2010 and the 2012 Canadian Human Computer Communications Society Achievement Award.

Associate Professor Tat Jen Cham
BeingThere Centre and School of Computer Engineering, NTU, Singapore

Title: Recent Developments in Displays for Room-based Immersive Telepresence

Abstract
In this talk I will survey existing research trends in 3D displays that will likely compete to have an impact on future 3D telepresence systems, ranging from technologies involving wall-sized displays to wearable augmented reality systems, and discuss their advantages and limitations in meeting the perceptual needs for telepresence. I will also present some of the ongoing 3D display research that is currently carried out in the BeingThere Centre.

Brief Bio
Tat-Jen Cham is an Associate Professor in the School of Computer Engineering, Nanyang Technological University, and a principal investigator in the BeingThere Centre for 3D Telepresence. His research interests are broadly in computer vision and machine learning. He received his BA in Engineering in 1993 and his PhD in 1996, both from the University of Cambridge, after which he held a Jesus College Research Fellowship in Cambridge (1996-97) and was a research scientist at the DEC/Compaq Research Lab in Boston (1998-2001). While at NTU, he was concurrently a Singapore-MIT Alliance Fellow (2003-2006), Director of the Centre for Multimedia and Network Technology (2007-2015), and an NTU Senator (2010-2014). He is on the Editorial Board of the International Journal of Computer Vision, and was a General Co-Chair of the Asian Conference on Computer Vision 2014.

Speakers for Industrial Workshop

Mr Laurent Fabry
Technical Director, Alioscopy Singapore

Title: Unity 3D / Real-time Autostereoscopic Application and Usage

Brief Bio
Laurent Fabry has more than 20 years of experience in the media, telecommunications, and broadcasting industries. He provides consulting, integration, and project management services for digital media projects, including media asset management, digital signage, and interactive digital media projects. Specialties: media asset management, digital asset management, digital signage, interactive content, system integration, 3D autostereoscopy, audience measurement, OOH, and multi-screen projects.

A short description of Alioscopy
Alioscopy has been a pioneer in the field of auto-stereoscopic displays for some 27 years and holds a portfolio of international patents covering key aspects of this technology. Alioscopy glasses-free 3D solutions represent the foremost technical expertise in the auto-stereoscopic 3D industry. Alioscopy 3D displays deliver a unique immersive experience that helps boost brand recognition by engaging the attention of viewers in retail and public event applications, and can assist with complex decision-making processes where accurate 3D presentation is required. Alioscopy is headquartered in Paris, France, with offices in Las Vegas and Singapore.

Mr Murali Pappoppula
Autodesk - Reality Solutions Team

Title: Reality to High-definition 3D

Abstract
This presentation introduces Autodesk Memento, a highly scalable and easy-to-use technology from Autodesk for capturing, processing, and publishing real-world objects and environments in high definition using off-the-shelf digital cameras, laser scanners, and hand-held scanners. The technology is aimed at academics, professionals, content creators, tinkerers, and consumers who would like to use high-definition 3D models generated from reality-captured data. The talk will cover the capabilities of photogrammetry, mesh reconstruction, and mesh processing, and unique applications of such data in high-quality immersive visualization, web streaming, 3D printing/fabrication, and scientific and engineering analysis.

Brief Bio
Murali Pappoppula is part of the Reality Solutions division of Autodesk, working at its Singapore R&D centre. He has been leading the team responsible for the research and development of Autodesk Memento since the inception of the project 2.5 years ago.

A short description of Autodesk
Autodesk is a world leader in 3D design software for entertainment, natural resources, manufacturing, engineering, construction, and civil infrastructure.

Mr Jonathan Kwek
Lead 3D Artist and Producer, Content Creation, EON Reality

Title: Producer, Content Development

Brief Bio
Jonathan Kwek has almost 10 years of experience in game development, mostly in 3D environment creation and level design. The projects he has been involved in range from MMORPGs for the PC platform to casual games for iOS mobile devices. His roles have mainly been as lead environment artist and, until recently, as game producer. His formal training is in Architecture at the National University of Singapore. He is proficient in 3ds Max, Unity, and the Unreal editor.

A short description of EON Reality
EON Reality is the world leader in Virtual and Augmented Reality based knowledge transfer for industry, education, and edutainment.

Mr Jeremy Main
Senior Solution Architect, APAC, NVIDIA

Title: Remote High Fidelity Visualization

Brief Bio
Jeremy Main is the Senior Solution Architect for enterprise graphics virtualization in APAC, enabling leading manufacturing and other companies to deliver high-fidelity GPU-accelerated applications to their workers. Before joining NVIDIA, Jeremy worked for Fujitsu, where he led the development of several remote graphics application products as well as 3D CAD software. Jeremy received his Bachelor of Science from the University of Utah.

A short description of NVIDIA
NVIDIA's work in visual computing — the art and science of computer graphics — has led to thousands of patented inventions, breakthrough technologies, deep industry relationships and a globally recognized brand. At the core of our company is the GPU — the engine of modern visual computing — which we invented in 1999. The GPU has propelled computer graphics from a feature into an ever-expanding industry — encompassing video games, movie production, product design, medical diagnosis and scientific research, among many other categories. GPUs are now driving new fields like computer vision, image processing, machine learning and augmented reality.

Dr Michael Lee
Vice President of Software, Razer

Title: OSVR: Open Source Virtual Reality

Abstract
OSVR is an open-source hardware (HW) and software (SW) initiative that now involves more than 90 industrial and academic partners. The open nature of the HW and SW seeks to accelerate the VR industry by providing a standard platform on which the community can rapidly and collaboratively experiment with and refine VR solutions. The hackable HW is designed to promote experimentation with HMD development and peripheral device integration. The SW framework provides a HW abstraction layer that isolates game and application developers from HW specifics, enabling them to focus on their area of expertise. The highly modular and flexible SW architecture also gives analytics and gesture recognition researchers easy access to input device data streams such as cameras and positional trackers. In this talk, we present details of the OSVR HW and SW elements and key items on the OSVR roadmap. Further, we discuss how some of the active partners have integrated their technologies into the system. Finally, we highlight some of the key VR challenges and how OSVR activities seek to address them.
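
To illustrate what a HW abstraction layer buys application developers, here is a minimal, hypothetical sketch that is explicitly not the real OSVR API: applications subscribe to semantic paths rather than concrete devices, so any device plugin can feed the same path. The path-style naming echoes OSVR's documented path-tree idea, but every class and method below is invented for illustration.

```python
# Illustrative sketch only -- NOT the actual OSVR API.
from typing import Callable, Dict, List, Tuple

Pose = Tuple[float, float, float]  # position only, for brevity

class AbstractionLayer:
    """Decouples applications (subscribers) from device plugins (publishers)."""

    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[Pose], None]]] = {}

    def subscribe(self, path: str, callback: Callable[[Pose], None]) -> None:
        """Application side: listen to a semantic path, not a device."""
        self._subscribers.setdefault(path, []).append(callback)

    def publish(self, path: str, pose: Pose) -> None:
        """Plugin side: any tracker plugin may report samples for a path."""
        for cb in self._subscribers.get(path, []):
            cb(pose)

hal = AbstractionLayer()
hal.subscribe("/me/head", lambda p: print("head at", p))
hal.publish("/me/head", (0.0, 1.7, 0.0))  # a plugin reporting one sample
```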

Brief Bio
Dr. Michael Lee is VP of Software at Razer and is responsible for Razer's software contribution to the OSVR initiative. His research in academia and industry has spanned machine learning, computer music and media signal processing, and user interfaces. He holds a PhD in computer science from the University of California.

A short description of Razer
Razer™ is a world leader in connected devices and software for gamers. Razer is transforming the way people play games, engage with other gamers, and identify with the gamer lifestyle. Having won the coveted “Best of CES” award five years in a row, the company's leadership in product innovation continues to create new categories for a gaming community estimated at over 1 billion gamers worldwide. Razer's award-winning design and technology include an array of user interface and systems devices, voice-over-IP for gamers, and a cloud platform for customizing and enhancing gaming devices.
