Shanghai, China

Keynote Speakers

Keynote Speaker

Prof. Yen-Wei Chen

Ritsumeikan University, Japan

Zhejiang Lab, China

Yen-Wei Chen received the B.E. degree from Kobe University, Kobe, Japan, in 1985, and the M.E. and D.E. degrees from Osaka University, Osaka, Japan, in 1987 and 1990, respectively. He was a research fellow with the Institute for Laser Technology, Osaka, from 1991 to 1994. From October 1994 to March 2004, he was an associate professor and then a professor with the Department of Electrical and Electronic Engineering, University of the Ryukyus, Okinawa, Japan. He is currently a professor with the College of Information Science and Engineering, Ritsumeikan University, Japan. He is also an adjunct professor with the College of Computer Science, Zhejiang University, China, and with Zhejiang Lab, China. He was a visiting professor with Oxford University, Oxford, UK, in 2003 and with Pennsylvania State University, USA, in 2010. His research interests include medical image analysis, computer vision, and computational intelligence. He has published more than 300 research papers in leading journals and conferences, including IEEE Trans. Image Processing, IEEE Trans. SMC, and Pattern Recognition. He has received many distinguished awards, including the ICPR 2012 Best Scientific Paper Award, the 2014 JAMIT Best Paper Award, and the Outstanding Chinese Overseas Scholar Fund of the Chinese Academy of Sciences. He is or has been the leader of numerous national and industrial research projects.


Speech Title: "Tensor Sparse Coding for Multi-Dimensional Medical Image Analysis"

Due to the rapid development of imaging technologies, a large amount of biomedical image data has become available. In addition to 3-dimensional spatial information, biomedical images often carry temporal information. Efficient representation of such multi-dimensional images is therefore an important issue in biomedical image analysis. Sparse coding is a machine learning method widely used for efficient image representation and image recognition. A limitation of conventional sparse coding is that multi-dimensional data (e.g., an image or a video) must be unfolded into a vector, losing the spatial and spatio-temporal relationships in the data. In this keynote talk, I will present a new tensor sparse coding method and its application to multi-dimensional medical image analysis, in which the multi-dimensional data are treated as a tensor without unfolding.
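As a rough sketch of the idea (not the speaker's actual algorithm), the code below contrasts flattening a 3-D volume into a single vector, as conventional sparse coding requires, with the mode-n unfoldings that tensor methods operate on. The array shapes and the helper name `unfold` are invented for illustration:

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding (matricization): rows index the chosen mode,
    columns index all remaining modes combined."""
    return np.reshape(np.moveaxis(tensor, mode, 0), (tensor.shape[mode], -1))

# Toy 3-D "volume": 4 x 5 x 6 (e.g., slices x height x width)
X = np.arange(4 * 5 * 6, dtype=float).reshape(4, 5, 6)

# Conventional sparse coding flattens X into one long vector,
# discarding the arrangement of its modes:
x_vec = X.reshape(-1)    # shape (120,)

# Tensor methods instead work with mode unfoldings, which keep
# each mode's structure explicit and recoverable:
X0 = unfold(X, 0)        # shape (4, 30)
X1 = unfold(X, 1)        # shape (5, 24)
X2 = unfold(X, 2)        # shape (6, 20)
```

Because the mode-0 unfolding is just a reshaped view of the original axes, the tensor can be reconstructed from it exactly, which is what lets tensor sparse coding avoid the information loss of vectorization.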


Keynote Speaker

Prof. Kiyoshi Hoshino

University of Tsukuba, Japan

Prof. Kiyoshi Hoshino received two doctoral degrees from the University of Tokyo: one in Medical Science in 1993 and the other in Engineering in 1996. From 1993 to 1995, he was an assistant professor at the Tokyo Medical and Dental University School of Medicine. From 1995 to 2002, he was an associate professor at the University of the Ryukyus. From 2002, he was an associate professor at the Biological Cybernetics Lab of the University of Tsukuba, where he is now a professor. From 1998 to 2001, he was jointly appointed as a senior researcher on the PRESTO "Information and Human Activity" project of the Japan Science and Technology Agency (JST). From 2002 to 2005, he was the leader of a SORST project of JST. He has served as a member of the "Cultivation of Human Resources in the Information Science Field" WG under the Special Coordination Funds for the Promotion of Science and Technology, MEXT; a member of the "Committee for Comport 3D Fundamental Technology Promotion", JEITA; and the chairman of the 43rd Annual Meeting of the Japanese Society of Biofeedback Research.


Speech Title: "Eye Movement Estimation Based on the Intensity Gradients of Blood Vessels in the Eye"


The author proposes a method that measures eye movement with high accuracy, without a blue auxiliary light, for users whose blood vessels in the white of the eye differ considerably in thickness and density on the image. In the proposed system, to select a template image containing a thick, dense blood vessel suitable for tracking, feature points are first extracted from the white of the eye in the acquired image based on intensity gradients, and the number of feature points in each candidate template image is counted. Next, among the candidate template images with larger numbers of feature points, those containing a reflection of an external light source are excluded. Lastly, a candidate that includes a blood vessel with a distinct shape is selected as the template image. The results of an evaluation experiment show that, even without a blue auxiliary light, the proposed method reduces the standard deviation of the estimation errors by almost half compared with the conventional method developed by our group, which uses a blue auxiliary light to enhance the contrast of blood vessels.
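The selection steps above can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the window size, stride, and thresholds are invented, gradient-magnitude thresholding stands in for the paper's feature-point extraction, and near-saturated pixels stand in for detected light-source reflections:

```python
import numpy as np

def gradient_feature_count(patch, grad_thresh=30.0):
    """Count pixels whose intensity-gradient magnitude exceeds a threshold
    (a stand-in for gradient-based feature-point extraction)."""
    gy, gx = np.gradient(patch.astype(float))
    return int(np.count_nonzero(np.hypot(gx, gy) > grad_thresh))

def has_reflection(patch, bright_thresh=240):
    """Flag patches containing near-saturated pixels, treated here as a
    reflection of an external light source."""
    return bool(np.any(patch >= bright_thresh))

def select_template(image, size=16, stride=8, grad_thresh=30.0):
    """Pick the candidate window with the most gradient features,
    skipping windows that contain a light-source reflection."""
    best, best_count = None, -1
    h, w = image.shape
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            patch = image[y:y + size, x:x + size]
            if has_reflection(patch):
                continue                      # step 2: exclude reflections
            n = gradient_feature_count(patch, grad_thresh)
            if n > best_count:                # steps 1 & 3: most features wins
                best, best_count = (y, x), n
    return best, best_count
```

On a synthetic "sclera" image with one dark vessel-like stripe and one saturated bright region, this picks a window over the stripe and never one containing the saturated pixels.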


Keynote Speaker

Assoc. Prof. Kuo-Yuan Hwa

National Taipei University of Technology, Taiwan

Dr. Kuo-Yuan Hwa is an associate professor and the director of the Center for Biomedical Industries at the National Taipei University of Technology. Dr. Hwa received her PhD from the School of Medicine, Johns Hopkins University. She is the president of the Medical Association for Indigenous Peoples of Taiwan (MAIPT). Dr. Hwa's scientific interests are: 1) nanotechnology and biosensors, 2) new drug discovery for human diseases by proteomic and genomic approaches, and 3) glycobiology, especially enzyme kinetics. She has published 85 conference and journal articles and holds 10 patents. She has served on many national and international committees, has been an invited speaker at academic research institutes and universities in China, Korea, Japan, and the USA, and has been invited as a reviewer, judge, and editor for international meetings and journals. In addition, one of her current projects is the development of a culturally inclusive health science educational program for indigenous children that combines indigenous and Western science knowledge.


Copyright © DMIP 2019