Seminar: Deep CNN in Video Analytics for Assistive Healthcare Tech

IEEE Signal Processing Society, Bangalore Chapter,
and Department of Computational and Data Sciences, Indian Institute of Science
Invite you to the following talk:
Title: Deep Convolutional Neural Network in Video Analytics for Assistive Healthcare Technologies.
by Dr. Bappaditya Mandal, Lecturer (Computing), Keele University, UK
Time & Date: 11:00 AM, Wednesday, August 22, 2018
Venue: CDS Seminar Hall (Room No: 102), IISc.
Owing to rapid advances and falling costs in hardware and computing facilities, deep convolutional neural networks (DCNNs) for video analytics have become computationally feasible in practice and have shown considerable improvement in many video analytics tasks, such as object recognition and face recognition. My talk will cover two aspects of my research: (1) computer vision on wearable devices and (2) DCNNs for two biomedical applications: melanoma skin cancer detection, and optic disc and cup segmentation for glaucoma assessment. In the first part of my talk, I will discuss the development of computational methodologies for wearable devices that help people improve their lives. For example, cameras in wearable devices (such as Google Glass and GoPro) generate first-person-view (FPV), or egocentric, videos whose field of view approximates natural human vision. They provide immense opportunities for various applications, such as face recognition for social interaction assistance. Life-logged egocentric data are useful for summarization and retrieval (memory assistance), security, health monitoring, and lifestyle analysis, and for memory rehabilitation (i.e., remembering subject matters such as time, place, object, people, context, and mental states) for dementia patients.
In the second part of my talk, I will discuss how we have improved deep residual networks with a regularized Fisher framework for differentiating melanoma (malignant) from non-melanoma (benign) skin cancer cases, supported by a large number of experimental results on benchmark databases. I will conclude by describing how we have modified the deep residual learning framework to extract more discriminative patch-based features, improving the information flow in the network by introducing extra skip connections, for the challenging task of optic disc and optic cup segmentation for glaucoma assessment.
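To give a flavour of the extra-skip-connection idea, the following is a minimal NumPy sketch, not the speaker's actual architecture: the linear maps stand in for convolutions, and the placement of the additional skip connection is an illustrative assumption.

```python
import numpy as np

def relu(x):
    # Rectified linear unit activation.
    return np.maximum(0.0, x)

def linear(x, w):
    # Stand-in for a convolutional layer: a plain linear map
    # keeps the sketch dependency-free while preserving the
    # block structure.
    return x @ w

def residual_block(x, w1, w2):
    # Standard residual block: output = ReLU(F(x) + x).
    h = relu(linear(x, w1))
    return relu(linear(h, w2) + x)

def residual_block_extra_skip(x, w1, w2):
    # Variant with one extra skip connection: the intermediate
    # activation h is also added to the output, giving gradients
    # and features an additional shortcut through the block.
    h = relu(linear(x, w1))
    return relu(linear(h, w2) + h + x)

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 8))
w1 = rng.standard_normal((8, 8)) * 0.1
w2 = rng.standard_normal((8, 8)) * 0.1
y = residual_block_extra_skip(x, w1, w2)
print(y.shape)
```

In a real segmentation network the same principle applies per feature map; the extra shortcuts are one way of improving information flow without adding parameters.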
Speaker Bio:
Bappaditya Mandal received the B.Tech. degree in Electrical Engineering from the Indian Institute of Technology (IIT), Roorkee, India, and the Ph.D. degree in Electrical and Electronic Engineering from Nanyang Technological University (NTU), Singapore, in 2003 and 2008, respectively. His research interests are in the areas of computer vision, machine learning, pattern recognition, and video analytics. Bappaditya worked as a Scientist for over nine years at the Cognitive Vision Lab, Visual Computing Department, Institute for Infocomm Research, A*STAR, Singapore, from May 2008 to June 2017, on a number of research projects, and has published extensively in journals, conferences, and workshops. He spent a short period at Kingston University London before joining Keele University, United Kingdom, as a Lecturer in Computer Science in the School of Computing and Mathematics in March 2018.
