January 19th, 2020

IEEE EMBS DISTINGUISHED LECTURE PROGRAM (DLP)

Venue: Golden Jubilee Seminar Hall, ECE Dept, IISc, Bangalore, India

Time: 4:00 pm on 20th January 2020

Organized by

IEEE EMBS Bangalore Chapter, EMB IISc and RIT Student Chapters,

       IEEE SPS Bangalore Chapter, Department of ECE, Indian Institute of Science.

 

Title of the Talk: Model-based Signal Processing in Neurocritical Care

Abstract: Large volumes of heterogeneous data are now routinely collected and archived from patients in a variety of clinical environments, to support real-time decision-making, monitoring of disease progression, and titration of therapy. This rapid expansion of available physiological data has resulted in a data-rich – but often knowledge-poor – environment. Yet the abundance of clinical data also presents an opportunity to systematically fuse and analyze the available data streams, through appropriately chosen mathematical models, and to provide clinicians with insights that may not be readily extracted from visual review of the data. In this talk, I will highlight our work in model-based signal processing for improved neurocritical care, deriving additional and clinically useful information from routinely available data streams. I will present our model-based approach to noninvasive, patient-specific, and calibration-free estimation of intracranial pressure and will elaborate on the challenges of (and some solutions to) collecting high-quality clinical data for validation.

Speaker: Prof. Thomas Heldt
Massachusetts Institute of Technology, United States   

Thomas Heldt studied physics at Johannes Gutenberg University, Germany, at Yale University, and at MIT. He received the PhD degree in Medical Physics from MIT’s Division of Health Sciences and Technology and undertook postdoctoral training at MIT’s Laboratory for Electromagnetic and Electronic Systems. Prior to joining the MIT faculty in 2013, Thomas was a Principal Research Scientist with MIT’s Research Laboratory of Electronics. He currently holds the W.M. Keck Career Development Chair in Biomedical Engineering. He is a member of MIT’s Institute for Medical Engineering and Science and on the faculty of the Department of Electrical Engineering and Computer Science.

Thomas’s research interests focus on signal processing, mathematical modeling and model identification in support of real-time clinical decision making, monitoring of disease progression, and titration of therapy, primarily in neurocritical and neonatal critical care. In particular, Thomas is interested in developing a mechanistic understanding of physiologic systems, and in formulating appropriately chosen computational physiologic models for improved patient care. His research is conducted in close collaboration with clinicians from Boston-area hospitals, where he is integrally involved in designing and deploying high-quality data-acquisition systems and collecting clinical data.

January 8th, 2020

———————————————————————————————-

IEEE SIGNAL PROCESSING SOCIETY BANGALORE CHAPTER

&

DEPARTMENT OF ECE, INDIAN INSTITUTE OF SCIENCE

IEEE SPS DISTINGUISHED LECTURE

———————————————————————————————-

 

Title: Towards Autonomous Video Surveillance

 

Speaker: Prof. Janusz Konrad, Boston University

 

Time: 1630-1730 hrs (coffee/tea at 4:15 pm)

 

Date:  Thursday, 16 Jan 2020

 

Venue: Golden Jubilee Seminar Hall, Dept. of ECE, Indian Institute of Science Bangalore

 

Abstract:

It is estimated that in 2014 there were over 100 million surveillance cameras in the world. Fueled by security concerns, this number continues to grow steadily. As monitoring of video feeds by human operators is not scalable, automatic surveillance tools are needed. In this talk, I will cover a complete video surveillance processing chain, developed over the years at Boston University, from low-level video analysis to summarization of dynamic events. I will focus on three fundamental questions posed in video surveillance: “How to detect anomalous events in a visual scene? How to classify those events? How to represent them succinctly?” First, I will present “behavior subtraction”, an extension of “background subtraction” to scenes with dynamic backgrounds (e.g., a water surface, which is notoriously difficult to handle), which can detect complex anomalies in surveillance video. Then, in order to classify activities within the detected anomalies, I will discuss activity recognition on covariance manifolds. Finally, I will describe “video condensation”, a computational method to succinctly summarize activities of interest for efficient evaluation by human operators.
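The “background subtraction” baseline that “behavior subtraction” extends can be sketched in a few lines. The running-average model, threshold, and synthetic frames below are illustrative assumptions, not the speaker's implementation:

```python
import numpy as np

def background_subtract(frames, alpha=0.05, thresh=25.0):
    """Classic running-average background subtraction.

    frames: list of 2-D grayscale arrays.
    Returns one boolean foreground mask per frame. "Behavior
    subtraction" instead models the *dynamics* of the background,
    which this simple per-pixel average cannot capture.
    """
    frames = [np.asarray(f, dtype=float) for f in frames]
    background = frames[0].copy()                 # initialize with first frame
    masks = []
    for f in frames:
        mask = np.abs(f - background) > thresh    # pixels far from the model
        background = (1 - alpha) * background + alpha * f  # slow adaptation
        masks.append(mask)
    return masks

# Tiny demo: a static noisy scene with a bright object entering at frame 5.
rng = np.random.default_rng(0)
scene = 100 + rng.normal(0, 2, size=(10, 16, 16))
scene[5:, 4:8, 4:8] += 80                         # object region
masks = background_subtract(list(scene))
print(masks[6][5, 5], masks[2][5, 5])             # True False
```

With a dynamic background (e.g., rippling water) this thresholded difference fires constantly, which is exactly the failure mode the talk's method addresses.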

 

Bio:

Janusz Konrad received his Master’s degree from the Technical University of Szczecin, Poland, in 1980 and his PhD degree from McGill University, Montréal, Canada, in 1984. He joined INRS-Télécommunications, Montréal, as a post-doctoral fellow and became a faculty member there in 1992. Since 2000, he has been on the faculty at Boston University. He is an IEEE Fellow and a recipient of several IEEE and EURASIP Best Paper awards. He has been actively engaged in the IEEE Signal Processing Society as a member of various boards and technical committees, as well as an organizer of conferences. He has also been on editorial boards of various EURASIP journals. His research interests include video processing and computer vision, stereoscopic and 3-D imaging and displays, visual sensor networks, human-computer interfaces, and cybersecurity.

 

———————————————————————————————-

ALL ARE WELCOME


Lecture Flyer

January 1st, 2020
IEEE Signal Processing Society, Bangalore Chapter and Department of Computational and Data Sciences, 
Indian Institute of Science invite you to the following talk:
 
SPEAKER   : Suraj Srinivas, PhD Scholar, Idiap Research Institute and EPFL, Switzerland
TITLE          : “Neural Network Interpretability using Full-Gradient Representation”
Venue            :  #102 CDS Seminar Hall
Date & Time :  Jan  07, 2020, 12:00 Noon

ABSTRACT:

In this talk, I will introduce a new tool for interpreting neural net responses, namely full-gradients, which decomposes the neural net response into input sensitivity and per-neuron sensitivity components. This is the first proposed representation which satisfies two key properties, completeness and weak dependence, which provably cannot both be satisfied by any saliency-map-based interpretability method. For convolutional nets, we also propose an approximate saliency map representation, called FullGrad, obtained by aggregating the full-gradient components. We experimentally evaluate the usefulness of FullGrad in explaining model behaviour with two quantitative tests: pixel perturbation and remove-and-retrain. Our experiments reveal that our method explains model behaviour correctly, and more comprehensively than other methods in the literature. Visual inspection also reveals that our saliency maps are sharper and more tightly confined to object regions than other methods. This talk is based on our recent NeurIPS 2019 paper titled “Full-Gradient Representation for Neural Network Visualization”.
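The completeness property mentioned above (the response decomposes exactly into an input-gradient term plus per-neuron bias terms) can be checked numerically for a small ReLU network. This is a minimal numpy sketch of that identity; the weights are random stand-ins chosen for illustration, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
# One-hidden-layer ReLU network: f(x) = w2 . relu(W1 x + b1) + b2
W1, b1 = rng.normal(size=(8, 5)), rng.normal(size=8)
w2, b2 = rng.normal(size=8), rng.normal()

def f(x):
    return w2 @ np.maximum(W1 @ x + b1, 0.0) + b2

x = rng.normal(size=5)
m = (W1 @ x + b1 > 0).astype(float)     # ReLU activation pattern at x

# Input-sensitivity component: <grad_x f(x), x>
grad_x = (w2 * m) @ W1
input_part = grad_x @ x

# Per-neuron components: (df/db) * b for every bias in the net
bias_part = np.sum((w2 * m) * b1) + 1.0 * b2

# Completeness: the two components sum exactly to the response f(x)
assert np.isclose(input_part + bias_part, f(x))
```

Plain input-gradient saliency keeps only the first term; the per-neuron terms are what FullGrad aggregates back in.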

BIOGRAPHY:
Suraj Srinivas is a 3rd-year PhD research assistant at Idiap Research Institute and EPFL, Switzerland. He works with Prof. Francois Fleuret on analytically understanding deep learning architectures. Before that, he completed his M.Sc. (Engg.) at CDS, IISc, where he worked with Prof. Venkatesh Babu on neural network compression. His research interests relate broadly to the robustness, adaptability and interpretability of deep neural networks.

Host Faculty: Prof. Venkatesh Babu

January 1st, 2020

IEEE Signal Processing Society, Bangalore Chapter

and
Department of Electrical Engineering
Indian Institute of Science (IISc), Bangalore
invite you to the following talk
Title: Utilizing Real-time MRI to Investigate Speech Articulation Disorders
Date and time: January 10, 2020; 11:30AM (Coffee will be served at 11:15AM)
Venue: Multimedia Classroom, Department of Electrical Engineering, IISc.
Speaker: Christina Hagedorn, PhD, SLP, CCC-SLP, City University of New York – College of Staten Island
Abstract:
Over the past two decades, real-time Magnetic Resonance Imaging (rtMRI), building on traditional medical MRI, has played a critical role in studying a variety of biological movement patterns. Through collaboration between engineers and speech scientists, rtMRI technology has been applied to the study of speech production. Through semi-automatic detection of air-tissue boundaries and estimation of articulatory kinematics using pixel intensity time functions, rtMRI can be used to quantitatively analyze speech production patterns in both typical and disordered populations. In this work, rtMRI is shown to shed light on aspects of speech produced by individuals with tongue cancer and individuals with Apraxia of Speech that could not be captured using tools that provide more limited spatiotemporal information about vocal tract shaping.
Biography of the speaker:
Christina Hagedorn is an assistant professor of Linguistics and director of the Motor Speech Laboratory at the City University of New York (CUNY) – College of Staten Island. Her research focuses primarily on disordered speech production. Her work aims to shed light on the precise nature of articulatory breakdowns in disordered speech and how this can inform theories of unimpaired speech production, as well as lead to refinement of the therapeutic techniques used to address these speech deficits.

She received her Ph.D. in Linguistics from the University of Southern California, where she was a member of the Speech Production and Articulation kNowledge (SPAN) Group, the USC Phonetics and Phonology Group, and was a Hearing and Communication Neuroscience pre-doctoral fellow. She received her clinical training in Communicative Sciences and Disorders at New York University, and holds a certificate of clinical competency in Speech and Language Pathology (CCC-SLP).

January 1st, 2020

The IEEE Signal Processing Society, Bangalore Chapter and

Indian Institute of Science

Cordially invite you to the following talk on

New Twists for New Tricks, Making Audio Deep Learning Practical

Speaker: Prof. Paris Smaragdis, University of Illinois at Urbana-Champaign.

Date and Time: 6th Jan, 2020, 4 pm (refreshments at 3:45 pm)

Venue: ECE Golden Jubilee Seminar Hall.

Talk flyer

December 18th, 2019

IEEE Signal Processing Society, Bangalore Chapter and Department of EE, Indian Institute of Science invite you to the following talk:

SPEAKER   :  Prof. Ardhendu Behera, Associate Professor (Reader), Edge Hill University, UK

TITLE          : “Computer Vision and Deep Learning – A Marriage of Neuroscience and Machine Learning”

Venue            :  MMCR Room No C241, First Floor, EE Dept.

Date & Time :  Dec 20, 2019, 04:00 PM

Abstract:

For almost ten decades, human vision researchers have been studying how the human vision system has evolved. While computer vision is a much younger discipline, it has achieved impressive results in many detection and classification tasks (e.g. object recognition, scene classification, face recognition, etc.) within a short span of time. Computer vision is one of the fastest growing fields, partly because the amount of video/image data from urban environments is growing exponentially (e.g. 24/7 cameras, social media sources, smart cities, etc.). The scale and diversity of these videos/images make it very difficult to extract reliable information in a timely, automated manner. Recently, Deep Convolutional Neural Networks (DCNNs) have shown impressive performance on visual recognition tasks when trained on large-scale datasets. However, such progress faces challenges when rolled into automation and production. These include obtaining enough data of good quality, managing executives’ expectations about model performance, responsibility and trustworthiness in decision making, data ingest, storage, security and overall infrastructure, as well as understanding how machine learning differs from software engineering.

In this talk, I will focus on recent progress in advancing human action/activity and behaviour recognition from images/videos, addressing the research challenges of relational learning, deep learning, human pose, human-objects interactions and transfer learning. I will then briefly describe some of our recent efforts to adopt these challenges in automation and robotics, in particular human-robot social interaction, in-vehicle activity monitoring and smart factories.

Speaker Bio:

Ardhendu Behera is a Senior Lecturer (Associate Professor) in the Department of Computer Science at Edge Hill University (EHU). Prior to this, he held post-doc positions at the universities of Fribourg (2006-07) and Leeds (2007-14). He holds a PhD from the University of Fribourg, an MEng from the Indian Institute of Science, Bangalore, and a BEng from NIT Allahabad. He leads the visualisation theme of the Data and Complex Systems Research Centre at the EHU. He is also a member of the Visual Computing Lab. His main research interests are computer vision, deep learning, pattern recognition, robotics and artificial intelligence. He applies these interests to interdisciplinary research areas such as monitoring and recognising suspicious behaviour, human-robot social interactions, autonomous vehicles, monitoring driving behaviour, healthcare and patient monitoring, and smart environments. Dr Behera has been involved in various outreach activities, and some of his research has been covered by the media, press, newspapers and television.

————————————–

ALL ARE WELCOME

————————————–

December 17th, 2019

IEEE Signal Processing Society, Bangalore Chapter and Department of Computational and Data Sciences, 

Indian Institute of Science invite you to the following talk:

SPEAKER   :  Prof. Dima Damen, Associate Professor (Reader), University of Bristol, UK 

TITLE          : A fine-grained perspective onto object interactions 

Venue            :  #102, CDS Seminar Hall

Date & Time :  Dec 26, 2019, 04:00 PM

ABSTRACT:

This talk aims to argue for a fine-grained perspective onto human-object interactions from video sequences. The talk will present approaches for determining skill or expertise from video sequences [CVPR 2019], assessing action ‘completion’ – i.e. when an interaction is attempted but not completed [BMVC 2018], dual-domain and dual-time learning [CVPR 2019, ICCVW 2019], as well as multi-modal approaches using vision, audio and language [ICCV 2019, BMVC 2019].

This talk will also introduce EPIC-KITCHENS [ECCV 2018], the recently released largest dataset of object interactions in people’s homes, recorded using wearable cameras. The dataset includes 11.5M frames fully annotated with objects and actions, based on unique annotations from the participants narrating their own videos, thus reflecting true intention. Three open challenges are now available on object detection, action recognition and action anticipation [http://epic-kitchens.github.io]

BIOGRAPHY:

Dima Damen is a Reader (Associate Professor) in Computer Vision at the University of Bristol, United Kingdom. She received her PhD from the University of Leeds, UK (2009). Dima’s research interests are in the automatic understanding of object interactions, actions and activities using static and wearable visual (and depth) sensors. Dima co-chaired BMVC 2013, has been an area chair for BMVC (2014-2018), and is an associate editor of IEEE TPAMI (2019-) and of Pattern Recognition (2017-). She was selected as a Nokia Research collaborator in 2016, and as an Outstanding Reviewer at ICCV17, CVPR13 and CVPR12. She currently supervises 6 PhD students and 4 postdoctoral researchers. More details at: [http://dimadamen.github.io]

Host Faculty: Prof. Venkatesh Babu

__________________________________________________________________________________________________________________

                        ALL ARE WELCOME

December 16th, 2019

IEEE Signal Processing Society, Bangalore Chapter

and

Department of Electrical Engineering

Indian Institute of Science (IISc), Bangalore

invite you to the following talk

Title: From compressed sensing to deep learning: tasks, structures, and models.

Date and time: December 18, 2019; 11.30 AM.

Coffee will be served during the talk.

Venue: Multimedia Classroom, Department of Electrical Engineering, IISc.

Speaker: Prof. Yonina Eldar, Weizmann Institute of Science, Israel.

Host faculty: Dr. Chandra Sekhar Seelamantula, EE, IISc.

Abstract: 

The famous Shannon-Nyquist theorem has become a landmark in the development of digital signal and image processing. However, in many modern applications, the signal bandwidths have increased tremendously, while the acquisition capabilities have not scaled sufficiently fast. Consequently, conversion to digital has become a serious bottleneck. Furthermore, the resulting digital data requires storage, communication and processing at very high rates, which is computationally expensive and requires large amounts of power. In the context of medical imaging, sampling at high rates often translates to high radiation dosages, increased scanning times, bulky medical devices, and limited resolution.

In this talk, we present a framework for sampling and processing a large class of wideband analog signals at rates far below Nyquist in space, time and frequency, which makes it possible to dramatically reduce the number of antennas, sampling rates and band occupancy.

 

Our framework relies on exploiting signal structure and the processing task.  We consider applications of these concepts to a variety of problems in communications, radar and ultrasound imaging and show several demos of real-time sub-Nyquist prototypes including a wireless ultrasound probe, sub-Nyquist MIMO radar, super-resolution in microscopy and ultrasound, cognitive radio, and joint radar and communication systems. We then discuss how the ideas of exploiting the task, structure and model can be used to develop interpretable model-based deep learning methods that can adapt to existing structure and are trained from small amounts of data. These networks achieve a more favorable trade-off between increase in parameters and data and improvement in performance, while remaining interpretable.
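The idea of exploiting signal structure to sample below classical rates can be illustrated with a toy sparse-recovery example. The sketch below uses Orthogonal Matching Pursuit, a standard compressed-sensing algorithm (not necessarily the one used in the speaker's prototypes), with an arbitrary random sensing matrix:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: recover a k-sparse x from y = A x."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # best-matching column
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef          # project out chosen atoms
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
n, m, k = 128, 40, 3                       # 40 measurements for 128 samples
A = rng.normal(size=(m, n)) / np.sqrt(m)   # random sensing matrix
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(size=k) + 3.0
y = A @ x                                  # sub-Nyquist measurements
x_hat = omp(A, y, k)
print(np.linalg.norm(x - x_hat) / np.linalg.norm(x))  # near-zero error
```

The point mirrors the talk: once structure (here, 3-sparsity) is known, far fewer measurements than the ambient dimension suffice.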

Biography of the speaker:

Yonina C. Eldar received the B.Sc. degree in Physics in 1995 and the B.Sc. degree in Electrical Engineering in 1996 both from Tel-Aviv University (TAU), Tel-Aviv, Israel, and the Ph.D. degree in Electrical Engineering and Computer Science in 2002 from the Massachusetts Institute of Technology (MIT), Cambridge. From January 2002 to July 2002 she was a Postdoctoral Fellow at the Digital Signal Processing Group at MIT.

She is currently a Professor in the Department of Mathematics and Computer Science, Weizmann Institute of Science, Rehovot, Israel. She was previously a Professor in the Department of Electrical Engineering at the Technion, where she held the Edwards Chair in Engineering. She is also a Visiting Professor at MIT, a Visiting Scientist at the Broad Institute, and an Adjunct Professor at Duke University and was a Visiting Professor at Stanford. She is a member of the Israel Academy of Sciences and Humanities (elected 2017), an IEEE Fellow and a EURASIP Fellow.

Dr. Eldar has received numerous awards for excellence in research and teaching, including the IEEE Signal Processing Society Technical Achievement Award (2013), the IEEE/AESS Fred Nathanson Memorial Radar Award (2014), and the IEEE Kiyo Tomiyasu Award (2016). She was a Horev Fellow of the Leaders in Science and Technology program at the Technion and an Alon Fellow. She received the Michael Bruno Memorial Award from the Rothschild Foundation, the Weizmann Prize for Exact Sciences, the Wolf Foundation Krill Prize for Excellence in Scientific Research, the Henry Taub Prize for Excellence in Research (twice), the Hershel Rich Innovation Award (three times), the Award for Women with Distinguished Contributions, the Andre and Bella Meyer Lectureship, the Career Development Chair at the Technion, the Muriel & David Jacknow Award for Excellence in Teaching, and the Technion’s Award for Excellence in Teaching (twice). She received several best paper awards and best demo awards together with her research students and colleagues including the SIAM Outstanding Paper Prize and the IET Circuits, Devices and Systems Premium Award, and was selected as one of the 50 most influential women in Israel.

She was a member of the Young Israel Academy of Science and Humanities and the Israel Committee for Higher Education. She is the Editor in Chief of Foundations and Trends in Signal Processing, a member of the IEEE Sensor Array and Multichannel Technical Committee and serves on several other IEEE committees. In the past, she was a Signal Processing Society Distinguished Lecturer, member of the IEEE Signal Processing Theory and Methods and Bio Imaging Signal Processing technical committees, and served as an associate editor for the IEEE Transactions On Signal Processing, the EURASIP Journal of Signal Processing, the SIAM Journal on Matrix Analysis and Applications, and the SIAM Journal on Imaging Sciences. She was Co-Chair and Technical Co-Chair of several international conferences and workshops.

She is author of the book “Sampling Theory: Beyond Bandlimited Systems” and co-author of the books “Compressed Sensing” and “Convex Optimization Methods in Signal Processing and Communications,” all published by Cambridge University Press.

December 3rd, 2019

Indian Institute of Science
Centre for BioSystems Science and Engineering

BSSE Seminar

(Organized by IEEE Signal Processing Society Bangalore Chapter)

9th December 2019 (Monday), 11:00 AM, MRDG Seminar Hall, 1st floor, Biological Sciences Building

Title: A Small Rearguard Action in the Age of Big Data and Machine Learning: Mechanistic Models in Computational Physiology

Speaker: Dr. George Verghese, MIT, Cambridge, Massachusetts

Abstract: The talk will draw some contrasts between phenomenological or empirical models (e.g., regression, neural networks) and mechanistic models (e.g., circuit analogs). Mechanistic models focus on meaningful component parts/subprocesses of the phenomenon of interest, and on their interconnections/interactions, which then generate the range of possible system behaviors. Examples will be given of mechanistic models for aspects of cardiovascular, cerebrovascular and respiratory physiology, and application of these models to extracting interpretable information from relevant data obtained in clinical or ambulatory settings.
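A minimal example of the circuit-analog models mentioned above is the two-element Windkessel for arterial pressure. The parameter values and pulse shape below are arbitrary, chosen only to show how a mechanistic model exposes interpretable parameters (here a decay time constant), unlike a purely empirical fit:

```python
import numpy as np

# Two-element Windkessel: arterial pressure P driven by inflow Q(t),
# with peripheral resistance R and arterial compliance C:
#   C * dP/dt = Q(t) - P / R
R, C, dt = 1.0, 1.5, 1e-3
t = np.arange(0, 3.0, dt)
Q = np.where((t % 1.0) < 0.3, 400.0, 0.0)   # crude pulsatile ejection
P = np.empty_like(t)
P[0] = 80.0
for i in range(1, len(t)):                  # forward-Euler integration
    P[i] = P[i - 1] + dt * (Q[i - 1] - P[i - 1] / R) / C

# During diastole (Q = 0) pressure decays exponentially with tau = R*C,
# a physiologically meaningful quantity recoverable from the waveform.
i0, i1 = np.searchsorted(t, 0.4), np.searchsorted(t, 0.9)
tau_est = -(t[i1] - t[i0]) / np.log(P[i1] / P[i0])
print(tau_est)   # close to R*C = 1.5
```

Fitting R and C to a measured pressure waveform is what lets such models turn clinical data into interpretable physiology, the contrast with black-box regression that the talk draws.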

Bio: Dr. George Verghese received his BTech from the Indian Institute of Technology, Madras in 1974, his MS from the State University of New York, Stony Brook in 1975, and his PhD from Stanford University in 1979, all in Electrical Engineering. Since 1979, he has been with MIT, where he is the Henry Ellis Warren (1894) Professor, and Professor of Electrical and Biomedical Engineering, in the Department of Electrical Engineering and Computer Science. He was named a MacVicar Faculty Fellow at MIT for the period 2011-2012, for outstanding contributions to undergraduate education.Verghese is also a principal investigator with MIT’s Research Laboratory of Electronics (RLE). His research interests and publications are in the areas of dynamic systems, modeling, estimation, signal processing, and control. Over the past decade, his research focus has shifted from applications in power systems and power electronics entirely to applications in biomedicine. He directs the Computational Physiology and Clinical Inference Group in RLE. He is an IEEE Fellow, and has co-authored two texts: Principles of Power Electronics (with J.G. Kassakian and M.F. Schlecht, 1991), and Signals, Systems and Inference (with A.V. Oppenheim, 2015).

 

November 21st, 2019
***************************************************************************************
The IEEE Signal Processing Society, Bangalore Chapter and the Department of Electrical Engineering, Indian Institute of Science, welcome you to the following talk.
 
Location and Date: MMCR, EE, Thursday, Nov. 21, 4 pm (coffee at 3:45 pm).
Speakers : Dr. Sivaram Garimella and Kishore Nandury

 

Title: Semi-supervised Learning for Amazon Alexa.
Abstract:
State-of-the-art Acoustic Models (AM) are large, complex deep neural networks that typically comprise millions of model parameters. Deep neural networks can express highly complex input-output relationships and transformations, but the key to getting the best performance out of them is the availability of large amounts of matched acoustic data – matched to the desired dialect, language, environmental/channel condition, microphone characteristic, speaking style, and so on. Since it is both time-consuming and expensive to transcribe large amounts of matched acoustic data for every desired condition, we leverage Teacher/Student-based Semi-Supervised Learning technology for improving the AM. Our training leverages vast amounts of un-transcribed data in addition to multi-dialect transcribed data, yielding up to 7% relative word error rate reduction over the baseline model, which has not seen any unlabelled data.
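The Teacher/Student objective can be sketched as training the student to match the teacher's soft posteriors on un-transcribed data. The sketch below is a generic distillation loss with made-up logits, an illustration of the principle rather than Amazon's actual recipe:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Cross-entropy of the student against the teacher's soft targets."""
    p_teacher = softmax(teacher_logits, T)          # soft "labels"
    log_p_student = np.log(softmax(student_logits, T))
    return -np.mean(np.sum(p_teacher * log_p_student, axis=-1))

# Un-transcribed frames: the teacher AM scores them, and the student is
# trained to match those scores; no human transcription is needed.
rng = np.random.default_rng(0)
teacher_logits = rng.normal(size=(32, 10))
loss_mismatched = distillation_loss(rng.normal(size=(32, 10)), teacher_logits)
loss_matched = distillation_loss(teacher_logits, teacher_logits)
print(loss_matched < loss_mismatched)   # True: agreeing with the teacher
                                        # minimizes the loss
```

In practice this loss on un-transcribed data is combined with the usual supervised loss on the transcribed portion.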
Bio:
Sri Garimella is a Senior Manager heading the Alexa Machine Learning/Speech Recognition group at Amazon, India. He has been associated with Amazon for more than 7 years. He obtained his PhD from the Department of Electrical and Computer Engineering, Center for Language and Speech Processing, at the Johns Hopkins University, Baltimore, USA, in 2012, and his Master of Engineering in Signal Processing from the Indian Institute of Science, Bangalore, India, in 2006.
Kishore Nandury is an Applied Scientist on the Alexa ASR team at Amazon, Bangalore. Prior to Amazon, he worked at Intel, Sling Media and NVIDIA. He obtained his Master's degree in Signal Processing from the Indian Institute of Science in 2005.
Host Faculty:  Sriram Ganapathy
***************************************************************************************