Keynotes
Keynote 1
Title
Pervasive Intelligence and Cloud Intelligence Enabling Intelligent Revolution and Intelligent Economy
by Guang-Bin Huang, Nanyang Technological University, Singapore
Abstract
This talk argues that, from a technical and historical point of view, although artificial intelligence and machine learning have made many achievements, overall they are still on the eve of the intelligent revolution. The intelligent revolution will have a far greater impact on humanity than the agricultural and industrial revolutions, and a new intelligent economic model will emerge. A new wave of artificial intelligence and machine learning technology will arise: 1) machine learning will be extended from the cloud to local devices, and cloud machine learning techniques (such as deep learning) will work closely with local machine learning techniques (such as Extreme Learning Machines (ELM)); 2) machine learning algorithms will no longer depend solely on GPUs, although GPUs will continue to play important roles in cloud intelligence; 3) machine learning and biological learning will gradually converge; 4) smart chips will become popular; 5) non-von Neumann computer architectures will become reality; 6) although data is important, intelligence does not have to rely on big data, and big data will in many cases cause machines to over-fit. Universal learning and universal intelligence are the engine of the intelligent revolution and the intelligent economy. This talk will also analyze ten major artificial intelligence applications and ten major impacts of the intelligent revolution.
Biography
Guang-Bin Huang is a Full Professor in the School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore. He is a member of Elsevier's Research Data Management Advisory Board. He is one of three Expert Directors of the Expert Committee of the China Big Data Industry Ecological Alliance organized by the China Ministry of Industry and Information Technology, and a member of the International Robotic Expert Committee for China. He was a nominee for the 2016 Singapore President's Science Award, was named a Thomson Reuters “Highly Cited Researcher” (in two fields: Engineering and Computer Science), and is listed in Thomson Reuters’s “The World's Most Influential Scientific Minds.” He received the Best Paper Award of the IEEE Transactions on Neural Networks and Learning Systems (2013). His two works on Extreme Learning Machines (ELM) were listed by Google Scholar in 2017 as Top 2 and Top 7, respectively, in its “Classic Papers: Articles That Have Stood the Test of Time” - Top 10 in Artificial Intelligence.
He serves as an Associate Editor of Neurocomputing, Cognitive Computation, Neural Networks, and IEEE Transactions on Cybernetics.
He is Principal Investigator of BMW-NTU Joint Future Mobility Lab on Human Machine Interface and Assisted Driving, Principal Investigator (data and video analytics) of Delta – NTU Joint Lab, Principal Investigator (Scene Understanding) of ST Engineering – NTU Corporate Lab, and Principal Investigator (Marine Data Analysis and Prediction for Autonomous Vessels) of Rolls Royce – NTU Corporate Lab. He has led/implemented several key industrial projects (e.g., Chief architect/designer and technical leader of Singapore Changi Airport Cargo Terminal 5 Inventory Control System (T5 ICS) Upgrading Project, etc).
One of his main contributions is the proposal of a new machine learning theory and set of learning techniques called Extreme Learning Machines (ELM), which fill the gap between traditional feedforward neural networks, support vector machines, clustering and feature learning techniques. ELM theories have recently been confirmed directly by biological learning evidence, filling the gap between machine learning and biological learning. ELM theories have also addressed the concern of “Father of Computers” J. von Neumann on why “an imperfect neural network, containing many random connections, can be made to perform reliably those functions which might be represented by idealized wiring diagrams.”
Keynote 2
Title
Artificial Vision by Deep CNN Neocognitron
by Kunihiko Fukushima, Fuzzy Logic Systems Institute, Japan
Abstract
Recently, deep convolutional neural networks (deep CNN) have become very popular in the field of visual pattern recognition. The neocognitron, first proposed by Fukushima (1979), is a network of this category. Its architecture was suggested by neurophysiological findings on the visual systems of mammals. It is a hierarchical multi-layered network that acquires the ability to recognize visual patterns robustly through learning. Although the neocognitron has a long history, improvements of the network are still continuing. This talk discusses the recent neocognitron, focusing on differences from the conventional deep CNN.
For training the intermediate layers of the neocognitron, a learning rule called AiS (Add-if-Silent) is used. Under the AiS rule, a new cell is generated and added to the network if all postsynaptic cells are silent in spite of non-silent presynaptic cells. The generated cell learns the activity of the presynaptic cells in one shot. Once a cell is generated, its input connections do not change any more. Thus the training process is very simple and does not require time-consuming repetitive calculation.
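A minimal Python/NumPy sketch of the AiS idea described above; the silence threshold and the normalized (cosine-like) cell response are illustrative assumptions, not the neocognitron's exact cell model:

import numpy as np

class AiSLayer:
    """Toy Add-if-Silent layer: a cell is added only when no existing cell responds."""
    def __init__(self, silence_threshold=0.7):
        self.weights = []                            # one weight vector per generated cell
        self.silence_threshold = silence_threshold   # assumed response threshold

    def responses(self, x):
        x = x / (np.linalg.norm(x) + 1e-12)
        if not self.weights:
            return np.zeros(0)
        return np.stack(self.weights) @ x            # cosine-like responses of postsynaptic cells

    def train_step(self, x):
        # If every postsynaptic cell stays silent for a non-silent input, generate a new cell
        # that memorizes the presynaptic activity in one shot; its weights are then frozen.
        if np.linalg.norm(x) == 0:
            return
        r = self.responses(x)
        if r.size == 0 or r.max() < self.silence_threshold:
            self.weights.append(x / np.linalg.norm(x))

layer = AiSLayer()
rng = np.random.default_rng(0)
for _ in range(100):
    layer.train_step(rng.random(16))
print("cells generated:", len(layer.weights))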
In the deepest layer, a method called IntVec (Interpolating-Vector) is used for classifying input patterns based on the features extracted by the intermediate layers. For recognition by the IntVec, we search, in the multi-dimensional feature space, for the nearest plane or line that is made of a trio or pair of reference vectors. Computer simulation shows that the recognition error can be made much smaller by the IntVec than by the WTA (Winner-Take-All) or even by the SVM (support vector machine).
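A rough sketch of the line (pair) case of the IntVec classification described above: the test feature vector is assigned to the class whose pair of reference vectors spans the nearest line. The unrestricted interpolation coefficient is a simplification, not the exact published procedure:

import numpy as np
from itertools import combinations

def dist_to_line(x, a, b):
    """Distance from x to the line through reference vectors a and b."""
    d = b - a
    t = np.dot(x - a, d) / (np.dot(d, d) + 1e-12)   # interpolation coefficient
    return np.linalg.norm(x - (a + t * d))

def intvec_predict(x, refs_by_class):
    """Assign x to the class whose pair of reference vectors forms the nearest line."""
    best_class, best_dist = None, np.inf
    for label, refs in refs_by_class.items():
        for a, b in combinations(refs, 2):
            dist = dist_to_line(x, a, b)
            if dist < best_dist:
                best_class, best_dist = label, dist
    return best_class

rng = np.random.default_rng(1)
refs = {0: [rng.normal(0, 1, 8) for _ in range(4)],
        1: [rng.normal(3, 1, 8) for _ in range(4)]}
print(intvec_predict(rng.normal(3, 1, 8), refs))    # expected: 1 (usually)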
Some other functions of the visual system can also be realized by networks extended from the neocognitron, for example, the mechanism of selective attention, recognition and completion of partly occluded patterns, and so on.
Biography
Kunihiko Fukushima received a B.Eng. degree in electronics in 1958 and a PhD degree in electrical engineering in 1966 from Kyoto University, Japan. He was a professor at Osaka University from 1989 to 1999, at the University of Electro-Communications from 1999 to 2001, at Tokyo University of Technology from 2001 to 2006; and a visiting professor at Kansai University from 2006 to 2010. Prior to his Professorship, he was a Senior Research Scientist at the NHK Broadcasting Science Research Laboratories. He is now a Senior Research Scientist at Fuzzy Logic Systems Institute (part-time position), and usually works at his home in Tokyo.
He received the Achievement Award, Distinguished Achievement and Contributions Award, and Excellent Paper Awards from IEICE; the Neural Networks Pioneer Award from IEEE; APNNA Outstanding Achievement Award; Excellent Paper Award from JNNS; INNS Helmholtz Award; and so on. He was the founding President of JNNS (the Japanese Neural Network Society) and was a founding member on the Board of Governors of INNS (the International Neural Network Society). He is a former President of APNNA (the Asia-Pacific Neural Network Assembly).
Keynote 3
Title
Towards Next Generation of Deep Learning Frameworks
by Mu Li, Amazon Web Services, USA
Abstract
We present MXNet Gluon, an easy-to-use tool for designing a wide range of networks, from image processing (LeNet, Inception, etc.) to advanced NLP (TreeLSTM). It combines the convenience of imperative frameworks (PyTorch, Torch, Chainer) with efficient symbolic execution (TensorFlow, CNTK).
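A small illustrative Gluon snippet showing the hybrid imperative/symbolic style referred to above; the layer sizes and dummy input shape are arbitrary assumptions for demonstration:

import mxnet as mx
from mxnet.gluon import nn

# Define a network imperatively, then hybridize() to compile it into a symbolic graph.
net = nn.HybridSequential()
net.add(nn.Dense(128, activation='relu'),
        nn.Dense(10))
net.initialize(mx.init.Xavier())
net.hybridize()                              # switch from imperative to symbolic execution

x = mx.nd.random.uniform(shape=(4, 784))     # a dummy batch of flattened images
print(net(x).shape)                          # (4, 10)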
Biography
Mu Li is a principal scientist for machine learning at AWS. Before joining AWS, he was the CTO of Marianas Labs, an AI start-up. He also served as a principal research architect at the Institute of Deep Learning at Baidu. He obtained his PhD in computer science from Carnegie Mellon University.
Mu’s research has focused on large-scale machine learning. In particular, he is interested in the co-design of distributed systems and machine learning algorithms. He has been the first author of computer science conference and journal papers on subjects that span theory (FOCS), machine learning (NIPS, ICML), applications (CVPR, KDD), and operating systems (OSDI).
Keynote 4
Title
Learning with Random Guesses in Random Decision Forests
by Tin Kam Ho, IBM Watson, USA
Abstract
Over the past 20 years, Random Decision Forest has been established as one of the most robust methods for classification and regression.
The method was first proposed in 1995 as an algorithm to accomplish what is anticipated by Kleinberg's theory of stochastic discrimination.
The theory formalizes an extreme form of learning -- learning with a large ensemble of random guesses. In this talk we review the key elements of the stochastic discrimination theory and how they provide guidance to algorithmic implementations, including how they led to the idea of random decision forests. We discuss the forest method's evolution and its unexploited potential in large-scale parallelism, stochastic searches, and the use of different model forms. Observing how certain elements of the stochastic discrimination theory are adopted in other classification methods, we suggest how to relate those methods to the theory to look for potential enhancements.
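As a toy illustration of the "ensemble of random guesses" idea mentioned above (not Kleinberg's actual construction), the Python sketch below generates many random linear stumps, keeps only those that are weakly "enriched" toward one class, and classifies by averaging their votes; the enrichment margin and data are assumptions:

import numpy as np

rng = np.random.default_rng(0)

# Two-class toy data.
X0 = rng.normal(0.0, 1.0, size=(200, 5))
X1 = rng.normal(1.0, 1.0, size=(200, 5))
X = np.vstack([X0, X1]); y = np.r_[np.zeros(200), np.ones(200)]

def random_stump():
    w = rng.normal(size=X.shape[1]); b = rng.normal()
    return lambda Z: (Z @ w + b > 0).astype(float)

# Keep random guesses that favor class 1 only slightly more than class 0 (weak enrichment).
models = []
while len(models) < 500:
    m = random_stump()
    p = m(X)
    if p[y == 1].mean() > p[y == 0].mean() + 0.05:   # assumed enrichment margin
        models.append(m)

votes = np.mean([m(X) for m in models], axis=0)      # fraction of models voting "class 1"
pred = (votes > 0.5).astype(float)
print("training accuracy:", (pred == y).mean())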
Biography
Tin Kam Ho is a lead scientist in artificial intelligence research and applications at IBM Watson. Before that, she led a department of statistics and machine learning research at Bell Labs. She pioneered research in multiple classifier systems, random decision forests, and data complexity analysis. Over her career she has contributed to many application domains of pattern recognition and data analysis, including multilingual reading machines, optical network design and monitoring, wireless geolocation, and smart grid demand forecasting. She served as Editor-in-Chief of Pattern Recognition Letters from 2004 to 2010, and as Editor or Associate Editor for several other journals, including IEEE Transactions on Pattern Analysis and Machine Intelligence, Pattern Recognition, and International Journal on Document Analysis and Recognition. Her work has been honored with the Pierre Devijver Award in statistical pattern recognition, several Bell Labs awards, and the Young Scientist Award of the International Conference on Document Analysis and Recognition. Her publications have received over 9000 citations. She is a Fellow of the IAPR and the IEEE.
Keynote 5
Title
Ensemble Approaches to Class Imbalance Learning
by Xin Yao, University of Birmingham, UK
Abstract
Many real-world classification problems have highly imbalanced and skewed data distributions. In fault diagnosis and condition monitoring, for example, there are ample data for the normal class, yet data for faults are always very limited and costly to obtain. It is often a challenge to increase the performance of a classifier on the minority classes without sacrificing the performance on the majority classes. This talk discusses some of the techniques and algorithms that have been developed for class imbalance learning, especially through ensemble learning. First, the motivations behind ensemble learning are introduced and the importance of diversity highlighted. Second, some of the challenges of multi-class imbalance learning and potential solutions are presented. What may have worked well in the binary case no longer works for multiple classes, especially as the number of classes increases. Third, online class imbalance learning will be discussed, which can be seen as a combination of online learning and class imbalance learning. Online class imbalance learning poses new research challenges that have not been well understood, let alone solved, especially for imbalanced data streams with concept drift. Fourth, the natural fit of multi-objective learning to class imbalance learning is mentioned, and the relationship between multi-objective learning and ensemble learning will be discussed. Finally, future research directions will be pointed out.
Biography
Xin Yao is a Chair Professor of Computer Science at the Southern University of Science and Technology, Shenzhen, China, and at the University of Birmingham, UK. His major research interests include evolutionary computation, ensemble learning and search-based software engineering. His work won the 2001 IEEE Donald G. Fink Prize Paper Award, 2010, 2015 and 2017 IEEE Transactions on Evolutionary Computation Outstanding Paper Awards, 2010 BT Gordon Radley Award for Best Author of Innovation (Finalist), 2011 IEEE Transactions on Neural Networks Outstanding Paper Award, and many other best paper awards. He received the prestigious Royal Society Wolfson Research Merit Award in 2012 and the IEEE CIS Evolutionary Computation Pioneer Award in 2013.
Keynote 6
Title
Toward Precision Brain Monitoring in Critical Care
by M. Brandon Westover, Harvard Medical School
Abstract
Seizures, status epilepticus, and seizure-like rhythmic or periodic activity are common, pathological, and harmful states of brain electrical activity seen in the electroencephalogram (EEG) of patients during critical medical illnesses or acute brain injury. A growing body of evidence shows that these states, when prolonged, cause neurological injury. Development of rational interventions is hampered by poor inter-rater agreement in experts' visual interpretation of EEG patterns, and by the difficulty of studying the large and heterogeneous population at risk for these states. Consequently, the relationships between features of seizure and seizure-like brain states, their duration, and ultimate neurologic outcomes have not been systematically studied.
In this talk I will review progress toward creating tools to (a) automatically detect and classify seizure-spectrum EEG patterns, (b) extract their key characteristics and measure their persistence, (c) make quantitative statements about the potential for harm, and (d) integrate clinical and EEG information to predict neurological outcomes. I will discuss the challenges and the need to grapple with a large number of cases ("Big Data"), spanning a wide range of EEG patterns and acute illnesses, to find the reliable signals within the noise of the case heterogeneity encountered in real-world critical care EEG.
Biography
Dr. M. Brandon Westover, MD, PhD is a clinical neurophysiologist at Harvard Medical School / Massachusetts General Hospital (MGH), where he directs the MGH Critical Care EEG Monitoring Service and co-directs a medical informatics group called the MGH Clinical Data Animation Center dedicated to “bringing massive medical data to life.” His clinical interests include applying EEG to help care for patients with acute neurological conditions such as delirium, anoxic brain injury, status epilepticus, and delayed cerebral ischemia following subarachnoid hemorrhage. His research focuses on developing automated methods for interpreting clinical EEG data, closed-loop control of sedation and analgesia, medical informatics, medical decision theory, and the neurophysiology of pain, sedation, and delirium in critically ill patients. Dr. Westover’s overarching research goal is to improve the care of neurological patients through the application of engineering and computational approaches.
Keynote 7
Title
From Artificial Intelligence to Cyborg Intelligence
by Gang Pan, Zhejiang University, China
Abstract
Advances in multidisciplinary fields such as brain-machine interfaces, artificial intelligence, and computational neuroscience signal a growing convergence between machines and biological beings. In particular, brain-machine interfaces enable a direct communication pathway between the brain and machines, promoting a brain-in-the-loop computational paradigm that integrates biological and artificial intelligence. A biological-machine system consisting of both organic and computing components is emerging, which we call cyborg intelligence. This talk will introduce the concept, architectures, and applications of cyborg intelligence, and will also discuss open issues and challenges.
Biography
Gang Pan is a professor in the College of Computer Science and Technology at Zhejiang University. His interests include pervasive computing, computer vision, artificial intelligence, and brain-machine interfaces. He earned his B.S. and Ph.D. degrees from Zhejiang University in 1998 and 2004, respectively. From 2007 to 2008, he was a visiting scholar at the University of California, Los Angeles. He has co-authored more than 100 refereed papers and holds 25 granted patents. Dr. Pan is a recipient of the CCF-IEEE CS Young Computer Scientist Award and the Microsoft Fellowship Award. He has received many technical awards, including TOP-10 Achievements in Science and Technology in Chinese Universities (2016), the National Science and Technology Progress Award (2015), the Best Paper Award of ACM UbiComp'16, and a 2016 BCI Research Award nomination. He serves as an associate editor of IEEE Systems Journal, Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT), and Chinese Journal of Electronics.
Keynote 8
Title
Turing Machine Logic in Brain-Inspired Networks for Vision, Speech, and Natural Languages
by Juyang Weng, Michigan State University, USA
Abstract
In his 1991 paper published in AI Magazine, Marvin Minsky wrote of "symbolic versus connectionist" and "neat versus scruffy" approaches. This talk presents a series of advances in understanding brain anatomy, brain plasticity, and brain architecture. These advances have led to a new kind of neural network, Developmental Networks (DN), which addresses the representation issue that Prof. Minsky questioned. In particular, such brain-inspired networks are not "scruffy" but instead learn the logic of a new kind of Turing Machine, the Emergent Turing Machine. In the AIML Contests 2016 and 2017, participants gained hands-on experience of how a unified learning network, the DN, learns three very different AI problems: vision, speech, and natural languages.
Biography
Juyang (John) Weng is a professor at the Department of Computer Science and Engineering, the Cognitive Science Program, and the Neuroscience Program, Michigan State University, East Lansing, Michigan, USA. He received his BS degree from Fudan University in 1982, and his MS and PhD degrees from the University of Illinois at Urbana-Champaign in 1985 and 1989, respectively, all in Computer Science. From August 2006 to May 2007, he was also a visiting professor at the Department of Brain and Cognitive Sciences of MIT. He was also a Changjiang visiting professor at Fudan University from 2004 to 2014.
His research interests include computational biology, computational neuroscience, computational developmental psychology, biologically inspired systems, computer vision, audition, touch, natural languages, behaviors, and intelligent robots. He is the author or coauthor of over three hundred research articles. He is the editor-in-chief of the International Journal of Humanoid Robotics, the editor-in-chief of the Brain-Mind Magazine, and an associate editor of the IEEE Transactions on Autonomous Mental Development (now IEEE Transactions on Cognitive and Developmental Systems). He was the Chairman of the Governing Board of the International Conferences on Development and Learning (ICDL) (2005-2007, http://cogsci.ucsd.edu/~triesch/icdl/), chairman of the Autonomous Mental Development Technical Committee of the IEEE Computational Intelligence Society (2004-2005), an associate editor of IEEE Transactions on Pattern Analysis and Machine Intelligence, and an associate editor of IEEE Transactions on Image Processing. He is a Fellow of IEEE.
Keynote 9
Title
Towards Automated Spike Detection
by Jing Jin, Harvard Medical School
Abstract
The finding of primary importance for the diagnosis of epilepsy is the presence of epileptiform discharges, also known as “spikes” and “sharp waves”, hereafter referred to collectively as “spikes”. As the distinctive biomarkers of epilepsy, spikes exhibit a wide variety of morphologies both within and across patients. In clinical practice, visual inspection is still the gold standard for interpreting EEG, yet it is tedious and ultimately subjective. The inter-expert agreement rate for spikes has been found to be as low as 60% for certain cases. Consequently, many patients are undiagnosed or misdiagnosed, leading to inappropriate medical interventions and avoidable suffering. In Singapore, the burden of untreated epilepsy is 6%, while up to 25-40% of patients are over-diagnosed. Moreover, experienced experts are in short supply, especially in developing countries such as China and India. Given all these limitations, automated or semi-automated spike detection systems are a key solution. Past attempts to create automated spike detectors have failed primarily because of the intense labor and expense of gathering a sufficiently large and diverse spike database, and the lack of rigorous validation on large numbers of EEGs. In our project, we have brought together a unique and unprecedented combination of clinical and algorithmic expertise, and data resources, to develop a general-purpose spike detector.
Biography
Jing Jin is a postdoctoral research fellow with Massachusetts General Hospital (MGH), Harvard Medical School, and the School of Electrical & Electronic Engineering at Nanyang Technological University (NTU), Singapore. Her research interests are in machine learning, signal processing, and computational neuroscience. She enjoys working on real-world problems, often in collaboration with medical practitioners.
Prior to joining MGH, Jing Jin was a research fellow receiving her postdoctoral training during 2016-2017, under the guidance of Prof. Justin Dauwels, in the School of Electrical & Electronic Engineering at NTU. She obtained her PhD degree in Electrical Engineering from NTU Singapore in July 2016, supervised by Prof. Justin Dauwels. In 2011 she received her engineering degree in Biomedical Electronics from NTU Singapore.
Jing Jin was a visiting scholar at the Cash Lab at MGH Neurology Department, and Harvard Medical School in winter 2013, summer 2014, and fall 2016. She is a member of the IEEE. She is a research affiliate with the Singapore-MIT Alliance for Research and Technology (SMART), National Health Innovation Centre Singapore (NHIC), Medical University of South Carolina (MUSC), and National University Hospital of Singapore (NUH).
Keynote 10
Title
Multimodal Emotion Recognition and Vigilance Estimation Using Machine Learning
by Bao-Liang Lu, Shanghai Jiaotong University, China
Abstract
Covert aspects of ongoing user mental states provide key context information in user-aware human computer interactions, which can help systems react adaptively in a proper manner. Various studies have introduced the assessment of users' mental states, such as emotion and vigilance, to promote active interactions between users and machines. Emotions are complex psycho-physiological processes that are associated with many external and internal activities. Different modalities describe different aspects of emotions and contain complementary information. Integrating this information with fusion technologies is attractive for constructing robust emotion recognition models. Vigilance refers to the ability to endogenously maintain focus. Various working environments require sustained high vigilance, particularly in dangerous occupations such as driving trucks and high-speed trains. In this talk, we introduce our recent work on multimodal emotion recognition and vigilance estimation using extreme learning machines, transfer learning and deep networks. We present a multimodal framework for recognizing human emotions using EEG and eye-tracking glasses to integrate the internal brain activities and external subconscious behaviors of users. We also present a multimodal approach for vigilance estimation that combines EEG and forehead EOG and incorporates the temporal dependency of vigilance into model training. Our experimental results demonstrate that modality fusion can improve performance compared with a single modality, and that multimodal data contain complementary information for both emotion recognition and vigilance estimation.
Biography
Bao-Liang Lu received the Ph.D. degree in electrical engineering from Kyoto University, Kyoto, Japan, in 1994. From April 1994 to March 1999, he was a Frontier Researcher at the Bio-Mimetic Control Research Center, the Institute of Physical and Chemical Research (RIKEN), Japan. From April 1999 to August 2002, he was a research scientist at the RIKEN Brain Science Institute, Japan. Since August 2002, he has been a full professor at the Department of Computer Science and Engineering, Shanghai Jiao Tong University, China. He is the director of the Center for Brain-Like Computing and Machine Intelligence and of the Key Laboratory of Shanghai Education Commission for Intelligent Interaction and Cognitive Engineering, Shanghai Jiao Tong University. His research interests include brain-like computing, neural networks, machine learning, brain-computer interfaces and affective computing. He is a past President of the Asia Pacific Neural Network Assembly (APNNA) and was the General Chair of the 18th International Conference on Neural Information Processing. He is an Associate Editor of Neural Networks and IEEE Transactions on Cognitive and Developmental Systems, and a senior member of the IEEE.
Keynote 11
Title
Non-invasive Detection of Silent Hippocampal Seizures: Applications in Epilepsy, Alzheimer’s disease, and Beyond
by Alice Lam, Harvard Medical School
Abstract
Seizures are characterized by rhythmic brain activity that evolves in frequency and location. When neurologists review a scalp electroencephalogram (EEG) looking for seizures, they are typically looking for an obvious, visually identifiable electrical event. Similarly, most scalp EEG-based seizure detection algorithms are trained using seizure examples that are clearly visible on the scalp EEG.
Not all seizures are visible on the scalp EEG, however. Here, I will introduce the concept of “silent” seizures, which arise from deep brain structures such as the hippocampus, and which show no obvious visible signs on the scalp EEG. Silent seizures may have important implications for memory impairments in epilepsy and in Alzheimer’s disease. However, study of silent seizures has been limited by the fact that they are extremely difficult to detect. Currently, detection of silent seizures requires invasive recordings with intracranial electrodes (e.g., surgically implanted depth electrodes). I will discuss machine learning approaches for non-invasive detection of silent hippocampal seizures, using only information derived from a standard scalp EEG. Potential clinical and research applications for silent seizure detectors will also be discussed.
Biography
Alice D. Lam, MD PhD, is a staff neurologist and researcher at the Massachusetts General Hospital and Harvard Medical School. She received her MD/PhD at the University of Michigan Medical School and trained in neurology and epilepsy at the Massachusetts General Hospital.
Dr. Lam’s research focuses on the intersection between epilepsy and the neurodegenerative diseases. Her group uses machine learning approaches to build tools to non-invasively study epileptiform activity arising from the hippocampus. Her work has been published in Nature Medicine and Brain, and has been highlighted in Nature Reviews Neurology, Neurology Today, and Epilepsy Currents.
Keynote 12
Title
Indoor Positioning Systems: Some Recent Development and Challenges
by Lihua Xie, Nanyang Technological University, Singapore
Abstract
The Internet of Things (IoT) envisions a highly networked future where every object is integrated to interact with each other, allowing for communications between objects, as well as between humans and objects. IoT is rapidly transforming our lives, changing everything from how we shop, how we work, and how we enjoy ourselves, to how we stay healthy. In this talk, we shall focus on indoor positioning and localization of objects and individuals, which is essential to IoT. The demand for indoor location-based services has increased significantly in recent years. We shall discuss opportunities and challenges of indoor localization, such as environmental dynamics, device heterogeneity and tedious calibration requirements, and present solutions to these challenges. Machine learning methods such as the extreme learning machine and deep learning are leveraged to develop algorithms for indoor positioning based on received signal strength (RSS). With the WiFi indoor positioning system we have developed in recent years, we shall demonstrate applications such as outdoor/indoor seamless navigation, multi-floor indoor localization and navigation, indoor geo-fencing, and occupancy distribution monitoring and analysis. We shall also discuss device-free human behavior/occupancy detection based on channel state information (CSI). The talk will be concluded with directions for future research.
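As a minimal illustration of RSS-based positioning of the kind mentioned above (not the speaker's system), the sketch below fits a k-nearest-neighbour regressor that maps WiFi RSS fingerprints to 2-D indoor coordinates; the access-point count, path-loss model, and synthetic survey data are assumptions:

import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
n_aps, n_points = 6, 400

# Synthetic site survey: random 2-D positions and RSS values that decay with distance to each AP.
ap_xy = rng.uniform(0, 50, size=(n_aps, 2))
pos = rng.uniform(0, 50, size=(n_points, 2))
dist = np.linalg.norm(pos[:, None, :] - ap_xy[None, :, :], axis=2)
rss = -30 - 20 * np.log10(dist + 1) + rng.normal(0, 2, size=dist.shape)   # dBm-like values

# Fingerprinting: learn RSS -> position on the survey data, then localize held-out fingerprints.
model = KNeighborsRegressor(n_neighbors=5).fit(rss[:300], pos[:300])
pred = model.predict(rss[300:])
print("mean localization error (m):", np.linalg.norm(pred - pos[300:], axis=1).mean())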
Biography
Lihua Xie received the B.E. and M.E. degrees in electrical engineering from Nanjing University of Science and Technology in 1983 and 1986, respectively, and the Ph.D. degree in electrical engineering from the University of Newcastle, Australia, in 1992. He was with the Department of Automatic Control, Nanjing University of Science and Technology from 1986 to 1989. Since 1992, he has been with the School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore, where he is currently a professor and the Director, Delta-NTU Corporate Lab for Cyber-Physical Systems. He served as the Head of Division of Control and Instrumentation from July 2011 to June 2014, and the Director, Centre for E-City from July 2011 to June 2013.
His current research interests include networked control, multi-agent systems, sensor networks, compressive sensing, localization, and unmanned systems. He has authored/co-authored 8 books, over 320 journal papers, and 6 patents. He was listed as a highly cited researcher by Thomson Reuters in 2014, 2015 and 2016. He is currently Editor-in-Chief of Unmanned Systems and an Associate Editor of IEEE Transactions on Control of Network Systems. He has served as Editor for the IET Book Series in Control and as Associate Editor for Automatica, IEEE Transactions on Automatic Control, IEEE Transactions on Control Systems Technology, IEEE Transactions on Circuits and Systems II, etc. He was an IEEE Distinguished Lecturer from 2011 to 2014 and an appointed member of the Board of Governors of the IEEE Control Systems Society in 2011.
Dr Xie is a Fellow of IEEE, a Fellow of IFAC, and an elected member of the Board of Governors of the IEEE Control Systems Society (2016-2018).
Keynote 13
Title
Robotic Experience Learning based on Extreme Learning Machine
by Fuchun Sun, Tsinghua University, China
Abstract
Experience learning is a promising method for robots to master dexterous manipulation skills. Basically, experience learning discovers the laws of skills from features extracted from demonstrated manipulations, and can be divided into three steps: first, collecting different kinds of data from human or robot manipulations; second, extracting skill features from these demonstration data; third, learning the skill models with various machine learning methods. In our work, the Extreme Learning Machine (ELM) is utilized for learning the robotic operation skill model; specifically, it is used to build a robotic tactile recognition model and to learn the robot's grasping model, given ELM's advantages of fast learning speed and precise classification accuracy. Finally, experimental results show the effectiveness of the proposed methods.
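For readers unfamiliar with ELM, the following is a minimal single-hidden-layer ELM classifier of the standard textbook form (random hidden weights, least-squares output weights). It is generic, not the speaker's tactile-recognition model, and the hidden-layer size and toy data are arbitrary assumptions:

import numpy as np

class ELMClassifier:
    """Basic ELM: random hidden layer + least-squares readout."""
    def __init__(self, n_hidden=200, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)            # random nonlinear feature map

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        T = np.eye(y.max() + 1)[y]                      # one-hot targets
        H = self._hidden(X)
        self.beta, *_ = np.linalg.lstsq(H, T, rcond=None)   # output weights in closed form
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta              # class scores

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 10)), rng.normal(2, 1, (100, 10))])
y = np.r_[np.zeros(100, dtype=int), np.ones(100, dtype=int)]
clf = ELMClassifier().fit(X, y)
print("training accuracy:", (clf.predict(X).argmax(axis=1) == y).mean())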
Biography
Fuchun Sun is a professor in the Department of Computer Science and Technology and President of the Academic Committee of the Department, and deputy director of the State Key Laboratory of Intelligent Technology & Systems, Tsinghua University, Beijing, China. His research interests include intelligent control, robotic precise operation, and teleoperation using visual and tactile sensing.
Dr. Sun is the recipient of the Excellent Doctoral Dissertation Prize of China in 2000 from the MOE of China and the Choon-Gang Academic Award from Korea in 2003, and was recognized as a Distinguished Young Scholar in 2006 by the Natural Science Foundation of China. He served as an associate editor of IEEE Transactions on Neural Networks during 2006-2010, and has served as an associate editor of IEEE Transactions on Fuzzy Systems and IEEE Transactions on Systems, Man, and Cybernetics: Systems since 2011.
Keynote 14
Title
Deterministic Methods for Pattern Classification
by Kar-Ann Toh, Yonsei University, South Korea
Abstract
In this talk, an overview of existing approaches to pattern classification will be provided. Subsequently, starting from least-squares error regression utilizing a reduced polynomial model, we shall walk through several related methods for classifier learning, including the ELM. These learning methods have recently been found to relate to each other by mere data transformation. Our focus shall be on deterministic methods for solving the error-counting problem in classification.
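A toy sketch of the starting point mentioned above: least-squares regression on a reduced polynomial feature expansion, thresholded for two-class classification. The degree-2 expansion without cross terms and the 0.5 decision threshold are illustrative assumptions, not the speaker's exact formulation:

import numpy as np

def poly2_features(X):
    """Degree-2 expansion with bias, linear, and squared terms; cross terms are omitted,
    making it a 'reduced' polynomial model."""
    return np.hstack([np.ones((X.shape[0], 1)), X, X**2])

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (150, 3)), rng.normal(1, 1, (150, 3))])
y = np.r_[np.zeros(150), np.ones(150)]          # class labels coded as 0/1 regression targets

P = poly2_features(X)
w, *_ = np.linalg.lstsq(P, y, rcond=None)       # closed-form least-squares solution
pred = (P @ w > 0.5).astype(float)              # threshold the regression output
print("training accuracy:", (pred == y).mean())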
Biography
Kar-Ann Toh is a Professor in the School of Electrical and Electronic Engineering at Yonsei University, South Korea. He received the PhD degree from Nanyang Technological University (NTU), Singapore and then worked for two years in the aerospace industry prior to his post-doctoral appointments at research centers in NTU from 1998 to 2002. He was affiliated with Institute for Infocomm Research in Singapore from 2002 to 2005 prior to his current appointment in Korea. His research interests include biometrics, machine learning, pattern classification, optimization and neural networks. He is a co-inventor of two US patents and has made several PCT filings related to biometric applications. Besides being active in publication, Dr. Toh has served as an advisor/member/co-chair of technical program committee for international conferences related to biometrics and artificial intelligence. He has served as an Associate Editor of IEEE Transactions on Information Forensics and Security, Pattern Recognition Letters and IET Biometrics. He is a senior member of the IEEE.
Keynote 15
Title
Extreme Learning Machines for Commonsense Reasoning and Sentiment Analysis
by Erik Cambria, Nanyang Technological University, Singapore
Abstract
Between the dawn of the Internet and the year 2003, there were just a few dozen exabytes of information on the Web. Today, that much information is created weekly. The opportunity to capture the opinions of the general public about social events, political movements, company strategies, marketing campaigns, and product preferences has raised increasing interest both in the scientific community, for the exciting open challenges, and in the business world, for the remarkable fallouts in marketing and financial prediction. Keeping up with the ever-growing amount of unstructured information on the Web, however, is a formidable task and requires fast and efficient models for opinion mining. To this end, we explore how the high generalization performance, low computational complexity, and fast learning speed of extreme learning machines can be exploited to perform analogical reasoning in a vector space model of affective commonsense knowledge. In particular, by enabling a fast reconfiguration of such a vector space, extreme learning machines allow the polarity associated with natural language concepts to be calculated in a more dynamic and accurate way and, hence, perform better concept-level sentiment analysis.
Biography
Erik Cambria received his PhD in Computing Science and Mathematics in 2012 following the completion of an EPSRC project in collaboration with MIT Media Lab, which was selected as impact case study by the University of Stirling for the UK Research Excellence Framework (REF2014). After working at HP Labs India, Microsoft Research Asia, and NUS Temasek Labs, in 2014 he joined NTU, which is now one of the leading universities on AI research, as an Assistant Professor. His current affiliations include Rolls Royce, Delta, A*STAR, and MIT Synthetic Intelligence Lab. He is also Fellow of the Brain Sciences Foundation and SMIA.
Dr Cambria is Associate Editor of many top-tier journals edited by Elsevier, e.g., INFFUS and KBS, Springer, e.g., AIRE and Cognitive Computation, and IEEE, e.g., CIM and Intelligent Systems, where he manages the Department of Affective Computing and Sentiment Analysis. He is also recipient of several awards, e.g., Temasek Research Fellowship and Emerald Citations of Excellence, founder of SenticNet, a Singapore-based university spin-off offering B2B sentiment analysis services, and is involved in many international conferences as PC member, e.g., AAAI, UAI, ACL, and EMNLP, workshop organizer, e.g., ICDM SENTIRE (since 2011), program chair, e.g., ELM, and keynote speaker, e.g., CICLing.
Keynote 16
Title
Computational Intelligence and Data Analytics for Energy Internet
by Zhaoyang Dong, University of New South Wales, Australia
Abstract
The energy internet can be regarded as a complex system composed of multiple energy systems and next-generation ICT technologies. Different from the smart grid, where the form of energy is primarily electricity, the energy internet provides a platform that allows multiple forms of energy sources to be optimized to meet energy needs. It also provides a platform for peer-to-peer energy trading. In order to maintain system operations and to facilitate trading in an energy internet, real-time data analytics plays an important role. In this talk, computational methods including ELM will be presented for energy internet dispatch, operations and trading.
Biography
Professor Z.Y. Dong obtained his Ph.D. from the University of Sydney, Australia, in 1999. He is SHaRP Professor with the University of NSW, Sydney. His immediate past role was Professor and Head of the School of Electrical and Information Engineering, and Director of the Sydney Energy Internet Research Institute, The University of Sydney. He is a member of the ARC College of Experts panel. He was Ausgrid Chair and Director of the Centre for Intelligent Electricity Networks (CIEN), the University of Newcastle. He also worked as manager for (transmission) system planning at Transend Networks (now TasNetworks), Australia. His research interests include smart grid, power system planning, power system security, load modeling, renewable energy systems, and electricity markets. He is an editor of IEEE Transactions on Smart Grid, IEEE Transactions on Sustainable Energy, IEEE Power Engineering Letters and IET Renewable Power Generation, and an international advisor for the journal Automation of Electric Power Systems. He is a Fellow of IEEE.
Keynote 17
Title
Applying Machine Learning to Open-Source Learning Management System in order to Develop Visualizations of Students’ Risk of Not Succeeding in STEM Courses
by Amaury Lendasse, University of Iowa, USA
Abstract
Increasingly, educators have access to rich sources of data about student learning, but do not know how to utilize this data. For instance, data from Learning Management Systems (LMS) has been used to examine predictors of student learning outcomes with the goal of developing processes to identify and support students who are at risk of not succeeding. One barrier to this has been how to use findings of these analyses to support learners. Another has been the fact that studies of learning management system data alone, without incorporating other student data, have not been strongly predictive of student outcomes or do not describe how learners use tools in the LMS.
Our goal is to develop prediction methods (using a probabilistic framework) that would allow instructors to identify in advance whether students who did well on exams exhibited different patterns of usage of resources in the LMS, and whether these patterns differed according to measures of prior learning, such as pre-term GPA, high school GPA, or ACT test scores, or demographic characteristics, such as gender, race and ethnicity, or first-generation status.
Several machine learning methods have been implemented; among them, Extreme Learning Machines and Self-Organizing Maps (SOM) have been used. SOM provides a visualization of the data by implementing a smart clustering that preserves the inner structure of the data itself. The advantage of SOM is its low complexity and its ability to be used in real time. The main drawback is that the visualization is intrinsically discrete. Previous use of SOM has shown that it is possible to visualize trajectories within the SOM. For example, Côme et al. in 2015 were able to visualize the main professional trajectories in the United States, i.e., the employment situations of American workers. Using these visualizations and trajectories, they were able to predict the risk of a person losing their job. We have adapted this idea in order to visualize the trajectories of students and identify which trajectories lead to success and which lead to failure.
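A self-contained NumPy sketch of a small SOM and of mapping one student's trajectory onto its grid, in the spirit described above; the grid size, learning schedule, and synthetic "weekly LMS activity" vectors are all illustrative assumptions:

import numpy as np

rng = np.random.default_rng(0)

def train_som(data, grid=(8, 8), iters=2000, lr0=0.5, sigma0=2.0):
    """Train a small SOM with exponentially decaying learning rate and neighbourhood width."""
    h, w = grid
    coords = np.array([(i, j) for i in range(h) for j in range(w)], dtype=float)
    W = rng.random((h * w, data.shape[1]))
    for t in range(iters):
        x = data[rng.integers(len(data))]
        bmu = np.argmin(((W - x) ** 2).sum(axis=1))      # best-matching unit
        lr = lr0 * np.exp(-t / iters)
        sigma = sigma0 * np.exp(-t / iters)
        d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
        g = np.exp(-d2 / (2 * sigma ** 2))               # neighbourhood function
        W += lr * g[:, None] * (x - W)
    return W, coords

# Synthetic data: 500 weekly activity vectors, then one student's 10-week trajectory.
data = rng.random((500, 6))
W, coords = train_som(data)
student_weeks = rng.random((10, 6))
trajectory = [tuple(coords[np.argmin(((W - x) ** 2).sum(axis=1))].astype(int))
              for x in student_weeks]
print("student trajectory on the SOM grid:", trajectory)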
Biography
Amaury Lendasse was born in 1972 in Belgium. He received an M.S. degree in Mechanical Engineering from the Universite Catholique de Louvain (Belgium) in 1996, an M.S. in Control in 1997, and a Ph.D. in Applied Mathematics in 2003 from the same university. In 2003, he was a Postdoctoral Researcher in the Computational Neurodynamics Lab at the University of Memphis. From 2004 to 2014, he was a Senior Researcher and an Adjunct Professor in the Adaptive Informatics Research Centre at the Aalto University School of Science (better known as the Helsinki University of Technology) in Finland, where he created and led the Environmental and Industrial Machine Learning group. He is now an Associate Professor at The University of Iowa (USA) and a visiting Professor at Arcada University of Applied Sciences in Finland. He was the Chairman of the annual ESTSP conference (European Symposium on Time Series Prediction) and a member of the editorial board and program committee of several journals and conferences on machine learning. He is the author or coauthor of more than 200 scientific papers in international journals, books, or communications to conferences with reviewing committees. His research includes Big Data, time series prediction, chemometrics, variable selection, noise variance estimation, determination of missing values in temporal databases, nonlinear approximation in financial problems, functional neural networks, and classification. His h-index is 34 and he has a total of more than 4800 citations.
Keynote 18
Title
ELM Feature Learning and Its Applications in Remote Sensing
by Chenwei Deng, Beijing Institute of Technology, China
Abstract
With the development of remote sensing (RS) imaging and signal processing technologies, aerospace RS has entered the era of massive data. High-resolution optical RS images, as important geospatial big data closely related to people's livelihood and national security, have brought serious challenges to traditional satellite data processing strategies. On-board real-time image processing can be employed for invalid data removal, target detection, positioning, ROI extraction, etc., so that valuable information can be rapidly generated and transmitted for various applications and users. In this talk, we will elaborate on the connotation and demands of on-board real-time RS image processing, introduce the current status of ELM feature learning, discuss new advances in the corresponding key technologies and solutions, and finally present some applications and future trends.
Biography
Chenwei Deng is currently a full professor at the School of Information and Electronics, Beijing Institute of Technology, China. Prior to this, he was a post-doctoral research fellow with the School of Computer Engineering, Nanyang Technological University, Singapore. He was awarded the titles of “Beijing Excellent Talent” and “Excellent Young Scholar of Beijing Institute of Technology” in 2013. He has authored or co-authored over 50 technical papers in refereed international journals and conferences, and co-edited one book. His current research interests include machine learning, pattern recognition, and real-time information processing for remote sensing.
Keynote 19
Title
Time-Sensitive Modeling for Better Clinical Prognostication
by Mohammad Ghassemi, Massachusetts Institute of Technology
Abstract
Clinical prognostication is challenging because similar observations should be interpreted differently depending on the time at which they were observed. In this talk, we will demonstrate how the application of time-sensitive modeling techniques can enhance our ability to predict outcomes and provide new insights into patient disease progression. For illustrative purposes, we will focus our discussion on the challenging problem of predicting neurologic outcomes for patients in coma following cardiac arrest. We will also discuss open-source tools we have developed that can help mitigate the practical challenges associated with the collection, curation, and analysis of clinical data.
Biography
Mohammad Ghassemi is a doctoral candidate at the Massachusetts Institute of Technology. As an undergraduate, he studied Electrical Engineering and graduated as both a Goldwater Scholar and the University's "Outstanding Engineer". Mohammad later pursued an MPhil in Information Engineering at the University of Cambridge, where he was a recipient of the prestigious Gates Cambridge Scholarship. Since arriving at MIT in 2011, he has pursued research that leverages his knowledge of machine learning and background in hardware/sensor design to enhance critical care medicine. Mohammad's doctoral focus is machine learning techniques in the context of multi-modal, multi-scale datasets. He has put together the largest collection of post-anoxic coma EEGs in the world, which he is investigating for his doctoral thesis. He has published in several top artificial intelligence and medical venues, including Nature, Science, Intensive Care Medicine, AAAI and KDD. Mohammad's work has been internationally recognized by venues including the BBC, NPR, The Wall Street Journal, and Newsweek. In addition to his research efforts, Mohammad is also involved in a range of entrepreneurial activities, including a platform to facilitate connections between students and an algorithm for social coaching.
Keynote 20
Title
SLEEPNET: Automated Sleep Study via Deep Learning
by Jimeng Sun, Georgia Institute of Technology, USA
Abstract
Sleep disorders, such as sleep apnea, parasomnias, and hypersomnia, affect 50-70 million adults in the United States. Overnight polysomnography (PSG), including brain monitoring using electroencephalography (EEG), is a central component of the diagnostic evaluation for sleep disorders. While PSG is conventionally performed by trained technologists, the recent rise of powerful neural network learning algorithms combined with large physiological datasets offers the possibility of automation, potentially making expert-level sleep analysis more widely available. We propose SLEEPNET (Sleep EEG neural network), a deployed annotation tool for sleep staging. SLEEPNET uses a deep recurrent neural network trained on the largest sleep physiology database assembled to date, consisting of PSGs from over 10,000 patients from the Massachusetts General Hospital (MGH) Sleep Laboratory. SLEEPNET achieves human-level annotation performance on an independent test set of 1,000 EEGs, with an average accuracy of 85.76% and algorithm-expert inter-rater agreement (IRA) of κ = 79.46%, comparable to expert-expert IRA.
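A toy PyTorch sketch in the same spirit (a recurrent network mapping a sequence of per-epoch EEG features to one of five sleep stages, evaluated with Cohen's kappa); the architecture, feature dimension, and random data are assumptions and not the actual SLEEPNET model:

import torch
import torch.nn as nn
from sklearn.metrics import cohen_kappa_score

N_STAGES, N_FEAT, SEQ_LEN = 5, 32, 20      # assumed: 5 sleep stages, 32 features per 30-s epoch

class ToySleepStager(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(N_FEAT, 64, num_layers=2, batch_first=True)
        self.head = nn.Linear(64, N_STAGES)
    def forward(self, x):                   # x: (batch, seq_len, n_feat)
        h, _ = self.lstm(x)
        return self.head(h)                 # per-epoch stage logits

model = ToySleepStager()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(8, SEQ_LEN, N_FEAT)         # dummy batch of feature sequences
y = torch.randint(0, N_STAGES, (8, SEQ_LEN))
for _ in range(5):                          # a few dummy training steps
    opt.zero_grad()
    loss = loss_fn(model(x).reshape(-1, N_STAGES), y.reshape(-1))
    loss.backward()
    opt.step()

pred = model(x).argmax(dim=-1)
print("kappa:", cohen_kappa_score(y.reshape(-1).numpy(), pred.reshape(-1).numpy()))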
Biography
Jimeng Sun is an Associate Professor in the College of Computing at Georgia Tech. Prior to Georgia Tech, he was a researcher at the IBM TJ Watson Research Center. His research focuses on health analytics and data mining, especially on designing tensor factorizations, deep learning methods, and large-scale predictive modeling systems.
He has published over 120 papers and filed over 20 patents (5 granted). He has received the SDM/IBM Early Career Research Award (2017), the ICDM Best Research Paper Award in 2008, the SDM Best Research Paper Award in 2007, and the KDD Dissertation Runner-up Award in 2008. Dr. Sun received his B.S. and M.Phil. in Computer Science from the Hong Kong University of Science and Technology in 2002 and 2003, and his M.Sc. and PhD in Computer Science from Carnegie Mellon University in 2006 and 2007.
Keynote 21
Title
How old is your brain? Insights into brain age from large-scale sleep EEG
by Haoqi Sun, Harvard Medical School
Abstract
In the Big Data era, the combination of large clinical datasets and advanced machine learning supports a variety of novel research questions not easily addressed previously. We have curated a large database of overnight polysomnography (PSG) with full clinical annotations and multi-channel physiology. Subtle age-dependent changes in sleep EEG are well known, but have not been systematically explored in large datasets. We hypothesize that EEG during sleep can predict chronological age (CA). To test this hypothesis, we used machine learning methods to develop a model that estimates a patient's "brain age" (BA): the patient's apparent chronological age predicted solely from characteristics of brain activity during sleep. We analyzed EEG data from 4,330 adult patients who underwent overnight PSG recording at the MGH sleep laboratory. 510 EEG features were extracted from each patient by averaging features from 30-second epochs, including frontal, central and occipital leads. The results show that, in a heterogeneous adult clinical population, we can predict the “brain age” based on EEG features with >0.8 Pearson's correlation, which is far stronger than using delta power alone. We analyze outliers to gain insights into which clinical factors potentially account for deviations from chronological age. We speculate that deviations of brain age from chronological age (patients with brain age older or younger than chronological age) may have clinical implications for age-related cognitive performance or neurodegenerative disorders. Ongoing work seeks to determine whether the deviations between the CA and BA are modifiable, and whether they can provide mechanistic insights into the aging process and risk of age-related neurological disorders.
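To make the modeling setup above concrete, here is a generic sketch (not the authors' pipeline) that regresses age from per-patient EEG feature vectors and reports the Pearson correlation and the brain-age gap; the regressor choice and synthetic data are assumptions, and the feature count of 510 is taken from the abstract:

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_patients, n_features = 1000, 510                    # 510 features per patient, as in the abstract

age = rng.uniform(18, 90, n_patients)                 # chronological age (CA)
X = rng.normal(size=(n_patients, n_features))
X[:, :20] += age[:, None] * 0.05                      # a few weakly age-dependent features

X_tr, X_te, age_tr, age_te = train_test_split(X, age, test_size=0.3, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, age_tr)

brain_age = model.predict(X_te)                       # estimated "brain age" (BA)
r, _ = pearsonr(brain_age, age_te)
gap = brain_age - age_te                              # BA - CA deviation per patient
print(f"Pearson r = {r:.2f}, mean |gap| = {np.abs(gap).mean():.1f} years")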
Biography
Dr. Haoqi Sun's research interests cover using machine learning methods to study the brain at both macro- and micro-scales. He obtained his PhD degree from Nanyang Technological University, Singapore, in 2017, under the supervision of Prof. Guang-Bin Huang. His PhD research was two-fold: at the macro-scale, he developed driver vigilance estimation algorithms based on electroencephalogram (EEG) signals in collaboration with the BMW Group and the Fraunhofer Institute in Singapore; at the micro-scale, he developed novel bio-plausible neuronal plasticity rules to decode information in spike trains, work that has been published in Neural Computation. He is currently a postdoctoral research fellow at Massachusetts General Hospital, USA, under the supervision of Dr. Brandon Westover. His current research topics include prediction of the level of consciousness in intensive care unit (ICU) patients with delirium, as well as automatic sleep staging and brain age prediction based on EEG. Some of this work has been accepted in Sleep.
Keynote 22
Title
Designing ‘Intelligent’ Chips in the Face of Statistical Variations: The Neuromorphic Solution
by Arindam Basu, Nanyang Technological University, Singapore
Abstract
As CMOS technology has been scaling down over the last decade, the effect of statistical variations (or component mismatch) and their impact on circuit design have become increasingly prominent. Further, new nanoscale devices like memristors and spin-mode devices like domain wall memories have emerged as possible candidates for neuromorphic computing at energy levels lower than CMOS—however, they also suffer from issues of variability and mismatch. In this talk, I will present some of the work done by our group where we take inspiration from neuroscience and show new approaches to perform machine learning with low energy consumption using low-resolution mismatched components. First, I will talk about “combinatoric learning” using binary or 1-bit synapses—an alternative to weight-based learning in neural networks that is inspired by structural plasticity in our brains. Second, I will present an example of utilizing component mismatch to perform part of the computation—an example of algorithm-hardware co-design involving random projection algorithms like Reservoir Computing or the Extreme Learning Machine. Lastly, I will show an application of such a low-power machine learner to perform intention decoding in low-power brain-machine interfaces.
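As a generic software illustration of the random-projection family the talk mentions (Reservoir Computing / ELM), and not of the group's hardware, the sketch below builds a small echo state network in which a fixed random recurrent reservoir (standing in for fixed, "mismatched" analog components) is combined with a trained linear readout; the spectral-radius scaling, ridge parameter, and delay-recall task are assumptions:

import numpy as np

rng = np.random.default_rng(0)
n_in, n_res, T = 1, 200, 2000

# Fixed random reservoir: its weights are never trained, mimicking fixed (mismatched) hardware.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))       # scale spectral radius below 1

u = rng.uniform(-1, 1, (T, n_in))                      # input signal
target = np.roll(u[:, 0], 5)                           # task: recall the input from 5 steps ago

states = np.zeros((T, n_res))
x = np.zeros(n_res)
for t in range(T):
    x = np.tanh(W_in @ u[t] + W @ x)                   # reservoir update (untrained dynamics)
    states[t] = x

# Only the linear readout is trained, by ridge regression.
ridge = 1e-6
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(n_res), states.T @ target)
pred = states @ W_out
print("readout MSE (after washout):", np.mean((pred[100:] - target[100:]) ** 2))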
Biography
Arindam Basu received the B.Tech and M.Tech degrees in Electronics and Electrical Communication Engineering from the Indian Institute of Technology, Kharagpur, in 2005, and the M.S. degree in Mathematics and the PhD degree in Electrical Engineering from the Georgia Institute of Technology, Atlanta, in 2009 and 2010, respectively. Dr. Basu received the Prime Minister of India Gold Medal in 2005 from IIT Kharagpur. He joined Nanyang Technological University in June 2010 and is currently a tenured Associate Professor. He is currently an Associate Editor of the IEEE Sensors Journal and IEEE Transactions on Biomedical Circuits and Systems. He is also Guest Editor of two Special Issues of IEEE Transactions on Biomedical Circuits and Systems, as well as Corresponding Guest Editor for a Special Issue of IEEE JETCAS.
Dr. Basu received the best student paper award at the Ultrasonics Symposium 2006, the best live demonstration award at ISCAS 2010, and a finalist position in the best student paper contest at ISCAS 2008. He was awarded MIT Technology Review's inaugural TR35@Singapore award in 2012 for being among the top 12 innovators under the age of 35 in SE Asia, Australia and New Zealand. He is a member of the IEEE CAS Society technical committees on Biomedical Circuits and Systems, Neural Systems and Applications (Secretary Elect), and Sensory Systems. His research interests include bio-inspired neuromorphic circuits, non-linear dynamics in neural systems, low power analog IC design, and programmable circuits and devices.
Keynote 23
Title
Cognition Behaviour Opportunity Learning for Healthcare
by Yiqiang Chen, Chinese Academy of Science, China
Abstract
Understanding the inherent, complex relationships between motor behaviors and cognitive functions has great significance for revealing the brain's cognitive functions, deriving brain-inspired computing, and reshaping the future of healthcare technologies. As embedded computer chips and sensors become ubiquitous, cognition behavior learning and understanding through wearable devices or IoT technologies has become an emerging and fast-growing research field. However, utilizing ubiquitous computing devices to acquire an accurate assessment of cognitive abilities is still a challenge. The whole process involves many stochastic factors, such as personalized behavior styles, dynamic sensing environments, and diverse smart devices, which cloud the connection between behavior and cognition. Hence, we propose a new method for learning cognitive ability from behavior data, based on the Extreme Learning Machine, that can maintain recognition performance in the presence of such stochastic factors. Experimental results show that, compared to other methods, the proposed method can improve the recognition accuracy of individual cognitive ability by 10% to 15%.
Biography
Dr Yiqiang Chen is a professor and Director of the Research Center for Ubiquitous Computing Systems, Institute of Computing Technology (ICT), Chinese Academy of Sciences (CAS). He received his PhD from ICT, CAS in 2002. In 2004, he was a Post-Doctoral Fellow in the Department of Computer Science and Engineering, Hong Kong University of Science and Technology (HKUST). His research focuses on intelligent human-computer interaction and pervasive computing, especially on learning and understanding from real users' behaviors (gesture, activity, etc.) in unobtrusive ways. His research on sign language recognition and synthesis has been widely used in 3,000 schools for the hearing impaired around China, and his proposed motor-pattern method for cognitive ability assessment was published in Science (Advances in Computational Psychophysiology) and Scientific Reports (Nature). He has received several Best Paper Awards at international conferences and journals. In 2017, his multi-layer transfer learning method was conferred the IJIT 15th Anniversary Best Paper Award. He is a founding member of the ECMA Wearable Data Standard group and the IEEE IWCD (Interactive Wearable Computing Device) Technical Committee.
Keynote 24
Title
ELM Tree and Its Spark Implementation
by Xi-Zhao Wang, Shenzhen University, China
Abstract
A challenge in big data classification is the design of highly parallelized learning algorithms. One solution to this problem is to apply parallel computation to different components of a learning model. In this talk, an extreme learning machine tree (ELM-Tree) model is proposed, in which uncertainty reduction is used as the splitting heuristic and individual ELMs are embedded as leaf nodes. Its Spark implementation is briefly discussed, and some advantages of the ELM-Tree, such as reduced computation time and a good ability to handle symbolic attributes, are verified experimentally.
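A compact single-machine sketch of the ELM-Tree idea described above (a decision tree whose leaf nodes hold small ELM classifiers instead of majority votes); the Gini split criterion, depth limit, leaf size, and hidden-layer width are assumptions, and the Spark parallelization is omitted:

import numpy as np

rng = np.random.default_rng(0)

def fit_elm(X, y, n_hidden=50):
    """Leaf model: standard ELM (random hidden layer, least-squares readout)."""
    W = rng.normal(size=(X.shape[1], n_hidden)); b = rng.normal(size=n_hidden)
    beta, *_ = np.linalg.lstsq(np.tanh(X @ W + b), np.eye(2)[y], rcond=None)
    return lambda Z: (np.tanh(Z @ W + b) @ beta).argmax(axis=1)

def gini(y):
    p = np.bincount(y, minlength=2) / max(len(y), 1)
    return 1.0 - np.sum(p ** 2)

def build_elm_tree(X, y, depth=0, max_depth=3, min_leaf=50):
    # Grow axis-aligned splits by impurity reduction; stop early and fit an ELM at each leaf.
    if depth == max_depth or len(y) <= min_leaf or len(np.unique(y)) == 1:
        return ('leaf', fit_elm(X, y))
    best = None
    for j in range(X.shape[1]):
        thr = np.median(X[:, j])
        mask = X[:, j] <= thr
        if mask.sum() == 0 or (~mask).sum() == 0:
            continue
        score = mask.mean() * gini(y[mask]) + (~mask).mean() * gini(y[~mask])
        if best is None or score < best[0]:
            best = (score, j, thr, mask)
    if best is None:
        return ('leaf', fit_elm(X, y))
    _, j, thr, mask = best
    left = build_elm_tree(X[mask], y[mask], depth + 1, max_depth, min_leaf)
    right = build_elm_tree(X[~mask], y[~mask], depth + 1, max_depth, min_leaf)
    return ('node', j, thr, left, right)

def predict_one(tree, x):
    if tree[0] == 'leaf':
        return tree[1](x[None, :])[0]
    _, j, thr, left, right = tree
    return predict_one(left if x[j] <= thr else right, x)

X = np.vstack([rng.normal(0, 1, (300, 4)), rng.normal(1.5, 1, (300, 4))])
y = np.r_[np.zeros(300, dtype=int), np.ones(300, dtype=int)]
tree = build_elm_tree(X, y)
print("training accuracy:", np.mean([predict_one(tree, x) for x in X] == y))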
Biography
Xi-Zhao Wang received his BS and MA degrees from Hebei University and his PhD from Harbin Institute of Technology.
Prof. Wang's major research interests include uncertainty modeling and machine learning for big data. He has edited more than 10 special issues and published 3 monographs, 2 textbooks, and over 200 peer-reviewed research papers. According to Google Scholar, his work has received over 5000 citations in total, with more than 200 citations for his most cited single paper. Prof. Wang is on Elsevier's 2014, 2015 and 2016 lists of most cited Chinese authors. As a Principal Investigator (PI) or co-PI, he has completed more than 30 research projects. Prof. Wang is an IEEE Fellow, a previous member of the Board of Governors of the IEEE SMC Society, the chair of the IEEE SMC Technical Committee on Computational Intelligence, an editorial board member of several journals, and the Editor-in-Chief of the International Journal of Machine Learning and Cybernetics.
Keynote 25
Title
Advanced Transfer Learning in Intelligent Vision and Olfaction
by Lei Zhang, Chongqing University, China
Abstract
Machine learning plays an increasingly important role in artificial intelligence, computer vision, machine olfaction, and data mining, and generally assumes that the training set and test set are independently and identically distributed (i.i.d.). However, this assumption often does not hold in real, unconstrained applications, where the data are heterogeneous and multi-task, which causes performance degradation of generic learning algorithms. Therefore, transfer learning (TL) has emerged as a hot topic and attracted increasing attention in AI. In this talk, I will introduce our recent progress on transfer learning and domain adaptation methods for addressing the non-i.i.d. cross-domain data modeling problem in intelligent vision and olfaction. Furthermore, our transfer learning methods inspired by the extreme learning machine (ELM) and deep learning (DL) will be presented.
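As a concrete, generic example of the kind of domain adaptation problem described above, the sketch below applies CORrelation ALignment (CORAL), a classic baseline that whitens source features and re-colors them with the target covariance before training; this standard method is used only for illustration and is not the speaker's ELM- or DL-based approach, and the synthetic domain shift is an assumption:

import numpy as np
from scipy.linalg import sqrtm
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Source and target domains with the same classes but shifted/rescaled features (non-i.i.d.).
Xs = np.vstack([rng.normal(0, 1, (200, 10)), rng.normal(2, 1, (200, 10))])
ys = np.r_[np.zeros(200, dtype=int), np.ones(200, dtype=int)]
Xt = 1.5 * np.vstack([rng.normal(0, 1, (200, 10)), rng.normal(2, 1, (200, 10))]) + 0.5
yt = ys.copy()                                        # target labels used only for evaluation

def coral(Xs, Xt, eps=1.0):
    """Align second-order statistics of the source features to the target domain."""
    Cs = np.cov(Xs, rowvar=False) + eps * np.eye(Xs.shape[1])
    Ct = np.cov(Xt, rowvar=False) + eps * np.eye(Xt.shape[1])
    A = np.linalg.inv(sqrtm(Cs).real) @ sqrtm(Ct).real   # whiten source, re-color with target
    return Xs @ A

clf_plain = LogisticRegression(max_iter=1000).fit(Xs, ys)
clf_coral = LogisticRegression(max_iter=1000).fit(coral(Xs, Xt), ys)
print("target accuracy without adaptation:", clf_plain.score(Xt, yt))
print("target accuracy with CORAL:        ", clf_coral.score(Xt, yt))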
Biography
Lei Zhang received his Ph.D degree in Circuits and Systems from the College of Communication Engineering, Chongqing University, Chongqing, China, in 2013. He was selected as a Hong Kong Scholar in China in 2013, and worked as a Post-Doctoral Fellow with The Hong Kong Polytechnic University, Hong Kong, from 2013 to 2015. He is currently a Professor/Distinguished Research Fellow with Chongqing University and also the founder of Learning Intelligence & Vision Essential (LiVE Group). He has authored more than 70 scientific papers in top journals, including the IEEE Trans. Neural Networks and Learning Systems, the IEEE Trans. Image Processing, the IEEE Trans. Multimedia, the IEEE Trans. Instrumentation and Measurement, the IEEE Trans. Systems, Man, and Cybernetics: Systems, etc. His current research interests include machine learning, pattern recognition, computer vision and intelligent systems. He has been a reviewer for more than 50 journals such as IEEE Transactions (T-IP, T-MM, T-CSVT, T-CYB, T-SMCA, T-IM, T-IE), and so on. He also serves as keynote speaker, area chair, session chair, and program committee for more than 20 international conferences. Dr. Zhang was a recipient of Outstanding Reviewer Award of Sensor Review Journal in 2016, Outstanding Doctoral Dissertation Award of Chongqing, China, in 2015, Hong Kong Scholar Award in 2014, Academy Award for Youth Innovation of Chongqing University in 2013 and the New Academic Researcher Award for Doctoral Candidates from the Ministry of Education, China, in 2012.