Keynote Speakers
Sanguthevar Rajasekaran
UTC Chair, Computer Science and Engineering
Director of Booth Engineering Center for Advanced Technologies (BECAT),
University of Connecticut, USA
Brief Speaker Bio:
Sanguthevar Rajasekaran received his M.E. degree in Automation from the Indian Institute of Science (Bangalore) in 1983, and his Ph.D. degree in Computer Science from Harvard University in 1988. Currently he is the Board of Trustees Distinguished Professor, UTC Chair Professor of Computer Science and Engineering, and the Director of the Booth Engineering Center for Advanced Technologies (BECAT) at the University of Connecticut. Before joining UConn, he served as a faculty member in the CISE Department of the University of Florida and in the CIS Department of the University of Pennsylvania. During 2000-2002 he was the Chief Scientist for Arcot Systems. His research interests include Big Data, Bioinformatics, Algorithms, Data Mining, Machine Learning, Randomized Computing, and HPC. He has published over 350 research articles in journals and conferences. He has co-authored two texts on algorithms and co-edited six books on algorithms and related topics. His research has been supported by grants from such agencies as NSF, NIH, DARPA, and DHS (totaling around $20M). He is a Fellow of the Institute of Electrical and Electronics Engineers (IEEE) and the American Association for the Advancement of Science (AAAS). He is also an elected member of the Connecticut Academy of Science and Engineering.

Title of Talk

Algorithms for Big Data Analytics

We live in the midst of big data. Generating data is no longer a bottleneck; analyzing the data and extracting useful information from them are the real bottlenecks. Efficient techniques are needed to process these data, and society at large can benefit immensely from advances in this arena. For example, information extracted from biological data can result in gene identification, disease diagnosis, drug design, etc. Market data can be used for custom-designed catalogues for customers, supermarket shelving, and so on. Weather prediction and protecting the environment from pollution are possible with the analysis of atmospheric data.

In this talk we present some of the challenges in processing big data and provide an overview of some basic techniques. In particular, we will summarize various data processing and reduction techniques. In addition, we will briefly outline the role of machine learning in big data analytics.
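As a concrete instance of the kind of data-reduction technique such a survey typically covers, the sketch below implements reservoir sampling (Vitter's Algorithm R), which maintains a uniform random sample of a stream in constant memory. It is an illustrative example, not material from the talk itself.

```python
import random

def reservoir_sample(stream, k, seed=0):
    """Keep a uniform random sample of k items from a stream of unknown
    length, using O(k) memory (Vitter's Algorithm R)."""
    rng = random.Random(seed)
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)
        else:
            j = rng.randint(0, i)    # uniform index in [0, i]
            if j < k:
                reservoir[j] = item  # keep new item with probability k/(i+1)
    return reservoir

# Sample 5 items from a stream of a million integers without storing it.
sample = reservoir_sample(range(1_000_000), 5)
print(sample)  # five distinct integers from the stream
```

Each item ends up in the sample with probability exactly k/n, yet the stream is read once and never held in memory, which is the essence of data reduction for big-data workloads.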

Chandrajit Bajaj
Computer Science, and Institute of Computational Engineering and Sciences
Center for Computational Visualization,
The University of Texas - Austin
Brief Speaker Bio:
Chandrajit Bajaj is the director of the Center for Computational Visualization, in the Institute for Computational and Engineering Sciences (ICES) and a Professor of Computer Sciences at the University of Texas at Austin.
Bajaj holds the Computational Applied Mathematics Chair in Visualization.
He is also an affiliate faculty member of Mathematics, Computational Neuroscience and Electrical Engineering.
He is currently on the editorial boards of the International Journal of Computational Geometry and Applications and ACM Computing Surveys, and is a past editorial board member of the SIAM Journal on Imaging Sciences.
He was awarded a distinguished alumnus award from the Indian Institute of Technology, Delhi, (IIT, Delhi).
He is also a Fellow of The American Association for the Advancement of Science (AAAS), Fellow of the Association for Computing Machinery (ACM), Fellow of the Institute of Electrical and Electronic Engineers (IEEE), and Fellow of the Society of Industrial and Applied Mathematics (SIAM).

Title of Talk

Unsupervised Super Resolution Hyperspectral Imaging

One achieves super resolution (SR) in imaging through computational enhancement of the input to yield output images with improved resolvability of features. This is done by optimizing de-blurring and de-warping operators, and by appropriately leveraging prior knowledge and/or information from multiple similar images of the same scene or objects. Modern RGB digital cameras produce SR photographs of a scene by fusing a set of dynamically shifted acquired images of the scene. A related approach, using cross-modality imaging, can be used to produce SR hyperspectral (multi-spectral band) images: a low spatial resolution hyperspectral image (LHI) and a high spatial resolution RGB image (Hrgb) can be optimally combined to produce a super resolution hyperspectral image (SRHI). In this talk, I shall present new optimization methods based on low-rank or orthogonal tensor decompositions, and compare them with low-rank approximations of matricized representations of LHI and Hrgb. I shall also discuss variations stemming from different cross-modality imaging applications (dynamic scene capture, facial and object recognition, cancer tissue histopathology), where coupled optimization schemes for image registration and SR are necessary.
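As an illustrative companion to the abstract, here is a toy NumPy sketch of the matricized low-rank fusion baseline mentioned above (not the speaker's tensor methods). It assumes a known RGB spectral response matrix R, noiseless images, and synthetic data whose pixel spectra lie exactly in a low-dimensional subspace; all sizes and names are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
S, r = 31, 3                   # spectral bands; assumed spectral subspace rank
h, w, f = 32, 32, 4            # high-res size and spatial downsampling factor

# Synthetic ground truth: every pixel spectrum lies in an r-dim subspace.
E_true = rng.random((S, r))                      # basis spectra
X_high = E_true @ rng.random((r, h * w))         # true SRHI, matricized (S x N)

R = rng.random((3, S))                           # known RGB spectral response
Hrgb = R @ X_high                                # high-res RGB image (3 x N)
LHI = (X_high.reshape(S, h // f, f, w // f, f)   # low-res hyperspectral image:
       .mean(axis=(2, 4)).reshape(S, -1))        # f x f spatial block averages

# Fusion: estimate the spectral basis from LHI (block averages stay in the
# same subspace), then per-pixel coefficients at high resolution from Hrgb.
E_hat = np.linalg.svd(LHI)[0][:, :r]             # estimated basis (S x r)
A = np.linalg.solve(R @ E_hat, Hrgb)             # coefficients (r x N)
SRHI = E_hat @ A                                 # fused super-res hyperspectral
print(np.linalg.norm(SRHI - X_high) / np.linalg.norm(X_high))  # ~0 (roundoff)
```

On this idealized data the reconstruction is exact; real LHI/Hrgb fusion must additionally handle noise, misregistration, and a rank larger than the number of RGB channels, which is where the coupled optimization schemes of the talk come in.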
Invited Speakers
Dr. S S Iyengar
Distinguished University Professor
Director and Ryder Professor
School of Computing & Information Sciences
Florida International University, Miami
Brief Speaker Bio:
Dr. S.S. Iyengar is currently the Distinguished University Professor, Ryder Professor of Computer Science, and Director of the School of Computing and Information Sciences at Florida International University (FIU), Miami. He is also the founding director of the Discovery Lab. Prior to joining FIU, Dr. Iyengar was the Roy Paul Daniels Distinguished Professor and Chairman of the Computer Science department for over 20 years at Louisiana State University. He has also worked as a visiting scientist at Oak Ridge National Lab and the Jet Propulsion Lab, held the Satish Dhawan Professorship at IISc and the Homi Bhabha Professorship at IGCAR, Kalpakkam, and has held visiting appointments at the University of Paris, Tsinghua University, the Korea Advanced Institute of Science and Technology (KAIST), and elsewhere. For the last four decades, his research interests have included High-Performance Algorithms, Biomedical Computing, Sensor Fusion, and Intelligent Systems. His research has been funded by the National Science Foundation (NSF), Defense Advanced Research Projects Agency (DARPA), Multi-University Research Initiative (MURI Program), Office of Naval Research (ONR), Department of Energy / Oak Ridge National Laboratory (DOE/ORNL), Naval Research Laboratory (NRL), National Aeronautics and Space Administration (NASA), US Army Research Office (ARO), and various state agencies and companies. Dr. Iyengar has also served as a research proposal evaluator for the National Academy of Engineering. His work has been featured on the cover of the National Science Foundation's breakthrough technologies in both 2014 and 2016.
Dr. Iyengar is a Member of the European Academy of Sciences, a Life Fellow of the IEEE, a Fellow of the ACM, a Fellow of the AAAS, a Fellow of the Society for Design and Process Science (SDPS), a Fellow of the National Academy of Inventors (NAI), and a Fellow of the American Institute for Medical and Biological Engineering (AIMBE). He was awarded the Satish Dhawan Chair Professorship at IISc and the Roy Paul Daniels Professorship at LSU, and has received the Distinguished Alumnus Award of the Indian Institute of Science. In 1998, he was awarded the IEEE Computer Society's Technical Achievement Award, and he is an IEEE Golden Core Member. He also received a Lifetime Achievement Award conferred by the International Conference on Agile Manufacturing at IIT-BHU. Professor Iyengar is an IEEE Distinguished Visitor, SIAM Distinguished Lecturer, and ACM National Lecturer, and has won many other awards, including the Distinguished Research Master's Award, the Hub Cotton Award for Faculty Excellence (LSU), the Rainmaker Award (LSU), the Florida Information Technology Award (IT2), and the Distinguished Research Award from the Tunisian Mathematical Society. During the last four decades, he has supervised over 55 Ph.D. students, 100 Master's students, and many undergraduate students who are now faculty at major universities worldwide or scientists and engineers at national labs and industries around the world. He has published more than 600 research papers and has authored, co-authored, or edited 22 books. His books are published by MIT Press, John Wiley and Sons, CRC Press, Prentice Hall, Springer-Verlag, IEEE Computer Society Press, etc. One of his books, "Introduction to Parallel Algorithms," has been translated into Chinese.

Title of Talk

Data Center and Cloud Computing

Cloud Computing is a general term used to describe a new class of network-based computing that takes place over the Internet. It is basically a step up from Utility Computing and can be defined as a collection of integrated and networked hardware, software, and Internet infrastructure (called a platform). Cloud Computing uses the Internet for communication and transport, and provides hardware, software, and networking services to clients. These platforms hide the complexity and details of the underlying infrastructure from users and applications by providing a very simple graphical interface or API (Application Programming Interface).
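The point about a simple API hiding infrastructure can be illustrated with a purely hypothetical client sketch; CloudStore and its put/get methods are invented for illustration and do not correspond to any real cloud SDK.

```python
class CloudStore:
    """Hypothetical client illustrating how a cloud platform hides its
    infrastructure behind a minimal API: the caller only sees put/get,
    while a real service would route these calls to replicated storage
    spread across data centers."""

    def __init__(self):
        self._objects = {}          # stand-in for distributed object storage

    def put(self, key, data):
        self._objects[key] = data   # a real cloud would replicate and version

    def get(self, key):
        return self._objects[key]   # a real cloud would pick a nearby replica

store = CloudStore()
store.put("reports/q3.txt", b"quarterly numbers")
print(store.get("reports/q3.txt"))  # b'quarterly numbers'
```

The caller never learns where the bytes live or how many copies exist, which is exactly the abstraction the paragraph above describes.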

The design of efficient data centers has also been a topic of significant research in both academia and industry in recent years. Efficient data centers are of prime importance because most modern businesses rely heavily on cloud services. Data centers also play a very important part in supporting and sustaining fast-growing web-based services and applications, and they form the backbone of most search engines, content hosting and distribution companies, social network platforms, and computation-intensive tasks. Most networks have had to adapt and reconfigure, and thus use the services offered by cloud computing and data centers, to respond to changing application demands and service requirements. Many large organizations, such as Microsoft and Google, have their own cloud-based services and boast data centers with millions of servers.

The use of cloud computing services and applications is increasing rapidly because of the various advantages they bring, which has led to the rise of vast cloud data centers. Both consumer and business applications are contributing to the growing dominance of cloud services. The exponential growth of Internet of Things (IoT) devices and applications will also expand the need for data centers to manage the large amounts of data and information that are collected and shared. A study by Cisco projects that more than 94% of workloads and compute instances will run in cloud data centers by 2021. The various advantages that cloud-based services and data centers offer make them the most viable option for businesses and companies that plan either to secure their information or to scale to new levels.

Sumeet Dua, Ph.D.
Associate Vice President for Research and Partnerships
Max P. and Robbie L. Watson Eminent Scholar Chair
Professor of Computer Science
Louisiana Tech University
Ruston, LA U.S.A.
Brief Speaker Bio:
Dr. Sumeet Dua is the Associate Vice President for Research and Partnerships, Professor of Computer Science and Cyber Engineering, and the Max. P. & Robbie L. Watson Eminent Scholar Chair at Louisiana Tech University in LA, U.S.A. Prior to his current administrative appointment, he was the Associate Dean for Graduate Studies and the Director for the Computer Science, Electrical Engineering, Cyber Engineering and Electrical Engineering Technology programs in the College of Engineering and Science at Louisiana Tech University. His research interests include data mining, bioinformatics, clinical informatics and cybernetics. He has been awarded grants/contracts for over US$7 Million by various funding agencies, including NSF, NIH, AFOSR, AFRL and NASA. He has co-authored/edited 5 books and advised over 25 graduate theses and dissertations in these areas. He has also served on over 50 National Institutes of Health (NIH) study sections and National Science Foundation (NSF) expert scientific review panels. He has received multiple awards, including best paper presentation awards at leading international conferences, and most recently the 2016 Louisiana Tech University Foundation Professorship Award for excellence in teaching, research and service. He is a senior member of the IEEE and ACM, and a member of AAAS.

Title of Talk

Feature Engineering for Semi-supervised Machine Learning in Protein Informatics

Feature engineering, an integral data preprocessing step in machine learning, aims to boost the accuracy and efficiency of prediction systems. Such efforts principally rely on methods that combine feature extraction, ranking, and selection, informed by the underlying domain knowledge. This talk will emphasize the role of evolutionary features engineered from domain knowledge of the hydrophobicity properties of proteins for enhanced prediction of their folding paradigm. Protein folding is frequently directed by local residue interactions that form clusters in the protein core, and the interactions between residue clusters serve as potential nucleation sites in the folding process. Evidence suggests that these residue interactions are governed by the hydrophobic propensities of the residues. We will discuss a graph-theory-based machine learning framework to extract and isolate, via feature ranking, protein structural features that remain invariant across evolutionarily related proteins. The results obtained demonstrate that the discriminatory residue interaction patterns obtained by these feature engineering methods are shared among proteins of the same family and can be effectively employed for both the structural and the functional annotation of proteins in multiple machine learning applications in bioinformatics.
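To make the idea of graph-based residue-interaction features concrete, here is a minimal, illustrative sketch (not the speaker's framework): build a residue contact graph from C-alpha coordinates and count hydrophobic-hydrophobic contacts. The coordinates, distance cutoff, and feature set are toy assumptions; the hydrophobicity values follow the standard Kyte-Doolittle scale.

```python
import numpy as np

# Kyte-Doolittle hydrophobicity scores for a few residue types.
KD = {"ILE": 4.5, "VAL": 4.2, "LEU": 3.8, "ALA": 1.8,
      "GLY": -0.4, "SER": -0.8, "ASP": -3.5, "LYS": -3.9}

def contact_graph_features(coords, residues, cutoff=8.0):
    """Build a residue-contact graph (pairwise distance < cutoff, in
    Angstroms) and return simple hydrophobic-interaction features."""
    coords = np.asarray(coords, float)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    adj = (d < cutoff) & ~np.eye(len(coords), dtype=bool)    # contact edges
    hydro = np.array([KD[r] > 0 for r in residues])          # hydrophobic mask
    hh_contacts = int(adj[np.ix_(hydro, hydro)].sum() // 2)  # H-H contact pairs
    mean_deg = float(adj.sum(1).mean())                      # avg graph degree
    return {"hh_contacts": hh_contacts, "mean_degree": mean_deg}

# Toy 4-residue chain: the two hydrophobic residues sit close together.
coords = [(0, 0, 0), (3, 0, 0), (20, 0, 0), (23, 0, 0)]
feats = contact_graph_features(coords, ["ILE", "LEU", "LYS", "ASP"])
print(feats)  # {'hh_contacts': 1, 'mean_degree': 1.0}
```

Features of this kind, computed over many structures, are what a downstream ranking and selection stage would then filter for family-invariant interaction patterns.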

Somesh Jha
Computer Sciences Department
University of Wisconsin Madison
Brief Speaker Bio:
Somesh Jha received his B.Tech. in Electrical Engineering from the Indian Institute of Technology, New Delhi. He received his Ph.D. in Computer Science from Carnegie Mellon University in 1996 under the supervision of Prof. Edmund Clarke (a Turing Award winner). Currently, Somesh Jha is the Grace Wahba Professor in the Computer Sciences Department at the University of Wisconsin (Madison), which he joined in 2000. His work focuses on the analysis of security protocols, survivability analysis, intrusion detection, formal methods for security, and analyzing malicious code. Recently, he has also worked on privacy-preserving protocols and adversarial ML. Somesh Jha has published over 150 articles in highly refereed conferences and prominent journals, and has won numerous best-paper awards. He received the NSF CAREER award in 2005. Prof. Jha is a Fellow of the ACM and the IEEE.

Title of Talk

Towards Semantic Adversarial Examples

Fueled by massive amounts of data, models produced by machine-learning (ML) algorithms, especially deep neural networks, are being used in diverse domains where trustworthiness is a concern, including automotive systems, finance, health care, natural language processing, and malware detection. Of particular concern is the use of ML algorithms in cyber-physical systems (CPS), such as self-driving cars and aviation, where an adversary can cause serious consequences.

However, existing approaches to generating adversarial examples and devising robust ML algorithms mostly ignore the semantics and context of the overall system containing the ML component. For example, in an autonomous vehicle using deep learning for perception, not every adversarial example for the neural network might lead to a harmful consequence. Moreover, one may want to prioritize the search for adversarial examples towards those that significantly modify the desired semantics of the overall system. Along the same lines, existing algorithms for constructing robust ML algorithms ignore the specification of the overall system. In this talk, we argue that the semantics and specification of the overall system have a crucial role to play in this line of research. We present preliminary research results that support this claim.
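For readers unfamiliar with the semantics-agnostic adversarial examples the talk contrasts against, here is a minimal sketch of the standard Fast Gradient Sign Method (FGSM) on a toy logistic model; the weights and inputs are illustrative assumptions, not from the talk.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """Fast Gradient Sign Method for a logistic model p(y=1|x)=sigmoid(w.x+b):
    perturb x by eps in the direction that increases the loss for label y."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w           # gradient of cross-entropy loss w.r.t. x
    return x + eps * np.sign(grad_x)

w = np.array([2.0, -1.0]); b = 0.0
x = np.array([1.0, 0.5])           # model predicts class 1: sigmoid(1.5) ~ 0.82
x_adv = fgsm(x, y=1, w=w, b=b, eps=0.8)
print(sigmoid(w @ x + b) > 0.5, sigmoid(w @ x_adv + b) > 0.5)  # True False
```

Note what the construction ignores: nothing in it asks whether flipping this prediction matters to the surrounding system, which is precisely the semantic gap the talk addresses.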

Inderjit S. Dhillon
Gottesman Family Centennial Professor
Director, Center for Big Data Analytics
Department of Computer Science
University of Texas at Austin
Brief Speaker Bio:
Inderjit Dhillon is the Gottesman Family Centennial Professor of Computer Science and Mathematics at UT Austin, where he is also the Director of the ICES Center for Big Data Analytics. Currently he is on leave from UT Austin and works as an Amazon Fellow at A9/Amazon, where he is developing and deploying state-of-the-art machine learning methods for Amazon search. His main research interests are in big data, machine learning, network analysis, linear algebra and optimization. He received his B.Tech. degree from IIT Bombay, and his Ph.D. from UC Berkeley. Inderjit has received several awards, including the ICES Distinguished Research Award, the SIAM Outstanding Paper Prize, the Moncrief Grand Challenge Award, the SIAM Linear Algebra Prize, the University Research Excellence Award, and the NSF CAREER Award. He has published over 175 journal and conference papers, and has served on the editorial boards of the Journal of Machine Learning Research, the IEEE Transactions on Pattern Analysis and Machine Intelligence, Foundations and Trends in Machine Learning, and the SIAM Journal on Matrix Analysis and Applications. Inderjit is an ACM Fellow, an IEEE Fellow, a SIAM Fellow and an AAAS Fellow.

Title of Talk

Multi-Target Prediction Using Low-Rank Embeddings: Theory & Practice

Linear prediction methods, such as linear regression and classification, form the bread-and-butter of modern machine learning. The classical scenario is the presence of data with multiple features and a single target variable. However, there are many recent scenarios with multiple target variables: for example, recommender systems, predicting bid words for a web page (where each bid word acts as a target variable), or predicting diseases linked to a gene. In many of these scenarios, the target variables might themselves be associated with features. In these scenarios, bilinear and nonlinear prediction via low-rank embeddings has been shown to be extremely powerful. The low-rank embeddings serve a dual purpose: (i) they enable tractable computation even in the face of millions of data points as well as target variables, and (ii) they exploit correlations among the target variables, even when there are many missing observations. We illustrate our methodology on various modern machine learning problems: recommender systems, multi-label learning and inductive matrix completion.
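A minimal NumPy sketch of the bilinear low-rank embedding idea on synthetic, fully observed data (the talk's methods also handle missing observations and much larger scales, which this sketch does not attempt; all names and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m, e, r = 200, 10, 50, 6, 3  # samples, data feats, targets, target feats, rank

X = rng.standard_normal((n, d))    # data-side features
Z = rng.standard_normal((m, e))    # target-side features
M_true = rng.standard_normal((d, r)) @ rng.standard_normal((r, e))  # rank-r map
Y = X @ M_true @ Z.T               # observed multi-target responses (n x m)

# Fit the bilinear map, then truncate to rank r via SVD: M ~ W @ H.T, so a
# prediction for a (data, target) pair costs only O(r) after embedding both.
M_hat = np.linalg.pinv(X) @ Y @ np.linalg.pinv(Z.T)
U, s, Vt = np.linalg.svd(M_hat)
W = U[:, :r] * s[:r]               # data-side embedding map   (d x r)
H = Vt[:r].T                       # target-side embedding map (e x r)

Y_pred = (X @ W) @ (Z @ H).T       # embed both sides, then inner products
print(np.linalg.norm(Y_pred - Y) / np.linalg.norm(Y))  # ~0 on noiseless data
```

The factor pair (W, H) is the low-rank embedding: predictions reduce to r-dimensional inner products, and because all targets share the same factors, correlations among target variables are exploited automatically.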

Ashish Anand
Associate Professor
Department of Computer Science and Engineering
Indian Institute of Technology, Guwahati
Brief Speaker Bio:
Ashish joined the Dept of CSE, Indian Institute of Technology Guwahati in February 2011. He did his Masters (Integrated M.Sc., 5 years) in Mathematics and Scientific Computing at the Indian Institute of Technology Kanpur. Thereafter, he joined the Androgen Receptor Laboratory (University of Helsinki, Finland) as a visiting research student, where he was introduced to a biology far more exciting than that of his school days. The computational challenges arising from efforts to understand biology fascinated him and motivated him to continue research in this highly interdisciplinary area. Later he joined a collaborative project of Prof Pradip Sinha and Prof K Deb (at IIT Kanpur) to understand neoplastic cancer in the model organism D. melanogaster. He did his PhD at Nanyang Technological University on "Computational Intelligence Methods for Problems in Computational Biology"; in particular, he worked on multi-class classification, template clustering for short time-series data, and imbalanced binary classification problems. Prior to joining IIT Guwahati, he was part of the European consortium BaSySBio at the Systems Biology Lab (Group Leader: Dr Benno Schwikowski), Institut Pasteur, Paris, where his post-doctoral work concentrated mainly on regulatory network reconstruction and pathway analysis.

Title of Talk

Towards solving the Entity Classification Task rather than the Entity Classification on a dataset

Entity classification (EC), an important subtask of Information Extraction, is the task of assigning labels to given entity mentions in a sentence. Over time, several domain-specific datasets with pre-defined target label sets have been created, and the focus has been on building models that improve performance on a specific dataset for a specific target label set. In real scenarios, however, a model has no control over the domain or the target labels intended by an arbitrary user. In this talk, I present a formulation of a more generalized framework for solving the EC task in the absence of domain and target label set information, and then present a collective learning framework to solve this problem.

Arya Kumar Bhattacharya
Dean of Research and Professor
School of Engineering Sciences
Mahindra École Centrale, Hyderabad
Brief Speaker Bio :
Dr. Arya Kumar Bhattacharya is the Dean of Research and Professor in the School of Engineering Sciences at Mahindra École Centrale, Hyderabad. He has been with the Institute since its inception in 2014. He has more than twenty years of industrial experience, including at the Defence Research and Development Organization (ADA, Bangalore), Alstom Transport (UK), and Tata Steel (Automation Division, Jamshedpur), all in Research and Development roles at different levels. He has been an AICTE-INAE Distinguished Visiting Professor at Birla Institute of Technology, Mesra.

Prof. Arya has filed for more than twenty patents and published more than thirty-five papers in Journals, International Conferences and book chapters. His chapter in the book Evolutionary Computation published by InTech has been downloaded more than 9000 times since 2010, according to the Publisher. His multi-disciplinary research interests cover Machine Learning, Evolutionary Algorithms, Game Theory, and Autonomous Systems applied to different domains like Aerospace, Manufacturing and Industry 4.0, Logistics, Transportation and others.

Title of Talk

Adaptive Critic Design for Extreme Learning Machines applied to drifting industrial processes

Natural or man-made continuous-time dynamical systems are susceptible to adverse digressions, i.e. periods of rapid deterioration in performance. Conceptually, digressions of a given type are engendered when specific parameters of the system enter definitive combinatorial relationships. Machine Learning (ML) techniques can in principle identify, in real time, the formation of such relationships and release warnings that command actions to return the system to normality, or to mitigate the impact of the digression. Here a manufacturing system, namely the continuous casting process of steel making, is the object of study, and both Artificial Neural Networks (ANNs) and Extreme Learning Machines (ELMs) have been shown to perform the above function. A specific characteristic of industrial systems is process drift: the drift is induced into the inter-parameter data relationships learnt by the ML mechanism, which must adapt to the evolving relationships or lose accuracy with advancing time. Adaptive-Critic techniques can function as enablers for the ML mechanism to autonomously adapt to this drift. In the presented work, two such Adaptive-Critic techniques are developed, first for ANNs and then for ELMs, and are demonstrated to work successfully for the industrial process of interest. Importantly, this is the first development in the public domain of an Adaptive-Critic technique using ELMs for adaptation to industrial drift. The techniques are generic and amenable to drifting processes in any non-stationary environment, including the process and manufacturing industries.
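As a generic illustration of the base learner involved (not the speaker's Adaptive-Critic scheme), the sketch below trains an Extreme Learning Machine, i.e. a random hidden layer with closed-form output weights, and shows that refitting on recent data restores accuracy after a simulated process drift. The data, drift, and all names are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class ELM:
    """Extreme Learning Machine: fixed random hidden layer, output weights
    solved in closed form by ridge-regularized least squares. Refitting on a
    recent window lets the model track a drifting process (a generic sketch,
    not the Adaptive-Critic adaptation of the talk)."""

    def __init__(self, n_in, n_hidden=50, reg=1e-3):
        self.Wh = rng.standard_normal((n_in, n_hidden))  # random, never trained
        self.bh = rng.standard_normal(n_hidden)
        self.reg = reg

    def _h(self, X):
        return np.tanh(X @ self.Wh + self.bh)            # hidden activations

    def fit(self, X, y):
        H = self._h(X)
        # Closed-form output weights: (H'H + reg*I) beta = H'y.
        self.beta = np.linalg.solve(H.T @ H + self.reg * np.eye(H.shape[1]),
                                    H.T @ y)
        return self

    def predict(self, X):
        return self._h(X) @ self.beta

# Drifting process: the input-output relationship changes midway.
X1 = rng.uniform(-1, 1, (300, 2)); y1 = np.sin(X1[:, 0]) + X1[:, 1]
X2 = rng.uniform(-1, 1, (300, 2)); y2 = np.sin(X2[:, 0]) - X2[:, 1]  # drifted

elm = ELM(n_in=2).fit(X1, y1)
err_stale = np.mean((elm.predict(X2) - y2) ** 2)   # old model on drifted data
elm.fit(X2, y2)                                    # refit on the recent window
err_adapted = np.mean((elm.predict(X2) - y2) ** 2)
print(err_stale, err_adapted)                      # adaptation reduces error
```

Because only the output weights are trained, each refit is a single linear solve, which is what makes ELMs attractive for the real-time adaptation the talk describes.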