Research

Publications

The TCS Research community comprises scientists advancing discoveries in several emerging technologies related to ICT. Several hundred papers are presented each year at premier conferences in the areas of Applications, Software and Systems.


Featured Publications

Here we present some recent publications, including some award-winning ones, from the TCS research team.


Model Driven Software Performance Engineering: Current Challenges and Way Ahead
Authors: Manoj Nambiar, Ajay Kattepur, Gopal Bhaskaran, Rekha Singhal & Subhasri Duttagupta

Abstract
Performance model solvers and simulation engines have been around for more than two decades. Yet, performance modeling has not received wide acceptance in the software industry, unlike the pervasion of modeling and simulation tools in other industries. This paper explores the underlying causes and looks at the challenges that need to be overcome to increase the utility of performance modeling in making critical decisions on software-based products and services. Multiple real-world case studies and examples are included to highlight our viewpoints on performance engineering. Finally, we conclude with some possible directions the performance modeling community could take for the better predictive capabilities required for industrial use.

Read more

High Performance Loop Closure Detection using Bag of Word Pairs
Authors: Nishant Kejriwal, Swagat Kumar and Tomohiro Shibata
Robotics and Autonomous Systems, Vol. 77, pp. 55-65, March 2016.
Abstract

In this paper, we look into the problem of loop closure detection in topological mapping. The bag of words (BoW) is a popular approach which is fast and easy to implement, but it suffers from perceptual aliasing, primarily due to vector quantization. We propose to overcome this limitation by incorporating spatial co-occurrence information directly into the dictionary itself. This is done by creating an additional dictionary comprising word pairs, which are formed using a spatial neighborhood defined based on the scale size of each point feature. Since the word pairs are defined relative to the spatial location of each point feature, they exhibit a directional attribute, which is a new finding made in this paper. The proposed approach, called bag of word pairs (BoWP), uses the relative spatial co-occurrence of words to overcome the limitations of conventional BoW methods. Unlike previous methods that use spatial arrangement only as a verification step, the proposed method incorporates spatial information directly at the detection level and thus influences all stages of decision making.
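The pairing step can be illustrated with a minimal sketch (the function name, tuple layout and radius rule below are hypothetical simplifications; the paper's actual construction is direction-aware and built over a quantized vocabulary):

```python
from itertools import combinations

def bag_of_word_pairs(features, radius_factor=2.0):
    """Build a word-pair histogram from quantized point features.

    features: list of (word_id, x, y, scale) tuples, where word_id is the
    BoW vocabulary index of the feature. Two features form a word pair when
    they fall within a spatial neighborhood whose radius grows with the
    larger feature's scale (a simplification of the paper's scheme).
    """
    hist = {}
    for (w1, x1, y1, s1), (w2, x2, y2, s2) in combinations(features, 2):
        radius = radius_factor * max(s1, s2)
        if (x1 - x2) ** 2 + (y1 - y2) ** 2 <= radius ** 2:
            pair = (min(w1, w2), max(w1, w2))  # order-insensitive sketch
            hist[pair] = hist.get(pair, 0) + 1
    return hist

# Nearby words 1 and 2 pair up; the distant word 3 pairs with neither.
features = [(1, 0.0, 0.0, 1.0), (2, 1.0, 0.0, 1.0), (3, 10.0, 10.0, 1.0)]
print(bag_of_word_pairs(features))  # {(1, 2): 1}
```

The extra dictionary of such pairs is what lets spatial layout influence the detection stage itself, rather than serving only as a post-hoc verification.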

Read more

A Serial Five-Bar Mechanism Based Robotic Snake Exhibiting Three Kinds of Gait
Authors: V. S. Rajashekhar and Swagat Kumar. 
IEEE International Conference on Robotics and Biomimetics (ROBIO) 2015, December 6-9, Zhuhai, China

Abstract

In this paper, we provide the design of a Serial Five Bar Mechanism (named SFBM-A1) for a snake-like robot. Each five-bar mechanism in the series is capable of rotating and translating, which, in turn, enables the robotic snake to exhibit three kinds of gaits: rectilinear, side shifting and turning on a flat surface. A quaternary link is used as a part of the five-bar mechanism, which helps in connecting them in series. The friction anchors (used at both ends of the snake robot) are designed to take an active part in producing the various gaits by exhibiting a push or pull kind of effect. This is in addition to their usual role of providing stability during motion. The kinematics of the proposed joint mechanism is derived and its working is demonstrated through simulation and experiments.

Read more

A Hierarchical Frame-by-Frame Association Method Based on Graph Matching for Multi-Object Tracking
Authors: Sourav Garg, Ehtesham Hassan, Swagat Kumar and Prithwijit Guha.
11th International Symposium on Visual Computing (ISVC). Las Vegas, Nevada, USA, December 14-16, 2015

Abstract:
Multiple object tracking is a challenging problem because of issues like background clutter, camera motion, partial or full occlusions, and changes in object pose and appearance. Most existing algorithms use local and/or global association-based optimization between the detections and trackers to find correct object IDs. We propose a hierarchical frame-by-frame association method that exploits spatial layout consistency and inter-object relationships to resolve object identities across frames. The spatial layout consistency based association is used as the first hierarchical step to identify easy targets. This is done by finding an MRF-MAP solution for a probabilistic graphical model using a minimum spanning tree over the object locations and finding an exact inference in polynomial time using belief propagation. For difficult targets, which cannot be resolved in the first step, a relative motion model is used to predict the state of occlusion for each target.

Read more

Pedestrian Detection via Mixture of CNN Experts and Thresholded Aggregated Channel Features
Authors: Ankit Garg, Ramya H., Lovekesh Vig, Swagat Kumar, Ehtesham Hassan

Published in the Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops, held in Chile in December 2015, pp. 163-171.

Abstract:
In this paper, we propose a two-stage pedestrian detector. The first stage involves a cascade of Aggregated Channel Features (ACF) to extract potential pedestrian windows from an image. We further introduce a thresholding technique on the ACF confidence scores that segregates candidate windows lying at the extremes of the ACF score distribution. The windows with ACF scores between the upper and lower bounds are passed on to a Mixture of Experts (MoE) of CNNs for more refined classification in the second stage. Results show that the designed detector yields better-than-state-of-the-art performance on the INRIA benchmark dataset, with a miss rate of 10.35% at FPPI = 10^-1.
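The score-band gating described above can be sketched as follows (function name and thresholds are hypothetical; in the paper the first stage is an ACF cascade and the second a Mixture of CNN Experts):

```python
def two_stage_filter(windows, lower, upper, second_stage):
    """windows: list of (window, acf_score) pairs.

    Scores at or above `upper` are accepted on ACF evidence alone, scores at
    or below `lower` are rejected outright, and the ambiguous band in between
    is deferred to `second_stage`, a callable standing in for the refined
    second-stage classifier.
    """
    accepted = []
    for win, score in windows:
        if score >= upper:
            accepted.append(win)      # confidently pedestrian
        elif score > lower and second_stage(win):
            accepted.append(win)      # ambiguous: refined by second stage
    return accepted

# Toy run: "a" passes on score alone, "b" is rescued by the second stage,
# "c" is rejected by it, and "d" never reaches it.
windows = [("a", 0.9), ("b", 0.5), ("c", 0.5), ("d", 0.1)]
print(two_stage_filter(windows, 0.2, 0.8, lambda w: w == "b"))  # ['a', 'b']
```

Restricting the expensive classifier to the ambiguous score band is what keeps the second stage cheap relative to running CNNs on every candidate window.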

Read more

An Occlusion Reasoning Scheme for Monocular Pedestrian Tracking in Dynamic Scenes
Authors: Sourav Garg, Rajesh Ratnakaram, Swagat Kumar and Prithwijit Guha
IEEE International Conference on Advance Video and Signal-based Surveillance (AVSS), Karlsruhe, Germany, August 25-28, 2015

Abstract
This paper looks into the problem of pedestrian tracking using a monocular, potentially moving, uncalibrated camera. The pedestrians are located in each frame using a standard human detector, and the resulting detections are tracked in subsequent frames. This is a challenging problem, as one has to deal with complex situations like a changing background, partial or full occlusion, and camera motion. In order to carry out successful tracking, it is necessary to resolve associations between the detected windows in the current frame and those obtained from the previous frame. Compared to methods that use temporal windows incorporating past as well as future information, we attempt to make decisions on a frame-by-frame basis.

Read more

Connected Wellness - An Approach for Cloud Connected Sensing for Healthcare and Wellness
Authors: Arpan Pal, Aniruddha Sinha, Chirabrata Bhaumik, Avik Ghose, Anirban Dutta Choudhury, Aishwarya Visvanathan, Rohan Banerjee

Abstract
Connected wellness and healthcare can be the next-generation revenue earner for telecommunication service providers, who can offer these services as a bouquet of value-added services (VAS). This might involve patient and elderly monitoring through indoor localization and recognition of activities of daily living (ADL), connected physiological sensing and remote monitoring of health trends, and calorie and workout management. Emergency features like fall detection can also be added to the suite. The advent of smartphones has ensured connectivity to the masses, while peer-to-peer (P2P) connections like Bluetooth ensure connectivity to personal healthcare devices. Further, smartphone sensors can themselves be used to elucidate activity and certain physiological parameters. Finally, cloud connectivity ensures remote monitoring, intervention and data integrity. This can further lead to opportunities like telemedicine and tele-rehabilitation. This paper discusses such a connected application suite.

Read more

Spatio-temporal assessment of urban growth impact in Pune city using remotely sensed data
Authors: Piyush Yadav, Shailesh Deshpande

Abstract:
Monitoring Land Use Land Cover (LULC) changes over the years has been one of the standard methods for determining the impact of anthropogenic activities in a given region. Comparative assessment of LULC changes is performed, predominantly, using remotely sensed data and socio-economic data from other sources. This paper presents the land use land cover changes that have taken place in Pune city in the past decade. Specific goals were to identify impervious surface increment, green cover loss, and natural drainage loss. We have used LANDSAT 7 imagery from 2001 to 2014. After correcting images for atmospheric effects and the line-striping problem, we used a support vector machine (SVM) to classify each image into different classes. Various LULC classes, namely forest, agriculture, residential, industrial, water bodies, open areas, etc., were further grouped into broad-level classes such as vegetation, impervious surface, and soil (VIS). We performed growth prediction of urban sprawl using the Land Change Modeler in the TerrSet module. The lost drainage cover assessment was done using a DEM image obtained from CARTOSAT-1. The LULC assessment results show that there was a significant increase of 13.74% in impervious surface, a 14.27% combined loss of soil and vegetation, and a 5.30% loss of the natural drainage network.

Read more

 

Bugs in the Freezer: Detecting Faults in Supermarket Refrigeration Systems Using Energy Signals
Authors: Shravan Srinivasan, Arunchandar Vasan, Venkatesh Sarangan, Anand Sivasubramaniam

Abstract:
Refrigeration is a major component of supermarket energy consumption. Ensuring faultless operation of refrigeration systems is essential from both economic and sustainability perspectives. Present-day industry practices of monitoring refrigeration systems to detect operational anomalies have several drawbacks: (i) overdependence on human skills; (ii) limited help in identifying the root cause of an anomaly; and (iii) the presumption of a high degree of instrumentation, which prevents their usage in supermarkets in developing economies. Existing approaches in the literature to detect anomalies in refrigeration systems are either restricted to controlled laboratory settings or assume the availability of sensory information other than energy. In this paper, we present an approach to detect anomalous behavior in the operation of refrigeration systems by monitoring their energy signals alone. We test the performance of our approach using data collected from refrigeration systems across 25 stores of a real-world supermarket chain.

Read more

The Book: 'Multimedia Ontology: Representation and Applications'
Authors: Dr. Santanu Chaudhury, Dr. Anupama Mallik, Dr. Hiranmay Ghosh

Abstract:
The result of more than 15 years of collective research, Multimedia Ontology: Representation and Applications provides a theoretical foundation for understanding the nature of media data and the principles involved in its interpretation. The book presents a unified approach to recent advances in multimedia knowledge representation schemes and explains how a multimedia ontology can fill the semantic gap between concepts and the media world. It relays real-life examples of implementations in different domains to illustrate how this gap can be filled. The book contains information that helps with building multimedia applications that involve semantic analysis of contents. It guides you in designing real-life systems that aid in logical and conceptual organization of large amounts of multimedia data distributed over the web. As a practical demonstration, it showcases multimedia applications in several domains, such as information aggregation, cultural heritage preservation and multimedia artifacts recommendation.

Read more


Over-approximating Loops to Prove Properties Using Bounded Model Checking
Authors: Priyanka Darke, Bharti C, R. Venkatesh, Ulka Shrotri, R Metta
(Design, Automation & Test in Europe 2015)

Abstract:

Bounded Model Checkers (BMCs) are widely used to detect violations of program properties up to a bounded execution length of the program. However, when it comes to proving the properties, BMCs are unable to provide a sound result for programs with loops of large or unknown bounds. To address this limitation, we developed a new loop over-approximation technique, LA. LA replaces a given loop in a program with an abstract loop having a smaller known bound by combining the techniques of output abstraction and a novel abstract acceleration, suitably augmented with a new application of induction. The resulting transformed program can then be fed to any bounded model checker to provide a sound proof of the desired properties. We call this approach of LA followed by BMC LABMC. We evaluated the effectiveness of LABMC on some of the SV-COMP14 loop benchmarks, each with a property encoded into it. Well-known BMCs failed to prove most of these properties due to loops of large, infinite or unknown bounds, while LABMC obtained promising results. We also performed experiments on a real-world automotive application on which the well-known BMCs were able to prove only one of the 186 array accesses to be within array bounds. LABMC was able to successfully prove 131 of those array accesses to be within array bounds.

Read more


Value Slice: A New Slicing Concept for Scalable Property Checking
In: Tools and Algorithms for the Construction and Analysis of Systems (TACAS) 2015
Authors: Shrawan Kumar, Amitabha Sanyal, Uday P. Khedker

Abstract:
A backward slice is a commonly used preprocessing step for scaling property checking. For large programs, though, even the reduced slice may still be too large for verifiers to handle. We propose an aggressive slicing method that, apart from slicing out the same statements as a backward slice, also eliminates computations that only decide whether the point of property assertion is reachable. However, for precision, we carefully identify and retain all computations that influence the values of the variables in the property. The resulting slice, called a value slice, is smaller and scales better for property checking than a backward slice.
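The distinction can be seen on a toy program (a hypothetical example, not from the paper): a backward slice from the assertion keeps the loop because it decides whether the assertion is reached, while a value slice drops it because it does not influence the asserted value.

```python
def example(a, b, n):
    x = a + 1       # influences the asserted value z: kept by both slices
    i = 0
    while i < n:    # only decides reachability of the assertion:
        i += 1      # kept by a backward slice, removed by a value slice
    z = x * b       # the computation the property actually depends on
    assert z >= 0
```

With the loop sliced away, a verifier no longer has to reason about the loop's (possibly unknown) bound to check the property on z.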

Read more


On Implementational Variations in Static Analysis Tools
Authors: Tukaram Muske, Prasad Bokil
(2015 IEEE 22nd International Conference on Software Analysis, Evolution and Reengineering (SANER) 2015 )

Abstract:
Static analysis tools are widely used in practice due to their ability to detect defects early in the software development life-cycle, while also proving the absence of defects of certain patterns. A large number of such tools exist, and they vary in several tool characteristics such as analysis techniques, programming languages supported, verification checks performed, scalability, and performance. Many studies of these tools and their variations have been performed to improve analysis results or to identify the better tool among a set of available static analysis tools. We observe that these studies consider and compare only the aforementioned tool characteristics, while other implementational variations are usually ignored. In this paper, we study the implementational variations among static analysis tools and experimentally demonstrate their impact on tool characteristics and other analysis-related attributes. The aim of this paper is twofold: (a) to provide the studied implementational variations as choices, along with their pros and cons, to designers and developers of static analysis tools, and (b) to provide educational material to tool users so that analysis results are better understood.

Read more


Verification of Group Variables for Detecting Inconsistencies in Software
Authors: Advaita Datar, Amey Zare
(Society of Automotive Engineers World Conference 2015)

Abstract:
Verification and Validation (V&V) techniques commonly use static analysis to detect property violations in modern software systems. However, besides checking for general programming errors like division by zero, array index out of bounds, etc., certain program patterns can also be verified in order to detect inconsistencies in the software. For instance, there could be several strongly related program entities, such as groups of variables or data structure members updated together, which are often observed across various parts of a program. We term such strongly related entities group variables. When only a subset of group variables is updated in some part of a program, it could be the result of an inconsistency in implementation, which may lead to unexpected behavior or failure of the underlying system. Therefore, verifying group variables and their write operations is essential to ensure the safety and reliability of software.

Read more


Improving Dynamic Inference with Variable Dependence Graph (Book Chapter)
Book title: Runtime Verification 2015.
Author(s): Anand Yeolekar

Abstract:
Dynamic detection of program invariants infers relationships between variables at program points using trace data, but reports a large number of irrelevant invariants. We outline an approach that combines lightweight static analysis with dynamic inference to restrict irrelevant comparisons. This is achieved by constructing a variable dependence graph relating a procedure's input and output variables. Initial experiments indicate the advantage of this approach over the dynamic analysis tool Daikon.
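A minimal sketch of the filtering idea (function and variable names are hypothetical; the actual integration with an inference engine like Daikon differs): only variable pairs connected in the dependence graph are offered to the dynamic invariant detector.

```python
def related_pairs(dependencies, inputs, outputs):
    """dependencies: dict mapping each output variable to the set of input
    variables it depends on, as computed by a lightweight static analysis.

    Returns the (input, output) pairs worth comparing dynamically; all other
    combinations are suppressed as sources of irrelevant invariants.
    """
    pairs = set()
    for out in outputs:
        for inp in dependencies.get(out, set()):
            if inp in inputs:
                pairs.add((inp, out))
    return pairs

# r depends only on x and s only on y, so (x, s) and (y, r) are never compared.
deps = {"r": {"x"}, "s": {"y"}}
print(sorted(related_pairs(deps, {"x", "y"}, {"r", "s"})))  # [('x', 'r'), ('y', 's')]
```

Pruning the comparison set up front is what reduces the flood of spurious invariants without changing the dynamic inference itself.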

Read more


Cost-Effective Functional Testing of Reactive Software
Authors: R. Venkatesh, Ulka Shrotri, Amey Zare, Supriya Agrawal
10th International Conference on Evaluation of Novel Approaches to Software Engineering (ENASE 2015)

Abstract:
Creating test cases to cover all functional requirements of real-world systems is hard, even for domain experts. Any method to generate functional test cases must have three attributes: (a) an easy-to-use formal notation to specify requirements, from a practitioner's point of view, (b) a scalable test-generation algorithm, and (c) coverage criteria that map to requirements. In this paper, we present a method that has all these attributes. First, it includes Expressive Decision Table (EDT), a requirement specification notation designed to reduce translation efforts. Second, it implements a novel scalable row-guided random algorithm with fuzzing (RGRaF, pronounced R-graph) to generate test cases. Finally, it implements two new coverage criteria targeted at requirements and requirement interactions. To evaluate our method, we conducted experiments on three real-world applications. In these experiments, RGRaF achieved better coverage than pure random test case generation. When compared with the manual approach, our test cases subsumed all manual test cases and achieved up to 60% effort savings. More importantly, our test cases, when run on code, uncovered a bug in a post-production sub-system and captured three missing requirements in another.

Read more


Verifying Synchronous Reactive Systems using Lazy Abstraction 
Authors: Kumar Madhukar, Mandayam Srivas, Björn Wachter, Daniel Kroening, R. Metta
(Design, Automation & Test in Europe 2015)

Abstract:
Embedded software systems are frequently modeled as a set of synchronous reactive processes. The transitions performed by the processes are given as sequential, atomic code blocks. Most existing verifiers flatten such programs into a global transition system, to be able to apply off-the-shelf verification methods. However, this monolithic approach fails to exploit the lock-step execution of the processes, severely limiting scalability.

Read more


What You Ask is What You Get: Understanding Architecturally Significant Functional Requirements
Authors: Preethu Rose Anish, Maya Daneva, Jane Cleland-Huang, Roel J. Wieringa, and Smita Ghaisas (Tata Consultancy Services, India; University of Twente, Netherlands; DePaul University, USA)
RE'15

Abstract:
Software architects are responsible for designing an architectural solution that satisfies the functional and non-functional requirements of the system to the fullest extent possible. However, the details they need to make informed design decisions are often missing from the requirements specification. An earlier study we conducted indicated that architects intuitively recognize which requirements in a project are architecturally significant and often seek out relevant stakeholders in order to ask Probing Questions (PQs) that help them acquire the information they need. This paper presents results from a qualitative interview study aimed at identifying architecturally significant functional requirements' categories from various business domains, exploring relevant PQs for each category, and then grouping PQs by type.

Read more


Improving ASR Recognized Speech Output for Effective Natural Language Processing
Authors: Chandrasekhar Anantaram, Sunil Kumar Kopparapu, Nikhil Kini, Chiragkumar Patel
The Ninth International Conference on Digital Society, 2015.

Abstract:

The process of converting human spoken speech into text is performed by an Automatic Speech Recognition (ASR) system. While functional examples of speech recognition can be seen in day-to-day use, most of these work under the constraints of a limited domain and/or use additional cues to enhance the speech-to-text conversion process. However, for natural language spoken speech, the typical recognition accuracy achievable even for state-of-the-art speech recognition systems has been observed to be about 50 to 60% in real-world environments. The recognition is worse if we consider factors such as environmental noise, variations in accent, poor ability to express on the part of the user, or inadequate resources to build recognition systems. Natural language processing of such erroneously and partially recognized text becomes rather problematic. It is thus important to improve the accuracy of the recognized text. We present a mechanism based on evolutionary development to help improve the overall content accuracy of an ASR text for a domain. Our approach considers an erroneous sentence as a zygote and grows it through an artificial development approach, with evolution and development of the partial gene present in the input sentence with respect to the genotypes in the domain. Once the genotypes are identified, we grow them into phenotypes that fill the missing gaps and replace erroneous words with appropriate domain words in the sentence. In this paper, we describe our novel evolutionary development approach to repair an erroneous ASR text to make it accurate for further, deeper natural language processing.

Read more


Noise Cleaning and Gaussian Modeling of Smart Phone Photoplethysmogram to Improve Blood Pressure Estimation
Authors: Rohan Banerjee, Anirban Dutta Choudhury, Arpan Pal, Avik Ghose, Aniruddha Sinha
Proceedings of ICASSP, Brisbane Australia, April 2015

Abstract:
Photoplethysmography (PPG) signals captured using smartphones are generally noisy in nature. Although they have been successfully used to determine heart rate through frequency-domain analysis, further indirect markers like blood pressure (BP) require time-domain analysis, for which the signal needs to be substantially cleaned. In this paper, we propose a methodology to clean such noisy PPG signals. Apart from filtering, the proposed approach reduces the baseline drift of the PPG signal to near zero. Furthermore, it models each cycle of the PPG signal as a sum of two Gaussian functions, which is a novel contribution of the method. We show that the noise cleaning effect produces better accuracy and consistency in estimating BP, compared to the state-of-the-art method that uses the 2-element Windkessel model on features derived from the raw PPG signal captured from an Android phone.
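The two-Gaussian cycle model can be written down directly (parameter values and the lobe naming are illustrative; the paper fits the six parameters per PPG cycle):

```python
import math

def ppg_cycle_model(t, params):
    """Evaluate one PPG cycle modeled as a sum of two Gaussian functions.

    params: (a1, mu1, s1, a2, mu2, s2) - amplitude, centre and width of the
    first and second lobes respectively (hypothetical naming).
    """
    a1, mu1, s1, a2, mu2, s2 = params

    def gauss(a, mu, s):
        return a * math.exp(-((t - mu) ** 2) / (2.0 * s ** 2))

    return gauss(a1, mu1, s1) + gauss(a2, mu2, s2)

# At the centre of the first lobe, with the second lobe zeroed out,
# the model reduces to the first amplitude.
print(ppg_cycle_model(0.3, (1.0, 0.3, 0.1, 0.0, 0.6, 0.1)))  # 1.0
```

Fitting such a smooth parametric form per cycle is what makes time-domain features (needed for BP estimation) robust to residual noise in the cleaned signal.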


Adaptive Sensor Data Compression in IoT Systems: Sensor Data Analytics Based Approach
Authors: Soma Bandyopadhyay, Arpan Pal, Arijit Ukil
Proceedings of ICASSP, Brisbane Australia, April 2015

Abstract:
Sensor nodes are the embodiment of IoT systems at the microscopic level. As the volume of sensor data increases exponentially, data compression is essential for storage, transmission and in-network processing. The compression performance needed to realize significant gains in processing high-volume sensor data cannot be attained by conventional lossy compression methods. In this paper, we propose ASDC (Adaptive Sensor Data Compression), an adaptive compression scheme that caters to various sensor applications and achieves high performance gains. Our approach is to exhaustively analyze the sensor data and adapt the parameters of the compression scheme to maximize compression gain while optimizing information loss. We apply robust statistics and information-theoretic techniques to establish the adaptivity criteria. We experiment with large sets of heterogeneous sensor datasets to prove the efficacy. Nonlinear lossy compression (Chebyshev) is considered as the standard technique, and experimental results with frequency-domain compression such as the Discrete Fourier Transform (DFT) are shown as future scope for further improvement.


IoT Data Compression: Sensor-agnostic Approach
Authors: Arijit Ukil, Soma Bandyopadhyay, Arpan Pal
Proceedings of Data Compression Conference (DCC), Utah, USA, April 2015

Abstract:
Management of bulk sensor data is one of the challenging problems in the development of Internet of Things (IoT) applications. The high volume of sensor data calls for the optimal implementation of an appropriate sensor data compression technique to deal with the problems of energy-efficient transmission, storage space optimization for tiny sensor devices, and cost-effective sensor analytics. The compression performance needed to realize significant gains in processing high-volume sensor data cannot be attained by conventional lossy compression methods, which are less likely to exploit the intrinsic, unique contextual characteristics of sensor data. In this paper, we propose SensCompr, a dynamic lossy compression method specific to sensor datasets that is easily realizable with standard compression methods. SensCompr leverages robust statistical and information-theoretic techniques and does not require specific physical modeling. It is an information-centric approach that exhaustively analyzes the inherent properties of sensor data to extract the embedded useful information content and accordingly adapts the parameters of the compression scheme to maximize compression gain while optimizing information loss. SensCompr is successfully applied to compress large sets of heterogeneous real sensor datasets such as ECG, EEG, smart meter and accelerometer data. To the best of our knowledge, this is the first time a 'sensor information content'-centric dynamic compression technique has been proposed and implemented particularly for IoT applications, and the method is independent of sensor data types.

Read more


Why Not Keep Your Personal Data Secure Yet Private in IoT?: Our Lightweight Approach
Authors: Tulika Bose, Soma Bandyopadhyay, Arpan Pal, Abhijan Bhattacharyya, Arijit Ukil
Proceedings of ISSNIP, Singapore, April 2015

Abstract:
IoT (Internet of Things) systems are resource constrained and primarily depend on sensors for contextual, physiological and behavioral information. The sensitive nature of sensor data incurs a high probability of privacy breach due to intended or malicious disclosure. Uncertainty about the privacy cost of sharing sensitive sensor data over the Internet would mostly result in over-provisioning of security mechanisms, which is detrimental to IoT scalability. In this paper, we propose a novel method of optimizing the need for IoT security enablement, based on the estimated privacy risk of shareable sensor data. In particular, our scheme serves two objectives, viz. privacy risk assessment and optimizing secure transmission based on that assessment. The challenges are, firstly, to determine the degree of privacy and evaluate a privacy score from the fine-grained sensor data and, secondly, to preserve the privacy content through secure transfer of the data, adapted based on the measured privacy score. We meet this objective by introducing and adapting a lightweight scheme for secure channel establishment between the sensing device and the data collection unit/backend application, embedded within CoAP (Constrained Application Protocol), a candidate IoT application protocol, using UDP as a transport. We consider smart energy management, a killer IoT application, as the use case, where smart energy meter data contains private information about the residents. Our results with real household smart meter data demonstrate the efficacy of our scheme.
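The adaptation step can be sketched as a simple score-to-mode mapping (the thresholds and mode names are hypothetical; the paper adapts a lightweight secure channel embedded within CoAP based on the measured score):

```python
def transmission_policy(privacy_score, low=0.3, high=0.7):
    """Map an estimated privacy risk score in [0, 1] to a transmission mode,
    so that security overhead is provisioned only where the risk demands it."""
    if privacy_score >= high:
        return "secure-channel"   # high-risk data: full secure channel
    if privacy_score >= low:
        return "lightweight"      # moderate risk: lightweight protection
    return "plain"                # negligible risk: no extra overhead

print(transmission_policy(0.9))   # secure-channel
print(transmission_policy(0.5))   # lightweight
print(transmission_policy(0.1))   # plain
```

Tying the security mechanism to the assessed risk, rather than always encrypting, is what avoids the over-provisioning the abstract describes.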

Read more


pTransform: Making enterprise privacy safe
Author: Sachin Lodha
Gartner Newsletter on Data Privacy 

Abstract:
This is part of the TCS MasterCraft Quarterly Newsletter Series featuring research from Gartner. Did you know that more than 80% of data hacking is done by outside entities? Data privacy and protection are no longer good-to-have features for an enterprise but have become a mandatory part of any enterprise's security portfolio. This newsletter discusses the unique suite of TCS MasterCraft products and how it ensures that only the right data reaches the right people at any given time.

Read more
