The media, entertainment and advertising research area conducts applied research in technologies that enable the understanding, composition and monetization of entertainment, media and advertising. Cognitive and affective understanding of media and advertising forms the foundation of the group’s research, on top of which it builds applications for personalized, ubiquitous and monetized entertainment, as well as effective advertising at scale. The group draws on multi-disciplinary advances in artificial intelligence to analyze media in a multimodal context (image, audio, video and text). A key differentiator is that the group applies principles of behavioral and cognitive science to understand and model the effect of media and advertising on humans. Its work aims to cut across the media and advertising industry spectrum: video, gaming, audio, new media, data and measurement, and adTech.
The research area focuses on key problems in the space: personalizing media at scale, intelligent media, media monetization and immersive media. To address these problems, research is organized under four themes:
Cognitive and affective annotation of media and advertising: Multimodal annotation of content (movies, sports, games and advertisements) powered by AI models to generate richer contextual knowledge of content: what the content is about (cognitive) and how it is likely to affect the consumer (affective).
Creation and composition of media and advertising: Research on automated and semi-automated creation, composition and synthesis of media and advertising. Example research problems include programmatic generation of short personalized content, automated artwork generation, synthetic media creation and multimodal stylization.
Media monetization: Research on media content as an effective channel for disseminating advertisements, on improving advertising spend attribution through brand-content personality matching and personalized advertising, and on the monetization of synthetic media.
Media learning and assessment: Research on capturing media-related parameters for analyzing audience affect and mood.
People & Publications
Research team: Led by principal scientist Niranjan Pedanekar, the team includes Savita Bhat, Rishabh Agrawal, Sarath Sivaprasad, Stephen Pilli, Satej Kadlay, Yashaswi Rauthan, Neilkumar Shah, Dharmeshkumar Agrawal, Manasi Malik, Ashwanth Thotta and Vikram Jamwal. Collaborators from other TCS Research groups include Shirish Karande, Manasi Patwardhan and Abhay Garg from the DL&AI research group.
Academic partners: Trinity College of Music, London; IIIT Hyderabad, India; IIIT Delhi, India

Publications
- Yashaswi Rauthan, Vatsala Singh, Rishabh Agrawal, Satej Kadlay, Niranjan Pedanekar, Shirish Karande, Manasi Malik, and Iaphi Tariang, “Avoid Crowding in the Battlefield: Semantic Placement of Social Messages in Entertainment Programs”, 2nd International Workshop on AI for Smart TV Content Production, Access and Delivery (AI4TV’20), ACM, 2020.
- Stephen Pilli, Manasi Patwardhan, Niranjan Pedanekar, and Shirish Karande, “Predicting Sentiments in Image Advertisements using Semantic Relations among Sentiment Labels”, Workshop on Challenges and Promises of Inferring Emotion from Images and Video at CVPR 2020, 2020.
- Tanmayee Joshi, Sarath Sivaprasad, and Niranjan Pedanekar, “Partners in Crime: Utilizing Arousal-Valence Relationship for Continuous Prediction of Valence in Movies”, AAAI-19 Workshop on Affective Content Analysis, 2019.
- Tanmayee Joshi, Sarath Sivaprasad, Savita Bhat, and Niranjan Pedanekar, “Multimodal Approach to Predicting Media Memorability”, MediaEval, 2018.
- Rohit Saxena, Savita Bhat, and Niranjan Pedanekar, “EmotionX-Area66: Predicting Emotions in Dialogues using Hierarchical Attention Network with Sequence Labeling”, Proceedings of the Sixth International Workshop on Natural Language Processing for Social Media (SocialNLP) at ACL 2018, 2018.
- Tejas Bhole, Tejas Mahajan, Nihar Gajare, Niraj Pandkar, Niranjan Pedanekar, and Shilpa Paygude, “Understanding Emotional and Effective Components of Advertisements”, Towards Automatic Understanding of Visual Advertisements (ADS) Workshop at CVPR 2018, IEEE, 2018.
- Tanmayee Joshi, Sarath Sivaprasad, Rishabh Agrawal, and Niranjan Pedanekar, “Multimodal Continuous Prediction of Emotions in Movies using Long Short-Term Memory Networks”, Proceedings of the 2018 ACM International Conference on Multimedia Retrieval (ICMR), ACM, 2018.
- Rohit Saxena, Savita Bhat, and Niranjan Pedanekar, “Live on TV, Alive on Twitter: Quantifying Continuous Partial Attention of Viewers During Live Television Telecasts”, IEEE International Conference on Data Mining Workshops (ICDMW), IEEE, 2017.