
Enriching a fan’s content viewing experience 

TCS Research invents an AI-led technique that automates the creation of personalized, snackable content for sports and entertainment fans. Here’s how it works.

Highlights

  • Personalized, short-form content is key to hooking online audiences and keeping them engaged.
  • Scientists at TCS Research use AI and behavioral science to automate the creation of personalized, short-form media content that enhances the viewer experience.
  • An AI-driven technique automatically captures media highlights and personalizes them so fans can relive special moments.

TCS Research invents an AI-driven model that uses cognitive and affective annotations to automatically generate personalized, short-form content in the sports, news, and entertainment spaces.

An influx of digital content has shortened viewer attention spans. ‘Snackable’ or ‘bite-sized’ is how consumers like getting their information today, and content creators are continually competing for that fleeting attention. Personalized, short-form content is therefore key to enhancing the content viewing experience.

Enriching the experience  

Let’s explore a scenario. It’s the last few seconds of a game and your favorite footballer delivers a bicycle kick, clinching an unbelievable win. Fans across the world go wild watching this.

Technology can capture this stellar moment so a fan gets to savor it repeatedly. But what if this content personalization happened automatically, and for similar moments featuring only your sports idol?

Scientists at TCS Research are using AI and behavioral science to enable exactly this. 

An AI-led invention from TCS Research automatically captures sports highlights in a way that makes them more personalized and helps the fan experience last longer. How? By being snackable. Short-form content lends itself to easier sharing across social media fan communities, ensuring more viewers get to relive that moment.

How it happens

The invention uses annotations: labels applied to the media that give the AI model contextual knowledge. These annotations are cognitive (what the content is about) as well as affective (how it will affect the viewer), and together they help collate context.
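
To make the idea concrete, here is a minimal sketch in Python of what a per-segment annotation might look like. The class and field names (SegmentAnnotation, players, excitement, and so on) are hypothetical illustrations, not TCS Research’s actual schema.

```python
from dataclasses import dataclass, field

# Hypothetical annotation record for one video segment. Field names are
# illustrative assumptions, not TCS Research's actual schema.
@dataclass
class SegmentAnnotation:
    start_sec: float          # where the segment begins in the source video
    end_sec: float            # where it ends
    # Cognitive annotations: what the content is about
    players: list = field(default_factory=list)  # e.g. ["Player A"]
    event: str = ""           # e.g. "bicycle kick goal"
    # Affective annotations: how the content affects the viewer
    excitement: float = 0.0   # assumed 0..1 score from crowd/commentary cues
    sentiment: str = "neutral"  # e.g. "joy", "tension"

# Annotating the last-second bicycle kick from the scenario above
clip = SegmentAnnotation(
    start_sec=5392.0,
    end_sec=5407.5,
    players=["Player A"],
    event="bicycle kick goal",
    excitement=0.97,
    sentiment="joy",
)
```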

Multimodal analysis, spanning video, speech, and text, of these cognitive and affective annotations works towards creating the personalization. A semantic query is then used to generate the final highlight.
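
Continuing the hypothetical sketch above, a semantic query over such annotations could select and order the segments that match a fan’s interest. The build_highlight function, its exact-name matching, and the excitement threshold are all simplifying assumptions for illustration; the article describes the invention’s multimodal querying only at a high level.

```python
# Hypothetical semantic query over SegmentAnnotation records (see the sketch
# above). Matching by exact player name and a fixed excitement threshold is a
# simplifying assumption for illustration.
def build_highlight(annotations, player, min_excitement=0.8):
    """Return (start, end) cut points for segments featuring one player."""
    selected = [
        a for a in annotations
        if player in a.players and a.excitement >= min_excitement
    ]
    selected.sort(key=lambda a: a.start_sec)  # keep match chronology
    return [(a.start_sec, a.end_sec) for a in selected]

cuts = build_highlight([clip], player="Player A")
print(cuts)  # [(5392.0, 5407.5)]
```

The cut points could then be handed to a video editor or renderer to stitch the final snackable clip.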

Wondering how it comes together? Watch TCS Research’s Chief Scientist Niranjan Pedanekar and Researcher Rishabh Agrawal explain it all.