While digital-born retailers are the poster children of hyper-personalization, no solution is perfect. Soft computing methods are typically well suited to the task, since the data is incomplete and the situations are uncertain. We look at two approaches: recommendation of the next best offer, and information fusion for recommending the next best action. As personalization grows more ambitious, its complexities are likely to keep researchers in the field of Computational Intelligence busy for quite some time to come.
Amazon started personalizing our shopping experience twenty years ago. Now Netflix creates each viewer's own home theatre, experimenting even to the extent of changing billboard images. For instance, if you watch Doctor Strange and The Imitation Game, it offers the next Cumberbatch movie; no surprises there. What is surprising is that it may change the poster of 12 Years a Slave from showing Chiwetel Ejiofor (the lead) to showing Benedict Cumberbatch (a minor role as William Ford), just so you will pick it up. You may indeed be happy to watch a movie you never knew had Cumberbatch in it; and Netflix has sold you a new genre of content. Digital businesses are trying to do special things for hundreds of millions of customers, across millions of products, at every interaction. The segment of one has arrived.
Powering the segment of one
The power behind the segment of one lies in predictive analytics built on big data technologies.
These work with large volumes of transactional and operational data containing a mix of structured and unstructured content. Predictive analysis helps identify different ways to keep a customer engaged on a site. Knowledge and insights extracted from a customer's history, including their browsing history, past transactions, items abandoned in their carts, and buying behavior, are combined with data tracked during the current visit to generate recommendations and targeted offers. Adding profile information obtained from customer relationship management (CRM) systems, or gathered through surveys and customer-provided feedback, often improves the results significantly. Despite the strides personalization has made, it is still far from perfect. As personalization becomes more widespread, the many complexities of dealing with this many-headed monster are coming to the forefront; these are likely to keep researchers in the field of Computational Intelligence busy for quite some time to come.
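The blending of long-term history with in-session signals can be sketched minimally. The item names, weights, and normalization below are illustrative assumptions, not the design of any production recommender:

```python
from collections import Counter

def score_items(purchase_history, session_views, catalog,
                history_weight=0.6, session_weight=0.4):
    """Blend long-term purchase history with current-session views
    into a single per-item relevance score (illustrative weighting)."""
    hist = Counter(purchase_history)
    sess = Counter(session_views)
    max_h = max(hist.values(), default=1)
    max_s = max(sess.values(), default=1)
    scores = {}
    for item in catalog:
        scores[item] = (history_weight * hist[item] / max_h
                        + session_weight * sess[item] / max_s)
    # rank catalog items from most to least relevant
    return sorted(scores, key=scores.get, reverse=True)

# hypothetical customer: bought shoes twice and socks once,
# viewed a hat and shoes in the current session
catalog = ["shoes", "socks", "hat", "scarf"]
ranking = score_items(["shoes", "shoes", "socks"], ["hat", "shoes"], catalog)
```

A real system would replace the two hand-set weights with learned parameters and the raw counts with richer behavioral features.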
With enormous computational resources at their disposal, ecommerce establishments are taking personalization to its logical extreme, where microsegmentation culminates in the segment of one. It is also possible to do powerful long-tail analysis that takes into account seemingly unpopular but niche and novel products that interest only specific customer segments. Historically, business analysts have tried to segment customers into groups based on similarity in behavioral patterns, and this is still a valid grouping. Based on historical and current actions gathered through real-time tracking, microsegmentation aims at finding small groups of customers who exhibit fine-grained behavioral similarities. Coupled with customer profile data and ample detail about available inventory, predictive analytics models built for each of these segments can provide the right inputs for targeted marketing and a nuanced customer experience. These models can also generate context-sensitive next best actions or offers. Combining microsegmentation with ancillary data about cross-sell, upsell, and so on has led to rich predictive models that suggest the next best offers to customers with a high success rate.
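As one deliberately tiny illustration of microsegmentation, behavioral features can be clustered into fine-grained groups. The pure-Python k-means below, with made-up per-customer features, is a sketch of the idea rather than the method used by any particular retailer:

```python
import math
import random

def kmeans(points, k, iters=20, seed=0):
    """Tiny k-means for microsegmentation: each point is a customer's
    behavioral feature vector, e.g. [visits per week, avg basket value]."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # assign each customer to the nearest centroid
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[i].append(p)
        # recompute centroids (keep old one if a cluster empties)
        centroids = [
            [sum(col) / len(cl) for col in zip(*cl)] if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return clusters

# hypothetical customers: [visits per week, average basket value]
customers = [[1, 10], [2, 12], [1, 11], [9, 90], [10, 95], [8, 88]]
segments = kmeans(customers, k=2)
```

At scale, this naive loop would be replaced by distributed implementations, but the principle of grouping customers by fine-grained behavioral similarity is the same.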
Computational intelligence and hyper-personalization
Computational Intelligence usually refers to an array of techniques employed to emulate complex real-world phenomena that are difficult to represent as pure mathematical models. Traditionally, soft computing methods have included Fuzzy Logic, Rough Sets, Evolutionary Computation, Machine Learning, and Probabilistic Reasoning, all of which are designed to cope with uncertainty and incompleteness of knowledge. These methods have proved highly suitable for working with unstructured data such as images, videos, or text. Soft computing methods help extract the personal preferences of users from the product images or descriptions they choose, the textual feedback they give, the content of documents they read, and the content they generate in the form of status messages or communications. Since user actions are rarely deterministic, depending instead largely on the surrounding physical environment and mental state, probabilistic reasoning techniques play a key role in designing personalized content delivery systems.
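To make the role of probabilistic reasoning concrete, a toy next-action predictor can be built as a naive Bayes model over context features. The actions and context features below are invented for illustration, and Laplace smoothing stands in for the handling of incomplete behavioral data:

```python
from collections import defaultdict

class NaiveBayesNextAction:
    """Toy probabilistic model: estimates P(action | context features)
    with Laplace smoothing, so unseen feature/action pairs still get
    a small nonzero probability."""

    def __init__(self):
        self.action_counts = defaultdict(int)
        self.feature_counts = defaultdict(lambda: defaultdict(int))

    def observe(self, context, action):
        self.action_counts[action] += 1
        for f in context:
            self.feature_counts[action][f] += 1

    def predict(self, context):
        total = sum(self.action_counts.values())
        best, best_p = None, -1.0
        for action, n in self.action_counts.items():
            p = n / total  # prior from observed frequency
            for f in context:
                # Laplace-smoothed P(feature | action)
                p *= (self.feature_counts[action][f] + 1) / (n + 2)
            if p > best_p:
                best, best_p = action, p
        return best

model = NaiveBayesNextAction()
model.observe({"evening", "mobile"}, "watch_trailer")
model.observe({"evening", "tv"}, "watch_movie")
model.observe({"morning", "mobile"}, "browse")
prediction = model.predict({"evening", "tv"})
```

With more data, such a model's posterior sharpens; with little data, the smoothing keeps it from ruling out any action, which is exactly the behavior one wants under uncertainty.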
High-performance computing and hyper-personalization
The dream of hyper-personalization can be attained only if it is coupled with a delivery platform that can track millions of users and predict their next needs or actions based on history. High-performance computing practices aggregate computing power to deliver application performance that typical workstations cannot. Building application-agnostic hyper-personalization platforms that can adapt to multiple recommendation engines and domains, while scaling up robustly, remains a research challenge.
Role-based personalization in the enterprise
Digital-born retail has been the poster child for personalization. But for businesses with global footprints, information overload across the enterprise and its extensions (vendors, partners, supply chains, customers) has created the need for personalized content delivery. Contextually intelligent systems fuse multi-structured information gathered from heterogeneous sources in real time to predict the next likely action of the user, and thereby optimize the uptake of the recommended content, not just for a single application but for a multitude of applications.
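A minimal sketch of such fusion, assuming each source emits per-action confidence scores and a hand-set reliability weight (both invented here), might combine them with a weighted sum and pick the highest-scoring next action:

```python
def fuse_signals(source_scores, source_weights):
    """Late-fusion sketch: each heterogeneous source reports per-action
    confidence scores; combine them with per-source reliability weights
    and return the highest-scoring next action."""
    fused = {}
    for source, scores in source_scores.items():
        w = source_weights.get(source, 0.0)
        for action, s in scores.items():
            fused[action] = fused.get(action, 0.0) + w * s
    return max(fused, key=fused.get)

# hypothetical enterprise sources and hand-set reliability weights
signals = {
    "crm":      {"renew_contract": 0.7, "upsell": 0.2},
    "email":    {"upsell": 0.9},
    "helpdesk": {"escalate": 0.5, "renew_contract": 0.3},
}
weights = {"crm": 0.5, "email": 0.3, "helpdesk": 0.2}
next_action = fuse_signals(signals, weights)
```

Real contextual systems learn the reliability weights and fuse far richer evidence, but the late-fusion pattern of weighting and combining per-source scores is a common starting point.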
While the recommendation models discussed here appear fairly generic in design, their applications across other fields have not seen the same level of success as eRetail. The primary reason is the lack of mature behavioral models for other domains. Though each area's challenges appear different, there are some common problems that personalized recommenders are trying to address.
(i) Handling novelty: Though repetition and frequency derived from personal preferences drive the bulk of personalized recommendations, sectors like travel and hospitality, and entertainment, which have seen a huge surge in the number of online transactions, have had limited success with personalization to date. A major reason is the inability of current models to incorporate novelty as a user preference. Novel or unique items, which have no history of being chosen or, more importantly, are not even similar to products chosen earlier, cannot be accommodated easily in the predictive model; this is referred to as the cold-start problem in the science of recommendation. Yet novelty is a key factor driving customer choices in these sectors. Determining the novelty of an item is comparatively easier than determining which novel items to recommend to whom, how, and when. These are among the major challenges that personalization systems are trying to deal with.
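One common (though by no means the only) way to inject novelty despite the cold-start problem is epsilon-greedy exploration: mostly exploit the history-based ranking, but reserve a fraction of recommendation slots for items with no interaction history. The item names and epsilon value below are illustrative:

```python
import random

def recommend_with_novelty(ranked_known, novel_items, n=5,
                           epsilon=0.2, seed=42):
    """Epsilon-greedy sketch for cold-start: with probability epsilon,
    fill a slot with a randomly chosen novel (history-free) item;
    otherwise take the next item from the history-based ranking."""
    rng = random.Random(seed)
    picks = []
    known = list(ranked_known)
    novel = list(novel_items)
    while len(picks) < n and (known or novel):
        if novel and (not known or rng.random() < epsilon):
            picks.append(novel.pop(rng.randrange(len(novel))))
        else:
            picks.append(known.pop(0))
    return picks

# hypothetical history-ranked items "a".."e" and novel items "x", "y"
recs = recommend_with_novelty(["a", "b", "c", "d", "e"], ["x", "y"], n=5)
```

Production systems refine this with bandit algorithms that adapt the exploration rate per user and per item, but the exploit/explore trade-off is the same.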
(ii) Objective-driven information recommendation: Recommending information or textual content to users faces a different type of challenge. Due to the explosive growth in the amount of available digital information, filtering systems that can detect the right information for a user are in great demand. Along with user preferences that can be learnt from past behavior, personalization in this case also has to estimate an intangible quantity: the worth of a piece of information in the user's personal and professional life. While role-based information-filtering systems are being designed, the challenge lies in formally specifying the concepts of role and responsibility and integrating them to create an objective-driven information-filtering framework.
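Formally specifying role and responsibility is the hard part; as a toy approximation, a role can be encoded as weighted keywords and documents filtered by their overlap with that profile. The documents, profile, and threshold below are invented for illustration:

```python
def role_filter(documents, role_profile, threshold=0.5):
    """Role-based filtering sketch: score each document by the summed
    weights of its terms in a role profile (role and responsibility
    crudely encoded as weighted keywords); keep documents that clear
    the threshold."""
    selected = []
    for doc_id, terms in documents.items():
        score = sum(role_profile.get(t, 0.0) for t in terms)
        if score >= threshold:
            selected.append(doc_id)
    return selected

# hypothetical documents, each reduced to a set of key terms
docs = {
    "q3_budget":   {"budget", "forecast", "headcount"},
    "api_design":  {"rest", "latency", "schema"},
    "team_outing": {"venue", "rsvp"},
}
# hypothetical profile for a finance role
cfo_profile = {"budget": 0.4, "forecast": 0.4, "headcount": 0.2}
reading_list = role_filter(docs, cfo_profile)
```

An objective-driven framework would go further, weighting terms by how they serve the role's current goals rather than by a static keyword list.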
(iii) Security and privacy concerns: The recent success of personalization in many sectors can be attributed to the data that users share with content providers, voluntarily and involuntarily. With rising concerns over privacy and security breaches, research in the area of personalization will have to deal with this seriously. An immediate point of concern is the enforcement of the new EU-wide privacy rules, the General Data Protection Regulation (GDPR), which is expected to create a seismic shift not only in the way personal information is defined but also in how it is collected, processed, used, and transferred. Several key data elements, like sexual orientation and organizational memberships, will require the explicit consent of users before they can be used. An important technological aspect affected by the introduction of GDPR is the regulation on using "persistent online identifiers", such as cookies, which are small text files storing tiny pieces of data in a user's browser. What this will mean for personalization platforms that store these identifiers remains to be seen, since violators of GDPR provisions are likely to face huge penalties.