Analytic-optimized recommendations are a prime engine of e-commerce. Recommendation engines, aka “next best action” platforms, execute according to a core analytic action grammar, which I’d characterize as follows: “we now recommend this to you with high confidence because we expect that you will accept it, per our knowledge that you and/or people similar to you have responded to this and/or things similar to this under these and/or similar circumstances.”
Essentially, the automated recommendation is a knowledge-driven hunch that was made by decision/data scientists when they constructed the analytics and algorithms that power it all. The hunch rides on specific assertions about the past (“have responded”), present (“we now recommend”), and future (“expect you will accept”) of a particular decision-scenario instance being presented to a particular customer. Contrast this to the purely speculative low-confidence action grammar underlying the proverbial stab in the dark, which is roughly “how’s this grab ya?”
Recommendation engines work beautifully when they leverage deep historical data and a detailed profile of every customer, every item, and every circumstance that might be relevant to their next best offer. When that all clicks into place, they can make automated recommendations with high confidence that they will be accepted.
But when a new customer, an unfamiliar never-before-sold item, or an unprecedented circumstance enters the picture, recommendations become risky stabs in the dark. You have little idea who, if anybody, will find it acceptable. And you may not even know whether the simple act of recommending it will offend or alienate various customers (e.g., “I can’t believe they thought I’m the kind of person who goes for that kind of thing”).
This problem, referred to as the “cold start” dilemma, is often encountered by developers of recommendation-engine decision logic, according to this article. It is most likely to arise when new products that lack a transactional history enter an e-commerce provider’s online catalog and inventory. The engines rely on historical data of user response rates in order to have enough confidence in a product’s response-rate potential to justify presenting it to a live user.
Unless various tactics are used to overcome the “cold start” problem, one unfortunate operational consequence might be a vicious circle: the new product, lacking any prior transactions, may seldom be recommended, hence not generate enough transaction history to rise higher in the engine’s rankings. This problem, says author Ricky Ho, is also found with recommendation engines that serve ad content; brand-new ads with zero initial clickthrough history need to build up clickthroughs, somehow, in order to realize their potential.
One key tactic for breaking this deadlock, says Ho, is to write recommendation-engine algorithms that alternate between “exploitation” (“make the optimal choice based on current data”) and “exploration” (“make a random choice or choices that we haven’t made before”). Exploitation favors recommendations with proven transactional track records (viewing, acceptance, response, etc.). In contrast, exploration spreads the recommendations randomly among proven and unproven choices. It does so within the context of continuous in-production “A/B testing” (aka “real-world experiments”) by the recommendation engine of diverse options. In the process, exploration builds up a history for the unproven choices and thereby supports the continuous refinement of the analytic-driven weights that determine how often those choices will be offered under future circumstances.
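To make the exploit/explore alternation concrete, here is a minimal sketch of an epsilon-greedy chooser. Everything in it is illustrative rather than drawn from Ho’s actual code: the `history` dictionary (item → acceptances and offers) and the function name are my own assumptions about how such state might be tracked.

```python
import random

def epsilon_greedy(history, epsilon=0.1):
    """Pick an item to recommend.

    `history` maps item id -> (acceptances, offers). With probability
    `epsilon` we explore (random pick, possibly a never-offered item);
    otherwise we exploit the best observed acceptance rate.
    Illustrative sketch, not any specific engine's API.
    """
    if random.random() < epsilon:
        # Exploration: random choice among all items, proven or not.
        return random.choice(list(history))
    # Exploitation: highest observed response rate; max(offers, 1)
    # avoids division by zero for brand-new items.
    return max(history,
               key=lambda item: history[item][0] / max(history[item][1], 1))
```

With `epsilon=0` this always exploits the current best performer; with `epsilon=1` every play is an exploratory stab in the dark, which is how a cold-start item accumulates its first transactions.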
Ho refers to this as an “epsilon-greedy” strategy: “at every play we throw a dice between explore and exploit.” However, as his discussion shows, it’s not entirely a random toggling between the approaches, but operates on some deterministic rules as well. Before long, exploratory stabs in the dark must give way to brightly illuminated, precisely optimized recommendation paths. According to Ho, “We’ll pick our strategy to do more Exploration initially when you have a lot of uncertainty, and gradually tune down the ratio of Exploration (and leverage more on Exploitation) as we collected more statistics.”
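Ho doesn’t spell out a specific decay schedule, but the “tune down Exploration as statistics accumulate” idea can be sketched with a simple rule of thumb: let epsilon shrink with the number of plays, down to a small floor so that some exploration never fully stops. The inverse-square-root schedule and the `floor` parameter below are my own illustrative choices, not Ho’s formula.

```python
import math

def decayed_epsilon(total_plays, floor=0.05):
    """Exploration ratio that starts at 1.0 (pure exploration) and
    decays toward `floor` as play counts grow. The 1/sqrt(n) schedule
    is one common heuristic, assumed here for illustration."""
    return max(floor, 1.0 / math.sqrt(total_plays + 1))
```

Early on (`total_plays` near zero), nearly every recommendation is exploratory; after thousands of plays, the engine exploits its accumulated statistics almost all the time while still reserving a sliver of traffic for new and unproven items.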
In a dynamic e-commerce environment, the need for “stab-in-the-dark” recommendations will never go away completely. A steady stream of unfamiliar new offerings enters the system continually, and some may occasionally need to be offered “sight unseen” to customers in order to generate a transaction history that shapes future recommendations.
In addition, a large “long tail” catalog of veteran but low-demand niche products will need the added demand boost that an automated recommendation or three might provide.
Even in the absence of new or “long tail” products, it may be smart to add a randomizing factor to the recommendation logic that drives offers to existing customers. This may be useful for A/B testing of the extent to which customers respond to “out of left field” offers that your models previously assumed they’d reject out of hand.
To the extent that you’re just continuing to recommend the “same ol’ same ol’” products to your current customers 100 percent of the time, you may never realize the range of other items that they might respond to, if only you’d offered them under the proper circumstances. In a previous post, I referred to this as offering “bursts of spontaneity within the routine.”
You might also refer to it as the “serendipity principle.” Serendipity is a powerful force for happiness. People often don’t know what they truly want until it jumps out and grabs them.