Customer Engagement – From BI Guesswork to Prescriptive AI

Customer Engagement approaches, and the technology used to enable them, have evolved immensely over the last 25 years.  Two distinct eras define this period, as well as a major technological shift to real-time systems with AI feedback loops.

The BI Guesswork Era

With the advent of the Business Intelligence (BI), Marketing Technology, and Campaign Management era (circa 1990), marketers had limited predictive powers.  In many cases, when it came to what individuals really needed, they resorted to guesswork.  They channeled their energy into perfecting efficiencies in targeting and automation.  Their main emphasis was finding an approximate audience for products, so they designed promotions for large segments of the population.  They fixated on finding segments that fit into certain “likelihood to respond” buckets, and then repeatedly tested timing, messages, and creative content by peppering those segments with treatments.  In other words, they identified massive groups, matched offers to those groups, and then used technology to systematize their marketing.

Although some of those marketers drew on basic models (such as RFM – Recency, Frequency, Monetary), which provided rough guidance on how deep to mail into a file, most didn’t even do this.  Typical response rates were 0.5% at best.  During this period, the average adult was receiving about 50 pounds of junk mail a year – dubbed junk mail because the promotions were irrelevant 99.5% of the time.  Thus, the majority viewed this activity as frivolous, mocking it with nicknames and jokes.  Regardless, marketers were unrelenting, carpet-bombing consumers until they either responded or learned how to opt out.
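A minimal sketch of what such an RFM cut might have looked like in code – the customers, thresholds, and two-bucket scoring below are all hypothetical, chosen only to illustrate how crude the "how deep to mail" decision was:

```python
from datetime import date

# Hypothetical purchase history: customer -> list of (purchase_date, amount)
history = {
    "C001": [(date(2023, 11, 2), 120.0), (date(2024, 1, 15), 45.0)],
    "C002": [(date(2022, 6, 30), 300.0)],
    "C003": [(date(2024, 2, 1), 20.0), (date(2024, 2, 20), 35.0), (date(2024, 3, 5), 15.0)],
}

def rfm_scores(history, today, r_cut=90, f_cut=2, m_cut=100.0):
    """Score each customer 1 (low) or 2 (high) on Recency, Frequency, Monetary."""
    scores = {}
    for cust, purchases in history.items():
        recency_days = (today - max(d for d, _ in purchases)).days
        frequency = len(purchases)
        monetary = sum(amt for _, amt in purchases)
        scores[cust] = (
            2 if recency_days <= r_cut else 1,   # bought recently?
            2 if frequency >= f_cut else 1,      # buys often?
            2 if monetary >= m_cut else 1,       # spends a lot?
        )
    return scores

scores = rfm_scores(history, today=date(2024, 3, 31))
# Mail only the "deepest" part of the file: customers scoring high overall
mail_list = [c for c, s in scores.items() if sum(s) >= 5]
```

Note there is no prediction of an individual’s need anywhere in this logic – just coarse buckets that decide who gets the next batch of mail.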

Their tools of choice were crude.  They were slow, not fine-grained, and certainly not customer-centric.  Usually, the campaign flowcharts they devised used basic analytics: deterministic queries ran against databases, returning huge customer lists called segments.  For any further segment refinement, they relied on business intelligence technologies like OLAP (Online Analytical Processing) and dashboards to support their intuition.  Even as some of the more sophisticated marketers attempted predictions, providing those models with feedback was nearly impossible due to the batch processing nature of the flows and platforms they employed.  As shown in Figure 1, although some crept up the analytics value chain toward being predictive and answering the question “What will happen?” most fell short.
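To make "deterministic query returning a segment" concrete, here is a small sketch using an in-memory SQLite database; the schema, column names, and filter values are all hypothetical stand-ins for a campaign database of that era:

```python
import sqlite3

# In-memory stand-in for a campaign database (hypothetical schema and data)
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customers (id INTEGER, state TEXT, age INTEGER, last_order_amt REAL)")
con.executemany(
    "INSERT INTO customers VALUES (?, ?, ?, ?)",
    [(1, "NY", 34, 80.0), (2, "CA", 52, 250.0), (3, "NY", 45, 15.0), (4, "TX", 61, 120.0)],
)

# A deterministic query: every row matching the filter lands in the segment.
# There is no probability, no individual prediction, and no feedback loop --
# just a fixed rule producing a list to mail.
segment = con.execute(
    "SELECT id FROM customers WHERE age BETWEEN 35 AND 65 AND last_order_amt >= 100"
).fetchall()
# segment -> [(2,), (4,)]
```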

Figure 1: Business intelligence

Source: http://www.bi-bestpractices.com/view-articles/5642

Using a backward approach, engineers pre-developed the product, and marketers wrangled the packaging, promotions, and messaging to the audience – again using more guesswork than analytics.  It was difficult to react contextually, at scale, to actual individual needs, so instead they focused on groups of customers.

And so they executed bulk outbound communications at scale. With promotional ammunition in hand, readily available data afforded them reasonable targeting coordinates, and computers and devices served as the delivery mechanisms. The marketplace and emerging technology supported a numbers game and rewarded short-term economic gains.  Longer-term loyalty and longitudinal effects took a back seat.

By the turn of the century, direct marketers were plodding ahead using ever-richer consumer profiles that enabled them to focus promotions on increasingly smaller segments.  And even though Peppers & Rogers had coined the term “1:1 marketing” in 1995, enterprise marketers were nowhere near direct conversations with individual consumers.  Still constrained by scale, they were stuck communicating to segments, albeit smaller and smaller ones.  What they didn’t realize was that they were about to hit a wall (Figure 2).

Figure 2: Real-Time Evolution

By 2005, marketers had the tools to perform hyper-targeting.  They aggressively tested different incentives and creative elements, and fine-tuned them based on response metrics.  Scoring models were refined, though the expense was large and the iterations long.  The results didn’t so much alter someone’s behavior as provide alternatives to consider, often ones that still had borderline relevance to a current need.

Often the goal, instead of steadfast loyalty, was simply to increase immediate purchases with minimal marketing waste.  In theory, if targets responded and steadily purchased, no matter the purchase, more purchases should follow.  Supposedly then, over the long haul, the business accomplished its goal of capturing more share of wallet.

Around 2010, some leading-edge marketers, having realized the value of a real-time approach, began hitting that wall.  The foundation of the system they had spent 15 years building was the wrong foundation.  It was a platform built for segmentation, and it supported the wrong approach.  They needed a “Real-time 1:1” platform, customer-centric prescriptions, and a more dynamic feedback loop.

Enter the Prescriptive AI Era

Good marketers have always been similar to psychologists in that they study consumer behavior.  With today’s data and technology, it’s possible to take engagements one step further – diagnosing and treating those customers to alter their behavior methodically over time.  Stealing a page from the broadcast advertisers’ playbook – who use “subliminal seduction” – many marketers are marching toward implementing systems that use incremental and proactive drip therapy to persuade inner minds toward brand myopia.

The only piece missing from the puzzle was a real-time platform.  Traces of one began appearing in 2010, as big data systems, parallel computing, solid-state storage, and other technology advances drove computing costs radically down, and speeds up.

Today the pieces are in place, and more are climbing aboard, as real-time platforms have fully emerged and are cheaper and more reliable.  It’s now feasible to use customer-centric prescriptive tactics at scale and get huge lift over baseline approaches.  Models can predict behavior to an amazing degree of accuracy.  The artificial intelligence (AI) models both diagnose and – using Decision Management – proactively prescribe next-best-action engagement treatments.

Figure 3: Next-best-action

Everyone knows engagement professionals today have more channels.  They’re no longer constrained to broadcast media delivery systems (which lack dynamic feedback loops), and can now use digital response media and even physical surveillance.  And with this plethora of channels, they can administer and perfect personalized, continuous, and hypersonic stimuli-response strategies.  Essentially, they can employ an always-on brain, powered by rich consumer data, advanced machine learning algorithms, and a 24 x 7 continuous learning loop.

What’s more, these machine learning technologies and embedded predictive algorithms can work in a very deliberate and intelligent way, dynamically creating conditional content and promotions each time consumers reengage on a digital channel.  Incremental repeated responses (or lack thereof) allow these models to learn, tune themselves, and in essence direct and alter the future – programming individual behavior.  Customers are enticed to reveal ever-increasing amounts of personal information in exchange for points or some privilege, trusting that the exchange is fair and that their information will be used only for its stated purpose.

All of this behavioral activity – social, purchase, demographic, and so forth – is recorded, with the aim of feeding it back into those same algorithms that iterate to find new patterns, refine predictions, and subsequently inform Decision Strategies that recommend the next series of treatments.  In some cases, these systems can even run autonomously, using advanced data science techniques such as genetic algorithms, game theory, and reinforcement learning.  System designers seed the rules of the game, configure the objective function and constraints, and then push “Go.”  The designers and their business counterparts peer in on occasion to monitor whether goals, such as higher loyalty and profit, are trending in the right direction.

Figure 4: AI Learning Loop
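One simple way to illustrate such a learning loop is a Thompson-sampling bandit that learns the next-best-action from response feedback alone.  This is only a sketch of the general technique, not any vendor’s decisioning engine: the treatment names and response rates are hypothetical, and the “customers” are simulated random draws.

```python
import random

random.seed(42)

# Hypothetical treatments with hidden true response rates;
# the loop must discover them purely from observed feedback.
true_rates = {"discount": 0.05, "upgrade": 0.12, "newsletter": 0.02}

# Per-treatment feedback counts: [responses, non-responses]
stats = {t: [0, 0] for t in true_rates}

def next_best_action():
    """Thompson sampling: draw from each treatment's Beta posterior, pick the max."""
    return max(stats, key=lambda t: random.betavariate(stats[t][0] + 1, stats[t][1] + 1))

# The always-on loop: act, observe the response, feed it back into the model
for _ in range(5000):
    action = next_best_action()
    responded = random.random() < true_rates[action]  # simulated customer feedback
    stats[action][0 if responded else 1] += 1

plays = {t: sum(counts) for t, counts in stats.items()}
# With high probability, most interactions now go to the best-performing treatment
```

The designers seed the rules (the candidate treatments), configure the objective (response rate), and push “Go”; the system then shifts traffic toward whatever works, with no human picking segments.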

Although this suggests overt manipulation, it’s not necessarily malevolent.  Provided customers have choice (and are well informed and discriminating), and businesses operate ethically (on a level playing field), the economic scales can still balance: brands that provide products and experiences with the best value can still prevail, and consumers get a fair exchange of value.  You may have noticed, however, a few important “ifs” in this last statement.

Whether we like it or not, we now live in the Prescriptive Era, where the mission of brands is to get to know us, maybe even better than we actually know ourselves.  That might sound crazy, but consider this statement from a recent article, “The Rise of the Weaponized AI Propaganda Machine,”[i] in which an analytics firm compiled data on Facebook likes and built millions of consumer behavior profiles that were subsequently fed into an AI political campaigning machine:

“With 300 likes, Kosinski’s machine could predict a subject’s behavior better than their partner. With even more likes it could exceed what a person thinks they know about themselves.”

Whether you buy this or not, the fact remains that consumer profiles are becoming richer and consumer behavior predictions more accurate.  Data are exploding, as are the algorithms voraciously feeding on them.

Brands compiling this data and wielding their algorithms do it because they say they want to know us better.  Presumably, this enables them to continuously add value, deliver insights, help automate our lives, and make attractive recommendations.

Ostensibly then, for consumers, it comes down to a few simple questions:

  • How much is our data worth to us?
  • What’s the value of the insights that brands provide when they use our data?
  • Are we getting an equitable exchange?
  • Can we trust brands to honor their commitments regarding the use of our data?
  • Do we understand the fine print in those agreements?

Consider the mission statement of Datacoup, a data company based in New York, which has gone one step further and is trying to create a marketplace where consumers have a more direct exchange of value for their data:

“Our mission is to help people unlock the value of their personal data. Almost every link in the economic chain has their hand in our collective data pocket. Data brokers in the US alone account for a $15bn industry, yet they have zero relationship with the consumers whose data they harvest and sell. They offer no discernible benefit back to the producers of this great data asset – you.”[ii]

So are you getting value for the data you’re giving up?  Are the “Prescriptions” you get in return an equitable exchange?  Are you aware of what happens to your data after you release it?

A Day in the Life of Your Data

We all joke about the eye-glazing 56-page “Terms and Conditions” from Apple that we always accept and never read.  We want the free software, and don’t worry about the consequences.  However, carrying that mindset into everything you do online is dangerous.

Consider this for a moment.  Most firms have language that allows them to send your data to affiliates, which is a fancy word for other companies.  Once floating in the ecosystem, it’s ground, distilled, and appended to other copies, until records of your preferences, habits, and behaviors are expressed in 5,000 or more different ways.  If it’s wrong, it doesn’t matter, because you don’t own it, don’t have access to it, and can’t change it.  In many ways, it’s another version of you, right or wrong.

Is Prescriptive AI Working?

So back to the question of whether it’s helping.   It’s fair to say there are cases where it adds value.  Here are some examples:

  • You decide you aren’t satisfied with your telecommunication services. You’ve made it obvious (with various signals) you’re considering other alternatives.  Your current provider prescribes an attractive bundle that satisfies your needs. You get a better bundle of services, and your provider retains you.  The bundle is custom tailored for you, using AI.
  • You have investments with a firm. You provide additional data on your financial goals, risk tolerance, and other investments, and they provide advice (prescriptions) on how to achieve your goals over time, within the parameters you set.  They provide various alternatives and education that prove useful to your financial planning.   Presumably, some of those alternatives include additional investments with them, and turn out to be good choices.
  • Your health plan suggests meaningful diet, exercise, and other tips that promote a healthy lifestyle. They are custom tailored to you, based on your family history, age, and other personal data you provide.   They reward you with lower premiums or credits.

These are just a few examples, and many more exist across industries such as travel and leisure, automotive, insurance, and retail.  And while good exchanges do exist, there are plenty of examples where the prescription doesn’t justify the information surrendered because the value exchange is unbalanced, or the prescriptions are ineffective.

Final Thoughts

In her book, “Weapons of Math Destruction[iii],” Cathy O’Neil writes:

“Many of these models, like some of the WMDs we’ve discussed, will arrive with the best intentions.  But they must also deliver transparency, disclosing the input data they’re using as well as the results of their targeting. And they must be open to audits. These are powerful engines, after all.  We must keep our eyes on them.”

She highlights important considerations we must heed.  I’m not convinced we’re spiraling toward a dystopian society regarding the use of prescriptive AI for customer engagement, but I do believe a balance is necessary between the efficacy of these systems and fairness.  As responsible marketers, we should be mindful of the ramifications of the models we use for prescriptive purposes, and as consumers, it’s our job to demand transparency, choice, and a level playing field.

[i] Anderson and Horvath, “The Rise of the Weaponized AI Propaganda Machine,” https://scout.ai/story/the-rise-of-the-weaponized-ai-propaganda-machine, January 2017.

[ii] Datacoup, https://datacoup.com/docs#faq, February 2017.

[iii] Cathy O’Neil, Weapons of Math Destruction, 1st ed. (New York: Crown, 2016).