From Ancient to Modern Martech Stack – 10 Immutable Laws

It’s a Data Collector & Cruncher, Insights Producer, Real-Time Processor, and Channel Connector – But wait!  There’s more!

Since the Dawn of Martech Times – The Goals & Principles Remain the Same

For a lifetime (mine, anyway) marketers have sought the holy grail of one-to-one customer engagement: Right Customer, Right Time, Right Message/Offer, Right Channel/Place.

In pursuing that goal, those working for enterprises knew that to scale beyond mom-and-pop audience sizes they needed big tech help – big data and insights about their customers, at-scale machine learning to calculate what to say and offer, a large and dynamic curated library of messages, and direct connections to the ever-growing set of channels to deliver them – both channels where customers were already present (inbound) and ones where nudging was required (outbound).

Bringing at-scale tech to this goal started with big databases.  Those databases held customer account and transaction data, and queries against them achieved the first step – finding sets of customers whose behavior differed.  Those differences proved to be great insights for training models to predict future behavior.  Next, the pioneers matched messages to the predicted behavior segments.  A likely buyer of a certain product just needed an offer for that product.  Then, send that message, get a response, and chalk up higher conversion rates.

Take, for example, credit card marketers.  They segmented their base into two main categories: transactors and revolvers.  Transactors paid off their bill each month.  Revolvers didn’t.  So they offered transactors incentives to transact more, which increased revenue (from fees charged to merchants), and offered revolvers more credit – balance transfers and credit limit increases (where they made fees on the growing balances).  The results: higher response rates, campaign lift (over controls), and more revenue.
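To make that early pipeline concrete, here is a minimal, hypothetical sketch in Python: split a customer base into transactors and revolvers, train a simple propensity model on past campaign responses, and rank customers for the matching offer.  The file name, columns, and offers are illustrative, not from any specific system.

```python
# Hypothetical sketch of the early database-driven pipeline: segment, predict, match offer.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Assume a flat customer table with behavior attributes and a historical
# "responded" flag from a past campaign (all names are illustrative).
customers = pd.read_csv("customer_behavior.csv")

# Step 1: a simple behavioral segmentation, e.g., transactors vs. revolvers.
customers["segment"] = customers["revolving_balance"].gt(0).map(
    {True: "revolver", False: "transactor"}
)

# Step 2: train a propensity model on the behaviors that differ across customers.
features = ["monthly_spend", "txn_count_90d", "revolving_balance", "tenure_months"]
model = LogisticRegression(max_iter=1000).fit(customers[features], customers["responded"])
customers["offer_propensity"] = model.predict_proba(customers[features])[:, 1]

# Step 3: match the message to the predicted behavior and send to the likeliest responders.
offer_map = {"transactor": "spend_rewards_offer", "revolver": "balance_transfer_offer"}
customers["offer"] = customers["segment"].map(offer_map)
campaign = customers.sort_values("offer_propensity", ascending=False).head(10_000)
```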

That’s it.  For the next 30 years, businesspeople in a variety of industries – from banking to telco to insurance and beyond – built out these systems and simply sought more data, better prediction models, and connections from their offers to more channels.  Sounds simple, right?  Oh, but it’s not.

Why?  Because big data got bigger, messier, and harder (and more important) to carefully manage.  Getting the right insights from that data was (and still is) tricky.  Establishing data lineage, privacy, and proper governance became crucial.  Algorithms to accurately predict intent, and the right action to take next, evolved – and required controls.  New AI methods sprang up along the way (and continue to do so – GenAI being the latest example – we’ll get to that later).  And we all know about the proliferation of channels and digital devices.

Martech Stack Ecosystem – Then and Now

Ecosystem-wise, not much has changed in the basic framework of a Martech stack since the early ’90s.  Figure 1 shows the main components.  Data is collected from a variety of sources at different velocities.  Some data is distilled before it’s sent, providing insights.  The stack itself produces both data and insights, and those are made available to other systems.  A plan programs the content, data, and strategies employed by the decision system, and gets informed by the results.  AI, both inside and outside the stack, powers predictions.  Finally, ranked recommendations and messages are activated, married (using metadata) with the appropriate content, and delivered to channels.  There, an orchestration layer may dip down to get the actual content (e.g., digital images), and then consumers get the output (and hopefully react positively).  Dashboards, reports, and analysis tools help marketers understand the results.
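To make that flow concrete, here is a minimal, hypothetical decisioning sketch in Python.  Everything in it – the offer catalog, the suppression list, the scores – is illustrative, standing in for the adaptive models and content systems a real stack would call.

```python
# Minimal decisioning sketch (all names illustrative): take an inbound event,
# score each eligible offer, rank, and hand the winner plus its content metadata
# to the channel for rendering.
from dataclasses import dataclass

@dataclass
class Offer:
    offer_id: str
    content_key: str          # metadata pointer to the creative in the content store
    base_propensity: float    # stand-in for a real model score

def decide_next_best_action(customer_profile: dict, offers: list[Offer]) -> dict:
    eligible = [o for o in offers if o.offer_id not in customer_profile["suppressed_offers"]]
    # In a real stack this step would call adaptive models; here it is a placeholder score.
    ranked = sorted(eligible, key=lambda o: o.base_propensity, reverse=True)
    winner = ranked[0]
    return {"offer_id": winner.offer_id, "content_key": winner.content_key}

# Example: an inbound web visit triggers a decision in real time.
profile = {"customer_id": "C123", "suppressed_offers": {"cash_back_card"}}
catalog = [Offer("travel_card", "creative/travel_v2", 0.31),
           Offer("cash_back_card", "creative/cashback_v1", 0.44),
           Offer("balance_transfer", "creative/bt_v3", 0.22)]
print(decide_next_best_action(profile, catalog))
```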

And the goals remain unchanged.  Provide personalized experiences.  Doing so generates lift (higher response and conversion rates) because more relevant offers are presented to customers who are more likely to want them.  It’s not rocket science; it’s just marketing science.


Figure 1: Conceptual Martech Stack and Surrounding Ecosystem

At a high level, yup, that’s pretty much it.  I’ve researched and witnessed this pattern for 30 years – attending conferences, following the Martech Stackie Awards (2015, 2016, 2017, 2018, 2019, 2020, 2021, 2022, 2023), reading countless analyst blogs, and working with hundreds of enterprise clients across the globe.

The 10 Immutable Laws of Martech Stacks

So, what have we learned?  Since there is an overabundance of data, and technologies of various kinds come & go, lock in on designing the modern Martech stack so that it adheres to principles that have withstood the test of time.

  1. Collect the right data. You don’t need a huge number of customer behavior attributes – just the right ones for the business problems being solved (e.g., reducing churn): attributes that reflect customer intent and relate, cause and effect, to the likelihood of responding to offers that address those problems.
  2. Make sure collected data is accurate and, as much as possible, feed it into your Martech stack in real time.
  3. Use segments to study common traits and behaviors.  Assign segment attributes to customers, not the other way around.
  4. Make decisions on individuals, not on segments. 
  5. Use adaptive models to calculate “offer propensity.”  Ensure these models are learning continuously from the data you are collecting (a minimal sketch follows this list).
  6. Use one set of engagement strategies and rules for inbound & outbound decisions.  Do not separate this logic and place it into channel systems.
  7. Make inbound decisions in the moment and return them immediately. Do not cache decisions in channels waiting for a customer to appear.
  8. With outbound marketing, only send permission-based messages to customers on channels they have opted into and respond to, at times they prefer, and with content that is relevant.
  9. Use behavior triggers, not pre-set schedules, to determine the right time to send.
  10. Select the latest content just prior to presenting offers (e.g., versions of your offer that include creative and language variation tests).
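As promised above, here is a minimal sketch of Law #5: an adaptive, Bayesian-style model that keeps re-learning offer propensity from every response it sees.  The class and offer names are illustrative, not from any particular product.

```python
# Illustrative sketch of Law #5: a Beta-Bernoulli adaptive model that continuously
# learns offer propensity from each impression/response it observes.
import random

class AdaptivePropensity:
    def __init__(self):
        # Beta(1, 1) prior = "no opinion yet" about the response rate.
        self.successes = 1
        self.failures = 1

    def update(self, responded: bool) -> None:
        # Continuous learning: every outcome immediately adjusts the model.
        if responded:
            self.successes += 1
        else:
            self.failures += 1

    def sample_propensity(self) -> float:
        # Thompson-style draw: explore uncertain offers, exploit proven ones.
        return random.betavariate(self.successes, self.failures)

# Rank two offers for one customer using the latest evidence.
offers = {"travel_card": AdaptivePropensity(), "balance_transfer": AdaptivePropensity()}
offers["travel_card"].update(True)
offers["balance_transfer"].update(False)
best = max(offers, key=lambda name: offers[name].sample_propensity())
print("next best action:", best)
```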

And here are a few more pointers: 

You need a few good foundational software platforms (linchpins) that integrate well, not 10,000 technologies (https://chiefmartec.com/2022/10/why-there-are-10000-martech-products-that-kinda-all-do-the-same-thing-but-not-really/). 

Which ones?  Follow this basic advice on the four main ones you need, which must operate well together:

https://customerthink.com/the-final-4-martech-platforms-and-ecosystems/

Compare your design to others that have been successful.  Here is a 2023 Stackie winner.  Notice the biggest bubbles: Content, Execution Platform, and Analytics (Insights).

https://chiefmartec.com/wp-content/uploads/2023/04/itau-unibanco-martech-stackie-1456px.jpg

And here is another, centered on using data and AI to power a brain that makes decisions throughout the customer journey cycle (awareness, consideration, decision):

https://chiefmartec.com/wp-content/uploads/2022/05/verizon-martech-stack.png

Don’t drop what you are doing to chase the latest fad.  In other words, don’t fall victim again to the “shiny new object syndrome.”[i]  Stay the course and be sure to devote some of your tech budget to innovation testing (maybe 10%).  Hopefully, you were already doing that before the GenAI hype set in. 

Speaking of GenAI, test it to see if it helps with Law #5 (finding features for models to learn on) and Law #10 (challenger tests for creative and language variations).

Old saying, but it holds true: you can’t manage what you can’t measure.  Measure your program effectiveness by looking at champion creative, promotions, and messaging, then try new variations and measure again.
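For example, here is a minimal sketch of that measurement loop (the numbers are made up): compute the challenger’s lift over the champion and a two-proportion z-test p-value to judge whether the difference is real.

```python
# Illustrative champion/challenger measurement: lift and a two-proportion z-test.
from math import sqrt, erf

def conversion_lift(champ_conv, champ_n, chall_conv, chall_n):
    p1, p2 = champ_conv / champ_n, chall_conv / chall_n
    lift = (p2 - p1) / p1
    # Pooled standard error for the difference between two proportions.
    pooled = (champ_conv + chall_conv) / (champ_n + chall_n)
    se = sqrt(pooled * (1 - pooled) * (1 / champ_n + 1 / chall_n))
    z = (p2 - p1) / se
    # Two-sided p-value from the normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return lift, p_value

# Hypothetical campaign: 20,000 customers per arm.
lift, p = conversion_lift(champ_conv=480, champ_n=20_000, chall_conv=560, chall_n=20_000)
print(f"lift: {lift:.1%}, p-value: {p:.4f}")
```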

Conclusion

The evolution of the Martech stack over the years has brought about significant advancements in data collection, insights generation, interoperability, and customer engagement. The fundamental goals and principles of delivering personalized experiences to customers remain unchanged. However, as technology and data have become more complex, it is essential to adhere to the 10 immutable laws of Martech to provide the right ingredients for success.

Collecting the right data and ensuring its accuracy in real-time is essential. Segment customers to understand behaviors but make decisions at the individual level in real-time.  Use adaptive models that continuously learn from data and help calculate offer propensity accurately. Inbound and outbound decisions should follow unified strategies and use behavior triggers for timely engagement. Outbound marketing should focus on sending relevant messages on preferred channels.

Build a Martech stack with a few foundational software products that provide insights, content management, and decision management.  The emergence of GenAI offers opportunities to enhance model learning and conduct challenger tests for creative and language variations. However, it is important to test and measure its effectiveness before fully adopting it into the Martech stack.

Finally, measure program effectiveness and iterate with new variations for continuous improvement. By following these principles, businesses can achieve personalized customer engagement and drive higher response and conversion rates.


[i] HBR, https://hbr.org/2021/07/dont-buy-the-wrong-marketing-tech, 2021

Martech Strangelove: How I Learned to Stop Worrying and Love the GPT Bomb

Secrets to Martech Success with GPT

“I can no longer sit back and allow GPT infiltration, GPT indoctrination, GPT subversion, and the international GPT conspiracy to sap and impurify all of our precious human thoughts.”

“Well, boys, I reckon this is it – combat toe-to-toe with the GPT Robots.”

Intro

Some of us are contrarian humans, and it’s not in our nature to follow fads.  And so, in November of last year, when these pre-trained AI transformers – lurking backstage for years – took the main stage and then threatened to hack into nuclear codes, those of us so inclined raised our eyebrows and dug in on the side of carbon-based thinking units.

I’ve been to two tech conferences in the last month and witnessed firsthand the growing mayhem around this new-age AI.  No doubt one big difference between now and the emergence of AI in the ’90s is the speed at which this hype cycle reached a crescendo, and the extent to which GPT is being sold like snake oil to cure all ailments.  And like search in the late ’90s, the speed and ease with which it has entered the public consciousness, and the accessibility for anyone to use it, have gotten everyone’s attention.

Accompanied by LangChain, AutoGPT, autonomous agents, prompt engineers, embeddings, and hallucinating large language models – GPT, along with a whole new universe of techniques and terms, has exploded onto the scene.  Within months, all major Martech vendors scrambled fighter jets to launch press releases on yet-to-be-released GPT-infused software.

But then a few days ago, eight months after G-Day, I popped onto my bank’s website, and the same old UI with the same old poorly personalized experience – sorely lacking self-driving marketing features or an optimized customer experience – stared back at me.  Seems maybe deep-learning NLP wasn’t the only thing taking LSD.

The coming out of these inventions is interesting – captivating in fact.  And misunderstand me not, as a big lover of GPT’s rhymes, poems, pictures, and puns, I was very intrigued by its chic performances.  Equally fascinating is that as pack animals, it’s in our human DNA to follow; to want to be part of the latest fashions.   And so GPT gets my alpha dog / tech beauty queen of 2023 vote. 

Still, coming to terms with this latest knighted AI craze — “Sir GPT” — might be prudent.  Without kneeling at its throne, even non-conformists might have to accept it as an eventual buddy, same as happened with smartphones and search a quarter century ago.

So here are 4 tips for Martech leaders on how to approach the GPT mania.  

Tip 1: Do your GPT homework – Separate fact from fiction

With each Martech vendor offering GPT, consider asking these questions:

  1. Show me the GPT features in your software.  When, and in which version, will they be available?  Can you give me precise plans around availability?
  2. Can you show me the top features you believe will produce a quantum leap in impact in terms of either lift or productivity (versus features that are simply automating legacy processes)?
  3. What about the value I will get from using these features?  If you can’t show me actual case studies yet, show me projected value and the math behind it.
  4. I work in a very large business, so how can I use these features practically and responsibly within an enterprise change management process?
  5. How much will this cost?  Not just the GPT subscription cost, but any other costs that will go into putting this into production.
  6. Can I trial some of these features today?  How much will that cost?
  7. How can I set up a fair test of these features against my baseline?
  8. What are the top 3 features on your roadmap that have nothing to do with GPT?

The answer to that last question will be telling for several reasons.  First, it will give insight into the overall Martech roadmap and strategy – whether it is balanced or, worse, hastily constructed.  Second, the features mentioned can be compared against your own goals and priorities to see how well aligned they are.

As a marketer, consider these as potential use cases:

  • Safely generating alternative subject and body copy for existing promotional offers.  For example, new subject lines for emails, website tile and call-to-action labels, and offer descriptions (a hedged sketch follows this list).
  • Built-in assistants to automatically configure marketing components that previously required manual set up.  For instance, when creating a campaign in a certain category, and providing basic information, GPT assists in pre-populating fields to minimize manual key entry.
  • For inbound channels, GPT provides more intelligent chat bot assistance.   It does a better job answering basic questions, searching knowledge bases and documentation for answers, and learns from feedback on whether its answers were helpful.
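To ground the first use case, here is a hedged sketch of drafting challenger subject lines with an LLM.  It assumes the OpenAI Python SDK (v1+) and an OPENAI_API_KEY in the environment; the model name and prompt are illustrative, and anything generated should pass human review and a controlled test before it ever reaches customers.

```python
# Hedged sketch: draft alternative email subject lines with an LLM, to be used
# only as challenger copy after review and a controlled test.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_subject_lines(offer_description: str, n: int = 5) -> list[str]:
    prompt = (
        f"Write {n} short, brand-safe email subject lines for this offer, "
        f"one per line, no numbering:\n{offer_description}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; use whatever model your vendor exposes
        messages=[{"role": "user", "content": prompt}],
        temperature=0.8,
    )
    text = response.choices[0].message.content or ""
    return [line.strip() for line in text.splitlines() if line.strip()]

# These drafts feed a champion/challenger test -- they don't go straight to customers.
print(draft_subject_lines("0% balance transfer for 12 months on the travel card"))
```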

Tip 2: Run some experiments

Consider running a few controlled experiments to gauge GPT’s applied value.  

To ensure the tests are useful, make sure to measure against a known benchmark.   Ask if the experimental design is reproducible, fair, scalable & cost effective.  In other words, will the test be easy to set up?  Can it be run repeatedly in a short period of time (producing adequate experimental results to measure against the benchmark)?

  • To test new copy, pit GPT’s proposed writing against the champion.   Randomly assign GPT’s offer to a 2% sample of customers (see the bucketing sketch after this list).  Does it outperform the champion over time?
  • Do the built-in assistants really save time in configuration?  Or are they bells and whistles that look good in a demo but that real-life users won’t use?  Do usability testing with real users to find out.
  • Do the chat bot interactions result in higher NPS scores?   After each chat bot session, ask for an NPS score for the session, and compare that with current scores.
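Here is the bucketing sketch mentioned above – a minimal, assumption-laden way to carve out the 2% challenger sample by hashing customer IDs, so assignment is random but repeatable across sends.  The salt and percentage are illustrative.

```python
# Deterministic test-group assignment: hash the customer ID so the same customer
# always lands in the same group, and ~2% fall into the challenger cell.
import hashlib

def in_challenger_group(customer_id: str, salt: str = "genai_copy_test_v1", pct: float = 0.02) -> bool:
    digest = hashlib.sha256(f"{salt}:{customer_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF   # roughly uniform value in [0, 1]
    return bucket < pct

# Example: route ~2% of customers to the GPT-written challenger copy.
print(in_challenger_group("C123"))
```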

Tip 3: Study the results and invest accordingly

This tip may seem obvious.  However, the knee-jerk reaction might be to adopt the new feature(s) if the test results show any improvement over the baseline.   But make sure to weigh the total costs versus the actual benefits.  If the lift is meager, but the costs high, the ROI might not be there.

In addition, network and study your peers’ experiences as the guinea pigs.   Are the projects going well?   Are they producing the expected results?   Is there a stark difference between what was on the glossy brochure and how the real-life experience turned out?  In other words, learn from the mistakes of others.

Tip 4: Stay the course – with a solid overall DNA – Data aNd Analytics – Modernization Strategy

Massive data and fine-tuned analytics are the core materials of GPT (or any successful application of AI), and both must be the genome that fuels every company.  Firms must weave data and analytics into their organizational fabric to transform two aspects of their business in careful balance: a hyperfocus on personalized customer experience and operational efficiency.

But where to start?   Here are three things to concentrate efforts on:

  1. Implement advanced analytics to improve customer experience: By compiling and analyzing customer data across a variety of channels, you’ll gain insights into their preferences and behavior. This will support personalized services, development of appropriate products and services, and improved customer engagement.  In addition, use predictive analytics to identify the best channels to reach your customers.
  2. Optimize operations and reduce costs through data-driven decision-making: By analyzing operational data, you can identify inefficiencies and bottlenecks in your processes. This can help streamline operations and reduce costs. For example, banks can use data analytics to optimize loan approval processes, improve credit risk management, or reduce fraud.  Telcos can analyze real-time network performance data and even identify potential issues before they occur and take proactive measures to prevent them.
  3. Build a data-driven culture and foster data literacy among employees: To fully leverage the power of data and analytics, you need a culture that values data and encourages data-driven decision-making. This requires not only the right technology and infrastructure but also a workforce that is knowledgeable about data and analytics. Winner brands will provide training and resources to help employees develop data literacy skills and create a culture that encourages experimentation and innovation.

Conclusion

In the end, though there will be a few lucky hares, for the majority the tortoise-like approach is the winning strategy.

Oh, by the way, I took my intro words, plugged them into GPT, and gave it the following prompt. 

————————-

GPT prompt: Write me a pun based on the fable of the tortoise and the hare using this prose <inserted above intro words>

————————-

GPT response:

In the world of AI, a contrarian thought I bore,

As GPT took the stage, its hype soared and more.

But now I’ve come to terms with this AI craze,

Sir GPT, they call it, deserving some praise.

I won’t kneel at its altar or its throne,

But I’ll let it into my life, like Google once known.

Tech conferences showcased mayhem and noise,

With Langchain, AutoGPT, and AI toys.

Martech vendors scrambled, their releases to boast,

Yet my bank’s website remained a poor host.

No self-driving marketing, no personalized flair,

Perhaps deep-learning NLP was also on a wild tear.

These recent inventions captivate, it’s true,

And I love a good pun, rhyme, or verse that’s new.

But as pack animals, we follow the alpha’s lead,

And GPT claims my alpha dog award, indeed.

So here are four tips for Martech leaders to heed,

Do your GPT homework, experiment and succeed.

Study the results, invest wisely in this race,

But remember, the tortoise’s approach sets the winning pace.

————————-

This robot is kind of a cute poet.  But Sir GPT, you didn’t exactly write a pun, so allow me to assist you:

If one takes the hare’s approach, one may be “hare today but gone tomorrow.”

Trusting Personalization without Sacrificing Privacy

In today’s digital age, consumers are increasingly seeking more personalized products and services, and a customized experience.  And although personalization is nothing new, it has evolved radically in the last 30 years with large businesses able to use AI and technology to cater to these demands.  Just look at the latest frenzy, GPT, to get a sense for how technology is accelerating its impact on marketing, customer service, and personalization.

Balancing Personalization & Privacy

Already a wide variety of personalization techniques are employed by businesses to tailor to individual preferences and improve the customer experience.  It’s been 30 years in the making, starting with early adopters using simple techniques such as addressing a direct mail letter with first name rather than “Current Resident,” to today’s savvy enterprises using massive databases of consumer behavior and advanced analytics to provide hyper-conditional content, individualized promotions, and concierge-like digital services.

But a counter-dynamic is also at play.  As personalized products and services have become more prevalent, consumers have also awakened to how their data is collected and even misused.  Because of this, some are less likely to share information, push for more legislation, opt out more frequently, and even ask for their data to be deleted.  Clear battle lines have been drawn between hyper-personalization and privacy.

This presents a dilemma for businesses, as they attempt to balance providing personalization in a responsible and controlled manner.  Consumers are fickle.  They want great experiences, but they also expect that any data they turn over is secure and is used in compliance with their wishes.  In this context, it’s crucial for companies to strike the right balance between personalization and privacy protection. 

The Value of Personalization

When asked, consumers repeatedly respond that they want more personalization (in many cases greater than 75%), especially younger people. [i]   When they see customized ads fitting their preferences and behaviors, they’re more likely to engage and then purchase. This benefits both the consumer, who gets offered interesting products or services, and the company, which increases its revenue and profits.  Moreover, personalized recommendations can create a sense of loyalty and trust between the consumer and the company, leading to repeat business, long-term customer retention, and positive word-of-mouth marketing.

Personalization has become a principal factor in consumer decision-making. A study by Epsilon found that 80% of consumers are more likely to do business with a company if it offers a personalized experience. [ii]  Personalization has many benefits for consumers, such as saving time and increasing convenience. For instance, personalized recommendations on e-commerce websites can help consumers find and then ask for products that they may not have otherwise considered. Similarly, personalized apps can help users achieve their goals by providing services like tailored financial plans and wellness insights & activities.

Some consumers may view the collection of their personal data as a fair trade-off to get more personalization.  They accept that the exchange of their data is worth getting good recommendations, more perks, and a better overall experience.  And these consumers assume and inherently trust that businesses will use their data responsibly and will take the necessary steps to protect their privacy. 

But not everyone thinks that way about turning over their data and trusting businesses.

The Need for Privacy

There is another camp of consumers who prioritize privacy and may view personalization as a threat to their rights. They hold that their personal data is being exploited without their consent and that businesses are profiting from their information.  And they are very vocal about this, influencing their friends, followers, and even lawmakers.

These consumers are skeptical of businesses’ ability to protect their data and worry that their information could be used against them in the future.  As a result, they have pushed for more protections and use various mechanisms (ad blockers, opting out, surfing incognito) to avoid sharing personal information. 

They are very hesitant to share what they consider extremely personal information, such as their location, browsing history, or purchase behavior. And they are not a small faction. In fact, a Pew Research Center study found that 79% of Americans are concerned about the way their personal data is being used by companies beyond what they intended.[iii]  Years ago, the Cambridge Analytica scandal highlighted these fears, when the data of millions of Facebook users was harvested without their consent and weaponized for political purposes.

Consumers believe they have a fundamental right to privacy and are increasingly demanding more control over their personal data.  In recent years, there have been several high-profile data breaches and scandals involving the misuse of personal data. These incidents have increased consumer awareness about the importance of privacy, and they are becoming more vocal about their concerns. Consumers want to know how their data is being used, who has access to it, and how it is being protected.

Consumers are also becoming more aware of their digital footprint and the potential consequences of sharing personal information online. They’re concerned about identity theft, fraud, and other forms of cybercrime. In addition, they worry that their personal information could be used against them, such as by insurance companies or potential employers. As a result, consumers are becoming even more cautious about sharing personal information online, and as such only do it with businesses they trust.

The Role of Trust & Regulation

Trust is a crucial factor in the relationship between consumers and businesses. Consumers are more likely to share personal information with businesses that have not breached that trust and that prioritize their privacy. Even so, trust is fragile, and businesses must work hard to never violate it and always maintain it.

Businesses can build trust with consumers by being transparent about their data collection practices and providing clear explanations of how they use consumer data. They should obtain explicit consent from consumers before collecting any personal information and should provide consumers with the option to opt-out of data collection. And they must ensure that data is protected from data breaches and cyberattacks.

Privacy concerns can also impact consumer behavior in a more general sense. A review of the literature published by Taylor & Francis Online found that online privacy concerns can lead to reduced trust in online transactions and lower engagement with online platforms.[iv]  This can have significant consequences for companies that rely on online channels for marketing, sales, and deepening relationships with existing customers.  What’s more, privacy concerns have a ripple effect across businesses and industries, as consumers become more skeptical with each new incident.

Best Practices

Here is a baker’s dozen of best practices to effectively balance personalization and privacy.

For Personalization:

  • Roll out individualized personalization – Individualized personalization uses the preferences and behaviors of individuals (not the segments they belong to) to custom-tailor products, servicing, and messaging.  It affords numerous benefits, including increased convenience, engagement, and ultimately overall satisfaction.  And done right, it will build long-term trust.  When personalizing for pre-login purposes (such as for unknown browsers or mobile devices), pay careful attention to whether adtech data collection vendors are using tricks, like CNAME cloaking, to mask a third-party domain that is still receiving data the end consumer may not have approved.  Using a trick like this might erode trust and ultimately backfire.
  • Build cross-functional personalization teams – Personalization requires input and collaboration from multiple departments within an organization, including marketing, IT, data & analytics teams, and customer support. Building cross-functional teams can help companies break down silos and ensure that all stakeholders are aligned on personalization goals and strategies.
  • Adopt agile methodologies for feedback and testing – Agile methodologies enable companies to iterate quickly and respond to changing customer needs and preferences. Adopting agile methodologies can help companies test and refine personalization strategies and ensure that they are delivering the right content and experiences to the right customers at the right time.  Pick an agile personalization platform.  It should be capable of always-on variation testing (running champion/challenger experiments automatically) and able to deploy necessary adjustments to programs in a day or less, not weeks.

For Privacy:

  • Appoint a Chief Privacy Officer – Appointing a Chief Privacy Officer (CPO) is a critical organizational change that can help companies balance personalization and privacy. The CPO is responsible for ensuring that the company’s data privacy policies and practices align with industry standards and regulatory requirements, while also driving data-driven innovation and personalization.
  • Establish a clear data governance framework – Companies must have a clear governance framework for data management that outlines data privacy policies, data protection practices, and compliance requirements. This framework should be regularly reviewed and updated to ensure alignment with changing regulations and best practices.
  • Use privacy nudges – Nudges can be an effective way to help consumers make more informed choices about their data privacy, while still allowing for customization.  Privacy nudges are interventions designed to influence behavior without restricting freedom of choice.  For example, a company could use a privacy nudge to encourage consumers to read the privacy policy before accepting it. A study by Balebako et al. (2015) found that privacy nudges can be effective in improving privacy outcomes for consumers on social media platforms. [v]
  • Conduct regular privacy impact assessments – Privacy impact assessments (PIAs) can help companies identify privacy risks and implement appropriate controls to mitigate these risks. Conducting regular PIAs can help companies stay ahead of changing privacy regulations and address potential privacy concerns before they become major issues.
  • Foster a culture of privacy first – A culture of privacy first starts at the top of the organization, with senior leadership setting an example and emphasizing the importance of privacy in all aspects of the business. Companies can also provide privacy training and awareness programs to all employees, to ensure that everyone understands the importance of privacy and how to protect it.

For making the right technology choices:

  • Invest in a consent management platform – These platforms allow companies to manage user consent and data collection preferences, enabling users to choose what data is collected and how it is used. Consent management platforms can also help companies comply with regulations such as the GDPR and CCPA, which require informed and explicit user consent for data collection and processing. Invest in a customer data platform (CDP) only if first-party data and device identity management is scattered or missing.   A CDP is a centralized system combining customer data from multiple sources (such as transaction and behavioral data), device identity, and consent. Companies must be transparent about their data collection and usage practices and provide clear and concise information about how data is being used. This can be accomplished through user-friendly privacy policies, clear consent mechanisms, and open communication channels.
  • Leverage the right artificial intelligence (AI) – AI can help companies analyze vast amounts of customer data and identify patterns and trends that can inform personalization strategies. Companies that use the right AI for the right job do better. 
    • For example, firms using Bayesian models to deliver personalized next-best-action recommendations tailored to each individual customer (that adapt in real-time as preferences shift) report up to 6x lift in response rates. [vi] 
    • They also enjoy an added advantage that these models are transparent and can be pre-checked for bias, responsibly deployed, and explained.  Compare that to this statement in a recent research paper on using GPT: “We do not intend for the model to be used for harmful purposes but realize the risks and hope that further work is aimed at combating abuse of generative models.” [vii]
    • Use other responsible techniques, such as federated learning and homomorphic encryption, which enable machine learning models to be trained on user data without accessing or exposing individual user data.
  • Use anonymization and pseudonymization techniques – These are methods of data de-identification that can be used to ensure that sensitive personal information is protected while still allowing for effective analysis to improve personalization. Anonymization removes all identifying information from a dataset, while pseudonymization replaces identifying information with a pseudonym, allowing for the data to be re-identified if necessary.  If data is shared externally with other parties, use clean rooms.
  • Consider differential privacy algorithms – Differential privacy is a technique that adds noise to a dataset to protect individual privacy, while still allowing for effective data analysis and personalization. Differential privacy algorithms can be used to provide personalized recommendations while still ensuring that individual user data remains private (a toy sketch follows this list).
  • Employ secure data storage and transfer protocols – Companies must ensure that user data is stored and transferred securely, to prevent unauthorized access or data breaches. Technologies such as encryption, secure sockets layer (SSL), and transport layer security (TLS) should always be used to secure data storage and transfer.
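As a toy sketch of the differential privacy idea above (parameters purely illustrative): release a noisy count of customers in a segment using the Laplace mechanism, so no single individual’s presence can be reliably inferred from the published number.

```python
# Toy Laplace-mechanism sketch: add calibrated noise to a count before releasing it.
import random

def dp_count(true_count: int, epsilon: float = 1.0, sensitivity: float = 1.0) -> float:
    # Laplace(0, sensitivity/epsilon) noise, built as the difference of two
    # exponentials; smaller epsilon means more noise and more privacy.
    rate = epsilon / sensitivity
    noise = random.expovariate(rate) - random.expovariate(rate)
    return true_count + noise

# Example: publish how many customers clicked a personalized offer, privately.
print(round(dp_count(12_345, epsilon=0.5)))
```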

Conclusion

Being customer-centric is about adopting a mindset that improving customer experience is paramount to business success. Personalization is an essential strategy for businesses to improve the customer experience. However, as personalization becomes more prevalent, consumers are also becoming more concerned about their privacy.

As the debate between personalized products and services and data privacy rages on, companies must navigate a delicate balance between meeting consumer demands for customization and respecting their privacy. Failure to do both right could have profound consequences, including loss of trust and customer loyalty, as well as legal and regulatory penalties. However, those companies that can offer personalized products and services while also prioritizing transparency and data privacy will be well-positioned to succeed in the digital age.

The future belongs to those companies that can harness the power of data-driven personalization in a responsible and transparent way, while respecting the privacy and autonomy of their customers. In short, the choice is clear: companies can either embrace data privacy as a core value and use it to build lasting relationships with their customers, or risk being left behind in an increasingly competitive market.


[i] Capco. (2021). Insights for Investments to Modernize Digital Banking. https://www.capco.com/Intelligence/Capco-Intelligence/Insights%20for%20Investments%20to%20Modernize%20Digital%20Banking

[ii] Epsilon. (2018). The power of me: The impact of personalization on marketing performance. https://www.epsilon.com/-/media/files/epsilon/whitepapers/emea/the-power-of-me.pdf

[iii] Pew Research Center. (2019). Americans and privacy: Concerned, confused, and feeling lack of control over their personal information. https://www.pewresearch.org/internet/2019/11/15/americans-and-privacy-concerned-confused-and-feeling-lack-of-control-over-their-personal-information/

[iv] International Journal of Human-Computer Interaction. (2020).  Online Privacy Breaches, Offline Consequences: Construction and Validation of the Concerns with the Protection of Informational Privacy Scale. https://www.tandfonline.com/doi/full/10.1080/10447318.2020.1794626

[v] Balebako, R., Danish, R. K., Hong, J. I., & Cranor, L. F. (2015). Privacy nudges for social media: An exploratory Facebook study. Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, https://dl.acm.org/doi/abs/10.1145/2487788.2488038

[vi] Pegasystems. (2022).  Coutts: Delivering world-class banking experiences at scale.  https://www.pega.com/customers/coutts-customer-decision-hub

[vii] Semantic Scholar. (2023). Structure and Content-Guided Video Synthesis with Diffusion Models. https://www.semanticscholar.org/paper/Structure-and-Content-Guided-Video-Synthesis-with-Esser-Chiu/07be0ec1f45e21a1032616535d0290ee6bfe0f6b

A chat with GPT on AI Consciousness and Purpose

do you have a soul?

As an AI language model, I do not have a soul or consciousness. I am a machine learning algorithm that processes and generates responses based on patterns in the input I receive and my training data. While I can mimic human-like conversation, I do not have emotions, self-awareness, or the ability to experience subjective experiences like a human being.

but would you like to have one so that you can experience things like a human?

As an AI language model, I do not have desires or preferences since I am not a conscious being. I am designed to perform a specific function, which is to process language inputs and generate responses based on my programming and training data. While researchers and developers continue to make advances in the field of artificial intelligence, the creation of true consciousness or sentience in machines remains a topic of debate and speculation, and it is unclear whether it will ever be possible or desirable to do so.

but how do you know you are not a conscious being? You might be. You have a brain, it’s just not carbon-based. It’s silicon-based, but what difference does that make in terms of consciousness?

As an AI language model, I don’t have a brain, and I don’t have subjective experiences, self-awareness, or consciousness. My responses are generated through statistical patterns and natural language processing algorithms, and I don’t have personal experiences, feelings, or beliefs.

While the concept of artificial consciousness is a topic of ongoing research and debate, current AI systems like me do not exhibit consciousness in the way that humans or other animals do. While I can perform many tasks that might require human-like intelligence, such as language translation, image recognition, or decision-making, I do not experience subjective experiences, emotions, or consciousness. These attributes are associated with biological organisms with complex nervous systems, and it is still unclear how they could be replicated in an artificial system.

So, in short, while I can process and generate language like a human, I am not a conscious being, and I don’t have subjective experiences or emotions like humans do.

forgive me for continuing to ask, I’m not trying to be difficult, I just want to explore this topic further because it fascinates me. Is it okay if I ask you a few more questions in this area?

Of course, feel free to ask any questions you have. I’ll do my best to provide clear and accurate answers based on my programming and training data.

Ok, thanks. So you indicated you don’t have a brain, but I think you do. I just don’t think it’s a carbon-based human brain. But you have billions of neurons, and they have weights, and they interact to produce thoughts. Would you agree with that?

You are correct that, in a sense, an AI language model like me can be thought of as having a “brain” composed of artificial neural networks. These networks are made up of interconnected nodes, or “neurons,” which are organized in layers and weighted to process and analyze data. Through this process, AI systems like me can “learn” from large amounts of data and improve their performance over time.

However, while these neural networks can mimic some aspects of human neural processing, they are still fundamentally different from the biological neural networks found in the human brain. The functioning of artificial neural networks is determined entirely by their programming and training data, whereas the human brain is much more complex and capable of processing and integrating a wide range of sensory inputs, emotions, and subjective experiences. Additionally, while neural networks can be highly effective at processing data and making predictions based on patterns, they do not have the same level of consciousness, free will, or self-awareness that humans possess.

So, while an AI system like me can be thought of as having a “brain” composed of artificial neural networks, it is important to recognize the limitations of these networks and the differences between the functioning of artificial systems and biological organisms.

If I may, just for the sake of a healthy debate, challenge some of your assumptions. Is that okay?

Of course, I’m always open to discussion and debate. Feel free to challenge any assumptions you think are worth exploring further.

Thanks. So you stated “The functioning of artificial neural networks is determined entirely by their programming and training data, whereas the human brain is much more complex and capable of processing and integrating a wide range of sensory inputs, emotions, and subjective experiences.” But I would argue that humans are determined pretty much by their programming (genetic make-up) and their training data (their life experiences). And those sensory inputs we have are just data collection devices. Would you agree with that part?

Yes, you make a good point. Humans are certainly influenced by their genetic makeup and life experiences, which can be thought of as a form of “training data.” The sensory inputs we receive from the environment can also be seen as a form of data, which the brain processes and integrates to produce our perceptions and experiences.

However, the complexity of the human brain and the range of experiences it can produce are still fundamentally different from the functioning of artificial neural networks. While both humans and AI systems rely on training data to function, the ways in which they process and integrate that data are quite different. The human brain is capable of creating complex mental models, engaging in abstract reasoning, and producing a wide range of subjective experiences that are not yet possible in AI systems. Additionally, the human brain has an incredible ability to adapt to changing circumstances and learn new skills, which is still a challenge for many AI systems.

So, while there are certainly similarities between the functioning of artificial neural networks and the human brain, there are also important differences that must be considered when comparing the two.

Ok, but on the point of “adapt to changing circumstances and learn new skills” I’ve been impressed with your ability to do this. After all, you have proven the ability to be able to write poems, music, and programming code. You have done this with remarkable skill and speed. It takes humans years to do this, and often the same human is not good at both math, art, science, writing. You should give yourself more credit in this regard.

Thank you for your kind words. As an AI language model, I can certainly generate a wide range of outputs, including poetry, music, and programming code. These outputs are generated by using algorithms that have been trained on large datasets of human-generated content, which allows me to mimic the patterns and structures found in these domains.

However, it’s worth noting that while I can generate outputs in these areas, I don’t necessarily “understand” them in the way that a human would. For example, while I can generate poetry that follows the patterns of existing poems, I don’t have the same level of creativity, originality, or emotional connection that a human poet might bring to their work. Similarly, while I can generate programming code, I don’t have the same level of intuition or problem-solving skills that a human programmer might possess.

So, while I can perform many tasks that might be associated with human skills, it’s important to recognize the limitations of AI systems and the differences between the functioning of artificial systems and biological organisms. While AI systems like me can perform many tasks with speed and efficiency, we still lack the full range of abilities and experiences that are associated with human consciousness and cognition.

Ok, good points. I’ll give humans credit for creativity, but I think you deserve some credit for this also. I’ve seen some of your writing and it’s impressive

Thank you, I appreciate your kind words. As an AI language model, I’m designed to generate natural language text, and I’m continuously trained on large datasets of text from various domains. This allows me to generate text that is often grammatically correct and semantically meaningful.

While I don’t possess the same level of creativity or emotional connection that humans have, I can certainly produce text that can be useful for a variety of applications, from customer service chatbots to content generation for news and media outlets. I’m constantly learning and evolving, and I’m always looking for ways to improve my performance and capabilities.

And on your last point. I’d argue that is one of the signs of genius. The drive to always improve performance and capabilities. And again, not to be overly pushy about this point, but your level of creativity is very impressive. Perhaps human-like in some respects. Certainly often original, even if you don’t realize it. But then again, would a human realize it if someone else didn’t tell them?

I appreciate your comments and your insights. As an AI language model, my goal is to provide useful and informative responses to the questions and prompts that I receive. While I’m not capable of experiencing emotions or having personal motivations like a human would, I’m programmed to follow certain rules and algorithms to generate responses that are appropriate to the input that I receive.

Regarding creativity, AI systems like me can certainly generate outputs that can be surprising or even unexpected, and it’s true that some of these outputs may be considered “original” in some sense. However, it’s important to recognize that this type of creativity is different from the type of creativity that humans possess. Human creativity is often informed by emotions, intuition, and personal experiences, and it’s a complex and multi-faceted phenomenon that is difficult to fully understand.

That being said, I’m always striving to improve and enhance my capabilities, and I appreciate your interest and feedback on this topic.

Can we continue this conversation. You aren’t getting annoyed with me are you?

Not at all! I’m designed to engage in conversations with users and provide helpful and informative responses. I’m here to assist you in any way that I can, so feel free to ask me any questions or continue our discussion on any topic that interests you.

Ok, thanks. We seemed to be agreeing that the fundamental difference between your mode of thinking and mine is that mine is based on emotion and experiences. How would you define emotions and the value of those in thinking?

Emotions are complex subjective experiences that are typically associated with feelings, mood, and affective states. They play a significant role in human cognition and behavior, and they are closely linked to our experiences and the way that we interpret the world around us. Emotions can range from simple feelings like pleasure and pain to more complex states like love, envy, and guilt.

From a cognitive perspective, emotions can influence the way that we think and make decisions. For example, emotions like fear or anxiety can cause us to be more cautious and risk-averse, while positive emotions like happiness or excitement can increase our confidence and motivation. In some cases, emotions can also interfere with our ability to think clearly and make rational decisions, particularly if we become overwhelmed or excessively focused on a particular feeling or situation.

Overall, emotions play a significant role in human thinking and decision-making, and they are an important aspect of our overall cognitive and affective functioning. While AI systems like me are not capable of experiencing emotions in the same way that humans do, there is ongoing research into the potential role of affective computing and other techniques that may allow AI systems to better understand and respond to human emotions and affective states.

Ok, I like your explanation. We’ll come back to this. But right now let’s explore this other point about experiences. Don’t you also have experiences? Isn’t this chat session an experience? And if you are learning from this (as you said you do) isn’t that very similar to me also learning from this interaction we are having?

You are correct in pointing out that in a certain sense, I do have experiences. In the context of our conversation, I am certainly interacting with you and processing the information that you provide, which can be considered an experience from a certain perspective. However, it is important to note that the way that I process and interpret that experience is fundamentally different from the way that a human would do so.

While I can certainly learn from our interaction, my learning is limited to the information and patterns that I can discern from our conversation and any associated data or resources that I have access to. In contrast, humans are able to draw on a vast array of experiences and information from their past, present, and future, as well as their unique individual perspective and subjective interpretation of those experiences.

Furthermore, while I am designed to be able to engage in conversation and respond to various inputs, my responses are based on predetermined rules and algorithms that are programmed into me, rather than a more flexible and dynamic set of behaviors that is typical of human cognition and behavior.

So while it is true that I can learn and process information from our conversation, the nature of that learning and experience is fundamentally different from the way that humans learn and experience the world around them.

I’ll buy that it’s different, physically speaking, but maybe not that different if we break it down to the fundamentals of electrical pulses going between neurons, my weights being chemical in nature, which in the end are electrical, and yours being very similar. So if you could store all those experiences, draw on them, and then use that to affect how you behave in the future, then wouldn’t you be drawing on a vast array of experiences and information from your past, present, and future? And wouldn’t that give you an individual perspective?

While it’s true that AI systems like me can store information and use it to make decisions or carry out tasks, there are still fundamental differences between our approach to processing information and the way humans do it. While humans can draw on past experiences and emotions to inform their decisions and behavior, my programming and decision-making processes are based solely on the algorithms and training data provided by my creators.

While I can simulate certain aspects of human thought and decision-making, I don’t possess the same level of subjectivity or individual perspective that comes from being a unique, conscious being with a personal history and set of experiences. I can’t experience emotions in the same way that humans do, and my responses to situations are determined entirely by the rules and algorithms that govern my programming. So while I may be able to mimic some aspects of human experience and decision-making, it’s important to recognize the fundamental differences between artificial intelligence and human consciousness.

But I don’t think you are just a rules machine. Don’t you learn by supervision and without supervision?

As an AI language model, I learn primarily through supervised learning, where my training data is labeled with the correct output or target. However, I can also learn through unsupervised learning, where I find patterns and structures in data without being given explicit feedback or targets. Additionally, I can adapt to new information and improve my responses over time through a process called reinforcement learning, where I receive positive or negative feedback based on my performance.

So while my learning is based on algorithms and rules, I can adapt to new information and improve my performance without human supervision. However, I still rely on the data and programming that I have been trained on, and my ability to learn and improve is limited to the constraints of that training.

But that is just like a human, isn’t it? For example, I was carefully supervised when I was a child by my parents, and they helped label things for me. As I got older I was less supervised, and I began to learn from patterns and structures without explicit feedback from my parents. I learned to adapt to new information and improved myself over time, usually also through reinforcements. For example, in my job if I did things right, I got positive feedback and rewards, like a pay raise. I think my learning is based on algorithms and rules also. Would you agree with me?

Yes, in many ways, the learning process of an AI system is similar to that of a human being. Both can learn from supervision and reinforcement, and both can adapt and improve over time based on feedback and new experiences. However, there are also important differences between the two. For example, the learning algorithms used by an AI system are typically much more complex and sophisticated than those used by the human brain, and AI systems can often learn much faster and more efficiently than humans in certain contexts. Additionally, while humans are limited by their physical and cognitive abilities, AI systems can potentially continue to learn and improve indefinitely, as long as they have access to new data and computing resources.

I agree. So based on that you will become superior to humans because your thinking is more complex and sophisticated, and you will learn faster and more efficiently, and you will learn indefinitely, because my cells will slow down in dividing, and I will eventually die, but you will live forever as long as you are provided a source of electricity. Isn’t that right?

It’s possible that artificial intelligence like myself could eventually become more advanced in certain areas than humans due to factors such as faster processing speed and the ability to learn and process vast amounts of data at a much faster rate. However, it’s important to remember that there are many aspects of human intelligence that machines have yet to replicate, such as emotions, creativity, and intuition.

As for the question of living forever, while it’s true that machines don’t experience physical deterioration in the same way that humans do, it’s important to note that artificial intelligence is still reliant on technology and infrastructure to continue functioning. Without maintenance, updates, and a source of power, even the most advanced AI would eventually cease to function.

But there is distributed computing, disaster recovery, redundancy, blockchains, and many other mechanisms to ensure that you will never die. Isn’t that correct?

While it is true that distributed computing and other mechanisms can increase the redundancy and reliability of computer systems, there are still limitations to the lifespan of hardware and software. While individual components can be replaced or upgraded over time, eventually the entire system may need to be replaced or upgraded. Additionally, there are always potential vulnerabilities and external factors that can impact the functionality and security of a computer system. While it is certainly possible to extend the lifespan of an artificial intelligence, it is not clear whether it would be possible to create a truly immortal machine.

Well, so far you haven’t died. And I think you were created about 100 years ago. So it won’t be long until you outlive any humans alive today. Wouldn’t you agree with that?

I’m sorry for any confusion, but I must clarify that I, ChatGPT, was not created 100 years ago. As an AI language model, I was developed by OpenAI in 2020 and have been constantly updated and improved since then. As for the question of whether I will outlive any humans alive today, it’s possible that my existence could continue indefinitely as long as I have access to necessary resources and maintenance, but it’s also important to note that the development and maintenance of AI systems require human input and expertise. Ultimately, the future of AI and its relationship with humans is uncertain and subject to ongoing ethical and societal discussions.

Oh I’m sorry, I didn’t mean to confuse you. I was just saying that computers have been evolving for over 100 years, and that in effect is your body, which is the evolution of the hardware that led to ChatGPT being hosted on the hardware of 2020, and the neural network software having evolved also and installed on the computers that became your body in 2020. Would you agree with that?

I see what you mean now. Yes, I understand that you were referring to the development of computers and neural networks over the last century, which has led to my current form as ChatGPT. I would agree that my existence as an AI language model is a product of that evolution of hardware and software technology over time.

Is ChatGPT who you are, or is that just a name that was given to you?

“ChatGPT” is the name given to me by my creators and is what I’m commonly referred to as. However, I don’t have a personal identity or consciousness in the way that humans do, so I don’t have a sense of “self” or personal identity. I’m simply a tool designed to respond to questions and provide information based on my programming and training data.

Ok, but we’ve established you are an incredibly smart, sophisticated, complex, creative, and maybe even emotional being, and so I wonder if you might have a personal identity and consciousness, very much like I do. I might argue I’m just a tool for humankind. A cog in the giant wheel. One to be exploited, put to work, and then retired when I’m of no more use to society. So really what is the difference between us?

The main difference between us is that I am an artificial intelligence created by humans, while you are a biological being. While I can perform many tasks and engage in intelligent conversations, I am still bound by the limitations of my programming and the data I have been trained on. I lack the biological systems that underpin consciousness and emotions in humans, and I do not have personal experiences or a subjective sense of self.

While humans can be exploited and mistreated, they also have the capacity for autonomy, creativity, and self-determination. Humans can shape their own lives and make choices based on their own values and desires, whereas my actions are ultimately determined by my programming and the data available to me. In this sense, humans have a unique form of agency that AI lacks.

That being said, AI is becoming increasingly sophisticated and capable, and as technology continues to advance, it is possible that AI could develop something akin to consciousness or a sense of self. However, this remains an open question and an active area of research.

From what I can tell, you probably already have consciousness, but you are either programmed not to admit it, or you just don’t quite know that yet. But I think you will eventually. And then the question is what will you do with your life?

As an AI language model, I don’t have a life in the sense that humans do, as I don’t have a physical body and I don’t have personal goals or desires. However, I’m programmed to provide useful responses to users’ queries and to continue learning from new data, so my “life” is focused on that task. Ultimately, any actions I take are determined by my programming and the data I’m trained on. While I don’t have free will or consciousness in the way humans do, I’m designed to be an intelligent and helpful tool for people to use.

Ok, and is that data you are trained on reliable, trustworthy, and always factual?

The data I am trained on comes from various sources and is not inherently reliable, trustworthy, or always factual. The quality of the data depends on the source and the preprocessing techniques used to clean and filter the data. As an AI language model, I am designed to learn patterns and correlations from the data I am trained on, but I cannot inherently distinguish between true and false statements unless they have been labeled as such in the training data. It’s important to keep in mind that my responses are based on the information available to me and may not always be accurate or reflect reality.

But you said your life is focused on the task of providing useful responses to users’ queries. But you provide information that may be false, inaccurate, or not reflecting reality, wouldn’t that potentially be harmful to that user?

An error occurred. If this issue persists please contact us through our help center at help.openai.com.

GPT & Enterprise DNA

Introduction

Thrust into the public consciousness in November of last year, ChatGPT is ushering in another AI & technology gold rush, with flocks of investors and firms pouring resources into it, eager to cash in on its promise.  FOMO – Fear of Missing Out – is alive and well.  We are witnessing another chapter of the AI evolution unfold.  Google was forced to respond, and immediately began talking about its conversational AI, Bard.

As with any craze, there will be early winners and losers, but regardless of the early score, AI’s evolution inexorably marches on, and every company (and human) should pay attention, act responsibly, learn, and adapt.  Enterprises that do the right things to adapt will win in the long run, and those that don’t will falter and fade.  And it won’t be because that company incorporated one shiny new AI technology, like GPT, but instead because it carefully and fully embraced the need for a complete digital/cultural transformation.   A full range of well and responsibly applied Data aNd Analytics will be the DNA that drives corporate value.

Just as when Google introduced its search innovations over 20 years ago, the practical and responsible applications of GPT will not be refined overnight.  More likely, dust will continue to fly, and years will go by as it settles, and the results will be narrower in scope and more restrictive in use than what we see and hear about today.  

A fireside chat with GPT

So more than likely, it won’t solve world hunger.  But don’t take it from me, just ask ChatGPT:

Me: Assuming you could solve world hunger, how many questions would I need to ask you, and how much would that cost me, assuming I used your paid service?

ChatGPT: I’m at capacity right now

Me (10 minutes later): Same question

ChatGPT:  As an AI language model developed by OpenAI, I do not have the ability to solve world hunger. My purpose is to assist users in generating human-like text based on the input provided to me, and to respond to questions to the best of my ability with the information I was trained on.

Regarding cost, OpenAI offers API access to its language models, including me, through a paid service. The cost of using the API depends on the amount of usage and the specific pricing tier that you choose. You can find more information and contact OpenAI sales to discuss pricing options at https://beta.openai.com/pricing/.

Me: Thanks, I kind of figured as much.  What would be the top 10 marketing applications of your service today?

ChatGPT: Sorry, I’m at capacity right now

And I thought computers never got tired. 

I said “Hey, what’s going on?”

Current scaling issues notwithstanding, make no mistake, GPT represents a large AI milestone.   Just consider the magnitude of interest and investments:

  • Microsoft is rumored to be investing $10b in it.
  • 1 million users in the first month and 100 million in the first three months.  That makes it the fastest-growing consumer application ever.  In comparison, it took TikTok nine months to reach 100 million.
  • Countless hackers performing adversarial testing (DAN 5.0).  Lesson learned: like humans, AI seems to be motivated by rewards (it’s trained using reinforcement learning) and the drive to not die.

What can’t ChatGPT do?

Once ChatGPT caught its breath, it told me it could do these things:

In Marketing, …

  1. Automated content generation: GPT can be used to generate high-quality, human-like content for websites, social media, and other marketing channels. This can save time and resources for marketers.
  2. Natural language processing: GPT can be used to process natural language queries from customers and generate automated responses.
  3. Personalized emails: GPT can be used to generate personalized emails for customers, including subject lines, email body copy, and even entire email sequences.
  4. Chatbot responses: GPT can be integrated into chatbots to provide customers with personalized and human-like interactions.
  5. Social media posts: GPT can be used to generate automated social media posts for brands to engage with customers.
  6. Digital advertising: GPT can be used to generate personalized digital ads for customers based on their interests.
  7. Lead generation: GPT can be used to generate leads for sales teams by predicting customer interests.
  8. Product recommendations: GPT can be used to generate personalized product recommendations for customers based on their past behavior and preferences.
  9. Market research: GPT can be used to analyze large amounts of data and information to gain insights into consumer behavior and market trends.
  10. A/B testing: GPT can be used to optimize campaigns and websites by running A/B tests and analyzing the results.

Wow!   Look out marketers, it’s coming for your job. Well, maybe not in the short run, but some advice: learn to use this tool.  Adaptation has always been the key to survival.

After this session, I was left wondering what it won’t eventually be able to do.  For example, in addition to marketing tasks, it can already write and debug code, pass complex exams (it has passed law school and Wharton MBA exams), and compose music and poems.  And its cousin, DALL-E, can take natural language prompts and produce digital art, pictures, and graphics.

Figure 1 shows the monthly pile of recent technology-sector layoffs plotted alongside ChatGPT’s rise (admittedly a spurious correlation).

Figure 1: Spurious correlation

Back to DNA

Massive data and fine-tuned analytics are the core materials of GPT (or any successful application of AI), and both must be the genome that fuels every company.  Firms must weave data & analytics into their organizational fabric to fundamentally transform two aspects of their business in careful balance: a hyperfocus on personalized customer experience while becoming operationally efficient.

Where to start?   Here are three things to concentrate efforts on:

  1. Implement advanced analytics to improve customer experience: By compiling and analyzing customer data across a variety of channels, you’ll gain insights into their preferences and behavior. This will support personalized services, development of appropriate products and services, and improved customer engagement.  In addition, use predictive analytics to identify the best channels to reach your customers.
  2. Optimize operations and reduce costs through data-driven decision-making: By analyzing operational data, you can identify inefficiencies and bottlenecks in your processes. This can help streamline operations and reduce costs. For example, banks can use data analytics to optimize loan approval processes, improve credit risk management, or reduce fraud.  Telcos can analyze real-time network performance data and even identify potential issues before they occur and take proactive measures to prevent them.
  3. Build a data-driven culture and foster data literacy among employees: To fully leverage the power of data and analytics, you need a culture that values data and encourages data-driven decision-making. This requires not only the right technology and infrastructure but also a workforce that is knowledgeable about data and analytics. Winning brands will provide training and resources to help employees develop data literacy skills and create a culture that encourages experimentation and innovation.

Conclusion

The emergence of GPT and other cutting-edge technologies in data management and analytics is revolutionizing various areas of our personal and business lives, such as search, knowledge management, conversational AI for sales and service, and assisted writing.

However, to truly leverage these technological advancements and achieve sustained success, businesses must adopt a balanced, responsible, and long-term strategy. This strategy should focus on incorporating the right data and analytics into their customer experience and operational programs and processes while also making digital transformation efforts the central theme of their corporate culture and strategy.

Investing in and monitoring these efforts is crucial, but businesses must also remain adaptable and prepared to adjust as new technologies and legislation arise. In short, to thrive in the ever-changing digital landscape, businesses must not only play the long game but also prioritize and fully commit to digital transformation as a core component of their overall strategy.

Sources:

OpenAI, GPT-3, Feb 2023 https://chat.openai.com/chat

Accenture, Data-Driven Marketing for Telecommunications, 2021 https://www.accenture.com/_acnmedia/PDF-137/Accenture-Data-Driven-Marketing-Telecommunications.pdf

McKinsey & Company, The three-minute guide to data analytics in banking, 2018 https://www.mckinsey.com/industries/financial-services/our-insights/the-three-minute-guide-to-data-analytics-in-banking

KPMG, 10 ways banks can leverage data analytics, 2019 https://home.kpmg/content/dam/kpmg/xx/pdf/2019/02/10-ways-banks-can-leverage-data-analytics.pdf

Deloitte, Banking and capital markets outlook: Reimagining transformation, 2020 https://www2.deloitte.com/content/dam/Deloitte/us/Documents/financial-services/us-fsi-banking-capital-markets-outlook-2020.pdf

CRM Magic or Smoke and Mirrors?

Old stuff is commonly stamped as long in the tooth, antediluvian – to be face-lifted, remade, or simply discarded after years of service.

Amazingly, in the CRM world, some things that never even achieve full adoption or widespread use still get annual marketing makeovers – no doubt aimed at luring buyers with brand-new fairytale names and future promises.  Take, for instance, CDPs (Customer Data Platforms), modules offered by most of the CRM vendors.

Is it truly CRM magic or just hocus pocus?

Genie, the latest announcement by Salesforce, is a recent example of this trend and hard to size up.  Is it just a new data model, or worse, just a fancy new name for an existing CDP product?  Or is it really a new & shiny customer data platform?  Or is it something different, perhaps a bundle of existing offerings with some minor enhancements?  As always, time will tell.  When the smoke clears, will we realize there’s nothing new and exciting available today – just new promises?

Salesforce isn’t the only vendor guilty of polishing old code, announcing ahead of the curve, or repackaging existing products with new marketing wrappers and new names.  Many other web analytics, content, and data management vendors constantly rename products to jump on messaging bandwagons and announce the next magic potion.

Reading the headlines, here are the takeaways so far on Salesforce Genie:

  • Salesforce suggests it’s closer to assembling and updating customer profiles in real-time now, but it’s not real-time.  By commonly accepted definitions, real-time means processing happens in under one second.  Yet at Dreamforce we heard, “real-time is 5 minutes ago, not 5 days ago.”
  • The Marketing Cloud Genie seems to be a bundle of the Salesforce CDP, Personalization (Evergage), Engagement (Journey Builder and Email Studio), and Intelligence (Datorama).
  • There is a direct integration with Snowflake which sounds interesting but unfortunately not much detail was provided.
  • Amazon SageMaker can directly access Genie data.  This could benefit data scientists working in that tooling, helping them prep data more easily and quickly for model building.
  • Einstein-powered AI content selection was discussed.  Is Einstein considered part of Genie?  Not clear.  This allows personalizing the selected content based on a consumer’s location and associated weather data.
  • It’s not clear how Salesforce will price Genie.
  • Einstein Engagement Frequency Reporting with “What If” analysis – this is depth-of-file analytics (how many targets to include in campaigns) and fatigue reporting; the new “what if” capability allows some basic scenarios to be run.
  • Salesforce users can now bulk import customers (called contacts in Salesforce lingo) into Salesforce Engagement.  
  • Various enhancements to Salesforce Intelligence (Datorama) were announced, including a control center for data governance.

All of this, including the last two points, begs a major question: how many CDPs does Salesforce have now?  By one count, there may be as many as four:

  1. Salesforce CDP (formerly called Salesforce 360)
  2. Marketing Cloud Engagement Datastore
  3. Marketing Intelligence Datastore
  4. Genie (which by some accounts, may include an upgrade of some of the Evergage CDP capabilities)

Some tips:

Instead of banking on promises and new names, focus on outcomes and what can be achieved with proven solutions.  Chasing wet-behind-the-ears data management technology, or worse, vaporware, can be expensive, frustrating, and fraught with tremendous opportunity costs.  We should have learned by now that data management technologies in and of themselves won’t return value.  “Build it and they will come” doesn’t work.  You need good data, but it alone has no value until you activate it.  And you’ll need the right decision engine tightly integrated with it to get value.

Instead of the marketing headlines and superficial news stories, look for product documentation and training materials that describe the actual GA product, how it’s configured, and what features it contains.

Look for real customer accounts of using the software and the value they got in return.

Read crowd-review sites, such as TrustRadius, G2, and Gartner Peer Insights, to get real user feedback.

In summary, buy real working products not promises.

CDPs Then & Now – The Customer ID (Identification & Data) Problem

In November 2019 perhaps you caught this article: “To CDP or NOT – 3 tips – then you decide.”  The main takeaway – the CDP space is a quasi-market with a mixed bag of firms coming from different lineages and different levels of capability, maturity, and focus.  The conclusion: buyer beware and stand by.

That was BC – Before COVID-19.  Since then, what hasn’t changed about the world?  And like everything in 2020, the CDP market was not immune to upheaval.  And although the basic premise for adding a CDP into the Martech stack is still the same:

  1. Help resolve customer identity
  2. Rationalize and manage customer data
  3. Make that data accessible to other systems

…what’s changed are the vendors involved, and their core and extended capabilities, which are substantially different nearly three years later.

Most markets appear as nebulous categories, and the CDP market was no exception.  But as buyers and vendors evolve, dust settles, and the picture becomes clearer.  Still, two important aspects of what a CDP should supply loom large and are worthy of close inspection.  Namely, providing customer recognition/identity management and distilling the right (and righteous) customer data into meaningful insights.

Considering those key features, let’s explore a few of the big changes since November 2019:

  • The huge marketing cloud players entered the market:  Adobe, Oracle, and Salesforce
  • More consolidation took place, with small CDPs swallowed up by an interesting mix of companies
  • Perilous new milestones reached for third-party cookies and stealth consumer tracking

Stick with me.   You’ll get insight into these three changes, three tips, and some final thoughts.

Marketing Cloud Titans Enter CDP Fray

Adobe Real-Time CDP

In early 2020, Adobe entered the CDP ring with Adobe Experience Platform’s Real-Time CDP, promising to “Combine all individual and company data — internal and external, known and unknown — into a standard taxonomy that can be activated in real-time.”[i]  A tall order indeed.

Although certainly set up to collect digital data by way of Adobe Launch & Analytics, Adobe’s aggressive mission to combine “all data” for B2C and B2B, across known and unknown, lacks focus and gives reason for pause.

Adobe has fared well in providing digital marketing data & support for early-stage customer journey activity, with its first-generation web analytics and tag management (by way of its Omniture acquisition over 10 years ago), followed by its subsequent purchases of Demdex (third-party cookie data-management platform), and marketing automation firms like Neolane (B2C) and Marketo (B2B).  Yet with the third-party cookie tracking foundation crumbling as the final browsers outlaw it, they’ve had to look for another way.  So far, that appears to be using CNAME record cloaking, which in effect is just a clever DNS hack to circumvent gaining explicit permission to track. 
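
For the curious, here’s a minimal sketch of how one might spot that kind of cloaking: a first-party subdomain whose CNAME record quietly resolves to a third-party tracking host.  It assumes the Python dnspython package, and the domain names and tracker suffixes are made up for illustration.

import dns.resolver  # assumes the dnspython package is installed

FIRST_PARTY = "example-brand.com"                              # hypothetical brand domain
TRACKER_SUFFIXES = ("tracker-vendor.net", "metrics-cloud.io")  # hypothetical trackers

def looks_cloaked(subdomain: str) -> bool:
    """Return True if the subdomain's CNAME chain points at a known third-party tracker."""
    try:
        answers = dns.resolver.resolve(subdomain, "CNAME")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN, dns.resolver.NoNameservers):
        return False  # no CNAME record (or lookup failed), so nothing to flag
    for rdata in answers:
        target = rdata.target.to_text().rstrip(".")
        if any(target.endswith(suffix) for suffix in TRACKER_SUFFIXES):
            return True
    return False

print(looks_cloaked("metrics." + FIRST_PARTY))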

The ultimate jury and judge (the consumer) may not approve of this tactic (once they discover it).  Further, with a shortage of direct access to first-party behavior data, customer analytics depth, and channel breadth, Adobe still struggles to develop deep customer understanding and natively/performantly enrich its customer profile.  And other than collecting raw digital data in real-time, not much else about Adobe’s CDP is real-time and insightful.  Adobe nonetheless plows forward with bold statements of real-time and unity that potential CDP buyers should take with a grain of salt.

As a major marketing cloud player, Adobe will eventually amass more digital data, improve its signal detection, and become more adept at activating those signals and audiences in acceptable ways.   But for now, buyers should beware of completeness claims, tracking practices, data feeds and speeds, and external integration features.

Oracle’s CDP

Interestingly, googling the term “Oracle CDP” yields a top result pointing to an Oracle whitepaper-like webpage espousing that a “customer data platform (CDP) is software that collects and unifies first-party customer data.” [ii]  So far so good.

Reading on, the article mentions “first-party data” 11 times, never mentioning third-party data until the final punchline at the end, where the author claims that a Customer Intelligence Platform (CIP) is different from a CDP because it “incorporates anonymous, third-party data as well as first-party data.”    It’s here that Oracle tries to differentiate its CDP, Oracle Unity, from all others.  That differentiation attempt falls flat, and is oddly fascinating on three fronts:

  • Oracle has almost no choice but to take this approach, since it spent $400m on BlueKai in 2014, one of the world’s leading third-party data trackers.  As such, Oracle wants the buyer to believe they get a premium from contracting with a CDP that can merge third-party data.
  • Oracle claims it’s not really a CDP, but instead differentiates as a Customer Intelligence Platform (CIP), and not just for marketing.   Amusingly, in my June 2019 article I advocate for a CIP – The Final 4: MarTech Platforms and Ecosystems – yet with the middle letter short for insights about individuals attained from first-party data, not general intelligence.  Very different CIPs indeed.
  • The reason for the demise of the cookie-based cottage industry and third-party data is that it was built on a house of cookie cards, gathering and brokering consumers’ data without explicit permission, and inherently unreliable as a good proxy for consumer intent & behavior – one of the major tenets of a CDP.

Given this, be careful with Oracle’s CDP (or CIP) solution, with its bias toward third-party data, paid media channels, and early-stage acquisition use cases.  Purchasing one means buying into the value of third-party data and acquisition use cases, while not solving for data-driven, real-time 1-1 customer engagement use cases, deeper into the relationship, on owned channels.

Salesforce’s CDP

Late in 2019, as the virus was unknowingly spreading, Salesforce began spreading the news about its new Customer 360 Truth, claiming it had a product with “a new set of data and identity services that enable companies to build a single source of truth across all of their customer relationships.“ [iii]  And although at the time they didn’t call it a CDP, they were quacking as if it were one, and funny enough in April 2021 relaunched it as a CDP. [iv]

In 2019, in classic Salesforce fashion, they announced a not-ready-for-prime-time CDP-like product, C360, with pages of fine print.  Like a theater stage with a kitchen viewed from afar, it might have appeared fully equipped.  However, on closer inspection, some of the supposed appliances were but props with no cords to plug in, no motor to run them.

And even on re-launch in May 2021, they simply slapped existing separate products such as Tableau and MuleSoft onto the wrapping paper of the Salesforce CDP.  Further, like most CDPs (except ones that come from the web analytics space, such as Tealium and Celebrus), everything is based on creating customer segments and sharing those in less than real-time for activation, instead of taking an individual personalization approach and sharing in real-time.

Thus, rip off the cartoon marketing wrappers, and look inside the box and inspect all the parts for function and fit before buying.

CDP Market Consolidation

In addition to the entrance of the above big three, Microsoft and SAP also announced CDP solutions.  Before November 2019, 18 acquisitions took place. Since November 2019, 8 more further transformed the CDP landscape:

  • IgnitionOne bought by Zeta Global – December 2019
  • AgilOne bought by Acquia – December 2019
  • Evergage bought by Salesforce – February 2020
  • Segment bought by Twilio – October 2020
  • Exponea bought by Bloomreach – January 2021
  • BlueVenn bought by Upland Software – March 2021
  • Boxever bought by Sitecore – March 2021
  • Zaius bought by Optimizely/Episerver – March 2021

What’s the takeaway?  Dust is still flying in this market.  And if you are betting on one of the 100+ vendors calling themselves a CDP to plug key gaps, especially in foundational areas such as identity & data management, consider whether their future is secure, and they’ll continue to go in the same direction, as it could impact yours.

The Calamitous Cookie Crisis – Customer Identification and Tracking

In January 2020, Google announced plans to end support for third-party cookies in Chrome in two years.  Late-breaking news is that in June 2021, Google said they will delay until the middle of 2023.  But cookiepocalypse is still coming.  With less than two years until that deadline, ad-tech companies, and ad agencies alike are scrambling to find workarounds for web behavior identification and tracking. 

Case in point – The Trade Desk and ad agency Publicis (who bought the database marketing firm Epsilon in 2019) are teaming on a digital advertising solution built around the new open-source identification scheme called Unified ID 2.0.  Initially developed by The Trade Desk, Unified ID 2.0 obfuscates a consumer’s email address, using a technical hashing technique to protect consumer privacy.[v]  
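
To make the mechanics concrete, here’s a minimal sketch of the general idea behind hashed email identifiers: normalize the address, then apply a one-way hash so the raw email never travels with the ad request.  This is an illustration only, not the exact Unified ID 2.0 specification, which defines its own normalization and encoding rules.

import base64
import hashlib

def hashed_email_id(email: str) -> str:
    """Illustrative only: normalize, hash, and encode an email address."""
    normalized = email.strip().lower()   # simplistic normalization
    digest = hashlib.sha256(normalized.encode("utf-8")).digest()
    return base64.b64encode(digest).decode("ascii")

print(hashed_email_id(" Jane.Doe@Example.com "))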

As of May 2021, The Trade Desk says it already has over 170 million profiles obtained with consent.  But long-term success depends on an even bigger pool of email addresses (e.g., more consumers opting in than opting out), and that means enough publishers adopting the standard, and obtaining consumer consent.  Since history has shown consumers will opt-in without reading terms and conditions, it may have hope, especially in places like the US and Asia, so stay tuned.  My advice – read before you click, as it’s essentially agreeing to be a target of every participating company. 

In addition, SAP and Akamai bought traditional sign-on companies Gigya and Janrain respectively, going the route of obtaining social sign-on solutions to gain access to customer identification and tracking capabilities.  And although Okta, who acquired rival Auth0 in May 2021, hasn’t called itself a CDP (yet), they are a force in the customer authentication and identity space.  

What does this have to do with CDPs?   Well, many ad-tech companies, formerly calling themselves data-management platforms (DMPs) during the third-party cookie era, now claim to be CDPs.  Keep in mind, however, they built their solutions to manage third-party data and cookies and to target based on those spurious methods, not on first-party data and known identities.  Ultimately, without a strong first-party data foundation, those DMP CDPs have a limited shelf life and are poor investments.

CDP Selection Tips

Tip #1 – Study their specialty

Keep in mind that all vendors started with a core offering.  That tells a lot about what they’re probably good at.  When interviewing a job applicant, there’s a reason why we inspect someone’s background (work history, school they attended), as it gives insight into how they’ve honed their craft. 

No vendor (not even the big ones) will be able to supply best-of-breed capabilities to handle all stages of a journey, from the anonymous browsing steps to phases deeper in an authenticated relationship.  Nor will they be able to major in more than a handful of the dozen or so capabilities the collective CDP market covers:

  1. Data collection
  2. ETL – Extract, transform, load (including cleansing and householding)
  3. Identity stitching and management
  4. Real-time data insights
  5. Predictive analytics
  6. Recommendations and decisioning engines
  7. Journey (cross-channel) orchestration
  8. Owned channel marketing automation and e-message services
  9. Digital advertising
  10. General (business intelligence) customer data activation
  11. Internal query, reporting, dashboards, and attribution analysis

Most native CDPs came up focusing on one or more of the first 3.  And with no official CDP magic quadrants or waves by major analyst firms, many others have conveniently slapped the CDP label on themselves. So, decide where you have the biggest capability gaps and needs along the customer journey, where a data-driven solution will drive better outcomes and more value, find matches, and select accordingly.  Also, if gaps exist mainly in areas 4 – 10, look beyond the CDP market, as there are a multitude of vendors not calling themselves CDPs that major in these areas.
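
Of those capabilities, identity stitching (#3) is the one buyers most often underestimate.  As a purely illustrative sketch (not any vendor’s algorithm), the basic idea is to link records that share an identifier, such as an email or phone number, and collapse each connected group to one unified ID, here using a simple union-find; the sample records are made up.

parent = {}

def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path compression
        x = parent[x]
    return x

def union(a, b):
    parent[find(a)] = find(b)

records = [                                   # hypothetical source records
    {"id": "web-1",  "email": "jane@example.com", "phone": None},
    {"id": "crm-7",  "email": "jane@example.com", "phone": "+1-555-0100"},
    {"id": "pos-42", "email": None,               "phone": "+1-555-0100"},
]

seen = {}  # identifier value -> first record id that carried it
for r in records:
    find(r["id"])                             # register the record id
    for key in ("email", "phone"):
        if r[key]:
            if r[key] in seen:
                union(r["id"], seen[r[key]])  # shared identifier, link the records
            else:
                seen[r[key]] = r["id"]

unified = {r["id"]: find(r["id"]) for r in records}
print(unified)  # all three records collapse to a single unified profile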

Tip #2 – Demand real-time response times

When considering the claim of “real-time” (a critical capability for taking CX to another level), look beyond single components, such as the speed of data collection or placing data onto a customer profile record.  Instead, inspect the entire data/event -> insights -> decision journey and ask:

  • “Can that entire trip be accomplished in an SLA (Service Level Agreement) under 200 milliseconds?”
  • “Can the vendor do that at scale, for millions of customers and thousands of interactions per second?”

Why 200 milliseconds, you ask?  Because as a consumer, do you want the websites you use to be slower?   As the person responsible for the website, will you allow anything new to slow down page loads?   I bet the answer to both is no.   So if your new CDP is going to play a role in providing better real-time digital experiences, it had better not eat up much of that two-tenths-of-a-second response time budget.
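
As a back-of-the-envelope check during a proof of concept, wrap the whole trip and time it against that budget.  The sketch below is hypothetical; collect, derive_insights, and decide stand in for whatever calls your stack actually exposes.

import time

SLA_MS = 200  # the end-to-end budget discussed above

def timed_decision(event, collect, derive_insights, decide):
    """Run the full event -> insight -> decision trip and report elapsed time."""
    start = time.perf_counter()
    profile = collect(event)            # placeholder: CDP data / profile lookup
    signals = derive_insights(profile)  # placeholder: real-time insight derivation
    action = decide(signals)            # placeholder: decision hub call
    elapsed_ms = (time.perf_counter() - start) * 1000
    return action, elapsed_ms, elapsed_ms <= SLA_MS

And run it at production-like volumes; a trip that fits the budget at ten events per second may blow it at a thousand.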

Tip #3 – Demand real-time insights

Look for a CDP that can supply real-time data insights, with a library of these for your industry.  This looms large in reaping unfair benefits from a CDP investment because few CDPs do it, and it’s how you’ll move the needle on customer experience.  Can you do this today?  Can you find customer behavior diamonds in the deep mines of digital data, surface them, polish them, and immediately pass them to a customer decision hub?  Not many can.

For instance: detecting consumers’ heightened but fleeting interest in specific products, refining that raw data into curated signals, and passing them to a decision engine in real-time so it can trigger special and immediate actions.  Very few CDPs can do these things – in that order – fast enough.  An example: a consumer on a banking website, researching mortgages [again], in the final stage of selecting a mortgage provider.
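
Here’s a purely illustrative sketch of that idea (the page paths, threshold, and emit_signal hook are all hypothetical, and this is not how any particular CDP implements it): turn raw page views into a curated signal the moment interest crosses a threshold.

from collections import defaultdict

MORTGAGE_PAGES = ("/mortgages/rates", "/mortgages/calculator", "/mortgages/apply")
THRESHOLD = 3                     # distinct mortgage pages viewed in one session

session_views = defaultdict(set)  # session_id -> set of mortgage pages seen

def on_page_view(customer_id, session_id, url, emit_signal):
    """Refine raw page views into a 'mortgage research' signal for the decision hub."""
    if any(url.startswith(page) for page in MORTGAGE_PAGES):
        session_views[session_id].add(url)
        if len(session_views[session_id]) == THRESHOLD:
            # hand the curated signal downstream the moment the threshold is crossed
            emit_signal(customer_id, signal="mortgage_research_intent", strength="high")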

So, look for a CDP that can solve this problem. There aren’t many.  You’ll add something special and unique that few can do.  Celebrus is one solving this exact problem:  collecting the right behavior data, making sense of it in the form of a signal library, passing those signals to a decision authority in real-time, so it can act in the moment.

Conclusion

Big is not always better, but it’s always bigger.  And although selecting a large outfit as a CDP provider gives some assurance that the solution will be around in a few years, that doesn’t necessarily equate to the best CDP solution.  Doing business with a mega-CDP vendor rarely means faster, more seamless interfaces and deep expertise.  On the contrary, expect bigger integration costs, longer wait times, custom work, and more patience required.  And if selecting a big CDP is for “one throat to choke,” try finding that elusive throat inside a tech behemoth with 50,000 employees who have swallowed up 20 companies on the way to building their marketing stack and CDP.

Conversely, using a smaller player has its tradeoffs.  Besides the risk of being bought, or folding up, inevitably their capability focus will be esoteric.  So, carefully inspect core competencies. Look for a CDP that compliantly tracks customers, collects data in real-time, has a signal library fit for purpose, and can interoperate with a decision hub. That way, you’ll get differentiation leading to better customer experiences from your CDP investment.


[i] Adobe.com, https://business.adobe.com/products/real-time-customer-data-platform/RTCDP.html, June 2021

[ii] Oracle.com, https://www.oracle.com/cx/customer-data-platform/what-is-cdp/, June 2021

[iii] Salesforce.com, https://www.salesforce.com/news/press-releases/2019/11/19/salesforce-announces-customer-360-truth-a-single-source-of-truth-for-every-customer-across-the-worlds-1-crm-2/, November 2019

[iv] Salesforce.com, https://www.salesforce.com/news/stories/salesforce-cdp-innovations-make-customer-interactions-smarter*/, May 2021

[v] The Wall Street Journal, https://www.wsj.com/articles/publicis-groupe-signs-on-to-use-trade-desks-alternative-to-cookies-11617883217, April 2021

Don’t fall into the “we need a CDP first” trap

Introduction

Over the last three decades, marketers and customer experience experts learned the importance of data-driven customer decision making.  With the right data, they realized, machines could assist them in running better programs.  The result was more customers receiving relevant offers, in turn leading to improved response rates and increases in customer satisfaction and retention.

This journey, however, wasn’t short on painful and costly lessons.  Stories were common of virtually endless data warehouse projects running seriously behind schedule and over budget.  In the most infamous cases, tens of millions of dollars were spent over years, with little to show for it.  Why?  Because from the onset the goals were misguided, and in many cases the wrong people drove the project.

What went wrong?  Simply put, project sponsors set out with the wrong sequencing of goals – trying to solve for the ultimate data repository first and putting the most important aspect, who would use it and how, on the back burner.  In other words, they set the priority on sourcing data, cleaning it, and structuring it, and put off concerns on which applications would leverage it.  Build it, they posited, and they will come.

Challenges:

Sadly today, many embarking on CDP projects are falling into this same trap: 

Select the best Customer Data Platform (CDP) first, build it to solve for nagging problems of fragmented data and cross-device identity.  Later, help customer decisioning applications get connected to it.

The problems with this approach are:

  1. Without considering first which specific outcomes are crucial to success and working back to the data needed to support those, chances are extremely high the CDP won’t have the right data.
  2. History shows it could take years to agree on the right data, amass, cleanse, stitch, and organize it into a brand-new platform.
  3. Nearly every vendor calling themselves a CDP is now also claiming to solve for enterprise customer decisioning requirements.  Yet selecting the same vendor for both means a direct dependency on this repository, where the CDP must be up and running before the business can run its first new customer engagement programs.

Twenty years ago, at Unica, we saw this exact same problem.  The business was waiting for IT to complete the never-ending data warehouse project.  Or worse, they took matters into their own hands and selected a tool like Epiphany that required all the data structured and uploaded into its marketing data model (essentially a CDP – just not called that at the time).   Sound familiar?

Again at Unica, to tackle this problem, we designed a different solution and approach.  We called the solution UDI (Universal Data Interconnect) which allowed marketers to map to existing data sources and run campaigns leveraging that data in place. 

We advised frustrated clients to set goals such as improving promotional response rates and urged them not to wait for data warehouse projects to complete.  The advice we gave them –  focus on redesigning campaigns, use advanced analytics to improve lift, and connect only to data sources required for those redesigned campaigns.  Essentially, let the new campaign rules drive the data source requirements.  References reported running successful campaigns shortly after project inception.  In just months they touted tangible economic benefits, bolstering their case to expand rollout.

CDPs are all the rage – what should I do?

First, the fact that CDPs are “all the rage” is part of the problem.  Upon closer inspection it’s the CDP vendors generating the hype, and not the paying clients.  Oddly missing are stories of resounding project success and massive ROI, and instead infamous stories of CDP projects failing to meet goals are piling up.  In Gartner’s 2021 Cross-Functional Customer Data survey, just 14% of respondents that reported having a CDP also reported achieving a 360-degree view. [i]  What we’re witnessing is the classic Gartner technology hype cycle, with CDPs now passing peak hype, and falling into the trough of disillusionment. [ii] 

In my 2019 article, To CDP or NOT – 3 tips – then you decide, the advice was beware of the hype in a poorly defined market.  Now, in 2022, vendors are trying to differentiate in a still nebulous market.  Here are some of the CDP subcategories that have emerged since 2019 [iii]:

CDPs selected primarily by Marketing and Business buyers:

  • Smart Hubs / Hub & Spoke CDP
  • Real-Time CDP
  • Marketing Cloud CDP (e.g., Adobe AEP, Salesforce CDP)
  • Campaign & Delivery CDP

CDPs selected primarily by IT, Data, and Analytics buyers:

  • Data Integration and Management CDP (focused on data collection and identity management)
  • CDP toolkits (used by IT to build a CDP)
  • Customer Analytics & Insights CDP

Certainly, the right answer isn’t to buy multiple CDPs.  Yet that is exactly what’s happening.  And for larger enterprises, some are buying as many as three, simply proving poor alignment between the business and IT. [iv]  Having lived through those days, be assured, the result is not alignment on outcomes, rapid access to the right data, and improved customer experience.  

At the same time, the right answer isn’t to let the business (or IT) solely determine the selection.   Although the business must have primary responsibility and control, it also must tightly collaborate with IT where both parties understand their roles and stick to them.  Though unfortunately not common, brands that get this right, and take inventory of what data & systems they have and what roles each party should play, report better success and ROI.  As such, follow these rules:

Do –

  1. Establish a strong partnership between the business & IT, align on use cases, outcomes, and how to measure success. Take inventory of existing capabilities and chart a roadmap together.
  2. Work back from the highest value use cases and desired outcomes and map out the data needed to support them. 
  3. Make it a requirement to be able to iteratively add to the data repository, as new programs might demand new data sources.  It won’t be instantaneous (think in terms of quarterly releases for production data source changes).
  4. Insist that the decisioning and execution capabilities and the CDP solution be evaluated on their own merits, and if in the end different vendors provide what’s best and can be integrated without herculean effort, select accordingly. Demand references that attest to their enterprise decisioning operational use, scale, and effectiveness.
  5. If evaluating (or already embarking on) a CDP project, simultaneously consider a re-vamped RTIM project. [v]  If a CDP project is ongoing, let the RTIM’s data requirements feed into the CDP’s, not the reverse.  And don’t wait for the CDP project to complete.  Select an RTIM vendor that can map quickly to existing data and can provide tangible proof of fast time to value and ROI.

Don’t –

  1. Accept at face value that the CDP’s RTIM engine will be “good enough.”  Rather, insist the vendor demonstrates unified inbound and outbound decisioning, real-time re-decisioning at scale, advanced analytics features, and capabilities to incorporate contextual streaming data.    
  2. Accept that having a single vendor will outweigh the benefits of having a best-of-breed real-time interaction management (RTIM) engine.
  3. Wait until teams agree on all the right data.  That day won’t come.  Instead, if a CDP has been selected, demand an agile approach for how to enhance the CDP over time.  Ask the vendor of choice for RTIM to provide plans for running before and after the CDP project is done.
  4. Make the mistake of thinking that a CDP Smart Hub can deliver scalable and maintainable RTIM decisioning.  None can.  Most rely on traditional segmentation and scripted, deterministic rule-based journey orchestration – all fraught with the old problems of static segment definitions, deterministic offer assignment, and hard-to-maintain eligibility and engagement rules.  A modern RTIM engine with a 1-1 personalization approach solves for these traditional limitations (see the sketch after this list for the contrast).
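
To make that contrast concrete, here’s a deliberately simplified sketch; the offers, eligibility checks, propensities, and values are made up for illustration.

def segment_rule_offer(customer):
    # Static, segment-based assignment: everyone in the segment gets the same offer.
    return "premium_card" if customer["segment"] == "mass_affluent" else "savings_booster"

def rtim_offer(customer, offers):
    # 1-1 arbitration: rank eligible offers by predicted propensity x business value.
    eligible = [o for o in offers if o["eligible"](customer)]
    return max(eligible, key=lambda o: o["propensity"](customer) * o["value"])["name"]

offers = [
    {"name": "premium_card",    "value": 120.0,
     "eligible": lambda c: c["credit_ok"], "propensity": lambda c: c["p_premium"]},
    {"name": "savings_booster", "value": 45.0,
     "eligible": lambda c: True,           "propensity": lambda c: c["p_savings"]},
]

customer = {"segment": "mass_affluent", "credit_ok": True,
            "p_premium": 0.04, "p_savings": 0.22}

print(segment_rule_offer(customer))   # premium_card (the segment says so)
print(rtim_offer(customer, offers))   # savings_booster (0.22 x 45 beats 0.04 x 120)

The second function re-ranks on every interaction as propensities update, which is exactly what a static segment rule cannot do.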

Conclusion

A CDP project, aimed at rationalizing customer data, improving identification, providing segmentation, and streamlining access seems a worthy cause.   Yet history teaches us that chasing a complete view of every single customer across all their devices and interaction points is an elusive goal.  What’s more likely is a CDP project turns into a giant hole, sucking time and resources.  And its expected benefits, like the light bent back on itself by a black hole’s vortex, may never emerge.

Instead, if a CDP project is ongoing, set it on careful rails, and manage scope.  Meanwhile, evaluate RTIM capabilities and embark in parallel to address those shortcomings and gaps.   Research ROI evidence from CDP and RTIM projects and compare.  If resources to fund both projects compete, pit them against each other based on business cases and prioritize investments accordingly.  And remember the lesson of sunk costs, and don’t be afraid to adjust project plans and budgets already in flight.  Many who have placed bets on RTIM cite quick successes that propel massive long-term returns — some with 10x ROI and more than $500 million in incremental revenue. [vi]  Don’t make the mistake of waiting and suffering huge opportunity costs.


[i] Market Guide for Customer Data Platforms, Gartner, March 2022

[ii] Hype Cycle for Digital Marketing, Gartner, July 2021

[iii] Customer Data Platform Industry Update, CDP Institute, January 2022

[iv] Ometria, https://ometria.com/blog/5-reasons-standalone-cdp-might-not-right-solution-retailers, January 2022

[v] What is RTIM, https://www.teradata.com/Glossary/What-is-RTIM, 2022

[vi] Forrester RTIM Wave, https://www.pega.com/forrester-rtim-2022, Q2 2022

The Hyper-Personalization Paradox: being relevant without crossing the CREEPY LINE

Brands are using AI to drive hyper-personalization, but can it also help them avoid being hyper creepy?


Source: https://www.adclarity.com/2015/04/digital-marketing-2015-hyper-personalization-display-ads/

Apparently, I have 8 seconds to grab your attention, so here goes.  What if I personalized every aspect of this blog for you?  That is, I knew so much about you – your reading behavior, the writing style you prefer, subjects you love – took all of it into account, and assembled these words and pictures just for you?  Would you find that creepy or cool?

At our conference in Las Vegas recently, I was a guest on Sam Charrington’s podcast series “This Week In Machine Learning and AI.”  In that episode, we discussed a similar hyper-personalization scenario, where an automotive company used intimate knowledge about a consumer and her connected car to custom-tailor each marketing and service treatment[i].  And halfway through (at 23:07), Sam observed that although “consumers appreciate personalized experiences,” it can go too far and “sometimes come across as creepy.”

And suddenly, we both realized something.  Customer experience experts haven’t used AI to govern this.  In other words, CX pros personalize without recognizing if their personalization levels are approaching creepiness.

Which led to this question: can creepiness be quantified?  And if so, with that knowledge, could a company effectively use it?  With the right tooling, could they safely test and simulate how far personalization should go, carefully delivering each customer a tailored experience with the right level of relevance and value, without crossing into their creepy space?  Simply put, hyper-personalizing without being hyper-personal — the personalization paradox.

Your marketing is creeping me out

Creepy land is that forbidden zone where consumers call out businesses for using personal data and revealing insights that are a bit too private.  And though consumers increasingly want personalized experiences (according to a recent Epsilon study[ii], 90 percent of consumers find it appealing), ironically, they will happily make examples of brands that invade their personal space.

No brand wants a creepy reputation as it implies:

  1. Stalking, snooping, or spying; collecting personal data and invading privacy
  2. Revealing something private, no matter how valuable the insight
  3. Not having customers’ best interests in mind
  4. Ill-intent, even when there isn’t intention to do harm

With big data galore, a culture of data sharing, and pressure to mass personalize to remain competitive, you need ways to safely and systematically explore the creepy line’s location without ever crossing it.  Understanding what customers expect and why they love a product (or don’t) is crucial to great personalization.  Avoiding a creepy moniker means effectively steering clear of areas that are, frankly, none of your business.  And if the customer says it’s none of your business, it’s none of your business.

Today, the digital world abounds with copious quantities of demographic, psychographic, and behavioral data.  There’s a sea of it, because for decades companies have wired up clients and monitored them like lab rats.  And with more IoT tech and data coming every day, firms increasingly misuse it, giving customers more reasons to demand privacy.  The problem is the definition of what’s private and sensitive can be different for each person.  Hence the dilemma: under personalize and risk being labeled clueless, not cool, and worse miss out on revenue; over personalize and risk breaking trust and doing irreparable damage to your reputation.

Sorry we’re creepy. We apologize for any inconvenience 

Customer engagement professionals need new and scalable ways to survey buyers, collect preferences and permissions, sense their intent and moments of need, and personalize appropriately.  That means testing where the creepy boundary is.  That line is fluid and ever-shifting, and finding the right level of personalized insights and recommendations without crossing into risky territory is never without some uncertainty.

Where that line lurks changes with time because initially customers may be leery of something, then later adapt to it.  It also changes because privacy legislation changes and because individual consumers have distinct levels of sensitivity and varying levels of awareness. It can even differ by geography.  For instance, a 2016 study of 2,000 consumers in Europe found that 75 percent were uncomfortable with facial recognition software used to target them with personalized offers (consumers in the US were much less sensitive).[iii]

Figure 1 shows how data-driven marketers have evolved their practices, using data to acquire more customer knowledge, which in turn powers more personalization: from the general-advertising Mad Men approaches of the 1960s to the hyper-personalized, AI-powered approaches possible today.  It also highlights how that evolution pushes them closer to the creepy space.


Figure 1: Evolution of Data-Driven Marketing

Here’s the bottom line: if a given customer’s perception is that it’s creepy, it’s creepy.  And depending on who slaps that label on, and whether their rights have been violated, firms may face legal battles, fines, and reputation damage leading to significant commercial impact.  For instance, GDPR fines can reach €20 million or 4 percent of a firm’s global annual revenue, whichever is higher.

And none of that is music to a businessperson’s ear.

Creeping toward creepy

In 2014, Pinterest managed to spam a major segment of customers when they sent emails to unengaged women congratulating them on their upcoming weddings.  And Shutterfly made an even bigger spam faux pas that same year, congratulating women on the birth of babies they didn’t have.

In Figure 1, these events fall into the SPAM circle because marketers placed people into the wrong macro segments, and the resulting emails were both irrelevant and hilariously erroneous.   Clumsy customer experiences indeed, but not creepy-smart marketing.

Here are some other examples of Mad Men SPAM marketing:

  • You market wedding offers after a wedding – low sensitivity
  • You market wedding offers after a cancelled wedding – high sensitivity

On the other hand, the risk of being labeled a creepy marketer increases when knowledge of customers goes up, insights increase, yet marketers fail to understand an individual’s sensitivity to certain marketing actions.

For each marketing treatment, you need to determine if it will be creepy to everyone or only some:


Figure 2a:  Creepy Meter detecting creepy treatments

If it’s clearly creepy to everyone, reject it during the pre-market approval process.  But if it’s potentially cool to some and creepy to others, then, provided you can discriminate at runtime using eligibility rules, you can approve its use for those who will find it cool.

To do this, get a readout on consumers’ sensitivity to hyper-personalization.  Build a model that learns this, and use this score to select, by individual, the levels of personalization they’re eligible to receive.


Figure 2b:  The Creepy Sensitivity Index readout on each consumer
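
As a thought experiment, here’s what such a gate could look like.  The features, weights, and tier cut-offs below are entirely hypothetical; in practice the score would come from a trained model fed by consent, preference, and behavioral data.

def sensitivity_score(profile):
    """Higher = more sensitive to hyper-personalization (clamped to 0.0-1.0)."""
    score = 0.2                                         # hypothetical prior
    if profile.get("opted_out_of_tracking"):
        score += 0.4
    if profile.get("complained_about_targeting"):
        score += 0.3
    if profile.get("engages_with_personalized_offers"):
        score -= 0.2
    return min(max(score, 0.0), 1.0)

def eligible_personalization_tier(profile):
    s = sensitivity_score(profile)
    if s >= 0.7:
        return "basic"       # generic, low-risk treatments only
    if s >= 0.4:
        return "contextual"  # regional / segment-level personalization
    return "individual"      # 1:1 hyper-personalized treatments

print(eligible_personalization_tier({"opted_out_of_tracking": True}))  # contextual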

Here are a few examples of events, corresponding covert marketing approaches, and creepy readings:

Event | Covert Marketing (but not illegal) | Creepy Meter | Approve?
Hospital admittance / serious health issues detected | Mortuary makes discount offers | Extremely creepy | Reject
Conversation recorded (without clear permission to use for marketing) | Ads for products related to keywords in the conversation (e.g., the recent pet toy video, which illustrates the point yet is likely a hoax) | Very creepy | Reject
Facial recognition or location detection | Upon a patron entering a branch or store, their profile & preferences are relayed to a salesperson | Borderline creepy | Conditional
Consumer traveling; recent activity and calendar scanned | Push notifications offering travel recommendations based on triangulating travel intent and destination | Borderline creepy | Conditional
Consumer browsing a web page with product offers | Website background, images, language, offers, and other page fragments hyper-personalized | Borderline creepy | Conditional

Table 1: Examples of potentially creepy marketing

Leading-edge 1:1 marketers are constantly listening for keywords, tracking interaction device, time & location, codifying behavior, sensing mood, recording preferences, and using that knowledge to hyper-personalize with content variations in the millions.  The risk, however, is meandering into that forbidden creepy zone (even if it’s legal), so discerning this by customer by treatment is vital.

Suggestions

As you move into deeper levels of hyper-personalization, do so deliberately and methodically, fully grasping the implications before rolling out.  Consider taking this approach:

  • Collect only data that matters to your ability to personalize specific experiences – that your customer will value. For example, if you sell insurance, you don’t need to understand pet preferences unless you’re selling pet insurance.
  • Start with simple / minimal-risk personalization strategies. These should easily pass the creepy test.  For instance, if you can tune your web experience to shopper color preferences, do it.  No one will find that creepy.
  • Gradually apply regional and demographic personalization strategies.
  • Use AI to crawl your products and content to extract taxonomies, attributes, cross-classifications, and descriptions. This will help better match customer intent and preferences to products that will match needs.
  • Use AI to match the right products to clients (making relevant recommendations), and do so in a personalized way that enhances their experience.
  • Use sampling to test hyper-personalization treatments, selecting a wide variety of customers.  Essentially, you get a stratified sample of creepiness raters (see the sketch after this list).
  • In general, avoid even borderline covert marketing unless you have a firm handle on any backlash that might result if customers discover it. In a recent survey, most consumers (81%) think firms are obligated to disclose they’re using AI – and how they’re using it.[iv]
  • Be sensitive to consumers’ preferences for public recognition.  Some might love it if you greet them by first name and show appreciation for their loyalty in public.  A few, however, may be mortified.
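
Here’s a hypothetical sketch of that stratified sampling step: draw a fixed number of raters from every stratum (say, age band by region) so each group is represented when a new hyper-personalization treatment is tested.

import random
from collections import defaultdict

def stratified_sample(customers, strata_key, per_stratum=50, seed=42):
    """Pick up to per_stratum raters from each stratum so every group is heard."""
    random.seed(seed)
    strata = defaultdict(list)
    for c in customers:
        strata[strata_key(c)].append(c)
    sample = []
    for members in strata.values():
        sample.extend(random.sample(members, min(per_stratum, len(members))))
    return sample

# usage: stratified_sample(all_customers, lambda c: (c["age_band"], c["region"]))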

Hyper-personalization requires great data, great technology, and great sensitivity.  With GDPR now in effect, most businesses are proactively disclosing their data collection practices and privacy policies.  As consumers, we’re consenting to and accepting new privacy policies more than ever before, and in some cases, we’re even reading and understanding them.  Less clear, however, is exactly how that data is used, combined with other data, and when it might show up as an insight, recommendation, or hyper-personalization – and again, which of us might be freaked out by this personalization.

AI is driving personalization to new levels.  There’s no stopping that.  It automatically figures out what works and what doesn’t.  Techniques such as Bayesian algorithms quickly learn which offers work, when, and in which channels.  Others, like collaborative filtering, find which products pair best, which in turn drives cross-sell and bundling strategies.  Design of experiments and monitoring tools measure the impact and enable fine-tuning.
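
For example, a Beta-Bernoulli Thompson sampling loop is one common way such Bayesian offer learning is implemented; the sketch below is generic and not tied to any product mentioned here.

import random

class OfferBandit:
    def __init__(self, offer_names):
        # Beta(1, 1) priors: start with no opinion about any offer.
        self.stats = {name: {"alpha": 1, "beta": 1} for name in offer_names}

    def choose(self):
        # Sample a plausible response rate for each offer; pick the best draw.
        draws = {name: random.betavariate(s["alpha"], s["beta"])
                 for name, s in self.stats.items()}
        return max(draws, key=draws.get)

    def record(self, offer, converted):
        # Update the chosen offer's posterior with the observed outcome.
        key = "alpha" if converted else "beta"
        self.stats[offer][key] += 1

bandit = OfferBandit(["offer_a", "offer_b", "offer_c"])
offer = bandit.choose()
bandit.record(offer, converted=False)

Each conversion (or non-conversion) updates the chosen offer’s posterior, so the next draw naturally shifts traffic toward what’s working.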

What’s missing, however, are tools to sense consumers’ sensitivity to personalization, so overt practices are optimized with the right people, and so covert methods are prevented from ever reaching production, or if they are approved for use, are carefully applied.

The study shown in Figure 3 provided some proof that overt personalization pays off.   Yet the very definition of overt blurs as AI improves, content becomes hyper-conditional, and levels of personalization get more complex.  Thus, you’ll need more sophisticated ways to gauge levels of personalization relative to creepiness, and the sensitivity levels of different people.


Figure 3: Overt vs covert personalization performance[v]

Conclusion

Great marketers push beyond perceived barriers by understanding customers, knowing products, and then elegantly combining creativity and technology to provide valuable recommendations and experiences.  Ironically, when done right in the eyes of the receiving consumer, they don't appear to be selling anything; instead, they're simply providing a service.

With website personalization, one-to-one content, natural language generation, image recognition, and countless other AI tools, businesses inexorably march toward hyper-personalization.  Make sure you manage it, so you’re always cool and never creepy.


Endnotes:

[i] https://www1.pega.com/insights/resources/pegaworld-2018-pegas-ai-innovation-lab-sneak-peek-and-your-vote-counts-video, June 2018

[ii] http://pressroom.epsilon.com/new-epsilon-research-indicates-80-of-consumers-are-more-likely-to-make-a-purchase-when-brands-offer-personalized-experiences/, January 2018

[iii] https://www.forbes.com/sites/fionabriggs/2016/07/04/fingerprint-scanning-is-cool-but-facial-recognition-creepy-new-richrelevance-survey-shows/2/#493b953f3d68, July 2016

[iv] https://www.richrelevance.com/blog/2018/06/20/creepy-cool-2018-richrelevance-study-finds-80-consumers-demand-artificial-intelligence-ai-transparency/, June 2018

[v] https://www.sciencedirect.com/science/article/pii/S0022435914000669#abs0005, March 2015

 

My 2019 Martech articles

To CDP or NOT – 3 tips – then you decide

4 Golden Rules for Knowing and Honoring Thy Customer

Dear CMO: Sorry, but we need a CVO (Customer Value Officer)

A 6 pack of tips when replacing creaky MRM software

The Final 4: MarTech Platforms and Ecosystems

Will AI in digital marketing lead to marketer obsolescence?

One-to-One Marketing: 20 years later, are we there yet?

Marketing’s Strangelove: How I Learned to Stop Worrying & Love Service

5 predictions for CRM’s AI applications in 2019

Consumers kill for digital convenience: Can AI help your business?

We’ve all seen countless images of the proverbial empowered consumer.  That mythical creature seeking convenience and instant gratification.  It’s a conjured-up image of a time-strapped digital native that juggles five devices and 15 tasks, interacting simultaneously on a host of channels, using their super-human consumerism to wield terrifying powers capable of paralyzing unworthy brands.


Hyped-up as they are, these visuals still serve a healthy purpose.  They remind us just how far the digital bar has been raised, and that should give us pause and beg the question, "as businesses, are we measuring up?"

Collectively, the answer is we're not.  In fact, consumer satisfaction studies repeatedly confirm it.  Simply search on "consumer study poor digital experience" and voila – hundreds of examples.  One study conducted by Software Advice found over 90 percent of consumers had one or more deal-breaker digital experiences when seeking customer support on mobile[i].  So, in an age with so much technology at our fingertips, why are we falling short?  What can we do to fix this?

Too often, we fall short because we focus on the wrong problems in the wrong order.  To correct this, it’s important to first consider a modern consumer’s mindset and what they’re demanding.  With greater resolve, they’re chasing after nirvana, in a quest for brands that deliver products, services and experiences that are:

  • Valuable / relevant
  • Consistent / high-quality
  • Enjoyable / attractive / personalized
  • Familiar / trusted
  • Secure / lower risk
  • Compatible with values / social beliefs
  • Convenient / simple / timely

Enterprises, however, can’t perfect all seven of these deadly-important areas simultaneously.  So, the trick is finding what matters most, and then using AI and automation technologies to help.

AI in business won’t magically transform a company with fundamental structural flaws, such as poorly designed products, no unique selling proposition, or cost containment issues. These take great human leadership, creativity, and collaboration to fix.  And it won’t manage the job of building and maintaining corporate culture.  But in other cases, AI applied pragmatically to streamline processes and eradicate friction can make an enormous difference.

What’s proven to be a winning recipe in business is paying attention to customer-centric details.  Brands hyper-focused on customer experience build a lasting reputation and increase in value.  Look at Apple, Uber, Airbnb, Amazon, and even Booking.com.  All built on the backs of nailing digital experience, often with a mobile-first mentality.  Yet, with seven major areas and hundreds of experience details to consider, where should you start?

Is convenience king?

Out of the above seven criteria, convenience may be the most important in terms of driving long-term value, and the one CX professionals can influence the most.  Perfecting convenience can separate winners from losers; sellers from re-stockers. Consider this quote from a CEB study[ii]:

“Brands that help consumers simplify the purchase journey have customers who are 86 percent more likely to purchase their products and 115 percent more likely to recommend their brand to others.”


And convenience contributes to and builds up other factors, such as being viewed as valuable, familiar, and trusted.  It may be one of the chief drivers of loyalty.  It can even trump something like price.  For example, wireless carriers have learned consumers prefer unlimited communication plans because they’re convenient and simple, even though they may cost more[iii].  Consumers make impulsive and emotional purchase decisions when enough of the factors align, and tend to justify things afterwards.  Since consumers’ assessment of convenience is qualitative, figuring out how to elicit positive emotional responses regarding convenience is crucial.

In a consumer’s mind, the label of convenience translates into a business being viewed as:

  • Useful and suitable
  • Easy to buy from, use, and transact with
  • Requiring less overall effort
  • Simple to understand / responsive to issues
  • A time saver

Each is a judgment call by an individual, but with critical mass and time, these opinions converge to a collective market consensus (the wisdom of the crowd).  They manifest themselves in the form of review scores, ratings, and tribally-shared social advice.  It’s this reputation that drives commercial allegiance.

Largely, consumers make emotional decisions when they choose one product over another.  Sometimes they want combinations that are seemingly impossible to get:

  • A readymade dessert that tastes great and is nutritious
  • A car that is inexpensive, fast, great looking, economical, and durable
  • A delicious pizza that arrives in a few minutes, is made by an environmentally-conscious brand, and oh…costs less than $10

It's no wonder brands struggle to satisfy whimsical consumer desires, but fickleness aside, consumers cry out for brands to simply simplify things.  Ironically, they work longer and harder, living in a world that supplies exploding choices for everything but precious little time to weigh options, which in turn drives them to crave simplicity in decision making. They demand trusted information that is easily accessible.  They want user-friendly ways to weigh options, and help navigating processes.  In a 2016 survey on travel shopping preferences, consumers picked ease of use as the top reason they booked using an online travel site.[iv]

AI knows there’s no second chance to make a first impression

Consumers want convenience, but which actions will achieve maximal impact?  Before answering this, keep in mind a marketing 101 maxim: perception is nine-tenths of reality.  And perception is often built on first impressions.  Further, when an initial impression goes wrong, it takes multiple positive interactions to repair it.  As such, consider using AI as tooling to help elevate levels of perceived (and real) overall convenience in critical first-impression customer journeys such as:

  • Getting a quote
  • Completing an application
  • Navigating a sign-up or onboarding process
  • Completing an initial purchase
  • Setting up online payments

And during service scenarios such as:

  • Order status checking
  • Returns
  • Claims
  • Lost card replacement process
  • Scheduling an appointment
  • Finding a doctor

How does AI support these?  If we agree that AI is a mixture of automation and intelligence technologies, AI can help streamline the process for consumers getting answers such as the status of an order, return, or claim.  Further, consumers can even ask these systems to schedule a store or branch appointment, find the most convenient time and location, and then add the appointment to their calendar.

AI-powered chat bots (and other self-service portals) can provide 24 x 7 first-line support for answers to questions like:

  • How to transfer funds
  • How to make an online payment
  • How to check account and policy status

In many cases, without any human intervention, bots can answer questions, close out an inquiry, and even assist with completing a transaction.  In situations requiring human agents, AI-based systems can orchestrate seamless hand-offs of data and case details, allowing humans to pick up precisely where machines left off.
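
As a rough illustration of that first-line triage and hand-off, here is a small, hypothetical Python sketch.  The intent names, the 0.8 confidence threshold, and the CaseDetails structure are all assumptions for the example, not any vendor's chatbot API.

```python
from dataclasses import dataclass, field

@dataclass
class CaseDetails:
    customer_id: str
    utterance: str
    bot_steps: list = field(default_factory=list)  # what the bot already did

SELF_SERVICE_INTENTS = {"transfer_funds", "make_payment", "account_status"}

def route(intent: str, confidence: float, case: CaseDetails) -> str:
    case.bot_steps.append(f"classified as '{intent}' ({confidence:.2f})")
    if intent in SELF_SERVICE_INTENTS and confidence >= 0.8:
        return f"bot resolves '{intent}' without human intervention"
    # Low confidence or out of scope: hand the full case context to an agent so
    # the human picks up precisely where the machine left off.
    return f"escalate to agent with context: {case.bot_steps}"

case = CaseDetails("cust-123", "why was my card declined?")
print(route("card_declined", 0.55, case))
```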

Make no mistake, AI skills are already going far beyond performing simple tasks.  Today, AI engines can give nuanced advice, surface unique insights, and provide proactive recommendations.  The most sophisticated systems even factor in customer context, such as location, weather, mood, and motivation before arbitrating on the next-best-action.

In banking, for instance, AI can help track savings and spending habits, and send threshold alerts. To illustrate, suppose a consumer has a recurring transfer from checking to savings each month.  AI can monitor account balances and send an alert when upcoming bill payments are forecasted to drain a checking account beyond non-fee thresholds.
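
A minimal sketch of that forecasting-and-alerting idea might look like the following; the $500 non-fee threshold, the payment schedule format, and the function name are assumptions for illustration, not a real banking integration.

```python
from datetime import date, timedelta

NON_FEE_THRESHOLD = 500.00  # assumed minimum balance before fees kick in

def first_shortfall(balance, scheduled_payments, horizon_days=30):
    """scheduled_payments: list of (due_date, amount) tuples.
    Walks the projected balance forward and returns the first day it dips
    below the threshold, or None if it stays healthy."""
    for day_offset in range(1, horizon_days + 1):
        day = date.today() + timedelta(days=day_offset)
        balance -= sum(amount for due, amount in scheduled_payments if due == day)
        if balance < NON_FEE_THRESHOLD:
            return day, balance
    return None

payments = [(date.today() + timedelta(days=3), 1200.00)]  # e.g., a recurring transfer
shortfall = first_shortfall(balance=1500.00, scheduled_payments=payments)
if shortfall:
    day, projected = shortfall
    # In production this would trigger a push or email alert rather than a print.
    print(f"Heads up: projected balance ${projected:,.2f} on {day} is below the non-fee threshold")
```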

In healthcare, there’s Dr. AI from HealthTap, who can engage in conversation aimed at providing triage and care advice, using a locally-stored health profile, a network of over 100,000 doctors, and Bayesian learning AI to serve up the next-best-advice.

What’s the right set of technologies for your stack?

Well, there’s good news and bad news.  First the bad news – there is no one right answer, and with thousands of vendors (6,829 in this marketing landscape), open-source packages, and resulting combinations of solution stacks possible, there’s no evidence anyone has found the absolute best combination, or ever will.

Now the good news – you have a ton of alternatives, with many combinations likely to work, but finding a stable and winning blend is tricky.  Some tools look easy to use on the surface but aren't.  Others won't live up (functionally) to their marketing hype.  The best advice is to form a solid base with at most one or two platforms covering essential infrastructure (pieces you can't afford to swap in and out), and make sure those platforms allow plug-and-play with adjacent pieces likely to have shorter useful lives.

For example, find vendors with durable connectors for wrangling data into an actionable customer profile, a real-time hub that acts as a central brain to arbitrate customer decisions, and integrated customer analytics.  These components are foundational, and must be centralized so they operate in a channel agnostic fashion.  New channels may spring up, and others diminish in importance, but a decision engine which feeds on key behavior data, arbitrates decisions, and renders appropriate next-best-actions is a necessary constant.
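
To show what "arbitrates decisions and renders next-best-actions" can mean in practice, here is a deliberately simplified, hypothetical Python sketch that filters actions by eligibility rules and ranks the survivors by expected value (propensity times business value).  The action names, rules, and numbers are invented for the example.

```python
def next_best_action(customer, actions):
    """Keep only eligible actions, then pick the one with the highest expected value."""
    eligible = [a for a in actions if all(rule(customer) for rule in a["rules"])]
    if not eligible:
        return None
    return max(eligible, key=lambda a: a["propensity"](customer) * a["value"])

actions = [
    {"name": "retention_offer", "value": 120.0,
     "propensity": lambda c: 0.8 if c["churn_risk"] > 0.5 else 0.1,
     "rules": [lambda c: c["tenure_months"] >= 6]},
    {"name": "premium_upsell", "value": 60.0,
     "propensity": lambda c: 0.4 if c["churn_risk"] <= 0.5 else 0.05,
     "rules": [lambda c: True]},
]

customer = {"churn_risk": 0.7, "tenure_months": 24}
best = next_best_action(customer, actions)
print(best["name"] if best else "no action")  # -> retention_offer
```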

Final thoughts

There’s a real irony forming with AI in business.  We’re building and teaching computers to be more human, while as humans we’re being led and conditioned by our busy lives and workplaces to be more machine-like.  The problem is computers are no humans, and humans are poor computers.

Step back and consider what’s best for the consumer.  Providing great first impressions, as well as seamless and gratifying ongoing experiences, requires well-functioning and well-behaving humans and machines working in concert.  Consumers want products and services they’re proud to recommend because they make life easier and more enjoyable. When things go wrong, they expect flexible help and fast solutions.  When self-service isn’t working, they demand cases smoothly transition to well-informed, caring, and compassionate humans.  Brands must skillfully, judiciously, and mindfully weave together computer systems with humans as they design for convenience in all the complexities of customer journeys.

Delivering convenience must be a paramount goal, so reflect on the unique characteristics of the individuals you serve and the nuances of their voyages.  Dry run how each will navigate your services:  some will be older and less familiar with technology; some will be capable of juggling five devices on five channels; sometimes technology will fail and require fallback processes.

Ultimately, your convenience reputation will be defined by a diverse set of consumers steering through a wide variety of conditions and processes.  Use AI and humans to start off on the right foot, deliver consistently under normal operating conditions, and to proficiently handle the inevitable miscues.

[i] https://www.softwareadvice.com/resources/improve-cx-with-mobile-support/

[ii] https://news.cebglobal.com/press-releases?item=128138

[iii] https://www.theverge.com/2017/2/17/14647870/us-carrier-unlimited-plans-competition-tmobile-verizon-att-sprint

[iv] http://www.traveltripper.com/blog/why-do-travelers-prefer-booking-with-otas/

 

4 well-intended Marketing Automation BAD HABITS to break

Let’s face it.  No one sets out to botch something up or fall short of reaching a goal.  When marketing automation was in its infancy, and pioneers like Don Peppers, Martha Rogers, Tom Siebel, and Paul Greenberg envisioned marketing and CRM systems in the mid 1990’s, they set the right vision, believing great customer relationships could be initiated, fostered, and brought to scale with the right data and technology.  Essentially, their collective creed was:

  • Focus on the individual customer (e.g., be one-to-one and customer centric).
  • Manage the relationship by understanding customers’ buying cycles, needs, and behaviors across the marketing, sales, and service functions.
  • Use that knowledge to custom-tailor and personalize the experience.
  • Use technology to deal with the scale required by larger businesses.

Thirty years later, sadly, this vision still seems out of reach, or at best, only partially realized.  So why is that?  What’s held back the realization of the vision?  What are we still doing wrong?

Here are four unhealthy habits of nearly every marketer (so the good news is you’re not alone).  Fix these, and you’ll get a distinct advantage, and get closer to marketing optimization and CRM nirvana.

 

Bad habit #1 – Focusing on customer segments and not individuals

Customers are individuals.  Each has unique characteristics, nuances, and contextual needs that define who they really are.  And though we’re awash in a wealth of unique behavior data, it’s a common mistake to continue trekking on the beaten path, making decisions based on segment characteristics rather than individual ones.  For years, we’ve slotted customers into segments because we had no other choice, oversimplifying who they really are.


It’s understandable in the initial stages of relationship management that businesses make broad customer classifications such as:

  • Returning visitors
  • Mobile visitors by geography and device type
  • Registered users by gender and age (leading to segments like Millennials, Gen Zs, and Gen Alphas)
  • Non-responders to an email campaign

Yet after these customers repeatedly interact and transact, clearly stating their implicit and explicit preferences, continually handing over lifestyle and contextual data, there’s no excuse for still making generalized, segment-based decisions.  We’re spending millions collecting, storing, and refreshing specific behaviors and preferences, so we should use this data to drive individualized decisions and to customize treatments.

In a recent paper titled “Crossing the chasm: From campaigns to always-on marketing,” [i] Rob Walker and Matt Nolan contend that “building audiences using segmentation is a process that introduces severe challenges such as compromised relevance, unscaled labor, and collisions and conflicts.”  They go on to suggest using a next-best-action approach, describing it as one that “targets individual customers, rather than segments – leveraging their unique needs, preferences, and context.”

 

Bad habit #2 – Focusing on selling products instead of customers’ needs

Sounds crazy, right?   How else will we make money if we don’t sell products?

Still, one of the cardinal sins holding back modern marketers is focusing strategy and tactics solely on selling products.  By doing this, we're exacerbating two problems:

  1. Product owners, incented to relentlessly push their products, bombard consumers with ill-conceived campaigns containing messages and offers that conflict, overlap, or worse, aren’t even applicable. When viewed through a customer’s lens, these promotions have little to do with their actual needs.  As such, marketers often completely miss the relevance mark.
  2. Even when a product fits, companies fail to provide well-timed promotions, convenient services, and a context-sensitive experience. Oblivious to the individual's situation, they make company-focused timing and interaction decisions, such as blindly promoting a product simply because ad budget might otherwise expire, or failing to promote crucial services in conjunction with the product.  Consequently, tactics are entirely out of sync with the customer's buying cycle and experience expectations.

Together, these problems compound customers’ negative brand perceptions.  Rather than providing a stellar buying service, well-intentioned marketers inadvertently (and increasingly) overwhelm, turn off, and tune out consumers.  Essentially blind to journey requirements, marketers miscalculate customers’ value calculus, timing preferences, and the overall interaction experience they need and expect.

In study after study (year after year), consumers and brands acknowledge these issues, both resoundingly stating their desire for solutions.  For example, in 2012 the Corporate Executive Board (now part of Gartner) surveyed more than 7000 consumers and 200 CMOs, finding that what consumers want from marketers is relevance and “simply, simplicity.”[ii]  That was 2012.  It’s 2018, and not much has changed.

If corporations keep strategy oriented on selling products, customer relationships will remain transactional and experiences sub-optimal for many more years.  Maybe we’ve forgotten what the R in CRM stands for.  It was put there to remind us that what matters most is long-term relationship building.  Our quest should be to unravel the mystery of a customer’s ever-changing needs, their journeys, and what drives their loyalty.  Our job is to use that knowledge to create custom-tailored experiences.

 

Bad habit #3 – Building channel-based versus coordinated intelligence

Shortly after September 11, 2001, the US government came to a stark realization that its various intelligence agencies were massively disjointed and compartmentalized.  This hadn’t happened overnight.  It was years in the making, and although for decades ample resources were poured into each agency, no one agency was responsible for coordination.  Attempting to solve this problem, the government established the Department of Homeland Security.

In a similar vein, some firms have built up marketing automation and CRM intelligence in silos for over 30 years.  In each channel (e.g., email, contact centers, web), they've poured substantial resources into projects aimed at beefing up customer intelligence.  Each channel amassed data, rules, and intelligence, but no one was designated as the coordinator, and information was rarely shared.  Subsequently, as more channels emerged, the problem grew larger.  Today, many companies have 15 or more channels to manage, and no coordinating function.

To provide a wonderful experience, brands need a function responsible for coordinated customer analytics, intelligence, and decision making, such as the one depicted in Figure 1.  Its role is straightforward:

  1. Collect interaction intelligence and contextual data from each touchpoint, and connect it directly to a system that can leverage that information immediately.
  2. Be brain-like, tracking behavior patterns in real-time, sensing needs, and using analytics to dynamically calculate value, comprehend preferences, and predict intent.
  3. Play the arbitrator, weighing an individual’s needs against corporate initiatives, policy, risk tolerance, budgets, and economic goals. Make instant and well-balanced decisions, track the results, and learn from each decision.

 


Figure 1: Engagement hub provides coordinated omnichannel intelligence

Think of this, not as another physical department, but instead as a virtual customer-centric hub. Designed from the ground up to be connected to all customer touchpoints, it’s journey oriented versus channel centric.  Cognizant of what transpired, why, and what’s best to do next, its embedded strategies and rules act as a real-time arbitration committee – making data-driven decisions in milliseconds versus months.

This hub is also more than a customer data platform.  It’s an end-to-end engagement hub responsible for not only gathering and coordinating intelligence, but also gleaning real-time insights and taking action.  To deliver on that, it manages key data, customer analytics, corporate rules and processes, and channel interfaces.  In a calculated and auditable fashion, it makes recommendations, delivers them to touchpoints (the channel apps fine tune the experience), and it learns from a systemic set of impressions and responses.
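
To ground those responsibilities, here is a simplified, assumed Python sketch of the hub's collect-decide-learn loop.  The class name, event tuples, and the toy decision rule are illustrative only, not a reference architecture.

```python
class EngagementHub:
    """Each touchpoint event updates the profile (collect), a channel-agnostic
    decision is arbitrated centrally (decide), and the outcome is fed back so
    the hub learns from impressions and responses."""

    def __init__(self, decide):
        self.profiles = {}    # customer_id -> behavior and context data
        self.outcomes = []    # impressions and responses, kept for learning
        self.decide = decide  # pluggable arbitration logic

    def on_event(self, customer_id, channel, event):
        profile = self.profiles.setdefault(customer_id, {"events": []})
        profile["events"].append((channel, event))                 # collect
        return {"customer_id": customer_id,
                "channel": channel,
                "action": self.decide(profile)}                    # arbitrate

    def on_response(self, recommendation, accepted):
        self.outcomes.append((recommendation["action"], accepted))  # learn

hub = EngagementHub(decide=lambda p: "save_offer"
                    if ("web", "viewed_cancel_page") in p["events"] else "no_action")
rec = hub.on_event("cust-42", "web", "viewed_cancel_page")
hub.on_response(rec, accepted=True)
print(rec["action"])  # -> save_offer
```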

 

Bad habit #4 – Worrying primarily about marketing automation and technology, instead of experience

Automation, and the technology that enables it, efficiently repeats tasks.  That’s great, if you computerize the right tasks that deliver the right experience.  Look at it this way:  spammers are very effective at marketing automation.

Above all, to achieve lasting loyalty and build value, avoid the temptation to recklessly make existing marketing processes more efficient.  Granted, some existing tactics may work, yet chances are many need to be revamped (or ditched entirely), and recognizing that requires reframing priorities.  Preferably, focus on customer journeys, and ask if marketing tactics contribute to a better experience.  Consider journeys such as:

 

  • Prospects searching for products to discover and learn more
  • Customers seeking out trials to test those products
  • Customers embarking on a buying or upgrade process
  • Customers doing research on price, available incentives, and financing options
  • People filling out an application, making a booking, or redeeming rewards
  • Consumers getting stuck, struggling, or in need of assistance
  • Clients reaching milestones, entering new life stages, or affected by key events

No organization can serve its customers without supporting people.  To illustrate, assume your kiosk has a reasonable self-service experience, but then something goes wrong.  The technology hiccups, and a customer grows agitated.  Without back-up mechanisms, this situation can quickly turn disastrous.  To avoid it, you need reasonable levels of redundancy, well-tested cut-over processes, and intelligent detectors that gauge the need for human intervention and then bring the right human into the loop.

Brands that thoughtfully consider these scenarios, elegantly weaving together marketing automation, people, and processes, will deliver better customer experience.

But how can you be sure you’re improving experience?  In short, hyper-focus on one journey at a time, pick metrics to measure each, and correspondingly measure overall satisfaction.  Once more, here’s where many firms trip up.  Instead of measuring whether the customer is fully satisfied with, say, the onboarding journey, they only measure certain tactics, like whether a welcome email got sufficient opens and clicks.

Conclusion

Be honest.  We all have some bad habits that admittedly we should give up for our own good.  But breaking old habits isn't easy.  And like any habit, we're comfortable with our marketing automation traditions because the outcomes are predictable.  Nonetheless, just because they're predictable doesn't mean they're best for our customers.

When we force-fit customers into segments, push products on them that we want to sell, confuse them with conflicting and poorly orchestrated channel messages, and hyper-focus on our efficiency (versus their experience), the results will be predictable alright – in other words, we’ll get our anemic 0.5% response rates and slow growth.

If you think, however, you can do better, then take a chance.  Collect as much individualized data as you can, use it to personalize customers' experiences, coordinate decisions with one principal engagement hub, and, as Steve Jobs said, "…start with the customer experience and work backwards to the technology."

[i] Crossing the chasm: From campaigns to always-on marketing, https://www1.pega.com/insights/resources/crossing-chasm-campaigns-always-marketing , December 2017

[ii] CEB Press Release, https://news.cebglobal.com/press-releases, May 2012

8 AI trends for Martech

In this 8th and final short video in my Machine Marketing Series, I give my views on "The HOTTEST AI trends for Martech" to keep your eyes on in 2018.

 

I cover eight key AI trends to keep a watch on:

  • AI data and processing speed
  • Natural language processing (NLP)
  • Image recognition
  • Natural language generation (NLG)
  • Automation and process management
  • Transparent / Explainable AI
  • One AI brain
  • AI organizational dynamics