Friction isn’t always a bad thing, especially when companies are looking for responsible ways to use AI. The trick is learning to differentiate good friction from bad, and to understand when and where adding good friction to your customer journey can give customers the agency and autonomy to improve choice, rather than automating the humans out of decision-making.
In marketing circles, friction has become synonymous with “pain point.” Eradicating it, conventional wisdom goes, is crucial to building a customer-centric strategy that yields competitive advantage. Taking a cue from policy applications of behavioral economics, marketers seek to “nudge” people along the customer journey and remove friction in the battle against “sludge.” At many firms, “artificial intelligence” has become the go-to tool for creating frictionless experiences and removing impediments that slow down efficient customer journeys. While this was true before the pandemic, Covid-19 has only hastened this digital transformation trend by creating demand for more contactless customer experiences that reduce points of potential friction, like in-person human interactions.
Consider the recent launch of Amazon One, which uses “custom-built algorithms” to turn the palm of one’s hand into a contactless form of payment, identification, and access. “We started with the customer experience and […] solved for things that are durable and have stood the test of time but often cause friction or wasted time for customers,” the company announced. This eliminates steps like searching for one’s wallet or interacting with a human (though it is unclear whether customers receive commensurate benefits in exchange for their biometrics). In the same vein, Hudson and Aldi stores have recently launched frictionless retail that allows customers to “just walk out” with their purchases, skipping the traditional checkout process. This embrace of frictionless customer experiences is not limited to retail: Facebook recently introduced “smart glasses” with AI that allow users to be constantly and effortlessly online, an approach the company calls “ultra-low-friction input.” Even schools in the UK have adopted facial recognition technology in cafeterias that removes friction from queues and speeds up checkout transaction times.
No doubt removing friction-based pain points can be beneficial, as in the case of simplifying healthcare systems, voter registration, and tax codes. But, when it comes to the adoption of AI and machine learning, “frictionless” strategies can also lead to harm, from privacy and surveillance concerns, to algorithms’ capacity to reflect and amplify bias, to ethical questions about how and when to use AI.
Cognitive biases can undermine optimal decision-making, and decisions about the application of AI are no different. Humans can hold biases against or in favor of algorithms; in the latter case, people presume greater AI neutrality and accuracy even after being made aware of algorithmic errors. And even where there is aversion to algorithmic use in consequential domains (like medical care, college admissions, and legal judgments), a biased decision-maker’s perceived responsibility can be lowered if they incorporate AI input. Yet the pace of investment is only increasing: The 2021 Stanford AI Index reports that total global investment in AI increased by 40% in 2020 relative to 2019, to $67.9 billion.
Nevertheless, 65% of executives cannot explain how their AI models make decisions. Therefore, executives who seek to improve customer experiences must embrace “good friction” to interrupt automaticity in the application of “black box” AI systems. The promise of AI is tremendous, but if we are to be truly customer-centric, its application requires guardrails, including systemic elimination of bad friction and the addition of good friction. Friction isn’t always a negative — the trick is differentiating good friction from bad and auditing systems to determine which is most beneficial. Companies should analyze where humans interact with AI and investigate areas where harm could occur, weigh how adding or removing friction would change the process, and test these modified systems via experimentation and multi-method analyses.
Finding Good Friction
What is good friction, and how can you differentiate it from bad friction in customer experiences? Good friction is a touchpoint along the journey to a goal that gives humans the agency and autonomy to improve choice, rather than automating the humans out of decision-making. This approach is decidedly human-first. It allows consumers to reasonably consider their choices, lets management teams test options against user needs, and makes the implications of those choices clear. It may also enhance the customer journey by engaging users in increased deliberation or better co-creation of experiences.
Significantly, good friction doesn’t necessarily diminish the customer experience; in fact, it can lead to brand advocacy. For instance, it may not be automatic or frictionless to offer customers more agency over their data, to make transparent how personal data are being used, or to place human welfare over engagement, but it is better for the humans behind the data points and for society at large. Twitter’s recent crowdsourced “algorithmic bounty challenge,” in which the company asked customers to identify potential algorithmic bias, added good friction to the customer experience in a way designed to increase engagement and mitigate harm. And friction can be good when we need to take time with customers to better understand their needs and unique experiences, a process that can be inefficient (but delightfully so). Communities where customers connect with and inform each other can enhance customer experiences beyond the transaction touchpoint, as can customer service interactions that capture data insights beyond traditional NPS scores (as in the case of Booking.com). These are opportunities to create, not just extract, value.
Bad friction, on the other hand, disempowers the customer and introduces potential harm, especially to vulnerable populations. It places obstacles in the way of human-first digital transformation: incentives that undermine customer agency, or barriers to algorithmic transparency, testing, and inclusive perspectives. For example, when WhatsApp changed its terms of service, users who did not agree to the new terms saw an increase in friction and reduced utility of the app. This asymmetry of friction along the user journey (an easy, incentivized entrance to adoption, followed by barriers to exit) creates a dynamic akin to a lobster trap: an enticing entrance but no agency to exit.
Firms can revisit their customer journeys and conduct “friction audits” to identify touchpoints where good friction could be deliberately employed to benefit the user, or where bad friction has nudged customers into “dark patterns.” Already there are firms and organizations that offer this expertise with regard to combating algorithmic bias. Cass Sunstein has proposed “sludge audits” to root out “excessive and unjustified frictions” for consumers, employees, and investors. Similarly, a friction audit could be a deliberate review of points of friction along the customer journey and in the employee experience (EX).
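For teams that want to make such an audit concrete, here is a minimal sketch of how journey touchpoints might be recorded and classified, assuming the rule of thumb above: friction that preserves customer agency is good, friction that removes it is bad. Every touchpoint and field name below is illustrative, not a prescribed schema.

```python
# A minimal friction-audit sketch. Touchpoints and fields are invented
# for illustration; a real audit would capture far richer context.
from dataclasses import dataclass

@dataclass
class Touchpoint:
    name: str
    adds_friction: bool     # does this step slow the customer down?
    preserves_agency: bool  # does it give the customer a real choice?

def classify(tp: Touchpoint) -> str:
    if not tp.adds_friction:
        return "frictionless"
    return "good friction" if tp.preserves_agency else "bad friction"

journey = [
    Touchpoint("explicit cookie consent", adds_friction=True, preserves_agency=True),
    Touchpoint("one-click auto-renew", adds_friction=False, preserves_agency=False),
    Touchpoint("buried cancellation flow", adds_friction=True, preserves_agency=False),
]

for tp in journey:
    print(f"{tp.name}: {classify(tp)}")
```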
What Firms Can Do
When assessing the role of friction in digital transformation, positive or negative, consider the behavioral tendencies and welfare of customers. Nudging is a powerful tool, but executives must wield it carefully, as it can quickly become manipulative. The good friction aimed at reducing such risk is a relatively small price to pay compared with the customer churn that follows destruction of trust and reputation. Here are three suggestions:
1. When it comes to the deployment of AI, practice acts of inconvenience.
Yes, offering customers more choice can make their journeys appear less convenient (as in the case of default cookie acceptance), but affirmative consent must be preserved. It is also convenient (and comfortable) in organizations to work in homogenous groups, but diversity ultimately combats cognitive bias and results in greater innovation. Take the time to include more representative, cross-disciplinary, and diverse datasets and coders.
But perhaps the first and most critical inconvenient act is for your team to take a beat and ask, “Should AI be doing this? And can it do what is being promised?” Question whether it is appropriate to use AI at all in the context (e.g., it cannot predict criminal behavior and should not be used for “predictive policing” to arrest citizens before the commission of crimes, “Minority Report” style). Deliberately place kinks in the processes we have made automatic in our breathless pursuit of frictionless strategy, and incorporate “good friction” touchpoints that surface the limitations, assumptions, and error rates of algorithms (e.g., AI model cards that list these details to increase transparency). Consider external AI audit partners, who may be less embedded in organizational routines and more likely to identify areas where a lack of friction breeds a lack of critical, human-first thinking, and where good friction could improve the customer experience and reduce risk.
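A model card can be as simple as structured data surfaced wherever the model is used. The sketch below is loosely inspired by published model-card proposals; the model name, fields, and numbers are all invented for illustration.

```python
# A minimal sketch of an AI model card as plain data. This is not a
# standard schema; every value here is a hypothetical placeholder.
model_card = {
    "model": "churn-propensity-v3",  # hypothetical internal model
    "intended_use": "prioritize customer outreach, never deny service",
    "out_of_scope": ["credit decisions", "employment screening"],
    "training_data": "2019-2021 US subscribers; rural users under-represented",
    "known_limitations": ["accuracy drifts after pricing changes"],
    "error_rates": {"overall": 0.08, "rural_subgroup": 0.19},  # invented
    "last_audit": "2021-11-01",
}

# Surfacing the card at the point of use is the "good friction" touchpoint.
gap = model_card["error_rates"]["rural_subgroup"] - model_card["error_rates"]["overall"]
print(f"Review before deploying: subgroup error gap of {gap:.2f}")
```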
2. Experiment (and fail) a lot to prevent auto-pilot applications of machine learning.
This requires a mindset shift to a culture of experimentation throughout the organization; too often, only the data scientists are charged with embracing experimentation. Executives must encourage regular opportunities to test good friction (and remove bad friction) along the customer journey. For example, at IBM all marketers are trained in experimentation, tools for experiments are user-friendly and easily accessible, and contests of 30 experiments in 30 days occur regularly. This requires that management be confident enough to have its ideas tested and to let the lessons about the customer drive the product.
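Each of those 30 experiments can be small. As a minimal sketch, assuming a simple A/B test of checkout completion with and without an added consent step (all counts invented, and the two-proportion z-test just one reasonable choice), the analysis might look like this:

```python
# Toy analysis of one "good friction" experiment: does an explicit
# consent step change checkout completion? Counts are invented.
from statsmodels.stats.proportion import proportions_ztest

completions = [4120, 4018]  # control arm, consent-step arm
exposures = [5000, 5000]    # users randomly assigned to each arm

z_stat, p_value = proportions_ztest(count=completions, nobs=exposures)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
# A large p-value suggests the added friction did not hurt completion;
# a small one quantifies the trade-off made for customer agency.
```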
Re-acquaint yourself and your team with the scientific method and encourage all members to generate testable hypotheses at customer journey touchpoints, testing small and being precise about the variables. For example, Microsoft’s Fairlearn assists with testing algorithms and identifying issues, like disproportionate errors on a sample group where real harm could be experienced, before release. Train widely and make this part of your KPIs to create a culture of experimentation. Plan for lots of experimental failure; the learning is worth the friction. But it’s not just about failing fast; it’s about incorporating the lessons, so make the dissemination of these learnings as frictionless as possible.
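Fairlearn’s MetricFrame supports exactly this kind of pre-release check by disaggregating standard metrics across subgroups. A minimal sketch, with toy data and an invented regional grouping standing in for a real sensitive feature:

```python
# Toy pre-release fairness check with Fairlearn (https://fairlearn.org).
# Labels, predictions, and the grouping are invented placeholders.
import pandas as pd
from sklearn.metrics import accuracy_score, recall_score
from fairlearn.metrics import MetricFrame

y_true = pd.Series([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = pd.Series([1, 0, 0, 1, 0, 1, 1, 0])
group = pd.Series(["north", "north", "north", "north",
                   "south", "south", "south", "south"])

# Disaggregate standard metrics by subgroup to surface hidden error gaps.
frame = MetricFrame(
    metrics={"accuracy": accuracy_score, "recall": recall_score},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=group,
)

print(frame.by_group)      # per-subgroup accuracy and recall
print(frame.difference())  # largest between-group gap for each metric
```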
3. Be on the lookout for “dark patterns.”
Gather your team, map your digital customer journey, and ask: Is it easy for customers to enter a contract or experience, yet disproportionately difficult or inscrutable to exit? If the answer is yes, your customers are likely in a digital version of a lobster trap. This entry/exit asymmetry undermines a customer’s ability to act with agency, and nudging along this type of customer journey can start to resemble manipulation. Examples include subscriptions that frictionlessly auto-renew but whose fine print makes them seem impossible to cancel, and data-sharing “agreements” that mask violations of privacy. Increased transparency into options along the customer journey, though not frictionless, preserves customer agency and, eventually, trust. This is critical for customer loyalty.
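One rough way to quantify that asymmetry during a journey-mapping session is simply to count the steps in and the steps out; the numbers and cutoff below are invented for illustration, not an industry standard.

```python
# Toy heuristic for entry/exit asymmetry. Step counts and the cutoff
# are illustrative placeholders from a hypothetical journey map.
steps_to_enter = 2  # e.g., one click plus payment confirmation
steps_to_exit = 9   # e.g., login, nested menus, phone call, retention offer

asymmetry = steps_to_exit / steps_to_enter
if asymmetry > 2:  # illustrative threshold
    print(f"Lobster-trap risk: exit is {asymmetry:.1f}x harder than entry")
```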
These three tenets center on human-first digital transformation: respect and trust your customers enough to empower them, even if it creates friction at a touchpoint. A confident, responsible brand should not have to engage in sleight of hand or manipulation to boost engagement. It is likely that legislation like an AI Bill of Rights is in our future, so now is an opportune time to develop customer-centric practices. And with Google’s third-party cookies going away, now is the time to change course and create competitive advantage. Already, Apple is positioning itself as a haven for privacy, and DuckDuckGo is positioning itself against Google by prioritizing user agency over access to data. Salesforce’s AI Ethics team has not only created a code of ethics for internal purposes but also helps its enterprise customers adopt AI friction points, like reminders to customers that they are interacting with bots, not humans.
Salman Rushdie noted, “Free societies are societies in motion, and with motion comes friction.” In this way, good friction amidst digital transformation can be viewed as a feature, not a bug. Rather than frictionlessly exploiting information asymmetries in algorithms, seek to co-create experiences with customers, sharing value with them and serving the human first. Firms that embrace customer agency in their application of machine learning will get closer to achieving the buzzed-about “responsible AI.” It’s time we viewed friction not as something to eradicate but as a tool that, when harnessed effectively, can spark the fire of empowerment and agency as well as convenience. This will lead your firm to become not only “customer-centric” but human-first.