“Know. Like. Trust.”
In the marketing world, it’s an adage as old as time. And while the glittering allure of the cost efficiencies AI makes available can be hard to resist, consumers are aware of the change, and they don’t necessarily like it.
If authenticity and trust are the currency of consumer relationships, what happens when customers feel like they’re interacting with machines instead of humans? From marketing campaigns to content to feedback surveys to customer service interactions, the implications are profound, affecting not only the credibility of individual brands but also entire trust-based ecosystems like reviews and testimonials.
Traffic and trust: tread carefully when using AI in creative campaigns
What’s the point of content marketing, really?
It comes down to two things: generating traffic and building trust.
Some people might argue that the quality of a piece of content should stand on its own. If it’s entertaining, engaging, and can hold your audience’s attention, does it matter if it was AI-generated?
Look at Coca-Cola as an example. The company recently released a reboot of its classic ‘Holidays Are Coming’ campaign. Only this time, the graphics were AI-generated, and the general public was not impressed.
Truescope’s investigation into public sentiment about the campaign found that consumers raised questions around:
- Visual details of the campaign that seemed ‘off’
- The technology’s inefficient energy use
- Opportunities being taken away from artists and actors
- AI graphics that made the campaign feel ‘soulless and unfeeling’
Don Anderson, CEO of Kaddadle Consultancy, remarked that the end results “lacked the necessary emotional effects which Coca-Cola’s annual campaign is known for”, pointing to the absence of human faces as a key reason the campaign failed to feel genuine.
And while it might be appealing to use AI to generate creative content at scale, it’s important to remember that generative AI is trained on content that already exists and essentially produces an average of what’s already out there. Lean too heavily on AI for your content and your voice will start to sound exactly like everyone else’s, which defeats the purpose of putting content out in the first place.
“Is it a lifeless and soulless piece of advertising because of the limitations of AI, or is it lifeless and soulless because the humans who commissioned and executed it brought no originality of thought or execution to the AI tools? Hopefully it is proof that you don’t look to the tool for originality of thought or craft – you look to people.”
- Nicholas Hulley, chief creative officer of AMV BBDO
AI and customer experience
One area where AI has had more of an opportunity to shine is in customer experience. Chatbots custom-trained on vast amounts of data have allowed organisations to meet rising customer expectations when it comes to response time and support availability. But this same efficiency can alienate the very customers it seeks to serve.
When customers place their trust in a brand, they expect more than just accurate answers; they expect empathy, understanding, and authenticity: traits machines simply cannot replicate. There are times when customers come looking for the answer to a simple problem. In those cases, a chatbot can save them the effort of searching for the answer themselves, and the experience is a positive one. However, when customers come searching for a resolution to a more complex issue, frustration can quickly set in. And when a customer feels a sense of connection with your brand or has come to expect a certain standard of service, the disappointment can have far-reaching consequences for your brand’s reputation.
Does AI mark the death of social proof?
Another area where AI is having a growing impact is trust in social proof and word-of-mouth recommendations.
The erosion of trust in online recommendations began well before AI, with the rise of influencer marketing. Once it became common knowledge that influencers could buy followers, what once felt personal and relatable was replaced by suspicion and scepticism. The more recent surge of AI-generated content on social media, and of AI-generated reviews in particular, has only accelerated that erosion.
Social proof, a crucial selling tool, is now at risk thanks to AI-generated reviews running rampant. Online reviews have historically been powerful tools to drive revenue and increase customer spending. They were a trustworthy source of information because writing and leaving fake reviews was such a laborious task that hardly anyone could be bothered.
Trust-based platforms like Trustpilot and other online review systems are particularly vulnerable to misuse of AI. AI can now be trained on previous reviews and generate positive, negative or neutral reviews at scale. And while a flood of fake negative reviews is the obvious threat to your brand’s reputation, growing customer suspicion means you now have to worry about your positive ones too. After all, how can consumers rely on reviews or review platforms to make informed decisions?
The current reality is a growing cynicism that could mirror the downfall of trust in social media influencers—once trusted voices, now seen as transactional mouthpieces.
The not-so-hidden human cost
Today’s consumers are more invested in a business’s transparency and ethical practices than ever. And while organisations often claim that AI adoption won’t impact staffing, the reality tells a different story.
Behind the scenes, jobs are being automated, reducing opportunities for human interaction. This creates a vicious cycle: fewer humans mean fewer authentic connections, which further alienates customers. What’s more, organisations risk losing the institutional knowledge and emotional intelligence that only their staff can provide.
AI adoption in businesses also raises the question of whether we hold artificial intelligence to a standard of ‘integrity’. When hiring a human, organisations carefully filter and interview potential candidates to find one who can not only do the job but also uphold the company’s values. Are organisations applying the same filters and requirements to the AI tools they bring in as they do to the people they hire?
Finding the Balance: Innovation vs Integrity
If in doubt, always remember that AI isn’t the enemy—it’s a tool. Like any powerful tool, the challenge lies in using it responsibly. Organisations must strike a balance between innovation and authenticity, ensuring that automation supports rather than replaces human effort.
A quick framework for ethical AI integration:
- Transparency: Always disclose when AI is being used in customer interactions.
- Oversight: Ensure humans oversee critical touchpoints, blending technology with empathy.
- Integrity: Stay on top of data security and make a concerted effort to prevent the spread of misinformation that could come from your AI use.
- Consistency: Maintain your brand’s voice and values across all channels, regardless of who—or what—is communicating.
AI has the power to transform customer experiences, but with great power comes great responsibility. Over-reliance on automation risks eroding the trust and authenticity your brand was built on. The organisations that thrive in this new era will be those that innovate with integrity, blending the efficiency of AI with the irreplaceable human touch.
Ask yourself: Is your AI strategy building trust or breaking it?
If you’re ready to align your technology with your values, let’s talk about how to future-proof your brand without sacrificing authenticity. Book your FREE 30-minute consultation today.