Consumer Trust vs. Artificial Image Usage

Feb 3, 2025 · 5 min

You’re researching a new medication online, looking for options to manage a chronic condition. You find a website showing images of a doctor smiling warmly while consulting with a patient, or a lab technician holding a test tube that contains a promising breakthrough drug. It looks trustworthy. Professional.

But what if it’s not real?

What if those images weren’t taken in a clinic or lab but were generated by artificial intelligence to evoke your trust? What if the doctor in the picture doesn’t exist, and the lab scene never happened? Would it matter?

This is the challenge we now face. AI is revolutionizing industries, including healthcare, where trust is paramount. The technology that can create compelling visuals also risks undermining that trust—especially if consumers are unaware that what they see isn’t real.

Let's Play a Game

To understand just how realistic AI-generated imagery can be, let’s put your skills to the test.

Below are pairs of side-by-side images: in each pair, one is an actual photograph, and the other was created using artificial intelligence. Look closely at both and pay attention to details like lighting, textures, and proportions. Once you’ve made your guess, press the “Reveal” button to see the answer.

[Interactive quiz: eight image pairs, each with a “Reveal” button.]

Well, how did you do?

AI can create imagery so realistic that even trained eyes might struggle to spot the difference. The lighting, the expressions, the details—they all seem legitimate. But there are subtle tells: a mismatched shadow here, a too-perfect composition there.

This blurring of the line between real and artificial has significant implications for trust in marketing. If a patient—or any consumer—places confidence in fabricated visuals, what happens when they discover the truth?

The Growing Challenge of Transparency

This isn’t a hypothetical concern for the future—it’s already happening. AI-generated images are becoming commonplace across industries. While the technology offers exciting possibilities, it raises a critical question: Do consumers have the right to know what’s real and what’s artificial?

For businesses, particularly in healthcare or education, this isn’t just a matter of ethics; it’s about preserving trust, credibility, and integrity in their relationships with customers.

The Legal Line: What Does the FTC Say?

The Federal Trade Commission (FTC) is clear that transparency is non-negotiable. According to its Advertising FAQs, businesses must disclose material facts that could influence consumer decisions. This includes the use of AI-generated visuals, particularly in industries like healthcare, where credibility directly impacts consumer well-being.

Consider this scenario: a medical practice uses AI to generate an image of a doctor consulting with a patient, but the interaction never occurred. Similarly, a pharmaceutical company might use AI to create a lab breakthrough scene to suggest innovation. Without disclosure, these visuals could mislead consumers into believing something untrue, violating FTC regulations.

The FTC’s rules aren’t just guidelines—they’re safeguards. There’s no room for deception when consumer trust is at stake, especially in industries where lives, and not just wallets, are on the line.

Ethical Dilemmas in AI and User Experience

Trust is a cornerstone of any successful business, no matter the industry. Customers rely on authenticity to make informed decisions. Yet, AI enables companies to generate visuals that look genuine but may lack any real-world grounding.

Imagine a company advertising a sustainable product. Using AI, they create an ad showing a family enjoying a picnic in a pristine park, projecting an eco-friendly image. But if the scene isn’t tied to real-world practices or testimonials, the company risks misleading its audience.

What’s the harm? Consumers might interpret the image as proof of the product’s values or benefits. A company’s credibility crumbles when trust is based on fabricated visuals rather than genuine experiences.

As designers and marketers, we must ask ourselves: Are we creating content to inform and build trust, or are we leaning into emotional manipulation? The answer matters, especially as consumers grow more aware of AI’s capabilities—and more skeptical of its misuse.

A Simple Solution: Transparency Through Labeling

Here’s a practical way forward: require an “AI” label on all images and videos generated or modified using artificial intelligence.

An “AI” label would help preserve trust and encourage businesses to use AI responsibly. This isn’t just a compliance issue for companies—it’s an opportunity to demonstrate ethical leadership. A small logo or watermark can make a big difference in fostering consumer confidence.
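To make that concrete, here is a minimal sketch of what automated labeling could look like, assuming a Python workflow with the Pillow imaging library; the function name, badge styling, and file paths are illustrative choices, not an established standard.

```python
from PIL import Image, ImageDraw

def add_ai_label(path_in: str, path_out: str, label: str = "AI-generated") -> None:
    """Stamp a small, semi-transparent disclosure badge in the
    bottom-right corner of an image. Illustrative sketch only."""
    img = Image.open(path_in).convert("RGBA")

    # Draw the badge on a transparent overlay so the base image stays intact.
    overlay = Image.new("RGBA", img.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)

    # Measure the label text (Pillow's default font) and anchor the badge
    # to the bottom-right corner with a little padding.
    left, top, right, bottom = draw.textbbox((0, 0), label)
    text_w, text_h = right - left, bottom - top
    pad = 8
    x = img.width - text_w - 2 * pad
    y = img.height - text_h - 2 * pad

    # Dark translucent background with white text for legibility.
    draw.rectangle((x, y, img.width, img.height), fill=(0, 0, 0, 160))
    draw.text((x + pad, y + pad), label, fill=(255, 255, 255, 255))

    # Composite the overlay onto the image and save a flattened copy.
    Image.alpha_composite(img, overlay).convert("RGB").save(path_out)

# Hypothetical usage: add_ai_label("hero.png", "hero_labeled.jpg")
```

A visible badge is only half of the picture: pixels can be cropped out, so real deployments would likely pair it with machine-readable provenance metadata, such as the C2PA Content Credentials standard, giving platforms and browsers a way to verify the disclosure.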

The next time you see an image in a business ad, ask yourself: Is this real? And more importantly, does it matter? Transparency doesn’t weaken trust—it strengthens it.

In today’s world, trust is a foundational part of every consumer relationship. By adopting clear, transparent practices like labeling AI-generated visuals, businesses can ensure that AI becomes a tool for connection rather than a source of doubt.

With honesty and clarity, we can build a future where AI supports better consumer experiences—without sacrificing integrity. Because, at the end of the day, trust isn’t built on flawless imagery. It’s built on truth.

© 2024 Nate Maxwell-Doherty