

Retailers looking to harness Gen AI must beware of its double-edged sword

By adopting the proper strategies and resources, retailers can realize the promise that gen AI holds for the future


March 1, 2024 by Alexandra Popken — VP of Trust and Safety, WebPurify

In 2023, consumers set a new online shopping record during Black Friday, reaffirming for retailers everywhere the importance of investing in their online platforms. Conveniently, there's a shiny new tool in the form of generative AI technology, which promises endless new customer engagement opportunities. From streamlining shopping experiences with AI chatbot assistants to offering upgraded product personalization, gen AI is being pitched to retailers (large and small) as a compelling tool to stay ahead of the competition.

While designed to help create content more efficiently, gen AI also presents real risks, particularly when it comes to trust and safety. Copyright infringement and bad actors using the technology to fabricate false product and service claims can jeopardize a business's relationship with its consumer base if retailers are not keenly aware of gen AI's faults and prepared to address them during implementation.

Gen AI threatens not only consumer trust, but also the law

Consumers have already expressed hesitancy around gen AI, with a recent study of over 10,000 shoppers revealing that only 20% "mostly" or "completely" trust AI during the shopping process. And who can blame them?

In 2022 alone, Amazon removed over 200 million suspicious customer reviews that threatened the integrity of its platform, while Google found 115 million fake reviews on Maps, a 20% increase from 2021. The majority of these fake reviews were positive or five-star ratings, sowing doubt and jeopardizing customer loyalty with false product promises.

Furthermore, the Transparency Company found more than 100,000 businesses using fake reviews, leaving the Federal Trade Commission little choice but to crack down. Businesses found to be deceiving the public with fake reviews can face fines of up to $100,000, wiping out whatever revenue gains the reviews were meant to deliver. Given gen AI's potential to expose retailers to this kind of harm, how can they harness its power for good?

Retailers must be tactical and transparent with gen AI

Despite its dangers, gen AI adoption is only predicted to keep growing within the retail space, with the technology slated to have a $9.21 trillion impact on the industry by 2029. That promise, however, can only be fulfilled if retailers implement AI strategies that protect the essentials of brand success, such as consumer loyalty and trust. It is therefore imperative that they learn to deploy gen AI tools strategically, protecting not only themselves but also their shoppers.

WebPurify recently conducted a study with Censuswide and found that 75% of consumers believe more can be done to guard them from the potential risks of gen AI online, a finding that underscores retailers' obligation to provide a greater sense of protection when people shop on their websites.

One key element for those incorporating gen AI into business practices is transparency about when and where the technology is used in the shopping experience. With almost half of consumers not confident in their ability to discern gen AI's presence online, providing labels and signals wherever it is used can help quell customers' doubts.
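For illustration, here is a minimal sketch in Python of what such labeling might look like inside a retailer's own product catalog. The field names and disclosure wording are hypothetical and not drawn from any particular platform.

from dataclasses import dataclass
from typing import Optional

@dataclass
class ProductDescription:
    text: str
    ai_generated: bool = False        # set when the copy came from a gen AI model
    disclosure: Optional[str] = None  # shopper-facing label rendered next to the text

def with_ai_disclosure(desc: ProductDescription) -> ProductDescription:
    """Attach a plain-language disclosure whenever the content is AI-generated."""
    if desc.ai_generated and desc.disclosure is None:
        desc.disclosure = "This description was created with the help of AI."
    return desc

listing = with_ai_disclosure(
    ProductDescription(text="Lightweight waterproof hiking jacket.", ai_generated=True)
)
print(listing.disclosure)  # -> "This description was created with the help of AI."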

Platforms like TikTok have already seen success with a transparent approach, introducing new labeling systems to disclose AI-generated content. Meta has gone further, launching Community Forums to educate its users and gather their feedback on AI developments. Engaging openly and transparently with users is the first step toward establishing mutual respect and trust.

Additionally, while product personalization and image-generation tools can drastically enhance the shopping experience, retailers should also protect consumer privacy by allowing shoppers to opt out of any personal data sharing that would use their activity or user-generated content to train the retailer's own gen AI models.
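One way to honor such an opt-out, sketched below under the assumption of a simple per-shopper consent map and an in-memory training queue (both hypothetical), is to make exclusion the default and collect data only when a shopper has explicitly opted in.

# Hypothetical sketch: samples are queued for model training only with explicit consent.
training_queue: list[dict] = []

def collect_training_sample(consents: dict[str, bool], user_id: str, sample: dict) -> bool:
    """Queue a shopper's activity for gen AI training only if they have opted in."""
    if not consents.get(user_id, False):  # default is excluded: no consent, no collection
        return False
    training_queue.append(sample)
    return True

consents = {"shopper-123": False, "shopper-456": True}
collect_training_sample(consents, "shopper-123", {"viewed": "jacket"})  # skipped
collect_training_sample(consents, "shopper-456", {"viewed": "boots"})   # queued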

Moderation is a crucial component that lifts gen AI's burdens off brands

The responsibilities of harnessing gen AI do not have to fall solely on retailers. After all, these tools were designed to make business practices more efficient, not to complicate them further.

Content moderation sets up additional guardrails against gen AI's looming threats. AI-powered moderation systems not only lift extra responsibility off companies; they can also be trained to catch content that could damage the brand.

Human moderators, meanwhile, are trained to catch more nuanced violations that the technology may miss, such as AI-generated fake reviews, and can help shield retailers from damaging content that might land them in hot water with their consumer base or, even worse, the law.
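Taken together, the two layers amount to a human-in-the-loop routing step: automated scoring handles clear-cut cases, and borderline items are escalated to people. The Python sketch below uses hypothetical thresholds and a stand-in risk scorer to illustrate the pattern; it is not a description of any specific vendor's system.

from typing import Callable

def route_review(text: str, score_risk: Callable[[str], float],
                 block_threshold: float = 0.9, review_threshold: float = 0.5) -> str:
    """Return 'blocked', 'human_review', or 'published' from a risk score in [0, 1]."""
    risk = score_risk(text)
    if risk >= block_threshold:
        return "blocked"       # clearly violating content is removed automatically
    if risk >= review_threshold:
        return "human_review"  # nuanced cases, e.g. suspected fake reviews, go to moderators
    return "published"

# Stand-in scorer for illustration; a real system would call a trained model or moderation API.
def fake_scorer(text: str) -> float:
    return 0.7 if "best product ever" in text.lower() else 0.1

print(route_review("Best product ever!!! Five stars!!!", fake_scorer))  # -> human_review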

Companies that play it safe come out ahead of the competition

By adopting the proper strategies and resources, businesses can enjoy the promise gen AI holds for the future while also protecting their customers along the way. Most recently, online fashion retailers' adoption of programs such as PSYKHE AI has demonstrated a fivefold increase in revenue alongside a 26% increase in customer retention, thanks to safe and strategic product personalization.

So long as brands evolve alongside the technology and preemptively counter its dangers, they can expect to reap the rewards and keep pace with their competitors in the ever-shifting retail space. Reckless retailers, however, will lose customers and revenue by failing to recognize gen AI's double-edged sword, ready to strike at any moment.

About Alexandra Popken

Alexandra Popken is the Vice President of Trust & Safety at WebPurify, the leading content moderation service combining the power of AI and humans to keep online communities and brands safe from the risks of user-generated content. Popken joined the WebPurify team at the beginning of 2023 after working at Twitter for nearly a decade as Senior Director of Trust & Safety Operations, where she led a global team spanning the consumer and monetization sectors. She is committed to equipping companies across all industries with the trust and safety knowledge they need to protect their users.
