Facing disruptions to new ways of doing business, marketers have the opportunity to take action now.
The ad industry may soon face disruptions to how it uses AI to conduct business, as calls for regulation of the new technology grow louder and more urgent.
This week, the Center for AI Safety released a statement warning of the “extinction”-level risks that AI poses to humanity, one co-signed by more than 350 of the field’s top developers. One of those signatories, OpenAI CEO Sam Altman, appeared in front of Congress last month, where he urged the legislative body to quickly regulate the technology that has taken hold of the ad industry in recent months.
The European Union is developing its own set of policies, seeking to provide a framework for compliance so that individual nations will not have to act alone. Italy, for example, recently banned ChatGPT for nearly a month over issues pertaining to privacy. Access was restored once OpenAI made changes to the platform, such as an age verification tool to be used upon registration.
Amid this wave of impending policy, brands risk major upheavals to the ways they’ve become accustomed to using AI. Aware of the many problems posed by these tools, brands are feeling a “paralysis” over how to activate responsibly, said Kai Tier, VP and executive creative technology director at R/GA.
“No one wants to venture into unwanted territory,” he said.
While it is unclear exactly how Congress will regulate AI, brands can take action now in order to be prepared for whatever may come, experts told Ad Age.
Establishing standards
As a stopgap until government regulation arrives, every company needs to establish its own standards for how to approach AI tools, said R/GA’s Tier. Doing so essentially comes down to figuring out what makes sense for your brand’s needs and concerns.
For example, a brand may want to limit AI tools to internal work, and even then apply them only to the most basic uses, such as compiling concept art, in order to keep a healthy distance from copyright-infringement issues.
Marketers should also implement their own methods of risk mitigation, said Lartease Tiffith, executive VP for public policy at the Interactive Advertising Bureau (IAB). One example, he proposed, is having humans review AI outputs to ensure there aren’t inaccuracies. Despite its human-sounding answers, generative AI has a reputation for sometimes spouting information that appears factual but is not—a tendency often referred to as “hallucinating.”
Another internal practice companies should consider is holding regular conversations with AI partners to ensure everyone is on the same page, said Tier. Tech companies don’t have a great track record of following the rules and doing right by everyday users. As such, brands should verify, to the best of their ability, that these partners are acting responsibly and preparing for government regulation.
Much of the policy to come, however, is expected to fall squarely on AI companies. Developers may be required to obtain special licenses to create AI, for example, and to follow compliance rules once licensed, said Ivan Ostojic, chief business officer of telecommunications company Infobip. Lina Khan, chair of the Federal Trade Commission, recently exhorted regulators, in an opinion piece published in The New York Times, to crack down on everything from collusion and monopolization to fraud enablement and privacy infringement.
Transparency with consumers should be another consideration for brands as they await official regulation.
“Advertisers need to make sure they aren’t surprising anyone,” said Tiffith. However a brand decides to go forward with AI, he suggested, it should properly disclose its use of such materials.
The media industry has already seen how not disclosing this information can cause significant backlash. Earlier this year, tech outlet CNET was found to be quietly using AI to write short articles and attributing them to “CNET Money Staff.” The articles themselves were later discovered to contain numerous inaccuracies. Now, CNET writers are demanding assurances as the outlet’s reputation reels.
What brands are saying
Salesforce, which has launched a CRM-focused AI model called Einstein GPT, expects regulators to consider how generative AI would interact with existing data protection laws in order to mitigate harm to users, Hugh Gamble, Salesforce’s VP of federal affairs, wrote in an email. The EU is already doing so with respect to its privacy law, the General Data Protection Regulation. The U.S. has no such wide-reaching framework, however, leaving regulators without a backdrop against which to craft policy.
In the meantime, Salesforce has established ethical guardrails for its own products. The company is also staying in touch with regulators as the situation develops.
Plant-based food brand NotCo, which has used generative AI in recent ads, views regulation of the industry as a significant dilemma: it could be seen as hindering innovation, especially given that the new tools have already been used to solve real problems. The brand does not expect policy any time soon.
“We are not preparing for any impending regulations at this time,” wrote Aadit Patel, VP of product and engineering at NotCo, in an email.
Other brands are taking a more restrictive approach. Most likely out of caution, some BBDO clients have rejected agency work that used generative AI, Ad Age previously reported. In April, BBDO Worldwide President and CEO Andrew Robertson issued a memo urging employees to refrain from using AI tools in client work unless formally permitted to do so by the agency’s legal team.
“While we are excited by the potential to incorporate generative AI into our services, we want to do so in a way that avoids unresolved issues such as potential violations of copyright and ownership and confidentiality concerns,” Robertson wrote in the memo.
And some companies are outright prohibiting employees from using AI tools under any circumstances. Apple last week became the latest to do so, barring platforms like ChatGPT over concerns that confidential data could be divulged. JPMorgan Chase, Samsung and Verizon have implemented similar lockdown policies.
While the IAB supports AI experimentation, Tiffith understands why companies may want to proceed with such caution.
“Brands should avoid becoming guinea pigs for when something goes wrong,” he said.
This article was written by Asa Hiken from Ad Age and was legally licensed through the Industry Dive Content Marketplace. Please direct all licensing questions to [email protected].