Tripp Scott

  • Sponsored Report

Business Use of AI: Caution is Critical

The embrace of generative artificial intelligence has grown significantly since ChatGPT’s public debut in 2022. It’s estimated that AI today has 350 million users, including 75% of businesses and 92% of Fortune 500 firms. Some expect it to top 950 million users globally by 2030.

The reason is clear: AI can streamline processes, handle rote — and increasingly complex — tasks, and ease human workloads. However, the euphoria surrounding these business-enhancing possibilities must be accompanied by caution. News of inaccuracies, “hallucinations” and copyright infringement, among other issues, should put all users on high alert about embracing this still-emerging technology without restraint.

What questions should your organization be asking before rolling out AI across the enterprise?

What legal concerns surround AI? In a word: many. Some content creators, including authors, publishers, and companies, claim that AI developers violate copyright law by using published material to train their systems. Developers argue such uses fall under the “fair use” doctrine. Courts are divided. Regardless of how the courts ultimately rule, companies turning to AI must ensure that the content they generate is original and violates no copyright or other legal protection.

What about AI’s accuracy and bias issues? That’s a growing concern. Grabbing the headlines is news of deliberate “deep fakes” — dark, hostile, even bigoted AI-generated videos of individuals that are indistinguishable from authentic footage — and of AI systems that have gone rogue in creating abusive content. Of more concern for the average business are AI “hallucinations”: content that presents completely false information as if it were true. Examples are mounting of documents likely generated by AI that contain fake citations, from recent government reports to court filings. AI developers, who often prioritize innovation over accuracy, claim they’re moving fast to reel in falsehoods. The onus remains on users, however, to separate fact from fiction.

What are my company’s responsibilities regarding the use of AI-generated content? There is no defense — business, legal or ethical — for a company accepting false, AI-generated content as fact. To paraphrase the legal maxim, “ignorance of an AI falsehood is no excuse.” If your company relies on AI to generate content, especially content destined for public-facing use, you must verify the accuracy of all of it.

How should organizations respond? Simply put, cautiously. Organizations, including nonprofits, should avoid publishing AI-generated content that resembles copyrighted works or that has not been internally fact-checked. If your organization increasingly relies on AI-generated content, discuss the implications with your legal department or outside counsel. Internal guidelines are also advisable, so employees know the expected practices for acceptable use of AI in research and content creation.

Until developers resolve the problem of AI-generated misinformation, and companies have fact-checking policies in place, AI may be best used to aid research or to help guide internal decision-making. This is no time for an unbridled embrace of a still-emerging technology.


Seth Donahoe
Seth Donahoe is a director at Tripp Scott. The majority of Seth’s litigation practice is state and federal court litigation of business disputes.

For more than 50 years, Tripp Scott has played a leadership role in issues that impact business.
Learn more at TrippScott.com.