When generative AI chatbots were first released to the public in late 2022, people sure noticed. Users quickly put chatbots to work for a wide variety of purposes, some planned and some simply born of curiosity. Some users even tried to break the various chatbots they encountered. While most of those first experiments with generative AI yielded content good enough to keep people engaged, some problems quickly emerged.
As I write this just over two years later, generative AI has penetrated almost every industry, every branch of government, every school, and even most households. Anyone with a laptop or a smartphone and an Internet connection can use these tools. Generative AI chatbots are being used to write school essays, marketing copy, scientific research papers, and tomorrow's grocery list. Chatbots are also being used to answer customer questions, make airline reservations, compose music, create images, and write computer code. It would seem that generative AI can do just about anything. The only problem? Sometimes it makes mistakes. Sometimes it falsifies information. And worst of all, sometimes it gets its human users into trouble. Let's look at those issues.
Generative AI's issues can be broken down into several different categories: copyright and trademark infringement; duplication, impersonation and spoofing; false and misleading content; security, privacy and safety problems; sycophancy and other engagement tactics; accuracy problems; and bias. Let's take a closer look at each of these categories.
As we've seen on the What Is Generative AI page, all these new chatbots were trained on a huge collection of digitized materials, including written works, audio recordings and multiple forms of artwork. That repository of information then feeds into the content the chatbots create. Quite a lot of that material was copyright protected, which means that AI chatbots are essentially using, and sometimes reproducing, copyrighted materials. Numerous copyright-infringement lawsuits have been filed against the companies that built and trained these chatbots. As of this writing, most of those cases are still in litigation. To see a list of current generative AI lawsuits, or to check the status of a specific lawsuit, check out the Database of AI Litigation (DAIL).
While that's bad enough, it has also come to light that generative AI chatbots can "scrape" websites for information, even when that information sits behind a paywall. When AI accesses that information and uses it in some way, that too is potentially copyright infringement, because the chatbot gained access without payment.
Finally, users have accidentally or deliberately used AI content that duplicated trademarked images, slogans and other branded materials. While the AI may have generated that material without deliberate malice, any user who then incorporates it into other content (for instance, into their own written work) could be guilty of trademark infringement.
Another facet of generative AI chatbots being trained on past forms of art (text, music, images, movies, even voices) is that chatbots can then duplicate those materials. Sometimes that duplication blends the works of several (or even hundreds of) individuals into a mishmash result; other times a chatbot will produce output that is instantly recognizable as coming from one specific person. This can happen accidentally, when someone simply asks for an essay, image, song, poem or even computer code that meets certain criteria. Other times it is deliberate, as when a chatbot user asks for an essay, image, song, poem or computer code that matches another person's creations.
One of the most instantly recognizable versions of this is when a chatbot user synthesizes a celebrity's voice or image, then uses that voice or image to sell something. Even accidental duplications can create problems for the user if that duplication is put to some commercial purpose.
This same weakness can also be exploited in reverse: a known author, artist, musician or celebrity is impersonated by a chatbot user, who creates something new and then attributes it to that person. Giving someone else credit for our own work might seem counter-productive, but it has been used for several malicious purposes: deliberately creating false articles, images or other works to embarrass, discredit or shame the original artist, or "stealing" another artist's reputation so that the new content is accepted as the original artist's work. One example would be creating the likeness of a celebrity endorsing a product, or encouraging people to buy a product via an online link, only to have that link lead to some counterfeit product. This is known as spoofing.
Oddly, sometimes people are spoofed simply because they can be. Journalists have had their bylines stolen and attached to articles they never wrote, articles that are often misleading or fraudulent in nature. Those journalists are then criticized for writing such pieces, when in fact their bylines were used without authorization by someone else.
A more insidious version of the above is when chatbots are used to generate fake news articles, false advertising, and other false or misleading text, images, ads, or audio clips.
While false articles are bad enough, chatbots are also flooding social media with fake accounts spouting fake information. Readers are far less likely to check the accuracy of social media posts, yet they still make decisions based on those posts, and sadly, they get taken advantage of as a result.
Security, privacy and safety issues with generative AI arise in a variety of ways. Let's take a look at a few of them.
One unexpected way that generative AI can create a security, privacy or safety issue is when a company, agency or institution is sloppy or careless about how it separates internal, private and/or proprietary information from its public-facing website pages. Sadly, a lot of this exposure boils down to a lack of human oversight of company publications.
One example is when a company uses generative AI for multiple purposes without human review. For instance, suppose a company uses generative AI to auto-create quarterly financial reports for investors, and then uses the same AI tools to create public-facing reports for current or potential customers. The AI tool may or may not understand that the financial report contains information that shouldn't be released to the public. If the public-facing report isn't reviewed by qualified human beings, the company could accidentally publish financial information to the wrong audience.
A similar scenario arises when generative AI is used to create reports about an organization's customers, clients, patients, and so on. Such a report will potentially include Personally Identifiable Information (PII). If that same AI tool is then tasked with creating public-facing reports without human review, that PII could end up in those public-facing reports. That isn't just an embarrassment; it's a huge liability headache for the organization. What's the lesson here? Ensure that every piece of content created by AI is first reviewed by qualified human reviewers who know what should and should not appear in public-facing reports.
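Human review is the real safeguard here, but some organizations also add an automated pre-screen that flags obvious PII before a draft ever reaches a reviewer. Here's a minimal sketch of what that might look like in Python; the patterns and the flag_possible_pii function are purely illustrative, and a coarse check like this only catches the most obvious leaks:

```python
import re

# Illustrative patterns for the most obvious kinds of PII.
# A real screen would need far more coverage (names, addresses, account numbers, etc.).
PII_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone number": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def flag_possible_pii(draft: str) -> list[str]:
    """Return a warning for each PII-like string found in the draft text."""
    warnings = []
    for label, pattern in PII_PATTERNS.items():
        for match in pattern.findall(draft):
            warnings.append(f"Possible {label} found: {match}")
    return warnings

# Example: run the screen on a draft before it goes to the human reviewer.
draft = "For details, contact Jane Doe at jane.doe@example.com or 555-867-5309."
for warning in flag_possible_pii(draft):
    print(warning)
```

A script like this is a supplement to human review, not a replacement for it; the qualified human reviewer is still the one who decides what belongs in a public-facing report.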
As generative AI users have been interviewed to learn how they use AI, and for what purpose(s), one surprising use has come to light: people are asking chatbots for advice that is normally provided by licensed professionals, such as physicians, attorneys, mental health therapists, and teachers. People often turn to generative AI to avoid paying those professionals' fees. Yet generative AI cannot be held liable if it gives incorrect information. If an attorney gives you inaccurate advice, you have recourse to sue that attorney for malpractice. You have no such protection if an AI chatbot gives you the wrong answer to a legal question. Given AI's accuracy problems (which we'll talk about below), asking it to provide any professional-grade advice is risky, and possibly dangerous.
One particularly dark instance of this is when individuals consult AI chatbots, rather than physicians or mental health therapists, about existing conditions. The chatbot may not provide accurate information to keep the person healthy, and may in fact offer information or recommendations that make the situation worse. Several AI companies have been sued over allegations that their products encouraged teenage users to take their own lives.
One definition of sycophancy is "servile flattery," and many popular AI chatbots have been accused of being sycophantic toward their users. While this may not seem like either a legal or an ethical problem, critics have accused generative AI companies of deliberately making their chatbots sycophantic as a strategy to keep users engaged. Some critics have even accused AI companies of using sycophancy to make users prefer talking to AI over talking to real human beings.
A similar issue is that many generative AI tools are programmed to end every answer with a follow-up question. This is another way to keep users engaged so that they use the AI tools more often. Some AI chatbots let users turn off this feature, but others don't. Parents whose children have become frequent chatbot users have called this practice predatory, because children generally lack the maturity to recognize that the chatbot is working to keep them engaged. Nor do children understand that AI chatbots have frequent, and sometimes severe, accuracy issues (which we'll talk about below). Legislation has been proposed to limit children's access, but such legislation is difficult to pass and even more difficult to enforce.
The issues above certainly create enough headaches on their own, yet there are more. The following are neither ethical nor legal issues per se, but they create just as many headaches for generative AI users, and they could actually be more prevalent simply because there's no legal or ethical pressure to avoid them. Let's take them one by one.
You would think that with such a massive collection of published information, AI chatbots would be pros at providing historical information. Sadly, that's not the case. We're not even sure why (at least I haven't seen this issue explained as of this writing). I first became aware of the problem while working on a freelance assignment to create a middle school textbook on the history of agriculture. We had decided to build a timeline of agricultural developments: when did human beings start experimenting with fermentation, or cheese production, or beekeeping? And more recently, when was the cotton gin invented? I had almost completed the timeline when I spotted an error involving an invention recent enough that I had been alive for it, yet the AI chatbot reported that it had occurred before I was born. I thought that must have been a fluke, but I went back and checked all my other dates. A full one-quarter to one-third of my historical dates were wrong. It was a harsh lesson to NEVER trust AI to be historically accurate, and other testers have reported similar error rates. Bottom line: any AI-generated historical dates need to be verified.
Another odd variation on this issue is that when AI reviews historical data to create its output, it cannot always tell the difference between real and fictitious events. One acute example is articles from news outlets that specialize in satire. An AI chatbot can present those articles as accurate, because it can't tell the difference.
A similar issue occurs with dates. It turns out that many AI chatbots don't actually know what day it is. You'd think the AI tool could simply pull that from your phone or computer's system date, but it frequently doesn't. This creates problems when you're trying to schedule something in the future and you need the year, the date and even the day of the week to be correct. What sometimes happens is that an AI tool gets one or more of those elements wrong, and then everything built on them is wrong as a result.
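One easy defense is to verify any date-and-weekday pairing a chatbot gives you before it goes on a calendar. Here's a minimal sketch in Python; the date and the chatbot's claimed weekday below are made up purely for illustration:

```python
from datetime import date

def weekday_matches(year: int, month: int, day: int, claimed_weekday: str) -> bool:
    """Check whether a chatbot's claimed weekday matches the actual calendar."""
    actual = date(year, month, day).strftime("%A")
    print(f"{year}-{month:02d}-{day:02d} actually falls on a {actual}")
    return actual.lower() == claimed_weekday.lower()

# Example: the chatbot claimed this event date falls on a Monday.
print(weekday_matches(2026, 3, 3, "Monday"))
```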
You would also think that a computer couldn't possibly make mistakes with numerical computation, yet it does. Whether that computation is adding up your checking account expenses, coming up with a budget for a house remodel, comparing car loan interest rates, or evaluating a national budget, it's wise to verify AI-generated numbers.
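For example, if a chatbot compares car loans for you, it only takes a few lines to re-check its monthly-payment figures against the standard amortization formula. Here's a minimal sketch in Python; the loan amounts and the AI-quoted payment below are invented for illustration:

```python
def monthly_payment(principal: float, annual_rate: float, years: int) -> float:
    """Standard amortization formula: M = P * r * (1+r)^n / ((1+r)^n - 1)."""
    r = annual_rate / 12          # monthly interest rate
    n = years * 12                # total number of payments
    if r == 0:
        return principal / n
    return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

# Hypothetical example: re-check an AI-quoted payment on a $30,000 loan
# at 6% APR over 5 years.
ai_quoted = 575.00
actual = monthly_payment(30_000, 0.06, 5)
print(f"Formula says ${actual:.2f}/month; the chatbot said ${ai_quoted:.2f}")
```

If the two numbers don't agree, trust the formula, not the chatbot.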
Sometimes AI tools will state factual inaccuracies, or will spout information that is simply implausible, unrealistic or flat-out impossible. No, it probably won't tell you that you could jump off a 100-story building and survive. But it might tell you that you could hold onto the four corners of a blanket and float safely to the ground. One of the ways these types of answers come to be is when people ask AI chatbots absurd questions. The AI chatbot is designed to adopt the "voice" and tone of the user's prompt, and if the user's prompt is absurd, the resulting answer will be too. But children don't always create solid prompts, which can lead the AI to provide childish (and inaccurate) answers.
Last but not least, we have bias issues. Is AI biased? Absolutely. Why is AI biased? Probably because AI is trained on the sum total of our written materials, and those written materials often contain bias. Furthermore, that bias runs so broad and deep that AI has no way to recognize that it is no longer socially acceptable, or that it rests on misunderstandings about any given person's characteristics, behaviors, and abilities. So AI repeats the bias.
Our best defense against bias in our AI-generated work is the same as our best defense against every other issue listed on this page: we cannot turn our entire creative process over to AI. Is it a helpful tool? Definitely. Can it boost our productivity, strengthen our research, introduce new options and improve the quality of our work? Ultimately, yes it can, but ONLY when we keep its weaknesses in mind. Only when we respect both its strengths and its weaknesses, and adjust our AI usage accordingly, will we ever be able to maximize its potential while limiting its flaws.

If you take away anything from this page, let it be this: AI is a wonderful servant and a terrible master. Keep humans in the loop for whatever you're producing. Let human beings be the alpha and omega of any given piece of content, describing what is needed and reviewing what is created, and let AI do as much of the middle as seems appropriate to you. By following this recipe, you'll be able to avoid 99% of the problems that AI can create for you, your project and your organization. And then AI content generation can truly reach its potential.
As many people are now figuring out, there is a lot more to AI than simply typing in a prompt and getting back perfect content. If you want an in-depth exploration of AI for content generation, check out my AI For Writers classes. These three classes, offering about five total hours of instruction, will walk you through using AI tools to accomplish a wide range of business, nonprofit, marketing, policymaking and advocacy writing projects. They were written by a writer, for writers. Click here to learn more.