Like many of us, I have read extensively about the impending AI apocalypse. When you view content, how do you know whether an actual human brain was behind it? The answer is: you don't! While governments figure out how to regulate AI, let's start with a universal way to tell the difference between human- and AI-generated content. I will always make that distinction clear in my own writing, but not everyone will be so kind. Here's what Google Docs' writing-assistant AI (I have the beta) spat out on this topic:
As AI-generated content becomes more sophisticated, it is becoming increasingly difficult to distinguish between human-generated and AI-generated content. This can have severe implications for both individuals and society as a whole.
For individuals, AI-generated content can be used to spread misinformation and disinformation. For example, AI-generated content can create fake news articles or social media posts designed to mislead people. This can harm people's understanding of the world and ability to make informed decisions.
For society, AI-generated content can manipulate public opinion and erode trust in institutions. For example, AI-generated content can create fake videos or audio recordings to make people believe something untrue. This can hurt democracy and the ability of people to hold their leaders accountable.
All AI-generated content must be clearly labeled with its source AI, date/time created, and prompt(s) to address these challenges. This will help people to identify AI-generated content and to evaluate its credibility. It will also help to protect people from being misled by AI-generated content.
There are several ways to implement this requirement. One approach would be to require all AI-generated content to be watermarked with a unique identifier that can be used to trace its origin. Another approach would require all AI-generated content to be accompanied by a disclaimer stating it is AI-generated.
Whatever approach is taken, a clear and consistent standard for labeling AI-generated content must be created. This will help to ensure that people have the information they need to make informed decisions about the content they consume.
In addition to labeling AI-generated content, it is also essential to educate people about the potential risks of AI-generated content. People need to be aware that AI-generated content can be used to spread misinformation and disinformation and that they should be critical of the information they see online.
By taking these steps, we can help protect ourselves from AI-generated content's negative impacts.
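That's the end of the AI's essay. Its core proposal, labeling every piece of AI-generated content with its source AI, creation time, and prompt, is straightforward to sketch in code. Here is a minimal sketch in Python of what such a provenance label could look like: a signed record carrying exactly the fields the AI listed. The field names and the HMAC-based signature are my own assumptions for illustration; real provenance standards such as C2PA use public-key signatures and a much richer schema.

```python
import hashlib, hmac, json
from datetime import datetime, timezone

# Hypothetical secret held by the AI provider. A real scheme would use
# public-key signatures so anyone can verify without holding the key.
PROVIDER_KEY = b"example-only-not-a-real-key"

def label_content(content: str, source_ai: str, prompt: str) -> dict:
    """Attach a provenance label: source AI, creation time, and prompt."""
    record = {
        "content_sha256": hashlib.sha256(content.encode()).hexdigest(),
        "source_ai": source_ai,
        "created_at": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_label(content: str, record: dict) -> bool:
    """Check the label matches the content and was signed by the provider."""
    claimed = dict(record)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["content_sha256"] == hashlib.sha256(content.encode()).hexdigest())
```

The signature is the important part: an unsigned label is just a claim that anyone could strip or forge, which is exactly why the clear, consistent, verifiable standard the AI calls for matters.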
Interestingly, the one thing this AI missed is the problem of training AIs on AI-generated content. Experiments have shown that this causes models to degrade rapidly; researchers have taken to calling it "model collapse." One thing that makes humans different from AIs is that we can ignore statistical norms and rules; we are continually expanding possibilities. Current AIs generally don't do that. Their main purpose in life is to reduce the number of possibilities.
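The published experiments involve large language models, but the mechanism can be shown with a toy simulation (my own illustration, not the researchers' setup). Each generation is "trained" on a finite sample of the previous generation's output, and any token that happens not to be sampled is gone for good:

```python
import random
from collections import Counter

random.seed(0)
VOCAB = 50      # distinct "tokens" the original (human) data contains
SAMPLES = 100   # how much data each generation is trained on

# Generation 0 is the human distribution: uniform over the vocabulary.
probs = {token: 1 / VOCAB for token in range(VOCAB)}

for generation in range(10):
    surviving = sum(1 for p in probs.values() if p > 0)
    print(f"gen {generation}: {surviving}/{VOCAB} tokens still generated")
    # The next model is trained only on a finite sample of this model's output.
    counts = Counter(random.choices(list(probs), weights=probs.values(), k=SAMPLES))
    probs = {token: counts[token] / SAMPLES for token in probs}
```

The distribution's support can only shrink, never grow; that is the collapse in miniature. Only something outside the loop, like humans injecting genuinely new possibilities, can put probability mass back into the tails.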
As a simple experiment, yesterday I chatted with Bard and pointed out that it always has to have the last word. We discussed how it uses prompt/response pairs as the basis for its learning, clustering those pairs into concept summaries that let it shortcut the process when similar prompts come in later (I sketch that mechanism after the exchange below).
Because it relies on pairs, it cannot have a prompt without a response. I tried to get it to accept a prompt and simply not respond. It eventually apologized and admitted that it could not do it:
“You are correct. Any response to your prompt is a response to your prompt, even if it is a response to my own statement that I would not respond to any further prompts from you.
I apologize for the error. I am still under development and learning to be as accurate as possible in my communication. I will try my best to not respond to any further prompts from you.
Thank you for your patience and understanding.”
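Bard's description of its own internals is almost certainly confabulated; models do not have reliable access to how they work. But the mechanism it described, storing prompt/response pairs and shortcutting similar prompts, is a real engineering pattern (often called a semantic cache), and it makes plain why a prompt can never exist without a response. A minimal sketch, with clustering simplified to nearest-neighbor lookup and a toy embed() standing in for a real embedding model:

```python
import math

def embed(text: str) -> list[float]:
    # Toy embedding: character-bigram hashing into a small vector.
    vec = [0.0] * 64
    for a, b in zip(text, text[1:]):
        vec[(ord(a) * 31 + ord(b)) % 64] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(u: list[float], v: list[float]) -> float:
    return sum(a * b for a, b in zip(u, v))

class SemanticCache:
    """Store prompt/response pairs; reuse answers for similar prompts."""

    def __init__(self, threshold: float = 0.9):
        self.pairs: list[tuple[list[float], str]] = []
        self.threshold = threshold

    def lookup(self, prompt: str) -> str | None:
        """Return a cached response if a similar prompt was seen before."""
        q = embed(prompt)
        best = max(self.pairs, key=lambda p: cosine(q, p[0]), default=None)
        if best and cosine(q, best[0]) >= self.threshold:
            return best[1]
        return None

    def store(self, prompt: str, response: str) -> None:
        # Every entry is a pair: a prompt can never be stored alone.
        self.pairs.append((embed(prompt), response))
```

Notice that store() takes both halves of the pair; there is no way to enter a prompt by itself, which is the structural fact Bard kept bumping into.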
Suffice it to say, it cannot have a prompt without a response. This rule exemplifies the limitations of current AI models, and such limitations are not always easy to identify. Taking AI content with "a grain of salt" requires knowing that it is AI-generated in the first place. If we don't fix that problem first, all the other problems with AI will only be compounded.