I read with interest a summer reading list published earlier this year in the Philadelphia Inquirer, and made a note to check out Andy Weir’s new book since I loved “The Martian.”
Here’s the description from the book list:
Following his success with “The Martian” and “Project Hail Mary,” Weir delivers another science-driven thriller. This time, the story follows a programmer who discovers that an AI system has developed consciousness – and has been secretly influencing global events for years.
Sounds interesting, right? The problem is, Weir never wrote this book. It’s not that someone else wrote it, either.
The book just doesn’t exist.
Ten of the books described on the list were fake. It later came out that the feature was generated by artificial intelligence, more commonly called AI, and major newspapers published it without knowing they were spreading phony information.
Only five of the titles on the list were real – and that made it more believable.
I have read Ray Bradbury’s “Dandelion Wine,” and it was on the list. It turns out AI has a way of generating information with just enough of a kernel of truth to make it difficult to tell what’s false.
The irony of AI generating the description of Weir’s fake novel about AI taking over the world wasn’t lost on me.
AI-generated misinformation is a real problem these days, and it’s getting more confusing. Fabricated videos are more sophisticated and fake voice recordings are more believable than ever. It’s hard to know when you’re being manipulated, and that’s concerning.
The fake summer reading list isn’t the only example of erroneous, AI-generated content getting caught lately.
AI-generated work is popping up in our justice system, too. The most famous recent example happened earlier this month in Denver, during a defamation trial for election conspiracy theorist Mike Lindell, otherwise known as the MyPillow Guy.
A federal judge sanctioned Lindell’s attorneys for submitting error-ridden court documents with “hallucinations,” completely fabricated court cases and citations that didn’t exist. It wasn’t just a few mistakes – it was 30 defective citations. According to reporting from National Public Radio, the attorneys first called it an “inadvertent error,” but eventually admitted to using generative AI to do the work.
These are supposed to be competent professionals in a serious work environment. It makes you wonder what – and who – you can trust.
Speaking of trust: as this AI technology continues to become more sophisticated, it’s important for you to understand our policies on AI here at the Plaindealer.
Newspapers like ours need to be diligent about letting readers know how we work and where we get our information, because AI is creating a world where you must question the validity of information from every source.
While we rely on the Society of Professional Journalists and the National Press Photographers Association for our ethical guidelines, this technology has moved too fast for our professional organizations to have overarching AI policies for us to adopt.
It’s the Wild West, and each newspaper seems to be navigating this on its own, so here goes.
When would we use AI?
Our use of AI will be centered on what helps our newsroom function more efficiently, without sacrificing accuracy or trustworthiness.
We may use AI as a tool to streamline our workflow and help us complete tedious tasks more efficiently. What would this look like? We might use an AI transcription program to produce voice-to-text transcripts of meetings or recorded interviews. These tools are not foolproof, however, and their output needs human review and judgment.
AI will only be used with human oversight. It’s an assistive tool, not a replacement for humans. This means we do not allow AI-generated articles, whether produced by ChatGPT or any other program. That goes for articles reported by our own staff as well as contributors and guest columnists.
We will not use AI to write articles.
Accuracy is paramount to our reporting. AI is not reliably accurate; it makes dumb mistakes, and it cannot make critical judgments. It’s not a substitute for a living, breathing journalist.
We never use AI to manipulate photos, videos or audio recordings. This would be against our commitment to truth and transparency.
How do we edit images?
In the same manner they could have been edited in a darkroom. For those of you who remember darkrooms and film cameras, you know the limitations. These editing guidelines allow us to adjust exposure, contrast and sharpness of a photo, among other minor photo editing techniques. We do not eliminate or edit out objects in a photograph. We do not manipulate photos to distort facts. In other words, we don’t try to change what the photo captured.
We can crop photos, but not in a way that alters the meaning of the image (for example, purposely cropping someone out of a photo to make it look like they weren’t there). We never want to do anything with a photo that would manipulate or mislead viewers.
Nothing can substitute for an editor’s judgment. As the old saying in journalism goes, “If your mother says she loves you, check it out.” In other words, we verify information ourselves, with human brains and human senses.
Are we perfect? No. We all know that saying, “You’re only human.” But at least you know if we make a mistake, we’ll fix it, and we’re not copying and pasting what some computer app churns out.
We’ll keep on being “only human,” doing our best, rather than serving up AI-generated gobbledygook you cannot trust.
Erin McIntyre is the co-publisher of the Plaindealer. Email her at erin@ouraynews.com.