OK, let’s delve into this.
It’s like hearing your grandpa cuss. If you’re 4 years old and hear him often enough, you’ll start cussing.
All your dad and mom can do is hope that your grandpa is a classy cusser.
Notice, I didn’t say curser.
That’s because Grandpa doesn’t say “curse.” He says, “Joey, don’t cuss!”
Chatbots are like that. They say delve. They say it a lot. They picked it up from humans — at least humans on the Internet. And I have picked it up from them. I don’t say examine, or explore, or investigate, or probe, or research. Or any other word that I could pick up instead from Thesaurus.com.
No. I say, “OK, let’s delve into this.”
I have unwittingly fallen under the influence of chatbots. It turns out that chatbots overuse certain words.
Linguist Adam Aleksic, author of the book “Algospeak: How Social Media Is Transforming the Future of Language,” writes in The Washington Post: “ChatGPT uses the word ‘delve’ at higher rates than people generally use (it).”
His analysis is headlined:
“It’s happening: People are starting to talk like ChatGPT. Unnervingly, words overrepresented in chatbot responses are turning up more in human conversation.”
I am a victim.
In addition to delve, Aleksic offers other examples.
“Intricate.”
“Commendable.”
“Meticulous.”
“Are we also supposed to stop using the chatbot-overused ‘inquiry’?” he writes. “Or ‘surpass’? There’s too much to keep track of.”
There’s no question that overuse of specific words can ruin writing.
“In the two years since ChatGPT launched in late 2022,” Aleksic says, “the appearance of ‘delve’ in academic publishing saw a tenfold increase as researchers began turning to AI for help with their papers. …
“Most people likely don’t know about the chatbot biases toward certain words. Users assume that ChatGPT is talking in ‘normal’ English. … They also assume that regular, everyday texts they encounter are normal English, even when they might have been AI-generated. …
“Indeed, a study reported in Scientific American last month found that people have started saying ‘delve’ more in spontaneous, spoken conversations. The result: AI learns from us, and then we reflect AI back to itself. It’s dizzying.
“And now it isn’t AI’s doing anymore; we’ve started internalizing its biases and repeating them by ourselves.
“I say ‘we’ because even those in the anti-‘delve’ faction aren’t exempt. We might avoid using the most well-known ChatGPT giveaways, but so many words are appearing with unnatural frequency that we can’t possibly avoid them all.”
And there is a hidden danger.
“Racial biases, gender biases and political biases: all of these are likely trained into the (AI) models, much like linguistic biases, but these are harder to definitively measure. We need to remember that these aren’t neutral tools: They hold the power to subtly reshape our thinking.”
We are all victims.