I see what you mean, but I think facility has also increased quantity. Half of the people writing on Substack would have been writing much less, if at all, if AI didn't make their process faster and easier.
Bad writing is bad writing, and I'm not sure it matters whether this sort of thing is produced by people or by AI. Obviously it's a different issue when trying to evaluate students, or when building human knowledge with some assurance that it's not hallucinated, but for this type of anecdotal self-help BS, it doesn't make it any more or less meaningful, so I'm not sure why people react angrily to it.
I don't think there are any single features that infallibly indicate AI writing. Rather, there are many ingredients, and it's the overall flavor of the stew that does it. "Not this but that," of course; short sentences and paragraphs that try to be snappy, along with a particular tone of didactic confidence; the spewing out of mixed metaphors; etc.
I was fairly confident of my AI-slop radar until I began reading a best-selling novel that I would have *sworn* had significant AI contribution. But it was published 20 years ago. So, it was no doubt among the training input, or was at least *like* the training input. AI didn't invent glib writing but it has made it ubiquitous.
My one subtle disagreement with this would be that I think this kind of writing has always been ubiquitous among "writing" as a whole-- that's why it's all over the training set used to make these LLMs. What LLM availability has done is to make it ubiquitous in spaces where people didn't previously go to enough effort to produce really slick but empty prose.
> LLMspeak from a linguistics standpoint
Colin Gorrie (a linguist) did one a while back: https://www.deadlanguagesociety.com/p/rhetorical-analysis-ai
I guess I would characterize it as the trope density of the text.