Over on ex-Twitter, I ran across a tweet about a student encounter that, like the poster says in the story, just floored me:
My reaction isn’t fundamentally either “I weep for the Youth” or “I for one welcome our new artificial overlords,” though. It’s just… bafflement at this as a use case. How, exactly, would you use ChatGPT in a way that would make you bad at… talking?
I mean, I can sort of see how this would be used for offering more polished phrasing of your ideas than you might otherwise be capable of. But at the end of the day, it’s still just a next-word prediction machine, limited to giving you text that you ask it for. Which sort of requires you to know what you want it to produce at least well enough to give it a prompt. Which, in turn, would seem to require thinking at least some thoughts to completion, if not eloquence.
Is the issue just that this kid is usually limited to the sort of asynchronous interactions where you could just cut-and-paste the other person’s conversational gambits into the prompt window, and the LLM output into the outgoing message field? A kind of neural-network Cyrano scenario? That doesn’t seem like an AI thing; it’s just a toxic stew of poor ethics and crippling social anxiety. But then I’m puzzled by the puzzlement of the original tweeter: You work in tech, but you’re shocked to meet someone who looks great on paper but is awkward during in-person interactions?
Admittedly, I have only played around with these tools a tiny bit, because what they offer isn’t a service I particularly need— coming up with big piles of words isn’t a Thing that I struggle with. (Picture me making an expansive gesture at, you know, All This…) So maybe there’s some clever mode to these things that I’ve never encountered. But I literally can’t imagine using LLMs in a way that would make me feel like I wasn’t able to complete a thought on my own.
So, I’ll throw this out to the reading public: Am I missing some deep and subtle use case here, or is this just a nerd getting busted using linear algebra to seem glib? Seasoned with equal parts hype and credulity?
I thought about breaking this up into a series of posts on one or both of the big micro-blogging sites, but I’m better at long form, and it’s been a bad week. If you want to see if I start using chat bots to replace my brain, here’s a button:
And if you can offer a more sensible alternative interpretation of this scenario, the comments will be open:
Kid likely had the chess device implanted in him and was waiting for the various vibration signals to tell him which word to use. He's become a meat puppet for the AI. Soon they will come for us all.
I'm pretty doubtful about this story. For a start, how did the writer conclude that the kid was "smart" when he couldn't complete a simple sentence without pausing?
Given his confessed reliance on AI, it would be silly to place much weight on a "perfect resume." And could the resume really have been that perfect? AI is great at producing the kind of work that might get you a B- at a lot of universities, but it's not going to put you on the Dean's List at Stanford.