Every day, the Macalope is more and more convinced that we need less artificial intelligence and more human intelligence. (Zing! Walk that one off, human race.)
Fresh off credulously telling us how close we are to artificial general intelligence, The New York Times’s Kevin Roose is back to ask, if you prick AI, does it not bleed?
“If A.I. Systems Become Conscious, Should They Have Rights?”
You will be surprised to read that, once again, after talking solely to a group of people who work in AI, Roose believes that super-intelligent, sentient AI is right around the corner.
But can ChatGPT experience joy or suffering? Does Gemini deserve human rights? Many A.I. experts I know would say no, not yet, not even close.
But I was intrigued.
Experts say, “Dude, no.” But what if I really want it to be true?
After all, more people are beginning to treat AI systems as if they are conscious — falling in love with them, using them as therapists and soliciting their advice.
“Could the experts be wrong and the nincompoops right?” asks writer for The New York Fricking Times.
Roose is more and more like a guy who accidentally got subscribed to a podcast about sourdough starters months ago and somehow now thinks the right sourdough starter might be able to cure cancer because he just knows so much about it now.
Roose’s perspective seems backward to the Macalope, coming at it from whether or not AI is approaching sentience. The easier way to think about it is simply to ask what it says about us if we abuse something that merely presents itself as human.
Maybe it comes from spending too much time with action figures as a child and then seeing Toy Story and thinking it was a documentary, but the Macalope believes you shouldn’t abuse anything, whether you think it’s sentient or not, because… you may not know.
Just ask the whales. Or watch that famous documentary, Star Trek IV: The Voyage Home.
Also, if you build up the habit of, for example, swearing at Siri, you might find it hard to break that habit when you interact with actual humans who also don’t know how to turn the lights on or play “Expert In A Dying Field” by The Beths or add “butter” to the grocery list instead of “butler.”
Like, what aisle are those even on? And if you’re at Costco you have to get the three-pack, so the Macalope is frantically texting friends asking if they want to go in on a pack of butlers when his wife just wanted more butter. Which comes in a 164-pack at Costco, so the Macalope still has to text some friends, but if he does it with Siri the situation has a 50/50 chance of having the butler problem again.
What were we talking about again?
So, yeah, it’s nice to be concerned about how we treat AI if and when it becomes conscious. But the current collection of Rube Goldberg machines that obsequiously apologize to you for incorrectly suggesting you ingest rocks are probably further away from needing your concern than Roose seems to think.
One question Anthropic is exploring, he said, is whether future A.I. models should be given the ability to stop chatting with an annoying or abusive user…
Funny you should mention that, actually. Because, despite not believing there’s the slightest reason to be worried about the inalienable rights of AI at this juncture, the Macalope would still support a restraining order that requires Henry Blodget to stay 50 yards away from AI at all times. If only because then he’d have to stop writing about it.
Once a vocal espouser of Apple doom, Blodget is almost certainly doing a bit here to gin up some attention since it’s harder to get traction these days claiming Apple’s about to go out of business. His latest piece of performance art is to use AI to create a “newsroom” staffed by ChatGPT personalities who are only slightly more phony baloney than Blodget himself. When he then had ChatGPT create headshots for these bogus employees, he found one so attractive he felt the need to comment on it, even though he knew it was wrong.
Honestly, this is one of those few cases where the Macalope would have preferred interpretive dance. Or even mime.
Okay, maybe not mime.
With all of this goofball reaction to AI, you will forgive the horny one if he’s not really worked up over the fact that Apple’s AI efforts right now seem to largely consist of rearranging the deck chairs.
The relocation of [Brian] Lynch’s unit is also notable because it gives [John] Ternus control over key AI operating system and algorithms teams, groups not typically managed by the hardware engineering department.
Again, AI is a useful tool in the right circumstances. But it’s a little early to be worried about the morality of its indentured servitude, the performative lasciviousness of some people notwithstanding.