I must have been living under a rock (or been a different kind of terminally online), because I had only ever heard of Honey through Dan Olson's riposte to Doug Walker's The Wall, which describes Doug Walker delivering "an uncomfortably over-acted ad for online data harvesting scam Honey" (35:43).
I saw this floating around fedi (sorry, don't have the link at hand right now) and found it an interesting read, partly because it helped codify why editing Wikipedia is not the hobby for me. Even when I'm covering basic, established material, I'm always tempted to introduce new terminology that I think is an improvement, or to highlight an aspect of the history that I feel is underappreciated, or just to make a joke. My passion project — apart from the increasingly deranged fanfiction, of course — would be something more like filling in the gaps in open-access textbook coverage.
As a person whose job has involved teaching undergrads, I can say that the ones who are honestly puzzled are helpful, but the ones who are confidently wrong are exasperating for the teacher and bad for their classmates.
"If you don't know the subject, you can't tell if the summary is good" is a basic lesson that so many people refuse to learn.
From the replies:
In cGMP and cGLP you have to be able to document EVERYTHING. If someone, somewhere messes up, the company and authorities theoretically should be able to trace it back to that incident. Generative AI is more-or-less a black box by comparison; plus how often it's confidently incorrect is well known and well documented. To use it in the pharmaceutical industry would be teetering on gross negligence and asking for trouble.
Also suppose that you use it in such a way that it helps your company profit immensely and—uh oh! The data it used was the patented IP of a competitor! How would your company legally defend itself? Normally it would use the documentation trail to prove that it was not infringing on the other company's IP, but you don't have that here. What if someone gets hurt? Do you really want to make the case that you just gave ChatGPT a list of results and it gave a recommended dosage for your drug? Probably not. When validating SOPs, are they going to include listening to ChatGPT? If you do, then you need to make sure that OpenAI holds its program to the same documentation standards and certifications that you do, and I don't think they want to tangle with the FDA at the moment.
There are just so, SO many things that can go wrong using AI casually in a GMP environment that end with your company getting sued and humiliated.
And a good sneer:
With a few years and a couple billion dollars of investment, it’ll be unreliable much faster.
Not A Sneer But: "Princ-wiki-a Mathematica: Wikipedia Editing and Mathematics" and a related blog post. Maybe of interest to those amongst us whomst like to complain.
the team have a bit of an elon moment
"Oh shit, which one of them endorsed the German neo-Nazis?"
Aaron likes a porn post
"Whew."
"Drinking alone tonight?" the bartender asks.
I don't see what useful information the "motte and bailey" lingo actually conveys that equivocation, deception, and bait-and-switch didn't already cover. And I distrust any turn of phrase popularized in the LessWrong-o-sphere. If they like it, what bad mental habits does it appeal to?
The original coiner appears to be in with the brain-freezing crowd. He's written about the game theory of "braving the woke mob" for a Tory rag.
In the department of not smelling at all like desperation:
On Wednesday, OpenAI launched a 1-800-CHATGPT (1-800-242-8478) telephone number that anyone in the US can call to talk to ChatGPT via voice chat for up to 15 minutes for free.
It had a very focused area of expertise, but for sincerity, you couldn't beat 1-900-MIX-A-LOT.
I have the feeling that they're not a British trans person talking about the NHS, or an American in a red state panicking about dying of sepsis because the baby they wanted so badly miscarried.