this post was submitted on 11 Jul 2023
118 points (99.2% liked)

[–] [email protected] 29 points 1 year ago* (last edited 1 year ago) (1 children)

I tested it out just now. It seems to be much better than GPT-3.5, but just a little worse than GPT-4.

I tested this: “Explain the plot of Cinderella in a sentence where each word has to begin with the next letter in the alphabet from A to Z, without repeating any letters.”

GPT-4 always gets it right on the first try. It gave me: “A beautiful Cinderella, determined, endures, finds glass heel, invites joy; king's love magically nurtures opulence, prince quests, restores slipper, transformation unveils, validates wistful xerophyte yearning zealously."

Claude 2.0 took four tries to get it right. Its attempts:

“A beautiful young girl called Cinderella did evil for her fair godmother's horrid instructions just kicking laundry maidens' nice outfits purportedly quelling ridiculous step-sisters' terrible underhanded villainy with x-rays yielding zilch.”

“Awaited ball causes Downtrodden Ella's fairy godmother's help. I'll just knock loud mean old prune's quest, righteously sabotaging two ugly vicious women. Xanthippe yells, "Zounds!"”

“Abandoned by cruel destiny, Ella finds godmother's hidden invitation juxtaposing king's lavish ball. Magic occurs, placing radiant slipper to unveil victorious wife. Xerxes yearns, "Zounds, she's found!"”

And finally: “Awaiting ball, Cinderella dreams every Friday godmother helps. Inside jousting knights leer maidens' nice outfits. Prince quietly rescues sweetheart, though ugly vengeful wife x-rays yellow zebra.”
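The constraint is easy to check mechanically, by the way. Here's a quick sketch of a checker (my own throwaway script, nothing model-specific) that verifies the words' initial letters walk A through Z:

```python
import string

def follows_alphabet(sentence: str) -> bool:
    """True if the words' initial letters run exactly a through z."""
    # Keep only tokens that contain at least one letter.
    words = [w for w in sentence.split() if any(c.isalpha() for c in w)]
    # Skip any leading quotes/punctuation, then take each word's first letter.
    initials = [w.lstrip("\"'(").lower()[0] for w in words]
    return initials == list(string.ascii_lowercase)

gpt4 = ("A beautiful Cinderella, determined, endures, finds glass heel, "
        "invites joy; king's love magically nurtures opulence, prince quests, "
        "restores slipper, transformation unveils, validates wistful "
        "xerophyte yearning zealously.")
print(follows_alphabet(gpt4))  # → True
```

Running it on GPT-4's answer confirms it really does hit all 26 letters in order.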

After that, I tried this prompt: “Andrew is free from 11 am to 3 pm, Joanne is free from noon to 2 pm and then 3:30 pm to 5 pm. Hannah is available at noon for half an hour, and then 4 pm to 6 pm. What are some options for start times for a 30 minute meeting for Ca”

GPT-4 responded: “Given the availability of Andrew, Joanne, and Hannah, we need to find overlapping times for a 30-minute meeting.

After cross-referencing their schedules, the potential start times that accommodate all three schedules are:

12:00 pm to 12:30 pm: This slot is within Andrew's free time (11 am - 3 pm), Joanne's free time (12 pm - 2 pm), and Hannah's first free half hour at noon. There is no other slot that fits within all three schedules, due to Joanne's break from 2 pm to 3:30 pm and Hannah's availability starting again only at 4 pm, when Andrew is no longer available. Therefore, there is just one option for a 30-minute meeting that works for all three of them.”

Claude 2.0 responded with: “Okay, based on the availability you provided, here are some options for 30 minute meeting start times that work for Andrew, Hannah and Joanne:

11:30 am - This works because Andrew is free starting at 11am, Hannah is available at noon for half an hour, and Joanne is free from noon to 2pm.

3:30 pm - This works because Andrew is free until 3pm, Hannah is free from 4pm to 6pm, and Joanne is free from 3:30pm to 5pm.

4:30 pm - This also works for the same reasons as 3:30pm. Andrew is free until 3pm, Hannah is free from 4pm to 6pm, and Joanne from 3:30pm to 5pm.

5 pm - This could work as well since Andrew is free until 3pm, Hannah until 6pm, and Joanne until 5pm. It's pushing the end of Joanne's availability though.

So in summary, the best options are 11:30 am, 3:30 pm or 4:30 pm to ensure there is overlap with all three people's availability. Let me know if you need any clarification or have additional constraints!”

I told Claude it was incorrect, and it proceeded to “correct” itself and tell me there is NO available time overlap at all. After several more attempts, it eventually gave up and asked me for the correct answer.
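For what it's worth, the scheduling puzzle is mechanical enough to brute-force. Here's a minimal sketch (my own encoding of the availabilities from the prompt, with times as minutes past midnight; the `free_starts` helper is just something I made up) that tries every half-hour start in the day:

```python
def free_starts(person_windows, duration=30, step=30):
    """Start times (minutes past midnight) where everyone is free for `duration` minutes."""
    def free_at(windows, start):
        # Free if some window fully contains [start, start + duration].
        return any(s <= start and start + duration <= e for s, e in windows)
    return [t for t in range(0, 24 * 60, step)
            if all(free_at(w, t) for w in person_windows)]

andrew = [(11 * 60, 15 * 60)]                          # 11 am - 3 pm
joanne = [(12 * 60, 14 * 60), (15 * 60 + 30, 17 * 60)] # noon - 2 pm, 3:30 - 5 pm
hannah = [(12 * 60, 12 * 60 + 30), (16 * 60, 18 * 60)] # noon - 12:30 pm, 4 - 6 pm

print(free_starts([andrew, joanne, hannah]))  # → [720], i.e. 12:00 pm only
```

Which matches the first answer: noon is the only slot. Claude's suggestions of 11:30 am and anything after 3 pm each miss at least one person's window.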

So although Claude 2.0 is much better than GPT-3.5, it is still worse than GPT-4 at complex reasoning.

EDIT: I just realized that Claude has a 100k token context window. This is SIGNIFICANTLY more than GPT-4’s normal 8k context and 32k API context. That alone makes it much more useful, as it can parse ~75,000 words. I also haven’t tested Claude’s creative, programming, mathematical, etc., skills, but however those compare to GPT’s, the sheer quantity of tokens Claude can parse is valuable on its own.
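The ~75,000-word figure follows from the common rule of thumb that one token is roughly three-quarters of an English word. A back-of-the-envelope comparison (heuristic only; the real ratio depends on the tokenizer and the text):

```python
WORDS_PER_TOKEN = 0.75  # rough heuristic, not an exact tokenizer property

for model, tokens in [("Claude 2", 100_000), ("GPT-4", 8_000), ("GPT-4 32k API", 32_000)]:
    print(f"{model}: {tokens:,} tokens ≈ {int(tokens * WORDS_PER_TOKEN):,} words")
```

So 100k tokens works out to roughly 75,000 words, versus ~6,000 for standard GPT-4 and ~24,000 for the 32k API tier.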

[–] [email protected] 1 points 1 year ago

That's interesting. I haven't tried its reasoning skills. I did try playing Jeopardy! with it though, and it showed a lot of improvement from previous attempts. Usually chatbots are very bad at Jeopardy, telling you just about any answer is correct, but Claude 2 did really well, explaining why I was wrong several times. I did ask it to provide an explanation about whether my answers were right or wrong in the initial prompt, so that might've made a difference though.

[–] [email protected] 15 points 1 year ago (1 children)

Woah, this is huge. Claude 1 was already more useful and coherent than ChatGPT (3.5, not 4); the big caveat was that it wasn't available to everyone. This could really steal some market share from OpenAI if things go well.

[–] [email protected] -2 points 1 year ago (1 children)

What market though? These AI chatbots seem like money sinks for a potential development into something useful in the distant future.

[–] [email protected] 11 points 1 year ago (1 children)

The market of people buying APIs for popular chatbots. Right now OpenAI's GPT is overwhelmingly the most popular option and pretty expensive. You constantly see a lot of "powered by GPT" features on products now, but hopefully Claude can provide some better competition.

[–] [email protected] -1 points 1 year ago (1 children)

Fair, but I don't see any real use for these right now. Chatbots just seem like a gimmick that can help people cheat in school (not that I give a fuck about that). It's probably just the online circles we run in, though. What sorts of things are powered by GPT? Customer support and stuff?

[–] [email protected] 8 points 1 year ago (1 children)

You've got stuff like the learning assistants on Duolingo and Khan Academy powered by GPT-4, plus tools for automatic search engine optimization, automatic code generation, grammar and spell checking, translation, and probably a lot more I'm unaware of.

There's quite a lot of people depending on GPT right now.

[–] [email protected] 4 points 1 year ago (1 children)

The Khan Academy approach to AI-assisted learning looks amazing, and it's just a first attempt. I think having individual, endlessly patient AI tutors leading each student via the Socratic method will revolutionise teaching. Teachers would actually have more time to socialise with the students, so fears that AI learning would deprive children of social interaction may be put to rest. It looks really promising.

https://m.youtube.com/watch?v=3j0jSKcvbLY

[–] [email protected] 1 points 1 year ago

Here is an alternative Piped link: https://piped.video/watch?v=3j0jSKcvbLY

Piped is a privacy-respecting open-source alternative frontend to YouTube.

I'm open-source, check me out at GitHub.

[–] [email protected] 9 points 1 year ago (1 children)

Seems to only be available in the US and UK for now tho.

[–] [email protected] 2 points 1 year ago (2 children)

Luckily it doesn't need a phone number like OpenAI so you can just VPN it.

[–] [email protected] 8 points 1 year ago (1 children)

You shouldn't have to though. The whole "only available if you happen to live in X" thing is so much bs when it comes to things like this. Sure, if it was a giveaway and needed to be shipped, I could understand. But a website being locked to only certain regions is ridiculous.

[–] [email protected] 6 points 1 year ago

I suspect it has to do with legal compliance. Only available in US = only needing to comply with US law.

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago) (1 children)

I tried using a VPN and it still didn't allow me to sign up.

[–] [email protected] 1 points 1 year ago

I used ProtonVPN and managed to sign up easily.

[–] [email protected] 4 points 1 year ago

Says it's only available in the US; I used a VPN to sign up. Good to have alternatives!

[–] [email protected] 4 points 1 year ago* (last edited 1 year ago) (1 children)

Just tried it out with some questions about ceramic firing in an electric kiln. It seems to have similar accuracy to ChatGPT, maybe closer to GPT-4.

It's not clear in the interface which version you're using, so this may have been Claude 1; I'm unsure where to check.

[–] [email protected] 3 points 1 year ago (2 children)

I asked it directly. It didn't know, and stated it has never had version numbers. I pointed out that news articles differentiate 1.0 and 2.0. It agreed, but still didn't say which it was. When I asked again directly, it said it was 2.0.

Hard to believe something that feels like it's lying to you all the time. I asked it about a topic that I'm in and have a website about, it told me the website was hypothetical. It got it wrong twice, even after it agreed it was wrong, and then told me the wrong thing again.

Can you ask perplexity.ai your question about ceramic firing and see what you get? Perplexity offers follow-up prompts to move you along towards your answer.

[–] [email protected] 2 points 1 year ago (1 children)

I asked Perplexity that same question. It did somewhat better: it made no errors in temperatures like the others do, it just left those details out initially. After follow-up questions it answered correctly, but also gave some unnecessary and unrelated information.

I didn't use any of the prompts, I was asking about saggar firing processes and temps, the prompts were just ceramics related.

[–] [email protected] 1 points 1 year ago

My area has 40 years of studies behind it and a heap of science online. I'm always surprised that AIs do so badly with it. If I can work it out by reading through study after study, an AI should piss it in.

The good thing about Perplexity is that it cites its sources, so you can check it. Others just give you the answer, and if you don't know much, you don't know whether it's wrong or not (better than no sources, I feel). I've also asked it about someone who is the world leader in my field, to figure out when it starts giving completely wrong answers and in what areas.

Is what you're searching more of an in-field technique or would there be webpages or studies devoted to it?

[–] [email protected] 2 points 1 year ago

> Hard to believe something that feels like it's lying to you all the time. I asked it about a topic that I'm in and have a website about, it told me the website was hypothetical. It got it wrong twice, even after it agreed it was wrong, and then told me the wrong thing again.

Is this what they consider hallucinations?
