this post was submitted on 04 Aug 2024
333 points (98.8% liked)

Researchers conducted experimental surveys with more than 1,000 adults in the U.S. to evaluate the relationship between AI disclosure and consumer behavior.

The findings consistently showed that products described as using artificial intelligence were less popular.

“When AI is mentioned, it tends to lower emotional trust, which in turn decreases purchase intentions.”

[–] [email protected] 4 points 3 months ago (1 children)

At work, we recently talked about AI. One use case mentioned (by an AI consulting firm, not by us, and not actually suggested for us) was meeting summaries and extracting TODOs from them.

My stance is that AI could be useful for topic summaries, so you can see at a glance what was talked about. But I would never trust it to extract all the significant points, TODOs, or agreements. You still need humans for that, with explicit agreement on and confirmation of the list during or after the meeting.
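To make that concrete, here's a minimal sketch of the kind of pipeline such a firm probably means, assuming the OpenAI Python client (the model name, prompt, and function name are placeholders, and any other provider would look much the same). The point is the last comment: the output is only a draft for humans to confirm.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def draft_todos(transcript: str) -> str:
    """Ask a model for a *draft* action-item list from a meeting transcript."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": (
                    "List the action items from this meeting transcript as "
                    "bullet points. Mark anything ambiguous as UNCERTAIN."
                ),
            },
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content

# Whatever comes back is only a draft: it still has to be reviewed and
# explicitly confirmed by the participants before it becomes the TODO list.
```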

It can also help to transcribe meetings, and could even translate them. Those things can be useful. But summarization should never be treated as factual extraction of the significant points, especially in a business context or anywhere else you actually need to be able to trust the information.
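For transcription itself, a minimal sketch with the open-source openai-whisper package (the file name is a placeholder):

```python
import whisper  # pip install openai-whisper

model = whisper.load_model("base")        # small local model; larger ones are more accurate
result = model.transcribe("meeting.mp3")  # placeholder file name
print(result["text"])                     # raw transcript, still worth skimming for errors

# Translation into English is a one-flag change:
# result = model.transcribe("meeting.mp3", task="translate")
```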

I wouldn't [fully] trust it with transforming facts either. It can work where you can spot inaccuracies yourself (long text, lots of context), or where you don't care about them.

Natural language instructions to machine instructions? I'd certainly be careful with that, and would want to both scope the context and test-confirm that it works well enough for the use case.

[–] [email protected] 2 points 3 months ago

Natural language instructions to machine instructions? I'd certainly be careful with that, and would want to both scope the context and test-confirm that it works well enough for the use case.

I’m imagining it being quite limited, mostly talking to appliances in a way that’s more advanced than today. Instructions like “gradually dim down the lights in the living room until bedtime”, or “dim the lights in the living room when we watch a movie on TV”.
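A minimal sketch of what that limited setup could look like, with all names made up for illustration: the model’s only job is to fill in a tiny command schema, and anything that doesn’t validate gets rejected instead of being executed.

```python
from dataclasses import dataclass

# Deliberately tiny command schema: the language model (or even simple keyword
# matching) only fills in these fields, it never emits arbitrary instructions.

@dataclass
class LightCommand:
    room: str               # e.g. "living room"
    action: str             # "dim", "on", "off"
    target_brightness: int  # 0-100 percent
    ramp_minutes: int       # 0 = change immediately

ALLOWED_ROOMS = {"living room", "bedroom", "kitchen"}
ALLOWED_ACTIONS = {"dim", "on", "off"}

def validate(cmd: LightCommand) -> bool:
    """Reject anything outside the narrow, testable command space."""
    return (
        cmd.room in ALLOWED_ROOMS
        and cmd.action in ALLOWED_ACTIONS
        and 0 <= cmd.target_brightness <= 100
        and 0 <= cmd.ramp_minutes <= 8 * 60
    )

# "gradually dim down the lights in the living room until bedtime" might map to:
cmd = LightCommand(room="living room", action="dim", target_brightness=10, ramp_minutes=120)
assert validate(cmd)
```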