this post was submitted on 09 Oct 2024
136 points (100.0% liked)

Technology

Microsoft's LinkedIn will update its User Agreement next month with a warning that it may show users generative AI content that's inaccurate or misleading.

LinkedIn thus takes after its parent, which recently revised its Service Agreement to make clear that its Assistive AI should not be relied upon.

LinkedIn, however, has taken its denial of responsibility a step further: it will hold users responsible for sharing any policy-violating misinformation created by its own AI tools.

The relevant passage takes effect on November 20, 2024.

In short, LinkedIn will provide features that can produce automated content, but that content may be inaccurate. Users are expected to review and correct false information before sharing said content, because LinkedIn won't be held responsible for any consequences.

top 9 comments
[–] [email protected] 34 points 2 months ago (2 children)

I would rather clean up dog vomit than use linkedin.

[–] [email protected] 6 points 2 months ago (1 children)

My hope is to get a government job so I can delete that shit from my life.

[–] [email protected] 2 points 2 months ago (1 children)

My dream is to build a secret laser big enough that I can deathstar LinkedIn out of existence in one zap; my hope, however, is much the same as yours.

[–] [email protected] 2 points 2 months ago

You don't need a laser. You need a computer virus. Leave advanced physics to KTU interns.

[–] possiblylinux127 5 points 2 months ago

Well that's not saying much as most dog owners have cleaned up vomit at least once.

[–] [email protected] 19 points 2 months ago

The real question is whether this will hold up in court. Judges are likely to frown on this sort of thing. Sure, the EULA that they know nobody reads says that, but their tools give advice in an authoritative tone. My company got in trouble in court because an advertisement appeared to show our tools being used in ways the warning label says not to.

[–] [email protected] 8 points 2 months ago
[–] [email protected] 2 points 2 months ago* (last edited 2 months ago)

Sooo shitty.

We need an alternative.

[–] [email protected] 1 points 2 months ago

LinkedIn thus takes after its parent, which recently revised its Service Agreement to make clear that its Assistive AI should not be relied upon.

is this the same parent that’s talking about adding an AI button to people’s keyboards?