Technology

1567 readers
193 users here now

Which posts fit here?

Anything that is at least tangentially connected to technology, social media platforms, information technology, or tech policy.


Rules

1. English only: Titles and associated content must be in English.
2. Use the original link: The post URL should be the original link to the article (even if paywalled), with archived copies left in the body. This helps avoid duplicate posts when cross-posting.
3. Respectful communication: All communication must be respectful of differing opinions, viewpoints, and experiences.
4. Inclusivity: Everyone is welcome here regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, education, socio-economic status, nationality, personal appearance, race, caste, color, religion, or sexual identity and orientation.
5. Ad hominem attacks: Any kind of personal attack is expressly forbidden. If you can't argue your position without attacking a person's character, you have already lost the argument.
6. Off-topic tangents: Stay on topic. Keep it relevant.
7. Instance rules may apply: If something is not covered by the community rules but violates lemmy.zip instance rules, those rules will be enforced.


Companion communities

[email protected]
[email protected]


Icon attribution | Banner attribution

founded 1 year ago
AI goes nuclear (thebulletin.org)
submitted 4 hours ago by [email protected] to c/technology
 
 

Big tech is turning to old reactors (and planning new ones) to power the energy-hungry data centers that artificial intelligence systems need. The downsides of nuclear power—including the potential for nuclear weapons proliferation—have been minimized or simply ignored.

submitted 14 hours ago* (last edited 14 hours ago) by [email protected] to c/technology
 
 

All Governments Should Protect Children’s Privacy by Regulating Artificial Intelligence

 
 

This is very entertaining.

In order to give myself and the many tired volunteers around WordPress.org a break for the holidays, we’re going to be pausing a few of the free services currently offered:

  • New account registrations on WordPress.org (clarifying so press doesn’t confuse this: people can still make their own WordPress installs and accounts)
  • New plugin directory submissions
  • New plugin reviews
  • New theme directory submissions
  • New photo directory submissions

We’re going to leave things like localization and the forums open because these don’t require much moderation.

As you may have heard, I’m legally compelled to provide free labor and services to WP Engine thanks to the success of their expensive lawyers, so in order to avoid bothering the court I will say that none of the above applies to WP Engine, so if they need to bypass any of the above please just have your high-priced attorneys talk to my high-priced attorneys and we’ll arrange access, or just reach out directly to me on Slack and I’ll fix things for you.

I hope to find the time, energy, and money to reopen all of this sometime in the new year. Right now much of the time I would spend making WordPress better is being taken up defending against WP Engine’s legal attacks. Their attacks are against Automattic, but also me individually as the owner of WordPress.org, which means if they win I can be personally liable for millions of dollars of damages.

If you would like to fund legal attacks against me, I would encourage you to sign up for WP Engine services, they have great plans and pricing starting at $50/mo and scaling all the way up to $2,000/mo. If not, you can use literally any other web host in the world that isn’t suing me and is offering promotions and discounts for switching away from WP Engine.

 
 

Large language models continue to be unreliable for election information. Our research was able to substantially improve the reliability of safeguards in the Microsoft Copilot chatbot against election misinformation in German. However, barriers to data access greatly restricted our investigations into other chatbots.

 
 

Netflix did not give customers sufficient information about what the company does with their personal data between 2018 and 2020. And the information that Netflix did give was unclear on some points. For this reason, the Dutch Data Protection Authority (Dutch DPA) is imposing a fine of 4.75 million euro on the streaming service. Netflix has since updated its privacy statement and improved its information provision.

submitted 1 day ago* (last edited 1 day ago) by [email protected] to c/technology
 
 

In conclusion, the rise of AI washing represents a significant challenge that demands immediate attention from state attorneys general and other regulatory bodies. AGs can play a pivotal role in mitigating the risks of misleading AI claims by providing clear definitions, fostering consumer awareness, and strategically leveraging AI tools. These efforts are not merely about protecting consumers' wallets; they are about safeguarding trust in emerging technologies and ensuring that innovation continues to serve, rather than deceive, the public. As AI technologies become increasingly integrated into daily life, regulatory frameworks must keep pace, striking a balance between fostering innovation and holding companies accountable. The stakes are high, but with proactive measures, regulators can ensure that AI enhances rather than erodes consumer confidence.

 
 

Starting January 1, Floridians can no longer legally come to Pornhub.

 
 
  • Much-delayed program had been in development since around 2022
  • Company also shut down Pay Later offering earlier this year
 
 

On its face, the EU DMA is meant to stop monopolies from abusing their market position, but Meta appears to be abusing this legislation in an attempt to gain unprecedented access to iPhone user data.

 
 

Bans and blanket restrictions on social media, like the impending US TikTok ban or Australia’s recent age restrictions, are often presented as decisive solutions to complex problems. These measures promise to safeguard national security, protect user data, or shield vulnerable users from harm. Yet, they rarely achieve their intended goals. Instead, they create a paradox: rather than mitigating risks, such restrictions make platforms and user practices less governable. Users circumvent controls, oversight is fragmented, and transparency gives way to opacity—all while opportunities for meaningful governance are lost.

 
 

The Democratic Republic of Congo has filed a criminal case against European subsidiaries of tech giant Apple, accusing the company of illicitly using "blood minerals" in its supply chain.
