[-] [email protected] 7 points 5 hours ago

Summary:

  • Colorado has passed a first-in-the-nation law protecting the privacy of neural and other biological data, which, like fingerprints, can be used to identify individuals.
  • Advances in artificial intelligence have driven medical breakthroughs, including devices that can decode brain activity and alter brain function.
  • Neurotechnology devices such as those from Emotiv and Somnee are used in health care; they can let users control computers with their thoughts, improve brain function, and identify impairments.
  • Most of these devices are not regulated by the FDA and are marketed as wellness products.
  • The benefits come with risks: insurers could discriminate, law enforcement could interrogate, and advertisers could manipulate based on brain data.
  • Medical research facilities are subject to privacy laws, but private companies amassing large caches of brain data are not.
  • The Neurorights Foundation found that two-thirds of these companies already share or sell data with third parties.
  • The new law takes effect on Aug. 8, but it is unclear which companies are subject to it and how it will be enforced.
  • Pauzauskie and the Neurorights Foundation are pushing for a federal law, and even a global accord, to prevent brain data from being used without consent.
69
submitted 5 hours ago by [email protected] to c/[email protected]

After all, the privacy of our mind may be the only privacy we have left.

[-] [email protected] 6 points 5 hours ago

Summary:

  • Colorado passes first-in-nation law to protect privacy of biological or brain data, which is similar to fingerprints if used to identify people.
  • Advances in artificial intelligence have led to medical breakthroughs, including devices that can read minds and alter brains.
  • Neurotechnology devices, such as Emotiv and Somnee, are used for health care and can move computers with thoughts or improve brain function and identify impairments.
  • Most of these devices are not regulated by the FDA and are marketed for wellness.
  • With benefits come risks, such as insurance companies discriminating, law enforcement interrogating, and advertisers manipulating brain data.
  • Medical research facilities are subject to privacy laws, but private companies amassing large caches of brain data are not.
  • The Neurorights Foundation found that two-thirds of these companies are already sharing or selling data with third parties.
  • The new law takes effect on Aug. 8, but it is unclear which companies are subject to it and how it will be enforced.
  • Pauzauskie and the Neurorights Foundation are pushing for a federal law and even a global accord to prevent brain data from being used without consent.
119
submitted 5 hours ago by [email protected] to c/[email protected]

After all, the privacy of our mind may be the only privacy we have left.

[-] [email protected] 54 points 8 hours ago

Summary:

  • The author expresses dissatisfaction with the commercial, impersonal feel of modern Windows.
  • Past versions of Windows were offline and resilient, providing a more personal user experience.
  • Advertising integrated into Windows makes it feel cheaper and less user-friendly; many users find it intrusive and unwanted.
  • Updates frequently cause problems, from broken computers to silently changed settings that override users' preferences and privacy choices, leaving users with little control.
  • The author contrasts the current Windows experience with the offline glory days of Windows, highlighting the shift in user experience.
  • The author switched to macOS after technical issues with Windows updates and appreciates the user experience there.
  • Linux is praised for respecting its users: the operating system is free and carries no intrusive ads.
  • The author hopes a future version of Windows will offer more user control and less interference from Microsoft's software-as-a-service products.
254
submitted 8 hours ago by [email protected] to c/[email protected]
[-] [email protected] 105 points 1 day ago

Summary:

  • The FTC is investigating PC manufacturers for using "warranty void if removed" labels to discourage consumers from exercising their right to repair.
  • ASRock, Gigabyte, and Zotac received letters from the FTC regarding these practices.
  • The FTC is concerned about manufacturers denying warranty coverage based on these provisions.
  • The federal Magnuson-Moss Warranty Act is being invoked to prevent companies from making misleading warranties.
  • The Act prohibits conditioning warranties on the use of specific repair services unless provided for free or with a waiver from the FTC.
  • The FTC plans to review the written warranties and promotional materials of the companies after 30 days.
  • In the past, Nintendo, Sony, Microsoft, Asus, HTC, and Hyundai were also warned by the FTC for similar practices.
544
submitted 1 day ago by [email protected] to c/[email protected]
[-] [email protected] 14 points 1 day ago

Summary:

  • Telegram founder Pavel Durov claimed in an interview that the company only employs "about 30 engineers."
  • Security experts say this is a major red flag for Telegram's cybersecurity, as it suggests the company lacks the resources to effectively secure its platform and fight off hackers.
  • Telegram's chats are not end-to-end encrypted by default, unlike more secure messaging apps like Signal or WhatsApp. Users have to manually enable the "Secret Chat" feature to get end-to-end encryption.
  • Telegram also uses its own proprietary encryption algorithm, which has raised concerns about its security.
  • As a social media platform with nearly 1 billion users, Telegram is an attractive target for both criminal and government hackers, but it seems to have very limited staff dedicated to cybersecurity.
  • Security experts have long warned that Telegram should not be considered a truly secure messaging app, and Durov's recent statement may indicate that the situation is worse than previously thought.
192
submitted 1 day ago by [email protected] to c/[email protected]
169
submitted 3 days ago by [email protected] to c/[email protected]

The cable industry has been in a nose-dive for years. Comcast's Q1 2024 earnings report showed its cable business losing 487,000 subscribers. The cable giant ended 2022 with 16,142,000 subscribers; in January, it had 13,600,000.

Charter, the only US cable company bigger than Comcast, is rapidly losing pay-TV subscribers, too. In its Q1 2024 earnings report, Charter reported losing 405,000 subscribers, including business accounts. It ended 2022 with 15,147,000 subscribers; at the end of March, it had 13,717,000.

And, like Comcast, Charter is looking to streaming bundles to keep its pay-TV business alive and to compete with the likes of YouTube TV and Hulu With Live TV.

It’s a curious time as cable TV providers scramble to be part of an industry created in reaction to business practices that many customers viewed as anti-consumer. Meanwhile, the streaming industry is adopting some of these same practices, like commercials and incessant price hikes, to establish profitability. And some smaller streaming players say it's nearly impossible to compete as the streaming industry's top players are taking form and, in some cases, collaborating.

But after decades of frustrating subscribers who had few alternatives, it will be hard for former or current cable customers to view firms like Comcast and Charter as trustworthy streaming competitors.

454
submitted 3 days ago by [email protected] to c/[email protected]

The EU Council has now completed a fourth presidency term without adopting its controversial message-scanning proposal. The just-concluded Belgian Presidency failed to broker a deal that would push the regulation forward; it has now been debated in the EU for more than two years.

For all those who have reached out to sign the “Don’t Scan Me” petition, thank you—your voice is being heard. News reports indicate the sponsors of this flawed proposal withdrew it because they couldn’t get a majority of member states to support it.

Now, it’s time to stop attempting to compromise encryption in the name of public safety. EFF has opposed this legislation from the start. Today, we’ve published a statement, along with EU civil society groups, explaining why this flawed proposal should be withdrawn.

The scanning proposal would create “detection orders” that allow for messages, files, and photos from hundreds of millions of users around the world to be compared to government databases of child abuse images. At some points during the debate, EU officials even suggested using AI to scan text conversations and predict who would engage in child abuse. That’s one of the reasons why some opponents have labeled the proposal “chat control.”

There’s scant public support for government file-scanning systems that break encryption. Nor is there support in EU law. People who need secure communications the most—lawyers, journalists, human rights workers, political dissidents, and oppressed minorities—will be the most affected by such invasive systems. Another group harmed would be those whom the EU’s proposal claims to be helping—abused and at-risk children, who need to securely communicate with trusted adults in order to seek help.

The right to have a private conversation, online or offline, is a bedrock human rights principle. When surveillance is used as an investigation technique, it must be targeted and coupled with strong judicial oversight. In the coming EU council presidency, which will be led by Hungary, leaders should drop this flawed message-scanning proposal and focus on law enforcement strategies that respect peoples’ privacy and security.

205
submitted 5 days ago by [email protected] to c/[email protected]

We’ve said it before: online age verification is incompatible with privacy. Companies responsible for storing or processing sensitive documents like drivers’ licenses are likely to encounter data breaches, potentially exposing not only personal data like users’ government-issued ID, but also information about the sites that they visit.

This threat is not hypothetical. This morning, 404 Media reported that a major identity verification company, AU10TIX, left login credentials exposed online for more than a year, allowing access to this very sensitive user data.

A researcher gained access to the company’s logging platform, “which in turn contained links to data related to specific people who had uploaded their identity documents,” including “the person’s name, date of birth, nationality, identification number, and the type of document uploaded such as a drivers’ license,” as well as images of those identity documents. Platforms reportedly using AU10TIX for identity verification include TikTok and X, formerly Twitter.

Lawmakers pushing forward with dangerous age verification laws should stop and consider this report. Proposals like the federal Kids Online Safety Act and California’s Assembly Bill 3080 are moving further toward passage, with lawmakers in the House scheduled to vote in a key committee on KOSA this week, and California's Senate Judiciary committee set to discuss AB 3080 next week. Several other laws requiring age verification for accessing “adult” content and social media content have already passed in states across the country. EFF and others are challenging some of these laws in court.

In the final analysis, age verification systems are surveillance systems. Mandating them forces websites to require visitors to submit information such as government-issued identification to companies like AU10TIX. Hacks and data breaches of this sensitive information are not a hypothetical concern; it is simply a matter of when the data will be exposed, as this breach shows.

Data breaches can lead to any number of dangers for users: phishing, blackmail, or identity theft, in addition to the loss of anonymity and privacy. Requiring users to upload government documents—some of the most sensitive user data—will hurt all users.

According to the news report, the exposed AU10TIX data does not appear to have been accessed beyond what the researcher demonstrated was possible. But if age verification requirements are passed into law, users will likely be forced to share their private information across networks of third-party companies to keep accessing and sharing online content. Within a year, it wouldn’t be strange to have uploaded your ID to a half-dozen different platforms.

No matter how vigilant you are, you cannot control what other companies do with your data. If age verification requirements become law, you’ll have to be lucky every time you are forced to share your private information. Hackers will just have to be lucky once.

[-] [email protected] 47 points 1 week ago

According to a report from Arizona’s Family:

The 12-volt battery that powers the car’s electronics died without warning.

Tesla drivers are supposed to receive three warnings before that happens, but the Tesla service department confirmed that Sanchez didn’t receive any warnings.

[-] [email protected] 113 points 1 week ago

Tesla didn’t respond to a request for comment; it has dissolved its press office.

[-] [email protected] 132 points 2 weeks ago

Summary:

  • The US government is suing Adobe for allegedly deceiving customers with hidden fees and making it difficult to cancel subscriptions.
  • The Department of Justice claims Adobe enrolls customers in its most lucrative subscription plan without clearly disclosing important plan terms.
  • Adobe allegedly hides the terms of its annual, paid monthly plan in fine print and behind optional textboxes and hyperlinks.
  • The company fails to properly disclose the early termination fee, which can amount to hundreds of dollars, upon cancellation.
  • The cancellation process is described as "onerous and complicated", involving multiple webpages and pop-ups.
  • Customers who try to cancel over the phone or via live chat face similar obstacles, including dropped or disconnected calls and having to re-explain their reason for calling.
  • The lawsuit targets Adobe executives Maninder Sawhney and David Wadhwani, alleging they directed or participated in the deceptive practices.
  • The federal government began investigating Adobe's cancellation practices late last year.
  • Adobe's subscription model has long been a source of frustration for creatives, who feel forced to stay subscribed to continue working.
  • Recently, Adobe's new terms of service were met with backlash, with some users interpreting the changes as an opportunity for Adobe to train its AI on users' art.
  • The company has also faced regulatory scrutiny in the past, including antitrust scrutiny from European regulators over its attempted $20 billion acquisition of product design platform Figma in 2022, which was ultimately abandoned.

ForgottenFlux

joined 5 months ago