this post was submitted on 09 Apr 2024

Improve The News


Improve The News is a free news aggregator and news analysis site developed by a group of researchers at MIT and elsewhere to improve your access to trustworthy news. Many website algorithms push you (for ad revenue) into a filter bubble by reinforcing the narratives you impulse-click on. By understanding other people’s arguments, you understand why they do what they do – and have a better chance of persuading them.

**What's establishment bias?** The establishment view is what all big parties and powers agree on, which varies between countries and over time. For example, the old establishment view that women shouldn’t be allowed to vote was successfully challenged. ITN makes it easy for you to compare the perspectives of the pro-establishment mainstream media with those of smaller, establishment-critical news outlets that you won’t find in most other news aggregators.

This Magazine/Community is not affiliated with Improve The News and is an unofficial repository of the information posted there.


**LR (left/right):** 1 = left-leaning, 3 = neutral, 5 = right-leaning
**CP (critical/pro-establishment):** 1 = critical, 3 = neutral, 5 = pro-establishment

  • Nippon Telegraph and Telephone, Japan's largest telecom company, and Yomiuri Shimbun Group Holdings, the country's biggest newspaper, have called for a law to end unrestrained use of artificial intelligence (AI). Wall Street Journal (LR: 3 CP: 5)
  • They warned that democracy and social order could be in peril in the face of unhindered AI development. Wall Street Journal (LR: 3 CP: 5)
  • The companies said Monday that AI tools can be inaccurate or biased, citing chatbots that suffer from "hallucination" and often "lie with confidence," and warned that unchecked generative AI could even catalyze warfare. The Telegraph
  • AI hallucinations, most commonly associated with AI text generators, occur when an AI generates false or misleading information but presents it as fact. Text generators are usually powered by large language models, which reportedly choose outputs by probability, not accuracy (see the sketch after this list). Built In
  • Both OpenAI and Google caution users that AI chatbots make mistakes and recommend cross-verifying their outputs (a toy version also follows this list). OpenAI has implemented "process supervision" and Google collects user feedback to address the AI hallucination issue. CNBC (LR: 3 CP: 5)
  • Last year, US Securities and Exchange Commission chairperson Gary Gensler told The Financial Times that AI could roil financial markets "as soon as the late 2020s or early 2030s." The Hill
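
The "probability, not accuracy" point is easiest to see in a toy sketch (Python; the vocabulary, probabilities, and `sample_next_token` helper below are invented for illustration, not taken from any real model). A language model emits whatever continuation is most probable in its training data, which is not necessarily what is true:

```python
import random

# Toy next-token distribution a language model might assign after the
# prompt "The capital of Australia is". These probabilities are invented
# for illustration; a real model learns its own from training data.
next_token_probs = {
    "Sydney": 0.55,    # frequent in casual text, but factually wrong
    "Canberra": 0.40,  # the correct answer
    "Melbourne": 0.05,
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Sample a token in proportion to its probability."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# The model optimizes for likely text, not true text: picking the single
# most probable token here always yields "Sydney", a confident error.
greedy = max(next_token_probs, key=next_token_probs.get)
print("Greedy completion:", greedy)
print("Sampled completion:", sample_next_token(next_token_probs))
```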
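
Likewise, the cross-verification that OpenAI and Google recommend can be mimicked programmatically. This is a minimal sketch under the assumption that `ask_model` wraps whichever chatbot API you actually use; here it merely simulates a model that occasionally hallucinates. The idea: ask the same question several times and flag any disagreement for human review.

```python
import random
from collections import Counter

def ask_model(question: str) -> str:
    """Hypothetical stand-in for a real chatbot API call; this one
    simulates a model that hallucinates roughly a third of the time."""
    return random.choice(["Canberra", "Canberra", "Sydney"])

def cross_verify(question: str, n: int = 3) -> tuple[str, bool]:
    """Ask the same question n times. Unanimous answers are weak evidence
    against hallucination; any disagreement is a flag for human review."""
    answers = [ask_model(question) for _ in range(n)]
    answer, count = Counter(answers).most_common(1)[0]
    return answer, count == n

answer, consistent = cross_verify("What is the capital of Australia?")
print(answer, "(consistent)" if consistent else "(needs human verification)")
```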

Narrative A:

  • AI struggles with hallucinations, confidently generating inaccurate information. Despite guardrails, these errors remain a challenge because they carry real consequences. Fully eradicating the problem may prove difficult, and perfect accuracy remains a distant goal. Trust AI responses sparingly, as there's no immediate fix in sight.
    CNN (LR: 2 CP: 5)

Narrative B:

  • AI errors ought to be viewed as creative experimentation. The focus should be on embracing AI's unpredictable nature rather than aiming for specific outcomes. While hallucinations are a genuine concern in fields like finance and healthcare, ways to leverage them for creative endeavors should be explored. Proper context is key to managing this risk.
    BLOOMBERG (LR: 3 CP: 5)

Narrative C:

  • AI chatbots' hallucinations act as a buffer, requiring human verification before AI-generated content can be fully relied upon. The debate continues over whether hallucinations can be eliminated entirely. For now, they even offer some upside, preventing complete automation and keeping humans involved in critical decision-making processes.
    WIRED (LR: 3 CP: 4)

Nerd narrative:

  • There's an 81% chance that by June 30, 2025, OpenAI will release an LLM product or API that hallucinates 5x less than GPT-4 did when it was released, according to the Metaculus prediction community.
    METACULUS (LR: 3 CP: 3)