- Nippon Telegraph and Telephone, Japan's largest telecom company, and Yomiuri Shimbun Group Holdings, the country's biggest newspaper, have called for a law to end unrestrained use of artificial intelligence (AI). Wall Street Journal (LR: 3 CP: 5)
- They warned that democracy and social order could be in peril in the face of unhindered AI development. Wall Street Journal (LR: 3 CP: 5)
- The companies said Monday that AI tools can be inaccurate or biased, citing chatbots that suffer from "hallucinations" and often "lie with confidence," and warned that unchecked generative AI could also catalyze warfare. The Telegraph
- AI hallucinations, most commonly associated with AI text generators, occur when a model generates false or misleading information but presents it as fact. Such generators are typically powered by large language models, which reportedly produce output based on statistical probability rather than factual accuracy (a sketch after this list illustrates the distinction). Built In
- Both OpenAI and Google caution users that AI chatbots make mistakes and recommend cross-checking their outputs. OpenAI has implemented "process supervision," while Google relies on user feedback to address the hallucination issue. CNBC (LR: 3 CP: 5)
- Last year, US Securities and Exchange Commission chairperson Gary Gensler told The Financial Times that AI could roil financial markets "as soon as the late 2020s or early 2030s." The Hill
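To make the "probability, not accuracy" point concrete: a language model picks each next token by sampling from a probability distribution over its vocabulary, so a fluent but false continuation can win whenever it is statistically likely. Below is a minimal Python sketch; the prompt, vocabulary, and probabilities are invented for illustration and do not come from any real model.

```python
import random

# Hypothetical next-token distribution a model might assign after the
# prompt "The first person to walk on the Moon was". The numbers are
# invented; a real LLM computes such a distribution over a vocabulary
# of tens of thousands of tokens.
next_token_probs = {
    "Neil": 0.55,  # factually correct continuation
    "Buzz": 0.25,  # plausible but wrong (Aldrin walked second)
    "Yuri": 0.15,  # fluent but false (Gagarin never landed on the Moon)
    "a": 0.05,     # generic continuation
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Sample one token in proportion to its assigned probability.

    The model optimizes for what is likely to come next, not for what
    is true: here a false continuation is sampled roughly 40% of the
    time, even though the correct answer is the single most likely one.
    """
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

if __name__ == "__main__":
    samples = [sample_next_token(next_token_probs) for _ in range(1000)]
    wrong = sum(tok in {"Buzz", "Yuri"} for tok in samples)
    print(f"False-but-fluent continuations: {wrong / len(samples):.0%}")
```

Sampling is what makes a confident wrong answer possible: nothing in the procedure checks the claim itself, only how probable the words are.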
Narrative A:
- AI struggles with hallucinations, confidently generating inaccurate information. Despite guardrails, these errors remain a challenge because they carry real-world consequences. Fully eradicating the problem may prove difficult, and perfect accuracy remains a distant goal. Trust in AI responses must be sparing, as there's no immediate fix in sight.
CNN (LR: 2 CP: 5)
Narrative B:
- AI errors ought to be viewed as creative experimentation. The focus should be on embracing AI's unpredictable nature rather than demanding specific outcomes. While hallucinations are a genuine concern in fields like finance and healthcare, they can be harnessed elsewhere for creative and innovative ends. Proper context is key to managing the risk.
BLOOMBERG (LR: 3 CP: 5)
Narrative C:
- AI chatbots' hallucinations act as a buffer, requiring human verification before anyone can fully rely on AI-generated content. The debate continues over whether hallucinations can be eliminated entirely. For now, they offer a balance with even some upside: preventing complete automation and keeping humans involved in critical decision-making.
WIRED (LR: 3 CP: 4)
Nerd narrative:
- There's an 81% chance that by June 30, 2025, OpenAI will release an LLM product or API that hallucinates 5x less than GPT-4 did when it was released, according to the Metaculus prediction community.
METACULUS (LR: 3 CP: 3)