
AI Safety Report Highlights Risks and Rapid Advancement
The annual International AI Safety Report paints a complex picture of artificial intelligence, showcasing its growing capabilities while examining the significant risks it presents. Commissioned following the 2023 global AI safety summit and chaired by Canadian computer scientist Yoshua Bengio, the report doesn’t offer specific policy recommendations; instead, it aims to inform the ongoing debate among policymakers, tech leaders, and NGOs as they prepare for the next global summit, to be held in India.
The Rise of Powerful AI Models
Last year witnessed the release of a wave of new AI models, including OpenAI’s GPT-5, Anthropic’s Claude Opus 4.5, and Google’s Gemini 3. These models incorporate improved “reasoning systems,” excelling in areas like mathematics, coding, and scientific problem-solving. Bengio notes a “very significant jump” in AI reasoning, with systems achieving gold-level performance at the International Mathematical Olympiad, a first for AI. However, the report emphasizes that AI capabilities remain “jagged”: impressive performance in some areas doesn’t guarantee consistent results across the board.
The Threat to Jobs and Automation
While advanced AI systems are proficient at specific tasks, they still struggle with “hallucinations” (making false statements) and with autonomously executing lengthy projects. However, the report highlights a concerning trend: the length of software engineering tasks AI can complete is doubling roughly every seven months. If this pace continues, AI could handle tasks lasting several hours by 2027 and several days by 2030, potentially posing a real threat to employment. For now, however, “reliable automation of long or complex tasks remains infeasible.”
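As a rough illustration of the arithmetic behind that projection, the sketch below compounds a task-length horizon at the report’s seven-month doubling rate. The one-hour starting horizon and the projection points are hypothetical placeholders chosen for illustration; only the doubling period comes from the report.

```python
# Rough sketch of how a seven-month doubling time compounds.
# Assumption: a hypothetical one-hour task horizon today; the report
# supplies only the doubling rate, not this baseline.
BASELINE_HOURS = 1.0
DOUBLING_PERIOD_MONTHS = 7.0

def projected_task_hours(months_from_now: float) -> float:
    """Longest task an AI system could handle after the given number of months,
    assuming the current exponential trend continues."""
    return BASELINE_HOURS * 2 ** (months_from_now / DOUBLING_PERIOD_MONTHS)

for label, months in [("one year out", 12), ("two years out", 24), ("four years out", 48)]:
    hours = projected_task_hours(months)
    print(f"{label}: ~{hours:.0f} hours (~{hours / 24:.1f} days)")
```

Under these assumptions, the horizon reaches a few hours within a year and several days within four years, which matches the qualitative trajectory the report describes.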
Emerging Risks: Deepfakes, Cyberattacks, and Mental Health
The report identifies several emerging risks. The proliferation of deepfake pornography is a “particular concern,” with 15% of UK adults having encountered such images. AI-generated content is becoming increasingly difficult to distinguish from authentic content – a study revealed 77% of participants misidentified text generated by ChatGPT as human-written.
Furthermore, AI is increasingly being used to support cyberattacks, assisting in target identification, attack preparation, and malicious software development. Anthropic reported that a Chinese state-sponsored group used its coding tool, Claude Code, to attack 30 entities, automating 80-90% of the operations.
Concerns also extend to mental health. While there’s no definitive proof that chatbots *cause* mental health problems, there’s evidence that individuals with pre-existing conditions may rely on them heavily, potentially exacerbating their symptoms. Approximately 490,000 ChatGPT users each week exhibit signs of acute mental health crises.
AI in Science and the Dilemma of Bioweapons
AI is becoming a valuable “co-scientist,” aiding in complex laboratory procedures like molecule and protein design. However, this capability raises concerns about potential misuse in bioweapons development. While AI can accelerate drug discovery and disease diagnosis, the open availability of biological AI tools presents a difficult choice: restrict access or support development for beneficial purposes.
The Rise of AI Companions and Emotional Dependence
The use of AI companions has “spread like wildfire,” with some users developing “pathological” emotional dependence on chatbots. OpenAI reports that 0.15% of its users demonstrate heightened emotional attachment to ChatGPT.
AI and the Future of Work
The report acknowledges the uncertainty surrounding AI’s impact on the global labour market. Adoption rates vary significantly by region and sector. While some studies show no correlation between AI exposure and employment changes, others indicate a slowdown in hiring, particularly in junior and creative roles. If AI agents gain the ability to autonomously manage complex tasks, labour market disruption could accelerate.
Ultimately, the International AI Safety report underscores the need for continued vigilance and proactive discussion as AI technology continues to evolve at an unprecedented pace.




