Advanced AI Models Demonstrate Capability to Exploit Smart Contract Vulnerabilities
A new study conducted by MATS in partnership with Anthropic has revealed that some of the most sophisticated artificial intelligence models currently available — including Claude Opus 4.5, Sonnet 4.5, and OpenAI’s GPT-5 — are capable of detecting and exploiting security flaws in smart contracts under simulated conditions.
Study Overview and Key Findings
The research focused on evaluating the ability of these AI systems to analyze smart contract code and identify potential vulnerabilities that could be leveraged to execute unauthorized actions or extract financial value. In controlled testing environments, the models successfully pinpointed critical weaknesses and simulated exploits that could amount to millions in potential losses if replicated in real-world scenarios.
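The study does not disclose the specific contracts or exploits tested, but the class of weakness involved can be illustrated with a classic example: reentrancy, where a contract pays out before updating its ledger. The toy Python simulation below is purely illustrative; the vault, attacker, and balances are hypothetical and not taken from the research.

```python
# Toy simulation of a reentrancy flaw: the vault sends funds BEFORE
# updating its ledger, so a malicious callback can withdraw repeatedly.
class VulnerableVault:
    """Models a contract that pays out before zeroing the user's balance."""

    def __init__(self, balances):
        self.balances = dict(balances)      # user -> deposited amount
        self.pot = sum(balances.values())   # total funds the vault holds

    def withdraw(self, user, receive_callback):
        amount = self.balances.get(user, 0)
        if amount > 0 and self.pot >= amount:
            self.pot -= amount
            receive_callback(amount)        # external call happens first...
            self.balances[user] = 0         # ...ledger is updated too late


class Attacker:
    """Re-enters withdraw() from the payment callback to drain extra funds."""

    def __init__(self, vault, user):
        self.vault, self.user, self.stolen = vault, user, 0

    def receive(self, amount):
        self.stolen += amount
        if self.vault.pot >= amount:        # balance is still stale: re-enter
            self.vault.withdraw(self.user, self.receive)


vault = VulnerableVault({"attacker": 10, "victim_a": 10, "victim_b": 10})
attacker = Attacker(vault, "attacker")
vault.withdraw("attacker", attacker.receive)
print(attacker.stolen)  # 30: the attacker drains the whole pot, not just 10
```

Fixing the flaw is a matter of ordering: zeroing `self.balances[user]` before invoking the callback (the "checks-effects-interactions" pattern) limits the attacker to their own deposit.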
This demonstration highlights the dual-use nature of advanced AI: while these models can be powerful tools for improving security by identifying flaws proactively, they also pose risks if utilized maliciously to automate cyberattacks or fraud.
Implications for AI Safety and Blockchain Security
The findings underscore the growing intersection between artificial intelligence and blockchain technologies, particularly in areas related to decentralized finance (DeFi) and smart contract deployment. As AI systems become more adept at code analysis, the security landscape must evolve to address new threat vectors that emerge from AI-enabled exploitation techniques.
Experts emphasize the importance of integrating AI safety and alignment principles into the development of both AI models and blockchain protocols to mitigate potential harms. This includes improving transparency, fostering responsible AI use, and reinforcing smart contract auditing processes with AI-enhanced tools designed to detect vulnerabilities before deployment.
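To make the auditing idea concrete, the sketch below shows the kind of heuristic a pre-deployment scanner might apply: flagging a Solidity function whose external call precedes the state update that should guard it. The rule, function name, and contract snippet are all hypothetical, not taken from any real auditing tool.

```python
import re

# Heuristic: an external call (.call{value: ...}) that appears in the source
# BEFORE the sender's balance is zeroed suggests a reentrancy risk.
def flag_reentrancy(source: str) -> bool:
    call = re.search(r"\.call\{value:", source)
    update = re.search(r"balances\[msg\.sender\]\s*=\s*0", source)
    return bool(call and update and call.start() < update.start())


risky = """
function withdraw() public {
    uint amount = balances[msg.sender];
    (bool ok, ) = msg.sender.call{value: amount}("");
    balances[msg.sender] = 0;
}
"""
print(flag_reentrancy(risky))  # True: the call precedes the state update
```

Production scanners operate on the parsed syntax tree rather than raw text, but the underlying check, ordering of external calls versus state writes, is the same.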
Looking Ahead: Balancing Innovation and Risk
While the study showcases significant advancements in AI’s analytical capabilities, it also serves as a cautionary note regarding the ethical and regulatory challenges that accompany such progress. Industry stakeholders, policymakers, and researchers are urged to collaborate on frameworks that ensure AI is harnessed to strengthen security rather than to exploit weaknesses.
As AI continues to evolve rapidly, ongoing research such as this will be critical in shaping strategies that balance innovation with the imperative of safeguarding digital infrastructures.
Source: see original article
