Anthropic adds Claude 4 security measures to limit risk of users developing weapons

Anthropic on Thursday said it had activated stricter artificial intelligence controls for Claude Opus 4, its latest AI model.

The new AI Safety Level 3 (ASL-3) controls are intended “to limit the risk of Claude being misused specifically for the development or acquisition of chemical, biological, radiological, and nuclear (CBRN) weapons,” the company wrote in a blog post.

The Amazon-backed company said it was taking the measures as a precaution and that the team had not yet determined whether Opus 4 has crossed the capability threshold that would require the protections.

Anthropic announced Claude Opus 4 and Claude Sonnet 4 on Thursday, touting the models’ advanced ability to “analyze thousands of data sources, execute long-running tasks, write human-quality content, and perform complex actions,” according to a release.

The company said Sonnet 4 did not require the stricter controls.


