Catastrophic Liability: Managing Systemic Risks in Frontier AI Development

Aidan Kierans, Kaley Rittichier, Utku Sonsayar

Abstract

Despite growing risks, current practices at frontier AI labs lack transparency around safety measures, testing procedures, and governance structures. This opacity makes it challenging to verify safety claims or establish appropriate liability when harm occurs. Drawing on liability frameworks in domains with similar risks, we propose a comprehensive approach to safety documentation and accountability in frontier AI development.
