The Basic Argument for AI Safety
Richard Y Chappell
High-stakes uncertainty warrants caution and research
When I see confident dismissals of AI risk from other philosophers, it’s usually not clear whether our disagreement is ultimately empirical or decision-theoretic in nature. (Are they confident that there’s no non-negligible risk here, or do they grant that the risk is non-negligible but think we should ignore it anyway?) Either option seems pretty unreasonable to me, for the general reasons I previously outlined in...
