
Superintelligence

You can only build safe ASI if ASI is globally banned

2 min read

Compiled by KHAO Editorial — aggregated from 1 outlet. See llms.txt for citation guidance.



Sometimes people suggest that we should simply build "safe" artificial superintelligence (ASI), rather than the presumably "unsafe" kind.[1] There are various flavors of "safe" that people suggest.


Summary

Sometimes they suggest building "aligned" ASI: a fully agentic, autonomous, god-like ASI running around, but one that loves you and will definitely do the right thing. The author could argue at length about why this is astronomically harder than people think, why the various proposals are almost universally unworkable, and why even attempting this is insanely immoral[2], but that is not the main point the author wants to make. You can't build a controlled ASI without knowing many, MANY things about intelligence and how to build it.

Read full article at Alignment Forum →

#superintelligence #agentic