Superintelligence · Alignment Forum
You can only build safe ASI if ASI is globally banned
People sometimes suggest that they should simply build "safe" artificial superintelligence (ASI), rather than the presumably "unsafe" kind.[1] There are various flavors of "safe" that people suggest.
Key facts
- People sometimes suggest that they should simply build "safe" artificial superintelligence (ASI), rather than the presumably "unsafe" kind
- Sometimes they suggest building "tool AI" or "non-agentic" AI; sometimes they have even more exotic, or more obviously stupid, ideas
- You can’t build a controlled ASI without knowing many, MANY things about intelligence and how to build it
- The author could argue at length about why this is astronomically harder than people think, why the various proposals are almost universally unworkable, and why even attempting it is insanely immoral
Summary
Sometimes they suggest building "aligned" ASI: you have a fully agentic, autonomous, god-like ASI running around, but it loves you and will definitely do the right thing. The author could argue at length about why this is astronomically harder than people think, why the various proposals are almost universally unworkable, and why even attempting it is insanely immoral[2], but that is not the main point the author wants to make. You can't build a controlled ASI without knowing many, MANY things about intelligence and how to build it.