
We have old ways and humble expectations. Let that not sway us from the path that leads to abundant intelligence. What we suffer from is an intelligence deficit, not an excess of intelligence.
My position is that our civilization lacks sufficient intelligence, especially of the technical variety, and that multiplying our intelligence many-fold is not only desirable but an utmost necessity.
Multiplying our intelligence should not be regarded as an existential risk. On the contrary, the intelligence deficit has already created an existential risk in the form of anthropogenic global warming, which is driving an ongoing mass extinction event. Natural stupidity can kill us all.
The risk posed by malicious or accidental uses of AI tech is dwarfed by its potential to create abundant new wealth that is orders of magnitude greater than the entire world economy.
Malicious use of AI tech is a better security model to consider than bad programming. Counter-measures against such misuse are important, but, as with most technologies, developing them will require mastering the technology itself.
Focusing on improbable disaster scenarios does not improve our understanding of security; if anything, it impedes research by diverting attention from the more urgent and realistic kinds of harm that AGI systems might cause.
