Reason
In late 2020 I decided to start writing. Since the late 90s I have wondered about the impact of ever-greater computer power and when it might arrive. At the time I thought it would take about 10 years. 2010 has come and gone, computer power has kept growing, and the exaflop machine now exists.
Nick Bostrom has categorised the various existential risks humanity faces. SHAGI worries me the most.
I think it is both probable and difficult to stop. Stuart Russell wrote an excellent book about this problem, Human Compatible: Artificial Intelligence and the Problem of Control. To my reading, the book splits into two halves: the first describes the problem, and the second describes how a good and competent actor might mitigate the risk. But in the middle he makes clear that it is essentially game over if we meet a SHAGI unprepared.