Dear Viktoriya,

While the mission statement of your organization seemed fair at first, I believe that the AI expertise of your board is not sufficiently high, and your board of "scientific advisors" has little to do with actual AI research. It seems you believe that celebrities know everything; we will have to disagree about your faith in the cult of celebrity. Elon Musk is a great investor, and since he is quite successful at it, I would prefer that he stick to investment. Morgan Freeman is a great actor; he should continue playing God. Bostrom is a famous creationist philosopher; he should continue talking about how The Matrix is real, why we should worship post-human programmer gods, why he believes in judgement day, how scared he felt when he watched Terminator, and so on. There are some physicists among you, like Max Tegmark, but Tegmark is not a computer scientist. Computer science is just as deep a scientific field as physics itself, and the few lines of Python code he may have written do not make him an AI researcher.

Please accept my apologies for the trouble, and erase my name from the list of signatories, because I view this effort as a trivialization of my field. For what it's worth, I am very troubled by some of your organization's public statements about AI, which reflect an AI-eschatology mindset rather than any genuine concern about AI. It looks more like an effort to profit or self-promote via AI-eschatology attitudes and to replicate the funding success of such organizations. That is to say, I will have to withdraw from this effort despite the fact that your summary reads fairly well at first, as I now suspect that the true focus is negativity: creating fear, uncertainty, and doubt about AI research, and eventually introducing policy to ban or restrict AI research via moral panic.

AI critics first claimed that AI is impossible, and after we spent several decades explaining why they were wrong, they now claim that it is the enemy of life. That is frustrating to anyone who knows the history of boring and ignorant AI critics like Searle and Penrose. I have not read the longer mission statement in detail, and I think I would rather skip it, seeing that creationists and eschatologists are involved with your organization (most notably FHI/MIRI members). I would ordinarily have supported some of the research goals you listed, but I have my doubts about your true intentions. If you had wanted to accomplish any real good, there would be only actual AI researchers in your organization, not celebrities. Though celebrities may have improved the look of your web page, I believe that was the wrong thing to do. At this stage, this just looks like an over-emphasis on near-trivial, popular-science accounts of AI under the general heading of "AI Ethics", just as FHI does. For what it's worth, I have written an unpublished pop-science paper on the arXiv on the subject, about perceived "risks" to human society and how to address such risks, even with autonomous agents, which are mostly unnecessary. It is actually trivial to disprove almost every claim of FHI about AI; it is just a load of pseudo-scientific nonsense, while my paper hopefully is not, though it does reveal the absurdity of FHI/MIRI. 😉 In a nutshell, FHI/MIRI have a rather dim view of everything under the sun, including ethics. It is just third-rate science fiction. I unfortunately cannot waste my time with an organization dominated by FHI/MIRI proponents, because I know them to be ignorant demagogues. Please accept this as honest criticism.

Kind Regards,

Eray Özkural

My letter to Future of Life Institute

Eray Özkural obtained his PhD in computer engineering from Bilkent University, Ankara. He has a deep and long-running interest in human-level AI. His name appears in the acknowledgements of Marvin Minsky's The Emotion Machine. He collaborated briefly with Ray Solomonoff, the founder of algorithmic information theory, and in response to a challenge Solomonoff posed, invented Heuristic Algorithmic Memory (HAM), a long-term memory design for general-purpose machine learning. Other researchers have been inspired by HAM and call the approach "Bayesian Program Learning". He has designed a next-generation general-purpose machine learning architecture. He is the recipient of the 2015 Kurzweil Best AGI Idea Award for his theoretical contributions to universal induction. He previously invented an FPGA virtualization scheme for Global Supercomputing, Inc., which was internationally patented. He has also proposed a cryptocurrency called Cypher, as well as an energy-based currency that could drive green-energy proliferation. You may find his blog at https://log.examachine.net and some of his free software projects at https://github.com/examachine/.
