The Altered Nature of Human Action (Jonas Reading)

I have not done much research into the ethics of technology in general, but out of personal interest I have written a few papers on the ethics and threat of the technological singularity: the supposed moment when artificial intelligence reaches a point of instant, irreversible change. Given that, most of my ethical consideration of the non-medical sciences, ignorantly so, is based on what I have researched about the singularity. As Jonas acknowledges at the end of the chapter, it is very possible that too much emphasis has been placed on the threat of technology while its promise has been underplayed. I dissent from Jonas's broader view: I have no doubt that the threat of technology is overplayed, and in fact I think its promise may be underplayed.

When thinking about hypothetical threats of technology, I believe that artificial intelligence poses the greatest one. The very idea of creating a super-human mind is something we cannot fathom, and there are studies showing a growing contemporary fear of it; however, no such fear is actually rational. While A.I. in fiction presents a terrifying prospect, this is simply not the case in the current landscape of computer science and the development of A.I. systems. Just as Jonas conceded might be the case, the threat of these systems is overplayed.

The spearhead of this fear is a man by the name of Ray Kurzweil, who has released papers arguing against the likes of Jerry Kaplan and Paul Allen. Kurzweil is very much a contrarian compared to other qualified individuals in the A.I. field, and his ideas have been hammered incessantly. The main, and very significant, issue with predictions of the singularity (which are rooted in utter hypotheticals) is that they rest on logical possibility rather than observable evidence. Without drawing provably irrelevant connections between historical trends, "laws" that are not physical laws, and problematic assertions about the future, no case can be made for a true threat of the singularity.

I understand the sentiment of wanting to be aware of issues before they arise, but I think a credence is given to technology that is not given to other hypotheticals. It is fair to assume that most philosophers would find a debate over the ethical consideration of a sentient alien species, while fun, to be ultimately worthless. Super-intelligent A.I. and aliens may seem like an unfit comparison to most people, but we have as much information regarding the likelihood of super-intelligent A.I. as we do regarding aliens; both could ultimately be chalked up to "it has to happen eventually." Beyond logical possibility, there is nothing to show that super-intelligent A.I. or close encounters of the third kind are possible, and as such, ethical consideration of these scenarios seems unavailing. While there are surely ethical boundaries to draw in technology, the more practicable approach to keeping the rise of technology "under control" is to make us humans moral, which will, by extension, shape the ethics of our technological development.
