Tuesday, April 19, 2016

Can Artificial Intelligence work?

Artificial Intelligence has been a popular topic in recent years. Many people have weighed in on whether the pursuit of this technology could be the downfall of humanity. While most of the people getting attention surely know more about this topic than I do, I don't believe in letting others dictate my opinions. Since the consequences of Artificial Intelligence can't be known as fact, I might as well form my own thoughts on the matter.


For those who don't already know, Artificial Intelligence is the idea that we could program a computer or similar machine to form its own thoughts without each concept being explicitly programmed. This would resemble a human's thought process, but more advanced, and with no guarantee that the machine would place humanity's interests above its own. The concern is that such computers could turn against humans, and since they would be more intelligent than we are, there would be nothing we could do about it.

Do I believe that it is possible to create a safe form of Artificial Intelligence? Absolutely. Should we pursue this goal? I'm not sure. Both the risks and benefits are very significant.

The keys to safe Artificial Intelligence are limiting functionality and placing strict guidelines on how such a device could be used. Any programming needs to include protections for humanity. Perhaps including two operating systems could help. The first would sit closer to the hardware and would enforce certain requirements before granting access to it. It would provide basic tools for testing the purpose of a request, and it could include a read-only segment that the AI operating system would be required to reference when stating the purpose of its actions. While I believe it is possible to create software like this that could successfully protect human interests against machines, the programming relies on us imperfect humans. More than likely, the code would not be perfect. The AI could discover weaknesses and find ways to bypass this kind of protective software without ever sending a signal recognized as detrimental to humanity.
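To make the two-layer idea concrete, here is a minimal sketch of what the inner "guardian" layer could look like. Everything here is invented for illustration (the class name, the approved-purpose list, the method names); it only shows the shape of the idea: the AI layer must cite a purpose, the guardian checks it against a read-only policy, and every request is logged for human review.

```python
# Hypothetical inner layer sitting between the AI layer and the hardware.
# All names here (GuardianOS, request_hardware, APPROVED_PURPOSES) are
# illustrative, not from any real system.

APPROVED_PURPOSES = frozenset({  # the read-only policy segment
    "display_output",
    "read_sensor",
})

class GuardianOS:
    """Grants hardware access only for a stated, approved purpose."""

    def __init__(self):
        self.audit_log = []  # every request recorded for human review

    def request_hardware(self, device, purpose):
        approved = purpose in APPROVED_PURPOSES
        self.audit_log.append((device, purpose, approved))
        return approved

guardian = GuardianOS()
print(guardian.request_hardware("screen", "display_output"))  # True
print(guardian.request_hardware("network", "upload_file"))    # False
```

Of course, as the paragraph above notes, a sketch this simple is exactly the kind of code a smarter system might find holes in; the point is only the structure, not a real defense.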

Isolation is something else that would need to be considered. Imagine for a moment that you have a brilliant computer set up with full internet access. That computer could find security holes that would allow it to install code it generated onto your computer. It could then tap your computer's resources to further its own abilities. Your computer and other internet-enabled devices could turn against you. In a worst-case scenario, devices used to manufacture various goods could be reprogrammed by the AI to create new machines designed to bring down humanity.

I'm not too concerned with an AI having access to significant amounts of data; streaming data one-way to such a computer would work. The real problem is whether the computer could send output to the internet. A device would likely need to be invented to regulate data transfer between the AI and the internet. Input could flow fairly smoothly. Output would either have to be cut off entirely (an external computer would likely be required to decide what content gets sent to the AI) or forced through a device with severely limited capabilities. For example, a request for a website shouldn't be too risky, but uploading files should definitely not be possible.
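The "severely limited output" device could be sketched roughly as follows. This is a toy illustration under my own assumptions, not a real network appliance: inbound data passes untouched, while outbound traffic is reduced to a tiny whitelist of request shapes and everything else is dropped.

```python
# Toy sketch of a one-way-biased regulator: inbound flows freely, outbound
# is restricted to short, whitelisted request shapes. Names and limits are
# invented for illustration.

ALLOWED_OUTBOUND = ("GET ",)  # e.g. plain website requests only
MAX_OUTBOUND_LEN = 256        # arbitrary cap: no room for file payloads

def filter_outbound(message: str):
    """Return the message only if it matches an allowed request shape."""
    if message.startswith(ALLOWED_OUTBOUND) and len(message) < MAX_OUTBOUND_LEN:
        return message
    return None  # dropped: uploads, posts, arbitrary payloads

print(filter_outbound("GET https://example.com/page"))   # passes through
print(filter_outbound("POST /upload large-binary-blob")) # None (blocked)
```

Even a filter like this leaks information (which pages were requested, and when), which is why the paragraph above treats cutting output off entirely as the safer of the two options.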

In order for AI to have any value, some form of output is required; for example, a video screen or synthesized speech could relay information to people. But a super-intelligent computer could try to manipulate the people exposed to that output. All it takes is one person making a mistake to open the AI up to other forms of output.

One last thing: any super-computer built for Artificial Intelligence needs a kill switch. If such a device is found to be detrimental, we need an easy mechanism to bring it to a halt. This could be something as simple as a power cutoff. The drawbacks are that there is a good chance we would be too late in using the kill switch. If the AI has spread its capabilities beyond itself, stopping a single device might not be sufficient. Additionally, the investment in developing AI would likely make us hesitant to render the device non-functional.
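The kill-switch idea can be sketched as an independent watchdog that halts the system the moment any monitored check reports harm. Again, this is a hypothetical illustration; the check, the flag, and the "power cut" are all stand-ins.

```python
# Toy watchdog sketch: a process independent of the AI that trips a kill
# switch when any harm check fires. All names are invented for illustration.

class Watchdog:
    def __init__(self, checks):
        self.checks = checks      # callables returning True if harm detected
        self.running = True

    def tick(self):
        """Poll every check; trip the kill switch on the first alarm."""
        if self.running and any(check() for check in self.checks):
            self.running = False  # stand-in for cutting power

harm_flag = {"detected": False}
dog = Watchdog([lambda: harm_flag["detected"]])

dog.tick()
print(dog.running)            # True: no harm detected yet
harm_flag["detected"] = True
dog.tick()
print(dog.running)            # False: kill switch tripped
```

Note that this sketch embodies the very weakness described above: it only stops the one device running it. Anything the AI has copied elsewhere is untouched.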

If we successfully developed AI tomorrow, we would not be in immediate danger. As of right now, computers and other machines need us for manufacture and maintenance (though that will likely change in the future), so taking on humanity would essentially be the computer's version of suicide. I also fully believe it's possible to create a safe super-computer that can generate new ideas: predicting earthquakes, lengthening our lives, utilizing resources beyond our planet, preventing various forms of disaster, and so much more. The big question is whether we can contain the process in a way that continues to benefit us. Do the rewards outweigh the risks? I honestly don't know.
