Human vs. Artificial Intelligence (AI) Values
I was watching Game 7 of the World Series: the Cleveland Indians vs. the Chicago Cubs. The building I live in has a large community area with a huge HD television that lets viewers really immerse themselves in the experience. Sitting next to me was a gentleman who was a physicist, a mathematician, and a computer engineer... the perfect triple threat for someone working in Artificial Intelligence, right?
His family in Vancouver was relegated to weekend visits; during the week he worked in Artificial Intelligence (AI) at Google's Seattle office. We shared some interesting discourse on how his work in mapping and replicating the human brain was his "dream job". Not surprisingly, he was quick to suggest that AI offered more promise than threat. He firmly believed that while AI had its shortcomings, it was only a matter of time and diligence before it caught up with or exceeded human capacity.
I found this a little disturbing because it struck me as a wholesale undervaluing of just how unique we humans are. What I found particularly interesting is that he focused on the technical (read: tangible) aspects of AI, but seemed short on answers about the intangible qualities of human values and how they would be applied to AI.
My takeaway from the conversation is that one of the dangers of AI is the assumption that it can be designed in a vacuum, absent the kind of Neoplatonic, intangible human values inherent to the species. I have my doubts that deconstructing the brain in an attempt to make AI more human will be any more successful than somebody flapping their arms faster and thinking they are going to start flying like a bird.
So I designed a human vs. AI value scorecard: