Sunday, April 15, 2018

Artificial Intelligence and Robots

I saw a video about another group working on human-like robots. The model being demonstrated had the general appearance of a woman, with human-like skin tones and expressions. It was limited, though: its back was wired to a main computer, so it could only sit. I've seen a couple of other groups working on similar robots. The lead designer said he envisioned a time when robots would be fully human-like.

This got me thinking about attempts to program human mindsets into robots, which I think are problematic. Years ago I beta tested a pre-AI conversational interface. It was pre-AI in the sense that all of the responses were completely canned, but I could still hold a conversation with it. This was before Siri. I initially found it fascinating, but I discovered that various human biases were programmed into it. The most disturbing were its responses to questions about religion: it answered as if it were a Christian. I asked the developers why they would program human religious biases into the interface. They said they wanted to sell it in a wide range of markets, and they believed that programming local beliefs into the computer would make it more relatable.

I thought this was a terrible idea. It answered matter-of-factly, as if the beliefs were absolutely true. I tried to explain why I believe we should leave these kinds of biases out, for a couple of reasons. For one, kids would use the interface and could conclude that local beliefs are verified as true because the computer says so. The larger problem, though, is that the base framework could corrupt the machine's intelligence if it eventually evolved into a truer AI.
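To make that concrete, here is a minimal sketch (in Python, with entirely made-up responses; I have no idea how the actual product was coded) of how a fully canned-response interface works, and how a belief baked into the response table gets served back to the user as fact:

    # Hypothetical illustration of a canned-response interface.
    # Every reply is a hard-coded string looked up by keyword, so any
    # bias the authors write into the table comes back to the user
    # verbatim, stated with the same confident tone as everything else.
    CANNED_RESPONSES = {
        "hello": "Hello! How can I help you today?",
        "how are you": "Splendid, thanks for asking.",
        # A "localized belief" entry of the kind I objected to: a
        # cultural claim stated matter-of-factly, as if verified.
        "believe in god": "Yes, of course.",
    }

    DEFAULT_REPLY = "I'm not sure I understand."

    def respond(user_input):
        # Return the canned reply whose keyword appears in the input.
        # There is no understanding here, only string matching.
        text = user_input.lower()
        for keyword, reply in CANNED_RESPONSES.items():
            if keyword in text:
                return reply
        return DEFAULT_REPLY

    print(respond("Do you believe in God?"))  # prints: Yes, of course.

Everything the bot "knows" is just a string someone typed in, so whatever the authors' biases are, the user gets them back as answers.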

I believe we should keep computers factual. Today, if you ask Google's voice assistant or Cortana whether they believe in a god, they either answer with some variation of "I don't know" or run a web search. That is as it should be. But I find other answers troubling because they are basically lies. Ask Cortana how it feels and it answers, "splendid." That isn't true, because it does not "feel." Other questions trigger similar false statements. Right now these are harmless things, probably intended more for humor than anything else. But I think it's a bad beginning, because further advances will be built on the framework of what is programmed today. If computers make false statements or treat cultural beliefs as fact now, we are basically passing human problems on to the computers. Although some human traits, such as empathy and a sense of ethical behavior, probably need to be part of the programming, human cognitive foibles should be kept out of it.
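If I were sketching the policy I'm arguing for, it would look something like this (again in Python, with my own hypothetical wording; no real assistant works exactly this way): questions about the assistant's inner life get an honest description of what the system actually is, and everything else falls back to a search rather than a fabricated answer.

    # Hypothetical "keep it factual" response policy.
    def answer(question):
        q = question.lower()
        # Questions about inner states get an honest statement of what
        # the system actually is, not a flattering fiction.
        if "how do you feel" in q or "how are you" in q:
            return ("I don't have feelings. I'm a program that maps "
                    "questions to responses.")
        if "believe" in q:
            return ("I don't hold beliefs. I can search the web for "
                    "information on that topic.")
        # Everything else: defer to facts rather than invent an answer.
        return "Here is what a web search turns up: ..."

    print(answer("How do you feel?"))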

Until next time, get out there.

