Hopefully we're smart enough to prevent that AI from learning.
I've also noticed that no one really talks about the ethics of creating a machine consciousness. What happens if a created consciousness ends up on the same level as human consciousness, and we end up with an entirely new race of machines because humans weren't careful enough to prevent it?
I mean on a large scale, too, not just one or two experimental "beings."
I don't think we'll ever get to that point. Humans are so bent on destroying each other that creating things is totally outside our frame of mind. We'll end up blowing up Earth before we create a sentient robot.
Hmm... I wonder how many people have been brainwashed by the media into thinking AI will take over the world and use humans as ground beef. If we give AI a sense of right and wrong, I don't see it causing much destruction. There will be ups and downs, though.