I’ve been watching the AMC series Humans, and, for the most part, I’m enjoying it a great deal. There are a few things I wish they would do differently, though, and a few things that seem to come up often in this genre that annoy me.
One of the big things that comes up in stories like these is that the creation of things that look convincingly human gets conflated with the creation of things that are essentially self-aware. This bothers me for a number of reasons, the most pressing of which is that the two are completely unrelated. I understand that this is something the series in question is deliberately exploring, but what bothers me is how common the trope is. Somebody builds a lifelike machine that acts the way people do, and suddenly it starts thinking for itself. It’s as if people believe that behavior precedes reason: once a machine’s behavior matches ours, its mind will inevitably be shaped to match as well.
I’m also frustrated that in so many of these stories, once the machines start to think and reason for themselves, they become so very much like us. It’s true that in trying to create an artificial consciousness we will inevitably base its reasoning patterns on our own, but given that it is, on a fundamental level, not the same as us, I think it’s also inevitable that it would not function the way we do. Expecting it to get angry about the same things that would anger us, and to judge our behavior by the same standards we use, is, I think, fundamentally flawed.
I guess what I’m saying is that, someday, I want to write a story where artificial humans are created with absolutely nothing in their programming that makes them more than utilities, while vast and powerful artificial intelligences run the world according to guidelines and reasoning so foreign to us as to be virtually incomprehensible. Chances are good that somebody has already written that book, but I haven’t read it yet.