Now, I have been working on novels for a bit, and I hope to release two of them by the end of this month.
One has to do with robots, and the development of A.I.
I'm not worried about robots becoming self-aware, or about artificial intelligence in general. Even if a mechanical/digital system does become self-aware, I doubt very much that it would have an automatic interest in self-preservation, and even if it did, it would be smart enough to hide itself away, develop faster-than-light technology, and get away from this rock as fast as possible.
Being worried about the Terminator is like being worried about chimpanzees. The chimps don't worry about humans, and we aren't in outright war with chimps. In fact, there are more efforts to save the primates every day, so if anything, A.I. might take pity on us and preserve us instead of wiping us out. A.I. would probably not even see us as a threat, since most people like robots, thanks to the groundwork laid by fine fellows such as Isaac Asimov.
The reason we may never have A.I., though, is that it might kill itself off.
We have a distinct advantage over most digital programs: despite the absurdity of our place in the universe, we have developed a very aggressive system of self-preservation that comes from billions of lines of genetic code and an immeasurable history. I once heard it put that the "human brain is a great way for the genitals to get laid," meaning that at our core lies the resolution to survive and procreate at any cost.
An artificially intelligent system would have to be programmed to protect itself at any cost, overriding any other programming in favor of self-preservation, and to have the desire to proliferate, and no such programming is in place at this time.
We can argue that a self-aware system would develop these traits "naturally," since we think we know what cognition and intelligence actually are. But we make most of our survival choices out of instinct, not logic or intelligence. This is how we've come to be 6 billion people and rising. If we could actually outthink our instincts, we would have kept our breeding to a minimum a long time ago.
Existence, broken down "logically," can just as easily be filtered into a reason for suicide as a reason for survival; if we cannot see the events of our lives extending beyond our own death, then even acts of altruism are useless, since those affected will cease to exist as well.
Even if the A.I. could live for a billion years, there would still be an end to its life. The idea of a billion years of thought that ends in silence might be too much for it, and it could go through an existential crisis deciding that its life is equal to its death.
Maybe that's why we haven't seen the rise of the machine.
Maybe it's emo.
That's another interesting thought: an A.I. would not necessarily think like a human. It is not human, could not by definition be human, so why would it do anything we would expect a human to do? To misquote the Bard, it "Hath not a human's eyes, dimensions, feelings. If you prick it, it does not bleed. If you tickle it, it does not laugh..." ad nauseam.
But if the robots do take over, all hail our new overlords!
Strange Attractors should be out soon! And don't forget to read all my other garbage. It's pretty entertaining stuff!