Thoughts on Artificial "Intelligence" and Cyborgs

Posted: Nov. 14, 2019, 1:20 p.m.
The purity of the sources of data matters.

It may be considered odd in this day and age, but I still like to read from paper. Books and magazines don't mesmerize the brain, and I feel like I get more cognitive bang for my buck when reading from them. Even if our screens aren't hypnotizing us, reading from paper reduces eye strain and allows us to better focus on what we are reading. That's how it seems to me, anyway.

The other day my stack of Atlantic magazines beckoned, and I picked up an issue from last year that had a very interesting article about DARPA's latest super-soldier endeavors. It boils down to merging human brains with machines.

Potential applications for such a synthesis abound: prosthetic limbs controlled directly by the brain, soldiers able to download complex training manuals directly into their brains, the ability to project thoughts directly into entire populations, full-scale Terminator-like cyborgs, or robots piloted directly by people's brain waves. The list goes on, and -- yeah -- it's a little scary.

But I don't think they'll succeed, at least not entirely. Modern science is somewhat stunted by its own methodologies and dogma when it comes to emulating the human mind. It's a paradox within a paradox.

The crux of the problem is that the brain is not the same thing as the mind. The brain is just another organ -- albeit a very important one -- carrying out the orders it is given by DNA. It moves us around, but I strongly suspect that its role in thought and understanding is more limited than we suppose. In the brain, cognition is just electrochemical activity. The same thing happens in your smartphone when you get a call or load a website -- electrons pulsate excitedly through the antennae and processors -- but that doesn't mean the smartphone is thinking or understanding anything. It's just processing what its sensors and operating system have told it to process.

Thoughts and understanding are the things coming from outside the smartphone or laptop. They are your friend's voice on the phone, or the super-cool website you're now reading, which flowed from my mind through my brain and fingers to this web server, and from there to your eyes, brain, and -- with any luck -- mind.

Similarly, our brains are just the CPUs and sensors. They are not the information itself, nor any insight to be gained from the information. Our brains simply process information arriving in the form of light and sound (most information is waves of one kind or another) so that we can read it on the "screen", but the real work of understanding, the intelligence, resides outside the brain, along with the source of all the information the brain arranges on the "screen."

So the DARPA folks are barking up the wrong tree if they're looking at the brain as the source of intelligence. It won't be as simple as transferring neural patterns from one person to another via computer. They will find (and have found) some success with basic control of machines directly through brain waves, but I seriously doubt it will go much further than that.

I'd love to be able to download the entire works of William Shakespeare directly into my brain, but, alas, unless I undertake the task of reading them myself, it will all just be

a tale told by an idiot, full of sound and fury, signifying nothing

What are the implications for artificial "intelligence"?

First of all, "artificial intelligence" is a much more egregious contradiction in terms than is "military intelligence." Intelligence is, by definition, natural. It cannot be constructed. It can be faked, somewhat, but true AI is a non-starter.

Regarding that Shakespeare quote above, I found a super cool neural network web application that is pretty darn smart. I fed it a non-obvious fill-in-the-blank syllogism, and it came back with an impressive (though not entirely correct) answer, followed by a bunch of grammatically correct text that sounded smart but really wasn't. It took it a while, too, which showed me that it was really "thinking" about it.

I must admit to having been impressed and a bit flustered. I read that thing's answer a few times and realized that it amounted to nothing. It hadn't even gotten the syllogism correct, and the smart-sounding gibberish that came after it signified nothing of any import.
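The effect that web app produced -- grammatically plausible text with no understanding behind it -- is easy to reproduce even without a neural network. Here's a minimal sketch using a toy Markov chain (not the model behind that site, and the tiny corpus and function names are just made up for illustration): it learns only which words tend to follow which, and still emits fluent-sounding gibberish.

```python
import random

def build_chain(text, order=2):
    """Map each pair of consecutive words to the words seen following it."""
    words = text.split()
    chain = {}
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain.setdefault(key, []).append(words[i + order])
    return chain

def babble(chain, length=20, seed=0):
    """Walk the chain at random, producing grammatical-ish but meaningless text."""
    rng = random.Random(seed)
    key = rng.choice(list(chain))
    out = list(key)
    for _ in range(length):
        followers = chain.get(tuple(out[-2:]))
        if not followers:
            break  # dead end: no observed continuation
        out.append(rng.choice(followers))
    return " ".join(out)

# A tiny invented corpus, echoing this post's own phrases.
corpus = (
    "the brain is just another organ carrying out the orders it is given "
    "the mind is not the brain and the brain is not the mind "
    "intelligence resides outside the brain along with the source of all information"
)

chain = build_chain(corpus)
print(babble(chain))
```

Every word it prints came from the corpus, and the local transitions are all ones a human actually wrote -- which is exactly why the output sounds smart while signifying nothing.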

This did get me thinking about AI and its potential uses, and I think it has some. Since an AI or neural network would be able to process unthinkable amounts of data, it could be quite good at, among other things, predicting the future. Conversely, it might really suck at predicting the future if the data are messy or the humans making the decisions and taking the actions to bring about the predicted result are more... unpredictable than assumed.

See, that's the thing about intelligence: It is inherently unpredictable. A high intelligence is simply a free will that can make any number of decisions. There will be a degree of randomness in any one of those decisions. That's the difference between a conscious being and a non-conscious being: The conscious being can choose to do something completely random for reasons of its own or for reasons beamed into it from the Aether or God or whatever. An AI may be better at chess or Go, but this is only because there are a finite number of decisions that can be made on a chess or Go board. In the universe, infinity is the name of the game, and the AI will have as much trouble with that as it will have dividing by zero.

So don't worry about AI or cyborgs. The real, natural world beckons. Put down your device and go for a walk in the woods or read a book or play the piano soulfully. Do something a computer can't do and will never be able to do, at least not with the same soulfulness and delight as a fully conscious human being with agency in his or her reality.