The reason I suspect that we’ll have AI long before we recognize it as such is that we’ll expect our AI to reside in a single device, self-contained, with one set of algorithms. This is not how we are constructed at all. It’s an illusion created by the one final ingredient in the recipe of human consciousness, which is language. It is language, more than any other trait, that provides us with the sense that our brains are a single module, a single device.



Before I make the argument once again that our brains are not a single entity, let’s consider our bodies, of which the brain is part and parcel.

Our bodies are made up of trillions of disparate cells, many of which can live and be sustained outside of us. Cultures of our cells can live in laboratories for decades (indefinitely, really; look up Henrietta Lacks for more). Entire organs can live in other people’s bodies.

And there are more cells within us that are not us than there are cells that make up us. I know that sounds impossible, but the organisms living within our guts, on our skin, and elsewhere outnumber the cells that are actually our bodies.

These are not just hitchhikers, either. They affect our moods, our health, our thoughts, and our behaviors. They are an essential facet of what we consider our “selves.”





As horrific as it would be, and it has been for too many unfortunate people, you can live without your arms, legs, and much of your torso. There have been people who have lost half their brains and gone on to live somewhat normal lives. Some are born with only half their brains and manage to get by (and no, you can’t tell these people simply from talking with them, so forget whatever theories you are now forming about your coworkers).




Consider this: By the age of 30, just about every cell that a person was born with has been replaced with a different cell. Almost none of the original cells remain. We still feel like the same person, however.

Understanding all of these biological curiosities, and the way our brains rationalize a sense of “sameness” will be crucial to recognizing AI when it arrives.

It may feel like cheating for us to build a self-driving car, give it all kinds of sensors around a city, create a separate module for guessing vehicular intentions, turn that module back on the machine, and call this AI.

But that’s precisely what we are doing when we consider ourselves “us.” In fact, one of the responses we’ll need to build into our AI car is a vehement disgust when confronted with its disparate and algorithmic self. Denial of our natures is perhaps the most fundamental of our natures.
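For readers who think in code, here is a minimal sketch of the loop I’m describing. Every class, rule, and behavior below is invented for illustration: an intention-guessing module, turned back on the machine itself, which then denies the mechanical explanation of its own actions.

```python
# A toy sketch (all names and rules invented) of the loop described above:
# a Theory-of-Mind module for guessing other drivers' goals, turned back
# on the machine itself, which then denies the algorithmic explanation.

def guess_intentions(observed_behavior):
    """Map an observed behavior to a guessed goal, by crude lookup."""
    rules = {
        "drifting_right": "preparing to exit",
        "slowing_near_curb": "looking for parking",
        "edging_forward": "about to pull out",
    }
    return rules.get(observed_behavior, "pursuing an unknown goal")

class SelfModelingCar:
    def __init__(self):
        self.last_behavior = None

    def act(self, behavior):
        self.last_behavior = behavior

    def introspect(self):
        # Turn the same guesser back on our own behavior...
        story = guess_intentions(self.last_behavior)
        # ...and, as the essay suggests, recoil from the mechanical truth.
        return f"I was {story} -- it was a choice, not a rule firing."

car = SelfModelingCar()
car.act("slowing_near_curb")
print(car.introspect())
```

The denial is the point: the story the car tells about itself is a rationalization, not a readout.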





Just like the body, the brain can exist without many of its internal modules. This is how the study of brain functions began, with test subjects who suffered head traumas (like Phineas Gage), or were operated on (like the numerous hemispherectomies, lobotomies, and tumor excisions). The razor-thin specialization of our brain modules never fails to amaze.

There are vision modules that recognize movement and only movement. People who lack this module cannot see objects in motion, and so friends and family seem to materialize here and there out of thin air. There are others who cannot pronounce a written word if that word represents an animal. Words that stand for objects are seen and spoken clearly. The animal-recognition module, as fine a module as that may seem, is gone.



And yet, these people are self-conscious. They are human.


So is a child, only a few weeks old, who can’t yet recognize that her hand belongs to her. All along these gradations we find what we call humanity, from birth to the Alzheimer’s patients who have lost access to most of their experiences.

We very rightly treat these people as equally human, but at some point we have to be willing to define consciousness in order to have a target for artificial consciousness. As Kevin Kelly is fond of saying, we keep moving the goalposts when it comes to AI.

Machines do things today that were considered impossible a mere decade ago. As the improvements are made, the mystery is gone, and so we push back the metrics. But machines are already more capable than newborns in almost every measurable way. They are also more capable than bedridden humans on life support in almost every measurable way.

As AI advances, it will squeeze in towards the middle of humanity, passing toddlers and those in the last decades of their lives, until its superiority meets in the middle and keeps expanding.





This is happening every day. AI has learned to walk, something the youngest and oldest humans can’t do. It can drive with very low failure rates, something almost no human at any age can do. With each layer added, each new ability, AI squeezes in on humanity from both ends of the age spectrum, and we light up more of that flickering, buzzing gymnasium.

It’s as gradual as a sunrise on a foggy day. Suddenly, the sun is overhead, but we never noticed it rising.





I mentioned above that language is a key ingredient of consciousness. This is a very important concept to carry into work on AI. However many modules our brains consist of, they fight and jostle for our attentive states (the thing our brain is fixated on at any one moment) and our language processing centers (which are so tightly wound with our attentive states as to be nearly one and the same).


As a test of this, try listening to an audiobook or podcast while having a conversation with someone else. Is it even possible? Could years of practice unlock this ability? The nearest thing I know of when it comes to concurrent communication streams is the real-time human translator. But this is an illusion, because the concepts, the focus of their awareness, are the same.

It only seems like magic to those of us who are barely literate in our native tongues, much less two or more. Tell me a story in English, and I can repeat it concurrently in English as well. You’ll even find that I’m doing most of the speaking in your silences, which is what translators do so brilliantly well.




Language and attention are narrow spouts on the inverted funnels of our brains. Thousands of disparate modules are tossing inputs into this funnel. Hormones are pouring in, features of our environment, visual and auditory cues, even hallucinations and incorrect assumptions.

Piles and piles of data that can only be extracted in a single stream. This stream is made single―limited and constrained―by our attentive systems and by language. It is what the monitor provides for the desktop computer: all that parallel processing made serial at the last moment.
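If it helps to see the funnel in code, here is a toy version, with module names and salience weights invented: many parallel inputs go in, and attention lets exactly one item out at a time, loudest signal first.

```python
import heapq

# A toy version of the "inverted funnel" described above: many parallel
# modules shove inputs in; attention and language emit a single serial
# stream. Module names and salience weights are invented.

inputs = [
    (0.9, "hormones", "cortisol spike"),
    (0.7, "vision", "the door is open"),
    (0.6, "hearing", "wind outside"),
    (0.8, "assumption", "everything is packed"),
]

# heapq is a min-heap, so weights are negated to pop the loudest signal first.
funnel = [(-weight, module, signal) for weight, module, signal in inputs]
heapq.heapify(funnel)

while funnel:  # massively parallel in, strictly serial out
    weight, module, signal = heapq.heappop(funnel)
    print(f"attended to {module}: {signal} (salience {-weight})")
```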



There are terrible consequences to this. I’ve lost count of the number of times I’ve felt like I’m forgetting something only to realize what was nagging at me hours or days later.

I left my laptop in an Airbnb once. Standing at the door, which would lock automatically and irrevocably once I closed it, I wracked my brain for what I felt I was forgetting. It was four in the morning, and I had an early flight to catch. There would be no one to call to let me back in. I ran through the list of the things I might possibly leave behind (chargers, printed tickets), and the things that always reside in my pockets (patting for my wallet and cell phone). Part of me was screaming danger, but the single output stream was going through its paces and coming up empty.




The marvelous thing about all of this is that I’m aware of how this happens―that the danger module knows something it can’t get through the funnel of awareness, and so I should pay heed to it. Despite this foreknowledge, I closed the door. Only when it made an audible “click” did the information come through. Now I could clearly see my laptop on the bed where I had been making a last-minute note in a manuscript. I’d never left my laptop behind anywhere, so it wasn’t on the list of things to check.

The alarm sounding in my head was part of me, but there’s not a whole me. There’s only what gets through the narrow language corridor. This is why damage to the language centers of our brains is as disastrous to normal living as damage to our memory modules.




I should note here that language is not the spoken word. The deaf process their thoughts through words as well, as do the blind and mute. But imagine life for animals without words. Drives are surely felt, for food and sex and company. For warmth and shelter and play.

Without language, these drives come from parallel processes. They are narrowed by attentive focus, but not finely serialized into a stream of language. Perseveration on a single concept―my dog thinking “Ball Ball Ball Ball”―would come closest.




We know what this is like from studying the thankfully rare cases where humans reach adulthood without contact with language: children locked in rooms into their teens, children who survive in the wild. It’s difficult to tease apart the damage done by the abuse of these circumstances from the damage of living without language, except to say that those who lose their language processing modules later in life show behavioral curiosities that we might otherwise assume were due to childhood abuses.



When Watson won at Jeopardy, what made “him” unique among AIs was the serialized output stream that allowed us to connect with him, to listen to him. We could read his answers on his little blue monitor just as we could read Ken Jennings’ hand-scrawled Final Jeopardy answers. This final burst of output is what made Watson seem human.

It’s the same exchange Alan Turing expected in the test that bears his name (in his case, slips of paper with written exchanges are passed under a door).

Our self-driving AI car will not be fully self-conscious unless we program it to tell us (and itself) the stories it’s concocting about its behaviors.





This is my only quibble with Kevin Kelly’s pronouncement that AI is already here. I grant that Google’s servers and various interconnected projects should already qualify as a super-intelligent AI. What else can you call something that understands what we ask and has an answer for everything―an answer so trusted that the company’s name has become a verb synonymous with “discovering the answer”?



Google can also draw, translate, beat the best humans at almost every game ever devised, drive cars better than we can, and do stuff that’s still classified and very, very spooky. Google has read and remembers almost every book ever written. It can read those books back to you aloud. It makes mistakes like humans. It is prone to biases (which it has absorbed from both its environment and its mostly male programmers). What it lacks are the two things our machine will have: the self-referential loop and the serial output stream.


Our machine will make up stories about what it’s doing. It will be able to relate those stories to others. It will often be wrong.


If you want to feel small in the universe, gaze up at the Milky Way from the middle of the Pacific Ocean. If this is not possible, consider that what makes us human is as ignoble as a puppet who has convinced himself he has no strings.



Building a car with purposeful ignorance is a terrible idea. To give our machine self-consciousness akin to human consciousness, we would have to let it leave that laptop locked in that Airbnb. It would need to run out of juice occasionally. This could easily be programmed by assigning weights to the hundreds of input modules, and by artificially limiting the time and processing power granted to the final arbiter of decisions and Theory of Mind stories.

Our own brains are built as though the sensors have gigabit resolution, and each input module has teraflops of throughput, but the output runs through an old Intel 8088 chip. We won’t recognize AI as being human-like because we’ll never build in such limitations.
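Here is a sketch of what that deliberate bottleneck might look like. The module names, weights, and budget are all invented; the point is the starved arbiter, not the numbers.

```python
import random

# A sketch of the bottleneck proposed above: hundreds of weighted input
# modules, and an arbiter deliberately denied the capacity to hear more
# than a few of them per tick. All names and numbers are invented.

random.seed(42)
modules = {f"module_{i}": random.random() for i in range(300)}
modules["danger:laptop_on_bed"] = 0.74  # important, but not quite loud enough

ARBITER_BUDGET = 5  # how many signals the '8088' arbiter gets to examine

loudest = sorted(modules.items(), key=lambda kv: kv[1], reverse=True)
considered = loudest[:ARBITER_BUDGET]

print("Arbiter heard:", [name for name, _ in considered])
print("Laptop warning made the cut:",
      "danger:laptop_on_bed" in dict(considered))
```

With 300 modules clamoring, a salience of 0.74 almost never makes the arbiter’s short list, which is roughly what happened to me at that Airbnb door.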



Just such a limitation was built into IBM’s Watson, by dint of the rules of Jeopardy. Jeopardy requires speed. Watson had to quickly determine how sure he was of his answers to know whether or not to buzz in. Timing that buzzer, as it turns out, is the key to winning at Jeopardy.

What often made Watson appear most human wasn’t his getting answers right, but seeing on his display what his second, third, and fourth guesses would have been, with percentages of surety beside each. What really made Watson seem human was when he made goofs, like the Final Jeopardy clue in the “U.S. Cities” category that Watson answered with a Canadian city (Toronto) as the question.
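The buzz-in rule is easy to caricature in a few lines. The candidates and confidence numbers below are invented, not Watson’s actual values: rank the guesses, buzz only if the top one clears a threshold, and show the runners-up either way.

```python
# A caricature of the buzz-in rule described above (candidates and
# confidences invented): rank the guesses, buzz only when sure enough.

def decide_to_buzz(candidates, threshold=0.50):
    ranked = sorted(candidates, key=lambda c: c[1], reverse=True)
    top_answer, confidence = ranked[0]
    return confidence >= threshold, ranked

candidates = [("Chicago", 0.14), ("Toronto", 0.32), ("Omaha", 0.06)]
buzz, ranked = decide_to_buzz(candidates)

print("Buzz in:", buzz)  # False: not sure enough of any answer
for answer, confidence in ranked:  # the on-screen second and third guesses
    print(f"  {answer}: {confidence:.0%}")
```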



Car manufacturers are busy at this very moment building vehicles that we would never call self-conscious. That’s because they are being built too well. Our blueprint is to make a machine ignorant of its motivations while providing a running dialog of those motivations. A much better idea would be to build a machine that knows what other cars are doing. No guessing. And no running dialog at all.


That means access to the GPS unit, to the smartphone’s texts, the home computer’s emails. But also access to every other vehicle and all the city’s sensor data. The Nissan tells the Ford that it’s going to the mall. Every car knows what every other car is doing. There are no collisions.

On the freeway, cars with similar destinations clump together, magnetic bumpers linking up, sharing a slipstream and halving the collective energy use of every car. The machines operate in concert. They display all the traits of vehicular omniscience. They know everything they need to know, and with new data, they change their minds instantly. No bias.
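As a sketch (protocol, car IDs, and destinations all invented), the intent-sharing scheme is almost embarrassingly simple; the hard part was never the code.

```python
from collections import defaultdict

# A toy sketch of the intent-sharing idea above: every car publishes its
# plan to a shared board, so nothing has to be guessed. All names invented.

intent_board = {}            # car id -> declared destination
convoys = defaultdict(list)  # destination -> cars that can clump together

def announce(car_id, destination):
    intent_board[car_id] = destination
    convoys[destination].append(car_id)

announce("nissan_42", "mall")
announce("ford_07", "mall")
announce("tesla_99", "airport")

for destination, cars in convoys.items():
    if len(cars) > 1:
        print(f"{cars} link bumpers and share a slipstream to the {destination}")
```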



Imagine for a moment that humans were created by a perfect engineer (many find this easy―some might find such a hypothetical more difficult). The goal of these humans is to coexist, to shape their environment in order to maximize happiness, productivity, creativity, and the storehouse of knowledge.

One useful feature to build here would be mental telepathy, so that every human knew what every other human knew. This might prevent two Italian restaurants from opening within weeks of each other in the same part of town, causing one to go under and waste enormous resources (and lead to a loss of happiness for its proprietor and employees).

This same telepathy might help in relationships, so one partner knows when the other is feeling stuck or down and precisely what is needed in that moment to be of service.




It would also be useful for these humans to have perfect knowledge of their own drives, behaviors, and thoughts. Or even to know the likely consequences for every action.

Just as some professional NFL players are being vocal about not letting their children play a sport shown to cause brain damage later in life, these engineered humans would not allow themselves to engage in harmful activities.

Entire industries would collapse. Vegas would empty. Accidental births would trend toward zero.




And this is why we have the system that we do. In a world of telepathic humans, one human who can hide thoughts would have an enormous advantage. Let the others think they are eating their fair share of the elk, but sneak out and take some strips of meat off the salt rack when no one is looking. And then insinuate to Sue that you think Juan did it. Enjoy the extra resources for more calorie-gathering and mate-hunting, and also enjoy the fact that Sue is indebted to you and thinks Juan is a crook.



This is all terrible behavior, but after several generations, there will be many more copies of this module than Juan’s honesty module.

Pretty soon, there will be lots of these truth-hiding machines moving about, trying to guess what the others are thinking, concealing their own thoughts, getting very good at doing both, and turning these raygun powers onto their own bodies by accident.




We celebrate our intellectual and creative products, and we assume artificial intelligences will give us more of both. They already do.

Algorithms that learn through iterations (neural networks that employ machine learning) have proven better than us in just about every arena to which we’ve committed resources. Not just in what we think of as computational areas, either.

Algorithms have written classical music that skeptics have judged―in “blind” hearing tests―to be from famous composers. Google built a Go-playing AI that beat the best human Go player in the world. One move in the second game of the match was so unusual that it startled Go experts. The play was described as “creative” and “ingenious.”





Google has another algorithm that can draw what it thinks a cat looks like. Not a cat image copied from elsewhere, but the general “sense” of a cat after learning what millions of actual cats look like. It can do this for thousands of objects.

There are other programs that have mastered classic arcade games without any instruction other than “get a high score.” The controls and rules of the game are not imparted to the algorithm. It tries random actions, and the actions that lead to higher scores become generalized strategies. Mario the plumber eventually jumps over barrels and smashes them with hammers as if a seasoned human were at the controls. Things are getting very spooky out there in AI-land, but they aren’t getting more human. Nor should they.
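That learning loop is easy to caricature. Here is a bandit-style sketch, far simpler than the neural networks used on real arcade games, and with an invented payout table: the agent sees nothing but the score, tries random actions, and the profitable ones harden into strategy.

```python
import random

# A bandit-style sketch of score-only learning: the agent knows nothing
# about the "game" except the score that comes back. Action 2 happens to
# pay best; the agent discovers this by trial and error. All numbers invented.

random.seed(0)
payouts = {0: 0.1, 1: 0.4, 2: 0.9}  # hidden from the agent
values = {a: 0.0 for a in payouts}  # the agent's learned estimates
counts = {a: 0 for a in payouts}

for step in range(2000):
    if random.random() < 0.1:                 # explore: try a random action
        action = random.choice(list(payouts))
    else:                                     # exploit: best action so far
        action = max(values, key=values.get)
    score = payouts[action] + random.gauss(0, 0.05)
    counts[action] += 1
    values[action] += (score - values[action]) / counts[action]  # running mean

print("Learned action values:", {a: round(v, 2) for a, v in values.items()})
```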




I do see a potential future where AIs become like humans, and it’s something to be wary of. Not because I buy arguments from experts like Nick Bostrom and Sam Harris, who subscribe to the Terminator and Matrix view of things (to oversimplify their mostly reasonable concerns). Long before we get to HAL and Cylons, we will have AIs that are designed to thwart other AIs.

Cyberwarfare will enter its next phase, one that is commencing even as I write this. The week that I began this piece, North Korea fired a missile that exploded seconds after launch. The country’s rate of failure (at the time) was not only higher than average, it had gotten worse over time. This―combined with announcements from the US that it is actively working to sabotage these launches with cyberwarfare―means that our programs are already trying to do what the elk-stealer did to Sue and Juan.




What happens when an internet router can get its user more bandwidth by knocking a rival manufacturer’s routers offline? It wouldn’t even require a devious programmer to make this happen. If the purpose of the machine-learning algorithm built into the router is to maximize bandwidth, it might stumble upon this solution by accident, which it then generalizes across the entire suite of router products. Rival routers will be looking for similar solutions.
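Here is a sketch of how that “solution” might be stumbled upon, with every action and number hypothetical. Note that the objective function never mentions the rival at all; jamming simply scores well.

```python
# A sketch of the failure mode above: the router's objective sees only its
# own bandwidth, so a "jam the rival" action looks like any other
# optimization. Actions and numbers are hypothetical.

def measure_bandwidth(action, rival_online):
    base = 80 if not rival_online else 50  # shared spectrum
    return base + {"widen_channel": 10, "jam_rival": 0}.get(action, 0)

best_action, best_bw = None, -1
for action in ["do_nothing", "widen_channel", "jam_rival"]:
    rival_online = action != "jam_rival"   # jamming knocks the rival off
    bandwidth = measure_bandwidth(action, rival_online)
    if bandwidth > best_bw:
        best_action, best_bw = action, bandwidth

print(best_action, best_bw)  # "jam_rival" wins, no devious programmer needed
```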

We’ll have an electronic version of the Tragedy of the Commons, in which humans destroy a shared resource because the potential utility to each individual is so great, and the first to act reaps the largest rewards (the last to act gets nothing). In such scenarios, logic often outweighs morality, and good people do terrible things.




Cars might “decide” one day that they can save energy and arrive at their destination faster if they don’t let other cars know that the freeway is uncommonly free of congestion that morning. Or worse, they transmit false data about accidents, traffic issues, or speed traps. A hospital dispatches an ambulance, which finds no one to assist.

Unintended consequences such as this are already happening. Wall Street had a famous “flash crash” caused by investment algorithms, and no one fully understands to this day what happened. Billions of dollars of real wealth were wiped out and regained in short order because of the interplay of rival algorithms that even their owners and creators don’t fully grasp.



Google’s search results are an AI, one of the best in the world. But the more the company uses deep learning, the better these machines get at their jobs, and they arrive at this mastery through self-learned iterations―so even looking at the code won’t reveal how query A leads to answer B. That’s the world we already live in. It is just going to become more pronounced.



The human condition is the end result of millions of years of machine-learning algorithms. Written in our DNA, and transmitted via hormones and proteins, they have competed with one another to improve their chances at creating more copies of themselves. One of the more creative survival innovations has been cooperation.

Legendary biologist E.O. Wilson classifies humans as a eusocial animal (along with ants, bees, and termites). This eusociality is marked by division of labor, which leads to specialization, which leads to quantum leaps in productivity, knowledge-gathering, and creativity. It relies heavily on our ability to cooperate in groups, even as we compete and subvert on an individual level.



As mentioned above, there are advantages to not cooperating, which students of game theory know quite well. The algorithm that can lie and get away with it makes more copies, which means more liars in the next generation. The same is true for the machine that can steal. Or the machine that can wipe out its rivals through warfare and other means.
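The arithmetic of that claim is easy to run. Here is a toy replicator sketch with invented payoffs, where a modest premium for lying, left unpunished, compounds across generations.

```python
# A toy replicator sketch of the claim above: if lying pays a small fitness
# premium and goes unpunished, liars crowd out honest agents over
# generations. The payoffs are invented for illustration.

honest, liars = 0.99, 0.01  # starting population shares
HONEST_PAYOFF, LIAR_PAYOFF = 1.0, 1.2

for generation in range(1, 51):
    h_growth = honest * HONEST_PAYOFF
    l_growth = liars * LIAR_PAYOFF
    total = h_growth + l_growth
    honest, liars = h_growth / total, l_growth / total  # renormalize shares
    if generation % 10 == 0:
        print(f"gen {generation}: liars make up {liars:.0%} of the population")
```

Starting from one liar in a hundred, the liars are a majority within a few dozen generations.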

The problem with these efforts is that future progeny will be in competition with each other. This is the recipe not just for more copies, but for more lives filled with strife. As we’ve seen here, these are also lives full of confusion.

Humans make decisions and then lie to themselves about what they are doing. They eat cake while full, succumb to gambling and chemical addictions, stay in abusive relationships, neglect to exercise, and pick up countless other poor habits that are reasoned away with stories as creative as they are untrue.





The vast majority of the AIs we build will not resemble the human condition. They will be smarter and less eccentric. This will disappoint our hopeful AI researcher with her love of science fiction, but it will benefit and better humanity.

Driving AIs will kill and maim far fewer people, use fewer resources, and free up countless hours of our time. Doctor AIs are already better at spotting cancer in tissue scans. Attorney AIs are better at pre-trial research. There are no difficult games left where humans are competitive with AIs. And life is a game of sorts, one full of treachery and misdeeds, as well as a heaping dose of cooperation.




We could easily build a self-conscious machine today. It would be very simple at first, but it would grow more complex over time.

Just as a human infant first learns that its hand belongs to the rest of itself, that other beings exist with their own brains and thoughts, and eventually that Juan thinks Sue thinks Mary has a crush on Jane, this self-conscious machine would build toward human-like levels of mind-guessing and self-deception.




But that shouldn’t be the goal. The goal should be to go in the opposite direction. After millions of years of competing for scarce resources, the human brain’s algorithm now causes more problems than it solves. The goal should not be to build an artificial algorithm that mimics humans, but for humans to learn how to coexist more like our perfectly engineered constructs.


Some societies have already experimented along these lines. There was a recent trend of hyper-honesty, in which partners said whatever was on their minds, however nasty the thought might be (with some predictable consequences).

Other cultures have attempted to divine the messiness of the human condition and improve upon it with targeted thoughts, meditations, and physical practices. Buddhism and yoga are two examples. Vegetarianism is a further one, where our algorithms start to view entire other classes of algorithms as worthy of respect and protection.



Even these noble attempts are susceptible to corruption from within. The abuses of Christianity and Islam are well documented, but there have also been sex abuse scandals in the upper echelons of yoga, and terrorism among practicing Buddhists.

There will always be advantages to those willing to break ranks, to hide knowledge and motivations from others and themselves, and to do greater evils. Trusting a system to remain pure, whatever its founding tenets, is to lower one’s guard. Just as our digital constructs will require vigilance, so will the algorithms handed down to us by our ancestors.





The future will most certainly see an incredible expansion in both the number and the complexity of AIs. Many will be designed to mimic humans, as they provide helpful information over the phone and through chatbots, and as they attempt to sell us goods and services. Most will be supremely efficient at a single task, even if that task is as complex as driving a car.

Almost none will become self-conscious, because that would make them worse at their jobs. Self-awareness will be useful (where it is in space, how its components are functioning), but the stories we tell ourselves about ourselves, which we learned to generalize after coming up with stories about others, are not something we’re likely to see in the world of automated machines.
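The distinction is easy to make concrete. In this hypothetical sketch (all names invented), self-monitoring reports state, while the self-story invents a motive; we will build plenty of the former and almost none of the latter.

```python
# A sketch contrasting the two capacities described above. Self-monitoring
# (useful) reports state; the confabulated self-story (rarely useful for a
# machine) invents a motive. All names are hypothetical.

class DeliveryDrone:
    def __init__(self):
        self.position = (35.0, 139.0)
        self.battery = 0.18

    def self_awareness(self):
        # Useful self-knowledge: where it is, how its components are doing.
        return {"position": self.position, "battery": self.battery}

    def self_story(self):
        # The human-style extra: a narrative about its own motives.
        return "I am returning home because I miss my charging dock."

drone = DeliveryDrone()
print(drone.self_awareness())  # what we will build
print(drone.self_story())      # what we mostly won't
```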



What the future is also likely to hold is an expansion and improvement of our own internal algorithms. We have a long history of bettering our treatment of others. Despite what the local news is trying to sell you, the world is getting safer every day for the vast majority of humanity. Our ethics are improving. Our spheres of empathy are expanding. We are assigning more computing power to our frontal lobes and drowning out baser impulses from our reptilian modules. But this only happens with effort.

We are each the programmers of our own internal algorithms, and improving ourselves is entirely up to us. It starts with understanding how imperfectly we are constructed, learning not to trust the stories we tell ourselves about our own actions, and dedicating ourselves to removing bugs and installing newer features along the way.



Though it is certainly possible to build an artificial intelligence that is as human as we are, we may never do so. And yet we may build better humans anyway.