‘You have to have some knowledge of self, right?’

By Scott Graber

It’s Saturday, early, but I’ve got my coffee. I’ve also got a fire and a copy of The New Yorker. The December 4 issue brings us a profile of Jensen Huang, the CEO of a tech company at the center of Artificial Intelligence.

Jensen Huang was born in Taiwan in 1963, relentlessly bullied and called a “Chink,” but now runs a company that is the sixth most valuable company on earth — a company with more value than Walmart and Exxon-Mobil combined.

Some of you know that I have a certain contempt for technology. Mostly I dislike cellphones and what they have done to relationships and romance. But a big part of my loathing is the realization that I don’t have a simplified mental model of how computers work.

In other words I can visualize small explosions pushing pistons in a gasoline-powered engine, and the physiological connection between one’s heart and one’s lungs, but I can’t imagine how my MacBook Air does what it does.

In the New Yorker piece, “The Chosen Chip,” Stephen Witt draws many illustrative comparisons between the internal workings of Artificial Intelligence and the trillions of neurons that make up our brains.

The article also describes early computer “image recognition” experiments and the moment, in 2012, when “AlexNet correctly identified photographs of a scooter, a leopard, and a container ship among other things. The fact that they (computers) can solve computer vision, which is completely unstructured, leads to the question, ‘What else can you teach it?’”

Well, apparently, you can teach it to pass the bar examination; to manage your investment portfolio; to fly a drone; to steal one’s likeness; to write music or poetry.

Wait a minute. Write a poem?

“‘If you allow yourself to believe that an artificial neuron is like a biological neuron, then it’s like you’re training brains,’ Sutskever told me. ‘They should do everything that we do.’”

Witt (the author) was initially skeptical of Sutskever’s claim — “I hadn’t learned to identify cats by looking at ten million reference images, and I hadn’t learned to write by scanning the complete works of humanity.”

But it seems that our “cat image recognition” came with a nervous system that had been in development for several hundred million years. “Cat image recognition” came baked into our infant (human) brains after eons of evolutionary upgrades.

“‘There have been a lot of living creatures on this earth for a long time that have learned a lot of things,’ Catanzaro said, ‘and a lot was written down in the physical structure of your brain.’”

I, myself, don’t have a problem with the notion of teaching grammar to a computer; or giving the computer a billion bits of data — but doesn’t poetry require something more?

Doesn’t poetry, or the writing of a novel, require real emotion?

I have tried writing a novel and I know that one must dive deeply into one’s own soul when seeking honor, love, humor, anger and betrayal. If one wants a reader to fall in love the writer must know how it feels to be consumed with love.

Some years ago I was trying to write a “falling in love scene” in one of my novels. It involved two teenagers who are slowly, incrementally falling in love during a summer vacation. Unexpectedly they find themselves offshore, stranded on a sandbar, with a sudden summer storm forming on the horizon.

In order to write that passage I needed to think about young, tanned, thin-armed girls; and remember the intoxication of giving and receiving love from that lissome, straw-hat-wearing girl. I had to pull up and dust off memories from that long-lost summer. I know there are existing novels with beach scenes like this — From Here to Eternity — but I don’t know how you write about falling in love without having lived in that moment yourself.

But my wife, Susan, argues that Artificial Intelligence can manufacture the words and phrases that trigger the emotion. “Artificial Intelligence doesn’t have to know the feeling itself.” 

But I hold fast to the belief that poets have to be self-aware. And the author of the piece, Witt, also wonders if a stainless steel box, however complicated, can become self-aware.

“I wondered if someday soon an A.I. might become self-aware. In order for you to be a creature, you have to be conscious. You have to have some knowledge of self, right?”

Jensen Huang replied, “I don’t know where that could happen.”

Scott Graber is a lawyer, novelist, veteran columnist and longtime resident of Port Royal. He can be reached at cscottgraber@gmail.com.
