23 Comments

As Yogi Berra said, "It is very hard to make predictions, especially about the future."

I am reminded of the novel 'Fahrenheit 451' where the author Ray Bradbury looks ahead into the future and imagines many science-fiction-like developments, some of which have still not come to pass more than 70 years later. However, he also makes a reference to future engineers using a slide rule. In other words, Bradbury could not even imagine the development of a simple scientific calculator, something that happened within 25 years after his novel was published.

The point is that humans can only imagine the long-term future along a few dimensions that they are most familiar with. But the world changes along scores of important dimensions simultaneously (both in technology and otherwise). And nobody can predict how fast these changes will occur, how they will interact with one another, or which changes will be most critical in determining the fate of humanity.


Humans are doing a fine job of ending human life.


Looking at other examples of dangerous knowledge or technology, like nuclear weapons, airplanes, or gain-of-function research, I think the danger is that AI could give one country a big advantage, which its leaders could use to eliminate their enemies. Imagine if the Nazis had gotten the bomb before the Allies, or look at how the Europeans used their advanced knowledge of navigation and weapons to subjugate Asia, the Americas, and Africa. The stakes get higher as advanced knowledge gets more and more efficient at killing, and our leaders get more and more paranoid about what their perceived enemies could do if they get some game-changing weapon first.

Covid appears to be the result of bio-engineering that military leaders funded to thwart other countries from developing a disease that could be spread abroad, whilst giving their own citizens (or at least the citizens they wanted to survive) vaccines. Watch Dr. Strangelove, which was apparently a very accurate dramatization of the Cold War thinking of both the USSR and our own leaders.

AI doesn't seem to me to be anything all that different from what we've been dealing with for the last few hundred years, since humans became very efficient at killing each other. Imagine a modern-day Napoleon, Genghis Khan, or Idi Amin with a weapon that gave them a disproportionate advantage. That worries me much more than the idea of a computer that would turn on us as a species. We'll be fine if we don't allow sociopaths to gain power...


Humans are the only species I know of that is willfully and knowingly trying to eliminate itself: I guess that is what makes us the most intelligent creature on Earth (?). Human interactions, however hard and disappointing they may sometimes (often?) be, are what make life worth living for me; take them away, and I do not see the point in living (in a performant but dehumanized world). I generally agree with you on the disastrous handling of COVID (thank you for keeping some sanity in this world) and on public health in general, but I wholeheartedly disagree with you on the net benefit (to talk in public health terms) of social media and maybe now AI.


Hi Vinay- Interesting post. Having followed you since your first guest spot on EconTalk, it is this post that convinced me to upgrade to a paid subscription so I can comment. I agree this is something approaching an "evidence-free zone," but there are a lot of important arguments for why this is something to seriously fear. Chief among them is that we have not yet seen an AI that can actually learn for itself, and that breakthrough could be coming any day, especially considering how the latest ChatGPT models surprised so many people with their effectiveness.

Anyway, the quibble I have with this is that you landed on "NTD." I agree that in the policy space that is probably correct, but in most of your medicine articles the answer is instead that we need to find the evidence and invest in research. Do you really think there is no research that could be directed toward a better understanding of the problem? Shouldn't academic institutions be better positioned than private companies to work on these projects and find answers to major questions like AI alignment? This is, in my mind, a problem with a small but real likelihood of ending humanity in the next generation or two, and far more important than slower-moving threats like climate change. Given that opinion, why aren't non-profit institutions and academic centers directing funds in anywhere near that proportion to this issue? Just food for thought.

Ben


I agree with NTD. Look at Y2K. The people who feared it most were our friends in the computing business. They stockpiled granola bars to the hilt for fear of food shortages. We made little paper lanterns with creative Y2K cut-outs with our young son, added tea lights, and lit them in the yard at midnight. Then we went to bed and slept well. After all, we knew where the granola bars were stockpiled.

How about the atom bomb? I was born a little over a decade after the end of WWII. When I entered school they were still doing A-bomb drills, where we filed in orderly fashion to the basement, sat along the walls, and covered our heads. I don't recall any after third grade. To be honest, it was only in third grade that I recall these drills at all. It was a large school with multiple classrooms for each grade. The prior two years I went to a two-room schoolhouse where we were allowed to walk home for lunch. We all walked to that neighborhood school. I weep for the simplicity of those days that very few children will ever experience again, and I have more fear about the lack of nurturing children receive today.

I think we are going to be ok in the AI department!


Thanks for this column. You make me think, and that's why I subscribe. The future will occur and we along with it. Innovations and discoveries I can't even fathom now will appear. Some will be net positive and some the opposite. Will the US shift wholesale to driving small vehicles, or will our top-selling automobiles still be the Ford F-150 and the competition's similar models? Will AI enhance our lives or destroy them?


I always enjoy your writings. Keep up the good work!!


I think much of the debate smacks of "first-world privilege." Working in disaster zones, rural communities, hurricane response areas...the first thing you have to be comfortable with is not relying on technology. Let's revisit this argument when the playing field is level for every community, every country, every individual.


This is the only place on the internet where I truly enjoy reading the article AND the comments. Vinay and all the readers, thank you for bringing back intelligent, civil discourse to the public square.

AI doesn't necessarily scare me...the people using it do. As a Christian, I believe all of human conflict is ultimately theological. To put it simply, what you believe (or don't believe) about God is at the root of all human conflict. In this arena, it's pretty obvious to me: I'm fearful of someone powerful using AI who feels they are not accountable to humankind and not accountable to an almighty moral law-giver. At some point, secular altruism runs out, and the great and powerful wizard behind the curtain uses AI for sordid gain.

We were NOT ready for the meteoric rise of the internet...morally, legally, financially, etc. We are certainly not ready for AI. Kamala Harris, the so-called "czar of AI," is CERTAINLY not ready. She wasn't ready for public speaking, let alone something as complex as AI. We need to find the best and brightest to buttress our collective ethic against the potential pitfalls of AI. It starts in the heart, so we shouldn't rely on people who disregard the importance of holding oneself to impeccable morality and love of neighbor.


I want to re-watch the 1970 movie Colossus: The Forbin Project.


These are urgent and interesting questions. My husband Quinton Skinner just so happens to have a sci-fi novel, Project Chrysalis, coming out this month (under the pen name Willard Joyce) that explores these very questions: what's the role of increasingly powerful tech and AI in a post-pandemic world? Eerily, he wrote it in 2018 and seems to have been channeling prescient insights about what was coming. The first chapter is available to read for free here: https://open.substack.com/pub/quintonskinner/p/project-chrysalis-chapter-one?utm_source=direct&r=8dfgb&utm_campaign=post&utm_medium=web


“A dispassionate entity a million times smarter than us may not kill us, but kill itself: prone to depression and realizing that ultimately everything is futile.”

says Marvin the Paranoid Android


It never ceases to amaze me how much “scientific” speculation seems driven by Hollywood movie and TV plots. People have actually imagined “uploading” their consciousness to a computer ever since the idea was written into Star Trek in the ’80s.

I’m a big fan of Sam Harris. I read his books, listen to his podcasts, and downloaded his meditation app. He’s certainly much smarter than I will ever be, but sometimes his speculation about “sentient” computers gets into hypotheticals so fanciful that I can’t help but wonder if he might have had a few too many psychedelic experiences.

Smart people contemplating whether or not we’re actually in the Matrix is another one. Although I do wonder if this one might have some usefulness as an analogy in terms of thinking about the nature of reality. But to debate whether or not we’re actually in a computer simulation? I’m not going to waste time on that one.


To simplify things, doesn't history teach us that what is created will almost always come back to kill you? Frankenstein's monster, Androids 17 & 18, Emma, the USR robots in I, Robot, the Sentinels, the robots in Judge Dredd. I could go on and on, but it can't be a coincidence that screenwriters, comic book writers, and novelists all think that what is created will come back to bite us. As Harlan Wade from F.E.A.R. put it: "It is the nature of man to create monsters, and it's the nature of monsters to destroy their makers."


Interesting, but confusing and too long for this 91-year-old mind. And I’m in some sort of evolution that favors being in water. On land, walking is difficult and there are pains everywhere. In water, motion is easy and nearly all pains disappear.
