
Professor Answers AI Questions

AI and machine learning professor at Gonzaga University Graham Morehead joins WIRED to answer the internet's burning questions about artificial intelligence. What are the origins of AI? What's the difference between AI, AGI, and ASI? What will the implications be if China achieves artificial super intelligence first or the United States does? What does the next 10 years of AI development look like? Will AI take all human jobs? Answers to these questions and many more await on AI Support. Director: Jackie Phillips Director of Photography: AJ Young Editor: Richard Trammell Expert: Graham Morehead Line Producer: Joseph Buscemi Associate Producer: Paul Gulyas Production Manager: Peter Brunette Production Coordinator: Rhyan Lark Casting Producer: Nick Sawyer Camera Operator: Oliver Lukacs Sound Mixer: Lila Rowel Production Assistant: Abigayle Devine Post Production Supervisor: Christian Olguin Post Production Coordinator: Rachel Kim Additional Editor: Jason Malizia; Shane Boissiere Assistant Editor: Andy Morell

Released on 03/25/2025

Transcript

Hello, I'm Graham Morehead,

I teach AI and machine learning

at Gonzaga University.

Let's answer some questions

from the internet.

This is AI Support.

[upbeat music]

@JustinDart82 asks,

What are the different types

of artificial intelligence?

Broadly speaking,

there's Type 1

and Type 2.

Type 1 is like your instinct,

that part of the brain

that can take in lots of information,

but reacts very quickly with emotion.

Type 2 is more like

a methodical series of steps,

like when someone asks you

to do a math problem.

Neural Networks are Type 1,

they can take in many inputs,

and they give a quick output,

but that output sometimes is not exact.

Type 2 AI is logic,

and multiple steps.
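The two types can be sketched in a few lines of code. Everything here is illustrative (the task and the weights are made up), but it shows the contrast: a Type 1 system sums many inputs into one fast, fuzzy judgment, while a Type 2 system grinds through explicit, exact steps.

```python
# Type 1: a tiny neural-style scorer -- many inputs, one fast, fuzzy output.
def type1_is_spam(features):
    # weights a real network would learn from examples; hand-picked here
    weights = {"all_caps": 0.8, "has_link": 0.5, "known_sender": -1.2}
    score = sum(weights[f] for f in features if f in weights)
    return score > 0.5  # a quick gut call, and sometimes it's wrong

# Type 2: a methodical series of exact steps, like doing a math problem.
def type2_sum_of_squares(n):
    total = 0
    for i in range(1, n + 1):  # every step is explicit and inspectable
        total += i * i
    return total

print(type1_is_spam({"all_caps", "has_link"}))  # True
print(type2_sum_of_squares(10))                 # 385
```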

Jaisurya asks,

When did AI start?

You may have heard

that there was a conference,

it was in 1956,

it was the first conference

where they used the term

Artificial Intelligence,

it was at Dartmouth in New Hampshire.

But, AI existed before that,

we've all heard of the Imitation Game,

Alan Turing cracked the Enigma Machine

to save us in World War II,

that was a kind of AI,

but also, he wrote papers in the 1930s

and '40s that we still read today,

that are important for understanding AI.

@BFBMorningShow asks,

What's up with Grok?

Grok is a newer entry into the AI ecosphere,

we had Google come out

with a paper in 2017

called Attention Is All You Need,

which introduced the Transformer architecture,

but no one really cared

until OpenAI came out with ChatGPT.

Then, we had Facebook or Meta

come out with Llama,

and a French company came out with Mistral,

of course the Chinese company

came out with DeepSeek.

Elon Musk didn't wanna be left out of this,

so he came up with Grok,

and it's the same structure.

Now, Grok 3, the newest one,

was trained on more data,

more compute than any other

of these models had ever been trained on before,

and it's good,

the big benefit comes

from seeing other companies make mistakes,

the newer models get the benefit

of those past mistakes,

and avoid making them.

Ugin asks,

Is AI taking our jobs or not?

The answer is yes,

but don't worry,

think back to the invention of the ATM,

people thought it was gonna get rid

of bank tellers,

but we have just as many bank tellers now

as we did back then,

AI is going to take a lot of jobs,

we just have to think differently

about what jobs we do,

I would encourage you to think about,

how can you disrupt yourself?

How could you take part of your own job,

and have AI do it for you?

Then, you'll be more productive.

@DailyAITweets asks,

Why is AI dumb?

Because it doesn't have a brain,

just a lot of circuits

and algorithms!

The shape of thought is what's different,

when you think about,

This thing is moving,

and it's going over here.

You think about an object in space,

and you think about consequence of actions.

For AI, it's just numbers,

matrices and vectors,

and all these numbers.

It is not shaped internally

like your thoughts are shaped.

Jabbar JD asks,

What's the difference between AI

and AGI,

and the so-called super intelligent ASI?

ASI is Artificial Superintelligence.

Well, the term AI of course was coined

at that conference in 1956,

they thought they were 20 years away

from having something equivalent to a human,

but obviously it took a lot longer than that.

AGI, Artificial General Intelligence,

what that means is that you have an AI

that can think adaptively,

think about all the things that you can do,

even if you weren't trained to do them,

you can adapt to the situation,

AI can't do that very well yet,

and AGI would be able to.

ASI is gonna come very soon after that,

because once you have one computer

that can do as much as a human,

you can just turn on the second computer,

and now it can double.

User Ok-Toe-6969 asks,

How far are we from an AI partner,

like in the film HER?

From the outside,

it looks like we're not that far,

there are people who have had online weddings,

where they invited their friends

and had an online ceremony

where they became betrothed to their AI loves,

so for some people,

we're already there,

but there is something that the AI in HER had,

that no AI has yet,

and that's a will, a desire.

The AI in the movie

wanted to do things on her own,

and by the end of the movie,

moves on to have her own life,

AI as it currently exists,

doesn't want anything,

it just responds with words

when you give it words.

User incognitototoo asks,

Hello, I don't understand what tokens are,

and how they work with ChatGPT.

When OpenAI or Google,

or all these other companies design their AI,

the first choice they have to make is,

How big is the vocabulary?

How many words will it know about?

And, the bigger you make it,

the harder it is to train your model.

Sometimes they're words,

other times, they're word parts,

when you say a word like gin,

like gin the drink,

that's in there somewhere,

that's already a token,

but the word ginormous is not in there,

ginormous gets split into three tokens,

gin-or-mous,

and that first part of ginormous is a token,

and that token is the same token

as gin the drink,

it thinks they're exactly the same

at the beginning,

which should give you a hint

about why AI is stupid sometimes.
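The splitting itself is mechanical. Here is a toy greedy longest-match tokenizer with a hypothetical vocabulary; real models learn theirs from data (for example with byte-pair encoding), and their actual token boundaries may differ.

```python
# A hypothetical subword vocabulary; real vocabularies hold tens of
# thousands of entries learned from data (e.g. via byte-pair encoding).
VOCAB = {"gin", "or", "mous", "g", "i", "n", "o", "r", "m", "u", "s", "e"}

def tokenize(word):
    """Greedy longest-match: at each position, take the longest known piece."""
    tokens = []
    i = 0
    while i < len(word):
        for j in range(len(word), i, -1):
            if word[i:j] in VOCAB:
                tokens.append(word[i:j])
                i = j
                break
    return tokens

print(tokenize("gin"))        # ['gin']
print(tokenize("ginormous"))  # ['gin', 'or', 'mous'] -- same first token as the drink
```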

@nicoleheatherr,

ChatGPT can't even get answers right,

I hate chemistry!

How does AI get [beep] wrong?

I'm irritated.

Chemistry is very difficult for AI.

When AI operates on words that you give it,

it thinks about the world

in a sequence of these words,

or tokens, which we just learned about.

Chemistry is often three-dimensional structures,

it's difficult for AI to conceptualize

that three-dimensional world.

If you think about a glass of wine,

it can render a beautiful photograph

of a glass of wine,

but it cannot render a glass of wine filled to the brim,

because that's not what it's seen,

AI is great at giving you something it has seen.

@LeMacDee asks,

How is DeepSeek different to ChatGPT?

Is it better?

They're very similar,

ChatGPT version 4o came out in May of 2024,

and it was amazing,

it was the best thing we had seen,

and they have a reasoning model called o1

that was built on that.

DeepSeek comes along,

and gives us a model that's just as good,

maybe a little better on some things.

Any time somebody does something first,

someone else can come along

and do the same thing cheaper,

maybe even a little bit better.

@RoseMinoan asks,

How can AI be biased?

AI learns from the internet,

every bias across the internet

is somewhere represented in AI,

it's learning from you,

it's learning from me,

it's learning from everybody.

Also remember, every book,

every blog post ever written,

on flat earth theory,

is in there somewhere,

take it with a grain of salt.

@AIandDesign asks,

What will happen if the US achieves ASI first?

What if it's China?

Will there be a difference?

Both the US

and China have amazing AI capabilities,

there are so many great papers

coming out of China,

and I'm glad they published them in English,

'cause we learned a lot from them,

and they learn a lot from us.

They have more AI researchers than we do,

they have more STEM students than we do,

AI has been deployed in China,

largely as a part of what we call

their Surveillance State.

In the US,

we typically use AI to empower people,

to be more creative

and do more things.

Right now, there is somewhat

of a spirit of competition,

but also cooperation,

and I hope that's maintained.

I want you to think about ASI

as being a virtual Einstein,

once we have that,

we can tell Einstein,

Einstein, why don't you please

go do 10,000 years of research,

figure out time travel,

figure out anti-gravity,

and then come back,

and tell us what you've learned.

10 seconds later it comes back,

I just did 10,000 years of research,

and here's the secrets to time travel,

and anti-gravity.

Think about what that virtual Einstein could do.

Whoever gets to ASI first,

there are no points for second place.

@KerryCassidy18 asks,

How is AI powered?

Don't you have to charge it

at a car charging station?

AI uses a lot of power,

the big facility called Colossus,

they just put in Memphis Tennessee,

uses about 50 MW,

and a lot of water to cool the machines,

and we might be bringing on so many

new AI training centers in the next 10 years,

that AI alone will use the same amount of power

as the whole country uses now.

@lusciousgemm asks,

AI needs water?

How? Why?

The facility I just mentioned in Memphis,

uses swimming pools full of water

maybe every minute,

it goes into these little tubes,

and each little tube goes up

to one of the GPUs,

these are Graphics Processing Units,

the actual computational engine of AI,

and every one of them has a little loop

for the water to go by it,

and take the heat,

and then the hot water is ejected

out of the facility.

@timbocop asks,

I have questions about 'well-intended'

calls for educators to teach their students

AI literacy.

Are we AI literate ourselves?

Do we know what AI literacy involves?

Who will teach us?

It's important to read books,

I suggest you read books.

It's also important to get a vague sense

of how these things work,

most people are never gonna understand

the inner-mechanics of it,

just like you don't understand

how your car works,

but you still drive it,

you gotta think about AI as being

your expert friend,

who knows a lot,

but has some bad misconceptions,

so you don't trust everything.

If I were you,

I would use it a lot

and trust it very little.

k0_crop asks,

Is AI generated misinformation

going to ruin history?

What are its potential implications

for future historiography?

Yes, it might ruin history,

but it's not a new problem.

Whenever there are victors in a war,

and they get to write the history,

those books may or may not be true.

Whatever we get from our AI,

you gotta remember,

it might not be true,

the only way not to ruin history,

is to get to firsthand sources,

and cross-reference with evidence.

@Higher_AI asks,

What's next for AI?

Where do you see AI in the next decade?

Now, there's a lot of very popular things

that AI will do,

it's going to cure many diseases.

AlphaFold is a model that came out of Google,

Demis Hassabis was the head of this project,

and he used methods that had been applied

to game-playing,

and he applied them to figuring out

When a protein is made,

how does it fold?

What's its actual shape?

In the old days,

it might take six months

and $100,000,

to determine how one protein folds,

now we know how they're structured,

this came out of AI.

That's probably gonna lead to curing

many cancers,

and many other diseases.

launchthetrain asks,

Am I crazy,

or is AI therapy helping me?

The first time AI therapy helped someone,

was around 1965,

the AI was called ELIZA,

all-caps,

and Joseph Weizenbaum made it,

and started to show it to people.

His secretary would ask him to leave the room,

so she could talk to ELIZA privately.

I think that when you're talking

to an AI therapist,

you're doing the internal work on yourself,

and the AI merely provides a framework,

or almost an excuse,

it's doing next-token prediction,

just the next word,

it doesn't know what it means,

and it cannot conceptualize

how it would feel to a human to hear those words.
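ELIZA's whole trick fits in a screenful of code: match a pattern, flip the pronouns, echo the rest back as a question. This is a loose sketch in the spirit of the original, not Weizenbaum's actual script.

```python
import re

# Pronoun reflection: "my work" becomes "your work" when echoed back.
REFLECT = {"i": "you", "my": "your", "am": "are", "me": "you"}

# A few ELIZA-style rules; the original script had many more.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)",   "How long have you been {0}?"),
    (r"my (.*)",     "Tell me more about your {0}."),
]

def reflect(phrase):
    return " ".join(REFLECT.get(w, w) for w in phrase.lower().split())

def eliza(utterance):
    text = utterance.lower().strip(".!?")
    for pattern, template in RULES:
        m = re.match(pattern, text)
        if m:
            return template.format(reflect(m.group(1)))
    return "Please go on."  # the all-purpose fallback

print(eliza("I feel anxious about my work"))
# Why do you feel anxious about your work?
```

There is no understanding anywhere in that loop, which is exactly the point.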

Sad_Meat_ asks,

How soon do you realistically think

the AI will gain full sentience?

Sentience is like consciousness,

it means, it's like something to be you,

it's not like that for AI yet,

AI has no will,

no feelings,

it's not sad,

it's not happy.

Even if it says to you the phrase,

I'm happy.

To the AI,

those are just tokens.

When we figure out what consciousness is,

which we haven't yet,

then maybe we can begin to program,

or begin to build hardware that can support it.

Giulio Tononi

and Sir Roger Penrose,

they're studying consciousness,

and Penrose believes it could be something

that involves quantum states,

and it's not something computational.

AI that you use right now,

and maybe for at least 10 to 20 years,

will not be sentient.

@sillyfoxgirlnya asks,

How do I avoid AI?

Well, you can go camping,

it's gonna be harder

and harder to avoid AI,

AI is everywhere.

When you talk to your phone,

and it types out your words,

that's AI.

When you walk past maybe a government building,

and there's a security camera pointed at you,

and it's recognizing people or things,

that's AI.

Now, you don't have to use AI,

you can live your life,

and write your own essays,

do your own homework,

and I encourage you to do that.

unpopulardave asks,

With the advancements of AI video,

how will we be able to differentiate

between what's real,

and what's not?

This is a very challenging problem,

and I know some researchers

who are working on embedding keys in video

and in text,

so that later, when you look at that data,

you can verify it was not changed.

Another way is,

when AI changes something,

there are little watermarks in the video

or the audio,

and we can develop another AI

that can see those,

and identify, this text came from

a large language model,

or this video came from a deepfake.

When you watch an AI video,

you may see a person shake their hair

and the hair changes color,

or things that weren't there,

come into view all of a sudden,

AI doesn't know about object permanence,

it doesn't know about physics or the world,

and that's what I would look for,

if I were looking to see if a video was deepfaked.

@atm0spheric with a zero asks,

How does AI learn to distinguish between truth

and popular opinion in the material

it gets to learn from?

It doesn't, AI has no clue.

All AI is trying to do,

is predict the next word,

and all it's trying to answer is,

Is this word likely?

It doesn't care or know if it's true.
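You can see that popularity-not-truth behavior in a miniature next-word model. This toy counts which word follows which in a made-up corpus; whatever is most common gets the highest probability, true or not.

```python
from collections import Counter

# A made-up training corpus; note "flat" appears after "is" once.
corpus = ("the earth is round . the earth is round . "
          "the earth is flat . the sky is blue .").split()

# Count which word follows which.
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, Counter())[nxt] += 1

def next_word_probs(word):
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("is"))
# {'round': 0.5, 'flat': 0.25, 'blue': 0.25} -- popularity, not truth
```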

Careful_Fig8482 asks,

What is the difference between AI

and ML?

AI is Artificial Intelligence,

a broad umbrella,

and there's many things under it.

Typically, you think if a human does something

that takes intelligence,

Can I write a program that does that for them?

That's Artificial Intelligence.

Now, within that,

one of the methods we use

is called Machine Learning,

you have a bunch of examples,

and you don't program every little line,

you just tell the AI,

Look at these examples,

and teach yourself.

That's Machine Learning.
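A minimal example of that difference: instead of programming the rule, we hand the program example points and let it fit the parameters itself with plain least squares.

```python
# Fit a line to (x, y) examples -- the program recovers the rule on its own.
def fit_line(points):
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in points)
             / sum((x - mean_x) ** 2 for x, _ in points))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# These examples secretly follow y = 2x + 1; we never tell the program that.
examples = [(0, 1), (1, 3), (2, 5), (3, 7)]
slope, intercept = fit_line(examples)
print(slope, intercept)  # 2.0 1.0
```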

@Albert_jjjj,

Is AI that can write its own code an AGI?

Not necessarily,

because we already have this,

and we don't have AGI yet,

there are a number of AI Agents

that can write code,

you can ask an AI Agent to write code

and then run that code,

and see if you can ask it to write code

to improve itself,

and then run that code,

and very quickly, it decays.

Like images, we've all seen image generators,

you give it a phrase,

then you imagine what it's gonna show you,

and it shows you something beautiful.

Well, we did that,

and then took a bunch of these,

and trained another AI,

and it made images that were not quite as good,

but we used those

and trained another level,

and did this again,

and again,

and what you find is that,

it becomes like a copy,

of a copy,

of a copy,

it's terrible.

Mode collapse is what happens,

and that's what's going to happen,

when AI writes itself,

and changes itself over time,

because these agents

and these AIs don't have consciousness.
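The copy-of-a-copy decay is easy to simulate. In this toy experiment a Gaussian distribution stands in for an image model: each generation fits the model to samples drawn from the previous fit, and the spread steadily collapses.

```python
import random
import statistics

random.seed(42)

# Generation 0: the "real data" distribution.
mu, sigma = 0.0, 1.0
initial_sigma = sigma

for generation in range(300):
    # Sample from the current model, then refit the model on its own output.
    samples = [random.gauss(mu, sigma) for _ in range(10)]
    mu = statistics.fmean(samples)
    sigma = statistics.pstdev(samples)

# After 300 generations of training on itself, the diversity is nearly gone.
print(initial_sigma, sigma)
```

With only 10 samples per generation the effect is stark; bigger samples slow the collapse but do not eliminate it.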

@codewithdelia asks,

Is tech evolving too fast for our own good?

Today's AI can compose music,

write essays,

and code, but are we losing the human touch?

Where should we draw the line

in the silicon sand?

I think every person should draw their own line.

Think about it like exercise,

I did not come here on foot,

I came in a car,

the car made it easier

and faster, but does that make me lazy?

I find time to still exercise,

even though I have machines

that can bring me places very quickly.

The same for the mind,

even though you have a machine

that can write an essay for you,

you should still write essays,

you should still be creative,

you should still express yourself.

AI just lets you get rid of the boring stuff.

original_confusion_ asks,

What is the difference between predictive AI

and generative AI?

They're just different ways of looking at AI,

sometimes at the very same model,

sometimes at different ones.

A generative AI learns from a distribution,

think about all the words on the internet

being a distribution of words,

or all of the pictures on the internet

being a distribution of images.

A generative AI can make an image

that looks like it was taken from the internet,

or generate words that look

like they were generated by a human.

Predictive AI is just a different way

to think about it,

maybe you wanna predict

who's gonna win a big basketball game,

or the Super Bowl,

or what the weather will be tomorrow.
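One toy model can serve both roles. This hypothetical weather chain stores P(tomorrow | today): asking for the single most likely next state is the predictive use, while sampling a whole plausible sequence is the generative use.

```python
import random

# Made-up transition probabilities: P(tomorrow | today).
TRANSITIONS = {
    "sunny": {"sunny": 0.7, "rainy": 0.3},
    "rainy": {"sunny": 0.4, "rainy": 0.6},
}

def predict(today):
    # Predictive use: the single most probable tomorrow.
    return max(TRANSITIONS[today], key=TRANSITIONS[today].get)

def generate(start, days, rng=None):
    # Generative use: sample a sequence that looks drawn from the distribution.
    rng = rng or random.Random(0)
    seq = [start]
    for _ in range(days):
        options = TRANSITIONS[seq[-1]]
        seq.append(rng.choices(list(options), weights=list(options.values()))[0])
    return seq

print(predict("rainy"))      # rainy
print(generate("sunny", 5))  # a plausible six-day stretch of weather
```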

@EvincerWaven asks,

I am lost here,

how can AI create jobs?

Isn't it supposed to replace humans?

You sent that on a computer,

the computer was going to take away jobs,

but somehow it made many new jobs.

The internet took away jobs,

but it also made new jobs.

100 years ago,

would you have been able to explain to someone

what a YouTuber is?

We all need to become AI managers,

by the end of this year,

there will be millions,

if not billions of AI Agents,

these are little AIs that could work for you.

The jobs of the future,

a lot of them will be management,

managing the work of AIs.

@TheOmniLiberal asks,

Please stop using dog-[beep] AI

for legal documents.

Either go to someone that has learned

to read legal documents,

or do some homework yourself.

I would say a similar answer,

use AI for the easy stuff,

easy doesn't mean quick.

There are things that are easy to do,

that take forever,

like reading through a document

and finding that one section

that talks about a certain thing,

that maybe you can't search for,

'cause you don't know what words were used,

but AI can find it for you.

@Anygoodnameleft asks,

When they get good enough,

how will they be proven to be deepfakes?

Deepfakes again,

are AI generated videos

that look like someone you know.

AI, as it stands now,

doesn't understand the three-dimensional world,

so, we still have that way to look

at these deepfakes,

and say, Something's wrong.

Either the dynamics in the video,

or something disappeared,

or came into existence,

but at some point,

that won't be true,

at some point,

AI will operate in a three-dimensional world,

and it won't be possible

for maybe even the best AI

to determine that it was deepfaked.

Therefore, we have to go

to the other strategy,

you have to have trusted people,

who, when they're taking video,

or when they're taking a picture,

or audio, they're baking into it,

a signature, that makes it impossible to cheat,

that's the only way in the future

we'll know that something is not deepfaked.
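A bare-bones sketch of that baked-in signature idea, using a shared secret key (real provenance schemes such as C2PA use public-key certificates, but the verify-or-reject logic is the same; the names here are made up):

```python
import hashlib
import hmac

# Hypothetical secret held by the trusted camera; real systems use
# public-key cryptography so anyone can verify without the secret.
SECRET_KEY = b"camera-device-secret"

def sign(media_bytes):
    # The capture device computes a tag over the exact recorded bytes.
    return hmac.new(SECRET_KEY, media_bytes, hashlib.sha256).hexdigest()

def verify(media_bytes, tag):
    # Any later change to the bytes makes verification fail.
    return hmac.compare_digest(sign(media_bytes), tag)

original = b"frame-data-from-the-camera"
tag = sign(original)
print(verify(original, tag))                # True
print(verify(b"frame-data-DOCTORED", tag))  # False
```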

@desaivinit asks,

What sector do you think

will be most impacted by AI

in the next five-years?

I'm not sure I could tell you

even for one-year.

I do think that everything that involves words,

is going to be very much affected,

then imagery, then video.

There is so much coming,

that we don't know where to start.

Think about the proteins

that just came out of AlphaFold,

somewhere in there,

there's a few needles in the haystack

that can cure this cancer,

and that cancer.

There are things that can help

just general metabolic health,

maybe we'll all live longer.

@kayleigh_author asks,

How dangerous is AI?

You gotta think about AI as a tool,

there are dangerous people out there

that might use that tool,

there are bad actors

who can use it in a bad way,

but AI on its own,

has no will or desire.

@ajinkyainamdar_ asks,

Considering the potential

of Artificial Intelligence

reaching human-like intelligence,

do you believe it should have rights?

No. For reasons I've said before,

I believe AI does not have consciousness,

so I do not believe we should give it rights.

anonymity_anonymous asks,

What things is it important

not to tell ChatGPT

and why?

What if you tell your deepest,

darkest secrets to ChatGPT?

Then there's a chance that

that's in its memory somewhere,

and it gets used to train

its next-word prediction.

So, the basics are,

you don't tell ChatGPT things

you wouldn't want to put on a website

that just faces the world.

@hypnotizd_ asks,

With all of these companies

talking about AI,

how long do you think it will be

until we get Skynet?

Skynet is a choice,

and we should choose not to do it.

Now, there are limited cases

where you have to have an AI in control

of a weapon,

think about a Patriot Missile,

when you have some kind of missile

coming in to kill you,

you need a missile to take down that missile,

but it's too fast for a human to sit there

and aim it correctly,

you need an AI to take over

for those limited things.

We as humans,

need to make policy choices

to say there will always be a responsible human

who makes the top choice.

Okay, those are all the questions,

thanks for watching AI Support.

[soft upbeat music]
