So could you tell me about a project you've worked on?

>> Sure, so my background is in theory.

And so I'll tell you a project that I worked on about the bounds of computation.

And so what I really like to think about is,

how far can we push computers and where are the limitations?

And this goes way back to before there were computers.

There were still people trying to think about what it would be

like if there were computers, and what would it mean to think about computation.

And so Alan Turing, in the 1930s, devised this notion of a universal machine, a Turing machine.

You may have seen the movie The Imitation Game, about Alan Turing.

And so we have this notion of computers being able to perform computations.

And this is kind of an abstract notion. It doesn't have to do with whether you're on a Mac or a PC, or with operating systems, or any of the nitty-gritty of hardware or software, but just the notion of computation.

And what's amazing about Turing's work is that you can actually distinguish between problems that are solvable by computers

and ones that can't ever be solved by any computer, no matter how clever we are at building that computer, how much time we give it, or how much memory we give it.

There are just some limits to computation.
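To make that unsolvability claim a little more concrete, here is a rough Python sketch of Turing's diagonal argument. All the names here (`make_contrary`, `halts`) are my own illustration, not anything from the conversation, and this is a sketch of the idea, not a formal proof.

```python
# A rough sketch of Turing's diagonal argument. Suppose someone hands us
# halts(p), a total function claimed to report whether program p halts.
# We can build a "contrary" program that defeats any such claim.

def make_contrary(halts):
    """Return a program that does the opposite of whatever `halts` predicts."""
    def contrary():
        if halts(contrary):   # the decider predicts "contrary halts"...
            while True:       # ...so contrary loops forever, refuting it
                pass
        # the decider predicts "contrary loops forever", so contrary
        # simply returns, refuting it again
    return contrary

# Whichever answer a candidate decider gives about `contrary`, it is wrong.
# For example, a decider that always answers "never halts" is refuted
# immediately, because the contrary program built from it returns right away:
always_no = lambda p: False
contrary = make_contrary(always_no)
contrary()  # returns immediately, contradicting the "never halts" prediction
```

Since no candidate `halts` can answer correctly about the contrary program built from it, no such decider can exist at all.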

>> So they're completely unsolvable, you could never solve these no matter what?

>> No matter what, which I just think is crazy, but really interesting, because there are some problems that are just so hard that computers can't access them.

And I actually think that's kind of cool because it means that we as human beings

can think about them, but computers can't solve them.

And so that's what I think about a lot.

And then a very specific research project that I worked on, based on this notion of computability,

is trying to understand how we can recover some computability, some algorithms for solving problems, when we restrict the resources that we give our computers.

And so we can say, let's not go hog wild and give our computers as much time as they want to solve the problems and as much memory as they want, but instead ask: what if we have a model of computation that's much simpler?

And so this is called a deterministic finite automaton or a DFA.

I don't know, have you heard of finite automata before?

>> I have a bit, yeah.

>> Okay, so the way I like to think about automata is as smaller computers.

So the functionality of an automaton is somewhat similar to a full-blown computer's, but it just doesn't have as many resources.

It doesn't have much memory to work with.

And it's gotta make decisions on the fly.

It can't really store a lot of information.

And it can't defer decisions.
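As a concrete sketch of that "small computer" idea, here is a minimal DFA in Python. The example language, binary strings with an even number of 1s, is my own illustration, not one from the conversation; the point is that the machine keeps only a single piece of fixed, finite state and decides on the fly as it reads.

```python
# A minimal deterministic finite automaton (DFA), sketched in Python.
# It reads the input one symbol at a time and keeps only one piece of
# fixed, finite state -- no tape, no stack, no deferred decisions.
# Example language (an illustration): binary strings with an even number of 1s.

def dfa_even_ones(s: str) -> bool:
    state = "even"  # start state, which is also the accepting state
    for ch in s:
        if ch == "1":
            # flip between the two states on every 1
            state = "odd" if state == "even" else "even"
        # on "0" the state is unchanged
    return state == "even"
```

However long the input grows, the machine never uses more than those two states, which is exactly the resource restriction being described.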

So- >> Do you quantify those amounts?

Do you narrow the scope?

>> Absolutely, so, all of this can be made very precise and very mathematical.

But the cool thing about the research project that I'll tell you about is that

you can see sort of the heart of the question without getting into all of those

details.

And then afterwards we can go back to them because they're really cool, too.

>> Okay.

>> But the idea is that when we restrict the computation power,

then all of a sudden there's all sorts of questions that a computer would've been

able to solve, but that these little machines, these automata, can't.

And so there are particular questions that we can prove separate the class of computable objects, those that are answerable by computers, from those that are automatic, the ones that automata can handle.

And so there's this very clear distinction between the automatic world and

the computable world.
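One standard example of that separation (a textbook one, not necessarily the one from this project) is the language of strings of the form a^n b^n: an ordinary program decides it easily, but a pumping-lemma argument shows that no DFA can, because recognizing it requires counting arbitrarily high, beyond any fixed, finite set of states.

```python
# The language { a^n b^n : n >= 0 } sits on the computable side of the
# divide: a program decides it easily, but no DFA can, since a DFA's
# finitely many states can't count n without bound.
# (A standard textbook example, not necessarily the one from this project.)

def is_anbn(s: str) -> bool:
    n = len(s) // 2
    # even length, first half all a's, second half all b's
    return len(s) % 2 == 0 and s == "a" * n + "b" * n
```

So the "computable world" contains this set, while the "automatic world" provably does not.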

And so then my work was to try to understand that interface, and

in particular I was trying to understand the complexity of objects that live in

the automatic world.

So we know that these machines are weaker than computers.

We know that there are sets and functions that automata just can't describe very well, that they can't do everything that computers can do.

And so, is there a bound on the complexity of the objects that they can describe? Because maybe we want to understand how much we've lost when we've taken away those resources from the computers, and how much remains.

And so, in joint work with a couple of my colleagues, we looked at a particular measure of complexity called Scott rank, named after Dana Scott, who's a computer scientist.

And this measure of Scott rank tells us

how complicated the objects that are being described are.

And we know a lot about the Scott rank of computable structures and

computable objects.

And we know that the Scott rank can get pretty high up there.

And so, we were thinking, well, maybe the Scott rank of the automatic world is going to be bounded quite a bit lower than the computable world, because automata are so much smaller and weaker, you know, these tiny little things.

But the amazing thing was that we proved that you can find structures that are arbitrarily complicated in terms of Scott rank, structures with really, really, really high Scott rank, but that are still automatic.

And so even though these machines are way smaller and simpler and stupider than full-blown computers, they can still code up