This Teach-Out examines the present and possible futures of Augmented Reality (AR), Mixed Reality (MR), and Virtual Reality (VR) through conversations with leading experts and practitioners in this ever-evolving world of digital interfaces. In this Teach-Out, we explore opportunities of these emerging technologies in domains ranging from medicine and nursing, to landscaping and architectural design, to multimedia and entertainment, to education and research. We also discuss dark patterns and new challenges associated with these new interfaces, such as authenticity, accessibility, and privacy.
We invite you to join this conversation about these emerging technologies that blur the line between reality and computer-generated sensory experiences. This Teach-Out will examine broader questions, such as: What are these new technological breakthroughs? What are practical applications of AR, MR, and VR to users’ everyday lives? What are possible directions for future AR, MR, and VR interfaces, and what are the important issues to consider?
This Teach-Out investigates the differences between AR, MR, and VR, and discusses a broad range of implications for our daily lives. It also explores future applications of these technologies across a range of domains.
A Teach-Out is:
- an event – it takes place over a fixed, short period of time
- an opportunity – it is open for free participation to everyone around the world
- a community – it will be joined by a large number of diverse individuals
- a conversation – an opportunity to give and take ideas and information from people
The University of Michigan Teach-Out Series provides just-in-time community learning events for participants around the world to come together in conversation with the U-M campus community, including faculty experts. The U-M Teach-Out Series is part of our deep commitment to engage the public in exploring and understanding the problems, events, and phenomena most important to society.
Teach-Outs are short learning experiences, each focused on a specific current issue. Attendees will come together over a few days not only to learn about a subject or event but also to gain skills. Teach-Outs are open to the world and are designed to bring together individuals with wide-ranging perspectives in respectful and deep conversation. These events are an opportunity for diverse learners and a multitude of experts to come together to ask questions of one another and explore new solutions to the pressing concerns of our global community. Come, join the conversation!
Find new opportunities at Teach-Out.org.
Steve Oney
Assistant Professor of Information, School of Information and Assistant Professor of Electrical Engineering and Computer Science, College of Engineering
Roland Graf
Artist, Designer, Associate Professor at the University of Michigan Stamps School of Art & Design
So, I want to transition to talking more about research in AR,
VR, and MR. And I would split research into two areas.
So, there's research into improving the actual underlying technologies themselves,
and there's research that leverages
these technologies to communicate or explain concepts,
but isn't about improving the AR devices that we actually use.
So, if we start with the first one,
research into improving AR, VR, MR devices,
I want to start by asking what you think are some of
the most important research questions that we need to answer in this space?
Yes. So, I can take a stab at that.
I think there's really three concerns.
So, one there is the enabling technology,
and this would include the sensors,
the cameras, the visualization or the graphic visualization,
the batteries, the form factor, and so forth.
Those are all objects of research,
and there are places at the University of Michigan that will look at all of those things.
So, in Joanna's college,
they would definitely look at the batteries,
and the sensors, and the display technology.
In the Art & Design college,
they'd be very interested in the form factor.
In our own school, we'd be very interested in
the user experience and the design of that interaction.
So, that's kind of one area,
or that's one problem space.
The second problem space has to do with
the design of the AR environment itself, its affordances.
We don't really understand very much about that.
And that's perhaps best illustrated if you think about doing
movie production from the point of view of watching the movie through an AR device.
Where do you hide the cameras?
Because if I pan around,
now I can see in 360 degrees,
where are the sound guys,
and the director, and everyone else.
So, that's a second kind of a problem.
And then, there's a whole third area in my mind that has to do
with the ethics and the aesthetics of augmented reality,
and we saw a little of this with the introduction of Google Glass.
The sense in which the camera was on
and people didn't know what the wearer was recording,
or what she might be doing with that recording,
and this is going to be a problem with AR at an even greater magnitude than it was.
I mean, Google Glass was
a very simple implementation relative to what AR is going to deliver.
So, those are the three broad areas that I would describe.
And Michael, you're working principally in that user interaction space,
maybe you can say more about your activity.
Yeah. So, a lot of the issues that we're
addressing at the moment actually come up even when it comes to teaching.
So, normally, in interaction design,
we have books we can refer to.
We have design guidelines.
They have often evolved over time.
Take website design, for example: we've been learning for 20 years now.
So, there is a good set of guidelines.
For augmented reality and for virtual reality,
I think we're still in the early stages of actually
exploring what works and what doesn't.
We've talked a lot about the domains where we see a lot of potential,
but when it comes to actually asking a designer to
sit down and create this augmented reality experience for you,
there are so many issues at the moment.
So, if you, for example, take website design,
we have now been designing websites for about 20 years,
and we have a good set of guidelines.
Some of them are really practical, so I can just put them in front of the students.
And then, the students would try to apply some of these guidelines.
They always require interpretation.
So, it's not like you can just implement these guidelines.
The problem is that for augmented reality and virtual reality,
we don't actually have those guidelines at the moment.
There's a lot of experimentation that is going on.
We don't know what is working well with the user.
What is actually improving the user experience?
What is actually detracting from the user experience?
I think this whole idea of accessibility needs to be
transformed into augmented reality and virtual reality.
So, we don't have that design corpus.
We just don't know how many existing other kinds
of guidelines and principles need to be transformed.
And so, actually, in my lab,
a big topic that I think would be a Ph.D. topic is augmented reality usability.
So, we talk about usability a lot with, for example,
mobile interfaces and touch,
and how to make these things work and be accessible.
But for augmented reality,
it's so difficult and so different.
For example, testing an augmented reality interface: how does
the designer actually anticipate in which environments their interface will be used?
Will it be used in the living room at home?
Can the content still be seen with whatever is going on in that home,
in the kitchen, and whatever?
So, there are a lot of questions there.
Yes. So, in the space
of human-computer interaction around desktop computers, like you say,
we had a couple of decades of experience,
and then at the end of that,
Jakob Nielsen delivered his,
whatever they are, 10 heuristics for interface design.
And now we know we can hand those heuristics to students,
and they can look at a prototype design,
and they can really do like checkboxes.
Navigation? Yes, no.
Legibility? Yes, no.
That kind of thing. But we don't have that at all for augmented reality.
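The checkbox-style heuristic review described above could be sketched as a tiny script. This is a hypothetical illustration: the heuristic names follow Nielsen's well-known list, but the `evaluate` helper and the sample results are invented for this example.

```python
# A minimal sketch of a checkbox-style heuristic review.
# The heuristic names follow Nielsen's list; the helper and the
# sample results below are hypothetical, for illustration only.

HEURISTICS = [
    "Visibility of system status",
    "Match between system and the real world",
    "User control and freedom",
    "Consistency and standards",
    "Error prevention",
]

def evaluate(prototype_results):
    """Return the heuristics the prototype failed (checkbox = False)."""
    return [h for h in HEURISTICS if not prototype_results.get(h, False)]

# A relatively novice designer runs a prototype down the checklist:
results = {
    "Visibility of system status": True,
    "Match between system and the real world": True,
    "User control and freedom": False,
    "Consistency and standards": True,
    "Error prevention": False,
}
failed = evaluate(results)
```

The point of the sketch is how mechanical this evaluation becomes once the heuristics exist, which is exactly what is still missing for AR and VR interfaces.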
And I think that almost all of these technologies
take off when you get to the point where the heuristics can be communicated efficiently,
and then a relatively novice designer
can take those heuristics and quickly advance to state of the art.
And then, you'll get a lot of "me too" stuff,
which happened with graphical user interfaces.
Steve Jobs is the best example.
He got the tour of the Palo Alto Research Center,
and he saw the stuff that Xerox was doing,
and he said, "I want to have some of that."
And then, he went back and built, well,
the story would be nicer if he went back and built the Mac,
but he went back and built the Lisa, which failed.
And then, they got the Mac.
So we don't even have a Lisa equivalent in the AR space right now.
Well, and to tie it back to the education space.
We don't even know what educational concepts are best suited to be put into AR,
VR. We have our guesses.
We have our hypotheses that anything that is three
dimensional could in principle benefit from being put in AR,
VR, but we don't know what the design principles there are either.
And so, we need to develop those as well,
so that teachers can recognize what they can virtualize
or augment rather than taking their entire curriculum and dumping it into AR, VR.
You want to take certain parts that are going to be the most suitable for that kind of-
Yeah, and particularly if that's effortful.
That's right.
So, if you're going to invest the effort,
you want to make sure that that is going to produce the biggest payoff.
That's right.
In terms of learning outcomes.
So, for research that maybe isn't applied to AR, VR,
and MR directly but might leverage it to improve communication,
to improve collaboration, especially remote collaboration,
how do you see these technologies fitting in?
Well, I'm really excited about using these technologies
to examine new ways of trying to accelerate science in particular.
So for instance, as I was talking about before,
if we take a crystal structure: currently,
we teach crystal structures that are pretty simple,
because they're easy to visualize on the page or even on a computer screen.
But imagine if you take a fairly complicated crystal,
and now you can visualize it that much more easily.
You'll be able to accelerate understanding and expertise,
the rate at which knowledge is absorbed by the student,
if you will, to put it in those kinds of terms.
And then, if you can take a really complicated structure like a protein, for instance,
and you put it into the space,
you can start to manipulate and you can actually see how the ligands,
for instance, can attach to the protein or not.
I mean, you could really accelerate research.
Normally, the way we do this is you have this chemical formula, again,
the abstraction into a single line,
that's not going to tell you very much about the conformation of the molecule.
It's only once you see the molecule that you can really
start to make advances in that kind of physical intuition.
And also, other areas are looking at how metals deform,
for instance, or at the structure of grain boundaries.
Currently, we have a technology called transmission
electron microscopy where we look at the structure of a material.
And the way we do it is we actually look through the material,
and then we project a three-dimensional material onto a two-dimensional page.
And so, we have to infer a lot about
the structure of that material because we're looking through it.
And we can't tell, this piece,
this feature that we're looking at,
is it in the middle or is it at either side?
And how is it interacting with this other thing?
It really does complicate interpretation of nanomaterials, for instance.
So I can imagine creating a whole new way of visualizing these kinds of structures.
I mean, the crystallographic images of the DNA molecule look
like Rorschach blots when you see them, because it's the pattern.
The diffraction pattern.
Diffraction pattern. So you're really doing a very complex translation.
First, you have to understand how that diffraction technology works in the first place.
That's right.
And then, once you understand that,
you have to understand what characteristic pattern would it produce.
That's right.
And then, which pattern is the signature of the structure that you believe it might have.
So, that's such an elegant thing to see,
that once you understand what the double helix structure is,
you see that that is the only form that could produce that diffraction pattern.
But again, you're going through all the abstraction, right?
Because diffraction, you're looking at the material,
you're looking at the molecule in frequency space instead
of in real physical space that we exist in.
So, it is really complicated.
And that's why diffraction is so powerful,
is that it's a very simple thing.
It takes the average pattern for the entire material.
But here, we can actually look at individual molecules in a way that isn't possible,
or is just now becoming possible.
And then also, multi-dimensionality is another area that could be very interesting.
So we think about three dimensions, X, Y, Z.
There's the fourth dimension which is time.
But there are other dimensions that you can also start thinking about like composition.
So if you take, for instance, a binary alloy,
which has two components, or three or four,
you can start to really think about how to visualize all those different dimensions.
So you have position in three-dimensional space.
We have different kinds of composition.
And then we can also superimpose over that, time evolution.
So this entire thing,
it's very complicated just to imagine the computational power necessary to do this.
But if we had it, the physics that we can understand,
and then the materials that we could design because of that newfound physics,
it's pretty, pretty exciting.
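One way to picture the dimensions listed above (position in three-dimensional space, composition per component, and time evolution) is as axes of a single array. This is a hypothetical sketch; the grid size, component count, and starting fractions are made-up values.

```python
import numpy as np

# Hypothetical sketch: the dimensions discussed above as axes of one array.
# The grid size, component count, and starting fractions are made-up values.

nx, ny, nz = 4, 4, 4     # position in three-dimensional space (X, Y, Z)
n_components = 2         # a binary alloy has two components
n_steps = 10             # time evolution superimposed on top

# composition[t, x, y, z, c] = fraction of component c at a point and time
composition = np.zeros((n_steps, nx, ny, nz, n_components))
composition[..., 0] = 0.7    # component A fraction, initially uniform
composition[..., 1] = 0.3    # component B fraction

# Sanity check: fractions at every point and time step sum to one
assert np.allclose(composition.sum(axis=-1), 1.0)
```

Even this toy array hints at the computational cost: visualizing all of these axes at once is exactly the multi-dimensionality challenge described above.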
So, you were asking about how augmented reality is
going to transform research, and I was thinking about two things.
One is, research is all about sharing knowledge.
And right now, the way it works is that
scientists are very busy around
certain times of the year preparing for these conferences.
These conferences are really big physical events where you bring together lots
of people across the globe to share their knowledge, they give presentations.
I believe that that model will probably become more flexible.
In fact, there are already the beginnings of these kinds
of changes in the way we actually present and disseminate information.
And I think, the other thing that was
going through my head is also some of the research that's going on in my lab.
For example, one is a project where we're designing a future web browser,
a web browser that actually makes use of Augmented Reality.
The way we share links, at the moment,
is we copy these kinds of strings,
these URLs, maybe we've bookmarked them and maybe we have fancy ways to exchange them.
But it comes down to that. What if, in the future,
I just place a link right in front of you
with my smartphone, place it here, and you grab it from there?
I feel like it changes both the interactions
and the meaning.
Actually, what does that mean? The link is not physically placed here.
Will future people coming to the room see this
or is it meant for you to pick up and take with you?
So, we were thinking a lot about all these interactions and
like how it can change research.
And I think, the other project that the learners saw in the video is
this idea of bringing remote participants into a co-located physical space.
So, we had a setup with a 360 camera that was streaming out of a room,
and we had remote participants sitting at their laptops,
and they could actually walk within the space by
manipulating the camera view, as if they were actually in the physical space.
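The remote-participant interaction just described, where a laptop user steers a 360-degree camera view, could be sketched as a small orientation update. This is a hypothetical illustration; the sensitivity value and the clamping limits are assumptions, not details of the actual project.

```python
# Hypothetical sketch of steering a 360-degree camera view from a laptop:
# a mouse drag updates the yaw (left/right) and pitch (up/down) of the
# rendered viewport. Sensitivity and clamp limits are assumptions.

def update_view(yaw, pitch, dx, dy, sensitivity=0.1):
    """Map a mouse drag of (dx, dy) pixels to a new view orientation."""
    yaw = (yaw + dx * sensitivity) % 360.0                    # wrap horizontally
    pitch = max(-90.0, min(90.0, pitch + dy * sensitivity))   # clamp vertically
    return yaw, pitch

# Dragging right and up pans the view accordingly
yaw, pitch = update_view(0.0, 0.0, dx=200, dy=-50)
```

Wrapping the yaw lets the remote viewer pan all the way around the room, which is what makes it feel like walking within the physical space.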
And I feel like these technologies
can be used in a very good way to actually bring together people,
to bridge physical constraints,
and have interesting ways of actually exchanging information,
sharing knowledge, and accelerating research.
So, do you think that shift in collaboration might actually have an effect
on the types of lab spaces that universities need or that they use as well?
I mean, one possibility is the physical footprint of the lab changes.
Disney is demonstrating right now
an amusement park feature called The Void,
where you put the VR helmet on,
and then in the space of a very small room,
I mean even smaller than this studio,
you can create an experience that feels like you've traversed hundreds of meters,
and over different kinds of terrain, and so forth.
So it's plausible that if you were working in this space of user interaction around AR,
you might not need more than a closet, right?
Or certainly not. I'm sorry,
talking like a dean here.
You see, he's taking away your lab. Nice.
And claw back that square footage, right?
This is an alternative for this.
I think it could have consequences.
The other thing I'm thinking about is,
there are people who talk a lot about this notion of dematerialization.
So you replace material activities, like
built spaces or the transportation that creates physical co-location, with virtual co-location.
And I'm thinking, especially in terms of
the greenhouse gas emissions that are
generated when we all get in airplanes and fly off to remote places,
because it's the long haul flights that do the worst damage.
We could replace that with
these virtual experiences that still have an energy signature, but nothing like that.
Like CHI, this is a big meeting in human-computer interaction that
brings a couple of thousand people from all over the world to
some specific location. That's hundreds of long-haul flights.
If we could replace that,
I think that starts to define a path toward perhaps a greener, healthier world.
In fact, CHI is introducing some of these ideas
in the work that happens before a conference, when
people actually select those papers,
kind of the best, the ones that should make the batch this year.
So what if we do not actually physically meet?
So it's 200 people before the 3,000 people come together.
What if we actually try this out with virtual meetings?
And they have been very careful
with actually making it more like a scientific experiment where half
of these people actually physically
still come and then the other half actually experience it virtually,
and they have tried to understand the benefits and tradeoffs, and there are actually a lot.
So people feel very strongly about taking away the physical, the co-located component.
Tom, I will not be able to have a coffee with you,
I mean, not in the same way,
if we do not meet in the same place.
And maybe the coffee isn't the greatest example. But it seems like
But we could each go get our coffee and bring it back.
That's right. That's how you would have your coffee. I would have my coffee.
Yes. So, people need to start thinking about
whether this is a good tradeoff.
If it really comes down to the coffee,
and we would save a lot of emissions,
then I feel like we should do this.
But people feel very strongly about this,
about the things that would go away if you are not physically co-present anymore.
And so, I don't know.
I think it is also a matter of thinking within the current technological constraints.
Because I do feel there will be better experiences in the future.
If you can manage to stimulate other senses,
if we feel like we are actually really near each other
and we forget about the fact that Tom is actually not sitting with me in the room,
well actually, Tom is sitting with me in the room right now,
but it feels like I can't tell the difference.
If we get to this stage,
then that has other dangers,
of course, which we talked about: dark patterns, for instance.
But it's definitely a state that we could potentially reach one day,
and then we cannot tell the difference anymore, and
everything feels more natural and integrated.
So, beyond education and research,
how do you think AR might affect entertainment, humanities, arts?
Well, certainly from the entertainment perspective,
we have a very large entertainment enterprise at
the University of Michigan: intercollegiate athletics.
And I think, one of the motivating stories for many of us all along has been,
what would the experience of the Big House be with augmented reality?
Imagine that you're watching Michigan build up a lead
over Ohio State and maybe the game is
not as interesting as it was because the lead is so huge.
But you're using your
Science fiction.
Yeah, exactly. You're using your AR technology to examine tendencies,
configurations of the offense,
how many times that particular configuration has appeared.
The players are all instrumented so you could start to get
some sense of which players have experienced excessive load or
what the average load is for the whole team and whether that makes it
likely that the coach will go for it on fourth down or punt.
A lot of things that you see when you're watching the telecast,
the line to gain,
the probability of a made field goal by distance from the goal line,
and that kind of thing could be part of the stadium experience.
Having said that, there are a bunch of technological barriers that have to be surmounted.
The big one is, we just don't know how to beam
the Wi-Fi signal down into that hole in the ground right now,
something that your colleagues in electrical engineering are working on, I think.
And if we could solve that one,
and then the form factor has to be cheap enough,
and then we have to produce the content.
We're already seeing some augmentation in the game day experience.
You can get the special FM headsets that give you the play-by-play,
and one of our colleagues talked about using the glass front
of the luxury boxes as a projection surface, so for the whole room,
you could superimpose the augmented reality overlay onto the players.
Yes, so I could imagine taking that and extrapolating that to either movies or plays.
If you go to a play or a movie,
you can situate yourself within the scene, for instance.
That would be a really interesting way of going.
Experiencing a concert in a completely different way using AR,
VR is another place where that could be really interesting.
But I also think there's an area that is not only about
consuming entertainment but about creating art, literally.
I mean, what are the new forms of art that can take place in these types of spaces?
I mean, right now,
we paint on a two-dimensional canvas.
What does painting look like in augmented or virtual reality?
The way that we currently make sculptures is that we either chip
away at existing material or we form existing material in some way.
So, thinking about what
that art creation looks like in virtual reality is an interesting exercise.
So not only does it change the experience for the consumer of whatever it is,
but it also changes the experience very strongly for the creators of the experiences,
the artists, the musicians, and so forth.
Yeah, in addition to these examples,
clearly in my work,
I often think about how the office environment, or the classroom,
or at home, the living room,
how that could change, what kinds of augmentations would be interesting,
when would I actually be in a state where I would want to
sit down and then consume virtual reality.
So, we have a couple of projects.
I mean, they're coming out of research, but they're
supposed to target daily life activities.
So for example, reading a newspaper,
or getting news content,
accessing and seeing a recap of the game,
I think there's a lot of room to play just with the media.
And maybe another example that a lot of people can relate to is travel.
So actually, when you're going on a trip,
going to a new location you haven't been to before,
there are all these things you have to figure out.
We have maps and we have those kinds of things but they need time and effort,
they need to be read.
And I feel like there's a lot in this domain of personal assistants.
I'm not talking just about accessibility at the moment,
but really, just like going to unknown locations, finding your way through.
There are lots of ideas there where the glasses can actually suddenly make a lot of sense
and they would help you navigate.
As an activity, I often ask students to think about their daily life activities:
do a diary kind of thing and see what you did just today,
and then think about which of these tasks could be meaningfully
enhanced now that you know what augmented reality and virtual reality can do.
And the students,
and I expect also our learners,
would come up with lots of good examples.
Accessibility is a topic I have mentioned before.
I also think accessibility and privacy are like
big topics that need to be thought through by their respective scientists.
And the interesting thing,
tying back to how I believe this whole thing is going to change research and teaching,
is that you actually need to start mixing
together people who have previously maybe not worked so much together.
So you often need somebody who is
very experienced in augmented reality and who can bridge
disciplines to work with medicine or to work with physics and those kinds of things.
So, it's very interesting how this whole thing is going to
transform what we do in our daily lives and then what we do in our work.
I mean, one consideration with the headset is,
the headset is a richly equipped device
with all kinds of sensors about where you are in the physical world.
If you are visually impaired,
those sensors could be very useful in providing feedback
about navigation and what's happening around you.
It's not just for people with normal vision.
It could be for people who have various impairments.
It could be very useful.
In fact, another project that we have started but that
is technically too challenging for our research lab and the collaborations I have so far,
is this equivalent of an augmented reality screen reader.
We talk a lot about seeing these augmentations and seeing molecules in front of us.
But what happens to the people that don't have vision?
And so, to think about a device that could somehow
voice, or read to you, what you see,
and that depending on how you change your perspective...
it's really difficult to produce that information.
We don't actually know what it is that you're actually looking at.
So if you had to use voice to describe just the physical world:
I can see the mug here,
I can see you.
So an AR screen reader, I think, is one of
the hardest nuts to crack in the future, to actually
include people with visual impairments so that
they can also benefit from augmented reality and virtual reality.
So one last note on the accessibility theme.
We have a colleague in the School of Art & Design who has developed games
for players who have different levels of physical ability,
so that someone who is in a wheelchair and someone who
is ambulatory can play equally well using the AR techniques.
And it creates a level playing field where when previously,
maybe the person with different ability would have been excluded, and now,
they can be included in the play. But to scale it,
every player's got to have a 4,000-dollar device on her head.
We got to get past that.
Right. That's actually the project that the viewers
saw when they visited Roland Graf's lab.
And then, I guess, there were also some backward references to some
of the projects viewers have seen when they visited my lab.
Tom, Joy, and Michael,
thank you for this really fascinating conversation,
and thank you for watching.
So, we look forward to talking more with all
of you in the forums and to continuing the Teach-Out.