Hello everyone! My name is Ladek and my guest for this episode is Matt Donovan, a self-described “slayer” of shiny objects, recovering instructional designer, advocate for the learner, warrior for ruthless relevance… and as his day job, Matt is Chief Learning and Innovation Officer for GP Strategies.
In this future-looking conversation, Matt and I talk about
00:00 › Start
6:24 › Strategic—What strategies, processes and tech are really popping in the corporate L&D space recently, and how has GP Strategies responded?
18:00 › Innovative—When innovating, at what point do you put an “alpha” out there to start testing, and at what stage do you move to something that is production-ready?
24:49 › Transformative—How are L&D and education changing in terms of the infrastructure around learning and learner access? And, how is the infrastructure around us going to change to support more personalized immediate learning opportunities?
36:35 › Visionary—What is Matt’s vision about Web 3.0 and Artificial Intelligence, and how will they impact what we normally think of the arc of adult continuous learning?
44:37 › Transcendent—Matt’s opinion about the near future of learning and what we all should be doing to prepare and remain relevant?
Subscribe for the latest news, practice and thought leadership at eLearnMagazine.com
This is the eLearn podcast. If you're passionate about the future of learning, you're in the right place. The expert guests on this show provide insights into the latest strategies, practices,
and technologies for creating killer online learning outcomes. My name's Ladek, and I'm your host from Open LMS. The eLearn podcast is sponsored by eLearn Magazine,
your go-to resource for all things online learning. Click-by-click how-to articles, the latest in edtech, spotlights on successful outcomes, and trends in the marketplace. Subscribe today and never miss a post at elearnmagazine.com. And OpenLMS,
a company leveraging open source software to deliver effective, customized, and engaging learning experiences for schools, universities, companies, and governments around the world since 2005.
Learn more at openlms.net. Hello everyone, my name's Ladek, and my guest for this episode is Matt Donovan, who is a self-described slayer of shiny objects,
recovering instructional designer, advocate for the learner, and a warrior for ruthless relevance. But he also has the day job of chief learning and innovation officer at GP Strategies.
In this future-looking conversation, Matt and I talk about what strategies, processes, and tech are really popping in the corporate L&D space recently, and how GP Strategies has responded to them.
Then we ask Matt, when innovating, at what point do he and his team put an alpha out there to start testing, and at what stage do they move to something that is more production-ready?
Next, we talk about how L&D and education are changing in terms of the infrastructure around learning and learner access. And we also talk about how the infrastructure around us is going to change to support more personalized,
immediate learning opportunities. Next, I ask what Matt's vision is about Web 3.0 and artificial intelligence, and how these are going to impact what we normally think of as the arc of adult continuous learning.
And then finally, we close our conversation around discussing Matt's opinion about the near future of learning and what we all should be doing to prepare and remain relevant. Hello,
everyone. My name is Ladek, and welcome to the eLearn podcast. I'm super excited to have my guest here with us today, Mr. Matt Donovan. How are you today, Matt? Hello. Doing well. Excellent.
Excellent. Excellent. I'm glad to see you. Remind me, we had this conversation like a week ago, but tell me and everyone else, where in the world do we find you? I am in Bloomington,
Indiana, here just a little bit south of Indianapolis, home of Indiana University. Fantastic. Bloomington, Indiana. We're doing this live and recording it on August 8th, 2023, so I'm hoping you're not going to have too much of an Indian summer coming your way.
No, I don't. I mean, we've got, it already feels like the summer's over because school's back in. Both my kids now have started school. The university,
the kids are coming back into town, so it feels like summer's already over, so we always generally have a little bit of an Indian summer, so to speak. I feel that.
You know, I came back from vacation myself. My kids just started school, and everybody keeps saying, hey, how was your summer? And I said, oh, you mean last year? Because it feels like it was already a year ago. Matt,
you are the chief learning officer and chief innovation officer, or I guess it's chief learning and innovation officer for a company called GP Strategies. As I like to do for everyone on the show,
could you please take the 60 seconds, 90 seconds, introduce yourself, kind of give us, bring us up to speed about who you are and what you do. Absolutely. So as the chief learning and innovation officer for GP Strategies,
I have one of the coolest jobs and what I like to be able to do is look externally, what's happening out there in our industry, the L &D industry, look what's happening in the industry of our clients and our partners and be able to look for the trends and signals and then kind of really separate out the noise and figure out how we're gonna bring that together for,
you know, not only for our internal teams, but also for our partners that we work with to kind of move forward, I'd say, in the space of learning and development. So work a lot in the corporate sector,
really looking at what the advancements in learning technology are, et cetera. So I have a fantastic team. We oversee the Innovation Kitchen, which is our dedicated space to be able to bring things in.
When I talk about those trends and signals: is it a new methodology, new research on an evidence-based approach to learning design? Is it a new tool? And being able to kind of bring it in,
play with it, see what's really working. Is it really ready for prime time? What are its strengths and weaknesses? A lot of that. So the team really runs the Innovation Kitchen, and I oversee that. Personally,
though, I am a recovering instructional designer by trade, classically trained, been in the field for way too many years. So I won't even share that, but I will say that I have always been part of the new and the innovative,
whether it was in corporate or higher ed, or in the nonprofit space. I have a passion for learning. So it's a big part of what I love to do. So it kind of brings me to bring all those things together.
Tech, innovation, learning, and just how people learn in general. And one of the things I love in your tagline, I think it's on your LinkedIn as well, whatever. You like things to be ruthlessly relevant.
So bring it out. I want this question to come out right, but GP Strategies, it's a fairly large organization. You interact with a lot of big name brands and whatnot.
Still today, adult learners kind of think of L&D as that compliance training thing. Like, what are the kinds of trends that you've been... So before we get into the future,
I'd love to kind of, like, think the past 12 months, 24 months: what are the things that are really popping in the corporate L&D space that you have been seeing, that you actually pushed go on
and, you know, rolled out, and sort of said, wow, this worked really well, or maybe this fell flat on its face?
Yeah, I mean, when you put that time frame in there, the past 12 and 24 months, I think it's really important to take a step back and acknowledge that we've had two major, what I consider, inflection points in the L&D industry. The first one was the pandemic. It really changed the way we look at it, and I can talk a little bit about that. But I think the second one is the reintroduction, or the amping up, of AI into the space. And both of those things are having a bearing on the way organizations think about L&D: how they tackle it, deliver it, enable it, support it. So a lot of that comes back to those two big changes that were happening out there. I mean, it's hard to even go back and say what life was like before March of 2020 for a lot of us, because so much has changed in what we do.
Within that, I think for learning and development in general, we've had to realize that we have to be more responsive, more agile in nature. It's an increasingly volatile, uncertain, complex, and ambiguous world. It's not only that we're getting more change; the velocity at which it's hitting us is getting faster. So if you think of the delta of a change curve coming through, the beta being the volatility, that's actually getting spikier and more compressed as it gets faster. So L&D organizations need to be able to respond better, to be able to shift to meet the needs of the learners and the folks they serve. When you really think about this, it's really shifting from a primarily org-centric view to incorporating a learner-centric view in the way in which they design and bring things forward.
And so that's where the concept of ruthlessly relevant comes back in. It's putting it back onto the learner, performer, consumer at the end. How are they going to use this to do something different in the workspace,
grow, be something different? So, you know, drive to meeting a specific moment of learning need. So I think the agility is probably one of the biggest things: how do we adopt new technology,
make the most out of it? How do we bring in new approaches like design thinking, for example, which gets back to learners' interest? So these are all, I think, things that are coming in and,
you know, not to mention AI, which is going to change not only the way we learn, but the work that we do. So, if you think about it: I learn to be able to perform better, and both sides of that equation are now changing rapidly.
So we need to think about how we actually get learning closer to the point of work so that people can actually deliver on that, which is a great promise, I think, of AI to help out. Well, let's talk about that. No,
no, but let's talk about that promise. Hi there. I'm sorry to break into the show right now. But if you're enjoying this show, if you are challenged, if you're inspired, if you're learning something, if you think that you're going to be able to get something out of this to put into your practice,
do me a quick favor, pause right now and just hit Subscribe on your podcast player right now. It doesn't matter which one. Just hit Subscribe because that way it will make sure that you never miss an episode in the future. Thanks.
Now, back to the show. I'm super interested to have our kickoff, you know, going into the future, because we're now 10 months into the release of ChatGPT, which, as we all know,
AI has been with us for years. I mean, Google Maps. I mean, any social media platform, Duolingo, it's been around for a long time, but this really brought it to the stage.
Any individual consumer could show up and start to get benefits from it, many surprising and amazing outcomes.
I guess, what has been GP Strategies' or your experience over the last 10 months with that explosion, in terms of working with your clients and responding to that
oh-my-God, that OMG effect? And now that we're starting to settle down, how is that being incorporated? I mean, what's the realism that's coming out of it?
And then we'll follow on with, is everything gonna be different in five years? You know? Is there? Yeah, I mean, wow,
there's a lot in that question. So I think, first of all, let's talk about why. Why is this a phenomenon? Because let's be honest, I wrote an article; every year I do a learning trends piece.
I look at anywhere from seven to nine trends that are coming out. I roll it out at the end of the year, at the very beginning of the year. And I said, hey, can people go back and find that somewhere?
Yeah, it's actually on the GP Strategies site. So, our learning trends, and we do them for 2023. And I remember last year, in 2022, end of '21,
'22, I was coming back, and AI has been talked about as having an impact in L&D for, I went back and did the research, like six years in some of the marketing or trade magazines.
We've been talking about it, and there's been very little adoption of it. It really didn't get any traction. But the technology, which we weren't really seeing, actually was moving along. It still was developing and innovating.
If you look at peer industries like call centers, for example, they've been using AI for a long time. They've been using it for sentiment analysis. So when you're on a phone call with a call agent, there's a little AI coach that can listen in, hear escalations in the voice, and send a note to the actual call person to say,
hey, de-escalate, share this, reinforce or reaffirm. And so there are ways that they've actually had this technology in other areas, but in L&D it's been slow to come in. But I think what really dragged it forward,
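The call-center coaching loop Matt describes could be sketched roughly like this. It's a toy illustration, not any vendor's actual system; the cue words, weights, and threshold are all invented for the example, and a real product would use a trained sentiment model rather than a keyword list:

```python
import re

# Toy sketch of a call-center "AI coach": scan an utterance for
# escalation cues and, past a threshold, nudge the agent to de-escalate.
# The cue words, weights, and threshold are invented for illustration.
ESCALATION_CUES = {"ridiculous": 2, "unacceptable": 3, "cancel": 2,
                   "manager": 3, "angry": 2}

def escalation_score(utterance):
    """Crude sentiment proxy: sum the weights of escalation cue words."""
    words = set(re.findall(r"[a-z']+", utterance.lower()))
    return sum(w for cue, w in ESCALATION_CUES.items() if cue in words)

def coach_agent(utterance, threshold=4):
    """Return a coaching nudge for the agent, or None if the call seems calm."""
    if escalation_score(utterance) >= threshold:
        return "De-escalate: acknowledge the frustration, then reaffirm next steps."
    return None

print(coach_agent("This is ridiculous, I want to speak to a manager"))
```

The design point is the same one Matt makes: the coach reacts in the moment, sending the agent a note mid-call rather than a report afterward.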
which was the part that was just mind-blowing, is that they took a very large language model. OpenAI took this LLM, and what they did is they put a chat interface on the front of it.
This was the first time I didn't need to know Python or an API to be able to go out and do this. I mean, IBM's Watson has been doing a lot of this B2B work, and you had these coders and engineers on the back end to be able to use it.
This was the first time the average human actually had it at their fingertips to go in, through a chat interface, and have a conversation with a generative AI bot. And what was fascinating is, if you look back,
it took Netflix years, three-ish years, to get to a million users. ChatGPT? It took five days. Yeah. And that's what's mind-blowing,
is, all of a sudden you put that on the front of it, and now you're putting the power of large data in the hands of the average individual. Now,
that's what's fascinating: as much data and as much information as it's bringing forward, we started to see, we're starting to learn, what the limitations of it are,
what its strengths are, how it works. There have been several prior launches which struggled. Microsoft Tay, for example, was an interesting case where they tried to create a chatbot that emulated, like, a 20-something to connect with people, and it went horribly wrong in a couple of days.
Was that just like the reinvigoration of Clippy, or what was that like? It was the ability to kind of create, like, a social presence, to connect in a chat style with the general population.
But given the human population on average, of course, what did some people do? They tried to get it to say bad things, and it didn't take long for that to happen. And that was several years back.
And we've had some notable experiences like these that didn't turn out the way we wanted. You know, was AI really ready, or were humans really ready for AI?
Because, you know, that's the thing: because it's created by us, by humans, we often can bring into it, or work into it, some of our worst behaviors, unfortunately, biases in the data sets,
things like that. But you bring it forward, and all of a sudden you had a large population exposed to what its potential could be. So now we've got that interface between the two.
So now we start to think about what the art of the possible is. How could I apply this to create a better learner experience? You know, start to get a personalized learning bot for me, for myself,
not for an organization, on an organization's behalf, provided to me because I work there. But I actually start to have the opportunity to have my own personalized bot. I can actually ask it questions, it can gather information,
it can bring it back to me. So there's a lot of that, I think, that goes around. But I would challenge that: I think we are still in what I consider the heavy learning phase around this.
I don't know if you saw the notification the other day about the backlash on Zoom. Did you see that, by chance? You mean that they're pulling people back in and they're doing the two days a week in office?
No, no, this is their new data privacy thing, where they basically changed their terms of service to say, hey, we can use any of your conversations to learn. Yeah, but do you know why they changed it?
I don't know. Because they had been doing it. From my understanding, the backlash is that they've been using it to train the AI. The question was,
have they been listening in on all our Zoom calls to actually train their AI bot? So this is an example of that. When you look at that, what makes this interesting is those large language models, a lot of this deep learning.
We consider it to be a really black box. We don't know what's in it. We don't know how it thinks. We have an input and an output, but it's this big black box in the middle of it. And so the question is really,
just because it says it's X doesn't mean it's really X. And I think that's part of it: we see the potential of what it can do, but also in the context of privacy and security and all that kind of information.
I mean, what AI really does well is crunch large amounts of data quickly. We used to have the concept of security through obscurity: the idea that they could never find me if they wanted to; even if it was out there, they couldn't find it. That changes the game with what we're looking at with large data sets. So I think we're learning a lot about what it can do, but also its strengths, its limitations. How do we think about the social impacts? You just go out there and look: even among faculty in higher ed, there are two camps saying, do we embrace it or do we shun it?
And there's a very strong base on both sides. It's not like an 80/20 split; it's a really strong base on both sides. Some faculty have said, I encourage the students to use it, and I encourage them to use it intelligently. It does push me to be different as an instructor, to think about how I ask questions, so that they use the tool
but have to go deeper. The tool covers some of the obvious questions, but I can push them to go deeper because they can use it. And others are saying, yeah, I just don't want them to use it at all in what we do, so how do we capture and check for things like that?
So it's an exciting time, a realm of possibilities, at that point between saying, can we, and should we? And that's the confluence of the two questions. And so, when you're working with your team,
how do you get into that Innovation Kitchen and say, you know, I'm really interested to think about, or learn more about, the process of going through that deep learning space? And then something's going to come out at the end of it, or multiple things will come out as you begin to learn. Like, at what point do you put an alpha out there, for either a client or an internal process or something, to start testing it? And then at what stage do you move to a level of confidence, really, like, hey,
this particular thing we've either created or recommend works really well? Can you give us some insight into that?
Yeah, I think one of the things that I really did with the team, because we don't know where the possibility is,
so as we're playing with it, one of the first things that we did was create guidelines for the responsible use of AI for ourselves. So this is: if I don't know exactly what the answer is, how do we help ourselves make ethical decisions as we're going,
bringing that data in? How do we make sure that we're using it well, intelligently, et cetera? Because we may be using it for ourselves, on behalf of our clients, but we're going to be learning.
So the first rule is: go out and play with it. But as we're playing with it, what do we learn about it? We test it, we try it. Most of my team are actually using OpenAI's
ChatGPT, and they're playing with it. They're experimenting with it. But we're doing it in a way in which we're not uploading client data, we're not uploading our sensitive information; we're looking at how it's actually making decisions.
So, you know, first things that we're doing as a primary is like privacy and security. That's an important component. So this gets back to that learning curve. I liken it back to the day, now this'll date me,
but in the advent of email, when "reply all" became a really big thing, we realized that a very simple action could have a massive impact. And that's one of those things.
One of the challenges that we have is that people don't understand that when you upload data, there can be impacts around that. And our prior mindset is like, oh, I loaded something up,
I just need to go ask them to pull it down. I know everybody says that it's always on the internet, but we still think that we can go get content and pull it down, or ask somebody not to post it. The problem is that when you're training a large language model,
once you put information into it, you can't extract it. It would be like I've made you this wonderful, great smoothie. It's got all of these things: strawberry,
mango, pineapple, and avocado. And you're like, I don't like avocado. Well, I can't go back into the smoothie and take the avocado out. The only way to fix this is to throw out the smoothie and make you a new one.
That's really the way we think about the implications of privacy and security and how we work through those. So these guidelines are really important to help people understand not only the power, but the responsibility that you have and how hard it is to undo certain things that are out there.
So, for example, if my personally identifiable information gets uploaded into it, it's really hard to pull it out. Now, will there be a bot down the road that helps us do that?
Maybe. But right now, as far as I know, it's really hard to go back to that black box and prove that it's actually been pulled out. So: being very thoughtful about the information that we share with the brain,
the LLM, and being careful with the information that we get back from it, making sure that we can use it. So we talk about ensuring safety and well-being, so that if we build an application,
let's say, for example, a lot of places people are looking at as coaching bots. So imagine I'm doing some work on a day and the bot pops up and it says, hey, you have a meeting with your team today. You said you wanted to talk about personal presence and managing a meeting.
We're going to watch the call and we're going to provide feedback to you. So it monitors it, it listens to it, does some sentiment analysis, and it fires back some feedback. You said a lot of ums and ahs.
You didn't really wait for people's comments, et cetera. But the question is, how is that feedback data? Is it just for me or is it for my manager? Is it for somebody else who's going to make a decision to hire me,
to fire me, to promote me, to compensate me differently? And the question is, when we're building an app like that, we have to be able to understand and say, hey, this is for the benefit of this and not for that.
Because, you know, how confident is the algorithm? How good is its assessment strategy, really? Is it really legit? Does it work the same for somebody in Japan as it would for somebody in Brazil? Does culture impact those evaluations?
When we think about safety and well-being, when we build the applications that leverage these things, are we really working for the benefit of the humans, promoting better connection,
more vibrant discussions? That's the end point. We're not trying to use AI to separate, to minimize the role of the humans, things like that. Diversity, equity, inclusion: you want to make sure that you're bringing those things to mind as you're doing it.
Are you inclusive, not only in the design of your data sets, but also in how you're evaluating the outputs of those? Being transparent. When we use these tools,
how we use these tools, don't hide the fact that we're using them. Be transparent. Start saying, hey, I used this for inspiration and generation, but what you see before you is 100% my work. Or it says,
hey, I was a little more robust in pulling some data together and some predictive analytics so it had a bigger role to play. And I think the last one, which is really most important for us is really saying that as we go through,
are we actually narrowing the divide between the haves and the have-nots? So will access to AI be more available to everybody at the end point,
or will this just give more powerful tools, and the power of data, to a top layer versus a bottom layer that really needs them? So in the long term, we want to build things that promote fair and equitable access as well.
So as we experiment, as we play, these things are kind of running in the back of our minds as we learn about them. So we're answering the question of, can we? These help us say,
should we, or how should we think about that? You have to have both sides of that conversation. And, you know, one of the things I do applaud is when we've seen different examples of organizations that ultimately say no to some of those situations,
right? It's like, hey, we could do that, but, you know, ethically, we won't, or we can't, or whatnot. And so that's, I'm very interested to see how that plays out over time.
I was going to give the cake example, as you said, you know, like making the cake. As soon as you put that egg and flour and stuff together, you can't unbake the cake. And that's just the way it goes;
you have to throw the cake out. Talk to me a little bit about... I think, especially in the L&D space and the higher ed space, we focus so much on how AI can, or how it will, change the classroom moment,
right? Creating the lesson plan, designing the course, student feedback, student inputs, you know, original content, et cetera. So, one of the things I found very interesting in our AI and learning summit over here
is the conversation that takes a step outside of the classroom and talks about how L&D and education are changing in terms of the infrastructure around learning,
the learner access. You just emphasized, you know, a personalized learning bot or something like that. Like, tell me how the universe around learning is changing in your mind. And,
you know, I'm going to be sneaky: I know in our pre-talk you had talked about kind of Web 3.0 stuff, and I want to go that direction as well. Well, yeah. So I think, really, acknowledging that we now have more access at our fingertips to information,
I think it starts to change not only how I can get information or insights, or even have it generate things to a certain point, but it also pushes us, I think, as learners, to actually move more into the curiosity,
the critical thinking skills that we need to have. So it changes things a little bit. These should be the bedrock of a lot of higher ed courses: curiosity, critical thinking,
problem solving. These should always be parts of the conversations; they're the types of things we apply this for, solve with, and bring forward. But I think what it does is it pulls away from some of the things like,
go research, go find, pull this together. And I think what I mean is that at our fingertips, we will now have access to more information. But the question is, is the information that I'm getting,
is it really good? So I think not only will it be able to go out and create a range of types of information, but is it what I need for the specific type of learning task?
On the work side, we often talk about five moments of learning need, and each one of those moments has a different kind of way in which I'll serve the information up for it.
So the first time I'm learning something, I don't have a lot of the context. I need more context, more scaffolding. I need a little more of the exposure with it. When I'm going to learn a little bit more about something,
I can attach it to an existing scaffolding. Then there's the moment of where I'm trying to apply it to do something with it. Then there's a moment for when something goes wrong or something changes. And if you think about each of those five moments,
you need a different type of learning conversation around that. As I mentioned, the first moment needs a lot more context. And if I'm trying to solve a problem, when something went wrong or something changed,
I really need an answer around that thing specifically. And what's nice about this is that AI allows me, as a designer and developer, to actually create journeys and experiences that serve all five moments of need.
Historically, capacity-wise, we've only been able to drive on one type, one moment or two moments of learning need. Now we can actually start to extend the experience, get more structured in what we're doing, to meet a range of those needs.
So the beautiful part is it's not that you're diminishing the role of the actual designer, you're actually increasing my capacity to be able to create multiple pathways for learners to be able to do it.
And so on the designer side, another area would be: if I have more time to be able to create different pathways, what if I now started to adjust my curricula so that I could actually meet and support folks that are neurodivergent?
So instead of one curricular learning path, are there ways that I can meet their needs? And now, on the flip side, on the learner side, I can start to get things served to me in a way that aligns better with how
I may learn. If I am neurodivergent and I need certain types of things, I can start to get that from the classroom. So you actually have two sides to this.
The designer, the facilitator, the instructor, all of that has more capacity to create a richer offering and the learners have an opportunity to take advantage of that. That's where AI helps on both sides of those conversations.
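The five moments of learning need Matt walks through (learning something for the first time, learning more, applying it, solving when something goes wrong, and adapting when something changes, a framework usually attributed to Gottfredson and Mosher) can be pictured as a simple routing table. This is a toy sketch; the support-style wording is invented for illustration, not an official taxonomy:

```python
# Toy routing table for the "five moments of need" framework
# (Gottfredson & Mosher). The support-style descriptions are
# illustrative paraphrases, not an official taxonomy.
MOMENTS = {
    "new":    "full context and scaffolding: a structured course or walkthrough",
    "more":   "extend existing scaffolding: deeper modules and examples",
    "apply":  "task support: checklists, job aids, step-by-step guidance",
    "solve":  "a direct, specific answer to the thing that went wrong",
    "change": "delta-focused guidance: what is different now, and why",
}

def support_for(moment):
    """Pick the support style matching a learner's current moment of need."""
    try:
        return MOMENTS[moment.lower()]
    except KeyError:
        raise ValueError(f"unknown moment of need: {moment!r}") from None

print(support_for("solve"))
```

The point of the table is the one Matt makes: each moment calls for a different kind of learning conversation, and a designer with more capacity can build a pathway for every row rather than just one or two.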
Absolutely. And if you don't mind me taking it one step further, I love thinking about those opportunities on the administrative side of, let's say, higher ed, or of an L&D department in a large corporation. To be able to save time booking my vacation days, or making an insurance claim, those kinds of things. Once again, if I were able to just open up my phone or my laptop and say, hey, I need to take October 15 and 16 off, can you please book PTO days, et cetera, and that gets done. I know we already have examples of that out there, but I think we can go even one layer deeper into how the infrastructure around us is going to change to support more personalized, more robust, more immediate learning opportunities. Would you agree?
Absolutely. I like how you pulled that together. When we think about AI, I know ChatGPT has a lot of the buzz around that generative component, but if you think about it, there are roughly three categories. This is highly reductive, but the first is what I would consider automation: smart automation, intelligent automation. The second is the generative side, which is the interaction with it. I can ask a folksonomical question, ask something in my own terms, and get it back in a way that's more conversational, not as 53 discrete links that I've got to stitch together. The third is predictive: looking for patterns and insights in the data.
Let me give you one example, a personal one for me. Over the course of my career, I have written time management courses. I have facilitated them, I have enabled them, all of that. So I am probably the example of the person who has helped write, create, and support these, and yet my own time management is something I've struggled with from time to time. In our digital world, calendar control is always an important component, and I was struggling as I moved into this role: a lot more meetings, serving a much broader audience. What I was finding is that I was struggling to maintain my calendar. So I was putting holds in my calendar, soft spots, so that when unpredictable things popped up during the week, I could handle them. The problem is, I'm a week out, even a week and a half out, trying to manage my calendar, and I'm not really getting better. I'm doing the practices, the behaviors, creating dedicated time and space, and I'm not making any traction. Well, Microsoft has this Dynamics back end that looks at productivity and the things you do. One day it popped up and said, hey, we're noticing that your calendar fills up two and a half weeks in advance; to get better control of your calendar, we recommend you schedule out beyond that. Through a light analysis of my behaviors, my calendar, my environment, it gave me a data point, and now, by scheduling two and a half weeks out, I'm actually able to block my calendar. It's an obvious thing, obvious once you step back, but I couldn't see it.
Sure, yeah, it sounds totally obvious, but that one realization, right? It was like, look, I have to think three weeks out. Full stop: I plan three weeks out.
You walk into each day thinking about that. Here's my one question for you: did you find, over time, that those two weeks that were already structured stayed fairly concrete? Or do you find there's fluidity? One of the things I always find with time management is, hey, I plan, and it's kind of like budgeting, right? But then somebody bails, or somebody gets sick, or this meeting didn't have the outcome we wanted and we have to plan another one. So how much would you say...
Well, the beauty of this is that I started to notice natural rhythms, because I was starting to connect with the defined things. I found that there were certain times when I wasn't even getting disruptive requests. The best times for me were, and it does fluctuate seasonally, but it tells me: this block of time will stay held for you, while these others, even though you have them held, will often get filled up. So now, instead of trying to start something and not being able to finish because it's disrupted, I know which time I can bank on. If I can get something in early, great, I'm able to use the time; but when I have this protected time, I want to make sure I'm really utilizing it. It was able to look at the ebbs and the flows, at large amounts of data over time. When did this pop up? Over the pandemic, when 95 percent of my activity was on my screen. All of that data is there, and they're in the back end looking at the productivity numbers and surfacing this stuff. But let me take it one step further, because I think this is where we're going to be. I struggle with this: I want to learn more about some things, I want to do this, but I have these holes in my calendar, and I don't have the time to take off and do, you know, a semester-long XYZ curriculum.
So, one word: sabbatical, bro. Sabbatical.
Exactly, exactly. And I know that's a word I don't get to use as much, but I have plenty of friends who take their sabbaticals and send me beautiful pictures of their writing and relaxation, wherever they're doing it, in nice exotic places. But imagine a predictive learning bot that goes out and says, you know what, I've got openings of five minutes, 15 minutes, 30 minutes; I think I can squeeze in a deep dive. You've given me these topics, and it starts to feed things to you, finding scrap time in your calendar and putting things into it. It says, hey, you've got five minutes, here's an article on XYZ.
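At its simplest, the bot Matt describes is a matching problem: free calendar slots on one side, a library of learning resources with known durations and topics on the other. Here is a minimal sketch under stated assumptions; the `Slot` and `Resource` types and the matching rule (pick the longest item on a topic of interest that fits the gap) are hypothetical stand-ins, where a real system would draw on calendar APIs and much richer recommendation signals:

```python
from dataclasses import dataclass

@dataclass
class Slot:
    start_min: int    # minutes from the start of the day
    length_min: int   # length of the free gap

@dataclass
class Resource:
    title: str
    minutes: int      # estimated time to consume
    topic: str

def recommend(slots, library, interests):
    """For each free slot, pick the longest resource on a topic of
    interest that still fits inside the gap."""
    picks = []
    for slot in slots:
        candidates = [r for r in library
                      if r.topic in interests and r.minutes <= slot.length_min]
        if candidates:
            best = max(candidates, key=lambda r: r.minutes)
            picks.append((slot, best))
    return picks
```

So a five-minute gap gets a short article and a half-hour gap gets the deep dive, which is exactly the "scrap time" behavior described above.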
I recommend you do this. So now all of a sudden it's looking at my life, my work-life synergy. "Balance" is a ridiculous statement, but synergy is what I strive for. Now, having a predictive learning bot that helps me do that, flip it to the classroom. What if you were able to manage the ebb and flow of your learning patterns beyond just a traditional syllabus and book? What if you could get into a more fluid, organic way of consuming learning, to prepare and say, how do I need to prep for XYZ? What's my pace, my learning style? What's the rest of the mix of coursework I have? So now you have a bot able to take large amounts of data about you and your patterns of behavior, and start to give you intelligent insights about where you can fit in certain types of learning, certain activities, certain things. Look, I think it's exciting where this is all going, because it's going to start to enable me to
be more productive, to do the learning that I want to do, to find the time in my schedule to do it. Because let's be honest, I have the will and I'd love to, but I've really not been able to execute.
You and, I'm pretty sure, 99 percent of other information workers especially, right? We just find that our days get full, and suddenly you want to take that deep-breath moment. And even though there's a lot of great psychology and guidance out there that says, look, you need to break those pieces out, where you're always sharpening the saw and growing and whatnot, we all rarely take that opportunity. I want to take this moment to pivot, though, because I want to take us to that Web 3.0 conversation. Let's assume that future is real, where, hey, I've got half a day to do that certification course, or I'm going to take that long weekend to learn that new piece of tech or soft skill, whatever it is. Talk to me about your vision of how this is going to impact what we normally think of as the arc of adult continuous learning, or how this might evolve what we look at in higher ed, et cetera. Because we need to be able to take those moments, carry them with us, and prove that they're true.
Yeah, I think one of the big things in the future will be the continued expansion of the creatorverse.
And what I mean by this is that who I get my content from will start to change, and then, ultimately, how I get accreditation. So I think we'll start to see boundary challenges to traditional institutions: I can pull in a core element from an institution, but also bring in additional valid sources of information, instruction, and application, all going toward a more organic certification or credentialing.
We talked a little bit about the example of an accountant. The basics of accounting, great, I may get those there. But if I want to go into this field or that direction, how do I start to bring in those very specific applications, so it's now in the context of accounting in the pharmaceutical industry, tied to a product launch or something like that? Thinking about the dynamics within that space, all of that could come together. But I may not find all of that information, that full experience, in one institution. So I get a core here and bring in a couple of other pieces. And through that connective tissue, I can now say who's providing it to me, and who's providing the credentialing. I get more of that transferable credentialing that comes from these sources. These are the outcomes and achievements I had, and they go into an overarching representation of what I've accomplished: what knowledge I have, and where it came from. So that boundary is going to start to be challenged, because I think you can have a whole validated range of sources of information that extend that higher ed degree and make it much more applicable when you come out. You're narrowing the gap from the day I graduate to the day I'm actually productive in the work environment. That window, that gap: can we shorten it? I think that's how we start to do it.
Now, as a learner, I'm able to find those things. How do I do it? I can go and search, or I can have the bot come in and say, hey, this is where your lifelong learning has been going, and these are the areas you're looking to grow in. It can go out and start to find and assemble: here's some traction, here are some people to connect with, a social, collaborative community of practitioners in this space, those doing research. It can start to build and find and pull those things together. We're going to take advantage of more and more data out there; the question is how we make sense of it, and that's where AI comes in to really help us. One thing that's clear is that AI is really good at crunching huge amounts of data, which humans are not able to do. It can help make sense of a lot of it. We are better at curiosity, creativity, and critical problem-solving: using these tools to think and solve, and to figure out how I'm going to apply this and bring it together. So using those tools together becomes part of it. But I think it narrows the gap between that higher ed end date, graduating, and going into the world. We're really looking to manage that gray space in between. They call it "K through gray," all the way into the work sector. The idea is to narrow that fall-off, that gap between "hey, great, they taught you accounting" and "we'll reteach it to you when you come inside." Can we get people productive earlier and sooner? Where are you able to do it? How are you able to share it?
Blockchain offers what I think is one point of truth on the back end: one record instead of 8,000 records out there. You have a single point of record that's tied to me, so I have an identity, and my learning credentials and my learning pathway are attached to it. It validates and verifies who I am, so it's a trusted identity, but it's for me, not for the organization, not necessarily for the institution; it's about me. A lot of the Web 3.0 point is actually taking the power of big data and bending it to the will of the individual.
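The "one point of record" idea can be illustrated with a hash chain, the core trick behind blockchain-style tamper evidence: each credential entry carries the hash of the previous one, so altering any past entry breaks verification from that point forward. This is a toy sketch, not a distributed ledger; the `LearnerRecord` class, its field names, and the issuers are hypothetical, and a real system would add digital signatures, consensus, and decentralized identity:

```python
import hashlib
import json

def _hash(payload: dict) -> str:
    # Canonical JSON (sorted keys) so the same payload always hashes identically.
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

class LearnerRecord:
    """Append-only chain of credentials tied to one learner identity."""

    def __init__(self, learner_id: str):
        self.learner_id = learner_id
        self.entries = []

    def add_credential(self, issuer: str, credential: str):
        # Link each new entry to the hash of the previous one.
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        payload = {"learner": self.learner_id, "issuer": issuer,
                   "credential": credential, "prev": prev}
        self.entries.append({**payload, "hash": _hash(payload)})

    def verify(self) -> bool:
        # Recompute every hash; any tampered entry breaks the chain.
        prev = "0" * 64
        for e in self.entries:
            payload = {k: e[k] for k in ("learner", "issuer", "credential", "prev")}
            if e["prev"] != prev or e["hash"] != _hash(payload):
                return False
            prev = e["hash"]
        return True
```

The point of the structure is exactly what Matt describes: credentials from many issuers accumulate in one record owned by the learner, and anyone can check its integrity without trusting each institution's separate database.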
So I think that's where it gets really exciting. There will be a lot of pressure, but these are exciting times to see how the university system responds.
I was just going to say, that was my follow-up question: this strikes terror, I've got to believe, into the heart of the established higher education system, at least in the United States and Europe. Have you had, or been a part of, those conversations? I'm thinking of other guests we've had on here. One was Kate Givaccini, who runs the Trusted Learner Network at ASU, and she's starting to have all of these conversations about how do I take my certification from here and my class from there, and there's a huge number of stakeholders involved. Have you been able to be a part of any of those?
Well, yeah, I have, primarily in my domain of instructional systems technology. I've had some conversations asking, how do we make the curricula of those master's degree programs more helpful, more insightful? How do you incorporate some elements of digital? So I've had some conversations about it. I think what's important to understand is that, in the essence of it, coming together and conversing, having robust dialogue and reflection with others, is absolutely critical. I am not saying we abandon that and you go down just a solo path. That's not what I'm saying. I think the institution itself needs to think about how to capture the essence of its value without the firm boundaries. So the question is, how do you start to think about that differently? How do you enable the community? Instead of being in control, you start to think about how to open up the ecosystem, to be part of a more boundaryless environment, and still continue to deliver value.
Look, they survived the oncoming of the online class. A lot of people thought bricks-and-mortar institutions might collapse with everybody going online, and there have been some challenges. Institutions like SNHU, for example, offer a lot, and University of Phoenix has historically been one, and they've had some success. But the question is, what if you were able to really open up and provide access to all the knowledge we have in this space to help us move forward, while still putting a boundary around it that enables and supports it? That, I think, is the big question. I don't think there's an easy answer, but I do worry that at some point, if they don't start thinking about that, it'll be a breaking point in the system. There was always resistance to going online. And then they did, and then everybody said, if I don't have it, where am I going to be? Because we were once geographically bound; we typically took students from this state or these areas. Now you're reaching way across borders to pull in students who don't have to move or come to campus. I think it's only going to be more of that. So the question is, how do you keep your stickiness, the attractiveness of your brand, to bring in all the thought leaders in your space?
So yes, I think there's a good amount of fear and concern, but my worry is that holding onto the past will be a detriment in the long term.
On that note, I'm going to ask you my final question, the one I ask everybody here on the podcast, but I'm going to couch it a little differently. Usually I just ask, what do you foresee in the next 12 to 18 months? But the question I'm really interested in, based on our conversation, is: how do you see us evolving, and when will we get to that Jarvis moment, where I can literally be like, hey, book me an Uber, and then do the meeting, and make sure that lunch shows up at 3:15? And I put it down and just walk out the door and it all happens.
First of all, do you foresee that happening, and how far away are we from it?
Oh, wow, how far away are we? Boy, that's a great question, because it's probably sooner than I would normally guess, since I've been wrong before on how long some of the advancements we have had would take. But I do still believe we're a ways off. If you're using the Tony Stark Jarvis scale, I'd say we're more at the lab robot he yells at for being an idiot than at the actual full Jarvis. On the continuum between the two, I think we're a little more here than there on that fully sentient end. Here's the thing: in the end, AI for the most part requires defined things. To be able to think about something, to prepare it, to generate and bring it back, it has to be fed, and we have to have a very rich way of bringing stuff into it. A lot of the data we're going to be putting into these systems will still need to be cleaned and supported to actually get good,
consistent, accurate outputs. Take translation: there have been some great evolutions in machine learning translation, and yet it still struggles with heavily technical and jargon-based translations, which still require humans. So I think we're still a way off, but I'm telling you, we're getting closer all the time. I like to think back to Star Trek. Despite all the technology they had, they still put humans into space to explore. They used the machines to help them: to do space travel, to run scenario modeling, to gather more data and input. But in the end, the humans were deeply integrated in all of it. They even got rid of the bartenders and then brought them back in later episodes, because they knew that, in the end, they needed them. Yeah, I can go to a replicator to get a drink, but I want the conversation, I want the connection. As humans, when AI helps us connect, that's the power behind it. When it divides, it's not what we're looking for.
Matt Donovan, you are the Chief Learning and Innovation Officer for GP Strategies. I want you on my away team, 100 percent. Thank you so much for taking the time out of your day to speak with us today.
I really appreciate it.
I enjoyed being here. Thank you.
Thank you again for listening to the eLearn Podcast, here from Open LMS. I just wanted to ask one more time: if you enjoyed this show, if you learned something, if you were inspired, if you were challenged, if you feel like this is something you can take into your practice, please do me a favor and hit subscribe right now on your podcast player. That way you'll never miss a future episode. Also come over to eLearnMagazine.com and subscribe there as well, because we have tons of great information about how to create killer online learning outcomes. Thanks!