Learning analytics feel as though they're at the heart of most L&D teams' aspirations, and businesses are looking for more concrete evidence that L&D deserves to be involved in the wider strategy.
I strongly believe it does, but it’s not always easy to connect the dots between the work L&D teams do and the action that happens elsewhere in the business.
This isn’t news, of course; it’s a problem that’s been around for ages, but it’s starting to feel increasingly solvable with the right tools and the quality of data now available. While I’m excited about what the future holds, I’m cautious about some of the offerings out there, and am strongly considering adding “data driven” to Buzzword Bingo to sit nicely alongside phrases like “low hanging fruit”.
I wanted to take this opportunity to share some of my thoughts on how to create a clear plan for evidencing L&D's effectiveness. I'll also lay out a few pointers to be mindful of if you find yourself investing in the latest and greatest learning platform to save the day.
Listen to your people. Now, I don’t just mean sending out a survey and inferring problems from the answers to questions you wrote yourself. I mean getting teams from around your organisation together to work through what’s important to them.
Provide a structured yet open space to discuss the problems they face every day, understand the things that affect them, and work out what their relationship is with the tools you use and the other areas of the business.
Now that you’ve identified the problems that really impact the individuals you want to help, there are a few things you need to think about.
Firstly, consider how the problems you’ve identified link to the wider goals of the organisation. Ultimately, whatever you want to do to have a positive impact on the issues identified is likely to need buy-in, not least from the people who sign the cheques!
What we want to get to is a structure that resembles a business case, so it’s important you make those connections at the very beginning.
Secondly, we need to understand what the root cause of the problem is. It can be really easy to get caught up in solving symptoms without actually getting to the heart of it, so make sure you consider all the possible factors that are linked to the problem and explore them individually.
You might find there are multiple contributing factors, some of which are likely to be out of your control. However, I see L&D as a bit of a research arm of the business, so highlighting these issues and working to support teams that can make a difference should be something you look to do. There’s lots of material around root cause analysis if you want more detail. It’s not something that I want to go overboard with in this blog!
Finally, can you prove that this is actually a problem?
We need to establish some baseline metrics to work from so that when we analyse the performance of whatever route we decide to take to solve it, we know whether the things we do make a difference or not. This is the most important piece of analysis in the whole process.
You need to be clear on what the world looks like right now if you are ever going to know if you’ve changed it.
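To make that concrete, here’s a minimal sketch of what capturing a baseline might look like, written in Python against an entirely hypothetical export of support-ticket data (the file and column names are placeholders, not a real system):

```python
import pandas as pd

# Hypothetical export describing the "world right now", captured before
# any learning intervention goes live. File and column names are illustrative.
tickets = pd.read_csv("tickets_before.csv", parse_dates=["opened", "closed"])

baseline = {
    # How long do issues take to resolve today, on average?
    "avg_resolution_hours": (
        (tickets["closed"] - tickets["opened"]).dt.total_seconds().mean() / 3600
    ),
    # How often does a "resolved" ticket come back? (reopened is 0/1)
    "reopen_rate": tickets["reopened"].mean(),
    # How much evidence is this based on?
    "sample_size": len(tickets),
}
print(baseline)
```

Whatever your metrics are, the point is the same: record them, along with when and how they were measured, before anything changes.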
Don’t just start collecting all the data you can and hope you find something at the end. You need to plan for your findings by creating proper tests and controls, to ensure that your evidence isn’t something that happened by chance, or the result of some environmental factor you hadn’t considered as part of your setup. A few things to think about:
The first isn’t unique to experiments like this, but it’s an important factor to consider: the common impulse to try and fix everything at once. You have to stop yourself from doing that. Not only is it going to be hard to implement, but the data you’re going to work with to try and prove the effectiveness of your actions (which is why we’re here, right?) has a good chance of being a total mess.
Think of these projects as an evolving journey: start small, understand what’s possible and expand from there. Keeping the rest of the organisation bought into these ideas and tests is key to their success, and overcomplicating your plans is a great way to kill that enthusiasm.
Also, be aware of the unknown. You might not get the results you expect from your initial experiments and it’s going to be much harder to make changes to a big complicated solution than something more compact and controlled. I’m a believer in incremental success over chancing it on the lottery, and I think this holds true for your learning analytics strategy as well.
Then there are roadblocks. You start off with a really exciting plan, and a few months into the implementation you hit one that totally kills the project. It’s gutting, and can ruin the motivation to continue down a particular path. I’ve found this to be particularly true with learning analytics. There’s often quite a lot of work up front, so it’s a real energy sapper when you get blindsided by something that simply halts any progress towards measurable results.
Now, I can’t list all the challenges you’ll come across, as many will be unique to your organisation, but there are a few common ones that crop up regularly...
The first might sound obvious, but I’ve seen lots of scenarios where an assumption is made about what’s possible to get from the data produced by content or the LMS/LXP. There are a lot of sad faces when people realise it’s either really difficult or simply impossible to get what they need.
It’s why I made the point earlier about proper planning: know what you want to test for, and figure out whether the data exists. If it doesn’t, finding a way to capture the necessary information might need to be your first step, before committing to a project that involves a lot of different people's time and effort.
Next, think about which systems are in play for your plan, and how they can deliver what you want. Is there a missing piece that stops you collecting the information you need, or does the tool simply not provide enough detail for you to be confident in your results?
It’s worth noting that I sit firmly in the xAPI camp here. Using a standard like xAPI, while not perfect by any stretch, removes restrictions on both the format of the content you deliver and the level of detail you can capture from the tools your learners use. There are other challenges associated with something like xAPI, so I’m not suggesting it’s definitely the solution for your particular problem, but you want to look for tools that make good-quality data available before you invest in something new.
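For a flavour of what that looks like in practice, here’s a minimal sketch of recording a course completion as an xAPI statement. The actor/verb/object structure and version header come from the xAPI specification; the LRS endpoint, credentials and activity details are placeholders:

```python
import requests

# A minimal xAPI statement: who (actor) did what (verb) to what (object).
statement = {
    "actor": {"mbox": "mailto:jo.bloggs@example.com", "name": "Jo Bloggs"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-GB": "completed"},
    },
    "object": {
        "id": "https://example.com/courses/handling-complaints",
        "definition": {"name": {"en-GB": "Handling customer complaints"}},
    },
}

response = requests.post(
    "https://your-lrs.example.com/xAPI/statements",  # placeholder LRS endpoint
    json=statement,
    headers={"X-Experience-API-Version": "1.0.3"},
    auth=("lrs_key", "lrs_secret"),  # placeholder credentials
)
response.raise_for_status()  # the LRS responds with the stored statement ID
```

Because the same structure works whether the object is an e-learning module, a coaching session or an action in another business tool, you’re not locked into one content format to get consistent data.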
In a lot of cases you’ll want to make the connection between your learning platform and the resulting action. This is one of the major issues with the LMS generally: the place people go to learn new information and the place where the resulting action happens are often quite disconnected.
A big factor that causes issues is that users often sign in in different ways, or use an email in one place and a different form of user ID in another. This makes for a time-consuming data-cleaning process that is hard to get around, and it's one of the first things I’d look to solve before committing to evidencing behavioural change or performance improvements.
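As a rough illustration of the problem, here’s a sketch of joining learning records to performance records when the two systems only share an email address. Everything here is hypothetical (file names, columns, systems); the normalisation step is the part that matters:

```python
import pandas as pd

# Hypothetical exports: the LMS keys people by email, the CRM by staff ID
# plus a free-text email field. Column names are illustrative.
lms = pd.read_csv("lms_completions.csv")   # email, course, completed_on
crm = pd.read_csv("sales_activity.csv")   # staff_id, email_address, deals_closed

# Normalise the one identifier the two systems share before joining.
lms["email"] = lms["email"].str.strip().str.lower()
crm["email"] = crm["email_address"].str.strip().str.lower()

joined = lms.merge(crm, on="email", how="inner")

# Anyone dropped by the join is someone you can't evidence anything about,
# so measure the size of that gap before the experiment starts.
print(f"matched {len(joined)} of {len(lms)} learning records")
```

If that match rate is low, fixing identity across systems is a better first project than any analysis built on top of it.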
The final point is really key: change in behaviour or culture is never going to happen overnight. You’ll see case studies everywhere that talk about amazing results and huge success, but rarely do they tell you how much data was collected and how long it took to get there.
Why? Because that is not the most attractive thing to put in your marketing content; it’s pretty daunting to think that some processes have taken months or even years to make real change. But that's the reality: making changes and proving they’ve happened will often take time, lots of reviews, and monitoring over long periods to be sure that what you’re seeing hasn’t happened simply by chance. However, the incremental changes that happen along the way should be a big motivator to keep going.
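To make “hasn’t happened simply by chance” concrete, here’s one simple check you might run on a before/after comparison. The numbers are entirely made up; the idea is that a formal test tells you whether a gap this size could plausibly appear in your data by luck alone:

```python
from scipy import stats

# Hypothetical error counts, four months either side of a rollout.
errors_before, total_before = 48, 400
errors_after, total_after = 29, 410

# Compare the two proportions via a chi-squared test on the 2x2 table.
table = [
    [errors_before, total_before - errors_before],
    [errors_after, total_after - errors_after],
]
chi2, p_value, dof, expected = stats.chi2_contingency(table)

print(f"before: {errors_before / total_before:.1%}, after: {errors_after / total_after:.1%}")
print(f"p-value: {p_value:.3f}")  # small value: unlikely to be chance alone
```

A check like this is no substitute for proper controls, but it’s a quick way to keep yourself honest when an early result looks exciting.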
The tools exist to prove that what you’re doing makes a difference and it’s up to you to take the opportunity to demonstrate why L&D deserves a seat at the table.
Ready to modernise your data strategy? Get a free personalised demo and explore how Thrive can revolutionise how you measure learning.
Explore what impact Thrive could make for your team and your learners today.