AI is such a ubiquitous topic of conversation - not just in the workplace, but in the wider world - that it almost needs no introduction.
At Thrive we've recently launched our suite of innovative AI features, and we’re ever cognisant of the common questions, concerns and challenges associated with the curious world of Artificial Intelligence.
During our recent LinkedIn webinar Putting the ‘I’ back in ‘AI’ with Helen Marshall and Mark Ward, Helen conducted a poll that found 55% of respondents were interested in the ways in which AI can help with workplace learning, but were wary about using it on a practical level.
Meanwhile, the World Economic Forum has recently published a study exploring how traditional models of education are becoming outdated thanks to AI - and how those facilitating the learning will need to catch up if they want to make an impact.
While this study was specifically about academic education, the same can be said for workplace learning and development; our entire learning platform is built on the knowledge that traditional, classroom-based training rarely works.
There’s a disconnect here - traditional methods of learning are fast becoming irrelevant, but even as AI becomes the new normal, many professionals are still understandably hesitant to apply it to workplace learning.
On our travels through the AI conversation, we’ve identified three common concerns regarding AI in the workplace. In this blog, we’ll highlight these challenges and explore whether or not they’re justified - and more importantly, how we can help to alleviate them.
It feels like for as long as we’ve been discussing AI, we’ve been discussing its impact on human job security.
It’s a completely fair and valid question: With AI becoming ever more prevalent, ever more skilled, ever more sophisticated, how can real human beings - with their capacity to make mistakes, their need for sleep, and their ability to take time off - ever measure up? How at risk are our jobs in a world where AI can seemingly do everything, from telling you how to bake a cake to administering digital therapy?
As part of our exclusive annual customer conference, Thrive Live 2023, we enjoyed an inspiring talk from bestselling author and speaker Henry Coutinho-Mason. During his time on stage, Henry shared some of his predictions for the future of AI. On the subject of job security, he pointed out a much more common and less scandalous reality: AI is far more likely to automate tasks within the jobs people already have than to eradicate those jobs altogether.
He cited Microsoft Copilot as an example, which acts as a virtual assistant so that busy professionals can get back to doing the most important parts of their jobs. The ‘grunt work’ gets outsourced, and everybody’s happy. (Except the robots, but that’s mainly because they lack consciousness and subjective experience.)
Henry showcased another example where AI has actually helped to create more jobs for real human beings - yes, those un-automated, mistake-making, sleep-needing creatures who are scared of becoming obsolete in the workplace.
In this example, IKEA implemented an automated AI customer service chatbot aptly named Billie, after their famous Billy bookcases.
While this could have been the start of a sad narrative ending in layoffs, anxiety and job insecurity, it’s actually the first chapter of a much happier story. IKEA chose to upskill the human customer service operatives whose jobs were being automated by Billie, creating the opportunity for them to retrain as interior design consultants instead. This is a perfect example of AI creating jobs instead of destroying them.
Another incredible guest speaker at Thrive Live was technology entrepreneur and economist Dr Pippa Malmgren, who emphasised the importance of love and creativity in the workplace during her talk. She explained that while AI admittedly has the market cornered on automation and analytical skills, it lacks something that humans have in abundance: creativity. In the words of Dr Malmgren: “Creativity is not to be underestimated… [We have to] appreciate that what we bring to the table is as important, or more so.”
Another important concern: the potential for bias and profiling within AI models, whose knowledge is formed entirely from the existing data they have access to.
Social media users will know that algorithms have become an inextricable part of our digital lives. The content you are fed on your social media feeds is decided by analysing the content you have already consumed. (Yes, that does sound a bit dystopian - but don't throw your computer out the window just yet. Finish reading this blog post first, at least.)
In much the same way, AI models can only produce results based on the data they have already seen. Take this example from Amazon: the trillion-dollar company wanted to automate their recruitment process, so they built an AI-powered hiring tool to analyse incoming resumes and identify the best candidates for each role.
The problem with this seemingly innovative and time-saving digital tool was that over time, it started to demonstrate a bias against women. Why? It was trained to vet applicants based on the company's previous 10 years of (male-dominated) resumes. Again, it could only produce results based on data it already had - which showed a clear preference towards men in the tech industry.
Over time, the tool taught itself to penalise women for the very fact of their womanhood. Eventually, it was explicitly downgrading resumes that featured the word 'women's' and penalising graduates of all-women's colleges. Humans had to step in and edit the AI tool to end its bias against female candidates.
This is another example where human intervention becomes invaluable. If human beings are reviewing the results produced by AI, it’s much less likely that bias will creep into the workplace.
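To make that human check concrete, here’s a minimal, purely illustrative sketch of the kind of ‘counterfactual’ spot check a reviewer could run: score the same resume twice, swapping a single gendered term, and flag any gap. The score_resume and counterfactual_gap functions below are hypothetical stand-ins we’ve made up for this post - not Amazon’s tool, and not any real model.

```python
import re

# Hypothetical stand-in for a trained screening model - NOT any real system.
# It simply illustrates a score that (wrongly) reacts to a gendered word,
# the way a model trained on skewed historical data might.
def score_resume(text: str) -> float:
    score = 0.0
    if "python" in text.lower():
        score += 1.0
    if "women" in text.lower():  # learned bias of the kind described above
        score -= 0.5
    return score

def counterfactual_gap(resume: str, term_a: str, term_b: str) -> float:
    """Score the same resume twice, swapping one term for another.
    A non-zero gap means the model is reacting to the term itself."""
    swapped = re.sub(term_a, term_b, resume, flags=re.IGNORECASE)
    return score_resume(resume) - score_resume(swapped)

resume = "Captain of the women's chess club; five years of Python experience."
gap = counterfactual_gap(resume, "women's", "men's")
print(f"Score gap after swapping the term: {gap:+.2f}")  # anything but 0.00 is a red flag
```

In practice you’d run a check like this across many resumes and many swapped terms, but even a toy version makes the point: a simple, human-driven test can surface bias that the model will never flag by itself.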
And, in some cases, AI can actually be used to mitigate bias. To reference Henry Coutinho-Mason’s Thrive Live talk again, he pointed out a few interesting examples of AI being used to actively prevent discrimination across different industries. These included a bot that alerted the Financial Times to male-leaning bias in its reporting, and an automated moderator used in Call of Duty’s voice chat to curb abusive language.
A well-documented concern when it comes to AI is data privacy. You’re probably already aware that most websites collect data, and AI tools are no different.
Going back to our AI webinar, Helen addressed this very concern by advising us to “utilise cautious optimism.”
To summarise Helen's point, it seems that AI will have a mostly positive impact on the way we work and live, but we should always proceed with caution.
Whenever you use a piece of AI software, think very carefully about what information you’re feeding it. Anything you type into a public AI tool may be stored and used to train future versions of the model, so treat it as though it could one day resurface outside your organisation - and avoid sharing sensitive or proprietary data.
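As a practical illustration, here’s a minimal sketch of the kind of redaction step you could run before a prompt ever leaves your machine. The patterns and the redact helper are deliberately rough and entirely hypothetical - a habit to build on, not a substitute for a proper data-protection review.

```python
import re

# Deliberately rough patterns for the kinds of details you would not want to
# paste into a public AI tool. Real redaction needs far more care than this.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s()-]{7,}\d"),
}

def redact(text: str) -> str:
    """Swap sensitive-looking substrings for placeholders before the text
    ever leaves your machine."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} removed]", text)
    return text

prompt = "Summarise this: contact Jane on jane.doe@example.com or +44 7700 900123."
print(redact(prompt))
# -> Summarise this: contact Jane on [email removed] or [phone removed].
```

Strip what you can, and assume anything you can’t confidently strip probably shouldn’t go into the tool at all.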
We’ll end with another piece of wisdom from Dr Malmgren: “Will AI hurt us? Only if we let it.”
Thanks for reading. We hope our guide to the ethics of AI in the workplace has helped assuage any worries you may have had - and maybe even given you some inspiration. If you’re looking to streamline your learning with an innovative, all-in-one LMS that harnesses AI for good, why not book a demo with a member of our team today?
And, if you’re looking for further reading, check out some more of our handy resources on the topic of AI:
5 ways to use ChatGPT for workplace learning
5 AI tools for digital learning
Explore what impact Thrive could make for your team and your learners today.