February 29, 2024
|
8 mins to read

Is AI a threat or an opportunity? How to shift your mindset

Is AI a threat to our humanity, or an opportunity to learn a new way of thinking? This edition of the Thrive Learning blog unpacks the risks and benefits.
Alex Mullen
Web Content Writer

The AI debate


Is AI a threat or an opportunity?

If you see it as a threat, can you successfully shift your mindset?

This is an ongoing debate. As you’d expect with any rapidly evolving technology, the conversation around it is evolving just as quickly.

Vacillating between fear and reverence, the discourse explores its useful applications; its contributions to a fairer future; its time- and money-saving abilities. Equally, it bemoans its potential to take away livelihoods; its inherent bias as a model trained by humans; its worrying implications for intellectual property and plagiarism. There are champions, naysayers, and people with questions.

Backlash to emerging and developing technology is obviously nothing new. As far back as there has been innovation, there have been downsides to that innovation - and movements of people willing to warn against these downsides.

Obviously, not all tech - or tech backlash - is created equal. In the case of AI, the concerns are a bit more multifaceted than automation alone. Last year saw the Writers Guild of America take a stand against the use of generative AI in screenwriting. The landmark strike, later joined by SAG-AFTRA, resulted in the successful inclusion of guardrails that prevent executives from replacing human writers with AI. We’ve also seen a wave of intellectual property cases in the media as artists take a stand against their work being used to train generative AI models.

What does all this have to do with L&D? A lot, as it turns out. Artificial intelligence in all its forms has been huge for our industry, constantly introducing new tools and tech that make L&D professionals’ lives easier, more straightforward, and even more fulfilling. But some are understandably hesitant.

Having recently unveiled our new suite of AI features, it’s important to us that we both remain at the forefront and stay informed about any potential risks. So with all this in mind, read on to find out how you can balance AI scepticism with AI optimism.


Is AI a threat to our humanity?


This question probably seems like a step on a well-trodden path at this point, but where there are still concerns, there are still reasons to address them. Is AI a threat? To our humanity; our job roles and responsibilities; our creativity? Let’s briefly address these valid concerns.


How will AI impact jobs?


Since the beginning, one of the biggest concerns about AI has been its impact on human job security.

AI can’t get sick, go on holiday, or quit. It can list every former president of the United States, write a Shakespearean sonnet, and produce an image of a giraffe snowboarding in the time it takes a human being to blink.

So, with all that in mind, shouldn’t we be worried about it taking over?

Well, not exactly. What’s far more likely than AI replacing us completely is AI being used to make our jobs easier. (More detail on that later.) In fact, while AI is a fantastic addition to the human experience, you could argue that AI is completely useless without the human experience to inform it.

Despite its speed and seemingly unending well of knowledge, AI needs human intervention to function effectively. It’s more than capable of making mistakes - or even having a full meltdown that causes it to spout gibberish.

While some people - or organisations - might choose to forgo humanity for the sake of automation, it’s our hope that most will come to see AI simply as a useful addition to their existing human workforce.

Otherwise, you risk - in the words of Bryan Cranston, speaking for SAG-AFTRA - “dehumanising the workforce.”


AI and data privacy: what are the risks?


Data privacy is another big concern. AI models rely on data to learn and improve - and in many cases, that includes the data their users feed them.

Considering how much data we feed AI on a regular basis, it’s natural to have reservations about what that means for our privacy. To protect yourself and your people, it’s essential that you read up on best practices and strategies for privacy, and arm yourself against potential issues.

You won't be shocked to hear this statement coming from a learning management system provider, but compliance is key. Conduct risk assessments, make sure you have a robust privacy policy, and confirm that every single person at every level of your company is compliant.
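To make that concrete, here’s a minimal, purely illustrative sketch in Python - not a Thrive feature, and nowhere near a complete safeguard - of one sensible habit: stripping obvious personal details, such as email addresses and phone numbers, out of text before it’s ever sent to an external AI service.

import re

# Illustrative patterns for two obvious kinds of personal data.
# Real-world redaction needs far more than a couple of regexes
# (names, addresses, IDs, context), so treat this as a starting point.
REDACTIONS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s()-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace likely personal data with placeholders before the text
    leaves your systems (for example, before calling an external AI API)."""
    for label, pattern in REDACTIONS.items():
        text = pattern.sub(f"[{label} removed]", text)
    return text

if __name__ == "__main__":
    prompt = "Summarise this: Jane (jane.doe@example.com, +44 7700 900123) asked about her appraisal."
    print(redact(prompt))
    # -> Summarise this: Jane ([email removed], [phone removed]) asked about her appraisal.

Even a simple gate like this makes it harder for personal data to slip into a third-party model - and it sits naturally alongside the risk assessments and privacy policies mentioned above.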


How can AI be biased?


Since its entire knowledge base comes from human beings - themselves rife with inherent bias - generative AI in particular is only as good (or as impartial) as the data it has access to. Because of that, it’s understandable that people are concerned about its propensity for prejudice, profiling and discrimination.

For an example of how this negatively impacts real-world outcomes, just look to Amazon’s now-defunct recruitment tool, which was scrapped after it learned to discriminate against female candidates.

Again, this sheds light on the importance of human intervention - and humans’ own ability to harness technology for good. Having said that, human intervention is also what recently got Google into trouble. In an effort to reduce bias, its adjustments saw the Gemini model generate images of racially diverse Nazi-era German soldiers, among other controversial errors - proving that the right balance of human and AI is essential.

To counterbalance both of these disasters, there are also plenty of examples of AI being used to mitigate intolerance. Take the bot used to alert the Financial Times to male-leaning bias within its publication, or the AI tool that moderates abusive language in Call of Duty voice chats.


AI and misinformation


It’s an established fact that generative AI can “hallucinate”, inventing facts, figures and even legal cases that do not exist. You’ve probably heard about the lawyer who found this out the hard way when he cited six completely fictional cases in his federal court filing, as fed to him by ChatGPT.

It’s scary that, of all people, a lawyer would make this mistake, but it serves as the most extreme example of a cautionary tale. AI’s ability to, put simply, “make stuff up” has consequences ranging from funny (like confidently asserting that a violin will break into exactly four parts if a jelly bean is dropped onto it) to scary (like having the potential to become a threat to democracy and a tool for fascism).

It’s vital that we train ourselves to be AI and media literate. Remain sceptical, fact-check, and don’t accept everything at face value.



How can you shift your AI mindset?


So, with all these perfectly understandable and valid points in mind - and the knowledge that, whether we like it or not, AI is here to stay - how can you shift your mindset around this technology from apprehension to acceptance, and see AI not as a threat but as an opportunity?


Explore useful applications of AI in your role or industry


Whether you’re in L&D, HR, or any other industry, the chances are that AI can assist you in your role one way or another. Instead of thinking of AI as a replacement, think of it as a tool or an assistant. How many times have you repeated the same tedious process or task, and thought “if only there was some way to automate this, my life would be so much easier”? Or “if I could outsource this, it would free up time to do more important tasks”?

From an LMS with AI-suggested goals based on the user’s individual profile and behaviours, to an AI meeting assistant that takes notes for you, to productivity tools that automate your workflows, once you lift the cap on the AI bottle there’s seemingly no limit to what it can help you achieve.
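As a purely hypothetical illustration of that “assistant” mindset, the short Python sketch below hands one tedious, repeatable chore - turning raw meeting notes into a list of action items - to a general-purpose AI model. It isn’t a Thrive integration: it assumes the openai Python package is installed and an API key is set in the OPENAI_API_KEY environment variable, and the model name and prompts are placeholders.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def extract_action_items(notes: str) -> str:
    """Ask the model to turn free-form meeting notes into a bulleted list of action items."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model; use whatever your organisation has approved
        messages=[
            {"role": "system",
             "content": "Turn meeting notes into a concise, bulleted list of action items with owners."},
            {"role": "user", "content": notes},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    notes = "Discussed Q3 onboarding content. Priya to draft the outline by Friday. Sam to review LMS analytics."
    print(extract_action_items(notes))

The point isn’t this particular tool: it’s that a few lines of glue can take a chore off someone’s plate while leaving the judgement calls with the human.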


Conduct AI experiments


What’s the best way to confront an idea or a process that you’re unfamiliar with? Experiment.

Practise using AI tools in an upcoming project, in whatever form that may take, and observe the results. Did you save time or money? Did you learn something new?

Take a leaf out of Thrive’s Chief Content Officer Al Thompson’s book. She recently conducted a week-long experiment, which saw the Content team create an entire learning module exclusively using AI tools. Instead of seeing AI as a threat to their jobs, the team saw it as an opportunity to explore and adapt.

Their journey can be used as a great template for your own in-house AI experiment, no matter your industry.

Step one: Establish an objective or goal. In the case of the Thrive Content Team, this goal was simply to see if they could create an entire learning module using only artificial intelligence, and to test whether it could be as effective - and human - as their regular learning content.

Step two: Assemble a team, and brainstorm an idea. Thrive Content's AI crew comprised AI subject matter experts and enthusiasts from both within and outside the Thrive Content team. Their brainstorming session landed them on the topic of "Empathy" for their fully AI-powered module - a fitting choice, given their goal of seeing just how human AI can really be.

Step three: Research and establish your tools. There is an almost unending list of AI tools available for any given need, so homing in on the most suitable ones is essential before continuing your project. For their purposes, Al and her team decided to use two AI tools per task (scriptwriting, design, and so on).

Step four: Begin the experiment! In our example, Thrive's AI Content Team began feeding prompts to ChatGPT and refining the results to create the basis for the learning module. Next, they used Adobe Firefly to create the visuals and ElevenLabs for an eerily accurate voiceover. (A rough sketch of what that prompt-and-refine loop can look like in code follows these steps.)

Step five: Roll out the project, and gauge the reaction.

Step six: Reflect on the experiment, what you learned, how it's benefited your team, and what you could do better next time.
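For the technically curious, here’s a rough sketch of what the prompt-and-refine loop in step four could look like in code. It’s an assumption-laden illustration rather than the Thrive team’s actual process: it uses the openai Python package, a placeholder model name and invented prompts, and assumes an OPENAI_API_KEY environment variable is set.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
MODEL = "gpt-4o-mini"  # illustrative model name

# Start the conversation with a framing prompt and a first drafting request.
messages = [
    {"role": "system",
     "content": "You are an instructional designer writing a short learning module script on empathy."},
    {"role": "user",
     "content": "Draft a 300-word script introducing empathy in the workplace."},
]

# First pass: get an initial draft and keep it in the conversation history.
draft = client.chat.completions.create(model=MODEL, messages=messages)
messages.append({"role": "assistant", "content": draft.choices[0].message.content})

# Refinement passes: each round of feedback builds on the previous draft.
for feedback in [
    "Make the tone warmer and add one workplace example.",
    "Tighten it to 200 words and end with a reflection question.",
]:
    messages.append({"role": "user", "content": feedback})
    revision = client.chat.completions.create(model=MODEL, messages=messages)
    messages.append({"role": "assistant", "content": revision.choices[0].message.content})

print(messages[-1]["content"])  # the refined script, ready for human review and editing

Keeping the full conversation history is what lets each refinement build on the last draft - mirroring the feeding-and-refining approach described above, with a human still reviewing and editing the result.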


Ask: “How does AI help humans?”


Whether or not they’re relevant to your specific job role, considering the ways in which AI helps humanity as a whole - rather than harms it - is a great starting point for shifting your mindset. This shouldn’t replace an awareness of the risks and shortfalls; the two attitudes should work in tandem with each other.

Artificial intelligence - like almost every single other thing on the planet - is neither “all good”, nor “all bad.” We’ll go into more detail about the nuances of that concept later, but for now let’s simply explore some of the best ways AI has enhanced or helped humanity in recent years.

Firstly, take the programmers who are developing the appropriately named “MeToo Bot”: an artificial intelligence system that aims to crack down on workplace sexual harassment by monitoring emails and flagging any inappropriate content, before forwarding the offending messages to HR. This developing technology is far from perfect - and at the moment, what actually constitutes “inappropriate” is proprietary information - but it’s a step in the right direction when it comes to AI being used for good.

If you’re familiar with Smokey Bear - sharp left turn here, we know, stay with us - you might have heard the phrase “Only YOU can prevent forest fires.” But this has recently morphed into “Only AI can prevent forest fires” thanks to another new, impressive application of AI technology. This fascinating innovation uses artificial intelligence to predict where wildfires are likely to break out. It can even analyse their potential severity, and suggest the best ways to combat them. Keeping on the firefighting theme, this revolutionary AI helmet allows firefighters to find trapped people faster by helping them see through smoke.

And that’s to say nothing of the advances in healthcare. New AI-assisted tools can analyse and find patterns in data, predict risks in patients, and help hospitals run smoothly. They can even read medical images like X-rays and scans. This has huge implications for the future of healthcare, and places a tick firmly in AI’s “pro” column.

As we mentioned at the top of this point, while these applications might not be relevant to your specific role or responsibilities, hopefully they help paint a picture of how AI can pave the way for a safer future - therefore serving to further shift your mindset in a more positive direction.


Do your own AI research and join in the discussion


AI is changing at such a speed that it’s difficult to keep up. New features, updates and companies are seemingly announced every single day.

But it’s essential to stay up to date – as hard as that might be. Always do your own research, and find trusted, expert sources. (AI Disruptor and Deep Learning Weekly are both informative newsletters that will keep you up to date from the comfort of your inbox.)

Contribute to the ongoing conversation by bringing any doubts, questions or concerns to your peers or colleagues so that everyone stays informed.


Is AI a threat or an opportunity?


So after all that, is AI a threat or an opportunity?

As we hope we’ve conveyed by now, this is a nuanced issue. By its very nature, AI can’t really be clearly defined one way or the other. Inherently, it gives with one (artificial) hand, and takes with the same hand. The very things that make AI useful are the same things that can make it a potential danger. Unsurprisingly, then, it ultimately comes down not to the AI itself but to the humans controlling and monitoring it.

But if we stay informed and vigilant about AI, know which tools to pick and how to use them, we can absolutely get to a place where it serves as an opportunity rather than a threat. Think of AI less as the young upstart who’s after your job, and more as the willing and knowledgeable assistant that will help you do it to the very best of your ability.

And if you’re looking to learn more about how Thrive is staying at the forefront of AI innovation, take a look at our AI features or join in the conversation on our LinkedIn newsletter AI helped us do it.

