April 17, 2024
|
5 mins to read

The essential AI glossary for L&D teams

A fool-proof breakdown of essential Gen AI terms for Learning and Development professionals
Alex Mullen
Web Content Writer

The fast-growing and complex world of AI comes with a lot of hyper-specific words, phrases, and jargon that make it even more inscrutable.

Alongside the roll-out of our new, game-changing AI Authoring Tool, we’re trying to demystify AI and its vernacular with this (non-exhaustive) AI glossary. In order to mitigate some of the risks of AI, we need to increase literacy around the topic and make it accessible to everyone. Doing this democratises the power of AI - instead of confining it to a handful of people - and means that everyone can understand it, use it, and manage the risks.

Firstly, it’s important to distinguish between the two most prevalent forms of AI. While Generative AI is having a moment in the mainstream spotlight, we need to differentiate it from its possibly more obscure and less shiny cousin, Non-Generative AI.


What’s the difference between Generative AI and Non-Generative AI?

While Generative AI creates new content, Non-Generative AI analyses data and recognises patterns therein. It’s largely used to predict outcomes and make decisions based on data.

In honour of our brand new AI Authoring Tool - which falls on the "generative" side - we’ll be defining terms today that relate exclusively to Gen AI.

Let’s first explain what we’re dealing with in a little more detail.

What is Generative AI?

When it comes to defining Gen AI, we need to establish its umbrella terms first.

The first umbrella is, of course, Artificial Intelligence, or AI. Underneath that comes Machine Learning. This branch of AI is exactly what it sounds like: the “machine” (computer) is trained to learn, recognise patterns, and make decisions. Next there’s Deep Learning, a subfield of Machine Learning. Deep Learning uses algorithms to teach computers to learn by example; the “deep” refers to the many layers of the neural networks it uses. Finally, within that, we come to Generative AI.

Even those who have only dipped their toes into the weird and wonderful world of AI will probably have heard of Generative AI programs like ChatGPT, Google Gemini (formerly Bard) and Bing AI.

Generative AI learns from pre-existing data to generate new data. Simply put (after all, that’s the whole point of this guide), it learns from the information it’s given - just like humans. And, also like humans, it can make mistakes. However, in the world of Gen AI, these mistakes are known as “hallucinations.”

For example, when asked what the world record was for crossing the English Channel entirely on foot, ChatGPT happily and obligingly replied: “The world record for crossing the English Channel entirely on foot is held by Christof Wandratsch of Germany, who completed the crossing in 14 hours and 51 minutes on August 14, 2020.”

Wow, the more you know!

Hallucinations aside, the practical applications of Generative AI - in multiple industries, but particularly L&D - are boundless. We’re already seeing how it can help with content creation, skills development and more - but in order to effectively and safely make use of these applications, it’s so important to have a full understanding of each and every one of them.

Within AI’s complex network of data, language and information, many new terms and bits of jargon have been born. What follows is our simple breakdown of these terms, so you and your L&D team can understand exactly what you’re working with - and, more importantly, any risks or things to keep an eye out for.


Types of Generative AI, explained

Data augmentation

Our first foray into AI speak is “data augmentation.”

To break this term down:

Data = information

Augmentation = making something larger or more intense

Based on this breakdown, it’s not surprising that the practice of Data Augmentation aims to increase the amount of training data by artificially generating new examples from the data that already exists, resulting in a more robust model.

For example, say you wanted to train a machine learning model to detect parrots. Instead of showing the AI one picture of a parrot, or several pictures of parrots but all from the same angle, you might provide a more varied series of images from different angles and orientations to create more diverse data.
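For the technically curious, here’s a minimal, illustrative sketch of what that could look like in Python, using the Pillow imaging library to spin a few extra training examples out of a single (hypothetical) parrot photo - a sketch of the idea rather than a production pipeline:

# A minimal sketch of basic image data augmentation with Pillow.
# "parrot.jpg" is a hypothetical source image; each transform below
# yields a new, slightly different training example derived from it.
from PIL import Image

original = Image.open("parrot.jpg")

augmented = [
    original.transpose(Image.Transpose.FLIP_LEFT_RIGHT),          # mirror image
    original.rotate(15, expand=True),                             # tilt one way
    original.rotate(-15, expand=True),                            # tilt the other
    original.resize((original.width // 2, original.height // 2)),  # scale down
]

# Save each variant so it can join the training set alongside the original.
for i, image in enumerate(augmented):
    image.save(f"parrot_augmented_{i}.jpg")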

This is best used for image duplication and synthetic data generation, and like all applications of AI technology, it isn’t without its risks. Let’s briefly go over the most prominent one so you know what to look out for:

Deep Fakes (like fake IDs or documentation) can lead to cybersecurity issues.

An example of a program that makes use of data augmentation is Midjourney.


Text-to-image diffusion models

Text-to-image diffusion models, unsurprisingly, produce images or videos based on a text input. For example, “Create an image of a flying saucer being driven by Elon Musk” or, less excitingly, “Create an image of a happy-looking employee sitting at a desk.”

The best use cases for this are to create images or videos that you may not be able to find organically. In L&D, we’re all probably familiar with the situation of trawling various image-hosting websites for the perfect picture and coming up short. Text-to-image diffusion models aim to take some of the legwork out of that frustrating process by helping you craft an ultra-specific image or video to support your learning content.
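To make that concrete, here’s a rough, illustrative sketch of what a text-to-image request can look like in Python. It assumes the OpenAI SDK’s images endpoint; the model name and prompt are just examples, not recommendations:

# A rough sketch of generating an image from a text prompt.
# Assumes the OpenAI Python SDK and an API key in the OPENAI_API_KEY
# environment variable; the model name and prompt are illustrative only.
from openai import OpenAI

client = OpenAI()

response = client.images.generate(
    model="dall-e-3",
    prompt="A happy-looking employee sitting at a desk, bright natural light",
    size="1024x1024",
    n=1,
)

print(response.data[0].url)  # URL of the generated image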

Usefulness aside, the ethical considerations and risks of this type of AI have been regularly debated, so let’s go over a few of them now:

Copyright: It’s not always clear who owns the copyright to the images on which the text-to-image models have been trained, and this has spawned a wave of high-profile intellectual property cases.

Infringement on the work of human artists: There’s a concern that over-reliance on AI-generated art will lead to a kind of “AI supremacy” that values convenience and speed over human artistry, taking jobs and income away from artists as a result. That said, anyone who has seen AI-generated art can probably agree that humans still have the upper hand (with the correct number of fingers, no less).

Ethical considerations: In the alarming age of misinformation and disinformation, when internet literacy is moving slower than the information it needs to counteract, there’s an ethical consideration around AI-generated “photographs”. If it isn’t disclosed that certain images are AI-generated, people who aren’t trained to recognise them as such can be misled or targeted.

If you were to close your eyes now and think of an AI-generated photograph of a woman, it’s likely your mind would conjure up an eerily blurred, cartoonish and uncanny portrait, unrecognisable as a real human being.

But certain tools are becoming more sophisticated than that - and people are also becoming better at using them. Take the recent study from the University of Waterloo in Canada, which found that only 61% of participants could tell the difference between real and AI-generated photos, well below the 85% the researchers had predicted.

This underlines how important it is to disclose the use of AI at all times. Thankfully, new EU law (the AI Act) coming into effect from 2026 requires all AI-generated material to be explicitly labelled as such.

Some examples of text-to-image diffusion models include:

GLIDE, DALL-E and Imagen.


Text-to-text

Old reliable: the LLM is what most people think of when they think of AI. Text-to-text models, or Large Language Models (LLMs), are AI models that generate written content based on written input.

For example: “ChatGPT, write a short Shakespearean sonnet about a milkshake.” (Incidentally, why is it always Shakespeare? He never asked for this. Anyway, here’s the milkshake sonnet.)

[ChatGPT’s milkshake sonnet]

Not bad, although I’m not sure how I feel about the phrase “creamy tendrils.”

The capability and reach of LLMs obviously extend far beyond dairy-based poetry. People can use them for work, for personal projects, to tidy up a particularly difficult email… you name it, the possibilities are endless.
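Whatever the task, the exchange underneath is the same: one block of text in, one block of text out. As a hedged illustration, here’s roughly how the milkshake-sonnet request could be made through the OpenAI chat API (the model name is just an example):

# A minimal sketch of a text-to-text request: one text prompt in, one text reply out.
# Assumes the OpenAI Python SDK; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

completion = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Write a short Shakespearean sonnet about a milkshake."}
    ],
)

print(completion.choices[0].message.content)  # the sonnet, creamy tendrils and all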

There are a few risks associated with this type of AI.

Bias: This is a familiar topic of discussion when it comes to concerns about Gen AI. We’ve explored it in our blogs The ethical implications of AI in the workplace and AI: Threat or challenge?, but to summarise: AI is only as good as the data sets it’s trained on, and those data sets come from humans, who are - in case you missed it - inherently biased. As a result, AI can start to demonstrate unconscious bias in the text it generates.

Misinformation: We’ve already gone over this in our text-to-image section, but if the misinformation and disinformation risks associated with AI-generated “photographs” are dangerous, just imagine the risks of AI-generated text. AI’s ability to generate huge swathes of text at lightning-fast speed is a dangerous weapon in the disinformation arsenal. OpenAI themselves have admitted that LLMs could potentially be used to automate disinformation campaigns, making them easier to scale. This makes AI literacy more important than ever.

High environmental impact: LLMs unfortunately consume an awful lot of energy, and have a high carbon footprint thanks to the sheer volume of resources needed to run them. A report titled “Power Hungry Processing: Watts Driving the Cost of AI Deployment?” found that:

“The most carbon-intensive image generation model… generates 1,594 grams of CO2 for 1,000 inferences, which is roughly the equivalent of 4.1 miles driven by an average gasoline-powered passenger vehicle.”


As a result, the carbon footprint of LLMs can’t easily be ignored and is definitely something to keep in mind when considering the risks.


Examples of Large Language Models include:

ChatGPT (GPT-3.5) and Claude.

One input, one output

This is an umbrella term encompassing several different types of AI models. Text-to-text and text-to-image are both examples of models that fall under this umbrella. Our Shakespearean sonnet example also falls into this category: I gave ChatGPT a creative brief of sorts, using words, and it produced its interpretation of that brief, also using words.

“One input, one output” describes an AI model that takes one type of input and produces one type of output. This is as opposed to “any-to-any” or “multi-input, multi-output” models, which can take in any type of data - images, video, audio, or text - and produce any kind of output.
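One way to picture the difference is as function signatures. The toy sketch below is purely illustrative - it isn’t a real AI library - but it shows how a one input, one output model is locked to a single pair of types, while an any-to-any model accepts and returns a mix:

# Purely illustrative type signatures - not a real AI library.
from typing import Union

Media = Union[str, bytes]  # e.g. text, or raw image/audio/video data


def text_to_text_model(prompt: str) -> str:
    """One input, one output: text in, text out (e.g. an LLM)."""
    ...


def text_to_image_model(prompt: str) -> bytes:
    """One input, one output: text in, image bytes out (e.g. a diffusion model)."""
    ...


def any_to_any_model(inputs: list[Media]) -> list[Media]:
    """Multi-input, multi-output: mixed media in, mixed media out."""
    ...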

This is the main shortcoming associated with one input, one output AI:

Scalability issues: As these AI models grow, the scale of resources required also grows. That includes the need for superior hardware, larger datasets, and more energy, which is costly in the long run.

Example of a one input, one output model:

ChatGPT (GPT-3.5)

Multi-input, multi-output (or multimodal)

This is another umbrella term that encompasses several different types of AI models. Some models (such as GPT-4 or Sora) are classified as multimodal.

Unlike one-input, one-output models, multi-input, multi-output models can accept and produce content in multiple formats. For example, instead of only taking in and producing text, OpenAI’s GPT-4V(ision) can also process image inputs. It can then produce text outputs as a result, such as a description of the image, contextual information or even a story.
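As a purely illustrative sketch (not a statement of how any particular product works internally), here’s how a mixed text-plus-image request can be sent to a vision-capable model through the OpenAI chat API; the model name and image URL are placeholders:

# A rough sketch of a multimodal request: text plus an image in, text out.
# Assumes the OpenAI Python SDK; the model name and image URL are placeholders.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder for any vision-capable chat model
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what is happening in this photo."},
                {"type": "image_url", "image_url": {"url": "https://example.com/office-photo.jpg"}},
            ],
        }
    ],
)

print(response.choices[0].message.content)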

Risks include:

Data privacy concerns: Multi-input, multi-output AI has a greater risk of data privacy concerns than one-input, one-output, simply due to the sheer amount and variety of data that it handles.

Deep Fakes: Unfortunately, the scary practice of deep fakes also becomes a greater risk with a broader set of data.

Examples include:

GPT-4V(ision)

Sora


We hope this glossary of AI terms has been helpful - whether for your L&D team specifically, or to inform you more generally about AI’s risks and opportunities. If you’re looking for a handy table to refer to, check out our AI Jargon Buster, and to learn more about Thrive’s new AI Authoring Tool, browse our AI features page here.

Are you going to be at Learning Technologies?

If you’d like to speak with the Thrive team - and even Thrive customers - about how Thrive could work for your business, head to stands K20 and H20. We'll be exploring the topic of AI in more depth, offering insights from real customers, and making some exciting new announcements. You won't want to miss it. We look forward to seeing you there!

