More on our AI Notetakers Stance

In this article we’re talking about generative AI. Almost everyone’s heard of AI, because almost everyone’s talking about it.

But what is it?

Generative AI is a type of artificial intelligence that can create new content, such as text, images, and audio, based on existing data. It learns patterns from large datasets and then uses those patterns to generate novel outputs that are similar to, but not exactly the same as, the original data.

Please note, this is the ONLY part of this article that is written by AI.

AI Notetaking Tools

The use of AI for taking notes in meetings is growing in popularity, and yes, we’re not going to deny that these are handy and powerful tools. They’re great for saving the time you’d spend writing up meeting notes, summarising the feel and outcomes of meetings, and pulling out information and turning it into a handy list of actions.

They do this by recording your meetings, then transcribing and summarising them to create minutes. Some AI tools can even identify key topics and suggest actions based on the discussion. Sounds great!

But, as great as that is, it’s actually a GDPR minefield, not to mention an ethical conundrum. Before we get into it, we want to share where we stand on the use of AI notetakers, and then we’ll explain why – not so that you’ll do the same, but to help and empower you to make an informed decision on where you stand too.

There is a lot of value in AI, and it can be a very powerful tool – but it’s early days, and we’re still trying to understand the benefits, the risks, and how it works. We still can’t say, with certainty, that the benefits outweigh the risks.

Our stance

For the comfort and privacy of all participants, we will not allow the use of AI notetakers in our meetings, networks and forums, and training sessions.

Our primary reasons for this are:

  • The uncertainty around the processing and storage of data by AI providers.
  • The recording of participants’ names, faces, and conversations without consent.
  • The ethical considerations of using AI to record, transcribe, and summarise meeting discussions, and to ensure our sessions continue to be a safe place for those we meet with.
  • The environmental impact and carbon footprint of data processing centres, and the energy consumption to run and maintain them.

We are conscious, however, that some people use AI notetakers for accessibility purposes. If this is how you use them, please call 01782 683030 or email events@vast.org.uk to discuss your accessibility requirements. We will always exhaust all other options before allowing AI to transcribe our meetings, networks, or training.

Here's Why

When coming to this decision, we looked at the benefits of using AI this way. We also discussed the risks and our concerns in using AI for taking notes during meetings. These concerns are around:

GDPR Compliance, Data Protection, and Storage

Let’s start with GDPR and informed consent. Often, AI notetakers are already in online meetings when you join, so your consent is retrospective.

Having a lawful basis for processing data under UK GDPR legislation means that we, as organisations, MUST have explicit consent before collecting and processing personal data. This includes:

  • Staff, volunteer, and client names.
  • Sensitive strategic discussions.
  • Personally identifiable information (PII).
  • Potentially confidential or regulated data.

Now, this data and its processing could fall under contractual necessity or legitimate interest, but most of the time the lawful basis will be consent, and that consent must be given BEFORE the data is collected or processed. That means we should be giving, or asking for, consent before an AI notetaker joins the meeting. Sadly, as noted above, the notetaker is often already there when you join, so any consent would be retrospective.

Due to the rapid pace of AI development, regulation struggles to keep up; legislation can quickly become outdated, posing a greater security risk.

“It remains unclear if it is technically possible to successfully render such systems safe and responsible without direct human oversight.”

Most of the time, the data collected, processed, and stored by AI notetakers sits on cloud servers that aren’t anywhere near the UK. It could be in the EU or any number of countries which might not have the robust data protection laws we have in the UK. This could increase the risk of data leaks and of hackers stealing data for ransom. While this is unlikely, it’s still more of a risk than a UK-based cloud storage system or data processing centre.

There are even some AI platforms that retain transcripts to help the AI ‘learn’, which poses a risk of confidential and sensitive conversations being exposed to external parties or ransomware. Before using an AI notetaker, you should ensure that the platform provides end-to-end encryption to protect sensitive information from potential breaches.

Informed consent means that anyone being recorded by an AI notetaker should be asked for consent before it is used. The person using the notetaker should be able to explain the data protection measures in place, as well as how and where the data is stored.

To prevent GDPR and data protection breaches, we will not be allowing AI notetakers in our meetings, training, or networks.

Transparency

When using AI, it’s crucial to inform everyone you’re working with that you’re doing so. That could take the form of a transparency statement on anything that includes AI-generated content, or a tick box on your materials and platforms.

That could come in the form of a note like the one we’ve used with the definition of AI above. It could also be a disclaimer, like the example below, or simply a footnote.

You need to be clear when you’re using AI notetakers, and include something like this in the minutes, notes, and transcripts they produce. With some apps, this isn’t an option.

Example AI transparency disclaimer

Best practice for AI transparency includes:

  • Be clear with those being recorded in your meetings about how the notetaker is collecting data (faces, names, conversations etc.), how and where it’s being stored, and how it’ll be used in creating notes, summaries, lists, and transcripts.
  • Obtain explicit consent from others in the meeting BEFORE using an AI notetaker.
  • Thoroughly explain how you’re guarding against the content inaccuracies and biases of AI notetakers (more on this below).
  • Have thorough knowledge of how the AI notetaking app you’re using works, how you’ve customised its settings for security, and how you access, review, and share the notes it takes – and be able to explain all of this to the other people in your meetings.

To remain completely transparent, and to allow others to be too, we will not be allowing AI notetakers in our meetings, training, or networks.

Environmental Impact

The data that AI collects, shares, stores, and learns from is processed in data centres. These house the servers and computers that power AI.

There are the resources required to build these massive structures to consider, but then there’s the sheer amount of electricity needed to power them, and the equipment that stops them from overheating. Predictions suggest that by 2030, AI data centres could use 945 TWh (terawatt-hours) of electricity every year. That’s three times more than the entire UK uses!

Servers on this scale generate a lot of heat, so to stop them overheating they need cooling systems. Cooling systems predominantly use water, and the amount of water they need is staggering. For example, Virginia in the US has the highest concentration of data processing centres in the world; between 2019 and 2023, their water usage rose to a mind-blowing seven billion litres (1.85 billion gallons)!

The water consumption for US-based facilities may be striking, but the research in ‘Making AI Less “Thirsty”’ also suggests that data centres in Asia could use as much as three times the amount of water as their western equivalents.

Rather than further contributing to the environmental consequences of using AI, we’re not allowing AI notetakers in our meetings, training, or networks.

Content Accuracy and Bias

Regardless of how good AI can be at producing content, that content needs extremely thorough fact, spelling, and grammar checks before it’s shared. But AI notetaking apps often share meeting transcripts immediately after the meeting ends. AI doesn’t account for tone of voice, different accents, or context, and it can simply ‘mishear’ words or conversations. It also usually uses American English spellings and grammar instead of British English.

AI is infamously biased. It learns from the data it’s trained on, which can over-represent certain demographics or contain historical biases, and those biases will be reflected in the AI’s output. Even if the data it’s fed isn’t necessarily biased, algorithms can still prioritise certain demographics or features, potentially widening existing inequalities, producing discriminatory content, reinforcing stereotypes, or marginalising certain communities or viewpoints. This kind of content can have huge ethical, and potentially legal, consequences.

As well as inaccuracy and bias, generative AI can also make things up if it can’t find the content it needs. These are called ‘AI hallucinations’. AI trawls the internet to fill in blanks, but if it can’t find the answer, it will ‘hallucinate’ and provide you with something that is completely fabricated.

Instead of risking the accuracy of our meeting notes and minutes, or sharing content with unintentional bias, we’re saying “no” to AI notetakers in our meetings, training, or networks.

Ethical considerations

Firstly, it’s okay not to be okay with AI notetakers recording you and storing your data. It’s okay to speak out and challenge the use of AI notetakers in your meetings if you’re not comfortable with them – and we’re making our AI notetaker statement to protect the comfort and confidentiality of you, our membership, and to reassure and empower you to speak out too if you’re not happy.

We’re right to question the ethics of AI, though, especially AI notetakers.

There are occasions where AI notetakers are in meetings before we’re aware of them, and consent is retrospective. But there are also AI notetaking apps that just look like another participant in those meetings, or that can’t be seen at all. So, if no one mentions them, or asks about them, how are we supposed to decide whether we want to give consent?

Some people are even using notetaking apps instead of attending meetings, or to share paid-for training that providers have put a lot of time, effort, and resources into designing and delivering.

When it comes to sharing the notes, transcripts, summaries, and actions produced by AI notetakers, they often go to everyone on the invite list, not just those who attended and were involved. While meeting minutes and notes are arguably most useful for those who didn’t attend, conversations may have taken place that weren’t intended for some of those people. This risks potentially sensitive conversations or confidential information being exposed.

Best practice suggests pausing notetaking during these kinds of conversations, or when information that could identify a person is being discussed, but that relies on the person responsible for the app in that meeting. It would be easy to forget to pause it, or for conversations to flow too quickly to pause it in time.

It would also be best for these notes and minutes to be thoroughly checked and redacted wherever necessary, but most of the time they are sent out automatically, immediately after the meeting, with no time to check them before they’re shared.

Because of these ethical considerations and to provide a safe place for our members, we’re not allowing AI notetakers in our meetings, training, or networks.

If you have any questions about this policy, please email enquiries@vast.org.uk.