I would love to say that I have no worries about ChatGPT, but I do. As Erik says, it is "utterly wrong", and as Timnit Gebru says, it is a "stochastic parrot" (On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜, acm.org).
ChatGPT and other LLMs are a classic example of a technology whose creators will claim it is not dangerous because users should be able to determine the validity of the output; in other words, it is the users who are dangerous. Whether or not we agree with that: in the device field, I already hear the banging on the water pipes that it can be used to write complex documents like CERs to save time and money. This makes me cringe.
I don't think most people realize that a Large Language Model (LLM) like ChatGPT needs a steady stream of new and reliable data to produce a decent result; being stochastic, it bases its output on the likely sequence of words in other texts (Erik obviously realizes this, but even lawyers try to take shortcuts - ChatGPT: US lawyer admits using AI for case research - BBC News). LLMs obtain their data partly by scraping the internet for content, and anyone looking at the content of most sites will realize that its quality is really not that good (CIFS expert Timothy Shoup estimates that 99 to 99.9 percent of the internet's content will be AI-generated by 2025 to 2030, especially if models like OpenAI's GPT-3 achieve wider adoption). This means that LLMs will generate the content that LLMs then scrape to generate more content. I fear this is a doom loop for knowledge, one that will render just about anything we see invalid.
I believe most of us in the regulatory field already realize the challenge of finding quality information on regulations around the world (translations and interpretations); imagine for a second what that challenge will be like in 2030...
Gert Sorensen
Quality-Audit.eu
------------------------------
Gert Sørensen
------------------------------
Original Message:
Sent: 25-Jun-2023 02:43
From: Erik Vollebregt
Subject: Do you have any concerns about ChatGPT in regulatory?
As a lawyer who is usually called in when things have already progressed to the level of hot mess, I have no concerns whatsoever business-wise. Actually, it's business development for me. Any area of law that has changed significantly, or even not that much, during the last two years is bound to be spectacularly misinterpreted by ChatGPT due to the temporal restrictions on its model. I've played around with having it concurrently answer client queries, and ChatGPT is stunningly wrong most of the time on EU medicines and medical device law (which, indeed, changes a lot). But even on the more set-in-stone points it will, for example, tell you that the European Medicines Agency regulates devices (utterly wrong).
I am worried about the confidence the tool may give regulatory affairs staff in interpreting foreign law. Staff will have little frame of reference to detect where ChatGPT is a little or a lot wrong, and that is a risk to the company, which may act on an ill-advised machine interpretation that goes unrecognized as wrong by people who have not been trained sufficiently to understand the foreign legal system.
------------------------------
Erik Vollebregt
Partner
Amsterdam
Netherlands
Original Message:
Sent: 31-Jan-2023 06:26
From: Ryan Connors
Subject: Do you have any concerns about ChatGPT in regulatory?
ChatGPT is everywhere these days, it seems. For those who are unfamiliar, ChatGPT is an AI chatbot that can answer all sorts of questions and produce detailed responses. (It's largely in the news in the U.S. right now because teachers are concerned that students will use it to cheat on assignments.)
Do you think ChatGPT has any uses for regulatory affairs professionals? Do you see any risks that this AI poses?
------------------------------
Ryan Connors
Social Media and Communications Specialist
RAPS
------------------------------