Shortly after ChatGPT was released, someone released an app that could determine whether a document was written using AI. That would be useful for teachers analyzing their pupils' submissions.
But I wonder: what characteristics of an AI-written document would let another AI app definitively say it was written using AI? And would a document be questioned simply because it was written by a bot?
Original Message:
Sent: 01-Apr-2023 18:11
From: Michaela O.
Subject: ChatGPT for Regulatory Work
Don't forget there's a lot more documentation than just 21 CFR and the long and historic list of FDA guidance documents. There's global guidance, GMP, QSIT, EU regulations, MDCG documents, and lots of relevant context built into the ~200k FDA submissions (and around 85k submission-related documents) and all of their surrounding data.
------------------------------
Michaela O.
Original Message:
Sent: 01-Apr-2023 13:49
From: Ram Balani
Subject: ChatGPT for Regulatory Work
I agree fully that ChatGPT learns from the Internet, but that's not all it can learn from.
Apart from the public Internet, Wikipedia, and the other corpora of datasets that make up the trained OpenAI ChatGPT models, AI/machine learning can be geared toward proprietary data, e.g. your own SOPs within an organization, protocols, quality systems data, reports, etc.
OpenAI publishes an API (Application Programming Interface) so that one can take their out-of-the-box ChatGPT LLM (the Large Language Model you are citing) and customize or fine-tune it with private training datasets to make it 'smarter'.
That's what we plan to do, i.e. customize ChatGPT-4 against all 2,600 (and growing) FDA.gov PDF guidance documents we've collected, along with the over 25K items that make up the FDA 21 CFR HTML on Subchapter H (medical devices), Subchapter F (biologics), and Subchapters C & D (drugs).
This takes some major doing; I cover how in my upcoming RAPS July Quarterly article, which is in the works.
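As a rough illustration of that fine-tuning workflow, here is a minimal sketch, under stated assumptions, of preparing collected guidance Q&A pairs in the JSONL chat format that OpenAI's fine-tuning jobs consume. The file name, system prompt, and the example pair are hypothetical; this only builds the training file and does not call the API or reflect the author's actual pipeline:

```python
import json

def to_finetune_record(question: str, answer: str) -> dict:
    """Wrap one Q&A pair in the chat-format record used for fine-tuning."""
    return {
        "messages": [
            {"role": "system", "content": "You answer questions about FDA guidance documents."},
            {"role": "user", "content": question},
            {"role": "assistant", "content": answer},
        ]
    }

def write_jsonl(pairs, path):
    """Write one JSON object per line, the layout fine-tuning jobs expect."""
    with open(path, "w", encoding="utf-8") as f:
        for q, a in pairs:
            f.write(json.dumps(to_finetune_record(q, a)) + "\n")

# Hypothetical example pair drawn from a guidance summary.
pairs = [("Which subchapter of 21 CFR covers medical devices?", "Subchapter H.")]
write_jsonl(pairs, "guidance_finetune.jsonl")
```

A real run would then upload the file and start a fine-tuning job via the API; the heavy lifting is curating thousands of such pairs from the collected PDFs.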
That said, though ChatGPT can be made 'smarter' with private datasets, it can never match a full-text index search hosted in SharePoint Online created from the ACTUAL contents of the FDA PDF documents or the 21 CFRs FDA publishes on FDA.gov.
The trade-off is that while you're searching actual FDA-published content that is full-text indexed, the search is not natural language like ChatGPT; the search operators (not rocket science, and similar to Google's) need a little brushing up to learn.
See sample on the mobile screen - https://tinyurl.com/3jbjcd7v
Our goal is to pair the precision search SharePoint renders over our FDA guidance and 21 CFR collections (taken directly from FDA.gov, all public domain) with a custom-trained ChatGPT deployed on the same platform for keyword 'fishing' or 'mining', killing two birds with one stone, so to speak: a kind of dueling-banjo search style where you prompt ChatGPT in natural language (i.e. plain English), then repurpose handpicked terms from the ChatGPT response as search keywords against the SharePoint-based search engine.
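The hand-off step in that dueling-banjo style could be sketched as follows, assuming you already have a ChatGPT response in hand: pick out candidate keywords and join them into a SharePoint KQL query string. The stop-word list, length cutoff, and sample response are illustrative assumptions, not the actual implementation:

```python
import re

# Tiny illustrative stop-word list; a real one would be larger.
STOPWORDS = {"the", "a", "an", "of", "for", "and", "or", "to", "in", "is", "are", "on", "with"}

def extract_keywords(chatgpt_response: str, max_terms: int = 5) -> list[str]:
    """Naive keyword picking: keep non-stop-word terms of 4+ letters, in order of first appearance."""
    seen, terms = set(), []
    for word in re.findall(r"[A-Za-z][A-Za-z0-9-]+", chatgpt_response):
        w = word.lower()
        if w in STOPWORDS or len(w) < 4 or w in seen:
            continue
        seen.add(w)
        terms.append(w)
    return terms[:max_terms]

def to_kql(terms: list[str]) -> str:
    """Join terms into a KQL query requiring all of them, for a SharePoint search box."""
    return " AND ".join(f'"{t}"' for t in terms)

# Hypothetical ChatGPT response repurposed as search input.
response = "Supplier quality audits for biologics require stability assessment data."
query = to_kql(extract_keywords(response))
```

In practice the "handpicked" part matters: a human (or a second prompt) would prune terms like filler verbs before the query is submitted.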
There are challenges to doing this, of course, but the benefit would be awesome, since the current US FDA website searches for PDF guidance and 21 CFRs are based on metadata tags and single-word HTML matching, respectively.
To clarify further: if your search keywords do not match the tags FDA assigned to a PDF guidance document hosted on the FDA.gov site, or if you wish to drill down on the 21 CFRs where multiple keywords are needed for your search ('quality' 'supplier' 'audit', or 'biologics' cross-referenced with 'stability' or 'assessment'), you'll turn blue in the face and likely miss material that is discoverable within the FDA.gov vaults but can't be found.
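The tag-matching limitation can be shown with a toy example (the documents, tags, and bodies below are invented for illustration): a metadata search only hits documents whose assigned tags contain the keyword, while a full-text index also finds terms that appear only in the body:

```python
# Toy corpus: each "guidance document" has FDA-style metadata tags plus body text.
DOCS = [
    {"title": "Biologics Stability Guidance",
     "tags": {"biologics"},
     "body": "Covers stability assessment protocols for licensed biologics."},
    {"title": "Supplier Controls Guidance",
     "tags": {"purchasing controls"},
     "body": "Discusses supplier quality audit expectations under 21 CFR 820."},
]

def metadata_search(keyword: str) -> list[str]:
    """Match only against assigned tags, like a metadata-driven site search."""
    return [d["title"] for d in DOCS
            if keyword.lower() in {t.lower() for t in d["tags"]}]

def fulltext_search(keywords: list[str]) -> list[str]:
    """Require every keyword somewhere in the body, like a full-text index."""
    return [d["title"] for d in DOCS
            if all(k.lower() in d["body"].lower() for k in keywords)]

# 'stability' appears in a body but was never assigned as a tag:
# metadata_search("stability") -> []
# fulltext_search(["biologics", "stability"]) -> ["Biologics Stability Guidance"]
```

The gap between those two results is exactly the "discoverable but can't be found" material described above.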
Ram B
------------------------------
Ram Balani
CEO
FDASmart Inc. /eSTARHelper LLC www.estarhelper.com
Amawalk , New York
rbalani@fdasmart.com
(201) 913-0558
https://tinyurl.com/3jbjcd7v
on US FDA eSTAR for 510(K)
Original Message:
Sent: 01-Apr-2023 08:36
From: Edwin Bills
Subject: ChatGPT for Regulatory Work
ChatGPT is all the rage, all over the news these days, and a new version, GPT-4, has been released. Since the news is hyping it, it must be good, right?
ChatGPT "learns" from the internet, and we all know "if it is on the internet it must be true."
So there is no guarantee that anything produced by ChatGPT has any regulatory value.
------------------------------
Edwin Bills MEd, BSc, ASQ Fellow, CQE, CQA, CQM/OE, RAC
Principal Consultant
Overland Park KS
United States
elb@edwinbillsconsultant.com
Original Message:
Sent: 31-Mar-2023 17:06
From: Rashmi Dalvi
Subject: ChatGPT for Regulatory Work
This is very valuable feedback, Merel. My experience was somewhat similar. When it came to pulling in reference information, ChatGPT gave me incorrect examples as well as links to articles that did not exist. Suffice it to say that, at least for now, ChatGPT is a useful tool for language development; in terms of actual scientific work, its usefulness remains in question.
------------------------------
Rashmi Dalvi
Buena Park CA
United States
Original Message:
Sent: 30-Mar-2023 09:38
From: Merel Stok
Subject: ChatGPT for Regulatory Work
Out of curiosity to understand the capabilities of ChatGPT, I asked it a couple of things:
- To summarize the in vivo data from a straightforward paper that had clear headings stating "in vivo data". Notably, it gave me the wrong enzyme, and it gave me 3 animal models plus conclusions on those animal models, while only 2 animal models were used in the paper.
- I asked it to provide the equivalent of RMAT for other regions; it gave me RAPS and DIA, which is obviously not correct.
- I asked about the top 5 papers in a certain (clearly defined) field. It responded that it cannot give rankings as it was built as a literacy tool, but it did give me some papers that, judging by the titles, looked relevant. ChatGPT provided all of them with titles and authors. When I looked them up in PubMed and a search engine, they did not exist. All 5. I then asked it to give me the DOIs of those papers. They were incorrect and pointed to papers from a totally different field that matched neither the original titles nor the authors it had given me.
I realize that the way you define your ask plays a major role, and I have tried to ask the same question in different ways.
After playing with it for a couple of hours, using scientific and regulatory search terms and refining them along the way, I concluded that I will not be using ChatGPT for any scientific/regulatory work; in my experience the tool is not there yet, not ready to be used in our line of work. At least, not in the way I am able to use it ;) For other, non-fact-based texts, it is fun to see what it can come up with.
------------------------------
Merel Stok
Mountain View CA
United States
Original Message:
Sent: 07-Mar-2023 00:54
From: Anonymous Member
Subject: ChatGPT for Regulatory Work
This message was posted by a user wishing to remain anonymous
Has anyone used ChatGPT for writing summaries of published articles? I am curious if anyone has any experience in terms of its accuracy.
I realize this is a sensitive topic, I am just trying to see if there is merit in the output from ChatGPT.