Regulatory Open Forum


Do you have any concerns about ChatGPT in regulatory?

  • 1.  Do you have any concerns about ChatGPT in regulatory?

    Posted 31-Jan-2023 06:26

    ChatGPT is everywhere these days, it seems. For those who are unfamiliar, ChatGPT is an AI chatbot that can answer all sorts of questions and spit out detailed responses. (It’s largely in the news in the U.S. right now because teachers are concerned that students will use it to cheat on assignments.)

    Do you think ChatGPT has any uses for regulatory affairs professionals? Do you see any risks that this AI poses?



    ------------------------------
    Ryan Connors
    Social Media and Communications Specialist
    RAPS
    ------------------------------


  • 2.  RE: Do you have any concerns about ChatGPT in regulatory?

    This message was posted by a user wishing to remain anonymous
    Posted 31-Jan-2023 13:47

    (1) Free term papers are abundant on the open internet. Yet most regulatory documentation is non-public or redacted. Thus it would be hard to train a bot on good samples of regulatory filings.
    (2) Even if you had a good sample for training, it is significantly more difficult to write a PMA or an NDA than a term paper.
    (3) The stakes are much higher in business generally and RA in particular than in academic work. Most people would be unwilling to risk adverse regulatory action or a failed filing for the sake of supposed convenience.
    (4) Automation works well for some low-level processes but it needs human supervision, and such human validation and supervision is usually a requirement in regulated industry (unless the automated process is purely administrative).
    (5) Regulators will require human review of documentation (regardless of whether AI produces the documentation) so a human element will always be required on that end.

    Therefore, for the time being, I have few concerns about this technology in RA and see few applications for it outside database searching, basic admin and RIMS.


  • 3.  RE: Do you have any concerns about ChatGPT in regulatory?

    Posted 31-Jan-2023 14:43
    Edited by John Barry 31-Jan-2023 15:06

    Hello all,
    On bots generally: consider disinfection, which hospital staff already perform manually. Any repeatable process can be standardised through automation, backed by consistent, concrete data during validation and secured with concurrent validation at specific intervals where required, so I don't see why it isn't ideal.
    Yes, with administrative oversight of validation, automation removes physical labour that medical industry workers would be glad to shed. It could still save costs and enable larger, better-controlled batches, which in turn frames batches for easier post-market surveillance. Please correct me if I'm wrong, but any recall or post-market monitoring of equipment or devices should become easier and less expensive as a result.
    One last thing I'd like to mention: root cause could be identified more quickly, making any recall faster. After all, finding what caused an issue is how you ensure it never happens again. Is it not worth removing unnecessary work and turning the tasks skilled individuals are doing into more productive functions? That leaves more time for what's important: right first time, no recalls, and easier post-market surveillance, which is currently costly.

    By the way, I won't keep posting in this thread, since it is very active and I understand this is a straightforward topic!

    That's my take on bots, and the same administrative logic could apply to ChatGPT in the future to remove simple tasks.
    Thank you.



    ------------------------------
    John Barry
    Project Engineer
    Mullingar
    Ireland
    ------------------------------



  • 4.  RE: Do you have any concerns about ChatGPT in regulatory?

    Posted 01-Feb-2023 16:28
    Edited by John Barry 01-Feb-2023 16:51
    Hello,
    As I outlined in my earlier post on AI, an augmented assistant may be useful as a tool in manufacturing. However, I agree entirely with the anonymous poster and Kimberly that it "spits out info from the internet", so unless ChatGPT or similar software has access to a body of factual, verified data, it poses no threat to industry. Further, it has been shown to be an unreliable source of data when challenged, as a previous post mentioned. Beyond that, it carries a serious risk of spreading false or misleading information, which seems to be the biggest issue it poses to RA.
    Broadening education with flawed tools like ChatGPT, and the misinformation they spread, is likely to do more harm than good. As this thread outlines, "inexperienced individuals using it as a tool" is where the harm lies.
    Thank you.
    Best regards,
    John




    ------------------------------
    John Barry
    Project Engineer
    Mullingar
    Ireland
    ------------------------------



  • 5.  RE: Do you have any concerns about ChatGPT in regulatory?

    Posted 01-Feb-2023 08:50
    Edited by Kimberly Chan 01-Feb-2023 08:51
    One of my non-regulatory colleagues recommended trying ChatGPT for a question that I had regarding MDA codes when we were applying to MDR. It very confidently spat out an answer that was completely, utterly incorrect. I was relieved to find that I still have job security even with the advent of AI!

    Like another poster said here -- any AI requires good training data. Certain data is easy to scoop up from the internet, and some is not. If there is ever going to be an AI that could threaten or replace a regulatory position for a human, I think it would need to be specifically trained by a regulatory professional, and not a general chat bot.

    Overall, I think there is a risk in relying on or asking ChatGPT about a regulatory question for the above reasons. Perhaps an inexperienced person might believe an incorrect ChatGPT answer because it acts SO confident about it.

    ------------------------------
    Kimberly Chan
    Lansdale PA
    United States
    ------------------------------



  • 6.  RE: Do you have any concerns about ChatGPT in regulatory?

    Posted 01-Feb-2023 14:01
    I use it to create outline structures for my topics. I don't trust the details, but the language and the sequencing it uses from topic to topic are amazingly fluid.

    ------------------------------
    Edward Panek
    VP, QA/RA
    Med Device
    USN Veteran
    Research into Neural Nets - https://www.twitch.tv/edosani
    ------------------------------



  • 7.  RE: Do you have any concerns about ChatGPT in regulatory?

    This message was posted by a user wishing to remain anonymous
    Posted 01-Feb-2023 14:37

    I've personally played with ChatGPT. As everyone in this forum knows, regulatory topics are very much "it depends". ChatGPT cannot answer most of the questions that get asked in this forum. As it currently stands, it does not have many uses for RA professionals.

    However, it can be good for formulating sentences or phrases when you have writer's block!


  • 8.  RE: Do you have any concerns about ChatGPT in regulatory?

    Posted 02-Feb-2023 03:06
    As long as the source data for these artificial intelligence programs is not verified and validated, there will always be cases of
    Garbage In = Garbage Out
    Individuals in RA and QA will always be required to check the sources and do the sampling.

    My concerns are more with the use of ChatGPT and similar programs without strict verification and validation of the source data.
    Without V&V, costly regulatory and medical mistakes can be made, in terms of both money and harm.

    If reported cases of garbage output increase to a significant level, people will lose faith in the technology, artificial intelligence, behind them.
    Let us not forget that AI can be beneficial in different areas, e.g. when solutions need to be found rapidly.

    Best Regards,
    Stephanie

    ------------------------------
    Stephanie Grassmann
    Founder & Managing Director of MedTechXperts Ltd
    Biberstein
    Switzerland
    ------------------------------



  • 9.  RE: Do you have any concerns about ChatGPT in regulatory?

    Posted 02-Feb-2023 09:42

    Melanie Mitchell is a well-known researcher in AI and ML on the faculty of the Santa Fe Institute. She has an interesting blog post entitled On Detecting Whether Text was Generated by a Human or an AI Language Model at https://aiguide.substack.com/p/on-detecting-whether-text-was-generated

     



    ------------------------------
    Dan O'Leary CQA, CQE
    Swanzey NH
    United States
    ------------------------------



  • 10.  RE: Do you have any concerns about ChatGPT in regulatory?

    Posted 02-Feb-2023 14:30

    I agree with Stephanie 100% on this: Garbage In / Garbage Out, or as we say in IT, GIGO! 🖐

    Furthermore, and apologies for getting "geeky", two crucial points bear mentioning that contribute to the ChatGPT puzzle: is this "WOW" or "SO-SO"?

    1. ChatGPT relies on crawling web pages to add to its learning database over time, so it may well get smarter and smarter on FDA regulatory matters specifically. That said, today its early web-page mining and learning database leave things to be desired.

    In other words, ChatGPT as I see it today is nothing more than a cleaner Google search without the Google web noise or clutter!

    ChatGPT's bots gather web-page information from the Internet and, I am assuming, apply a page-ranking algorithm that mimics Google's to reply to a query, while also accumulating some intelligence as its AI learning database matures.

    2. Equally crucial, and this is where things get challenging for any and all AI bots on medical device/biologics and drug regulatory:

    The US FDA does not publish web APIs (Application Programming Interfaces) for the FDA databases hosting PDF guidance documents and the 21 CFR legal statutes on the FDA website.

    Heck, FDA's website can't even support a simple full-text search inside PDF content, such as "find me guidance PDFs that refer to 'device' close to the term 'label' and next to 'advertising'", unless FDA has assigned those tags to a PDF in the Summary column on the FDA search site. This is known as proximity search, which Adobe Acrobat supports today across a bunch of PDFs assembled in one directory.

    Of course, most RAPS members will head straight to 21 CFR 807 to find answers to this query, but that is a 21 CFR search, not a PDF guidance content search. There are 34 Subchapter H 21 CFR parts on medical devices, but over 600 PDF guidance documents.

    So the FDA leaves the industry stuck with abysmal searches, where the tags FDA assigns to a PDF guidance document must match the keywords one searches for (not full-text PDF content search); otherwise the search fails to produce results.
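    As a minimal sketch of the kind of proximity search discussed above, here is how it can be expressed over text that has already been extracted from PDFs (the extraction step itself is assumed to be handled separately; the function and its parameters are illustrative, not any FDA or Adobe API):

```python
import re

def proximity_match(text, terms, window=10):
    """Return True if every term occurs, and some occurrence of the first
    term lies within `window` words of an occurrence of each other term."""
    words = re.findall(r"[a-z0-9]+", text.lower())
    positions = {t: [i for i, w in enumerate(words) if w == t] for t in terms}
    if any(not p for p in positions.values()):
        return False  # at least one term never appears
    for i in positions[terms[0]]:
        if all(any(abs(i - j) <= window for j in positions[t]) for t in terms[1:]):
            return True
    return False

# Does 'device' occur near both 'label' and 'advertising'?
doc = "Guidance on device naming: each device label and its advertising must match."
print(proximity_match(doc, ["device", "label", "advertising"], window=6))  # → True
```

A tag-based search, by contrast, can only match the keywords someone chose to assign to the document up front, which is exactly the gap described here.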

    FDA has no intention of publishing APIs either (which could help ChatGPT!), since FDA is perfectly content with its metadata (tag-based) search versus the full-text, every-word indexing of PDF guidance content and 21 CFR HTML pages that other platforms offer.

    In come ChatGPT today and other AI bots, which are then constrained in maturing their AI learning experience by what web users publish as more and more join the ChatGPT experience. Again, I am speaking of FDA regulatory content/intelligence here specifically.

    In other words, ChatGPT does well with a generic search such as "find me cGMP immunogenicity testing manufacturing", but it fails largely when the search is for the actual FDA PDF guidance on the same keywords (i.e. the actual PDF content)!

    There are ways to remedy this gap. I plan to write an article in the RAPS Quarterly July 2023 Regulatory Intelligence issue, so stay tuned for that.
    If you can't wait, feel free to ping me (rbalani@estarhelper.com; please reference this post) and I can offer hints on how you can get this accomplished, though I should caution everyone: there is work involved.

    See the attached ChatGPT query and reply screenshots.
    Query: the 1st is a ChatGPT query.
    Response: the 2nd attachment is how ChatGPT replies. Not bad!

    Query: the 3rd is another, more specific ChatGPT query, i.e. asking for the actual PDF guidance where the keywords appear as inquired.
    Response: the 4th attachment: ChatGPT comes up empty and begs to "Regenerate Response", but nothing changes.

    Cheers.
    Ram B







     



    ------------------------------
    Ram Balani
    CEO
    FDASmart Inc. /eSTARHelper LLC www.estarhelper.com
    Amawalk , New York
    rbalani@fdasmart.com
    2019130558
    https://tinyurl.com/2wkxp69y
    on US FDA eSTAR for 510(K)
    ------------------------------



  • 11.  RE: Do you have any concerns about ChatGPT in regulatory?

    Posted 01-Apr-2023 17:49
    Edited by Michaela O. 01-Apr-2023 18:53

    A company in the space has already begun jumping on this, namely this one: https://www.linkedin.com/feed/update/urn:li:activity:7047369862796460032/

    Some of the companies already collecting and managing all of the data, documents, registrations, devices, guidance, etc in this space are lined up to quickly leverage that groundwork to segue directly into generative AI for regulatory research.

    You are spot on that 1) you need clean, organized data; and 2) it's just a research tool at the end of the day; the responsibility for verification and proper due diligence lies with the user, who has the knowledge, intuition, and ability to verify.

    (See 3 attached images)

    ------------------------------
    Michaela O.
    ------------------------------



  • 12.  RE: Do you have any concerns about ChatGPT in regulatory?

    Posted 02-Apr-2023 02:02
    Edited by Michaela O. 02-Apr-2023 02:05

    Hi Stephanie, you're absolutely right. Verification by the user of the program is ultimately the only option; there is no other way of conveying certainty without allowing for verification.

    For this reason, the tools that see use in a regulatory setting will cite all of the sources they use to generate responses.
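    As a toy illustration of that cite-your-sources pattern (the document names, scoring scheme, and function are invented for this example, not any real product's implementation), a retriever can return its supporting passages alongside whatever answer is generated, so the user always has something concrete to verify:

```python
def retrieve_with_sources(query, corpus, top_k=2):
    """Rank passages by keyword overlap with the query and return
    (source, passage) pairs, so every answer is traceable to documents."""
    q = set(query.lower().split())
    scored = []
    for source, passage in corpus.items():
        overlap = len(q & set(passage.lower().split()))
        scored.append((overlap, source, passage))
    scored.sort(reverse=True)
    return [(s, p) for score, s, p in scored[:top_k] if score > 0]

# Hypothetical mini-corpus; the references are illustrative only.
corpus = {
    "MDR Annex VIII": "classification rules for medical devices under the regulation",
    "IVDR Article 47": "classification of in vitro diagnostic medical devices",
    "GDPR Article 5": "principles relating to processing of personal data",
}
for source, passage in retrieve_with_sources("classification rules for medical devices", corpus):
    print(f"[{source}] {passage}")
```

Real tools use far stronger retrieval than keyword overlap, but the contract is the same: no cited passage, no trustworthy answer.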



    ------------------------------
    Michaela O.
    ------------------------------



  • 13.  RE: Do you have any concerns about ChatGPT in regulatory?

    Posted 14-Feb-2023 12:38

    https://www.technologyreview.com/2023/02/08/1068068/chatgpt-is-everywhere-heres-where-it-came-from/?utm_source=linkedin&utm_medium=tr_social&utm_campaign=NL-WhatsNext&utm_content=02.14.23



    ------------------------------
    John Barry
    Project Engineer
    Mullingar
    Ireland
    ------------------------------



  • 14.  RE: Do you have any concerns about ChatGPT in regulatory?

    Posted 31-Mar-2023 17:18

    This is a great topic. I too am interested in and researching the same. I think ChatGPT will particularly help clinical and regulatory coordinators. It can help to:
    1. Summarize regulatory laws
    2. Rewrite a consent form at an 8th-grade reading level
    3. Help construct smart phrases
    4. Perform other functions that will make clinical coordinators' and regulatory specialists' lives easier.

    I would also argue that it is different from Google and other search engines because it summarizes the information you need after searching different internet sources, while other search engines provide vast resources that the user has to search within to summarize the information. Right now, some of the content ChatGPT provides needs data validation, but I am hopeful it will emerge as a useful tool in clinical research.

    I want to write and publish a paper on this topic and would like to ask if anyone here is interested in collaborating.

    Thanks
    Amrita
    Senior CRC and Regulatory lead



    ------------------------------
    Amrita Ghosh
    Cupertino CA
    United States
    ------------------------------



  • 15.  RE: Do you have any concerns about ChatGPT in regulatory?

    Posted 03-Apr-2023 16:03

    Here is a related article in Scientific American of March 31 about use of AI in diagnosis and the associated issues:

    https://www.scientificamerican.com/article/ai-chatbots-can-diagnose-medical-conditions-at-home-how-good-are-they/ 



    ------------------------------
    Edwin Bills MEd, BSc, ASQ Fellow, CQE, CQA, CQM/OE, RAC
    Principal Consultant
    Overland Park KS
    United States
    elb@edwinbillsconsultant.com
    ------------------------------



  • 16.  RE: Do you have any concerns about ChatGPT in regulatory?

    Posted 23-Jun-2023 04:20
    Edited by Enrico Schurig 23-Jun-2023 04:20

    I just made a check with ChatGPT:

    "I would like to know the IVR code for a specific IVD."

    ...giving intended use... 

    Unfortunately, as an AI language model, I do not have access to real-time databases or regulatory information beyond my knowledge cutoff in September 2021....To obtain the IVR code for the product, I recommend reaching out to the manufacturer of the test or consulting the relevant regulatory guidelines or databases specific to your country or region. They will be able to provide you with the accurate and up-to-date IVR code associated with that particular IVD product.

    So I guess my job in regulatory is still safe for the moment.  ;)

    Have a nice day,

    Enrico



    ------------------------------
    Enrico Schurig
    Le Mont-sur-Lausanne
    Switzerland
    ------------------------------



  • 17.  RE: Do you have any concerns about ChatGPT in regulatory?

    Posted 24-Jun-2023 23:31

    Personally, I'm not too concerned about the possibility of ChatGPT displacing regulatory affairs professionals. In fact, I think it could potentially become a very useful tool for reducing the amount of time spent searching for information. However, most regulatory decisions and strategies require deep analysis and evaluation of multiple factors and ever-changing regulatory requirements for different medical devices across different regions and countries.  ChatGPT would have to evolve into a highly sophisticated AI program with a dedicated team constantly supplying newly trained algorithms based on the latest datasets to give out accurate responses. We're nowhere near there and I doubt it'll get there anytime soon. Time will tell, but, as for now, I'm excited at the prospect of using ChatGPT to make my work more efficient.



    ------------------------------
    Carol
    ------------------------------



  • 18.  RE: Do you have any concerns about ChatGPT in regulatory?

    Posted 25-Jun-2023 02:43

    As a lawyer who is usually called in when things have already progressed to the level of hot mess, I have no concerns whatsoever business-wise. Actually, it's business development for me. Any area of law that has changed significantly, or even not that much, during the last two years is bound to be spectacularly misinterpreted by ChatGPT due to the temporal restrictions on its model. I've played around with having it concurrently answer client queries, and ChatGPT is stunningly wrong most of the time on EU medicines and medical device law (which, indeed, changes a lot). But even on the more settled points it will, for example, tell you that the European Medicines Agency regulates devices (utterly wrong).

    I am worried about the confidence the tool may give regulatory affairs staff in interpreting foreign law. Staff will have little frame of reference to detect where ChatGPT is a little, or a lot, wrong, and that is a risk to the company: it may act on ill-advised machine interpretation that goes unrecognized by people who have not been trained sufficiently to understand the foreign legal system.



    ------------------------------
    Erik Vollebregt
    Partner
    Amsterdam
    Netherlands
    ------------------------------



  • 19.  RE: Do you have any concerns about ChatGPT in regulatory?

    Posted 26-Jun-2023 02:01

    I would love to say that I have no worries about ChatGPT, but I do. As Erik says, it is "utterly wrong", and as Timnit Gebru says, it is a "stochastic parrot": On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜 (acm.org)

    ChatGPT and other LLMs are a classic example of a technology whose creators will claim it is not dangerous because users should be able to determine the validity of the output, i.e. that it is the users who are dangerous. Whether or not we agree with that: in the device field, I already hear the banging on the water pipes that it can be used to write complex documents like CERs to save time and money. This makes me cringe.

    I don't think most people realize that any Large Language Model (LLM) like ChatGPT needs a steady stream of new and reliable data to produce a decent result; being stochastic, it bases its output on the likely sequence of words in other texts (Erik obviously realizes this, but even lawyers try to take shortcuts: ChatGPT: US lawyer admits using AI for case research - BBC News). LLMs obtain data partly by scraping the internet for content, and anybody looking at the content of most sites will realize that it is really not of good quality (CIFS expert Timothy Shoup estimates that 99 to 99.9 percent of the internet's content will be AI-generated by 2025 to 2030, especially if models like OpenAI's GPT-3 achieve wider adoption). This means LLMs will end up generating the very content that LLMs then scrape and use to generate more content. I fear it is a doom loop for knowledge, one that could render just about anything we see invalid.

    I believe that most of us in the regulatory field realize the challenges already in finding quality information relating to regulations around the world (translations and interpretations), imagine for a second what that challenge is going to be like in 2030...

    Gert Sorensen

    Quality-Audit.eu



    ------------------------------
    Gert Sørensen
    ------------------------------