Regulatory Open Forum

  • 1.  Do we have a severity bias when analyzing risk?

    Posted 16-Jun-2023 08:29

    Dear colleagues - I would love to hear your opinion on this question. 

    In practical terms, should we treat Severity (S) and Probability of Occurrence of Harm (POH) as having the same impact on risk, or is one more impactful than the other?

    Recently, I did a quick poll on LinkedIn which prompted a good discussion. Here is a summary of results:

    I have written an article on this topic. Take a look and please share your opinions. Thank you!
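
    To make the question concrete, here is a rough, illustrative sketch (Python, hypothetical 1-5 scales, not taken from any standard) contrasting a plain product score, which treats S and POH symmetrically, with a severity-weighted score that does not:

        # Illustrative only: two ways to combine Severity (S) and Probability of
        # Occurrence of Harm (P), both assumed to be on 1-5 ordinal scales.

        def product_score(s: int, p: int) -> int:
            """Symmetric: S and P contribute equally, so (5,1) and (1,5) both score 5."""
            return s * p

        def severity_weighted_score(s: int, p: int) -> int:
            """Hypothetical severity bias: squaring S makes high-severity cells dominate."""
            return (s ** 2) * p

        for s, p in [(5, 1), (1, 5), (3, 2), (2, 3)]:
            print(f"S={s}, P={p}: product={product_score(s, p)}, "
                  f"severity-weighted={severity_weighted_score(s, p)}")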



    ------------------------------
    Naveen Agarwal, Ph.D.
    Problem Solver | Knowledge Sharer.
    Let's Talk Risk!
    @https://naveenagarwalphd.substack.com/
    ------------------------------


  • 2.  RE: Do we have a severity bias when analyzing risk?

    Posted 20-Jun-2023 12:29

    I think severity has more impact on risk than probability. If the severity of the risk is high - i.e., it could lead to death - then the probability could be low and the product would still carry high risk. If the probability is high but the severity is very low, then there is likely little concern. That would look something like:

    (S=5, P=1) > (S=1, P=5)
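
    Just to illustrate what I mean (a rough Python sketch on my part, not a formal method): a plain product S*P scores both combinations as 5, so capturing that ordering means letting severity dominate, for example by ranking on severity first and probability second.

        # Rough sketch: rank severity first, probability second, so (S=5, P=1)
        # outranks (S=1, P=5) even though the simple product S*P is 5 for both.
        risks = [
            {"id": "A", "S": 5, "P": 1},
            {"id": "B", "S": 1, "P": 5},
            {"id": "C", "S": 3, "P": 2},
        ]

        ranked = sorted(risks, key=lambda r: (r["S"], r["P"]), reverse=True)
        for r in ranked:
            print(f'{r["id"]}: S={r["S"]}, P={r["P"]}, S*P={r["S"] * r["P"]}')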

    Interested to hear what others think as well! I am new to medical devices so thank you for this discussion as I am learning this new field of regulatory affairs. 



    ------------------------------
    Stephanie Markey
    Associate Director
    Fraser CO
    United States
    ------------------------------



  • 3.  RE: Do we have a severity bias when analyzing risk?

    Posted 21-Jun-2023 09:11

    @Stephanie Markey thank you for sharing your perspective.

    I can understand your view in the context of severity level 5. The two combinations (S=5, P=1) and (S=1, P=5) may indeed seem too far apart to be treated as equivalent.

    What is your view on severity level 3, for example? That was option 2 in the LinkedIn poll above. Would you see (S=3, P=2) and (S=2, P=3) as somewhat equivalent?

    Thanks again!



    ------------------------------
    Naveen Agarwal, Ph.D.
    Problem Solver | Knowledge Sharer.
    Let's Talk Risk!
    @https://naveenagarwalphd.substack.com/
    ------------------------------



  • 4.  RE: Do we have a severity bias when analyzing risk?

    Posted 21-Jun-2023 15:51

    Hi Naveen.

    This is a great topic, and there is lots of potential for differences of opinion on this point. I tend to generally agree with Stephanie when there is any "significant" severity (I usually consider anything 4 or 5 "significant"). I look at these things proportionally: the higher the severity, the lower the probability needs to be. So anytime I am looking at a 4, I want the probability at 1 if at all possible. For 5s, I generally want to work at getting the severity of the risk decreased, because anything that is likely to lead to death or significant impairment should never be looked at flippantly, even when the probability of the harm is extremely low.

    That said, and being realistic, I recognize that in some instances there is no way to make the severity less or the probability "0". In those instances I tend to err on the side of caution and build as many guardrails into the process as I can think of to mitigate or minimize the severity. At the mid-level points, I think that (2,3) and (3,2) are essentially the same "risk" in reality; a moderate harm (maybe an ER visit or an office visit) is really not nearly as significant as even a hospital stay. I also tend to look at this from a "liability" perspective when I am making decisions on how to manage this sort of severity x probability scenario. Living and working in the US, where there are such things as class action lawsuits, part of my goal during mitigation is to ensure that we as a company have the best possible information and program to show due diligence in mitigating risks for patients.
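
    Put roughly as a lookup table (illustrative Python, thresholds are only my rule of thumb and not a validated acceptance policy), that approach might look like this: the higher the severity, the lower the probability has to be before a risk can be called acceptable.

        # Illustrative thresholds only: maximum probability level tolerated
        # for each severity level before further mitigation is expected.
        MAX_ACCEPTABLE_P = {
            1: 5,  # negligible harm: any probability tolerated
            2: 4,
            3: 3,
            4: 1,  # "significant" severity: probability at 1 if at all possible
            5: 0,  # death/serious impairment: reduce the severity itself, not just P
        }

        def acceptable(severity: int, probability: int) -> bool:
            """True if the (S, P) pair falls within the illustrative thresholds."""
            return probability <= MAX_ACCEPTABLE_P[severity]

        for s, p in [(4, 1), (4, 2), (3, 2), (2, 3), (5, 1)]:
            verdict = "acceptable" if acceptable(s, p) else "needs further mitigation"
            print(f"S={s}, P={p}: {verdict}")

    Note that under this sketch (3,2) and (2,3) land in the same bucket, which matches my sense that they are essentially the same risk in practice.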



    ------------------------------
    Victor Mencarelli MS
    Global Director Regulatory Affairs
    New York, NY
    United States
    ------------------------------



  • 5.  RE: Do we have a severity bias when analyzing risk?

    Posted 22-Jun-2023 09:07

    Thank you @Victor Mencarelli for your thoughtful comment.

    I agree that for S=5 risks, we should expect the lowest probability level, such as P=1.

    The question is, would you be willing to accept a risk with S=1 and P=5? At what point do you say that even S=1 risks need to be below a certain level of occurrence?

    Best regards



    ------------------------------
    Naveen Agarwal, Ph.D.
    Problem Solver | Knowledge Sharer.
    Let's Talk Risk!
    @https://naveenagarwalphd.substack.com/
    ------------------------------



  • 6.  RE: Do we have a severity bias when analyzing risk?

    Posted 22-Jun-2023 13:42

    I think that is where the "fair balance" process comes into play, Naveen. It would depend on many factors, but some that I can think of immediately include the actual harm to the patient (real, perceived, or treatment-related), the "inconvenience" caused by that harm, and what options, if any, are available to avoid the harm and lower P.

    Example: for blood test lancets, which are commonly used by laypersons in the treatment/management of diabetes, the potential "harm" is bleeding, which would be rated P=5 in most cases because bleeding is the actual purpose of the device. Severity is typically rated low because the amount of bleeding is relatively small and proportionate to the value of the data gained. But what about for a hemophiliac? Or someone already on blood thinners? That risk might not be so easily waved through.

    So I think that a harm with a severity score of 1 and a probability of 5 needs to be seriously considered in the overall assessment of the product. If something is a P=5, then to me that means it is almost certainly going to happen to a significant number (possibly even a majority) of patients using the product. Why is the harm there? What mitigation has been put in place? Could there be other mitigation options that should be considered? If they were not implemented, someone needs to be able to explain the exact source of the decision and the rationale, or the team needs to go back to the drawing board to determine whether any other mitigation options are viable.

    As I originally noted, I would not consider S=5, P=1 equivalent to S=1, P=5, but I do believe we need to seriously consider why a P=5 item is in the mix and how we can try to lower that probability in any reasonable way possible.
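
    To put the lancet example in rough numbers (purely hypothetical values, not from any real risk file): the probability of the bleeding harm stays at P=5 because bleeding is the intended outcome, but the severity is not fixed; it shifts with the patient population being considered.

        # Hypothetical values only: same hazard (bleeding), same probability, but
        # the severity rating depends on the patient population being considered.
        P_BLEEDING = 5  # bleeding is the purpose of the device, so occurrence is near-certain

        severity_by_population = {
            "typical layperson": 1,
            "patient on blood thinners": 3,
            "hemophiliac": 4,
        }

        for population, s in severity_by_population.items():
            print(f"{population}: S={s}, P={P_BLEEDING}, S*P={s * P_BLEEDING}")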



    ------------------------------
    Victor Mencarelli MS
    Global Director Regulatory Affairs
    New York, NY
    United States
    ------------------------------