This message was posted by a user wishing to remain anonymous
Someone in my company keeps asking us to create requirements and run unit testing on a statistical test (the T-square test) used by our algorithm, which seems silly. The algorithm designer refuses on the grounds that the T-square test is one of four tests that ask the question, "How close is the test spectrum to the calibration spectrum?" In other words, how well is the test spectrum described by the model developed from the calibration spectrum?
All four tests are used in combination to improve confidence in the final decision and to understand the source of the difference between the measured test spectrum and the model. According to him, none of these tests are requirements; rather, they are design choices made to meet the user accuracy requirement.
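For context, here is a minimal sketch of what a T-square (Hotelling's T²) check typically computes in this setting: the test spectrum is projected onto a PCA model built from the calibration spectra, and the variance-scaled scores are summed. The data, component count, and variable names below are illustrative assumptions, not our actual algorithm.

```python
import numpy as np

# Illustrative only: synthetic "calibration spectra" stand in for real data.
rng = np.random.default_rng(0)
cal = rng.normal(size=(50, 20))           # 50 calibration spectra, 20 channels
mean = cal.mean(axis=0)
Xc = cal - mean                           # mean-centered calibration set

# PCA of the calibration set via SVD; keep k components (k is an assumption).
k = 3
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
P = Vt[:k].T                              # loadings, shape (20, k)
lam = (s[:k] ** 2) / (cal.shape[0] - 1)   # score variances (eigenvalues)

def t_square(x):
    """Hotelling's T²: sum of squared scores of x, scaled by score variance."""
    t = (x - mean) @ P                    # project onto the calibration model
    return float(np.sum(t ** 2 / lam))

# A spectrum well described by the model yields a small T²; an outlier, a large one.
print(t_square(cal[0]))
```

The point of the designer's argument is visible here: T² is one internal diagnostic inside the algorithm, not a user-facing behavior on its own.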
I try to follow the IEC 62304 standard, and we do a good job of documenting the lifecycle of our software. However, the lifecycle of the algorithm is not well understood. Do we need requirements and unit tests for every function the algorithm performs? Do we have to create requirements and unit testing around the T-square test? Please help!