AI systems are not recognised as decision-makers. Responsibility for outcomes remains fully with the regulated entity.
Inputs, transformations, and outputs generated by AI systems must remain attributable, traceable, and reviewable.
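The attributability and traceability requirement can be sketched as an immutable audit record that binds an AI system's input, output, and model version together with a content hash. This is a minimal illustration, not a prescribed format; all names (`AuditRecord`, `summariser-v2.3`, the field layout) are hypothetical.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from typing import Optional

@dataclass(frozen=True)
class AuditRecord:
    """One traceable link between an AI system's input and its output.

    Hypothetical structure for illustration; real GxP audit trails must
    follow the regulated entity's own data-integrity procedures.
    """
    model_id: str        # which model/version produced the output
    input_payload: str   # the exact input as submitted
    output_payload: str  # the exact output as returned
    timestamp: str       # when the transformation occurred (UTC, ISO 8601)
    reviewer: Optional[str] = None  # set once a human has reviewed the record

    def fingerprint(self) -> str:
        """Content hash over all fields, so later tampering is detectable."""
        body = json.dumps(asdict(self), sort_keys=True)
        return hashlib.sha256(body.encode("utf-8")).hexdigest()

record = AuditRecord(
    model_id="summariser-v2.3",
    input_payload="deviation description as entered by the operator",
    output_payload="proposed classification: minor",
    timestamp=datetime.now(timezone.utc).isoformat(),
)
print(record.fingerprint())
```

Because the record is frozen and fingerprinted, any retrospective change to input, output, or model version yields a different hash, which keeps the transformation reviewable after the fact.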
AI does not sit outside established validation principles. However, defining appropriate validation approaches for adaptive or non-deterministic systems presents new challenges: when identical inputs may not produce identical outputs, or when model behaviour changes over time, the traditional assumption of a fixed, repeatable system under test no longer holds.
Regulators continue to expect meaningful human control over GxP-relevant decisions.
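One common way to realise meaningful human control is a human-in-the-loop gate: the AI output is only a recommendation, and it becomes a decision solely through explicit approval by a qualified reviewer. The sketch below assumes this pattern; the function names and the escalation path are illustrative, not a mandated design.

```python
from typing import Callable

def gxp_decision(ai_recommendation: str,
                 human_approve: Callable[[str], bool]) -> str:
    """An AI output becomes a GxP-relevant decision only after explicit
    human approval; rejection never falls back to automatic acceptance."""
    if human_approve(ai_recommendation):
        return ai_recommendation  # reviewer accepted the recommendation
    return "escalated for manual decision"  # reviewer rejected it

# Usage: the callback stands in for a qualified human reviewer.
result = gxp_decision("release batch", lambda rec: False)
print(result)  # the recommendation was not approved, so it is escalated
```

The key property is that no branch lets the AI output take effect without the human callback returning approval, which keeps the decision attributable to the regulated entity rather than the system.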