In a bid to foster greater digital trust in AI products used for medical diagnoses and treatment, the British Standards Institution (BSI) has released high-level guidance.
The guidance, titled 'Validation framework for the use of AI within healthcare – Specification (BS 30440)', aims to bolster confidence among clinicians, healthcare professionals, and providers regarding the safe, effective, and ethical development of AI tools.
As the global debate on the appropriate use of AI continues, this auditable standard targets products primarily designed for healthcare interventions, diagnoses, and health condition management.
Jeanne Greathouse, Global Healthcare Director at BSI, said:
“This standard is highly relevant to organisations in the healthcare sector and those interacting with it. As AI becomes the norm, it has the potential to be transformative for healthcare.
With the onset of more innovative AI tools, and AI algorithms’ ability to digest and accurately analyse copious amounts of data, clinicians and health providers can efficiently make informed diagnostic decisions to intervene, prevent, and treat diseases, ultimately improving patients’ quality of life.”
According to forecasts, the global healthcare AI market is expected to reach $187.95 billion by 2030. However, healthcare providers and clinicians may struggle to assess AI products due to time and budget constraints or a lack of in-house expertise.
The BS 30440 specification seeks to aid decision-making processes by providing criteria for evaluating healthcare AI products, including clinical benefit, performance standards, safe integration into clinical environments, ethical considerations, and equitable social outcomes.
The standard covers a wide range of healthcare AI products, including regulated medical devices like software used for medical purposes, imaging software, patient-facing products like AI-powered smartphone chatbots, and home monitoring devices. It applies to products and technologies utilising AI elements – including machine learning – and is relevant to both AI system suppliers and product auditors.
The development of this specification involved collaboration among a panel of experts, including clinicians, software engineers, AI specialists, ethicists, and healthcare leaders. The guidance draws from existing literature and best practices, translating complex functionality assessments into an auditable framework for AI system conformity.
Healthcare organisations will be able to mandate BS 30440 certification in their procurement processes to ensure adherence to these recognised standards.
Scott Steedman, Director General for Standards at BSI, commented:
“The new guidance can help build digital trust in cutting-edge tools that represent enormous potential benefit to patients, and the professionals diagnosing and treating them.
AI has the potential to shape our future in a positive way and we all need confidence in the tools being developed, especially in healthcare.
This specification, which is auditable, can help guide everyone from doctors to healthcare leaders and patients to choose AI products that are safe, effective, and ethically produced.”
The specification addresses the need for an agreed validation framework for AI development and clinical evaluation in healthcare. It builds on a framework initially piloted at Guy's and St Thomas' Cancer Centre and later refined through discussions with stakeholders working in AI and machine learning.
With the publication of this guidance, BSI seeks to instil confidence in AI products used in healthcare and empower doctors, healthcare leaders, and patients to make informed and ethical choices for improved patient care and overall societal benefit.
As AI continues to shape the future of healthcare, adherence to recognised standards will play a vital role in ensuring the safe and effective integration of AI technologies in medical practice.