
Trustworthy AI in healthcare - it's TIME TO DELIVER

Posted by Prof. Dr. Freimut Schliess on Jan 7, 2020 5:16:00 PM

In Europe, chronic diseases account for 86% of deaths and 77% of the disease burden, posing a tremendous challenge to societies. At the same time, digitisation is bringing huge technological and cultural opportunities. In healthcare, the use of data-driven forecasts of individual and population health, for example through the integration of artificial intelligence (AI)-enabled algorithms, has the potential to revolutionise health protection and chronic care provision while securing the sustainability of healthcare systems.


[Designed by starline / Freepik]

European start-ups, SMEs and large corporations offer smart AI-enabled digital solutions backed by medical evidence. They could help to achieve the WHO Sustainable Development Goals and to reduce premature mortality from major chronic diseases by 25% by 2025 [1]. Some, but too few, of these solutions are available on the market. A paradigm example is the increasing availability and reimbursement of closed-loop metabolic control (artificial pancreas) systems for persons with diabetes [2].

"There is no excuse for inaction as we have evidence-based solutions." This statement was made by the Co-Chairs of the WHO Independent High-Level Commission on Noncommunicable Diseases in 2018 – paired with an appeal to governments worldwide to "start from the top" in taking action against the global disease burden [3].

In April 2019, a high-level expert group on AI set up by the European Commission (EC) published its Ethics Guidelines for Trustworthy AI [4]. The guidelines lay down criteria for trustworthy AI, with an emphasis on ethical, legal and social issues, and aim to promote it.

Three foundations of trustworthy AI

First, trustworthy AI should respect all applicable laws and regulations.

Second, trustworthy AI should adhere to the ethical principles of respect for human autonomy (e.g. people should retain full and effective self-determination); prevention of harm (paying particular attention to vulnerable persons); fairness (e.g. ensuring that people are free from discrimination and stigmatisation and are able to seek effective redress against AI-enabled decisions); and explicability (e.g. full traceability, auditability and transparent communication about system capabilities, particularly in the case of 'black-box' algorithms).

And third, trustworthy AI should be robust and ensure that, even with good intentions, it does not cause unintentional harm.

Seven requirements for trustworthy AI

These requirements are inspired by the ethical principles and should be met by developers, deployers and end-users.

First, human agency & oversight (e.g. decision autonomy, governance); second, technical robustness and safety (e.g. security & resilience to attack, accuracy & reproducibility of outcomes); third, privacy and data governance (e.g. data integrity & protection, governance of data access); fourth, transparency (data, systems, business models); fifth, diversity, non-discrimination and fairness (e.g. avoidance of bias, co-creation, user-centrism); sixth, societal and environmental well-being (e.g. promotion of democracy, ecological sustainability); and seventh, accountability (e.g. forecast quality, auditability, reporting of negative impacts).

The EIT Health Pilot on the EC's Ethics Guidelines for Trustworthy AI

Most recently, the guidelines underwent an early reality check by EIT Health – a public-private partnership of about 150 best-in-class health innovators backed by the European Union (EU) and collaborating across borders to deliver new solutions that can enable European citizens to live longer, healthier lives [5].

A survey among start-ups and entrepreneurs as well as EIT Health partners from industry, academia and research organisations indicated that awareness of the guidelines is currently low (22% of respondents). More than 60% of respondents were aware that their AI application will need regulatory approval.

Among the seven requirements for trustworthy AI, the highest priority was given to privacy & data governance and technical robustness & safety, followed by traceability and human agency & oversight.

Lower ranked, though still relevant, were the ethics of diversity, non-discrimination & fairness (respondents are working on this, e.g. through an iterative approach to improving data sets and removing biases); accountability (currently, traditional auditing, post-market surveillance and procedures for redress appear to be relied on); and societal and environmental well-being (the former appears self-evident for health solutions, while awareness of the latter in the context of health solutions is possibly not yet well established).

It is TIME TO DELIVER

Clearly there is a tension between comprehensively resolving every conceivable ethical, legal & social issue, the imperative to finally break down the longstanding barriers to personalised and preventative healthcare (which would save millions of lives), and the need for the EU to compete globally for worldwide market penetration of trustworthy AI.

We agree with the WHO's recent TIME TO DELIVER appeal [3]. In collaboration with vital communities such as EIT Health, the EU should take the lead in establishing a productive balance between promoting innovation, welcoming global competition, and defining healthcare-specific ethical, legal and social requirements for trustworthy AI.

We welcome the idea of establishing "world reference testing facilities for AI" recently contributed by Roberto Viola (Director General of DG Connect at the EC) [6]. EIT Health should be in a privileged position to orchestrate such testing facilities by providing secure validation environments that apply the high ethical and regulatory standards of clinical contract research.

Here, partners from innovation, education and business should collaborate on concrete AI-enabled solutions to assess real risks and opportunities, compile a solution-specific dossier on ethical, legal and social issues (ELSI dossier), and then join forces to launch the trustworthy AI-enabled solution to the markets and scale up the business model.

In this way, European societies could break down innovation barriers and eventually provide trustworthy solutions globally to the people who need them most.

 

Profil is core partner of EIT Health, a network of best-in-class health innovators that collaborates across borders and delivers solutions to enable European citizens to live longer, healthier lives. EIT Health is supported by the EIT, a body of the European Union.


Topics: The Science behind Diabetes, Treating Diabetes, Diabetes Technology