Interpretable Machine Learning in Healthcare

About

Applying machine learning (ML) in healthcare is gaining momentum rapidly. However, the black-box nature of many existing ML approaches limits the interpretability and verifiability of clinical predictions. As these systems are pervasively introduced into the healthcare domain, which demands a high level of safety and security, it becomes critical to develop methodologies that explain their predictions. Such methodologies would make medical decisions more trustworthy and reliable for physicians, which could ultimately facilitate deployment. On the other hand, it is also essential to develop ML systems that are interpretable and transparent by design. For instance, by exploiting structured knowledge or prior clinical information, one can design models whose learned representations align more closely with clinical reasoning. Such designs may also help mitigate biases in the learning process and identify the variables most relevant to medical decisions.
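As a minimal sketch of the two directions above (not an official example from the workshop), the snippet below trains a transparent linear classifier on synthetic stand-ins for clinical features, reads off its coefficients, and then applies a post-hoc permutation-importance check. The feature names (age, systolic_bp, glucose, bmi, heart_rate) and the data are hypothetical placeholders, and scikit-learn is assumed to be available.

    # A minimal sketch of interpretable clinical prediction (hypothetical data).
    from sklearn.datasets import make_classification
    from sklearn.inspection import permutation_importance
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Synthetic stand-ins for clinical variables; real work would use real records.
    feature_names = ["age", "systolic_bp", "glucose", "bmi", "heart_rate"]
    X, y = make_classification(n_samples=500, n_features=5, n_informative=3,
                               random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # A linear model is transparent by design: each coefficient is a per-feature
    # effect on the log-odds of the predicted outcome.
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    for name, coef in zip(feature_names, model.coef_[0]):
        print(f"{name}: coefficient {coef:+.3f}")

    # A post-hoc explanation method: permutation importance estimates how much
    # held-out performance drops when each feature is shuffled.
    result = permutation_importance(model, X_test, y_test, n_repeats=20,
                                    random_state=0)
    for name, imp in zip(feature_names, result.importances_mean):
        print(f"{name}: importance {imp:.3f}")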

Review policy on Publons
  • Does not allow reviews to be publicly displayed
  • Only allows reviewers to display the journal they reviewed for
Reviews

< 10

In accordance with Interpretable Machine Learning in Healthcare's editorial policy, review content is not publicly displayed on Publons.

Editorial board members on Publons

No one has yet noted that they are on Interpretable Machine Learning in Healthcare's editorial board.

Endorsed by

No one has yet endorsed Interpretable Machine Learning in Healthcare.
