Over the past few years, Artificial Intelligence has arguably become the most impactful technology, mainly through the rise of deep learning, in which computer systems learn from data using networks loosely inspired by the way human brains learn. Since its introduction, enormous advances have been made, and virtually every industry already benefits, or could benefit greatly, from deep learning capabilities. This is especially true for healthcare, where AI models already match or outperform leading medical experts on specific diagnostic tasks. They can, for instance, predict a person's chances of dying within a year from electrocardiogram (ECG) recordings, even when those recordings look normal to doctors, or flag the onset of psychiatric disorders such as schizophrenia, something that is extremely difficult for any human physician to predict.
That's the good, or even great, news. But there is a 'slight' problem, too. No one, not even their creators, knows exactly how these AI models reach their conclusions, and we tend to trust decision-making algorithms only when we understand how those conclusions are reached. It is therefore fair to say that the 'explainability' of AI may be a key stumbling block to its acceptance.
Beyond this lack of transparency, the ethics of algorithms has also become a hotly debated topic. Biases and human prejudices hidden in the training data are inherited by the algorithms trained on it, potentially leading to unfair or even plainly wrong decisions. Fortunately, IBM Research this year released open-source AI bias detection and mitigation tools and resources, with the primary goal "to encourage global collaboration around addressing bias in AI".
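To make this concrete, here is a minimal sketch of the kind of group-fairness check that toolkits such as IBM's AI Fairness 360 automate. It computes two common metrics, statistical parity difference and disparate impact, on a tiny, entirely hypothetical loan-approval table; the column names and data are illustrative assumptions, not part of any real system.

```python
import pandas as pd

# Hypothetical loan-approval outcomes; column names and values are illustrative only.
df = pd.DataFrame({
    "gender":   ["F", "M", "F", "M", "F", "M", "F", "M"],
    "approved": [0,   1,   1,   1,   0,   1,   0,   1],
})

# Rate of favorable outcomes (approvals) per group.
rate_f = df.loc[df["gender"] == "F", "approved"].mean()
rate_m = df.loc[df["gender"] == "M", "approved"].mean()

# Statistical parity difference: 0 means both groups are approved at the same rate.
spd = rate_f - rate_m

# Disparate impact: ratio of approval rates; values far below 1 can flag potential bias.
di = rate_f / rate_m

print(f"approval rate F: {rate_f:.2f}, M: {rate_m:.2f}")
print(f"statistical parity difference: {spd:.2f}")
print(f"disparate impact: {di:.2f}")
```

Detecting such a disparity is only the first step; the mitigation side of these toolkits then reweighs or transforms the data, or adjusts the model, to reduce it.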
Because people are naturally wary of what they do not understand, opening the black box will be crucial in the coming years if AI and deep learning technology are to become widely accepted and adopted. In addition, under the GDPR, individuals are entitled to "meaningful information about the logic involved" whenever automated decisions are made about them.
New techniques are being developed as we speak to have AI explain what it does. These include, among others, having the algorithm highlight the input data that contributed most to its predictions (so-called feature attribution), or even running parts of the network in reverse to visualize what it has learned; a simple sketch of the first idea follows below.
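As a rough illustration of that first idea, the sketch below uses permutation importance, one simple attribution technique among many (not necessarily the one any given system uses): it shuffles each input feature in turn and measures how much the model's accuracy drops, so the features the model relied on most stand out. The dataset and model here are stand-ins chosen purely for the example.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Train a model on a public diagnostic dataset (used here purely as an illustration).
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure how much the
# model's held-out accuracy drops; the bigger the drop, the more the model relied on it.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Show the five features that contributed most to the model's predictions.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[idx]}: {result.importances_mean[idx]:.3f}")
```

The appeal of this kind of explanation is that it speaks the language of the domain expert: a physician is told which measurements drove a prediction instead of being handed millions of network weights.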
But will it ever be possible for AI to explain its behavior in full? Can we as humans fully explain our own behavior and decision-making? I look forward to 2020 as a pivotal year for Explainable Artificial Intelligence, so that we can continue, or in some areas start, to benefit from AI for good.