Scientists at the University of Bristol’s School of Computer Science have developed methods for mitigating AI “hallucinations” and improving anomaly detection algorithms in Critical National Infrastructure (CNI).
Recent advances in AI have drawn attention to its use for detecting anomalies in the sensor and actuator data of CNI. However, these AI algorithms have drawbacks, including long training times and difficulty pinpointing which specific components are in an abnormal state. AI has also been criticised for the opacity of its decision-making.
To address these issues, the Bristol team implemented several measures to boost efficiency and reliability:
- Enhanced Anomaly Detection: The researchers employed two state-of-the-art anomaly detection algorithms that require far less training time and detect anomalies more quickly, without sacrificing efficiency. The algorithms were tested on a dataset from the operational Secure Water Treatment (SWaT) testbed at the Singapore University of Technology and Design (a simplified sketch of this kind of detector, with an operator-facing explanation, appears after this list).
- Explainable AI Integration: To make the black-box models more transparent and build operator trust, the researchers integrated eXplainable AI (XAI) models into the anomaly detectors. This lets human operators understand and verify the AI’s recommendations before making a final decision. The team also compared existing XAI models to assess which are most helpful to human comprehension.
- Human-Centric Decision Making: The work stresses that humans must remain involved in AI-assisted decisions. Operators are reminded that the AI is a recommendation tool, not an oracle to be followed blindly. This approach preserves accountability, because humans make the final decision, informed by the AI’s output together with policy, rules, and regulations.
- Scoring System Development: A scoring system that measures the perceived correctness of, and confidence in, the AI’s explanation is under development. This helps human operators gauge how much to trust the information the AI provides (a hypothetical sketch of such a score follows the detector sketch below).
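The article does not name the two detection algorithms or the XAI models the team used, so the sketch below is illustrative only: it assumes an Isolation Forest as a stand-in anomaly detector trained on SWaT-style sensor readings, and a simple per-feature deviation score as a stand-in for the explanation an operator would review. The sensor names are hypothetical.

```python
# Illustrative sketch only: the Bristol study's actual detectors and XAI models
# are not named in this article. An Isolation Forest plays the role of the
# anomaly detector, and a per-feature deviation score plays the role of the
# explanation shown to a human operator. Feature names are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
FEATURES = ["flow_rate", "tank_level", "pump_current", "valve_position"]  # hypothetical

# Normal operating data (stand-in for SWaT training records).
X_train = rng.normal(loc=[2.5, 0.8, 5.0, 0.5], scale=[0.1, 0.05, 0.2, 0.05],
                     size=(5000, len(FEATURES)))

detector = IsolationForest(n_estimators=100, contamination=0.01, random_state=0)
detector.fit(X_train)

# A suspicious reading: tank level and pump current drift away from normal.
x_new = np.array([[2.5, 1.4, 6.2, 0.5]])

is_anomaly = detector.predict(x_new)[0] == -1          # -1 flags an anomaly
anomaly_score = -detector.decision_function(x_new)[0]  # higher = more anomalous

# Simple explanation: how far each sensor sits from its normal range,
# in standard deviations, so the operator can see *which* components look wrong.
mu, sigma = X_train.mean(axis=0), X_train.std(axis=0)
attribution = np.abs((x_new[0] - mu) / sigma)

print(f"anomaly: {is_anomaly}, score: {anomaly_score:.3f}")
for name, contrib in sorted(zip(FEATURES, attribution), key=lambda p: -p[1]):
    print(f"  {name}: {contrib:.1f} sigma from normal")
```

In the study, the explanation comes from dedicated XAI models rather than a heuristic like this; the point is only that the recommendation and its rationale are surfaced to the operator, who retains responsibility for the decision.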
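The scoring system itself is still in development and its definition is not given in the article. The following is a purely hypothetical sketch of how correctness and confidence signals might be blended into a single operator-facing score; every field name and weight is invented for illustration.

```python
# Hypothetical scoring-system sketch. Nothing here comes from the Bristol paper;
# the fields and weights are invented to illustrate summarising correctness and
# confidence of an AI explanation into one number for an operator.
from dataclasses import dataclass

@dataclass
class ExplanationAssessment:
    detector_confidence: float       # 0..1, derived from the detector's anomaly score
    explanation_plausibility: float  # 0..1, e.g. a reviewer's rating of the explanation
    historical_precision: float      # 0..1, fraction of similar past alerts that were real

def explanation_score(a: ExplanationAssessment,
                      weights=(0.4, 0.3, 0.3)) -> float:
    """Weighted blend of the three signals, clipped to [0, 1]."""
    raw = (weights[0] * a.detector_confidence
           + weights[1] * a.explanation_plausibility
           + weights[2] * a.historical_precision)
    return max(0.0, min(1.0, raw))

score = explanation_score(ExplanationAssessment(0.82, 0.7, 0.9))
print(f"explanation trust score: {score:.2f}")  # the operator still makes the call
```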
These innovations not only strengthen the capability of AI in CNI but also keep human oversight central, reducing the chance of mistakes and reinforcing accountability and reliability.
Dr. Sarad Venugopalan, co-author of the study, explained, “Humans learn by repetition over a longer period and work for shorter hours without being error-prone. This is why, in some cases, we use machines that can carry out the same tasks in a fraction of the time and at a reduced error rate. However, this automation, involving cyber and physical components, and subsequent use of AI to solve some of the issues brought by the automation, is treated as a black box. This is detrimental because the personnel using the AI recommendation is held accountable for the decisions made by them, not the AI itself. In our work, we use explainable AI to increase transparency and trust, so the personnel using the AI is informed why the AI made the recommendation before a decision is made.”