Explainable AI Research Topics
Explainable AI is one of the most active recent research areas and is now applied in many fields and applications, since its outputs can be understood and trusted by humans. Here we provide several topics related to Explainable AI, along with the background needed to understand and interpret them:
- Define Explainable AI
At the beginning of the research, we first look at the definition of Explainable AI (XAI): it refers to the ability of machine learning techniques and artificial intelligence (AI) models to offer transparent, interpretable, and understandable explanations for their predictions and decisions. In essence, XAI aims to bridge the gap between the complicated inner workings of AI methods and the need for humans to understand and trust AI-driven findings. It covers AI methods that not only produce accurate predictions but also offer insight into why and how those predictions are generated, making the decision-making process more transparent and explainable. XAI is especially essential in critical applications such as autonomous systems, healthcare, and finance, where human trust and interpretation are important.
- What is Explainable AI?
After the definition, we look at a more detailed explanation of Explainable AI. It describes the idea of designing and enhancing machine learning methods and artificial intelligence (AI) models in a way that makes their decision-making processes understandable, transparent, and interpretable to humans. It aims to offer insight into how AI systems arrive at their conclusions, making it possible for users to interpret and trust the AI's findings.
- Where is Explainable AI used?
Following the detailed explanation, we discuss where XAI is used. It is employed in various applications and fields where understandability, trust, and transparency in AI systems are important. The major fields in which it is used include Legal and Compliance, Manufacturing and Quality Control, Finance, Healthcare, Customer Support and Chatbots, and Autonomous Vehicles.
- Why is Explainable AI proposed? Previous technology issues
XAI is proposed in this research, and it overcomes many issues of previous technologies. The issues addressed by this research are overfitting, class imbalance in EEG datasets, absence of interpretability, and absence of data accessibility. The technology is created and proposed for several significant reasons: bias mitigation, regulatory compliance, improved decision-making, transparency and trust, accountability, and safety-critical applications are the motivations for developing these methods.
- Algorithms / protocols
In this research, the XAI technology is proposed, and it addresses several problems of existing technologies. The methods used in this research are the Deep customized LOngitudinal convolutional RIsk model (DeepLORI), the short-time Fourier transform (STFT), the spatiotemporal-spectral hierarchical graph convolutional network with an active preictal interval learning scheme (STS-HGCN-AL), the Discrete Wavelet Transform (DWT), the Channel Attention Dual-input Convolutional Neural Network (CADCNN), and the convolutional neural network (CNN). A short sketch of the two signal-processing steps (STFT and DWT) is given below.
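The following is a minimal sketch, in Python, of the two feature-extraction steps named above (STFT and DWT) applied to a synthetic EEG-like signal; it assumes the `scipy` and `pywt` packages are installed, and the sampling rate, window length, and wavelet choice are illustrative assumptions rather than values taken from the works above.

```python
# Sketch of STFT and DWT feature extraction on a toy EEG-like signal.
# All parameter values here are assumptions for illustration only.
import numpy as np
from scipy.signal import stft
import pywt

fs = 256                                   # assumed EEG sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)  # toy signal

# Short-time Fourier transform: a time-frequency map usable as CNN input
f, seg_t, Zxx = stft(eeg, fs=fs, nperseg=128)
spectrogram = np.abs(Zxx)                  # magnitude spectrogram
print("STFT shape (freq bins x time frames):", spectrogram.shape)

# Discrete wavelet transform: multi-resolution coefficients as features
coeffs = pywt.wavedec(eeg, wavelet="db4", level=4)
features = np.concatenate([np.array([c.mean(), c.std()]) for c in coeffs])
print("DWT feature vector length:", features.size)
```

The spectrogram can be fed to a 2-D CNN such as the CADCNN mentioned above, while the DWT statistics give a compact feature vector for classical classifiers.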
- Comparative study / Analysis
In the comparative analysis, we compare previous methods against the present one. The existing technique proposes a machine learning based method for detecting and forecasting seizures by employing phase-amplitude coupling (PAC) and frequency-domain analysis. The datasets explored are the Siena Scalp EEG database and the CHB-MIT database. For the time-dependent measure, the Modulation Index and frequency characteristics are used, and the obtained features are used to make predictions and categorize the data by training a random forest classifier. For intervention and treatment, while preserving the prediction accuracy, the best-performing band is used to examine the seizure prediction horizon (SPH) in an attempt to increase the time available. Taken together, the results imply that enhanced performance is possible, with average accuracies of 85.71% on the Siena Scalp EEG database and 95.87% on the CHB-MIT database at an interval of 5 minutes. The combination of PAC analysis and classification assists seizure identification and forecasting in the adult databases. The study also highlights the rarely used SPH, which essentially impacts seizure forecasting and identification, calling for more research on the utilization of PAC. A hedged sketch of this PAC-plus-random-forest pipeline is given below.
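Below is a hedged sketch of such a pipeline: phase-amplitude coupling measured with the Tort Modulation Index, followed by a random forest classifier. The band edges, bin count, segment length, and the synthetic data and labels are all assumptions for illustration; they are not the settings of the compared study.

```python
# PAC via the Tort Modulation Index, then a random forest classifier.
# Synthetic data and all parameters are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def bandpass(x, lo, hi, fs, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def modulation_index(x, fs, phase_band=(4, 8), amp_band=(30, 80), n_bins=18):
    """Tort-style MI: normalized KL divergence of the phase-binned
    amplitude distribution from the uniform distribution."""
    phase = np.angle(hilbert(bandpass(x, *phase_band, fs)))
    amp = np.abs(hilbert(bandpass(x, *amp_band, fs)))
    bins = np.linspace(-np.pi, np.pi, n_bins + 1)
    mean_amp = np.array([amp[(phase >= bins[i]) & (phase < bins[i + 1])].mean()
                         for i in range(n_bins)])
    p = mean_amp / mean_amp.sum()
    return (np.log(n_bins) + np.sum(p * np.log(p))) / np.log(n_bins)

# Toy dataset: one MI feature per 5-second segment, random binary labels
fs = 256
rng = np.random.default_rng(0)
X = np.array([[modulation_index(rng.standard_normal(5 * fs), fs)]
              for _ in range(60)])
y = rng.integers(0, 2, size=60)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("Toy accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```

On real EEG, the MI would be computed per channel and per candidate frequency band, and the best-performing band would then be used when evaluating the SPH.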
- Simulation results / Parameters
XAI is proposed to detect and identify seizures in this research; moreover, it addresses several issues of existing technologies. The performance metrics utilized in this research are Precision, Sensitivity, Accuracy, F1 score, and Recall; they can be computed as in the sketch below.
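A short Python sketch of these metrics using scikit-learn follows; the label vectors are hypothetical placeholders, not outputs of this research. Note that sensitivity and recall are the same quantity.

```python
# Computing the listed evaluation metrics with scikit-learn.
# y_true and y_pred are hypothetical labels for illustration only.
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score)

y_true = [1, 0, 1, 1, 0, 1, 0, 0]   # hypothetical seizure / non-seizure labels
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]   # hypothetical model predictions

print("Accuracy :", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred))
print("Recall   :", recall_score(y_true, y_pred))   # recall == sensitivity
print("F1 score :", f1_score(y_true, y_pred))
```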
- Dataset Links / Important URLs
Here we offer a few links that provide information about the XAI technique, which are useful when investigating Explainable AI based queries.
- https://link.springer.com/article/10.1007/s11042-023-15052-2
- https://ieeexplore.ieee.org/abstract/document/9440862/
- https://www.sciencedirect.com/science/article/pii/S037843712100649X
- https://www.sciencedirect.com/science/article/pii/S0957417422020280
- https://www.frontiersin.org/articles/10.3389/frai.2021.610197/full
- https://www.sciencedirect.com/science/article/pii/S0010482522004851
- Explainable AI Applications
Explainable AI is now utilized in a broad range of applications across different industries and fields. Some of the important applications are Finance, Agriculture, Environmental Monitoring, Legal and Compliance, Cybersecurity, Healthcare and Medicine, Autonomous Vehicles, Customer Service, and Manufacturing and Industry.
- Topology for Explainable AI
Now we look at the topology employed for Explainable AI, which generally defines the structure or framework of XAI methods, i.e., the techniques utilized to improve the understandability and transparency of the AI model. Several general topologies or structural methodologies for XAI are Interpretable Models, Rule-Based Models, Hybrid Approaches, Local Explanations, Visualization Techniques, Post-hoc Explainability Tools, Model-Specific Approaches, and Global Explanations; a small local-explanation sketch is given below.
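The following is a minimal sketch of the "local explanation" topology using LIME (assuming the third-party `lime` package is installed); the trained model, feature names, class names, and data are illustrative stand-ins, not artifacts of this research.

```python
# A local, post-hoc explanation for one prediction using LIME.
# Model, data, and all names are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
X_train = rng.standard_normal((200, 4))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=["f0", "f1", "f2", "f3"],   # assumed feature names
    class_names=["interictal", "preictal"],   # assumed class names
    mode="classification",
)
exp = explainer.explain_instance(X_train[0], model.predict_proba, num_features=4)
print(exp.as_list())   # (feature condition, weight) pairs for this one sample
```

Each weight shows how much a feature pushed this single prediction toward or away from the predicted class, which is exactly the per-instance view that distinguishes local explanations from global ones.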
- Environment in Explainable AI
Explainable AI must work across different environments, both in terms of its application fields and the technical frameworks incorporated to create and organize XAI models. The primary aspects of the environments in which XAI is used are the AI Model Environment, Interpretation Environment, Data Environment, Deployment Environment, Regulatory Environment, and Development and Testing Environment.
- Simulation tools
For this research on XAI, the dataset we used is the CHB-MIT DB.csv dataset. The software requirements are as follows: the tool utilized to implement the research is Matlab R2020a (or a later version) or Python 3.11.4, and the work is executed on the Windows 10 (64-bit) operating system. A hedged loading sketch for the dataset is given below.
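A short Python loading sketch for the CHB-MIT DB.csv file named above follows; the column layout (feature columns plus a "label" column) is an assumption, as the file's actual schema is not specified here.

```python
# Loading the CSV dataset with pandas; the "label" column name is assumed.
import pandas as pd

df = pd.read_csv("CHB-MIT DB.csv")
X = df.drop(columns=["label"]).values   # assumed feature columns
y = df["label"].values                  # assumed target column
print(df.shape, df.columns.tolist()[:5])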
- Results
Explainable AI is proposed in this research, and it overcomes several issues of existing technologies. It is compared with different methods/techniques and contrasted across various performance metrics to attain accurate findings. The technique can be implemented by employing tools such as Matlab R2020a (or a later version) or Python 3.11.4.
Explainable AI Research Ideas:
The following are research topics relevant to the Explainable AI technique. These topics are useful for surveying XAI-based uses, applications, techniques, methods, and other details relevant to the technique.
- Explainable AI (XAI) for AI-Acceptability: The Coming Age of Digital Management 5.0
- Extraction of Important Temporal Order for eXplainable AI on Time-series data
- MRI image based Ensemble Voting Classifier for Alzheimer’s Disease Classification with Explainable AI Technique
- Human Centered Explainable AI Framework for Military Cyber Operations
- Leveraging Explainable AI Methods Towards Identifying Classification Issues on IDS Datasets
- Unveiling the Contributing Features in Heart Disease Occurrence: An Explainable AI Approach
- Explainable AI for Cheating Detection and Churn Prediction in Online Games
- Ensemble machine-learning model for solar radiation prediction using explainable AI
- Explainable AI via Linguistic Summarization of Black Box Computer Vision Models
- XAI-LCS: Explainable AI-Based Fault Diagnosis of Low-Cost Sensors
- Compressing Deep Neural Networks Using Explainable AI
- XAI-AMD-DL: An Explainable AI Approach for Android Malware Detection System Using Deep Learning
- Feature Relevance in NAT Detection Using Explainable AI
- A Standard Baseline for Software Defect Prediction: Using Machine Learning and Explainable AI
- Explainable AI for Deep Learning Based Potato Leaf Disease Detection
- Classification of kidney abnormalities using deep learning with explainable AI
- A Comparative Analysis of Explainable AI Techniques for Enhanced Model Interpretability
- Linking Team-level and Organization-level Governance in Machine Learning Operations through Explainable AI and Responsible AI Connector
- Improving Prospective Healthcare Outcomes by Leveraging Open Data and Explainable AI
- PNEXAI: An Explainable AI Driven Decipherable Pneumonia Classification System Leveraging Ensemble Neural Network
- Securing Federated Learning through Blockchain and Explainable AI for Robust Intrusion Detection in IoT Networks
- Quantitative Explainable AI For Face Recognition
- An Explainable AI model in the assessment of Multiple Sclerosis using clinical data and Brain MRI lesion texture features
- Data-Driven Early Diagnosis of Chronic Kidney Disease: Development and Evaluation of an Explainable AI Model
- Explainable AI Based Malaria Detection Using Lightweight CNN
- Towards Explainable AI Validation in Industry 4.0: A Fuzzy Cognitive Map-based Evaluation Framework for Assessing Business Value
- Explainable AI based Maternal Health Risk Prediction using Machine Learning and Deep Learning
- Interpretable Lung Cancer Detection using Explainable AI Methods
- Explainable AI for Enhanced Interpretation of Liver Cirrhosis Biomarkers
- Explainable AI (XAI): Explained
- Do Explainable AI techniques effectively explain their rationale? A case study from the domain expert’s perspective
- Effect of CLAHE-based Enhancement on Bean Leaf Disease Classification through Explainable AI
- A 26.55TOPS/W Explainable AI Processor with Dynamic Workload Allocation and Heat Map Compression/Pruning
- Wireless Capsule Endoscopy Image Classification: An Explainable AI Approach
- Explainable AI and Transfer Learning in the Classification of PET Cardiac Perfusion Polar Maps
- Requirements Engineering for Explainable AI
- Learning with Explainable AI-Recommendations at School: Extracting Patterns of Self-Directed Learning from Learning Logs
- Transparency in Medicine: How eXplainable AI is Revolutionizing Patient Care
- Explainable AI for Industrial Alarm Flood Classification Using Counterfactuals
- An Explainable AI Predictor to Improve Clinical Prognosis for Acute Respiratory Distress Syndrome
- Explainable AI for Bearing Fault Detection Systems: Gaining Human Trust
- Enhancing Supplier Selection through Explainable AI: A Transparent and Interpretable Approach
- Classification of Gastrointestinal Cancer through Explainable AI and Ensemble Learning
- Developing an Explainable AI Model for Predicting Patient Readmissions in Hospitals
- SignExplainer: An Explainable AI-Enabled Framework for Sign Language Recognition With Ensemble Learning
- Comprehensive Analysis Over Centralized and Federated Learning-Based Anomaly Detection in Networks with Explainable AI (XAI)
- Decision Tree-Based Explainable AI for Diagnosis of Chronic Kidney Disease
- Enhancing Hate Speech Detection through Explainable AI
- LIME-based Explainable AI Models for Predicting Disease from Patient’s Symptoms
- Explainable AI for CPS-Based Manufacturing Workcell