The Cutting Edge of Deep Learning for Diagnostic Support: Innovations in Audio-based Healthcare Applications
Advances in deep learning have generated large-scale interest in black-box models for use cases across domains such as healthcare, in both at-home and critical-care settings, for the diagnosis and monitoring of various health conditions. The use of audio signals as a diagnostic view is nascent, and the success of deep learning models at ingesting multimedia data creates an opportunity to use audio as a diagnostic medium.
For widespread adoption of these decision support systems, it is prudent to develop high-performing models, which require large quantities of training data, together with low-cost data collection methods that make the systems accessible to developing regions of the world and the general population. However, data gathered through low-cost collection, especially with wireless devices, is prone to outliers and anomalies. The presence of outliers skews the model’s hypothesis space and leads to model drift on deployment.
In this paper, we propose a multiview pipeline with interpretable outlier filtering on the small Mendeley Children Heart Sound dataset, collected using a wireless, low-cost digital stethoscope. The pipeline provides dimensionally reduced, interpretable visualizations for understanding how various outlier filtering methods affect the deep learning model’s hypothesis space, and explores fusion strategies for multiple views of heart sound data, namely the raw time-series signal and Mel Frequency Cepstral Coefficients, achieving a state-of-the-art testing accuracy of 98.19%.
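As a minimal sketch of the kind of statistical outlier filtering the pipeline studies, one can flag recordings whose summary statistic falls outside an interquartile-range fence. The per-recording energy values and the 1.5 multiplier below are illustrative assumptions, not the paper’s actual method.

```python
import statistics

def iqr_filter(samples, k=1.5):
    """Keep samples inside [Q1 - k*IQR, Q3 + k*IQR]; drop the rest.

    `samples` is a list of per-recording summary statistics
    (e.g., signal energy); the 1.5 fence multiplier is the
    conventional Tukey choice, assumed here for illustration.
    """
    q = statistics.quantiles(samples, n=4)
    q1, q3 = q[0], q[2]
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [x for x in samples if lo <= x <= hi]

energies = [0.9, 1.1, 1.0, 1.2, 0.95, 9.8]  # 9.8 marks an anomalous recording
print(iqr_filter(energies))  # the 9.8 outlier is removed
```

Filtering such outliers before training keeps the hypothesis space from being skewed by anomalous low-cost-device recordings, as the abstract argues.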
Gamifying Physical Activity: The Trigger Screen Restriction Framework
The growing trend of inactive lifestyles caused by excessive use of mobile devices raises severe concerns about people’s health and well-being. This paper illustrates the technical implementation of the Trigger Screen Restriction (TSR) framework, which integrates advanced technologies, including machine learning and gamification techniques, to address the limitations of traditional gamified physical interventions.
The TSR framework encourages physical activity by leveraging the fear-of-missing-out phenomenon, strategically restricting access to social media applications based on activity goals. The framework’s components, including the Screen Time Restriction, Notification Triggers, Computer Vision Model, and Reward Engine, work together to create an engaging, personalized experience that motivates regular physical activity.
Although the TSR framework represents a potentially significant step forward in gamified physical activity interventions, it remains a theoretical model requiring further investigation and rigorous testing.
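The screen-restriction gate described above can be sketched as a simple policy check. All names, the step-count goal, and the grace-period allowance below are hypothetical illustrations, not part of the TSR specification.

```python
from dataclasses import dataclass

@dataclass
class ActivityGoal:
    target_steps: int
    steps_done: int

def social_media_allowed(goal, grace_minutes_used, grace_limit=15):
    """Hypothetical TSR-style gate: unlock social apps once the daily
    activity goal is met, otherwise allow only a small grace window.
    The goal metric and grace policy are illustrative assumptions."""
    if goal.steps_done >= goal.target_steps:
        return True
    return grace_minutes_used < grace_limit

print(social_media_allowed(ActivityGoal(8000, 8500), 0))   # goal met -> unlocked
print(social_media_allowed(ActivityGoal(8000, 2000), 20))  # grace spent -> locked
```

In the full framework, the Computer Vision Model would supply the activity measurement and the Reward Engine would adjust the unlock policy.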
Sentiment Analysis for Customer Satisfaction: Unlocking the Voice of the Customer
Customer loyalty and customer satisfaction are primary goals of modern business, since these factors indicate customers’ future behaviour and ultimately impact a business’s revenue and value. Customers’ reviews, ratings, and rankings are a primary source for gauging satisfaction levels.
Similar efforts have been reported in the literature; however, no existing solution records customers’ views in real time and analyzes them. In this paper, a novel approach is presented that records, stores, and analyzes live customer reviews, using text mining to perform analysis at various levels. The approach involves steps such as voice-to-text conversion, pre-processing, sentiment analysis, and sentiment report generation. This paper also presents a prototype tool that is the outcome of the present research.
This research not only provides novel functionality in the domain but also outperforms similar solutions.
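After voice-to-text conversion and pre-processing, the sentiment-analysis step can be illustrated with a minimal lexicon-based scorer. The word lists below are tiny illustrative assumptions; a production system would use a trained classifier or a full sentiment lexicon.

```python
# Minimal lexicon-based sentiment scorer; the lexicon is an
# illustrative assumption, not the paper's actual model.
POSITIVE = {"good", "great", "excellent", "friendly", "fast"}
NEGATIVE = {"bad", "slow", "rude", "poor", "terrible"}

def sentiment(review: str) -> str:
    words = review.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("The staff was friendly and the service was fast"))  # positive
print(sentiment("Terrible food and rude waiters"))                   # negative
```

Aggregating such per-review labels over time is what enables the sentiment report generation the paper describes.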
Serverless Computing for AI and Machine Learning
Serverless computing has grown in popularity as a paradigm for deploying applications in the cloud due to its ability to scale, cost-effectiveness, and simplified infrastructure management. Serverless architectures can benefit AI and Machine Learning (ML) models, which are becoming increasingly complex and resource-intensive.
This study investigates the integration of AI/ML frameworks and models into serverless computing environments. It explains the steps involved, including model training, deployment, packaging, function implementation, and inference. Serverless platforms’ auto-scaling capabilities allow for seamless handling of varying workloads, while built-in monitoring and logging features ensure effective management.
Continuous integration and deployment pipelines simplify the deployment process. Using serverless computing for AI/ML models offers developers scalability, flexibility, and cost savings, allowing them to focus on model development rather than infrastructure issues.
The proposed model combines performance forecasting with serverless model deployment on virtual machines, specifically using the Knative platform. Experimental validation demonstrates that the model effectively predicts performance from specific parameters with minimal data collection. The results indicate significant improvements in scalability and cost efficiency while maintaining optimal performance.
This performance model can guide application owners in selecting the best configurations for varying workloads and assist serverless providers in setting adaptive defaults for target value configurations.
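The function-implementation and inference steps mentioned above can be sketched as a minimal serverless handler. The event shape, the toy linear model, and the handler name are illustrative assumptions, not the Knative API or the paper’s deployment.

```python
import json

# Hypothetical serverless inference handler. WEIGHTS stands in for a
# model loaded once at module scope, so warm container instances can
# reuse it across invocations instead of reloading per request.
WEIGHTS = [0.4, 0.6]
BIAS = 0.1

def predict(features):
    return sum(w * x for w, x in zip(WEIGHTS, features)) + BIAS

def handler(event: str) -> str:
    """Entry point invoked once per request with a JSON payload."""
    payload = json.loads(event)
    return json.dumps({"prediction": predict(payload["features"])})

print(handler('{"features": [1.0, 2.0]}'))
```

Keeping model loading outside the handler is the standard way to mitigate cold-start cost on auto-scaling serverless platforms.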
Ontology-based Approach to Facilitate Scientific Collaboration
Researchers in Higher Education (HE) institutions/academia and in industry are continuously engaged in generating new solutions and products for existing and emergent problems. Doing quality research and producing better scientific results depend greatly on solid research teams and scientific collaborators.
Research output in HE institutions and industry can be optimized with appropriate resources in research teams and collaborations with suitable research partners. The main challenge in finding suitable resources for joint research projects and scientific collaborations pertains to the availability of data and metadata of researchers and their scientific work in traditional formats, for instance, websites, portals, documents, and traditional databases.
However, these traditional data sources do not support intelligent and smart ways of finding and querying the right resources for joint research and scientific collaboration. A possible solution lies in deploying Semantic Web (SW) techniques and technologies to represent researchers and their research contributions in a machine-understandable format, which ultimately proves useful for smart and intelligent query answering.
In pursuit of this, we present a general Methodology for Ontology Design and Development (MODD). We also describe the use of this methodology to design and develop Higher Education Ontology (HEO). This HEO can be used to automate various activities and processes in HE. In addition, we describe the use and adoption of the HEO through a case study on the topic of “finding the right resources for joint research and scientific collaboration”.
Finally, we provide an analysis and evaluation of our methodology for posing smart queries and evaluating the results based on machine reasoning.
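The kind of query the HEO is meant to answer can be illustrated with a toy in-memory triple store. In practice this would be RDF/OWL queried via SPARQL with machine reasoning; all researcher names and predicates below are hypothetical.

```python
# Toy subject-predicate-object triples standing in for an RDF graph.
# Names and predicates are illustrative assumptions.
triples = {
    ("alice", "hasExpertise", "semantic_web"),
    ("bob", "hasExpertise", "machine_learning"),
    ("carol", "hasExpertise", "semantic_web"),
    ("carol", "affiliatedWith", "industry"),
}

def find_collaborators(topic):
    """Return researchers whose declared expertise matches the topic,
    mimicking a SPARQL pattern like ?r hasExpertise <topic>."""
    return sorted(s for (s, p, o) in triples
                  if p == "hasExpertise" and o == topic)

print(find_collaborators("semantic_web"))  # ['alice', 'carol']
```

A real deployment gains its power from reasoning over class hierarchies (e.g., inferring that a sub-topic expert matches a broader topic), which this flat lookup cannot do.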
IoT Security: Advancing Anomaly Detection and Behavior Analysis with Machine Learning
The integration of Internet of Things (IoT) technologies in hospital environments has introduced transformative changes in patient care and operational efficiency. However, this increased connectivity also presents significant cybersecurity challenges, particularly concerning the protection of patient data and healthcare operations.
This research explores the application of advanced machine learning models, specifically LSTM-CNN hybrid architectures, for anomaly detection and behavior analysis in hospital IoT ecosystems. Employing a mixed-methods approach, the study applies LSTM-CNN models to the Mobile Health Human Behavior Analysis dataset to analyze human behavior in a hospital cybersecurity context.
The model architecture, tailored to the dynamic nature of hospital IoT activities, features a layered design. Training accuracy attains an impressive 99.53%, underscoring the model’s proficiency in learning from the training data. On the testing set, the model exhibits robust generalization with an accuracy of 91.42%.
This paper represents a significant advancement in the convergence of AI and healthcare cybersecurity. The model’s efficacy and promising outcomes underscore its potential deployment in real-world hospital scenarios.
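To make the anomaly-detection idea concrete, here is a simple sliding-window z-score detector over a sensor time series. This statistical baseline is a stand-in for the paper’s LSTM-CNN model; the window size, threshold, and traffic values are illustrative assumptions.

```python
import statistics

def zscore_anomalies(series, window=5, threshold=3.0):
    """Flag each point whose z-score relative to the preceding window
    exceeds the threshold. A deliberately simple stand-in for the
    learned LSTM-CNN detector described in the abstract."""
    flags = []
    for i in range(window, len(series)):
        ref = series[i - window:i]
        mu = statistics.fmean(ref)
        sd = statistics.pstdev(ref) or 1e-9  # avoid division by zero
        flags.append(abs(series[i] - mu) / sd > threshold)
    return flags

traffic = [10, 11, 9, 10, 12, 10, 11, 95, 10]  # 95 = suspicious burst
print(zscore_anomalies(traffic))
```

The learned model plays the same role as the rolling baseline here, but captures temporal patterns (LSTM) and local structure (CNN) that a fixed z-score cannot.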
Enhanced Attention-based Defect Recognition for Tile Surfaces
To address the currently low average precision (AP) of tile defect detection and the incomplete coverage of defect types, this paper proposes YOLO-SA, a detection neural network based on an enhanced attention mechanism and feature fusion. We propose an enhanced attention mechanism, named the amplified attention mechanism, to reduce the attenuation of defect information within the neural network and improve its AP.
We then use the EIoU loss function and four-layer feature fusion, and let the backbone network participate directly in detection, among other methods, to construct an effective tile defect detection and recognition model, YOLO-SA.
In the experiments, this network achieves better results, with an improvement of 8.15 percentage points over YOLOv5s and 8.93 percentage points over YOLOv8n. The proposed model therefore has high application value for tile defect recognition.
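The EIoU loss mentioned above can be sketched directly: it extends IoU loss with a normalized center-distance penalty plus separate width and height penalties. The box coordinates below are illustrative; a real detector computes this over tensors, not tuples.

```python
def eiou_loss(box_a, box_b):
    """EIoU loss for axis-aligned boxes given as (x1, y1, x2, y2):
    1 - IoU + center-distance, width, and height penalty terms,
    each normalized by the smallest enclosing box."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection and union
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    iou = inter / (area_a + area_b - inter + 1e-9)
    # Smallest enclosing box
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    # Penalties: squared center distance, width gap, height gap
    center = ((ax1 + ax2 - bx1 - bx2) ** 2 + (ay1 + ay2 - by1 - by2) ** 2) / 4
    pen_c = center / (cw ** 2 + ch ** 2 + 1e-9)
    pen_w = ((ax2 - ax1) - (bx2 - bx1)) ** 2 / (cw ** 2 + 1e-9)
    pen_h = ((ay2 - ay1) - (by2 - by1)) ** 2 / (ch ** 2 + 1e-9)
    return 1 - iou + pen_c + pen_w + pen_h

print(eiou_loss((0, 0, 2, 2), (0, 0, 2, 2)))  # identical boxes -> ~0.0
```

Penalizing width and height gaps directly (rather than only aspect ratio, as CIoU does) is what speeds up box regression for small defects.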
Adaptive Tracking Control of Tendon-Driven Robot Arm with Radial Basis Function Neural Network
This study combines tendon-drive theory with a radial basis function (RBF) neural network to construct a robotic arm model, and then combines the backstepping method with non-singular fast terminal sliding mode control to improve the controller and optimize the tendon-driven robotic arm system.
Simulation tests on a commercial mathematical software platform showed that joint 2 achieves stable overlap of its position and velocity trajectories after 0.2 s and 0.5 s, with errors of 1° and 1°/s, respectively. The RBF neural network’s approximation of the arm’s error converged to the true value at 14 s. The optimized joint achieved accurate trajectory tracking after 0.2 s. The control torque of joint 2 changes at 1.5 s, 4.5 s, and 8 s, and the changes are small. The tendon tension curve was smoother and more stable, staying within -0.05 N to 0.05 N, showing the superiority of the robotic arm model after controller optimization, and the disturbance observer accurately estimated the tracking trajectory of the tendon-driven arm.
Therefore, the RBF-based adaptive tracking algorithm offers higher accuracy for the tendon-driven robotic arm model and provides a technical reference for intelligent robotic arm control systems.
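The role of the RBF network above is to approximate an unknown function online via an adaptive weight-update law. Here is a minimal sketch of that idea: Gaussian basis functions with weights adapted by a gradient rule. The centers, width, learning rate, and the sine target are all illustrative assumptions, not the paper’s controller.

```python
import math

def gaussian_basis(x, centers, width=1.0):
    """Gaussian RBF activations for a scalar input."""
    return [math.exp(-((x - c) / width) ** 2) for c in centers]

def train_rbf(samples, centers, rate=0.5, epochs=200):
    """Online adaptation w <- w + rate * phi(x) * error, mirroring the
    adaptive-law style used in RBF-based controllers. All parameters
    here are illustrative assumptions."""
    weights = [0.0] * len(centers)
    for _ in range(epochs):
        for x, y in samples:
            phi = gaussian_basis(x, centers)
            err = y - sum(w * p for w, p in zip(weights, phi))
            weights = [w + rate * err * p for w, p in zip(weights, phi)]
    return weights

centers = [-1.0, 0.0, 1.0]
samples = [(x / 10, math.sin(x / 10)) for x in range(-10, 11)]
weights = train_rbf(samples, centers)
phi = gaussian_basis(0.5, centers)
approx = sum(w * p for w, p in zip(weights, phi))
print(abs(approx - math.sin(0.5)))  # small approximation error at x = 0.5
```

In the control setting, the same update runs in closed loop, with the tracking error driving the weight adaptation instead of labeled samples.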
Hybrid Algorithm for Cloud Computing Task Scheduling Optimization
With the development of cloud computing technology, effective task scheduling can help improve work efficiency. This study therefore presents a hybrid algorithm, based on simulated annealing and tabu search, to optimize a cloud computing task scheduling model. The model converts the objective function into an energy function so that, as in physical annealing, candidate solutions quickly settle according to a cooling rule toward the optimal solution.
The study analyzed the model through simulation experiments, and the experiment showed that the optimal value of the hybrid algorithm in high-dimensional unimodal testing was 7.15E-247, far superior to the whale optimization algorithm’s 3.99E-28 and the grey wolf optimization algorithm’s 1.10E-28.
The completion time of the hybrid algorithm decreased as the number of virtual machines grew, with a shortest time of 8.6 seconds. However, its load balancing degree increased with the number of virtual machines.
The final results indicated that the proposed hybrid algorithm exhibits high efficiency and superior performance in cloud computing task scheduling, especially when dealing with large-scale and complex optimization problems.
Ensemble Empirical Mode Decomposition with Sparse Bayesian Learning for Landslide Displacement Prediction
Inspired by the principles of decomposition and ensemble, we introduce an Ensemble Empirical Mode Decomposition (EEMD) method that incorporates Sparse Bayesian Learning (SBL) with Mixed Kernel, referred to as EEMD-SBLMK, specifically tailored for landslide displacement prediction.
EEMD and Mutual Information (MI) techniques were jointly employed to identify potential input variables for our forecast model. Additionally, each selected component was trained using distinct kernel functions. By minimizing the number of Relevance Vector Machine (RVM) rules computed, we achieved an optimal balance between kernel functions and selected parameters.
The EEMD-SBLMK approach generated final results by summing the prediction values of each subsequence along with the residual function associated with the corresponding kernel function.
To validate the performance of our EEMD-SBLMK model, we conducted a real-world case study on the Liangshuijing (LSJ) landslide in China. Compared with RVM-Cubic and RVM-Bubble, EEMD-SBLMK emerged as the most effective method, delivering superior results on the same evaluation metrics.
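The decompose-predict-sum pattern described above can be illustrated with a deliberately simplified stand-in: a moving average splits the series into trend and residual components, each component gets its own predictor, and the component forecasts are summed. The displacement values, window size, and per-component predictors are all illustrative assumptions; the paper uses EEMD and kernel-specific RVMs instead.

```python
def moving_average(x, w):
    """Simple trend extraction; stands in for EEMD's intrinsic mode
    decomposition, which is too involved to sketch here."""
    half = w // 2
    out = []
    for i in range(len(x)):
        seg = x[max(0, i - half):i + half + 1]
        out.append(sum(seg) / len(seg))
    return out

def decompose(x, w=5):
    trend = moving_average(x, w)
    residual = [a - b for a, b in zip(x, trend)]
    return trend, residual

def linear_extrapolate(component):
    """Toy trend predictor: continue the last step."""
    return component[-1] + (component[-1] - component[-2])

displacement = [10.0, 10.4, 11.1, 10.9, 11.8, 12.3, 12.1, 13.0]
trend, residual = decompose(displacement)
# Forecast each component with its own (toy) model, then sum them back up
forecast = linear_extrapolate(trend) + sum(residual[-3:]) / 3
print(round(forecast, 2))
```

The point of the decomposition is exactly this freedom to match each component with a model suited to it, which is what the mixed-kernel SBL provides in the paper.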
Predicting Student Performance in Math Courses Using Machine Learning
Predicting students’ performance from demographic information makes it possible to estimate their results before a course begins. Filter and wrapper feature selection methods were used to find the 10 most important features for predicting the final math grades of 395 students.
Then, all the features of the data set as well as the 10 selected features of each of the feature selection methods were used as input for the regression analysis with the Adaboost model. Finally, the prediction performance of each of these feature sets in predicting students’ math grades was evaluated using criteria such as Pearson’s correlation coefficient and mean squared error.
The best result was obtained from feature selection by the LASSO method. After the LASSO method for feature selection, the Extra Tree and Gradient Boosting Machine methods respectively had the best prediction of the final math grade.
The present study showed that the LASSO feature selection technique integrated with regression analysis with the Adaboost model is a suitable data mining framework for predicting students’ mathematical performance.
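The evaluation criteria named above, Pearson’s correlation coefficient and mean squared error, can be written out directly. The grade values below are hypothetical examples on a 0-20 scale, not data from the study.

```python
import math

def pearson_r(y_true, y_pred):
    """Pearson correlation between observed and predicted values."""
    n = len(y_true)
    mt, mp = sum(y_true) / n, sum(y_pred) / n
    cov = sum((a - mt) * (b - mp) for a, b in zip(y_true, y_pred))
    st = math.sqrt(sum((a - mt) ** 2 for a in y_true))
    sp = math.sqrt(sum((b - mp) ** 2 for b in y_pred))
    return cov / (st * sp)

def mse(y_true, y_pred):
    """Mean squared error of the predictions."""
    return sum((a - b) ** 2 for a, b in zip(y_true, y_pred)) / len(y_true)

grades = [12, 15, 9, 18, 11]   # hypothetical final math grades
preds = [11, 16, 10, 17, 12]   # hypothetical model predictions
print(round(pearson_r(grades, preds), 3), mse(grades, preds))
```

A feature set is preferred when it raises the correlation while lowering the MSE, which is how the selection methods were compared.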
Hybrid Model for User Credibility Detection on Twitter
User credibility detection on social networking platforms such as Twitter is crucial to combat the spread of fake news. In this research, we aim to integrate existing solutions from previous research to create a hybrid model.
Our approach is based on selecting and weighting features using supervised machine learning methods such as ExtraTreesClassifier, correlation-based algorithms, and SelectKBest to extract newly ranked and weighted features from the dataset, which are then used to train our model and to measure their impact on the accuracy of user credibility detection.
Experiments are conducted on one of the online available datasets. We seek to employ extracted features from a user’s profile and statistical and emotional information. Then, the experimental results are compared to discover the effectiveness of the proposed solution.
This study focuses on revealing the credibility of Twitter (recently renamed the X platform) accounts; generalizing its results to other social media platforms such as LinkedIn and Facebook may require some adjustments.
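The ranking-and-selection step described above boils down to scoring each feature and keeping the top k, the idea behind SelectKBest. The feature names and scores below are hypothetical examples, not values from the study.

```python
def select_k_best(features, scores, k=3):
    """Rank features by score and keep the top k -- the SelectKBest idea
    in miniature. Feature names and scores are hypothetical."""
    ranked = sorted(zip(features, scores), key=lambda p: p[1], reverse=True)
    return [name for name, _ in ranked[:k]]

features = ["followers", "account_age", "tweet_rate", "emoji_ratio", "verified"]
scores = [0.42, 0.31, 0.18, 0.05, 0.57]  # e.g., chi-squared or correlation scores
print(select_k_best(features, scores))  # ['verified', 'followers', 'account_age']
```

In the hybrid model, scores from several such rankers (tree importances, correlations, univariate tests) would be combined before the cut is made.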
Enhanced Deep Learning Approach for EEG-based Motor Imagery Classification
The classification of motor imagery holds significant importance within brain-computer interface (BCI) research as it allows for the identification of a person’s intention, such as controlling a prosthesis. Motor imagery involves the brain’s dynamic activities, commonly captured using electroencephalography (EEG) to record nonstationary time series with low signal-to-noise ratios.
This research introduces a new deep learning approach based on two-dimensional CNNs with different architectures. Specifically, the networks are fed time-frequency representations of EEGs obtained by the wavelet transform with different mother wavelets (Mexican hat, Cmor, and Cgaus).
The BCI competition IV-2a dataset held in 2008 was utilized for testing the proposed deep learning approaches. Several experiments were conducted, and the results showed that the proposed method achieved better performance than some state-of-the-art methods.
The findings of this study show that the CNN architecture, and specifically the number of convolutional layers, has a significant effect on the classification performance for motor imagery brain data. In addition, the choice of mother wavelet in the wavelet transform strongly influences the classification performance for motor imagery EEG data.
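To make the time-frequency input concrete, here is a sketch of one scalogram row using the Mexican hat (Ricker) mother wavelet. The sampling rate, scale, and test tone are illustrative assumptions; stacking rows over a grid of scales yields the 2-D image fed to the CNN.

```python
import math

def mexican_hat(t):
    """Mexican hat (Ricker) mother wavelet, up to normalization."""
    return (1 - t * t) * math.exp(-t * t / 2)

def cwt_row(signal, scale, fs=250.0):
    """One scalogram row: correlate the signal with a scaled wavelet.
    The sampling rate and scale grid are illustrative assumptions."""
    half = int(4 * scale * fs)  # truncate where the wavelet has decayed
    kernel = [mexican_hat(k / (scale * fs)) for k in range(-half, half + 1)]
    norm = 1 / math.sqrt(scale)
    out = []
    for n in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = n + j - half
            if 0 <= idx < len(signal):
                acc += signal[idx] * w
        out.append(norm * acc / fs)
    return out

# 10 Hz test tone standing in for an EEG rhythm; one second at 250 Hz
sig = [math.sin(2 * math.pi * 10 * n / 250.0) for n in range(250)]
row = cwt_row(sig, scale=0.016)  # scale whose passband covers ~10 Hz
print(max(abs(v) for v in row) > 0.05)  # the row responds to the 10 Hz tone
```

Swapping `mexican_hat` for a complex Morlet or Gaussian-derivative wavelet changes the time-frequency trade-off, which is exactly the effect the study compares.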
Nonlinear Feature Analysis for Mental Workload Recognition Using EEG
In the current research, two nonlinear features were utilized for the design of EEG-based mental workload recognition: one feature based on differential entropy and the other feature based on multifractal cumulants.
Clean EEGs recorded from 36 healthy volunteers in both resting and task states were subjected to feature extraction via differential entropy and multifractal cumulants. Then, these nonlinear features were utilized as input for a fuzzy KNN classifier.
Experimental results showed that the multifractal cumulants feature vector achieved an AUC of 0.951, which is larger than the differential entropy feature vector (AUC = 0.935). However, the combination of both feature sets resulted in added value in identifying these two mental workloads (AUC = 0.993).
Furthermore, the multifractal cumulants feature vector (best classification accuracy = 94.76%) obtained better classification results than the differential entropy feature vector (best classification accuracy = 92.61%). However, the combination of these two feature vectors achieved the best classification results: accuracy of 96.52%, sensitivity of 97.68%, specificity of 95.58%, and F1-score of 96.61%.
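Of the two features above, differential entropy has a standard closed form under a Gaussian assumption, h = 0.5 * ln(2πe * var), which makes the feature extraction easy to sketch. The synthetic “rest” and “task” windows below are illustrative assumptions, not the study’s recordings.

```python
import math
import statistics

def differential_entropy(window):
    """Differential entropy of an EEG window under a Gaussian
    assumption: h = 0.5 * ln(2 * pi * e * var)."""
    var = statistics.pvariance(window)
    return 0.5 * math.log(2 * math.pi * math.e * var)

# Synthetic stand-ins: higher-amplitude activity in the task window
rest = [0.1 * math.sin(0.3 * n) for n in range(256)]
task = [0.5 * math.sin(0.3 * n) for n in range(256)]
print(differential_entropy(rest) < differential_entropy(task))  # True
```

Feature vectors built from such per-window (and per-band) entropies are what the fuzzy KNN classifier consumes, alongside the multifractal cumulants.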
Hybrid Facial Recognition for IoT Authentication
This study aims to address the insufficient model recognition