Submit Paper / Call for Papers
The journal receives papers on a continuous basis and considers articles
from a wide range of Information Technology disciplines, spanning the most
basic research to the most innovative technologies. Please submit your papers
electronically to our submission system at http://jatit.org/submit_paper.php in
MS Word, PDF, or a compatible format so that they may be evaluated for
publication in the upcoming issue. This journal uses a blinded review process;
please include all your personally identifiable information in the manuscript
before submitting it for review, and we will remove the necessary identifying
details on our side. Submissions to JATIT should be full research / review
papers (properly indicated below the main title).
|
Journal of
Theoretical and Applied Information Technology
May 2025 | Vol. 103 No.10 |
Title: |
SOL-AUTOCLUST: A SMART ONLINE-LEARNING AUTOMATED CLUSTERING FRAMEWORK |
Author: |
IBRAHIM GOMAA, HODA M. O. MOKHTAR, NEAMAT EL-TAZI, ALI ZIDANE |
Abstract: |
The automation of machine learning has predominantly focused on supervised
tasks, leaving unsupervised clustering, a critical component of exploratory data
analysis, significantly underdeveloped by existing Auto-ML frameworks. Current
approaches often limit their scope to dataset characteristics, neglecting the
crucial influence of algorithmic suitability (e.g., robustness to outliers) and
user-defined requirements (e.g., interpretability needs). This oversight leads
to suboptimal clustering outcomes, particularly when dealing with complex,
high-dimensional, or noisy data. To address these limitations, this research
introduces SOL-Auto-Clust, a novel end-to-end automated clustering framework
that makes a key contribution by holistically integrating three fundamental
dimensions: inherent data characteristics, intrinsic algorithmic traits, and
explicit user-defined objectives. By employing a meta-feature architecture,
SOL-Auto-Clust dynamically generates customized clustering pipelines, addressing
both data intricacies and real-world application requirements. Extensive
evaluation across diverse datasets highlights the framework's ability to
simplify clustering processes and produce reliable, insightful outcomes, marking
a significant step towards human-aligned Auto-ML for unsupervised learning. |
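The meta-feature-driven selection described above can be illustrated with a small sketch. This is not the authors' SOL-AutoClust implementation; the meta-features, thresholds, and candidate algorithms below are illustrative assumptions showing how data traits (outlier fraction), algorithmic traits (robustness to outliers), and a user requirement (interpretability) can jointly drive algorithm choice:

```python
import numpy as np

def meta_features(X):
    # simple dataset meta-features: size, dimensionality, and the
    # fraction of points flagged as outliers by a z-score test
    z = np.abs((X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12))
    return {
        "n_samples": X.shape[0],
        "n_features": X.shape[1],
        "outlier_frac": float((z > 3).any(axis=1).mean()),
    }

def select_algorithm(mf, needs_interpretability=False):
    # rule-based selection over data traits, algorithmic traits
    # (robustness to outliers), and a user-defined requirement
    if mf["outlier_frac"] > 0.05:
        return "DBSCAN"          # density-based, robust to outliers
    if needs_interpretability:
        return "k-means"         # centroid model is easy to explain
    return "agglomerative"       # flexible default otherwise

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
X[:40] += 20.0                   # inject outliers into 8% of rows
mf = meta_features(X)
print(mf["outlier_frac"], select_algorithm(mf))
```

A full framework would replace the hand-written rules with a learned mapping from meta-features to pipelines, but the three-way dependency is the same.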
Keywords: |
Automated Machine Learning (Auto-ML), Automated Clustering, Unsupervised
Learning, CASH |
Source: |
Journal of Theoretical and Applied Information Technology
31st May 2025 -- Vol. 103. No. 10-- 2025 |
Full
Text |
|
Title: |
AN OPTIMIZED ENSEMBLE MODEL FOR CARDIOVASCULAR WITH DIABETES DISEASE PREDICTION
USING CGAN-AUGMENTED DATA |
Author: |
MUNI BALAJI THUMU, N. BALAJIRAJA, MUHAMMED YOUSOOF |
Abstract: |
Cardiovascular disease (CVD) is the leading cause of death worldwide in
diabetic populations, underlining the importance of accurate predictive tools.
Traditional statistical methods adapt poorly to data limitations and handle
clinical data imbalance badly, leading to unreliable risk assessment. Deep
learning solutions show promising results, yet they face high computational
cost, limited understanding of feature context, and a lack of interpretability.
This research introduces DFE-CVRP, a cardiovascular risk prediction system that
merges feature-specific expert models with dynamic ensemble control and
adaptive data balancing techniques. The performance evaluation examines whether
a dynamically optimized, lightweight ensemble model improves CVD risk
prediction on structured clinical data. The method combines EfficientNet
architectures, optimized using Successive Halving and Population-Based
Training, with Conditional Generative Adversarial Networks that balance the
dataset and improve feature diversity. On structured health databases, DFE-CVRP
outperforms conventional machine learning techniques and baseline deep learning
architectures such as CCGLSTM. The model reaches 98.2% accuracy, 97.8%
precision, 98.4% recall, a 98.1% F1-score, and a 98.6% AUC-ROC. The study
findings confirm the effectiveness of dynamic ensemble learning and data
augmentation strategies for improved cardiovascular healthcare diagnosis. The
proposed framework offers interpretability, scalability, and affordable
resource utilization, creating substantial value for future clinical decision
systems leveraging patient-specific data. |
Keywords: |
Diabetes, Cardiovascular Disease Prediction, GAN, Deep Learning, Machine
Learning. |
Source: |
Journal of Theoretical and Applied Information Technology
31st May 2025 -- Vol. 103. No. 10-- 2025 |
Full
Text |
|
Title: |
DEEP LEARNING APPROACHES FOR THE DEVELOPMENT OF INSECT- AND MOULD-RESISTANT
PAINTS: AI-DRIVEN FORMULATION AND OPTIMIZATION |
Author: |
S. HEMALATHA, DR. P. ARIVUBRAKAN, PONNURU ANUSHA, SURYA LAKSHMI KANTHAM VINTI,
JYOTI D. SHENDAGE, PRAMODKUMAR H KULKARNI |
Abstract: |
Paint coatings serve as the first line of defense against environmental
degradation, yet microbial infestation and insect adhesion continue to pose
significant challenges, leading to structural damage and health-related risks.
Traditional antifungal and insect-repellent solutions often depend on chemical
additives that raise environmental and health concerns. This study investigates
the potential of deep learning to optimize paint formulations for enhanced
resistance to mould and insect infestation. We introduce a novel AI-driven
framework that integrates convolutional neural networks (CNNs) to detect
microbial growth patterns, recurrent neural networks (RNNs) to model temporal
environmental influences, and generative adversarial networks (GANs) to simulate
and generate optimized paint formulations. The system is trained on a
comprehensive dataset comprising spectral and microscopic imagery, chemical
composition data, and environmental conditions. Our results indicate that the
proposed deep learning models outperform conventional heuristic-based methods in
identifying effective resistance-enhancing formulations. These findings
underscore the transformative role of artificial intelligence in advancing
material science and promote the development of eco-friendly, self-adaptive
coatings. Future work will involve real-world testing and the integration of
IoT-enabled sensors for dynamic resistance management. |
Keywords: |
Deep Learning, Insect-Resistant Coatings, Mould-Resistant Paints, Convolutional
Neural Networks (CNNs), Generative Adversarial Networks (GANs), Antimicrobial
Coatings, Smart Paints, IoT-Integrated Coatings, Reinforcement Learning,
Material Science AI |
Source: |
Journal of Theoretical and Applied Information Technology
31st May 2025 -- Vol. 103. No. 10-- 2025 |
Full
Text |
|
Title: |
THE CONSCIOUSNESS SIMULATION GAP: EVALUATING AND BENCHMARKING AI MODELS THROUGH
FUNCTIONAL DECOMPOSITION |
Author: |
MYKHAILO ZHYLIN, TAMARA HOVORUN, BILAL ALIZADE, MAKSYM KOVALENKO, ALLA
LYTVYNCHUK |
Abstract: |
The relevance of the research is determined by the need to study the
consciousness of neural networks and the possibility of developing artificial
self-awareness. The aim of the article is to investigate the main functional
elements of models of consciousness in artificial intelligence (AI). The study
employed such methods as the Turing test, Context-driven Testing and analysis of
generation models. F1-score, accuracy, and t-tests were used for statistical
analysis. The reliability of the selected methods was checked via Test-Retest
Reliability. The results demonstrate key aspects of how artificial
consciousness models function. GPT-4 shows the highest accuracy (92%) and
F1-score (0.91) but has difficulty with complex logic problems. AlphaZero has
the lowest accuracy (85%) and struggles with abstract concepts. IBM Watson
shows medium performance but does not recognize irony well. DeepMind’s Gato is
90% accurate but errs on coreference problems. The
resulting analysis showed that modern models, such as GPT-4, have a high level
of development of perception and attention, which contributes to the effective
processing of natural language. However, the question of true consciousness and
self-awareness of AI remains open, requiring further research. Understanding the
functional components of consciousness is important for the development of
ethical norms in the field of AI. Therefore, the algorithms must be improved to
enhance the models’ cognitive functions. Prospects for
future research in neural network consciousness include an in-depth study of the
mechanisms that provide true consciousness and self-awareness in artificial
systems. |
Keywords: |
Artificial Consciousness, Neural Networks, Dialogue Model, GPT-4, Neuroscience. |
Source: |
Journal of Theoretical and Applied Information Technology
31st May 2025 -- Vol. 103. No. 10-- 2025 |
Full
Text |
|
Title: |
USING VARIANTS OF GENETIC ALGORITHM AND LEARNABLE EVOLUTION MODEL TO SOLVE
RESOURCE-CONSTRAINED PROJECT SCHEDULING PROBLEM |
Author: |
GAMAL ALSHORBAGY, MOHAMED EL-DOSUKY |
Abstract: |
The resource-constrained project scheduling problem (RCPSP) is a complex
scheduling challenge as it is proven to be NP-hard. The Learnable Evolution
Model (LEM) is a non-Darwinian evolutionary approach that speeds up convergence
using machine learning instead of crossover. It classifies individuals into
high-performance (H-group) and low-performance (L-group) based on fitness,
learns distinguishing features, and generates new individuals through an
instantiation step. To ensure diversity, LEM applies mutation as a Darwinian
component, making it more efficient than traditional evolutionary methods. This
paper proposes a new approach that combines variants of genetic algorithms with
LEM, aiming to tackle issues in generating Gantt charts for large problem
instances. |
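The LEM cycle summarized above (H-group/L-group split, learning a description of the high performers, instantiation, Darwinian mutation) can be sketched on a toy continuous problem; the fitness function, group fraction, and interval-based "learning" are illustrative assumptions, not the paper's RCPSP encoding:

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness(pop):
    # toy objective to maximize: peak at x = (3, 3, 3)
    return -((pop - 3.0) ** 2).sum(axis=1)

def lem_generation(pop, h_frac=0.3, mutation_scale=0.1):
    # split the population by fitness into H-group and L-group,
    # "learn" the per-dimension bounds of the H-group, instantiate
    # new individuals inside them, then apply Darwinian mutation
    order = np.argsort(fitness(pop))[::-1]
    h_group = pop[order[: max(2, int(h_frac * len(pop)))]]
    lo, hi = h_group.min(axis=0), h_group.max(axis=0)
    new_pop = rng.uniform(lo, hi, size=pop.shape)          # instantiation
    new_pop += rng.normal(0.0, mutation_scale, pop.shape)  # diversity
    return new_pop

pop = rng.uniform(-10, 10, size=(50, 3))
for _ in range(20):
    pop = lem_generation(pop)
print(round(float(fitness(pop).max()), 3))
```

Because instantiation samples only inside the learned description, convergence is typically much faster than crossover-based search; the mutation term keeps the population from collapsing prematurely.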
Keywords: |
Scheduling, RCPSP, Learnable evolution model, Genetic Algorithm |
Source: |
Journal of Theoretical and Applied Information Technology
31st May 2025 -- Vol. 103. No. 10-- 2025 |
Full
Text |
|
Title: |
FROM STATIC DISPLAYS TO INTERACTIVE AR: EVALUATING THE EFFECTIVENESS OF AN AR
APP FOR GEN Z AND ALPHA IN MUSEUM PUSAKA |
Author: |
REHMAN ULLAH KHAN, YIN BEE OON, AMALIA BT MADIHIE, MOHAMAD HARDYMAN BIN BARAWI,
IDA JULIANA HUTASUHUT, HARI NUGRAHA RANUDINATA, DESI DWI KRISTANTO |
Abstract: |
Museums play a crucial role in preserving and disseminating cultural heritage;
however, they often struggle to engage modern audiences who seek immersive and
interactive experiences. Traditional static displays fail to engage visitors or
transfer cultural values to younger generations, specifically Generations Z and
Alpha. This research explores the implementation of Augmented Reality (AR) in
museum settings to enhance visitor engagement and learning effectiveness, with a
particular focus on the Museum Pusaka at Taman Mini Indonesia Indah in Jakarta.
An applied empirical research design was used, with two studies involving 40
and 86 students respectively, each divided into a control and an experimental
group. The experimental group used an AR mobile application that
contains 3D models and interactive content, while the control group used
traditional printed materials. To measure the learning outcomes and engagement,
pre-and post-tests were conducted, and the data were analysed and compared using
paired sample t-tests and ANCOVA. The results showed significant improvements in
both learning and engagement among participants using the AR mobile application.
These findings indicate that AR has the potential to make museum exhibits more
engaging, interactive, and impactful. |
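The paired-sample t-test used for the pre/post comparison reduces to a short calculation; the scores below are hypothetical, not the study's data:

```python
import math

def paired_t(pre, post):
    # paired-sample t statistic: mean of the per-student differences
    # divided by the standard error of that mean
    d = [b - a for a, b in zip(pre, post)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

# hypothetical pre/post knowledge-test scores for five AR-group students
pre = [10, 12, 11, 13, 14]
post = [14, 15, 13, 16, 17]
t = paired_t(pre, post)
print(round(t, 2))  # → 9.49
```

A large positive t on the experimental group, compared against the control group's statistic (or via ANCOVA with the pre-test as covariate), is what supports the reported learning gains.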
Keywords: |
Augmented Reality, Museum Studies, Educational Technology, User Experience,
Learning Outcomes |
Source: |
Journal of Theoretical and Applied Information Technology
31st May 2025 -- Vol. 103. No. 10-- 2025 |
Full
Text |
|
Title: |
MULTI HEAD ATTENTION-BASED LSTM AND GRADIENT-WEIGHTED CLASS ACTIVATION MAPPING
FOR BRAIN TUMOR DETECTION USING MRI IMAGES |
Author: |
PRATHIMA DEVADAS, G. MATHIVANAN |
Abstract: |
The advent of attention-based architectures in medical imaging has ushered in a
new era of precision diagnostics, particularly for the identification and
classification of brain tumors. This research develops a novel knowledge
distillation method that employs a tripartite attention mechanism within
transformer encoder models to identify various brain tumor types from magnetic
resonance imaging (MRI). The study offers a unique method for brain tumor
identification, integrating Multi-Head Attention-based Long Short-Term
Memory (MHA-LSTM) networks with Gradient-weighted Class Activation Mapping
(Grad-CAM). The MHA-LSTM design utilizes multi-head attention to capture complex
spatial-temporal relationships across consecutive MRI slices, enabling the model
to focus on the most critical features. Grad-CAM is incorporated to provide
visual explanations by highlighting key regions contributing to the model's
predictions, improving both interpretability and clinical relevance.
Experimental results demonstrate that the suggested technique surpasses
conventional LSTM models in terms of accuracy, sensitivity, and specificity.
Moreover, the Grad-CAM visualizations offer valuable insights into the model's
decision-making process, fostering better understanding and facilitating future
clinical validation. This method presents a robust and interpretable solution
for brain tumor identification, advancing the application of deep learning in
medical imaging. |
Keywords: |
Brain Tumor Detection, Deep Learning, Grad-CAM, Interpretability, Long
Short-Term Memory, Magnetic resonance imaging, Medical Imaging, Multi-Head
Attention, Neural Networks, Spatiotemporal Modeling. |
Source: |
Journal of Theoretical and Applied Information Technology
31st May 2025 -- Vol. 103. No. 10-- 2025 |
Full
Text |
|
Title: |
NEXT-GEN SECURITY: LEVERAGING DNA CRYPTOGRAPHY FOR ROBUST ENCRYPTION |
Author: |
GURU PRAKASH B, SIVA T, SHUNMUGASUNDARAM S, MARIAPPAN E, ANNA LAKSHMI A,
RAMNATH MUTHUSAMY |
Abstract: |
Cloud computing is a popular, growing technology that provides internet-based
services for data sharing, storage, and access. Cryptography is the study of
protecting information using algorithms and codes so that only the intended
users can view the data. Cryptography plays a vital role when transmitting data
over networks, where ensuring data confidentiality is essential. In this paper,
DNA cryptography is proposed to achieve and enhance data confidentiality. DNA
cryptography strengthens data security and is based purely on the nucleotides
of DNA. The proposed modernized DNA cryptography algorithm is implemented using
the .NET framework, and worked examples with screenshots are provided. |
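As a rough illustration of nucleotide-based encoding (a minimal sketch under our own assumptions, not the authors' modernized algorithm or their .NET implementation), plaintext bytes can be XOR-ed with a key and then mapped two bits at a time onto the four DNA bases:

```python
BASES = "ACGT"  # each base encodes one 2-bit value: A=00, C=01, G=10, T=11

def dna_encrypt(plaintext: str, key: bytes) -> str:
    # XOR with a repeating key, then map each byte to four bases
    data = bytes(b ^ key[i % len(key)] for i, b in enumerate(plaintext.encode()))
    out = []
    for byte in data:
        for shift in (6, 4, 2, 0):          # four 2-bit groups per byte
            out.append(BASES[(byte >> shift) & 0b11])
    return "".join(out)

def dna_decrypt(strand: str, key: bytes) -> str:
    # rebuild each byte from four bases, then undo the XOR
    idx = {b: i for i, b in enumerate(BASES)}
    data = bytearray()
    for i in range(0, len(strand), 4):
        byte = 0
        for base in strand[i:i + 4]:
            byte = (byte << 2) | idx[base]
        data.append(byte)
    plain = bytes(b ^ key[i % len(key)] for i, b in enumerate(data))
    return plain.decode()

cipher = dna_encrypt("cloud data", b"secret")
print(cipher, dna_decrypt(cipher, b"secret"))
```

Schemes of this family gain their strength from the key handling (e.g., dynamic key generation and amino-acid tables, as the keywords suggest) rather than from the base mapping itself, which is a fixed substitution.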
Keywords: |
Cloud Computing, Secure Communication, DNA Cryptography, Amino Acid Tables,
Dynamic Key Generation. |
Source: |
Journal of Theoretical and Applied Information Technology
31st May 2025 -- Vol. 103. No. 10-- 2025 |
Full
Text |
|
Title: |
ADVANCED CNN-BASED FRAMEWORKS FOR ROBUST AND EXPLAINABLE BREAST CANCER DIAGNOSIS
ACROSS MULTI-MODAL IMAGING DATASETS |
Author: |
Dr. ALURI BRAHMAREDDY, Dr. MERCY PAUL SELVAN |
Abstract: |
Breast cancer is a principal cause of female mortality worldwide, requiring
diagnostic solutions that produce precise and timely results. Despite their
demonstrated potential, deep learning technologies are constrained by
single-modality medical image analysis and inadequate transparency in
decision-making. This research targets diagnostic methods that fail to
effectively link multimodal medical images with clinical parameters to diagnose
breast cancer in a way that is both precise and explainable to medical staff.
The study addresses these knowledge gaps through a novel Multi-Modal
Explainable Convolutional Neural Network (MME-CNN) framework that unites
mammograms, MRIs, ultrasounds, and histopathological images with structured
clinical data covering age, lesion size, breast density, genetic marker scores,
and tumor stage, achieving better predictive results and stronger model
generalization. Grad-CAM visualizations serve within the model as an
interpretability tool that shows clinicians how predictions were made.
Experimental analysis shows the model achieved 100% validation accuracy while
needing only four epochs of training; it reduced training loss from 0.6975 to
0.2551 and reached a validation loss of 0.0886. The framework benefits
real-world clinical implementations through good generalizability, fast
computation, and improved explanation capabilities. Future development will
concentrate on extensive validation tests and integration with EHR systems to
enable widespread precision oncology implementation. |
Keywords: |
Breast Cancer Diagnosis, Convolutional Neural Networks, Multi-Modal Imaging,
Robust Classification, Explainability, Precision Medicine, Medical Imaging
Analysis, Grad-CAM Visualization. |
Source: |
Journal of Theoretical and Applied Information Technology
31st May 2025 -- Vol. 103. No. 10-- 2025 |
Full
Text |
|
Title: |
LEVERAGING DEEP LEARNING FOR REAL-TIME, CONTINUOUS MONITORING AND PREDICTION OF
SEPSIS IN ICU PATIENTS USING MULTISENSORY DATA |
Author: |
BHOOMPALLY VENKATESH, DR. PRADEEP K R, DR. Y.RAMADEVI, DR VISHWA KIRAN S,
KAPARTHI UDAY |
Abstract: |
Sepsis is a life-threatening condition caused by the body's extreme response to
infection; it carries high death rates, particularly in critical care units,
and is associated with multi-organ failure. Physiological data in
ICU patients is highly non-stationary and complex, significantly challenging
early detection. Standard machine learning models and traditional scoring
systems fail to learn spatial and temporal patterns, resulting in poor early
detection performance. To overcome these limitations, we present a new
framework called SepsisNet, a deep-learning model for real-time continuous
sepsis prediction from multisensory ICU data. Our proposed attention-based CNN
BiLSTM model incorporates CNN for spatial feature extraction and BiLSTM networks
for temporal sequence modeling and adds an attention mechanism to emphasize the
most informative physiological features for classification. On the benchmark
dataset MIMIC-III, experimental results show that SepsisNet achieves 98.68%
accuracy, surpassing the baseline models, including Logistic Regression, Random
Forest, SVM, LSTM, and standard CNN. The ablation study also reinforces the
importance of each architectural component. We demonstrate that SepsisNet has
the potential to serve as a reliable, interpretable, and computationally
efficient sepsis predictor, thereby enabling real-time clinical decision support
in ICU settings. This research will help enhance sepsis detection as well as
play a significant role in early medical treatment, which is significantly
required to reduce fatalities caused by sepsis. |
Keywords: |
Sepsis Prediction, Deep Learning, CNN, BiLSTM, Attention Mechanism |
Source: |
Journal of Theoretical and Applied Information Technology
31st May 2025 -- Vol. 103. No. 10-- 2025 |
Full
Text |
|
Title: |
PREDICTIVE MODELING AND MULTIVARIATE ANALYSIS OF CORE FOOD SECURITY INDICATORS
IN MOROCCO |
Author: |
MEHDI RAHMAOUI, ACHRAF CHAKIR BARAKA, AHSSAINE BOURAKADI, NADA YAMOUL, HAMID
KHALIFI, ABDELLATIF BOUR |
Abstract: |
This study presents a multivariate analysis of key food security indicators in
Morocco between 2000 and 2023. The first section introduces the theoretical
framework of linear regression and principal component analysis (PCA). A linear
regression model was then applied to examine the relationship between the
prevalence of undernourishment and the Consumer Price Index (CPI in %), yielding
a high coefficient of determination (R² = 95%). The regression model
demonstrated that a one-unit increase in CPI leads to a 0.04% rise in
undernourishment prevalence. The PCA of food security indicators highlights
two distinct dimensions that shape nutritional outcomes. The first principal
component, accounting for most of the variance, shows strong positive
correlations with dietary energy adequacy (0.92), minimum caloric requirements
(0.98), and per capita GDP (0.96), while inversely relating to food supply
instability (-0.81). This axis essentially measures a nation's economic strength
and food system resilience. The second component exclusively tracks
undernourishment metrics, with near-perfect alignment to both the rate (0.98)
and absolute numbers (1.00) of underfed populations. This dimension directly
reflects the human toll of food insecurity. Importantly, these components are
statistically independent (orthogonal), which reveals an important reality of
policies: economic growth and enhancements to the food system (Dimension 1) do
not translate into reduced hunger (Dimension 2). This separation reinforces the
necessity of bi-focal approaches to fighting food insecurity:
macroeconomic policies strengthen the food system while humanitarian action
focuses on nutrition for specific populations. These results show that food
security exists on different levels and requires solutions that target the
systemic economic level and humanitarian levels to address the multi-faceted
concept of hunger and malnutrition. Ultimately, ARIMA forecasting is utilized
to forecast trends for Dietary Energy Adequacy, Prevalence of Undernourishment
(% of population), and Number of Undernourished People between 2025 and 2028.
Forecasts suggest a gradual improvement in the following areas: Dietary Energy
Adequacy rising from 143.1 to 145.9 kcal/cap/day, Prevalence of
Undernourishment dropping from 5.88% to 5.19%, and the Number of Undernourished
People falling from 2.09
to 1.96 million. This evidence offers substantial information for policy makers
to improve food security policies in Morocco. |
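The reported CPI-undernourishment regression can be illustrated with synthetic data; the series below is fabricated to mimic the reported slope (0.04) and is not the study's Moroccan data:

```python
import numpy as np

# illustrative only: a synthetic CPI / undernourishment series built
# to mimic the reported relationship (slope ~ 0.04% per CPI unit)
rng = np.random.default_rng(42)
cpi = np.linspace(100, 160, 24)                     # 24 yearly CPI values
undernourishment = 0.5 + 0.04 * cpi + rng.normal(0, 0.05, cpi.size)

# ordinary least squares fit and coefficient of determination
slope, intercept = np.polyfit(cpi, undernourishment, 1)
pred = slope * cpi + intercept
ss_res = ((undernourishment - pred) ** 2).sum()
ss_tot = ((undernourishment - undernourishment.mean()) ** 2).sum()
r2 = 1 - ss_res / ss_tot
print(round(float(slope), 3), round(float(r2), 2))
```

The recovered slope is the quantity the abstract interprets: a one-unit CPI increase corresponds to roughly a 0.04-point rise in the prevalence of undernourishment.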
Keywords: |
Food Security, Morocco, Linear Regression, Principal Component Analysis (PCA),
ARIMA Models, Forecasting, Undernourishment. |
Source: |
Journal of Theoretical and Applied Information Technology
31st May 2025 -- Vol. 103. No. 10-- 2025 |
Full
Text |
|
Title: |
BLOCKCHAIN TECHNOLOGY AND ITS IMPACT ON FINANCIAL REPORTING IN THE DIGITAL
ACCOUNTING ERA |
Author: |
AHMAD ALNAIMAT MOHAMMAD, OLEKSANDR CHUMAK, MYKYTA ARTEMCHUK, ALONA KHMELIUK,
SVITLANA SKRYPNYK |
Abstract: |
The study is relevant, as blockchain technologies transform financial reporting,
increasing its transparency, reliability, and data processing speed, reducing
costs and minimizing risks, which requires further analysis. However, despite
numerous studies on blockchain applications, a knowledge gap exists in
understanding its comprehensive impact on financial reporting processes and the
development of integration models with other digital technologies. The aim of
the study is to determine the impact of blockchain technologies on the processes
of preparing and submitting financial statements, as well as to determine the
prospects for using this technology in accounting in the digital age. The
research employed the following methods: content analysis of modern blockchain
systems, comparative analysis of financial indicators of companies that use
blockchain, as well as economic and statistical modelling. The impact of
blockchain technologies was assessed through quantitative analysis, including
descriptive statistics, analysis of variance (ANOVA), correlation analysis
(Pearson and Spearman coefficients), regression analysis, cluster analysis and
hypothesis testing (t-test, Mann-Whitney U-test). The calculations were
performed using SPSS, Stata, and Python software (Pandas, Statsmodels,
Scikit-learn). The results confirm that the implementation of blockchain
technologies increases the efficiency of financial reporting, reducing operating
costs by 15–20% and reducing audit costs by 25–30%. Smart contracts minimize
errors by 18%, and the average processing time for financial transactions
decreased from 48 to 5 hours. In the financial sector, costs were reduced by
30%, and transaction processing time by 85%. The academic novelty of the study
lies in the comprehensive analysis of the application of blockchain technologies
to increase the transparency and reliability of financial reporting in a global
context, as well as the creation of new knowledge through statistical analysis
and practical assessment of blockchain's effectiveness. The prospects for
further research include the development of models for integrating blockchain
with other digital technologies, such as artificial intelligence (AI) and Big
Data, as well as assessing the long-term economic consequences of using
blockchain in the financial sector. |
Keywords: |
Blockchain, Financial Reporting, Digital Accounting, Smart Contracts,
Transparency, International Standards, Automation. |
Source: |
Journal of Theoretical and Applied Information Technology
31st May 2025 -- Vol. 103. No. 10-- 2025 |
Full
Text |
|
Title: |
ADVANCED RIDGE REGRESSION USING IMFT MODEL FOR RSU DESIGN IN VEHICULAR NETWORKS |
Author: |
KRISHNA KOMARAM, NAGARJUNA KARYEMSETTY |
Abstract: |
In the realm of Vehicular Ad Hoc Networks (VANETs), Roadside Units (RSUs) play a
pivotal role in enhancing communication, data processing, and predictive
analytics. This paper introduces a novel hybrid design that integrates Ridge
Regression and XG-Boost algorithms to optimize the data processing and
prediction capabilities of RSUs, aimed at improving traffic management and
safety applications. The hybrid framework with IMFT (inter-intra mod filter)
algorithm employs Ridge Regression for robust initial data processing,
minimizing overfitting and ensuring reliability in the noisy, dynamic
environment of vehicular data. Feature extraction with IMFT captures the
relevant features before the XG-Boost stage produces high-accuracy predictions,
leveraging its gradient boosting capabilities to facilitate timely
interventions and optimize traffic flow. Furthermore, the architecture of the
RSU is expanded to include essential units such as communication modules, data
storage, and user interface components, all functioning cohesively to create a
comprehensive system. With the proposed IMFT design, we have conducted
extensive simulations with K-fold validation to demonstrate that the proposed
IMFT with Ridge Model design significantly enhances prediction accuracy and
processing efficiency compared to traditional methods such as Elastic Net, with
an improved R2-score of more than 98%. By optimizing the operational capabilities of
RSUs in VANETs, this work contributes to the development of smarter and safer
urban mobility solutions, paving the way for more effective traffic management
and improved vehicular safety. |
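Ridge regression's role as the robust first stage can be shown via its closed form; the data and penalty value below are illustrative, not drawn from the paper's simulations:

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    # closed-form ridge regression: w = (X^T X + lam*I)^(-1) X^T y;
    # the L2 penalty shrinks the weights, curbing overfitting on
    # noisy, dynamic vehicular data
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

rng = np.random.default_rng(7)
X = rng.normal(size=(200, 5))
w_true = np.array([1.5, -2.0, 0.0, 0.7, 3.0])
y = X @ w_true + rng.normal(0, 0.1, 200)

w_ols = ridge_fit(X, y, lam=0.0)     # lam=0 reduces to least squares
w_ridge = ridge_fit(X, y, lam=50.0)  # heavier shrinkage
print(np.round(w_ridge, 2))
```

Larger penalties trade a little bias for lower variance, which is why the ridge stage is preferred for the noisy initial processing before the boosted predictor.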
Keywords: |
Vehicular Ad Hoc Networks (VANETs), Roadside Units (RSUs), Security Protocols,
Energy Management, ML (Machine Learning), Ridge Regression. |
Source: |
Journal of Theoretical and Applied Information Technology
31st May 2025 -- Vol. 103. No. 10-- 2025 |
Full
Text |
|
Title: |
SELF-RELIANT RESIDUAL NETWORK BASED DEEP LEARNING FRAMEWORK FOR MELANOMA SKIN
DISEASE DETECTION |
Author: |
V. RADHIKA, A. MUTHUCHUDAR, M. LINGARAJ |
Abstract: |
Melanoma is one of the deadliest and most aggressive types of skin cancer,
particularly when caught late. While traditional approaches have their limits,
an accurate diagnosis is vital for patient survival. Thanks to its
capacity to understand intricate patterns from massive datasets, deep learning
has emerged as a promising method for automated melanoma diagnosis. Yet
challenges persist, including overfitting, instability during training, and
difficulties in handling nonlinearities, which can hinder accurate predictions.
To address these challenges, Self-Reliant ResNet (SR-ResNet) has been proposed.
This enhanced version of ResNet integrates Zoutendijk’s Method, a nonlinear
optimization technique, to optimize weight updates and improve convergence.
SR-ResNet features a series of residual blocks where Zoutendijk’s Method refines
the learning process, ensuring stability and efficient training, even in deeper
networks. The network’s architecture has been designed to enhance performance
and generalization. The proposed SR-ResNet has been evaluated using a dataset of
10,000 Melanoma Skin Cancer images. The results demonstrate significant
improvements in classification accuracy, achieving a high precision rate with
reduced overfitting. SR-ResNet outperforms traditional models, establishing
itself as a robust tool for melanoma diagnosis. |
Keywords: |
Melanoma Skin Cancer, Deep Learning, SR-ResNet, Zoutendijk’s Method,
Classification Accuracy |
Source: |
Journal of Theoretical and Applied Information Technology
31st May 2025 -- Vol. 103. No. 10-- 2025 |
Full
Text |
|
Title: |
DIFT-VAR: A DYNAMIC MULTI-LAYER FRAMEWORK FOR DEVICE FINGERPRINTING OF IDENTICAL
DEVICES |
Author: |
MANOJ KUMAR VEMULA, KAILA SHAHU CHATRAPATI |
Abstract: |
Device fingerprinting is a powerful technique for identifying devices in an IoT
environment, offering multiple advantages such as enhanced security through
device authentication, improved network management by monitoring device
behaviors, and anomaly detection for identifying unauthorized or compromised
devices. The majority of recent fingerprinting schemes consider a heterogeneous
device environment and use different machine learning techniques to identify
devices using network traffic, signal-level information, radio frequency
characteristics, etc. However, fingerprinting devices of the same make and model
is a significant challenge in modern IoT environments, where many devices often
share identical hardware and software configurations. Existing techniques cannot
reliably differentiate identical devices as they lack sufficient data. This
paper proposes a novel approach for Device Identification and Fingerprinting
with Time-Variant Adaptive Recognition (DIFT-VAR) based on multi-layer,
time-varying feature extraction. We construct dynamic fingerprints that uniquely
identify each device by monitoring and fusing features such as probe request
behavior, clock skew, transport layer characteristics, and radio signal metrics
over time. We utilize machine learning algorithms such as Random Forests to
classify devices based on these dynamic fingerprints. We further propose the use
of dynamic time warping (DTW) for feature alignment and classification.
Experimental results demonstrate the efficacy of our approach in distinguishing
identical devices with an accuracy of over 97% using standard machine learning
metrics. |
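The dynamic-time-warping alignment proposed for time-variant features can be sketched with the standard dynamic-programming recurrence; the clock-skew traces below are hypothetical:

```python
import numpy as np

def dtw_distance(a, b):
    # dynamic time warping distance between two 1-D feature sequences,
    # allowing locally stretched or shifted fingerprints to align
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

# hypothetical clock-skew traces: the second device drifts similarly,
# but its samples are locally stretched in time
dev_a = [0.0, 0.1, 0.2, 0.4, 0.4, 0.5]
dev_b = [0.0, 0.1, 0.2, 0.2, 0.4, 0.5]
print(dtw_distance(dev_a, dev_b))
```

Because DTW tolerates local time distortion, two observations of the same device score near zero even when sampling is uneven, while distinct devices accumulate cost along the warping path.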
Keywords: |
Device fingerprinting, IoT Security, Dynamic time warping (DTW), Time-variant
feature extraction, Machine learning for IoT security. |
Source: |
Journal of Theoretical and Applied Information Technology
31st May 2025 -- Vol. 103. No. 10-- 2025 |
Full
Text |
|
Title: |
SCALABILITY AND EFFICIENCY OF CLUSTERING ALGORITHMS FOR LARGE-SCALE IoT DATA: A
COMPARATIVE ANALYSIS |
Author: |
PRABHAT DAS, KARTHIK KOVURI, SAJAL SAHA |
Abstract: |
This research investigates the scalability and efficiency of clustering
algorithms applied to large-scale Internet of Things (IoT) data. A comprehensive
evaluation is conducted on fourteen clustering algorithms—Affinity Propagation,
Agglomerative, BIRCH, Bisecting K-Means, DBSCAN, Fuzzy C-Means, Gaussian
Mixtures, HDBSCAN, K-Means, Mean-Shift, OPTICS, Overlapping K-Means, Spectral
Clustering, and Ward-Hierarchical—across datasets ranging from 40,000 to 100,000
sensor readings. The study systematically analyzes execution time and clustering
performance to determine their suitability for large-scale IoT applications.
Results indicate that K-Means, Ward-Hierarchical, and BIRCH exhibit strong
scalability and computational efficiency, whereas Affinity Propagation and
Spectral Clustering face significant challenges with increasing dataset size.
These findings provide valuable guidance for selecting optimal clustering
techniques in IoT-based data analytics, considering factors such as
computational constraints, dataset characteristics, and clustering granularity. |
Keywords: |
Clustering Algorithms, IoT Data Clustering, Comparative Analysis, Sensor Data
Analysis, Bibliometric Analysis, Machine Learning in IoT, Multi-Dimensional Data
Clustering |
Source: |
Journal of Theoretical and Applied Information Technology
31st May 2025 -- Vol. 103. No. 10-- 2025 |
Full
Text |
|
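The comparative analysis above finds K-Means among the most scalable of the fourteen algorithms. The intuition is visible in a minimal pure-Python Lloyd's iteration: each point is compared only to the k centroids (O(n·k) work per pass), whereas methods such as Affinity Propagation or Spectral Clustering must materialize an O(n²) pairwise matrix. A sketch for intuition, not the benchmarked implementations:

```python
# Minimal 1-D Lloyd's-iteration K-Means, illustrating the per-iteration
# O(n * k) assignment cost that underlies K-Means's scalability on
# large IoT datasets. Data values are illustrative.

def kmeans(points, centroids, iters=10):
    for _ in range(iters):
        # Assignment step: O(n * k) distance evaluations.
        clusters = [[] for _ in centroids]
        for p in points:
            j = min(range(len(centroids)),
                    key=lambda c: (p - centroids[c]) ** 2)
            clusters[j].append(p)
        # Update step: move each centroid to its cluster mean.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

# Two well-separated 1-D sensor-reading blobs converge to their means.
data = [0.9, 1.0, 1.1, 9.9, 10.0, 10.1]
centers = sorted(kmeans(data, [0.0, 5.0]))
```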
Title: |
HOW CAN EXPERT CONSENSUS METHODS ENHANCE THE DESIGN OF IMMERSIVE LEARNING
PRACTICAL MODELS FOR DEAF OR HARD-OF-HEARING STUDENTS IN TVET? |
Author: |
RINI HAFZAH BINTI ABDUL RAHIM, DINNA NINA BINTI MOHD NIZAM, NUR FARAHA BINTI
MOHD NAIM, ASLINA BAHARUM |
Abstract: |
Traditional auditory-based teaching approaches limit the effectiveness of
practical skills acquisition for Deaf or Hard-of-Hearing (DHH) students in
Technical and Vocational Education and Training (TVET). Despite increased
interest in immersive technologies like augmented reality (AR), the field lacks
validated, inclusive instructional models tailored to DHH learners. This study
addresses this gap by integrating the Nominal Group Technique (NGT) and Fuzzy
Delphi Method (FDM) to design and validate an Immersive Learning Practical
Skills (ILPS) model. The novelty lies in the combined use of NGT and FDM for
consensus-building among experts in AR, gamification, and DHH education—an
approach not commonly applied in inclusive model development. Results revealed a
strong expert consensus (>97%) on 15 items across three core constructs:
Learning Input Medium, Practical Skills Module, and AR Gamification Features.
This research offers a replicable and participatory model development process
and introduces a validated framework for inclusive immersive learning in TVET.
The study contributes new knowledge by demonstrating how expert-driven methods
can operationalize inclusive pedagogy through immersive technologies. This study
demonstrates how combining FDM and NGT may successfully evaluate inclusive
design elements for immersive learning. The results support the development of a
practical skills model with a DHH focus and provide a repeatable framework for
inclusive curriculum co-creation. This combination strengthened consensus among a panel of 11 experts; according to the study's findings, the NGT and FDM approach made it simple and quick for researchers to confirm the crucial details that should be highlighted. To help DHH students learn more effectively, it is
advised that more research be done in collaboration with course designers. To
provide a scalable approach for developing immersive, accessible learning
environments in specialized educational contexts, this study hopes to
demonstrate how effectively NGT and FDM collaborate for inclusive instructional
design. |
Keywords: |
Educational Technology, Teaching And Learning, Hearing Impaired, Higher
Education, Model Development. |
Source: |
Journal of Theoretical and Applied Information Technology
31st May 2025 -- Vol. 103. No. 10-- 2025 |
Full
Text |
|
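The Fuzzy Delphi Method used above to validate the ILPS model is commonly operationalized with triangular fuzzy numbers: each expert's rating is a triple (m1, m2, m3), and an item reaches consensus when most experts lie within a distance threshold (typically d ≤ 0.2) of the group average. The ratings, threshold, and scale below are illustrative assumptions, not the study's data (its reported consensus exceeded 97%):

```python
import math

# Sketch of the Fuzzy Delphi Method (FDM) consensus check with
# triangular fuzzy numbers (m1, m2, m3). Ratings are toy values.

def tfn_distance(a, b):
    """Distance between two triangular fuzzy numbers."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / 3)

def consensus(ratings, threshold=0.2):
    """Fraction of experts whose rating lies within `threshold`
    of the group-average triangular fuzzy number."""
    avg = tuple(sum(r[i] for r in ratings) / len(ratings) for i in range(3))
    agree = sum(1 for r in ratings if tfn_distance(r, avg) <= threshold)
    return agree / len(ratings)

# Ten experts rating one item on a normalized fuzzy scale; nine agree closely.
ratings = [(0.6, 0.8, 1.0)] * 9 + [(0.2, 0.4, 0.6)]
pct = consensus(ratings)
```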
Title: |
CAN DIGITAL TRANSPARENCY TOOLS SYSTEMATICALLY REDUCE CORRUPTION IN GOVERNMENT?
EVIDENCE FROM ESTONIA, UKRAINE AND BRAZIL |
Author: |
OLEKSANDR KOTUKOV, DMYTRO KARAMYSHEV, TETIANA KOTUKOVA, ALINA CHERNOIVANENKO,
ARTEM SERENOK |
Abstract: |
This study addresses a critical gap in the existing literature, which has
primarily focused on general transparency rather than the specific impact of
digital tools in various political and institutional contexts. Despite the
proliferation of e-governance initiatives, there is limited empirical research
systematically comparing the effectiveness of digital transparency in mitigating
corruption across multiple countries. To bridge this gap, we investigate how
digital governance platforms influence institutional accountability and reduce
corruption in Ukraine, Estonia, and Brazil. Using a combination of econometric
modeling, comparative case analysis, and time-series analysis across 120
government institutions, we demonstrate that digital transparency
tools—particularly open data platforms, e-procurement systems, and AI-driven
fraud detection—are associated with statistically significant reductions in
corruption rates. Estonia, with its mature digital ecosystem, achieved a 39%
reduction in corruption, followed by Ukraine (28%) and Brazil (16%). The novelty
of this study lies in its comparative design, the integration of AI analytics,
and the identification of conditions under which digital transparency tools are
most effective. Our findings provide actionable insights for policymakers,
emphasizing the need for robust digital infrastructure, legal mandates for data
openness, and civic engagement to maximize anti-corruption outcomes. This
research contributes new empirical knowledge on how digital transparency tools
can transform public administration and strengthen institutional integrity. |
Keywords: |
Digital Transparency; Corruption Reduction; E-Governance; Institutional
Accountability; Public Administration |
Source: |
Journal of Theoretical and Applied Information Technology
31st May 2025 -- Vol. 103. No. 10-- 2025 |
Full
Text |
|
Title: |
SQUIRREL SEARCH GRADIENT OPTIMIZED DEEP BELIEF NETWORK CLASSIFIER FOR THYROID
DISEASE PREDICTION |
Author: |
R. VANITHA, Dr. K. PERUMAL |
Abstract: |
Thyroid disease is a range of disorders that affect the thyroid gland, a
butterfly-shaped organ located in the neck responsible for producing hormones
that regulate metabolism, energy levels, and overall bodily functions. Early
detection and management of thyroid disease are crucial, as untreated conditions
can lead to severe complications, including cardiovascular issues, infertility, and
metabolic disorders. Advanced diagnostic methods, including machine learning and
deep learning techniques, are increasingly used to improve the accuracy and
timeliness of thyroid disease detection, facilitating better treatment outcomes.
However, predicting the severity of thyroid disease accurately and with minimal time remains a major challenge. In order to improve the accuracy of thyroid disease
prediction, a novel Squirrel Search Gradient Optimized Deep Belief Neural
Classifier (SSGODBNC) model is developed with minimal time consumption. The
proposed Deep Belief Network (DBN) is a fully connected artificial feed-forward
deep learning method comprising two visible layers such as the input and output
layer and multiple hidden layers for processing the given input. In the
layer-by-layer process, the first hidden layer receives weighted input and
performs data preprocessing. A Sparse Autoencoder model then extracts the significant features and eliminates the insignificant ones from the dataset.
These selected significant features are utilized to classify the severity level
of thyroid disease using Sokal–Michener’s simple matching method. During
fine-tuning, error back-propagation algorithms adjust the hyperparameters using
Squirrel Search Gradient Optimization to increase the accuracy of thyroid
disease classification. This optimized fine-tuning process significantly
enhances the performance of the deep belief network and improves overall
learning efficiency in classification tasks. Finally, the accurate thyroid
disease severity prediction results with minimal error are obtained at the
output layer. Experimental assessment is conducted with different evaluation
metrics such as Accuracy, Precision, Recall, F1-score, specificity and Thyroid
disease prediction time. The observed result shows the effectiveness of the
proposed SSGODBNC model with higher accuracy in thyroid disease prediction with
minimum time than the existing methods. |
Keywords: |
Thyroid Disease Prediction, Deep Belief Network, Fine-Tuning, Adaptive Gradient
Method, Squirrel Search Gradient Optimization, Sokal–Michener’s Simple Matching
Method. |
Source: |
Journal of Theoretical and Applied Information Technology
31st May 2025 -- Vol. 103. No. 10-- 2025 |
Full
Text |
|
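The Sokal–Michener simple matching method named in the abstract above is the fraction of positions in which two binary feature vectors agree, counting both shared 1s and shared 0s as matches. A minimal sketch with toy vectors (the paper's actual feature encoding is not given here):

```python
# Sokal-Michener simple matching coefficient: (matches) / (vector length),
# where agreement on 0 counts as much as agreement on 1. Toy vectors only.

def simple_matching(u, v):
    assert len(u) == len(v)
    matches = sum(1 for a, b in zip(u, v) if a == b)
    return matches / len(u)

x = [1, 0, 1, 1, 0, 0]
y = [1, 0, 0, 1, 0, 1]
score = simple_matching(x, y)  # 4 of the 6 positions agree
```

Unlike the Jaccard coefficient, this measure rewards shared absences, which matters when most binary features are 0.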
Title: |
A COMPUTATIONAL MODEL FOR TEA LEAF PRICE PREDICTION BASED ON QUALITY FACTORS
USING HYBRID MACHINE LEARNING TECHNIQUES |
Author: |
IRA GABA, B RAMAMURTHY |
Abstract: |
This document reflects the effort made to calculate and identify the grade of
the tea leaves based on the assessment of the leaves' size and color. The leaves
were classified based on their severity with the help of HSV. The leaves were
further classified using the k prototypes clustering once their length and width
were established. Each leaf was then assigned to one of three color categories: light, medium, and dark.
The leaves were further sorted according to their quality so that the farmer
could sell the produce at a better price. The machine learning method used for the categorization step made these quality values explicit.
leaves were considered in a different dataset, and the images were obtained
using the feature selection method. The length and width of each individual
leaf, along with its color and shape, were then measured using those leaves. We
were able to differentiate between the various leaf grades based on the
findings. The healthy leaves were separated from the diseased leaves using the
textural features. Additionally, we were able to use the other criteria to obtain
higher-grade leaves. |
Keywords: |
Image Pre-Processing, Feature Selection, Classification, HSV, Color Parameters,
K-Prototypes Clustering. |
Source: |
Journal of Theoretical and Applied Information Technology
31st May 2025 -- Vol. 103. No. 10-- 2025 |
Full
Text |
|
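The light/medium/dark split described above relies on the HSV color space, where brightness is isolated in the V channel. A minimal sketch using Python's standard `colorsys` module; the cut-off points 0.33 and 0.66 are illustrative assumptions, not the paper's calibrated thresholds:

```python
import colorsys

# Bucket an average RGB leaf color into light/medium/dark by its
# HSV value (brightness) channel. Thresholds are assumed, not the
# paper's calibrated values.

def shade_category(r, g, b):
    """Classify an RGB color (0..255 per channel) as light/medium/dark."""
    _, _, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    if v >= 0.66:
        return "light"
    if v >= 0.33:
        return "medium"
    return "dark"

pale = shade_category(180, 220, 140)  # bright green leaf region
deep = shade_category(30, 60, 20)     # dark green leaf region
```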
Title: |
LEVERAGING SPEECH FOR DYNAMIC IMAGE CAPTIONING: A MOBILENETV3 AND LSTM APPROACH |
Author: |
PREETY SINGH, NAGA DURGA SAILE K, TAKKEDU MALATHI, T RAVI, DIPAK J DAHIGAONKAR,
CHUNDURI LAVANYA |
Abstract: |
This paper presents a novel way of generating captions for images using an
automatic image captioning system; the proposed model combines MobileNetV3 and
LSTM to create captions that are accurate and relevant to the image being
depicted. MobileNetV3 acts as a feature extractor as it extracts vital
components of pictures at a reasonable computational cost. These features are
then taken to the LSTM network and from it, descriptive captions from visual
context are made. As a measure that improves user access, the generated captions
are further translated to sound using Google Text-to-Speech (GTTS), which is
especially important for the visually impaired and other hands-free users.
Cross-sectional experimental assessments of the performances of the proposed
model on the Flickr8k dataset further validate the impressive usefulness of the
proposed model for generating faithful and comprehensive descriptions for media
items that are potentially useful in assistive technologies, media organizing,
and interactive systems. |
Keywords: |
MobileNetV3, Long Short-Term Memory (LSTM), Google Text-to-Speech (GTTS) |
Source: |
Journal of Theoretical and Applied Information Technology
31st May 2025 -- Vol. 103. No. 10-- 2025 |
Full
Text |
|
Title: |
EDGE LEVEL COMPLEX EVENT PROCESSING AND OPTIMISATION BASED FOG LEVEL LOAD
BALANCING MECHANISM IN SOFTWARE DEFINED NETWORK |
Author: |
SARKARSINHA HARSINHA RAJPUT, DR. MANOJ EKNATH PATIL |
Abstract: |
This study uses an efficient transmission model based on the Hybrid
Meta-heuristic Model to enhance data transfer by reducing time complexity.
Initially, data is moved into the Complex Event Processing (CEP), which is
positioned between the fog layer and the IoT layer. In edge IoT devices, complex
event processing comprises real-time analysis, correlation, and interpretation
of continuous data streams generated by sensors and edge devices. It seeks to
identify significant trends or intricate occurrences in various data streams in
order to facilitate quick decisions or immediate reactions. After CEP, a
multi-tier priority queue-based model is used to attain priority-aware task
scheduling. After the arrival of all the tasks, each task is sorted into slots
based on its category. High-priority tasks are completed first due to their
preference over lower-priority slots. A software-defined network's optimal
resource utilization and task response time are guaranteed by an effective
load-balancing method called Hybrid Pigeon Cat Search Optimization Algorithms
(HPC_SOA). Arranging tasks based on their availability, capacity, proximity, and
energy efficiency may optimize the fog nodes' resource utilization and energy
usage. In the evaluation, the proposed approach has consumed 22051 Kw/h of
energy. |
Keywords: |
Complex Event Processing, Priority Queue Approach, Pigeon Optimization, Cat
Search Optimization, Software-Defined Network. |
Source: |
Journal of Theoretical and Applied Information Technology
31st May 2025 -- Vol. 103. No. 10-- 2025 |
Full
Text |
|
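The multi-tier priority queue stage described above (tasks sorted into slots by category, high-priority slots drained first) maps naturally onto a heap keyed by tier. A minimal sketch with Python's `heapq`; the tier names and task labels are illustrative, not taken from the paper:

```python
import heapq
from itertools import count

# Sketch of priority-aware task scheduling: tasks arrive tagged with a
# category, are placed into priority tiers, and higher tiers drain
# before lower ones (FIFO within a tier). Tier numbers are assumptions.

TIER = {"alarm": 0, "control": 1, "telemetry": 2}  # lower = higher priority
_seq = count()  # monotonically increasing tie-break within a tier

queue = []

def submit(category, task):
    heapq.heappush(queue, (TIER[category], next(_seq), task))

def drain():
    order = []
    while queue:
        _, _, task = heapq.heappop(queue)
        order.append(task)
    return order

submit("telemetry", "t1")
submit("alarm", "a1")
submit("control", "c1")
submit("alarm", "a2")
executed = drain()  # alarms first, then control, then telemetry
```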
Title: |
TO-MULTILONTOLOGY & MPCO: A METHODOLOGY FOR DEVELOPING MULTILINGUAL ONTOLOGIES &
A LEGAL ONTOLOGY OF THE PENAL CODE |
Author: |
ISMAHANE KOURTIN |
Abstract: |
Ontologies are among the techniques introduced by artificial intelligence in the
early 1990s to enable better organization and semantic representation of
information. They have the potential to play a crucial role in the design of
question-answering systems and content comprehension by organizing and
structuring the data they present. Multilingual ontologies are both
language-independent and capable of supporting multiple languages, offering
significant potential for querying and understanding knowledge in multicultural
and multilingual environments. Although several ontology development
methodologies exist, they provide the necessary elements for ontology
construction without clearly demonstrating how to implement them or specifying
the models to guide the development process, particularly for multilingual
ontologies. Indeed, existing methodologies summarize the development of
ontologies as a mere enumeration of important terms, followed by the definition
of classes and their hierarchy, the definition of properties and their facets,
and finally the creation of instances—without showing users the approach or
method that could guide them in choosing terms, defining classes, the hierarchy,
and properties, or in demonstrating how to build multilingual ontologies. In
addition, there is a lack of models that allow for representing ontology data in
a way that guides its development and documentation. This article proposes a
customized methodology, TO-MULTILONTOLOGY, which covers aspects from the
specification phase to the validation and evaluation phase, offering a detailed
implementation process with clearly defined steps to guide and simplify the task
of building multilingual ontologies. The proposed methodology also addresses one
of the main obstacles to effective knowledge sharing: the inadequate
documentation of existing ontologies. It provides powerful tools and models that
not only document the ontology but also guide its development. This methodology
will be explained and applied in the development of a multilingual legal
ontology, MPCO (Multilingual Penal Code Ontology), in French and Arabic, for the
Moroccan government's Penal Code. The constructed ontology can play a
significant role in information retrieval and in learning about the penal code.
It can also serve as a reference for the development of similar penal law
ontologies. |
Keywords: |
Legal Ontologies, Ontology Development, Ontology Design, Multilingual
Ontologies, Knowledge Representation And Modeling, Ontology Construction And
Development Methodologies. |
Source: |
Journal of Theoretical and Applied Information Technology
31st May 2025 -- Vol. 103. No. 10-- 2025 |
Full
Text |
|
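The multilingual-ontology pattern the methodology above prescribes keeps concepts language-independent while attaching per-language labels (French and Arabic in MPCO's case). A minimal data-structure sketch; the concept identifiers, labels, and hierarchy below are illustrative examples, not actual MPCO content:

```python
# Sketch of a language-independent concept store with per-language
# labels, in the spirit of TO-MULTILONTOLOGY / MPCO. Entries are
# illustrative, not the real ontology.

ontology = {
    "mpco:Offence": {
        "labels": {"fr": "infraction", "ar": "جريمة"},
        "subClassOf": None,
    },
    "mpco:Felony": {
        "labels": {"fr": "crime", "ar": "جناية"},
        "subClassOf": "mpco:Offence",
    },
}

def label(concept, lang):
    """Look up a concept's label in the requested language."""
    return ontology[concept]["labels"].get(lang)

def ancestors(concept):
    """Walk the subclass hierarchy up to the root."""
    chain = []
    parent = ontology[concept]["subClassOf"]
    while parent is not None:
        chain.append(parent)
        parent = ontology[parent]["subClassOf"]
    return chain

fr_label = label("mpco:Felony", "fr")
chain = ancestors("mpco:Felony")
```

Because the class hierarchy lives apart from the label sets, adding a third language touches only the `labels` maps, which is the language-independence property the methodology targets.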
Title: |
ENHANCING ROBUSTNESS IN MEDICAL QUESTION ANSWERING SYSTEMS WITH NOVEL DEFENSE
MODELS AGAINST ADVERSARIAL ATTACKS |
Author: |
ATRAB A. ABD EL-AZIZ, REDA A EL-KHORIBI, AND NOUR ELDEEN KHALIFA |
Abstract: |
Medical Question Answering (MQA) systems play a critical role in supporting
accurate medical diagnoses and healthcare decision-making. However, they are
increasingly vulnerable to adversarial text attacks. These attacks subtly alter
input questions and lead to incorrect outputs. While prior research has
extensively explored adversarial defenses for medical images, there remains a
significant gap in protection strategies for text-based MQA systems. To the best
of our knowledge, this paper is the first to propose and evaluate defense
mechanisms specifically designed to secure MQA systems against these attacks. We
introduce three novel defense models that address both word-level (synonym
substitution, word deletion) and character-level (random character insertion)
attacks targeting the BERT model. The Synonym Substitution Embedding (SSE)
Defense Framework combines TF-IDF ranking with transformer-based synonym
embeddings to resist synonym substitution attacks. CosineDefender leverages
cosine similarity to detect and neutralize perturbed inputs, while
JaccardDefender applies Jaccard similarity to provide robust protection across
multiple attack vectors. To validate our approach, we conduct experiments on two
medical datasets (Symptom2Disease and Medical Symptoms Text and Audio
Classification) and a natural language dataset (AG’s News) for comparative
analysis. Our results show that the SSE model reduces the attack success rate on
AG’s News from 8.7% to just 0.4%. On medical datasets, CosineDefender
significantly lowers attack success rates to 3.4%, 4.3%, and 12.8%, while
JaccardDefender consistently achieves the best performance, reducing all attack
success rates to around 3.4% and maintaining high classification accuracy. This
work introduces a new line of defense for MQA systems. It establishes a baseline
for adversarial robustness in the medical NLP domain. It also contributes the
first comprehensive evaluation of targeted defense models in this critical area. |
Keywords: |
Adversarial Attacks, BERT, Medical Question Answer (MQA), Term Frequency-Inverse
Document Frequency (TFIDF). |
Source: |
Journal of Theoretical and Applied Information Technology
31st May 2025 -- Vol. 103. No. 10-- 2025 |
Full
Text |
|
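The CosineDefender and JaccardDefender models above rest on token-level similarity between a suspect input and a reference question: a perturbed input drifts away from the reference, and low similarity flags an attack. The tokenization, example sentences, and any flagging threshold below are illustrative assumptions, not the paper's pipeline:

```python
import math
from collections import Counter

# Sketch of the two similarity measures behind the defenses:
# Jaccard on token sets and cosine on token-count vectors.
# Example texts simulate a character-level adversarial perturbation.

def jaccard(a_tokens, b_tokens):
    a, b = set(a_tokens), set(b_tokens)
    return len(a & b) / len(a | b)

def cosine(a_tokens, b_tokens):
    a, b = Counter(a_tokens), Counter(b_tokens)
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

clean = "patient reports fever and persistent cough".split()
attacked = "patient reports fevre and persistent cuogh".split()  # swapped chars

j = jaccard(clean, attacked)   # drops as perturbed tokens leave the set
c = cosine(clean, attacked)    # drops as count vectors diverge
```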
Title: |
HYBRID INTRUSION DETECTION FRAMEWORK FOR MOBILE EDGE COMPUTING |
Author: |
SUJAN KUMAR DAS, MOHAMED EL-DOSUKY, SHERIF KAMEL |
Abstract: |
The growing use of mobile edge computing (MEC) has had a positive impact on user
experience and reduced latency. However, this proximity to users also makes MEC environments vulnerable to a number of security risks. This research article
presents an edge-based hybrid intrusion detection system for MEC and the
Internet of Things (IoT). The system uses techniques like behavioral analysis,
anomaly detection, and signature-based detection, ensuring real-time response
and reduced bandwidth usage. The system also addresses challenges in data
acquisition and cleaning due to potential threats from malicious users and
noise. The model uses smoothing filters, unsupervised learning, and deep
learning techniques to detect anomalies and threats, reducing bandwidth.
According to the findings, securing MEC environments against changing cyber
threats can be accomplished using an edge-based hybrid intrusion detection
system. |
Keywords: |
Mobile edge computing, Intrusion detection, Blockchain, Hybrid intrusion,
Machine learning |
Source: |
Journal of Theoretical and Applied Information Technology
31st May 2025 -- Vol. 103. No. 10-- 2025 |
Full
Text |
|
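The smoothing-then-detection step the abstract describes can be sketched as a moving-average baseline with a deviation threshold: the filter suppresses sensor noise, and points far from the smoothed baseline are flagged as candidate anomalies. The window size, threshold multiplier, and toy traffic trace are illustrative assumptions:

```python
import statistics

# Sketch of smoothing-filter anomaly detection on edge telemetry:
# a trailing moving average gives a baseline, and residuals beyond
# `threshold` standard deviations are flagged. Parameters are assumed.

def moving_average(xs, window=3):
    out = []
    for i in range(len(xs)):
        lo = max(0, i - window + 1)
        out.append(sum(xs[lo:i + 1]) / (i + 1 - lo))
    return out

def flag_anomalies(xs, threshold=2.0):
    base = moving_average(xs)
    residuals = [x - b for x, b in zip(xs, base)]
    sigma = statistics.pstdev(residuals) or 1.0
    return [i for i, r in enumerate(residuals) if abs(r) > threshold * sigma]

traffic = [10, 11, 10, 12, 11, 95, 10, 11]  # packets/s with one spike
spikes = flag_anomalies(traffic)            # index of the flooding burst
```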
Title: |
A DYNAMIC IMAGE RETRIEVAL FRAMEWORK BASED ON FUSION-BASED FEATURE EXTRACTION
USING DEEP LEARNING |
Author: |
ALLA TALIB MOHSIN, MOHD SHAFRY BIN MOHD RAHIM |
Abstract: |
Images are integral to human communication, and with the rapid growth of
multimedia data, finding relevant images in large archives has become a
significant challenge. Content-Based Image Retrieval (CBIR) offers a solution by
retrieving visually similar images based on content rather than textual
annotations. Despite its potential, CBIR systems face critical challenges,
including irrelevant region detection, sensitivity to variations in brightness
and size, and the absence of predefined class information in datasets. To
address these issues, this study proposes a CBIR framework that integrates low-
and high-level features such as texture, shape, and color for robust image
representation. The framework employs wavelet-based Local Ternary Pattern (LTP)
for texture extraction and incorporates a dynamic weight allocation mechanism,
which adapts to statistical metrics like mean and variance to enhance retrieval
accuracy. Comprehensive evaluations of the Corel-1k and Corel-10k datasets
demonstrate the method's effectiveness in retrieving visually similar images
with high precision. The proposed approach surpasses existing techniques,
including CBIR-ANR, OMCBIR, and CNN-QCSO, in terms of precision, memory
efficiency, and visual quality. With fused features, the proposed method achieves a precision of 88.1, a recall of 80.5, and a MAP of 88.3 on the Corel-1k dataset, and a precision of 83.7, a recall of 76.2, and a MAP of 83.9 on the Corel-10k dataset. This study
establishes a promising direction for developing efficient CBIR systems capable
of handling large-scale image datasets while improving retrieval performance. |
Keywords: |
Image Retrieval; CBIR; Features extraction; CNN; LTP. |
Source: |
Journal of Theoretical and Applied Information Technology
31st May 2025 -- Vol. 103. No. 10-- 2025 |
Full
Text |
|
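The Local Ternary Pattern (LTP) texture descriptor named above extends LBP with a tolerance band: each neighbor of a center pixel is coded +1 if it exceeds center+t, -1 if it falls below center-t, and 0 otherwise. A minimal sketch on one 3x3 patch; the tolerance t=5 and the patch values are illustrative assumptions (the paper applies LTP on wavelet subbands):

```python
# Sketch of the Local Ternary Pattern code for a single 3x3
# neighborhood. Tolerance and pixel values are toy assumptions.

def ltp_code(neighborhood, t=5):
    """3x3 neighborhood -> ternary codes of the 8 ring pixels,
    read clockwise from the top-left."""
    c = neighborhood[1][1]
    ring = [neighborhood[0][0], neighborhood[0][1], neighborhood[0][2],
            neighborhood[1][2], neighborhood[2][2], neighborhood[2][1],
            neighborhood[2][0], neighborhood[1][0]]

    def encode(p):
        if p > c + t:
            return 1
        if p < c - t:
            return -1
        return 0

    return [encode(p) for p in ring]

patch = [[60, 80, 52],
         [49, 54, 54],
         [10, 56, 70]]
code = ltp_code(patch)
```

The zero band is what makes LTP less sensitive than LBP to small brightness fluctuations, which matches the brightness-robustness goal stated in the abstract.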
Title: |
EXPLORING THE FACTORS AFFECTING THE DEPLOYMENT OF THE INTERNET OF THINGS IN
HEALTHCARE ORGANIZATIONS IN THE UAE USING THE UTAUT MODEL |
Author: |
NAHIL ABDALLAH |
Abstract: |
Despite the growing interest in digital healthcare, the adoption of Internet of
Things (IoT) technologies in healthcare organizations remains limited and
underexplored, particularly from the patients' perspective. This research
investigates the key factors influencing the deployment of IoT-based healthcare
devices among end users in public hospitals across the UAE. Drawing from the
Unified Theory of Acceptance and Use of Technology (UTAUT), enhanced with
constructs identified from existing literature, the study proposes a predictive
adoption model. Data was gathered from 231 participants, and structural equation
modeling was used to validate both the measurement and structural components of
the model. The findings highlight that technological complexity, social
influence, perceived health risks, facilitating conditions, perceived security
and privacy, and relative advantages significantly shape users' attitudes (ATT),
which in turn affect their behavioral intentions (BI) to adopt IoT healthcare
devices. The study concludes that addressing these factors is critical for
successful IoT implementation in healthcare. It contributes to Information
Systems (IS) research by integrating new variables into the UTAUT model and
offers practical insights for healthcare decision-makers and technology
providers aiming to boost IoT adoption. |
Keywords: |
Healthcare, Internet of Things, Privacy, Security, UTAUT |
Source: |
Journal of Theoretical and Applied Information Technology
31st May 2025 -- Vol. 103. No. 10-- 2025 |
Full
Text |
|
Title: |
ROBUST QUANTILE REGRESSION-BASED MACHINE LEARNING FRAMEWORK FOR
OUTLIER-RESILIENT TIME SERIES ANALYSIS |
Author: |
Mr. TUSHAR MEHTA, Dr. DHARMENDRA PATEL |
Abstract: |
Time series analysis is a powerful tool in countless domains, from finance to
healthcare, but it is often challenged by the presence of outliers that can
distort predictions and model outputs. This article presents a robust quantile
regression-based machine learning framework that increases the resilience of
time series analysis to outliers. By using quantile regression, the proposed
framework captures the conditional distribution of time series data and provides
a more comprehensive understanding of its underlying structure. Integrating
machine learning improves the models' ability to adapt to complex nonlinear
patterns while maintaining robustness to anomalous data points. Through
extensive experiments on synthetic and real-world datasets, we show that our
framework outperforms traditional methods in both predictive accuracy and
outlier resilience. Time series forecasts play a key role especially in
financial markets, where accurate forecasts are useful for investors and
stakeholders; however, traditional models have difficulty capturing nonlinear
dependencies and cannot handle outliers effectively. The framework integrates
long short-term memory (LSTM) networks, LightGBM, and stacked ensemble models:
quantile regression provides robust outlier handling, while the deep learning
components strengthen the base techniques to improve predictive performance.
Experimental evaluation on Goldman Sachs BDC, Inc. (GSBD) share price data
demonstrates the advantages of the ensemble model over the individual models.
The results show that the stacked ensemble model reaches the lowest pinball
loss (0.0656), MAE (0.1313), and RMSE (0.2185), and the highest R² (0.9778),
exceeding LSTM and LightGBM. The results highlight the effectiveness of hybrid
ensemble learning in financial time series forecasting, providing a more
accurate and robust approach to dealing with outlier-contaminated data. |
Keywords: |
Time Series Analysis, Quantile Regression, Outlier Resilience, Machine Learning
Framework, Robust Prediction. |
Source: |
Journal of Theoretical and Applied Information Technology
31st May 2025 -- Vol. 103. No. 10-- 2025 |
Full
Text |
|
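The quantile (pinball) loss that quantile regression minimizes, and that the abstract above reports, weights under-predictions by q and over-predictions by (1 - q); at q = 0.5 it reduces to half the mean absolute error, which is what makes the fit robust to outliers. A minimal sketch with toy values:

```python
# Pinball (quantile) loss: the objective behind quantile regression.
# Toy targets and predictions only.

def pinball_loss(y_true, y_pred, q=0.5):
    total = 0.0
    for y, f in zip(y_true, y_pred):
        diff = y - f
        total += q * diff if diff >= 0 else (q - 1) * diff
    return total / len(y_true)

y = [1.0, 2.0, 10.0]
f = [1.0, 3.0, 4.0]
median_loss = pinball_loss(y, f, q=0.5)  # half the MAE
upper_loss = pinball_loss(y, f, q=0.9)   # penalises under-prediction more
```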
Title: |
THE INFLUENCE OF SOCIAL MEDIA AND DIGITAL COMMUNICATION ON THE EVOLUTION OF
VOCABULARY AND GRAMMATICAL STRUCTURES |
Author: |
VIKTORIIA SIKORSKA, OKSANA SNIGOVSKA, HANNA PEREDERII, ALINA ANDROSHCHUK,
OLEKSANDR KALISHCHUK |
Abstract: |
The article discusses the impact of emerging communication technologies and
social networks on the development of lexical and grammatical norms of the
English language. The study is dedicated to the most important tendencies in
language evolution, i.e., the emergence of neologisms, acronyms, abbreviations
and borrowings, and grammatical simplifications and non-standard syntactic
structures. Its importance is due to the need to investigate the mechanisms of
language norm adaptation into the ever-changing digital environment, reshaping
traditional language standards and communication methods. The research is based
on the study of linguistic features of five popular sites (Twitter, Facebook,
Instagram, TikTok, Reddit), which allows us to identify the specifics of the use
of linguistic innovations in different situations of online communication. The
article aims to determine the nature and causes of digital language changes,
systematise their lexical and grammatical manifestations, and assess the impact
of age and social factors on language dynamics. The study used a set of methods:
content analysis, comparative and contrastive analysis, sociolinguistic
approach, and descriptive analysis. The material was 250 text samples from five
digital platforms. According to the research results, social networks are an
effective mechanism for linguistic innovation, creating novel communication
models and evolving forms of classical languages to digital ones. It has been
established that different platforms have some linguistic features: Twitter is
characterised by the active shortening of words and phrases, and TikTok and
Instagram utilise non-standard grammatical forms with ironic or humorous
connotations. Reddit is characterised by language play and violation of
traditional syntactic rules. The research also revealed a strong dependency of
language variations on users' age and social qualities: young people are the
primary agents of language development. At the same time, their seniors keep
traditional language norms. The findings may be used in future research on
digital linguistics, namely how social media affects academic writing,
professional jargon, and the long-term restructuring of the language system. |
Keywords: |
Social Networks, Language, Language Change, Online Communication, Communication,
Language Tools |
Source: |
Journal of Theoretical and Applied Information Technology
31st May 2025 -- Vol. 103. No. 10-- 2025 |
Full
Text |
|
Title: |
REVIEW ON PROGRAMMING LANGUAGE LEARNING MODELS AND INSTRUCTIONAL APPROACHES:
CURRENT TRENDS AND FUTURE DIRECTIONS |
Author: |
SANAL KUMAR T S, R THANDEESWARAN |
Abstract: |
Programming has become a fundamental subject in the academic curriculum, but the
learning process presents unique challenges. It requires not only systematic
study and dedication but also the application of logical thinking and
problem-solving skills in practical contexts. This complexity makes programming
particularly difficult for beginners, who often face challenges in understanding
foundational concepts like sequencing, decision-making, and looping. As a
result, various pedagogical methods have evolved to address the difficulty in
programming learning. To better understand the recent developments in
programming educational approaches, we aim to provide a detailed review of
learning models and instructional approaches in programming learning. Following
this, we explore the cognitive factors influencing the learner and the essential
aspects of the learner’s learning style and preferences in programming
education. Finally, we conclude the review by discussing how these techniques
can be combined to formulate future pedagogical approaches in programming
instruction. Consequently, this review proposes integrating the learning style
model with adaptive e-learning environments (ALE) in a blended learning approach
as a better solution to address the hurdles of programming learning difficulty.
Given this, the review paper provides a comprehensive overview of the
programming learning environments, strategies, instructional approaches,
cognitive factors, and learning styles leveraged in programming education, which
future researchers can utilize. |
Keywords: |
Programming Education; Difficulty In Programming; Instructional Methods;
Learning Style Models; Adaptive Learning Environments. |
Source: |
Journal of Theoretical and Applied Information Technology
31st May 2025 -- Vol. 103. No. 10-- 2025 |
Full
Text |
|
Title: |
VADER-RLA: A REINFORCEMENT LEARNING-AUGMENTED SENTIMENT ANALYSIS MODEL
LEVERAGING VADER FOR CONTEXT-AWARE EMOTION CLASSIFICATION |
Author: |
KOLLI. SAI BHAVANA, DR. SENTHIL ATHITHAN, Dr. NESARANI ABRAHAM |
Abstract: |
VADER-RLA is presented as a hybrid architecture that augments VADER with
reinforcement learning, expanding the contextual applicability of its
established sentiment scoring to comprehensive emotion classification and
demonstrating that the framework can combine accurate sentiment prediction with
trustworthiness. The most novel aspect of this
work is the application of reinforcement learning to the problem of adaptive
sentiment classification, hence allowing the model to perform real-time
optimization of the sentiment scoring in changing linguistic surroundings.
Proposed approach overcomes lexicon-based models and deep learning models, with
respect to adaptability, precision and interpretability. Results show that
VADER-RLA consistently exceeds traditional methods, yielding impressive accuracy
and robustness and successfully detecting sophisticate sentiments like sarcastic
and mixed ones. The experimental results indicate that VADER-RLA achieved an
accuracy of 92.3%, an F1-score of 0.89, precision of 91.7%, and recall of 90.5%,
demonstrating significant improvements over the baseline VADER model. These
findings highlight the potential of VADER-RLA to provide a robust, adaptive
solution for sentiment analysis in environments with rapidly changing linguistic
trends. |
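As an illustration of the general idea described in this abstract, the
following minimal Python sketch adapts a lexicon's valence scores from reward
feedback. The toy lexicon, the class name RLAugmentedScorer, and the update
rule are our own stand-ins, not the authors' VADER-RLA implementation; the
reward-driven adjustment is a deliberately simplified RL-style update.

```python
# Illustrative sketch only: a toy lexicon stands in for VADER's lexicon, and
# a simple reward-driven weight update stands in for the paper's RL component.
# All names here (BASE_LEXICON, RLAugmentedScorer) are hypothetical.

BASE_LEXICON = {"great": 2.0, "good": 1.5, "bad": -1.5, "terrible": -2.5}

class RLAugmentedScorer:
    """Lexicon scorer whose token valences adapt from reward feedback."""

    def __init__(self, lexicon, lr=0.1):
        self.weights = dict(lexicon)   # adaptable copy of the base valences
        self.lr = lr                   # learning rate for the RL-style update

    def score(self, text):
        """Mean valence of known tokens; > 0 positive, < 0 negative."""
        hits = [self.weights[t] for t in text.lower().split()
                if t in self.weights]
        return sum(hits) / len(hits) if hits else 0.0

    def update(self, text, true_label):
        """Nudge token valences toward the observed label (+1 / -1).

        Reward is +1 when the predicted sign matches the label, else -1;
        a negative reward pulls each contributing token toward the label.
        """
        pred = self.score(text)
        reward = 1.0 if (pred > 0) == (true_label > 0) else -1.0
        if reward < 0:
            for t in text.lower().split():
                if t in self.weights:
                    self.weights[t] += self.lr * (true_label - self.weights[t])
        return reward

scorer = RLAugmentedScorer(BASE_LEXICON)
# Suppose "bad" is used as praise in some community's slang; repeated
# corrective feedback flips its learned valence:
for _ in range(30):
    scorer.update("that track is bad", +1)
print(scorer.score("that track is bad"))  # adapted score is now positive
```

This captures only the adaptivity aspect: a static lexicon would keep
misclassifying the drifting usage, while the reward signal lets the scorer
revise its valences online.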
Keywords: |
Adaptability, Context-Aware, Emotion Classification, Hybrid Approach,
Performance Metrics, Reinforcement Learning, Sentiment Analysis, VADER |
Source: |
Journal of Theoretical and Applied Information Technology
31st May 2025 -- Vol. 103 No. 10 -- 2025 |
Full
Text |
|
Title: |
EMPOWERING ONLINE SHOPPING SENTIMENT ANALYSIS USING TENACIOUS ARTIFICIAL BEE
COLONY INSPIRED TAYLOR SERIES-BASED GAUSSIAN MIXTURE MODEL (TABC-TSGMM) |
Author: |
G.M. BALAJI, K. VADIVAZHAGAN |
Abstract: |
Sentiment Analysis has become increasingly important in online shopping, where
consumers rely on reviews and feedback to make informed purchasing decisions.
However, accurate sentiment classification poses challenges, such as handling
nuanced language and varying review lengths. This study introduces a novel
approach called the Tenacious Artificial Bee Colony inspired Taylor Series-based
Gaussian Mixture Model (TABC-TSGMM) to address these challenges. TABC-TSGMM
leverages two key components: the Taylor Series-based Gaussian Mixture Model
(TSGMM) and the Tenacious Artificial Bee Colony (TABC). TSGMM captures complex
sentiment patterns in text data, while TABC optimizes the model’s performance
through an intelligent search strategy. TABC enhances TSGMM by optimizing model
parameters, making the sentiment analysis process more robust and accurate. To
evaluate the effectiveness of TABC-TSGMM, we conducted experiments on an Amazon
product review dataset, focusing on electronic products. The results
demonstrated superior classification accuracy, showcasing the potential of this
approach to empower online shopping sentiment analysis. |
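To illustrate the combination this abstract describes, the following
self-contained Python sketch fits a tiny one-dimensional, two-component
Gaussian mixture by letting a simplified artificial-bee-colony (ABC) search
tune the component means. The fixed unit variances, equal weights, toy data,
and all function names are our own simplifying assumptions, not the authors'
TABC-TSGMM implementation.

```python
import math
import random

def log_likelihood(means, data):
    """Log-likelihood of data under an equal-weight, unit-variance GMM."""
    ll = 0.0
    for x in data:
        p = sum(0.5 * math.exp(-0.5 * (x - m) ** 2) / math.sqrt(2 * math.pi)
                for m in means)
        ll += math.log(p)
    return ll

def abc_search(data, n_bees=20, iters=100, seed=0):
    """Bee-colony-style search: each 'bee' holds a candidate pair of means;
    bees explore random neighbours, keeping moves that raise the fitness."""
    rng = random.Random(seed)
    lo, hi = min(data), max(data)
    bees = [[rng.uniform(lo, hi), rng.uniform(lo, hi)] for _ in range(n_bees)]
    for _ in range(iters):
        for b in bees:
            cand = [m + rng.uniform(-0.5, 0.5) for m in b]  # local move
            if log_likelihood(cand, data) > log_likelihood(b, data):
                b[:] = cand                                  # greedy accept
    return max(bees, key=lambda b: log_likelihood(b, data))

rng = random.Random(1)
# Toy "review score" data: two clusters, one negative and one positive
data = ([rng.gauss(-2, 1) for _ in range(50)] +
        [rng.gauss(2, 1) for _ in range(50)])
means = sorted(abc_search(data))
print(means)  # best candidate component means found by the search
```

The design point mirrors the abstract: the mixture model supplies the fitness
landscape (likelihood of the sentiment data), while the bee-colony search does
the parameter optimization without requiring gradients.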
Keywords: |
Sentiment Analysis, ABC, GMM, Taylor Series, Local Search, Optimization |
Source: |
Journal of Theoretical and Applied Information Technology
31st May 2025 -- Vol. 103 No. 10 -- 2025 |
Full
Text |
|
Title: |
CLOUD COMPUTING ADOPTION IN E-GOVERNMENT SERVICES: A COMPREHENSIVE POST-COVID-19
SYSTEMATIC REVIEW AND FUTURE DIRECTIONS |
Author: |
RAGHED ALKHASAWNEH, ZAIHISMA CHE COB, and ALIZA BINTI ABDUL LATIF |
Abstract: |
Cloud computing has emerged as a pivotal technology in transforming public
sector services, particularly in the wake of the COVID-19 pandemic, which
accelerated the global shift toward digital governance. This study conducts a
comprehensive systematic literature review to examine the adoption of cloud
computing in e-government services, underscoring its growing relevance in
enhancing efficiency, scalability, and citizen engagement. The goal of this
review is to identify the key factors influencing cloud adoption in government
organizations, explore the most frequently applied IT/IS theoretical frameworks,
and assess how the pandemic has reshaped adoption trends and priorities. Based
on the analysis of 50 peer-reviewed studies from seven scholarly databases, the
study hypothesizes that cloud adoption in e-government is significantly
influenced not only by technological and organizational factors but also by
external pressures such as public health crises and evolving citizen
expectations. The findings confirm this hypothesis by revealing that while cloud
computing offers substantial advantages, such as cost-effectiveness, improved
accessibility, and service innovation, its adoption is challenged by data
security concerns, legal barriers, and varying levels of institutional
readiness. Additionally, the review identifies a shift toward citizen-centric
service models post-pandemic, emphasizing the need for inclusive and resilient
digital infrastructures. The study also highlights a gap in the literature,
noting a limited diversity in publication sources and keywords, and encourages
future research to broaden its scope. Practically, the insights gained can
support policymakers and decision-makers in designing more adaptive, secure, and
user-focused cloud-based e-government services. This review uniquely contributes
by contextualizing cloud adoption within the post-COVID-19 digital
transformation of the public sector. |
Keywords: |
Cloud Computing, E-government Services, Adoption, Benefits, Challenges,
Systematic Literature Review, IS/IT Models, COVID-19. |
Source: |
Journal of Theoretical and Applied Information Technology
31st May 2025 -- Vol. 103 No. 10 -- 2025 |
Full
Text |
|