|
|
|
Submit Paper / Call for Papers
The journal receives papers in a continuous flow and considers articles
from a wide range of Information Technology disciplines, from the most
basic research to the most innovative technologies. Please submit your papers
electronically to our submission system at http://jatit.org/submit_paper.php in
MS Word, PDF, or a compatible format so that they may be evaluated for
publication in the upcoming issue. This journal uses a blinded review process;
please remember to include all your personally identifiable information in the
manuscript before submitting it for review, and we will edit out the necessary
information on our side. Submissions to JATIT should be full research / review
papers (properly indicated below the main title).
|
|
|
Journal of
Theoretical and Applied Information Technology
November 2025 | Vol. 103 No. 22 |
|
Title: |
ROBL-TSA AND SNAPSHOT ENSEMBLE CLASSIFIER FOR PAPAYA FRUIT DISEASE
CLASSIFICATION |
|
Author: |
K. HARIKA, M.N. NACHAPPA |
|
Abstract: |
Papaya is a healthy tropical fruit that is grown in many countries, but its
growth and quality are affected by various diseases. Early and accurate
disease detection is crucial to minimizing crop loss, yet the diseases appear
in different shapes, sizes, and colours. In this study, we developed a Snapshot
Ensemble method to classify papaya fruit diseases more accurately. It integrates
the Histogram of Oriented Gradients (HOG) for feature extraction and Random
Opposition-Based Learning with the Tuna Swarm Algorithm (ROBL-TSA) to choose the
best features for classification. The snapshot ensemble is utilized to improve
generalization while reducing overfitting, using cyclic cosine annealing to
adjust the learning rate during training. Our approach achieved 89.83% accuracy
and 93.27% AUC on the papaya fruit disease dataset and performed better than
traditional models such as the Feed-forward Artificial Neural Network (FANN).
This method offers a low-cost, effective solution for detecting papaya diseases. |
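For context, the cyclic cosine annealing schedule that snapshot ensembling relies on follows Huang et al.'s well-known formulation; a minimal sketch is below, where the initial rate, cycle count, and epoch budget are illustrative assumptions rather than values from the paper.

import math

def snapshot_lr(epoch, total_epochs=300, n_cycles=6, lr0=0.1):
    """Cyclic cosine annealing: the rate restarts at lr0 at the start of
    each cycle and decays toward zero, where a model snapshot is saved."""
    cycle_len = math.ceil(total_epochs / n_cycles)
    t = epoch % cycle_len                 # position within the current cycle
    return lr0 / 2 * (math.cos(math.pi * t / cycle_len) + 1)

# Ensemble members are the snapshots taken at the end of each cycle,
# i.e. whenever (epoch + 1) % cycle_len == 0; their predictions are averaged.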
|
Keywords: |
Papaya Fruit Disease, Random Opposition-Based Learning, Snapshot Ensemble, Tuna
Swarm Algorithm. |
|
Source: |
Journal of Theoretical and Applied Information Technology
30th November 2025 -- Vol. 103. No. 22-- 2025 |
|
Full
Text |
|
|
Title: |
PRIVACY IN IOT NETWORKS: A ZERO-KNOWLEDGE PROOF APPROACH FOR SHARDED BLOCKCHAIN
ARCHITECTURES |
|
Author: |
YASSINE LKHALIDI, HAMZA MAJID, HATIM KHARRAZ AROUSSI, ACHRAF TIFERNINE |
|
Abstract: |
Internet of Things (IoT) deployments operate at the challenging intersection of
constrained hardware, cryptographic security requirements, and distributed
ledger scalability: a problem space where existing blockchain-IoT integrations
fail due to unrealistic computational assumptions, monolithic trust models, or
a lack of privacy guarantees across shard boundaries. This paper bridges the gap
between cryptographic theory and IoT reality by presenting a practical
implementation of PLONK-based zero-knowledge authentication deployed on ESP32
hardware within a sharded blockchain architecture. We contribute: (1) A novel
system architecture that integrates PLONK zk-SNARKs directly into the
authentication layer of a sharded ledger, enabling IoT devices to prove identity
without revealing credentials while maintaining cross-shard privacy; (2) A
succinct cross-shard transaction protocol where zero-knowledge proofs accompany
inter-shard transfers, ensuring atomic operations with minimal overhead and
addressing the previously unsolved problem of verifiable privacy across shard
boundaries; (3) Practical validation on real ESP32 devices demonstrating
sub-50ms verification times and 288-byte proof sizes, proving that modern
zero-knowledge systems are deployable beyond theoretical constructs; (4) A
game-theoretic security analysis showing negligible attack probabilities under
realistic IoT network conditions. By validating PLONK on actual constrained
hardware within a sharded architecture, we demonstrate that sophisticated
cryptographic primitives can meet the stringent requirements of real-world IoT
ecosystems. This work establishes that the integration of universal-setup SNARKs
with sharded blockchains represents a practical path forward for securing
billions of IoT devices without compromising privacy or scalability. |
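Reproducing PLONK itself requires a full proving system, so as a hedged stand-in the sketch below uses a Schnorr identification protocol with a Fiat-Shamir challenge, which illustrates the same idea the paper deploys: proving knowledge of a credential without revealing it. The group parameters are deliberately toy-sized.

import hashlib, secrets

# Toy Schnorr group: p = 2q + 1 with q prime; g = 4 has order q in Z_p*.
p, q, g = 227, 113, 4

def H(*vals):  # Fiat-Shamir: derive the challenge from the transcript
    data = "|".join(map(str, vals)).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

x = secrets.randbelow(q - 1) + 1   # device's secret credential
y = pow(g, x, p)                   # public identity registered on-ledger

# Prover: commit, derive challenge, respond -- x is never revealed.
r = secrets.randbelow(q - 1) + 1
t = pow(g, r, p)
c = H(g, y, t)
s = (r + c * x) % q

# Verifier checks g^s == t * y^c (mod p).
assert pow(g, s, p) == (t * pow(y, c, p)) % p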
|
Keywords: |
Elliptic Curve Cryptography, Zero-Knowledge Proofs, ZK-STARKs, PLONK, Shard
Blockchain. |
|
Source: |
Journal of Theoretical and Applied Information Technology
30th November 2025 -- Vol. 103. No. 22-- 2025 |
|
Full
Text |
|
|
Title: |
DYNAMIC ADAPTIVE ENHANCED ELLIPTIC CURVE CRYPTOGRAPHY FOR SECURE DATA ENCRYPTION
AND DECRYPTION IN SMART CITIES |
|
Author: |
NALLURI BRAHMA NAIDU, GONDI LAKSHMEEWARI |
|
Abstract: |
As smart cities increasingly rely on interconnected systems such as Internet of
Things (IoT) devices, sensors, and communication networks, the security of data
transmission has become a critical concern. This research addresses securing
data transmission in smart cities, where IoT devices and sensors handle
sensitive data. It proposes Enhanced Elliptic Curve Cryptography (EECC) to
provide a scalable, efficient, and resilient cryptographic framework ensuring
confidentiality, integrity, and authentication. This model uses EECC to ensure
protected data transmission in smart cities, providing strong encryption, data
confidentiality, integrity, and authentication. It addresses scalability,
performance, and resilience to advanced security threats in interconnected
systems. Contextual Adaptive Data Filtering (CADF) optimizes smart city security
by dynamically adjusting encryption based on data sensitivity, network
conditions, and threats. Dynamic Adaptive Elliptic Curve Encryption (DAECE)
ensures secure, efficient communication in smart cities by adjusting encryption
on the same basis. Enhanced ECC secures communication in IoT, smart grids,
healthcare, blockchain, smart cities, mobile devices, and public safety.
Homomorphic Encryption (HE-SDP) enables secure data analysis in smart cities,
preserving privacy by processing encrypted data without revealing it. Findings,
from an implementation in Python, show that ECC key sizes of 100-500 bits
provide security levels of 75-250 bits, whereas RSA requires key sizes of
1,000-15,000 bits to reach comparable levels. The future scope of EECC in smart
cities includes further optimization for IoT devices, integration with emerging
technologies like 5G and blockchain, and enhanced resilience against quantum
computing threats for long-term security. |
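The adaptive components (CADF, DAECE, HE-SDP) are the paper's own contributions and are not reproduced here; what can be sketched generically is the core ECC primitive they build on. Below is a plain ECDH key agreement on a textbook toy curve (y^2 = x^3 + 2x + 2 over F_17); a real deployment would use a standard curve such as P-256 or Curve25519.

# Toy curve y^2 = x^3 + 2x + 2 over F_17, base point G = (5, 1) of order 19.
p, a, G, n = 17, 2, (5, 1), 19

def ec_add(P, Q):
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                              # point at infinity
    if P == Q:
        m = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (m * m - x1 - x2) % p
    return (x3, (m * (x1 - x3) - y1) % p)

def ec_mul(k, P):                                # double-and-add scalar multiply
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

# ECDH: each party keeps a scalar secret and exchanges only public points.
ka, kb = 7, 11
Qa, Qb = ec_mul(ka, G), ec_mul(kb, G)
assert ec_mul(ka, Qb) == ec_mul(kb, Qa)          # identical shared secret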
|
Keywords: |
Elliptic Curve Cryptography, Encryption, Decryption, Smart Cities, Contextual
Adaptive Data Filtering and Data Transmission. |
|
Source: |
Journal of Theoretical and Applied Information Technology
30th November 2025 -- Vol. 103. No. 22-- 2025 |
|
Full
Text |
|
|
Title: |
NON-LINEAR SMART MODEL SUPPORT VECTOR MACHINE (SVM) AS A HOAX NEWS CLASSIFIER |
|
Author: |
A M H PARDEDE, A FAUZI, Y MAULITA, PREDDY MARPAUNG, SUTRISNO ARIANTO PASARIBU |
|
Abstract: |
The urgency of this study lies in the increasing spread of hoax news in the
digital era and its negative impact on society, particularly in relation to
government policies such as budget efficiency and President Prabowo's free meal
program. Hoaxes can trigger misinformation and undermine public trust in
implemented policies. Therefore, an automated system capable of detecting hoax
news with high accuracy is essential to counter the spread of misinformation.
The objectives of this study are to: (1) develop a non-linear Support Vector
Machine (SVM) model to improve the accuracy of hoax news classification; (2)
identify the most influential features in hoax detection; (3) compare the
performance of the non-linear SVM model with other machine learning methods; (4)
design a web-based system or mobile application for automated hoax detection;
and (5) apply the model to identify hoaxes related to public policy, thereby
ensuring that the public receives valid and reliable information. The
methodology adopts a Natural Language Processing (NLP)-based approach,
consisting of several stages: data collection (hoax and valid news from various
trusted sources), data preprocessing, feature extraction, implementation of the
non-linear SVM model, model evaluation, and system development. The experimental
results demonstrate that the proposed model achieves an accuracy of 93.6%,
precision of 94.5%, recall of 92.9%, F1-score of 93.7%, and an AUC (Area Under
the Curve) of 0.96. These results indicate that the model effectively
distinguishes between hoax and non-hoax news. The high precision value suggests
that the model is highly reliable in ensuring that news classified as hoax is
indeed hoax. Meanwhile, the high recall value shows that the model is
sufficiently sensitive in capturing most hoaxes, although there remain 40 hoax
cases that were missed (false negatives). In conclusion, this study demonstrates
that the non-linear SVM with an RBF kernel is a robust and reliable algorithm
for classifying hoax and non-hoax news in the Indonesian language. Moreover, it
successfully addresses the research questions posed at the outset, providing a
significant contribution to automated hoax detection in the context of public
policy and digital information integrity. |
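A minimal sketch of the model family described (a non-linear SVM with an RBF kernel over extracted text features), using scikit-learn; the placeholder corpus and hyperparameters below are assumptions, not the study's Indonesian dataset or tuned values.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

# Placeholder corpus; the real study uses Indonesian news from trusted sources.
texts = [
    "government confirms free meal program budget details",
    "secret document proves budget was stolen overnight",
    "ministry publishes official efficiency policy report",
    "shocking claim: policy cancelled, share before deleted",
] * 10
labels = [0, 1, 0, 1] * 10                      # 1 = hoax, 0 = valid

clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("svm", SVC(kernel="rbf", C=10, gamma="scale")),  # non-linear RBF boundary
])

X_tr, X_te, y_tr, y_te = train_test_split(texts, labels, test_size=0.25,
                                          random_state=42, stratify=labels)
clf.fit(X_tr, y_tr)
print(f"accuracy: {clf.score(X_te, y_te):.2f}")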
|
Keywords: |
ews, Hoax, Classification, Linear, SVM. |
|
Source: |
Journal of Theoretical and Applied Information Technology
30th November 2025 -- Vol. 103. No. 22-- 2025 |
|
Full
Text |
|
|
Title: |
DEW COMPUTING WITH EDGE INTELLIGENCE FOR INDUSTRIAL AUTOMATION AND PREDICTIVE
MAINTENANCE REAL-TIME ANOMALY DETECTION |
|
Author: |
AKASH GHOSH, ABHRANEEL DALUI, SATYENDR SINGH, SUNIL KUMAR SHARMA, LALBIHARI
BARIK, JATINDERKUMAR R. SAINI, BIBHUTI BHUSAN DASH, UTPAL CHANDRA DE, SUDHANSU
SHEKHAR PATRA |
|
Abstract: |
The increasing complexity of industrial automation systems, coupled with the
pressing demand for real-time decision support, necessitates the deployment of
efficient and decentralized computing paradigms. Edge computing (EC), operating
at the periphery of the network, offers significant advantages by enabling
localized data processing and reducing reliance on centralized cloud
infrastructures. Building on this concept, this paper introduces a novel
framework that integrates edge intelligence with dew computing (DC) to advance
industrial automation and predictive maintenance. The proposed approach employs
lightweight algorithms for real-time anomaly detection at dew nodes, enabling
early identification of operational deviations in industrial equipment while
maintaining minimal resource usage. Furthermore, causal inference models are
incorporated to determine the root causes of equipment failures directly within
the dew layer, thereby enhancing the precision of maintenance strategies and
minimizing downtime. By leveraging localized computation, the framework
effectively reduces latency, optimizes energy consumption, and enhances system
reliability. Experimental evaluation demonstrates that the system achieves 96.3%
accuracy in anomaly detection, correctly identifies root causes in 92.7% of
cases, reduces average latency to 10.6 ms, and consumes only 2.4 W of power per
dew node. A case study conducted in a smart manufacturing environment validates
the practical benefits of the framework, highlighting improvements in anomaly
detection and maintenance scheduling. The study also examines scalability and
energy efficiency, underscoring the potential of the proposed system for
deployment across diverse industrial settings. |
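The abstract does not specify the lightweight detector, so the sketch below uses a rolling z-score as one plausible, resource-frugal stand-in for anomaly flagging on a constrained dew node; the window size and threshold are assumptions.

from collections import deque
import math

class RollingZScore:
    """Constant-memory anomaly flagger suitable for a constrained dew node."""
    def __init__(self, window=128, threshold=3.0):
        self.buf = deque(maxlen=window)
        self.threshold = threshold

    def update(self, x):
        flagged = False
        if len(self.buf) >= 8:                    # wait for a minimal history
            mean = sum(self.buf) / len(self.buf)
            var = sum((v - mean) ** 2 for v in self.buf) / len(self.buf)
            std = math.sqrt(var) or 1e-9          # guard against zero spread
            flagged = abs(x - mean) / std > self.threshold
        self.buf.append(x)
        return flagged

det = RollingZScore()
readings = [20.1, 20.3, 19.9, 20.0] * 8 + [35.5]  # sudden spike at the end
print([det.update(r) for r in readings][-1])      # -> True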
|
Keywords: |
Lightweight Anomaly Detection, Real-Time Industrial Systems, Fog Computing,
Resource-Constrained Environments, Internet of Things (IoT), Predictive
Analytics, Energy-Efficient Computing, Dew Computing |
|
Source: |
Journal of Theoretical and Applied Information Technology
30th November 2025 -- Vol. 103. No. 22-- 2025 |
|
Full
Text |
|
|
Title: |
A NOVEL FEATURE-SELECTION-GUIDED DEEP LEARNING FRAMEWORK INTEGRATING EXPLAINABLE
AI FOR SCALABLE, DUAL-DIAGNOSIS PREDICTION OF DIABETES AND ADHD USING
HETEROGENEOUS BIG HEALTH DATA |
|
Author: |
TANGUTURU SP MADHURI, DR. G. S RAGHAVENDRA |
|
Abstract: |
This study introduces a novel, feature-selection-guided deep learning framework
designed to enable scalable and interpretable dual-diagnosis prediction of
Diabetes and Attention Deficit Hyperactivity Disorder (ADHD) using heterogeneous
health data. The model integrates structured clinical variables from electronic
health records (EHRs), temporal patterns from wearable sensors, and behavioural
features from psychological assessments. Advanced feature selection techniques,
including LASSO, Recursive Feature Elimination (RFE), and XGBoost, were employed
to extract 38 high-impact variables from over 400 inputs. These selected
features feed into a multi-branch neural network comprising Convolutional Neural
Networks (CNNs) for static inputs, Long Short-Term Memory (LSTM) networks for
time-series data, and a Multilayer Perceptron (MLP) fusion layer for dual-output
classification. The architecture achieved high accuracy for both Diabetes
(93.2%) and ADHD (92.5%) with AUCs above 0.94, outperforming baseline models.
Integrated explainable AI tools (SHAP, LIME, and Grad-CAM) enhanced
interpretability, revealing key biomarkers such as HRV entropy, glucose
variability, BMI, and impulsivity score. The model generalized well across
demographic subgroups and demonstrated robust diagnostic capability even in
comorbid cases. This work makes a distinctive contribution by embedding
explainable AI directly within the deep learning training process, enabling
transparent and adaptive model refinement. It also discovers shared
physiological and behavioural biomarkers linking Diabetes and ADHD, offering new
knowledge for comorbidity-aware healthcare analytics. This end-to-end framework
provides a clinically transparent, technically rigorous solution for early,
accurate, and interpretable multi-disease prediction, representing a significant
step forward in comorbidity-aware healthcare AI systems. |
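The feature-selection stage can be illustrated with scikit-learn; the 38-feature budget comes from the paper, while the synthetic data, the L1 logistic model standing in for LASSO, and the intersection strategy are assumptions.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE, SelectFromModel
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=400, n_informative=40,
                           random_state=0)

# L1 (LASSO-style) sparsity: keep features with non-zero coefficients.
l1 = SelectFromModel(LogisticRegression(penalty="l1", solver="liblinear", C=0.1))
l1.fit(X, y)

# Recursive Feature Elimination down to a fixed budget (38, as in the paper).
rfe = RFE(LogisticRegression(max_iter=1000), n_features_to_select=38).fit(X, y)

# Intersect the two views to get a conservative high-impact subset.
keep = np.where(l1.get_support() & rfe.get_support())[0]
print(len(keep), "features retained:", keep[:10])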
|
Keywords: |
Dual-diagnosis Prediction, Heterogeneous Health Data, Feature Selection, Deep
Learning Architecture, Explainable Artificial Intelligence, Comorbidity
Modelling |
|
Source: |
Journal of Theoretical and Applied Information Technology
30th November 2025 -- Vol. 103. No. 22-- 2025 |
|
Full
Text |
|
|
Title: |
PERFORMANCE EVALUATION OF WEIBULL-TYPE LIFETIME DISTRIBUTIONS BASED ON INFINITE
FAILURE NHPP SOFTWARE RELIABILITY MODEL |
|
Author: |
SEUNG KYU PARK |
|
Abstract: |
This study focuses on evaluating the performance of NHPP-based software
reliability models under infinite failure conditions by applying Weibull-type
lifetime distributions, which are suitable for modeling time-dependent failure
rates. Software failure time data were used for the analysis, and model
parameters were estimated using the MLE approach. To quantitatively assess model
performance, several evaluation criteria were employed: model efficiency
measured by MSE and R², predictive accuracy assessed using the mean value
function, failure occurrence rate using the intensity function, and system
reliability based on the reliability function. The results demonstrate that the
proposed Weibull-Extension model exhibits the highest performance among the
models considered. These findings suggest that the Weibull-Extension model
offers a more effective alternative for software reliability analysis and can be
applied even in complex development environments. Therefore, this study
contributes to the field by newly identifying the reliability performance of
Weibull-type models, which have been underexplored in previous research, and is
expected to support developers in early-stage failure rate estimation during the
software development lifecycle. |
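The paper's Weibull-Extension mean value function is not given in the abstract, so as a stand-in the sketch below fits the classic power-law (Weibull-intensity) NHPP, an infinite-failure model whose time-truncated MLEs have closed form: beta_hat = n / sum(ln(T / t_i)) and lambda_hat = n / T**beta_hat, with mean value function m(t) = lambda * t**beta. The failure times are illustrative.

import math

def powerlaw_nhpp_mle(times, T):
    """MLE for the power-law NHPP (intensity lam*beta*t**(beta-1)),
    time-truncated at T. Returns (lam_hat, beta_hat)."""
    n = len(times)
    beta = n / sum(math.log(T / t) for t in times)
    lam = n / T ** beta
    return lam, beta

def mean_value(t, lam, beta):          # expected cumulative failures by time t
    return lam * t ** beta

failures = [5.2, 11.8, 19.5, 31.0, 39.7, 47.1, 60.3, 74.9]  # illustrative data
lam, beta = powerlaw_nhpp_mle(failures, T=80.0)
print(f"beta = {beta:.3f}, expected failures by t=100: "
      f"{mean_value(100, lam, beta):.2f}")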
|
Keywords: |
Extended-Weibull, Infinite-Failure NHPP, Musa-Okumoto, Weibull,
Weibull-Extension, Weibull-Type. |
|
Source: |
Journal of Theoretical and Applied Information Technology
30th November 2025 -- Vol. 103. No. 22-- 2025 |
|
Full
Text |
|
|
Title: |
BERT-GCN MULTI-TASK MODEL FOR DISASTER DETECTION, CLASSIFICATION, AND HELP
IDENTIFICATION IN TWEETS |
|
Author: |
BASUDEV NATH, DEEPAK SAHOO, SATYENDR SINGH, SUNIL KUMAR SHARMA, SUDHANSU SHEKHAR
PATRA |
|
Abstract: |
X, which used to be called Twitter, has become an important way to share
information during natural disasters in this age of real-time digital contact.
It is still very hard to get useful information from the huge amounts of social
media data that are available. Our study proposes a multi-stage integrated deep
learning model that uses both contextual and relational features to effectively
sort and analyze tweets about disasters. In the first task, our model uses
binary classification to determine, with 96.58% accuracy, whether a tweet is
disaster-related or not. A multi-class categorization of tweets into three
categories—non-disaster, flood, and earthquake—is performed in the second task.
The accuracy of this classification is 92.65%, 96.30%, and 95.48%, respectively.
The third and most focused task is to identify specific support needs in
earthquake-related tweets, such as calls for food and medical help, with an
accuracy of 95.07%. Graph Convolutional Networks (GCNs) are used to learn the
relationships between words, and BERT (Bidirectional Encoder Representations
from Transformers) is used to get deep contextual meanings. The combined
embeddings are passed into a fully connected neural network (FCNN) for
classification. Experimental findings show that our hybrid approach not only
works well across all tasks but also has practical relevance in improving
real-time catastrophe response and focused humanitarian relief distribution. |
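The GCN side of the hybrid can be sketched in numpy using the standard propagation rule H' = ReLU(D^-1/2 (A + I) D^-1/2 H W), with the BERT side reduced to a precomputed embedding; the graph, dimensions, and concatenation-then-FCNN step are assumptions about the architecture.

import numpy as np

rng = np.random.default_rng(0)

def gcn_layer(A, H, W):
    """One graph-convolution step: H' = relu(A_hat @ H @ W)."""
    A_tilde = A + np.eye(A.shape[0])               # add self-loops
    d = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt      # symmetric normalization
    return np.maximum(A_hat @ H @ W, 0.0)

n_words, d_graph, d_bert = 6, 16, 768
A = rng.integers(0, 2, size=(n_words, n_words))
A = np.triu(A, 1); A = A + A.T                     # symmetric word co-occurrence graph
H = rng.normal(size=(n_words, d_graph))            # initial word features
W = rng.normal(size=(d_graph, d_graph))

graph_vec = gcn_layer(A, H, W).mean(axis=0)        # pool word nodes to one vector
bert_vec = rng.normal(size=d_bert)                 # stand-in for BERT's [CLS] output
fused = np.concatenate([bert_vec, graph_vec])      # fed to the FCNN classifier
print(fused.shape)                                 # (784,)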
|
Keywords: |
Twitter Analysis, BERT, GCN, NLP, FCNN, Twitter Help Categorization |
|
Source: |
Journal of Theoretical and Applied Information Technology
30th November 2025 -- Vol. 103. No. 22-- 2025 |
|
Full
Text |
|
|
Title: |
SCALABLE AND SECURE MULTI-USER AUDIO COMMUNICATION IN CLOUD-BASED SYSTEMS USING
BLOCKCHAIN AND ENCRYPTION |
|
Author: |
DHANARAJU MURALA, Dr. K. THAMMI REDDY |
|
Abstract: |
In recent advancements, the integration of encrypted data frameworks with
blockchain systems such as the Hierarchical-Policy Identity-Centric
Cryptographic Algorithm (HP-ICCA) has captured scholarly interest due to its
potential to ensure comprehensive security assessment and transaction
traceability in data exchange domains. The prevalent blockchain-centric HP-ICCA
paradigms, however, often leave cryptographic keys under the aegis of a single
centralized authority, resulting in intensive computational demands, elevated
transactional expenditures, and limitations in scalability within a
decentralized schema. To mitigate these issues, our study introduces an enhanced
strategy employing a distributed encryption mechanism (DESM) with
zero-knowledge protocol enhancements. In a blockchain network, expansion into
areas addressing Ciphertext-Policy Attribute-Based Encryption (CP-ABE) is
indispensable; common impediments include centralized issuance dependent on a
variable ζ that defines authorities. CP-ABE models are prevalent in
cloud-sharing scenarios but suffer from privacy issues in access policies, user
or attribute irrevocability, key escrow dilemmas, and trust constraints. To
enhance CP-ABE's practicality while maintaining ζ (key management security),
distributed models (Ω) along with zero-knowledge models (λ) for encryption are
explored. By deploying proxy re-encryption nodes (Ρ), which ensure multi-party
management and dissemination of master keys (ΜΚ), autonomy from central
authorities is achieved, contributing to the establishment of comprehensive
re-encryption proofs (ρ). Our proposal contemplates an economic incentive model,
utilizing staking mechanics (Σ) to devise an equilibrium between security and
reward distribution. It includes mathematical optimizations for latency
reduction to achieve optimal performance, e.g., in scenarios processing up to
100 concurrent transactions, and a reduction in average gas consumption (γ)
greater than 97%. |
|
Keywords: |
Audio Based Block Chain, Cloud Computing, Multiuser Security. |
|
Source: |
Journal of Theoretical and Applied Information Technology
30th November 2025 -- Vol. 103. No. 22-- 2025 |
|
Full
Text |
|
|
Title: |
CUSTOM REGRESSION-BASED ARCHITECTURES FOR REAL-TIME TRAJECTORY PREDICTION USING
SPATIO-TEMPORAL DATA |
|
Author: |
PRAVEEN KUMAR V. S., SAJIMON ABRAHAM, SIJO THOMAS, SABU AUGUSTINE |
|
Abstract: |
Trajectory prediction plays a critical role in applications such as autonomous
navigation, human motion forecasting, and robotic path planning. However,
despite its importance, the scarcity of clean, large-scale datasets and the high
bandwidth cost of continuous polling limit practical deployment in real-world
intelligent systems. This study proposes a comparative evaluation framework
using a hybrid dataset that integrates real and simulated user travel
trajectories. The performance of seven regression models—LSTM, Bidirectional
LSTM (Bi-LSTM), GRU, 1D CNN, Transformer, SVM, and XGBoost—is systematically
analyzed using key metrics including Mean Squared Error (MSE), Mean Absolute
Error (MAE), Root Mean Squared Error (RMSE), R-squared (R²), and Symmetric MAPE
(SMAPE). The goal of this study is to benchmark these models on hybrid
trajectory datasets and to test the hypothesis that sequence-aware deep models
(Bi-LSTM, GRU) outperform traditional learners (SVM, XGBoost), with SMAPE
providing clearer insights for low-value targets. Among these, the Bi-LSTM model
demonstrates superior performance, particularly in terms of MAE and SMAPE.
Additionally, the study explores hyperparameter tuning techniques, statistical
validation, and the challenges of combining real and synthetic data,
highlighting the effectiveness of SMAPE in low-value target scenarios. The
findings not only support the use of Bi-LSTM for enhanced trajectory prediction
accuracy in hybrid data environments but also underline the broader potential of
predictive intelligence in reducing data transmission overhead and improving the
efficiency of real-time mobility systems. These findings have direct
implications for smart mobility and surveillance applications, where reducing
polling frequency without compromising accuracy is essential. This study
directly addresses the critical challenge of accurate real-time trajectory
prediction under conditions of data scarcity by integrating both real and
simulated datasets. The central goal is to design and benchmark regression-based
architectures for short-term forecasting and to test the hypothesis that
prediction accuracy can be significantly enhanced by incorporating temporal
dependencies within spatio-temporal data. |
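For reference, the SMAPE metric the study leans on for low-value targets, in its common bounded form; the epsilon guard for zero targets is an assumption.

import numpy as np

def smape(y_true, y_pred, eps=1e-8):
    """Symmetric MAPE in percent, bounded to [0, 200]."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    denom = np.abs(y_true) + np.abs(y_pred) + eps
    return 100.0 * np.mean(2.0 * np.abs(y_pred - y_true) / denom)

# Why SMAPE helps with low-value targets: plain MAPE explodes as y -> 0.
print(smape([0.02, 0.5, 1.0], [0.03, 0.45, 1.1]))  # stays finite and comparable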
|
Keywords: |
Trajectory Prediction, Bidirectional LSTM (Bi-LSTM), Real And Simulated Data,
Regression Models, Performance Evaluation Metrics, SMAPE |
|
Source: |
Journal of Theoretical and Applied Information Technology
30th November 2025 -- Vol. 103. No. 22-- 2025 |
|
Full
Text |
|
|
Title: |
FACTORS THAT INFLUENCE CUSTOMER CONTINUANCE USAGE OF DIGITAL STREAMING VIDIO
SERVICES |
|
Author: |
FERDY RENHART SITORUS, ELFINDAH PRINCES |
|
Abstract: |
The purpose of this study is to identify the factors that influence customers in
the continuance usage of the Vidio digital streaming service using a
quantitative research method. This study employs data analysis techniques with
PLS-SEM to examine the relationships among latent variables. Data were collected
through questionnaires distributed to 393 respondents, of which 385 had active
subscriptions to the Vidio application. The results of the study show that eight
hypotheses were accepted. The variable perceived usefulness has a significant
effect on customer satisfaction. Perceived usefulness also significantly
influences continuance usage. Then, the variable confirmation significantly
affects perceived usefulness. Confirmation also has a significant effect on
customer satisfaction. The variables content quality, price, and personalization
significantly influence customer satisfaction, which in turn affects continuance
usage. Finally, customer satisfaction has a significant influence on continuance
usage. |
|
Keywords: |
Video on Demand, Digital Streaming Vidio Services, Customer Satisfaction,
PLS-SEM, Continuance Usage |
|
Source: |
Journal of Theoretical and Applied Information Technology
30th November 2025 -- Vol. 103. No. 22-- 2025 |
|
Full
Text |
|
|
Title: |
AI-POWERED SMART CITY EVACUATION PLANNING: TRAFFIC CONGESTION PREDICTION USING
DEEP BELIEF NETWORKS |
|
Author: |
GEETHA PAWAR, B SARITHA, RAVI UYYALA, VIJAYA CHANDRA JADALA, RAYAVARAPU
VEERANJANEYULU, RUDRAMANI BHUTIA, NIDAMANURU SRINIVASA RAO |
|
Abstract: |
This study aims to evaluate the feasibility and efficacy of using Artificial
Intelligence (AI) Deep Learning in smart city contexts. A traffic flow
prediction model is developed utilizing the Deep Belief Network (DBN) algorithm.
The designated road segment and its historical traffic flow data in Tianjin have
been gathered and pre-processed. Subsequently, multiple Restricted Boltzmann
Machines (RBMs) are stacked to construct a Deep Belief Network (DBN), which
is trained as a generative model. The performance is ultimately assessed using a
simulation exercise. The suggested algorithm model is compared with the Neuro
Fuzzy C-Means (FCM) model, Deep Learning Architecture (DLA), and Convolutional
Neural Network (CNN) model. The findings indicate that the Root Mean Square
Error (RMSE), Mean Absolute Error (MAE), and Mean Absolute Percentage Error
(MAPE) of the suggested algorithm model are 4.42%, 6.21%, and 8.03%,
respectively. Its predictive accuracy surpasses that of the other three
algorithms considerably. Furthermore, the algorithm can efficiently mitigate the
proliferation of congestion in the smart city, facilitating prompt alleviation
of traffic bottlenecks. The developed Deep Learning-based traffic flow
prediction model demonstrates great precision in forecasting and effective
traffic congestion mitigation, offering valuable experimental insights for
future smart city development. |
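A DBN in the sense used here is a greedy stack of RBMs with a supervised read-out; a minimal scikit-learn sketch is below, shown as a classifier on toy digit data since the Tianjin traffic series is not available, and with layer sizes chosen arbitrarily. BernoulliRBM expects inputs scaled to [0, 1].

from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

X, y = load_digits(return_X_y=True)

dbn = Pipeline([
    ("scale", MinMaxScaler()),                     # RBMs expect [0, 1] inputs
    ("rbm1", BernoulliRBM(n_components=128, learning_rate=0.05, n_iter=15,
                          random_state=0)),
    ("rbm2", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=15,
                          random_state=0)),        # greedy layer-wise stack
    ("head", LogisticRegression(max_iter=1000)),   # supervised read-out layer
])
dbn.fit(X, y)
print(f"train accuracy: {dbn.score(X, y):.3f}")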
|
Keywords: |
Smart cities, Deep Learning, Traffic flow prediction, Artificial Intelligence,
Deep Belief Network |
|
Source: |
Journal of Theoretical and Applied Information Technology
30th November 2025 -- Vol. 103. No. 22-- 2025 |
|
Full
Text |
|
|
Title: |
REDISTRIBUTION OF SOCIAL RESOURCES IN THE MODERN ERA: NEW CHALLENGES FOR THE
STATE FINANCE SYSTEM |
|
Author: |
OLEKSANDR BULAVYNETS, VOLODYMYR VALIHURA, IRYNA SYDOR, KATERYNA KRYSOVATA,
VIKTORIA BELIAVTSEVA |
|
Abstract: |
Relevance. The digital transformation of public finances and social transfers is
becoming particularly important in the context of rapid technological change and
growing public demand for transparency, efficiency, and accessibility of public
services. Given that the introduction of digital tools allows for the
optimization of budget resource management, improves the efficiency of social
support, and promotes the sustainable development of social systems, it is
necessary to take these processes into account in order to modernize public
administration and ensure social justice. Aim. The aim of this study is to
determine the impact of social transfers on the efficiency of public finances in
the context of digital transformation and to develop recommendations for their
optimization. Methods. The study used methods of literary source synthesis,
statistical data analysis, systematization, and generalization to study the
impact of social transfers on the effectiveness of public finances in the
context of digital transformation, which revealed uneven social spending in EU
countries and the importance of digitalization for improving the effectiveness
of social policy. Results. The analysis showed that social transfers
significantly increase the financial capacity of local communities, but the
level of social spending varies significantly between EU countries, and the
introduction of digital technologies improves the transparency and management of
social benefits. Conclusions. Based on the analysis, it was determined that
in order to improve the effectiveness of social policy, it is necessary to
combine social transfers with the development of digital infrastructure and the
introduction of innovative management practices. |
|
Keywords: |
Social Transfers, Public Finance, Digitalization, Digital Technologies,
Innovative Digital Solutions, Social Benefit Management, Social Policy
Effectiveness, Transformations, Budget, Financial Analysis, Fiscal Policy. |
|
Source: |
Journal of Theoretical and Applied Information Technology
30th November 2025 -- Vol. 103. No. 22-- 2025 |
|
Full
Text |
|
|
Title: |
AN ADAPTIVE GRAPH NEURAL NETWORKS WITH LANDSCAPE-AWARE PARTICLE SWARM
OPTIMIZATION FOR INTELLIGENT MEDICAL INSURANCE FRAUD DETECTION |
|
Author: |
MR. V VINAY KUMAR, M V V A L SUNITHA, PALAMAKULA RAMESH BABU, A ARUNA KUMARI,
ARAJU ANITHA, B PRAVEENA MANDAPATI |
|
Abstract: |
This paper introduces a novel framework for detecting fraudulent medical
insurance claims using Adaptive Graph Neural Networks (AGNN) optimized by
Landscape-Aware Particle Swarm Optimization (LAPSO). By modeling healthcare
data—including patients, providers, and procedures—as a graph, AGNN captures
complex relational patterns indicative of fraud. Dynamic attention mechanisms
help highlight critical relationships while LAPSO tunes key hyperparameters to
improve generalization. Experimental results on real-world datasets demonstrate
the proposed model’s superiority in terms of accuracy (91%), precision (89%),
recall (88%), and F1-score (88%) over traditional machine learning and deep
learning models. The results confirm the efficacy and interpretability of our
framework for practical fraud detection applications. The AGNN component
employs dynamic attention mechanisms to selectively prioritize significant
relationships during message passing, thereby enhancing the model's ability to
detect subtle and coordinated fraud schemes. To further improve detection
accuracy and generalization, LAPSO fine-tunes critical hyperparameters of the
AGNN architecture. By leveraging both global and local search capabilities,
LAPSO accelerates convergence toward optimal configurations while adapting to
the model's performance landscape. The proposed method is evaluated on a
real-world medical claims dataset, demonstrating superior performance over
conventional deep learning and graph-based models on key metrics, including
accuracy, precision, recall, and F1-score. Moreover, the framework exhibits
strong generalization capability, effectively identifying both known and
previously unseen fraudulent behaviors. The integration of graph learning and
evolutionary optimization offers a scalable and interpretable solution for
healthcare providers and insurance companies aiming to mitigate fraud
risks [15]. This study contributes to the progression of intelligent fraud
detection systems and opens new directions for the application of adaptive
graph learning in healthcare. |
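LAPSO's landscape-aware behaviour is the authors' contribution; the sketch below is only the vanilla PSO backbone, tuning two hypothetical hyperparameters (learning rate and dropout) against a synthetic objective that stands in for validation loss.

import numpy as np

rng = np.random.default_rng(1)

def val_loss(h):
    """Stand-in objective; in practice: train the AGNN with h, return val loss."""
    lr, dropout = h
    return (np.log10(lr) + 2.5) ** 2 + (dropout - 0.3) ** 2

lo, hi = np.array([1e-4, 0.0]), np.array([1e-1, 0.8])
n, iters, w, c1, c2 = 12, 40, 0.7, 1.5, 1.5

pos = rng.uniform(lo, hi, size=(n, 2))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([val_loss(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((n, 2)), rng.random((n, 2))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)               # keep particles in bounds
    f = np.array([val_loss(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("best (lr, dropout):", gbest, "loss:", pbest_f.min())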
|
Keywords: |
Adaptive Graph Neural Networks (AGNNs), LAPSO, Deep Learning, Medical
Insurance, Healthcare Analytics |
|
Source: |
Journal of Theoretical and Applied Information Technology
30th November 2025 -- Vol. 103. No. 22-- 2025 |
|
Full
Text |
|
|
Title: |
A HYBRID CYBERSECURITY FRAMEWORK FOR SMART EV CLOUD SYSTEMS USING FUZZY LOGIC,
MACHINE LEARNING, AND BLOCKCHAIN |
|
Author: |
PANTHANGI VENKATESWARA RAO, SK. MD. SHAREEF, DEEPIKA VODNALA, SIRISHA
NARKEDAMILLI, ASHISH B. JIRAPURE, P. LAKSHMI PRASANNA, P. S. SUBHASHINI
PEDALANKA |
|
Abstract: |
The increasing need for communication within and between electric vehicles can
create serious challenges for infrastructure. This paper focuses on protecting
electric cars from cyberattacks by introducing a secure and intelligent
framework. We propose a new cybersecurity method that combines blockchain
technology with smart cloud computing and fuzzy machine learning, specifically
designed for electric vehicle systems. To handle data from vehicles, the model
uses a cloud system integrated with the smart grid, while the fuzzy adversarial
Q-stochastic (FAQS) model detects and analyzes suspicious activities. Data is
protected through encryption and decryption, based on role-based access control,
which ensures only authorized users can access information according to their
responsibilities. The proposed system was tested using different cybersecurity
datasets and evaluated on performance measures such as security rate, error
(RMSE), quality of service (QoS), scalability, and energy efficiency. |
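The FAQS detector is specific to the paper, but the role-based encryption gate described can be sketched generically: one symmetric key per role, so only authorized roles decrypt. This uses the cryptography package's Fernet; the role names and payload are assumptions.

from cryptography.fernet import Fernet, InvalidToken

# One symmetric key per role; distributing these keys is the RBAC policy.
role_keys = {role: Fernet.generate_key() for role in ("operator", "auditor")}

def encrypt_for(role, payload: bytes) -> bytes:
    return Fernet(role_keys[role]).encrypt(payload)

def try_decrypt(role, token: bytes):
    try:
        return Fernet(role_keys[role]).decrypt(token)
    except InvalidToken:
        return None                                 # role lacks access

token = encrypt_for("auditor", b"EV battery telemetry: cell 7 overheating")
print(try_decrypt("auditor", token))                # plaintext
print(try_decrypt("operator", token))               # None -- wrong role key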
|
Keywords: |
Electric Vehicle, Smart Cloud Computing, Cyber Security Analysis, Fuzzy Machine
Learning, Blockchain Model |
|
Source: |
Journal of Theoretical and Applied Information Technology
30th November 2025 -- Vol. 103. No. 22-- 2025 |
|
Full
Text |
|
|
Title: |
CONTEXT-AWARE PRIORITIZATION OF SOFTWARE REQUIREMENTS FOR SMALL-SCALE PROJECTS
USING MODAL VERBS, SEMANTIC EMBEDDINGS, AND GRAPH-BASED RANKING |
|
Author: |
SUCHETHA VIJAYAKUMAR, SURESHA D |
|
Abstract: |
Effective prioritization of software requirements is critical to the success of
software projects. Traditional methods such as MoSCoW often lack the ability to
semantically distinguish and order requirements based on context. This paper
presents a unified approach that first leverages context-aware and semantic
ordering of requirements within each priority group of High, Medium and Low
priority requirements. The ordering is done by ten different approaches
belonging to various categories and the results of the same are compared. Unlike
traditional MoSCoW-based or purely semantic approaches, our method integrates
linguistic modality, contextual semantics, and graph-based learning in a unified
ranking framework, enabling more nuanced, accurate requirement prioritization.
Extensive experiments on a real-world dataset demonstrate that
ModalGraphIntegration, a hybrid method involving fusion of semantic, linguistic,
and graph-based signals, achieves superior performance across Exact Match
Accuracy, Top-K Accuracy, MRR, and NDCG. |
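One plausible reading of the graph-based ranking stage, sketched with networkx: build a similarity graph over requirement texts and rank nodes with PageRank, biased by a modal-verb prior. TF-IDF stands in for the paper's semantic embeddings, and the boost values are assumptions.

import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reqs = [
    "The system shall encrypt all stored user data.",
    "The system should export reports as PDF.",
    "The system may offer a dark colour theme.",
    "The system shall log every failed login attempt.",
]
modal_boost = {"shall": 1.5, "should": 1.2, "may": 1.0}  # linguistic-modality prior

S = cosine_similarity(TfidfVectorizer().fit_transform(reqs))
G = nx.from_numpy_array(S)                     # weighted similarity graph
scores = nx.pagerank(G, weight="weight")

def prior(text):                               # boost by strongest modal verb
    return max((v for m, v in modal_boost.items() if m in text.lower()),
               default=1.0)

ranked = sorted(range(len(reqs)), key=lambda i: scores[i] * prior(reqs[i]),
                reverse=True)
for i in ranked:
    print(f"{scores[i] * prior(reqs[i]):.3f}  {reqs[i]}")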
|
Keywords: |
Functional Requirement Prioritization, Modal Verbs, Natural Language Processing,
Requirement Engineering, Software Development, Automated Prioritization. |
|
Source: |
Journal of Theoretical and Applied Information Technology
30th November 2025 -- Vol. 103. No. 22-- 2025 |
|
Full
Text |
|
|
Title: |
ANALYSIS OF A HARMONIZED OPTIMIZATION-DRIVEN CROSSOVER PERCEPTRON NETWORK FOR
HEART DISEASE PREDICTION |
|
Author: |
M. K. ARIF, KALAIVANI KATHIRVELU, A. YASIR |
|
Abstract: |
Heart disease prediction requires accurate and efficient models capable of
handling heterogeneous medical data, mitigating noise distortions, and
optimizing feature selection for enhanced classification performance. To address
these challenges, we propose the Harmonized Optimization-Driven Crossover
Perceptron Network (HOC-Perceptron), an integrated framework comprising Advanced
Normalization-Noise Filtering (ANNF) for robust data preprocessing, Greylag
Goose Optimization (GGO) for feature selection, and Crossover
Arithmetic-Optimized Multi-Layer Perceptron (CAO-MLP) for classification. ANNF
improves signal quality by 27.8%, reducing noise-induced distortions while
preserving diagnostic markers. GGO enhances feature selection efficiency by
32.5%, ensuring optimal subset selection and reducing computational complexity.
CAO-MLP further boosts classification accuracy by 8.9%, achieving an overall
F1-score of 96.4%, outperforming baseline models in terms of convergence speed
and generalizability. The proposed HOC-Perceptron framework significantly
enhances heart disease prediction reliability, offering a computationally
efficient and clinically interpretable solution for early diagnosis and risk
assessment. |
|
Keywords: |
Heart disease prediction, ECG signals, Greylag Goose Optimization (GGO), Feature
Selection, Crossover Multi-Layer Perceptron (CMLP), Arithmetic Optimization
Algorithm (AOA) |
|
Source: |
Journal of Theoretical and Applied Information Technology
30th November 2025 -- Vol. 103. No. 22-- 2025 |
|
Full
Text |
|
|
Title: |
AN INTELLIGENT HYBRID NEURAL-SWARM ARCHITECTURE FOR ADAPTIVE AND
ENERGY-EFFICIENT LOAD BALANCING IN CLOUD COMPUTING |
|
Author: |
AHMED KHALIFA, HELAL A. SULEMAN, MOHAMED MARIE |
|
Abstract: |
Cloud computing delivers flexible, transparent, and interactive access to shared
resources over the Internet, freeing consumers from device management concerns
and providing seamless data and computational access anytime, anywhere. The
pervasive adoption of smart devices has necessitated the rapid expansion of
cloud computing, placing load balancing at the forefront of resource management
challenges. Effective load balancing allows for equitable distribution of tasks
among multiple devices, which is vital for maximizing performance and resource
utilization in cloud environments. This paper introduces a hybrid model that
leverages Deep Neural Networks (DNNs), Long Short-Term Memory (LSTM), and
Particle Swarm Optimization (PSO) to address prevailing research gaps in cloud
load balancing optimization. Our proposed approach focuses on resource
provisioning and dynamic load balancing among virtual machines (VMs), aiming to
achieve optimal utilization and uniform workload distribution. The model is
trained and evaluated on a complex cloud load balancing dataset, demonstrating
its capacity to distinguish optimal load balancing scenarios under real-world
conditions. Experimental results highlight the efficacy of the proposed
system, achieving a test accuracy of approximately 92% and a test AUC of around
0.98. However, while validation AUC exceeded 0.99, a notable gap in test AUC
(~0.49) indicates complexity in model tuning, which is discussed in detail.
Overall, the hybrid DNN-LSTM-PSO model presents a robust and scalable solution
for load balancing in cloud computing, effectively integrating deep learning’s
hierarchical feature extraction and sequence modeling with advanced optimization
strategies for improved cloud service performance. |
|
Keywords: |
Cloud Computing, Load Balancing, Hybrid Load Balancing Algorithms, Deep Neural
Networks, Long Short-Term Memory, Particle Swarm Optimization
|
|
Source: |
Journal of Theoretical and Applied Information Technology
30th November 2025 -- Vol. 103. No. 22-- 2025 |
|
Full
Text |
|
|
Title: |
STRATEGIC MANAGEMENT OF OMNICHANNEL MARKETING WITH THE USE OF GENERATIVE
ARTIFICIAL INTELLIGENCE |
|
Author: |
SVITLANA KOVALCHUK, TETIANA CHUNIKHINA, VADIM H. GRIGORYAN, VALENTYNA
SHEVCHENKO, ULIANA KOROLOVA |
|
Abstract: |
The relevance of the issue under research is determined by the rapid
introduction of generative artificial intelligence (GenAI) into omnichannel
marketing strategies. This radically changes approaches to personalization,
communication, and strategic management in the digital economy. There is a
growing need for quantitative analysis of the impact of such technologies on
marketing efficiency, which justifies the need for this study. Particular
attention is paid to personalization, communication automation, and data
integration. The aim of the study is to determine the relationship between the
level of implementation of GenAI and marketing efficiency. The problem is the
lack of quantitative assessments of the impact of artificial intelligence (AI)
on omnichannel management indicators. The analysis covers 10 companies from 10
countries for 2022-2024. An econometric model with six variables was applied:
GAI, AdSpend, Data Integration, CSAT, OMDEPTH, and BRAND. The highest OM_EFF
values in 2024 were recorded at Bosideng (98.3), Woolworths (97.0), and Zara
(96.9). Companies with high GenAI have seen faster OM_EFF growth and market
adaptation. Woolworths’ GenAI increased from 0.58 to 0.77, OM_EFF from 85.7 to
97.0. Hudson’s Bay showed the smallest changes because of limited digitalization
and weak AI development. The study found that a high level of integration of
GenAI is directly related to the increase in omnichannel marketing efficiency
(OM_EFF). Bosideng achieved the highest OM_EFF – 98.3 in 2024 – with the maximum
value of the GenAI Index (0.83) and the active use of personalized content. The
obtained data confirm that GenAI enhances the effectiveness of marketing
strategies through automation, deep data integration and multi-channel
interaction with customers. The article provides practical recommendations for
increasing OM_EFF. The study has applied significance for retail trade
strategies. Further research may include other industries and regions. |
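The abstract names a six-variable econometric model; a minimal pooled-OLS sketch with statsmodels over panel-shaped data is shown below. The values are synthetic and the linear functional form is an assumption.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 30                                           # 10 companies x 3 years
df = pd.DataFrame({
    "GAI": rng.uniform(0.2, 0.85, n),            # GenAI adoption index
    "AdSpend": rng.uniform(1, 50, n),
    "DataIntegration": rng.uniform(0, 1, n),
    "CSAT": rng.uniform(60, 95, n),
    "OMDEPTH": rng.uniform(1, 5, n),
    "BRAND": rng.uniform(0, 1, n),
})
# Synthetic outcome loosely following the paper's claim that GenAI drives OM_EFF.
df["OM_EFF"] = 60 + 35 * df["GAI"] + 0.1 * df["CSAT"] + rng.normal(0, 2, n)

X = sm.add_constant(df[["GAI", "AdSpend", "DataIntegration", "CSAT",
                        "OMDEPTH", "BRAND"]])
print(sm.OLS(df["OM_EFF"], X).fit().summary().tables[1])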
|
Keywords: |
Omnichannel Marketing, Generative AI, Efficiency, Strategic Management, Customer
Personalization, Digital Transformation, Econometric Model |
|
Source: |
Journal of Theoretical and Applied Information Technology
30th November 2025 -- Vol. 103. No. 22-- 2025 |
|
Full
Text |
|
|
Title: |
A CROSS-LAYER ROUTING FRAMEWORK FOR ENERGY-EFFICIENT AND LOW-LATENCY IOT
COMMUNICATION: SIMULATION AND EXPERIMENTAL VALIDATION |
|
Author: |
YUVA KRISHNA ALURI , S TAMILSELVAN |
|
Abstract: |
The rapid expansion of the Internet of Things (IoT) has created unprecedented
challenges in providing efficient, low-latency, and energy-aware communication
among resource-constrained devices. Traditional routing approaches are designed
in isolation from each other at different layers of the OSI model and therefore
do not consider the interdependencies between as an example of the physical,
MAC, and network layers. In this work, we present a Cross-Layer Routing
Framework (CLRF), which dynamically integrates metrics of residual energy (RE),
link quality (LQ), and latency (E2E latency to each node) to provide efficient
routing in heterogeneous IoT environments. The validity of the framework is
established by extensive simulation studies carried out in NS-3 and experimental
deployment as a testbed using an ESP32-Raspberry Pi combination. The simulation
and experimental results show that CLRF achieves up to 25% improved energy
efficiency, 40% improved average end-to-end latency, and 12% improved packet
delivery ratio (PDR) compared to traditional baseline protocols such as LEACH
and RPL. The statistical validity of the improvements is confirmed through
Wilcoxon signed-rank test with 30 independent runs (p < 0.05) and bootstrap
confidence intervals further reinforce the reliability of the experimental
findings. The experimental results show similar behavior to the simulation
results and it indicates the robustness and scalability of the framework. This
research proposes a unified framework addressing dual problems of energy
efficiency and delay which allows for sustainable and real-time communication in
IoT communications. |
|
Keywords: |
Cross-Layer Design, IoT Communication, Energy Efficiency, Low-Latency Routing,
NS-3 Simulation, Experimental Validation |
|
Source: |
Journal of Theoretical and Applied Information Technology
30th November 2025 -- Vol. 103. No. 22-- 2025 |
|
Full
Text |
|
|
Title: |
STREAMLINING REQUIREMENTS ANALYSIS USING LARGE LANGUAGE MODELS AND USER
INTERFACE AUTOMATION |
|
Author: |
MAMATHA TALAKOT, BALARAM AMAGOTH, SUDHA RANI CHIKKALWAR, SUDHAKAR JANGILI,
ANGOTU NAGESWARA RAO, RAVI MOGILI |
|
Abstract: |
Requirements analysis, a critical phase in software development, often grapples
with the challenge of extracting structured user stories from unstructured data
sources such as text descriptions, UI mockups, and informal communication. This
paper presents a novel approach leveraging Large Language Models (LLMs) and
prompt engineering to automate the generation of well-structured user stories.
By employing carefully crafted prompts, LLMs can effectively analyze
unstructured data, identify key requirements elements (user roles, goals,
benefits), and translate them into actionable user stories. Proposed work
evaluate the effectiveness of this LLM-driven approach through a comparative
study with manually created user stories, assessing accuracy, completeness, and
clarity. Preliminary results demonstrate the potential of LLMs to streamline
requirements analysis, improve the quality of user stories, and ultimately
contribute to the development of software solutions that better align with user
needs. However, challenges such as hallucinations and inconsistencies in
LLM-generated outputs warrant further investigation and refinement. This
research provides valuable insights into the feasibility and limitations of
using LLMs for automating user story generation, paving the way for more
efficient and effective requirements engineering practices. |
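The paper's actual prompts are not given in the abstract; the helper below is a hypothetical illustration of the prompt-engineering structure such a pipeline might use, extracting role, goal, and benefit into the classic user-story form. The template wording is entirely illustrative.

USER_STORY_PROMPT = """You are a requirements analyst.
From the raw input below, extract every distinct requirement and rewrite each
as a user story: "As a <role>, I want <goal>, so that <benefit>."
Rules:
- Use only information present in the input; do not invent features.
- If a role is not stated, write "As a user".
- Return one story per line.

INPUT:
{raw_text}
"""

def build_prompt(raw_text: str) -> str:
    """Fill the template; the string is then sent to whichever LLM is in use."""
    return USER_STORY_PROMPT.format(raw_text=raw_text.strip())

print(build_prompt("Customers complained they cannot reset passwords "
                   "from the mobile app; support wants fewer tickets."))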
|
Keywords: |
Software Requirements, User Stories, Prompt Engineering, Requirement Analysis,
Speech-Text Engine, LLMs |
|
Source: |
Journal of Theoretical and Applied Information Technology
30th November 2025 -- Vol. 103. No. 22-- 2025 |
|
Full
Text |
|
|
Title: |
DSS QUERY OPTIMIZATION USING ENTROPY-BASED RESTRICTED GENETIC APPROACH
AND PARALLEL PROCESSING
|
Author: |
K. SWARUPA RANI, B. LAKSHMI, Dr. YARLAGADDA ANURADHA, Dr. KOLLURU SURESH BABU,
P. VENKATESWARLU REDDY, CH. SABITHA, DIVVELA SRINIVASA RAO |
|
Abstract: |
Any distributed database system's query optimization challenge is exciting and
continues to attract a lot of interest. Recent years have seen the application
of a number of heuristics that suggest novel strategies for enhancing a query's
viability. One of the most widely used heuristics for optimization issues is the
Genetic Algorithm (GA). The search for a better solution is still ongoing. This
work proposed the Entropy-based Restricted Genetic Approach (ERGA), a novel
query optimizer approach. Massive amounts of data (in gigabytes, petabytes, or
even more) are processed using DSS queries. As a result, these queries are
unlikely to be critical in terms of "Response Time." However, a major concern
is the amount of system resources required to execute the query. As a result,
Total Costs—the sum of input, output, processing, and communication costs—are
used to optimize DSS queries. The overall cost of DSS queries can be decreased
by increasing the replication factor in the distributed database. It was
determined that ERGA more effectively met the low time complexity (Runtime)
and high quality (Total Costs) objectives for distributed DSS queries, which
are otherwise at odds with one another. Intra-Site Parallelism is proposed to
further reduce ERGA's runtime. Two or more processors may be present at a site
in intra-site parallelism. The utilization of two cores or processors at a site
has been shown to significantly enhance ERGA quality by up to 6.5%.
Additionally, several of the standard metrics of descriptive statistics were
used to statistically analyze ERGA. Using ERGA, the best solution outperformed
the worst by 89%. |
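ERGA's entropy-based restriction is the paper's contribution; the sketch below shows only the plain GA backbone it builds on, minimizing a made-up total-cost table over fragment-to-site assignments. Order crossover and swap mutation are standard choices, not necessarily the authors'.

import random

random.seed(0)
COST = [[random.uniform(1, 10) for _ in range(8)] for _ in range(8)]

def total_cost(plan):                 # toy stand-in for I/O+CPU+network cost
    return sum(COST[site][frag] for frag, site in enumerate(plan))

def order_crossover(p1, p2):
    n = len(p1)
    i, j = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[i:j] = p1[i:j]
    fill = [g for g in p2 if g not in p1[i:j]]
    for k, idx in enumerate(list(range(j, n)) + list(range(i))):
        child[idx] = fill[k]
    return child

def mutate(plan, rate=0.2):
    plan = plan[:]
    if random.random() < rate:
        a, b = random.sample(range(len(plan)), 2)
        plan[a], plan[b] = plan[b], plan[a]
    return plan

pop = [random.sample(range(8), 8) for _ in range(30)]
for _ in range(200):
    pop.sort(key=total_cost)
    elite = pop[:10]                  # truncation selection
    pop = elite + [mutate(order_crossover(*random.sample(elite, 2)))
                   for _ in range(20)]
print("best plan:", min(pop, key=total_cost),
      "cost:", round(min(map(total_cost, pop)), 2))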
|
Keywords: |
Query Optimization, DSS, Genetic Algorithm, ERGA, Parallel Processing |
|
Source: |
Journal of Theoretical and Applied Information Technology
30th November 2025 -- Vol. 103. No. 22-- 2025 |
|
Full
Text |
|
|
Title: |
HOW CAN GRAPH NEURAL NETWORKS AND ATTENTION MECHANISMS IMPROVE HUMAN ACTIVITY
RECOGNITION? A MULTIMODAL DEEP LEARNING FRAMEWORK |
|
Author: |
SAURABH GUPTA, RAJENDRA PRASAD MAHAPATRA |
|
Abstract: |
Human Activity Recognition (HAR) is a widely studied area concerned with
recognizing human actions from information captured by different sensors such
as cameras and inertial sensors. The performance of HAR has been greatly
enhanced by recent developments in deep learning, mainly convolutional and
recurrent neural networks. This paper provides a thorough overview of HAR
techniques between 2020 and 2025, focusing on models that blend data from RGB
images, depth images, and skeleton
joints together. Our design combines the benefits of Xception and EfficientNet
for feature extraction, along with skeleton-based features, to make the
recognition more accurate and robust. Tests conducted on the recognized UTD-MHAD,
HMDB51 and UCF101 benchmarks prove that the model outperforms other methods,
surpassing 92.79% accuracy. Furthermore, the paper addresses the issues brought
by dataset limits, complex computing requirements and difficulties in adapting
the models to new applications and proposes promising paths for advancements in
HAR. |
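The attention mechanism the title asks about can be shown in its standard scaled dot-product form, here applied as self-attention across fused modality features; the shapes and the three-token framing (RGB, depth, skeleton) are illustrative assumptions.

import numpy as np

def attention(Q, K, V):
    """softmax(Q K^T / sqrt(d)) V -- lets the model weight modality features."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

rng = np.random.default_rng(0)
# Tokens: e.g. one RGB feature, one depth feature, one skeleton feature.
X = rng.normal(size=(3, 64))
out, w = attention(X, X, X)            # self-attention across modalities
print(w.round(2))                      # how much each modality attends to the others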
|
Keywords: |
Human Activity Recognition (HAR), Deep Learning, Multi-Modal Data Fusion,
Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), Skeleton
Joint Analysis, Benchmark Datasets, Hybrid Models, UTD-MHAD. |
|
Source: |
Journal of Theoretical and Applied Information Technology
30th November 2025 -- Vol. 103. No. 22-- 2025 |
|
Full
Text |
|
|
Title: |
MULTIPLE SCLEROSIS MR IMAGES DENOISING MODEL FOR ACCURATE CLASSIFICATION USING
CONVOLUTION NEURAL NETWORK |
|
Author: |
KALPANA SANJAY PAWASE, LUBNA TARANUM M P, VAIBHAV VASUDEVRAO GIJARE, SHABANAM
KHALID SHIKALGAR, ARIFA JAVID SHIKALGAR |
|
Abstract: |
Multiple Sclerosis (MS) is a chronic neurological disease characterized by
demyelination in the central nervous system, where Magnetic Resonance Imaging
(MRI) plays a vital role in lesion detection. However, accurate identification
of MS lesions remains a challenge due to the presence of various noise types
such as Gaussian, speckle, and salt-and-pepper noise, which reduce image clarity
and distort lesion boundaries. Existing denoising and segmentation approaches,
including traditional filters and Convolutional Neural Network (CNN)-based
models, have shown limitations: filters often compromise edge preservation,
while many deep learning techniques require extensive pre- and post-processing,
leading to reduced efficiency and inconsistency in lesion classification. This
gap in the literature highlights the need for an approach that can
simultaneously suppress noise and retain essential lesion details for reliable
segmentation. To address this, the study proposes a Hybrid Convolutional Neural
Network with Pixel Batch Normalization (HCNN-PBN) that integrates denoising and
segmentation in a unified framework. The novelty of this model lies in its
ability to preserve critical structural features during the denoising process
through pixel batch normalization, thereby improving lesion classification
accuracy compared to conventional CNN-based denoising methods. The research
introduces new knowledge by demonstrating that denoising and segmentation can be
optimized together in a hybrid CNN framework without significant loss of detail,
reducing computational complexity while enhancing diagnostic reliability.
Experimental results validate that the proposed HCNN-PBN model outperforms
existing CNN-based approaches in terms of denoising accuracy, segmentation
accuracy, filtering efficiency, and processing time, thereby offering a more
effective solution for automated MS lesion detection in MRI images. |
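The three noise types the paper targets are typically simulated as follows to build noisy/clean training pairs; the magnitudes below are assumptions.

import numpy as np

rng = np.random.default_rng(0)

def add_gaussian(img, sigma=0.05):
    return np.clip(img + rng.normal(0, sigma, img.shape), 0, 1)

def add_speckle(img, sigma=0.1):           # multiplicative noise
    return np.clip(img * (1 + rng.normal(0, sigma, img.shape)), 0, 1)

def add_salt_pepper(img, amount=0.02):
    out = img.copy()
    mask = rng.random(img.shape)
    out[mask < amount / 2] = 0.0           # pepper
    out[mask > 1 - amount / 2] = 1.0       # salt
    return out

mri = rng.random((64, 64))                 # stand-in for a normalized MR slice
noisy = add_salt_pepper(add_speckle(add_gaussian(mri)))
print(float(np.abs(noisy - mri).mean()))   # average corruption introduced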
|
Keywords: |
Multiple Sclerosis, Magnetic Resonance Imaging, Convolutional Neural Networks,
Denoising, Segmentation, Pixel Normalization. |
|
Source: |
Journal of Theoretical and Applied Information Technology
30th November 2025 -- Vol. 103. No. 22-- 2025 |
|
Full
Text |
|
|
Title: |
THE USE OF ARTIFICIAL INTELLIGENCE SYSTEMS TO SUPPORT FORENSIC ACTIONS IN
COMPUTER CRIMES INVESTIGATIONS |
|
Author: |
VIACHESLAV KULIUSH, YULIIA CHORNOUS, OLEKSII KHARKEVYCH, OKSANA VASYLOVA,
VITALII MATSAK |
|
Abstract: |
The relevance of the study is determined by the increasing number of computer
crimes and the need for a normatively permissible, explainable, and integrated
implementation of artificial intelligence (AI) systems in digital forensics. The
aim of the research is to develop an optimized model for the use of AI in computer
crime investigations, taking into account explainability, auditability, and
legal admissibility. The study employed the following methods: integrative
stratification system of the typology of computer crimes, inter-normative
comparative-procedural analysis of regulations, procedural modelling of the AI
application scenario, synthesis of an optimized multi-agent model based on
decomposition and normative analysis. The generalized results of the study
showed that the integration of AI into forensics is limited by algorithmic
opacity, regulatory fragmentation, and lack of procedural admissibility. The
proposed optimized model with multi-agent architecture, explainability,
auditability, and legal traceability minimizes algorithmic bias, formalizes
forensic admissibility, and justifies the need to revise regulations and
implement forensically explainable AI standards. The academic novelty is the
formalization of an explainable AI-based forensic framework that integrates
multi-agent analytics, legal traceability mechanisms, and standardized
validation protocols, thereby advancing cognitive interpretability, regulatory
admissibility, and institutional scalability of AI-evidence in criminal
proceedings. The prospects for further research include conducting controlled
empirical tests to verify the legal admissibility, interpretability, and
procedural efficiency of the optimized procedural model. |
|
Keywords: |
Criminal Justice; Legal System; Rule of Law; Explainability; Legal Traceability |
|
Source: |
Journal of Theoretical and Applied Information Technology
30th November 2025 -- Vol. 103. No. 22-- 2025 |
|
Full
Text |
|
|
Title: |
A UNIFIED FRAMEWORK FOR DEVICE-AWARE WEB INTERFACE ADAPTATION BASED ON
RESPONSIVE DESIGN AND HEURISTIC USABILITY EVALUATION |
|
Author: |
OLEH NI, OLEKSIY PISKAREV, YANA NI |
|
Abstract: |
In modern computer engineering, the efficiency of human-computer interaction
depends on the adaptability of web interfaces across heterogeneous environments.
This study analyzes Responsive and Adaptive Web Design approaches, evaluating
their impact on usability, system performance, and interaction quality in
multi-device ecosystems. Engineering considerations include rendering speed,
resource efficiency, and cross-platform scalability. A scheduling interface
prototype was implemented using HTML, CSS, and JavaScript, and optimized through
media queries and front-end optimization techniques. Usability was assessed
using the System Usability Scale (SUS) and heuristic evaluation. The results
confirm that device-aware design models significantly enhance robustness and
user satisfaction. This research highlights the importance of integrating UX
best practices, adaptive layouts, and behavioral analytics. The proposed
methodology synthesizes strategies from human-computer interaction, software
architecture, and mathematical modeling. By uniting interdisciplinary insights,
this work provides practical guidelines for developing scalable, user-centered
web applications aligned with evolving digital system requirements. |
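The SUS score used in the evaluation follows a fixed formula: odd-numbered items contribute (response - 1), even-numbered items contribute (5 - response), and the sum is multiplied by 2.5 to give a 0-100 score. A sketch with a made-up response vector:

def sus_score(responses):
    """responses: ten 1-5 Likert answers in questionnaire order."""
    assert len(responses) == 10
    total = sum((r - 1) if i % 2 == 0 else (5 - r)  # items 1,3,5,... vs 2,4,6,...
                for i, r in enumerate(responses))
    return total * 2.5                              # 0-100 scale

print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))    # e.g. 85.0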
|
Keywords: |
Responsive Web Design (RWD), Adaptive Interfaces, Usability Adaptation,
Heuristic Evaluation, System Usability Scale (SUS). |
|
Source: |
Journal of Theoretical and Applied Information Technology
30th November 2025 -- Vol. 103. No. 22-- 2025 |
|
Full
Text |
|
|
Title: |
COMBINING THE USER-CENTERED DESIGN METHOD WITH REQUIREMENT ENGINEERING TO
IMPROVE USABILITY IN WEB-BASED APPLICATION |
|
Author: |
NAUVAL FAKSI ERLANSYAH, TANTY OKTAVIA |
|
Abstract: |
Usability is a crucial factor in ensuring the effectiveness and user
satisfaction of digital systems, especially in high-stakes contexts such as
e-Assessment. This study proposes an integrated approach that combines
User-Centered Design (UCD) and Requirement Engineering (RE) to address UCD's
limitations and improve the Usability of web-based applications, with a case
study on an e-Assessment system. The research framework involves five
independent variables—User Need, User Behavior, User Motivation, User
Requirement, and Functional Requirement—hypothesized to influence Usability,
which in turn affects the core usability dimensions of Learnability, Efficiency,
Memorability, Error, and Satisfaction. The integrated method incorporates RE's
structured processes—elicitation, analysis, specification, and validation—into
UCD's iterative cycle. A prototype was developed and tested using quantitative
methods, including path analysis and usability evaluation. The findings reveal
that user behavior plays the most critical role in shaping Usability, while
Usability itself positively impacts all core dimensions. This study contributes
to the field by operationalizing the integration of UCD and RE into a
comprehensive framework and validating its effectiveness in a web-based
application. It demonstrates, through the e-Assessment case, how aligning
structured requirements with user-centered practices can enhance the overall
user experience. |
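The hypothesized path model lends itself to ordinary least squares estimation on standardized scores: regress Usability on the five independent variables, then each usability dimension on Usability. A minimal sketch on synthetic placeholder data follows; the sample size, coefficients, and variable constructions are illustrative assumptions, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 120  # hypothetical sample size, not the study's

# Hypothetical standardized survey scores standing in for the five independent
# variables: User Need, User Behavior, User Motivation, User Requirement,
# Functional Requirement
X = rng.normal(size=(n, 5))
usability = X @ np.array([0.2, 0.5, 0.1, 0.15, 0.2]) + rng.normal(scale=0.5, size=n)
learnability = 0.6 * usability + rng.normal(scale=0.5, size=n)

def path_coefficients(predictors, outcome):
    """Estimate standardized path coefficients by ordinary least squares."""
    Z = (predictors - predictors.mean(axis=0)) / predictors.std(axis=0)
    y = (outcome - outcome.mean()) / outcome.std()
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return beta

print(path_coefficients(X, usability))                      # five IVs -> Usability
print(path_coefficients(usability[:, None], learnability))  # Usability -> Learnability
```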
|
Keywords: |
User-Centered Design, Requirement Engineering, e-Assessment, Usability, System
Usability Scale |
|
Source: |
Journal of Theoretical and Applied Information Technology
30th November 2025 -- Vol. 103. No. 22-- 2025 |
|
Full
Text |
|
|
Title: |
TRANSFORMERS IN ARABIC SHORT ANSWER GRADING: BRIDGING LINGUISTIC COMPLEXITY WITH
DEEP LEARNING |
|
Author: |
WAEL HASSAN GOMAA, MENA HANY, EMAD NABIL, ABDELRAHMAN E. NAGIB, HALA ABDEL
HAMEED |
|
Abstract: |
Automating the evaluation of Arabic short answers is a crucial step in advancing
educational technology, as it enables rapid feedback, consistent scoring, and a
significant reduction in educators’ workload. However, the structural richness
and semantic complexity of Arabic—characterized by its extensive morphology,
flexible word order, and diverse vocabulary—make reliable grading especially
challenging. To address these difficulties, this study introduces a three-stage
framework built upon fine-tuned transformer architectures. In the first stage,
both the question and the learner’s response are encoded into dense semantic
embeddings. The second stage applies comprehensive fine-tuning to a pre-trained
transformer model, allowing it to capture task-specific nuances and better
represent the intricate patterns of Arabic. In the final stage, a regression
layer generates a numerical score, which is then compared against the
human-assigned reference grade for evaluation. The proposed framework was
rigorously tested on two benchmark datasets for Arabic short answer grading,
AR-ASAG and Philosophy. Experimental results demonstrated strong performance,
achieving Pearson correlation scores of 0.85 and 0.97, respectively, and
outperforming previously reported state-of-the-art methods. These outcomes
confirm the effectiveness of transformer-based models in handling the linguistic
subtleties of Arabic while also demonstrating their scalability and adaptability
across domains. Overall, the findings position fine-tuned transformers as a
promising foundation for building accurate, efficient, and equitable automated
grading systems in Arabic educational contexts. |
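The three-stage framework corresponds to a standard pattern: a pre-trained encoder produces a pooled representation of the (question, answer) pair, and a linear regression head maps it to a grade. A hedged sketch using the HuggingFace transformers API follows; the AraBERT checkpoint named below is an illustrative assumption, not necessarily the model fine-tuned in the paper.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

# Assumed checkpoint for illustration; the paper's exact model is not stated here
MODEL_NAME = "aubmindlab/bert-base-arabertv2"

class GradingRegressor(nn.Module):
    """Encode a (question, answer) pair and map the pooled embedding to a score."""
    def __init__(self, model_name=MODEL_NAME):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        self.regressor = nn.Linear(self.encoder.config.hidden_size, 1)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]       # [CLS] embedding as the pair representation
        return self.regressor(cls).squeeze(-1)  # one numerical grade per pair

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = GradingRegressor()
batch = tokenizer(["ما هي عاصمة مصر؟"], ["القاهرة"],
                  padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    print(model(batch["input_ids"], batch["attention_mask"]))
# Fine-tuning would minimize nn.MSELoss() between predictions and human grades.
```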
|
Keywords: |
Transformers, Arabic Short Answer Grading, Natural Language Processing, Deep
Learning, Model Fine-Tuning |
|
Source: |
Journal of Theoretical and Applied Information Technology
30th November 2025 -- Vol. 103. No. 22-- 2025 |
|
Full
Text |
|
|
Title: |
THE IMPACT OF ARTIFICIAL INTELLIGENCE ALGORITHMS ON AUTOMATIC EDITING AND
PROOFREADING OF TEXTS |
|
Author: |
OLEKSII SYTNYK, NATALIA KNYRIK, OKSANA TIUTIUNNYK, YEVGENYA BAZINYAN, TETIANA
TSEPKALO |
|
Abstract: |
The relevance of the study is determined by the rapid development of artificial
intelligence (AI) and the need to assess its impact on the quality of automatic
editing of English texts in the global information technology (IT) environment.
The aim was to identify and assess how various AI algorithms affect the
efficiency and quality of automatic editing and proofreading of English texts in
an international context. The study employed experimental methods, content
analysis, comparative analysis, and focus groups. Reliability was confirmed
through statistical analysis, including Student's t-test, the chi-square test,
and analysis of variance (ANOVA). The assessment was carried out in comparison
with human editing, on texts that included news, scientific articles,
advertising copy, essays, and technical documentation. Readability analysis
showed statistically significantly lower scores for AI than for human editors
(Flesch-Kincaid: 69.4 versus 75.2). Users noted the convenience and speed of
intelligent technologies but pointed to insufficient flexibility and stylistic
problems. Analysis of variance confirmed the significant advantage of editors in
quality (F=12.56, p<0.001, η²=0.45). Overall, the study showed a significant
advantage of professional editors over modern AI algorithms in editing quality;
at the same time, AI demonstrates the potential to accelerate the process, which
is important for practical application. Prospects for further research include
improving AI algorithms for stylistic editing and
studying hybrid methods that combine automation with human expertise. |
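The readability scores reported above (69.4 versus 75.2) fall in the range of the Flesch Reading Ease scale, which is computed from word, sentence, and syllable counts, and group differences of the kind reported can be tested with a one-way ANOVA. A minimal sketch follows; the counts and quality ratings are hypothetical stand-ins, not the study's data.

```python
from scipy import stats

def flesch_reading_ease(total_words, total_sentences, total_syllables):
    """Flesch Reading Ease: higher values indicate more readable text."""
    return (206.835
            - 1.015 * (total_words / total_sentences)
            - 84.6 * (total_syllables / total_words))

# Hypothetical counts for one AI-edited text; yields ~69.4, in the range reported
print(flesch_reading_ease(total_words=180, total_sentences=12, total_syllables=260))

# Hypothetical quality ratings of the same texts edited three different ways
ai_only = [6.8, 7.1, 6.5, 7.0, 6.6]
hybrid  = [7.6, 7.8, 7.4, 7.9, 7.5]
human   = [8.4, 8.6, 8.2, 8.7, 8.3]
f_stat, p_value = stats.f_oneway(ai_only, hybrid, human)
print(f"F={f_stat:.2f}, p={p_value:.4g}")
```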
|
Keywords: |
Natural Language Analysis; Neural Networks; Proofreading Quality; Text
Processing; Text Proofreading |
|
Source: |
Journal of Theoretical and Applied Information Technology
30th November 2025 -- Vol. 103. No. 22-- 2025 |
|
Full
Text |
|
|
Title: |
LEVERAGING ARABERT WITH STATISTICAL WEIGHTING FOR SCALABLE UNSUPERVISED ARABIC
TEXT SUMMARIZATION |
|
Author: |
WADEEA R. NJI, SURESHA, MOHAMMED A. S. AL-MOHAMADI, AHMED R. A. SHAMSAN |
|
Abstract: |
The volume of Arabic digital text found in news platforms, educational
resources, and social media continues to grow rapidly, making it increasingly
difficult for users to extract key information in a timely manner. Automatic
Text Summarization (ATS) provides an effective way to condense lengthy documents
while preserving their essential meaning. However, progress in Arabic ATS
remains limited because annotated datasets are scarce, Arabic-specific NLP
resources are underdeveloped, and many recent language models demand high
computational costs. Traditional summarization techniques also struggle to
capture deeper sentence-level semantics, which reduces the relevance and
coherence of the generated summaries. To address these challenges, this study
proposes a scalable unsupervised summarization framework that integrates TF-IDF
weighting with AraBERT contextual embeddings to produce richer and more
informative sentence representations. The model uses k-means clustering to
identify thematic structure and selects representative sentences based on their
similarity to cluster centroids. Maximal Marginal Relevance is then applied as a
final step to reduce redundancy and maintain diversity across the selected
content. Experimental results on the EASC dataset show that the weighted AraBERT
representation achieves a ROUGE-1 score of 0.615, surpassing FastText, TF-IDF,
and unweighted AraBERT. The findings demonstrate that integrating statistical
term importance with contextual transformer embeddings provides an efficient
strategy for enhancing summarization quality in low-resource Arabic settings.
The proposed framework contributes a scalable, annotation-free alternative to
supervised language models and provides new insight into representation
weighting strategies for Arabic NLP. |
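A hedged sketch of the described pipeline follows: sentence vectors weighted by TF-IDF, k-means to expose thematic structure, centroid-nearest sentence selection, and Maximal Marginal Relevance to reduce redundancy. The `embed` function is a placeholder assumed to return AraBERT sentence embeddings (for example, mean-pooled token vectors), and the cluster count and MMR trade-off `lam` are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def summarize(sentences, embed, n_clusters=3, lam=0.7):
    """Pick one representative sentence per k-means cluster, then order by MMR.

    `embed(sentences)` is a placeholder assumed to return one contextual
    vector per sentence (e.g., mean-pooled AraBERT token embeddings).
    """
    # Statistical weighting: scale each sentence vector by its mean TF-IDF weight
    tfidf = TfidfVectorizer().fit_transform(sentences)
    weights = np.asarray(tfidf.mean(axis=1)).ravel()
    vectors = embed(sentences) * weights[:, None]

    # Thematic structure: the sentence nearest each centroid represents its cluster
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(vectors)
    picked = []
    for k in range(n_clusters):
        members = np.where(km.labels_ == k)[0]
        sims = cosine_similarity(vectors[members], km.cluster_centers_[k][None]).ravel()
        picked.append(int(members[np.argmax(sims)]))

    # Maximal Marginal Relevance: relevance to the document minus redundancy
    relevance = cosine_similarity(vectors, vectors.mean(axis=0, keepdims=True)).ravel()
    sim = cosine_similarity(vectors)
    summary = [max(picked, key=lambda i: relevance[i])]
    candidates = [i for i in picked if i not in summary]
    while candidates:
        scores = [lam * relevance[i] - (1 - lam) * max(sim[i, j] for j in summary)
                  for i in candidates]
        summary.append(candidates.pop(int(np.argmax(scores))))
    return [sentences[i] for i in summary]
```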
|
Keywords: |
Arabic Text Summarization; AraBERT; Weighted Embeddings; TF-IDF; Semantic
Clustering; Unsupervised Learning. |
|
Source: |
Journal of Theoretical and Applied Information Technology
30th November 2025 -- Vol. 103. No. 22-- 2025 |
|
Full
Text |
|
|
Title: |
ENHANCING DATA PRIVACY AND OPTIMIZING LONG-DISTANCE DATA TRANSMISSION USING
MACHINE LEARNING IN WIRELESS SENSOR NETWORK |
|
Author: |
SUBHRA PROSUN PAUL, SCHIN-SHIUH SHIEH, D. VETRITHANGAM, SIVA SHANKAR |
|
Abstract: |
Wireless Sensor Networks (WSNs) have emerged as a transformative technology that
enables real-time data acquisition and communication across sectors such as
healthcare, environmental monitoring, and smart transportation. WSNs have
achieved notable progress in micro-electro-mechanical systems, low-power
communication, and digital electronics, yet remain burdened by two ongoing
issues: data privacy preservation and resource optimization for data
transmission. The proposed research establishes a unified framework that
combines Homomorphic Encryption (HE) with Federated Learning (FL) and
Q-learning-based Reinforcement Learning (RL) to overcome existing challenges.
The methodology is organized into three layers that support encrypted data
acquisition, distributed model development, and optimized intelligent network
routing. The Homomorphic Encryption system provides strong protection for
encrypted sensor data, and the Federated Learning method prevents the
transmission of raw information during model training. Through Q-Learning, the
system achieves energy-efficient routing by dynamically adjusting paths based on
network conditions. In addition to performance monitoring, the system provides
real-time feedback loops that adaptively control encryption and routing
parameters. The experimental tests conducted with NS-3 simulations and benchmark
datasets showed that the HE-FL framework achieved a 0% success rate in data
reconstruction under adversarial attacks, while maintaining acceptable model
accuracy. Q-Learning-based routing improved network lifetime by 27% over LEACH,
while delivering packets at 98.2%. The system operated with high computational
efficiency using limited resources from microcontrollers. This research develops
an extended, secure, and efficient WSN architecture that meets real-time
requirements in healthcare and transportation networks. Edge intelligence
benefits from this work because it integrates privacy-protecting algorithms and
self-operating routing methods, overcoming standard WSN limitations. This
framework demonstrates potential for general use in extensive security-critical
sensor-based systems that need reliable information protection. |
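The Q-learning routing layer reduces to the standard Bellman update over (node, next-hop) pairs combined with an epsilon-greedy policy. A minimal sketch follows under assumed reward and topology definitions; the paper's exact reward shaping and NS-3 integration are not reproduced.

```python
import random

def q_route_update(Q, node, next_hop, reward, neighbors, alpha=0.1, gamma=0.9):
    """One Q-learning update for energy-aware next-hop selection.

    Q maps (node, hop) pairs to values; the reward is assumed to combine
    residual energy, link quality, and progress toward the sink.
    """
    best_future = max(Q.get((next_hop, n), 0.0) for n in neighbors[next_hop])
    old = Q.get((node, next_hop), 0.0)
    Q[(node, next_hop)] = old + alpha * (reward + gamma * best_future - old)

def choose_next_hop(Q, node, neighbors, epsilon=0.1):
    """Epsilon-greedy choice: explore occasionally, otherwise exploit Q."""
    if random.random() < epsilon:
        return random.choice(neighbors[node])
    return max(neighbors[node], key=lambda n: Q.get((node, n), 0.0))

# Minimal usage with a hypothetical 4-node topology (node 3 is the sink)
neighbors = {0: [1, 2], 1: [2, 3], 2: [1, 3], 3: [3]}
Q = {}
hop = choose_next_hop(Q, 0, neighbors)
q_route_update(Q, 0, hop, reward=1.0, neighbors=neighbors)
```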
|
Keywords: |
Homomorphic Encryption, Federated Learning, Q-Learning, WSNs, Privacy
Preservation, Energy Efficiency |
|
Source: |
Journal of Theoretical and Applied Information Technology
30th November 2025 -- Vol. 103. No. 22-- 2025 |
|
Full
Text |
|
|
Title: |
INTEGRATION OF DATA FILTERING WITH HYBRID RSA DEEP LEARNING ALGORITHM FOR IOT
DATA SECURITY AND CLASSIFICATION |
|
Author: |
ACHMAD FAUZI, SUCI RAMADANI, HUSNUL KHAIR, AKIM M H PARDEDE |
|
Abstract: |
The Internet of Things (IoT) presents significant challenges in maintaining data
security, particularly in ensuring confidentiality while simultaneously
detecting anomalies intelligently. Therefore, this study aims to develop a
hybrid model that integrates the RSA algorithm for encryption and decryption
with a Convolutional Neural Network (CNN) as the classification mechanism for
IoT data. The research workflow includes RSA key generation and validation,
encryption of IoT image data, decryption with the RSA private key, and
classification using CNN with the log-sum-exp and softmax methods. Simulation
results produced outputs of O₁ = −3945.78 with a probability of 0%, O₂ =
−1972.89 with 0%, and O₃ = 0 with ≈100%, confirming that the input almost
certainly falls into the file under attack class and demonstrating the model’s
ability to preserve data confidentiality while achieving very high anomaly
detection accuracy. The main contribution of this research is a comprehensive
approach that enhances the reliability of IoT systems against cybersecurity
threats by integrating RSA, which provides data confidentiality, with CNN, which
delivers intelligent anomaly detection. The proposed model not only strengthens
IoT security layers but also offers a practical solution for building more
robust and adaptive security systems in the future. |
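The reported class probabilities follow directly from a softmax computed with the log-sum-exp trick: subtracting the maximum logit before exponentiating avoids overflow while leaving the resulting probabilities unchanged. A short sketch reproducing the abstract's three outputs:

```python
import numpy as np

def stable_softmax(logits):
    """Softmax computed via the log-sum-exp trick for numerical stability."""
    logits = np.asarray(logits, dtype=float)
    shifted = logits - logits.max()  # subtract the max before exponentiating
    log_probs = shifted - np.log(np.exp(shifted).sum())
    return np.exp(log_probs)

# The three network outputs reported in the abstract
probs = stable_softmax([-3945.78, -1972.89, 0.0])
print(probs)  # ~[0., 0., 1.]: the "file under attack" class dominates
```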
|
Keywords: |
CNN, Deep Learning, Data Security, IoT, RSA |
|
Source: |
Journal of Theoretical and Applied Information Technology
30th November 2025 -- Vol. 103. No. 22-- 2025 |
|
Full
Text |
|
|
Title: |
XMEDLLM: A FRAMEWORK FOR EXPLAINABLE MEDICAL INTELLIGENCE VIA LLMS IN
EDGE-DRIVEN MCP SYSTEMS |
|
Author: |
K PRASUNA, T NAGAMANI, L MADHAVI DEVI, B KEERTHI SAMHITHA, MAHANTI SRIRAMULU, S
SINDHURA |
|
Abstract: |
This research proposes and evaluates an interpretable medical AI solution that
incorporates Large Language Models (LLMs) such as BioBERT and GPT-lite into
edge-enabled Medical Cyber-Physical (MCP) systems. The proposed model is
intended to resolve the most pressing problems of clinical AI: interpretability,
real-time inference, data privacy, and scalability. The system processes medical
data locally on edge devices such as the Jetson Nano and Raspberry Pi 4, which
keeps latency to a minimum and ensures compliance with privacy regulations such
as HIPAA. SHAP and LIME provide visual and textual explanations of model
decisions, substantially improving clinical trust. Performance assessment in a
simulated hospital environment showed high inference accuracy and excellent
usability, and clinicians indicated readiness to use the tool. The results of
this research show that LLM-based models deployed at the edge can be a scalable
and safe solution for contemporary healthcare settings. |
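Of the two explanation methods named, LIME has a compact text-mode API. A hedged sketch follows, pairing it with a toy TF-IDF classifier as a stand-in for the edge-deployed model; the notes, labels, and class names are hypothetical.

```python
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical stand-in for the edge-deployed clinical classifier
notes = ["patient reports chest pain and shortness of breath",
         "routine follow-up, vitals stable, no complaints",
         "severe chest pain radiating to left arm",
         "annual physical, patient in good health"]
labels = [1, 0, 1, 0]  # 1 = urgent, 0 = routine (toy labels)

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(notes, labels)

explainer = LimeTextExplainer(class_names=["routine", "urgent"])
exp = explainer.explain_instance("patient has chest pain after exercise",
                                 clf.predict_proba, num_features=4)
print(exp.as_list())  # (token, weight) pairs showing what drove the prediction
```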
|
Keywords: |
Explainable AI, Large Language Models (LLMs), Edge Computing, Medical
Cyber-Physical Systems, Clinical Decision Support |
|
Source: |
Journal of Theoretical and Applied Information Technology
30th November 2025 -- Vol. 103. No. 22-- 2025 |
|
Full
Text |
|
|
Title: |
QUANTUM-ENHANCED HYBRID XAI-LSTM FRAMEWORK FOR TELUGU RESTAURANT ANALYTICS AND
RECOMMENDATION SYSTEMS |
|
Author: |
VENUGOPAL BOPPANA, SANDHYA PRASAD |
|
Abstract: |
This study examines the application of Explainable Artificial Intelligence (XAI)
methodologies in a quantum-enhanced restaurant recommendation system designed
for the evaluation of Telugu reviews. Our new framework uses the Natural
Language Toolkit (NLTK) to process Telugu language, the Term Frequency-Inverse
Document Frequency (TF-IDF) feature vectorization method, quantum-powered
Ordering Points to Identify the Clustering Structure (OPTICS) algorithms, and
the Long Short-Term Memory Recurrent Neural Network (LSTM-RNN) architecture
trained over 100 cycles. It also uses the whale optimization method to improve
performance. Performance evaluation employs COSINE, DICE, and JACCARD similarity
metrics across spatial, temporal, daily, and user feedback dimensions. The
results are strong: 99.85% precision for Telugu preprocessing across 25,000
reviews and 99.76% precision across 28,456 unique tokens.
Quantum-enhanced feature extraction reached 97.89% precision with 95.67% dataset
coverage. Quantum LSTM networks reached 93.45% precision with 0.929 F1-scores.
OPTICS clustering reached 91.23% precision with 0.934 AUC-ROC values, and whale
optimization reached 92.34% precision with 0.945 AUC-ROC measurements. The
integration of XAI led to substantial improvements, raising the overall system's
accuracy from 85.5% to 95.6%, users' confidence levels from 75.2% to 92.3%,
and error-correction capability from 65.3% to 89.4%. Statistical analysis confirms
the framework's exceptional efficacy in safeguarding Telugu cultural elements
while offering clear recommendations. This illustrates the considerable
potential of quantum computing when integrated with explainable AI to create
culturally sensitive, interpretable restaurant recommendation systems that
propel multilingual recommendation research and lay the groundwork for future
quantum-enhanced XAI studies. |
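The classical core of the clustering stage, OPTICS over TF-IDF vectors, can be sketched with scikit-learn; the reviews below are short hypothetical stand-ins for the 25,000-review corpus, and the quantum enhancement and LSTM stages are outside the scope of this sketch.

```python
from sklearn.cluster import OPTICS
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical Telugu review snippets standing in for the 25,000-review corpus
reviews = ["భోజనం చాలా బాగుంది", "సర్వీస్ నెమ్మదిగా ఉంది",
           "రుచి అద్భుతం, మళ్ళీ వస్తాను", "ధర ఎక్కువ, నాణ్యత తక్కువ",
           "సిబ్బంది మర్యాదగా ఉన్నారు"]

vectors = TfidfVectorizer().fit_transform(reviews).toarray()
clustering = OPTICS(min_samples=2, metric="cosine").fit(vectors)
print(clustering.labels_)  # cluster label per review; -1 marks noise points
```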
|
Keywords: |
Explainable Artificial Intelligence, Quantum Machine Learning, Restaurant
Recommendation System, Telugu Natural Language Processing. |
|
Source: |
Journal of Theoretical and Applied Information Technology
30th November 2025 -- Vol. 103. No. 22-- 2025 |
|
Full
Text |
|
|
Title: |
RING THEORY-BASED AGRICULTURE RISK IDENTIFICATION USING HYBRID MODELS |
|
Author: |
M. SRIKANTH, M. CHANDRA NAIK, R.N.V. JAGAN MOHAN |
|
Abstract: |
Unrelenting plant diseases continue to pose a serious challenge to the
agricultural sector worldwide, undermining production, food security, and
economic stability. Conventional control systems rarely assess risks in a timely
manner and do not account for the intricate interdependence among environmental,
biological, and crop-health components. This study addresses these shortcomings
by developing a hybrid modelling system that uses ring-theory-based influence
mapping to strengthen agricultural risk modelling. The research combines diverse
data sources, including field surveys, IoT-based environmental sensors, remote
sensing images, and historical disease records. Statistical regression, decision
trees, random forests, and artificial neural networks were trained as hybrid
models, optimized with ensemble methods, and verified through cross-validation
and interpretability techniques such as SHAP and LIME. Findings indicate that
incorporating ring theory considerably improves predictive performance:
accuracy, recall, and AUC-ROC increased by 4.2 points, 5 points, and 0.05,
respectively, compared with models not augmented with ring theory. Beyond these
predictive gains, the framework provides early-warning signals, informs
biosecurity measures, and assists in the development of resistant crops. This
work contributes to agricultural resilience by offering a mathematically
informed, data-driven approach to managing disease risks that meets the ethical
standards of AI and sustainable agriculture. |
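The hybrid-model stage described above maps to a soft-voting ensemble of the four named learners validated by cross-validation. A minimal scikit-learn sketch on synthetic data follows; the ring-theory influence mapping itself is not modeled here.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the fused field / sensor / remote-sensing features
X, y = make_classification(n_samples=500, n_features=12, random_state=0)

ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("dt", DecisionTreeClassifier(random_state=0)),
                ("rf", RandomForestClassifier(random_state=0)),
                ("ann", MLPClassifier(max_iter=1000, random_state=0))],
    voting="soft")  # average predicted probabilities across the four models

scores = cross_val_score(ensemble, X, y, cv=5, scoring="roc_auc")
print(scores.mean())
```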
|
Keywords: |
Agriculture, Risk Identification, Ring Theory, Hybrid Models, Disease
Management, Agricultural Resilience |
|
Source: |
Journal of Theoretical and Applied Information Technology
30th November 2025 -- Vol. 103. No. 22-- 2025 |
|
Full
Text |
|
|
Title: |
SECURE SCADA FEDERATED INTELLIGENCE FRAMEWORK FOR CYBER SECURITY IN INDUSTRIAL
NETWORK |
|
Author: |
YASIR A, KALAIVANI KATHIRVELU, MK. ARIF |
|
Abstract: |
The increasing frequency and sophistication of cyber threats in Supervisory
Control and Data Acquisition (SCADA) systems pose significant challenges to
industrial cybersecurity, particularly in decentralized and heterogeneous
networks. Existing machine learning and optimization-based intrusion detection
approaches suffer from inconsistent data representation, suboptimal feature
selection, class imbalance, and poor generalization across federated nodes,
leading to high false positives, missed attacks, and scalability limitations. To
address these gaps, this study proposes Secure SCADA Federated Intelligence
(SSFI), a unified framework integrating adaptive preprocessing, hybrid feature
optimization, and federated attack classification. The Adaptive Feature
Transformation (AFT) module improves data consistency and class balance,
reducing preprocessing inconsistencies by 27.6% and enhancing minority class
representation by 41.3%. The Adaptive Swarm-Bayesian Feature Optimization
(ASB-FO) technique combines Particle Swarm Optimization (PSO) with Bayesian
Optimization to select highly relevant features, achieving a 23.8% reduction in
redundancy and a 16.2% improvement in classification accuracy. The Adaptive
Federated Attack Classification (AFAC) module employs adaptive client-weighted
aggregation to mitigate model divergence, resulting in a 32.5% reduction in
global model instability and a 19.4% increase in detection accuracy.
Experimental validation on benchmark SCADA datasets demonstrates that SSFI
outperforms existing federated intrusion detection systems, achieving 96.8%
accuracy, a 5.7% reduction in false positives, and a 14.9% improvement in
precision. By addressing limitations in data preprocessing, feature
optimization, and federated learning, SSFI introduces a novel, scalable, and
privacy-preserving cybersecurity framework, representing a significant
advancement in attack detection for industrial SCADA networks. |
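The AFAC module's adaptive client-weighted aggregation can be sketched as a convex combination of client parameter vectors weighted by per-client reliability scores, so that divergent nodes receive less influence on the global model. A minimal sketch under assumed score semantics follows; the paper's exact weighting rule is not specified here.

```python
import numpy as np

def adaptive_weighted_aggregate(client_updates, client_scores):
    """Aggregate client model parameters with adaptive, score-based weights.

    client_updates: list of parameter vectors, one per federated SCADA node.
    client_scores: per-client reliability scores (e.g., local validation
    accuracy); clients that diverge from the cohort receive less influence.
    """
    weights = np.asarray(client_scores, dtype=float)
    weights /= weights.sum()            # normalize to a convex combination
    stacked = np.stack(client_updates)  # shape: (n_clients, n_params)
    return (weights[:, None] * stacked).sum(axis=0)

# Three hypothetical clients; the low-scoring outlier is down-weighted
updates = [np.array([1.0, 2.0]), np.array([1.1, 1.9]), np.array([5.0, -3.0])]
global_params = adaptive_weighted_aggregate(updates, client_scores=[0.95, 0.93, 0.20])
print(global_params)
```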
|
Keywords: |
Cyber-Physical Systems, Federated Learning, Feature Selection, Optimization
Techniques, Intrusion Detection, IoT-WSN Attack, SCADA Networks, CPS
Security. |
|
Source: |
Journal of Theoretical and Applied Information Technology
30th November 2025 -- Vol. 103. No. 22-- 2025 |
|
Full
Text |
|
|
|