Submit Paper / Call for Papers
The journal receives papers in a continuous flow and considers articles
from a wide range of Information Technology disciplines, spanning the most
basic research to the most innovative technologies. Please submit your papers
electronically through our submission system at http://jatit.org/submit_paper.php
in MS Word, PDF, or a compatible format so that they may be evaluated for
publication in the upcoming issue. This journal uses a blinded review process;
please remember to include all your personally identifiable information in the
manuscript before submitting it for review, and we will edit out the necessary
information on our side. Submissions to JATIT should be full research / review
papers (properly indicated below the main title).
|
|
|
Journal of
Theoretical and Applied Information Technology
March 2026 | Vol. 104 No. 6 |
|
Title: |
THE ROLE OF DIGITAL LITERACY IN STRENGTHENING CIVIL SERVANTS’ COMPETENCIES AND
PUBLIC GOVERNANCE OUTCOMES |
|
Author: |
IHOR LUKIANENKO, TETIANA NYCH, YEVHEN HREBONOZHKO, MAKSYM SIKALO, LIUDMYLA
TREBYK |
|
Abstract: |
Digital transformation calls for the enhancement and cultivation of public
servants’ competencies, thereby enabling them to more fully realize their
intellectual potential and contributing to the overall improvement of the public
administration’s efficacy. The aim of this study was to assess the influence of
the E-Government Development Index and the E-Participation Index on public
governance effectiveness on a global scale. The research applied the methods
of regression and correlation analyses, as well as ANOVA testing. Utilizing
panel data from 190 countries during 2014-2024, the study examined the
E-Government Development Index, E-Participation Index, and Government
Effectiveness. The findings suggest that in nations exhibiting a high degree of
e-government advancement and active e-participation, the digital literacy among
public servants and citizens is correspondingly elevated. This observation
substantiates the rationale for employing the E-Government Development and
E-Participation indices as indicators that indirectly reflect the level of
digital literacy among both civil servants and the general public. The study
revealed a close positive correlation among all indicators of e-governance,
e-participation, and governance efficacy. The strongest correlations were
observed between the E-Participation Index and the Online Service Index
(correlation coefficient 0.95), followed by the E-Government Development Index
and the E-Participation Index (0.84), and the E-Government Development Index and
the Government Effectiveness Index (0.81). A regression model elucidating the
influence of e-governance sub-indices, with an explanatory power of 66.04%,
demonstrated that all three independent variables exert a direct and
statistically significant impact on governance efficacy. An analysis concerning
the influence of e-participation indicated that the E-Participation Index also
possesses a statistically significant and direct effect on governance
effectiveness, albeit its impact is partially overlapped by other components of
e-governance. The insights derived from this study may support the formulation
of strategies aimed at the countries’ digital advancement, integrating
initiatives to reinforce the digital literacy of both the population and civil
servants. |
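The correlation figures reported above are Pearson coefficients over country-level panel data; a minimal stdlib sketch of the computation, using made-up illustrative index values rather than the study's data:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical EGDI and Government Effectiveness scores for five countries
egdi = [0.45, 0.60, 0.72, 0.81, 0.90]
gov_eff = [-0.3, 0.1, 0.5, 0.9, 1.2]
r = pearson(egdi, gov_eff)
```

A coefficient near 1, as with the 0.95 and 0.84 values reported, indicates a strong positive linear association between the index pairs.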
|
Keywords: |
E-governance, E-participation, Online Services, Human Capital, Telecommunication
Infrastructure, Government Effectiveness |
|
DOI: |
https://doi.org/10.5281/zenodo.19365432 |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st March 2026 -- Vol. 104. No. 6-- 2026 |
|
Full
Text |
|
|
Title: |
MITIGATING MODE COLLAPSE TO IMPROVE DIVERSITY IN TEXT-TO-IMAGE GAN OUTPUTS:
STRATEGIES IN ARCHITECTURAL DESIGN, TRAINING METHODOLOGIES, AND EVALUATION
TECHNIQUES |
|
Author: |
SUBUHI KASHIF ANSARI, MANAL AL KHAMMASH, ANJALI APPUKUTTAN, ANNE ANOOP, SANDEEP
KUMAR MATHARIYA, SHEELA D V, MOHAMMED SALEH AL ANSARI |
|
Abstract: |
Text-to-image generation using Generative Adversarial Networks (GANs) has
advanced significantly in recent years, enabling image synthesis from textual
descriptions. However, mode collapse remains a critical challenge that limits
output diversity. This systematic review analyzes strategies for mitigating
mode collapse in text-to-image GANs, examining architectural designs, training
methodologies, latent-space techniques, and evaluation metrics. The review
covers 45 studies published between 2015 and 2025, categorized into:
architectural innovations (18 papers), training-based strategies (12 papers),
latent-space and loss function methods (10 papers), and evaluation-centric
approaches (5 papers). Findings show that attention-based models, multi-scale
architectures, and semantic-spatial models enhance semantic alignment and
diversity, with specific limitations. Training-based approaches, including
curriculum learning, adaptive training, gradient penalties, and progressive
growing of GANs, help stabilize training and mitigate collapse. Latent-space
techniques, such as mode-seeking losses, contrastive losses, and noise
manipulation, promote output diversity. However, evaluation metrics like Fréchet
Inception Distance (FID), Inception Score (IS), Learned Perceptual Image Patch
Similarity (LPIPS), and Multi-Scale Structural Similarity Index (MS-SSIM) show
limitations in capturing semantic diversity. Progress in mitigating mode
collapse depends on combined architectural design, training stability, and
loss-function engineering. Future priorities include developing unified
benchmarks for evaluating semantic diversity, exploring hybrid architectures,
and designing adaptive training protocols to enable more robust text-to-image
models generating diverse, semantically coherent outputs. |
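Among the latent-space techniques the review surveys, mode-seeking losses reward the generator for mapping distant latent codes to distant images; a toy stdlib illustration of the core ratio, with illustrative vectors and a Euclidean distance (not drawn from any reviewed paper):

```python
import math

def l2(u, v):
    """Euclidean distance between two vectors given as lists."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def mode_seeking_ratio(g_z1, g_z2, z1, z2, eps=1e-8):
    """Ratio d(G(z1), G(z2)) / d(z1, z2); maximizing it discourages
    distinct latent codes from collapsing to near-identical outputs."""
    return l2(g_z1, g_z2) / (l2(z1, z2) + eps)

z1, z2 = [0.0, 0.0], [1.0, 1.0]
collapsed = mode_seeking_ratio([0.5, 0.5], [0.5, 0.5], z1, z2)  # identical outputs
diverse = mode_seeking_ratio([0.0, 0.0], [2.0, 2.0], z1, z2)    # distinct outputs
```

A collapsed generator drives this ratio toward zero, which is exactly the symptom such regularizers penalize.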
|
Keywords: |
Text-To-Image GANs, Mode Collapse, Output Diversity, Architectural Design,
Training Methodologies, Evaluation Techniques, Attention Mechanisms,
Latent-Space Techniques |
|
DOI: |
https://doi.org/10.5281/zenodo.19365469 |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st March 2026 -- Vol. 104. No. 6-- 2026 |
|
Full
Text |
|
|
Title: |
ADAPTIVE FEATURE EXTRACTION FRAMEWORK FOR ROBUST FACE ANTI-SPOOFING WITH
CROSS-DOMAIN GENERALIZATION |
|
Author: |
KARTHIKA S, G PADMAVATHI |
|
Abstract: |
Face spoofing detection is crucial for the security of facial recognition
systems; however, many existing methods struggle to generalize across varying
acquisition conditions such as changes in lighting, camera angles, and multiple
types of spoofing attempts. To address this challenge, this work proposes a
novel feature extraction framework that combines three advanced components:
Adaptive Kernel Generator (AKG), Discrete Style Assembly (DSA), and Adaptive
Style Transfer (AST). AKG dynamically adjusts feature extraction based on
instance-specific characteristics, enhancing sensitivity to subtle variations in
spoofing attacks. DSA categorizes input samples into distinct style categories,
enabling synthesis of style-specific features that are resilient to different
presentation attack instruments (PAIs) and environmental conditions. AST further
refines feature representations by adaptively transferring stylistic information
from reference images, ensuring consistency and accuracy across diverse
scenarios. The framework is built upon a modified ResNet-18 backbone optimized
for single-channel face inputs, serving as the initial feature extractor before
enhancement by the AKG, DSA, and AST modules. The model is evaluated on
Replay-Attack, SiW-Mv2, and OULU-NPU datasets, each offering unique variations
in spoofing scenarios. Experimental results demonstrate that the proposed model
outperforms a baseline ResNet-18 model, achieving up to 94% accuracy in
cross-dataset testing, highlighting its effectiveness and improved
generalization performance in real-world face spoofing detection. Unlike
conventional face anti-spoofing methods that rely on fixed feature extractors or
classifier-level adaptation, this work introduces a style-aware, feature-level
adaptation strategy that improves cross-domain generalization without requiring
target-domain data. |
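The abstract describes AST only at a high level; feature-level style transfer of this flavor resembles adaptive instance normalization, sketched here purely as an assumption about the general mechanism, not the authors' module:

```python
import statistics

def adain_style_transfer(features, ref_features):
    """Re-normalize a 1-D feature vector to the mean/std of a reference
    vector (AdaIN-style); a guess at the general shape of feature-level
    style transfer, not the paper's AST component."""
    mu, sigma = statistics.mean(features), statistics.pstdev(features)
    mu_r, sigma_r = statistics.mean(ref_features), statistics.pstdev(ref_features)
    return [sigma_r * (f - mu) / sigma + mu_r for f in features]

out = adain_style_transfer([1.0, 2.0, 3.0], [10.0, 20.0, 30.0])
```

The output carries the reference's first- and second-order statistics while preserving the input's internal structure, which is the sense in which stylistic information is "transferred".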
|
Keywords: |
Face Spoofing, Domain Generalization, Adaptive Kernel Generator, Discrete Style
Assembly, Adaptive Style Transfer |
|
DOI: |
https://doi.org/10.5281/zenodo.19365489 |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st March 2026 -- Vol. 104. No. 6-- 2026 |
|
Full
Text |
|
|
Title: |
TOWARDS ROBUST AND INTRINSICALLY INTERPRETABLE BRAIN TUMOR MRI CLASSIFICATION
VIA ADAPTIVE ATTENTION-GUIDED FUSION |
|
Author: |
MORSA SRUTHI, VEERRAJU GAMPALA |
|
Abstract: |
Accurate and interpretable classification of brain tumors from magnetic
resonance imaging (MRI) remains a major challenge owing to heterogeneous tumor
morphology, overlapping intensity patterns, and the shortcomings of fixed
feature-fusion techniques in deep learning models. Current convolutional and
transformer-based architectures tend to prioritize raw accuracy but lack
adaptive fusion and inherent interpretability, which restricts their clinical
reliability. This work examines whether hierarchical attention integration and
spatially adaptive dual-backbone fusion, with Particle Swarm Optimization
(PSO)-based feature scaling, can enhance robustness and explainability in brain
tumor MRI classification. To this end, the Enhanced Spatial Attention Enhanced
(SAE) Hybrid architecture is developed, combining ConvNeXt and EfficientNetB0
backbones with multi-stage Convolutional Block Attention Modules (CBAM), spatial
gating fusion, and a learnable PSO Weighted Scaling layer. The model was tested
on a Kaggle brain tumor MRI dataset of 4,117 images spanning three tumor
subtypes, strictly partitioned into training, validation, and a hold-out
independent test set (906 images). Experiments show that the proposed framework
attains 95.03% accuracy, a 95.03% F1-score, and 0.9949 ROC-AUC, statistically
significantly higher than the baseline concatenation (p = 0.012). Spatially
adaptive fusion outperformed scalar weighting methods, establishing that
regionally balancing features is more effective than global mixing in MRI
classification. Moreover, the embedded attention mechanisms provide inherent
interpretability by dynamically focusing on tumor regions of interest without
the use of post-hoc explanation. |
|
Keywords: |
Attention-Based Feature Fusion, Convolutional Block Attention Module, Deep
Neural Network Architecture, Particle Swarm Optimization, Spatial Gating
Mechanism, Transfer Learning. |
|
DOI: |
https://doi.org/10.5281/zenodo.19365523 |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st March 2026 -- Vol. 104. No. 6-- 2026 |
|
Full
Text |
|
|
Title: |
SCIBERT-DRIVEN GNN MODELS FOR DETECTING COMMUNITIES FROM SCOPUS KEYWORD
CO-OCCURRENCE NETWORK |
|
Author: |
KIRUTHIKA. R., KRISHNAVENI SAKKARAPANI |
|
Abstract: |
Identifying the research communities from the large-scale bibliometric data is
essential for understanding and interpreting the trends among recent research
topics. This article presents a novel framework for identifying communities from
the Scopus bibliometric data by constructing a keyword co-occurrence network.
Scientific articles on deep learning extracted from the Scopus bibliographic
database were selected for this work. The collected data are segmented into
five time frames of the Scopus Bibliographic Dataset (SBD): SBD_1 (2006-2013),
SBD_2 (2014-2016), SBD_3 (2017), SBD_4 (2018), and SBD_5 (2019). This work
develops a hybrid framework by integrating the
traditional Louvain algorithm with various Graph Neural Network (GNN) models,
which helps to improve the performance through the extraction of the best
textual information features from the keywords by applying the SciBERT model as
node features. GNN techniques, namely GCN, GraphSAGE, FeaStConv, APPNP,
WLConvContinuous, and AGNN, were implemented and compared to identify the best
model for this framework. These integrated frameworks
are named as SciBLoGCN, SciBLoGS, SciBLoFSC, SciBLoWLC, SciBLoAPPNP, and
SciBLoAGNN. The experimental findings confirm the presence of meaningful and
coherent research communities based on the structure and interconnections
between the nodes within the network. This work helps scholars understand
the interconnection among their domains based on recent research topics, which
enables strategic prioritization of their research areas. |
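The keyword co-occurrence network underlying this framework can be built by counting keyword pairs that appear together in an article; a minimal stdlib sketch with hypothetical keyword lists:

```python
from collections import Counter
from itertools import combinations

def cooccurrence_edges(papers):
    """Count keyword pairs appearing together in each paper's keyword list;
    the counts become edge weights of the co-occurrence network."""
    edges = Counter()
    for keywords in papers:
        for a, b in combinations(sorted(set(keywords)), 2):
            edges[(a, b)] += 1
    return edges

papers = [["deep learning", "gnn", "scopus"],
          ["deep learning", "gnn"],
          ["scopus", "bibliometrics"]]
edges = cooccurrence_edges(papers)
```

Community detection (e.g., Louvain) then runs on this weighted graph, with SciBERT embeddings of the keywords attached as node features in the paper's pipeline.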
|
Keywords: |
Community Detection, Scopus Bibliometric Data, Keyword co-occurrence Networks,
SciBERT, Complex Network. |
|
DOI: |
https://doi.org/10.5281/zenodo.19365549 |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st March 2026 -- Vol. 104. No. 6-- 2026 |
|
Full
Text |
|
|
Title: |
TRANSPARENT WILDFIRE DETECTION SYSTEMS: THE ROLE OF FIREDETXPLAINER IN
EXPLAINABLE AI MODELS |
|
Author: |
CHITTAKULA ROHINI, LAXMAIAH KOCHARLA, A. GEETHA, KONETI VARALAKSHMI, JAGADEESAN
S, PRAVEENA MANDAPATI, B. SRINIVASA RAO, VENKATA SRINIVASU VEESAM |
|
Abstract: |
Advanced detection systems are required to improve the efficacy of response
efforts in the face of the growing threat posed by wildfires to ecosystems and
communities. Wildfires' dynamic nature frequently renders conventional detection
methods inadequate. This paper introduces FireDetXplainer, a novel framework
that is intended to enhance wildfire detection by incorporating transparent and
explainable AI techniques. FireDetXplainer guarantees interpretability and
clarity in decision-making processes by employing state-of-the-art machine
learning models. Our strategy is designed to improve the accuracy of detection
and foster stakeholder trust by offering actionable insights into AI
predictions. The model's overall accuracy is significantly enhanced by the
integration of convolutional blocks and advanced image pre-processing
techniques. Using a variety of datasets from Kaggle and Mendeley,
FireDetXplainer implements Explainable AI (XAI) tools to ensure comprehensive
interpretation of its results. FireDetXplainer outperforms current top models and
achieves remarkable accuracy, as evidenced by the extensive experimental
results. This renders it a highly efficient method for image classification in
wildfire management. |
|
Keywords: |
Wildfire Detection, Explainable AI, LIME, Meteorological Data, Deep Learning |
|
DOI: |
https://doi.org/10.5281/zenodo.19365585 |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st March 2026 -- Vol. 104. No. 6-- 2026 |
|
Full
Text |
|
|
Title: |
DEVELOPMENT OF AN EDUCATIONAL MOBILE APPLICATION WITH INTELLIGENT AUTOMATION FOR
MATHEMATICS TEACHING: INTEGRATION OF FLUTTER, N8N AND OPEN SOURCE AI |
|
Author: |
CARLOS J. LAVADO AYALA, LUIS E. TORIBIO PALACIOS, FERNANDO W. BALBIN
URCUHUARANGA, ALAN M. INFANTE VIDALON |
|
Abstract: |
Objective: To design, develop, and evaluate an educational mobile application that integrates
Flutter, n8n and open source AI models (Router, Llama 2, Mistral) for automated
mathematical assistance with teacher monitoring. Methods: A dual application was
developed combining guided assistance and automatic resolution via AI, with
automated notifications and reporting via n8n. Quasi-experimental evaluation
with 180 students and 12 teachers over 12 weeks. Results: 87% adoption rate,
average 23% improvement in mathematics grades, significant increase in
self-efficacy (d=1.18), and 40% reduction in teacher time for diagnostic
identification. AI models successfully processed 94% of mathematical queries
(n=1,247 total queries). Conclusions: Intelligent automation with n8n and
open source AI represents an effective innovation for personalizing mathematical
learning and optimizing pedagogical monitoring. |
|
Keywords: |
Flutter, N8n, Educational Artificial Intelligence, Llama 2, Mistral, Workflow
Automation, Mathematics Learning, Educational Technology. |
|
DOI: |
https://doi.org/10.5281/zenodo.19365615 |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st March 2026 -- Vol. 104. No. 6-- 2026 |
|
Full
Text |
|
|
Title: |
AN INTELLIGENT NHPP-BASED SOFTWARE RELIABILITY GROWTH MODEL ENHANCED WITH DEEP
LEARNING FOR CROSS-PLATFORM FAILURE PREDICTION |
|
Author: |
DR B. SUVARNA MUKHI, DR.P. NAMRATHA, DR SUKANYA K, DR. E. HARIPRASAD, A.
HEMANTHA KUMAR, DR. NICHENAMETLA RAJESH, DR PRAVEEN KULKARNI, DR.P. NARESH |
|
Abstract: |
Modern software ecosystems, especially those driven by open-source
collaboration and continuous integration/continuous deployment (CI/CD)
practices, exhibit rapidly evolving and non-stationary behavior. Such dynamics
pose significant
challenges to conventional Software Reliability Growth Models (SRGMs).
Traditional models based on the Non-Homogeneous Poisson Process (NHPP) typically
assume fixed or smoothly varying failure detection rates, limiting their
effectiveness in capturing real-time failure trends within highly dynamic and
frequently updated code repositories. To overcome these limitations, this study
proposes a hybrid predictive framework that integrates NHPP-based reliability
modeling with Long Short-Term Memory (LSTM) networks for adaptive and real-time
failure forecasting. The proposed Deep Learning Augmented NHPP (DL-NHPP) model
enhances the classical failure intensity function by incorporating temporal
patterns learned from repository-level signals, including commit frequency,
issue reports, and developer activity metrics. Through this integration, the
model dynamically adjusts failure rate estimations in response to evolving
development behaviors. The framework is evaluated on five large-scale
open-source repositories, demonstrating substantial predictive improvements.
Experimental results show a reduction of more than 30% in Mean Absolute Error
(MAE) and Root Mean Square Error (RMSE) compared to conventional NHPP-based
SRGMs. These findings indicate that the DL-NHPP approach provides a scalable,
interpretable, and robust solution for real-time software reliability prediction
in modern, continuously evolving development environments. |
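Classical NHPP SRGMs of the kind DL-NHPP extends often use a mean value function such as the Goel-Okumoto form m(t) = a(1 − e^(−bt)); a brief sketch with illustrative parameters (the paper's contribution, adapting such rates with LSTM-learned repository signals, is not reproduced here):

```python
import math

def go_mean_failures(t, a=100.0, b=0.05):
    """Goel-Okumoto expected cumulative failures by time t:
    a = total expected failures, b = per-fault detection rate.
    Parameter values here are illustrative only."""
    return a * (1.0 - math.exp(-b * t))

early, late = go_mean_failures(10), go_mean_failures(100)
```

The curve starts at zero and saturates toward a, which is precisely the smooth, fixed-rate behavior the abstract argues fails to track highly dynamic CI/CD repositories.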
|
Keywords: |
Software Reliability Growth Model (SRGM), Non-Homogeneous Poisson Process
(NHPP), Deep Learning, LSTM, Failure Prediction, Temporal Modeling, Open-Source
Repositories |
|
DOI: |
https://doi.org/10.5281/zenodo.19365744 |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st March 2026 -- Vol. 104. No. 6-- 2026 |
|
Full
Text |
|
|
Title: |
AN INTERPRETABLE TRANSFORMER–GRAPH FUSION FRAMEWORK FOR MULTI-MODAL
CARDIOVASCULAR DISEASE PREDICTION |
|
Author: |
CHANDRASEKHARA REDDY T , Dr M.PURUSHOTHAM |
|
Abstract: |
The rapid rise of urbanization and connected mobility exposes the serious
limitations of traditional traffic management systems. Earlier solutions
focused on vehicle flow or relied on manual inspection for road maintenance;
neither was comprehensive, and both delayed responses to urban transport
issues. In this paper, we introduce DU-Net (Dual-Stream Urban Network), an
intelligent deep-learning framework for real-time detection of road anomalies
and vehicle analytics in a smart city. The design combines visual sensing from
high-definition CCTV roadside cameras with IoT sensor data from embedded
infrastructure in a multi-task learning setup that detects potholes, classifies
vehicles, and dynamically estimates traffic density.
A two-stream convolutional pipeline captures spatial dependencies and the
subsequent temporal dependencies. A probabilistic fusion model treats the
sensor-signal and video-signal hypotheses as draws from a probabilistic mixture
model, aligning the video-based and sensor-based hypotheses to reach consistent
decisions. An adaptive task-weighting scheme assigns an appropriate weight to
each task to enhance robustness across weather, lighting, and traffic
conditions. Experimental
analysis using urban road datasets shows DU-Net to outperform state-of-the-art
detectors like YOLOv8 and Faster R-CNN in terms of evaluation metrics, inference
speed, and computational cost. Moreover, its predictive analytics component
facilitates maintenance scheduling and traffic congestion forecasting. The
proposed DU-Net framework provides an extensible foundation for future smart
transportation systems through data-driven, adaptive, and climate-resilient
urban traffic governance. |
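The probabilistic fusion step combines the video-based and sensor-based hypotheses; in its simplest two-component mixture form this reduces to a convex combination, sketched here with an illustrative weight rather than the paper's learned parameters:

```python
def fuse(p_video, p_sensor, w=0.6):
    """Two-component mixture: fused anomaly probability as a convex
    combination of the video and sensor hypotheses; w is an illustrative
    mixing weight, not a learned value from the paper."""
    return w * p_video + (1.0 - w) * p_sensor

fused = fuse(0.9, 0.5)  # video strongly suggests a pothole, sensor is unsure
```

The fused estimate always lies between the two hypotheses, so a disagreement between modalities is softened rather than resolved by fiat.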
|
Keywords: |
Deep Learning, Dual-Stream Architecture, Intelligent Transportation Systems, IoT
Data Fusion, Multi-Task Learning, Pothole Detection, Smart Cities, Traffic
Analytics, Urban Governance, Vehicle Classification. |
|
DOI: |
https://doi.org/10.5281/zenodo.19366388 |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st March 2026 -- Vol. 104. No. 6-- 2026 |
|
Full
Text |
|
|
Title: |
KIDNEY STONE DETECTION FROM DICOM IMAGES USING IMAGE PROCESSING TECHNIQUES |
|
Author: |
YUTI DEWITA ARIMBI, SARIFUDDIN MADENDA, LUSSIANA ETP |
|
Abstract: |
Kidney stone detection from medical images is a critical step in early diagnosis
and treatment planning. In this study, an image processing approach is proposed
to identify kidney stones from DICOM (Digital Imaging and Communications in
Medicine) files, focusing on both right and left kidney regions. The methodology
includes grayscale conversion, elliptical area cropping, thresholding, and pixel
counting to calculate kidney stone size. The results indicate significant
differences in processing time and threshold values depending on the image
characteristics. Based on the analysis, the shortest processing time was 1.30
seconds, while the longest reached 1.46 seconds, with an average of 1.32
seconds. It was also observed that images without kidney stones required
approximately 1 second of processing time, while those containing kidney stones
took between 20 and 40 seconds, depending on the size and number of stones. In
one case (Sample 14), the processing time reached 2.21 seconds, reflecting the
high pixel count (542 pixels) and larger stone area (322.59 mm²). The
correlation analysis showed that the right kidney had a moderate positive
relationship (R = 0.57, R² = 0.33), while the left kidney had a stronger
correlation (R = 0.94, R² = 0.88), indicating more consistent pixel-to-area
relationships. These findings demonstrate the effectiveness of the proposed
image processing approach in identifying kidney stones with high accuracy,
providing valuable insights for clinical decision-making and future improvements
in automated kidney stone detection systems. |
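The pixel-counting step maps thresholded pixels to physical area; a minimal sketch, where the per-pixel area of roughly 0.5952 mm² is implied by the abstract's Sample 14 figures (542 pixels ↔ 322.59 mm²) and the threshold value is illustrative:

```python
def stone_area_mm2(image, threshold, pixel_area_mm2=0.5952):
    """Threshold a grayscale image (list of rows) and convert the count of
    above-threshold pixels to physical area. pixel_area_mm2 depends on the
    DICOM PixelSpacing tag; the default is back-computed from Sample 14."""
    count = sum(1 for row in image for px in row if px >= threshold)
    return count, count * pixel_area_mm2

img = [[10, 200, 210],
       [15, 220,  30],
       [12,  11,  13]]
count, area = stone_area_mm2(img, threshold=180)
```

In practice the threshold is chosen per image, which is consistent with the abstract's observation that threshold values vary with image characteristics.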
|
Keywords: |
DICOM, Kidney Stone Detection, Image Processing, Pixel Analysis, Medical Imaging |
|
DOI: |
https://doi.org/10.5281/zenodo.19366416 |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st March 2026 -- Vol. 104. No. 6-- 2026 |
|
Full
Text |
|
|
Title: |
HYBRID GLOBAL–LOCAL FEDERATED LEARNING FOR PRIVACY-PRESERVING TRAFFIC PREDICTION
UNDER NON-IID URBAN DATA |
|
Author: |
CHOPPARAPU GOWTHAMI, S KAVITHA |
|
Abstract: |
Rapid urbanization and increasing vehicular density require scalable and
privacy-preserving traffic forecasting systems capable of operating across
heterogeneous cities. Conventional centralized deep learning approaches demand
raw data aggregation, creating significant privacy, governance, and scalability
concerns, particularly in multi-city collaborations. This study proposes a
Privacy-Preserving Federated Learning (PP-FL) framework for collaborative urban
traffic flow prediction and disaster-aware mobility modeling without sharing
sensitive local datasets. The framework introduces a hybrid global–local model
architecture in which shared layers capture universal spatio-temporal traffic
dynamics while city-specific layers preserve localized mobility characteristics,
effectively mitigating non-identically distributed (non-IID) data heterogeneity.
To ensure strong confidentiality guarantees, differential privacy with
calibrated Gaussian noise and secure aggregation are integrated into the
federated optimization process, protecting against gradient inversion and
membership inference attacks. Extensive experiments conducted on heterogeneous
multi-city traffic datasets and 40,000 surveillance images demonstrate that the
proposed model achieves an RMSE of 12.94, MAE of 6.01, and R² of 0.91,
outperforming conventional local and centralized baselines. The framework
reduces privacy leakage risk from 91% to 5% while lowering communication
overhead by 34% through compressed update transmission. Additionally, the model
enhances anomaly detection performance (F1-score = 0.94) and improves
disaster-response sensitivity across flood and evacuation scenarios. |
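The hybrid global–local architecture can be pictured as federated averaging applied only to the shared layers, while each city keeps its local layers untouched; a minimal stdlib sketch with scalar stand-ins for parameters (the layer names are hypothetical):

```python
def hybrid_fedavg(client_models, shared_keys):
    """Average only the shared (global) parameters across clients; every
    city-specific (local) parameter is left as-is. Models are dicts of
    scalar stand-ins for parameter tensors."""
    n = len(client_models)
    avg = {k: sum(m[k] for m in client_models) / n for k in shared_keys}
    return [{**m, **avg} for m in client_models]

clients = [{"shared_w": 1.0, "local_w": 5.0},
           {"shared_w": 3.0, "local_w": -2.0}]
updated = hybrid_fedavg(clients, ["shared_w"])
```

In the full framework, differential-privacy noise and secure aggregation would be applied to the shared updates before this averaging step.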
|
Keywords: |
Federated Learning, Traffic Flow Prediction, Privacy Preservation,
Intelligent Transportation Systems, Deep Learning. |
|
DOI: |
https://doi.org/10.5281/zenodo.19366441 |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st March 2026 -- Vol. 104. No. 6-- 2026 |
|
Full
Text |
|
|
Title: |
A DYNAMIC PRICING FRAMEWORK FOR TRAIN TICKETS USING MACHINE LEARNING PREDICTION
AND RULE-BASED ADAPTATION |
|
Author: |
KIKI WIJAYA, YULYANI ARIFIN |
|
Abstract: |
The railway transportation system in Indonesia has experienced a significant
increase in demand, especially during holiday seasons, indicating the need for
ticket price optimization to maximize revenue and balance passenger
distribution. This study aims to develop a simple and efficient dynamic pricing
model for train tickets, addressing the issues of static pricing and the
confusing complexity of ticket subclasses for passengers and management. The
methods employed include identifying key factors influencing ticket prices
(booking time, route, service type, demand) and building a robust price
prediction model using the XGBoost algorithm. Train ticket purchase transaction
data from 2020 to 2025, including details like purchase time, route, ticket
class, and schedule popularity, were utilized to generate accurate base prices.
These base prices are then adjusted in real-time considering current demand and
seat availability. Dynamic pricing simulations evaluate price increases based
on demand percentage and train occupancy rates. Model evaluation uses R-squared
(R²), Root Mean Squared Error (RMSE), and Mean Absolute Error (MAE) metrics to
measure prediction accuracy. The results of this study are expected
to contribute significantly to railway companies in optimizing ticket pricing
strategies and improving operational efficiency. |
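The rule-based adaptation layer raises the model-predicted base fare using demand and occupancy rules; a sketch with invented thresholds and step sizes, since the abstract does not specify the actual rules:

```python
def adjust_price(base_price, occupancy, demand_ratio,
                 occ_step=0.10, demand_step=0.05, cap=1.5):
    """Raise a predicted base fare with simple rules: +10% per 25% of
    occupancy above half-full, +5% per 25% of demand above average,
    capped at 1.5x base. All thresholds here are illustrative."""
    mult = 1.0
    if occupancy > 0.5:
        mult += occ_step * ((occupancy - 0.5) / 0.25)
    if demand_ratio > 1.0:
        mult += demand_step * ((demand_ratio - 1.0) / 0.25)
    return base_price * min(mult, cap)

price = adjust_price(100_000, occupancy=0.75, demand_ratio=1.5)
```

Separating the learned base price (XGBoost) from the transparent rule layer keeps the final fare explainable to both passengers and management, which is the simplicity the abstract aims for.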
|
Keywords: |
Ticket Price Prediction, Dynamic Pricing, XGBoost, Railway, Machine Learning. |
|
DOI: |
https://doi.org/10.5281/zenodo.19366465 |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st March 2026 -- Vol. 104. No. 6-- 2026 |
|
Full
Text |
|
|
Title: |
FUSIONDEFECTNET: A CNN-VISION TRANSFORMER METHOD WITH ADAPTIVE GATING FOR
EXPLAINABLE TEXTILE DEFECT DETECTION |
|
Author: |
THANDAVA KRISHNA SAI PANDRAJU, VENKANNA CHANAGONI, P MARY KAMALA KUMARI,
MANEESHA VADDURI, SHAIK SALMA BEGUM, V. SUMA AVANI |
|
Abstract: |
Automating inspection is challenging for textile manufacturers due to a
pronounced class imbalance, with the majority of samples being defect-free, and
the complementary nature of local and global defects. In this study,
FusionDefectNet is presented as a novel hybrid architecture: Convolutional
Neural Networks (CNNs) extract local features, while Vision Transformers (ViTs)
capture global context. An adaptive gating mechanism modulates the weight of
each branch's contribution based on the input's attributes. The 68.7:1 class
imbalance is addressed by combining class-weighted loss with focal loss (γ=2).
Interpretable AI techniques such as Grad-CAM and ViT attention visualizations
are used to gain insight into the decision-making process. On the TILDA
dataset, the model achieved 97.3% accuracy and a 96.9% score, surpassing pure
CNN solutions. When the gating weight α is below 0.3, the CNN branch is
preferred for localized imperfections such as holes and stains; when α exceeds
0.7, the gate favors the ViT branch for structural issues at the global scale.
The explainability analyses show that the CNN branch identifies minor local
discrepancies, while the ViT branch captures more substantial contextual
patterns. |
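The focal loss with γ=2 cited above down-weights easy, well-classified (mostly defect-free) samples so training focuses on the rare defects; a minimal stdlib sketch, where α stands in for the per-class weight and is illustrative:

```python
import math

def focal_loss(p_correct, alpha=1.0, gamma=2.0):
    """Focal loss for the true class: -alpha * (1 - p)^gamma * log(p).
    gamma=2 matches the setting in the abstract; alpha is an illustrative
    stand-in for the class weight used alongside it."""
    return -alpha * (1.0 - p_correct) ** gamma * math.log(p_correct)

easy = focal_loss(0.95)  # confident, well-classified sample
hard = focal_loss(0.30)  # poorly classified sample
```

The (1 − p)² modulating factor shrinks the loss on confident predictions by orders of magnitude, which is what keeps the abundant defect-free class from dominating the gradient.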
|
Keywords: |
FusionDefectNet, CNN, Grad-CAM, Explainable AI, Focal Loss, ViT. |
|
DOI: |
https://doi.org/10.5281/zenodo.19366478 |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st March 2026 -- Vol. 104. No. 6-- 2026 |
|
Full
Text |
|
|
Title: |
EVALUATING QAOA AND QSVM FOR OPTIMIZATION AND MACHINE LEARNING ON CLASSICAL AND
QUANTUM |
|
Author: |
Dr. R BOOPATHI, Dr. KALYANAPU SRINIVAS, Dr. N PUSHPALATHA, V V RAMA KRISHNA, V
LAKSHMI SAILAJA, Dr. A MUTHUKRISHNAN, SHAIK JILANI BASHA, Dr. R BALAMURUGAN |
|
Abstract: |
The ability of quantum computing to transform multiple domains, such as
cryptography, optimization, and machine learning, has been recognized. This
paper describes how quantum algorithms can be applied, using QAOA to solve the
TSP and QSVM to classify MNIST digits. We examine how quantum algorithms handle
complex problems and compare their performance against classical computing
systems, using IBM quantum processors alongside classical computing for the
simulation and optimization tasks. The research revealed that QAOA
gives close-to-optimal outcomes for the TSP, running much faster than classical
algorithms. At the same time, QSVM still performs well in digit classification
with only a slight decrease in accuracy. Nevertheless, training quantum neural
networks requires more time, showing that hardware should be upgraded. This work
points out how quantum computing might play a significant role in helping with
optimization and machine learning while explaining the present problems and
possibilities for quantum algorithms and hardware. This study indicates that if
error correction techniques and hardware are improved, quantum computing can
help solve everyday challenges. |
|
Keywords: |
Quantum computing, Quantum Approximate Optimization Algorithm (QAOA), Quantum
Support Vector Machines (QSVM), Traveling Salesman Problem (TSP), MNIST dataset,
Hybrid quantum-classical models |
|
DOI: |
https://doi.org/10.5281/zenodo.19366503 |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st March 2026 -- Vol. 104. No. 6-- 2026 |
|
Full
Text |
|
|
Title: |
BIG DATA AND MACHINE LEARNING FRAMEWORK FOR CANCER, FINANCIAL, AND STRESS RISK
PREDICTION |
|
Author: |
M V B MURALI KRISHNA M, VIJAYA KRISHNA SONTHI, N. SRINIVAS RAO, AVSS SOMASUNDAR,
M.SRIKANTH, M CHILAKARAO |
|
Abstract: |
The rapid evolution of big data in the medical, financial, and behavioral
domains offers an opportunity to apply predictive analytics to overall
well-being. However, existing systems tend to treat cancer risk prediction,
financial status estimation, and stress analysis independently, which limits
their ability to provide integrated and tailored risk assessment. To address
this limitation, this paper proposes a single Big Data and Machine Learning
Framework to forecast cancer, financial, and stress risks. It uses a
combination of
heterogeneous data, medical indicators, financial data, and behavioral data that
are connected to stress and executes the supervised machine learning algorithms
such as Logistic Regression, Decision Tree, Random Forest, Linear Regression,
and Gradient Boosting. The results of the experiment indicate that, as compared
to the Logistic Regression or the Random Forest, Decision Tree model predicted
the risk of cancer the most accurately with an accuracy of 83%. Gradient
Boosting was the lowest in the Mean Squared Error of 5.25 x 10-6 and better than
that of Linear Regression, which is 0.15. In addition, stress risk
classification was also effective in the determination of the various levels of
stress basing on behavioral and physiological indicators. These results confirm
the notion that the proposed integrated framework improves predictive accuracy
and makes it possible to consider risks in the comprehensive manner. This model
provides decision support tool, which is data-oriented to detect early risks of
cancer, financial planning, and stress management to improve holistic
well-being. |
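As a hedged illustration of the supervised-model comparison the abstract reports, the sketch below fits the named classifiers and regressors on synthetic data and selects the best of each by accuracy and MSE. The dataset, features, and hyperparameters are placeholders, not the authors' setup, and which model wins may differ from the paper's findings.

```python
# Sketch: compare the abstract's classifiers by accuracy and its regressors
# by MSE on synthetic stand-in data (hypothetical, not the paper's dataset).
from sklearn.datasets import make_classification, make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestClassifier
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.metrics import accuracy_score, mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

SEED = 42

# Synthetic "cancer risk" classification task.
Xc, yc = make_classification(n_samples=600, n_features=12, random_state=SEED)
Xtr, Xte, ytr, yte = train_test_split(Xc, yc, test_size=0.25, random_state=SEED)
classifiers = {
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "DecisionTree": DecisionTreeClassifier(random_state=SEED),
    "RandomForest": RandomForestClassifier(random_state=SEED),
}
acc = {name: accuracy_score(yte, m.fit(Xtr, ytr).predict(Xte))
       for name, m in classifiers.items()}

# Synthetic "financial risk" regression task.
Xr, yr = make_regression(n_samples=600, n_features=8, noise=5.0, random_state=SEED)
Xtr2, Xte2, ytr2, yte2 = train_test_split(Xr, yr, test_size=0.25, random_state=SEED)
regressors = {
    "LinearRegression": LinearRegression(),
    "GradientBoosting": GradientBoostingRegressor(random_state=SEED),
}
mse = {name: mean_squared_error(yte2, m.fit(Xtr2, ytr2).predict(Xte2))
       for name, m in regressors.items()}

best_clf = max(acc, key=acc.get)  # highest accuracy wins
best_reg = min(mse, key=mse.get)  # lowest MSE wins
```

This mirrors the selection logic only; the reported 83% accuracy and 5.25 × 10⁻⁶ MSE depend on the authors' actual data.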
|
Keywords: |
Big Data Analytics, Machine Learning, Cancer Risk Prediction, Financial Risk
Estimation, Stress Risk Analysis |
|
DOI: |
https://doi.org/10.5281/zenodo.19366535 |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st March 2026 -- Vol. 104. No. 6-- 2026 |
|
Full
Text |
|
|
Title: |
F4-SERUNET: THEORY-GUIDED TEMPORAL DEEP LEARNING FRAMEWORK FOR MULTI-FACTOR
PERFORMANCE PREDICTION IN SERU PRODUCTION SYSTEMS |
|
Author: |
MANISH KAUSHIK, DEVENDRA CHOUDHARY, PRIYA DARSHINI |
|
Abstract: |
Seru Production Systems entail complex, time-evolving interactions among
structural flexibility, operational efficiency, human-related factors, and lean
process improvement, the result of stochastic task processing, workforce
learning, and reconfiguration dynamics. Due to the scarcity of temporally rich
datasets and the limitations of static regression models, accurate predictions
of these interacting performance dimensions are difficult to make. This
research introduces F4-SeruNet, a theory-guided temporal deep learning
framework for jointly predicting four interpretable Seru performance factors. A
synthetic yet realistic factory evolution is mathematically modeled, generating
180,000 temporal observations from 300 production episodes of 600 discrete time
steps each. The data capture time-skewed bottlenecks caused by
variability-driven congestion effects, learning-induced performance gains, and
reconfiguration penalties, ensuring face validity and consistent sensitivity.
F4-SeruNet integrates temporal modeling with factor-specific encoders and
cross-factor attention, together with soft monotonic constraints that embed
Seru production theory. Experimental results show consistently high
goodness-of-fit across all factors, with RMSE values within 0.003747–0.005474
and R² values in excess of 0.9985 for all four factors. Ablation and
sensitivity analyses confirm that the theory-guided constraints stabilize
learning and enforce coherent responses to perturbations, although strong
static baselines yield lower one-step errors. F4-SeruNet demonstrates superior
temporal consistency, balanced multi-factor behavior, and robustness under
rolling-horizon forecasts. These results point to the suitability of F4-SeruNet
for scenario analysis and medium-term decision support in dynamic Seru
production planning. |
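A "soft monotonic constraint" of the kind F4-SeruNet embeds can be sketched as a hinge-style penalty added to the training loss whenever predictions move against a theory-prescribed direction as an input factor grows. The specific factor, direction, and penalty form below are illustrative assumptions, not the paper's actual constraints.

```python
# Sketch of a soft monotonicity penalty (assumed form, for illustration):
# sort samples by the input factor and penalize prediction decreases
# (direction=+1) or increases (direction=-1) between neighbors.
import numpy as np

def monotonic_penalty(inputs: np.ndarray, preds: np.ndarray,
                      direction: int = +1) -> float:
    """Penalize violations of 'preds should be monotone in inputs'.

    direction=+1 enforces non-decreasing, direction=-1 non-increasing.
    """
    order = np.argsort(inputs)
    diffs = np.diff(preds[order]) * direction
    # Only theory-violating steps contribute to the penalty.
    return float(np.sum(np.clip(-diffs, 0.0, None)))

# e.g. (assumed): workforce learning should not reduce predicted efficiency.
x = np.array([0.1, 0.4, 0.2, 0.8])
y_ok = np.array([0.2, 0.5, 0.3, 0.9])    # monotone in x: zero penalty
y_bad = np.array([0.2, 0.1, 0.3, 0.05])  # violations: positive penalty
```

In training, such a term would be scaled and added to the data-fit loss, letting the network trade small violations against accuracy, which is what makes the constraint "soft".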
|
Keywords: |
Seru Production Systems, Temporal Deep Learning, Theory-Guided Neural Networks,
Multi-Factor Performance Prediction, Synthetic Industrial Data |
|
DOI: |
https://doi.org/10.5281/zenodo.19366562 |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st March 2026 -- Vol. 104. No. 6-- 2026 |
|
Full
Text |
|
|
Title: |
A MULTIMODAL CONVNEXT AND LLM-POWERED CLINICAL DECISION SUPPORT SYSTEM FOR EARLY
AMD CLASSIFICATION AND TREATMENT |
|
Author: |
AKILA A , DURGADEVI P |
|
Abstract: |
Age-Related Macular Degeneration (AMD) is a leading cause of vision loss in the
elderly, making early identification essential to preventing irreparable damage.
While timely intervention is critical for preserving vision, traditional
diagnostic methods often struggle to balance accuracy with computational speed.
The primary research contribution of this work is the development of an
Integrated Theranostic Framework that synergizes deep learning with Generative
AI to perform simultaneous disease staging and therapeutic guidance,
fundamentally solving the computational bottlenecks of unimodal image analysis.
The proposed model introduces a multimodal fusion framework, specifically a
ConvNeXt, that integrates a dual-stream input of Color Fundus Photography (CFP)
and Optical Coherence Tomography (OCT) to transcend the constraints of unimodal
classification, specifically targeting the early staging of AMD. By leveraging
the advanced feature extraction capabilities of the ConvNeXt architecture, the
model robustly synthesizes spatial features from both imaging modalities,
optimized using both standard backpropagation and stochastic gradient descent.
Furthermore, the new knowledge created by this research is the successful
operationalization of a Neuro-Symbolic diagnostic pipeline within ophthalmology.
Following the classification of the AMD stage based on fused decision scores, a
Large Language Model (LLM) processes the symbolic clinical findings to generate
context-aware drug and therapeutic recommendations. Model interpretability is
further ensured through Class Activation Mapping (CAM) to visualize decision
heatmaps. Implemented in Python, the proposed framework achieves a superior
accuracy of 98.65% with a rapid inference time of 13.12 seconds. These results
demonstrate that combining ConvNeXt-driven multimodal fusion with LLM-driven
insights significantly advances the capability of early AMD detection and
establishes a new paradigm for automated, guardrailed patient management. |
|
Keywords: |
Age-Related Macular Degeneration (AMD), Integrated Theranostics, OCT, CFP,
Multi-Modal Fusion, Cascaded Group Attention, Deep Spatial Attention, Large
Language Models (LLM), Personalized Therapy. |
|
DOI: |
https://doi.org/10.5281/zenodo.19366586 |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st March 2026 -- Vol. 104. No. 6-- 2026 |
|
Full
Text |
|
|
Title: |
INTEGRATION OF ADAPTIVE EDUCATIONAL PLATFORMS BASED ON ARTIFICIAL INTELLIGENCE
FOR FOREIGN LANGUAGE TEACHING IN NON-LINGUISTIC HIGHER EDUCATIONAL INSTITUTIONS |
|
Author: |
ALEVTYNA MINIAILOVA, IRYNA HOLUBIEVA, YEVHENIIA KOSTYK, SNIZHANA KUTSYN, OKSANA
SYVYK |
|
Abstract: |
The study assesses the effectiveness of adaptive educational platforms based on
artificial intelligence (AI) in teaching foreign languages in non-linguistic
higher education institutions (HEIs). The relevance of the issue is determined
by the growing need for personalized language trajectories that can combine
prediction accuracy with pedagogical interpretability and stability of
algorithms. The paper compares three groups of platforms – Machine Learning
(ML)-based, Deep Learning (DL)-based, and Hybrid Cognitive-oriented. They were
tested using data on academic performance, Learning Management System (LMS)
activity, and standardized language testing (n = 524). To generalize the results, the
integral Language Training Effectiveness Index (LTEI) was used, which combines
the metrics F0.5-score, Receiver Operating Characteristic (ROC)- Area Under
Curve (AUC), Gini index, and pedagogical interpretability assessment
(Ped-Value). The methodology included stratified validation, analysis of the
stability of the weighting coefficients (ΔW-index), and testing the statistical
significance of the differences (analysis of variance (ANOVA), paired t-tests).
The results showed that the hybrid systems provided the highest balance (LTEI =
0.85–0.87; Ped-Value = 0.85; Kappa = 0.79), combining accuracy and
understandable interpretation for teachers, despite the higher computational
costs. Deep models achieved maximum accuracy (F0.5 ≈ 0.81; AUC ≈ 0.87) but were
weaker in stability (ΔW-index = 0.134) and transparency of results. Classical ML
algorithms provided the fastest adaptation time (≈0.84 s/epoch) and the lowest
weight fluctuations (ΔW-index = 0.091), but their LTEI indicators remained at
the level of 0.72–0.76. Statistically significant differences (p < 0.01)
confirmed the superiority of hybrid solutions, while the choice between ML and
DL platforms should depend on the resources and tasks of the educational
process. The academic novelty of the study is the comprehensive comparison of
different architectures of adaptive platforms according to technical and
pedagogical criteria, as well as taking into account the user experience
(UX)/user interface (UI) aspects of educational web interfaces. |
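The integral LTEI described above can be sketched as a weighted combination of its component metrics. This is a hypothetical reconstruction: the Gini index is derived from AUC as 2·AUC - 1, and the equal weights are an assumption, since the paper's actual weighting scheme (whose stability the ΔW-index analyzes) is not reproduced here.

```python
# Hypothetical sketch of a composite index like LTEI, combining F0.5-score,
# ROC-AUC, the Gini index, and a pedagogical interpretability score.
def gini_from_auc(auc: float) -> float:
    # Standard relation between the Gini coefficient and ROC-AUC.
    return 2.0 * auc - 1.0

def ltei(f05: float, auc: float, ped_value: float,
         weights=(0.25, 0.25, 0.25, 0.25)) -> float:
    # Equal weights are an illustrative assumption, not the paper's values.
    metrics = (f05, auc, gini_from_auc(auc), ped_value)
    return sum(w * m for w, m in zip(weights, metrics))

# Illustrative inputs close to the values reported for the deep/hybrid models:
score = ltei(f05=0.81, auc=0.87, ped_value=0.85)
```

With these inputs the equal-weight index lands near the 0.85 range the abstract reports for hybrid systems, but that agreement is coincidental to the assumed weights.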
|
Keywords: |
Adaptive Platforms, Artificial Intelligence, Machine Learning, Deep Learning,
Hybrid Cognitive-Oriented, LTEI, ΔW-Index, Interpretability, Stability of
Forecasts, Teaching Foreign Languages, UX (User Experience), Human–Computer
Interaction. |
|
DOI: |
https://doi.org/10.5281/zenodo.19366610 |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st March 2026 -- Vol. 104. No. 6-- 2026 |
|
Full
Text |
|
|
Title: |
FORMING SOCIAL BUSINESS OPPORTUNITY BELIEFS IN A DEVELOPING ECONOMY: A CASE
STUDY OF INDONESIAN DIGITAL ENTREPRENEURS |
|
Author: |
REYNARD J.N MAKARAWUNG, AGUNG STEFANUS KEMBAU, JULIUS SUTRISNO, LILIS
SUSILAWATY, TANNIA |
|
Abstract: |
The "opportunity actualization" perspective has become a dominant framework for
understanding how entrepreneurs identify and pursue social business
opportunities. However, existing models are predominantly Western-centric,
overlooking the distinct institutional and cultural realities of developing
economies. This study addresses this theoretical gap by investigating how
digital entrepreneurs in Indonesia form social business opportunity beliefs in
an environment characterized by institutional voids. Adopting a qualitative
multiple case study design, we conducted in-depth interviews with Indonesian
digital impact entrepreneurs and analyzed the data using the Gioia methodology.
The findings reveal that opportunity formation in this context is not merely an
individual cognitive act but a socially negotiated process driven by "Communal
Alertness" rather than individual dissatisfaction. Crucially, the study
identifies a novel mechanism of "Digital-Relational Substitution," where
entrepreneurs leverage digital platforms and informal networks to substitute for
the lack of formal institutional validation common in the Global North. We argue
that in high-context cultures, opportunity beliefs are validated through
relational capital and digital micro-experiments rather than formal market data.
These insights contribute to "recalibrating" entrepreneurship research for the
Global South and offer practical guidance for policymakers to foster trust-based
entrepreneurial ecosystems. |
|
Keywords: |
Social Entrepreneurship, Opportunity Actualization, Digital Entrepreneurship,
Developing Economy, Gioia Methodology. |
|
DOI: |
https://doi.org/10.5281/zenodo.19366642 |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st March 2026 -- Vol. 104. No. 6-- 2026 |
|
Full
Text |
|
|
Title: |
A FUSION-BASED SENTIMENT CLASSIFIER FOR HINGLISH USING CONTEXTUAL ENCODERS AND
COMMONSENSE INFERENCE |
|
Author: |
KISHORE KUMAR P. V, HARI JYOTHULA |
|
Abstract: |
Sentiment analysis of code-mixed text such as Hinglish remains a difficult
problem. Social media users often mix Hindi and English, the grammar is
informal and inconsistent, and sentiment is frequently expressed indirectly.
Many existing sentiment analysis models depend mostly on textual context and
struggle to recognize sentiment conveyed through situations or cultural cues.
This limitation causes misclassification, most often in content that is
neutral or negative in tone. The goal of this study is to improve sentiment
classification of Hinglish text, with a focus on handling ambiguous emotional
expressions. A fusion-based sentiment classification framework is proposed
that brings together contextual
embeddings and external commonsense knowledge. Contextual embeddings record how
words are used in the text, while commonsense knowledge helps infer how people
feel about everyday situations. The Hinglish SentiMix dataset, which has
about 20,000 tweets that are marked as positive, negative, or neutral, is used
to test the proposed model. The results indicate that the suggested method
attains a weighted F1-score of 74.1%. This score is better than those of
well-known transformer-based baseline models. The results show that adding
commonsense reasoning improves sentiment understanding, with the largest gains
for sentiments that are not directly stated. The study demonstrates the value
of combining linguistic context with external knowledge, an approach well
suited to real-world multilingual social media analysis. |
|
Keywords: |
Hinglish Sentiment Analysis, Code-Mixed Text, Contextual Embeddings, Commonsense
Knowledge, Fusion Model. |
|
DOI: |
|
|
Source: |
Journal of Theoretical and Applied Information Technology
31st March 2026 -- Vol. 104. No. 6-- 2026 |
|
Full
Text |
|
|
Title: |
KRILL HERD OPTIMIZED ZONE ROUTING PROTOCOL (KHO-ZRP) FOR ENERGY-EFFICIENT
ULTRA-DYNAMIC FLYING AD HOC NETWORKS |
|
Author: |
GUNAVATHY H, Dr. RAVINDRANATH P.V |
|
Abstract: |
Flying Ad Hoc Networks operate under extreme mobility, fluctuating node density,
frequent link disruptions, and strict energy constraints, which jointly degrade
routing stability and communication reliability. Existing FANET routing
approaches address these challenges in isolation through learning-based
adaptation, clustering, or bio-inspired heuristics, leaving coordination among
zoning, relay selection, and energy regulation insufficiently explored. This
work introduces a Krill Herd Optimized Zone Routing Protocol that embeds
collective swarm intelligence directly into zonal routing control. New knowledge
is created through an optimization-governed routing framework where krill herd
dynamics regulate zone radius adaptation, relay selection, hop progression, and
spatial stability within a unified routing architecture. The contribution
advances routing-system design rather than proposing a new optimizer, enabling
stable and energy-aware communication under ultra-dynamic aerial conditions.
NS3-based evaluation across varying node densities demonstrates reduced delay,
lower packet loss, improved packet delivery, higher throughput, and controlled
energy consumption compared to existing protocols. Results confirm the
effectiveness of optimization-driven zonal coordination for sustaining quality
of service in highly dynamic FANET deployments. |
|
Keywords: |
Zone Routing Protocol, Krill Herd Optimization, UAV, FANET, QoS |
|
DOI: |
https://doi.org/10.5281/zenodo.19366952 |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st March 2026 -- Vol. 104. No. 6-- 2026 |
|
Full
Text |
|
|
Title: |
A DATA-DRIVEN HYBRID FRAMEWORK FOR TELEVISION AND MEDIA ETHICS: SCIENTOMETRICS
AND DISCOURSE NETWORK ANALYTICS |
|
Author: |
JIMI NAROTAMA MAHAMERUAJI, ATWAR BAJARI, DADANG RAHMAT HIDAYAT, ACENG ABDULLAH |
|
Abstract: |
This study integrates bibliometric mapping and Discourse Network Analysis (DNA)
to examine television and media ethics across scholarship and online news
discourse. Bibliometrically, Scopus-indexed publications (2000–2024; n = 238)
were analyzed using ScientoPy to compute AGR, ADY, PDLY, and the h-index.
Results indicate a marked rise in output after 2018, with recurrent themes
around media ethics and television discourse, and a prominent cluster related to
reality television, alongside emerging lines of inquiry such as artificial
intelligence, indigenization, and televangelism. Complementing this macro-level
view, DNA was applied to Indonesian online news coverage in 2025 (Google News
retrieval via Linkclump; keyword “Etika Televisi”; n = 69). Network modeling
(DNA 3.0.11) and visualization (Visone 2.28.1) show a centralized discourse
structure in which KPI and the Press Council (Dewan Pers) function as anchor
nodes, while issue connectivity concentrates on General News and Broadcasting
Ethics. The individual–issue network further suggests cross-issue linkage
through intermediary nodes (e.g., journalists). Limitations include reliance on
a single indexing database and potential platform/source bias in Google News
retrieval; future research should expand databases, use multi-platform media
corpora, and test robustness with longitudinal and metric-based network
validation. |
|
Keywords: |
Scientometrics, ScientoPy, Discourse Network Analysis (DNA), Network Analytics,
Television and Media Ethics |
|
DOI: |
https://doi.org/10.5281/zenodo.19366973 |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st March 2026 -- Vol. 104. No. 6-- 2026 |
|
Full
Text |
|
|
Title: |
A REAL TIME HYBRID QUANTUM CLASSICAL DIAGNOSTIC FRAMEWORK FOR EARLY DISEASE
DETECTION |
|
Author: |
GOWRIPUSHPA GEDDAM, NARASIMHA RAO THOTA, PRANUSHA DOGGA, SELVA MALAR N, RAVURI
LALITHA, MYLAVARAPU KALYAN RAM, M CHAITANYA KUMARI, SATHISHKUMAR SHANMUGAM |
|
Abstract: |
Early detection of clinical deterioration is crucial for better patient
outcomes, and current diagnostic models struggle to model the complex
interactions observed in nonlinear patterns in multimodal medical data while
meeting the performance requirements for real-time operation. This paper
proposes a Real-Time Hybrid Quantum-Classical Diagnostic Framework (HQCDF) to
improve early disease detection by combining a Quantum Variational Biomarker
Embedding module (QVBE), a Classical Temporal-Clinical Learning module (CTCLM),
and a novel Quantum-Classical Diagnostic Fusion mechanism (QCDF). A
semi-synthetic multimodal dataset, Med-EarlyQ, comprising physiological time
series, laboratory biomarkers, and clinical scores, was built for evaluation.
The hybrid model was trained using end-to-end differentiable quantum-classical
optimization via the parameter-shift rule. Results showed that HQCDF obtained
96% accuracy, a 0.93 F1-score, and 0.95 AUROC, outperforming state-of-the-art
classical deep learning models and existing hybrid approaches by 6–15% while
keeping real-time inference latency at 18 ms. Analysis revealed that the QCDF
module contributed to increased robustness against quantum noise and missing
clinical information, and that the QVBE module increased sensitivity to subtle
early-stage disease trends. The results show that the proposed hybrid framework is a promising
evolution, addressing the call for next-generation, real-time, and
resource-efficient diagnostic systems with great potential for early
intervention and clinical decision-support impact. |
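The parameter-shift rule cited above for end-to-end quantum-classical training admits a compact illustration. For a circuit parameter whose gate is generated by a Pauli operator, the exact gradient of an expectation value f(θ) is (f(θ + π/2) - f(θ - π/2))/2. The sketch below uses f(θ) = cos(θ), the ⟨Z⟩ expectation after an RY(θ) rotation on |0⟩, as a classical stand-in for a real quantum expectation; it is not the paper's circuit.

```python
# Parameter-shift gradient sketch. Unlike finite differences, the two-point
# shift of pi/2 gives the exact derivative for Pauli-generated gates.
import math

def expectation(theta: float) -> float:
    # Stand-in expectation value: <Z> after RY(theta) on |0> equals cos(theta).
    return math.cos(theta)

def parameter_shift_grad(f, theta: float, shift: float = math.pi / 2) -> float:
    # d f / d theta = (f(theta + s) - f(theta - s)) / 2 for s = pi/2.
    return (f(theta + shift) - f(theta - shift)) / 2.0

theta = 0.3
grad = parameter_shift_grad(expectation, theta)  # equals -sin(0.3) exactly
```

In the hybrid setting the same two evaluations are run on the quantum device, so the classical optimizer receives exact gradients without backpropagating through the circuit.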
|
Keywords: |
Hybrid Quantum–Classical Diagnosis, Early Disease Detection, Quantum Variational
Circuits, Real-Time Clinical Analytics, Multimodal Biomarker Modeling,
Quantum–Classical Fusion. |
|
DOI: |
https://doi.org/10.5281/zenodo.19367026 |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st March 2026 -- Vol. 104. No. 6-- 2026 |
|
Full
Text |
|
|
Title: |
HCRHE-NET: HIGH-CONFIDENCE RESIDUAL HYBRID ENSEMBLE NETWORK FOR BREAST CANCER
DETECTION FROM TCGA-BRCA DNA METHYLATION DATA |
|
Author: |
HEMALATHA D, N. GOMATHI |
|
Abstract: |
Due to noise, redundancy, and uncertainty with respect to predictive
confidence, proper classification of high-dimensional biological data remains a
significant challenge. To overcome these shortcomings, this paper proposes a
High-Confidence Residual Hybrid Ensemble (HCRHE) Network, which combines
residual learning, deep neural modelling, and confidence-aware decision fusion
to classify diseases with high confidence. The proposed methodology is
evaluated on large-scale DNA methylation data from The Cancer Genome Atlas
(TCGA), where conventional deep learning models are prone to overfitting and
unstable predictions. To learn latent patterns through reconstruction, the
HCRHE architecture consists of a primary-prediction base multilayer perceptron
(MLP) together with a residual learning path constructed from an autoencoder
and a residual MLP. A novel confidence-based fusion technique dynamically
weights the base and residual predictions according to model certainty,
enabling adaptive decision-making. In addition, a high-confidence filtering
process retains only forecasts with high confidence margins, maximizing
reliability with minimal coverage loss. An accuracy-optimized threshold
selection strategy is also provided to further enhance classification
performance. Extensive comparative experiments are conducted against
state-of-the-art deep learning baselines, including CNN, autoencoder-based
classifiers, Dense DropConnect, residual CNN frameworks, and basic MLP (Adam
and SGD). The proposed HCRHE achieves a markedly better accuracy of 98.7% than
all other methods. These results demonstrate the effectiveness of
confidence-aware residual fusion, showing consistent improvement over the best
baseline CNN model. Overall, the proposed framework is highly promising for
clinical decision-support systems, offering a credible, intuitive, and
high-confidence classification paradigm for high-dimensional biomedical data. |
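The confidence-based fusion and high-confidence filtering steps the abstract describes can be sketched as follows. The certainty measure (distance of a predicted probability from 0.5), the weighting scheme, and the threshold are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch: blend base and residual predicted probabilities with weights
# proportional to each model's certainty, then keep only predictions whose
# confidence margin clears a threshold (all forms are assumed, for
# illustration).
def fuse(p_base: float, p_res: float) -> float:
    c_base = abs(p_base - 0.5)  # certainty of the base MLP prediction
    c_res = abs(p_res - 0.5)    # certainty of the residual-path prediction
    total = c_base + c_res
    if total == 0.0:            # both models maximally uncertain
        return 0.5
    return (c_base * p_base + c_res * p_res) / total

def high_confidence_filter(probs, threshold: float = 0.3):
    # Retain predictions whose margin |p - 0.5| meets the threshold.
    return [p for p in probs if abs(p - 0.5) >= threshold]

fused = fuse(0.9, 0.6)                           # leans toward the surer model
kept = high_confidence_filter([fused, 0.55, 0.05])
```

The dynamic weighting means a very certain base prediction dominates an uncertain residual one and vice versa, while the filter trades a small loss of coverage for higher reliability, matching the behavior the abstract claims.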
|
Keywords: |
High-Confidence Learning, Residual Hybrid Ensemble, Deep Neural Networks,
Autoencoder, DNA Methylation, Biomedical Classification, TCGA, Confidence-Aware
Fusion, Cancer Prediction |
|
DOI: |
https://doi.org/10.5281/zenodo.19367048 |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st March 2026 -- Vol. 104. No. 6-- 2026 |
|
Full
Text |
|
|
Title: |
SCARLET MACAW-INSPIRED DEEP BELIEF NETWORK FOR EARLY, ACCURATE, AND
INTERPRETABLE PREDICTION OF GESTATIONAL DIABETES MELLITUS |
|
Author: |
D.SHOBANA, V VINODHINI |
|
Abstract: |
Gestational Diabetes Mellitus (GDM) is a prevalent pregnancy-related condition
that contributes significantly to maternal and neonatal health complications,
such as pre-eclampsia, neonatal hypoglycemia, and long-term metabolic disorders.
Existing diagnostic methods, such as the Oral Glucose Tolerance Test (OGTT), are
invasive, time-consuming, and not easily scalable for early detection. This
research introduces the Scarlet Macaw-Inspired Deep Belief Network (SM-DBN), a
deep learning-based model designed to predict GDM risk early in pregnancy. By
incorporating bio-inspired optimization techniques such as adaptive foraging and
territorial behavior, SM-DBN addresses challenges like class imbalance, missing
data, and the dynamic nature of pregnancy-related risks. The model achieved a
classification accuracy of 81.277%, showing significant promise in early-stage
GDM detection. SM-DBN integrates temporal pattern learning to capture
trimester-specific risk variations, and explainable decision support ensures
transparency for clinicians. The model’s ability to handle incomplete and noisy
clinical data, adapt dynamically to evolving risks, and offer interpretable
predictions makes it a highly effective tool. This approach provides a scalable,
interpretable, and non-invasive solution for GDM risk assessment, reducing the
reliance on traditional diagnostic tests and enhancing maternal-fetal health
outcomes across diverse healthcare settings. |
|
Keywords: |
Healthcare, Gestational Diabetes Mellitus, Prediction, Bio-Inspired
Optimization, Deep Belief Network, Diabetes. |
|
DOI: |
https://doi.org/10.5281/zenodo.19367062 |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st March 2026 -- Vol. 104. No. 6-- 2026 |
|
Full
Text |
|
|
Title: |
DEEP REINFORCEMENT LEARNING FOR PROACTIVE CYBERSECURITY THREAT DETECTION |
|
Author: |
DR. NAIM SHAIKH, DR. VIVEK VEERAIAH, DR. A.PANKAJAM, DR. TARUN DALAL, DR.
MAMATHA G, DR. G. NAGESWARA RAO, DR. VINOD MOTIRAM RATHOD, DR. TRIPTI SHARMA |
|
Abstract: |
The proliferation of interconnected ecosystems, encompassing cloud
infrastructures, IoT networks, and 5G platforms, has facilitated the execution
of cyberattacks. Consequently, systems are increasingly susceptible to
intricate, adaptive attacks. Reactive security measures, such as signature-based
IDS and conventional machine learning models, are ineffective until an attack
has already occurred. This deficiency stems from their inability to predict and
mitigate threats characterised by aggressive evasion, concept drift, and
evolving assault methodologies. Furthermore, the expansion of digitally linked
systems has exacerbated vulnerabilities to sophisticated cyberattacks.
Traditional cybersecurity protocols typically identify threats only
post-incident. In the field of cybersecurity, a shift towards proactive and
adaptive approaches is necessary due to AI's limitations, even if AI enhances
pattern recognition. In contrast to conventional reactive methods, this research
demonstrates the potential of DRL to build a proactive system for danger
identification that can adapt in real-time to new threats. To tackle these
issues, we provide DRL-PRoTECT, a new proactive cybersecurity approach that
combines deep reinforcement learning with existing methods. The system is able
to autonomously detect and mitigate threats in real-time thanks to its
hierarchical DRL decision engine, predictive anomaly scoring, and
self-supervised representation learning. Results on enterprise-scale systems,
NSL-KDD, and UNSW-NB15 show that DRL-PRoTECT outperforms traditional IDS, ML/DL
benchmarks, and virtual testbeds. With an F1 score of 94.5%, a false positive
rate of 2.8%, and a recall rate of 93.7%, the framework accomplished its goals.
The technology also reduced the time needed to identify threats by half. Its
ability to adapt allowed it to keep working well despite changing priorities,
new types of attacks, and attempts to bypass it. Analysts found that including a
human-in-the-loop orchestrator made it easier and less demanding to stay alert.
This led to better understanding, compliance, and trust. The results suggest
that DRL-PRoTECT could help move cybersecurity defences from a detection-focused
approach to a more proactive, self-sufficient, and resilient one. In response to
changing threats, this article presents a proactive and scalable cybersecurity
model that automatically shifts from detection to defense. |
|
Keywords: |
DRL; Proactive Cybersecurity; Threat Detection; IDS; Anomaly Detection; Attacks;
Concept Drift; Federated Learning; Blockchain Security. |
|
DOI: |
https://doi.org/10.5281/zenodo.19367074 |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st March 2026 -- Vol. 104. No. 6-- 2026 |
|
Full
Text |
|
|
Title: |
LEGAL PROTECTION IN ALGORITHMIC CONSUMER MARKETS: A SYSTEMIC APPROACH TO
PLATFORM LIABILITY AND DIGITAL CONTRACTS |
|
Author: |
OLEKSANDR DONCHENKO, OLENA OLSHANSKA, IHOR OKUNIEV, OLEXANDER VERHOLIAS, SERHII
VASYLYNA |
|
Abstract: |
The relevance of this study stems from the transformational changes in consumer
legal relations in the context of digitalization and the development of the
platform economy, which are accompanied by an increase in the structural
vulnerability of consumers and a decrease in the effectiveness of traditional
legal protection mechanisms. The purpose of the study is to substantiate
directions for improving legal mechanisms for consumer protection, and the
subject of the study is defined as consumer legal relations arising in the
digital and platform environment. The theoretical basis of the study is the
provisions of modern consumer law doctrine, the concept of structural asymmetry
in digital markets, behavioral approaches, and the development of European
regulatory law, with the simultaneous application of general scientific and
specific legal methods of analysis, synthesis, comparison, and generalization.
The study demonstrated that digital platforms play an active role in shaping
consumer choice, leading to a transformation of the right to informed choice,
fair contract terms, and effective legal remedies. It has been demonstrated that
personalized commercial practices and algorithmic ranking mechanisms undermine
consumer autonomy and require preventive legal regulation. The feasibility of a
functional approach to the responsibility of digital platforms and the
development of collective mechanisms for consumer protection was substantiated.
The practical significance of the results lies in their potential use for
improving national legislation, harmonizing it with European Union legislation,
and forming effective state policy for consumer protection. |
|
Keywords: |
Consumer Protection, Digitalization, Platform Economy, Algorithmic Influence,
Consumer Law, Collective Protection, Collective Consumer Interests, EU
Alignment, Invalidity, Types of Invalidity, Consumer Interests; EU Alignment,
Invalidity, Types of Invalidity |
|
DOI: |
https://doi.org/10.5281/zenodo.19367092 |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st March 2026 -- Vol. 104. No. 6-- 2026 |
|
Full
Text |
|
|
Title: |
MO-GRPC: AN OPTIMIZED FRAMEWORK FOR ENHANCING PERFORMANCE, SCALABILITY, AND
RELIABILITY IN WEB SERVICES |
|
Author: |
J.GNANABHARATHI, K.VADIVAZHAGAN |
|
Abstract: |
The MO-gRPC framework has been proposed to enhance the performance, scalability,
and reliability of web services in dynamic and resource-constrained network
environments. Web services often face challenges such as high latency,
inefficient resource utilization, and reduced throughput under increasing
workloads, which impact their overall efficiency and adaptability. The purpose
of MO-gRPC is to address these challenges by incorporating optimized mechanisms
that improve communication precision, adaptive load balancing, and fault
resilience. Through rigorous evaluation, MO-gRPC has demonstrated significant
improvements in throughput, packet delivery reliability, and load balancing
efficiency compared to conventional approaches. The proposed framework plays a
pivotal role in web services by ensuring efficient data transmission, dynamic
resource allocation, and robust performance under varying network
conditions. MO-gRPC fills critical gaps by offering a scalable,
resource-efficient solution that reduces delays and maximizes bandwidth
utilization, making it a promising approach for optimizing web service
frameworks in diverse and evolving network scenarios. |
|
Keywords: |
Mantis Optimization, Web Services Optimization, Multi-Channel Communication,
Adaptive Load Balancing, Fault Tolerance, Resource Allocation Efficiency. |
|
DOI: |
https://doi.org/10.5281/zenodo.19367115 |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st March 2026 -- Vol. 104. No. 6-- 2026 |
|
Full
Text |
|
|
Title: |
SUSTAINABLE AI FRAMEWORK FOR FAULT DETECTION IN 6G-INTEGRATED INDUSTRY 4.0 DATA
ECOSYSTEMS |
|
Author: |
RALLA SURESH, PANTHANGI VENKATESWARA RAO, K. VENKATA SUBBA REDDY, Z. SUNITHA
BAI, PARASA KONDALA RAO, P. LAKSHMI PRASANNA, B. VARAPRASAD RAO |
|
Abstract: |
The advent of 6G and Industry 4.0 technologies has revolutionized industrial
automation, connectivity, and data processing. With the growing complexity of
heterogeneous data environments in these domains, detecting faults in real-time
has become increasingly challenging. This paper proposes a sustainable deep
learning framework that integrates advanced neural networks with
resource-efficient processing techniques for fault detection in 6G-enabled
Industry 4.0 environments. The framework leverages data from various sources,
including IoT devices, sensors, and industrial machines, ensuring high accuracy,
scalability, and energy efficiency. A hybrid deep learning model combining
Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks
is employed to capture both spatial and temporal data patterns. The framework is
designed to optimize resource allocation while maintaining fault detection
performance. Simulation results demonstrate the efficacy of the proposed
approach, highlighting its potential to enhance fault management in smart
industrial systems. |
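The spatial-then-temporal pattern this abstract describes (CNNs capturing spatial structure in sensor data, LSTMs capturing temporal patterns) can be illustrated with a minimal pure-Python sketch. The kernel, decay factor, and fault score below are toy stand-ins, not the authors' model, which would be built in a deep learning framework.

```python
# Toy sketch of the hybrid pattern: a 1-D convolution extracts local
# (spatial) features from a sensor reading, then a simple recurrent pass
# summarizes them over time, standing in for the LSTM stage.

def conv1d(signal, kernel):
    """Valid 1-D convolution: one local feature per window position."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def recurrent_sum(features, decay=0.5):
    """Toy stand-in for an LSTM: exponentially decayed running state."""
    state = 0.0
    for f in features:
        state = decay * state + f
    return state

reading = [0.0, 1.0, 0.0, 5.0, 0.0]   # one sensor channel, 5 time steps
edges = conv1d(reading, [1.0, -1.0])  # spatial feature: first differences
score = recurrent_sum(edges)          # temporal summary for fault scoring
print(score)
```

A real pipeline would replace both functions with trained `Conv` and `LSTM` layers and threshold the score to flag a fault.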
|
Keywords: |
6G, Industry 4.0, Deep learning, Fault detection, Convolutional Neural Networks,
Long Short-Term Memory, Sustainability, IoT |
|
DOI: |
https://doi.org/10.5281/zenodo.19367146 |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st March 2026 -- Vol. 104, No. 6 -- 2026 |
|
Full
Text |
|
|
Title: |
SECURING THE FUTURE OF WSNS: A HYBRID FEDERATED AND DEEP LEARNING APPROACH TO
FAULT DETECTION AND THREAT MITIGATION |
|
Author: |
DR. R. SARAVANAKUMAR, DR. B. NARMADA, DR. KEERTHI KETHINENI, VENKATA BALA
ANNAPURNA P, DR. RAKSHITHA KIRAN P, SHAIK JILANI BASHA, DR. N. SATHEESH |
|
Abstract: |
Intelligent computing is increasingly intertwined with Wireless Sensor Networks
(WSNs) used for fault detection and system reliability assurance. However,
traditional centralized deep learning (DL) models remain limited in
scalability, suffer from high communication overhead, and raise privacy
concerns. To overcome these problems, this work proposes a Hybrid Federated
Deep Learning (HFDL) model, which merges federated learning (FL) with
distributed DL models to enable secure, energy-efficient, and reliable fault
detection in large-scale WSNs. The approach was evaluated in a simulated setup
of 500 sensor nodes under various fault conditions (data loss, node failure,
communication errors). At the edge nodes, the model comprises CNN- and
LSTM-based local models, and the global model is updated through federated
aggregation of local updates, so no raw data is shared. HFDL was compared with
common machine learning baselines such as Support Vector Machine (SVM),
k-Nearest Neighbors (kNN), standalone DL, and FL alone. The results indicate
that HFDL achieves an average fault detection rate of 96.8%, a 22% reduction
in latency, and 18% lower energy consumption than existing methods. These
outcomes show that the proposed model improves not only computational
efficiency but also data privacy, and can be deployed in next-generation
intelligent sensor systems. |
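The federated update loop this abstract describes (local training at edge nodes, server-side averaging of parameter updates, no raw sensor data exchanged) reduces to the following toy sketch; the learning rate, gradients, and two-parameter model are illustrative assumptions, not the paper's configuration.

```python
# Minimal federated-averaging sketch: edge nodes share only model
# parameters, never raw readings, and the server averages them.

def local_update(weights, gradients, lr=0.1):
    """One local gradient step at an edge node (placeholder for CNN/LSTM training)."""
    return [w - lr * g for w, g in zip(weights, gradients)]

def federated_average(client_weights):
    """Server-side aggregation: element-wise mean of client parameter vectors."""
    n = len(client_weights)
    return [sum(ws) / n for ws in zip(*client_weights)]

# Example: three edge nodes refine a shared 2-parameter model.
global_w = [0.0, 0.0]
client_grads = [[1.0, -2.0], [3.0, 0.0], [2.0, 2.0]]
local_models = [local_update(global_w, g) for g in client_grads]
global_w = federated_average(local_models)
print(global_w)  # roughly [-0.2, 0.0]
```

Privacy comes from the fact that only `local_models` (parameters) cross the network; the sensor data behind `client_grads` stays on each node.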
|
Keywords: |
Wireless Sensor Networks (WSNs), Federated Learning, Deep Learning, Fault
Detection, Anomaly Detection, Security. |
|
DOI: |
https://doi.org/10.5281/zenodo.19367160 |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st March 2026 -- Vol. 104, No. 6 -- 2026 |
|
Full
Text |
|
|
Title: |
AI CHATBOTS IN CONVERSATIONAL COMMERCE: INVESTIGATING THE ROLE OF PERCEIVED
SERENDIPITY AND SIMILARITY IN CUSTOMER SATISFACTION WITHIN INDIAN E-RETAIL |
|
Author: |
DR. P ARCHANA, DR. U BHOJANNA, DR. G V M SHARMA, DR. MANJUNATH NAGABHUSHAN |
|
Abstract: |
AI-driven chatbots play an increasingly important role in e-retail, yet prior
research has mainly focused on their functional performance, such as speed and
accuracy, with limited attention to the emotional and experiential mechanisms
through which they influence consumer behavior. This study examines how AI
chatbot interactions generate perceived serendipity and perceived similarity and
how these perceptions enhance customer satisfaction in India’s online cosmetics
retail sector. A quantitative survey was conducted with 588 Flipkart users who
interacted with chatbots while browsing or purchasing cosmetic products. Guided
by Social Presence Theory and Affective Response Theory, the conceptual model
was tested using Structural Equation Modelling (SEM) through SmartPLS 4.0. The
results show that chatbot service quality significantly improves perceived
serendipity and perceived similarity, and these constructs fully mediate the
relationship between service quality and customer satisfaction. The findings
indicate that satisfaction in conversational commerce is not only a cognitive
evaluation of utility but also an emotional outcome shaped by unexpected
discovery and human-like interaction, highlighting the importance of affective
design in AI-enabled retail systems. |
|
Keywords: |
AI Chatbots, Perceived Serendipity, Perceived Similarity, Customer Satisfaction,
Conversational Commerce, Flipkart, E-Retail |
|
DOI: |
https://doi.org/10.5281/zenodo.19367183 |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st March 2026 -- Vol. 104, No. 6 -- 2026 |
|
Full
Text |
|
|
|