Submit Paper / Call for Papers
The journal receives papers on a continuous basis and considers articles
from a wide range of Information Technology disciplines, from basic research
to the most innovative technologies. Please submit your papers electronically
through our submission system at http://jatit.org/submit_paper.php in MS Word,
PDF, or a compatible format so that they may be evaluated for publication in
the upcoming issue. This journal uses a blinded review process; please include
all personally identifiable information in the manuscript before submitting it
for review, and we will remove the necessary information on our side.
Submissions to JATIT should be full research / review papers (properly
indicated below the main title).
|
|
|
Journal of Theoretical and Applied Information Technology
April 2026 | Vol. 104 No. 7 |
|
Title: |
THE ROLE OF DIGITAL TRANSFORMATION AUDIT IN MANAGING DIGITAL PAYMENT VALIDATION
COMPLEXITY AND ITS IMPLICATIONS FOR AUDIT QUALITY |
|
Author: |
AUFA SYAHDURANI, AUDREY JADE SUKANDI, BAMBANG LEO HANDOKO |
|
Abstract: |
This study analyzes the role of digital transformation audit in managing the
complexity of digital payment validation and its implications for external audit
quality. With the growing use of digital transactions such as QRIS in the
Society 5.0 era, auditors face challenges validating real-time, large-scale
data. A digital transformation audit leveraging AI, Big Data Analytics, Machine
Learning, and other technologies is proposed to enhance efficiency and accuracy.
Using an explanatory quantitative approach based on the DeLone and McLean IS
Success Model, data were collected from external auditors in Indonesia and
analyzed with PLS-SEM. The findings revealed that System Quality, Information
Quality, and the Complexity of Digital Payment Validation exerted significant
direct effects on Audit Quality. In contrast, the mediation role of complexity
was not supported. These results indicated that technological quality
contributes to audit quality primarily through direct pathways, underscoring the
importance of improvements in system performance and information accuracy for
strengthening external audit outcomes in digital transaction environments. |
|
Keywords: |
Digital Transformation Audit, External Audit, Digital Payment, Complexity
Validation, Audit Quality |
|
DOI: |
https://doi.org/10.5281/zenodo.19593224 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th April 2026 -- Vol. 104. No. 7-- 2026 |
|
Full Text |
|
|
Title: |
DOMAIN-INVARIANT REPRESENTATION LEARNING FOR GENERALIZABLE TEXT MINING ACROSS
MULTIPLE DOMAINS |
|
Author: |
SRINADH SWAMY MAJETI, BOMMA RAMAKRISHNA, PEDDADA NAGAMANI, MAREPALLI RADHA, CH.
V. RAVI SANKAR, S. JAYANTH, LAKSHMI PRASANNA BYRAPUNENI, GUNTI SURENDRA,
MEDIKONDA ASHA KIRAN, MANYAM THAILE, RAMESH BABU PITTALA |
|
Abstract: |
Text mining models often struggle to generalize across domains due to
domain-specific linguistic and contextual biases. This limitation significantly
affects real-world deployment where data distributions continuously shift. In
this work, we hypothesize that explicitly learning domain-invariant
representations can substantially improve cross-domain robustness without
requiring target-domain fine-tuning. To address this, we propose a
Domain-Invariant Representation Learning (DIRL) framework that integrates
shared–private representation decomposition, adversarial domain confusion, and
contrastive semantic alignment within a transformer-based architecture. The
proposed method is evaluated on Amazon Reviews, MDSD, and 20 Newsgroups under
cross-domain and unseen-domain generalization settings. Experimental results
demonstrate consistent improvements of 4–9% in Macro-F1 over strong baselines,
along with a significant reduction in domain generalization gap. These findings
confirm that enforcing domain invariance at the representation level enhances
scalability, robustness, and real-world applicability of text mining systems. |
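The shared–private decomposition named in the abstract is commonly trained with an orthogonality penalty that pushes the shared and private feature subspaces apart. A minimal sketch of that term (the function name and flat-vector encoding are illustrative assumptions, not the paper's actual implementation):

```python
def orthogonality_penalty(shared, private):
    """Squared inner product of a shared and a private feature vector.

    Driving this toward zero encourages the shared encoder to capture only
    domain-invariant information, leaving domain-specific cues to the
    private encoder.
    """
    return sum(s * p for s, p in zip(shared, private)) ** 2
```

In a full DIRL-style model, a term like this would be added to the task loss alongside the adversarial domain-confusion and contrastive alignment objectives.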
|
Keywords: |
Domain-Invariant Representation Learning; Text Mining; Cross-Domain
Generalization; Multi-Domain Learning; Transfer Learning; Transformer-Based
Models; Adversarial Learning; Contrastive Representation Learning; Document
Classification; Sentiment Analysis |
|
DOI: |
https://doi.org/10.5281/zenodo.19593257 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th April 2026 -- Vol. 104. No. 7-- 2026 |
|
Full Text |
|
|
Title: |
TEMPORAL-AWARE DATA AUGMENTATION VIA VARIATIONAL MODE DECOMPOSITION AND
CONDITIONAL GANS FOR EEG-BASED SEIZURE DETECTION |
|
Author: |
MR. KONDANNA KANAMANENI, DR. VENKATA RAJU K |
|
Abstract: |
Automatic detection of epileptic seizures from electroencephalogram (EEG)
signals remains an essential challenge for automated clinical diagnostic
systems. Class imbalance and the scarcity of labeled seizure data seriously
hinder the development of effective deep learning models. This paper introduces
a temporal-aware data augmentation system that fuses Variational Mode
Decomposition (VMD) and Conditional Generative Adversarial Networks (cGANs) to
produce realistic synthetic EEG seizure data while maintaining important
temporal and spectral features. The VMD-cGAN model decomposes EEG signals into
intrinsic mode functions (IMFs) that represent various frequency components,
and the generator learns to reproduce physiologically realistic seizure
patterns. We assess our method on the CHB-MIT scalp EEG database, showing that
augmented training data significantly enhances seizure detection rates. Our
experimental findings indicate 97.84% accuracy, 96.73% sensitivity, and 98.52%
specificity, 3–5% higher than state-of-the-art techniques. The framework
addresses the lack of data while preserving the temporal relationships and
morphological properties required to properly identify seizures. |
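One way to picture mode-level augmentation of the kind described above: once VMD has split each recording into per-band modes, a synthetic sample can recombine modes from two real seizure recordings so that within-band temporal structure is preserved. A toy stdlib sketch (the random band-swap rule is an illustrative assumption, not the paper's cGAN generator):

```python
import random

def mix_modes(modes_a, modes_b, seed=0):
    """Recombine per-band modes (e.g., VMD IMFs) from two recordings.

    Each band is taken wholesale from one of the two source recordings,
    then the chosen bands are summed back into one synthetic signal, so
    the temporal structure inside every band stays intact.
    """
    rng = random.Random(seed)
    chosen = [a if rng.random() < 0.5 else b for a, b in zip(modes_a, modes_b)]
    return [sum(vals) for vals in zip(*chosen)]
```

Mixing a recording with itself reconstructs the original signal exactly, which is a useful sanity check on any mode-recombination scheme.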
|
Keywords: |
Electroencephalogram, Seizure Detection, Data Augmentation, Variational Mode
Decomposition, Conditional Generative Adversarial Networks |
|
DOI: |
https://doi.org/10.5281/zenodo.19593288 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th April 2026 -- Vol. 104. No. 7-- 2026 |
|
Full Text |
|
|
Title: |
SCALABLE PARKINSON’S DISEASE PREDICTION USING MATHEMATICAL MODELING, HILBERT
TRANSFORMS, AND TRANSFORMER-BASED DEEP LEARNING |
|
Author: |
THALAPATHIRAJ.S, VIVEKANANDAM B, RAJENDRA KUMAR TRIPATHI |
|
Abstract: |
Parkinson’s disease (PD) is a progressive neurological disorder necessitating
early and precise diagnosis for effective therapy. However, traditional machine
learning and convolutional neural network (CNN) methods generally struggle on
heterogeneous biomedical data: they scale poorly, generalize poorly, and are
hard to interpret. To tackle these issues, this research presents a scalable
hybrid system that combines mathematical modeling, Hilbert transform-based
spatial embedding, and transformer-based deep learning architectures for
predicting Parkinson’s disease. The proposed Hilbert-based embedding adds
biologically inspired spatial correlations that preserve structural information
and make features more stable. To efficiently capture both local and global
dependencies, these enriched features are then processed by advanced
transformer architectures, including the Swin Transformer and Vision
Transformer (ViT). Testing the proposed framework on multimodal datasets
comprising spiral drawings, wave patterns, and functional MRI (fMRI) images
shows higher accuracy, precision, and recall than traditional CNN and machine
learning models. The Swin Transformer with Hilbert embedding performed best,
with an accuracy of 97.96%, demonstrating stronger generalization and
robustness. The findings show that the proposed mathematically grounded
framework offers a scalable, interpretable, and clinically significant approach
for the early prediction of Parkinson’s disease. |
|
Keywords: |
Mathematical Modeling, Machine Learning, Mortality Diseases, Parkinson’s
Disease, Hilbert Matrix. |
|
DOI: |
https://doi.org/10.5281/zenodo.19593314 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th April 2026 -- Vol. 104. No. 7-- 2026 |
|
Full Text |
|
|
Title: |
STRUCTURAL DEFENSE-ORIENTED GCR-SCAPS METHOD WITH SPARSE CAPSULE ENCODING AND
CONSISTENCY-BASED LEARNING |
|
Author: |
HEMASHREE P, N. VALLIAMMAL |
|
Abstract: |
Adversarial perturbations compromise the structural reliability of remote
sensing image classification, challenging the stability of decision-making
systems in security-critical contexts. The proposed GCR-SCaps model introduces a
Geometric Consistency Reinforced Sparse Capsule Network tailored to counter
structural attacks through object-aware representations. Initial convolutional
layers extract shallow spatial cues, which are then transformed into capsule
vectors with enforced sparsity to promote discriminative feature isolation.
Dynamic routing captures inter-capsule agreement, while final class capsules
preserve semantic integrity. A parallel stream generates geometrically altered
inputs to compute a consistency loss, aligning original and transformed
predictions. This dual-path strategy ensures robustness against geometric
distortions while maintaining classification fidelity. The joint optimization of
structural margin and geometric consistency losses strengthens spatial
reasoning, minimizing confusion caused by adversarial interference. GCR-SCaps
establishes a structurally grounded defense architecture that enhances
resilience, preserves spatial coherence, and ensures dependable classification
performance across diverse aerial scenarios prone to manipulation or ambiguity. |
|
Keywords: |
Remote Sensing Classification, Capsule Networks, Geometric Consistency,
Adversarial Defense, Sparse Routing, Structural Robustness |
|
DOI: |
https://doi.org/10.5281/zenodo.19593327 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th April 2026 -- Vol. 104. No. 7-- 2026 |
|
Full Text |
|
|
Title: |
AN INTEGRATED ANN FRAMEWORK USING XGBOOST AND LSTM FOR CONDITION-BASED
MAINTENANCE OF ROTATING EQUIPMENT |
|
Author: |
BONTHA SUSMITHA, PUTTI SRINIVASARAO |
|
Abstract: |
Condition-Based Maintenance (CBM) is fundamental to preserving reliability and
efficiency in industrial applications involving rotating machinery. The two
most widely used traditional maintenance approaches, reactive and preventive
maintenance, can cause needless downtime or failures. Existing research on CBM
has explored machine learning and deep learning models for fault diagnosis and
Remaining Useful Life (RUL) prediction, yet many of these models struggle with
high-dimensional sensor data and often fail to adequately model temporal
dependencies. This gap limits predictive accuracy and interpretability in
real-world industrial applications. To address it, this paper proposes an ANN
framework that combines eXtreme Gradient Boosting (XGBoost) and Long Short-Term
Memory (LSTM) networks for enhanced fault diagnosis and RUL prediction in
rotating machinery. To optimize the inputs of the LSTM model, the proposed
system uses XGBoost for robust feature selection and importance ranking;
XGBoost's high-dimensional data handling and overfitting control provide the
LSTM network with only the most relevant features. The LSTM model is well
suited to precise fault classification and RUL prediction since it naturally
captures temporal dependencies in sensor data. Combined, deep sequential
modeling and ensemble learning improve predictive performance by drawing on
each other's strengths. Experiments on vibration data collected from vibration
sensors demonstrate that the proposed XGBoost-LSTM model consistently
outperforms conventional ML and DL models on performance measures including
precision, recall, accuracy, F1-score, and mean absolute error (MAE). The work
contributes new knowledge by showing that coupling feature selection with
temporal modeling yields a scalable, interpretable, and high-performing CBM
solution. By providing a fast, scalable AI-driven framework that reduces
maintenance costs, improves operational safety, and minimizes downtime, this
work advances predictive maintenance. |
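The XGBoost-to-LSTM handoff described above reduces to two steps: keep only the top-ranked features, then slice the reduced sensor stream into fixed-length windows for the sequence model. A minimal sketch of those two steps (function names and shapes are illustrative assumptions):

```python
def top_k_features(importances, k):
    """Indices of the k highest-importance features (e.g., XGBoost gain scores)."""
    return sorted(range(len(importances)), key=lambda i: importances[i],
                  reverse=True)[:k]

def sliding_windows(series, window):
    """Overlapping fixed-length windows, the usual input shape for an LSTM."""
    return [series[i:i + window] for i in range(len(series) - window + 1)]
```

In a full pipeline, the importance scores would come from a fitted XGBoost model and each window would be paired with a fault label or RUL target.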
|
Keywords: |
Condition-Based Maintenance, Rotating Equipment, XGBoost, LSTM, Predictive
Maintenance, Fault Diagnosis, Remaining Useful Life. |
|
DOI: |
https://doi.org/10.5281/zenodo.19593352 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th April 2026 -- Vol. 104. No. 7-- 2026 |
|
Full Text |
|
|
Title: |
PRIVACY PRESERVING HIGH-DIMENSIONAL DISTRIBUTED LEARNING VIA CLIENT-SIDE DEEP
DENOISING SPARSE AUTOENCODERS AND SPLIT LEARNING |
|
Author: |
MR. LINGAM SUMAN, DR. S. VENKATA LAKSHMI |
|
Abstract: |
Privacy preservation (PP) in deep learning (DL) is becoming increasingly
important, particularly when working with high-dimensional datasets that pose
considerable computing hurdles. Traditional approaches frequently experience the
"curse of dimensionality," resulting in inefficiencies and increased privacy
threats. To address these issues, this study offers an innovative process that
combines Split Learning (SL) with Client-side Deep Denoising Sparse Autoencoders
(CDDSAEs), resulting in a more efficient and privacy-conscious solution. The
technique begins with attribute filtering, which uses a threshold-based
mechanism to keep just the most important data properties, hence lowering
dimensionality from the start. The data is then vertically sharded to
distribute computational burdens and let each client focus on a single data
shard. In the SL architecture, DDSAEs are used on the client side to compress
and anonymize data before transmission, guaranteeing that only low-dimensional,
PP representations are exchanged with the server. Performance investigation shows
that the suggested model reduces latency by 17.7%, runs faster by 40.3%, and has
a minimum information loss of 0.22, demonstrating its superiority over existing
techniques. |
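The preprocessing pipeline above (threshold-based attribute filtering followed by vertical sharding) can be sketched in a few lines; using a per-column relevance score as the filter criterion and round-robin column assignment are illustrative assumptions, not the paper's exact mechanism:

```python
def filter_attributes(rows, scores, threshold):
    """Keep only the columns whose relevance score exceeds the threshold."""
    keep = [j for j, s in enumerate(scores) if s > threshold]
    return [[row[j] for j in keep] for row in rows], keep

def vertical_shards(rows, n_clients):
    """Assign columns round-robin so each client holds one vertical shard."""
    cols = len(rows[0])
    return [[[row[j] for j in range(c, cols, n_clients)] for row in rows]
            for c in range(n_clients)]
```

Each client would then pass its shard through a local DDSAE before transmitting the compressed representation to the server.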
|
Keywords: |
Curse of Dimensionality, Split Learning, Attribute Filtering, Vertical Sharding,
Denoise. |
|
DOI: |
https://doi.org/10.5281/zenodo.19593374 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th April 2026 -- Vol. 104. No. 7-- 2026 |
|
Full Text |
|
|
Title: |
APPLICATION OF DIGITAL FORENSIC TECHNOLOGIES TO DETECT AND PRESERVE DIGITAL
TRACES IN PRE-COURT INVESTIGATIONS |
|
Author: |
VIACHESLAV KULIUSH, OLEKSANDR SHEVCHENKO, VLADAS TUMALAVIČIUS, VALERII
NIKITCHENKO, OLEKSANDR HRUZD |
|
Abstract: |
The article presents the results of a comprehensive study of the application of
digital forensic technologies for the detection and preservation of digital
traces in pre-trial investigation. The cases of Ukraine, Lithuania, and Latvia
were used in the study. The relevance of the study is determined by the growing
need for the implementation of modern digital forensic technologies capable of
ensuring reliable detection, fixation, and preservation of digital traces in
pre-trial investigation. The aim of the study is to assess the effectiveness of
forensic platforms (Autopsy, FTK, X-Ways, Magnet AXIOM), cryptographic
algorithms SHA-256 and SHA-3, as well as blockchain solutions to maintain the
continuity of the “chain of evidence” in three countries with different levels
of digital maturity. The object of the study comprised these forensic
platforms, hash algorithms, and blockchain solutions.
The research employed the following methods: experimental testing of forensic
tools on simulated datasets, performance assessment of hash algorithms and
blockchain logs, and comparative analysis of practices in Ukraine, Lithuania,
and Latvia. The results of the study revealed significant differences between
countries. In Ukraine, certified forensic tools are used in only 35% of units,
and the reliability of the “chain of evidence” does not exceed 0.72, which
reduces the level of trust in digital evidence in more than 40% of criminal
cases. In Lithuania, the level of certification reaches 85%, compliance with EU
directives and GDPR ensures the stability of procedures at the level of 0.91,
and the speed of digital data processing is 28% higher than in Ukraine. Latvia
occupies an intermediate position: the level of certification is 65%,
independent verification of digital evidence exceeds 75%, and the transparency
of blockchain logs is estimated at 0.87 with an average transaction delay of 250
ms. The academic novelty of the study is the comprehensive technical and legal
analysis of digital forensic technologies in three countries with different
levels of maturity of digital infrastructure and the development of a model of
optimal workflow for pre-trial investigation. |
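The "chain of evidence" continuity discussed above is typically maintained by hash-chaining custody events: each log entry's SHA-256 hash covers the previous entry's hash, so altering any earlier record invalidates every later hash. A minimal stdlib sketch (the event schema is a hypothetical example, not a standard format):

```python
import hashlib
import json

def sha256_hex(data: bytes) -> str:
    """Hex digest of SHA-256, the hash algorithm evaluated in the study."""
    return hashlib.sha256(data).hexdigest()

class EvidenceChain:
    """Hash-chained log of evidence-handling events (chain of custody)."""

    def __init__(self):
        self.entries = []

    def append(self, event: dict) -> None:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        payload = json.dumps(event, sort_keys=True)
        entry_hash = sha256_hex((prev_hash + payload).encode())
        self.entries.append({"event": event, "prev": prev_hash, "hash": entry_hash})

    def verify(self) -> bool:
        """Recompute every link; any tampered entry breaks the chain."""
        prev_hash = "0" * 64
        for e in self.entries:
            payload = json.dumps(e["event"], sort_keys=True)
            if e["prev"] != prev_hash or e["hash"] != sha256_hex((prev_hash + payload).encode()):
                return False
            prev_hash = e["hash"]
        return True
```

A blockchain-backed log, as tested in the study, adds distributed replication of these entries so no single party can silently rewrite the chain.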
|
Keywords: |
Digital Forensics, Electronic Evidence, Blockchain, SHA-256, SHA-3, Forensic
Platforms, Criminal Justice, Rule Of Law, Ukraine, Lithuania, Latvia |
|
DOI: |
https://doi.org/10.5281/zenodo.19593411 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th April 2026 -- Vol. 104. No. 7-- 2026 |
|
Full Text |
|
|
Title: |
TRANSFORMATION OF THE DEVELOPMENT STRATEGY OF HIGH-TECHNOLOGICAL PRODUCTION
SYSTEMS |
|
Author: |
SVITLANA FILYPPOVA, VOLODYMYR FILIPPOV, SVITLANA YERMAK, LIUBOV NIEKRASOVA,
OLEKSANDR BALAN, OLEKSIY SOLOHUB |
|
Abstract: |
The article examines the formation and development of high-tech production
systems in countries at different levels of economic development. The purpose
of the study is a comprehensive analysis of the
prerequisites, structure and results of the functioning of high-tech production
systems in countries with different levels of economic development based on a
system of indicators of innovative activity, human capital, high-tech production
and international trade in high-tech products, as well as substantiation of the
directions of transformation of strategies for the development of high-tech
production systems in modern conditions of global economic competition. The
methodological basis of the study is the methods of theoretical generalization,
statistical and comparative analysis, correlation analysis and conceptual
modeling. The empirical basis of the study was formed based on the analysis of
indicators of research and development costs, employment in knowledge-intensive
types of economic activity, the share of high-tech production, as well as
indicators of import, export and trade balance of high-tech products in Sweden,
Romania, Ukraine and the Republic of Moldova for 2020–2025. The study found
that countries with a high level of investment in scientific research and
development and a significant share of employment in knowledge-intensive sectors
demonstrate a more developed structure of high-tech production and higher
competitiveness in international markets for high-tech products. It was found
that Sweden is characterized by a balanced structure of high-tech trade and a
high level of innovative activity, while Romania is characterized by positive
dynamics of development of high-tech production, which gradually strengthens its
export potential. In contrast, Ukraine and the Republic of Moldova demonstrate
significant dependence on imports of high-tech products, which indicates limited
domestic production and innovation potential in the relevant segment. The
correlation analysis confirmed the presence of a close relationship between
investments in scientific research, development of human capital and the results
of functioning of high-tech production. Based on the integrated analysis, a
conceptual model of the transformation of the development strategy of high-tech
production systems was formed, which reflects the logical transition from the
formation of innovative resources to ensuring production results and
strengthening the country's competitiveness in international trade in high-tech
products. The practical effect of the study is the possibility of using the
obtained results to form an effective state policy to support innovative
development, stimulate investment in scientific research and the development of
high-tech production, as well as develop strategies to increase the
technological competitiveness of countries with transformational economies. |
|
Keywords: |
High-Tech Production, Innovative Development, Scientific Research And
Development, Knowledge-Intensive Industry, International Trade In High-Tech
Products, Economic Competitiveness. |
|
DOI: |
https://doi.org/10.5281/zenodo.19593436 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th April 2026 -- Vol. 104. No. 7-- 2026 |
|
Full Text |
|
|
Title: |
EFFECT OF EMBEDDING MODELS IN RAG SYSTEM FOR PEDAGOGICAL QUESTION GENERATION |
|
Author: |
JAYASHREE GANESHKUMAR, M. KRISHNAVENI |
|
Abstract: |
The need for cognitively aligned assessments has highlighted the shortcomings
of traditional question generation systems, which often fail to align with
pedagogical structures such as Bloom’s taxonomy and suffer from factual
hallucinations. This study develops an automated question generation system
that integrates a pedagogical framework using a Retrieval-Augmented Generation
(RAG) approach. The work uses three open-source embedding models (MiniLM-L6,
MiniLM-L12, and msmarco-distilbert) to encode educational content and retrieve
relevant information using FAISS indices. Retrieved content is then used by a
T5-based generator, guided by Bloom’s taxonomy-specific prompts, to produce
pedagogically aligned questions, which are evaluated using standard NLP metrics
and validated against a manually created ground-truth dataset. The study
provides a comparison of embedding models for pedagogically aligned question
generation within a RAG framework. Findings show that MiniLM-L12 outperforms
the other embedding models across all levels of Bloom’s taxonomy. The results
suggest that educators can use RAG-based question generation to reduce
assessment design workload while ensuring the generation of cognitively
appropriate questions. Future research will explore larger datasets,
fine-tuning strategies, and multi-domain scalability to further advance
pedagogically aligned automated assessment tools. This work contributes to
enhanced learning outcomes and broader educational impact. |
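FAISS handles the retrieval step at scale; underneath, it is nearest-neighbor search over embedding vectors. A toy stdlib sketch of that retrieval step (the two-dimensional vectors and passage names are invented for illustration, and real embeddings from MiniLM-style models have hundreds of dimensions):

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def retrieve(query_vec, corpus, k=2):
    """Return the k passages whose embeddings are most similar to the query."""
    ranked = sorted(corpus, key=lambda item: cosine(query_vec, item[1]),
                    reverse=True)
    return [text for text, _ in ranked[:k]]
```

The retrieved passages would then be placed into a Bloom's-level-specific prompt for the generator model.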
|
Keywords: |
Retrieval-Augmented Generation, Bloom’s Taxonomy, Educational Technology,
Automated Assessment, Question Generation. |
|
DOI: |
https://doi.org/10.5281/zenodo.19593463 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th April 2026 -- Vol. 104. No. 7-- 2026 |
|
Full Text |
|
|
Title: |
A SIMULATION-BASED OPTIMIZATION FRAMEWORK FOR BALANCING EFFICIENCY AND
INTRA-GROUP EQUITY IN HEALTHCARE RESOURCE ALLOCATION |
|
Author: |
AHLAM BAHARI, LINA ABOUELJINANE, MARIA LEBBAR |
|
Abstract: |
Purpose: Efficient resource allocation is fundamental to mitigating Emergency
Department (ED) congestion. However, global optimization often masks significant
service disparities across patient groups. This study investigates the complex
"Efficiency-Equity-Cost" trilemma by developing a dual-phase optimization
framework designed to minimize boarding times and budgetary expenditures while
explicitly enforcing intra-group equity across different patient severity
levels. Methodology: A Discrete Event Simulation (DES) model was developed to
capture the stochastic dynamics of patient flow across an ED and three
specialized Inpatient Units (IUs). The study employs a Simulation-Based
Optimization (SBO) approach using the ε-constraint method to identify
Pareto-optimal trade-offs between budget and efficiency. In the second phase, an
intra-group equity constraint was integrated to stabilize service performance
within each Emergency Severity Index (ESI) category. Findings: Results
indicate that systemic boarding reduction is primarily achieved by expanding
capacity in downstream IUs rather than the ED itself, identifying these units as
the primary structural bottlenecks. While efficiency-only optimization yielded
significant performance gains, the introduction of equity constraints revealed a
critical feasibility ceiling. However, a "sweet spot" was identified at a budget
of 150 units, where substantial improvements in equity are achieved with
negligible impacts on global system efficiency. Originality/Value: This
research contributes a robust decision-support tool that quantifies the "Price
of Fairness" in healthcare. By identifying the specific budget thresholds where
equity and efficiency converge, this work provides hospital administrators with
a theoretically grounded and practically viable strategy for ethically aligned
resource allocation in high-pressure stochastic environments. |
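The ε-constraint method named above turns the bi-objective problem into a family of single-objective ones: for each budget cap ε, minimize boarding time over the configurations that fit within the cap, and sweep ε to trace the Pareto front. A toy sketch over a discrete candidate set (the candidate encoding is an illustrative assumption; the paper scores candidates through DES simulation rather than a lookup):

```python
def epsilon_constraint(candidates, f_cost, f_boarding, epsilons):
    """For each epsilon, minimize boarding time subject to cost <= epsilon.

    Returns (epsilon, cost, boarding) triples tracing the budget-efficiency
    trade-off curve.
    """
    front = []
    for eps in epsilons:
        feasible = [c for c in candidates if f_cost(c) <= eps]
        if feasible:
            best = min(feasible, key=f_boarding)
            front.append((eps, f_cost(best), f_boarding(best)))
    return front
```

The "sweet spot" reported in the abstract corresponds to the ε value past which further budget buys little additional boarding-time reduction.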
|
Keywords: |
Emergency Department, Inpatient Unit, Simulation-Based Optimization,
Multi-Objective Approach, Boarding Time, Equity. |
|
DOI: |
https://doi.org/10.5281/zenodo.19593719 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th April 2026 -- Vol. 104. No. 7-- 2026 |
|
Full Text |
|
|
Title: |
RSNT-SSAE: A HYBRID DEEP LEARNING AND SPARSE FEATURE LEARNING FRAMEWORK FOR
ROBUST ORANGE FRUIT DISEASE DETECTION AND CLASSIFICATION |
|
Author: |
C. MAHALAKSHMI, Dr. S. SARAVANAN |
|
Abstract: |
Accurate and timely detection of fruit disease is essential for maintaining crop
quality and minimizing economic losses in citrus production. Orange fruits are
susceptible to various diseases that directly affect their surface appearance,
leading to reduced market value and increased post-harvest losses. Manual
inspection methods are often time-consuming, subjective, and impractical for
large-scale applications. To address these challenges, this paper presents an
automated framework for detecting and classifying orange fruit diseases based on
deep learning and machine learning techniques. The proposed model integrates a
ResNet50-based deep feature extraction approach with a Stacked Sparse
Auto-Encoder (SSAE) classifier, referred to as the RSNT-SSAE framework. Before
classification, orange fruit images undergo Wiener-filter-based noise removal
to suppress image noise, followed by Contrast Limited Adaptive Histogram
Equalization (CLAHE) to enhance visual contrast. Color-based segmentation
using RGB thresholding is employed to isolate disease-affected regions from
the fruit surface. Deep features extracted using the ResNet50 architecture are
then classified using the Stacked Sparse Auto-Encoder (SSAE) model to
distinguish between healthy and diseased fruit categories. Experimental
evaluation conducted on a publicly available orange fruit dataset demonstrates
the effectiveness of the proposed approach, achieving a classification accuracy
of 98.10%. The results indicate that the RSNT-SSAE framework provides robust
and reliable performance, making it a promising solution for automated orange
fruit disease diagnosis and precision agriculture applications. |
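The RGB-thresholding segmentation step above amounts to a per-channel band test on each pixel. A stdlib sketch (the threshold values in the test are invented for illustration, not the paper's calibrated bands):

```python
def rgb_threshold_mask(pixels, lo, hi):
    """Binary mask: 1 where every channel lies inside its [lo, hi] band.

    pixels is a 2D grid of (R, G, B) tuples; lo and hi are per-channel
    threshold triples delimiting the color range of interest.
    """
    return [[1 if all(lo[c] <= px[c] <= hi[c] for c in range(3)) else 0
             for px in row] for row in pixels]
```

The resulting mask would isolate candidate disease-affected regions before ResNet50 feature extraction.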
|
Keywords: |
Image processing, orange fruits, Deep learning, ResNet50, CLAHE, Machine
Learning. |
|
DOI: |
https://doi.org/10.5281/zenodo.19593742 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th April 2026 -- Vol. 104. No. 7-- 2026 |
|
Full Text |
|
|
Title: |
AN ADAPTIVE HYBRID META-LEARNING FRAMEWORK FOR CONTEXT-AWARE INVENTORY
FORECASTING AND OPTIMIZATION |
|
Author: |
MOHAMED ALI, THANAA MOHAMED, SAHAR EL TAYEB, MOHAMED ROUSHDY |
|
Abstract: |
Accurate inventory forecasting remains challenging in dynamic market conditions,
where demand patterns are shaped by external factors such as competitor pricing,
promotional events, and supply volatility. Traditional forecasting models often
treat algorithms as static, ignoring contextual adaptation and the integration
of forecasting with operational decision-making. This paper presents an adaptive
hybrid intelligent framework (AHIF) that dynamically integrates XGBoost,
LightGBM, and CatBoost within a learning-driven selection architecture, enabling
context-aware demand forecasting across different product categories, time
frames, and market conditions. Unlike existing approaches, AHIF unifies
predictive analytics with an adaptive economic order quantity (EOQ) model that
adjusts purchasing strategies. Empirical evaluation on a multi-sector granular
dataset shows strong predictive performance, achieving R² values between 0.985
and 0.987 with reduced prediction errors across multiple metrics, including MAE,
RMSE, and RMSLE. This framework effectively reduces stockouts and overstocking,
highlighting its practical impact on inventory management. The theoretical
contribution lies in extending hybrid gradient boosting strategies to a
generalizable and context-adaptive decision support framework that bridges the
gap between prediction accuracy and inventory optimization. The practical
contribution lies in providing a deployable system that enables companies to
make adaptive, data-driven purchasing decisions under uncertainty. |
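The adaptive economic order quantity component described above rests on the classic EOQ formula, Q* = sqrt(2DS/H), where D is demand over the period, S the fixed cost per order, and H the per-unit holding cost; the "adaptive" part simply re-evaluates it with each fresh demand forecast. A minimal sketch (parameter values in the test are illustrative):

```python
import math

def eoq(demand, order_cost, holding_cost):
    """Classic EOQ: the order size minimizing ordering plus holding cost."""
    return math.sqrt(2 * demand * order_cost / holding_cost)
```

Re-running this with each updated forecast from the ensemble (rather than a fixed annual demand) is what turns the static formula into an adaptive purchasing policy.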
|
Keywords: |
Inventory Management, Demand Forecasting, Machine Learning, XGBoost, LightGBM,
CatBoost. |
|
DOI: |
https://doi.org/10.5281/zenodo.19593756 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th April 2026 -- Vol. 104. No. 7-- 2026 |
|
Full Text |
|
|
Title: |
A TOPIC–SENTIMENT–INTEGRATED NLP FRAMEWORK FOR CTQ-BASED EARLY DETECTION IN THE
GLOBAL SHIPPING MARKET |
|
Author: |
KIM, DONGWON, KIM, YEONJOO |
|
Abstract: |
This study develops an integrated natural language processing (NLP) framework
for the early detection of Critical-to-Quality (CTQ) risks in the global
shipping market. The shipping industry has recently experienced heightened
uncertainty driven by geopolitical disruptions, climate-related constraints, and
persistent supply–demand imbalances. Under such conditions, conventional
quantitative indicators often fail to capture emerging quality risks that are
embedded in market narratives and stakeholder perceptions. This study addresses
this gap by transforming unstructured shipping-related text into actionable,
forward-looking quality risk indicators. The empirical analysis is based on 155
weekly maritime market reports published by the Korea Maritime Institute (KMI)
from January 2022 to June 2025. The proposed framework integrates sentiment
analysis, topic modeling, and quantitative forecasting. A domain-tuned sentiment
lexicon is constructed by extending the Loughran–McDonald financial dictionary
with shipping-specific terminology, enabling more accurate interpretation of
industry-contextual sentiment. Using ProdLDA-based topic modeling and expert
validation, six core CTQ dimensions are identified: freight rate stability,
schedule reliability, lead time, vessel utilization, equipment availability, and
eco-efficiency. Topic–sentiment relationships are quantified using pointwise
mutual information (PMI) and network centrality measures, and composite
CTQScores are derived by estimating topic-to-CTQ impact weights through
ElasticNet regression combined with Bayesian linear modeling. The predictive
performance of CTQScores is evaluated using ElasticNet and vector autoregression
(VAR) models to examine co-movement and lead–lag relationships with key
performance indicators, particularly the Shanghai Containerized Freight Index
(SCFI). The results demonstrate that CTQScores provide statistically significant
early signals of market instability. The integrated model achieves strong
explanatory power (R˛ ≈ 0.68) and high directional accuracy (hit ratio ≈ 0.74)
for one-week-ahead SCFI movements, outperforming traditional time-series
benchmarks. These findings highlight the value of domain-tuned NLP approaches in
enhancing early-warning systems and supporting proactive quality risk management
in the global shipping industry. |
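The topic–sentiment quantification step described in this abstract can be illustrated with a minimal pointwise mutual information (PMI) computation. This is only a sketch: the `obs` pairs and the topic/sentiment labels below are invented stand-ins for the report-level co-occurrences the paper derives from the KMI corpus.

```python
import math
from collections import Counter

def pmi(pairs, topic, sentiment):
    """PMI between a topic label and a sentiment label, estimated from
    co-occurrence counts over (topic, sentiment) observations
    (one pair per document or report section)."""
    n = len(pairs)
    joint = Counter(pairs)
    topics = Counter(t for t, _ in pairs)
    sents = Counter(s for _, s in pairs)
    p_joint = joint[(topic, sentiment)] / n
    p_t = topics[topic] / n
    p_s = sents[sentiment] / n
    if p_joint == 0:
        return float("-inf")  # the pair never co-occurs
    return math.log2(p_joint / (p_t * p_s))

# toy corpus: the freight-rate topic co-occurs mostly with negative sentiment
obs = [("freight", "neg"), ("freight", "neg"), ("freight", "pos"),
       ("schedule", "pos"), ("schedule", "neg"), ("schedule", "pos")]
score = pmi(obs, "freight", "neg")
```

A positive PMI indicates the topic and sentiment co-occur more often than chance, which is the raw signal the paper aggregates into CTQScores.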
|
Keywords: |
Critical-To-Quality (CTQ), Natural Language Processing (NLP), Sentiment
Analysis, Shipping Market Early Warning System, Topic Modeling. |
|
DOI: |
https://doi.org/10.5281/zenodo.19593790 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th April 2026 -- Vol. 104. No. 7-- 2026 |
|
Full
Text |
|
|
Title: |
FEDERATED INTELLIGENCE IN OPHTHALMOLOGY: PRIVACY-PRESERVING COLLABORATION FOR
MULTICENTER AMBLYOPIA MODEL DEVELOPMENT |
|
Author: |
JAYA LAKSHMI C, DR. N. SATHEESH, DR. M. KUMARASAN
|
Abstract: |
This paper introduces a federated deep-learning system to support
privacy-preserving, multicenter collaboration in the automated diagnosis and
prognosis of amblyopia. Its first objective is to create a standardized,
multimodal ophthalmic corpus fusing fundus photography, optical coherence
tomography (OCT), eye-tracking signals, visual evoked potentials (VEP), and
demographic data. Through generative adversarial network (GAN)-based
augmentation and harmonization, this data layer reduces class imbalance and
improves the generalizability of downstream models. The second objective
focuses on advanced feature representation through a hybrid Convolutional
Neural Network (CNN)-Transformer architecture. This architecture encodes
spatial and temporal relations across the imaging and behavioral modalities
simultaneously, and therefore forms more comprehensive multimodal embeddings
that sharpen disease discrimination. Explainable artificial intelligence (XAI)
methods such as Grad-CAM and SHAP are included in the tertiary objective to
enhance the interpretability and clinical transparency of diagnostic
predictions. Federated learning (FedAvg, FedProx) allows model optimization
across participating institutions without sharing their data, thereby
protecting patient privacy and supporting regulatory compliance. Experimental
validation on synthetic and real-world pediatric datasets shows higher
diagnostic accuracy (AUC > 0.95), strong cross-center generalizability, and
readable visual reasoning maps that aid clinical decision-making. The work
thus charts a course toward safe, transparent, and collaborative AI-driven
ophthalmic diagnostics, accelerating clinical implementation and the universal
transfer of amblyopia screening. |
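The FedAvg aggregation named in this abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: each "client" is reduced to a flat parameter vector, and the site names and sizes are hypothetical.

```python
def fed_avg(client_weights, client_sizes):
    """FedAvg: aggregate per-client parameter vectors into a global model
    by a weighted average proportional to each client's local dataset size.
    Only parameters travel between sites; raw patient data never does."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# three hypothetical ophthalmology centres with different data volumes
w_global = fed_avg([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]], [10, 10, 20])
```

The larger third centre pulls the global model toward its local parameters, which is exactly the size-weighting that distinguishes FedAvg from a plain mean.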
|
Keywords: |
Federated Learning, Amblyopia Diagnosis, Multimodal Deep Learning, Explainable
AI (XAI), Privacy-Preserving Collaboration, Ophthalmic Intelligence. |
|
DOI: |
https://doi.org/10.5281/zenodo.19593814 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th April 2026 -- Vol. 104. No. 7-- 2026 |
|
Full
Text |
|
|
Title: |
A SECURE AND IMPERCEPTIBLE IMAGE STEGANOGRAPHY SCHEME BASED ON LSB-NEW |
|
Author: |
R. GANESH, DR. S. THABASUKANNAN
|
Abstract: |
Image steganography plays a vital role in secure multimedia communication by
concealing sensitive information within digital images. Conventional least
significant bit (LSB) techniques provide high embedding capacity but often
suffer from visual distortion and weak resistance to analysis under increased
payload conditions. This paper presents an enhanced adaptive LSB-NEW technique
for secure image-in-image data hiding, integrating secret image encryption with
pixel-aware embedding [1]. The proposed method adaptively selects embedding
locations based on local intensity characteristics, thereby improving
imperceptibility while maintaining computational efficiency. The significance
of this work lies in its ability to achieve a practical balance between
security, embedding capacity, and visual quality without introducing high
computational complexity, making it suitable for real-world applications such as
secure image transmission and digital content protection. Experimental
evaluation using standard benchmark images demonstrates that the proposed
approach consistently achieves higher PSNR and SSIM values with lower MSE
compared to the conventional LSB method. Visual analysis further confirms
minimal perceptual distortion. These results indicate that the proposed LSB-NEW
technique offers an efficient and scalable solution for secure image-in-image
data hiding in modern communication systems. |
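The base operation the LSB-NEW scheme builds on can be shown with classic, non-adaptive LSB embedding. The sketch below covers only that base idea; the paper's intensity-aware location selection and secret-image encryption are not reproduced, and the cover values and bit string are invented.

```python
def embed_lsb(cover, bits):
    """Embed a bit string into the least-significant bits of cover pixel
    values. Each embedded bit changes a pixel by at most 1, which is why
    plain LSB already yields high PSNR at moderate payloads."""
    stego = list(cover)
    for i, b in enumerate(bits):
        stego[i] = (stego[i] & ~1) | int(b)
    return stego

def extract_lsb(stego, n_bits):
    """Recover the embedded bit string from the stego pixels."""
    return "".join(str(p & 1) for p in stego[:n_bits])

cover = [120, 121, 122, 123, 124, 125, 126, 127]
secret = "10110010"
stego = embed_lsb(cover, secret)
recovered = extract_lsb(stego, len(secret))
```

The at-most-1 pixel change explains the imperceptibility baseline against which the adaptive LSB-NEW variant is evaluated.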
|
Keywords: |
Image Steganography, LSB-NEW, Image-In-Image Hiding, PSNR, MSE, SSIM, Encryption |
|
DOI: |
https://doi.org/10.5281/zenodo.19593832 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th April 2026 -- Vol. 104. No. 7-- 2026 |
|
Full
Text |
|
|
Title: |
IDENTIFYING A TYPOLOGY OF MOROCCAN HEALTH PROVINCES IN URBAN AREAS BASED ON
MATERNAL AND CHILD HEALTH INDICATORS USING A CLUSTERING APPROACH |
|
Author: |
MERYEM CHAKKOUCH, HICHAM SADIKI, MEROUANE ERTEL, AZIZ MENGAD, SAID AMALI,
ABDELMALEK AZZOUZI |
|
Abstract: |
Maternal and child health remains a public health priority, particularly given
significant spatial disparities in healthcare accessibility and outcomes. It is
essential to clearly identify these differences in order to refine intervention
strategies and optimize resource allocation. In this article, we propose a
clustering approach to group and identify typologies of Moroccan health
provinces based on MCH indicators. We conducted a comparative analysis to
evaluate five dimensionality reduction (DR) techniques, including PCA, ICA, and
nonlinear methods (KPCA, LE, Isomap), combined with three clustering methods
(K-Means, agglomerative clustering, GMM) on Moroccan urban-area data
characterized by a high dimensionality, low sample size (HDLSS) structure on MCH
indicators. The model selection was guided by internal validation indicators
(Silhouette, Calinski-Harabasz, Davies-Bouldin). Laplacian Eigenmaps followed by
K-means achieved superior performance measures, reflecting the most robust and
consistent clustering structure. Three clusters of provinces were identified
in this comparative study: moderate-performing provinces characterized by the
presence of both public and private urban healthcare systems; high-performing
provinces with robust urban public services; and lower-performing provinces due
to rural dependence on urban facilities and resource constraints. The main
contribution of this study is to identify disparities in maternal and child
health across Moroccan provinces/prefectures using urban-area indicators and a
clustering-based approach. It thus makes it possible to propose a territorial
typology that can support public health interventions and improve the strategic
allocation of resources. |
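The internal validation step that guided model selection here can be illustrated with a pure-Python silhouette coefficient. This is a sketch under toy data: the four 2-D points stand in for provinces in a reduced indicator space, and real use would rely on a library implementation.

```python
import math

def silhouette(points, labels):
    """Mean silhouette coefficient over all points: (b - a) / max(a, b),
    where a is a point's mean intra-cluster distance and b its mean
    distance to the nearest other cluster. Values near 1 indicate a
    compact, well-separated clustering."""
    scores = []
    for i, (p, lab) in enumerate(zip(points, labels)):
        same = [math.dist(p, q) for j, (q, l) in enumerate(zip(points, labels))
                if l == lab and j != i]
        a = sum(same) / len(same) if same else 0.0
        b = min(
            sum(math.dist(p, q) for q, l in zip(points, labels) if l == other)
            / labels.count(other)
            for other in set(labels) if other != lab
        )
        scores.append((b - a) / max(a, b) if max(a, b) > 0 else 0.0)
    return sum(scores) / len(scores)

# two well-separated toy "province" clusters in a 2-D indicator space
pts = [(0, 0), (0, 1), (5, 5), (5, 6)]
labs = [0, 0, 1, 1]
s = silhouette(pts, labs)
```

Comparing such scores across DR/clustering combinations is the mechanism by which Laplacian Eigenmaps + K-means was selected in the study.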
|
Keywords: |
Clustering; PCA, ICA, KPCA, LE, Isomap, K-Means, Agglomerative Clustering, GMM,
MCH Indicators |
|
DOI: |
https://doi.org/10.5281/zenodo.19593857 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th April 2026 -- Vol. 104. No. 7-- 2026 |
|
Full
Text |
|
|
Title: |
TEXT-TO-SPEECH (TTS) ANALYSIS SYSTEM WITH CLIENT-SIDE PROCESSING |
|
Author: |
TASKEOW SRISOD, PRACHYANUN NILSOOK, SASITORN ISSARO, ORAPHAN AMNUAYSIN, THANANAN
AREEPONG, ORAWAN SAEUNG, THANI JINTASUTTISAK, THAMASAN SUWANROJ
|
Abstract: |
In today's digital age, text-to-speech (TTS) technology has become a crucial tool
for increasing accessibility and enhancing user experiences across platforms,
from devices for the visually impaired to smart assistants. Developing this
technology efficiently and quickly is essential. This article focuses on
developing a text-to-speech analysis and synthesis system using client-side
processing technology, an approach that enables TTS conversion to occur directly
on a user's web browser, thereby reducing server load and increasing response
speed. The work covers everything from the process of TTS, user interface (UI)
development, to Web Speech API implementation. Furthermore, to ensure the
quality of the synthesized voices, a systematic evaluation was conducted using
the internationally standardized Mean Opinion Score (MOS) for Thai voices from
Microsoft client-side TTS voices, namely Pattara and Kanya, to measure clarity,
naturalness, and fluidity. The results of this project not only serve as a
prototype for an effective TTS system, but also provide valuable insights for
the future development of synthetic voices that are more natural and closely
approximate human speech. |
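The MOS evaluation used here reduces to averaging 1–5 listener ratings. The sketch below adds a normal-approximation 95% confidence interval; the ratings are invented, and the study's actual listener panel and rating protocol are not reproduced.

```python
import statistics

def mos(ratings):
    """Mean Opinion Score: the average of 1-5 listener ratings for one
    voice/criterion, with a normal-approximation 95% confidence interval."""
    m = statistics.mean(ratings)
    if len(ratings) > 1:
        half = 1.96 * statistics.stdev(ratings) / len(ratings) ** 0.5
    else:
        half = 0.0
    return m, (m - half, m + half)

# hypothetical clarity ratings for one synthetic Thai voice
score, ci = mos([4, 5, 4, 4, 3, 5, 4, 4])
```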
|
Keywords: |
Text to Speech (TTS), Client-Side Processing, Natural Language Processing (NLP) |
|
DOI: |
https://doi.org/10.5281/zenodo.19593882 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th April 2026 -- Vol. 104. No. 7-- 2026 |
|
Full
Text |
|
|
Title: |
AN INTEGRATED REDUNDANT RELIABILITY OPTIMIZATION FRAMEWORK FOR COHERENT HYBRID
SYSTEMS USING LAGRANGE MULTIPLIER, HEURISTIC, AND INTEGER PROGRAMMING APPROACHES |
|
Author: |
SAI UMA SANKAR MANDAVILLI, SIDHAR AKIRI, SRINIVASA RAO VELAMPUDI, ARUN KUMAR
SARIPALLI, BHAVANI KAPU, RAMADEVI SURAPATI |
|
Abstract: |
The overarching purpose of reliability engineering is to guarantee, within a
certain time frame and set of operating conditions, the proper functioning of
systems and their components. According to reliability theory, strategically
integrating redundancy into coherent system topologies, while respecting
restrictions such as mass, cost, and space, improves system robustness. The
impact of these limitations (volume, weight, dimensions, and space) on the
improvement of system reliability is the primary emphasis of this study. The
inquiry focuses on the parts of a typical representative coherent hybrid
system; these parts are drawn from real coherent systems, where dimensions,
mass, and cost are critical to effective operation. The motor control unit,
internal combustion engine, and electric generator must all undergo thorough
reliability testing. To solve this problem, we study and construct an
integrated redundant reliability model within a coherent systems structure
using the Lagrange multiplier approach. This approach generates
continuous-valued results for the significant variables, including component
count, individual and stage reliabilities, and overall system reliability. The
work goes a step further by using heuristic methods and integer programming to
obtain integer-based solutions that are both realistic and practical.
Together, these strategies make the reliability analysis considerably more
accurate and useful. |
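The heuristic, integer-valued side of this redundancy problem can be sketched with a greedy allocation over a series system of parallel stages. This is an illustrative stand-in, not the paper's method: the three stage reliabilities, costs, and budget below are hypothetical, and the Lagrange-multiplier and integer-programming formulations are not reproduced.

```python
def system_reliability(stage_rel, allocation):
    """Reliability of a series system whose stage i carries allocation[i]
    identical parallel (redundant) components of reliability stage_rel[i]:
    R = prod_i (1 - (1 - r_i)^n_i)."""
    r = 1.0
    for ri, ni in zip(stage_rel, allocation):
        r *= 1.0 - (1.0 - ri) ** ni
    return r

def greedy_allocation(stage_rel, stage_cost, budget):
    """Heuristic integer allocation: start with one component per stage,
    then repeatedly add a redundant unit where the reliability gain per
    unit cost is largest, while the budget allows."""
    n = [1] * len(stage_rel)
    cost = sum(stage_cost)
    while True:
        best, best_gain = None, 0.0
        base = system_reliability(stage_rel, n)
        for i, c in enumerate(stage_cost):
            if cost + c > budget:
                continue
            n[i] += 1  # trial addition
            gain = (system_reliability(stage_rel, n) - base) / c
            n[i] -= 1  # undo trial
            if gain > best_gain:
                best, best_gain = i, gain
        if best is None:
            return n, system_reliability(stage_rel, n)
        n[best] += 1
        cost += stage_cost[best]

# hypothetical engine / generator / control-unit stages
alloc, rel = greedy_allocation([0.90, 0.85, 0.95], [4, 3, 5], budget=20)
```

The greedy result is integer-valued by construction, which is the practical advantage the abstract claims over the continuous Lagrange-multiplier solution.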
|
Keywords: |
IRR Model, Coherent Systems, System Reliability, LAM Approach, HAM Approach, IP
Approach |
|
DOI: |
https://doi.org/10.5281/zenodo.19593900 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th April 2026 -- Vol. 104. No. 7-- 2026 |
|
Full
Text |
|
|
Title: |
SPEECH EMOTION ANALYSIS ON MULTIPLE DATASETS USING OPTIMIZED DEEP LEARNING MODEL |
|
Author: |
P. SUBHADASREE, D. USHARANI
|
Abstract: |
Emotions are fundamental to human communication and play a critical role in
guiding both rational behavior and social interaction. Speech Emotion
Recognition (SER) aims to automatically identify human emotions from spoken
audio, a challenging task due to the variability in speech patterns and the
subtle acoustic differences between emotional states. SER is widely used in
real-world applications including healthcare, stress treatment, ICU monitoring,
and other surveillance settings. This paper presents a Long Short-Term Memory
(LSTM) based approach, enhanced through hyperparameter optimization, for
accurate classification of speech emotions. The model is trained and evaluated
on multiple benchmark datasets, including RAVDESS, KBES, nEmo, VESUS, and
BANSpEmo, each offering diverse linguistic and emotional content. Key acoustic
features such as pitch, energy, and Mel-frequency cepstral coefficients (MFCCs)
are extracted and used to train the model. Unlike traditional Convolutional
Neural Network (CNN) methods that focus on spatial features, the proposed LSTM
architecture effectively captures temporal dependencies within the audio
signals. Emotions such as happiness, surprise, anger, sadness, and neutrality
are classified with high accuracy. Experimental results indicate that the
proposed LSTM-based model, optimized via hyperparameter tuning, outperforms
baseline methods and demonstrates improved generalization across datasets,
making it a promising solution for real-world emotion recognition applications. |
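One of the acoustic features named above, short-time energy, is simple enough to compute without a DSP library. The sketch below is only that one feature: MFCC and pitch extraction require proper signal-processing tooling, and the signal here is a toy array, not real speech.

```python
def frame_energy(signal, frame_len, hop):
    """Short-time energy: sum of squared samples in each analysis frame.
    Energy trajectories help separate high-arousal emotions (anger,
    happiness) from low-arousal ones (sadness, neutrality)."""
    feats = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        feats.append(sum(x * x for x in frame))
    return feats

# a loud segment followed by a quiet one
sig = [0.5, -0.5, 0.5, -0.5, 0.1, -0.1, 0.1, -0.1]
e = frame_energy(sig, frame_len=4, hop=4)
```

Sequences of such per-frame features are exactly the kind of temporal input the LSTM in this paper consumes.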
|
Keywords: |
Artificial Intelligence, Machine Learning, Deep Learning, Grid Search Cross
Validation, Random Forest, Hyper Parameter Tuning, Best Fit Parameters. |
|
DOI: |
https://doi.org/10.5281/zenodo.19593942 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th April 2026 -- Vol. 104. No. 7-- 2026 |
|
Full
Text |
|
|
Title: |
MODIFIED SHUFFLENET WITH TRANSFER LEARNING MODEL FOR BLOCKCHAIN ASSISTED SECURE
LUNG CANCER CLASSIFICATION IN CLOUD-IOT ENVIRONMENT |
|
Author: |
AFFROSE, ASHWANI KUMAR YADAV, ARCHEK PRAVEEN KUMAR, AND CHERUKU SANDESH
KUMAR
|
Abstract: |
Globally, lung cancer (LC) is the most prevalent cause of cancer-related
deaths. In recent years, classifying lung cancer from CT scans has become a
new area of study in medical imaging systems. A key factor in LC diagnosis is
the capacity to precisely determine the location and size of the disease;
rapid diagnosis, detection, classification, and evaluation of CT images are
therefore necessary. The present study classifies LC using CT images. The data
(CT images) are acquired from a Cloud-IoT environment and stored on a
Blockchain to ensure their security. After data acquisition, the lung cancer
classification (LCC) process begins by pre-processing the CT images with a
Processed Pixel-based Threshold-based Median Filtering (PPT-MF) technique.
Subsequently, a modified Res U-Net is used to segment the images. Features,
namely Modified MRELBP, Local Monotonic Pattern (LMP), and PHOG, are then
derived. After that, data augmentation is carried out using a random sampling
technique. Finally, classification is performed with a Transfer Learning-based
modified ShuffleNet (TL-MSNet) model. In the analysis, TL-MSNet attains a
higher specificity of 98%, whereas existing methods score lower specificity
values. |
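The pre-processing stage here builds on median filtering. The sketch below is the plain 3×3 median filter only; the paper's PPT-MF variant adds pixel/threshold logic that is not reproduced, and the 3×3 "image" is a toy example.

```python
import statistics

def median_filter_3x3(img):
    """Plain 3x3 median filter on a 2-D list of pixel values (border
    pixels are left unchanged). Replacing each interior pixel with the
    median of its neighbourhood suppresses impulse noise while
    preserving edges better than mean filtering."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            window = [img[i + di][j + dj]
                      for di in (-1, 0, 1) for dj in (-1, 0, 1)]
            out[i][j] = statistics.median(window)
    return out

# an impulse ("salt") noise spike in the centre of a flat region
noisy = [[10, 10, 10],
         [10, 255, 10],
         [10, 10, 10]]
clean = median_filter_3x3(noisy)
```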
|
Keywords: |
Lung cancer; PPT-MF Technique; Modified Res U-Net; Modified MRELBP; Modified
Shufflenet. |
|
DOI: |
https://doi.org/10.5281/zenodo.19593959 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th April 2026 -- Vol. 104. No. 7-- 2026 |
|
Full
Text |
|
|
Title: |
ITERATIVE LLM-GUIDED RESIDUAL REASONING FOR SPEECH ENHANCEMENT IN REAL-WORLD
NOISY ENVIRONMENTS |
|
Author: |
AWS I. ABUEID |
|
Abstract: |
Despite significant advances in speech enhancement, most existing approaches
rely on single-pass processing and closed black-box models, limiting progressive
refinement, transparency, and reproducibility. This paper presents a fully local
and reproducible Iterative LLM-Guided Speech Enhancement Framework that
reformulates denoising as a multi-step residual reasoning process operating on
log-Mel spectral representations. Unlike traditional single-pass digital signal
processing methods, the proposed approach employs a 14B-parameter open-weight
transformer (Qwen2.5) to perform structured residual spectral refinement across
multiple iterations, enabling progressive noise suppression while preserving
harmonic continuity and formant structure under real-world acoustic conditions.
The framework was evaluated on 45 paired recordings collected across three
environments: clean indoor speech, outdoor street noise, and dynamic
walk-and-talk conditions. Quantitative evaluation over 30 noisy samples
demonstrates that moderate refinement depth (K = 2) achieves optimal
performance, improving SNR from 3.1 dB to 10.4 dB, PESQ from 1.42 to 2.21, and
STOI from 0.61 to 0.79. Semantic assessment using a locally executed Whisper ASR
model shows a reduction in mean Word Error Rate (WER) from 1.534 (noisy) to
1.507 at K = 2, confirming improved linguistic recoverability. Deeper refinement
(K = 4) yields diminishing returns, indicating stable convergence of the
residual update mechanism. All experiments were conducted entirely offline on
a consumer-grade RTX 3060 GPU (12GB VRAM), demonstrating controlled iterative
convergence under constrained hardware resources. These findings establish
open-weight transformer-based residual reasoning as a scalable and transparent
paradigm for intelligibility-centred speech enhancement in realistic
environments. |
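The SNR improvements reported above (3.1 dB to 10.4 dB) follow the standard power-ratio definition, which is easy to state concretely. The signals below are toy arrays, not the paper's recordings, and the "refined" signal merely mimics the effect of a residual-refinement pass.

```python
import math

def snr_db(clean, noisy):
    """Signal-to-noise ratio in dB between a clean reference and a
    degraded signal: 10 * log10(signal power / noise power), where the
    noise is the sample-wise difference from the reference."""
    sig = sum(c * c for c in clean)
    noise = sum((n - c) ** 2 for c, n in zip(clean, noisy))
    return 10.0 * math.log10(sig / noise)

clean = [1.0, -1.0, 1.0, -1.0]
noisy = [1.1, -0.9, 1.1, -0.9]        # additive error of 0.1 per sample
refined = [1.01, -0.99, 1.01, -0.99]  # after a hypothetical refinement pass
```

Each tenfold reduction in residual amplitude adds 20 dB, which is why iterative residual refinement shows up directly in the SNR metric.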
|
Keywords: |
Speech Enhancement, Large Language Models, Iterative Refinement, Semantic
Intelligibility, Residual Learning. |
|
DOI: |
https://doi.org/10.5281/zenodo.19593981 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th April 2026 -- Vol. 104. No. 7-- 2026 |
|
Full
Text |
|
|
Title: |
STTN-CP: A SPATIAL-TEMPORAL TRANSFORMER WITH CONTRASTIVE PRETRAINING MODEL FOR
CREDIT CARD FRAUD DETECTION |
|
Author: |
KATHIRESAN JAYABALAN, SETHURAMAN RADHAKRISHNAN
|
Abstract: |
Credit card fraud detection remains a critical challenge due to highly
imbalanced datasets, evolving fraud patterns, and complex transaction behaviors,
leading to significant financial losses and reduced user trust. A novel
Spatial-Temporal Transformer Network with Contrastive Pretraining (STTN-CP) is
proposed to effectively detect fraudulent transactions. The STTN-CP model is
designed to capture temporal dependencies throughout the entire transaction
sequence and to identify spatial correlations among the transaction features.
This study introduces a unified framework that integrates spatial–temporal
transformer modeling with contrastive representation learning to enhance feature
discrimination and detection accuracy. A two-step methodology is employed to
tackle the class imbalance issue commonly found in credit card datasets.
Initially, Min-Max normalization is conducted, followed by the application of
the Synthetic Minority Oversampling Technique (SMOTE) in the data preprocessing
phase. Subsequently, the spatial-temporal transformer blocks collaboratively
generate hierarchical embeddings, while the contrastive pretraining module
enhances feature discrimination by clustering analogous transactions and
separating dissimilar ones inside the feature space. The proposed framework is
evaluated using the Credit Card Fraud Detection dataset that is publicly
available from Kaggle, and achieves an accuracy of 99.12%, precision 99.00%,
recall 98.86%, F1-score 98.92%, and specificity 97.96%. The ablation studies
show the importance of spatio-temporal modeling, contrastive pretraining, and
data preprocessing with respect to high detection performance. The proposed
model provides a scalable and robust solution for real-time fraud detection,
improving financial security and decision-making reliability. Future work will
focus on multi-class fraud detection, lightweight architectures for real-time
deployment, and explainable AI integration. |
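The two-step preprocessing named in this abstract, Min-Max normalization followed by SMOTE, can be sketched compactly. This is an illustrative reduction, not the paper's pipeline: the minority "fraud" points are invented 2-D tuples, and a production SMOTE would come from a library such as imbalanced-learn.

```python
import random

def min_max(col):
    """Min-Max normalization of one feature column to [0, 1]."""
    lo, hi = min(col), max(col)
    return [(x - lo) / (hi - lo) for x in col]

def smote_point(minority, k=2, rng=random):
    """One SMOTE-style synthetic sample: pick a minority point, one of
    its k nearest minority neighbours, and interpolate a new point on
    the segment between them."""
    p = rng.choice(minority)
    neigh = sorted((q for q in minority if q != p),
                   key=lambda q: sum((a - b) ** 2 for a, b in zip(p, q)))[:k]
    q = rng.choice(neigh)
    lam = rng.random()  # interpolation factor in [0, 1)
    return tuple(a + lam * (b - a) for a, b in zip(p, q))

scaled = min_max([10.0, 20.0, 30.0])
frauds = [(0.1, 0.2), (0.15, 0.25), (0.9, 0.9)]
synth = smote_point(frauds)
```

Because each synthetic point is a convex combination of two real minority samples, SMOTE enlarges the fraud class without simply duplicating records.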
|
Keywords: |
Credit Card Fraud Detection; Deep Learning; Spatial–Temporal Transformer;
Contrastive Pretraining; SMOTE |
|
DOI: |
https://doi.org/10.5281/zenodo.19593993 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th April 2026 -- Vol. 104. No. 7-- 2026 |
|
Full
Text |
|
|
Title: |
DIGITAL ECOSYSTEM OF THE STATE AS A DRIVER OF PROFESSIONALIZATION OF PUBLIC
SERVICE: THE EXPERIENCE OF UKRAINE AND EU COUNTRIES |
|
Author: |
OKSANA EVSYUKOVA, NATALIIA OBUSHNA, TETIANA VASYLEVSKA, IRYNA DYNNYK, IRYNA
LAZEBNA, VIKTORIIA PETRYNSKA |
|
Abstract: |
In the context of the public administration transformation in Ukraine, digital
public services are becoming a determining factor in increasing the
transparency, efficiency and professionalism of the public service, providing
new formats of interaction between the state and citizens, optimizing management
processes, contributing to the formation of modern competencies of civil
servants and creating conditions for the transition to a service-oriented model
of the state. The need for research is due to the fact that existing scientific
approaches consider digitalization mainly as a tool for increasing management
efficiency, not sufficiently taking into account its systemic impact on the
professionalization of the public service. In the context of active digital
transformation in Ukraine, a comprehensive analysis of the relationship between
the development of digital services, the level of digital competencies and the
institutional capacity of the public service is relevant. The purpose of the
article is to explore the potential and effectiveness of digital public services
as a tool for increasing the professionalism of the public service, to determine
their impact on the competence, effectiveness and openness of the public
service, and to outline promising ways of their improvement. The study uses
systemic, institutional and structural-functional approaches, as well as
analysis, synthesis, comparison, content analysis of official digital platforms,
analysis of statistical data and monitoring of digital governance practices. As
a result of the study, it was established that digital public services are a
system-forming factor in the professionalization of the public service of
Ukraine. It is proven that the introduction of digital tools, digital ethics and
digital communications forms new competencies of civil servants, expands the
range of managerial roles and increases the need for digital literacy, data
management, cybersecurity and electronic systems. A comparison of the level of
digital skills of citizens and e-government development indicators revealed
interdependence: a higher level of competence of the population ensures a faster
implementation of digital services, and electronic services themselves stimulate
further growth of digital literacy of society and public service employees. The
analysis of public investment in digitalization has shown that the effectiveness
of the digital services development depends not only on the amount of funding,
but also on the ability of the state to integrate digital solutions into
personnel policy, educational programs and the management system. The article
demonstrates that digital public services are not just automation tools, but a
fundamental mechanism for the professionalization of the public service. Their
implementation contributes to the renewal of the competencies of officials,
improvement of management, improvement of the quality of public services and the
formation of a service-oriented model of the state. This ensures sustainable
progress in reforming Ukraine's public service and its compliance with European
standards. |
|
Keywords: |
Digital Services; Digital Ecosystem Of The State; Digital Ethics; Digital
Communications; Public Service; Administrative Services; Professionalization;
Digital Competencies; E-Government; ICT Sector; Digitalization; Innovation;
Digital Transformation; Service-Oriented State. |
|
DOI: |
https://doi.org/10.5281/zenodo.19594018 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th April 2026 -- Vol. 104. No. 7-- 2026 |
|
Full
Text |
|
|
Title: |
A NOVEL META-REINFORCEMENT FAST LEARNING NETWORK FOR AN ENERGY EFFICIENT FOG
ENABLED IOT ENVIRONMENT |
|
Author: |
G. PRABHAKAR, MEERAVALI SHAIK |
|
Abstract: |
Internet-enabled devices have opened a wide range of applications in areas such
as healthcare, transportation, and automation systems. As the number of
computing devices grows, Internet-connected devices generate ever more data,
which strains available energy and computation. Fog computing has emerged as an
innovative paradigm that helps these devices in terms of energy and time by
bringing computation closer to the data sources. However, the dynamic
characteristics of the network and the heterogeneous data in Internet-enabled
networks pose a serious challenge to achieving low latency and energy
efficiency simultaneously. To overcome these limitations, this research
introduces a novel meta-reinforcement fast learning algorithm for
energy-efficient routing and scheduling in a Fog-enabled IoT environment. The
suggested framework integrates the principles of reinforcement learning and
fast neural networks to enable rapid adaptation of routing policies,
optimizing the network by minimizing transmission latency and energy
consumption at the Fog gateways. The framework was deployed in a flexible
Python-based environment (FogBus2 and SimPy) and demonstrated strong
performance on quality-of-service (QoS) metrics such as average latency,
energy, and resource utilization. To prove the efficacy of the suggested
technique, its performance was compared with other techniques. Evaluation
results demonstrate that the suggested model balances network characteristics
and performance, achieving lower transmission latency (54% less than
traditional methods) while consuming only 40% of the energy of conventional
learning techniques. This research provides useful insights toward scalable
and intelligent solutions for energy-efficient Fog computing systems. |
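The reinforcement-learning core of such a routing scheme can be illustrated with one tabular Q-learning update. This is a deliberately simplified stand-in for the paper's meta-reinforcement fast learning network: states are node names, actions are hypothetical next hops, and the reward penalizes combined latency and energy.

```python
def q_update(q, state, action, reward, next_state, actions,
             alpha=0.5, gamma=0.9):
    """One tabular Q-learning step:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
    In the routing setting, a state is the current node, an action a
    next-hop choice, and reward = -(latency + energy) of that hop."""
    best_next = max(q.get((next_state, a), 0.0) for a in actions)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

q = {}
hops = ("hop1", "hop2")
# hop1 from node A costs little latency+energy; hop2 costs much more
q_update(q, "A", "hop1", reward=-1.0, next_state="B", actions=hops)
q_update(q, "A", "hop2", reward=-5.0, next_state="B", actions=hops)
```

After these updates the policy already prefers the cheaper hop; the meta-learning layer in the paper is what lets such preferences adapt quickly when the network changes.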
|
Keywords: |
Fog Computing, Meta-Reinforcement, Fast Learning Networks, Internet-Enabled
Devices, Quality Of Services. |
|
DOI: |
https://doi.org/10.5281/zenodo.19594039 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th April 2026 -- Vol. 104. No. 7-- 2026 |
|
Full
Text |
|
|
Title: |
A UNIFIED BENCHMARKING FRAMEWORK FOR CLASSICAL AND QAOA-BASED MULTI-OBJECTIVE
ROUTING IN COMMUNICATION NETWORKS |
|
Author: |
KALAVATHI S, BALAMURUGAN N M
|
Abstract: |
Multi-objective routing in communication networks requires simultaneous
optimization of delay, congestion, and reliability—posing challenges for
classical single-metric routing protocols. This paper presents a rigorous
quantitative benchmarking framework comparing classical deterministic routing
paradigms, namely Routing Information Protocol and Open Shortest Path First,
against a quantum-assisted routing formulation based on the Quantum Approximate
Optimization Algorithm (QAOA). The routing problem is encoded as a Quadratic
Unconstrained Binary Optimization (QUBO) model incorporating flow-conservation
and path-continuity constraints as penalty terms. Random connected weighted
graphs (10–20 nodes) were evaluated across 50 independent runs per topology size
under a unified scalarized cost function (0.5D + 0.3C + 0.2L). Results
demonstrate that deterministic link-state routing achieves optimal path
selection with minimal computational complexity O(m log n), while QAOA attains
near-optimal solutions (3–8% deviation) at significantly higher computational
overhead due to qubit scaling and variational optimization loops. Scalability
analysis reveals quadratic qubit growth with network size, limiting NISQ-era
applicability. The study establishes a structured comparative benchmark and
positions quantum routing as a complementary exploratory mechanism rather than a
replacement for classical routing. |
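The scalarized cost function stated above (0.5D + 0.3C + 0.2L) can be made concrete with a brute-force reference optimizer over a tiny graph. This sketch is only the classical baseline side of the comparison: the toy graph and per-link metrics are invented, and neither OSPF/RIP nor the QAOA/QUBO solver is reproduced.

```python
from itertools import permutations

def path_cost(path, delay, congestion, loss):
    """Scalarized multi-objective path cost, summed per link, using the
    paper's weighting 0.5*Delay + 0.3*Congestion + 0.2*Loss."""
    return sum(0.5 * delay[e] + 0.3 * congestion[e] + 0.2 * loss[e]
               for e in zip(path, path[1:]))

def best_path(nodes, src, dst, delay, congestion, loss):
    """Brute-force optimum over simple paths, feasible only for tiny
    graphs; it serves as a ground-truth reference for routing methods."""
    mids = [n for n in nodes if n not in (src, dst)]
    cands = []
    for r in range(len(mids) + 1):
        for mid in permutations(mids, r):
            p = (src,) + mid + (dst,)
            if all(e in delay for e in zip(p, p[1:])):
                cands.append(p)
    return min(cands, key=lambda p: path_cost(p, delay, congestion, loss))

# toy graph: the direct A->D link is high-delay; A->C->D is cheap
delay = {("A", "B"): 2, ("B", "D"): 2, ("A", "C"): 1,
         ("C", "D"): 1, ("A", "D"): 5}
cong = {e: 1 for e in delay}
loss = {e: 1 for e in delay}
route = best_path(["A", "B", "C", "D"], "A", "D", delay, cong, loss)
```

Against such a reference optimum, one can measure the 3–8% deviation the abstract reports for QAOA solutions.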
|
Keywords: |
Multi-objective routing, QAOA, QUBO, RIP, OSPF, quantum networking,
combinatorial optimization, SDN. |
|
DOI: |
https://doi.org/10.5281/zenodo.19594057 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th April 2026 -- Vol. 104. No. 7-- 2026 |
|
Full
Text |
|
|
Title: |
UNDERSTANDING DIGITAL ADOPTION AMONG INDONESIAN MSMES: THE ROLE OF BEHAVIOURAL
BIASES, FINANCIAL LITERACY, AND HEDONIC MOTIVATION |
|
Author: |
KIMBERLY IRENE MANGITUNG, CAREN IMANUEL, FRANSISCA HANITA RUSGOWANTO |
|
Abstract: |
This study is quantitative research that examines the role of financial literacy
and behavioural biases among MSME actors in Indonesia in shaping decisions
related to digital adoption. This study uses data from 180 MSME owners and
managers operating in Indonesia and applies Partial Least Squares Structural
Equation Modelling (PLS-SEM) for data analysis. The findings show that financial
literacy has a positive influence on digital adoption, while heuristics, loss
aversion, and overconfidence have negative effects on digital adoption. Hedonic
motivation, as a moderating variable, strengthens the relationship between
financial literacy and digital adoption and reduces the negative effects of
heuristics and overconfidence. However, hedonic motivation does not reduce the
influence of loss aversion on digital adoption. This study contributes to the
literature by developing an integrated framework that combines financial
literacy, behavioural biases, and hedonic motivation in explaining digital
adoption decisions among MSMEs, while extending behavioural research through the
identification of the moderating role of hedonic motivation in shaping
bias-driven decision-making, and providing empirical evidence from the
Indonesian MSME context, where such integrated analyses remain limited.
Digitalisation should be accompanied by improvements in financial literacy and
the development of user-friendly and enjoyable digital systems. |
|
Keywords: |
Digital adoption, Behavioural Biases, Financial Literacy, MSME, Hedonic
Motivation |
|
DOI: |
https://doi.org/10.5281/zenodo.19594079 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th April 2026 -- Vol. 104. No. 7-- 2026 |
|
Full
Text |
|
|
Title: |
ML-ER-OPE: JOINT PREDICTION OF SOFTWARE DEVELOPMENT EFFORT AND RELIABILITY VIA
OPERATIONAL PROFILE–DRIVEN MULTI-TASK LEARNING |
|
Author: |
SUDHAKAR KAMBHAMPATI, DR. G VAMSI KRISHNA
|
Abstract: |
Precise estimation of software development effort and software reliability
remains a significant challenge in today’s non-stationary workloads and
heterogeneous patterns of use. Currently, these tasks are generally modelled
independently and based on static or past failure data, such that these methods
do not provide optimal performance when the system is operating normally. In
this work, we introduce an operational-profile-driven multi-task learning model,
ML-ER-OPE, for the simultaneous prediction of development effort and software
reliability. The approach builds dynamic operational profiles from execution
logs to derive scenario-level execution probabilities that behaviorally scale
structural metrics and reliability signals. The features are then pooled and
concatenated, and the multi-modal representation is fed into a shared deep
encoder with task-specific regression heads (under a composite objective), which
together make feature-level consolidation and associated model optimization. A
flexible optimization engine also adjusts the level of testing and resource
distributions based on estimated risk. The framework is tested on four NASA
benchmark datasets (JM1, KC1, KC2, PC1) using repeated stratified 10-fold
cross-validation. ML-ER-OPE significantly outperforms state-of-the-art baselines
on both effort and reliability, with the null hypothesis rejected by paired
t-tests and Wilcoxon tests at the 95% confidence level, alongside
medium-to-large effect sizes. |
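The composite objective behind such joint training can be stated as a weighted sum of the two task losses. The sketch below is a minimal scalar version with invented predictions and targets; in the actual model, gradients of this single scalar would flow into both regression heads and the shared encoder.

```python
def composite_loss(pred_effort, true_effort, pred_rel, true_rel, alpha=0.5):
    """Composite multi-task objective: alpha * MSE(effort head) +
    (1 - alpha) * MSE(reliability head). The weight alpha balances the
    two tasks during joint optimization."""
    def mse(pred, true):
        return sum((p - t) ** 2 for p, t in zip(pred, true)) / len(true)
    return (alpha * mse(pred_effort, true_effort)
            + (1 - alpha) * mse(pred_rel, true_rel))

# toy effort estimates (person-months) and reliability scores
loss = composite_loss([10.0, 12.0], [11.0, 12.0], [0.9, 0.8], [0.95, 0.8])
```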
|
Keywords: |
Software Effort Estimation, Software Reliability Prediction, Multi-Task
Learning, Operational Profiles, Behavior-Aware Modeling, Joint
Effort–Reliability Modeling, Dynamic Resource Optimization, Empirical Software
Analytics. |
|
DOI: |
https://doi.org/10.5281/zenodo.19594101 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th April 2026 -- Vol. 104. No. 7-- 2026 |
|
Full
Text |
|
|
Title: |
ENHANCING DEPENDENCY PARSING FOR TELUGU-ENGLISH CODE-MIXED TEXT: TREEBANK
CREATION, PARSER ADAPTATIONS AND POS TAGGING INTEGRATION |
|
Author: |
SANDEEP MADDU, VIZIANANDA ROW SANAPALA
|
Abstract: |
Code-mixed text from social media poses significant challenges for syntactic
analysis due to irregular grammar, non-standard usage, and frequent language
switching. For Telugu-English code-mixed text, the absence of large-scale
syntactic resources and specialized parsing models limits progress in downstream
multilingual NLP applications. In this work, we address this gap by introducing
the first substantial manually annotated Telugu-English code-mixed dependency
treebank of 4,152 sentences, developed using Universal Dependencies (UD) 2.0
guidelines. We further propose enhancements to a biaffine dependency parser by
incorporating a language-aware head-dependent bias and relation-specific
structural weights to better capture cross-lingual syntactic patterns. Our
approach improves parsing performance, achieving 75.53% UAS and 61.86% LAS, with
consistent gains over a strong baseline. In addition, we demonstrate that
integrating dependency-derived syntactic features into a BiLSTM-CRF model
improves part-of-speech tagging, achieving a macro-F1 score of 83.73%, with
statistically validated gains. We also re-annotate an existing Telugu-English
dataset using UD 2.0 to ensure compatibility with modern syntactic frameworks.
Overall, this work provides new annotated resources and modeling strategies that
advance syntactic processing for Telugu-English code-mixed text, with broader
implications for developing robust NLP systems in low-resource and multilingual
settings. |
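The paper's language-aware head-dependent bias can be sketched as an additive term over biaffine arc scores, indexed by the language IDs of the head and dependent tokens. This is an illustrative reconstruction under assumed shapes and encodings, not the authors' implementation:

```python
import numpy as np

def biased_arc_scores(base_scores, lang_ids, lang_bias):
    """Add a language-pair bias to biaffine arc scores.

    base_scores[dep, head]: score of token `head` heading token `dep`.
    lang_ids[k]: 0 = Telugu, 1 = English (illustrative encoding).
    lang_bias[a, b]: learned bias for head-language a, dependent-language b.
    """
    n = base_scores.shape[0]
    out = base_scores.copy()
    for dep in range(n):
        for head in range(n):
            out[dep, head] += lang_bias[lang_ids[head], lang_ids[dep]]
    return out

base = np.zeros((2, 2))
lang_ids = [0, 1]                          # token 0 Telugu, token 1 English
bias = np.array([[0.5, -0.2], [-0.2, 0.3]])
scores = biased_arc_scores(base, lang_ids, bias)
# scores[0, 1] receives bias[1, 0] (English head, Telugu dependent)
```

At training time such a bias table would be learned jointly with the parser, letting the model penalize or favor cross-lingual head-dependent attachments.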
|
Keywords: |
Dependency Parsing, Code-Mixing, Telugu, Part-Of-Speech Tagging, Language Aware
Head Bias, Dependency Aware Weights |
|
DOI: |
https://doi.org/10.5281/zenodo.19594110 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th April 2026 -- Vol. 104. No. 7-- 2026 |
|
Full
Text |
|
|
Title: |
DUAL-THRESHOLD FEATURE SELECTION AND LIGHTWEIGHT ADAPTIVE SOFT-VOTING ENSEMBLE
FOR NETWORK INTRUSION DETECTION IN IoT ENVIRONMENT |
|
Author: |
VEENA S BADIGER, GOPAL K SHYAM |
|
Abstract: |
Network intrusion detection systems (NIDS) are an important tool for protecting
networked devices. In resource-constrained environments such as IoT,
safeguarding these devices is essential, and lightweight NIDS are
necessary to detect intrusions. Achieving
high detection accuracy with real-time efficiency is challenging; therefore, a
novel two-stage lightweight NIDS is developed to improve intrusion detection in
IoT networks. The study presents three important novel components. First, the
Dual-Threshold Random Forest Feature Selection (DT-RFFS) methodology reduces
the feature space by 68-71% and increases inference speed by 12-15%. Second,
the Confidence-Weighted Adaptive Soft Voting (CW-ASV) strategy adapts classifier
weights based on performance and improves minority attack
detection by 2.3%. Third, the Confidence-Based Early Exit (CBEE) mechanism exits
early on high-confidence benign traffic, significantly reducing
computational overhead while maintaining high detection accuracy. The proposed
architecture performs binary classification in the first stage to distinguish
benign traffic from malicious network activity. Subsequently, the second stage
identifies specific intrusion categories using a soft-voting ensemble of
Decision Tree, Random Forest, LightGBM, XGBoost, and AdaBoost base learners. The
dataset is balanced using SMOTE. Experiments were conducted on the CIC-ToN-IoT
and RT-IoT2022 datasets, achieving accuracies of 99.8% in stage 1 and
98.6-99.7% in stage 2, with precision, recall, and F1-scores of 99%. Log-loss
values are between 0.01 and 0.10, and AUC-ROC values are between 0.9 and 1.0,
indicating high reliability in probability prediction and overall class
differentiation. Cross-dataset testing yields a generalization accuracy of
90-91%. The model performs inference at low computational cost, achieving a
latency of less than 20 ms and a throughput of 690-720 packets/second,
demonstrating real-time performance suitable for deployment in
resource-constrained IoT networks. |
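The two mechanisms named in the abstract, confidence-weighted soft voting and confidence-based early exit, can be sketched as follows (a minimal illustration with hypothetical numbers, not the paper's code):

```python
import numpy as np

def soft_vote(probas, val_accuracies):
    """Confidence-weighted soft voting: average class probabilities,
    weighting each base learner by its validation accuracy."""
    w = np.asarray(val_accuracies, dtype=float)
    w = w / w.sum()                          # normalize weights to sum to 1
    return np.tensordot(w, np.asarray(probas), axes=1)

def early_exit(benign_proba, threshold=0.95):
    """Confidence-based early exit: skip stage-2 attack classification
    when stage 1 is highly confident the traffic is benign."""
    return benign_proba >= threshold

# Three hypothetical base learners' [benign, malicious] probabilities for one flow.
probas = [[0.7, 0.3], [0.6, 0.4], [0.9, 0.1]]
acc = [0.99, 0.95, 0.97]                     # per-learner validation accuracies
fused = soft_vote(probas, acc)
print(fused, early_exit(fused[0]))
```

In a deployed pipeline, flows passing `early_exit` would bypass the heavier stage-2 ensemble entirely, which is where the reported computational savings come from.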
|
Keywords: |
Cyber-attacks, Ensemble Learning, Intrusion Detection, IoT Security, Lightweight
Classifier, Soft Voting. |
|
DOI: |
https://doi.org/10.5281/zenodo.19594128 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th April 2026 -- Vol. 104. No. 7-- 2026 |
|
Full
Text |
|
|
Title: |
YOUTH SOCIAL MEDIA BEHAVIOR AND ETHICAL INFORMATION VERIFICATION: AN EMPIRICAL
STUDY OF THE DIGITAL TABAYYUN MODEL |
|
Author: |
ABDUL RAUF RIDZUAN, KHAIRUL AZHAR MEERANGANI, MOHAMMAD FAHMI ABDUL HAMID,
MUHAMMAD HILMI MAT JOHAR, ABDUL SATTAR |
|
Abstract: |
The rapid growth of social media has intensified concerns about misinformation
and the ethical responsibilities of users, particularly among youth who
constitute one of the most active online populations. While technological
solutions such as automated fact-checking have gained prominence, less attention
has been given to the behavioral and ethical dimensions of information
verification. This study examines the influence of youth social media behavior
on ethical information verification practices and proposes a structured
value-based framework to address this gap. Using a quantitative cross-sectional
survey design, data were collected from 215 young social media users. Behavioral
constructs grounded in the Theory of Planned Behavior (attitude, subjective
norm, and perceived behavioral control) were employed to examine their
relationship with ethical verification practices, which were operationalized
through self-control, open-mindedness, critical thinking, and
information-seeking behaviors. Descriptive, correlation, and regression analyses
were conducted using SPSS. The findings reveal a strong and significant positive
relationship between social media behavior and ethical information verification
(r = 0.767, p < .01). Regression analysis further indicates that social media
behavior is a substantial predictor of verification practices, explaining 59% of
the variance (R² = .59). These results demonstrate that ethical verification is
not merely a technical or cognitive skill, but a behavioral practice shaped by
attitudes, social influence, and perceived capacity for action. Building on
these empirical insights, the study introduces the Digital Tabayyun Model as a
value-based ethical verification framework that integrates source evaluation,
reflective verification behavior, and impact awareness. The model complements
existing technological and behavioral approaches by emphasizing the
human-centered dimensions of ethical decision-making in digital environments.
This study contributes to research in information and education technology by
providing empirical evidence on behavioral determinants of ethical verification
and offering a culturally responsive, value-based model to support digital
literacy and responsible social media engagement among youth. |
|
Keywords: |
Ethical Information Verification; Youth Social Media Behavior; Digital Literacy;
Theory of Planned Behavior; Digital Tabayyun |
|
DOI: |
https://doi.org/10.5281/zenodo.19594147 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th April 2026 -- Vol. 104. No. 7-- 2026 |
|
Full
Text |
|
|
Title: |
HARNESSING UNCERTAINTY-AWARE AND SCALABLE GMM-RF FOR CROSS-DOMAIN SENTIMENT
INTELLIGENCE IN HETEROGENEOUS DIGITAL PLATFORMS |
|
Author: |
RAMAJAYAM G, Dr. VIDYABANU R |
|
Abstract: |
Accurate sentiment classification across diverse online shopping domains remains
a complex challenge due to shifting vocabulary, variable sentiment intensity,
and non-linear emotional expression. This paper presents a domain-adaptive
framework, the Gaussian Mixture Model-enhanced Random Forest (GMM-RF), designed
to overcome ambiguity and variability in customer reviews by integrating
probabilistic clustering with ensemble learning. The GMM component models
reviews as soft sentiment distributions, capturing nuanced emotional overlaps that are
often misrepresented by rigid classifiers. These probabilistic encodings guide
the Random Forest classifier, which applies cluster-aware bootstrapping,
entropy-filtered sampling, and GMM-driven thresholding to form precise and
interpretable decision boundaries. Evaluation on Amazon product reviews—Books,
DVDs, Electronics, and Kitchen Appliances—demonstrates an average classification
accuracy of 95.705%, marking a 7.28% improvement over MMASA. The model also
surpasses EBC with a 9.28% gain, confirming its resilience in sentiment-rich,
high-entropy environments. GMM-RF requires no domain-specific retraining and is
fully scalable, making it ideal for deployment in real-time e-commerce systems.
Its design supports adaptive, interpretable sentiment analytics, enabling more
reliable customer insight extraction from dynamic, cross-category review
streams. |
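The "soft sentiment distributions" the GMM component produces are cluster responsibilities, the posterior probability that a review belongs to each sentiment cluster. A minimal one-dimensional sketch (assumed parameters, not the paper's model) of how such soft memberships are computed:

```python
import numpy as np

def gmm_responsibilities(x, means, stds, weights):
    """Soft cluster memberships (responsibilities) of a scalar sentiment
    score x under a 1-D Gaussian mixture -- the kind of 'soft sentiment
    distribution' fed to a downstream classifier."""
    means, stds, weights = map(np.asarray, (means, stds, weights))
    dens = weights * np.exp(-0.5 * ((x - means) / stds) ** 2) / (stds * np.sqrt(2 * np.pi))
    return dens / dens.sum()   # normalize to a probability vector

# Two hypothetical sentiment clusters: negative (mean -1) and positive (mean +1).
r = gmm_responsibilities(0.8, means=[-1.0, 1.0], stds=[1.0, 1.0], weights=[0.5, 0.5])
print(r)  # weighted toward the positive cluster, but not a hard 0/1 label
```

Feeding such probability vectors to the Random Forest, instead of hard cluster labels, is what lets the ensemble represent reviews with overlapping or ambiguous sentiment.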
|
Keywords: |
Sentiment Classification, Gaussian Mixture Model, Random Forest, Cross-Domain
Analysis, Online Reviews |
|
DOI: |
https://doi.org/10.5281/zenodo.19594160 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th April 2026 -- Vol. 104. No. 7-- 2026 |
|
Full
Text |
|
|
Title: |
A CNN-DRIVEN FRAMEWORK FOR ACCURATE DIABETIC FOOT ULCER DETECTION IN CLINICAL
IMAGING |
|
Author: |
TALASILA VEENA, BABU REDDY MUKKAMALLA |
|
Abstract: |
One of the most serious side effects of diabetes mellitus is diabetic foot
ulcers (DFUs), which, if left untreated, can result in infection, delayed wound
healing, and lower limb amputations. Because manual DFU assessment is
subjective, time-consuming, and heavily reliant on clinical knowledge, it does
not scale well to real healthcare settings. This study suggests a deep
learning-based intelligent and automated DFU diagnosis system that improves
clinical reliability and detection accuracy in order to address these issues.
The suggested approach successfully extracts discriminative features from DFU
photos by combining Convolutional Neural Networks (CNNs) with transfer learning
architectures. To adjust to DFU-specific visual patterns, pre-trained models
like ResNet and EfficientNet are refined. A Generative Adversarial Network
(GAN)-based data augmentation technique is used to address the problem of small
and unbalanced medical datasets, allowing the creation of realistic DFU images
and enhancing model generalization. Benchmark DFU datasets under standardized
preparation are used to train and assess the system and protocols for
augmentation. According to experimental results, the suggested method
outperforms traditional CNN-based and transfer learning baselines with a
classification accuracy of 97.8%, sensitivity of 96.9%, specificity of 98.4%,
and an AUC score of 0.99. Particularly in minority ulcer classes, the addition
of GAN-generated data greatly increases resilience and decreases overfitting.
This work's main contribution is the efficient combination of adversarial
learning and transfer learning for DFU diagnosis, which provides a scalable,
precise, and clinically interpretable approach. By enabling early intervention,
lowering clinician workload, and enhancing patient outcomes in diabetic care,
this approach has great potential for implementation in computer-aided
diagnostic systems. |
|
Keywords: |
CNN, DFU, Multitask Deep Learning, Transfer Learning |
|
DOI: |
https://doi.org/10.5281/zenodo.19594171 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th April 2026 -- Vol. 104. No. 7-- 2026 |
|
Full
Text |
|
|
Title: |
AN INTERPRETABLE DEEP LEARNING FRAMEWORK FOR GASTRIC POLYP DETECTION AND
SEGMENTATION USING GRAD-CAM AND LIME |
|
Author: |
JYOTI GARG, MOHIT CHHABRA |
|
Abstract: |
Accurate identification and segmentation of precancerous polyps in endoscopic
images is the first step toward early detection of stomach cancer. While deep
learning has greatly advanced medical image processing, clinical trust in such
models remains limited. This paper develops an explainable
deep learning model using a publicly available Kvasir-SEG dataset for the
segmentation of stomach cancer polyps. The methodology encompasses image
preprocessing, residual U-Net segmentation, and model evaluation using IoU,
Dice score, accuracy, and loss metrics. For interpretability,
LIME provides local explanations at the pixel level for individual predictions,
whereas Grad-CAM has been used to highlight class-discriminative regions that
influence segmentation decisions. The integrated XAI framework in the study
offers global interpretability and instance-specific interpretability;
therefore, clinicians are able to visually confirm why the model designated a
certain area as suspected of carrying cancer. The experimental results showed
strong alignment between model-highlighted regions and actual polyp structures,
with high segmentation capability. The proposed approach provides both
relevance and transparency. The proposed model has been optimized using
multiple architectures, including Attention U-Net, EfficientNet, ResUNet,
DenseNet, and Transformer variants. This integrated ensemble model,
incorporating LIME and Grad-CAM, produces a deep learning-based system that
could readily be used for stomach cancer diagnosis through endoscopic analysis. |
|
Keywords: |
LIME, Grad-CAM, Gastric Cancer, Polyp Segmentation, Explainable AI(XAI) |
|
DOI: |
https://doi.org/10.5281/zenodo.19594184 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th April 2026 -- Vol. 104. No. 7-- 2026 |
|
Full
Text |
|
|
Title: |
TRANSFORMER-BASED CYBERBULLYING DETECTION IN MOROCCAN DARIJA: A WILLARD TAXONOMY
APPROACH |
|
Author: |
ABDELKARIM TAHIRI, BOUBKER SBIHI, RACHID GOUARTI, FATIMA MOURCHID, ALI EL
KSIMI |
|
Abstract: |
Cyberbullying remains a serious threat, yet detection tools for Arabic dialects
are scarce. Moroccan Darija poses particular challenges for automated systems.
This paper evaluates three transformer models on the Offensive Moroccan Comments
Dataset. Results show MARBERT achieves 85.07% F1-score, outperforming AraBERT
(83.84%) and the multilingual baseline (80.10%), confirming that dialectal
pre-training matters for low-resource varieties. We also propose a severity
framework based on Willard's (2007) cyberbullying taxonomy, which distinguishes
eight behavioral types. Our system maps these into three levels: CRITICAL
(cyberstalking, harassment, outing) for immediate action, MODERATE (flaming,
denigration, exclusion, trickery, impersonation) for standard review, and NONE
for safe content. Dataset analysis shows 9.6% of comments fall into the CRITICAL
category. On this high-risk class, MARBERT reaches an 86.05% F1-score with
84.80% recall, ensuring that most dangerous content is flagged. These results
offer
practical guidance for deploying content moderation systems for Arabic
communities. |
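The abstract's mapping of Willard's eight behavioral types to three severity levels is a simple lookup followed by routing; a sketch of that mapping (the type and level names follow the abstract, the code structure is illustrative):

```python
# Mapping Willard's eight cyberbullying types to the paper's three
# severity levels, as described in the abstract.
SEVERITY = {
    "cyberstalking": "CRITICAL",
    "harassment": "CRITICAL",
    "outing": "CRITICAL",
    "flaming": "MODERATE",
    "denigration": "MODERATE",
    "exclusion": "MODERATE",
    "trickery": "MODERATE",
    "impersonation": "MODERATE",
}

def triage(behavior_type):
    """Route a detected behavior to an action queue by severity level."""
    return SEVERITY.get(behavior_type, "NONE")

print(triage("outing"))    # CRITICAL -> immediate action
print(triage("flaming"))   # MODERATE -> standard review
print(triage("greeting"))  # NONE -> safe content
```

In a moderation pipeline, the transformer classifier's predicted behavior type would feed this lookup, so that CRITICAL items jump the review queue.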
|
Keywords: |
Cyberbullying Detection, Arabic NLP, Willard Taxonomy, Transformer Models,
Severity Classification |
|
DOI: |
https://doi.org/10.5281/zenodo.19594204 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th April 2026 -- Vol. 104. No. 7-- 2026 |
|
Full
Text |
|
|
|