Submit Paper / Call for Papers
The journal receives papers in a continuous flow and will consider articles
from a wide range of Information Technology disciplines, from the most
basic research to the most innovative technologies. Please submit your papers
electronically through our submission system at http://jatit.org/submit_paper.php in
MS Word, PDF, or a compatible format so that they may be evaluated for
publication in the upcoming issue. This journal uses a blinded review process;
please remember to include all your personally identifiable information in the
manuscript before submitting it for review, and we will redact the necessary
information on our side. Submissions to JATIT should be full research / review
papers (properly indicated below the main title).
|
|
|
Journal of
Theoretical and Applied Information Technology
January 2026 | Vol. 104 No. 2 |
|
Title: |
ENHANCING TASK RELATED KNOWLEDGE SHARING IN AGILE VIRTUAL TEAMS: FROM
INFLUENCING FACTORS TO PRACTICAL GUIDELINES – KSAVT FRAMEWORK |
|
Author: |
ZOHAIB AHMED, ZULKEFLI MANSOR, KAMSURIAH AHMAD, MUHAMMAD ZAFARULLAH |
|
Abstract: |
Task-relevant knowledge sharing (TaRK) is an essential process for maintaining
alignment, collaboration, and learning within geographically dispersed teams in
distributed agile environments. Building on the existing Knowledge Sharing
for Agile Virtual Teams (KSAVT) Framework, this study clarifies the role of
task-level practices in ensuring effective knowledge sharing and improving
collective performance. Based on a qualitative multiple case study of three
international software organizations and eight practitioners, the research
validates seven task-related influencing factors (IFs) and offers eight
practical guidelines that put these influencing factors into practice in
real-world agile settings. The results suggest that sustaining knowledge sharing
in distributed agile environments depends on open communication, workload
sharing, repeated reflection, and relevant filtering. Theoretically, the study adds
to the KSAVT Framework by showing how efficient task-related knowledge sharing
can convert an individual’s capacity to do a task into a team’s capability. In
practice, it gives agile teams clear steps to make task-related knowledge
sharing part of their daily routine, helping them become more flexible,
learn faster, and work better together. Consequently, task-oriented knowledge
sharing becomes an active, iterative process that turns everyday work into
learning loops, promoting adaptability and sustained team performance. |
|
Keywords: |
Agile Virtual Teams (AVTs), Task-Related Knowledge (TaRK), Knowledge Sharing,
Influencing Factors (IFs), Industrial Case Study, Global Software Development
(GSD), Knowledge Sharing for Agile Virtual Teams (KSAVT) Framework. |
|
DOI: |
https://doi.org/10.5281/zenodo.18453956 |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st January 2026 -- Vol. 104. No. 2-- 2026 |
|
Full
Text |
|
|
Title: |
EFFECTIVE CBIR BASED ON HYBRID IMAGE FEATURES AND MULTILEVEL APPROACH |
|
Author: |
V. ARCHANA REDDY, V. VIJAYA KUMAR |
|
Abstract: |
The instantaneous search and retrieval of the images most relevant to a specific
query is one of the significant applications of image processing. The process of
retrieving images using image contents is widely known as content-based image
retrieval (CBIR). Image features extracted from local 3*3 windows have
yielded good results in CBIR. At the same time, micro windows of 2*2, from which
texton and motif features are derived, have played a dominant role in CBIR. This paper
segments the 3*3 window into two windows, namely cross and diagonal windows.
Four directional motifs of 1*3 are extracted from each of the cross and diagonal
segments. The motif features are derived using the Rule-based Directional Motif
(RDM) to address ambiguity issues. The paper transforms the motif indexes
derived on a 1*3 triangular window to a 3*3 window and derives eight triangular
RDM local units. The local features are extracted by integrating the features
extracted from the eight different images. The extracted features hold
directional, texture, pattern, and edge properties derived from the 3*3 and
triangular windows. The results indicate the superiority of the proposed
methods. |
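The cross/diagonal decomposition of a 3*3 window described above can be sketched as follows. This is a minimal illustration under the usual convention that the cross sub-window holds the centre row and column pixels and the diagonal sub-window holds the corners; the paper's exact pixel assignment and the RDM motif rules are not reproduced here.

```python
def split_window(w):
    """Split a 3*3 window (list of 3 rows) into cross and diagonal
    sub-windows. The cross holds the edge-centre pixels plus the centre;
    the diagonal holds the four corners plus the centre. The centre pixel
    is shared by both sub-windows."""
    c = w[1][1]
    cross    = [w[0][1], w[1][0], c, w[1][2], w[2][1]]
    diagonal = [w[0][0], w[0][2], c, w[2][0], w[2][2]]
    return cross, diagonal

window = [[10, 20, 30],
          [40, 50, 60],
          [70, 80, 90]]
cross, diag = split_window(window)
print(cross)  # -> [20, 40, 50, 60, 80]
print(diag)   # -> [10, 30, 50, 70, 90]
```

Motif features would then be computed separately over each sub-window.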
|
Keywords: |
3*3 Window, 1*3 Triangular Window, Directional Motif. |
|
DOI: |
https://doi.org/10.5281/zenodo.18453998 |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st January 2026 -- Vol. 104. No. 2-- 2026 |
|
Full
Text |
|
|
Title: |
ADVANCEMENTS IN QUANTUM COMPUTING WITH THE DEVELOPMENT OF TOPOLOGICAL QUBITS FOR
SCALABLE AND FAULT-TOLERANT SYSTEMS |
|
Author: |
DR. KONGARA SRINIVASA RAO, DR. K. SREERAMA MURTHY, CHETLA CHANDRA MOHAN,
G. VASUNDHARA DEVI, YADAVALLI SURESH KUMAR, VEERAMACHINENI VENKATA RAGHU, MANASA
BANDLAMUDI, UDAYA LAKSHMI SAKALA |
|
Abstract: |
Quantum computing can help solve complex problems classical computers cannot
solve. Current quantum systems, especially when using conventional qubits
sensitive to environmental noise, face key scalability and fault tolerance
hurdles. Topological qubits have been identified as a promising path and are, in
principle, fault-tolerant, but they still suffer from control and scalability issues. This
work presents a new paradigm of quantum computation based on a hybrid quantum
system combining topological qubits for error correction and traditional qubits
for computing. The final hybrid system can reduce the bit-flip error probability
from 0.05 to 0.02 and the phase-flip error probability from 0.08 to 0.01. The
fidelity of quantum states after gate operations reached 0.92, surpassing both
that of conventional (0.85) and topological (0.90) qubits. Moreover, this
approach achieved an interaction strength (0.90) and operation time (0.01 ms)
superior to those of conventional qubits. The hybrid architecture significantly improves
quantum systems' efficiency, scalability, and fault tolerance. Even with these
advancements, controlling topological qubits and measuring the braiding of
anyons remain challenging. Future research will concentrate on overcoming
these technical challenges and paving the way for scaling up the hybrid system
to more complex quantum computations. |
|
Keywords: |
Quantum Computing, Topological Qubits, Majorana Fermions, Quantum Error
Correction, Scalable Systems, Fault Tolerance, Hybrid Quantum Systems |
|
DOI: |
https://doi.org/10.5281/zenodo.18454014 |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st January 2026 -- Vol. 104. No. 2-- 2026 |
|
Full
Text |
|
|
Title: |
AI-DRIVEN EXPLICIT AND IMPLICIT INFORMATION COLLECTION IN REQUIREMENTS
ENGINEERING: CLASSIFICATION AND KNOWLEDGE REPRESENTATION |
|
Author: |
AHMAD MURAD, KHALIL JAMOUS, IHAB ATIEH, AMJAD HUDAIB, NADIM OBEID, MARWAN
AL-TAWIL |
|
Abstract: |
The evolving complexity of software systems demands smarter Requirements
Engineering (RE) methods that can uncover both explicit and implicit user needs.
Traditional RE techniques often miss latent contextual information, leading to
incomplete or suboptimal system specifications. This paper introduces an
AI-enhanced framework that integrates Natural Language Processing (NLP), Machine
Learning (ML), and Knowledge Graphs (KGs) to intelligently extract and classify
requirements. Explicit requirements are derived from structured documentation
using BERT and TF-IDF, while implicit requirements are inferred from behavioral
patterns and unstructured data. Class imbalance is mitigated using SMOTE and
ADASYN techniques, and deep learning via BiLSTM captures bidirectional semantic
dependencies. The PROMISE dataset is used to train and evaluate the model, with
results benchmarked using accuracy, F1-score, and ontology completeness. The
resulting knowledge graph enables contextual traceability. Our framework
demonstrates improved requirement visibility and semantic enrichment, moving RE
toward a more intelligent, context-aware discipline. |
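Explicit-requirement extraction in the framework pairs BERT with TF-IDF term weighting. As a self-contained illustration of the TF-IDF half only (pure standard library, not the authors' pipeline), each term is weighted by its frequency in a document scaled by how rare it is across the corpus:

```python
import math
from collections import Counter

def tfidf(docs):
    """Compute TF-IDF vectors for a list of tokenised documents.
    TF is the raw term count; IDF = ln(N / df(t)) without smoothing,
    so a term present in every document gets weight 0 (no signal)."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))
    out = []
    for d in docs:
        tf = Counter(d)
        out.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return out

docs = [["system", "shall", "encrypt", "data"],
        ["system", "shall", "log", "access"]]
vecs = tfidf(docs)
print(vecs[0]["system"])  # -> 0.0 (appears in both docs, idf = ln(2/2))
```

In the paper these weights feed a classifier; discriminative terms like "encrypt" receive non-zero weight while boilerplate terms are zeroed out.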
|
Keywords: |
Requirements Engineering (RE), Explicit and Implicit Requirements, Knowledge
Graphs (KGs), Natural Language Processing (NLP), Machine Learning (ML), AI-Driven
Requirements Analysis |
|
DOI: |
https://doi.org/10.5281/zenodo.18454041 |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st January 2026 -- Vol. 104. No. 2-- 2026 |
|
Full
Text |
|
|
Title: |
COURSE FORGE AI IN COMPUTERS: AN AGENTIC, MODULAR, AND EXPLAINABLE EDUCATIONAL
ASSISTANT USING LLMS AND RAG |
|
Author: |
LENIN MARKSIA U, JEYASHANTHI J, JEGADEESH A, KALIAPPAN M, MARIAPPAN E, ANGEL
HEPZIBAH R, RAMANTH M, MEGA G |
|
Abstract: |
The demand for personalized, interactive learning remains a core challenge in
modern education. This paper presents CourseForge AI, a modular and explainable
academic assistant designed using Large Language Models (LLMs), autonomous
agents, and Retrieval-Augmented Generation (RAG). The system includes
specialized agents for curriculum-aligned question answering, note generation,
flashcards, tutoring, and coding assistance, all powered by semantic
retrieval over academic content. Unlike static chatbots, CourseForge AI
exhibits agentic reasoning and decision-making, allowing it to dynamically
retrieve and contextualize content in response to user input. By leveraging
vector-based semantic search through LanceDB and explainable agent
workflows through Phidata, the system ensures that its outputs are grounded,
interpretable, and relevant to syllabus-specific learning. Experimental
validation shows strong performance in user engagement, relevance, and accuracy.
This work contributes a scalable blueprint for AI-assisted education that is
both intelligent and trustworthy. |
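The semantic-retrieval step behind the agents can be illustrated with a toy cosine-similarity search. The three-dimensional vectors below stand in for real encoder embeddings, and the helper names are hypothetical; the actual system uses LanceDB rather than a hand-rolled index:

```python
import math

def cosine(a, b):
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, index, k=2):
    """Return the k chunks most similar to the query embedding."""
    ranked = sorted(index, key=lambda item: cosine(query_vec, item[1]),
                    reverse=True)
    return [text for text, _ in ranked[:k]]

# Toy 3-d "embeddings" standing in for real encoder output.
index = [("stack unit", [0.9, 0.1, 0.0]),
         ("queue unit", [0.1, 0.9, 0.0]),
         ("graph unit", [0.0, 0.1, 0.9])]
print(retrieve([0.8, 0.2, 0.0], index, k=1))  # -> ['stack unit']
```

The retrieved chunks are then passed to the relevant agent as grounding context, which is what keeps outputs syllabus-specific.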
|
Keywords: |
Agentic AI, AI Assisted learning, Curriculum-Aligned Learning, Large Language
Models (LLMs), Retrieval-Augmented Generation (RAG) |
|
DOI: |
https://doi.org/10.5281/zenodo.18454059 |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st January 2026 -- Vol. 104. No. 2-- 2026 |
|
Full
Text |
|
|
Title: |
MULTI-ASPECT SENTIMENT ANALYSIS IN AMAZON REVIEWS USING GMM-ENHANCED TEXTCAPS
WITH PROBABILISTIC CAPSULE ROUTING |
|
Author: |
SRI SOWMIYA S, DR. L. SANKARI |
|
Abstract: |
Online shopping enables consumers to purchase goods through e-commerce
platforms, where customer reviews significantly influence buying decisions and
vendor reputation. Sentiment analysis of these reviews facilitates the decoding
of public opinion, offering actionable insights for product improvement,
marketing personalization, and an enhanced customer experience. The reviews
often contain mixed sentiments, sarcasm, and abrupt polarity changes, which
traditional models tend to oversimplify, thereby losing sentiment flow and
interpretability. This study proposes GMM-Enhanced TextCaps, an advanced model
addressing these challenges by employing Byte-Pair Encoding for subword-level
sentiment extraction, capsule networks for granular aspect representation, and
Gaussian Mixture Models for probabilistic sentiment routing. The model processes
enriched token embeddings through contextual transformer encoders, sarcasm and
negation adjustments, and emotion-weighted attention mechanisms, forming
interpretable capsules that yield confident sentiment predictions. Experiments
utilize the Clothing, Shoes, and Jewelry category from the Amazon Reviews ’23
dataset, comprising 451,478 reviews divided into 80% training, 10% validation,
and 10% testing splits. The model attains a balanced accuracy of 72.51% and a
G-Mean of 72.45%, demonstrating robust performance in capturing nuanced
sentiment dynamics while effectively maintaining class balance. |
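The two headline metrics, balanced accuracy and G-Mean, are the arithmetic and geometric means of per-class recall respectively, which is why they are close when classes are handled evenly. A minimal sketch with illustrative labels (not the Amazon data):

```python
import math

def per_class_recall(y_true, y_pred):
    """Recall for each class: correct predictions / true instances."""
    classes = sorted(set(y_true))
    rec = {}
    for c in classes:
        idx = [i for i, y in enumerate(y_true) if y == c]
        rec[c] = sum(y_pred[i] == c for i in idx) / len(idx)
    return rec

def balanced_accuracy(y_true, y_pred):
    """Arithmetic mean of per-class recalls."""
    r = per_class_recall(y_true, y_pred)
    return sum(r.values()) / len(r)

def g_mean(y_true, y_pred):
    """Geometric mean of per-class recalls."""
    r = per_class_recall(y_true, y_pred)
    return math.prod(r.values()) ** (1 / len(r))

y_true = ["pos", "pos", "pos", "neg", "neg"]
y_pred = ["pos", "pos", "neg", "neg", "neg"]
# recalls: pos = 2/3, neg = 2/2 = 1.0
print(round(balanced_accuracy(y_true, y_pred), 3))  # -> 0.833
print(round(g_mean(y_true, y_pred), 3))             # -> 0.816
```

Because the geometric mean collapses toward zero when any class is missed, a G-Mean close to balanced accuracy (72.45% vs. 72.51% here) signals that no class is being sacrificed.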
|
Keywords: |
Aspect-Based Sentiment Analysis, Capsule Networks, Gaussian Mixture Models,
Sentiment Interpretation, Multimodal Sentiment Analysis, Transformer Encoding |
|
DOI: |
https://doi.org/10.5281/zenodo.18454074 |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st January 2026 -- Vol. 104. No. 2-- 2026 |
|
Full
Text |
|
|
Title: |
AI-ADAPTIVE POST-QUANTUM CRYPTOGRAPHY FOR SECURE AND FRAUD-RESILIENT DIGITAL
PAYMENTS |
|
Author: |
BALAKRISHNAN S, UPPALAPATI NAGA RATNA KUMARI, DR. G. GOKUL KUMARI, DR. LAMA
SAMEER KHOSHAIM, S. RAMYA, ANANTHA RAO GOTTIMUKKALA |
|
Abstract: |
The threat of quantum computing to classical cryptographic infrastructures is
steadily increasing, while payment fraud grows more complex and multi-modal,
posing a severe security gap within current financial infrastructures. Current payment
stacks, which rely on TLS with fixed classical cryptography and tabular
fraud detectors, are inadequate to secure high-volume digital transactions
against both quantum decryption attacks and coordinated fraudulent
behaviour. The purpose of the study was to create and test an AI-based framework
that can improve fraud detection and provide adaptive post-quantum
cryptographic protection in real time. To satisfy this goal, we developed an
experimental system that merged a heterogeneous temporal graph neural network
(HTGNN) for multimodal fraud analysis with a reinforcement-learning PQC
orchestrator that chooses NIST-standard ML-KEM and ML-DSA
parameter sets based on transaction-level risk. Publicly available multimodal
datasets (tabular features, device metadata, text fields, and relational graphs)
were converted into a fused format and tested in adversarial, latency-controlled,
and throughput-controlled experiments. PQC-enabled OpenSSL/liboqs was used to
implement the crypto layer and measure real-world handshake performance.
Findings indicate that the proposed model improved fraud-detection AUPR from a
baseline of 0.35 to 0.58 and increased ROC-AUC to 0.966, outperforming
the classical and hybrid systems. The adaptive PQC algorithm decreased
handshake latency to 68 ms, significantly lower than static PQC setups
(90-120 ms). Robustness tests showed a smaller degradation in recall (−9) and a
cipher-downgrade success rate of less than 1%, indicating better resilience
to these threat vectors. Altogether, the results show that combining
multimodal GNN-based fraud detection with AI-adaptive PQC yields a viable,
quantum-resilient security architecture for next-generation digital payment
systems. |
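The risk-adaptive parameter selection can be caricatured as a threshold policy over the NIST FIPS 203/204 parameter sets. The study learns this policy with reinforcement learning, so the fixed cut-offs below are purely illustrative:

```python
def select_pqc_params(risk_score):
    """Map a transaction risk score in [0, 1] to an (ML-KEM, ML-DSA)
    parameter pair. ML-KEM-512/768/1024 and ML-DSA-44/65/87 are the
    standardised NIST parameter sets; the thresholds here are
    illustrative stand-ins for a learned policy."""
    if risk_score < 0.3:                      # low risk: cheapest handshake
        return ("ML-KEM-512", "ML-DSA-44")
    if risk_score < 0.7:                      # medium risk
        return ("ML-KEM-768", "ML-DSA-65")
    return ("ML-KEM-1024", "ML-DSA-87")       # high risk: strongest level

print(select_pqc_params(0.1))   # -> ('ML-KEM-512', 'ML-DSA-44')
print(select_pqc_params(0.85))  # -> ('ML-KEM-1024', 'ML-DSA-87')
```

The point of adaptivity is that most low-risk transactions take the cheaper handshake, which is how the average latency can drop below that of a uniformly strong static PQC deployment.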
|
Keywords: |
Digital Payments Fraud, Post-Quantum Cryptography, Adaptive Security, Graph
Neural Networks, Reinforcement Learning. |
|
DOI: |
https://doi.org/10.5281/zenodo.18454191 |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st January 2026 -- Vol. 104. No. 2-- 2026 |
|
Full
Text |
|
|
Title: |
FEDERATED LEARNING FOR PRIVACY PRESERVING MACHINE LEARNING IN INTERNET OF THINGS
IOT NETWORKS CHALLENGES SOLUTIONS AND FUTURE DIRECTIONS |
|
Author: |
P. SURIYA, DR. P SUMITHABHASHINI, EERLA RAJESH, KALADI GOVINDARAJU, VUPPULOORI
RAVI SEKHARA REDDY, ANANTHA RAO GOTTIMUKKALA, SRIJA GUNDAPANENI, K SWETHA |
|
Abstract: |
The explosive growth of Internet of Things (IoT) devices has led to the
production of large amounts of sensitive data, raising privacy concerns for
traditional machine learning models. Federated Learning (FL) addresses this by
enabling models to be trained in a decentralized manner, keeping the data on
devices. However, issues such as communication overhead, privacy preservation, and
scalability remain, especially in resource-constrained IoT scenarios.
We propose a new FL framework for IoT networks that combines
privacy-protection mechanisms such as differential privacy and homomorphic
encryption with model compression and dynamic client selection. The framework
was tested on standard IoT datasets and achieved 88.2% model accuracy,
outperforming FedAvg (84.6%), FedProx (85.9%), and HeteroFL (83.4%). The results
showed that communication overhead was reduced by as much as 15%, while the
system could efficiently scale to 1000 clients with a negligible increase in
per-round training time. In summary, the presented framework achieves high
accuracy, privacy, and efficiency and is highly scalable for large IoT
networks, but it still faces limitations such as resource constraints and
parameter tuning for privacy preservation. In future work, we will make the
framework more robust, integrate asynchronous learning, and test it in real-life
IoT scenarios to achieve better scalability and applicability. |
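The decentralized training loop rests on federated averaging: clients send model parameters, not data, and the server averages them weighted by local dataset size. A generic FedAvg sketch over flat parameter vectors (the proposed framework layers differential privacy, encryption, compression, and client selection on top of this):

```python
def fedavg(client_weights, client_sizes):
    """Federated averaging: average each client's model parameters,
    weighted by that client's local dataset size, so raw data never
    leaves the device -- only parameters are shared."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [sum(w[j] * n for w, n in zip(client_weights, client_sizes)) / total
            for j in range(dim)]

# Two clients, one holding twice as much local data as the other.
w = fedavg(client_weights=[[1.0, 2.0], [4.0, 5.0]],
           client_sizes=[200, 100])
print(w)  # -> [2.0, 3.0]
```

Each round repeats this: broadcast the global vector, let clients take local gradient steps, then re-aggregate.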
|
Keywords: |
Federated Learning, Internet of Things, Privacy Preservation, Communication
Efficiency, Model Accuracy |
|
DOI: |
https://doi.org/10.5281/zenodo.18454221 |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st January 2026 -- Vol. 104. No. 2-- 2026 |
|
Full
Text |
|
|
Title: |
A DUAL-BACKBONE SWIN-TRANSFORMER AND EFFICIENTNET FRAMEWORK FOR ACCURATE
ALZHEIMER’S DISEASE STAGE PREDICTION |
|
Author: |
M K V ANVESH, PRAJNA BODAPATI |
|
Abstract: |
Alzheimer's disease (AD) is a serious health problem throughout the world that
gradually damages memory and thinking abilities. Identifying the
precise stage of AD is a complex task for clinicians, as
the disorder progresses through stages such as Non-Demented, Very Mild Demented,
Mild Demented, and Moderate Demented. Current diagnostic tools often
struggle to correctly sort patients into these groups, and such problems
frequently lead to wrong or late diagnoses. To address these challenges, this
study proposes a dual-backbone deep learning framework that integrates the Swin
Transformer with EfficientNet-B7. The model was trained on the Alzheimer's
Synthesized Dataset, which provides 8,000 pseudo-RGB MRI images (roughly 2,000
images per disease stage). All images were prepared by resizing to a fixed pixel
resolution, applying various augmentations, and normalizing. The model
uses two specialized components: the Swin Transformer to pick up
large-scale brain patterns, and EfficientNet-B7 to focus on fine local
details. The features from the two backbones are merged and passed to a
single classifier, which makes the final prediction across the four
disease classes. The model achieved the best results, with an accuracy of 99% and
strong F1-scores after being trained with an 85% training and 15% validation
data split. Further, Grad-CAM heatmaps are used to provide clear explanations
and confirm that the results are reliable. This combination of high performance
and interpretability makes the overall system a promising and dependable method
for rapid Alzheimer's diagnosis. |
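The fusion step, concatenating the two backbones' features before a shared classifier, can be sketched as below. Dimensions, weights, and scores are illustrative; real backbones emit high-dimensional features and the linear head is trained, not hand-set:

```python
def fuse_and_classify(global_feats, local_feats, weights, biases):
    """Concatenate the two backbone feature vectors and score the four
    stages with a linear head (argmax over class scores). Weights and
    biases here are illustrative, not trained values."""
    fused = global_feats + local_feats          # simple concatenation
    scores = [sum(f * w for f, w in zip(fused, row)) + b
              for row, b in zip(weights, biases)]
    stages = ["Non-Demented", "Very Mild", "Mild", "Moderate"]
    return stages[scores.index(max(scores))]

stage = fuse_and_classify(
    global_feats=[0.2, 0.9],   # stand-in for Swin Transformer output
    local_feats=[0.1, 0.4],    # stand-in for EfficientNet-B7 output
    weights=[[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]],
    biases=[0, 0, 0, 0])
print(stage)  # -> 'Very Mild'
```

Concatenation keeps the global and local feature spaces intact and leaves it to the classifier to weight them, which is the design choice the abstract describes.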
|
Keywords: |
Alzheimer's Disease Classification, Swin Transformer, EfficientNet-B7, Pseudo-RGB
MRI, Dual-Backbone Deep Learning, Grad-CAM. |
|
DOI: |
https://doi.org/10.5281/zenodo.18454233 |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st January 2026 -- Vol. 104. No. 2-- 2026 |
|
Full
Text |
|
|
Title: |
CONSTRAINT-AUGMENTED NEURAL NETWORKS FOR SHAKESPEAREAN VERSE: INTEGRATING
PROSODIC FIDELITY WITH TRANSFORMER MODELS |
|
Author: |
MEHULKUMAR H KANTARIA, SAHANA EDWIN, VISHAL NAMIREDDY, S. KRISHNAKUMARI,
JAKKAPU NAGALAKSHMIDEVI, DR. GANTA JACOB VICTOR |
|
Abstract: |
This research proposes a methodology for augmenting prosodic constraints to
enhance the creation of Shakespearean poetry in neural models of language.
Contemporary transformer-based generators are proficient in fluency but often do
not satisfy the stringent metrical and rhyming standards of formal poetry. To
fill this gap, the proposed technique uses specialized prosody embeddings as
well as prosody-aware attention mechanisms to combine explicit prosodic
information such as syllable counts, stress patterns, and rhyme suffixes.
The model is guided toward simultaneous optimization of linguistic quality as
well as prosodic integrity by a composite training goal that combines
cross-entropy loss with distinct meter and rhyme penalties and optional
reinforcement learning. Experiments on a prosodically annotated Shakespearean
corpus demonstrate substantial gains in meter accuracy, rhyme correctness, and
overall stylistic adherence compared with standard GPT-based baselines, without
sacrificing coherence or fluency. The results highlight the effectiveness of
embedding phonological structure directly into neural architectures and offer a
scalable approach for controllable poetic and stylistically constrained text
generation. |
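The composite training objective combines cross-entropy with separate meter and rhyme penalties. A minimal sketch, with illustrative lambda weights rather than the authors' hyperparameters:

```python
def composite_loss(ce_loss, meter_penalty, rhyme_penalty,
                   lam_meter=0.5, lam_rhyme=0.5):
    """Composite objective from the abstract: cross-entropy plus
    weighted meter and rhyme penalties. The lambda weights are
    illustrative hyperparameters, not the authors' values."""
    return ce_loss + lam_meter * meter_penalty + lam_rhyme * rhyme_penalty

# A line that rhymes correctly but breaks iambic pentameter is
# penalised only through the meter term.
print(round(composite_loss(ce_loss=2.0, meter_penalty=1.2,
                           rhyme_penalty=0.0), 3))  # -> 2.6
```

Tuning the two lambdas trades fluency (cross-entropy) against prosodic fidelity, which is the balance the experiments evaluate.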
|
Keywords: |
Prosodic Constraint Augmentation, Shakespearean Verse Generation, Neural
Language Models, Meter and Rhyme Modelling, Controllable Text Generation |
|
DOI: |
https://doi.org/10.5281/zenodo.18454285 |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st January 2026 -- Vol. 104. No. 2-- 2026 |
|
Full
Text |
|
|
Title: |
FINTECH AS A FACTOR OF ASYMMETRIC TRANSFORMATION OF THE BANKING SECTOR: EUROPEAN
EXPERIENCE AND POTENTIAL FOR UKRAINE |
|
Author: |
CRISTINA GABRIELA COSMULESE, ARTUR ZHAVORONOK, ROMAN GRESHKO, NATALIA OSTROVSKA,
VIOLETTA KHARABARA, VIKTORIIA TKACH |
|
Abstract: |
In the current conditions of digital transformation of the financial sector,
FinTech is one of the key drivers of structural changes in banking. The COVID-19
pandemic, the full-scale war in Ukraine, and global digitalization processes
have significantly accelerated the introduction of contactless payment
technologies, mobile banking, and remote financial services. Special scientific
attention is required to analyze the impact of FinTech on the banking systems of
developing countries, which simultaneously strive to catch up with digital
leaders and face institutional, economic, and infrastructural constraints.
Ukraine, being in conditions of military instability, demonstrates unexpectedly
high rates of digitalization of banking services, which actualizes the need for
its scientific understanding in a comparative international context. The purpose
of the study is to analyze the main trends in the development of financial
technologies in the banking sector of different countries, in particular, to
compare digital transformation in developed and developing countries, to
identify success factors, development barriers, and the potential for
integrating innovations into the banking sector. The study uses quantitative and
comparative approaches. Statistical indicators characterizing access to
financial services and the level of digitalization of banking activities are
analyzed, in particular, the number of bank branches and ATMs, the volume of
mobile and Internet banking transactions, the number of transactions per capita
and the share of digital payments in GDP using the example of 11 European
countries. Methods of comparative analysis, data visualization and
interpretation of digital trends in dynamics are used. The results of the study
indicate the presence of pronounced unevenness and asymmetry of the digital
transformation of the banking sector under the influence of FinTech. In
developed countries (Estonia, Poland, Turkey), FinTech is actually transforming
into the dominant banking service channel, which is accompanied by a reduction
in physical banking infrastructure, a rapid increase in the number of digital
transactions and a high share of non-cash payments in GDP. In contrast, in
developing countries (Albania, Kosovo, Bosnia and Herzegovina, Moldova), digital
financial services are at the stage of formation and are developing in parallel
with the physical banking infrastructure, which indicates the catching-up nature
of digitalization and limited substitutability of traditional service channels.
For Ukraine, despite the lack of complete statistical data in the sample, a
specific model of forced digital transformation is characteristic, in which the
reduction of physical banking infrastructure is combined with the intensive
growth of remote banking services, in particular mobile banking and non-cash
payments. This makes it possible to place Ukraine in the group of countries with a high
potential for accelerated FinTech integration, provided that the institutional
and regulatory environment is stabilized. The value of the study lies in
identifying and systematizing models of asymmetric FinTech transformation of the
banking sector in European countries with different levels of economic
development, which helps explain the absence of a universal trajectory of
digitalization. The practical significance of the results obtained lies in the
possibility of using them to form strategies for the digital transformation of
the banking sector, develop financial inclusion policies, and adapt best
European FinTech practices for catching-up countries, in particular for
Ukraine. |
|
Keywords: |
ICT Sector; FinTech; banking sector; digital transformation; developing
countries; IMF Financial Access Survey; digital transactions; cashless payments;
banking services market; national economy; Ukraine; Europe. |
|
DOI: |
https://doi.org/10.5281/zenodo.18454300 |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st January 2026 -- Vol. 104. No. 2-- 2026 |
|
Full
Text |
|
|
Title: |
BALANCING ACCURACY AND ROBUSTNESS IN INTRUSION DETECTION: AN ENSEMBLE DEEP
LEARNING APPROACH |
|
Author: |
U.V. RAMESH, RAMESH KOTHAPALLI, CH BHANU PRAKASH, P. SRILATHA, SANDA SRI HARSHA,
ANANTHA RAO GOTTIMUKKALA |
|
Abstract: |
Intrusion Detection Systems (IDS) increasingly face adaptive, multimodal cyberattacks,
yet most current systems are highly accurate under clean conditions but
insufficiently robust to adversarial perturbations, creating a major robustness
gap. Addressing this challenge is essential in current network settings, where
attacks are modified at the structural, temporal, and content levels. The
purpose of this study was to design and test a sound multimodal intrusion
detection system that delivers both accuracy and adversarial resilience. Our
experimental study was based on four publicly available IDS datasets and
combined flow features, packet payloads, graph-structured interactions, and
host telemetry into a single architecture. The model, called the Robust
Multimodal Graph-Transformer Ensemble (RMGTE), adds contrastive alignment,
adversarial training, Mixture-of-Experts routing, and randomized smoothing
to improve robustness, and was evaluated against RF,
GNN, and hybrid models. Results indicate that RMGTE attained 96.1% accuracy, 0.93
macro-F1, and 0.61 adversarial accuracy, significantly higher than the
baselines; furthermore, it had a certified robustness radius of 0.48 and high
cross-dataset generalization (0.87 macro-F1 on a BoT-IoT holdout). These results
indicate that multimodal fusion and robustness-aware ensemble design make a
substantial contribution to the performance and reliability of IDS, providing
a promising path toward resilient next-generation cybersecurity
systems. |
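The reported 0.93 macro-F1 averages per-class F1 with equal class weight, so rare attack classes count as much as benign traffic. A minimal sketch with illustrative labels:

```python
def macro_f1(y_true, y_pred):
    """Macro-averaged F1: per-class F1 scores averaged with equal
    class weight, regardless of how common each class is."""
    classes = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        f1s.append(2 * tp / (2 * tp + fp + fn) if tp else 0.0)
    return sum(f1s) / len(f1s)

y_true = ["benign", "benign", "dos", "dos", "scan"]
y_pred = ["benign", "dos",    "dos", "dos", "scan"]
# per-class F1: benign 0.667, dos 0.8, scan 1.0
print(round(macro_f1(y_true, y_pred), 3))  # -> 0.822
```

Plain accuracy would hide failures on rare attack types, which is why robustness-oriented IDS papers report the macro average.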
|
Keywords: |
Intrusion Detection, Multimodal Deep Learning, Graph Transformer, Adversarial
Robustness, Ensemble Learning |
|
DOI: |
https://doi.org/10.5281/zenodo.18454317 |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st January 2026 -- Vol. 104. No. 2-- 2026 |
|
Full
Text |
|
|
Title: |
A COMPUTATIONALLY-STRUCTURED FRAMEWORK FOR GENERALIZABLE AND EXPLAINABLE AI IN
INJECTION MOLDING: INTEGRATING DATA STANDARDIZATION, TRANSFER LEARNING, AND EDGE
DEPLOYMENT |
|
Author: |
MOHAMMED HDID , HAKIM JEBARI , HASSAN SAMADI |
|
Abstract: |
Artificial Intelligence (AI) and Machine Learning (ML) have become key elements
of modern manufacturing, but the majority of the models applied
in injection molding are domain-specific and cannot be generalized to other
machines, materials, or process conditions. This paper addresses this
problem through a structured literature review devoted to the
generalization and transferability of industrial AI systems. Based
on this review, a five-step conceptual framework is put forward that
formalizes how cross-domain robustness may be attained through
fundamental techniques such as data normalization, feature relevance
control, regularization, domain adaptation, and explainable modeling. The stages
draw on the latest developments in AI, the introduction of
digital twins, and the integration of hybrid edge-cloud architectures. The
proposed framework offers a conceptual basis for constructing domain-robust
and interpretable AI pipelines, linking machine-specific
learning with scalable and transferable intelligence. By aligning
industrial process data with the principles of reproducibility and
generalization, this work advances both AI methodology and the practical use
of smart manufacturing toward reliable, cross-domain automation. |
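The data-normalization stage of the framework amounts to fitting standardization statistics on one domain and reusing them unchanged on another. A minimal z-score sketch (the feature names in the comments are illustrative):

```python
import math

class Standardizer:
    """Z-score normalization: fit mean/std on one machine's data, then
    apply the same statistics to data from another machine -- a basic
    prerequisite for cross-domain model transfer."""
    def fit(self, rows):
        n = len(rows)
        self.mean = [sum(col) / n for col in zip(*rows)]
        self.std = [math.sqrt(sum((x - m) ** 2 for x in col) / n) or 1.0
                    for col, m in zip(zip(*rows), self.mean)]
        return self

    def transform(self, rows):
        return [[(x - m) / s for x, m, s in zip(r, self.mean, self.std)]
                for r in rows]

# Fit on machine A (columns: e.g. melt temperature, injection pressure)...
scaler = Standardizer().fit([[200.0, 50.0], [220.0, 70.0]])
# ...then reuse the same statistics for machine B's readings.
print(scaler.transform([[210.0, 60.0]]))  # -> [[0.0, 0.0]]
```

Freezing the statistics at fit time is what keeps features comparable across domains; refitting per machine would silently shift the feature space the model was trained on.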
|
Keywords: |
Plastic Injection Molding, Machine Learning (ML), Artificial Intelligence (AI),
Process Optimization, Production Efficiency, Feature Normalization, Defect
Detection and Prediction, Explainable AI (XAI), Predictive Maintenance, Digital
Twin. |
|
DOI: |
https://doi.org/10.5281/zenodo.18454333 |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st January 2026 -- Vol. 104. No. 2-- 2026 |
|
Full
Text |
|
|
Title: |
HYBRID MACHINE LEARNING MODELS FOR INTEGRATED URBAN AIR QUALITY AND WEATHER
FORECASTING - A DECISION-SUPPORT FRAMEWORK FOR SUSTAINABLE CITY PLANNING |
|
Author: |
P. NAGAMANI, GORANTLA NIPUN, AYINALA NAGA SAI, G KIRAN KUMAR, D KARTHIK PHANI
VARMA, DR. K. SWATHI, DR. N. NEELIMA, DR. T. V. SAI KRISHNA |
|
Abstract: |
Rapid urbanization has intensified challenges related to air pollution and
extreme weather, directly impacting public health and sustainable city planning.
This study presents a hybrid machine learning framework for integrated air
quality and weather forecasting using environmental, meteorological, and urban
data. The model combines regression, classification, and time-series prediction
techniques to capture complex interactions. Experimental results show improved
air quality prediction (RMSE = 3.45) and weather forecasting accuracy (93.6%),
demonstrating the framework’s effectiveness as a decision-support tool for
climate-resilient and sustainable urban development. |
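The reported air quality score is a root-mean-square error, computed as below; the readings in the example are hypothetical, not from the study's dataset:

```python
import math

def rmse(actual, predicted):
    """Root-mean-square error: square the residuals, average them,
    then take the square root (lower is better)."""
    n = len(actual)
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n)

# Hypothetical hourly PM2.5 readings vs. model output.
print(round(rmse([40.0, 55.0, 62.0], [42.0, 51.0, 60.0]), 3))  # -> 2.828
```

Because the residuals are squared before averaging, RMSE penalises the occasional large miss more heavily than mean absolute error would, which suits safety-relevant pollution forecasts.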
|
Keywords: |
Machine Learning, Air Quality Prediction, Weather Prediction, Urban
Planning, Hybrid Model |
|
DOI: |
https://doi.org/10.5281/zenodo.18454343 |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st January 2026 -- Vol. 104. No. 2-- 2026 |
|
Full
Text |
|
|
Title: |
IMPROVING ACCURACY IN FRAUD DETECTION IN PUBLIC COMPANIES FINANCIAL REPORTS
USING NATURAL LANGUAGE PROCESSING |
|
Author: |
ANGELA KURNIAWAN, LAUDYA NATALIE YUDHA, ARMANTO WITJAKSONO |
|
Abstract: |
This study develops and evaluates a Natural Language Processing (NLP) model for
detecting financial statement fraud in Indonesian public companies through
linguistic analysis of annual reports. We analyzed 72 financial reports (20
fraudulent, 52 non-fraudulent) from companies listed on the Indonesia Stock
Exchange using TF-IDF-based text representation combined with linguistic
complexity features. Machine learning classifiers (Logistic Regression, Random
Forest) were evaluated using 5-fold cross-validation. The NLP model achieved
77.3% accuracy, outperforming the traditional rule-based audit method
(40.9%) by a relative 88.9%. Contrary to theoretical expectations, fraudulent reports used
simpler language (ambiguity score: 0.69 vs. 0.98) but exhibited significantly
higher hedging language usage (p=0.033). TF-IDF features dominated model
performance (99.9% contribution). This study is the first to systematically
apply NLP-based fraud detection to Indonesian financial statements in a
bilingual reporting context, revealing context-specific linguistic patterns that
differ from Western fraud literature. |
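The hedging-language measure can be sketched as the share of tokens drawn from a hedge lexicon; the six-word lexicon below is an illustrative sample, not the authors' word list:

```python
HEDGES = {"may", "might", "could", "possibly", "approximately", "believe"}

def hedging_score(tokens):
    """Share of tokens that are hedging terms; the study found this
    significantly higher in fraudulent reports (p = 0.033). The hedge
    lexicon above is a small illustrative sample."""
    if not tokens:
        return 0.0
    return sum(t.lower() in HEDGES for t in tokens) / len(tokens)

report = "Revenue may increase and costs could possibly decline".split()
print(round(hedging_score(report), 3))  # -> 0.375
```

Such lexicon ratios are cheap linguistic features that can be concatenated with TF-IDF vectors before classification.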
|
Keywords: |
Natural Language Processing, Fraud Detection, Public Companies Financial
Reports, Artificial Intelligence, Manipulation |
|
DOI: |
https://doi.org/10.5281/zenodo.18454367 |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st January 2026 -- Vol. 104. No. 2-- 2026 |
|
Full
Text |
|
|
Title: |
PREDICTING INTERCONNECTED CLIMATE IMPACTS USING THE COX PROPORTIONAL HAZARDS
MODEL AND XAI |
|
Author: |
Y. YESUJYOTHI, DR. SUBHANI SHAIK, DR. YALLA VENKAT |
|
Abstract: |
Climate change has systemic and cumulative effects on air quality, water
availability, food production, and waste management, yet most current predictive
studies examine each area separately and provide little guidance on how to
interpret them to inform policy decisions. This paper interprets climate change
as a systemic risk over time and proposes a framework that integrates the Cox
proportional hazards model with explainable artificial intelligence (XAI) to
measure and explain cascading climate effects. Data from NASA, NOAA, and
environmental monitoring agencies, as well as peer-reviewed studies, were
combined to create multisource longitudinal datasets covering the 1980-2024
timeframe and including atmospheric, hydrological, agricultural, and
waste-related variables. The Cox model estimates risks in time-dependent
settings where all sectors are mutually dependent, while Shapley Additive
Explanations (SHAP) provide clear explanations of the effect of each variable.
The experimental evidence shows that the proposed framework attains a
concordance index of more than 0.82, outperforming traditional regression-based
models for predicting changes in climate risk. Temperature variations and CO2
emissions emerge as the predominant drivers of agricultural and water-related
risks, with PM2.5 concentration exerting the greatest influence on the
escalation of health risk. In contrast to earlier models built on static,
black-box assumptions, the proposed method provides interpretable, testable,
and policy-relevant insights into the dynamics of climate risks. The findings
underscore the value of evidence-based approaches to climate adaptation and
provide a scalable, explainable basis for integrated climate resilience
planning, especially in vulnerable and rapidly urbanizing areas. |
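The concordance index reported above (C > 0.82) measures how often a model ranks pairs of subjects correctly by risk. A simplified version of Harrell's C for right-censored data can be computed as follows (a sketch with invented toy data; ties in event time are simply skipped here, and production code would use an established survival-analysis library):

```python
from itertools import combinations

def concordance_index(times, events, risk_scores):
    """Simplified Harrell's concordance index for right-censored data.

    A pair (i, j) is comparable when the subject with the shorter
    observed time experienced the event; it is concordant when that
    subject also has the higher predicted risk. Ties in risk count 1/2.
    """
    concordant = comparable = 0.0
    for i, j in combinations(range(len(times)), 2):
        if times[j] < times[i]:
            i, j = j, i          # order the pair so i has the shorter time
        if times[i] == times[j] or not events[i]:
            continue             # incomparable: time tie, or earlier subject censored
        comparable += 1
        if risk_scores[i] > risk_scores[j]:
            concordant += 1
        elif risk_scores[i] == risk_scores[j]:
            concordant += 0.5
    return concordant / comparable

# Toy data: higher predicted risk should mean an earlier event.
times  = [2, 4, 6, 8]
events = [1, 1, 0, 1]            # third subject is censored
risks  = [0.9, 0.7, 0.4, 0.1]
print(concordance_index(times, events, risks))  # perfectly concordant -> 1.0
```

A C-index of 0.5 corresponds to random ranking, so a value above 0.82 indicates substantially better-than-chance risk ordering.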
|
Keywords: |
Climate Change, Cox Proportional Hazards Model, Explainable AI, Data Science,
Climate Resilience, Policy Support |
|
DOI: |
https://doi.org/10.5281/zenodo.18454388 |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st January 2026 -- Vol. 104. No. 2-- 2026 |
|
Full
Text |
|
|
Title: |
A HYBRID RESAMPLING–FEATURE SELECTION–ENSEMBLE FRAMEWORK FOR HIGH-FIDELITY
CREDIT CARD FRAUD DETECTION |
|
Author: |
SRINIVASARAO DHARMIREDDI, VISHAL NAMIREDDY, SATHISH KUMAR DEEKONDA, M V V S
SARMA, V. RAMA KRISHNA, RAVI KUMAR BOMMISETTI |
|
Abstract: |
Credit-card fraud detection requires high sensitivity to rare positive
transactions while controlling false positives in large-scale, temporal
transaction streams. We design a reproducible ensemble framework that
jointly optimizes resampling, hybrid feature selection, and model fusion to
improve rare-event detection. Using a time-aware nested cross-validation
protocol on the European credit-card benchmark, we treat resampling strategy and
feature selection as tunable components, apply SMOTE variants with cleaning,
fuse filter / wrapper / embedded selection scores, and combine diverse base
learners via stacking/weighted averaging. The proposed pipeline raises PR-AUC
from 0.185 (best single baseline) to 0.322 and increases fraud recall from 0.56
to 0.78 (Wilcoxon p=0.012; McNemar p=0.0068) while remaining stable across seeds
(PR-AUC CoV ≈ 2.8%) and robust to prevalence shifts and moderate input noise.
Implications: These gains translate to materially fewer undetected frauds and
manageable analyst workload, making the approach attractive for production
deployment with appropriate monitoring and periodic retraining and model
explainability checks. |
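SMOTE, which the pipeline above treats as a tunable component, oversamples the minority class by interpolating between a minority point and one of its nearest minority neighbours. A minimal sketch of that interpolation step follows (toy 2-D data and a plain nearest-neighbour search; a real pipeline would use imbalanced-learn's implementations, including the cleaning variants the abstract mentions):

```python
import random

def smote_sketch(minority, n_synthetic, k=2, seed=0):
    """Generate synthetic minority samples by linear interpolation.

    For each synthetic point: pick a random minority sample, find its k
    nearest minority neighbours (Euclidean), pick one, and interpolate a
    new point a random fraction of the way towards it.
    """
    rng = random.Random(seed)

    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    out = []
    for _ in range(n_synthetic):
        base = rng.choice(minority)
        neighbours = sorted((p for p in minority if p is not base),
                            key=lambda p: dist2(base, p))[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()
        out.append(tuple(b + gap * (n - b) for b, n in zip(base, nb)))
    return out

minority = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
synthetic = smote_sketch(minority, n_synthetic=4)
# Every synthetic point lies on a segment between two real minority points,
# so each coordinate stays within the minority bounding box.
```

Because synthetic points are convex combinations of real frauds, they densify the minority region rather than duplicating records, which is what lets the classifier learn a broader decision boundary for rare positives.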
|
Keywords: |
Credit-Card Fraud; Class Imbalance; Resampling; Feature Selection; Ensemble
Learning. |
|
DOI: |
https://doi.org/10.5281/zenodo.18454417 |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st January 2026 -- Vol. 104. No. 2-- 2026 |
|
Full
Text |
|
|
Title: |
TRANSFORMING MOROCCAN E-COMMERCE THROUGH GENERATIVE AI: EVIDENCE FROM AN
EMPIRICAL STUDY |
|
Author: |
HAFSA LEMSIEH, ABDENNOUR CHATTAOU, AICHA ESSAOULA, YOUNESS JOUILIL, MUSTAPHA
BASSOUR, JAMILA JOUALI |
|
Abstract: |
This research investigates the potential role of generative artificial
intelligence (AI) within the e-commerce industry. Generative AI is employed in a
variety of applications, such as automating content creation and personalizing
user experiences, which significantly improve operational efficiency and
customer engagement for e-commerce companies. For example, AI-driven product
recommendations tailored to individual customer preferences have been shown to
increase conversion rates, while AI-powered chatbots help keep customers
satisfied and loyal through 24/7 customer service. However, the existing
academic literature tends to focus on technological capabilities of AI in
developed markets, leaving a considerable gap in understanding how these
technologies are implemented and perceived in emerging economies such as
Morocco. This paper seeks to address how e-commerce businesses can overcome
specific barriers in order to effectively harness AI. Despite the obvious
benefits, several challenges stand in the way of widespread adoption of
generative AI in the e-commerce landscape. These include inadequate technology
infrastructure, a shortage of qualified AI professionals, and significant
ethical and regulatory concerns. Issues relating to data protection, privacy and
transparency of AI algorithms are particularly pressing, requiring rigorous
scrutiny and strong regulation frameworks to guarantee the ethical
implementation of AI. The purpose of this study lies in the need to bridge the
gap between worldwide progress in AI and national implementation strategies, in
order to ensure that the digital transformation is adapted to the Moroccan
context, both culturally and economically. For this study, qualitative research
was conducted through in-depth interviews with e-commerce professionals in
Morocco, and the interviews were analyzed using MAXQDA software. The results
indicate that generative AI can boost efficiency, improve customer satisfaction
and extend market reach. The study concludes that while challenges remain, the
strategic implementation of generative AI, supported by appropriate policies and
frameworks, can act as a catalyst for growth and innovation in e-commerce,
positioning it competitively in the global marketplace. This research lays the
foundations for further studies into the impact of AI on e-commerce, and offers
practical recommendations for stakeholders wishing to leverage generative AI for
business growth in Morocco. |
|
Keywords: |
Generative AI, E-Commerce, Empirical Study, Technology Acceptance. |
|
DOI: |
https://doi.org/10.5281/zenodo.18454427 |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st January 2026 -- Vol. 104. No. 2-- 2026 |
|
Full
Text |
|
|
Title: |
RESIDUAL ATTENTION-ENHANCED MULTI-SCALE DEEP LEARNING FRAMEWORK FOR ROBUST AND
CONTINUITY-AWARE LUNG TUMOR SEGMENTATION FROM CT IMAGES |
|
Author: |
PALYAM MUTHYALA MADHURI, DR. VIJAY KUMAR DAMERA |
|
Abstract: |
The accurate segmentation of lung tumors in computed tomography (CT) scans is a
critical step in computer-aided diagnosis and clinical planning. However, many
existing segmentation pipelines remain sensitive to scale variability, boundary
ambiguity, and inter-slice inconsistency, which can lead to fragmented masks and
elevated false positives in challenging CT cases. In addition, complex
anatomical variations, irregular lesion boundaries, and low tissue contrast
often limit the performance of conventional deep learning models. This study
introduces a
Residual Attention-Enhanced Multi-Scale Deep Learning Framework (RAEMD-Net)
designed to achieve robust and precise segmentation of lung tumors from CT
images. The framework integrates multi-scale convolutional feature extraction,
DenseNet-based residual connectivity, and a bidirectional LSTM module to capture
both spatial and contextual dependencies within 3D image sequences. An adaptive
attention mechanism dynamically adjusts the feature weights to enhance tumor
boundary localization, while a hybrid loss function combining Dice and focal
components improves gradient stability and class balance. Extensive experiments
conducted on the LUNA16 and NSCLC-Radiomics datasets demonstrate that RAEMD-Net
achieves a mean Dice coefficient of 0.96, Jaccard index of 0.94, and Hausdorff
distance reduction of 12%, outperforming U-Net, SegNet, and Transformer-based
baselines. Ablation analyses further indicate that residual attention
contributes most strongly to false-positive suppression, while multi-scale
fusion supports edge continuity across heterogeneous tumor sizes. The proposed
model achieves computational efficiency through dynamic learning rate scheduling
and parameter optimization, making it suitable for real-time deployment in
medical imaging systems. |
|
Keywords: |
Deep Learning; Multi-Scale CNN; Residual Attention; LSTM Networks; Lung Tumor
Segmentation; CT Imaging; Computer Vision; Hybrid Loss Optimization; Scientific
Computing |
|
DOI: |
https://doi.org/10.5281/zenodo.18454439 |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st January 2026 -- Vol. 104. No. 2-- 2026 |
|
Full
Text |
|
|
Title: |
FAULT-TOLERANT CLUSTERING PROTOCOL FOR IOT WITH COLD AND HOT STANDBY REDUNDANCY |
|
Author: |
DR. A RADHA KRISHNA, DR. A LAKSHMI NARAYANA, KLV PRASAD, DR. INAKOTI RAMESH
RAJA, G SANTHOSH KUMAR, NEHA BELWAL |
|
Abstract: |
IoT networks are becoming increasingly unstable and heterogeneous environments
in which frequent device failures, noisy multi-modal signals, and uneven
workloads diminish cluster stability and system availability. Despite rapid
developments in IoT analytics, current clustering protocols seldom incorporate
adaptive redundancy and rarely address large-scale, error-prone deployments.
This gap limits their use in critical applications such as smart cities,
healthcare monitoring, and industrial IoT. The objective of this study was to
create a robust, fault-tolerant clustering protocol that learns stable cluster
structures while dynamically maintaining cold and hot standby redundancy to
ensure high availability with low overhead. A federated experimental model was
adopted with publicly available IoT data (telemetry, network flows, acoustic
faults, and ambient sensing), in which clustering was performed with Federated
Graph-Contrastive Clustering (FGCC) and redundancy choices were made with
reinforcement learning. Devices were subjected to injected failures, network
partitions, and workload variation to evaluate availability, MTTD, MTTR,
clustering accuracy, and computational cost. The proposed FT-CoH model
performed better than k-means, deep clustering, and federated baselines,
achieving an ARI of 0.78, availability of 99.1%, and a reduction of MTTR to 2.1
seconds. These results show that multimodal federated clustering, complemented
by adaptive cold-hot standby management, substantially improves distributed
fault tolerance in the IoT. Overall, FT-CoH offers a reliable and scalable
framework for next-generation smart IoT systems, with strong prospects for
autonomous monitoring, smart infrastructure, and critical IoT applications. |
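The cold/hot standby trade-off behind the reported MTTR gains can be made concrete with a small sketch (the timing constants are invented for illustration, not the paper's measurements): a hot standby is already running and state-synchronised, so recovery skips the boot step a cold standby must pay for.

```python
def recovery_time(standby_mode, detect_s=0.5, boot_s=5.0, sync_s=0.5):
    """Time to restore service after a cluster-head failure.

    hot standby:  replica is running and state-synchronised, so recovery
                  is failure detection plus a state hand-off.
    cold standby: replica must first be booted, then synchronised.
    """
    if standby_mode == "hot":
        return detect_s + sync_s
    if standby_mode == "cold":
        return detect_s + boot_s + sync_s
    raise ValueError("standby_mode must be 'hot' or 'cold'")

# Hot standby trades energy (an always-on replica) for a much lower MTTR.
print(recovery_time("hot"))   # 1.0 seconds
print(recovery_time("cold"))  # 6.0 seconds
```

A reinforcement-learning controller of the kind the abstract describes would presumably trade such recovery costs against the energy cost of keeping hot replicas alive when choosing a redundancy mode per cluster.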
|
Keywords: |
Fault-Tolerant IoT, Federated Clustering, Hot–Cold Standby, Graph Neural
Networks, Redundancy Optimization. |
|
DOI: |
https://doi.org/10.5281/zenodo.18454456 |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st January 2026 -- Vol. 104. No. 2-- 2026 |
|
Full
Text |
|
|
Title: |
AN EMPIRICAL STUDY ON THE IMPACT OF AI-BASED INSTRUCTIONAL PROGRAMS ON IMPROVING
STORY WRITING SKILLS AMONG TENTH-GRADE STUDENTS |
|
Author: |
SAMEER ABDULSALAM ALSOUS, AWS I. ABUEID |
|
Abstract: |
The study aimed to reveal the effect of artificial intelligence programs on
improving story writing among tenth-grade students. The study employed an
experimental method, and the study sample consisted of one group of 23 female
students who were randomly selected in a cluster sampling manner. The study used
a pre- and post-test. The results showed a statistically significant difference
(α = .05) in the students' scores on the story writing skills test before and
after the intervention. Students exposed to the teaching strategy based on
artificial intelligence programs showed differences, overall and for each
individual writing skill, in favor of the post-test. The study made several
recommendations, the most important of which was the use of artificial
intelligence programs to enhance students' writing and storytelling skills. |
|
Keywords: |
Creative Writing, Artificial Intelligence, Magic School, Writing Skills,
Education Technology, AI-Powered Learning |
|
DOI: |
https://doi.org/10.5281/zenodo.18454465 |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st January 2026 -- Vol. 104. No. 2-- 2026 |
|
Full
Text |
|
|
Title: |
INNOVATIVE PRACTICES OF PUBLIC ADMINISTRATION: APPROACHES TO PUBLIC SECTOR
MANAGEMENT IN THE ERA OF DIGITALIZATION AND OPEN GOVERNMENT |
|
Author: |
OLEKSANDR SOMAK, IHOR DMYTRENKO, INNA NINYUK, INNA SURAY, OLEKSANDR NAKONECHNYI |
|
Abstract: |
The relevance of the study is due to the need for systematic analysis and
implementation of the latest public administration practices that ensure the
efficiency, transparency and adaptability of public institutions in the context
of digitalization, globalization challenges and growing expectations of
citizens. The research problem is the insufficient consideration of regional
specifics and the effectiveness of renewable energy technologies implementation
in the context of energy transformation. The purpose of the study is to
systematize the latest public administration practices, identify the factors of
their effectiveness, and develop an approach to innovation transfer, taking into
account the challenges of digitalization and open government. The research
results showed that modern public administration is formed at the intersection
of technological innovations and humanistic values, where digitalization is not
seen as an end in itself, but as a tool for enhancing transparency,
inclusiveness, and trust in the state. A comparative analysis of international
and national experience has shown that leading countries focus on creating
integrated platform ecosystems and using artificial intelligence, while Ukraine
emphasizes mobile services, paperless technologies, and active citizen
engagement. Based on the clustering of EGDI dynamics in European countries
(2010-2024), the authors reveal the uneven development of digital governance and
identifies four typical trajectories – from stable leaders to states that are in
search of institutional prerequisites for digitalization. It is proved that the
universal factors of success are political and institutional leadership, human
and financial capacity, regulatory adaptability, cyber resilience, and civic
participation. The importance of the humanistic dimension – the ability of
innovative practices to reduce digital inequality, ensure social justice and
maintain partnership between the state and citizens – is substantiated. Uneven
development of digital public administration, limited inclusiveness, and
insufficient institutional capacity to implement the latest management practices
in the context of sustainable development and public trust have been identified.
An effective model of public administration for the future will be based on the
synergy of global technological trends and national contexts, combining
innovation with a human-centered approach. |
|
Keywords: |
Digital Transformation, Innovative Practices, Administrative Management, EGDI,
Cluster Analysis |
|
DOI: |
https://doi.org/10.5281/zenodo.18454478 |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st January 2026 -- Vol. 104. No. 2-- 2026 |
|
Full
Text |
|
|
Title: |
AUTOMATED RECOGNITION OF IRONY AND SARCASM IN ENGLISH FICTION USING NATURAL
LANGUAGE PROCESSING TECHNIQUES |
|
Author: |
IRYNA BLYNOVA, NADIIA BRESLAVETS, YEVHENIIA ARTOMOVA, IVAN BAKHOV, VIKTORIIA
MYKHAILENKO |
|
Abstract: |
Relevance of the research: The relevance of the research is determined by the
need for highly accurate automated detection of irony and sarcasm in literary
discourse, which remains a challenge for modern Natural Language Processing
(NLP) systems because of rhetorical complexity and semantic ambiguity. Aim of
the research: The aim of the research is to develop and formalize a multi-level
methodology for building and validating a Hybrid NLP model for automated
recognition of irony and sarcasm in English literary prose, taking into account
rhetorical, stylistic, and contextual semantic characteristics. Research
methods: The research employed the following methods: comparative qualitative
analysis (in two iterations), representative explication, comparative metric
analysis, and decomposition analysis for optimization. Obtained results: The
comparative metric study found that the optimized Hybrid NLP model, which
combines BERT4Irony, RuleML, Defeasible Logic, semantic pre-processing, and a
meta-ensemble (XGBoost, MetaSVM), demonstrates the highest accuracy (Accuracy =
0.96, F1 = 0.945, AUC-ROC = 0.96) and adaptability to rhetorical complexity.
The improvement of generative and discriminative metrics (BLEU = 0.91, ROUGE =
0.92, SPD = 0.88) confirms the model's ability to accurately detect ironic
inversions, stylistic deviations, and pragmatic implicatures in literary
discourse. Academic novelty of the research: The academic novelty of the
research is the developed Hybrid NLP model for automated irony and sarcasm
detection, which combines BERT4Irony, ontological modelling (Defeasible Logic,
RuleML), and a meta-ensemble (XGBoost, MetaSVM). The model provides semantic
accuracy and cognitive correspondence to the rhetorical structure of a literary
text. Prospects for further research: Further research may focus on the
cross-linguistic adaptation of the model, taking into account linguistic and
cultural differences in the conveyance of irony and sarcasm, in particular
through the formalization of typical rhetorical patterns in different
languages. |
|
Keywords: |
Irony, Sarcasm, Ontological Modelling, Hybrid NLP Model, Transformer
Architecture, Semantic Framing, Rule-Based Formalism, Explainable AI, Discourse. |
|
DOI: |
https://doi.org/10.5281/zenodo.18454495 |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st January 2026 -- Vol. 104. No. 2-- 2026 |
|
Full
Text |
|
|
Title: |
AN AI-ENABLED WEIGHTED PRIORITY SCHEDULING ALGORITHM FOR REAL-TIME TELEMEDICINE
TRIAGE USING IOMT DATA |
|
Author: |
S. HEMALATHA, KIRAN MAYEE ADAVALA, N. MUTHUVAIRAVAN PILLAI, S. K. SATYANARAYANA,
MANISHA MADDEL, G. KRISHNA MOHAN |
|
Abstract: |
Telemedicine is a rapidly developing technology that has been adopted by
patients across all disease categories. Providing treatment on time safeguards
human life; to improve telemedicine consultation, several algorithms have
augmented consultation assignment between doctor and patient based on a Human
Health Criticality Score. Although these algorithms perform well, there is
still room to improve their efficiency. Especially in critical scenarios such
as emergency care, understanding the patient's emergency condition and
assigning priority to the consultation can save the patient's life. This
article proposes a solution for timely teleconsultation support of critical
patients: an algorithm named Enhanced Artificial Intelligent Priority-Based
Telemedicine Algorithm (E-AIPTA), which automatically assigns priority to
teleconsultation patients based on real-time physiological data. Five primary
body parameters are used for assigning priority: heart rate (HR), oxygen
saturation (SpO₂), blood pressure (BP), respiratory rate (RR), and body
temperature (T), collected by wearable sensor devices attached to the patient.
Each of the five parameters is assigned a medical parameter weight of low,
medium, or high, and an overall Health Criticality Score (HCS) is computed to
identify emergency-care patients. The proposed algorithm was evaluated using
simulated data sets and achieved over 93% accuracy with an average computation
time of 0.84 seconds, while reducing latency and false alarms. This work can
improve real-time emergency management for critically ill patients in
teleconsultation. |
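The weighted Health Criticality Score described above can be sketched as a simple weighted sum over graded vitals (the weights and grade values here are illustrative assumptions, not the published E-AIPTA values):

```python
def health_criticality_score(vitals, weights=None):
    """Weighted Health Criticality Score from five vital-sign risk levels.

    Each vital is graded low/medium/high (0, 1, 2) against clinical
    thresholds; the HCS is the weighted sum, used to rank consultations.
    The default weights below are invented for illustration only.
    """
    grade = {"low": 0, "medium": 1, "high": 2}
    default_w = {"HR": 1.0, "SpO2": 1.5, "BP": 1.2, "RR": 1.0, "T": 0.8}
    w = weights or default_w
    return sum(w[k] * grade[v] for k, v in vitals.items())

stable   = {"HR": "low", "SpO2": "low", "BP": "medium", "RR": "low", "T": "low"}
critical = {"HR": "high", "SpO2": "high", "BP": "high", "RR": "medium", "T": "medium"}
print(health_criticality_score(stable))    # 1.2
print(health_criticality_score(critical))  # higher score -> triaged first
```

Ranking incoming consultations by this score is what lets the scheduler route emergency patients to a doctor ahead of routine cases.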
|
Keywords: |
Telemedicine, Teleconsultation, Heart Rate, Oxygen Saturation, Blood Pressure,
Respiratory Rate, Body Temperature, Health Criticality Score |
|
DOI: |
https://doi.org/10.5281/zenodo.18454514 |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st January 2026 -- Vol. 104. No. 2-- 2026 |
|
Full
Text |
|
|
Title: |
THE IMPACT OF ARTIFICIAL INTELLIGENCE ON THE TRANSFORMATION OF THE GLOBAL
FINANCIAL MARKET MODELS |
|
Author: |
INNA KORSUN, ANDRIAN ULINICI, LILIA BUBLYK, ANDRII HUMENIUK, VITA ANDRIEIEVA |
|
Abstract: |
The relevance of the study is determined by the need for an institutionalized
rethinking of financial models under the influence of artificial intelligence
(AI), which radically transforms the architecture of decision-making,
personalization, compliance, and sustainability mechanisms in the global FinTech
environment. The aim of the study is to formalize and quantify the
transformative impact of AI on financial market models by building a validation
framework with stratified AI metrics. Research methodology: critical analysis of
the implementation of AI technologies in FinTech, econometric modelling of
transformative AI processes, structural decomposition analysis,
Unified Modelling Language (UML) modelling of a stratified framework,
econometric validation of an optimized AI architecture. An optimized FinTech AI
reengineering framework with a cognitively stratified architecture was formed,
which provides increased explainability (+19.3%), traceability (+22.1%),
RLHF adaptability (+16.7%), and resistance (+18.5%) while reducing
latency (–14.2%) and achieving Compliance Automation Ratio (CAR) = 1.0. The
academic novelty of the research is the developed AI architecture with the
first-ever introduced transformation metrics (Automation Index,
AI-Personalization Score, XAI Index, AITCI), which formalize the levels of
algorithmization, personalization, explainability, and sustainability of FinTech
models. The prospects for further research include a controlled testing with
empirical stratification of each AI module of the FinTech architecture for a
formal assessment of cognitive adaptability, regulatory traceability, and metric
resilience in the real financial cycle. |
|
Keywords: |
AI-Enhanced Alpha; XAI Pipeline; RLHF Mechanisms; GNN Modules; Compliance
Automation |
|
DOI: |
https://doi.org/10.5281/zenodo.18454530 |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st January 2026 -- Vol. 104. No. 2-- 2026 |
|
Full
Text |
|
|
Title: |
THE ROLE OF INFORMATION TECHNOLOGIES IN THE INTERNATIONAL LEGAL REGULATION OF
PUBLIC SAFETY, LAW, AND ORDER |
|
Author: |
OLEKSANDR HOLOVKOV, MYKOLA TYSHLEK, IVO SVOBODA, IVAN KAYLO, ANDRII RYBALKIN |
|
Abstract: |
The fast pace at which information technology is being implemented in public
administrations and security systems globally has dramatically changed the way
public safety and law and order can be regulated internationally. Despite the
extensive use of information technology by international organizations, there is
very little scholarly work that provides empirical evidence of how digital
technologies impact the efficiency, transparency, and legality of international
security frameworks. This paper attempts to fill that void through a comparative
empirical model based on a case study of the role of information technology in
international legal regulation. This paper aims to assess how information
technology impacts the performance of international legal frameworks that
regulate public safety and law enforcement and also to identify the legal
circumstances under which the digitalization process increases the regulatory
effectiveness. To achieve these objectives, the authors have employed an
interdisciplinary methodology using Legal Modelling of Standards, comparative
doctrinal analyses and the Legal Impact Assessment Method in a controlled legal
simulation environment (LexSim-Lab). Three international legal frameworks were
analyzed as examples of international governance models - the UN Global
Counter-Terrorism Strategy, the SIRIUS and SIENA platforms of Europol, and the
OSCE cybercrime recommendations. The results show that the application of
information technology clearly improves the time it takes to resolve conflicts,
the level of procedural transparency and the level of adherence to norms; yet,
the level of improvement varies across regimes. The results indicate that those
regimes where the international legal framework was binding, institutionalized
and/or had a high level of embeddedness, such as Europol, exhibited much better
legal efficiency (up to 46.77%), greater transparency, fewer violations of norms
and greater accountability than those regimes that were either politically
coordinated or voluntary. Additionally, the results show that the digital
tools used to support the various international legal frameworks acted as a
"legal stress test" revealing structural weaknesses in international legal
regulation that relate to accountability, jurisdiction and data sovereignty. The
scientific originality of this study lies in the combination of legal and
technological modeling approaches used to analyze the relationship between
digitalization and measurable legal outcomes. Furthermore, the paper presents
new knowledge by demonstrating that the effectiveness of international
regulation of public safety depends less on the technological sophistication of
the digital tool, than on the existence of enforceable legal architecture
regulating the digital tool. Finally, the paper offers some practical advice for
the development of international harmonized legal standards that can support
secure, transparent, and rights-compliant digital governance. |
|
Keywords: |
Information Technology, International Law, Public Safety, Law and Order, Legal
Regulation, Cybersecurity Legislation, Data Protection, Security Policy |
|
DOI: |
https://doi.org/10.5281/zenodo.18454557 |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st January 2026 -- Vol. 104. No. 2-- 2026 |
|
Full
Text |
|
|
Title: |
REVISITING MOTIVATION IN DIGITAL LEARNING: THE INTERPLAY OF OERS INTERACTIVITY
AND SELF-REGULATED LEARNING AMONG THAI PRE-SERVICE TEACHERS |
|
Author: |
PHENNAPA SUWANWONG, NURULLIZAM BINTI JAMIAT |
|
Abstract: |
Perceived motivation plays an essential role in digital learning, particularly
for pre-service teachers who often struggle with self-paced environments,
limited digital literacy, and insufficient self-regulated learning (SRL) skills.
These factors can hinder their engagement and persistence in online courses.
Interactive Open Educational Resources (I-OERs) have been proposed as a
promising approach to enhancing motivation because features such as
learner control, embedded questions, and feedback are presumed to support
attention, confidence, and satisfaction. However, research exploring how
interactivity interacts with different levels of learners’ self-regulated
learning on their perceived motivation remains limited. Therefore, this study
investigated the main effects of Interactive Open Educational Resources (I-OERs) and
Non-Interactive Open Educational Resources (N-I-OERs) and self-regulated
learning as well as their interaction effect on perceived motivation among 117
pre-service teachers. Using a quasi-experimental three-by-two factorial design,
each student was categorized into a high, moderate, or low group based on a
self-regulated learning questionnaire and purposively assigned to study with
either I-OERs or N-I-OERs course materials available on a MOOC platform. Then,
perceived motivation was measured using
the Instructional Materials Motivation Scale (IMMS), and data was analyzed with
two-way ANOVA and descriptive statistics including mean and standard deviation.
The key findings showed significant main effects of learning-material type and
self-regulated learning level on participants' perceived motivation. Students
who used interactive materials and possessed higher self-regulated learning
exhibited greater motivation. Additionally, a significant interaction effect
between the type of learning materials and the level of self-regulated learning
affected learning motivation. The study highlights the importance of
incorporating
interactive and motivational features into open educational resources while
offering adequate support for learners with lower self-regulation. |
|
Keywords: |
Open Educational Resources; Perceived Motivation; Self-Regulated Learning; H5P;
Multimedia Learning; Pre-Service Teachers |
|
DOI: |
https://doi.org/10.5281/zenodo.18454566 |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st January 2026 -- Vol. 104. No. 2-- 2026 |
|
Full
Text |
|
|
Title: |
CAUSALGRAPH-EMOTIONNET PERSONALITY-CONSCIOUS CAUSAL GRAPH TRANSFORMER WITH
NEURAL-ODE TEMPORAL MODELLING TO EXPLAINABLE MULTIMODAL EMOTION RECOGNITION |
|
Author: |
T L DEEPIKA ROY, NULAKA SRINIVASU |
|
Abstract: |
Multimodal physiological and behavioral emotion recognition is of critical
importance in affective computing, human-computer
interaction (HCI), and mental-health analytics. However, current deep learning
models generally do not take modalities into account (disregarding their causal
and temporal inter-dependencies) and ignore personality-based variability that
is essential to realistic affect modelling. To address these shortcomings, the
given paper proposes CausalGraph-EmotionNet, a personality-conscious causal
graph transformer, which combines Neural-ODE-based temporal evolution with
causal attention-assisted multimodal fusion. The AFFECT data of each modality
(EEG, electrodermal activity, facial activity, eye gaze, pupil dilation, and
cursor movement) is modeled as a dynamic causal graph whose time-varying
connectivity reflects functional and directional
interactions. The merged embeddings are optimized by personality-conditioned
causal attention mechanisms, which enable individualized and interpretable
inferences about emotions. Large-scale experiments on the AFFEC dataset indicate
that CausalGraph-EmotionNet has 84.6% accuracy and 80.8% macro-F1, outperforming
CNN, RNN, GCN, Transformer and PhysioGraph-Transformer. The model significantly
enhances the identification of more complex affective conditions like fear and
disgust, it is resilient to a 40% loss in modality and has interpretable causal
maps that bridge personality dimensions and modality salience. The findings make
CausalGraph-EmotionNet a state-of-the-art, explainable and causally motivated
architecture of multimodal emotion recognition - a unification of data-driven
learning with psychologically relevant causal inferences. |
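The Neural-ODE temporal modelling named in this abstract evolves a hidden state continuously between observations. As a purely illustrative sketch (the dynamics function, weights, state size, and step count below are assumptions for demonstration, not the authors' trained model), an Euler-discretized state evolution looks like:

```python
import math

def ode_dynamics(state, t):
    # Illustrative dynamics f(h, t): a fixed linear map through tanh.
    # In a real Neural ODE this function is a trained neural network.
    w = [[-0.5, 0.2], [-0.2, -0.5]]
    return [math.tanh(sum(w[i][j] * state[j] for j in range(2)))
            for i in range(2)]

def evolve(state, t0, t1, steps=100):
    """Fixed-step Euler integration of dh/dt = f(h, t) from t0 to t1."""
    h = list(state)
    dt = (t1 - t0) / steps
    for k in range(steps):
        dh = ode_dynamics(h, t0 + k * dt)
        h = [h[i] + dt * dh[i] for i in range(2)]
    return h

# Evolve a 2-d hidden state over one time unit between two observations.
h1 = evolve([1.0, 0.0], 0.0, 1.0)
```

A trained Neural ODE would learn `ode_dynamics` end-to-end and typically integrate it with an adaptive solver; fixed-step Euler is used here only to keep the sketch self-contained.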
|
Keywords: |
Affective Computing, High-Level Resources, Emotion Recognition, Causal Graph
Transformer, Neural ODE, Personality-Sensitive Fusion, Multimodal Learning,
Explainable AI, Physiological Signals, AFFEC Dataset. |
|
DOI: |
https://doi.org/10.5281/zenodo.18454574 |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st January 2026 -- Vol. 104. No. 2-- 2026 |
|
Full
Text |
|
|
Title: |
URBAN TRAFFIC GOVERNANCE USING DU-NET: A DUAL-STREAM DEEP LEARNING FRAMEWORK FOR
REAL-TIME ROAD ANOMALY DETECTION AND INTELLIGENT VEHICLE ANALYTICS |
|
Author: |
VIJAY BHASKAR S, Dr. L. V. V. GOPALA RAO, Dr. A. GOPALA KRISHNA |
|
Abstract: |
The rapid rise of urbanization and connected mobility exposes serious
limitations in traditional traffic management systems: earlier solutions
focused either on vehicle flow or on manual inspection for road maintenance,
and neither was comprehensive, delaying responses to urban transport issues. In
this paper, we introduce DU-Net (Dual-Stream Urban Network), an intelligent
deep-learning framework for real-time detection of road anomalies and vehicle
analytics in a smart city. The design combines visual sensing from
high-definition closed-circuit roadside cameras with IoT sensor data from
embedded infrastructure in a multi-task learning setup to detect potholes,
classify vehicles, and dynamically estimate traffic density.
A two-stream convolutional pipeline captures spatial dependencies and the
temporal dependencies that follow. A probabilistic fusion module treats the
video-based and sensor-based hypotheses as draws from a common mixture model
and aligns them to reach consistent decisions. An adaptive task-weighting
scheme assigns an appropriate weight to each task to enhance robustness across
weather, lighting, and traffic conditions. Experimental
analysis using urban road datasets shows DU-Net to outperform state-of-the-art
detectors like YOLOv8 and Faster R-CNN in terms of evaluation metrics, inference
speed, and computational cost. Moreover, its predictive analytics component
facilitates maintenance scheduling and traffic congestion forecasting. The
proposed DU-Net framework provides an extensible foundation for future smart
transportation systems through data-driven, adaptive, and climate-resilient
urban traffic governance. |
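The mixture-model fusion step this abstract describes can be sketched in a few lines: two per-class probability vectors (one from video, one from sensors) are combined as a weighted mixture, and the aligned decision is the argmax. The three-class setup and the 0.6 video weight below are illustrative assumptions, not values from the paper.

```python
def fuse_hypotheses(p_video, p_sensor, w_video=0.6):
    """Fuse two per-class probability vectors as a weighted mixture and
    return (fused distribution, index of the agreed-upon class)."""
    fused = [w_video * v + (1.0 - w_video) * s
             for v, s in zip(p_video, p_sensor)]
    best = max(range(len(fused)), key=fused.__getitem__)
    return fused, best

# Video is fairly sure of class 0; the sensor stream leans toward class 1.
dist, cls = fuse_hypotheses([0.7, 0.2, 0.1], [0.3, 0.6, 0.1])
```

Because both inputs are valid distributions, the mixture also sums to 1, so the fused output can be thresholded or ranked like any single-source posterior.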
|
Keywords: |
Deep Learning, Dual-Stream Architecture, Intelligent Transportation Systems, IoT
Data Fusion, Multi-Task Learning, Pothole Detection, Smart Cities, Traffic
Analytics, Urban Governance, Vehicle Classification. |
|
DOI: |
https://doi.org/10.5281/zenodo.18454589 |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st January 2026 -- Vol. 104. No. 2-- 2026 |
|
Full
Text |
|
|
Title: |
MACHINE LEARNING FRAMEWORK FOR DYNAMIC PRICING: INTEGRATING LSTM FORECASTING AND
REINFORCEMENT LEARNING OPTIMIZATION |
|
Author: |
MEHULKUMAR H KANTARIA, DR. MANJU SHARMA, DR. B SIVA LAKSHMI, S. KRISHNAKUMARI,
MOLIGI SANGEETHA, DR. P. VENKATESWARA RAO |
|
Abstract: |
This manuscript addresses short-horizon retail pricing under misaligned
mixed-frequency IoT signals (POS, footfall, energy) and the forecast–control
representational mismatch. Dynamic pricing in retail requires accurate
short-horizon demand estimation and constraint-aware policy optimization to
reconcile revenue maximization with service levels. Existing pipelines
frequently decouple forecasting from control, sacrificing responsiveness and
incurring suboptimality under mixed-frequency telemetry regimes. This study aimed to
develop and evaluate a hybrid pipeline that integrates MIDAS-style
mixed-frequency covariate alignment with Long Short-Term Memory (LSTM)
forecasting and an advantage actor–critic controller to improve profit,
conversion and service metrics in retail/e-commerce settings. A two-stage
protocol was executed: Mixed data sampling (MIDAS) logistic-weight aggregation
converted IoT/edge streams into decision-cadence features. An LSTM encoder,
(a) pretrained and (b) optionally fine-tuned jointly with an A2C agent,
produced one-step forecasts. Reward shaping incorporated discounted profit,
inventory penalties and smoothness/fairness regularizers. Evaluation used
synthetic nonstationary simulators and a composite POS + footfall + energy dataset
with temporal train/validation/test splits. The joint MIDAS-LSTM + A2C pipeline
yielded statistically significant profit uplifts (≈13–18% vs strong baselines),
lower forecast MSE, reduced stockouts and smoother price trajectories. Ablation
(no MIDAS, frozen LSTM, no smoothness) validated the contribution of each
module. Integrating mixed-frequency telemetry into an end-to-end
forecast-to-policy loop materially improves economic outcomes and operational
robustness, yielding smoother pricing and improved forecast accuracy; however,
elasticity identification and latency constraints remain practical limitations.
Further work is required on causal elasticity identification and
latency-constrained deployment. |
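The MIDAS aggregation stage this abstract names can be illustrated with a minimal sketch: high-frequency observations are collapsed into one decision-cadence feature through normalized lag weights. The logistic parameterization and the theta values below are illustrative assumptions; the paper's exact weighting scheme may differ.

```python
import math

def midas_weights(n_lags, theta1=1.0, theta2=-0.4):
    """Normalized logistic lag weights; theta2 < 0 favors recent lags."""
    raw = [1.0 / (1.0 + math.exp(-(theta1 + theta2 * k)))
           for k in range(n_lags)]
    total = sum(raw)
    return [r / total for r in raw]

def aggregate(window):
    """Collapse a high-frequency window (most recent observation first)
    into a single decision-cadence feature."""
    w = midas_weights(len(window))
    return sum(wi * xi for wi, xi in zip(w, window))

# 24 hourly footfall readings -> one daily pricing feature,
# weighted toward the most recent hours.
hourly = [100.0 - 2.0 * k for k in range(24)]
daily_feature = aggregate(hourly)
```

Since the weights decay with lag, the aggregated feature sits above the plain 24-hour mean whenever the most recent readings are the highest, which is exactly the recency emphasis MIDAS aggregation is meant to provide.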
|
Keywords: |
Dynamic Pricing, Reinforcement Learning, Long Short-Term Memory, Mixed Data
Sampling, Retail Analytics, IoT Mixed-Frequency Data |
|
DOI: |
https://doi.org/10.5281/zenodo.18454609 |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st January 2026 -- Vol. 104. No. 2-- 2026 |
|
Full
Text |
|
|
Title: |
DNA-MAPPED OPTICAL CRYPTOGRAPHY FOR ROBUST AND EFFICIENT IOT SECURITY |
|
Author: |
VISHAL NAMIREDDY, DR. TAHMEENA FATIMA, DR. UTTARA GOGATE, MS. LEILA BENNJIMA, R S
S RAJU BATTULA, G SANTHOSH KUMAR |
|
Abstract: |
DNA-based optical cryptography for Internet-of-Things (IoT) devices offers a path
to secure low-power telemetry with minimal per-packet computation. Existing
solutions rely on heavy symmetric ciphers that burden constrained
microcontrollers (MCUs) or on bio-inspired encoders that lack scalable optical
integration and rigorous real-time traces. This paper proposes DO-OPT (DNA-based
Optical Cryptography), a hybrid framework that combines thermodynamically
constrained DNA-code mappings, an optical phase-noise derived TRNG (true-random
number generator), and a lightweight, post-quantum-capable permutation to enable
authenticated, low-latency transmission for IoT telemetry. DO-OPT performed
hardware-friendly DNA LUT (lookup table) mappings on the MCU, derived
high-entropy keys from an optical TRNG passed through a KDF (key derivation
function), and exploited photonic parallelization in an SLM-style optical
encoder to reduce wall-clock per-packet processing. The method preserved high
ciphertext entropy and increased Key Unpredictability Index (KUI) while lowering
energy-per-byte. DO-OPT was evaluated on Gotham Dataset 2025 (78 heterogeneous
IoT devices, 1.2M packets) and on a 1,000-node emulated deployment. Metrics
included latency, energy-per-byte, KUI, NIST STS (Statistical Test Suite)
outcomes, and robustness to ciphertext-only and replay attacks. DO-OPT reduced
median per-packet latency by 37.5% and energy-per-byte by 29.2% relative to
AES-128, increased entropy per byte from 7.10 to 7.98 bits versus the best prior
DNA-inspired scheme, and passed full NIST STS validation. These results indicate
DO-OPT provides a practical, energy-efficient security layer suitable for
real-time IoT telemetry using DNA–optical hybrids. |
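The hardware-friendly DNA LUT mapping this abstract describes can be illustrated with a minimal sketch: each byte maps to four bases, two bits per base, and output quality is checked with byte-level Shannon entropy (the abstract's entropy-per-byte metric). The lookup table below is a hypothetical assignment; the paper's thermodynamically constrained mappings are not reproduced here.

```python
import math

# Hypothetical 2-bit -> base lookup table. A real DO-OPT-style scheme would
# choose mappings under thermodynamic constraints (e.g. GC content).
LUT = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}
INV = {v: k for k, v in LUT.items()}

def encode(data: bytes) -> str:
    """Map each byte to four DNA bases, two bits per base, MSB first."""
    return "".join(LUT[(b >> s) & 0b11] for b in data for s in (6, 4, 2, 0))

def decode(strand: str) -> bytes:
    """Invert encode(): four bases back into one byte."""
    out = bytearray()
    for i in range(0, len(strand), 4):
        b = 0
        for base in strand[i:i + 4]:
            b = (b << 2) | INV[base]
        out.append(b)
    return bytes(out)

def entropy_bits_per_byte(data: bytes) -> float:
    """Shannon entropy of the byte distribution, in bits per byte."""
    counts = {}
    for b in data:
        counts[b] = counts.get(b, 0) + 1
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())
```

For example, the byte `0x1B` (`00 01 10 11`) encodes to `"ACGT"`, and a uniform byte distribution scores the maximal 8 bits per byte that strong ciphertext should approach.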
|
Keywords: |
DNA-Based Optical Cryptography, DNA Cryptography, Optical Computing, IoT
Security, TRNG, KUI, Telemetry. |
|
DOI: |
https://doi.org/10.5281/zenodo.18454623 |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st January 2026 -- Vol. 104. No. 2-- 2026 |
|
Full
Text |
|
|
Title: |
AHDNAM: AN OPTIMISED HYBRID DEEP LEARNING AND SWARM INTELLIGENCE FRAMEWORK FOR
BREAST ULTRASOUND DIAGNOSIS |
|
Author: |
DR. M. VAMSIKRISHNA, G RESHMA, NARENDRA B MUSTARE, B V SUBBA RAO, DRAKSHAYANI
SRIRAMSETTI, ELANGOVAN MUNIYANDY |
|
Abstract: |
Breast tumor classification from ultrasound is challenged by low contrast and
speckle noise, limiting reliable automated triage. A reproducible, high-recall
classifier that is robust to device and acquisition variability remains a
pressing gap for clinical workflows. This study aimed to evaluate an adapted
BCDNet incorporating a dual-branch hybrid classifier and an adaptive constrained
hyperparameter search to improve malignant lesion detection and overall balanced
performance on public ultrasound datasets. An experimental study used ImageNet
pretrained VGG16 as a backbone with two parallel classifier branches: a dilated
atrous spatial pyramid pooling branch and a channel-attention branch.
Augmentation (mixup, cutmix), label smoothing, focal loss, and stochastic weight
averaging were applied. Patient-level splits from the Kaggle breast ultrasound
dataset and BUSI were used for training, validation, and held-out testing;
hyperparameters were tuned with a 50-evaluation RPAOSM-ESO-inspired search and
top models ensembled. On held-out tests, the hybrid pipeline attained an
accuracy of 94.5% (Kaggle) and 93.2% (BUSI), sensitivities of ≈94.6% and 93.0%,
MCCs of 0.881 and 0.860, false negative rates of 5.43% and 6.80%, and AUCs of
≈0.966 and 0.959, respectively. Bootstrap CIs indicated statistically
meaningful gains in MCC and FNR versus single-branch baselines. Although more
comprehensive annotations and broader multi-center validation are needed before
the proposed pipeline can be used in clinical practice, the results show it can
serve as a research tool that reduces missed malignancies without compromising
specificity. |
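Focal loss, one of the training components this abstract lists, down-weights easy examples so optimization concentrates on hard cases such as subtle malignant lesions. A sketch of the standard binary formulation (the gamma and alpha defaults are conventional choices, not necessarily the paper's):

```python
import math

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss for one predicted probability p and label y in {0, 1}.
    The (1 - p_t)**gamma factor shrinks the loss of easy examples."""
    p_t = p if y == 1 else 1.0 - p
    a_t = alpha if y == 1 else 1.0 - alpha
    return -a_t * (1.0 - p_t) ** gamma * math.log(p_t)

# A confident correct prediction contributes far less than a hard one.
easy = focal_loss(0.95, 1)
hard = focal_loss(0.30, 1)
```

For a well-classified example the modulating factor (1 - p_t)^gamma is tiny, so such samples barely contribute to the gradient; with gamma = 0 and alpha = 1 the expression reduces to plain cross-entropy.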
|
Keywords: |
Breast Ultrasound, Transfer Learning, VGG16, ASPP, Attention Hybrid Classifier,
Hyperparameter Optimization, Explainability |
|
DOI: |
https://doi.org/10.5281/zenodo.18454647 |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st January 2026 -- Vol. 104. No. 2-- 2026 |
|
Full
Text |
|
|
Title: |
CONTEXT-AWARE IMAGE RETRIEVAL: ENHANCING SEARCH PRECISION IN LARGE- SCALE IMAGE
DATABASES USING BLIP AND AUTOMATED CAPTIONING |
|
Author: |
JAHNAVI SOMAVARAPU, RAVI KANTH MOTUPALLI, ANJANEYULU NELLURU, VENKATESWARA RAO
KOTA, SUDHAKAR YADAV NALADESI, TEJASWI POTLURI |
|
Abstract: |
The growing number of digital images has rendered efficient retrieval a serious
issue, since conventional approaches based on low-level features like color and
texture are not able to capture semantic content. Users often find it difficult
to retrieve appropriate images because there is no context-aware search
mechanism, particularly when precise metadata or visual information is not
known. Current retrieval systems are unable to fill the semantic gap between
text-based queries and image content, producing inaccurate or incomplete
results. In order to solve this, we introduce a context-aware image retrieval
system that combines deep learning and natural language processing (NLP)
approaches. The system utilizes the BLIP model for automatic image captioning,
constructing meaningful textual descriptions of images. User queries are
processed using Sentence-BERT (SBERT) embeddings, coupled with TF-IDF and
Levenshtein distance-based string matching, to enable accurate retrieval. An
intuitive, user-friendly search is offered through a Gradio-based interface.
This work illustrates that combining visual feature extraction with NLP
captioning and query processing substantially boosts image search accuracy. The
proposed system not only improves retrieval precision but also maintains
scalability and adaptability for large unstructured data. Future work will
target greater computational efficiency, multimodal retrieval, and user
personalization to improve responsiveness to varied user needs. |
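Of the matching components this abstract names (SBERT, TF-IDF, Levenshtein), the Levenshtein stage is the simplest to sketch. A minimal, self-contained version that scores a query against stored captions (the captions and normalization are illustrative; the paper's full pipeline also uses SBERT embeddings and TF-IDF):

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def fuzzy_score(query: str, caption: str) -> float:
    """Similarity in [0, 1]; 1.0 means the strings are identical."""
    d = levenshtein(query.lower(), caption.lower())
    return 1.0 - d / max(len(query), len(caption), 1)

captions = ["a dog running on the beach", "a cat sleeping on a sofa"]
best = max(captions, key=lambda c: fuzzy_score("dog on a beach", c))
```

This string-level score is what lets near-miss queries (typos, partial phrases) still rank the right caption first; the semantic components handle paraphrases that share no surface text.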
|
Keywords: |
Image Retrieval, Semantic Search, Deep Learning, Sentence-BERT, Query Encoding,
Context-Aware Search, Gradio Interface. |
|
DOI: |
https://doi.org/10.5281/zenodo.18454673 |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st January 2026 -- Vol. 104. No. 2-- 2026 |
|
Full
Text |
|
|
Title: |
DEVELOPMENT OF A REAL-TIME AND RETROSPECTIVE INTELLIGENT OBJECT DETECTION SYSTEM
FROM CCTV FOR SECURITY ENHANCEMENT IN LOCAL GOVERNMENT ORGANIZATIONS |
|
Author: |
CHANVIT PHROMPHANCHAI, SUWUT TUMTHONG |
|
Abstract: |
This research develops an AI-RTRO system to address the limitations of
traditional CCTV in Bang Yai Municipality's Smart Safety Infrastructure. The
system improves real-time situational awareness and enhances retrospective
evidence retrieval for local government. Managing large-scale CCTV data and
making rapid decisions are key operational challenges. The system integrates the
AI Agent Core framework and IPOF model for object detection, tracking, and
retrieval from both real-time and archived video. Deep learning algorithms
(YOLO, DeepSORT/ByteTrack, ResNet, Faiss) are used, with Grad-CAM providing
transparency. Performance is evaluated in real-world conditions with mAP, IDF1,
and latency. Results show accurate detection, stable tracking, and low-latency
operation. Metadata management and feature vector indexing improve the
performance of retrospective retrieval. The study confirms that integrating AI
Agent Core and IPOF improves system performance, retrieval efficiency, decision
transparency, and continuous learning. The framework was rated highly by 15
experts, with a mean of 4.74 (S.D. = 0.55), indicating readiness for practical
use in local government surveillance. |
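The Faiss-backed feature-vector indexing this abstract describes for retrospective retrieval can be sketched with a brute-force stand-in: embeddings are stored alongside metadata (camera, timestamp) and queried by cosine similarity. The class below is a toy illustration; a real deployment would use a Faiss index for sublinear search over millions of vectors.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

class VectorIndex:
    """Brute-force stand-in for a Faiss index: store (metadata, vector)
    pairs and return the metadata of the top-k most similar vectors."""
    def __init__(self):
        self.items = []

    def add(self, meta, vec):
        self.items.append((meta, vec))

    def search(self, query, k=1):
        ranked = sorted(self.items, key=lambda mv: cosine(query, mv[1]),
                        reverse=True)
        return [meta for meta, _ in ranked[:k]]

idx = VectorIndex()
idx.add({"cam": 3, "t": "10:02"}, [0.9, 0.1, 0.0])
idx.add({"cam": 7, "t": "10:05"}, [0.1, 0.9, 0.1])
hits = idx.search([1.0, 0.05, 0.0], k=1)
```

Storing camera ID and timestamp as the indexed metadata is what turns a nearest-neighbor hit into a time-indexed retrieval result an operator can jump to.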
|
Keywords: |
Smart City, Object Detection, Time-Indexed Retrieval, CCTV Analytics, AI Agent
Core |
|
DOI: |
https://doi.org/10.5281/zenodo.18454735 |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st January 2026 -- Vol. 104. No. 2-- 2026 |
|
Full
Text |
|
|
Title: |
ALGORITHMIC DETECTION OF ANOMALIES IN PRODUCTION LOGS: DEV-QA STRATEGIES
LEVERAGING ARTIFICIAL INTELLIGENCE AND TRADITIONAL METHODS |
|
Author: |
OLEH SYPIAHIN, ANNA DEVIATKO, VOLODYMYR LOPUKHOVYCH, TYHRAN OVSEPIAN, OLEKSANDR
SHVAIKIN |
|
Abstract: |
Relevance: The relevance of the research is determined by the need to increase
the efficiency of Dev-QA pipelines by integrating automated testing, anomaly
detection, and artificial intelligence (AI)-oriented algorithms to ensure stable
software quality. Aim: The aim of the research is to formalize, model, and
metrically verify an optimized Hybrid Pipeline to increase the automation,
accuracy, and speed of bug reporting and anomaly detection in Dev-QA. Methods:
Methodological basis of the study: cognitive-functional analysis, metric
modelling, Unified Modelling Language (UML)-based modelling of Rule-Based
Pattern Matching (AI-less) and of Hybrid Pipelines (AI-driven), decomposition
and optimization analysis, UML modelling of optimized Hybrid Pipelines, and
iterative metric-based modelling. Results: Optimized Dev-QA Hybrid
Pipeline with integration of rule-based pre-filtering, ML post-classification,
Human-in-the-loop (HITL) augmentation and auto-learning increased anomaly
detection efficiency by 17.4%, reduced MTTR by 21.8%, and reduced communication
overhead by 15.6%. The pipeline achieved Precision = 0.94, Recall = 0.92,
F1-score = 0.93, Coverage = 96%, fault tolerance of 99.1% uptime, and an 18.2%
reduction of repeated defects. Academic novelty: The academic novelty of the research is the
formalization of the Dev-QA Hybrid Pipeline with cognitive stratification of
anomaly detection and bug reporting, integration of rule-based and machine
learning (ML) modules with HITL augmentation and auto-learning for metrically
validated end-to-end automation. Prospects for further research: Further
research prospects include experimental implementation of the optimized Dev-QA
Hybrid Pipeline in a controlled Continuous Integration/Continuous Delivery or
Deployment (CI/CD) environment with subsequent analysis of its technical and
procedural efficiency to justify scaled implementation and regulatory
institutionalization. |
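The pipeline shape this abstract describes, rule-based pre-filtering followed by ML post-classification, can be sketched as a two-stage scan over log lines. The regex rules and the keyword-weight scorer below are illustrative stand-ins; in the paper the second stage is a trained ML classifier with HITL augmentation.

```python
import re

# Cheap first stage: fixed patterns that flag candidate anomaly lines.
RULES = [re.compile(p) for p in (r"ERROR", r"Traceback", r"timeout")]

def prefilter(line: str) -> bool:
    """Rule-based pre-filtering: pass a line on only if some rule fires."""
    return any(r.search(line) for r in RULES)

def postclassify(line: str) -> float:
    """Stand-in for the ML post-classification stage: a toy severity
    score from keyword weights. A real pipeline uses a trained model."""
    weights = {"ERROR": 0.6, "timeout": 0.3, "Traceback": 0.8}
    return min(1.0, sum(w for kw, w in weights.items() if kw in line))

logs = [
    "INFO request served in 12 ms",
    "ERROR upstream timeout after 3 retries",
    "DEBUG cache warm",
]
anomalies = [(l, postclassify(l)) for l in logs if prefilter(l)]
```

The design point is that the cheap rule stage discards the bulk of benign traffic, so the costlier classifier (and any human-in-the-loop review) only sees candidate anomalies.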
|
Keywords: |
Test Automation, Anomaly Detection, Rule-Based Algorithms, Machine Learning,
Dev-QA Pipelines, Bug Reporting, Software Quality Assurance |
|
DOI: |
https://doi.org/10.5281/zenodo.18454753 |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st January 2026 -- Vol. 104. No. 2-- 2026 |
|
Full
Text |
|
|
Title: |
DETECTION OF MEANINGFUL COMMUNITIES IN SOCIAL LEARNING NETWORKS |
|
Author: |
HICHAM SADIKI, RAJAE ZRIAA, AYOUB ENNASSIRI, MAHMOUD LHAM, SAID AMALI |
|
Abstract: |
Community detection and analysis in social learning networks is a critical
challenge for understanding collaborative dynamics. These networks primarily
rely on two distinct structures: pedagogical interests and social interactions.
A promising approach to uncover meaningful learning communities is to combine
these two structures. In this context, we propose a hybrid and adaptive
two-stage approach that integrates both structural and contextual information to
efficiently detect and update communities in dynamic educational social
networks. This approach proceeds in two stages. First, we introduce a static
algorithm called Learning-Enhanced Method for Community Detection (LEMCD). This
hybrid algorithm leverages statistical and semantic measures to analyze both the
pedagogical content associated with learners and their social interactions.
Second, we present Adaptive LEMCD (LEMACD), an adaptive algorithm for detecting
and updating community structures in dynamic social networks. A comparative
study was conducted using real academic networks to evaluate the performance of
LEMACD against other content- and behavior-based community detection algorithms.
The results demonstrate that our approach is capable of identifying coherent
communities in social networks. Moreover, it adapts effectively to changes in
learning networks while reducing response time. |
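Modularity, listed among this entry's keywords, is the standard structural quality score that such community-detection methods optimize. A minimal sketch of Newman's formulation for an undirected graph, evaluated on two triangles joined by a bridge (the example graph is illustrative):

```python
def modularity(edges, community):
    """Newman modularity Q of an undirected graph, given edges as
    (u, v) pairs and a node -> community label mapping."""
    m = len(edges)
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    q = 0.0
    for i in deg:
        for j in deg:
            if community[i] != community[j]:
                continue
            # A_ij minus the degree-based null-model expectation.
            a_ij = sum(1 for u, v in edges if (u, v) in ((i, j), (j, i)))
            q += a_ij - deg[i] * deg[j] / (2 * m)
    return q / (2 * m)

# Two triangles joined by one bridge edge; splitting at the bridge
# yields a clearly positive modularity (5/14 here).
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
q = modularity(edges, {0: "a", 1: "a", 2: "a", 3: "b", 4: "b", 5: "b"})
```

Hybrid methods like the one described here would add content-based (semantic) terms on top of this purely structural score; the sketch covers only the structural half.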
|
Keywords: |
Dynamic Social Networks, Modularity, Content Information, Topic Detection,
Meaningful Community Detection, Social Learning Networks. |
|
DOI: |
https://doi.org/10.5281/zenodo.18454778 |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st January 2026 -- Vol. 104. No. 2-- 2026 |
|
Full
Text |
|
|
Title: |
DEEP LEARNING APPROACHES FOR CARDIAC DISEASE DIAGNOSIS USING MULTI-MODAL
INTEGRATION |
|
Author: |
SUNEETHA DAVULURI, SIVANEASAN BALA KRISHNAN, PRASUN CHAKRABARTI, VENUGOPAL
BOPPANA, SRI HARI NALLAMALA |
|
Abstract: |
This research presents a novel quantum-hybrid machine learning framework for
heart disease classification that integrates Quantum Neural Networks with
variational circuits and Quantum Support Vector Machines featuring custom
quantum kernels alongside classical ensemble methods. The methodology employs
advanced feature engineering, multi-strategy missing value imputation, and
combined feature selection techniques on an enhanced dataset of 2,000
cardiovascular cases with 17 engineered features. The quantum components utilize
8-10 qubits across 5-6 variational layers with quantum interference effects for
enhanced pattern recognition capabilities. Through weighted soft voting ensemble
strategy and dynamic threshold optimization, the quantum-hybrid system achieved
exceptional performance metrics including 96.2% accuracy, 95.8% F1-score, 94.7%
precision, and 96.9% recall, significantly exceeding the 95% accuracy target.
The quantum models demonstrated measurable advantages over classical approaches,
with the Quantum Neural Network reaching 92.4% accuracy and Quantum SVM
achieving 90.8% accuracy independently. The hybrid ensemble showed a
4.3-percentage-point improvement over the best individual classical model and a
2.7-percentage-point gain over standalone quantum implementations. This
quantum-hybrid approach establishes superior diagnostic capabilities for
cardiovascular disease prediction, particularly in minimizing the false
negatives critical to patient safety, and positions quantum-enhanced machine
learning as a viable solution for next-generation medical diagnostic systems
with clinically relevant accuracy improvements for healthcare applications. |
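The weighted soft-voting step this abstract describes can be sketched directly: each member's positive-class probability is averaged under normalized weights, then a decision threshold is applied. The member probabilities and weights below are hypothetical; the paper additionally tunes the threshold dynamically.

```python
def soft_vote(probs, weights, threshold=0.5):
    """Weighted soft voting: average positive-class probabilities under
    normalized weights, then apply a decision threshold."""
    total = sum(weights)
    score = sum(w * p for w, p in zip(weights, probs)) / total
    return score, int(score >= threshold)

# Hypothetical member outputs: two quantum models and one classical model.
score, label = soft_vote(probs=[0.81, 0.77, 0.40],
                         weights=[0.4, 0.35, 0.25])
```

Lowering the threshold trades precision for recall, which is how such an ensemble can be biased toward fewer false negatives when missed positives are the costlier error.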
|
Keywords: |
Quantum Machine Learning, Cardiovascular Disease Classification, Hybrid Ensemble
Methods, Variational Quantum Circuits, Medical Diagnostic Accuracy,
Quantum-Classical Integration |
|
DOI: |
https://doi.org/10.5281/zenodo.18454790 |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st January 2026 -- Vol. 104. No. 2-- 2026 |
|
Full
Text |
|
|
|