Submit Paper / Call for Papers
The journal receives papers in continuous flow, and we will consider articles
from a wide range of Information Technology disciplines, encompassing the most
basic research to the most innovative technologies. Please submit your papers
electronically to our submission system at http://jatit.org/submit_paper.php in
MS Word, PDF, or a compatible format so that they may be evaluated for
publication in the upcoming issue. This journal uses a blinded review process;
you may include your personally identifiable information in the manuscript when
submitting it for review, and we will remove the necessary information on our
side. Submissions to JATIT should be full research / review papers (properly
indicated below the main title).
|
|
|
Journal of
Theoretical and Applied Information Technology
January 2026 | Vol. 104 No. 1 |
|
Title: |
A HYBRID STYLOMETRIC TRANSFORMER EMBEDDING FRAMEWORK FOR ROBUST AUTHORSHIP
VERIFICATION AGAINST GENERATIVE AI ADVERSARIES |
|
Author: |
DR. P. IMMACULATE REXI JENIFER, DR. V. MAHESH KUMAR REDDY, DR. DUVVURI ESWARA
CHAITANYA, LAVANYA KUMARI PITHANI, DR. P. VENKATESWARA RAO, DR. VAISHALI GUPTA |
|
Abstract: |
Authorship verification (AV) remains critical to attribute text provenance and
to limit misuse of large language models (LLMs). Existing detectors typically
use handcrafted stylometric cues or deep transformer embeddings, but they lack
robustness to model mimicry and targeted obfuscation and often require large
labeled corpora. This work proposes the Hybrid Stylometric Transformer Embedding
Framework (HSTEF), a hybrid, explainable AV architecture that fuses multi-level
stylometric descriptors with cross-attended transformer embeddings to form a
compact verification representation. HSTEF integrates engineered lexical,
orthographic, and syntactic stylometric vectors, a pre-trained transformer
encoder fine-tuned with contrastive and calibration objectives, and a
cross-modal fusion module that enables bidirectional style–semantic interaction
while preserving interpretability. The method is evaluated on the PAN CLEF 2025
Voight-Kampff corpus, which contains human- and machine-authored texts across
genres, including deliberate obfuscations. Experiments used one classical
baseline (a linear SVM on TF-IDF features) and one hybrid baseline (DistilBERT +
stylometric features) under identical preprocessing and metrics (ROC-AUC, F1,
Brier score, C@1, and computational cost). HSTEF produces an absolute ROC-AUC
increase of ≈0.07 and an absolute F1 increase of ≈0.07 over the hybrid baseline
while improving calibration (Brier score reduced ≈47%) at a controlled
inference-cost increase
consistent with forensic deployment scenarios. These results indicate that a
Hybrid Stylometric Transformer Embedding Framework offers a tractable accuracy
and robustness trade-off for AV against modern generative AI adversaries. |
|
Keywords: |
Authorship Verification, Stylometry, Transformer Embeddings, LLM detection, PAN
CLEF 2025. |
|
DOI: |
https://doi.org/10.5281/zenodo.18258308 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th January 2026 -- Vol. 104. No. 1-- 2026 |
|
Full
Text |
|
|
Title: |
ADNET-AI: AN OPTIMIZED DEEP LEARNING FRAMEWORK FOR MULTI-CLASS ECG ARRHYTHMIA
DETECTION IN CLINICAL DECISION SUPPORT SYSTEMS |
|
Author: |
RAME NIKHILA, BARSHAMOLLA LAVANYA, BOGA VARASREE, DR. K. PURNACHAND |
|
Abstract: |
An irregular pulse is reflected by a cardiac condition called an arrhythmia. ECG
is the widely used data for the identification of arrhythmia. Healthcare
professionals can understand the abnormalities in heartbeats using ECG data.
However, with the emergence of AI, there is a need for a CDSS that can help
doctors diagnose arrhythmia. We propose an optimized deep learning (DL)
framework for automatically identifying arrhythmias in ECG data. Our framework
has mechanisms for data acquisition, data transformation, training a deep
learning classifier, and automatic detection of arrhythmia. We propose a novel
DL architecture called ADNet based on the CNN model. The proposed structure is
designed to take advantage of the enhanced CNN model for efficient arrhythmia
detection. We present a Learning-based Arrhythmia Detection and Classification
(LbADC) method that exploits the detection performance of the proposed deep
learning model. Our experimental
investigation using the MIT-BIH benchmark dataset shows that the suggested
method, LbADC, surpasses several cutting-edge models with the best accuracy of
98.68%. Therefore, the proposed AI-enabled system can be integrated with
existing healthcare applications to automatically screen arrhythmias in ECG
data. |
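The listing does not include ADNet's layer specification; purely as an illustration of the CNN principle the abstract invokes (convolution over the ECG signal, a nonlinearity, pooling, and a classification head), a minimal single-filter 1D forward pass might look like the following sketch. All weights, kernel sizes, and the single-output head are hypothetical stand-ins, not ADNet itself:

```python
import numpy as np

def conv1d(signal, kernel):
    """Valid 1D convolution: slide the kernel across the signal."""
    n = len(signal) - len(kernel) + 1
    return np.array([signal[i:i + len(kernel)] @ kernel for i in range(n)])

def adnet_like_forward(signal, kernel, w_out, b_out):
    """Toy CNN forward pass: conv -> ReLU -> global average pool -> sigmoid."""
    feat = np.maximum(conv1d(signal, kernel), 0.0)  # convolution + ReLU
    pooled = feat.mean()                            # global average pooling
    logit = pooled * w_out + b_out                  # linear classification head
    return 1.0 / (1.0 + np.exp(-logit))             # probability of "arrhythmia"
```

A real ADNet would stack many such filters and layers and learn the weights from the MIT-BIH recordings; the point here is only the conv-pool-classify data flow.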
|
Keywords: |
Arrhythmia Detection, DL, Artificial Intelligence, ECG Data Analytics, Hybrid
Deep Learning |
|
DOI: |
https://doi.org/10.5281/zenodo.18258491 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th January 2026 -- Vol. 104. No. 1-- 2026 |
|
Full
Text |
|
|
Title: |
THE ROLE OF COUNSELLING GUIDANCE ON CYBERBULLYING BEHAVIOUR AND ITS INFLUENCE ON
STUDENT RESILIENCE: A STUDY OF JUNIOR HIGH SCHOOL STUDENTS IN YOGYAKARTA |
|
Author: |
EKO PERIANTO, MUNGIN EDDY WIBOWO, HERU MUGIARSO, SUGIYO |
|
Abstract: |
Bullying in the school environment is still a significant challenge for
educational institutions in Indonesia, affecting students' resilience, emotional
well-being, confidence, and academic engagement. Victims with low resilience
often experience social withdrawal, academic difficulties, and psychological
distress. Schools, and especially guidance and counseling teachers, therefore
play an important role in creating a healthy school environment. This study
examines bullying behavior and its impact on the resilience of junior high
school students in Yogyakarta. A descriptive qualitative method was used, with
data collected through interviews with 140 students and 14 guidance and
counseling teachers from 14 schools. The results showed that 77.1% of students
had experienced bullying, which led to insecurity, anxiety, lack of confidence,
frustration, isolation, and helplessness, all of which were closely linked to
resilience. Additionally, cyberbullying has emerged as a growing problem, with
some students reporting online harassment, negative comments, and ostracization
on social media, often in conjunction with bullying at school. Cyberbullying
poses a greater risk due to its persistence, anonymity, and wide audience reach,
making it more difficult for victims to escape its effects. To mitigate these
impacts, schools should adopt comprehensive intervention strategies, including
counseling services for victims and perpetrators, anti-bullying education,
Cognitive Behavioral Therapy (CBT), the KiVa Anti-Bullying Program, and digital
literacy training. Strengthening school policies, safe reporting systems, and
support networks is essential to fostering student resilience, enabling them to
overcome bullying experiences and thrive academically and socially. |
|
Keywords: |
Counselling Guidance, Cyberbullying, Resilience, Students, Intervention
Strategies |
|
DOI: |
https://doi.org/10.5281/zenodo.18258611 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th January 2026 -- Vol. 104. No. 1-- 2026 |
|
Full
Text |
|
|
Title: |
ADDRESSING THE DATA SCARCITY OF TAMIL: A ZERO-SHOT COREFERENCE RESOLUTION
APPROACH WITH MURIL |
|
Author: |
M DHARINI DEVI, DR S KANNAN |
|
Abstract: |
Coreference resolution, which establishes whether different expressions in a
text refer to the same entity, is an essential part of natural language
understanding. Although this task has seen substantial progress in
high-resource languages, particularly through large annotated datasets and
pretrained models, languages like Tamil remain underexplored because of limited
annotated datasets and specialized language models. To overcome this, a
zero-shot method for coreference resolution in Tamil using MuRIL, a
multilingual pretrained language model, is proposed. The framework adapts
span-based neural architectures, using contextual span representations, cosine
similarity-based scoring, and unsupervised agglomerative clustering to detect
and link coreferent mentions efficiently. To facilitate evaluation, a new
manually annotated dataset, TGAP (Tamil GAP-style Coreference Dataset), was
developed, drawn from diverse text sources. Experimental evaluation compared
MuRIL with other multilingual baselines using standard metrics such as MUC, B3,
CEAF, and LEA F1 scores. Results show that MuRIL achieves the highest overall
F1 score (0.67), outperforming the other models and underscoring the value of
language-specific pretraining. The findings show that a zero-shot,
annotation-free framework can achieve competitive performance for Tamil
coreference resolution. |
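The linking step the abstract describes (cosine similarity-based scoring followed by unsupervised clustering of mention embeddings) can be sketched as below. This is a numpy-only illustration that assumes MuRIL span embeddings are already computed; the 0.9 threshold and the greedy single-link merge are hypothetical simplifications of the agglomerative clustering in the paper:

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def link_mentions(embeddings, threshold=0.9):
    """Greedy single-link clustering: mentions whose embeddings exceed the
    cosine-similarity threshold are merged into one coreference chain
    (union-find keeps the chains disjoint)."""
    n = len(embeddings)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if cosine(embeddings[i], embeddings[j]) >= threshold:
                parent[find(i)] = find(j)
    return [find(i) for i in range(n)]  # chain id per mention
```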
|
Keywords: |
Coreference Resolution, Tamil Language, Multilingual Language Model, Zero-Shot
Learning, Span-based Embedding |
|
DOI: |
https://doi.org/10.5281/zenodo.18258689 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th January 2026 -- Vol. 104. No. 1-- 2026 |
|
Full
Text |
|
|
Title: |
MALARIA DISEASE CLASSIFICATION BASED ON MORPHOLOGICAL OPERATIONS AND PCA
ALGORITHM |
|
Author: |
Mohammed S. H. Al-Tamimi, HADEEL JABAR, BILAL S. H. ALBAYATI, ABEER N.
ABDUL-HAMMED |
|
Abstract: |
Malaria remains a major public health challenge and is extremely prevalent in
areas with little or no infrastructure or access to health practitioners.
Conventional blood-smear analysis is considered the "gold standard" approach to
detecting malaria, yet it is extremely complex, requires a high level of expert
human resources, and is prone to error. To
overcome these limitations, in the current study a methodology for automated
diagnosis of malaria from blood smear images using machine learning algorithms
is proposed. The work uses a Kaggle dataset of 27,560 images, split equally into
parasitized and uninfected images. Morphological operations - erosion, gradient,
bottom hat - are used to improve image quality and highlight important cellular
elements. Subsequently, feature extraction is done through Principal Component
Analysis (PCA) and classification is done using shallow neural networks,
support-vector machines, random forests, and logistic regression. Among these
algorithms, the shallow neural network achieved 100 percent accuracy, recall,
precision, and F1-score. The results of this study show that morphological
preprocessing in combination with Principal Component Analysis (PCA) and simple
classifiers could enable the rapid and reliable diagnosis of malaria in
resource-constrained settings. |
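As a rough sketch of the preprocessing-plus-PCA pipeline described above, the morphological operators and the PCA projection can be written with numpy alone. A production system would presumably use an image-processing library; the 3x3 structuring element and edge padding here are hypothetical choices:

```python
import numpy as np

def _windows3x3(img):
    """All nine 3x3-shifted views of the image (edge-padded)."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    return [p[i:i + h, j:j + w] for i in range(3) for j in range(3)]

def erode3x3(img):
    """Grey-scale erosion with a 3x3 structuring element (minimum filter)."""
    return np.min(_windows3x3(img), axis=0)

def morphological_gradient(img):
    """Dilation minus erosion; highlights cell boundaries."""
    win = _windows3x3(img)
    return np.max(win, axis=0) - np.min(win, axis=0)

def pca_features(X, k):
    """Project row vectors (flattened images) onto the top-k principal components."""
    Xc = X - X.mean(axis=0)          # center the data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T             # scores in the top-k component basis
```

The reduced `pca_features` output is what would be fed to the shallow classifiers the abstract compares.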
|
Keywords: |
Malaria Disease, Classification, Morphological Operation, Image Processing, PCA. |
|
DOI: |
https://doi.org/10.5281/zenodo.18258729 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th January 2026 -- Vol. 104. No. 1-- 2026 |
|
Full
Text |
|
|
Title: |
IMPROVING E-GOVERNMENT THROUGH SENTIMENT ANALYSIS IN THE CONTEXT OF DEVELOPING
COUNTRIES: A MAURITANIA CASE STUDY |
|
Author: |
MOHAMED EL MOUSTAPHA CHRIF, EL BENANY MOHAMED MAHMOUD, OMAR EL BEQQALI,
MOHAMEDADE FAROUK NANNE |
|
Abstract: |
New technologies have given rise to online platforms such as Facebook and X
(formerly Twitter), which have become de facto channels of communication,
enabling people to express their opinions in writing. Analyzing the
user-generated data makes it possible to identify opinions and sentiments on
specific topics, thereby providing valuable insights for evidence-based
decision-making. Therefore, this study focuses on the HASSANIYA dialect, a
low-resource variety of Arabic, and proposes a sentiment classification system
based on Natural Language Processing (NLP) to support decision-making. In the
first step, a dedicated dataset was collected from Facebook and preprocessed
using a combination of a Hassaniya-specific stemmer and an Arabic stemmer. Then,
different configurations of feature extraction (n-grams) and weighting (TF-IDF)
were applied to construct effective feature representations and determine the
best classification model. Support Vector Machines (SVM), Logistic Regression
(LR), and Random Forest (RF) were used to classify the HASSANIYA dataset. The
proposed approach was designed to analyze comments written in both Modern
Standard Arabic (MSA) and the HASSANIYA dialect on Facebook and Twitter. The
results show that SVM achieved the best performance and was successfully
validated through a real case study involving the classification of comments
from an X account (Twitter). Finally, this study introduces a prototype
dashboard for visualizing sentiment trends, offering a practical tool to support
e-government strategies in Mauritania. |
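The feature-construction step above (n-gram counts weighted by TF-IDF) can be illustrated with a toy unigram version. The nearest-centroid classifier below is a hypothetical stand-in for the paper's SVM/LR/RF models, and the English tokens stand in for Hassaniya text:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Unigram TF-IDF: tf = term frequency within the doc, idf = log(N / df)."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d.split()))
    vocab = sorted(df)
    idf = {t: math.log(n / df[t]) for t in vocab}
    vecs = []
    for d in docs:
        tf = Counter(d.split())
        total = sum(tf.values())
        vecs.append([tf[t] / total * idf[t] for t in vocab])
    return vocab, vecs

def centroid(vecs):
    """Mean vector of a class, used as its prototype."""
    return [sum(col) / len(vecs) for col in zip(*vecs)]

def classify(vec, centroids):
    """Assign the label whose class centroid has the largest dot product."""
    return max(centroids, key=lambda lab: sum(a * b for a, b in zip(vec, centroids[lab])))
```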
|
Keywords: |
Sentiment Analysis, NLP, Feature Extraction, Decision-Making, e-Government,
Social Media, HASSANIYA Dialect, Machine Learning Algorithms |
|
DOI: |
https://doi.org/10.5281/zenodo.18258766 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th January 2026 -- Vol. 104. No. 1-- 2026 |
|
Full
Text |
|
|
Title: |
CLIENT-ADAPTIVE NEURAL NETWORKS: A SERVER-DRIVEN ARCHITECTURE FOR DYNAMIC MODEL
PARTITIONING IN EDGE COMPUTING |
|
Author: |
Dr.POKKULURI KIRAN SREE, Dr.M.MUNI BABU, Dr.K.KISHORE KUMAR, Dr.SESHAM ANAND,
Dr.M.SRIDHAR, Dr.N.C.KOTAIAH, Dr.K.SELVAM, DR.T.VENGATESH |
|
Abstract: |
The deployment of deep neural networks (DNNs) on resource-constrained edge
devices is a significant challenge in contemporary computing. A core problem is
that traditional model partitioning, which splits a DNN at a fixed point between
a client and a server, fails to account for the dynamic nature of edge
environments, including fluctuating network bandwidth, heterogeneous client
hardware, and varying computational loads. This paper proposes Client-Adaptive
Neural Networks (CANN), a novel server-driven architecture for dynamic model
partitioning. In CANN, a lightweight profiler on the client device continuously
monitors resource metrics (CPU, memory, battery) and network latency. These
metrics are reported to a central Orchestration Server, which leverages a
reinforcement learning (RL) agent to dynamically select the optimal partition
point from a set of pre-defined "bottleneck" layers within a DNN. This decision
minimizes end-to-end inference latency while respecting client resource
constraints. We evaluate our framework on a variety of DNN models (MobileNetV2,
ResNet-50) and edge device simulators under realistic network conditions. Our
results demonstrate that CANN reduces average inference latency by 41% compared
to static client-side, server-side, and fixed-split approaches, and improves
energy efficiency by 33% on client devices. We conclude that the CANN
architecture provides a robust, adaptive solution that effectively overcomes the
limitations of static partitioning, paving the way for more responsive and
sustainable edge AI systems. |
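Absent the RL agent, the partition-point decision the abstract describes can be approximated by brute-force evaluation of every candidate split under a simple latency model; this is a sketch of the objective only, and all timings, activation sizes, and bandwidth figures are hypothetical:

```python
def best_partition(client_ms, server_ms, act_bytes, bandwidth_bps):
    """Pick the split index k minimizing: client compute for layers [0, k)
    + transfer of the activation crossing the network at split k
    + server compute for layers [k, n).
    k == n means the whole model runs on the client (no transfer)."""
    n = len(client_ms)
    best_k, best_lat = None, float("inf")
    for k in range(n + 1):
        transfer_ms = act_bytes[k] * 8 / bandwidth_bps * 1000 if k < n else 0.0
        lat = sum(client_ms[:k]) + transfer_ms + sum(server_ms[k:])
        if lat < best_lat:
            best_k, best_lat = k, lat
    return best_k, best_lat
```

CANN's contribution is choosing this split adaptively as the profiled metrics change; the exhaustive scan above is the static baseline such an agent would have to beat on decision cost.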
|
Keywords: |
Edge Computing, Dynamic Model Partitioning, Deep Neural Networks, Reinforcement
Learning, Resource Constraint, Inference Latency. |
|
DOI: |
https://doi.org/10.5281/zenodo.18259315 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th January 2026 -- Vol. 104. No. 1-- 2026 |
|
Full
Text |
|
|
Title: |
ADAPTIVE NEURO-FUZZY INFERENCE SYSTEM FOR PREDICTING RELIABILITY IN
ULTRA-LARGE-SCALE INTEGRATED CIRCUITS |
|
Author: |
MRS. SWARNITA G. KALE, DR. RAHUL G. MAPARI |
|
Abstract: |
As Ultra-Large-Scale Integration (ULSI) technology allows for the assembly of
billions of transistors on a single chip, it has emerged as the fundamental
component of modern machines and embedded systems. Application-specific
integrated circuits, memory devices, and microprocessors are all being
integrated circuits, memory devices, and microprocessors are all being
revolutionized through this advancement, leading to systems that are smaller,
faster, and consume less energy. The long-term performance of ULSI systems has
been jeopardized by serious reliability issues triggered by the fast scaling of
devices into nanoscale levels. Time-dependent dielectric breakdown (TDDB), hot
carrier injection (HCI), and negative bias temperature instability (NBTI) are
instances of electrical degradation mechanisms that drastically diminish
circuits' operating lifetimes. Nonlinear and dynamic deterioration patterns at
the nanoscale are frequently ignored by conventional reliability evaluation
techniques that rely on statistical distributions or physics-of-failure models.
In order to enhance the dependability of ULSI circuits, the study proposed a
predictive modeling framework based on machine learning. Using MOSRA-enabled
HSPICE simulations of benchmark circuits under stress, reliability data were
acquired. Maximum Likelihood Estimation (MLE) and Least Squares Estimation (LSE)
have been applied for statistical preprocessing in order to organize the
deterioration data. We trained and assessed the prediction accuracy of several
ML algorithms, such as Support Vector Regression (SVR), Random Forest (RF),
Artificial Neural Networks (ANN), and Adaptive Neuro-Fuzzy Inference System
(ANFIS). The findings reveal that ML-driven models, in particular ANFIS, perform
better than conventional reliability prediction techniques, as they can
accurately simulate nonlinear behaviors and adapt to various stress
circumstances. Through introducing predictive reliability techniques, our effort
assists in improving the design of next-generation ULSI systems, extending their
lifespan proactively, and detecting faults early. |
|
Keywords: |
Degradation, Reliability, ULSI, MOSRA/HSPICE, ANFIS, Optimization. |
|
DOI: |
https://doi.org/10.5281/zenodo.18259355 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th January 2026 -- Vol. 104. No. 1-- 2026 |
|
Full
Text |
|
|
Title: |
SECURE IOT COMMUNICATIONS USING SCRP-DRIVEN DYNAMIC QUASIGROUP CRYPTOGRAPHY |
|
Author: |
HAITHAM RIZK FADLALLAH, MARWA HUSSIEN MOHAMED, WALID ALAYASH, JANE JALEEL
STEPHAN, SARA SALMAN QASIM, TAREQ ALQABANY, MOSTAFA ALI |
|
Abstract: |
Modern IoT systems must solve the challenging task of providing secure and
efficient communication, as conventional cryptographic algorithms require high
computations and/or memory. The paper describes a new lightweight block
encryption algorithm with dynamic Theta Quasigroups, which are generated using
the Shift Cyclic Random Permutation method. The novelty of this paper is the
design of a dynamic, equation-driven quasigroup generation framework that avoids
large lookup tables, minimizing the memory overhead while offering strong
security with high efficiency in processing. This work contributes to new
knowledge by presenting how quasigroup algebra can be turned into a scalable and
energy-efficient cryptographic model, optimized for IoT environments. Key
entropy and resistance to key recovery and post-quantum attacks are further
improved by introducing a 256-bit master key combined with a 128-bit salt and
25-bit random padding. The robustness and randomness properties of the algorithm
are confirmed by a security evaluation using the NIST Statistical Test Suite.
Experimental comparisons reveal that the proposed method outperforms AES in
encryption speed and memory efficiency, particularly for small and medium data
sizes, while complying with green computing principles. Beyond the purely
technical contribution, this work also contributes to the educational and
theoretical development of lightweight cryptography by providing an easily
understandable mathematical framework for teaching and research in IoT security.
In summary, the SCRP-driven TQG algorithm proposed here opens a new direction in
the design of secure, efficient, and sustainable encryption solutions for IoT
and embedded systems. |
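The SCRP-generated Theta Quasigroups are not specified in this listing; the toy below uses the simplest possible quasigroup over Z_256 (x∘y = (x + y) mod 256) merely to illustrate the encrypt-by-chaining / decrypt-by-left-division pattern that quasigroup stream transformations share. It has no security value, and the leader byte is a hypothetical parameter:

```python
def qg_encrypt(data: bytes, leader: int) -> bytes:
    """Chain each plaintext byte through the quasigroup: e_i = e_{i-1} ∘ m_i."""
    out, prev = [], leader
    for m in data:
        prev = (prev + m) % 256   # quasigroup operation x∘y = (x + y) mod 256
        out.append(prev)
    return bytes(out)

def qg_decrypt(data: bytes, leader: int) -> bytes:
    """Left division undoes the chaining: m_i = e_{i-1} \\ e_i = (e_i - e_{i-1}) mod 256."""
    out, prev = [], leader
    for e in data:
        out.append((e - prev) % 256)
        prev = e
    return bytes(out)
```

A real quasigroup cipher would derive a non-trivial (and, per this paper, dynamically generated) quasigroup from key material rather than fixing modular addition.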
|
Keywords: |
SCRP, Dynamic Quasigroup, Secure IoT, Lightweight Cryptography, Post-Quantum
Security |
|
DOI: |
https://doi.org/10.5281/zenodo.18259402 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th January 2026 -- Vol. 104. No. 1-- 2026 |
|
Full
Text |
|
|
Title: |
SEMANTICALLY-GUIDED HYBRID TRAVERSAL FOR COMPLEX QUESTION ANSWERING OVER
KNOWLEDGE GRAPHS |
|
Author: |
M DHARINI DEVI, DR S KANNAN |
|
Abstract: |
Knowledge Graph Question Answering (KG-QA) requires strong and effective
navigation of complex graph architectures to obtain precise responses from
natural language inquiries. Conventional traversal techniques such as
breadth-first search (BFS), depth-first search (DFS), random walks, and
transformer-based approaches are often computationally expensive, prone to
combinatorial explosion, lacking in semantic focus, and unable to scale when
applied to large, intricate RDF graphs. To resolve this, we propose an
innovative hybrid traversal framework that combines semantic filtering,
embedding-based prioritization, and parallel exploration within the RDFLib
environment. (i) Semantic filtering applies ontology-based restrictions to
methodically remove irrelevant pathways, improving clarity and focus; (ii)
embedding-guided ranking uses knowledge graph embeddings to prioritize
semantically relevant adjacent nodes, improving the relevance of traversal
choices; and (iii) concurrent exploration speeds up the retrieval procedure by
simultaneously examining several top-ranked paths, improving scalability.
Unlike previous methods, our framework explicitly balances semantic relevance,
graph structure, and multi-hop reasoning, while maintaining interpretable paths
for answer explanation. The proposed framework undergoes thorough evaluation on
the FB15K-237 knowledge graph benchmark, where results demonstrate that our
method achieves better performance than baseline models. Initial results show
that this combined method improves accuracy, decreases response time, and
scales answer retrieval, elevating the capabilities of KG-QA systems. |
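The mechanisms named in the abstract (ontology-based filtering of predicates, embedding-guided ranking of neighbors, and expansion of top-ranked paths) can be sketched as a best-first traversal. This numpy/heapq illustration assumes node embeddings are given; it is not the authors' RDFLib implementation:

```python
import heapq
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def guided_traverse(graph, emb, start, query_vec, allowed_preds, max_hops=3):
    """Best-first traversal: only edges whose predicate passes the semantic
    filter are followed, and the frontier is ordered by cosine similarity
    between each node's embedding and the query vector."""
    frontier = [(-cosine(emb[start], query_vec), start, [start])]
    visited = {start}
    order = []
    while frontier:
        _, node, path = heapq.heappop(frontier)   # most query-similar node first
        order.append(node)
        if len(path) > max_hops:
            continue
        for pred, nxt in graph.get(node, []):
            if pred in allowed_preds and nxt not in visited:
                visited.add(nxt)
                heapq.heappush(frontier, (-cosine(emb[nxt], query_vec), nxt, path + [nxt]))
    return order   # nodes in visit order; paths are kept for explanation
```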
|
Keywords: |
Knowledge Graph, Question And Answering, Graph Traversal, Semantic Filtering,
Embedding-based Ranking |
|
DOI: |
https://doi.org/10.5281/zenodo.18259553 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th January 2026 -- Vol. 104. No. 1-- 2026 |
|
Full
Text |
|
|
Title: |
PROPULSION AND STABILITY AWARE ROUTING PROTOCOL FOR SUSTAINABLE FLYING AD-HOC
NETWORKS (SPARROW-EAR) |
|
Author: |
R. MYLSAMY, Dr. M. JAIKUMAR |
|
Abstract: |
The limited battery capacity of Unmanned Aerial Vehicles (UAVs) remains a
critical barrier to sustaining the operation of Flying Ad-Hoc Networks (FANETs),
which enable coordinated aerial missions in surveillance, disaster management,
and environmental monitoring. Traditional routing protocols emphasize
connectivity and throughput but often neglect the dual challenges of
propulsion-dominated energy consumption and mobility-induced link instability.
This paper presents Propulsion and Stability-Aware Routing Protocol for
Sustainable Flying Ad-Hoc Networks (SPARROW-EAR), also referred to as
Sustainable Power-Aware Routing with Resilience and Opportunistic Weighting. The
protocol integrates a propulsion-aware energy model, residual flight time
estimation, and predictive link stability analysis into its routing process. A
Pareto-optimized route discovery mechanism filters out non-viable paths early,
while a composite route utility function jointly evaluates energy fairness,
reliability, and stability. The opportunistic load-balancing mechanism
distributes traffic across multiple stable routes, preventing concentrated relay
usage and reducing premature energy depletion among UAVs. Simulation results,
supported by controlled UAV testbed trials, show that SPARROW-EAR achieves
consistently higher packet delivery ratios, reduces energy imbalance by nearly
40%, and extends the overall network lifetime by more than 30% compared with
AODV, EMGR, and LSTDA. These performance gains demonstrate SPARROW-EAR's
capability to maintain reliable communication and sustained energy efficiency in
dense, high-mobility FANET deployments, positioning the protocol as a
technically robust solution for mission-critical aerial networking scenarios. |
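The exact composite route utility function of SPARROW-EAR is not given in the abstract; a plausible stand-in that jointly scores energy fairness (the bottleneck relay's residual energy), predicted link stability, and path length might look like this, with all weights and the normalization being hypothetical:

```python
def route_utility(route, w_energy=0.4, w_stability=0.4, w_hops=0.2, max_hops=10):
    """Composite utility: favor routes whose weakest relay retains high
    residual energy (fairness), whose least stable predicted link is still
    strong, and which are short. All inputs are normalized to [0, 1]."""
    energy = min(route["residual_energy"])      # bottleneck UAV battery
    stability = min(route["link_stability"])    # weakest predicted link
    hop_penalty = len(route["residual_energy"]) / max_hops
    return w_energy * energy + w_stability * stability - w_hops * hop_penalty

def select_route(routes):
    """Pick the candidate route with the highest composite utility."""
    return max(routes, key=route_utility)
```

The opportunistic load balancing in the protocol would then spread traffic over the several highest-utility routes rather than always reusing the single winner.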
|
Keywords: |
FANETs, UAVs, Energy-Aware Routing, Propulsion Modeling, Link Stability, and
Load Balancing. |
|
DOI: |
https://doi.org/10.5281/zenodo.18259678 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th January 2026 -- Vol. 104. No. 1-- 2026 |
|
Full
Text |
|
|
Title: |
TRANSFORMER-BASED AUTOMATION OF AUTHENTIC PRODUCT REVIEWS FOR SCALABLE SOCIAL
PROOF IN E-COMMERCE |
|
Author: |
PAVAN GUNDA, THIRUPATHI RAO KOMATI, SUBHADRA KOMPELLA |
|
Abstract: |
The rapid growth of e-commerce platforms has increased the demand for a large
quantity of genuine and engaging product reviews to create trust and stimulate
buying behaviour. Manual review writing is time-consuming and
resource-intensive, particularly when the size of the catalog runs into the
thousands. New or niche products risk going undiscovered and unaccepted by
consumers due to insufficient reviews. Retailers and brands need a solution
that automatically creates reviews that can replicate the variety and
believability of authentic customer reviews. We present an AI review-generation
system built on the transformer-based GPT-2 model that is capable of generating
natural and believable product reviews. The automated system enhances the
e-commerce experience by offering scalable, authentic social proof across all
product listings. The idea behind this project is to fill the gap between the
existence of products and social proof through the use of sophisticated AI
methods. By automating review writing, businesses can provide sufficient
support to all product listings and eventually improve customer experience and
increase the number of conversions. The study automates the generation of
natural product reviews by combining the newest deep learning and natural
language processing technologies with the help of GPT-2. The results show that
the model can capture subtle expressions of product functionality, quality, and
satisfaction, all of which are critical in generating authentic reviews. The
implications of this strategy are significant for e-commerce sites seeking
scalable solutions for sentiment analysis, content development, and enhancement
of customer engagement. The work further introduces a transformer-based
architecture that lays the groundwork for further research. |
|
Keywords: |
Generative AI, GPT-2, Product Review Generation, Natural Language Processing
(NLP), E-Commerce Automation, Deep Learning, Transformer Model, Amazon Reviews
Dataset, Text Generation, Fine-Tuning |
|
DOI: |
https://doi.org/10.5281/zenodo.18259742 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th January 2026 -- Vol. 104. No. 1-- 2026 |
|
Full
Text |
|
|
Title: |
DEEP EVOLUTIONARY LEARNING MODEL FOR PRECISION MUTATION ANALYSIS IN
MYELODYSPLASTIC AND LEUKEMIC CELLS |
|
Author: |
K. SRILAKSHMI, D. VENKATA LAKSHMI |
|
Abstract: |
Myelodysplastic Syndromes (MDS) are inherited bone marrow stem cell disorders.
Comparing the genetic makeup of healthy and MDS patients may help identify and
treat the disease. MDS gene expression data is being used by biological
scientists to study coding and non-coding gene activities. Categorization and
risk stratification are necessary to make clinical decisions concerning Acute
Myeloid Leukemia (AML) patients. This study employs two processed gene
expression datasets to identify the most critical genes for effective
clustering. This research incorporates a deep learning strategy for
categorization from segmented images based on the Visual Geometric Group (VGG16)
Convolution Neural Network (CNN) architecture. The solution to this problem is a
VGG16 deep learning architecture that uses Particle Swarm Optimization (PSO) for
disease classification and accurate analysis of gene mutations. Using the PSO
technique to fine-tune VGG16's hyperparameters improves feature extraction
efficiency and classification performance. A publicly available dataset of
pre-processed bone marrow smear images is used to validate the system. This
research proposes a Radiation Associated and Therapy Related Rearrangements and
Mutations of Gene Analysis for Myelodysplastic Syndrome and Acute Myeloid
Leukemia Classification using PSO optimized VGG16 (RATR-RMGA-MDS-AML-PSO-VGG16)
for accurate classification levels. In the experiments, the suggested PSO-VGG16
model achieved 99.2% classification accuracy, 99.4% VGG16 feature-processing
accuracy, and 99.4% gene mutation analysis accuracy, a significant improvement
over baseline VGG16 and other traditional machine learning approaches. These
results show that PSO-VGG16 integration is a powerful
and accurate diagnostic tool for MDS and AML classification, which opens new
possibilities for treatment plans. |
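Training VGG16 is out of scope for a sketch, so the example below shows only the PSO update rule itself, minimizing a stand-in objective (the sphere function) where the real system would evaluate a validation-loss surrogate over VGG16 hyperparameters. Swarm size, inertia, and acceleration coefficients are hypothetical defaults:

```python
import numpy as np

def pso_minimize(objective, dim, n_particles=20, iters=60, seed=0,
                 w=0.7, c1=1.5, c2=1.5):
    """Standard PSO: each particle tracks its personal best, the swarm tracks
    a global best, and velocities blend inertia, cognitive and social pulls."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5, 5, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()
    pbest_val = np.array([objective(p) for p in pos])
    g = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        pos = pos + vel
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, float(pbest_val.min())

# Two hypothetical hyperparameters tuned against a stand-in objective:
best, val = pso_minimize(lambda x: float((x ** 2).sum()), dim=2)
```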
|
Keywords: |
Myelodysplastic Syndromes, Acute Myeloid Leukemia, Intrahepatic
Cholangiocarcinoma, Radiation Associated, Therapy, Mutations, Gene Analysis,
Particle Swarm Optimization, Convolution Neural Networks. |
|
DOI: |
https://doi.org/10.5281/zenodo.18259777 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th January 2026 -- Vol. 104. No. 1-- 2026 |
|
Full
Text |
|
|
Title: |
TIME-AWARE MACHINE LEARNING WITH MULTI-SOURCE DATA FOR FIELD-SCALE AGRICULTURAL
DECISION SUPPORT |
|
Author: |
DR. R. SUDHAKAR, DR.B SIVA LAKSHMI, UPPALAPATI NAGA RATNA KUMARI, DR RIDHI RANI,
G SANTHOSH KUMAR, DR. P.VENKATESWARA RAO |
|
Abstract: |
Field-scale decision support for smallholder farms in southern India requires
models that respect both space and time. This study aimed to develop a
time-aware machine learning pipeline that fuses electromagnetic soil
conductivity,
laboratory soil analyses, Sentinel-2 multi-date imagery and local weather
records to provide actionable management zone delineation and nutrient advisory.
We process EM surveys and soil cores with ordinary kriging to create spatial
rasters, generate weekly Sentinel-2 indices from 2018 to 2022, and align daily
weather with crop phenology. A set of models including Random Forest, LSTM,
Temporal Convolutional Network and a spatio-temporal Transformer are trained
under leave-one-year-out and parcel based cross validation. The Transformer
yields the best predictive performance for yield and nutrient response with mean
RMSE = 0.42 t ha-1 and R2 = 0.78 on hold out years, improving RMSE by ~12
percent over the Random Forest baseline. Management zone stability measured by
Cohen Kappa and Jaccard indices is highest for kriging + fuzzy k-means zonation
and supports a 3-class mapping as the best trade-off between stability and
interpretability. Practical decision outputs include per parcel N application
bands and short advisories designed for field managers. The pipeline reduces the
uncertainty of seasonal advisories and delivers maps and short textual guidance
usable by extension services in the southern Indian context. |
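The leave-one-year-out protocol mentioned above can be sketched as an index generator; this is dataset-agnostic and assumes only a per-sample year label, so each model is always validated on a season it never saw during training:

```python
def leave_one_year_out(years):
    """Yield (held_out_year, train_idx, test_idx) triples, holding out one
    season at a time."""
    for held_out in sorted(set(years)):
        test = [i for i, y in enumerate(years) if y == held_out]
        train = [i for i, y in enumerate(years) if y != held_out]
        yield held_out, train, test
```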
|
Keywords: |
Time Aware Machine Learning, Multi Source Data Fusion, Kriging, Sentinel-2, EM
Conductivity, Southern India, Management Zones, Spatio-Temporal Transformer |
|
DOI: |
https://doi.org/10.5281/zenodo.18259816 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th January 2026 -- Vol. 104. No. 1-- 2026 |
|
Full
Text |
|
|
Title: |
EARLY DETECTION OF UTERINE FIBROIDS ENHANCED BY A DUAL-SCALE TRANSFORMER-CNN
HYBRID SEGMENTATION NETWORK (DSTCH-NET) |
|
Author: |
MINU INBA SHANTHINI WATSON BENJAMIN, VISUMATHI.J |
|
Abstract: |
Accurate and early segmentation of uterine fibroids is necessary for quick
evaluation, planning of treatment, and better results for patients. However,
current segmentation techniques often struggle to achieve reliable results due
to the complex morphology, irregular boundaries, low contrast, and noise present
in MRI and ultrasound images. This research addresses these challenges by
proposing an innovative Dual-Scale Transformer-CNN Hybrid Segmentation Network
(DSTCH-Net) that uses the strengths of both transformer and convolutional
architectures. The Fine-Scale Transformer Branch captures the
global context and long-range dependencies, while the Coarse-Scale CNN Branch
focuses on detailed local feature extraction and texture analysis. A Contextual
Cross-Attention Fusion Module (CCAFM) integrates essential features from both
branches, and an Adaptive Boundary Refinement Module (ABRM) further enhances
edge precision. Experimental analysis shows that DSTCH-Net achieves 99.2%
segmentation accuracy with a loss of 0.07, outperforming state-of-the-art
models. These findings confirm that the suggested method can effectively
overcome the constraints of traditional methods and offers a strong framework
for accurate and early detection of uterine fibroids in clinical practice. |
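The cross-attention fusion idea behind CCAFM can be illustrated with a minimal numpy sketch, assuming one branch's tokens query the other's; the token counts and dimensions are illustrative, not taken from the paper:

```python
import numpy as np

def cross_attention(queries, keys_values):
    """Scaled dot-product cross-attention: one branch attends to the other."""
    d = queries.shape[-1]
    scores = queries @ keys_values.T / np.sqrt(d)
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)          # softmax over the other branch
    return w @ keys_values                     # fused, attended features

rng = np.random.default_rng(0)
cnn_tokens = rng.standard_normal((16, 64))    # local CNN-branch features
vit_tokens = rng.standard_normal((49, 64))    # global transformer tokens
fused = cross_attention(cnn_tokens, vit_tokens)
print(fused.shape)                            # (16, 64)
```

Each CNN token receives a weighted summary of the transformer tokens, which is the bidirectional style of fusion such hybrid modules rely on.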
|
Keywords: |
Uterine Fibroids, Medical Image Segmentation, Deep Learning, Dual-Scale
Segmentation, Hybrid Architecture, Transformer |
|
DOI: |
https://doi.org/10.5281/zenodo.18259934 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th January 2026 -- Vol. 104. No. 1-- 2026 |
|
Full
Text |
|
|
Title: |
AN ADAPTIVE HASH DRIVEN ACCESS CONTROL MODEL FOR ENHANCED PATIENT DATA SECURITY
IN HEALTHCARE |
|
Author: |
K.MAITHILI , S.AMUTHA |
|
Abstract: |
With the growing volume of health information, it has become common practice
to protect patient identity while maintaining convenient access to the data.
Under a constantly shifting landscape of cyber-security threats, traditional
solutions fail to provide flexible data access without being overwhelmed by the
volume of data. To overcome these challenges, focusing on patient data
protection, this paper proposes a new hybrid integrated hashing approach, the
Dynamic Adaptive Hash-Block Access Control (DAHBAC) framework, built on a
blockchain-based advanced data-access-control mechanism. The dynamic
multi-factor hashing scheme adapts to the current vulnerability of the data and
to observed access patterns, while the access-control component leverages
blockchain's immutability and decentralized structure to protect patient
privacy while allowing authorized persons to read. The dynamic hashing method
deters intruders by keeping the hash easy to compute while requiring real-time
modification of the hash for access protection. This is made possible by
applying zero-knowledge proofs (ZKP) within the blockchain framework, enabling
verification of information without disclosure of the underlying data.
Compared with conventional methods, testing of a prototype in a healthcare
organization blocked 92% of unauthorized access attempts and achieved a 7%
increase in the data-retrieval rate. These findings show that the proposed
model is a strong patient-data-protection pattern for e-health systems: it not
only secures patient data but also enhances accessibility and scalability to
handle more clients. |
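A minimal stdlib sketch of the dynamic multi-factor hashing idea, assuming (as an illustration only, not the DAHBAC specification) that the iteration count scales with a risk level and that each access contributes a fresh salt:

```python
import hashlib, hmac, os

def adaptive_hash(record: bytes, risk_level: int, access_salt: bytes) -> str:
    """Multi-factor hash: iteration count scales with the current risk level,
    and a per-access salt forces real-time recomputation of the digest."""
    rounds = 1000 * max(1, risk_level)      # harder when the threat level rises
    digest = hmac.new(access_salt, record, hashlib.sha256).digest()
    for _ in range(rounds):
        digest = hashlib.sha256(digest + access_salt).digest()
    return digest.hex()

record = b"patient-42:blood-pressure:120/80"
salt = os.urandom(16)                       # fresh salt per access attempt
h_low = adaptive_hash(record, risk_level=1, access_salt=salt)
h_high = adaptive_hash(record, risk_level=5, access_salt=salt)
print(h_low != h_high)                      # True: digests diverge with risk
```

The digest stays cheap to recompute for a legitimate verifier holding the salt and risk level, but an intruder replaying an old digest fails as soon as either factor changes.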
|
Keywords: |
Decentralized, Block, Security, Access control, Attacks, Cybercriminals, ZKP,
Multifactor |
|
DOI: |
https://doi.org/10.5281/zenodo.18259959 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th January 2026 -- Vol. 104. No. 1-- 2026 |
|
Full
Text |
|
|
Title: |
SWARMHEX-BCCLR: BIO-INSPIRED HEXAGONAL CLUSTERING FOR LONGEVITY OF IOT-WSN
SURVIVAL TIME |
|
Author: |
KALAI KANNAN P , VIDHYA S |
|
Abstract: |
Wireless Sensor Networks (WSNs) form the base network for advanced IoT-based
network deployments. In WSNs, many sensor nodes work together to gather data
from the environment. This data is first sent to a cluster head, which then
passes it on to the base station for processing. In this paper, a
nature-inspired clustering architecture known as SwarmHex is proposed,
integrating honeycomb and hexagonal structures to optimize WSNs. The proposed
work enhances data-transmission efficiency by structuring the communication
pathway from individual nodes to cluster heads, and from cluster heads to the
base station, thereby improving overall network longevity and energy
utilization. Placing nodes at the edges of the architecture yields compact
packing of sensors within a cluster with equal distances between them, which
tackles the coverage problem; transmitting data through the architecture along
a linear direction pattern, combined with a priority-based cluster-head
selection strategy, helps the sensor nodes perform efficient CH selection and
data transfer. The proposed bio-inspired clustering scheme, SwarmHex-BCCLR,
shows best-in-class performance compared with the existing standard algorithms
Ant Colony Optimization (ACO), Grey Wolf Optimization (GWO), Sandpiper Hybrid
Optimization (SHO) and Whale Optimization Algorithm (WOA), contributing to
increased survival time of the WSN. |
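The equal-spacing property of a honeycomb/hexagonal node layout, which the abstract credits for compact packing and coverage, can be checked with a short numpy sketch; the grid size and cell radius are illustrative assumptions:

```python
import numpy as np

def hex_centres(rows, cols, r=1.0):
    """Centres of a honeycomb layout: odd rows offset by half a cell,
    giving equal spacing between neighbouring node positions."""
    pts = []
    for i in range(rows):
        for j in range(cols):
            x = j * np.sqrt(3) * r + (np.sqrt(3) / 2 * r if i % 2 else 0)
            y = i * 1.5 * r
            pts.append((x, y))
    return np.array(pts)

pts = hex_centres(4, 4)
d = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
nearest = np.sort(d, axis=1)[:, 1]       # each node's closest neighbour
print(np.allclose(nearest, nearest[0]))  # True: uniform spacing
```

Every node's nearest neighbour sits at exactly sqrt(3)·r, so no sensor is privileged or starved for coverage, which is the geometric premise of the SwarmHex clustering.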
|
Keywords: |
Bio-Inspired Clustering, Linear Pattern Routing, Survival Time, Base-Station,
Honey Comb Hexagon Structure, Cluster-Head |
|
DOI: |
https://doi.org/10.5281/zenodo.18259977 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th January 2026 -- Vol. 104. No. 1-- 2026 |
|
Full
Text |
|
|
Title: |
LAGRANGIAN OPTIMIZATION-BASED DEEP Q-NETWORK FOR ROBUST AUTISM PREDICTION USING
BEHAVIOURAL SCREENING DATA ACROSS AGE GROUPS |
|
Author: |
B. DEEPA , K.S. JEEN MARSELINE |
|
Abstract: |
Autism Spectrum Disorder (ASD) prediction remains a major challenge due to
behavioural heterogeneity, data imbalance, and the absence of stable cues across
age groups. Existing studies have relied on conventional classifiers or deep
learning models that either fail to capture sequential dependencies in screening
data or suffer from instability caused by noisy, incomplete, and imbalanced
records. This gap highlights the lack of constraint-aware models that can both
generalize and provide reliable decision confidence for early ASD detection. To
address this, the present work introduces a Lagrangian optimization-based Deep
Q-Network (LO-DQN), a novel reinforcement learning framework that embeds
constraint-driven mechanisms: entropy regularization, replay prioritization,
Q-value bias suppression, and target synchronization. This design ensures
stability in learning from diverse behavioural datasets and reduces the risks of
overestimation and premature convergence common in existing methods. The study
contributes new knowledge by demonstrating how constraint-guided reinforcement
learning can be adapted to ASD screening contexts. Experimental results on
structured, non-imaging datasets show improved classification performance and
stability across varied diagnostic profiles. The findings establish LO-DQN as a
scalable, interpretable, and constraint-resilient framework, opening a new
pathway for early, low-cost, and clinically relevant ASD prediction. |
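A toy sketch of two of the constraint-driven mechanisms named above (entropy regularization and periodic target synchronization) in a tabular Q-learning loop; the environment, rates, and coefficients are placeholders, not LO-DQN's actual design:

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
Q_target = Q.copy()                       # separate, lagged target estimates
alpha, gamma, beta, sync_every = 0.1, 0.9, 0.01, 20

def softmax_policy(q_row, tau=1.0):
    p = np.exp((q_row - q_row.max()) / tau)
    return p / p.sum()

for step in range(500):
    s = rng.integers(n_states)
    p = softmax_policy(Q[s])
    a = rng.choice(n_actions, p=p)
    s2, r = rng.integers(n_states), rng.normal()   # toy transition and reward
    entropy = -np.sum(p * np.log(p + 1e-12))       # entropy regularizer
    target = r + beta * entropy + gamma * Q_target[s2].max()
    Q[s, a] += alpha * (target - Q[s, a])          # temporal-difference step
    if step % sync_every == 0:
        Q_target = Q.copy()                        # periodic synchronization

print(np.isfinite(Q).all())               # value estimates stay bounded
```

Bootstrapping from the lagged `Q_target` rather than `Q` itself is what damps overestimation, and the entropy bonus discourages premature convergence to a single action.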
|
Keywords: |
Autism Spectrum Disorder, Screening Dataset, Deep Q-Network, Lagrangian
Optimization, Behavioural Prediction, Reinforcement Learning |
|
DOI: |
https://doi.org/10.5281/zenodo.18259990 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th January 2026 -- Vol. 104. No. 1-- 2026 |
|
Full
Text |
|
|
Title: |
CT-LIVERNET: AN EFFICIENT DEEP LEARNING MODEL FOR LIVER LESION SEGMENTATION AND
DIAGNOSIS |
|
Author: |
DR. SIREESHA VIKKURTY, Dr. ROHITH BALA JASWANTH B, Dr. B. VAMSY KRISHNA,
MUDRAGADA SUNEETHA, MADHAN KUMAR JETTY, VEERAMOHANA RAO REDDY, SATHISH KUMAR
SHANMUGAM, Dr. C P PAVAN KUMAR HOTA, N.MOUNIKA, Dr. SIVA KUMAR PATHURI |
|
Abstract: |
Medicine has undergone substantial changes because of big data analytics
combined with deep learning technologies, which enable doctors to better
predict diseases, monitor patients, diagnose conditions, and deliver better
treatments. Liver disease stands as a significant worldwide health challenge,
attributed to its complexity, elevated death rates, and multiple pathologic
expressions. A combination of wrong medical diagnosis and insufficient medical
care decreases survival expectancy dramatically. Detecting liver disease is
difficult because some damaged liver areas can look unblemished due to sparse
lesion distribution, which causes experts to make incorrect classifications.
Precise, early assessment of liver conditions is essential for reducing
treatment gaps and improving medical outcomes. The
progress of deep learning and medical imaging allowed researchers to develop new
detection procedures for liver disease analysis. The presented research develops
a novel approach to liver disease recognition that adopts YOLO (You Only Look
Once) as its feature extractor and pairs this with Random Forest (RF) and
XGBoost classifiers for superior classification accuracy. The proposed
diagnostic methodology reached a 95% accuracy level which proved superior to
Support Vector Machine (SVM) and Logistic Regression and yielded optimal
diagnosis and classification results. |
|
Keywords: |
Liver Disease, YOLO,RF, XGBoost, Logistic Regression, SVM. |
|
DOI: |
https://doi.org/10.5281/zenodo.18260014 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th January 2026 -- Vol. 104. No. 1-- 2026 |
|
Full
Text |
|
|
Title: |
POSE-GUIDED MULTI-PART ATTENTION FRAMEWORK FOR VIDEO-BASED PERSON
REIDENTIFICATION |
|
Author: |
RANA. S.M. SAAD, MONA. M. MOUSSA, NEMAT S. ABDELKADER, HESHAM FAROUK |
|
Abstract: |
Person Re-Identification (ReID) is a fundamental task in intelligent video
surveillance, aiming to recognize individuals across non-overlapping camera
views. Despite significant progress, video-based ReID remains challenging due to
variations in illumination, occlusion, and dynamic background clutter. To
address these challenges, we present a novel ReID framework that jointly
exploits spatial and temporal cues for robust person retrieval. The proposed
approach integrates multi-level spatial representations by combining global,
intermediate, and fine-grained features. Global and intermediate information is
captured through a self-attention mechanism, while fine-grained details are
derived from pose-based attributes to enhance discriminative power. These
spatial representations are then fused and processed by a temporal
self-attention module, which models inter-frame dependencies to effectively
capture motion dynamics. A retrieval stage subsequently leverages the joint
spatial-temporal features to achieve accurate identification. Comprehensive
experiments on the PRID2011 and iLIDS-VID benchmark datasets demonstrate the
effectiveness of the proposed method. The framework achieves consistent
improvements over baseline approaches, with gains of 2.3% and 3.4% in mean
Average Precision (mAP) on PRID2011 and iLIDS-VID, respectively. These results
highlight the robustness and discriminative capability of the proposed
architecture for real-world video-based ReID applications. |
|
Keywords: |
Video Person Re-Identification; Person Re-Identification; Heatmaps; Pose; Pose
Detection; Video Surveillance. |
|
DOI: |
https://doi.org/10.5281/zenodo.18260049 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th January 2026 -- Vol. 104. No. 1-- 2026 |
|
Full
Text |
|
|
Title: |
EXPLAINABLE HYBRID MACHINE LEARNING PARADIGM FOR TRUSTWORTHY ADVERSARIAL DEFENSE |
|
Author: |
Dr BJD KALYANI, Dr PANNALA KRISHNA MURTHY, Dr Y LAKSHMI PRASANNA, Dr M
PADMAVATHI, Dr SARABU NEELIMA, NITHIN YADAV MASHAM |
|
Abstract: |
The accuracy of user-generated suggestions is critical, especially in
decision-making processes where AI systems play an important role. Explainable
AI (XAI) plays a vital role in the decision-making process as a mechanism for
assessing and verifying the validity of these inputs, ensuring that AI models
stay correct and reliable. This work concentrates on how to apply XAI approaches
to determine if users provide correct suggestions or attempt to deceive the
system, hence improving the accuracy of AI-driven decision-making. This paper
recommends a novel hybrid model that incorporates numerous sophisticated
techniques, including Noisy Student self-training, Generative Adversarial
Networks (GANs), and XAI. The Noisy Student technique enhances model resilience
by self-training on noisy data, whereas GANs provide genuine adversarial
instances for data augmentation. Ensemble learning and robust optimization
techniques improve model resilience, guaranteeing that it can withstand and
respond appropriately to hostile inputs. Generally, XAI techniques contribute
transparency, allowing for the discovery and mitigation of misleading inputs by
clarifying model predictions. This hybrid method not only improves the
robustness and accuracy of AI models, but it also sets a new bar for openness in
AI systems, addressing both technical and ethical concerns in modern AI
applications. |
|
Keywords: |
Artificial Intelligence, Generative Adversarial Networks (GAN), Explainable AI
(XAI), Local Interpretable Model-Agnostic Explanations (LIME), Shapley Additive
Explanations (SHAP). |
|
DOI: |
https://doi.org/10.5281/zenodo.18260079 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th January 2026 -- Vol. 104. No. 1-- 2026 |
|
Full
Text |
|
|
Title: |
INTERPRETABLE DEEP LEARNING FOR IIOT: AN LSTM–IFWT DRIVEN SENSOR DATA ANALYTICS
FRAMEWORK FOR E-WASTE MANAGEMENT AND SUSTAINABLE MONITORING |
|
Author: |
P. SANTHUJA, DR. A SURESH |
|
Abstract: |
Contextually, the Industrial Internet of Things (IIoT) contributes
significantly to Industry 4.0 by delivering intelligent monitoring and
predictive maintenance systems. Nevertheless, IoT sensor data, particularly in
alpha-phase applications, are highly volatile and intermittent across devices,
which creates significant obstacles to trustworthy data analysis. This paper
introduces a mature and scalable framework coupling a Long Short-Term Memory
(LSTM) network with an innovative Intuitive Feature Wavelet Transformation
(IFWT) mechanism that can be employed to successfully model and explain such
time-varying time-series data. The LSTM, a form of Recurrent Neural Network
(RNN), is utilized to learn temporal dependencies between multivariate
temperature measurements observed by sensors installed both inside and outside
a controlled room atmosphere. The implementation adds a blockchain layer,
realized through 5-layer and 13-layer functional transformations, for e-waste
information security and for threefold, multilabel categorization in IoT
deployments. Results of the experiment indicate that the suggested LSTM-based
model is highly accurate in forecasting and detecting anomalies even in the
presence of data instability. To make the models more interpretable, IFWT is
proposed as a post-hoc analysis approach that converts the weights of learned
features into human-intuitive formats. Precision and F1-score performance is
found to be more than 15% better on the time-series task, and classification
performance exceeds 96%, compared with state-of-the-art architectures. The
study demonstrates the vitality of blockchain-enabled IIoT ecosystems with
augmented throughput, drawing on state-of-the-art methods and other deep
learning structures to realize a range of IIoT solutions. |
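The wavelet side of IFWT can be illustrated with a single Haar decomposition step in numpy, splitting a sensor series into trend and detail; the signal and injected spike are synthetic, and this is a generic Haar step, not the paper's IFWT definition:

```python
import numpy as np

def haar_step(x):
    """One level of the Haar wavelet transform: averages capture the trend,
    differences capture transient detail (candidate anomaly cues)."""
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

t = np.linspace(0, 1, 64)
signal = np.sin(2 * np.pi * 4 * t)      # smooth temperature-like series
signal[40] += 3.0                       # injected spike in the sensor series
approx, detail = haar_step(signal)
print(int(np.abs(detail).argmax()))     # 20: spike localized at index 40 // 2
```

The detail coefficients stay near zero on the smooth portions and jump only at the anomaly, which is why wavelet features lend themselves to human-intuitive explanation.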
|
Keywords: |
Industrial Internet of Things (IIOT), IFWT (Intuitive Feature Wavelet
Transform), Deep Learning (DL), Recurrent Neural Network (RNN), Long Short-Term
Memory (LSTM). |
|
DOI: |
https://doi.org/10.5281/zenodo.18260123 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th January 2026 -- Vol. 104. No. 1-- 2026 |
|
Full
Text |
|
|
Title: |
INTEGRATING SEMI-SUPERVISED LEARNING AND TRANSFORMER MODELS FOR OFFENSIVE
LANGUAGE DETECTION IN ENGLISH–TAMIL CODE-MIXED SOCIAL MEDIA |
|
Author: |
RAMA S , MYTHILI R |
|
Abstract: |
In today's world, social media has grown far beyond the press media, becoming
the dominant medium for public expression. Despite this extraordinary growth,
social media is marked by the prevalence of offensive language and harmful
content. In the absence of stringent content moderation and compliance checks,
it has become a place where individuals freely express objectionable things. In
terms of
translation requirements, particularly in low-resource code-mixed language
pairs, such as English–Tamil, detection of offensive verbiage is quite a
challenge due to factors, such as variations in scripts, irregular spellings,
and non-availability of reliable large annotated corpora. This paper describes
an automated annotation pipeline for the detection of offensive language in
English–Tamil translations using the XLM-RoBERTa-large transformer model. The
goal of the proposed system is to generate high-quality multi-class
offensive-language labels—without manual annotation. This model has been
optimized for multi-class classifications in seven categories: profanity, hate
speech, sarcasm, threats, harassment/bullying, disrespect/dismissive, and
non-offensive. We hypothesize that a semi-supervised, confidence-tiered
annotation process, combined with lexicon-based category mapping, public
datasets that are labelled, machine-translated resources, and synthetic data
generation, can achieve annotation quality comparable to human labelling. The
imbalance in classes is addressed through back-translation, generation of
synthetic data, and optimization of class-weighted loss. Experimental results
indicate that the model can achieve a Macro-F1 score of 92.4% and a Cohen’s
Kappa of 0.92, demonstrating near-human annotation quality. |
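The confidence-tiered annotation idea can be sketched in a few lines, assuming hypothetical class posteriors and threshold values (0.90/0.60 are illustrative, not the paper's settings):

```python
import numpy as np

def tiered_labels(probs, hi=0.90, lo=0.60):
    """Confidence-tiered annotation: accept top-tier predictions as labels,
    flag a middle tier for review, and discard low-confidence ones."""
    conf = probs.max(axis=1)
    pred = probs.argmax(axis=1)
    tier = np.where(conf >= hi, "accept",
           np.where(conf >= lo, "review", "discard"))
    return pred, tier

# Hypothetical class posteriors from a fine-tuned classifier (3 classes shown).
probs = np.array([[0.95, 0.03, 0.02],
                  [0.70, 0.20, 0.10],
                  [0.40, 0.35, 0.25]])
pred, tier = tiered_labels(probs)
print(list(tier))   # ['accept', 'review', 'discard']
```

Only the top tier enters the training pool directly, which is how such pipelines keep label quality close to human annotation without manual effort on every example.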
|
Keywords: |
Semi-Supervised Learning, Annotation, Offensive Language Labelling, Low
Resource Language, Multi-Class Classification |
|
DOI: |
https://doi.org/10.5281/zenodo.18260204 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th January 2026 -- Vol. 104. No. 1-- 2026 |
|
Full
Text |
|
|
Title: |
AUTONOMOUS ROBOTICS IN HAZARDOUS ENVIRONMENTS DEVELOPING AI ALGORITHMS FOR SAFE
AND EFFICIENT OPERATION |
|
Author: |
YANDRA SRINIVAS, NARASIMHA RAO THOTA, ANNAPURNA GUMMADI, N SRIHARI RAO, D
VENKATA RAVI KUMAR, SREENIVASULU BOLLA, K.MAKANYADEVI |
|
Abstract: |
This paper describes a new hybrid system that utilizes AI to enhance the safety,
efficiency, and adaptability of autonomous robots in hazardous situations.
Robots can move autonomously in complex and dangerous areas, as the proposed
method relies on RL, CNNs, and real-time hazard detection. In several segments,
the system was tested through simulations conducted in regions with radiation,
chemical waste, and hazardous soil. The research findings demonstrate that this
model yields shorter task times and nearly half the number of safety issues
compared to previous methods. Moreover, the robot moved away from danger on its
own when hazards appeared in its path. The research highlights that AI-powered
autonomous systems enable safer and more efficient work in disaster response, as
well as within industries and military operations. It leads to significant
advantages for robots made to operate in dangerous situations. |
|
Keywords: |
Autonomous Robotics, Hazardous Environments, Reinforcement Learning, Hazard
Detection, Path Planning, Safety Protocols |
|
DOI: |
https://doi.org/10.5281/zenodo.18260226 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th January 2026 -- Vol. 104. No. 1-- 2026 |
|
Full
Text |
|
|
Title: |
IUX -ADNET: MULTI-FUSION DATA ANALYSIS WITH ATTENTION APPROACH COMBINED WITH IUX
DENSE MODEL FOR ACCURATE COTTON YIELD PREDICTION |
|
Author: |
PORANDLA SRINIVAS, DR. A SURESH |
|
Abstract: |
Optimized approaches in agricultural productivity are critical to precise
cotton yield prediction. This paper incorporates an innovative model
architecture with the IUX framework, using custom functionalities (I
(identify), U (SDE), and X (explicit-DE)) implemented to integrate a
multi-fusion approach on the data. The proposed methodology
describes three models: (i) an IUX-Dense Neural Net, (ii) an IUX-Attention
model, and (iii) an IUX-Fused Model that concatenates the IUX-Dense and
IUX-Attention models. These models are extensively trained with a
multimodal-fused dataset (Mendeley dataset + weather-parameter-based synthetic
dataset) that is imbued with different features. The experimental evaluations
are implemented over 25 epochs, and performance stability under early-stopping
criteria demonstrates that the IUX-Fused approach is far superior, reaching
99.99% accuracy on both disease classification and yield prediction, with the
IUX-optimized solution yielding the lowest MSE (0.05), RMSE (0.24), and MAE
(0.188) and the best explained variance score (EVS, 97.58). Comparisons with
machine learning and other SOA architectures are made and tabulated with
performance results on both classification and yield prediction analysis.
Finally, the proposed framework with the IUX model approach, as IUX-ADNET
demonstrates, has far superior capabilities and offers scalable and more
reliable solutions towards cotton yield predictions and disease classification. |
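The regression metrics quoted above (MSE, RMSE, MAE, EVS) can be computed from first principles; the toy yield values below are illustrative, not the paper's data:

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """The four yield-prediction metrics quoted above, from first principles."""
    err = y_true - y_pred
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    mae = np.mean(np.abs(err))
    evs = 1.0 - np.var(err) / np.var(y_true)   # explained variance score
    return mse, rmse, mae, evs

y_true = np.array([2.0, 3.0, 4.0, 5.0])        # toy yields (t/ha)
y_pred = np.array([2.1, 2.9, 4.2, 4.8])        # toy model predictions
mse, rmse, mae, evs = regression_metrics(y_true, y_pred)
print(round(mae, 3))   # 0.15
```

Note that EVS ignores any constant bias in the errors (only their variance counts), which is why it is reported alongside MSE/MAE rather than instead of them.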
|
Keywords: |
Stochastic Differential Equations (SDE), Differential Equations (DE), Explained
Variance Score (EVS), Identify Feature with Stochastic Explicit differential
Equations (IUX), Deep Learning (DL), Machine Learning (ML) |
|
DOI: |
https://doi.org/10.5281/zenodo.18260349 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th January 2026 -- Vol. 104. No. 1-- 2026 |
|
Full
Text |
|
|
Title: |
A NOVEL DATA SECURE MODEL FOR INTERNET OF HEALTH THINGS WITH A NEW LIGHTWEIGHT
CRYPTOGRAPHY ALGORITHM AND STEGANOGRAPHY TECHNIQUE |
|
Author: |
SANGEETHA SUPRIYA KOLA, DR.JENO LOVESUM S P |
|
Abstract: |
Ensuring the security of data in Internet of Things (IoT) based healthcare
systems (HS) presents considerable challenges due to the limitations of
traditional embedding methods and cryptography techniques, leading to more
memory consumption, more execution time, less security, inadequate payload
capacity, and performance inefficiencies. To address these issues, the Bernoulli
Fish-based Stego Algorithm (BFBSA) is introduced as an innovative solution.
Specifically designed for IoT healthcare data, this algorithm is validated
through the encryption and embedding of healthcare data. The process involves
initializing IoT healthcare data, encrypting it using the BFBSA algorithm, and
embedding the encrypted data within steganographic images. Performance analysis
is conducted using key metrics such as payload capacity, encryption time, memory
usage, PSNR, and MSE. Comparative analysis with existing approaches highlights
the BFBSA model’s efficiency and its effectiveness in ensuring secure and
optimized data management in IoT healthcare environments. |
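The embedding step can be illustrated with a minimal least-significant-bit (LSB) sketch in numpy; this generic scheme and the stand-in ciphertext are assumptions for illustration, not the BFBSA algorithm itself:

```python
import numpy as np

def embed_lsb(pixels, payload_bits):
    """Hide payload bits in the least-significant bit of each pixel."""
    stego = pixels.copy()
    stego.flat[:len(payload_bits)] &= 0xFE            # clear target LSBs
    stego.flat[:len(payload_bits)] |= payload_bits    # write payload bits
    return stego

def extract_lsb(stego, n_bits):
    return stego.flat[:n_bits] & 1

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, (8, 8), dtype=np.uint8)  # toy cover image
cipher = np.frombuffer(b"\x42\x7f", dtype=np.uint8)   # stand-in ciphertext
bits = np.unpackbits(cipher)                          # 16 payload bits
stego = embed_lsb(cover, bits)
recovered = np.packbits(extract_lsb(stego, 16))
print(bytes(recovered))   # b'B\x7f'
```

Each pixel changes by at most 1 intensity level, which is exactly what keeps PSNR high and MSE low, the two image-quality metrics the abstract reports.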
|
Keywords: |
IoT Healthcare, Encryption, Embedding Image, Data Security, Crypto Algorithm. |
|
DOI: |
https://doi.org/10.5281/zenodo.18260371 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th January 2026 -- Vol. 104. No. 1-- 2026 |
|
Full
Text |
|
|
Title: |
A LIGHT WEIGHT MULTI HEAD SELF ATTENTION MECHANISM TO CONVOLUTION NEURAL NETWORK
TOWARDS DOMAIN INCREMENTAL CHEST X-RAY IMAGE CLASSIFICATION |
|
Author: |
KAVITHA S, DR.H.HANNAH INBARANI |
|
Abstract: |
In recent years, chest X-ray images have been widely adopted for pathological
diagnosis of respiratory and pulmonary diseases. Advances in computer vision
and artificial intelligence offer promising solutions, with high accuracy and
low computational overhead, for disease classification and severity
estimation. Traditionally, chest X-ray image classification has been performed
using machine learning and deep learning architectures, which provide
excellent performance when they incorporate domain-incremental learning.
Despite the advantages of those models, domain-incremental learning faces
catastrophic forgetting due to large domain gaps. In this paper, a lightweight
multi-head self-attention mechanism, a core transformer component, is
incorporated into a ResNet-50-based convolutional neural network, with
fine-tuning strategies to learn domain-invariant and domain-specific features.
In particular, the attention mechanism performs feature fusion across
different domains to prevent forgetting. Further, a multi-label contrastive
loss is constructed and used as the training loss function to enhance the
distinctiveness of the disease classes. Initially, the convolutional layers of
the ResNet architecture extract hierarchical features from the X-ray images.
Next, the extracted features are projected to a pooling layer incorporating
the attention mechanism. The attention mechanism uses attention coefficients
to compute the domain-invariant and domain-specific features and establish
feature maps. The feature maps of the different domains are aggregated and
classified in the fully connected layer, composed of activation,
normalization, softmax, and loss functions. A ReLU activation aggregates the
feature maps, while the softmax stage, with a Naïve Bayes classifier, assigns
the feature map to the disease categories COVID-19 variants, Normal, Viral
Pneumonia, and Lung Opacity. Experimental results substantiate the
effectiveness of the proposed architecture, with an accuracy of 98.9%,
surpassing the baseline methods in classifying chest X-ray images into
multiple classes. Finally, the results emphasize the significance of
transformer architectures for optimizing conventional deep learning
architectures in medical imaging. |
|
Keywords: |
Transformer, ResNet50, Multi-Head Self Attention, Convolution Neural Network,
Chest X-ray Images, Respiratory Diseases, Pulmonary Diseases, Medical Imaging. |
|
DOI: |
https://doi.org/10.5281/zenodo.18260398 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th January 2026 -- Vol. 104. No. 1-- 2026 |
|
Full
Text |
|
|
Title: |
DEEP NEURAL NETWORK-BASED APPROACH FOR DIABETIC FOOT ULCER CLASSIFICATION |
|
Author: |
MOHAMMAD BANI YOUNES , MOHAMED S.SAWAH |
|
Abstract: |
Diabetic Foot Ulcer (DFU) is a severe complication of diabetes, necessitating
early detection to prevent limb amputation. This study proposes a novel hybrid
Convolutional Neural Network (CNN) model leveraging EfficientNetV2S architecture
for automated DFU classification. Utilizing a dataset of 1,055 foot skin patches
(512 abnormal, 543 normal), the model integrates transfer learning, dropout
layers, global average pooling, and cosine decay learning rate optimization.
Pre-trained weights from EfficientNetV2S were fine-tuned, achieving a
state-of-the-art accuracy of 99.94% on the test set. Training and validation
curves demonstrated stable convergence, with minimal overfitting, underscoring
the model’s robustness. Comparative analysis with existing methods highlights
superior performance, suggesting its potential for clinical deployment in early
DFU diagnosis. This approach enhances diagnostic precision, enabling timely
intervention to improve patient outcomes. |
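The cosine decay learning-rate schedule mentioned above follows a standard closed form; a minimal sketch, with illustrative initial rate and step counts:

```python
import math

def cosine_decay(step, total_steps, lr_init=1e-3, lr_min=0.0):
    """Cosine-annealed learning rate, a common choice when fine-tuning a
    pre-trained backbone such as EfficientNetV2S."""
    cos = 0.5 * (1 + math.cos(math.pi * step / total_steps))
    return lr_min + (lr_init - lr_min) * cos

total = 100
print(cosine_decay(0, total))             # 0.001  (starts at lr_init)
print(round(cosine_decay(50, total), 6))  # 0.0005 (halfway point)
```

The rate falls slowly at first, preserving the pre-trained weights, then decays smoothly toward `lr_min`, which helps the stable convergence the abstract reports.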
|
Keywords: |
Diabetic Foot Ulcer; Deep Learning; EfficientNetV2S; Medical Image
Classification; Transfer Learning. |
|
DOI: |
https://doi.org/10.5281/zenodo.18260414 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th January 2026 -- Vol. 104. No. 1-- 2026 |
|
Full
Text |
|
|
Title: |
CHATBOT INTEGRATION IN GAMIFIED E-LEARNING: BRIDGING ONE-SIZE-FITS-ALL AND
ADAPTIVE APPROACHES |
|
Author: |
ZECRI EZZOUBAIR, EL HADDIOUI ISMAIL, OUZZIF MOHAMMED |
|
Abstract: |
It is true that human interaction can play a significant role in the learning
process. The lack of these interactions in e-learning can lead to feelings of
isolation and disengagement among learners. Several researchers have proposed
gamified e-learning system models as an effective way to increase learners'
motivation by providing them with autonomy, relatedness, and competence. Based
on the literature, we can divide gamification models into three categories: -
The "one size fits all" model, where the game elements and mechanics are
pre-determined and applied to all learners in a fixed manner. - The static
adaptation models, where the game elements and mechanics are selected according
to each learner's profile and then applied in a fixed manner. - The dynamic
adaptation models, where game mechanics are adapted in real time to the
learners' changing needs. In order to help designers of gamified learning
management systems to better maintain the learner’s motivation, we propose a new
model for a gamified learning system that incorporates game elements using the
three models explained above. This model aims to address the limitations of
traditional e-learning platforms by providing a more engaging and personalized
learning experience for students. |
|
Keywords: |
Adaptive Gamification, Learner engagement, E-Learning, Instructional Design,
Adaptation Model |
|
DOI: |
https://doi.org/10.5281/zenodo.18260432 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th January 2026 -- Vol. 104. No. 1-- 2026 |
|
Full
Text |
|
|
Title: |
VISUAL DESCRIPTION GENERATING SYSTEM BY USING CNN AND BI-LSTM |
|
Author: |
ANIL KUMAR MUTHEVI , MANIKYALA RAO BOLLU , BHEEMA RAO RVVN, MAGANTI VENKATESH,
NALLAMILLI V K REDDI, LANKOJI V SAMBASIVARAO |
|
Abstract: |
Image captioning plays a vital role in numerous applications: images paired
with captions enable rapid and descriptively precise image search and indexing.
Unidirectional LSTM (Long Short-Term Memory) decoders, commonly used in
existing systems, are limited in retaining past information, leading to
sub-optimal results for long sequences. In recent years, the advent of deep
neural networks has made image captioning a viable concept. This Visual
Description Generating System (VDGS) introduces a deep-learning model that
generates captions for input images. The model takes an image as input and
employs a Convolutional Neural Network (CNN) together with a Bidirectional
Long Short-Term Memory (Bi-LSTM) network. The CNN component identifies objects
within the image, while the Bi-LSTM constructs sentences about the input
image, producing a caption tailored to it. The proposed model primarily
emphasizes object identification and formulating the most fitting title for
the input image. The novelty of this work lies in combining CNN (InceptionV3)
feature extraction with a Bi-LSTM decoder that integrates bidirectional
contextual modelling, improved sequence handling, and enriched semantic
alignment between images and text. By incorporating forward-backward temporal
processing, the system overcomes the limitations of unidirectional LSTMs and
captures both past and future contextual cues, improving coherence for long
descriptions. This dual-context modelling strengthens semantic coherence,
improves caption fluency, and enhances alignment with visual regions. |
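The forward-backward processing the abstract describes can be sketched in a few lines. This is an illustrative toy, not the authors' model: a plain tanh recurrence stands in for the LSTM cells, and all weights and sizes are made-up assumptions; the point is only that concatenating a forward pass and a backward pass gives each step both past and future context.

```python
import numpy as np

def simple_rnn_pass(xs, Wx, Wh, reverse=False):
    """One tanh-RNN pass over a sequence of feature vectors."""
    steps = xs[::-1] if reverse else xs
    h = np.zeros(Wh.shape[0])
    out = []
    for x in steps:
        h = np.tanh(Wx @ x + Wh @ h)
        out.append(h)
    return out[::-1] if reverse else out

def bidirectional_states(xs, Wx_f, Wh_f, Wx_b, Wh_b):
    fwd = simple_rnn_pass(xs, Wx_f, Wh_f)
    bwd = simple_rnn_pass(xs, Wx_b, Wh_b, reverse=True)
    # Concatenate so step t carries past (fwd) and future (bwd) context.
    return [np.concatenate([f, b]) for f, b in zip(fwd, bwd)]

rng = np.random.default_rng(0)
seq = [rng.standard_normal(4) for _ in range(5)]   # 5 steps, 4 features each
Wx_f, Wh_f = rng.standard_normal((3, 4)), rng.standard_normal((3, 3))
Wx_b, Wh_b = rng.standard_normal((3, 4)), rng.standard_normal((3, 3))
states = bidirectional_states(seq, Wx_f, Wh_f, Wx_b, Wh_b)
print(len(states), states[0].shape)   # 5 steps, each a 6-dim fused state
```

In a real captioning decoder the fused states would feed a softmax over the vocabulary; here they merely show why a bidirectional state is twice the width of a unidirectional one.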
|
Keywords: |
Object Identification, Image Caption, Recurrent Neural Network, Convolutional
Neural Network, Bi-directional Long Short-Term Memory |
|
DOI: |
https://doi.org/10.5281/zenodo.18260451 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th January 2026 -- Vol. 104. No. 1-- 2026 |
|
|
|
Title: |
ACO–RNN: A MULTI-OBJECTIVE ANT COLONY OPTIMIZATION AND RECURRENT NEURAL NETWORK
FRAMEWORK FOR DISASTER TWEET SUMMARIZATION |
|
Author: |
FAHD A. GHANEM, M.C. PADMA, MOHMMED A. S ALMOHAMADI |
|
Abstract: |
The rapid growth of disaster-related content on social media offers valuable
opportunities for situational awareness, yet it also creates major challenges
due to noise, redundancy, and the rapidly evolving nature of crisis events.
Existing disaster tweet summarization approaches, including ontology-based,
embedding-based, and neural methods, typically address semantic relevance,
redundancy reduction, or coherence in isolation. As a result, they struggle to
simultaneously control redundancy, preserve temporal coherence, and remain
scalable for real-time disaster response. This study is motivated by the need for
a unified summarization framework that jointly addresses these challenges within
a single, scalable, and unsupervised model. To this end, we propose Ant Colony
Optimization and Recurrent Neural Network (ACO-RNN), a hybrid framework that
integrates Word2Vec embeddings for semantic representation, Ant Colony
Optimization (ACO) for redundancy-aware sentence selection, and a Recurrent
Neural Network (RNN) to preserve sequential coherence across selected tweets.
The summarization task is formulated as a multi-objective optimization problem
that balances relevance, coverage, and diversity while minimizing redundancy.
The proposed framework is evaluated on four benchmark disaster tweet datasets,
namely TyHagupit, SandyHShoot, HydBlast, and UFlood, using Recall-Oriented
Understudy for Gisting Evaluation (ROUGE) metrics. Experimental results show
that ACO-RNN achieves ROUGE-1 F1-scores ranging from 0.587 to 0.614, with
relative improvements of 8 to 13 percent over strong baseline methods, including
OntoDSumm and IKDSumm, along with consistent gains in ROUGE-2 and ROUGE-L. These
findings demonstrate that combining redundancy-aware optimization with
sequential neural modeling effectively overcomes key limitations of existing
approaches and produces concise, coherent, and informative summaries suitable
for disaster response and situational awareness. |
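The relevance-vs-redundancy trade-off at the heart of the multi-objective formulation can be illustrated without the full ACO machinery. The sketch below is a deliberate simplification: the pheromone-guided search is replaced by a greedy maximal-marginal-relevance style pick, and the Word2Vec embeddings are toy 2-D vectors, so nothing here reproduces the paper's actual optimizer.

```python
import numpy as np

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def select_summary(embs, query, k, lam=0.5):
    """Pick k tweets maximizing lam*relevance - (1-lam)*redundancy."""
    chosen, remaining = [], list(range(len(embs)))
    while remaining and len(chosen) < k:
        def score(i):
            rel = cos(embs[i], query)
            red = max((cos(embs[i], embs[j]) for j in chosen), default=0.0)
            return lam * rel - (1 - lam) * red
        best = max(remaining, key=score)
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Tweets 0 and 1 are near-duplicates; tweet 2 is distinct but relevant.
embs = [np.array([1.0, 0.0]), np.array([0.99, 0.05]), np.array([0.6, 0.8])]
query = np.array([1.0, 0.3])
picked = select_summary(embs, query, k=2)
print(picked)   # → [1, 2]: the duplicate tweet 0 is penalized and skipped
```

An ACO variant would explore many such selections in parallel, reinforcing high-scoring ones via pheromone updates; the objective being balanced is the same.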
|
Keywords: |
Disaster Tweets; Word2Vec; Text Summarization; Ant Colony Optimization;
Recurrent Neural Networks. |
|
DOI: |
https://doi.org/10.5281/zenodo.18260482 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th January 2026 -- Vol. 104. No. 1-- 2026 |
|
|
|
Title: |
QUANTUM-ASSISTED HYBRID SECURITY FRAMEWORK FOR SECURE AUTHENTICATION WITH
INTRUSION DETECTION FOR 5G NETWORKS |
|
Author: |
AANANDARAM V, P. DEEPALAKSHMI, VIDHYA PRAKASH RAJENDRAN |
|
Abstract: |
5G networks provide extensive connectivity and highly advanced capabilities,
but they also introduce serious security challenges: managing keys that can
resist quantum attacks, authenticating users securely, and detecting unwanted
access in real time. Traditional security methods and machine-learning tools
are not safe against both classical and quantum-based attacks. This work introduces
QSA-IDF (Quantum-Assisted Secure Authentication and Intrusion Detection
Framework), a hybrid security framework that uses BB84-based Quantum Key
Distribution for quantum-driven secure authentication and a Variational
Quantum Classifier (VQC) for quantum-driven intrusion detection. The framework is simulated using IBM
Qiskit and implemented on MEC-edge layers using NS-3 and Mininet, simulating
actual 5G slices. Experimental findings on CICIDS 2018 dataset show 99%
detection accuracy, 98% F1-score, and higher quantum fidelity scores than
classical baselines. Comparison with SVM, RF, and CNN models indicates the
scalability and robustness of the quantum system. The contribution of the paper
lies in offering a fully integrated, end-to-end quantum-secure security
framework solving key limitations of existing 5G security architectures and
providing a scalable path towards 6G-ready quantum-secure communications. |
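The sifting step of the BB84 protocol the framework builds on is easy to illustrate classically. The paper uses IBM Qiskit for the quantum simulation; this pure-Python sketch only shows the protocol logic under the idealized no-eavesdropper assumption, and the bit/basis choices are randomly generated stand-ins.

```python
import random

def bb84_sift(n, seed=42):
    """Classical illustration of BB84 sifting (no eavesdropper)."""
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n)]
    alice_bases = [rng.choice("XZ") for _ in range(n)]
    bob_bases   = [rng.choice("XZ") for _ in range(n)]
    # When Alice's and Bob's bases agree, Bob's measurement equals
    # Alice's bit; mismatched-basis positions are publicly discarded.
    return [bit for bit, a, b in zip(alice_bits, alice_bases, bob_bases)
            if a == b]

key = bb84_sift(16)
print(len(key))   # roughly half of the 16 positions survive sifting
```

In the full protocol, a sample of the sifted key is then compared to estimate the error rate; an eavesdropper measuring in random bases would raise it to about 25%, which is what makes interception detectable.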
|
Keywords: |
5G, Network Security, Quantum, Machine Learning, QKD, QML, Alice, Bob,
Eavesdropping, Intrusion Detection. |
|
DOI: |
https://doi.org/10.5281/zenodo.18260517 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th January 2026 -- Vol. 104. No. 1-- 2026 |
|
|
|
Title: |
AN NLP-BASED APPROACH FOR SARCASM DETECTION IN MARATHI LANGUAGE WITHIN
MULTILINGUAL ENVIRONMENTS |
|
Author: |
PRAVIN PATIL, SATISH KOLHE |
|
Abstract: |
The expansion of social media platforms has required the development of complex
natural language processing (NLP) methods for sentiment analysis and sarcasm
detection, especially for low-resource languages. This research presents a
novel, ensemble-based NLP framework for sarcasm detection in Marathi-English
code-mixed text, addressing a significant gap in multilingual sentiment
analysis. The study confronts the challenges inherent to the Marathi language,
including its rich morphology and a paucity of annotated corpora, by
constructing a dedicated dataset of 2,400 tweets. Through a hybrid annotation
strategy, this corpus was refined to contain 647 sarcastic, 1,477 non-sarcastic,
and 276 unsure instances. The primary aim of this study is to develop and
evaluate an ensemble-based model that enhances sarcasm detection accuracy in
Marathi-English code-mixed text through the integration of traditional machine
learning and neural network approaches. The proposed methodology employs
comprehensive feature engineering—incorporating TF-IDF, word embedding,
sentiment lexicons, and code-switching indicators—and an ensemble model
architected to synergize the strengths of a Multinomial Naïve Bayes classifier
and a Spiking Neural Network (SNN). Empirical evaluation demonstrates that the
ensemble model achieves superior performance, attaining an accuracy of 94.2%, a
precision of 90.5%, a recall of 92.1%, and an F1-score of 91.3%. These results
signify a substantial improvement over established baseline models, including
XGBoost (87% accuracy), SVM (85% accuracy), BERT-base Multilingual (92%
accuracy), and XLNet (90% accuracy). This work validates the efficacy of
ensemble learning and context-sensitive feature extraction for sarcasm
detection, providing a scalable and robust paradigm for analogous low-resource
linguistic environments. |
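The combination rule behind such an ensemble can be sketched independently of the component models. Below, the Multinomial Naïve Bayes and SNN outputs are stood in for by fixed, hypothetical probability vectors, since only the soft-voting step is illustrated; the paper does not specify its exact fusion weights, so equal weights are an assumption.

```python
def soft_vote(p_a, p_b, w_a=0.5, w_b=0.5):
    """Weighted average of per-class probabilities from two models."""
    combined = [w_a * a + w_b * b for a, b in zip(p_a, p_b)]
    return combined.index(max(combined))  # index of the winning class

# Classes: 0 = non-sarcastic, 1 = sarcastic, 2 = unsure.
p_nb  = [0.30, 0.60, 0.10]   # hypothetical Naïve Bayes output
p_snn = [0.45, 0.50, 0.05]   # hypothetical SNN output
print(soft_vote(p_nb, p_snn))   # → 1: "sarcastic" wins on average
```

Soft voting lets a confident model outvote an uncertain one, which is one reason ensembles of dissimilar learners (probabilistic vs. spiking) can beat either component alone.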
|
Keywords: |
Sarcasm detection, Marathi NLP, Code-Mixed Text, Ensemble Learning, Multilingual
Sentiment Analysis, Low-Resource Language |
|
DOI: |
https://doi.org/10.5281/zenodo.18260545 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th January 2026 -- Vol. 104. No. 1-- 2026 |
|
|
|
Title: |
UNMASKING DEEPFAKE FACE SWAPS USING MULTI-RESOLUTION HYBRID DCT-DWT APPROACH |
|
Author: |
BHAVANI RANBIDA, DEBABALA SWAIN, RISHI RAJ |
|
Abstract: |
In today's digital world, advanced AI-based image editing tools have made it
harder to identify whether an image is real or fake.
Deepfake face swaps are the most convincing forms of image forgery among
different manipulation formats. This study presents a new hybrid method that
combines the Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform
(DWT) to detect such deepfakes. By breaking the image into different frequency
and spatial layers, the proposed method can readily detect the visual
inconsistencies caused by manipulation. By combining both DCT and
DWT, the manipulated regions can be efficiently located. The experiment shows
that the proposed hybrid model improves both accuracy and computational
performance. It offers a robust and reliable framework for digital image
authenticity against the growing deepfake technologies. |
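The multi-resolution decomposition the abstract refers to can be demonstrated with a one-level 2-D Haar transform. The paper does not specify which wavelet it uses, so Haar is an assumption chosen for clarity; splitting an image into one approximation (LL) and three detail sub-bands (LH, HL, HH) is what exposes the high-frequency traces that face swaps leave behind.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar transform; image sides must be even."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # vertical averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # vertical details
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0      # approximation
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0      # horizontal detail
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0      # vertical detail
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0      # diagonal detail
    return LL, LH, HL, HH

img = np.arange(16, dtype=float).reshape(4, 4)  # smooth toy "image"
LL, LH, HL, HH = haar_dwt2(img)
print(LL.shape)   # (2, 2): each sub-band is a quarter of the image
```

For a smooth ramp like this toy input the diagonal band HH is all zeros; splicing artifacts, by contrast, show up as localized energy in the detail bands, which is where a DCT-based frequency analysis can then be focused.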
|
Keywords: |
DCT, DWT, Digital Image Forensic, Deepfake, Multi-Resolution, Counterfeit
Detection. |
|
DOI: |
https://doi.org/10.5281/zenodo.18260569 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th January 2026 -- Vol. 104. No. 1-- 2026 |
|
|
|
Title: |
EVALUATING THE CODE GENERATION CAPABILITIES OF CHATGPT THROUGH C++ DATABASE
MANAGEMENT TASKS |
|
Author: |
MOHAMMED ALABDULLATIF |
|
Abstract: |
Large language models (LLMs) are becoming important tools in software
engineering. These models, trained on large amounts of text, can understand and
generate natural language with high accuracy. Today, they are being used across
many stages of software development, from improving code quality and generating
documentation to supporting natural language interaction with development tools.
Among their many applications, code generation is one of the areas where these
models have shown the greatest impact. This study explores the coding
capabilities of ChatGPT (GPT-4), a large language model introduced by OpenAI in
May 2024. The model was evaluated on twenty C++ database management tasks, and
the quality of its outputs was assessed using Cppcheck, a static code analysis
tool. The results show that ChatGPT consistently produces executable C++
programs, though differences in code length and structure highlight its inherent
nondeterministic nature. Most of the issues detected by static analysis were
stylistic rather than functional or logical. Overall, this work provides
empirical evidence on both the reliability and the current limitations of LLM
based code generation, offering directions for future research. |
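The style-vs-functionality breakdown the study reports could be tallied directly from Cppcheck's machine-readable report (`cppcheck --xml`, whose XML format 2 emits `<error severity="...">` elements). The report below is a hand-written stand-in, not output from the study's twenty tasks.

```python
import xml.etree.ElementTree as ET

# Hypothetical fragment mimicking cppcheck's --xml (format 2) layout.
SAMPLE = """<results version="2">
  <errors>
    <error id="variableScope" severity="style" msg="Scope can be reduced."/>
    <error id="unusedVariable" severity="style" msg="Unused variable."/>
    <error id="nullPointer" severity="error" msg="Possible null dereference."/>
  </errors>
</results>"""

def severity_counts(xml_text):
    """Count diagnostics per severity level in a cppcheck XML report."""
    counts = {}
    for err in ET.fromstring(xml_text).iter("error"):
        sev = err.get("severity", "unknown")
        counts[sev] = counts.get(sev, 0) + 1
    return counts

print(severity_counts(SAMPLE))   # → {'style': 2, 'error': 1}
```

A predominance of `style` over `error`/`warning` severities in such tallies is exactly the pattern the study describes: the generated C++ compiles and runs, with most findings being cosmetic.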
|
Keywords: |
Software Engineering, ChatGPT, Large Language Models, Coding, Programming
Languages |
|
DOI: |
https://doi.org/10.5281/zenodo.18260582 |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th January 2026 -- Vol. 104. No. 1-- 2026 |
|
|
|
|