Submit Paper / Call for Papers
The journal receives papers in continuous flow and will consider articles
from a wide range of Information Technology disciplines, from the most basic
research to the most innovative technologies. Please submit your papers
electronically to our submission system at http://jatit.org/submit_paper.php in
MS Word, PDF, or a compatible format so that they may be evaluated for
publication in the upcoming issue. This journal uses a blinded review process;
please include all personally identifiable information in the manuscript when
submitting it for review, and we will redact the necessary information on our
side. Submissions to JATIT should be full research / review papers (properly
indicated below the main title).
|
|
|
Journal of
Theoretical and Applied Information Technology
December 2025 | Vol. 103 No. 23 |
|
Title: |
SECURE CLOUD DATA STORAGE USING CONTEXTUAL ENTROPIC CIPHERTEXT-POLICY
ATTRIBUTE-BASED ENCRYPTION |
|
Author: |
NAGA SWETHA DHULIPUDI, BALAJI SAVADAM |
|
Abstract: |
Cloud computing delivers a centralized platform for data storage and commercial
applications. To ensure security, the cloud system effectively manages all
connected devices, applications, and data. Many existing algorithms fail to
minimize the operational delay of data encryption because of ineffective key
management, which affects overall security. To overcome this issue, this
research proposes Contextual Entropic Ciphertext-Policy Attribute-Based
Encryption (CE-CPABE) for secure cloud data storage. CE-CPABE provides improved
security and flexibility for cloud data storage by applying attribute-based
access control and contextual validation. Integrating entropy-based key
generation ensures that every encryption and decryption process is unique and
resistant to key reuse, thereby securing data access in dynamic and distributed
environments. This approach enhances the integrity, confidentiality, and access
efficiency of the sensitive information stored in the cloud. CE-CPABE optimizes
cryptographic operations while reducing computational overhead and resource
utilization in cloud environments. The experimental results show that CE-CPABE
achieves an encryption time of 2.28 s and a processing time of 0.62 s for a
data size of 30 MB, outperforming existing methods. |
|
Keywords: |
Attribute-Based Encryption, Attribute-Based Access Control, Contextual Entropic,
Cloud Computing, Secure Cloud Data Storage |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th December 2025 -- Vol. 103. No. 23-- 2025 |
|
Full
Text |
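A minimal sketch of the entropy-based, context-bound key derivation the
abstract describes, written as an HKDF-style construction in Python; the
function, attribute strings, and context fields are illustrative assumptions,
not the paper's CE-CPABE scheme:

```python
import hashlib
import hmac
import os

def derive_session_key(master_secret: bytes, attributes: list,
                       context: dict) -> tuple:
    """Derive a single-use key bound to user attributes and request context."""
    nonce = os.urandom(32)  # fresh entropy: no two sessions share a key
    info = "|".join(sorted(attributes)).encode()
    ctx = "|".join(f"{k}={v}" for k, v in sorted(context.items())).encode()
    # HKDF-style extract-then-expand built from HMAC-SHA256
    prk = hmac.new(nonce, master_secret, hashlib.sha256).digest()
    key = hmac.new(prk, info + b"|" + ctx + b"\x01", hashlib.sha256).digest()
    return key, nonce

key, nonce = derive_session_key(
    master_secret=os.urandom(32),
    attributes=["dept:finance", "role:auditor"],          # hypothetical attributes
    context={"time": "2025-12-15T10:00", "loc": "DC-1"},  # contextual inputs
)
```

Because a fresh nonce feeds the extract step, two requests with identical
attributes and context still derive different keys, which is the resistance to
key reuse the abstract claims.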
|
|
Title: |
PUBLIC-PRIVATE PARTNERSHIP IN THE FIELD OF INFORMATION SECURITY AS A TOOL FOR
STRENGTHENING NATIONAL SECURITY |
|
Author: |
ANATOLII ZINENKO, NATALIIA KRASNOSTANOVA, OLGA LUGACH, INNA KULCHII, OLEKSANDR
ORLOVSKYI |
|
Abstract: |
Public-private partnerships (PPPs) are successfully used in international
practice to strengthen national security, in particular, in the defence sector,
which makes the development of this method of public-private interaction
relevant in Ukraine. The aim of the study was to assess the impact of PPPs on
socio-economic and security aspects in low- and middle-income countries and
integrate the experience of leading countries. The research employed the methods
of statistical, correlation, and regression analysis. The results of the study
showed a significant positive impact of PPP investments on the military
strength, logistics, and fire safety of low- and middle-income countries.
However, the research did not reveal any significant impact of such investments
on ensuring cybersecurity, which requires increased attention from governments
to this area. The analysis of the experience of leading countries identified key
positive examples of the use of PPPs in the defence and cybersecurity sectors.
Key obstacles to the development of PPPs in the cybersecurity sector as an
important component of information security were revealed. Priority areas for
the development of PPPs in the field of cybersecurity were outlined. The results
of the study gave grounds to provide recommendations for Ukraine, which can be
used in practice for improving the legislative framework and developing
strategies to strengthen national security. Further research may develop
proposals for expanding PPPs in the defence sector of Ukraine through the use
of specialized platforms for international coordination and information
exchange. |
|
Keywords: |
Public-Private Partnership, National Security, Information Security,
Cybersecurity, Infrastructure, Logistics, Sustainable Development. |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th December 2025 -- Vol. 103. No. 23-- 2025 |
|
Full
Text |
|
|
Title: |
THE IMPACT OF INDONESIAN FOOD INFLUENCERS ON SOCIAL MEDIA IN SHAPING INDONESIAN
CONSUMERS' PURCHASE INTENTION FOR FOOD CHOICES DURING OVERSEAS TRAVEL |
|
Author: |
KARINA HANANINGSIH, SHEILA GRACIA, SITI PATIMAH, JERRY S. JUSTIANTO |
|
Abstract: |
As social media increasingly becomes the primary reference source for culinary
information, the role of food influencers has grown more significant in shaping
consumer perceptions and preferences and influencing purchasing
decisions, especially in the context of international tourism. This study aims to
analyze the impact of Indonesian food influencers on social media in shaping
Indonesian consumers' purchase intentions for food choices, particularly when
traveling abroad. This research adopts a quantitative approach, utilizing
SEM-PLS as the analytical tool to examine the proposed research model. Out of
241 respondents who participated in a mobile-based survey using Google Forms,
225 respondents met the criteria of having been exposed to foreign food review
content uploaded by Indonesian food influencers on social media. The findings
indicate that influencer expertise and perceived similarity have a significant
positive effect on both consumer trust and purchase intention. Interactivity was
found to positively influence purchase intention but had no significant effect
on trust. Trust plays a significant mediating role in the relationship between
expertise and perceived similarity with purchase intention, but not in the
relationship between interactivity and purchase intention. These findings
highlight the importance of expertise and perceived similarity in building trust
toward influencers, as well as the critical role of trust in driving consumers'
purchase intentions in the context of international travel. This study
contributes new insights into cross-cultural consumer behavior by examining the
interplay between same-nationality influencers and consumers within a foreign
context, which has been largely overlooked in prior studies. The practical
implications of this study can be utilized by stakeholders in the tourism and
culinary industries to design promotional strategies based on collaborations
with credible food influencers, making them more effective for Indonesian
consumers traveling abroad. |
|
Keywords: |
Food Influencer, Overseas Travel, Trust, Purchase Intention, Social Media
Marketing |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th December 2025 -- Vol. 103. No. 23-- 2025 |
|
Full
Text |
|
|
Title: |
EPILEPTIC SEIZURE DETECTION FROM EEG SIGNALS USING SVM WITH TIME–FREQUENCY
FEATURE EXTRACTION METHODS |
|
Author: |
HINDARTO HINDARTO, ADE EVIYANTI, AHMAD AHFAS, EGHA ARYA AFFANDI |
|
Abstract: |
Epilepsy is a neurological disorder characterized by recurrent seizures,
affecting approximately 50 million people worldwide. Early detection of seizures
is crucial for accurate diagnosis and effective treatment. This study proposes
several methods for EEG signal feature extraction in the time-frequency domain,
including STFT, CWT, DWT, and HHT. The study uses the UCI dataset, which
contains 11,500 one-second EEG segments. The dataset is processed in several
stages: pre-processing, feature extraction in the time-frequency domain, and a
final classification stage using Support Vector Machines (SVM). The
classification process is divided into training and testing, with an 80%/20%
split. The classification results are evaluated using recall, F1 score,
accuracy, precision, MCC, and Kappa. All methods achieve performance above
96%, except for feature extraction using DWT with 5 features. Feature
extraction using STFT with 14 features and CWT with 49 features produces the
highest accuracy of 98.39%, surpassing previous studies that used SVM for
classification. The worst performance, 86.7%, was obtained with DWT feature
extraction using 5 features. Compared with
previous studies, the proposed method provides better accuracy. The study
concluded that with a combination of appropriate pre-processing and a suitable
feature extraction algorithm, accurate and reliable detection of epileptic
seizures can be achieved. |
|
Keywords: |
Epilepsy, EEG, Seizure Detection, Time Frequency Domain, SVM. |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th December 2025 -- Vol. 103. No. 23-- 2025 |
|
Full
Text |
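A minimal sketch of the STFT-plus-SVM pipeline the abstract outlines, in
Python; the placeholder data, 178-samples-per-second segments, and band-power
features are assumptions standing in for the UCI segments:

```python
import numpy as np
from scipy.signal import stft
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def stft_features(segment, fs=178):
    """Mean spectral power per frequency bin of the STFT magnitude."""
    _, _, Z = stft(segment, fs=fs, nperseg=64)
    return (np.abs(Z) ** 2).mean(axis=1)  # average over time frames

X_raw = np.random.randn(200, 178)   # placeholder one-second EEG segments
y = np.random.randint(0, 2, 200)    # placeholder seizure labels
X = np.array([stft_features(s) for s in X_raw])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10))
clf.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```

Swapping `stft_features` for a CWT, DWT, or HHT extractor while keeping the
same split and metrics reproduces the comparison structure the abstract
describes.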
|
|
Title: |
AUDIBLOC: A POST-QUANTUM AUDIO SECURITY SYSTEM WITH FORENSIC WATERMARKING,
CHAOTIC ENCRYPTION, AND BLOCKCHAIN VERIFICATION |
|
Author: |
GAYATRI SRUTHI BELLAPUKONDA, NARESH SAMMETA |
|
Abstract: |
Audio transmission of information is vital in telemedicine, financial processes,
the corporate world, government intelligence, and beyond. Nevertheless, current
solutions are being challenged like never before: classic RSA/AES encryption
can be broken by quantum computers; traditional watermarking is not forensically
traceable, which is problematic for legal compliance; and centralized
verification is prone to single points of failure that can be targeted.
AUDIBLOC addresses these limitations with a four-layer architecture. Unlike
conventional pseudo-random key generation, our quantum random number generation
(QRNG) provides true entropy that cannot be attained by classical means.
Traditional chaotic encryption suffers from periodicity, whereas our butterfly
effect-based implementation achieves exponentially greater confusion and
diffusion. Our quantum forensic watermarking achieves 95% successful detection
of watermarks, even after compression and filtering of WAV files; this compares
favorably with current watermarking, which is destroyed by even the simplest
audio processing. Most importantly, our blockchain-based verification
eliminates single points of failure and provides an immutable audit trail.
Experimental validation also shows better practical performance than other
algorithms: an encryption speed of 45.2 MB/s, in the range of AES-256 (68.5
MB/s) while offering quantum resistance; watermark survival against 89.2% of
signal-processing attacks versus 60-70% for modern schemes; and resistance to
both Shor's and Grover's quantum algorithms. Security analysis demonstrates an
effective key space of 2^256 while preserving audio quality scores (PESQ: 4.32)
similar to unencrypted data. AUDIBLOC is a feasible and scalable means of
providing post-quantum secure audio, a key factor in ensuring that
organizations can secure audio communication with confidence in the
authenticity and confidentiality of messages, even as traditional cryptographic
underpinnings become obsolete. |
|
Keywords: |
Quantum Cryptography, Audio Encryption, Chaos Theory, Perceptual Evaluation of
Speech Quality, Blockchain, Digital Forensics, Post-Quantum Security |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th December 2025 -- Vol. 103. No. 23-- 2025 |
|
Full
Text |
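A minimal sketch of a logistic-map keystream cipher, the standard form of the
chaotic encryption the abstract invokes; the map parameter and seed below are
assumptions, and AUDIBLOC's QRNG seeding and four-layer pipeline are not
reproduced:

```python
import numpy as np

def logistic_keystream(seed: float, n: int, r: float = 3.99) -> np.ndarray:
    """Generate n keystream bytes from the chaotic logistic map."""
    x = seed  # in (0, 1); AUDIBLOC would draw this from a QRNG instead
    out = np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)         # butterfly effect: nearby seeds
        out[i] = int(x * 256) & 0xFF  # diverge exponentially fast
    return out

def chaotic_xor(audio_bytes: bytes, seed: float) -> bytes:
    """XOR audio sample bytes with the chaotic keystream."""
    ks = logistic_keystream(seed, len(audio_bytes))
    return bytes(np.frombuffer(audio_bytes, dtype=np.uint8) ^ ks)

samples = bytes(range(16))            # stand-in for WAV sample data
ct = chaotic_xor(samples, seed=0.6180339887)
assert chaotic_xor(ct, seed=0.6180339887) == samples  # XOR is self-inverse
```

The periodicity problem the abstract mentions arises when the map's orbit
falls into a short cycle under finite precision; mitigations typically re-seed
or perturb the state, which is where fresh QRNG entropy would enter.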
|
|
Title: |
FACEDEPAI: A DEEP LEARNING FRAMEWORK FOR DEPRESSION DETECTION FROM FACIAL
EXPRESSIONS USING FACEDEPNET |
|
Author: |
MD ZAINUDDIN NAVEED, DR. SHIVAMPETA APARNA |
|
Abstract: |
Depression is a widespread mental health problem with a significant effect on
productivity and quality of life. Early, objective detection is still a crucial
problem as existing diagnostic methods like clinical interviews and
self-reported questionnaires are subjective and resource demanding. Advances in
artificial intelligence have facilitated the construction of an automatic
depression detection system based on multiple modal data signals, such as text,
speech, and image. However, the current image-based methods may overfit to small
datasets and not focus on depression-specific characteristics with poor
interpretability, leading to their limited effectiveness in clinical
applications. To address these gaps, we present FaceDepAI, a deep
learning system designed for binary depression detection from facial
expressions, based on our model, FaceDepNet. The architecture combines
convolutional neural networks with squeeze-and-excitation attention modules to
dynamically reweight feature channels, emphasizing subtle yet clinically
significant facial cues. We use data augmentation and dropout regularization to
improve generalization, while incorporating Grad-CAM for increasing
interpretability by highlighting depression-related regions on the face. The
framework was tested on the DAIC-WOZ dataset with five-fold cross-validation.
The experimental results showed that our proposed method achieved 96.8%
accuracy, 96.5% precision, a recall of 97%, and a ROC-AUC as high as 98.2%,
consistently outperforming the baseline models, including
ResNet-18, VGG-Face, and EfficientNet without an attention mechanism. These
results demonstrate FaceDepAI is a robust, explainable, and clinically
meaningful tool for depression detection. The system can be scaled to broader
mental health screening and used as an aid to clinical decision making. |
|
Keywords: |
Depression Detection, Facial Expressions, Deep Learning, Attention Mechanism,
Explainable |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th December 2025 -- Vol. 103. No. 23-- 2025 |
|
Full
Text |
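A minimal squeeze-and-excitation attention module of the kind the abstract
says FaceDepNet combines with its CNN, in PyTorch; the channel count and
reduction ratio are assumptions, not the paper's configuration:

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation: learn per-channel reweighting of feature maps."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = x.mean(dim=(2, 3))           # squeeze: global average pooling
        w = self.fc(w).view(b, c, 1, 1)  # excitation: channel weights in (0, 1)
        return x * w                     # emphasize informative channels

feats = torch.randn(8, 64, 28, 28)  # CNN feature maps (batch, C, H, W)
print(SEBlock(64)(feats).shape)     # torch.Size([8, 64, 28, 28])
```

The learned weights dynamically reweight feature channels, which is how the
architecture can emphasize subtle, clinically relevant facial cues.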
|
|
Title: |
THE ROLE OF REPORTHINK.AI IN REDUCING SUSTAINABILITY REPORT INFORMATION
ASYMMETRY |
|
Author: |
SUCI NASEHATI SUNANINGSIH, MUMPUNI WAHYUDIARTI SITORESMI, SUAMANDA IKA
NOVICHASARI, WILDAN YUDHANTO |
|
Abstract: |
This research aims to examine the effect of Reporthink.AI on information
asymmetry. This research uses 120 companies that are listed on the Indonesian
Stock Exchange ESG Leader Index. Implementation of Reporthink.AI is measured by
a dummy variable. Information asymmetry is measured by bid-ask spread. Data
analysis uses fixed-effect regression. Based on data analysis, Reporthink.AI
brings significant reduction in information asymmetry, validating its value as a
technological advancement for enhanced transparency and credibility for company
disclosures. This research makes several contributions. First, it contributes
to the literature by extending signalling theory to the context of AI and
sustainability reporting. Second, it provides new evidence of the effect of
Reporthink.AI on information asymmetry, especially on the Indonesian Stock
Exchange. |
|
Keywords: |
ESG Leader Index, Information Asymmetry, Machine Learning, Reporthink.AI,
Sustainability Report |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th December 2025 -- Vol. 103. No. 23-- 2025 |
|
Full
Text |
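A sketch of the fixed-effect regression design the abstract describes,
regressing bid-ask spread on a Reporthink.AI adoption dummy; the data frame,
column names, and clustering choice are hypothetical:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
panel = pd.DataFrame({                     # one row per firm-year
    "firm": np.repeat(list("ABCDEFGHIJ"), 6),
    "year": np.tile([2020, 2021, 2022, 2023, 2024, 2025], 10),
    "reporthink": rng.integers(0, 2, 60),  # dummy: 1 after adoption
    "spread": rng.random(60),              # bid-ask spread
})

# C(firm) and C(year) absorb firm and time fixed effects
model = smf.ols("spread ~ reporthink + C(firm) + C(year)", data=panel).fit(
    cov_type="cluster", cov_kwds={"groups": panel["firm"]}
)
print(model.params["reporthink"])  # negative => lower information asymmetry
```

A significantly negative coefficient on the dummy is the pattern the abstract
reports: adoption is associated with a narrower spread, i.e., reduced
information asymmetry.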
|
|
Title: |
TEXT2CHAIN: A MODULAR NLP-ENHANCED ORACLE ARCHITECTURE WITH DECENTRALIZED
VALIDATION AND HUMAN-IN-THE-LOOP FOR SECURE OFF-CHAIN/ON-CHAIN INTEGRATION |
|
Author: |
MERIEM KERMANI |
|
Abstract: |
The integration of unstructured textual data into blockchain systems presents a
significant challenge, primarily due to the semantic ambiguity of natural
language and the inherent trust issues associated with centralized oracles. This
paper proposes a novel, modular oracle architecture enhanced with Natural
Language Processing (NLP) to enable a secure, context-aware translation of
human-readable text into reliable on-chain smart contract execution. Departing
from conventional single-point oracles, our framework employs a decentralized
oracle network designed to perform multi-source semantic validation,
cryptographic securing, and consensus-based aggregation of NLP outputs prior to
blockchain commitment. The architecture operates in three distinct stages: (1)
off-chain NLP processing to extract structured data and confidence scores from
text; (2) a network of AI-powered oracles that independently validate, enrich,
and cryptographically sign results, with a consensus mechanism (e.g.,
2-out-of-3) determining the final authenticated payload; and (3) on-chain
execution that is triggered only after successful cryptographic verification of
the validated data. This design eliminates single points of failure, enhances
resilience against data manipulation, and ensures a trust-minimized semantic
bridge between off-chain interpretation and deterministic on-chain logic. A
proof-of-concept implementation, utilizing spaCy for NLP, Web3.py, an Ethereum
testnet, and ECDSA signatures, demonstrates the framework's feasibility in
processing domains such as clinical narratives or service reports into auditable
blockchain transactions. The evaluation assesses critical performance metrics,
including gas cost and latency, and analyzes the security trade-offs between
automation, decentralization, and operational efficiency. The primary
contribution of this work is a reproducible, modular framework that advances
secure blockchain interoperability by effectively bridging unstructured human
language with deterministic machine execution for reliable automation across
sectors like healthcare, finance, and public administration. |
|
Keywords: |
Decentralized Oracle Network, Natural Language Processing (NLP), Semantic
Validation, Smart Contracts, Off-chain/On-chain Integration. |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th December 2025 -- Vol. 103. No. 23-- 2025 |
|
Full
Text |
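A minimal sketch of stage (2) of the pipeline: independent oracles sign the
same NLP payload and a 2-out-of-3 check gates the on-chain commit. It uses the
Python `ecdsa` package; the payload format and threshold logic are
illustrative assumptions:

```python
import hashlib
import json
from ecdsa import SECP256k1, SigningKey, BadSignatureError

oracles = [SigningKey.generate(curve=SECP256k1) for _ in range(3)]
verifiers = [sk.get_verifying_key() for sk in oracles]

# structured output of the off-chain NLP stage (hypothetical fields)
payload = json.dumps({"field": "diagnosis", "value": "pneumonia",
                      "confidence": 0.91}, sort_keys=True).encode()
digest = hashlib.sha256(payload).digest()

signatures = [sk.sign(digest) for sk in oracles]  # each oracle signs independently

def count_valid(digest, sigs, vks):
    valid = 0
    for sig, vk in zip(sigs, vks):
        try:
            vk.verify(sig, digest)
            valid += 1
        except BadSignatureError:
            pass
    return valid

# consensus gate: commit on-chain only if at least 2 of 3 signatures verify
if count_valid(digest, signatures, verifiers) >= 2:
    print("payload authenticated; trigger smart-contract execution")
```

On-chain, the contract would repeat the signature check before executing, so a
single compromised oracle cannot force a commit.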
|
|
Title: |
HYBRID ATTENTION-DRIVEN CNN-GRU FRAMEWORK FOR ROBUST DETECTION OF MULTI-SCALE
POWER QUALITY DISTURBANCES IN SMART MICROGRIDS |
|
Author: |
M.DEVIKA RANI, V.SAI GEETHA LAKSHMI, M.V.RAMESH, KARUNAKAR KANCHETI, PREMA
KANDASAMY, P. MUTHU KUMAR |
|
Abstract: |
Smart microgrids are becoming increasingly interconnected with renewable energy
sources and nonlinear loads, which means that power quality (PQ) disturbances
such as voltage sags/swells, harmonics, transients, and frequency deviations are
becoming more complex and dynamic. Conventional rule-based or signal-processing
approaches frequently miss the multi-scale and spatiotemporal features of such
disturbances. In order to properly identify and categorize power quality
events, this research introduces a novel deep learning architecture for smart
microgrids that integrates a Convolutional Neural Network (CNN) with a Gated
Recurrent Unit (GRU). The CNN part extracts features from spatial and
frequency-domain data of current and voltage waveforms, while the GRU keeps
track of the event's temporal relationships. Experimental results demonstrate
significant improvements in classification accuracy, F1-score, and inference
time when compared to baseline models such as vanilla CNN, LSTM, and classic
SVM-based classifiers. |
|
Keywords: |
Deep Learning-Based Classification; Gated Recurrent Unit (GRU); Power Quality
Disturbances; Smart Microgrids; Spatiotemporal Feature Extraction |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th December 2025 -- Vol. 103. No. 23-- 2025 |
|
Full
Text |
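A minimal PyTorch sketch of the CNN-GRU design the abstract outlines:
convolutional feature extraction over voltage/current windows followed by a
GRU for temporal dependencies. Layer sizes, the two-channel input, and the
number of PQ classes are assumptions:

```python
import torch
import torch.nn as nn

class CNNGRUClassifier(nn.Module):
    def __init__(self, n_channels: int = 2, n_classes: int = 8):
        super().__init__()
        self.cnn = nn.Sequential(  # local waveform features
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.gru = nn.GRU(input_size=64, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):       # x: (batch, channels, samples)
        f = self.cnn(x)         # (batch, 64, samples // 4)
        f = f.transpose(1, 2)   # (batch, time, features) for the GRU
        _, h = self.gru(f)      # final hidden state summarizes the event
        return self.head(h.squeeze(0))

logits = CNNGRUClassifier()(torch.randn(4, 2, 640))  # V and I waveforms
print(logits.shape)                                  # torch.Size([4, 8])
```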
|
|
Title: |
AUTOMATED CLASSIFICATION AND COUNTING OF VEHICLES USING DEEP LEARNING APPROACH |
|
Author: |
AMIR RAZA, JOHARI ABDULLAH, REHMAN ULLAH KHAN, IRWANDI HIPNI MOHD HIPINY,
HAMIMAH UJIR |
|
Abstract: |
The rapid increase in automobile traffic in and around cities presents
significant challenges for transportation management, infrastructure planning,
and overall system viability. Conventional vehicle classification and counting
techniques, which are mostly manual or sensor-based, are often constrained by
inefficiency, labor intensity, and limited scalability in real-time
applications. This paper proposes a deep learning approach for real-time
classification and enumeration of vehicles using one-dimensional signals from
piezoelectric sensors. The 1D-CNN stacks convolutional layers with small
kernel sizes so that the temporal dependencies of the sensor signals can be
learned effectively, with pooling and fully connected layers used to extract
robust features. The model was trained on a hybrid dataset combining
field-collected sensor data and synthetic signals generated with traffic
simulation tools to cover a range of vehicle classes and road conditions,
which supports scalability and generalization. The proposed model performs
well, achieving a classification accuracy of 0.99 and a mean Average Precision
(mAP) of 0.98 across five different vehicle classes. Moreover, its performance
was verified on unseen data, demonstrating strong generalization ability.
Being computationally efficient, the solution can be deployed on edge
computing devices to support realistic traffic-monitoring applications. |
|
Keywords: |
Vehicle Classification, Vehicle Counting, 1D Convolutional Neural Network
(1D-CNN), Road Energy-Harvesting Sensors, Intelligent Transportation System
(ITS) |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th December 2025 -- Vol. 103. No. 23-- 2025 |
|
Full
Text |
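A compact sketch of the 1D-CNN the abstract describes: stacked small-kernel
convolutions over piezoelectric traces, pooling, and a fully connected head;
the layer sizes and five-class head are assumptions:

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool1d(2),
    nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool1d(2),
    nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),  # length-independent global pooling
    nn.Flatten(),
    nn.Linear(64, 5),         # five vehicle classes
)

signal = torch.randn(8, 1, 1024)  # batch of raw piezoelectric traces
print(model(signal).shape)        # torch.Size([8, 5])
```

The global pooling keeps the network small and input-length independent, which
matters for the edge deployment the abstract targets.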
|
|
Title: |
SUPERPIXEL-BASED CLASSIFICATION FOR GRAPH NEURAL NETWORK (GNN) IN LEUKEMIA IMAGES |
|
Author: |
B. REVATHI, M. KALIAPPAN, P. CHANTHIYA, S.V. ANANDHI, CALLINS CHRISTIYANA.C,
S.K. KEZIAL ELIZABETH |
|
Abstract: |
Patient survival depends heavily on accurately and quickly detecting leukemia,
but traditional diagnostic methods can be time-consuming and subjective. We
present a innovative approach that utilizes Superpixel-based image segmentation
and graph neural networks (GNN) to classify leukemia in microscopic cell images.
The C-NMC dataset converts each image into a graph, with nodes representing
meaningful regions (Superpixels) and edges capturing spatial relations. The
accuracy of our model is 99.56%, and it performs well in Precision, Recall, and
F1-score metrics. This method not only improves accuracy but also makes it
easier to model spatial patterns in medical images. |
|
Keywords: |
Leukemia Detection, Graph Neural Networks, Superpixel Segmentation, Graph
Attention Network, Image Classification, Medical Imaging |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th December 2025 -- Vol. 103. No. 23-- 2025 |
|
Full
Text |
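A sketch of the image-to-graph conversion the abstract describes: SLIC
superpixels become nodes, spatially adjacent superpixels share an edge, and
mean colour serves as a simple node feature; the segment count and feature
choice are assumptions:

```python
import numpy as np
from skimage.segmentation import slic

def image_to_graph(image: np.ndarray, n_segments: int = 100):
    labels = slic(image, n_segments=n_segments, compactness=10, start_label=0)
    n = labels.max() + 1
    # node features: mean colour of each superpixel region
    feats = np.array([image[labels == i].mean(axis=0) for i in range(n)])
    # edges: label pairs that touch horizontally or vertically
    edges = set()
    for a, b in zip(labels[:, :-1].ravel(), labels[:, 1:].ravel()):
        if a != b:
            edges.add((min(a, b), max(a, b)))
    for a, b in zip(labels[:-1, :].ravel(), labels[1:, :].ravel()):
        if a != b:
            edges.add((min(a, b), max(a, b)))
    return feats, np.array(sorted(edges))

img = np.random.rand(96, 96, 3)  # stand-in for a C-NMC cell image
node_feats, edge_index = image_to_graph(img)
print(node_feats.shape, edge_index.shape)
```

The resulting node features and edge list are exactly what a GNN layer (e.g.,
a graph attention layer) consumes for classification.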
|
|
Title: |
EARLY BREAST CANCER DETECTION USING AN EFFICIENTNET-BASED TRANSFER LEARNING
MODEL |
|
Author: |
M V GANESWARA RAO, K PRASUNA, MALLAMPATI PURNA KISHORE, SURESH BABU CHANDOLU,
VENKATA NARAYANA T, SIVAGANGA BADIPATI, S SINDHURA |
|
Abstract: |
Since breast cancer is still one of the top causes of death for women globally,
effective and precise early detection techniques are desperately needed. Despite
their effectiveness, traditional imaging-based diagnostic techniques like
mammography and ultrasound are frequently constrained by noise, mistaken human
interpretation, and decreased precision in dense breast tissues. This work
investigates the use of transfer learning models for early breast cancer
detection in order to address these issues. To distinguish between benign and
malignant tumors, transfer learning uses pre-trained convolutional neural
networks (CNNs), such as VGG16, ResNet50, InceptionV3, and Efficient Net, that
have been refined on publically accessible breast imaging datasets. Reusing deep
feature representations from extensive natural picture datasets (like ImageNet)
allows the models to maintain excellent accuracy with fewer training samples and
less computing power. To improve feature extraction and generalization, the
suggested method includes pre-processing stages such picture normalization, data
augmentation, and contrast enhancement. Results from experiments show that
transfer learning models perform better than traditional machine learning
classifiers, attaining higher recall, accuracy, and precision across several
modalities. With a detection accuracy of over 96%, EfficientNet-B3 offered the
best balance between accuracy and computational efficiency among the models that
were tested. Additionally, to improve clinical transparency, Grad-CAM imaging is
used to highlight discriminative tumour areas and interpret model predictions.
This study demonstrates that, particularly in medical settings with limited
data, transfer learning provides a strong and scalable foundation for early
breast cancer identification. According to the results, early diagnosis, patient
prognosis, and healthcare outcomes could all be considerably enhanced by
incorporating transfer learning-based diagnostic tools into clinical workflows.
For real-time screening and decision assistance, future work will integrate
cloud-based diagnostic systems and fuse multimodal data. |
|
Keywords: |
CNN, Breast Tissues, Efficient Net, Cancer. |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th December 2025 -- Vol. 103. No. 23-- 2025 |
|
Full
Text |
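A sketch of the EfficientNet-B3 transfer-learning setup the abstract favours,
in PyTorch: ImageNet weights, frozen features, and a new two-class head; the
binary head and fine-tuning recipe are assumptions:

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.efficientnet_b3(
    weights=models.EfficientNet_B3_Weights.IMAGENET1K_V1
)
for p in model.parameters():  # reuse ImageNet feature representations
    p.requires_grad = False

in_features = model.classifier[1].in_features    # 1536 for B3
model.classifier[1] = nn.Linear(in_features, 2)  # benign vs malignant

# only the new head is trained, which suits small medical datasets
optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)
x = torch.randn(4, 3, 300, 300)  # B3's native input resolution
print(model(x).shape)            # torch.Size([4, 2])
```

Unfreezing the last few blocks after the head converges is the usual second
fine-tuning phase when slightly more data is available.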
|
|
Title: |
CUDA-ENHANCED YOLO FEATURE EXTRACTION WITH RF–CATBOOST ENSEMBLES FOR
HIGH-PERFORMANCE PRODUCT REVIEW SENTIMENT DETECTION |
|
Author: |
Dr. VANCHA MAHESHWAR REDDY, SRILAKSHMI KAZA, CH. RAMBABU, Dr. A. SRI NAGESH,
V.S.N. MURTHY, M PRAVEEN, SARAVANAN G, B. SRILAKSHMI, Dr. E. SREEDEVI, Dr. SIVA
KUMAR PATHURI |
|
Abstract: |
Sentiment analysis is essential in retrieving opinions, reviews and written
statements to predict emotions based on Natural Language Processing (NLP). It
categorizes text into sentiments such as positive, negative, or neutral. In
the case of labeled datasets, customer feedback can be grouped into scales
such as good, better, best, or bad, worse, worst, and subsequently converted
into sentiment categories. The fast expansion of the World Wide Web has
created massive amounts of user-generated content, opinions, and reviews that
are used in business by e-commerce platforms, such as Amazon and Flipkart, and
social
platforms, such as Twitter and Facebook. Opinion mining has now taken a very
important role in business analytics where customer opinion is directly used in
creating and maintaining competitiveness in a product. In this paper, a
CUDA-Optimized Sentiment Analysis Framework based on YOLO to extract features
and a hybrid RF-CATBoost ensemble to classify the sentiments is proposed. YOLO
takes both textual and pictorial aspects, and RF-CATBoost reduces bias and
variance as they aim to achieve increased accuracy. The proposed YOLO +
RF-CATBoost model is benchmarked on the Amazon mobile review dataset against
deep learning baselines such as ResNet-50 and RegNetY and it is more accurate
and faster. The framework, which involves the use of GPU-accelerated CUDA
processing, has high performance, scalability, and efficient sentiment detection
to support e-commerce decision-making. |
|
Keywords: |
Sentiment Analysis, YOLO, RF–CATBoost Ensemble, CUDA Parallel Processing,
Product Review Classification, Deep Learning (ResNet-50, RegNetY) |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th December 2025 -- Vol. 103. No. 23-- 2025 |
|
Full
Text |
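A sketch of the RF-CatBoost soft-voting ensemble the abstract pairs with its
feature extractor; generic feature vectors stand in here for the YOLO-derived
features, which are not reproduced:

```python
import numpy as np
from catboost import CatBoostClassifier
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((500, 32))    # stand-in for extracted review features
y = rng.integers(0, 3, 500)  # negative / neutral / positive

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
cb = CatBoostClassifier(iterations=200, verbose=0, random_seed=0).fit(X, y)

# soft vote: averaging class probabilities trades off bias and variance
proba = (rf.predict_proba(X) + cb.predict_proba(X)) / 2
pred = proba.argmax(axis=1)
print("train accuracy:", (pred == y).mean())
```

Bagged trees (RF) and boosted trees (CatBoost) err in different ways, which is
the bias/variance reduction the abstract attributes to the ensemble.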
|
|
Title: |
MULTI LEVEL USER VALIDATION MODEL WITH ASYMMETRIC CRYPTOGRAPHY FOR CLOUD DATA
SECURITY |
|
Author: |
K.VENKATESWARARAO, P MANIKYA PRASUNA, KALPANA SANJAY PAWASE, T.V.N.PRASANNA,
KATAKAM RANGANARAYANA, V.LAKSHMAN NARAYANA |
|
Abstract: |
Even though cloud computing makes data storage and service delivery more
affordable, adaptable, and scalable, it is still particularly susceptible to
data breaches, insider assaults, and unwanted access. In light of these
difficulties, the authors of this study present a novel approach to
authentication accuracy and data protection in cloud environments: the
Multi-Level User Validation Model with Asymmetric Cryptography (MLUVM-AC). To
prevent identity spoofing and illegal data modification, the architecture
combines public-private key encryption with multi-stage user validation to
guarantee that encrypted data may only be accessed by certified users. The
suggested approach offers a layered authentication technique with asymmetric key
management, which is an improvement over traditional systems that use symmetric
encryption and single-layer password validation. This leads to better
performance and improved security. If we compare our model to the current state
of the art, we find that it generates keys 17% faster, improves data security
correctness by 18%, and improves validation efficiency by 20%. This study adds
to the existing body of knowledge by developing a stable, efficient, and
extensible method for protecting data in ever-changing cloud environments
through the integration of multi-level authentication with asymmetric
cryptography. |
|
Keywords: |
Cloud Computing, Data Security, User Authentication, Authorization, Asymmetric
Cryptography. |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th December 2025 -- Vol. 103. No. 23-- 2025 |
|
Full
Text |
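A minimal sketch of the asymmetric layer in the MLUVM-AC design: data
encrypted under a validated user's public key is readable only with the
matching private key. The multi-level validation chain itself is not shown;
this uses the `cryptography` package's RSA-OAEP:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# key pair issued to a user who passed the multi-stage validation (assumed)
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(  # OAEP padding prevents textbook-RSA attacks
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

ciphertext = public_key.encrypt(b"cloud record #42", oaep)
plaintext = private_key.decrypt(ciphertext, oaep)
assert plaintext == b"cloud record #42"
```

In practice RSA would wrap a symmetric data key rather than the data itself;
the access-control property is the same: only the holder of the validated
private key can unwrap.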
|
|
Title: |
ENHANCED DENGUE FORECASTING WITH ANC-DEFO: A HYBRID FEATURE OPTIMIZATION
FRAMEWORK |
|
Author: |
P.MUTHUSELVI, DR.S.DHANASEKARAN, DR.V.VASUDEVAN |
|
Abstract: |
Dengue fever remains a persistent and growing public health challenge,
especially in tropical and subtropical regions where it causes significant
morbidity and mortality. The complexity and seasonality of dengue transmission
necessitate the development of robust and accurate prediction models for early
intervention and resource planning. However, the predictive performance of
traditional machine learning models often suffers due to high-dimensional data,
irrelevant features, and overfitting. This study introduces ANC-DEFO, an
Adaptive Neuro-Classifying model integrated with Differential Evolution and
Fuzzy Optimization, to enhance the accuracy and efficiency of dengue outbreak
prediction. The proposed framework employs intelligent dynamic feature selection
to isolate the most relevant environmental and temporal features, thereby
improving model generalization and reducing computational cost. Experimental
evaluation was conducted using real-world seasonal dengue data from Tamil Nadu,
India. The performance of ANC-DEFO was compared against conventional models such
as LSTM, Random Forest, SVM, and Logistic Regression. The results indicate that
ANC-DEFO achieved a prediction accuracy of 95.5% and a significantly lower RMSE
(2.10) compared to LSTM (accuracy 86.9%, RMSE 4.97), demonstrating its superior
capability in handling noisy and nonlinear epidemiological data. The model
provides a scalable framework for integration into real-time health
surveillance systems and can be adapted to other climate-sensitive or
vector-borne
diseases. By facilitating early outbreak detection, ANC-DEFO shows potential for
improving public health readiness and epidemic response strategic planning. |
|
Keywords: |
Dengue Prediction, Machine Learning, Feature Selection, Differential Evolution,
ANC-DEFO, Public Health. |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th December 2025 -- Vol. 103. No. 23-- 2025 |
|
Full
Text |
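A sketch of differential-evolution feature selection in the spirit of
ANC-DEFO: DE searches continuous weights, a 0.5 threshold turns them into a
feature mask, and cross-validated accuracy is the fitness. The classifier and
data are placeholders, not the paper's neuro-fuzzy model:

```python
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((300, 12))    # environmental/temporal features (placeholder)
y = rng.integers(0, 2, 300)  # outbreak vs no outbreak (placeholder)

def fitness(w):
    mask = w > 0.5  # continuous weights -> feature mask
    if not mask.any():
        return 1.0  # penalize empty feature sets
    score = cross_val_score(LogisticRegression(max_iter=500),
                            X[:, mask], y, cv=3).mean()
    return -score   # DE minimizes, so negate accuracy

result = differential_evolution(fitness, bounds=[(0, 1)] * X.shape[1],
                                maxiter=20, seed=0, polish=False)
print("selected features:", np.flatnonzero(result.x > 0.5))
```

Dropping irrelevant features this way is what gives the reported gains in
generalization and the reduction in computational cost.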
|
|
Title: |
ENSEMBLE DEEP LEARNING WITH ADAPTIVE FEATURE FUSION FOR CORONARY ARTERY DISEASE
PREDICTION FROM ECG SIGNALS |
|
Author: |
Dr.B.KALPANA, Dr.D.SARASWATHI, KROVVIDI S B AMBIKA, DR.T.VENGATESH |
|
Abstract: |
Coronary Artery Disease (CAD) is a leading cause of mortality worldwide. The
12-lead Electrocardiogram (ECG) is a primary, non-invasive, and cost-effective
tool for initial screening. While deep learning has shown promise in automating
ECG analysis, existing models often underutilize the complex, multi-lead
information and struggle with the subtle and heterogeneous manifestations of
CAD. This paper proposes ECG-EnsembleFuseNet, a novel ensemble framework that
synergistically combines multiple deep learning architectures via an adaptive
feature fusion mechanism for enhanced CAD detection. Our approach leverages an
ensemble of specialized Convolutional Neural Networks (CNNs) and a Transformer
model to capture both localized morphological anomalies and global contextual
dependencies across leads. The core innovation is an adaptive fusion module that
learns to dynamically weight and integrate features from each ensemble member
and each ECG lead, focusing the model on the most salient predictors. Evaluated
on a large dataset of 12-lead ECG records from the PTB-XL database,
ECG-EnsembleFuseNet achieved an accuracy of 94.7%, a sensitivity of 93.8%, and
an AUC-ROC of 0.981, significantly outperforming standalone ResNet (91.2%, AUC:
0.952), InceptionTime (92.1%, AUC: 0.963), and a standard voting ensemble
(93.0%, AUC: 0.972). The model's adaptive attention weights provide a form of
intrinsic explainability, highlighting the contributory importance of different
leads and temporal segments, which aligns with clinical expertise. This work
demonstrates the significant benefit of learned, adaptive fusion over static
ensemble methods for robust and interpretable CAD prediction. |
|
Keywords: |
Coronary Artery Disease, ECG Analysis, Deep Learning, Ensemble Learning, Feature
Fusion, Explainable AI, Medical Signal Processing. |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th December 2025 -- Vol. 103. No. 23-- 2025 |
|
Full
Text |
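A minimal PyTorch sketch of the adaptive fusion idea: softmax-normalized
learned weights combine feature vectors from each ensemble branch before
classification. The branch encoders are stubs, and the dimensions and
two-class head are assumptions:

```python
import torch
import torch.nn as nn

class AdaptiveFusion(nn.Module):
    def __init__(self, n_branches: int, dim: int, n_classes: int = 2):
        super().__init__()
        self.gate = nn.Linear(n_branches * dim, n_branches)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, branch_feats):  # list of (batch, dim) tensors
        stacked = torch.stack(branch_feats, dim=1)      # (batch, B, dim)
        w = torch.softmax(self.gate(stacked.flatten(1)), dim=-1)
        fused = (w.unsqueeze(-1) * stacked).sum(dim=1)  # weighted sum
        return self.head(fused), w  # weights double as explanations

feats = [torch.randn(4, 128) for _ in range(3)]  # CNN and Transformer outputs
logits, weights = AdaptiveFusion(n_branches=3, dim=128)(feats)
print(logits.shape, weights[0])  # per-branch attention weights
```

Inspecting `weights` per example gives the intrinsic explainability the
abstract describes: the model reports which branch drove each decision.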
|
|
Title: |
AN ANALYTICAL MACHINE LEARNING MODEL FOR PREDICTING DEFECTS IN IoT BASED
APPLICATIONS |
|
Author: |
MANUJAKSHI B C, SHASHIDHAR T M, LATHARANI T R, RENUKADEVI S, N SHIVAKUMAR |
|
Abstract: |
Over the past half decade, there has been considerable progress in performing
big data analysis over large-scale, complex Internet-of-Things (IoT) networks
using machine learning for predictive analysis. The proposed study presents a
simplified approach to performing defect analysis using machine learning in
large-scale IoT-based applications such as healthcare services. In this study,
various distinct sensory data are obtained from multiple sources in healthcare
services and are then subjected to aggregation and preprocessing, which not
only improves data quality but also reduces the computational burden of the
training operation. The proposed system is evaluated with multiple machine
learning models, including a random forest that performs both identification
and classification of defects with respect to the logical classes defined in
the study model. The study outcome shows that the random forest offers 45%
higher accuracy and 79% faster processing in comparison to other frequently
used machine learning models. |
|
Keywords: |
Internet-of-Things, Machine Learning, Prediction, Accuracy, Aggregation |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th December 2025 -- Vol. 103. No. 23-- 2025 |
|
Full
Text |
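A brief sketch of the aggregation-then-random-forest pipeline the abstract
outlines; the sensor schema, windowing, and defect classes are assumptions:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
raw = pd.DataFrame({                     # simulated multi-source readings
    "device": rng.integers(0, 50, 5000),
    "window": rng.integers(0, 20, 5000),
    "hr": rng.normal(75, 10, 5000),      # e.g. heart-rate sensor
    "spo2": rng.normal(97, 2, 5000),     # e.g. oximeter sensor
    "defect": rng.integers(0, 3, 5000),  # logical defect class
})

# aggregation: one feature row per device/window cuts training cost
agg = raw.groupby(["device", "window"]).agg(
    hr_mean=("hr", "mean"), hr_std=("hr", "std"),
    spo2_mean=("spo2", "mean"),
    defect=("defect", lambda s: s.mode()[0]),
).dropna()

X, y = agg[["hr_mean", "hr_std", "spo2_mean"]], agg["defect"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=100).fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```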
|
|
Title: |
A SYSTEMATIC REVIEW OF PERSISTENT AI INTERFACES IN EMBEDDED SYSTEMS: BRIDGING
TECHNICAL PERFORMANCE AND USER EXPERIENCE |
|
Author: |
ALAN ISAAC TRINIDAD GONZÁLEZ, ELENA FABIOLA RUIZ LEDESMA |
|
Abstract: |
This systematic review examines the impact of persistent artificial intelligence
interfaces in embedded devices on the usability and user experience of adults.
We start by evaluating the performance of these systems and then shift focus to
subjective experiences, considering factors such as latency, autonomy, and
functionality. Following PRISMA guidelines, we searched Scopus and Springer
Nature Link for studies published between 2020 and 2025, focusing on a
PICO-formulated question about interactions with these “always-ready” embedded
devices. Only studies involving local or hybrid computing that reported on user
experience or perceived performance were included. The findings consistently
show that persistence is achieved through computational proximity, utilizing
voice, vision, or other sensory processes for tasks like word recognition,
monitoring in public spaces, and intelligent battery management. As part of our
contribution, we outline several implications aimed at establishing quality
criteria for embedded devices, useful for both design and evaluation. This
framework extends beyond mere computational metrics; it provides guidance for
enhancing everyday user experiences, addressing operational goals such as
maintaining perceptible thresholds of naturalness, ensuring continuity without
delays or lags, and preventing overheating during prolonged use. Overall, it
proposes methods to “measure what truly matters” without requiring oversized
architectures, while incorporating strategies that make persistence perceptible
without intruding on or compromising usability. |
|
Keywords: |
Artificial intelligence, Embedded devices, Latency, Pervasive interfaces, User
experience. |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th December 2025 -- Vol. 103. No. 23-- 2025 |
|
Full
Text |
|
|
Title: |
DIGITAL TRANSFORMATION OF LOGISTICS: THE ROLE OF IT IN IMPROVING SUPPLY CHAIN
EFFICIENCY |
|
Author: |
HASSAN ALI AL-ABABNEH, IBRAHIM MAHMOUD SIAM |
|
Abstract: |
Logistics digital transformation is a fundamental accelerator of both efficiency
and resilience for supply chains today. Rapidly growing complexity in
international trade and rising consumer demands are driving the adoption of
advanced information technologies (IT) to increase operational flexibility and
decision-making capabilities. The traditional logistics system has developed to
a certain degree; however, it still has disadvantages such as poor integration,
low flexibility, and weak real-time responsiveness. This paper investigates the
role of IT, with Big Data analytics as a specific focus, in improving supply
chain performance. Using a broad methodological framework to evaluate IT
impacts on the KPIs of different logistics processes, the study relies on a Big
Data analytics framework. The results revealed that applying Big Data analytics
leads to operational cost savings of up to 20% and improvements in average
order delays and demand forecasting of more than 30% and more than 25%,
respectively. The novelty and originality of this work lie in an integrated
model that combines Big Data analytics with supply chain management theory,
offering a holistic vision of digital transformation. The practical
implications of the research are fairly straightforward, as it offers
non-expert logistics managers pragmatic advice on applying IT to enhance the
supply chain's efficiency and competitive power. The results of this research
deepen the theoretical framework of IT in the logistics field and offer a way
to apply IT to real cases. |
|
Keywords: |
Information Technology, Supply Chain Efficiency |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th December 2025 -- Vol. 103. No. 23-- 2025 |
|
Full
Text |
|
|
Title: |
ADVERSARIALLY RESILIENT REMOTE SENSING IMAGE CLASSIFICATION THROUGH
CROSS-SPECTRAL ATTENTION-FUSED TRANSFORMER MODELING (CSAF-VIT) |
|
Author: |
HEMASHREE P, N VALLIAMMAL |
|
Abstract: |
Adversarial threats pose significant challenges to the reliability of remote
sensing image classification, particularly in defense and surveillance
applications. The proposed CSAF-ViT framework introduces a cross-spectral
attention-fused vision transformer designed to enhance resilience against
pixel-level perturbations and preserve spatial-spectral integrity. The model
begins by decomposing multi-band satellite imagery, followed by band-specific
feature encoding and attention-guided fusion to capture intricate cross-band
relationships. Transformer-based spatial encoding then enriches global
contextual awareness, ensuring consistent representation of spectral semantics.
A classification token mechanism is employed for final decision mapping, while
adversarial training fortifies the model against gradient-induced distortions.
The integration of spectral coherence and spatial dependency within a unified
architecture enables stable feature propagation and decision reliability under
manipulated inputs. CSAF-ViT offers a structured defense pipeline that
emphasizes spectral alignment, attention consistency, and robust learning
dynamics, establishing a reliable foundation for secure and accurate remote
sensing classification in critical operational scenarios. |
|
Keywords: |
Remote Sensing Classification, Adversarial Robustness, Cross-Spectral Attention,
Vision Transformer, Spectral Feature Fusion, Secure Image Recognition |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th December 2025 -- Vol. 103. No. 23-- 2025 |
|
Full
Text |
|
|
Title: |
DAFDN: A DUAL ATTENTION NETWORK FOR ROBUST AND INTERPRETABLE FACE FORGERY
DETECTION IN VIDEOS |
|
Author: |
TENALI ANUSHA, DR. A. SRI NAGESH |
|
Abstract: |
The rapid advancement of deep generative technologies has enabled the creation
of highly realistic face-manipulated videos, commonly referred to as deepfakes.
While these techniques offer benefits in media and entertainment, their misuse
poses significant threats to privacy, trust, and information security. This
paper introduces a novel deepfake detection framework called the Dual Attention
Forgery Detection Network (DAFDN), which integrates two specialized attention
modules: the Spatial Reduction Attention Block (SRAB) and the Forgery Feature
Attention Module (FFAM). These modules are designed to enhance the model's
ability to capture subtle artifacts left by tampered facial regions. Built upon
the EfficientNet-B4 backbone, DAFDN is evaluated on two benchmark
datasets—FaceForensics++ and Deepfake Detection Challenge (DFDC). Experimental
results demonstrate that DAFDN achieves strong performance, with AUC scores of
0.945 and 0.911 on FF++ and DFDC respectively, outperforming several
state-of-the-art methods. The proposed model not only improves detection
accuracy but also offers interpretability through attention-based heatmaps,
making it a robust tool for combating video-based face forgery. |
|
Keywords: |
Deep Fake Detection, Spatial Attention, Channel Attention, Efficientnet-B4, Dual
Attention Network, Video Forensics, Forgery Localization, Manipulated Media
Analysis. |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th December 2025 -- Vol. 103. No. 23-- 2025 |
|
Full
Text |
|
|
Title: |
VIABLE SENSOR DATA AGGREGATION AND COMMUNICATION SCHEME FOR AIDING DIAGNOSIS
PRECISION OF INTERNET OF THINGS-BASED HEALTH MONITORING SYSTEMS |
|
Author: |
Dr. L. MOHANA KANNAN, K. DEEPIKA, S. RAMYA, P. VIGNESHKUMAR, M.VALLIKKANNU, Dr.
R. KANNAN |
|
Abstract: |
The rapid growth of the Internet of Things (IoT) has greatly improved smart
healthcare systems with wearable sensors and smart data processing. However,
continuous health monitoring often encounters problems like data loss, delays in
transmission, and communication errors. These issues can lower the accuracy of
disease diagnosis. To tackle these challenges, this article suggests a Viable
Data Aggregation Scheme (VDAS). This scheme aims to boost diagnosis accuracy by
reducing data loss and improving communication reliability in IoT healthcare
settings. The proposed scheme uses a distributed federated learning (FL) model
to check data availability and identify missing or delayed information during
ongoing monitoring periods. By matching sensed and received data over
consecutive intervals, VDAS helps consolidate communications from wearable
sensors and reduce transmission errors. Experimental results show that this
method increases data availability, aggregation efficiency, and communication
rate by 9.19%, 8.17%, and 10.31% respectively across various sensing intervals.
Although the system has synchronization issues due to non-uniform sensing times,
future work will focus on creating interval-based responsive communication
methods. This will aim for more reliable and prioritized data delivery in
asynchronous healthcare monitoring environments. |
|
Keywords: |
Data Error, Diagnosis Accuracy, FL, Healthcare Monitoring, IoT |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th December 2025 -- Vol. 103. No. 23-- 2025 |
|
Full
Text |
|
|
Title: |
ASPECT-BASED SENTIMENT ANALYSIS USING GLOBAL HYPERBOLIC TANGENT SIGMOID DEEP
NEURAL NETWORK WITH FENNEC FOX OPTIMIZATION |
|
Author: |
SREEMATHY JAYAPRAKASH, PRASATH NITHIYANANTHAM |
|
Abstract: |
Analyzing human speech sentiments is a natural process for humans but an
extremely challenging one for machines because of the difficulty in interpreting
underlying emotions beneath content-based meaning. While traditional sentiment
analysis models handle content-based data well, they fail to decipher
contextual emotions buried deep within speech. Earlier research mainly addressed
basic sentiment tagging without considering fine-grained contextual and
affective features, leading to poor accuracy and generalizability across various
emotional classes. To overcome these limitations, this study proposes a novel
approach for sentiment analysis from signal data with the IEMOCAP dataset:
Global Hyperbolic Tangent Sigmoid Deep Neural Network with Fennec Fox
Optimization (GHTSDNNet-FFO). The approach starts with pre-processing by Deep
Attentional Guided Image Filtering (DAGIF), then feature extraction using
Style-Based GAN Encoder (SB-GAN Encoder) that retains subtle emotional features.
GHTSDNNet is applied for classification, which is a hybrid model combining
Global Attention Network (GLOBATTNET) and Hyperbolic Tangent Sigmoid Deep Neural
Network (HTSDNN) and additionally optimized with Fennec Fox Optimization (FFO)
to enhance learning efficiency. Experimental outcomes on IEMOCAP dataset prove
outstanding performance with 99.85% accuracy, F1-Score of 99.58% and precision
of 99.62%, which proves the strength and efficiency of the introduced model in
emotion-aware sentiment analysis. |
|
Keywords: |
Deep Attentional Guided Image Filtering, Fennec Fox Optimization, Global
Attention Network, Hyperbolic Tangent Sigmoid Deep Neural Network, Style-Based
GlobAttNet Encoder. |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th December 2025 -- Vol. 103. No. 23-- 2025 |
|
Full
Text |
|
|
Title: |
TECHNOLOGY OF BLOCKCHAIN SYSTEMS: NEW OPPORTUNITIES AND ISSUES IN DATA SECURITY
AND RETENTION |
|
Author: |
SERGII BATAIEV, VIKTOR KYRYCHENKO, VIKTOR OSTAPCHUK, VIKTOR KRASNOSHCHOK,
SERHII MARTYNIUK |
|
Abstract: |
In the modern digital environment, there is a growing need for reliable
mechanisms for data protection and storage, which is due to the risks of
cybercrime, information leaks and manipulation of digital records. Blockchain
technologies offer decentralized solutions that can provide increased security,
transparency and immutability of data, which makes them promising for use in the
financial sector, medicine, e-government and logistics. The purpose of this
study is to analyze the opportunities and challenges of blockchain in the field
of security and information storage. The work uses methods of comparative
analysis, empirical threat modeling and expert assessment of blockchain
implementation in various industries. The study confirmed that blockchain is
able to significantly reduce the risks of unauthorized access to data and
manipulation of digital records, thanks to the use of cryptographic protection
methods and consensus algorithms. However, it was found that existing blockchain
networks face scalability problems, high energy consumption and the lack of a
unified regulatory framework, which limits their implementation. Possible
solutions are proposed, including the transition to more energy-efficient
algorithms, such as Proof-of-Stake, the development of quantum-resistant
cryptographic methods, and the harmonization of regulatory approaches. The
results obtained have practical significance for the financial, government, and
technology sectors, contributing to the implementation of secure digital
solutions based on blockchain. Further research should be aimed at improving
scaling technologies and increasing the security of blockchain systems in the
context of the development of quantum computing. |
|
Keywords: |
Blockchain, Data Security, Cryptography, Consensus Algorithm, Scalability,
Financial Technology, E-Government |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th December 2025 -- Vol. 103. No. 23-- 2025 |
|
Full
Text |
|
|
Title: |
GAME-THEORETIC SELF-SUPERVISED STRATEGIES FOR INTELLIGENT ANOMALY DETECTION IN
DYNAMIC SURVEILLANCE ENVIRONMENTS |
|
Author: |
Dr. VUDA SREENIVASA RAO, Prof. CHIN-SHIUH SHIEH, Prof. SIVA SHANKAR S, Prof.
PRASUN CHAKRABARTI |
|
Abstract: |
Detecting anomalies in large-scale video surveillance is challenging due to
diverse scene variations and the limited availability of labeled anomalous data.
Whereas existing models employ supervised deep networks or rule-based
heuristics, the proposed GTSSL framework presents a game-theoretic
self-supervised paradigm that enhances system adaptability without requiring
large annotated datasets. The method redesigns anomaly detection in IT-driven
surveillance by combining decision theory with autonomous learning, a union that
is seldom attempted in existing work. The unique contribution of this research
is the development of the Game-Theoretic Self-Supervised Learning (GTSSL)
framework, which bridges the gap between self-supervised feature learning and
adaptive decision-making in anomaly detection. In contrast to traditional deep
learning approaches that are based on labeled data or fixed optimization, this
framework combines game-theoretic strategy formation, GAN-based augmentation,
and CNN-LSTM hybrid modeling, thus presenting a novel paradigm in intelligent
surveillance analytics. This methodology advances the overall corpus of
knowledge in IT by recasting anomaly detection as a strategic optimization
problem, enhancing flexibility, robustness, and online learning in complex
monitoring situations. The added knowledge therefore forms a substantial step
beyond previous incremental progress, offering a theoretically sound,
practically scalable, and computationally feasible model translatable to
various fields of security and surveillance. The framework is implemented in
Python with the TensorFlow and OpenCV libraries and evaluated on the Kaggle
CCTV Activity Identification Collection, with approximately 200 clips per
class. Experimental
analysis demonstrates superior performance of GTSSL, achieving 98.3% accuracy,
97.5% precision, 97.4% recall, and an F1-score of 97.0%, surpassing CNN (72%),
SVM (72.1%), and CNN-RNN (84%) models by a margin of 6–9%. Results confirm that
GTSSL enhances resilience against distributional shifts and adversarial patterns
while maintaining computational efficiency for real-time deployment. These
findings validate the efficacy of combining self-supervised representation
learning, adversarial augmentation, and game-theoretic decision-making in
anomaly detection systems for dynamic surveillance environments. |
|
Keywords: |
Anomaly Detection, Game-Theoretic Learning, Generative Adversarial Networks,
Convolutional Neural Networks, Long Short-Term Memory, Video Surveillance |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th December 2025 -- Vol. 103. No. 23-- 2025 |
|
Full
Text |
|
|
Title: |
SOFTWARE REFACTORING OPPORTUNITIES PREDICTION ON COMMENT-BASED FEATURES USING
MACHINE LEARNING CLASSIFIERS |
|
Author: |
ARCHANA PATNAIK, NEELAMADHAB PADHY, LOV KUMAR, RASMITA PANIGRAHI |
|
Abstract: |
Context: Software Refactoring focuses on internal improvement of the code base
without altering its external behavior. For enhancing the quality of real-time
open-source projects, it is highly essential to perform complexity analysis
followed by code restructuring. Objectives: In this work, we have created a
hybrid model by combining different machine learning classifiers to predict
refactoring mechanisms on code comments. Quality assessment of a software
project is performed on comment tags using machine learning models. Methodology:
Initially, the comment-based code snippets are pre-processed to separate the
clean and dirty code snippets. The implementation is done by using different
classifiers like Naive Bayes, Random Forest, Decision Trees, Logistic
Regression, K-Nearest Neighbor, and Support Vector Machine. Result: We have
identified six comment tags for refactoring prediction, with TODO comments
being the most frequent. Of the six classifiers implemented, the Random Forest
classifier performs best, with 95% accuracy, 0.95 precision, 0.94 recall, and
0.93 f-measure. Conclusion: It is observed that sample 3 performs better than
the other samples, and the Random Forest classifier has the highest accuracy.
In our future work, we will predict refactoring on Python code. |
|
Keywords: |
Refactoring, Code Comments, Machine Learning Classifiers, Software Quality. |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th December 2025 -- Vol. 103. No. 23-- 2025 |
|
Full
Text |
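A sketch of comment-tag-based refactoring prediction: TF-IDF over comment text
feeding one of the classifiers the paper lists. The comments and labels here
are placeholders, not the study's dataset:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

comments = [
    "TODO: extract this into a helper method",
    "FIXME: duplicated logic with parser.py",
    "clean implementation, no action needed",
    "HACK: temporary workaround for API limit",
]
needs_refactoring = [1, 1, 0, 1]  # placeholder labels

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # word and bigram features
    RandomForestClassifier(n_estimators=100, random_state=0),
)
model.fit(comments, needs_refactoring)
print(model.predict(["TODO: split this god class"]))  # -> [1]
```

Swapping the final estimator for Naive Bayes, SVM, or the other listed
classifiers reproduces the comparison structure of the study.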
|
|
Title: |
INTERFERENCE-AWARE SPECTRUM OPTIMIZATION IN 5G DEVICE-TO-DEVICE USING GENETIC
ALGORITHM AND SIMULATED ANNEALING |
|
Author: |
MOHD AZRULAZWAN JUSOH MOHD YUSOFF, NOR FADZILAH ABDULLAH, ASMA’ ABU SAMAH,
MAHAMOD ISMAIL |
|
Abstract: |
The rapid expansion of 5G networks promises improved speed, reduced latency, and
massive connectivity, yet dense deployments introduce severe interference and
spectrum scarcity. This paper presents a novel hybrid Dynamic Spectrum
Management (DSM) framework that integrates Genetic Algorithm (GA) and Simulated
Annealing (SA) to enhance spectrum allocation in Device-to-Device (D2D)
communication. Unlike static methods, the proposed GA-SA model dynamically
optimizes spectrum assignments based on real-time
Signal-to-Interference-plus-Noise Ratio (SINR) and interference data.
Simulations conducted in a 500-user dense 5G scenario demonstrate significant
improvements, with average SINR increasing from 8.2 dB (static) to 14.7 dB and
average throughput rising from 5.4 Mbps to 12.5 Mbps. Additionally, the 90th
percentile throughput improved from 8 Mbps to 19.3 Mbps. These results validate
the GA-SA framework’s effectiveness in reducing interference, boosting capacity,
and offering a computationally efficient alternative for real-time 5G spectrum
management. |
|
Keywords: |
5G Networks, Dynamic Spectrum Management, D2D Communication, Genetic Algorithm,
Simulated Annealing, SINR, Throughput. |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th December 2025 -- Vol. 103. No. 23-- 2025 |
|
Full
Text |
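A sketch of the simulated-annealing half of the GA-SA idea: perturb one D2D
link's channel and accept the move with the usual Metropolis rule. The
interference model is a toy stand-in for the SINR computation, and all
parameters are assumptions:

```python
import math
import random

random.seed(0)
N_LINKS, N_CHANNELS = 50, 10
gain = [[random.random() for _ in range(N_LINKS)] for _ in range(N_LINKS)]

def objective(assign):
    """Toy objective: desired-link gain over same-channel interference."""
    total = 0.0
    for i in range(N_LINKS):
        interference = sum(gain[j][i] for j in range(N_LINKS)
                           if j != i and assign[j] == assign[i])
        total += gain[i][i] / (0.01 + interference)  # SINR-like ratio
    return total

assign = [random.randrange(N_CHANNELS) for _ in range(N_LINKS)]
best, temp = objective(assign), 1.0
while temp > 1e-3:
    i = random.randrange(N_LINKS)
    old = assign[i]
    assign[i] = random.randrange(N_CHANNELS)  # local perturbation
    new = objective(assign)
    if new >= best or random.random() < math.exp((new - best) / temp):
        best = new           # accept, possibly uphill
    else:
        assign[i] = old      # reject and revert
    temp *= 0.99             # geometric cooling
print("final objective:", best)
```

In the hybrid scheme, a GA population would supply diverse starting
assignments and SA would refine the best candidates, which is what lets the
method escape the local optima that static allocation gets stuck in.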
|
|
Title: |
REAL-TIME EMAIL SPOOFING DETECTION USING MACHINE LEARNING AND TIMESTAMP ANOMALY
ANALYSIS |
|
Author: |
ROOBAL, RAHUL SAXENA, VENUS DILLU |
|
Abstract: |
Existing rule-based mechanisms (SPF, DKIM, DMARC) mitigate only a fraction of
spoofing attempts and fail against timestamp-manipulated or forwarded mails,
revealing a persistent knowledge gap in correlating temporal anomalies with
spoofing likelihood. This study addresses that gap by introducing a
timestamp-driven anomaly-based machine-learning model (XGBoost) for real-time
email fraud detection. Unlike prior content-filter or signature-based systems,
the proposed framework fuses authentication records, sender reputation, and
delay deviations to create new knowledge on temporal-behavioural indicators of
spoofing. Multiple machine learning models, including Ordinary Least Squares
(OLS) Regression, Polynomial Regression, and XGBoost, were tested. Results
indicate that XGBoost outperforms the traditional models, achieving an R²
score of 0.92–0.94 and confirming its accuracy and practical deployability for
real-time email fraud detection. The study also highlights the strong
correlation between email delay
anomalies and spoofing behaviour, with spoofed emails exhibiting significantly
longer transmission delays. A flowchart-based implementation is provided,
demonstrating real-world deployment feasibility. This contributes new insight
into how temporal metadata can be operationalized for enterprise-scale,
real-time spoofing prevention. Future work will focus on deploying the model as
a cloud-based API and expanding the dataset with real-world email samples for
further validation. |
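Illustrative sketch (assumed data, not the authors' corpus): scoring spoof
likelihood from delay deviation and authentication features with the public
xgboost library; the synthetic target below merely mimics the abstract's claim
that delay anomalies correlate with spoofing.

    import numpy as np
    from xgboost import XGBRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import r2_score

    rng = np.random.default_rng(0)
    n = 2000
    delay_dev  = rng.exponential(2.0, n)   # deviation from expected hop delay (s)
    spf_pass   = rng.integers(0, 2, n)     # SPF check result (0/1)
    dkim_pass  = rng.integers(0, 2, n)     # DKIM check result (0/1)
    reputation = rng.random(n)             # sender reputation in [0, 1]

    # Toy target: spoof likelihood grows with delay anomaly, drops with
    # passing authentication and good reputation (purely synthetic).
    y = (0.4 * np.tanh(delay_dev / 4) + 0.25 * (1 - spf_pass)
         + 0.15 * (1 - dkim_pass) + 0.2 * (1 - reputation)
         + rng.normal(0, 0.05, n))

    X = np.column_stack([delay_dev, spf_pass, dkim_pass, reputation])
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = XGBRegressor(n_estimators=200, max_depth=4, learning_rate=0.1)
    model.fit(X_tr, y_tr)
    print("R2:", round(r2_score(y_te, model.predict(X_te)), 3))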
|
Keywords: |
Email Spoofing, Machine Learning, XGBoost, Cybersecurity, Timestamp Anomaly
Detection |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th December 2025 -- Vol. 103. No. 23-- 2025 |
|
Full
Text |
|
|
Title: |
DIGITAL TOOLS FOR MONITORING THE ELECTORAL PROCESS IN UKRAINE UNDER MARTIAL LAW |
|
Author: |
OLEKSANDR HRYHORIEV, LIUDMYLA PAVLOVA, OLENA KARCHEVSKA, GANNA MALKINA, ALINA
VOICHUK |
|
Abstract: |
Relevance of the research: The relevance of the study is determined by the
need to ensure electoral integrity, traceability, and trust in digital voting
procedures under martial law and an increased level of cyber threats. Aim of
the research: The aim of the research is to formalize a digital voting
framework that ensures transparency and electoral integrity under martial law
through the integration of legal, technical, and organizational solutions.
Research methods: The research employed the following methods:
structural-functional analysis, comparative law, typology and cluster grouping,
technological classification and ranking, cross-validation functional
modelling, and Unified Modelling Language (UML) modelling. Obtained results:
The study formalizes a framework for digital expression of will relevant to
martial-law restrictions. Structural-functional analysis and comparative law
identified critical threats (security, regulatory, cognitive), while
cross-validation modelling identified the digital technologies with maximum
compensatory potential (blockchain, e-/i-voting, audit trail). The proposed UML
architecture provides multi-level authentication, traceable verification, and
civic oversight, which guarantees electoral integrity and regulatory validity
in times of crisis. Academic novelty of the research: The academic novelty of
the study is the stratification of electoral threats under martial law and the
formalization of a framework for digital expression of will that integrates
multi-factor authentication, blockchain storage, traceological audit, and
citizen oversight, ensuring the stability of procedures, the legitimacy of
results, and the autonomy of the subjects of will declaration. Prospects for
further research: Prospects include launching a controlled pilot project to
implement the digital will declaration framework in a limited jurisdiction,
with subsequent validation of its operational stability, regulatory
compatibility, and behavioural integration. Based on the results of this
testing, it is appropriate to develop adaptive optimization modules aimed at
increasing cyber resilience, minimizing transactional load, and ensuring
institutional scalability. |
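Illustrative sketch (not the paper's architecture): the traceable-verification
idea reduced to a hash-chained audit trail, where tampering with any stored
record invalidates all subsequent hashes; the MFA, blockchain storage, and
civic oversight layers described in the abstract are omitted.

    # Minimal hash-chain audit trail for election events.
    import hashlib, json, time

    def append_record(chain, event):
        prev = chain[-1]["hash"] if chain else "0" * 64
        body = {"event": event, "ts": time.time(), "prev": prev}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        chain.append({**body, "hash": digest})

    def verify(chain):
        for i, rec in enumerate(chain):
            body = {k: rec[k] for k in ("event", "ts", "prev")}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if digest != rec["hash"]:
                return False                     # record was altered
            if i and rec["prev"] != chain[i - 1]["hash"]:
                return False                     # chain link broken
        return True

    trail = []
    append_record(trail, "voter_authenticated")
    append_record(trail, "ballot_recorded")
    print(verify(trail))   # True; any tampering breaks the chain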
|
Keywords: |
Digital Suffrage Framework, Blockchain Voting Architecture, Multi-Factor
Authentication (MFA), End-to-End Verifiability (E2EV), Smart Contracts, Legal
Compliance Modelling, Civic Audit Infrastructure. |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th December 2025 -- Vol. 103. No. 23-- 2025 |
|
Full
Text |
|
|
Title: |
EVALUATION METRICS OF MACHINE LEARNING OPERATIONS (MLOPS) |
|
Author: |
NUR FARAH AFIFAH AHMAD SUKRI, WAN MOHD AMIR FAZAMIN WAN HAMZAH, MOHD KAMIR
YUSOF, ISMAHAFEZI ISMAIL, HARMY MOHAMED YUSOFF, AZLIZA YACOB |
|
Abstract: |
Machine Learning Operations (MLOps) has emerged to address challenges associated
with deploying, integrating, monitoring, and scaling machine learning (ML)
models in production environments. However, to effectively evaluate how well
MLOps improves ML models, it is important to have clear and standardised
evaluation metrics for such measurement. This study applied thematic analysis
to compile and map evaluation metrics from previous research, select the most
relevant criteria, and then define a comprehensive set of metrics for assessing
MLOps
implementations. The key findings presented in this article focus on eight main
themes: data management, automation and pipelines, model performance, resource
and time efficiency, deployment and scalability, usability and collaboration,
monitoring and observability, as well as compliance and security. |
|
Keywords: |
Machine Learning (ML), Machine Learning Operations (MLOps), Machine Learning
Evaluation Metrics, MLOps evaluation, Thematic Analysis. |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th December 2025 -- Vol. 103. No. 23-- 2025 |
|
Full
Text |
|
|
Title: |
AI-POWERED RESUME SCREENING SYSTEM FOR SMART HIRING: LEVERAGING NLP AND LARGE
LANGUAGE MODELS FOR EFFICIENT AND FAIR RECRUITMENT |
|
Author: |
ANGEL HEPZIBAH R, JEYASANTHI J, DURGA DEVI G, MARIAPPAN E, KALIAPPAN M, PREETHY
REBECCA P, SENTHI S, RAMNATH M |
|
Abstract: |
Manual resume screening is a labour-intensive and error-prone process, further
complicated by the growing volume of applications in modern recruitment. While
traditional Applicant Tracking Systems (ATS) depend heavily on keyword-based
filtering, they often overlook contextually relevant candidates and
inadvertently introduce bias. This study proposes a novel AI-powered resume
screening framework that combines Sentence-BERT (SBERT) for semantic similarity
with Large Language Models (LLMs) for context-aware summarization and bias
detection. Unlike existing approaches, our system introduces (i) an enhanced
multi-stage candidate-job matching pipeline that integrates semantic embeddings
with domain-specific fine-tuning, (ii) a bias-mitigation layer that detects and
minimizes gender and language-based skew in candidate shortlisting, and (iii) a
real-time adaptive HR dashboard with explainable candidate ranking. The system
incorporates automated resume parsing, cosine similarity scoring, personalized
email notifications, and interactive analytics for recruiters. Experimental
results on benchmark recruitment datasets demonstrate a notable improvement in
candidate-job matching accuracy (78%) compared to existing ATS methods (avg.
65–70%) along with measurable reductions in screening bias and processing time.
The proposed approach is scalable, industry-agnostic and customizable,
representing a new step toward fair, efficient, and explainable AI in
recruitment. |
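Illustrative sketch of the semantic-matching stage only, assuming the public
sentence-transformers library; the model name and texts are illustrative
choices, and the paper's fine-tuning, bias-mitigation, and dashboard layers
are not shown.

    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")
    job = "Backend engineer with Python, REST APIs and PostgreSQL experience"
    resumes = [
        "Five years building Python microservices and SQL data layers",
        "Graphic designer skilled in branding and typography",
    ]

    # Embed job description and resumes, then rank by cosine similarity.
    job_emb = model.encode(job, convert_to_tensor=True)
    res_emb = model.encode(resumes, convert_to_tensor=True)
    scores = util.cos_sim(job_emb, res_emb)[0]
    ranked = sorted(zip(resumes, scores.tolist()), key=lambda p: -p[1])
    for text, s in ranked:
        print(f"{s:.3f}  {text[:50]}")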
|
Keywords: |
Resume Screening, Natural Language Processing, Sentence-BERT, Large Language
Models, Cosine Similarity, Recruitment Automation. |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th December 2025 -- Vol. 103. No. 23-- 2025 |
|
Full
Text |
|
|
Title: |
DEVELOPMENT OF AN IOT-ENABLED SMART FARMING APPLICATION WITH INTEGRATED
FINANCIAL MANAGEMENT AND COLLABORATIVE KNOWLEDGE SHARING |
|
Author: |
ZAMREE CHE-ARON, DARANAT TANSUI, ZAWANEE MAKENG |
|
Abstract: |
The integration of the Internet of Things (IoT) in agriculture has led to the
development of smart farming solutions that enhance planting management and
control. This study presents the design and development of a smart farming
application that utilizes IoT technology to optimize crop cultivation. The
proposed system incorporates IoT sensors to monitor soil moisture, temperature,
and water levels in real time, enabling automated irrigation and lighting
control. Moreover, the application allows farmers to track crop growth, manage
income and expenses, access agricultural knowledge, and interact with a
community forum. A usability evaluation, conducted with IT experts and
end-users, demonstrated high levels of satisfaction in terms of user interface
design, learnability, efficiency, memorability, correctness, security, and
overall functionality. The findings suggest that the developed application
provides a practical and efficient tool for farmers, reducing operational costs,
conserving resources, and improving crop yields. Furthermore, the financial
management and knowledge-sharing functionalities contribute to better
decision-making and increased collaboration within the agricultural sector. |
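Illustrative sketch (thresholds and sensor values assumed): the automated
irrigation idea as a hysteresis control loop over soil-moisture readings,
which avoids rapid pump toggling around a single threshold.

    # Toy irrigation controller with hysteresis; values are hypothetical.
    MOISTURE_LOW, MOISTURE_OK = 30.0, 55.0   # percent

    def control_irrigation(moisture, pump_on):
        # Start below LOW, stop above OK, otherwise keep the current state.
        if moisture < MOISTURE_LOW:
            return True
        if moisture > MOISTURE_OK:
            return False
        return pump_on

    readings = [62, 41, 28, 33, 57, 48]      # simulated sensor stream
    state = False
    for m in readings:
        state = control_irrigation(m, state)
        print(f"moisture={m}%  pump={'ON' if state else 'OFF'}")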
|
Keywords: |
Internet of Things, Smart Farming, Mobile Application, Planting Management,
Remote Monitoring and Control |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th December 2025 -- Vol. 103. No. 23-- 2025 |
|
Full
Text |
|
|
Title: |
EXPLORING PERFORMANCE OPTIMIZATION THROUGH ADAPTIVE SHADERS IN UNITY-DRIVEN
VIRTUAL REALITY AND MOBILE GAMES |
|
Author: |
YULIIA YERMOLAIEVA |
|
Abstract: |
The study evaluated the effectiveness of various shader configurations within VR
and mobile applications using the Unity engine to achieve consistent performance
with high visual fidelity under conditions of variable scene complexity. The
problem addressed lies in balancing rendering speed and visual quality in
resource-constrained environments, such as mobile GPUs and VR headsets, where
fluctuations in scene complexity often lead to frame drops and a degraded user
experience. Three rendering alternatives were examined: basic shaders
(Base-Shader), shaders optimized with fixed parameters (Static-Opt), and
adaptive shaders with dynamic load scaling (Adaptive-Shader). The comparative analysis
employed an integral rendering efficiency index (0.87–0.91 for Adaptive-Shader,
0.82–0.85 for Static-Opt, and 0.79–0.81 for Base-Shader), which synthesized the
average FPS, frame stability, frame processing time, and the visual quality
index. The methodology encompassed normalization of indicators, analysis of
variance with repeated measures (Repeated Measures ANOVA), the Friedman test,
and post-hoc comparisons, as well as regression analysis, to discern the impact
of polygonality and shader type on performance. The findings revealed that
Adaptive-Shader consistently achieves over 72 frames per second in VR, even amid
high-load scenes, while maintaining a visual quality index of approximately 0.9,
with power consumption on mobile GPUs ranging from 7.8 to 8.1 watts. Static-Opt
presents a satisfactory equilibrium in simple to moderately complex scenes but
exhibits diminished stability in high-complexity scenarios. The Base-Shader
remains stable in low-complexity scenarios but shows the most significant
decline in FPS and quality under challenging conditions. Statistically
significant differences (χ² = 12.48; p = 0.0019) corroborated the superiority of
Adaptive-Shader over the other configurations, particularly regarding the
ΔFPS-index, where the disparity with Base-Shader approached nearly a twofold
advantage. In practical terms, Adaptive-Shader is particularly advantageous for
VR applications that demand high smoothness and detail. Static-Opt is suitable
for mobile games constrained by limited resources, and Base-Shader is
appropriate for rapid prototyping and simple scenes. The scientific novelty is
underscored by generalizing the interaction patterns among shader type, scene
characteristics, and graphic performance requirements, thereby facilitating a
judicious integration of adaptive graphics technologies into VR and mobile
gaming. In conclusion, the study demonstrates that adaptive shader technology
provides a statistically and practically superior balance between FPS stability,
visual fidelity, and energy efficiency, making it the most promising pathway for
next-generation VR and mobile rendering. Prospects for further investigation
include expanding the metric set to incorporate power consumption and adapting
the methodology for multi-platform rendering systems. |
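Illustrative sketch of how such an integral index can be synthesized: min-max
normalize each metric, invert the lower-is-better ones, and average. The metric
values and equal weighting below are assumptions, since the abstract does not
specify the weights.

    def normalize(values, lower_is_better=False):
        lo, hi = min(values), max(values)
        norm = [(v - lo) / (hi - lo) for v in values]
        return [1 - x for x in norm] if lower_is_better else norm

    # Hypothetical metrics per configuration:
    # order is [Base-Shader, Static-Opt, Adaptive-Shader]
    fps       = [55, 68, 74]          # average FPS
    stability = [0.71, 0.83, 0.93]    # frame stability
    frame_ms  = [18.2, 14.7, 13.5]    # frame time (lower is better)
    quality   = [0.80, 0.88, 0.90]    # visual quality index

    cols = [normalize(fps), normalize(stability),
            normalize(frame_ms, lower_is_better=True), normalize(quality)]
    index = [sum(c[i] for c in cols) / len(cols) for i in range(3)]
    for name, val in zip(["Base-Shader", "Static-Opt", "Adaptive-Shader"],
                         index):
        print(f"{name}: {val:.2f}")   # Adaptive-Shader ranks highest here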
|
Keywords: |
Unity, Adaptive Shaders, FPS, Frame Stability, Visual Quality Index,
Δperf-Index, VR Games, Mobile Games, Non-Parametric Tests |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th December 2025 -- Vol. 103. No. 23-- 2025 |
|
Full
Text |
|
|
Title: |
BLOCKCHAIN FOR SECURE AND DECENTRALIZED CLOUD COMPUTING ENHANCING DATA PRIVACY
AND INTEGRITY |
|
Author: |
ANJANEYULU NELLURU, VALAPALA PRABHAVATHI, APPIKATLA NAGA PRAVALLIKA, DR
RAGHAVENDER K V, BATTULA SOWJANYA, B YAMINI SUPRIYA, SATHISH KUMAR SHANMUGAM |
|
Abstract: |
This paper introduces a new hybrid blockchain-cloud architecture to improve
cloud computing's data security, privacy, and integrity. The main contribution
of this research is the introduction of blockchain's decentralized ledger and
cryptographic characteristics into cloud computing for secure data storage,
access control, and data provenance. The blockchain we use is a permissioned
network; the consensus mechanism we have chosen is Practical Byzantine Fault
Tolerance (PBFT), and we use homomorphic encryption for secure evaluation. The
system's performance was measured using transaction time, response time,
scalability, and security metrics. The results indicate that blockchain-based
integration comes with latency and throughput overhead compared to classical
cloud systems. Still, there is a notable increase in security and privacy,
achieving higher Total Security Scores (TSS) and Total Privacy Scores (TPS)
than classical cloud systems. The scalability experiments show that the proposed
system scales well under high loads, although at the expense of some
performance. Such a hybrid architecture provides a robust, transparent, and
secure solution for cloud environments and is well suited to industries with
strong data-protection needs, such as healthcare and finance. The results
indicate the practicality of blockchain-based security in the cloud, with a
substantial impact on data integrity and user trust in distributed cloud
services. |
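Worked detail (standard PBFT arithmetic, not specific to this paper): a PBFT
network of n = 3f + 1 replicas tolerates f Byzantine faults, and committing a
transaction requires 2f + 1 matching votes.

    # PBFT fault-tolerance and quorum sizes for a given replica count.
    def pbft_quorum(n_replicas):
        f = (n_replicas - 1) // 3       # max Byzantine faults tolerated
        return f, 2 * f + 1             # (faults tolerated, commit quorum)

    for n in (4, 7, 10):
        f, quorum = pbft_quorum(n)
        print(f"n={n}: tolerates f={f}, commit quorum={quorum}")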
|
Keywords: |
Blockchain, Cloud Computing, Data Privacy, Data Integrity, Hybrid Architecture,
Security |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th December 2025 -- Vol. 103. No. 23-- 2025 |
|
Full
Text |
|
|
Title: |
AN EFFICIENT REINFORCEMENT LEARNING-BASED SUPPLIER SELECTION FRAMEWORK THROUGH
RISK PREDICTION PROCEDURES |
|
Author: |
INDUMATHY M, BHARATHI N |
|
Abstract: |
Supply networks have required structural modifications to react to both
positive and negative occurrences, such as Industry 4.0 and natural disasters.
Both positive and negative events can disrupt corporate business operations and
affect their continuity. Well-chosen suppliers can help organizations withstand
disruptions. To ensure an uninterrupted flow of materials across Supply Chain
Management (SCM), supplier selection and distribution must be reorganized in
light of the dynamics of disasters and Industry 4.0 events. SCM is directly
impacted by decision-making processes such as supplier evaluation and
selection. Several approaches have been used to evaluate suppliers' performance
and choose the top suppliers more effectively, enhancing overall performance
under diverse conditions. Yet conventional approaches cannot manage large
volumes of training data within a restricted period, which reduces convergence
speed; they are also incapable of generating optimal solutions in the
validation phase and require more efficient performance measures. Therefore,
this research explores a new supplier selection strategy incorporating
heuristic-driven deep learning. In the initial stage, the industrial data are
gathered and forwarded to a data cleaning phase to increase data quality. Then,
data transformation is conducted to enhance the efficiency of information flow
without unnecessary complexity. The next stage is optimal weighted feature
selection, where the weights and optimal features are optimized using a new
hybrid optimizer termed Position Updated-Archimedes and Fruit Fly Optimization
(PU-AFFO). The selected weighted features are then given to the risk prediction
stage, where risk outcomes are acquired using the Adaptive Transformer
Bidirectional Long Short-Term Memory with Weighted Bayesian Learning
(ATransLSTM-WBL). Here, the parameters of every network are tuned using the
same PU-AFFO technique. Finally, based on the predicted risk, Reinforcement
Learning is performed for the final supplier selection. To assess the developed
system's performance, various statistical analyses are utilized, and the
outcomes are benchmarked against a statistically powerful GEP model. The
designed framework attained lower error rates of 1.01, 0.01, 3.27, 1.12, and
5.88 for the MEP, SMAPE, MASE, MAE, and RMSE measures, respectively,
demonstrating the overall efficiency of the designed technique compared with
conventional methods. |
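Illustrative sketch (not the authors' pipeline): the final selection step as a
bandit-style reinforcement-learning loop in which reward is one minus a
predicted risk score; the ATransLSTM-WBL risk model is replaced by synthetic
draws, and the supplier names are hypothetical.

    import random

    random.seed(7)
    SUPPLIERS = ["S1", "S2", "S3", "S4"]
    true_risk = {"S1": 0.6, "S2": 0.2, "S3": 0.4, "S4": 0.8}  # hidden, toy
    Q = {s: 0.0 for s in SUPPLIERS}     # learned value per supplier
    alpha, epsilon = 0.1, 0.2

    for episode in range(500):
        if random.random() < epsilon:             # explore
            s = random.choice(SUPPLIERS)
        else:                                     # exploit
            s = max(Q, key=Q.get)
        # Synthetic stand-in for the risk-prediction model's output.
        predicted_risk = max(0.0, min(1.0, random.gauss(true_risk[s], 0.1)))
        reward = 1.0 - predicted_risk
        Q[s] += alpha * (reward - Q[s])           # incremental value update

    print("learned values:", {s: round(v, 2) for s, v in Q.items()})
    print("selected supplier:", max(Q, key=Q.get))  # expect S2 (lowest risk)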
|
Keywords: |
Risk Prediction-based Supplier Selection Model; Position Updated-Archimedes and
Fruit Fly Optimization; Transformer Bidirectional Long Short-Term Memory;
Weighted Bayesian Learning; Weighted Feature Selection |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th December 2025 -- Vol. 103. No. 23-- 2025 |
|
Full
Text |
|
|
|